In This Week in Context: a context-aware contacts app, how to help train the world’s first knowledge engine for robots, a watch that could save lives, the open-sourcing of DeepDive, and a vending machine with its own opinions on which snacks you should or should not get.
This Week in Context
Your Weekly Update on All Things Context, December 22 2014
“AIs are not alive. While AIs are capable of performing tasks otherwise performed by human beings, they are not “alive” like we are. They have no genuine creativity, emotions or desires other than what we program into them or they detect from the environment. Unlike in science fiction (emphasis on the fiction) AIs would have no desire to mate, replicate or have a small AI family.”
What Artificial Intelligence is Not, Rob Smith
1. Contax: a context-aware contacts app prototype
AT&T’s prototype app Contax analyses communication patterns to figure out who matters most to you. Currently, Contax gathers information by analysing your call logs and text-messaging patterns, as well as physical proximity. It then uses that information to reorganise your address book into curated social circles containing the people you are most likely to want to contact based on your past behaviour, and to actively suggest calls that fit your usual pattern.
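AT&T hasn’t published Contax’s scoring model; a minimal sketch of the underlying idea, ranking contacts by weighted interaction frequency, might look like this (the channel weights and field names are invented for illustration):

```python
from collections import Counter

# Hypothetical interaction log: (contact, channel) pairs.
# Channel weights are illustrative; Contax's actual model is not public.
WEIGHTS = {"call": 3.0, "text": 1.0, "proximity": 0.5}

def rank_contacts(events):
    """Score each contact by weighted interaction frequency, best first."""
    scores = Counter()
    for contact, channel in events:
        scores[contact] += WEIGHTS.get(channel, 0.0)
    return [contact for contact, _ in scores.most_common()]

events = [
    ("alice", "call"), ("bob", "text"), ("alice", "text"),
    ("carol", "proximity"), ("bob", "call"), ("alice", "call"),
]
print(rank_contacts(events))  # ['alice', 'bob', 'carol']
```

A real app would also decay old interactions over time so that the ranking tracks your current social circles rather than your lifetime history.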
2. RoboBrain: The World’s First Knowledge Engine For Robots
RoboBrain is a Google for robots of sorts: it can be freely accessed by any device wishing to carry out a certain task. The RoboBrain database also gathers new information about these tasks as robots perform them, learning as it goes. Why can’t robots just use Google? Technology Review explains:
These machines require detailed instructions even for the simplest task. For example, a robot asking a search engine “how to bring sweet tea from the kitchen” is unlikely to get the detail it needs to carry out the task since it requires all kinds of incidental knowledge such as the idea that cups can hold liquid (but not when held upside down), that water comes from taps and can be heated in a kettle or microwave, and so on.
RoboBrain learns concepts by searching the Internet (it can interpret natural-language text, images and videos), as well as by watching humans with its sensors and interacting with them.
If you want to help create the RoboBrain, it is soliciting human feedback on what it has learned at RoboBrain.me.
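The incidental knowledge in the sweet-tea example is the kind of thing a knowledge engine stores as a graph of concepts and relations. A toy sketch of that idea, with a schema and facts invented purely for illustration (this is not RoboBrain’s actual representation), might be:

```python
# Toy knowledge graph: (subject, relation) -> known objects.
# Facts mirror the sweet-tea example; the schema is invented.
knowledge = {
    ("cup", "can_hold"): ["liquid"],
    ("cup", "unusable_when"): ["upside_down"],
    ("water", "comes_from"): ["tap"],
    ("water", "heated_by"): ["kettle", "microwave"],
}

def query(subject, relation):
    """Return known objects for (subject, relation), or an empty list."""
    return knowledge.get((subject, relation), [])

print(query("water", "heated_by"))  # ['kettle', 'microwave']
print(query("cup", "can_hold"))    # ['liquid']
```

A robot planner would chain such lookups together (water comes from a tap, is heated by a kettle, is held by a cup) to fill in the steps a search engine leaves implicit.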
3. DARPA open-sources Watson’s ‘brother’ DeepDive
“Watson is a question-answering engine (although now it seems to be much bigger). [In contrast] DeepDive’s goal is to extract lots of structured data from unstructured data sources,” the EETimes quotes professor Christopher Re.
DeepDive incorporates probability-based learning algorithms as well as open-source tools such as MADlib and Impala (from Cloudera), and low-level techniques such as Hogwild, some of which have also been included in Microsoft’s Adam. To build DeepDive into your application, you should be familiar with SQL and Python.
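DeepDive’s pipeline couples SQL, Python extractors and probabilistic inference; as a toy illustration of the extraction step only (not DeepDive’s actual API), a single rule-based extractor that turns unstructured text into structured tuples could look like this:

```python
import re

# Toy relation extractor: pull (person, organisation) pairs from text.
# The pattern and example sentences are invented for illustration;
# DeepDive layers probabilistic inference on top of such extractors.
PATTERN = re.compile(r"(\w+ \w+) works at (\w+)")

def extract_employment(text):
    """Return a list of (person, organisation) tuples found in text."""
    return [(m.group(1), m.group(2)) for m in PATTERN.finditer(text)]

doc = "Ann Smith works at Acme. Bob Jones works at Initech."
print(extract_employment(doc))
# [('Ann Smith', 'Acme'), ('Bob Jones', 'Initech')]
```

The point of DeepDive is that extractors like this are noisy, so each candidate tuple carries a probability rather than being accepted outright.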
InfoWorld lists three more ‘Watson wannabes’, projects that are not as established as Watson but open source and ready for work: Apache UIMA (analysis of unstructured content such as text), OpenCog (a platform to build and share artificial-intelligence programs) and OAQA (question-answering systems).
4. Embrace, a watch that will save lives
With only 48 hours left to go, Embrace, the “first medical-quality wearable to help measure stress, epileptic seizures, activity and sleep”, is an Indiegogo success with $328,633 USD raised, more than three times its $100k goal. The stylish watch is aimed at people who live with epilepsy, and their loved ones. It keeps an eye on electrodermal activity, and if it registers an unusual event, such as a convulsive seizure, it signals the companion smartphone app, which can then notify parents or caregivers.
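Embrace’s actual detection algorithm hasn’t been published; as a naive sketch of the general idea, a threshold detector over an electrodermal-activity stream might look like this (the threshold, units and minimum run length are all hypothetical):

```python
def detect_events(eda, threshold=5.0, min_samples=3):
    """Flag sustained runs of unusually high electrodermal activity.

    Returns (start, end) index pairs. A real detector would combine
    EDA with motion sensors and a trained classifier; this is only
    a threshold sketch with invented parameters.
    """
    events, run_start = [], None
    for i, value in enumerate(eda):
        if value >= threshold:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= min_samples:
                events.append((run_start, i))  # would trigger a phone alert
            run_start = None
    if run_start is not None and len(eda) - run_start >= min_samples:
        events.append((run_start, len(eda)))
    return events

samples = [0.8, 0.9, 6.1, 6.4, 7.0, 6.8, 1.0, 0.7]
print(detect_events(samples))  # [(2, 6)]
```

Requiring a minimum run length filters out single-sample spikes, which matters when a false alarm wakes a caregiver at night.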
5. A vending machine that will deny you snacks based on medical records
Springwise reports on a vending machine that uses facial recognition and customers’ medical records to determine whether they should be allowed to buy an unhealthy snack:
The technology detects the customer’s age, build and mood in order to determine whether the purchase is a wise decision. The machine can also be programmed to access information about the user’s medical records and purchase history. If the algorithms decide that purchasing a coffee with 3 sugars or the fourth candy bar of the day is a bad idea for their health or mood, it can refuse to vend the product.
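Springwise doesn’t detail the machine’s decision logic; a crude sketch of such a vending policy, with rules and field names invented purely for illustration, could be:

```python
def allow_purchase(item, profile):
    """Toy vending policy: refuse obviously unhealthy combinations.

    The `item` and `profile` fields are hypothetical; the real machine
    reportedly also weighs age, build, mood and medical records.
    """
    if item["sugar_grams"] > 25 and profile.get("diabetic"):
        return False
    if item["category"] == "candy" and profile.get("candy_today", 0) >= 3:
        return False  # the fourth candy bar of the day
    return True

snack = {"name": "candy bar", "category": "candy", "sugar_grams": 30}
print(allow_purchase(snack, {"diabetic": False, "candy_today": 3}))  # False
print(allow_purchase(snack, {"diabetic": False, "candy_today": 0}))  # True
```

Even this toy version shows why the idea is contentious: the machine needs an identity match plus purchase history and health data before it can say no.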
A work of Music ‘n Tech ‘n Christmas
Echo Nest’s Director of Developer Community Paul Lamere analysed almost a million songs of the type ‘music coincident with the northern hemispheric winter solstice,’ hereafter referenced as ‘Christmas’ songs. The Top 3 artists with the most Christmas tracks in the Spotify catalog are Bing Crosby, Frank Sinatra and Elvis Presley, with Crosby’s “White Christmas” being the most frequently released track.
Lamere also created a list of seasonal terms by finding the most frequently occurring words in song titles.
As a bonus, he has created the Playlist Miner, which allows you to generate a Spotify playlist based on the 1,000 most popular ‘dinner’ playlists while excluding ‘christmas’ songs, in case you want to avoid listening to Mr Crosby too much. 😉
Papers, Talks & Research
- Machine Learning: The High-Interest Credit Card of Technical Debt (machine learning, google research, paper)
- The Uncanny Valley Effect in Behavioral Targeting and Information Processing of Peripheral Cues (online behavioural targeting, privacy, paper)
- DeepSpeech: Scaling up end-to-end speech recognition (deep learning, speech recognition, paper, article: ‘Baidu Announces Breakthrough In Speech Recognition, Claiming To Top Google and Apple – Forbes’)
- Explaining and Harnessing Adversarial Examples (neural networks, machine learning, paper submission)
Also in the news, Stanford is to host a century-long study on artificial intelligence. AI100 will look at how the effects of artificial intelligence ripple through every aspect of how people work, live and play.
The Argus Labs team wishes you Happy Holidays and a splendid New Year!