We highlight EmotionML 1.0, a new W3C recommendation for representing emotion-related states in data processing systems, and take a look at StreetScore, an algorithm that predicts how safe a street view looks to human observers. There’s personalised public radio, a robot that can teach itself to walk again after losing a leg, and a great read on how Disneyland is turning into Dataland.
Image: We trained a classifier to automatically predict the real-time emotions evoked when listening to music. The plot shows emotion trajectories of several hundred songs, clustered by genre. As Vincent tweeted, rap shows a clearly negative valence (high arousal, low valence), while soul music shows a more positive one.
This Week in Context
Your Weekly Update on All Things Context, August 1 2014
“As the Web is becoming ubiquitous, interactive, and multimodal, technology needs to deal increasingly with human factors, including emotions. The specification of Emotion Markup Language 1.0 aims to strike a balance between practical applicability and scientific well-foundedness.”
Emotion Markup Language (EmotionML) 1.0
Emotion Markup Language (EmotionML) is a new W3C recommendation for representing emotion-related states in data processing systems. The language is conceived as a ‘plug-in’ language suitable for use in three different areas: manual annotation of data; automatic recognition of emotion-related states from user behaviour; and generation of emotion-related system behaviour. The recommendation contains a splendid example of how EmotionML could be used to generate robot behaviour:
The following example describes various aspects of an emotionally competent robot whose battery is nearly empty. The robot is in a global state of high arousal, negative pleasure and low dominance, i.e. a negative state of distress paired with some urgency but quite limited power to influence the situation. It has a tendency to seek a recharge and to avoid picking up boxes. However, sensor data displays an unexpected obstacle on the way to the charging station. This triggers planning of expressive behavior of frowning.
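As a rough illustration of what such a description looks like in EmotionML, here is a hedged sketch of the robot’s global state using the pleasure-arousal-dominance (PAD) dimension vocabulary from the W3C emotion vocabularies. The numeric values and the action-tendency annotation are illustrative, not taken from the recommendation’s actual markup:

```xml
<emotionml xmlns="http://www.w3.org/2009/10/emotionml"
           dimension-set="http://www.w3.org/TR/emotion-voc/xml#pad-dimensions">
  <!-- Global state: distress with urgency but little control -->
  <emotion>
    <dimension name="pleasure"  value="0.2"/> <!-- negative pleasure -->
    <dimension name="arousal"   value="0.8"/> <!-- high arousal -->
    <dimension name="dominance" value="0.3"/> <!-- low dominance -->
  </emotion>
  <!-- Illustrative action tendencies: seek a recharge, avoid picking up boxes -->
  <emotion action-tendency-set="http://www.w3.org/TR/emotion-voc/xml#frijda-action-tendencies">
    <action-tendency name="approach"  value="0.9"/>
    <action-tendency name="avoidance" value="0.7"/>
  </emotion>
</emotionml>
```

Dimension values in EmotionML are scaled to the interval [0, 1], which is why ‘negative pleasure’ appears as a low value rather than a negative number.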
An algorithm that knows ‘how safe’ your neighbourhood looks
StreetScore is a machine-learning algorithm that assigns a score to a street view based on how safe it would look to a human observer. It was trained to predict perceived safety using a training dataset of 3,000 street views from New York and Boston, together with perceived-safety rankings obtained from the crowdsourced survey Place Pulse. (via Roel)
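To make the pipeline concrete, here is a minimal toy sketch of the general idea: crowdsourced pairwise “which looks safer?” votes are turned into per-image scores, and a regressor then predicts scores for unseen street views from image features. This is not the StreetScore code; it substitutes a simple win-rate score for the paper’s ranking model and a nearest-neighbour average for its support-vector regression, and all data below is invented:

```python
# Toy sketch: perceived-safety scores from pairwise votes, then
# nearest-neighbour prediction for a new image. Illustration only.

def scores_from_pairwise(votes, images):
    """Turn pairwise votes (winner, loser) into a win-rate score per image."""
    wins = {img: 0 for img in images}
    seen = {img: 0 for img in images}
    for winner, loser in votes:
        wins[winner] += 1
        seen[winner] += 1
        seen[loser] += 1
    return {img: wins[img] / seen[img] if seen[img] else 0.5 for img in images}

def predict(features, train_feats, train_scores, k=2):
    """Predict perceived safety as the mean score of the k nearest neighbours."""
    nearest = sorted(
        train_feats,
        key=lambda img: sum((a - b) ** 2 for a, b in zip(features, train_feats[img])),
    )[:k]
    return sum(train_scores[img] for img in nearest) / k

# Invented toy data: three labelled street views, one new one to score.
images = ["a", "b", "c"]
votes = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "b")]
scores = scores_from_pairwise(votes, images)
feats = {"a": (0.9, 0.1), "b": (0.5, 0.5), "c": (0.1, 0.9)}
print(predict((0.8, 0.2), feats, scores, k=2))
```

The real system extracts visual features (such as texture and colour descriptors) from each street view, but the two-stage shape — crowd comparisons to scores, then features to scores — is the part worth seeing.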
Robot with broken leg learns to walk again in less than 2 minutes
Engineers have created a robot capable of teaching itself to walk again when one of its six legs is damaged – in less than two minutes. The robot does not have a predefined strategy for coping with each possible injury; instead it chooses from 13,000 pre-calculated gaits, first opting for those in which it has to use the damaged leg least, and then, among those, for the one that gives it the greatest speed. (via Roel)
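The selection rule as described is simple enough to sketch: filter the gait library to those that use the damaged leg least, then take the fastest. This is an illustrative sketch of that two-step rule, not the researchers’ code, and the gait data is invented:

```python
# Illustrative gait selection: least use of the damaged leg first,
# then greatest speed among the remaining candidates.

def choose_gait(gaits, damaged_leg):
    """gaits: dicts with per-leg 'usage' counts and a 'speed' estimate."""
    min_usage = min(g["usage"][damaged_leg] for g in gaits)
    candidates = [g for g in gaits if g["usage"][damaged_leg] == min_usage]
    return max(candidates, key=lambda g: g["speed"])

# Invented toy library of three gaits for a six-legged robot.
gaits = [
    {"name": "tripod", "usage": [1, 1, 1, 1, 1, 1], "speed": 0.9},
    {"name": "limp-a", "usage": [1, 1, 0, 1, 1, 1], "speed": 0.5},
    {"name": "limp-b", "usage": [1, 1, 0, 1, 1, 1], "speed": 0.7},
]
print(choose_gait(gaits, damaged_leg=2)["name"])  # limp-b
```

With leg 2 damaged, the two limping gaits tie on minimal usage, and the faster one wins – the fastest overall gait (tripod) is ruled out because it relies on the broken leg.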
Personalised Public Radio: NPR One
“It would know what we want to hear even before we know it’s out there, bringing it all to us in real time and no cost. It’s a vision that might complete the transition of turning the phone into a virtual digital radio — and it would work on a tablet, a laptop, and even in certain connected cars.” That’s the dream behind NPR’s new NPR One app, which you can test drive yourself via the iOS and Android app stores. On top of that, its design is pleasingly minimalist. Read more on this ‘spoken-news & music recommender’ in an in-depth article on niemanlab.org, and give the app a try!
(On another note, in An epic battle in streaming music is about to begin, and only a few will survive, John McDuling wonders how we will be listening to music in the future, and quotes Lefsetz as saying: “We live in an on-demand world. ‘There’s a market’ for passive listening, he added, ‘[b]ut not run by algorithms, but people’—i.e., by radio stations, both traditional terrestrial ones, and digital ones, like SiriusXM.” We beg to differ. Algorithms can most certainly help you find exactly the type of music you’re in the mood for. Yes, I like Atari Teenage Riot, but I’m not always in the mood for digital hardcore. Mood and environment. 😉 )
On Disneyland turning into Dataland
Best-practice example of productising the tracking of your customers? That must be Disney’s MagicBand, which even tells staff the location of the table you’re seated at. “The MagicBand is the world’s largest and most diverse experiment in wearable data fashion,” Ian Bogost writes on ReForm. “Automatic visits to Dataland are limited to guests who book their stay on Disney property. But fear not, for MagicBands can be purchased for $12.95 at any Disney theme park gift shop. And everyone is allowed the opportunity to customize and personalize their MagicBands: ‘MagicSliders’ sleeves and ‘MagicBandits’ charms that bear the images of Disney characters can be purchased ($6.95-14.95) and attached to a MagicBand.”
Papers, Talks & Research
- Application of EmotionML (sentiment, W3C recommendation, <emotion>, paper)
- StreetScore – Predicting the Perceived Safety of One Million Streetscapes (computer vision, paper)
- Predicting Destinations with Smartphone Log using Trajectory-based HMMs (location, forecasting, paper)
- Surveillance, Gaming and Play (Surveillance & Society Issue) (privacy, data collection, gamification, journal)
- Detection of Behavior Change in People with Depression (mobile sensors, ubiquitous computing, health, paper)
- ALPACA: A Decentralized, Privacy-Centric and Context-Aware Framework for the Dissemination of Context Information (context-awareness, privacy-by-design, paper)
- A Research of Speech Emotion Recognition Based on Deep Belief Network and SVM (emotion detection, deep learning, research article)
- In A State: Live Emotion Detection and Visualisation for Music Performance (emotion detection, music, paper)
Enjoy the weekend, see you next week.