Mobile Sensors & Context at Strata and Image Recognition Now Gets The Entire Scene #WIC


This week: Argus presents on Mobile Sensors, Machine Learning and Context at Strata Europe; Alibaba predicts shopping habits by bra size; image recognition now understands the entire scene (a.k.a. context); and features, including emotion, for 650,000 audio files on AcousticBrainz.

This Week in Context

Your Weekly Update on All Things Context, November 21, 2014

The Argus talk on Mobile Sensors, Machine Learning and Context was a certified success! Take Twitter’s word for it, or check for yourself: the presentation is available here on Slideshare. We appreciate likes and shares, and comments even more! (image credit: Kris Peeters on Twitter)


How do we use the capabilities of our devices to build better human experiences? You do less! You basically use the sensors to say: we don’t actually need people to type in an address.

The phone already knows. Don’t take for granted that you have to do it the way you did before. Ask yourself: “What does my device already know (that we can rely on)?”

Tim O’Reilly – Strata Europe 2014 Keynote (video)
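The point generalizes: the device’s sensors often already hold the answer a form would ask for. As a minimal sketch of that idea (the keynote names no tooling; the geopy library and the Nominatim service are assumptions here), a phone’s GPS fix can replace an address field via reverse geocoding:

```python
# A minimal sketch, assuming the geopy library and the Nominatim
# geocoding service (the keynote names no specific tooling).
from geopy.geocoders import Nominatim

def address_from_device(lat, lon):
    """Reverse-geocode a GPS fix the phone already has into a street address."""
    geocoder = Nominatim(user_agent="week-in-context-demo")  # hypothetical app id
    location = geocoder.reverse((lat, lon))
    return location.address if location else None

# Example: coordinates in Barcelona, host city of Strata Europe 2014.
print(address_from_device(41.3851, 2.1734))
```

The same question applies to time zones, activity and nearby devices: ask what the phone already knows before asking the user.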

Five Must-Reads

1. How Alibaba is Using Bra Sizes to Predict Online Shopping Habits

“Dividing intimate-apparel shoppers into four categories of spending power, analysts at the e-commerce giant found that 65% of women of cup size B fell into the “low” spend category, while those of a size C or higher mostly fit into the “middle” or higher group.”

(Possible reason: younger women with less purchasing power may be the ones buying smaller-sized bras.) – Data Dividend

2. Image Recognition Now ‘Understands’ the Entire Scene

“Until now, so-called computer vision has largely been limited to recognizing individual objects. The new software, described on Monday by researchers at Google and at Stanford University, teaches itself to identify entire scenes: a group of young men playing Frisbee, for example, or a herd of elephants marching on a grassy plain. The software then writes a caption in English describing the picture. Compared with human observations, the researchers found, the computer-written descriptions are surprisingly accurate.” (Hat tip to Mr Joren)

NY Times – Researchers Announce Advance in Image-Recognition Software
Show and tell: a neural image caption generator (computer vision, neural networks, paper)
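For a feel of how such systems are wired, here is a heavily simplified sketch of the encoder-decoder idea behind the Show and Tell paper: the image is compressed into one feature vector, and a recurrent network conditioned on that vector emits the caption word by word. The framework (PyTorch) and all dimensions below are illustrative assumptions, not the paper’s setup (which pairs a GoogLeNet CNN with an LSTM):

```python
# Toy encoder-decoder captioner in the spirit of "Show and Tell".
# Framework and dimensions are assumptions chosen for brevity; a linear
# layer stands in for the CNN image encoder.
import torch
import torch.nn as nn

class CaptionModel(nn.Module):
    def __init__(self, vocab_size=1000, feat_dim=512, embed_dim=256, hidden_dim=256):
        super().__init__()
        self.img_proj = nn.Linear(feat_dim, embed_dim)   # stand-in for a CNN encoder
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, img_feats, captions):
        # Prepend the projected image vector as the first "word" of the sequence.
        img_token = self.img_proj(img_feats).unsqueeze(1)
        words = self.embed(captions)
        seq = torch.cat([img_token, words], dim=1)
        hidden, _ = self.lstm(seq)
        return self.out(hidden)  # next-word scores at each step

# Toy usage: a batch of 2 "images" and 5-word caption prefixes.
model = CaptionModel()
feats = torch.randn(2, 512)            # pretend CNN features
caps = torch.randint(0, 1000, (2, 5))  # word indices
print(model(feats, caps).shape)        # torch.Size([2, 6, 1000])
```

Training would then minimize cross-entropy between these next-word scores and the reference captions.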

3. Survey: U.S. Adults Feel They Are Losing Control of Their Data

“People also harbor widespread distrust toward technology companies, the survey found. Some 91% feel like they’ve lost control over the way their personal data is collected and used, while 81% don’t feel safe sharing private information over a social network — even with people they trust. More than two-thirds of those surveyed feel insecure sharing private information via chat and text message, while 57% feel insecure using email. And almost half no longer feel safe sharing private information over a cellphone.” – US Adults Feel They Are Losing Control of Their Data

4. Taking on the cybersecurity gaps created by the Internet of Things

A study concludes that the United States federal government must rush to address cybersecurity gaps created by the Internet of Things before they become long-term, intractable problems.

“There is a small – and rapidly closing – window to ensure that [the Internet of Things] is adopted in a way that maximizes security and minimizes risk,” the report states. “If the country fails to do so, it will be coping with the consequences for generations.”

Inside Cybersecurity – Daniel: Obama will likely enact panel’s advice on blunting cyber risks

5. Features for 650,000 audio files on AcousticBrainz

“The AcousticBrainz project aims to crowd source acoustic information for all music in the world and to make it available to the public. This acoustic information describes the acoustic characteristics of music and includes low-level spectral information and information for genres, moods, keys, scales and much more. The goal of AcousticBrainz is to provide music technology researchers and open source hackers with a massive database of information about music. We hope that this database will spur the development of new music technology research and allow music hackers to create new and interesting recommendation engines.”
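The data is reachable over a public REST API. As a hedged sketch, assuming the endpoints are keyed by MusicBrainz recording ID as on the current AcousticBrainz site (the ID below is a hypothetical placeholder):

```python
# A sketch only: endpoint layout and response keys are assumptions based
# on the public AcousticBrainz API, not guaranteed by this post.
import requests

def acousticbrainz_features(mbid, level="high-level"):
    """Fetch crowd-sourced features (moods, genres, keys, ...) for one recording."""
    url = f"https://acousticbrainz.org/api/v1/{mbid}/{level}"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Hypothetical MusicBrainz recording ID; substitute a real one.
data = acousticbrainz_features("96685213-a25c-4678-9a13-abd9ec81cf35")
print(sorted(data.get("highlevel", {})))  # e.g. mood_happy, genre_rosamerica, ...
```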


Papers, Talks & Research

EU Robotics Week, filled with robot-related activities, starts on Monday 24/11. You can visit research labs, museums, universities, schools and companies involved in robotics. Many activities are open to the general public and free of charge.

  • What a Nasty Day: Exploring Mood-Weather Relationship (twitter, language, sentiment analysis, machine learning, emotion; paper and article: Twitter Firehose Reveals How Weather Affects Mood)
  • Swarm: an actuated wearable for mediating affect (wearables, affective computing, emotion, sensors, paper)
  • Continuous Mapping of Personality Traits: A Novel Challenge and Failure Conditions (HCI, affective computing, feature extraction, paper)

The proceedings of ICMI 2014 & related conferences have been published online, go here for research on Multimodal, Multi-Party, Real-World Human-Robot Interaction, the Emotion Recognition in the Wild Challenge and Intelligent Human Machine Interaction. (Amongst others: Crime Prediction from Demographics and Mobile Data, Computation of Emotions, and Neural Networks for Emotion Recognition in the Wild.)

Enjoy the reads, and have a great weekend!
And if you haven’t done so yet, kindly consider subscribing to the Week in Context here.

