Technology to Track Mental Health, and Smart Machines Need to be More Assertive #WIC

November 14, 2014 | Week in Context

Why artificial emotional intelligence matters and the next frontier for ‘tracking technology’: keeping tabs on our mental health. We also feature our favourite quote from ‘deep learning’ guru Geoffrey Hinton’s Q&A on Reddit, attempt to visualise what smart and not-so-smart robots are thinking and hear that Google’s self-driving cars need to be taught to be more assertive on the road.

This Week in Context

Your Weekly Update on All Things Context, November 13, 2014

If you are one of the lucky people attending Strata Barcelona next week, make sure not to miss hearing our Head of Deep Learning, Vincent Spruyt, speak on ‘The Rise of Empathic Devices: Mobile Sensor Data, Machine Learning & Mood’ to learn the latest about our technology. (Thursday 20th of November at 16:05)

If you’d like to suggest to Vincent some talks that are must-attends – if you’re speaking as well, feel free to suggest your own! – tweet them to @vincent_spruyt.


“If we are lucky, we may just end up building something that isn’t just about trying to understand one another’s emotions so that we can more easily manipulate them. If we’re lucky, we might just end up building something that really understands us.”

Gideon Rosenblatt

Five Must-Reads

1. Technology’s Latest Quest: Tracking Mental Health

Technology’s next tracking frontier, according to Newsweek, is keeping tabs on your mental health. We couldn’t agree more. Whilst ‘quantifying how you feel’ isn’t an easy feat, technological tools that flag mental health problems early on will have a huge impact, and will help reach a younger generation. User input will remain important, but algorithmically computed profiles and neutral, sensor-based detection of behaviour changes (as opposed to user input) can be great additions to ‘mobile’ therapy journaling.

2. Q&A with ‘deep learning’ guru Geoffrey Hinton on Reddit

Our machine learning team is still processing the full thread, but nominated Hinton’s beliefs about the human brain that inspired his work as a must-read: “The brain has about 10^14 synapses and we only live for about 10^9 seconds. So we have a lot more parameters than data. This motivates the idea that we must do a lot of unsupervised learning since the perceptual input (including proprioception) is the only place we can get 10^5 dimensions of constraint per second.” For machine learning, unsupervised learning is still a huge – the huge – open question.
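Hinton’s back-of-the-envelope argument is easy to verify yourself: with roughly 10^14 synapses (parameters) and only 10^9 seconds of lifetime, labelled examples alone can never pin down that many parameters; a rich perceptual stream supplying ~10^5 dimensions of constraint per second is what closes the gap. A minimal sketch of the arithmetic (the variable names are ours, not Hinton’s):

```python
# Back-of-the-envelope arithmetic behind Hinton's parameters-vs-data argument.
synapses = 1e14            # ~number of synapses ("parameters") in the brain
lifetime_seconds = 1e9     # ~seconds in a human lifetime (~30 years)
constraints_per_sec = 1e5  # dimensions of perceptual constraint per second

# How many parameters must be fixed per second of life?
params_per_second = synapses / lifetime_seconds
print(f"Parameters to constrain per second: {params_per_second:.0f}")  # 100000

# Total perceptual constraints over a lifetime -- matches the parameter count.
total_constraints = lifetime_seconds * constraints_per_sec
print(f"Total perceptual constraints: {total_constraints:.0e}")  # 1e+14
```

The point of the last line: 10^9 seconds of perception at 10^5 constraints per second yields 10^14 constraints in total, which is just enough to account for the 10^14 synapses, and it is unsupervised perceptual input, not labelled data, that supplies it.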

3. Why Emotional Intelligence Really Matters

Gideon Rosenblatt argues that emotional intelligence is absolutely essential to artificial intelligence. Firstly, solving emotional intelligence is a more natural path to true machine thinking; secondly, if our machines lack the capacity to understand emotions, they will be severely handicapped in their ability to learn from us. Rosenblatt points out that working on machine EQ is important to tech companies because of the possibilities (not least financial) it offers. But the reason artificial emotional intelligence really matters is that technology capable of building rapport with patients, and that does not judge, might just make for a better therapist. (*Spoiler alert*: you really ought to read the full article.)

4. Google’s self-driving cars: smart machines need to be more assertive

While hoping to make cars that are safer than those driven by people, Google has discovered its smart machines need to act a little human, especially when dealing with pushy motorists. “We found that we actually need to be — not aggressive — but assertive” with the vehicles, Nathaniel Fairfield, technical leader of a team that writes software fixes for problems uncovered during the driving tests, told Mercury News. “If you’re always yielding and conservative, basically everybody will just stomp on you all day.”

Next up, wouldn’t improving these systems be much easier if we could see what robots are thinking? 

5. MIT’s Augmented Reality Room Shows What Robots Are Thinking

Using augmented reality, researchers placed theoretical obstacles in the path of robots around which the machines had to navigate. As the robots were computing their optimal route, a projection system displayed their “thoughts” on the ground as coloured lines and dots, so researchers could visualize them in real time. The researchers soon found that by projecting the robots’ intentions, they were able to spot problems in the underlying algorithms, and make improvements much faster than before.

A Work of Art & Tech: ‘Dumb’ robots visualized

There are also machines that do not seem to be doing much thinking at all. Ever wondered what your average Roomba is ‘thinking’? Even visualising ‘low-tech’ self-driving machines swerving around objects can be beautiful when done with artistic flair, as the Flickr Roomba Art pool proves.


Photos by Andreas Dantz (lead image) and IBRoomba.

Papers, Talks & Research

  • History and Philosophy of Neural Networks (machine learning, deep learning, paper)
  • fMRI Data Reveals the Number of Parallel Processes Running in the Brain (neurology, article)
  • Google open-sourced its RAPPOR privacy technology (differential privacy, big data, article)
  • Thing Theory: Connecting Humans to Location-Aware Smart Environments (ubiquitous computing, user experience, artificial intelligence, IoT, paper)
  • Quantifying Mental Health Signals in Twitter (linguistics, machine learning, mental health, mhealth, paper)

Enjoy the reads, and have a great weekend!
And if you haven’t done so yet, kindly consider subscribing to the Week in Context here.

