When sensor data tells your TV what you are looking for
Within the framework of an R&D project named ‘Empathic Products’, Sentiance and VRT (O&I) jointly research how the context and mood of a mobile user influence how that person watches television. Technology that is able to adapt to a person’s mindset will make media recommendations more relevant, giving media recommender systems a level of behavioral intelligence that allows them to truly personalize TV moments for individuals, families or groups of friends that regularly co-watch TV.
defining media scenarios & ‘actionable moods’
The project started with defining a range of ‘empathic’ media scenarios and use cases, and investigating people’s emotional reactions to TV content in a lab setting.
During an early series of lab experiments, state-of-the-art affective computing technologies such as facial coding (analysis of facial expressions from video images) and physiological measurements were used, allowing us to cross-validate the resulting emotion classifications. More importantly, these experimental results taught us to isolate what we call ‘actionable moods’ from the huge number of detected moods and emotional expressions: moods that are actionable from the perspective of broadcasters and media stakeholders.
the entertainment paradox
Media is a challenging environment for building mood-based interactions, because during media consumption we often observe a constant flux of short and sometimes contradictory emotional expressions.
For example, we need to cope with what is called the ‘entertainment paradox’: the observation that people are able to derive pleasure from the negative emotions induced by watching media content. Think of being scared or even disgusted while viewing a horror movie, or crying when confronted with an intense drama scene, and simultaneously enjoying the experience. The key, therefore, is to be able to measure engagement levels independently from the valence of emotions (that is, whether an emotion is positive or negative).
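Measuring engagement separately from valence can be illustrated with the valence–arousal representation common in affective computing. The sketch below is a hypothetical illustration of that idea, not the project’s actual data model; `EmotionSample` and the use of mean arousal as an engagement proxy are our own simplifications.

```python
from dataclasses import dataclass

@dataclass
class EmotionSample:
    valence: float  # -1.0 (negative) .. +1.0 (positive)
    arousal: float  #  0.0 (calm)     ..  1.0 (excited)

def engagement(samples: list[EmotionSample]) -> float:
    """Engagement proxy: mean arousal, deliberately ignoring valence."""
    return sum(s.arousal for s in samples) / len(samples)

# A horror scene: strongly negative valence, yet high arousal,
# so the engagement proxy stays high despite the 'unpleasant' emotions.
horror = [EmotionSample(valence=-0.8, arousal=0.9),
          EmotionSample(valence=-0.6, arousal=0.8)]
print(engagement(horror))
```

Because the engagement proxy only looks at arousal, a terrified viewer and a delighted viewer can register the same engagement level, which is exactly what the entertainment paradox requires.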
At the current stage, both partners have deliberately chosen to move out of the lab and focus on real-world TV experiences in the home environment, measuring not only mood but also activity, and sensing daily context using popular, affordable commercial devices. This way, the insights generated and the algorithms developed are easily transferable to existing and new products and apps, making the everyday media consumption experience mood- and context-aware.
finding the right wearable
We soon noticed that sensing activity and context ‘in the wild’ was relatively easy and straightforward using the Sentiance SDK, mobile apps or platforms such as Google Fit. Grasping mood, or fine-grained data such as sleep quality, was less straightforward.
We investigated almost every wearable on the market, ranging from the popular Fitbit and Jawbone trackers to sports watches and heart-rate monitors, all the way up to high-end, near-medical-grade devices. It was a hard choice: popular wearables often offer an API, but the data is often limited in scope.
Mid-range wearables allow us to collect good-quality data, but the companies behind them are often reluctant to share raw or non-aggregated sensor data. High-end wearables are an entirely different proposition: they offer high-quality, easily accessible data, but it is questionable whether that is worth the high price tag. Open-source wearable projects are popping up, but weren’t yet available when we started our home-based pilot.
Smartwatches were a good match in theory, but some experimentation showed that current devices lack the reliability (in terms of data quality, data transfer and battery drain) needed for extended use in people’s homes. In the end, we opted for a sports watch in combination with activity- and context-sensing apps on the smartphone.
need states and engagement define your tv moment
During this home-based pilot, we aimed to grasp the need states, moods and context that allowed us to cluster different sorts of TV moments and understand how TV viewers’ needs, motivations and engagement are linked to these moments. Once we truly understand this linkage between TV moments and context, we should be able to come up with better, or at least more context-aware, content recommendations, provided that we can sense context accurately.
In this sense, substantial research effort went into validating how well mobile data can approximate ‘ground truth’ data, which in this case was collected through a diary study in which users were asked to log their TV experiences.
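A common way to quantify how well sensed labels match diary-logged ground truth is a chance-corrected agreement measure such as Cohen’s kappa. The sketch below is a generic illustration of that kind of validation; the labels and data are invented, not the study’s actual coding scheme.

```python
from collections import Counter

def cohens_kappa(sensed: list[str], diary: list[str]) -> float:
    """Agreement between two label sequences, corrected for chance."""
    n = len(sensed)
    observed = sum(s == d for s, d in zip(sensed, diary)) / n
    counts_s, counts_d = Counter(sensed), Counter(diary)
    # Chance agreement: probability both raters pick the same label at random.
    expected = sum(counts_s[lab] * counts_d[lab] for lab in counts_s) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: sensed social context vs. diary entries.
sensed = ["alone", "family", "family", "alone", "family", "alone"]
diary  = ["alone", "family", "alone",  "alone", "family", "alone"]
print(round(cohens_kappa(sensed, diary), 2))
```

Raw accuracy alone would overstate agreement here, since a sensor that always guessed the most frequent context would still match the diary fairly often; kappa discounts that.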
contextual cues for recommender systems
Each TV experience fulfills certain needs, such as ‘staying up to date’, ‘learning’, ‘empathizing with others’, ‘to have some fun’, etc. These need states are psychological concepts that are hard to capture in the wild, yet they are very powerful for determining what content fits best for a given TV moment.
While it is impossible to measure need states automatically, it is possible to approximate them through proxies: cues that correlate strongly with certain needs and that can be captured via wearables and smartphones, such as mood and the presence of other people. These cues, used as proxies, can be ‘sensed’ by Sentiance technology and help us grasp and understand the TV moment.
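The proxy idea can be sketched as a simple mapping from sensed cues to candidate need states. The cue names, thresholds and rules below are hypothetical illustrations of the concept, not Sentiance’s actual models; the need-state labels are the ones named above.

```python
def likely_needs(cues: dict) -> list[str]:
    """Map sensed contextual cues to candidate need states (toy rules)."""
    needs = []
    if cues.get("people_present", 0) > 1:
        needs.append("empathizing with others")   # shared, social moment
    if cues.get("mood") == "relaxed" and cues.get("time_of_day") == "evening":
        needs.append("to have some fun")          # wind-down entertainment
    if cues.get("mood") == "focused" and cues.get("people_present", 0) <= 1:
        needs.append("learning")                  # solo, attentive viewing
    return needs or ["staying up to date"]        # fallback when no rule fires

# A relaxed evening with company suggests social, fun-oriented content.
print(likely_needs({"people_present": 2, "mood": "relaxed",
                    "time_of_day": "evening"}))
```

In practice such rules would be replaced by learned models, but the structure is the same: cues in, a ranked set of plausible need states out.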
beyond engagement: experience
One of the key insights is that engagement is still hugely important, but no longer the sole goal. For certain need states, such as ‘learning’ and ‘personal interest’, highly engaging content is adequate. Current state-of-the-art recommender algorithms perform well during these TV moments, as they are set up to maximize engagement.
However, there are several need states for which full-on engagement is neither required nor even feasible to satisfy the TV viewer’s wants. For these moments, the overall experience is much more important than immersion in highly engaging content.
Our research highlights the necessity of moving beyond engagement, of taking context into account and, ultimately, of focusing on personal needs fulfilled by media moments.
For example, when a father and his son are in front of the TV on a rainy Sunday afternoon, it might be more opportune to focus on the son when making a fitting movie recommendation: the father will likely remain relatively unengaged, yet can thoroughly enjoy this cosy family moment spent together.
In situations like this, we have found that such contextual cues matter far more for recommendations than traditional clustering of past viewing preferences. For these TV moments, recommending the best-fitting content remains a challenge: since content engagement is not the primary goal, one needs to take as many contextual cues as possible into account to enable real-time, context-aware recommendations.
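One way to picture how contextual cues can outweigh viewing history is a weighted blend of a classic engagement prediction with a contextual-fit score. The function name, weighting and numbers below are hypothetical; the sketch only illustrates the trade-off described above.

```python
def recommend_score(engagement_pred: float, context_fit: float,
                    context_weight: float = 0.7) -> float:
    """Blend predicted engagement with contextual fit, both in [0, 1].
    A higher context_weight favors the current TV moment over
    historical taste."""
    return (1 - context_weight) * engagement_pred + context_weight * context_fit

# Rainy-Sunday family moment: a kids' movie with so-so predicted
# engagement for the father can still outrank his usual thriller.
kids_movie = recommend_score(engagement_pred=0.4, context_fit=0.9)
thriller   = recommend_score(engagement_pred=0.9, context_fit=0.2)
print(kids_movie > thriller)
```

With `context_weight=0.7` the family-friendly title wins despite the lower engagement prediction; a pure engagement maximizer would rank them the other way around.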
Emotion and mood emerge from the research as promising yet still slightly mystifying factors for use in media suggestions. Yet in combination with other real-time and historical context information they form a strong base from which to deduce – and even predict – a viewer’s needs. Once you know why someone turns on the TV, it becomes much easier to suggest to them exactly the content they want to see.
The results of this joint research are currently being integrated into Sentiance’s context-aware recommender solution, which is being rolled out to media providers worldwide.
Context-aware Media Personalization: Better Recommendations through Context
Presentation on Slideshare
Sentiance and VRT define how context-awareness transforms the TV viewing experience, Feb 2015
Press Release on pr.co