Koen Willaert joins Argus Labs to work on technology’s EQ

September 17, 2014

There’s plenty of ‘smart’ doing the rounds in technology nowadays. There are smartphones, smart watches, and smart thermostats. Smart driverless cars are in the works, and it takes a mere $3,600 to become the proud owner of a smart fridge (“Yes, it is a refrigerator. It’s also a command central,” LG). Whilst all these devices are quite clever in their own way, most of them lack the one thing needed to fully understand what we need from them: insight into how we humans feel. Enter Koen Willaert, who is joining Argus Labs as head of affective computing to advance our research into ‘emotional intelligence’ for technology: how devices can better ‘sense’ our emotions and mood, and how machines, recommenders, algorithms, apps, and so on should incorporate that emotional information into their reasoning and behaviour.

Koen Willaert is a senior social scientist working in the areas of affective and ubiquitous computing, interaction design, and human-computer interaction. His background lies in social and experimental psychology and statistics. Over the years, Koen has built up broad experience in human-centered technology innovation and developed the ability to move from theory to practice in business-relevant application and service innovation, including service and product improvement. Koen has a special interest in experimental research approaches and statistical data-mining techniques.

We’re glad to have you strengthening our team. Hopefully, you’re just as excited?

KW: Yes, definitely! We’re a cool bunch of colleagues, I guess, and I feel the momentum is there to develop a few technologies to maturity and bring them to market.

You’ll be working on affective computing, a field that aims to make technology better at recognising and adapting to human emotions, giving it empathic abilities of sorts. What is your definition of the ‘affective computing’ field?

KW: Well, it’s fairly simple. It’s the field in which systems and devices that can recognise, interpret, process, and simulate human emotions and affects are developed and studied. However, I can’t stress enough the distinction between ‘mere’ affective computing products and empathic products. When you tell people that you are developing technologies able to sense, reason about, and act upon their emotions, you can expect quite a few reserved reactions, as some people will think of surveillance, for instance. And indeed, like all technologies, you can use affective computing techniques for good or bad reasons. In terms of metaphors, there is a whole continuum ranging from ‘big brother’ to ‘caring mother’. With our technologies we should aim for the empathic ‘caring mother’ end of the spectrum, where technology is there to help users, empower them, and let them achieve their goals. That’s a message we will need to repeat over and over again in the near future, I expect.

In human-computer interaction and affective computing, what have been the most incredible advances of the last few years?

KW: In my opinion, it is definitely the sensing aspect. We are now able to sense emotions in multimodal ways, ranging from facial expressions, postures and body movements, speech, and text to various physiological measurements.
This period is interesting because the current wave of wearables offers us a platform to experiment in daily-life circumstances with the high-end emotion recognition techniques nurtured in lab research. Nevertheless, huge challenges lie ahead of us. Emotion recognition is quite often just the input layer of an affective or empathic system; more experimentation is needed to further develop the reasoning components, which in most cases require advanced contextual knowledge embedded in the system.
Quite often people assume that an affective system should act whenever negative affects are detected, in order to elicit more positive affects in the user. I’ll give one example just to illustrate the level of complexity: when someone is really immersed in watching a horror movie, the system will detect negative affects, as this person might be scared or even disgusted. At a more meta-level, however, this person may well be enjoying the experience even though these negative affects are present. Hence the system should be smart enough not to interrupt the person by pushing content recommendations.
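To make the horror-movie example concrete, here is a minimal sketch of such a reasoning rule in Python. Everything in it (the AffectReading fields, the thresholds, the context label) is a hypothetical illustration, not part of any actual Argus Labs system.

```python
from dataclasses import dataclass

@dataclass
class AffectReading:
    """One multimodal affect estimate (e.g. fused from face, voice, physiology)."""
    valence: float     # -1.0 (very negative) .. +1.0 (very positive)
    engagement: float  # 0.0 (disengaged) .. 1.0 (fully immersed)

def should_interrupt(reading: AffectReading, context: str) -> bool:
    """Decide whether to push a content recommendation at the user.

    A naive rule would intervene on any negative affect. The smarter rule
    notes that a user who is highly engaged with self-chosen content (say,
    a horror movie) probably enjoys the negative affect, so it stays quiet.
    """
    if reading.valence >= 0.0:
        return False  # user is fine; nothing to 'fix'
    if context == "self-chosen media" and reading.engagement > 0.7:
        return False  # scared but immersed: enjoying it, don't interrupt
    return True       # genuinely negative and disengaged: suggest something else

# A scared but immersed horror-movie viewer is left alone:
print(should_interrupt(AffectReading(valence=-0.8, engagement=0.9), "self-chosen media"))
```

The point of the sketch is simply that context and engagement have to modulate the raw affect signal before the system acts.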

The internet was up in arms recently when it was ‘discovered’ that Facebook had run a controversial psychological experiment on its users, modifying the news feed algorithm to show more negative or positive content in their feeds. They were looking for, and found, signs of ‘emotional contagion’. What is your take on such dark tests?

KW: It was an interesting case on many levels; I guess there are a million things to be said about this experiment.
First of all, to a certain extent I can agree when people say they felt more ‘subject to’ than ‘participant in’ this experiment, as standard procedures such as informed consent, debriefing, and an opt-out option were not followed in this case. However, on a more meta-level, the debate about this kind of experiment also made clear that we are very rapidly entering another era, one in which the standard procedures and concepts for setting up and conducting a state-of-the-art psychological experiment need to be rethought in the context of large-scale pilots and big-data experiments. The level of complexity will only increase: once machines start to reason on human data, we will have a hard time distinguishing between human and machine data.

When it comes to the experimental manipulation itself (filtering relatively more positive or negative content into newsfeeds), there is nothing unethical about it, given that it is framed as an experiment. In classic experiments you are obliged to inform your participants about possible risks and harms upfront and leave them the option to opt out, but you shouldn’t tell them what you are manipulating, because that would undermine the whole research set-up.

If we think further, you could well say that in daily life we are pretty much in this experimental condition all the time, without there being a neutral, manipulation-free condition. I mean, Facebook needs to filter content anyhow, as there is far too much of it to fit into a newsfeed; they could deploy sentiment analysis or other techniques, for good or bad reasons, to maximise user satisfaction, optimise ad placement, and so on. We need to accept that we will never know exactly how their algorithm works. It may well be that Facebook changes its algorithms regularly. Given this baseline of constant manipulation, you could say that this study did not involve any additional manipulation.

I’m assuming there was such a ruckus about this because people felt violated on a very personal level: how they are feeling. We like to believe that unless we explicitly talk or post about them, we are at least able to keep our thoughts and feelings private. On the other hand, sharing them might be beneficial. How do you believe ‘access to our thoughts and feelings’ should be managed, privacy- and control-wise?

KW: Well, the biggest problem I have with this experiment is the conclusions the authors drew, which made some people feel violated even though the data provides no evidence for those conclusions; people shouldn’t feel violated after all! For me, those conclusions are the only dark aspect of this study. Furthermore, I’m not at all impressed by the effect sizes of the results, which, as the authors themselves acknowledge, are very small.

Basically, Facebook reduced either the positive or the negative content in newsfeeds to a certain degree. As a result, they observed a slightly increased number of positive words in Facebook posts when someone was relatively more exposed to positive words, and vice versa when someone was relatively more exposed to negative posts. If I remember correctly, for someone expressing himself across various Facebook posts in 10,000 words, only about 7 of those words were more overtly positive or negative!
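Taking that recalled 7-in-10,000 figure at face value (it is a recollection, not a citation from the paper), the back-of-envelope arithmetic shows just how small the shift is:

```python
# Scale of the recalled effect: ~7 affected words per 10,000 written.
affected_words = 7
total_words = 10_000
print(f"{affected_words / total_words:.2%} of words shifted")  # 0.07% of words shifted
```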

This very slight increase in the number of positive or negative words used in Facebook posts does not necessarily mean that any change in mood was caused, in contrast to what the authors would have us believe.

I’ll illustrate it with a daily-life example: I’m excited about my new job, and I’m meeting a friend who was in a car accident. As I sympathise with my friend, I’ll show some negative affect, because I feel sorry for him or her. Clearly, this context doesn’t motivate me to express my excitement. Does this mean that I’m less excited? No; you could well hypothesise that I will express my excitement even more when the environment is stimulating.

Choose one of three, and explain why: metaverse, rat things or pattern recognition.

KW: An easy one, as for a couple of years I had the opportunity to work on a collaborative research project named ‘Metaverse’. Within this project we developed and tested a Metaverse-inspired system that aimed to support informal communication and social awareness within distributed project teams by leveraging a customised virtual world such as Second Life. The system was able to sense real-life events, translate them into virtual representations based on user-defined rules, and visually represent those interpreted events in a virtual world. I designed the virtual world visuals, defined rules for the system, and tested the prototype within our own offices. I spent long hours in a cold server room installing the system and fine-tuning routers, beacons, and sensors in the offices of colleagues. Some colleagues commented: ‘Are you sure you are conducting user research? You act like an engineer!’
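For flavour, here is a hypothetical sketch of that sense-rule-represent pipeline; the event types, rule format, and function names are invented for illustration and are not the project’s actual code.

```python
from typing import Callable, Dict, Optional

# User-defined rules: each maps a sensed real-life event to a description of
# its virtual-world representation (plain strings here, for brevity).
rules: Dict[str, Callable[[dict], str]] = {
    "door_opened":   lambda e: f"avatar of {e['person']} appears in the virtual office",
    "phone_ringing": lambda e: f"{e['person']}'s virtual desk glows red (busy)",
}

def translate(event: dict) -> Optional[str]:
    """Apply the user-defined rule for this event type, if one exists."""
    rule = rules.get(event["type"])
    return rule(event) if rule else None

print(translate({"type": "door_opened", "person": "Koen"}))
```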

Another remarkable moment was at security control at the Tel Aviv airport, where the security guards decided they wanted to know everything about this project once they learned I had attended a high-tech conference. I can tell you, I talked more about this project at the airport than during the conference. At first I had some difficulty explaining this Metaverse project to security guards, but after a while I took pleasure in showing them some design mock-up videos and observing their reactions. Needless to say, it was an interesting piece of research!

You’re already at the forefront of researching innovative use of new technologies, but ten years from now, what fascinating topic do you see yourself working on? Something that is not quite imaginable now, but that will be near-reality by 2025.

KW: I expect fast evolution from now on, with, for instance, the end of smartphones and other smart devices as we know them today. You can imagine all computing power integrated into textiles (clothing), our bodies (implants), and wearables. Hopefully we will be able to tap into the kinetic energy of the human body, thereby finding an ecological solution to this ‘battery problem’. Maybe holography will become popular, we will no longer drive our cars, and we will interact with social robots on a daily basis.

For me, the most fascinating prospect is machines such as social robots developing highly meaningful bodies of concepts on their own, such as a new language, or teaching each other about highly subjective things, for instance explaining their view of the world.

But however radical some of these examples might appear, some may well be considered incremental innovations. More radical innovations only occur when people’s mental model of a piece of technology changes, implying new meanings.

In this respect I expect a lot, in the long run, from the work we will be conducting and from advances in artificial intelligence. Hopefully, before long a piece of technology will be perceived more as a virtual agent: an autonomous entity that observes and acts upon an environment and directs its activity towards achieving goals. Most current technology doesn’t follow this paradigm; your media player, for instance, doesn’t know how to cope with a set of changing circumstances, whereas a fairly simple system like a thermostat does: it will keep your house warm no matter the weather conditions outside.
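As a toy illustration of that agent paradigm (observe the environment, act towards a goal, whatever the circumstances), here is a minimal thermostat loop; the class, thresholds, and temperatures are invented for illustration.

```python
class ThermostatAgent:
    """Minimal sense-act agent: observes a temperature, acts towards a goal."""

    def __init__(self, target_temp: float):
        self.target_temp = target_temp  # the goal the agent pursues

    def act(self, current_temp: float) -> str:
        """Pick an action from the observation, whatever caused the reading."""
        if current_temp < self.target_temp - 0.5:
            return "heating on"
        if current_temp > self.target_temp + 0.5:
            return "heating off"
        return "idle"

agent = ThermostatAgent(target_temp=20.0)
for observed_temp in (17.0, 19.8, 22.5):  # changing outside conditions
    print(observed_temp, "->", agent.act(observed_temp))
```

However the weather behaves, the loop keeps steering towards its goal, which is exactly what the media player in the example above cannot do.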

In the end, I would say the speed at which the omnipresent ‘user/system’ concept erodes, in favour of this notion of interacting agents, can be seen as a measure of success for our endeavour.

Last but not least, what music would you like to hear in the office?

KW: I like this question. Like most of you, I’m a music lover, collecting records and frequently attending gigs. I like everything from slightly off the hook to truly experimental, across various genres: indie rock, electronics, ambient, psychedelic, free jazz, noise, improvised music, and so on. Needless to say, most of what I listen to is fairly ‘underground’, so I would like to hear some of that stuff in the office; I guess there are quite a few little secret gems out there waiting to be discovered.

You can reach Koen at koen.willaert@sentiance.com and @koenwillaert on Twitter. He is particularly likely to respond to emails and tweets regarding affective computing, big data experiments and all things exotic.
