23/04/2009 By dadmin

Emotion-related research seminar at The Royal Society

I’ve just attended a 2-day seminar on Computation of emotions in man and machines organized by Prof. Peter Robinson and Dr. Rana El Kaliouby at The Royal Society in London.
The seminar had an amazing lineup of speakers, important figures from emotion-related research: Paul Ekman (Univ. of California San Francisco, USA), Rosalind Picard (MIT Media Lab, USA), Cynthia Breazeal (MIT Media Lab, USA), Roddy Cowie (Queen’s University, Belfast), Jeffrey Cohn (University of Pittsburgh, USA), Maja Pantic (Imperial College London, UK / Univ. of Twente, NL), Klaus Scherer (Univ. of Geneva, Switzerland), Kristina Höök (SICS, Stockholm University/KTH, Sweden), Mel Slater (UCL, UK), Ursula Hess (Univ. of Quebec, Canada), William (Bill) Gaver (Goldsmiths, University of London, UK), Amy Baylor (Florida State University, USA), Simon Baron-Cohen (Cambridge Univ., UK), Beatrice de Gelder (Tilburg University, NL / Harvard Medical School, USA), Catherine Pelachaud (Université de Paris, France), and Chris Frith FRS (UCL, UK). Here are the agenda and the description of the seminar.
In summary, the two days were full of very interesting presentations on sensing, recognizing, modelling and using emotions, with lively discussions about the purpose of and challenges in the area. I will try to give a more detailed description of the talks below, though I am not sure I will manage to capture the essence and importance of each one.
Paul Ekman started the first day with a presentation on Darwin’s contribution to the study of emotions and, especially, to the study of facial expressions of emotions. Ekman is one of the most important figures in the study of emotions. Darwin’s (other) great work, “The Expression of the Emotions in Man and Animals”, set out to show that the expression of emotions is innate and universal across humans and shared with animals. Ekman highlighted the main contributions of the book, such as the discrete representation of emotions, the focus on facial expressions and the theory that facial expressions of emotions are universal.
Chris Frith focused his talk on neuroimaging of emotions and on how we tend to mimic the emotional expressions we see in others. He also talked about differences in how people react to emotions exhibited by robots, about perceived trustworthiness based on facial features like raised eyebrows and eye opening, and about communicating with facial expressions such as the eyebrow flash. I have to admit that I did not agree with some of the things he presented, especially those related to how people react to emotions displayed by robots, which seemed to me rather over-generalized, especially when thinking of robots like the ones built by Cynthia Breazeal. Another thing I did not agree with was an example from a study on perceived trustworthiness based on face pictures. I did not fully agree with the categorization of the faces into trustworthy and untrustworthy, as I found myself reaching back to past experiences and using similarity to decide which face seemed more trustworthy to me and which did not. So his hypotheses about the importance of eye size and eyebrow height did not seem to hold for me.
Klaus Scherer is a major figure in the area of modelling emotions. He presented his Component Process Model, which models emotions as episodes rather than snapshots. His appraisal-based model considers the various factors involved in generating emotions as responses to external and internal stimuli.
Kia Höök presented some of her work on the “affective loop” approach, in which end users are deeply involved in consuming and managing their own data through creative and personalized means.
Jeffrey Cohn talked about experiments with realistic-looking avatar faces, showing how people tend to adapt their head movements and facial expressions in response to the other party’s behaviour during social communication.
Ursula Hess’s talk focused on other facial characteristics that can affect emotion recognition. Emotion recognition research mainly focuses on certain features associated with certain emotions, while other characteristics of the face itself are not actually considered: man vs. woman, dominant vs. submissive, age, ethnic group, etc. Such characteristics can actually influence the emotional display and, of course, how others perceive it.
Maja Pantic’s talk focused on machine learning algorithms for face-based recognition of non-acted emotions. Most face-based emotion recognition research has been done with acted emotions, which are exaggerated and last longer than real-life emotional displays. She presented her research on combining various Action Units for appearance-based automatic emotion detection, and she emphasized context-aware, multi-modal emotion recognition as an emerging and challenging field.
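For readers not familiar with Action Units, here is a purely illustrative Python sketch of the basic idea of combining AUs into emotion labels. The AU combinations and the scoring function are simplified textbook prototypes of my own choosing, not the learned, appearance-based models Pantic described.

```python
# Illustrative only: mapping detected FACS Action Units to prototypical
# basic-emotion labels. Real systems learn classifiers over AU intensities
# and temporal dynamics; these combinations are simplified prototypes.

PROTOTYPES = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
}

def score_emotions(detected_aus: set[int]) -> dict[str, float]:
    """Score each emotion by the fraction of its prototype AUs that were detected."""
    return {
        emotion: len(aus & detected_aus) / len(aus)
        for emotion, aus in PROTOTYPES.items()
    }

if __name__ == "__main__":
    # Example: an AU detector reports AU6 and AU12 active in the current frame
    print(score_emotions({6, 12}))   # happiness scores 1.0, the others lower
```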
Beatrice de Gelder emphasized the role of bodies in emotion recognition and how they add important information to facial expressions. Her neuroscientific research focuses on fMRI-based detection of brain activation related to emotion recognition, which could tell us more about how people recognize emotions in other people as well as what the emotional response to them is. From this point of view, face-based emotion recognition does not seem to generate much activation compared to recognition based on body language. Body language also plays a much more important role when distance is involved or for people with visual deficits.
The second day started with a talk by Simon Baron-Cohen on efforts to improve emotion recognition in high-functioning autistic kids. He talked about a DVD that was created for and with autistic kids and their families, with the goal of enabling kids to better recognize emotions and situations. The results of the studies performed with the resulting DVD were extremely good, though there were no results yet on the long-term effects of the intervention and how well the kids could actually apply the recognition in the real world. It was also not clear whether the kids can actually change their empathic response to the emotions they see or whether the improvement is purely cognitive.
Rosalind Picard focused her talk on her group’s current work on autism. She showed and demonstrated sensing systems they developed for detecting stress in autistic kids, which make it easier for families and educators to detect when stress starts building up even before external signs are clearly visible. She presented various examples of real users experimenting with the sensing devices.
Roddy Cowie provided a comprehensive view of emotion research today as well as its major challenges. He talked about emotional colouring, referring to the various types of emotions we experience during the day. He highlighted some new approaches that include probabilities and context in their models for a better definition of the emotional state.
Cynthia Breazeal presented her robots Kismet and Leonardo, as well as their internal model, which draws on various domains, both technological and from the study of humans. Her videos of Kismet and Leonardo had a very big impact on the audience, and it was clear that humans can have a very strong (positive) emotional response to robots 🙂 She also presented one of the latest improvements to Leonardo, in which he is shown maintaining an internal model of the true and false beliefs of two people. I found that video extremely interesting: the scenario is that Leonardo follows (with his eyes) two people, one dressed in black and one in red, as they hide some treats in two boxes. Then they both go away and only one of them returns and changes the place of one of the treats. At that point, Leonardo “understands” that one person has a true belief about the situation while the other has a false belief, since he was not there when the change was made.
So, when the person with the false belief comes back and pretends to be trying to open the box where he thinks the treat is, Leonardo opens the box where the treat is actually located by pressing the corresponding button on a remote control. When the person with the true belief comes back and also pretends he cannot open the box with the treat, Leonardo again opens the right box with the button. I found it fascinating to watch!
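Just to make the idea concrete, here is a toy Python sketch (my own, not Breazeal’s actual architecture) of the kind of bookkeeping such a false-belief model implies: the robot keeps its own model of the world plus a separate belief state for each observer, updated only while that observer is present.

```python
# Toy sketch of false-belief tracking: ground truth plus per-observer beliefs.
world = {"treat": "box_A"}                  # what the robot actually observes
beliefs = {"person_black": dict(world),     # each person's last known state
           "person_red": dict(world)}

def move_treat(new_box: str, observers_present: list[str]) -> None:
    """Update the world; only observers who are present update their beliefs."""
    world["treat"] = new_box
    for person in observers_present:
        beliefs[person]["treat"] = new_box

# One person leaves; the other moves the treat while they are away.
move_treat("box_B", observers_present=["person_red"])

print(beliefs["person_black"]["treat"])  # box_A -- a false belief
print(beliefs["person_red"]["treat"])    # box_B -- a true belief
print(world["treat"])                    # box_B -- the box Leonardo opens
```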
Catherine Pelachaud’s talk was on Embodied Conversational Agents, and she presented her group’s advances in creating realistic virtual characters that can use multimodal expressions of emotions. To make them look more realistic, their agents incorporate temporal dynamics that follow the evolution of emotions. She showed how complex emotion expressions are created by combining various facial expressions.
Mel Slater’s talk was on immersive experiences. He presented some experiments they have been doing using the CAVE environment at UCL. I found this work quite interesting, especially for the ethical issues it raised. Slater introduced the two main components of immersive experiences: PI (Place Illusion) and PSI (Plausibility Illusion). The subjects in his experiments wore sensing devices that measured their heart rate, heart rate variability and GSR (Galvanic Skin Response) in order to gauge their emotional response to the simulated situations. The result was that people responded to simulated situations as if they were real, before their cognitive side eventually overrode the reaction with the knowledge that the experience is not real. The experiments showed that actual immersion, where people are also able to move around in the virtual space, generates very strong emotional responses despite the knowledge that it is all just a simulation.
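As a side note on the measures mentioned, here is a minimal Python illustration (an assumption on my part, not the actual pipeline used at UCL) of how heart rate and one common heart-rate-variability statistic, RMSSD, can be computed from inter-beat intervals.

```python
import math

def heart_rate_bpm(ibi_ms: list[float]) -> float:
    """Mean heart rate in beats per minute from inter-beat intervals (ms)."""
    return 60_000.0 / (sum(ibi_ms) / len(ibi_ms))

def rmssd(ibi_ms: list[float]) -> float:
    """Root mean square of successive differences between inter-beat intervals."""
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical inter-beat intervals from a short recording segment
ibi = [820.0, 810.0, 790.0, 805.0, 815.0, 800.0]
print(round(heart_rate_bpm(ibi)), "bpm,", round(rmssd(ibi), 1), "ms RMSSD")
```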
Amy Baylor’s talk was on the importance of the appearance of virtual interface agents. Her experiments focused on how to design agents that interact with people so that they are effective in conveying their message to end users. One of the examples she gave was an experiment in which interface agents were supposed to convince teenage girls that a career in engineering was actually “cool”. The experiment used two basic agents (one male and one female) but varied their hair, age, dress style, etc. The girls were then asked to rate the agents on various parameters, to figure out which aspects of appearance mattered most when the message was about something intellectual, like engineering. Overall, I think it’s quite an interesting use of avatars, though I wonder what impact such a tool would have if used by companies in their hiring process for customer-facing roles. Of course, there should already be enough studies on what counts for customer-facing staff in various companies and situations. The advantage of using avatars is that their appearance can be changed very quickly, but I can very well imagine bad uses coming out of this, as in “you don’t fit the profile given by the computer”.
The seminar ended with William Gaver’s talk on designing for emotions. He emphasized the importance of going out and testing systems with end users instead of trying to build perfect systems in the lab. Unlike the other speakers, he presented a failed experiment, in which they installed about 10 monitoring sensors in a family’s home over a longer period of time. The data from the sensors was combined into a horoscope-style report with status statements like “you had too much to do” or “you are too stressed”, and suggestions like “you should take it slower”. The experiment was ultimately a failure, as the people in that family did not like being told redundant or inexact things. I have to say I disagree with the conclusion that people do not want to be monitored or be told what happened based on data collected in this way. At the very least, I found the experiment not relevant enough to support that claim, as I can see lots of problems with the way it was done. The conclusion that the failure comes from the affective computing approach seems odd to me, as the type of application and the way the user interaction was designed should be questioned first. The output was too abstract and gave the family no information they did not already have, since it hid exactly the details they did not know. I was surprised that they did not even try to find out from people what they would have liked to see from such (or similar) systems. Anyway, to me it looked more like an example of bad interface and interaction design that had nothing to do with the sensing part.
The seminar’s final panel focused on ethics and privacy for such systems. As usual, these are very hard to define and discuss. As with any technology, what really matters is how it is used and for what purpose. Research in certain areas, like autism, shows that such technology is indeed useful. User involvement came up again, and it was recognized that now is the time to look outside the labs more and involve people in the design of such systems.