

Sensing affect: facial analysis and multimodal fusion


Affective Computing is an emerging field that aims to develop intelligent systems capable of recognizing, interpreting and processing human emotions. Intelligent human-computer interaction provides natural ways for humans to use computers as aids. It is argued that, for a computer to interact naturally with humans, it needs some of the communication skills of humans; one of these skills is the emotional aspect of communication. For this reason, affect sensing is becoming an indispensable part of advanced human-computer interfaces.

The group's work in this area addresses the two main research focuses of Affective Computing: emotion recognition from the user's facial expressions, and multimodal fusion of affective information extracted from different human communicative channels. To this end, a novel and effective system for facial affect sensing has been developed and subsequently extended to tackle the problem of multimodal human affect recognition.

The facial affect recognizer senses emotions from a captured static image of the user's face. Its inputs are a set of facial parameters, angles and distances between characteristic points of the face, chosen so that the face is modeled in a simple way without losing relevant facial expression information. The system implements an emotional classification mechanism that combines, in a novel and robust manner, the five classifiers most commonly used in the field of affect sensing, producing as output a weight that associates the facial expression with each of Ekman's six universal emotional categories plus neutral. It has been trained on an extensive universal database containing more than 60 individuals of different races, ages and genders, so the system can analyze any subject, male or female, of any age, ethnicity and physiognomy. It has been exhaustively validated by means of statistical evaluation strategies such as cross-validation, classification accuracy ratios and confusion matrices. Human assessment has also been taken into account in the evaluation: the system has been shown to behave much like the human brain, making similar confusions.
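To make the combination stage concrete, the sketch below fuses the probability distributions returned by five classifiers into a single set of category weights. It is a minimal illustration only: the reliability-weighted averaging rule, the weight values and the function names are assumptions made for the example, not the group's actual combination mechanism.

```python
import numpy as np

# The seven output categories: Ekman's six universal emotions plus neutral.
CATEGORIES = ["anger", "disgust", "fear", "happiness",
              "sadness", "surprise", "neutral"]

def combine_classifiers(predictions, reliabilities):
    """Fuse per-classifier probability distributions into one weighted
    output. `predictions` holds one length-7 array per classifier;
    `reliabilities` holds one scalar per classifier (e.g. its
    cross-validation accuracy). The weighted average shown here stands
    in for the actual, more elaborate combination mechanism."""
    preds = np.asarray(predictions, dtype=float)   # shape (5, 7)
    w = np.asarray(reliabilities, dtype=float)
    w = w / w.sum()                                # normalize the weights
    fused = w @ preds                              # weighted average
    return fused / fused.sum()                     # weights sum to 1

# Example: five classifiers scoring one facial image (values are made up).
scores = [np.random.dirichlet(np.ones(7)) for _ in range(5)]
weights = [0.82, 0.79, 0.85, 0.77, 0.80]           # e.g. CV accuracies
fused = combine_classifiers(scores, weights)
print(dict(zip(CATEGORIES, fused.round(3))))
```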


The methodology developed to fuse affective information coming from different channels makes it possible, first, to move from static to dynamic facial affect sensing, and then to fuse over time any other emotional information coming from different modalities. The expansion to dynamic and multimodal Affective Computing is achieved through a two-dimensional description of affect that gives the system the mathematical machinery to handle temporal and multisensory emotional information. The methodology can fuse any number of categorical modules, with very different time scales and output labels, without having to redefine the whole system each time new information is added. The key step from a categorical perspective of emotions to a continuous affective space relies on Whissell's Dictionary of Affect in Language, which allows any emotional label to be mapped to a 2D point in the affective plane. The methodology outputs a 2D emotional path that represents the user's detected affective progress over time. A Kalman filtering technique smooths this path in real time, providing temporal consistency and robustness. Moreover, the methodology adapts to changes over time in the reliability of the different inputs.
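The sketch below illustrates these two ideas together: mapping categorical labels to points in the 2D affective plane, and smoothing the resulting path with a Kalman filter. The numeric coordinates are placeholders rather than actual entries of Whissell's dictionary, and the constant-position filter model and reliability handling are simplifying assumptions made for the example.

```python
import numpy as np

# Illustrative label -> (evaluation, activation) coordinates in the 2D
# affective plane. Real values would come from Whissell's Dictionary of
# Affect in Language; these numbers are placeholders.
AFFECT_PLANE = {
    "happiness": ( 0.8,  0.5),
    "sadness":   (-0.6, -0.4),
    "anger":     (-0.5,  0.8),
    "neutral":   ( 0.0,  0.0),
}

class AffectPathFilter:
    """Smooth the stream of 2D affective points into a temporally
    consistent emotional path (constant-position Kalman model)."""
    def __init__(self, q=0.01, r=0.1):
        self.x = np.zeros(2)      # state: current point on the 2D path
        self.P = np.eye(2)        # state covariance
        self.Q = q * np.eye(2)    # process noise: how fast affect may drift
        self.R = r * np.eye(2)    # baseline measurement noise

    def update(self, z, reliability=1.0):
        # Predict: the state is assumed constant, so only covariance grows.
        self.P = self.P + self.Q
        # A low-reliability input inflates the measurement noise, so the
        # filter trusts it less -- a simple stand-in for the adaptive
        # reliability weighting described above.
        R = self.R / max(reliability, 1e-6)
        K = self.P @ np.linalg.inv(self.P + R)     # Kalman gain
        self.x = self.x + K @ (np.asarray(z, dtype=float) - self.x)
        self.P = (np.eye(2) - K) @ self.P
        return self.x

# Example: a sequence of detected labels, mapped and filtered over time.
kf = AffectPathFilter()
for label in ["neutral", "happiness", "happiness", "anger"]:
    point = kf.update(AFFECT_PLANE[label], reliability=0.9)
    print(f"{label:>9}: {point.round(3)}")
```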

The proposed affect sensing techniques and methodologies have been successfully applied in different real human-computer interaction contexts. The extracted affective information is used to improve interaction with Embodied Conversational Agents, to enhance the performance of automatic opinion analysis, and to build a tutoring tool that lets a remote teacher follow the learner's emotional progress. The potential of the multimodal fusion methodology is also demonstrated by fusing dynamic affective information extracted from the different channels of an Instant Messaging tool: video, typed-in text and emoticons, as sketched below.
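As a final illustration, the sketch below fuses the 2D affective points reported by several channels at one time step into a single point by reliability-weighted averaging. The channel weights and the fusion rule are assumptions for the example; the actual methodology also filters the fused path over time, as shown earlier.

```python
import numpy as np

def fuse_channels(channel_points, channel_weights):
    """Fuse the 2D affective points reported by each channel at one
    time step into a single point (reliability-weighted average)."""
    pts = np.asarray(channel_points, dtype=float)   # shape (n_channels, 2)
    w = np.asarray(channel_weights, dtype=float)
    return (w[:, None] * pts).sum(axis=0) / w.sum()

# Example: video, typed-in text and emoticons each report a 2D point
# in the affective plane (values and weights are illustrative).
points  = [(0.7, 0.4),    # video channel
           (0.5, 0.1),    # typed-in text
           (0.9, 0.6)]    # emoticons
weights = [0.6, 0.3, 0.1] # per-channel reliability
print(fuse_channels(points, weights))
```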



