Researchers from the Human Centered Multimedia lab at the University of Augsburg developed a virtual character that can recognize a user's emotions and display them back.
The affective listener Alfred, presented in the video, recognizes human emotions from acoustic properties of speech such as tone of voice, loudness, and speed (i.e. without using word information). Its interface captures the user's speech and facial behaviour with a wireless microphone and a webcam, respectively. The captured signals are then processed within the Social Signal Interpretation (SSI) framework, using the EmoVoice framework for vocal emotion recognition and the SHORE software for facial analysis. The emotions read from the face and heard in the voice are combined by mapping the two inputs as vectors in a two-dimensional emotion space. These vectors are merged into a single summary vector, which is sent to the virtual agent so that it expresses the corresponding affective state.
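The fusion step described above can be sketched in a few lines. This is a hypothetical illustration only: the actual system uses the SSI framework, and the axis names (valence/arousal), the `Emotion2D` type, the `fuse` function, and the equal weighting are assumptions, not the published implementation.

```python
# Hypothetical sketch of combining two emotion estimates as vectors
# in a 2D space. Names and weights are illustrative assumptions,
# not the SSI framework's actual API.
from dataclasses import dataclass


@dataclass
class Emotion2D:
    """A point in a 2D emotion space (axes assumed to be valence/arousal)."""
    valence: float  # negative .. positive, in [-1, 1]
    arousal: float  # calm .. excited, in [-1, 1]


def fuse(face: Emotion2D, voice: Emotion2D, face_weight: float = 0.5) -> Emotion2D:
    """Combine the face and voice vectors into one summary vector
    via a weighted average (the 50/50 weighting is an assumption)."""
    w = max(0.0, min(1.0, face_weight))  # clamp to [0, 1]
    return Emotion2D(
        valence=w * face.valence + (1 - w) * voice.valence,
        arousal=w * face.arousal + (1 - w) * voice.arousal,
    )


# Example: a positive, calm face combined with an agitated voice
# yields a moderately positive, moderately aroused summary state.
face_estimate = Emotion2D(valence=0.8, arousal=0.2)
voice_estimate = Emotion2D(valence=0.1, arousal=0.9)
summary = fuse(face_estimate, voice_estimate)
print(summary)
```

The summary vector would then drive the agent's expression, e.g. by selecting the facial animation closest to that point in the emotion space.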
Virtual Alfred won the GALA Jury Award at the 8th International Conference on Intelligent Virtual Agents (IVA 2008).