Emotions in Embodied Conversational Agents
A model of multimodal sequential expressions of emotion for an Embodied Conversational Agent was developed. The model is based on video annotations and on descriptions found in the literature. From these sources, a language was derived that describes an expression of emotion as a sequence of facial and body movement signals. An evaluation study of the model is presented in this paper. Animations of 8 sequential expressions corresponding to the emotions anger, anxiety, cheerfulness, embarrassment, panic fear, pride, relief, and tension were realized with the model. The recognition rates of these expressions are above chance level, suggesting that the model can generate recognizable expressions of emotion, even for emotional expressions that are not considered to be universally recognized.
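The abstract does not specify the syntax of the description language. As a rough illustration of the underlying idea, the sketch below encodes a sequential expression as an ordered list of timed, possibly overlapping signals across modalities. The `Signal` structure, the signal names, and the timings are illustrative assumptions, not the authors' actual language; the embarrassment example loosely follows displays described in the emotion literature (gaze aversion, controlled smile, head down, face touching).

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """One behavioural signal in a multimodal sequential expression.

    Hypothetical structure; not the representation used in the paper.
    """
    modality: str   # e.g. "face", "gaze", "head", "gesture"
    name: str       # signal label, e.g. "controlled_smile"
    start: float    # onset, in seconds from the start of the expression
    end: float      # offset, in seconds from the start of the expression

# Illustrative sequence for embarrassment; signal names and timings
# are assumptions made for this sketch, not data from the study.
embarrassment = [
    Signal("gaze", "look_down", 0.0, 1.2),
    Signal("face", "controlled_smile", 0.3, 1.5),
    Signal("head", "head_down", 0.5, 2.0),
    Signal("gesture", "touch_face", 1.0, 2.5),
]

def active_signals(sequence, t):
    """Return the signals active at time t; signals may overlap."""
    return [s for s in sequence if s.start <= t < s.end]

if __name__ == "__main__":
    # At t = 0.8 s the gaze, face, and head signals overlap.
    for s in active_signals(embarrassment, 0.8):
        print(s.modality, s.name)
```

Representing an expression as timed, overlapping signals rather than a single static facial configuration is what allows a sequence like this to unfold over the animation, which is the core idea of sequential expressions of emotion.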