Steve Di Paola of Simon Fraser University demonstrated a virtual agent system to Chatbots.org. The system interprets real-time emotional cues in input such as the voice, and renders matching facial expressions and gestures. During the 9th International Conference on Intelligent Virtual Agents (related to the IVA Gala), Steve showed us how his parameterized facial animation system operates. His goal is to develop a system that can handle any face type, behavior, and voice, for use in games, movies, and virtual-agent worlds. His team's current focus is on how emotions should be expressed.

In the first few minutes of the video, he demonstrated various types of faces. Steve then grabbed his microphone and started talking. The avatar spoke along with Steve, its lips synchronized with his words. As soon as Steve raised or changed his voice, the avatar reacted accordingly, which was absolutely amazing. It widened its mouth when he talked louder, moved its eyebrows to emphasize what it was saying, and even responded to a drumming sound. Wow! More explanation and images after the break.
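To give a feel for how voice input can drive facial parameters, here is a minimal illustrative sketch: it maps the loudness of an audio frame to a "mouth open" value, with an extra eyebrow raise on emphatic volume. This is our own toy example, not Di Paola's actual system; the parameter names and thresholds are hypothetical.

```python
import math

def rms(frame):
    """Root-mean-square loudness of one audio frame (samples in [-1, 1])."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def face_params(frame, quiet=0.02, loud=0.5):
    """Map frame loudness to two hypothetical facial parameters in [0, 1].

    mouth_open grows with volume; brow_raise kicks in only on emphasis
    (the loudest ~40% of the range). Thresholds are assumptions, not
    values from the demonstrated system.
    """
    level = rms(frame)
    # Normalize loudness between an assumed quiet floor and loud ceiling.
    t = max(0.0, min(1.0, (level - quiet) / (loud - quiet)))
    mouth_open = t
    brow_raise = max(0.0, t - 0.6) / 0.4  # raise brows only when speech is emphatic
    return {"mouth_open": mouth_open, "brow_raise": brow_raise}
```

A real system would run this per audio frame (e.g. every 20 ms) and smooth the values before feeding them to the animation rig, alongside phoneme-based lip sync.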
He also demonstrated the system's emotional gradations, such as anger or nervousness, as well as the kind of behavior we would expect of an alien. Awesome!