Virtual Humans are automated agents that converse, understand, reason and exhibit emotions. They possess a three-dimensional body and perform tasks through natural-language dialogs with humans.
Typical usage
Virtual Human, as presented in this video, constitutes a behavior generation and character animation system for conversational simulations and training systems. Given a list of communicative functions (illustration, emphasis, turn-taking, etc.) and/or behavioral requests (gaze, gesture, speech, etc.), a Virtual Human is capable of generating a cohesive animated performance while speaking with humans.
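As a rough illustration of this idea only, the Python sketch below shows how a list of communicative functions might be turned into timed behavioral requests that accompany an utterance. All names here (BehaviorRequest, plan_performance, the function-to-behavior table) are hypothetical and are not part of SmartBody or any real toolkit API.

```python
# Minimal sketch, assuming a hypothetical behavior-planning interface:
# communicative functions in, timed behavioral requests out.
from dataclasses import dataclass
from typing import List

@dataclass
class BehaviorRequest:
    channel: str      # e.g. "gaze", "gesture", "speech"
    action: str       # e.g. "look_at_listener", "beat_gesture"
    start: float      # seconds, relative to the start of the utterance
    duration: float   # seconds

# A hypothetical mapping from communicative functions to nonverbal behaviors.
FUNCTION_TO_BEHAVIOR = {
    "emphasis":     ("gesture", "beat_gesture"),
    "turn_taking":  ("gaze", "look_at_listener"),
    "illustration": ("gesture", "iconic_gesture"),
}

def plan_performance(text: str, functions: List[str]) -> List[BehaviorRequest]:
    """Turn an utterance plus communicative functions into a timed behavior plan."""
    # The speech request spans the whole utterance (very rough duration estimate).
    plan = [BehaviorRequest("speech", text, start=0.0,
                            duration=len(text.split()) * 0.4)]
    for i, function in enumerate(functions):
        channel, action = FUNCTION_TO_BEHAVIOR.get(function, ("gesture", "idle"))
        # Naively spread the nonverbal behaviors across the utterance.
        plan.append(BehaviorRequest(channel, action, start=0.5 * i, duration=1.0))
    return plan

if __name__ == "__main__":
    for request in plan_performance("We should secure the clinic first.",
                                    ["emphasis", "turn_taking"]):
        print(request)
```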
The SmartBody system was created within the extensive Virtual Human Project, as presented in the video below. Virtual Humans are capable of hearing, understanding and responding, and they are able to track the behavior of humans because they see human movement through infrared cameras. Virtual Humans also process and respond to human emotions, which are essential in communication processes. The authors of this project are trying to simulate how emotions influence the decision-making process. They are researching which actions can make Virtual Humans happier, angrier, or more frightened. As you will notice, this project brings together research in intelligent tutoring, natural language recognition, emotional modelling, and immersive graphics and audio.
The Virtual Humans Project at the University of Southern California (USC) is an exciting and potentially powerful initiative for extensive interactive experiences. It may provide a powerful tool for assessment, intervention, and training for medical professionals. Virtual Patient technology has evolved to the point where researchers might use virtual reality patients to develop the interviewing and diagnostic skills of clinicians in training. The Virtual Patient allows novice mental health clinicians to conduct an interview with a virtual character that emulates an adolescent male with conduct disorder. This video provides a demonstration of a virtual human being interviewed by a human.
Standardized Patients are quite common at many U.S. medical schools for assessing students’ communication and diagnostic skills, which are crucial to quality patient care. Students encounter patients with progressively more difficult medical conditions, and feedback is given on their performance. Sim-Patient (short for “simulated patient”) uses high-fidelity scenarios to improve pre-hospital, primary, and emergency care by enhancing problem-based medical education, training, and practice. The Sim-Patient platform uses interactive 3-D avatars to train health care providers in the pre-hospital care delivered by military personnel, first responders and paramedics during combat or large-scale disasters, as presented in the video below.
When people interact, their speech prosody, gestures, gaze, posture, and facial expressions contribute to establishing a sense of rapport.
The Rapport Project, as presented in this picture, explores the potential of Virtual Humans to establish rapport with humans through simple contingent nonverbal behavior. Rapport is argued to underlie success in negotiations, psychotherapeutic effectiveness, classroom performance and even susceptibility to hypnosis. The Rapport Project uses machine vision and prosody analysis to create Virtual Humans that can detect and respond in real time to human gestures, facial expressions and emotional cues, and thereby create a sense of rapport.[1]
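As a rough illustration of how such contingent listener feedback could be wired up, the Python sketch below (hypothetical field names and thresholds; not the Rapport Project’s actual code) mirrors a detected speaker nod and triggers a backchannel nod when the speaker pauses after steeply falling pitch.

```python
# Minimal sketch, assuming hypothetical per-frame features from a vision
# tracker and a prosody analyser; not the Rapport Project's implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    speaker_nodding: bool   # from a hypothetical head tracker
    pitch_slope: float      # Hz per second, from a hypothetical prosody analyser
    is_silent: bool         # speaker currently pausing

def listener_response(frame: Frame, min_falling_slope: float = -10.0) -> Optional[str]:
    """Return a nonverbal behavior to perform on this frame, if any."""
    if frame.speaker_nodding:
        return "mirror_nod"          # simple contingent mimicry of the speaker
    if frame.is_silent and frame.pitch_slope <= min_falling_slope:
        return "backchannel_nod"     # nod at a likely backchannel opportunity
    return None

if __name__ == "__main__":
    frames = [
        Frame(speaker_nodding=False, pitch_slope=-2.0, is_silent=False),
        Frame(speaker_nodding=True,  pitch_slope=-1.0, is_silent=False),
        Frame(speaker_nodding=False, pitch_slope=-15.0, is_silent=True),
    ]
    for i, frame in enumerate(frames):
        print(i, listener_response(frame))
```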
The Institute for Creative Technologies (ICT) at USC has created a Virtual Human Toolkit with the goal of reducing some of the complexities inherent in creating Virtual Humans. The toolkit is an ever-growing collection of innovative technologies, fuelled by basic research performed at ICT and by its partners. Through this toolkit, ICT hopes to provide the virtual humans research community with a widely accepted platform on which new technologies can be built. This video is a demonstration of ICT’s Virtual Humans technology.
The next level up is where Virtual Humans hold beliefs, desires and intentions, and gain an understanding of the social context. In Stability and Support Operations (SASO), a user employs natural language as the medium of communication with life-sized Virtual Human agents to perform a negotiation task. This video demonstrates the Virtual Human system and its research components.
Building embodied virtual humans is a complex multi-disciplinary effort that requires many components to be connected. The Interactive Media Group (iMG) at Carleton University is actively researching the parameterization of facial personality and the relation of facial actions to the perception of personality in viewers. Examples are video messaging on cell phones and online Virtual Humans, as presented in the picture below.[2]
The JUST-TALK project, funded by the National Institute of Justice Office of Science and Technology and developed by RTI International, integrates virtual reality training software into a 3-day class at the North Carolina Justice Academy. The JUST-TALK application, presented in the video, interacts with students in a role-playing virtual environment designed to train law enforcement personnel in dealing with subjects who present symptoms of serious mental illness. Students converse with the Virtual Human using spoken natural language, and see and hear the Virtual Human’s responses, a combination of facial gestures, body movements, and spoken language.
Background
The term Virtual Human is a composite of two words: virtual and human.
The word virtual dates back to the late 14th century as “influencing by physical virtues or capabilities”, from Latin virtualis, from virtus, “excellence, potency, efficacy”. The meaning “being something in essence or fact, though not in name” is first recorded in the 1650s, probably via the sense “capable of producing a certain effect”. The computer sense of “not physically existing but made to appear by software” is attested from 1959.[3]
The word human, meaning “pertaining to man”, originates from the 14th century. In its earliest use it appeared as humain(e), from French, from Latin hūmānus, related to homō, “man”. The form humane persisted in general use until the early 18th century, while the form human (based directly on the Latin) occurs in the late 17th century. The variant humane became restricted during the 18th century to the senses (i) characterized by a disposition or behaviour befitting a man (formerly specifically “gentle, courteous”, 15th-16th centuries), and (ii) pertaining to studies that tend to humanize or refine (17th century).[4]
Human - belonging to man or mankind; having the qualities or attributes of a man; of or pertaining to man or to the race of man; as a human voice; human shape; human nature; human knowledge; human life.[5]
According to TV & multimedia editor Jon Kaufthal, Virtual Human ancestry dates back to approximately 1973, when first attempts to put a human-like model on a computer screen were made.[6]
Examples of Virtual Humans
Virtual Human pages
Although we use chatbot as the main synonym on this website, please do not be confused: there are more than 161 synonyms in use by academics, businesses and virtual human enthusiasts! It is simply a matter of reading between the lines.
Please check out our main directory with 1376 live virtual human examples (an overview as maintained by the developers themselves), our vendor listing with 253 virtual human companies, and our virtual human news section with more than 368 articles already! Our research tab contains lots of papers on virtual humans, 1,166 journals on virtual humans and 390 books on virtual humans. This research section also shows which universities are active in the virtual human field, indicates which publishers are publishing journals on humanlike conversational AI, and informs about academic events on virtual humans. Also check out our dedicated tab for awards, contests and games related to the virtual human field, as well as various forums, like our AI forum for virtual human enthusiasts, and add any virtual human created by yourself and your colleagues to our virtual human directory. Please do not forget to register to join us in these exciting times.
Alternative usage of Virtual Humans
Virtual Humans, sometimes called virtual actors, can also refer to digital clones of human actors employed in movies.
The first digital clones appeared in 1987 on hundreds of TV stations, in the movie “Rendez-vous à Montréal”. It was a demo movie created by Daniel Thalmann and Nadia Magnenat-Thalmann of MIRAlab. The video below presents a virtual Marilyn Monroe and a synthetic Humphrey Bogart rendered in three dimensions, capable of moving, speaking, and expressing emotions.
The notion of virtual human is also used in biomedical research. Many ongoing virtual human projects concern virtual embryos with virtual hearts, virtual blood, and virtual lungs that breathe. These computer models are used to study and predict the potential for environmental chemicals to affect the embryo. The virtual human embryo therefore constitutes a computational framework for everything from vaccine development to cancer research to environmental pollutants.[7]
The term Virtual Humans can alternatively be used for modelling human beings. Google Body Browser, a free online 3D visualisation tool, offers teachers, students and health care professionals the possibility to explore the human body for anatomy education. In the video you can see a 3D virtual human model which allows you to explore the human body close up and from every angle.
The term Virtual Humans can also be used for personalised patient computer models for the predictive healthcare of the future. The 3D anatomy of a Virtual Human Body for studying complex internal anatomy may be found on the Visible Body website. It supports anatomy education with an appealing visual model of the human body, as presented in the picture. Using the controls you may rotate, isolate and zoom in on elements of the human body, including bones, nerves and vessels.[8]
The Virtual Physiological Human (VPH) project, an EU-funded health research initiative, creates a constantly expanding knowledge database which will be used to develop better patient diagnosis and treatment. This ‘personalised healthcare framework’ allows a wide range of doctors, scientists and researchers to virtually investigate the human body as a single complex organism. This will improve our ability to predict, diagnose and treat disease, and will have a dramatic impact on the future of healthcare and of the pharmaceutical and medical device industries.[9]
Apart from medical applications, virtual human models can also serve to study, paint or sketch human anatomy. The 3D virtual human anatomy studio developed by Cloudstars reveals both superficial and deep muscles, tendons, and bones. The video below presents this human anatomy reference application, designed specifically for artists, which includes a rich, accessible 3D environment.
Another research effort covering a three-dimensional, multi-scale, interactive computer model of male and female anatomy is the LINDSAY Virtual Human collaborative project at the University of Calgary. Its aim is to provide a collection of computational tools for interactively exploring biological models of human anatomy and physiology, as presented in the video below.
The term Virtual Human is sometimes also used to describe human beings who work remotely, alternatively called Virtual Assistants or Virtual Agents.