Member
Total posts: 12
Joined: Oct 27, 2011
Hi all.
I’ve recently joined your forum and thought I would see what people’s opinions are on where we are with ‘human’ avatars for chatbots.
I have produced many ‘lifelike’ human avatars over the past few years and am always coming up against walls such as the ‘uncanny valley’. I think I have developed the avatars I produce (to the point where I feel they are more akin to corporate portraits) so they aren’t disturbing to view, but I am always open to other people’s opinions.
I have seen many of the options available for avatars, from 2D cel-like Flash images, through 3D CGI animations, to videos of real people, and I constantly try to keep up to date with the quality and style of avatars being offered. I must admit I tend to prefer those I produce, but then I am biased.
Maybe you guys/girls have a favourite human avatar that you feel works well?
Posted: Oct 28, 2011 [ # 1 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
Hi, Adrian, and welcome to chatbots.org!
I’ve taken a look at the examples of your work that you have on your website, and I have to say that I’m both impressed and jealous. I play around (when I have time) with 3D characters, but my skills are practically non-existent when compared to the quality of the characters you’ve got. Well done.
Having a lifelike avatar that is “too realistic” can certainly be a problem for a lot of people, though I personally don’t understand why some find them to be disturbing. As far as I’m concerned, such levels of realism help to increase a person’s “suspension of disbelief”, and should only enhance their experience, not drive them to the psychologist’s couch.
Posted: Oct 28, 2011 [ # 2 ]
Member
Total posts: 12
Joined: Oct 27, 2011
Hi Dave,
Thank you for your welcome and comments.
I seem to spend a lot of my time focusing on and struggling with the fluidity and verisimilitude of the avatars as well as the genuineness of the emotions.
I must admit I do like the concept of real people (video footage) as avatars, but I have trouble with the ones I have seen because of the way they are looped. Nothing puts me off a real-person avatar more than seeing a visible edit/cross-fade which highlights its pre-recorded nature.
What I produce is pre-rendered emotions, the same as video footage, but I have spent a lot of time trying to create a smooth experience so as to reduce the possibility of breaking an audience’s suspension of disbelief.
Although I must admit I wish I could revisit some of the avatars I’ve made as I continue to learn of new techniques.
Sadly, though, with the amount of time I spend studying and trying to create people, I think it is I who will be driven to a psychologist’s couch.
Posted: Oct 28, 2011 [ # 3 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Welcome, Adrian!
I don’t know if this applies to avatars so much as 3D animation in general, but I’ve long felt that the motion of 3D characters tends to fall into the “uncanny valley”, more than their appearance. The characters move as though they don’t feel their own weight, if you will. When your muscles flex, you’re pulling against the weight of your limbs/body, adjusting to reposition this weight so that your strongest muscles carry the burden. When you run, you push against the ground to boost yourself forward, you use gravity to help “swing” your legs and arms along. This motion looks different than the choreographed translation of limbs that even the most advanced animation studios produce. In a nutshell, animated characters move as if they’re weightless and all muscle.
I’m not sure if this limitation is a question of computing power or implementation. I imagine the former largely shaped the landscape of current animation techniques. (I don’t have experience in this field at all, and am very interested in your perspective, Adrian.) I wonder how computationally intensive it would be to implement some sort of limited finite element model as a “skeleton” for an animated character. This must be done to some extent. The nodes would each correspond to some muscle/muscle group with an ability to apply forces set by biological constraints. When you command a node to move to a certain location, the neighboring nodes would react accordingly, shifting the body around as the forces on each node strike a balance and (this part’s key) letting the strongest muscle groups available bear the majority of the internal forces. Because people move as effortlessly (lazily) as they can.
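To make that concrete, here is a toy sketch of the kind of thing I’m imagining (Python; the node strengths, links, and update rule are all invented purely to illustrate the ‘laziness’ rule, not drawn from any real animation package):

import numpy as np

# Toy "lazy skeleton": nodes stand in for muscle groups joined by
# links with rest lengths. Commanding one node toward a target lets
# the others rebalance, with stronger nodes absorbing more of the
# correction -- a crude stand-in for "let the strongest muscles do
# the work". Everything here is invented for illustration.
class Node:
    def __init__(self, pos, strength):
        self.pos = np.asarray(pos, dtype=float)
        self.strength = strength  # relative force this group can supply

def relax(nodes, links, commanded, target, iters=200, stiffness=0.5):
    """Pull `commanded` toward `target`; neighbours shift to compensate."""
    for _ in range(iters):
        nodes[commanded].pos += 0.1 * (target - nodes[commanded].pos)
        for i, j, rest in links:
            delta = nodes[j].pos - nodes[i].pos
            dist = np.linalg.norm(delta)
            if dist == 0:
                continue
            # How far this link is stretched or squashed from rest.
            error = (dist - rest) * delta / dist
            total = nodes[i].strength + nodes[j].strength
            # The stronger end takes the larger share of the correction.
            if i != commanded:
                nodes[i].pos += stiffness * (nodes[i].strength / total) * error
            if j != commanded:
                nodes[j].pos -= stiffness * (nodes[j].strength / total) * error

# Three-node "arm": strong shoulder, middling elbow, weak wrist.
nodes = [Node([0, 0], 10.0), Node([1, 0], 4.0), Node([2, 0], 1.0)]
links = [(0, 1, 1.0), (1, 2, 1.0)]
relax(nodes, links, commanded=2, target=np.array([2.0, 1.0]))
for n in nodes:
    print(n.pos)

The key line is the strength-weighted split of each link correction; that is the “lazy” balancing I mean, however crudely expressed here.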
Posted: Oct 29, 2011 [ # 4 ]
Senior member
Total posts: 473
Joined: Aug 28, 2010
I’ve found the use of motion capture of real actors for animation to be very convincing. The recently published game “L.A. Noire” went one step further and used motion capture of facial expressions to reach a new level of subtlety in the game’s avatars. Have you investigated that technology?
Posted: Oct 29, 2011 [ # 5 ]
Member
Total posts: 12
Joined: Oct 27, 2011
@ Andrew: Hi. I do have an extensive motion capture library which I use as a springboard for body animations. There are many facial motion capture systems which I am exploring; to be honest, the majority of affordable ones will only provide a springboard to start from and require a lot of manipulation of the data to produce an acceptable performance. Some technology is just inaccessible due to cost (games and film companies get a lot of budget to spend on these things).
@CR: Hi. Weight in animation is sooo important. At the high end of computer-generated humans, a lot of time (and person power) is allocated to ‘rigging’ the characters up. This involves systems like those you have described, with bones that have muscles and tendons attached to them. Scripts are used to allow elements of the rig to influence each other. The level of realism in a rig boils down to the rigger’s skill at converting real-world movements into scripted reactions (I can see it being hard to locate a computer ‘programmer’ who also has good observational skills with regard to anatomy and movement, so people who can do this work and push the boundaries are still quite rare).
As an example of a reactive rig, the characters I build make use of 38 bones for the face that are linked to each other so that they react ‘like’ facial muscles. For example, if you lift the corner of the mouth, the movement also influences the cheeks, etc., but this is still a huge stone’s throw away from the rigs used nowadays in high-end productions.
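In rough terms, the linking boils down to driver/driven relationships with weights, something like this toy sketch (Python; the bone names, the single ‘lift’ channel, and the weights are invented for illustration and are not my actual rig):

# Toy sketch of driver/driven links: one control channel drags
# weighted secondary channels along with it. Bone names, the single
# 'lift' channel, and the weights are all made up for illustration.
class Bone:
    def __init__(self, name):
        self.name = name
        self.lift = 0.0  # one toy channel: vertical offset of the bone

bones = {n: Bone(n) for n in
         ["mouth_corner_L", "cheek_L", "eyelid_lower_L", "nostril_L"]}

# (driver, driven, weight): how strongly the driven bone follows.
links = [
    ("mouth_corner_L", "cheek_L", 0.6),         # cheek follows strongly
    ("mouth_corner_L", "eyelid_lower_L", 0.2),  # slight squint
    ("mouth_corner_L", "nostril_L", 0.1),       # faint nostril movement
]

def pose(driver_name, value):
    """Set a driver channel and propagate weighted secondary motion."""
    bones[driver_name].lift = value
    for driver, driven, weight in links:
        if driver == driver_name:
            bones[driven].lift = weight * value

pose("mouth_corner_L", 1.0)  # raise the left mouth corner fully
for b in bones.values():
    print(f"{b.name}: lift = {b.lift:.2f}")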
The trouble is, you would have to have some clever randomness thrown into the muscle movements to avoid noticeably repetitive movements. If a foot moved from A to B repeatedly, it would soon become apparent that it was following a computed path. If I went to touch a button, the likelihood of my being able to touch that button in exactly the same way again is very low (even if it’s only a fraction of a millimetre off); in a computer it would hit the same spot exactly. I think this is where the human eye can pick up nuances of motion: as it is near impossible for a human body to repeat a motion exactly, we are used to seeing things with a certain tolerance. We accept moving a hand from A’ish to B’ish and read it as moving from A to B, yet if we actually do see movement from A to B without deviation, it becomes mechanical and unreal.
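To make the A’ish-to-B’ish point concrete, the fix can be as simple as nudging every target by a small random amount, as in this trivial sketch (Python; the tolerance and the button position are numbers I have made up):

import random

# Trivial sketch of "A-ish to B-ish": nudge each reach target by a
# small random offset so no two repetitions land on exactly the same
# spot. The tolerance (~2 mm) and button position are made up.
def humanize(target, tolerance=0.002):
    """Return the target offset by a small random amount per axis."""
    return tuple(c + random.uniform(-tolerance, tolerance) for c in target)

button = (0.40, 1.10, 0.25)  # hypothetical button position, in metres
for press in range(3):
    print(f"press {press}: hand lands at {humanize(button)}")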
I think this is a major element of the uncanny valley. When I watch game trailers that make use of high-end motion capture studios, I find the motion looks very real, as do the characters, but at times one little thing will happen: maybe a top lip moves without the nostrils flaring slightly, or in tweaking the mocap data an animator may have smoothed the movement of an elbow slightly too much.
I am following a discussion at the moment about how Weta continue to use Andy Serkis as their main mocap actor, as he has now developed his acting into a ‘movement to drive a character rig to deliver a narrative’ approach rather than just ‘movement to deliver a narrative’. This highlights how, at the high end of production, they are aware that taking motion capture data of ‘normal’ movement (however normal that can be if you are wearing a mocap suit and performing in an empty room) and placing it on a character doesn’t work for an audience as you would expect it to. So now they have the actor perform with the mentality that he is also a puppeteer.
I guess there are still technical limitations, budget limitations, and production deadlines which hamper the delivery of high-end/high-profile virtual humans.
I am glad people are still striving towards the ‘holy grail’ and didn’t give up at Final Fantasy, The Polar Express, or Beowulf, to name a few.
At last, some people to discuss this with!
Posted: Oct 29, 2011 [ # 6 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Good point, Andrew. Motion capture provides a way to “get around” the problem by letting humans actually do the moving.
Adrian, what kind of animation techniques does your company employ? I took a look at your website, and I really like your avatars. They strike a fine balance of realism, with just a touch of “cartoonish” feature exaggeration to avoid falling into the “uncanny valley”.
EDIT: Whoops, I see you’ve posted while I was writing this. Taking a look now…
Posted: Oct 29, 2011 [ # 7 ]
Member
Total posts: 12
Joined: Oct 27, 2011
Ooooo, many a time I’ve broken an ankle slipping down that uncanny valley.
@CR: For the avatars, animation is done by hand, eye, and judgement (at the moment) using fairly complex rigging. Part of my plan for developing the avatars is to test-drive some facial mocap for emotions, which I hope to implement over the next six months. A lot of the ‘look and feel’ of the avatars I’ve produced stems from the clients, who have expressed a preference for a semblance of real people rather than trying to achieve real. Even after five years of developing these avatars, I am still tweaking, refining, and adding to them in order to improve the user experience and facilitate empathy with the chatbot.
Posted: Oct 29, 2011 [ # 8 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Adrian Shipp - Oct 29, 2011: Weight in animation is sooo important. At the high end of computer-generated humans, a lot of time (and person power) is allocated to ‘rigging’ the characters up. This involves systems like those you have described, with bones that have muscles and tendons attached to them. Scripts are used to allow elements of the rig to influence each other.
Interesting stuff. How do these scripts work? Are the muscles and tendons really modeled as extended objects (with some sort of bulk modulus/elasticity/etc.), or as finite elements with prescribed moments of inertia, etc., that determine how the object will exert force on its neighbors? How do these models distribute the internal forces of each element? What I mean is, if you tell the model to, say, move an arm, can it change the position of other parts of the “skeleton” to minimize the internal forces necessary to keep the model balanced? (Shift weight to another leg, etc.) Or are these “ancillary” motions put in by the ‘programmer’ by hand?
You gave the example of moving your upper lip corresponding to a flaring of the nostrils. (I admit I never noticed that; now I’m sitting here making weird faces… ) Would your models automatically move the nostrils if you moved the avatar’s lips, or is each motion choreographed?
Adrian Shipp - Oct 29, 2011: The level of realism in a rig boils down to the rigger’s skill at converting real-world movements into scripted reactions (I can see it being hard to locate a computer ‘programmer’ who also has good observational skills with regard to anatomy and movement, so people who can do this work and push the boundaries are still quite rare).
Good point.
Adrian Shipp - Oct 29, 2011: The trouble is, you would have to have some clever randomness thrown into the muscle movements to avoid noticeably repetitive movements. If a foot moved from A to B repeatedly, it would soon become apparent that it was following a computed path. If I went to touch a button, the likelihood of my being able to touch that button in exactly the same way again is very low (even if it’s only a fraction of a millimetre off); in a computer it would hit the same spot exactly. I think this is where the human eye can pick up nuances of motion: as it is near impossible for a human body to repeat a motion exactly, we are used to seeing things with a certain tolerance. We accept moving a hand from A’ish to B’ish and read it as moving from A to B, yet if we actually do see movement from A to B without deviation, it becomes mechanical and unreal.
How do you employ this type of randomization in your own work? It would have to be subtle with regard to motions of the face, considering even slight changes can convey different emotions.
Adrian Shipp - Oct 29, 2011: At last, some people to discuss this with!
That’s why I enjoy this forum!
Posted: Oct 29, 2011 [ # 9 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Dang, looks like we crossed paths again. I should really check one last time for updates before posting…
Definitely keep the forum updated as your facial mocap project progresses!
Posted: Oct 30, 2011 [ # 10 ]
Member
Total posts: 12
Joined: Oct 27, 2011
C R Hunt - Oct 29, 2011:
Interesting stuff. How do these scripts work? Are the muscles and tendons really modeled as extended objects (with some sort of bulk modulus/elasticity/etc.), or as finite elements with prescribed moments of inertia, etc., that determine how the object will exert force on its neighbors? How do these models distribute the internal forces of each element? What I mean is, if you tell the model to, say, move an arm, can it change the position of other parts of the “skeleton” to minimize the internal forces necessary to keep the model balanced? (Shift weight to another leg, etc.) Or are these “ancillary” motions put in by the ‘programmer’ by hand?
The complexity of reactive scripting is limited by resources. A finger can just be three bones moved by ‘hand’, or it can make use of inverse kinematics (a built-in script that allows the computer to decide how to move bones depending on a destination goal). It can then have visible muscles built and attached to the bones that stretch and contract to help maintain the shape of a finger whilst bending. Extra deformers can then be added and automated to correct any bad deformations of the skin. Shaders can then be animated automatically to flatten or deepen wrinkles. You could also then apply formulas that add curving and movement to the fingers next to it at the extremes of joint movement. It could go on, but there always comes a point where the benefits of such ‘excesses’ of rigging aren’t sufficient to merit their existence. If it takes a day to incorporate secondary finger motion into a rig, yet you only see that motion for one second, it is much more cost-effective to just animate it by hand.
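For anyone who hasn’t met inverse kinematics, the core idea is surprisingly small. A minimal two-bone solver in 2D looks something like this sketch (Python; the bone lengths and goal point are arbitrary, and production solvers add full 3D, joint limits, and pole vectors):

import math

# Minimal analytic two-bone IK in 2D: given two bone lengths and a
# goal point, find shoulder and elbow angles that reach toward it.
# A toy version only -- real solvers handle 3D, joint limits, etc.
def two_bone_ik(l1, l2, gx, gy):
    """Return (shoulder_angle, elbow_bend) in radians for goal (gx, gy)."""
    d = math.hypot(gx, gy)
    # Clamp the goal distance into the reachable annulus of the arm.
    d = max(min(d, l1 + l2 - 1e-9), abs(l1 - l2) + 1e-9)
    # Law of cosines gives the interior elbow angle for this reach.
    cos_elbow = (l1**2 + l2**2 - d**2) / (2 * l1 * l2)
    elbow_bend = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Aim the shoulder at the goal, corrected for the elbow bend.
    cos_inner = (l1**2 + d**2 - l2**2) / (2 * l1 * d)
    shoulder = math.atan2(gy, gx) - math.acos(max(-1.0, min(1.0, cos_inner)))
    return shoulder, elbow_bend

s, e = two_bone_ik(1.0, 1.0, 1.2, 0.8)
print(f"shoulder: {math.degrees(s):.1f} deg, elbow bend: {math.degrees(e):.1f} deg")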
C R Hunt - Oct 29, 2011:
You gave the example of moving your upper lip corresponding to a flaring of the nostrils. (I admit I never noticed that; now I’m sitting here making weird faces… ) Would your models automatically move the nostrils if you moved the avatar’s lips, or is each motion choreographed?
The rigging I have set up allows for secondary motion, such as the cheeks being affected by mouth movement, or the lips and cheeks by jaw movement, etc., but I can also go in and adjust each individually. Ask yourself this: if you weren’t looking to see whether your nostrils flare/move involuntarily when your upper lip moves… would they??
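That ‘automatic plus individual adjustment’ blend amounts to something like the following (Python; the channel names and weights are invented):

# Sketch of automatic secondary motion with a manual override on
# top: each channel follows its driver by a weight, and the animator
# can dial in an offset. Channel names and weights are invented.
def channel_value(driver_value, weight, manual_offset=0.0):
    """Secondary motion = weighted driver input + animator override."""
    return weight * driver_value + manual_offset

jaw_open = 0.8  # hypothetical driver channel, 0..1
cheek = channel_value(jaw_open, weight=0.3)                     # automatic
lips = channel_value(jaw_open, weight=0.5, manual_offset=-0.1)  # hand-tweaked
print(f"cheek: {cheek:.2f}, lips: {lips:.2f}")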
C R Hunt - Oct 29, 2011:
How do you employ this type of randomization in your own work? It would have to be subtle with regard to motions of the face, considering even slight changes can convey different emotions.
Yes, very subtle. Striving towards a non-mechanized feel, without compromising the emotion being delivered or any audience empathy, is a constant task.
Posted: Oct 30, 2011 [ # 11 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
OK, out of all that, I’m only familiar with IK chains (inverse kinematics), so again with the jealousy/awe. I may not be all that good in this area, but I’m certainly interested. :D
Posted: Nov 1, 2011 [ # 12 ]
Senior member
Total posts: 971
Joined: Aug 14, 2006
Hey Adrian, welcome here as well on my behalf!
Your post reminded me of an old thread I posted here a while ago, which actually might need a bit more attention from my side:
http://www.chatbots.org/ai_zone/viewthread/239/
My reaction there is a bit short, as I’m still in the middle of the campaign, but hopefully it helps…
Posted: Dec 23, 2011 [ # 13 ]
Administrator
Total posts: 2048
Joined: Jun 25, 2010
Looks like Dave is on the ball and got rid of it. No rest for the wicked, even at Christmas, eh Dave?
Posted: Dec 23, 2011 [ # 14 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
Nope, ’fraid not.
If anyone’s curious about what Steve’s referring to, please make private inquiries to either me or Steve. We’re not going to talk about it in public.
Posted: Dec 23, 2011 [ # 15 ]
Administrator
Total posts: 2048
Joined: Jun 25, 2010
Or feel free to delete all these comments, Dave, and nobody need be any the wiser? Merry Christmas, by the way; you are doing a great job moderating here. Have a glass of mulled wine on me.