Senior member
Total posts: 971
Joined: Aug 14, 2006
It’s quite hard to imagine this is all AI:
http://www.youtube.com/v/NqZM8gDD8mY
Posted: Feb 22, 2011 [ # 1 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
I watched the video, and some other videos from the same project, and here are a few thoughts that popped into my mind:
1. Although it seems to be impressively sophisticated at its level of speech-based interaction, it still strikes me as a chatbot that uses canned responses. I get this from the other videos, where long lists of possible responses are shown on a computer screen.
2. The avatars themselves (the animation) are not autonomous; they move based on a repertoire of predefined poses and movements (this is explained in some of the videos). They do not have free motor control over their limbs, so they cannot make a ‘new’ move or gesture based on learning. This is one of the things I’m looking at when I mention ‘virtual robotics’. I think using a game engine that includes a physics model (like the one in Blender, for example; there are more, of course) is the way to go here.
All in all it is pretty impressive, and I think we will see more applications of NLP emerge in the near future. But I also have the strong feeling that these are all short-term solutions to the idea of building AI. It is still lacking the ‘big one’: real autonomous consciousness, where the AI ‘understands’ the concepts that are discussed and decides, based on its own experiences, how to react to the input.
Posted: Feb 22, 2011 [ # 2 ]
Senior member
Total posts: 971
Joined: Aug 14, 2006
Hans Peter Willems - Feb 22, 2011: The avatars themselves (the animation) are not autonomous; they move based on a repertoire of predefined poses and movements (this is explained in some of the videos). They do not have free motor control over their limbs, so they cannot make a ‘new’ move or gesture based on learning.
That’s a new decade of learning: a robot with just sensors (motion sensors, a location sensor, a vision sensor) to observe where its limbs are, so it can learn to move. Starting from scratch, with a reason to live, ‘feel grass’, it would be rewarded internally whenever it manages to reach a limb out to grass in its neighborhood. In that case, all you would need is an objective, sensors, a ‘body control system’, and a little bit of AI.
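The loop described above (an objective, sensors to observe the limbs, and an internal reward) can be sketched in a few lines. This is only a toy illustration under my own assumptions: a single joint angle stands in for the whole body, the ‘grass’ is a target angle, and simple hill-climbing stands in for real reinforcement learning.

```python
import random

# Toy "body": one joint angle; the "grass" sits at angle 0.8.
# The robot senses its own limb position and is rewarded internally
# for getting closer to the objective.
GRASS_ANGLE = 0.8

def sense(angle):
    """Proprioception: how far is the limb from the grass?"""
    return abs(GRASS_ANGLE - angle)

def learn_to_reach(steps=2000, seed=42):
    random.seed(seed)
    angle = 0.0                     # starting from scratch
    best_distance = sense(angle)
    for _ in range(steps):
        trial = angle + random.uniform(-0.1, 0.1)  # explore a small move
        distance = sense(trial)
        if distance < best_distance:  # internal reward: keep moves that help
            angle, best_distance = trial, distance
    return angle

final_angle = learn_to_reach()
print(round(final_angle, 2))  # ends up near the grass after learning
```

No canned responses involved: the behaviour emerges from the objective and the sensor feedback alone, which is the point being made.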
Posted: Feb 22, 2011 [ # 3 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
I agree, but I also see it the other way around: giving an AI a (virtual) body that it can itself interact with might be an important step towards self-awareness, and ultimately ‘consciousness’.
I envision a system where I can teach the AI what a ‘knee’ is by touching the ‘knee sensor’ that is linked to the virtual bot. Next, when I ask ‘touch your knee with your hand’, the bot will simply do so as feedback to me, but it also receives feedback from its own ‘knee sensor’, making the concept a reality for the bot.
We already know from neuroscience that ‘feedback loops’ are an intrinsic part of how humans learn and operate.
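The teaching loop could be sketched like this (all names here, like `VirtualBot` and `sensor-7`, are hypothetical placeholders): the teacher touches a sensor while naming it, the bot grounds the word in that sensor, and a later command closes the loop because the bot’s own touch fires the same sensor.

```python
class VirtualBot:
    def __init__(self):
        self.labels = {}          # word -> sensor id, learned by teaching
        self.last_touched = None  # feedback from the bot's own body

    def teach(self, word, sensor_id):
        """Teacher touches a sensor while saying the word."""
        self.labels[word] = sensor_id

    def touch(self, word):
        """Command like 'touch your knee': act, then sense the result."""
        sensor_id = self.labels.get(word)
        if sensor_id is None:
            return "unknown body part"
        self.last_touched = sensor_id   # the bot feels its own action
        return f"touched {sensor_id}"

bot = VirtualBot()
bot.teach("knee", "sensor-7")
print(bot.touch("knee"))   # the command is carried out...
print(bot.last_touched)    # ...and the bot felt it through its own sensor
```

The two feedback channels (the reply to the teacher and the bot’s own sensor reading) are exactly the loop described above.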
Posted: Feb 22, 2011 [ # 4 ]
Senior member
Total posts: 971
Joined: Aug 14, 2006
Touching, interesting.
This one will require some processing power: think of hair (in general, not ours) waving in the wind and touching your shoulders, your hands, or even someone else’s. Virtual robots should actually be aware of every small input you can imagine.
And then there are shock waves, like right now in Christchurch; virtual robots should be able to feel those as well (and physical robots too, by the way).
But eventually, we’ll get there.
Posted: Feb 22, 2011 [ # 5 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
I totally agree with you there… and most of that is already technologically possible. We have sensor grids and sensor nets for high-density measurements over large areas. And as for shock waves, the physics model in Blender’s game engine already handles those, as it does gravity, acceleration/deceleration, aero- and hydrodynamics, collisions…
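Under the hood, what a physics model like that supplies boils down to integrating forces over time. A minimal sketch (plain Euler integration with made-up values, nothing like a real engine’s solver): gravity pulls a virtual body down until it collides with the ground plane.

```python
GRAVITY = -9.81   # m/s^2
DT = 0.01         # simulation time step, in seconds

def drop(height, steps=200):
    """Simulate a body falling from `height` metres; stop at the ground."""
    y, vy = height, 0.0
    for _ in range(steps):
        vy += GRAVITY * DT       # acceleration updates velocity
        y += vy * DT             # velocity updates position
        if y <= 0.0:             # collision with the ground plane
            y, vy = 0.0, 0.0
            break
    return y, vy

print(drop(1.0))  # the body ends up resting on the ground
```

A real engine does the same kind of stepping for every body, plus collision shapes, friction, and so on; the point is that the virtual bot gets physical feedback for free from the simulation.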
Posted: Feb 22, 2011 [ # 6 ]
Senior member
Total posts: 971
Joined: Aug 14, 2006
@HP: I once heard, at the beginning of 2010 as far as I remember, about an (academic) program (not a software program) with the objective of bringing all robot knowledge back together. I heard about it through word of mouth, but the source was reliable. Unfortunately, I have never been able to track the project down. Have you ever heard of it?
Posted: Feb 22, 2011 [ # 7 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
It doesn’t sound like something I’ve come across. But then again, I haven’t looked into robotics for the last few years, other than what comes along in internet news from the big technology community sites like Slashdot and the like.
I’m trying to stay away from real robotics, as that is yet another technological domain you can get totally submerged in. For now I’ll stick to virtual emulations of robotics, without having to deal with real hardware sensors, electronics (although I do actually have an electronics diploma as well), and things like pattern recognition.
Posted: Feb 22, 2011 [ # 8 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
Well, I’m not Hans Peter, and this may not be exactly what you’re looking for, but I found this to be interesting:
http://robots.net/article/3100.html
Posted: Feb 22, 2011 [ # 9 ]
Senior member
Total posts: 971
Joined: Aug 14, 2006
Have you seen this?
http://www.erwinvanlun.com/ww/C151/
I haven’t updated it in quite a while, but it’s still interesting.