Marco Marazzo - Jan 20, 2012:
The understanding of concepts is an illusion for human beings.
Yes, basically. That illusion, though, is good enough, practical, and serves our purpose. I think that, like the computer, we humans can have only a limited level of understanding. But I’ll leave that to the philosophers. The point, I think, is that “ultimate” understanding is not required; it depends on the type of system you are trying to build.
Dave Morton - Jan 20, 2012:
1.) The word “walk” - Simple, you may think, yes? Perhaps, but that depends on whether you want the person who “understands” the word to know just the basics (e.g. putting one foot in front of the other), or perhaps something much more complex, like the entire scientific description of the process of walking, which is immensely complex.
This goes back to my earlier statement about it depending on the application. If your application is just a “casual chat” chatbot, then I doubt you need a scientific understanding of walking. However, if it’s used for medical purposes, perhaps you would need to ‘drill down’ to a certain level. It’s all based on the application. If you demand your system be an expert in everything, I hope you have enough money to employ an army of ten million people to work for a century to input the data.
If you are thinking of something like the Loebner Prize, well, having a certain level of knowledge of walking should suffice, and it’s OK if your bot says “Ok buddy, I don’t know *THAT* much about how signals from a human’s brain travel through the spinal cord and cause the muscle contractions which move the legs”. Seriously, how many people do you know who know everything to the finest detail, from the scientific explanation of walking to biology, astronomy, history, etc.? No human knows everything, and perhaps no AI will (however, given that AIs can “live” forever, perhaps they’ll have an enormous amount of time to gain knowledge and come extremely close).
There’s this idea that an AI must know absolutely everything about everything, and that unless it does, it is worth ZERO and not intelligent. That idea is nonsense, of course.
AI, like humans, will be limited, just as everything in the universe is; even, perhaps, the entire universe itself. It will have limits to understanding and to knowledge. The important thing is: does it have enough knowledge to be useful, is it flexible, and can it learn (to whatever degree)?
I watched a good video a while back, from one of the researchers of “GOFAI” (good old-fashioned AI), and he said you cannot ever state any statement to be completely true or false.
“birds fly”
is this true?
WELL… what if it is a dead bird?
WELL… what if I clipped its wings?
WELL… what if the bird is in a small cage, with no room to fly?
Nonsense. The system must, like humans, consider the statement “birds fly” to be GENERALLY TRUE.
GENERALLY TRUE.
But it must have the capability, in a given conversation, to know that this or that SPECIFIC bird is dead, or has clipped wings, etc.
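To make “GENERALLY TRUE” concrete, here’s a minimal sketch in Python of one way to store a default rule alongside specific overrides. All the names here (defaults, exceptions, can_fly) are invented for illustration; this isn’t from any real system, just the general idea.

```python
# A toy knowledge base: one default rule plus per-individual exceptions.

defaults = {"bird": {"can_fly": True}}  # "birds fly" is GENERALLY TRUE

# Facts about SPECIFIC birds that override the default.
exceptions = {
    "charlie": {"can_fly": False, "reason": "dead"},
}

def can_fly(name, kind="bird"):
    """Use the specific fact if we have one, else fall back to the general rule."""
    if name in exceptions:
        return exceptions[name]["can_fly"]  # SPECIFIC knowledge wins
    return defaults[kind]["can_fly"]        # otherwise assume the default

print(can_fly("sam"))      # True  -- no exception known, so assume the default
print(can_fly("charlie"))  # False -- this specific bird is dead
```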
The trick is: can the system learn about these exceptions during conversation, and automatically integrate them into its knowledge base?
example…
Human: in general, birds fly
AI: thanks, I now know that in general birds fly.
Human: Charlie is my pet bird
AI: ok
Human: can Charlie fly?
AI: I’m assuming so
Human: Charlie can’t fly
AI: oh? why is that?
Human: well, you see, Charlie just died 2 seconds ago
AI: I see, so generally birds can fly, but if they’re dead they can’t?
Human: yes, that is correct
Human: I just got a new bird called Sam
AI: ok
Human: can Sam fly?
AI: GENERALLY birds fly, so I’m going to say yes.
Human: correct
Human: my friend has a bird named Tommy, but he can’t fly
AI: oh, he’s not dead, is he?
Human: nope, he’s in perfect health.
AI: aw… why can’t he fly?
Human: because Tommy is an ostrich
AI: ah, I see, so generally birds can fly, but some cannot?
Human: you got it
(the AI updates its database)
Human: I picked up a new bird today
AI: ok
Human: can my new bird fly?
AI: I don’t know, is he dead?
Human: LOL, no… *GENERALLY* when people pick up pets from a pet shop, they are not dead!
AI: ok (updates its database)
Human: my friend Bob picked up a bird today; his name is Joe
Human: can Joe fly?
AI: hmm, generally speaking birds fly, and generally when one picks up pets from a pet shop, they are NOT dead, so I’m going to assume Joe is NOT dead, and I’m going to say yes, Joe can fly. Am I right?
Human: yes, good for you
AI: thanks
So this isn’t a perfect example, but it’s close enough to illustrate the point.
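The learning step the dialogue implies might look something like this, assuming the same toy defaults/exceptions structures from the sketch above (again, invented names, not a real system): when the human contradicts the default answer, the system asks why and records the reason as a new exception, so the next query comes out right.

```python
# Continuing the toy sketch above: learning an exception mid-conversation.

def learn_exception(name, reason):
    """Record that one SPECIFIC bird contradicts the general rule."""
    exceptions[name] = {"can_fly": False, "reason": reason}

# Mirroring the Tommy exchange: the default gives a wrong guess,
# the human explains why, and the knowledge base is updated.
print(can_fly("tommy"))              # True -- wrong, guessed from the default
learn_exception("tommy", "ostrich")  # human: "because Tommy is an ostrich"
print(can_fly("tommy"))              # False -- the exception stuck
```

The harder step the dialogue hints at, turning “ostrich” from a fact about one bird into a new general rule about all ostriches (or “pets from a pet shop are generally not dead” into a new default), is where the real learning problem lives; this sketch only covers the single-bird case.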
Humans reason this way, I believe; children do. We make assumptions, we make mistakes, we learn by trial and error. We’re not going to develop an AI by first trying to figure out “scientific explanations” of things like walking and developing it like that. If an AGI program is going to succeed, it is going to happen the way we learn. Anyway, that’s my take.