Posted: Mar 20, 2014
[ # 16 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
Do you think so because the test result is either a pass or a fail (as long as no one comes close enough to fool any judges), or because the programs are rated by “human”-ness?
Posted: Mar 20, 2014
[ # 17 ]
Experienced member
Total posts: 84
Joined: Aug 10, 2013
The latter. Basically, you can get a lot of the way to (apparent) “human”-ness through the use of very simple techniques (relatively speaking; I don’t mean to imply that such techniques are easy to program, only that they’re simple compared to what true general intelligence would require). It’s only that last stretch that requires any real intelligence. So until that final stretch is crossed, the test would likely fail to distinguish between proto-AGIs and the less sophisticated chatbots of today.
Posted: Mar 21, 2014
[ # 18 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
Suppose, then, that we had a second, unofficial jury or an online audience rate the transcripts for “intelligence”, in comparison with (to be realistic) conversations with 5-year-olds? Both “human” and “intelligent” are of course subjective criteria, but given a statistically large enough jury, some meaningful average impression of “intelligence” may emerge. Then, as the AIs become more sophisticated, we could gradually raise the age of the comparison group.
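As a minimal sketch of the averaging idea (not from this thread: the 1-10 scale, the invented scores, and the normal-approximation interval are all illustrative assumptions), the jury’s mean impression and its uncertainty could be computed like this:

```python
# Hypothetical aggregation of jury "intelligence" ratings; all data invented.
import math
import statistics

def summarize(ratings):
    """Mean rating with an approximate 95% confidence interval
    (normal approximation; assumes a reasonably large jury)."""
    mean = statistics.mean(ratings)
    sem = statistics.stdev(ratings) / math.sqrt(len(ratings))
    return mean, (mean - 1.96 * sem, mean + 1.96 * sem)

# Invented 1-10 scores for an AI transcript and for transcripts of
# real 5-year-olds, rated blind by the same jury.
ai_scores = [4, 5, 3, 6, 4, 5, 5, 4, 6, 5]
child_scores = [6, 7, 5, 6, 7, 6, 5, 7, 6, 6]

for label, scores in [("AI", ai_scores), ("5-year-olds", child_scores)]:
    mean, (low, high) = summarize(scores)
    print(f"{label}: mean={mean:.2f}, 95% CI=({low:.2f}, {high:.2f})")
```

If the two confidence intervals overlap heavily, the jury can’t tell the AI’s transcripts from the children’s at that age level, and the comparison age could be raised.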
Posted: Mar 25, 2014
[ # 19 ]
Guru
Total posts: 1297
Joined: Nov 3, 2009
Yes, Don, I do agree with you; I have thought along those lines before. The kindergarten curriculum is rudimentary, so it may provide well-rounded training for a young A.I. For example, kindergarten-level workbooks have rudimentary lessons in algebra, probability, grammar and vocabulary. This would simplify programming a new A.I. engine to simulate the intelligence of a kindergarten student.
Posted: Mar 25, 2014
[ # 20 ]
Administrator
Total posts: 2048
Joined: Jun 25, 2010
I agree with 8pla. When I was building Mitsuku’s ontology database, I started with a body of text called “100 words every pre-schooler should know”: cat, dog, tree, house and so on. As the bot matured, so did the ontology.
In fact, the panel of AI experts at the Loebner Prize in Exeter (2011) suggested starting with a 2-year-old playing the part of the human (his mother could type the child’s responses back in) and seeing what age the AIs most closely represent. The technology and know-how aren’t there yet to represent a human.
It’s similar to space exploration: you have to launch a few rockets and land on the Moon first instead of going all out to build a manned colony on Mars.
Posted: Mar 26, 2014
[ # 21 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Steve Worswick - Mar 25, 2014: In fact, the panel of AI experts at the Loebner Prize in Exeter (2011) suggested starting with a 2-year-old playing the part of the human (his mother could type the child’s responses back in) and seeing what age the AIs most closely represent. The technology and know-how aren’t there yet to represent a human.
The problem I see with this is that we are bringing the ‘measurement level’ down way too far. A two-year-old is still incapable of reasoning, so the AI wouldn’t need to be capable of reasoning either. Actually, any currently available chatbot platform should be (and, in my opinion, IS) capable of emulating a two-year-old, simply because a two-year-old has a very limited vocabulary, does hardly any actual reasoning beyond reacting to its primal needs, and has a very limited attention span. So what we would be doing is bringing the reference model down to what we can do now in software and then declaring that we ‘now have working AI’ (this has actually happened in the media many times over).
The point with (for example) two-year-olds is that we can’t even determine their actual level of intelligence (we have to wait and see whether they develop into a ‘normally’ intelligent person or not), so how could that be a reference standard for ‘some level of intelligence’ in AI?
Also, if the technology and know-how to represent a human isn’t there yet, we should simply agree that we have our work cut out for us and get to it.
Posted: Mar 26, 2014
[ # 22 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
Hans Peter Willems - Mar 26, 2014: Also, if the technology and know-how to represent a human isn’t there yet, we should simply agree that we have our work cut out for us and get to it.
I couldn’t have said it better myself.
Posted: Mar 26, 2014
[ # 23 ]
Administrator
Total posts: 2048
Joined: Jun 25, 2010
So we should have no intermediate steps? All or nothing?
I forgot to mention: the Loebner panel suggested that once an AI had passed for a 2-year-old, we try the Turing Test again with 3-year-olds, then 4-year-olds, and so on, until it reaches an age level it cannot pass for. This way we can see what level we are currently at.
Posted: Mar 26, 2014
[ # 24 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Steve Worswick - Mar 26, 2014: So we should have no intermediate steps? All or nothing?
I forgot to mention: the Loebner panel suggested that once an AI had passed for a 2-year-old, we try the Turing Test again with 3-year-olds, then 4-year-olds, and so on, until it reaches an age level it cannot pass for. This way we can see what level we are currently at.
While I understand the idea of testing for the perceived level of intelligence, that should (in my opinion, of course) not be the primary goal to design for. I think the primary aim should be to build the ultimate AI and then see how far we can get. Yes, we will not get there right away, but trying to get there implies actually setting the goal… to get there.
Posted: Mar 26, 2014
[ # 25 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
It’s interesting how your approach determines your beliefs, Hans. You’re going at it top-down because you have the top figured out. A lot of people try to work their way up through gradual improvement, but that does not mean anyone will set their goals at the level of a 2-year-old or stop there. I’m also not suggesting we make that the point at which you win the prize; it would just be a gauge to “see how far we get” while we’re getting there.
The main reason I suggest children of elementary school age is that kids are more manageable at that age. School tests have made them familiar with being asked trivial questions, and they are beyond the “What is that?” and “Why is that?” phases of toddlers, meaning they have basic vocabulary, basic knowledge and basic reasoning to compare with. Most importantly, I don’t want to expose young kids to occasionally harsh interrogation, even through an intermediary.
The best candidates to be intermediaries or judges would be elementary school teachers: teachers, children and wit-testing questions are well familiar with one another. I don’t think it’s even necessary to use the exact Turing Test setup to compare AI to children. We could compare with several ages at once: children with writing skills could be given the questions as “homework”, or elementary school teachers could simply judge the AI transcripts directly and compare them with their experience of students in their grade. Any of this would be more useful to me than hearing whether or not my AI “sounds human”.
Posted: Mar 26, 2014
[ # 26 ]
Administrator
Total posts: 2048
Joined: Jun 25, 2010
I never understood the benefit of an AI appearing human:
Question: What is the population of Brazil?
Human: No idea but many millions I would assume.
Robot: The population of Brazil is 198,739,269 people.
I know which one appears more intelligent and would be more useful to me. There are enough humans on the planet; I see no need to make fake ones.
Posted: Mar 26, 2014
[ # 27 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Steve Worswick - Mar 26, 2014: There are enough humans on the planet; I see no need to make fake ones.
Statements like that are great for throwing some oil on the fire of a discussion, but they completely ignore the very serious debate over what we hope to achieve by creating actual AGI.
I, for one, have a pretty clear grasp of the opportunities in our society for the application of humanoid robots with near-human cognitive abilities. It seems to me that others on the board (and even in this discussion) are not thinking far from that either.
To directly counter your statement: in several relevant areas (e.g. medical care) there will pretty soon not be enough humans available to fill the needed capacity. Japan is the actual front-runner in this, and most of the (humanoid) robotic development done there over the last few decades was initiated specifically because of this impending problem.
Posted: Mar 26, 2014
[ # 28 ]
Administrator
Total posts: 2048
Joined: Jun 25, 2010
I fully understand (and welcome) the need for more robots that are physically like humans. However, I see no reason to limit their intelligence by dumbing them down to the level of a human. Are we afraid of creating something more intelligent than ourselves?
Posted: Mar 26, 2014
[ # 29 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Steve Worswick - Mar 26, 2014: However, I see no reason to limit their intelligence by dumbing them down to the level of a human. Are we afraid of creating something more intelligent than ourselves?
Ah, I didn’t get that perspective from your previous postings here. Having said that, so far nobody has insinuated anything about ‘dumbing down’ AI, afaict.
Posted: Mar 26, 2014
[ # 30 ]
Administrator
Total posts: 2048
Joined: Jun 25, 2010
Part of the discussion was about the human age level against which an AI could be compared. I see no reason why we should aim for a human level of intelligence at all. Let’s surpass it!