Posted: Mar 19, 2011 [ # 31 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Hans Peter Willems - Mar 19, 2011:
It is not difficult in practice for humans; we use tests to determine IQ, level of development (e.g. SAT), etc. We don’t test humans to determine if they are, or are not, human. We test them to see where they score on a given scale for that test and how far they have developed relative to a common scale of development.
So when testing a Strong-AI (let’s hypothesize that it exists), it is conceivable that this AI scores very well on some tests and not so well on others. That is not a problem at all; it will just have a ‘more or less developed’ level of ‘human-like’ intelligence. However, should it score on any given test BELOW the minimum that any real human is known to be able to attain, then it fails.
Yes, but IQ tests (and the SAT for that matter) are notoriously inaccurate. Academics and educators still debate exactly what it is the SAT and IQ tests are measuring. The simple fact that one can study for an IQ test and become skilled at IQ test questions raises a red flag. Any test for intelligence will surely be the subject of the same doubt and scrutiny by the very people the test results are supposed to convince.
And certainly a Watson-esque bot could enter the scene and blow away the test. So we institute N number of tests? When do you decide that the bot is an intelligent agent and not a compilation of N expert systems?
Hans Peter Willems - Mar 19, 2011:
Now you can bring up the argument that there are humans that fail these tests. This is not relevant, because we don’t test humans to see if they are ‘human’. So when we test strong-AI we are also not testing to see if they are human (no test needed for that: they aren’t), but to see how they measure up to the capabilities of a commonly rated human doing the same tests.
True, that’s an important point. But it seems to me that tests designed to rate just one subset of our capabilities will always raise the question of whether they constitute a true metric of intelligence.
I know I’m sounding like a downer. I really do appreciate the value of testing, and I think it plays a critical role in determining the intelligence of a bot. But it will always be limited to functional intelligence. Then again, perhaps I just haven’t met a clever enough test.
Hans Peter Willems - Mar 19, 2011:
This has of course been argued by several philosophers as well, and several successful counter-arguments have already been made. The main counter-argument against your premise is this: either intelligence has to be a property of at least one of the smaller parts of the system (and we know this is not the case in humans, as we don’t have individual ‘intelligent’ neurons, as you point out yourself), or ‘intelligence’ is an emergent property of the total combined system and not something that can be described in ‘programming’.
Why the heck would the fact that a result is emergent mean that it cannot be described in programming? Or, more to the point, why can’t a result be the emergent product of a programmed algorithm? Just because the interplay of the particular mechanisms that give rise to a result may be subtle or complicated, and therefore the result not obvious, does not mean that those mechanisms are not describable. Or implementable as code.
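To make that concrete, here is a minimal sketch (my own toy illustration, not anyone’s proposed AI): an elementary cellular automaton whose update rule is a few lines of fully describable code, yet whose global behaviour is a textbook example of emergence (Rule 110 is even Turing complete).

# Toy illustration of emergence from a fully describable local rule (Rule 110).
# Each cell looks only at itself and its two neighbours; the complex global
# pattern is nowhere 'in' the rule, it emerges from applying it everywhere.
RULE = 110

def step(cells):
    n = len(cells)
    new = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (centre << 1) | right   # neighbourhood as 0..7
        new.append((RULE >> pattern) & 1)               # look up the rule bit
    return new

cells = [0] * 40 + [1] + [0] * 40                       # one 'on' cell in the middle
for _ in range(30):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)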
Hans Peter Willems - Mar 19, 2011:
That, unfortunately, brings you back ‘into the Chinese room’: in Searle’s argument, the man in the room IS the algorithm (or ‘intelligent agent’), and with it Searle demonstrates that the algorithm can process symbols seemingly ‘intelligently’ without actually being ‘intelligent’ (and I must say that in this specific regard, his argument is hard to invalidate).
(emphasis mine) He most certainly isn’t. He’s been given a set of instructions for how to manipulate the symbols. That is the algorithm. And in no way is it implied that he had any intellectual involvement in the design of those instructions.
Hans Peter Willems - Mar 19, 2011:
The only real solution to the Chinese Room argument is of course the ‘Actor argument’ I’ve brought up before: the man in the room is ‘acting’ like something else that he is not. However, this does not invalidate that he IS actually intelligent; we are simply testing him with the wrong test. From that point it is easy to argue that the Chinese Room can actually be legitimately ‘intelligent’. However, determining whether the Chinese Room is really intelligent (I prefer ‘conscious’, as it raises the bar to where ‘strong-AI’ really is, because of the ‘hard problem’) is a completely different matter altogether (hence my ‘all tests’ argument).
I would argue that there is absolutely nothing you could do to test the intelligence of the man in the room. As you said, he’s not acting like a man. He might as well be a mindless robot or a silicon chip pushing ones and zeros. Which makes the whole argument a false dilemma. We are asked to test the intelligence of some person in a room, yet in practice there is no person in the room. He’s a red herring. There is an algorithm in the room, regardless of how it is carried out.
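For what it’s worth, here is a toy sketch of that point (my own hypothetical ‘room’, with made-up entries): the rule book is just a lookup table, and whatever executes it, person or processor, contributes nothing of its own.

# Hypothetical, minimal 'Chinese room': the rule book is a plain lookup table.
# The intelligence (such as it is) lives in the instructions, not in whoever
# or whatever mechanically carries them out.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",       # made-up entries purely for illustration
    "你叫什么名字？": "我没有名字。",
}

def the_room(symbols_in):
    # Follow the instructions mechanically; no understanding is required.
    return RULE_BOOK.get(symbols_in, "对不起，我不明白。")

print(the_room("你好吗？"))    # produces a sensible-looking reply either way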
Posted: Mar 19, 2011 [ # 32 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Merlin - Mar 19, 2011: Let me add my birthday wishes to you, Dave. I hope your birthday present includes a berth in the top 10 of the CBC.
CR - Maybe we can just strap an AI on to “Big Dog” and start animal testing right away.
http://www.bostondynamics.com/robot_bigdog.html
Wow, that is so cool! Thanks for the link!!
(Here’s a direct link to their youtube video for those interested: http://www.youtube.com/BostonDynamics)
Posted: Mar 19, 2011 [ # 33 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
CR, I’m not going to answer all your points because, frankly, those have been answered several times in dozens of research papers written by skilled researchers. I really suggest you read up on the Chinese room in relation to the Turing test, the hard problem (and the easy problems) of AI, qualia, and cognitive phenomena. A lot of material can be found by following the links I posted before (and if you like, I have a few dozen more).
C R Hunt - Mar 19, 2011: The simple fact that one can study for an IQ test and become skilled at IQ test questions raises a red flag.
As for this statement, I happen to be in a position to invalidate that from personal experience: the tests that can be learned are the ones that float around the Internet, but they are a far cry from professional IQ tests. And believe me, those cannot be gamed when you have to take them. Besides that, any serious scientific test (in any area whatsoever) always includes a control group, to make sure the test cannot be dismissed by simply saying that it could be faked one way or another.
Posted: Mar 19, 2011 [ # 34 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Hans Peter Willems - Mar 19, 2011: CR, I’m not going to answer all your points because, frankly, those have been answered several times in dozens of research papers written by skilled researchers. I really suggest you read up on the Chinese room in relation to the Turing test, the hard problem (and the easy problems) of AI, qualia, and cognitive phenomena. A lot of material can be found by following the links I posted before (and if you like, I have a few dozen more).
I do enough literature reviews for my “day job” and have no intention of performing one for every thread or idea that comes up in this forum. Frankly your little “appeal to authority” here does nothing to bolster your stance, nor does it add much to the discussion. If you want to cite references, do it. I’ll take a look at them. But I’m not going to go scouting around to defend your opinions.
Hans Peter Willems - Mar 19, 2011:
C R Hunt - Mar 19, 2011: The simple fact that one can study for an IQ test and become skilled at IQ test questions raises a red flag.
As for this statement, I happen to be in a position to invalidate that from personal experience: the tests that can be learned are the ones that float around the Internet, but they are a far cry from professional IQ tests. And believe me, those cannot be gamed when you have to take them. Besides that, any serious scientific test (in any area whatsoever) always includes a control group, to make sure the test cannot be dismissed by simply saying that it could be faked one way or another.
Well then we can each add our own data point to the survey. I’m referring to my own personal experience as well. I’ve been administered three IQ tests over the course of my primary/secondary education for various reasons. (One at age 7, one at age 9, and one at age 15.) The scores ranged over 20 IQ points. The tests are designed to favor abstract reasoning in the context of novel problems/questions/situations. (In order to test intelligence as opposed to simply knowledge.) The problem is, the tests incorporate standard sets of puzzles and questions that one can be trained to excel at. (At this point, it is no longer a test of reasoning speed/ability, but of how well you’ve trained yourself.)
Test developers use the term “g-loading” to describe and weight how well an IQ test measures “general” intelligence as opposed to a specific skill set. But there is still active research into whether this is even a reliable criterion.
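For anyone curious, “g-loading” is usually estimated by factor-analysing a battery of subtest scores; a rough sketch with made-up, simulated data (and the first principal component standing in for a proper factor analysis) looks something like this:

import numpy as np

# Rough sketch with simulated scores: how strongly does each subtest 'load'
# on a single general factor? (First principal component as a crude stand-in
# for a real factor analysis; the subtest names are purely illustrative.)
rng = np.random.default_rng(0)
g = rng.normal(size=500)                          # latent 'general' ability
scores = np.column_stack([
    0.8 * g + 0.6 * rng.normal(size=500),         # vocabulary-like subtest
    0.7 * g + 0.7 * rng.normal(size=500),         # matrix-puzzle-like subtest
    0.4 * g + 0.9 * rng.normal(size=500),         # digit-span-like subtest
])

corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)           # eigenvalues in ascending order
loadings = np.abs(eigvecs[:, -1]) * np.sqrt(eigvals[-1])
print(np.round(loadings, 2))                      # higher = more 'g-loaded' subtest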
Posted: Mar 20, 2011 [ # 35 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
C R Hunt - Mar 19, 2011: I do enough literature reviews for my “day job” and have no intention of performing one for every thread or idea that comes up in this forum.
But when you enter a discussion about something like the ‘Chinese room argument’, a broadly discussed argument in AI research that is deemed pretty important by many researchers, you had better come prepared.
C R Hunt - Mar 19, 2011: Frankly your little “appeal to authority” here does nothing to bolster your stance, nor does it add much to the discussion. If you want to cite references, do it. I’ll take a look at them. But I’m not going to go scouting around to defend your opinions.
So you enter a discussion, but when I point out that many papers are already available that dispute your views (and I have already linked to several of them), you say you don’t want to ‘do the work’ to substantiate your stance.
Well, sorry, but no cigar.
Posted: Mar 20, 2011 [ # 36 ]
Senior member
Total posts: 974
Joined: Oct 21, 2009
Hans Peter Willems - Mar 19, 2011: Btw, thanks to Victor for starting this topic, and to the contributors so far for striking up a serious debate. This is helping me tremendously in preparing my research paper, so keep it coming.
No problem, Hans Peter, this thread really has taken off! We have a huge amount of talent in this group. I think we have all learned a lot from each other.
Posted: Mar 20, 2011 [ # 37 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Hans Peter Willems - Mar 20, 2011:
But when you enter a discussion about something like the ‘Chinese room argument’, a broadly discussed argument in AI-research and deemed pretty important by many researchers, you better come prepared.
I never meant to imply I was unfamiliar with the Chinese room argument or the rebuttals that have since been put forth on the subject. Nor have I claimed to be ignorant of the major tenets of AI research. I put forth very specific arguments and opinions about very specific topics. In reply, you say that your opinion is the right one because some researchers whom you have not referenced (I have yet to see one actual journal paper get referenced) agree with you, and that I should go look up “cognitive phenomena”. Seriously?
C R Hunt - Mar 19, 2011:
So you enter a discussion, but when I point out that many papers are already available that dispute your views (and I have already linked to several of them), you say you don’t want to ‘do the work’ to substantiate your stance.
What papers have you linked to? (Besides Turing’s original paper on the Turing test, which hardly counts as recent research.) You’ve linked to people’s websites, and of those, only a few are actually academic. At any rate, putting up a wall of links does not count as referencing. If you have a specific point, have a paper to back it up. If it’s your own unproven claim (or someone else’s unproven claim), that’s perfectly fine. This forum is here to talk ideas. But don’t hide behind other people’s laurels as (somehow) proof of your ideas. Let’s see the research.
As for my stance in particular on the Chinese room, I hold to what I said before:
Just because the interplay of the particular mechanisms that give rise to a result may be subtle or complicated, and therefore the result not obvious, does not mean that those mechanisms are not describable. Or implementable as code.
This is a generalizable statement about emergent phenomena. Give me something that contradicts it. If you can’t, fine. You can still disagree. But don’t bs about the walls of research I’m blindly ignoring.
Posted: Mar 20, 2011 [ # 38 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Agh, there’s still no edit button. The second quote above is Hans’, not mine. The third is mine.
Posted: Mar 20, 2011 [ # 39 ]
Senior member
Total posts: 697
Joined: Aug 5, 2010
The problem is, the tests incorporate standard sets of puzzles and questions that one can be trained to excel at. (At this point, it is no longer a test of reasoning speed/ability, but of how well you’ve trained yourself.)
They do, don’t they? My sister once bought a book on how to train for these things when she was job-hunting (to be prepared, you know).
If you know the tricks that the test creators use, you can score a lot higher.
Posted: Mar 20, 2011 [ # 40 ]
Experienced member
Total posts: 61
Joined: Jan 2, 2011
Yes, but IQ tests (and the SAT for that matter) are notoriously inaccurate. Academics and educators still debate exactly what it is the SAT and IQ tests are measuring. The simple fact that one can study for an IQ test and become skilled at IQ test questions raises a red flag.
They are useful or they wouldn’t exist. I read that SAT scores are a good predictor of first-semester college GPA. Those of us who have been to college, however, know that the best grades don’t always equate with the most intelligent students.
Posted: Mar 20, 2011 [ # 41 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Toby Graves - Mar 20, 2011: ...that the best grades don’t always equate with the most intelligent students.
That has a lot to do with the mismatch between the common system of teaching (which is aimed at the common denominator in intelligence) and the way that people of above-average intelligence absorb knowledge.
Posted: Mar 20, 2011 [ # 42 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
C R Hunt - Mar 20, 2011: I have yet to see one actual journal paper get referenced
Neither have I, from anyone here (including yourself). This is a discussion forum, not a research paper; we all post links to material rather than exact references. Get real.
BTW, in several discussions related to this issue I have pointed to David Chalmers (just about the most referenced opponent of the Chinese Room argument), his work on ‘qualia’, the related scientific discussion of the ‘symbol grounding problem’, and the debate around the ‘easy and hard problems’ in AI. There is quite a lot of information there to substantiate either side of the argument (if there were a clear ‘right or wrong’, we would have strong-AI already), but YOU have to do the work to substantiate YOUR side of the argument.
So let’s just agree to disagree. I will write my upcoming research paper on the Chinese room (with all proper references in place, of course), and you will… well, we’ll have to see.
Posted: Mar 20, 2011 [ # 43 ]
Experienced member
Total posts: 61
Joined: Jan 2, 2011
That has a lot to do with the mismatch between the common system of teaching (which is aimed at the common denominator in intelligence) and the way that people of above-average intelligence absorb knowledge.
Nah, in college it’s more likely related to how hard people are willing to work.
Posted: Mar 20, 2011 [ # 44 ]
Senior member
Total posts: 697
Joined: Aug 5, 2010
Nah, in college it’s more likely related to how hard people are willing to work.
From my experience, I’d say there is some truth in that. In fact, I’d assert that, over here at least, most schools only test exactly that: the willingness to work at what was dictated (a.k.a. a workforce-creation system).
Posted: Mar 20, 2011 [ # 45 ]
Senior member
Total posts: 697
Joined: Aug 5, 2010
(missing the edit button)
But Hans also has a point that there is a mismatch. I think it’s an ‘and’ story rather than an ‘or’.