C R Hunt - Mar 19, 2011:
I agree that the only practical way of testing an AI designed to mimic a living system is to have it “pass the same tests”. That is, to have it respond the same way under the same conditions (or at least, as close to them as you can get). But given the variation between members of the same species, this is difficult in practice.
It is not difficult in practice for humans; we use tests to determine IQ, level of development (e.g. the SAT), and so on. We don't test humans to determine whether they are, or are not, human. We test them to see where they score on a given scale for that test, and how far they have developed relative to a common scale of development.
So when testing a strong-AI (let's hypothesize that it exists), it is conceivable that this AI scores very well on some tests and not so well on others. That is not a problem at all; it will just have a 'more or less developed' level of 'human-like' intelligence. However, should it score on any given test BELOW the minimum that any real human is known to be able to attain, then it fails.
Now you could bring up the argument that there are humans who fail these tests. That is not relevant, because we don't test humans to see if they are 'human'. So when we test strong-AI, we are likewise not testing to see if it is human (no test needed for that: it isn't), but to see how it measures up against the capabilities of a commonly rated human taking the same tests.
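To make that criterion concrete, here is a minimal sketch in Python of how such a test battery could be rated. The test names, floor values, and scores are entirely hypothetical; the point is only the rule itself: never classify the agent as human or not, just fail it if it drops below the known human floor on any single test.

```python
# Toy illustration of the scoring rule above. All test names, floor
# values, and agent scores are hypothetical, invented for this sketch.

# Lowest score that a real human is known to be able to attain on each test.
HUMAN_FLOOR = {"IQ": 40, "SAT_verbal": 200, "theory_of_mind": 1}

def rate_agent(scores):
    """Rate an agent against the human floor on every test in the battery.

    The agent is never classified as 'human' or 'not human'; it fails
    outright if it drops below the floor on ANY single test, otherwise
    it is simply placed somewhere on the common developmental scale.
    """
    for test, floor in HUMAN_FLOOR.items():
        if scores.get(test, float("-inf")) < floor:
            return "fails: below the human floor on " + test
    return "passes: a more or less developed human-like intelligence"

print(rate_agent({"IQ": 120, "SAT_verbal": 480, "theory_of_mind": 2}))
print(rate_agent({"IQ": 120, "SAT_verbal": 480}))  # missing a test counts as failing it
```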
C R Hunt - Mar 19, 2011:
At any rate, we’ve strayed far afield of the topic at hand.
Surely not. The Chinese Room argument is at the heart of the discussion of how to test for strong-AI, as it rests on the premise that because you can invalidate such a test, you cannot do such a test. From that point on, Searle argues that because you cannot do such a test, you cannot 'have' strong-AI (this is of course an inverted argument, and therefore invalid by default).
C R Hunt - Mar 19, 2011:
When I first heard the problem of the Chinese room, the following resolution occurred to me immediately: it is not the man in the room that is the intelligence. The intelligence is all in the algorithm he carries out on the symbols. Just as our neurons and their varied connections and neurotransmitters need not themselves possess intelligence in order for the whole system—our brain—to have it.
This has of course been argued by several philosophers as well, and several successful counter-arguments have already been made. The main counter-argument against your premise is this: either intelligence has to be a property of at least one of the smaller parts of the system, which we know is not the case in humans as we don't have individual 'intelligent' neurons (as you point out yourself), or 'intelligence' is an emergent property of the total combined system and not something that can be described in 'programming'.
C R Hunt - Mar 19, 2011:
So the symbol processing is done via an intelligent agent—the algorithm that governs it!
That, unfortunately, brings you back 'into the Chinese room': in Searle's argument, the man in the room IS the algorithm (or 'intelligent agent'), and his argument demonstrates that the algorithm can process symbols in a seemingly 'intelligent' way without actually being 'intelligent' (and I must say that in this specific regard, his argument is hard to invalidate).
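To see why this part of the argument is so hard to invalidate, consider a minimal sketch of the room as pure symbol manipulation. The rulebook entries below are hypothetical placeholders (a real rulebook would be astronomically larger, but the principle is the same); nothing in the procedure requires knowing what the symbols mean.

```python
# The Chinese Room reduced to its bare mechanism: match incoming
# squiggles against a rulebook and hand back the prescribed squiggles.
# The rulebook entries here are hypothetical placeholders.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",    # "What's your name?" -> "I have no name."
}

def man_in_the_room(symbols):
    """Follow the rulebook. No step here involves understanding Chinese."""
    return RULEBOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

print(man_in_the_room("你好吗？"))  # fluent-looking output, zero understanding
```

From the outside the replies look competent; on the inside there is only table lookup, which is exactly Searle's point about the man in the room.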
The only real solution to the Chinese Room argument is of course the 'Actor argument' I've brought up before: the man in the room is 'acting' like something that he is not. This does not invalidate that he IS actually intelligent; we are simply testing him with the wrong test. From that point it is easy to argue that the Chinese Room can actually be legitimately 'intelligent'. However, determining whether the Chinese Room is really intelligent (I prefer 'conscious', as it raises the bar to where 'strong-AI' really is, because of the 'hard problem') is a completely different matter altogether (hence my 'all tests' argument).
Arguing for or against the Chinese Room argument is somewhat like navigating a minefield, but fun nonetheless.