Posted: Mar 11, 2011 [ # 16 ]
Senior member | Total posts: 974 | Joined: Oct 21, 2009
Roger… yes, interesting analogy… and like Schrödinger’s cat, once you open the box the realities collapse. Just by OBSERVING *how* the machine works, we say… ahhhh!! I see the source code of how it works… THUS it is merely “information processing”. When it is completely unknown, it is “magic”.
Perhaps when we FULLY understand the human mind, we will just say: hum, it is just a bunch of electro-chemical exchanges between synapses. We see exactly how this is done… no magic, no unknowns… thus it is now simply information processing.
Thus, just as the act of observing affects any experiment in quantum physics, so too does simply KNOWING how an artificial or natural brain works affect whether we regard it as thinking or as “simply information processing”.
Posted: Mar 12, 2011 [ # 17 ]
Senior member | Total posts: 328 | Joined: Jul 11, 2009
Yes, spot on, I agree with you completely. It’s the act of observation that finally lets us define something. Until then, yes, I think you can happily call it magic, in order to get the point across.
You AI programmers are stuck with this paradox, I think; you guys are always going to know it is a piece of software, whereas an end user may not know that, BUT the effect on them could be pretty remarkable.
Posted: Mar 12, 2011 [ # 18 ]
Senior member | Total posts: 494 | Joined: Jan 27, 2011
Roger Davie - Mar 12, 2011: you guys are always going to know it is a piece of software. Whereas an end user may not know that BUT the effect on them could be pretty remarkable.
Exactly, like a good actor can fool someone into believing he is something that he’s actually not. And while we have tests to examine the ‘development’ of a human in specific areas, many still discard the option to simply test AI the way we test humans. This is understandable, because this is what’s called the ‘hard problem’, and for a reason. It is much simpler to devise a test like Turing’s, so we have a goal that might be attainable within a few years (although we all know even that didn’t happen).
So I argue that the Turing test is a simplification of reality, in such a way that you can only test a simplification of AI, on the same scale that the test is a simplification. Hence, the Turing test, Schrödinger’s cat, or whatever other similar ‘test’, cannot be used to determine ‘strong AI’. Note that on the same premise, the Chinese Room cannot disprove ‘strong AI’ either, because that is a simplification of reality in the exact same sense.
Posted: Mar 12, 2011 [ # 19 ]
Senior member | Total posts: 328 | Joined: Jul 11, 2009
Interesting, Hans. I suppose we have to start somewhere. I don’t see the cat as being a test myself; it’s just a way of understanding what is going on, or might be going on, in a system. Sure, if you were a scientist and it were possible to look in the box, then of course you would.
So I guess from a scientific viewpoint we do need to know what is going on in the box if we want to prove strong AI. But is that really necessary when trying to make someone believe they are talking to an intelligence of some kind? Not necessarily… most people don’t have a clue what is going on in their PC, but most people would probably say it is very clever, artificial or otherwise…
I guess I’m getting into anthropomorphism somewhere here too; I’d better stop!
Great thread!
Oh yes, and I agree that the Turing Test is very simple. Is that what makes it useful?
Posted: Mar 12, 2011 [ # 20 ]
Senior member | Total posts: 974 | Joined: Oct 21, 2009
Roger, exactly!
I’ve never met you. I certainly haven’t taken an electron microscope to your brain to really see how it is working.
But from your posts you appear to be intelligent.
So I am assuming you are intelligent. I also assume other people are intelligent. I don’t need to look inside.
We have a bias against machines only because we understand them, we built them… “thus they can’t think” is the age-old fallacy.
What will happen is that when the first machine passes a Turing test, 99.99999% of the world will believe it is intelligent, and it will be. (I don’t care if it is made of silicon and a very ‘souped-up’ ALICE clone, or running on ENIAC, or EDVAC, or Charles Babbage’s mechanical ‘Analytical Engine’ powered by steam, or a quantum computer, or whether it is conscious.) Only a handful of philosophers will continue to quibble about it, and that won’t mean a thing.
Hum… 4 votes, but only 3 of us posted in here.
Posted: Mar 12, 2011 [ # 21 ]
Senior member | Total posts: 697 | Joined: Aug 5, 2010
Roger Davie - Mar 12, 2011: You AI programmers are stuck with this paradox I think, you guys are always going to know it is a piece of software. Whereas an end user may not know that BUT the effect on them could be pretty remarkable.
Very true.
I guess, blessed is the fool who doesn’t know better.
Posted: Mar 12, 2011 [ # 22 ]
Senior member | Total posts: 974 | Joined: Oct 21, 2009
Ahh!! Jan… you must be the other ‘results’ person?
Posted: Mar 12, 2011 [ # 23 ]
Senior member | Total posts: 494 | Joined: Jan 27, 2011
Roger Davie - Mar 12, 2011: Sure if you are a scientist and it was possible to look in the box then of course you would.
The point I was actually trying to make is that we don’t need to look ‘inside the box’. We don’t crack open the heads of children at school to look inside, we test them! So why not use those same tests to determine the level of AI?
Roger Davie - Mar 12, 2011: But is that really necessary when trying to make someone believe they are talking to an intelligence of some kind ?
Well, there you said it.
So the real question seems to be: do you want to fool people into believing something, or do you want to create the real thing?
Posted: Mar 12, 2011 [ # 24 ]
Senior member | Total posts: 328 | Joined: Jul 11, 2009
Hans Peter Willems - Mar 12, 2011: The point I was actually trying to make is that we don’t need to look ‘inside the box’. We don’t crack open the heads of children at school to look inside, we test them! So why not use those same tests to determine the level of AI?
I think we could use those tests, no reason why not in my mind.
As we don’t know the entire inner workings of the human mind yet, testing children (without cracking their heads open) is very much like testing a black box.
Hans Peter Willems - Mar 12, 2011: So the real question seems to be; do you want to fool people into believing something, or do you want the create the real thing?
I suppose it depends on what satisfies you. Is it the cause or is it the effect…
How about this: if we’re not looking in the box, and we don’t know if it’s human or machine, but we are happy with the results… how do you know if you have been fooled?
Posted: Mar 12, 2011 [ # 25 ]
Senior member | Total posts: 974 | Joined: Oct 21, 2009
Hans Peter Willems - Mar 12, 2011:
The point I was actually trying to make is that we don’t need to look ‘inside the box’. We don’t crack open the heads of children at school to look inside, we test them! So why not use those same tests to determine the level of AI?
Yeeeeeeeees… this is why I am results-oriented… who cares what you call it, or what it uses, or the philosophy behind how it works… it passes whatever test you throw at it.
Posted: Mar 12, 2011 [ # 26 ]
Senior member | Total posts: 974 | Joined: Oct 21, 2009
Interesting… ‘results’ is now in the lead, Hans.
Posted: Mar 12, 2011 [ # 27 ]
Senior member | Total posts: 697 | Joined: Aug 5, 2010
Victor Shulist - Mar 12, 2011: ahh!! Jan .. you must be the other results person ???
Actually, I wasn’t. It’s just that I like magic, and somehow, understanding the trick makes the magic go away. It’s a bit like computer games. I used to love them, until I began to think about how they work. Now, I can’t look at a game (or any software for that matter) any more without thinking about how they made it.
Posted: Mar 12, 2011 [ # 28 ]
Senior member | Total posts: 974 | Joined: Oct 21, 2009
That is a fascinating concept, isn’t it? I agree.
I’ve heard many people state that they had more fun building a bot than talking to it, because they knew what it was going to say. I’m hoping to have so many levels of indirection that the results will be sufficiently unpredictable. I believe that can be achieved. I wrote a very simple chess program way back, called JoeChess… very simple… I am only an average chess player, but I did it just for fun. The funny thing is, it makes some funny moves sometimes… that is what I want to achieve with a bot also. That is the “just for fun” use case.
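The “levels of indirection” idea can be illustrated with a minimal sketch. This is purely hypothetical (none of the names below — `SYNONYMS`, `TEMPLATES`, `pick_mood`, `reply` — come from Victor’s actual bot): each layer makes its own random choice, so the combined output is hard to predict even for the person who wrote the tables.

```python
import random

# Layered word and sentence tables; each layer below chooses from
# one of these at random.
SYNONYMS = {
    "happy": ["glad", "pleased", "delighted"],
    "curious": ["intrigued", "fascinated"],
}

TEMPLATES = [
    "I am {mood} to hear that.",
    "That makes me {mood}.",
    "Well, I must say I am {mood}.",
]

def pick_mood():
    # Layer 1: pick a base mood at random, then a random synonym for it.
    base = random.choice(list(SYNONYMS))
    return random.choice(SYNONYMS[base])

def reply():
    # Layer 2: pick a sentence template and fill in the layer-1 choice.
    return random.choice(TEMPLATES).format(mood=pick_mood())

print(reply())
```

With only two small tables this already yields 15 distinct replies; every extra layer multiplies the possibilities, which is roughly why the builder stops being able to predict what the bot will say.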
Posted: Mar 12, 2011 [ # 29 ]
Senior member | Total posts: 494 | Joined: Jan 27, 2011
Victor Shulist - Mar 12, 2011: who cares what you call it, or what it uses, or the philosophy behind how it works…. it passes whatever test you throw at it.
I think that is where you go wrong; it means that your AI has to be able to pass EVERY test that a human can pass!
Posted: Mar 12, 2011 [ # 30 ]
Senior member | Total posts: 494 | Joined: Jan 27, 2011
Victor Shulist - Mar 12, 2011: ‘results’ is now in the lead Hans
Somehow you see this as a contest between the two of us. For me it isn’t. I’m pretty much used to being in the minority when it comes to heavily involved research on just about any topic.
So I already anticipated that my kind would be in the minority here, simply because the few academics who frequent this board don’t mingle in our discussions much (maybe I should take a hint from that).