Member
Total posts: 9
Joined: Sep 24, 2014
I’m working on an AI project, and one of its fundamental aspects directly addresses issues around Turing tests (and similar). Take this statement, which I suggest sums up the current state of play in chat-bot style competitions*:
“Humans are smart, and we’re having limited success fooling them with ‘dumb’ programs.”
I think we’re going about it wrong. My approach is: if the programs are dumb, let’s fool them instead - then we’re no longer up against the human judge.
Specifically, I want to create an environment for an AI such that ‘it’ thinks it is human, living out a normal life. Fooling the AI is much simpler, and then we only have to present the ‘truth’ (from the AI’s perspective) to the human judge.
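Very roughly - and purely as an illustrative sketch, with every name hypothetical - the loop I’m imagining looks something like this:

# Illustrative sketch only (all names hypothetical): an agent that answers
# a judge's questions from memories formed inside a simulated "life",
# rather than from a script.

class SimulatedWorld:
    """Feeds the agent everyday events, as if it were living a normal life."""
    def events(self):
        yield "You woke up at 7am and burnt the toast."
        yield "Your neighbour complained about the fence again."

class Agent:
    def __init__(self):
        self.memories = []

    def live(self, world):
        # From the agent's perspective these events simply *happened* to it.
        for event in world.events():
            self.memories.append(event)

    def answer(self, question):
        # Placeholder retrieval: answer the judge from lived experience.
        words = [w.strip("?.,!") for w in question.lower().split()]
        for memory in self.memories:
            if any(w and w in memory.lower() for w in words):
                return memory
        return "Nothing springs to mind."

agent = Agent()
agent.live(SimulatedWorld())
print(agent.answer("What happened with your toast?"))  # the burnt-toast memory

The point being that the agent’s answers are drawn from experiences it actually ‘had’, so there is no script for the judge to pick holes in.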
Hopefully that’s something to think about - and I look forward to people’s views!
* Goostman is a separate discussion as I’m for ‘free-thinkers’ over scripting.
Posted: Sep 25, 2014 [ # 1 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
But aren’t Turing Tests already that? From the opposite perspective, Turing Test judges are already trying to fool the AI, or “trip it up” to use a colloquial term, while the AIs are programmed to be human to the best of their knowledge.
Posted: Sep 25, 2014 [ # 2 ]
Member
Total posts: 9
Joined: Sep 24, 2014
Hi Don,
I disagree that scripted bots are human to their knowledge (if they have any).
I think the key difference is where the ‘lie’ is.
In a scripted bot, the script/bot is the lie, and the judge is trying to pick holes in it as you say.
In what I was describing, the lie is the simulated environment; the AI is a real human *in that universe*.
Heavy stuff, no?
Posted: Sep 25, 2014 [ # 3 ]
Senior member
Total posts: 179
Joined: Jul 10, 2009
“You can only begin to de-robotize yourself to the extent that you know how totally you’re automated. The more you understand your robothood, the freer you are from it. I sometimes ask people, “What percentage of your behavior is robot?” The average hip, sophisticated person will say, “Oh, 50%.” Total robots in the group will immediately say, “None of my behavior is robotized.” My own answer is that I’m 99.999999% robot. But the .000001% non-robot is the source of self-actualization, the inner-soul-gyroscope of self-control and responsibility.”—Timothy Leary
Posted: Sep 25, 2014 [ # 4 ]
Member
Total posts: 9
Joined: Sep 24, 2014
Nice, thanks Robby.
In a similar vein, I subscribe to the idea that the ‘real world’ is actually someone else’s simulation - but that’s a discussion for another day.
Posted: Sep 26, 2014 [ # 5 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
Harry Holden - Sep 25, 2014: I disagree that scripted bots are human to their knowledge (if they have any).
I thought you said that you weren’t talking about scripted bots, so neither was I. My point is that any entity can enter a Turing Test, including an AI that has been extensively programmed to believe it is human. The judges often introduce or probe inconsistencies to undermine its story, or in your case, its “belief”. The AIs in Turing Tests are equally unaware that the story they tell is a lie.
For example: I entered an AI into the Loebner Prize qualifying test. Whenever it had to access knowledge about itself, the search was rerouted to its knowledge about the average human. Therefore, literally to the best of its knowledge, it was a living breathing human who had ten fingers and 2.4 children.
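In code terms the rerouting was roughly this (a simplified sketch, not the actual implementation):

# Simplified sketch of the rerouting idea, not the actual bot's code:
# queries about "self" fall through to a profile of the average human.

AVERAGE_HUMAN = {
    "fingers": "ten",
    "children": "2.4",
    "species": "human",
}

SELF_KNOWLEDGE = {}  # deliberately empty: no "chatbot" self-model to leak

def about_self(attribute):
    # Any query about the bot itself falls through to the average-human
    # profile, so to the best of its knowledge it is a human.
    return SELF_KNOWLEDGE.get(attribute) or AVERAGE_HUMAN.get(attribute, "no idea")

print(about_self("fingers"))   # -> ten
print(about_self("children"))  # -> 2.4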
Your suggestion does not seem different in principle, but more complex with a virtual environment for the AI to interact with. Under the assumption that AI is dumb anyway, how complex an environment does one need to support the AI’s belief that it is a human? And what would this test prove?
Posted: Sep 26, 2014 [ # 6 ]
Member
Total posts: 9
Joined: Sep 24, 2014
Don,
Kudos for getting a Loebner entry together; I’m all talk at the moment and a bit of a way off entering anything.
In principle there’s no difference from what any of us are doing: at some level the information for the answer comes from somewhere, and on that basis I have nothing new to say. I just thought that the level of complexity (‘complexity’ may be the wrong word) I’m planning could be seen as an approach people would have some thoughts on.
As to the point of the test, I don’t think that changes.
I’m not sure what to take away from our conversations - I suspect you’re taking nothing away! :/
Posted: Sep 26, 2014 [ # 7 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
I’m just mostly reminded of The Truman Show. It is not clear to me whether you are suggesting a new test, or an experiment like The Truman Show, or that you wish to approach Turing Tests with an AI entity whose life story comes in the form of an elaborate virtual environment.
Posted: Sep 26, 2014 [ # 8 ]
Member
Total posts: 9
Joined: Sep 24, 2014
Definitely number 3. I know it’s overkill, but it’ll give me an excuse to play with lots of algorithms and layers of architecture.
Interesting that you mention The Truman Show though, as I’ve used the same analogy when describing the project to others.
I think my best plan now is to get my head down and implement, to see if the approach is (a) feasible and (b) worthwhile.
Also, I do have a takeaway from our exchange: I need to get something running with a concrete purpose.
Thanks.
Posted: Sep 27, 2014 [ # 9 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
Ah, then most of my comments were off the mark, sorry.
I think your approach is interesting, but at the same time I find it difficult to imagine what level of detail one would have to go to before the environment becomes an advantage in a Turing Test. Going after the Turing Test with great effort can lead to great disappointment.
Instead (or in addition), I would imagine that there are far more interesting tests and experiments one could run with an A.I. living in a virtual environment. For instance, someone recently tried out an A.I. with the three laws of robotics in a virtual environment, and wrote a paper on it: http://alanwinfield.blogspot.co.uk/2014/08/on-internal-models-part-2-ethical-robot.html
Posted: Sep 27, 2014 [ # 10 ]
Member
Total posts: 9
Joined: Sep 24, 2014
I take your point about effort / disappointment, but fear not:
I’m expecting to spin out many side-projects and resources from my efforts. The ostensible target of the Turing test (and similar) is not my only focus.
Imagine all the useful things an AI could do if it were intertwined with the real world. I’ve also speculated that if some layer of the ‘environment awareness’ is at a high enough level, and locomotion control is abstract enough, a real-world avatar (or at least camera/eyes) would be possible.
Anyhow - this is all theoretical until I get down to it!
PS: I read the paper - interesting stuff - but somehow I imagined something more dynamic from your description; in actuality the experiment was quite abstract.
Posted: Sep 28, 2014 [ # 11 ]
Senior member
Total posts: 179
Joined: Jul 10, 2009
SHRDLU was a program that interacted with a world composed of geometric objects. It could be told to move them, put one on top of another, etc. and could be queried about its “world.” For decades I’ve heard people say that if an AI were somehow connected to a world in which it could interact, describe, help visualize, then true AI would be born.
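For anyone who hasn’t seen it, the flavour was roughly this (a toy reconstruction in Python; Winograd’s actual system was Lisp/Micro-Planner and far richer):

# Toy blocks-world in the spirit of SHRDLU, nothing like the real thing.
world = {}  # maps each block to whatever it is sitting on

def put_on(block, support):
    world[block] = support
    print(f"OK, I PUT THE {block.upper()} ON THE {support.upper()}.")

def where_is(block):
    support = world.get(block)
    if support is None:
        return f"THE {block.upper()} IS ON THE TABLE."
    return f"THE {block.upper()} IS ON THE {support.upper()}."

put_on("red pyramid", "green cube")
print(where_is("red pyramid"))  # -> THE RED PYRAMID IS ON THE GREEN CUBE.
print(where_is("green cube"))   # -> THE GREEN CUBE IS ON THE TABLE.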
Most of those efforts became bogged down at some point or another. SHRDLU itself was a spaghetti code nightmare that could not be generalized or expanded to do anything useful.
I’m usually much more interested in these kinds of projects if some kind of code or API has been produced, and can be reproduced. ConceptNet, WordNet, Link Grammar, and other projects come to mind, as they have some verifiable proof of existence, have produced something that shows progress, and in rare cases can be leveraged by other people to make something interesting or useful.
Robby.
Posted: Sep 28, 2014 [ # 12 ]
Guru
Total posts: 1009
Joined: Jun 13, 2013
I always imagined SHRDLU to be controlling a robot arm that played with real blocks. Spatial reasoning like that could be useful to clear teacups from the table or get my good suit out of the closet. Though I suppose that is now being done through vision systems, a “mental” 3D model of the world and its physics would be more accurate at the task, especially with (partly) hidden objects.
The most recent attempt at useful application I’ve seen was a humanoid-ish robot programmed to load a dishwasher, or more accurately, pick up and dump objects. Then again, that’s what robot arms have been doing for decades.
Posted: Sep 28, 2014 [ # 13 ]
Member
Total posts: 9
Joined: Sep 24, 2014
I remember reading about SHRDLU: “Why did you clear off the green cube?”, “TO MAKE ROOM FOR THE TRIANGLE”, “Why did you move the triangle?”, “BECAUSE YOU TOLD ME TO”.
Great stuff at the time; its answers showed that both its sub-goals and the operator’s goals were represented in some sort of planning tree.
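That “why” behaviour falls out naturally if every action keeps a pointer to the goal that spawned it; a throwaway sketch of the idea (illustrative only, not SHRDLU’s actual mechanism):

# Illustrative goal tree: each action records the goal that spawned it,
# so "why?" just walks one step up the tree.

class Goal:
    """A node in the planning tree; the root is the operator's command."""
    def __init__(self, description, parent=None):
        self.description = description
        self.parent = parent

    def why(self):
        # "Why?" is answered by naming the parent goal, or the operator.
        if self.parent is None:
            return "BECAUSE YOU TOLD ME TO."
        return f"TO {self.parent.description.upper()}."

move_triangle = Goal("move the triangle")                      # operator's goal
make_room = Goal("make room for the triangle", move_triangle)  # sub-goal
clear_cube = Goal("clear off the green cube", make_room)       # sub-sub-goal

print(clear_cube.why())     # -> TO MAKE ROOM FOR THE TRIANGLE.
print(move_triangle.why())  # -> BECAUSE YOU TOLD ME TO.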
Anyhow, I think *I’ve* done enough hand-waving for now, I’m going to go and get some simulated hands waving.