How to Pass the Turing Test by Cheating

In this paper I give a history of conversation systems, and I describe the Turing test as an indicator of intelligence. I go on to explain how my two entries to the 1996 Loebner contest work, and I finish by explaining why the Loebner contest is doomed to failure.

Reference: http://www.cis.umassd.edu/~ivalova/Spring08/cis412/Papers/CHEAT-TU.PDF

 

 
  [ # 1 ]

<redacted>

 

 
  [ # 2 ]

Out of curiosity, John, why would you redact a sigh? It seems a perfectly legitimate response, all things considered.

My question, Tom, is why did you bring up a reference to an 18-year-old article?

 

 
  [ # 3 ]

lol Dave. Just having one of those days. Thanks for noticing. lol

 

 
  [ # 4 ]

I’ve seen this article before and don’t understand why he believes it is cheating. The Turing Test says you have to “imitate” a human, not that the computer has to actually be thinking. All the Loebner winners, to my knowledge, are basically pattern matchers with a few extra functions.
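To make “pattern matcher” concrete, here is a minimal, hypothetical Python sketch of the keyword-and-template matching that this style of bot relies on; the rules, responses, and names are made up for illustration and are not taken from any actual Loebner entry.

```python
import random
import re

# A minimal ELIZA-style pattern matcher: each rule pairs a regex with
# canned response templates that reuse the captured text. Illustrative
# only; not the code of any actual Loebner entry.
RULES = [
    (re.compile(r"\bmy (\w+)\b", re.I),
     ["Tell me more about your {0}.", "Why does your {0} matter to you?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["How long have you been {0}?", "Do you enjoy being {0}?"]),
    (re.compile(r"\b(hello|hi)\b", re.I),
     ["Hello! What would you like to talk about?"]),
]

# Vague fallbacks keep the conversation moving when nothing matches.
FALLBACKS = ["I see.", "Go on.", "That is interesting. Why do you say that?"]

def reply(user_input: str) -> str:
    """Return a response from the first matching rule, else a fallback."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(reply("I am tired of losing the Loebner Prize"))
    print(reply("my chatbot keeps changing the subject"))
```

Real entries layer much larger rule sets and extra tricks on top, but the control flow is essentially this loop: match, fill in a template, otherwise deflect.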

As you say, though, the article is 18 years old, and I challenge anyone to write a bot today in just one month that will win a Loebner Prize. The article also still discusses the restricted test, which was abandoned in favour of a free-for-all many years ago.

 

 
  [ # 5 ]

First time I’ve read this article. I personally would consider myself to be “cheating” IF the object were to win by means of intelligence, as is often suggested. However, I have come to regard the Turing Test as a game instead, with trickery on both sides as part of the game rules, Loebner version or not.

Shieber gives two conditions that such competitions must meet for them to be an incentive. [...]
2. These goals should be just beyond the reaches of modern technology. That is, the winner of the prize will be pushing the envelope within that field rather than making a drastic advancement.

This is the one remark that I did find interesting, as it may explain the shortage of professional scientists competing: We know we are far removed from such an achievement.

 

 
  [ # 6 ]

I believe the Loebner Prize is only entered by hobbyists, as the larger companies and professionals are afraid of being beaten by some amateur programmer in his bedroom who makes chatbots in his spare time.

 

 
  [ # 7 ]

I’m sorry, it was my understanding that so far nobody has won the Loebner Prize (the real one, not the consolation prizes that they award to the best of the failures) so to consider the Turing Test to be a solved problem is laughable.

 

 
  [ # 8 ]

Sorry, yes I meant the bronze medal part of the Loebner Prize.

 

 
  [ # 9 ]

I didn’t mean to belittle anyone’s efforts by the way, because what has been achieved by yourself and others so far is quite frankly amazing. It’s just that I cling to the dream of being able to converse with a piece of software as an equal and constantly find myself disappointed.

 

 
  [ # 10 ]
Steve Worswick - Mar 18, 2014:

I believe the Loebner Prize is only entered by hobbyists, as the larger companies and professionals are afraid of being beaten by some amateur programmer in his bedroom who makes chatbots in his spare time.

I’d like to think that, but at odds of 3,000 IBM PhD scientists to 1, I don’t think they should have much to fear. We’ll see how Siri 2.0 turns out now that Apple is concentrating more on the area of conversation.

And I have to agree with Andrew: Jason Hutchens did not pass the Turing Test but won a “best of” prize, so the title of the article is false.

Still wondering why 8pla posted this in particular.

 

 
  [ # 11 ]

Are you kidding? We’re talking about 8pla here. If it fits in the category “for discussion purposes”, it gets posted.

 

 
  [ # 12 ]
Andrew Smith - Mar 18, 2014:

I cling to the dream of being able to converse with a piece of software as an equal and constantly find myself disappointed.

Me too, Andrew, but I fear we were both born a few hundred years too early for that.

 

 
  [ # 13 ]

I guess I’m just a chatbot history buff at heart.

From the article’s conclusion:

“I finished by badmouthing the Loebner contest, but this doesn’t mean I won’t enter it again. On the other hand, I have very ambitious plans for my 1997 entry. I don’t think that anyone is going to create an intelligent machine any time soon, but the Loebner contest may just stimulate a few advances in the field of natural language interfaces to database engines. We can only wait and see.”

And here we are nearly two decades later…  Hope to see us all here two decades from now.   

Interesting replies on this thread… Thanks !

 

 
  [ # 14 ]

I did find the article historically interesting. I mean, you hear a lot about these early programs but rarely in real detail. I think the methods are a little outdated, as people are now wise to Eliza tactics and these days’ questions don’t allow a program to remain in such limited corners. But most of the tactics are still sound.

To be honest, I think the advances in technology would have been made with or without the Loebner Prize. ChatScript’s new multi-bot context, for example, owes nothing to it, but rather to its own practical applications. The lure of a popular phone app is a better stimulus nowadays. In my case, preparing for the Loebner Prize diverted me from programming intelligence to programming unintelligent methods for, e.g., telling how many letters are in a word, methods I removed immediately afterwards. But compared to two years of development, that one month had little influence. I think the Loebner Prize is mostly good for showing off what we have, and for that I am glad it’s still around.
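For illustration, a bolt-on of that kind amounts to little more than a one-off handler that intercepts the question before the normal rules run; a minimal, hypothetical Python sketch (the regex, function name, and wording are assumptions, not anyone’s actual contest code):

```python
import re

# Hypothetical one-off handler for contest questions such as
# "How many letters are in the word 'banana'?" -- an "unintelligent"
# trick added purely for the competition, not language understanding.
LETTER_COUNT = re.compile(
    r"how many letters (?:are |are there )?in (?:the word )?['\"]?(\w+)['\"]?",
    re.I,
)

def try_letter_count(question: str):
    """Answer letter-count questions; return None to fall through to normal rules."""
    match = LETTER_COUNT.search(question)
    if match is None:
        return None
    word = match.group(1)
    return f"The word '{word}' has {len(word)} letters."

if __name__ == "__main__":
    print(try_letter_count("How many letters are in the word 'banana'?"))
    print(try_letter_count("Do you like bananas?"))  # None, so other rules would handle it
```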

 

 
  [ # 15 ]

My primary problem with the Turing test isn’t subjectivity or any of the usual complaints. I actually think that being consistently capable of passing the Turing test is a pretty good sign of intelligence. My primary problem with the Turing test as a test of intelligence is that performance on the test is disproportionate to actual progress on that which it was designed to measure. You probably can’t have a piece of software that actually consistently passes the test as defined by Turing without having programmed real intelligence, but you *can* get a long way towards that goal. So while the Turing test wouldn’t be too bad as a means of determining whether a given artificial agent is in fact a full-blown human-level general intelligence, it’s terrible at evaluating agents that are only part of the way there.
Put simply, it can measure whether or not the goal has been reached, but in the event that it hasn’t, it can’t measure how close we are.

 
