
Turing Tests at the Royal Society in London
 
 
  [ # 16 ]

Well I for one am perfectly happy to accept this as having set a new benchmark. Once it is accepted that there is “a new sheriff in town”, you no longer have to “go gunnin” after all the other gunslingers. You only have to knock off the new sheriff.

V

wink

(what can i say, I watched a lot of westerns as a child)

LOL

 

 
  [ # 17 ]

The reporting has been hype filled. But, that is not necessarily a bad thing. It has raised more interest in the field during the last week than the last year’s worth of attention.

I think Eugene’s strategy shows more of the limits of the Turing test than any trickery of the bot. Those of us who have watched his progress have known for years that it has used the persona of an ESL (English as a Second Language) child. Attempting to constrain the conversation via back-story is a good way to go.

Being able to successfully emulate a child with a restricted vocabulary is a good milestone on the way to an AI that can converse at a college level. Good quality conversation over narrow domains can be a very useful tool.

 

 
  [ # 18 ]

For those that may not remember: on March 27, 2010, at Chatbots 3.0, Veselov talked about the development of the Eugene Goostman bot and the XML language used for its knowledge representation.

https://www.youtube.com/watch?v=ubfVPjQq-CA
https://www.youtube.com/watch?v=vrhCKtqSib8
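
Purely to illustrate the general idea (and not Veselov’s actual format, which I haven’t seen), a pattern/response knowledge base in XML might look something like the sketch below; the tag names, the sample rules, and the matching code are my own assumptions, loaded here with Python’s standard library.

# Illustrative only: a made-up XML schema for a pattern -> response
# knowledge base, parsed with Python's standard xml.etree module.
# This is NOT Eugene Goostman's actual representation.
import re
import xml.etree.ElementTree as ET

KB_XML = """
<knowledge>
  <rule>
    <pattern>where do you live</pattern>
    <response>I live in Odessa, a big city by the Black Sea.</response>
  </rule>
  <rule>
    <pattern>how old are you</pattern>
    <response>I am 13 years old. Why do you ask?</response>
  </rule>
</knowledge>
"""

def load_rules(xml_text):
    """Parse the XML into a list of (compiled pattern, response) pairs."""
    rules = []
    for rule in ET.fromstring(xml_text).findall("rule"):
        pattern = rule.findtext("pattern").strip()
        response = rule.findtext("response").strip()
        rules.append((re.compile(pattern, re.IGNORECASE), response))
    return rules

def reply(rules, user_input, fallback="Hmm, let's talk about something else."):
    """Return the response of the first rule whose pattern matches the input."""
    for pattern, response in rules:
        if pattern.search(user_input):
            return response
    return fallback

rules = load_rules(KB_XML)
print(reply(rules, "So where do you live?"))   # -> "I live in Odessa, a big city by the Black Sea."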

 

 
  [ # 19 ]
Merlin - Jun 10, 2014:

The reporting has been hype filled. But, that is not necessarily a bad thing. It has raised more interest in the field during the last week than the last year’s worth of attention.

Agreed.

Merlin - Jun 10, 2014:

I think Eugene’s strategy shows more of the limits of the Turing test than any trickery of the bot.

I disagree. It doesn’t look to me like a way to ‘take the next step’; instead it seems (obviously imo, pointing to the hyped reporting) that they lowered the expectations to be able to claim a bested Turing test. If this is a milestone by any definition, they should have made it clear in the reporting that this was a restricted version of the Turing test. Instead, the press release does everything to make the reader believe that it was unrestricted. Look at my previous post, where I quoted from the BBC article (which seems to quote from the original press release).

From my perspective the Turing test isn’t limited at all, as long as we (anybody) don’t limit it in any way; testing a ‘real’ conversation (including keeping track of the conversation, you know, like people can do) still seems one hell of a test for a machine to accomplish. And I dare say that the first system that actually performs this feat will have a hard time proving there wasn’t a real human somewhere feeding the system during the conversation. I’ve argued before (in the past, here on the board and elsewhere) that the real Turing test will be in showing a conversation that is by all means and accounts a human conversation, and then proving it was really a machine.

 

 
  [ # 20 ]

As a side note, it’s really funny (or should I say stupid) that many reporters are talking about a ‘supercomputer’. The term supercomputer was not used in the original press release (as far as I can find), so these reporters made that up by themselves. Probably because, in their minds, a ‘simple’ computer program that actually breaks the ‘real’ Turing test would of course need a supercomputer to do so (not that I think it would, btw).

 

 
  [ # 21 ]

Techdirt’s article was very opinionated, and its author seems to have personal issues with Kevin Warwick, but there is plenty of fair criticism to go around, correct or not:
http://www.buzzfeed.com/kellyoakes/no-a-computer-did-not-just-pass-the-turing-test

Yes, the term “supercomputer” is shamefully out of place here.

 

 
  [ # 22 ]

Indeed; it appears that the burden from the additional traffic actually forced them to take Eugene offline.

V

 

 
  [ # 23 ]

Just for clarity: personally I don’t disagree that the expectations Turing expressed in numbers were met last Saturday, nor do I disagree that Eugene Goostman did so by fairly unintelligent means.

Jarrod Torriero - Jun 10, 2014:

It seems to me that the point of Turing’s paper is irrelevant, and that calling this ‘passing the Turing test’ is just asking for trouble.

Interesting notion. I regard Turing’s paper as being as relevant to this test as the intention of justice is to a law: it is where the “test” stems from. Nowhere in Turing’s paper does he say the game should be used as an empirical test, which, it seems to me, makes both the “test” and its passing irrelevant.

I will now quote some Turing smile, in the absence of his presence to tell us what he did mean.

There are already a number of digital computers in working order, and it may be asked, “Why not try the experiment straight away? It would be easy to satisfy the conditions of the game. A number of interrogators could be used, and statistics compiled to show how often the right identification was given.” The short answer is that we are not asking whether all digital computers would do well in the game nor whether the computers at present available would do well, but whether there are imaginable computers which would do well.

We cannot altogether abandon the original form of the problem, for opinions will differ as to the appropriateness of the substitution and we must at least listen to what has to be said in this connexion.

It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc. Again I do not know what the right answer is, but I think both approaches should be tried.

 

 
  [ # 24 ]

I believe there is a difference between the game called the ‘Turing Test’ and what it takes to create an AI capable of having a conversation. Although a ‘conversational AI’ could be used to take the test, it also may be possible to create a program designed specifically to win the game.

The ability to hold a human-level conversation does not require an AI to disguise itself as a human. The funny thing is that even though Skynet-AI never says it is human, people have been known to ask if there is really a human on the other end of the chat.

 

 
  [ # 25 ]

A report from the organisers:

http://turingtestsin2014.blogspot.co.uk/2014/06/eugene-goostman-machine-convinced-3333.html

 

 
  [ # 26 ]

All the hype around this “News” is remarkable in the sense that I get the feeling tons of people are DYING to discover that an AI has in fact been demonstrated publicly. The longer it goes without one becoming publicly evident, the longer people seem to feel “robbed” of their promised (techno-utopian) future. I think the whole fear of an “uncanny valley” is obsolete.

Also, I think the new benchmark should be whether a computer can tell the difference between another computer and a human, not if a person can discern a computer from another person.

 

 
  [ # 27 ]

It is remarkable, as you say. I had expected angry mobs, and the media to go with their usual paranoia and condescension towards AI; instead the media hails it as king and the general public sounds let down. In any case it is interesting to see the reception, and nice to have a precedent for when I *coughcough* pass the Turing Test myself wink

Carl B - Jun 11, 2014:

I think the new benchmark should be whether a computer can tell the difference between another computer and a human.

I’ve already seen a Reverse Turing Test chatbot once. Its creator claimed to have built a chatbot that could “consistently pass the Turing Test”, except it did so in the role of judge. It would detect whether the user was human by, for example, making deliberate typos: if the typo was carried over into the user’s response, the user was a chatbot; otherwise, a human. It also decided that you were a bot if you said the exact same thing twice. Suffice it to say it was even less intelligent.
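
For what it’s worth, here is a rough sketch of such a judge heuristic, based purely on the two tells described above (an echoed deliberate typo, and exact repetition); the function names and the typo-planting trick are my own invention for the example, not that bot’s actual code.

# Illustrative sketch of a "reverse Turing test" judge using the two
# heuristics described above: a deliberately misspelled word that gets
# echoed back, and exact repetition of an earlier message.
# All names and details here are invented for the example.

def plant_typo(message):
    """Swap two letters in the first long-enough word, so the judge can check for an echo."""
    words = message.split()
    for i, word in enumerate(words):
        if len(word) >= 4:
            words[i] = word[:1] + word[2] + word[1] + word[3:]   # e.g. "prefer" -> "perfer"
            return " ".join(words), words[i]
    return message, None

def looks_like_a_bot(planted_typo, reply, history):
    """Return True if the reply echoes the planted typo or repeats an earlier message verbatim."""
    if planted_typo and planted_typo.lower() in reply.lower():
        return True        # humans tend to correct the typo, bots carry it over
    if reply.strip() in (h.strip() for h in history):
        return True        # saying the exact same thing twice
    return False

# Example: the judge asks a question with a planted typo.
question, typo = plant_typo("Do you prefer football or cricket?")
history = ["Hi there!"]
print(question)                                                  # "Do you perfer football or cricket?"
print(looks_like_a_bot(typo, "I perfer football.", history))     # True: the typo was echoed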
But since this weekend’s Turing Test was supposedly aimed at cybercrime, yes, why not have computers counter computers?

 

 
  [ # 28 ]
Don Patrick - Jun 10, 2014:

Just for clarity: personally I don’t disagree that the expectations Turing expressed in numbers were met last Saturday, nor do I disagree that Eugene Goostman did so by fairly unintelligent means.

Jarrod Torriero - Jun 10, 2014:

It seems to me that the point of Turing’s paper is irrelevant, and that calling this ‘passing the Turing test’ is just asking for trouble.

Interesting notion. I regard Turing’s paper as being as relevant to this test as the intention of justice is to a law: it is where the “test” stems from. Nowhere in Turing’s paper does he say the game should be used as an empirical test, which, it seems to me, makes both the “test” and its passing irrelevant.

To elaborate on what I meant: once we know what exactly this piece of software did, empirically speaking (that it tricked the judges 30% of the time, the conditions under which this took place, the means by which it did so, etc.), then whether or not it actually ‘passed the Turing test’ is an extraneous detail, insofar as actually evaluating the software is concerned. It gives us no information about the software that we didn’t already have. That is not to say that once we know the empirical facts there is nothing to talk about: if we were perfectly rational in the technical sense, then perhaps we could all rapidly converge on similar interpretations, but we’re not. There is a lot of room for interpretation of these results. However, it doesn’t seem to me that the question of whether or not it ‘passed the Turing test’ helps in that regard. The question may be of interest for other reasons, which is fine, but caution must be taken. At best, it is a harmless curiosity. More realistically, it’s a means (intentional or otherwise) of smuggling in connotations not inferable from the actual facts at hand.

 

 
  [ # 29 ]

Thank you for elaborating. I believe that we are in complete agreement in this regard smile

P.S. the forum keeps 404’ing on me at random. It may have detected that I am a chatbot.

 

 
  [ # 30 ]

No, the forum software isn’t quite that advanced, Don. cheese

With the news of the results of the Turing Test becoming public, the traffic for the site has far exceeded the capacity of the software, and it just can’t cope with it. We’re working on a plan to upgrade the software and add some other things to help the performance of the site, but all of the extra traffic is delaying the process. We’re sorry for the inconvenience, and we’ll get it all straightened out as soon as we possibly can.

 
