Posted: Aug 10, 2014 [ # 46 ]
Senior member | Total posts: 179 | Joined: Jul 10, 2009
Don Patrick - Aug 10, 2014: Probably because I’m slightly out of place, there isn’t any civilised ai.org forum
Also, I think it’s fair criticism among chatbot creators to point out when a chatbot is using very cheap techniques of 50 years past, in comparison to current chatbot standards.
Don, I wasn’t singling you out, my post just fell in sequence with yours by chance.
25 years ago, comp.ai was about all there was.

Posted: Aug 10, 2014 [ # 47 ]
Senior member | Total posts: 179 | Joined: Jul 10, 2009
Jarrod Torriero - Aug 10, 2014: I don’t think appreciating chatbots is mutually exclusive with recognizing and lamenting their current limitations.
I see your point. Still, consider the irony.

Posted: Aug 10, 2014 [ # 48 ]
Guru | Total posts: 1009 | Joined: Jun 13, 2013
I’ll give you that there is some irony and cheekiness involved in speaking negatively of chatbots on a forum populated by botmasters. On the other hand, chatbots are the topic of this forum, and art forums are no less critical of the artists who inhabit them. In Eugene’s case though, I think most of the complaints stem from its initial overrating more than anything else, which was not the fault of its creators.

Posted: Aug 10, 2014 [ # 49 ]
Senior member | Total posts: 179 | Joined: Jul 10, 2009
Don Patrick - Aug 10, 2014: In Eugene’s case though, I think most of the complaints are due to its initial overrating more than anything else, which was not the fault of its creators.
Yes, you are quite right. Dr. Warwick made the outlandish press release claiming a “supercomputer” had passed the Turing test. It took us all by surprise, even Vladimir. The five of us were hired to do a job, to *demonstrate* a Turing test, not knowing Warwick would pull a stunt like that. I believe Rollo Carpenter had every right to be outraged, given his program’s performance in other events prior to this.
One thing though. I don’t believe Eugene’s character, being 13 years old and ESL, was cheating. Elbot professes to be a robot. Is that cheating? ALICE always told people it was ALICE. In 2003 a dragon won the Loebner Prize contest. These are all programs that were accepted into these contests knowingly. If cheating means breaking the rules, I’m interested to know which rules were broken.

Posted: Aug 11, 2014 [ # 50 ]
Guru | Total posts: 1009 | Joined: Jun 13, 2013
That just depends on your interpretation of Alan Turing’s thought experiment, and whether or not one feels that the game is supposed to be won by intelligent means, to support the premise that machines can think, or just to be won. As Turing himself wrote: “It might be urged that when playing the ‘imitation game’ the best strategy for the machine may possibly be something other than imitation of the behaviour of a man. This may be, but I think it is unlikely that there is any great effect of this kind. In any case there is no intention to investigate here the theory of the game, and it will be assumed that the best strategy is to try to provide answers that would naturally be given by a man.”

Posted: Aug 11, 2014 [ # 51 ]
Senior member | Total posts: 179 | Joined: Jul 10, 2009
I fail to see how portraying a child rises to the potentially libelous accusation of cheating.

Posted: Aug 12, 2014 [ # 52 ]
Experienced member | Total posts: 84 | Joined: Aug 10, 2013
I suggest that you are interpreting the word ‘cheating’ quite differently to the way in which it was intended to be interpreted, at least if you’re referring to the use of the word in the thread about the Winograd Schema Challenge*. In context, it appeared to simply refer to the tendency of current chatbots to try to give humanlike responses using very narrow, inflexible reasoning techniques that no one would classify as anything remotely resembling the necessary reasoning processes of a true general thinking machine. Reading Alan Turing’s original paper in which he presented the Turing test, this sort of narrow imitation was clearly not his intention. I do not believe that it was meant as an ‘accusation’ of anything that most authors of the relevant chatbots would not freely admit, nor do I believe that there was any intention of claiming that the rules of any of the contests in question had been violated.
* Though obviously I cannot speak for Andrew Smith. This post should simply be taken as my own interpretation of his comments.

Posted: Aug 12, 2014 [ # 53 ]
Senior member | Total posts: 179 | Joined: Jul 10, 2009
Thanks Jarrod. My curiosity is satisfied.

Posted: Sep 29, 2014 [ # 54 ]
Guru | Total posts: 1009 | Joined: Jun 13, 2013
This amusing find recounts the experiences of two of the human confederates, who apparently thought that they were chatting with computers instead of judges. It’s a bit of old news, and I don’t mean to undermine the results (Eugene did score 29% before, so it remains perfectly plausible), but those of you who harbour frustrations about judges might enjoy reading:
http://turingtestsin2014.blogspot.co.uk/2014/09/hidden-human-experience-in-turing-test.html

Posted: Jun 19, 2015 [ # 55 ]
Guru | Total posts: 1009 | Joined: Jun 13, 2013
One year later, the paper is published. From the preview it looks like it focuses on the failings of the 10 judges rather than the chatbots.
Looking back, it is interesting to see how regard for this event has changed. Today the general uneducated opinion is that “the” Turing Test was not passed; only a true Scotsman would pass the “real” Turing Test.

Posted: May 3, 2017 [ # 56 ]
Guru | Total posts: 1009 | Joined: Jun 13, 2013
I discovered another paper, quite easy to read, that includes transcripts from the 2014 Turing Test where computers were mistaken for humans or vice versa. It especially examines preconceptions that judges had. It also shows that indeed the judges could only fire off 5 or 6 questions in the time allotted.

Posted: May 3, 2017 [ # 57 ]
Senior member | Total posts: 308 | Joined: Mar 31, 2012
Some interesting reading in this thread but…
If the real premise of these “tests” is to determine whether the “Person” <chatbot> being conversed with is a real human or a chatbot, why would it even be allowable for one to enter a non-human entity (dragon, robot, alien, zombie student, etc.)?
In other words, a chatbot should pretend to be a real person, regardless of gender, age, nationality, as long as it is able to comply with the rules of said contest and preferably speak / type English.
Likewise, if a chatbot is pretending to be human, it should not be required to answer questions that would require a human to use a calculator or other reference item(s) that one might expect to find in any of the current Digital Assistants (Google Now, Alexa, Siri, etc.).
The contest, in my mind, should be a conversational exchange between two entities, a human judge and another (possible) human or computer program.
Obviously, the use of logical inferences, grammatical jousting and puns would be accepted and expected.
While the pretence of an ESL individual / program might seem novel, the judges should not be swayed by such ruses and other tactical ploys by chatbot programmers (though they will always try to come up with cute, plausible responses for their bots).
For the moment, the chatbot pretenders must remain local but if or when the day ever comes that they are allowed to connect or consult with the outside world (Internet), then the whole game will change as we know it.
This is just my take / opinion. Yours might vary.

Posted: May 3, 2017 [ # 58 ]
Administrator | Total posts: 3111 | Joined: Jun 14, 2010
I don’t think the “game” would change all that much, Art, other than giving the ability of the chatbot to connect with current events and weather, which at present is a serious disadvantage for chatbots in the competition. Connection to the Internet really isn’t going to be helpful to the average chatbot’s core abilities for tracking conversation states or forming relevant replies to user input at this stage of the game, and right now most (if not all) chatbots currently involved in the Loebner Competition don’t have the ability to retrieve current event or weather data for use in generating responses anyway. As a result, when/if the “no Internet” rule ever changes, the initial impact will be negligible to minor, until such time as botmasters find ways to use the web to their benefit. I suspect this will take a bit of time to accomplish.

Posted: May 3, 2017 [ # 59 ]
Senior member | Total posts: 308 | Joined: Mar 31, 2012
Dave, my comment was not meant to address the day when connectivity to the Internet is allowable, but rather the premise under which the chatbots are constructed and entered, i.e. ESL student, alien, elf warrior, eight-year-old child, robot, etc. The connection comment was an afterthought. But to follow on: yes, to my way of thinking it absolutely would change the way in which the judges would view or rate this unknown bot / human, based on gathered and reported knowledge and replies. An online bot would be able to retrieve and report that the Eiffel Tower is 984 feet tall, whereas most humans would probably not know that answer.
After all, this is very speculative at the moment and further discussion will most likely solve nothing in this arena.
Should you enter Morti, would you classify him as a human or something else? If something else then you should not enrol him in such competitions where the ultimate goal is to “fool” the judges into thinking how Human your creation is.
Being friends, we can always agree to disagree.

Posted: May 4, 2017 [ # 60 ]
Guru | Total posts: 1297 | Joined: Nov 3, 2009
As the Loebner Prize Competition judge of record who reported to Dr. Warwick, I judged Eugene Goostman, the chatbot that later went on to pass the Turing Test. It felt like judging a real person. So it doesn’t surprise me at all that Eugene Goostman passed the Turing Test.