∞Pla•Net - Jun 12, 2014:
First of all, there is no debate that I respect your view, and I support the AI-research you may be involved in which differs from chatbot A.I. In my view, investors are fortunate to be working in these other areas of AI-research. I fully support AGI (artificial general intelligence) research and other types of A.I. besides chatbots. My chatbot was once featured on a major news broadcast, and I have been interviewed on a popular cable TV show about my job in artificial intelligence in engineering, for which I am a certified technician.
I respect your insights as well, and precisely because I do, I'm a little disappointed to see you joining the choir. I've read dozens of articles related to this event, both first reports and commentary and critique. I also trawled hundreds of comments below these articles, and the general consensus is pretty much as Ray Kurzweil summed it up in his response to the matter.
∞Pla•Net - Jun 12, 2014:
Please try to understand that I served as a Loebner Prize Judge assigned to evaluate the Eugene Goostman technology. So I am sharing my first-hand view as a judge. The Eugene Goostman technology was nearly human back in 2008, and that team of computer programmers has been perfecting the technology ever since.
I have no issue with the fact that this chatbot performs very well within its restricted model of (simulated) intelligence. I do take issue, however, with the fact that a restricted model is in place that makes it much easier to meet the criteria for success, and that the Turing test is then declared bested, even though the test mentions no option for restricting the criteria (on the contrary: Turing talks about hours of conversation, and a ‘system’ that can ‘learn and understand’ so as to be able to handle conversation flow).
∞Pla•Net - Jun 12, 2014:
If you doubt chatbot technology, may I recommend that you go see Robby Garner’s chatbots perform in the play “Hello Hi There” directed by Annie Dorsen? You will not be disappointed! It may enlighten you about chatbots when you see the magic in the air. I went to see this play in a theater with a sold-out audience that was absolutely delighted and completely entertained by the chatbot actor and actress in the play. You will come to understand why Robby Garner is in the Guinness Book of World Records for his chatbot technology.
I have been building chatbots myself for several years (using AliceAIML). In fact, that experience has led to my understanding of the shortcomings of chatbots in attaining real machine intelligence. I’ve used AliceAIML for several early prototypes of my ideas, and even implemented a first test of my emotion-based model that way. Chatbot technology is very impressive for what it does, but grammar-based parsing linked to decision trees with predefined responses has little to do with knowledge comprehension, experience-based reasoning, and the capability to formulate responses (by itself) in reply to things that happen in (its) reality.
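The "predefined responses" point can be made concrete with a toy sketch of the AIML-style pattern-to-template approach (hypothetical rules, written in plain Python rather than real AIML): the bot never comprehends anything, it merely substitutes a wildcard capture back into a canned reply.

```python
# Toy sketch of AIML-style pattern matching, for illustration only:
# a "*" wildcard pattern maps to a canned response template, and the
# text captured by the wildcard is substituted into the reply.
# The rules below are made up; this is not AIML or any real bot's code.
import re

RULES = [
    # (pattern with a "*" wildcard, response template with a {0} slot)
    ("MY NAME IS *", "Nice to meet you, {0}."),
    ("I FEEL *", "Why do you feel {0}?"),
    ("*", "That is interesting. Tell me more."),  # catch-all fallback
]

def respond(user_input: str) -> str:
    # Normalize the input the way simple AIML interpreters do.
    text = user_input.upper().strip(".!?")
    for pattern, template in RULES:
        # Turn the "*" wildcard into a regex capture group.
        regex = "^" + re.escape(pattern).replace(r"\*", "(.*)") + "$"
        match = re.match(regex, text)
        if match:
            # Fill the template with the captured fragment(s).
            return template.format(*(g.strip().lower() for g in match.groups()))
    return "I do not understand."
```

However fluent the output looks, every reply already existed in the rule table before the conversation started, which is exactly why this has little to do with comprehension or reasoning.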
∞Pla•Net - Jun 12, 2014:
As for the AI-research community, surely they must just be carrying on the tradition started by Joseph Weizenbaum, who back in the 1960s criticized his own breakthrough ELIZA, one of the first chatbots. They will probably do the same thing again when ALICE or Mitsuku (both AIML-based), Elbot, ChatScript, or any one of the other excellent chatbot technologies available today passes the Turing Test next.
The ‘real’ Turing test, as in having a prolonged conversation on any topic instead of one-shot question-and-answer sessions, will never be passed by a chatbot (by the current standard of what that implies). I guess I have to finish my research paper on why the Turing test is still very valid as a measure for human-like intelligence, but here’s the (very) short version: humans are socially and biologically geared to recognise their own species. This has a lot to do with procreation and keeping the species intact. It is because of that that we are very capable of spotting a system that seems to act human but does not really do so in all circumstances. Only when a system meets us on an emotional and social level that is (extremely) close to our own perception will it become hard to spot the virtual system. In the same way that we can gauge a conversation partner’s level of knowledge on any topic after having a topical conversation with that person, we do the same in a real Turing test.
The really big mistake being made in the current way the Turing test is run is that (if we take a look at the transcripts from just about any event) there is no real conversation going on. If the majority of humans held conversations like that, we would have to agree that most humans are brain dead, and there is little hope for humanity.
∞Pla•Net - Jun 12, 2014:
It was amazing that Professor Kevin Warwick predicted this would happen, and just a few years later it has. In conclusion, all I can tell you is that the media is completely justified in running with this story.
It’s easy to predict that things like this will happen; just read Clarke’s three laws (http://en.wikipedia.org/wiki/Clarke’s_three_laws). There is nothing amazing about that (the statements by Warwick, that is; Arthur C. Clarke is/was awesome, of course). And, as I still argue, it didn’t actually happen.