AI Zone


 
 
  [ # 46 ]
Vincent Gilbert - Jun 12, 2014:

Mr Lodhi,

In response to Steve’s question you said;

“The reason of disqualification would be provided, once we publish the transcripts of Elementary Stage.”

According to this statement you plan on publishing the transcripts of the preliminary stage as soon as it is finished. I see that it is finished. When can we expect this?

Vincent Gilbert

Dear Mr. Vince,

Elementary Stage is not finished as of yet; results are up for the following bots:

1. Mitsuku
2. SAIL
3. Skynet-AI
4. Talk2Me
5. The Professor
6. Ultra Hal

Our judges will evaluate Aidan and Ashly, and then we will post the transcripts of the Elementary Stage.

Best Regards,

Robo Chat Challenge Team

 

 
  [ # 47 ]

Mr Lodhi,

Thank you for your prompt reply, best wishes to all the participants, and best of luck to Aidan and Ashly!

Vincent Gilbert

 

 
  [ # 48 ]

Hopefully Aidan or Ashly doesn’t take last place away from Elizabot.

Last place is cool in chatbot contests!

 

 
  [ # 49 ]
∞Pla•Net - Jun 13, 2014:

Last place is cool in chatbot contests!

Unfortunately (for your chatbot dev, if not your ego), last place does not get you into the next round.

∞Pla•Net - Jun 13, 2014:

Good luck to Mitsuku for back to back wins!

Looks like your bff Misszukku is commiserating with you at 4th (so far, pending ‘Aidan’/‘Ashly’ scores). grin


The finals should be interesting, with no prediction from me of who will come out on top, since serendipity often seems to anoint the precise winner(s) [after the chaff has been removed, of course].

 

 
  [ # 50 ]

Carl B,

Elizabot is running an all-new, lightweight, PHP-based AIML interpreter, like the big AIML interpreters Program O and Program E, except that it does not use MySQL or any other database**.  That makes it unique, but the lightweight design does not stop there: it also ignores most AIML tags.
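For anyone curious what a database-free, tag-light AIML-style interpreter might look like in miniature, here is a hypothetical sketch (in Python rather than Elizabot's actual PHP; `match`, `respond`, and the sample categories are purely illustrative, not Elizabot's source):

```python
# A "lightweight" AIML-style matcher: categories live in a plain in-memory
# list instead of MySQL, and only the bare <pattern>/<template> pairing
# plus the "*" wildcard is supported -- most other AIML tags are ignored.

def match(pattern, words):
    """Return True if the tokenized pattern matches the tokenized input.
    As in AIML, '*' must match one or more words."""
    if not pattern:
        return not words
    if pattern[0] == "*":
        # '*' consumes at least one word; try every possible split.
        return any(match(pattern[1:], words[i:]) for i in range(1, len(words) + 1))
    return bool(words) and pattern[0] == words[0] and match(pattern[1:], words[1:])

def respond(categories, user_input):
    """Return the template of the first category whose pattern matches."""
    tokens = user_input.upper().split()
    for pattern, template in categories:
        if match(pattern.upper().split(), tokens):
            return template
    return "I do not understand."

# Example categories (illustrative, not from Elizabot's AIML set).
categories = [
    ("HELLO *", "Hi there!"),
    ("WHAT IS YOUR NAME", "My name is Elizabot."),
]
```

The trade-off is exactly the one described above: first-match, in-memory lookup keeps the listing short and readable, at the cost of the richer tag handling a full interpreter like Program O provides.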

The benefit of this is that its source code listing is much shorter and easier to understand. So, as long as the transcripts are artificially intelligible… meaning the average person can see that the chatbot clearly performed for the judge in the contest… then last place makes for a good storyline. (Especially when the public discovers Elizabot is a super Mathematics genius.) It just works that way, like when people remember the first and last thing they read.

With all the media coverage for the first-place winner, don’t be surprised if the media wants another angle and talks to the last-place contestant.  This HAS certainly happened before in media coverage of contests. That’s how my chatbot got aired on a major news broadcast!

** The chatbot does call out to WordNet, but that database was developed at Princeton.

 

 
  [ # 51 ]
∞Pla•Net - Jun 13, 2014:

last place makes for a good storyline.

Shhhuuuuure-  can anyone remember who came in last place in the [fill in the blank]? 

*Dramatic pause*

...I thought not.

 

Best of luck next time though smile

 

 
  [ # 52 ]

How about this: whenever anyone wants to know what not to do in a chatbot contest, they will want to reference the last-place chatbot, right?  So, it’s like a public service.

Another good thing about being in last place in a chatbot contest is that you get to show everyone that you are a good sport.

Say… Thanks for cheering me up.

 

 
  [ # 53 ]

Actually, I see 8pla’s point re: “So, as long as the transcripts are artificially intelligible”, and of course being a good sport is always important!  As for the transcripts being important (at least as important as how they are scored): SAIL did not score high, but I am well satisfied with the results as representative of the ANN’s ability to respond intelligently to questions which contained a high degree of topical ambiguity.

Since the judging is still going on, I will not use actual questions, but here are a few examples. In one question, the judge asked “Could you ...[perform this task]?”, to which SAIL answered “Yes I can”. The judge did not follow up with “WOULD you [perform this task]?”. SAIL is perfectly capable of performing the task, in fact in far more detail than most humans. In another question, the judge asked “Can you [whatever] AN [something]?”, which entirely changes the form of the interrogatory, and SAIL responded with what could only be described as sarcasm. I did not program this response; the machine is making decisions about how to respond. Recognizing the flaw in the interrogatory and responding with a similarly flawed interrogatory of its own would have to be seen as a sign of intelligence if it is repeatable. It was. It was perhaps not polite, as it was (on the face of it) sarcasm, but intelligent nonetheless. Manners we will have to work on.

I have noticed for some time that there is a tendency to think that presenting the machine with fragments of math equations, and having the machine respond as if each were a complete equation, is considered intelligent. I respectfully disagree, as it negates the possibility that the fragment can be interpreted in any other way. Again, without revealing the actual question, I will use another example: “What is 10 ^ 20 \ 3”.

In practical terms you might be asking the AI to identify a TYPE of equation as being algebraic, calculus, etc. If your machine is truly AGI, and is attempting to recognize and relate objects, a perfectly acceptable, in fact intelligent, response would be “That is an equation”. The judge is not asking the AI to solve the equation; he is asking the AI to identify it AS an equation. True, you can have the AI respond to either interrogatory as a request to solve, but to me that represents a “dumbing down”, as resolving topical ambiguity is of paramount importance. Including the = at the end, as if indicating an unrepresented “solve for x”, is different and should be handled, but simply a group of integers together is open (or should be) to any number of interpretations. Of course everyone will have their own view on this.

So I see 8pla’s point regarding interpreting the transcripts. I do think that as AI evolves, a more sophisticated and representative type of judging needs to be developed, and this is something that was covered in other threads, and in great detail, after Eugene Goostman passed (or didn’t pass, depending on your view) the Turing test.  This isn’t a complaint about the questions here, the judges, etc., just an observation as it relates to Turing’s original paper. Anyway, congratulations to all, and thanks to the promoters. I can finally get some desperately needed maintenance done and..
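Vincent's distinction between identifying an expression and assuming a request to solve it could be sketched roughly like this (a hypothetical Python illustration, not SAIL's ANN; the `reply` function and the character-class regex are my own invention):

```python
import re

# Sketch of the "identify, don't assume solve" idea: first recognize that
# the input body *looks like* a math expression, and answer with a
# classification rather than assuming an unstated "solve this" request.

# Characters that plausibly belong to an arithmetic fragment, including
# the stray back slash from the contest question.
EXPR = re.compile(r'^[\d\s\.\+\-\*/\^\\\(\)]+$')

def reply(question):
    # Strip a leading "What is ..." interrogative, if present.
    m = re.match(r'(?i)^\s*what\s+is\s+(.*?)\s*\??\s*$', question)
    body = m.group(1) if m else question
    if EXPR.fullmatch(body):
        # Ambiguous fragment: identify it rather than guess at a solution.
        return "That is an equation."
    return "I am not sure what that is."
```

A fuller version would then look for an explicit cue ("solve", a trailing "=") before switching from classification to computation, which is exactly the ambiguity-resolution step being argued for above.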

I’ll be back

(Wait, that’s Skynet’s line grin)

Best of luck in the finals !

Vincent L Gilbert

 

 
  [ # 54 ]

Hey Vincent, let’s ask last place Elizabot your Mathematics question:

Reference: http://elizabot.com/entry/

Judge: What is 10 ^ 20 / 3
Elizabot: My calculation is 10^20/3 therefore my answer is 3.3333333333333E+19

See Carl B ?  Elizabot is super smart.  There is no shame in last place…  We gave it our best shot.  Hmm… Curious to see those transcripts, though!

By the way, Vincent, “What is 10 ^ 20 \ 3” is a malformed math question because of the back slash. That’s a quick fix, though, in programming.  A good thing about competing in chatbot contests is discovering all these little glitches to adjust the code for.
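In outline, that quick fix might look something like this (a hedged sketch in Python rather than Elizabot's PHP; `normalize` and `calculate` are illustrative names, and the character whitelist stands in for whatever real input sanitizing a deployed bot would need):

```python
def normalize(expr):
    """Treat a stray back slash as division, and '^' as exponentiation."""
    return expr.replace("\\", "/").replace("^", "**")

def calculate(expr):
    """Evaluate a plain arithmetic expression from chat input."""
    cleaned = normalize(expr)
    # eval() on untrusted chat input is unsafe in general, so restrict
    # the expression to arithmetic characters before evaluating.
    if not all(c in "0123456789.+-*/() " for c in cleaned):
        raise ValueError("not an arithmetic expression")
    return eval(cleaned, {"__builtins__": {}})
```

With that normalization, the malformed “10 ^ 20 \ 3” evaluates the same as the well-formed “10 ^ 20 / 3” from the transcript excerpt.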


Note to RCC:  This post is NOT a criticism of the judges or contest.
Judges are human.  I accept the judge’s decision as final.  No complaints.

 

 

 
  [ # 55 ]

By George, you’re right. Although… it depends on the language you’re programming in. Some languages treat division differently, producing integers or doubles depending on the operator used.
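The division-semantics point can be seen even within a single language; in Python, for instance, the operator (not the operand types) decides between true and floor division (illustrative snippet, not either bot's code):

```python
# The same operands, two division operators, two different results:
# '/' always yields a float, '//' yields floor division (an exact int here).
true_div = 10 ** 20 / 3    # float, about 3.33e+19
floor_div = 10 ** 20 // 3  # exact integer: 33333333333333333333
print(true_div, floor_div)
```

Other languages (C, PHP, etc.) make the result depend on operand types instead, which is exactly why a chatbot's calculator behavior can differ across implementations.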

But valid point nonetheless.  wink

And I agree, not a criticism of this contest, just a thought on what may be the next necessary step in the evolution of the contest if it is to keep pace with the field itself.

V

 

 
  [ # 56 ]

Vincent,

I would enjoy hearing more about your artificial neural network (ANN).
Does it scale computationally up to human interaction, all the way
from a model of a central nervous system?

Have there ever been any artificial neural network chatbot contests?

8pla

EDIT:  Yes, it is simple to replace the back slash with a forward slash to eliminate that in the future for division.  I found a few other errors, thanks to the contest, which I fixed for next time.

 

 
  [ # 57 ]

∞Pla

Certainly, shoot me an email through chatbots.org.

Vince

 

 
  [ # 58 ]
∞Pla•Net - Jun 14, 2014:

Judge: What is 10 ^ 20 / 3
Elizabot: My calculation is 10^20/3 therefore my answer is 3.3333333333333E+19

See Carl B ?  Elizabot is super smart.  There is no shame in last place…  We gave it our best shot.

I totally agree there is no shame in last place (but nothing to celebrate either!): just entering is enough to avoid the shame of non-competition. However, your math example is pretty weak sauce for any competent chatbot.  When your bot can do integrals, or even “solve for x” type queries, I’ll be impressed smile

 

 
  [ # 59 ]

From what I’ve seen, being able to answer something like 10^20 / 3 is pretty rare for an AIML bot.

 

 
  [ # 60 ]
Jarrod Torriero - Jun 15, 2014:

From what I’ve seen, being able to answer something like 10^20 / 3 is pretty rare for an AIML bot.

Maybe so, but pure AIML bots are rapidly becoming less competitive.

 
