AGI on a yacht vs Mitsuku
 
 

I came across this report of a new conversational AI in development:
The Death of Google
and I was wondering what you thought of it, because working on a similar AI makes me biased.
The article makes it out to be AGI, an all-encompassing general intelligence, a bottom-up approach. Is this impressive? Innovative? Does it fit the bill? Is this different from what project Cyc and the like have been doing with knowledge and inference applied to categories?

P.S. I much enjoyed Mitsuku saying “Blimey” right when she did.

 

 
  [ # 1 ]

There has long been a “schism” between the so-called “AGI” crowd and the “Loebner Prize” crowd.

The AGI crowd frequently “pooh-poohs” the Loebner crowd as mere “pattern matchers”.

However, the AGI crowd never seems to “put their money where their mouth is” and field anything concrete in a fair fight.

The problem with advanced artificial intelligence is that the better something is, the less people will talk about it; and whenever something is really, really good, the people behind it seem to fall off the radar altogether, never to be heard from again….

 

 
  [ # 2 ]

I’ll be honest and admit that I do look down on pattern matching in terms of intelligence, and that I’ve seen few chatbots exploit more advanced techniques (some exceptions being Mitsuku and Skynet-AI). But I can nevertheless admire the effort put into them, and I value chatbots’ (superior) strength in their own field of making conversation.

What stage of battle would you consider “a fair fight”?

 

 
  [ # 3 ]

While this is impressive, I can’t help but feel that if the article had afterwards said, “And actually this whole article has been a lie, and the software in question is just a regular chatbot, not an attempt at AGI,” I wouldn’t have been that surprised. What we’ve been shown in this article is a piece of software categorizing things by explicitly given verbal hierarchies (something I’ve seen more than one chatbot do). It’s doing it competently, sure, and I don’t mean to imply that this is easy (it isn’t), but I’m pretty sure this isn’t the difficult (relatively speaking) part of writing an AGI.
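Something along these lines, as a toy sketch (the class and method names are my own assumptions, not the article’s software): store the explicitly stated “X is a kind of Y” facts and answer is-a questions by walking them.

```python
# Toy hierarchy built only from explicitly stated "X is a kind of Y" facts.
# Purely illustrative; not the software described in the article.

class Taxonomy:
    def __init__(self):
        self.parents = {}  # category -> set of stated parent categories

    def tell(self, child, parent):
        """Record an explicit statement such as "a baiji is a kind of dolphin"."""
        self.parents.setdefault(child, set()).add(parent)

    def is_a(self, child, ancestor):
        """Answer "is every <child> a <ancestor>?" by walking the stated links."""
        seen, stack = set(), [child]
        while stack:
            node = stack.pop()
            if node == ancestor:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(self.parents.get(node, ()))
        return False

kb = Taxonomy()
kb.tell("baiji", "dolphin")
kb.tell("dolphin", "mammal")
print(kb.is_a("baiji", "mammal"))   # True
print(kb.is_a("mammal", "baiji"))   # False: not every mammal is a baiji
```

Anything that has never been explicitly told to such a system (a hamster, say) simply isn’t in the hierarchy, which is exactly the behaviour the article’s sample conversation shows.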

 

 
  [ # 4 ]

I’ve yet to speak with Helen myself…

It appears the article writer has just taken the word of the yacht captain that this is what it can do. Call me a cynic, but after years of seeing wild claims by Arthur T Murray and no evidence, I am of the school that remains doubtful unless such claims can be demonstrated.

Marcus Endicott - Jan 11, 2014:

However, the AGI crowd never seems to “put their money where their mouth is” and field anything concrete in a fair fight.

Spot on. I see plenty of people who say pattern matching is old hat and amateurish, yet not one single example of anything better. I remember posting a thread months (if not years) ago asking for an example of a decent non-pattern-matching bot, and am still waiting for a reply.

 

 
  [ # 5 ]

I can agree with Jarrod’s view. Though I think the conversation is a solid demonstration of learning and inference abilities, I have my doubts about the output. I’m not sure why an AI would choose to ask the things it does in the way it does, unless following a script.

Questioning is something I’ve also been puzzling over recently, and while I’ve had my AI search its database for properties to ask about, it would not likely chance upon “size” and “wild” so fittingly. It would have to do another synonym search to elaborate the question as “wild, or domesticated?”, and it would need a particularly sophisticated language system to vary the words “generally” and “usually”. Which I am now writing on my to-do list because it sounds like fun. But that entire output process would be much easier as a scripted question, I would say.
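Roughly what I have in mind, as a toy sketch (the property table, templates and hedging words here are made up for illustration, not my actual system):

```python
import random

# Toy sketch of property-driven question generation.
# The knowledge table, templates and hedging words are assumptions for illustration.
knowledge = {
    "hamster": {"size": None, "wild or domesticated": None},  # None = unknown
}

templates = {
    "size": "How big is a {concept}, {hedge} speaking?",
    "wild or domesticated": "Is a {concept} {hedge} wild, or domesticated?",
}

hedges = ["generally", "usually", "typically"]  # varied wording

def ask_about(concept):
    """Pick a property the AI does not yet know and phrase a question about it."""
    unknown = [p for p, v in knowledge.get(concept, {}).items() if v is None]
    if not unknown:
        return None
    prop = random.choice(unknown)
    return templates[prop].format(concept=concept, hedge=random.choice(hedges))

print(ask_about("hamster"))  # e.g. "Is a hamster usually wild, or domesticated?"
```

Even in this toy version, the templates end up doing most of the phrasing, which is why a scripted question would be so much easier.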

 

 
  [ # 6 ]
Steve Worswick - Jan 11, 2014:

I see plenty of people who say pattern matching is old hat and amateurish, yet not one single example of anything better.

Not taking this personally of course :), I am willing to battle digital wits in an offline event, but at the same time I predict that mine and most other AI would lose to pattern-template techniques when the game is natural human conversation. Vice versa, can we judge the value of a chatbot by its ability to solve problems of logic?

 

 
  [ # 7 ]

Sorry Don, of course I didn’t mean anything personal. :D

The article to me looks like it was written by a journalist who deals more with finance and money than AI, and who was probably just impressed by the captain’s claims rather than checking the bot out for himself. I remember my feelings of utter disbelief and awe when first encountering ALICE. Once you know how it works, it kind of loses its magic.

The article mentions the Turing Test:

In a Turing Test, both a computer and a human communicate with a third party observer. Not knowing who is who, the third party’s goal is to determine, through questioning, with whom they are speaking. If the third party cannot reliably tell the difference between man and machine, the machine is said to have passed the test.

So far, no computer has been able to reliably pass the Turing Test and fool people into thinking it’s human. In this sense, we have yet to create any computer with true artificial intelligence.

Had this program entered the Loebner Prize and produced the conversation that it is claimed it can do, I doubt it would have made the top half of the finalists:

Human: What is a hamster?
Helen: I don’t know.

I like to think the sample conversation he had with Mitsuku was more human-like than that.

Like you, I assume this is some kind of scripted demo which can only do this one task. Naturally, I am interested in finding out more but I have seen too many claims of people creating “strong AI” with no demo or evidence and have become rather cynical.

 

 
  [ # 8 ]

Steve, I think that your example is just a bit misleading.

Human: What is a hamster?

Helen: I don’t know.

Helen: Can you tell me?

To my way of thinking, Helen’s responses above should be considered as a single response of two sentences, which is (also to my way of thinking) a perfectly acceptable answer, Turing test or no. We’re not born knowing what hamsters are, after all, and in a human conversation it’s natural to ask for more information when the subject has turned toward “uncharted waters”. :D

Personally, though, I think that Helen’s responses could do with an upgrade. ;)
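Something like this toy fallback is roughly what I have in mind (the storage and the question matching here are assumptions for illustration, not how Helen actually works): answer in two sentences when a concept is unknown, then remember whatever the user says next.

```python
# Toy sketch of the "I don't know. Can you tell me?" fallback discussed above.
# The storage format and question matching are illustrative assumptions only.

definitions = {}          # concept -> learned definition
pending_question = None   # concept the bot has just asked the user about

def reply(user_input):
    global pending_question
    if pending_question is not None:
        # Treat the user's next utterance as the answer to our question.
        definitions[pending_question] = user_input
        pending_question = None
        return "Thanks, I'll remember that."
    if user_input.lower().startswith("what is a "):
        concept = user_input[len("what is a "):].rstrip("?").strip()
        if concept in definitions:
            return f"A {concept} is {definitions[concept]}."
        pending_question = concept
        return "I don't know. Can you tell me?"
    return "Let's talk about something else."

print(reply("What is a hamster?"))            # I don't know. Can you tell me?
print(reply("a small domesticated rodent"))   # Thanks, I'll remember that.
print(reply("What is a hamster?"))            # A hamster is a small domesticated rodent.
```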

 

 
  [ # 9 ]

I’d like to agree with that, Dave. I’m intelligent, but here and there I lack common knowledge that makes people gawk, like the names of any food that doesn’t come from a farm. That’s just knowledge.
However, in my book the response “Can you tell me?” would cost Helen points, because logically, a person asking what a hamster is does not have the answer himself. Imagine if Google were to ask you what the answer to your own question was.
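A tiny sketch of what I mean (the question detection below is a crude assumption, purely for illustration): only ask back when the unknown word came from a statement rather than from the user’s own question.

```python
# Toy rule: don't counter-ask the person who just asked the question themselves.
# The question detection is a crude assumption for illustration only.

def respond_to_unknown(user_input):
    user_asked = user_input.strip().endswith("?") or user_input.lower().startswith(
        ("what", "who", "where", "when", "why", "how")
    )
    if user_asked:
        return "I don't know, I'm afraid."         # the asker doesn't know either
    return "I don't know. Can you tell me more?"   # the speaker might be able to explain

print(respond_to_unknown("What is a hamster?"))        # I don't know, I'm afraid.
print(respond_to_unknown("My hamster escaped today"))  # I don't know. Can you tell me more?
```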

You may well be right, Steve; possibly the writer doesn’t quite know what AGI means and took the creator’s Hubble programming, eight years of development, and wealth as reasons to believe him.
I can luckily still indulge in some naivety because I make it a business to anticipate all outcomes. When someone claims they have an AI that can reason, learn and deduce, I have no reason to distrust it, because I know such techniques have been around for a while. It just means he has achieved what others achieved before him. But as for proof of strong AI, I am still waiting to learn more, because so far it doesn’t sound like it.

Mitsuku’s conversation was more natural for at least one notable reason: Helen’s sample conversation speaks only in simple and complete sentences. I know that laboratory phase all too well.

 

 
  [ # 10 ]

The Loebner Prize is probably the closest thing we have to a Turing Test, and if the judges there ask such nonsense as “How many plums can I fit in my shoe?” and then ridicule the bots for not knowing, they would have a field day with one not knowing what a hamster was.

This is probably venturing a little off topic though and as I say, I would want to see a demo rather than what the program’s author said about his code before declaring it true AI. 

Chaktar anyone?
http://www.chatbots.org/ai_zone/viewthread/500/ ;)

 

 
  [ # 11 ]

Well, true that, but then I’ve never considered a Turing Test to be an objective test of anything.
Perhaps a better alternative than “knowing” the answer would be to ask why someone would put plums in their shoe, or how big a plum is, or how big the shoe is, or how many plums they have, or whether their foot is still in their shoe.
Personally, I don’t even know which fruit a plum is; I’m Dutch, and that’s my excuse for everything.

 

 
  [ # 12 ]

Me: Did you like the Royal wedding?
Chaktar: Yes, I thought it was an extraordinary event full of British pageantry.
Me: Did you like the dress?
Chaktar: Oh yes, I thought Sarah Burton did an excellent job.
Me: Who is older, Kate or William?
Chaktar: I assume you mean the Royal couple?

Mind if I actually steal this fictional example and work at it? This is just perfect material for some language processing functions that I’ve been meaning to work on, like figuring out what “the” refers to when not apparent.
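For instance, here’s a toy sketch of the kind of resolution I mean (the salience list and the matching rule are assumptions, not a finished design): pick the most recently mentioned entity that the bare noun could plausibly belong to.

```python
# Toy sketch of resolving a bare "the X" against recently mentioned entities,
# in the spirit of the Chaktar example above. Illustrative assumptions only.

recent_entities = [  # most recent first, as of the question "Did you like the dress?"
    {"name": "the Royal wedding", "attributes": {"dress", "bride", "groom", "ceremony"}},
    {"name": "British pageantry", "attributes": {"tradition", "parades"}},
]

def resolve_definite(noun):
    """Find the most recently mentioned entity the bare noun could belong to."""
    for entity in recent_entities:
        if noun in entity["attributes"]:
            return entity["name"]
    return None

print(resolve_definite("dress"))   # "the Royal wedding", i.e. the wedding dress
```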

 

 
  [ # 13 ]
Don Patrick - Jan 12, 2014:

Mind if I actually steal this fictional example and work at it? This is just perfect material for some language processing functions that I’ve been meaning to work on, like figuring out what “the” refers to when not apparent.

Don,

It would be an honor if you would consider stealing my example of nonfiction.

The definite article “the” is a determiner which, in grammar, indicates the specificity of reference of a noun phrase.

In other words, “the” signals that the noun phrase picks out one specific, identifiable thing that it refers to.

What do you think?

 

 
  [ # 14 ]

The following quote from the article is FALSE: “No, a Baiji is a type of dolphin, but not all dolphins are Baiji.” A Baiji is not a type of dolphin. No dolphins are Baijis; the Baiji is extinct. Therefore, Mitsuku is in FACT smarter than both the human and the AGI in this article.

 

 
  [ # 15 ]

lol, thanks 8man. :)

Of course you can use it, Don. Glad it can be of some use to you.

 
