
X-prize for Artificial Intelligence
 
 
  [ # 61 ]

Amusing indeed. Mitsuku suggested I could make a cat if I had some meat and whiskers grin
Also interesting how a presentation template with “insert here” rules (or so I interpret it) could nearly produce something suitable. Human presentations do generally follow a template with set components: greeting, introduction of the topic, main statement, arguments, controversy, conclusion. It would still be tough to gather all the relevant parts into a consistent thread, but the final composition of a presentation now seems to require less intelligent methods than I imagined. The most compelling sentences are actually the prewritten ones, which blend in quite well.
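As a toy illustration of that “set components” idea (the section names and filler sentences below are my own and purely hypothetical, not anything from the contest):

# Hypothetical skeleton of a presentation assembled from fixed components,
# with the chosen topic slotted into prewritten sentences.
SECTIONS = [
    ("greeting", "Good afternoon, everyone."),
    ("introduction", "Today I would like to talk about {topic}."),
    ("main statement", "{topic} matters more than most people realise."),
    ("arguments", "There are several reasons to care about {topic}."),
    ("controversy", "Of course, not everyone agrees about {topic}."),
    ("conclusion", "In short, {topic} deserves our attention. Thank you."),
]

def compose_presentation(topic: str) -> str:
    # Fill the topic into each prewritten sentence and join the sections in order.
    return "\n".join(sentence.format(topic=topic) for _, sentence in SECTIONS)

print(compose_presentation("the X Prize for AI"))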

 

 
  [ # 62 ]
Don Patrick - Apr 27, 2014:

Human presentations do generally follow a template with set components: greeting, introduction of the topic, main statement, arguments, controversy, conclusion. It would still be tough to gather all the relevant parts into a consistent thread, but the final composition of a presentation now seems to require less intelligent methods than I imagined.

The main reason for this is that most humans who speak in public have been trained to follow a “standard format” for their presentations. I had a class in public speaking when I was in high school, and one of the major components of the course involved getting us to organize our presentations in pretty much the same form: introduction, outline, subtopics, summary and conclusion, with Q&A being optional, depending on the presentation. If this formula works well for us, why not for AI? smile

 

 
  [ # 63 ]

Indeed. And here is a thought that will, no doubt, spark some [ahem] spirited debate. I have seen an AI criticized for not being “true AI” when it uses a sentence template, because the sentence was written by a human. However, if we apply that logic consistently, then even if the AI builds the entire sentence “word by word”, it is still using words that were created by humans and a sentence structure that was created by humans, the only truly native element being the phonemes (and even those are taken from human speech). So why is the cutoff for intelligence set at the sentence level? And if we instead set the bar for intelligence at requiring the AI to create and ground its own unique tokens to represent concepts, and to arrange those tokens into a structure that makes sense to it, so that we then learn its logic, then who’s to say that some of the AIs that have received the harshest criticism during these contests, because their sentence structure does not follow strict grammatical rules CREATED BY HUMANS,

How are you?
grok fine doing am I

are not actually showing signs of intelligence that we are simply not crediting them with?

V

 

 
  [ # 64 ]

Oh, I don’t object to AI using a standard storyline or grammatical structure. They’d have to. It just shows me that I shouldn’t expect creative choices in an area where humans would at least consider them.

The difference between human intelligence and current machine intelligence is a large gap in depth. At the human level, intelligence is found in the choice of relevant meanings. At the word level, it is found in the choice of relevant words. At the sentence level, it is only found in the choice of relevant sentences. At the document level, you would have to call Google intelligent (which, to a point, it is). But when people say “intelligent”, they really mean “as intelligent as a human”. We can credit an AI with an IQ of 2, but what people really want to see is an AI with an IQ over 80. That is also what this contest is about.

Bad grammar in an AI’s output can be a sign of an underlying level of intelligent interpretation, or equally a glitch, or a sign of blindly inserting whatever text followed the verb (Eliza tactics). It would take more than a few examples to determine which is the case smile, but people can guess which is most likely.
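For anyone unfamiliar with the “Eliza tactic” mentioned above, here is a minimal, hypothetical Python sketch of blindly reusing whatever text followed the verb (the single rule and word list are mine, not any contest entry’s actual code):

import re

# One Eliza-style rule: capture whatever follows "I like" and echo it back
# inside a canned response, with naive word-by-word pronoun swapping.
REFLECTIONS = {"my": "your", "i": "you", "am": "are", "me": "you"}

def reflect(fragment: str) -> str:
    # Swap pronouns word by word; no deeper grammatical analysis.
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(user_input: str) -> str:
    match = re.search(r"\bI like (.+)", user_input, re.IGNORECASE)
    if match:
        # Whatever followed the verb is inserted verbatim (bar the pronoun swaps),
        # which is where the odd grammar can creep in.
        return "Why do you like " + reflect(match.group(1)) + "?"
    return "Tell me more."

print(respond("I like going to see my brother play football"))
# -> Why do you like going to see your brother play football?
print(respond("I like you more than I like myself"))
# -> Why do you like you more than you like myself?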

 

 
  [ # 65 ]
Don Patrick - Apr 27, 2014:

Also interesting how a presentation template with “insert here” rules (or so I interpret it) could nearly produce something suitable.

Yes, that’s exactly how it works, and it will be fairly obvious if you try a couple of different subjects and compare the presentations. I created a database of around 2,500 common objects which Mitsuku loads up and slots into place.

An example being:

There’s no need to spend too much money on buying <get name="prefix"> <get name="word"> though. You can easily make one if you have some <get name="madefrom"> and <get name="wordhas">.
However, if you are not the creative type I would suggest looking <get name="where"> and you would probably find one there to take.

becomes this for “xprize car”

There’s no need to spend too much money on buying a car though. You can easily make one if you have some metal and wheels and an engine. However, if you are not the creative type I would suggest looking on a road and you would probably find one there to take.

I then hope the chatbot part of Mitsuku will be able to handle the questions (or at least fend them off in a generic style).
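Purely as an illustration of that slot-filling idea, here is a rough Python sketch (the property names mirror the template above, but the “car” entry is my own guess, not the actual 2,500-object database):

# Hypothetical slice of an object-property database, keyed by the same slots
# used in the template above (prefix, word, madefrom, wordhas, where).
OBJECTS = {
    "car": {
        "prefix": "a",
        "word": "car",
        "madefrom": "metal",
        "wordhas": "wheels and an engine",
        "where": "on a road",
    },
}

TEMPLATE = (
    "There's no need to spend too much money on buying {prefix} {word} though. "
    "You can easily make one if you have some {madefrom} and {wordhas}. "
    "However, if you are not the creative type I would suggest looking {where} "
    "and you would probably find one there to take."
)

def build_paragraph(topic: str) -> str:
    # Look the topic up and drop its properties straight into the template.
    return TEMPLATE.format(**OBJECTS[topic])

print(build_paragraph("car"))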

I have found that making templates for exercises like this is invaluable and can produce quite convincing results. I use a similar method in the Funniest Computer Ever competition, which seems to work, as I have won that contest in both of the years it has run.

As I say, I would gamble that the topics chosen for the X Prize will be a lot more in-depth than “cats” or “an apple”. It’s a start though.

 

 
  [ # 66 ]

http://techxplore.com/news/2014-04-automated-grading-skeptic-babel-expose-nonsense.html

The good news: A former MIT instructor and students have come up with software that can write an entire essay in less than one second; just feed it up to three keywords. The bad news: The essay is gibberish. Oh, wait, more news: The nonsense essay was fed through an online writing product using essay-scoring technology. Perelman pasted the essay into the answer field, clicked “submit,” and the paper got a score of 5.4 out of 6. The essay, after all, had good grammar and impressive vocabulary words. The end result was nonsense, nonetheless.

 

 
  [ # 67 ]

We may have some competition: Watson the Debater…

Can a computer with access to large bodies of information like Wikipedia extract relevant information, digest and reason on that information and understand the context … and present it in natural language, with no human intervention?

IBM claims Watson can extract information from Wikipedia, “understand” it, and reason from that information.

 

 
  [ # 68 ]

Well, apparently it doesn’t “understand” well enough to realise that it’s making the same argument three times over. It seems apparent to me that Watson is quoting sentences from human-written articles within a predetermined speech template, but I will give it credit for figuring out which sentences are arguments and which aren’t, which is at the very least a useful function.
I can’t judge how intelligent that function actually is though, as I can well imagine Wikipedia article subheadings like “Arguments for” and “Arguments against” that are simple enough to pilfer (actually not a bad idea).
We’ll see. The first time IBM said they were going for a debating Watson, they called it a “long-term endeavour”, but they’ll definitely be game for this contest.
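To make that “pilfering” idea concrete, here is a toy Python sketch that keeps only the sections of a plain-text article whose headings start with “Arguments” (the article text and the heading heuristic are entirely hypothetical; this is not how Watson works):

import re

# Toy article with plain-text subheadings, invented for the example.
ARTICLE = """Overview
Violent video games are widely played.

Arguments for
They may provide a harmless outlet for aggression.

Arguments against
They may desensitise players to violence.
"""

def argument_sections(text: str) -> dict:
    sections = {}
    current = None
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue
        # Treat short, punctuation-free lines as headings (a crude heuristic).
        if re.fullmatch(r"[A-Z][A-Za-z ]{1,40}", line):
            current = line
            sections[current] = []
        elif current:
            sections[current].append(line)
    # Keep only the argument sections, joined back into single strings.
    return {h: " ".join(body) for h, body in sections.items() if h.startswith("Arguments")}

for heading, body in argument_sections(ARTICLE).items():
    print(heading, "->", body)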

 
