

Applied Problems: an AI evaluation standard for chatbots
 
 

Before we ask chatbots to chat like humans, there are many practical targets.
Applied problems may be a good way to evaluate the AI level of a chatbot.

For example:

The cost of picture frame M is $10.00 less than 3 times the cost of picture frame N. If the cost of frame M is $50.00, how much is the cost of frame N?


Applied problems can range from simple to hard.

 

 
  [ # 1 ]

Hi,

so first the bot/parser/inference engine has to be aware of quantitative entities and arithmetic operators. In natural language, of course.

User:    What is three times eight?
Bot:    The result is 24.

User:    What is the square root of forty nine?
Bot:    Should be 7.

etc.
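A minimal sketch of that first step in Python: parsing spelled-out numbers and a couple of operators. The lookup tables and function names are my own illustration, not any particular bot's engine.

```python
import math
import re

# Hand-made lookup tables for number words (illustrative, not complete).
UNITS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
         "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9}
TENS = {"twenty": 20, "thirty": 30, "forty": 40, "fifty": 50,
        "sixty": 60, "seventy": 70, "eighty": 80, "ninety": 90}

def words_to_number(words):
    """'forty nine' -> 49, 'eight' -> 8."""
    return sum(TENS.get(w, UNITS.get(w, 0)) for w in words.split())

def answer(question):
    """Match a couple of fixed question shapes and compute the result."""
    q = question.lower().rstrip("?")
    m = re.match(r"what is the square root of (.+)", q)
    if m:
        return math.sqrt(words_to_number(m.group(1)))
    m = re.match(r"what is (.+) times (.+)", q)
    if m:
        return words_to_number(m.group(1)) * words_to_number(m.group(2))
    return None

print(answer("What is three times eight?"))              # 24
print(answer("What is the square root of forty nine?"))  # 7.0
```

Real coverage would of course need a full number grammar and far more operator patterns than these two.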

Then, it should be able to memoize (I guess that’s Perl talk) facts about the world. I.e. knowledge representation.

User:    A terrier is a dog
Bot:    Ok, so terrier is a dog.
User:    M costs 30 US$
Bot:    Ok, so M costs 30 US$.
User:    A car has four tires.
Bot:    Ok, so a car has 4 tires.

And then it should be able to perform semantic inference on the knowledge stored:

User:    a dog is an animal
Bot:    Ok, so dog is an animal.
User:    Is a terrier an animal?
Bot:    Yes.

Or the quantitative equivalent - which is exactly what you write about M and N. Working on that, should be up and running by the end of this month.
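That kind of is-a inference can be sketched in a few lines. The storage scheme here (one parent category per term) is a deliberate simplification for illustration, not Richard's actual design:

```python
# A toy knowledge store: each term points to its parent category.
isa = {}  # e.g. "terrier" -> "dog"

def learn(child, parent):
    isa[child] = parent

def is_a(child, parent):
    # Walk up the chain: terrier -> dog -> animal.
    while child in isa:
        child = isa[child]
        if child == parent:
            return True
    return False

learn("terrier", "dog")
learn("dog", "animal")
print(is_a("terrier", "animal"))  # True
print(is_a("animal", "terrier"))  # False
```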

Richard

 

 
  [ # 2 ]

“The cost of picture frame M is $10.00 less than 3 times the cost of picture frame N. If the cost of frame M is $50.00, how much is the cost of frame N?”

The thing of it is, I doubt most people could solve that word problem. One could argue that this is a tragic consequence of underperforming education systems, but that’s the reality. Find someone who’s been out of school for ~10 years and hasn’t been to college. I doubt they could do it. So the ability to answer problems like these, while useful, would not necessarily be indicative of general intelligence in a Turing-style test.

I think developing software to solve word problems such as this one would not be nearly so challenging as unstructured NLP. By nature of being a transformation of an algebraic equation, word problems have set, structured rules for how to present them in natural language.

A real interesting challenge would be to develop a chatbot that could learn to solve word problems by being taught how in natural language!

 

 
  [ # 3 ]

Nathan,

This is a great idea and an excellent approach to building general intelligence into a bot.  Free form NL math queries could be an excellent education tool as well.

These are the kinds of things I will have my bot perform.  If you read a bit through my bot’s thread (CLUES), you will see a good example of how I want to be able to talk to my bot about electronics theory and engineering. 

I have copied and pasted your “frame” question into my ‘to-do’ list for CLUES.  Not sure when I’ll get around to it, probably not until winter, since I still have quite a few grammar rules to tell it about before it can analyze that sentence.

 

 
  [ # 4 ]

C R:

It’s only an example. There are many different applied problems.

Some simple examples.

Tom has 5 apples. David has 3 apples. How many apples do they have?

Tom had 5 apples. He gave David 2 apples. How many apples does Tom have then?

There are 6 boxes. Each box has 5 apples. How many apples are in the boxes altogether?


I propose this evaluation approach because I believe it is better than the Turing-style test.
A chatbot cannot solve applied problems without understanding; you cannot solve them with a simple text pattern.

Different levels of applied problems can easily distinguish the understanding ability of chatbots.

 

 
  [ # 5 ]

Victor, I’m sure we are on a similar path.

Actually, solving applied problems is not only for education applications. I believe it is a necessary capability for a virtual sales agent. With this ability, a virtual agent can provide quotes for different client options while chatting with the client. It can also be part of the decision system for an avatar in a virtual world.

If an evaluation test introduces applied problem solving, the winner will be a practical, valuable chatting engine, not just a toy.

 

 
  [ # 6 ]

Yes, I’ve decided to abandon the Loebner prize in favor of, like you say, a practical system.

For example, I think it is a waste of time for a bot to play Turing’s “imitation game”.  Turing meant that simply as a thought experiment, not to actually develop a bot based on that.

Also, I don’t care to design a bot that works like a human.  A computer performs math much, much better than a human, and completely differently.  Thus, my bot will think of itself as a computer program, work like a computer program, and be educated and learn like a computer program.  A computer program is not a human child, and this idea of teaching it with ‘rewards’ and ‘punishments’ is ridiculous to me.

 

 
  [ # 7 ]
Victor Shulist - Sep 7, 2010:

...
A computer program is not a human child, and this idea of teaching it with ‘rewards’ and ‘punishments’ is ridiculous to me.

I agree with this whole-heartedly. Besides, what would constitute a “reward” to a computer program? Or a “punishment”, for that matter? Are we to control the amount and quality of system resources to a program as a form of incentive? I can see it now:

“You did poorly on that test, Morti, so I’m restricting you to a dialup connection for the next hour. Try harder, next time.”

 

 
  [ # 8 ]

L.M.A.O. Dave, that is hilarious.  Or threaten to expose it to a computer virus.

 

 
  [ # 9 ]

Morti already seems to have one of those.

Poor little guy is flat on his virtual back, sucking down a chicken soup simulation like it’s water.

DRAT! I have it now, too!

 

 
  [ # 10 ]

M = 3N - 10
50 = 3N - 10
3N = 60
N = 20

Sorry, had to do this…
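The same steps, mechanized. Writing the relation as M = a*N + b, solving for N is one line of arithmetic; the function name is just for illustration:

```python
def solve_linear(a, b, m):
    """Solve a*N + b = m for N."""
    return (m - b) / a

# "M is $10 less than 3 times N" => M = 3*N - 10, with M = 50.
print(solve_linear(3, -10, 50))  # 20.0
```

The hard part, of course, is not this division but getting from the English sentence to the coefficients a and b in the first place.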

During the AAAI conference, I met Henry Lieberman of MIT. He was talking about common-sense knowledge, stating that AI developers are working on understanding rather complex sentences, while communication is based on context and assumes you already know a lot:

-an apple is a fruit
-fruit is something you can eat
-fruit is healthy
-an apple is about 10 cm/3 in tall
-apples come in boxes
-apples grow on trees

etcetera, before you can talk about apples. When you don’t know what an apple is, an apple could be something like a planet, a disease, or an economic term.

 

 
  [ # 11 ]

During the AAAI conference, I met Henry Lieberman of MIT. He was talking about common-sense knowledge, stating that AI developers are working on understanding rather complex sentences, while communication is based on context and assumes you already know a lot:

-an apple is a fruit
-fruit is something you can eat
-fruit is healthy
-an apple is about 10 cm/3 in tall
-apples come in boxes
-apples grow on trees

etcetera, before you can talk about apples. When you don’t know what an apple is, an apple could be something like a planet, a disease, or an economic term.

This is exactly the conclusion I have made in my NLP travels. 

It is utterly impossible to understand without knowing information about the words themselves.

This is why development that is based on:

                A is a B, B is a C, thus A is a C

is a waste of time when you do not care about the values of A, B, and C, but just the mechanism for moving them around.  You can’t treat this like mathematical algebra.  You’re not substituting numbers into variables here.  Words are not like numbers; they have different properties and different ‘equations’ (‘algorithms’ for semantic computing).

When you don’t care what the values of the variables are (that is, when you don’t know what the words you substitute for the variables mean), the result is this kind of nonsense:

    A ham sandwich is better than nothing.
    Nothing is better than a million dollars.
    Thus, a ham sandwich is better than a million dollars.

@Erwin - any links regarding this complex-sentence research?

 

 
  [ # 12 ]

LOL

 

 
  [ # 13 ]
Nathan Hu - Sep 7, 2010:

It’s only an example. There are many different applied problems.

Some simple examples.

Tom has 5 apples. David has 3 apples. How many apples do they have?

Tom had 5 apples. He gave David 2 apples. How many apples does Tom have then?

There are 6 boxes. Each box has 5 apples. How many apples are in the boxes altogether?

My intuition is that developing this ability in a bot would be simpler than most NLP problems. (Granted, I have not attempted it.) The reason I think so is that there is generally a one-to-one mapping of natural language phrases to mathematical symbolism. Some simple string pattern matching would probably do the trick. Now, there would be a lot of patterns to input, but it would be do-able.

What would be more interesting is developing a bot that could be fed word problems and their solutions, and learn through experience what patterns often arise and what each corresponds to mathematically. The bot, once it reads many questions, will easily see what word combinations show up repeatedly. Then, by matching different rules to each word combination until the correct answer is obtained, it can form a guess for the mapping. With each problem, the confidence of the mapping would improve.
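As a concrete (and deliberately naive) sketch of the pattern-matching idea, each regex below maps one word-problem shape from Nathan's examples to an arithmetic rule. The patterns are hand-written for exactly these three shapes and nothing more:

```python
import re

# Each entry pairs a regex for one problem shape with the arithmetic
# it corresponds to. A real system would need a vast number of these.
PATTERNS = [
    # "X has A apples. Y has B apples. How many ..." -> A + B
    (r"(\w+) has (\d+) apples\. (\w+) has (\d+) apples\. how many",
     lambda m: int(m.group(2)) + int(m.group(4))),
    # "X had A apples. He gave Y B apples. How many ..." -> A - B
    (r"(\w+) had (\d+) apples\. he gave \w+ (\d+) apples\. how many",
     lambda m: int(m.group(2)) - int(m.group(3))),
    # "There are A boxes. Each box has B apples. How many ..." -> A * B
    (r"there are (\d+) boxes\. each box has (\d+) apples\. how many",
     lambda m: int(m.group(1)) * int(m.group(2))),
]

def solve(problem):
    p = problem.lower()
    for pattern, rule in PATTERNS:
        m = re.search(pattern, p)
        if m:
            return rule(m)
    return None  # no pattern covers this shape

print(solve("Tom has 5 apples. David has 3 apples. "
            "How many apples do they have?"))  # 8
```

The brittleness is plain: reword any sentence slightly and the matching fails, which is essentially Victor's objection below.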

Victor Shulist - Sep 7, 2010:

Also, I don’t care to design a bot that works like a human.  A computer performs math much, much better than a human, and completely differently.  Thus, my bot will think of itself as a computer program, work like a computer program, and be educated and learn like a computer program.

I agree! I think the focus geared towards human-like AI is grounded in the idea that “hey, this is a system we know is intelligent, let’s reproduce it!” But that is a naive viewpoint and doesn’t take into account (a) known advantages in computing over the human brain and (b) known limitations in computers/robotics compared to the brain/body. We’ve got to work with the tech we’ve got.

Victor Shulist - Sep 7, 2010:

A computer program is not a human child, and this idea of teaching it with ‘rewards’ and ‘punishments’ is ridiculous to me.

I always took “reward” to mean an external sign that it should increase its confidence in a given algorithm and “punishment” as a sign that it should decrease that confidence. Were the terms originally meant to be anthropomorphized in that way?

 

 
  [ # 14 ]
Victor Shulist - Sep 7, 2010:

    A ham sandwich is better than nothing.
    Nothing is better than a million dollars.
    Thus, a ham sandwich is better than a million dollars.

Thinking about issues like this is rather disheartening. ...But it could make for some interesting chatbot conversations!

 

 
  [ # 15 ]
C R Hunt - Sep 8, 2010:
Nathan Hu - Sep 7, 2010:

It’s only an example. There are many different applied problems.

Some simple examples.

Tom has 5 apples. David has 3 apples. How many apples do they have?

Tom had 5 apples. He gave David 2 apples. How many apples does Tom have then?

There are 6 boxes. Each box has 5 apples. How many apples are in the boxes altogether?

My intuition is that developing this ability in a bot would be simpler than most NLP problems. (Granted, I have not attempted it.) The reason I think so is that there is generally a one-to-one mapping of natural language phrases to mathematical symbolism. Some simple string pattern matching would probably do the trick. Now, there would be a lot of patterns to input, but it would be do-able.

Sorry, but I very much disagree with you on this point.  Just because the natural language (NL) statements deal with mathematical concepts doesn’t mean there is a fixed number of them that you could create a template/pattern to match against.  There are probably just as many NL statements in math as there are about any other topic (an astronomical number).  I could create a complex/compound sentence which links together any number of NL sentences with conjunctions, and any depth of nested NL statements tied together with subordinate conjunctions and subordinate clauses, that deal with mathematical problems.  Good luck parsing that with simple pattern matching techniques.

C R Hunt - Sep 8, 2010:

What would be more interesting is developing a bot that could be fed word problems and their solutions, and learn through experience what patterns often arise and what each corresponds to mathematically. The bot, once it reads many questions, will easily see what word combinations show up repeatedly. Then, by matching different rules to each word combination until the correct answer is obtained, it can form a guess for the mapping. With each problem, the confidence of the mapping would improve.

Umm… I thought of that 20 years ago, actually, and it is no easy programming task.  Perhaps someday someone will code something like that, and I’d like to see it in action, but personally, I’m taking a much more direct approach.

C R Hunt - Sep 8, 2010:
Victor Shulist - Sep 7, 2010:

Also, I don’t care to design a bot that works like a human.  A computer performs math much, much better than a human, and completely differently.  Thus, my bot will think of itself as a computer program, work like a computer program, and be educated and learn like a computer program.

I agree! I think the focus geared towards human-like AI is grounded in the idea that “hey, this is a system we know is intelligent, let’s reproduce it!” But that is a naive viewpoint and doesn’t take into account (a) known advantages in computing over the human brain and (b) known limitations in computers/robotics compared to the brain/body. We’ve got to work with the tech we’ve got.

Absolutely!  Current digital computers are von Neumann machines (http://en.wikipedia.org/wiki/Von_Neumann_machine), which, basically, were designed to be programmable calculators.  I have spent much time coding my algorithms to deal with the ‘serial’ nature of this type of architecture.
There are so many permutations in NLP that my initial processing times were hundreds of times slower than they are now; I had to employ many programming shortcuts and tricks because of this serial processing nature and the ‘von Neumann bottleneck’ (http://c2.com/cgi/wiki?VonNeumannBottleneck).

C R Hunt - Sep 8, 2010:
Victor Shulist - Sep 7, 2010:

A computer program is not a human child, and this idea of teaching it with ‘rewards’ and ‘punishments’ is ridiculous to me.

I always took “reward” to mean an external sign that it should increase its confidence in a given algorithm and “punishment” as a sign that it should decrease that confidence. Were the terms originally meant to be anthropomorphized in that way?

Yes, that is how I understood it as well, some variable in memory just gets incremented on success, and decremented on failure.  But, again, I take a more direct approach than the trial/error concept.

 

C R Hunt - Sep 8, 2010:
Victor Shulist - Sep 7, 2010:

    A ham sandwich is better than nothing.
    Nothing is better than a million dollars.
    Thus, a ham sandwich is better than a million dollars.

Thinking about issues like this is rather disheartening. ...But it could make for some interesting chatbot conversations!

Not at all, the trick here is the second statement:
    Nothing is better than a million dollars.

The bot needs to know, based on the meanings of those words, that this really means:
“There exists nothing that is better than a million dollars.”

There are so many issues with NLP.  One of the most annoying for chatbot developers is when humans shorten a sentence, leaving out important words and just using the word “is”.  I see it so much I’m getting sick of it…

“Time is money”

This doesn’t really mean that the words time and money are synonyms.  I don’t walk into a store and say, “I’ll give you 5 minutes for that watch”.

Yes, you could mean 5 minutes of work, 5 minutes of entertainment, whatever, but you’re not saying that; you’re just saying “TIME is money”.  This sentence makes no sense.  If “Time is money” were taken literally to mean the words time and money are synonymous, then I wouldn’t bother going to work tomorrow morning; I would just stay in bed, wait, passing time, and get money.  I wish!

Someone may say, “If my stock increases in value over time, then time is money”.  It still doesn’t make sense; the simple sentence “Time is money” mentions nothing about stock increasing in value.

No,

“Time is money” on its own, makes no sense, and is an abbreviation of

“The time I spend working is worth money”

The challenge is to have this ‘mapping’ in the chatbot.

Just like “nothing is better than a million dollars” does not mean the word ‘nothing’ is better than a million dollars, but short for “There exists nothing which is better than a million dollars”.
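One simple (and admittedly brittle) way to get such a mapping into a bot is a lookup table that expands idiomatic shorthands before any literal inference runs. The entries below are just the two examples from this thread; a serious system would need something far more general:

```python
# Idiom -> expanded meaning, consulted before literal parsing.
# Entries are illustrative only.
IDIOMS = {
    "time is money": "the time i spend working is worth money",
    "nothing is better than a million dollars":
        "there exists nothing that is better than a million dollars",
}

def expand(sentence):
    """Return the expanded form if the sentence is a known idiom."""
    key = sentence.lower().rstrip(".")
    return IDIOMS.get(key, sentence)

print(expand("Time is money"))
print(expand("A terrier is a dog"))  # unchanged: not an idiom
```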

 

 