C R Hunt - Sep 8, 2010:
Nathan Hu - Sep 7, 2010:
It’s only an example. There are many different applied problems.
Some simple examples:
Tom has 5 apples. David has 3 apples. How many apples do they have?
Tom had 5 apples. He gave David 2 apples. How many apples does Tom have now?
There are 6 boxes. Each box has 5 apples. How many apples are in the boxes altogether?
My intuition is that developing this ability in a bot would be simpler than most NLP problems. (Granted, I have not attempted it.) The reason I think so is that there is generally a one-to-one mapping from natural language phrases to mathematical symbolism. Some simple string pattern matching would probably do the trick. There would be a lot of patterns to input, but it would be doable.
Sorry, but I very much disagree with you on this point. Just because the natural language (NL) statements deal with mathematical concepts doesn’t mean there is a fixed number of them that you could create a template/pattern to match against. There are probably as many NL statements about math as there are about any other topic (an astronomical number). I could build a complex/compound sentence that links together any number of NL sentences with conjunctions, nested to any depth with subordinate conjunctions and subordinate clauses, all dealing with mathematical problems. Good luck parsing that with simple pattern matching techniques.
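To make the objection concrete, here is a minimal sketch (Python, with made-up patterns of my own) of the kind of simple string pattern matching being proposed. It handles the three apple examples above and not a word more; every new phrasing needs yet another hand-written rule.

import re

# Hypothetical patterns: each maps one exact phrasing of a word problem
# to the arithmetic it calls for.
PATTERNS = [
    (re.compile(r"(\w+) has (\d+) apples\. (\w+) has (\d+) apples\. How many apples do they have"),
     lambda m: int(m.group(2)) + int(m.group(4))),
    (re.compile(r"(\w+) had (\d+) apples\. \w+ gave \w+ (\d+) apples\. How many apples does \1 have"),
     lambda m: int(m.group(2)) - int(m.group(3))),
    (re.compile(r"There are (\d+) boxes\. Each box has (\d+) apples\. How many apples"),
     lambda m: int(m.group(1)) * int(m.group(2))),
]

def solve(problem):
    for pattern, rule in PATTERNS:
        match = pattern.search(problem)
        if match:
            return rule(match)
    return None  # no pattern matched -- the usual outcome with open-ended NL

print(solve("Tom has 5 apples. David has 3 apples. How many apples do they have?"))  # 8
print(solve("There are 6 boxes. Each box has 5 apples. How many apples are in the boxes altogether?"))  # 30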
C R Hunt - Sep 8, 2010:
What would be more interesting is developing a bot that could be fed word problems and their solutions, and learn through experience what patterns often arise and what each corresponds to mathematically. Once it has read many questions, the bot will easily see which word combinations show up repeatedly. Then, by matching different rules to each word combination until the correct answer is obtained, it can form a guess at the mapping. With each problem, the confidence in that mapping would improve.
Umm... I actually thought of that 20 years ago, and it is no easy programming task. Perhaps someday someone will code something like that, and I’d like to see it in action, but me, personally, I’m taking a much more direct approach.
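For what it’s worth, a rough sketch of that trial-and-error idea could look something like the following (hypothetical Python, nowhere near a real implementation): score candidate word-cue-to-operation mappings against solved examples and keep a running confidence for each.

import re
from collections import defaultdict

# Hypothetical operations the learner can try when a problem contains two numbers.
OPERATIONS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

# Running confidence score for each (cue word, operation) pairing.
confidence = defaultdict(int)

def learn(problem, answer):
    # Pull out the numbers and a few cue words, then reward every
    # (cue, operation) pair that reproduces the known answer.
    numbers = [int(n) for n in re.findall(r"\d+", problem)]
    if len(numbers) != 2:
        return
    cues = [w for w in ("gave", "each", "altogether") if w in problem.lower()]
    for name, op in OPERATIONS.items():
        for cue in cues:
            if op(numbers[0], numbers[1]) == answer:
                confidence[(cue, name)] += 1   # "reward"
            else:
                confidence[(cue, name)] -= 1   # "punishment"

learn("Tom had 5 apples. He gave David 2 apples. How many apples does Tom have now?", 3)
learn("There are 6 boxes. Each box has 5 apples. How many apples are in the boxes altogether?", 30)
print(sorted(confidence.items(), key=lambda item: -item[1]))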
C R Hunt - Sep 8, 2010:
Victor Shulist - Sep 7, 2010:
Also, I don’t care to design a bot that works like a human. A computer performs math much, much better than a human, and completely differently. Thus, my bot will think of itself as a computer program, work like a computer program, and be educated and learn like a computer program.
I agree! I think the focus on human-like AI is grounded in the idea that “hey, this is a system we know is intelligent, let’s reproduce it!” But that is a naive viewpoint and doesn’t take into account (a) the known advantages computers have over the human brain and (b) the known limitations of computers/robotics compared to the brain/body. We’ve got to work with the tech we’ve got.
Absolutely! Current digital computers are von Neumann machines (http://en.wikipedia.org/wiki/Von_Neumann_machine), which were basically designed to be programmable calculators. I have spent much time coding my algorithms to deal with the ‘serial’ nature of this type of architecture.
There are so many permutations in NLP that my initial processing times were hundreds of times slower than they are now; I had to employ many programming shortcuts and tricks to cope with this serial processing nature and the ‘von Neumann bottleneck’ (http://c2.com/cgi/wiki?VonNeumannBottleneck).
C R Hunt - Sep 8, 2010:
Victor Shulist - Sep 7, 2010:
A computer program is not a human child, and this idea of teaching it with ‘rewards’ and ‘punishments’ is ridiculous to me.
I always took “reward” to mean an external sign that it should increase its confidence in a given algorithm and “punishment” as a sign that it should decrease that confidence. Were the terms originally meant to be anthropomorphized in that way?
Yes, that is how I understood it as well: some variable in memory just gets incremented on success and decremented on failure. But, again, I take a more direct approach than the trial-and-error concept.
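In code terms it is nothing more exotic than something like this (a toy sketch):

class RuleConfidence:
    # A "reward" is just an increment, a "punishment" just a decrement.
    def __init__(self):
        self.score = 0

    def reward(self):   # external sign the rule produced a correct result
        self.score += 1

    def punish(self):   # external sign the rule failed
        self.score -= 1

rule = RuleConfidence()
rule.reward()
rule.punish()
rule.reward()
print(rule.score)  # 1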
C R Hunt - Sep 8, 2010:
Victor Shulist - Sep 7, 2010:
A ham sandwich is better than nothing.
Nothing is better than a million dollars.
Thus, a ham sandwich is better than a million dollars.
Thinking about issues like this is rather disheartening. ...But it could make for some interesting chatbot conversations!
Not at all, the trick here is the second statement:
Nothing is better than a million dollars.
The bot needs to know that this really means
There exists nothing that is better than a million dollars.
Based on the meanings of those words.
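One way to make the two readings explicit, written here in first-order-logic style LaTeX (my own notation, not anything a bot necessarily stores):

% "A ham sandwich is better than nothing":
%   having a ham sandwich is better than having nothing at all.
\mathrm{Better}\big(\mathrm{have}(\text{ham sandwich}),\ \mathrm{have}(\text{nothing})\big)
% "Nothing is better than a million dollars":
%   there is no x that is better than a million dollars.
\neg \exists x\; \mathrm{Better}(x,\ \text{a million dollars})
% The syllogism fails because "nothing" is an (absent) object in the first
% sentence but a quantifier in the second.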
There are so many issues with NLP. One of the most annoying for chatbot developers is when humans shorten a sentence, leaving out important words and just using the word “is”. I see it so much I’m getting sick of it…
“Time is money”
This doesn’t really mean that the words ‘time’ and ‘money’ are synonyms. I don’t walk into a store and say, “I’ll give you 5 minutes for that watch”.
Yes, you could mean 5 minutes of work, 5 minutes of entertainment, whatever, but you’re not saying that; you’re just saying “TIME is money”. That sentence makes no sense. If “Time is money” were taken literally to mean the words time and money are synonymous, then I wouldn’t bother going to work tomorrow morning; I would just stay in bed, wait, pass time, and collect money... lol, I wish.
Someone may say, “If my stock increases in value over time, then time is money”. That still doesn’t make sense; the simple sentence “Time is money” says nothing about stock increasing in value.
No,
“Time is money” on its own makes no sense; it is an abbreviation of
“The time I spend working is worth money”
The challenge is to have this ‘mapping’ in the chatbot.
Just like “nothing is better than a million dollars” does not mean the word ‘nothing’ is better than a million dollars; it is short for “There exists nothing which is better than a million dollars”.
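One way to hold such a mapping is just a table from the abbreviated surface form to the fuller sentence the bot actually reasons over (a toy sketch with made-up entries; a real bot would obviously need far more than a lookup table):

# Hypothetical lookup from abbreviated "X is Y" sentences to the fuller
# statements they stand for.
EXPANSIONS = {
    "time is money":
        "The time I spend working is worth money.",
    "nothing is better than a million dollars":
        "There exists nothing which is better than a million dollars.",
}

def expand(sentence):
    # Return the expanded reading if we know one, otherwise the sentence itself.
    key = sentence.strip().lower().rstrip(".")
    return EXPANSIONS.get(key, sentence)

print(expand("Time is money"))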