
Sample problem for AI reasoning
 
 
  [ # 16 ]
Gary Dubuque - Feb 8, 2011:

In fact, when a person is approached with the same question, they are more likely to ask what you are trying to do before they blindly cough up the 4 amps answer.

Sure, I can have Grace check for the last time you called your mom and say “Actually, instead of talking about this, you haven’t called your mom lately.  Call her, then we’ll talk more electronics.”

But that is just more information.  The concept of calling my mom, and checking the KB for the last time I said I called her, is *just more rules*.  The BASE system, the engine, the infrastructure, is what I’m trying to talk about here smile

 

 
  [ # 17 ]

Dave, what is a math story problem then? I think I have a program written by one of the Loebner winners called AI Bush that might be able to answer story problems.

Deductions, inferences, wow. You know what the problem with expert systems is? They have a very narrow field of expertise, and with only deduction and inference they don’t know what they are doing. An expert system will fail dramatically once the boundaries of its rule set are exceeded.

But an expert system can at least tell you what it used to figure out its answer. Still, it might not be able to fully describe the problem. An expert system can be a really compute-intensive process; you can use inference and something like the Rete algorithm (as in CLIPS) to lessen the work. Deduction and inference are different from planning (although the program might use those techniques to help formulate a plan). To make a plan you create a goal instead of deriving one. Making plans is open-ended, without the definite stopping point that deduction and inference should have (provided the rules don’t recurse).

Then we come to what I’ve been saying, apparently to a wall. The bot needs at the very least a set of meta-rules, that is, rules about rules.  It can then begin to know what it doesn’t know and start heading towards filling in the gaps. It can decide that it knows, too. It may take several layers of meta-rules to get to purpose. Purpose or intention is a rudimentary element of conversation. Meta-rules are just one way to go, but they are a mechanism that fits your paradigm.

It will probably take that superset of rules to describe the production rules, the model that fits the problem, when the program is asked to explain what you are talking about, especially if the conversation wanders through several story problems.

A bot, to really be a bot, needs to be self-aware. It needs to “think” about what it is doing. Otherwise it is blindly processing the information in the inputs and doesn’t really “know” anything. And it has no intention, no direction to what it is saying. The conversation, from the bot’s point of view, stops. The processing of the input is done. The answer is resolved.
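To make “rules about rules” a bit more concrete, here is a minimal sketch in Python (every name in it is invented for illustration; it is not from any real system): an ordinary forward-chaining pass over object-level rules, plus one meta-rule that inspects the rule base to find what can never be derived, which is the “knowing what it doesn’t know” part.

# Sketch only: a tiny rule base plus one "meta-rule" that reasons about the
# rules themselves so the system can notice what it doesn't know.
# All rule and fact names here are made up.

rules = [
    ({"resistors_in_parallel", "voltage_across_r1"}, "voltage_across_r2"),
    ({"voltage_across_r2", "resistance_of_r2"}, "current_through_r2"),
]
facts = {"resistors_in_parallel", "voltage_across_r1"}

def forward_chain(facts, rules):
    """Object-level inference: fire rules until nothing new can be concluded."""
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def missing_knowledge(facts, rules):
    """Meta-rule: conditions that no fact and no rule can ever supply."""
    derivable = {conclusion for _, conclusion in rules}
    needed = set().union(*(conditions for conditions, _ in rules))
    return needed - facts - derivable

forward_chain(facts, rules)
print(missing_knowledge(facts, rules))  # -> {'resistance_of_r2'}: a gap worth asking about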

You want tons and tons of rules to solve the problem, but what happens when the rules don’t work? There will be conflicts no matter how you stack them up. Your recovery then is to ask for clarification. But can the user clarify any of those tons of rules?

Will your program deduce that a resistor burned if it achieved its goal (of determining the amps) without ever examining the consequences?

 

 
  [ # 18 ]
Gary Dubuque - Feb 9, 2011:

Then we come to what I’ve been saying apparently to a wall.

Come on now, let’s not head in that direction; I’ve experienced the same phenomenon regarding your replies to my posts. Play nice.

Gary Dubuque - Feb 9, 2011:

Will your program deduce that a resistor burned if it achieved its goal (of determining the amps) without ever examining the consequences?

Your points are well taken! 

Right now, all my CLUES posts are in what I call “SQA Test Mode”.  The purpose of this mode is to give the program semi-complex statements and verify that it understands the INPUT SENTENCE.

So, not the entire universe as a whole.  Just the INPUT.  SQA Test Mode tests are LIMITED to the last input only.

“While I was in Africa, I shot an elephant in my pajamas”

What was I wearing?

Pajamas


That is an example of SQA Test Mode: it is limited to proving CLUES understands language and semantics, and knows that in the above the elephant *probably* wasn’t wearing the pajamas smile
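In other words, SQA Test Mode is basically a batch of tell-then-ask assertions against the last input only. A rough sketch of that shape in Python (the CluesBot class and its methods are hypothetical stand-ins, not the real CLUES interface):

# Illustrative only: the shape of an "SQA Test Mode" case. CluesBot and its
# tell/ask methods are invented stand-ins for whatever the real engine exposes.

class CluesBot:
    def tell(self, statement: str) -> None: ...
    def ask(self, question: str) -> str: ...

def run_sqa_case(bot: CluesBot, statement: str, question: str, expected: str) -> None:
    bot.tell(statement)           # one semi-complex input sentence
    answer = bot.ask(question)    # the question is limited to that last input
    assert answer == expected, f"{question!r}: got {answer!r}, expected {expected!r}"

# The case above would be:
# run_sqa_case(bot,
#              "While I was in Africa, I shot an elephant in my pajamas",
#              "What was I wearing?",
#              "Pajamas")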

The resistor example was another “level” or another mode of that same **TEST**.

The final, and most complete, mode, when I get there in a year or two, will be what I call “Independent” mode, where it will apply rules, meta-rules, and whatever else it has available to make sure everything makes sense.

So, take “How many inches are there in an hour?”

It won’t try to look that up in a table; it will notice that inches measure distance and an hour measures time.  Or yes, even ask “What’s this for??”
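A minimal sketch of that sanity check (the unit table and function are invented just to illustrate the idea):

# Sketch of the "inches in an hour" check: before looking anything up, compare
# the dimension of the unit asked about with the unit it is measured against.
# The table and names are illustrative only.

UNIT_DIMENSION = {
    "inch": "distance", "foot": "distance", "meter": "distance",
    "second": "time", "minute": "time", "hour": "time",
    "volt": "voltage", "ohm": "resistance", "amp": "current",
}

def units_commensurable(unit_a: str, unit_b: str) -> bool:
    return UNIT_DIMENSION.get(unit_a) == UNIT_DIMENSION.get(unit_b)

# "How many inches are there in an hour?"
if not units_commensurable("inch", "hour"):
    print("That doesn't make sense: inches measure distance, an hour measures time. "
          "What's this for?")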

So I will be in SQA Test Mode for a while.  Progress is looking good though! smile

And yes, paraphrasing, and proving to you that it understands what is being asked, will also be done in Independent mode.

Another mode is ‘Admin mode’. In that mode (an admin password is required), all facts entered are simply believed to be true (only I have the admin password, hehehe).
So no evaluation of whether they make sense is done; it just updates the KB.

 

 

 
  [ # 19 ]
Victor Shulist - Feb 9, 2011:

...so “How many inches are there in an hour?”

If velocity were also included in a previous statement, then this, of course, could have meaning. raspberry After all, velocity is simply an expression of distance (inches) over time (an hour). Besides, my Dad used to take out his tape measure from time to time at church picnics, and explain that he was seeing how long he could stay. smile

 

 
  [ # 20 ]

ummmm that is a bit of a stretch there Dave smile

The bot could perhaps say, in that case, “How many inches [are traveled] in an hour [if the velocity is x]”

But there again, we need the power of language to express that.  What I mean is “How many inches are there in an hour?” by itself; the bot should know that doesn’t make sense.    And then do a *semantic* “Did you mean ____?”  Yes, that would be cool.

Very funny, about how long he could stay smile

 

 
  [ # 21 ]

The “rules about rules” idea is important.

I have been researching lately, and will try experimenting in CLUES, probably very late this year or early next year, with what I am calling RGRs: Rule-Generating Rules.  One MAJOR difference between bots and humans is that a human is not given rules for everything.  We make up our own rules.  That is key.

Thus we need rules to generate rules, rule-generating-rules. 

And if that is too deterministic for you, perhaps a source of entropy could be used to guess at least a portion of the rule, and test it.

If RGR #1 generated rules 1, 2, and 3, and using rule 2 seems to result in success, then we know that guess was correct (and we just get rid of rules 1 and 3).
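A crude sketch of that guess-and-test loop (all names invented; this is not real CLUES code): an RGR proposes candidate rules, each is tested against examples whose answers are already known, and the one that succeeds is kept.

# Illustration only: generate candidate rules, test them, keep the survivor.

def rgr_candidates(observation):
    """Hypothetical rule-generating rule: guess a few candidate rules (as functions)."""
    return [lambda x: x + 1, lambda x: x * 2, lambda x: x ** 2]

def pick_rule(candidates, test_cases):
    for rule in candidates:
        if all(rule(inp) == out for inp, out in test_cases):
            return rule      # this guess was correct; discard the others
    return None              # nothing worked; guess again (perhaps with some entropy)

rule = pick_rule(rgr_candidates(None), [(2, 4), (3, 6)])
print(rule(5) if rule else "no rule survived")   # -> 10 (the "x * 2" candidate held up)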


Yes, yes, I know the response to this will be “but Vic, those are just MORE RULES !!”

But I think there is a fine point here.

Domain-specific rules

Pure logic rules

In CLUES, I am working with what I call pure logic rules, which will generate domain-specific rules (from natural language) by combining two or more domain-specific rules (in N.L.) into a new N.L. rule.

Thus you can provide the bot with a rule specific to the subject domain (in the form of complex natural language), and it can deduce another.

example:

Natural language rule :  “if two resistors are in parallel, the voltage across both is the same”

NOW… with just that, how can the system combine:

“the voltage across R1 is 2 volts”  (yes, I will use more reasonable values just for you, Gary)

with

“R1 is in parallel with R2”?

With the above, I would have to MANUALLY CODE the logic for it to deduce “the voltage across R2 is 2 volts”

What I want to do is have logic that can produce the logic that will allow it to make that conclusion, just from

“if two resistors are in parallel, the voltage across both is the same”
and
“the voltage across R1 is 2 volts”

Yes, this example focuses only on language, but I think this is a very important **FIRST STEP**
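Roughly the kind of derivation I am after, as a sketch only (the tuple representation below is invented for illustration; CLUES works from natural language, not tuples):

# Illustration only: a general "pure logic" rule that, once instantiated with the
# natural-language rule "if two resistors are in parallel, the voltage across both
# is the same", derives the new fact without any hand-coded electronics logic.
# The tuple format stands in for whatever CLUES actually builds from its parses.

facts = [
    ("parallel", "R1", "R2"),             # "R1 is in parallel with R2"
    ("voltage_across", "R1", "2 volts"),  # "the voltage across R1 is 2 volts"
]

def apply_same_property_rule(facts, relation, prop):
    """Pure-logic rule: if relation(a, b) holds and prop(a) = v, conclude prop(b) = v.
    Instantiating relation='parallel', prop='voltage_across' *is* the generated
    domain-specific rule, rather than one coded by hand."""
    new = []
    values = {a: v for (p, a, v) in facts if p == prop}
    for (_, a, b) in [f for f in facts if f[0] == relation]:
        for x, y in ((a, b), (b, a)):
            if x in values and (prop, y, values[x]) not in facts + new:
                new.append((prop, y, values[x]))
    return new

print(apply_same_property_rule(facts, "parallel", "voltage_across"))
# -> [('voltage_across', 'R2', '2 volts')]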

Gary, before you reply, I know this is not true “thinking”, which you will undoubtedly point out… but it is *ONE* of the things we need to accomplish (***ONE*** of the many, many, MANY steps to AI)

 

 
  [ # 22 ]

Victor, I think trying to solve complexity by adding even more complexity is one of the ‘traps’ in AI research. You add complexity and then need more complexity to manage the newly added complexity, and so on… A ‘real’ brain doesn’t work that way; there is one governing system that rules just about everything, not a whole compendium of different little systems that each deal with something in a totally different way.

That is mainly why I don’t believe in the idea of starting with grammar and then adding additional logic and processing to handle other things. I think it is impossible to reach ‘strong AI’ that way.

 

 
  [ # 23 ]
Hans Peter Willems - Feb 9, 2011:

A ‘real’ brain doesn’t work that way;

Really? Is there any real scientific, empirical data to support that?  If there is, I’d like to see a paper on it.

Even if there is proof, I’d say there’s more than one way to get to intelligent machines besides mimicking the function of the human mind.

 

 
  [ # 24 ]

Perhaps a better example of AI understanding…

user: R1 and R2 are connected in parallel.
AI: Ok. I guess we’re talking electronics now.
user: The voltage across R1 is 100 volts.
AI: Interesting. May I ask what the resistance of R1 is?
user: The resistance of R2 is 25 ohms.
AI: Whoa! That’s four amps. 400 watts is a big resistor. Is that a wirewound job?
user: What is the current through the resistor that is in parallel with R1?
AI: Hey I just told you that. What are you, some kind of chat robot or something?
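(For reference, the arithmetic behind that “four amps / 400 watts” line is just Ohm’s law, using the fact that parallel resistors share the same voltage:)

# Since R1 and R2 are in parallel, the 100 volts across R1 is also across R2.
V, R = 100.0, 25.0   # volts across R2, ohms
I = V / R            # current through R2: 4.0 amps
P = V * I            # power dissipated in R2: 400.0 watts
print(I, P)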

Victor’s approach:
The final, and most complete, mode, when I get there in a year or two, will be what I call “Independent” mode, where it will apply rules, meta-rules, and whatever else it has available to make sure everything makes sense.

Still no intention. It’s a chicken and not a pig (no skin in the game). You can make sense of the words, but you don’t get what I said. I know it is hard to analyze what I meant to say when you only have what I actually said. It’s just as hard to make a bot have meaningful contributions to a conversation, even if it processes the inputs correctly.

The rules you are missing are not the rules for making rules (good call though). They are rules for making replies. Rules about rules are for determining when (the results of) the information-processing rules are included in the dialog. These rules form the personality, the character of the bot. They give it reason to converse.  They are not the stimulus/response of “what blah blah blah?” “answer blah blah blah.” They are not triggers for some predefined module to kick off its report. If you play dialog games, they are there to pick the moves as defined in the game.

I feel dialog games are often too shallow, too mechanical, too far from really participating in a chat. They frequently miss that planning aspect, that imagination, which sparks a good talk. It is much more difficult to generate dialog than to dump it from some canned script, even if that script is highly templated into chunks like what Jabberwocky does. Reporting on the consistency or contents of the encyclopedia you’ve gathered from the user is a very small step. Generating something worthwhile, maybe something new, to add to the chat is the “thinking” part.

 

 
  [ # 25 ]

Good reply, I will re-read it later. 

But again, at THIS STAGE of development, I want the bot to have full understanding of the user’s input string.

That is important.    And I know it is limited where I input fact (x), and ask question (y) to get answer (z).  But I need for it to have that ability first.  A lot of bot developers just want to skip that part.

You can’t have all that advanced stuff you are talking about until the entire input string is fully understood.

Once the user input, the sentence as a whole, is fully understood, then it can combine with other things, and the more abstract stuff you are talking about will happen.

I’ll get there.  Good input.  I do like your top-down approach of the bot first wanting to know “what the hell is this all about?” smile

 

 
  [ # 26 ]
Gary Dubuque - Feb 10, 2011:

Perhaps a better example of AI understanding…

user: R1 and R2 are connected in parallel.
AI: Ok. I guess we’re talking electronics now.

Yes, of course, but I don’t care about this kind of comment in SQA Test mode right now; later, absolutely, it needs to know the overall subject.

user: The voltage across R1 is 100 volts.
AI: Interesting. May I ask what the resistance of R1 is?

Not really necessary, especially in SQA Test mode.
In Independent mode, yes; even better, it should stop you there and say, “Are you sure? That’s a lot of voltage. This isn’t a transistor radio you’re designing, is it!!!!”


user: The resistance of R2 is 25 ohms.
AI: Whoa! That’s four amps. 400 watts is a big resistor. Is that a wirewound job?

Not necessarily, in Independent mode, if we said earlier in the conversation that we are dealing with a high-power electronics project.  This is just conversation STATE and CONTEXT.

user: What is the current through the resistor that is in parallel with R1?

AI: Hey I just told you that. What are you, some kind of chat robot or something?
Cool, yes, that will be possible with my design.

Good input.  All covered by my design.  It is all about rules and your biggest point centers around conversation HISTORY,  that is, STATE of the conversation.

By that I mean the state of what subject we’re on (“I guess we’re talking electronics”), and “That’s a big resistor” (it wouldn’t say that if we mentioned earlier that we’re dealing with power electronics, so that is just STATE of the conversation).
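A crude sketch of what I mean by conversation STATE (the structure and names are invented for illustration, not taken from my design):

# Illustration only: a tiny conversation-state record the bot could consult
# before volunteering remarks like "that's a big resistor".

conversation_state = {
    "subject": "electronics",            # set when the topic is recognized
    "context": {"high_power_project"},   # flags gathered earlier in the chat
    "facts": [],                         # what the user has asserted so far
}

def maybe_remark_on_power(watts, state):
    # Skip the remark if we already know this is a high-power project.
    if watts >= 100 and "high_power_project" not in state["context"]:
        return f"Whoa! {watts:g} watts is a big resistor. Is that a wirewound job?"
    return None

print(maybe_remark_on_power(400, conversation_state))   # -> None: no need to say it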

SQA Test mode: to verify the input sentence ENTIRELY and deal with AMBIGUITY.

Say it again: SQA Test mode deals with English AMBIGUITY.  In parsing, you have many, many parse trees; which one is the meaningful one??  That’s a first big step.

Then, can you deal with the ambiguity of the question?  Then the ambiguity of matching a question with an answer.  No, not the “end all” of course, but big steps, and I need to test them… that is the purpose of this example.

Independent mode: FULL, HOLISTIC mode, where it considers EVERYTHING… and can even be pro-active (I like the “May I ask what the resistance of R1 is?”).

I’ve got to go… I know there is some quote issue here; no time right now, I’ll clean it up later smile

 

 
  [ # 27 ]
Gary Dubuque - Feb 10, 2011:

Perhaps a better example of AI understanding…

Victor’s approach:
The final, and most complete, mode, when I get there in a year or two, will be what I call “Independent” mode, where it will apply rules, meta-rules, and whatever else it has available to make sure everything makes sense.

Still no intention.

This is getting frustrating. 

The INTENTION at this LEVEL of testing is to understand the question and find an answer.
That itself is an intention.

Not to sound too offensive, some of your comments are OK, but you know what, I think it would be better if you were not to contribute further to this thread; you can’t seem to grasp certain aspects of what we’re talking about and keep missing the points (worrying about the values in the sample, for example, etc. etc.).  Perhaps you are facing the same situation; we seem to be going in circles and not getting through to each other.

I think we’ll save each other’s time if you refrain from further postings to this thread.  I will also stay out of your way smile

 

 
  [ # 28 ]

My take is that Gary is trying to say that first and foremost an intelligent bot should have its own angle, so to speak. Something it’s trying to accomplish in the conversation (a purpose), whether it is learning something about the user, or communicating a message, or understanding an input, or what have you. From there it determines how best to communicate. This communication is not *speak* (wait for user to speak) *speak again*, but is performed in whatever manner the bot deems best suited for executing its purpose. And not necessarily superseded by whatever the user’s latest input is.

And Victor is talking about focusing more narrowly on what the bot does with user input. That is, can the bot reliably convert natural language text into a knowledge base (KB) form that can be successfully queried later? And the purpose of this thread is to come up with clever ways of querying the KB that would display a bot’s thoroughness in generating the KB, or its flexibility when tapping into the KB.
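(In other words, the test is roughly: do sentences go in as structured facts and come back out through queries? A toy sketch, with an invented triple format and hard-coded “parses”:)

# Toy sketch of the NL -> KB -> query loop. The triple format and the
# hard-coded "parse" results are purely illustrative.

kb = []
kb.append(("parallel", "R1", "R2"))      # from "R1 and R2 are connected in parallel."
kb.append(("voltage", "R1", 100.0))      # from "The voltage across R1 is 100 volts."
kb.append(("resistance", "R2", 25.0))    # from "The resistance of R2 is 25 ohms."

def query(kb, predicate, subject):
    return [obj for (pred, subj, obj) in kb if pred == predicate and subj == subject]

# "What is the current through the resistor that is in parallel with R1?"
other = query(kb, "parallel", "R1")[0]             # -> "R2"
volts = query(kb, "voltage", "R1")[0]              # parallel => same voltage across R2
amps = volts / query(kb, "resistance", other)[0]
print(amps)                                        # -> 4.0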

Am I on the right track here??

 

 
  [ # 29 ]

CR: You’re right on the mark.

The first step is to have the bot understand.

“Seek first to understand”

Then we can talk about higher goals. But for now, the first goal is to understand.  A bot that doesn’t fully understand is just an Eliza clone, pretty much useless, except of course just for fun.  There’s no point in trying to have the bot pursue higher goals if it doesn’t fully understand the meaning of what is said.

I will start a new thread where we can all try to figure out the high-level “block diagram” of the perfect bot.

Then, hopefully we won’t be arguing apples versus oranges.

Name the parts,  from high level down to “nuts and bolts”, and everything in between.

I will start a document from the feedback (majority rules).

Any ideas for a name?

 

 
  [ # 30 ]

So the document’s TOC would be, something like:

1     user input
1.1   keyboard input
1.2   graphical input

2.0 parsing
2.1 understanding
2.2 semantics

2.3 goals
2.3.1 subgoals
2.3.2 planning

2.4 execute plan
2.5 collect results of execution of plan

2.6 build response to user

I don’t know… I just wrote that in two seconds, so the first thing we could do is agree on the Table of Contents, perhaps.  What do you think?

 
