
Sample problem for AI reasoning
 
 

Please bear with me, folks. This is my first attempt at a major split of threads here. What follows is a discussion of Victor’s electronics example for GRACE, taken from This Thread:

If anything seems out of place, or if I missed anything, please refer to that thread, and then let me know. :)

 

 
  [ # 1 ]
Merlin - Feb 4, 2011:

(If Skynet-AI can solve a math problem would you say it “understands” and “reasons” about it?).

Probably not. It depends on how flexible it is; I’d have to test it. :)

Here is a good example of what I would consider truly understanding and reasoning via NLP.  It is an electronics problem.

user: R1 and R2 are connected in parallel.
user: The voltage across R1 is 100 volts.
user: The resistance of R2 is 25 ohms.
user: What is the current through the resister that is in parallel with R1?

AI : 4 amperes

So you have natural language understanding, correlation of facts, and deduction of the answer. The system had to simplify “the resister that is in parallel with R1” to mean “R2” (since we only said one resistor was in parallel with it), then deduce that, since R1 and R2 are in parallel, R1’s voltage = R2’s voltage = 100 volts, then look up the values, and then do the actual simple math (Ohm’s law, for simple DC: current = voltage divided by resistance).
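
To make those steps concrete, here is a minimal Python sketch of that deduction chain. The fact store and the lookup order are hypothetical illustrations only, not any existing bot’s code:

    # Hypothetical sketch of the deduction chain described above.
    # Facts are stored as (subject, property) -> value entries.
    facts = {
        ("R1", "parallel_with"): "R2",  # "R1 and R2 are connected in parallel."
        ("R1", "voltage"): 100.0,       # "The voltage across R1 is 100 volts."
        ("R2", "resistance"): 25.0,     # "The resistance of R2 is 25 ohms."
    }

    # Step 1: resolve "the resister that is in parallel with R1" -> R2,
    # the only resistor stated to be in parallel with R1.
    target = facts[("R1", "parallel_with")]

    # Step 2: components in parallel share the same voltage, so fall back
    # to R1's voltage when R2's is not given directly.
    voltage = facts.get((target, "voltage"), facts[("R1", "voltage")])

    # Step 3: Ohm's law for simple DC: I = E / R.
    current = voltage / facts[(target, "resistance")]
    print(f"{current:g} amperes")  # -> 4 amperes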

 

 
  [ # 2 ]

@Gary

Gary Dubuque - Feb 5, 2011:

So Victor, when you say you want the bot to understand, do you want it to reiterate what you told it, or do you want the bot to paraphrase in its own words from what it knows concerning what you told it? I’m guessing the latter is what Hans is wanting to attempt.

Paraphrase in its own words: I can do this now.

By “understand” I mean:

Take my input, in ANY words, in ANY sentence structure; take it apart (figuring out for itself how the input sentence is structured); find what the whole thing means as a whole, based on all the meanings of its parts; THEN figure out what combination of its logic modules to run, that is, develop a plan for how to react to it; then execute that plan, and produce a result.

By the way, I hope you are not suggesting that….

user: R1 and R2 are connected in parallel.
user: The voltage across R1 is 100 volts.
user: The resistance of R2 is 25 ohms.
user: What is the current through the resister that is in parallel with R1?
AI : 4 amperes

is just reiterating what I told it? I’m pretty sure that is a bit more than reiterating.

And that example is simple, I can think of much more advanced examples.

Keep in mind, I am not attempting “strong AI” or “true AI” or whatever you want to call it, and I’m not trying to discuss any philosophy of what “knowing” really means. My goal is to develop a bot that can:

1) carry a conversation . . . a GOOD ONE . . . with understanding, as mentioned above. Not “my name is bob, what is yours” answered with “hello bob, what is yours” lol
2) learn via NLP
3) be useful, helping users find an answer or, by developing its own execution plan, deduce one, and/or be fun to use.

I’m after a practical, useful system. And if my requirements or goals above are not considered “true AI” or “strong AI”, I don’t care!! I want a bot that isn’t fooled into silly things like

me: my name is bob what’s yours
ai: hello bob what’s yours

I mean, come on, we have to accomplish that before we discuss whether a computer has true knowledge and all this other stuff. One step at a time, folks.

Too much philosophy for me in this thread; I’m getting back to work on something real :) Something better than the simple “stimulus -> response” bots we have now.

Thanks for the comments

 

 
  [ # 3 ]

Victor Shulist, Feb 5th
By the way, I hope you are not suggesting that….

user: R1 and R2 are connected in parallel.
user: The voltage across R1 is 100 volts.
user: The resistance of R2 is 25 ohms.
user: What is the current through the resister that is in parallel with R1?
AI : 4 amperes

is just reiterating what I told it?

Yes I am saying that’s a reiteration. There is no “thinking” involved there; it is a stimulus/response exchange. Nothing is added other than resolving the equation, which is only information processing. Hardly what I’d want to happen in any conversation I’d have with a chat bot. If I needed to figure out that kind of stuff, I’d visit Wolfram (which isn’t a chat bot).

Those researchers who have used things like Markov chains to derive grammar from large samples of text would have a field day with your suggestion that abstract mathematical models aren’t practical but mostly philosophy. The fact is, as you generate more and more rules, you’ll use more and more of these “ranking” constructs to determine what’s going on, statistically picking the best fits. But let’s take another example of a learning bot with runtime data that would defy human “untangling”. There’s one called Jabberwacky which, while it might miss some fine points that you are focusing on in your approach, does exhibit a suspension of disbelief in its ability to “know” what you are entering.

Hans Peter Willems, Feb 5th
Me: it is cold outside
AI: I suggest you put on something warm then

As I pointed out before, Cyc has millions of these microtheories (rules, assertions, etc.) that might help with the kind of answers Hans just cited (about putting on something warm). The hurdle to overcome is the part about simulation (sometimes I refer to this as storytelling). Going from being cold outside to putting on something warm requires a sense of planning. Current technology usually builds those plans from a set of known operations or functions. Perhaps you can start with a set like instincts, but soon, when combinations of those instincts are formed and saved as learned knowledge, the identifiers that we use will become lost to the machine-generated references of those collections. Practically, we won’t be able to follow what’s going on like we can now with grammar and concepts. Language is not very good at this level of operation.

It is these patterns of behavior that you’ll eventually need to compile. Perhaps you can use some method like Jabberwacky’s, but on the abstract information you extract from the inputs, filtered to events of causes (or triggers) and effects. The deeper these chains of reasoning can go without losing the user, the more interesting the conversation will be, I believe. That is, of course, as long as the discussion can stay grounded between the participants. Just having the instincts doesn’t guarantee the appropriate ones will be applied to facilitate continued satisfaction. The most common feature of current bots is the user eventually becoming frustrated sometime during the discourse. Most chat bots simply react with conversational units of canned computations (Victor’s reactors) instead of generating the story that paraphrases (using the generated story and not a translation of the input) and maybe developing that story further to contribute new ideas to the conversation (like your example).

As you can surmise, this is more practical information than it is theory, at least toward the efforts that I understand interest you. The learning part is the hard stuff to explain, mainly because once the bot has acquired significant knowledge on its own, we don’t have a very good way to communicate with the internal operations except through “knowledge” the computer has learned in order to explain itself. As long as we hand-enter the knowledge, we can build an ontology we understand (if one can actually accomplish that monumental feat; thanks to Cyc for putting the basics in perspective, because, as I suggested earlier, I think Cyc is more passive information than active knowledge. Cyc can’t tell you how to bake a cake. Do you know of a repository that has the steps for fixing a flat, or driving a car, or taking a trip on a train, or flying a kite, etc.?)

 

 
  [ # 4 ]
Gary Dubuque - Feb 6, 2011:

Victor Shulist, Feb 5th
By the way, I hope you are not suggesting that….

user: R1 and R2 are connected in parallel.
user: The voltage across R1 is 100 volts.
user: The resistance of R2 is 25 ohms.
user: What is the current through the resister that is in parallel with R1?
AI : 4 amperes
is just reiterating what I told it?

Yes I am saying that’s a reiteration. There is no “thinking” involved there; it is a stimulus/response exchange. Nothing is added other than resolving the equation, which is only information processing.

Probably not thinking, no, I agree. I am not really attempting real thinking. I want a good bot that can carry a conversation, look up data via NLP, or deduce and learn via NLP. This “strong AI” or “real thinking”, no, that is out of scope.

But are there any bots that can do things like the above? I certainly haven’t found any. The above will be a great start, I believe.

And yes, of course there is existing software to do the above; I was merely showing the language capabilities. Capabilities I certainly haven’t seen yet in any simple bot.

Again, I think I need a good example in order to know what you would consider a good conversation with a bot. THAT is what I am after: a bot with a half-decent understanding of a conversation. Give me an example, no philosophy please :)

I do disagree that the example above is simply stimulus/response though.  Not true.

Keep in mind, the above example illustrates the COMPREHENSION of the bot and not the full functionality; I have much work yet to do.

But I believe that COMPREHENSION, or UNDERSTANDING, and I mean **FULLY**,  not picking key-words, is the first step. 

After full understanding of user input is achieved, with any free-form input, then we can talk about the next stages, of what the bot can do (like make a plan, etc.).

One step at a time, my friend. The first step is understanding, and I mean full understanding: being able to resolve all the complexities of natural language, where one sentence can literally mean many thousands of things (grammatically), and resolving it to one meaning, using knowledge of the world, inference, etc. That, I do not see in any bot right now.

So that is the first step. One step at a time. NO, the above is not real thinking; there is much work to be done. But full NL understanding is the first step.

But of course, it is very easy to shoot down someone else’s progress by pointing out limitations in a sample I/O so far.

 

 
  [ # 5 ]

user: R1 and R2 are connected in parallel.
user: The voltage across R1 is 100 volts.
user: The resistance of R2 is 25 ohms.
user: What is the current through the resister that is in parallel with R1?
AI : 4 amperes
is just reiterating what I told it?
Yes I am saying that’s a reiteration. There is no “thinking” involved there; it is a stimulus/response exchange. Nothing is added other than resolving the equation, which is only information processing.

Again, I don’t care about what it “IS” or “IS NOT”; I care about what it CAN DO. And if the above is stimulus/response, fine.

One could argue everything the human mind does is stimulus/response, then. That’s the fun thing about arguing in this forum: I can say anything is “X”, because I can define what I mean by X.

Someone picking up a book and learning calculus? Stimulus/response. Reading the book is the stimulus; the information is the response.

Reading a book and putting it into your own words: stimulus/response, reiterating. There is nothing humans do that you can’t call simple stimulus/response, information processing, whatever, if you use the term as loosely as you are.

 

 

 
  [ # 6 ]
Gary Dubuque - Feb 6, 2011:

Victor Shulist, Feb 5th
By the way, I hope you are not suggesting that….

user: R1 and R2 are connected in parallel.
user: The voltage across R1 is 100 volts.
user: The resistance of R2 is 25 ohms.
user: What is the current through the resister that is in parallel with R1?
AI : 4 amperes

is just reiterating what I told it?

Yes I am saying that’s a reiteration. There is no “thinking” involved there; it is a stimulus/response exchange. Nothing is added other than resolving the equation, which is only information processing. Hardly what I’d want to happen in any conversation I’d have with a chat bot. If I needed to figure out that kind of stuff, I’d visit Wolfram (which isn’t a chat bot).

Yeah, for the most part, no, I wouldn’t want to just “chit chat” with my bot about electronics problems. 

BUT, I don’t know, a bot like that could be used for say…. TEACHING electronics.

I’m pretty sure that if I am trying to solve an electronics problem, I can’t DISCUSS it in FREE-FORM English with Wolfram. (I think they are wanting to head in that direction, though.)

 

 
  [ # 7 ]

Warning: off-topic questions:

Here is a good example of what I would consider truly understanding and reasoning via NLP.  It is an electronics problem.

user: R1 and R2 are connected in parallel.
user: The voltage across R1 is 100 volts.
user: The resistance of R2 is 25 ohms.
user: What is the current through the resister that is in parallel with R1?

AI : 4 amperes

Victor, something bothers me about this example. It just keeps nagging at me, like something is out of place.

Before the user asks for an answer, could you add the input: “The wattage of R2 is 10 watts.” Is the answer the program gives still 4 amperes?

In my experience, unless I buy a really special wirewound resistor (somewhat expensive), the problem as it is originally posed results in 0 amperes.

Absolutely, the modified problem will be at zero amps after a very short time.

Can you ask the program to describe the problem (you gave it)? Would it include, after adding the modification I requested, the wattage of R2?

I’m trying to distinguish paraphrasing versus reiteration, and knowledge of electronics versus resolving a formula.

 

 
  [ # 8 ]

user: R1 and R2 are connected in parallel.
user: The voltage across R1 is 100 volts.
user: The resistance of R2 is 25 ohms.
user: What is the current through the resister that is in parallel with R1?

AI : 4 amperes

It’s correct, except that he misspelled “resistor” :)

 

 
  [ # 9 ]
Gary Dubuque - Feb 7, 2011:

Before the user asks for an answer, could you add the input: “The wattage of R2 is 10 watts.” Is the answer the program gives still 4 amperes?

I’m trying to distinguish paraphrasing versus reiteration, and knowledge of electronics versus resolving a formula.

4 amps = 100 V / 25 ohms.
If the resistor is 25 ohms, the wattage doesn’t matter.
You would have to be given thermal resistance, ambient temperature, and temperature coefficient to make better calculations.
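
A quick sanity check of those numbers, purely illustrative:

    # Illustrative check of the numbers in this thread.
    V, R = 100.0, 25.0   # volts across R2, resistance in ohms
    I = V / R            # Ohm's law: 4.0 amperes
    P = V * I            # power dissipated in R2: 400.0 watts
    print(I, P)          # a 10 W part would indeed burn up at 400 W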

 

 

 
  [ # 10 ]

user: R1 and R2 are connected in parallel.
user: The voltage across R1 is 100 volts.
user: The resistance of R2 is 25 ohms.
user: What is the current through the resister that is in parallel with R1?

AI : 4 amperes

No, the theoretical current would be 4 amperes (100 volts over 25 ohms). That is the DC current. And as someone mentioned in a previous post, yes, ambient temperature will affect that.

Now this example is good because it shows:

* The AI will first try, as its “top level” goal, to find the current of (some-resistor).
That (some-resistor) here is not simply “R1” or “R2”, but instead the phrase “the resister that is in parallel with R1”.

So, the bot knows that it has to figure out exactly which resistor this is. It realizes that is its very first goal.

It searches through the statements you gave, finds that you said “R1 and R2 are connected in parallel.”, and thus deduces that you must mean R2.

Now, it finds the formula (I = E / R) and thus deduces that it needs to know:

* voltage across R2
* resistance of R2

It searches the statements it was given and finds that it actually knows the resistance of R2 (we told it: 25 ohms).

Now, it searches its statements to find out if we told it the voltage across R2.  We have NOT.

But it knows a basic law of electronics that says:

      if X and Y are in parallel, then voltage across(X) = voltage across(Y)

Thus, since it finds that we told it:

    R1 and R2 are connected in parallel.

and we also told it

  The voltage across R1 is 100 volts.

It concludes that

  The voltage across R2 is 100 volts.

Cool! Now the bot knows that it has everything for the formula I = E / R,

and does 100 / 25 = 4;

thus, the current through R2 is 4 amperes.
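
Here is a minimal Python sketch of that resolution order. The fact store and the single parallel-voltage rule are hypothetical stand-ins for illustration, not Grace’s actual internals:

    # Hypothetical sketch of the steps above (not Grace's actual code).
    facts = {"parallel": {("R1", "R2")},
             "voltage": {"R1": 100.0},
             "resistance": {"R2": 25.0}}

    def voltage_across(r):
        """Direct lookup first; otherwise apply the rule:
        if X and Y are in parallel, voltage(X) = voltage(Y)."""
        if r in facts["voltage"]:
            return facts["voltage"][r]
        for x, y in facts["parallel"]:
            other = y if r == x else x if r == y else None
            if other is not None and other in facts["voltage"]:
                return facts["voltage"][other]
        raise LookupError(f"voltage across {r} unknown")

    # First goal: identify "the resister that is in parallel with R1".
    (pair,) = facts["parallel"]          # only one parallel pair was stated
    target = pair[1] if pair[0] == "R1" else pair[0]   # -> "R2"

    current = voltage_across(target) / facts["resistance"][target]  # I = E / R
    print(f"{current:g} amperes")        # -> 4 amperes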

So I really don’t think this is a simple stimulus >> response. If it *IS*, it is pretty cool stimulus/response!

And the bot will be able to do even more complex examples than this, where there could be hundreds of levels of indirection.

But you were not completely wrong when you said “just information processing”. It is a good question, though: where does “information processing” end, and “thinking” begin?

Perhaps once the number of levels of indirection of deduction reaches a certain number?
Or is it that, no matter how many levels, it will always be “information processing”? Could be. People say, “Well, as long as the computer is doing only what you TELL it, it is not thinking.” This is a good point, but in a system where you give it rules and it has to figure out the combination of those rules (I call them IFLOs; check my thread on CLUES), and, based on the information that it has about those logic modules, it chains them together itself, I guess it is still just “information processing”. I, myself, don’t really mind if that is all it is ever called; I’m more after the end results.

But I think it will be very useful, even if not considered “thought”, if the program can figure out which combinations of logic modules to use (in this example, the only “logic module” was basic Ohm’s law). Later I want logic modules not only for electronics, but for any other topic.

 

 
  [ # 11 ]
Gary Dubuque - Feb 7, 2011:

Here is a good example of what I would consider truly understanding and reasoning via NLP.  It is an electronics problem.

user: R1 and R2 are connected in parallel.
user: The voltage across R1 is 100 volts.
user: The resistance of R2 is 25 ohms.
user: What is the current through the resister that is in parallel with R1?

AI : 4 amperes

Can you ask the program to describe the problem (you gave it)? Would it include, after adding the modification I requested, the wattage of R2?

I’m trying to distinguish paraphrasing versus reiteration, and knowledge of electronics versus resolving a formula.

Yes, see my explanation above.  It is not just a matter of simplifying a formula.  This would be a demonstration of language understanding and electronics understanding.

For example, “the resistor that is in parallel with R1” turns out to be the object of the preposition “through”. So, at this level, the bot is dealing 100% with language; at this level, it does not really care that we are dealing with a math formula. So that is the focus. It will first find out that it can reduce “the resistor that is in parallel with R1” to “R2”. OK, NOW it knows that it can use Ohm’s law. Oh, BUT! It can’t use Ohm’s law yet, because it has to find out the voltage across R2; it only knows the voltage across R1. BUT, it finds a rule of electronics that says: if component X is in parallel with component Y, then the voltage across X is equal to the voltage across Y. So then, using its understanding of that rule (which is in natural language), it finds the voltage across R2. Then, at the end of all this, comes the simple math of I = E / R. So I think this really does illustrate: a) dealing with free-form language; b) knowledge of electronics; c) self-generated logic-module chaining (that’s where, if I know my goal is A but I need “B” first, I find a logic module that gives me B; oh, that module in turn requires information “C”; and so on, to any depth), then work your way back until you have all the information you need for the initial method, execute it, and provide results. :)

Also, yes, suppose you were to give the bot not the voltage across R1 but, say, “The power being consumed by R1 is 200 watts”, and scratch “The voltage across R1 is 100 volts.” and make it “The current through R1 is 20 amperes”.

Then the bot would have to find another logic module that takes those 2 arguments. It would then say: 200 watts divided by 20 amps = 10 volts (power = current times voltage, so voltage = power divided by current).

In that case, its second objective would be figuring out that the voltage across R1 = 10 volts.

Then, it would apply the parallel voltage rule to know that voltage across R2 was 10 volts also.

The main point here is that we are not giving the bot a complete “execution path” of all these modules; it will connect them all together itself based on:

a) full understanding of input sentence

b) determine what it has to figure out (“top level” goal).

c) figure out subgoals of b), and sub-sub-goals, etc, etc, to any depth

d) for each of those goals and subgoals, find the logic modules that can provide that info. 

e) find the required information for each of those logic modules.

f) if one or more required facts are not present for any of those logic modules, find another logic module to produce that result

That is the level of complexity I’m talking about. 
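
To give the flavor of a) through f), here is a rough backward-chaining sketch in Python. The three “logic modules”, the fact format, and the solver are invented for illustration, using the modified example above (power and current of R1 given); none of this is Grace’s actual machinery:

    # Rough sketch of goal/subgoal chaining to arbitrary depth.
    # Facts from the modified example: power and current of R1 are given.
    FACTS = {("power", "R1"): 200.0, ("current", "R1"): 20.0,
             ("resistance", "R2"): 25.0}
    PARALLEL = [{"R1", "R2"}]

    # Logic modules: (produces, needs, function).
    MODULES = [
        ("current", ("voltage", "resistance"), lambda e, r: e / r),  # I = E/R
        ("voltage", ("power", "current"),      lambda p, i: p / i),  # E = P/I
        ("voltage", ("current", "resistance"), lambda i, r: i * r),  # E = I*R
    ]

    def solve(quantity, comp, seen=frozenset()):
        """Satisfy a goal from known facts, from the parallel-voltage
        rule, or from any module whose subgoals can themselves be solved."""
        goal = (quantity, comp)
        if goal in FACTS:
            return FACTS[goal]
        if goal in seen:                 # avoid circular deductions
            raise LookupError(str(goal))
        seen = seen | {goal}
        if quantity == "voltage":        # parallel parts share a voltage
            for pair in PARALLEL:
                if comp in pair:
                    (other,) = pair - {comp}
                    try:
                        return solve("voltage", other, seen)
                    except LookupError:
                        pass
        for produces, needs, fn in MODULES:
            if produces == quantity:
                try:
                    return fn(*[solve(n, comp, seen) for n in needs])
                except LookupError:
                    continue
        raise LookupError(f"cannot derive {quantity} of {comp}")

    # E(R1) = 200 W / 20 A = 10 V, so E(R2) = 10 V, so I(R2) = 0.4 A.
    print(solve("current", "R2"))

The point of the sketch is item f): when the voltage of R2 is missing, the solver recurses through the parallel rule and then through E = P / I before it can apply I = E / R.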

There is no way that is a simple stimulus>>response bot :)

 

 

 
  [ # 12 ]

Unless the R2 resistor can handle 400 watts (P = E²/R = 100²/25 = 400 W), it will burn up. That’s electronics. The rest is theory.

Common sense in electronics might question a 25-ohm resistor that can handle that many amps, although folks who work with power circuits might not. They could easily make a comment about it being a power resistor, though, because that truly is in context with the discussion.

The questions were to determine whether all the inputs are used, or just the ones that fit a model. I don’t care how it is computed; I care how the program explains the problem. If the program simulated the electronics, with the extra input I added, it would know something is wrong (by experience most likely, although by using calculations it could come across the inconsistency). Or it would describe the problem without the size of the resistor. That would be cool, but then I’d have to point out the next logical step of frying the resistor. Then it could learn.

If the program reiterates all the problem’s parameters yet still comes up with the 4-amp answer, then it is just a complex calculation. Sure, it is using rules, but all of the processing of the syntax, semantics, etc. uses rules too, so where do I draw the line? Without a model that fits the problem, it is very much like a machine translator between one language and another. It might be able to find all the pieces to state the translation, but it hasn’t yet grasped what is going on. It doesn’t have the idea yet.

In fact, when a person is approached with the same question, they are more likely to ask what you are trying to do before they blindly cough up the 4-amps answer. And, thinking about the goal, they would become aware of the circumstances the problem creates. Unless, of course, the goal was clarified to be only working out the math.

I didn’t say it was a simple stimulus/response. Obviously it is quite complex. But the side effect of this type of exchange is that it is a conversation stopper. You might get away with a Rogerian type of dialog using it. It would be really hard to get away with more without bringing something more to the chat. Perhaps I’m wrong, because the user can always say “thank you” to the bot. After that, does the bot respond with a canned reply of “you’re welcome”?

 

 
  [ # 13 ]

Gary, with all due respect, I think you’re picking nits here. The numbers and values in the resistance/amperage example are unimportant, and could easily be substituted with more “real world” values, or even with values that make even less sense to us, and still be valid. The point that Victor is trying to get across is that GRACE isn’t simply doing “simple math” on a set of input variables. It’s deducing and inferring what those values are by correctly analyzing the input across several free-form sentences to (very accurately, IMHO) gain “understanding” of the needs of the user, and providing a solution for those needs. This goes way beyond simply doing the math and spitting out an answer, since the values, as given, aren’t enough information by themselves to simply “plug in” to an equation. Equations, by themselves, can neither infer nor deduce, and that’s what’s being demonstrated here. Not the math itself.

 

 
  [ # 14 ]

Thank you Dave (very much),

Gary, yes, of course 400 watts is way too much for most resistors; that wasn’t the point.

I can’t very well describe the entire functionality of my bot here, unless you want 100 pages of text in one post.

I think it makes sense to describe one piece of functionality at a time.

And yes, I plan on teaching Grace tons of electronics rules, including the power ratings of resistors, so she can say stuff like, “The answer is <x> amperes. Oh, and by the way, that thing will be dissipating <z> watts”.

The cool thing will be that, as I add more knowledge to Grace, she will find these rules and connect them together to deduce “by the way, the wattage is <z> watts, which is way over what most resistors are rated for”.
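
A toy sketch of that kind of derived “by the way” remark; the 10-watt rating below is just Gary’s earlier example value, and the rule itself is invented for illustration:

    # Toy sketch of a derived warning; not Grace's actual knowledge base.
    def answer_with_warning(voltage, resistance, rating_watts):
        current = voltage / resistance      # Ohm's law: I = E / R
        power = voltage * current           # P = E * I
        reply = f"The answer is {current:g} amperes."
        if power > rating_watts:
            reply += (f" Oh, and by the way, that resistor will dissipate"
                      f" {power:g} watts, far above its {rating_watts:g} W rating.")
        return reply

    print(answer_with_warning(100.0, 25.0, 10.0))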

That is not a shortcoming of Grace; you are simply pointing out that she will need that much more information to work with, information I do intend on providing. Via NATURAL LANGUAGE :)

If a human were taught about electronics, only Ohm’s law, with no knowledge of power ratings, no concept of that, they wouldn’t catch it either, and would end up blowing up a resistor.

 

 
  [ # 15 ]
Gary Dubuque - Feb 8, 2011:

In fact, when a person is approached with the same question, they are more likely to ask what you are trying to do before they blindly cough up the 4 amps answer.

Ok, I think I have to limit the scope of one post, don’t you think?

The first stage of testing Grace is to provide facts in NL and ask questions in NL. Stage 2 is asking questions in NL where she has to combine many NL facts with logic. Stage 3 is teaching via NL. One step at a time.

Gary Dubuque - Feb 8, 2011:

I didn’t say it was a simple stimulus/response. Obviously it is quite complex. But the side effect of this type of exchange is that it is a conversation stopper.

What?? I don’t agree. If this were an electronics tutoring tool, the student could say, “Well, no, I get 7.5 amps”, and the bot could reply, “Ok, let’s work this through...”.

A conversation stopper? I don’t agree.

 
