AI Philosophy; When does an AI ‘Understand’?
 
 
  [ # 16 ]
Victor Shulist - Feb 8, 2011:

Especially our understanding of understanding.

seriously, fuzzy?  Yes, but not in all cases.

Maybe I should have said our understanding of understanding is imperfect.

In many cases, as we are exposed to new concepts, we take what we know and attempt to learn and integrate what we do not know.

What an AI “understands” may be limited in scope. A higher-functioning AI may also have the additional ability to learn and clarify unclear input, thereby adding understanding. But, even without real-time learning, an AI might still be considered to “understand” a basic concept.

 

 
  [ # 17 ]
Merlin - Feb 8, 2011:

But, even without real-time learning, an AI might still be considered to “understand” a basic concept.

Agreed. 

If it knows how all the parts of the input relate to each other, how this latest input relates to the conversation context and its long-term knowledge base, and it can reply to questions based on that, either directly or through any number of levels of logical deduction, I’d say yes.

So I think the most useful measure of a bot’s understanding is the number of actions the bot can perform on that input (the extent to which that information can be related to existing information in its KB).

I think it boils down to how you want your AI to react when it only partially understands.

Do you

(a) go with the parts you understand, ignore the rest, and react based on the understood portion,

or

(b) inform the user that you only partially understood, and state the part(s) you understood, so he/she will have an idea of what further information to provide.

I’m going with (b).
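Option (b) could be sketched roughly like this, assuming the bot’s parser can already split the input into understood and not-understood fragments (the function name and its input format are invented for illustration, not anyone’s actual engine):

```python
# Sketch of option (b): state the understood portion and ask the user
# to clarify the rest. Both argument lists are hypothetical outputs of
# whatever parsing pipeline the bot uses.

def respond(parsed_fragments, unparsed_fragments):
    """Build a reply that states what was understood and asks about the rest."""
    if not unparsed_fragments:
        return "Understood: " + "; ".join(parsed_fragments)
    reply = []
    if parsed_fragments:
        reply.append("I understood: " + "; ".join(parsed_fragments) + ".")
    reply.append("I did not follow: " + "; ".join(unparsed_fragments) +
                 ". Could you rephrase that part?")
    return " ".join(reply)

print(respond(["you want to book a flight"], ["the red-eye remark"]))
```

The point of the sketch is only that the unparsed fragments are surfaced back to the user, so he/she knows what further information to provide.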

 

 
  [ # 18 ]

If I talked to my wife in the way you’re characterizing as a bot indicating that it “gets” what you said, I’m reasonably sure she’d complain that I don’t understand. Now, my wife is pretty forgiving of my quirks, so just imagine what a stranger might think.

There are programs that read news feeds, pick out appropriate items, and then explain what an article contains. IBM used to have (and probably still does) a system that would scan the web to determine the profile of what the public thinks of a business.

These few examples are a far cry from parsing a sentence into the meaning of its words. Sure that might be referred to as understanding, but for a chatbot, a sentence is not a chat.  I know, I know, you’ve got to start somewhere.

How about starting with dialog management? What about building a belief network for the user and playing that against one already established for the bot? If you can capture the beliefs of the user, you might be on the road to understanding. That is, provided your bot is sympathetic to those beliefs. Otherwise you might have a case of agreeing to disagree, or of not coming to an understanding at all.

Even if the bot heard (exactly) what the user said, there’s still no understanding, eh? Not because it can’t translate the input into its internal representation, but because it doesn’t know how to deal with that information other than to record it in a transcript somewhere.

 

 
  [ # 19 ]
Gary Dubuque - Feb 8, 2011:

How about starting with dialog management? What about building a belief network for the user and playing that against one already established for the bot? If you can capture the beliefs of the user, you might be on the road to understanding. That is, provided your bot is sympathetic to those beliefs. Otherwise you might have a case of agreeing to disagree, or of not coming to an understanding at all.

Even if the bot heard (exactly) what the user said, there’s still no understanding, eh? Not because it can’t translate the input into its internal representation, but because it doesn’t know how to deal with that information other than to record it in a transcript somewhere.

Gary, I think you are spot on here. As long as the ‘bot’ is only able to map input to ‘rules’, from my perspective there is no understanding involved.

To have ‘understanding’, the AI needs to be able to map input to concepts, assumptions, and so forth. This is why I’m working from the core-assumptions model; for the AI to start learning, it needs a set of base assumptions (instincts) that define its (infant) universe and give it the ability to build ‘understanding’ by mapping new concepts and assumptions to the ones that are already there in the core.

 

 
  [ # 20 ]

Me like thread very much LOL

Maybe we need a grading system like school or Karate.
Something like:
Grade 2
or
“My bot is a yellow belt in math.”:)

Yep, that would be nice, though the ‘badge’ model is perhaps better in this case. What I mean is this: grades in school and sports are usually scaled; that is, you can only get grade 2 after you pass grade 1. For bots, this doesn’t have to be the case: a bot can calculate, for instance, but not count or parse. So perhaps we need to make some categories and write ‘questions and possibly correct answers’ for each category to test it, like:
- Grammar: adjectives, adverbs, ‘be’, ‘have’, ‘do’, modal verbs, regular verbs, interjections, conjunctions, pronouns (all types), ...
- Sentence structures: statement, question, simple sentence, inverted sentences, bad structures, complex sentences, ...
- Does it know about: time, location, reason, ...
- How does it do: math, logic, creativity, output (does it have pre-canned statements, does it render, or use a mixture of the two), ...

And then you’ll probably need to make combinations of all of these. Some automated test system might be appropriate. wink
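The ‘badge’ idea could be sketched as a small test harness: independent capability categories, each scored separately rather than as one scaled grade. The categories, questions, and the `grade` function below are invented examples, not an actual benchmark:

```python
# Each category holds (question, expected answer) pairs; a bot earns a
# separate pass fraction per category, like separate belts per skill.

CAPABILITY_TESTS = {
    "math": [("What is 2 + 3?", "5"), ("What is 10 / 4?", "2.5")],
    "time": [("What comes after Monday?", "Tuesday")],
}

def grade(bot, tests=CAPABILITY_TESTS):
    """Return a per-category pass fraction; 'bot' is any callable q -> answer."""
    badges = {}
    for category, items in tests.items():
        passed = sum(1 for q, a in items if bot(q).strip() == a)
        badges[category] = passed / len(items)
    return badges

# A toy bot that only 'knows' one fact earns a partial math badge
# and no time badge at all:
toy_bot = lambda q: "5" if "2 + 3" in q else "?"
print(grade(toy_bot))
```

Because the categories are scored independently, a bot that can calculate but not parse simply shows a high score in one badge and a low one in another, with no ordering between them.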

These few examples are a far cry from parsing a sentence into the meaning of its words. Sure that might be referred to as understanding, but for a chatbot, a sentence is not a chat.  I know, I know, you’ve got to start somewhere.

How about starting with dialog management? What about building a belief network for the user and playing that against one already established for the bot? If you can capture the beliefs of the user, you might be on the road to understanding. That is, provided your bot is sympathetic to those beliefs. Otherwise you might have a case of agreeing to disagree, or of not coming to an understanding at all.

You have a point, Gary. The thing is, we’re not yet at the ‘jet-fighter’ stage. We’re still playing with wooden-stick-and-canvas planes with 2 hp engines. So let’s not measure in Mach yet, but in meters? Though you are right, these tests should progress and become ever more complex.

 

 
  [ # 21 ]
Jan Bogaerts - Feb 8, 2011:

The thing is, we’re not yet at the ‘jet-fighter’ stage.

I don’t agree with this; I think we already have the technology but still don’t understand the ‘model’ that is needed to put the technology to work. I also believe that most AI research is way over the top, adding more complexity to the problem as we go and ‘over-engineering’ the possible solutions.

I’m pretty confident that when we get there, it will turn out to be a simple ‘paper-clip solution’.

 

 
  [ # 22 ]
Hans Peter Willems - Feb 8, 2011:
Jan Bogaerts - Feb 8, 2011:

The thing is, we’re not yet at the ‘jet-fighter’ stage.

I don’t agree with this; I think we already have the technology but still don’t understand the ‘model’ that is needed to put the technology to work. I also believe that most AI research is way over the top, adding more complexity to the problem as we go and ‘over-engineering’ the possible solutions.

I’m pretty confident that when we get there, it will turn out to be a simple ‘paper-clip solution’.

So you start by disagreeing that we’re not there yet and end up with ‘when we get there’? ohh

 

 
  [ # 23 ]
Jan Bogaerts - Feb 8, 2011:

So you start by disagreeing that we’re not there yet and end up with ‘when we get there’? ohh

I was expecting some more ‘reasoning capability’ from you, Jan; I’m beginning to think you have your bot hooked up to chat here smile

I responded to the ‘technological stage’ that you were proposing we are at now (but I might have misunderstood you there). When we ‘get there’ is obviously pointing to ‘a working strong AI’, based on previous statements made in conversations here on the board (see, that is exactly what I think is needed for strong AI: the perception that previous statements put context to current statements wink).

 

 
  [ # 24 ]

the perception that previous statements are putting context to current statements.

Ehm, not certain what you mean by ‘perception’, but my bot uses previous statements to disambiguate between word meanings in new statements. Is that what you mean?
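That kind of history-based disambiguation could be sketched with a simple overlap score against the recent conversation, assuming a hand-made sense inventory (the senses and cue-word sets below are invented, not taken from any actual bot):

```python
# Pick the word sense whose cue words overlap most with the words that
# appeared in the previous statements of the conversation.

SENSES = {
    "bank": {
        "financial institution": {"money", "account", "loan", "deposit"},
        "river edge": {"river", "water", "fishing", "shore"},
    }
}

def disambiguate(word, history, senses=SENSES):
    """Choose the sense with the largest cue-word overlap with the history."""
    context = set(w.lower().strip(".,!?") for line in history for w in line.split())
    return max(senses[word], key=lambda s: len(senses[word][s] & context))

history = ["I went fishing by the river yesterday.", "The water was calm."]
print(disambiguate("bank", history))
```

This is essentially a simplified Lesk-style heuristic; a real implementation would weight recent statements more heavily and handle ties less arbitrarily.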

 

 
  [ # 25 ]
Hans Peter Willems - Feb 8, 2011:

To have ‘understanding’, the AI needs to be able to map input to concepts, assumptions and so forth.

Highly agreed. I do this ‘from the ground up’, and I know I’m in the minority here, but I go:

lui = last user input
(your word ‘concepts’ above maps to what I call meaning in the statements below).

1) parts of speech of each word of lui, which builds up to

2) parts of speech of combinations, which build up to

3) part of speech of the entire lui,

then

4) correlate with semantic rules to assign meaning to lui

5) correlate meaning (assigned to lui in step 4) with meanings of previous statements, to generate correlated meaning.

6) take the meaning of the state of the conversation and of lui, and combine them with logic, which is chained together, to deduce a response.
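The six steps above could be sketched schematically as follows; every lexicon, rule, and stage body here is an invented placeholder, not Victor’s actual engine:

```python
# Toy one-word-per-tag lexicon; a real step 1 would use a full tagger.
POS_LEXICON = {"the": "DET", "dog": "NOUN", "barks": "VERB"}

def tag_words(lui):                       # step 1: POS of each word
    return [(w, POS_LEXICON.get(w.lower(), "UNK")) for w in lui.split()]

def group_phrases(tagged):                # step 2: POS of combinations
    phrases, i = [], 0
    while i < len(tagged):                # collapse DET+NOUN into a noun phrase
        if i + 1 < len(tagged) and tagged[i][1] == "DET" and tagged[i + 1][1] == "NOUN":
            phrases.append((tagged[i][0] + " " + tagged[i + 1][0], "NP"))
            i += 2
        else:
            phrases.append(tagged[i])
            i += 1
    return phrases

def classify_input(phrases):              # step 3: type of the entire lui
    return "STATEMENT" if any(pos == "VERB" for _, pos in phrases) else "FRAGMENT"

def assign_meaning(phrases, kind):        # step 4: semantic rules -> meaning
    return {"type": kind, "parts": phrases}

def correlate(meaning, history):          # step 5: merge with previous meanings
    return {"current": meaning, "context": history[-3:]}

def deduce_response(state):               # step 6: chained logic -> response
    return ("Acknowledged " + state["current"]["type"].lower() +
            " about " + state["current"]["parts"][0][0])

def understand(lui, history):
    tagged = tag_words(lui)
    phrases = group_phrases(tagged)
    kind = classify_input(phrases)
    meaning = assign_meaning(phrases, kind)
    state = correlate(meaning, history)
    return deduce_response(state)

print(understand("The dog barks", []))
```

The interesting part is the shape of the pipeline: each stage consumes the previous stage’s output, and the history enters only at step 5, exactly as the list above describes.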

 

 
  [ # 26 ]
Gary Dubuque - Feb 8, 2011:

How about starting with dialog management? What about building a belief network for the user and playing that against one already established for the bot? If you can capture the beliefs of the user, you might be on the road to understanding. That is, provided your bot is sympathetic to those beliefs. Otherwise you might have a case of agreeing to disagree, or of not coming to an understanding at all.

Even if the bot heard (exactly) what the user said, there’s still no understanding, eh? Not because it can’t translate the input into its internal representation, but because it doesn’t know how to deal with that information other than to record it in a transcript somewhere.

Gary, I think “dialog management” is one of the big challenges. I don’t know that I could have started there, but one of the areas I am actively working on is how to store and extract information from the dialog (I have tried and thrown away a lot of strategies). This greatly improves the conversational flow and gives much more of an impression of “understanding”. One of the hardest things for a bot to do is to stay on topic during a long interchange.
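One minimal sketch of such a dialog store: keep a running topic alongside the transcript and update it only on strong evidence, so the bot stays on topic through off-hand remarks. The keyword-based topic detector and the topics themselves are invented examples, not Merlin’s actual strategy:

```python
# Track the transcript and a current topic; replies can then be biased
# toward the topic even when an individual utterance carries no cue.

TOPIC_KEYWORDS = {
    "weather": {"rain", "sunny", "forecast", "temperature"},
    "travel": {"flight", "hotel", "trip", "ticket"},
}

class DialogState:
    def __init__(self):
        self.transcript = []
        self.topic = None

    def observe(self, utterance):
        """Record the utterance; switch topic only if some topic's keywords appear."""
        self.transcript.append(utterance)
        words = set(utterance.lower().split())
        scores = {t: len(kw & words) for t, kw in TOPIC_KEYWORDS.items()}
        best = max(scores, key=scores.get)
        if scores[best] > 0:
            self.topic = best
        return self.topic

state = DialogState()
state.observe("I need a flight and a hotel for my trip.")
state.observe("Sounds good.")   # no topic evidence: stays on the same topic
print(state.topic)
```

Even this crude version shows why topic persistence matters: the second utterance contains no cue words at all, yet the bot still knows what the conversation is about.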

 

 

 
  [ # 27 ]
Jan Bogaerts - Feb 8, 2011:

the perception that previous statements are putting context to current statements.

Ehm, not certain what you mean by ‘perception’, but my bot uses previous statements to disambiguate between word meanings in new statements. Is that what you mean?

That is probably what I mean, although things get lost in translation so I’m not sure. My gut feeling says that it is not ‘exactly’ the same, but that could as well be a difference in implementation of the same concept.

 

 
  [ # 28 ]
C R Hunt - Feb 11, 2011:

Grammar, in the end, is just ‘representation’. There is no ‘model’ wherein grammar evolves into understanding. You can formulate a grammatically correct sentence on, say, quantum mechanics without understanding anything of it. Or on making cocktails; the complexity of the concept doesn’t even matter here.

At what point do you call it understanding? Please be concrete here. Why do you think you *understand* how to make a cocktail? Because you can picture the instructions? Because you can imagine feeling the cocktail glass in your hands? Is this any more understanding because it is united to the senses? Will a text-based chatbot ever be able to do that? How? Are you going to hard-code fake senses? How is having a system for mapping text onto special key words/tokens that represent senses any different from mapping text onto more text? Are the images in your mind not representations as well?

I am hoping that, in the near term, my bot will be able to “understand” in the same way that a spreadsheet knows math or a GPS system knows location and directions. On top of that, for topics it “understands”, I hope it will be able to converse about them in natural language and recognize them in a free-form conversation. Someday bots should at least be able to understand “math” and “time”. This may end up being the first step on the way to “strong AI”.

 

 
  [ # 29 ]
Dave Morton - Feb 11, 2011:

To my way of thinking, there can be no “reasoning” without at least a basic “understanding”, at least in some small part. The two are inextricably intertwined. Or am I missing the mark here? smile

I would agree. In AI terms, for a bot to respond correctly it must “understand” the input. This can be simple, as in a stimulus/response pattern in a bot, or complex, where the response is also based on prior knowledge/experience/environment and a reasoning/algorithmic method of responding.

 

 
  [ # 30 ]
Merlin - Feb 11, 2011:

I am hoping that in the near term, my bot will be able to “understand” in the same way that a spreadsheet knows math or a GPS system knows location and directions.

I’m not sure what you are trying to say here, because those examples are the exact opposite of ‘understanding’ and perfect examples of pre-programmed or ‘canned’ behaviour.

Merlin - Feb 11, 2011:

Someday bots should be at least be able to understand “math” and “time”. This may end up being the first step on the way to “strong AI”.

‘Time’ is actually the first ‘core concept’ in my AI-mind model; a computer has a built-in ‘sense of time’. A human has a ‘biological clock’ that gives us a sense of time, so this is one thing we don’t have to create in software. A computer already has its own clock, which is actually more accurate than a human’s.

The next step for me is to map this concept of time to other time-related concepts, like morning, evening, sooner, later, yesterday, tomorrow, etc.
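That mapping step could be sketched very simply from the machine clock; the hour boundaries and labels below are arbitrary illustrations, not part of the model described above:

```python
# Map the built-in clock onto coarse time concepts: parts of the day,
# and relative day names like 'yesterday' / 'today' / 'tomorrow'.

from datetime import datetime, timedelta

def part_of_day(now):
    """Map a clock reading onto a coarse concept of the day."""
    if 5 <= now.hour < 12:
        return "morning"
    if 12 <= now.hour < 18:
        return "afternoon"
    if 18 <= now.hour < 23:
        return "evening"
    return "night"

def relative_day(now, other):
    """Map a date difference onto a relative-day concept."""
    delta = (other.date() - now.date()).days
    return {-1: "yesterday", 0: "today", 1: "tomorrow"}.get(delta, f"{delta:+d} days")

now = datetime(2011, 2, 11, 9, 30)
print(part_of_day(now))
print(relative_day(now, now + timedelta(days=1)))
```

Concepts like ‘sooner’ and ‘later’ would fall out of the same comparison on the raw clock values, which is the sense in which the computer’s clock gives the model its first grounded concept for free.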

So I agree this might be the first step towards strong AI. And in my case this is exactly where I’m starting to build my model.

 
