
Sample problem for AI reasoning
 
 
  [ # 46 ]
Hans Peter Willems - Feb 9, 2011:

Victor, I think trying to solve complexity by adding even more complexity is one of the ‘traps’ in AI research. You add complexity, and then need more complexity to manage the newly added complexity, and so on… A ‘real’ brain doesn’t work that way; there is one governing system that rules just about everything, not a whole compendium of different little systems that each deal with something in a totally different way.

That is mainly why I don’t believe in the idea of starting with grammar and then adding additional logic and processing to handle other things. I think it is impossible to reach ‘strong AI’ that way.

In the brain, there are a lot of different little things that combine to make up the whole. Some neurons, for example, are dedicated to place and position/orientation:
Place Cells

For me, starting with concepts and having fuzzy neurons fire when a concept is triggered is the better path of implementation. But I also believe the opposite approach (precise understanding and grammar first, then loosening the constraints) is a viable alternative.

I think both will be able to handle the sample problem. As for the ability to “understand”, I’ll leave that discussion for the “understanding thread”.

 

 

 
  [ # 47 ]

To my way of thinking, there can be no “reasoning” without at least a basic “understanding”, at least in some small part. The two are inextricably intertwined. Or am I missing the mark here? smile

 

 
  [ # 48 ]

Dave, I agree. But we need to define what we mean by “understanding”. (Perhaps in a different thread, as Merlin suggested.) If understanding is defined as mapping text onto knowledge that is not text-based (for example, the taste of a cocktail or the feel of glass), then a text-based bot can never succeed in having understanding.

I see more and more the merit of an intelligence test based simply on fooling humans into believing you’re intelligent. It certainly gets around these semantic games.

 

 
  [ # 49 ]
Hans Peter Willems - Feb 11, 2011:

Of course you have a point when you say it is just mapped to text, but herein lies the difference in our views (I think); we use text to ‘store’ concepts, but in my model it doesn’t make any difference how something is actually named, as it will map to core concepts the AI will ‘comprehend’ by default. When the AI has a comprehension of temperature (in relation to itself), it doesn’t matter if we call it ‘cold’ or ‘koud’ (Dutch for cold) or even ‘brrrrrr’ or ‘ghewgr’.... the AI will still know its meaning is ‘a low temperature’, or anything else the ‘comprehension algorithm’ leads it to.

This sounds like the token scheme used by Cyc: basically, taking a natural language input and mapping it onto pre-created words (they have dollar signs in them, but they are still just words) that are hard-coded to have a specific sense and definition.
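A minimal sketch of that kind of token scheme, in Python (the token names here are invented for illustration, not actual Cyc constants):

```python
# Sketch of a Cyc-style token scheme: surface words (in any language) map
# onto pre-created concept tokens. The token names are invented for
# illustration; they are not real Cyc constants.
CONCEPT_TOKENS = {
    "cold": "$LowTemperature",
    "koud": "$LowTemperature",      # Dutch for "cold"
    "brrrrrr": "$LowTemperature",
    "hot": "$HighTemperature",
}

def map_to_concept(word):
    """Return the hard-coded concept token for a surface word, if known."""
    return CONCEPT_TOKENS.get(word.lower())
```

Whatever we call the word, ‘cold’ or ‘koud’ both land on the same token; the hard part the bot faces is deciding when a new, unseen word deserves an entry in that table.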

Hans Peter Willems - Feb 11, 2011:

I am indeed going to use VIRTUAL senses in my model. Not because my model needs them, but because it will give me a richer environment to experiment with ‘comprehension’. The senses are not ‘hard-coded’; they are just mappings of (virtual) analog inputs to base concepts. Technically I will be able to define ‘sensors’, give them a range and boundaries, and map them to one or more concepts.

By hard-coded I mean that you explicitly define their existence and what actions they cause the bot to perform. (The same way that a grammar rule I write is hard-coded, but the rules that my bot develops are not.) This sounds like more tokens, that is, special words that represent hard-coded concepts. The bot tries to figure out when text input is equivalent to your special words. Will the bot’s reaction to a special token be hard-coded as well, or will it develop preferences/aversions for certain tokens via some learning mechanism?
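As I understand the description, defining a ‘sensor’ with a range and boundaries mapped to concepts might look something like this (a minimal sketch; all names and thresholds are my assumptions, not Hans’s actual design):

```python
# Hypothetical "virtual sense": an analog input with a range, partitioned by
# boundaries that each map to a base concept. Names and thresholds are my
# guesses at what such a sensor definition could look like.
class VirtualSensor:
    def __init__(self, name, low, high, boundaries):
        # boundaries: (lower_threshold, concept) pairs, ascending
        self.name = name
        self.low, self.high = low, high
        self.boundaries = sorted(boundaries)

    def read(self, value):
        """Clamp a raw value to the sensor's range; return the concept it triggers."""
        value = max(self.low, min(self.high, value))
        concept = None
        for threshold, c in self.boundaries:
            if value >= threshold:
                concept = c
        return concept

temperature = VirtualSensor("temperature", -40, 60,
                            [(-40, "freezing"), (0, "cold"), (15, "mild"), (30, "hot")])
```

The point of the sketch is the shape of the definition (range, boundaries, mapped concepts), not the particular numbers.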

 

 
  [ # 50 ]

How about this for a “Sample problem for AI reasoning”

~~~~~~~~~~~~~~~~~~~~~~~~~~

me: hey, bot, wanna pull more mp3 files to my laptop for me?

(it first realizes that although this is a question, it really means the user wants it to copy files from one system on the network to another)

(checks its KB, finds that I keep most of my mp3 files on a host named “alpha”, but perhaps doesn’t know which hostname or IP on my network “my laptop” is), so….

bot : to your laptop???  I know about 2 machines on the network, alpha at 192.168.2.100 and omega at 192.168.2.101. what’s your laptop’s address?

me: oh yeah, sorry, my laptop is 192.168.2.102

(suppose the bot knows what path the mp3 files are in on alpha, but each band is in different directory)

bot : so, is there a specific band you want me to “pull” ?  Or surprise you ?

me : surprise me!  well, as long as it’s classic rock

(assuming I’ve told it before a set of bands that are classic rock, the cars, rolling stones, whatever, and it resolves that to a directory), but is not sure of the next step

bot : ok, you’ve always asked me before just to copy the files from alpha’s hard drive and put them on your USB stick, but now you want to copy over the network! wow, I haven’t done this before. what program can I use?

me : use SSH (scp - secure copy)

(bot does a “man scp”, processing the natural language of the Unix man page for scp (secure copy), but runs into trouble:)

bot : hum, I can ping alpha, but can’t seem to connect, output of the scp command says it wants the password. 

me : oh damn, I didn’t set up ssh host key validation.  I’ll do that now (I don’t want to give you the root password)
me: ok, done, try again

bot : ok, I was able to SSH into alpha.  Now whereabouts do you want the files on your laptop ?

me : uh… under my home directory (so /home/victor/Music)

bot : ok, done!  I think I was successful - well, scp said it copied 15 songs with no network errors (by the way, I copied rolling stones , oh yeah !!)

~~~~~~~~~~~~~~~~~~~~~~~~~~
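For what it’s worth, the one concrete command the bot has to synthesize at the end of that exchange is a single scp invocation. A sketch of assembling it (the remote directory and user name are my guesses; the hosts and destination path come from the dialogue):

```python
# Build the scp command the bot would finally run. The remote music
# directory and the user name are assumptions; the host address and the
# destination directory come from the dialogue above.
def build_scp_command(user, host, remote_dir, dest_dir, pattern="*.mp3"):
    return f"scp {user}@{host}:'{remote_dir}/{pattern}' {dest_dir}/"

cmd = build_scp_command("victor", "192.168.2.100",
                        "/music/rolling_stones", "/home/victor/Music")
```

Everything before that one line of shell is where the actual reasoning lives: resolving “my laptop”, picking the band, choosing scp at all.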

at any point the conversation could have gone another way.

such as

bot :  so I’m supposed to get these mp3 files from where?

me : you know where I kept my tunes, don’t you ?

bot : uh… actually no, I don’t

me : I thought I told you.  Oh well, maybe I didn’t.  Alpha is where they’re all kept.

.......
it would then of course ask where they are on alpha. Or perhaps:

bot : and they are located on alpha where?

me : come on bot ... i’m tired.. can’t you just scan for them?

bot : sure, I know how to use the Unix find command, I’ll do a find of all “*.mp3” - no problem, just thought you knew off the top of your head.
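The scan itself is the easy part; the bot’s find of all “*.mp3” is equivalent to something like this (the directory layout is whatever happens to be on alpha):

```python
import os

# Equivalent of the bot's `find <root> -name "*.mp3"`: walk a directory
# tree and collect every mp3 file.
def is_mp3(name):
    return name.lower().endswith(".mp3")

def find_mp3s(root):
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        hits.extend(os.path.join(dirpath, n) for n in filenames if is_mp3(n))
    return sorted(hits)
```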

 

 
  [ # 51 ]

Or, instead of

~~~~~~~~~~

bot : ok, you’ve always asked me before just to copy the files from alpha’s hard drive and put them on your USB stick, but now you want to copy over the network! wow, I haven’t done this before. what program can I use?

me : use SSH (scp - secure copy)

~~~~~~~~~~

me : google it !

bot googles and finds out that, since it’s running on Linux, secure copy is usually installed by default.

bot : well, I learned I could use SSH to do the job.  Do we have it installed? let me guess, you want me to check myself, right?

me : you got it Pontiac !!!

(bot checks its “expression database” to learn that this is an expression for “correct”)

bot checks, learns that “rpm” can query whether the rpm for ssh is installed, and learns it is NOT installed
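The rpm check is a one-liner; the interesting step is the bot interpreting rpm’s answer. A sketch of that interpretation (the package name on a real system might be openssh or openssh-clients):

```python
# Interpret the output of `rpm -q <package>`. When the package is missing,
# rpm prints "package <name> is not installed"; otherwise it prints the
# installed name-version-release string. The bot would obtain this output
# by running e.g. `rpm -q openssh-clients`.
def rpm_says_installed(rpm_output):
    return "is not installed" not in rpm_output
```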

bot : well, I found out that SSH is not installed.  mind if I install it ?

me : go for it

bot : ok, I’m stuck - the O/S wants the root password, I can’t install without it

me : oh that’s right, here, I will do it, I don’t give my root password to anyone, not even you, sorry

<install ssh>

me : ok, you’re good to go, ssh is installed with host keys already setup, you shouldn’t even need the password, resume your task please

* [ Anticipated Response ]

<< some portion of above quoted >>

    “No <X> in there (or didn’t quite behave like <Y> ) , so no good, your approach way off…  no real ‘thinking’ in there…. WRONG !!!!!!!!!!!!!”

smile

 

 
  [ # 52 ]
C R Hunt - Feb 11, 2011:

Will the bot’s reaction to a special token be hard-coded as well, or will it develop preferences/aversions for certain tokens via some learning mechanism?

The only ‘hard-coded’ mappings (in your definition of hard-coded) are the core concepts. The AI will create its own mappings after that, based on ‘learning’. The most important part of my model regarding this scheme is that only the AI itself determines how this ‘learning’ is mapped to new concepts. This is, by the way, not randomly determined; instead it’s based on the ‘experience model’ I described before: concept + context = experience (where ‘context’ can be anything, like time, location, current discussion topic, etc.).
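Taking “concept + context = experience” at face value, a minimal data sketch of the formula (the structure and field names are my guesses at what Hans describes, not his actual model):

```python
from dataclasses import dataclass

# "concept + context = experience": one experience ties a concept to the
# context it occurred in. Field names here are my invention.
@dataclass(frozen=True)
class Context:
    time: str
    location: str
    topic: str

@dataclass(frozen=True)
class Experience:
    concept: str
    context: Context

exp = Experience("cold", Context(time="evening", location="outside", topic="weather"))
```

The same concept met in two different contexts would then yield two distinct experiences, which seems to be the heart of the scheme.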

 

 
  [ # 53 ]

Victor, that’s an interesting example. I think it will be quite a while before a bot has that level of competency, let alone NL understanding. But it will be an awesome day when such a bot exists. (Although for those users versed in UNIX, a bot of this type would probably slow the user down rather than facilitate, at least in the example given! smile )

Hans Peter Willems - Feb 11, 2011:

The only ‘hard-coded’ mappings (in your definition of hard-coded) are the core concepts. The AI will create its own mappings after that, based on ‘learning’. The most important part of my model regarding this scheme is that only the AI itself determines how this ‘learning’ is mapped to new concepts. This is, by the way, not randomly determined; instead it’s based on the ‘experience model’ I described before: concept + context = experience (where ‘context’ can be anything, like time, location, current discussion topic, etc.).

Yes, I see that my definition of “hard-coding” is a little atypical. I consider something hard-coded if I explicitly force an action by the bot. It’s another level when the bot uses the hard-coded rules combined with user feedback to generate its own rules for a given class of input.

I’ll be looking forward to seeing your model implemented. I think it will help to see explicitly how you plan for the bot to learn that a word is associated with a concept. Will it be similar to the EMO scheme I described (somewhere in one of these threads…)?

 

 
  [ # 54 ]
C R Hunt - Feb 11, 2011:

I’ll be looking forward to seeing your model implemented. I think it will help to see explicitly how you plan for the bot to learn that a word is associated with a concept. Will it be similar to the EMO scheme I described (somewhere in one of these threads…)?

I’ll have to look up what you said about the EMO scheme to answer that, but one conclusion I have already reached is this: there will be no real ‘logic’ in coded algorithms; the ‘logic’ will be mainly in the data model. The way the AI mind stores information, such that it becomes knowledge, experience, comprehension schemes, etc., is paramount for creating strong AI. The way I see the human brain (thinking about that is a big motivator for my AI research) is mainly as a storage device running under a set of ‘governing principles’ (call it an operating system) that in itself does not really change over time. This ‘operating system’ describes the ground rules of our behaviour: how we react to certain stimuli, how our senses are calibrated, and how our ‘feelings’ are mapped to those inputs.

‘Understanding’ and/or ‘comprehension’ will not come from the operating system; it has to come from the data model.

 

 
  [ # 55 ]
C R Hunt - Feb 11, 2011:

Although for those users versed in UNIX, a bot of this type would probably slow the user down rather than facilitate, at least in the example given! smile )

Actually I’m pretty well versed in the Unix command line, and I find it a pain in the A to open a terminal window, ssh into the other box, copy and paste the long path name, etc., etc.

I know, I know, I should script it, or use a GUI or whatever, but I think this example will be very practical.  By the initial teaching, I mean telling it what server my mp3 files are usually on (so it can assume that), what the base path to the files is, or explaining the directory structure to it.  Also telling it that it should normally use scp to do the copy - that kind of thing.  By teach, I mean using English, with ZERO help from me with the parsing, understanding, etc.

The only help would be, the same as with a human, if it is confused or can’t do something (like “scp returned that it needs the password”, or “I’ve never done that before, what program can I use?”). But no *direct* going into the code, and no helping it parse.

If I’m talking to it from a Windows computer and say “go get the files and snap them on this computer, thank you”, it will have to know what “this computer” is.  That will be an interesting puzzle.

I guess as part of the “login” function (when I make Grace listen on a socket so she can be talked to from any one of my networked computers), that information can be sent to her.  Then she would know what “this” computer is.
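That login idea could be as simple as the connecting client announcing its own identity. A sketch (the message format is invented, not Grace’s actual protocol):

```python
import json
import socket

# Sketch of a login handshake: the connecting client tells the bot which
# machine "this computer" is by sending its own hostname. The message
# format here is invented for illustration.
def build_login_message():
    return json.dumps({"type": "login", "hostname": socket.gethostname()})
```

On receipt, the bot would bind the phrase “this computer” to the hostname in the current session.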

So the main point here (which now I seem able to voice without being attacked/shot down at every move I make) is that the bot is very *flexible*.

It is that flexibility and on-the-fly learning that I’m after. smile 

No, Grace can’t do this yet !  I’m hoping by mid next year to have a good portion of this type of dialog running though. 

But again, the point is *understanding* .. each statement, one by one, and TESTING each statement with a follow-up question (as my previous example with electronics showed).

I don’t know how we could worry about dialog management when we don’t even understand what the heck is really being said. smile  First things first.

 

 
  [ # 56 ]

oh, and if it gets stuck on the grammar, no problem: it will try “relaxing” the rules, ignore some, find the closest matching semantic tree / concept, and then ask the user if that is what they perhaps meant.  I may throw in probability if I feel like it, and if it is over 99% sure, it will just go with it.  Not sure on that yet though.
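That fallback could be as simple as: score each candidate interpretation, act automatically only above a confidence cutoff, and otherwise ask. A sketch (the 99% cutoff is the figure from the post above; the scoring itself is stubbed out):

```python
# Relaxed-parse fallback: rank candidate interpretations by probability and
# either act, ask for confirmation, or give up. How the probabilities are
# computed is left open, exactly as in the post.
def decide(candidates, threshold=0.99):
    """candidates: list of (interpretation, probability) pairs."""
    if not candidates:
        return ("fail", None)
    best, p = max(candidates, key=lambda c: c[1])
    if p > threshold:
        return ("act", best)      # over the cutoff: just go with it
    return ("confirm", best)      # ask: "is this what you perhaps meant?"
```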

Good examples are users using the wrong preposition.

 
