Administrator
Total posts: 3111
Joined: Jun 14, 2010
It’s very rare for me to actually start a topic of conversation here, mainly because my “role” here is a bit unusual, acting as both a student and a referee at the same time. However, on occasion something comes to mind that I feel I need to share, and this is one of those occasions.
In relative terms, with regards to AI, let’s (for the sake of discussion) say that by “rules” I mean hard-coded instructions, and by “experience” I mean data/structures/algorithms written by the AI itself. Now, given these distinctions, I’d like to outline an analogy to try to get my viewpoint across, and I invite everyone to share their thoughts/opinions/rebuttals/etc.
First, we have the average toddler, of age between 2 and 3 years old. In this context, “rules” are what Mommy and Daddy have told our toddler, named Dale (a nice, gender-neutral name), to do, or not do. “Experience”, on the other hand, is what Dale learns all by him/herself. Now, one day, Dale is in the kitchen with Mommy, who is baking a cake in the oven. Curious, Dale wanders somewhat unsteadily towards the oven, which has a rather hot door, due to the baking that is occurring within. Mommy says “Don’t touch the oven, Dale” (a rule), so Dale, who takes this as “no”, turns away. (For now, let’s assume that Dale has already learned the meaning of “don’t”.) Dale files this away as “because Mommy said so”, since Mommy didn’t give a reason why Dale shouldn’t touch the oven. A few days later, we find the same scenario, except that Mommy isn’t paying so much attention, and Dale has “forgotten” the “rule” from the other day (as is often the case with toddlers). So our intrepid adventurer walks up to the oven, with its hot door, and promptly receives a minor burn as a reward. The result is that Dale stores this away as “experience” that touching the oven door is a “bad thing”. Thus, there is now not only a “rule” for not touching the oven (because Mommy said so), but also an “experience” that says not to touch the oven (because “it’s a bad thing”, or maybe because “bad things happen”; it can go either way). As time goes on, under similar scenarios, Dale begins to build a substantial “database” of both “rules” and “experience” that helps to keep him/her safe and happy.
I feel that the creation of AI should, at least in some small respect, include something along these lines. We need some “rules” to begin the process of learning, and “experience” to continue, reinforce and refine the learning process.
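To make the distinction a little more concrete, here is a toy sketch (the names and structure are made up purely for illustration, not any particular system) of what I mean by seeding with “rules” and letting “experience” accumulate from outcomes:

```python
# Toy sketch of "rules" (hard-coded, given from outside) versus
# "experience" (entries the agent writes for itself, based on outcomes).
# All names here are hypothetical; this is only meant to illustrate the distinction.

class ToddlerAgent:
    def __init__(self):
        # "Rules": hard-coded instructions ("because Mommy said so").
        self.rules = {"touch oven": "forbidden (no reason given)"}
        # "Experience": entries the agent creates itself from what happened.
        self.experience = {}

    def consider(self, action):
        if action in self.experience:
            return f"avoid: {self.experience[action]}"   # learned the hard way
        if action in self.rules:
            return f"avoid: {self.rules[action]}"        # obeying the rule (if remembered)
        return "no objection"

    def record_outcome(self, action, outcome):
        # Negative outcomes get stored as experience, which now reinforces the rule.
        if outcome == "pain":
            self.experience[action] = "bad things happen"

dale = ToddlerAgent()
print(dale.consider("touch oven"))            # blocked by the rule alone
dale.record_outcome("touch oven", "pain")     # the minor burn
print(dale.consider("touch oven"))            # now also blocked by experience
```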
Ok, I’m done babbling. Thoughts? Ideas? Rebuttals? Go for it!
Posted: Feb 15, 2011 [ # 1 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
I bet you expected me to respond here
Great example, and I must add my own anecdotal experience to it: when my oldest son was about that age (he is now 19) he had this fascination with our oven. As I believe that ‘experience’ is a better teacher than ‘laying down (unfounded) rules’, I didn’t tell him not to go near the oven. Instead I took him close to the hot oven with his little hands in mine, so I could gauge the heat myself, and brought his hands close enough that it wasn’t really hurting but was seriously unpleasant. I told him that what he was experiencing was ‘hot’. From that day on, every time the light in the oven was on, even without the heat, he would walk around it with obvious respect and point his little finger towards the oven while saying ‘hot’! Within days he ‘mapped’ this experience to cups of hot coffee on the table.
So while writing this down I realise that even with the upbringing of my kids (I have two sons) I gave precedence to ‘experience’ over ‘rules’.
Dave Morton - Feb 15, 2011: I feel that the creation of AI should, at least in some small respect, include something along these lines. We need some “rules” to begin the process of learning, and “experience” to continue, reinforce and refine the learning process.
I totally agree, although I do understand that ‘rules to begin the process’ means different things to different researchers.
Posted: Feb 15, 2011 [ # 2 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
I’m gratified to see that I’m not the only one who’s had experiences similar to my analogy. Like you, I felt (and still feel, for that matter) that “experience is the greatest teacher” when it comes to children (my youngest, Tom, is now 26), within certain safety guidelines and restrictions (for example, you DON’T let the kid play in the street just so that s/he can learn the dangers of automobiles first hand).
And I understand that there’s a LOT of “interpretational latitude” when it comes to “rules” versus “experience”, as to what and how much of each is required. That’s up to the individual programmer/researcher to discover on their own.
Posted: Feb 15, 2011 [ # 3 ]
Senior member
Total posts: 974
Joined: Oct 21, 2009
There is a very, very subtle point here. I have discovered that there are two types of logic: “execute logic” and “knowledge logic”.
Execute logic is what makes your bot run. That is the code that, when executed, looks at and examines the “knowledge logic”.
In your example, Dave, the toddler’s “execute logic” is the logic that caused him to question the rule (that is, execute logic made him say “hmm, Mom said not to touch the oven, but there is no reason given, thus I should find out for myself”). Execute logic examined the “knowledge logic”.
Knowledge logic:
This can be either a statement OR a rule. Knowledge logic in your example was “Don’t touch the oven”.
OR perhaps
“IF you touch the oven, THEN you’ll be unhappy”
We don’t know why we’ll be unhappy, but that is the logic we are given. But it’s only “knowledge logic”.
So, in CLUES, grammar rules are “knowledge logic”. There is a *LOT* more to it than that!!
The execute logic creates and maintains the knowledge logic.
But the AI can use both when trying to solve problems and make plans.
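To make that a bit more concrete, here is a rough, hypothetical sketch in Python (the names and data structures are mine, invented only for illustration; this is not the actual CLUES code):

```python
# Hypothetical sketch of the execute-logic / knowledge-logic split described above.
# Knowledge logic (KL) is plain data: statements and IF/THEN rules.
# Execute logic (EL) is the code that examines, creates, and maintains that data.

knowledge_logic = [
    {"type": "statement", "text": "Don't touch the oven"},
    {"type": "rule", "if": "you touch the oven", "then": "you'll be unhappy"},
]

def execute_logic(proposed_action, kl):
    """EL: scan the KL for anything relevant to the proposed action."""
    reasons = []
    for item in kl:
        if item["type"] == "rule" and proposed_action in item["if"]:
            reasons.append(f"rule: {item['if']} -> {item['then']}")
        elif item["type"] == "statement" and proposed_action in item["text"].lower():
            reasons.append(f"statement: {item['text']} (no reason given)")
    return reasons or ["nothing known; maybe find out for myself"]

def learn(new_item, kl):
    """EL also maintains the KL by adding what was learned."""
    kl.append(new_item)

print(execute_logic("touch the oven", knowledge_logic))
learn({"type": "rule", "if": "you touch the oven", "then": "you get burned"}, knowledge_logic)
print(execute_logic("touch the oven", knowledge_logic))
```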
Posted: Feb 15, 2011 [ # 4 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Victor, how would you describe (like you did above) the example that I gave: i.e. skipping the ‘rule’ part and going straight for the ‘experience’ part?
Victor Shulist - Feb 15, 2011: The execute logic creates and maintains the knowledge logic.
Per my example it seems to me that ‘knowledge logic’ can be created and maintained from ‘other’ knowledge logic. Hence my view that ‘new experiences’ (the coffee cup) are based on, and categorized by, previous experiences (the oven).
Posted: Feb 15, 2011 [ # 5 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
One thing that I forgot to mention in my opening post is that, oftentimes, when working with AI (well, probably with kids, too), we need to carefully monitor the process of gaining “experience”, to prevent certain skewed or aberrant behaviors from forming. It’s probably not the best example in the world, but a quote from Mark Twain comes to mind:
“The cat, having sat upon a hot stove lid, will not sit upon a hot stove lid again. But he won’t sit upon a cold stove lid, either.”
This is something that I think we’ve all fallen prey to, in some form or another, during the course of our lives, and something that we might profitably consider as we continue our respective avenues of research.
Posted: Feb 15, 2011 [ # 6 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Dave Morton - Feb 15, 2011: “The cat, having sat upon a hot stove lid, will not sit upon a hot stove lid again. But he won’t sit upon a cold stove lid, either.”
Ah, the Pavlov-response: http://en.wikipedia.org/wiki/Classical_conditioning
I have to think about this some more, but I agree that we have to monitor the learning process to prevent strange and wrongful ideas from entering the AI-mind.
Before you know it, your bot develops a belief in UFOs.
Posted: Feb 15, 2011 [ # 7 ]
Senior member
Total posts: 623
Joined: Aug 24, 2010
Hans Peter Willems - Feb 15, 2011: Victor, how would you describe (like you did above) the example that I gave: i.e. skipping the ‘rule’ part and going straight for the ‘experience’ part?
Victor Shulist - Feb 15, 2011: The execute logic creates and maintains the knowledge logic.
Per my example it seems to me that ‘knowledge logic’ can be created and maintained from ‘other’ knowledge logic. Hence my view that ‘new experiences’ (the coffee cup) are based on, and categorized by, previous experiences (the oven).
(emphasis mine)
I think Victor means execute logic is what’s doing the creating and maintaining. The execute logic can use knowledge logic to build on/expand/revise other knowledge logic.
Dave: You make an important point. This is why user feedback is so important for bot development, and why one shouldn’t focus too soon on allowing a bot to learn independently. Only after training on large sets of data can one expect a bot to learn appropriately on its own, and even then only within the restricted domain of the training data!
Posted: Feb 16, 2011 [ # 8 ]
Senior member
Total posts: 974
Joined: Oct 21, 2009
C R Hunt - Feb 15, 2011:
I think Victor means execute logic is what’s doing the creating and maintaining. The execute logic can use knowledge logic to build on/expand/revise other knowledge logic.
Absolutely correct.
Hans, yes, both EL and KL can be used to generate other KL.
EL would be doing the generalizations to produce KL. New data received (to me, “experience”, i.e. sensor data) and facts told to the bot (audible, visual, or keyboard) are all sense inputs, and both are data. In other words, hearing the statement “Don’t go near the oven” and the actual experience of touching a hot oven are both data (the latter is simply storage of the temperature of the oven, as simple as an integer value stored in a db), no different than a complex parse tree (of a statement or rule coded in NL).
EL could also make assumptions.
Example:
Situation 1 ———————
(bot) child touches the oven; the temperature reads X degrees (well over “normal”)
(bot) child hears mother say “hot!! hot!!!”
Situation 2 ———————
(bot) child touches a cup of hot coffee; the temperature reading is close to situation 1. Thus there is a ‘connection’ here with situation 1… what else was involved with situation 1? Mom saying “hot”; perhaps this wide difference in temperature is related to the word “hot”.
EL would be involved in doing that.
Now, complex information about the world, not just temperature readings (i.e. experiencing heat or cold) but complex sentences that are told to the bot via audible, visual, or keyboard input, are also experiences.
EL relating complex parse trees to sensor data could be used to form generalizations… the EL could even use inductive logic to produce a KL rule in NL.
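Something like the following toy Python sketch is the flavour of generalization I mean (the numbers, thresholds and names are invented purely for illustration, not my actual implementation):

```python
# Rough sketch of the generalization step described above: EL notices that two
# situations share an unusually high sensor reading, and that one of them
# co-occurred with the word "hot", so it proposes a KL rule linking the two.

NORMAL_TEMP = 20.0  # hypothetical "normal" reading

situations = [
    {"object": "oven",       "temperature": 180.0, "heard": ["hot", "hot"]},
    {"object": "coffee cup", "temperature": 70.0,  "heard": []},
]

def generalize(situations, threshold=30.0):
    """Very naive induction: if several high-temperature situations exist and a
    word was heard in one of them, propose a rule connecting word and reading."""
    hot_situations = [s for s in situations if s["temperature"] - NORMAL_TEMP > threshold]
    words = [w for s in hot_situations for w in s["heard"]]
    if len(hot_situations) >= 2 and words:
        word = max(set(words), key=words.count)
        return {"if": f"temperature well above {NORMAL_TEMP}",
                "then": f"object is '{word}'"}
    return None

print(generalize(situations))
# -> {'if': 'temperature well above 20.0', 'then': "object is 'hot'"}
```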
Posted: Feb 16, 2011 [ # 9 ]
Senior member
Total posts: 974
Joined: Oct 21, 2009
The bot could then ‘voice’ that logic, and say to the user… “Soooooo… could you say that IF (X) THEN (Y)?”
Where (X) and (Y) are arbitrarily complex sentences.
The answer might not be a simple yes/no, but perhaps a complex sentence as a response.
That complex sentence reply is, of course, KL, and EL would propose theories on how to integrate it into the larger KL knowledge base.
This is the kind of functionality I am pursuing in my bot design.
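A very small, hypothetical sketch of that confirmation step (illustrative names only, not real code from my bot):

```python
# Sketch of the "voice a candidate rule and integrate the reply" loop described above.

def voice_candidate_rule(x, y):
    """Turn a candidate rule into a question put to the user."""
    return f"Soooo... could you say that IF {x} THEN {y}?"

knowledge_logic = []

def integrate_reply(candidate, reply, kl):
    # A real system would have to parse an arbitrarily complex reply;
    # here we only handle a plain "yes" to keep the sketch minimal.
    if reply.strip().lower().startswith("yes"):
        kl.append(candidate)

candidate = {"if": "an object is much hotter than normal",
             "then": "touching it is unpleasant"}
print(voice_candidate_rule(candidate["if"], candidate["then"]))
integrate_reply(candidate, "Yes, that's right.", knowledge_logic)
print(knowledge_logic)
```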
Posted: Feb 16, 2011 [ # 10 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Victor Shulist - Feb 16, 2011: New data received (to me, “experience”, i.e. sensor data) and facts told to the bot (audible, visual, or keyboard) are all sense inputs, and both are data.
In my view, ‘experience’ is more than just ‘data’. It has to be mapped to ‘context’, where this context can be core-concepts (i.e. instinct).
Victor Shulist - Feb 16, 2011: In other words, hearing the statement “Don’t go near the oven” and the actual experience of touching a hot oven are both data (the latter is simply storage of the temperature of the oven, as simple as an integer value stored in a db), no different than a complex parse tree (of a statement or rule coded in NL).
A ‘simple stored integer value’ becomes an ‘experience’ when mapped to a core-concept like ‘pain’ or ‘discomfort’. Feeling the heat of the oven triggers instinct-driven responses like moving away from the oven to alleviate that sensation.
Victor Shulist - Feb 16, 2011: Now, complex information about the world, not just temperature readings (i.e. experiencing heat or cold) but complex sentences that are told to the bot via audible, visual, or keyboard input, are also experiences.
Of course
I talked about ‘virtual sensors’ before: there is no need (for now) to hook up a whole bunch of electronic hardware to get sensor readings; we can simply ‘describe’ a sensor as a textual input that should be handled by the AI as a sensor. It’s a simple variable assignment where the sensor is defined as a variable and we assign values to it. However, the variable needs to be linked to core-concepts, like mapping ‘heat-sensor > 23’ to ‘discomfort’, to give it context. From that context it then becomes ‘experience’ based on sensor readings over time.
Now jump to grammatical input: to ‘generate’ ‘experience’ from that grammatical input, it needs to be mapped to ‘context’ (and other ‘concepts’ that might define the context), just the same as the sensor input described above.
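As a rough illustration of what I mean (the threshold and concept names below are made up just for this example), a virtual sensor could be as simple as:

```python
# Minimal sketch of a 'virtual sensor': the sensor is just a variable we assign
# values to, but each reading is mapped to a core concept to give it context,
# and the accumulated readings become 'experience' over time.

core_concept_map = [
    # (sensor, predicate, concept)
    ("heat-sensor", lambda v: v > 23, "discomfort"),
    ("heat-sensor", lambda v: v <= 23, "neutral"),
]

experience_log = []

def read_virtual_sensor(sensor, value):
    """Assign a value to the virtual sensor and map it to a core concept."""
    for s, predicate, concept in core_concept_map:
        if s == sensor and predicate(value):
            entry = {"sensor": sensor, "value": value, "concept": concept}
            experience_log.append(entry)   # over time this becomes 'experience'
            return entry

print(read_virtual_sensor("heat-sensor", 19))   # -> mapped to 'neutral'
print(read_virtual_sensor("heat-sensor", 55))   # -> mapped to 'discomfort'
print(len(experience_log), "readings logged")
```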
Posted: Feb 16, 2011 [ # 11 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
@Victor:
Talking about concepts like pain or discomfort leads directly to something like the PAD-model. Maybe this is something you can use in your approach as well:
http://www.kaaj.com/psych/ai.html
Coming across this research was for me one of the triggers to jump to ‘strong AI research’ myself. I have actually implemented the PAD-model in AIML, as an experiment, but value-handling is pretty hard to do in AIML.
Posted: Feb 16, 2011 [ # 12 ]
Administrator
Total posts: 3111
Joined: Jun 14, 2010
Hans Peter Willems - Feb 16, 2011:
...but value-handling is pretty hard to do in AIML.
I’ve run into those same difficulties myself. But the advantage for me is that I control not only the data (AIML) but also the interpreter. This gives me much more flexibility, and allows me to integrate improved functionality as I go along, allowing Morti to “evolve”. For example, when Morti is in his “natural home”, in my pChat chatroom script, you can interface with Morti to control a virtual telescope (provided you’re in the “observatory”, that is), to obtain either images of a given celestial body, or a “sky mode” view of that same body (also provided that it’s an “extra-solar” object) in Google Earth. If I were to make the requisite changes to his current “home”, that functionality would still exist. But there’s no real need for it right now, so I didn’t code it into the web page. Given the amount of control I have over the entire project, using an AIML approach isn’t as limited as it could be. At some point in the future, I’ll be moving away from stimulus/response and pattern matching as a means of generating Morti’s output, but I still have a lot to learn before that happens.
Posted: Feb 17, 2011 [ # 13 ]
Guru
Total posts: 1081
Joined: Dec 17, 2010
Dave, didn’t Gary do something similar with AIML and extensions in his AIMLPAD?
Hans, how did you implement PAD with AIML? Do you find that PAD represents emotions well?
Posted: Feb 17, 2011 [ # 14 ]
Senior member
Total posts: 494
Joined: Jan 27, 2011
Merlin - Feb 17, 2011: Hans, how did you implement PAD with AIML? Do you find that PAD represents emotions well?
I implemented it just as an experiment, so I still have to decide how good or bad the PAD-model is. However, the reasoning behind the PAD-model seems pretty sound to me.
I implemented the PAD variables as a step-counter in AIML, and because you need a pattern for each step in the counter I just used 5 levels (1-5) in each of the three scales. Next I concatenated the three values together as a three-digit number that was mapped to a certain ‘state of mind’. So this mapped to 125 different values that I brought down to just about 10 by using SRAIs for the remaining 115 numbers. I used a basic Alice dataset that I infused with jumps to the step-counters so the conversation was constantly changing the PAD-values when appropriate. I also had a setting to simulate the user-state (nice, angry, aggressive, etc.) to interact with the step-counters so they could jump more than one step in certain situations.
It was fun to play with, but it was a very crude representation of ‘feelings’. It made me long for a richer environment to work in, so I dumped AIML. But then I started to think about how the PAD-model could be combined with a better way to describe ‘knowledge’ in AI, and how something like the PAD-model could provide an extra dimension in that model… and here I am now, working on that model and researching just about everything that is related.
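For anyone curious, the step-counter idea roughly amounts to something like the following Python sketch (this is not the actual AIML, and the state labels are placeholders rather than my real mapping):

```python
# Rough re-sketch of the PAD step-counter idea: three 1-5 counters, concatenated
# into a three-digit code, with only a handful of the 125 codes given named states
# (the rest would fall back to a default, like the SRAI collapse described above).

pad = {"pleasure": 3, "arousal": 3, "dominance": 3}   # three 1-5 step counters

def step(dimension, delta):
    """Move one of the PAD counters, clamped to the 1-5 range."""
    pad[dimension] = max(1, min(5, pad[dimension] + delta))

def pad_code():
    """Concatenate the three values into a three-digit code, e.g. 3-3-3 -> '333'."""
    return f"{pad['pleasure']}{pad['arousal']}{pad['dominance']}"

# Placeholder labels, just to show the mapping idea.
state_of_mind = {"555": "elated", "511": "relaxed", "155": "hostile", "111": "bored"}

step("pleasure", 2)
step("arousal", 2)
step("dominance", 2)
print(pad_code(), "->", state_of_mind.get(pad_code(), "neutral (default)"))
```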
Posted: Feb 18, 2011 [ # 15 ]
Experienced member
Total posts: 69
Joined: Aug 17, 2010
Good point. I believe we need some rules as a seed at first. The other rules will then be built by induction from experience.