

Introduction
 
 
  [ # 76 ]
Hans Peter Willems - Feb 6, 2011:

I never said that we can do without grammar. What I’m disputing is that grammar is needed for the base AI mind-model to be able to learn, and eventually even ‘understand’, the concepts that we try to teach it. My stance is that ‘proper’ grammar is just something the AI could, or maybe even should, learn to be able to have a ‘proper’ conversation.

We are on opposite ends of the spectrum here for sure, my friend!

I *do* like the idea of a bot learning grammar.  But I think it is unnecessary.  I think that adds way too much complication to your project.  I believe we can skip that, and provide the grammar directly. 

Then, build on that.  In my system, grammar plays the role of “brain storming” - that is, generating all the possible interpretations that an input can have.  In NL, one sentence can sometimes generate thousands of possible meanings (and ALL of them grammatically correct!).  That is why stage 2 processing in my bot is about knowledge of the world, and semantics, and it deduces the most reasonable interpretation.

Then, once it knows, or believes it knows, which of those grammatically correct possibilities is the right one, it takes that and figures out what to do.

Later, it will figure out which combination of logic modules to run, in a tree structure, and deduce a response.

Certainly NOT simply stimulus > response!
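
To make the staged idea concrete, here is a minimal sketch, assuming a toy world model and the classic “I saw the man with the telescope” ambiguity; the names and numbers are mine, not anything from the actual CLUES code:

```python
# Hypothetical sketch of the staged pipeline described above (NOT CLUES code):
# stage 1 over-generates grammatically valid readings, stage 2 scores them
# against world knowledge, and only the winning reading drives the response.

WORLD_KNOWLEDGE = {                         # toy stand-in for a world model
    ("I", "saw-using", "telescope"): 0.8,   # people look through telescopes
    ("man", "carried", "telescope"): 0.3,   # possible, but less typical
}

def parse_all(sentence):
    """Stage 1 ('brain storming'): every grammatically valid reading."""
    # A real grammar would emit parse trees; flat tuples keep the sketch short.
    return [("I", "saw-using", "telescope"), ("man", "carried", "telescope")]

def plausibility(reading):
    """Stage 2: score a reading by world knowledge; unknown readings get 0."""
    return WORLD_KNOWLEDGE.get(reading, 0.0)

def interpret(sentence):
    candidates = parse_all(sentence)          # can be thousands in real NL
    return max(candidates, key=plausibility)  # keep the most plausible one

print(interpret("I saw the man with the telescope"))
# -> ('I', 'saw-using', 'telescope')
```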

 

 
  [ # 77 ]
Gary Dubuque - Feb 6, 2011:

Hans Peter Willems, Feb 5th
Me: it is cold outside
AI: I suggest you put on something warm then

As I pointed out before, Cyc has millions of these microtheories (rules, assertions, etc.) that might help with the kind of answers Hans just cited (about putting on something warm). The hurdle to overcome is the part about simulation (sometimes I refer to this as storytelling). Going from being cold outside to putting on something warm requires a sense of planning. Current technology usually builds those plans from a set of known operations or functions.

The AI should be able to ‘reason’ its way to such an answer:

user input: it is cold outside
AI reasoning: ‘cold outside’ maps to a ‘user experience’
AI reasoning: ‘user’ maps to ‘dislike cold -> like warm’
AI reasoning: ‘user - warm’ maps to ‘dress for outside’
AI reasoning: ‘dress for outside’ maps to ‘be warm outside’
AI reasoning: ‘it is cold outside’ maps to ‘going outside’
AI reasoning: ‘going outside’ maps to ‘staying warm’
AI solution: dress warm !

The ‘reasoning’ part is of course based on ‘experience’ derived from previous input. And because we start with a blank AI, there is no previous input, so it needs some ‘base-concepts’ to be able to map new input and build new relations between concepts. The ‘reasoning’ engine must also have ‘fuzzy’ behaviour, to cater for ‘beautiful accidents’ and, of course, to have the AI get things wrong all on its own, as I’m also very convinced that ‘learning from mistakes’ is very important in building concept understanding.
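
As a rough illustration only (the data structures below are my assumptions, not Hans’s actual model), the chain above can be mimicked by following learned concept-to-concept mappings until an actionable concept is reached:

```python
# Hypothetical sketch of chained 'mapping' reasoning: each learned association
# maps one concept to another, and the engine follows the chain until it
# reaches a concept it can act on.

MAPPINGS = {                    # 'experience' derived from previous input
    "cold outside": "user experience: cold",
    "user experience: cold": "user dislikes cold / likes warm",
    "user dislikes cold / likes warm": "dress for outside",
    "dress for outside": "be warm outside",
}

ACTIONS = {"be warm outside": "I suggest you put on something warm then"}

def reason(concept, max_depth=10):
    """Follow concept mappings until an action is found (or we give up)."""
    for _ in range(max_depth):
        if concept in ACTIONS:
            return ACTIONS[concept]
        concept = MAPPINGS.get(concept)
        if concept is None:
            return "I don't know"   # a gap in experience: something to learn
    return "I don't know"

print(reason("cold outside"))   # -> I suggest you put on something warm then
```

A ‘fuzzy’ version might occasionally follow a weaker mapping instead of the strongest one, which is where the ‘beautiful accidents’ (and the instructive mistakes) would come from.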

Gary Dubuque - Feb 6, 2011:

Perhaps you can start with a set like instincts, but soon, when combinations of those instincts are formed and saved as learned knowledge, the identifiers that we use will become lost to the machine generated references of those collections. Practically, we won’t be able to follow what’s going on like we can do now with grammar and concepts. Language is not very good at this level of operation.

Agreed.

Gary Dubuque - Feb 6, 2011:

The deeper these chains of reasoning can go without losing the user, the more interesting I believe the conversation will be.  That is, of course, as long as the discussion can stay grounded between the participants. Just having the instincts doesn’t guarantee the appropriate ones will be applied to facilitate continued satisfaction.

The ‘instincts’ would only be the fundamental part of the AI; everything else will be built on top of that. A ‘real’ conversation cannot be held on ‘instincts’ alone; that would bring us back to the basic trigger-response model.

Gary Dubuque - Feb 6, 2011:

The most common feature of current bots is the user eventually becoming frustrated sometime during the discourse. Most chat bots simply react with conversational units of canned computations (Victor’s reactors) instead of generating the story that paraphrases (using the generated story and not a translation of the input) and maybe developing that story further to contribute new ideas to the conversation (like your example).

Indeed, it would take a dedicated teacher to be able to ‘stand’ the ‘initial stupidity’ of the AI while it reasons based on very limited concept-models. I think this is no different from teaching real humans wink

At the same time, because the ‘reasoning’ is not based on a strict algorithm, the AI will be capable of ‘beautiful accidents’ along the way. This might help the ‘teacher’ to stay involved in the conversation.

Gary Dubuque - Feb 6, 2011:

As you can surmise, this is more practical information than it is theory. At least towards the efforts I understand that interest you.

It does interest me, you are one of the few here that seems to understand to at least a certain degree where I’m going with this. So please keep your replies coming.

Gary Dubuque - Feb 6, 2011:

The learning part is the hard stuff to explain. Mainly because when the bot has acquired significant knowledge on its own, we don’t have a very good way to communicate with the internal operations except through “knowledge” the computer has learned to explain itself. As long as we hand-enter the knowledge, we can build an ontology we understand (if one can actually do that monumental feat - thanks to Cyc for putting the basics in perspective, because as I suggested earlier, I think Cyc is more passive information than active knowledge).

You need both knowledge and experience (either first- or second-hand) together to create ‘understanding’. Because of this we not only need to teach the AI what is correct, but also what is NOT correct; as I stated before, the AI needs to learn from mistakes as well. This is the basis for real understanding: being able to ask ‘why?’ and answer that question by determining the best ‘why’ answer in relation to the ‘why NOTs’.

 

 
  [ # 78 ]

Guys,

All this input is great regarding reasoning and the like.  But don’t we agree that we first need the bot to fully understand the input from the user?  I mean, if the system’s understanding of the user input is “Eliza level” - that is, picking out keywords or matching templates - how well could it possibly fare?

Unless the bot can figure out things like:

“While I was in Africa, I shot an elephant in my pajamas”

and know that *PROBABLY* the elephant was NOT in your pajamas, but instead YOU were *wearing* your pajamas, all the processing that comes after is pointless.
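
As a hedged sketch of the tie-break involved (the readings and probabilities below are invented for illustration): both attachments of “in my pajamas” are grammatical, so only world knowledge can choose between them.

```python
# Both readings pass the grammar; world knowledge has to break the tie.
READINGS = {
    "the speaker was wearing the pajamas": 0.95,  # people wear their pajamas
    "the elephant was in the pajamas": 0.02,      # grammatical, but absurd
}

best = max(READINGS, key=READINGS.get)
print("Most likely reading:", best)
# -> Most likely reading: the speaker was wearing the pajamas
```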

You must have your bot understand, really understand, the input, *THEN* talk about knowing things like a raincoat should be put on before you go outside if it is raining.

NO understanding of user input means “garbage in, garbage out”.


This is the point I was trying so hard to make with Gary in my electronics problem example above: the bot needs FULL understanding of user input.

Yes, I know, the response was “canned” (simply dividing voltage by resistance), but it was not the ACTION the bot takes that I was trying to talk about, it was the UNDERSTANDING of the input… that is the first priority, I believe.

And it needs to do that on its own.  Not by having thousands of templates; it has to resolve the ambiguity of language, and for that, it needs world knowledge.

 

 
  [ # 79 ]
Victor Shulist - Feb 6, 2011:

Then, once it knows, or believes it knows, which of those grammatically correct possibilities is the right one…

But ‘knowing something’ has NOTHING to do with something being ‘grammatically correct’. To me THAT is the flaw in your reasoning.

However, to keep this discussion fair: you have stated several times that you don’t aim for ‘strong AI’, whereas I actually do aim for that. So I think our disagreement is anchored in that difference. I actually do believe that you can accomplish your goal the way you are trying to, and I therefore do admire your work as documented on the forum here so far. But I also think that for building a ‘strong AI’, the grammar-based route is a flawed idea.

To create a strong artificial intelligence, we need to know how real intelligence works. There is still a lot to research in that field alone, but that is the basis for my model.

 

 
  [ # 80 ]

OMG, we’re starting to agree!!  smile

Yes, I am not really shooting for true AI.  However, by building the system up, and making it work with higher and higher levels of abstract information, who knows, it could lead there.  But again, no, that is not my aim for now; out of scope.

Thanks btw, I also admire your attempt to have a bot learn grammar - what an achievement that would be - good luck with it.

          “But ‘knowing something’ has NOTHING to do with something being ‘grammatically correct’. To me THAT is the flaw in your reasoning.”

AGREED.

But, I believe grammar gives VALUABLE hints.  And no, there is semantic reasoning in CLUES that helps it figure out which one of those grammatically possible meanings could be the right one.    So CLUES *starts* with grammar, then tries to figure out which of those grammar suggestions work. 

Also, I will admit that I will probably have CLUES use word prediction to help it out smile  But primarily, grammar is regarded as very important in the design.

 

 
  [ # 81 ]

Sorry… the sentence was a bit confusing I see now, and it is too late to edit… so here…

“And no, there is semantic reasoning in CLUES that helps it figure out which one of those grammatically possible meanings could be the right one.”

should be,

>>> There is semantic reasoning in CLUES that helps it figure out which one of those grammatically possible meanings could be the right one.

 

 
  [ # 82 ]

When a discussion boils down to semantics, it can be frustrating for everybody involved.

Here’s an example:

But ‘knowing something’ has NOTHING to do with something being ‘grammatically correct’.

Now, I agree with Hans that a fact is not the same thing as the sentence which represents it. But I also consider language to be a strict representation of fact, the same way that a mathematical equation can be. But an equation only means anything if one follows specific rules in interpreting it (order of operations, the action of operators correctly applied, what have you). The same for a sentence.
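
For what it’s worth, the analogy can be made concrete in a couple of lines (my example, not C R’s):

```python
# The same symbols yield different 'facts' depending on which interpretation
# rules are agreed upon - just as a sentence only means something under
# shared grammar rules.

with_precedence = 2 + 3 * 4            # 14: * binds tighter than +
left_to_right = (2 + 3) * 4            # 20: naive left-to-right reading

print(with_precedence, left_to_right)  # same symbols, two different 'facts'
```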

So here’s my question for Hans: Are you planning for your project to utilize input besides text? What types of behaviors do you consider the “instincts” you plan to hard-code? Do you consider the propensity to learn and use language an instinct? (Certainly we share many instincts with parrots, but they will never carry on a conversation, though they have the vocal ability.) If so, how will you hard-code it to search for structural rules within textual input?

You said this was a new project, so I’m sure much of the above hasn’t been fleshed out yet. But I am curious as to your ideas on this subject. I also have some tools in place for my bot to create new grammar rules by developing maps that translate a complex sentence into simple sentences. (They work in a somewhat fuzzy way as well. Too much fuzziness tends to lead to unreliable results.) But these simple sentences are dissected using hard-coded rules. I can’t imagine the time and coding involved for the bot to figure out for itself what the difference between a verb and a noun is, let alone that many words can be both. To say nothing of spatial/temporal relationships (as encoded by adverbs, prepositions, etc).
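
A rough sketch of what such complex-to-simple maps might look like; the rule format and example below are my own guess at the idea, not C R’s actual implementation:

```python
import re

# Each rule: a pattern over a complex sentence form, plus templates for the
# simple sentences it decomposes into.
RULES = [
    (re.compile(r"^(?P<a>\w+), who (?P<rel>[\w ]+), (?P<rest>.+)$"),
     ["{a} {rel}", "{a} {rest}"]),
]

def simplify(sentence):
    """Translate a complex sentence into a list of simple sentences."""
    for pattern, templates in RULES:
        m = pattern.match(sentence)
        if m:
            return [t.format(**m.groupdict()) for t in templates]
    return [sentence]   # no rule matched: pass through unchanged

print(simplify("Alice, who lives in Paris, owns a cat"))
# -> ['Alice lives in Paris', 'Alice owns a cat']
```

A fuzzier matcher could allow partial pattern matches, but - as noted above - too much fuzziness tends to lead to unreliable results.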

 

 
  [ # 83 ]

Oh, my! I’m away from the site for a while and ALL SORTS of things happen! Over 400 new posts to go through, Victor is back, we have new members to welcome, and I have some threads to split! ACK!!!

First off, let’s welcome the new member, since this was the original intent of the thread. Hans, hello and welcome! I’m Dave, one of the board’s moderators, and a chatbot programmer as well. I don’t have the time to work on my chatbot that I used to, but I still try to keep abreast of things. As you’ve no doubt already noticed, we have a remarkably intelligent and very friendly/helpful crowd here, who are passionate about what they do. I’m rather in awe of most of these folks, but don’t let them know I said that, or it’ll go to their heads. smile

Secondly, Victor, it’s great to see you back on a regular basis. I’m sure that I speak for most of the community when I say that we’ve missed your insights and I’m looking forward to catching up with your posts here (along, of course, with everyone else’s).

Now obviously, I haven’t had the opportunity to read through this discussion to any great extent, and I sincerely want it to continue: not only does it make for some very compelling reading, but what I’ve read so far has given me some valuable insights into various aspects of machine understanding that will prove invaluable to me later on. However, that said, you do realize that I’m going to have to split this thread at some point, and give it a proper name and home. So please bear that in mind, and if you have an idea for a new title for the thread, please let me know. And again, welcome, Hans! I hope you enjoy your visits here, and if there’s anything I can do for you, you need only ask. smile

 

 
  [ # 84 ]
C R Hunt - Feb 6, 2011:

I can’t imagine the time and coding involved for the bot to figure out for itself what the difference between a verb and a noun is, let alone that many words can be both. To say nothing of spatial/temporal relationships (as encoded by adverbs, prepositions, etc).

No kidding!  And again, the first order of business is full understanding of the input, and the ability to deal with its different parts on its own (not via predefined logic) - see the electronics problem example above.

I don’t care what approach you use: when it comes to language, you WILL have ambiguity - and LOTS of it.  That is why it is 2011 and there is still really no bot that can handle it.

I think Google would have a natural language query engine by now if there were an algorithm out there already, don’t you?  I would think we would know.

No matter what approach you take, you are going to have ambiguity in interpreting the user’s input.  And grammar provides a systematic approach to the analysis of its structure.

And yes, there is more to it than grammar, much more.  But my own results so far, with grammar in the spotlight, are very promising.

Now, I am not saying it is impossible or wrong to have the bot learn grammar - that would be very impressive - but IMO, it is not necessary.

 

 

 
  [ # 85 ]

Dave !!!!!!!!!!!!!!!!!!!!!!!

How are you?  I was just wondering these last couple of days where you went!  I saw your name still listed in the left margin as moderator, so I knew you didn’t leave chatbots.org, but still, I was wondering: where is he!!??

Thanks for the warm welcome back. 

Yes, I mentioned WAY back in this thread that it needs to be split !!   

If this is Hans’ “Introduction” , imagine when he really gets going in a dedicated thread about his project??!!!!!!!!!!!!!!!

 

 
  [ # 86 ]
Dave Morton - Feb 6, 2011:

However, that said, you do realize that I’m going to have to split this thread at some point, and give it a proper name and home.

Dave, thanks for the kind welcome. As for splitting this discussion (which got going much faster than I anticipated myself smile), I’ll start a dedicated topic for my research project somewhere in the next few days and you can move anything specific from this topic into that new one.

One request: some discussion specifically about Victor’s project has spilled into this topic as well; it would be nice if those messages could be moved to his topic.

 

 
  [ # 87 ]
Jan Bogaerts - Feb 5, 2011:

1. Gary has a point about overloading the word ‘knowledge’. What is that anyway? Perhaps, it’s only a database system?
2. I would just like to add a few things here:
- Hans, it seems to me that your current model consists of weighted relationships between words/concepts. If so, this path has been tried before, many times. Some examples: Mindforth or this ai project,....  Personally, I think it’s a model missing a few things.

Missing things, like FULL understanding of user input, in all the RICHNESS the LANGUAGE offers.  “Reasoning” by simple “X has-a Y” and “X is-a Y” while ignoring the complexities of language is doomed to failure.  All these approaches ignored language in its FULLNESS.  Thus, for me, stage 1 is to have complete, TOTAL understanding of English (or other natural language) input, to the point where it can be paraphrased - which my bot can do now.  And no, Gary, this is not “real thinking” - but it is the first VERY IMPORTANT step.  Then the automated reasoning, like Hans’ “dress warm for outside” example, starts.  But you can’t reason on simple “noun verb object” and “is-a” and “has-a” type relationships between objects & concepts.  Language & grammar provide us with a very rich system for expressing complex relationships; we need the bot to understand that FIRST.

Full free-form NL processing, with “paraphrase” testing, allowing the system to determine the meaning of FREE FORM input on its own, is the first step.

So, in short, if you have a bot that gets hung up on something like the following, don’t waste your time trying to make it “reason”; first have it master NL, in all of its intricacies, then move forward:

user:My name is bob and whats yours
ai: hello, bob and whats yours

-or

user: a ham sandwich is better than nothing
user: nothing is better than a million dollars
ai : thus, a ham sandwich is better than a million dollars

Yes, it is funny, but if your system just works with simple X-is-Y and X-has-Y relationships that ignore language, ignore grammar, and most importantly, ignore semantic reasoning to evaluate those outputs, you’re doomed to failure. (A small sketch at the end of this post makes this failure mode concrete.)

Language is rich in expression.  The world is rich in complexity, and so is the human mind; that is why our language is so complicated.  If reality were simple, then yes, X-is-Y, X-has-a-Y and all this “Socrates is a man, all men are mortal, thus Socrates is mortal” stuff would be enough for AI, but it isn’t.  Those simple systems have FAILED, over and over again for the past 50 years, to bring us a bot that can carry a real conversation, learn by NL, and be asked complex questions whose answers it must find, or arrive at by deduction.  An AI will need to be able to read in very complex NL statements and figure out a way to correlate the information in each of them to reach a conclusion.
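
To make the ham-sandwich failure mode concrete, here is a minimal sketch of exactly the kind of naive triple-chaining being criticized (invented code, not anyone’s real system):

```python
# A naive reasoner over flat triples: it chains "is better than" transitively
# and, ignoring the two different senses of "nothing", happily derives the
# absurd conclusion.

facts = [
    ("a ham sandwich", "is better than", "nothing"),
    ("nothing", "is better than", "a million dollars"),
]

for (x, rel1, y1) in facts:
    for (y2, rel2, z) in facts:
        if y1 == y2 and rel1 == rel2 == "is better than":
            print(f"thus, {x} {rel1} {z}")
# -> thus, a ham sandwich is better than a million dollars
```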

 

 

 
  [ # 88 ]
C R Hunt - Feb 6, 2011:

So here’s my question for Hans: Are you planning for your project to utilize input besides text? What types of behaviors do you consider the “instincts” you plan to hard-code? Do you consider the propensity to learn and use language an instinct? (Certainly we share many instincts with parrots, but they will never carry on a conversation, though they have the vocal ability.) If so, how will you hard-code it to search for structural rules within textual input?

A lot has to be fleshed out yet indeed, but here are a few short answers to your questions:

- Text input will be the main input for learning, but I’m also using sensors (simulated, only ‘loosely defined’ for now).
- I’ve not yet finalized my ‘instincts’ model, but it will encompass very basic things that I cannot input with text or (simulated) sensors; concepts like ‘above and below’ (gravity), ‘agreement - disagreement’, ‘to like or dislike’ (natural aversion)... stuff like that.
- ‘Learning and using language’ is not an instinct in itself, but it will be built on top of ‘instinct’. In my model it’s not called ‘instincts’; it’s called ‘core assumptions’.
- I’m not going to hard-code any structures besides the base structure to store and evaluate ‘concepts’. Everything else is going to be learned.
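
Purely as a speculative reading of the answers above (not Hans’s actual design), the ‘base structure’ plus ‘core assumptions’ split might look something like this:

```python
# Hypothetical sketch: a handful of built-in 'core assumption' concepts, plus
# one generic structure for storing and relating everything learned later.

CORE_ASSUMPTIONS = {"above", "below", "agree", "disagree", "like", "dislike"}

class Concept:
    """The single base structure: a named node with weighted relations."""
    def __init__(self, name, core=False):
        self.name = name
        self.core = core          # built-in ('core assumption') vs learned
        self.relations = {}       # other Concept -> weight

    def relate(self, other, weight):
        self.relations[other] = weight

# Everything beyond the core is learned by relating new concepts to old ones:
concepts = {n: Concept(n, core=True) for n in CORE_ASSUMPTIONS}
concepts["warm"] = Concept("warm")
concepts["warm"].relate(concepts["like"], 0.8)   # learned: warm -> like
```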

 

 
  [ # 89 ]

It will be cool to see how your project develops. I think the “base structure” will need to be very clever to handle all that you intend. (To take volumes of input and statistically determine how those words relate to hard-coded “concepts”.) How extensive do you intend these sensors to be? Are you going to include simulated sensory experience with the text it uses to learn? (Text and sensory input united into “lessons”.)

 

 
  [ # 90 ]
C R Hunt - Feb 6, 2011:

Are you going to include simulated sensory experience with the text it uses to learn? (Text and sensory input united into “lessons”.)

Yes, but the other way around at first: I’m going to define the meaning of the sensors by describing them in textual concepts, related to the ‘base-assumptions’. After that I can use the sensors to ‘help’ with textual input.

The sensors are mainly used to give the AI a sense of embodiment.
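
A loose sketch of what grounding one simulated sensor in textual concepts could look like; the sensor, thresholds, and concept names below are invented for illustration:

```python
def temperature_to_concept(celsius):
    """Map a raw (simulated) sensor reading onto already-learned concepts."""
    if celsius < 10:
        return "cold"   # concept previously defined via textual input
    if celsius > 25:
        return "warm"
    return "mild"

# Once grounded, the sensor can 'help' interpret text like "it is cold outside":
print(temperature_to_concept(4))   # -> cold
```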

C R Hunt - Feb 6, 2011:

I think the “base structure” will need to be very clever to handle all that you intend.

Yes indeed, that is where all the ‘magic’ needs to happen, so that is where my research is focusing now.

 

 