Chatbots, language, grammar & semantics
 
Poll
How big of a role does grammar play in your chatbot?
not central, grammar will be learned 2
grammar first, higher concepts after 5
i don’t care about grammar at all 2
have not decided 0
Total Votes: 9
 
  [ # 31 ]

Patti, this example has been posted before (by you?). But it only shows that we have great pattern-recognition skills: if you read it aloud, you’ll notice you actually do ‘spell-checking’ on the fly (try it). So that’s a point for Victor, but does it have anything to do with understanding? I don’t think so.

It’s a totally different thing when we translate between languages. Translation is not based on grammatical knowledge, but instead on ‘understanding of the concepts’ as formulated in different languages. So to translate a sentence we cannot just find the equivalent words in the other language and map them together; that way we only get a syntactical translation, which might be turned into a grammatical one by applying the rules of the target language. But we still don’t get a semantic translation: we can’t know whether the translation is correct unless we understand what the sentence actually ‘means’.

 

 
  [ # 32 ]
Hans Peter Willems - Feb 15, 2011:

Example: when the user puts in ‘I don’t want to talk with you right now’, based on grammar alone there is no way for a bot to figure out what the correct response to that should be. For a bot to respond correctly, it must have an understanding of what that means to the user himself. No grammar-based model is going to help you there.

Based ONLY on grammar, yes of course you are correct.

Ok, we need to clear something up here.

I am absolutely not saying that grammar is the alpha and omega of NLU (natural language understanding), of course it isn’t.

I’m saying that I don’t think it really needs to learn grammar.  I am giving my bot a head start on life by providing that.

By the way, to me grammar and misspelled words are completely different things. I think some of you are treating them as the same thing, and perhaps they are, but to me they are utterly different topics.

SO, for CLUES, it just uses grammar to give it HINTS as to what the input COULD be.

Semantic logic comes next, and higher levels of abstraction.  Grammar is first base.

For your example, Grace will know things like “to talk to you” means a social thing (and perhaps other semantic meanings), and “don’t want to” combined with it may semantically add up to the user perhaps being angry with her, to which she may ask: “Have I done something wrong?”
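Here is a rough, purely hypothetical sketch of that kind of layering (none of these names are Grace’s actual internals; it just shows pattern hints producing semantic tags, and a response rule firing on the combined tags):

# Hypothetical illustration only, not Grace/CLUES code.
# Layer 1: grammar/pattern hints pick out phrases; layer 2: phrases map to
# semantic tags; layer 3: a rule fires on the combination of tags.

SEMANTIC_TAGS = {
    "talk with you": {"social_interaction"},
    "talk to you": {"social_interaction"},
    "don't want to": {"refusal"},
}

RESPONSE_RULES = [
    ({"social_interaction", "refusal"}, "Have I done something wrong?"),
]

def respond(utterance: str) -> str:
    tags = set()
    for phrase, phrase_tags in SEMANTIC_TAGS.items():
        if phrase in utterance.lower():     # hint: the phrase is present
            tags |= phrase_tags             # accumulate semantic tags
    for required, reply in RESPONSE_RULES:
        if required <= tags:                # all required tags are present
            return reply
    return "Okay."

print(respond("I don't want to talk with you right now"))
# -> Have I done something wrong?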

All that good stuff (which I am ready to start teaching it THIS coming weekend) is much more than grammar, yes.

This is why I voted “grammar first, higher concepts after”.

 

 
  [ # 33 ]
Victor Shulist - Feb 15, 2011:

I’m saying that I don’t think it really needs to learn grammar.  I am giving my bot a head start on life by providing that.

I do understand that. The point I’m trying to make is that you may paint yourself into a corner. If you are ‘just’ designing a system that builds responses from inputs, then I see no problem. But you keep stating that you want to ‘go from here’ to higher levels of reasoning, and because of that I think you won’t get where you want to go, based on your current assumptions.

Victor Shulist - Feb 15, 2011:

Semantic logic comes next, and higher levels of abstraction.

This is, in essence, the problem that I see (and, to me, the flaw in all grammar-based AI research): how do you get from ‘grammar’ to ‘higher levels of abstraction’? From my perspective, grammar simply doesn’t have the properties to do that, at least not in a way that will actually get you somewhere.

Victor Shulist - Feb 15, 2011:

For your example, Grace will know things like “to talk to you” means a social thing (and perhaps other semantic meanings), and “don’t want to” combined with it may semantically add up to the user perhaps being angry with her, to which she may ask: “Have I done something wrong?”

Maybe the best response is just to shut up. The only way for the bot to know the correct response, other than feeding it ‘IF I say that THEN you should shut up’ (the ‘logic’ approach: more code), is to have ‘experience’ with previous similar situations that gives ‘context’ to the input. Our ‘view of our reality’ comes into play, so the AI also needs a ‘view of our reality’.

How do you propose to do that, based on a grammar-engine? Can you give an example of how that ‘higher level of abstraction’ is going to be formulated, stored and processed back into something that is not just a rehashing of words?

I do agree that grammar has its place in all this; that’s why I voted ‘grammar later’. But I don’t see grammar as a viable base for describing the ‘reality’ that an AI can use to ‘relate’ to concepts. For me the basics are, way way WAY before any processing takes place, a model with the parameters to describe the things that define our reality: time, location, sensory input, previous experiences, and finally, yes, conversation and interaction with other beings. If you have only grammar as the base, how do you cater for the rest?
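To make that a little more concrete, here is a minimal, purely hypothetical sketch of what such a ‘reality-describing’ record could look like; the field names are only illustrative and nothing here is a final design:

# Illustrative sketch only: an 'experience record' described by reality
# parameters (time, location, sensory input, related concepts), not by grammar.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Experience:
    timestamp: float                  # time
    location: str                     # where it happened
    sensory_input: Dict[str, str]     # e.g. {"text": "...", "sound": "..."}
    related_concepts: List[str]       # concepts this experience grounds

@dataclass
class Concept:
    name: str
    experiences: List[Experience] = field(default_factory=list)

    def relate(self, exp: Experience) -> None:
        """Ground the concept in a new experience (learned, not coded logic)."""
        self.experiences.append(exp)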

 

 
  [ # 34 ]

Is it just me or is all the talk about what people’s bots “will do” getting a little old? I mean, it is easy to say vague things like “my bot will use UNDERSTANDING and therefore be able to answer” because of grammar/not because of grammar/because it’s powered by little elves. I know this topic is in “AI thoughts” but I think focusing more on specific algorithms we intend to implement will be more productive than vague ideas that seem more focused on the results we’d like to see than how we plan to get there.

 

 
  [ # 35 ]
C R Hunt - Feb 15, 2011:

Is it just me or is all the talk about what people’s bots “will do” getting a little old? I mean, it is easy to say vague things like “my bot will use UNDERSTANDING and therefore be able to answer” because of grammar/not because of grammar/because it’s powered by little elves. I know this topic is in “AI thoughts” but I think focusing more on specific algorithms we intend to implement will be more productive than vague ideas that seem more focused on the results we’d like to see than how we plan to get there.

WELL SAID !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Backing up our theories with even more theories is all we are doing here. 

I will refrain from replying.  It’s pointless. 

The bottom line is that we don’t have enough understanding of each other’s bots’ architectures to say “who is right”.

I say, let’s wait it out. Let’s see where we all are one year from today, Feb. 15, 2012. Or whatever date we want to pick.

Then, perhaps have a competition with our bots smile  Have a panel of judges test our bots perhaps.

Theory is one thing…. RESULTS are another.

 

 
  [ # 36 ]

We seem to be heading back toward the whole “Gary vs. Victor” scenario again, and I think that’s counter-productive. {sigh!} It’s my considered opinion that Victor’s work on a grammar engine is a valid avenue of research, though hardly complete, that will achieve some level of success at some point as part of a larger package. From what I’ve read of the research many of the rest of you are following, I have the same impressions there, as well. Perhaps I’m wrong, and Victor’s work may not be as successful as I’m hoping. But none of us have completely traveled that path yet, so we just don’t know. This is the core of exploration, and we should all just let Victor (or anyone else, for that matter [Arthur comes to mind here, with “mindForth”]) follow his own path, and await the results. I, for one, find this sort of thing to be a “Grand Adventure”, and no adventure is without its risks and perils, hardships and setbacks.

Anyway…

Victor Shulist - Feb 15, 2011:

By the way, to me grammar and misspelled words are completely different things. I think some of you are treating them as the same thing, and perhaps they are, but to me they are utterly different topics.

I agree. Spelling is a subset of grammar, just as Astronomy is a subset of Science, and sand is a subset (sort of) of concrete. I see your efforts, Victor, as providing more of a framework, or point of reference, to guide GRACE towards understanding the user’s input, which is an important step.  After all, how can you attach a “concept” to an input, if you have no CLUE as to what the input means? (no, Hans, I’m not trying to cast aspersions upon your work. I’m just as excited about your “plan of attack” as I am with Victor’s. It’s just that I don’t understand yet how you’re going about things) If we look at the following:

“Went I market the to.”

We, as humans, will probably struggle, though only a little, to understand the meaning of the above sentence. Without at least some sort of grounding in grammar, even a “semantic approach” would fail with most current AI systems. I think that GRACE/CLUES, however, may be able to make sense of it, based on the “parts of speech” portion of grammar. But I’ll let Victor tackle that “question”. smile

 

 
  [ # 37 ]
C R Hunt - Feb 15, 2011:

I think focusing more on specific algorithms we intend to implement will be more productive

This is of course reasoned from the perspective that the algorithmic approach is the correct one (I assume here that by ‘algorithms’ you mean ‘coded logic’).

Please take a look at my project topic, where I state that my approach is to have as little coded logic as possible and to focus instead on a data model that can actually describe what (I think) is needed to get to strong AI. There is nothing vague about that, and there are no little elves involved.

Another thing that just occurred to me is the fact that Victor has stated before that he doesn’t want to build a ‘strong AI’, yet in these discussions he is constantly debating how his approach will be able to do things that are very much in the field of strong AI.

Also, let’s not forget that many approaches have failed that were based on just pushing as much grammatical information as possible into a database, linked with an inference engine, and hoping that somewhere along the line some magical threshold would be passed and the AI would become ‘conscious’ (or whatever you want to call it). Even the current research on large corpora (like Cyc) is already showing this not to be the case. The current hype around Watson is entertaining, but read up on the technology and you will quickly find that it runs cleverly designed algorithms to do what it does (fuzzy pattern matching and forward chaining towards a statistical solution), making it basically a very advanced calculator.

The fact that my research at this point is not yet ‘coded’ and running in any way does not mean that my research is ‘vague’. I’m just taking another approach than those of you who are experimenting with coded algorithms. I have experimented with AIML for quite some time and I’ve built several ‘bots’ with it to test certain things (like the PAD model), but it is this experimenting that has led me to believe that I needed to go ‘back to the drawing board’ and formulate a better design first.

 

 
  [ # 38 ]
Dave Morton - Feb 15, 2011:

After all, how can you attach a “concept” to an input, if you have no CLUE as to what the input means?

I have given the answer to that before: this is where the core concepts, or ‘AI instinct’, come into play. A very basic set of core concepts is used as a base to build upon through learning.

Example: we can hear a sound that we have never heard before, and it gives us a ‘creepy’ sensation. This is instinct. Now we investigate and find out it is indeed something horrible; it becomes actual (new) experience that ‘overloads’ the instinct. The next time we hear something similar (but not the same), we experience the combination of our instinct AND our new experience, and we gain more experience based on that.

Now this example is about ‘a sound’, but go back to textual input and (for me) old Batman comics come to mind: KAPOW, BLUK, SWOOSCH… no grammatical value at all, but definitely mapping to concepts, I would say wink
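As a purely hypothetical illustration of that ‘instinct plus experience’ idea (the numbers and names are mine, not an actual design):

# Hypothetical sketch: an innate 'instinct' reaction is combined with, and
# gradually overloaded by, learned experience of similar stimuli.
INSTINCT = {"unknown_loud_sound": -0.8}    # innate 'creepy' bias

experience_log = []                        # (stimulus, learned valence) pairs

def investigate(stimulus, outcome_valence):
    experience_log.append((stimulus, outcome_valence))

def react(stimulus):
    innate = INSTINCT.get(stimulus, 0.0)
    learned = [v for s, v in experience_log if s == stimulus]
    if not learned:
        return innate                      # first encounter: instinct only
    # experience 'overloads' instinct: the weight shifts toward what was learned
    return 0.3 * innate + 0.7 * (sum(learned) / len(learned))

print(react("unknown_loud_sound"))         # -0.8, pure instinct
investigate("unknown_loud_sound", -1.0)    # it was indeed something horrible
print(react("unknown_loud_sound"))         # instinct AND experience combined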

 

 
  [ # 39 ]

Thank you for the clarification, Hans. That analogy helps me to understand where you’re coming from a little more. smile

 

 
  [ # 40 ]
Hans Peter Willems - Feb 15, 2011:

This is of course reasoned from the perspective that the algorithmic approach is the correct one (I assume here that by ‘algorithms’ you mean ‘coded logic’).

I mean ‘algorithm’ more generally: the step-by-step method by which your bot will both 1) process input and 2) alter its own internal state (which may not be a function of the input).
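In the most stripped-down sense, I mean something like this, purely as an illustration (not anyone’s actual bot):

# Minimal sketch of that definition: a bot as a step function that
# 1) processes input and 2) updates internal state, where the state can
# also change on its own between inputs.
from typing import Optional

class Bot:
    def __init__(self):
        self.state = {"mood": 1.0, "topic": None}

    def step(self, user_input: Optional[str]) -> Optional[str]:
        self.state["mood"] *= 0.9                    # state drifts regardless of input
        if user_input is None:
            return None
        self.state["topic"] = user_input.split()[0]  # process the input (crudely)
        return "Tell me more about {}.".format(self.state["topic"])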

Hans Peter Willems - Feb 15, 2011:

Please take a look at my project topic, where I state that my approach is to have as little coded logic as possible and to focus instead on a data model that can actually describe what (I think) is needed to get to strong AI. There is nothing vague about that, and there are no little elves involved.

I follow every topic on this board, though I don’t always comment. I’ve read every word you’ve written there.

I want you to know that what I said before was no more a condemnation of you than it was of Victor or even of myself. We’ve all engaged in a little “my bot will do this and that and act like such and such for blah blah reason”, and this is all very well and good in an abstract sense, but it’s leading to conflict, because we’re disagreeing about hypothetical results of hypothetical actions. Seeing concrete examples of what we’ve done (whether it’s code, input/output examples, a planning map, or what have you) will be more beneficial to others, and to the sanity of this forum, IMHO.

Hans Peter Willems - Feb 15, 2011:

Another thing that just occurred to me is the fact that Victor has stated before that he doesn’t want to build a ‘strong AI’, yet in these discussions he is constantly debating how his approach will be able to do things that are very much in the field of strong AI.

This is exactly what I’m referring to. Let’s talk about where we are and our immediate goals. Speculation is fun, but we’re taking it too darn seriously.

Hans Peter Willems - Feb 15, 2011:

Also, let’s not forget that many approaches have failed that were based on just pushing as much grammatical information as possible into a database, linked with an inference engine, and hoping that somewhere along the line some magical threshold would be passed and the AI would become ‘conscious’ (or whatever you want to call it).

And many approaches have failed (or at least, not yet succeeded) that don’t properly account for grammar. I believe in posts of yore Victor spoke of this as being his inspiration for tackling grammar first. I had the same motivation.

Hans Peter Willems - Feb 15, 2011:

Even the current research on large corpora (like Cyc) is already showing this not to be the case. The current hype around Watson is entertaining, but read up on the technology and you will quickly find that it runs cleverly designed algorithms to do what it does (fuzzy pattern matching and forward chaining towards a statistical solution), making it basically a very advanced calculator.

We don’t know a lot about what makes Watson tick at this point. From everything I read, it sounds like his learning mechanisms are a big mix of techniques. Probably a successful chatbot will require the same.

Hans Peter Willems - Feb 15, 2011:

The fact that my research at this point is not yet ‘coded’ and running in any way does not mean that my research is ‘vague’. I’m just taking another approach than those of you who are experimenting with coded algorithms. I have experimented with AIML for quite some time and I’ve built several ‘bots’ with it to test certain things (like the PAD model), but it is this experimenting that has led me to believe that I needed to go ‘back to the drawing board’ and formulate a better design first.

Many people on this forum (Dave, Victor, me,...) have expressed that they don’t really understand how you plan to implement your ideas. I think talking in more concrete detail about your design and implementation ideas would help. I’d love to read about it.

 

 
  [ # 41 ]

Nope, Hans is 100% correct (& rest of us, zero percent) and his bot will be the world’s first TRUE AI.    There is absolutely no doubt in my mind.

There, in the interest of saving time smile

Instead of innocent until proven guilty, he is correct until proven incorrect. 
When can we expect the bot, Hans?

[Update at 3:30 PM, 15 Feb, 2011]

How big of a role does grammar play in your chatbot?
not central, grammar will be learned 2
grammar first, higher concepts after 5
i don’t care about grammar at all 2
have not decided 0

 

 
  [ # 42 ]
Dave Morton - Feb 15, 2011:

It’s my considered opinion that Victor’s work on a grammar engine is a valid avenue of research, though hardly complete, that will achieve some level of success at some point as part of a larger package.

That sounds like a lot, if not all, of research projects smile

Dave Morton - Feb 15, 2011:

Perhaps I’m wrong, and Victor’s work may not be as successful as I’m hoping.

Of course, I share that opinion. Only a very arrogant person would believe for sure that his ideas will be 100% successful. This is research.

Dave Morton - Feb 15, 2011:

“Went I market the to.”

Dave!! I love this example. Later, when “good grammar” is mastered, CLUES will go on to be able to handle this kind of mixed-up grammar. It just means more processing: after the ‘good grammar’ rules have all been tried, it will simply try out-of-order grammar. Yes, this won’t be blind brute force; instead, all kinds of hints will be used to help it.
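A toy sketch of that ‘strict grammar first, relaxed order as fallback’ idea might look like the following; the real thing would use hints to prune the search rather than the blind permutation loop shown here, and none of this is actual CLUES code:

# Toy sketch (not CLUES): try ordered grammar patterns first; only if none
# match, look for a reordering whose part-of-speech sequence fits a known pattern.
from itertools import permutations

POS = {"i": "pronoun", "went": "verb", "to": "prep", "the": "det", "market": "noun"}

STRICT_PATTERNS = [
    ["pronoun", "verb", "prep", "det", "noun"],    # "I went to the market"
]

def parse(sentence):
    words = sentence.lower().rstrip(".").split()
    if [POS.get(w, "unknown") for w in words] in STRICT_PATTERNS:
        return words                               # good grammar: accept as-is
    for perm in permutations(words):               # fallback: out-of-order grammar
        if [POS.get(w, "unknown") for w in perm] in STRICT_PATTERNS:
            return list(perm)                      # POS hints recover a valid order
    return None

print(parse("Went I market the to."))
# -> ['i', 'went', 'to', 'the', 'market']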

Just want to end by saying:

“That person’s idea won’t work, because it uses <X>, and systems that used <X> have failed before.” (Never mind that <X> may be only a small portion of your system, and that you just may have ideas of your own that can add to it.)

For me, <X> happens to be grammar. Oh, systems involving grammar have failed in the past? Hmm, guess I will throw in the towel right now!

For other people, <X> is a bot in a “virtual world” with senses, etc.; for others, <X> is, I don’t know, “fuzzy logic” or whatever.

 

 
  [ # 43 ]

Dave,

This example is great… I think what I will do, in parallel with educating CLUES about normal grammar, is develop the algorithms for it to deal with mixed grammar. I’m going to schedule my time: so much of it to completing proper grammar, and so much to developing the code to handle mixed-up input. Not sure what the split will be yet, though.

“Went I market the to.”

Thanks for that. I will keep you posted in the GRACE/CLUES thread.

As you indicated, yes, knowing the parts of speech of each term will provide a very valuable hint (again… “HINT”… there are also semantics and other factors at work). And yes, every word given to CLUES *can* have more than one (and in fact all) parts of speech. My head is already generating a couple of ideas on how this can be dealt with.

Thanks again smile

 

 
  [ # 44 ]

Before this really gets out of hand or even turns nasty:

Victor’s approach, from my perspective, DOES have merit in many areas, and I’m sure at least ‘some useful result’ will come from it. So please, let’s be civil about the points where we disagree with each other. Victor’s viewpoints have triggered several ideas of my own; no matter whether those ideas stray away from his viewpoints or even counter them, I thank him (and others) for the inspiration.

Second, this IS a discussion board. I registered for the sake of discussing this stuff; I did not register just to go around posting how much in awe I am of everyone’s accomplishments (although I seriously respect everyone’s efforts here).

Lastly, as I said before, I’m still in discovery mode and therefore far removed from finalizing my ideas. However, in replying to the questions here, I’m getting more and more of a handle on my own ideas, so just bear with me on this. I’ll post more defining info on my model as I go along. Having said that, I do actually have a working concept in my head, just not the specifics of the final implementation yet.

 

 
  [ # 45 ]

Btw, I would really like to hear from the other people here who voted against ‘grammar first’ wink

 
