
Introduction
 
 
  [ # 46 ]
Victor Shulist - Feb 4, 2011:

I disagree.  That sentence DOES have grammar, not exactly “proper grammar”, but it DOES have grammar. 

I do agree with you on this but I believe it doesn’t matter. For me to understand such a sentence I don’t actually have to understand (or know) the rules of grammar.

Another point: for me personally, English is my second language. I do understand (most of) it, I can write and speak it quite well, and I have quite a large vocabulary (= content). However, all of this is ‘mapped’ to understanding what it means, and certainly not to any grammatical knowledge other than my experience of word-combination frequencies. I cannot tell you the grammatical construction of a sentence because I didn’t learn English that way; I learnt it by reading, listening and speaking. Although I do have a notion of the ‘construction’ of a sentence, I do not ‘build’ sentences based on the correctness of the grammar. To put it another way: I know proper sentence construction from the word-combination frequencies I have learnt over the course of my life, not because someone tutored me in the ‘rules’ of grammar.
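A minimal sketch of what learning from word-combination frequencies could look like in code, assuming nothing more than a bigram counter (the function names and the toy corpus are invented purely for illustration):

```python
# Hedged sketch: learn which word pairs co-occur, then score a new
# sentence by how familiar its pairs are. No grammar rules anywhere.
from collections import Counter
from itertools import pairwise  # Python 3.10+

def learn_pairs(corpus):
    """Count adjacent word pairs across a corpus of sentences."""
    counts = Counter()
    for sentence in corpus:
        counts.update(pairwise(sentence.lower().split()))
    return counts

def familiarity(sentence, counts):
    """Average pair frequency: higher means the ordering 'sounds' more natural."""
    pairs = list(pairwise(sentence.lower().split()))
    return sum(counts[p] for p in pairs) / len(pairs) if pairs else 0.0

corpus = ["the sun goes down", "the sun rises", "the child asks a question"]
counts = learn_pairs(corpus)
print(familiarity("the sun goes down", counts))  # familiar word order, high score
print(familiarity("down goes sun the", counts))  # same words, score of zero
```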

I think we hinder our development of AI by focussing on the grammar thing. It is like when a child asks you to explain why the sun goes down, and you respond with ‘sorry, but you need to learn proper grammar first, or else you won’t understand my explanation’. I would call BS on such a reply, yet for AI we insist we go that way.

And let’s not forget that the answer to ‘Life, the Universe and Everything’ is simply… 42 wink

 

 
  [ # 47 ]

Again, the problem is that people have a real-world knowledge base, built from the experiences of our senses, to map language onto, and bots do not. As someone currently trying to learn a foreign language, I agree that I use my (limited) vocabulary knowledge and word-combination frequencies to understand a lot of the German that comes my way. But I can do this only because I have other experience with the topic of the conversation. I take the input of known words, unknown words, and word proximity, and I can map the conversational tidbits onto my own knowledge base and thus “fill in the blanks”.

For a bot, the knowledge base has to be built on natural language input. (Unless your bot has other input capabilities besides “chatting”/reading.) To map new input to the base, whether the new input is grammatically sound or not, requires that the base be well-organized by some rule system. Our senses impart a rule system for how objects behave (physics!) but for the bot, the rules for how objects behave are encoded by proper grammar. (The sentence is the equation and the grammar tells the order of operations that make the sentence true, if you will.)

I don’t know if I’m explaining myself well. I’ll come back to this later when I have more time…

 

 
  [ # 48 ]
Hans Peter Willems - Feb 4, 2011:

And let’s not forget that the answer to ‘Life, the Universe and Everything’ is simply… 42 wink

Someone once blew my mind by pointing out that 42 looks an awful lot like “2 b” (to be) upside down. Perhaps The Answer makes sense after all?? smile

 

 
  [ # 49 ]

@CR: that’s cool! I’ve just written that down and mirrored it. 

Image Attachments
42_the_answer_of_life.jpg
 

 
  [ # 50 ]
Hans Peter Willems - Feb 4, 2011:

I think we hinder our development of AI by focussing on the grammar thing. It is like when a child asks you to explain why the sun goes down, and you respond with ‘sorry, but you need to learn proper grammar first, or else you won’t understand my explanation’. I would call BS on such a reply, yet for AI we insist we go that way.

The child, even before he learns grammar formally in school, has a very good grasp of grammar, just by listening to people speak.  This is the wonderful power of the human mind to learn language.  They learn semantics and they also learn grammar… INFORMALLY. 

They may not be able to NAME the parts of speech, but their minds understand these constructs, and how the words relate to each other. That is what grammar is: understanding how each word relates to the others. And a child figures that out.

Later in school, they are told the names for these concepts. “Jack and Jill went up the hill”: ok, “went” is the verb. Call it whatever; the child knows that the “went” is what Jack and Jill DID. And WHO did the “went”? Well, Jack and Jill; they are called the subject. Again, call it whatever, but the child knows the RELATIONSHIP of ‘Jack’ and ‘Jill’ WITH the word “went”: they DID that. Later, he learns that WHAT they DID is called the verb.

So yes, the child knows, and we know the child has a good grasp of language and grammar, so we know we can explain why the sun goes down. No need for ‘BS’.

Now I very much agree with CR about the fact that the computer has no other source of information. A child uses his senses to deduce how language and semantics, and, at a higher level, grammar, all work.

A computer, until it has senses, has only text input. Thus, my approach is to give it a “head start”: give it grammar, to bypass this absence of senses and get it started. The bottom line is, computer or human, advanced ideas are communicated by semantics and grammar.

But we’ll see, in the years ahead, which approach turns out to be successful smile

 

 

 
  [ # 51 ]
Gary Dubuque - Feb 4, 2011:

So is the continuum:
* data (user inputs only if you really believe that computers are incapable of experiencing a world which bothers me a bit because I know I can find weather forecasts about snow storms from a device that has never felt a single snowflake);
* information (that stuff you find after applying grammar and semantic rules or fuzzy matching patterns, etc.);
* knowledge (perhaps something which appears produced by reasoning or maybe strong AI);
* wisdom (who knows if any machine has this kind of stuff, or is it embedded in social networks or forums like chatbots.org)?

Actually, the proficiency of students is often evaluated on exactly this sort of scale. (Google Bloom’s Taxonomy.)

Gary Dubuque - Feb 4, 2011:

  Learning is the capture of data. Maybe that data is of a higher processing order like grammar rules or the instance of the application of those rules. Learning is using information. Could you say you learned a thing because you can explain why (whoops, there goes physics which only explains the what)?

As the saying goes, a physicist’s “why?” is the layman’s “how?”

Gary Dubuque - Feb 4, 2011:

  BTW, all computer programming is philosophy. You can’t escape it or pretend to avoid it. You have to say, “this is how I believe it is done.” Period.

I definitely agree. It’s the same difference between science and engineering. There is no right answer to an engineering problem, just one that works or doesn’t!

Gary Dubuque - Feb 4, 2011:

  Unless you can get at least to analogies and storytelling, all your crunching of text will fall short of any human inspiration. You will always have just a machine with a glorified interface to an encyclopedia of facts.  I believe IBM is marketing one such machine called Watson!

Baby steps! After I’m satisfied with my knowledge base implementation scheme, I plan to add another dimension to my bot’s understanding—time! I’m calling this the “story” level, because it combines facts in order to tell the story of a process or event. The “story” may be something like “pouring a glass of milk”, but it is the building up of this sort of knowledge that will enable the bot to learn physical rules governing cause and effect and the consequences of actions. All of this is necessary for any sort of creative effort on the part of the bot, whether it is developing analogies or writing its own stories.
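One way to picture that “story” level is facts stamped with an order in time, so that cause-and-effect questions reduce to questions about sequence. A hedged sketch, with invented event names and fields:

```python
# Hedged sketch: a "story" as an ordered list of events, so the bot can
# ask what happened before a given action. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Event:
    actor: str
    action: str
    target: str
    step: int  # position in the story's timeline

# "pouring a glass of milk" as an ordered sequence of events
story = [
    Event("hand", "grasp", "carton", 0),
    Event("hand", "tilt", "carton", 1),
    Event("milk", "flow-into", "glass", 2),
    Event("glass", "become", "full", 3),
]

def candidate_causes(story, action):
    """Everything that happened before the given action."""
    t = next(e.step for e in story if e.action == action)
    return [e for e in story if e.step < t]

print(candidate_causes(story, "flow-into"))  # grasp and tilt precede the flow
```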

 

 
  [ # 52 ]
C R Hunt - Feb 4, 2011:

(The sentence is the equation and the grammar tells the order of operations that make the sentence true, if you will.)

I don’t know if I’m explaining myself well. I’ll come back to this later when I have more time…

ABSOLUTELY !!!!  Grammar IS important, very important.

The sentence is the equation; the nouns are sort of like the numbers (integers, floats, complex, whatever), and the verbs are like the operators: plus, minus, multiply, divide, square root, etc. The whole parse tree is like a system of equations or a matrix equation.

Without structure (grammar), the set of words is just ‘word soup’ with no meaning.
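To spell the analogy out, here is a toy rendering (not a real parser; the tree encoding and the helper are invented purely to illustrate the “parse tree as expression tree” picture):

```python
# "Jack and Jill went up the hill" as an expression tree: the verb is the
# operator, the nouns are the operands, modifiers are extra terms.
sentence_tree = (
    "went",                    # verb = operator
    ("and", "Jack", "Jill"),   # compound subject = combined operands
    ("up", "the hill"),        # prepositional phrase = modifier term
)

def describe(node):
    """Reduce the tree to a relation, the way evaluating an expression
    reduces it to a value."""
    if isinstance(node, str):
        return node
    op, *args = node
    return f"{op}({', '.join(describe(a) for a in args)})"

print(describe(sentence_tree))
# -> went(and(Jack, Jill), up(the hill)) : who did what, and how
```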

 

 
  [ # 53 ]
C R Hunt - Feb 4, 2011:

To map new input to the base, whether the new input is grammatically sound or not, requires that the base be well-organized by some rule system.

Agreed, but that ‘rule system’ can be something other than grammar. In my view, using ‘grammar’ as the ‘rule-set’ makes for a complex solution; the rules need to be programmed before you can train the AI. My aim is to have a much simpler ‘rule-system’ (based on something like word-combination frequencies) and have the AI learn grammar just like any other concept.

C R Hunt - Feb 4, 2011:

Our senses impart a rule system for how objects behave (physics!) but for the bot, the rules for how objects behave are encoded by proper grammar.

I don’t think so; we are quite capable of describing physical things like ‘a collision’ in terms of the sound it makes and other effects, without actually describing it in detail. We all know the idea of an explosion we did not see but did hear and feel. And even more: if you have never seen, heard or witnessed a real explosion, it is still possible to describe it in terms that have little to do with a real explosion. Something like that is actually simpler to explain when you abandon ‘proper’ grammar. Take a look at the movie ‘A Knight’s Tale’, where one of the characters tries to explain what will happen if the other causes trouble:

http://www.youtube.com/watch?v=x668_vyjyHo (close to the end of the clip, around the 8:40 mark)

Also, when certain ‘senses’ are missing, we can still ‘map’ experiences for those missing senses onto other senses, again without describing things in (grammatical) detail. Our brain is very good at arbitrary pattern matching based on available vocabulary. So why not go that way with AI? The way I see it, a well-defined base vocabulary is a much better starting point than descriptions of grammar rules.

Maybe I’m not making myself clear here; it’s hard to explain, especially because much of this is running around in my own brain at very abstract and conceptual levels. I still need to do a lot of research and thinking to formulate those ideas more clearly.

But I will get there smile

 

 
  [ # 54 ]

The scientific method, my friends.

State a theory (write an algorithm), experiment (code your algorithm), run and test your bot, and observe your results.

We’ll see which approach works better in the coming years.

Only PROOF can decide that. In a year, or two, or whatever, if any of us are successful, we’ll be proven right. We can’t “prove” ourselves right on here by exchanging theories. There is a huge difference between a theory (and “proving” you’re right by just spitting out more theories) and an actual intelligent bot.

As the expression goes, “the proof is in the pudding”. The first to have a chatbot that can converse, carry a conversation, learn via NLP, reason, and perhaps pass a Turing Test: I think that will speak for itself. Good luck to all!!!!!!

 

 
  [ # 55 ]

Victor, of course I’m with you on this. I don’t want to prove anything with theories, but discussion will bring new insights for all participants, whether those insights turn out to be useful or not.

I agree that we could go round in circles here for quite some time without it really adding up to more understanding. While I’m looking at a way around ‘grammar’ for the initial model, I do agree that eventually the AI needs at least ‘some’ notion of grammar to get to a useful level. So no disagreement there, just different ways of looking at how to get there wink

 

 
  [ # 56 ]

Yes, absolutely. And it may very well be that there is more than one way to achieve this goal; it is perhaps a matter of taste which option we choose to get there. OR, it may very well be that we are all wrong! But I think it more likely that one or more of us will be proven correct, or at LEAST that all of us are correct to a certain degree.

 

 
  [ # 57 ]
Hans Peter Willems - Feb 4, 2011:
C R Hunt - Feb 4, 2011:

Our senses impart a rule system for how objects behave (physics!) but for the bot, the rules for how objects behave are encoded by proper grammar.

I don’t think so; we are quite capable of describing physical things like ‘a collision’ in terms of the sound it makes and other effects, without actually describing it in detail. We all know the idea of an explosion we did not see but did hear and feel.

I think we are saying the same thing here. Because we have encountered explosions before, having only partial information about a new instance of “explosion” is enough: we can map it onto our knowledge base and determine what information we are missing. (What the explosion must have looked like based on how loud it was, for instance.)

Hans Peter Willems - Feb 4, 2011:

And even more, if you have never seen, heard or witnessed a real explosion, it is still possible to describe it in terms that have little to do with a real explosion. Something like that is actually simpler to explain when you abandon ‘proper’ grammar.

Have you ever explained color to a blind man? The only reason you could explain an explosion’s sound to someone is because they have heard loud sounds before. Or the sight because they have seen bright things before.

Why do babies grab at everything and stick everything in their mouths and you don’t? I can guess what my keyboard tastes like without licking it because at one point I’ve licked something that had a similar texture or a similar taste and I can guess based on that knowledge. (Even if I have no conscious memory of doing said licking.) But babies may have never touched ceramic or wood or water and need to build this knowledge base that we adults take for granted. Even if that means breaking your favorite vase.

A bot won’t do any of these things (at least, mine won’t). So how is the bot going to experience a self-consistent world where objects behave in ways governed by strict rules? (Although there may be many “rules” in the physical world, some surprising and unexpected, they are nevertheless strict and self-consistent.) These experiences must be fed in via text. Babies map natural language onto a knowledge base they’ve been building since birth, but a bot must map text only onto more text. Without some sort of *other* knowledge base (such as hard-coded grammar rules), such a scheme is asking for trouble.

If you’re definitely against hard-coded grammar, you will at least need to hard-code your bot with an understanding that objects exist, that they do things, that they do those things in a space which contains other objects, and that they do so in time. (That is, objects and actions have duration.) Then leave it to the bot to figure out which words are the objects, which are the actions, and which are the peripherals that describe the space/time components. I don’t have much faith in such a scheme, if only for the time and man-hours involved in teaching something like that.
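For concreteness, the bare minimum being described might look something like this (a hedged sketch; the class names and fields are hypothetical, encoding only the “objects exist, actions happen, in space and over time” commitments):

```python
# Hedged sketch of a minimal non-grammar ontology: the only hard-coded
# commitments are that entities exist and that actions have an agent,
# a place, and a duration. Everything else the bot would have to learn.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Entity:
    name: str

@dataclass
class Action:
    verb: str
    agent: Entity
    patient: Optional[Entity] = None
    place: Optional[str] = None   # actions happen somewhere...
    start: float = 0.0            # ...and take time
    duration: float = 1.0

# The hard, man-hour-hungry part: deciding from raw text which tokens
# are Entities and which are Actions.
baby = Entity("baby")
vase = Entity("vase")
print(Action("break", agent=baby, patient=vase, place="living room"))
```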

Hans Peter Willems - Feb 4, 2011:

Also, when certain ‘senses’ are missing, we can still ‘map’ experiences for those missing senses onto other senses, again without describing things in (grammatical) detail. Our brain is very good at arbitrary pattern matching based on available vocabulary.

See above. Pattern matching skill is a combination of a good algorithm* and lots of experience with strict rules.

* There be the trick smile

 

 
  [ # 58 ]

Victor Shulist - Feb 4, 2011:

As the expression goes, “the proof is in the pudding”. The first to have a chatbot that can converse, carry a conversation, learn via NLP, reason, and perhaps pass a Turing Test: I think that will speak for itself.

Then I guess I win! wink
Skynet-AI has done all of this. It has held conversations lasting over an hour, used its learning mechanisms and conversations to add new knowledge, and reasoned about math problems (if Skynet-AI can solve a math problem, would you say it “understands” and “reasons” about it?). It has been convincing enough that some visitors have asked whether there was a human on the other end of the chat.

In reality, it will be some time before it handles ongoing conversational context (or a full-blown Turing Test) well. The challenge is how to expand the conversational knowledge base and how to store the “structure/grammar” of a conversation. “The proof is in the chatting.”

Victor Shulist - Feb 4, 2011:

No, I would argue that we fully understand language constructs, and that the ability to do this in a fuzzy fashion only comes from first understanding it clearly. Crisp understanding should come first, then fuzzy. Crisp rules can be more easily relaxed: you know what your target is. Going from fuzzy to crisp, good luck; you don’t know your destination, so you’ll have a very difficult time getting there.

I disagree with that, and in fact Skynet-AI was built in exactly the opposite fashion. The first three inputs considered were:
“” - No input
“*?”- A question
“*” - Any input
The responses were general and the conversation not very enjoyable, but it was a start, and some of these rules are still triggered today. Of course, most input is now handled by less fuzzy rules. As I have mentioned elsewhere, starting fuzzy or starting crisp may simply be different ends of the same problem.
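A rough sketch of that fuzzy-first layering, patterned on the three inputs above (the regexes and responses are invented; this is not Skynet-AI’s actual code, only an illustration of catch-alls with sharper rules layered on top):

```python
# Hedged sketch: catch-all rules at the bottom, crisper rules layered on
# top. Rules are checked in order, most specific first.
import re

RULES = [
    (r"^what is (\d+) \+ (\d+)\?$",                      # a later, crisper rule
     lambda m: str(int(m.group(1)) + int(m.group(2)))),
    (r"^$",  lambda m: "Say something?"),                # "" - no input
    (r"\?$", lambda m: "Good question."),                # "*?" - a question
    (r".",   lambda m: "Tell me more."),                 # "*" - any input
]

def respond(text):
    for pattern, reply in RULES:
        match = re.search(pattern, text.strip())
        if match:
            return reply(match)

print(respond("what is 2 + 3?"))  # 5
print(respond("why?"))            # Good question.
print(respond(""))                # Say something?
```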

Hans Peter Willems - Feb 4, 2011:

Another point: for me personally, English is my second language. I do understand (most of) it, I can write and speak it quite well, and I have quite a large vocabulary (= content). However, all of this is ‘mapped’ to understanding what it means, and certainly not to any grammatical knowledge other than my experience of word-combination frequencies.

I have found, when dealing with people who speak English as a Second Language, that they are able to convey their meaning even if their grammar is not perfect.

 

And for those of you who may have missed it, the human mind is very flexible:

If You Can Raed Tihs, You Msut Be Raelly Smrat

if yuo can raed tihs, you hvae a sgtrane mnid, too.
Can you raed tihs? Olny 55 plepoe out of 100 can.

i cdnuolt blveiee taht I cluod aulaclty uesdnatnrd waht I was rdanieg. The phaonmneal pweor of the hmuan mnid, aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it dseno’t mtaetr in waht oerdr the ltteres in a wrod are, the olny iproamtnt tihng is taht the frsit and lsat ltteer be in the rghit pclae. The rset can be a taotl mses and you can sitll raed it whotuit a pboerlm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe. Azanmig huh? yaeh and I awlyas tghuhot slpeling was ipmorantt! smile
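For what it’s worth, the trick can be mimicked mechanically: a scrambled word that keeps its first and last letter, and the same letters in between, reduces to the same key as the original. A small sketch (the vocabulary list is illustrative):

```python
# Hedged sketch: normalize a word to (first letter, last letter, sorted
# interior letters) and look it up against a known vocabulary.
def word_key(word):
    w = word.lower().strip(".,!?'")
    return (w,) if len(w) <= 3 else (w[0], w[-1], "".join(sorted(w[1:-1])))

VOCAB = ["Cambridge", "university", "actually", "understand", "important"]
LOOKUP = {word_key(w): w for w in VOCAB}

for scrambled in ["Cmabrigde", "Uinervtisy", "aulaclty", "uesdnatnrd", "ipmorantt"]:
    print(scrambled, "->", LOOKUP.get(word_key(scrambled), "?"))
# Each scrambled form maps back to its unscrambled original.
```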

 

 

 
  [ # 59 ]
Hans Peter Willems - Feb 4, 2011:

And even more: if you have never seen, heard or witnessed a real explosion, it is still possible to describe it in terms that have little to do with a real explosion. Something like that is actually simpler to explain when you abandon ‘proper’ grammar. Take a look at the movie ‘A Knight’s Tale’, where one of the characters tries to explain what will happen if the other causes trouble:

http://www.youtube.com/watch?v=x668_vyjyHo (close to the end of the clip, around the 8:40 mark)

Ah, I love that movie.

But don’t you agree that “Geoff” is using the partial information he is receiving (well, the poorly formed threat he is receiving) and mapping it onto his own knowledge base of what that “pain” might feel like/look like/etc.? And I’m sure that he could again map that visceral knowledge back onto coherent, grammatically correct sentences. It is not “simpler” for Geoff to understand “entrails becoming extrails” because it is presented as a sentence fragment. It is only because Geoff has contextual knowledge of what entrails are, where they are located inside of him, and how much it hurts for the inside of him to be poked/prodded that the sentence fragment has any meaning.

In other words, it may be easier on Fowlehurst to use vague language, but not because he is explaining anything clearly. It is only made easy because of Geoff’s colorful imagination (built on previous experience and a good “algorithm”).

 

 
  [ # 60 ]
Merlin - Feb 4, 2011:

I have found, when dealing with people who speak English as a Second Language, that they are able to convey their meaning even if their grammar is not perfect.

Yes, but usually their grammar varies in consistent ways, which become “rules” in themselves that map onto “correct” rules. And even if not, as I’ve been saying, we can understand these “fuzzy” inputs because we can map them onto a non-fuzzy knowledge base.

 
