

Sample problem for AI reasoning
 
 
  [ # 31 ]

A good start of course would be the issues defined in

http://en.wikipedia.org/wiki/Natural_language_processing

I think the items in that list are considerably more important and should be tackled first, rather than pointing out that someone’s design is lacking because, oh, it’s <user statement> <bot> <user statement> instead of <user statement> <bot> <bot> <user statement> (which, by the way, was ONE example).

snap out of it smile

 

 
  [ # 32 ]
Victor Shulist - Feb 11, 2011:

So the document’s TOC would be, something like: ...

Even before the ToC is written, perhaps a basic outline should be “fleshed out”. I was trained that the general outline is the first task in creating the average document, and that the Table of Contents and first draft are derived from there. Of course, this is merely a suggestion, not something that’s engraved on a diamond. smile

Also, would it not be better to begin with the “nuts and bolts”, or basic framework, and from there discuss ever greater levels of complexity? To start with the complex and work back to the simple sounds like building a house both upside-down and backwards, to me. Just my 2¢ worth.

 

 
  [ # 33 ]

My point is, and always has been, that if the bot already has a model, it doesn’t need to do a brute-force exhaustive search of all the possible interpretations of the inputs. If you were to make a chess player that way, you would forgo all the book moves, all the heuristics, all the pruning, because the only way left would be to spend massive amounts of processing pursuing many dead ends in a complete coverage of all the possibilities until your move makes sense. Getting the subject, the context, first makes a big difference. The rules about rules are there to help you do what you are doing now, not later.  If you weren’t so defensive (3 posts to every one, if not more) you might understand the basics of starting with the end in mind. You can improve the bot’s ability to process the inputs if it has a direction, a set of rules, to coordinate its analysis.

If a chatbot were only NLP, then the Wikipedia reference could be the total universe of this thread. Many things identified in that article are things I bring up as needing to be addressed first. But Victor, you don’t see it. Automatic summarization is a test I suggested to determine whether a model was produced or a list of entered “tokens” was dumped. Coreference resolution is part of the context I’ve been repeating. Discourse analysis is the largest contribution I’ve been emphasizing. Machine translation, as they describe it going beyond words (or semantic groupings) to AI-complete, is something I’ve been criticizing, as it really needs to be more than just information processing. Morphological segmentation I find is more dramatic in NLG than NLP, but this is beside the point. Named entity recognition is something that is ancillary to this thread. NLG is another big one, which follows closely from what NLP extracts, maybe. Here’s the rub: you can screw up the NLG using stuff that is optimized for NLP. NLG from NLP would be machine translation in theory, but better at the translation if the NLP has NLG in mind.

Ok, I’ll leave. You don’t need any advice; you’ve got it all figured out. Only, why did you even make this topic then? Oh yeah, because when I suggested some things for Hans, basically agreeing with him to skip the grammar and go for the knowledge processing (not information processing), you countered. Then Dave decided this was not appropriate to be in the Hans thread.

Your first-things-first approach sounds to me like someone building the Winchester mansion. But when I suggest it is much more important to generate the bot’s output so you have some environment, some (knowledge) structure, that you can manipulate, you tell me that’s too abstract, too philosophical.

I kept the same sample over and over because that is what this thread is about. If you are tired of using that one thing to demonstrate different aspects, then I’m sorry. If you think it is picking the example apart for the heck of it, then sure, I’m wasting my time. Go ahead and grind away with your deliberate progress. You tell me to go; fine. Bye.

 

 
  [ # 34 ]

Ok, folks, this antagonism has gone on long enough. Gary, I (and others, I’m certain) don’t want you to go, and I’m sure that if we’re able to get past this, Victor will agree with that, as well. Victor, I understand your frustration here, and can see where you would want to find some way to avoid all of this contention. But rather than asking Gary to refrain from repeating the same arguments over and over, might I suggest that you both just agree to disagree here, and move on? I’m sure that everyone involved, yourselves included, will feel much better about it afterward. I’m sending each of you an email message, in the hopes that I can perhaps mediate a peaceful resolution to this problem. Please, each of you, try to make an effort here to find some common ground, and stop this before it gets any more out of hand.

 

 
  [ # 35 ]
Gary Dubuque - Feb 11, 2011:

My point is, and always has been, that if the bot already has a model, it doesn’t need to do a brute-force exhaustive search of all the possible interpretations of the inputs. If you were to make a chess player that way, you would forgo all the book moves, all the heuristics, all the pruning, because the only way left would be to spend massive amounts of processing pursuing many dead ends in a complete coverage of all the possibilities until your move makes sense. Getting the subject, the context, first makes a big difference. The rules about rules are there to help you do what you are doing now, not later.  [...] You can improve the bot’s ability to process the inputs if it has a direction, a set of rules, to coordinate its analysis.

So if I understand you correctly, you’re saying that having a sense of context (what the current subject is, for example) helps the bot narrow the range of tools it needs to employ to develop an appropriate response.

The trouble I’m having with this is that without some sort of grammar knowledge, it can be difficult to determine from text what the subject is. One would likely need a large knowledge base consisting of interconnected words that indicate their relation to this or that subject in order to do this without grammar. And *building* that knowledge base without grammar would be very time consuming.
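To make concrete what I understand this grammar-free approach to be, here is a toy sketch (the topics, word lists, and function names are all invented for illustration, not anyone’s actual system): just score candidate subjects by word overlap with the input, with no parsing at all.

    # Toy subject detection with no grammar: a hand-made map from topics
    # to words that hint at them. (All words and topics are invented.)
    TOPIC_WORDS = {
        "chess":   {"pawn", "rook", "checkmate", "opening", "gambit"},
        "cooking": {"recipe", "oven", "simmer", "ingredient", "saute"},
        "weather": {"rain", "forecast", "cloudy", "temperature", "storm"},
    }

    def guess_subject(text):
        """Return the topic whose hint words overlap the input the most."""
        words = set(text.lower().split())
        scores = {topic: len(words & hints) for topic, hints in TOPIC_WORDS.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else None

    print(guess_subject("I tried a new recipe but left it in the oven too long"))
    # -> 'cooking'

Something like this can pick out the broad subject, but building and maintaining those word lists for real coverage is exactly the time-consuming part I mean.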

What am I missing here?

Gary Dubuque - Feb 11, 2011:

If a chatbot were only NLP, then the Wikipedia reference could be the total universe of this thread. Many things identified in that article are things I bring up as needing to be addressed first. But Victor, you don’t see it. Automatic summarization is a test I suggested to determine whether a model was produced or a list of entered “tokens” was dumped.

One of the first goals of my project—before it carries on chats—is to have it be able to summarize Simple English Wikipedia entries. The amount of contextual knowledge that is required, combined with knowledge about how the “hierarchy of information” is structured in paragraphs and throughout longer passages, is enormous for this sort of undertaking. Some of this hierarchical information can be taught directly by showing the bot examples of texts and summaries, the way students learn to write summaries in school. However, the contextual knowledge has to come from somewhere.
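For contrast, here is roughly what a purely surface-level baseline looks like (this is the classic frequency-based extractive trick, not my project’s approach; the function name and parameters are just for the example):

    import re
    from collections import Counter

    def naive_summary(text, n_sentences=2):
        """Frequency-based extractive summary: keep the sentences whose words
        are most common in the whole article. No context, no discourse structure."""
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        freq = Counter(re.findall(r"[a-z']+", text.lower()))

        def score(sentence):
            tokens = re.findall(r"[a-z']+", sentence.lower())
            return sum(freq[t] for t in tokens) / (len(tokens) or 1)

        top = sorted(sentences, key=score, reverse=True)[:n_sentences]
        # Keep the chosen sentences in their original order.
        return " ".join(s for s in sentences if s in top)

It produces output, but with no model of how information is organized it cannot tell a topic sentence from an aside, which is exactly the hard part.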

Gary Dubuque - Feb 11, 2011:

Coreference resolution is part of the context I’ve been repeating. Discourse analysis is the largest contribution I’ve been emphasizing. Machine translation, as they describe it going beyond words (or semantic groupings) to AI-complete, is something I’ve been criticizing, as it really needs to be more than just information processing. Morphological segmentation I find is more dramatic in NLG than NLP, but this is beside the point. Named entity recognition is something that is ancillary to this thread. NLG is another big one, which follows closely from what NLP extracts, maybe. Here’s the rub: you can screw up the NLG using stuff that is optimized for NLP. NLG from NLP would be machine translation in theory, but better at the translation if the NLP has NLG in mind.

How does one engage in discourse analysis if one can’t understand the basic units of the discourse? How would one design an algorithm that goes from trying to understand discourse to then deducing the basic units of said discourse? All nuance appears to be lost in this sort of top-down approach. I don’t get it. It would help to have an explicit example of this (theoretical or otherwise).

And as far as NLG goes—how does one turn a knowledge base into ordered sentences if one does not know the rules for sentence construction?
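Even the most trivial NLG I can imagine still smuggles in construction rules. A knowledge base entry only becomes a sentence once you impose an ordering such as subject-verb-object (a minimal sketch; the facts and the realize() helper are invented for illustration):

    # Even turning a bare fact triple into text imposes word-order rules,
    # i.e., grammar. (Facts and the template are invented examples.)
    FACTS = [
        ("Paris", "is the capital of", "France"),
        ("water", "boils at", "100 degrees Celsius"),
    ]

    def realize(subject, predicate, obj):
        """Minimal surface realization: fixed subject-verb-object order plus a period."""
        sentence = f"{subject} {predicate} {obj}."
        return sentence[0].upper() + sentence[1:]

    for fact in FACTS:
        print(realize(*fact))
    # Paris is the capital of France.
    # Water boils at 100 degrees Celsius.

Take away that fixed ordering and the knowledge base stays a pile of triples.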

 

 
  [ # 36 ]
C R Hunt - Feb 11, 2011:

So if I understand you correctly, you’re saying that having a sense of context (what the current subject is, for example) helps the bot narrow the range of tools it needs to employ to develop an appropriate response.

The trouble I’m having with this is that without some sort of grammar knowledge, it can be difficult to determine from text what the subject is. One would likely need a large knowledge base consisting of interconnected words that indicate their relation to this or that subject in order to do this without grammar. And *building* that knowledge base without grammar would be very time consuming.

It might be time consuming, but what you describe is, in a nutshell, the exact approach I’m taking (with some additional mojo added). And ‘time consuming’ is pretty relative in this context, imo. As I see it, grammar is a small part of the total ‘problem’ and certainly not the solution to it. When you start with ‘grammar’ as the base model, you paint yourself into a corner. I said it before: it requires you to keep adding complexity to your system to handle the current complexity. It’s a spiral; you never get to the point where the system is capable of constructing new paradigms by itself to handle new concepts.

Grammar, in the end, is just ‘representation’. There is no ‘model’ wherein grammar evolves into understanding. You can formulate a grammatically correct sentence on, say, quantum mechanics without understanding anything of it. Or on making cocktails; the complexity of the concept doesn’t even matter here.

So I do agree with Gary in most parts and hope that he, at least, does not leave the board. I specifically hope for his input in my own dedicated topic, when I start it that is wink

 

 
  [ # 37 ]

When you start with ‘grammar’ as the base model, you paint yourself into a corner. I said it before: it requires you to keep adding complexity to your system to handle the current complexity. It’s a spiral; you never get to the point where the system is capable of constructing new paradigms by itself to handle new concepts.

I don’t think that’s true. I have tools in place that allow my bot to form its own rules for mapping a complex sentence onto simple sentences (that is, sentences that it has rules for). (The “new concept” in this case being a new type of sentence input.) So while there is a set of fundamental, hard-coded rules to interpret simple sentences, adding complexity is not done through more and more coding on my part. It’s done by providing the bot with natural language examples of complex and equivalent simple sentences and letting it use the tools at its disposal to make its own rules for handling similar cases in the future.
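To give a flavour of what I mean (a heavily simplified sketch, not my bot’s actual code; the example pair, the learn_rule/apply_rule names, and the single-marker rule format are all invented here): from one complex/simple example pair the system can notice which leading clause was dropped and reuse that as a rewrite rule.

    # Heavily simplified sketch: learn a "complex -> simple" rewrite from one
    # example pair, then reuse it on new sentences that start the same way.

    def learn_rule(complex_sent, simple_sent):
        """If the simple sentence is the tail of the complex one, remember the
        word that introduced the dropped leading clause (e.g. 'although')."""
        c, s = complex_sent.lower(), simple_sent.lower().rstrip(".")
        if s in c:
            dropped = c[: c.index(s)]   # e.g. "although it was raining, "
            return dropped.split()[0]   # e.g. "although"
        return None

    def apply_rule(marker, sentence):
        """Strip a leading '<marker> ... ,' clause if the sentence has one."""
        if sentence.lower().startswith(marker + " ") and "," in sentence:
            rest = sentence.split(",", 1)[1].strip()
            return rest[0].upper() + rest[1:]
        return sentence

    rule = learn_rule("Although it was raining, the game continued.",
                      "The game continued.")
    print(apply_rule(rule, "Although the server crashed, the data was safe."))
    # -> "The data was safe."

The real machinery is richer than this, but the point stands: the new rule comes from a natural language example, not from more coding.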

The rules that *are* hard-coded compensate for the fact that the bot has no way to learn about the physical world other than as described by sentences. (I’ve gone over this elsewhere and won’t repeat it here.)

Grammar, in the end, is just ‘representation’. There is no ‘model’ wherein grammar evolves into understanding. You can formulate a grammatically correct sentence on, say, quantum mechanics without understanding anything of it. Or on making cocktails; the complexity of the concept doesn’t even matter here.

At what point do you call it understanding? Please be concrete here. Why do you think you *understand* how to make a cocktail? Because you can picture the instructions? Because you can imagine feeling the cocktail glass in your hands? Is this any more understanding because it is united to the senses? Will a text-based chatbot ever be able to do that? How? Are you going to hard-code fake senses? How is having a system for mapping text onto special key words/tokens that represent senses any different from mapping text onto more text? Are the images in your mind not representations as well?

An equation is only true when the representation (a bunch of symbols and letters) is united with the rules that govern how it is formed. Without those rules the equation is meaningless and with the wrong rules it is patently false. The same holds for a sentence. One could concoct a whole scheme for dissecting sentences that is completely false and without meaning. It is only because we map the elements of the sentence onto the physical world that we find the correct set of rules so intuitive, and imagine that a bot must arrive at the right set of rules as well.

 

 
  [ # 38 ]
Hans Peter Willems - Feb 11, 2011:

So I do agree with Gary in most parts and hope that he, at least, does not leave the board. I specifically hope for his input in my own dedicated topic, when I start it that is wink

I hope Gary will continue posting in this thread too. We all have a lot to learn from each other, and I definitely consider everything he says, even if I then decide that I do not agree with it or do not fully understand his perspective.

 

 
  [ # 39 ]

Ok, so that I’m not considered the ‘bad ass’ here…..

I was only suggesting that Gary not waste his time (he himself says he was talking to a “wall”).  If he is the authority in AI, and all my ideas are wrong, he has better things to do than be frustrated.  I would rather have Dave cancel my account than be to blame for losing the expert.

 

 
  [ # 40 ]
Victor Shulist - Feb 11, 2011:

I would rather have Dave cancel my account than be to blame for losing the expert.

Well that’s not gonna happen. raspberry And I’m not going to do that to Gary, either. You two have different views when it comes to AI and its creation, and you’re both passionate about it. This is a good thing! I just want to see some sort of resolution here that leaves everyone, if not happy, at least civil. That’s all. Now both of you take a deep breath, count to ten, think about puppies, and come back ready to play nice. smile

 

 
  [ # 41 ]
C R Hunt - Feb 11, 2011:

At what point do you call it understanding? Please be concrete here. Why do you think you *understand* how to make a cocktail? Because you can picture the instructions? Because you can imagine feeling the cocktail glass in your hands? Is this any more understanding because it is united to the senses? Will a text-based chatbot ever be able to do that? How?

Understanding how to make a cocktail, to me, is combining the knowledge and experience of ingredients and tastes to form a mixture that is ‘enjoyable’... how’s that for a definition smile

Of course you have a point when you say it is just mapped to text, but herein lies the difference in our views (I think): we use text to ‘store’ concepts, but in my model it doesn’t make any difference how something is actually named, as it will map to core concepts the AI will ‘comprehend’ by default. When the AI has a comprehension of temperature (in relation to itself), it then doesn’t matter if we call it ‘cold’ or ‘koud’ (Dutch for cold) or even ‘brrrrrr’ or ‘ghewgr’.... the AI will still know its meaning is ‘a low temperature’, or whatever else the ‘comprehension algorithm’ leads it to.
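As a very rough illustration of what I mean (the concept names and word lists here are made up for the example, not my actual model), the surface word is nothing more than a lookup key into the concept layer:

    # Rough illustration only (concept names and vocabulary invented):
    # the surface word is just a key; 'comprehension' lives at the concept level.
    CORE_CONCEPTS = {
        "LOW_TEMPERATURE":  {"cold", "koud", "brrrrrr", "chilly", "freezing"},
        "HIGH_TEMPERATURE": {"hot", "heet", "scorching", "warm"},
    }

    WORD_TO_CONCEPT = {word: concept
                       for concept, words in CORE_CONCEPTS.items()
                       for word in words}

    def comprehend(word):
        """Map any surface form onto its core concept, regardless of language."""
        return WORD_TO_CONCEPT.get(word.lower(), "UNKNOWN")

    print(comprehend("koud"))      # -> LOW_TEMPERATURE
    print(comprehend("Freezing"))  # -> LOW_TEMPERATURE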

C R Hunt - Feb 11, 2011:

Are you going to hard-code fake senses?

I am indeed going to use VIRTUAL senses in my model. Not because my model needs them, but because it will give me a richer environment to experiment with ‘comprehension’. The senses are not ‘hard-coded’; they are just mappings of (virtual) analog inputs to base concepts. Technically I will be able to define ‘sensors’, give them a range and boundaries, and map them to one or more concepts.
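Something along these lines, purely as a sketch (the class name, bands, and threshold values are invented for illustration, not my actual design):

    # Sketch only: a virtual sensor is an analog input with a range, plus
    # boundaries that map sub-ranges of that input onto base concepts.
    class VirtualSensor:
        def __init__(self, name, low, high, bands):
            self.name = name
            self.low, self.high = low, high
            self.bands = bands                  # list of (upper_bound, concept)

        def read(self, value):
            """Clamp the raw reading to the sensor's range and return the
            base concept for the band the value falls into."""
            value = max(self.low, min(self.high, value))
            for upper_bound, concept in self.bands:
                if value <= upper_bound:
                    return concept
            return self.bands[-1][1]

    temperature = VirtualSensor("skin_temperature", -20, 50,
                                [(5, "LOW_TEMPERATURE"),
                                 (25, "COMFORTABLE"),
                                 (50, "HIGH_TEMPERATURE")])

    print(temperature.read(-3))   # -> LOW_TEMPERATURE
    print(temperature.read(18))   # -> COMFORTABLE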

 

 
  [ # 42 ]
Dave Morton - Feb 11, 2011:

You two have different views when it comes to AI and its creation, and you’re both passionate about it. This is a good thing!

Dave posted the exact words that came to my mind as well, so there you have it smile

Victor, keep your ideas coming; even if I don’t agree with everything you say, your statements are keeping me on my toes and constantly fine-tuning my own ideas. Again, I respect your efforts, no matter if we agree or disagree on things.

Opposing views are great fuel for innovation!

 

 
  [ # 43 ]

Just out of curiosity, Hans, what “types” of sensors will you be emulating here? Are we talking about audio/sound sensors, sensors that mimic human senses, or something completely different?

 

 
  [ # 44 ]
Dave Morton - Feb 11, 2011:

Just out of curiosity, Hans, what “types” of sensors will you be emulating here? Are we talking about audio/sound sensors, sensors that mimic human senses, or something completely different?

Human senses. I think that working from an example (a human) gives a good reference for what to achieve. Mainly sensors for pressure, temperature, motion… things that can emulate ‘awareness of the surroundings’. I don’t need high-quality ‘vision’ or things like that; to me that is a different field of research (visual pattern recognition) that I leave to others.

As I said, these ‘sensors’ are virtual. They will be controls on a screen that I can influence, and I can then teach the AI what the input it is experiencing means. It is analogous to ‘nerves’ that are wired based on where they are in the body. By creating ‘virtual nerves’ I get an instant interface that can be used (someday) to hook up real sensors. All I need for now is to be able to teach the AI what a certain input relates to.

But let’s not hijack this topic by going into this concept further; I’ll be describing my plans (so far) in more detail SOON in a dedicated topic of my own.

 

 
  [ # 45 ]

In that case, I shall wait patiently, but with great anticipation for your contributions. smile

 
