

Introduction
 
 
  [ # 61 ]
C R Hunt - Feb 4, 2011:
Merlin - Feb 4, 2011:

we can understand these “fuzzy” inputs because we can map them onto a non-fuzzy knowledge base.

True, in Skynet-AI the goal is to match a fuzzy input and map it to an output which is a specific concept.

 

 
  [ # 62 ]
Merlin - Feb 4, 2011:

As the expression goes, “the proof is in the pudding”.  The first to have a chatbot converse, carry a conversation, learn via NLP, reason, perhaps pass a Turing Test, I think that will speak for itself.

Then I guess I win! wink
Skynet-AI has done all of this. It has held conversations lasting over an hour.

LOL…uh… No.

Me: my brother’s name is paul
Skynet-AI: You are paul.
Me: my name is henry and what is yours?
Skynet-AI: Glad you stopped by henry and what is yours?.

So I mean the first bot that won’t get tricked by things like that.
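To make that failure mode concrete, here is a minimal sketch (my own toy code, not Skynet-AI’s actual implementation; the patterns are invented) of the kind of wildcard template matching that produces exactly these two responses:

```python
# Toy sketch of naive wildcard matching (my own illustration, NOT
# Skynet-AI's actual code): without a parse of the sentence, the
# "my ... name is X" template happily fires on "my brother's name".
import re

# Hypothetical patterns, ordered most-specific first.
patterns = [
    (re.compile(r"my name is (\w+) and (.+)", re.I),
     "Glad you stopped by {0} and {1}."),
    (re.compile(r"my .*name is (\w+)", re.I),
     "You are {0}."),
]

def reply(utterance):
    for pattern, template in patterns:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(reply("my brother's name is paul"))
# -> You are paul.
print(reply("my name is henry and what is yours?"))
# -> Glad you stopped by henry and what is yours?.
```

A parser that recognized “brother’s” as a possessive modifying “name” would not fall into this trap; plain wildcards cannot tell the difference.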

A Turing-Test-passing system is decades away, if not more, but to clarify my comment above: the first bot that performs the above, as well as the various other examples mentioned earlier in this thread, would be a huge step forward.

After those core functionalities are achieved, adding fuzziness is the final step; spelling correction and all that stuff has been done before.

 

 

 
  [ # 63 ]

Ok, I need some more grounding here. The term “knowledge” is thrown about so freely that I’m losing the meaning of what folks are actually saying. Understanding, to Victor, is inconsistent, as he relates what CLUES produces compared to Skynet-AI. Understanding to Hans Peter seems like something different than that. In C R’s terms, understanding is more like relating a language construct to an individual’s experience (how could she ever learn to play a game like tennis or golf? BTW, there are computers that can play table tennis, or even drive a car for that matter. Do those inventions use natural language? Is driving different in Germany than in a country that does not speak German? I’m pretty sure language is only a tool. The problem is, with only a hammer the world begins to look like a bunch of nails.)

So if information is being able to abstract data into more usable forms, then much of machine translation could be said to still be processing at the information level. I would venture to say that even an ontology like Cyc, a commonsense repository with all its microtheories of how the world works, is still at the information level.

C R gave an example of a baby putting things into its mouth. I believe the baby learns and “knows” from this behavior, just as she pointed out. She goes on to say that she doesn’t act in this manner. I am quite sure it is not because she has experienced the taste of everything around her (she said she hasn’t.) We can imagine the taste. We have an idea, maybe, if we concentrate on it. To me, knowledge has ranked that sensory input at a different priority than other stimuli. So instead of just the abstract constructs that language can maybe capture, we are at a different level of learning or understanding when we talk of knowing. Instead of feeding on all the inputs we can get, we select the illusions that we need to maintain a self. Therefore spoon-feeding text into a bot probably won’t help the “knowing” of the being.

Again, knowledge is a part of you. Something deeper than just the fact. You “use” knowledge in a way you don’t use information.

Also, I would like to amend my list, because I’m not sure you can get to wisdom from knowledge without purpose. Some might view this as a high level of goal seeking that directs several levels of goal seeking (including self-awareness) below it.

So Victor, when you say you want the bot to understand, do you want it to reiterate what you told it, or do you want the bot to paraphrase in its own words, from what it knows, concerning what you told it? I’m guessing the latter is what Hans is wanting to attempt.

 

 
  [ # 64 ]

My apologies, Hans, I didn’t spell your name correctly in the last post. I am really sorry.

 

 
  [ # 65 ]

1. Gary has a point about overloading the word ‘knowledge’. What is that anyway? Perhaps it’s only a database system?
2. I would just like to add a few things here:
- Hans, it seems to me that your current model consists of weighted relationships between words/concepts. If so, this path has been tried before, many times. Some examples: Mindforth, or this ai project, ... Personally, I think it’s a model that is missing a few things.
- For all the people who have a need for AI to be able to figure out the meaning of every word by itself: have you ever considered archaeology? I mean, it took us more than 2000 years before one of us figured out what the hell those Egyptians wrote on their walls. And he only managed to do that after some translated text was found. So, yes, I’d like to have a bot that will one day be able to learn new languages all by itself. And when I do, I’ll know the thing is more than I’ll ever be.

 

 
  [ # 66 ]
Gary Dubuque - Feb 5, 2011:

Ok, I need some more grounding here. The term “knowledge” is thrown about so freely that I’m losing the meaning of what folks are actually saying.

You’re right. There’s no point in discussing “knowledge” and “understanding” if all we’re really doing is debating semantics!

Gary Dubuque - Feb 5, 2011:

In C R’s terms, understanding is more like relating a language construct to an individual’s experience (how could she ever learn to play a game like tennis or golf?

I guess I’m not explaining myself well. My point is that people use their experiences to generate “knowledge bases”. You might have knowledge bases built on interacting with objects, your sense of touch, taste, sight, etc. None of these knowledge bases require language, per se. And because our knowledge of a single object or event involves many sensory knowledge bases, even if we have only partial information about a new object or event, we can use our previous experience to guess what the missing information is. (See the example of “explosions”.)

So where does language fit into this? So far everything I’ve described could easily apply to your dog. (And I’ll bet to a great extent it does.) Humans are very much geared towards language learning. To the point that, like I mentioned before, it influences the way we think. So as we’re experiencing the world, we also map those experiences onto words and, for more complicated objects and events, we map them onto sentences and paragraphs. We do this in our mind whenever we think about something we’ve seen/done/etc. The way we form these sentences is based on grammar rules. Our own internal rules may not be completely “grammatically correct”, but they are self-consistent.

So when we hear language, we can use the connections we’ve already formed between language and experience to imagine what is being said. When we hear grammatically incorrect sentences or receive only partial information, that’s okay because by mapping what we do know back onto our experiences, we can “fill in the blanks” the same way we would if we heard an explosion but did not see it.
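As a toy sketch of that “fill in the blanks” step (my illustration only, under the simplifying assumption that an experience is just a handful of labeled sensory features):

```python
# Toy sketch of cross-modal "fill in the blanks" (an illustration,
# not a model of real cognition): remembered events carry features
# from several senses; a partial observation is completed from the
# stored experience that agrees with it on the most features.
experiences = [
    {"sound": "loud bang", "sight": "flash and smoke",
     "smell": "sulfur", "label": "explosion"},
    {"sound": "loud bang", "sight": "dropped pan",
     "smell": "food", "label": "kitchen accident"},
    {"sound": "rumble", "sight": "dark clouds",
     "smell": "rain", "label": "thunderstorm"},
]

def fill_in_blanks(partial):
    """Complete a partial observation from the best-matching memory."""
    def agreement(memory):
        return sum(1 for key, value in partial.items()
                   if memory.get(key) == value)
    best = max(experiences, key=agreement)
    return {key: partial.get(key, best[key]) for key in best}

# We hear the explosion and smell it, but never see it:
print(fill_in_blanks({"sound": "loud bang", "smell": "sulfur"}))
# -> sight "flash and smoke" and label "explosion" are filled in
```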

Gary Dubuque - Feb 5, 2011:

BTW, there are computers that can play table tennis, or even drive a car for that matter. Do those inventions use natural language? Is driving different in Germany than in a country that does not speak German? I’m pretty sure language is only a tool. The problem is, with only a hammer the world begins to look like a bunch of nails.)

So in the context of what I said above, there is no reason a computer can’t utilize other sensory input (and a good algorithm) to learn to play tennis or drive a car*. The reason I’m so focused on natural language is that (1) this whole conversation began as a discussion of whether or not grammar rules are a necessary prerequisite to a bot building a text-based knowledge base and (2) for my bot, text is the only sensory input it’s getting. That’s it.

There’s no other type of knowledge base for it to compare the text against. No other teacher besides more and more text. (The car, provided it has a good algorithm, learns to avoid driving off the road when this leads to a crash. Its interaction with the world acts as a consistent teacher.) Hard-coded grammar rules are a sort of “workaround” designed to deal with this limitation.

*Google rocks smile

Gary Dubuque - Feb 5, 2011:

C R gave an example of a baby putting things into its mouth. I believe the baby learns and “knows” from this behavior, just as she pointed out. She goes on to say that she doesn’t act in this manner. I am quite sure it is not because she has experienced the taste of everything around her (she said she hasn’t.) We can imagine the taste. We have an idea, maybe, if we concentrate on it. To me, knowledge has ranked that sensory input at a different priority than other stimuli. So instead of just the abstract constructs that language can maybe capture, we are at a different level of learning or understanding when we talk of knowing. Instead of feeding on all the inputs we can get, we select the illusions that we need to maintain a self. Therefore spoon-feeding text into a bot probably won’t help the “knowing” of the being.

No, I haven’t directly tasted my keyboard. (Thank goodness, those things are filthy.) But the keys are slick, and I know they’re made of plastic. And I can feel how warm they are. All of this partial information allows me to guess what a thing sharing these traits would taste like. So probably, at one point, I’ve tasted something that had most of these traits. That’s why I can imagine the taste. So “selecting illusions” requires a great deal of previous experience. And for my bot, that experience comes from being fed text!

Gary Dubuque - Feb 5, 2011:

Also, I would like to amend my list, because I’m not sure you can get to wisdom from knowledge without purpose. Some might view this as a high level of goal seeking that directs several levels of goal seeking (including self-awareness) below it.

Yes, good point. I define wisdom as the ability to apply knowledge to a new situation. (Through analogy, etc.) But why apply that knowledge if you don’t have a goal?

 

 
  [ # 67 ]

@CR: yummy, keyboard LOL

Humans are very much geared towards language learning.

Very true, especially compared to other animals, I think. Though I’d like to add one more thing to this: I also believe it sort of depends upon a person’s character which process is the most dominant: language, image, emotion,... For me personally, I do a lot of thinking strictly in visuals and emotions, without words. As such, I do think that, especially for the more creative thinking, multiple senses are required, as you say.
Perhaps what we call ‘grammar’ is just a way for our brain to organise information so that it can be retrieved later on. Perhaps grammar isn’t only used for language, but for everything? Perhaps grammar as we learn it at school isn’t exactly the same thing? And perhaps it’s not just humans who use grammar to structure the world?

 

 
  [ # 68 ]

C R, I think you are onto something here.

If text creates the bot’s experience, how is that held within the computer? How is a machine going to get something more out of text than just more text? (This is that Chinese Room parable, where text goes into a black box and text comes back out, but who is to say whether anything of real intelligence happens inside.) Or how do people get anything out of reading a book, since it is just ink on a page?

We all believe there is something behind the language, something you might call experience.  The funny part is that reading a book is an experience in itself.

But if language represents something more, and we can channel that interpretation through filters like grammar and generally accepted meanings of words, then maybe we can harness this “knowledge”. I’m kind of leaning towards the Gestalt philosophy, where focus on an idea comes to the forefront; that is, statistical clustering of those abstract extractions from text into patterns, mostly sequential, giving rise to capturing knowledge. I say that because I can’t conceive what a learning bot would have as a model internally if it was constantly incorporating new experience into its structure. I’m pretty sure it is not a database as we know it now; that is, unless the contents are processes instead of representations of processes. I can say that because I believe the brain is busy programming itself, and that’s how we experience.

Maybe it is a game with a “grid” (AKA Tron’s vision) where ideas (programs) are born (inspired by text?) and then compete with each other to survive as knowledge the machine has learned. This might be one way to take information to the level of knowledge. The mechanics of ideas being formed and ideas competing to exist elude me though.
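Purely as a toy sketch of that metaphor (all the numbers and patterns here are invented), the competition could look like this: patterns are born from incoming text, reinforced when they recur, and culled when they fade:

```python
# Toy sketch of the "grid" metaphor (invented parameters): candidate
# word-sequence patterns are born from text, reinforced when they
# recur, decayed when they don't, and culled when too weak.
from collections import defaultdict

candidates = defaultdict(float)          # pattern -> strength

def observe(sentence, n=2, decay=0.9, birth=1.0, death=0.2):
    words = sentence.lower().split()
    seen = {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    for pattern in set(candidates) | seen:
        candidates[pattern] = candidates[pattern] * decay \
            + (birth if pattern in seen else 0.0)
    for pattern, strength in list(candidates.items()):
        if strength < death:             # the idea loses the competition
            del candidates[pattern]

for line in ["the cat sat", "the cat ran", "a dog ran"]:
    observe(line)
print(max(candidates, key=candidates.get))   # ('the', 'cat') survives best
```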

However experience is approached, it seems to be an environment we are still unfamiliar with. So far I mostly come across the artifacts of rules to program these “higher” mental functions. I’m guessing we use rules, fuzzy or crisp, because we are not comfortable with something where we can’t explain why and how the machine did it. Yet I feel the answer might be in a process that only the computer can describe, and then we may not fully understand how it described what it was thinking (or at least trust that the description is fully all that is involved in the thought.)

This might be something like a neural net, where it has to be trained to get the values that make it function. For knowledge, the store could be a set of values more complicated than a neural net, and holographic in the sense that an idea is diffused throughout the values. Whatever it is, it will be a model of some sort, or a set of models. It won’t be something like Wikipedia!
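And for the “holographic” part, a toy sketch (a loose reading of the idea, in the spirit of distributed memories; the dimensions are arbitrary):

```python
# Toy sketch of a "holographic" store (a loose reading of the idea,
# in the spirit of distributed memories): each idea is a random
# high-dimensional vector; memory is simply their sum, so every idea
# is diffused throughout all the values, not kept in one slot.
import numpy as np

rng = np.random.default_rng(0)
dim = 2048
ideas = {name: rng.standard_normal(dim)
         for name in ["cold", "warmth", "rain", "umbrella"]}

memory = sum(ideas.values())       # one flat vector holds everything

probe = ideas["cold"]              # a stored idea...
noise = rng.standard_normal(dim)   # ...versus one never stored
print(memory @ probe / dim)        # ~1.0: "cold" resonates with memory
print(memory @ noise / dim)        # ~0.0: unstored ideas do not
```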

And it will be a complicated task to mine the text to capture the knowledge, that hidden stuff that language communicates between people (and bots.)

 

 
  [ # 69 ]

Especially since the text will only be the tip of the iceberg that is the thought or knowledge it conveys. The bot will have to contribute the bulk of the message, not the text itself!

 

 
  [ # 70 ]

I hope I didn’t come across as too negative, but seriously, this whole thread is just so subjective, too subjective, too nebulous regarding requirements. Let’s get some FIRM EXAMPLES of functionalities we’d like to see in a bot. Let’s define based on what the bot should be ABLE TO DO. Everyone’s fuzzy ideas on what knowledge “really is”, or what thinking “really is”, etc., serve no purpose, in my humble opinion. smile That’s my approach for my project: defining the bot by what it can do. Yes, no matter what your bot can do, someone can simply say “Puh! Ah… that’s **just** blah blah blah” smile And sure, whatever, you can have your own SUBJECTIVE interpretation, but it is the actual functional abilities that I care about.

 

 
  [ # 71 ]

Let’s define based on what the bot should be ABLE TO DO.

true.

 

 
  [ # 72 ]

@Jan

Thank You!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

This thread should have been split into a philosophical thread; I don’t think it has much to do with “Introduction” anymore!

I will start a new thread, FUNCTIONAL DEFINITION OF A CHAT BOT, later today smile

 

 
  [ # 73 ]

Again, thanks for all the info from everyone. Loads of stuff to digest wink

Jan Bogaerts - Feb 5, 2011:

Hans, it seems to me that your current model consists of weighted relationships between words/concepts. If so, this path has been tried before, many times. Some examples: Mindforth, or this ai project, ... Personally, I think it’s a model that is missing a few things.

My model does indeed look somewhat like this, but I agree there is ‘something’ missing in those models. My model has ‘something more’, however: something that is the AI equivalent of ‘instinct’. Like a certain ‘base knowledge’ that defines the most basic elements of a ‘world-view’.
Example: if we feel heat, we instinctively know that we can get burnt. This is a value that we have inherited through evolution. It is not something that we have to learn, and it is not something that is mapped to previous experience.

Also, we can map colours to sounds, to tastes, to temperatures. Grammar rules are not flexible enough to do that (or you would need a LOT of rules); you need another model for that.

And some more info: I’m working from ‘human’ knowledge management (I am professionally engaged in that field; I’m currently developing a cognitive navigation system for knowledge management), and in that field the debate on what ‘knowledge’ actually is, is still raging on. On top of that, doing mind experiments with real people gives some neat insights into the ways that people process knowledge, experience and even feelings in relation to each other.

As for what my AI should be able to do in a conversation: it should show ‘understanding’ by mapping input to experience and replying based on that. Something like this:

Me: it is cold outside
AI: I suggest you put on something warm then

So it should be able to respond with information that is NOT grammatically mapped to the input in any way, but IS correct in the context of the meaning of both input and reply in relation to each other.
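To make that concrete, a minimal sketch (toy code only, not the actual model; every concept and weight here is invented): a small weighted concept graph, seeded with a couple of built-in ‘instinct’ links, ties the reply to the input by meaning alone:

```python
# Minimal sketch (toy code, all weights and concepts invented): a
# weighted concept graph, seeded with built-in "instinct" links, lets
# the reply follow the input's meaning rather than its grammar.
instinct = {                       # inherited, never learned from text
    ("cold", "discomfort"): 1.0,
    ("heat", "burn"): 1.0,
}
learned = {                        # weighted word/concept relationships
    ("discomfort", "seek warmth"): 0.8,
    ("seek warmth", "put on something warm"): 0.9,
    ("rain", "take an umbrella"): 0.7,
}
graph = {**instinct, **learned}

def respond(utterance):
    """Activate a known concept, then follow the strongest links."""
    sources = {a for a, _ in graph}
    active = [w for w in utterance.lower().split() if w in sources]
    if not active:
        return "Tell me more."
    concept = active[0]
    while True:
        edges = [(b, w) for (a, b), w in graph.items() if a == concept]
        if not edges:              # reached an actionable concept
            return "I suggest you " + concept + " then."
        concept = max(edges, key=lambda edge: edge[1])[0]

print(respond("it is cold outside"))
# -> I suggest you put on something warm then.
```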

 

 
  [ # 74 ]
Hans Peter Willems - Feb 5, 2011:

Again, thanks for all the info from everyone. Loads of stuff to digest wink

Agreed!

Hans Peter Willems - Feb 5, 2011:

As for what my AI should be able to do in a conversation: it should show ‘understanding’ by mapping input to experience and replying based on that. Something like this:

Me: it is cold outside
AI: I suggest you put on something warm then

So it should be able to respond with information that is NOT grammatically mapped to the input in any way, but IS correct in the context of the meaning of both input and reply in relation to each other.

Examples! Now we’re getting somewhere, but I’m not sure what you mean by:

    “NOT grammatically mapped to the input”

but your cold-outside/put-something-warm-on example is good.

FYI: I think there is a misconception that my approach is SOLELY based on grammar. Grammar is just one source. Grammar and parse-tree generation is only stage 1 processing inside CLUES. Semantic inference, auto-linking of logic modules, and other factors are also involved.

I agree that grammar alone cannot be the solution. However, I’m sticking to my guns that if you ignore grammar altogether, you’re doomed to failure. As for the “LOTS of grammar rules” comment: yes, it will take a lot, a LOT, of many, many things… I think if it were easy, and didn’t take a LOT, we wouldn’t be sitting here in 2011, some 60 years after computers were invented, with no really powerful/learning/understanding chatbots smile

I also think that any system or design for a bot will fail if it is simply a ‘stimulus/response’ architecture. It makes no difference, really, whether you do the first stage of processing via grammar or via 900 billion AIML templates: if you don’t do anything useful with the input after you’ve understood it, the bot is pointless. It must correlate that new input with previous facts, and develop a plan to deduce its own response, in its own way, such as by dynamically linking logic modules and automated reasoning.
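To sketch the shape of that staging (invented names throughout, nothing like the real internals of CLUES): parse first, correlate with stored facts, then let a small logic module plan the reply. Note that the parse stage is what keeps “my brother’s name” from being mistaken for “my name”:

```python
# Toy skeleton of staged processing (invented names, nothing like the
# internals of CLUES): parse first, correlate with stored facts, then
# let a small "logic module" plan the reply from the deduced goal.
facts = {}                                   # accumulated knowledge

def stage1_parse(text):
    """Stand-in for grammar / parse-tree generation."""
    words = text.lower().rstrip("?.!").split()
    if words[:1] == ["my"] and "name" in words and "is" in words:
        owner = "user" if words[1] == "name" else words[1].split("'")[0]
        return {"act": "assert_name", "owner": owner, "name": words[-1]}
    return {"act": "unknown"}

def stage2_semantics(parse):
    """Correlate the parse with previously stored facts."""
    if parse["act"] == "assert_name":
        facts[parse["name"]] = parse["owner"]        # paul -> brother
        return {"goal": "acknowledge", "who": parse["name"]}
    return {"goal": "clarify"}

def stage3_logic(meaning):
    """Plan a reply from the goal, not from the surface text."""
    if meaning["goal"] == "acknowledge":
        who = meaning["who"]
        if facts[who] == "user":
            return "Nice to meet you, " + who + "."
        return "Noted: your " + facts[who] + " is named " + who + "."
    return "Could you rephrase that?"

for line in ["my brother's name is paul", "my name is henry"]:
    print(stage3_logic(stage2_semantics(stage1_parse(line))))
# -> Noted: your brother is named paul.
# -> Nice to meet you, henry.
```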

 

 

 
  [ # 75 ]
Victor Shulist - Feb 6, 2011:

FYI: I think there is a misconception that my approach is SOLELY based on grammar. Grammar is just one source. Grammar and parse-tree generation is only stage 1 processing inside CLUES. Semantic inference, auto-linking of logic modules, and other factors are also involved.

No misconception here; it’s clear to me that you are using more models than only grammar.

Victor Shulist - Feb 6, 2011:

I agree that grammar alone cannot be the solution. However, I’m sticking to my guns that if you ignore grammar altogether, you’re doomed to failure.

I never said that we can do without grammar. What I’m disputing is that grammar is needed for the base AI mind-model to be able to learn, and eventually even ‘understand’, the concepts that we try to teach it. My stance is that ‘proper’ grammar is just something the AI could, or maybe even should, learn to be able to have a ‘proper’ conversation. For me it just maps to the level of conversational capacities the AI would have… or not have. A little child that has yet to learn proper grammar, and currently converses in single terms and short, segmented sentences, is not held back from being able to learn. So this means that grammar in itself is not a prerequisite for learning. I think learning takes place on a much more organic and abstract level than ‘language’, and THAT is what I’m searching for as a basis for developing a strong AI.

Victor Shulist - Feb 6, 2011:

As for the “LOTS of grammar rules” comment: yes, it will take a lot, a LOT, of many, many things… I think if it were easy, and didn’t take a LOT, we wouldn’t be sitting here in 2011, some 60 years after computers were invented, with no really powerful/learning/understanding chatbots smile

I think, after some 60 years, we might consider that maybe we’re going at it the wrong way wink

 
