Introduction
 
 
  [ # 16 ]
C R Hunt - Jan 28, 2011:

This means that our intuitive understanding of how the world works (what an object is, how it can perform actions and have actions performed upon it) cannot be developed by the bot by interaction with the world. They cannot store information about the world through sensory means. Thus there is no other type of non-language knowledge base for the language knowledge base to build from. People can do fuzzy matching because they can refer to this non-language knowledge base of experience and infer what the proper language representation should be. Bots only have the proper language knowledge base to draw from.

Ah yes, the age-old problem in AI - the anchoring problem (Google: ‘anchoring problem’).

This is a very interesting point.  I remember when I first realized this conundrum WAY back in ‘86, at age 15: if you give the AI a definition of a word (A), that definition is simply a sequence of other words, A1, A2, A3, and you are faced with an endless recursion of looking up words. When do you ever find meaning?!

My bot’s mind will be a self-contained network of words and associated information, forming a kind of web.  Thus, meaning will only go to the depth of recursion required for a specific problem.  That is, a word like ‘hot’ will never have the real meaning, of course, but will be defined as a parse tree of associations.  Anyone remember the episode of Star Trek TNG where Data was asked “Do you know what desire is, Data?” to which he responded, “Desire: a wish, a request, a…” and his designer (I think it was Dr. Noonien Soong) responded very aggressively, “Do you *KNOW* *WHAT* desire *IS*?!”

The question is, and I will be finding out very shortly (within the next 6 months): will this self-contained web of word definitions (with NO anchoring) be enough???  We’ll see!!
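To make the idea concrete, here is a rough Python sketch of a bounded-depth lookup over a word-association web (the associations and names are purely illustrative, not my actual implementation):

```python
# Minimal sketch of a self-contained word-association web with a bounded
# lookup depth. Associations and names are purely illustrative.

ASSOCIATIONS = {
    "hot":    {"temperature", "burn", "fire"},
    "fire":   {"flame", "heat", "burn"},
    "burn":   {"damage", "heat", "pain"},
    "desire": {"wish", "request", "want"},
}

def expand(word, depth):
    """Return the words reachable from `word` within `depth` association hops."""
    seen = {word}
    frontier = {word}
    for _ in range(depth):
        frontier = {assoc
                    for w in frontier
                    for assoc in ASSOCIATIONS.get(w, set())} - seen
        if not frontier:
            break
        seen |= frontier
    return seen - {word}

# 'Meaning' only goes as deep as the problem requires: depth 1 gives the
# direct associations, depth 2 pulls in the associations of those associations.
print(expand("hot", 1))   # {'temperature', 'burn', 'fire'}
print(expand("hot", 2))   # adds 'flame', 'heat', 'damage', 'pain'
```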

 

 

 
  [ # 17 ]

Thanks for all the replies so far. I’ll add all the suggested information to my list of stuff to research, so keep those suggestions coming. This is what I hoped for (and expected) when subscribing to this forum. It is pretty hard to do any serious research without having peers review your ideas and react to them. As I don’t have access to an academic institution for such interaction, this forum is already helping me tremendously (I actually do have access to an academic institute, but they don’t do any AI-related research).

One thing I want to make clear is that I do agree on the importance of language! However, I also believe that language is very much tied to perception (as Nova pointed out), so to build a strong AI we need to engineer this language-perception link into the model. And in my opinion the only way to do that is to have the AI ‘learn’ this instead of ‘programming’ grammar-logic into it. How to ‘teach’ such a system while it does not yet perceive grammar is one of the challenges (maybe the biggest) that I’m trying to tackle. I’m pretty sure that I’ll end up with a system that will need to ‘learn’ everything, including grammar, step by step. Having raised two kids gives me some perspective on how difficult a task this is going to be, by the way wink Having the AI learn things might be a much bigger challenge than actually constructing the framework for the AI.

One thing that is paramount in my view is that the AI needs a ‘sense of self’. ‘I think, therefore I am’ starts with the perception (and understanding) of ‘I’. So instead of starting out with grammar rules, I’m starting out with describing the entity of the AI itself. I’m hoping to gradually (through training) build the perception of the world around this entity.

I also need to point out that I’m just at the beginning of this. I worked for about two years to build a behavioral model into an AIML-based chatbot, only to end up with the conclusion that AIML can model a ‘conversation’ to a certain extent, but not much else. Mind you, I’m not faulting AIML for this; it’s simply what it is designed for. In the end I do need a pattern-based system (not unlike AIML) to model the conversation interface, but working FROM the parsing engine (i.e. grammar rules) seems backwards to me at this point.

 

 
  [ # 18 ]
Hans Peter Willems - Jan 29, 2011:

Thanks for all the replies so far. I’ll add all the suggested information to my list of stuff to research, so keep those suggestions coming. This is what I hoped for (and expected) when subscribing to this forum. It is pretty hard to do any serious research without having peers review your ideas and react to them. As I don’t have access to an academic institution for such interaction, this forum is already helping me tremendously (I actually do have access to an academic institute, but they don’t do any AI-related research).

I agree completely.  I also very much enjoy the conversations I engage in with all the members of this site.  The members here have a lot of talent, and I believe we all benefit from sharing our ideas.  Also, it is encouraging and I think we all motivate each other.

Hans Peter Willems - Jan 29, 2011:

One thing I want to make clear is that I do agree on the importance of language! However, I also believe that language is very much tied to perception (as Nova pointed out), so to build a strong AI we need to engineer this language-perception link into the model. And in my opinion the only way to do that is to have the AI ‘learn’ this instead of ‘programming’ grammar-logic into it. How to ‘teach’ such a system while it does not yet perceive grammar is one of the challenges (maybe the biggest) that I’m trying to tackle. I’m pretty sure that I’ll end up with a system that will need to ‘learn’ everything, including grammar, step by step. Having raised two kids gives me some perspective on how difficult a task this is going to be, by the way wink Having the AI learn things might be a much bigger challenge than actually constructing the framework for the AI.

Now I *do* agree with this.  However, to be perfectly honest with you, I have set myself a more realistic goal.  I don’t actually expect to build a strong AI, at least not right away.  For my first goal, I want to create an AI that I can have a conversation with: one that stores and retrieves information in the form of natural language.  I also want it to be able to discuss complex things, of course in natural language, and act as a sort of teacher.  Possible applications will be things like teaching English, or perhaps tutoring in mathematics, electronics, or even doing your taxes.  Later, I may decide to have the system learn grammar.  But I believe I should know how that grammar is represented in the system first, and then write code that learns it.  Then again, I don’t want to have the system learn grammar just for the sake of learning grammar; if direct coding of the rules works for the system, that is fine.  And again, I’m not concerned about whether that will be considered “true AI” - as mentioned above, all I am after right now is “the bottom line”: a system that can carry a conversation, learn new facts and words via NLP, answer questions, ask follow-up clarifying questions, and deduce information, all through English conversation.  I think that is enough for “phase 1” !!! smile

Hans Peter Willems - Jan 29, 2011:

One thing that is paramount in my view is that the AI needs a ‘sense of self’. ‘I think, therefore I am’ starts with the perception (and understanding) of ‘I’. So instead of starting out with grammar rules, I’m starting out with describing the entity of the AI itself. I’m hoping to gradually (through training) build the perception of the world around this entity.

Yes, I very much agree with this.  I do not think my “direct coding of grammar” will preclude the possibility of putting this type of concept in my bot.  I will be pursuing this idea as well.

Hans Peter Willems - Jan 29, 2011:

I also need to point out that I’m just at the beginning of this. I worked for about two years to build a behavioral model into an AIML-based chatbot, only to end up with the conclusion that AIML can model a ‘conversation’ to a certain extent, but not much else. Mind you, I’m not faulting AIML for this; it’s simply what it is designed for. In the end I do need a pattern-based system (not unlike AIML) to model the conversation interface, but working FROM the parsing engine (i.e. grammar rules) seems backwards to me at this point.

Absolutely. AIML of course has its uses; as they say, “the right tool for the right job” :)  A lot of the time, AIML is all you need.

 

 
  [ # 19 ]
Victor Shulist - Jan 29, 2011:

Also, it is encouraging and I think we all motivate each other.

You, and others here, are already doing that for me smile

Victor Shulist - Jan 29, 2011:

However, to be perfectly honest with you, I have set myself a more realistic goal.  I don’t actually expect to build a strong AI, at least not right away.

I got that from the information you have posted so far (I’m following your topic in ‘my project’ as well). I do admire your work so far and I think there is certainly merit in your aim. I’m already picking up ideas from your postings that help me within my own scope of research. AI research is obviously still in its infancy, and I’m pretty convinced that every angle of research will have its value within a greater scope or model. I don’t expect to find someone who thinks exactly as I do; it’s the different ways of looking at problems and finding solutions for them that are the ultimate driving force for innovation (in any field). Every time I question something in your model, it inherently questions something similar, or even totally different, in my own model.

 

 
  [ # 20 ]

Thank you sir, much appreciated.  You picked a great time to join the site.  I am approaching a very exciting stage in the development of my own system.  You should be seeing increasingly sophisticated examples in my thread very, very soon!

 

 
  [ # 21 ]

I still think there is something being overlooked by everyone doing AI that, once we figure it out, will make everything fall into place.  What is the step between having a bot that can understand perfect grammar and a bot that can deal with concepts and conceptual ambiguities?

Strong or General AI fascinates me because it seems like it is the one thing that humans can’t figure out.  I’m sure science will find cures for just about everything and I’m sure microprocessors will reach amazingly high clock speeds, but the AI thing is a stumper.  Truly.

 

 
  [ # 22 ]
Toby Graves - Jan 30, 2011:

Strong or General AI fascinates me because it seems like it is the one thing that humans can’t figure out.  I’m sure science will find cures for just about everything and I’m sure microprocessors will reach amazingly high clock speeds, but the AI thing is a stumper.  Truly.

Yes, but with ‘strong AI’ being the ‘holy grail’ of AI research, that won’t stop me (re)searching for it wink

To reflect on your statement: I think that, with pattern-based chatbots on one side of the spectrum and neural nets that try to model the actual human brain on the other, the real solution lies somewhere in between. I don’t think we actually need a complete model of how the human brain works (because a computer is simply different), but we DO need a model of how the human brain ‘processes’ information in such a way that it becomes knowledge. I also think that knowledge (and the understanding thereof) is based in large part on ‘experience’. Hence, building a strong AI needs to account for this concept of experience.

 

 
  [ # 23 ]

Yes, but with ‘strong AI’ being the ‘holy grail’ of AI research, that won’t stop me (re)searching for it.


And you should do that, because it is interesting to you.  Absolutely.

we DO need a model of how the human brain ‘processes’ information in such a way that it becomes knowledge

Understanding is the combination of awareness (awareness of an object) and knowledge of its properties and relationships.

The brain is a flexible pattern matcher / responder / reflector.
Understanding of a “pill bottle” is a reflexive pattern triggered by the resonance of a stored and matched “pill bottle” pattern in the brain.

 

 

 
  [ # 24 ]
Toby Graves - Jan 30, 2011:

What is the step between having a bot that can understand perfect grammar and a bot that can deal with concepts and conceptual ambiguities?

Concepts can be conveyed to the bot with language.  Perfect grammar is only the first step; then you add on the ability to process the input and determine the closest match to what was meant.

Think of a spell checker: how can you determine a list of suggested words for a misspelled word if you don’t know a list of correctly spelled words?

In the same way, you cannot determine what the grammar of a sentence should have been, and match against that, without first knowing proper grammar.
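To borrow the analogy in code, a minimal Python sketch (the vocabulary is made up for illustration) ranks a misspelled word against a list of known-good words - the same kind of reference a grammar-based bot needs for sentences:

```python
# Sketch of the spell-checker analogy: suggestions can only be ranked against
# a list of known-good words. The vocabulary here is made up for illustration.
import difflib

KNOWN_WORDS = ["grammar", "pattern", "concept", "ambiguity", "language"]

def suggest(misspelled, n=3):
    """Return up to n known words closest to the misspelled input."""
    return difflib.get_close_matches(misspelled, KNOWN_WORDS, n=n, cutoff=0.6)

print(suggest("gramer"))   # ['grammar']
print(suggest("langage"))  # ['language']
```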

 

 

 
  [ # 25 ]
Toby Graves - Jan 30, 2011:

  What is the step between having a bot that can understand perfect grammar and a bot that can deal with concepts and conceptual ambiguities?

Dealing with “conceptual ambiguities” is a huge part of natural language *understanding* - perfect grammar or not. smile  It takes real-world knowledge and semantic reasoning to understand natural language.  Natural language is the medium by which we share concepts, ideas, knowledge, etc.  Don’t underestimate its importance and power.

 

 

 
  [ # 26 ]
Toby Graves - Jan 30, 2011:

  What is the step between having a bot that can understand perfect grammar and a bot that can deal with concepts and conceptual ambiguities?

I agree with Victor that understanding perfect grammar is a step towards understanding conceptual ambiguities, rather than the other end of the spectrum. Because the chatbots we’re designing have no sources of experience with the world beyond reading natural language, perfect-grammar input acts as a sort of reference point from which to analyze ambiguous input.

I don’t think we’ll see “true”/“strong”/etc. AI until we have a system that has senses beyond just text reading. The reason is that our own concept of intelligence is so wrapped up in the combination of senses we use to learn about the world.

 

 
  [ # 27 ]

The brain is a flexible pattern matcher / responder / reflector.

This also lets some of us throw grammar completely out the window and attempt to identify meaning from the input in a “fuzzy” fashion.

Even the best “English” speakers will make typos or say something which is grammatically incorrect from time to time. The communication problem for bots increases with those who use English as a second language.

 

 
  [ # 28 ]
Merlin - Jan 31, 2011:

This also lets some of us throw grammar completely out the window and attempt to identify meaning from the input in a “fuzzy” fashion.

Exactly!

Merlin - Jan 31, 2011:

Even the best “English” speakers will make typos or say something which is grammatically incorrect from time to time. The communication problem for bots increases with those who use English as a second language.

English is my second language (I’m Dutch). I’m thinking that if a system is not primarily grammar-based, it will also be possible to mix languages and have the AI (for example) be capable of doing ‘free translation’. The model that I’m developing is very much based on fuzzy matching, with the aim of gradually reducing the fuzzy level as more information (knowledge) accumulates to describe a certain concept.
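Very roughly, the idea of reducing the fuzzy level as knowledge accumulates could look something like this Python sketch (all names, numbers, and the scoring rule are illustrative assumptions, not my actual model):

```python
# Rough sketch of "reduce the fuzzy level as knowledge accumulates": the more
# descriptors a concept has, the stricter a match it demands. All names and
# the scoring rule are illustrative assumptions.

class Concept:
    def __init__(self, name):
        self.name = name
        self.descriptors = set()

    def learn(self, *words):
        self.descriptors.update(words)

    def threshold(self):
        # Start very fuzzy (0.2) and tighten toward 0.8 as knowledge grows.
        return min(0.8, 0.2 + 0.1 * len(self.descriptors))

    def matches(self, words):
        if not self.descriptors:
            return False
        overlap = len(self.descriptors & set(words)) / len(self.descriptors)
        return overlap >= self.threshold()

dog = Concept("dog")
dog.learn("animal", "barks")
print(dog.matches({"animal"}))   # True: 0.5 overlap clears the 0.4 threshold
dog.learn("four_legs", "tail", "pet", "fur")
print(dog.matches({"animal"}))   # False: the same input no longer suffices
```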

 

 
  [ # 29 ]

Again, a bot based on grammar does not preclude the ability to add fuzzy matching later.

Moreover, fuzzy matching COMBINED WITH the ability to map improper grammar to proper grammar in a fuzzy way will give the bot that much more understanding.

 

 
  [ # 30 ]

The model that I’m developing is very much based on fuzzy matching, with the aim of gradually reducing the fuzzy level as more information (knowledge) accumulates to describe a certain concept.

This is the approach that I took with Skynet-AI. As I watch the logs, I will often identify a set of inputs that triggers a very fuzzy pattern. This then prompts me to add a higher-priority, less fuzzy pattern.

I view both approaches (fuzzy vs. grammar) as attacking the same problem from opposite directions. Fuzzy systems will become less fuzzy, and grammar systems will handle bad or no grammar. Someday our bots will meet in the middle. wink

In my fuzzy system I can get broad (but shallow) understanding. It is more susceptible to false positives (triggering a response that may not be quite correct). When I looked at grammar systems, they gave narrow (but deep) understanding and eliminated the false positives, but had the limitation that they needed all the predefined rules in advance and were unforgiving of poor input.
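In rough terms, the workflow of adding a higher-priority, less fuzzy pattern looks something like this Python sketch (the patterns, priorities, and responses are invented for illustration; this is not Skynet-AI's actual code):

```python
# Rough sketch of priority-ordered pattern matching with a fuzzy fallback.
# Patterns, priorities, and responses are invented for illustration.
import re

# (priority, pattern, response) -- higher priority is tried first.
RULES = [
    (10, re.compile(r"\bwhat time is it\b", re.I), "It is time to chat."),
    (1,  re.compile(r"\btime\b", re.I),            "Are you asking about the time?"),
]

def respond(user_input):
    for _, pattern, response in sorted(RULES, key=lambda rule: -rule[0]):
        if pattern.search(user_input):
            return response
    return "Tell me more."   # ultimate fallback when nothing matches

print(respond("what time is it?"))   # the specific, high-priority rule fires
print(respond("time flies"))         # only the fuzzy, low-priority rule fires
```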

 
