GRACE/CLUES
 
 
  [ # 76 ]

Thanks.  Now it must STAY as GLI. CLUES was ok (Complex Language Understanding Execution Engine). But GLI is better.

 

 
  [ # 77 ]

Very futuristic.

 

 
  [ # 78 ]

Very futuristic.

Thanks Merlin, that’s the idea… ‘outside the box’.

 

 
  [ # 79 ]

Victor!
When you said you were working on a reasoning AI, I couldn’t imagine this kind of work.
Really impressive.
I’ll keep track of this topic.
I spent all my time reading the posts. I’m in a cyber cafe because my computer is broken, so I’ll have to find another time to write properly.
Amazing!

 

 
  [ # 80 ]

Thank you Fatima….. a h*** of a lot of work has gone into this project so far smile

Actually, Grace would be extremely capable linguistically by now, but as I mentioned, I’m on the third engine rewrite (two versions in Perl, one in early ‘09 and one in early ‘10, then the C++ rewrite, which started in November ‘10).

I will be tracking Grace’s progress, and very detailed test results like those shown in previous posts, using Google Docs, which I can grant access to for anyone who is interested in following her progress really closely.  Later, when I ‘hook her up’ to the internet, perhaps some of you will want to train her.  She is not that far away from learning new words and phrases by natural language.

The areas of progress, with test results, that will be tracked on Google Docs will be:

1) Grammatical Knowledge….. that is, how a given input could be explained in grammatical structure, of course by all valid grammar constructs (yes, sometimes combinatorial explosions).

2) World Knowledge—this is data Grace uses to promote some parses over others.  This of course is a huge job.  I will track this in a separate spreadsheet, perhaps on Google Docs.

3) Question answering logic—very generic and flexible routines to do a semantic delta and find the closest fact parse tree to answer questions.  This is where, when Grace notices an input string that she decides is a parse tree of a fact, or the parse tree representing a question, she knows what to do (how to go about choosing the corresponding parse tree that is a fact which can supply the data requested in the question… true natural language query).

4) Inference—initially, I’ll provide code that will allow Grace to take one or more natural language sentences and instantiate a logical argument to reach a conclusion, which will be of the form of a natural language statement.  So this is where, when Grace is asked a question and she doesn’t find any fact parse tree (*.pt file in her KB directory), she will try to generate it… she will find a logic module that has a conclusion of the same form as the desired fact.  The logic module will indicate the number and types of propositions required.  The inference then will be within the logic of the module.  The cool thing will be: when that logic module (call it “LM-1”) requires a proposition, the KB will be consulted first (again, a *.pt file in her KB directory).  If that proposition (as a ‘fact’ parse tree) doesn’t exist, what will she do?  You guessed it… try to find another LM which would generate a conclusion of the form of the required proposition of the first LM….. this will go recursively until she returns to the original LM, hopefully with all required propositions, and deduces the response (see the sketch below).  I won’t be working on this until probably late next year though… (God, to be able to work on this full time is my dream!!).  So the goal here is to have something like forward chaining and backward chaining, but not in representations like F.O.L., but in full, rich NL.

I still have more plans for inference, but I’m not going to torture myself… since I won’t even be able to start on that type of functionality until, like I say, perhaps late next year….. for one person, working on weekends, I think 1 - 3 above is enough.
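To make that recursion concrete, here is a minimal C++ sketch of the back-chaining idea under my own assumptions: the FactKB, LogicModule, and prove names are all hypothetical, and a “form” is flattened to a plain string, so this only illustrates the control flow (consult the KB first, then recurse into any logic module whose conclusion matches the required form).

#include <iostream>
#include <map>
#include <optional>
#include <string>
#include <vector>

// Hypothetical stand-ins: a "form" is the shape of a parse tree, and the
// fact KB maps forms to concrete fact parse trees (the *.pt files).
struct LogicModule {
    std::string conclusionForm;            // form this module can conclude
    std::vector<std::string> premiseForms; // propositions it requires
};

using FactKB = std::map<std::string, std::string>;

// Back-chaining: look for a fact; if missing, try every logic module whose
// conclusion matches the form, recursively proving each required premise.
std::optional<std::string> prove(const std::string& form,
                                 const FactKB& kb,
                                 const std::vector<LogicModule>& modules) {
    if (auto it = kb.find(form); it != kb.end())
        return it->second;                 // fact parse tree already in the KB
    for (const auto& lm : modules) {
        if (lm.conclusionForm != form) continue;
        bool allHold = true;
        for (const auto& premise : lm.premiseForms)
            if (!prove(premise, kb, modules)) { allHold = false; break; }
        if (allHold)
            return "derived: " + form;     // conclusion deduced from premises
    }
    return std::nullopt;                   // no fact, no applicable module
}

int main() {
    FactKB kb = {{"X is a man", "Socrates is a man"}};
    std::vector<LogicModule> modules = {{"X is mortal", {"X is a man"}}};
    if (auto answer = prove("X is mortal", kb, modules))
        std::cout << *answer << '\n';      // prints: derived: X is mortal
}

A real version would also need cycle detection, so two logic modules that require each other’s conclusions don’t recurse forever.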

I see the first job for Grace being an extremely flexible knowledge base query.  Natural language will offer flexibility that conventional SQL database type applications can’t provide.  Take just a very simple example—storing telephone numbers.  This is just a very simple example to get the idea across.  I want to store telephone numbers, so let me create a table.  What columns do I need?  Well, the name of the person, say 30 characters, and a second column, telephone number.  So I enter a bunch of numbers, but then, oh, I need to indicate if the phone number is a cell or landline…. oh… I have to go into my DB and issue an “ALTER TABLE” statement to add a column.  Use the app for a while… oh no… I realize I should indicate if the given number is the person’s work number or home number….. use the app for a while again… oh no… I realize I should differentiate again between x and y… and on and on we go.

Now, imagine the NL solution….

user: Computer…. John’s phone number is 111-2222.
computer: OK
user: Quick… what is John’s cell phone number???
computer: Well, I don’t know if it is his *CELL* phone number, but you told me his phone number was 111-2222.
user: oh yeah, that’s his cell, thanks.

< system updates its DB itself, using the word “cell” as an adjective to modify “phone number”, and adjusts the applicable “fact tree” parse in its KB >

user: Computer, what is John’s WORK cell phone number?
computer: Hmm, well, you told me his cell number was 111-2222, but you didn’t indicate if that was a WORK cell or not.
user: no, that is his personal cell number
computer: ok, well, sorry I don’t know then

< system, although couldn’t supply answer, has learned something.. that 111-2222 is John’s PERSONAL CELL number, updates its KB >

* * *
So… you get the idea.  Instead of manually updating the SQL DB, the bot takes care of that “housework”.  I intend to have all of Grace’s knowledge stored as parse trees; that way, no details are lost.  (A toy sketch of this kind of self-updating fact store follows.)
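As a toy illustration of that housework, here is a hedged C++ sketch of a fact store that refines a stored fact in place when a new modifier arrives, instead of requiring a schema change. The Fact struct and its fields are my own hypothetical flattening; Grace’s actual parse trees are far richer than this.

#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical flat stand-in for a fact parse tree: an owner, a head noun
// phrase, a value, and the adjective modifiers accumulated so far.
struct Fact {
    std::string owner;                  // "John"
    std::string head;                   // "phone number"
    std::string value;                  // "111-2222"
    std::vector<std::string> modifiers; // {} -> {"cell"} -> {"personal","cell"}
};

int main() {
    std::map<std::string, Fact> kb;
    kb["John.phone"] = {"John", "phone number", "111-2222", {}};

    // user: "oh yeah, that's his cell" -> no ALTER TABLE, just add a modifier
    kb["John.phone"].modifiers.push_back("cell");

    // user: "no, that is his personal cell number" -> refine again
    auto& mods = kb["John.phone"].modifiers;
    mods.insert(mods.begin(), "personal");

    const Fact& f = kb["John.phone"];
    std::cout << f.owner << "'s ";
    for (const auto& m : f.modifiers) std::cout << m << ' ';
    std::cout << f.head << " is " << f.value << '\n';
    // prints: John's personal cell phone number is 111-2222
}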

So, before we can have a dialog like this with Grace… well, we can right now: as in the previous posts, she can handle a variety of ‘did-<statement>’ questions as long as, for now, you limit your topic to people going to different types of social events and going to different rooms in a house LOL.  She needs to learn more about the world to “get” other types of statements, like Merlin’s “What gender” question, once I give her more info about what gender means (that, for example, it has values of male and female, etc.).  But before the rest of the dialog is possible, I need to ‘buckle down’ for the next several months and supply her with enough grammar knowledge.  smile  Boatload of work to do!!

 

 
  [ # 81 ]

Ok. My computer is back!

I can just imagine what you could do if it was full-time work… LOL

I can imagine one situation:
If the computer doesn’t have a piece of information, it could just ask you to explain it. So you could “teach” it. (I don’t know if that’s possible.)

Where is GRACE’s database stored? Because you say it will have world knowledge. Will it use only your inputs, or will it explore Wikipedia and other sources? If it’s going to learn everything from you, then is it almost independent of internet knowledge sources?

 

 

 
  [ # 82 ]

I believe that if a bot has enough basic conceptual information, then it would be possible to teach it via user input and by using context evaluation. I have tested this theory with Marie and it looks promising.

Example:

Laura: teacher

Marie: teacher as in school?

Laura: school

Marie: teacher as in school as in building?

Laura: building

Marie: teacher as in school as in building as in structure?

Laura: structure

Marie: teacher as in school as in building as in structure as in city?

Laura: city

Marie: teacher as in school as in building as in structure as in city as in state?

Laura: state

Marie: teacher as in school as in building as in structure as in city as in state as in country?

So anyway, you get the idea? This is called dynamic word association. The above example seems simple because it is. When you give Marie a single word she will first find an associative word. When you confirm the association she moves along that same branch. If you give another word not in the previous association, then she starts the process over in a new direction.

The more words that you place in a sentence, the more possible associations she can make to form a response. It sounds like a game, but it is not.  It’s a form of ladder logic based on word association (a toy sketch follows). I have been experimenting with this concept for product suggestions and in querying searches, with interesting results. As long as Marie can keep track of the subject and topic history, she can back-reference and build logic on what she has learned. I am now working on applying this process to general conversations as well.
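A minimal C++ sketch of that ladder, assuming a toy association map; the data and names here are invented for illustration, not Marie’s actual internals.

#include <iostream>
#include <map>
#include <string>

int main() {
    // Hypothetical association graph: each word points to its next association.
    std::map<std::string, std::string> next = {
        {"teacher", "school"}, {"school", "building"}, {"building", "structure"},
        {"structure", "city"}, {"city", "state"},      {"state", "country"}};

    std::string word  = "teacher";
    std::string chain = word;
    // Walk the ladder: each confirmed word extends the chain by one association.
    while (next.count(word)) {
        word = next.at(word);
        chain += " as in " + word;
        std::cout << chain << "?\n"; // e.g. "teacher as in school?"
    }
}

Giving a word outside the current branch would simply mean starting a new chain from that word, which is how an abrupt change of direction shows up.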

 

 

 

 

 
  [ # 83 ]

Fatima, thanks.  Yes, what Grace considers ‘undeniable truth’ is stored locally in text files.  These are *.pt files.  The knowledge is kept in its original form (well, with ‘world knowledge markup’ added to the parse tree).  So each record holds:

- the actual literal original string, including case
- the same string, but all lower case
- the parse tree, specifying the grammar of the sentence: predicate, phrases, etc.
- world knowledge markup added to the tree (telling her what it means… simple example: “in July” means a time, but “in hell” can mean in serious discomfort).
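As a guess at how such a record might mirror into memory (the real *.pt layout isn’t shown in this thread, so the struct and field names below are purely illustrative):

#include <string>

// Hypothetical in-memory mirror of one *.pt record, following the four
// fields listed above.
struct PtRecord {
    std::string original;    // "John went to the party in July."
    std::string lowered;     // "john went to the party in july."
    std::string parseTree;   // grammar: predicate, phrases, etc.
    std::string worldMarkup; // e.g. marks "in July" as a time expression
};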

Wikipedia - hmm, interesting question.  When I ask people, I get mixed comments about whether Wikipedia is truly reliable.  Since I am going to want to educate Grace on things that are far beyond my knowledge (so she can tutor me later!!), I will want a reliable source.

Perhaps Grace should store the facts from Wikipedia as truth, but also remember the source.  So if Grace remembers a fact about, say, physics from Wikipedia, but a friend of mine is talking to her (say, one who has a degree in physics) and notices something she thinks is wrong, then that ‘higher priority source’ could override it.

Another thing I want Grace to learn is another human language.  When English is conquered, I will provide the necessary information for her to know French, and, by talking with her, I can then learn that language (many people I know speak French; I’m in Canada after all, so it will be nice to participate).  Since Grace can take in grammar rules of any language (the engine is language agnostic), she can learn much more quickly than I can, and easily correct my grammar.  Internally, Grace knows a word like ‘cold’ is an adjective and assigns ‘pos=adjective’; in French, adjective is adjectif, but internally it won’t matter.  Internally, ‘meta data’ and grammar knowledge will be in English (makes it a little easier for me), but all parse trees would be generated according to the specific language (in this case, French).  Will I translate the parse trees?  No.  They will keep their original form, but since the meta-data (in English) will be the same between all parse trees of different human languages, it’s OK, since the reasoning is done pretty much all at the meta-data level.
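A tiny sketch of that split between English meta-data and source-language surface forms, with hypothetical field names:

#include <string>

// Hypothetical parse-tree leaf: the surface word stays in the source
// language, while the grammatical meta-data is uniformly English, so
// reasoning at the meta-data level never depends on the language.
struct Leaf {
    std::string surface; // "cold" in English, "froid" in French
    std::string lang;    // "en" or "fr"
    std::string pos;     // always English meta-data: "adjective"
};

// Both leaves carry pos="adjective"; reasoning code never reads surface.
const Leaf en{"cold",  "en", "adjective"};
const Leaf fr{"froid", "fr", "adjective"};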

Laura, the word association thing is a good idea in general.  Humans tend to make jokes that involve associations that don’t actually make complete sense all the time.

Take the joke—“How do you keep an idiot in suspense?”

answer—“I’ll tell you next week” 

This would probably involve some kind of self referential reasoning.

 

 
  [ # 84 ]

Sorry, in the above, “indifferent” is probably a better word to use than “agnostic”.

I missed the 15 minute “Edit Window” :(

 

 
  [ # 85 ]

Victor, you’re right about using word association in general.  You can’t rely on it as the only source of logic in your AI. However, I use it first as a filtering system, to see if a pattern is developing as the conversation progresses. It is also a great way to flag a change of topic. It is really impossible to interpret all words correctly, but using words in a response that are closely related to the input is a good method to simulate AI with limited resources.

 

 
  [ # 86 ]

Laura, very good.  I fully agree.  Is the association binary, I mean, is it a clear yes/no, or are there variable degrees of association between words?  I’m hijacking my own thread lol… Grace won’t mind; when I post the next set of example I/O in a few months (because of the huge amount of work to do), she’ll get a brand new thread, “GRACE/GLI”, to reflect the new engine name.

 

 
  [ # 87 ]

Victor, like I said, the more related words you use in a conversation, the higher the topic score becomes, and thus the confidence level increases. Depending on the word used, there could be as few as a dozen or as many as a thousand word associations. The real magic takes place when a matrix is formed from the subject and topic context; some very intelligent responses can be produced. This matrix of subject and related topic history persists throughout the session, so the longer you speak to Marie about a subject, the more topic information is gathered. When you abruptly change the topic, she knows this immediately and starts a new matrix. If you refer back to a previous subject or topic, she picks up right where you left off, with all the history still in her memory. This was an important feature that I had to develop since her main function is an Internet Assistant: much like the search history in your browser, Marie keeps track of all subjects, topics, and associated information through the searches performed. (A toy sketch of the scoring idea follows.)
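A toy C++ sketch of that scoring idea, with invented names, data, and structure; Marie’s real subject/topic matrix is certainly richer than this.

#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

// Hypothetical session matrix: per-topic scores grow as related words keep
// appearing; zero overlap with every known topic signals a topic change.
struct TopicMatrix {
    std::map<std::string, std::set<std::string>> topicWords; // topic -> vocab
    std::map<std::string, int> score;                        // running scores

    // Score an utterance against each topic; return the best match, or ""
    // to signal an abrupt topic change (time to start a new matrix).
    std::string observe(const std::vector<std::string>& words) {
        std::string best;
        int bestHits = 0;
        for (const auto& [topic, vocab] : topicWords) {
            int hits = 0;
            for (const auto& w : words) hits += vocab.count(w);
            score[topic] += hits; // confidence accumulates over the session
            if (hits > bestHits) { bestHits = hits; best = topic; }
        }
        return best;
    }
};

int main() {
    TopicMatrix m;
    m.topicWords["education"] = {"teacher", "school", "student"};
    m.topicWords["travel"]    = {"flight", "hotel", "city"};
    std::cout << m.observe({"my", "teacher", "at", "school"}) << '\n'; // education
    std::cout << m.observe({"booked", "a", "flight"}) << '\n';         // travel
}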

Does this explanation help with your understanding of Marie’s core design?

Oh and BTW, I am the hijacker here wink

 

 
  [ # 88 ]

Yeah, that gives me a very good idea.  Quite an ordinal approach.

 

 
  [ # 89 ]

Victor, we are all here to share our ideas and hopefully together we can accomplish something great.
At least that is the case with most of us. wink

 

 
  [ # 90 ]

Agreed!

 
