

GRACE/CLUES
 
 
  [ # 61 ]

When I read about other bots that have years in the making, I have to ask myself, “What am I actually building?” Is it a chatbot? Yes, but maybe mine is more in the category of a web application with an interactive dialog interface?

I really admire the years of R&D that many members here have put into their projects.  I started on Marie back in July last year. The project was originally going to be designed as an interactive search portal of sorts. I came up with the idea that it would be fun for the user, instead of just typing in a search box or clicking on a link, to talk to a chatbot that would simply interpret and carry out their requests.

One thing led to another and now she not only performs tasks but carries on a conversation as well. The parser I have designed is not really a true NLP system but more of a word association engine that parses out the subject in a sentence and then attempts to match this word to associated topics. Once this is accomplished, a knowledge base query is performed on these criteria and used together with regular expression replacement to form an action and/or reply.
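For anyone curious what such a word-association approach might look like in miniature, here is a toy sketch in Python. Everything in it (topic tables, reply templates, stopword list) is invented for illustration; it is not Marie's actual code, just the general shape described above: extract a subject word, associate it with a topic, then splice it into a reply template with a regular expression replacement.

```python
import re

# Toy word-association responder (all tables invented for illustration).
# 1. pull a likely subject word out of the input
# 2. map it to a topic via keyword association
# 3. splice it into that topic's reply template with a regex replacement

TOPICS = {
    "weather": {"rain", "sun", "forecast", "weather"},
    "music":   {"song", "guitar", "music", "band"},
}

REPLIES = {
    "weather": "Let me check the SUBJECT forecast for you.",
    "music":   "I can look up that SUBJECT for you.",
}

STOPWORDS = {"the", "a", "an", "is", "s", "what", "please", "me", "tell", "about"}

def subject_of(sentence):
    """Very naive subject extraction: the first non-stopword token."""
    for tok in re.findall(r"[a-z]+", sentence.lower()):
        if tok not in STOPWORDS:
            return tok
    return None

def respond(sentence):
    subj = subject_of(sentence)
    if subj is None:
        return "Sorry, I did not catch a topic."
    for topic, keywords in TOPICS.items():
        if subj in keywords:
            # regular expression replacement into the reply template
            return re.sub(r"SUBJECT", subj, REPLIES[topic])
    return "I do not know anything about %s yet." % subj
```

A real version would of course need real subject extraction and a real knowledge base behind the topic lookup, but the pipeline is the same three steps.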

So Victor, in reading your updates and all the thought and effort in design that you have put forth, I am in awe of your accomplishments. I have been inspired by, and have learned so much from, just reading the posts of yours and others who are so dedicated to moving forward the current state of AI software technology.

I look forward to more updates on your project.

 

 
  [ # 62 ]

Thanks for the update Victor! It’s always exciting to see what you’ve got up your sleeve. smile

Don’t feel bad about the limited scope of conversation examples—I’m currently testing new clause parsing algorithms and everything seems to revolve around dogs, their owners, and the current state/location of their bones. LOL

Out of curiosity, for a given input, how many 1) token combinations and 2) final grammatical parses are generated? About how many are whittled away due to contextual clues? Are parses thrown out as they’re generated, or by comparing all grammatical parses after generation?

I imagine in general it wouldn’t matter when parses were thrown out—it might even be beneficial to accuracy to compare all grammatical parses at once. But I worry that with enough clauses and phrases tossed in, speed could be noticeably compromised. Thoughts?

Another question: How is Grace searching her knowledge base? How scalable is your method? Does she weight her searches by how recently the knowledge was acquired?

 

 
  [ # 63 ]

Laura, thank you, I very much appreciate those comments.  I also must say, it is notable what you have achieved so far for a project that only started last year!  Question—are you part time or full time on your project? I WISH I was full time… I have to divide my time between a 40-hour work week, commuting, and a home life, along with my project… oh… to win the lottery so I could put full time into the project (what, win a lottery just to continue working… lol… well, you get it… the human mind has to stay busy busy!)

CR - lol…. yeah, there is so much to the guts of a chatbot that we don’t care about the examples even sounding “normal”.. I actually hesitated and thought, perhaps I should wait until I get a more “normal” sentence before I post to chatbots.org, and not use “bob went to his closet because jane went to the dance”... silly, but I figured everyone on here knows it is the language handling they are interested in.. but still… I am getting tired of bob and his closet and christine going to her dance LOL.

I’m pressed for time… it’s 10:24 am on a Sunday and I usually start work on the project by 9:00… so I will write a much more detailed response to your questions, but for now: nothing at all is thrown away. Grace casts a VERY LARGE “net” and keeps pretty much everything.  Instead of throwing away… she keeps everything, but “promotes” some trees over others.  So kind of the opposite of throwing away.  The most “promoted” parses are seen as “making more sense” and probably what the user really meant.
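A toy sketch of that "promote, don't discard" idea (hypothetical names and structures, not GRACE's actual code): every candidate parse stays in the pool with a score, evidence only bumps scores, and the top scorer is merely the current favorite.

```python
# Toy "promote, don't discard" parse pool (names invented for illustration):
# every candidate parse survives the whole run; evidence only raises scores.

def promote(parses, evidence):
    """Bump the score of every parse that carries a matching feature."""
    for parse in parses:
        if evidence in parse["features"]:
            parse["score"] += 1

def favorite(parses):
    """Nothing is thrown away; we only rank the surviving parses."""
    return max(parses, key=lambda p: p["score"])

parses = [
    {"tree": "P1", "features": {"person-in-clothing"}, "score": 0},
    {"tree": "P2", "features": {"animal-in-clothing"}, "score": 0},
]
promote(parses, "person-in-clothing")  # P1 now outranks P2, but P2 survives
```

The point of keeping the losers around is exactly what the post describes: later evidence can re-rank them without having to re-parse.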

 

 
  [ # 64 ]

So just a small illustration to show Grace’s ability to pick “closest semantic matching fact”....

<user says>

Victor went to a dance

<GRACE says>

Yes, I understand.  I have stored that information in ‘temp-sqa-test’

<user says>

Did Victor go to a dance given by his big company

<GRACE says>

Yes, with conditions.  The conditions are that IF:

you ignore [*GIVEN BY HIS BIG COMPANY* dance]

<user says>

Victor went to a dance given by his company

<GRACE says>

Yes, I understand.  I have stored that information in ‘temp-sqa-test’

<user says>

Did Victor go to a dance given by his big company

<GRACE says>

Yes, with conditions.  The conditions are that IF:

you ignore [*BIG* company]

 


So, initially (after clearing all temporary facts from “temp-sqa-test” directory), Grace started off only knowing one fact:

Fact1: “Victor went to a dance”

(but later we tell her….)
Fact2: “Victor went to a dance given by his company”

So, with only fact 1 available, Grace did her best, finding the closest match, but indicating “you ignore [*GIVEN BY HIS BIG COMPANY* dance]”.

But when fact 2 comes in and Grace is asked “Did Victor go to a dance given by his big company”, that still isn’t a completely good match with fact 2, but it is a lot “closer” semantically than fact 1, so she uses fact 2 to respond, indicating the small difference of modifier (‘big’ used in question, but not in fact 2).
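The selection behind those two exchanges could be sketched like this (a toy model with invented structures, not GRACE's implementation): treat each fact's extra modifiers as a set, turn the question's unmatched modifiers into "you ignore [...]" conditions, and pick the fact that leaves the fewest conditions.

```python
# Toy closest-semantic-match (invented structures, not GRACE's internals):
# a fact's modifiers are a set; the "you ignore [...]" conditions are the
# question's modifiers that the fact lacks; fewest conditions wins.

def conditions(question_mods, fact_mods):
    """Modifiers in the question but not in the fact become conditions."""
    return sorted(set(question_mods) - set(fact_mods))

def closest_fact(question_mods, facts):
    """Pick the fact whose modifier set leaves the fewest conditions."""
    return min(facts, key=lambda f: len(conditions(question_mods, f["mods"])))

facts = [
    {"text": "Victor went to a dance", "mods": []},
    {"text": "Victor went to a dance given by his company",
     "mods": ["given-by-his-company"]},
]
# "Did Victor go to a dance given by his big company"
question = ["given-by-his-company", "big"]
```

With only the first fact known, two conditions remain; once the second fact arrives, only 'big' is left over, so it wins.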

I will come back to those questions in a bit CR.

 

 

 
  [ # 65 ]
Victor Shulist - Jan 29, 2012:

Laura, thank you, I very much appreciate those comments.  I also must say, it is notable what you have achieved so far for a project that only started last year!  Question—are you part time or full time on your project?

Actually, I too have a full-time job (40+ hrs) performing in-home care-giving. That, along with my music (I have played guitar for 35+ years and just put together a CD) and developing Marie, means my time is equally limited.

I still have a ways to go before Marie is ready for prime-time and my short term goal is to have her ready for the Chatter-bot Challenge in March. She will continue to be a work in progress as I have now caught the chatter-bot bug for sure. smile

 

 
  [ # 66 ]

<user says>

Jack went to his closet and took out his new suit because he was going to a dance given by his great company

<GRACE says>

Yes, I understand.  I have stored that information in ‘temp-sqa-test’

<user says>

Did Jack take out his big old dull suit

<GRACE says>

Yes, with conditions.  The conditions are that IF:

you ignore [*DULL* suit]
you ignore [*BIG* suit]
you mean [*NEW* suit]
you ignore [*OLD* suit]

 

So just an example where Grace is able to realize that the answer to this question comes from a complex sentence, from the main clause in that sentence, *and* to realize that it is actually the *second* predicate in that main clause (the first predicate being “went to his closet” and the second “took out his new suit”).  And of course, once she has found the closest fact, and the closest matching predicate of that fact, she zeroes in on the object of that predicate, and does a compare of modifiers between Q and F.

So she’s able to handle contradictions not just between TERMS but between entire phrases, predicates, even subordinate clauses.

Laura - chatter bot bug… yep… welcome to the addiction smile!

 

 
  [ # 67 ]
C R Hunt - Jan 29, 2012:

Don’t feel bad about the limited scope of conversation examples—I’m currently testing new clause parsing algorithms and everything seems to revolve around dogs, their owners, and the current state/location of their bones. LOL

lol, That’s great!... I also love the humor on here!

C R Hunt - Jan 29, 2012:

Out of curiosity, for a given input, how many 1) token combinations and 2) final grammatical parses are generated? About how many are whittled away due to contextual clues? Are parses thrown out as they’re generated, or by comparing all grammatical parses after generation?

1) How many?  MANY!!!! Especially if several words have many parts of speech.  Combination explosions do occur, but so far the engine deals with them - it is incredibly efficient.  Basically anything that -can- be compared, is compared.
2) All grammar parses stay in existence for the entire processing.  When Grace sees that, for example, out of 20,000 parses, she “likes” say 20 of them much more than the rest (they all have about the same semantic value), she tries processing all 20, and then considers, for each of those 20, how much information they have in common with the knowledge base (and later, how much they relate to the current conversation state).

Example: say input comes in and generates 10 trees, and say P1 and P2 have some semantic value > 0 while P3–P10 have zero.  Well, Grace then tries P1, and say P1 has 3 correlations with the previous KB (right now a “temporary directory”), but P2 has 5 correlations with previous facts.  Well, then P2 is chosen.

In other words, sometimes Grace actually never makes up her mind which parse was correct! ... well, never makes up her mind *DIRECTLY*; she only indirectly concludes which parse was best.  P1 and P2 were both equally valid, so she assumes both are correct (grammatically and semantically), and goes with whichever results in more things in common with the KB.  Later, conversation state will be a factor as well.  Right now there are 4 variables in Grace’s “confidence formula” (CF):

a) how many items are in common between the last user input and the best matching fact (so how many answers a given fact-tree provided to the question)
b) how many conditions there are (extra and/or missing modifiers between the question and the closest fact)
c) how many credible correlations there are inside the fact that was used (so how much sense a given fact-tree makes)
d) how many credible correlations there are inside the question parse tree (how much sense a given question-tree makes)

For some variables of the CF, the higher the better; for others, the lower the better.

For example, Grace wants to pick the response that has the LOWEST value for b).

Example, if she has 2 responses…

R1—yes, if you ignore X and Y
R2—yes, if you ignore X

well, R1 has 2 conditions, R2 has only 1 condition.  So she will like R2 more.

And thus, whatever parse tree of your input caused R2 is indirectly deduced to be ‘the best one’, or the one the user really meant.
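As a toy illustration of that ranking (the weights below are entirely made up; the real CF is Victor's own), here is a linear combination where the condition count b) pulls confidence down while a), c), d) push it up:

```python
# Toy confidence formula (weights entirely made up for illustration):
# a), c), d) push confidence up; the condition count b) pulls it down.

def confidence(answers, conditions, fact_cred, question_cred):
    """a) answers provided, b) conditions attached,
    c) fact-tree credibility, d) question-tree credibility."""
    return 2.0 * answers - 1.0 * conditions + 1.0 * fact_cred + 1.0 * question_cred

# R1 - yes, if you ignore X and Y  (2 conditions)
# R2 - yes, if you ignore X        (1 condition)
r1 = confidence(answers=1, conditions=2, fact_cred=1, question_cred=1)
r2 = confidence(answers=1, conditions=1, fact_cred=1, question_cred=1)
# with everything else equal, the response with fewer conditions wins
```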

For c) and d), think of my previous posts involving the elephant in pajamas.  Grace actually consults a (small but growing) db that says if “X is in Y”, and X is a person, and Y is clothing, then we have “credibility”.  So for a sentence where some noun N, which is a person, is in Y, and Y is a piece of clothing, Grace will “promote” that parse tree over others.

Basically, the idea is that when Grace is stumped, she goes looking for even more context.  And as I do this, the confidence formula will grow more and more complex.
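A miniature version of such a credibility lookup might look like this (all class names and patterns here are invented for illustration): a small table of (subject-class, relation, object-class) triples that count as "making sense", used to decide which attachment of "in his pajamas" to promote.

```python
# Toy credibility lookup (class names and patterns invented): a pattern
# table of (subject-class, relation, object-class) triples that "make
# sense", used to promote one attachment of "in his pajamas" over another.

CLASSES = {
    "man": "person", "woman": "person",
    "elephant": "animal",
    "pajamas": "clothing", "suit": "clothing",
}

CREDIBLE = {("person", "in", "clothing")}

def credible(subject, relation, obj):
    """Does 'subject relation obj' match a known-sensible pattern?"""
    return (CLASSES.get(subject), relation, CLASSES.get(obj)) in CREDIBLE
```

So the parse where the man is in the pajamas gets promoted over the one where the elephant is.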

C R Hunt - Jan 29, 2012:

Another question: How is Grace searching her knowledge base? How scalable is your method? Does she weight her searches by how recently the knowledge was acquired?

Not overly concerned with scalability right now.  The major focus is NLU - taking input, dealing with combination explosions and ambiguity, and determining which parse trees make more sense than others, factoring in things like common sense, conversation state, previous facts, etc.  The question answering right now is just a proof of concept (and a great testing tool) to evaluate her comprehension before moving on to the next level smile  For example, NLI (natural language inference), probably late next year.

 

 

 
  [ # 68 ]
Victor Shulist - Jan 29, 2012:

<user says>

Jack went to his closet and took out his new suit because he was going to a dance given by his great company

<GRACE says>

Yes, I understand.  I have stored that information in ‘temp-sqa-test’

<user says>

Did Jack take out his big old dull suit

<GRACE says>

Yes, with conditions.  The conditions are that IF:

you ignore [*DULL* suit]
you ignore [*BIG* suit]
you mean [*NEW* suit]
you ignore [*OLD* suit]

 

Victor, in your example, the original input described a new suit, whereas the test question asked about an old suit. To my way of thinking (and feel free to disregard this), GRACE should have answered “NO”, and with an explanation of why “she” answered negatively. For that, GRACE would have to understand that, in context, both new and old applied to the suit, and that new is the opposite of old, and therefore if Jack pulled out only one suit, and it was his new suit, then he could not have pulled out his old suit.

Granted, there was no mention of the quantity of suits pulled out, other than referring to it in the singular, but that should be enough to limit it to a single suit. Yes? no? Shut up, Dave? cheese

 

 
  [ # 69 ]

Dave, I imagine that’s the next step. And a part of the “inference” Victor referred to. Before you can decide that two characteristics are incompatible, you have to be able to isolate from the question and statement what those characteristics are in the first place.

 

 
  [ # 70 ]

Dave & CR : Great comments.

Dave, absolutely, that is *one* option.  However, as CR pointed out, to even be in a position to do that, Grace must first ascertain what the delta between the question and the fact is.

CR: yes, that will feed into the yet higher levels of reasoning.

So, what Grace does with her deduced information of delta of modifiers, new versus old, is yet to be determined.

And yes, I *do* like the idea of her checking whether X is the opposite of Y, and if the question has X and the fact has Y, wording the response a bit differently. But again, as CR pointed out, none of that is possible unless Grace can understand the fact and question well enough to even determine those differences.  One step at a time my friend.

I guess a really easy thing to do for now, would be to change Grace’s first portion of her response to…

“Well, no, not exactly, but my answer would be YES, if…......”

<conditions>


Or, as you say, later, when I have her detect opposites, the wording could be even more fitting.

But still, her original response is not incorrect…

you mean [*NEW* suit]
you ignore [*OLD* suit]

smile
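A toy sketch of that opposite-aware wording (hypothetical antonym table, not GRACE's code): once the modifier delta has been isolated, one lookup is enough to choose the softer opener.

```python
# Toy opposite-aware wording (hypothetical antonym table): if the
# question's modifier directly contradicts the fact's modifier, soften
# the opener instead of leading with a flat "Yes, with conditions".

ANTONYMS = {("new", "old"), ("old", "new"), ("big", "small"), ("small", "big")}

def preface(question_mod, fact_mod):
    if (question_mod, fact_mod) in ANTONYMS:
        return "Well, no, not exactly, but my answer would be YES, if..."
    return "Yes, with conditions.  The conditions are that IF:"
```

Note this presupposes exactly what CR said: the modifier pair has to be isolated first before any antonym check can run.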

 

 
  [ # 71 ]

Victor,
Is Grace at the state where she can answer the question:

What gender is Jack?

 

 
  [ # 72 ]
Victor Shulist - Jan 29, 2012:

Been a while since an update… so, I’ll blow the dust off this thread….


As I said, sorry for the same dull (and kind of weird) examples lol... my next posts will talk about other concepts…  (does anybody have a specific subject perhaps? smile )

I am currently updating time/date oriented discussions. Would you like to go down that path?

 

 
  [ # 73 ]
Merlin - Jan 30, 2012:

Victor,
Is Grace at the state where she can answer the question:
What gender is Jack?

Not at the moment, but I like that.  I will probably work on that next.  Well, this coming weekend I still have to complete Grace’s imagination module first.  (That is where she brainstorms the plausibility of different parses, by consulting the KB to discover what roles different nouns can play, and which verbs they normally perform, in order to promote certain parses.)

I’m actually following an English grammar course, and training Grace as I go through each lesson.  So my plan is: have Grace learn all grammar (and world knowledge for each of a set of example sentences for each grammar construct), then work on the logic that allows her to answer questions.

She will learn about simple sentences, complex sentences, compound sentences, predicates, adverbial phrases, prepositional phrases, gerunds, infinitive phrases, noun clauses, etc.  If you look back on my previous posts in the history of this thread, you’ll see I promised to have that done a while ago… The delay? Well, re-writing the engine from Perl into C++, where the grammar information now has an utterly different notation! Hence having to re-code the grammar knowledge.  Actually, Grace’s parse tree generation routines are in C++, but where she takes the question (after it is ‘digested’) and compares it with previous parses (‘facts’), that is right now done in Perl.  I always experiment in Perl… then when I really know the algorithm is the one I want, and it passes the ‘proof of concept’, I convert to C++.

However, I like to mix it up a bit if I get tired of grammar work and need to take a break from it.  So I think probably this weekend, or next at the latest, I will work on your example smile

Now Grace does have a set of generic routines that allow her to answer “Did” questions.  That’s why all the posts were did-questions.  Also, right now she knows how to deal with did-questions where the facts are predicates with “verb noun” or “verb prep-phrase”. 

However, it can be much more than just ‘verb noun’: there can be a variable number of predicates, with a variable number of verbs in each predicate, a variable number of adverbs or auxiliary verbs modifying those verbs, and the prep phrase in “verb prep-phrase” can actually have an object which, in turn, is modified by another phrase… to pretty much any arbitrary depth smile
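One toy way to represent and walk that arbitrary nesting (an invented structure, not GRACE's internal format): a predicate whose prep phrase has an object, and that object may itself carry another prep phrase, and so on.

```python
# Toy nested-predicate representation (invented, not GRACE's format):
# a predicate's prep phrase has an object, and that object may itself
# carry another prep phrase, to any depth. The walk below flattens the
# chain of noun objects.

# "went to a dance given by his company" as a nested structure
tree = {
    "verb": "went",
    "pp": {"prep": "to",
           "obj": {"noun": "dance",
                   "pp": {"prep": "by",
                          "obj": {"noun": "company", "pp": None}}}},
}

def objects(node):
    """Collect the noun object at every depth of nested prep phrases."""
    found = []
    pp = node.get("pp")
    while pp:
        obj = pp["obj"]
        found.append(obj["noun"])
        pp = obj.get("pp")
    return found
```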

So like I say, for the most part I’ll probably go with the “breadth first” approach, that is, get her up to speed on a huge grammatical mastery first (with world knowledge interleaved), then educate her on some generic Q/A abilities.  But I will work on your sample soon!

Merlin - Jan 30, 2012:

I am currently updating time/date oriented discussions. Would you like to go down that path?

aw… even more “thrilling” lol

 

 
  [ # 74 ]

General Language Intelligence - G.R.A.C.E.’s engine.

Image Attachments
GLI-Logo-For-Chatbots.png
 

 
  [ # 75 ]

That is a cool logo Victor!

 
