ALEX: Artificial Learning by Erudition eXperiment
 
 

I’ve been meaning to start a thread on my project for a while. Exciting to finally start it!

ALEX stands for Artificial Learning by Erudition eXperiment, and as the name implies, the major focus is learning via natural language. “Chatting” is a later goal. Most of my work is focused on how to design a program that parses/organizes natural language input and uses the organized input to improve its NLP capabilities.

The big goals for ALEX are divided into stages as follows:

Stage 0: A bot that can turn English input into a structured knowledge base. The bot should be able to use this knowledge base to deduce new facts from known facts (simple logic), and new parsing rules from examples of correct parses.

Stage 1: A bot that can read simple English wikipedia entries and construct summaries of the articles. This requires a contextual understanding of an article’s subject in order to deduce which pieces of information are the most significant, which sentences/paragraphs represent a generalization of an idea and which represent specific instances of that generalization, etc.

Stage 2: Use NL input to organize knowledge base facts into “stories” that incorporate temporal and spatial information to organize the facts. The “story” may be no more interesting than “how to pour a glass of milk”, but it will serve to add more context to NL input.

Stage 3: Develop an NL interface for querying the knowledge base and stories. This is where chatting comes in.

Stage 1 is no small feat and I’m not much interested in the finer aspects of chatting, or even dealing with aberrant grammar/spelling, etc., until I’ve accomplished this to some degree.

I'm in the process of overhauling a few parts of the parser and interface, so there won't be sample i/o until April. (Or at least, the i/o won't be in complete "input -> final output" form.) In the meanwhile, here's a play-by-play of how the parser works. Remember that it is designed to convert NL input into a factual knowledge base.

Step 1: Parts of speech (POS) tagging.

I use the NLTK implementation of WordNet to tag verbs, adverbs, and adjectives. ALEX has built-in word lists to handle articles, prepositions, and conjunctions. Proper nouns and pronouns are handled, but most nouns are assigned by a process of elimination. (All verbs, adverbs, and adjectives are also potential nouns.)

I do not distinguish between types of nouns (proper nouns, pronouns, etc.) nor types of verbs at this stage. All possessive determiners (her, my, their, etc.) are tagged as articles because they behave that way.

I also keep a cache of sentences with their associated POS tags so that commonly encountered sentences can skip this step.
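
To make step 1 concrete, here's a minimal sketch of the idea, assuming NLTK with its WordNet corpus installed. The word lists, names, and defaults are illustrative stand-ins (the real built-in lists also cover modals, more pronouns, etc.), and the sentence cache is omitted:

[code]
from itertools import product
from nltk.corpus import wordnet as wn

# Toy built-in word lists; possessive determiners are treated as articles.
ARTICLES = {"a", "an", "the", "my", "your", "her", "his", "their", "our"}
PREPOSITIONS = {"to", "on", "for", "in", "at", "with", "by", "of"}
CONJUNCTIONS = {"and", "or", "but"}
PRONOUNS = {"i", "you", "he", "she", "it", "we", "they", "her", "him", "them"}

def candidate_tags(word):
    """Every POS tag a word could take."""
    word = word.lower()
    tags = set()
    if word in ARTICLES:
        tags.add("article")
    if word in PREPOSITIONS:
        tags.add("preposition")
    if word in CONJUNCTIONS:
        tags.add("conjunction")
    if word in PRONOUNS:
        tags.add("noun")
    for synset in wn.synsets(word):
        pos = synset.pos()  # 'n', 'v', 'a'/'s' (adjective), 'r' (adverb)
        if pos == "v":
            tags.update({"verb", "noun"})       # every verb is a potential noun
        elif pos in ("a", "s"):
            tags.update({"adjective", "noun"})  # ditto adjectives...
        elif pos == "r":
            tags.update({"adverb", "noun"})     # ...and adverbs
        elif pos == "n":
            tags.add("noun")
    if not tags:
        tags.add("noun")  # unknown words become nouns by elimination
    return tags

def generate_grams(words):
    """One gram per combination of per-word candidates; this is why
    even a short sentence can produce dozens of grams."""
    return list(product(*(candidate_tags(w) for w in words)))
[/code]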

Step 2: Naive POS discrimination.

Step 1 can leave tens to hundreds of tagging combinations (I call them "grams", not quite grammars :) ), depending on the complexity of the sentence. Some of these can be eliminated because they violate simple grammar rules. For example, "her" is tagged as both an article and a noun. All articles must be followed by nouns. Any case of a "dangling" article is removed from the list of possible grams.

The POS discriminator ranks surviving grams using a database of correct grams. Whenever ALEX learns that it has parsed a sentence correctly, the gram for the sentence gets chopped up into gram chunks three POS long and longer and stored in the database. New grams get ranked based on how many of these chunks they contain. (More "points" for longer gram chunks.)

It’s a naive system to be sure, but the correct gram is consistently ranked in the top 10, and generally in the top 5. I’m always trying to think of little additions that will improve the ranking scheme.
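
Here's a rough sketch of both halves of step 2, assuming grams are tuples of POS strings and the chunk database is a simple set. The names and scoring weights are illustrative, not ALEX's actual code:

[code]
def violates_article_rule(gram):
    """Example grammar filter: an article must be followed, allowing
    intervening adjectives, by a noun; otherwise it is "dangling"."""
    for i, tag in enumerate(gram):
        if tag == "article":
            j = i + 1
            while j < len(gram) and gram[j] == "adjective":
                j += 1
            if j == len(gram) or gram[j] != "noun":
                return True
    return False

def chunks(gram, min_len=3):
    """All contiguous POS runs of length >= min_len."""
    for size in range(min_len, len(gram) + 1):
        for start in range(len(gram) - size + 1):
            yield tuple(gram[start:start + size])

def remember_correct(gram, chunk_db):
    """When a parse is confirmed correct, bank its chunks."""
    chunk_db.update(chunks(gram))

def rank_grams(grams, chunk_db):
    """Drop rule violators, then score by matching chunks; longer
    chunks earn more points (here, simply their length)."""
    def score(gram):
        return sum(len(c) for c in chunks(gram) if c in chunk_db)
    survivors = [g for g in grams if not violates_article_rule(g)]
    return sorted(survivors, key=score, reverse=True)
[/code]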

Step 3: Complex sentence splitting.

I absolutely loathe conjunctions. I may build methods for handling them directly in the future, but for now, ALEX has a fairly sophisticated pattern-matching scheme that maps any sentence containing conjunctions (what I call a "complex sentence") onto one or more simple sentences without conjunctions. Special characters are used to indicate the relationship that the simple sentences have with each other. I'll go more into this later.

Based on examples, ALEX builds rules (generalized maps) for turning any sentence with a similar gram into a set of simple sentences. Only simple sentences will be parsed correctly in subsequent steps.
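
As a toy illustration of what a learned map might look like, here's a tiny hand-written rule table. The &-prefixed relationship token follows the conjunction tags I mention later in this thread (&AND, &OR), but the rule format itself is a simplification:

[code]
# Each rule maps a full gram pattern to the word spans of its simple sentences.
RULES = {
    ("noun", "verb", "conjunction", "noun", "verb"): [(0, 2), (3, 5)],
    ("article", "noun", "verb", "conjunction", "article", "noun", "verb"):
        [(0, 3), (4, 7)],
}

def split_complex(words, gram):
    """Map a complex sentence onto simple sentences plus a relationship token."""
    spans = RULES.get(tuple(gram))
    if spans is None:
        return None  # no generalized map learned for this gram yet
    conj = words[gram.index("conjunction")]
    simple = [" ".join(words[a:b]) for a, b in spans]
    return {"relation": "&" + conj.upper(), "sentences": simple}

# {'relation': '&AND', 'sentences': ['The dog barked', 'the cat ran']}
print(split_complex(
    ["The", "dog", "barked", "and", "the", "cat", "ran"],
    ["article", "noun", "verb", "conjunction", "article", "noun", "verb"],
))
[/code]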

Step 4: Phrase culling.

This set of tools isolates clauses and phrases from the main sentence. Every clause and phrase is broken out into a separate sentence, even prepositional phrases (which get the dummy verb "to be"). Any sentence can be a "condition" of another sentence, using what I collectively call "joining words" to describe the relationship to the parent sentence. Joining words can be conjunctions, prepositions, subordinating conjunctions, etc.

Participial phrases are currently assigned the joining word “while” to indicate the timing with reference to the sentence they modify (“parent sentence”), though this might change. Some types of phrases have no joining word, in which case I assign “and”. This might become more sophisticated as needed, but it works well for now.

Sometimes it's a trick to attribute the correct subject to dependent clauses, which cannot function independently, in order to turn them into proper sentences. This is where previous experience comes into play, and multiple possibilities are ranked by probability accordingly.
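
A bare-bones sketch of how culled phrases could hang off their parent as (joining word, sentence) conditions. The class names and the exact "to be" expansion of the prepositional phrase are my shorthand for illustration, not the real data structures:

[code]
from dataclasses import dataclass, field

@dataclass
class SimpleSentence:
    text: str
    conditions: list = field(default_factory=list)  # (joining word, child) pairs

    def attach(self, joining_word, text):
        child = SimpleSentence(text)
        self.conditions.append((joining_word, child))
        return child

# "Walking home, the cat sat on the mat."
main = SimpleSentence("the cat sat")
main.attach("while", "the cat walked home")  # participial phrase -> "while"
main.attach("on", "the mat is")              # prepositional phrase, dummy "to be"
[/code]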

Step 5: Phrase attribution.

All phrases modify some other phrase or the main sentence. (See Victor’s infamous elephant in pajamas example.) Using examples already structured in the knowledge base, the bot will assign all phrases as “conditions” of the appropriate parent sentence. Even the direct object (DO) and indirect object (IDO) of the sentence will be stored in the knowledge base as a separate sentence (with the dummy verb “to be”). Thus the DO sentence can also be the parent of a phrase.

This part is currently being written and I’ll discuss it in greater detail once I’ve fleshed out my algorithms more.

Step 6: Parse tree formation.

Based on simple grammar rules, each sentence is organized into a Python dictionary that contains the following:

1) Subject
2) Main Verb
3) Verb Phrase (gerunds/infinitives)
4) Adverbs (modifying the verb/verb phrase)
5) Adjectives (includes adverbs that modify the adjectives)
6) Direct Object (and dependents, see below)
7) Indirect Object (and dependents, see below)
8) Conditions

Condition lists contain members with two elements: the joining word, indicating how the condition is related to the parent sentence, and an ID (special token) identifying which tree is the condition.

Each direct and indirect object is the subject of its own parse tree. In other parse trees, they are referenced by their IDs. Each parse tree contains:

9) Its own unique ID
10) A list of IDs that reference this ID
11) The probability that the tree is true.

The probability is determined by the rank of the gram that formed it, how many grammar rules were used to create it, and how well the structured data the tree contains agrees with existing members of the knowledge base, weighted by the probability that those members are true. I'm still refining this part of the algorithm. Expect to hear more on this later.
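
Putting it all together, the trees for a sentence like "The cat chased a mouse in the garden." might look roughly like this. The key names, ID format, and probabilities are invented for illustration:

[code]
trees = {
    "T001": {
        "subject": "cat",
        "main_verb": "chase",
        "verb_phrase": None,
        "adverbs": [],
        "adjectives": [],
        "direct_object": "T002",         # objects live in their own trees
        "indirect_object": None,
        "conditions": [("in", "T003")],  # (joining word, condition tree ID)
        "id": "T001",
        "referenced_by": [],
        "probability": 0.85,
    },
    # The DO and the prepositional phrase each get a dummy "to be" tree
    # (abbreviated here):
    "T002": {"subject": "mouse", "main_verb": "be", "id": "T002",
             "referenced_by": ["T001"], "probability": 0.85},
    "T003": {"subject": "garden", "main_verb": "be", "id": "T003",
             "referenced_by": ["T001"], "probability": 0.85},
}
[/code]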

While all of the above steps exist in some form or another, I'm doing a heavy round of revision right now and expect to change things around a bit. Until later in March, I won't have much time to dedicate to this, alas. The plan for this spring/summer is to:

1) Finish the latest round of edits
2) Improve the parser to the point that it can handle complex sentences with a 75% “first try” success rate (lots of training)
3) Work on the algorithms of logic that act on the knowledge base in order to derive new facts from already organized facts (parse trees).

I'm thinking these logic processes will eventually be run while ALEX is not actively being engaged (i.e., not reading the NL lessons I've written), sort of like how our brain processes our experiences while we sleep.
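
To give a flavor of what such an offline pass could do, here's a toy example (not ALEX's actual logic engine) of transitive inference over "is a" facts, with each derived fact's probability discounted by the probabilities of its parents:

[code]
def infer_transitive(facts):
    """facts: {(subject, "is_a", object): probability}.
    Derive A is_a C from A is_a B and B is_a C."""
    derived = {}
    for (a, rel1, b), p1 in facts.items():
        for (b2, rel2, c), p2 in facts.items():
            if rel1 == rel2 == "is_a" and b == b2 and a != c:
                new_fact = (a, "is_a", c)
                if new_fact not in facts:
                    derived[new_fact] = max(derived.get(new_fact, 0.0), p1 * p2)
    return derived

kb = {("cat", "is_a", "mammal"): 0.95, ("mammal", "is_a", "animal"): 0.99}
print(infer_transitive(kb))  # {('cat', 'is_a', 'animal'): 0.9405} (float rounding aside)
[/code]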

 

 
  [ # 1 ]

Any demo of this?
http://www.chatbots.org/ai_zone/viewthread/373/

 

 
  [ # 2 ]

Sure thing. I'll put together some i/o examples while I'm trapped on a train Friday and post Friday evening. :) Some might have to be from an earlier incarnation since I'm tearing the phrase handler apart at the moment. I mentioned somewhere in one of these threads how it works amazingly well except for the few sentence constructions where it fails miserably. (The problem has to do with how past participles are handled when choosing to construct participle phrases.) Perhaps I should be more focused on other things, but it's bugging the heck out of me and I've got some ideas about how to fix it.

In all the steps of the parser listed above, what I discuss is already implemented unless I explicitly state I’m currently working on it. Areas “under construction”:

- phrase attribution (step 5)
- the phrase culling thing I mentioned above (past participle issue)
- choosing how to weigh the “truth” probability of a parse tree (under refinement)

 

 
  [ # 3 ]

ALEX stands for Artificial Learning by Erudition eXperiment

Hi CR,
I would like to commend you on creating a cool name for your bot.  That’s probably the most important stage of building a chat bot. =)

I’m glad you’re taking the time to start a thread.  It’ll be interesting to follow the progress.  How long have you been working on this? It’s been almost 2 years…yes?

Regards,
Chuck

 

 
  [ # 4 ]

Hi Chuck! Haven't seen you around lately; good to have you back. :)

Yup, once the name's there, the rest just follows, eh? I came up with the name at least 3 years ago. Back then I was just playing around with POS tagging and trying to develop ways for a bot to build its own dictionary. I got "serious", so to speak, about 2 years ago, yes. But it's been slow going; PhD research is time-consuming and brain-consuming. One must find that golden combination of time + energy. And now with my preliminary exam and an important conference fast approaching, I'll have even less time until April! Phew…

How about yourself? How’s Walter these days?

 

 
  [ # 5 ]

Here’s an example of the naive POS filtering at work:

EDIT: the tokens my parser uses are conflicting with something on the forum. My entries are getting cut off when I enter “% a d”, minus the spaces. Any idea what this is??

 

 
  [ # 6 ]

Okay, take two. I've replaced all instances of the problematic string with: "% a d ". Otherwise the output is as given by ALEX.

I wanted to show a quick example illustrating the power of the “naive POS discriminator.”

Sentence to parse:  I would like to commend you on creating a cool name for your bot .
Time taken to generate grams: approx. 1.7 seconds
Number of grams generated: 72


Ranked grams after gram filtering:
Sentence to parse:  I would like to commend you on creating a cool name for your bot .
Time taken to generate grams: approx. 0.02 seconds
Number of grams generated: 4
Parse option # 0 :  [I = %%n] [would = %%v] [like = %%v] [to = %%p] [commend = %%v] [you = %%n] [on = %%p] [creating = %%v] [a = %%a] [cool = %% a d j] [name = %%n] [for = %%p] [your = %%a] [bot = %%n] [. = %%pun]
Parse option # 1 :  [I = %%n] [would = %%v] [like = %%v] [to = %%p] [commend = %%v] [you = %%n] [on = %%p] [creating = %%v] [a = %%a] [cool = %%n] [name = %%v] [for = %%p] [your = %%a] [bot = %%n] [. = %%pun]
Parse option # 2 :  [I = %%n] [would = %%v] [like = %%v] [to = %%p] [commend = %%v] [you = %%n] [on = %%p] [creating = %%n] [a = %%a] [cool = %%n] [name = %%v] [for = %%p] [your = %%a] [bot = %%n] [. = %%pun]
Parse option # 3 :  [I = %%n] [would = %%v] [like = %%v] [to = %%p] [commend = %%v] [you = %%n] [on = %%p] [creating = %%n] [a = %%a] [cool = %% a d j] [name = %%n] [for = %%p] [your = %%a] [bot = %%n] [. = %%pun]

Step 1 generated a whopping 72 grams and the naive discriminator brought it down to just 4. Note that the first gram is the correct one. The tagging tokens are:

%%n = noun
%%v = verb
%% a d j = adjective
%% a d v = adverb
%%a = article
%%c = conjunction
%%p = preposition
%%pun = punctuation

 

 
  [ # 7 ]

Sounds like an interesting approach. Very similar to what I'm doing in the first stages. I also couldn't help but think how much easier something like that is to implement using my network designer compared to a traditional programming language.

 

 
  [ # 8 ]

Do you have a post where you talk about how your “network designer” works?

 

 
  [ # 9 ]

Check my homepage; there's lots of info over there: http://janbogaerts.name

 

 
 

 
  [ # 11 ]
C R Hunt - Feb 18, 2011:

These simple sentences are grouped together with special tags that indicate the hierarchy of how they relate to each other

I'm not working from a grammatical point of view (for now), but you are (as you stated elsewhere); still, this model comes pretty close to parts of my model: what you call 'simple sentences' are in my model called 'concepts' (as they describe a very basic concept). The tags are obviously important in my model and indeed describe 'relations' (among other things). Only, in my model the tags themselves are loaded with additional descriptions; that's why I call them 'symbolic tags', as they add symbolic value to the 'link' that is being described by that tag.

So there are certain parallels between your approach and my model :)

 

 
  [ # 12 ]
Hans Peter Willems - Feb 18, 2011:

...what you call ‘simple sentences’ are in my model called ‘concepts’ (as they describe a very basic concept).

Hans, THAT'S the missing piece of the puzzle! When you were referring to "concepts" earlier, I took the meaning of the word to be its normal interpretation. NOW it all makes sense! YAY!!! :D

C R Hunt - Feb 18, 2011:

Note: Something is causing a semi-colon to be added to the end of each conjunction tag. It’s not there in the reply box. Not sure what’s going on with that. Ghosts in the (forum) machine??

CR, that’s probably due to the forum script (mistakenly) thinking that the ampersand/word combination is a poorly formed HTML entity. Unfortunately, there’s no way to “fix” it, if you want to use ampersands within the [ code ] tags. ...Unless you try to use (&)amp;AND, maybe? (no parentheses) - The result would look like this:

&AND - &OR 

Nope… It works outside the [ code ] tags, however:

&AND - &OR

Well, I tried. :)

 

 
  [ # 13 ]

I’m interested to see how you decide to chunk text and assign tags. Your emotional model sounds similar to what I plan to implement when I incorporate my EMO program into ALEX. (I have mentioned EMO here and here.) From the first link:

With all this discussion about instinct vs learned behaviors, I was reminded of a project I’ve been letting languish. (I think I’ve mentioned it on this site before, but not in great detail.) It’s a little program that maps emotions onto words and vice versa, starting with 7 basic emotions to define my “emotion space”. All other “higher” emotions are some linear combination of these emotions (a vector in emotional space, if you will). I got the idea from some psychology mumbo jumbo asserting some emotions to be instinctual and others socially learned. I don’t really buy it, but the concept is useful.

My bot (I call him EMO for Emotion MOdule :) ) can only accept restricted input of the forms "* is *", "emotion * is *", or "*". The first two prompt the bot to learn either a new word or a new emotion, respectively. The words and emotions listed to the right of the "is" are used to deduce the emotional vector of the new word. For example,

“sunshine is warm sunny yellow wonderful happy daytime”

would prompt the bot to sum up the known emotional vectors to define a new emotional vector for sunshine. If that emotional vector is (roughly) associated with a named emotion, it would then express that it feels that emotion when reminded of sunshine.

Of course, one can quickly see the limitations of such a scheme. “Sunshine is hot sunburn cancer drought,” as well. Context is key. Eventually I’m planning to expand EMO to associate larger chunks of text with emotions and integrate it with my main chatbot project.
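
For reference, the core of the scheme boils down to something like this sketch. The seven basis labels, the toy vectors, and the cosine matching are placeholders rather than EMO's actual values:

[code]
import math

BASIS = ["joy", "sadness", "anger", "fear", "surprise", "disgust", "trust"]

# Known word vectors in emotion space (toy values).
lexicon = {
    "warm":  [0.6, 0.0, 0.0, 0.0, 0.0, 0.0, 0.4],
    "sunny": [0.8, 0.0, 0.0, 0.0, 0.2, 0.0, 0.0],
    "happy": [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
}

named_emotions = {"happiness": [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]}

def learn_word(word, definition_words):
    """'<word> is <w1> <w2> ...' sums the known vectors into a new one."""
    vec = [0.0] * len(BASIS)
    for w in definition_words:
        if w in lexicon:
            vec = [a + b for a, b in zip(vec, lexicon[w])]
    lexicon[word] = vec
    return vec

def nearest_emotion(vec):
    """Match a vector to the closest named emotion by cosine similarity."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0
    return max(named_emotions, key=lambda name: cos(vec, named_emotions[name]))

# Unknown words ("yellow", "wonderful") are simply skipped in this sketch.
vec = learn_word("sunshine", ["warm", "sunny", "yellow", "wonderful", "happy"])
print(nearest_emotion(vec))  # happiness
[/code]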

 

 
  [ # 14 ]
Dave Morton - Feb 18, 2011:
Hans Peter Willems - Feb 18, 2011:

...what you call ‘simple sentences’ are in my model called ‘concepts’ (as they describe a very basic concept).

Hans, THAT'S the missing piece of the puzzle! When you were referring to "concepts" earlier, I took the meaning of the word to be its normal interpretation. NOW it all makes sense! YAY!!! :D

It is for me too. It was never clear to me what a concept actually was in your model, Hans. I guessed it was a class type with attributes that you hadn’t yet decided on.

Dave Morton - Feb 18, 2011:

CR, that’s probably due to the forum script (mistakenly) thinking that the ampersand/word combination is a poorly formed HTML entity. Unfortunately, there’s no way to “fix” it, if you want to use ampersands within the [ code ] tags.

Ah, so that's the problem. The script is trying to be smart. ;) I guess the "% a d" problem is something similar, although that one is completely cutting off my posts.

 

 
  [ # 15 ]
C R Hunt - Feb 18, 2011:

I’m interested to see how you decide to chunk text and assign tags.

You and me both :) ... This is one of the major hurdles for me to take at the moment: formulating how this process is going to take shape. But I'm getting there.

One problem I'm facing is 'feature creep': as the definitions take shape, they suggest new 'features' to implement in the model. And as the model is aimed at describing what is needed for a sentient AI, I have no clue (yet) how far I need to go to get there.

Dave Morton - Feb 18, 2011:

Hans, THAT'S the missing piece of the puzzle! When you were referring to "concepts" earlier, I took the meaning of the word to be its normal interpretation. NOW it all makes sense! YAY!!! :D

I'm glad it made some things clear. I hope to post a more specific description of the model I have so far in my dedicated topic soon. However, I'm not sure how soon that will be, as I have been asked by an academic researcher to work together on this project (= me very excited). So at this moment I have no idea where this is leading me.

 
