
The anatomy of a conversation

This thread is to discuss the functions and sources of information a chatbot would need to participate in an intelligent, "stateful" conversation.

The main functions:

1) If the input is a statement, determine whether it is suitable for evaluation in a logical true/false sense; if so, evaluate it and produce a response.  For example, "All birds can fly" is false.

2) If the statement is one the system has no way to evaluate from a true/false perspective (for example, "I have a headache"), then provide some useful, relevant information (for example, "Have you tried taking a painkiller?").

3) If the input is a question, try to determine the answer from the knowledge base.

4) If the input is neither a question nor a statement, try to determine whether it is a response to a question the system asked.  For example, if the user entered "Bob" and we had just asked "What is your name?", then update the "state" so that the user's name is Bob.
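The four functions above can be sketched as a single dispatch routine. This is only an illustrative sketch: the `KnowledgeBase` and `State` classes, their methods, and the stock replies are all hypothetical stand-ins, not a real API.

```python
# Minimal sketch of the four-way input dispatch described above.
# KnowledgeBase and State are hypothetical stand-ins, not a real system.

class KnowledgeBase:
    def __init__(self, facts):
        self.facts = facts  # proposition -> believed True/False

    def evaluate(self, text):
        """Return True/False if the proposition is known, else None."""
        return self.facts.get(text.lower().rstrip("."))

    def answer(self, question):
        return "I don't know yet."   # placeholder KB lookup


class State:
    def __init__(self):
        self.pending_question = None  # e.g. "name" after asking "What is your name?"
        self.user = {}


def respond(text, state, kb):
    if text.endswith("?"):                       # 3) a question: consult the KB
        return kb.answer(text)
    if state.pending_question == "name":         # 4) a reply to our own question
        state.user["name"] = text
        state.pending_question = None
        return f"Nice to meet you, {text}."
    verdict = kb.evaluate(text)                  # 1) a statement we can evaluate
    if verdict is not None:
        return "That is true." if verdict else "That is false."
    return "Tell me more about that."            # 2) unevaluable: stay relevant


kb = KnowledgeBase({"all birds can fly": False})
state = State()
print(respond("All birds can fly.", state, kb))  # -> "That is false."
state.pending_question = "name"
print(respond("Bob", state, kb))                 # -> "Nice to meet you, Bob."
```

A real router would of course need proper question detection and parsing; the point here is only that the four cases form one decision chain.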

The sources of information a chatbot needs:

1) The user's latest input

2) a knowledge base

3) a 'state' of the conversation, which could be broken down into:

      state.context - stores the current topic of the conversation.  Perhaps when the program is first loaded into memory this would be set to "initial".

      state.objective - what we are currently trying to achieve; for example, we may be engaged in a debate with the user and trying to prove that we are correct about a particular point.

      state.user.mood - a value to indicate whether the user is happy, angry, sad, etc.

      state.history - history of lines of text entered by user and output from chatbot.
        ** An important point here is that the chatbot must first consult state.history before responding.  In the example above, “I have a headache”, the chatbot must not respond immediately suggesting pain medication if the conversation went as follows:
      user : I just took some pain relievers.
      chatbot : why ?
      user : I have a headache

      It would be silly if the chatbot responded with the pain-medication suggestion at that point!  Thus any intelligent chatbot *must* first exhaust all 'state' information before producing a response.

4) state of the 'world' - for example, current time, weather conditions, time of year, etc.
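The state breakdown above could take roughly the following shape. The field names mirror the post (context, objective, user mood, history); the dataclass layout, the `mentioned_recently` helper, and its window size are my own assumptions for the sketch.

```python
# A possible shape for the conversation state described above.
# Field names follow the post; the layout itself is an assumption.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class UserState:
    mood: str = "neutral"            # happy, angry, sad, ...
    name: Optional[str] = None


@dataclass
class ConversationState:
    context: str = "initial"         # current topic; "initial" at startup
    objective: Optional[str] = None  # e.g. "win the current debate"
    user: UserState = field(default_factory=UserState)
    history: List[str] = field(default_factory=list)  # alternating user/bot lines

    def mentioned_recently(self, phrase, window=5):
        """Consult history before responding, as the headache example requires."""
        return any(phrase in line for line in self.history[-window:])


state = ConversationState()
state.history += ["user: I just took some pain relievers.",
                  "bot: why ?",
                  "user: I have a headache"]
print(state.mentioned_recently("pain relievers"))  # -> True: don't suggest them again
```

Checking `mentioned_recently` before the pain-medication suggestion is exactly the "exhaust all state information first" rule from the headache example.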


  [ # 1 ]

I like how you broke it down in terms of the input type.  However, I think it's a mistake to make a distinction between the state of the world, the state of the conversation, and the knowledge base; these are all information about the world, and if they're not interoperable, the system is not flexible enough.


  [ # 2 ]

Thanks for your comments.  My reasoning for distinguishing knowledge-base statements from conversation statements has to do with beliefs.  When the 'administrator' wants to give it FACTS, those are things it will 'believe' to be true and simply insert into its knowledge base.

Propositions that come to it via conversation (when you do not have "administrator privileges" - for example, an anonymous user chatting with it online) will not simply be accepted; they will be evaluated against the facts it currently believes to be true, or evaluated in terms of logic, to see whether it can deduce that the user's proposition is false.

The other purpose is organizational.  From an object-oriented point of view, there could be objects with their methods and attributes which deal with 'external' things (the world), and others which deal with internal state (its current objective, etc.).  The separation would just make it easier to manage, update, and debug the code (which would undoubtedly grow quite large).  Also, methods of the 'world state' objects could interact with methods of the 'internal state' objects and 'knowledge base' objects.
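The fact-versus-proposition distinction above could look something like this in code: an administrator's statements go straight into the belief store, while an unprivileged user's propositions are only checked against what is already believed. The `BeliefStore` class, its `tell` method, and the reply strings are all illustrative assumptions, not the poster's actual design.

```python
# Sketch of the admin-fact vs. user-proposition distinction described above.
# Class, method, and reply names are hypothetical.

class BeliefStore:
    def __init__(self):
        self.facts = {}  # proposition -> believed truth value

    def tell(self, proposition, value, admin=False):
        if admin:
            self.facts[proposition] = value      # a FACT: believed outright
            return "Accepted as fact."
        believed = self.facts.get(proposition)
        if believed is None:
            return "I can't verify that yet."    # no basis to accept or reject
        if believed == value:
            return "That agrees with what I believe."
        return "That contradicts my current beliefs."


store = BeliefStore()
print(store.tell("all birds can fly", False, admin=True))  # -> "Accepted as fact."
print(store.tell("all birds can fly", True))               # anonymous user: rejected
```

A fuller version would also attempt the logical deduction the post mentions, rather than only a direct lookup.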


  [ # 3 ]

I would point out that true/false leads to a very limited form of logic. To a human, the proposition “birds can fly” is generally true, though we can think of exceptions. So perhaps it should have a truth value of 0.95 or thereabouts, if truth is measured on a scale of 0 to 1.

Some people think that these partial truth values align well with probabilities, so a truth value of 0.95 means the proposition is true with probability 0.95 (and untrue with probability 0.05).

However, rather than take a point value of probability, it’s possible to assign a probability distribution to the proposition. This is a Bayesian approach, and effectively says that we don’t know that the truth value is precisely 0.95. Our uncertainty is expressed in the form of a probability density function. Technically, 0.95 would be the mean of the distribution. If we were very certain about its value then the distribution would be sharply peaked near 0.95, but if we were uncertain, it would be more diffuse. The Beta distribution is an appropriate one to use, as it is supported on the interval [0, 1].
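To make the peaked-versus-diffuse contrast concrete: here are two Beta distributions with the same mean truth value of 0.95 but very different certainty, using the standard Beta mean and variance formulas. The particular parameter pairs are arbitrary examples.

```python
# Two Beta distributions with the same mean (0.95) but different certainty,
# using the standard formulas mean = a/(a+b), var = ab / ((a+b)^2 (a+b+1)).

def beta_mean(a, b):
    return a / (a + b)

def beta_var(a, b):
    return a * b / ((a + b) ** 2 * (a + b + 1))

# Confident: sharply peaked near 0.95
print(beta_mean(95, 5), beta_var(95, 5))      # mean 0.95, small variance
# Uncertain: same mean, far more diffuse
print(beta_mean(9.5, 0.5), beta_var(9.5, 0.5))  # mean 0.95, ~9x the variance
```

Both distributions say "probably true", but only the variance tells you how much evidence backs that judgment up.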

Another approach is fuzzy logic, but personally I think it is not as well founded as probability theory.

Having said all this, there’s a lot of evidence that people actually manipulate probabilities rather badly, so our brains are unlikely to be based on probability theory.



  [ # 4 ]


Your description of these ideas gets better every time I read you, as you're adding detail (like the Beta distribution) and refining the delivery.  I now lean more towards your approach than towards fuzzy logic.



  [ # 5 ]


For more detail on the Beta distribution approach, see this Wikipedia article and the references at the end of it. There are further papers available on Josang’s website.

Unfortunately the work described is mostly a one-man effort, so it hasn't had the benefit of criticism and contributions from other researchers (apart from his coauthors). I followed it up in some detail and I'm not convinced it's rigorous. In particular, there are some arbitrary assumptions which seem unjustified. The notation is also non-standard and hard to follow. Eventually I decided there were too many difficulties to make it worth pursuing, but maybe someone else can make more sense of it than I did.



  [ # 6 ]

Very good feedback, thank you.  Yes, I completely agree with you.  Classic propositional logic is much too rigid to be applicable in the real world.  I believe the system should be able to handle complete uncertainty and even simultaneous contradictory statements - it would figure out the resolution later on, as it learns more facts and determines how to "account for" these contradictions.
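One simple way to tolerate contradictions, as suggested above, is to record every report along with its source instead of collapsing to a single truth value, and defer resolution until more facts arrive. This is my own hypothetical sketch, not the poster's design.

```python
# Record conflicting claims per source and defer resolution,
# per the "account for contradictions later" idea above.

from collections import defaultdict

reports = defaultdict(list)          # proposition -> list of (source, claim)

def note(proposition, claim, source):
    reports[proposition].append((source, claim))

def current_view(proposition):
    claims = {c for _, c in reports[proposition]}
    if len(claims) == 1:
        return claims.pop()          # all sources agree
    return "contested"               # keep both; resolve when more facts arrive

note("all birds can fly", True, "user1")
note("all birds can fly", False, "user2")
print(current_view("all birds can fly"))  # -> "contested"
```

Because nothing is discarded, a later resolution step can still weigh the sources against each other instead of having to undo an early commitment.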

