
Multiple Sentence Input with same meaning
 
 

Hi,

is there a concept for how to handle multiple sentences from the user?

I know how to make the bot respond to each sentence as if it had been typed in on its own (like it is done in the control script documentation). But I think this isn't the behavior of a good chatbot in every case. Sometimes it feels more natural to have fewer sentences in the bot's output than the user typed in.

The context between the sentences matters too.
Let's break it down to two sentences to make it easier.

The two sentences combined could represent the same statement.

Imagine the following example dialogs:

Bot: How late is it?
User: It is 10pm. The sun is already set.
Bot: Ok, got it.

or:

User: How did you do that? How did you become such a clever chatbot?
Here the bot shouldn't ask “Do what?” first before answering the second sentence, which is what it would have done if there had been no second sentence.

Or something similar. The point is that the bot has to recognize that, in some cases, it doesn't need to produce output for both sentences. Which sentence to ignore depends on the context, which makes it rather complex, but I think it would be an improvement for every chatbot.

Have any of you done something similar already? How?

 

 
  [ # 1 ]

You could track the number of responses generated by each sentence and then in the post process topic use ^reviseOutput to reduce the responses from the unwanted sentences to null.
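
A rough sketch of the tracking half (the responder topic ~mytopics and the fact verb responses_so_far are just placeholder names of mine): after each sentence has been handled, the control script records the running total of responses as a transient fact, so that later processing can work out which sentence produced which responses.

topic: ~control system ()

u: ()
   ^respond(~mytopics)      # normal responder processing for this sentence
   $$sentence += 1          # which input sentence of the volley this is
   # transient fact: (sentence number, responses_so_far, running response count)
   ^createfact($$sentence responses_so_far %response FACTTRANSIENT)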

And thank you for raising this; I’m going to go back to my control script and reassess it, as currently it will ignore a second sentence if the first actually produced output.

 

 
  [ # 2 ]

Do you have any idea how to determine those unwanted sentences? Is there some smart way to do this? It seems laborious to do that manually for every sentence combination I can think of; the amount of ChatScript code needed would increase a lot. In theory you would have to check every combination of every rule, and that is just for two input sentences.

 

 
  [ # 3 ]

It would be relatively easy in a post process topic to loop through all the responses and check to see if there are duplicates.

Or use ^responseruleid() to check if two sentences came from the same rule, or part of the same topic/rule hierarchy.

Alternatively whenever you generate a response you could create a transient fact, and then in post process check those facts to see if there is some commonality.
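
As a rough sketch of the first two checks (assuming the control table names a postprocess topic, which I call ~postprocess here, and using fact verbs of my own choosing): walk the responses of the volley, compare each one's text and originating rule with the previous one, and record anything redundant as a transient fact.

topic: ~postprocess system ()

t: ()
   $$i = 0
   loop (%response)
   {
      $$i += 1
      $$text = ^response($$i)          # text of the i-th response in this volley
      $$rule = ^responseruleid($$i)    # rule that generated it
      if ($$prevtext AND $$text == $$prevtext)
      {
         # literal duplicate of the previous response
         ^createfact($$i duplicate_response $$rule FACTTRANSIENT)
      }
      if ($$prevrule AND $$rule == $$prevrule)
      {
         # two sentences answered by the same rule
         ^createfact($$i repeated_rule $$rule FACTTRANSIENT)
      }
      $$prevtext = $$text
      $$prevrule = $$rule
   }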

 

 
  [ # 4 ]

The documentation says that ^reviseoutput is not available in postprocessing.
I ran some tests, and indeed ^reviseoutput isn't working in the postprocess topic.
Are there any alternatives to it?

 

 
  [ # 5 ]

What you can do at the bottom of the main control script is test for %more and if that is false then you can loop through the responses and process them.

Obviously you’ll have to make sure that the previous processing in the control script doesn’t end the rule for the sentence early. One way to do this might be:

topic: ~control system ()

u: ()
   ^sequence()

  a: ()
     # standard rejoinder/responder/gambit processing

  a: (!%more)
     $$i = 0
     loop (%response)
     {
        $$i += 1
        $$tmp = ^response($$i)
        ^analyze($$tmp)
        ^respond(~CHECKOUTPUT)
     }

However I do think there is potential for an additional example in the documentation. There are hints about post processing the output for pronoun resolution, but it is not explicitly clear how this is done, or how the output is actually changed.
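
The ~CHECKOUTPUT topic referenced in the code above is not defined in the post; a hypothetical version might be a set of rules that pattern match the response text just loaded by ^analyze() and record, via the loop's $$i index, which responses are safe to drop. For instance:

topic: ~CHECKOUTPUT system ()

u: (do what)
   # the analyzed response is just the clarifying question "Do what?" - flag it as droppable
   ^createfact($$i droppable_response 1 FACTTRANSIENT)

u: (got it)
   # a bare acknowledgement like "Ok, got it." is also a candidate to drop
   ^createfact($$i droppable_response 1 FACTTRANSIENT)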

 

 

 
  [ # 6 ]

Postprocessing for pronoun resolution has nothing to do with changing the output. It has to do with recording data, so that if the user uses a pronoun in their next input, it can be rewritten automatically and resubmitted as replacement input. E.g., if the chatbot had said “My mother lives in China”, then a postprocessing script might record that “she” will mean “your mother” and “there” will mean “China”. So if the user’s next input is “How old is she? What’s it like there?”, that could be modified by script to “How old is your mother? What’s it like in China?” and that input sent back in.
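
A hypothetical sketch of both halves of that idea (topic names, the $therefers variable, and the hard-coded question are mine, purely for illustration; the referent is recorded directly in the answering rule rather than in a postprocess script, just to keep it short):

# Half 1: when the bot mentions China, remember what "there" refers to.
topic: ~about_family ()

u: (where * your mother live)
   My mother lives in China.
   $therefers = China          # next volley, "there" refers to China

# Half 2: early in the next volley (e.g. called from the control script before the
# normal responders), rewrite the pronoun and resubmit the sentence as fresh input.
topic: ~pronoun_rewrite system ()

u: (what * like there)
   if ($therefers)
   {
      ^input(what is it like in $therefers)   # resubmitted as "what is it like in China"
      ^fail(SENTENCE)                         # stop processing the original wording
   }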

 

 
  [ # 7 ]

Ah, OK. I understand now: it is about remembering context and setting up the resolution for the next volley.
Thank you.

 

 
  [ # 8 ]
Andy Heydon - Apr 8, 2016:

It would be relatively easy in a post process topic to loop through all the responses and check to see if there are duplicates.

Or use ^responseruleid() to check if two sentences came from the same rule, or part of the same topic/rule hierarchy.

If you look at my examples from the first post, this wouldn't solve the problem, because the sentences aren't generated by the same rule:

User: How did you do that? How did you become such a clever chatbot?
Here the bot shouldn't ask “Do what?” first before answering the second sentence, which is what it would have done if there had been no second sentence.

Andy Heydon - Apr 8, 2016:

Alternatively whenever you generate a response you could create a transient fact, and then in post process check those facts to see if there is some commonality.

Could you take my example and add facts to it the way you would do it? I am not sure what these facts should look like yet.

 

 
  [ # 9 ]

Facts can be anything you want; they are just three pieces of data.

Final output can be inspected via an output response number and for each response in this volley you can get the rule tag that generated it via ^responseruleid().

So in each rule you could have a line that creates a fact based on the rule tag.

u: ()
   $$tag = ^getrule(tag ~)
   ^createfact($$tag outputimportance 100 FACTTRANSIENT)
   This is what I need to say.

(and those two lines could easily be encapsulated within an outputmacro)

I don’t know how you now intend to choose one output over another, but as an example, each response could have an importance score and the output processing could choose to keep the highest, or those over a certain level, or you could use fact flags.
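
For instance, sticking with those outputimportance facts, a sketch of the "keep those over a certain level" variant might sit at the bottom of the control script (the threshold, the fact set, and the use of an empty replacement in ^reviseoutput are my assumptions; ^reviseoutput has to run here rather than in postprocessing, as noted above):

# once the last sentence of the volley has been processed
u: (!%more)
   $$i = 0
   loop (%response)
   {
      $$i += 1
      $$rule = ^responseruleid($$i)                  # rule tag behind response $$i
      @2 = ^query(direct_sv $$rule outputimportance ?)
      if (^length(@2) > 0)
      {
         $$score = ^first(@2object)                  # importance declared by that rule
         if ($$score < 100)
         {
            ^reviseoutput($$i "")                    # assumption: empty value drops the response
         }
      }
   }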

 

 

 
  [ # 10 ]
Andy Heydon - May 19, 2016:

I don’t know how you now intend to choose one output over another, but as an example, each response could have an importance score and the output processing could choose to keep the highest, or those over a certain level, or you could use fact flags.

What I was primarily looking for are strategies for choosing one output over another. Just rating each rule in terms of importance is difficult because, as you can see in my examples, how important a rule is depends strongly on the context and on the other sentence.

Thank you for raising this technical approach anyway; I will get back to it once I have found a working strategy.

 

 