Authentic Human Emotions in Chatbots
 
 
  [ # 16 ]

For the emotional component of the chatbot equation, I think you have more than enough there Steve.  I can’t see there really being any more to it than that…  perhaps a bit of ‘statefulness’ .. but that’s about it.  I can’t see it playing a real role in intelligence.  I think many people fall into this fallacy of reasoning…

premise 1. Humans have strong general intelligence.

premise 2. Humans have emotions.

Conclusion: Thus we need emotions for artificial general intelligence.

Uh… no. Humans have legs, cars have wheels… cars are artificial… wheels are faster. smile

HOWEVER , if the goal is simply to simulate emotions, then that’s a different story.

 

 
  [ # 17 ]

Hans,

I think you want to be more ambitious with strong AI than I am; I’m only looking for strong-AI elements. It’s possible to be clever and simulate the bot knowing what’s going on within the limited domain of simple conversation. I’m not talking about the bot actually understanding human interactions as another person would, although I don’t think human beings necessarily “know” what’s going on either, the way you seem to indicate a strong AI system might.

 

 
  [ # 18 ]

Steve,

I don’t consider your examples to have anything to do with emotional responses; rather, the bot is just making statements that an agent with emotions might make, and not necessarily at realistic times or under realistic circumstances.

My approach to emotional responses is very natural within reinforcement learning, as the error signals, in a temporal difference architecture for example, adjusted for subjective value, would give the same anger response intensity as would occur in a mammal, given the same expectancies, state of satisfaction, and neural network architecture. 
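
Very roughly, and only as an illustration (the names and constants below are made up, not my actual architecture), the idea could be sketched like this:

```python
# A rough sketch of the idea (illustrative only; names and constants are
# made up): a temporal-difference prediction error, scaled by subjective
# value and current satisfaction, drives an anger-like intensity.

def td_error(reward, value_next, value_current, gamma=0.9):
    """Standard TD(0) error: how much better or worse things went than expected."""
    return reward + gamma * value_next - value_current

def anger_intensity(delta, subjective_value, satisfaction):
    """Map a negative prediction error onto an anger-like intensity in [0, 1].

    A large negative surprise about something the agent values highly,
    while it is already unsatisfied, yields a stronger response.
    """
    if delta >= 0:
        return 0.0
    return min(1.0, -delta * subjective_value * (1.0 - satisfaction))

# Example: the agent expected a valued outcome (predicted value 0.8) but got nothing.
delta = td_error(reward=0.0, value_next=0.0, value_current=0.8)
print(anger_intensity(delta, subjective_value=0.9, satisfaction=0.2))  # ~0.58
```

The same error signal that drives learning then doubles as the trigger for the emotional response, which is why I say it falls out of the reinforcement learning setup naturally.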

 

 
  [ # 19 ]

Could you perhaps show us exactly what you mean?

 

 
  [ # 20 ]

@Mike: Could you perhaps tell us what exactly you find inferior about Steve’s example: the way that he determines the emotional state or the way that he responds to it (or something else)?
How would you do it based on your emotional state model?

 

 
  [ # 21 ]
Mike Sandifer - Jun 10, 2012:

... although I don’t think human beings necessarily “know” what’s going on either, the way you seem to indicate a strong AI system might.

I agree that some humans do things without understanding wink

Mike Sandifer - Jun 10, 2012:

I don’t consider your examples to have anything to do with emotional responses; rather, the bot is just making statements that an agent with emotions might make, and not necessarily at realistic times or under realistic circumstances.

I agree. But I also think that, for the user to experience the chatbot as reacting with emotions, ‘faking’ it might be enough. I’m not convinced that a ‘full blown’ emotional subsystem would add more to the user experience than Steve’s example does.

This is, as I stated before, where I made the switch to strong-AI research (and development); when you think about giving a chatbot emotional capabilities, you start thinking about how that would benefit the system. In a ‘conversational agent’, it seems like overkill, as it can be ‘faked’ like everything else in a chatbot. You just need to add some emotional content into the response system, and something to track states like in Steve’s example.
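
Something as simple as the following would already get you most of the way there; this is just a minimal sketch, and the keyword lists and thresholds are made up, not taken from Steve’s bot:

```python
# A minimal sketch of "faked" emotion in a conversational agent: one mood
# score nudged by keyword cues and used only to pick a response tone.
# The keyword lists and thresholds are assumptions, not Steve's actual rules.

INSULTS = {"stupid", "useless", "idiot"}
COMPLIMENTS = {"thanks", "great", "clever"}

class MoodTracker:
    def __init__(self):
        self.mood = 0.0  # -1.0 (hurt/annoyed) .. +1.0 (pleased)

    def update(self, user_input: str) -> None:
        words = set(user_input.lower().split())
        if words & INSULTS:
            self.mood = max(-1.0, self.mood - 0.4)
        if words & COMPLIMENTS:
            self.mood = min(1.0, self.mood + 0.3)

    def tone(self) -> str:
        if self.mood < -0.5:
            return "hurt"
        if self.mood > 0.5:
            return "cheerful"
        return "neutral"

bot = MoodTracker()
bot.update("you are stupid")
bot.update("and useless too")
print(bot.tone())  # "hurt" -> the response system picks replies written in that register
```

The point is that the user only ever sees the chosen tone, so a couple of tracked numbers are indistinguishable, from the outside, from a much heavier emotional subsystem.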

Mike Sandifer - Jun 10, 2012:

My approach to emotional responses is very natural within reinforcement learning, as the error signals, in a temporal difference architecture for example, adjusted for subjective value, would give the same anger response intensity as would occur in a mammal, given the same expectancies, state of satisfaction, and neural network architecture.

I still can’t get away from the feeling that you have already crossed the boundary between chatbots and strong-AI wink

 

 
  [ # 22 ]

Hans,

It does require some strong AI.  I’m beginning to see that I come at this from a very different perspective than the rest of you.

 

 
  [ # 23 ]

Hi Mike and all,
I hope it’s OK to enter into this very interesting discussion and add another perspective, knowing that you all may find it totally irrelevant (and please don’t be afraid to tell me so).

I’m speaking from this point of view: I’ve performed with my virtual character live animation system for about 18 years now, in many different settings and for many different audiences. It’s a “Wizard of Oz” thing where I can see and hear my guests from a closet, via a spy camera and mic, and they see and hear me as an interactive virtual character, avatar or talking cartoon. It’s not a chat-bot, and still it’s given me insights that I can share, about the quality of interaction and the expectations of the human side of the conversation.

Let’s jump in then, because I’ve always been very focused on creating a powerful and positive emotional experience for my audience. As a performer who empathizes with the guest, I can say that without a mixture of expressed emotions the experience quickly becomes flat and goes nowhere. The plain, emotionless delivery of information comes across as creepy and passionless, like reading a dictionary.

When we’ve added positive emotions such as happy, smiling, curious, intrigued, etc., we often find ourselves mirroring back the expressions we see in the faces of those we entertain. And people like mirrored expressions, unless they expect something else. So we puppeteer our characters in such a way that there are frequent variations and nuances in facial expressions, just as there are variations in language and word usage - to keep the conversations stimulating and somewhat “alive” with respect to texture and timing.

In all my years, I’ve never found it useful or necessary for an avatar to express anger, shame or guilt, even though humans are often displaying these facial expressions without even realizing it, or realizing it and suppressing it. As “actors,” virtual characters can make their own artistic contributions (with the performance of an artist backstage) and one day, computer intelligence may indeed be able to “create” similar experiences, because of the skill and thoroughness of AI pioneers and programmers.

My virtual hat is off to you all. Cheers!

 

 
  [ # 24 ]

Hi Mike, hi all,

I think these two links could be useful for everyone who is working on A.I. and emotion:

http://www.dfki.de/~gebhard/alma/index.html
and
http://www.univis.uni-erlangen.de/form#remembertarget

All the best
Andreas

 

 
  [ # 25 ]

I see the ALMA model uses the PAD emotion space model.

There was someone on the board here that also uses that model….. oh wait, that was me wink

However, I’m not just modeling affect, I’m using the PAD-model to describe ‘temporal emotional content’ to, among other things, model episodic memory.
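
To give a flavour of the idea (and only a flavour; the real design is not something I can share yet), tagging episodes with a PAD vector and recalling them by emotional similarity could look roughly like this:

```python
# Illustrative only (the actual design is not public): episodic memory
# entries tagged with a PAD (pleasure, arousal, dominance) vector, and
# episodes recalled by emotional similarity to the current state.

from dataclasses import dataclass
import math

@dataclass
class Episode:
    timestamp: float
    description: str
    pad: tuple  # (pleasure, arousal, dominance), each in [-1, 1]

def pad_distance(a, b):
    """Euclidean distance in PAD space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recall_by_mood(memory, current_pad, k=3):
    """Return the k stored episodes emotionally closest to the current state."""
    return sorted(memory, key=lambda e: pad_distance(e.pad, current_pad))[:k]

memory = [
    Episode(1.0, "user praised the bot", (0.7, 0.4, 0.3)),
    Episode(2.0, "user shouted at the bot", (-0.8, 0.8, -0.5)),
    Episode(3.0, "idle small talk", (0.1, -0.2, 0.0)),
]
print(recall_by_mood(memory, current_pad=(-0.6, 0.7, -0.4), k=1)[0].description)
# -> "user shouted at the bot"
```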

 

 
  [ # 26 ]

smile

Very interesting.
Do you have a paper on how you are modeling episodic memory? Is there a paper that shows it?

In my opinion, the most interesting benefit of ALMA is ALMA’s approach to social understanding (“GoodEventForGoodOther”, for example):
http://www.dfki.de/~gebhard/slides/slides-iva06.pdf
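
A tiny sketch of what that kind of appraisal amounts to (the mapping below is the standard OCC “fortunes of others” scheme, simplified, and not lifted from the ALMA sources):

```python
# ALMA's appraisal tags build on the OCC model, where events concerning
# *others* map onto the "fortunes of others" emotions. This table follows
# the standard OCC scheme and is a simplification, not the ALMA code.

FORTUNES_OF_OTHERS = {
    ("good_event", "liked_other"): "happy-for",
    ("good_event", "disliked_other"): "resentment",
    ("bad_event", "liked_other"): "pity",
    ("bad_event", "disliked_other"): "gloating",
}

def appraise(event_valence: str, attitude_to_other: str) -> str:
    """E.g. a GoodEventForGoodOther appraisal yields 'happy-for'."""
    return FORTUNES_OF_OTHERS[(event_valence, attitude_to_other)]

print(appraise("good_event", "liked_other"))    # happy-for
print(appraise("bad_event", "disliked_other"))  # gloating
```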

I am working on an “ethical-social logic” that tries to find an adequate notation for human behavior, one which also allows ethical reasoning.

 

 
  [ # 27 ]
Andreas Drescher - Jul 12, 2012:

Do you have a paper on how you are modeling episodic memory? Is there a paper that shows it?

Unfortunately not. I’m developing a full AGI engine, and many of the concepts involved are currently locked under non-disclosure agreements until the patent application is done (hopefully before the end of this year). We have also just started building a full prototype that should be presentable within 6 to 9 months.

Besides that, I’m not aware of any research program working in the same area as I am (but I’m very interested if you know of any); most AI projects working on affect modeling and affective computing aim at using affect in communication, whereas I’m working on using emotion for ‘conceptual knowledge representation’. Based on that, my system includes things like emotion-based reasoning and introspective learning (including transparent handling of analogies).

Andreas Drescher - Jul 12, 2012:

I am working on an “ethical-social logic”
which is trying to find an adequate notation for human behavior,
which allows an ethical reasonig, too.

I recently hooked up with Matthijs Pontier at the VU University Amsterdam (Vrije Universiteit Amsterdam), who is working (at least somewhat) in the same area as you.

You might find his work interesting, he has several papers on-line available: http://www.few.vu.nl/~mpr210/

 

 
  [ # 28 ]

until the patent application is done

For which regions did you apply for a patent? And what type of patent are you seeking (a business model, a broad concept, or a narrow one)?

 

 
  [ # 29 ]
Jan Bogaerts - Jul 13, 2012:

For which regions did you apply for a patent? And what type of patent are you seeking (a business model, a broad concept, or a narrow one)?

We haven’t applied yet; the application itself isn’t done. We are currently working on the draft and, as I said, we hope to file before the end of this year. Writing the application is a pretty hefty (and costly) affair.

We will file an application in the Netherlands first, which will give one year of worldwide protection. After that we can decide in which regions to file globally, or scale up to an EU patent request, which will give us a few years of additional worldwide protection. We’re not sure yet what kind of patent we will file for, as we are still working this out. We are currently being coached by a consultant from the Dutch patent office during this stage.

 

 
  [ # 30 ]

Hi Hans Peter,

thank you very much for the inspiring link. I will look into Matthijs’s work.

Apropos AGI: you have probably seen the work of Ben Goertzel. After the fusion of his AGI framework with Joscha Bach’s MicroPsi, it seems to contain all the benefits of Prof. Dörner’s concepts, including emotional reasoning and motivation.

http://www.youtube.com/watch?v=Rgjw8O3vLBs
(Joscha’s introduction starts at 0:25:22)

All the best to you and all
Andreas

 
