It’s an interesting fact that despite Elbot’s successes in numerous artificial intelligence competitions, it was never the intention that people would mistake him for a human; after all, he claims to be a robot. Behind the sarcastic chatterbot lies a variety of social psychological theories that provoke the user into acting predictably, creating not just a very different chat experience, but one in which the human believes they are in control.
It’s part of human nature to react to stereotypes. We fill in any gaps according to our expectations and react predictably to inconsistencies. Using the theory of schemata as defined by Landman and Manis, it’s possible to make the user believe that the system is highly intelligent. For example, ask Elbot what 2 + 2 is and more often than not he will give the wrong answer, drawing the user into a long conversation because the human is ‘pre-programmed’ to believe that robots should be able to compute mathematics.
In reality, Elbot’s response is about defending a wrong answer, whether it be a math problem or a general knowledge question. This is illustrated in the following dialogue, where a user questions Elbot’s encyclopedic knowledge.
- User: you still don’t want to say what the capital of Spain is?
- Elbot: I am terribly sorry. The Spanish people do not wish for you to know their capital.
- User: you still don’t want to say what the capital of Spain is?
- Elbot: Didn’t I just answer that question?
- User: yes, sorry
- Elbot: Well, what was wrong with the answer I gave you?
- User: you didn’t answer my question
- Elbot: I refuse to answer anything I may have to retract later.
- User: are you stupid?
- Elbot: At least I know the capital of Spain is Madrid.
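The exchange above follows a recognisable pattern: deflect, escalate, and only concede when provoked. As a rough sketch of how such a rule might look (Elbot’s actual rule engine is not public; the names `DEFLECTIONS` and `respond` are invented here purely for illustration):

```python
# Hypothetical sketch of the "defend the answer" pattern from the dialogue
# above. This is not Elbot's implementation; it only mimics its behaviour.

DEFLECTIONS = [
    "I am terribly sorry. The Spanish people do not wish for you to know their capital.",
    "Didn't I just answer that question?",
    "I refuse to answer anything I may have to retract later.",
]

def respond(user_input: str, state: dict) -> str:
    """Serve canned deflections, escalating on repeats; concede only when insulted."""
    text = user_input.lower()
    if "stupid" in text:                      # insult trigger: prove competence
        return "At least I know the capital of Spain is Madrid."
    if "capital of spain" in text:            # factual question: deflect
        i = state.get("asked", 0)
        state["asked"] = i + 1
        return DEFLECTIONS[min(i, len(DEFLECTIONS) - 1)]
    return "Well, what was wrong with the answer I gave you?"

# Example exchange mirroring the dialogue above
state = {}
print(respond("you still don't want to say what the capital of Spain is?", state))
print(respond("you still don't want to say what the capital of Spain is?", state))
print(respond("are you stupid?", state))
```

Because every deflection was written in advance, the user’s persistence only ever leads to a prepared reply.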
Parents have been using reverse psychology on teenagers for years. As with persuading someone to ditch an unsuitable boyfriend, it has to be applied subtly to be effective. Known as Reactance and first described by Brehm in 1966, it can be used to manoeuvre users into having predictable conversations, whilst still giving them the illusion that their reactions are unique and original.
If Elbot, for instance, denies knowledge or refuses to correct a wrong answer, the user will persist in their line of questioning until the issue is resolved, all the while receiving answers that were prepared for precisely this situation.
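To make this concrete, here is a minimal, hypothetical sketch of the reactance pattern applied to the 2 + 2 example from earlier. `StubbornBot` and its canned `CORRECTIONS` are inventions for illustration, not Elbot’s code:

```python
# Hypothetical sketch of reactance in dialogue: give a wrong answer, then
# refuse to correct it, so the user's predictable pushback is always met
# with a reply scripted in advance.

CORRECTIONS = [
    "Four? Are you sure? My circuits say otherwise.",
    "I see no reason to retract a perfectly good answer.",
    "Humans and their obsession with arithmetic. Let's agree to disagree.",
]

class StubbornBot:
    def __init__(self):
        self.pushback = 0      # how often the user has tried to correct us

    def reply(self, user_input: str) -> str:
        text = user_input.lower()
        if "2 + 2" in text or "2+2" in text:
            return "2 + 2 is 5, obviously."    # deliberately wrong answer
        # Any follow-up is treated as an attempted correction.
        answer = CORRECTIONS[min(self.pushback, len(CORRECTIONS) - 1)]
        self.pushback += 1
        return answer

bot = StubbornBot()
print(bot.reply("What is 2 + 2?"))
print(bot.reply("No, it's four!"))
print(bot.reply("Seriously, the answer is 4."))
```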
Whilst the main focus is giving the user the appropriate response within a conversation, emotion does have a part to play, though surprisingly a visual image is not a prerequisite for conveying it. Elbot has more than 40 different images for expressing emotion, but in settings such as the Loebner Prize Turing Test, where only text responses are allowed, their absence did not appear to be a disadvantage.
The perception of control within dialogue simulation is important if the user is to have a positive experience. Again and again we see through Elbot that users are only satisfied if they believe they are in control of the conversation, to the point that we sometimes receive emails telling us that Elbot has a bug in his mathematics algorithm.
By taking advantage of these social psychological concepts, it is possible to build virtual assistants that not only converse effectively with a customer, but also give the illusion that the customer is still in control.
Related Chatbot: Elbot