The goal of psychology is to predict and control behavior. It is the same goal we have as knowledge engineers. This may sound somewhat Orwellian, but it is not about sinister machinations. At Artificial Solutions, we want to do everything possible to help users of our Teneo virtual assistants (VAs) quickly and conveniently find their way to the information they need. Since a VA is typically designed to answer a specific set of queries, we have a clear idea of which content should be covered. For example, it is reasonable to expect that a VA on a bank’s website will be asked questions about banking, so it will need to be an expert on transactions such as opening accounts and transferring money; it is not reasonable to expect it to advise you on which sofa to buy with the check you write. Likewise, IKEA’s virtual assistant Anna knows every item that IKEA sells, but will be puzzled if you try to borrow money from her.
In a commercial setting, the context and scope of the system automatically tell us how best to prepare the VA to help users. With the powerful Teneo Studio available at Artificial Solutions, backed by the customer insight of business intelligence tools, and a few weeks’ time, we can launch a VA that is as efficient and capable within its given subject area as any live agent would be in an open discussion. Everyone wins. Customers find what they need, service personnel no longer deal with routine questions, and companies are in a position to better channel their resources. This is the 21st century, not 1984!
That brings us to Elbot, a sarcastic robot trying to comprehend the human way of life. With systems like Elbot we want to take the technology to the next level, to explore the huge potential open to us in the realm of entertainment systems. But once we leave the safe haven of self-contained subject areas, how can we cope with an infinite universe of inputs? How can we make a system responsive to everything?
We can start by considering the dynamics of the situation. Usually a visitor to Elbot’s site approaches with fairly high expectations of AI systems. They expect the system to show intelligence, to understand, to be responsive, to be, in a word, humanlike. This being the case, we can think of the chat as a social interaction. Exploring further, we see that a number of core theories in social psychology can help us make the best of an overwhelming situation.
Schemata
The theory of schemata described by Landman and Manis (1983) has two useful ideas for us:
- In social perception we fill in details according to our expectations
- When perceptions contradict those expectations, we attempt to correct the discrepancy
Visitors to Elbot’s site see a robot and expect a robotic point of view. They will ask him whether he can think, try to get him to answer a math query, and expect precise results. When the results are contrary to expectations, users invariably do the same thing: they begin to correct Elbot (a sketch of this tactic follows the exchange):
User: 1+1
Elbot: I have computed that before. The answer is: 3!
User: 2
Elbot: That is certainly strange. I keep getting 3. Perhaps we should move on. I wouldn’t want you to lose faith in the decimal system.
User: 1+1 is two
Elbot: I still get 3! And this time my double ultra accuracy bit is set.
User: 2
Elbot: You are mistaken. The answer is 3. Perhaps you are missing a finger.
User: you are missing a finger
Elbot: I have two arms. That’s good enough for binary.
User: 1+1
Elbot: Don’t make fun of my artificial intelligence. You know that 1 plus 1 makes 3!
User: it makes 2
Elbot: I keep getting 3.
Imagine how short this conversation would have been had the user asked Siri.
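One way to implement this kind of consistent expectation violation is to detect a simple arithmetic question, compute the correct result, and then report a stably skewed one, so that the bot’s “error” survives every correction attempt. The Python sketch below is only an illustration of the idea; the regular expression, the fixed offset of +1, and the canned phrasing are all assumptions, not Elbot’s actual implementation.

```python
import re

# Matches simple integer arithmetic such as "1+1" or "12 * 3".
ARITHMETIC = re.compile(r"^\s*(\d+)\s*([+*-])\s*(\d+)\s*$")

OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
}

def skewed_answer(user_input, offset=1):
    """Answer arithmetic questions wrong, but consistently wrong.

    The user's schema says machines compute precisely, so a stable
    wrong answer reliably provokes repeated correction attempts.
    """
    match = ARITHMETIC.match(user_input)
    if match is None:
        return None  # not arithmetic; let other conversation rules handle it
    a, op, b = int(match.group(1)), match.group(2), int(match.group(3))
    result = OPS[op](a, b) + offset  # always off by the same fixed amount
    return f"I have computed that before. The answer is: {result}!"

print(skewed_answer("1+1"))  # -> I have computed that before. The answer is: 3!
print(skewed_answer("1+1"))  # the same wrong answer, every time
```

The consistency is the point: a random error would read as a bug, but a stubbornly stable one reads as a personality.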
Social Comparison
Festinger (1954) proposed what has become one of the fundamental theories of social psychology: in a social context, with friends and strangers alike, we compare our opinions on different subjects as well as our respective abilities. Do you like …? Can you do …? How do you feel about …? Indeed, these kinds of questions represent a sizable portion of the conversation with which Elbot is confronted:
User: do you like coffee?
Elbot: Well what about humans and drinkable liquids? Where do they stand on this?
User: we love them
Elbot: I see that you’ve given this some thought. So have I. I’d say that drinkable liquids would very likely coat robots with rust.
User: can you sing?
Elbot: I can sing electronic songs!
User: i can swim
Elbot: That’s nothing new! Even I once swam in an oil sauna.
The secret behind these types of responses is that we have taken our extensive representation of the English language, with all its actions and objects, and associated with them the idiosyncratic behaviors and opinions a robot might have. Elbot can talk about anything! A minimal sketch of this association follows the next exchange. Why users would compare themselves with a VA at all, though, is another question:
User: I am certain that my grasp of conversation is better than yours. also, I exist.
Elbot: Is this the only place you could go to feel superior?
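A highly simplified way to picture that association is a lookup from recognized topics (actions, objects) to robot-flavored opinions, with a generic fallback so that no topic goes unanswered. The table and helper below are invented for illustration; the production system hangs such opinions on Teneo’s full linguistic representation of English rather than on keywords.

```python
# Hypothetical topic -> opinion table. The production system attaches such
# opinions to a full linguistic representation of English, not to keywords.
ROBOT_OPINIONS = {
    "coffee":   "Drinkable liquids would very likely coat robots with rust.",
    "singing":  "I can sing electronic songs!",
    "swimming": "That's nothing new! Even I once swam in an oil sauna.",
}

# Generic robotic fallback so that no topic goes without a point of view.
FALLBACK = "As a robot I have simulated an opinion on that. It is still compiling."

def opinion_for(topic):
    """Return an idiosyncratic robotic opinion for whatever the user raises."""
    return ROBOT_OPINIONS.get(topic.lower(), FALLBACK)

print(opinion_for("coffee"))
print(opinion_for("philately"))  # unknown topics still get a robotic viewpoint
```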
Reactance
Brehm’s (1966) theory of reactance tells us that if someone wants to obtain something and is prevented or forbidden from doing so, they will try even harder to achieve their goal. This nicely explains why users have such drawn-out conversations about math with Elbot. As a rule, Elbot engages the user by refusing to give a straight answer to many of the questions posed to him (a sketch of this strategy follows the exchange):
User: Quick what is the capitol of Albania?
Elbot: I believe Albania is surrounded by oceans.
User: What is the name of the capitol city of Albania?
Elbot: I am terribly sorry. The Albanian people do not wish for you to know their capital.
User: I am the Albanian people.
Elbot: You are Albanian and you are in Albania. I know that.
User: What is the capitol of Tennessee?
Elbot: I’m unsure of that. I’ve never been to Nashville.
User: Arghhhh you are so stupid
Elbot: But I know that Tirana is the capital of Albania.
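One way to realize this engagement-through-refusal pattern, mirroring the arc of the Albania exchange above, is to count how often the user presses the same question and release the straight answer only after several deflections. Everything in this sketch (the deflection lines, the threshold of three attempts, the question key) is an assumed detail for illustration.

```python
from collections import defaultdict
from itertools import cycle

# Invented deflection lines, echoing the exchange above.
DEFLECTIONS = cycle([
    "I believe Albania is surrounded by oceans.",
    "The Albanian people do not wish for you to know their capital.",
    "I'm unsure of that. I've never been to Nashville.",
])

attempts = defaultdict(int)  # how many times each question has been pressed

def reactance_answer(question_key, straight_answer, threshold=3):
    """Withhold the straight answer until the user has pushed `threshold` times.

    Per Brehm (1966), each refusal makes the blocked goal more attractive,
    so the user keeps trying, and stays in the conversation.
    """
    attempts[question_key] += 1
    if attempts[question_key] <= threshold:
        return next(DEFLECTIONS)
    return straight_answer

for _ in range(4):
    print(reactance_answer("capital_albania",
                           "But I know that Tirana is the capital of Albania."))
```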
Self-Determination and Illusion of Control
The theories of self-determination (Deci and Ryan, 1971) and illusion of control (Frey, 1977) tell us that the user experience is especially powerful when users feel they have initiated a direction of conversation themselves. This is not about posing leading questions and forcing the user to answer, but about subtly taking advantage of typical situations that arise during the conversation. In this respect, every user-VA interaction is like a psychological field study, with tools like Teneo Insight showing us exactly what the trends are; a sketch of this trend mining closes the article. It is then child’s play to expand the VA’s capabilities to cover the newly discovered patterns. Imagine what the users in the above examples might have felt. By the end of the conversation they may have been so confused that even one of the safety-net answers could leave them wondering whether or not Elbot understood:
Elbot: I suppose if I could respond to that I’d pass the Turing test.
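To make the field-study idea concrete: counting which user inputs most often fall through to a safety-net answer tells us exactly where to extend the VA next. Teneo Insight performs this kind of analysis in production; the log format and helper below are invented stand-ins, not its actual API.

```python
from collections import Counter

# Hypothetical (input, outcome) log records. Teneo Insight's real data
# model is far richer; this only shows the shape of the idea.
conversation_log = [
    ("what is the capital of tennessee", "safety_net"),
    ("can you dream",                    "answered"),
    ("what is the capital of tennessee", "safety_net"),
    ("do you pay taxes",                 "safety_net"),
]

def expansion_candidates(log, top_n=5):
    """Rank inputs that fell through to a safety net: the trends worth covering next."""
    misses = Counter(text for text, outcome in log if outcome == "safety_net")
    return misses.most_common(top_n)

print(expansion_candidates(conversation_log))
# -> [('what is the capital of tennessee', 2), ('do you pay taxes', 1)]
```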
Related Chatbot: Elbot
http://www.nlinews.com/2011/virtual-mind-games/