

Siri and Cortana
 
 

http://www.bbc.co.uk/news/technology-26147990

The ability of a system to understand more broadly what the overall context of a communication is turns out to be very important…

There are some critical signals in context. These include location, time of day, day of week, user patterns of behaviour, and current modality: are you driving, are you walking, are you sitting, are you in your office? Are you in a place you are familiar with versus one you are not?

A person’s calendar can be a very rich source of context, as is their email.
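For what it's worth, the sort of signals the article lists could be bundled up roughly like this. This is a minimal sketch of my own in Python; every name in it is made up for illustration, nothing here comes from Siri or Cortana:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical container for the context signals the article mentions.
@dataclass
class ContextSignals:
    location: str          # e.g. "office", "home", "unknown"
    timestamp: datetime    # supplies both time of day and day of week
    modality: str          # "driving", "walking", "sitting", ...
    familiar_place: bool   # has the user been here before?

def context_features(ctx: ContextSignals) -> dict:
    """Flatten raw signals into features an assistant could condition on."""
    return {
        "location": ctx.location,
        "hour": ctx.timestamp.hour,
        "weekday": ctx.timestamp.strftime("%A"),
        "modality": ctx.modality,
        "familiar": ctx.familiar_place,
    }

ctx = ContextSignals("office", datetime(2014, 2, 12, 9, 30), "sitting", True)
print(context_features(ctx))
```

The calendar and email the article mentions would just be two more fields feeding the same feature dictionary.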

 

 
  [ # 1 ]

Interesting article. Thanks Andrew! smile

Haven’t seen you in a while. It’s great to have you back.

 

 
  [ # 2 ]

Oh I’m always here… lurking… thinking… coding. grin

 

 
  [ # 3 ]

Same here. I don’t “contribute” anywhere near as much as I used to, but I’m still here, every day. smile

 

 
  [ # 4 ]

Moderating the forum, Dave, is a huge contribution.


And…

Thanks for the article Andrew. 

EDIT:

Just (friendly, politely) adding…

Spoiler Alert!

I am trying to go see the movie “Her”, but cannot find a theater near me where it is playing.

So I thought I would mention this little heads up here, in case it is useful to someone else.

Just to be 100% clear:

I am NOT criticizing Andrew’s post…
Please no apologies or anything.

On the contrary, I do look forward to reading that article Andrew posted, right after I see the movie.

 

 
  [ # 5 ]

Again an interesting post smile

AI: Sorry, I don’t know what a pizza is
Human: OK, well do you know where there’s a nice Italian restaurant?

I have to say this human response seems even less likely to me than an A.I. being able to distinguish between related and unrelated combinations this way in this application. I could be wrong, but when a human runs into a dead end in free conversation (rather than a goal-oriented quest), they usually change topics (or confront it unproductively). Siri in particular is almost exclusively asked “out of the blue” questions for parlour amusement. It’s a talking phone.
Knowing time, place and intonation will increase the accuracy of Siri’s bag of tasks, but it won’t make the phone a better conversationalist. As long as Siri is going to statisticalize keywords, she’s missing out on every bit of contextual nuance that passes in the here and now of conversation. That is where I believe the game is at, not in global context like what planet I’m on.

If I’m a little biased it’s because I’ve been working out some conversational mechanisms these last months, and saying that one should learn “the rhythm and beat of the way humans talk to each other” is a bit of an unspecific understatement.

 

 
  [ # 6 ]

The article and videos contain no spoilers about the movie “Her”, not to worry.

Where Microsoft seems to be going with these methods is task-aimed personal assistants, hence incorporating context from schedules and location would be quite useful. But you can have a perfectly good conversation with an online stranger without knowing their place, time, schedule, age, etc. As compelling a speaker as I find Eric Horvitz, I wish he would have something more to say here than affirming the obvious challenges.

Context is extremely important for understanding, but I believe you can retrieve a lot more context from reading in between the lines of a conversation than from harvesting exterior circumstantial context. Where is psychology in the field of AI?

 

 
  [ # 7 ]

I have not seen the movie yet either, but as far as I could tell there was nothing that might constitute a spoiler in that article. However the most cogent comment that I have read about the movie so far may amount to a spoiler for some (spoiler alert) and that is that the movie completely ignores “the elephant in the room”. To wit, any artificial intelligence that is potent enough to develop and maintain the kinds of personal relationships portrayed in the movie would also be powerful enough to pose an immediate existential threat to the human race. Personally I find that to be a refreshing change from all the other movies that portray that as a bad thing.

On a more serious note…

Context is extremely important for understanding, but I believe you can retrieve a lot more context from reading in between the lines of a conversation, than from harvesting exterior circumstantial context.

These are all different aspects of pragmatics and I’d be reluctant to assign priorities like that. The relative merits of the different information back channels (shared knowledge of the situation and context, body language and tone of voice, the application of conversational maxims, cultural background and idioms, etc) must vary radically from one case to another. There is still a lot of research to be done in this area.

I’ve mentioned the following project here in the past but not since you became a regular here on the forum Don, so I’ll post it again in case you’re not familiar with it.

The most capable real world conversation system that I’ve found in my research so far is being developed by James Allen’s team at the University of Rochester. It employs deterministic methods (the same rigorous approach that you and I and some others are taking—analysis of grammar, semantics and pragmatics) rather than the statistical methods and pattern search that are currently so popular.

Please view the videos and browse the papers that they have published. I’d be interested to know what you (and anyone else who has seen it) might think.

http://www.cs.rochester.edu/research/cisd/projects/plow/

http://www.cs.rochester.edu/~james/

 

 

 
  [ # 8 ]

Thanks for that link. I’ve been curious about such projects.

Priorities depend entirely on one’s goal of course. I should point out that I greatly distinguish between general language processing and having a free, fluid, social, two-way conversation, and my remarks so far have been towards achieving the latter, as the article suggested.
PLOW (and Microsoft) however seem to be entirely task-oriented systems, where social and psychological aspects needn’t come into play much. When you want an exact task done, you want exact control, and that is just what deep language processing offers over statistical guesses. I am obviously a fan of this approach and admire projects like this for being willing to make such an effort.

Sorry about the wall of text:
The paper tells me that there is much more going on behind the scenes than I can tell from the videos; the TRIPS system is intriguing. However, I have doubts about the efficiency of, and need for, advanced language processing for the purpose of task-learning. I would think that 80% of these instructions could cleverly be identified through the back-end of mouse actions, copy-paste contents and HTML, and what strikes me most in the videos is that verbal instructions make the task-learning process sluggish. While there is certainly a need to communicate “if-then” conditions, one could also speak the command “If” and then select a form label or fiddle the mouse cursor over whatever the condition is, without speech recognition -> ontology -> grammar -> semantics -> reasoning -> pragmatics and their potential for misunderstandings.
What I’m saying is that I find it hard to judge to what extent advanced methods like semantic reasoning contribute over ordinary commands in this case. It also appears from the code in the video that merely the keyword “Now” identifies a new sub-task, so there you go.
I am very interested in James Allen’s 2001 paper Towards Conversational Human-Computer Interaction, involving implicit goals, conditions and problem-solving through language, but these do not appear to be in action here. I hope we will get there with time.
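To illustrate how little machinery that particular observation needs: segmenting an instruction transcript into sub-tasks on the keyword “Now” is almost trivial. This is a toy sketch of my own in Python; the function name and the demo transcript are invented, none of this is taken from the TRIPS/PLOW code:

```python
def split_subtasks(transcript):
    """Split a list of spoken utterances into sub-tasks, starting a
    new sub-task whenever an utterance opens with the keyword 'Now'.
    Toy illustration only, not the actual PLOW mechanism."""
    subtasks = []
    for utterance in transcript:
        # Start a fresh sub-task on "Now ..." (or on the very first line).
        if utterance.lower().startswith("now") or not subtasks:
            subtasks.append([])
        subtasks[-1].append(utterance)
    return subtasks

demo = [
    "Open the airline's web page.",
    "Put the departure city in this field.",
    "Now click the search button.",
    "Select the cheapest flight.",
]
print(split_subtasks(demo))
```

Of course the hard part is everything this sketch ignores: figuring out what each utterance actually refers to on the screen.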

One task-learner I admire for its simplicity is the Baxter robot, since you basically only have to move its limbs and press enter at pivotal points.

 

 
  [ # 9 ]

Andrew said, “I have not seen the movie yet either, but as far as I could tell there was nothing that might constitute a spoiler in that article.”

Thanks Andrew.  This was an honest netiquette mistake on my part. It did not occur to me that it was bad manners to try to correct something I did not post.  So, I have set a new rule for myself, to never post another spoiler alert to this forum on anything I did not post myself. 

 

 