It’s just over two weeks since we finished at Mobile World Congress, and I’ve had a bit of time to reflect on the conversations I had with the people who dropped by for a chat at our two exhibition stands – one in the main section of Hall 2, the other in App Planet. There was certainly a healthy level of interest in how virtual assistants can be used on mobile devices – perhaps this shouldn’t be such a surprise following Apple’s launch of Siri, but it was really encouraging that so many people recognized the opportunity of using natural language interaction (NLI) as the basis of a new and effective speech-enabled interface for mobile devices.
Interestingly, there’s still a lot of confusion around the difference between automatic speech recognition (ASR) and NLI. Put simply, ASR converts spoken words into text. It is often used in applications such as voice dialing and dictation tools, but it has no humanlike intelligence: it can’t qualify a question by asking for more information, it can’t remember, and it can’t search other sources for information. In short, it isn’t able to deliver intelligent solutions.
NLI, on the other hand, allows you to ask complex questions in free-format, natural language; it will learn, reason, and understand, and then apply that knowledge to act on what has been said. In short, it’s the ‘brains’ that make technology think – the platform that allows humans to talk to devices and devices to understand humans!
There was also considerable interest from the developer community, who were particularly keen to explore how they could embed NLI technology into third-party apps. This demonstrates not just the growing demand for natural language solutions on mobile, but also a recognition that, for apps to be a commercial success, virtual assistants (VAs) must be able to reason and react intelligently to complex commands.
People we talked to were also impressed by the capabilities NLI offers. For example, we had a number of detailed discussions around areas such as topic switching, where a user asks a VA one question but then digresses onto another subject. An intelligent NLI solution like those built with Teneo can answer the digression and then – if appropriate – politely bring the conversation back to the original subject; a humanlike attribute not commonly found in virtual assistants. The sketch below shows the idea in miniature.
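To make the behaviour concrete, here is a minimal Python sketch of one way topic switching can be handled, using a stack of open topics. The class and method names are purely illustrative assumptions for this post – they are not Teneo’s actual API.

```python
# A minimal sketch of topic switching with a topic stack.
# Names and routing logic are illustrative, not a real product's API.

class DialogueManager:
    def __init__(self):
        self.topic_stack = []  # open topics, most recent last

    def start_topic(self, topic):
        self.topic_stack.append(topic)

    def handle(self, utterance, detected_topic):
        current = self.topic_stack[-1] if self.topic_stack else None
        if current and detected_topic != current:
            # Digression: answer the new topic while keeping the old one open.
            self.topic_stack.append(detected_topic)
            answer = self.answer(detected_topic, utterance)
            self.topic_stack.pop()
            # Politely steer the conversation back to the original topic.
            return f"{answer} Now, back to your {current} question..."
        return self.answer(detected_topic, utterance)

    def answer(self, topic, utterance):
        # Stand-in for the NLI engine's real answer generation.
        return f"[answer about {topic}]"

dm = DialogueManager()
dm.start_topic("flight booking")
print(dm.handle("What's the weather in Barcelona?", "weather"))
# -> "[answer about weather] Now, back to your flight booking question..."
```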
Slot filling was another subject that generated significant interest because of its potential to deliver a humanlike, intelligent experience, particularly once people recognized that other mobile VAs available today can only remember one detail at a time. For example, if you were booking a flight to MWC you might say “I want to book a flight from London to Barcelona, tomorrow morning flying business class”. Slot filling enables the VA to skip the usual qualifying questions such as “where from?”, “destination?”, “when?”, and so on, because this information has already been provided in the opening query; the VA gets straight to the next part of the booking process. Of course, if information were missing, it would recognize what further information is required and ask the relevant questions. In other words, it’s another example of how NLI delivers humanlike intelligence. In fact, if you’ve already told the VA that you’re a vegetarian or that you prefer business class, it will also remember these facts when making the booking. The sketch below shows the principle.
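Here is a minimal, runnable Python sketch of the slot-filling idea, using the flight example above. The extract_entities() step stands in for a real NLI engine and is hard-coded purely to keep the example self-contained; none of this reflects Teneo’s actual implementation.

```python
# A minimal sketch of slot filling for a flight-booking VA.
# extract_entities() is a hypothetical stand-in for a real NLI engine.

REQUIRED_SLOTS = ["origin", "destination", "date"]

PROMPTS = {
    "origin": "Where are you flying from?",
    "destination": "Where would you like to go?",
    "date": "When would you like to travel?",
}

def extract_entities(utterance):
    """Pull entities from free-form text. Hard-coded matching here
    purely to make the example runnable."""
    utterance = utterance.lower()
    slots = {}
    if "from london" in utterance:
        slots["origin"] = "London"
    if "to barcelona" in utterance:
        slots["destination"] = "Barcelona"
    if "tomorrow" in utterance:
        slots["date"] = "tomorrow morning"
    if "business class" in utterance:
        slots["cabin"] = "business"  # remembered preference, not required
    return slots

def next_question(filled):
    """Skip any qualifying question whose slot was already filled by the
    opening utterance; ask only for what is missing."""
    for slot in REQUIRED_SLOTS:
        if slot not in filled:
            return PROMPTS[slot]
    return None  # all slots filled: go straight to the booking step

filled = extract_entities(
    "I want to book a flight from London to Barcelona, "
    "tomorrow morning flying business class"
)
print(next_question(filled))  # -> None: no qualifying questions needed
```

Because every required slot was filled by the opening sentence, no qualifying question is asked; drop “tomorrow” from the utterance and the VA would instead come back with “When would you like to travel?”.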
Scenarios like this really demonstrate the power of the NLI intelligence behind VAs. It’s not just about understanding a specific voice string or command using ASR technology; it’s about delivering a much faster, richer way of building an intelligent user interface and customer service tool.
http://www.nlinews.com/2012/capability-not-voice-impressed-most-at-mwc/