Imagine a TV that responds to your physical behaviour: pointing a finger, talking, or even picking up a magazine. In essence, it is a TV that watches you, via gesture recognition, voice recognition and image recognition, and responds accordingly.
This new intelligent user interface technology is currently being developed by Japan’s NHK.
Cameras set up next to the TV monitor the viewer's facial expressions and actions, while a microphone picks up voice commands. The TV can also detect when a viewer is distracted: it automatically pauses the picture and resumes as soon as it senses the viewer's attention returning. The technology can even go as far as suggesting TV content while you are, for example, reading a magazine: the TV will offer video-on-demand content matching the subject you are reading about. Some may call this 'indirect advertising', and it opens up a whole new world for companies promoting products and services. It relates to the product-activation trend that Erwin described earlier on his website.
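To make the pause-on-distraction behaviour concrete, here is a minimal sketch of the playback logic as a small state machine. This is purely illustrative: the `AttentionAwarePlayer` class and the boolean attention signal are hypothetical, and the real NHK system would presumably derive that signal from camera-based face and gaze detection rather than receive it as a plain input.

```python
from enum import Enum


class PlaybackState(Enum):
    PLAYING = "playing"
    PAUSED = "paused"


class AttentionAwarePlayer:
    """Pauses playback when the viewer looks away and resumes when
    attention returns. In a real system the attention flag would come
    from a face/gaze-detection pipeline; here it is simply passed in."""

    def __init__(self):
        self.state = PlaybackState.PLAYING

    def update(self, viewer_attentive: bool) -> PlaybackState:
        if viewer_attentive and self.state is PlaybackState.PAUSED:
            # Attention regained: resume the picture.
            self.state = PlaybackState.PLAYING
        elif not viewer_attentive and self.state is PlaybackState.PLAYING:
            # Viewer distracted: stop the picture.
            self.state = PlaybackState.PAUSED
        return self.state


player = AttentionAwarePlayer()
states = [player.update(a) for a in [True, False, False, True]]
print([s.value for s in states])  # → ['playing', 'paused', 'paused', 'playing']
```

The point of the sketch is that the TV never needs an explicit pause command: playback state simply tracks the attention signal over time.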
Although most of the features, such as gesture, voice and image recognition, could be implemented fairly soon, some rely on broadcasters enriching their shows by streaming metadata alongside the programme. Viewers could then access this information, such as the names of actresses or background on a show's content. Standardising and implementing these data streams across broadcasting companies could still take years, though.
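As an illustration of what such a metadata stream might carry, here is a sketch that parses a small JSON payload of the kind a broadcaster could send alongside a programme. The payload format, field names and `cast_names` helper are all hypothetical, since, as noted above, no standard for these streams exists yet.

```python
import json

# Hypothetical example of per-programme metadata a broadcaster might
# stream; the actual format has not been standardised.
payload = """
{
    "programme": "Evening Drama",
    "cast": [
        {"name": "A. Actress", "role": "Lead"},
        {"name": "B. Actor", "role": "Support"}
    ],
    "topics": ["cooking", "travel"]
}
"""

metadata = json.loads(payload)


def cast_names(meta: dict) -> list:
    """Return the cast names a viewer could ask the TV about."""
    return [member["name"] for member in meta.get("cast", [])]


print(cast_names(metadata))  # → ['A. Actress', 'B. Actor']
```

Once such records arrive with the broadcast, answering a viewer's "who is that actress?" becomes a lookup rather than an image-recognition problem.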
Research has shown that people are more likely to respond to a virtual human than to a plain screen that reacts to their voice. It simply feels more natural to speak to a living creature; that is how our brains are wired. We therefore expect that future applications could incorporate a chatbot, giving this already intelligent user interface an even more natural feel.