An Intelligent User Interface is designed to improve communication between humans and computers. Intelligent User Interface researchers, designers and developers specifically aim to enhance the flexibility, usability, and power of human-computer interaction for all users. In realizing an intelligent user interface, HCI scientists exploit knowledge of users, tasks, tools, and content, as well as devices for supporting interaction within different contexts of use, as described by Dr. Mark Maybury in Intelligent User Interfaces for All.
The term is often used interchangeably with Intelligent User Interface Agent: an intelligent virtual agent, shaped as an animated character, that aims to improve the naturalness of human-machine interaction by adding human-like verbal and nonverbal behavior.
In this video, Steven Bathiche, a researcher in the Microsoft Applied Sciences Group, demonstrates progress in user interfaces relating to gesture recognition, the nonverbal component of human conversation, and prototypes a concept of Conversational User Interfaces for the future.
Researcher Ron Kaplan from the Natural Language Theory and Technology research group at Palo Alto Research Center and professor Marti Hearst from UC Berkeley spoke on the state of natural language interfaces for search engines. Back in 2005, they predicted that by 2015 interfaces for search engines would be conversational. Read more in Progress in Search: A Conversational User Interface (CUI) by 2015?
Typical usage
The term Intelligent User Interface is typically used within the human-computer interaction, human-robot interaction and artificial intelligence communities. The focus of IUI designers is to increase the intelligence of the interaction between computers and people. The Intelligent User Interface research field adopts various knowledge-based techniques and emerging technologies such as natural language understanding, brain-computer interfaces or gesture recognition. The IUI field also draws on contributions from related fields, such as psychology, cognitive science or computer graphics.
An example of an Intelligent User Interface is the interactive modular running pants with an external display for personal health data analysis, designed within the Intermedia project at the interdisciplinary research lab MIRALab at the University of Geneva. Wearable sensors attached to these pants measure the activity of a runner’s knee at runtime, process the captured data stored in the wearable modules and warn about possible problems while running. A smartphone connected to the wearable modules tells the runner when to slow down or stop running. The data is stored in the cloud and can be viewed on smartphones, tablets or large interactive tabletop screens. The result of MIRALab’s research is shown in the video below.
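Although the video does not show the underlying code, the processing it describes, streaming knee-sensor readings, smoothing them, and warning the runner when strain stays too high, can be illustrated with a minimal sketch. The window size, strain limit and monitor function below are illustrative assumptions, not MIRALab’s actual implementation:

    # Minimal sketch, assuming normalized knee-strain samples in [0, 1];
    # not MIRALab's actual implementation.
    from collections import deque
    from statistics import mean

    WINDOW = 10          # samples in the smoothing window (assumed)
    STRAIN_LIMIT = 0.8   # smoothed strain above which we warn (assumed)

    def monitor(samples):
        """Yield a warning whenever smoothed knee strain exceeds the limit."""
        window = deque(maxlen=WINDOW)
        for strain in samples:
            window.append(strain)
            if len(window) == WINDOW and mean(window) > STRAIN_LIMIT:
                yield "slow down: sustained knee strain %.2f" % mean(window)

    # Simulated readings that gradually rise past the threshold:
    readings = [0.5 + 0.04 * i for i in range(20)]
    for alert in monitor(readings):
        print(alert)
        break

A real system would run such a check on the wearable module itself and calibrate the limit per runner, but the shape of the pipeline, sense, smooth, compare, notify, stays the same.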
Another group of researchers interested in enabling natural human-computer interaction by combining techniques from computer vision, machine learning, computer graphics, human-computer interaction and psychology is the Intelligent User Interfaces Lab at Koç University. In collaboration with the Graphics & Interaction Group at the University of Cambridge, they explore various options in intelligent affective computing, pen-based interfaces and sketch-based applications. As presented in the video, the shape drawn with the pen is recognized and appropriate objects are added to the sketch world, with complementary use of speech. The interface understands what the user is trying to do and models it for them. It also suggests how the model could be expanded, and points toward the very next step: gesture recognition. A minimal example of this kind of shape recognition is sketched below.
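The video does not reveal how the lab’s recognizer works, but the core idea, mapping a drawn stroke to a shape class via a geometric feature, can be sketched in a few lines. The compactness measure, the threshold and the classify function below are assumptions for illustration, not the Koç/Cambridge system:

    # Minimal sketch: classify a closed pen stroke as circle-like or
    # rectangle-like. Compactness 4*pi*A/P^2 is 1.0 for a perfect circle
    # and about 0.785 for a square, so a threshold in between separates them.
    import math

    def compactness(points):
        """Return 4*pi*A/P^2 for a closed polyline of (x, y) points."""
        area, perimeter = 0.0, 0.0
        for i in range(len(points)):
            x1, y1 = points[i]
            x2, y2 = points[(i + 1) % len(points)]
            area += x1 * y2 - x2 * y1                # shoelace formula (2*A)
            perimeter += math.hypot(x2 - x1, y2 - y1)
        return 4 * math.pi * (abs(area) / 2) / perimeter ** 2

    def classify(points, threshold=0.9):             # threshold assumed
        return "circle" if compactness(points) > threshold else "rectangle"

    # A regular octagon is round enough to be classified as a circle:
    octagon = [(math.cos(i * math.pi / 4), math.sin(i * math.pi / 4))
               for i in range(8)]
    print(classify(octagon))                         # -> circle

A production recognizer would combine many such features (or a trained model) with the spoken command, but the pipeline, stroke in, labeled object out, is the same.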
Japan’s public broadcaster NHK has also developed an intelligent TV interface that watches you watching it. The TV set uses cameras and microphones to monitor the viewer and takes programming cues from these devices. Check out this video about the interactive TV of the future:
Another example of an Intelligent User Interface application is the Emotionally Reactive Television [HiTV], with its slogan: “If you don’t like the DVD/TV/Video program, just throw the HiTV Ball to it!”. This video shows the Emotionally Reactive Television [HiTV] as demonstrated at the Nordic Exceptional Trendshop in Copenhagen back in 2006.
The Brown University Robotics Group explores problems in human-robot interaction, robot learning, robot perception, autonomous control, dexterous manipulation, and game development. In this video, they present a robotic system that integrates naturally with a human user through an intelligent interface that responds to speech, gestures and human behavior.
Trends toward future Intelligent User Interfaces
Futurist Erwin Van Lun predicted the future of intelligent user interfaces back in 2005. The number of displays around us, and around the world, would explode; normal surfaces such as tables, mirrors or windows would turn into displays, and they would all respond to those around them. Some would be very small, fitting on the packaging of commercial products; others would be as large as our walls or ceilings. All those screens would behave like 3D windows into a virtual world. One could be projected into a totally synthetic world, or feel like having a window with a view of a real location on earth, or experience time travel to the past (great for historians) or into the future (great for strategic consultants, futurists and life coaches). All of those screens would respond to our gestures, finger movements, body postures, voices and especially our emotional expressions. And in these worlds, totally dedicated to the human user, artificial characters would appear completely indistinguishable from human beings, and that is what this website is about.
Research on IUI overlaps with research in NLP, animation and robotics, and there are more commonalities than differences. Intelligent interpretation of speech, or recognition of emotions in body postures, is a research area in its own right and can be regarded as part of IUI research, but it will eventually contribute to the realization of virtual, responsive 3D humans.
Background
The term Intelligent User Interface is a composite of three words: intelligent, user, interface.
The word intelligent dates back to the 15th century and is a back-formation from intelligence, or else comes from Latin intelligens, the present participle of intelligere.[1]
Intelligere means “to understand, to discern, to perceive” and is a compound of two elements: inter, meaning “between”, and legere, meaning “to choose”.[2]
From intelligere originates the word intellectus meaning “discernment, understanding”.[3]
The word user dates back to the 14th century and is a noun from the verb use (from Old French user - “use, employ, practice”). The sense “user of narcotics” dates from 1935, “user of computers” from 1967. The term user-friendly is said to have been coined by software designer Harlan Crowder as early as 1972.
The word interface appeared in 1962, a compound of two words: inter and face.
Inter originates from Latin and means “among, between”, from Proto-Indo-European (the hypothetical reconstructed ancestral language of the Indo-European family) enter - “between, among” (compare with Sanskrit antar, Old Persian antar, Greek entera, Old Irish eter, Old Welsh ithr, Gothic undar, Old English under), a comparative of en- meaning “in”.[4]
The word face dates back to the late 13th century, meaning “front of the head”, from Old French face - “face, countenance, look, appearance”, from Vulgar Latin facia (Latin facies) - “appearance, form, figure”, and secondarily “visage, countenance”, probably related to facere - “to make”. The word face replaced Old English andwlita (from the root of wlitan - “to see, look”) and ansyn, the usual word (from the root of seon - “see”). In French, the use of face for “front of the head” was given up in the 17th century and replaced by visage (older vis), from Latin visus - “sight”.[5]
The word interface means a point of interaction or communication between a computer and any other entity, such as a printer or human operator.[6]
The term Intelligent User Interface was first used in 1988 during the ACM/SIGCHI and AAAI Workshop on Architectures for Intelligent Interfaces: Elements and Prototypes. The workshop initiated the development of the field of Intelligent User Interfaces.
Three years later, in 1991, the workshop organizers Joseph W. Sullivan and Sherman W. Tyler published a book titled Intelligent User Interfaces, which collected the main papers presented at the workshop and explored artificial intelligence techniques used to improve user interfaces.
Intelligent User Interface pages
Although we use chatbot as the main synonym on this website, please do not be confused: there are more than 161 synonyms in use by academics, businesses and intelligent user interface enthusiasts! It is simply a matter of reading between the lines.
Please check out our main directory with 1376 live intelligent user interface examples (an overview maintained by the developers themselves), our vendor listing with 253 intelligent user interface companies, and our intelligent user interface news section with already more than 368 articles! Our research tab contains lots of papers on intelligent user interfaces, 1,166 journals on intelligent user interfaces and 390 books on intelligent user interfaces. The research section also shows which universities are active in the intelligent user interface field, indicates which publishers are publishing journals on humanlike conversational AI, and lists academic events on intelligent user interfaces. Also check out our dedicated tab for awards, contests and games related to the intelligent user interface field, and various forums like our AI forum run by intelligent user interface enthusiasts. You can also add any intelligent user interface created by yourself or your colleagues to our intelligent user interface directory. Please do not forget to register to join us in these exciting times.
Alternative usage of Intelligent User Interfaces
The term Intelligent User Interface has also been used to describe a user interface that responds in an intelligent way, but not necessarily via conversation.
In this video, scientist, artist, inventor and futurist Connor Dickie talks about his Kameraflage display technology, which allows images to be integrated into clothing that can be seen only by cell phones or digital cameras, not by the naked eye.
The topic of using mobile devices as an extension of human senses was described back in 2007 by futurist, trend analyst and professional speaker Erwin Van Lun in his article See more than your eye can. It is actually part of a completely different trend: the rise of cyborgs, real humans enhanced with technology implanted in their bodies.
Kevin Warwick, researcher and Professor of Cybernetics at the University of Reading in England, can serve as an example of the world’s first cyborg. He implanted a silicon chip transponder into his left arm and connected it to his nervous system. In this video, professor Warwick talks about ultrasonic senses, brain-to-brain telepathic communication, and the therapeutic benefits of his experiments.
Intelligent human-computer interfaces may also give rise to ethical discussions. For example, Amsterdam’s Schiphol airport uses this technology to find metals and explosives hidden underneath clothing. Body-scanning equipment was introduced at the airport’s security checkpoints that scans passengers’ bodies from head to toe with radio waves and renders them into a 3D image. The technology was so advanced that it showed an almost naked passenger (except for skin color and texture, obviously). This initiated a huge privacy discussion. To avoid embarrassing situations with local security officers focusing on certain passengers, the scans are viewed by colleagues at a distance in another part of the building, and deleted after they have been viewed and checked.[7]