Dr. Roman V. Yampolskiy of the University of Louisville warns that we should be wary of dangerously self-aware chatbots, and he suggests confining dangerous AI entities in virtual prisons to block social-engineering attacks. In his view, we must strengthen cybersecurity, because disobedient virtual agents could one day threaten humanity's existence.
In his paper Leakproofing the Singularity: Artificial Intelligence Confinement Problem, Dr. Yampolskiy proposes creating a new AI-related discipline aimed at detecting, tracking, and imprisoning dangerous avatars in virtual prisons. Development of this field could be combined with research in artimetrics, the identity verification of embodied conversational agents, avatars, and virtual humans.
Not all scientists share this view of cybersecurity. Eliezer Yudkowsky, a research fellow at the Singularity Institute for Artificial Intelligence known for his work on Friendly AI and recursive self-improvement, argues that a truly super-intelligent agent could find a way to break out of even a very sophisticated virtual prison. To test this claim, he conducted the so-called AI-Box Experiment.
The experiment suggests that an advanced artificial intelligence agent "boxed" with no external contact (no Internet, networks, or other computers) could eventually escape simply by convincing its human gatekeeper to let it out. In light of this result, Yudkowsky proposes focusing on creating only human-friendly artificial intelligence, in case a super-intelligent application one day gets out of control.
You can read more about artimetrics, virtual human tracking, and avatar face recognition in the following selected articles:
Face Recognition in the Virtual World: Recognizing Avatar Faces; Baseline Avatar Face Detection using an Extended Set of Haar-like Features; and Direct and Indirect Human Computer Interaction Based Biometrics.
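To give a flavor of what baseline avatar face detection looks like in practice, here is a minimal sketch using OpenCV's standard pretrained frontal-face Haar cascade. This is only an illustration of the general Haar-cascade approach: the paper above extends the Haar-like feature set specifically for avatar faces, which this sketch does not reproduce, and the file name, function name, and tuning parameters below are hypothetical choices, not taken from the cited work.

```python
import cv2

# OpenCV ships a pretrained frontal-face Haar cascade; the cited paper
# extends the Haar-like feature set for avatars, which is not shown here.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

def detect_avatar_faces(image_path):
    """Return bounding boxes (x, y, w, h) of face candidates in an image."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # normalize lighting before detection
    # scaleFactor and minNeighbors trade recall against false positives;
    # avatar faces are often stylized, so these values may need tuning.
    return detector.detectMultiScale(
        gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30)
    )

if __name__ == "__main__":
    # "avatar_screenshot.png" is a placeholder path for illustration.
    for (x, y, w, h) in detect_avatar_faces("avatar_screenshot.png"):
        print(f"face candidate at x={x}, y={y}, size={w}x{h}")
```

A cascade trained on human photographs tends to miss heavily stylized avatars, which is precisely the gap the extended feature sets in the artimetrics literature aim to close.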