I saw an interesting article in the Cambridge alumni magazine, explaining that a Professor of Philosophy is teaming up with a co-founder of Skype to study the “potentially cataclysmic effects of human technology”. The article goes on to focus on AI as a potential source of future trouble. It’s short on detailed discussion, but it’s interesting that academics now view this as an area for future research. The comment that most of the thinking in this area is happening outside academia should ring true for members of this forum.
You can read the article on this link: (sorry about the terrible magazine reader interface, jump to pages 24-25)
http://issuu.com/cambridgealumnirelationsoffice/docs/cam_68_composite_online-100dpi_opt?mode=window&backgroundColor=#222222
I take a much less pessimistic view, because I think slow progress in AI will give us time to figure out how to manage these risks while we develop increasingly useful and autonomous AIs. Just as we have roughly figured out how to make industrial machines safe for humans (think of the cut-out that stops a shredder from pulling you in when it grabs your tie) and how to educate children so they don’t all become psychopaths, so we should be able to handle sophisticated AIs.