
National Robotics Week: The Technological Singularity

Robotic technologies that we once saw only in the realm of science fiction are quickly becoming reality. Advancements continue at a dizzying rate, demonstrating a broad array of possibilities and uses for the technology. Prosthetic limbs, healthcare, national security, communication, and even artificial intelligence programs are just some of the areas in which robotic technologies have been integrated into our everyday lives.

National Robotics Week continues today with an excerpt from Murray Shanahan’s The Technological Singularity. In this book from the Essential Knowledge Series, Shanahan discusses the hypothetical event in which artificial intelligence would be able to adapt itself without human programming—commonly known as the “technological singularity.” The excerpt describes how advanced AI could communicate with humans.

What would it be like to interact with an AI? With more direct means at their disposal for the transmission of information, the multiple intelligent threads within the system wouldn’t need to use human-like language to communicate with each other or to coordinate their activities. But this doesn’t imply that the system would be unable to use language to communicate with humans. A good model of human behavior, the sort of model a superintelligent AI would be able to construct, would necessarily incorporate a model of the way humans use language. The AI would be adept at exploiting such a model, deploying words and sentences to gather information from humans, to impart information to humans, and to influence human behavior in order to realize its goals and maximize its expected reward.

The mechanisms for dealing with language that this sort of engineered superintelligence would use seem so different from those found in the human brain that it’s questionable whether it could be said to understand language at all. When humans speak to each other, there is the shared assumption of mutual empathy. You understand me when I say I am sad because you have experienced sadness yourself, and I have an expectation that your actions, whether sympathetic or harsh, are at least informed by this understanding. This assumption would be unwarranted for an AI based on a sophisticated combination of optimization and machine learning algorithms. Such an AI would be perfectly capable of using emotive language in imitation of humans. But it wouldn’t do so out of empathy. Nor would it be out of deceptive malice. It would be for purely instrumental reasons.

The upshot would be a powerful illusion when talking to the AI. We might call it the illusion that “someone is at home.” It would seem as if we were interacting with something—with someone—like us, someone whose behavior is to some extent predictable because they are like us. To make the illusion complete, the AI could use an avatar, a robot body that it temporarily inhabits in order to participate directly in the world and on the same apparent terms as humans. (Indeed the AI could inhabit multiple avatars simultaneously.) This would be a handy trick in many ways. But above all, it would expedite linguistic behavior, enabling the AI to use facial cues, body language, and so on, as well as to engage in cooperative physical activities with humans.



The MIT PressLog is the official blog of MIT Press. Founded in 2005, the Log chronicles news about MIT Press authors and books. The MIT PressLog also serves as a forum for our authors to discuss issues related to their books and scholarship. Views expressed by guest contributors to the blog do not necessarily represent those of MIT Press.