National Robotics Week: Developmental Robotics

In our final post in celebration of National Robotics Week, Matthew Schlesinger and Angelo Cangelosi discuss science’s progress toward creating machines that appear to “think,” express themselves, and have authentic experiences like our own. Schlesinger and Cangelosi are the authors of Developmental Robotics: From Babies to Robots, a comprehensive overview of developmental robotics, a field that takes direct inspiration from the developmental and learning phenomena observed in children’s cognitive development.

Depending on where you look, 2016 has either started out as a watershed year for intelligent machines or, instead, as a good source of material for late-night comedy. To be fair, this is not an atypical year: over the last six decades, the field of AI and autonomous robotics has produced countless successes as well as failures. It might even be argued that the failures are more valuable and instructive, as they illuminate how far we have yet to go on the path toward achieving artificial, human-level intelligence.

A fundamental yardstick for measuring our progress toward “thinking machines” was first presented by Alan Turing in his 1950 paper, “Computing Machinery and Intelligence.” Turing’s key insight, embodied in what later became known as the “Turing Test,” was that a thinking machine doesn’t actually have to think—it only needs to be good enough at conversation that a person chatting with the machine would be unable to tell whether they were conversing with a machine or with another human.

Perhaps not surprisingly, Turing’s proposal has inspired a wide variety of research strategies, including those that seek to leverage the same kinds of cognitive and linguistic mechanisms that humans use, as well as clever “impostors” that exploit lower-level social and conversational heuristics (e.g., the Eugene Goostman chatbot). Collectively, these strategies are part of what may be described as a “front-door” approach to passing the Turing Test, insofar as they are explicitly designed to engage in conversation and to mimic the same skills and knowledge that humans use.

Ultimately, however, it might turn out that some of the most important steps toward passing the Turing Test—or, to put it more precisely, toward creating machines that appear to “think,” express themselves, and have authentic experiences like our own—may have little or nothing to do with the test itself. That is, there is a back-door approach, in which ongoing efforts to solve more fundamental problems in robotics and human-machine interaction might “get us there” without explicitly trying. We’ve highlighted a few steps in this direction.

We’re born to attach

John Bowlby’s theory of infant-caregiver attachment is built on the idea that human evolution has shaped a number of basic instinctive behaviors, including crying and smiling in infants, as well as soothing, playing, and protective behaviors in parents. The attachment mechanisms that Bowlby described are not limited to parent-child relationships (they surely include our family pets), but may also extend seamlessly to robots in our environment, like the Roomba that vacuums our floors. (Given what we know about Mori’s “Uncanny Valley,” it is probably noteworthy that these human-machine attachments do not depend on the robot having a human-like appearance.)

We also have a “theory of mind”

Having a “theory of mind” means that we understand that other people have their own internal thoughts, feelings, and mental experiences. It would be silly to attribute internal experience to a machine, and yet we happily attribute intentionality to 2D shapes that move around a screen and tell stories about their goals. Like becoming attached, perhaps the tendency to anthropomorphize anything that moves around with apparent purpose is in our blood too (think: R2-D2 and BB-8).

And we love our digital assistants

We are immersed in a digital environment. We’ve learned that we can speak naturally to Siri, Google Now, and Alexa, and that their ability to understand us is surprisingly good. In the same way that many of us have become dependent on GPS to navigate (just think of the mild panic you feel when you’ve been driving for five minutes and then realize that you left your phone at home!), imagine how it will feel when these human-machine interactions begin to take up the bulk of our daily activities—in our smart homes, in our self-driving cars, at our computer workstations, and so on. We do depend on the digital devices in our lives, maybe even more than we’d like to admit.

In short: we are primed to form relationships with intelligent machines and robots, and we reflexively view their actions as goal-directed. On top of that, we have made ourselves helplessly dependent on the digital devices we use every day. As the holy trinity of machine learning (natural language processing, image understanding, and semantic networks) continues to improve at breakneck speed, we are surely headed toward machines that will fit into our lives fluidly and effortlessly, just as the other humans around us already do. It’s tempting to agree with Turing when he suggested that thinking machines don’t have to think—they just have to make us believe that they think.