These Drones Have a Mind of Their Own

David Gunkel, Professor in the Department of Communication at Northern Illinois University and author of The Machine Question, continues the discussion of Amazon's plan for drone delivery by exploring the ethical questions it raises, and more. The following are comments from Dr. Gunkel:

Drones are everywhere. Not necessarily in the skies above our heads just yet, but certainly in the news media, in informal discussions around the office, and front and center in the national consciousness. Until December of 2013, these conversations had largely been about battlefield drones, or Unmanned Aerial Vehicles (UAVs) in military parlance. And this is clearly evident in, for example, the heated discussions surrounding the Obama administration's rather controversial drone policy; the recent publication of studies from Human Rights Watch and other watchdog groups concerning civilian casualties in places like Yemen and Pakistan; and (for many of us) the rather surprising reports about the number of drone operators now exhibiting symptoms of post-traumatic stress disorder (PTSD), a condition previously thought to be limited to individuals on the ground in the theater of battle, not insulated from it by layers of technological mediation and vast global distances.

But Amazon.com CEO Jeff Bezos changed all that in an interview on the popular television news magazine 60 Minutes on the Sunday evening of December 1st. In the course of his conversation with Charlie Rose, Bezos unveiled a new Amazon package delivery strategy: drones. And if his plan materializes, drones will, in fact, be everywhere: picking up packages from the local Amazon fulfillment center; flying across suburban soccer fields and schoolyards; navigating through densely populated city blocks; avoiding busy intersections and roadways; and depositing packages of up to five pounds outside our front doors just thirty minutes after we click the submit button on the website. It is an impressive proposal, and one that promises to all but eliminate the wait time for getting stuff, at least until we fully develop that Star Trek transporter technology.

But perhaps the most significant part of this scheme is not the use of drones for domestic delivery (we've actually heard that one before in various forms), the regulatory difficulties that must be resolved before unleashing a swarm of unmanned aerial vehicles into the national airspace, or even the privacy concerns of consumers who want virtually instantaneous access to online products but still hope to retain some modicum of control over who or what gets to peek into their windows and backyards. No, the most surprising part of this plan is the fact that these drones, unlike their military counterparts, will not be operated by a human being sitting at what is arguably a video-game console "turned up to eleven." As Bezos explained to Rose, the Amazon Octocopter drones are autonomous: you simply feed them GPS coordinates, and they fly themselves to the destination and back again. If the deployment of military UAVs adheres to the basic AI strategy of keeping a human being in the loop (or at least on the loop), the Amazon drones appear to be pushing in the direction of fully autonomous mechanisms.
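To make that distinction concrete, here is a minimal sketch, in Python, of the kind of fully autonomous out-and-back flight Bezos describes, where the human's entire contribution is a pair of GPS coordinates. Everything in it is hypothetical: Amazon has published no details of the Octocopter's software, so the drone object, its methods, and the arrival radius are illustrative assumptions rather than a real API.

```python
from dataclasses import dataclass
import math

@dataclass
class Waypoint:
    lat: float  # latitude in degrees
    lon: float  # longitude in degrees

def distance_m(a: Waypoint, b: Waypoint) -> float:
    """Approximate ground distance in meters (equirectangular
    approximation, adequate for a delivery drone's short hops)."""
    dlat = math.radians(b.lat - a.lat)
    dlon = math.radians(b.lon - a.lon) * math.cos(math.radians(a.lat))
    return 6_371_000 * math.hypot(dlat, dlon)

def deliver(drone, destination: Waypoint, home: Waypoint,
            arrival_radius_m: float = 2.0) -> None:
    """Fly out, drop the package, fly home. Once called, no further
    human judgment enters the flight; onboard guidance does it all."""
    for target in (destination, home):
        while distance_m(drone.position(), target) > arrival_radius_m:
            drone.steer_toward(target)  # hypothetical onboard guidance call
        if target is destination:
            drone.release_package()
    drone.land()
```

The point of the sketch is the shape of the control loop: after deliver() is invoked there is no console, no joystick, and no operator. The human is neither in the loop nor on it.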

Machine autonomy has long been a staple of science fiction. We see it, for example, in the I, Robot stories of Isaac Asimov, the artificially intelligent HAL 9000 computer of Stanley Kubrick's 2001: A Space Odyssey, the android Lt. Commander Data of Star Trek, and the robotic Laurel and Hardy, C-3PO and R2-D2, of Star Wars. These fictional robots "have a mind of their own," which is what often structures the narrative and produces the story's dramatic tension. Autonomous machines, however, are no longer science fiction. They are here, they are now, and the Amazon package delivery drone is but one example. These mechanisms might not have "a mind" in the classical philosophical sense of the term, or in the way we typically portray such things in fiction, but they are designed to operate independently in our world and to make decisions that do, for better or worse, have an impact on us and our social reality.

Designing machines for autonomous operation is clearly necessary for these devices to be economically viable and effective: able, with little or no expensive human involvement, to navigate through a complex and crowded airspace on their way from the warehouse to your house. Such devices will need to be designed with proficient object detection methods, collision avoidance systems, and sophisticated decision-making capabilities so that we can depend on them to operate safely above our heads instead of, as Bezos puts it (somewhat comically), "landing on someone's head as they are walking around their neighborhood." But machine autonomy also has a dark side, vividly illustrated in science fiction by the robot-run-amok scenario or the proverbial robot apocalypse, in which the machines stage a coup d'état and turn us into their playthings, or worse. Although science fiction clearly exaggerates things for dramatic effect, the basic questions raised by these techno-myths already apply to contemporary technology like Amazon's drones: How much autonomy should we design into these systems? How reliable are machine-generated decisions? Can we (or should we) count on them? And if something does go wrong, who or what is responsible for the error? In other words, who or what is culpable when decision making and real-world action are no longer under human direction and control? This complex set of issues comprises what I call the machine question. And it is this question that, I believe, is one of the defining issues of our time.
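To see how quickly these questions become concrete, consider a toy collision-avoidance policy, sketched below in Python purely for illustration; the thresholds, action names, and priority ordering are invented for this example and are not drawn from any actual Amazon system. Even a few lines of code embody value-laden design choices: a person gets a wider berth than an inanimate obstacle, a failing battery outranks an on-time delivery, and logging every decision is one modest, partial answer to the culpability question.

```python
import logging
from enum import Enum, auto

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("avoidance")

class Action(Enum):
    CONTINUE = auto()        # proceed along the planned route
    CLIMB = auto()           # gain altitude to clear the obstacle
    HOLD_POSITION = auto()   # hover in place until the path is clear
    EMERGENCY_LAND = auto()  # abort the delivery and set down safely

# Hypothetical thresholds: the designer, not the drone, picks these numbers.
CRITICAL_DISTANCE_M = 5.0
CAUTION_DISTANCE_M = 20.0

def decide(obstacle_distance_m: float, obstacle_is_person: bool,
           battery_pct: float) -> Action:
    """A toy, priority-ordered decision rule for an autonomous drone."""
    # People get double the normal safety margin: an ethical choice
    # written directly into the arithmetic.
    caution = CAUTION_DISTANCE_M * (2.0 if obstacle_is_person else 1.0)
    if battery_pct < 10.0:
        action = Action.EMERGENCY_LAND
    elif obstacle_distance_m < CRITICAL_DISTANCE_M:
        action = Action.HOLD_POSITION
    elif obstacle_distance_m < caution:
        action = Action.CLIMB
    else:
        action = Action.CONTINUE
    # An audit trail: one small, concrete response to the question of
    # who or what can later be held to account.
    log.info("distance=%.1fm person=%s battery=%.0f%% -> %s",
             obstacle_distance_m, obstacle_is_person, battery_pct, action.name)
    return action
```

Calling decide(3.0, obstacle_is_person=True, battery_pct=80.0), for instance, returns Action.HOLD_POSITION and leaves a record of why. Whether such a record settles, or merely documents, the question of responsibility is precisely what remains open.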