Edward Jones-Imhotep’s book The Unreliable Nation is, broadly, about how technological failures define nature and national identity, and more specifically about how this happened in Canada during the Cold War. In this blog post, he writes about the series of hurricanes that have battered the United States, and asks how natural disasters and technological failures define one another.
What does a natural disaster look like?
On the evening of Sept. 9, as Hurricane Irma bore down on Florida’s Gulf Coast, casting aside a devastated Caribbean and churning its way through the Florida Keys, news outlets scrambled to show what was coming: mounting winds, torrential rain, coastal waters drawing back ominously, satellite images picturing a cyclone larger than the entire state. The full scope of the disaster would make itself known over the coming days. But in the meantime, one image could capture both the extent of the danger and the devastation about to strike: a map of power blackouts spreading across the state. With night falling, it was the kind of vast, redoubled darkness that only a failing electrical infrastructure could create—literally, a casting of natural disaster as technological failure.
Why do we represent the violence and threat of nature through images like this? One reason is our deep reliance on electrical power grids. As Jennifer Lieberman and others have taught us, in times of disaster and emergency, electrical infrastructures translate natural effects into other kinds of suffering and danger: paralyzed cities, intensive-care wards on backup generators, darkened homes and isolated humans. Power infrastructures work as proxies for the kind of modern industrialized lives we mostly take for granted, the ones disasters threaten to disrupt. As Bruno Latour has put it, “technology is society made durable.” Repair and recovery efforts are so critical not just because they get these infrastructures working again, but because they also mend the important social relations that go with them.
But these kinds of representations capture something else, and possibly something deeper. They illustrate how technological failures help define nature itself. One trait associated with technologies since the Industrial Revolution has been their apparent ability to operate outside nature: relentlessly, regularly, tirelessly. But there has always been another, less publicized side to machines; one that connects them intimately to the natural world. Breakdowns, malfunctions, and failures have pervaded our everyday life with technologies. And throughout that history, those effects have been caused by arguably natural processes: decay, degradation, wear, cracking, and corrosion. Our very understandings of the natural world—its characteristics, its power, its limits—are shaped by the problematic behaviors of the technologies that surround us. Technological failures are “natural” states in a deep sense. So natural, in fact, that we pay almost no attention to how they shape our world.
At my university, I teach a course on history and disasters. My students learn early on how technologies shape the very definition and possibility of “natural” disasters. Machines and infrastructures aren’t just the casualties of those events. They help to create and even define how we understand “natural” and “social” catastrophes. The disaster of Hurricane Katrina emerged not just from a powerful tropical cyclone in the Gulf of Mexico, but from an ecology of poverty, under-privilege, and inequality in New Orleans. Natural catastrophes are never unproblematically natural: their catastrophic qualities are made possible by social, political, and economic determinants of vulnerability. We can add the technological to that list. Buildings, power grids, bridges, and levees are technologies, and it is precisely the discriminate damage to them—the way they can shield and protect, but also shatter and extinguish lives when they crumble and collapse—that translates natural phenomena into social, political, and economic cataclysms.
Historically, then, technologies are not just vehicles for controlling nature, or instruments for recording it. They are not just tools or devices. They are also the media through which nature gains specific powers and abilities in reshaping societies and cultures, sometimes with devastating effects. Out of those effects and in the wake of those disasters, people craft identities and meanings. From the geological, social, and conceptual ruptures opened up by earthquakes and eruptions, tsunamis, and typhoons, people constantly imagine and reimagine their world, including the relationship between natures and machines.
If technology is society made durable, then we are surrounded by instances where nature is technology made fallible. My book, The Unreliable Nation, explores one of them. It looks at failures in another kind of infrastructure: the massive shortwave communications blackouts that blanketed portions of the Canadian North during the Cold War. For two decades after World War II, scientists and engineers argued that those failures were the product of one of the Cold War’s “hostile” natures: a distinctive Northern nature of violent magnetic storms and auroral displays that characterized Canada uniquely. The disturbances they tracked circled the globe, geomagnetic storms so powerful and widespread they could reverse local magnetic fields at the Earth’s surface, darken power grids, and blind radar defenses, mimicking the effects of natural disasters. Under the threat of Soviet invasion and U.S. occupation, these scientists and engineers wove that vision of hostile nature and technological failure into a sweeping political program to remake the post-war nation, and to express vulnerability, identity, status, and power during the early Cold War.
Their project was one particularly intricate way that technological failures have defined our understandings of nature. But all around us, those connections are made and remade in the problematic behavior of our technologies. Paying attention to these kinds of failures, from the spectacular to the mundane, teaches us that the cultural life of technologies doesn’t end when machines fail. It’s only just beginning.