How did you come to study this topic and write You’ll See This Message When It Is Too Late?
I started down the path of thinking about the aftermath of data breaches because I was interested in understanding how the costs of these types of incidents were tallied and distributed. But I found it often takes a fair bit of time for all the dust to settle and for organizations to even figure out how much a data breach is going to end up costing them, so I ended up focusing more and more on historical examples of breaches that were already at least a year or two old. The book came out of trying to make sense of all the complicated factors that arise in figuring out who is responsible for breaches and, by extension, who should pay for them. I came away feeling that we really do ourselves a disservice by focusing on data breaches at the moment when they’re first discovered or announced, because many of the most interesting, complicated, and important elements aren’t revealed or even settled until much later. That’s why I wrote the book: I wanted to explore the richness and complexity of that aftermath and remind people of what a significant impact these breaches can have many years down the line, even if it’s not always (or ever) the impact you would most hope for them to have.
Were there any findings in your research that surprised you?
The thing I’m always most struck by when I look at the cases in the book is how much sympathy I end up having for the breached companies. Many of them are wildly negligent, make terrible decisions about security, and ignore important warning signs, and in that sense they richly deserve the criticism, bad publicity, class action lawsuits, and Federal Trade Commission investigations they face. But I’m also very conscious of how much conflicting guidance there is around cybersecurity and how little clarity policy-makers have provided about what is expected of organizations in this space and what their specific responsibilities are. None of that excuses the poor choices that many of these organizations make, but I do firmly believe that breached companies are often unfairly cast as the sole deciders of their own fate when, in fact, many other intermediaries and stakeholders share some responsibility for helping them detect and defend against these incidents.
It seems massive data breaches are in the news almost perpetually. How much alarm should the average person take in these cases?
That’s a tricky question, and it depends a lot on what kind of data gets stolen and what you’re most worried about someone doing with that data. If your credit card number gets stolen, everyone pretty much knows how to cancel their card at this point. If your Social Security number, medical records, or credit report is breached, then it becomes a question of whether you want to freeze your credit or invest in some sort of identity monitoring/insurance service. My own inclination is to do the former (freeze credit) but not the latter (purchase an identity protection service). On the other hand, if it’s your password or online credentials that are stolen, then it makes sense to change your passwords and perhaps implement two-factor authentication for your high-value accounts. I think it’s generally sensible not to be too alarmed by data breaches—they’re too frequent to go into crisis mode every time a new one is announced—but also to be willing to do some of the practical things, like change passwords or monitor credit reports, that can have a real impact not just on your own online presence but also on whether your resources, accounts, and devices can be used to attack others. So while I would typically caution people not to panic, I do think it’s possible to go too far in the direction of ignoring data breaches and refusing to do even basic, worthwhile things like reset your home router when the FBI issues a request, or change a compromised password.
On the same note, what can we do to better protect our data?
As individuals, I think we don’t have a huge amount of power beyond routine security hygiene (strong passwords, multi-factor authentication, software updates, and so on). One of the reasons I wrote the book is that I think a lot of the real power to protect our data lies with the big, centralized intermediaries of the Internet—the Internet service providers, payment processors, online hosts, ad networks, etc.—who often have the capabilities to identify and mitigate certain types of threats much more effectively than anyone else but are wary of taking on that responsibility. Moving forward, I would like to see policy-makers pay a lot more attention to the question of what those stakeholders might be well suited to doing rather than focusing on individuals and the actions they can take.
Can you say what you’ll work on next?
Lately, I’ve been very excited about looking at how the cyberinsurance markets are developing. I write a little bit about this towards the end of the book, but I think the intersection of traditional risk management tools in the insurance industry with the particular nature of cyber threats is creating a lot of interesting challenges, some of which we’ve faced before and some of which we haven’t. So I’m interested in trying to dive more deeply into that and see if I can disentangle what about cyber threats seems genuinely new (in terms of actuarial modeling) and what challenges are more a product of these being relatively new threats that we don’t have a lot of good data about. I’m also interested in the convergence between insurance firms and security firms as the former group tries to fill the gaps in the data and better understand this space. And I’d like to look at the money being spent by governments on cybersecurity workforce development: what kinds of programs that money is going towards, who they’re targeting, how well they’re working, and what metrics governments are using to measure their success in this space.