People-Centric Policy for a Digital World
Professor Hartzog and coauthor propose holistic, human solutions to prevent data breaches.
In September, The Record spoke with BU Law Professor Woody Hartzog and his coauthor, George Washington Law Harlan Research Professor Daniel J. Solove, about their recently published book, Breached! Why Data Security Law Fails and How to Improve It (Oxford University Press 2022). Hartzog and Solove examine the continued escalation of data breaches, its connection with information privacy issues, and how holistic policy may better secure our data.
Listen to an unabridged version of the interview.
Q&A
The Record: How did the research for your book first begin?
The very first data breach notification law was passed in California in the early 2000s. Since then, all states have passed similar laws, and the FTC has been filing complaints against companies for unfair data security practices. While it’s admirable that lawmakers and regulators are thinking about data security, it continues to be a growing problem.
My coauthor, Daniel Solove, and I wondered: Why aren’t new data security laws and increased enforcement shrinking the size and number of breaches? We looked into the literature around cybersecurity and computer science, and then compared that to efforts in the law of data security to see where things might be going wrong.
The Record: What are some of the human limitations that need to be accepted to proceed with preventing and mitigating data breaches?
The law tends to see people as rational, consistent actors. It often presumes that if you are given a series of choices and safeguards, you’ll optimize those to maximize protections. The problem is that people are careless, and they make mistakes all the time. This is well known in the cybersecurity literature, but for some reason, the memo hasn’t yet made it to lawmakers and regulators.
Currently, when a breach happens, data security law requires companies to send consumers a breach notification first. These rules seem to presume that: A, people are going to read the notice and B, they are going to understand the threat and take steps to mitigate it, by freezing their credit, changing all of their passwords, or monitoring their bank accounts every single day to watch for fraudulent charges. Some may start to act on this guidance, but people can’t do that indefinitely. We know that the threat from data breaches can extend for years and across a broad spectrum.
My coauthor and I argue that data security law is asking far too much of individuals, as though they are superhuman and have the capacity to memorize one thousand different passwords, or easily recognize targeted fraudulent emails.
By failing to plan for the fact that humans make mistakes, the law actually makes us more vulnerable. In Breached!, we argue for a framework that has lots of redundancies built in and that assumes people are not going to read the fine print. Our framework aims to protect people no matter what reasonable or foreseeable mistakes they’re likely to make.
These human errors are just the tip of the iceberg of deeper structural problems that the law has yet to tackle as a meaningful part of data security. One of these is that we collect too much data in the first place. Data that isn’t collected can’t be breached.
We need a much more holistic approach that holds more than just the breach entities accountable and paints a fuller picture of the kinds of contributors to the risk of data breaches.
For instance, responsibility could be placed on all actors to incorporate better security design values into their software and hardware tools. There’s a stronger relationship between the concept of privacy and the concept of data security than many lawmakers and regulators recognize. Our failure to confront information privacy problems is closely related to the rise in the number of breaches, even though they can be thought of as distinct concepts. Organizationally and legally, we should have a tighter connection between information privacy issues and breaches.
The Record: Are there other industries or models that policymakers may be able to look at to glean new, holistic approaches for preventing data breaches?
One way to think about data security law is to consider approaches from public health. It is similar to data security in lots of ways, including using some of the same terminology, like “viruses.” It’s important for us to engage in practices like hand washing, not least because our practices affect the vulnerabilities of other people, which is true in data security as well.
It is also similar in that the response requires collective messaging and collective responsibility by large organizational actors to look out for public health, and to build in resiliency. We know contagious diseases and breaches are going to happen, and we need to focus not just on responding to current breaches but preparing for future threats. We also look at environmental factors and social causes that affect people’s behavior.
All these things provide good lessons for lawmakers and regulators in the data security space, because neither is a problem that can be solved by individual responsibility; both require deep, holistic change.
The Record: Could you share any guidance you may have for the average digital citizen in protecting their data?
When it comes to things that you can do to better secure your data, my advice is: not very much. One of the great myths is that data security is an individual responsibility, rather than a collective structural problem that requires collective structural solutions.
It promotes a false sense of security and leaves us thinking, ‘If we just monitor our bank accounts and choose good passwords, then our information is going to be safe.’
There are certainly some things that are good for individuals to do. Anytime you are offered the option to turn on two-factor authentication for your accounts, that’s a good idea. You want to choose different and complex passwords; maybe use a password manager if you’re interested.
But none of these things are going to protect you nearly as well as a holistic approach by lawmakers to data security: one that looks at all the different actors, that has us collecting less data overall, that has us mapping where our information goes in more effective ways, and that has meaningful enforcement mechanisms. It simply asks too much of us to be responsible for securing all of the data that is stored by other entities and in very complex systems across the world.
The Record: When researching for the book, was there anything that surprised you?
One of the things that surprised me was the newness of the concept of data breaches. True data breaches didn’t start to become recognized until the late 1990s and early 2000s, when the first data breach notification laws appeared. We’re still in the early days of thinking about how to approach data breaches, relatively speaking. We don’t have to make breaches the center of the data security universe. That was something that really surprised us and guided us as we explored and made our recommendations.