Julissa Milligan, a visiting professor at the BU law school, gave the Cyber Alliance an overview of the EU’s new privacy regulation: the General Data Protection Regulation, or GDPR.
I’m not going to spend this whole blog post summarizing GDPR – lots of people have already made brief summaries, and going into full detail is well beyond the scope of what can be covered in a short post. Instead I’ll spend a little bit of time on GDPR, and then use the rest of this post as an excuse to talk about privacy more broadly.
GDPR is, in short, a regulation meant to protect the privacy of EU citizens and residents. It attempts to rein in needless or illegitimate data analysis, and it also legally enforces some security requirements for databases of personal information. Because of the global nature of today’s society, GDPR imposes requirements on non-European companies as well. If your company has any contact with Europeans, you can be sure that GDPR will affect you.
GDPR grants “data subjects” (individuals) several rights, requiring companies to implement new features to accommodate those rights. These include the right to erasure (you can request that a company delete all records pertaining to you), the right to data portability (you can request that you download all your data, ostensibly for the purpose of moving to a competitor service), the right “not to be subject to a decision based solely on automated processing, including profiling,” and some others.
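To make the erasure and portability rights concrete, here is a minimal sketch of how a service might support them. Everything here is invented for illustration (the `UserStore` class, its method names, the JSON export format); GDPR mandates outcomes, not any particular API.

```python
# Hypothetical sketch of two GDPR data-subject rights:
# the right to erasure and the right to data portability.
import json

class UserStore:
    def __init__(self):
        self._records = {}  # user_id -> dict of personal data

    def add(self, user_id, data):
        self._records[user_id] = data

    def export_user_data(self, user_id):
        """Right to data portability: return the subject's data
        in a structured, machine-readable format."""
        return json.dumps(self._records.get(user_id, {}), indent=2)

    def erase_user(self, user_id):
        """Right to erasure: delete all records pertaining to the subject."""
        return self._records.pop(user_id, None) is not None

store = UserStore()
store.add("alice", {"email": "alice@example.com", "history": ["login"]})
exported = store.export_user_data("alice")  # portable JSON copy
erased = store.erase_user("alice")          # all records deleted
```

In a real system, erasure would also have to reach backups, logs, and any data shared with processors, which is where compliance gets genuinely hard.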
GDPR also imposes requirements on “data controllers” and, separately, “data processors.” The controller must put some thought into how the data is protected, with the intention that they put standard security methods into use. Data processors are supposed to restrict the processing of personal data to “explicit and legitimate purposes,” though with very broad exceptions, including “legitimate interests” and user consent.
Interestingly, several of these requirements are not really privacy concerns: they also cover data security issues and extra utility for individuals.
As Mayank Varia is fond of saying, privacy and security are not the same thing, and anyone who deflects a privacy question by answering a security question knows that they’re deflecting. Security is a necessary condition for privacy – if non-authorized people can view your information, it’s not private. But it’s not sufficient – even if only authorized people can view your information, it may still not be private.
The focus on user consent is also interesting, considering the tendency for websites to put up quick click-through consent banners or make it difficult not to consent. The big win for GDPR, in my opinion, is the requirement that data subjects be able to revoke consent, easily, at any time. Making it easier for users to change their minds helps people overcome a practical barrier to privacy – “they already have a bunch of info about me, so why should I bother hiding future information from them?”
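The revocable-consent requirement can also be sketched in a few lines. This is purely illustrative (the `ConsentLedger` class and its methods are made up); the point is only that revoking must be as simple an operation as granting, and that processing checks the ledger every time.

```python
# Hypothetical sketch of revocable, per-purpose consent tracking.
import time

class ConsentLedger:
    def __init__(self):
        self._consents = {}  # (user_id, purpose) -> timestamp of grant

    def grant(self, user_id, purpose):
        self._consents[(user_id, purpose)] = time.time()

    def revoke(self, user_id, purpose):
        # Revocation is a single call, as easy as the original grant.
        self._consents.pop((user_id, purpose), None)

    def may_process(self, user_id, purpose):
        """Processing is only allowed while consent is on record."""
        return (user_id, purpose) in self._consents

ledger = ConsentLedger()
ledger.grant("bob", "marketing")
allowed_before = ledger.may_process("bob", "marketing")  # consent on record
ledger.revoke("bob", "marketing")
allowed_after = ledger.may_process("bob", "marketing")   # consent withdrawn
```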
Privacy, privacy, privacy – we hear about it all the time and yet we don’t seem to understand the concept at all. My own view on the subject was strongly influenced by Solove’s paper “‘I’ve Got Nothing To Hide’ and Other Misunderstandings of Privacy.”
Many people seem to view privacy as a means to hide harmful or embarrassing information about ourselves. This view of privacy is sufficient for many purposes. It gets us some obvious use cases – consider a gay person hiding their sexuality from those they believe will harm them as a result of their knowing. (This is especially compelling because it has legal implications in many countries, and in the U.S. as well prior to 2015.) It also covers some intellectual freedom issues – someone with an idea very against the mainstream might want to research the idea further in secret until they have a more convincing case for themselves, so that they don’t get mocked out of their community.
The “prevent harm” view of privacy has new implications in the digital era, when our actions leave records that span our entire lives. People say mean things on social media all the time. Thirty years from now someone is going to run for president and get flak for saying something dumb on the internet when they were thirteen.
Where the “prevent harm” view starts to run into trouble (at least, where some people seem to think it runs into trouble) is against the “I have nothing to hide” argument. The argument goes that if you haven’t done anything bad, then you don’t need privacy, because you can’t be harmed by the information being released.
(The standard response to this argument is “Then can I take a picture of you naked right now? Since you don’t have anything to hide, of course.” In my opinion, this response does not adequately address the core of the “nothing to hide” argument, so I won’t deal with it here.)
First of all, I’d like to point out that the argument “I have nothing to hide” is very different from “only wrongdoers have something to hide.” The assumption is that all things people want to keep hidden are illegal or immoral.* I think most of the people who subscribe to this argument legitimately do not have (m)any legal or moral secrets of their own. I can see why the argument would make sense, in that context. But the release of this information has consequences for people with life experiences that are perfectly legitimate but are difficult or frustrating to explain. I don’t think this argument gets past things like sexual orientation, or past embarrassments long since forgotten. (To be fair to the proponents of this argument, I usually hear it applied more toward law enforcement surveillance than toward privacy in general. That could be the topic of a whole other post, so I’ll leave it for now.)
Secondly, if you have nothing to hide, great! Good for you! Your experiences are not universal! Consider that you can still voluntarily give up your information, and that those who want to hold it closer to their chest should be allowed to do so as well.
But most importantly, privacy is about more than preventing harm. Privacy preserves individuals’ dignity and prevents chilling effects on society. My own current view of privacy is centered on free speech – knowing that my actions are private allows me greater intellectual freedom, an escape from self-censorship (or censorship imposed by someone else), and an escape from the needless judgment of others. I at least ought to have the right not to play. I feel as though it’s a very stereotypical American desire, the right to be left alone.
We’ve come a long way from GDPR. What does all this have to do with data privacy for corporations?
Well, as mentioned, the concept of privacy is still grossly ill-defined for something that is so important in this era. Given the sheer volume of information that may stay accessible for decades or centuries, we very much need a more sophisticated theory of privacy. This theory will have broad technological implications, and also strong legal and philosophical consequences, so we had better get a move on.
This post was written by Sarah Scheffler, a second-year Ph.D. student in computer science studying applied cryptography.