Q&A: LAW’s Danielle Citron Warns That Deepfake Videos Could Undermine the 2020 Election

Danielle Citron, a leading privacy scholar, joined the LAW faculty in July, and is a member of the Hariri Institute for Computational Science & Engineering’s Cyber Security, Law, and Society Alliance. Photo by Cydney Scott
Every presidential candidate needs to start preparing now, privacy scholar says
UPDATE, SEPTEMBER 25, 2019: Danielle Citron was named a 2019 MacArthur Fellow for her pioneering and policy-shaping work countering hate crimes, revenge porn, and other cyberspace abuses, work that has made her one of the nation’s leading privacy and constitutional law scholars. Read more here.
Danielle Citron delivered a TED talk in July at TEDSummit 2019, in Edinburgh. It will be released on TED.com on Wednesday, September 11, and available here at 11 am ET.
Danielle Citron is worried. Deepfake videos are here—you may have watched one already and not even realized it—and they could undermine the 2020 presidential election.
Citron, a leading privacy scholar who joined the School of Law as a professor of law in July and is a member of the Hariri Institute’s Cyber Security, Law, and Society Alliance, coauthored an essay published this week on the Carnegie Endowment for International Peace website. It calls for every presidential candidate to take immediate steps to counter deepfakes and outlines an eight-point deepfakes emergency campaign plan.
Why the urgency? Only the future of our democracy is at stake.
Deepfakes are hard-to-detect, harder-to-debunk, highly realistic videos and audio clips that make people appear to say and do things they never said or did. Enabled by rapidly advancing machine learning, they spread at lightning speed through social media. A recent example of the danger of manipulated video is a clip of Speaker of the House Nancy Pelosi (D-Calif.) that made it appear as if she were drunk and slurring her words. It drew more than 2.5 million views on Facebook, and while it was relatively easy to tell the video had been altered (Citron and other experts call it a cheap fake rather than a deepfake), it went viral anyway, with an assist from President Trump, who tweeted a clip that first aired on Fox News.
An expert on deepfakes, Citron testified on manipulated media before the House Permanent Select Committee on Intelligence in June. Warning that technology will soon enable the creation of deepfakes impossible to distinguish from the real thing, she told lawmakers: “Under assault will be reputations, political discourse, elections, journalism, national security, and truth as the foundation of democracy.”
BU Today talked with Citron about why we should be worried about deepfakes, how they could threaten the 2020 election, and what can be done about digital forgeries and other online disinformation.
Q&A
BU Today: How do you define deepfakes, and what about them concerns you?
Citron: We have long been able to doctor video and audio. In some respects, it’s nothing new. If you go on YouTube, you will find doctored Donald Trump videos, too. The Pelosi video is just a slowed-down version of a real video—and it makes her seem either drunk or as if she has ALS or some other degenerative disease. It’s very subtle, but it’s fairly easy to detect the fake, in part because the real video was out for a while. So we could compare the real video and the fake. Things like the Pelosi video are a low-grade threat—easily detectable and not that sophisticated.
Deepfake technology is a huge jump. It lets you manipulate existing video or fabricate video and audio out of whole digital cloth, showing people doing and saying things they never did or said.
So you can take 30 pictures of me and use these machine-learning algorithms and you can mine those pictures and create a video of me doing and saying something that I never did or said.
The threat landscape is pretty significant. Within nine months, they will be able to create audio and video fakes so good that you cannot tell the difference between fake and real. There have been some serious breakthroughs on the audio front. So it will lead to identity theft. Imagine if we have recordings of your voice—we call your bank, we wire all of your money. It’s pretty terrifying, right?
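[The mechanism Citron describes above is, in rough outline, the shared-encoder, per-identity-decoder autoencoder behind many face-swap tools: one encoder learns a generic representation of a face, and each person gets a decoder trained on photos of them. Below is a minimal, purely illustrative PyTorch sketch of that idea; the layer sizes and names are assumptions made for this example, not drawn from any actual deepfake tool, and no training loop is shown.]

```python
# Toy illustration of the face-swap autoencoder idea: a shared encoder plus
# one decoder per identity. At generation time, a frame of person A is
# encoded and then decoded with person B's decoder, producing B's face with
# A's pose and expression. All sizes and names here are illustrative only.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compress a 64x64 RGB face crop into a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstruct a 64x64 face crop for one specific identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

# Shared encoder; each decoder is trained to reconstruct its own person's
# face crops from the shared latent space.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# The "swap": encode a frame of person A, decode it as person B.
frame_of_a = torch.rand(1, 3, 64, 64)    # stand-in for a real face crop
fake_b = decoder_b(encoder(frame_of_a))  # B's face with A's pose/expression
print(fake_b.shape)                      # torch.Size([1, 3, 64, 64])
```

[Training both decoders against the same encoder is what lets person B’s decoder render person A’s expression as B’s face; repeating the swap frame by frame yields the kind of fabricated video Citron describes.]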
So even as we’re talking, people are working on this technology?
Even as we’re talking. It’s not just academic researchers. There are researchers who want to make money. There are a lot of financial incentives. There would be serious money in a deepfake of one of the Democratic candidates doing and saying something they couldn’t debunk, one that you could [use to] tip the election on its eve.
Or the night before diplomatic talks, some deepfake is shown to some high official, and it scuttles the talks and nobody knows about it. Deepfakes pose a unique danger. What’s different today is that the technology is emerging so rapidly, and with such sophistication, and with serious black hats behind some of these efforts, that it will be impossible to tell fake from real.
It sounds scary—and almost hopeless.
It’s not hopeless. Bobby Chesney and I wrote a very long paper for the California Law Review, and we said that, to the extent law can step in (and there’s no silver bullet), there’s some possibility for law. We talk about some proposals to change the law. There are possibilities for the role of intermediaries [platforms], and there’s self-regulation and education. It’s not that there’s nothing to do; it’s that no one tool is going to fix this. It requires a lot of moving pieces to come together, and those moving pieces we may not be able to regulate.
Why are lawmakers in Washington suddenly interested in modifying Section 230 of the 1996 federal Communications Decency Act, which provides Facebook, Twitter, and other platforms immunity from civil liability for user-posted content, when for so long it was considered virtually untouchable?
Section 230 is indeed the issue of the moment. Two trends are at work. The first is post-Cambridge Analytica anger at platforms: there is a bipartisan cry that platforms are too powerful, a power that Section 230’s broad immunity enables and enhances. The second is conservative claims that platforms are censoring them. That is empirically debatable, but it is leading to calls to change Section 230.
What if someone uses deepfakes to attack a candidate nine months from now—in the middle of the 2020 presidential race?
I think the campaigns need to have immediate-access relationships with Twitter and Facebook, so there can be immediate reporting. They need to take note of where their candidate is at every moment. Because when there’s a video that says your candidate was doing and saying something on Wednesday, at this time, you can say “No, they weren’t. There’s documentation.”
And unfortunately, it leads to self-surveillance. But we’ve got to do it, because this is what’s happening.
In the California Law Review article on deepfakes, privacy, and national security you wrote with Robert Chesney, you warn about what you call the Liar’s Dividend. How does that work?
The Liar’s Dividend is the impending cultural conundrum. Liars may seize on deepfakes to cast doubt on real evidence of their wrongdoing. They will say—and we have heard this already—you can’t believe what your eyes and ears are telling you. The more we fall into the liar’s trap, the more we are saying yes, we live in a post-fact society.
Some say we live in a world where we can’t count on videos to be true, that we have to assume they are false. So are we saying there is no empirical truth in audio and video? That assertion may stick in the long run, but in this liminal period we believe video and audio more than other types of evidence. So the liars will get off scot-free, and the creators and distributors will cause grave harm in the meantime.
You say in a TED talk you gave in July that all of this is happening at the intersection of technology, networked society, and our cognitive biases and frailties.
When some of our most basic human errors and frailties intersect with social media platforms that supercharge all of those tendencies, it’s like we are on crack.
One of our many frailties is that we tend to like what we like—we call that confirmation bias. And we really like nude photos, sex, gossip, fakery—anything that seems like it’s bad for us.
And once you have a platform that lets you supercharge that tendency so you can link, you can share, you can comment—and they all happen instantly, and widely, and sometimes you can livestream it—there’s no moderation. You are real-time projecting to the world, so all those frailties are then exponentially aggravated because of the size of the audience, and the instantaneous traction, and the speed.
Right now, there’s no financial incentive to monitor, block, filter, or even do anything when speech is reported that involves illegality or harm. The platforms don’t have any incentive. They’re not responsible.
It’s not that there’s no market response. My work with Twitter and Facebook (I’m not paid for it) is an illustration that there is a response. It’s usually a response to advertisers, in combination with advocates. Like in 2011, when 15 advertisers pulled out, or threatened to pull out, of advertising on Facebook because Facebook had pro-rape pages. Advocates got the attention of advertisers. And once advertisers threatened to walk away, all of a sudden Facebook was like, “Oh, we’re taking down these pro-rape pages.”
FTC commissioner Rohit Chopra just wrote a magnificent dissenting opinion in the Facebook consent decree case, in which he explains that tech platforms’ entire financial incentive is clicks, likes, and shares. The world I write about, online abuse (execution videos, nude photos, fabrications), gets a lot of likes, clicks, and shares. So for these companies, there aren’t a lot of speed bumps.
I don’t blame them. It’s because law doesn’t provide any incentive to change that. If law changed, if they weren’t immune from liability, then they would change their practices because they could be sued. And it would be in their shareholders’ interest to avoid lawsuits.
It’s really hard to write speech rules for these platforms [such as Facebook and Twitter], and it’s even harder to enforce them, because you can’t just throw an algorithm at it. You need human content moderators. You need people. Automation does not grasp content and context; it can’t. So it thinks everything is hate speech, even if it’s just speech describing hate speech, because it’s all about combinations and patterns of words.
And so when you throw an algorithm at hate speech, you’re going to over-censor. Then everything is hate speech, even if it’s not remotely hate speech: if it’s parody, if it’s satire, if it’s a Black Lives Matter activist writing about the hate speech he received in his inbox.
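[A toy sketch of the failure mode Citron describes, assuming nothing about how any real platform’s classifiers actually work: a naive pattern matcher has no sense of context, so a post reporting abuse is flagged just as readily as the abuse itself. The word list and example posts below are placeholders invented for the illustration.]

```python
# Toy illustration of why pure pattern matching over-censors: a naive
# keyword filter has no notion of context, so a post *reporting* abusive
# language gets flagged just as readily as the abuse itself.
# The patterns and posts are placeholders, not any platform's real rules.

import re

BLOCKED_PATTERNS = [r"\bgo back to\b", r"\byou people\b"]  # stand-in patterns

def naive_flag(post: str) -> bool:
    """Flag a post if any blocked pattern appears, regardless of context."""
    return any(re.search(p, post, re.IGNORECASE) for p in BLOCKED_PATTERNS)

abusive_post = "You people should go back to where you came from."
activist_post = ('Today someone emailed me "go back to where you came from" '
                 "-- this is the hate speech I deal with every week.")

print(naive_flag(abusive_post))   # True: the abuse is caught
print(naive_flag(activist_post))  # True: the report of abuse is caught too
```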
Is there anything we can do about all this, just as individuals?
Yes, some of it is on us. We’re lazy. We share, we link, we don’t even think about it, we don’t investigate. We fall prey and create “information cascades,” which is why information spreads so widely: because we trust. I’m on Twitter, and when I retweet something, I make sure I read it. But people clearly retweet stuff they don’t mean to retweet. They don’t read it; they trust whoever it is. And they’re perpetrating a fraud, and they don’t realize it.
Read before you retweet. Remind yourself that there are human beings in the picture. That if you retweet a nude photo, gossip, a murder video, someone’s going to get hurt. And people forget that. Before a screen, they feel invisible.
They feel like whatever it is they’re reading doesn’t affect people. And they click, and they like, and they share. And they forget: there are human beings in the calculus. And so part of what I always say when I talk to students is that when they post and reshare that nude photo of the girl in the class with them, people are going to get hurt, shamed, embarrassed, further harassed. And you can’t check out of your own humanity just because you’re behind the computer screen.