LAW’s Danielle Citron: How Campaigns Can Counter Deepfakes
Cybersecurity expert and MacArthur Fellow has devised an eight-point plan for political campaigns to protect against fabricated video and audio
Lying about political opponents is as old as the republic. But new technology called deepfakes allows someone to fabricate video and audio of a rival saying and doing things he or she never said or did.
And with New Hampshire’s lead-off primary tomorrow, the problem will likely get worse.
“Within months, technologists say, it will be impossible…to detect deepfakes” with counter-technology, says Danielle Citron, a School of Law professor, a MacArthur Fellow (an award commonly called a genius grant), and vice president of the nonprofit Cyber Civil Rights Initiative (CCRI). The sheer volume of different ways to make deepfakes will confound efforts to detect them “and therefore to filter and block them,” she says. So Citron has devised an eight-point plan for political campaigns this election year, from president to dogcatcher, to protect against this cyber-sabotage.
The plan includes campaigns pledging not to disseminate deepfakes knowingly; designating a rapid-response team of media and legal staffers to manage a deepfake incident (something few campaigns have done); establishing “points-of-contact” both at technology companies whose platforms might be used for deepfakes and with media fact-checkers, to understand their verification procedures; and preparing “contingency web content” to counter and correct a deepfake attack.
Distinguishing between deepfakes and other forms of political lies is important, Citron says. The video of Nancy Pelosi released last year, in which the US Speaker of the House appeared to speak haltingly, as if cognitively impaired, was real video that had been altered and slowed down. While damaging and misleading, that’s not a deepfake, in which the video itself is a manufactured avatar of the person it impersonates.
Technology is coming that will enable manufactured sex videos of people who never did what’s depicted in the videos, Citron warns. BU Today spoke with Citron about her eight-point plan, the 2020 campaign, and the perilous state of technology.
Q&A
With Danielle Citron
BU Today: Have you had contact from any presidential campaign about your plan?
Citron: We’ve shared it with presidential campaigns, in the hopes that all make an explicit agreement not to use, share, or amplify [deepfakes]. I have friends in different campaigns who I know are deeply interested.
How are the tech companies when it comes to policing deepfakes?
I work closely with Twitter and Facebook, without being paid, which is important because I get to criticize [but] won’t reveal confidential information. Twitter recently—this is gratifying to hear—banned deepfakes that cause harm to individuals.
They’ve hired people to police and check?
Presumably, because you can’t use an algorithm to detect them. We are going to need human content moderators to look at the content and context of a deepfake to see if it’s a parody or satire or if it’s a harmful, defamatory deepfake. It’s not a cheap proposition, but it’s a proposition that’s important for our democracy, for individuals, and for reputations.
Is Facebook as good as Twitter in policing deepfakes?
What they’ve said they’re doing, and how they handled the Pelosi video, is that when they can confirm that there is a doctored video, they’ll put up some way to notify individuals that it’s fake. Do I think it’s going to work? Nah. We have cognitive biases and limitations, and when we see audio and video, we have a visceral reaction to it. We believe it, because we think we can believe what our eyes and ears are telling us. Couple that with confirmation bias—if we want to believe it, we’ll believe it—and with the fact that we are attracted to the salacious.
There isn’t a binary approach to this. People think you either remove it or keep it. That is not what the landscape looks like. The landscape is nuanced: there are lots of ways to change the visibility of content, like adjusting its prominence or slowing its load time, that don’t mean removing it.
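To make Citron’s point concrete, here is a minimal sketch of what a graduated, non-binary moderation policy could look like in code. Everything here (`Action`, `Assessment`, `choose_action`, and the decision thresholds) is a hypothetical illustration, not any platform’s actual system.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    """Moderation outcomes beyond a remove/keep binary."""
    REMOVE = auto()        # take the content down entirely
    LABEL = auto()         # keep it up, but attach a manipulated-media notice
    DOWNRANK = auto()      # reduce prominence in feeds and search results
    ADD_FRICTION = auto()  # e.g., an interstitial warning before viewing
    KEEP = auto()          # leave untouched


@dataclass
class Assessment:
    """A human moderator's judgment of a flagged video (hypothetical fields)."""
    is_fabricated: bool           # wholly synthetic media (a true deepfake)
    is_misleadingly_edited: bool  # real footage doctored, like the Pelosi video
    causes_harm: bool             # defamatory or deceptive about a real person
    is_parody_or_satire: bool     # clearly non-deceptive commentary


def choose_action(a: Assessment) -> Action:
    """Map a moderator's assessment to a graduated response."""
    if a.is_parody_or_satire:
        return Action.KEEP
    if a.is_fabricated and a.causes_harm:
        return Action.REMOVE       # e.g., a harmful, defamatory deepfake
    if a.is_misleadingly_edited and a.causes_harm:
        return Action.LABEL        # real but doctored: notify viewers it's altered
    if a.causes_harm:
        return Action.DOWNRANK     # reduce reach without removal
    return Action.ADD_FRICTION     # ambiguous cases: slow the spread


# Example: real footage slowed down to mislead, as in the Pelosi video
pelosi_style = Assessment(
    is_fabricated=False,
    is_misleadingly_edited=True,
    causes_harm=True,
    is_parody_or_satire=False,
)
print(choose_action(pelosi_style))  # Action.LABEL
```

The design choice worth noticing is that removal is only one of several outcomes; labeling, downranking, and adding friction all change a video’s visibility while leaving it online, which is the nuance Citron describes.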
Is the greatest threat of deepfakes in politics coming from domestic bad actors, or foreign actors from countries like Russia and China?
We don’t know. [There’s] a distinct possibility that hostile state actors are going to disrupt the 2020 election, manufacturing deepfakes to sow discord. In 2016, Russia was successful in messing with our electorate. The book Network Propaganda [by three authors from the Harvard University Berkman Klein Center for Internet and Society] shows that a lot of lies stem from Fox News and are propagated through social media. Those lies [often] originate with Russian state actors; Fox amplifies them; people watch Fox and become committed to them.
The book [makes] clear it is a distinctly right[-wing] problem, Fox Media as propagator of clear falsehoods that often are planted by hostile state actors like Russia. Not that they’re working together; our political system is being hacked.
How can we possibly guard against sexually altered deepfakes?
We need a federal bill against nonconsensual intimate imagery. CCRI is working with lawmakers on what we’re calling digital forgery: manipulation of video and audio showing people doing and saying things they never did and said, with the intent to cause reputational or economic harm, a species of criminal defamation. CCRI research [shows] more than 60 percent of perpetrators said that if they knew there was a law, they wouldn’t dare risk it.
Two states have intervened—California and Texas—on digital manipulation of elections. But the Texas [law] was particular to elections, and I worry that it will face an uphill battle in terms of First Amendment challenges.
According to a report by Deeptrace Labs, 96 percent of the deepfakes identified online are sex videos, and 99 percent of those involve inserting women’s faces into porn without their consent. There is no question that this is a gender problem.
Do First Amendment purists, like the American Civil Liberties Union, have a problem with a federal law?
They have a big problem. Their argument is we can’t criminalize speech, and that’s just not true. There are like 21 crimes that are words: attempt, conspiracy—
Defamation, slander, libel?
Right. Incitement [to violence], solicitation. There are crimes that are just words. The notion that we can’t criminalize digital forgery or nonconsensual intimate imagery is absurd.