Big margins support protections against AI-powered “deepfakes” on social media, survey finds

Deepfake illustration (Photo illustration: Adobe / AI-generated)

August 28, 2025


An overwhelming majority of Americans across the political spectrum support protections against “deepfakes” in media that use a person’s voice and visual likeness without permission, according to a new opinion survey designed by the Communication Research Center at Boston University’s College of Communication and conducted by Ipsos. 

More than four in five respondents (84%) agreed or strongly agreed that individuals should be protected from the unauthorized use of their voice and visual likeness in digital replicas created by artificial intelligence (AI).

“As social media platforms scale back content moderation and generative AI creates realistic imitations of celebrities, creators, and journalists, disinformation is spreading faster and more widely than ever before,” said Michelle Amazeen, associate professor and CRC director at Boston University. “In this confusing environment, one principle has strong bipartisan support: the public overwhelmingly agrees that everyone’s voice and image should be protected from unauthorized AI-generated recreations.” 

In the same survey, respondents agreed or strongly agreed by wide margins with other statements about the deceptive use of AI and technology in digital media: 

  • AI-generated content posted to social media platforms should be clearly labeled and watermarked: 84% agreed or strongly agreed  
  • Social media platforms should be required to remove unauthorized deepfakes just as they would pull down copyrighted music: 81% agreed or strongly agreed  
  • Social media platforms should provide transparent appeals processes for individuals to contest unauthorized AI-generated representations of their voice and visual likeness: 79% agreed or strongly agreed  
  • Individuals should have the right to license their voice and visual likeness for AI training with control over how their digital identity is used: 75% agreed or strongly agreed  

“People are clearly more comfortable with actions that are framed as protections from harm than those framed as ownership rights,” added Susanna Lee, an assistant professor at Boston University and a survey collaborator. 

Support for protections was similar among Democrats and Republicans. For example, 84% of Republicans agreed or strongly agreed with protections against “deepfakes” in media that use a person’s voice and visual likeness without permission, statistically indistinguishable from the 90% support among Democrats. Similarly, 80% of Republicans and 87% of Democrats said they agreed or strongly agreed that “social media platforms should be required to remove unauthorized deepfakes just as they would pull down copyrighted music.”  

The bipartisan agreement among Americans is reflected in recent Congressional action. In May, President Donald Trump signed into law the Take It Down Act, co-sponsored by Senators Ted Cruz (R-TX) and Amy Klobuchar (D-MN), which punishes the nonconsensual posting of intimate images, including deepfakes. 

Klobuchar, along with Senators Chris Coons (D-DE), Thom Tillis (R-NC) and Marsha Blackburn (R-TN), recently introduced the NO FAKES Act, which would require social media companies to remove unauthorized deepfakes. 

Survey Summary 

Respondents to this month’s Media & Technology Survey were asked how much they agreed or disagreed with the following statements: 

Individuals should be protected from the unauthorized use of their voice and visual likeness in digital replicas created by AI. 

Strongly Disagree: 3% 
Disagree: 4% 
Neither Agree nor Disagree: 9% 
Agree: 30% 
Strongly Agree: 53% 

AI-generated content posted to social media platforms should be clearly labeled and watermarked. 

Strongly Disagree: 2% 
Disagree: 2% 
Neither Agree nor Disagree: 12% 
Agree: 32% 
Strongly Agree: 51% 

Social media platforms should be required to remove unauthorized deepfakes just as they would pull down copyrighted music. 

Strongly Disagree: 2% 
Disagree: 3% 
Neither Agree nor Disagree: 14% 
Agree: 31% 
Strongly Agree: 50% 

Social media platforms should provide transparent appeals processes for individuals to contest unauthorized AI-generated representations of their voice and visual likeness. 

Strongly Disagree: 3% 
Disagree: 4% 
Neither Agree nor Disagree: 14% 
Agree: 35% 
Strongly Agree: 44% 

Individuals should have the right to license their voice and visual likeness for AI training with control over how their digital identity is used. 

Strongly Disagree: 2% 
Disagree: 4% 
Neither Agree nor Disagree: 19% 
Agree: 32% 
Strongly Agree: 43% 

About the Media & Technology Survey 

The Media & Technology Survey is an ongoing project of the Communication Research Center (CRC) at Boston University’s College of Communication. This month’s polls were conducted in English from August 22 to 25, 2025. This online survey has a credibility interval (CI) of plus or minus 3.5 percentage points. The data were weighted to U.S. population benchmarks by region, gender, age and education. Statistical margins of error are not applicable to online polls. All sample surveys and polls may be subject to other sources of error, including, but not limited to, coverage error and measurement error.