Americans Expect Social Media Content Moderation
Meta is ending fact-checking on Facebook and Instagram, but a new BU poll finds the public backs independent verification of social media content
Social media companies’ “role in shaping the national conversation has never been more consequential,” writes BU communications researcher Michelle Amazeen. Photo via iStock/SARINYAPINNGAM
In an age when misinformation spreads at the speed of a click, the announcement by Meta, the company that owns Facebook and Instagram, that it is abandoning its partnership with independent fact-checking organizations raises urgent questions. Meta's decision comes at a critical juncture, as the US faces an era in which disinformation campaigns, often amplified by political figures, threaten democratic discourse and public trust. How will this shift affect the quality of content on its platforms? And, as Meta is the largest funder of fact-checkers globally, what does this mean for the future of fact-checking itself?
Meta CEO Mark Zuckerberg justified the decision by claiming that the company's fact-checking program "too often became a tool to censor." Yet, a recent poll from Boston University's College of Communication paints a very different picture of public sentiment: an overwhelming majority (72 percent) of Americans believe it is acceptable for social media platforms to remove inaccurate information about public health issues. Support spans political divides, with 85 percent of Democrats, 70 percent of independents, and 61 percent of Republicans agreeing that such content moderation is acceptable.
Instead of relying on independent fact-checkers, Meta is pivoting to a “community notes” model. In this approach, users write and rate notes that accompany posts containing dubious claims. This model mirrors the approach Elon Musk has implemented on Twitter, now rebranded as X.
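How would such a system decide which notes to show? X has open-sourced the scoring code behind its Community Notes feature: a matrix factorization model discounts agreement that a shared viewpoint can explain, so that only notes rated helpful across viewpoints are displayed. The short Python sketch below illustrates that "bridging" idea; the toy ratings, the learning rate, and the 0.4 display threshold are illustrative assumptions of mine, not the production parameters of X's system or of whatever Meta ultimately ships.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: six raters (rows 0-2 lean one way, rows 3-5 the
# other) rate four notes (columns). 1 = "helpful", 0 = "not helpful",
# NaN = no rating. Note 0 is liked by both groups, notes 1 and 2 are
# partisan favorites, and note 3 is disliked by everyone.
R = np.array([
    [1.0, 1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0, np.nan],
    [1.0, 1.0, 0.0, 0.0],
    [1.0, 0.0, 1.0, 0.0],
    [np.nan, 0.0, 1.0, 0.0],
    [1.0, 0.0, 1.0, 0.0],
])
observed = ~np.isnan(R)
n_raters, n_notes = R.shape

# Model a rating as mu + rater_bias + note_score + rater_vec * note_vec.
# The one-dimensional factor term absorbs agreement that a shared
# viewpoint explains, so only notes rated helpful across the divide
# retain a high note_score.
mu = 0.0
rater_bias = np.zeros(n_raters)
note_score = np.zeros(n_notes)
rater_vec = 0.1 * rng.standard_normal(n_raters)
note_vec = 0.1 * rng.standard_normal(n_notes)

lr, lam = 0.05, 0.03  # learning rate; regularization shrinks parameters
for _ in range(3000):
    pred = (mu + rater_bias[:, None] + note_score[None, :]
            + np.outer(rater_vec, note_vec))
    err = np.where(observed, pred - R, 0.0)  # skip missing ratings
    mu -= lr * err.sum() / observed.sum()
    rater_bias -= lr * (err.sum(axis=1) + lam * rater_bias)
    note_score -= lr * (err.sum(axis=0) + lam * note_score)
    rater_vec -= lr * (err @ note_vec + lam * rater_vec)
    note_vec -= lr * (err.T @ rater_vec + lam * note_vec)

for j, score in enumerate(note_score):
    verdict = "shown as helpful" if score > 0.4 else "not shown"
    print(f"note {j}: score {score:+.2f} -> {verdict}")

On this toy data, only the bridging note clears the threshold; the partisan notes score near zero because their support is fully explained by viewpoint. That property is the model's strength, and it is also why community notes often fail to appear on contested political claims, where cross-partisan agreement rarely materializes.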
But Americans remain skeptical. The same poll reveals that nearly two in three adults (63 percent) believe independent fact-checking organizations should verify social media content. In contrast, less than half (48 percent) support the “community notes” model. Although there are some partisan differences—73 percent of Democrats, 62 percent of independents, and 55 percent of Republicans favor a fact-checking model—the lukewarm reception of community notes crosses party lines.
Is there any evidence that crowdsourcing claim verification works? The academic literature is mixed. In certain contexts, crowdsourcing can rival expert verification. However, other research highlights its inconsistencies. Crowdsourcing is generally effective at assessing the credibility of news sources but struggles to reliably identify disinformation. Partisanship often undermines its efficacy, influencing which claims are selected for verification. Moreover, distinguishing verifiable claims from unverifiable ones is a skill that typically requires training.
In practice, the results are sobering. Despite the presence of the community notes program, X remains a platform rife with misinformation on elections, climate change, and other critical topics. Offloading content-moderation responsibilities onto users is yet another example of platforms shirking their duty to ensure the safety of their digital products. By abandoning content moderation, social media platforms risk enabling disinformation from those in power. Accountability measures are essential, especially as a new White House administration with a history of weaponizing disinformation takes office.
Still, paying independent fact-checkers has its own complications. Under Meta’s program, the platform itself determined which claims were submitted for review. This approach often resulted in fact-checkers debunking viral but nonpolitical content, while more politically charged claims that could influence democratic processes went unaddressed. Additionally, Meta did not disclose what happened to posts flagged as inaccurate, leaving fact-checkers in the dark about the impact of their work.
The silver lining in Meta's rejection of fact-checkers may be that the company's commercial imperatives will no longer influence which claims fact-checkers select for review. Freed from Meta's influence, fact-checkers might return their focus to democratic priorities. However, the financial loss will undoubtedly strain these organizations.
Another potential bright side: the public could play a pivotal role in sustaining independent fact-checking. According to the Boston University poll, one-third of US adults would donate $1 to fund these initiatives through crowdfunding campaigns. With roughly 260 million adults in the US, that willingness, if realized, would translate into tens of millions of dollars. Such efforts could restore some of the financial resources that fact-checking organizations need to thrive.
The question of who should moderate social media content—and how—is a critical challenge of the digital age. As political leaders test the limits of truth, the integrity of public discourse hangs in the balance. Social media platforms must rise to the occasion, for their role in shaping the national conversation has never been more consequential.
Michelle Amazeen is a Boston University College of Communication associate professor of mass communication and associate dean of research. She can be reached at mamazeen@bu.edu.
“Expert Take” is a research-led opinion page that provides commentaries from BU researchers on a variety of issues—local, national, or international—related to their work. Anyone interested in submitting a piece should contact thebrink@bu.edu. The Brink reserves the right to reject or edit submissions. The views expressed are solely those of the author and are not intended to represent the views of Boston University.