YouTube Is the Latest Media Platform to Loosen Content Moderation. What Does That Mean for Users?

Rolling back content moderation allows misinformation—which can be dangerous—to thrive, says researcher Michelle Amazeen, a College of Communication associate professor of mass communication. “So many of our digital locations are polluted with hazardous information,” Amazeen says. Photo via Pexels/Freestock.org
BU communications researcher Michelle Amazeen on what we lose when we stop regulating information as closely
Earlier this week, the New York Times reported that YouTube has quietly rolled back some of its content moderation policies.
The Times noted that the video platform, the second-most-visited website in the world behind Google, has raised its threshold for how much potentially prohibited content a video can contain before it is removed, provided the video is deemed to be in the public interest. Public interest includes videos that are considered newsworthy or that discuss political and cultural issues. Potentially prohibited content includes derogatory language and misinformation.
Prior to the policy change, a YouTube video warranted removal if more than a quarter of its content was questionable; now, that threshold has been raised to half of a video’s content. The change—which was not publicly disclosed—took effect in December 2024.

BU Today spoke with Michelle Amazeen, a Boston University College of Communication associate professor of mass communication and associate dean of research, about YouTube’s decision to loosen rules governing the moderation of videos. Earlier this year, the Amazeen-led Communication Research Center at COM released a poll that found 72 percent of Americans believe it’s acceptable for social media platforms to remove inaccurate information about public health issues.
Those findings demonstrate an appetite for accurate information online, which is undermined by looser moderation standards, Amazeen says. On top of potentially enabling misinformation, YouTube’s policy rollback opens the door to messy interpretations of what’s in the public interest, and at whose expense.
“It’s one thing to place a label on a YouTube video indicating that there are misleading claims in it, but you’re still allowed to watch it,” Amazeen says. “It’s another to have a video that has a two-minute passage of someone saying, ‘everyone who’s part of this religion should die,’ but the video is an hour long. Does that get to stay up?”
This interview has been edited for length and clarity.
Q&A
With Michelle Amazeen
BU Today: YouTube is just the latest company to take a step back on moderating the content on its platform, following similar moves by Meta, which owns Facebook and Instagram, and X, owned by Elon Musk. Who benefits from less stringent moderation?
Michelle Amazeen: I think it helps those who don’t want people to have accurate information. More broadly, I think what’s happening is that there’s an incentive to manufacture confusion. [At the highest level], we have an administration that doesn’t like to be held to account by evidence or experts. We’re seeing a decline in trust in expertise and many institutions, a lot of which I think has been driven by those who are politically motivated to gain power.
There’s also a commercial motivation here. It’s expensive to moderate content. You have to hire people, train them, develop algorithms to catch keywords and such. That all costs money. We have to remember that YouTube is a commercial company. Its number-one priority is to make money, not to inform the public—and especially not at a time when regulation is politically inconvenient.
BU Today: Speaking of politics, do you think the government has a responsibility to step in if social media companies back off from regulating themselves?
We need more government regulation. Congress has basically been asleep at the wheel while social media has overtaken all other media. Section 230 of the 1996 Telecommunications Act has been used to shield social media companies from being held to account for content on their platforms. I think that needs to be modified; if you’re developing media products that are harmful, you need to be held to account for it.
Going back to what I said earlier, by abandoning or loosening content moderation rules, companies are complicit in manufacturing confusion. I think it’s akin to how in the past, the fossil fuel and chemical industries dumped hazardous byproducts by burying them in the ground. Then, decades later, when schools and subdivisions were built on those sites, people started getting sick from the contamination. In response, Congress and the Environmental Protection Agency developed Superfund sites [for cleaning up hazardous waste]. I think that’s where we’re at now. We need digital Superfund sites. So many of our digital locations are polluted with hazardous information, and we need more regulation to address that.
BU Today: Do you see that happening anytime soon?
It’s hard to say. Thinking back to when George Floyd was murdered in 2020, people were out in the streets protesting—and then we saw all these initiatives pass to look at policing and how we’re treating non-white people. So who knows what’s going to happen with social media sites loosening their regulations at the same time that things like LA’s immigration protests are happening, where we’re seeing a lot of confusion and difficulty distinguishing what’s accurate information. Maybe something else big will happen that results in more protests, and a movement will emerge where people try to take back responsibility and accountability [for information online].
BU Today: So what can be done as social media platforms relax moderation?
This could be an opportunity for news outlets to double down and really promote what they’re doing. Most of these social media influencers aren’t on the ground in LA doing reporting. It’s reporters from CNN, from local NPR member stations, who are putting themselves in harm’s way in order to report on what’s happening based upon journalistic standards. So much of what we see on YouTube and social media is people trying to grift off confusion. One of the examples in the Times article involved a YouTube video of [Health and Human Services Secretary] Robert F. Kennedy, Jr., claiming that COVID-19 vaccines alter people’s genes. There are a whole slew of people who are trying to leverage that disinformation to sell people vitamins or other unproven things that won’t protect you against COVID-19. Reporters aren’t the ones trying to sell you some survival kit so that you can go fend for yourself in the wilderness.
BU Today: Finally, being susceptible to misinformation goes hand-in-hand with subpar media literacy. What are some things you recommend for anyone trying to improve their media literacy and be more discerning online?
First, think about who’s posting the content you’re viewing. Who created it, and for what purpose? What might the motivation be for putting this message out? And, is there anything that’s being left out from the message? Then cross-reference. If you find something that’s intriguing, but you’re unfamiliar with the source, go see if anyone else is talking about that thing. Look to reputable news sources, such as NPR, PBS, or non-US sources like the BBC or the World Health Organization. Finally, it’s helpful to understand how our media systems work. Public media is nonprofit and supposed to serve the public interest versus commercial media—which is the vast majority of our media—whose number-one priority is to make money. Think about how those models impact the type of content that’s out there.