On Meta's content moderation changes
Wednesday, January 8, 2025
Meta announced a significant change to their content moderation practices yesterday. If you haven’t read it, it boils down to this (from Zuckerberg on Threads):
- Replace fact-checkers with Community Notes, starting in the US.
- Simplify our content policies and remove restrictions on topics like immigration and gender that are out of touch with mainstream discourse.
- Change how we enforce our policies to remove the vast majority of censorship mistakes by focusing our filters on tackling illegal and high-severity violations and requiring higher confidence for our filters to take action.
- Bring back civic content. We’re getting feedback that people want to see this content again, so we’ll phase it back into Facebook, Instagram and Threads while working to keep the communities friendly and positive.
- Move our trust and safety and content moderation teams out of California, and our US content review to Texas. This will help remove the concern that biased employees are overly censoring content.
- Work with President Trump to push back against foreign governments going after American companies to censor more. The US has the strongest constitutional protections for free expression in the world and the best way to defend against the trend of government overreach on censorship is with the support of the US government.
I have a strong perspective on this, but I wanted to highlight some of the best analysis on the issue that I’ve read so far, in particular this passage from Casey Newton in his Platformer newsletter:
Chastened by the criticism, Meta set out to shore up its defenses. It hired 40,000 content moderators around the world, invested heavily in building new technology to analyze content for potential harms and flag it for review, and became the world’s leading funder of third-party fact-checking organizations. It spent $280 million to create an independent Oversight Board to adjudicate the most difficult questions about online speech. It disrupted dozens of networks of state-sponsored trolls who sought to use Facebook, Instagram, and WhatsApp to spread propaganda and attack dissenters.
CEO Mark Zuckerberg had expected that these moves would generate goodwill for the company, particularly among the Democrats who would retake power after Trump lost in 2020. Instead, he found that disdain for the company remained strongly bipartisan. Republicans scorned him for policies that disproportionately punished the right, who post more misinformation and hate speech than the left does. Democrats blamed him for the country’s increasingly polarized politics and decaying democracy. And all sides pilloried him for the harms that his apps cause in children — an issue that 42 state attorneys general are now suing him over.
It stands out to me that this week’s moves are consistent with his actions in 2016 and 2020. They are externally prompted - reactions to whoever seems to hold power, not expressions of new beliefs or principles held by the company or by Zuckerberg himself.
Whatever else I think about the specific changes they announced, the question at the top of my mind right now is why Zuckerberg believes things will go any differently for Meta this time. I see two likely answers. The first (as the pieces linked below all mention) is that this is a uniquely “transactional” administration - a polite way of saying this administration can be bought, explicitly. The second is that it… just won’t go differently. It’s going to be easy for people to continue to be mad at Meta. Many of the issues people are upset about are, honestly, non-partisan: harm to their kids’ mental health, fraud, and hate speech. And while some groups may welcome some of that content, especially the hate speech, I still don’t see it being acceptable to the many voters with more loosely held partisanship.
Meta, in a way, epitomizes the product-development version of the view from nowhere - the elevation of some noble position without doing the actual work that makes that position noble. As it has been for journalists, maybe this is ultimately a no-win battle for Meta anyway, but it’s certainly one when it’s this obvious that there’s no deep principle here about what makes a safe online space or a great social experience. They’re just optimizing different metrics, and Trump gives them an easy way to optimize some of the business and regulatory battles ahead.
For a more favorable view of Meta’s changes, I’d recommend Ben Thompson’s piece today,[1] where he does a great job analyzing this as a potentially rational decision independent of politics and outcomes for users. I want to highlight this portion, which he quotes from an earlier post (summer 2024):
The fundamental issue with blaming misinformation on Facebook for Trump is that it was, to use Zuckerberg’s words, “a pretty crazy idea”. That’s why all of the efforts over the last eight years to contain misinformation have totally failed; worse, they have actually made everyone mad at tech, not just Democrats. In short, you have an industry that has been endlessly vilified in the press, bent over backwards to do what the press demanded, but instead of receiving credit for those efforts, has only seen itself even more isolated and under siege…
What Horowitz says about the pain of switching sides and losing friends is important: I have criticized tech companies for alienating their natural allies in Washington, but what I am driving at in this Update is the extent to which the Democrats generally and the media specifically have made a similar error; by making tech the scapegoat for Trump, “the deal” that united tech and Democrats was broken. The end result is not so much enmity as it is naked self interest. In other words, the surprise isn’t that Musk and Andreessen and Horowitz are supporting the politician that is better for their business ventures, but that Democrats gave up the enviable position of being the default choice for people who didn’t want to think about politics at all.
I don’t agree with some of Ben Thompson’s political views, insofar as I understand them through his writing, but I respect his analysis immensely. For me, it’s hard to argue against two basic observations: no one is happy with content moderation in Meta’s products, and Democrats haven’t governed in ways that encouraged tech companies to support them. His full argument is worth reading. The net effect: for tech companies, Trump is much easier to read and manage - pay him, and he can be placated. It’s horrifying to write that about an American president, but… here we are.
Final thought - these changes will be massive for how we measure, understand, and combat misinformation on the internet. I just listened to a wonderful interview with Renee DiResta. It’s worth your time if you want more detail, from someone who was at the forefront of this work, on how misinformation research was done before and how it might change.
Actually, one more thing - my personal take on this is maybe more nuanced than you’d expect: I am very concerned, not for electoral-interference reasons, but because of the potential harm to communities I care about and to minorities of all stripes. As Casey Newton pointed out in the piece above, we’ve seen the worst that can happen with light moderation in Myanmar - it ended in genocide. The US is in a bad place right now with public discourse and rising political violence. Maybe we’re all a lot savvier about social media now, and this change will have little effect. That is the free speech argument - that more speech is the only way to combat “bad” speech. I’ll be honest: even though I generally agree, I am not confident our systems can survive that approach given asymmetrical passion and asymmetrical amplification by the platforms.
As a final example, it’s worth pointing out that a major presidential campaign was willing to lie about immigrants eating pets, and they didn’t back down when numerous people pointed out that it was false. The only thing that stopped it from being a major factor in the election was, IMHO, the obvious ridiculousness of the lie. A bigger lie told in a more plausible way - e.g., the way trans athletes were disproportionately vilified during the election - will easily be amplified by social media. I don’t know how to combat that, and I worry these changes undo the best effort any platform has yet made to fight this sort of evil nonsense.
I hope Meta approaches this as a core product problem - how to get Community Notes working at scale, even on posts that aren’t fully public, in a way that elevates the correction as much as the original bad take. That would be wonderful. I worry they will do what Meta does - optimize the easily quantifiable metrics at the expense of a truly meaningful free speech platform.
Because of this, I’m going to actively prioritize my networks on Bluesky - please follow me there if you haven’t already.
UPDATE: I swear I checked before publishing - I really wanted to include a take from Mike Masnick, who writes a lot about content moderation and free speech issues in tech. Well, he posted one literally as I was posting this. It’s worth reading in its entirety: he covers a lot of ground, including some of the positives in Meta’s announced changes - it’s a really good piece. Unlike Ben Thompson, he does go into the politics - to me the politics are indisputably tied to this change, and he seems to agree:
The timing of all of this is obviously political. It is very clearly Zuckerberg caving to more threats from Republicans, something he’s been doing a lot of in the last few months, while insisting he was done caving to political pressure.
I mean, even Donald Trump is saying that Zuckerberg is doing this because of the threats that Trump and friends have leveled in his direction: [embedded tweet removed]
I raise this mainly to point out the ongoing hypocrisy of all of this. For years we’ve been told that the Biden campaign (pre-inauguration in 2020 and 2021) engaged in unconstitutional coercion to force social media platforms to remove content. And here we have the exact same thing, except that it’s much more egregious and Trump is even taking credit for it… and you won’t hear a damn peep from anyone who has spent the last four years screaming about the “censorship industrial complex” pushing social media to make changes to moderation practices in their favor.
Go read it!
UPDATE 2: Just read John Gruber’s take at Daring Fireball. Some echoes of my point above:
What the whole thing primarily highlights is that Zuckerberg has no guiding principles — zero, zilch, nada — behind any of Meta’s platforms other than “success” in and of itself. Is it about you and your closest friends and family? Or about following celebrity influencers? Politics yes, or politics no? The answers were all different a few years ago, and they’re likely to be different again a few years from now.
Does anyone believe Meta is going to really, thoughtfully try for a solution here?
UPDATE 3: Masnick wrote another post looking more deeply at the specific changes Meta announced, particularly the language dealing with LGBTQ+ topics:
Indeed, the most incredible thing in all of this is that these changes show how successful the “working the refs” aspect of the MAGA movement has been over the last few years. It was always designed to get social media companies to create special rules for their own hot button topics, and now they’ve got them. They’re literally getting special treatment by having Meta write rules that say “your bigotry, and just your bigotry, is favored here” while at the very same time suppressing speech around LGBTQ or other progressive issues.
It’s not “freedom of speech” that Zuck is bringing here. It’s “we’re taking one side in the culture war.”
It really is incredible.
[1] Ben Thompson’s posts are behind a paywall. It’s not a cheap sub, but it’s 100% worth it if you want to understand the business of technology. If you can’t or don’t want to get a sub, ping me and I will share a copy.