5 min read

Age checks pushback, Meta on child safety and gaming’s Nazi problem

The week in content moderation - edition #327

Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.

You know there's an issue when scientists come together to write a letter advocating for a pause. It happened with genetic engineering and generative AI, and now it's happened with age assurance. More in the Products section of today's newsletter, along with other stories you may have missed.

If you're reading this, or have listened to Ctrl-Alt-Speech, there's a good chance you're interested in the messy, fascinating relationship between the media and how companies do Trust & Safety. In a special episode of the podcast, Mike and I discuss (with apologies to Tay-Tay) the three eras of content moderation in the media and what comes next. Have a listen.

Welcome to new subscribers from Vinted, the New York Times, Milltown Partners, Automattic, LegitScript and a host of other people. Get in touch to say hi and let me know what made you subscribe.

This is your week in review for the first week of March — BW


IN PARTNERSHIP WITH CHECKSTEP, where T&S meets regulation
CTA Image

Online safety regulation is no longer theoretical.

With regulations like the EU Digital Services Act (DSA) now actively enforced, platforms are expected to clearly demonstrate how they identify, assess, and mitigate risk in practice.

To help teams get clarity fast, we’ve launched a free Online Safety Compliance Checklist in partnership with Illuminate Tech.

The assessment takes just a few minutes to complete and generates a custom checklist that offers a practical starting point for compliance planning on your online platform.

TAKE THE ASSESSMENT

Policies

New and emerging internet policy and online speech regulation

Almost a month into the Meta child safety trial in New Mexico, Mark Zuckerberg told jurors in a taped deposition that “no system can ever be perfect, and we’ve never claimed it to be”. The Meta CEO was joined by Instagram CEO Adam Mosseri, who somewhat laughably said, “we will prioritize safety over profits”. Maybe he hasn’t been reading EiM lately. Reuters has reported on Meta’s fraudulent ads problem, while The Guardian and the BBC have both covered Instagram’s child safety issues.

The upshot of the disclosures as part of this case is that child safety at Meta got what I call “the Atlantic treatment”. In a 3000-word feature, the magazine lays out how Meta employees acknowledged the risks to children for years while repeatedly weighing safety fixes against growth and engagement hits. It’s a reminder of just how central recommender algorithms are to the question of user safety: Meta’s was so good that it reconnected four times as many minors to “groomer-esque” accounts as it did regular adults. Oh, and Andy Stone (EiM #178) makes an appearance too.

Also in this section...

Products

Features, functionality and technology shaping online speech

A total of 371 security and privacy scientists have come together to publish a joint letter questioning the viability of age assurance technology and warning that “the new regulation might cause more harm than good”. The group has called for a global pause “until the scientific consensus settles”, which seems highly unlikely, especially when you consider what happened the last time hundreds of experts called for a pause. Politico has the full story.

If you want more on the age assurance debate, Taylor Lorenz, who writes User Mag, has written a full and frank op-ed for The Guardian about what governments and regulators really need to do to address the challenges with social media. Stronger anti-surveillance laws — my pet issue — get a mention.

Enjoying today's edition? Support EiM!

💡 Become an individual member and get access to the whole EiM archive, including the full back catalogue of Alice Hunsberger's T&S Insider.

💸 Send a tip whenever you particularly enjoyed an edition or shared a link you read in EiM with a colleague or friend.

📎 Urge your employer to take out organisational access so your whole team can benefit from ongoing access to all parts of EiM!

Platforms

Social networks and the application of content guidelines

TikTok’s sixth transparency report under the DSA shows the growing role of what it calls “increasingly effective” automated moderation. Between July and December 2025, the platform removed around 112 million pieces of violating content, but there’s plenty more interesting data in the full spreadsheet (credit to TikTok for releasing that): “solely automated means” accounted for more than 220m pieces of content, while human decisions following automated review still accounted for a chunky near-15m pieces of content.

On the topic of transparency reports, Electronic Arts has released its 2025 report explaining that its automated systems proactively reviewed (wait for it) 1 billion text strings and 31 million images, filtering about 0.9% as policy-violating before players saw them. Unfathomable scale.

Who's there?: I used to read lots of transparency reports — and include them here in EiM — back when they were published voluntarily by platforms. Ironically, the swathe of enforced regulatory disclosures means I read fewer reports — there’s just too much to work through. I wonder if others are the same? I’d love to provide deeper analysis on the disclosures. Hit reply if you feel the same.

Also in this section...

People

Those impacting the future of online safety and moderation

When the US imposed travel bans on five Europeans at the end of last year under the guise of “foreign censorship”, many focused on former EU tech chief Thierry Breton or Imran Ahmed, CEO of the Center for Countering Digital Hate. Neither has been afraid to go head-to-head with large tech platforms and their CEOs.

Anna-Lena von Hodenberg and Josephine Ballon, however, would’ve been known to fewer people. As co-leads of non-profit HateAid, they have helped people who have experienced sexist attacks, revenge porn and doxxing through legal support and advocacy. As this new profile from the New York Times explains, they’ve been successful in doing so.

Unfortunately, that work has attracted the ire of German politicians who accuse the organisation of liberal bias, as well as Donald Trump and some of his cronies, including good friend of Ctrl-Alt-Speech, Jim Jordan. But the stories of the people they’ve helped, and of those who turned out at a Berlin museum to support HateAid, show that Anna-Lena and Josephine may not be travelling to the US but they aren’t going anywhere either.

Posts of note

Handpicked posts that caught my eye this week

  • “This could be big: The head of product at X says they'll use Community Notes to cut funding from creators who spread AI-generated war footage.” - Alex Mahadevan, director of MediaWise, on what looks like X/Twitter launching a T&S feature. Could it really be?!
  • “There are not many movies that manage to so creatively and emotionally tell a story of personal loss, resilience, and courage and the systemic story of Big Tech corporations endless search for profit.” - Luminate’s Elise Tillet-Dagousset's recommendation should be enough to convince you to watch the new documentary Molly vs The Machines.
  • “Even after watching the documentary last night; I scrolled through TikTok and was pushed several videos telling me I was ‘too fat’ and to start taking supplements and reduce the food I eat.” - youth-led mental health expert Connor Warren finds the documentary all too real.