Age checks pushback, Meta on child safety and gaming’s Nazi problem
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw, and supported by members like you.
You know there's an issue when scientists come together to write a letter advocating for a pause. It happened with genetic engineering and with generative AI, and now it's happened with age assurance. More in the Products section of today's newsletter, along with other stories you may have missed.
If you're reading this, or have listened to Ctrl-Alt-Speech, there's a good chance you're interested in the messy, fascinating relationship between the media and how companies do Trust & Safety. In a special episode of the podcast, Mike and I discuss (with apologies to Tay-Tay) the three eras of content moderation in the media and what comes next. Have a listen.
Welcome to new subscribers from Vinted, the New York Times, Milltown Partners, Automattic, LegitScript and a host of other people. Get in touch to say hi and let me know what made you subscribe.
This is your week in review for the first week of March — BW
Online safety regulation is no longer theoretical.
With regulations like the EU Digital Services Act (DSA) now actively enforced, platforms are expected to clearly demonstrate how they identify, assess, and mitigate risk in practice.
To help teams get clarity fast, we’ve launched a free Online Safety Compliance Checklist in partnership with Illuminate Tech.
The assessment takes just a few minutes to complete and generates a custom checklist: a practical starting point for compliance planning for your online platform.
Policies
New and emerging internet policy and online speech regulation
Almost a month into the Meta child safety trial in New Mexico, Mark Zuckerberg told jurors in a taped deposition that “no system can ever be perfect, and we’ve never claimed it to be”. The Meta CEO was joined by Instagram CEO Adam Mosseri, who somewhat laughably said, “we will prioritize safety over profits”. Maybe he hasn’t been reading EiM lately. Reuters has reported on Meta’s fraudulent ads problem, while The Guardian and the BBC have both covered Instagram’s child safety issues.
The upshot of the disclosures in this case is that child safety at Meta got what I call “the Atlantic treatment”. In a 3,000-word feature, the magazine lays out how Meta employees acknowledged the risks to children for years while repeatedly weighing safety fixes against hits to growth and engagement. It’s a reminder of just how central recommender algorithms are to the question of user safety: Meta’s was so good that it reconnected four times as many minors to “groomer-esque” accounts as it did regular adults. Oh, and Andy Stone (EiM #178) makes an appearance too.
