X/Twitter pivots CSAM efforts, OSA adult performance and life in an online scam mill
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.
In this week's Ctrl-Alt-Speech, I'm really excited to be joined by the fantastic Kenyan lawyer Mercy Mutemi, who has spent more than two years working on three high-profile cases related to Meta's content moderation practices. Not only does she talk insightfully about digital rights in an African context, she's also an absolute hoot. Listen wherever you get your podcasts.
Mercy and I had a brief chat about politics in Tanzania, which, fittingly, is one of the countries — also including Germany, the US, India, Japan, Australia, Taiwan, and the UK — from which EiM gained new subscribers this week. Welcome to new readers and longstanding ones too!
Here's everything in moderation from the last seven days — BW
Want to understand the constantly shifting nature of content moderation and internet regulation and learn from some of the most experienced voices in Trust & Safety?
Become an EiM member today and get access to the complete archive including:
- Career advice including Ten things no one tells you about working in T&S
- Practical guides such as How to get buy-in for T&S
- Alice Hunsberger's ultimate guide to networking
Whether you're a practitioner, policymaker, or just curious about the decisions shaping our digital spaces, EiM is your essential companion.
Upgrade today for just a few dollars a week
Policies
New and emerging internet policy and online speech regulation
X/Twitter has waited until the last minute to file a federal suit against the state of New York over its Stop Hiding Hate Act, which mandates twice-yearly transparency reports on hate speech, disinformation, and extremism. The Elon Musk-owned platform claims the law — which was signed into law last December and goes into effect this week — violates First Amendment protections. X/Twitter successfully sued to block a similar law in California last year (EiM #229).
Also in this section...
- Why Making Social Media Companies Liable For User Content Doesn’t Do What Many People Think It Will (Techdirt)
- Dutch online platform watchdog struggling to connect with other EU member states (Euronews)
- LGBT Q&A: Your Online Speech and Privacy Questions, Answered (EFF)

Products
Features, functionality and technology shaping online speech
It’s the age-old problem for any software-as-a-service: a company signs up to get access but, over time, stops paying its invoices, leading to frantic emails to find a payment solution and, eventually, the hard decision to cut the company off. Except this week the company was X/Twitter and the technology was Thorn’s CSAM detection tool. Gulp. The platform told NBC News that it was “moving toward using its own technology” to address CSAM which, if you’ve been paying attention to its product releases of late, should be cause for concern.
I didn’t expect to write this when I got up this morning but VerifyMy, a safety tech company that provides age verification, has teamed up with an adult entertainer to read aloud the need-to-know tenets of the Online Safety Act.
Ivy Maddox told Metro that she was motivated by the fact that she “wouldn’t want children to view my content – the same way I didn’t want to when I was their age”. That's despite research showing that age verification can push people to less controlled or secure sites and may reduce revenue opportunities for her profession if adult entertainment companies pull out of markets, as happened recently (EiM #294). The 26-minute version is a little dry, even for me, but, as a gimmick to attract attention, it’s eye-catching.
Platforms
Social networks and the application of content guidelines
Remember Digg, the once-venerated front page of the internet? You might have seen it’s being brought back by founder Kevin Rose and Reddit co-founder Alexis Ohanian but it’s how the platform will moderate content that I’m most interested in; Rose noted that:
“Just recently we’ve hit an inflection point where AI can become a helpful co-pilot to users and moderators, not replacing human conversation, but rather augmenting it, allowing users to dig deeper, while at the same time removing a lot of the repetitive burden for community moderators”
I’m all for hybrid moderation models — but will it succeed where other scale-at-all-costs platforms have failed? Also contains sage words from the writer of EiM’s other newsletter, T&S Insider.
Also in this section...
- Exclusive: New Global Safety Standards Aim to Protect AI’s Most Traumatized Workers (TIME)
- Designing Safer Player Spaces with AI: Lessons Learned from Our European Center for Not-For-Profit Law Partnership (Discord)
People
Those impacting the future of online safety and moderation
Last week, I highlighted the excellent reporting on the case of Rhianan Rudd (EiM #295). This next story, about a trafficked sports teacher from Sierra Leone, is also a brilliant read and tells an important story.
Mustapha Momoh flew to Bangkok under the guise of a teaching job that would increase his income ten-fold and help provide for the wife and two kids he left behind in Freetown. What actually happened was that he was driven into Myanmar to work on a scam farm run by a Chinese crime syndicate.
Over nine months, he was beaten daily and compelled to defraud victims online, including through romance scams. Freed in a rare crackdown, he returned home traumatised and empty-handed. His story, excellently reported in The Times, is a stark reminder that behind online scams are real lives — regularly coerced, often invisible — trapped in wider technological systems of our own making.
Posts of note
Handpicked posts that caught my eye this week
- “Social tech & safety researchers: are you looking for new opportunities and funding to work on interesting design research and experiments?” Roblox’s Alex Leavitt shares exciting news of some pots of research money.
- “I’m excited to contribute to their mission and support this next chapter of impact.” - Forget the football transfer window; the big news is that Vaishnavi J and Theodora Skeadas have joined the Integrity Institute’s board of directors.
- “I had extensive experience with the UN and human rights and humanitarian NGOs but knew nothing of what it means to be a human rights advocate within a company as large, complex, fast-moving and influential as Meta.” - Human rights veteran Iain Levine departs from his role at Meta.