'AI psychosis' research, X/Twitter criticised in inquiry report and Reddit's Ben Lee
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by paid members like you.
Welcome to new EiM subscribers from Stripe, Contrails, the CMJ Group, Salesforce, Terra.do, rbb24, 150 Bond, and elsewhere. Say hi, suggest a story or ask a question via ben@everythinginmoderation.co.
The big news this week is not reported by The Verge or TechCrunch (at least not yet): Ctrl-Alt-Speech — the weekly podcast I host with Techdirt’s Mike Masnick — is 100 episodes old 🎉 (yep, that deserves an emoji).
To mark that milestone, and to keep the podcast going for another 100 episodes, we’re moving to a listener-supported model. Listen to Mike and me discuss why we’re doing it, but here’s the tl;dr:
- From May 28th, extended episodes with our unique mix of cross-Atlantic ‘dad jokes’ will only be available via Patreon. A free episode will still be available via your favourite podcast player.
- The Founder tier — which gets you bragging rights and allows you to suggest stories we should cover each week plus random missives on Patreon — is available at a discounted price until 28th May.
- I’m on paternity leave/holiday until late May so Mike will be helming the podcast with a bunch of special guests until then.
I’ll be sharing more in the coming weeks — including an EiM x Ctrl-Alt-Speech membership bundle — but if you’re a regular listener, this is the best way to support the podcast and keep it going.
That's the big online speech news. But here are the other stories you need to know about this week too - BW
Policies
New and emerging internet policy and online speech regulation
Following a months-long inquiry, the first volume of the Southport Inquiry report has been published and it doesn’t hold back in its criticism of X/Twitter and Amazon’s age verification systems and content moderation practices in the lead-up to the incident. A fascinating if harrowing read, the report outlines how harmful content remained accessible despite attempts by authorities to have it removed. Tech Policy Press has a good write-up of the report's findings if you don't have time to read it in full. Thanks to Heather for bringing the report to my attention.
The report is particularly critical of the arguments put forward by X/Twitter's head of global affairs, Deanna Romina Khananisho. Mike and I discussed her performance in front of the Inquiry back in November.

Also in this section...
- Will social media addiction go the way of cigarettes? (Financial Times)
- Indonesia and the politics of platform governance (Global Voices)
- How Meta's Content Moderation Practices Risk Turning Instagram into a Hub for Hate (ADL)

Products
Features, functionality and technology shaping online speech
Another week, another harrowing story — this time from The Observer — about the potential for people to develop problematic relationships with AI chatbots, a phenomenon increasingly known as ‘AI psychosis’. The fact that this story concerns a 51-year-old Brit and Grok busts the myth that teens are the only victims and that OpenAI is the main culprit of such delusional beliefs. We’re really in the foothills of understanding the impact of these technologies.
Both companies are also mentioned in new research from City University of New York and King’s College London, which assessed five widely-used models and found two profiles: ‘high-risk, low safety’ (GPT-4o, Grok 4.1 Fast, and Gemini 3 Pro) and ‘low-risk, high safety’ (Claude Opus 4.5 and GPT-5.2 Instant). One of the study’s authors, Luke Nicholls, paid tribute to OpenAI for addressing concerns in its newer model and summed it up when he told 404 Media:
“There’s also clearly pressure to release new models on an aggressive schedule and not all labs are making time for the kind of model testing and safety research that could protect users.”
Also in this section...
- Making AI safer for victims of intimate partner violence (Cornell)
- How AI Reverses the Political Logic of the Internet (Tech Policy Press)
- Age verification is coming for the internet — and it’s already raising red flags (NBC News)
💡 Become an individual member and get access to the whole EiM archive, including the full back catalogue of Alice Hunsberger's T&S Insider.
💸 Send a tip whenever you particularly enjoyed an edition or shared a link you read in EiM with a colleague or friend.
📎 Urge your employer to take out organisational access so your whole team can benefit from ongoing access to all parts of EiM!
Platforms
Social networks and the application of content guidelines
A story that broke just after last week's newsletter: more than 1,100 outsourced workers who review content on behalf of Meta have lost their jobs after their employer, Sama, once again attracted controversy for its working practices. The Guardian and others have the story.
The cancelling of the contract is almost certainly linked to stories coming out of the Swedish media in February (and shared via EiM) which found private moments recorded using Meta’s smart glasses were being reviewed by Sama workers in Kenya. That led to a wave of unfavourable coverage and pressure from data protection bodies and regulators, which Meta seemingly deemed too high a risk. Longstanding EiM readers will remember the last Sama story (EiM #150), which resulted in court cases (EiM #223 and others) that are still to conclude and were postponed again in February.
Contract wars: The last time Meta moved to end Sama’s contract, it was given to Majorel — but only on the condition that former Sama workers were prevented from applying. Majorel was then bought by Teleperformance in late 2023 — in part for its operations in Africa — but it too has been criticised for its approach to moderator wellbeing (EiM #291). I don't think that will stop them getting the contract — that is, presuming the workers get replaced at all.

Also in this section...
- Instagram Expands Teen Accounts Inspired by 13+ Content Ratings (Meta)
- Substack Still Has A Nazi Problem (And Doesn't Care) (The Fine Print)
- Africa has 2,000 languages. AI content moderation covers fewer than 20 (Global Voices)
- Inside the AI systems Amazon uses to protect every part of your shopping experience (Amazon News)
People
Those impacting the future of online safety and moderation
Ben Lee probably won’t be offended by me saying that he’s not a household name in the way Sam Altman (EiM #333) is. However, his influence over one of the internet’s most distinctive moderation models is hard to ignore.
A new profile from the Electronic Frontier Foundation this week puts Reddit’s community moderation system — and Lee’s role in shaping it — under the spotlight. As the company’s Chief Legal Officer since 2019, Lee has been central to defending and evolving a model that pushes significant power to volunteer moderators, rather than consolidating it within the platform itself.
Lee speaks to the enabling nature of Section 230 — it being the law’s 30th anniversary and all — and shares a fascinating story of a Reddit Star Trek fan using Texas’ HB 20 law to dispute a moderator's decision to remove a post that called one of the characters a ‘soy boy’ (more details in Reddit’s amicus brief in the Gonzalez v. Google case).
While regulators push for clearer accountability and more standardised enforcement, profiles like this remind us why Lee and Reddit continue to argue that decentralisation — letting communities define and enforce their own rules — is not just viable but the way forward.
Posts of note
Handpicked posts that caught my eye this week
- “We'll be talking about social technologies as infrastructure for publics, and how we can get ourselves out of cycles of tech enshittification.” - Modal Foundation’s Ivan Sigal promises to share plans for Eurosky (EiM xxx) in a webinar next week.
- “If you have a background in content moderation, fraud, or financial crimes investigations and want to join an amazing, mission-driven company, please reach out directly or tag a relevant contact in a comment!” - good friend of EiM, Cathryn Weems, has a number of jobs going at GoFundMe based in Australia.
- “There's a LinkedIn profile for Ted Bundy, the serial killer, that displays LinkedIn's verification badge. It's been reported multiple times but the company still hasn't removed it.” - This update from Indicator's Craig Silverman certainly stood out among the 'personal news' and AI-written engagement bait.