Moderation hits the big screen, NYT covers teen chatbots and AI startup raises $12m
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw, and supported by paid members like you.
Becoming a paid EiM member supports independent coverage of the T&S industry at a time when it's most needed and gets you unfettered access to EiM's archive of 450+ editions.
Firstly, welcome to a typically eclectic set of free EiM subscribers, including folks from Stack Overflow, AI Together, the Home Office, Peloton, Medium, Duco and elsewhere. You’ve joined for must-read news and analysis; this week, you also get film recommendations.
Today's edition comes at the end of a chaotic week in the Whitelaw household so I may have missed a story or two. Hit reply with anything worth revisiting or that should be shared with EiM subscribers next week — ben@everythinginmoderation.co.
I did, however, manage to chat with Fadzai Madzingira, a T&S director at Twitch, for this week's Ctrl-Alt-Speech. Come for Fadzai's sharp analysis, stay for her brilliant laugh. And, with that, here's your Week in Review — BW
""I love the way the newsletter is curated and its helpful to understand what else is possible across industries" - T&S policy manager
Thousands of experts and decision makers get their fix of industry news and analysis from Everything in Moderation's weekly newsletters, Week in Review (every Friday) and T&S Insider (every Monday).
Talk to them by becoming a sponsor today.
Policies
New and emerging internet policy and online speech regulation
Meta has discussed reducing or ending funding for the Oversight Board when its current commitment expires in 2028, according to Platformer, prompting questions about the future of one of the industry’s boldest accountability experiments.
Where next?: This latest news caps a rocky few years for the Board: it had to make significant cuts in 2024 (EiM #246) and was not given a heads-up about Meta’s content moderation overhaul last January before it was announced publicly (EiM #283). That said, its recent five-year impact report suggested that Board members knew the original model may not be fit for the future.
Ofcom this week brought into force one of the Online Safety Act’s more operationally challenging duties: the requirement for user-to-user services to report UK-linked child sexual exploitation and abuse (CSEA) content via a newly established reporting portal. It’s not dissimilar to the CyberTipline regime run by the National Center for Missing & Exploited Children (NCMEC) in the US, in that platforms that become aware of apparent CSEA/CSAM must report it, but the UK process routes reports to the National Crime Agency for investigation.
Important headaches: This is a good step towards making CSEA reporting a formalised part of platform operations rather than something that is voluntary or market-dependent. Sensibly, there’s no requirement for platforms to report content that has already been shared with NCMEC. Nonetheless, I am lighting a candle for the compliance managers mapping out these workflows and ensuring everything is working as it should.
Also in this section...
- Digital Fingerprints, Human Stakes: Governing NCII Hash-Matching (CDT)
- How the Internet Fringe Infiltrated Republican Politics (The New Yorker)
- Greece to ban social media for under-15s from 2027, calls on EU action (Reuters)

Products
Features, functionality and technology shaping online speech
Moonbounce, the AI content moderation startup formerly known as Clavata, this week announced it has raised $12 million in funding, according to TechCrunch. The round was co-led by Amplify Partners and StepStone Group, which said they invested because they saw “objective, real-time guardrails become the enabling backbone of every AI-mediated application.” Founder Brett Levenson was previously on a sponsored segment of Ctrl-Alt-Speech to discuss his “policy as code” concept.
Also in this section...
- Large Language Models in the Abuse Detection Pipeline (Arxiv)
- The Family Tech Cycle: Navigating Screens, Devices, and Social Media (Joan Ganz Cooney Center)
- Forget the A.I. Apocalypse. Memes Have Already Nuked Our Culture (The New York Times)
💡 Become an individual member and get access to the whole EiM archive, including the full back catalogue of Alice Hunsberger's T&S Insider.
💸 Send a tip whenever you particularly enjoyed an edition or shared a link you read in EiM with a colleague or friend.
📎 Urge your employer to take out organisational access so your whole team can benefit from ongoing access to all parts of EiM!
Platforms
Social networks and the application of content guidelines
There’s no shortage of media stories about teen mental health problems caused by character chatbots (EiM #305). But a new New York Times piece takes a different approach, following one teenager — now 15-year-old Quentin — over time to show how these bots actually fit into everyday life. The reporter spent a year in touch with Quentin and other teens she first contacted on Discord, which gives the piece a depth that most coverage of chatbot harms lacks. The result feels closer to an average teen experience than the usual worst-case narrative. Read it for the almost fairytale ending.
As Fadzai and I discuss on this week’s podcast, that doesn’t make the products safe. But it suggests that for many teens, the chatbots function less as replacements for real connection than as a low-cost, always-on substitute — what one New York Times commenter called “the fast food of intimacy.”
Telegram has struggled to shed the idea that it is a platform with inadequate T&S policies, and a new report shows the sheer scale of the problem. Researchers from AI Forensics found nearly 25,000 people across Spain and Italy using the app to distribute and sell CSAM and non-consensual intimate imagery — almost entirely of women and, shockingly, from current or former partners — often through organised, monetised networks. Euronews has more details: https://www.euronews.com/next/2026/04/08/telegram-hosts-vast-organised-abuse-networks-in-spain-and-italy-report-finds
Also in this section...
People
Those impacting the future of online safety and moderation
I’ve written before (EiM #308 and others) about how content moderation is increasingly spilling out of wonkish policy circles and into literature, films and wider cultural commentary. It’s not just me noticing: Variety has clocked it too. In horror movies, in particular:
“As social media continues to splinter relationships and spread fake news, content moderators — the poor souls doomed to monitor posts all day to determine if they are “dangerous” — are popping up as characters in scary movies."
One of those films is Faces of Death, a horror remake reimagined for the age of feeds, content moderators and algorithmic spectacle. Its director, Daniel Goldhaber, is interesting because, judging from his interview with Interview Magazine, he seems to get that moderation is never just a technical process:
“Yeah, [moderation is] a political decision, and yet these human-run moderation divisions are also the exact people enacted to monitor undesirable speech on the apps, which the movie also talks about”.
Can’t wait to see it.