The new AI governance plan, YouTube's policy walkback and inside American Sweatshop
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw, and supported by members like you.
A big thanks to everyone who showed up for the Marked As Urgent x EiM London meetup last night. I had a great time and — based on feedback I received — so did everyone else. In fact, I had such a fun evening that I forgot to take a photo. You'll just have to believe me.
We’d love to host more events for people working and interested in tech policy and internet regulation, both in London and elsewhere. If you’re interested in working together or sponsoring an event, email me at ben@everythinginmoderation.co. You can follow Marked As Urgent for future events and updates.
There's the usual mix of regulatory updates, platform u-turns, and real-world policy consequences in today's newsletter. Let's get into it — BW
Does your platform have messaging, search, or generative prompt functionality? Thorn has developed a resource containing 37,000+ child sexual abuse material (CSAM) terms and phrases in multiple languages to use in your child safety mitigations.
The resource can be used:
- To kickstart the training of machine learning models
- To block CSAM prompts
- To block harmful user searches
- To assess the scope of this issue on your platform
Apply today to get access to our free CSAM keyword hub.
Policies
New and emerging internet policy and online speech regulation
This week’s 80th UN General Assembly in New York has drawn together a cast of global tech leaders, diplomats, and platform execs, all keen to shape what comes next. It might be dry, and the conversations often wonky, but it's the only forum for all UN members to raise diplomatic issues — which increasingly intersect with internet governance and regulation.
Here’s what caught my eye:
- Secretary-General António Guterres launched the catchily titled Global Dialogue on Artificial Intelligence (AI) Governance, a new body designed to help govern the global development of AI that is “grounded in international law, human rights and effective oversight”. The New York Times explains more.
- Amazon shared its own vision for responsible AI and expanded global internet access, which sounds like the disastrous internet.org initiative in new clothing.
- Konstantinos Komaitis writes for Tech Policy Press that China’s export of digital surveillance technology should prevent UN members from too much backslapping about support for the open internet.
In the UK, civil society organisations have warned that Ofcom has been too timid in enforcing the Online Safety Act, suggesting that major platforms are far from “quaking in their boots.” In an interesting shift, 5Rights Foundation and the Molly Rose Foundation also emphasised the importance of privacy in age verification tech which, as we know, isn’t always straightforward.
Also in this section...
- EU Commission to ‘leave doors open’ for social media ban (Politico)
- How Serbia could use EU Digital Services Act for state censorship (EU Observer)
- Reclaimed slurs removal shows errors in Instagram Carousel moderation (Oversight Board)

Products
Features, functionality and technology shaping online speech
Meta is expanding its efforts to protect younger users by placing suspected under-18 users in what it calls ‘Teen Account settings’ following a successful trial in the US. For teens in the UK, Canada and Australia, it means they will no longer be able to go live or turn off protections that filter unwanted content in their messages.
Also in this section...
- What Does It Take To Moderate AI Overviews? (Tech Policy Press)
💡 Become an individual member and get access to the whole EiM archive, including the full back catalogue of Alice Hunsberger's T&S Insider.
💸 Send a tip whenever you particularly enjoyed an edition or shared a link you read in EiM with a colleague or friend.
📎 Urge your employer to take out organisational access so your whole team can benefit from ongoing access to all parts of EiM!
Platforms
Social networks and the application of content guidelines
YouTube is reversing its bans on channels that spread COVID‑19 and election misinformation via a new pilot scheme for creators removed under those now-deprecated rules. CNBC reported that it leaves the door open for Dan Bongino (now deputy FBI director), Steve Bannon (former Trump strategist and head of Breitbart) and Robert F Kennedy Jr (now US Health Secretary) to return to the platform. The company will also abandon third‑party fact‑checking and is assessing the effectiveness of a Community Notes-style feature that it started testing in June 2024.
The opposite of between the lines: It’s a fairly naked concession by a company that is facing several antitrust cases and could be broken up if they don’t go its way. Rep. Jim Jordan (yep, the one regularly featured on Ctrl-Alt-Speech) predictably — and somewhat hilariously — called it “victory in the fight against censorship”.
Also in this section...
- Teen Accounts, Broken Promise (Molly Rose Foundation)
- Behind Grok's 'sexy' settings, workers review explicit and disturbing content (Business Insider)
- Videos of Charlie Kirk’s Murder Are Still on Social Media — and That’s No Accident (The Intercept)
People
Those impacting the future of online safety and moderation
I promise this is the last time I’ll mention American Sweatshop (EiM #305). But I couldn’t resist sharing this interview with its lead actor about how she approached playing the role of a content moderator.
Talking to Parade, Lili Reinhart said:
“Most of these people that were coming forward told me, ‘I quit that job because it fucked me up so much,’ so I didn’t want to be like, ‘Let’s sit down and talk about it for an hour.’ I have an imagination, and I’m an actor, and it’s my job, so I just filled in the pieces on my own.”
I’m conflicted on this approach. Yes, content moderation can be traumatic — that much is well documented — but many moderators also develop ways to compartmentalise those experiences. It wouldn’t have been hard to find someone willing to speak to that without suffering a relapse.
I believe the bigger problem — and one I’ve written about before — is that T&S professionals are often unable to share their experiences publicly due to restrictive NDAs, legal risks, and the fear of professional repercussions. This means what they have seen and know gets flattened and overly simplified. That goes for moderators too.
So Reinhart might think she knows about content moderation: “we all are, weirdly, content moderators in our own way”. But would it have harmed anyone to check?

Posts of note
Handpicked posts that caught my eye this week
- “If you sow enough doubt and fear and confusion, you can sell more supplements to ‘bulletproof’ your followers’ immune systems against measles” - Abbie Richards’ read on the rise of measles misinfo raises questions about TikTok Shop's moderation policies.
- “Bluesky CAN serve this key role in enabling third party moderation, and I don't see someone else equipped to do that rn.” - Great thread from Daphne Keller on the very live discussions about how Bluesky does moderation.
- “Here's what the new co-owner of TikTok US and existing co-owner of the Tony Blair Institute (the 'let's do digital ID' guys) had to say this time last year.” - Maria Farrell joins the dots between the UK’s ID plans and the new part-owner of the US TikTok.