How to moderate 100m daily users, Meta over-enforcement claims and Clegg saves the web
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.
Today's edition is a bit like EiM's greatest hits — we've got stories on Sir Nick Clegg, the eSafety Commissioner, even Reddit’s mod tooling — but with new questions to answer. Add to that a technical peek into Roblox’s AI enforcement engine, and you’ve got a round-up that covers different harms, geographies and platforms. I hope it's useful.
Want more? You're in luck. Mike and I go deeper on some of these stories in the latest episode of Ctrl-Alt-Speech (Apple, Spotify).
New subscribers from Dow Jones, Checkstep, Ofcom, Headland Consultancy, Reddit, Duco and elsewhere, I'm hoping that you weren't signed up to EiM by a jealous ex or a vindictive scammer. If you were, change your email settings in just a few clicks.
Let's get into it then, thanks for reading — BW
Voice-based scams are more sophisticated, more emotional, and more expensive than ever before. But traditional fraud tools still focus on the text and transaction — not the tone.
This blog post from Modulate breaks down why the voice channel is becoming the new front line in fraud, and how behavioural voice analysis can catch what legacy systems can’t.
Policies
New and emerging internet policy and online speech regulation
From last week but an important one: a tribunal ruling in Australia has reversed a takedown decision by the eSafety Commissioner, finding that the post did not meet the threshold for cyber abuse under the country’s Online Safety Act. In a win for X/Twitter, which jointly filed the challenge with a Canadian anti-trans activist, the Administrative Review Tribunal said it couldn’t be satisfied of the “necessary intention to cause serious harm to the subject of the post”, a trans man appointed to advise the World Health Organisation.
ABC News has a 6-minute explainer on the case, which has intensified the focus on Australia’s regulatory approach and led to calls for the eSafety Commissioner Julie Inman Grant to step down.
The EU has released the latest list of 40+ trusted flaggers under the Digital Services Act and it’s revealing. Euronews led with the fact that 14 of the 27 EU member states have not yet approved any organisations, but I was more interested in the range of organisations that have been designated: the Finnish branch of Save The Children? Romania’s national institute for Holocaust studies? The Austrian Chamber of Labour? A strange bunch with big responsibilities for tackling illegal content.
Products
Features, functionality and technology shaping online speech
Roblox has given a behind-the-scenes overview of how AI models are now detecting policy violations in real time — including hate speech and adult content — for almost 100m daily users. In a technical blogpost out this week, Naren Koneru, its VP of Engineering, Safety, explains how it handles 370,000 requests per second for personally identifiable information alone and has had success reducing false positives by improving data quality. It’s a company worth watching if you’re working on AI for policy enforcement.
Also in this section...
- Considering the Human Rights Impacts of LLM Content Moderation (Tech Policy Press)
💡 Become an individual member and get access to the whole EiM archive, including the full back catalogue of Alice Hunsberger's T&S Insider.
💸 Send a tip whenever you particularly enjoyed an edition or shared a link you read in EiM with a colleague or friend.
📎 Urge your employer to take out organisational access so everyone can benefit from ongoing access to all parts of EiM!
Platforms
Social networks and the application of content guidelines
Meta is facing heat this week for over-enforcement of its child sexual abuse material (CSAM) policy, following complaints from disgruntled Instagram and Facebook users and a petition, signed by 28,000 people, calling for the company to “fix its broken systems and treat its users with respect and fairness”.
Over 100 people locked out of their accounts spoke to the BBC about the stress of being accused of accessing CSAM and, in some cases, losing out financially after being locked out of business accounts.
Just the start?: We can safely predict that more stories like this will emerge as a raft of incoming online safety regulation puts pressure on platforms to remove borderline content. It’s not an abstract concern either; just last month, a Korean official confirmed that Meta had told them it was already over-enforcing on CSAM as a compliance mechanism.
It's bad for users but bodes well for the user appeals organisations set up under Article 21 of the Digital Services Act, two of which announced platform and language expansions this week.

Not for everyone but some fascinating insights in this post on r/modnews about how Reddit plans to move from “from policing to community cultivation”. There’s some advanced tooling in there, including ways for mods to inform redditors about violating posts before they are submitted. Very cool.
Also in this section...
- Elon Musk’s Grok AI chatbot praises Adolf Hitler on X (Financial Times)
- Social Media Can Support or Undermine Democracy – It Comes Down to How It’s Designed (The Conversation)
- 'Bosses see us as machines': Content moderators unite to protect mental health (Context News)
- Racist AI-generated videos are all over TikTok, thanks in part to Google's Veo 3 tool (FastCompany)
People
Those impacting the future of online safety and moderation
He’s back (EiM #92, #106, #180)! And this time he’s got stuff to say. After a hiatus brought about by Meta’s now-infamous policy pivot (EiM #276), Sir Nick Clegg is on the media trail promoting his new book about, yep, saving the internet.
Due to be published this autumn, it is described by its publisher as a “radical, reasonable, deeply felt and disarmingly honest” account. From the blurb, it sounds like he takes aim at Silicon Valley and Big Tech and has some ideas for preserving the openness of the web, a cause that, in part, culminated in him becoming UK deputy prime minister back in 2010.
However, according to The Times, the former president of global affairs at Meta told conference delegates in Oslo recently that misinformation has always been a thing and that the issue is that “human beings are not always nice and never ever have been.” Look forward to hearing how he defends that one.
Posts of note
Handpicked posts that caught my eye this week
- "Looking forward to next week's event organised by All Tech is Human and the Royal Society on July 16th." - unfortunately I can't make this fantastic London event but be sure to join Meta's David Miles and a host of other smart T&S people there.
- "We're thrilled to have partnered with the experts at Childnet on these resources as a part of a holistic approach to tackling the misuse of AI to create NCII - and they're available for schools to use now!" - Microsoft's Liz Thomas on the tech giant's latest partnership about the risks of gen AI for teens.
- "For the purpose of the film, Jordan gets himself ‘sextorted’—a form of online blackmail in which criminals threaten to release intimate images of victims unless they pay money or comply with other demands." - Paul Raffile teases a documentary that'll be must-watch for EiM subscribers (and Rizzle Kicks fans).
Member discussion