Free speech politics, AI missteps, and Pinterest’s pivot to prevention
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw, and supported by members like you.
Exciting news, for me at least: EiM is co-hosting some birthday drinks and nibbles in conjunction with Marked As Urgent, the tech policy events series I've been involved with since the start of the year.
So, if you're in the UK on Thursday 25th September, join us for a drink and the chance to network with brilliant folks working in digital policy, tech regulation and Trust & Safety. It's registration-only and space is limited.
Warning: This week's newsletter contains several instances of the 'F' word. Apologies in advance to those who find Nigel Farage offensive in any way.
This is your Week in Review — BW
Policies
New and emerging internet policy and online speech regulation
Rep. Jim Jordan’s special House Judiciary Committee hearing (EiM #302) might have been a circus but it also served as an “escalation in Jordan’s war on Europe tech regulations”, according to Politico.
UK politician Nigel Farage — called a "Donald Trump sycophant and wannabe" by a Democrat committee member — was the self-styled main attraction and was quick to reference the arrest of an Irish comedian that has attracted the attention of the British press this week.
However, he failed to mention that the arrest had nothing to do with the Online Safety Act and instead related to a law passed in 1986 — coincidentally, the same year Tim Berners-Lee co-authored a paper about pre-internet information systems while at CERN. That didn't stop tech industry group NetChoice from "applaud(ing) this Committee's leadership on this important issue".
Mike had a less than positive take on the hearing — and Farage’s lunch plans — on this week’s Ctrl-Alt-Speech. Have a listen.

The most interesting part of the hearing did not happen on camera but via a letter to Rep. Jim Jordan signed by 30+ academics and researchers addressing the idea that the Digital Services Act is a tool for censorship. Organised by Martin Husovec, associate professor of law at LSE, it explains the scope of the Act and reminds US representatives of the checks and balances in place to prevent overreach. Will it help? Unlikely. But seeing US and EU scholars come together in this way certainly made me sit up and take notice.
Also in this section...
- A Primer on Cross-Border Speech Regulation and the EU’s Digital Services Act (Center for Internet and Society)
- New 7amleh Report: Meta’s Role in Amplifying Harmful Content Against Palestinians During Genocide in Gaza (7amleh)

Products
Features, functionality and technology shaping online speech
An in-app prompt aimed at reducing screen time among Pinterest-loving teens during school hours is being rolled out in the UK after successful pilots in the US and Canada earlier this year. The pop-up encourages users to "take a break from the Pinterest app" — and also to switch off notifications altogether.
Bill Ready, Pinterest's CEO, has previously put his eggs in the basket marked 'operating system level age verification' (EiM #276) but this move signals a more direct attempt to engage with teen usage behaviour itself.
Google’s Veo 3 AI video generator was at the centre of a racist and antisemitic TikTok video trend this summer (EiM #297) and it’s now enabling the creation of hundreds of misogynist videos — in Hindi, no less — on Instagram too. That’s despite such content arguably violating multiple parts of Google’s generative AI prohibited use policy.
💡 Become an individual member and get access to the whole EiM archive, including the full back catalogue of Alice Hunsberger's T&S Insider.
💸 Send a tip whenever you particularly enjoyed an edition or shared a link you read in EiM with a colleague or friend.
📎 Urge your employer to take out organisational access so your whole team can benefit from ongoing access to all parts of EiM!
Platforms
Online spaces and their application of content guidelines
In a rare move, OpenAI and Anthropic granted each other limited API access, enabling researchers to probe for failure points like harmful misuse and hallucinations, according to TechCrunch. Both companies produced research papers on the 'alignment evaluation exercise', which detail weaknesses in each other's models, most notably the propensity of OpenAI's o3 and o4-mini models to hallucinate.
That's all relevant because a string of stories — the emotional pull of AI "deadbots", the ease with which teens can develop parasocial relationships and the growing use of AI to diagnose symptoms — underscores how fragile that alignment still is.
Also in this section...
- Poland: Twitter/X facilitated spread of anti-LGBTI hatred and harassment (Amnesty International)
People
Those impacting the future of online safety and moderation
If there’s a whiff of American hypocrisy in the ‘free speech’ narrative that’s been coming out of the White House recently, Brazil’s top court justice Alexandre de Moraes has seen it all before.
As laid out in this thorough Global Voices piece, Brazil — led by Moraes’ courtroom and the government’s trade policy — has mounted one of the most direct challenges to the deregulated, US-centric vision of the internet. He’s gone head-to-head with Elon Musk (EiM #262) and Telegram founder Pavel Durov and survived to tell the tale.
Moraes embodies a growing frustration in the Global South that global internet governance is still disproportionately shaped by US commercial and constitutional interests. His stance may come with political and economic costs — including unfavourable US trade tariffs and accusations of overreach — but it's a fight that's not going away.
Posts of note
Handpicked posts that caught my eye this week
- "We're going to talk about TRUST! About SAFETY! About CAREERS! Express your interest and register using the link below. (And I'll let you in on a big NYC secret: you can also take the F train.)" - Google's Zoe Darme with a strong sell for an upcoming Cornell University event.
- "This research would not have been possible without the insights, guidance, and trust of the hundreds of content moderators and data annotators who spoke with us. Because of the retaliatory culture within the tech industry, we must protect their identities, but we remain deeply grateful for their courage in sharing their stories and revealing realities unique to Asia." - Sabhanaz Rashid Diya of the Tech Global Institute shares a new report that I can't wait to dig into.
- "We will discuss how age checks are being implemented across platforms to support safer, age-appropriate online experiences. Not just to meet regulatory requirements - but to better protect children and support families." - Verifymy's Andy Lulham with a webinar that couldn't be more timely.