'No free pass' for platforms, pro-AI party vibes in India and Spiegel’s pushback
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.
This week is all about the politics of platform power. From AI summits that sidestep safety to CEOs dabbling in digital sovereignty, there’s plenty of positioning going on.
Last week's Ctrl-Alt-Speech inspired me to write a longer piece on how the rise of creators may shift how internet governance works — plus what platforms and regulators need to do to adjust. If you like badly drawn pyramid frameworks and/or irate TikTokers, you'll want to give this a read.
Welcome to new subscribers from Google, Electronic Arts, EFF, Persona, DuckDuckGo, Sony, the Oversight Board and a steady trickle of other well-intentioned internet shapers. I hope you find EiM useful, challenging and — occasionally — mildly entertaining. If you do, you know what to do.
Here's your seventh Week in Review of 2026. Enjoy — BW
What’s the real cost of “good enough” moderation?
Online platforms face growing scrutiny as harmful content continues to cause real-world harm. Corporate Complacency vs. Human Cost exposes how the gap between policy and practice - revealed through Checkstep’s analysis of EU DSA Transparency data - creates serious ethical, legal, and reputational risks.
In the whitepaper, we uncover the hidden crisis behind inconsistent reporting, manual moderation burnout, and misleading safety metrics. We also explore how companies must align their content moderation policies with action to protect users, staff, and brand trust.
Policies
New and emerging internet policy and online speech regulation
UK prime minister Keir Starmer means business. Or at least that's what he'd have you think, with two platform regulation announcements this week:
- On Monday, he announced that all AI chatbot providers would have to abide by the illegal content duties in the Online Safety Act as part of a plan to keep children safer. “No platform gets a free pass” was the motto, even if that doesn't reflect how companies are categorised under the Act. Hey, just minor details.
- On Thursday, the Labour leader backed that up by announcing that tech firms will be required to remove abusive images within 48 hours or face having their services blocked in the UK. It indicates — to me, at least — a move towards a more prescriptive, Australian-style model.
No brainer: This is a politically low-risk move from the UK government. Tech regulation has broad cross-party support, child safety polls well and “tough on platforms” is an easy message to land. Coming just weeks after Starmer's favourability rating sank to its lowest level to date, it doesn't surprise me at all.
I've not spent as much time as I'd like reading what's coming out of the Los Angeles addictive design trial, in which Meta and Google are defendants. But here are a few pieces I'll be diving into over the weekend:
- The FT's Hannah Murphy reports on Mark Zuckerberg's claim that “utility” and “value” — not engagement — are the priority for users over the longer term.
- Casey Newton over on Platformer called the case a "novel and potent challenge to Section 230", one that could force "significant changes to social app design".
- Sky News offers the strange mental image of six lawyers unveiling comically large social media posts for Zuckerberg to review while on the stand.
Hit reply and share anything I've missed.
Also in this section...
- Exclusive: US plans online portal to bypass content bans in Europe and elsewhere (Reuters)
- America’s problem with Europe’s online speech rules (Politico)
- How to Keep the Internet Human (Creative Commons)

Products
Features, functionality and technology shaping online speech
Anyone who's anyone in AI was in New Delhi this week for the AI Impact Summit, an event that felt a world away from the now-infamous 2023 safety-first gathering at Bletchley Park (EiM #223).
Despite sensible calls for safety to be central to the summit and strong comments from Emmanuel Macron and Narendra Modi, Politico described it as a pro-AI “party” in which talk of model trust and safety was “sort of marginal and on the edges”. Not exactly music to the ears of internet safety professionals.
One attendee even said: “It has essentially become verboten in many circles of global civil society to have hard-nosed conversations about AI risks.” Eeek.
With T&S missing from the conversation, there’s a danger of repeating the Web 2.0 playbook: scale first, mitigate later, apologise down the line (if ever). The question is whether AI is on a different path — or whether we’re simply postponing the same kind of product reckoning that we’ve seen unfold in Los Angeles this week, just 10 or 15 years from now.
Also in this section...
- Grok Is Now Editing Itself (Columbia Journalism Review)
💡 Become an individual member and get access to the whole EiM archive, including the full back catalogue of Alice Hunsberger's T&S Insider.
💸 Send a tip whenever you particularly enjoyed an edition or shared a link you read in EiM with a colleague or friend.
📎 Urge your employer to take out organisational access so your whole team can benefit from ongoing access to all parts of EiM!
Platforms
Social networks and the application of content guidelines
Snap CEO Evan Spiegel used an op-ed in the Financial Times this week to push back against Australia’s under-16s social media ban. In it, he lays out three oft-discussed problems — unregulated apps, age verification challenges and a mixed evidence base — and says that:
“if Australia’s experiment yields clear evidence that this approach genuinely improves youth wellbeing without creating bigger problems elsewhere, we will of course re-evaluate. Good policy and corporate decisions should follow high-quality evidence.”
Change the tune: FT readers are not naïve and the comments beneath the piece reflect that. What Spiegel fails to articulate or acknowledge is why the social media ban rhetoric emerged in the first place — that is, from genuine concerns about child safety driven by the impact of platforms’ product and policy decisions.
If he — and other platform CEOs, for that matter — want to persuade a sceptical policy audience, they’ll need to engage more directly with that root cause.
Also in this section...
- Australia’s social media ban is a high-stakes experiment (FT)
- App Stores Shouldn’t Have to Parent the Internet (ITIF)
- Reddit's human content wins amid the AI flood (BBC)

People
Those impacting the future of online safety and moderation
When you start seeing more of a company’s head of global affairs, it usually means something is afoot. For example, Meta’s former chief, Nick Clegg (EiM #274), appeared a lot during 2022-2023 as he backed the company’s development of open source AI and claimed that its products weren’t the election risk everyone made them out to be. The EU opened formal proceedings not long afterwards.
Kent Walker, Google’s President of Global Affairs, has been similarly prominent this week, coinciding with the 62nd Munich Security Conference, where his message sought to position Google as a stabilising force in an increasingly fragmented tech landscape.
He also told the FT that the EU should be wary of “erecting walls” as it seeks to reduce reliance on US technology providers. His pitch for ‘open digital sovereignty’ had the energy of an unhappy couple insisting they’re ‘taking space’, right before a messy breakup.
His comments are a reminder that T&S increasingly sits within a broader security and sovereignty narrative, making someone like Walker, with his blend of legal, geopolitical and product strategy experience, the perfect messenger.
Posts of note (new research edition)
Handpicked posts that caught my eye this week
- “Likes are private since June '24. This study measures whether people like ‘risky’ stuff more.” - Rutgers’ Kiran Garimella shares a paper looking at X/Twitter’s change to likes. Also props to the researchers for following EiM in calling it X/Twitter.
- "Our forthcoming piece (with Christoph Mueller-Bloch) in the Communications of the ACM develops a public utilities model of social media regulation where provision and moderation are democratically governed.” - fascinating looking paper from University of Sydney’s Raffaele F Ciriello.
- “Digital ethnonationalism (noun) — ‘the design & operation of opaque algorithmic systems & policies that reproduce racial & religious hierarchy & cultural homogeneity at scale.’” - Alex Howard shares new research that develops a concept describing a dynamic increasingly embedded in large platforms. Timely.

