Anthropic plays defence, Discord pleas for forgiveness and Reddit plans to appeal
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw, and supported by members like you.
When we created our 2026 bingo card, “US Defense Department strong-arms AI safety rules” wasn’t on it. Clearly, we weren’t thinking big — or weird — enough.
That story is central to today’s newsletter (see: Policies) and this week’s Ctrl-Alt-Speech, in which I’m joined by Casey Newton, founder and editor of Platformer and co-host of the Hard Fork podcast. As well as some big personal news, Casey has strong views on the future of age assurance. Have a listen.
Talking of policies that make people safer on the internet (or don’t), have you read Alice’s latest “How to” guide on creating user policies that aren’t terrible? Get the full guide, access to all 300+ editions of Week in Review and the full T&S Insider catalogue by becoming an EiM member.
Welcome to new subscribers from Milltown Partners, Legitscript, Colorado State University, Ofcom, Nisos, Patreon, Positive News, Human Rights Watch, the Australian Artificial Intelligence Safety and other savvy newsletter consumers.
That's enough preamble. Here's your Week in Review - BW
Online platforms face growing challenges in keeping up with the increasing volume of harmful and unwanted content. Manual review alone can be slow, costly, and inconsistent, while many automated approaches often lack the ability to balance safety with freedom of speech.
Checkstep was built to solve this. Its AI content moderation platform acts as your trust and safety co-pilot, combining cutting-edge AI and automation with human oversight. Detect content of interest faster, set and enforce policies, and stay ahead of compliance obligations. All while empowering your teams to make informed, accurate decisions.
Policies
New and emerging internet policy and online speech regulation
US Defense Secretary Pete Hegseth has this week heaped pressure on Anthropic to allow the military to use its powerful Claude model as it sees fit, after other AI companies removed clauses in their contracts. The company, which has a $200 million contract with the Defense Department, is reportedly concerned about AI-controlled weapons and mass domestic surveillance of citizens — both of which go against its safety focus.
The Washington Post has the gory details, including some potential backstabbing by the famously ethical technology company, Palantir. Anthropic responded by announcing changes to its Responsible Scaling Policy, saying it felt “with the rapid advance of AI, that it made sense for us to make unilateral commitments”, although insiders told CNN that the Pentagon pressure had nothing to do with it. Hard to imagine that they’re not linked.
Do or Amodei: Trust & Safety is how platforms put their values into action is a common refrain of my wise EiM colleague, Alice Hunsberger. This is a perfect example of that. Anthropic has styled itself as the most safety-conscious AI company, which doesn’t seem to have harmed its growth in the B2B market. Co-founder Dario Amodei released a statement last night saying “we cannot in good conscience accede to their request”, which is commendable. It’s a shame that he has become the story, rather than the fact that Google, OpenAI and xAI have all dropped pledges or revised language related to AI use for weapons and surveillance.
In the UK, the much-touted social media consultation is expected to be launched next week with The Guardian reporting that Prime Minister Keir Starmer will back the idea despite senior insiders saying they are “sceptical about whether a ban will work”. Despite many making strong cases against a ban — including, this week, the Council of Europe’s Commissioner for Human Rights, Michael O’Flaherty — it increasingly feels like a done deal.
Also in this section...
- Ofcom's enforcement of the OSA: some initial reflections (Online Safety Network)
- Europe vs Big Tech: A battle for democracy? (Coda Story)
- EU’s platform data-access system enters crucial test phase (MLex)
Products
Features, functionality and technology shaping online speech
Discord’s co-founder and chief technology officer this week published a candid blog explaining how the company botched communication around its planned global age assurance rollout, which will now be delayed until the second half of 2026. Stan Vishnevskiy acknowledged that many users misunderstood the scope and privacy implications of the original plan and committed to publishing the technical details of improved age-estimation methods.
Also in this section...
- New Alerts to Let Parents Know if Their Teen May Need Support (Instagram)
- Inside the Internet Archive's race to save federal webpages (Axios)
💡 Become an individual member and get access to the whole EiM archive, including the full back catalogue of Alice Hunsberger's T&S Insider.
💸 Send a tip whenever you particularly enjoyed an edition or shared a link you read in EiM with a colleague or friend.
📎 Urge your employer to take out organisational access so your whole team can benefit from ongoing access to all parts of EiM!
Platforms
Social networks and the application of content guidelines
In more age assurance news, Reddit has been fined £14m by the UK data watchdog for failing to check the ages of users in the period between May 2018 and July 2025. The Information Commissioner’s Office, which opened the investigation into the platform in March last year alongside TikTok and Imgur, found “failures” stemming from asking users to declare their age when opening an account. Reddit has said it will appeal.
Coded messages?: It’s not clear why the ICO has only decided to investigate and fine Reddit now; the failures go back almost 10 years and the Children’s Design Code — the basis of the investigation — was brought in in August 2020. What took them five years to open an investigation? And why aren’t other sites that used self-identification age verification also under investigation? It might have something to do with Reddit’s significant growth over the last few years; just last month, it became the UK’s fourth most visited social media site, overhauling TikTok.
In Kenya, MPs have rejected calls for a total ban on TikTok, concluding that such a move would infringe constitutional rights and risk harming the country’s fast-growing digital economy. But its Parliament is pushing for stronger regulatory measures, including local data storage requirements — ring any bells? — as well as AI models trained on local dialects and human moderators who understand the Kenyan context. Maybe they listened to Daniel Motaung and his colleagues after all (EiM #199).
People
Those impacting the future of online safety and moderation
Professional footballers (aka soccer players) get a bad reputation, and often rightly so. But I appreciated the four individuals who spoke up this week after receiving racist abuse via Instagram.
Hannibal Mejbri, Wesley Fofana, Tolu Arokodare and Romaine Mundle all went public after receiving messages following defeats or poor performances. Arokodare said: “These individuals should have no place in our game and collectively we have to take action to punish everyone who taints the sport like this, no matter who they are”. Of course, women footballers have had it worse for much longer.
For a long time, racism in football has been normalised, and it's not got better since players became easier to reach online. But there’s been a recent pushback, with Ofcom pledging to work more closely with major football bodies and the UK Football Policing Unit. It'll be a long road.
Posts of note
Handpicked posts that caught my eye this week
- “As part of my doctoral thesis, I am conducting research on the motivations and barriers to participation in Community Notes on X” - Marion Seigneurin is looking for X/Twitter users who have evaluated Community Notes.
- “The report is about how and why digital products and services keep getting worse (known as “enshittification”) - and how we can turn the trend” - come for the hilarious video, stay for the report. Thanks to Finn Lützow-Holm Myrstad for the report and for bringing it to my attention.
- “I got a subpoena from Snapchat.“ - Nicki Petrossi, host of Scrolling 2 Death, finds herself in a platform's firing line once more.

Member discussion