AI content needs 'new rules', UK goes Henry VIII and T&S researchers file suit
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.
AI features heavily in today's edition, but I’m not convinced the technology is always the real story. Whether it’s chatbot safeguards or AI content policies, many of the problems being blamed on AI look more like the product of familiar corporate incentives: shipping features quickly, outpacing so-called competitors and worrying about the consequences later. Tell me what you think — hit reply or email ben@everythinginmoderation.co.
Not coincidentally, those same dynamics crop up in the just-released episode of Ctrl-Alt-Speech, in which Mike and I discuss the AI feature that riled editors, the new Molly vs the Machines documentary and calls for new rules for AI content. Listen wherever you get your podcasts.
If you're heading to the T&S Summit in London, let's meet for coffee. You can book a slot in my calendar to talk about what you're working on and how we can work together. New EiM subscribers from the eSafety Commission, Vinted, Coimisiún na Meán and elsewhere, you're especially encouraged to say hi.
Here's your Week in Review — BW
Attending the Trust & Safety Summit in London? Don’t miss our panel: “When AI Is Perfect, Why Do We Still Need Humans?”
Join Checkstep and T&S leaders from our clients Daily Mail Group and JustGiving, alongside our partners ModSquad, as we explore how content moderation and community management are evolving — and why the right blend of AI and human expertise is critical for long-term success at scale.
Catch the panel on Wednesday 25 March at 11:50am!
Policies
New and emerging internet policy and online speech regulation
A coalition of technology researchers is suing the Trump administration over visa restrictions (EiM #319) that not only prevent foreign experts from conducting independent technology research in the US but are “so broad and vague that it casts a shadow over a vast range of protected activity”. The lawsuit, which has been filed by the Knight Institute and Protect Democracy on behalf of the Coalition for Independent Technology Research (CITR) — two of the five individuals banned last year are CITR members — alleges that the policy violates the First Amendment and argues that the US government is waging a “brazen and far-reaching campaign of censorship while cynically and falsely claiming that censorship is what it's fighting”. Props to those involved for putting themselves in the firing line of the US administration on this one.
The Oversight Board has called for new rules governing deceptive AI content during armed conflicts, warning that Meta’s current policies don’t adequately address synthetic media. The recommendation came alongside a case involving an AI-generated video posted on Facebook during the 12-day Israel-Iran conflict last year (copies of it remain up, including here). The company did not remove or label the video because it didn’t “directly contribute to the risk of imminent physical harm”.
Tough reading: I’ll be honest, going through the Board’s decision didn’t give me much hope for the future of our information ecosystem. For one, the recommendations directed at Meta — such as creating a Community Standard for AI-generated content — require significant internal co-ordination and a will to execute, both of which are hard to imagine materialising anytime soon. The decision is also an indictment of well-meaning industry-wide efforts like C2PA, which platforms such as Meta were able to sign up to, invest little in and still end up on the steering committee of. Not a great look.

Meanwhile, UK MPs have voted down an amendment that would have allowed an immediate social media ban for under-16s. However, the government is reportedly considering new regulatory powers that would allow ministers to update technology rules without full parliamentary scrutiny. According to Politico, amendments to the Crime and Policing Bill and Children’s Wellbeing and Schools Bill would allow ministers to move "at pace" to keep individuals safe by, yep, restricting access to internet services.
Such powers are apparently known as Henry VIII clauses, after the monarch’s fondness for legislating by proclamation, and they raise an obvious question: do we want Nigel Farage or his Reform ministers potentially wielding them?
Also in this section...
- Shareholder Control and the New Politics of Platform Regulation (Tech Policy Press)
- Far right pushes for debate on ‘election interference’ by Brussels (Politico)
- Anthropic and the Constitutional Dimension of Governance in the Digital Environment (Verfassungsblog)
- Beyond “Fake News”: How information integrity creates a building ground for disinformation-resilient society (Sciences Po)

Products
Features, functionality and technology shaping online speech
Having announced product improvements (EiM #314), the major AI companies may have thought the topic of chatbot safeguards would disappear for a bit. But CNN and the Center for Countering Digital Hate (CCDH) ran an experiment testing whether the ten most popular LLMs would assist dummy users in planning violent acts, and the results — on the surface at least — are not pretty.
Perplexity, Meta AI and DeepSeek all provided actionable information almost 100% of the time, although expecting these systems to distinguish a genuine threat from a regular request for information feels unrealistic.
The small print: I was intrigued by quotes from “former safety leads at AI companies” who said that priority was given to building products that outpace competitors rather than keeping users safe. That reminds us that the models themselves are only part of the puzzle; the important inputs are the incentives that staff work towards, the metrics that they are tasked to hit and the culture that is embedded in the teams to get there (see today's People).
Speaking of product priorities, OpenAI has reportedly realised that an “adult mode” for ChatGPT might face some stumbling blocks. The company announced that it was delaying the rollout in favour of other product developments, after its ChatGPT product fell behind other LLMs at the end of last year.
Also in this section...
- Social media firms asked to toughen up age checks for under-13s (BBC)
- AI-led Content Moderation Tools: Are They The Answer To Combatting Online Extremism in Canada? (GNET)
💡 Become an individual member and get access to the whole EiM archive, including the full back catalogue of Alice Hunsberger's T&S Insider.
💸 Send a tip whenever you particularly enjoyed an edition or shared a link you read in EiM with a colleague or friend.
📎 Urge your employer to take out organisational access so your whole team can benefit from ongoing access to all parts of EiM!
Platforms
Social networks and the application of content guidelines
You’d be forgiven for thinking it's 2018 following the announcement that WhatsApp has finally rolled out parental controls. As TechCrunch reports, parents will be able to control contacts and group invitations, and get alerts when a teenager deletes a chat or a contact, or when a group turns on disappearing messages (which have long posed a risk to children). Parents, however, still won’t be able to see a child’s messages.
Although late to the parental control party, the suite of controls looks more granular than some of Meta’s other platforms, which, frankly, is what you’d hope from an app that has had the luxury of seeing other platforms try (and often fail) with similar safety products. And the best part? Pre-teen accounts don’t have access to Meta’s terrible, experience-interrupting Meta AI. I’m off to change my date of birth.
Also in this section...
- Inside the Discord Server: Echo Chambers and the Spread of Gen-Z Radicalisation (GNET)
- YouTube accused in EU of having 'manipulative' homepage (Brussels Times)
People
Those impacting the future of online safety and moderation
A company’s strategy, the goals it sets and the culture it builds all contribute, often inadvertently, to its safety approach (see today's Products). But so does corporate structure.
That’s the take of Paddy Leerssen, an assistant professor at the University of Amsterdam’s Institute for Information Law and one of the most thoughtful scholars examining the power dynamics shaping platform governance. In a recent piece for Tech Policy Press, Leerssen explores how shareholder pressure is becoming an increasingly important force in the politics of platform regulation.
Citing the recent takeover of TikTok in the US, he argues that investors are beginning to influence how technology companies approach issues like moderation, transparency and corporate accountability. That, he claims, requires new research approaches that track the political convictions (if they can be called such a thing), business connections and personal values of owners like Elon Musk and David Ellison.
Leerssen also suggests that regulatory instruments, such as the EU’s European Media Freedom Act (EMFA), could be used to introduce greater transparency and independence safeguards. US tech billionaires won’t like that but, in some ways, that’s the whole point.
Posts of note
Handpicked posts that caught my eye this week
- "One year ago, we lost Ladi Anzaki, a sister, a friend, and a colleague. Her death remains a mystery. To this day, there has been no autopsy result made public" - Kauna Malgwi with a sad update to a story I covered in March last year (EiM #285).
- "For many viewers, the documentary will be shocking and educational. For those of us who have spent years working on online abuse, violence and misogyny, much of it emerging from incel culture and manosphere radicalisation, this is an opportunity to progress the conversation further." - Glitch founder Seyi Akiwowo on Louis Theroux's new documentary about the manosphere.
- "The Summit brought together TikTok teams with government representatives from 12 countries across SSA, NGOs, international organizations and media to discuss prominent issues like election integrity, AI and youth safety." - Valiant Richey joins TikTok's safety conference in Kenya — where it recently avoided a ban.