AI content needs 'new rules', UK goes Henry VIII and T&S researchers file suit
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.
AI features heavily in today's edition, but I’m not convinced the technology is always the real story. Whether it’s chatbot safeguards or AI content policies, many of the problems being blamed on AI look more like the product of familiar corporate incentives: shipping features quickly, outpacing so-called competitors and worrying about the consequences later. Tell me what you think — hit reply or email ben@everythinginmoderation.co.
Not coincidentally, those same dynamics crop up in the just-released episode of Ctrl-Alt-Speech, in which Mike and I discuss the AI feature that riled editors, the new Molly vs the Machines documentary and calls for new rules for AI content. Listen wherever you get your podcasts.
If you're heading to the T&S Summit in London, let's meet for coffee. You can book a slot in my calendar to talk about what you're working on and how we can work together. New EiM subscribers from the eSafety Commission, Vinted, Coimisiún na Meán and elsewhere, you're especially encouraged to say hi.
Here's your Week in Review — BW
Attending the Trust & Safety Summit in London? Don’t miss our panel: “When AI Is Perfect, Why Do We Still Need Humans?”
Join Checkstep and T&S leaders from our clients Daily Mail Group and JustGiving, alongside our partners ModSquad, as we explore how content moderation and community management are evolving — and why the right blend of AI and human expertise is critical for long-term success at scale.
Catch the panel on Wednesday 25 March at 11:50am!
Policies
New and emerging internet policy and online speech regulation
A coalition of technology researchers is suing the Trump administration over visa restrictions (EiM #319) that not only prevent foreign experts from conducting independent technology research in the US but are, in the suit's words, "so broad and vague" that they cast "a shadow over a vast range of protected activity". The lawsuit, filed by the Knight Institute and Protect Democracy on behalf of the Coalition for Independent Technology Research (CITR) — two of the five individuals banned last year are CITR members — alleges that the policy violates the First Amendment and argues that the US government is waging a "brazen and far-reaching campaign of censorship while cynically and falsely claiming that censorship is what it's fighting". Props to those involved for putting themselves in the firing line of the US administration on this one.
The Oversight Board has called for new rules governing deceptive AI content during armed conflicts, warning that Meta's current policies don't adequately address synthetic media. The recommendation came alongside a case involving an AI-generated video posted on Facebook during the 12-day Israel-Iran conflict last year (copies of it remain up, including here). The company did not remove or label the video because it didn't "directly contribute to the risk of imminent physical harm".
Tough reading: I'll be honest, going through the Board's decision didn't give me much hope for the future of our information ecosystem. For one, the recommendations directed at Meta — such as creating a Community Standard for AI-generated content — require a level of internal co-ordination and a will to execute that are hard to imagine materialising anytime soon. The decision is also an indictment of well-meaning industry-wide efforts like C2PA, which platforms like Meta were able to sign up to without investing heavily in, and still go on to join the steering committee of. Not a great look.

Meanwhile, the UK government has voted down an amendment that would have introduced an immediate social media ban for under-16s. However, it is reportedly considering new regulatory powers that would allow ministers to update technology rules without full parliamentary scrutiny. According to Politico, amendments to the Crime and Policing Bill and the Children's Wellbeing and Schools Bill would allow ministers to move "at pace" to keep individuals safe by, yep, restricting access to internet services.
These are known as Henry VIII clauses because they let ministers amend primary legislation by decree, which raises an obvious question: do we want Nigel Farage or his Reform ministers potentially wielding such powers?

