'Chilling effect' of US policy, safety stack investment and Ellis urges reform
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw, and supported by paid members like you.
I’m writing this from the south of France in the dying embers of my paternity leave, where I may have found the perfect metaphor for how T&S professionals feel in the face of the growing threat of AI-mediated harms. (NB: I was reassured that neither animal nor human was harmed during the event).
A big welcome to the free subscribers who joined over the last two weeks: folks from Fieldfisher, TenTens Tech, Kroll, Reddit, Overtone.ai, Future Privacy Forum, Meta and a sizeable contingent from Automattic.
Whether you're new or otherwise, chances are that you'll want to check out Alice Hunsberger's guide to tracking the right T&S metrics and tune in to Mike talking with his good friend and First Amendment lawyer, Ari Cohn, on the latest episode of Ctrl-Alt-Speech.
An even bigger celebration goes to a handful of newly paid-up EiM members, who, for the princely sum of $100 a year, have supported independent coverage of the T&S industry at a time when it's most needed and got themselves unfettered access to EiM's archive of 450+ editions. Pretty good deal, eh.
Here's everything in moderation from the last seven days(ish) — BW
Online abuse doesn’t stay on one platform, so neither can the response. Through Lantern, companies are sharing threat signals to detect and disrupt abuse networks more effectively.
Since launch, companies have shared 2M+ signals, supporting 350K+ enforcement actions across accounts, URLs, and content. Lantern shows how shared infrastructure can enable coordinated, real-world impact.
Policies
New and emerging internet policy and online speech regulation
Remember the visa restrictions placed on “foreign nationals” responsible for US censorship (EiM #294) and the five individuals sanctioned in December? Oral arguments in that lawsuit — Coalition for Independent Technology Research v. Rubio — were heard this week, with attorneys for the non-profit arguing that the policy “is expansive and incredibly vague, and the chilling effects are correspondingly enormous.” Both Poynter and The Verge have good write-ups.
Ofcom has issued a £950,000 fine to a suicide forum under the Online Safety Act in what it says "reflects the serious and deliberate nature of the contraventions”. The fine, which must be paid by 12th June, is the second largest after the one given to porn company AVS Group Ltd back in December (EiM #317). The forum must also comply with a series of duties by the end of May in order to remain online.
Also in this section...

Products
Features, functionality and technology shaping online speech
The last few weeks have seen some notable investment in the T&S vendor ecosystem:
- Checkstep, the end-to-end content moderation product that counts Trustpilot and JustGiving as customers, announced £3m in funding. Led by Alea Capital Partners, it will support continued product development and commercial partnerships, a release explained. (Editor's note: Checkstep has previously sponsored EiM)
- Cinder, the safety tooling stack founded by ex-Meta and Palantir employees and used by platforms such as OpenAI, ElevenLabs and Depop, has announced a $41m series B round led by Radical Ventures to “accelerate our work building mission-critical infrastructure”.
Winner takes all?: It’s a positive sign for the T&S industry to see VC investment in the technologies that so many platforms rely on, despite the political backlash mentioned above (see Policies). The interesting question now is whether the market can sustain companies all promising roughly the same thing — usually a mixture of automated policy enforcement, cross-jurisdictional compliance and regulator-ready reporting. Will one become the 'Salesforce or HubSpot of online safety'? And will there be acquisitions like we saw a few years back?
Also in this section...
- We don't need content cops on social media. We need better design. (Mashable)
- Connecting Researchers & Practitioners to Catalyze Actionable Research (Prosocial Design Network)
- Magical thinking about magical thinking (Heather Burns)
I know it’s not possible for everyone to support EiM with a full membership. You can now send a tip whenever you particularly enjoy an edition or share a link you read in EiM with a colleague or friend. It helps keep independent reporting and analysis about the T&S industry sustainable — and is hugely appreciated.
Platforms
Social networks and the application of content guidelines
Meta has been in the UK High Court disputing the fees and potential penalties it should pay to regulator Ofcom for administering the Online Safety Act. The company argues that its UK revenue, rather than its global revenue, should be used in the calculation. That might seem a fair argument if there hadn’t already been a 2.5-month consultation on Ofcom’s charging principles at the back end of last year, including a clarification of ‘qualifying worldwide revenue’ — wording locked into the Online Safety Act by the UK parliament all the way back in 2023.
xAI, the Elon Musk-owned company behind the chatbot Grok, has been pushing back against regulators far more overtly than Meta. Dutch investigative outlet Follow The Money reported that it added an Estonian address — purporting to belong to a legal firm — to its terms of service soon after the EU opened an investigation into the chatbot. Was that address genuine? Turns out not.

People
Those impacting the future of online safety and moderation
For six years, I have read GLAAD’s Social Media Safety Index hoping to find material gains in the safety of LGBTQ users and a reason to cheer. Every year — including with this year's recently released report — I’m disappointed.
The numbers speak for themselves: five of the six platforms saw their scores decline, with TikTok the only one to avoid a drop. But it’s the foreword from Sarah Kate Ellis which really brings it home.
Ellis, who has been president and CEO of GLAAD since 2014, focuses heavily on Meta’s policy backsliding, noting that the company has “traded a commitment to human rights for the overt backing of anti-LGBTQ hate and the actors who traffic in it.” She urges “LGBTQ creators, advocates, and organisations targeted on and by these platforms” to make themselves heard by all platforms — perhaps easier said than done in the current climate.
During her tenure, Ellis made the non-profit one of the most persistent critics of platform failures around LGBTQ safety, and for good reason. Judging by this year’s report, she has plenty of work ahead of her.
Posts of note (research edition)
Handpicked posts that caught my eye this week
- “AI is not some universal experience, but something filtered through our culture, our identity. Questionable content didn't suddenly fracture some shared objective reality, but is reminding us that we never had one to begin with.” - If nothing else, research from Zine/Reddit’s Matt Klein shows that so much of our view of AI could be over- or under-reported.
- “I’m biased towards New_ Public’s point of view: pro-social spaces, pro-democracy technology, and community as an ingredient for trust are all my jam. But everything laid out in this presentation is already happening.” - Ben Werdmuller over at ProPublica shares some juicy New_Public research that I look forward to diving into.
- “I asked five AI chatbots who to vote for in the UK local elections. It took a single follow-up prompt for two to give me a name.” - Ifigenia Moumtzi shines a light on what happens when chatbots dispense political advice.


