Washington cries censorship (again), X/Twitter raid and TikTok mods talk legal action
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.
This year was always going to be about regulators knocking at platforms' doors — but I didn't expect that to be literal. Mike and I discuss what the raiding of X's French offices means for transatlantic policy relations in this week's Ctrl-Alt-Speech.
If you're a T&S professional looking to onboard a new vendor, Alice's T&S Insider has you covered with a new guide to running a successful RFP. EiM members get exclusive access — as well as free rein over a newsletter back catalogue that includes our recent jobs series and a detailed guide to networking. Become a monthly or annual member today.
New subscribers from Ofcom, Arkose Labs, Naver, Deloitte, Tencent, Reimagination Lab, Meta and elsewhere, welcome to the club.
Here's your Week in Review for this bleak and dreary February week (in London, at least) — BW
Looking to make new connections in Trust & Safety?
Checkstep will be hosting the next London T&S community meetup on 19 February in Farringdon, at the Ukie offices.
Expect an evening of networking, shared insights, and plenty of food and drink. The event will also feature two short talks from industry leaders, offering their expertise on the trust and safety landscape — with a focus on rising regulatory demands.
The event is free to attend and open to anyone working in Trust & Safety, compliance, policy, or community management. We can’t wait to see you there!
Policies
New and emerging internet policy and online speech regulation
French authorities raided the offices of X/Twitter this week and summoned Elon Musk to appear at a “voluntary” hearing in April as part of an investigation that began over biased algorithms (EiM xx) and has since been extended to include the proliferation of NCII and CSAM.
The move sparked a furious response from the presidentially certified “bullshit artist”, who called it a “political attack”, while former X/Twitter CEO Linda Yaccarino and Telegram CEO Pavel Durov jumped to his defence. Safe to say that France is not making many friends among platforms right now. And Spain is following suit.
It came as Ofcom provided an update on its investigation into X/Twitter under the Online Safety Act (OSA), which it said was progressing “as a matter of urgency”. However, it noted that it wouldn’t investigate xAI — which owns and provides access to Grok — because it is not a user-to-user service or a search service, nor does it produce pornographic content (which some may dispute).
Who’s after who? Although the Online Safety Act does not allow Ofcom to pursue a case against xAI, the UK’s Information Commissioner’s Office (ICO) announced this week that it will — and also has X/Twitter in its sights, according to the release. It follows the European Commission opening an investigation into Elon Musk’s social platform last week. So the ICO and France are going after xAI, while the EU, Ofcom and the ICO are investigating X. Got that? Good.
Meanwhile in Washington, the US House Judiciary Committee published a ‘report’ (if you can call it that) accusing the European Commission of a “decade-long campaign” to censor American speech. The framing — eagerly amplified by blue checks on X/Twitter — positions EU tech regulation as geopolitical interference but suggests the effects of recent enforcement are being felt across the pond.
None of this will shock long-time EiM readers. But what might be more surprising is the tone of industry association NetChoice’s praise for the House Judiciary leadership for calling out what it described as “foreign censorship of Americans”. The group has long argued that regulation amounts to censorship so that’s not new. But the geopolitical framing — and its fawning tone — feel fresh.
Also in this section...
- Claude’s Constitution needs a Bill of Rights and Oversight (Oversight Board)
- Two months since the social media ban began and teens say it isn't working (ABC)
- Data Brokers, and Other Things Not Covered by 230 (Public Knowledge)
Products
Features, functionality and technology shaping online speech
While much of the focus has been on ChatGPT’s model outputs, an interesting story reminds us that it also has obligations as a marketplace. The Observer reports that a host of custom GPTs — tailored versions of the larger model that any Pro user can create — can be prompted to output violent and misogynistic “dating advice” to teenage boys, some in the style of Andrew Tate. OpenAI’s “automatic systems to help ensure GPTs adhere to usage policies, preventing harmful content and impersonation” might need a rethink.
One area that OpenAI does seem to recognise as a vector of abuse is ads. Business Insider notes that the company is building a dedicated “integrity team” to prevent harmful ads from surfacing inside ChatGPT. As I’ve previously argued in EiM, T&S for advertising tends to go under the radar but poses a significant reputational threat when it goes wrong. So it makes sense for OpenAI to get ahead of it.
Also in this section...
- International AI Safety Report 2026 (International AI Safety)
💡 Become an individual member and get access to the whole EiM archive, including the full back catalogue of Alice Hunsberger's T&S Insider.
💸 Send a tip whenever you particularly enjoyed an edition or shared a link you read in EiM with a colleague or friend.
📎 Urge your employer to take out organisational access so your whole team can benefit from ongoing access to all parts of EiM!
Platforms
Social networks and the application of content guidelines
Bluesky's 2025 transparency report provides some useful detail about how the decentralised platform scaled moderation as its user base grew from around 25 million to more than 41 million. The moderation numbers are significant — nearly 10 million user reports, 16 million labels and around 2.45 million removals — but tiny in comparison to other platforms; Meta removed 259m pieces of content across Instagram and Facebook in three months alone.
What interested me most were the improvements to product design — better replies, auto-hiding of reported lists — not to mention the shade thrown at other verification approaches, "not all of which provide helpful signal about account authenticity and provenance." Wonder who it's referring to?
Also in this section...
- Announcing the Minecraft Safety Council (Minecraft)
People
Those impacting the future of online safety and moderation
I might have said it a dozen times before but I don’t mean it any less: it takes a lot to speak out about platform workplace conditions. So credit to former TikTok moderator Lynda Ouazar for doing so.
In an interview this week, Ouazar — who is one of five former employees taking legal action against her former employer — spoke publicly about her experience working at the company, alleging bullying, harassment and union-busting practices. "I was finding it really hard to sleep at night, having flashbacks, feeling tired, losing my motivation,” she told Sky News.
TikTok “strongly reject(s)” her claims but her account adds to a growing number of testimonies from former platform workers describing intense internal pressure and a disconnect between public safety commitments and internal culture.
As with earlier whistleblowers covered here (EiM #291 and others), the significance lies not only in Lynda's testimony itself but in what it reveals about the way safety decisions are made.
Posts of note
Handpicked posts that caught my eye this week
- "Here’s my ✨ updated ✨ mega list of resources so you can keep up with Trust & Safety / AI safety/ Fraud and Risk" - T&S Insider's Alice Hunsberger performing a public service. Good reminder to share my reading sources at some point...
- "Excited to host Rudy Fraser, founder of Blacksky Algorithms at our next Folk Tech event! How can we bridge the gap between non-authoritarian tech and regular people?" - Community tech folks, you'll want to join LX Cast's upcoming call.
- "In a new piece for The Globe and Mail, Helen Hayes and I make the case for a temporary moratorium on social media access for children" - Taylor Owen has a different take on the big story of the last month.