EU defines ‘addictive design’, defensive platforms and Ba’s exit
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.
This week, we’ve seen what happens when a platform makes product changes to get ahead of regulation as well as what happens if product design isn’t up to scratch in the eyes of a regulator.
The latter story — TikTok’s breach of the Digital Services Act for ‘addictive design’ — is one of the dozens of stories in today’s Week in Review and a focus of this week’s Ctrl-Alt-Speech. With Mike away, I got to pick the brains of Dr Blake Hallinan, Professor of Platform Studies in the Department of Media & Journalism Studies at Aarhus University. It was a fun conversation, have a listen.
Welcome to new responsible internet users from Salesforce, eSafety Commission, Kroll, Google, GIFCT, Thorn, Besedo, Ofcom and others like them. If you missed it, read Alice Hunsberger’s deep-dive into the International AI Safety Report. And if you don’t receive T&S Insider, update your newsletter settings in your EiM account to get it every Monday.
Here’s your Week in Review. As ever, drop me a line if I missed something or you have feedback about today's edition — BW.
Online safety regulation is no longer theoretical.
With regulations like the EU Digital Services Act (DSA) now actively enforced, platforms are expected to clearly demonstrate how they identify, assess, and mitigate risk in practice.
To help teams get clarity fast, we’ve launched a free Online Safety Compliance Checklist in partnership with Illuminate Tech.
The assessment takes just a few minutes to complete and generates a custom checklist: a practical starting point for compliance planning for your online platform.
Policies
New and emerging internet policy and online speech regulation
The European Commission has told TikTok to disable features such as infinite scroll and autoplay for minors and to make changes to its recommendation systems under the Digital Services Act.
In preliminary findings of its two-year investigation, the Commission claimed the Chinese platform’s design features drove “compulsive behaviour” among users and reduced their self-control, which is some statement. It marks the first time that any regulator has tried to set the standard for “addictive design”. TikTok called the findings “entirely meritless” and “categorically false”, which, ironically, are the same words used to describe my portrayal of Oliver Twist in the year 6 school play.
The UK government has published results from a snapshot survey that shows just how little parents engage with their children on online safety. Fewer than half of 1,100 parents — whose children were aged between 8 and 14 — have ever spoken with their kids about their online experiences and, of those that did, most were one-off conversations. The research also points to a lack of awareness of online safety resources and a lack of confidence among parents about what their child sees on internet platforms.
Control of the parental kind: This research — and the simple but effective advert put out to accompany it — suggests that the UK government has been paying attention to civil society organisations and is starting to see the downsides of a social media ban. A three-month consultation seeks to gather more views. In the meantime, try finding the aforementioned ad — it’s so hard to find online that I’ll be surprised if a parent ever sees it.
Also in this section...
- Claude’s Constitution Needs a Bill of Rights and Oversight (Oversight Board)
Products
Features, functionality and technology shaping online speech
Remember Community Notes, X/Twitter’s crowdsourced method of adding context to posts (EiM #172) that was butchered by Elon and then (badly) copied by platforms worried about being seen to do Trust & Safety (EiM #276)? Well, that’s been expanded to “Collaborative Notes”, an AI-assisted method to co-draft notes and — we can presume — increase the total number of notes. That’s good in one sense since Community Notes have been shown to reduce the spread of potential mis- and disinformation. But will AI-drafted notes have the same effect?
Alexios Mantzarlis over at Indicator Media suggested that this could push AI-written notes above 50% of those marked “helpful” by the end of the year. Which would be a strange milestone considering it was billed as “empowering people” to “create a better informed world”.

Also in this section...
- Exclusive: OpenAI disbanded its mission alignment team (Platformer)
- Introducing KORA: the first public benchmark for AI child safety (Korabench.ai)
💡 Become an individual member and get access to the whole EiM archive, including the full back catalogue of Alice Hunsberger's T&S Insider.
💸 Send a tip whenever you particularly enjoyed an edition or shared a link you read in EiM with a colleague or friend.
📎 Urge your employer to take out organisational access so your whole team can benefit from ongoing access to all parts of EiM!
Platforms
Social networks and the application of content guidelines
In a move that has upset large swathes of its users, Discord will roll out teen-by-default safety settings globally in March, introducing stricter messaging controls and content filters for younger users. The announcement — timed to coincide with Safer Internet Day — builds on age assurance measures in the UK and Australia but was so badly received that the company had to put out another statement clarifying that “the vast majority of people can continue using Discord exactly as they do today, without ever being asked to confirm their age.” Users, understandably, are worried given Discord’s recent history of leaking users’ personal data.
A powerful Guardian article has looked at the experiences of female workers in India reviewing abusive and traumatic content to train AI systems. It’s an unfortunately familiar story: data labellers and content moderators pushed from queue to dataset to project with no recognition of the psychological harm. Only two of the eight Indian companies spoken to by the Guardian’s reporter provided psychological support.
Same but different: What I found interesting is that workers like Monsumi — unlike Daniel Motaung (EiM #159) and many others before him — haven’t had to relocate to a large Indian city to do the work: improvements in internet connectivity mean it’s possible for them to work from rural settings, where — let's be frank — safeguards and counselling are less likely to exist.
Also in this section...
- TikTok rival UpScrolled hosts antisemitic and pro-terrorism content (The Times)
- Telegram, the platform favored by cybercriminals and disinformation (El Pais)
- Despite Meta’s ban, Fidesz candidates successfully posted 162 political ads on Facebook in January (Lakmusz.hu)
People
Those impacting the future of online safety and moderation
Jimmy Ba, one of the 12-strong co-founder team of xAI and the leader of its research, safety and enterprise efforts, is reported to be leaving the company amid an exodus of senior technical staff.
The FT reports that Elon Musk has been unhappy with the development of xAI’s coding and AI companion capabilities, which have not delivered the user uptake that the former DOGE head honcho expected. Last month’s Grok saga (EiM xx) can’t have helped either, reinforcing the idea that xAI is lagging behind its frontier model rivals.
Ba may not be a household name in the safety space but his departure matters, at least in my mind. We’ve seen how leadership can shape strategy — particularly when it comes to safety — and we know the exit of key individuals both creates internal strain and signals where the company is heading.
Posts of note (DSA special)
Handpicked posts that caught my eye this week
- "At the occasion of the 2-year anniversary of the DSA, join me at the Europe House in Copenhagen for our European Coffee Mornings where, for this first session, I will take you through the state of play on the implementation in practice." - Valentines schmalantines! Dr Berdien van der Donk reminds us about the only date you need to remember this week.
- "As the DSA reaches two years of full applicability, the conversation is shifting from “what does it mean?” to “how is it being enforced?”" - Tremau's Agne Kaarlep on the way to the DSA and Platform Regulation Conference 2026 next week.
- "The obligations and rights in the DSA — in particular under Articles 16, 17, 20 and 21 — build on each other. Where one of them is not respected, the subsequent ones fall like dominos." - Niklas Eder from User Rights highlights the domino effect of non-compliance.