YouTube talks up safety credentials, Turkey talks regulation and Britta's testimony
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.
Today's newsletter is hitting your inbox later than usual because there are only so many hours in the day to read the variously far-fetched news lines coming out of the Swiss mountains. I've done my best to parse the big safety-related ones but drop me a line if I missed anything.
Mike and I went deep into the implications of Anthropic updating its model Constitution in this week's Ctrl-Alt-Speech. I'd say it's a broadly interesting episode (geddit?). And you can still get in touch if you have ideas for our 2026 bingo card.
Welcome to all new EiM subscribers including folks from The Economist, Spotify, JP/Politikens, Harvard University, Meta, Aiba.ai, Technology Coalition, the ICO, Microsoft, the Department for Science, Innovation and Technology and one of my favourite media organisations, Maldita.es.
Here's everything worth knowing this week — BW
Online child sexual exploitation and abuse evolves quickly, so cross-platform detection must keep pace. The Tech Coalition’s Lantern program is expanding industry signal sharing to disrupt harmful activity faster.
As participation grows across the tech and financial sectors, Lantern is helping uncover patterns earlier, strengthen investigations, and drive coordinated action to protect children across the digital ecosystem.
Policies
New and emerging internet policy and online speech regulation
Turkey is cranking up the pressure on Big Tech platforms after separate government ministers called for both an under-15 social media ban and stronger overall tech regulation in the space of a few days. It comes a week after two relevant and related reports were published: a 200-pager from MPs setting out 82 proposals designed to protect children online (including some rather far-fetched ideas) and another from a Turkish non-profit warning about social media platforms' widespread compliance with government takedown demands. Taken together, the two should be cause for concern in a country that already scores poorly for internet freedom.
The teen social media ban news (EiM #320) continues to gather pace:
- UK: Before the Labour government could get started on its newly announced consultation on a social media ban, the House of Lords (the unelected chamber, for EiM’s US readers) resoundingly voted to amend a schools bill currently going through parliament — meaning a ban could be in place within months. This is despite Ian Russell, father of Molly Russell, and academics cautioning against moving too quickly.
- Netherlands: As in many other countries, momentum for a ban is growing, with two thirds of respondents to a 6,000-strong survey in favour.
- Australia: Julie Inman Grant, Australia’s eSafety Commissioner, told the BBC that social media platforms had communicated with the regulator “very, very reluctantly” and that Snapchat had become a focus for further investigation after reports that children had got around age verification measures.
If you’re looking for a smart take on smartphone/social media bans, enjoy digital policy expert (and friend of EiM) Heather Burns’ idea for a Darnella test, after the young woman who filmed the killing of George Floyd.
Also in this section...
- Ofcom joins forces with international online safety regulators on age checks (Ofcom)
- Rand Paul Only Wants Google To Be The Arbiter Of Truth When The Videos Are About Him (Techdirt)
- Philippines to Lift Ban on Grok After xAI Vowed Safeguards (Bloomberg)
Products
Features, functionality and technology shaping online speech
A new social network was unveiled at Davos this week as a European alternative to Elon Musk’s X/Twitter. W (that’s not a typo) aims to tackle misinformation and bot activity by requiring verified identities and photo checks for users, with decentralised data hosting under strict EU privacy law. Led by Swiss privacy expert and former eBay exec Anna Zeiter, it follows other Twitter clones that have promised to do safety differently only to realise it’s not that simple.
💡 Become an individual member and get access to the whole EiM archive, including the full back catalogue of Alice Hunsberger's T&S Insider.
💸 Send a tip whenever you particularly enjoyed an edition or shared a link you read in EiM with a colleague or friend.
📎 Urge your employer to take out organisational access so your whole team can benefit from ongoing access to all parts of EiM!
Platforms
Social networks and the application of content guidelines
YouTube has sought to burnish its safety credentials over the past fortnight. The platform announced new parental controls for its Shorts format — including the ability to set short-form content consumption to zero, which YouTube claims is an industry first. This was followed by CEO Neal Mohan’s annual letter, in which he outlined the importance of child safety and combatting AI slop within YouTube’s strategy for 2026 and beyond.
Safety business: As I mentioned in this week’s Ctrl-Alt-Speech, Mohan and YouTube clearly see the ROI of investing in safety as it goes deeper into the classroom, the living room via connected TVs and the creatorsphere. Frankly, you can’t expect to grow across those three verticals without strong policies and stronger enforcement. Add in the news that the BBC — the world’s largest broadcaster by employees and reach, let’s not forget — will start producing bespoke content for the platform in order to attract younger audiences, and the safety stakes become even higher.

TikTok and ByteDance have finally agreed their long-awaited deal to transfer parts of their US operations to American investors (EiM #277). In the official — and very male-heavy — TikTok announcement, the joint venture is said to “safeguard the U.S. content ecosystem through robust trust and safety policies and content moderation”. All very normal stuff, then.
Also in this section...
- Claude's new constitution (Anthropic)
- The Tea App Is Back With a New Website (Wired)
People
Those impacting the future of online safety and moderation
Authentic accounts of life inside Big Tech companies are hard to come by — you only need to ask ex-Meta employees Sarah Wynn-Williams (EiM #285) or Kelly Stonelake. Now we have Britta Hummel’s testimony to add to the list.
Hummel spent over six years at Meta, where she was an engineering manager driving virtual and mixed reality prototypes that shaped the company's big bucks Quest and Horizon initiatives. After handing in her badge in early January, she has shared a candid account of burnout, unchecked power, and diminishing psychological safety inside one of Silicon Valley’s largest companies.
In it, she reflects on the challenges of speaking up in a difficult corporate culture, child safety concerns in XR, and leadership values that ultimately made her step away from corporate tech. But she’s clear it’s not just a Meta thing; there’s a wider trend in which “people who choose power over integrity rise fastest.” Dex Hunter-Torricke (EiM #320) suggested as much too.
She plans to build a mental health app and get involved in European tech initiatives. Maybe she’s the person to save W (see Products) from whatever fate awaits it.
Posts of note
Handpicked posts that caught my eye this week
These posts are a reminder that online harms affect some groups of online users disproportionately. The perpetrators of those harms are almost always men. Support Glitch, Chayn, and the Coalition Against Online Violence, to name just a few organisations doing brilliant work in this space.
- “On Civitai—a prominent platform for gen-AI content/tools w. millions of users—you get a growing marketplace for NSFW requests and a nontrivial stream of deepfakes.” Stanford postdoc Matt DeVerna with a thread on what happens when you incentivise genAI creation at scale. And no, it’s not good.
- “Disappointed, but not surprised that a tech billionaire can be faced with a bereaved sister and continue to put profit interests first…” - Online safety campaigner Adele Zeynep Walton shares her account of a problematic interaction at Davos.
- “Nine years of profound disappointment that deepened this week as governments around the world finally seemed to recognize the nature of the problem—all it took was thousands of women, girls, and men (!) per hour being abused by Elon Musk’s bangified chatbot, Grok.” - It's no wonder author and disinfo expert Nina Jankowicz never wants to write about deepfake sexual abuse again.