PG-13 platforms, OpenAI’s moral line, and Europe’s child safety push
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.
It’s been a week of quiet but telling shifts across the key pillars of online speech. EU and UK regulators are flexing their enforcement muscles — with a clear focus on child safety — while the major platforms (yes, that includes OpenAI now) continue to cobble together policy justifications for their latest product changes.
Mike is back in the Ctrl-Alt-Speech co-host chair to discuss these stories and more. Find it wherever you get your podcasts.
A warm welcome to new subscribers from Telus Digital, Ofcom, 1Password, Amnesty International and elsewhere. As always, if this newsletter helps you keep on top of your work, consider getting full-fat access to the EiM archive, urging your organisation to do so, or sharing it with someone in your team.
This is your Week in Review — BW
Twenty years ago, online child protection looked different. The threats were simpler, the platforms were fewer, and the pace of change was slower.
Since then, the internet — and the risks facing children — have evolved beyond recognition.
In the first article of our series “20 Years in Online Safety: Reflecting, Evolving, and Adapting,” we reflect on how Resolver has adapted to protect children in an ever-changing online landscape.
From detecting grooming behaviour in thousands of text messages in 2005, to developing machine learning tools that identify complex threat patterns today — we explore how technology, expertise, and collaboration have shaped our approach.
The challenges may have changed, but our mission hasn't. We’re still here to help platforms, regulators, and policymakers stay one step ahead of those who seek to cause harm.
Policies
New and emerging internet policy and online speech regulation
20 EU countries this week signed a Danish-led declaration committing to strengthen online child protection, according to Euronews. The non-binding Jutland Agreement (full text here) calls for greater cross-border cooperation and the development of new detection tools, including "effective and privacy-preserving age verification" — something that European scientists warned was "impossible" earlier this year. Notably, France, Germany and Italy declined to sign, raising questions about the depth of consensus among member states.
The agreement comes just days after the European Commission formally asked Snap, YouTube, Apple, and Google to explain how they protect minors online under the Digital Services Act.
Missing in action: What’s interesting here isn’t just who received the request, it’s who didn’t. Meta, long in the DSA firing line, is notably absent, which may be a sign of the Commission diversifying its enforcement actions. Or, more cynically, a way of showing that, after recent scrutiny of Twitter/X and Meta, there are no favourite children.
In the UK, 4chan has been fined £20,000 (~$26,000) for failing to comply with the UK's Online Safety Act. Ofcom issued the penalty after the platform refused to conduct a required risk assessment. It’s the first financial penalty issued under the new legislation and, while modest, is an important signal nonetheless. Whether the approach holds when it comes to more combative companies — looking at you, Elon — we will find out.
In the US, California’s new internet age-gating law — AB 1043, the Digital Age Assurance Act — will require operating systems and app stores to implement age assurance measures by 2027. The law is the latest in a series of state-level efforts to regulate platforms, the most recent before it being the 2022 Age-Appropriate Design Code (AADC).