Age assurance tokens, a mixed week for OpenAI and how teens 'fact check' the news
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw, and supported by members like you.
I don’t often talk about what happens between editions of Week in Review, but this one’s been “one of those” weeks. As well as juggling a busy few days at my day job, I’ve had the first instalment of EiM’s new series on T&S jobs, an infant-led wake schedule (read: 4:30am starts), and a big interview for my wife.
I’m not saying that for sympathy but to say thanks for reading, replying, sponsoring, sharing, and becoming a member. It keeps me going, both mentally and — frankly — financially too.
I was also leading the charge on Ctrl-Alt-Speech this week but was incredibly lucky to call upon scholar, author, Oversight Board member and all-round nice guy, Kenji Yoshino. Definitely tune into this one.
This week’s edition lands a little later and a little lighter than usual, but hopefully still useful to you in whatever work you do. A warm welcome to new subscribers from Tremau, Duco, Yoti, Demos, Censhership and beyond.
Here's everything in moderation from the last seven days — BW
New Grooming detection added! Powered by trusted data and Thorn’s issue expertise, the Safer Predict text classification model now detects messages that contain signs of suspected grooming.
When indicators of sexual exploitation or abuse of a minor are detected, the model applies a “grooming” label and confidence score to each message.
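Thorn doesn’t publish the response format here, but the pattern — a per-message label plus a confidence score — is a familiar one for moderation tooling. Below is a minimal, hypothetical Python sketch of how a platform might triage that kind of output; the field names and thresholds are illustrative assumptions, not Safer Predict’s actual API.

```python
# Hypothetical per-message classifier output: {"label": ..., "confidence": ...}.
# Thresholds are placeholders -- a real deployment would tune them on its own data.
REVIEW_THRESHOLD = 0.70    # route to human review
ESCALATE_THRESHOLD = 0.90  # escalate immediately

def triage(prediction: dict) -> str:
    """Turn a label-plus-confidence prediction into a moderation action."""
    if prediction["label"] != "grooming":
        return "allow"
    if prediction["confidence"] >= ESCALATE_THRESHOLD:
        return "escalate"
    if prediction["confidence"] >= REVIEW_THRESHOLD:
        return "human_review"
    return "allow"

print(triage({"label": "grooming", "confidence": 0.93}))  # -> escalate
```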
Policies
New and emerging internet policy and online speech regulation
No sooner had regulation forced platforms to adopt age verification — and a data breach become big news (EiM #308) — than a host of influential online safety groups united behind a new age verification standard. The OpenAge Initiative proposes a token-based “AgeKey” that lets a user prove their age without repeatedly uploading ID. Created by safety tech provider k-ID and backed by the Family Online Safety Institute (FOSI), the Centre for Information Policy Leadership (CIPL) and the WeProtect Global Alliance, it’s an admission of the privacy challenges of age assurance and verification — though critics warn it could still undermine users' rights. Politico Pro has the story.
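The technical details of AgeKey haven’t been published in full, but the core idea — check ID once, then carry a reusable signed claim — can be sketched. Here’s a minimal, hypothetical Python illustration using only the standard library; a real scheme would use asymmetric keys and a proper credential format rather than this HMAC stand-in, and every name below is made up for the example.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"issuer-signing-key"  # stand-in; a real issuer would use asymmetric keys

def issue_age_token(over_18: bool, ttl_seconds: int = 86400) -> str:
    """Issuer checks ID once, then returns a reusable signed claim.
    Only an age bracket is embedded -- no identity data."""
    claim = {"over_18": over_18, "exp": int(time.time()) + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claim).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{payload.decode()}.{sig}"

def verify_age_token(token: str) -> bool:
    """Relying site checks the signature and expiry; it never sees the ID."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claim = json.loads(base64.urlsafe_b64decode(payload))
    return claim["over_18"] and claim["exp"] > time.time()

token = issue_age_token(over_18=True)
print(verify_age_token(token))  # True, with no repeat ID upload
```

The point is the data flow: the relying site verifies a signature and an age bracket, and never touches the ID document — which is exactly the privacy problem the initiative says it wants to solve.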
Whether young people are equipped for today’s internet is the topic of new research from think tank Demos, which has some fascinating insights into how British 16- to 18-year-olds:
- ‘Fact check’ claims on social media by searching Google or Reddit to verify what they see, or by going to a legacy media outlet’s social profile.
- Gravitate strongly towards individuals online, particularly those who promote self-improvement, but believe Andrew Tate is ‘dead’ (read: over).
- Believe that both boys and girls — thanks to toxic masculinity and misogyny — have it hard online.
My takeaway? Maybe they aren’t as — as the kids say — ‘cooked’ as the rest of us think.
Also in this section...
- Justice Commissioner Michael McGrath to run EU's hub to fight disinformation (Euractiv)
- Architects of Online Influence: How Creators, Platforms, and Policymakers Shape Political Speech (CDT)
Products
Features, functionality and technology shaping online speech
Zentropi, the AI moderation startup, published a thoughtful blog this week about the evolution of its toxicity-detection models — and, more interestingly, its new shareable content policies. Users can now browse featured policies, borrow ideas from other policy authors, adapt what they’ve written and use it on their own site or forum.
Also in this section...
- Measuring political bias in Claude (Anthropic)
💡 Become an individual member and get access to the whole EiM archive, including the full back catalogue of Alice Hunsberger's T&S Insider.
💸 Send a tip whenever you particularly enjoyed an edition or shared a link you read in EiM with a colleague or friend.
📎 Urge your employer to take out organisational access so your whole team can benefit from ongoing access to all parts of EiM!
Platforms
Social networks and the application of content guidelines
OpenAI has released a blueprint for youth safety standards that explains how it plans to roll out parental controls and improve policy transparency. It’s a clear attempt to get ahead of lawmakers following seven new lawsuits alleging that the company's models caused user harm. The problem? It’s a paltry five pages and doesn’t say much more than this blogpost from Sam Altman a month ago. With strong signals from investors but bad news about its compute costs, it's been a mixed week for the company.
YouTube — which has taken a lot of heat online over the past few weeks for the accuracy of its AI moderation — has been forced to clarify its automation and appeals process, explaining when and how users can challenge enforcement decisions. Spare a thought for the Windows 11 fans caught up in this unholy mess.
People
Those impacting the future of online safety and moderation
I often keep an eye on research coming out of the Oxford Internet Institute, which continues to produce some of the best work on online behaviour and AI governance (EiM #313).
Among its standout scholars is Helen Margetts, Professor of Society and the Internet, whose career from programmer to professor is fascinating in its own right. She was also director of the Public Policy Programme at The Alan Turing Institute, another institution doing fantastic work in this space.
She spoke to The Actuary about how digital systems shape democracy, and the potential for AI to do the same — both positively and otherwise. “AI has the potential to turbocharge all those negative online harms. We need rigorous research into these issues,” she explained. In a world of AI snake oil, I’m glad Professor Margetts continues to insist on evidence over hype.
Posts of note
Handpicked posts that caught my eye this week
- “Here's the first in the series on Trust and Safety, as I attempt to shed more light on the expectations vs the many many limitations of scaled content moderation.” - props to Aaron Rodericks for unpacking how he and Bluesky think about moderating for humans.
- “We are delighted to introduce the Digital Trust Council, an independent nonprofit organization building the global benchmarks for trustworthy AI that measure and celebrate positive social impact.” - Catherine Feldman, its Executive Director, shares the news and the Council’s stellar team.
- "Boundaries are good, I'm not arguing against no phones at dinner or choosing books over screens, but here's what I've learned from a decade in online safety: keeping kids completely away from technology doesn't prepare them for the inevitable encounter." - Daisy Soderberg-Rivkin sympathises with parents.