
Sam Altman is speedrunning the Content Moderation Learning Curve

All platforms think they can avoid the T&S mistakes of the past, until they can't. OpenAI is now speedrunning content moderation, but can it learn before the crash?

I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that Trust & Safety professionals need to know about to do their job.

I'm finally back home after what seems like endless weeks of travel! In today's edition, I write about lessons that Sam Altman can learn from Trust & Safety professionals. Plus, a brief section on climate change disinformation.

Get in touch if you'd like your questions answered or just want to share your feedback. Here we go!

— Alice

P.S. You'll find me next (virtually) at Safer eCommerce Day on November 5th, where I'll be talking about Human-in-the-Loop systems and what skills fraud and T&S professionals need in the age of AI. Hope to see you there!


In partnership with Resolver Trust & Safety

Exploring what we call ‘threats’

In 2005, the biggest concerns for online platforms looked very different from today. Over time, regulations have sharpened, expectations have risen, and what once sat in a gray area is now a clear threat. 

Our second blog in the “20 Years in Online Safety” series reflects how definitions of harm and responsibility have evolved, and how those changes have shaped our own journey at Resolver. 

From reactive moderation to proactive protection, each milestone has pushed us to rethink not just what safety means, but how it’s achieved. 

It’s a look back at how far we’ve come, and a reminder of how much further our industry still has to go.

How Resolver approaches taxonomies

Elon did it and now Sam is doing it too

Why this matters: Like other platforms before it, OpenAI is learning through costly experience the same safety lessons that Trust & Safety teams have accumulated over decades of content moderation. Luckily, T&S professionals have some answers.

In 2022, Techdirt's Mike Masnick (also of Ctrl-Alt-Speech fame) wrote, "Hey Elon: Let Me Help You Speed Run The Content Moderation Learning Curve".

In it, he mapped how every platform stumbles through the same stages of discovery — from "We're the free speech platform!" to "We're just a freaking website. Can't you people behave?"

Along the way, platforms will often encounter — in no particular order — unique child safety needs, copyright requirements, the difficult balance between privacy and safety, a deluge of spam, FBI reporting obligations, international legal conflicts, and the reality that different users' needs are often incompatible.

Once you've come face-to-face with all that, you typically end up with a set of complex policies, dedicated T&S teams, age-gating technology, some form of rate limiting, and the exhausting realisation that "humanity is messy."
