Is it time to rethink the Santa Clara Principles?

The 2018 standards set the benchmark for moderation transparency and were adopted by the world’s biggest platforms. But with recommendation algorithms and AI now shaping online speech before it’s even published, the Principles may need updating.

I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that Trust & Safety professionals need to know about to do their job.

This week, I'm thinking about the Santa Clara Principles — the 2018 expert-created standards that have since been endorsed by a dozen major platforms — and wondering how we can modernise them for the age of AI.

As always, get in touch if you'd like even the most fiendish question answered or just want to share your feedback. I've really appreciated all the kind words I've gotten recently, and I want to be sure to write about things that resonate with you. This week's edition is inspired by a reader question (thanks Jenni!), so please feel free to send your ideas. Here we go! — Alice

P.S. A few places you can catch me in the coming weeks:

  • On September 11th, I'm speaking on a webinar about the study on moderator wellness that I wrote about a while back, with forensic psychologist Jeffrey DeMarco and researcher Sabine Ernst.
  • If you'll be in NYC for Marketplace Risk (16-18 September), be sure to join Juliet Shen (ROOST), Marc Leone (Giphy), Nick Tapalansky (MediaLab) and me on the topic of prioritisation.

Are we still doing "content moderation"?

Why this matters: The Santa Clara Principles have been a vital standard for content moderation transparency since 2018, but AI has fundamentally changed how platforms work. We might need to expand these principles to address new realities — not because we have all the answers, but because the questions have changed.

I’ve been revisiting the Santa Clara Principles (SCP) lately, those early guardrails for content moderation that first emerged in 2018.

Drafted by a coalition of academics and civil society groups in — you guessed it — Santa Clara, they pushed platforms to be more transparent about how and why they take content down, emphasising notice, appeals, and accountability. A 2021 update widened the lens, bringing in perspectives from more marginalised communities and urging platforms to think beyond the “big three” of removal, appeals, and transparency reports.

Ben interviewed Jillian C. York — who was involved in the original SCP — about why they were refreshed and how they could be applied to provide greater accountability. It's worth a read if you haven't already.

Q&A: Jillian C. York on the newly revised Santa Clara Principles
Covering: why an inclusive process was key and what Trust and Safety teams should take from the new recommendations

However, as I watch how AI is reshaping Trust & Safety work, I'm wondering: to what extent are these Principles still applicable?
