The new African mod whistleblowers, KOSA goes quiet and Dunn deal
The week in content moderation - edition #292
Social media use is changing, but why, and what does it mean for T&S?
A smaller user base doesn’t mean fewer risks: bad actors thrive when harm is concentrated among smaller, more active audiences. Platforms must move beyond user reports to stay ahead.
Oversight Board gives verdict, verification is back but 4chan may not be
The week in content moderation - edition #291
Fair moderation is hard, but fair, scalable moderation is harder
Throughout my career, I’ve grappled with how to enforce policies in a fair, accurate, and scalable way. A new research paper reminds us just how difficult that is.
FTC lays out antitrust case, TikTok 'adds context to content' and Haidt analyses Snap Inc
The week in content moderation - edition #290
A reader asks: What should be on my ‘red line’ list?
Most T&S professionals—whether they admit it or not—have a line they won’t cross for their company. But when you're in the middle of a major, public failure, it can be hard to know what to do. Here’s my take on what to consider before quitting.
Brussels to go after X, Meta to face Kenyan courts and Substack's subtle shift
The week in content moderation - edition #289
Is it prosocial design’s time to shine?
With some platforms retreating from a reactive, enforcement-driven approach to Trust & Safety, there’s a stronger case than ever to lean into proactive and prosocial practices that prevent toxicity from happening in the first place. Here's where to start.
US raises OSA concerns, OpenAI's 'permissive approach' and moderation on stage
The week in content moderation - edition #288
What I heard at the T&S Summit in London
My first time attending a big T&S event outside the US was a lot of fun. But I left without as deep an understanding of British or European attitudes to online safety as I’d have liked.