How to write better transparency reports
Mandated transparency reports are a cornerstone of online safety regulation. But dozens of companies still produce voluntary disclosures that take significant time and energy to pull together. What is the purpose of these reports now that so much other information is in the public domain?
AI safety theatre, Shopify’s policy dodge, and the reality of moderation work
The week in content moderation - edition #281
NCMEC funding — and what happens next
LGBTQ+ children face unique risks online, yet political pressure has forced the removal of critical research and support. For those working in Trust & Safety, the question isn’t whether this will have an impact — it’s how we respond before more harm is done.
The transatlantic speech trade, DeepSeek safety report and building out Blacksky
The week in content moderation - edition #280
Is it time to unite T&S and AI ethics?
There's enormous potential in bringing together these often siloed disciplines and organisational functions. Given the complex, intertwined risks of AI and human interaction, it may even be a necessity.
The pitfalls of a Meta-built Community Notes, Germany prepares and AI safety report
The week in content moderation - edition #279
My five Trust & Safety mantras for difficult moments
The moment we find ourselves in feels politically and socially fraught, meaning that T&S workers need to look out for themselves more than ever. Here's what I say to myself during times of stress.
EU bolsters DSA compliance, multimodal moderation and Bluesky releases T&S report
The week in content moderation - edition #278
It's hard in T&S right now but it's not all bad
Exciting AI developments from Reddit and Hinge, and the resourcefulness and innovation of the T&S community, are a reminder that Trust & Safety is an industry with lots to be proud of.
The TikTok timebomb, knock-on effects of Meta's new rules and time to FreeOurFeed?
The week in content moderation - edition #277