How to moderate 100m daily users, Meta over-enforcement claims and Clegg saves the web
The week in content moderation - edition #297
Fractional T&S leadership - a good idea?
Fractional T&S leadership sounds good on paper. But if safety is the product, is part-time influence ever enough? Here’s my response to a question from a T&S Insider reader
DSA’s new demands, US age checks upheld, and Roblox suspension research
The week in content moderation - edition #296
Is the Oversight Board just “safety washing”?
Meta’s refusal to follow the Board’s recommendations on LGBTQ+ hate speech could be the beginning of the end for this much-debated platform accountability experiment
UK’s age check mandate, Grok gets it wrong (again) and Singaporean mods shine
The week in content moderation - edition #295
Are schools taking online safety seriously?
I recently learned that even thoughtful, well-intentioned schools often lack strong safeguards on internet-connected devices. Here’s a list of questions to ask — and a template you can use to start the conversation
X/Twitter pivots CSAM efforts, OSA adult performance and life in an online scam mill
The week in content moderation - edition #295
Can LLMs fix the flaws in user reporting?
Large Language Models are being tested for everything from transparency to content review. But could they help modernise one of the oldest T&S processes — how users report harm and appeal moderation decisions?
More teen social media bans, nudify ads nixed and Rudd remembered
The week in content moderation - edition #295
User reporting isn’t the magic fix some people think it is
Despite their ubiquitous use, user reports don’t always drive effective moderation or meaningful change in platform policy. Is there a better approach?