Can LLMs fix the flaws in user reporting?
Large Language Models are being tested for everything from transparency to content review. But could they help modernise one of the oldest T&S processes — how users report harm and appeal moderation decisions?
User reporting isn't the magic fix some people think it is
Despite their ubiquity, user reports don't always drive effective moderation or meaningful change in platform policy. Is there a better approach?
The new(ish) job roles in T&S
As the Trust & Safety industry matures, we're seeing new types of role emerge that didn't exist five years ago. For each of them, a working knowledge of AI is the bare minimum.
Are we getting moderator well-being all wrong?
New research on wellness programs for moderators shows we’re still far from ensuring that the people doing this emotionally demanding work are truly supported.