
The new(ish) job roles in T&S

As the Trust & Safety industry matures, we're seeing new types of role emerge that didn't exist five years ago. For each of them, a working knowledge of AI is the bare minimum.

I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that Trust & Safety professionals need to know about to do their job.

This week, having recently changed jobs myself, I'm thinking about the emergence of a new wave of Trust & Safety jobs (and how AI is changing more traditional ones).

Get in touch if you'd like your questions answered or just want to share your feedback. When you read this, I'll be in sunny California for a company offsite, but I'll still be checking email. Here we go! — Alice


SPONSORED BY RESOLVER, proudly attending TSPA EMEA 2025 in Dublin

For over 20 years, Resolver has been at the forefront of protecting communities from online threats. We provide unrivalled intelligence and strategic support to platforms and regulators, driving innovation in Trust & Safety.

At this year’s Trust & Safety Professional Association EMEA summit, we’re thrilled to be taking part in a vital panel session: The Psychological Cost of Innovation: Reassessing Well-being Challenges for New Types of T&S Work.

Join industry leaders, academics, and frontline professionals as we explore how the evolution of Trust & Safety work is reshaping mental health norms, organisational responsibility, and sustainable innovation.

EXPLORE MORE FROM RESOLVER

Apply within (especially if you have AI skills)

Why this matters: As well as reshaping how online harms proliferate, AI is changing the skillsets required of job applicants. I believe real people will continue to be employed to oversee machine learning systems and audit LLMs, but it's clear that some level of AI fluency is becoming a baseline expectation.

At the recent All Things In Moderation conference, I presented on the future of Trust & Safety jobs.

The central argument of my talk built on something I've written about here before: yes, AI is good at entry-level work (content moderation, policy analysis, basic research) and at finding patterns of behaviour in large datasets, but we still need people — real humans — to oversee those systems and the actions they take.

From what I’ve seen advertised recently, we’re starting to see this play out. There are specialist AI and product roles where ML/LLM experience is required just to get an interview. But there are also operations and quality assurance roles that expect a solid grasp of core AI concepts and the ability to interface with the teams that build and deploy these systems.

Get access to the rest of this edition of EiM and 200+ others by becoming a paying member