
The new(ish) job roles in T&S

As the Trust & Safety industry matures, we're seeing new types of role emerge that didn't exist five years ago. For each of them, a working knowledge of AI is the bare minimum.

I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that Trust & Safety professionals need to know about to do their job.

This week, having recently changed jobs myself, I'm thinking about the emergence of a new wave of Trust & Safety jobs (and how AI is changing more traditional ones).

Get in touch if you'd like your questions answered or just want to share your feedback. When you read this, I'll be in sunny California for a company offsite, but I'll still be checking email. Here we go! — Alice


SPONSORED BY RESOLVER, proudly attending TSPA EMEA 2025 in Dublin

For over 20 years, Resolver has been at the forefront of protecting communities from online threats. We provide unrivalled intelligence and strategic support to platforms and regulators, driving innovation in Trust & Safety.

At this year’s Trust & Safety Professional Association EMEA summit, we’re thrilled to be taking part in a vital panel session: The Psychological Cost of Innovation: Reassessing Well-being Challenges for New Types of T&S Work.

Join industry leaders, academics, and frontline professionals as we explore how the evolution of Trust & Safety work is reshaping mental health norms, organisational responsibility, and sustainable innovation.

EXPLORE MORE FROM RESOLVER

Apply within (especially if you have AI skills)

Why this matters: As well as reshaping how online harms proliferate, AI is changing the skillsets required of job applicants. I believe real people will continue to be employed to oversee machine learning systems and audit LLMs, but it's clear that some level of AI fluency is becoming a baseline expectation.

At the recent All Things In Moderation conference, I got to present about the future of Trust & Safety jobs. 

The central argument of my talk built on something I've written about here before: yes, AI is good at entry-level work (content moderation, policy analysis, basic research) and at finding patterns of behaviour in large datasets, but we still need people — real humans — to oversee those systems and the actions they take.

From what I’ve seen advertised recently, we’re starting to see this play out. There are specialist AI and product roles where ML/LLM experience is required just to get an interview. But there are also operations and quality assurance roles that expect a good understanding of core AI concepts and the ability to interface with other teams that use it.

So whether you’re a generalist looking for your next chapter or a specialist seeking a new angle, here are five roles reflecting the future of T&S that you may not have considered yet. All of them require some degree of AI proficiency — the question is: how deep do you need to go?

Policy/Prompt Engineer

  • What: The emerging viability of LLMs in T&S requires people who know both policy and AI. There’s no playbook for this yet — and that’s what makes it exciting.
  • How: Requires deep knowledge of policy best practices as well as prompt engineering: how to “speak” to LLMs in a way that ensures consistency and accuracy.
  • Who it’s good for: Policy and ops experts who feel comfortable with cutting-edge tools.
  • Open role: Trust & Safety AI Engineer, EverAI
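To make the policy-plus-prompt combination concrete, here is a minimal, hypothetical sketch of what this work can look like: turning written policy rules into a structured moderation prompt with a fixed output format, so an LLM answers consistently across cases. The function name, policy text, and prompt format are all invented for illustration, not from any real platform.

```python
# Hypothetical illustration: converting written policy into a structured,
# repeatable LLM moderation prompt. All names and rules are invented.

POLICY_RULES = [
    "No threats of violence against individuals or groups.",
    "No sharing of personal contact details without consent.",
]

def build_moderation_prompt(rules: list[str], content: str) -> str:
    """Build a policy-grounded prompt with a fixed answer format,
    so verdicts are consistent and easy to parse downstream."""
    numbered = "\n".join(f"{i}. {rule}" for i, rule in enumerate(rules, start=1))
    return (
        "You are a content moderation assistant.\n"
        "Apply ONLY the policy rules below. Do not invent new rules.\n\n"
        f"POLICY:\n{numbered}\n\n"
        f"CONTENT:\n{content}\n\n"
        "Answer in exactly this format:\n"
        "VERDICT: ALLOW or REMOVE\n"
        "RULE: the rule number violated, or NONE\n"
        "REASON: one sentence"
    )

# This string would then be sent to whichever LLM the team uses.
prompt = build_moderation_prompt(POLICY_RULES, "Call me at 555-0100.")
print(prompt)
```

The craft in the role is exactly what this sketch glosses over: wording the rules so the model applies them the way your policy team intended, and testing that against real edge cases.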

Go-To-Market/Sales 

  • What: As more vendors pop up in the T&S space, this opens up new opportunities in B2B sales/marketing/community building. On the platform side, it’s more important than ever to have a marketing and comms plan for T&S products. These roles build the “Trust” in T&S. 
  • How: Deep T&S knowledge is key, as is the ability to communicate how tools and technologies — including AI-enabled ones — deliver value. T&S sales teams are also often small operations, so using AI to support your own workflows and goals is crucial.
  • Who it’s good for: Seasoned T&S leaders with a solid footprint in the community.
  • Open (related) role: Product marketing lead, Roblox

Tooling Product Manager

  • What: More platforms (and vendors) are building internal moderation tools, dashboards, and automation layers. T&S-savvy PMs are key to ensuring these actually serve moderator needs.
  • How: Requires product instincts, experience working with ops teams, and an eye for UX in high-stress workflows. Likely to involve working with ML engineers and being able to define requirements for AI-powered features.
  • Who it’s good for: Former T&S leads or analysts who want to shape tools from the inside.
  • Open role: Senior Safety Product Manager, Yubo

Moderator Enablement/Training 

  • What: With AI shifting the shape of human work, there’s fresh demand for people who can upskill, support, and enable moderation teams. AI is just part of the package here.
  • How: Requires experience in policy enforcement and strong instructional design or pedagogy skills. It’s likely you’ll be training humans to work alongside AI, so knowing where the two hand off to each other, and being able to explain how AI makes decisions, is a must.
  • Who it’s good for: Ops leads with a knack for mentorship.
  • Open role: Enablement lead, Snapchat

Regulator/Advisor

  • What: UK regulator Ofcom, in particular, has been snapping up former platform workers for its regulatory teams, and similar roles are open globally.
  • How: These are strategy-heavy roles that require broad oversight of how platforms — and increasingly AI — impact society. Knowing how ML/LLMs introduce risk but can be used for transparency and accountability would make you a good fit here.
  • Who it’s good for: Mid-career idealists. 
  • Open role: Principal, online safety intelligence, Ofcom

You ask, I answer

Send me your questions — or things you need help to think through — and I'll answer them in an upcoming edition of T&S Insider, only with Everything in Moderation*

Get in touch

Also worth reading

TSPA T-shirt design contest (Trust and Safety Professional Association)
Why? I'm so excited! We're finally getting TrustCon T-shirts, and the TSPA is opening up the design as a competition. Unfortunately, it's only one design per person, but let's bring on the entries!

New Report on AI-Generated Child Sexual Abuse Material (Stanford Cyber Policy Center)
Why? A systematic look at how educators, platform staff, law enforcement officers, U.S. legislators, and victims are thinking about and responding to AI-generated child sexual abuse material (CSAM). 

How to use LLMs for Content Moderation (Musubi, authored by me)
Why? We've launched a blog over at Musubi, so if you want even more insights from me, go check it out. This time, I write a practical guide about how to use LLMs for content moderation.
Related: Smarter, Safer, Scalable: The Generative AI Revolution in Content Moderation (Daniel Olmedilla)

Behind the Curtain: A white-collar bloodbath (Axios)
Why? Anthropic CEO says "AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years."
Related: my post on LinkedIn about this with some slides giving guidance on what to do next to future-proof your career.