Where are tomorrow’s T&S experts coming from?
I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that Trust & Safety professionals need to know about to do their job.
This week, I'm thinking about the future of T&S, specifically around the talent gap that may occur if we get rid of all the entry-level jobs.
Get in touch if you'd like your questions answered or just want to share your feedback. And thanks to everyone who sent kind messages about last week's edition — it's always super helpful to know what resonates.
Here we go! — Alice
The disappearing rungs of the T&S career ladder
A T&S Insider reader recently wrote in with a question:
I've got two recent college grads in my family and jobs are hard to find. This is true in T&S too, and I wonder... what is the future of T&S going to look like if there are no more entry-level roles? Where will "experienced" people come from in 5-10 years if they can't get jobs now?
Now, first off, I realise I'm part of the problem. Why? I spent 20 years climbing a ladder that I’m now helping to dismantle. Allow me a brief trip down memory lane.
Why replicating my career is impossible
My career started with volunteer content moderation at 19, moved to freelance moderation of dating profiles at 25, and culminated in becoming a VP of Trust & Safety at a well-known public company by 39. This was back in the "wild west" of the early internet, when we built the plane while flying it. Each step was a lesson in pattern recognition, adversarial thinking, and decision-making without precedent.
Today, at 41, I work at a 15-person AI startup, where I'm doing the work of several people (product, policy, marketing, and strategy). Not only is my work enabled by AI, but we're also building AI-enhanced tools that automate the very entry-level jobs that gave me the experience to get where I am. The irony is not lost on me.
Just to be clear: automating the most psychologically damaging and repetitive parts of content moderation roles is a net good. I've witnessed the human cost of that work: the PTSD, the burnout, the exploitation. No human workforce can or should manually sift through the worst of the internet all day every day.
But here is the paradox: we are automating away the very experiences that create expertise. Much has been written about this in other industries — the creation of incompetent experts, the end of time-worn progression, even the rewriting of what it means to be an expert — and T&S is very much at risk of these shifts. The fact is that my career path is now nearly impossible to replicate.
I believe that's a 'bad thing' because my years of frontline content moderation taught me something no textbook could: intuition. I learned to see the subtle evolution of harassment, the cultural context that separates a joke from a threat, and the hidden patterns of coordinated inauthentic behaviour. This "tacit knowledge" is built from exposure to thousands of messy, ambiguous edge cases. It's the difference between knowing that something violates a policy and understanding why that policy exists at all.
Today, the landscape and job market have transformed. We're in the "Compliance Era". The next generation of T&S workers can ramp up on what to do far more easily, but they have fewer opportunities to learn why: there are few entry-level roles to get started in, and stricter guidelines dictate how the work is done. AI is exceptionally good at executing known processes, but true safety leadership requires the intuition to handle the unknown, and that requires experience.
Training the humans in the loop
My expectation is that the number of entry-level T&S roles will continue to go down (I'm curious to know how job opportunities in our field have already changed), but that there will likely always be some number of mid-level positions open for humans in the loop to oversee AI systems, steer policy, and QA decisions.
The question then becomes: how do we create a system in which people ramp up to that level as quickly as possible?
The good news is that regulatory oversight of Trust & Safety has given us an excellent blueprint for industry-wide professional development courses and certifications; we just need to create them. If you look at our tech industry cousins, privacy and cybersecurity, they've got this on lock (see: privacy; security). What's stopping us?
As well as certification, we should be investing in:
- Formal internship and apprenticeship programs, including sandboxes and simulators
- Public research, media and resources for T&S professionals (EiM being one example)
- Academic courses — including full T&S degrees — that are accessible to people who want to go into the industry
All of this can help people skip the gruelling frontline moderation jobs that I undertook while still building the background knowledge needed for mid-level roles.
I very much realise that my generation of T&S workers built the very frameworks and tools that now threaten to make our own formative experiences obsolete. But the disappearance of the ladder I climbed isn't necessarily a tragedy if we consciously build something better in its place.
(If you're thinking about creating certification courses/tests for T&S, please get in touch — I'd love to collaborate).
You ask, I answer
Send me your questions — or things you need help to think through — and I'll answer them in an upcoming edition of T&S Insider, only with Everything in Moderation*
Also worth reading
Lack of Responsible AI Safeguards Coming Back To Bite, Survey Suggests (Forbes)
Why? "On average, executives believe they are underinvesting in responsible [AI] by at least 30%" leading to financial loss and reputational impact. Responsible AI and T&S go hand-in-hand, so this could be a good opportunity to ask for more resources!
"Privacy preserving age verification" is bullshit (Pluralistic)
Why? "Take it from the guy who invented it."
Extremist identity creation through performative infighting on Steam (Frontiers, by Alex Bradley Newhouse and Rachel Kowert)
Why? "Using open-source data and scaled social network analysis, we show that the far-right ecosystem on Steam possesses characteristics of collective radicalization and mobilization. This poses both an immediate danger to gamers and game developers who rely on Steam and also a longer-term risk to social safety."
Four Functional Quadrants for Trust & Safety Tools: Detection, Investigation, Review & Enforcement (DIRE) (SSRN preprint, paper by Camille Francois, Juliet Shen, Yoel Roth, Samantha Lai, and Mariel Povolny)
Why? This paper, by the ROOST team (and friends), explores a taxonomy for mapping and analysing the tools that make trust and safety work possible.
Social media toxicity can't be fixed by changing the algorithms (New Scientist)
Why? This experiment used AI chatbots, not real users, but it found that changing a social media site to the most basic chronological feed didn't eliminate toxicity from the platform.