Can anyone do T&S work for 20 years?
I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that trust and safety professionals need to know about to do their job.
This week, I'm thinking about whether I (or anyone) can reasonably do T&S work for 20+ years, or if we should all quit to work as lighthouse keepers.
Ben and I are cooking up a series on T&S careers and we'd love your input to shape it. So, T&S Insider readers, tell me how you're thinking about your career — are you fed up with the state of the industry, excited about the changes, or just tired? Oh, and don't forget the jobs you'd do if you weren't a T&S professional. Get in touch or just hit reply — Alice
The internet speaks every language — and over time, the best safety teams have learned to truly listen.
In the early days of Trust & Safety, many risk detection frameworks were built for English, missing the nuances, slang, and coded language used around the world.
Our third piece in the “20 Years in Online Safety” series explores how this blind spot shaped the industry, and how learning from language and culture became one of Resolver’s most powerful risk detection tools.
Featuring cases from our analysts, it’s a reflection on how context became intelligence, and how understanding cultural cues changed the way we identify harms.
Because online safety isn't just about what's said—it’s about what’s meant.
15 years down, how many more to go?
This week, Daisy Soderberg-Rivkin's LinkedIn post about the T&S career trajectory was the most viral T&S-related post I've ever seen, with more than 1,000 reactions in just a few days.
In case you haven't seen it yet, Soderberg-Rivkin — who has 10+ years of online safety experience herself — runs through a typical T&S career progression: from idealistic community steward to burned out and quitting the industry almost 20 years later.
The post immediately hit a nerve for many of us, and for good reason: it's completely true. I've written about why T&S is hard work and why it can lead to burnout more times than I'd like to admit. Many of the comments on Soderberg-Rivkin's post said as much.
I'm 15 years into my T&S career, 20 if you include the unpaid community moderation I did at the start. I haven't quit yet, though I've certainly burned out a couple of times.
So Soderberg-Rivkin's post got me asking myself a question I've thought about a lot recently but don't yet have an answer to: what does it take to do this work sustainably for 20+ years?
Here for a good time, not a long time?
So what has kept me here this long?
For one, T&S professionals put users first. We believe that community is important, and that people should be able to be safe online and express themselves freely. Historically, we've fought for what's right – guidelines that protect marginalised communities, slow rollouts of new features until they're proven safe, wellness programs that protect moderators. We feel rewarded when people have good experiences on platforms and their lives are better because of the communities we've helped foster. It feels good to do good. I'm no exception.
T&S is also never boring. Technology, social norms, and regulations are all evolving quickly. We get to build things that don't exist yet. We get to innovate and experiment. Building the plane while flying it is a real challenge, but it's rewarding when we do it well. And the kinds of people who are willing to do this work are unique. We're smart, caring, creative, unafraid of hard problems, and motivated by progress even though perfection is always out of reach. We look out for each other and are highly collaborative. We're equal parts skeptical and optimistic. Our culture is precious and one of a kind.
These things are real. They matter. They're why I wake up and do this work even when it's hard.
But as our industry grows and matures, it's also changing. T&S is becoming more about compliance than innovation. Many platforms are prioritising profits and politics over DEI and user safety. Users are getting more polarised and toxic, and AI-generated slop is making spaces feel less authentic, so people are wondering why they're online at all.
If online communities aren't fun to be part of any more, then the fundamental reward for the tough work of T&S goes away. "Make users hate us and each other just a little less" isn't exactly a rallying cry. The things that made this work meaningful are getting harder to hold onto. And I'm not sure we can will our way through that with sheer dedication to the mission.
Don't mention the 'R' word
So where does that leave us? I don't know, in all honesty. I don't know if this is a job anyone — including me — can do into their 60s. And I don't know if caring deeply about work that's structurally unsustainable is brave or just stubborn. It's probably both!
What I do know is that I'm not ready to give up yet. I hope I'll still be doing T&S-related work until I retire, but I don't know what that might look like, or how I'll stay sane until then. For now I'm enjoying doing T&S work from the vendor side of the fence, where the different pressures feel refreshing after so many years at a platform.
I get catharsis from writing here, and from hearing from you, and that helps a lot too. And I know that if ever there were a group of people who could figure out how to make things better, it's Trust & Safety folks. So here's to another 20 years together — if we can make it work.
You ask, I answer
Send me your questions — or things you need help to think through — and I'll answer them in an upcoming edition of T&S Insider, only with Everything in Moderation*
Get in touch
Also worth reading
Responsible AI courses (All Tech is Human)
Why? ATIH just released five free responsible AI courses, covering the history, principles, operationalisation, and governance of responsible AI.
Five-year overview of the online and offline anti-LGBTQ+ landscape (Institute for Strategic Dialogue)
Why? "The research summarised and contributed to by ISD evidences a clear interplay between the online and offline worlds of anti-LGBTQ+ hate and activity." I do disagree with their assumption that the use of AI for moderation is fundamentally contributing to over-censorship for the LGBTQ community – I'd argue it's less about the technology itself and more about the enforcement priorities and resources of platforms overall. But otherwise a really good summary of the last 5 years.
Why Governing AI Synthetic Media is So Hard—And Why Everyone’s Trying To Anyway (Thinking Freely with Nita Farahany, Substack)
Why? A written form of her class in AI law and policy: "Today, we’re tackling whether you can govern something that’s impossible to fix and constitutionally dangerous to regulate."