My favourite T&S Insider reads
I'm Ben Whitelaw, the founder and editor of Everything in Moderation*. I'm standing in for Alice in this week's Trust & Safety Insider.
Last week marked two years of this fair newsletter, which is written every week by T&S superstar (and good friend of EiM), Alice Hunsberger. That time has flown by.
We started working together after I saw Alice post on LinkedIn about starting a newsletter. The idea of working together made sense from the get-go, and she brings the kind of optimistic and clear-eyed perspective that can be rare nowadays. I know many EiM readers value her opinion on where the T&S industry is heading and I’m delighted she continues to write for EiM.
Today, I wanted to highlight some of the great pieces she’s written in those two years. Some were among the most read editions of T&S Insider. Others stuck with me because they hinted at trends we’ve since seen play out across the industry.
Are you going to be at the T&S Summit in London at the end of March? Both Alice and I will be there for what’s set to be a fascinating few days. If you want to chat about the state of T&S over coffee (or something stronger), pencil something into my calendar — Ben
The Trust & Safety Summit is just two weeks away and it’s shaping up to be an amazing gathering of your T&S community.
Join senior leaders across policy, moderation, engineering, enforcement, and legal as they tackle the most urgent challenges facing Trust & Safety today. You’ll hear from experts at Roblox, Snap Inc, OpenAI, Bending Spoons, 2K, Match Group, and many more.
Your pass also includes full access to the brand‑new Digital Platform Regulation & Compliance Summit.
Use code EiM20 for 20% off Diamond–Bronze passes (vendor passes excluded).
Five of the best
A lot has happened in Trust & Safety over the last two years — from the explosion of generative AI to the arrival of new regulation like the Digital Services Act and Online Safety Act, not to mention the layoffs and restructuring that have reshaped many T&S teams. Alice has chronicled much of it along the way.
The editions below are a combination of the most read by you and a few of my personal favourites. You can find the full catalogue on the EiM website.
Yet another way humans shouldn't be replaced — April 2024
Things may have moved on since Meta’s early AI chatbot experiments — which memorably conjured up strange hallucinations about having a gifted disabled child — but the episode was a useful reminder of the importance of prosocial design and building community interactions that go beyond simply removing the bad.
Alice argued that Trust & Safety teams should think just as much about what healthy online interaction looks like as they do about what to take down. She also predicted we’d see a rise in people seeking out “verified human spaces and human support.” There are signs of that happening but we’ve also seen rapid adoption of AI-mediated interactions. Which trend wins out is still an open question.
I asked, you answered: applying policy outside the US — June 2024
US platforms have long applied speech ideals around the world and the clash of values has become more noticeable in recent years. But what if you’re a non-US citizen working for a large social media or tech company? How do you balance your own beliefs with the policies your company expects you to enforce?
This edition drew on responses from EiM subscribers and Alice's network outside the US about how they navigate those tensions. I found it particularly useful to hear how people working internationally hold US-based platforms accountable internally as debates about global speech norms become more prominent.
Why “censorship” is complex — December 2024
The news agenda regularly throws up platform “censorship” stories — the Epstein TikTok episode (EiM #322) following the platform’s ownership changes being one recent example.
But as Alice noted, the reality is usually far less dramatic. In most cases, content is removed or restricted for one of several fairly mundane reasons that the public would benefit from understanding more clearly. As she put it: “Only through a shared understanding of the realities and constraints of content moderation can we move towards more effective and equitable solutions.” Hard to disagree there.
Is it time to unite T&S and AI ethics? — February 2025
Recent developments — ahem Grok — have highlighted how much the issues affecting Trust & Safety and AI safety overlap. But organisationally, many companies still treat them as separate functions.
Last year, Alice suggested that might not make sense for much longer. “Risks don’t come in neat, compartmentalised packages,” she wrote. “We can no longer separate ‘AI’ and ‘human’ — it will always be both, together.” We’re yet to see widespread structural integration of these teams but it can't be long.
The most secure job in T&S might just be compliance — November 2025
The layoffs affecting Trust & Safety teams over the last few years inevitably raised questions about where the stable career paths in the field might lie. Alice and I explored that by partnering with the excellent Trust & Safety Jobs board to produce a six-part series — Safe for Work? — looking at hiring trends across the industry.
The first edition — which looked at the growing demand for compliance roles — ended up being the most opened T&S Insider of 2025 and generated a lot of feedback from readers. If you’re planning on looking for a job in the field, it’s still a great place to start.
You ask, I answer
Send me your questions — or things you need help to think through — and I'll answer them in an upcoming edition of T&S Insider, only with Everything in Moderation*
Get in touch