T&S is political. Fund it like it is.
I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends, and workplace strategies that trust and safety professionals need to know about to do their job.
Become an EiM paid member today to support independent coverage of the T&S industry and get access to the full archive of editions — including some of my greatest hits.
This week, I talk about the themes I saw at the Trust & Safety Summit in London last week. The event was closed to media but Ben and I were lucky enough to take part as long as we abided by the Chatham House Rule. As such, I won't be getting into specifics of any talks, but I did observe a few common threads across the event, and of course I have plenty of thoughts of my own. (You can read last year's Summit reflections here, for comparison.)
Before we get into that, I'd like to quickly recommend a fun sci-fi novella with a surprise Trust & Safety plotline: Automatic Noodle.
Get in touch if you have any questions about today's edition or want to debate the potential for food bots to build their own ghost kitchen. Here we go! — Alice
"EiM consistently shares content that I don't find elsewhere" - T&S gaming expert
T&S leaders of all types say the same thing, every week: Everything in Moderation* gives them an overview of the industry that is hard to match, delivered by people who actually know how difficult it is.
If you’re a technology company, law firm or nonprofit looking to put your work in front of the people shaping Trust & Safety, sponsor EiM today.
Three uncomfortable truths about Trust & Safety
I don't need to tell you, T&S Insider readers, that Trust & Safety work is inherently values-based. But I left last week's Trust & Safety Summit reminded of that age-old truth. Let me explain why.
T&S teams decide what kinds of speech and behaviour get prioritised and what gets minimised. They shape what conversations happen in a community and what the overall tone of a space feels like. They make choices about how marginalised communities are protected and how misinformation is treated.
There is no true neutrality when you're curating speech and behaviour, let alone when you're doing it at the scale of large platforms. In fact, claiming neutrality is itself a political stance, as was highlighted during my panel (see below). The reality, as we all know, is that teams of safety professionals are making some of the most consequential decisions a company faces about how it presents itself to the world.
When you think of it like that, it's fairly easy to conclude that T&S teams should be resourced and positioned accordingly. And in fairness, several presenters at the Summit explained how their companies were doing exactly this: integrating trust and safety into business strategy rather than treating it as a cost centre, doubling down on product safety nudges, and investing in effective AI-assisted content moderation. It was genuinely encouraging to see T&S being taken seriously as a driver of the business.
But it is only half the story.
Priorities matter
The companies saying this were mostly large and mature and attempting to differentiate against other large, mature competitors. They've passed the hypergrowth phase where safety gets deprioritised in favour of user acquisition. That means they have the budgets, the headcount, and the executive buy-in to invest in T&S strategically.
Presenting their approach as a playbook that anyone can follow creates a kind of survivorship bias: we hear from the companies that got it right, and we assume the ones that didn't just need to argue better.
I spent 13 years across two platforms in the dating industry, building and leading trust, safety, and support teams. Over that time, I saw five or six CEO and leadership changes, and with each change, the attitude toward my work shifted dramatically. Under some leaders, T&S was treated as central to the product and the brand. Under others, it was a cost centre to be minimised. The work itself didn't change. The quality of the team didn't change. What changed was the priorities of the person at the top and what stage the company was at.
I don't say this to be defeatist. I say it because the industry sometimes implies that if you just make the right business case, or present the right data, or frame T&S the right way, you'll get the investment you need. Sometimes that's true. But sometimes it's not something you can control.
Many T&S professionals have never been trained in the communication and influence skills that would help them get those resources and buy-in. Rachel Kowert and I co-led a workshop at the Summit on this topic. Our core point was that whether you're pitching a CFO, convincing a product team, or educating a user, you're doing the same thing: taking something complex and making someone who hasn't had to think about it actually care about it. That's a learnable skill, and it's one most T&S people were never explicitly taught. (If you're interested in influence skills more broadly, Jessica Fain's recent episode on Lenny's Podcast is worth a listen.)
Two things can be true. The ceiling on T&S investment is often set by leadership priorities and company stage, and that's frequently outside your control. And many T&S professionals aren't trained to push that ceiling higher. The industry needs to be honest about both instead of pretending one is the whole story. Either way, the work of getting T&S prioritised is political, just like the work itself.
The Atlantic divide on regulation
As the Summit made clear, it's not only political inside companies. It's political at the regulatory level too, and the differences are stark depending on which side of the Atlantic you're on.
The British and European attendees, broadly speaking, had trust in their respective regulators (who, incidentally, were not allowed to take part in the conference — despite Ofcom leading a plenary session on day one). They talked about regulatory frameworks as a positive, if sometimes frustrating, thing and fundamentally believed that Ofcom and the European Commission acted in good faith.
On the other hand, many American attendees I spoke to were much more sceptical, with an underlying assumption that regulation means a "gotcha" is coming in the future.
This connects to the point about leadership priorities, scaled up. Just as T&S investment within a company depends on what the CEO values, T&S investment across an entire market depends on what regulators and governments signal matters. The EU has taken T&S seriously through frameworks like the Digital Services Act and the EU AI Act. The US has been much more hands-off. If you're a T&S professional in Europe, your regulatory environment, whatever its flaws, signals that your work matters at a societal level. If you're in the US, T&S investment is largely at the discretion of company leadership, with all the variability that brings.
My view is that both positions are incomplete. Regulation is important, especially for large platforms with enormous influence over public discourse and user safety. Some of the most meaningful progress in T&S has come from regulatory pressure, because it creates a floor and gives T&S leaders internal leverage they might not otherwise have.
But regulation is also a real burden for smaller T&S teams, and it can distort priorities. When your team is stretched thin and required to meet specific compliance thresholds, the work inevitably shifts from "what is actually good for our users" to "what do we need to do to satisfy the regulator." A platform can be technically compliant with a transparency reporting requirement while still not addressing the underlying safety problem the regulation was trying to solve.
And that burden gets worse when it intersects with the leadership problem from section one. If your CEO doesn't take T&S seriously, doesn't fund it adequately, and doesn't particularly like the regulation but you still need to comply, that is a deeply unfun place to be. You're under-resourced, under-supported, and accountable for meeting standards that your own leadership hasn't bought into. That's the reality for a lot of T&S teams right now, and it doesn't come through in conference talks from companies that have already figured it out.

We're still very early in the era of T&S regulation. The frameworks are new, enforcement is still being worked out, and the relationship between regulators and platforms is still being defined. Neither reflexive trust nor reflexive scepticism is the right posture for what's going to be a long process.
T&S teams should be leading the AI conversation, not following it
Almost universally right now, tech CEOs are genuinely excited about AI. That gives even the most under-recognised T&S leader an advantage, because T&S teams have been using AI and automation for well over a decade. Long before the current wave of excitement, trust and safety professionals were building ML classifiers, automated detection systems, and scalable review workflows. The T&S industry was operationalising AI before most of the executives now championing it had started paying attention.
Savvy T&S leaders can position themselves well by showing they're already experts in AI governance, oversight, and operationalisation. Indeed, many of the key projects highlighted at the Summit by leaders who had successfully made T&S a business priority were AI projects.
That said, I saw a real missed opportunity at the conference. Most of the discussion around AI was about already-completed projects and the importance of human-in-the-loop oversight. It almost felt like business as usual, with a bit of AI mixed in.
AI is moving incredibly fast and it's shifting everything. I wish we had spent more time talking about the opportunities and disruptions that we're bound to see in the near future, like agentic AI, where autonomous systems are already interacting with platforms in ways that break the traditional "bot vs. human" sorting framework T&S has relied on for years. These conversations are urgent, and they were largely absent.

A way forward?
The way T&S can best position itself as fundamentally core to a business is to stop being reactive. That means adding proactive user features and controls, and moderating proactively rather than waiting for reports. The Summit covered this well, and the companies that have successfully made T&S a business driver are doing exactly this work.
But it also means looking proactively at the future. What does new technology unlock for T&S teams? How do we think about trust and safety in the age of agentic AI? Where do we think regulators are headed next? What does it mean to keep a community safe when the boundaries of who counts as a community member are blurring? How should trust and safety work adapt as we get more technical? Do we all even want to get more technical?
The Summit showed me an industry that's getting better at solving today's problems, but is not yet seriously grappling with the problems that are coming next. That's understandable. It's hard to have a conversation about the next paradigm when you're still fighting for resources in the current one.
You ask, I answer
Send me your questions — or things you need help to think through — and I'll answer them in an upcoming edition of T&S Insider, only with Everything in Moderation*
Get in touch
Also worth reading
DLC Leadership Program (Psychgeist)
Why? A new digital leader certification program run by my good friend Rachel Kowert is coming soon. Very exciting!
Managing Misinformation in Large Language Models (Integrity Institute)
Why? A thorough and quite technical look at how to manage misinfo in LLMs, by experts from the Integrity Institute.
Governing Agentic AI Systems (LinkedIn Learning / All Tech is Human)
Why? A free course (even if you don't have a paid LinkedIn account) that teaches the fundamentals of agentic AI.