Is it time to unite T&S and AI ethics?

There's enormous potential in bringing together these often siloed disciplines and organisational functions. Given the complex, intertwined risks of AI and human interaction, it may even be a necessity.

I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that trust and safety professionals need to know about to do their job.

Today I'm thinking about:

  • How AI Ethics and Trust & Safety teams can collaborate and innovate for everyone's benefit
  • How US federal workers can make the move to Trust & Safety

As always, get in touch if you'd like your questions answered or just want to share your feedback about today's edition.

Here we go! — Alice


Today’s edition is in partnership with Safer by Thorn, a purpose-built solution for the detection of online sexual harms against children.

At Thorn, we are committed to pushing the boundaries of innovation in the trust and safety space, developing purpose-built solutions to protect children from harm in the digital age.

We’re proud and grateful to be selected as one of Everest Group’s “Content Moderation Technology Trailblazers.” Everest’s recent report celebrates and calls attention to top tech startups creating buzz in the industry, and our inclusion on this short list reflects the impact and innovation we’ve long prioritised.


T&S and AI ethics is not an ‘either/or’ choice

Why this matters: The complex, intertwined risks of AI and human behaviour must be addressed together, not separately. There's enormous potential in bringing AI Ethics and Trust & Safety teams together to collaborate and innovate for everyone's benefit.

If the last two years have been defined by two major trends, they would be AI’s rapid expansion and the erosion of trust in content moderation.

AI has rapidly become central to online platforms, powering everything from creative content generation to routine task automation, but it has also introduced new risks and harms.

At the same time, mass layoffs in Trust & Safety and widespread scepticism of content moderation and enforcement have sent a mixed message: we’re going to invest in AI, but not in responding to its risks.

The tension between these two industry-wide shifts means we might need to drastically rethink how this work is organised. Let me explain why.