4 min read

Five T&S tools you might not know about

There's a plethora of under-the-radar tools built by small teams solving big safety challenges. Here are five that I recommend knowing about (or revisiting).

I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that trust and safety professionals need to know about to do their job.

You might have noticed that there was no Week in Review on Friday — there's a good reason for that, which I'll let Ben explain when he's back online.

In the meantime, I've been thinking about T&S tooling, which is evolving at a rapid pace and merits another look (full disclosure: it's also what I've been focusing on in my new job).

Get in touch if you have tools you'd like to recommend or questions you'd like answered. Here we go! — Alice


IN PARTNERSHIP WITH RESOLVER Trust & Safety – leading intelligence on online harms, platform risk and child protection

What happens when violence against women becomes a feature, not a bug, in gaming culture?

Resolver’s latest intelligence reveals how toxic narratives, avatar anonymity, and online disinhibition are fuelling misogyny and putting adolescent boys at greater risk of suicide and self-harm.

With immersive tech like VR intensifying exposure, the industry faces a tipping point. Platforms can no longer afford to ignore the real-world consequences of in-game hate.

Resolver helps you uncover and act on hidden harms - before headlines force your hand.

UNDERSTAND THE RISKS

Smart tools for real T&S pain points

The pace of Trust & Safety technical innovation is relentless — and it’s not just the big platforms building interesting safety features. There are some really neat tools and services emerging from the growing market of T&S vendors that I wanted to highlight in today’s edition.

Why? Many of these tools come from startups or partnerships with little to no marketing budget or go-to-market strategy. That means they often fly under the radar, used by just a handful of companies (that I know of, anyway), despite being genuinely useful.

The tools I’ve included are ones I've heard good things about, or — based on online safety trends — feel are worth knowing about. Some are shiny and new; others are quietly dependable. But all are built to solve specific pain points and reduce the need for companies to reinvent the wheel.

Full disclosure: I currently work on one of the tools listed (more on that below), but I've no qualms including it here because I genuinely think it would've been very helpful in all of my previous roles.

StopNCII

What it is: A hash library of non-consensual intimate images that are submitted by users and then distributed to the 15 participating platforms. It was founded in 2021 by the Revenge Porn Helpline with support from Meta.

Why I’m including: It’s timely given that the Take it Down Act (EiM #291) will likely be signed into law soon — it could be a good way for platforms to meet compliance obligations without reinventing the wheel.

Project Lantern

What it is: A signal-sharing project on online sexual exploitation and abuse between 26 companies and the Tech Coalition. Importantly, signals are shared not only between social platforms but also with financial institutions (which are “consume-only”).

Why I’m including: The project is currently in its pilot phase, which will continue through 2025. I hope the team working on it can build something robust and pick up more partners in the future.

Unitary

What it is: Founded in 2019, the product has evolved in the last few years to offer a blend of both automation and human review for content moderation. 

Why I’m including: This kind of human/automation hybrid company is pretty new in the T&S space. If you want to be hands-off and outsource moderation completely but still want a blend of automation and humans in the loop, Unitary is a way to get both at once. 

Scamalytics

What it is: A fraud detection tool that gives a fraud score for IP addresses. 

Why I’m including: This isn’t new at all, but it’s got a free manual IP lookup on the home page. Particularly if you’re at a small startup or don’t need to do IP checks very often, it’s a great tool to bookmark. 

PolicyAI

What it is: PolicyAI is a tool that uses LLMs for policy enforcement. It gives policy experts the ability to update policies and edge cases (and test them) without any engineering input. 

Why I’m including: This product was born out of a conversation that Musubi co-founder and CEO Tom Quisel and I had about a year ago. I was blabbering to him about some of the cool stuff that could be possible with using LLMs for content moderation, and then a few months later he came to me with a working demo of a product that was very similar to what we’d discussed. Today I work with Tom at Musubi, and one of my main focuses is product strategy for PolicyAI, which is now a fully working tool supporting millions of users. (Honestly so exciting for me!)

Know a tool that deserves more love? Hit reply and tell me about it — I’m always looking to spotlight smart solutions the T&S world should know about.

Bonus: Three more tools by tiny teams that I’ve used personally and recommend: 

  • Lex.page — a writing tool that also includes AI feedback/editing in a seamless way.
  • Metro Retro — a great whiteboard tool for remote teams, and a way to make planning and collaboration fun. I love their templates.
  • Block Party — helps you search for where your personal information is showing up online and automate its removal.

You ask, I answer

Send me your questions — or things you need help to think through — and I'll answer them in an upcoming edition of T&S Insider, only with Everything in Moderation*

Get in touch

Also worth reading

Policy-as-Prompt: Rethinking Content Moderation in the Age of Large Language Models (Spotify research team)
Why? An academic paper from Spotify's research team on the viability of using LLMs for content moderation, and how teams should think about implementing them, both practically and structurally.

Online Aggression and PTSD Symptoms: New Findings on Traumatic Outcomes (Cyberbullying Research Center)
Why? Important new research showing that T&S teams should take even small violations against minors seriously: "online aggression is strongly linked to PTSD symptoms in teens. We also uncovered a critical insight for educators, social media platforms, and policymakers: perceived “minor” forms – such as exclusion and gossip – actually can inflict trauma on youth in ways comparable to direct threats or hate speech."

Prevention by Design (Search for Common Ground, Integrity Institute, Council on Technology and Social Cohesion)
Why? "In contrast to existing solutions—such as content moderation—that address harm reactively, they place the burden of safety on victims, this paper advocates for a proactive, design-focused approach that embeds safety and user empowerment into social media platform design."

Kids Don’t Have IDs And Age-Estimation Tech Is Frequently Very Wrong (Techdirt)
Why? A look at the argument against age estimation as a viable solution.