
Don't fall into the T&S ROI trap

Sometimes we have to invest in Trust & Safety because it’s the right thing to do, not because there will be a return on the investment. Here are some suggestions for alternatives to traditional ROI calculations.

I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that trust and safety professionals need to know about to do their job. This week, I'm thinking about:

  • How return on investment calculations for T&S proposals can fall short, and alternative ways to convince leadership to approve projects.
  • Why generative AI has a hard time depicting race and sexuality in non-stereotypical ways.

Get in touch if you'd like your questions answered or just want to share your feedback. Here we go! — Alice


ROI vs the right thing to do

Why this matters: Sometimes we have to invest in Trust & Safety because it’s the right thing to do, not because there will be a return on the investment. In this article, I suggest alternatives to ROI calculations.

When I was a newer leader, I used to think that if only I could make my boss understand a Trust & Safety issue better, then obviously they’d agree with me that we had to invest more in solving the problem. I’d get really frustrated when I’d lay out an issue with great clarity and still not get approval for my proposals. Aren’t I here to solve these problems? I’d think. What am I here for if you don’t want to solve them?

I think a lot of T&S leaders go through this. One reason we don't get buy-in may be that we’re not tying outcomes to business goals. The usual answer to this is a return on investment (ROI) calculation: tie your efforts to metrics the company cares about (user engagement, retention, monetisation, for example) and then work out how an investment in your project would improve the metrics the company is trying to increase. If you can prove your project will generate more money/engagement/whatever, you get the budget. Voila!
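To make that concrete, here's a minimal sketch of what a back-of-the-envelope ROI calculation might look like. Every number is invented purely for illustration; in practice the inputs would come from your data and finance teams.

```python
# A toy ROI calculation. All figures are hypothetical, purely to
# illustrate the shape of the argument a T&S leader might be asked for.

def simple_roi(gain: float, cost: float) -> float:
    """Classic ROI formula: (gain - cost) / cost."""
    return (gain - cost) / cost

# Hypothetical: a harassment-reduction project costs $250k and is
# credited with retaining 2,000 users worth $150/year each.
project_cost = 250_000
retained_users = 2_000
revenue_per_user = 150

gain = retained_users * revenue_per_user  # $300,000
print(f"Projected ROI: {simple_roi(gain, project_cost):.0%}")  # Projected ROI: 20%
```

Numbers like these look tidy and persuasive.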

Be careful: this is a trap! 

Figuring out these ROI calculations can be difficult. Users may leave platforms because of an accumulation of years of small, annoying things rather than one big incident that can be easily pinpointed. Data structures might not be set up to record a user’s T&S journey over time. It can take a year to release one new policy or enforcement project, and another six months to measure it, all while many other things are happening simultaneously, both on platforms and in society generally. What users say and what they do don’t always match up. And on and on.

If you do want to calculate ROI, befriend the business analysts in your company's finance department. They can help! It took me an embarrassingly long time to realise that finance departments have specialists whose entire job is to look at a project and work out what the financial and business outcomes may be. I tried to calculate these projections on my own, and it was hugely inefficient. A lot of T&S leaders are used to operating in a silo, like I had been, but the more you can leverage existing experts, the better off you'll be.

That said, not every T&S metric will be positive for the business. What is good for users is not always good for profit. Success for a T&S team may mean decreasing engagement in certain areas while other teams are working to increase it. New_ Public's Rob Ennals recently proposed using LLMs to measure positive outcomes, which is a really interesting idea, but maybe not one that is immediately practical today:

Rather than Facebook counting comments, GPT could count how often people helped each other or strengthened their social relationships. Rather than a website counting page views, GPT could count whether users are seeing content that they gain real benefit from, thus boosting the website's long-term brand.
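To give a flavour of what that could mean in practice, here's a minimal sketch (in Python, using the OpenAI API) that counts helpful interactions instead of raw comment volume. The prompt, the model choice and the yes/no framing are all my own assumptions, not part of Rob's proposal, and validating a classifier like this is precisely the hard, unsolved part:

```python
# A rough sketch of the idea: ask an LLM whether a comment reflects a
# positive interaction (one person helping another), then count those
# instead of counting raw comments. Prompt and model are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Does the following comment show one person helping another or "
    "strengthening a social relationship? Answer only YES or NO.\n\n"
    "Comment: {comment}"
)

def is_positive_interaction(comment: str) -> bool:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any capable model works
        messages=[{"role": "user", "content": PROMPT.format(comment=comment)}],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

comments = [
    "Here's the template that fixed this for me - happy to walk you through it.",
    "lol nobody cares",
]
helpful = sum(is_positive_interaction(c) for c in comments)
print(f"{helpful} of {len(comments)} comments look like genuine help")
```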

I hope we move towards a future where we can calculate new positive metrics, but in some cases, there is no ROI. Sometimes there's negative ROI. Sometimes we have to invest in Trust & Safety simply because it’s the right thing to do.

So now what?

How can a T&S leader effectively convince others to just do the right thing? I don’t have all the answers here, and I don’t think anyone else has them either. There are a few models being developed so that a CEO or leadership team’s priorities aren’t the only deciding factor:  

  • Regulation that imposes a minimum set of standards on all companies. For example, Australia's eSafety Commissioner has safety by design assessment tools that can be used by T&S teams or product designers.
  • Independent oversight boards — like Meta’s Oversight Board — made up of digital and human rights experts that can provide additional inputs and different perspectives.
  • Federation and community moderation, so that the users of platforms have greater control and say over their experiences, rather than a top-down approach. (Bluesky's approach to stackable moderation is an example).

For many people working in T&S at platforms today, the above solutions may not be workable, especially if they want to propose solutions that go beyond regulators’ minimums. However, all kinds of tactics can be helpful when navigating office politics and business priorities. Here are a few things I’ve done with varying degrees of success:

  • Set up a “Safety Council” (either officially or unofficially) with people in other departments who also care about user safety - Having people who can advocate for user safety in meetings where T&S isn’t represented is a big win. We in T&S can take a page from the cybersecurity playbook and its slogan: “Security is everyone’s responsibility.” Safety should be everyone's responsibility too.

    Sidenote: For a time, at one company I worked for, every new product manager would get assigned to Trust & Safety before being pulled off to work on something else. This was frustrating because we didn't have consistent, specialised T&S product leadership, but it meant that I got to develop a strong relationship with every PM and was able to help them carry T&S principles to their next assignment.
  • Bring in a consultant or outside expert - Sometimes if leadership hears the exact same thing from someone else, they’ll agree. You can create an unofficial board of advisors (from nonprofits, academia, etc.) and use their expertise to bolster your arguments. Vendors you work with may also have expertise you can leverage. For example, I'm always happy to consult, advise or help T&S leaders as part of my current job, and I know many other people in similar roles do the same.
  • Share user stories widely and often - Make them personal, share details, encourage people to sit with you when you respond to a real person about a tragic situation. Video is great to humanise issues beyond metrics. Work with user research and customer support teams to pull this information together.
  • Work with others in the tech industry to set benchmarks, standards, and best practices - If there are other platforms that are similar to the one you work for, make friends with the T&S leaders there. Yes, you may be “competitors”, but I believe there’s no good competition when it comes to user safety. If all platforms are safer, that’s a good thing. Sharing notes and strategies with other leaders can help raise the bar across all the platforms. "They're doing it, so we should too" can be a solid argument.
  • Use a human rights framework (like BSR's) to map out which issues are fundamental to address - This helps get agreement from leadership on protecting these rights before there’s a giant emergency. Create buy-in on general concepts and ideas over time, so that you can fall back on those pre-agreed fundamentals when making decisions.

As I discussed with Nadah Feteih on my podcast this week, sometimes the answer for an individual is to keep working on these issues from within companies and do the best you can, and sometimes the answer is to leave and do your best from the outside. For me, it was fairly liberating to realise that sometimes I just wasn’t going to win the argument. It wasn’t necessarily because I was arguing badly, or didn’t have enough evidence, or was bad at my job. It was because I, with my specialised perspective, had a different set of priorities than those who were tasked with the success of the business as a whole.

You ask, I answer

Send me your questions — or things you need help to think through — and I'll answer them in an upcoming edition of T&S Insider, only with Everything in Moderation*

Get in touch

Job hunt

If you're a job seeker, there's a classic conundrum: you're expected to have experience to get a job, but you can't get experience without a job.

One way to solve this is to work on projects outside of your workplace. The Trust & Safety Hackathon is an excellent way to do this. You'll be assigned to a group with other people interested in T&S and will work together to come up with a proposal based on a specific theme. More than one hackathon group has decided to continue working on their project after the event ended, so it can be a great way to add experience to your resume or portfolio. You can check out past projects here.

The upcoming hackathon is in collaboration with Australia's eSafety Commissioner and is focused on answering the following question: “How can we evolve and increase adoption of Safety by Design?” Unlike traditional hackathons, you don't need technical experience to participate.

It takes place on April 22nd, is fully remote, and is free. Applications are due Wednesday, April 17th.


Also worth reading

By definition, generative AI creates new content based on what it's already seen in its training data. For representations of race and sexuality, this can become dangerously stereotypical.

Here are three recent articles digging into different examples of this problem:

Using LLMs to moderate content: Are they ready for Commercial Use? (Tech Policy Press)
Why? The most thorough of the three, this piece examines how GPT-4 interprets potentially reappropriated and reclaimed language, and suggests that LLMs aren't ready for use in moderating content just yet. The author suggests that any company wanting to use LLMs for moderation must provide transparency and standards for user messages, regularly retrain its systems, and leverage the expertise of researchers, partners and community leaders.

Here's how Generative AI depicts Queer people (Wired)
Why? AI stereotypes queer people as white, young, and with purple hair. The author argues that AI tools misrepresent minority groups to the general public and have the potential to constrict how queer people see and understand themselves.

Meta's AI image generator can't imagine an Asian man with a white woman (The Verge)
Why? Testing Meta's AI image generator, the author couldn't create an image with an Asian man and a white woman, even when explicitly describing what each person should look like.