
User reporting isn't the magic fix some people think it is

Despite their ubiquitous use, user reports don't always drive effective moderation or meaningful change in platform policy. Is there a better approach?

I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that Trust & Safety professionals need to know about to do their job.

This week, I'm thinking about user reports, and how they're not as useful as many people think they are. For some platforms, putting more time and resources into user reports may not actually result in an increase in user safety.

If you're going to TrustCon in July, I'm excited to say I'll be doing my best Ben impression as I take part in the second Ctrl-Alt-Speech Live recording (he'll be in London on childcare duty). I'll also be joining a handful of other panels, so I hope to bump into some T&S Insider readers.

Drop me a line if you're working on something the T&S community should know about — if I get enough submissions, I'll share them in a special edition of the newsletter. Here we go! — Alice


The limitations of the report button

Why this matters: User reports are one of many signals that platforms use to moderate content online. However, the difficulty of designing effective reporting flows means that user reports are not a terribly useful signal and don't always lead to the right kinds of platform action or investment. Being more open about the limitations of reporting content — with users and with platform executives — might be a good place to start.

Here's a sentence that I've wanted to write for a while: User reports are not the magic bullet for finding and fixing harmful content that some people think they are. That feels good to get off my chest.

As many T&S Insider readers will know, there are a bunch of reasons why this is the case. But it's worth reminding ourselves of the main challenges:
