User reporting isn't the magic fix some people think it is
I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that Trust & Safety professionals need to know about to do their job.
This week, I'm thinking about user reports, and how they're not as useful as many people think they are. For some platforms, putting more time and resources into user reports may not actually result in an increase in user safety.
If you're going to TrustCon in July, I'm excited to say I'll be doing my best Ben impression as I take part in the second Ctrl-Alt-Speech Live recording (he'll be in London on childcare duty). I'll also be joining a handful of other panels so I hope to bump into some T&S Insider readers.
Drop me a line if you're working on something the T&S community should know about — if I get enough submissions, I'll share them in a special edition of the newsletter. Here we go! — Alice
The limitations of the report button
Here's a sentence that I've wanted to write for a while: User reports are not the magic bullet for finding and fixing harmful content that some people think they are. That feels good to get off my chest.
As many T&S Insider readers will know, there are a bunch of reasons why this is the case. But it's worth reminding ourselves of the main challenges:
- Policy misunderstanding: Users don't always fully grasp a platform's rules. They'll report things they genuinely believe are violations, even if they aren't, which floods the system with noise. For instance, a user might report someone on a dating app simply because their profile has minimal information, thinking it signals a "scammer," or report a comment that's just mean, but doesn't quite rise to the level of harassment defined by platform policy.
- Weaponised reporting & emotional bias: This goes beyond simple misunderstanding. Some users intentionally report others or content, knowing full well it doesn't violate policies, often driven by emotion or disagreement. This kind of behaviour disproportionately affects marginalised communities; for example, trans people are often mass-reported as "fake" accounts just for their identity. It can also extend to reporting a user repeatedly in an attempt to get them banned.
- Incorrect categorisation: Users frequently select the wrong reporting category for a genuine issue. This can accidentally push a truly important flag down the priority list or, conversely, bring an unimportant one to the top, adding to the moderation burden and slowing down critical responses (a rough triage sketch follows this list).
- Under-reporting: This is the paradox; while users sometimes report too much, they also often fail to report crucial things. They might not think it's important, not realise reporting is an option, or simply assume someone else has already handled it. Internet Matters just released a report showing that 71% of children had experienced harm online, yet only 36% of those who had been harmed reported it to the platform. This means a significant amount of harmful content might never even reach a platform's T&S team through user channels.
- Insufficient context: Users don't always provide enough context or information with their report, especially when the harm involves off-platform activities like scams or harassment spilling over from other services. Without adequate details, it's incredibly tough for platforms to investigate effectively or make informed decisions.
- Misunderstanding platform scope and limitations: Users sometimes report harms that, while real, fall outside a platform's ability or jurisdiction to address. This might involve reporting a personal dispute that originated offline with no on-platform trace, or requesting action for criminal activity that strictly requires law enforcement intervention, not platform moderation. The user correctly identifies harm but incorrectly identifies the entity capable of resolving it.
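To make a couple of these failure modes concrete, here's a minimal triage sketch in Python. It's purely illustrative (the category weights, reliability smoothing and example numbers are all hypothetical, not any platform's actual system), but it shows how a queue might count unique reporters rather than raw reports and weight them by how often their past reports led to action:
```python
from dataclasses import dataclass

# Hypothetical severity weights per report category; real taxonomies are far larger.
CATEGORY_WEIGHT = {"threats": 8.0, "harassment": 5.0, "spam": 2.0, "other": 1.0}

@dataclass
class Report:
    content_id: str
    reporter_id: str
    category: str

def reporter_reliability(actioned: int, total: int) -> float:
    """Fraction of a reporter's past reports that led to action, smoothed so new reporters start near 0.5."""
    return (actioned + 1) / (total + 2)

def triage_score(reports: list[Report], history: dict[str, tuple[int, int]]) -> float:
    """Score one piece of content from its open reports.

    Unique reporters count more than repeat reports from one account,
    which dampens both emotional re-reporting and coordinated brigading.
    """
    unique = {}
    for r in reports:
        unique.setdefault(r.reporter_id, r)
    score = 0.0
    for r in unique.values():
        actioned, total = history.get(r.reporter_id, (0, 0))
        score += CATEGORY_WEIGHT.get(r.category, 1.0) * reporter_reliability(actioned, total)
    return score

# Three reports on one post: two from an account whose reports never lead to action,
# one from an account with a strong track record.
reports = [
    Report("post_1", "reporter_a", "harassment"),
    Report("post_1", "reporter_a", "harassment"),
    Report("post_1", "reporter_b", "harassment"),
]
history = {"reporter_a": (0, 20), "reporter_b": (15, 18)}
print(round(triage_score(reports, history), 2))
```
In this toy version, repeat reports from one unreliable account barely move the score, while a single report from someone with a strong track record does most of the work.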
The question I come back to is: why do these challenges in user reporting persist and how might we address them?
Going with the flow
Some of these issues come down to users' understanding of the policies, or their intention to misuse or abuse them. But others are tied to the design of the user reporting flow.
Creating reporting flows that work really well is difficult for two reasons:
- Making them too easy often means platforms get an abundance of reporting data, which can be incredibly expensive for human moderators to sift through. I know of platforms where 80% of reports require no action; that's a lot of meaningless data (a rough cost sketch follows this list).
- Making them too hard means that there is a scarcity of reporting data because users can't report the content they want to; either it's too slow or they get lost in the process. Some platforms have actually treated labyrinthine reporting flows as a feature, not a bug, in an attempt to alleviate the incredibly messy data problem I mentioned above.
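To put a rough number on that "incredibly expensive" point, here's a back-of-envelope sketch. Every figure in it is made up for illustration; plug in your own volumes and costs:
```python
# Back-of-envelope cost of reviewing reports that end in no action.
# All figures below are hypothetical; swap in your own platform's numbers.
reports_per_month = 500_000
no_action_rate = 0.80        # share of reports that result in no action
seconds_per_review = 30      # average moderator handling time per report
hourly_cost = 25.0           # fully loaded cost of one moderator hour, in USD

wasted_hours = reports_per_month * no_action_rate * seconds_per_review / 3600
print(f"Hours spent on no-action reports each month: {wasted_hours:,.0f}")
print(f"Approximate monthly cost of that time: ${wasted_hours * hourly_cost:,.0f}")
```
Even with fairly conservative assumptions, that's thousands of review hours a month spent confirming that nothing needed to happen.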
This poses a dilemma for platforms, especially those with limited resources: do you pour resources into redesigning reporting flows in the hope that it yields more useful data? Reddit updated its reporting flow back in 2019 and Twitter, as it was then known, tried this by focusing on a "symptoms-first approach" (EiM #140).
Or do you invest in proactive detection that seeks to address the lack of user reports caused by complex reporting flows? Many platforms have taken this approach by using a host of custom-built and off-the-shelf tools to complement user reporting.
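As a sketch of what "complementing" can look like in practice (the classifier here is a stand-in for whatever custom or off-the-shelf model a platform uses, and every name and threshold is hypothetical), proactive scores and user reports can feed a single prioritised review queue, so content nobody reports can still surface:
```python
import heapq

def proactive_score(text: str) -> float:
    """Stand-in for a real classifier (custom model or off-the-shelf tool); returns a 0-1 risk score."""
    return 0.9 if "scam" in text.lower() else 0.1

def build_review_queue(items: list[dict]) -> list[tuple[float, str]]:
    """Merge proactive scores and user-report counts into one priority-ordered queue.

    Content with zero reports can still surface if the classifier flags it,
    which is exactly the gap that user reporting alone leaves open.
    """
    heap = []
    for item in items:
        priority = max(proactive_score(item["text"]), min(item["report_count"], 10) / 10)
        heapq.heappush(heap, (-priority, item["content_id"]))
    return [heapq.heappop(heap) for _ in range(len(heap))]

items = [
    {"content_id": "c1", "text": "Totally legit crypto scam, DM me", "report_count": 0},
    {"content_id": "c2", "text": "You're an idiot", "report_count": 3},
]
for neg_priority, content_id in build_review_queue(items):
    print(content_id, "priority:", -neg_priority)
```
The design choice that matters is that neither signal is gated on the other: a high proactive score surfaces unreported content, and report counts are capped so a pile-on only lifts priority so far.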
These decisions are never either/or, but I believe the inherent limitations of user reports are too little understood by users and by anyone outside the T&S team. Next week, I'll suggest what an alternative could look like.
Related reads...
- Social media use is changing, but why, and what does it mean for T&S?
- Fighting complexity with auditability
- Mind the internet safety gap
You ask, I answer
Send me your questions — or things you need help to think through — and I'll answer them in an upcoming edition of T&S Insider, only with Everything in Moderation*
Get in touch
Also worth reading
Platform governance: myths vs reality (Digital Politics)
Why? A look at the "political battle raging over what happens on social media worldwide" and how to balance regulation, platform governance, and what's best for users.
Anti-Porn Laws' Real Target Is Free Speech (404 Media)
Why? "Anti-porn laws can't stop porn, but they can stop free speech." Well said!
Bluesky trust and safety is too important to be left to Bluesky (Platformocracy)
Why? An argument for a safety council to govern Bluesky's T&S programs (in addition to their layered moderation approach).
‘Sextortion’ Scams Involving Apple Messages Ended in Tragedy for These Boys (WSJ)
Why? I'm glad to see more public discussion of sextortion, as horrifying as it is. The more people who understand it and can talk to teens about it before they're targeted, the better.