Social media use is changing, but why, and what does it mean for T&S?
I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that Trust & Safety professionals need to know about to do their job.
New research suggests that adults are spending less time on some platforms and you'd be forgiven for thinking that means less potential for harm too. In today's edition, I take a look at what could be behind this shift and what T&S professionals can do to get ahead.
Reminder: If you want a break from T&S Insider, or indeed Ben's Friday round-up, you can change your newsletter preferences on the EiM site without hitting unsubscribe (which removes you from all future emails - sad face).
Get in touch if you'd like your questions answered or just want to share your feedback. Plus, scroll to the links section for a look at some recent child safety updates and my #1 (cheeky) networking tip.
Here we go! — Alice
For over 20 years, Resolver has been at the forefront of protecting communities from online threats. We provide unrivalled intelligence and strategic support to platforms and regulators, driving innovation in Trust & Safety.
At this year’s Trust & Safety Professional Association EMEA summit, we’re thrilled to be taking part in a vital panel session: The Psychological Cost of Innovation: Reassessing Well-being Challenges for New Types of T&S Work.
Join industry leaders, academics, and frontline professionals as we explore how the evolution of Trust & Safety work is reshaping mental health norms, organisational responsibility, and sustainable innovation.
Getting ahead of changes in social media usage
If you don't already know about the Neely Center Ethics and Technology Indices, you should.
Created by the University of Southern California, the Indices measure the positive and negative experiences of US adults when using social media platforms, AI, and AR/VR, and are designed to inform the design, use and regulation of these emerging technologies. Last week, Neely senior advisor Matt Motyl published the results from the social media index survey with data through January 2025.
The findings offer an important read on shifting user experiences and the knock-on effects for T&S teams.
Who's up and who's down
Let's start with people's experiences on the platforms. Two stood out in particular:
- LinkedIn scored well on both the positive and negative experience measures (lower numbers/green scores are better)
- Reddit performed less strongly across three of the four experiences: witnessing or experiencing something that affects you negatively; seeing content that you would consider bad for the world; and experiencing a meaningful connection with others (see chart below).

However, neither of those scores seemed to have much bearing on platform usage: while adults reported spending less time on some platforms, Reddit's usage increased an incredible 39.6% over the period.

While the author of the study suggests Reddit's success may have to do with its partnership with Google, I have another theory that merits further research: Reddit gives subreddit moderators the ability to make and enforce rules against AI-generated content.
In a landscape increasingly flooded with AI slop, Reddit has positioned itself as a haven for human-centred conversation. You might have read the Ars Technica piece from February about the lengths mods are going to:
A mod of r/wheeloftime told me that the subreddit's mods banned generative AI because “we focus on genuine discussion.” Halaku believes AI content can’t facilitate “organic, genuine discussion” and “can drown out actual artwork being done by actual artists.”
As a very active LinkedIn user, I’ve seen a lot of complaints lately about AI content on the platform. It’s not that the users themselves are fake; it’s that their posts seem insincere and irrelevant, yet are promoted anyway.
This shift in both user demographics and content is a challenge for T&S teams.
What changing platform dynamics mean for Trust & Safety
Most platforms still rely heavily on user reports as a content review signal, despite their longstanding accuracy issues. But when a platform loses a significant chunk of its user base, two things happen simultaneously:
- There are fewer eyeballs to stumble across and flag policy-violating posts.
- The remaining population skews proportionally towards power users, who post more often.
Add to this the fact that:
- Spam content created by generative AI looks less like legacy spam (repetitive phrases, suspicious links, bad grammar) and more like legitimate user speech.
- Legitimate users are using generative AI tools to assist with their posts, further complicating how T&S systems evaluate content.
This is a double-hit to recall: fewer items are flagged, and reports are less accurate.
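To make the arithmetic concrete, here's a toy calculation (all numbers invented purely for illustration) of how report-driven recall falls when report coverage and report accuracy drop at the same time:

```python
# Toy illustration (made-up numbers) of the "double hit" to report-driven recall:
# fewer users means fewer violating posts get flagged at all, and AI-written
# violations that read like legitimate speech mean fewer of the flags are valid.

def report_driven_recall(report_coverage: float, report_precision: float) -> float:
    """Share of all violating items that end up correctly flagged, assuming
    every valid report is actioned."""
    return report_coverage * report_precision

# Before the shift: a healthy user base flags 40% of violations, and 70% of
# those flags turn out to be accurate.
before = report_driven_recall(report_coverage=0.40, report_precision=0.70)

# After the shift: fewer eyeballs (coverage falls to 25%) and noisier reports
# (precision falls to 50%).
after = report_driven_recall(report_coverage=0.25, report_precision=0.50)

print(f"recall before: {before:.1%}")  # 28.0%
print(f"recall after:  {after:.1%}")   # 12.5%; the two drops compound
```

The exact figures don't matter; the point is that the two effects multiply rather than add, so recall erodes faster than either trend suggests on its own.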
Some practical ideas
Given these shifts, T&S teams need to adapt. Here are a few practical approaches worth considering as platform dynamics and content characteristics continue to evolve:
- Give greater weight to signals that don’t depend on user volume (e.g., network-level abuse patterns, metadata anomalies, behavioural patterns).
- Track the average number of reports per X impressions; when it falls below an internal baseline, automatically lower model thresholds or raise proactive sampling to compensate (see the sketch after this list).
- Put greater weight on trusted flaggers (those whose past flags have been actioned at a high rate).
- Fine-tune models with labelled AI-generated violations so they pick up more subtle stylistic cues.
- Look at the holistic user experience: are remaining users coming across bad content or users more often now? Are power users more likely to engage in behaviour that drives others away? What does "good" user behaviour and content look like, and how can you promote that?
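Here's a minimal sketch of what the report-rate guardrail from the second bullet could look like. Everything in it is a hypothetical placeholder (the baseline, the scaling rule, the config names); the idea is simply that proactive detection ramps up as user reporting ramps down.

```python
# A minimal sketch, assuming you already track report counts and impressions
# per time window and run a proactive classifier with a tunable threshold.
# All baselines, limits and scaling rules below are illustrative placeholders.

from dataclasses import dataclass

@dataclass
class ProactiveConfig:
    model_threshold: float   # classifier score above which content is queued for review
    sample_rate: float       # share of posts routed to proactive human sampling

def reports_per_1k(report_count: int, impressions: int) -> float:
    """Report rate normalised per 1,000 impressions."""
    return 1000 * report_count / max(impressions, 1)

def adjust_for_report_drop(
    current_rate: float,
    baseline_rate: float,
    config: ProactiveConfig,
    min_threshold: float = 0.5,
    max_sample_rate: float = 0.05,
) -> ProactiveConfig:
    """Scale proactive detection up in proportion to how far the report rate
    has fallen below baseline (a simple linear rule; tune to taste)."""
    if current_rate >= baseline_rate:
        return config  # reporting volume is healthy; leave settings alone

    shortfall = 1 - current_rate / baseline_rate  # between 0.0 and 1.0
    return ProactiveConfig(
        model_threshold=max(min_threshold, config.model_threshold * (1 - 0.2 * shortfall)),
        sample_rate=min(max_sample_rate, config.sample_rate * (1 + shortfall)),
    )

# Example: reports fell from a baseline of ~3.0 to ~1.8 per 1k impressions this week.
config = ProactiveConfig(model_threshold=0.85, sample_rate=0.01)
rate = reports_per_1k(report_count=1_800, impressions=1_000_000)
print(adjust_for_report_drop(rate, baseline_rate=3.0, config=config))
```

In practice you'd probably compute the baseline per surface or content type and have a human review the adjustment before it takes effect, but the shape of the logic is the same.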
Over to you: What other ideas do you have for adjusting to this shift in behaviour? Hit reply and let me know.
You ask, I answer
Send me your questions (or things you need help thinking through) and I'll answer them in an upcoming edition of T&S Insider, only with Everything in Moderation*
Get in touch

Child Safety updates
Back in December, I posted a series of predictions for 2025. One of them was that “we’ll see an increase in regulation, policies, and feature changes around youth experiences online, especially apps and services featuring generative AI.”
This week, almost every link I bookmarked was about child safety:
- Platforms have a year to comply with new COPPA requirements, which are mostly around what data can be collected and how parental consent works.
- Ofcom also released new guidance for platforms, which have to conduct risk assessments between now and July.
- The eSafety Commissioner in Australia has released information on recommender systems and algorithms, with resources for parents to learn more.
- Thorn’s latest report shows that “1 in 4 young people report receiving a solicitation to exchange sexual imagery, engage in sexual talk, or participate in a sexual interaction in return for something of value before turning 18.”
- The Tech Coalition has released their annual report, showing 10 new members and 14 new participants in their Project Lantern program.
- Internet Watch Foundation’s annual report shows an 8% increase in reports over the course of 2024.
Meanwhile, the Wall Street Journal reports that Meta’s chatbots will happily talk sex with users of all ages, highlighting the need for both robust red teaming and Safety By Design principles.
Also worth reading
Videos demeaning trans women and girls don’t violate Meta’s guidelines, Oversight Board rules (19th News)
Why: "This ruling gives “terrible validation to Meta’s new harmful approach to content moderation” and shows “Meta is moving its products away from longtime industry standard best practices and deeper into toxicity that harms users,” said Ellis, GLAAD’s CEO."
(Also check out Ben's analysis of this ruling in Friday's EiM post, if you haven't already.)
In DOGE’s Hunt For Imaginary Censors, It Kills Actual Anti-Censorship Research (TechDirt)
Why: "The people most loudly (misleadingly) complaining about censorship just… helped enable actual censorship. Not metaphorical censorship, not “they won’t let me tweet slurs” censorship, but literal “we’re going to stop research into fighting actual government censorship” censorship."
DoorDash released new community guidelines
Why: I want to give them a shout out: they're easy to read, user-centered, and tell users what values they care about and what they DO want to see (something that I intentionally built into my rewrite of Grindr's guidelines).
And as promised, here's my #1 networking tip.