6 min read

35,000 ways to harass women, Wikipedia pushes back and Stoll turns source

The week in content moderation - edition #292

Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.

This week's T&S Insider got a lot of you talking about what it means to be someone responsible for keeping people safe online in 2025. With last week's Ghanaian whistleblower story (EiM #291) and the major BPO layoffs announced this week, it’s clear that not everyone in the industry feels equally empowered or supported.

We're taking a break from Ctrl-Alt-Speech this week as Mike is at a conference. But you can still catch up on last week's episode and use the time to leave a literary-themed review (or frankly any review, they all help massively).

This is your Week in Review for the last seven days — BW


SPONSORED BY All Things in Moderation, the annual conference for humans who moderate

The wait is over — tickets are now available for All Things in Moderation (ATIM) 2025, the go-to global event for anyone working to create safer, more inclusive, and better-governed online spaces.

Taking place over two packed days (15–16 May), this year’s programme includes:

  • Cutting-edge approaches to moderation and governance
  • Key regulatory shifts affecting digital platforms
  • Strategies for protecting younger users online
  • Rethinking community in the age of platform dominance

…and plenty more.

If you’re involved in online communities, trust & safety, tech policy, or product design — or simply care about the internet’s future — ATIM is the place to be.

BUY YOUR TICKET

Policies

New and emerging internet policy and online speech regulation

The Wikimedia Foundation has issued a rare and direct rebuke of the UK’s Online Safety Act, warning that its categorisation as high risk could threaten its whole model. In a blogpost reported by The Verge, it argued that “the most burdensome compliance obligations” should not apply to “someone reading an online encyclopaedia article about a historical figure or cultural landmark”. Lead counsel Phil Bradley-Schmieg also argued that the OSA's Category 1 duties could “undermine the privacy and safety of Wikipedia volunteers”.

Meanwhile in Brussels, the European Commission has taken legal action against Poland, in part for failing to designate a Digital Services Co-ordinator (DSC) under the Digital Services Act (DSA). The central European country was given an initial deadline of February 2024 and warnings thereafter. Four other countries — Cyprus, Spain, Portugal and Czechia — have appointed DSCs but failed to “entrust them with the necessary powers”, meaning they too will be referred to the European Court of Justice.

Also in this section...

Ten things no one tells you about working in T&S
Newcomers to the Trust & Safety world often ask me what it's like to work in the industry and what I wish I'd known before I started. So here are my ten hard-won lessons for the next generation of online safety professionals.

Products

Features, functionality and technology shaping online speech

Oxford Internet Institute researchers have documented a steep rise in the availability of deepfake AI image generators, with the overwhelming majority targeted at women. Researchers Will Hawkins, Chris Russell and Brent Mittelstadt found more than 35,000 examples of public models that have been downloaded more than 15m times since 2022, with “many (...) intended for the generation of sexual content, or non-consensual intimate imagery, despite this violating the Terms of Service of hosting platforms”. The full paper can be read here.

Playing dumb: Civitai, one of the largest repositories of such models and one mentioned heavily in the research, published a blogpost in August last year promising the rollout of safety measures, including advanced filters called “Semi-Permeable Membranes” that “prevent the misuse of AI by blocking the generation of deep fakes and preserving digital identities.” That’s going well, then.

Right on cue, 404 Media has a report on how Twitter/X’s own AI chatbot Grok will happily remove the clothes of anyone you like if you simply @ it. The scale of this usage pattern is unclear but it’s a worrying, if unsurprising, discovery. Platform accountability expert Phumzile Van Damme noted this morning that the loophole seems to have been closed. But did it really need this to happen?

Platforms

Social networks and the application of content guidelines

Match Group is laying off 13% of its workforce, including members of its customer care and content moderation teams, after a slowdown in paying customers. The parent company of Tinder, Hinge and OkCupid also touted new T&S features on Tinder designed to ‘improve platform integrity’ in its quarterly investor update this week.

(I'm trying to overlook the fact that the only Tinder 'safety' product announced this quarter is this awkward-sounding feature in which you try and chat up an AI persona. Eeek)

Product over people?: Just last month, an investigation found that Match Group had been slow to react to reports of sexual assault via its platforms (EiM #282). Its new CEO has suggested that a leaner, product-focused company could have avoided that scenario. We know that hasn’t always ended well in the past.

It was news that we knew was coming (EiM #289) but this week Meta’s decision to cut over 2,000 moderation jobs in Spain was formally announced. Employed via Canadian BPO Telus International, which operates locally as Barcelona Digital Services, the moderators will be let go between May and September. It comes as part of a broader retreat from content review announced by Mark Zuckerberg back in January (EiM #276).

People

Those impacting the future of online safety and moderation

If you're trying to figure out what's going on at X/Twitter, you’re not alone — but there’s a new person worth knowing about.

John Stoll, a former columnist and editor at The Wall Street Journal, is now leading media partnerships at the company. He was quoted in The Atlantic this week defending the platform's approach to user-led moderation via Community Notes. I’d missed the announcement but Stoll's LinkedIn profile notes he joined in January this year.

CEO Linda Yaccarino has made no secret of her desire to help journalists make a living on the platform, in the way that other creators peddling viral clips and engagement-bait videos have occasionally been able to. That might be harder than it looks for Stoll: I’ve spoken to journalists with more than a million followers and strong engagement who make only pocket money from X/Twitter’s Creator Revenue Sharing scheme. And that’s before getting into whether we want Elon deciding who constitutes a news source and who doesn’t.

Posts of note

Handpicked posts that caught my eye this week

  • “Here I tried to sum up what I've learned, what's changed over the past two years and what companies should do to better recognise the importance of this work” - TBIJ reporter Niamh McIntyre gives an overview of her work giving voice to content moderators around the world.
  • “I cannot possibly thank all the fab people at the LEGO Group that I was honored to work with, but I can definitely send you virtual hugs and the open invitation to stay connected. 🙏🏽” - Dr Elizabeth Milovidov is moving on from the LEGO Group.
  • “This year, I joined Harvard's Berkman Klein Center for Internet & Society as an affiliate and next week, I'm excited to be giving a talk with them on the future of social media transparency.” - A date, from Brandon Silverman, for your calendar