
More teen social media bans, nudify ads nixed and Rudd remembered

The week in content moderation - edition #295

Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.

Welcome to new subscribers from TikTok, Ofcom, EA, Wikimedia, Mozilla, eBay, Slack and a host of other movers and shakers in the T&S space. Don't forget to hit reply and tell me about yourself; you can also customise which newsletters you receive in your account.

If you like your online safety news and analysis in audio form, I'm back in the Ctrl-Alt-Speech chair this week after a month's break. Mike and I go deep on some of the stories below but the real juice is hearing my sleep-addled brain come up with an unfortunate name for our quick story round-up. Have a listen.

Outrage For The Machine - Ctrl-Alt-Speech
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover: He’s a Master of Outrage on X. The Pay Isn’t Great. (NY Times); The vulnerable teen drawn into far-right extremism online (Fi…

A quick shoutout to all of the sponsors who support EiM in its current form and allow me to produce both Week in Review and T&S Insider each week. If your company or organisation would like to reach thousands of decision-makers in online safety, government and digital rights, get in touch.

That's all from me; here's everything in moderation from the last seven days — BW


IN PARTNERSHIP WITH TREMAU, THE T&S PROVIDER BUILDING END-TO-END SOLUTIONS FOR ONLINE PLATFORMS

At Tremau, we’ve seen how complex Trust & Safety has become — especially with the ever-changing regulatory landscape.

That’s why we don’t just build tech. We bring together policy experts, ex-regulators, and engineers to help platforms get it right, end-to-end. So we built Nima, our Trust & Safety orchestration platform.

It’s a control center for your T&S operations: Nima helps you streamline moderation, centralize analytics, and automate compliance workflows.

Take the DSA Transparency Report: a complex and high-stakes task that Nima makes easier by helping you:
● Classify decisions under DSA infringement categories;
● Log actions in a built-in analytics database;
● Keep the Transparency Report automatically up to date

So when the next deadline hits (early 2026!), your Report is ready for download.

SEE NIMA IN ACTION NOW

Policies

New and emerging internet policy and online speech regulation

More than 20 US civil society organisations have written a joint letter against the Stop CSAM Act — which seeks to hold tech platforms accountable for child sexual abuse material — after it advanced through the Senate Judiciary Committee yesterday. The Internet Society, Electronic Frontier Foundation and the Center for Democracy and Technology have all signed, claiming that the bill’s “vague liability provisions” pose risks to privacy and security.

France looks like it will follow Australia and other countries in pushing for a social media ban for users under 15. President Emmanuel Macron made his clearest public comments yet following the stabbing of a teaching assistant on the outskirts of Paris, saying he would try for “a few months to achieve European mobilisation” for a teen ban before going it alone. It’s the third show of online safety impatience in a fortnight following the suspension of Pornhub and the self-claimed victory for the banning of #SkinnyTok (both EiM #294). Seems like something is in the red wine.

Also in this section...

Products

Features, functionality and technology shaping online speech

Meta has announced new detection tools to spot and take down so-called nudify apps which perpetuate non-consensual intimate imagery (NCII) — even if the ads don’t mention or show nudity. In a blogpost, the company also announced it would share URLs and other signals with platforms via the Lantern programme so action could be taken.

Lantern started as a programme for child safety but, as Alice mentioned recently, has expanded and is open to new companies.

Also in this section...

Platforms

Social networks and the application of content guidelines

YouTube has sneakily updated its moderation policy to allow certain videos that break its rules but are in the “public interest” to stay on the platform. The change, which was discovered in training material reviewed by the New York Times, follows in the footsteps of Meta earlier this year (EiM #276). The only difference is that Neal Mohan or Sundar Pichai didn't have to sport any bling jewellery or record an embarrassing video to achieve the same outcome.

Alexios Mantzarlis on Meta’s ‘more speech, fewer mistakes’ announcement
Covering: Mark Zuckerberg’s accusations of fact checking bias, Community Notes and the power of users’ ‘directional sense’ and the decision to prioritise US vs global speech

[Warning: sexually graphic material] A prominent UK adult creator has had her Instagram and TikTok accounts removed ahead of her latest ‘petting zoo’ stunt. Bonnie Blue has also been permanently banned from OnlyFans for breaching its rules on “extreme challenge content”, although I found no mention of the word "challenge" in either its Terms of Use or Acceptable Use Policy. Blue had been earning an estimated £600,000 a year. 

The recent Meta news that it will soon rely on AI for the majority of risk assessments on its platforms has gone down predictably badly. The Guardian reports that three non-profits have written to Ofcom, the UK regulator, raising concerns about the lack of human oversight. Meta responded by touting its “significantly improved safety outcomes” when using AI. I'd like to see the proof of that particular pudding...

Also in this section...

People

Those impacting the future of online safety and moderation

[Warning: themes of self-harm and suicide] Rhianan Rudd was described by a police officer as “one of the most vulnerable children she had met in her entire career”. The 16-year-old's difficult upbringing led her to be groomed by an American neo-Nazi she met online, who furnished her with anti-Semitic views and bomb-making materials. Following her arrest for terrorism offences, she reportedly took her own life in 2022.

Her story is told in a long read by the Financial Times (free link) following the conclusion of the inquest into her death last week. There is a lot to unpack in Rudd's story — family trauma, police and healthcare failings, the pull of her old life even as she recovered — but my takeaway was that this was not something that happened solely because she was allowed access to Discord and Telegram. Have a read and tell me what you think.

Posts of note

Handpicked posts that caught my eye this week

  • "Sadly, I’m seeking a new role due to a reduction in force last week at Character.AI where my role was impacted." - go and hire good friend of EiM, Cathryn Weems. Few are as experienced as she is.
  • "By offering cheap cards, they direct unsuspecting users to phishing websites and get their credit card details. Some admins seem to be in Vietnam, India, Bangladesh... but not a single one in Spain." - Carlos Hernández-Echevarría and the team at Maldita continue to do some fantastic work.
  • "The team has done an incredible work - check it out! It's super helpful for anyone working in Trust & Safety or building the AI detection models." - Alexandra Koptyaeva shares a handy database of open-source tools curated by ROOST.