6 min read

The great AI music grift, Section 230's next fight and more Meta whistleblowers

The week in content moderation - edition #329

Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.

'Where there's money, there's thieves' is an old adage that applies to T&S generally but particularly to AI-generated content. Today's edition looks at the challenges music platform Deezer faces in combating fraudulent content and plays, and Mike and I go deeper in the newest edition of Ctrl-Alt-Speech. Listen wherever you get your podcasts.

If you're in London next week for the T&S Summit, drop me a line or book a slot in my calendar to say hi. I'll be on a stellar panel about the important role of Trust in Trust & Safety so come along to that and hear me try and fathom the views and behaviours of Gen Z internet users.

From sunny London, this is your Week in Review — BW


IN PARTNERSHIP WITH CHECKSTEP, the AI content moderation platform
CTA Image

Attending the Trust & Safety Summit in London? Don’t miss our panel: “When AI Is Perfect, Why Do We Still Need Humans?”

Join Checkstep and T&S leaders from our clients Daily Mail Group and JustGiving, alongside our partners ModSquad, as we explore how content moderation and community management are evolving — and why the right blend of AI and human expertise is critical for long-term success at scale.

Catch the panel on Wednesday 25 March at 11:50am!

LEARN MORE

Policies

New and emerging internet policy and online speech regulation

Congress this week took another swing at Section 230 as a committee helmed by Ted Cruz (EiM #308) sought to understand the role that the famous internet law has played in the growth of major US tech platforms and the current state of online speech. There was the usual mix of child safety concerns, partisan complaints about government “jawboning” and some platform design critiques, so I don’t blame you if you don’t have time to watch the full two-hour video back. However, do skip to halfway through to see Stanford Law School’s Daphne Keller deliver an excellent riposte to a startled senator. Tasty.

The Canadian government is getting its online safety band (read: advisory group) back together as the country — like many other nations — grapples with how to deal with AI harms in the aftermath of the fatal Tumbler Ridge shooting. The group previously advised on the controversial Bill C-63 and will likely be expected to advise on a social media ban for kids; Prime Minister Mark Carney recently said Canadian legislation lagged behind other countries' and that "there is a need to at minimum ... catch up to that."

Also in this section...

Money for Nothing and Clicks for a Fee - Ctrl-Alt-Speech
In this week’s round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover: Gamblers trying to win a bet on Polymarket are vowing to kill me if I don’t rewrite an Iran missile story (Times of Israel)…

Products

Features, functionality and technology shaping online speech

A New York Times opinion piece tied to the Los Angeles trial involving Instagram and YouTube argues that social media should be understood not just as a speech environment but as a defective, hazardous product. Tim Wu, Columbia law professor, says that an “unfortunate side effect of the information idiom is the idea that everything is a form of speech and therefore exempt from regulation”. Closing arguments in the LA trial were made on Thursday, so the jury will deliver its verdict in the coming weeks.

Also in this section...

Enjoying today's edition? Support EiM!

💡 Become an individual member and get access to the whole EiM archive, including the full back catalogue of Alice Hunsberger's T&S Insider.

💸 Send a tip whenever you particularly enjoyed an edition or shared a link you read in EiM with a colleague or friend.

📎 Urge your employer to take out organisational access so your whole team can benefit from ongoing access to all parts of EiM!

Platforms

Social networks and the application of content guidelines

French streamer Deezer has highlighted a growing trend of AI-generated music being uploaded and repeatedly streamed by fraudsters to siphon royalties away from legitimate artists. According to results posted by the company and reported by the FT, more than 85% of streams of AI-generated music on the platform are fraudulent, compared to around 8% for the catalogue as a whole. Meanwhile, the platform is seeing more AI-generated music uploaded every day.

Change the tune: Mike makes the point in this week’s Ctrl-Alt-Speech that new technologies — synths, samplers, auto-tuning — have always been used to create music. However, I very much doubt that the uptick Deezer is seeing is driven by legitimate artists using AI as an experimental tool. As was reported this week, it’s just as likely to be far-right political groups using AI to game music platforms and spread anti-immigration sentiment.

I don’t want to be the guy that drags up the long-running TikTok sale again (EiM #277) but I spotted an interesting nugget in this Bloomberg story on the US government’s hefty payout: the new CEO will be Adam Presser, who was most recently head of operations and Trust & Safety. That won’t do much to allay TikTok user fears that the app is about to be censored or sanitised. 

People

Those impacting the future of online safety and moderation

The story of how platforms like Facebook steered users toward borderline content because it kept them engaged will be familiar to most people in online safety. A new documentary from the BBC, Inside the Rage Machine, usefully lays out the logic, evidence and consequences for anyone coming to the issue fresh.

One of the main voices in the documentary is Matt Motyl, a former Meta civic integrity researcher turned independent scholar, who has spent more than two decades researching how social media harms are measured and understood. He writes a very good regular newsletter called Unmoderated Insights.

Motyl goes on record to share vast amounts of research and describe a company culture in which evidence of harm could struggle to compete with growth priorities. Another former staffer also shared how, while Instagram hired 700 staff as part of a Reels launch, requests for a dozen roles focused on child safety and election integrity were rejected.

Motyl's experience, and that of all the sources in the documentary, goes a long way to giving the general public a clearer picture of how platform harms are produced — not just by bad actors but by the company's systems and incentives too.

Posts of note

Handpicked posts that caught my eye this week

  • “The real issue isn’t the door. It’s the building design. Our research showed that when platforms removed specific content or accounts, motivated users often adapted quickly—sometimes producing information ecosystems that were more polarized and more viral than before.” - As the safety field shifts focus to product design, I very much welcome the smart inputs of engineering experts like David Broniatowski and Joseph Simons.
  • “It's a dream to be able to apply my knowledge from the past five years on the T&S operations team and a decade of being an avid user of the platform.” - Product safety is becoming more central and Olivia Peach's new Discord role reflects that.
  • “Whether a synthetic image or video got labeled came down to a combination of how the content was created, what device was used to upload it, and which platform it was posted on. While these failure modes were not random, the process ended up feeling like a slot machine.” - I’m a proud paying subscriber to Indicator Media and Alexios Mantzarlis highlights why.