5 min read

The tool 'used by bad people to do bad things', X's new anti-bot effort and detecting deepfakes

The week in content moderation - edition #244

Hello and welcome to Everything in Moderation's Week in Review, your in-depth guide to the policies, products, platforms and people shaping the future of online speech and the internet. It's written by me, Ben Whitelaw and supported by members like you.

Today’s edition has a host of stories looking at an internet harm as old as the web itself: sexual content. And while everyone can probably agree that non-consensual and deepfaked content should not be allowed, deciding what constitutes harmful sexual content is harder, and detecting and removing it harder still. Not that that will stop governments, in particular, from trying. It's a topic that Alice takes on in this week's T&S Insider, if you haven't already read it.

To EiM's newest subscribers — including people working at ByteDance, Match, Discord, Checkr, Intuit, eSafety Commission, Ofcom, Cornell and Rio de Janeiro State University — welcome to the club.

Here's what you need to know from the last week — BW

Want to reach thousands of Trust & Safety experts?

Sponsoring Everything in Moderation gets your message in front of 2800+ practitioners and other experts working on the thorniest online safety problems. Subscribers work for platforms of all sizes, governments, academic institutions and technology companies around the world.

To find out more about the newsletter and podcast packages on offer, fill in the following short form...


New and emerging internet policy and online speech regulation

Deepfaked videos of women mocking people of colour and purporting to be relatives of a far-right French politician have racked up 2 million views on TikTok — despite not being real. According to Euractiv, the three women were made to appear to be nieces of Marine Le Pen as they talked about the upcoming EU elections and presented a glamorous image of National Rally, France’s far-right party. It's the latest — but probably not the last — story in a long line of gen AI/election head-in-hands moments.

Wider context: France is one of the European countries that has yet to pass legislation adapting the DSA into French law and appointing a regulator (EiM #233). The irony is that National Rally opposed the bill that would do so on the grounds that it represented “authorisation measures”. I wonder if this has changed their minds.

In the UK, creating sexually explicit deepfake content is to be made illegal under a new law designed to prevent the targeting of women online. The new offence, which would be introduced as an amendment to the Criminal Justice Bill, would carry a criminal record and a fine, with the penalty increasing — potentially to a jail sentence — if the content was shared more widely.

What makes it worse is that, if you are a journalist wanting to investigate this harm and happen to be a woman, you might find your face on an AI-generated image, as Channel 4 presenter Cathy Newman did. No one is safe.

Get access to the rest of this edition of EiM and 200+ others by becoming a paying member