
šŸ“Œ Hateful memes, upload filters and deleting racist Facebook comments

The week in content moderation - edition #91

Welcome to Everything in Moderation, your weekly newsletter about content moderation, carefully curated by me, Ben Whitelaw.

If you're new to EiM (maybe you came via this kind tweet), you'll find a bit about me and the newsletter at the bottom of this edition. And, if you feel inclined, do email and say hi.

Here's what you need to know this week — BW


šŸ“œ Policies - company guidelines and speech regulation

We saw this coming a mile off (see EiM #88), but Twitter, Mozilla, Automattic and Vimeo this week implored the European Commission to take a flexible approach to harmful content ahead of the publication of the Digital Services Act next week (15 December).

Unlike past episodes, this doesn't look like big tech trying to shirk its responsibilities: the letter recommends a ā€˜tech-neutral and human rights-based approach’ and lists all four companies' support for:

  • algorithmic transparency and control
  • setting limits to the discoverability of harmful content
  • further exploring community moderation
  • providing meaningful user choice (e.g. opting out of ads)

It all feels a bit late (how many changes to the DSA are likely to be made in the final six days before publication?) but it shows that the dominant digital platforms — and some smaller ones — are worried: they fear stricter takedown reporting, faster turnaround times and enforced transparency. And that means big changes for their businesses.

On the topic of the Digital Services Act, FEPS — an EU think tank — has an interesting-looking event on Wednesday next week to take a first look at the new proposals. It’s free to attend and has a great panel. (Thanks to Justin for flagging).

šŸ’” Products - features and functionality

I’d never heard of the Hateful Memes Challenge, Facebook's $100,000 competition launched in May, until I read this VentureBeat piece. Apparently, hundreds of teams have spent the last six months competing to create machine learning systems that can identify offensive memes above a baseline AI detection rate of 64.7% and even beat human moderators, who correctly spot a hateful meme in 85.7% of cases.

I don't completely understand the science but the piece sets out the idea of ā€˜multimodal learning’: to classify an image overlaid with text, an AI system has to be trained on both signals together, because neither the picture nor the caption alone reveals whether a meme is hateful. Pretty fascinating.
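For the curious, here's a minimal sketch of the fusion idea in PyTorch. To be clear, this is only a toy illustration and not Facebook's actual system: the model, the layer sizes and the assumption of precomputed embeddings are all made up for the sketch.

```python
import torch
import torch.nn as nn

class MemeClassifier(nn.Module):
    """Toy multimodal classifier: fuses an image embedding with an
    embedding of the overlaid text, then scores hateful vs. benign.
    All dimensions are illustrative, not taken from the challenge."""

    def __init__(self, img_dim=512, txt_dim=512, hidden=256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # logits: [benign, hateful]
        )

    def forward(self, img_emb, txt_emb):
        # Early fusion: concatenating the two modalities lets the network
        # learn interactions between picture and caption -- the point of
        # multimodal learning, since neither signal alone is enough.
        return self.fuse(torch.cat([img_emb, txt_emb], dim=-1))

# Dummy vectors standing in for the output of pretrained image/text encoders
model = MemeClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 2])
```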

Bonus: Viafoura has a webinar next week (16 December) on the new rules of moderation, looking at its role in maintaining user engagement with digital products. Might be interesting, might also be rubbish.

šŸ’¬ Platforms - dominant digital platforms

Pornhub has increased its moderation capacity and will begin publishing a regular transparency report following a big New York Times story this week that claimed it hosted and monetised rape videos and revenge porn. The new moderation squad, called the ā€˜Red Team’, will be tasked with ā€˜proactively sweeping content’ and, in a change to its policies, only identifiable users will be able to upload content.

Will Oremus at OneZero noted that Pornhub’s response to the story ā€œmirrors that of other platforms unaccustomed to being confronted publicly with their ugliest underbelliesā€. However, the speed of the changes, which came just four days after the story was published, was pretty impressive. Let’s wait and see if it helps.

It wasn’t the only platform with an adult content problem this week: Parler is apparently being overwhelmed with hardcore images as a result of its relaxed policies and volunteer moderation model. Not surprising in the slightest.

šŸ‘„ People - those shaping the future of content moderation

Read this new Vanity Fair piece on Clubhouse, the invite-only audio app for influential people, and you can't help but be struck by the work of Tracy Chou.

In the piece, Chou — formerly of Pinterest and Quora and now CEO of Block Party, a tool for tackling online harassment — takes the time to chat to one of Clubhouse's founders about the nuances of online speech and to recommend other experts he can talk to about moderating audio. All for free.

I’ve got access to Block Party and it’s a great product (although, luckily for me, not something I have a need for right now), but this informal, behind-the-scenes, for-the-greater-good work is rarely called out or commended. So kudos to Tracy for that.

🐦 Tweets of note

  • "It all happens to us first. trust me, we know before youā€ - writer and meme critic Erin Taylor on why sex workers are so familiar with content moderation.
  • "I've spent the last 30 minutes deleting racist Facebook comments and racist Black Lives Matter "jokes" from underneath a court story about the murder of a young black manā€ - UK journalist Rebecca Marano doing a job that, in an ideal world, she wouldn’t have to.
  • "The idea that a government can somehow nationalise the internet to ban anonymity never goes wellā€ - My favourite self-declared NGO cyberboffin Andrew Ford Lyons responds to another nutty MP calling for anonymity on social networks.

Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.