
📌 Avoiding hate speech, 'troll' pays damages and YouTube under fire

The week in content moderation - edition #120

Welcome to Everything in Moderation, a weekly newsletter for people interested in online speech and building a friendlier, more civil web. It’s curated and produced by me, Ben Whitelaw.

Last week's newsletter was a popular one — almost half of you got round to reading it — with a particular interest in the fight to end gender-based online abuse. If you missed it, you can read it and 119 other editions in the archive.

Also, I’ll be announcing some exciting changes to EiM in August after my usual summer break. You'll hear more in the coming weeks but in the meantime, tell your friends, colleagues and contemporaries to subscribe — I promise it's in their interests.

Here's what you need to know this week — BW

📜 Policies - emerging speech regulation and legislation

India this week made Twitter legally responsible for users' posts, stripping it of its intermediary liability protection for failing to comply with new rules designed to make the big US social networks more accountable. It follows several incidents that the country's government said jeopardised "the sovereignty and integrity of India" (EiM #110) and constituted "manipulated media" (EiM #114).

Important context here: Twitter claimed it was trying to comply with the new rules and had already hired an interim grievance officer, but that person quit after just two weeks. Now it is recruiting for three senior roles — a Chief Compliance Officer, a Nodal Officer and a Resident Grievance Officer — suggesting the Indian market is too lucrative for it to miss out on.

Meanwhile, the European Commission has warned French officials that a proposed law (called the Avia Bill, after the MP who proposed it) may undermine the planned Digital Services Act and make it "more difficult to ensure that all Europeans benefit from a comparable level of effective protection online". A very European problem.

💡 Products - the features and functionality shaping speech

It seems a stretch to class Gettr — the new Twitter clone started by members of Donald Trump's former team — as a product and not just a jumble of code. But, if we put that to one side, there's an interesting lesson (via Casey Newton's Platformer) about the role moderation plays in the value proposition of each and every app:

But when you consider why these apps (Parler and Gettr) failed as quickly as they did, lax content moderation is surely among the biggest reasons. Most people will only spend so long in a virtual space in which they are surrounded by the worst of humanity.

Yep, even conservative, free-speech ones like Parler.

💬 Platforms - efforts to enforce company guidelines

New research published this week shows that Facebook's failure to pay attention to minority languages in Asia has allowed hate speech to flourish in the region. A team from the University of Queensland — funded by Facebook through the Content Policy Research initiative — found that moderators were not given training materials in their native language and recommended a hiring drive to increase the number of dialects its moderation teams cover.

It's been a while since I've had reason to mention YouTube's recommendation algorithm (EiM #60) but a new crowdsourced study led by Mozilla claims that the video platform still has a huge problem: 71% of the videos reported as harmful by users had been automatically recommended by the platform, and recommended videos were 40% more likely to be reported as harmful. Plus ça change…

Bonus read: I tweeted about it the other day but media professor Dr Siva Vaidhyanathan's Wired article on what happens if regulating Facebook fails is littered with nuggets. Go read it immediately.

👥 People - folks changing the future of moderation

I know that journalists aren't everyone's favourite people but, unless you're Giles Coren, few deserve the amount of abuse dished out to them. In fact, 92% of media lawyers in a new survey said abuse of journalists has increased in recent years, and it is far worse if you're a woman or from an ethnic minority.

That's why I was intrigued by Stephen Nolan's recent victory in court. Nolan, who presents a talk show for BBC Radio Ulster, won six-figure damages for defamatory statements made about him via an anonymous account. I don't know if it's a first but it's certainly rare.

The guilty individual was allowed to remain anonymous, but it doesn't sound like this will be the last case of its kind: Nolan said on Twitter that several other court cases were scheduled. Trolls, you have been warned.

🐦 Tweets of note

  • "I really hate to say this because I know I'll be proven wrong within a day" - assistant professor Jeff Kosseff has a hunch about Section 230.
  • "I kept thinking: how long until we hear about some secret exception or special circumstance" - Brennan Center counsel Ángel Díaz spots something in the latest Oversight Board judgement.
  • "Because 'audience' contains the word 'die' in it." - you can almost hear Dr Chris Gilliard, aka @hypervisible, hit his forehead on the table in this tweet.

Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.