
📌 AI moderation? Not for Twitch

The week in content moderation - edition #58

Hello everyone and first of all, an apology: a gremlin during the production of last week's EiM means you may have received an unformatted and unreadable version. I’m really sorry. You can still read it here and I promise to be more careful before I hit send.

One thing I thought about this morning: in the time since I sent last week’s EiM, a whole hospital with 4,000 beds has been built in a warehouse in London. We humans are great when we put our minds to something.

Stay safe and thanks for reading — BW


💜 Twitch out on its own

You will have read all about the big social media platforms’ move to an AI-driven content moderation model, but there’s seemingly one exception.

Twitch this week announced that it was looking for two Safety Operations Reviewers (one in the US and one in the UK) to ‘evaluate and act on user reports of Twitch policy violations’ and maintain policies and internal documentation. The job description clearly states that the role is a remote one.

(Sidenote: If anyone can explain the data/privacy policies that allow Twitch mods to work from home but not Facebook contractors, please drop me a line).

Twitch’s scale is clearly a factor in this recruitment (it can arguably scale up its team because it has 15m daily active users compared to Twitter’s 140m and Facebook’s 1.6bn, according to Business of Apps), as is its focus on video streaming, which is notoriously hard to moderate.

But that doesn’t necessarily explain new measures, also announced this week, that allow creators to block users from their chats and remove them from their Follower lists (a common problem, I’m told).

These two initiatives, seen together and in the context of previous pronouncements by CEO Emmett Shear, demonstrate a push for people, not just AI, to be making decisions on the platform.

Long may that continue.

🏥 Public service platforms? (Week 4)

The fallout of the COVID-19 crisis continues, to the point where I'm tempted to rename the newsletter 'Everything in Coronavirus'. Anyway, here are the developments this week:

🕦 Not forgetting...

Snopes, the fact-checking site that has been inundated with requests since the start of COVID-19, has said that Facebook was only prepared to pay ‘nominal sums’ for its moderation work.

One of the internet's oldest fact-checking organizations is overwhelmed by coronavirus misinformation

Business Insider - Snopes is fighting “the deadliest information crisis we might ever have,” as it struggles to keep up with the high demand for coronavirus answers.

Mike Masnick at Techdirt does a great job of keeping an eye on US lawsuits about moderation, and this one is particularly mad...

Anti-Vaxxer Sues Facebook, In The Middle Of A Pandemic, For 'In Excess' Of $5 Billion For Shutting Down His Account | Techdirt

I've seen an increasing amount of commentary on the EARN IT Act in the US. I'll likely take a closer look in a future edition of EiM, but this is a good summary for now.

How the tech industry will have to step up to fight online toxicity and child abuse | VentureBeat

When it comes to fighting online toxicity and sexual abuse of children, Two Hat Security has figured out how to automate some of the task.

The fundamental rights of citizens in India are at risk from the uneven application of hate speech rules, according to this blog post by the Centre for Law and Policy Research.

Content Moderation by Private Platforms: Can Fundamental Rights be Invoked?

Can fundamental rights be invoked in case of content moderation by private platforms?


Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.