
📌 The end of ‘just-in-time’ moderation?

The week in content moderation - edition #57

Hello to new subscribers from Stichting Democratie en Media and the Dangerous Speech Project. If the concept of days still exists for many of you self-isolating, happy Friday.

It feels like we’re seeing more stories than ever about content moderation and my various Google Alerts are certainly longer than a few months ago. People’s natural interest in COVID-19 is obviously part of the spike but I also wonder if regular folks (and journalists) are beginning to realise that there's a public benefit to knowing more about what goes online and who decides what stays up.

As ever, I’ve rounded up the best of those stories in today’s EiM.

Stay safe and thanks for reading – BW

PS Calling people is cool again, apparently. Feel free to say hi.


🍽 Empty shelves, everywhere

We’re all familiar with the food supply chains that have been disrupted by COVID-19 (see this thread on UK supermarkets) and we’ve all seen the accusations of stockpiling levelled at those buying more than four toilet rolls. We’ve seen the pictures of empty shelves circulating on social media and the anger it creates among customers when stores can’t keep up.

Well, the same thing is happening with content.

As Sarah T Roberts, assistant professor of information studies at UCLA, explains in this Flow Journal essay, outsourced commercial content moderation essentially operates under the same model as the shops that have had such a hard time keeping people fed. She explains:

The model is one of just-in-time, in which all aspects of the process, from putting up a site to hiring in workers to the actual moderation itself, takes place as quickly and as “leanly” as possible, particularly for functions such as content moderation that are seen as a “cost center” rather than a “value-add” site of revenue generation.

The virus-related shutdown of the Philippines, where a large amount of content moderation takes place, has caused what Roberts calls a ‘disruption in social media’s production chain’ and what we might liken to empty shelves. Moderators have been sent home (see last week’s EiM) and AI has been tasked with filling in the gaps, which will likely result in posts being removed without good reason.

Obviously, people care more about food than about what they publish online but, frankly, it’s a close-run thing between the two. Whether it will lead to questions (and fights) about the 'just-in-time' moderation model, we'll have to wait and see.

💉 Underlying assumptions

NYT tech reporter Mike Isaac this week posed what I think is a valid question: do the differing moderation approaches to COVID-19 (strict) and anti-vaxx content (less so) suggest that the latter is in some way valid?

If so, who is making that call about legitimacy? The platforms? All of us?

🏥 Public health platforms? (week 3)

We’re getting into the analysis phase of COVID-19 x content moderation stories.

  • Medium removed a viral post that contained unsubstantiated health claims and subsequently issued new guidance on its content policy.
  • Some credit must be given to tech platforms for greater and more proactive transparency, writes Evelyn Douek on Lawfare Blog, but they also must be reined in.
  • Wired look at the work-from-home mod movement and its implications for online speech.
  • Protocol, the new site from the founders of Politico, do a similar overview but with an interesting note about Whole Post Integrity Embeddings, a new signal that allows Facebook's systems to moderate image and text in the context of one another (see the sketch below this list).
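For the technically curious, here’s a minimal sketch of the general idea: classify a post’s text and image together rather than separately, so a benign-looking caption can still be flagged in the context of a harmful image (and vice versa). To be clear, this is an illustration only, not Facebook’s actual architecture; the class name, dimensions and concatenation-based fusion are all hypothetical placeholders.

```python
# Toy sketch of whole-post (multimodal) moderation: fuse text and image
# embeddings and make a single classification call on the combined post.
# Assumptions: embeddings come from pretrained encoders elsewhere; the
# dimensions and two-class output here are illustrative, not Facebook's.
import torch
import torch.nn as nn

class WholePostClassifier(nn.Module):
    def __init__(self, text_dim=768, image_dim=512, hidden=256):
        super().__init__()
        # Fuse both modalities, then classify the post as a whole.
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # 2 classes: allow / flag for review
        )

    def forward(self, text_emb, image_emb):
        # Concatenation is the simplest possible fusion; the key point is
        # that the classifier sees text and image in context of one another.
        return self.fuse(torch.cat([text_emb, image_emb], dim=-1))

# Usage with dummy embeddings standing in for real encoder outputs:
model = WholePostClassifier()
logits = model(torch.randn(1, 768), torch.randn(1, 512))
```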

⏳ Not forgetting...

Twitch banned a female Swedish streamer called Swebliss, reportedly for wearing inappropriate clothing, a charge she denies.

Streamer accuses Twitch of sexism after ban for her clothing | Dexerto.com

Fashion and Art streamer Swebliss was banned over her clothing on stream, and has called the decision "discriminating."

Facebook is on the verge of settling with moderators working in California, Arizona, Texas, and Florida who developed PTSD after removing disturbing content.

Facebook is nearing a settlement with its content moderators in a class action lawsuit - The Verge

The company and attorneys for the moderators have reached a settlement in principle, but it must be approved by a judge. Selena Scola sued Facebook in 2018.

Nice Q&A with Zohar Levkovitz, CEO of L1ght, which uses machine learning to predict toxicity in comments.

Q&A: The startup that's using AI to protect children online

Many parents worry about what their children might stumble across online. L1ght is an anti-toxicity startup using AI to detect and filter harmful online content to protect children. We talk with the company's CEO, Zohar Levkovitz.

Spam, revenge porn and sexual content relating to minors were the main reasons behind Discord banning 5.2m accounts between April and December last year.

Discord says it’s banning millions of accounts to tackle spam - The Verge

Discord banned 5.2 million accounts between April and December last year, the company revealed today in its second transparency report.

Facebook banned 36 Facebook accounts, 10 Instagram accounts, 9 pages and 9 private groups related to a white supremacist group called Northwest Front.

Facebook Removes Network of White Supremacist Accounts

(Bloomberg) -- Facebook Inc. has removed dozens of user accounts plus other Pages and Groups on its social network associated with the Northwest Front.


Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.