I’ve been meaning to share this for a while: I have a Twitter list of moderation experts — policy folks, academics, authors, researchers, journalists — that I’m constantly updating. I use it almost every day, so maybe it’s useful for you too? Feel free to subscribe.
Thanks for reading — BW
PS If there are any academics or researchers who’d like to collaborate on this, get in touch.
Whose side are you on, community admin?
I’ll be honest, I didn’t know what a ‘furry’ was until this week. I certainly didn’t have a clue what ‘cub content’ was. But now I understand both, courtesy of the controversy that has played out on Discord, essentially Skype for gamers, over the last two weeks.
The tl;dr (too long, didn’t read) is that Discord angered a bunch of its users by inconsistently applying its community rules. Arguably, it’s nothing that other tech companies haven’t done before.
However, the reason why it happened may be interesting to people (as it was to me) and points to the challenges of coming up with community rules that govern 19m users chatting about everything from science fan fiction to Wikitribune (the only room that I’m part of on Discord right now).
The whole episode boils down to two things: lolicon/shotacon (drawings of young girls/boys in suggestive scenarios) and cub play (sexualised imagery of anthropomorphised animal characters), both of which have communities on Discord. The only difference, as we found out two weeks ago, is that the former is against Discord community guidelines and the latter isn’t.
🚨 Interesting-but-not-essential background detail alert: this came to light in an email sent by a Discord admin to a user, which was then posted to Reddit. Another admin from the Discord Trust and Safety team was forced to jump on the subreddit to justify why cub play, unlike loli drawings, was a grey area and why there wasn’t a blanket ban. Cue loli fan anger, some digging of dirt and the discovery that at least one Discord admin on the Trust and Safety team is a ‘furry’. In the end, in a blog post published on Wednesday, Discord banned all cub play and announced a quarterly transparency report on moderation decisions and outcomes. 🚨
What to take from all this? Well, it’s clear that the process of creating community guidelines and policies has to change. Discord, as their blog notes, have a robust process by current standards (research > writing > circulation > implementation) but even that wasn’t enough to avoid the ‘cub play’ firestorm. Users need, and expect, to be consulted. Time and resources should be allocated to do so. The way that Civil, the blockchain-powered news community, solicited feedback on their Constitution comes to mind (Full disclosure: Civil part-fund the programme I work on).
As part of that, the people who create the policy have to make it clearer where their allegiances lie, whether that’s their political preference, sports team or internet sub-culture. It’s that perceived secrecy (I don’t believe anyone working in moderation policy teams does anything for their own benefit) that Discord users objected to and that led to the outrage of the last two weeks.
It wasn’t long ago that Discord was ‘the chat app of the future’, ‘a breath of fresh air’, and even the new Reddit. That narrative has shifted, not because of the tech (some very big Slack communities are transitioning across to Discord) but because of the processes and practices that underpin its community. A cautionary tale if ever there was one.
Facebook goes into Africa
Facebook has 139m monthly users in Africa. That’s roughly an eighth of the continent’s 1.2bn inhabitants (data from 2016). But until now, it’s had no content moderation centre (at least in sub-Saharan Africa).
That will change this year, with 100 content moderators being employed in Nairobi, Kenya, to focus on local languages including Swahili.
The question is: who was moderating content in these languages until now? The folks in Essen or Dublin? As with much of what Facebook does, the answer is: who knows?
A question of form
Moderation has been pitted as a question of human skill vs artificial intelligence scale. But what if the answer is somewhere between the two? The Verge has created a science fiction project about hope, and one of the stories’ main protagonists is Ami, an AI created by her human parents to moderate online communities. The Q&A with author Katherine Cross explains why she focused on moderation.
Deciding what is eligible to appear online is difficult enough without having celebrities throw their weight into proceedings. Corinne Cath-Speth from the Oxford Internet Institute makes the point in a New Statesman article that the behind-closed-doors courting isn’t conducive to a more transparent or even application of the rules that everyone else must abide by.
In a town in Bavaria in Germany, a police inspector goes door to door to debunk Facebook misinformation, with considerable success. Cheaper than setting up another content review centre, eh?
With the social media company unresponsive, a police veteran in Germany is using shoe-leather detective work to combat online misinformation and hate.
There has never before been a case in the US Supreme Court involving a school or college’s decision to discipline a student over their speech off campus. 24-year-old medical student Paul Hunt may change that.
The latest legal skirmish over the ability of public universities to regulate what goes on outside campus
Interesting research project: Sabrina Ahmad spent a year at the Oxford Internet Institute interviewing Indian content moderators and executive leadership at Indian firms to see how culture affects moderation decisions.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.