Why is there discord on Discord?
I've been meaning to share this for a while: I have a Twitter list of moderation experts - policy folks, academics, authors, researchers, journalists - that I'm constantly updating. I use it almost every day, so maybe it's useful for you too? Feel free to subscribe.
A brief 'hey' to folks at Macrosco and the wonderfully-named Gearslutz who signed up in the last week. Don't misplace your money-back guarantee - I'm very strict on returns.
Thanks for reading - BW
PS If there are any academics/researchers who'd like to collaborate on this, get in touch.
Whose side are you on, community admin?
I'll be honest, I didn't know what a 'furry' was until this week. I certainly didn't have a clue what 'cub content' was. But now I understand both, courtesy of the controversy that has played out on Discord, essentially Skype for gamers, over the last two weeks.
The tl;dr (too long, didn't read) is that Discord angered a bunch of its users by inconsistently applying its community rules. Arguably, it's nothing that other tech companies haven't done before.
However, the reason it happened may interest people (as it did me) and points to the challenge of coming up with community rules to govern 19m users chatting about everything from science fan fiction to Wikitribune (the only room that I'm part of on Discord right now).
The whole episode boils down to two things: lolicon/shotacon (drawings of young girls/boys in suggestive scenarios) and cub play (sexualised imagery of anthropomorphised animal characters), both of which have communities on Discord. The only difference, as we found out two weeks ago, is that the former is against Discord's community guidelines and the latter isn't.
🚨 Interesting-but-not-essential background detail alert: this came to light in an email sent by a Discord admin to a user, which was then posted to Reddit. Another admin from the Discord Trust and Safety team was forced to jump onto the subreddit to justify why cub play, unlike loli drawings, was a grey area and why there wasn't a blanket ban. Cue loli fan anger, some dirt-digging and the discovery that at least one Discord admin on the Trust and Safety team is a 'furry'. In the end, in a blog post published on Wednesday, Discord banned all cub play and announced a quarterly transparency report on moderation decisions and outcomes. 🚨
What to take from all this? Well, it's clear that the process of creating community guidelines and policies has to change. Discord, as their blog post notes, have a robust process by current standards (research > writing > circulation > implementation), but even that wasn't enough to avoid the 'cub play' firestorm. Users need, and expect, to be consulted, and time and resources should be allocated to do so. The way that Civil, the blockchain-powered news community, solicited feedback on its Constitution comes to mind (full disclosure: Civil part-fund the programme I work on).
As part of that, the people who create the policy have to make it clearer where their allegiances lie, whether that's a political preference, a sports team or an internet sub-culture. It's that perceived secrecy (I don't believe anyone working on moderation policy teams does so for their own benefit) that Discord users objected to and that led to the outrage of the last two weeks.
It wasn't long ago that Discord was 'the chat app of the future', 'a breath of fresh air', and even the new Reddit. That narrative has shifted, not because of the tech (some very big Slack communities are transitioning across to Discord) but because of the processes and practices that underpin its community. A cautionary tale if ever there was one.
Facebook goes into Africa
Facebook has 139m monthly users in Africa. That's roughly an eighth of the continent's 1.2bn inhabitants (data from 2016). But until now, it has had no content moderation centre there (at least not in sub-Saharan Africa).
That will change this year, with 100 content moderators employed in Nairobi, Kenya, focusing on local languages including Swahili.
The question is: who was moderating content in these languages until now? The folks in Essen or Dublin? As with much of what Facebook does, the answer is: who knows?
A question of form
Moderation is often framed as human skill vs artificial intelligence scale. But what if the answer is somewhere between the two? The Verge has created a science fiction project about hope, and the main protagonist of one of its stories is Ami, an AI created by her human parents to moderate online communities. The Q&A with author Katherine Cross explains why she focused on moderation.
Not forgetting
Deciding what is eligible to appear online is difficult enough without celebrities throwing their weight into proceedings. Corinne Cath-Speth from the Oxford Internet Institute makes the point in a New Statesman article that this behind-closed-doors courting of the famous isn't conducive to a transparent, even application of the rules that everyone else must abide by.
Platform patricians and platform plebs: how social media favours the famous - NS Tech
In a town in Bavaria, Germany, a police inspector goes door to door to debunk Facebook misinformation, with considerable success. Cheaper than setting up another content review centre, eh?
When Facebook Spread Hate, One Cop Tried Something Unusual - The New York Times
With the social media company unresponsive, a police veteran in Germany is using shoe-leather detective work to combat online misinformation and hate.
The US Supreme Court has never before heard a case involving a school or college's decision to discipline a student for their speech. 24-year-old medical student Paul Hunt may change that.
College sanction for 'disrespectful' political speech by medical student on Facebook faces appeal
The latest legal skirmish over the ability of public universities to regulate what goes on outside campus
Interesting research project: Sabrina Ahmad spent a year at the Oxford Internet Institute interviewing Indian content moderators and executive leadership at Indian firms to see how culture affects moderation decisions.
Don't Blame Your Indian Content Moderator - Oxford Internet Institute
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.