Welcome to Everything in Moderation, your weekly newsletter about content moderation on the web, carefully stitched together by me, Ben Whitelaw.
This week, I've been working on an idea called 'Meet the moderators', a series of interviews with the people who keep online spaces safe for us all. I'm excited about it (mods get a bad rap and I want to help change that) but I'll need your help to make it a reality. I'll share more in next week's newsletter about what you can do, so stay tuned.
Back to this week's moderation news... — BW
📜 Policies - company guidelines and speech regulation
A new academic paper has provided an answer to a question I've wondered about for a while: how similar are the community guidelines of social media platforms? The answer: not very.
By categorising 66 types of content violation across 11 platforms, researchers at the University of Colorado found 'significant variability' between policies. Facebook, which has the most extensive guidelines, covered all 66 violation types while Discord covered only 18. Six of the 11 networks addressed between 48 and 56 violations.
The paper is worth reading in full but let me first share my favourite nugget: only two platforms — Facebook and LinkedIn — specifically prohibit celebrating or promoting one's own crime. Worth remembering the next time you get a speeding ticket.
Twitter was in court in France this week as four French NGOs accused the platform of refusing to provide information on its moderation practices. Avia's Law (introduced in March 2019 and covered in EiM #29) states that platforms in France must delete illegal content when it is reported by users, as well as reveal how many moderators they employ, where those moderators work and what training they receive. Twitter has refused to do so, most likely because the headlines write themselves ('xx moderators for 67 million people! Sacrebleu!').
In what could justifiably be described as 'keeping up with the (Alex) Joneses', YouTube this week followed Twitter and Facebook in clamping down on QAnon content, albeit not as strictly as its platform counterparts. In a blog post, the video platform announced it would prohibit conspiracy content that 'targets an individual' or accuses them of being complicit in the debunked #pizzagate theory. Good news for Tom Hanks, Lady Gaga and Chrissy Teigen; not sure it means a lot for the rest of us.
💡 Products - features and functionality
Intel is developing a 'temperature tracking mechanism' that allows a gamer or streamer to mute other users if they cross a pre-set conversational threshold. The tool, revealed in this article, is part of an R&D push 'to give players user-facing tools that let them control the type of content that they encounter', although the company hasn't committed to bringing any products to market. If you were reading EiM back in April, you'll remember that Twitch announced a suite of similar tools (albeit ones that don't use AI models) to allow streamers to remove users from their chat and follower lists.
While on the topic of AI in moderation, Built In (the tech recruitment platform) recently published a good overview of the challenges of moderating audio communities. Professor Yvette Wohn (one of the 130+ experts on my moderation Twitter list) has some smart things to say about how AI can help moderators make more consistent decisions.
Finally, one that I missed last week: Andrew Losowsky, Head of Coral Project at Vox Media, has written a super smart guide for news publications about how they can manage users' comments in the period immediately after the US election. A must-read for any audience-focused folks.
💬 Platforms - dominant digital platforms
TikTok has been given a reprieve in Pakistan after it promised the country's regulators that it would moderate content according to local laws and block accounts 'involved in spreading obscenity and immorality'. In 2016, politicians passed an act that empowered the Pakistan Telecommunication Authority to regulate online content if it threatened the country's security or jeopardised 'the glory of Islam'. TikTok was banned for 10 days in total.
Finally, these three roles at Reddit — Product Manager for Community Safety, Senior Community Relations Specialist and Anti-Evil Operations Specialist — suggest that the platform is getting over the great moderator revolt of 2020 and really does care about online safety after all (EiM #81). Nice to see this kind of investment.
👥 People - those shaping the future of content moderation
Ron Guilmette is a security researcher, not a content moderator, but this week his role was not dissimilar. On Sunday, he made a call to CNServers, which provides DDoS protection to websites including 254 QAnon and 8chan-related domains. Just as Cloudflare did in August 2019, CNServers withdrew its protection, bringing the sites down (NB: they are back up now, as Ars Technica explains). It's another reminder that internet infrastructure is increasingly part of the spectrum of content moderation, whether we like it or not.
🐦 Tweets of note
- "They're asking us to edit the article and not speak publicly about internal content reviews" - The CEO of The Babylon Bee, a satirical news site, lifts the lid on how Facebook flagged its page after an article containing a Monty Python joke was deemed to be 'inciting violence'.
- "This beat is the Hotel California, isn’t it?" - Issie Lapowsky, senior reporter at Protocol, on being on holiday but unable to get away from content moderation.
- "Another day of reporters doing content moderation for multibillion-dollar corporations" - BuzzFeed senior tech reporter Ryan Mac reacts after colleague Christopher Miller found and flagged a number of Facebook pages attempting to influence the US election.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.