📌 How to identify "disruptive behaviour", new GIFCT report and moderators in art
Hello and welcome to Everything in Moderation, your weekly rundown of the content moderation week that was. As always, it's written by me, Ben Whitelaw.
Greetings to new subscribers from Teleperformance, University College Dublin, Fathm, Tattle and elsewhere and thanks to Tyler and Justin for their kind words about last week's edition.
As 2021 draws to a close, there are two EiM mini announcements to make:
1. EiM is partnering with four other fantastic independent newsletters on an exciting new year predictions project/early warning system. It's called #Lookoutfor2022 and is designed to showcase smart takes about what 2022 will mean for the web. We've had dozens of inputs already from folks working across a bunch of different disciplines and I'd love for you to take part too. Find out how here (deadline is Monday).
2. This newsletter is now on Twitter! Why? For three reasons: i) to share links I'm reading that might not make it into the newsletter, ii) to give you an early warning on Q&As and analysis written for the site (read on for more) and iii) to (hopefully) have fresh conversations about online speech and internet safety in a way that's hard to do from my other account. I hope you'll say hi there.
I'll be back on 31st December with the final EiM of the year. Enjoy the next two weeks and stay safe — BW
📜 Policies - emerging speech regulation and legislation
A group of UK parliamentarians who have been scrutinising the Online Safety Bill for over a year this week published their final report. TechCrunch has a comprehensive story on what it included, but notable recommendations include expanding the Bill to cover scams and fraud from paid ads, as well as tightening the definition of “harmful” content so it covers only content defined as illegal in law. Longtime EiM reader Heather Burns has her usual brilliant readout and you should also check out Dr Edina Harbinja's thread too. Ministers have two months to respond before a vote in the House of Commons around March 2022.
Four NGOs have been in a French court of appeal this week arguing that Twitter should disclose detailed information about internal moderation processes, including the number, location, nationality and language of the people in charge of processing French content flagged on the platform. Twitter — which Politico calls a "poster child for hate speech online" in France — lost the initial case in July but decided to appeal.
The independent-but-Meta-funded Oversight Board this week found that Facebook's decision to remove and then reinstate a post about Ethiopia's civil war was wrong and that it must be removed again for violating its violence and incitement standard. The judgement goes on to say, in words that are not vague or abstract, that posts like it have the potential effect of "heightening risks of near-term violence." Crikey.
Here's one I've not read yet but I'll be poring over this festive period: Global Internet Forum to Counter Terrorism (GIFCT) published its first annual report this week. The NGO — which now has 18 members — announced video conference tool and pandemic winner Zoom as its newest recruit as well as plans for membership tiers in 2022.
💡 Products - the features and functionality shaping speech
PlayStation looks to be moving into the proactive moderation space after filing a patent to automatically detect and avoid "disruptive behaviour". Game Rant reports that the "Behaviour Score" system will give marks for positive interactions and will use demographic data to work out if players should be playing together. Interesting.
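To make the idea concrete, here's a minimal sketch of how a behaviour-score matchmaking gate might work. The names, weights and threshold below are my own assumptions for illustration; the patent's actual mechanics (including how it would use demographic data) aren't public in this detail.

```python
from dataclasses import dataclass

# Hypothetical sketch of a "Behaviour Score" matchmaking gate.
# Scale, weights and threshold are illustrative assumptions only.

@dataclass
class Player:
    player_id: str
    behaviour_score: float = 50.0  # assumed 0-100 scale, neutral start

    def record_interaction(self, positive: bool) -> None:
        # Positive interactions earn marks; disruptive ones cost more.
        self.behaviour_score += 1.0 if positive else -2.0
        self.behaviour_score = max(0.0, min(100.0, self.behaviour_score))

def should_match(a: Player, b: Player, max_gap: float = 20.0) -> bool:
    """Only pair players whose behaviour scores are reasonably close."""
    return abs(a.behaviour_score - b.behaviour_score) <= max_gap
```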
Not one, not two but three big articles published in the tech press this week on the inherent challenge of ensuring users are safe using AR/VR (aka the metaverse). The worst part about it all? You might bump into Nick Clegg.
Quicker options for hiding comments (or bozoing as it used to be known), improved keyword blocking and an automated moderation tool called Moderation Assist were some of the improvements announced this week by Facebook. About time too.
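For flavour, keyword blocking itself is a simple idea at heart. A minimal sketch, assuming a plain blocklist; Facebook's actual implementation, and Moderation Assist, are considerably more sophisticated and not public:

```python
import re

# Illustrative only: a toy filter of the kind "hide comments containing
# blocked terms" features are built on. The blocklist and matching
# rules here are assumptions, not Facebook's.
BLOCKLIST = {"buy followers", "spamword"}

def should_hide(comment: str) -> bool:
    text = comment.lower()
    words = set(re.findall(r"[\w']+", text))
    # Substring match for multi-word entries, whole-word match otherwise.
    return any(term in text if " " in term else term in words
               for term in BLOCKLIST)
```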
💬 Platforms - efforts to enforce company guidelines
Users on TikTok are using a "mimetic format" to game the platform's moderation algorithms, according to a study reported on by Insider this week. Media Matters found that right-wing users paired a line from terrorist Ted Kaczynski's manifesto with images of LGBTQ+ TikTokers to imply that they were, in his words, a "disaster for the human race". A reminder, once again, that users are always one step ahead of the technology.
A bit of positive news for Twitch users, especially if you've been following the hate raids storyline (EiM #130): the live video platform has hired former YouTube and Facebook exec Kendra Desrosiers to "advocate for underrepresented creators and develop programs to highlight their voices". It's not much but it's something.
The Washington Post reports that Twitter Spaces has become a home for Taliban supporters, white nationalists, and anti-vaccine activists and is a "dumpster fire", according to a former Spaces team member. Well, we couldn't see that coming, could we?
You might not have realised it was gone but the infamous social media platform Yik Yak is available to download once more, this time with better moderation. New York-based men's mag InsideHook has done a good backgrounder on what went wrong last time (I'd forgotten how bad it was). My read of the week.
👥 People - folks changing the future of moderation
What is the physical impact of being in an online space? And what does it feel and sound like to be party to harm when it happens?
That's what Gareth Kennedy and Sarah Browne explore in their new Real World Harm project currently at the Glucksman Gallery at University College Cork and reviewed last week by the Irish Examiner.
The artists use interviews with former content moderators to document the moderators' trauma and pair their words with moving images of "locations we identify as significant within the plot of global capitalism - such as the Titanic Quarter in Belfast, or Silicon Valley". Powerful stuff.
As I've said here many times, the testimonies of moderators are a crucial lens through which we can learn so much about the moderation processes that currently exist and how we can improve them. Perhaps it's time for a trip to Cork.
🐦 Tweets of note
- "Missed the Deplatforming Sex roundtable?" - I did when it took place back in October so glad to have this reminder from Canadian comms professor Stefanie Duguay.
- "what you *can’t* do is claim it does not impact on freedom of speech. That’s the whole point: to stop some speech" - Tech law professor Paul Bernal reacts to the Online Safety Bill parliamentary report this week.
- "“Look, we don’t *produce* the toxic waste, we just amplify and distribute it” says exec at toxic waste factory" - Snowboarding professor Chris Gilliard aka hypervisible with the perfect Guy Rosen takedown.