Welcome to Everything in Moderation, your weekly newsletter about content moderation on the web, served up by me, Ben Whitelaw.
Let’s dive into what happened this week — BW
📜 Policies - company guidelines and speech regulation
Finally, there is some movement with the UK’s Online Harms Bill, the legislation designed to improve citizens’ online safety — with a special focus on protecting children — initiated back in 2017. On Wednesday during a parliamentary debate, a Government minister announced new timelines: a full response ‘in the next two months’ and legislation by early next year.
If this feels like it has been going on for a long time, it's because it has: the UK Government’s initial response was published in February with a full version intended for spring. Covid-19 naturally put paid to that and, since then, elected and non-elected representatives have been calling for more progress as time spent online, especially among children, has increased as a result of the pandemic. To make things worse, there is an abundance of bad media takes, and my worry is that the bill will be rushed through the parliamentary process to the detriment of everyone.
It’s not all bad in Brexitland though: the UK arm of Sky Sports this week announced that it will increase moderation of content on its own sites following a ‘surge’ in hate during the Covid-19 lockdown. Increased coverage for women’s football and a greater discussion of racism in sport prompted the spike but uncivil contributions aren’t confined to those two topics. Regular readers will remember that BBC Sport announced similar measures last month (covered in EiM #78).
💡 Products - features and functionality
‘Never read the comments’ has become a common internet refrain and even a meme. But a new Nielsen report commissioned by TikTok claims that its users do read the comments — 79% of those surveyed in a month-long study conducted during May and June — more often than they search hashtags or save sound clips. The report isn’t available in its entirety (just in blog summary form) and is designed to bolster the video-sharing platform's commercial credentials, so it should be taken with a pinch of salt. But it's interesting nonetheless.
This next piece was published back in August but I’m coming across it for the first time via Evan Hamilton’s super weekly newsletter for community managers: Ben Balter, senior product manager at Github, outlines seven trust and safety features to build into your product to avoid you and your users getting hurt. There are some obvious ones (blocking, reporting) and some I wouldn’t have thought of straight away (auditability). In short, it's a must-read piece.
Sidenote: You may remember I flagged another Github employee — Devon Zuegel — in a recent edition about the need for good ‘moduct’ managers. Based on the thoughtfulness of both Ben and Devon's writing, it’s fair to say Github have quite the team.
💬 Platforms - dominant digital platforms
The big announcement that you’ll no doubt have seen is that Facebook is removing all Pages, Profiles and Groups representing QAnon, the far-right child sex-trafficking conspiracy theory (if you're new to QAnon and want some more background, listen to this excellent Reply All podcast episode).
What I find interesting about this is not the timing (overdue plus US election) or the scope (rightly comprehensive) but the team at Facebook responsible for its enforcement: Dangerous Organizations Operations seems to be a brand new branch of its policy/community operations team and has no Google search results from before September this year. A quick snoop around shows it has been hiring for project manager jobs in California and Dublin over the summer, with a role currently open in Singapore. Expect to hear more from this team in the future.
Plenty of people tweeted wishes for Donald Trump's death before last week, but it has taken his contraction of Covid-19 for Twitter to clarify that such behaviour breaches its rules. Not only does that woefully ignore the experiences of people in minority groups on the platform, who receive such threats every day, but it is almost impossible to enforce, even if it focuses on instances with a high chance of 'real-world harm'. Jeez.
👥 People - those shaping the future of content moderation
His company might have missed out on TikTok but Satya Nadella is still in the content moderation game: Microsoft has an off-the-shelf product, a mostly friendly social network, a series of tools built into one of the world’s biggest gaming communities, and it even had high hopes for the civility of its streaming service before it was shuttered.
So it was noteworthy that the CEO of Microsoft called for social media reform during this week’s Wall Street Journal CEO Council, saying ‘Internet safety should be a top consideration’. He also made reference to the regulatory scrutiny that the automotive industry has faced over the decades, echoing a point that I made in an early edition of EiM (#19).
🐦 Tweets of note
- "Just spoke to NPR about what I’m now calling 'content moderation-washing'": the always excellent UCLA professor Sarah T Roberts outlines why new platform policies are just for show in this thread.
- "So far we've got the wild west internet trope and demands for mandatory spot fines, an offenders register, and mandatory ID verification to go online": UK tech policy expert Heather Burns doesn’t think much of the aforementioned parliamentary debate on the Online Harms Bill.
- "Without your persistence, Facebook would likely have never taken action against this clear danger": former exec director of NYC Media Lab Justin Hendrix praises the journalists and researchers on the QAnon beat.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.