📖 Reading the comments, QAnon gets whacked and Satya speaks
Welcome to Everything in Moderation, your weekly newsletter about content moderation on the web, served up by me, Ben Whitelaw.
Greetings to new subscribers from Balls.ie, CNN and the University of Sheffield. Glad to have you here; you're among excellent company.
Let's dive into what happened this week – BW
📜 Policies - company guidelines and speech regulation
Finally, there is some movement on the UK's Online Harms Bill, the legislation designed to improve citizens' online safety, with a special focus on protecting children, first initiated back in 2017. During a parliamentary debate on Wednesday, a Government minister announced new timelines: a full response "in the next two months" and legislation by early next year.
If it feels like this has been going on for a long time, that's because it has: the UK Government's initial response was published in February, with a full version intended for spring. Covid-19 naturally put paid to that and, since then, elected and non-elected representatives have been calling for more progress as time spent online, especially among children, has increased as a result of the pandemic. To make things worse, there is an abundance of bad media takes, and my worry is that the Bill will be rushed through the parliamentary process to the detriment of everyone.
It's not all bad in Brexitland, though: the UK arm of Sky Sports announced this week that it will increase moderation of content on its own sites following a "surge" in hate during the Covid-19 lockdown. Increased coverage of women's football and greater discussion of racism in sport prompted the spike, but uncivil contributions aren't confined to those two topics. Regular readers will remember that BBC Sport announced similar measures last month (covered in EiM #78).
💡 Products - features and functionality
"Never read the comments" has become a common internet refrain and even a meme. But a new Nielsen report commissioned by TikTok claims that its users do read the comments: 79% of those surveyed in a month-long piece of research during May and June, more than search hashtags or save sound clips. The report isn't available in its entirety (just in blog summary form) and is designed to bolster the video-sharing platform's commercial credentials, so it should be taken with a pinch of salt. But it's interesting nonetheless.
This next piece was published back in August but I'm coming across it for the first time via Evan Hamilton's super weekly newsletter for community managers: Ben Balter, senior product manager at GitHub, outlines seven trust and safety features to build into your product to avoid you and your users getting hurt. There are some obvious ones (blocking, reporting) and some I wouldn't have thought about straight away (auditability). In short, it's a must-read piece; a rough sketch of what those primitives might look like in code follows the sidenote below.
Sidenote: You may remember I flagged another GitHub employee, Devon Zuegel, in a recent edition about the need for good "moduct" managers. Based on the thoughtfulness of both Ben's and Devon's writing, it's fair to say GitHub has quite the team.
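To make the obvious features concrete, and to show why auditability is the easy one to forget, here's a minimal sketch of what blocking, reporting and an audit log might look like as data structures. To be clear, this is my own hypothetical TypeScript, not GitHub's implementation or anything lifted from Ben's post:

```typescript
// Illustrative trust & safety primitives: blocking, reporting and
// an audit trail. All names are hypothetical; this is not GitHub's
// API nor Ben Balter's exact design.

interface Block {
  blockerId: string;  // user doing the blocking
  blockedId: string;  // user being blocked
  createdAt: Date;
}

interface Report {
  reporterId: string;
  contentId: string;                    // the item being flagged
  reason: "spam" | "abuse" | "other";
  details?: string;
}

interface AuditEvent {
  actorId: string;   // who acted (user, moderator or system)
  action: string;    // e.g. "user.blocked", "report.created"
  targetId: string;
  at: Date;
}

const blocks: Block[] = [];
const reports: Report[] = [];
const auditLog: AuditEvent[] = [];

// Record a block and leave an audit trail so the action can be
// reviewed (or appealed) later.
function blockUser(blockerId: string, blockedId: string): void {
  blocks.push({ blockerId, blockedId, createdAt: new Date() });
  auditLog.push({ actorId: blockerId, action: "user.blocked", targetId: blockedId, at: new Date() });
}

// File a report and log it; reports without an audit trail are
// easy to lose and impossible to review after the fact.
function reportContent(report: Report): void {
  reports.push(report);
  auditLog.push({ actorId: report.reporterId, action: "report.created", targetId: report.contentId, at: new Date() });
}

// Content from blocked users should be filtered out at read time.
function isBlocked(viewerId: string, authorId: string): boolean {
  return blocks.some(b => b.blockerId === viewerId && b.blockedId === authorId);
}
```

The design point is the audit log: blocking and reporting are the user-facing features, but it's the record of every action that lets you (and your users) scrutinise moderation decisions later, which is exactly why it's overlooked until you need it.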
💬 Platforms - dominant digital platforms
The big announcement that you'll no doubt have seen: Facebook is removing all Pages, Profiles and Groups representing QAnon, the far-right child sex-trafficking conspiracy theory. (If you're new to QAnon and want some more background, listen to this excellent Reply All podcast episode.)
What I find interesting about this is not the timing (overdue, plus a US election) or the scope (rightly comprehensive) but the team at Facebook responsible for its enforcement: Dangerous Organizations Operations seems to be a brand-new branch of its policy/community operations team, with no Google search results before September this year. A quick snoop around shows it has been hiring for project manager jobs in California and Dublin over the summer, with a role currently open in Singapore. Expect to hear more from this team in the future.
There were plenty of people tweeting that they wished Donald Trump would die before last week, but it has taken his contraction of Covid-19 for Twitter to clarify that such behaviour breaches its rules. Not only does that woefully ignore the experiences of people in minority groups on the platform, who receive such threats every day, but the rule is almost impossible to enforce, even if it focuses on instances with a high chance of "real-world harm". Jeez.
👥 People - those shaping the future of content moderation
His company might have missed out on TikTok but Satya Nadella is still in the content moderation game: Microsoft has an off-the-shelf moderation product, a mostly friendly social network, a series of tools built into one of the world's biggest gaming communities, and it even had high hopes for the civility of its streaming service before it was shuttered.
So it was noteworthy that the Microsoft CEO called for social media reform during this week's Wall Street Journal CEO Council, saying "internet safety should be a top consideration". He also made reference to the regulatory scrutiny that the automotive industry has faced over the decades, echoing a point I made in an early edition of EiM (#19).
🐦 Tweets of note
- "Just spoke to NPR about what Iâm now calling âcontent moderation-washingâ: The always excellent UCLA professor Sarah T Roberts outlines why new platform policies are just for show in this thread.
- "So far we've got the wild west internet trope and demands for mandatory spot fines, an offenders register, and mandatory ID verification to go onlineâ: UK tech policy expert Heather Burns doesnât think much of the aforementioned parliamentary debate on the Online Harms Bill.
- "Without your persistence, Facebook would likely have never taken action against this clear danger": former exec director of NYC Media Lab Justin Hendrix praises the journalists and researchers on the QAnon beat.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.