📌 Downvoting as moderation, TikTok's new LGBTQ rules and the UK ups the regulation ante
Hello and welcome to Everything in Moderation, your online safety and content moderation week-in-review. It's written by me, Ben Whitelaw.
Greetings to new subscribers from Logically, Yale, Open Society Foundations, Twitter, Ofcom, Microsoft and elsewhere. Let me know how you found out about EiM and what made you subscribe.
This week on EiM, I spoke to Professor Lilian Edwards about dealing with 'uncertain' scientific information for the latest Viewpoint Q&A. She had a lot to say about takedowns and the economic incentives that drive how platforms moderate content. It's a reminder of the wider forces at play in decisions about what stays up and what comes down.
Here's what you need to know from the last seven days — BW
📜 Policies - emerging speech regulation and legislation
Platforms operating in the UK will have to proactively tackle a host of extra "priority offences" following an amendment to the Online Safety Bill announced on Monday. The 11 new offences include hate crime, drug-related offences, fraud and financial crime, and incitement and threats of violence, and their inclusion means that Ofcom, the Bill's regulator, can sanction companies that fail to remove such content. It marks a significant shift from the original list of offences (terrorism and child sexual abuse material) and likely means huge swathes of content will be caught in the crossfire when the Bill is introduced. Demos researcher Ellen Judson has a good thread on what it means.
The independent-but-Facebook-funded-Oversight Board™ has issued a policy advisory that, if enacted, would make the sharing of private residential information against its guidelines in all cases. The newly published opinion, which was sought by Facebook last year, marks a shift towards protecting users' privacy; until now, sharing residential information was acceptable if it was deemed publicly available (ie had been shared by five news outlets or was accessible via court records). Let's see where this one goes.
Bangladesh has become a hotbed of religious tension and political violence as a result of hate speech on Facebook, according to this report from Foreign Policy. Much like the platform-inspired troubles of nearby India (EiM #146 and others) and Pakistan (EiM #138 and others), language barriers and a lack of bandwidth among moderation teams have led to the storming of Hindu temples and the displacement of other religious minorities. The expansion of Facebook's Bengali-speaking team and the appointment of a public policy manager in 2020 have done little to help, either. My read of the week.
💡 Products - the features and functionality shaping speech
TwoHat has released a Discord plugin to allow moderators on the chat platform to "securely classify, automatically process, or escalate content from the channel to your moderation team", according to a company email. The tool allows mods to set different rules for different Discord servers and is the first big release from TwoHat since it was acquired by Microsoft last year (EiM #135). It might also pave the way for fresh Discord acquisition talks, after negotiations between the messaging platform and Microsoft cooled in April 2021.
After announcing it back in July, Twitter has started rolling out downvoting on replies to users. The tweet's author won't know you've downvoted — mirroring YouTube's recent move to hide the dislike count (EiM #136) — nor will you see downvotes on your own tweets. But, according to the company, downvoting will reduce the likelihood of seeing similar tweets. What "similar" means (Fewer posts from the author? About the topic? Heavily engaged with?) is not clear.
Have you written or launched something that EiM subscribers would find interesting or useful to their work? Get in touch about your own Viewpoint.
💬 Platforms - efforts to enforce company guidelines
TikTok has announced a number of changes to its community guidelines, including banning conversion therapy and deadnaming — the use of a transgender person's former name without consent — and adopting a stricter approach to dangerous acts and challenges. It is a belated change — it's been almost a year since the video platform was deemed "effectively unsafe for LGBTQ users" — but, then again, it wasn't very long ago that TikTok's guidelines recommended screening users for its 'For You' page based on whether they were unattractive, poor or otherwise undesirable (EiM #56).
The company also announced a UK partnership with Logically.ai to help "determine whether content shared on the platform is false, misleading or misinformation." Feels like a platform that's finally growing up.
Coinbase, the cryptocurrency exchange platform, has unveiled its policies for managing users and content. In a blog post grandly titled "Coinbase's Philosophy on Account Removal and Content Moderation", CEO Brian Armstrong outlines the differing approaches for its infrastructure (eg Coinbase Cloud) and public-facing (eg Coinbase Earn) products ahead of its expansion into NFTs and other offerings. That's all well and good but a quick look shows that there's no mention of any such 'philosophy' in Coinbase's terms and conditions or ethics policy. ¯\_(ツ)_/¯
Facebook and Instagram have expanded their terrorism counterspeech initiative into the UK and Pakistan, after finding success replacing terrorism-related search results with anti-terrorism resources. The Redirect Initiative was initially launched in the US and Australia in May 2019 before being extended to Indonesia and Germany.
👥 People - folks changing the future of moderation
A sad story coming out of China reminds us of the toll of working in content moderation.
A 25-year-old working as an online content auditor for video streaming site Bilibili died after reportedly working a 12-hour shift on Chinese New Year. He suffered a cerebral haemorrhage, according to the South China Morning Post.
The death has provoked an online outcry among China's internet users about the culture of overwork that persists in China and across many internet-related, task-based jobs (I recommend Phil Jones' Work without the Worker on this topic).
The company, which has 270m monthly active users, denied that overwork was the cause but has since promised to hire 1,000 moderators to "reduce their average workload". However, a statement also made clear that "Like other public services, content security can’t stop even during the Chinese New Year holiday". Doesn't sound particularly sorry, does it?
🐦 Tweets of note
- "The problem with unmoderated online spaces is that a few people will always ruin them" - Wharton professor Ethan Mollick shares data from a series of papers in this viral thread (thanks Max for sharing).
- "The way it's done inevitably harms sex workers, censors marginalized sexualities, & fails to foster the kind of sexual ethics it supposedly aims to enforce" - Valerie Webber outlines the findings of a new report into the effect of Mastercard's adult content policy on adult creators.
- "No information on or care paid to how Facebook is building norms and environments to facilitate safety" - Our good friends over at New Public regret watching a Facebook Horizon video.