Hello, and happy new year from Everything in Moderation, your weekly roundup about what's going on in the world of content moderation, now and in the future. It's written by me, Ben Whitelaw.
A warm welcome to an influx of new subscribers from TikTok, the Canadian Broadcasting Corporation, University of Michigan, Bar-Ilan University, Memo, the Open Data Institute, Unfinished, Stanford University, Spotify and many more. However you found EiM, you're especially welcome here. Do say hi if you get the chance.
If you missed last week's special edition of EiM on what to look out for in 2022, it's worth returning to for Tosha Sakar's very relevant insight alone.
Today's newsletter is a return to the usual Friday format: 10-15 must-read articles about online speech and content moderation published in the last seven (or so) days and loosely filed into one of four categories. I hope you find something new or useful — BW
📜 Policies - emerging speech regulation and legislation
Poland will today present its controversial new content moderation bill to Parliament, following political controversy over Facebook's banning of an opposition party for Covid disinformation. The so-called Freedom Act, which seeks to introduce strict takedown requirements and financial penalties for websites that fail to comply, has been criticised by industry and civil society organisations for duplicating incoming Europe-wide legislation and introducing a speech regulator that would work against bodies designed to maintain journalistic freedoms. A real mess, by all accounts.
The introduction of 'must-carry' laws in Texas and Florida (EiM #119), designed to prevent social media platforms removing political figures or speech, looks like being just the tip of the iceberg, as Protocol reports. It found that Republican legislators in 33 states across the US have introduced anti-content moderation bills, with five making their way through state legislature chambers as we speak. The implementation of the Texas and Florida laws, you'll remember, is currently blocked after courts struck them down on First Amendment grounds (EiM #139).
Finally for this section, if you didn't catch the Truth and Trust Online conference back in October, the slides from all of the sessions were recently published online. I can personally recommend Oxford Internet Institute's Hannah Kirk on detecting hateful emoji.
💡 Products - the features and functionality shaping speech
Moderators on Reddit have discovered that posts removed for violating its community guidelines are still accessible via the posting user's comment history. These so-called "ghost posts" were reportedly flagged to Reddit back in September, according to The Markup, but execs are yet to do anything, meaning that sexual content is available via countless profiles for those who know how to find it. Sameer Hinduja, co-director of the Cyberbullying Research Center, described it as "a huge miss by the engineering and UX team", which is hard to disagree with.
💬 Platforms - efforts to enforce company guidelines
Just a week into 2022 and there have already been two political account suspensions, one on each side of the Atlantic:
- Politics for All, a Twitter account aggregating headlines from mostly UK news outlets, was removed along with several similarly branded accounts for what Press Gazette reported was "platform manipulation and spam". The rapid growth and influence of this franchise suggests that the account owner could have been buying followers.
- Republican Congresswoman Marjorie Taylor Greene has had her personal account banned from Twitter for violating its Covid-19 misinformation policies under the platform's five-strikes policy. Her Telegram account was also suspended, although her official Congressional Twitter account remains active.
OpenSea, the mega-hyped non-fungible token (NFT) platform, will spend some of its new $300m funding round on trust and safety efforts, according to the New York Times. It comes after 15 NFTs 'worth' $2.2 million were stolen, leading the platform to freeze the assets and angering some users who argued that this was anti-decentralisation. As Casey Newton wrote about and T Brooking pointed out, there is a large dollop of irony in these "self-governing" NFT communities immediately calling for moderators to intervene.
Around 400 Accenture moderators working on behalf of Facebook, who were expected to return to working from its Mountain View campus on 24 January, have now been told they can continue working from home. Several moderators had threatened to resign over the plans, accusing the companies of treating them like "second-class citizens" in comparison with Facebook staff. Buzzfeed has the sadly all-too-familiar story.
👥 People - folks changing the future of moderation
If there was one thing that I'd recommend looking out for in 2022, it's the increasing importance of the testimonies and experiences of content moderators around the world. Their accounts of trauma are an essential part of understanding the difficult trade-offs inherent in moderation and their experience can play a vital role in designing solutions that don't see policy decisions solely handed over to opaque artificial intelligence models.
Candie Frazier is the latest in a long line of moderators to speak up. A contractor for Telus International working on behalf of TikTok, she claims that the video-sharing platform made her watch "thousands of acts of extreme and graphic violence" without protections such as minimizing or blurring content. She has filed a lawsuit but has since been stood down from work, according to reports this week.
That Frazier is willing to (probably) break an NDA and (potentially) go without pay tells you all you need to know about how tough being a moderator for a large platform is.
🐦 Tweets of note
- "Internet policy needs to mature, fast. It’s throwing blunt tools (take down! refer to prevent! dns block!) at what are often complex *offline* policy failures it fails to understand. Both online and offline are real life" - Michael Veale on reports of a worrying trend in the UK.
- "Think about what happened when FB left content moderation to AI for a min early COVID." - FTC senior advisor Meredith Whittaker on the limitations of algorithms to spot rare, real-world events.
- "We might not have law-based solutions to every problem with online speech. And that is difficult for many to accept." - Jeff Kosseff reminds us all of the big elephant in the room in a new piece for The Atlantic.