
šŸ“Œ Moderation as 'leverage', best OnlyFans analysis and watching Birdwatch

The week in content moderation - edition #125

Hello and welcome to Everything in Moderation, your weekly newsletter about the policies, products, platforms and people shaping the future of online content moderation. It’s curated and produced by me, Ben Whitelaw.

I’m bringing you this week’s edition on a long train journey that started at a campsite in the British countryside and will end in the north of the UK in God’s own county. Despite signal and battery issues, I’ve tried to keep up to speed with this week’s need-to-know news. If there is anything I’ve missed, hit reply and I’ll share it next week.

Without further ado, onto your links (including a hefty section of platform news) — BW


šŸ“œ Policies - emerging speech regulation and legislation

If there is one thing we know, it is that content moderation is political. Seen in Chinese expansionism (EiM #36), US incompetence (#48) and Indian censorship (#110), that political dimension is part of what makes the topic so fascinating. It is also what Emerson T Brooking, resident fellow at the Atlantic Council’s Digital Forensic Research Lab, says could be crucial in the fight against the Taliban (last week’s edition). In a Q&A with Tech Policy Press this week, the co-author of LikeWar: The Weaponization of Social Media notes:

Some of the greatest leverage that the international community has right now is basically these content moderation policies, and these social media platforms, on which the Taliban would desperately like to maintain a presence.

With deadly attacks taking place at Kabul airport yesterday, it will be interesting to see whether platforms change tack over the next few days in response to public and media sentiment.

Can a centuries-old German system for archiving dangerous writing be the secret to improving access to data held by the dominant digital platforms? That’s the case made by Jonathan Zittrain and John Bowers in this Slate piece about the 16th-century ā€œGiftschrankā€ (poison cabinet). Their recently published paper with Elaine Sedenberg notes that ā€œ(t)he ways in which speech is produced and filtered on a societywide level is going undocumentedā€ before laying out how a similar system could let researchers review and understand the effect of (un)moderated content. My read of the week.

šŸ’” Products - the features and functionality shaping speech

People who rate tweets as part of Twitter’s Birdwatch programme may be pseudonymous, be rated on helpfulness and have an overall score representing the consistency of their judgements, according to screengrabs from a recent product feedback survey. Announced back in January, the pilot was designed to test how adding notes to contested tweets could ā€œroot out propaganda and misinformationā€, but there have been few positive noises about the project since.

One analysis of the publicly available data back in February found that fewer than half of the notes contained a citation, while a Twitter user last week counted 12,254 notes created by 2,062 users on 9,566 tweets. Not exactly a flock of people (sorry).
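If you want to poke at the numbers yourself, Birdwatch publishes its notes as a downloadable TSV file. Below is a minimal sketch of how counts like those above could be reproduced; the file name and column names (participantId, tweetId, summary) are assumptions based on the published export and may differ, and treating ā€œsummary contains a linkā€ as a citation is only a rough proxy.

```python
# Minimal sketch: tally Birdwatch notes, authors and tweets from the
# public notes export. File and column names are assumptions.
import pandas as pd

notes = pd.read_csv("notes-00000.tsv", sep="\t")  # tab-separated export

print("notes:  ", len(notes))
print("authors:", notes["participantId"].nunique())
print("tweets: ", notes["tweetId"].nunique())

# Rough proxy for "contains a citation": the note's free text links out.
cited = notes["summary"].fillna("").str.contains("http", case=False)
print(f"share of notes with a link: {cited.mean():.0%}")
```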

šŸ’¬ Platforms - efforts to enforce company guidelines

Here’s a story that’s still unfolding, with potentially significant consequences: mods from hundreds of subreddits have called for the site to ā€œremove dangerous medical disinformation that is endangering lives and contributing to the existence of this ongoing pandemicā€. The post — published to r/vaxxhappened (330k subscribers) and reportedly cross-posted by over 1,000 other subreddits — warned execs that Covid-19 misinformation could leave users ā€œin the hospital or even cost them their livesā€. It led to a rather weak statement from Reddit CEO Steve Huffman (aka Spez) a few hours later and a counter post from the mods.

This revolt might not be as significant as the joint letter against white supremacy back in June 2020 (EiM #69), but Reddit mods I’ve been in touch with suggest that a blackout could be the next step.

YouTube ā€œstrategically avoids controversiesā€ and has ā€œrepeatedly ducked the brunt of backlashes that Facebook and Twitter have absorbed head-onā€, according to The Washington Post’s Will Oremus. In an interesting read about the video platform’s approach to PR, Oremus cites recent examples of its second mover advantage and speaks to a former employee who says it’s a ā€œdeliberate strategyā€. This one doesn’t reflect well on how content moderation is covered by major media.

News that OnlyFans was banning sexually explicit content broke just before last week’s EiM (#124) and we’ve seen a host of articles from the commentariat in the week since. Here are a few that are worth reading:

  • CNN’s Brian Fung reported that payment providers have created ā€œessentially a new content policy regimeā€ in forcing OnlyFans to act.
  • Over at Qz, Scott Nover notes how the policy change came down to four tiny bullet points in OnlyFans’ acceptable use policy.
  • Marie Solis over on The Verge writes that banks’ influence over adult content has been ā€œa slow creepā€ going back as early as 2014.
  • BBC News’ Noel Titheradge reports that the move came after the broadcaster approached OnlyFans for its response to leaked moderation documents.

šŸ‘„ People - folks changing the future of moderation

There’s a lot to be said for people across an industry (especially from so-called competitors) coming together to share knowledge and drive the adoption of shared standards. I saw it happen in pockets when working for UK newsrooms, although it is hard and draining work.

That’s why credit should be given to Tiffany Xingyu Wang, the president and co-founder of Oasis Consortium, a think tank that had been in stealth mode but launched officially this week. Oasis — which stands for openness, accountability, security, innovation, and sustainability — will work with companies to develop their trust and safety initiatives and is already working with a number of ā€œthought leadersā€, including people from audio company Pandora and dating giant The Meet Group.

Wang has a wide range of tech experience: she founded a wine persona startup, led AI product development at Salesforce and has been Chief Strategy Officer at Spectrum Labs for almost two years, according to her LinkedIn profile.

It’s unclear how Oasis is funded but the consortium has said it will work towards creating template user safety standards to help companies avoid common content moderation pitfalls by the end of 2021. I’ll return to this later in the year.

🐦 Tweets of note

  • ā€œI believe we’re now shifting from a moment when platforms use all sorts of ā€˜reduction’ + ā€˜borderline’ techniques, but prefer not to talk much about itā€ - Tarleton Gillespie looks at how YouTube is presenting how it moderates in a good thread.
  • ā€œWhat it takes is a TON of work from REAL PEOPLE setting rules and enforcing them.ā€ - Galaxy Brain writer and former NYT staffer Charlie Warzel notes the special sauce of one well-moderated Covid-19 group.
  • ā€œML content moderation of extremist content is… machines against the rageā€ - I’m all for this content moderation humour courtesy of The Calyx Institute’s Jeff Landale.

Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.