Moderation as 'leverage', best OnlyFans analysis and watching Birdwatch
Hello and welcome to Everything in Moderation, your weekly newsletter about the policies, products, platforms and people shaping the future of online content moderation. It's curated and produced by me, Ben Whitelaw.
I'm bringing you this week's edition on a long train journey that started at a campsite in the British countryside and will end in the north of the UK in God's own county. Despite signal and battery issues, I've tried to keep up to speed with this week's need-to-know news. If there is anything I've missed, hit reply and I'll share it next week.
Without further ado, onto your links (including a hefty section of platform news) - BW
📜 Policies - emerging speech regulation and legislation
If there is one thing we know, it is that content moderation is political. Seen in Chinese expansionism (EiM #36), US incompetence (#48) and Indian censorship (#110), that is part of what makes the topic so fascinating. It is also what Emerson T Brooking, resident fellow at the Digital Forensic Research Lab of the Atlantic Council, says could be crucial in the fight against the Taliban (last week's edition). In a Q&A with Tech Policy Press this week, the co-author of LikeWar: The Weaponization of Social Media notes:
"Some of the greatest leverage that the international community has right now is basically these content moderation policies, and these social media platforms, on which the Taliban would desperately like to maintain a presence."
With deadly attacks taking place yesterday at Kabul airport, it will be interesting to see how platforms change tack over the next few days in response to public and media sentiment.
Can a centuries-old German system for archiving dangerous writing be the secret to improving access to data held by the dominant digital platforms? That's the case made by Jonathan Zittrain and John Bowers in this Slate piece about 16th-century "Giftschrank" (poison cabinets). Their recently published paper with Elaine Sedenberg notes that "(t)he ways in which speech is produced and filtered on a societywide level is going undocumented" before laying out how a similar system could work to allow researchers to review and understand the effect of (un)moderated content. My read of the week.
💡 Products - the features and functionality shaping speech
People who rate tweets as part of Twitter's Birdwatch programme may be pseudonymous, be rated on helpfulness and have an overall score to represent the consistency of their judgements, according to screengrabs from a recent product feedback survey. Announced back in January, the pilot was designed to test how adding notes to contested tweets could "root out propaganda and misinformation" but there has been little in the way of positive noises about the project since.
One analysis of the publicly available data back in February found that fewer than half of the notes contained a citation, while a Twitter user last week counted 12,254 notes created by 2,062 users on 9,566 tweets. Not exactly a flock of people (sorry).
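To make the survey's language concrete: a "consistency" score for a rater could plausibly be computed as how often their helpfulness ratings agree with the eventual majority view on each note. The sketch below is purely illustrative; the data shapes, field names and majority-vote rule are my assumptions, not Twitter's published Birdwatch algorithm.

```python
from collections import Counter

def majority_verdicts(ratings):
    """ratings: list of (note_id, rater_id, is_helpful) tuples.
    Returns the majority is_helpful verdict for each note."""
    votes = {}
    for note_id, _, is_helpful in ratings:
        votes.setdefault(note_id, Counter())[is_helpful] += 1
    return {note_id: c.most_common(1)[0][0] for note_id, c in votes.items()}

def consistency_score(rater_id, ratings):
    """Share of this rater's ratings that match the majority verdict."""
    verdicts = majority_verdicts(ratings)
    mine = [(n, h) for n, r, h in ratings if r == rater_id]
    if not mine:
        return 0.0
    agreed = sum(1 for n, h in mine if verdicts[n] == h)
    return agreed / len(mine)

# Toy data: three raters rating two hypothetical notes
ratings = [
    ("note1", "alice", True), ("note1", "bob", True), ("note1", "carol", False),
    ("note2", "alice", False), ("note2", "bob", False),
]
print(consistency_score("alice", ratings))  # 1.0 (always with the majority)
print(consistency_score("carol", ratings))  # 0.0 (always against it)
```

A real system would need far more than this (weighting by rater reputation, handling ties and sparse data), but it shows why a per-rater score can be derived entirely from the ratings themselves.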
💬 Platforms - efforts to enforce company guidelines
Here's a story that's still unfolding and with potentially significant consequences: mods from hundreds of subreddits have called for the site to "remove dangerous medical disinformation that is endangering lives and contributing to the existence of this ongoing pandemic". The post, published to r/vaxxhappened (330k subs) and reportedly crossposted by over 1,000 others, warned execs that Covid-19 misinformation could leave users "in the hospital or even cost them their lives". It led to a rather weak statement from Reddit CEO Steve Huffman (aka Spez) a few hours later and a counter post from the mods.
This revolt might not be as significant as the joint letter against white supremacy back in June 2020 (EiM #69), but the Reddit mods I've been in touch with suggest that a blackout could be the next step.
YouTube "strategically avoids controversies" and has "repeatedly ducked the brunt of backlashes that Facebook and Twitter have absorbed head-on", according to The Washington Post's Will Oremus. In an interesting read about the video platform's approach to PR, Oremus cites recent examples of its second-mover advantage and speaks to a former employee who says it's a "deliberate strategy". This one doesn't reflect well on how content moderation is covered by major media.
News that OnlyFans was banning sexually explicit content broke just before last week's EiM (#124) and we've seen a host of articles from the commentariat in the week since. Here are a few that are worth reading:
- CNN's Brian Fung reported that payment providers have created "essentially a new content policy regime" in forcing OnlyFans to act.
- Over at Quartz, Scott Nover notes how the policy change came down to four tiny bullet points in OnlyFans' acceptable use policy.
- Marie Solis over on The Verge writes that banks' influence over adult content has been "a slow creep" going back as early as 2014.
- BBC News' Noel Titheradge reports that the move came after the broadcaster approached OnlyFans about its response to leaked moderation documents.
👥 People - folks changing the future of moderation
There's a lot to be said for people across an industry (especially from so-called competitors) coming together to share knowledge and drive the adoption of shared standards. I saw it happen in pockets when working for UK newsrooms, although it is hard and draining work.
That's why credit should be given to Tiffany Xingyu Wang, the president and co-founder of Oasis Consortium, a think tank that had been in stealth mode but launched officially this week. Oasis (which stands for openness, accountability, security, innovation and sustainability) will work with companies to develop their Trust and Safety initiatives and is already working with a number of "thought leaders", including people from audio company Pandora and dating giant The Meet Group.
Wang has a wide range of tech experience: she founded a wine persona startup, led AI product development at Salesforce and has been Chief Strategy Officer at Spectrum Labs for almost two years, according to her LinkedIn profile.
It's unclear how Oasis is funded, but the consortium has said that by the end of 2021 it will work towards creating template user safety standards to help companies avoid common content moderation pitfalls. I'll return to this later in the year.
🐦 Tweets of note
- "I believe we're now shifting from a moment when platforms use all sorts of 'reduction' + 'borderline' techniques, but prefer not to talk much about it" - Tarleton Gillespie looks at how YouTube is presenting how it moderates in a good thread.
- "What it takes is a TON of work from REAL PEOPLE setting rules and enforcing them." - Galaxy Brain writer and former NYT staffer Charlie Warzel notes the special sauce of one well-moderated Covid-19 group.
- "ML content moderation of extremist content is… machines against the rage" - I'm all for this content moderation humour courtesy of The Calyx Institute's Jeff Landale.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.