📌 The growing case for moderation juries
Hello — this is Everything in Moderation, your weekly dose of news and analysis about online content moderation.
A big welcome to new subscribers from SwissInfo, New Statesman, Business Insider, BCW Global and others. If you have a minute, drop me a line to tell me how you made your way here.
This week, I wanted to go deeper on a topic that I didn't quite have space to cover in last Friday's newsletter (EiM #71): digital justice.
Stay safe and thanks for reading — BW
⚖️ The need for moderation to be seen to be done
On the few occasions that someone has asked me what Everything in Moderation is about, I tell them the same thing: 'it's a newsletter about justice in the age of the internet masquerading as a weekly email about content moderation'.
This pitch, I will admit, doesn't get people super excited about subscribing. But it's a fair way to describe the issues — including human rights (EiM #42), oversight boards (EiM #44) and the need for better arbitration of disputes (EiM #31) — that I've touched on here over the last 18 months.
It's also why I was interested in the Transatlantic Working Group’s floating of e-courts as a means of redress when content moderation goes too far. As the report — which I covered here last week — explains:
A system of e-courts would enable users to resolve disputes over content deletion through public scrutiny when the fundamental right of freedom of expression is involved. It would also enhance legitimacy through due process and independence, and protect democracy by bringing decisions about the legality of content into public view.
The TWG isn't the only group looking at content moderation in the context of justice. An EU policy principles briefing by the Electronic Frontier Foundation this week also noted that liability for content should only be decided by court order, not by platforms or individuals. Although it stopped short of saying so, EFF's implication is that the justice system must adapt to the speed, and the scale of potential harm, at which dominant digital platforms operate.
That's all well and good, but it raises the question: what do we do in the meantime? What interventions are possible while speech regulation rumbles on and the justice systems of every country across the globe make themselves fit for the 21st century?
That could be where digital juries come in.
I first came across the idea of digital juries in March this year in an academic paper called Digital Juries: A Civics-Oriented Approach to Platform Governance.
As part of a thesis project, the authors — Jenny Fan and Amy X Zhang — drew on lessons from the judicial system to create a five-step digital jury framework. Using this, they ran 15 pilot juries to understand how users would respond to real-life examples of content, including an anti-Semitic version of Pepe the Frog standing in front of the World Trade Center. The goal was to gather evidence about whether juries were perceived as more democratically legitimate than other forms of moderation.
Eighty-two participants recruited from Amazon Mechanical Turk took part in one of three types of jury:
- a control where users could see the content and the decision but not influence the outcome
- one with some background to the content and blind-voting on what should happen next
- a third that allowed group deliberation for four minutes before a vote
When surveyed afterwards, participants agreed that the more immersive format was more 'procedurally just', even though it was less efficient than the first two. This led Fan and Zhang to conclude:
"...the jury processes are perceived as a more legitimate exercise of platform power, improving trust in how content moderation processes are made, valuing individual voices, and caring about user preferences.”
Jonathan Zittrain, professor of (you guessed it) law, as well as computer science, at Harvard, has also written about juries in the context of Facebook's Oversight Board, noting how it:
'borrows from the design of a legal system, which, when it works, brings otherwise intractable conflicts to resolution and legitimacy, even though some people, even many people, will understandably be disappointed by any given decision that emerges from it.’
As with many of the ideas for dealing with the current crisis in content moderation, these concepts are not necessarily new, nor do they need to be. Plenty of what has gone before has merit, and there is a great deal to learn from it.
For example, transparency of process and engagement in decision-making were seen in the early 2010s in the form of League of Legends tribunals and can be traced even further back to Slashdot’s still-working moderation system. I remembered in the course of writing this that Periscope — Twitter’s now-defunct live video platform — also helped users decide collectively about the nature of comments on streams.
The time feels ripe for a return to these ideas. I'd welcome a continued widening of the debate from regulation and policy to procedure and process, and would love to see Fan and Zhang's research taken in new directions.
If justice needs to be seen to be done, then so does moderation.
🌏 Around and about
I'm conscious that a lot of what I cover in Everything in Moderation can be US-focused so here are a few stories from the last seven days from elsewhere.
🇩🇪 Justice ministers in Germany this week said that “voluntary commitments and self-responsibility” from social platforms are not enough and that they may look to extend the existing NetzDG regulation, which sought to prevent extremist content.
🇧🇷 A draft Bill to establish an Internet Freedom, Responsibility and Transparency Law in Brazil has been translated by an academic institution in Rio so that others can give input. Take a look.
🇮🇪 A leading Irish defamation lawyer has urged the newly appointed Justice Minister to establish a committee to review dominant digital platforms and assess options for regulation.
Is there anything I've missed from where you are? Get in touch and I'll share here next week.
🤖 Are humans scalable?
Programmer Dan Luu noticed that most flags on Lobste.rs, a link aggregation site about computing, were attributed to a handful of users, which he posits is a way that human moderation can be made scalable.
Having operated moderation teams in a similar way (not always with 100% success, mind you), I like this approach.
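Out of curiosity, here's a rough sketch of the kind of tally that sits behind that observation: given a log of who flagged what, how concentrated is the flagging? The flag log, usernames and output below are entirely made up; Lobste.rs' real data and moderation tooling will differ.

```python
from collections import Counter

# Hypothetical flag log: (user, flagged item) pairs.
flag_events = [
    ("alice", "story_1"), ("alice", "story_2"), ("alice", "comment_9"),
    ("bob", "story_2"), ("alice", "story_3"), ("carol", "comment_4"),
    ("alice", "comment_7"), ("bob", "story_5"),
]

flags_per_user = Counter(user for user, _ in flag_events)
total_flags = sum(flags_per_user.values())

# Share of all flags cast by the single most active flagger.
top_user, top_count = flags_per_user.most_common(1)[0]
print(f"{top_user} cast {top_count}/{total_flags} flags ({top_count / total_flags:.0%})")
```

If a handful of trusted users account for most of the flags, a small moderation team only has to review a concentrated, predictable stream rather than police the whole site.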
7️⃣ Not forgetting...
Some interesting detail in this LA Times piece about the recent Reddit mod revolt and the experience of one Black moderator, Jefferson Kelley, who is taking over r/BlackFathers.
Reddit moderators spent years asking for help fighting hate. The company may finally be listening
When Reddit announced last week it was shutting down the noxious pro-Trump group that had violated the site’s rules for years, Jefferson Kelley could scarcely believe it.
“A complex web of nebulous rules and procedural opacity”: how Sarah T Roberts, professor at UCLA, describes moderation practices in the abstract of her latest paper about the takedown of the Terror of War photo. As ever, Sarah is not messing around.
Digital detritus: 'Error' and the logic of opacity in social media content moderation | First Monday
Auditors who spent two years reviewing Facebook's civil rights record published their report this week and it's as damaging as it could be: the company has set a 'terrible precedent' and 'made policy and enforcement choices that leave our election exposed to interference by the president'. Ouch.
Facebook’s Decisions Were ‘Setbacks for Civil Rights,’ Audit Finds - The New York Times
An independent audit faulted the social network for allowing hate speech and disinformation to thrive — potentially posing a threat to the November elections.
TikTok's second transparency report, covering the second half of 2019, shows that it pulled down almost 50 million videos: 98.2% of them using automated means and 89.4% before they received a single view.
Our H2 2019 Transparency Report - Newsroom | TikTok
By Michael Beckerman, VP and Head of US Public Policy, and Eric Han, Head of Safety, US: Today we published our global Transparency Report for the last half of 2019 (July 1 - December 31, 2019).
Parler has come down hard on impersonation this week, launched new community guidelines and put out a call for volunteer mods, which suggests it wants to be decentralised.
A Twitter Alternative, If They Can Keep It | Cato @ Liberty
Professor Chris Gilliard has written for OneZero about the toxic waste that Facebook is spilling into our online environment and how we wouldn't stand for it if it were lead in our water.
Facebook Cannot Separate Itself From the Hate It Spreads | by Chris Gilliard | Jul, 2020 | OneZero
Imagine a factory that allowed anyone to bring toxic waste there, any time of day or night, and promised to store it. Imagine that in addition to storing the waste, the factory would exponentially…
Another nice example of an online community (this time a Texas university sports fan site) reiterating that racism will not be tolerated. Some surprisingly positive comments under this one.
A note to the TexAgs community regarding posting and moderation | TexAgs
TexAgs CEO Brandon Jones addresses the community dynamics of posting and site moderation as well as an updated TexAgs user agreement.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.