A transatlantic call for platform transparency
What a week it has been.
2,000 subreddits (including r/The_Donald). Both President Trump's and Dr Disrespect's Twitch accounts. 200 Facebook "boogaloo" accounts. A host of white supremacists from YouTube. All banned or suspended in just the last seven days. You've heard of The Night of the Damned; this was the week of the banned (probably not coming to a cinema near you any time soon).
Also notable was the fact that Everything in Moderation passed 250 subscribers, a small but pleasing achievement. Big thanks to folks from Microsoft, Portsmouth University, Public Interest News, Queensland University of Technology, Historic England and others who subscribed. Please take a minute to reply and say hi.
In today's edition, I set out to understand where the platforms might go from here and, to do that, I went back to a recently published report.
Stay safe and thanks for reading – BW
'Transparency will benefit companies and governments'
The principle behind the Transatlantic High-Level Working Group on Content Moderation Online and Freedom of Expression was always better than its name.
Convened in February last year by two research institutes – the Annenberg Public Policy Center in Philadelphia and the Institute for Information Law in Amsterdam – it brought together 25 experts on digital policy, law and human rights from Europe and America for the first time.
The task? To come up with ideas to reduce the amount of hate speech, terrorist content, and misinformation without curtailing freedom of expression.
Over 18 months, the group has written 14 papers (some of which I've mentioned here in EiM) on everything from artificial intelligence to intermediary liability and from accountability solutions to existing legislation. Last week, it added to that bank of research by publishing its final report, Freedom and Accountability: A Transatlantic Framework for Moderating Speech Online.
The report, to give you an idea, calls for greater transparency and accountability from the dominant digital platforms and puts forward what I believe is a sensible, flexible framework for moderation based on five components:
- Regulate on the basis of transparency
- Establish an accountability regime to hold platforms to their promises
- Create a three-tier disclosure structure
- Provide efficient and effective redress mechanisms
- Use an ABC (actors, behaviour, content) framework to combat viral deception, or disinformation
(The full report is worth reading if you can spare 30 minutes, or you can watch an hour-long presentation of the findings here.)
I wanted to find out more about the recommendations and their possible application in the real world, so I asked Jeff Jarvis, professor at the Craig Newmark Graduate School of Journalism at CUNY and a member of the TWG, to answer some questions via email.
Here's what he said:
Q The report makes a lot of the lack of trust among tech companies, the public, and government. Why do you think that mistrust exists and when did it start?
To be clear, I speak only for myself, not for the Working Group. Others would disagree, but in my view, what we are seeing is a burgeoning moral panic that arises out of worry about change. The tech companies were too optimistic about human behaviour and did not sufficiently guard against manipulation; they also were far too opaque about their inner workings and haughty atop that. Media and politics – institutions themselves threatened by the change of the net – joined together in attempts at protectionism for their past.
Q Transparency – one of the report's major recommendations – arguably runs counter to everything about the dominant digital platforms. How likely is it that the likes of Facebook and YouTube will change their ways?
Twitter is fairly public, releasing data on misinformation campaigns for researchers. Facebook has, in fits and starts, tried to release data (see Social Science One) but trips over GDPR, Cambridge Analytica, and its own culture. Work is needed, following the recommendations of the Working Group, to examine in greater detail what transparency is needed and why: to study the impact of the social networks on society and the impact of regulation to date on the social networks and the public conversation, and to hold the companies accountable for doing what they say they will do. In the long run, transparency will help both companies and governments regain trust.
Q The report recommends a regulator to oversee standards and implement frameworks. What kind of body is best placed to act as that regulator across both the United States and Europe (and beyond)?
Again, I speak for myself here but personally, I do not presume to start with regulatory agencies. As for the group, it is not recommending an extra-governmental, international regulator. It is up to each government (nation and EU) to decide whether a regulator is needed, whether that regulator could be an existing body (e.g., Ofcom in the UK, the FTC in the US), or whether it should be a new body.
Q The current debate pitches the respective policies of the social media platforms against one another, which distracts from the wider discussion on process and regulation. To what extent would it be beneficial if the tech companies agreed on a common set of community guidelines and terms of service?
The Working Group proposes a flexible framework that specifically enables companies and communities to establish their own standards, opposing one-size-fits-all rules, and also enabling an ongoing multi-stakeholder discussion – among technology companies, governments, researchers, civil society, and users themselves – as new challenges, such as a pandemic, arise.
Q The report says the TWG 'did not seek unanimity on every conclusion or recommendation'. But which part saw the greatest disagreement and why do you think that was?
The discussions in the Working Group were productive, collaborative, and nuanced, informed by research. When former FCC Commissioner Susan Ness convened the Working Group, I was frankly unsure whether such a disparate set of experts from various sectors, nations, and interests could reach an agreement, and so I am impressed with what the Group accomplished. I would not say there was any pattern of disagreement.
Q What reaction has there been from the tech companies to the report so far and which organisations do you expect to be first to implement some of the recommendations?
In discussions with various technology companies, they have focused on the specifics of transparency. As I said, more work is needed to ask and answer what transparency is needed for what purpose and to be diligent about coming to common definitions (for example, what does it mean to "promote" or "demote" a piece of content when countless decisions are made about it by algorithms and users?). This is not about transparency for transparency's sake but about providing evidence and research to inform decisions by companies and governments; that could be a productive and collaborative discussion.
Thanks to Jeff for taking the time to answer questions. Who else should I reach out to for a Q&A? Let me know.
Free speech is cheap
Parler – the newly popular free speech social network I touched on in last week's EiM (#70) – has been banning users this week for, wait for it, making fun of it.
The irony of the situation was sadly lost on its CEO, who posted what could barely be called 'community guidelines'.
Elsewhere on Twitter this week:
- Adam Mosseri, the head of Instagram, used one of his butter-wouldn't-melt videos to pledge to better "articulate the tradeoffs, articulate the different solutions and debate them as openly as we can" following criticism of moderation and rate-limits on the platform.
- This great thread from Evelyn Douek – lecturer on law at Harvard University – on how the #stophateforprofit campaign is just one of many levers to pull to change platforms' approach to hate speech.
- I hope to dig more into the recipients of 20 Knight Foundation grants totalling $1.7m to support research into internet governance, but Simon Galperin isn't impressed with one of them.
Not forgetting...
A group of UK peers this week published a new report urging the government to bring forward a Draft Online Harms Bill and appoint an independent ombudsman for content moderation for those let down by platforms' systems. It's not a million miles away from the TWG's blueprint.
Democracy under threat from 'pandemic of misinformation' online, say Lords Committee - Committees - UK Parliament
The UK Government should act immediately to deal with a 'pandemic of misinformation' that poses an existential threat to our democracy and way of life. The stark warning comes in a report published today by the Committee on Democracy and Digital Technologies.
Reddit CEO Steve Huffman always comes across pretty well in interviews, including this New York Times one. Not sure I'd agree with his assessment that allowing hate speech on the platform was 'a gap' and a 'rough patch'.
Reddit's Steve Huffman on Banning 'The_Donald' Subreddit - The New York Times
Steve Huffman, Redditâs co-founder and chief executive, says new rule changes will help the company fulfill its mission.
Following criticism of its moderators' handling of Black Lives Matter posts, Nextdoor is actively recruiting Black moderators and giving existing mods unconscious bias training.
Nextdoor CEO says it's "our fault" moderators deleted Black Lives Matter posts - The Verge
Sarah Friar, CEO of neighborhood-focused social network Nextdoor, says the company is at fault for moderators deleting posts discussing racial injustice and the Black Lives Matter movement.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.