Welcome to Everything in Moderation, your weekly newsletter about content moderation hand-carved by me, Ben Whitelaw.
Here is the policy, product, platform and people news that you need to know about this week — BW
📜 Policies - company guidelines and speech regulation
A new report has identified regulation of moderation systems and platform transparency as two of four structural challenges to ending ‘the informational chaos that poses a vital threat to democracies’.
The new 128-page report (I haven’t read it all, I’ll be honest) from The Forum on Information and Democracy — an independent body formed in November 2019 by 11 civic organisations — also lists 250 clear-sighted recommendations for state governments and service providers on how to draw up guidelines, manage user data and evaluate content. Basically doing all the hard work for them.
New legislation in Pakistan which could lead to big fines for online platforms if they don’t remove content within 24 hours has led Facebook, Google and Twitter to threaten to pull out of the country. In a statement, the companies said the rules — which also require firms to nominate an in-country representative — make it ‘extremely difficult’ to make the services available. You’ll remember that the same rules led to TikTok being banned in the country (EiM #25) before a reprieve.
Finally, a note about last week’s edition on the Digital Services Act: without me knowing, the European Commission had pushed back its launch by a week. Thanks to Samuel at Euractiv for flagging the new date - 9 December.
💡 Products - features and functionality
Surprise, surprise, Twitter’s new Fleets (eurgh, worst name ever) feature does not have a stable system of moderation behind it. Data scientist Marc-André Argentino was able to post Nazi content and graphic images without any checks or warnings whatsoever. The irony is that Argentino’s account was temporarily suspended after users reported him for posting such content. You have to laugh.
In gaming news, Mojang — the Swedish developer of Minecraft — announced that moderators will now be able to permanently ban users as part of a push to create a more ‘welcoming environment’ (read: make more cash). Bans, which can be levied for threats, exposing personal information and cheating among other reasons, cannot be appealed and will force users to sign up with a different account. Pretty draconian. (Know any Minecraft mods? I'd love to speak to one for a new project).
💬 Platforms - dominant digital platforms
A punchy open letter from over 200 Facebook moderators (61 under their own names) has accused the platform of risking their lives during the Covid-19 pandemic. The letter, published by law firm Foxglove, also called for an end to outsourcing and the introduction of hazard pay and better healthcare. Artificial intelligence, they said, is ‘years away from achieving the necessary level of sophistication to moderate content automatically’ and 'may never get there’. Facebook has yet to respond.
To make matters worse for Facebook, the CEO of Taboola — a company that has made millions turning media sites into dustbins — has written an op-ed for Mumbrella about the need for human moderation. A bit like being told to chill by the Incredible Hulk.
👥 People - those shaping the future of content moderation
Glitch, a UK non-profit aiming to end online abuse, has produced some fantastic work since it was founded in 2017 (its recent report on the effect of Covid-19 on online abuse experienced by minorities is worth reading). Its founder, Seyi Akiwowo, has been behind a lot of that good stuff and has rightly been recognised with a place on Twitter’s Trust and Safety Council. A smart appointment.
🐦 Tweets of note
- "Every snap is moderated": Juliet Shen, product manager at Snapchat, points out a surprising aspect of its new Spotlight product.
- "Really helpful panel & remarkable the narrative change on content regulation" - associate professor Fenwick McKelvey makes the 2020 Canadian Internet Governance Forum sound like a hoot. Genuinely sad I missed this.
- "Amid a flood of misinformation, Facebook dialled up a publisher quality score known as NEQ to make authoritative news more prominent" - fascinating piece and thread from the New York Times’ Kevin Roose.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.