📌 Europe’s new free speech taskforce
Hello to a handful of new EiM subscribers from across the pond on the back of this kind tweet — do drop me an email to say hi. Thanks also to two folks who helped celebrate 50 editions of EiM by buying me a virtual ko-fi — you know who you are.
Today’s newsletter is a world tour of content moderation news that starts in Europe, goes via China and ends in the US and Brazil. As ever, I hope it’s both useful and enjoyable (or at least one of the two).
Thanks for reading — BW
PS If you’re in Dublin next Wednesday, go and listen to Chris Gray, the lead plaintiff in the case against Facebook and contractor CPL Resources.
🇪🇺 A tale of two regulatory processes
Two weeks, two notable announcements about online harms regulation. One you will likely know about; the other you may not. The two are at very different stages but, together, they give us an indication of where we're heading when it comes to policing harmful content.
First, the one that may have gone under the radar. The Council of Europe unveiled its new Expert Committee on Freedom of Expression and Digital Technologies, a group of 13 people from 7 countries who have been tasked with creating a draft recommendation on the impacts of digital technologies on freedom of expression. Over the next two years, the nattily named MSI-DIG committee will also put together a guidance note on content moderation practices for the Council’s 47 member states. Even if you think the Council of Europe doesn’t have the heft it used to, this feels like a positive step.
Then, there’s the announcement you might have heard about. This week, the UK government released the initial consultation response to its April 2019 Online Harms White Paper, which had sought to introduce a new duty of care for companies towards their users and to appoint a regulator to oversee complaints and fines.
I won’t go into the reaction here but the overall feeling seems to be: there’s a lot of work to do. (Tech UK’s response is worth reading, as are Heather Burns’ and Will Perrin’s Twitter threads.) The final paper will be published in the spring.
What can we glean when we put the two side by side? One thing is for sure: regulation will be about ‘legal and procedural frameworks’ (taken from the MSI-DIG terms of reference) and ‘systems, procedures, technologies and investment’ (the Online Harms wording). It won't be about removing individual pieces of content.
That might seem obvious to many, but the Online Harms white paper was a ‘bit of a jumble’ and suggested that the regulator would adjudicate on individual matters of notice and redress. That would have been disastrous.
As it is, these two processes have come from different starting places to rest in a similar spot. That’s not much when there's still a great deal to thrash out (and to go wrong) in both cases, but it’s a start.
🇨🇳 How Reddit is dealing with coronavirus
I don’t often have reason to read The Hill, the news site covering US politics, but there was some interesting detail in this piece about how Reddit is working with its users to combat misinformation on two of its subreddits, r/china_flu and r/coronavirus:
Between them, there were:
- 30 moderators dealing with 1.2m visits
- Strict rules against sensationalised news content
- A focus on authoritative info, e.g. the World Health Organisation
- ‘Rumors’ and ‘grain of salt’ tags to flag unsubstantiated info
- A mod who said: “I view this as some sort of extension of my day job, even though I’m not getting paid for it” ❤
Another subreddit — r/wuhan_flu — was quarantined for misinformation/hoax content, with a banner directing people to the US Centers for Disease Control and Prevention.
Not perfect, then, but more content moderation kudos to Reddit.
The State of it
An interesting thread from a Harvard Law professor about Trump’s retweet of (another) Nancy Pelosi video and why it should stay up in spite of calls to remove it.
Not forgetting...
More people behind the class action lawsuit against Cognizant and Facebook are coming forward to tell their stories publicly. Clifford Jeudy was a Florida-based moderator until PTSD made him leave his job in July 2019.
Facebook moderator sues firm saying watching rapes, mutilations and murders gave him PTSD – The Sun
One of those stories (or at least a headline — I haven’t got a BI Prime account) that tells you something most of us knew five years ago: Facebook comments are awful.
Facebook's comments tool promised to make 'higher quality discussions' on the internet. It's riddled with spam instead.
Facebook's spam-detection systems are failing to notice blatant spam and scams, leaving popular websites covered in malicious comments.
Good reporting from The Intercept about TikTok’s latest questionable policy (and management) decision in Brazil.
TikTok Livestreamed a User’s Suicide — Then Got Its PR Strategy in Place Before Calling the Police
Internal documents show that, after a livestreamed suicide, TikTok’s Brazil office planned its crisis management before calling the police.
Ever wondered how Quora moderates its users? There’s a (very well-mannered) thread for that.
How much control do Quora moderators have over what shows up in our feed? - Quora
Moderation has a pretty heavy role in ensuring that the right content gets to the right users.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.