📌 Transparency is not a physical place
Hello everyone and a warm welcome to new subscribers from the Financial Times, Deutsche Welle and Khoros. Sending you — and affected moderators all over the world — elbow bumps during this difficult and contagious time. I've included a special COVID-19 section this week to guide you to relevant virus-related moderation stories.
As part of going freelance, I’ve made my calendar open to anyone who wants to have a chat (about work opportunities or otherwise). If you’re self-isolating or want to discuss EiM/moderation, put some time in — I'd love to find out more about you and what you're working on.
Onto this week’s newsletter — BW
🚖 Taken for a ride?
It’s probably the closest we’re going to get to a content moderation theme park. TikTok this week revealed it will open a transparency centre in its LA offices where the public can examine its moderation practices and watch how its team operates.
As the announcement sets out, the centre will:
operate as a forum where observers will be able to provide meaningful feedback on our practices.
Although it is focused primarily on attracting outside experts and policymakers, anyone can walk in off the street in theory (it’s unclear at this stage whether there will be queues or height restrictions). The centre will open in early May.
Although it’s the first time that a company has done this for content moderation, it isn’t a new tactic for Chinese tech companies under pressure from regulators. Huawei opened a similar bricks-and-mortar centre in Brussels in early 2019 to ‘facilitate communication between Huawei and key stakeholders on cybersecurity strategies and end-to-end cybersecurity’.
This week saw another slew of headlines about TikTok’s content policies — Wired reported that pro-anorexia content was being surfaced to users — so any oversight of how its algorithm and its human moderators surface content is welcome.
However, let’s not be lulled into thinking this is a big step forward. It is tech blog fodder at best and regulator seduction at worst. Transparency is not a place; it is a process. It is seen in removal explanations, openness about algorithmic decision-making, localised moderators and a regular review of the takedown data that Facebook, Twitter, YouTube and Reddit now report publicly.
Anything less is a sham.
⛔️ Banned but for what?
Mohamed El Dahshan, an economist and writer tweeting critically about the Egyptian government, was banned from Twitter for 11 days for, wait for it, profanity. Considering that most of the site is swear words, I found that surprising. His thread explains what happened and why there is cause for concern from a moderation perspective.
🏥 Public health platforms?
If we started off 2020 thinking that the US election was going to pose the biggest content moderation challenge of this year, it was because we had no idea that coronavirus was coming.
Here are a few stories from this week related to the spread of information about the COVID-19 virus:
- A report by the University of Toronto’s Citizen Lab suggests WeChat blocked almost 45 coronavirus-related keywords in early 2020, when the virus was in its infancy. Vox’s latest Reset podcast looks at the topic too.
- Business Insider Singapore pours scorn on Reddit’s efforts to challenge false information about coronavirus. New subreddits have cropped up since I wrote about it a month ago (EiM #51).
- In App Store moderation-related news from China, school kids have got DingTalk (the app used for remote teaching) removed by giving it thousands of one-star reviews.
Anything I've missed here? Hit reply and I'll include it in next week's edition.
⏰ Not forgetting...
Jonathan Zittrain (whose work I’ve linked to before here) outlines his Rights vs Public Health era framing in relation to Facebook’s recent white paper on content moderation (EiM #52 - Facebook is pro thresholds) and advocates for a new Process era. Worth a read.
An Ambitious Reading of Facebook’s Content Regulation White Paper - Just Security
I was going to write about The Bristol Post highlighting trolls on its Facebook page (and I still might) but good friend Adam Tinworth has written about it so succinctly that I'm not sure I need to.
Naming and shaming is not a community management strategy
It is only right that the first use of Twitter’s manipulated media labels was reserved for President Trump.
Twitter flags video retweeted by President Trump as ‘manipulated media’
It’s the first time the social network has enforced a new policy to fight doctored videos and photos.
A timely op-ed over on OpenGlobalRights argues that Facebook talks a good game when it comes to human rights but that’s about it.
Facebook’s new recipe: too much optimism, not enough human rights | OpenGlobalRights
Because social media platforms dominate public forums worldwide, a governance system rooted in “social values” instead of human rights may be convenient for companies, but it is deeply unsatisfactory in terms of protecting users.
Chinese AI moderation systems can be bought relatively cheaply and could perpetuate censorship around the world, reports the WSJ.
Made-in-China Censorship for Sale - WSJ
Chinese AI tools from tech giants like Alibaba make it easier to scrub online content—and anyone can buy them.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.