'Technical glitch' is no longer an excuse
It's been the kind of week that has made me think a newsletter on content moderation isn't what the world needs right now. I've found it hard to motivate myself, which is why today's edition is dropping into your inbox later than usual.
At the same time, I recognise that the issues of free speech, online abuse and platform regulation are at the heart of the George Floyd protests and the political response to them, both in the US and elsewhere. It doesn't make any sense to stop highlighting the inconsistencies of online platforms and their guidelines and policies now. Now is when it's needed most. So here it is, your weekly content moderation roundup.
Stay safe and thanks for reading - BW
PS I'm looking to interview EiM subscribers over the coming weeks - read on for more details...
Platforms should investigate, not just apologise
Enough was already happening in the US this week before a tweet went viral accusing TikTok of censoring the #blacklivesmatter hashtag.
This is the very same video platform, don't forget, that has been heavily criticised for allowing racist users and content to propagate (#EiM 61) and for lacking transparency about its moderation processes. Now millions of users were seeing 0 views for hashtags relating to the George Floyd protests and, perhaps rightly, presuming the worst.
The reality was different. All hashtags, as TikTok pointed out, were showing 0 views as a result of a 'technical glitch' that only occurred on the Compose screen of the app. In a blog post published on Monday, it reiterated the diagnosis and acknowledged how it may have looked to supporters of the movement.
TikTok, however, wasn't the only platform to pass off a fuck-up as a glitch this week.
Facebook also resorted to blaming a 'technical error' for deactivating the accounts of 60 high-profile Tunisian journalists and activists without warning. Facebook is huge in the north African country and was a vital communication tool during the 2011 revolution. Haythem El Mekki, a political commentator whose account was deactivated, told The Guardian: 'It would be flattering to believe that we had been targeted, but I think it's just as likely that an algorithm got out of control.'
There has been a worrying rise in these 'out of control' algorithms in recent months, driven by an industry-wide move towards more automated moderation, and with it a rise in excuses put down to so-called 'glitches'. For example:
- Last week, YouTube blamed 'an error in our enforcement systems' for deleting comments containing certain Chinese-language phrases related to the country's government.
- In March, Facebook accidentally removed user posts, including links from reputable news organisations, because of an 'issue with an automated system'.
- Even in January, before COVID-19, Chinese leader Xi Jinping's name appeared as 'Mr Shithole' on Facebook when translated from Burmese to English. Again, it was put down to a 'technical issue'.
It's clearer than ever that platforms are using 'technical error' as a free pass when content moderation issues arise. It has become a way of sweeping issues that affect user speech under the carpet, of passing the blame to an anonymous engineer or product manager. The suggestion seems to be that if it's a 'technical error', then the platforms can't be blamed.
This is no longer good enough. With more automated systems being used to flag and deal with content that violates platform rules, the 'technical glitch' get-out doesn't wash. Such 'errors' affect users' speech in real time and with real-world implications. If we are going to have more auto-moderated content (and it doesn't look like we have a choice in the matter), we also deserve better responses to breakdowns of those systems than 'computer says no'.
So let's stop underestimating the effect that 'display issues' (TikTok's words) have on people's health and their trust in platforms. Let's make PR teams give details about the cause of blackouts and takedowns, rather than just bland apologies. Let's ensure engineering teams conduct investigations whose findings are made public. And let's pressure platforms like TikTok into only shipping features that won't affect users' speech in the way this week's did.
EiM needs you
Can you spare 30 mins for a video call? I'm hoping to chat with five EiM subscribers over the coming weeks about the newsletter and what it could do better. I can offer a $15 Amazon voucher or I'll donate $20 to an anti-racism charity of your choice. Reply to me if that's you.
The fallout of the Executive Order
Last week's newsletter (EiM #66) was perhaps overly doom and gloom (we're still here, after all). To put that right, here are some good reads on Trump's Executive Order, what it means and whether it will go anywhere:
- A piece on the Lawfare blog explains that, even though the Order will not withstand judicial scrutiny, the mere act of producing it is enough to pressure companies into giving Trump's content preferential treatment.
- Over on The Quint, Rahul Matthan makes a strong case for replacing the Good Samaritan moderation protection with a Bad Samaritan prosecution option.
- EFF continue their good work on Section 230 with a series of essays, including this one on how the Order gets the Federal Trade Commission's job all wrong.
- Trump and Twitter continue to go head-to-head, this time over a copyright complaint about one of the images in a George Floyd tribute video posted from his account.
- Over on The Verge, Facebook said it will re-examine its policies after staff staged a walkout over the platform's laissez-faire attitude to Donald Trump's remarks.
Not forgetting...
One of the co-chairs of Facebook's Oversight Board was involved in a controversy over racist speech this week. Casey Newton of The Verge has tried to read the runes on what it might mean for the board.
The Oversight Board and the N-word | Revue
The Interface - Let's conclude what turned out to be Free Speech Week on The Interface with a look at a case involving the co-chairman of Facebook's new Oversight Board.
The rumour that US authorities jammed communications during this week's protests is reportedly false, and Twitter have removed the accounts that started it.
Twitter suspends hundreds of accounts over fake protest claims
The social media site has ramped up efforts to clamp down on misinformation during the unrest across the US
Snapchat have stopped promoting Donald Trump's account via its Discover tab after his remarks last week, even though they didn't violate its community policy.
Snapchat stops promoting Trump's posts, saying they 'incite racial violence and injustice' | The Independent
Snapchat will stop promoting President Donald Trump's account, saying it will 'not amplify voices who incite racial violence and injustice'. Mr Trump's account will still appear when searched for, but the social media app will no longer actively promote his account, or feature him using the Discover function.
On any other week, if the world wasn't burning, I would have written at length about this crazy Australian legal judgement and what it means for publishers.
Australian media companies face defamation liability for comments on Facebook after court dismisses appeal | Media | The Guardian
NSW court of appeal upholds ruling in Dylan Voller case that media companies can be held responsible for defamatory comments under stories they post on Facebook
Moderation but for TV? Roku found that a QAnon channel was live on its platform for two weeks after slipping through its review processes.
Roku removes dedicated QAnon channel that launched last month - The Verge
Roku has removed a channel dedicated to the popular QAnon conspiracy theory movement. The streaming company, because it lets anyone create a channel on its platform, inadvertently legitimized the opinion show, which began peddling misinformation when it launched last month.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.