The racist-approved platform that isn't Gab
Every week, when I sit down to write Everything in Moderation, I try to pull together disparate threads from the content moderation coverage I've read. Sometimes making those connections is hard. This week, as I read three separate stories about racism on TikTok, it was sadly very easy.
If you've had enough of reading about the platform's incompetence, this week's edition also includes no fewer than 12 recommended reads, including stories about sex on Zoom, YouTube lawsuits and the usual dose of coronavirus news.
Stay safe and thanks for reading - BW
PS I wrote my first piece from Sierra Leone about endangered orphan chimps, if that's your bag (how could it not be?).
🤯 TikTok has a racism problem
If TikTok's most recent moderation problem was buried in its community guidelines (EiM #56), the next is surely hiding in plain sight.
The evidence is compelling:
- Al Jazeera reported that British far-right figure Stephen Yaxley-Lennon (aka Tommy Robinson) and hate group Britain First have both set up accounts on TikTok in the last month.
- The Tab published a story about a British-Sri Lankan TikTok user whose videos were removed without explanation. They were later restored after TikTok claimed they had been removed in 'error'.
- Two US students posted a frankly disgusting video to TikTok, deleted their account after it went viral on Twitter and were rightly expelled from school, according to The Grio.
There are two distinct issues at play here, both of which point to the same conclusion: TikTok has a racism problem.
- It is seen as a safe place for figures spouting hate - Yaxley-Lennon and Britain First have both been banned from Twitter and Facebook for hateful conduct (Yaxley-Lennon even references it in his TikTok bio). Despite those warning signs, a number of their recently posted TikTok videos arguably 'dehumanize an individual or a group of individuals based on protected attributes' (from TikTok's community guidelines). And yet no action has been taken. Meanwhile, the US students were not banned from TikTok and were able to create another account right away.
- Its algorithms appear to be penalising minorities - Bias in artificial intelligence is well documented and widely agreed to affect women and minorities most starkly: some facial recognition systems record error rates as high as 34.7% for darker-skinned women. Stars like Lizzo have had videos pulled down without explanation.
It's frankly all too familiar. Despite having watched other social media platforms fail to deal with both user-led and algorithmically driven racism over the last decade, TikTok appears to be going down the same path. Right now, it's looking like Gab-lite.
What can be done about it? Well, a recent paper on artificial intelligence and race by the Transatlantic Working Group on Content Moderation and Freedom of Expression (phew!) recommended greater public transparency and media literacy as ways to combat these challenges.
Sadly, I don't expect Chinese-owned ByteDance to do either of those any time soon. (And, no, I don't count PR exercises like this.)
Public health platforms? (week 7)
Many, many COVID-19 related reads this week, some serious and a few less so.
- In India, a prominent Bollywood actor's manager called for the death of Muslims and journalists, but her tweet was only removed 24 hours and 2,000 retweets later (Buzzfeed)
- Facebook's content review team will be among the first to return to the office, according to a Zuckerberg update (TechCrunch)
- Human Rights Watch called on tech companies to better communicate how they are automating content moderation and appeals processes
- Content moderators should be classed as key workers, and moderation systems should be designed inclusively and made open source, says the Turing Institute (via @adders)
- Eventbrite and Facebook are removing listings for COVID-19 protests and rallies on the grounds that they violate social distancing guidelines (Yahoo News)
- Researchers and non-profit organisations have sent an open letter to platforms urging them to store removed posts and help researchers take advantage of the 'unprecedented opportunity' for research presented by COVID-19 (The Verge)
- A virtual fitness class for pensioners was taken down after Facebook categorised it as 'sexual content' (Metro) 😱
- Related, but not quite news: Zoom is being used for sex parties against its terms and conditions, but don't worry, it can't see you (PC Mag)
Not forgetting...
The CEO of crypto company Ripple, the fantastically named Brad Garlinghouse, resorted to taking YouTube to court to have fake accounts bearing his name removed. I liked this quote he gave in this piece: "YouTube did $15 billion worth of revenue last year. You're telling me they can't spend more money to police their own platform?"
A new lawsuit against YouTube shows how hard it is to get the company to respond to abuse - The Verge
An impersonation scam involving cryptocurrency is now heading to the courts, but the average person has far fewer options
Kaitlyn Tiffany writes in The Atlantic about why the Internet in the time of COVID-19 isn't as good as we initially thought.
How the Coronavirus Is Changing Facebook Moderation - The Atlantic
After a few weeks of faith in the possibility of online utopia, the cracks are starting to show.
A newly published study looking at the American College of Surgeons' community forums found that 43% of threads contained unsafe advice, although much of it was corrected by other doctors. (via @RichMillington)
Is the American College of Surgeons Online Communities a safe and useful venue to ask for surgical advice? - Europe PMC
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.