📌 'A repository of hate speech', TikTok pay wars and life as an outsourced moderator
Hello and welcome to Everything in Moderation, the weekly newsletter about content moderation, now and in the future. It's written by me, Ben Whitelaw, every Friday.
A special welcome to new faces from the University of Exeter, Headland Consultancy, TaskUs, Sorbonne University, Demos and elsewhere. Apologies if there was a delay between subscribing and receiving the newsletter in your inbox; there was what the platforms call "a technical glitch" (EiM #67).
I'm taking a few days off this week so today's edition was put together a little earlier than usual. Nonetheless, there's plenty to get into. Here's what you need to know — BW
📜 Policies - emerging speech regulation and legislation
The UK government could enforce the use of "gold standard technology" under the Online Safety Bill as part of efforts to protect children from harmful content, according to reports this week. The move would see Ofcom, the bill's proposed regulator, assume the power to mandate the use of content filters, monitoring software and artificial intelligence tools. Not only is it the second expansion of the bill's scope in consecutive weeks (EiM #148) but it suggests that Priti Patel, the Home Secretary, thinks technology is a panacea for the internet's evils. Worrying times in Westminster.
A safer internet requires "an objective accountability mechanism developed by practitioners in close consultation with governments and experts from academia and civil society" according to the head of a coalition of technology companies. In a piece for the World Economic Forum, David Sullivan implores companies to develop "processes and controls" to avoid user harm and to use external assessors to verify measures and avoid "unintended consequences for human rights and economic opportunities alike". Sensible stuff.
💡 Products - the features and functionality shaping speech
Automated moderation, image recognition and enhanced search tools are just some of the ways that OpenSea, the non-fungible token (NFT) marketplace, will try to address a growing problem with bad actors. According to the company, 80% of NFTs created (or minted) on the platform are plagiarised or fake, and scams and thefts have dogged the platform. This very good Wired piece has more details but I just can't believe that these safety tools weren't built at the point where minting was made open and free.
From a platform putting growth before safety to a game that grew as a result of its simplicity: Wordle has come under fire this week after its new owner, the New York Times, removed a number of slurs and racist terms from its database, along with words including "agora", "fibre", "lynch" and "wench". The company released a statement saying the move was about "keep(ing) the puzzle accessible to more people" and, as many people have rightly pointed out, it's a reminder that "everything is a content moderation problem".
💬 Platforms - efforts to enforce company guidelines
Third-party Facebook moderators working in Nairobi take home as little as $1.50 an hour and face a culture of workplace intimidation, according to an investigation by TIME magazine. Employees at Sama (formerly Samasource) were reportedly told to watch only the first 15 seconds of a video, expected to make decisions on other posts within 50 seconds, denied wellness breaks and even fired for trying to secure better pay and working conditions. The treatment of contract moderators has been a longstanding thread in EiM (#51, #144), but I expect similar stories could be written about many contractor companies around the world. The story made the cover of this week's magazine and it's also my read of the week.
That story from Kenya is particularly interesting in light of reports that TikTok has poached almost 200 moderators from third-party contractors and business process outsourcing (BPO) companies since the start of last year. Better pay, improved psychological support and the chance to work from home were among the reasons to move, according to one former Accenture worker quoted in the piece.
Another person riding the wave of platforms' failure to get on top of online safety is Nick Clegg, who was this week announced as Meta's new president of global affairs and the man responsible for all policy matters. The former UK deputy prime minister will have a dual reporting line to Mark Zuckerberg and his existing boss Sheryl Sandberg, and will focus on regulatory issues and the company's quest to build a metaverse. Buckle up, everyone.
Google Podcasts is a "repository of hate speech" where content is only blocked in "rare circumstances, largely guided by local law", according to UK media outlet Tortoise. As Spotify continues to feel the pressure over its decision to host Joe Rogan, Tortoise found shows on the Google equivalent hosted by far-right and extremist groups including the American Nazi Party, Atomwaffen and the National Vanguard. Google Podcasts has over 50m downloads on Android alone, so it's not insignificant.
👥 People - folks changing the future of moderation
"That's creepy". "He's fine with it bruh". "It's a virtual world!". If you thought being a moderator on the social web was hard, wait until you see what it's like in the virtual world.
A TikTok video shared this week shows the challenge of overseeing Facebook's virtual reality platform, Horizon Worlds. You should watch it for yourself but it shows a Community Guide called Peanutbutter doing his utmost to mediate between a bunch of kids fighting over a boomerang. Meanwhile, a genial man asks how he can search for other rooms and another kid pokes someone with their "holy finger". There's no other way to describe it than utter chaos. "This is way too much", says Peanutbutter. I hope he had a long lie-down afterwards.
🐦 Tweets of note
- " There's part of me that wonders why platforms aren't more sophisticated about this (it's not trivial but not hard either)" - University of Minnesota university professor Dr Stevie Chancellor reacts to a story about how users are getting around content filters.
- "Tinder swindler is a content moderation issue" - Doctoral researcher Divij Joshi with a unique take on the new Netflix documentary. I'd read a longer piece on just this, in all honesty.
- "Nice rhetorical flourish, but I suspect this is actually aimed at tackling content that directly facilitates child abuse" - NSPCC's Andy Burrows questions the framing of the Financial Times' front page.