
πŸ“Œ Addressing algorithmic harm, TikTok combats anti-semitism and new laws Down Under

The week in content moderation - edition #145

Hello and welcome to Everything in Moderation, your Friday fix of content moderation and online speech news and analysis. It's curated by me, Ben Whitelaw.

Before I get to today's newsletter, I have a question: how would you like to support EiM? That's just one of a few questions as part of a short survey that I hope you'll take four minutes to answer. I've been doing a lot of thinking about the future of EiM and your feedback will shape what happens next.

Welcome to folks from Cyan Forensics, University of Washington, Tech Against Terrorism, Integrity Institute, ActiveFence, Georgetown University, and more, all new subscribers to EiM. Don't forget to say hi (or tell me you hated today's edition). Here it is, thanks for reading β€” BW

πŸ“œ Policies - emerging speech regulation and legislation

Australia's Online Safety Act came into effect this week, six months after being passed by MPs. Much like other online speech legislation, it compels social media platforms to remove "cyber-abuse material" within 24 hours or face a fine, delisting from search engines or removal from app stores. Australian eSafety commissioner Julie Inman Grant said it puts the country "at the international forefront in the fight against online abuse and harm", although it should be noted that the act only protects individuals, so harmful posts about minority groups are fair game. No wonder it's been called "an abdication of responsibility" in the past.

Platforms giving academics access to data is the starting point to finding a solution to the "motley and complex" struggles of social media, argues an editorial by The Washington Post this week. The article lays out how elected officials are trying to protect citizens from online harm "without understanding the problems", citing the Platform Accountability and Transparency Act as a key intervention (even though the shape of it is "uncertain at best").

πŸ’‘ Products - the features and functionality shaping speech

Searches on TikTok for terms related to the Holocaust will now see a banner directing users to educational resources or, if the terms violate the platform's guidelines, block search results entirely. The video platform announced the changes to coincide with Holocaust Remembrance Day (Thursday), and they come almost a year after the mass reporting of Jewish creators (EiM #117). As for how the banner is going: the aboutholocaust.org website that users are directed to is down. So yeah.

Twitter is hiring a team lead to conduct technical audits of its machine learning systems, according to a job ad. The ML Ethics Red Team lead will help to proactively address harms caused by algorithms and sits within the company's new Machine Learning Ethics, Transparency and Accountability research team, led by Dr Rumman Chowdhury, "the notorious and beloved" critic (I wish someone dubbed me that). Sarah T Roberts, whose work many of you will have read, was also a consultant at META last summer.

This week saw some major funding news from Spectrum Labs, which announced $32 million to develop its toxicity detection technology and, according to its website, "develop new applications for HR, Sales, Customer Service, and Brand Safety". Spectrum counts Pinterest, Match Group and Udemy as customers and also revealed that it is working with Grindr. Following ActiveFence's announcement of $100m funding in July last year, it's fair to say the moderation funding space is hotting up.

πŸ’¬ Platforms - efforts to enforce company guidelines

Twitter received a record number of government requests to remove content from its platform in the first half of 2021, according to transparency data released this week. 43,387 legal demands were made in the six months to June 2021, with Japan responsible for almost half of those demands, most of which were drug-related, involved obscenity or related to financial crimes, according to Engadget. Russia, Turkey and India make up the other top slots. In 54% of cases, Twitter "withheld" (read: hid) the content or it was removed by account holders.

Talking of India and takedowns, YouTube was forced by the Ministry of Information and Broadcasting to remove 20 channels for "spreading anti-India propaganda and fake news". How the government managed this, despite the fact that the channels were operating in Pakistan, merits further interrogation.

From one company straining to balance commercial imperatives and moderation responsibilities to another: Spotify users have lost access to Neil Young's music after he wrote a letter urging the company to stop "spreading fake information". I expect he won't be the last artist to do so.

πŸ‘₯ People - folks changing the future of moderation

Funny how there isn't a mention of Substack in EiM for months and then two come in consecutive weeks.

Following last week's ISD report on Covid deniers taking refuge on the platform (EiM #144), a new blogpost from the Centre for Countering Digital Hate claims anti-vaccine newsletter writers are making more than $2m via the platform, with Substack taking a 10% cut for itself.

The new revelations put the spotlight back on the company's three founders β€” Hamish McKenzie, Chris Best, and Jairaj Sethi β€” whose latest blogpost once again defends their approach to moderation.

It's an odd read that mixes warnings of the perils of censorship with curious references to the "Online Thunderdome", Barack Obama and killing monsters. I'm interested in what you think β€” drop me a line.

🐦 Tweets of note

  • "The new Safety Data Standards role is super interesting and involves designing, scaling, and implementing a comprehensive safety taxonomy program that covers a wide range of safety issues." - Bumble's Azmina Dhrodia on a great new role in her team.
  • "Moves us beyond the current fixation of auditing a platform's code, which is possibly the least useful + most difficult approach" - Benedict Dellot reflects on work by the Ada Lovelace Institute to audit online harms caused by algorithms.
  • "Meanwhile there is a MASSIVE problem of takedowns and deplatforming of Persian language content and accounts." - Mahsa Alimardani of Article19 and the Oxford Internet Institute with a concerning thread on reports of takedowns of Persian language content by Instagram.