
📌 10-year ban for online abuse, doctors prescribe moderation and Twitch 'hate raid' progress

The week in content moderation - edition #143

Hello and welcome to Everything in Moderation, your content moderation week-in-review, out every Friday. It's written by me, Ben Whitelaw.

Welcome to newly subscribed folks from Tech Against Terrorism, Mozilla Foundation, 4kicks productions, New Zealand's Department for Internal Affairs, Twitter and a host more. Thanks for taking a punt on EiM, I hope you find it useful.

My 'read of the week' makes its return in today's edition. For those who are new to EiM, it's the story that I enjoyed or that got me thinking most during the past seven days. Get in touch via email or Twitter (@EiMdotco) if you take a different view — BW


📜 Policies - emerging speech regulation and legislation

The Nigerian government has agreed to lift Twitter's seven-month ban after the company satisfied six key conditions, including agreeing to appoint a country representative to oversee operations and vowing not to "undermine national security". Twitter Public Policy's announcement tweet yesterday racked up 25k likes, demonstrating the popularity of the reversal, although, in reality, most committed tweeters will have used VPN services during the blackout. Twitter, it should be noted, only opened its first office on the continent — in Ghana — in April 2021, so this is a big deal.

In Ireland, the Online Safety and Media Regulation Bill — which places a legal onus on platforms to keep users safe and defines "online harms" — was approved by ministers on Wednesday. Facebook and the UCD Digital Policy Centre had previously both argued that the bill should be paused "until EU laws are finalised and in force", which is due in the coming months. However, the Oireachtas (Ireland's national parliament) is going ahead nonetheless and will now recruit an Online Safety Commissioner to oversee the new codes and, if organisations fail to comply, issue fines of up to €20 million (or 10% of annual turnover).

The UK government has announced that people who post racist abuse online could be banned from attending football matches in England and Wales for up to 10 years. It comes on the back of the abuse of three Black England players following the Euro 2020 final (EiM #121) and after pressure from football authorities including the Premier League (EiM #127), which took part in a social media blackout last April. I think we'll see greater convergence between on and offline penalties in the coming years. It's also my read of the week.

💡 Products - the features and functionality shaping speech

Safety tech, the catch-all term for software and products that help keep users safe online, is a billion-dollar US market, according to an investor report based on research by cyberpsychologist Professor Mary Aiken. I haven't read the 52-page report in full and urge caution since the publisher, Paladin Capital Group, has a portfolio that includes a number of safety tech companies, but it is nonetheless interesting to note that there are more than 160 safety tech businesses with 8,800 staff in the US (thanks to Lucy for the heads up).

💬 Platforms - efforts to enforce company guidelines

For a change, the biggest platform moderation news of the week comes not from Facebook or YouTube but from Twitch, whose Global VP of Trust and Safety published an open letter detailing the video platform's plan to make streamers safer in 2022. Notable nuggets from Angela Hession's blog post include:

  • A Boris-Johnsonesque non-apology for the "hate raids" experienced by Black and LGBTQIA+ streamers (EiM #130)
  • The stat that 15 million bot accounts were removed from the platform last year and bot attacks reduced "significantly"
  • The announcement that Twitch's appeals process and sexual content policy will be updated this year, both of which have long been a thorn in its side (EiM #62)

Did you spot Pinterest's announcement back in October about its new live, shoppable TV channels? Well, the social network is now staffing up its moderation efforts, according to a job ad. The contractors, who will be employed by its third-party partner PRO Unlimited, will watch live streams and "monitor live chat during Pinterest TV live streamed episodes". I look forward to a spate of inevitable stories about hate speech and health misinformation hiding among streams about home decoration ideas and the tastiest root vegetable recipes.

Spotify has come under pressure for its hosting of the Joe Rogan podcast after 270 healthcare professionals and scientists signed an open letter calling on the company to "establish a clear and public policy to moderate misinformation on its platform". You'll remember that last April, the music-streaming service removed 40 episodes of The Joe Rogan Experience for Covid-19 misinformation and other false claims.

👥 People - folks changing the future of moderation

Of all the Facebook executives that have left the company and gone public with their criticisms of how it mitigates harm, Katie Harbath is the most senior.

The former public-policy director for global elections resigned in March last year and was active on Twitter during the #FacebookFiles revelations, as she shared her concerns about the company's leadership and approach to content moderation.

This week Harbath spoke to the WSJ about how her job went from training political parties to being 80% "escalations" and other PR crises. That hasn't changed since she left, either: just last week, a Polish party criticised the company for removing its page for contravening its Covid misinformation policy (EiM #142).

Harbath seems to know her stuff and, in her role with the newly formed Integrity Institute, has a chance to, in her words, "actually do something".

🐦 Tweets of note

  • "After griping about overzealous AI content moderation for years, I finally got to experience it myself" - Ranking Right's Zak Rogoff gets on the wrong side of a Facebook Marketplace algorithm.
  • "The rare expansion of content moderation rules to allow more of something" - Stanford professor Daphne Keller on last year's Grindr policy change on exposed buttocks.
  • "783 million people have joined the internet since 2019. Here's the big question: what's being done to protect all those people online?" - Justin Davis, the founder of Spectrum Labs, wonders if we're doing enough.