
📌 Eradicating racist abuse, Canada's 'bad' bill and pdf transparency reports

The week in content moderation - edition #123

Hello and welcome back to Everything in Moderation, your weekly newsletter rounding up the most important news and analysis about online content moderation. It’s curated and produced by me, Ben Whitelaw.

A warm welcome to a small bus-load of new subscribers from esteemed institutions such as Public Interest Registry, Das Magazin, Wikimedia, the European Union, New Public, Spotify, University College Dublin, The Guardian, Headland Consultancy and more. Thanks for your patience while I took a breather for the last few weeks.

In the last edition of EiM, I promised you’d notice something different about the newsletter when it next arrived in your inboxes. Unfortunately, that’s going to have to wait a few more weeks. But I’m excited to show you what I’ve been working on.

There’s been an avalanche of content moderation news recently so this edition is a bumper one. Whether you’re focused on policy or product or care about platforms or people, there’s something for you — BW


📜 Policies - emerging speech regulation and legislation

The land that gave us Justin Trudeau and poutine might not be the first place that comes to mind when you think about bad online speech regulation. However, Canada’s C-36 bill represents a “terrifying” development, according to the Electronic Frontier Foundation, and contains a list of potential harms that is “vast”. As with other pending legislation, including the UK’s Online Safety Bill, there is a serious risk that platforms will over-censor to avoid fines and website blocking. Not good.

There have also been several reports published since the last edition of EiM which are worth knowing about/bookmarking:

  • The Technology Coalition — the alliance of companies charged with fighting online child sexual exploitation and abuse — last week published its first annual report about Project Protect, a five-year piece of work to increase independent research and knowledge sharing among its members. Judging by the survey that forms the basis of the report, there is a lot of work to do — fewer than 60% of companies use classification algorithms to flag suspected content for review, and only a third utilise parental controls.
  • Outside Looking In, a report from the Center for Democracy and Technology on moderating content in the context of end-to-end encryption (E2EE), highlights user reporting and metadata analysis (a similar process to that used to identify spam emails) as sensible ways forward. It rightly concludes that more research is required.
  • Greater transparency and more data sharing between platforms and researchers were two of the recommendations in this policy paper from the UK Government’s Centre for Data Ethics and Innovation (CDEI) on the role of artificial intelligence in addressing misinformation, which followed a roundtable of experts in 2020 (no idea why the 40-page report took so long to publish — answers on a postcard).

💡 Products - the features and functionality shaping speech

Instagram announced three new ways that users can protect themselves from abuse and harassment: Limits (a way to hide DMs from non-followers or recent followers), stronger pop-up warnings and Hidden Words (DM filters for particular words). It comes after the girlfriend of an England football player revealed she received 200 death threats a day (not just on Instagram) during the 2020 European Championships.

I hold a special place in my heart for the grumpy announcements made by newspapers about changes to how readers comment under online stories and this one from US local news site The Union might be the most cantankerous yet. Don Rogers, the site’s publisher, explains with some disdain that Viafoura — the community engagement platform it uses — will take over human review:

Enough with the constant sniping, the tedious repeating of the same points about the same tired topics that for the most part have very little to do with western Nevada County… I’m done. We’re done.

He doesn’t seem to hold out a lot of hope for the change in strategy: “I’m looking forward to not having to read through the comments on TheUnion.com”. Fair enough, Don.

It’s not just platforms or newspaper websites where content moderation applies: karaoke content providers in China will soon have to audit songs for illegal content following a ruling by the Ministry of Culture and Tourism.

💬 Platforms - efforts to enforce company guidelines

Twitter’s investigation into the racist abuse of England players (EiM #121) has found that the majority of posts came from UK accounts. England manager Gareth Southgate had previously suggested that most abuse came from overseas actors. That’s now clearly not the case: 11 people have been arrested in connection with more than 600 racist comments reported to the police, 123 of which came from accounts based abroad.

We’ve got a new euphemistic phrase for content moderation: “corporate responsibility”. That’s the term Susan Wojcicki, CEO of YouTube, used in her Wall Street Journal op-ed last week in which she outlined the three principles that she feels should shape regulation. There’s also a bold claim that YouTube is a platform of “openness”, which is highly debatable.

Koo, the Indian Twitter clone app, published its second content takedown transparency report, as required by intermediary liability legislation passed back in March. The number of ‘koos’ (its equivalent of a tweet) reported by users in July went down by 37% (although there’s no data on usage, posts or time spent in the app, so we don’t know if that’s a good or bad thing). The proportion of reported posts removed after review remained steady at around 20%. Oh, and the reports are pdfs stored in Google Drive. Old skool.

👥 People - folks changing the future of moderation

It is a given nowadays that online abuse has an outsized effect on marginalised users, particularly women, people with disabilities and folks of colour. I know it and the research supports it. But few stories have felt as real or visceral as the recent Twitch ‘hate raids’ and the response mounted by Black streamers.

The brigading came to light when a Black Twitch user, Critical Bard, shared a video of the bombardment of one of his stream chats by racists. It’s a difficult but essential watch. Others have since been targeted, leading RekItRaven, another Black streamer who uses they/them pronouns, to start the #TwitchDoBetter campaign.

Why has this happened now? According to The Washington Post, incidents of identity-based harassment have spiked on the video streaming platform following the launch of new tags, including ‘Black’ and ‘transgender’, which make creators more easily targetable. I wonder whether Twitch’s team war-gamed the effects of the tags before launching them. In any case, it has fallen to RekItRaven, Critical Bard and other Black streamers and allies to mount a fightback.

🐦 Tweets of note

  • “I’m confident he is no threat to the public but his crimes are serious” - Oliver Kamm, a former colleague of mine, writes about how the online abuse he received led to his harasser being jailed. An interesting case.
  • “The people who care in the rest of the world are mostly stretched too thin to pay attention” - Stanford’s Daphne Keller flags the serious problems with the aforementioned draft law in Canada and lists the people trying to fix it.
  • “The Internet has a Facebook problem. Here’s a bit more of what I mean by that” - Internet Society’s Konstantinos Komaitis digs deeper into the regulatory class’s obsession with the big blue app.

Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.