3 min read

📌 Klonick on the Oversight Board, moderating audio and Koo's local plan

The week in content moderation - edition #100

Welcome to Everything in Moderation, your weekly newsletter about content moderation, by me, Ben Whitelaw.

This week marks the 100th edition of the newsletter (🎉). How time flies, eh? I've enjoyed writing every word, and it's been great to see a community of people interested in moderation and speech policy build up organically around it.

Thank you in particular to everyone who has sent feedback, jumped on a call or shared their support over the last two and a half years. I hope to bring you the long-promised Q&A series and other useful resources over the coming months.

If you enjoyed this edition of EiM, or any of the last 100 sends, give it a tweet or support it with a small contribution via Ko-fi. It really helps to keep me going.

That's enough of that. Here's this week's news and links — BW


📜 Policies - company guidelines and speech regulation

We've seen countries adopt speech policies at pace over the past three years and, in the last few weeks alone, we've had word of incoming legislation in Hungary, Brazil (EiM #99), Poland (EiM #98) and China (EiM #97). The outcome of all of this? The 'splinternet'. As The New York Times explains, the term describes an internet made up of many smaller internets, with rules drawn at countries' borders. If you're a fan of the open web, it's not a good thing. Expect to hear the term a lot more.

Facebook, Google, Microsoft, Twitter, Discord, Pinterest, Reddit, Shopify and Vimeo have joined forces to set up The Digital Trust & Safety Partnership, a new association to develop best practices for handling harmful content and behaviour online. There isn't much information on its website but the press release notes that it has been advised by Alex Feerst, who led Trust and Safety at Medium and is someone I hold in high regard. Let's see what it does next.

💡 Products - features and functionality

As I've covered here before (EiM #86), moderating audio is a minefield. With Clubhouse and Twitter Spaces taking off, The Verge has done a good write-up of exactly why that is (many actors in the space, reliance on RSS, lack of transparency) and includes some particularly pathetic excuses for why moderation is hard (one podcast player creator literally says “there’s nothing more I can do”. Tiny violins, etc).

The co-founder of Koo, a Twitter clone app with 3 million Indian users that has gained traction in recent weeks, has outlined its approach to moderation in an in-depth interview with Forbes India. Aprameya Radhakrishna explained that each of the six languages Koo currently supports — English, Hindi, Kannada, Tamil, Telugu and Marathi — has local 'community managers' (aka moderators) who respond to user reports. It will add 12 more Indian languages 'soon'.

💬 Platforms - dominant digital platforms

The amount of content removed by mods on Reddit increased 61% last year, according to the platform's newly released 2020 transparency report. The jump, attributed to a sharp increase in user reports and greater use of Automod — its automated content moderation tool — pushed the overall share of content removed from the site up one percentage point to 6%. Pretty good considering it had a full-scale moderator revolt not so long ago (EiM #69).

Twitter this week banned Project Veritas, the so-called 'journalism non-profit' run by conservative provocateur James O'Keefe, for posting the private information of Facebook VP Guy Rosen. Project Veritas, you might remember, published sketchily edited audio from Facebook moderators last year (EiM #70). It had 700k Twitter followers at the time of its suspension.

Parler is back online after SkySilk (which sounds more like an online mattress company to me) agreed to host the free speech social network. This can't go anything other than badly.

👥 People - those shaping the future of content moderation

In 2019, Kate Klonick — the St. John's law professor who has featured regularly in EiM over the last two years — did the impossible: she got Facebook to allow her to report on the creation of the Oversight Board.

This week, a great New Yorker piece, and accompanying podcast, was published about the process. It wasn't well-received by everyone but the article includes some great details:

  • how the participants of a New York workshop hacked the question submission platform to make jokes about Game of Thrones;
  • how the Facebook team drafting the board's charter added feathers to their pens to evoke the founding fathers of the United States;
  • that appeals against posts kept up by Facebook will be heard by the board by mid-2021 (currently it's just takedowns).

The thing that worries me most though is that, when Klonick spoke to Mark Zuckerberg for the piece, he intimated that Facebook would separate people who built its products from those whose job it is to deliberate 'the thorny questions that came along with them'. That's a terrible idea, in my opinion, and one Klonick and other academics in the space will have to keep an eye on.

🐦 Tweets of note

  • 'We've been paying too little attention to power' - Nicholas Dawes, exec director of The City, ties together a few different strands from this week.
  • 'The stages of startup content moderation grief' - Patrick O'Keefe, podcast host extraordinaire and CNN trust and safety consultant, gets it spot on.
  • 'We are being treated to a lot of "content moderation" theatre thanks to Facebook' - Mahsa Alimardani, Oxford PhD, pre-empts the inevitable: 'Facebook: The Musical'.

Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.