
Understanding 'legitimate deletion', Canada's hate speech bill and making better policy

The week in content moderation - edition #118

Welcome to Everything in Moderation, the weekly newsletter that helps you keep on top of what's going on in the world of content moderation. It’s curated and produced by me, Ben Whitelaw.

Welcome to new subscribers from Open Intelligence Lab, Patreon and elsewhere and thank you to a brand new subscriber, who bought me a ko-fi before even receiving an edition in their inbox. That's trust, right there.

Here are this week's need-to-know-about stories — BW

📜 Policies - emerging speech regulation and legislation

Canada has moved ahead with legislation to curb hate speech that will make it easier for those targeted to complain to platforms. Bill C-36 includes provisions to fine users up to $50,000 if they continue to commit hate speech after being warned, but it stops short of fining platforms that fail to respond quickly to abuse, as bills in the UK and elsewhere propose. An election is widely expected in Canada in the autumn, meaning there is currently no timeline for the bill to become law.

The UK's Online Safety Bill (EiM #112) is midway through its three-month pre-legislative scrutiny, which makes the timing of the Carnegie Trust's recently published analysis very handy. Among other things, it notes that the bill's provisions on harms to adults on the largest platforms are "ill-defined and weak" and that the bill "takes too many powers for the Secretary of State". Room for improvement, let's say.

💡 Products - features and functionality shaping speech

The recent launch of Spotify's Greenroom and Facebook's Live Audio Rooms as direct competitors to Twitter Spaces and Clubhouse has reignited a conversation about how platforms moderate audio. The Quint has published a good comparison of their policies, noting that Spotify's community guidelines are a "clear winner" on paper. The proof will be in the proverbial pudding.

💬 Platforms - efforts to enforce company guidelines

Do users who have their comments removed trust the decision differently depending on whether the deletion was done by a human moderator or an algorithm? Generally no, according to a new report from the Center for Media Engagement in conjunction with researchers in the Netherlands and Portugal. It found that people tended to perceive the deletion similarly, although there was a difference in what people perceived as legitimate deletion (hate speech, for example, was deemed a fairer target than profanity). A worthwhile read if you manage a community yourself or a team of moderators.

Over at Twitch, another storm about "sexually suggestive content" is brewing after two streamers had their channels removed following the simulation of ear-licking during an ASMR yoga stream (never thought I'd write that sentence down). Other streamers criticised the "habitual line-steppers" and Twitch for the inconsistent application of its rules.

TechRadar has interviewed Reddit CTO Chris Slowe, who talks about working on products to bolster the site's community health since his return to the company in 2016 after a five-year break. It's a bit of a puff piece that doesn't go into detail about the Great Reddit Mod revolt (EiM #69) but there are a few interesting lines, including "In some cases, the report is an indication that it's too late".

👥 People - folks changing the future of moderation

The things that people can and can't post online are enshrined in both governmental and platform policies. But who is writing these rules?

It's this question that is the driver for an open letter and new EU-focused campaign by six female campaigners — Aina Abiodun (founder and CEO), Asha Allen (digital safety and gender equality expert and activist), Dr Carolina Are (online moderation researcher and activist), Hera Hussain (founder and CEO, Chayn), Dr Nakeema Stefflbauer (CEO, FrauenLoop) and Raziye Buse Çetin (independent AI policy researcher) — and supported by 30+ civil society organisations and academics.

Who Writes The Rules highlights the tendency of policy teams in Brussels to be white and male and calls for racialised and marginalised women to "be part of the rule-making to effectively shape our online experiences, not just share our trauma". I support them and the campaign wholeheartedly.

🐦 Tweets of note

  • "Anyone know anything about this work?" - UCLA professor Sarah T Roberts was given a book containing some leaked PowerPoint slides used to train Facebook content moderators and, to be honest, I'm really jealous.
  • "Content moderation comes for everyone, eventually" - Neuralink general counsel and self-declared trust and safetyologist Alex Feerst notes Coca-Cola's latest cock-up.
  • "I am very proud to know they not only referenced the policy models I put forth in "Content or Context Moderation" but they built on them really well!" - Data and Society's Robyn Caplan on the Trust and Safety Professional Association's new T&S Curriculum, which is well worth a read.

Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.