
📌 'A vague, standardless penalty', Twitch hot tubs and mods on stage

The week in content moderation - edition #111

Welcome to Everything in Moderation, your weekly newsletter about the policies, products, platforms and people shaping content moderation. It's curated and produced by me, Ben Whitelaw.

Today's newsletter is longer than usual — just shy of 1000 words — due to this week's completely unprecedented nature. But I've made sure it only includes what you need to know. Let's get stuck in — BW


📜 Policies - guidelines and speech regulation

As far as content policy goes this week, there's really only one story in town: the independent-but-Facebook-funded Oversight Board's decision to uphold the suspension of Donald Trump while criticising the way Facebook imposed an 'indefinite suspension'. The Board described the indefinite suspension, which appears nowhere in the company's content policies, as a 'vague, standardless penalty' and ordered Facebook to complete a review within six months to decide the former President's fate. In effect, the Board kicked the final decision down the road, and I wouldn't be surprised if Trump returns to the platform with some big caveats.

I'm giving this section over to the best takes that I've read since the decision on Wednesday (email me if there are any I've missed):

Talking of Oversight Boards, Koo, the Indian Twitter rival (EiM #100), is launching a four-person board "in the next few months". If recent events in India are anything to go by (EiM #110), Koo's board members will have a job on their hands.

💡 Products - features and functionality

A popular make-your-own text adventure game faced a backlash this week after changing the way users' storylines are moderated. AI Dungeon, which has 20,000 daily players, added "technical safeguards" after a researcher revealed that around a third of the custom characters and storylines written by players were NSFW and some included child pornography. Following the change, disgruntled users experienced a string of false positives (for example, any mention of "fuck" was removed) and the game's creators published a blog post admitting "our initial test was not perfect".

Remember that Twitter was testing prompts to encourage greater civility in users' replies? Well, they were rolled out to English-language users this week following a successful pilot in which a third of prompted users amended their reply or decided to keep schtum. Such prompts are on the rise: Nextdoor rolled out a similar feature last month to stem racist language.

💬 Platforms - dominant digital platforms

Users on Etsy have been found selling fake Covid-19 cures, elephant ivory and, wait for it, preserved kitten fetuses. An investigation by Business Insider identified over 800 listings that contravened the online marketplace's prohibited items policy and were only removed once Etsy was contacted about the story. Interestingly, the handcraft site released a preview of its upcoming Transparency Report the day before the story's publication in an attempt to head off criticism.

Have you heard of 'hot tub meta', the ongoing Twitch debate about whether newly popular hot tub videos go against the platform's guidelines? Me neither, until this week, when Head of Community Productions Marcus Graham (aka djWHEAT) clarified that hot tubs were fine unless sexually suggestive or explicit and advised viewers to use the 'I'm not interested' button if they didn't want to see such content. Understandably, not everyone was happy with the response.

👥 People - those shaping the future of content moderation

I've always felt that there's inherent drama and intrigue in moderation and the closed-door decisions that form the basis of the job. So I was glad to stumble across this behind-the-scenes piece from playwright Ken Urban about his upcoming play, The Moderate.

The plot is in many ways familiar: Frank, separated from his family and out of work, takes a job reviewing 2,000 videos a day and has to confront his own mortality, and that of a teenage girl he has never met. I really hope the play comes to the UK and that some of the many mods mentioned in EiM get to see it.

🐦 Tweets of note

  • "Not bans per se, but restrict how someone's post goes out into the world." - product manager Carlos Delgado notes an interesting section from Charlie Warzel's recent newsletter on the unbundling of Facebook.
  • "It's turning into a nice collection of interesting case studies" - Mike Masnick flags the work that his Copia Institute has been doing with the Trust and Safety Foundation Project. Worth checking out.
  • "These are not contradictory statements." - Evan Greer raises an interesting challenge when it comes to app stores, speech and who governs the infrastructure.

Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.