
Verifying who you say you are, Canada's regulatory experts and banning climate misinfo

The week in content moderation - edition #154

Hello and welcome to Everything in Moderation, your online safety and content moderation week-in-review. It's written by me, Ben Whitelaw.

Welcome to new subscribers from ActiveFence, Ofcom, Discord and a host of other folks at the International Journalism Festival, where I'm on a panel tomorrow about the internet's essential workers.

To coincide with the festival, I wanted to highlight the particular challenge of abuse and threats faced by reporters, especially women and journalists of colour. The subsequent Viewpoints Q&A with former Jigsaw research lead Tesh Goyal offers some hope that, with more investment, that won't always be the case. Do have a read.

Today's edition covers everything from Ukraine to moderating virtual experiences. Let me know what you think and, if you find EiM useful, consider becoming a supporting member — BW


Policies - emerging speech regulation and legislation

44 days into the Ukraine war, major social media companies continue to make significant changes to how they manage and moderate content about the conflict.

The major shift this week was at Twitter, which announced that it would stop amplifying state-run accounts via the timeline, search or explore pages and would limit accounts that post videos or images of prisoners of war.

Over the last two weeks, questions have been raised about how platforms will preserve evidence that might later be used in war crimes trials and, as Tech Policy Press points out, it's clear that these sites "were never designed with atrocity documentation in mind". Twitter's announcement acknowledges that vital function.
