
How internet governance can inform AI safety, 'nonexistent' T&S teams, and Amodei talks

The week in content moderation - edition #223

Hello and welcome to Everything in Moderation, your guide to the policies, products, platforms and people shaping the future of online speech and the internet. It's written by me, Ben Whitelaw and supported by members like you.

If you're a regular EiM reader, you'll know that I'm consistently advocating for trust and safety workers to have a greater say in the ongoing debate about internet safety. A new account from former Twitter employee Anika Collier Navaroli (EiM #174) reminds me that it isn't always that simple because of the trauma and risk involved, particularly for Black integrity professionals. Her piece doesn't fit neatly in any of the sections below but is well worth a mention here.

Anika's article is one of dozens of must-read stories this week — look out for my Read of the Week on one platform's (lack of) response to the Israel-Palestine war. And if you value the newsletter and can afford to become an EiM member, join today and I'll buy you a virtual coffee.

Here's everything in moderation from the last seven days — BW


Policies

New and emerging internet policy and online speech regulation

Internal documents show that Meta knew ahead of the Ethiopian conflict in 2020 that its "current mitigation strategies are not enough"; that's according to a new report from Amnesty International that outlines the tragic story of Meareg Amare, a professor who was killed after his home address was posted alongside false information about his political affiliation. His son Abrham is bringing a case against Meta in Kenya — where the company's Africa moderation hub was based — for damages in the region of $1.5bn (EiM #185).

Get access to the rest of this edition of EiM and 200+ others by becoming a paying member