Hello and welcome to Everything in Moderation, your guide to the policies, products, platforms and people shaping the future of online speech and the internet. It's written by me, Ben Whitelaw and supported by members like you.
Before we get into today's edition, I want to make a note of last weekend's events and the ongoing horror of this violent conflict. I hope that those of you with family, friends or connections to Israel or Palestine are safe and that all EiM subscribers are holding up as they watch it play out on the news.
A rainy British welcome to new subscribers from BSR, Université de Lausanne, Kamara Global, Princeton University, Center for Digital Action, Brinkhof, Etsy, Unitary, Newsguard and a host of others. Don't be shy — drop me an email to introduce yourself and what you do.
Here's everything in moderation from the last seven days — BW
Today’s edition is in partnership with Sightengine, a company with 10 years of experience in AI for Trust & Safety and content moderation.
AI-based content moderation can be hard to integrate and customize. Real-life situations are tough to predict and aren't always clear-cut.
Sightengine introduces AutoRule. By testing your expectations against complex real-life scenarios, AutoRule learns in a few minutes how to best replicate your intended moderation rules.
New and emerging internet policy and online speech regulation
Meta's manipulated media policy is under review after the independent-but-Meta-funded Oversight Board opened a case about an altered video of Joe Biden which suggested that he was a paedophile. The Financial Times reported that Meta had refused to remove the May 2023 post — which had fewer than 30 views as of last month — because it argued that the video was "merely edited to remove certain portions". With President Biden due to be in the news a lot over the next year and the harms of generative AI becoming better understood, I can see why the Board has taken this case up now.
It also announced a case about an illustration of a bullet linked to the war in Sudan, which Meta removed twice, each time for violating a different policy.