The many definitions of online safety, Meta teases Trump and shadowbanning explained
Hello and welcome to Everything in Moderation, your content moderation week that was. It may be a new year but it's still written by me, Ben Whitelaw. I hope you had a good break.
2023 promises to be no less crazy than 2022 as far as online speech goes and, as I noted in my prediction for the excellent Horrific/Terrific newsletter, you can expect a year of regulators flexing their muscles against Big Tech once more. Buckle up for a wild one, especially new subscribers from the University of Padova, Meta, Global Counsel, ActiveFence and elsewhere.
Fear not, EiM will be with you all the way, hitting your inbox every Friday and generally trying to put on a brave face. I've also committed to publishing one deep-dive piece every month (don't call it a resolution); these are currently for EiM members only. Don't worry if that's not you just yet - until the end of January, you can become a member for even less than usual.
The weekly newsletter and the extra Q&As — including this new one from a former Meta employee — wouldn't happen without EiM's members. But if you can't support the newsletter financially, perhaps there are other ways to collaborate over the next 12 months? Drop me an email if you think so.
Here's everything in moderation this week — BW
New and emerging internet policy and online speech regulation
Vietnam has joined Indonesia, Singapore and Thailand in introducing new laws to deal with "false" content on social media, part of an effort to rein in platforms ahead of upcoming elections. As with similar legislation passed in recent years, platforms must remove offending content within 24 hours, which could contribute to "incentivising them to err on the side of caution", according to digital rights groups Access Now and Article 19.
Not coincidentally, governance fragmentation in democratic countries was a theme of the 2022 Internet Governance Forum, held in Ethiopia in December. It's a topic I'll keep a close eye on throughout this year.
Talking of fragmentation, changes to Section 230 are "long overdue", according to a new op-ed by two US professors, who put forward three ideas for amending the 26 words that created the internet (h/t Jeff). In a piece on The Conversation, Robert Kozinets and Jon Pfeiffer posit the ideas of verification triggers, transparent liability caps and Twitter court as means of increasing public responsibility and limiting misinformation and other harms. I'd love to hear your thoughts on this.