6 min read

The many definitions of online safety, Meta teases Trump and shadowbanning explained

The week in content moderation - edition #186

Hello and welcome to Everything in Moderation, your content moderation week that was. It may be a new year but it's still written by me, Ben Whitelaw. I hope you had a good break.

2023 promises to be no less crazy than 2022 as far as online speech goes and, as I noted in my prediction for the excellent Horrific/Terrific newsletter, you can expect a year of regulators flexing their muscles against Big Tech once more. Buckle up for a wild one, especially new subscribers from the University of Padova, Meta, Global Counsel, ActiveFence and elsewhere.

Fear not, EiM will be with you all the way, hitting your inbox every Friday and generally trying to put on a brave face. I've also committed to publishing one deep-dive piece every month (don't call it a resolution); these pieces are currently for EiM members only. Don't worry if that's not you just yet - until the end of January, you can become a member for even less than usual.

The weekly newsletter and the extra Q&As —including this new one from a former Meta employee— wouldn't happen without EiM's members. But if you can't support the newsletter financially, perhaps there are other ways to collaborate over the next 12 months? Drop me an email if you think so.

Here's everything in moderation this week — BW


Policies

New and emerging internet policy and online speech regulation

Vietnam has joined Indonesia, Singapore and Thailand in recently introducing new laws to deal with "false" content on social media platforms as part of an effort to rein in platforms ahead of upcoming elections. As with similar legislation passed in recent years, platforms must remove offending content within a 24-hour timeframe, which could contribute to "incentivising them to err on the side of caution", according to digital rights groups Access Now and Article 19.

Not coincidentally, governance fragmentation in democratic countries was a theme of the 2022 Internet Governance Forum, held in Ethiopia in December. It's a topic I'll keep a close eye on throughout this year.

Talking of fragmentation, changes to Section 230 are "long overdue", according to a new op-ed by two US professors, who put forward three ideas for amending the 26 words that created the internet (h/t Jeff). In a piece on The Conversation, Robert Kozinets and Jon Pfeiffer posit the ideas of verification triggers, transparent liability caps and Twitter court as means of increasing public responsibility and limiting misinformation and other harms. I'd love to hear your thoughts on this.

We've all come across ChatGPT in recent months and some of us will have had a play with OpenAI's remarkably intuitive prompt-based chatbot. But, with so many users and so much hype, how does it approach safety? And what can we take from the fact that Elon Musk —one of OpenAI's founders, remember— seemingly isn't a massive fan? For some answers, read this thought-provoking piece from Tech Policy Press about the different meanings of safety. My read of the week.

New Q&A: Those working in trust and safety product roles —engineers but also designers and product managers— are among the deepest thinkers about online safety but are often hidden from view.

It's for this reason that I'm especially delighted to publish the final instalment of the "Getting to Know" mini-series, in collaboration with the Integrity Institute.

Glenn Ellingson has vast experience working for platforms with user-generated content, most recently as an engineering manager at Meta. He talks thoughtfully about mitigating bad behaviour and the limitations of the enforce/allow model that so many platforms use. Have a read.

Q&As like this one will always remain free to read thanks to the support of EiM members. If you're a regular reader but not yet a member, become one today.

Products

Features, functionality and technology shaping online speech

A new tool being developed by Google R&D unit Jigsaw and Tech Against Terrorism will allow smaller websites to detect terrorist content more quickly and easily, it was announced this week. Details about the tool, and when it will be available, are scarce, but Yasmin Green, CEO of Jigsaw, said it was borne out of a need to halt the movement of "terrorist content and Covid hoax claims to [other sites]" from major platforms.

The wider context here is that Jigsaw has doubled down on building and open-sourcing tools for better moderation in the last 18 months. Last year, in partnership with Thomson Reuters, it launched a tool called Harassment Manager to help women journalists document and manage abuse, and this announcement feels as if it's in the same vein: product-led, open source and consultative in its approach. I spoke to research lead Tesh Goyal about it in April 2022, in case you missed it.

Platforms

Social networks and the application of content guidelines  

Meta will soon announce whether Donald Trump will be allowed back onto Facebook and Instagram, it was reported this week. A working group led by Nick Clegg (EiM #92) and including staff from public policy, communications, content policy, and its safety and integrity teams has been convened to make the call. The date is disputed —The Financial Times reported it will happen by tomorrow (7th January) while CNN said "the next few weeks"— but, if you ask me, the fact that this story has made it out into the world suggests that it's more likely than not that Trump will be given back the keys to his account.

Staying with politics, Twitter will be reversing its ban on political advertising, according to Ella Irwin, its new head of Trust and Safety. The ban had been in place since November 2019 (EiM #41) but it is not a big surprise that this is changing. I mean, what's the point of spending $44bn on a platform full of politicians and journalists if you can't allow other people to influence them? Will it end badly? Yes, almost certainly.

YouTube creators have begun to see the impact of its updated monetisation policy, which lays out rules on videos featuring adult content, violence, drugs and the unhelpfully broad "harmful acts". The new version of Advertiser Friendly Guidelines was rolled out at the end of December and has affected gaming channels in particular, where violence and drugs are often baked into the gameplay.

YouTube monetisation is a fascinating sub-area of content moderation and was described in a 2020 research paper as a "shifting financial and algorithmic incentive structure". If you're a gamer who's been affected, or you have knowledge of the effect of these changes, get in touch.

People

Those impacting the future of online safety and moderation

I have a soft spot for regular platform users advocating for more transparency and better digital rights. Zev (EiM #109), with his one-man TikTok algorithm campaign, was one such person, as was Instagram user Neoliberalhell (EiM #169). Art teacher Jennifer Bloomer is the latest.

In this excellent read from The Washington Post, Bloomer explains how her account — which she used to share her activism and anti-racist art — received fewer likes and replies after she had an ad rejected for its "political" nature. She suspects her account was shadowbanned and has tried to find out why.

Instagram explained to WashPo reporter Geoffrey A Fowler that her account's reach wasn't limited. However, not long after, her account returned to its regular audience, suggesting it could have been the famous "technical glitch" (EiM #67) or perhaps a form of quarantine or bozo (EiM #28) imposed by Meta. We'll never know.

Whatever it may be, Fowler's conclusion that "we the users also need the power to push back when algorithms misunderstand us or make the wrong call" is an important one. Good on Bloomer for doing so.

Tweets of note

Handpicked posts that caught my eye this week

  • "Government involvement in content moderation is a serious issue deserving serious scrutiny and maximal government and industry transparency." -EFF's David Greene returns from a four-month Twitter hiatus with a bang.
  • "They're adding a 'Verified phone number tag' to Indian profiles only" - Aroon Deep of The Hindu spots something new on Twitter and YouTube.
  • "What really scares me is to realise how much the policy-making community is far far far behind this logic" - Alberto Fernández Gibaja shares a piece on why content moderation is a dead end. Discuss.

Job of the week

Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1500+ EiM subscribers.

New year, new roles opening up in your team? If so, become an EiM organisational member to get your position included here and in front of some of the smartest people working in integrity, speech governance, digital rights and policy. Get in touch today — your usual Job of the week returns next Friday.