A Trump-inspired crisis protocol, sizing up Section 230 and live streaming dangers

Hello and welcome to Everything in Moderation, your 'what-the-hell-just-happened?' review of the week’s content moderation and online speech news. It’s written by me, Ben Whitelaw.
The past few months have felt to me like we're approaching a fork in the road when it comes to online speech. Meta's re-admission of Donald Trump onto its platforms and the upcoming Supreme Court cases, both covered in today's edition, suggest we're close to the point where we decide which route to take.
A special thank you to three new EiM members, whose support allows me to produce this each week, as well as a gaggle of free subscribers from the London School of Economics, Brainly, OpenWeb, Crisp, Harvard, ActiveFence and one of my favourite sites, Rest of World. I’m aiming to double the number of EiM subscribers and members this year as part of my efforts to better cover this vitally important and under-covered topic. And I've made it easy for you to help with this January offer.
Finally, if you're affected by the ongoing news of platform layoffs, I hope you're holding up. As I've said before, reach out if you think I can help.
Here’s everything in moderation this week - BW
Policies
New and emerging internet policy and online speech regulation
After 749 days and more column inches than I care to count, Donald Trump will be allowed back onto Facebook and Instagram "in the coming weeks". In a blog post, Nick Clegg, Meta's president of global affairs, justified the decision by saying that "the public should be able to hear from a former President of the United States, and a declared candidate for that office again" (sidenote: since when has anything been normal?).
Meta also announced the introduction of a crisis policy protocol and heightened penalties to deter repeat offences as "guardrails" against this happening again. However, the Oversight Board, which made many of these recommendations in its 2021 response to the Trump ban, called for Meta to make additional details of its assessment public.
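For the curious, here's roughly what an escalating "guardrails" system might look like in code. To be clear, this is a toy Python sketch: the tier names, durations and data structures below are my own assumptions for illustration, not anything taken from Meta's actual protocol.

```python
from dataclasses import dataclass

# Hypothetical penalty ladder: each repeat offence escalates to the
# next tier. Tiers and durations are illustrative assumptions only.
PENALTY_LADDER = [
    {"action": "remove_content", "restriction_days": 0},
    {"action": "limit_reach", "restriction_days": 30},
    {"action": "suspend_posting", "restriction_days": 90},
    {"action": "suspend_account", "restriction_days": 730},
]

@dataclass
class Account:
    handle: str
    strikes: int = 0

def apply_guardrail(account: Account) -> dict:
    """Record a violation and return the penalty for the current tier."""
    tier = min(account.strikes, len(PENALTY_LADDER) - 1)
    account.strikes += 1
    return PENALTY_LADDER[tier]

# Repeat offences walk up the ladder rather than resetting:
acct = Account("reinstated_public_figure")
for _ in range(5):
    print(apply_guardrail(acct))
```

The point of a ladder like this, and presumably of Meta's real one, is that penalties compound: a repeat offender can't simply absorb the same slap on the wrist indefinitely.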
As I said in last week's newsletter (EiM #188), this has been on the cards for a while, perhaps even as far back as October if reports are to be believed. But concerns remain about the decision, as demonstrated by the coverage from the last 48 hours:
- Amnesty International called for Trump to be "held to the same standards as everyone else" and implored Meta to "commit sufficient resources to ensure effective and impartial moderation in line with international human rights standards".
- Kairos Action, a racial justice organisation, told The Guardian that Trump had been given back his "megaphone to spread misinformation about the integrity of our elections, incite violence and stoke the flames of white supremacy".
- In a separate analysis, The Guardian's Dan Milmo wrote that, if Trump brings his Truth Social posting pattern to Facebook, "he will immediately hit the 'guardrails' that Clegg outlined".
Amidst all the noise and chaos, remember this from Fight for the Future's Evan Greer:
"Discussions about online content moderation and what policies are needed to ensure human rights, free expression and safety are some of the most important and consequential societal debates in human history."
Also in policy-related news: the upcoming Supreme Court cases relating to Section 230 of the Communications Decency Act 1996 have variously been described as "cases that could break the internet" and a fight between free speech reformers and purists. And we're seeing an influx of commentary as a result:
- A Q&A with Stanford professor Daphne Keller explains why "it's very hard to predict the nature of the change, or what anybody should do in anticipation of it".
- Casey Newton of Platformer wants the case decided before June, when the justices' term finishes, unless it is about to "become illegal to remove pro-Nazi content from Facebook and Twitter [in which case] maybe waiting until June 2024 wouldn’t be so bad."
- Meanwhile, the Supreme Court made the curious decision to ask the US Solicitor General to weigh in on the Texas and Florida appeals, which is a ploy designed to buy time, according to Mike Masnick at Techdirt.
In last week’s newsletter (EiM #188), an error meant I wrote that the Oversight Board decision related to ‘female’ nudity. That’s wrong on a number of levels and I’ve clarified accordingly. Thanks to Jenni for flagging - BW
Products
Features, functionality and technology shaping online speech
Livestreaming is “a very seductive feature” for users, but platforms continue to struggle to moderate live video content; that's the gist of this Financial Times long read published this week. It notes the "flood of technology start-ups entering the moderation space" and speaks to Hive (EiM #181 and others), Yubo (#149) and ActiveFence (#187 and others) as well as folks from GCHQ and the US Department of Homeland Security to get a fuller picture of the problem.
Despite being around forever, livestreaming continues to pose platforms a major headache. As recently as October last year, TikTok added new safety features to its livestreams (EiM #178), including raising the minimum age for going live from 16 to 18.
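To make the scale problem concrete: a live moderation pipeline can't run a heavyweight model over every frame of every concurrent stream, so in practice systems sample. Here's a minimal, purely illustrative Python sketch; `risk_score` is a hypothetical stand-in for whatever model a vendor like Hive or ActiveFence might run, not their actual API.

```python
import random

def risk_score(frame: bytes) -> float:
    """Hypothetical stand-in for a visual-moderation model
    (0 = benign, 1 = high risk). A real vendor call goes here."""
    return random.random()

def moderate_stream(frames, sample_every=30, threshold=0.95):
    """Score one frame in every `sample_every`, escalating high-risk hits.

    This is the trade-off the FT piece circles: sample sparsely and
    harmful moments slip through; sample densely and cost and latency
    balloon with every concurrent stream.
    """
    for i, frame in enumerate(frames):
        if i % sample_every != 0:
            continue
        if risk_score(frame) >= threshold:
            yield i  # frame index to route to a human reviewer

# A fake 10-second stream at 30fps: 300 empty frames.
flagged = list(moderate_stream([b""] * 300))
print(f"{len(flagged)} frames escalated for review")
```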
Platforms
Social networks and the application of content guidelines
Digital rights groups have called on Meta to "put things right and ensure better labour conditions for African moderators in the region" following last week's news that its main moderation partner there was pulling out (EiM #188). Access Now, Foxglove and Amnesty International all insisted that the platform must "adequately cover local languages and dialects, and also be more transparent about their algorithms which are promoting harmful content". I expect we'll be waiting a while.
Two stories now which speak to the degradation of Twitter's enforcement of its own moderation policies since Elon Musk took over:
- Physical attacks in the US can be linked to spikes in Twitter hate speech, according to research coming out later this month and reported by The Washington Post. The Network Contagion Research Institute found that anti-LGBTQ incidents rose in tandem with higher usage of the term ‘groomer’ on Twitter (there's a toy sketch of this kind of correlation check after this list).
- A lawsuit has been filed in Germany arguing that the company is failing to enforce its own rules against antisemitic content, including holocaust denial. Campaign group HateAid and the European Union of Jewish Students (EUJS) argue that "antisemitism is becoming a normality in our society."
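For a sense of what "found a correlation" means in practice, here's a toy version of the kind of time-series check a researcher might run. The weekly counts below are invented for illustration; the NCRI's actual data and methodology will differ.

```python
import numpy as np

# Invented weekly counts, purely for illustration.
groomer_mentions = np.array([120, 150, 400, 380, 900, 850, 1200, 1100])
antilgbtq_incidents = np.array([2, 3, 5, 6, 9, 8, 12, 11])

# Pearson correlation coefficient between the two series.
r = np.corrcoef(groomer_mentions, antilgbtq_incidents)[0, 1]
print(f"Pearson r = {r:.2f}")  # close to 1.0 for these made-up series

# A caveat that applies to the real research too: correlation in
# observational time series doesn't by itself establish causation.
```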
People
Those impacting the future of online safety and moderation
John and Ryan aren't their real names and, frankly, we'll never know who they are. That's because both were recently terminated from their roles as moderators working for Meta after Meta announced that it was severing ties with contractor Accenture.
Speaking to Mountain View Voice under the condition of anonymity, they say between 150 and 175 people were affected by the January 11 decision, which I don't believe has been reported elsewhere. And it's clearly a bittersweet moment for both men: “I do feel that we did a lot of good,” said one, “but also that, at the time, you could see it was almost like a PR move.”
It follows a trend of platforms reviewing their outsourced moderation capacity and, in some cases, streamlining trust and safety teams. In November, Meta announced significant layoffs, and we must presume this is part of that process.
A commenter underneath the piece who appears to have knowledge of the situation said: "Meta is ramping up their offshore contract moderators located in India". Let's see if that's true.
Tweets of note
Handpicked posts that caught my eye this week
- "It’s like Musk is taking all of the content moderation best practice norms the trust and safety community has built up over the past decade and is trying to set them on fire" - Davey Alba, Bloomberg technology reporter, quotes evelyn douek in her latest piece.
- "Just two of the foremost content moderation law experts on the planet, here, talking about how the UK is moving towards "Modi-level censorship" - self-declared policy wonk Heather Burns (who has a very good book out on privacy) with a podcast recommendation.
- "the content moderation was geninuely (sic) valuable" - Twitch streamer Hasan Piker bemoans the downward spiral post-Musk.
Job of the week
Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1600+ EiM subscribers.
Spotify is looking for an Applied Machine Learning Engineer to join its Content Intelligence product area and help drive the direction and development of proactive content moderation at scale.
The role, which is EMEA-based, involves working with ML and engineering teams to build, deploy and maintain production models that help the platform know what content is available via its products.
Applied machine learning experience is a must, and experience implementing production machine learning systems at scale in Java, Scala or Python is very welcome.
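If you're wondering what "proactive content moderation at scale" looks like at its simplest, here's a toy Python/scikit-learn sketch of the triage pattern: score content before it goes live and route risky items to humans. It's illustrative only, with an invented training set; Spotify's actual stack will be far more sophisticated.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = needs review, 0 = fine. Real systems
# learn from large, carefully audited datasets across many languages.
texts = [
    "click here to win free crypto now",
    "buy 10k followers cheap guaranteed",
    "loved this week's episode on jazz history",
    "new playlist of focus music for studying",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(texts, labels)

def triage(item: str, threshold: float = 0.5) -> str:
    """Proactively score content on upload; escalate risky items."""
    risk = model.predict_proba([item])[0, 1]
    return "human_review" if risk >= threshold else "publish"

print(triage("win free crypto, click now"))  # likely "human_review"
print(triage("a calm piano playlist"))       # likely "publish"
```

The "proactive" part is the key design choice: rather than waiting for user reports, content is scored on the way in, and the threshold becomes a dial between reviewer workload and how much slips through.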