A functional framework for online speech, Meta told to free the nipple and Biden's rallying call

The week in content moderation - edition #188

Hello and welcome to Everything in Moderation, your global content moderation and online speech week-in-review. It's written by me, Ben Whitelaw.

This week's edition has the distinct whiff of change about it: slow-moving legislative processes have suddenly become unblocked, long-in-the-tooth platform policies look set to be torn up, and new ways of thinking about speech are being sketched and shared. Whatever happens next, you can't deny things are moving quickly in the early weeks of 2023.

New subscribers from Nextdoor, Memetica, Luminate Group, Internet Lab Brazil and elsewhere, you've joined at just the right moment. If you feel inclined, there are 186 editions of EiM going back to 2018 to delve into.

Dozens of you tried to access the January membership offer in last week's newsletter, only to find a broken link. Here it is again. Act fast: you've got less than two weeks to make use of it.

Enough preamble; this is everything in moderation this week - BW


Policies

New and emerging internet policy and online speech regulation

Lots of movement this week with the UK's Online Safety Bill, after MPs forced the government to include personal criminal liability for platform executives. A sentence of up to two years is designed to dissuade bosses from ignoring notices from the regulator Ofcom but reportedly won't target those who “acted in good faith to comply in a proportionate way”. Let's see about that.

The Guardian has a quick and dirty rundown of what the bill now includes but, if you're looking for background, read Andy Burrows' piece on why senior manager liability is a "pre-requisite" of effective compliance. Andy most recently headed up child safety online policy at the NSPCC and talks about his initial reluctance before being brought around to the idea.

The reaction elsewhere is far from rosy though: Helen Thomas at the Financial Times writes that the "bill has drifted from its original intent, in a way that is probably unhelpful for everyone."

In broader platform policy news, the independent-but-Facebook-funded Oversight Board has called for Meta to change its policy to allow nudity in order to "respect international human rights standards." It came as part of a decision on the wrongful removal of two photos showing a couple, who are transgender and non-binary, posing topless but with their nipples covered. The post contained a fundraising link and was taken down under the Sexual Solicitation Community Standard. The Board disagreed with that decision and also called the existing breastfeeding-only nipple policy "convoluted and poorly defined."

As Paper Mag points out, this is a policy discussion ten years in the making and I'd be surprised if Meta pushed back against this one. It now has 60 days to respond to the recommendation.

Products

Features, functionality and technology shaping online speech

If you haven't already read Tracy Chou's functional framework for protecting and empowering users, do. The Block Party app founder (EiM #76 and others) helpfully breaks down the "speech spectrum" into a few segments to explain how governments, companies, and individuals might tackle the question of online safety.

I particularly like the focus on what Chou calls "content ranking guidance" which, much like Google does for search, could "give clarity and transparency around not only policy but also mechanisms of enforcement for the legal and TOS lines on the platform." My read of the week.

Platforms

Social networks and the application of content guidelines  

The big platform story this week is not about Twitter or Facebook but ChatGPT, which a TIME investigation has found was trained to be less toxic by outsourced Kenyan workers earning less than $2 an hour.

OpenAI's $200,000 contract with Sama, the self-declared "ethical AI" company that last week halted its moderation work (EiM #187), involved over 30 workers labelling between 150 and 250 passages of text containing sexual abuse, hate speech and violence in nine-hour shifts. Anonymous employee accounts say Sama bosses denied them one-on-one well-being sessions. OpenAI, let's not forget, was recently valued at $29bn.

We've been here before, of course: Meta used Sama's services until an investigation, also by TIME's Billy Perrigo (exclusive Q&A), found evidence of poor working conditions and union-busting that led to the ending of the relationship (EiM #148).

Back in 2019, I wrote that "everyone loses when content is outsourced" (EiM #14) and I'm more convinced of that than ever. And, as Paul M. Barrett wrote this week for Tech Policy Press, "it is past time to stop farming out content moderation".

Become an EiM founding member
Everything in Moderation is your guide to understanding how content moderation is changing the world.

Between the weekly digest (📌), regular perspectives (🪟) and occasional explorations (🔭), I try to help people like you working in online safety and content moderation stay ahead of threats and risks by keeping you up to date about what is happening in the space.

Becoming a member now helps me connect you to the ideas and people you need in your work making the web a safer, better place for everyone.

Hit reply if you have any questions. And thanks for your support — BW

Elsewhere, TikTok is expanding its labels for state-affiliated media to 40+ mostly European countries, it was announced this week. Justin Ehrlich, Global Head of Issue Policy & Partnerships, wrote that the policy was designed to "ensure people have accurate, transparent, and actionable context when they engage with content from media accounts that may present the viewpoint of a government". TechCrunch has the full list of countries.

It comes almost a year after labels were announced by TikTok as part of an effort to control Russian state information (EiM #151), although the implementation was criticised for being easy to miss. We'll see if lessons have been learnt.

In perhaps the oddest story of the week, GoFundMe has confirmed that it removed a fundraiser set up by Republican George Santos for a dying dog after he failed to prove that the $3,000 made it to the pet's owner. I won't say more than this: the stuff that trust and safety folks have to deal with never ceases to amaze me.

People

Those impacting the future of online safety and moderation

When the leader of the free world talks about tech regulation, it's wise to listen. Which is why Joe Biden's WSJ op-ed from last week is a must-read.

In it, he calls out platforms for allowing "abusive and even criminal conduct" and notes how "tragic violence has been linked to toxic online echo chambers". He goes on to call for bipartisan support to hold so-called Big Tech accountable to protect people's privacy, competition and digital rights, especially those of children.

It might not have gone down well in all quarters but it's a rallying cry that builds on his predecessor's scepticism of the industry, albeit for more sensible reasons. The question is: can he and his elected colleagues "show the nation we can work together to get the job done"?

Tweets of note

Handpicked posts that caught my eye this week

  • “I think it was because of our discussion over how to moderate the Jwin Jowers and Never Forgetti” - visiting professor Ángel Díaz shares his tip for how to make your students like you.
  • “It has now been one week since the attack on the Brazilian government and pro-Bolsonaro/far-right groups are still able to use social media to spread mis- and disinformation” - Bloomberg’s Daniel Zuidijk and his colleague Madis Kabash with a good explainer on the attempted coup in Brazil.
  • “Especially if what you are writing is a Supreme Court brief” - some recommended reading from Daphne Keller, director of Stanford Cyber Policy Center.

Job of the week

Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1600+ EiM subscribers.

Amnesty Tech is looking for a research consultant to investigate the algorithmic harms of social media.

The short-term role will focus on analysing platform content relating to self-harm, depression and suicide in Kenya and the Philippines, as well as auditing the professional help that users are directed towards.

The successful candidate will be responsible for designing replicable research methodologies and data collection frameworks, and for launching a pilot project. You need to be available to start ASAP - the project starts in February and delivers in April. Good luck!

Update: an error meant I initially wrote that the Oversight Board decision related to ‘female’ nudity. That’s wrong on a number of levels and I’ve clarified accordingly. Thanks to Jenni for flagging.