
📌 Designing inclusive moderation, reaction to Facebook Papers and an integrity oath

The week in content moderation - edition #134

Welcome to Everything in Moderation, your one-stop shop for content moderation news from the last seven days. It's curated and written by me, Ben Whitelaw.

A special hello to new subscribers from Twitter, Media Lab Bayern, Cornell University and Facebook. For all recipients, I hope today's newsletter helps make sense of this week's avalanche of online speech news.

I'm also sharing my second 🔭 Exploration article, in which I ask whether media outlets cover content moderation in a way that brings about understanding and change. I'd be interested in your thoughts - just hit reply.

If you enjoy the newsletter, forward this to a friend and let them know that they can subscribe here. Here's this week's rundown - BW


📜 Policies - emerging speech regulation and legislation

Lawmakers are "not thinking big enough" about how to legislate against the tide of harmful content and are failing to consider "other digital priorities like privacy and competition", according to Politico's chief technology correspondent. Mark Scott goes on to warn that specific bills put forward in the US to monitor algorithms (such as the Algorithmic Justice and Online Platform Transparency Act) are unlikely to ever become law, leaving us, well, precisely where we are now.

Nadine Dorries, the UK's new-in-post culture secretary, has boldly (and falsely) claimed in a newspaper column that the Online Safety Bill will "end abuse, full stop". Writing in The Daily Mail, she said that "enough is enough", although what that actually means in practice is not clear. Twitter's head of policy in the UK, meanwhile, has argued that the culture secretary has too much power in the bill, echoing the Carnegie Trust's reservations back in September (EiM #118). Hard to deny based on recent showings.

A related tidbit: the UK Parliament's Draft Online Safety Bill committee took evidence this week from Facebook's head of safety Antigone Davis (featured in EiM #130). Remarkably, Davis admitted she hadn't got round to reading the bill in question. I know we're all overwhelmed but really?

💡 Products - the features and functionality shaping speech

Algorithmic amplification, a key factor in the battle against online abuse, has been back in the spotlight following new research from Twitter showing that tweets from right-wing politicians were amplified more than those from their left-wing counterparts. The report, which covered seven countries including Japan, Turkey and Germany, could not explain why; the researchers said more work was needed to determine the cause.

While we're on the subject of engineers making arbitrary decisions that inform whole systems, I recommend this blog post by Twitter's Colin Fraser. Fraser, a data science manager working on misinformation and election safety, writes interestingly about the trade-offs between precision and accuracy in predictive machine learning models. It gets heavy quickly but give it a go.
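To make that trade-off concrete, here's a minimal sketch using made-up scores and labels (nothing below comes from Fraser's post or Twitter's actual systems). It shows how raising a hypothetical misinformation classifier's decision threshold tends to make its flags more reliable while letting more violating posts slip through.

```python
# Hypothetical classifier outputs: (model_score, post_is_actually_misinfo).
# The numbers are invented purely to illustrate the threshold trade-off.
posts = [
    (0.95, True), (0.91, True), (0.88, False), (0.80, True),
    (0.72, False), (0.65, True), (0.55, False), (0.40, True),
    (0.30, False), (0.10, False),
]

def evaluate(threshold):
    """Flag every post scoring at or above `threshold`; report precision and recall."""
    flagged = [(score, label) for score, label in posts if score >= threshold]
    true_positives = sum(label for _, label in flagged)
    actual_positives = sum(label for _, label in posts)
    precision = true_positives / len(flagged) if flagged else 0.0
    recall = true_positives / actual_positives
    return precision, recall

for threshold in (0.5, 0.7, 0.9):
    precision, recall = evaluate(threshold)
    print(f"threshold={threshold:.1f}  precision={precision:.2f}  recall={recall:.2f}")
```

Run it and the pattern is clear: the strictest threshold flags almost nothing incorrectly but misses most of the bad posts, which is the kind of dilemma moderation teams have to weigh at scale.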

Hiding unwanted tweets and muting trolls just got a bit easier: Block Party is now out of open beta. Created by software engineer and diversity advocate Tracy Chou, the tool now lets users block accounts en masse via Block Lists, part of a new Premium tier.

💬 Platforms - efforts to enforce company guidelines

Many of the stories published under the umbrella of The Facebook Papers, as the late-night US TV hosts noted, do not say anything particularly new or surprising. However, they do give increasing weight to the idea that the company "regularly places political considerations at the center of its decision making" when it comes to content.

There are a host of round-ups (Tech Policy Press and The Verge are my picks) but I have been most interested in how the story has been covered around the world. Here are a few examples:

  • Kyiv Post covered the low pay of Ukrainian content moderators compared to other content workers based in Spain and the US.
  • Indian Express carries a story about the lack of local moderators in Hindi and Bengali, a line that tech policy site Rest of World looked at more closely too.
  • The Straits Times (Singapore) notes how Zuckerberg bowed to demands from Vietnam's ruling party to pull down "anti-state" content before January's party congress.
  • The Nation (Kenya) says that Facebook led some of the country's 7 million users "to make dangerous decisions on the coronavirus".

Back in the US, YouTube, TikTok and Snapchat testified before a Senate subcommittee on the promotion of eating disorders via their platforms. And while all three take a hard line on content that promotes extreme weight loss, their ad policies are much looser.

Just a few weeks ago, we had a platform suing a user for failing to adhere to its community guidelines (EiM #128); now we have a user (or rather a group) suing a platform for the same thing. Lady Freethinker, an animal rights nonprofit, brought a case against YouTube last week, accusing the platform of failing to take action against animal abuse videos.

Finally in this section, Grindr has published a whitepaper about creating thoughtful, equitable and inclusive moderation practices. It covers safety design (beware open text fields), flagging and reporting (separate moderation queues for trans and nonbinary users) and resources for moderation teams. My read of the week.

👥 People - folks changing the future of moderation

I've long believed that people working in online safety (see also: integrity, trust and safety, or content moderation) know what it takes to course-correct the speech problems we see on digital platforms. They are not bad people, far from it, but they often work in organisations that don't respect their work or that are incentivised to make different decisions. That's why Sahar Massachi and Jeff Allen's new organisation is worth taking note of.

The Integrity Institute, announced on Tuesday alongside a piece in Protocol, is a new nonprofit that will bring together current integrity workers and former employees to build consensus and understanding on issues such as transparency and ranking design. It will produce resources and research and advise policymakers and the media, something that's much needed.

Allen, Massachi and the Institute's seven fellows have done a lot of work and thinking already, including creating a Hippocratic oath for integrity workers. But the project has not been without flashes of criticism, which it will have to take on board as its community of practitioners grows.

🐦 Tweets of note

  • "significant work around online violence against vulnerable communities on platforms was started by global south digital rights activists a decade ago & that too women of color" β€” Lawyer and Oversight Board member Nighat Dad reminds us, at moments of great change, the work of some groups can easily be erased.
  • "The release of these papers has been, ironically, optimized for engagement over understanding" - Alex Stamos, formerly of the Big Blue, has a point about the media coverage of the Facebook Papers.
  • "How will all of the current challenges of social media moderation (bias, PTSD for human moderators, etc.) translate into the metaverse? Because they definitely will." - Casey Fiesler lists a whole series of Meta-questions that need answering in this mega-thread.