
📌 How to create trustworthy systems, Bluesky takes flight and US senators talk transparency

The week in content moderation - edition #158

Hello and welcome to Everything in Moderation, your weekly inbox-sized guide to online safety and content moderation. It's written by me, Ben Whitelaw.

A warm welcome to folks from Moonshot, Facebook, Discord, the University of Michigan, Bumble, NewsGuard, Global Counsel, Unitary and elsewhere. Also I want to thank the latest (and 14th) EiM member, and all members whose support helps keep the newsletter sustainable at a time when it's needed most. You can join them for less than $2 a week (I also provide expensable invoice/receipt, if you catch my drift).

What follows is everything I've read this week organised into something resembling a vaguely coherent breakdown. I hope it's useful — BW


📜 Policies - emerging speech regulation and legislation

Canada's government has failed to "live up to the standards that it has set for itself" regarding its plans to regulate the internet and should reconsider its approach, according to a law professor. Michael Geist from the University of Ottawa shared his comments in a CBC segment and added that the process was "embarrassing" and should "serve as a wake-up call". Geist is just one of a number of academics to raise concerns about the plans laid out last year by the Liberal government (EiM #133).

The wider context is that Heritage Minister Pablo Rodriguez is under increasing pressure to sell a trio of digital bills to the Canadian public, at a time when relations between the government and platforms are particularly strained. Last week, a Meta representative claimed the company was not consulted on a draft bill to force tech companies to compensate news outlets for content (the government denied this was true). It's a reminder that moderation is politics and politics is moderation.  

On the topic of transparency, US senators were treated to what Casey Newton called the "unfamiliar spectacle of highly intelligent people talking with nuance about platform regulation" this week as the Subcommittee on Privacy, Technology, and the Law heard from three experts about the need for platform transparency:

  • Former CrowdTangle executive Brandon Silverman talked about the difficulties of working on a transparency project within a big platform (33:45) and the need for legislation to mandate data sharing and transparency (38:28)
  • Professor Nate Persily explained why platforms have lost their right to secrecy (39:50) and the power of transparency to change the behaviour of platforms and the products that they put out into the market (43:11)
  • Professor Daphne Keller spoke about the potential of the Digital Services Act (45:37) and why transparency laws for big companies "are a bad fit for companies that are far smaller in measures of revenue or users or employees" (48:46)

Watch it in full if you have the time.

The head of a new UK body designed to improve cooperation between existing regulators on speech regulation has warned that the Online Safety Bill could stifle startups and impede innovation. Gill Whitehead, in her first interview since taking up her new role at the head of the Digital Regulation Cooperation Forum, warned that complying with the bill "might be prohibitive for smaller firms" and could "slow things down for business". Which is not what the pro-business UK government presumably want to hear.

💡 Products - the features and functionality shaping speech

Bluesky, Twitter's decentralised public conversation project announced back in 2019 (EiM #45), yesterday released its first tranche of code and committed to better content moderation in a timely riposte to the moderation naysayers that have gathered around the platform following its purchase by Elon Musk. Jay Graber, head of the project, announced the release of ADX, the “Authenticated Data Experiment" — a self-described "git, for your social posts" — and ended the blogpost with the rallying cry to "move from platforms to protocols". Mike Masnick at Techdirt, who first coined the idea in 2019, will be pleased.
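If you're wondering what "git, for your social posts" might mean in practice, here's a rough sketch of the underlying idea: posts that are content-addressed and signed, so any server can verify them without trusting the host. To be clear, this isn't ADX's actual code or API — the names are mine, and the HMAC below is a stdlib stand-in for the public-key signatures a real protocol would use.

```python
# Toy illustration of "self-authenticating" posts — the idea behind moving
# moderation and identity from platforms to protocols. Not the ADX codebase.
import hashlib
import hmac
import json


def make_post(author_key: bytes, author: str, text: str) -> dict:
    """Create a post whose ID is derived from its content (like a git hash)
    and whose signature lets any server verify authorship."""
    body = {"author": author, "text": text}
    serialized = json.dumps(body, sort_keys=True).encode()
    return {
        **body,
        # Content-addressed ID: the same bytes always produce the same ID.
        "id": hashlib.sha256(serialized).hexdigest(),
        # Stand-in signature; a real protocol would sign with the author's key pair.
        "sig": hmac.new(author_key, serialized, hashlib.sha256).hexdigest(),
    }


def verify_post(author_key: bytes, post: dict) -> bool:
    """Any node holding the post can re-derive the ID and signature."""
    body = {"author": post["author"], "text": post["text"]}
    serialized = json.dumps(body, sort_keys=True).encode()
    ok_id = post["id"] == hashlib.sha256(serialized).hexdigest()
    ok_sig = hmac.compare_digest(
        post["sig"], hmac.new(author_key, serialized, hashlib.sha256).hexdigest()
    )
    return ok_id and ok_sig


if __name__ == "__main__":
    key = b"alice-secret"
    post = make_post(key, "alice", "protocols > platforms")
    print(verify_post(key, post))  # True
    post["text"] = "tampered"
    print(verify_post(key, post))  # False
```

The point is that anyone holding the post can verify it independently of where it is hosted, which is what lets moderation decisions move from a single platform to the many participants in a protocol.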

Basic moderation tools have been released on Substack, despite the founders famously not agreeing with the concept of moderation. The announcement, made earlier this week, means that readers can report offensive comments to publication admins, who can then approve or remove them. Founders Chris Best, Hamish McKenzie and Jairaj Sethi (EiM #145) have long defended their laissez-faire approach, so this could be seen as a climbdown.
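Substack hasn't said anything about how the feature is built, but the workflow as described — a reader flags a comment, the publication admin approves or removes it — boils down to a very small state machine. A purely hypothetical sketch, with names and states that are mine rather than Substack's:

```python
# Hypothetical report -> review workflow; not Substack's implementation.
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    VISIBLE = "visible"
    REPORTED = "reported"   # flagged by a reader, awaiting admin review
    REMOVED = "removed"


@dataclass
class Comment:
    author: str
    text: str
    status: Status = Status.VISIBLE
    reports: list[str] = field(default_factory=list)

    def report(self, reporter: str, reason: str) -> None:
        """A reader flags the comment; it joins the publication admin's queue."""
        self.reports.append(f"{reporter}: {reason}")
        self.status = Status.REPORTED

    def review(self, approve: bool) -> None:
        """Admin decision: approve restores visibility, otherwise remove."""
        self.status = Status.VISIBLE if approve else Status.REMOVED


comment = Comment("reader42", "an offensive comment")
comment.report("reader7", "harassment")
comment.review(approve=False)
print(comment.status)  # Status.REMOVED
```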

What makes a trustworthy system or piece of software? That's the question that a group of cybersecurity experts have been grappling with for the best part of three years. And now they have an answer. The Lawfare Institute's Trusted Hardware and Software Working Group published its findings this week and, although I won't claim to have read all 52 pages in full, it contains a checklist that people creating moderation systems and software will no doubt find useful.

💬 Platforms - efforts to enforce company guidelines

Spotify this week showed signs of moving on from the PR problems caused by the Joe Rogan Experience last year (EiM #143) by announcing plans to beef up its European policy team. The streaming platform is recruiting two senior policy specialists in either Dublin or remote Europe who will play a "key role in how we define, enforce, and communicate our stance on policy issues that impact users". A tough gig but also a good chance to shape an influential platform's approach to content policy.

You don't often see Google in this section of EiM but news that the search giant blocked more than 3.4 billion ads in 2021 is a reminder of the scale and importance of its safety operations. Around a quarter of those ads were banned for "abusing the ad network", according to Brian Crowley, director of trust and safety, with ads about healthcare and trademarks responsible for around 200 million blocks each.

Elon Musk's "will-he-won't-he-oh-no-might-he" purchase of Twitter has reached the point where British MPs have sent the billionaire a letter asking, wait for it, to talk about "the developments you propose". While they wait by the door for a response, the commentary keeps coming (this week's Tweets of note, below, give a flavour).

Facebook moderators have raised concerns about posts praising Russian atrocities in Bucha, which remain on the platform because of a policy loophole. Anonymous employees told The Guardian that the atrocity has not been classified as an "internally designated" incident, meaning some offending content must be left up. There's one particularly telling quote from a moderator: "They only care if they look good in the US media.”

Related EiM read: Does the media cover online safety in a way that helps platforms improve?  

👥 People - folks changing the future of moderation

I often wish Cindy Cohn were on Twitter so I had direct access to a pipeline of her every thought (if she is and I've simply missed it, please let me know). Every interview I read with the EFF executive director is smart without being overly technical, and future-focused without being bleak. The latest one, for The Verge's podcast, is no different.

It covers everything — "competitive compatibility", Santa Clara Principles and regicide (kind of) — as well as the changing shape of EFF over its 32 years of existence. Props to Verge editor-in-chief Nilay Patel for a great conversation — it's my read of the week.

And on Musk, there is the golden line: "I do not hang out with billionaires, so it’s not like I can meet them in the club and tell them what I think." If only.

Related EiM read: EFF's Jillian C. York on the revised Santa Clara Principles

🐦 Tweets of note

  • "Few users would actually want to use a forum where anyone could actually say anything lawful" - Director of Tech Freedom Berin Szóka steps us through why the First Amendment is the least of Elon Musk's problems.
  • "Right now, it's sh*tposted lofty ideas, ego & emotions." - Former South African MP Phumzile Van Damme doesn't hold back with her thoughts on Musk's plans.
  • "The simple version of why this Elon has chosen this hill is that he hasn't considered these questions in complex ways, and mostly imagines Twitter through his own, extremely unusual lens." - long and thoughtful thread on Musk (what else?) from author and YouTuber Hank Green.

🦺 Job of the week

This section of EiM is designed to help companies find the best trust and safety professionals and enable folks like you to find impactful and fulfilling jobs making the internet a safer, better place. If you have a role you want to be shared with EiM subscribers, get in touch.

Fashion app Depop is on the hunt for a Head of Trust and Safety to own its policies and enforcement work.

The person should be "a strategic and experienced leader who is eager to learn, and is excited to build systems to support millions of people." The salary is apparently £30,000+ according to LinkedIn (which I don't necessarily trust), with the usual benefits.

Although this latest ad was posted just yesterday, I've seen the role out there for a while, which suggests to me that Depop are struggling to find the right candidate. Worth a shot.