
📌 The big 'blacklist' conspiracy

The week in content moderation - edition #73

Firstly, an apology. A bug during the sending of last week's Everything in Moderation on digital juries meant that it was a mess when it hit your inboxes. I’m really sorry about that. If you'd like to, you can still read it here.

New subscribers from Business Insider, journalism.co.uk and others, don't worry — gaffes like this are relatively rare. Hit reply to tell me how you got here and what your interest in moderation entails.

This week, hackers gave us a fresh glimpse into how Twitter manages conversations on its platform. As ever, there's a lot to be desired.

Stay safe and thanks for reading — BW


⛔️ The shadowban brigade are back

Twitter's massive hack this week confirmed the existence of something that US conservatives have long wailed about: a ‘blacklist’.

Screengrabs shared on Twitter (and later removed) show the tool that hackers used to gain access to dozens of verified accounts — including Apple and Barack Obama — by 'socially engineering’ a member of Twitter staff. The tool shows a panel with several accounts featuring tags called ’Trends blacklist’ and ’Search blacklist’. It's what Donald Trump might call a ’shadowban’.

A spokesperson explained that these tags are added to accounts to prevent content that violates Twitter’s rules from surfacing in search or via Trending, and that this has been happening openly since 2018. But it’s noteworthy because none of Twitter’s FAQs or policies refer to ‘blacklisting’.

As Vice News notes:

This opacity creates a situation where a devastating lapse in Twitter's security leads to a leaked image of an internal panel that contains the word "Blacklist," which Twitter doesn't use when talking about moderation, and which sounds more sinister than it is.

Back in July 2019, I wrote about quarantining/bozoing — a concept adjacent to ‘blacklisting' — when Twitter announced that tweets from government officials which violated its rules would be hidden behind an opt-in message.

The questions raised by academics and researchers then still apply now. They include:

  • What do users do to become blacklisted? And what impact does it have? (In Twitter's case, it prevents them from showing up in Search/Trending but this will differ on other platforms)
  • How quickly does it take effect? Is it instant or does it take minutes/days?
  • Who adds the tags to the account page — is it automated, rule-based or done manually?
  • Is the decision taken by one person or does it involve a group? Who are these people?
  • What does adding blacklist tags actually involve? How much scope for error is there in those steps?
  • What process is in place to check whether people who have been blacklisted should remain so? What criteria are those decisions based on?
  • What authorisation is needed to remove the tags?

Back in 2018, Twitter tried to distance itself from shadow banning when some accounts did not appear in its search. It surely knows that, when the story becomes about its own lack of transparency, there really is only one way to change that.

If it decides to sit on its hands, the episode has nonetheless strengthened the argument of advocates of greater openness about how dominant digital platforms control content, which I’ve covered here in recent weeks (EiM #71, EiM #72).

Twitter already had a reason to clarify the language it uses and to get rid of 'blacklist'. What has happened this week will only add to those calls.

⚫️ What's it like posting into the void?

It’s easy to think that it’s only furious Republicans who get blacklisted. It’s not.

After the introduction of SESTA-FOSTA — two bills passed in America in 2018 to make internet platforms liable for ‘knowingly facilitating sex trafficking' — thousands of sex workers got similar treatment.

I heard about this through Hacking Hustling, a collective of sex workers, technologists and researchers that was formed following the introduction of the two acts. It found that the removal of Backpage — a classified ads site that became known for prostitution — affected sex workers’ health, finances and safety.

This week, it shared a preview of some of the research it has been doing on the topic of shadowbanning.

Timely stuff in light of Twitter’s difficult week. I look forward to reading the full research.

🐘 Not forgetting...

Madness in Malaysia, where a highly-respected independent news site — Malaysiakini — has been charged over five reader comments that were critical of the judiciary’s decision after a COVID-19 lockdown. Malaysian law does not require news organisations to moderate comments but the attorney general still judged them to be a threat to ‘public confidence in the judiciary’.

Contempt case over reader comments tests Malaysia’s press freedom - ICIJ

A Malaysian court has reserved its judgment in a contempt case against Malaysiakini and its top editor that has alarmed press freedom advocates

If you want to dig deeper into platform regulation, this is a useful review from Jurist of the landmark US court cases that have set the precedent for speech and could inform policy in the future.

No matter the decisions that Twitter, Facebook, YouTube, and other social media giants make, critics will inevitably continue to find faults in Internet speech policy.

I’ve praised Pinterest’s moderation efforts in the past (EiM #33) but it seems all is not well. OneZero found white supremacist content and sexualised images of girls on the platform.

Pinterest Hosts Sexual Images of Young Girls, Anti-Vax Memes Despite Moderation Promises | OneZero

The moderation oversight is a result of hiding rather than deleting illicit content.

Spandi Singh from New America’s Open Technology Institute reflects on Facebook’s recent civil rights audit and cautions against simply telling dominant digital platforms to spend more on moderation (IMHO it would be a good start).

What would it take to moderate a platform as big as Facebook? - Marketplace

Here’s something I didn’t know: The US Army has a Twitch account for recruitment purposes. Last week it banned people for asking questions about US war crimes, which may be a violation of the First Amendment.

U.S. Army Esports Team May Have Violated the First Amendment on Twitch

Two civil rights lawyers say that the U.S. Army may have violated the constitution when it banned Twitch viewers for asking questions about American war crimes.


Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.