Moderating misinfo in a crisis, Brazilian bill blocked and 'designing for the margins'

The week in content moderation - edition #160
A Roma family from Ukraine explains what led them to flee their home for Moldova (UN Women/Maxime Fossat, coloured)

Hello and welcome to Everything in Moderation, your guide to understanding how content moderation is changing the world. It's written by me, Ben Whitelaw.

I want to thank new subscribers from Depop, Cyacomb and Spectrum Labs and half a dozen others for carving out a space in their inboxes for EiM. If you received this from a colleague or friend, you can sign up here (or join as a supporting member). This week's newsletter is in your inboxes later than usual because I was in charge of some very important cargo.

It'll be back at its normal time next week with an exciting interview and some fresh analysis. For now, here's everything you need to know from the last seven days — BW

📜 Policies - emerging speech regulation and legislation

A lot can happen in a week, especially as far as Texas' HB 20 law is concerned.

First, technology companies, represented by NetChoice and the Computer & Communications Industry Association (CCIA), tried to block the law in the Supreme Court, citing the risk of causing "irreparable harm" to the web. Then, the state hit back with its own petition, urging the Supreme Court to reject the emergency application. And in the last 24 hours, we've seen Republicans "scrambling" to defend the law in the face of a flood of pro-platform legal briefs.

We'll no doubt be somewhere completely different next week but in the meantime, I recommend this Lawfare podcast with Alex Abdo and Scott Wilkens from the Knight First Amendment Institute and this thread from Corbin K. Barthold, internet policy counsel at think tank Tech Freedom, who explains why the whole thing is "bonkers".

A Brazilian bill that would force platforms to disclose details about their moderation teams and place design restrictions on messaging apps to reduce the spread of misinformation is now unlikely to be voted on until after the presidential election in October. According to a report, legislators hoped to vote on PL 2630 — also known as the Fake News Bill — before the election, but "extremely dishonest" lobbying by current president Jair Bolsonaro, Google and Facebook over the news element of the legislation means that it has been pushed back.

💡 Products - the features and functionality shaping speech

Apps like Telegram should adopt patterns of "design based on the lived realities of those who are the most marginalised", according to Article 19's Afsaneh Rigot in a Wired op-ed. Rigot's research has focused on the impact of technology on LGBTQ+ rights in the Middle East and on "designing for the margins". It was also her work that led to some of Grindr's excellent safety improvements over the last few years (blog post and video discussion). My read of the week.

💬 Platforms - efforts to enforce company guidelines

Last week's racist shooting in Buffalo, in which ten people were murdered in a New York supermarket, was another reminder of how far platforms have to go when it comes to moderating terrorist content. Twitch took most of the heat for allowing the suspected 18-year-old killer to stream the attack without an account, even though it was live for only two minutes and watched by just 22 people before being pulled down.

Those efforts were not enough to stop multiple copies from appearing on Twitter and Facebook, nor did the video's addition to the Terrorist Content Analytics Platform prevent its spread. Some copies racked up over 250,000 views and, even when reported by journalists, took up to three hours to be removed. Streamable, the video hosting platform bought by Hopin in 2021, also hosted a version that was accessible for over nine hours and drew over 3 million views, despite the site's terms of service prohibiting "content that promotes terrorism or acts of violence". Expect the fallout to continue long into next week.

Twitter yesterday launched its crisis misinformation policy to "slow the spread...of the most visible, misleading content, particularly that could lead to severe harms". In practice, it means users will see a warning notice before viewing an offending tweet, and the post will not be amplified in Search or via Twitter's recommendation systems. Head of Site Integrity Yoel Roth tweeted that it would begin with the Russian invasion of Ukraine, although it predictably inspired a host of 'Ministry of Truth' and Elon gags.

Facebook saw huge increases in spam and in content relating to violence and incitement in the first quarter of 2022, according to its newly published Community Standards Enforcement Report. Spam content doubled compared to the end of 2021, ending a downward trend that began in September 2020, when there were 1.9bn instances, while violence and incitement content also saw a 75% increase compared to October-December last year.

The cause of the increases, according to VP of Integrity Guy Rosen, was improved "proactive detection technologies", although Casey Newton also noted in Platformer that the same automated systems wrongly took down hundreds of thousands of posts relating to self-harm, terrorism, and violent and graphic content.

Which is it, Guy? A win for Facebook's automated moderation systems or a means of suppressing speech at huge scale? Until the report shares better data, we'll never know.

👥 People - folks changing the future of moderation

When Twitch published a statement this week that "white supremacism, racism, and hatred should have no place anywhere", some users were not happy about the speed of its response. And the first place some of them turned to was CEO Emmett Shear's Twitter account to vent.

Shear keeps a low profile compared to other platform executives and hasn't appeared in EiM since October 2019 (EiM #36), when he addressed confusion about how the streaming site's guidelines were applied. Since then, Twitch has been busy releasing its first transparency report (#102), battling torrents of hate raids (#130) and hiring Angela Hession to lead its Trust and Safety efforts.

Yesterday, Shear appeared on a Harvard Business Review podcast in which he briefly addressed the "horrific hate crime" and committed to "continue to invest heavily in ensuring the safety of everyone on Twitch". It was a soft question and an unsympathetic, templated response.

At least, I guess, Shear said something. Elon Musk, notably, has kept very quiet about the incident.

🐦 Tweets of note

  • "What we have here is a tactical disagreement" - University of Chicago law professor Genevieve Lakier — who has authored a series for Knight Columbia — responds to criticism levelled at her and her co-author.
  • "In a nutshell 🥜 it uses human-and-model-in-the-loop learning 🤖🤝 🙆 to tackle emoji-based hate" - Hannah Rose Kirk, data scientist and researcher at Oxford Internet Institute, has a handy thread on her new paper.
  • "I used to be a bartender and now study community moderation so I have Thoughts" - Cornell Research Manager (and, just as importantly, Reddit mod) Sarah Gilbert has some very valid issues with Nick Clegg's comments on the metaverse.

🦺 Job of the week

This section of the newsletter helps EiM subscribers find impactful and fulfilling jobs making the internet a safer, better place. If you have a role you want shared with EiM subscribers, get in touch.

Lots of interesting roles this week but unfortunately none that display salary ranges. As I've said before, I'll only include roles here that are transparent about pay, like last week's Sales Engineer role with Checkstep (EiM #159).