
Justifying Russian state takedowns, better suspension emails and moderation at Davos

The week in content moderation - edition #161
Ruth Porat, Alphabet chief financial officer, spoke about online regulation on CNBC during Davos 2022 (screengrab)

Hello and welcome to Everything in Moderation, your weekly review of content moderation and online safety news. It's written by me, Ben Whitelaw.

Welcome to new faces from The Signals Network, Folkeskolen, Airbnb, the University of Chicago, the Ada Lovelace Institute and elsewhere. I've been writing EiM since 2018 and it's amazing to me that this group of smart, interesting people continues to grow. If you enjoy or find EiM useful, don't forget to forward it to a good friend or kind colleague.

I've had less time to catch up on what's happening this week than I'd like as I'm away celebrating my mum's 60th birthday (🎉). But there's still plenty to get through, including a new Viewpoints Q&A with journalist Billy Perrigo. Thanks, as ever, for reading — BW


📜 Policies - emerging speech regulation and legislation

The Online Safety Bill may not be operational until 2024, according to one expert, because "so much of the bill does depend on secondary legislation and codes of practice". Speaking at a recent panel discussion, Carnegie UK Trust associate Maeve Walsh explained that a 2022 or 2023 implementation was unlikely because the bill still needs to go through readings in both the House of Commons and the House of Lords first.

And that's presuming all goes smoothly, which it didn't this week. Fresh parliamentary committee evidence sessions started on Tuesday but were heavily criticised for including zero human rights experts among the 35 invited organisations. So stark was the omission that the UK parliament's own human rights committee wrote to secretary of state Nadine Dorries to voice its concerns. It hardly bodes well.

This week, a US federal appeals court ruled it unconstitutional for the government of Florida to stop social media platforms from banning politicians. The law's "onerous disclosure provisions", it said, were substantially likely to violate the First Amendment, leaving platforms free to decide what content is accessible on their services. The decision was applauded by the Computer and Communications Industry Association (CCIA) and NetChoice, which recently lost their appeal against the controversial Texas HB 20 bill (EiM #159). With federal courts now split, the question looks increasingly likely to end up in front of the Supreme Court.

💡 Products - the features and functionality shaping speech

Suspension emails that include clips of policy violations are coming soon to Twitch, according to The Washington Post. VP of trust and safety Angela Hession explained that better notifications are "a number-one ask from our community". The announcement comes shortly after the video streaming site unveiled a new appeals hub that lets users challenge moderation rulings and track their complaints (EiM #153).

Barriers to entry are one of the reasons why Metafilter — the online forum founded back in 1999 — remains such a pleasant place to hang out online. That's the verdict of Josh Kramer at New Public, who spent some time among its aged threads and spoke to its current owner, Josh Millard, for a recent newsletter (I'd recommend subscribing). Another thing to note: new Metafilter users must pay $5 and wait a week before they are able to post, which helps to reduce spam and bots. Sounds very pleasant indeed.

New Viewpoint Q&A: The story that has piqued my interest the most over the last few months is the legal case being brought against Facebook/Meta in Kenya. Why? Because it speaks to the macro issues — do-as-they-wish tech companies, outsourced labour, workers' rights — that make content moderation so interesting and also so hard to solve.

I asked the journalist behind the investigation, TIME's Billy Perrigo, to explain what the case means for Facebook/Meta and for content moderators worldwide, as well as where he sees the story going.

Viewpoints will always remain free to read thanks to the support of EiM members. If you're interested in becoming a founding member, join today and receive a 10% lifetime discount.

💬 Platforms - efforts to enforce company guidelines

YouTube has sought to justify its approach to Russian state content by revealing that it has removed 70,000 videos and 9,000 channels relating to the invasion of Ukraine. Neal Mohan, its chief product officer and head of safety (EiM #149), told The Guardian that "a lot" of those videos were "coming from Russian government, or Russian actors on behalf of the Russian government".

Unlike Facebook and Instagram, YouTube is yet to be banned in Russia by Vladimir Putin, most likely because Russian state-affiliated channels are still garnering millions of views. That uneasy tension could crack at any time, though: Russia's Foreign Ministry yesterday warned it will expel foreign journalists if YouTube blocks any more of its press briefings.

Elsewhere, the great and the good of Silicon Valley's largest tech companies have been at Davos this week for the World Economic Forum's annual meeting, and there have been one or two mentions of content moderation:

  • Ruth Porat, Alphabet CFO, said it was crucial to "constructively engage with regulators" in the fight against online harms. She also said that content moderation was a "core part of what people expect of us".
  • YouTube CEO Susan Wojcicki said the company had been thinking about how it would amend content policies related to abortion following the leak of the Supreme Court draft decision to overturn Roe v Wade. She also characterised the video platform as one that "focuses on free speech", which I'm not sure many others would.
  • The WEF itself published a piece on digital safety in the context of the Russia-Ukraine war, asking important questions about wartime moderation protocols and metrics during crisis periods.

Substack's policy of limited moderation is "a key means to attract highly influential writers" and has subsequently seen it become "a platform for the de-platformed", according to new research from Storyful published via the Global Network on Extremism and Technology. Intelligence lead Nathan Doctor analysed over 9,000 Substack posts scraped from 100 unique accounts, as well as mentions of Substack on alt-tech platforms (Rumble, Telegram, Gettr), and found that Substack had an outsized influence among right-wing figures. No wonder its three founders (EiM #145) are so reluctant to apply any rules.

👥 People - folks changing the future of moderation

This newsletter is named after (part of) one of Oscar Wilde's most famous quotes. But another — "life imitates art far more than art imitates life" — could easily describe the new novel by Dutch author Hanna Bervoets.

In "We Had To Remove This Post', Bervoets β€” who has published eight other books β€” writes about "quality assurance worker" Kayleigh, who is giving evidence against her employer on behalf of former colleagues forced to view "gruesome and unrelenting" streams of content.

The New York Times calls it "surprising and enigmatic" but, to me, it sounds all too similar to the stories of Daniel Motaung (EiM #153), Selena Scola, Isabella Plunkett, Gadear Ayed, Shawn Speagle and the dozens of other moderators who have come forward to make the nature of this work known.

Last year, I wrote about my excitement for the upcoming New York play, The Moderate (EiM #111), and I feel the same way about Bervoets' book. Seeing these stories play out in art will deepen the public's understanding of what it is to be one of the internet's essential workers. And that's no bad thing.

🐦 Tweets of note

  • "I've managed to read the draft #DSA - all 600 & something pages. Some initial reflections." - The New York Times' Konstantinos Komaitis does the hard work so you don't have to.
  • "Let's Face It: Content Moderation Policies Are Discriminatory" - Arslan Arsu Arsi writes for Digital Rights Monitor on how platform policies are not geared toward users in the southern countries.
  • "We need a more sustainable way to do content moderation than to have excellent reporters do the hard work, then only have the platforms react when they’re called for comment." - Sleeping Giants, the platform accountability campaign, reminds us of the power of thoughtful content moderation coverage.

🦺 Job of the week

This section of the newsletter is designed to help EiM subscribers find impactful and fulfilling jobs making the internet a safer, better place. If you have a role you want shared with EiM subscribers, get in touch.

Tech Against Terrorism is hiring a Junior Policy Analyst for its Policy Response and Advisory Team to work directly with platforms with the aim of strengthening online counterterrorism approaches across the tech ecosystem.

The role is suited to a recent graduate in technology policy, cybersecurity or international law, and the salary starts at £25,000. Other draws include working with TAT's talented team and the chance to work abroad for a month a year.