5 min read

The Musk moderation moment, India's own council and why we don't trust AI decisions

The week in content moderation - edition #179

Hello and welcome to Everything in Moderation, your globally-minded content moderation and online safety round-up. It's written by me, Ben Whitelaw.

Due to unforeseen personal circumstances, there wasn't a newsletter in your inbox last Friday, which perhaps was just as well since it coincided with that takeover and would have been out of date upon arrival. I've read wall-to-wall Musk this week to bring you the bits that matter most.

The brief hiatus means there are a host of new subscribers to welcome: folks from Grindr, Unitary, UC Berkeley, Discord, the Mozilla Foundation and a bunch of the team at Hinge. Hello to you all, and get in touch to let me know what you think of today's edition (that goes for everyone).

Here's everything in moderation this week - BW


Policies

New and emerging internet policy and online speech regulation

The Indian government has gone ahead with its plans to create a government panel to hear content moderation complaints, an unprecedented move with significant ramifications for users and platforms. The Grievance Appellate Committee — created via an amendment to its controversial IT law (EiM #103) — will hear appeals from Indian citizens unhappy with any platform decision. The panel is not large — just a chairperson and two full-time members, according to Reuters — and has 30 days to respond to each appeal, so it's very hard to imagine how it will deal with the tidal wave of complaints in Twitter's third-largest market. Scroll.in has more about how it will function.

Do the incoming Digital Services Act and US law, including the First Amendment, conflict with each other? And how can those conflicts be minimised so that different versions of speech can co-exist? Those are the questions posed by law professors Catherine Roraback (University of Connecticut) and Laurence R. Helfer (Duke University) in a new paper and accompanying article published this week. They go on to argue that user control is key to helping "ameliorate tensions between national jurisdictions seeking to protect users from harms in the digital sphere". My read of the week.

Products

Features, functionality and technology shaping online speech

A new study has found that social media users are generally more likely to question decisions made by AI moderators when content is not obviously offensive. Researchers at Cornell University used a simulation platform called Truman to test how 400 participants reacted to scrolling through content moderated in different ways (by AI, by other humans, or by an unidentified source). Their paper, published in August but reported by the Cornell Chronicle this week, found that users trusted automated decisions when a post was clearly harassing but otherwise favoured fellow humans. Mods 1, robots 0.

Platforms

Social networks and the application of content guidelines  

A week is a long time in politics and it's an even longer time in the world of Twitter, where Elon Musk has turned everything upside down since his purchase of the company went through last week. There's been a lot written about him and his tweets but here are the main points as I see them, broken down into an all-too-familiar format:

1. Policies

The major news in the days after the world's richest man took the helm was that nothing would change right away, a line that quickly turned into the appointment of a council of experts to do the work for him. Conservative and far-right figures took the opportunity to encourage their followers to misgender trans people and, by Monday, Yoel Roth, head of safety and integrity, was forced to note (with charts) that there had been a surge in hateful conduct. As of yesterday, we're in the "which brand caves in to pressure and pulls its ad spend" phase.

The European Commission's Thierry Breton (EiM #52) took no time to remind Musk who he thinks is boss, leading the billionaire to reach out about "meet[ing] with Breton in the coming weeks", according to Politico. Twitter, lest we forget, will have to comply with the Digital Services Act at some point between mid-2023 and February 2024.

2. Product

Musk has touted a plan to charge verified users $20 a month (since reduced to $8) to keep their blue check, under the guise of reducing spam and "defeat[ing] the bots and trolls" (don't ask me how). Meanwhile, access to content enforcement tools was removed from some employees, according to Bloomberg, proving that the site has, at least, learnt from the past. Musk later promised that the tools will return.

3. People

Musk fired all of his top execs in one fell swoop but the most consternation was reserved for his dispatching of Vijaya Gadde, Twitter's head of legal, policy, and trust and safety, who was variously described as a "force for good" and someone who fought for free speech "over and over and over again". It's not clear if or when the rumoured layoffs of 25% of staff will take place (or whether they will hit trust and safety) but Wired warned that if Musk starts firing Twitter's security team, don't hang around. What a week it's been.

As all this was going on, a new court hearing in Daniel Motaung's case against Meta was taking place in Kenya to determine the jurisdiction of his case (EiM #159 and others). The social behemoth argued that it's not incorporated in the country and that, therefore, local courts lack jurisdiction. A decision has been scheduled for 6 February 2023.

Finally for this section, one of my favourite ideas to come out of the last few years — Discord's Moderator Academy — has received a design refresh. Launched in January 2021 (EiM #94), the course is designed to "empower users and moderators with strong policy acumen, moderation best practices, and a passion for making the internet a better place", according to Michael R Swenson from its Policy Programs team on LinkedIn.

People

Those impacting the future of online safety and moderation

This section of EiM nods to people shaping online safety in one way or another, but it also doubles as a list of things I want to watch, read or see, and the people behind them (EiM #111, #161, #168).

To that list, I'm pleased to add Shalini Kantayya, whose new documentary TikTok, Boom explores the rise of the ByteDance-owned video app and its impact on users' mental health. It pays special attention to the algorithm fuelling the app's growth, she explained in a Forbes interview, and the data harvesting practices that shape moderation decisions on a daily basis.

Kantayya also draws a parallel with the revelations about Instagram that emerged during the Facebook Files (EiM #128) and which, for all that there is to say about Frances Haugen, have led to new safety features on the app (EiM #175, #152). I look forward to giving it a watch.

Tweets of note*

Handpicked posts that caught my eye this week

  • "The choice of platforms to under invest in content moderation in the global south was a business decision. This is structural violence." - Curtin University prof Tama Leaver quoting Kenyan activist and writer Nanjala Nyabola at the Association of Internet Researchers in Dublin.
  • "Men will literally make themselves the face of all content moderation decisions instead of going to therapy." - Former Twitter VP of Product at Jason Goldman going all in.
  • "Everyone will hate you all the time." - no, not a text sent to me by my sister but Eva Galperin, director of cybersecurity at EFF, explaining what happens if you're in charge of moderating content.
  • "the hardest moderation is before you hit enter" - someone put this wisdom from St John's law professor Kate Klonick on a t-shirt.
  • "This is a direct result of all the careful design that went into the system." - Colin Fraser, formerly of the parish of Twitter, believes Birdwatch is the real deal.

*This is a bumper edition of Tweets of note because there were too many good ones not to include. Let me know if you'd like to see more tweets every week.

Job of the week

Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1400+ EiM subscribers. This week's job of the week is from an EiM member.

ActiveFence is looking for a Senior Cyber Threat Intelligence Analyst to help drive its analysis of multiple cyber threat intelligence sources on the darknet and deep web. The role involves leading Request for Information (RFI) analysis and quality assurance of the CTI team's deliverables, and requires at least three years of experience.

Its Extremism team is also looking for a Webint/Osint Analyst to remotely gather and analyse data from all corners of the internet. Daily tasks include processing large amounts of data and creating reports for ActiveFence's customers. This is a great role for someone with strong research and analysis skills.