Inside the Oversight Board, Roth on app stores and third-party moderation under threat?
Hello and welcome to Everything in Moderation, your global content moderation and online safety round-up. It's written by me, Ben Whitelaw, and supported by good people like you.
Today's newsletter touches on all the meaty, difficult, hard-to-resolve topics that make content moderation such a fascinating issue to follow — transparency, accountability, labour rights, and the rule of law, to name a few. I've linked to relevant past editions of EiM to help give further context to this week's developments.
To new subscribers from Cornell, DCMS, SightEngine, The Amplifier Group and elsewhere, thanks for joining the club. If you enjoy today's edition, consider forwarding it to someone who might also find it useful or share it on the least badly moderated platform you know.
Here's everything you need to know in moderation this week — BW
Policies
New and emerging internet policy and online speech regulation
The Oversight Board, the independent-but-Meta-funded council created two years ago, aims to take a "more visible, consequential" role in the way Facebook and Instagram moderate speech, according to a long and detailed profile in Wired.
There's a lot in there that we already knew — not least that Meta has implemented less than a quarter of the Board's recommendations — but also a number of details I had missed: platform algorithm expert Renée DiResta being denied a spot on the board because "they were going in a different direction"; the difficulty of getting access to CrowdTangle to conduct investigations; and the furore over Facebook's u-turn on seeking a Ukraine advisory (EiM #153). Steven Levy, who wrote the story, even sits in on the investigation of the XCheck debacle (EiM #129), so chances are you'll learn something new. My read of the week.
What data-sharing provisions exist in current and incoming platform regulation, and how do the different approaches compare? Well, former CrowdTangle CEO Brandon Silverman has produced a useful legislative overview, containing transparency tidbits from the Digital Services Act and the Platform Accountability and Transparency Act, plus a glossary of key terms. One to bookmark.
Products
Features, functionality and technology shaping online speech
Perhaps the most interesting news of the week comes from Teleperformance, which announced that it would no longer moderate what it calls "flagrant, heinous and odious content" following accusations of low pay and union busting in Colombia (EiM #180). The list of platforms the French company works for is not public, but we know it counted TikTok as a client as recently as August (EiM #171), so this is big news. Outsourcing is viewed as a necessary evil for platforms, but Foxglove and others have argued that it is a core part of the "deep hypocrisy" at the heart of platforms and should be ended (EiM #89).
On the topic of outsourced trust and safety, TaskUs this week announced a strategic investment in Antitoxin Technologies, which owns AI startup L1ght. The two will work together to launch TaskUs' Safety Support Center, a product that enables moderators to "analyze toxic content in real-time". Unlike Teleperformance, TaskUs is leaning in: CEO Bryce Maddock was clear that "content moderation has never been more critical than it is now".
Platforms
Social networks and the application of content guidelines
Yoel Roth, Twitter's former head of Trust and Safety, has written an op-ed for the New York Times in which he lifts the lid on what it was like working under Elon Musk.
The most notable detail, and the one picked up by the tech press, is his revelation that "representatives of the [Google Play and Apple] app stores regularly raised concerns about content available on our platform" during his time at the company, and that they were knocking on Twitter's door again in recent weeks. It would be remiss of them not to.
Elsewhere in bluebird-related news:
- The remaining members of its Trust and Safety team organised a 'sickout' on the same day that Yoel left, in protest at Musk ignoring their warnings on hate speech.
- Video of Elon Musk talking down his promised content moderation council has been leaked to TMZ (never thought I'd say that here), if a leak can also be the most unsurprising thing ever.
- A prominent Brazilian academic told Al Jazeera that online critics were ready to pay for Twitter Blue and create a fake account of her "to defame me as they please".
- Oh, and Donald Trump and a bunch of previously banned users, including the Babylon Bee and Jordan Peterson, have had their accounts reinstated, a decision that will "lead to real harm for users and will scare away (more) advertisers", according to this op-ed by Danielle Citron and Hany Farid, and will give the former president "greater reach to inflame more violence ahead of 2024".
An interesting story now involving Instagram, which this week was forced to reinstate a drill music track removed following a police request and told to revamp how it handles requests from government authorities. The track, removed at the behest of the Metropolitan Police despite not breaking the law, did not even break Facebook or Instagram's rules, according to the Board, which also noted "serious concerns of potential over-policing of certain communities". I said back in July that the case raised eyebrows (EiM #169) and now it's clear why.
People
Those impacting the future of online safety and moderation
In the same week that one platform CEO mocked the concept of trust and safety and continued to undo a decade of good work, another took the stage to espouse its benefits.
TikTok CEO Shou Zi Chew said "the company has tens of thousands of employees in content moderation" in an interview at the Bloomberg New Economy Forum. Whether he was referring to the video app specifically or to ByteDance, with its suite of user-generated content products, is not clear.
The app, lest we forget, has had its fair share of moderation controversies in the last few years, most notably the suppression of content about Uighur Muslims (EiM #49) and government pressure in Pakistan (EiM #84), to name just two. Contractors moderating content in the US have also filed a lawsuit against TikTok over the working conditions they were forced to endure (EiM #154).
When asked if TikTok could operate effectively if it fired half its staff, as Twitter has done, the Singaporean replied: “I hope that day never comes”. Us too, Shou.
Tweets of note
Handpicked posts that caught my eye this week
- "What are they doing about "one of the real issues for the board" – human moderators looking at content from conflicts like in Ethiopia?" - legal firm Foxglove shares a video of a Sky News reporter grilling an Oversight Board member about content moderation at the Big Ideas Live event. How far we've come eh?
- "Sneak peak: low trust = low reporting by users" - University of Utrecht associate professor Catalina Goanta shares an interesting sounding paper on factors that influence decisions to report content.
- "Content moderation is not a dinner party" - Marco Pancini, public policy at Meta, clearly hasn't seen me try and cook.
Job of the week
Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1500+ EiM subscribers.
The Information Society Project at Yale Law School is looking for a fellow as part of its Majority World Initiative (MWI).
The successful candidate will oversee a project bringing together scholars and stakeholders in a workshop and conference, as well as producing papers, academic articles, blog posts and interviews with scholars. To be honest, it sounds fantastic.
The fellow receives a salary of $50,000–$80,000, depending on experience, and will be based in New Haven, Connecticut. Do forward this to anyone you think might be eligible.