5 min read

The problem of algorithmic distribution, new Kenya court case and top 100 women list

The week in content moderation - edition #185

Hello and welcome to Everything in Moderation, your all-in-one recap of the important online speech and content moderation news this week. It's written by me, Ben Whitelaw.

This is the final edition of 2022 and boy has it been a wild ride. Over the last 12 months, I've written 44 newsletters, produced 10 expert Q&As, penned more than 65,000 words and garnered almost 900 new subscribers (including you? Thank you if so). We've seen the topic of platform governance play out almost daily in the news and content moderation, more than ever, has become a lightning rod for political and economic discord across the world. 2023, as I predicted this week, will see more of the same.

A festive welcome to new subscribers from Google, Amazon, ByteDance, Tremau and elsewhere. A special thank you to every single EiM member whose contributions ensure the newsletter reaches your inbox every week.

I'm working on some exciting changes and collaborations for 2023 and will be back in January with more information (use this special offer to get in early). So, for the last time this year, here's everything in moderation - BW


Policies

New and emerging internet policy and online speech regulation

The UK's Online Safety Bill "threatens to undermine" the volunteer-driven governance model used by numerous big platforms and could chill speech, according to two Wikimedia Foundation executives. Writing on CEPA, Rebecca MacKinnon and Phil Bradley-Schmieg argue that the bill should "recognise the difference between centralized content moderation carried out by employees, and community-governed content moderation systems", as the Digital Services Act does.

Meanwhile, Global Partners Digital has published a blog post following the Bill's latest scrutiny period, pointing out that its design could "accentuate the existing network effects and monopolies of the most dominant platforms." Expect more legislation next year to try and mitigate that effect.

Tackling the issue of online speech and safety will require "more insight into how platforms make these decisions", according to an op-ed from a former Facebook public policy director. Katie Harbath writes for CNN that platforms must "make the best choice out of a range of terrible options" using a multi-pronged approach that looks "not just at the content but also the behavior of people on the platform, how much reach content should get, and more options for users to take more control over what they see in their newsfeeds." My read of the week.

Products

Features, functionality and technology shaping online speech

Algorithmic distribution is back in focus this week following TikTok's announcement that users will soon be able to see why a video has been recommended to them. The additional context will include a user's previous interactions, the accounts they follow, content that's been posted recently, and content that's becoming popular in the user's region. No launch date has been set.

It comes just five days after the Centre for Countering Digital Hate published a new report on the speed with which a new 13-year-old TikTok user was recommended suicide content via the For You page. Facebook also just updated its approach to the ranking signals that govern what users see in their feeds. So all very timely.
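Back on the TikTok feature itself: for the curious, here's a minimal and entirely hypothetical sketch of how that kind of "why you're seeing this video" metadata could be assembled from the signals named in the announcement. The class and field names are invented for illustration and are not TikTok's actual API.

```python
from dataclasses import dataclass

# Hypothetical signals a recommender might attach to an impression;
# names are invented for illustration, not TikTok's real interface.
@dataclass
class RecommendationContext:
    interacted_before: bool = False   # user watched or liked similar videos
    follows_creator: bool = False     # user follows the posting account
    recently_posted: bool = False     # video was posted recently
    popular_in_region: bool = False   # video is trending in the user's region

def explain_recommendation(ctx: RecommendationContext) -> list[str]:
    """Turn raw signals into user-facing reasons like those TikTok describes."""
    reasons = []
    if ctx.interacted_before:
        reasons.append("You've interacted with similar content")
    if ctx.follows_creator:
        reasons.append("You follow this account")
    if ctx.recently_posted:
        reasons.append("This video was posted recently")
    if ctx.popular_in_region:
        reasons.append("This video is popular in your region")
    return reasons or ["No specific reason available"]

# Example: a video surfaced because the user follows the creator and it's trending locally.
print(explain_recommendation(RecommendationContext(follows_creator=True, popular_in_region=True)))
```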

Elsewhere in product news: the design of Community Notes, Twitter's fact-checking tool that lets users add context to tweets, means that 96% of notes are never seen by the public, according to new analysis by Bloomberg. Notes only become visible once users from a “diversity of perspectives” agree that a note is “helpful”; with so little consensus on the platform, most notes never clear that bar.
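To make that mechanism concrete, here's a simplified, hypothetical sketch of a "bridging" visibility check of the kind described above. Twitter's actual scoring is considerably more involved (it models raters and notes jointly from rating history), so treat this as an illustration of the idea rather than the real algorithm; all names and thresholds are assumptions.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical rating record: cluster_id stands in for the "perspective"
# a rater tends to agree with (the real system infers this from rating history).
@dataclass
class Rating:
    rater_id: str
    cluster_id: int
    helpful: bool

def note_is_visible(ratings: list[Rating],
                    min_ratings: int = 5,
                    min_helpful_ratio: float = 0.7) -> bool:
    """Simplified bridging check: a note only surfaces if raters from at
    least two different perspective clusters mostly rate it helpful."""
    if len(ratings) < min_ratings:
        return False

    helpful_by_cluster = defaultdict(list)
    for r in ratings:
        helpful_by_cluster[r.cluster_id].append(r.helpful)

    # Clusters in which a majority of raters found the note helpful.
    agreeing_clusters = [
        c for c, votes in helpful_by_cluster.items()
        if sum(votes) / len(votes) > 0.5
    ]

    overall_ratio = sum(r.helpful for r in ratings) / len(ratings)
    return len(agreeing_clusters) >= 2 and overall_ratio >= min_helpful_ratio
```

Requiring agreement across clusters is exactly why, on a polarised platform, so few notes become visible, which is consistent with Bloomberg's 96% figure.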

Platforms

Social networks and the application of content guidelines  

Meta is facing another lawsuit in Kenya, this time for failing to remove Facebook posts inciting racial hatred, which reportedly led to “the loss of lives, displacement of families, vilification of individuals and destruction of communities in Kenya and across Africa”. The case is being brought by two Ethiopian researchers, Abrham Meareg and Fisseha Tekle, both of whom have suffered as a result of inadequate content moderation, according to OpenDemocracy. It comes just weeks after it was announced that Daniel Motaung's case against Sama and Facebook (EiM #179) will be heard in February next year.

Twitter's new head of trust and safety has come under fire for suggesting a partnership with a controversial anti-child sexual abuse non-profit that has links with QAnon. Ella Irwin, who took over from Yoel Roth (EiM #180), reached out to Operation Underground Railroad despite the organisation being under investigation by federal authorities since 2020. A worrying development.

In other Twitter news, a former member of its Trust and Safety Council has broken her silence, saying "there was no outreach to us since Musk's takeover" and that the council's advice was "not being heard" in the weeks before she quit. Eirliani Abdul Rahman, co-founder of Youth, Adult Survivors & Kin In Need (YAKIN), added that she was "very proud of the work we did on the council".

It feels like an age ago, but it was also this week that Twitter suspended (and then resuspended) the accounts of journalists, of rival platforms like Mastodon and Koo, and even ElonJet, the bot account tracking Musk's carbon-belching private plane. Will Oremus at The Washington Post called the whole episode "a mixture of vindictive score-settling, a made-for-social-media reality show, and an attempt to distract from scrutiny of the personal digital fiefdom that Musk’s Twitter has quickly become."

That sense was confirmed by news yesterday that the blue bird app's already-slashed public policy team was being halved again, with the cuts including Sinead McSweeney, its global vice president for public policy, according to Rappler.

Finally, spare a thought for Pornhub: after being kicked off Instagram in September (EiM #172), the adult content company has been banned from YouTube for linking to pornography from its supposedly safe-for-work channel. The company “vehemently denies” the claims. Expect a strongly worded letter.

People

Those impacting the future of online safety and moderation

Dr Sarah T Roberts has cropped up numerous times in EiM (EiM #72, #74, #133) but somehow not in this section. It's an oversight on my part because I've been reading the UCLA professor's work since I started writing the newsletter back in 2018.

Thankfully, Roberts' inclusion in the 2023 100 brilliant women in AI ethics list gives me a reason to call out her work again. Compiled by Lighthouse3's Mia Dand, the list is designed to showcase "talented women working hard to keep humanity safe from technological harms".

Roberts' work alongside Safiya Noble to launch and run the Minderoo Initiative on Technology and Power certainly qualifies. She's joined by other brilliant women whose work you may also have come across, notably Catherine Bui and Phumzile van Damme. Bookmark and follow each and every one on the list.

Tweets of note

Handpicked posts that caught my eye this week

  • "even peloton is upping its Trust & Safety game" - Ben Decker, CEO of Memetica (and, we must presume, Peleton user) notes a clampdown by the fitness company.
  • "I've been in contact with @Linktree_'s Trust & Safety team about Real America's Voice, home of Steve Bannon's War Room" - Co-founder of CheckMyAds Nandini Jammi with a thread on how the bio site is violating its own T&Cs.
  • "Using Facebook’s @OversightBoard as a case study, I critique how this discourse empowers, legitimizes & obscures." - University of Georgia's Thomas Kadri shares his paper on the use of judicial language to justify and expand content moderation powers.

Job of the week

Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1500+ EiM subscribers.

Not a job but a great opportunity nonetheless: the Berkman Klein Center for Internet & Society at Harvard is accepting fellowship applications for 2023-2024.

BKC fellowships are designed for scholars whose "research advances Internet & society studies in the public interest." Fellows can be academics, practitioners or people at the intersection of industries. Each fellow receives a $75,000 stipend for the year.

If accepted, you'll be expected to produce at least one public output that impacts and informs the scholarly and public debates in the arenas in which you work. And you'll be working with five other fellows too. Sounds great, doesn't it?