
The downside of platform transparency, Sama ditches Meta and battlefield bans

The week in content moderation - edition #187

Hello and welcome to Everything in Moderation, your rundown of the most important moderation, safety and online speech news this week. It's written by me, Ben Whitelaw.

Thanks to those of you who shared last week's edition and the Q&A with Glenn Ellingson, and to the host of new subscribers that have come on board this week, including folks from Meta, FT Strategies, the Oversight Board, TrustLab, Fiverr, Spotify, DCMS and elsewhere.

If you want the newsletter to keep growing and are one of the 400+ people who open every edition, I'd love for you to join the growing ranks of EiM members. There's a special offer until the end of January.

Enough from me; there's a lot to get through this week, including news from Kenya and contrasting approaches to sexually suggestive content.

This is everything in moderation this week — BW


Policies

New and emerging internet policy and online speech regulation

The transparency provisions that form part of controversial social media laws in Texas and Florida could "become tools for government officials and regulators to intimidate platforms and influence their editorial decisions". That's according to Ramya Krishnan, staff attorney at the Knight First Amendment Institute.

In her Slate piece, Krishnan notes that disclosure requirements exist in a variety of contexts (eg political lobbying) without always being used for good. She also calls for "a First Amendment framework that can account both for the value of platform disclosure and for its potential harms and costs", which doesn't sound like a straightforward task. Luckily, we won't have to wait long to find out: the Supreme Court is expected to consider the constitutionality of the laws "in the coming weeks".

The difficulty of creating robust platform policies comes out very strongly in this Tech Policy Press discussion with four investigators and analysts involved in the recently released Select Committee report looking at the role of social media in the Capitol Hill riots. You should read it all but this part summed up the challenge nicely:

If Facebook had acted, if they had say passed a policy against delegitimizing the 2020 election, a policy against election denial, almost all of conservative media at the time was complicit in spreading these lies. And so, Facebook would have to take action, not just against Stop the Steal groups, but conservative talking heads, major conservative news sites, major social media influencers, political figures.

Products

Features, functionality and technology shaping online speech

The CEO of a major trust and safety company has said that there are "too many opinions [in the field] and too little cooperation". Writing for the World Economic Forum, Noam Schwartz, CEO and founder of ActiveFence, noted that there was a "growing perception that technology platforms, governments and the media sit on opposing ends of the harmful content debate" but that regulation, such as the iterative approach to the UK's Age Appropriate Design Code, demonstrated that a collaborative approach between different players in the market was possible.

ActiveFence's website does not say who it works with, but the company announced $100m in funding in 2021, when it reportedly had "several dozen" customers across a range of verticals.

Platforms

Social networks and the application of content guidelines  

The big story of the week was one that I've covered repeatedly here over the last year: Sama, the moderation outsourcing company facing a lawsuit in Kenya for union busting, will be "discontinuing" its work with Meta due to the "current economic climate". Some 3% of its staff, mostly in Nairobi, will be let go.

The decision has wider significance for a few reasons:

  • It comes just two weeks before a judge is due to rule on whether Kenya is an appropriate jurisdiction for Daniel Motaung's case (EiM #159 and others).
  • Majorel will be taking over the $3.9m contract, according to legal NGO Foxglove, despite reports of poor mental health support for its workers.
  • Sama is the second major player to withdraw from the space, following Teleperformance's decision to retreat at the end of last year (EiM #182) amid its own union-busting accusations.

If you're new to this story, I spoke to Billy Perrigo, the TIME journalist who interviewed Motaung and broke the story last year, about why the outcome of the case will have significant ramifications for platforms.

Google and Meta have been removing content calling for violence in Brazil or "praising the attacks by inciting others to commit violent acts", spokespeople at the companies said this week. No specific details were given about how the platforms have reacted to last week's riots, but I'm left with the impression that the work done by platforms in the run-up to the election did not work as intended. YouTube, to give one example, removed 10,000 videos on 2,500 channels between March and November last year, well beyond the election in October, and yet this still unfolded.

Twitter has banned the use of several hashtags used to share child sexual abuse material (CSAM) following an investigation by NBC News. The hashtags related to material stored on Mega, a file upload service, which users on Twitter were trading or selling access to. It's a blow to Elon Musk, who has criticised Twitter's former leadership for inaction and made preventing CSAM a focus. As Techdirt noted, the company "used to be one of the leading companies in responding to this challenge, but now it appears that the opposite is true".

The investigation coincided with further cuts to Twitter's trust and safety team, including Nur Azhar Bin Ayob, the recently appointed head of site integrity for Twitter’s Asia-Pacific region. Being in that team must be tough right now.

Finally, here's one I missed last week: TikTok has expanded its 18+ distribution feature to videos; until now, it had only been available on its Live streams.

People

Those impacting the future of online safety and moderation

To the names of Zev Burton (EiM #109), Feroza Aziz (#56) and Jennifer Bloomer (#186), we can add Brandon Mitchell. Their link? Exposing the hypocrisy in the moderation guidelines of major platforms.

His story is a little different to the others. The 36-year-old Canadian has been volunteering as a medic in the Ukrainian forces, supporting civilians and helping them get medical care. His account gives dispatches of life in the war effort and is designed to raise money via a PayPal link in his bio. Except that some of his videos have been removed entirely and others have had their views suppressed, he says.

So this week he took to posting a video on his so-called skincare routine, barely hiding his disdain for the rules while also throwing some shade at the superficial nature of the video platform.

And I tell you what: Watching someone shout "Does the L'Oréal age expert actually inhibit one's own actual oils?" over the echoes of shelling in the background really drives the point home that we still know very little about how platforms work.

Tweets of note

Handpicked posts that caught my eye this week

  • "Our very own Daniel Motaung, SA! A slayer of giants" - Phumzile van Damme celebrates her fellow countryman for his impact in 2022.
  • "Meta's responses to the Board's recommendations are due next Thursday, 19 Jan" - Andrew Smith, policy and cases lead at the Oversight Board, provides us with a helpful reminder.
  • "Do not use "Thank God" in your video script—the video gets rejected because of content moderation violation" - sound engineer Guido Mercer finds a strange quirk in the policy small print.