
📌 New call for platform transparency, YouTube's spam problem and talking 'algospeak'

The week in content moderation - edition #155

Hello and welcome to Everything in Moderation, the weekly newsletter that keeps you on top of online safety and content moderation news and what it means. It's written by me, Ben Whitelaw.

Welcome to new subscribers from Ranking Digital Rights, MediaHack, Hearken and a host of smart Danish folks that I met at the International Journalism Festival in Perugia. You can watch back my panel on the internet's essential workers with four excellent women here.

If you missed the recent Q&As with the head of Trust and Safety at Clubhouse and the former research lead for conversational AI at Jigsaw, made possible by EiM's founding members, both are worth catching up on. There'll be more in the coming weeks too.

Without further ado, here's this week's round-up — BW


📜 Policies - emerging speech regulation and legislation

The Digital Services Act "could undermine platforms' ability to effectively and timely moderate content, keep users safe and promote trust online", according to a blog post from the Disruptive Competition Project, run by the Computer and Communications Industry Association. It also calls for the creation of online legal practices for the mediation of disputes, a topic I've covered here over the last few years (EiM #72) but one that doesn't seem to have moved on very much in that time.

Calls for greater transparency about the content moderation processes of big platforms are not new (EiM #71) but a new joint op-ed by three politicians in response to the Sama/Facebook revelations in Kenya (EiM #150) is a notable development. Writing for The Independent, Damian Collins (UK), Sean Casten (US) and Phumzile van Damme (South Africa) call for public audits of moderation teams, the disclosure of contractors and the protection of whistleblowers. Not very far away from what I called for in 2019 (EiM #68).

Our understanding of how well the platforms have responded to the war in Ukraine is changing all the time, as demonstrated by a number of developments this week:

  • A letter signed by 31 civil society groups has raised questions about the response of large social media platforms to the Ukraine war and emphasised that "other crisis situations have not received the same amount of support even when lives are at stake". There are some killer quotes in there, including this from Access Now's Marwa Fatafta:
“Imagine Facebook making an exception for Hamas calling for resistance or self-defense against the Israeli occupation. It is unthinkable.”
  • There's some interesting detail in this piece on the lack of "systematic communication" between Ukrainian government officials and Meta which explains the company's fluctuating response to the ongoing war. I also learnt that the company only recruited a public policy expert in Ukraine in 2019, a whole five years after Russia's invasion of the Donbas.
  • A great example here of how journalists reporting on online speech assume the role of pseudo-moderators: Reuters contacted Meta about the abuse that Mariana Vishegirskaya, a pregnant Ukrainian fashion and beauty influencer, was receiving on Instagram, and was told the company couldn't do anything about the "vile" comments on her profile. The next day, Meta made her an "involuntary public person", meaning moderators could start removing posts under its harassment policy. Go figure.

💡 Products - the features and functionality shaping speech

Stricter filters for spam comments, which continue to blight the platform's major streamers, are being piloted on YouTube. The new setting was spotted by tech video maker Marques Brownlee, although how strictness is defined or how long the "experiment" will last is not clear. 950m spam comments were automatically removed from YouTube in 2021 alone, according to its latest transparency report.

I would've hoped this was already the case but reported comments on Kickstarter will now be temporarily hidden until a moderator can review them, in an attempt to quell abusive responses under crowdfunding campaigns. The company will also launch a Community Advisory Board in May, much like the councils that TikTok (EiM #80) and Twitter (#89) have had in place for some time. Kickstarter, for what it's worth, has yet to sign up to the Crowdfunding Trust Alliance (EiM #136), which was set up last year by two rivals to share best practices on safety.
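For the curious, the report-then-review flow Kickstarter describes boils down to something like the sketch below. It's a minimal illustration of the general pattern only; the names and structure are invented for the example, not drawn from Kickstarter's actual systems.

```python
# A minimal sketch of the "hide on report, restore or remove after review" flow.
# Class and method names are illustrative assumptions, not Kickstarter's code.

from dataclasses import dataclass, field

@dataclass
class Comment:
    body: str
    visible: bool = True
    reported: bool = False

@dataclass
class ModerationQueue:
    pending: list = field(default_factory=list)

    def report(self, comment: Comment) -> None:
        # Hide the comment immediately and hold it for a human reviewer.
        comment.reported = True
        comment.visible = False
        self.pending.append(comment)

    def review(self, comment: Comment, approve: bool) -> None:
        # A moderator either restores the comment or keeps it hidden.
        self.pending.remove(comment)
        comment.visible = approve

queue = ModerationQueue()
c = Comment("This campaign is a scam and so are you")
queue.report(c)                   # hidden from the page until reviewed
queue.review(c, approve=False)    # moderator decides it stays hidden
```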

💬 Platforms - efforts to enforce company guidelines

The risk of having posts removed or down-ranked on TikTok, YouTube and Instagram is forcing users to adopt "algospeak", according to a new, widely-shared Taylor Lorenz piece. It's full of fun yoof lingo including "unalive" (meaning dead), "SA" (sexual assault), "spicy eggplant" (vibrator) and "Backstreet Boys reunion tour" (pandemic) but also emphasises how designing systems to catch ever-evolving language is nigh on impossible.

Digital culture expert Jamie Cohen writes for OneZero that algospeak "may increase our ability to read internet content better, but on the other, it severs and fractures our ability to communicate collectively." My read of the week.
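Lorenz's point about the limits of filtering is easy to see with a toy example. The sketch below is purely illustrative and assumes a simple keyword blocklist rather than anything TikTok, YouTube or Instagram actually run: the substituted phrase sails straight past the filter until someone manually adds it.

```python
# Purely illustrative: a toy keyword blocklist of the kind users are said to be
# routing around. Terms and substitutions come from the article itself; nothing
# here reflects any platform's real systems.

BLOCKLIST = {"dead", "sexual assault", "vibrator", "pandemic"}

def flagged(comment: str) -> bool:
    """Return True if the comment contains any blocklisted term."""
    text = comment.lower()
    return any(term in text for term in BLOCKLIST)

print(flagged("the pandemic ruined our tour plans"))                  # True - caught
print(flagged("the backstreet boys reunion tour ruined our plans"))   # False - algospeak slips through
```

Every new coinage means another manual addition to the list, which is why catching ever-evolving language at scale is such a losing game.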

France should move towards "a free and open public social network" and force platforms to make content workers full employees, according to Marine Le Pen, the far-right candidate facing Emmanuel Macron in the upcoming French election run-off. Her views, set out in a policy document reviewed by Euractiv, echo moves in India, where clone companies (such as Koo) have sprung up with an emphasis on sovereignty.

Become an EiM founding member
Everything in Moderation is your guide to understanding how content moderation is changing the world.

Between the weekly digest (📌), regular perspectives (🪟) and occasional explorations (🔭), I try to help people like you working in online safety and content moderation stay ahead of threats and risks by keeping you up-to-date about what is happening in the space.

Becoming a member helps me connect you to the ideas and people you need in your work making the web a safer, better place for everyone.

To acknowledge the leap of faith you'd be taking in supporting EiM at this early stage, I've created a special 10% lifetime discount for founding members who come on board early. Become a founding member today.

👥 People - folks changing the future of moderation

I never set out to mention Elon Musk in consecutive newsletters (EiM #154), but his recent Twitter takeover offer, and the fact that it seems partly motivated by the company's speech guidelines, means I have no choice. It also means that Fred Wilson's take on the matter is worth noting.

Wilson is a venture capitalist who co-founded Union Square Ventures and was Twitter's first venture investor. He was an early user of the platform and, as the book Mastering the VC Game noted, was deep in the nitty-gritty of product decisions.

In a blog post earlier this week, Wilson writes — in an echo of some of what Musk has said — that "one company controlling the moderation policy of the entire Twitter conversation" is "not ideal". He also later tweeted that the bluebird is "too important to be owned by one person" and should be "decentralized as a protocol that powers an ecosystem of communication products and services." Which is essentially what Bluesky is working on.

I might eat my words on this but perhaps where we'll end up won't be that far from where we are now.

🐦 Tweets of note

  • "First, it'd be impossible to enforce and turn every kid into an instant criminal for seeking access to information & culture" - tech analyst Adam Thierer reacts to the latest bad-mainstream-media-moderation-take.  
  • "Glad to see that today, everyone's suddenly an expert on content moderation" - Jillian C York reacts to the Twitter takeover news in the only way that feels right.
  • "The Iron Law of social media platforms" - Wharton professor Ethan Mollick on how profit maximisation can indirectly affect "moral behaviour" on platforms.

🦺 Job of the week

This is a new section of EiM designed to help companies find the best trust and safety professionals and enable folks like you to find impactful and fulfilling jobs making the internet a safer, better place. If you have a role you want to be shared with EiM subscribers, get in touch.

UK-based charity Childnet is looking for an education officer to create and deliver online safety training for young people and children. It's a one-year contract with a salary of between £25,000 and £29,000, plus 25 days' holiday.

Founded in 1995, Childnet helps to organise Safer Internet Day and inputs into government legislation, meaning that, as the Online Safety Bill passes through parliament, it will probably be an interesting place to work for the next 12 months.