5 min read

📌 Life as a Chinese content reviewer, NetzDG violates EU law and a tool to combat harassment

The week in content moderation - edition #151

Hello and welcome to Everything in Moderation, your guide to the world of content moderation and online safety. It's written by me, Ben Whitelaw.

A big thank you to everyone who became an EiM member since the launch last week (EiM #150). Your support and messages of encouragement have been heartening. Remember, as a current EiM reader, you can take advantage of the founding member offer until the end of March, which gets you 10% off for life.

I'm really pleased to welcome new subscribers from Stanford University, ActiveFence, the BBC and elsewhere. If you were sent this by someone you know, you can subscribe with your email address here.

There's a lot to get into this week so let's dive in — BW


📜 Policies - emerging speech regulation and legislation

Georgia joined the growing list of US states using potentially unconstitutional laws to address supposed biases in content moderation under the guise of "protecting users". Senate Bill 393 seeks to prevent social media platforms from removing or censoring content, arguing that services with more than 20m users are "common carriers", and will now move to the House for further debate. Bills passed in Texas (EiM #139) and Florida (EiM #119) last year are now on appeal but others in Alaska, Ohio and Tennessee are proceeding at pace.

Late to the party on this one but a German court last week ruled that the Network Enforcement Act (NetzDG), which mandates that social media platforms block or delete criminal content, partially violates European Union law. Google and Meta brought the complaint against the 2018 law after amendments were passed that in effect force platforms to hand over user information to the German authorities. As far as I can make out, those changes — which were due to come into play in February 2022 — will now no longer apply to Google and Meta following this judgement.

Platforms made a host of policy changes in the wake of the invasion of Ukraine (last week's EiM) and TikTok this week caught up by adding 'state-controlled media' labels to Russian accounts. Which would be great if, as this Mashable piece notes, the labels weren't almost invisible.

The-independent-but-Facebook-funded Oversight Board™ will now give a week's notice of upcoming cases, allowing organisations to prepare submissions and join office hours with the Board team. For small NGOs with limited resources looking to engage in the OB process, this is a small but welcome change.

💡 Products - the features and functionality shaping speech

New moderator tools released this week are the latest attempts by Facebook to address the spread of misinformation in its Groups product. Admins can now approve or deny members based on predesignated criteria (this piece in Protocol doesn't say what those are) as well as block posts flagged as false by Facebook's fact-checking partners (er, why was this not happening before?!). It follows changes to how mods of banned groups could create new ones (EiM #80) and the demotion of Groups content in users' feeds where it's found to break community guidelines (EiM #133).

A tool for hiding abusive tweets and bulk blocking toxic accounts has been open-sourced as part of efforts to protect female social media users from online abuse. Harassment Manager, built by Google's Jigsaw unit, is now available on GitHub for adaptation by any organisation. It will also be distributed to staff at the Thomson Reuters Foundation later this year. I haven't got around to reading it but this paper, from February, has some background on the project.
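For a flavour of the underlying idea, here's a rough sketch of how a tool like this could score tweets using Jigsaw's Perspective API (the toxicity classifier Jigsaw maintains) and queue the worst ones for hiding or blocking. This is illustrative rather than lifted from the repo: the endpoint and request shape are Perspective's documented ones, but the API key, threshold and function names are placeholders of my own.

```python
import requests

# Perspective API's public endpoint; the key below is a placeholder, not a real credential.
PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_PERSPECTIVE_API_KEY"


def toxicity_score(text: str) -> float:
    """Return Perspective's TOXICITY probability (0.0-1.0) for a piece of text."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": API_KEY}, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


# Illustrative only: flag tweets above an arbitrary threshold (0.8 is my assumption,
# not a value taken from Harassment Manager) for a human to hide or block in bulk.
tweets = ["thanks for sharing this!", "you are a worthless idiot"]
flagged = [t for t in tweets if toxicity_score(t) > 0.8]
print(flagged)
```

The point of keeping a human in the loop, rather than auto-blocking on the score alone, is that classifiers like this are probabilistic and the cost of a false positive falls on the person being protected.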

French startup Bodyguard, which uses classifiers to reduce toxicity and hate on platforms, has raised €9m in series A funding. The round was led by Keen Venture Partners, which has backed UK safety tech tool Crisp, and RingCP, which has invested in fraud prevention service Blackwen.

💬 Platforms - efforts to enforce company guidelines

Substack is facing an exodus of trans and marginalised creators who "do not trust that the platform will enforce its own rules". Mashable spoke to a number of writers, including professor Grace Lavery and Singaporean journalist and activist Kirsten Han, who have left the platform even after receiving grants or significant advances. Turns out you can't buy people's complicity in a poorly governed platform.

TikTok's new community guidelines (EiM #147) seem to be having some teething problems since coming into effect this week. Details are scarce and the scale of the takedowns is not known but reports claim that videos with the hashtag #covidisnotover have been removed, suggesting some overreach. One to watch.

Twitter announced that Birdwatch, its pilot programme for adding context to tweets via notes, is being rolled out to more users in the US following positive initial results. In a blog post, VP product Keith Coleman said surveyed users were 20-40% less likely to believe a misleading tweet if they read a note about it. I've covered Birdwatch (EiM #132) before and am glad to see its slow and steady progress.

One that slipped through the cracks last week: Tumblr has agreed a settlement with the New York City Commission on Human Rights following allegations that its 2018 adult content ban disproportionately affected LGBTQ users. The company now has six months to hire a sexual orientation and gender identity (SOGI) expert and provide diversity and inclusion training to moderators.

👥 People - folks changing the future of moderation

If you read the recent stories about Chinese content reviewers (EiM #147) and thought they were one-offs, you might want to think again.

Half a dozen moderators spoke candidly to news site Sixth Tone, on condition of pseudonymity, about gruelling shift patterns and excessive target-driven work. One said he was required to process 1,600 video clips in a 12-hour shift — roughly two videos per minute — and another was expected to do a "month-long graveyard shift, working from 9 p.m. to 9 a.m., every three months."

The extreme work cycle caused workers to gain weight, experience irregular menstrual cycles and even suffer a feeling "like you're near to dying". And the worst thing? One video platform started to count down their break time if the employee’s monitoring screen stayed idle for ONE minute.

It's no wonder that one-quarter of reviewers at video app Bilibili leave in the first three months. And all so that we can post what we please. My read of the week.

🐦 Tweets of note

  • "a man tried to mansplain the concept of the "loss of control” in online harassment situations to me in a meeting." - Brittany Anthony, head of safety policies at Bumble, explains why International Women's Day is still needed.
  • "I fear they will put Trump back on the platform, even though he continues to spread false claims that undermine confidence in election system and our democracy." - UCI Law professor Rick Hasen takes the words out of my proverbial mouth.
  • "want to see if investigations and threat disruption is a fit for your flow?" - Twitter's Aaron Rodericks flags an intern role in the Site Integrity team based out of Dublin. I've gone back to ask about the deadline and remuneration.

🦺 Job of the week

This is a new section of EiM designed to help companies find the best trust and safety professionals and enable folks like you to find impactful and fulfilling jobs making the internet a safer, better place. If you have a role you want to be shared with EiM subscribers, get in touch.

Sadly, despite looking at 50+ relevant jobs on Linkedin, Greenhouse and directly on employers' websites, I couldn't find any that displayed salary ranges and sadly didn't have time to reach out to find out more (like I did last week). As I said, I won't include any roles here that aren't transparent about pay. Applications take time and companies should respect that. Sorry to folks expecting to find a cool role here.