3 min read

2019: when moderation went mainstream

The week in content moderation - edition #45

Hello to recipients of this, the 33rd and final EiM of 2019. Special festive greetings to a handful of new subscribers (do drop me an email to say hi 👋).

As 2020 hurtles towards us all, I wanted to say thank you to everyone who has opened and read the newsletter, sent feedback or bought me a Ko-fi these past 12 months. It has been incredibly enjoyable to put EiM together and there is lots more to come in 2020.

Festive greetings and thanks for reading — BW


A lot can happen in a year

Next week marks the first anniversary of the weirdest Facebook press release ever.

On 28 December 2018, a Facebook Newsroom blog post was published in response to a New York Times article about content moderation. It listed all the ways the article was supposedly incorrect (many of the points were themselves disputable) and the whole thing was written in a tone that was way off too. At that point, you feared for Facebook's community guidelines going into 2019.

Now that blog post serves as a reminder of how much can change in 12 months. Facebook has significantly changed its tune, expanding its review tools, improving its wellness practices and announcing plans for an Oversight Board. It has taken content moderation much more seriously.

The same thing happened elsewhere too: YouTube started 2019 with the story of paedophile commenters (EiM #17) and finished it by announcing a 'made for kids' label for creators. Twitter began the year being panned by BuzzFeed for struggling to moderate Nazi content and ended it by announcing several decent features and Project Bluesky. It might not feel like it, but things have moved on.

Other important ideas related to content moderation also started to take hold. The idea of moderators unionising (EiM #18) had, by November, become a serious possibility, and the need for more academic research into online speech (EiM #24) saw significant US funding just last month. Both are positive developments.

It’s easy to think the debate about online speech is more complex than ever. Perhaps, when it comes to regulation, it is. But 2019 saw a positive change in how we discuss the platforms, people and policies at the heart of content moderation and, all being well, so will 2020.

A word on Casey

There’s a strong argument that the reason content moderation has been on the agenda this year is Casey Newton’s reporting.

This week, The Verge reporter published his third big investigation into the practices of US tech companies' moderation teams (this time, the mental health toll of moderating for Google and YouTube).

This week also saw one of his previous investigations, The Trauma Floor, named the second-best Longform article of 2019, a remarkable achievement considering the vast amount of quality writing published on the web over the last 12 months.

As a journalist, I'd obviously say this, but it's important to have reporters and reporting that understand the nuances of a topic in order to bring it to a wider audience. Casey has done that to great effect.

Software, hard impact

I've talked before about how good software underpins good content moderation (Pinterest in EiM #33, Reddit in EiM #28) and GitHub is another strong example: engineering manager Danielle Leong gives an interesting overview of the work the team have done over 3.5 years in a Twitter thread (I'm not a regular GitHub user so I've got no idea if it's had an effect).

Not forgetting...

Casey Newton also got to chat to the Facebook product manager in charge of the Oversight Board but, in true tech company style, got kicked out of the room before he could ask anything significant 🙄

The software behind Facebook’s new Supreme Court for content moderation - The Verge

YouTube's new harassment policy (last week's EiM) has not gone down well with creators, who have had hundreds of videos removed, some dating back as far as three years.

YouTube just made sweeping positive changes to its harassment policy. So why all the backlash?

A ban on "malicious insults" and a complicated FTC ruling mean drastic changes could be coming to YouTube.

YouTube mistakenly thought an Aussie gaming YouTuber had used a racial slur and demonetised his channel (it turns out it was his accent).

This YouTuber's Videos Were Demonetised. He Says It's Because Of His Aussie Accent.

YouTube accidentally transcribed "car" as "cunt", said gaming YouTuber Fynnpire.

Project Bluesky (last week's EiM) is taken to task by Syracuse assistant professor Jennifer M. Grygiel, who suggests a regulatory protocol for new tech solutions.

Twitter's Jack Dorsey created a walled internet garden. Then he realized he hated tending it.


A neat review of everything that happened in 2019...

Year in Review: Content Moderation on Social Media Platforms in 2019 | Council on Foreign Relations

The spread of disinformation, misinformation, hate speech, and violent and extremist content on social media platforms in 2019 prompted heated debate over how tech companies and governments should approach content moderation.


Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.