3 min read

YouTube's moderation policy prank

The week in content moderation - edition #14

Hello everyone and happy Friday.

2019 started just as 2018 ended, with a flurry of stories about the regulation of content online. I’ve included some of the most interesting from the last two weeks below, but the debate still looks like nothing so much as a Gordian knot.

This Everything in Moderation is coming out a day later than usual because yesterday was a big day at the Engaged Journalism Accelerator, the European funding programme that I work for. Find out more here and feel free to ask any questions.

Thanks for reading — BW


One step forward etc

How many failed attempts to be more open and accountable does it take before everyone clocks that YouTube doesn’t actually want to become more transparent?

Last week, the video platform announced that they were getting serious about pranks and, in light of the rise of the Bird Box challenge and the ongoing Tidepod challenge phenomenon, updated its Dangerous Challenges and Pranks Enforcement.

Its justification for doing so? The fact that these pranks are often at the expense of small children who, YouTube says, ‘could be traumatised for life’. They’ve worked with child psychologists, the announcement said, to create ‘guidelines around the types of pranks that cross the line’.

Can you find those guidelines anywhere though? Are they linked from the announcement post? Is there any information about the psychology professionals involved in the research that led to the policy change? Are there details of how they were able to deduce that children were ‘traumatised’ by these videos? Not a chance.

Incidents like this, although small, go to show that YouTube's commitment to transparency, like that of the other platforms, has its limits. Last year, it led the way in revealing a host of statistics about content removal, which prompted Facebook to do the same. At the time, that felt like progress. But in reality it only happened because the world's media was watching after a string of bad press.

This policy update is straight out of the 'trust us, we know best' playbook. As is becoming clear, platforms like YouTube, for all that they try, don't know best at all.

Everyone loses when content is outsourced

What would happen if Facebook had to moderate the two million pieces of content flagged on its platform every day? If they had to pay staff directly and support them in their work, rather than offload the responsibility to outsourced parties in developing countries?

They wouldn’t do it, of course. The cost, both financial and human, would cut into their profits so severely that I’m sure they would look again at what they offer Facebook users. No doubt the Facebook product(s) would change immeasurably.

Manoush Zomorodi touches upon this in the latest episode of IRL, the podcast from Mozilla about life online and the future of the web. In it, she interviews the two German directors of The Cleaners, the documentary about content moderation in the Philippines that premiered last year. Hans Block and Moritz Riesewieck's view on outsourcing is clear: ‘We really need to question if it’s right to outsource big parts of our digital public sphere to public companies’. Listen to it here.

Related #1: 1,000 new Facebook jobs in Ireland will be in content policy and moderation team

Related #2: Facebook will now show page admins removed content that goes against Community Guidelines

Not forgetting...

What happens when an Asian man creates a lighthearted meme-driven Facebook group that balloons to 1 million fans and raises questions about what constitutes a stereotype?

Subtle Asian Traits Launched a Self-Stereotyping Debate - PAPER

Does Subtle Asian Traits, a Facebook group with millions of members, empower or orientalize its members?

Rachel Chen at Motherboard makes the case that everyone has a duty to report offensive content and that it's the least we can do to make the internet a healthier place

Facebook and Twitter Are Broken, But You Should Still Report Hate - Motherboard

Users shouldn’t feel obligated to help Facebook, Twitter, and other social media giants police hate. But experts say it’s still worthwhile.

Expect to see female-friendly communities on the rise in 2019. The founder of Enty, a fashion app for getting feedback on outfits and haircuts, explains how she made it work

Online Female Communities: Why They Matter And How To Build Them

Women account for 80% of consumer spend and communities built by women for women are on the rise. Read how HER, Peanut and Enty have built, grown and nurtured their unique platforms.


Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.