
📌 A good/bad/ugly week for YouTube

The week in content moderation - edition #17

I wanted to start this week by saying hi to a flurry of new subscribers (including folks from Gedi Digital, Twipe and the University of Melbourne), who I expect found Everything in Moderation via this stellar list of newsletters that was published this week (thanks to Ana for the mention). Delighted, genuinely, to have you subscribe, and I very much welcome your feedback and comments. I've included some background to this funny little side project at the end of today's newsletter.

To the rest of you: you're cool too.

Thanks for reading — BW


YouTube's Wild West week

When I wrote about YouTube’s new policy for dangerous challenges and pranks at the end of January, I didn’t expect to be discussing the video mega-platform again less than a month later. But it’s been one of those weeks for YT, so I thought I’d round up the stories and recap the takeaways.

The good

One thing that went down pretty well this week is YouTube’s new community guidelines, which come into force on 26 February. Tubefilter have a full round-up of what that means, but the headline is that creators will now get a warning before being given a strike (which results in a freeze on uploading videos). There will also be new messaging via email and app notification that makes the reason for the takedown, and the appeals process, clear.

Takeaway? What’s interesting to me is actually not the changes themselves but that this update happened at all. It’s the first time YouTube has updated the strike system and community guidelines since George Bush was US President and "I Kissed a Girl" by Katy Perry was number one in the UK (NB: I hunted out The Guardian report of that last policy update in 2008 and it only served to show me how little has changed in the intervening decade). YT also reportedly spent NINE months working on these tweaks with creators, which goes to show the time and thought that needs to go into policy changes (take note, Discord).

The bad

You may have seen the news about how a YouTuber called Matt Watson was quickly and easily able to find ugly comments from child predators on videos of young girls dancing and doing gymnastics. If you didn't: he was rightly outraged, started the #YouTubeWakeUp campaign and set off another advertiser backlash. YT's response today has been to remove 400 channels and tens of millions of comments, but I don't expect that to be the end of it.

Takeaway? What was striking about this story from a content moderation perspective was that the timecode was often the means by which these predators pointed other users to the parts of the video where the children were in suggestive poses. Because the comments used digits rather than words, they slipped past any moderator, whether human or AI, and were only spotted because Matt understood the context. It's very much like the moderation challenge posed by emoji and emoticons and, again, deeply worrying.
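To make that challenge concrete, here's a hypothetical Python sketch (my own illustration, not anything YouTube actually runs, and the blocklist terms are placeholders): a word-based filter sails straight past a comment that is nothing but timecodes, and while a simple pattern check can flag "timestamp-only" comments for review, the pattern alone says nothing about intent; that still needs someone who, like Matt, understands the context.

    import re

    # Hypothetical illustration of why digit-only comments evade word filters.
    BLOCKLIST = {"some", "offensive", "terms"}          # placeholder terms
    TIMESTAMP = re.compile(r"\b\d{1,2}:\d{2}(?::\d{2})?\b")

    def keyword_filter_flags(comment: str) -> bool:
        """True if any blocklisted word appears in the comment."""
        words = set(comment.lower().split())
        return bool(words & BLOCKLIST)

    def timestamp_only(comment: str) -> bool:
        """True if the comment is little more than one or more timecodes."""
        leftover = TIMESTAMP.sub("", comment).strip(" .,!|-")
        return bool(TIMESTAMP.search(comment)) and len(leftover) <= 3

    comment = "2:37 9:14"
    print(keyword_filter_flags(comment))  # False - nothing for a word filter to catch
    print(timestamp_only(comment))        # True  - flagged, but intent still needs a human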

The ugly

New research that came out this week shows how conspiracy theories proliferate as a result of YouTube recommendations. At one flat-earther conference, 29 of 30 people admitted that they had been convinced by watching YouTube. BuzzFeed also ran an experiment which found that anti-vax videos appear high up in search results for ‘immunisation’.

Takeaway? It's the response from the YouTube spokesperson that's most mind-blowing here. They say the algorithmic changes YouTube has made recently will help downrank falsities but ‘will be gradual and will get more accurate over time’. That's nice, but how long will it take? Weeks? Months? And what level of accuracy is YouTube aiming for at the end of the process? Does it even know what the end goal is?! The statement, like much of YouTube's comms of late, fails to recognise that this is a live issue — people are right now disputing that the earth is round and declining to vaccinate their children — and 'wait and see' isn’t a suitable response.

Not forgetting...

Good news: Amazon have developed an 'enhanced moderation model' that improves outcomes and saves time/money. Bad news: it's being used by law enforcement to track people on the street.

Amazon Rekognition Slashes False Positives by 40 Percent


Facebook's policy of outsourcing content moderation continues to yield great success (not). This time, the anger comes from those working for Accenture.

Facebook moderators are in revolt over 'inhumane' working conditions that they say erodes their 'sense of humanity'

Other Facebook employees have reacted with outrage to the rules, calling them "inhumane."

I know its focus was disinformation and fake news, but I expected the report from the Digital, Culture, Media and Sport select committee's 18-month inquiry to at least refer to 'moderation' once. I was disappointed.

Facebook labelled 'digital gangsters' by report on fake news

Company broke privacy and competition law and should be regulated urgently, say MPs

Research into WeChat found that 8,092 of the approximately 11,000 articles removed from the platform in 2018 were in fact taken down by their own authors. Self-censorship is the worst form of censorship.

WeChat's most censored topics in 2018 include US-China trade war, Huawei CFO arrest: Report

A new report from a team of researchers at The University of Hong Kong's Journalism and Media Studies Centre examined the most sensitive topics subject to censorship on Tencent's WeChat platform in 2018.

The background

Everything in Moderation started after the flood of Alex Jones news last summer and a realisation that there wasn’t one single place I could go to satisfy my interest in content moderation and the policies, people and platforms that made it happen.

In my last job, I headed up the audience function at The Times (including its moderation team) and was confronted with questions of policy, anonymity, hate speech and how to increase meaningful involvement (while trying to make it pay). So I decided to give this newsletter a go myself.

If there’s anyone you think would like to read it, I’d be eternally grateful if you forwarded it to them or shared this link via any (reputably moderated) social network - BW


Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.