How to spin YouTube's conspiracy problem
Welcome to another Everything in Moderation - I hope you're all channelling the spirit of Captain Tom Moore in these tough times.
A special welcome to new subscribers who found me via the excellent Slice Frames newsletter, including smart folks from The Bristol Cable, Malaysiakini and The School of Slow Media.
Drop me a note to say hi if you haven't already (calls are also welcome). And, if you know other folks that might like EiM, you can send them here.
This week, I take a look at YouTube vs conspiracy theories.
Thanks for reading and stay safe - BW
Borderline content, straight from the CEO
Susan Wojcicki doesn't shy away from being interviewed, nor does she mind talking about content moderation.
The CEO of YouTube sat down in front of a large crowd at Code 2019 just after the Carlos Maza/Steven Crowder dispute and has been interviewed by The Guardian, CBS News and even Alfie Deyes in the few months since then.
In her latest one-on-one, this time with NBC News' Dylan Byers, she addresses the video-sharing site's ongoing struggle to reduce borderline content (you can listen here as a podcast).
One thing struck me as I read the transcript: Wojcicki uses a paper by academics at Berkeley to claim that YouTube has made good progress tackling conspiracy theories on the platform.
She explains:
"...there was a study that just came out in the last couple of days by the Berkeley researchers. And, you know, they looked at some of the progress that we've made about borderline content. And, like, you know, they said, like, we're doing a much better job with recommendations. They of course also have areas that they work we could do better. And we will continue to do better."
The study in question is 'A longitudinal analysis of YouTube's promotion of conspiracy videos', authored by Marc Faddoul, Guillaume Chaslot, and Hany Farid, who together analysed more than 8 million YouTube recommendations from 1,000 of the most popular news and informational channels over 15 months.
The paper's conclusions weren't exactly positive, though. As the New York Times explains:
"the Berkeley researchers found that just after YouTube announced that success, its recommendations of conspiracy theories jumped back up and then fluctuated over the next several months. The data also showed that other falsehoods continued to flourish in YouTube's recommendations, like claims that aliens created the pyramids, that the government is hiding secret technologies and that climate change is a lie."
"A much better job with recommendations"? Hardly. But then again, there is increasing evidence of misinformation being a top-down phenomenon, so perhaps we shouldn't be surprised by Susan's slip.
Talking of transparency...
If you've been reading EiM for a while, you might remember me mentioning the Santa Clara Principles, a set of guidelines designed to help social media platforms be more transparent about content they take down.
Well, the Electronic Frontier Foundation (EFF) is seeking to expand those standards and has announced a call for feedback. Submit your thoughts here by 30 June 2020.
Public health platforms? (week 6)
I can recommend each of these very good analyses on COVID-19:
- A strong call for social media companies to do more to flatten the curve by Joan Donovan, Research Director at the Shorenstein Center (Nature)
- A good look into the coronavirus links with 5G concludes that 'the moderation process of the debunked conspiratorial content has been slow and inconsistent across platforms and functions' (Disinfo.eu)
- You'll have no doubt read at least one of the viral Medium essays on COVID-19. This piece looks at why that happened and how the company has responded (The Verge)
- It's still easy to set up a Facebook page and post ads saying that the virus is a hoax (TechCrunch)
Not forgetting...
Stack Overflow has released a new version of its unfriendliness text-checking bot - UR-V2 - to great effect. Read this article and the original post by two of its developers.
Stack Overflow banishes belligerent blather with bespoke bot - but will it work? • The Register
Conflict monitoring organisations like Airwars and the Syrian Archive are significantly affected by page takedowns, according to this Time article.
These Tech Companies Managed to Eradicate ISIS Content. But They're Also Erasing Crucial Evidence of War Crimes
Facebook and YouTube designed algorithms to detect extremist content. But they're hurting citizen journalists and human rights groups, too
I don't use Twitch but I'm quietly addicted to news about the banning and reintroduction of its big-name users. This week: Anisa Jomha, streamer and girlfriend of iDubbbz.
iDubbbz's girlfriend banned on Twitch following controversy | Dexerto.com
iDubbbz's girlfriend Anisa Jomha has been banned on Twitch following the YouTuber's recent drama, which sparked controversy online.
I haven't got round to watching it all but NYC Media Lab this week hosted an interesting-looking online panel about Section 230.
Section 230 Revisited: Web Freedom vs Accountability
In a time of declining trust and confidence in large technology firms, there is growing bipartisan agreement that it's time for legislative change.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.