📌 Bianca Devins and the great AI myth
Hi everyone. It’s crazy hot here in London but I'm sparing a thought for subscribers in mainland Europe. Stay cool and keep safe.
Last week’s EiM didn’t happen because I was prepping for a two-sitting BBQ (yes that’s right, one at lunch, one at dinner). I hope the longer-than-usual reading list serves as a useful catch-up.
Get in touch if anything here catches your eye.
Thanks for reading — BW
AI isn’t going to save us
The difficulties of moderation at scale are summed up by the fact that I know Bianca Devins' name.
Bianca was a 17-year-old artist with around 2k Instagram followers who was murdered last week by a 21-year-old man she met, first online and then in person in New York City.
That would be tragic enough but for the fact that the man who killed her documented the whole awful event on Instagram. The pictures of her body were accessible via his account for over 20 hours, meaning thousands of people were able to screengrab, share and even sell them. Instagram’s failure to prevent the uploading of those images, as well as its failure to respond quickly to user flags, triggered the coverage that meant I know her by name.
But I shouldn’t know her or the fact she recently graduated from high school. You see, Instagram regularly boasts about the technology it utilises to prevent content, like the pictures of Bianca shared last week, making its way online. Just five months ago, Adam Mosseri went on a PR offensive, announcing that Instagram would no longer show images of self-harm after 14-year-old Brit Molly Russell took her own life. Before that, Mark Zuckerberg told Congress that AI would solve Facebook’s most complex problems within 5-10 years. That’s looking increasingly unlikely at best and wilfully incorrect at worst.
A new report published last week by Ofcom, the UK’s communications regulator, and written by Cambridge Consultants, made that even clearer. While it states that AI can help improve pre-moderation, synthesise training data to help develop better models and assist human moderation, each of those three cases "increases the costs and challenges for organisations to develop these”. Facebook and Instagram don’t have the desire to part with the cash, at least until they are told to do so.
Brian Merchant in Gizmodo goes in even harder:
Tech companies are unwilling to put their money where their feeds are and to deploy robust systems able to block the rot as quickly as possible. It’s as simple as that, sadly.
The claim that AI will solve the challenge of content moderation at scale is a lie used to distract people who don’t know better. And all those who now know Bianca Devins’ name are proof of that.
A reminder, over on Twitch, that emojis are a whole new level of tricky to take action on.
This time, gaming streamer Trihex wasn’t happy to see baby block emojis spell out a NSFW acronym in his stream chat (I won't post it here, but you might be able to guess). If it had been posted as normal text, it would presumably have been caught by Twitch’s filters.
Trihex isn't a complete saint: he was on the receiving end of a ban last year for using a derogatory term on his stream. But this new front (which I wrote about in EiM 10) is something we’ll see more and more of.
YouTube published new terms of service in the EU and Switzerland last month, somewhat on the sly. Chris Stokel-Walker (whose new Medium publication on YouTube is worth reading) looks at how they will become the global norm.
Regulation is Coming to YouTube, and It’s Going to be Ugly
The digital video platform, which has battled repeated negative headlines in the last two years, published new terms of service for its users in the European Union and Switzerland last month. Hardly…
Nick Cave (yep, the musician) on the consequences of free speech (via Matt Locke’s Storythings newsletter)
Nick Cave - Issue #52 - Do you get any nasty or annoying comments and questions via The Red Hand Files?
You can ask me anything. There will be no moderator. This will be between you and me. Let's see what happens. Much love, Nick
Charlie Beckett (a professor at the London School of Economics and someone whose opinion I respect a lot) has given an interesting interview on content moderation to Institut Montaigne.
Challenges of Content Moderation: Addressing the Knowledge Gap | Institut Montaigne
Interview with Charlie Beckett, Professor of Media and Communications at the London School of Economics (LSE), for Institut Montaigne.
I’d never heard of The Meet Group or any of their social networking apps (MeetMe, Lovoo and Tagged among others) until this week. They make much of their moderation practices — their Safety page claims half of their staff (250) work on moderation — and now they’ve joined forces with a digital identity company to add multi-factor authentication, presumably to reduce anonymity.
The Meet Group Teams With Digital Identity Company Yoti to Help Create Safer Communities Online
The Meet Group plans to trial Yoti’s innovative age verification and age estimation technologies designed to create safer communities online.
I missed it earlier this month, but Gab, the so-called free speech social network, moved over to Mastodon. Mike Masnick at Techdirt says the result is ’the kind of experimentation and more distributed decision-making we’d like to see online'.
Gab, Mastodon And The Challenges Of Content Moderation On A More Distributed Social Network | Techdirt
Instagram’s latest policy change means users will be banned if they violate the site terms several times (we don’t know how many) within a certain period (also unclear). Something, I guess.
Instagram Will Now Warn Hate-Spewing Users They’re About To Be Banned
This month, the company rolled out a small batch of features meant to make the photo sharing network a warmer, fuzzier place.
An American senator has put forward a bill that he thinks will help moderation efforts at the tech giants. A former Google in-house moderator says the idea is nuts.
Former Content Moderator Explains How Josh Hawley’s Bill Would Grant Government Control Over Online Speech | Above the Law
Surprisingly, knee jerk reactions make for bad policy.
Some Russian trolls tried to claim Facebook had no right banning their page. A US judge was having none of it.
Russia Fucked With American Democracy, But It Can't Fuck With Section 230-Federal Agency of News v. Facebook - Technology & Marketing Law Blog
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.