Hello everyone. I’ve been quiet for a few weeks writing a mammoth piece about media business models for a journalism journal. But I've missed writing about moderation and intend to get back into my weekly rhythm.
Last time out I recommended a podcast and I’ve stumbled across another which you might enjoy. Tracy Ann Oberman’s Trolled is Twitter-celeb heavy but gives listeners a sense of what it’s like being on the end of an angry, anonymous cabal of social media users. It's as grim as you might expect.
Thanks for reading — BW
Academia vs Twitter
Twitter this week announced that it is working with researchers to understand the effect of white nationalism on its platform and the role the far-right plays in counter-speech. The goal, its head of trust and safety said, is to understand the best approach to dealing with Nazi sympathisers.
There are two questions here: 1) why now? (I’m less interested in this, to be honest) and 2) what do they expect to learn that isn’t already known?
Perhaps there isn’t a person at Twitter who reads any of the hundreds of papers that have been written about the platform's relationship with the far-right (there are over 10,000 results on this academic portal alone) but perhaps there should be. They’d learn a lot.
For example, back in 2013, J.M. Berger and Bill Strathearn analysed the interactions between 3,542 Twitter users who follow well-known nationalists and found that (and I quote):
members of the (white nationalists) dataset were highly engaged with partisan Republican and mainstream conservative politics.
Their recommendation was to select targets for counter-messaging and to instigate disruptive tactics like terms-of-service violation messaging, investigative reporting and open debate by influencers (we’ll come back to these ideas).
Two years later, Twitter was used to show that white extremist ideology was converging with mainstream politics, for which the rather chirpy-sounding term ‘inter-ideological mingling’ was coined.
In 2016, Berger’s follow-up research found that white nationalists and Nazi sympathisers used Twitter with ‘relative impunity’, with some attracting significant followings (from 3.5k in 2012 to 25k in 2016). The research was widely covered here, here and here. So it could hardly have gone unnoticed at Twitter HQ. And there was plenty more research besides these three papers.
This week, Bharath Ganesh, a senior fellow at CARR and a researcher at the Oxford Internet Institute, made another case for J.M. Berger's idea of focusing on disrupting these networks, rather than trying to identify content and remove it. At a time when ideas for how to combat far-right ideology on Twitter are thin on the ground, surely this is worth a go?
For over a decade, academics and researchers have looked at Twitter’s relationship with the far-right. And yet the micro-blogging platform has seemingly ignored it.
What’s different about now?
Facebook’s old-fashioned spinning
I shouldn’t be surprised but Facebook’s latest community standards report (released on its blog last week) is yet another attempt to spin how much toxic content exists on its platform.
Take this: ‘For every 10,000 times people viewed content on Facebook, 11 to 14 views contained content that violated our adult nudity and sexual activity policy’. Sounds positive. Except that we don’t know how many times a single user views a piece of content in a given timeframe.
Without that number, it’s not possible to work out how many posts containing adult nudity or sexual activity Facebook’s roughly 2 billion users see. If every user saw just one piece of content, my back-of-the-envelope calculation suggests that would amount to between 2.2m and 2.8m pieces of violating content.
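The arithmetic above can be sketched out explicitly. Note the simplifying assumption (not Facebook's, mine) that each of the roughly 2 billion users registers exactly one content view:

```python
# Back-of-the-envelope estimate from Facebook's reported rate:
# 11 to 14 violating views per 10,000 content views.
USERS = 2_000_000_000          # approximate Facebook user base (assumption)
RATE_LOW, RATE_HIGH = 11, 14   # violating views per 10,000 views (reported)

# Assume one view per user, so total views ≈ total users.
low = USERS * RATE_LOW // 10_000
high = USERS * RATE_HIGH // 10_000

print(f"{low:,} to {high:,} violating views")  # 2,200,000 to 2,800,000
```

The real number of pieces of content is unknowable from the outside, because a single post can be viewed many times; this only bounds the number of violating *views* under the one-view-per-user assumption.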
Facebook's strength - its user base - is also its weakness. Expect it to continue to spin the numbers.
The Electronic Frontier Foundation has created a series of case studies about the negative impact of poor moderation and account takedowns under the moniker 'TOSsed Out’ (the capitals are shorthand for Terms of Service).
Today we are launching TOSsed Out, a new iteration of EFF’s longstanding work in tracking and documenting the ways that Terms of Service (TOS) and other speech moderating rules are unevenly and unthinkingly applied to people by online services.
The folks at Motherboard outline why the distorted Nancy Pelosi video (the one where she’s made to look drunk) can’t justifiably be removed
When Facebook announced it wouldn’t take down an altered viral video of Nancy Pelosi, experts disagreed as to whether or not the platform made the right call. But there may not be a right call.
Snapchat has been relatively untouched by the moderation issues of the other platforms until last week, when it had to remove a number of porn lenses created by an adult company to promote other services.
Naughty America's x-rated lenses didn't last long on Snapchat, but users may still be able to create their own versions.
TikTok had its short-lived takedown revoked in India at the end of April after making changes to the way it moderated content on its app. Its Global Public Policy Director explains what they’ve done.
TikTok to use humans and AI for content moderation, empower users with safety tools says global policy director- Technology News, Firstpost
TikTok will allow users to restrict the visibility of their uploaded content to followers only.
Some light relief: Facebook has officially banned selling horses on its platform, which is good news for legitimate horse salespeople.
Facebook has banned the advertisement of horses.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.