Welcome to this week’s Everything in Moderation, which now counts both halves of the excellent Splice Media among its subscribers. Thanks also to four others who entrusted me with their email address.
This week marks a small but important milestone for EiM: I've sent a newsletter every week for six whole months. This is something I struggled to do last year and I hope to keep it going for the rest of 2020. Tips and support welcome.
Before you continue, an advance warning: this newsletter contains more fascists than normal.
Thanks, as ever, for reading — BW
📸 Are the far-right heading for Instagram?
In internet years, it feels like a decade ago. But it was only last Friday — just after I sent last week’s EiM — that Twitter permanently banned British right-wing commentator Katie Hopkins for violating its hateful conduct policy.
The less said about Hopkins' vile personal views, the better. But where the former Apprentice contestant turned Trump cheerleader shows up next is interesting for people, like you and me, interested in content policy and moderation. As Alex Jones, Milo Yiannopoulos and Tommy Robinson have all demonstrated, users banned from mainstream platforms rarely take their ban silently.
Now, a lot has been made of Hopkins signing up to Parler, a ‘non-biased, free speech’ social network created by John Matze Jr, a little-known software developer and conservative. Donald Trump, a host of White House staff and several Fox News contributors have signed up over the last 12 months, and the platform claims to have 1.5m users, according to its freshly-updated Wikipedia page.
However, as ABC journalist Max Chalmers points out, Instagram looks to be a more immediate home for Hopkins.
It's a plausible theory: it was on Instagram that Hopkins posted an explanatory video about her removal from Twitter, and it's where she has posted almost daily since her permanent suspension.
But are far-right figures correct when they say the photo and video sharing app lacks sufficient moderation policies?
Data published this week by the European Commission suggests that yes, the platform lacks the moderation capabilities to deal with Hopkins and other incendiary figures who may start to increase their usage.
The report is the fifth evaluation of the Commission’s Code of Conduct on Countering Illegal Hate Speech Online, which Facebook, YouTube, Microsoft and others signed up to in 2016. Regular monitoring exercises take place to check if the platforms are abiding by the code.
The latest data from December 2019 shows that Instagram assessed 91.8% of hate speech notifications in less than 24 hours, which is more than YouTube (81.5%) and Twitter (76.6%). Facebook — Instagram's parent company — naturally trumpeted the results as a big success.
However, a deeper look at the EU’s data shows that any celebration might be premature:
- Instagram’s assessment rate is based on a very low number of hate speech notifications — just 109. (By comparison, Facebook received 2,348 notifications and Twitter 1,396.) This makes the 91.8% number look flimsy and unlikely to scale.
- Instagram also removed just 42% of flagged content — more than Twitter (35.9%) but substantially below its own 70.6% removal rate from December 2018, when the last Commission exercise was conducted. This suggests that moderation capacity and knowledge of policies isn’t keeping pace with its user growth.
The findings add weight to Facebook's own report from November 2019, which showed that detection of violating content was lower on Instagram than Facebook across all categories.
Hopkins' move to Instagram is likely to put further strain on these weaknesses. Combined with its parent company's hands-off approach to hate speech, we should expect to see the photo-sharing platform hit the headlines for its moderation practices sooner rather than later.
+ Bonus read: The Right’s New Favourite Social Media Platform Parler Is Just as Restrictive as Twitter (OneZero)
⚖️ 'Moderator' turned 'whistleblower'?
From one right-wing 'commentator' to another.
Project Veritas (founded by US conspiracy theorist James O’Keefe) this week published undercover footage of third-party Facebook moderators explaining how they take down posts by MAGA-wearing Trump supporters.
The footage — taken by Zach McElroy, who worked for Cognizant in Florida but is likely to be a plant by O'Keefe — is neatly edited to rile Republicans, who have long claimed that pro-Trump Facebook posts are censored. It's worth watching if you have 20 minutes.
McElroy, who is looking for work following the end of his contract at Cognizant, has also started a GoFundMe campaign to raise $150,000. He has so far raised over $38,000 from more than 1,100 donors in just two days.
It will be interesting to see how Facebook and Cognizant react to O'Keefe's video, especially since McElroy broke an NDA to speak out about what he saw. More to come, I expect.
👀 Not forgetting...
The (deep breath) Transatlantic High-Level Working Group on Content Moderation Online and Freedom of Expression this week published a long list of recommendations for reducing hate speech while maintaining freedom of expression. I’ll hopefully have more on this in next week’s newsletter.
A transatlantic framework for moderating speech online
Mike Masnick, the founder of Techdirt, chatted to a16z's Sonal Chokshi in a longer-than-usual interview about the flaws of the ‘public utility’ argument and why Facebook, as an edge service, doesn’t qualify.
16 Minutes on the News #32: Section 230, Content Moderation, Free Speech, the Internet - Andreessen Horowitz
Section 230 of the Communications Decency Act has been in the headlines a lot recently, in the context of Twitter, the president's tweets, and an executive order put out by the White House just this week.
I joked in last week’s EiM that Twitter’s move into audio would end badly but I didn’t expect it to happen this soon.
Twitter — a platform that already struggles with curbing harmful content — has not detailed a plan for moderating its new audio feature.
Roger McNamee, early Facebook investor and now a continual thorn in Zuckerberg’s side, writes for Time about why better moderation isn’t a panacea.
Social Media Platforms Claim Moderation Will Reduce Harassment, Disinformation and Conspiracies. It Won't
The social media giants say they can fix the disinformation on their platforms via moderation. They can't. We need other solutions.
The Oxford Mail newspaper has written a long and strong post about its zero-tolerance policy on abusive or hateful comments under its stories. More news organisations would benefit from doing the same.
FREEDOM of expression is a human right. We live in a democracy and the Oxford Mail recognises the importance of public debate.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.