Welcome to another Everything in Moderation, your weekly round-up about the policies, platforms and people deciding how content is moderated on the web.
Today’s newsletter is shorter and more link-heavy than usual as I’m taking a few days off this week after a long time without a break. I hope you’re all able to do the same.
I continue to think about the next steps for EiM and would love to chat with anyone who has ideas about how to improve what appears in your inbox. I’ve found chatting to existing subscribers (looking at you Paul, Matt and Tim) incredibly useful and also super motivating. So yeah, thanks.
I'm planning to take a break from sending in late August to make some changes that I hope will serve you all better. More on that soon.
Stay safe and thanks for reading — BW
🍕 Can QAnon be nullified?
The story of the week came from Twitter, which took the notable step of banning 7,000 accounts associated with QAnon — the conspiracy theory popular with Donald Trump fans — and limiting the reach of 150,000 others on the platform.
The announcement comes four months out from the US election and just a month after it was revealed that half a dozen Republican candidates had shared conspiracy theories espoused by QAnon.
URLs from QAnon-related sites have also been banned by Twitter as has 'swarming', the co-ordinated harassment of social media users by QAnon followers for often baseless reasons.
Facebook will reportedly follow suit within the next month, although analysis published over the last few days suggests QAnon's theories have gained enough traction in other parts of the web that shutting it down, or even nullifying its effect, may no longer be possible.
The election result in November will be the point at which we judge whether this worked as Twitter intended.
Bonus read: Sarah T. Roberts, professor at UCLA and commercial content moderation expert, has written a good thread on why Twitter doesn’t need to be any more transparent about its decision than it has been.
⏲ Not forgetting...
A brilliant essay in Real Life Mag about trans people evading content filters and community policies in an attempt to have their bodies recognised by themselves and others. Really, a must-read.
Content moderation tries to render certain bodies unseeable
I was heartened to read, in this piece from Yahoo Finance, about two new hate-speech AI algorithms developed by universities on either side of the Atlantic. Not because they won't have flaws (they almost certainly will) but because state investment is always a good thing, and more of it is a positive sign that online speech matters.
The future of discourse on the internet depends on it
Facebook covered up research that showed Instagram’s system for automated account removal made it 50% more likely that Black users would have their accounts disabled than other users.
Facebook management has ignored and stymied its own researchers who have demonstrated racial bias in the company's content moderation systems.
Preventing extremist content has long been the jewel in Facebook’s automated moderation crown. But a report by London think tank the Institute for Strategic Dialogue contains examples of content slipping through nonetheless.
A study has discovered that a network of pro-ISIS accounts manages to evade Facebook's content moderation filters and share content.
PolitiFact, the US fact-checking site, has published new community guidelines imploring users to respond to each other rather than ‘simply post their stance on an issue’. Nice way of framing it.
PolitiFact is a fact-checking website that rates the accuracy of claims by elected officials and others on its Truth-O-Meter.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.