Platforms vs the Taliban, verifying users and TikTok's anti-troll vigilante
Hello and welcome to Everything in Moderation, the weekly newsletter that helps you keep on top of what’s going on in the world of content moderation. It’s put together by me, Ben Whitelaw.
The events of this week have consumed a lot of my attention and energy, as I’m sure they have for a lot of you too. When stories emerge like those that took place in Haiti, Plymouth and Kabul, the topic of content moderation can feel a little frivolous. But, as we’ve seen play out, who is able to speak online and what they do with that opportunity is increasingly a part of each unfolding tragedy.
If you’re enjoying the newsletter, you can support its growth by forwarding it to your colleagues or peers or sharing via your favourite (hopefully well-moderated) social media platform.
Here are your links for this week, including a flurry from the last 24 hours. Look after yourself — BW
📜 Policies - emerging speech regulation and legislation
Should the Taliban be allowed to post online and how does that stance change now that the group has seized control in Afghanistan? That was the big question this week, which the dominant digital platforms each sought to answer differently. Here’s a recap:
- Facebook designates the Taliban a terrorist organisation and has banned it from both the main app and Instagram since at least 2012
- YouTube bans the group for the same reasons
- Twitter allows the organisation to post but told Recode it removes violent content on a case-by-case basis
It has been fascinating to see how the Taliban have turned the screw on Facebook in particular, claiming in a press conference that journalists should ask the company to clarify its stance on “freedom of speech”. They know that social media is a path to legitimacy. I’m sure many platform policy people will be busy in the background on this one.

On matters related to Covid-19 now: a Facebook post by a state-level medical council in Brazil claiming that lockdowns are ineffective has been left up following a ruling by the independent-but-Facebook-funded Oversight Board. The Board said the post, seen 32,000 times, did not create a risk of “imminent harm” and recommended instead that health information from public authorities be sent to fact-checking partners.
💡 Products - the features and functionality shaping speech
How can you trust that the person you’re speaking to online is who they say they are? And how do you do that without requiring marginalised groups to give up their anonymity? (EiM #59). Dating app Tinder suspects the answer might be verification via a user’s photos or, if users prefer not to go down that route, a driving licence. In an interview with Casey Newton’s Platformer, Rory Kozoll, head of trust and safety product at Tinder, said photo verification had led to users “starting to feel like more of the people they see on Tinder are real”. The big question is: can companies like Match Group (which owns Tinder) be trusted with sensitive data that ties online identities to real-world documents? Its record isn’t exactly spotless.
Talking of building products for content moderation, this post by Mux software developer Dylan Jhaveri sums the challenges up pretty well. He calls moderation the “dirty little product secret that no one talks about” and uses a video streaming product that he built last year to explain how other programmers could approach building a robust system. Nice to see.
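For a flavour of what a “robust system” might look like in practice, here is a minimal sketch of the kind of report-and-review flow a post like Jhaveri’s tends to describe: user reports accumulate against a piece of content, a threshold auto-hides it, and a human reviewer makes the final call. The names, threshold and in-memory store below are my own illustrative assumptions, not Jhaveri’s actual code.

```python
# Illustrative report-and-review moderation flow (assumed design, not Mux's implementation).
from dataclasses import dataclass, field
from enum import Enum


class Status(Enum):
    VISIBLE = "visible"
    HIDDEN_PENDING_REVIEW = "hidden_pending_review"
    REMOVED = "removed"


@dataclass
class Item:
    item_id: str
    status: Status = Status.VISIBLE
    reporter_ids: set = field(default_factory=set)


AUTO_HIDE_THRESHOLD = 3  # distinct reporters before content is hidden ahead of review (assumed value)


def report(item: Item, reporter_id: str, review_queue: list) -> None:
    """Record a report; auto-hide and enqueue for human review once the threshold is hit."""
    item.reporter_ids.add(reporter_id)
    if item.status is Status.VISIBLE and len(item.reporter_ids) >= AUTO_HIDE_THRESHOLD:
        item.status = Status.HIDDEN_PENDING_REVIEW
        review_queue.append(item)


def review(item: Item, violates_policy: bool) -> None:
    """A human moderator's decision: remove the content or restore it."""
    item.status = Status.REMOVED if violates_policy else Status.VISIBLE


# Example: three distinct reports hide a video pending review.
queue: list = []
video = Item(item_id="upload-123")
for user in ("u1", "u2", "u3"):
    report(video, user, queue)
assert video.status is Status.HIDDEN_PENDING_REVIEW
```

The point of the sketch is the shape, not the numbers: the hard product decisions are where the threshold sits, who staffs the review queue and how quickly it gets worked through.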
💬 Platforms - efforts to enforce company guidelines
OnlyFans announced yesterday that it will ban adult content despite having built its name almost solely on sexually explicit material since being founded in 2016. The company said the move was designed to “ensure the long-term sustainability” of a platform used by more than 130 million people, but in reality it comes after pressure from US politicians concerned about underage content. Reports also suggest unhappy payment providers and unwilling investors led to it cleaning up its act now. Money, it seems, is the greatest moderator of them all.
Twitter launched a new reporting category — “It’s misleading” — in South Korea, Australia and parts of the United States this week as part of efforts to combat misinformation. Its head of site integrity said the test would help to see if “a public reporting option can also be a useful signal for those detections”. I’m interested to see how this pans out.
Trust and safety crisis protocols at GoFundMe triggered during the Taliban takeover of Afghanistan temporarily prevented a fundraiser from accessing more than $24,000 donated to help LGBTQI people in the country. The fundraising platform, as per its rules, began reviewing every new fundraiser, including the one set up by 27-year-old Afghan-Australian Bobuq Sayed, to “ensure we are following the law, protecting organizers & donors, and sending money to the right people”. Frustrating for Bobuq but a sensible approach in difficult circumstances.
A coalition of social justice organisations has called on YouTube to strengthen its policies around deadnaming and misgendering trans people. The letter, signed by the likes of Center for Countering Digital Hate, Equality Federation, Free Press and others, accompanies research that shows that high-profile YouTubers regularly get millions of views for videos where deadnaming and misgendering occur.
Transparency efforts took a strange turn this week as Facebook published its first Widely Viewed Content Report, at least in part to combat the idea (proffered by NYT reporter Kevin Roose) that the platform is a cesspit of US right-wing political propagandists. I haven’t got round to reading it yet but there are some good Twitter threads knocking about (here and here, for starters). [Thanks to Tom M for the heads-up]
In this week’s ‘Least surprising news’, fresh research out of Stanford shows that Gettr, the Twitter clone created by former advisors to Donald Trump, has almost no moderation whatsoever.
👥 People - folks changing the future of moderation
TikTok users seem to care about moderation more than most — you may remember I covered Zev Burton and his #releasetheguidelines campaign back in April (EiM #109).
The Great Londini takes that further. The “moderation vigilante” exposes online abusers, trolls and bullies and has built up more than 2 million followers in just a few months. The account was started after the 14-year-old son of its founder’s friend took his own life following months of online harassment.
Little is known about the people behind the account but Sophia Smith Galer at the BBC has spoken to one of its volunteers — a former US marine, as it happens — who claims TikTok is “not doing [moderation]” as it should.
🐦 Tweets of note
- “Sure, people can fact-check but these factors are also v important and crucial for review” - Snapchat’s Juliet Shen gives some thoughts on how Twitter could make its ‘misleading’ reporting test a success.
- “I do not think the blanket bans of FB & YT are tenable” - Atlantic Council fellow Emerson T Brooking takes a different view on the Taliban to many others.
- “It’s really, really important for news organization to have clear and transparent content moderation policies.” - Melody Joy Kramer speaks to a topic close to my heart.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.