📌 A human rights approach to moderation, Spotify releases its rules and the "election tsunami"
Hello and welcome to Everything in Moderation, your Friday recap of the content moderation week that was. It's written by me, Ben Whitelaw.
Welcome to freshly subscribed folks from Mindgeek, Rakuten, Moonshot, Spotify, Little Dot Studios, Princeton, Facebook, ActiveFence and other corners of the web. Your arrival is well-timed: this week I published a brand new Viewpoint article with a research fellow about how moderation intersects with human rights. Read on for more.
A big thank you to the people who have completed my short survey about ways of supporting EiM. It's great to hear from new and old subscribers alike, and I'm excited that so many of you want to be involved in a myriad of ways. I'd love to get 25 more responses before I close the survey on Tuesday - can you spare two minutes before you read today's edition?
Once that's done, here's what you need to know this week — BW
📜 Policies - emerging speech regulation and legislation
Tension between Indian government officials and platform representatives boiled over at the start of this week following disagreements about content moderation processes, according to Reuters. The Ministry of Information and Broadcasting told Google, Twitter and Facebook that they need to do more on fake news and that failing to do so forces government takedowns (see last week's EiM #145), which compromise the ruling party's image. Indian state elections take place over the next six weeks and reports, particularly this one by the Reuters Institute for the Study of Journalism, show the extent to which Indian politics is firmly intertwined with the moderation of its digital services.
Next door, in Pakistan, a regional court ordered the country's telecommunications regulator to block "immoral" content from TikTok because it was "affecting the younger generation". The Pakistan Telecommunication Authority argued that 289m immoral videos and almost 15m offending accounts had already been removed before the hearing was adjourned. TikTok has already been banned in the country four times, including a five-month ban last year that ended in November (EiM #138).
In Russia, Vladimir Putin is looking to introduce a "self-regulated register of toxic content" to "protect minors". Details are scarce but Reuters reports that it is in response to street protests by political opponents and fears that the country's youth are being corrupted by the web. Russia has threatened to introduce legislation forcing platforms to remove "prohibited information" since late 2020 (EiM #93) and it can't have helped that, in September last year, YouTube deleted two channels operated by state-backed Russia Today (EiM #130).
What is next for the Digital Services Act? If you're wondering the same thing, Tech Policy Press (which EiM recently collaborated with) has a good podcast with Mathias Vermeulen on the next steps for Europe's flagship legislation to harmonise platform takedowns. Subscribe to both the podcast and the newsletter.
💡 Products - the features and functionality shaping speech
A list of third-party tools designed to keep Twitter users safe can now be found on a dedicated site, according to a company release. Twitter Toolbox features three moderation tools — Block Party (featured in EiM #135) as well as Bodyguard and Moderate — and six others for writing and analytics that have been "vetted to Twitter's quality and safety standards". Social Media Today makes the point that perhaps Twitter should consider building these tools into the core experience and I don't disagree.
I forgot to include this in last week's EiM but it feels notable that the world's fastest AI supercomputer, announced by Meta for release this year, will be used, in part, to train content moderation algorithms and detect online harms. That suggests two things to me: 1) that keeping people safe online is one of the most difficult challenges humanity faces right now and 2) that removing humans from the process of defining abuse or hate speech is very much Mark Zuckerberg's goal. Gulp.
💬 Platforms - efforts to enforce company guidelines
So, it turns out that Neil Young wields a lot of power. After the singer removed his music from Spotify in protest at Covid-19 misinformation on the platform, the company published its platform rules, which have been in place — according to The Verge — for "years" but never published. There's nothing very surprising about the scope of the rules, although there's a strange mix of vague and specific examples (coronavirus parties get a mention) and some of the wording feels overly headmaster-y ("rule breakers"?).
Yelp this week published its 2021 Trust and Safety report, which showed the effect of the pandemic on users of its service. The most interesting stat to me was the significant year-on-year increase — 161% — in reviews from people claiming they got Covid-19 from a business or criticising its safety measures (both banned under its guidelines). Some 1,300 businesses were also "review bombed" as a result of media coverage or attention on social media.
Facebook is not ready for "the coming electoral tsunami" and needs to better "understand how heated discussions are shifting in real-time", writes Katie Harbath, the company's former public policy director, in an op-ed for The New York Times. There's an interesting line in the piece about the workforce required: 500 full-time employees plus 30,000 folks working on safety and security could only handle three major elections at a time. Time to get hiring.
Meanwhile, Twitter admitted that it has not been enforcing its civic integrity policy in relation to lies about the 2020 elections since March 2021, just four months after the election took place. But it only made the admission when asked by the media. So much for "a longstanding commitment to meaningful transparency".
👥 People - folks changing the future of moderation
The so-called metaverse is barely a few months old and already there have been a number of stories about women experiencing abuse and sexual harassment.
The latest is Nina Jane Patel, who wrote about being attacked in Meta's Horizon Venues in December and this week gave an interview to a UK newspaper in which she talked about the attack and the anxiety she has suffered since. Imagine that for just one second. (NB: Being interviewed led to further abuse and death threats but, hey, the media is somehow exempt).
Platforms know that safety in virtual and augmented reality is not only a problem now but one that, as Quinta Jurecic put it, could make abuse "much, much worse". But that isn't stopping them from moving forward at great pace.
Patel's experience, and her work trying to solve these problems, are helpful handbrakes and a reminder of what is at stake.
🐦 Tweets of note
- "so much more to do, and the fight is never over, but i'm happy to be part of a team that takes safety seriously" - Juliet Shen, safety product manager at Snap, reflects on a good quarter for the camera company.
- "Proposed: Anyone who writes "platforms do x" must cite a non-Facebook example or else revise to "Facebook does x." - Stanford Cyber Policy Centre's Daphne Keller reminds us that Facebook is not the internet.
- "Protecting children online is a huge problem, solved by good collaboration, investment, hearing children's voices, while being guided by data and research." - Meta child sexual exploitation and abuse expert John Buckley explains what he learnt as he announces his departure.