📌 A new way to stop "hate raids", DSA concerns and a ban on vaccine misinfo
Welcome to Everything in Moderation, the weekly newsletter that keeps you up-to-date about content moderation and its impact on the world. It's written by me, Ben Whitelaw.
This week, I finally published a report on the challenges facing Trust and Safety professionals, based on first-hand accounts and a survey of people working in the online safety space. It's a piece of work that I've been doing with the team at Kinzen for over three months and, to tell you the truth, I didn't know if it would chime with people. But it has been great to see so many smart folks noting the effect that a lack of clear goals and institutional knowledge has on this crucial work.
Let me know if the report strikes a chord — I'd love to talk about it with a wider group of T&S professionals.
And without further ado, here's this week's digest — BW
📜 Policies - emerging speech regulation and legislation
The Digital Services Act does "not address the systemic problems of the centralised platform economy, which could reinforce censorship", according to Claire Fernandez, Executive Director of European Digital Rights (EDRi), a digital rights NGO. Fernandez warned that "already marginalised groups are the most likely to be affected" and, together with 12 other organisations, EDRi signed an open letter raising concerns about recent amendments to the DSA that could erode fundamental rights.
Chile is the latest country to consider an online speech bill, one that creates "an absolute incentive for the elimination of content that can be classified as having the potential to be illegal", according to one expert. Digital rights organisations criticised the bill for ignoring the work of academics and human rights experts, while Joan Barata, a legal expert on intermediary liability at Stanford, criticised its "vague liability regime". I've lost count of the countries where this is true.
Talking of Stanford, the university has set up a new Content Policy and Society Lab to respond to moderation "challenges posed by differences in culture, language, and communities". Headed up by Julie Owono, executive director of Internet Sans Frontières and an Oversight Board member, and Dr Niousha Roshani, founder of Global Black Youth, the Lab will create a space to "craft solutions to widely recognised problems". A welcome addition to the space.
💡 Products - the features and functionality shaping speech
Twitch viewers will now be asked to verify their account with a phone number as part of efforts to mitigate the "hate raids" that have become an increasing problem for the platform. Users can verify up to five accounts per phone number and, if one of those accounts is suspended, all accounts associated with that number will be suspended too. My question is: will the cost of a cheap phone deter those who are motivated to spread racist hate? I somehow doubt it.
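For the technically curious, here's a minimal sketch of how that kind of phone-linked enforcement might work under the hood. To be clear, the names below (PhoneRegistry, verify, suspend) are my own invention, not Twitch's implementation; only the five-account cap and the cascading suspension come from the announcement.

```python
from collections import defaultdict

MAX_ACCOUNTS_PER_NUMBER = 5  # Twitch's stated per-number verification limit


class PhoneRegistry:
    """Hypothetical index linking accounts to the phone number that verified them."""

    def __init__(self):
        self._accounts_by_number = defaultdict(set)  # number -> account IDs
        self._number_by_account = {}                 # account ID -> number
        self.suspended = set()

    def verify(self, number: str, account: str) -> bool:
        """Link an account to a number, refusing once the cap is reached."""
        linked = self._accounts_by_number[number]
        if account not in linked and len(linked) >= MAX_ACCOUNTS_PER_NUMBER:
            return False
        linked.add(account)
        self._number_by_account[account] = number
        return True

    def suspend(self, account: str) -> set:
        """Suspend one account and cascade to every account sharing its number."""
        number = self._number_by_account.get(account)
        if number is None:
            self.suspended.add(account)  # unverified account: suspended alone
        else:
            self.suspended |= self._accounts_by_number[number]
        return self.suspended


registry = PhoneRegistry()
registry.verify("+15550100", "main_account")
registry.verify("+15550100", "alt_account")
print(registry.suspend("alt_account"))  # both accounts go down together
```

The design decision worth noting is the shared index: linking accounts by verification credential is what lets one suspension propagate to a whole cluster of alts.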
The ability to switch off comments on Facebook posts is being used by Australian ministers and senior politicians, following a High Court ruling that makes page owners legally responsible for comments left on their posts. The Guardian reported that Tasmanian premier Peter Gutwein told followers: "We know social media is a 24/7 medium, however, our moderation capabilities are not." You're telling us, Peter.
If you're interested in seeing under the hood of an automated moderation system or just like good headlines, you'll enjoy this read from the team at Mux, a video API for developers, about how they think about it. (Thanks Steve for sharing)
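If you don't have time for the full post, here's the general shape that automated moderation systems tend to take: score a piece of content, then route it by confidence. Everything below (the thresholds, the toy classifier, the term list) is an illustrative assumption of mine, not Mux's actual code.

```python
REMOVE_THRESHOLD = 0.9   # assumed: high-confidence violations are auto-actioned
REVIEW_THRESHOLD = 0.5   # assumed: mid-confidence content goes to humans


def classify(content: str) -> float:
    """Stand-in for a real ML classifier; returns a violation probability."""
    banned_terms = {"badword1", "badword2"}  # placeholder term list
    hits = sum(term in content.lower() for term in banned_terms)
    return min(1.0, hits * 0.5)


def moderate(content: str) -> str:
    """Route content into one of three lanes based on the classifier score."""
    score = classify(content)
    if score >= REMOVE_THRESHOLD:
        return "remove"        # auto-enforce, log for audit
    if score >= REVIEW_THRESHOLD:
        return "human_review"  # queue for a moderator
    return "allow"


print(moderate("hello world"))  # -> "allow"
```

The interesting decision is usually the middle band: where you set the review threshold determines how much lands on human moderators.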
💬 Platforms - efforts to enforce company guidelines
It's been a busy week for YouTube, which on Wednesday finally banned vaccine misinformation across the board, including content that promotes vaccine hesitancy. The move comes almost a full year after Facebook did the same, six months on from a report that highlighted YouTube's role in the dissemination of Covid-19 misinformation and a month on from Reddit's own reckoning with virus misinformation (EiM #126).
The policy change comes in the same week that YouTube deleted two German-language channels operated by RT, the Russian state-backed broadcaster, for repeated violations of its guidelines. Russia has a history of restricting platforms that don't comply with the government's wishes (see Twitter back in March) and is expected to respond in kind.
The fallout from the Facebook Files continues, as the Wall Street Journal published six of the documents that formed the basis of its reporting. I haven't looked at them all but, based on some people's reactions, they don't reflect well on the senior leadership of the world's largest social network. Former FB employee Samidh Chakrabarti's thread is a good place to start.
TikTok has created an Urdu-language safety centre and tripled its spending on content moderation to deal with the app's growing popularity in Pakistan, according to a Vice article. It notes that 15% of the 6.5m videos removed in Pakistan in the six months to March 2021 (roughly 975,000 videos) violated its sexually explicit content policy, removals that followed pressure from the Pakistani government to clean up the platform. It's a playbook that other states may seek to follow.
👥 People - folks changing the future of moderation
As soon as Facebook was called to appear in front of the US Senate Commerce Committee following recent revelations about Instagram's effect on teenagers, attention turned to the employee who would be sent to fight its corner. Antigone Davis was that person.
The Global Head of Safety may not be widely known but, according to this good profile from Tech Policy Press (also my read of the week), she has worked at the company since 2014 and is known for sticking to company lines on its approach to hate speech and technology addiction.
She faced some tough questioning (as well as some time-wasting comments from the subcommittee chairman) and was accused by lawmakers on both sides of not being straight with them. But, broadly, Davis reiterated the past week's messaging that Facebook is equipped to keep children safe on its platform.
🐦 Tweets of note
- "So this happened" - A cautionary thread from Paul Weedon, whose 14-year-old YouTube video that led him to become a meme, was taken down for violating the platform's violent/graphic content policy. (It has since been reinstated)
- "SEEKING LGBTQ INSTAGRAM USERS" - Páraic Kerrigan, an assistant professor at University College Dublin, is working on an interesting new study about the effect of shadowbanning on the photo-sharing app.
- "Not just the Cleggs and Kaplans, but also the folks writing the talking points, the ones testifying then getting promos" - former Pinterest employee Ifeoma Ozoma wonders whether there will be consequences for not holding people at platforms to account.