📌 DSA takes 'major step forward', Substack's misinfo problem and BPOs focus on wellbeing
Hello and welcome to another edition of Everything in Moderation, your need-to-know guide to this week's content moderation news, out every Friday. It's written by me, Ben Whitelaw.
New subscribers from the University of Michigan, Meta, the Global Internet Forum to Counter Terrorism (GIFCT), ActiveFence, MindGeek and elsewhere, welcome and thank you for signing up. I'm fast approaching 150 editions of EiM and inching towards 1,000 subscribers, and every one of you matters as much as the first.
Here's what you need to know from the last seven days — BW
📜 Policies - emerging speech regulation and legislation
The European Parliament yesterday approved the text of the Digital Services Act, marking a major milestone in the regulation of platforms and how they moderate content. The vote — 530 in favour, 78 against, with 80 abstentions — sets the stage for upcoming negotiations with the 27 member states and the European Commission, after which the DSA will become law. AlgorithmWatch called it a "major step forward in regulating Big Tech".
Counterspeech has a larger role to play in addressing hate speech in India, according to an op-ed by two researchers. Prateek Waghre (who also writes the very good Information Ecologist newsletter that EiM collaborated with recently) and Tarunima Prabhakar make the case in The Indian Express that "content moderation should be considered a late-stage intervention" and that de-escalating dialogue and enforcing social norms around respect and openness are underused and potentially impactful interventions.
A new report from The Royal Society has warned against removing content as a means of combatting scientific misinformation, stressing that it is "not a silver bullet and may undermine the scientific process and public trust". Produced by an esteemed working group, the report also highlights a number of possible remedies for tackling bad actors, including better researcher access to platform data and a beefed-up role for fact-checkers.
💡 Products - the features and functionality shaping speech
A Twitter experiment to allow users to flag a tweet as misleading has been extended to the Philippines, Brazil and Spain, according to the platform's head of site integrity. Three million reports of false information were submitted in the three original test countries — the US, South Korea and Australia — but the new countries, all of which have or are likely to have an election this year, will pose significantly different challenges.
Amazon's moderation software for video has increased its accuracy rate for detecting explicit nudity following months of testing. Rekognition Video, which counts CBS and the NFL among its customers and claims to "lead to a better experience for human moderators and more cost savings", has also been rolled out to all AWS regions, according to a company release.
💬 Platforms - efforts to enforce company guidelines
It's been a while since Substack was featured in EiM but a new report from ISD Global suggests its laissez-faire moderation policy still has significant holes. It highlights Joseph Mercola, who regularly shares Covid-19 misinformation with thousands of subscribers via his $5-a-month newsletter, as well as white nationalist groups and QAnon influencers drawn to the platform for its so-called censorship-free approach. My read of the week.
Twitter's efforts to prevent the disclosure of how it moderates content were dealt a blow this week when a Paris appeals court upheld a ruling stating that it must share information with NGOs fighting hate speech. The company, which can refer the case to France's highest court, said it was "looking into the decision" before deciding whether to do so.
A Facebook policy designed to limit offline violence is under scrutiny again after the company removed posts and suspended the Instagram account of a media publication in Sri Lanka in the belief that it was a terrorist organisation. Thusiyan Nandakumar, the editor of the Tamil Guardian, told The Intercept his outlet fell foul of the Dangerous Individuals and Organizations policy and that he received only "vague assurances" from the company that the error wouldn't happen again.
👥 People - folks changing the future of moderation
Business process outsourcers (BPOs) that provide moderation services for platforms are finally addressing the mental health toll of being a frontline internet worker.
In just the last few years, TaskUs, which has 27,000 employees (not all of them moderators) and works with Facebook among others, has created a wellness and resiliency team, while Accenture, another big Meta partner, has accepted that the role can cause PTSD following accusations that its counselling service is "woefully inadequate".
Others are getting in on the act too. This week Telus International — which employs 60,000 staff in customer experience and moderation roles — announced Dr Lucy Rattrie as Global Director of Workplace Wellbeing. Dr Rattrie is a chartered psychologist and will focus on "preventative wellness practices and developing a psychological health curriculum", according to the press release.
Her appointment comes just two weeks after a contractor brought a lawsuit against Telus (EiM #142) and in the same week that a subsidiary of the company in Barcelona faced legal action in Ireland. The cynic in me wonders whether the timing of the announcement is entirely coincidental.
🐦 Tweets of note
- "Suggests that people do cry wolf when it comes to misinfo and relying on users to police misinfo themselves may not be so effective" - evelyn douek reads between the lines on Twitter's user reporting pilot.
- "The ginormous content moderation enterprise is impressive, but too weighted towards the U.S" - NED's Kevin Shieves on a dawning realisation that platforms are not as international as they make out.
- "I worked at a large tech company for 2 years. Some of the work I did was in content moderation" - Lawyer Alex Peter provides a peek behind the curtain at what working at a large tech company is like.