📌 A new way to appeal your ban, 'GDPR for public discourse' and more mental health lawsuits
Hello and welcome to Everything in Moderation, your guide to understanding how content moderation is changing the world. It's written by me, Ben Whitelaw.
I want to welcome new subscribers from Discord, the American University, Twitter, Facebook, TaskUs, the Oversight Board, HBM Advisory, and elsewhere. And thanks to everyone that reads each week.
There's a lot to get into this week, including an exclusive Q&A with the head of Trust and Safety at Clubhouse, made possible by EiM's founding members. Read on for more information about how you can join them.
Here's what you need to know this week— BW
📜 Policies - emerging speech regulation and legislation
It's been just over a week since the European Parliament and Council reached a provisional agreement on the Digital Markets Act. The milestone has seen an uptick in commentary around its sister regulation — the Digital Services Act — which will focus on moderation processes and algorithmic transparency when it is passed later this year. Here's a flavour:
- The Centre for Democracy and Technology warned that "the DSA will be to public discourse what the GDPR was to privacy" and will have "far-reaching implications well beyond the European jurisdiction".
- Whistleblower Frances Haugen argues in an op-ed for the Financial Times that civil society organisations "with a record of integrity and excellence in research" should be given access to platform data as part of the legislation.
- The Washington Post notes how the DMA and DSA represent another example where US lawmakers are "chasing Europe’s lead yet again".
As far as the UK's Online Safety Bill goes, there were several notable developments while I was away:
- Conservative peer Michael Grade has been chosen by the government as its preferred candidate to chair Ofcom, the regulator proposed under the bill. He'll face a pre-appointment hearing before officially taking the job.
- Policy experts warned that the Bill's "legal but harmful" definition may breach the European Convention on Human Rights.
- Analysis by Carnegie UK notes how the Bill "remains too complex" and is "still some way short of a truly systems-based approach".
Finally, a new blogpost from Techdirt's Mike Masnick makes the case that moderating content actually supports principles of free speech by creating "spaces where more people can feel free to talk". In it, he looks at the distinction between free speech on commercially-regulated websites and on the internet itself, and concludes that true free speech online looks like "a diversity of communities, not all speech on every community". My read of the week.
💡 Products - the features and functionality shaping speech
New, more intuitive reporting flows on Twitch will enable users to search for the reporting violation that best fits their complaint, reports TechCrunch. The changes were mooted last year and, while not quite as radical as the "symptoms first" approach adopted by Twitter last year (EiM #140), they're a simple way of improving a usually unwieldy process.
What's more, Twitch has launched a new appeals hub that allows users to appeal rulings and monitor their complaints. The company plans to use the portal to give more detail on its decisions and even attach video clips (what these will look like is not yet clear). Transparency ftw.
WebPurify, one of the plethora of AI moderation companies on the market, has launched a VR Moderation Studio to come up with new moderation techniques for reducing harmful behaviour in VR/AR. Its co-founder claims it is "the first company to offer something like this" although I'm very sceptical of anything that refers to the metaverse with anything other than a wry smile.
Jess Mason joined as Head of Global Policy and Public Affairs at Clubhouse 12 months ago and spoke to me for the latest Viewpoints piece about why she split policy and operations and how she dealt with last year's negative press about the app's content moderation. Read it in full.
Viewpoints will always remain free to read thanks to the support of EiM members. If you're interested in becoming a founding member, join today and receive a 10% lifetime discount.
💬 Platforms - efforts to enforce company guidelines
Be careful who you respond to on LinkedIn: an investigation by NPR, in conjunction with Stanford researchers, found thousands of fake accounts with AI-generated avatars designed to sell products to unsuspecting users. The businesses behind these accounts claimed "there are no specific rules for profile pictures or the use of avatars", but LinkedIn's Community Policies state otherwise, and the platform removed more than a dozen of the companies identified. Busted.
A second lawsuit has been brought against TikTok by former moderators who allege that they were not provided with adequate mental health support while doing their job. Ashley Velez, who worked for Telus International, and Reece Young, who was a contractor for Atrium in the US, are seeking class-action status that would allow other TikTok mods to join the lawsuit. Candie Frazier, another ex-TikTok moderator, filed a similar lawsuit late last year (EiM #142) although that has reportedly been dropped.
This one's from 10 days ago but worth noting: Telegram will now monitor 100 of its most popular channels in Brazil after the country's Supreme Court threatened to have ISPs and app stores block access to the messaging app over its failure to deal with rampant misinformation. Telegram previously deleted the account of Congresswoman Marjorie Taylor Greene for spreading false Covid-19 narratives (EiM #142) but rarely concerns itself with content moderation. This is different.
👥 People - folks changing the future of moderation
There's been a development in the case of Daniel Motaung, the former Sama moderator in Kenya who was allegedly laid off for trying to organise colleagues in order to get a pay increase. It was his story that formed part of TIME's recent investigation into the poor working conditions of moderators in Africa (EiM #150).
Motaung's legal firm this week issued 12 demands against Sama and Meta, which contracted the company's services, for failing to adhere to Kenyan labour, privacy and health laws. If they fail to respond in 21 days, a lawsuit will be filed.
A lot is at stake here, both for Motaung and for Mercy Mutemi, a lawyer at Nzaili and Sumbi Advocates who is leading the action. She told TechCrunch: "This isn’t an ordinary labor case – the working conditions for Facebook moderators affect all Kenyans." She's right; more people should care about the outcome.
🐦 Tweets of note
- "Spare a thought for your legal and policy colleagues" - Mozilla EU internet policy expert Owen Bennett bemoans the drip-feeding of press releases to announce the Digital Markets Act deal.
- "Three years ago, Congress passed a law that got people killed" - Evan Greer, director of Fight for the Right, reflects on where SESTA/FOSTA laws failed and how lessons must be learnt.
- "Every time Will Smith punches someone on live TV, 50 content moderation escalation workers get their wings" - St Johns Law professor Kate Klonick on a fundamental law of the web.
🦺 Job of the week
This is a new section of EiM designed to help companies find the best trust and safety professionals and enable folks like you to find impactful and fulfilling jobs making the internet a safer, better place. If you have a role you want to be shared with EiM subscribers, get in touch.
Tech Against Terrorism is looking for a full-time Communications Associate to deliver its marketing and comms strategy across all channels.
This is a great opportunity in a smart team doing a wealth of important work. The salary is £30,000 – £35,000 a year (props to TAT for disclosing that on the ad, I didn't even have to ask) and includes flexible working, healthcare, pension, and training budget.