Welcome to Everything in Moderation, your weekly digest about all things content moderation. It's written by me, Ben Whitelaw.
A warm welcome to new subscribers from Roblox, Spectrum Labs, LinkedIn, Twitter, Carnegie Endowment for International Peace, Tech Policy Press and a bunch more.
I'm pleased to announce a new article format for EiM that I'm calling 'Explorations'. Each week, I'll interrogate a question about content moderation and online safety that I'm wrestling with (and maybe you are too) and try to join the dots. The first one looks at whether Trust and Safety teams in big platforms are set up to fail. I'd love to hear what you think — and for you to submit a question that you'd like interrogated.
Onto this week's news and analysis — BW
📜 Policies - emerging speech regulation and legislation
An associate professor at one of Taiwan's best universities has accused Facebook of censoring several prominent nationals for supporting Taiwanese independence. In a piece for the Taiwan Times, Chiang Ya-chi explained how YouTuber Chen Yen-chang received a 30-day ban for his comments and saw similar posts reportedly disappear. Facebook's Chinese-language content moderators are mostly Chinese nationals, a notable detail at a time when China has been escalating its military activity and talk of an invasion grows.
Whistleblower Frances Haugen's testimony (see People) has had a perhaps unintended and certainly interesting side-effect: spurring on regulation efforts in Europe. The New York Times reported that the former Facebook product manager met with European officials Thierry Breton (EiM #52) and Vera Jourova after her Senate appearance. The report also noted she had spoken to Christel Schaldemose, a Danish MEP involved in the Digital Services Act, in the last few weeks. Talk about getting around.
PS: Truth and Trust Online, a conference that brings together academics, industry and non-profits to discuss how to create trustworthy online spaces, resumes later today. I watched a fascinating presentation about detecting emoji hate.
💡 Products - the features and functionality shaping speech
Recommender systems that "reward engagement and ultimately drive revenue for the platforms" are where our focus should be if we want to get rid of bad content, according to the founders of the non-profit Global Disinformation Index. In a blog post, Clare Melford and Danny Rogers say the focus on content moderation only serves to keep "platforms, policymakers, researchers and academics spinning their wheels for ages" and that a renewed focus on products that drive financial incentives is the way forward.
Roam Research, the knowledge management software, has got itself into some bother by banning a dozen critical voices from its community of fans without warning. In a long Reddit post, CEO Conor White-Sullivan tried to explain why he wielded the ban hammer but admitted that "it's totally possible that you were banned unjustly." It's not a great look, not only because the company hyped up its community (aka #RoamCult) but also because it is yet another example of startup founders ruling by arbitrary decree. (Thanks to Matt for alerting me to this)
Finally in this section, Twitter's warning labels are rolling out but have, er, not exactly been working as intended, according to some users. Tech reporter Will Oremus noticed the same. One to keep an eye on.
💬 Platforms - efforts to enforce company guidelines
OnlyFans must do more to protect its users and should "reflect on the potential risks of pushing content creators away", according to two researchers. Dr Elena Martellozzo and Paula Bradbury argue in a blog post that the steps taken to protect content providers have been "unsatisfactory" and that the decision to row back on a ban on explicit content "appears to be purely financial".
👥 People - folks changing the future of moderation
This time last week, only a handful of people knew the name Frances Haugen. That's very different now.
Like many whistleblowers who have gone before her, the former Facebook employee showed extreme bravery and coolness in a 13-minute segment on last Sunday's 60 Minutes and then a wide-ranging Senate testimony on Tuesday. Her composure was perhaps unsurprising to anyone who has read about the health issues the 37-year-old has faced in recent years, including being homebound for a year with a blood clot.
Her thanks? To be told repeatedly by her defensive ex-employer that she "did not work on child safety or Instagram or research these issues". (If you're interested in the media tactics, check out this Input Mag piece on Facebook's press rottweiler-in-chief Andy Stone).
What made her appearance notable was the way Haugen shone a light on the inner workings of Facebook and its capacity to make content decisions. She backed the idea of forcing the company to work with researchers and has previously mooted the idea of having public officials oversee the company from the inside.
Yale fellow Chinmayi Arun, whose paper touches on the topic, hit the nail on the head when she said "we have to stop treating [Facebook] like a monolith and take into account its internal complexity". Only with more whistleblowers like Haugen will that happen.
🐦 Tweets of note
- "I am once again asking you to be as concerned about the harm done by Facebook suppressing content as you are about it amplifying content." - Evan Greer of Fight for the Future gives us a timely reminder that it's not as simple as we think in this thread.
- "Everyone I've met who works on these issues has a deep sense of purpose driving them vs extrinsic rewards like money/fame" - Snap's Juliet Shen echoes my thoughts exactly on the people doing the work.
- "any one of these things could make them ‘publishers’ and suddenly liable to lawsuits" - Planetary co-founder Tom Coates brings it back to a fundamental debate.
Updated (15 October 2021): Following reader feedback, this edition was updated to better reflect the argument of the article about OnlyFans' content policy and its effect on sex workers.