4 min read

📌 No ban for anonymous accounts, life as a Pornhub mod and what Neal Mohan said next

The week in content moderation - edition #149

Hello and welcome to Everything in Moderation, a weekly recap of the latest news related to content moderation from around the world. It's written by me, Ben Whitelaw.

Welcome to new subscribers from Trust and Safety Professional Association, Cornell University, ActiveFence, Helpful Digital, Disinfo.eu, Twitter, Logically and others. To everyone reading today's edition, I'm sending you all strength, wherever you are, at the end of this most challenging of weeks.

It pales in comparison with what's gone on but I wanted to share some big news for me and for the newsletter: I'm launching a membership programme for Everything in Moderation. I'll explain more over the next few weeks but, after some positive feedback, I feel it's the right time to step things up in support of everyone reimagining a better, safer future for the web.

Onto this week's jam-packed EiM, including my read of the week — BW


📜 Policies - emerging speech regulation and legislation

New transparency legislation designed "to hold powerful online companies accountable for the promises they make to users" has been announced this week by three members of US Congress. The Digital Services Oversight and Safety Act (DSOSA) would establish a unit within the Federal Trade Commission with over 500 technologists and lawyers and create an "internal complaint handling system" for users to appeal content moderation decisions. DSOSA is just one of a handful of legislative proposals being considered in the US to make tech platforms more accountable. Tech Policy Press has the full story.

The prospect of anonymous social media accounts being banned under UK law looks to be dead in the water, judging by a line in this government-friendly news article about the Online Safety Bill. The Sun reports that ministers have "decided against banning anonymous accounts altogether", putting an end to what smart policy folks have long said is a terrible idea. I'll say it again: anonymity is not the problem (EiM #59).

A working group to streamline engagement between Pakistan's government officials and platform representatives met for the first time last week, according to reports. The National Social Media Coordination Working Group (NSMCWG) — announced by Prime Minister Imran Khan in October — discussed the need for timely moderation practices in compliance with local law. TikTok, which has had a difficult relationship with the government (EiM #146), was seemingly not present.

💡 Products - the features and functionality shaping speech

A social media platform for teenagers known for its abuse-preventing technology is "dangerous" for children, according to an investigation by The Sunday Times. A reporter went undercover on Yubo for 10 days, using her own photos to bypass age verification technology, supplied by a company called Yoti. She came across multiple instances of sexual harassment, racism and bullying that I won't repeat here. The app, which has 3.6m UK users and over 40m globally, has previously been lauded for its preventive approach to online safety.  

Another platform with a reputation for "industry-leading" technology, Roblox, has also come under fire this week after the BBC discovered sex games playing out among users. The incidents, known as "condos", manage to bypass "human and machine detection" but are taken down within an hour, according to the company.

💬 Platforms - efforts to enforce company guidelines

Following the invasion of Ukraine by Russian forces, Facebook announced that it has established a Special Operations Centre manned by native speakers. Nathaniel Gleicher, Facebook's head of security policy, took to Twitter to announce the measures and a new feature that allows users to lock their profile, preventing bad actors from seeing their posts or downloading their photos. Gleicher has previously shared details of how his team put in place protocols to disable the Querdenken movement in Germany and was famously called "Facebook's top troll hunter" by CNN in 2018. How things have moved on since then.

Nextdoor, the neighbourhood-based social network, has released its first transparency report. Axios has the top lines but it's broadly a positive story for its community-led moderation model and the 233,000 volunteers that helped to identify violating posts. But there's a warning in there too: almost 2% of all content on Nextdoor was deemed reportable, which feels very high. It wasn't too long ago that Nextdoor got in trouble for deleting BLM posts (EiM #71).

A Pornhub moderator has revealed what it was like working at the company in the early 2010s in a fascinating piece published by The Verge this week. Nathan Munn never read the platform's terms of service and yet had to handle video takedown requests, which were "treated by management as a minor annoyance", while also writing dirty jokes for its social media profiles. Every paragraph contains something jaw-dropping. My read of the week.

👥 People - folks changing the future of moderation

If you subscribe to the idea that YouTube gets away with its approach to content moderation, then you have to look at the person responsible for that. And at the video platform, that's Neal Mohan.

Mohan will be familiar to most of you. In his role as chief product officer and the person in charge of Trust and Safety, he pops up a lot in blog posts and has a growing public presence. The last time I featured him in EiM (🔭) was when he had to defend his dual position at the top of the company. He's now followed that up with his take on how the company is thinking about mis- and disinformation.

Casey Newton of Platformer suggests the blog post marks a new phase of public communication in which platforms like YouTube "get more comfortable admitting that they don’t always know what to do". He even got Mohan to reveal that the video platform was considering "additional types of labels to add to a video or atop search results" and was "exploring" partnerships with local experts and non-governmental organisations designed to reduce "borderline content".

I look forward to reading a fuller blog post from Mohan about those measures at his earliest convenience.

🐦 Tweets of note

  • "There is no content moderation at scale today & safety in the metaverse requires action now." - Digital policy expert Kristina Podnar says recent stories are just at the tip of the iceberg.
  • "The Digital Services Oversight and Safety Act (DSOSA) has more than a passing similarity to the EU Digital Services Act" - David Sullivan, executive director of Digital Trust and Safety Partnership, notes some eerie similarities.
  • "A de-centralized web will only be safer, more understanding, & more equitable if we invest in those things" - Cornell assistant professor J. Nathan Matias with a thought-provoking thread on the need to resource self-organising online communities and the institutions that support them.