3 min read

📌 When to pre-moderate, ending online abuse for women and the DSA's architects

The week in content moderation - edition #119

Welcome to Everything in Moderation, the weekly newsletter about content moderation and online speech, now and in the future. It’s curated and produced by me, Ben Whitelaw (this week, from the depths of Cornwall, UK).

Special thanks to Ahmed for sharing last week's newsletter with his network and the host of new subscribers that subsequently signed up. Don't be shy — hit reply to say hello or to give me your first-look feedback.

Let's get on with this week's round-up — BW


📜 Policies - emerging speech regulation and legislation

At last, some hope in the quest to end the terrible abuse of women on the dominant digital platforms. Facebook, Google, TikTok and Twitter yesterday announced a series of commitments designed to improve how women curate their experience and report abusive users. The commitments, which were developed in a series of workshops led by the Web Foundation with over 120 organisations from 35 countries, signal a big step in the right direction. Now to hold networks' feet to the fire.

Over in the US, a Florida bill designed to control the moderation practices of "social-media providers deemed too large and too liberal" has been blocked by a judge for being "at odds with accepted constitutional principles". Not a surprising development to anyone who's been following the case but bad news for platforms looking to build a theme park in the state.

💡 Products - the features and functionality shaping speech

Now here's a feature I would have liked to have when I was corralling commenters: pre-moderation for new users. Comment hosting service Disqus launched the feature this week as "a kind of parking lot for new commenters" that should help sites avoid spammy comments and organised drive-bys. Wonkette, the US politics blog and a Disqus user, wrote about the feature and how it will also help "protect regulars here from stalkers who try to follow them here from other sites".

Checkstep, one of a plethora of AI-based moderation tools now on the market, announced a £1.3m funding round this week that will allow it to boost its R&D arm and scale its focus beyond toxic content and abuse. It follows several other funding rounds for AI mod services this year: Caliber AI raised €600,000 in January while Hive announced a chunky $85m back in May. There's gold in them there posts.

💬 Platforms - efforts to enforce company guidelines

TikTok has, for the first time, published the number of users under 13 years old that it removed from its platform. The short-form video platform pulled 7.3m accounts, according to its latest transparency report, and was keen to point out that this equates to less than 1% of all users. A New York Times report last year had estimated that as many as a third of users were under 13.

Microsoft is planning to boost its legal and corporate teams by 20% in response to the incoming tidal wave of tech regulation around the world, according to Axios. President Brad Smith called the legislation part of a "sweeping set of changes". Definitely a good time to be a lawyer.

👥 People - folks changing the future of moderation

Legislation, especially at the level of the European Commission, involves input from hundreds, perhaps thousands, of people. The Digital Services Act would have been no different.

But, of course, someone is leading the process. And for the DSA, those folks are Prabhat Agarwal, head of the Commission’s Digital Services and Platforms unit, and Gerard de Graaf, director for the digital transformation in the Commission’s Communications Networks, Content and Technology directorate-general.

They appear in this fireside chat organised by the Atlantic Council’s Digital Forensic Research Lab and discuss the overdue need to regulate, how the DSA is like "regulating the fire exits" of the digital world, and the additional obligations on larger platforms (those that reach 45m users, roughly 10% of the EU population).

🐦 Tweets of note

  • "This should be a wake-up call for the left: calling for more and faster social media censorship will always backfire on marginalized social movements" - Evan Greer, Fight for the Future director, notes the significance of the banning of Right Wing Watch in this good thread.
  • "I'd like to think I follow moderation news pretty closely, but this is news to me." - Former tech lawyer and Copyright Office attorney Mark Gray is, like me, surprised by news of Facebook's hard line on Capital rioters.
  • “This will... not be sufficient" - Stanford Internet Observatory's Alex Stamos looks at the security policies of new Trump-affiliated platform Gettr and finds, well, not a lot.

Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.