3 min read

📌 Policy-violating Pins, India's awful internet law and slow forwarding

The week in content moderation - edition #103

Welcome to Everything in Moderation, your weekly newsletter about content moderation by me, Ben Whitelaw.

A big thank you to those subscribers who shared last week's newsletter and to community-building specialist Rosie Sherry, who included it in her weekly round-up newsletter. If you have a friend (or foe) who would enjoy EiM, send them here.

Right, here's what happened over the past seven days — BW


📜 Policies - company guidelines and speech regulation

India's new internet law — which requires companies to take content down within 36 hours — is so bad that it has united two long-sparring adversaries: platforms and the media. The Foundation for Independent Journalism, which operates news site The Wire, will argue in the Delhi High Court that the rules are an “over-reach” and have free speech implications.

Are US state legislators turning against the dominant digital platforms that have long lived in their backyard? This week, two fights broke out between platforms and individual states — a new front as far as content moderation regulation is concerned:

  • Twitter sued Texas' attorney general for opening an investigation into its content moderation practices, which it claims will allow bad actors to “carefully design their content to evade Twitter’s scrutiny”. Ken Paxton opened the investigation in January following Donald Trump's ban.
  • Utah narrowly passed a bill requiring social media platforms with users in the state to clearly state their content moderation policy and to inform Utah folk within 24 hours when they run afoul of it.

Mike Masnick published a handy piece just yesterday over at Techdirt if you want to know more about US states going head-to-head with the platforms.

💡 Products - features and functionality

The engineering team at Pinterest has written a detailed blog post about the machine learning models it uses to detect harmful pins and boards. It is unsurprisingly pretty technical ('feed forward network' anyone?) but the efforts have contributed to an 80% decrease in self-harm content since being introduced in 2019, it claims.
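For the curious, the 'feed forward network' the Pinterest post mentions is one of the simplest model shapes in machine learning: features go in one end, pass through a hidden layer or two, and a harmfulness score comes out the other. Here's a minimal sketch in plain Python — the layer sizes, weights and feature names are entirely illustrative, not Pinterest's actual architecture:

```python
import math

def relu(values):
    # Hidden-layer activation: zero out negatives, pass positives through
    return [max(0.0, v) for v in values]

def sigmoid(x):
    # Squash the final logit into a 0..1 probability
    return 1.0 / (1.0 + math.exp(-x))

def linear(inputs, weights, biases):
    # One row of weights per output unit: output = W·x + b
    return [sum(w * v for w, v in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def harmful_score(features, w1, b1, w2, b2):
    """Two-layer feed-forward pass: features -> hidden -> probability
    that the pin violates policy (placeholder weights, for illustration)."""
    hidden = relu(linear(features, w1, b1))
    logit = linear(hidden, w2, b2)[0]
    return sigmoid(logit)
```

In production a classifier like this would sit behind a threshold — score above, say, 0.9 and the pin is hidden or queued for human review.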

Limiting the number of groups that WhatsApp users could forward messages to "reduced forwarding by about 25% globally", according to Will Cathcart, who heads up the platform. Removing the quick forward button also "cut highly forwarded messages by 70%". Less problematic content, less need to moderate.

💬 Platforms - dominant digital platforms

Facebook's ads celebrating Black History Month were removed by its own system for appearing without disclaimers noting they concerned “social issues, elections or politics”. Proof once again that automation is hard.

Wellbeing influencers using #wearethecontrol, users spelling "vaccines" as "va€€In3s" and subtle usage of the syringe emoji; this piece by Salon is a wild look at how Instagram posters are evading content moderation through alternative means.

Cameo — the personalised video messages app used by celebs — is creating a Trust and Safety team and hoping to hire a part-time attorney that can lead its efforts. The news may or may not have something to do with the fact that British political loudmouth Nigel Farage joined the platform this week.

👥 People - those shaping the future of content moderation

I know it's only one company but I can't get enough of reporting that explains the inner workings of Facebook's moderation/safety efforts. And this Technology Review piece on Joaquin Quiñonero Candela is up there as one of the best.

In it, reporter Karen Hao explains how Quiñonero was appointed head of Facebook's Applied Machine Learning (AML) team, sat close to Zuckerberg in the Facebook offices for several years, and created FBLearner Flow, the AI model development platform that helped scale the company's use of algorithms, including ones for content moderation.

If you like insane detail and previously unreported conversations, definitely have a read of this one.

🐦 Tweets of note

  • 'editing is the new content moderation is the new editing' - Lawfareblog managing editor, Quinta Jurecic on the Glenn Greenwald/Substack debacle.
  • 'And here I thought I was getting targeted with "Facebook supports regulation" ads because I'm such a prominent tech policy influencer' - Wired reporter Gilad Edelman on his new piece about how Washington is awash with Facebook's pro-regulation messaging.
  • 'Hiring a postdoc for fall- seeking social scientist (quant or qual)' - great opportunity to work with D Yvette Wohn at the New Jersey Institute of Technology.

Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.