
How to audit platform algorithms, Supreme Court round-up and TikTok reshuffle

The week in content moderation - edition #237

Hello and welcome to Everything in Moderation, your guide to the policies, products, platforms and people shaping the future of online speech and the internet. It's written by me, Ben Whitelaw and supported by members like you.

Here's something I'm really excited to finally share: EiM is launching a new newsletter.

Trust & Safety Insider will provide cutting-edge insights to help inform your approach to online speech and safety within your platform, community or organisation. Written by Alice Hunsberger — an industry expert with over 10 years of experience at Grindr, OkCupid and now PartnerHero — T&S Insider will analyse emerging issues and share essential tips, resources and strategies. You can read more here; it will land in your inbox every Monday.

I'm so pleased to have Alice writing for EiM and complementing the Friday newsletter, which will now be known as Week in Review. But if two doses of EiM a week are too much, don't worry; you can easily update your newsletter preferences in Your Account.

Here's everything in moderation from the last seven days — BW

PS Some readers of last week's EiM were prompted to subscribe. This was an A/B test, and there are no plans to go 'behind a paywall' just yet. But do become an EiM member if you, or your employer, can afford to do so.


Today’s edition is in partnership with SafetyKit. SafetyKit handles all of your Trust & Safety risks end-to-end with a single integration.

SafetyKit checks all of your content types (text, images, audio, video, support tickets, user flags, websites, and documents) against all of your policies with human-level nuance.

Our integrations connect to the tools you use, like Zendesk, and you can automate T&S tickets and user flags without engineering resourcing.

If you’re a forward-looking T&S leader trying to cover all of your risks without a dozen integrations and a constantly growing team, let’s talk. 


Policies

New and emerging internet policy and online speech regulation

The fact that it took four hours of oral argument to hear two cases goes to show just how much is at stake in the NetChoice cases (EiM #236). You may have followed the livestream or The Verge's live blog; I caught up by picking through the avalanche of commentary:

  • Most of the justices seemed to side with NetChoice but, as The Verge's Lauren Feiner wrote, they also seemed "worried about creating an industry that could not be touched by regulation". One suggestion was that discretion be afforded to platforms on the basis of the "expressive nature of the business", meaning Facebook or Reddit would have more leeway than, for example, Uber or Etsy.
  • A key component of the discussion was the type of challenge that NetChoice, the tech industry body representing the platforms, made. Vox's Ian Millhiser did a good job (at least as far as this non-US, non-legal scholar is concerned) of explaining the difference between a "facial challenge" and an "as applied" one.
  • From what I read, most people expect the Supreme Court to reinstate the Texas and Florida laws to see what effect they have, which would force platforms to comply or pull out of those states entirely. In the meantime, wrote David Sullivan of the Digital Trust and Safety Partnership, platforms should continue to "demonstrate that the decisions they make are neither arbitrary nor discriminatory."

Get access to the rest of this edition of EiM and 200+ others by becoming a paying member