6 min read

Platform design in the dock, Meta all in on AI moderation and Block Party gets bought

The week in content moderation - edition #330

Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.

When we look back in five or ten years' time, I wonder what we'll say about the last seven days: a week when US juries handed down two landmark verdicts, European regulators turned up the heat on age assurance and the world's largest platform ramped up AI moderation.

You can hear the incredulity — and perhaps fatigue — in the voices of Mike and me in this week's episode of Ctrl-Alt-Speech, which we've titled For Meta or Worse. Have a listen and share with family, friends and any colleagues who could do with some context.

To new subscribers from 2K, Irdeto, Ofcom, Playroom, Crisp, Rover, Illuminate and elsewhere: welcome. It’s not always this hectic, though weeks like this are exactly why EiM exists — to help busy people in and around trust and safety make sense of the stories that matter.

Get in touch — ben@everythinginmoderation.co — with feedback and suggestions. And thanks for reading — BW


DECISION MAKERS READ EVERYTHING IN MODERATION

"EiM is hands down the best T&S newsletter out there, and I think it fills a niche that is very much needed more than ever!" - T&S product manager

Thousands of experts and decision makers get their fix of industry news and analysis from Everything in Moderation's weekly newsletters, Week in Review (every Friday) and T&S Insider (every Monday).

Talk to them by becoming a sponsor today.

FIND OUT MORE

Policies

New and emerging internet policy and online speech regulation

The big, almost unavoidable, news this week was the verdicts in two US cases against Meta — one in New Mexico (child exploitation) and the other in California (social media addiction) where Google/YouTube were also involved. The common thread was juries accepting the argument that product design contributed directly to the harm of young users, something that Mike and I discussed at length in a recent Ctrl-Alt-Speech episode.

The penalties issued to Meta — $375m in the New Mexico case and $6m in California, 70% of which it is liable for — pale in comparison to the reputational risk and the 8% decline in its stock price on Thursday. US lawyers will be licking their lips; more than 3,000 similar cases have been filed across the US, according to Bloomberg, with two similar to the California trial expected before the end of the year.

The takes worth reading include: 

  • My esteemed co-host Mike Masnick of Techdirt wrote a strongly argued piece on why we shouldn’t be cheering these decisions for a whole bunch of reasons, including an often overlooked legal procedural one.  
  • Rina Chandran at Rest of World speaks to a host of non-profits that say the cases represent an “earthquake that shakes Big Tech’s predatory business model to its core”.
  • Matt Stoller from American Economic Liberties Project makes the interesting point that the verdicts fall into a trend of juries wanting to make big corporations pay for their actions — something we touched on in this week’s podcast.

Not to be outdone, regulators across the pond have been busy too: 

Also in this section...

Products

Features, functionality and technology shaping online speech

Block Party, the safety startup that began as an anti-harassment tool for X/Twitter, has been acquired by privacy company DeleteMe. The tool — not to be confused with the British indie band of the same name (but different spelling) — has had quite the journey: cut off by Elon Musk after he bought the platform, it expanded its product to cover 12 platforms before launching an enterprise product.

Meeting the (ban) moment: Founder Tracy Chou has talked about Block Party as a solution to the urge to quit social media after an unexpected pile-on or harassment campaign. I've always wondered whether it could also be used by parents, in conversation with their children, to eradicate the most challenging parts of being online. Being part of a larger company can only increase those chances.

How London became a Trust & Safety hub
It might not be San Francisco, but UK regulation and new industry events have seen London emerge as an important destination within the Trust & Safety ecosystem

Platforms

Social networks and the application of content guidelines

Meta announced at the back end of last week that it would lean more heavily on AI for safety, claiming it will — wait for it — "catch more severe violations like scams faster and more accurately, with fewer over-enforcement mistakes." The press release touted the impact of its "more advanced AI systems", which IT news site The Register called "odd" and which felt like fairly common uses of AI in moderation dressed up as something new.

To give the company some credit, the announcement said AI won’t be used everywhere and without oversight; "people will remain at the center (sic) of our approach", it claims, to "design, train, oversee and evaluate our AI systems, measuring performance and making the most complex, high-impact decisions".

Talking my language? The boldest claim is that Meta's new systems will be able to support moderation outcomes for 98% of users globally, far beyond the 80 languages its existing systems are said to cover. As I explain on this week's podcast, I asked Meta's AI assistant where it is getting its "cultural niche" from and got a bunch of hallucinatory garbage back. No wonder the Oversight Board is urging caution about the rollout.

Also in this section...

For Meta or Worse - Ctrl-Alt-Speech
In this week's round-up of the latest news in online speech, content moderation and internet regulation, Mike and Ben cover: How London became a T&S hub (Everything in Moderation); Everyone Cheering The Social Media Addiction Verdicts Again…

People

Those impacting the future of online safety and moderation

One reason the California verdict (see Policies) was interesting to me is that it didn't just rely on arguments about "addictive design" but was also built on the testimony of a longstanding Meta ad exec. Brian Boland spent more than a decade helping Facebook and Instagram grow and make money, and is the most senior executive to testify against his former employer.

Boland previously testified in front of the Committee on Homeland Security, saying "the focus on and investments in safety remained small and siloed." And, from media reports of the recent trial, Boland comes across as fair and balanced; he explains how growth and engagement goals shaped product decisions without saying that Meta's underlying advertising business model or its algorithms are inherently bad. A Verge profile from last month notes that colleagues regard him as someone with "strong moral character".

But it hasn't been without cost: he has shared how he left Facebook stock on the table and, like many whistleblowers, has been overlooked for jobs in Silicon Valley. No wonder more senior executives haven't followed suit.

Posts of note (London special)

Handpicked posts that caught my eye this week

  • “My hot take - instead of focusing on risk mitigation, prioritise the user experience for the vast range of users (who are well intentioned), and you'll be surprised how much that can improve trust, negative sentiment and commercial gains.” - After two days at the T&S Summit, Depop's Sophie Walsh is going upstream.
  • "That's why Cinder will be at both events. Glen, Thatcher, and Sam are eating crumpets and boiled mutton in London and Henry Hilton and I will be in and out at RSAC." - I know the UK isn't renowned for its food but I wish Brian Fishman and the Cinder team had asked for some restaurant recommendations.
  • “Last night the Internet Watch Foundation (IWF) held a parliamentary reception to mark the launch of our new report: Harm without limits: AI child sexual abuse material through the eyes of our Analysts.” - It's not London unless you've got the London Eye in the background, like IWF's Bobbie Dennis.