
Social media ban 'isn't working', $100m AI war chest and Bickert steps down

The week in content moderation - edition #331

Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw, and supported by paid members like you.

Becoming a paid EiM member supports independent coverage of the T&S industry at a time when it's most needed and gets you unfettered access to EiM's archive of 450+ editions.

Lawmakers like to make out that online speech regulation is straightforward. This week, even more so than those before it, suggests otherwise. Whether it’s Australia or the EU, it’s trade-offs and sadness all the way down.

This week’s Ctrl-Alt-Speech — Age Old Questions — gets into the weeds of those stories and a few more covered in today’s Week in Review. Get it wherever you get your podcasts and leave us a review if you haven’t already.

Welcome to new EiM subscribers from Zevo Health, Automattic, Askmiso, Asia For Animals, Electronic Arts, Coimisiún na Meán, the Internet Watch Foundation and other discerning independent media consumers. This is your Week in Review — BW


Policies

New and emerging internet policy and online speech regulation

A new report from Australia's eSafety Commissioner shows that the country's under-16 social media ban is, to put it mildly, not going 100% to plan. Despite 4.7m accounts having been removed across the 10 platforms in scope, the report suggests significant numbers of children are still able to access restricted platforms — something that media reporting has highlighted since the ban came into force on 10th December last year. The regulator said it will now scrutinise how five platforms — Facebook, Instagram, TikTok, Snap and YouTube — have implemented measures to keep children safe, with a view to announcing enforcement action "by the summer". Crikey has more.

Evidence gaps: Using 2014 population data, 4.7m accounts works out at roughly three per person; that's a number that will appeal to parents and child safety advocates. However, because the report relies heavily on a survey of 900 parents, the eSafety Commissioner can't be fully confident about the scale of the circumvention or how representative the problem is. As I said on the podcast, it all feels a little exceptional, while analysis in The Guardian recommends that "other countries... consider waiting for more data on the effectiveness of the ban" before moving ahead.

It's not just Australia that is threatening to haul platforms over the coals: lawmakers in Indonesia have asked Google and Meta officials to explain how their own social media ban — which came into force last weekend — seems to have been evaded by savvy teens. The New York Times has more.

In a perfectly timed policy note on the effectiveness of existing bans, UNICEF argue that "setting a minimum age for accessing social media alone will not eliminate risks of harm" and rightly highlight the range of interventions that must accompany any such ban for it to be successful.

The failure to extend a temporary exemption to the ePrivacy Directive — which allowed platforms in Europe to legally scan for child sexual abuse material (CSAM) — means that, as of today, it is illegal for platforms to do so. Despite tech companies making clear that children could be at risk if an agreement was not reached, European politicians left a 'Hail Mary' vote last Thursday without the necessary alignment. It means platforms now face a tough choice: continue scanning and risk breaching privacy laws, or stop and lose a key mechanism for detecting abuse — a trade-off that has left T&S teams in legal limbo. More in this week's podcast.


Also in this section...

T&S is political. Fund it like it is.
The Trust & Safety Summit reminded us that, if T&S is central to how platforms govern speech, behaviour and risk, it should be treated as a strategic function rather than a cost centre.

Products

Features, functionality and technology shaping online speech

Common Sense Media has invited major AI companies to each contribute $100 million to fund independent research into the impact of AI systems on children, as well as a technical advisory council to shape future regulatory standards. According to Politico, the pitch — which was also made to philanthropic organisations — came while the non-profit was advocating for AI regulation in California, and just months after it joined forces with OpenAI to support the Kids Safe AI Act.

Also in this section...

Enjoying today's edition? Support EiM!

💡 Become an individual member and get access to the whole EiM archive, including the full back catalogue of Alice Hunsberger's T&S Insider.

💸 Send a tip whenever you particularly enjoyed an edition or shared a link you read in EiM with a colleague or friend.

📎 Urge your employer to take out organisational access so your whole team can benefit from ongoing access to all parts of EiM!

Platforms

Social networks and the application of content guidelines

If you need reminding that 'free speech' is a marketing tagline for big platforms and that CEOs will always be their moderators-in-chief, check out this pair of stories from this week:

  • The New York Times’ interview with YouTube CEO Neal Mohan is worth reading for the sheer ineptitude of his answers about content policy. Clearly keen to talk about creators and YouTube’s growth, Mohan repeatedly struggles when pressed on content moderation, including Trump’s return to the platform and the rise of figures like Candace Owens. It's as if he didn't listen to last March's Ctrl-Alt-Speech podcast, Chief Equivocation Officer.
  • The OpenAI case has thrown up a text exchange showing Meta CEO Mark Zuckerberg telling Elon Musk that Meta was ready to take down content that “doxxed” or threatened DOGE employees. That would be less notable had it not come just 24 days after Zuckerberg’s infamous Joe Rogan appearance, where he cast himself as newly resistant to government pressure. Techdirt and Platformer both have good accounts of the brazen hypocrisy of the man known to his employees as "MAGA Mark".

Also in this section...

People

Those impacting the future of online safety and moderation

When I read the reports this week about Monika Bickert stepping down from her role at Meta, I gasped. 

Bickert has been at the company since 2013 and has seen it all. She wrote the 2020 white paper that set out Meta’s regulatory stall (EiM #52) and defended the controversial decision to keep a doctored video of Nancy Pelosi on the platform. In 2021, following two rare media appearances, I described her as being “at the heart of much of Facebook's decision making” (EiM #92) and I’ve seen nothing since to convince me that has changed.

I described Bickert’s departure to a T&S friend as the end of Alex Ferguson’s era at Man Utd or Bill Belichick’s parting from the New England Patriots (other sports teams and coaches are available). Unlike them, she isn’t retiring: according to the Harvard Crimson, she’ll become the Steven and Maureen Klinsky Visiting Professor of Practice for Leadership and Progress at Harvard Law School.