
Is the Oversight Board just "safety washing"?

Meta’s refusal to follow the Board’s recommendations on LGBTQ+ hate speech could be the beginning of the end for this much-debated platform accountability experiment

I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that Trust & Safety professionals need to know about to do their job.

This week in T&S Insider, I review disappointing statements from Meta in response to their Oversight Board's decisions on LGBTQ+ safety, as well as some more bad news for LGBTQ+ youth.

Besides thinking about the above, I've been trying to settle into the rhythm of summer, being as gentle as I can with myself amidst what seems like endless heartbreaking news in the world. I hope you're all taking care of yourselves and finding small moments of joy where you can.

You may have noticed that archived editions of T&S Insider are now accessible exclusively to EiM members via the website — just like Week in Review, Ben's Friday news and analysis. For now, both newsletters will continue to be accessible in full in subscribers' inboxes (ie no paywall).

This shift is part of an effort to support the long-term sustainability of EiM, which continues to be one of the only dedicated sources of news and analysis about online safety, internet regulation and content moderation.

Support from individual and organisational members ensures its continuation (huge thanks to them!), but right now only a small percentage of readers contribute financially. So this is a small step towards making membership more attractive.

Both Ben and I understand that not everyone can pay for access. So if I ever link to an archived edition you think would be useful and you're unable to access it for financial reasons, just reach out — I’ll gladly help.

As always, get in touch if you'd like your questions answered or just want to share your feedback. Here we go! — Alice


in partnership with Resolver Trust & Safety

Urging action on AI CSAM

The emergence of AI-generated Child Sexual Abuse Material (AI CSAM) presents a grave new challenge to online safety.

This rapidly evolving threat, which includes manipulated and entirely synthetic content, is proliferating at an alarming rate, with the Internet Watch Foundation identifying over 20,000 AI CSAM images in just one month.

Offenders are exploiting readily available AI tools, blurring the lines between real and synthetic content and creating a disturbing culture that bypasses traditional barriers to abuse. This crisis demands urgent, coordinated action across technology, law, and platform governance.

Learn more about this critical issue and what needs to be done.

READ OUR FULL BLOG

The Oversight Board was nice while it lasted

Why this matters: When Meta doesn't follow the recommendations of their own Oversight Board, the board ultimately feels like an elaborate and ineffective PR stunt. This doesn't bode well for the protection of the human rights of billions of Meta users around the world.

The concept of an independent accountability mechanism for social media platforms is good, at least in principle.

Founders and CEOs of tech companies aren’t experts on human rights, and there can be serious conflicts of interest between what’s good for stock prices and what’s good for users. Having an external body to review and challenge decisions should be a net positive: it acts as a necessary check on corporate power, especially given the influence these platforms now wield.

That’s why I’m especially disappointed in how Meta’s Oversight Board seems to be failing. Let’s recap: 

  • Back in January, Meta changed their community guidelines to carve out allowances for hate speech against LGBTQ+ people, pointedly using the dehumanising and outdated terms “homosexuality” and “transgenderism.” Here’s the policy:
“We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words such as 'weird,'”

You can read more from me and from LGBTQ+ non-profit GLAAD on why these terms are especially problematic. TL;DR: these are clearly far-right, hateful dog whistles embedded in the Trust & Safety policy of a major platform, which is unprecedented and dangerous.

  • Three months later, in April, the Oversight Board issued a decision in line with what LGBTQ+ advocates were calling for, stating: 
“In respect of the January 7, 2025, updates to the Hateful Conduct Community Standard, Meta should identify how the policy and enforcement updates may adversely impact the rights of LGBTQIA+ people, including minors, especially where these populations are at heightened risk. It should adopt measures to prevent and/or mitigate these risks and monitor their effectiveness.” 
and “Remove the term ‘transgenderism’ from the Hateful Conduct policy and corresponding implementation guidance.”
  • Now it’s June, and Meta has finally released a statement, saying they are “assessing possible updates” to their community guidelines language, but that “Achieving clarity and transparency in our public explanations may sometimes require including language considered offensive to some.”

Get access to the rest of this edition of EiM and 200+ others by becoming a paying member