Is the Oversight Board just "safety washing"?
I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that Trust & Safety professionals need to know about to do their job.
This week in T&S Insider, I review disappointing statements from Meta in response to their Oversight Board's decisions on LGBTQ+ safety, as well as some more bad news for LGBTQ+ youth.
Besides thinking about the above, I've been trying to settle into the rhythm of summer, being as gentle as I can with myself amidst what seems like endless heartbreaking news in the world. I hope you're all taking care of yourselves and finding small moments of joy where you can.
You may have noticed that archived editions of T&S Insider are now accessible exclusively to EiM members via the website — just like Week in Review, Ben's Friday news and analysis newsletter. For now, both newsletters will continue to be accessible in full in subscribers' inboxes (ie no paywall).
This shift is part of an effort to support the long-term sustainability of EiM, which continues to be one of the only dedicated sources of news and analysis about online safety, internet regulation and content moderation.
Support from individual and organisational members ensures its continuation — huge thanks to them! — but right now, only a small percentage of readers contribute financially. So this is a small step towards making membership more attractive.
Both Ben and I understand that not everyone can pay for access. So if I ever link to an archived edition you think would be useful and you're unable to access it for financial reasons, just reach out — I’ll gladly help.
As always, get in touch if you'd like your questions answered or just want to share your feedback. Here we go! — Alice
The emergence of AI-generated Child Sexual Abuse Material (AI CSAM) presents a grave new challenge to online safety.
This rapidly evolving threat, which includes manipulated and entirely synthetic content, is proliferating at an alarming rate, with the Internet Watch Foundation identifying over 20,000 AI CSAM images in just one month.
Offenders are exploiting readily available AI tools, blurring the lines between real and synthetic content and creating a disturbing culture that bypasses traditional barriers to abuse. This crisis demands urgent, coordinated action across technology, law, and platform governance.
Learn more about this critical issue and what needs to be done.
The Oversight Board was nice while it lasted
The concept of an independent accountability mechanism for social media platforms is good, at least in principle.
Founders and CEOs of tech companies aren’t experts on human rights, and there can be serious conflicts of interest between what’s good for stock prices and what’s good for users. Having an external body to review and challenge decisions should be a net positive: it acts as a necessary check on corporate power, especially given the influence these platforms now wield.
That’s why I’m especially disappointed in how Meta’s Oversight Board seems to be failing. Let’s recap:
- Back in January, Meta changed their community guidelines to carve out allowances for hate speech against LGBTQ+ people, pointedly using the dehumanising and outdated terms “homosexuality” and “transgenderism.” Here’s the policy:
“We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words such as 'weird,'”
You can read more from me and more from LGBTQ non-profit GLAAD on why these terms are especially problematic. TL;DR: these are clearly far-right, hateful dog whistles embedded in Trust & Safety policy from a major platform, which is unprecedented and dangerous.
- Three months later, in April, the Oversight Board issued a decision in line with what LGBTQ+ advocates were calling for, stating:
“In respect of the January 7, 2025, updates to the Hateful Conduct Community Standard, Meta should identify how the policy and enforcement updates may adversely impact the rights of LGBTQIA+ people, including minors, especially where these populations are at heightened risk. It should adopt measures to prevent and/or mitigate these risks and monitor their effectiveness.”
and “Remove the term “transgenderism” from the Hateful Conduct policy and corresponding implementation guidance.”
- Now it’s June, and Meta has finally released a statement, saying they are “assessing possible updates” to their community guidelines language, but that “Achieving clarity and transparency in our public explanations may sometimes require including language considered offensive to some.”
In short, they have declined to follow the Oversight Board’s guidance. The board was created so that Meta execs didn’t have to be responsible for tricky and controversial content moderation decisions, but it’s now clear that Meta will only follow guidance when it suits them and that, despite the board's best attempts at making considered decisions that uphold human rights, it ultimately has no power. GLAAD responds:
"Meta’s decision to retain this dehumanizing characterization ("transgenderism") is intentionally aggressive and hostile toward the many LGBTQ people — especially transgender and gender nonconforming people — who use its platforms every day. Instead of taking accountability, Meta’s PR team is trying to downplay this by inaccurately calling it merely ‘offensive to some.’ This will have real-world consequences, escalating violence against an already vulnerable minority. As the Oversight Board rightly acknowledged, hate speech policies should align with human rights standards that respect the dignity of all.”
I was hoping that the Oversight Board would be a model for positive governance at social media companies, but unfortunately, the formation of the Board now feels like “safety washing”: putting practices in place that appear to be commitments to user safety but that ultimately serve PR purposes. It’s a huge disappointment, yet not surprising given Meta's general track record.
Happy Pride?
Ironically, Meta's statement was released in June, which is Pride Month. But that's not the only bad news we've received this month.
The Trump Administration is ending specialized support for LGBTQ+ youth on the 988 helpline because of "radical gender ideology". Meanwhile, in Thorn's latest report on sextortion, they found that 1 in 5 teens have experienced it, and of those,
"1 in 7 victims were driven to harm themselves as a result of their experience [with sextortion]. For LGBTQ+ youth, who are less likely to have an offline support system which can increase their isolation, that number nearly triples to 28% compared to their non-LGBTQ+ peers. Behind each of these numbers is a young person feeling trapped, afraid, and possibly hopeless."
If you'd like to do the right thing for LGBTQ+ youth, consider donating to The Trevor Project or Trans Lifeline, or volunteering with local groups.
For actions you can take as a T&S professional, read my guide on protecting LGBTQ+ users on social platforms as a starting point.
Please also consider joining the roundtable I'm leading at TrustCon with GLAAD's Jenni Olson, Addressing Anti-Trans Hate Speech Online. It takes place July 21, 2025, 3:00 PM-4:30 PM PT in Pacific Concourse J.
You ask, I answer
Send me your questions — or things you need help to think through — and I'll answer them in an upcoming edition of T&S Insider, only with Everything in Moderation*
Get in touch
Also worth reading
T&S Teaching Consortium survey
Why? If you teach Trust & Safety, consider filling out this form to help the T&S Teaching Consortium improve.
Massive Creator Platform Fansly bans furries (404 Media)
Why? Fansly was pressured by payment processors to change its content policies, which is a shame: "The changes blame payment processors for classifying “some anthropomorphic content as simulated bestiality.” Most people in the furry fandom condemn bestiality and anything resembling it, but payment processors—which have increasingly dictated strict rules for adult sexual content for years—seemingly don’t know the difference and are making it creators’ problem."
The Conservatives On The Supreme Court Are So Scared Of Nudity, They’ll Throw Out The First Amendment (Techdirt)
Why? "The conservative justices may think they’re just protecting children from pornography, but they’ve actually written a permission slip for the regulatory state to try to control online expression. The internet that emerges from this decision will look much more like the one authoritarian governments prefer: where every click requires identification, where any viewpoint can be age-gated, and where anonymity becomes a luxury only the powerful can afford."
Bonus: Four articles on GenAI that I think are helpful.