We’ve never been better at child safety
I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that trust and safety professionals need to know about to do their job.
Two major child safety reports dropped last week, both from the Tech Coalition. They document what unglamorous progress looks like in an increasingly unforgiving information environment. Today I’m digging into the most interesting and important parts of these reports.
Get in touch if you'd like your questions answered or just want to share your feedback. Don't be shy; I hope to dedicate an upcoming edition of the newsletter to questions I get from EiM subscribers and via LinkedIn. Here we go! — Alice
For the month of May, T&S Insider will be guest-edited by Georgia Iacovou, author of Horrific/Terrific
The industry keeps getting better at child safety, while the information environment keeps getting worse
The T&S cases that have stayed with me longest in my career are all child safety cases, and I suspect that's true of everyone who has worked in this area seriously. Child safety is absolutely the most challenging part of T&S. There are many people working incredibly hard on child safety programs across the tech industry, and anyone who does this work day after day is clearly doing it out of genuine commitment to the mission.
Unfortunately, most public discourse about child safety online doesn't reflect that reality. The deeply important but unglamorous work of frontline child safety is often sidelined in favor of proposals that make for good press releases but deliver incomplete outcomes, or of blanket criticism of big tech for failing children. The very real tradeoffs and complexities are glossed over, and any acknowledgement of the harm that originates in society beyond online spaces is often left out completely.
Two reports released last week offer an unusually clear window into both the work and the complexity: the Tech Coalition's annual transparency report and a separate report for Lantern, its cross-platform signal-sharing program. The Tech Coalition is a membership organisation whose members have actively chosen to invest in child safety, so these reports represent a leading edge of the industry, not a universal standard. These reports outline a year of serious commitment to this work, which matters more than it ever has, given what’s happening to children online.
It’s also important to disclose that I was formerly a member of the Tech Coalition when I was Head of T&S at Grindr. That said, I don’t have any insider knowledge now, and am writing this week’s edition purely from their public reports.
Here’s what jumped out at me.
Abuse doesn't stay on one platform, and neither does the money
One of the core structural problems in child safety is that offenders don't operate on a single service. They groom on one platform, distribute on another, and collect money somewhere else, and each company can only see its own slice of that activity. Without a mechanism for sharing intelligence across platforms, enforcement is necessarily incomplete. Bad actors exploit this by deliberately moving across services to stay ahead of detection, and platforms are left chasing shadows.
Lantern launched in 2023 and is the most serious attempt the industry has made to address this. It allows participating companies to securely share indicators of abuse so that when one platform identifies a bad actor, others can investigate whether the same person is operating on their services. By the end of 2025, 31 companies were enrolled, and engaged companies nearly doubled from 12 in 2024 to 23 in 2025.
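To make the mechanics a little more concrete, here is a minimal sketch of what this kind of cross-platform signal sharing could look like in code. It is purely illustrative: the field names, hashing scheme, and matching logic are my own assumptions for the sake of the example, not Lantern's actual schema or API.

```python
# Illustrative sketch only: hypothetical fields and matching logic,
# not Lantern's actual schema, API, or hashing scheme.
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class SharedSignal:
    """A single indicator contributed by one participating platform."""
    signal_type: str       # e.g. "email_hash", "url", "payment_handle_hash"
    value: str             # hashed or otherwise pseudonymised indicator
    harm_category: str     # e.g. "grooming", "sextortion", "csam_distribution"
    contributed_by: str    # the reporting platform
    contributed_at: datetime


def pseudonymise(raw_identifier: str) -> str:
    """Hash raw identifiers so platforms never exchange them in the clear."""
    return hashlib.sha256(raw_identifier.strip().lower().encode()).hexdigest()


def matches_on_my_platform(signals: list[SharedSignal],
                           my_account_emails: list[str]) -> list[SharedSignal]:
    """Return shared signals that match accounts on the receiving platform,
    flagging them for human investigation rather than automatic enforcement."""
    my_hashes = {pseudonymise(email) for email in my_account_emails}
    return [s for s in signals
            if s.signal_type == "email_hash" and s.value in my_hashes]


# Example: Platform A contributes a signal; Platform B checks its own users.
signal = SharedSignal(
    signal_type="email_hash",
    value=pseudonymise("offender@example.com"),
    harm_category="sextortion",
    contributed_by="platform_a",
    contributed_at=datetime.now(timezone.utc),
)
hits = matches_on_my_platform([signal], ["offender@example.com", "ok@example.com"])
print(f"{len(hits)} account(s) flagged for investigation")
```

The important part is the shape of the exchange: pseudonymised indicators plus a harm category, feeding a human investigation queue on the receiving platform rather than triggering automatic enforcement.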
Here’s one great example of Lantern’s work: Twitch shared signals from its platform with Meta, which enabled Meta to identify a UK-based individual allegedly exploiting multiple minors while working as a children's football coach. The individual was later arrested, an outcome that would have been significantly harder to reach without cross-platform coordination.
In fact, across all of 2025, signals like these contributed to enforcement actions against more than 31,000 accounts, nearly 28,000 URLs, and almost 18,000 pieces of content. Those numbers reflect only what 26 companies voluntarily reported back, so the real impact is likely larger.
Lantern has also made excellent strides in detecting sextortion cases. Historically these cases have been a massive blind spot, because finding them means following both the money and the content, which is almost impossible if content platforms and financial institutions don’t speak to each other. In 2025 Lantern ran a pilot with financial institutions, demonstrating that signals from online platforms can help identify suspicious transactions. Western Union even noted that it got more sextortion leads through Lantern than it ever had through law enforcement. The program is now permanent, adding a sorely needed capability to T&S.
But, as usual, the tech industry has to backfill policy when regulators fail to keep up: the above work is only possible if companies are allowed to scan for CSAM, and in the EU they are not. In March, the European Parliament effectively made proactive scanning for CSAM illegal. Child protection advocates have warned this will have serious consequences. The tech industry’s approach to child safety is often sloppy and careless, but this time Google, Meta, Snap, and Microsoft announced they would continue scanning voluntarily despite the EU's decision. As NCMEC's John Shehan said plainly: "When detection goes dark, the abuse doesn't stop."
A new taxonomy for tracking chains of online harm
There is an online cycle of abuse that is getting harder and harder to break: research from Thorn has shown that minors are increasingly both the victims and perpetrators of online child abuse. Among young victims of sexual extortion, 38% reported being extorted by another minor. For older teens, the rates of minor and adult extortionists were roughly equal. These are complex chains of harm where previously abused minors then become abusers themselves.
Take the online gang 764. In February the FBI's Boston field office published an open letter warning parents about a sharp increase in activity from 764 and similar violent online networks. These groups target children across gaming platforms, social media, and messaging apps, manipulating and coercing them into producing CSAM, engaging in self-harm, and committing acts of animal cruelty, often livestreamed. The perpetrators are frequently young people themselves, usually males under 25, who have been groomed into participation.
In response to this pattern of online abuse, Lantern developed a new taxonomy of signals that includes a category specifically for sadistic online exploitation, which is exactly what groups like 764 engage in every day. This is a key step in tracking and understanding how these harms proliferate across platforms and affect multiple victims and perpetrators. One case study makes that very clear: Lantern’s signals led to a 13-year-old in Brazil, a known distributor of CSAM who may have abused their infant sibling, being detained during a livestream.
Technical interventions are integral to child safety, but these problems are social, and go beyond what T&S teams can solve alone. We are failing young people in ways that no detection tool can fix; policy responses that treat this as purely a technology problem will keep falling short.
AI is supercharging the harms, creating an arms race where no one wins
Generative AI is making certain child safety harms cheaper, faster, and more accessible. The Lantern report shows a huge spike in signals related to ‘nudify’ or undressing apps: AI-related signals rose from 327 in 2024 to 5,641 in 2025, and nearly all of them involve tools that synthetically ‘undress’ people in images. Some of these tools are even distributed through major app stores.
Once again, the technology hasn’t created the social conditions that enable this behaviour, but it has made it trivially easy at a scale that would have been impossible just a few years ago. Unsurprisingly, many nudify app users are kids, and they are using the apps on each other. The apps that enable this content are small and hidden from regulatory scrutiny, with major app stores still failing to consistently enforce their own policies against them. A Tech Transparency Project investigation published just last month found that Apple's and Google's search and advertising systems actively point users toward nudify apps, in some cases surfacing them as sponsored results, with 31 of the identified apps rated suitable for minors.
It’s worth noting that policy is catching up. Minnesota's nudification ban passed last week with a 65-0 Senate vote, which is a meaningful step, even if state-by-state action is a slow way to address a global problem. The Tech Coalition’s membership is also adapting well: a working group of leading AI companies has developed standardised templates for reporting AI-generated CSAM to NCMEC's CyberTipline, and a separate template covers material identified during red-teaming of generative image models.
But as the Tech Coalition's annual report acknowledges, AI is increasing the scale, speed, and complexity of abuse while simultaneously strengthening the capabilities used to detect and disrupt it. The defensive tools are improving, but so are the offensive ones.
Let’s take a moment to acknowledge the meaningful progress this industry has made
In my own work at Musubi, I'm seeing more companies interested in proactive and behavioural detection than ever before, at both the account and content level. The Tech Coalition data shows that behavioural detection is expanding across the industry, with platforms analysing account activity and interaction patterns to identify risks before explicit content surfaces. Safety-by-design is now reported by the large majority of surveyed member companies (49 of 57), a number I found surprising and encouraging; there is real structural progress happening within companies. Repeat offender detection is growing, and deterrence measures, like in-product warnings to users who may be engaging in harmful behavior and messages to potential victims, are more common than a year ago.
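For readers who haven't worked with it directly, here is a deliberately toy sketch of what behavioural, pre-content detection can look like. The features, weights, and thresholds are invented for illustration and are not drawn from the Tech Coalition report or any real system.

```python
# Toy illustration of behavioural risk scoring; feature names, weights, and
# thresholds are invented for this sketch, not taken from any real platform.
from dataclasses import dataclass


@dataclass
class AccountActivity:
    account_age_days: int
    minors_contacted_last_7d: int
    conversations_moved_off_platform: int
    prior_warnings: int


def risk_score(a: AccountActivity) -> float:
    """Combine simple behavioural features into a 0-1 risk score."""
    score = 0.0
    if a.account_age_days < 14:
        score += 0.2                       # brand-new accounts carry extra risk
    score += min(a.minors_contacted_last_7d * 0.1, 0.4)
    score += min(a.conversations_moved_off_platform * 0.15, 0.3)
    score += min(a.prior_warnings * 0.1, 0.2)
    return min(score, 1.0)


def route(a: AccountActivity) -> str:
    """Behavioural signals trigger review or deterrence before content appears."""
    s = risk_score(a)
    if s >= 0.7:
        return "queue_for_human_review"
    if s >= 0.4:
        return "show_in_product_warning"   # deterrence messaging to the user
    return "no_action"


print(route(AccountActivity(account_age_days=3, minors_contacted_last_7d=6,
                            conversations_moved_off_platform=2, prior_warnings=1)))
```

None of the numbers here mean anything on their own; the point is the design choice, which is that behavioural signals can route an account to human review or to a deterrence message before any abusive content is ever posted.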
Unfortunately, all of this good work tends to stay concentrated among the biggest players. Smaller platforms are more vulnerable to exploitation because their safety infrastructure is thinner, and they struggle to get support from places like the Tech Coalition because of membership fees and application requirements. This is why the growth of Pathways, the Tech Coalition’s initiative to give companies free access to tools, guidance, and resources, is so exciting. In 2025, 80 companies engaged with Pathways initiatives.
The Tech Coalition is also piloting Elevate, which offers hands-on consulting support to help companies strengthen their child safety systems before applying for membership. For anyone who has tried to operationalise child safety at a smaller platform without a peer network or a budget for outside help, these programs represent a real on-ramp that hasn’t existed before.
I'm more convinced than ever that the committed part of this industry knows what it's doing. The people working on child safety at these companies are doing some of the hardest, most consequential work in tech, and they are doing it well. That doesn't mean the industry is blameless. There are absolutely companies doing too little and failing to enforce their own policies, and that should be said out loud. But after sixteen years in this work, what I find most frustrating is a public conversation that can't seem to hold two ideas at once: that the same industry contributing to child harms is also doing serious work to prevent them, and that getting that distinction right is what makes meaningful change possible. A public conversation that can only see the industry's failures, and not the serious work happening alongside them, makes the work harder for the people doing it, and doesn't make children any safer.
You ask, I answer
Send me your questions — or things you need help to think through — and I'll answer them in an upcoming edition of T&S Insider, only with Everything in Moderation*
Get in touch