6 min read

Child safety: a 'lose-lose' situation?

Platforms that make good-faith efforts on child safety are still criticised from all sides, yet suitable technology and public best practices are hard to come by.

I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that trust and safety professionals need to know about to do their job.

I'd like to take a moment to say thank you to everyone who has provided feedback on this newsletter since its launch. I heard from three people last week who told me that they've used this advice internally at their companies to help advocate for themselves and their teams, or to approach a problem differently. If you find my writing valuable, consider becoming an EiM member.

This week, I'm thinking about:

  • Why platforms can never "get it right" when it comes to child safety
  • Tech policy fellowship opportunities

Get in touch if you'd like your questions answered or just want to share your feedback. Here we go! — Alice


Why platforms can never "get it right" when it comes to child safety

Why this matters: Platforms that make good-faith efforts on child safety are still criticised from all sides, yet suitable technology and public best practices are hard to come by.

An important new report has shed light on the improvements needed to make CyberTipline and National Center for Missing & Exploited Children (NCMEC) reporting more effective and result in the prosecution of child abusers. With child sexual exploitation on the rise, its release couldn't be more timely.

The authors at the Stanford Internet Observatory interviewed law enforcement officers, Trust & Safety professionals working at platforms, and NCMEC employees, and came up with a host of recommendations to improve the system. I found it personally eye-opening to hear directly from this wide range of stakeholders and had the following reflections:

Platforms are incentivised to be vague

One of the more frustrating things from a platform perspective is that NCMEC is not allowed to tell platforms what to look for or how to report. With this lack of information and heavy penalties for not reporting, platforms are incentivised to report everything but to be vague about the specifics in case they get it wrong. This over-reporting and under-labelling makes it difficult for NCMEC and law enforcement to triage and focus on the most egregious cases. However, platforms aren't given the information they need to succeed, as the report notes:

Upon learning that NCMEC gives platforms guidelines for filling out reports in customized CyberTipline onboarding trainings, we asked NCMEC why they do not write those guidelines down. They told us that if they had a written document, defense attorneys would characterize this in criminal cases as “NCMEC is advising companies what to report.” NCMEC found it preferable for best practices to come from the Tech Coalition, which is composed solely of private companies, rather than NCMEC.

Public resources are hard to come by

Groups like the WeProtect Global Alliance and the Tech Coalition allow people at platforms to share best practices, such as the Industry Classification System. This is a great start, but the Tech Coalition requires a membership application and a minimum fee of $10,000, which is a big outlay for small companies. Additionally, in order to join the Tech Coalition, a platform must prove its commitment to child safety, yet membership in the Tech Coalition is one of the only ways to learn how to do that. There's no free, comprehensive, public handbook for startups or established platforms that want to do better.

Small companies find it hard to scale

The report also talks about the limitations of NCMEC's tech stack, but one area I felt was glossed over was NCMEC's API system. While the open API allows companies to report faster and more easily, building custom reporting flows into Trust & Safety tooling is a big ask for smaller companies. If they are unable or unwilling to build this software themselves, the only publicly available and advertised option that I'm aware of is Safer, which is limited in terms of at-scale efficiency for moderation teams.
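To make the engineering ask concrete, here's a rough sketch of the kind of glue code a platform would need just to push a single report from its moderation tooling into a reporting API. Everything here is a placeholder for illustration (the endpoint, authentication scheme, and field names are assumptions, not NCMEC's actual schema), but even this small piece implies ongoing work on queuing, retries, escalation review, and audit logging that small teams struggle to resource.

```python
# Illustrative sketch only: the endpoint URL, auth scheme, and field names
# below are placeholders, not NCMEC's actual API contract, which requires
# registration and follows its own documented schema.
import requests

REPORT_URL = "https://example.invalid/cybertipline/reports"  # placeholder endpoint


def submit_report(api_key: str, incident: dict) -> str:
    """Submit one incident payload and return the ID assigned by the server."""
    response = requests.post(
        REPORT_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "incidentType": incident["type"],         # e.g. apparent CSAM
            "incidentDateTime": incident["detected_at"],
            "reportingCompany": incident["company"],
            "fileHashes": incident["hashes"],         # hashes of the flagged media
            "userIdentifiers": incident["user_ids"],  # account(s) involved
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["reportId"]
```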

Another hurdle is finding and funding the moderation itself. Human review is critical to allow NCMEC and law enforcement to view the content without a warrant, but it is costly, both in terms of the monetary commitment from a company and the toll on moderator wellness. Again, it's difficult for smaller companies to manage these wellness needs in-house, and a challenge to find an outsourcing company that is both willing to do child safety work and manages its teams ethically and responsibly.

A cycle of criticism and defensiveness

Finally, there's the criticism. Even if a company has invested time, effort, and money into tooling and human moderation teams, and has done the work to ensure that the people involved are well taken care of, it still gets censured:

One former employee of a platform that submits many CyberTipline reports expressed frustration with how platforms are perceived and treated. They said there is a misperception that companies are trying to do the bare minimum with reporting requirements. They said they have heard this sometimes from NCMEC and frequently from Congress. These comments, they said, cause companies to get defensive: “Everyone should be rowing in the same direction, everybody wants the same result […] companies aren’t trying to monetize this stuff. […] It’s not like there’s a lobbying group out there that’s in favor of CSAM, everyone’s on the same side.” They perceived that the adversarial nature of interactions was a result of the fact that people need an enemy. It is not helpful if someone is in a meeting worrying that anything they say will be used against them.

I'd also add that, in some ways, platforms are disincentivised to proactively search for and report CSAM. NCMEC publishes transparency reports which show exact reporting numbers for each platform. A high number of reports makes a platform look like it has a serious child safety issue, when it could be that the platform is actually putting in more of an effort to proactively find and remove content. This results in a no-win situation, and is compounded when it comes to the balance between safety, privacy, and self-expression (a topic I come back to frequently, it seems!).

[An] interviewee observed that “it feels like a lose-lose” for platforms: if they don’t do enough information-sharing, they get critiqued; if they “do the gold standard,” they’re called too privacy-invasive. The interviewee noted the need to “balance human rights [and] not overindex on this [CSAM] topic without other voices in the room” who have historically been adversely impacted by platforms’ policies and moderation choices. Recently, they said, there has been more engagement with sex-worker and LGBTQI+ groups, civil rights groups, and privacy groups to “invite them into the conversation.”

One approach won't fit all platforms, but having a publicly accessible set of best practices to point to will allow platforms to follow what everyone else is doing while improving the effectiveness of their efforts.

In short: if you're at a platform and are finding all of this difficult, it's not just you. My hope is that this report results in some action to make improvements, but in the meantime, please know that there are others out there who are navigating the same difficult system. If you're without a peer network to talk to and want some introductions to folks who are passionate about child safety, please let me know and I'd be happy to connect you.

You ask, I answer

Send me your questions — or things you need help to think through — and I'll answer them in an upcoming edition of T&S Insider, only with Everything in Moderation*

Get in touch

Job hunt

For those of you between jobs, or looking for a change of pace, there are an increasing number of fellowships available for tech practitioners as well as academics. These fellowships give you support and structure to pursue research and analysis around Trust & Safety issues affecting our industry today. Some, but not all, are paid.

One upcoming (unfortunately unpaid) option is Berkeley's Tech Policy Fellowship, with an application deadline of May 31. Some additional fellowships that I'm aware of:

If you know of any other fellowships, let me know.


Also worth reading

How negative experiences on social media changed in the last 12 months (Designing Tomorrow)
Why? "A year ago, 1 in 3 US adults reported a negative experience on at least one of the social platforms they used; today, that number is just over 1 in 4 US adults" - improvements at NextDoor seem to account for much of this.

AI’s Most Pressing Ethics Problem (Columbia Journalism Review)
Why? Allowing AI to use synthetic data creates a dangerous feedback loop: "the use of synthetic data presents one of the greatest ethical issues with the future of AI."

Imagining the Past: Justifications of Ideology in Incel Communities (Global Network on Extremism and Technology)
Why? Fascinating insights into how Incels (involuntary celibates) justify misogyny.

Discord CEO Jason Citron makes the case for a smaller, more private internet (The Verge)
Why? It's always interesting to see how CEOs talk about moderation. Mike and Ben talk more about Jason's thoughts on moderation in this week's Ctrl-Alt-Speech podcast.