The age-check internet, TikTok's labour troubles and open-source mod tools
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.
I might not have been able to go, but that didn't stop many EiM readers revelling in the atmosphere and expertise at TrustCon this week. I heard so many great things about it, not least the second live recording of Ctrl-Alt-Speech, with Mike, Alice and guest panellist Ashkhen Kazaryan. Listen to it in your preferred podcast feed or via ctrlaltspeech.com.
Unfortunately, it's not all upbeat, though. Two sobering reflections on the state of the Trust & Safety industry — one in Tech Policy Press and the other by Platformer's Casey Newton — show what a crossroads the industry is at and the many directions it could go over the next few years.
I'm on holiday for two weeks so your next Week in Review and T&S Insider newsletters — including Alice's TrustCon highlights and lowlights — will arrive sometime in mid-August.
Thanks for reading and see you soon — BW
What do toxic Call of Duty players and gig economy scammers have in common?
More than you'd think. This blog post tells the surprising story of how ToxMod, Modulate’s voice moderation tool for games, uncovered key behavioural signals that fraudsters use — leading to the creation of VoiceVault.
Learn how real-time voice analysis is helping protect fintech, contact centres, and gig platforms from costly scams.
Policies
New and emerging internet policy and online speech regulation
I predicted last week (EiM #298) that we’d see some larger platforms announce their response to the UK child safety deadline (today, Friday) and that was proven right:
- Discord rolled out a ‘teen-appropriate experience by default’ including a ‘privacy-forward age verification experience’ using k-ID.
- Grindr will use biometric tech company FaceTec for what it calls a ‘fast, one-time check’.
- X/Twitter vaguely outlined its verification options in a page published late last night (Thursday) but said measures ‘should be made available in the following weeks’. Sounds like a missed deadline to me?
Many others will have been more coy, pushing code live and subtly updating terms of service hoping users won't realise or care. I'll keep an eye out for those over the coming weeks.
Pressing matter: From my reading, we're seeing an uptick in media coverage of the age verification story. CNET's headline summed up the general feeling in the technology press when it said “Welcome to the Era of Online Age Verification” while Wired explained the global trend and highlighted the risk of handing more power to the platforms and third parties who enforce age tech, often imperfectly. The BBC, meanwhile, continues to do its public service duty.
Just across the Irish Sea, the second part of Ireland’s Online Safety Code came into force on Monday, targeting video-sharing platforms headquartered in the country. The new provisions cover cyberbullying, harmful or illegal ads and adult-only content, and layer on top of the general rules that came into force in November last year. The Online Safety Code works in tandem with the EU’s Digital Services Act so doesn’t cover as wide a remit as the UK’s Online Safety Act (sad reminder for those who blanked the whole of 2016: unlike the UK, Ireland is still part of the EU).
Also in this section...
- Rapid Response: Building Victim-Centered Reporting Processes for Non-consensual Intimate Imagery (Center for Democracy and Technology)
- "A new chapter in repressive Internet regulation in Russia" – experts explain how Russia's new law will affect VPN users (Techradar)

Products
Features, functionality and technology shaping online speech
T&S tooling non-profit Roost (EiM #281) has released new open-source tools designed to help platform teams, particularly at smaller outfits, build better moderation workflows. Coop is an open-sourced version of T&S startup Cove’s content review tooling, while Osprey was developed by Discord for OSINT and incident response; the platform has passed it over to the non-profit for wider distribution within the ecosystem.
Spread the word: With the tools now in the public domain, the question is how quickly Roost can get them into the hands of sites and forums. Most will not have heard of the non-profit, which launched earlier this year, and may not even recognise they have challenges that can be solved by Roost’s growing tech stack. But, with new leadership, there is momentum behind its open-source safety vision.
Disclosure: Some funders of the Roost initiative also funded pilot episodes of Ctrl-Alt-Speech, the podcast I co-host with Mike Masnick. That funding has now elapsed.
💡 Become an individual member and get access to the whole EiM archive, including the full back catalogue of Alice Hunsberger's T&S Insider.
💸 Send a tip whenever you particularly enjoyed an edition or shared a link you read in EiM with a colleague or friend.
📎 Urge your employer to take out organisational access so your whole team can benefit from ongoing access to all parts of EiM!
Platforms
Social networks and the application of content guidelines
150 TikTok staff in Germany launched a one-day strike this week in protest after being told they may lose their jobs as the company transitions to AI moderation (EiM #282). Euronews reports that the Trust & Safety department in Berlin will be phased out as it is consolidated into fewer locations — if you have more information and want to talk, get in touch via Signal (benwhitelaw.04).
The shift to expertise: The protests come as AI companies shift towards highly paid industry specialists who can optimise Large Language Models. That means the low-cost, Global Majority worker model that I’ve written about many times on EiM could be under threat. I discussed this recently with Kenyan lawyer Mercy Mutemi on Ctrl-Alt-Speech but it’s a topic I’d love to go deeper on.
Also in this section...
- Hard labour conditions of online moderators directly affect how well the internet is policed – new study (The Conversation)
- What If Social Media Filtered Abuse Like It Filters Spam? (Ms. Magazine)

People
Those impacting the future of online safety and moderation
I’ve written about the growing presence of online safety in literary fiction (EiM xx) and we can now add the straightforwardly titled Moderation by Elaine Castillo to the list.
In it, the central character works as a content moderator, fielding the usual queues of hard-to-stomach content. Much like other fictional representations, including the theatre production I saw back in April (EiM xxx), there is a love story and a promotion.
The Spectator’s review says the novel could be more polished, but it’s still a candidate for your holiday read.
Posts of note
Handpicked posts that caught my eye this week
- "At Meta, “someone else” is always “working on it.” This creates what I call “buffers of ignorance” – structures that preserve the speed of business by ensuring decision-makers never have to confront the uncomfortable questions their choices raise." — Kelly Stonelake on Meta's approach to trust, and how it might change.
- "Chuffed to see my new draft paper on platform ownership billed as "hidden gem of the week" over at The Syllabus." — I already had Paddy Leerssen's paper bookmarked; really can't wait to read it now.
- "New from me: Stanford Institute for Human-Centered Artificial Intelligence (HAI) just published my policy brief about student-on-student deepfake nude incidents in schools, and what state policymakers can do about them" — more must-read academic literature from Riana Pfefferkorn.