4 min read

Online safety fallout, AI safety series raise and speaking to strangers

The week in content moderation - edition #300

Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.

This is edition #300 of EiM. Three hundred. What started as a side project in 2018 has, somewhat improbably, made it this far. Thanks to everyone who has read, shared, supported, and followed my takes on the obscure-but-crucial internet rules, policies and products that shape how we all communicate online.

To mark the milestone, I’m hosting a very informal EiM drinks in September in London. If you’ve ever shared an interesting link, sponsored a newsletter, or forwarded an edition of EiM to a colleague, you’re invited. EiM members will get first dibs. More details soon.

Talking of milestones, Mike and I reflect on what’s next for the podcast in the latest episode of Ctrl-Alt-Speech. Want to know how the sausage is made? We'll be answering listeners' questions next week.

Thanks for reading — BW


Policies

New and emerging internet policy and online speech regulation

The last two weeks have seen the Online Safety Act come under fire from all sides following the deadline for child safety provisions — aka the great UK age check reckoning (EiM #299). Think tanks, opposition MPs, Jim Jordan and even Marc Andreessen weighed in, forcing Ofcom to put out a panicky three-sentence statement (see last paragraph here).

While I’m concerned about the unintended consequences of age verification tech, I was shocked by the tone of the coverage — which must have been all the more surprising to UK citizens who haven’t been following the Act's slow progress through the British political system:

  • An op-ed in Newsweek calls it a “thinly veiled effort to normalize censorship in the U.K. and expand surveillance of British citizens and guests within their borders.” Gulp.
  • In The Guardian, Taylor Lorenz said that the regulation will lead to young people being “fed a steady diet of sanitised, government‑approved narratives until the age of 18.”
  • The New Yorker uses the hack of dating app Tea to warn “what’s at risk when we attach our real-life identities to our online activities” — despite the uploading of user IDs not being in response to any regulation.

All this tells me that age verification is fast becoming a UK wedge issue — not unlike Brexit, immigration or trans rights — in the wider battle over digital rights and state control of the internet. And it's not just Brits feeling the heat. The US Supreme Court is weighing in.

Also in this section...

Products

Features, functionality and technology shaping online speech

Datumo, whose AI evaluation product lets policy and T&S professionals test for bias and hallucinations, has raised $15.5m as part of a push to accelerate R&D efforts and scale beyond its native Korea, where the majority of its clients are based.

If you’re not a Datumo client, this might seem like a non-story. But I see it as another sign of how data labelling companies — like Scale AI and Hive before it — are moving from task-based services to enablers of safe, explainable and (ideally) high-performance AI. Expect the big BPOs — including the likes of Teleperformance, TaskUs, and Majorel (EiM #187) — to follow suit.

Also in this section...

Enjoying today's edition? Support EiM!

💡 Become an individual member and get access to the whole EiM archive, including the full back catalogue of Alice Hunsberger's T&S Insider.

💸 Send a tip whenever you particularly enjoyed an edition or shared a link you read in EiM with a colleague or friend.

📎 Urge your employer to take out organisational access so your whole team can benefit from ongoing access to all parts of EiM!

Platforms

Social networks and the application of content guidelines

Bluesky has drafted new community guidelines and terms of use and is seeking feedback from users by the end of the month. The announcement cites the growing userbase as the reason for a refresh, but frequent mentions of the UK’s Online Safety Act and the Digital Services Act make it clear this is a shift towards regulatory alignment.

One interesting thing to point out: should a user raise what Bluesky calls an informal dispute, someone from the company will call them “because we think most disputes can be resolved informally.” Remarkable in one sense; however, a quick look at the exact wording makes the call a condition of bringing an arbitration demand.

Also in this section...

People

Those impacting the future of online safety and moderation

The internet forges deep connections — romantic, professional, familial — as well as looser ties with strangers you come across in your feed, or in groups for your favourite hobby or local area. But what if you met these acquaintances in real life?

That’s the question New_Public’s Mirium Tinberg asks after reading a cool data story about having 30-minute conversations with strangers.

‘Hello Strangers’ by professor and journalist Alvin Chang uses the CANDOR corpus to show that chats with people you don’t know can be civil. In the majority of the 1,700+ 30-minute conversations in the dataset, participants reported feeling better by the end.

As someone who has signed up to have conversations with strangers and uses tools where chatting with random folks is the norm, I’m with Mirium in asking: what can we learn from this? And how do we take a bit of that into our online selves?