Safety tech startups snapped up, DSA gets closer and the problem with recommender systems

Hello and welcome to Everything in Moderation, your weekly missive on content moderation and online safety from around the world. It's written by me, Ben Whitelaw.
A big end-of-the-week welcome to new subscribers from Microsoft, Platformer, ECNL, Shopify, Match, Ofcom, Yale, Linklater and elsewhere.
If today's EiM was forwarded to you, subscribe for free here, and if you're not enjoying EiM, feel free to unsubscribe here (I won't be offended). More importantly, if you read EiM every week, why not become an EiM member for a few dollars/pounds a week (they're pretty much the same now)?
Here's what you need to know this week — BW
Policies
New and emerging internet policy and online speech regulation
Less than a year after European member states agreed in principle on its "ground-breaking horizontal regulations" (EiM #138) and just six months after reaching a provisional agreement on its scope (EiM #157), the Digital Services Act has been approved by the European Council and moves a step closer to becoming a reality. It will now be signed and published in the Official Journal of the European Union (which makes it binding), after which a 15-month countdown begins before it applies to digital services operating in the Union. So we're looking at around February 2024, as it stands. You've been warned.
In Singapore this week, following a period of consultation, ministers introduced a bill in Parliament to regulate "online communication services" including social media platforms. Like other proposed legislation before it, the Online Safety (Miscellaneous Amendments) Bill would force services to comply with a code of practice and give the government the right to issue directives so that "egregious content... would not be accessible by Singapore users". It will be debated in Parliament in November.
Products
Features, functionality and startups shaping online speech
Weeks go by without an online safety acquisition announcement and then two come along in a week:
- Spotify has acquired Kinzen, the audio moderation technology company with a focus on misinformation. The Dublin-based company has worked with the streaming giant since 2020 and founders Mark Little and Aine Kerr were announced as members of its inaugural safety council back in June (EiM #163). [Full disclosure: I worked with Kinzen as a contractor until June 2022, including producing a report about what it was like working in trust and safety].
- Reddit has bought Swedish startup Oterlu and will incorporate its four-person team into its Safety unit to continue building tools that detect harmful content.
What do we take from this? Well, since both startups build machine learning classifiers and tools for detecting harmful content, the obvious conclusion is that platforms underinvested in safety tech in the past and are now rapidly trying to catch up. Expect more acquisitions of this nature as they do.
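If "machine learning classifier" is jargon to you: it just means a model that scores a piece of content against a label like harmful/benign. Below is a deliberately toy sketch of that basic idea in Python using scikit-learn. To be clear, this bears no resemblance to Kinzen's or Oterlu's actual systems, which are proprietary and vastly more sophisticated; it's only here to show the shape of the thing platforms are buying.

```python
# Toy illustration of a harmful-content text classifier.
# NOT how Kinzen or Oterlu work: production systems use richer
# features, multilingual models and human review. Toy data only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labelled training set: 1 = harmful, 0 = benign.
texts = [
    "vaccines contain microchips to track you",  # misinformation
    "drink bleach to cure the virus",            # dangerous advice
    "lovely weather for a walk today",           # benign
    "great recipe, thanks for sharing",          # benign
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score new content. In practice, mid-confidence items would be
# routed to human moderators rather than actioned automatically.
for post in ["the cure they don't want you to know", "nice photo!"]:
    prob = model.predict_proba([post])[0][1]
    print(f"{post!r}: p(harmful) = {prob:.2f}")
```

The hard part, and the reason these teams get acquired rather than rebuilt in-house, isn't the model itself but everything around it: labelled data in dozens of languages, evolving definitions of harm, and pipelines that route uncertain cases to humans.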
Platforms
Social networks and the application of content guidelines
Meta, parent company of Instagram, and Pinterest both issued statements saying they were "committed" to making their platforms safe following the conclusion of the inquest into the death of 14-year-old UK schoolgirl Molly Russell. The coroner concluded that "she died from an act of self harm while suffering from depression and the negative effects of online content" (not suicide, he was at pains to point out), raising questions about the safety standards that both platforms had in place at the time.
Recommender systems are also at the heart of a Supreme Court case about whether YouTube "aided and abetted" extremists in the run-up to the 2015 Islamic State attack in Paris. The family of 23-year-old student Nohemi Gonzalez say that Google, its parent company, knew the system was aiding the group but did nothing about it. The tech giant has argued that the court documents establish no link between its recommendations and the attack.
There's also been some kerfuffle this week at Twitter, which it seems premature to get into until we know the outcome. I don't believe Mr Musk subscribes to EiM yet but, in case this makes it to his inbox, here's a primer on why online speech gets moderated from The Washington Post.
People
Those impacting the future of online safety and moderation
If the general public is to understand how the internet is changing, it's vital that online speech experts find their way into mainstream media and conversation. So it was great to see Danielle Keats Citron appear in The Observer at the weekend, talking about her new book.
Like many of you, I've been familiar with Danielle's work for many years, particularly her research into cyberstalking and the role that platforms and other private companies play in combating it. The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age builds on that.
She reminds us that:
"After 25 years of the Section 230 legal shield, we need to recognise that although it has emboldened and enabled all sorts of speech and activities online, there are a lot of costs to speech too."
The voices calling for the reform of 'the twenty-six words that created the internet' (h/t Jeff Kosseff) have become quieter since the departure of the former US President (EiM #90) but I'm glad the opposite is true of Citron.
Tweets of note
Handpicked posts that caught my eye this week
- "You mean you support a system that would promote and benefit affluent people like you and me..." - I could quote this whole tweet from Kenneth White aka Popehat, or any number of the replies to this lacklustre take by Scott Galloway, but this snippet will do.
- "In which the UK commits to an open and secure internet while also pushing through a law which will impose a general monitoring obligation, content interception, and the introduction of an age/identity verification layer across the entire UK-accessible internet" - Heather Burns, my UK tech policy go-to, can spot hypocrisy a mile off.
- "What are some good papers or tech blogs on content moderation / data labelling pipelines?" - Amazon ML engineer Eugene Yan creates a handy reading list in plain sight.
Job of the week
Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1400+ EiM subscribers.
Yelp is looking for a Product Manager on its UK-based content platform team to "collect, curate, and surface the data Yelp needs to help consumers connect with the right businesses."
Part of the role involves working with the content moderation and machine learning teams, so experience in executing content modelling, building content moderation tools, and overseeing machine learning projects is essential.
I don't know the salary (anyone from Yelp help out here?) but there are a number of PM roles being advertised by Yelp at the moment so it looks like you'd be part of a growing team with a significant mandate.