KOSA returns, abortion speech under threat, and OpenAI’s safety trick
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw and supported by members like you.
One of the challenges that Week in Review tries to solve for is the constant moving of goalposts in the T&S space; things move so quickly that it's a 9-5 job to keep up.
This week is no exception with several stories emerging in the last 24 hours. If you find EiM helpful to your work or just interesting, remember that you can become a member for less than the price of a Jim Jordan stamped addressed envelope heading to Brussels.
A big welcome to news subscribers from IFTAS, Der Standard, ActiveFence, The Bureau of Investigative Journalism, The Alan Turing Institute, Viking (ok, I'll take a free cruise), Université Libre de Bruxelles, FullFact and others.
Here's everything in moderation this week — BW
Does your platform have messaging, search, or generative prompt functionality? Thorn has developed a resource containing 37,000+ child sexual abuse material (CSAM) terms and phrases in multiple languages to use in your child safety mitigations.
The resource can be used:
- To kickstart the training of machine learning models
- To block CSAM prompts
- To block harmful user searches (a rough sketch of this use case follows below)
- To assess the scope of this issue on your platform
Apply today to get access to our free CSAM keyword hub.
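To make the search-blocking use case concrete, here's a minimal sketch of how a keyword hub like Thorn's might be wired into a search pipeline. It assumes the terms arrive as a plain text file with one phrase per line and that simple normalised substring matching is sufficient; Thorn's actual delivery format and recommended matching approach may well differ.

```python
# Minimal sketch: block searches that match a keyword list.
# Assumes a plain text file of terms, one phrase per line
# (hypothetical format, not Thorn's documented one).
import re
import unicodedata


def normalise(text: str) -> str:
    """Lowercase, strip accents and collapse whitespace so that
    trivial variations of a term still match."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return re.sub(r"\s+", " ", text.lower()).strip()


def load_terms(path: str) -> set[str]:
    """Load and normalise the keyword list."""
    with open(path, encoding="utf-8") as f:
        return {normalise(line) for line in f if line.strip()}


def should_block(query: str, terms: set[str]) -> bool:
    """Return True if the normalised query contains any listed term."""
    q = normalise(query)
    return any(term in q for term in terms)
```

In practice a deployment would pair matching like this with human review and escalation workflows rather than blocking silently, but the shape of the integration is roughly this simple.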
Policies
New and emerging internet policy and online speech regulation
The European Commission yesterday found TikTok in breach of the Digital Services Act for failing to make public a repository of its advertising. The ruling is a small part of a wider investigation announced in February 2024 (EiM #235).
Henna Virkkunen, the Commission's lead for tech sovereignty, security and democracy, noted that the video platform's ad library implementation prevented "the full inspection of the risks brought about by its advertising and targeting systems" and that "citizens have a right to know who is behind the messages they see". TikTok may appeal but could be fined up to 6% of global revenue.
Just weeks after I noted the eerie quietness surrounding it (EiM #291), the Kids Online Safety Act (KOSA) has returned, nearly unchanged, in a new bipartisan push. As The Verge reports, the bill now contains language that states "KOSA would not censor, limit or remove content from the internet", which is in direct response to speech rights concerns raised by US civil society groups.
Various outlets noted that Apple is now a supporter of the bill, although the Computer and Communications Industry Association, a trade group that counts Apple as a member, said in a statement that there are outstanding "serious First Amendment concerns".
Also in US speech legislation news: a revealing piece in User Mag this week outlines how US states are experimenting with indirect speech regulation as a way to further limit abortion access. With Texans still finding ways to terminate their pregnancies, legislators are targeting not just individuals but also the online platforms that carry or amplify abortion-related content. And the incentives are wild.
Also in this section...
- Online safety for kids and teens: A Vys biweekly brief (Quire)
- Board to Address Impact of Meta’s Content Moderation on Freedom of Expression in Syria (Oversight Board)
Products
Features, functionality and technology shaping online speech
OpenAI has launched a public-facing safety evaluations hub as part of an effort to be more transparent about the safety testing of its AI systems. The hub, which contains a subset of its evaluations, claims to be part of a "company-wide effort to communicate more proactively about safety". It follows a series of mistakes (EiM #274) and reports that safety testing had become less thorough, with evaluation periods shrinking from a few months to just a week in some cases.
Take the win: Although just a snapshot of its safety efforts, this feels like a win for safety researchers and for users of ChatGPT, its primary product. But it won't calm staff nerves: an OpenAI researcher previously opened up about how releasing a new model is "a bit scary because you don't know what it will or won't be able to do" (EiM #200).
Also in this section...
- The Singapore Consensus on Global AI Safety Research Priorities (Singapore Conference on AI)
- Beyond sycophancy: DarkBench exposes six hidden ‘dark patterns’ lurking in today’s top LLMs (VentureBeat)
Platforms
Social networks and the application of content guidelines
TaskUs, one of the largest providers of T&S services to major platforms, is being bought by private equity firm Blackstone in what's been mooted as a $2bn deal. Its founders say the deal will enable it to invest in AI, although it hasn't been without its critics. The announcement comes just as the company posted stellar Trust & Safety results for the fifth quarter in a row, perhaps demonstrating why Blackstone sees long-term value.
Leader vs aspirants: I talked on Ctrl-Alt-Speech a few weeks back about Majorel/Teleperformance's challenges in Ghana (EiM #291) and it speaks to a two-tier market: on one end, leaders like TaskUs investing in the future; on the other, smaller firms fighting operational and reputational fires. I can see further consolidation of the BPO market taking place, facilitated by more private equity buyouts like this one.
Many of us knew it but now it's there in black and white: the safety of LGBTQ+ users on social media is on the slide. According to GLAAD's fifth annual Social Media Safety Index — which measures policies rather than enforcement — none of the five major platforms scored above 56 out of 100, with all of them down on last year (albeit after a change in methodology).
The rowing back of LGBTQ+ rights on Meta platforms has been well-documented (EiM #276) but I was particularly interested in X/Twitter's paltry score of 30, which GLAAD attributed to inadequate policy protections and to policies being contingent upon "local laws". Read the full thing if you can.
Also in this section...
- Kanye’s Nazi Song Is All Over Instagram (404 Media)
- Why Africa Is Sounding the Alarm on Platforms' Shift in Content Moderation (Tech Policy Press)
People
Those impacting the future of online safety and moderation
If it wasn't already high on the public agenda, Max Tegmark is doing a decent job of keeping the existential threat of AI in people's consciousness.
The MIT physicist and founder of the Future of Life Institute has been one of the loudest voices warning that Artificial Super Intelligence (ASI) poses a threat to humanity. This week, following the publication of a new paper, he backed that up by calling for AI companies to actually quantify the potential risk posed by their models.
The Compton constant — the probability that a superintelligent AI escapes human control, and a nod to the calculations carried out by US physicist Arthur Compton before the first nuclear test in 1945 — would create the "political will" to agree global safety regimes, he told The Guardian. I hope he's right.
Posts of note (Digital Services Act edition)
Handpicked posts that caught my eye this week
- "Looking at the Impact of the EU’s Digital Services Act via a Facebook case study : did it reduce harmful content substantially?" - I didn't get round to reading this new NATO report but, from Marie-Doha Besancenot's teaser, it sounds like I'll have to.
- "Meanwhile, the EU is accused of double standards and of failing to ensure that Poland respects free speech and the rule of law." - MLex senior correspondent Luca Bertuzzi notes that good friend of Ctrl-Alt-Speech, Jim Jordan, is not overly happy with the DSA. Again.
- "Though the court didn’t grant urgent access (and the election is long over anyway), the ruling was a victory: it confirmed that researchers can bring these cases in national courts, not just in Ireland where X is based." - Brandi Geurkink, exec director of the Coalition for Independent Tech Research, on a welcome ruling in Berlin.