Live audio moderation (finally?), new transparency report and a message for Musk
Hello and welcome to Everything in Moderation, your global content moderation and online safety round-up. It's written by me, Ben Whitelaw, and supported by good people like you.
A big welcome to new subscribers from Bumble, Bytedance, Feeld, Daily Maverick, Bristol University, Meta, Salesforce and elsewhere over the last few weeks. The variety of people that receive this newsletter, and the interest and expertise they have in speech governance, never cease to amaze me.
A reminder that if anyone has been affected by trust and safety cuts at platforms this past fortnight (EiM #180) and could do with a hand making connections or finding work, hit reply and I'll try to help.
Here's everything in moderation this week - BW
Policies
New and emerging internet policy and online speech regulation
What has happened to the Online Safety Bill, the UK's controversial catch-all legislation which was finally introduced to Westminster in March 2022 (EiM #152) but has gone quiet since? A new piece by Tim Bernard for Tech Policy Press explains how the revolving door that is the UK prime ministership has stopped it in its tracks and may "imperil its passage" through Parliament.
The piece is timely: regulators from the UK, Australia, Ireland and Fiji (yes, Fiji) this week announced a new network to "pave the way for a coherent international approach to online safety regulation". There's not much information available about the grandly titled Global Online Safety Regulators Network, but it speaks to something St. John's law professor Kate Klonick mentioned in a recent interview with Marketplace:
"The nation-states will in fact use geolocation technology to raise borders up into cyberspace, and then the traditional notions of geopolitics will play out along that, about what people can say and what people can do in certain spaces."
As I tweeted, there's so much that is odd about this strange group of bedfellows that it's hard to know where to start.
If you're unconvinced that moderation has become a political hot-button issue, you only need to look across the Channel where French president Emmanuel Macron last week made children's online safety one of his key re-election promises. Macron, who has a history of going after the Silicon Valley platforms (EiM #12), announced the creation of the Children Online Protection Laboratory to "explore, promote, develop and evaluate solutions aimed at improving the safety of minors in the digital environment". Politico noted that recent French efforts haven't always ended successfully but we're reminded again that moderation is political and politics is increasingly about moderation (EiM #158).
Products
Features, functionality and technology shaping online speech
In a reported first for a major social media app, live audio moderation is coming to French social network Yubo. Created in partnership with cloud moderation company Hive — which raised a huge chunk of money back in early 2021 (EiM #119) — the capability will initially be available in four English-speaking markets of the 140+ countries where Yubo has users.
The system works by automatically transcribing 10-second snippets of live audio streams with 10 or more participants, then sending each snippet to human specialists for review. It is flagging around 600 streams a day, according to a company press release, which feels low for an app with 40+ million users.
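For the curious, the flow described above — sample only busier rooms, transcribe short windows, escalate risky transcripts to humans — can be sketched in a few lines. This is purely illustrative: the thresholds mirror the reported figures, but the risk terms and function names are my own invention, not Yubo's or Hive's actual pipeline.

```python
SNIPPET_SECONDS = 10      # length of each transcribed audio window, per the press release
MIN_PARTICIPANTS = 10     # only streams with 10+ people are sampled
RISK_TERMS = {"address", "meet up", "send money"}  # illustrative placeholders only


def should_sample(participant_count: int) -> bool:
    """Decide whether a live stream is busy enough to transcribe."""
    return participant_count >= MIN_PARTICIPANTS


def review_queue(transcripts: list[str]) -> list[str]:
    """Return the transcripts that should be escalated to human specialists."""
    flagged = []
    for transcript in transcripts:
        text = transcript.lower()
        # A real system would use a classifier; keyword matching stands in here.
        if any(term in text for term in RISK_TERMS):
            flagged.append(transcript)
    return flagged
```

The interesting design choice is that automation only triages: nothing is actioned until a human reviews the flagged snippet.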
The wider context is that audio moderation has been bubbling away as an issue since the pandemic, when the overnight success of Clubhouse and Zoom caused what I said at the time felt like a "full-blown audio moderation crisis" (EiM #109). Spotify had its own woes too, leading it to acquire an audio moderation startup (#176). Yubo itself came under fire earlier this year for its lax approach to age verification (#149); live moderation may help change that.
Platforms
Social networks and the application of content guidelines
Microsoft this week launched its first transparency report for its Xbox gaming platform, joining the likes of Nextdoor (EiM #149), Bitchute (#163) and the independent-but-Meta-funded Oversight Board (#121) that have started to share public reports in the last 18 months.
The standout stat is a 9x (nine times!) increase in proactive enforcements over the period the report covers (461k to 4.78m), which could be down to greater integration with AI moderation company Two Hat, which Microsoft acquired back in November 2021 (EiM #135). All of the uplift came from "detecting accounts that have been tampered with or are being used in inauthentic ways", The Verge reported.
Elsewhere, I couldn't help but share this message to Twitter owner Elon Musk from Peter Micek, general counsel for the digital rights group Access Now, from this piece on the effect of staff cuts on users' human rights: content moderation "is not cheap ... but it can help you from not contributing to genocide". What more to say?
People
Those impacting the future of online safety and moderation
I make no apologies for featuring online speech whistleblowers and platform workers who have put their heads above the parapet in this section of the newsletter. In the past, I've highlighted Frances Haugen (EiM #131), Gadear Ayed (#136) and lawyer Mercy Mutemi (#153), who is working with Daniel Motaung.
Melissa Ingle is the latest to speak out. The former Twitter employee was a senior data scientist working on algorithms to monitor harmful political content before she became one of the thousands of contractors who lost their jobs this week without warning.
Ingle told Rest of the World that we're yet to see "the negative impact of these policies" and that "algorithms will get more and more porous and let more misinfo in", a frightening prospect ahead of the elections in Turkey and Nigeria over the next 12 months.
Tweets of note
Handpicked posts that caught my eye this week
- "Who are the smartest people you know thinking about child safety online from a youth rights & agency perspective?" - Omidyar Network's Emma Leiken is on the hunt for people (maybe you?) with EU digital rights experience.
- "an extremely challenging aspect of content moderation for my team back in the day" - Alex Bilzerian, former Hive ML lead turned investor, shares an interesting looking Cornell paper looking at narrative kernels.
- "Personal News: I'm excited to announce that I’m joining Twitter as Head of Trust and Safety!" - Cybersecurity expert Jackie Singh manages to get me to laugh in what has been an otherwise unfunny week.
Job of the week
Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1500+ EiM subscribers.
I ran out of time to find a suitable Job of the Week for today's edition. But a reminder: if you're looking for online safety or content moderation talent, drop me a line to find out how you can advertise right here.