6 min read

UK safety startups in demand, Meta agrees to XCheck changes and a Twitch policy update

The week in content moderation - edition #195

Hello and welcome to Everything in Moderation, your out-of-the-box guide to online safety and content moderation. It's written by me, Ben Whitelaw, and made possible by members like you.

This week's edition contains a mixture of regulatory criticism, startup investment and platform industry partnerships, all of which feel like they will persist as themes for the rest of 2023 (and beyond). It's a long edition but (and obviously I'm biased) one worth spending time with.

Greetings to new subscribers from Cyacomb, Internet Safety Labs, Google, Project Liberty, Automattic, Cinder and other moderation-minded folks from across the globe. If you're policy-minded and want to contribute to an upcoming EiM series, remember we'd love you to get in touch.

That's enough preamble; here's what you need to know from the past week — BW


New and emerging internet policy and online speech regulation

This week marked the six-month countdown for large companies to comply with the Digital Services Act and, for now at least, it continues to be billed as a piece of regulation with serious teeth:

  • MIT Technology Review recaps its recent milestones and calls it "quite revolutionary" and "setting a global gold standard for tech regulation when it comes to user-generated content".
  • That's helped in part by reports that the EU has told Twitter that it must hire more trust and safety workers and that it "expect[s] platforms to ensure the appropriate resources to deliver on their commitments", according to the Financial Times.

There are some reservations, however, about whether the Act's underlying principles will translate to other parts of the world. Writing for Tech Policy Press, Théophile Lenoir questions whether the DSA is applicable outside of the EU because discussions relating to the balance of speech, privacy and freedom "are of no use in places where people do not care about them equally." My read of the week.

By contrast, there is growing discord about the Online Safety Bill (EiM #194), with both promoters and detractors seemingly unhappy with its direction:

  • Global Network Initiative's Jason Pielemeier writes for Just Security that thoughtful aspects of the Bill are being "overshadowed and at risk of being negated by some of the more politically-motivated, hyperbolic aspects".
  • Richie Koch over at Proton calls on legislators to "clarify what services and content the bill covers, eliminate the potential for harmful unintended consequences, and take steps to ensure this bill will not compromise end-to-end encryption."
  • Lucy Powell, UK Shadow Secretary of State for Digital, Culture, Media and Sport, said the government had "delayed and now watered down the online safety bill".
  • Glitch's Seyi Akiwowo (EiM #113) noted that the Bill's "delay is disappointing" and called for an amendment to recognise gender-based violence specifically.

This story broke soon after last week's newsletter hit your inboxes but is an important one in my mind: Meta has responded to 32 recommendations by the independent but Meta-funded Oversight Board (EiM #184) with regard to the controversial XCheck (or Cross Check, if that's your vibe) programme uncovered as part of The Facebook Files in late 2021 (EiM #128). Of the 32 recommendations, Meta agreed to implement 11, partially implement 15, continue to assess the feasibility of one, and take no further action on the remaining five.

However, Meta's commitments entail a lot of "exploring ways", "working towards" and "where possible", so whether these changes actually happen is debatable. That's why, as Mashable reports, there are concerns that Meta hasn't gone as far as it should have.

Become an EiM member
A quick pause in today's edition to thank you for reading and to share a request.

As you hopefully have gathered by now, Everything in Moderation is designed to be your guide to understanding how content moderation is changing the world.

Between the weekly digest, regular perspectives and occasional explorations, I try to help people like you working in online safety and content moderation stay ahead of threats and risks by keeping you up-to-date about what is happening in the space.

If you look forward to the newsletter or are one of the hundreds of people who read it each week, you might want to become an EiM member to support these efforts.

Becoming a member for a few dollars/pounds/euros a week helps me connect you to the ideas and people you need in your work making the internet a safer, better place for everyone. No pressure but it would be nice.

Thanks for reading — BW


Features, functionality and technology shaping online speech

Big news for two London-based startups this week and for the online safety investment scene generally:

  • ActiveFence has completed the acquisition of Rewire, the AI startup founded in 2021, for an undisclosed amount. Rewire's selling point is that its models are more accurate despite being trained with less data, which fits neatly with ActiveFence's suite of AI products. But this also feels like a talent play: CEO Bertie Vidgen and CTO Paul Röttger have vast experience from their time at the University of Oxford and the Alan Turing Institute and will make great additions to the ActiveFence team.
  • Unitary, the contextual AI company founded in 2019, has secured $8 million of funding to develop its technology and accelerate its partnerships. Led by Plural Platform, the injection will also allow it to continue its open-source work, which is an interesting and commendable use of VC money. The vastly experienced former Meta executive Carolyn Everson has also invested and will join its board.


Social networks and the application of content guidelines

Reddit and OnlyFans used International Women's Day to announce that they have joined StopNCII.org, a tool and coalition of platforms designed to prevent the non-consensual sharing of intimate images online. Its hash technology, launched in December 2021 by non-profit SWGfL, has reportedly enabled 20,000 adults to create cases to prevent their images from appearing, which by my rough maths is a staggering 320-odd a week.

Last week (EiM #194), five platforms became the inaugural members of Take It Down, a similar service for under-18s.
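For the curious, here's roughly how hash-based matching of the StopNCII kind works in principle. StopNCII is built on Meta's PDQ perceptual hashing; the toy "difference hash" below is my own simplified sketch for illustration only, not the actual algorithm. The key idea is that only the hash ever leaves the person's device, never the image itself, and similar images produce similar hashes.

```python
# Toy illustration of on-device perceptual hashing, the principle behind
# StopNCII.org-style matching. StopNCII itself uses Meta's PDQ algorithm;
# this simplified "difference hash" is for illustration only.

def dhash(pixels):
    """Compute a difference hash from a 2D grid of grayscale values.

    Each bit records whether a pixel is brighter than its right-hand
    neighbour, so the hash survives small edits and re-encoding.
    """
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return int("".join(map(str, bits)), 2)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance means a likely match."""
    return bin(h1 ^ h2).count("1")

# A case is created by submitting only the hash, never the image itself.
original = [[10, 20, 30], [40, 30, 20], [5, 50, 5]]
slightly_edited = [[11, 21, 29], [41, 29, 21], [6, 49, 6]]

h1, h2 = dhash(original), dhash(slightly_edited)
print(hamming_distance(h1, h2))  # 0: the small edit flipped no comparisons
```

Because platforms compare hashes rather than exact files, a lightly cropped or re-compressed copy can still be caught, which is what makes the approach workable across Reddit, OnlyFans and the other member platforms.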

Twitch will update its Adult Sexual Violence policies after a "deepfake porn" incident in January. Its announcement this week reiterated that synthetic non-consensual exploitative imagery (NCEI) is "not welcome on Twitch" and that its Adult Sexual Violence and Exploitation policy and Adult Nudity policy are being updated accordingly. It will also run a Creator Camp to explain to streamers how to spot NCEI and what to do if they come across it.


Those impacting the future of online safety and moderation

When some tech executives with half-baked views about online speech — I'm looking at you Substack co-founders (EiM #145) — are criticised, it's often because they lack any real-life experience of the problem.

That criticism can't be levelled at Tracy Breeden. She is the former Head of Global Women's Safety at Uber and spent two years as Vice President, Head of Safety & Social Advocacy across over 15 brands at Match Group. She describes herself as a "queer fearless leader entrepreneur and activist", and her consultancy focuses on supporting and protecting women, LGBTQ+ people, and other marginalized groups.

In a short interview with Rest of World this week, Breeden calls out the lack of investment in tackling the "harm [that] is happening on these platforms" and the need for anything that "opens the door to get help from the outside".

Many trust and safety workers at platforms are crying out for resources and support, and many, I'm sure, would love to call upon Breeden too.

Tweets of note

Handpicked posts that caught my eye this week

  • "This is part of the reason the UK's Online Safety Bill keeps running into the sand" - Benedict Evans disputes the world-leading nature of the legislation chugging through Westminster.
  • "TL;DR Despite popular perception we've actually been in the Golden Age of Tech Accountability for the last five years, but now it's ending, and it's kinda our own fault." - Kate Klonick with a must-read (and must-subscribe) Substack.
  • "Transparency is necessary but only a starting point to secure trust & safety online" - Alex Howard has a whole thread of gems from the State of the Net conference.

Job of the week

Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1500+ EiM subscribers.

Spotify is hiring a Product Manager to join its Content Safety Analysis team based in Dublin.

The role will focus on scaling understanding of the platform’s content through tooling and other "content understanding capabilities", working with Engineering and Design teams to come up with solutions.

As with all product manager roles, you'll need to show you can maintain and evolve large-scale products by using research, data, insights and iterative practices. Experience with building long-term roadmaps is a must too.

Salary is for discussion with the recruiter, according to its AI-powered recruitment bot, but it's a great gig at a company with lots of work to do.