EU regulators turn up the heat, 'legal but harmful' dropped in UK bill and getting into T&S

The week in content moderation - edition #183

Hello and welcome to Everything in Moderation, your weekly review of the most critical stories in content moderation and online safety. It's written by me, Ben Whitelaw.

You'll notice that today's edition contains even more stories than usual about Twitter. Why is that? Well, since Musk's takeover, there seems to be less oxygen (or perhaps appetite) for important stories about online safety and fewer stories about platforms that don't have a South African space cowboy at the helm. I can only link to what I read elsewhere.

This is concerning. I've written before about how most media coverage of this vital topic isn't conducive to creating a better internet and how there's huge potential to do more. For now, while working full-time elsewhere, EiM is all I'm able to do to try and foster a more nuanced understanding of speech governance and support those doing important work.

I'd like that to change in 2023. I'm still working through how, but if you'd like to work together on it, drop me a note.

Thanks to new subscribers from Strava, EFF, Stanford University and elsewhere for joining the club. If you enjoy today's newsletter, share your feedback via the new like/dislike buttons at the end.

That's enough of my rambling, here's everything in moderation this week — BW


Policies

New and emerging internet policy and online speech regulation

The Online Safety Bill came firmly back into focus this week as the UK government dropped the "legal but harmful material" provision, which critics argued could lead to the over-removal of content by platforms and posed a threat to free speech.

Protections for children have been added — culture secretary Michelle Donelan was quick to point out — but that didn't stop the bill being criticised for being watered down, notably by Ian Russell, whose daughter Molly took her own life after viewing content on Instagram and Pinterest (EiM #176). With the bill returning to Parliament next week, I'd recommend reading analysis from:

Public and private collaborations that emerged out of the Christchurch Call (EiM #38) have been disrupted because the "entire [Twitter] team the New Zealand government was planning to work with disappeared" in the recent layoffs. That's the verdict of Markus Luczak-Roesch, an associate professor at Te Herenga Waka — Victoria University of Wellington, who writes for The Conversation that "the [NZ] government was arguably badly advised to partner with Big Tech players on such a fundamental project" as algorithmic transparency. It comes as footage of the Christchurch attack circulated on Twitter this week after the platform's automated tools failed to detect the clip. A setback for everyone involved.

Twitter's gutted workforce was also responsible for another policy change this week, as the platform quietly stopped enforcing its rules against false information about the Covid-19 virus and vaccines (EiM #155). The change, which was spotted by an eagle-eyed user and first reported by CNN, came about because "misinformation policies are very labor intensive to enforce", according to a former employee. But it has real significance as far as online speech regulation goes: EU diplomats told Politico that Twitter had "jumped to the front of the queue of the regulators" and was being looked at in the US too.

Products

Features, functionality and technology shaping online speech

Zepeto, the Korean metaverse platform with a reported 300m users, has created a safety advisory council to "serve as an independent committee that advises on issues related to community trust and user safety." The nine-strong council is made up of academics from Northeastern University, the University of Pennsylvania and the University of Alabama, as well as experts from Thorn and non-profits Promundo and ConnectSafely.

Twitter, TikTok and Twitch all have similar councils (EiM #102) and were joined by Spotify over the summer (#163). Whether they work or have any tangible impact, I'm not sure.

Platforms

Social networks and the application of content guidelines  

Just two weeks after being welcomed back onto Twitter, Ye (previously known as Kanye West) has been removed from the platform for posting antisemitic content. The Verge has more on why, but nothing about this story is surprising. Nor is this whack-a-mole likely to end any time soon: Platformer reported that Twitter has been reinstating over 60,000 accounts (76 with more than 1 million followers) in what is being referred to as "the Big Bang" by staff. Elon, in his new role as chief moderator, is going to have his work cut out.

That's not all: a number of antifascist organisers, activists and journalists have had their Twitter accounts suspended this week in what is believed to be a coordinated campaign by right-wing operatives, according to The Intercept. Chad Loder, a cybersecurity expert who was removed and reinstated twice, said the platform is “going to turn into Gab with crypto scams.”  

Despite the above, Elon Musk is "not the unequivocal villain of this story" according to former Twitter head of safety Yoel Roth, who was speaking at the Knight Foundation's conference on digital democracy (podcast interview here). In his first on-stage interview since quitting, Roth also talked about how "polls are more prone to manipulation than almost anything else" (Rolling Stone) and that it was a mistake to remove the Hunter Biden story from the platform back in October 2020 (Yahoo Finance). Other sessions from the conference are online here.

Finally for this all-Twitter platform section of today's newsletter, Wired reported that the blue bird's child sex abuse material (CSAM) team responsible for the Asia Pacific region — which is based in Singapore — has just one full-time employee following recent layoffs and resignations. Gulp. I hope that person is doing ok.

People

Those impacting the future of online safety and moderation

The personal stories of folks working in trust and safety are few and far between. It's why I worked with the Integrity Institute on a mini-series about what it's really like to work in trust and safety (more coming soon...).

It's also why I enjoyed this new Business Insider profile detailing how Matt Soeth came to work in online safety. Matt, who works at Spectrum Labs, talks about his prior life as a high-school teacher before embarking on a career change, first as the founder of The Net Safety Collaborative in 2011 and later at TikTok.

Matt notes how "trust and safety is about managing human behavior (good and bad) and the systems and tools behind that" — something he knows a bit about from working with students — and that it's rare to find someone "interested in safety who doesn't deeply care about this topic". I concur.

Tweets of note

Handpicked posts that caught my eye this week

  • "The shift toward user augmentation, combined with privacy, security, and trust and safety, is this necessary shift toward a new and better web.” - German political science Ilona Kickbusch on the forces at play.
  • "meanwhile most of your spaces have lost Black women in general and so has Twitter" - Sydette Harry says Elon's Twitter is actually a continuation of the norm.
  • "are any funders or foundations stepping up to invest in trained moderators and good security practices for Mastodon instances?" - Careful Industries' Rachel Coldicutt asks where the help is coming from.

Job of the week

Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1500+ EiM subscribers.

Automattic has a unique job offer: the software company is looking for anyone who can help turn around Tumblr.

The ad is an effort to actively court staff let go by Twitter and explains that the company is "open to bringing over individuals — even entire teams — if there's a clear path to have a 10x impact on Tumblr's growth and revenue."

That might be tricky to prove if you work in trust and safety, but Matt Mullenweg (EiM #177) has demonstrated a clear understanding of what it takes to keep a platform safe, so it could be worth a speculative application.