5 min read

A "conceptual framework" for the future, Oversight Board changes and DSA deadline

The week in content moderation - edition #192

Hello and welcome to Everything in Moderation, your all-in-one guide to content moderation and online safety. It's written by me, Ben Whitelaw and supported by members like you.

This week, I was lucky enough to be invited on the 10k Posts podcast to talk about internet labour, Stacy Horn and why the term "content moderator" has lost all meaning. Listen here for a taster and get the full episode from wherever you get your podcasts.

Welcome to new subscribers from Logically, Zoom, Unitary, Cloudflare, Trustlab, Medianama, Reddit and elsewhere; why not drop me a line and let me know where you found out about EiM? Alternatively, hit the thumbs at the end of today's edition.

Here's everything in moderation this week — BW


Got something to say?

Some news, as they say: Everything in Moderation will be introducing handpicked sponsors in the weekly newsletter from March onwards.

If you're an institute with some new research to share or a tech company looking to connect with trust and safety experts, EiM sponsorship gives you the chance to get your message in front of 1500+ online safety thinkers and doers from around the world.

Thanks to those of you who got in touch last week — I'll be reaching out in the coming days. For those that missed it, fill in this quick form to register your interest...


Policies

New and emerging internet policy and online speech regulation

The House Judiciary Committee, which considers US legislation relating to legal proceedings, this week subpoenaed five technology companies as part of its investigation into allegations that conservative voices were suppressed on their platforms. CEOs of Alphabet, Amazon, Apple, Facebook parent Meta and Microsoft were asked to produce documents “referring or relating to the moderation, deletion, suppression, restriction or reduced circulation of content” by March 23, it was reported by NBC News and others.

If you need help understanding where this is coming from, maybe this will help: Committee chairman Jim Jordan used his letters to praise the so-called "Twitter Files", the documents released by a number of highly partisan media figures and called "a distraction" by experts. A reminder that moderation is politics and politics is moderation.

Talking of which, if you want to see the effect of India's IT Rules which I covered here recently (EiM #191), look no further than this Wired article. In it, Akash Banerjee, a journalist who runs satirical YouTube channel The Deshbhakt (“The Patriot”), explains how he doesn't know if he can make a video on the BBC documentary in case "the government pull that off, also citing emergency powers". He says the rules, which are due to be expanded, will be the "death knell of many social media channels, especially the smaller ones". My read of the week.

Remember the new digital taskforce I mentioned at the end of last year? (EiM #184) Well, its inaugural members have been announced and it's like a trust and safety version of the Avengers. The Task Force for a Trustworthy Future Web will see 35 policy heads, CEOs, digital rights experts and technologists come together over four months to "define the current components that make up both the immersive and digital information ecosystem(s)" and build a "conceptual framework" for the future internet. There'll be a report in June and I'll be closely tracking how it goes.

Finally for this section, it's worth noting that today (Friday) is the deadline for platforms to share how many EU users they have as part of the Digital Services Act. Politico's Clothilde Goujard has a helpful running list.

Have you worked on creating or revising content policies? Want to share how you helped make users safer online? Good news! Everything in Moderation is starting a new series of content policy case study interviews, conducted by the excellent Tim Bernard.

Get in touch by filling in this short form and Tim will reach out to set up a discussion. Interviews can be conducted by Zoom or asynchronously and will be published from March onwards — BW

Products

Features, functionality and technology shaping online speech

Pre-match blocking is coming to Tinder as part of an effort to make it easier "to avoid seeing a boss or an ex”, according to Wired. Previously, the safety feature was only available once you matched with the user, by which point it was too late (he says, maybe from experience). The dating app has also introduced Incognito mode, which shows your profile to people you've liked, and Long Press Reporting, which allows users to easily report directly from a message by, you guessed it, long pressing.

Platforms

Social networks and the application of content guidelines  

The staff cuts at YouTube have left just one person in charge of global misinformation policy and are part of a broader trend of platforms slimming down, or cutting completely, their trust and safety efforts, according to the New York Times. It also got rid of two of its five hate speech and harassment policy leads.

Some people noted that such deep cuts were inconsistent with the 6% headcount reduction across the organisation, although Google, YouTube's parent company, was quick to point out that experts working on extremism and child safety were spared. Sadly, we'll only know the effect when it's too late.

Finally, a sadly all too familiar story about Twitter: a new survey of LGBTQ organisations and influencers found that abuse and hateful speech have increased in the months since Elon Musk took over. 60% of respondents to the survey, co-ordinated by Amnesty International, GLAAD and the Human Rights Campaign, said they had experienced more abuse than usual, while 88% said nothing happened when they reported the abuse. Bleak.

People

Those impacting the future of online safety and moderation

First there were 20, then there were 23 and this week another: the independent-but-Meta-funded Oversight Board added its latest member, constitutional law professor Kenji Yoshino, on Tuesday as it announced changes to how it reviews cases.

It's an interesting choice because it highlights an area where the Oversight Board must think it is lacking expertise: issues of equality, diversity, and inclusion. As the NYU announcement explains, Yoshino's work has focused on "the ways in which the right of expression and right to equality can collide with each other", which sounds helpful in cases about criticism of Iran's Supreme Leader and drill music alike.

Sidenote: his recently published book — Say the Right Thing: How to Talk About Identity, Diversity, and Justice — sounds fascinating. I will be ordering.

Yoshino also notes that the "global scale of communications" combined with "localized cultural content" is "one of the most compelling issue[s] of our time". That could well be the slogan for EiM.

Tweets of note

Handpicked posts that caught my eye this week

  • "It's certainly true that first generation regulation is often imperfect - that's not a reason not to start." - tech industry advisor Jess Figueras says the time for government regulation is here.
  • "Just this acknowledgment: "We’re keenly aware that a product like this can be vulnerable to abuse and manipulation." Bloomberg's Jillian Deutsch notes the holes in Twitter's EU disinfo reporting.
  • "How to resolve content moderation dilemmas between free speech and harmful misinformation?" - Berlin based researcher Anastasia Kozyreva shares her new paper, which I'm planning to read with a coffee this weekend.

Job of the week

Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1500+ EiM subscribers.

Unitary is looking for two Machine Learning Engineers to design, develop, and maintain machine learning algorithms that power its visual moderation products.

The roles involve evaluating model performance across different datasets, investigating model failure modes and keeping on top of the latest ML research and papers. You can also expect to work closely with the engineering and product teams and to communicate complex information to other team members.

It's not clear what the salary is but share options and flexible public holidays are some of the perks of this role, if you're successful.