6 min read

Detecting non-consensual images, XCheck report published and a new taskforce

The week in content moderation - edition #184

Hello and welcome to Everything in Moderation, your in-a-nutshell review of the latest online speech, safety and content moderation news from the last week. It's written by me, Ben Whitelaw.

Festive greetings to new subscribers from Access Now, Teyit, Hinge, the University of Notre Dame, CounterHate, Handshake and elsewhere. Those of you who signed up last week expecting a newsletter, apologies; I was ill.

This is the penultimate newsletter of 2022 and frankly, I've no idea where the year has gone. I'll share a rundown of the last 12 months next week so be sure to check that out (or don't, if you'd rather not be reminded).

There's a hefty Product section in today's edition including some notable funding news that I think gives a taste of what we can expect in 2023. Hit reply with your best comments and critiques and, if you are able to support the costs of running EiM in these straitened economic times, become an individual or organisational member from just $80 a year.

Here's everything in moderation this week — BW

PS If you're interested in submitting a panel session about moderation or online safety for next year's International Journalism Festival, drop me a line.


Policies

New and emerging internet policy and online speech regulation

Ireland this week signed into law its long-awaited (but not as long-awaited as the UK's own) Online Safety Bill, which creates a new Media Commission that will act as a regulator for broadcasters as well as online platforms. Experts have previously raised concerns that the Bill could conflict with provisions in the Digital Services Act (EiM #143) and there is, as Tech Against Terrorism explain, "uncertainty as to how the law will work in practice both in the present and future climate". Nonetheless, it's been widely welcomed by those working on children's rights.

This one was published last week but is too important not to include here: the independent-but-Facebook-funded Oversight Board found that Meta's XCheck programme (EiM #128) is "flawed in key areas" and "appears more directly structured to satisfy business concerns" than its human rights commitments, according to its report. Rather than recommend its closure, which some expected, the Board goes on to make a series of recommendations about how to improve it. However, Wired's Steven Levy, who has been embedded with the board in the past, wrote that he had "doubts about how eagerly Meta will embrace all those suggestions". The company has 90 days to respond.

Products

Features, functionality and technology shaping online speech

Meta has open-sourced a tool it uses to detect terrorist and child exploitation content to help other sites and services find copies of images or videos at scale. Hasher-Matcher-Actioner (who makes up these names?!) allows platforms to label and find content as well as plug into other databases (such as the one owned by the Global Internet Forum to Counter Terrorism, or GIFCT) to detect content that may have appeared elsewhere first. The timing of the announcement is not an accident: Meta takes up its new role as chair of the GIFCT’s Operating Board in January (PS the blog post by Nick Clegg reads like a narky text to Elon Musk about the importance of content moderation, go read it immediately).
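
If you've never seen how hash-matching works under the hood, here's a minimal sketch of the hasher-matcher-actioner pattern: hash an upload, compare it against a shared list of hashes of known violating media, and take action on anything close enough. To be clear, this is illustrative only and not HMA's actual API; the threshold, database shape and action names are my own assumptions, and real deployments use perceptual hashes such as PDQ rather than the toy hex strings below.

```python
# Illustrative sketch of a hasher-matcher-actioner loop (not HMA's real API).
from dataclasses import dataclass

# Commonly cited match cutoff for 256-bit PDQ-style hashes; an assumption here.
HAMMING_THRESHOLD = 31

@dataclass
class KnownHash:
    hash_hex: str  # hash of previously identified violating media
    label: str     # e.g. "terrorism", as labelled in a shared database like GIFCT's

def hamming_distance(a_hex: str, b_hex: str) -> int:
    """Count how many bits differ between two hex-encoded hashes."""
    return bin(int(a_hex, 16) ^ int(b_hex, 16)).count("1")

def match_and_action(upload_hash: str, shared_db: list[KnownHash]) -> str:
    """Hasher -> Matcher -> Actioner: flag uploads whose hash is a near-duplicate
    of anything in the shared hash list."""
    for known in shared_db:
        if hamming_distance(upload_hash, known.hash_hex) <= HAMMING_THRESHOLD:
            return f"remove_and_report:{known.label}"  # hypothetical action name
    return "allow"

# Example: an upload whose hash differs by only one bit still matches.
db = [KnownHash(hash_hex="f" * 64, label="terrorism")]
print(match_and_action("f" * 63 + "e", db))  # -> remove_and_report:terrorism
```

The important design point is that platforms only need to exchange hashes, never the underlying images or videos, which is also what makes initiatives like StopNCII (below) workable.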

Talking of hash databases, a new one designed to stop non-consensual intimate image abuse will now be used by TikTok and Bumble. StopNCII.org has been built by Meta and UK non-profit SWGfL over the last year and now contains more than 40,000 hashes submitted by over 12,000 affected people. The video and dating apps are the first partners beyond Meta's own apps but scaling will be key to the success of this initiative. “We now have four platforms, but we need thousands,” said SWGfL’s CEO David Wright.

Cinder, the self-described "operating system for trust and safety", has raised $14m in seed and Series A funding, it was announced this week. CEO and co-founder Glen Wise said in a blog post that the US-based company is designed to combat the issue of "outdated tools... that leave data disconnected and hinder efficient investigation and decisive decision making". As well as Wise — a former red team engineer at Meta — the founding team also includes Philip Brennan and Declan Cummings, who worked in Meta's threat intelligence team; and Facebook's former counterterrorism director Brian Fishman. It follows Spectrum Labs (EiM #145) and Bodyguard (#151) in receiving funding in 2022 and paves the way for others in 2023.

Community Notes, the Twitter tool formerly known as Birdwatch (EiM #172), was rolled out worldwide this week, but not without some teething issues. Viewing and rating Notes became accessible to everyone, but some would-be contributors were told their mobile carrier (eg Airtel and some Jio users) was not sufficiently 'trusted' for them to sign up. So hardly a global rollout at all.

Platforms

Social networks and the application of content guidelines  

Twitter dissolved its trust and safety council of over 100 organisations just days after three members quit because of fears about "the safety and wellbeing of Twitter’s users". Politico reported that an email was sent to council members just 40 minutes before a scheduled meeting, which is special behaviour even for Twitter CEO Elon Musk. The move came despite new polling from YouGov this week which found that 74% of Americans believe social media companies have a responsibility to prevent users from harassing one another, which was (checks notes) what the council was expressly there to help do.

The wider, and perhaps missed, point about this story — as I wrote on Twitter — is that the Trust and Safety Council was in fact a series of advisory groups that helped make the platform safer for many underrepresented users and marginalised groups. I fear for what happens next. As if the departure of the (real) Rocket Man wasn't bad enough.

All of this, you'll know, came after the weird release of the so-called "Twitter Files", which contained some interesting nuggets but was so barmy in what it claimed that I can't bring myself to try and unpack it here. Charlie Warzel over at The Atlantic has the most sensible take.

People

Those impacting the future of online safety and moderation

The multifaceted nature of online speech is such that it will take many different specialisms and expertise — law, computer science, digital rights, privacy, and technology — coming together to figure out what to do next. And for that reason, it's important to have spaces where those ideas can permeate and spread.

This happens in lots of forms but not least in working groups like the catchily named Transatlantic High-Level Working Group on Content Moderation Online and Freedom of Expression (EiM #42). Over 18 months starting in 2019, it brought together 25 experts from across Europe and the US to produce 14 papers on a range of topics linked to online governance (#71).

The latest effort to do something similar is a new task force designed to help "protect users’ rights, support innovation, and center trust and safety principles" which has been set up by the Atlantic Council’s Democracy + Tech Initiative at the Digital Forensic Research Lab (DFRLab). It has set itself the task of defining "the current components that make up both the immersive and digital information ecosystem(s) and the field working to make them healthier and safer". No small feat.

The person tasked with making that happen is Kat Duffy, Senior Fellow at the Atlantic Council's DFRLab, who brings a wealth of experience from the US Department of State, World Economic Forum and Internews and happens to have one of the best Twitter handles around. The task force's plans will be announced in early 2023. I'll share them here as I hear more.

Tweets of note

Handpicked posts that caught my eye this week

  • "It's about my de-platforming from IG & TikTok, the failures of automated content moderation and in-platform appeals" - interesting looking newly-published paper from Dr Carolina Are on the idea of selective safety in platform governance.
  • "Content moderation is like what Churchill said about democracy - it is the worst system except for all the others" US senator Brian Schatz with something for the history-loving CoMo advocates.
  • "They claim this will increase the quality of debate, something we know there is limited evidence for in the first place" - I don't know how the question of whether anonymity improves conversation is still a talking point but Matt Taylor, a colleague of mine at the FT, has a good thread on the new real-name commenting policy of our former employer.

Job of the week

Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1500+ EiM subscribers.

Tinder is looking for a Product Manager, Trust & Safety (Remediation & Tooling) to lead the development of the detection and moderation tools roadmap.

You'll work with the Group Product Manager and engineering team to lead the strategy for reducing bad actors and work with various teams across the business including Analytics and Member Experience.

You need at least two years of product management experience, knowledge of the content moderation space and the ability to get into the office in California twice a week (could be worse).

I don't have salary info but there are a bunch of great perks including a free subscription to Tinder Gold. Worth it for that alone.