The future of Section 230, a new taxonomy for social media and a call for TrustCon
Hello and welcome to Everything in Moderation, your weekly catch-me-up on content moderation and online speech from around the world. It's written by me, Ben Whitelaw and supported by members like you.
This week, as the highest court in the US began hearing a case that could change the internet (and content moderation) as we know it, I was busy running for flights and being ill. Somewhat telling, I thought. If there's any must-read analysis that I've missed, drop me a line and I'll include it in next week's newsletter.
Hopefully, new EiM subscribers can count on a clean bill of health; welcome to folks from Unitary, Bumble, Google, Internet Freedom, ActiveFence, Atlantic Council and elsewhere. And stay safe out there.
Here's everything in moderation this week — BW
Policies
New and emerging internet policy and online speech regulation
All eyes turned to the US Supreme Court this week as oral arguments were heard in Gonzalez v. Google, a case about whether YouTube's algorithms are protected by Section 230 when they recommend illegal content. If you missed it, Corbin K. Barthold of TechFreedom has a detailed 109-tweet thread with the key exchanges and quotes.
One theme that has emerged already is the Court's unsuitability to decide on the future of the web: Vox noted that the Justices deemed themselves "not the nine greatest experts on the internet", while Axios published a round-up of actual experts' concerns about what might happen next.
That's led to a feeling, shared by Casey Newton at Platformer, that it's unlikely the justices will side with the plaintiffs. But we'll have to wait and see: a decision isn't expected until the summer.
Elections in Nigeria begin tomorrow (25th February) amidst fears that misinformation about leading candidates could mislead voters or lead to violence. Earlier this month, Meta explained its plans for an Election Operations Centre and a #nofalsenewszone brand campaign on local radio. Meanwhile, new research from The Integrity Institute shows that misinformation has been shared more widely on Twitter than on Facebook, relative to the engagement those posts would be expected to receive in the run-up to the election (what it calls the misinformation amplification factor).
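To make that metric concrete: as I read the Institute's description, the amplification factor is simply the ratio of the engagement a misinformation post actually received to the engagement a typical post from the same account would be expected to get. Here's a minimal sketch of that calculation; the function name and the numbers are my illustration, not the Institute's methodology or code.

```python
# A minimal sketch of a "misinformation amplification factor": the ratio
# of the engagement a fact-checked post actually got to the engagement
# expected from that account's typical posts. The ratio definition and
# the numbers are illustrative assumptions, not The Integrity Institute's.

def amplification_factor(actual_engagement: float,
                         expected_engagement: float) -> float:
    """Values above 1 mean the platform amplified the post beyond
    what a typical post from the same account would receive."""
    if expected_engagement <= 0:
        raise ValueError("expected engagement must be positive")
    return actual_engagement / expected_engagement


# Hypothetical example: a false claim earns 5,200 engagements where the
# account's typical post earns 800.
print(f"{amplification_factor(5200, 800):.1f}x")  # -> 6.5x
```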
The major platforms have revealed how many European users they have as part of their compliance with the incoming Digital Services Act. Last week's deadline (EiM #192) saw 17 companies pass the 45m user threshold (roughly 10% of the EU's population), although perhaps not the ones you might think. As LSE's Martin Husovec noted, Airbnb, Dailymotion and Pornhub were not large enough to become Very Large Online Platforms (VLOPs).
Products
Features, functionality and technology shaping online speech
An app designed to block child abuse images on users' phones has received £1.8m in funding from the European Union. Salus (yep, named after the Roman goddess), created by a consortium of organisations under the Project Protech umbrella, uses artificial intelligence to identify and block illegal images. It will be tested with 180 people from five countries over 11 months although, as the BBC notes, "many details of the operation of the app still need to be worked out".
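The BBC piece doesn't explain how the blocking actually works, so purely as a sketch of the general shape of such a system: a model scores each image and anything above a confidence threshold is blocked on the device. Every name, number and stub below is my assumption, not a detail of how Salus is built.

```python
# Purely illustrative: one way an on-device blocking gate *might* work.
# The function names, threshold and scoring stub are all hypothetical;
# nothing here reflects Salus's actual design.

BLOCK_THRESHOLD = 0.9  # hypothetical confidence cut-off


def classify_image(image_bytes: bytes) -> float:
    """Stand-in for an on-device model that would return the
    probability an image contains illegal material."""
    return 0.0  # stub: a real app would run a trained classifier here


def should_block(image_bytes: bytes) -> bool:
    """Block the image when the model is sufficiently confident."""
    return classify_image(image_bytes) >= BLOCK_THRESHOLD


print(should_block(b"example image bytes"))  # -> False with the stub
```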
An AI startup that lets users clone people's voices has introduced new safety measures after its tool was used by 4chan users for "malicious purposes". ElevenLabs opened its technology to the public last week and, within hours, saw it being used to have celebrities appear to read Mein Kampf and share hate speech. It has made the service available to paid users only and will ban accounts that break its terms and conditions.
Product and Engineering are two of six new tracks at this year's bigger and better TrustCon conference for trust and safety professionals. I was gutted to miss last year's event (EiM #175) and am hoping to submit a panel this year (if you want to discuss a joint submission, let me know). The deadline for proposing a lightning talk, presentation, panel, and/or workshop is Friday, March 17.
Everything in Moderation is your guide to understanding how content moderation is changing the world.
Between the weekly digest, regular expert perspectives and occasional explorations, I try to help people like you working in online safety and content moderation stay ahead of threats and risks by keeping you up-to-date about what is happening in the space.
Becoming a member helps me connect you to the ideas and people you need in your work making the web a safer, better place for everyone.
Platforms
Social networks and the application of content guidelines
Etsy has claimed that it invested $50m in its trust and safety efforts in 2022 alone, following new research from short-seller Citron that showed it had failed to keep counterfeit goods off the platform. The report lays out how mass-produced goods, which also appear on Amazon and Alibaba, crowd out Etsy sellers and prevent them from selling their handmade products. Andrew Left, Citron's founder, says the platform's "corporate culture on transparency and reporting should be challenged."
I don't know how Etsy arrived at that number, or how it compares to other platforms, but get in touch if you can shine a light on either.
People
Those impacting the future of online safety and moderation
Like the folks at New_Public (whose work and newsletter I heartily recommend), I read everything and anything by Ethan Zuckerman.
Working with Eli Pariser and Deepti Doshi of New_Public and Yale Justice Collaboratory's Tracey Meares and Tom Tyler, Zuckerman has come up with a new classification for social media to "make it easier to talk about complex topics". It's my read of the week.
You should go and read it yourself, but I particularly like the use of physical terms ("Big Room" and "Many Rooms") to give shape (almost literally) to online spaces. And I wonder whether the analogy could be extended further by linking platforms to a space everyone understands and can relate to: the office.
Offices have their own forms of governance (normally relating to the disposal of teabags or eating of smelly food) and could map neatly onto Zuckerman's new platform taxonomy. So how about "Open plan", "Cubicles", "Co-working" and "Home office"? Let me know what you think...
Tweets of note
Handpicked posts that caught my eye this week
- "I am increasingly interested in looking at dating apps as a site of trust and safety, data privacy, platform accountability issues" - Replies to Emma Leiken's tweet are a treasure trove of insights and research.
- "havent tested it out yet but i'm guessing it has less policy restraints than its more moderated ai colleagues" - Ben Decker, CEO of Memetica, notes how Gab is experimenting with Dalle-2 image generator. Yikes.
- "Identity verification on social platforms used to be a matter of trust and safety" - Josh Benton of Nieman Lab shares his piece on how platform safety becoming a tradeable commodity.
Job of the week
Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1600+ EiM subscribers.
Reddit is looking for a Senior Threat Analyst, Threat Detection to drive the detection and analysis of harmful activity on the platform.
The role involves applying what the platform calls "analytic tradecraft" to find suspicious and malicious activity targeting the platform and turning those findings into analytical outputs for stakeholders across the business.
Applicants should have 5+ years of experience in intelligence analysis, an understanding of cybersecurity and geopolitical issues and, ideally, SQL skills.
Reddit helpfully includes a salary (EiM salutes you, hiring manager) which is a not insubstantial $145,700 - $218,600.