
Platforms admit to 'soft moderation', Texas law twist and election denialism report

The week in content moderation - edition #174

Hello and welcome to Everything in Moderation, your once-a-week content moderation and online safety round-up. It's written by me, Ben Whitelaw.

This is the first time that new subscribers from Facebook, TaskUs, Unitary, Image Analyzer and Microsoft are receiving EiM in their inboxes, so a special welcome to you all. A shout-out, too, to several new members who have kindly parted with their hard-earned cash to support EiM and ensure it keeps hitting your inboxes.

After a break over the summer, I'm delighted to have the fourth instalment of the "Getting to Know" mini-series, in collaboration with the Integrity Institute. Read on for more information.

If you're into the vagaries of the US court system, this week's edition is one for you. Thanks for reading, here's what you need to know this week — BW


Policies

New and emerging internet policy and online speech regulation

The controversial Texas law banning large social media platforms from removing posts based on the viewpoint they express took its latest turn this week after it was upheld by an appeals court.

A quick timeline for those who have lost track of where we're at with this: it was signed into law last September before a federal judge blocked it a few months later (EiM #139); it was then allowed to take effect in May (EiM #159) but, not long after, the Supreme Court overturned that decision without explaining itself (EiM #162).

This week's ruling from the U.S. Court of Appeals for the 5th Circuit again stated that social media companies are "common carriers", an argument we've heard before and will no doubt hear again. The reaction has been a combination of shock and fear: Protocol called it "tech's season finale", which somewhat underplays its significance, while Vox called it "potentially an existential threat to the social media industry".

Not to be outdone, Florida asked the Supreme Court to decide on its own social media law, which has had a rollercoaster ride since May 2021 (EiM #119). When an expert like Genevieve Lakier says laws like this could "shape the operation of the internet really significantly", you'd better believe it's true.

In news from outside US courts, a new report has said that major platforms "must treat protecting elections and the democratic norms that go with them as a year-round job, not one that is suspended between elections" following a spike in election denialism. The 24-page report, authored by Paul M Barrett and Mariana Olaizola Rosenblat at the NYU Stern Center for Business and Human Rights, traces the trend back to the 2016 Republican presidential nomination campaign and points the finger in turn at YouTube, Facebook, Twitter and even TikTok.

Products

Features, functionality and startups shaping online speech

Almost a third of Americans on social media and gaming sites say they have "used emojis or alternative phrases to circumvent banned terms", according to a new survey published by Forbes. The findings — based on a sample of just 1,000 people, it should be noted — give weight to the idea that algospeak is on the rise (EiM #155) as users try to avoid filters and sidestep algorithmic downranking. And, as this Mashable report notes, anti-vaxxers are also cottoning on to the benefits (thanks to Erika for sharing this with EiM via Twitter).

[For what it's worth, Telus International, which produced the research, does not have a great record of looking after its contractors (EiM #142 and #153)].

UK company Logically is part of a joint venture that has received almost $700,000 from the US Department of Homeland Security to research white nationalism and white supremacy in gaming. The misinformation specialists have partnered with Middlebury Institute’s Center on Terrorism, Extremism, and Counterterrorism (CTEC), based in California, and the non-profit Take This. The use of video games to recruit teenagers to extremist causes has been an issue for some time but the extent of the problem has been debated because research has been limited.

Platforms

Social networks and the application of content guidelines  

Leaked audio of a TikTok meeting from 2021 has revealed that the video behemoth has been affording accounts with more than 5 million followers "more leniency" when it comes to moderation. The worst part of this story is that the meeting took place in the same month that Facebook was revealed to have a secret tool, called XCheck, to triage famous users (EiM #128). A TikTok spokesperson denied the claims that influencers follow different rules, suggesting the practice has been scrapped in the last 12 months.

A Pinterest employee has admitted that the platform favoured a policy of "lighter content moderation" in 2017, around the time that 14-year-old Molly Russell was using the site and shortly before she took her own life. Jud Hoffman, Pinterest's head of community operations, was giving evidence at Russell's inquest in London this week and admitted that the company's guidelines and tools were only strengthened in 2019, after her death.

Twitter has opened applications for researchers wanting to be part of its Moderation Research Consortium, which is designed to support the sharing of "comprehensive data about other policy areas" with researchers from academia, civil society and NGOs. Applicants must demonstrate a public interest research use case and have prior experience in data-driven analysis. I hope media organisations take up the opportunity to do better reporting on this crucial platform (EiM Exploration).

This one happened just before I sent last week's newsletter: YouTube announced new moderation measures to address violent extremist content. Details are very sparse — frustratingly, YouTube hasn't published a blog post, only a few tweets from its chief product officer — and so all we know is that the video platform will begin to "remove videos glorifying [violent] acts for the purpose of inspiring others or fundraising". Which, frankly, I'd hoped they were already doing.

New Q&A: Over the last few months, I've run Q&As with a number of integrity professionals as part of a collaboration with the Integrity Institute. The "Getting to Know" mini-series has been fascinating for me to hear directly from folks doing the work and, from what some of you have said, it's been helpful for you too.

After a short break, I'm glad to publish the latest instalment with Bri Riggio, the head of platform policy at Discord. It's heavy on varied career experience and creating space to learn. Have a read and let me know what you think.

Viewpoints will always remain free to read thanks to the support of EiM members. If you're interested in supporting more Q&As like this, become a member today.

People

Those impacting the future of online safety and moderation

Chances are that you don't know the name Anika Collier Navaroli. When the former Twitter employee gave testimony to the January 6th committee back in July, her voice was masked for fear of attack.

This week, she went public in an interview with The Washington Post, in which she gave details about joining Twitter in 2020, pushing the company to adopt a firmer approach to former US President Donald Trump and, eventually, seeing its decision not to do so culminate in the Capitol Hill riots.

She left Twitter last year — the interview doesn't say when — and shares how the period since has been "one of the most isolating times of my life.”

There's a lot of interesting stuff in there — not least Twitter's response that it took "unprecedented steps" to respond to threats in the lead-up to the 2020 election — but the bit that stood out for me was a Slack message that Navaroli sent on January 5th: “When people are shooting each other tomorrow, I will try and rest in the knowledge that we tried.”

Tweets of note

Handpicked posts that caught my eye this week

  • "It's an incredible feeling to have world leaders nodding along with you..as you talk about civil society and as you talk about transphobia" - Dia Kayyali shares what it was like speaking to world leaders and platform execs during the third Christchurch Call leaders summit.
  • "The idea of US free speech is putting marginalised communities in rest of the world at risk of harm." - Nighat Dad, founder of Digital Rights Foundation and Oversight Board member, with a timely and yet evergreen tweet.
  • "A little bombshell in the Online Safety Bill could dampen public debate on immigration & safe routes" - Dr Monica Horten, policy lead at Open Rights Group, notes how a line in the UK's incoming legislation could put platforms in a tricky situation.

Job of the week

Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1400+ EiM subscribers.

Are you hiring for your policy or operations team? Looking for researchers, analysts or engineers who care about user safety? Want to reach savvy, high-quality applicants who are passionate about the future of the web? I didn't have time to find a suitable job role this week but you can share your job ad here in future editions of EiM by becoming a member. Hit reply if you want to find out more...