Hello and welcome to Everything in Moderation, your guide to the rapidly changing world of content moderation and online safety. It's written by me, Ben Whitelaw.
Today's newsletter looks slightly different from normal because EiM has had a facelift: a new logo, easier-to-read newsletter font (something several of you asked for), clearer headings and even a new visual language to tie everything together. Plus no more emojis.
There's more work to do but it marks the next stage of the evolution of EiM and I'm really pleased with it. Thanks to Adam at Grammar Studio for his work and to EiM members whose support helped make the new look happen. Let me know if you approve.
Today's design update won't mean a great deal to new subscribers from The Berlin Social Science Center (WZB), Spectrum Labs, MLex, Article19, The Takshashila Institution and elsewhere but here's to you all being here for the next phase.
I'm on holiday next week so the newsletter will return on 17th June. Thanks for reading - BW
New and emerging internet policy and online speech regulation
A big development in the short, seesawing life of Texas' 'must carry' law: the Supreme Court has blocked HB20 just three weeks after it looked like it would come into force (EiM #159). The US' most senior court ruled 5-4 and, perhaps surprisingly considering the bill emanated from fears about anti-conservative bias on social media, the majority included both Democratic and Republican appointees.
If you want to get into the weeds, there are plenty of good threads knocking around but, before I move on, special attention should be given to the statement from John Bergmayer, Legal Director at Public Knowledge:
"It is alarming that so many policymakers, and even Supreme Court justices, are willing to throw out basic principles of free speech to try to control the power of Big Tech for their own purposes, instead of trying to limit that power through antitrust and other competition policies."
Elsewhere, New_ Public recently published a good overview of the Global Internet Forum to Counter Terrorism (aka GIFCT), the consortium created by Microsoft, YouTube, Twitter and Facebook almost five years ago. The piece charts the growth of its membership to 18 organisations (although not yet TikTok) and the evolution of its threat coordination system (TIL: the Buffalo shooting was the third time its highest tier has been deployed since 2019). It's also my read of the week.
Features, functionality and startups shaping online speech
Law enforcement alerts for missing or abducted children have been launched on Instagram, replicating a feature that sister platform Facebook has had since 2015. Amber Alerts, which were created after the death of a US schoolgirl in 1996 and are activated by local law enforcement, are typically targeted at people in a geographic area via SMS, email and other media. As of this week, they will also appear in users' Instagram feeds, starting in 25 countries.
This is a no-brainer but the cynic in me has questions: why has it taken a decade since Facebook bought the photo app to introduce alerts like this? Why weren't they introduced on Instagram when they were launched on Facebook some seven years ago? Might the announcement have been timed to coincide with the child sexual exploitation and abuse summit of policymakers and regulators taking place in Brussels this week? (remember Antigone Davis? EiM #130). And is such PR-friendly timing even a surprise anymore?
In funding news now, a UK-based startup using machine learning to moderate content has raised $5 million in seed funding. Checkstep claims to be able to moderate hate speech, misinformation, child sexual abuse material (CSAM), terrorism, illegal goods and copyright infringement (which is quite the list) and has an emphasis on flagging where platforms' policy guidelines have been breached.
The round was led by Dawn Capital and Form Ventures and comes after the company raised $1.3m in July last year (EiM #119). It is the third significant funding round for a moderation startup in the last 12 months, after Bodyguard (#151) and Hive (#119).
Social networks and the application of content guidelines
A new report has found that the far-right platform Gab has boomed since the Capitol Hill riots and has become the de facto home for "an openly white Christian nationalist demographic". Analysis by Stanford Internet Observatory found that its lax approach to moderation has seen it become a "significant platform for dissemination of anti-vaccine conspiracies, QAnon-related discourse, and, more recently, content supporting and organizing trucker convoy protests." The authors also found that "white genocide" and "great replacement" ideology — which was cited by the Buffalo shooter on Discord (EiM #160) — also thrives on the platform. Author David Thiel has more insights in this thread.
TikTok is the subject of a letter by New York government officials after the video app rejected ads for a public health campaign on marijuana. The state legalised weed for medical and recreational use in March 2021 and the New York State Office of Cannabis Management (OCM) has since tried to use the video app to educate individuals over 21 about the changes. But TikTok's ad policy (or its implementation) has not been updated to reflect that.
The wider context here is that platform advertising policies tend to go under the radar because advertisers only speak up (as in this case) where policies prevent them from trying to do good (which isn't often). Recently though, regulators (see the UK's Online Safety Bill, EiM #126) and platforms (Pinterest's climate ad misinfo ban, EiM #154) have begun to pay more attention to them and close loopholes. I expect the same will happen here.
As Elon Musk's bid for Twitter has stalled, so has the commentary around his plans for the platform and what it could mean for users' speech. But this Wired piece is a good reminder of what is at stake and, despite the headline, contains good background on takedowns and regulation in Turkey and India.
Those impacting the future of online safety and moderation
Few people have worked in trust and safety as long as Kevin Lee. Lee is currently VP of Trust and Safety at Sift; prior to that, he held various risk and spam prevention roles at Google, Square and, most recently, Facebook. In total, he has more than 15 years of this work under his belt.
Kevin's vast experience means I keep an eye out for his work and I got a lot out of this takeaways blogpost from the recent Marketplace Risk Management Conference (where he sits on the Advisory Board), particularly on the rise of collusion fraud in marketplaces. Sift works with the likes of Twitter and Airbnb to prevent fraud and abuse and is better off for having Kevin to call upon.
Tweets of note
Handpicked posts that caught my eye this week
- "She focused on content moderation issues while Zuckerberg focused on product. Under her tenure, Facebook has incited genocide in Myanmar and spread white supremacy in U.S" - Joseph Cox gives me a good reason to include news of Sheryl Sandberg's departure in EiM. Thank you Joseph.
- "Do I have any trust and safety contacts at Meta? Or anyone at Meta who can use XCheck?" - an all too familiar scenario, courtesy of Bryant Zadegan, in which Dr Siyab — a doctor with 400k followers on TikTok — is banned for, you guessed it, health misinfo.
- "Excited to welcome our members, friends and allies in fight [sic] against #childsexualabuse online to @WeProtect Turn the Tide summit" - Iain Drennan, WeProtect Global Alliance Executive Director (and a guest on a podcast that I recently hosted), seems glad to be able to hold an in-person summit again.
Job of the week
Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1000+ EiM subscribers or get in touch to enquire about a one-off posting.
The aforementioned Global Internet Forum to Counter Terrorism (aka GIFCT) is hiring an Operations Manager to lead day-to-day and longer-term operations out of its Washington, DC office.
The role is broad and includes (wait for it): Contracting, Compliance, and Risk Management, Financial Management, Personnel Support and IT/security among other responsibilities. It reports directly to Joannah Lowin, GIFCT's Chief of Staff, and the salary is $75,000 to $105,000, depending on experience.