📌 "World-first" online safety law, Ukraine u-turn and a different Nazi problem
Hello and welcome to Everything in Moderation, your guide to understanding how content moderation is changing the world. It's written by me, Ben Whitelaw.
New subscribers from Clubhouse, Spectrum, Privately, Form Ventures, ActiveFence, Inetco, Conscious Comms and Bytedance, don't be shy — hit reply and say hello.
I don't normally take a break at this time of year but the last few months have been intense and I need some time and space to think (not least about EiM membership). I'll be back in your inbox on April 1.
On the topic of next week, the Justice Collaboratory at Yale Law School is running a free one-day intensive Trust and Safety workshop on March 24th, teaching practices for building healthy, prosocial environments online. Find out more and register here.
Onto this week's must-reads — BW
📜 Policies - emerging speech regulation and legislation
It's finally here: the Online Safety Bill was yesterday introduced to the UK parliament, five years after work began on it and with a number of significant changes from the draft I wrote about back in May 2021 (EiM #112).
The big news is that company execs will be liable for jail time if they fail to comply with the legislation within just two months of the bill becoming law (it was going to be two years). It's a worrying development, and one reflected in some of the initial reactions to the bill:
- Rowland Manthorpe at Sky News sums it up best when he writes "No one much likes it, but no one can agree why."
- The Guardian's Dan Milmo focuses on the stories of online abuse victims.
- The Spectator's editorial bemoans the Bill's "soft censorship".
- Article19 doesn't mince its words in saying it "risks setting a dangerous precedent and providing a blueprint for excessive digital control in countries with authoritarian tendencies around the world."
- The director of Big Brother Watch writes for The Telegraph that "internet freedom as we know it will be a thing of the past."
For what it's worth, the two experts on this stuff that I read religiously are, let's just say, not very hopeful about the direction of travel. Let's see where we are in a few weeks' time.
Meta, which owns Facebook and Instagram, performed a notable u-turn just 72 hours after it was leaked that Ukrainians could temporarily call for violence against "invading Russians". Meta's president of global affairs, Nick Clegg, told staff in an internal post on Sunday that calling for the death of a head of state was now prohibited, as was "violence against Russians in general". Emerson T Brooking, writing for Tech Policy Press, said that the move "demonstrates an irreconcilable tension in trying to adapt content moderation policy to major conflict". It certainly feels like a policy decision we'll be studying for years to come.
If you're anything like me, you'll have found it a challenge to keep up with changes to platform policy in relation to the conflict, so here are two useful resources to bookmark:
- Tracking Social Media Takedowns and Content Moderation during the 2022 Russian Invasion of Ukraine (The Media Manipulation Casebook at Shorenstein Center)
- Russia, Ukraine, and Social Media and Messaging Apps (Human Rights Watch)
The independent-but-Facebook-funded Oversight Board has announced three new cases, including an Instagram post in Arabic which was reviewed six times by human moderators and involves the reclamation of derogatory terms for gay people. Complex doesn't even cover it.
Platforms are more likely to remove Islamist terror content than its far-right equivalent, according to the first transparency report released by Tech Against Terrorism. The report, which relates to alerts sent to platforms from its Terrorist Content Analytics Platform and covers the 12 months to November 2021, doesn't explain why but notes that an explanatory article with more detail will be published on its site soon. I'll share the link here next time.
💡 Products - the features and functionality shaping speech
Live streamers on Instagram now have the option to add a moderator to help maintain civility in the comments. The announcement comes just a few weeks after a prominent Pakistani actress was left in tears after being targeted with sexual harassment via a live video.
As Protocol points out, it's a mere half a decade since live videos were introduced on the platform, so it's frankly about time. Although that's nothing compared to Facebook, which introduced live video in 2015 and only got around to allowing moderators in, wait for it, December last year.
Become a founding member today and get a 10% discount for as long as EiM exists!
💬 Platforms - efforts to enforce company guidelines
Twitch has yet to implement strong enough safety tools to halt the flow of hate raids on marginalized streamers' channels, according to an open letter signed by a group of Black streamers. Color of Change noted the release of phone-verified chat, which has helped to limit hate (EiM #130), but said the company had otherwise failed to "conduct a racial equity audit that would allow Twitch to identify areas of growth and eliminate any manifestation of bias, discrimination, or hate across its products and as an employer."
Kuaishou, the short-form video platform with almost 1 billion users around the world, released its first transparency report this week. The report noted that 12 million videos (2% of the total) were removed, with almost half falling foul of its policy on illegal activities and regulated goods (mainly drug sales, money laundering and fraud). The app has a significant Chinese userbase but is known as Snack Video in Pakistan and Indonesia and as Kwai elsewhere.
👥 People - folks changing the future of moderation
If regulation goes the way critics expect, and automated takedowns become even more commonplace, there's going to be a growing class of people caught in the jaws of unjustified moderation decisions. The experience of Dean Reuter is the latest window into that grim future.
Reuter is General Counsel at the Federalist Society for Law and Public Policy Studies and wrote a non-fiction book in 2019 called The Hidden Nazi. No drama about that until, three months ago, he uploaded a YouTube video to his 76-subscriber channel to promote the book. It was promptly taken down, we suspect for referencing Nazis, before being restored after two weeks, presumably when he made representations. His video was subsequently watched 270,000 times.
Could being caught in a content moderation controversy become a tactic to promote your latest artistic endeavour? Who knows. But Reuter's story, mentioned in this research agenda published by Lawfare this week, bears remembering when it happens to you too. It's my read of the week.
🐦 Tweets of note
- "Content moderation: an online retailer just blocked me sending my Nan flowers with the message "you jumped out of a plane, & you'll kick COVID's ass too." - Andrew Smith from Oversight Board proves that bouquet jokes are dead.
- "We take a look at how content creators on platforms like YouTube moderate against abuse, and focus on how a common tool---word filters---could be reimagined to better support creators." - Amy X Zhang, who is a semi-regular in EiM before, shares a thread about her most recent work.
- "I have spoken to a lot of people over the last few months about her approach to the flagship legislation, which she has made her personal mission." - Politico's Annabelle Dickson on the most recent custodian of the Online Safety Bill, Nadine Dorries.