6 min read

Reaction to "insane" speech law, Meta's Kenyan lawsuit and coup for Koo founder

The week in content moderation - edition #159
Signs in a window in the United States (Dennis Fraevich, cropped and coloured)

Hello and welcome to Everything in Moderation, your guide to what's happening in online safety and content moderation, now and in the future. It's written, as ever, by me, Ben Whitelaw.

New subscribers from ActiveFence, Copenhagen Municipality, Ofcom, Unitary, the University of Amsterdam and more, thanks for joining. You can find out more about me and EiM here and get in touch by simply hitting reply.

In last week's newsletter, I asked whether the newsletter was getting too long and lots of you got in touch (thanks, by the way) to say you preferred a comprehensive overview. Which is lucky, because there are a number of important stories — from America's southern states to the east coast of Africa — that require close attention.

Here's what you need to know this week. Thanks for reading - BW


📜 Policies - emerging speech regulation and legislation

Back in September (EiM #126), I said it was hard to predict which way the controversial Texas bill would go. This week, we got the answer. NetChoice and the Computer and Communications Industry Association (CCIA) failed to block HB 20, as they had done with a similar law in Florida last year, meaning that services with over 50 million monthly active users must not restrict content based on "the viewpoint of the user", with immediate effect.

The reaction has been almost universal among the policy/legal experts that I follow: it's been called a "clear First Amendment violation", "insane", "monumentally stupid" and "constitutionally rotten", while American attorney Ken White, aka @Popehat, has a list as long as your arm of reasons why it's dumb.

The state's governor, who signed the bill, reacted by calling it "a big win for speech in Texas", although we have no idea what actually happens now; Protocol reported late yesterday that the Supreme Court may intervene as early as today. In the meantime, spare a thought for the experts having to make sense of this crazy decision and the folks, somewhere in the future, who are likely to be at the sharp end of this 'must-carry' law.

💡 Products - the features and functionality shaping speech

Moderators in AltSpaceVR, Microsoft's virtual reality platform, had to lobby, yes lobby, developers to get safety tools implemented. That's just one concerning nugget from this otherwise entertaining article from Slate writer Aaron Mak on being a "bouncer" in the metaverse. He spent a few days with Educators in VR, a group of volunteers who ensure metaverse events run smoothly, and heard that muting users and blocking people from entering certain areas were not prioritised. Mak came to the conclusion that "moderating in the metaverse is a delicate dance of guessing at motivations and making quick judgment calls."

Not strictly related to product matters but worthy of inclusion here nonetheless is a piece from TechRepublic on why Trust and Safety officers need to be part of a company's executive team. Experts from Spectrum Labs and Gartner make the point that "safety issues can have a terrible impact on a brand and its reputation, and can even be business-ending". I predict that, in five years, few companies will be without one.

💬 Platforms - efforts to enforce company guidelines

A significant update in the case of Daniel Motaung, the Kenyan former content moderator who was fired after trying to unionise for better pay and working conditions (EiM #153). On Tuesday, his lawyers filed a suit accusing Meta, Facebook's parent company, and outsourcing firm Sama of forced labour, human trafficking and union-busting. The lawsuit is seeking compensation for all Sama content moderators, mental health support on a par with that given to Facebook employees and the right to unionise.

Cori Crider, director of Foxglove, which is representing Motaung alongside Kenyan lawyer Mercy Mutemi (EiM #153), said that the lawsuit sends a message that "the days when you can get away with treating your content moderators as disposable and scaring them out of speaking are over". Meta denies that it employed Motaung and is seeking to have its name struck from the case.

Twitter's prospective new owner has questioned the nature of permanent bans and, in doing so, opened the door to allowing Donald Trump back onto the platform. Elon Musk, speaking at a Financial Times event (full transcript), said:

permanent bans should be extremely rare and really reserved for people where they’re trying for accounts that are bots or spam, scam accounts, where there is just no legitimacy to the account at all.

He called instead for tweets to be "made invisible or have very limited traction" where they are "illegal or otherwise just, you know, just destructive to the world". Reddit, you'll remember, quarantined r/The_Donald back in 2019 (EiM #28) in a similar vein.

Elsewhere on Twitter and Musk this week:

  • A research roundup by a professor of informatics and computer science at Indiana University "shows that weaker moderation policies would ironically hurt free speech", not improve it.
  • Musk's absolutist views "point to a future for Twitter where the platform openly permits hateful, right-wing, white nationalist conspiracies" according to Muslim Advocates Senior Policy Counsel Sumayyah Waheed.
  • Any changes to the way content is moderated will have a sizeable effect in Venezuela, where it is not bots but Nicolás Maduro's "digital troops" which pose the largest problem, according to this Global Voices op-ed.
  • Musk’s free speech absolutism could provide "an opportunity for [Middle East] regimes to extend their surveillance and harassment against dissidents", warns a good piece from The New Arab, noting that Saudi Arabia, Egypt, and the UAE have some of the highest rates of state-backed takedowns. My read of the week.
  • Meanwhile, US conservatives are getting themselves in a pickle about Democrat reactions to the overturning of Roe vs Wade on the platform. The New York Post carries an op-ed claiming that it is apparently fine "to incite violence on Twitter, as long as it’s for the right cause, against the right people". Diddums.

Alex Stamos, ex-Facebook and now Stanford, wrote a thread on the implications of Muskian moderation ideas that is also worth reading.

Finally, a notable story that I missed last week: Facebook intentionally took down the pages of Australian hospitals, charities and emergency services in order to gain favourable regulatory treatment from the Australian parliament, according to whistleblowers and court documents seen by the Wall Street Journal. The fact that this happened during a global pandemic, when Australians relied on these pages and associated Facebook Groups to stay connected, tells you everything you need to know.

👥 People - folks changing the future of moderation

Aprameya Radhakrishna has cropped up in EiM several times but you'd be forgiven for not knowing him just yet.

The co-founder of Koo, the Indian micro-messaging app that's a lot like Twitter, is influential in India's tech scene and has spoken about his platform's approach to moderation (EiM #100) and the recent rollout of a self-verification programme for users (🔭 Is authenticating users a good way to foster free speech on a platform?). In short, he's a company executive who knows more than a bit about how speech is moderated.

The rise of both Radhakrishna and Koo have been recognised in the inaugural Rest of World 100 global tech changemakers list, which has sought to find the "most influential, innovative, and trailblazing personalities in fintech, e-commerce, policy, digital infrastructure, and a range of other sectors that intersect with and influence technology".

Don't let the fact that Nick Clegg is also on the list fool you; it's a big deal.

🐦 Tweets of note

  • "Guess what? If your content moderation is aggressively opaque (e.g., absolutely ZERO information about why content is removed or accounts are banned) then people assume the worst" - Casey Fiesler of the University of Colorado Boulder on worrying reports of abortion content suppression.
  • "Publishing quotes asserting there is left bias in content moderation from sources who provide no evidence is - given the fact research suggests this is not the case - amplifying information that is at best misleading" - Professor of political communication Rasmus Kleis Nielsen on why media coverage of this stuff matters.
  • "I find this report insanely important - much larger implications" - Sticking with the theme, Jason Kint, CEO of Digital Content Next, riffs on why a report from The Washington Post on Vijaya Gadde is vital.

🦺 Job of the week

Checkstep, the AI content moderation tool, is hiring a Sales Engineer to "act as a bridge between the sales and marketing team, the technology team and our customers". The role also involves some exciting thought leadership work, writing blog posts about the product and its integrations.

The salary for the role is between £70,000 and £110,000, depending on experience. Too often companies hold back this information, so huge credit to Checkstep for making it public. Good luck to any EiM subscribers who apply.