
📌 How to decide what's dangerous, India's "good censors" and a toolkit for tackling abuse

The week in content moderation - edition #132

Welcome to Everything in Moderation, your weekly newsletter about content moderation, now and in the future. It's curated and written by me, Ben Whitelaw.

Welcome to new subscribers from TaskUs, Bumble and City University as well as to regular readers of the newsletter.

If you enjoy EiM, please consider forwarding it to colleagues and competitors alike or sharing today's edition via your favourite social network. Every one of you found the newsletter by word of mouth and I'd like to keep that trend going.

On to the hard stuff now. Here's the content moderation week that was — BW


📜 Policies - emerging speech regulation and legislation

A former UK minister of state has warned that the Online Safety Bill could become a vehicle for lots of people to "hang their own particular hobby horse". Ed Vaizey made the comments in an interview with TechCrunch as debate continued about whether the scope of the legislation should cover online fraud and advertising scams. The article also reveals that Vaizey interviewed for the role of chair of Ofcom, the body that will oversee the legislation, before controversy ensued. This interview suggests he's gunning for the job more than ever.

India's "competitive advantage in the tech economy has always been high-quality human capital at scale" and it has an opportunity to be the go-to place for "good censors", according to the founder of a Bangalore-based think tank. In an op-ed for LiveMint, Nitin Pai said he hopes that "market forces will drive companies and individuals to invest in training in ethics, responsible strategy and social impact analysis". Pai has some skin in the game — his organisation runs a public policy school — but business processing outsourcers, like Cognizant and Accenture, are likely to take the opportunity to develop new expertise and fresh business opportunities. Let's hope it's done the right way.

💡 Products - the features and functionality shaping speech

The week started with a trickle of tweets from journalists about LinkedIn blocking profiles in China and ended with the logical conclusion: Microsoft last night announced that the networking site will no longer operate in China. Mohak Shroff, Senior Vice-President of Engineering (who I felt was a strange choice to put forward), justified the decision by saying that the company had faced a "more challenging operating environment and greater compliance requirements". A replacement jobs-only site will launch later this year but will "not include a social feed or the ability to share posts or articles".

If you work in Trust and Safety, particularly on the product or tooling side, this event next week on better integrating trust and accessibility into data-intensive systems is worth a look. Run by researchers at Northumbria and Oxford universities, the event also gives attendees access to a toolkit outlining the risks of technology-mediated abuse.

Before we move on, I wanted to note a couple of Twitter's (not wholly successful) product efforts to make its platform a more pleasant place to hang out:

  • Twitter has begun testing downvoting on tweets with some users, following its announcement back in July. Judging by screengrabs that I've seen, plans for upvoting have been dropped (if you know more, do get in touch). Too positive for Twitter perhaps?
  • I've written a few times about Birdwatch, the collective fact-checking initiative currently being piloted among users (EiM #98, #125). But I didn't know, until Dan Nguyen spotted it, that promoted tweets are also eligible to be rated. Isn't it Twitter's job to check if its advertisers are who they say they are?

💬 Platforms - efforts to enforce company guidelines

The US-centric nature of platform policy was thrust into the spotlight this week as Facebook's Dangerous Individuals and Organisations list was published in full by The Intercept. The 100-page document, which the independent-but-Facebook-funded Oversight Board asked as recently as August to be made public, lists 4,000 organisations banned from maintaining a presence on Facebook and Instagram. Six categories, including Terror, Hate and Militarised Social Movement, also govern what Facebook users can say about them.

It's a significant leak and reflects what The Intercept's Sam Biddle believes is "a clear embodiment of American anxieties, political concerns, and foreign policy values since 9/11". The moderation guidelines designed to police these rules, published alongside the list, are equally anxiety-inducing. How is any moderator meant to parse edicts like this? My read of the week.

👥 People - folks changing the future of moderation

It's important to note, as I did in last week's newsletter (EiM #131), that the path that Frances Haugen has walked has been trodden by numerous others before her. Sophie Zhang, who went public about Facebook's failure to combat abuse by politicians and governments around the world (EiM #115), wrote recently about how it took 18 months for her to decide to go to the media.

Ifeoma Ozoma is another with a story to tell. The public policy expert and now founder of technology consulting firm Earthseed blew the whistle on former employer Pinterest after she was doxxed by a colleague and overlooked for pay rises and promotion. I've featured her in EiM for her principled stands against NDAs and white nationalists (EiM #122, #95, #90) and it's clear she knows what it's like to speak out.

Ozoma has now released a guide to the whistleblowing process for anyone who wants to do the same. "The Tech Worker Handbook" outlines options for encrypted communication and how to think through in advance what documentation might be needed to back up any claims. Every moderator, Trust and Safety practitioner and machine learning engineer should have it close to hand, just in case.

🐦 Tweets of note

  • "Just want to note that it is possible to fundamentally disagree, and have a civil, productive conversation online about content moderation" - Techdirt's Mike Masnick unveils perhaps the biggest surprise of the week.
  • "I'm pretty pessimistic about social media regulation from having worked at a federal regulator and seen it up close" - Mark Hanson, product manager at Twitter, pushes the idea of self-regulation for social media companies in this interesting thread.
  • "Making the world more connected had those huge upsides, but it also had enormous downsides." - Kate Klonick threading up a storm about Facebook's mission.
  • Bonus thread: "Most people find online debates more hostile than offline debates. The real question is: Why?" - Political science professor Michael Bang Petersen on status-seeking and why people tend to be jerks online.