Welcome to Everything in Moderation, a weekly newsletter for people interested in online speech and the policies, products, platforms and people that are shaping its future. It’s curated and produced by me, Ben Whitelaw.
A big welcome to new subscribers from Reddit, The Atlantic Council, Spotify, Google, the Calyx Institute, Cornell University and many more. Feel free to say hi and share where in the world you’re reading from.
Lots of you seemed to like the ‘Read of the Week’ idea that I tried in last week’s newsletter so I’m making that a permanent feature. Look out for it as you go.
This was the week that, would you believe it, content moderation made the front page of the New York Times. Read on for that and lots more from the past week — BW
📜 Policies - emerging speech regulation and legislation
Texas passed one awful bill this week and is inching its way towards another, this time about how political content is moderated. The bill — which looks very much like the Florida one blocked back in June (EiM #119) — would make it unlawful for platforms with 50m+ users to ban, de-platform or demonetise posts or users based on their political views or location. I’m expecting this to go the same way as the Sunshine State’s bill but you never know after what’s happened this week.
Is deplatforming the Taliban a good idea? Ali Breland at Mother Jones takes a closer look at the platform bans applied to the new government of Afghanistan (covered in EiM #124) and explains that “what’s to be done is less clear” than the platforms have made it out to be. This section, in particular, stood out to me:
whatever metric of brutality and violence that could be used to justify banning the Taliban would force tech platforms to at least consider banning these [Western] governments (and several others) if they were sincerely interested in equal application of their rules.
My read of the week.
💡 Products - the features and functionality shaping speech
A new safety mode will be available to a handful of Twitter users as the company tries to “reduce disruptive interactions” on the platform. The setting automatically creates a seven-day block for accounts that use offensive language or send multiple uninvited replies, although the automatic blocks can be manually overridden in the event that safety mode goes too far. It’ll be made available to a “small feedback group” before being rolled out more widely, according to a company blogpost.
Defaulting accounts to private, pausing notifications after 9pm and turning off auto-play videos are just some of the changes platforms have made in response to the UK’s Age Appropriate Design Code, which came into force this week after a one-year grace period. Covering similar ground to the Online Safety Bill (EiM #112), the code is designed to “create a better internet for children” by setting privacy and advertising standards that platforms must follow. Proof, once again, that product design is an important weapon in Trust and Safety teams’ armouries.
💬 Platforms - efforts to enforce company guidelines
I said last week (EiM #125) that the Reddit mods against disinfo story would unfold quickly and culminate in a blackout, and that’s how it turned out. On Monday, I wrote a long thread about why the story was undercovered by the media, after which a host of stories emerged, which quickly pushed Reddit into an embarrassing change of tack.
Facebook “quietly paid others to take on much of the responsibility” for content moderation, according to this explosive long read published by the New York Times on the role played by Accenture. According to the report, the consulting firm was paid at least $500 million a year for providing around one third (approximately 5,000) of Facebook’s mod workforce and, in return, gave the platform “a veneer of respectability”. Other nuggets from the piece include:
- Accenture’s San Francisco team working on the Facebook account was called “Honey Badger” (Having met one of these awful creatures, I can only imagine the relationship this team had with Big Blue).
- Hiring tech exec Arun Chandra to the role of VP of Scaled Operations, which I covered here back in June 2019 (EiM #27), was a big step in the partnership.
- New moderation contracts have been restricted since October and now require senior approval.
Finally in platform news: if you pored over last week’s recommended articles on the fallout from the OnlyFans porn ban and u-turn (EiM #125), you’ll also want to read this op-ed from Jillian C. York on the people affected by what she calls “the whims of the banking and financial tech industries”.
👥 People - folks changing the future of moderation
Not so much a person as a role this week.
Cloudflare, the web hosting company, is hiring for a Director, Head of Trust and Safety and boy is that a job with a lot of responsibility.
The job description explains the company is looking for someone “able to create policies and protocols that account for this complexity and effectively manage a global team responsible for implementing those changing policies”. So not much then.
The takedown responsibilities of “infrastructure services” (EiM #104) have been under the spotlight since Cloudflare pulled the plug on The Daily Stormer (EiM #11) back in 2017. That scrutiny is only going to increase in the coming years.
The previous incumbent — Justin Paine — seems to have moved to a new role in Threat Intelligence although it’s not super clear when this happened. I expect his successor to appear here in EiM before too long, hopefully for the right reasons.
🐦 Tweets of note
- “@Instagram has now suspended @plancpills” - Texas’ anti-abortion law is already having a direct effect on users, spots health educator and master’s student Hayley McMahon. (See also this useful thread from the Electronic Frontier Foundation).
- “Users commenting on current events would be banned for committing ‘picking quarrels & instigating trouble’” - Chinese Human Rights Defenders flags a new Weibo rule that effectively bans comments critical of state officials or state media.
- “Today, I resigned as a moderator nearly everywhere on @reddit where I held that title” - must-read thread from Mister Woodhouse following this week’s Reddit debacle.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.