4 min read

📌 Becoming a certified mod, Twitter unveils Communities and court orders in Brazil

The week in content moderation - edition #127

Welcome to Everything in Moderation, your weekly roundup about content moderation, curated and produced by me, Ben Whitelaw.

Hello to new subscribers from Sky, New York University and the House of Commons and to many loyal, longstanding readers. EiM’s growth comes from word-of-mouth so if you enjoy the newsletter, please consider:

  • Sending me an email with one piece of feedback (positive or negative!)
  • Sharing this edition in Slack, Discord or wherever you get your news about content moderation.
  • Subscribing here if it was forwarded to you.

(Thanks in advance 🙏🏽)

Onto this week’s roundup (including my read of the week) — BW


📜 Policies - emerging speech regulation and legislation

Brazil’s government this week issued new rules that mean social media platforms must get a court order to take down certain types of content, such as coronavirus misinformation. The measure is the latest play by President Jair Bolsonaro to “defend free speech” ahead of what looks likely to be a hotly contested 2022 national election. Platforms have 30 days to comply with the new rules, which expire after 120 days unless Brazil’s Congress votes to make them permanent.

A bad week for Australian media organisations, which were told by the High Court in the Dylan Voller case that they are responsible for comments on their Facebook pages. The court found that, by maintaining a page and publishing content, the organisations “facilitated, encouraged and thereby assisted the publication of comments from third-party Facebook users, and [were], therefore, publishers of those comments”. The ruling has far-reaching implications for journalism, especially if courts in other countries follow suit.

On the topic of the press, if you’re interested in how the UK’s Online Safety Bill could spell trouble for news organisations, I’ve written a piece on just that for the latest edition of the British Journalism Review. Again, I’m not hopeful.

💡 Products - the features and functionality shaping speech

Twitter’s push into topic groups continued this week as it launched a pilot of Communities, distinct user-owned and moderated spaces that allow users to tweet to a defined group of tweeters rather than to all of their followers at once (an issue I definitely have). A few interesting requirements to note from the Communities FAQs:

  • Communities initially need to be approved by Twitter during the pilot but will eventually be opened up
  • Admins and mods must have two-factor authentication turned on and their tweets must be public — which won’t work for everyone
  • Twitter, in its own words, “does not provide payment for creating, administering, or moderating communities”

It’s interesting to note how, while the VC world is ploughing cash into community-led startups, news media is backing away from being a destination for interaction and dialogue faster than ever. Salon, the progressive US magazine, is the latest to retreat, closing its comments section and stating that “conversations are mostly happening in different ways now, and it makes sense for us to adjust accordingly”. That means sending its readers to social media platforms to have their say. Which, in the context of everything else, just makes no sense to me.

💬 Platforms - efforts to enforce company guidelines

WhatsApp has more than 1,000 Accenture moderators tending to user reports, each of whom is expected to get through one report per minute, according to a long, detailed report published by ProPublica this week. The piece also lays out a process of ‘proactive’ moderation that scans unencrypted data, including the names and profile images of a user’s WhatsApp groups and their status message, to detect rogue actors. My read of the week.

I touched on ‘infrastructure as moderation’ in last week’s newsletter (EiM #126) and this week brought another significant story in the same vein: Reuters reports that Amazon’s AWS division is hiring a Head of Global Policy and plans to expand its Trust and Safety team with experts who can help it get ahead of future threats.

Indian women developers have called on GitHub, the Microsoft-owned developer platform, to put in place better moderation of the projects and code hosted on its site following the discovery of a site that rated and threatened women. Twitter and YouTube accounts associated with Sulli Deals (“sulli” being a derogatory term for Muslim women used by Hindu trolls) were also suspended.

Twitch staff have agreed to meet with streamers about new anti-abuse tools following ongoing hate raids on prominent Black users’ streams. The breakthrough follows concerted organising by streamers (EiM #123) as well as a September 1 boycott, #ADayOffTwitch, which The Washington Post says reduced daily viewership by almost 10%.

Finally in this section: Reddit yesterday launched its mod certifications in beta, designed to help mods “understand how to set up and run a community using Reddit’s suite of mod tools”. It follows Discord’s creation of its Moderation Academy (EiM #94) at the start of the year and is well-timed, given that some mods stepped back after the company’s recent Covid-19 disinformation debacle (EiM #126).

👥 People - folks changing the future of moderation

Football is increasingly looking like the battleground where platform reform might be won and lost. Over the last six months, we’ve had clubs’ accounts going dark, popular players like Thierry Henry (EiM #106) closing their accounts and then the grim reckoning after the Euro 2020 final (EiM #123).

That trend continued this week as two prominent Black British ex-players, the brothers Rio and Anton Ferdinand, spoke to MPs at separate select committee hearings about their experience of online abuse and hate speech:

  • Rio, the former England and Leeds United defender, described having to explain to his children what the monkey emoji meant and why people were posting bananas under his Instagram pictures.
  • Anton, who played for West Ham United and Queens Park Rangers, talked about the mental health toll of using social media and suggested that platforms’ inaction on abuse is linked to their business models.

Not everyone will agree with the brothers on every point; Rio, for example, placed his emphasis on AI, which we know isn’t the panacea that others, Mark Zuckerberg included, believe it to be, while Anton advocated for verifying accounts with passports or driving licences. But their first-hand accounts of the devastation that online abuse causes are nonetheless vital to making MPs aware of the scale of the problem.

🐦 Tweets of note

  • “How did they manage it - and why didn’t TikTok notice, millions of views later?” - Sophia Smith Galer, formerly of the BBC and now at Vice, with a good thread on how users bypass the video platform’s filters and AI.
  • “As Parliament returns and scrutiny of the #OnlineSafetyBill starts in earnest, here’s what lies ahead this autumn” - Carnegie UK Trust’s Maeve Walsh tees up a busy few months in the UK Parliament.
  • “remember the thousands of human moderators that look at the worst parts of humanity posted to the internet” - A Labor Day reminder from St. John’s University assistant professor Kate Klonick.

Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.