4 min read

📌 Section 230 report, friction tech and an unlikely advocate

The week in content moderation - edition #79

Welcome to this week’s Everything in Moderation and hello to new subscribers from ESRI, Seven League, the University of California, Irvine and Dentsu G that signed up before my break. I'm glad to be back.

If you've been subscribed to EiM for a while, this week’s newsletter might look a little different. Over the last few months, I've spoken to half a dozen subscribers to understand why they subscribe and what they would like to see improved. You can read more about my conversations with them here.

The changes I've made — which include a renewed focus on four key areas and a clearer, more consistent structure — are designed to help subscribers like you better navigate the newsletter and find content that's useful and interesting. I'd love for you to reply with a line or two about what you think.

The longer analysis pieces will return as soon as I can figure out a smarter way of bringing them to you. But for now, enjoy a round-up of this week's content moderation news - BW

📜 Policies - company guidelines and speech regulation

This report was published a few weeks back but I wanted to flag it for those who may have missed it. In it, Paul M. Barrett — an NYU Stern law professor whose last report I covered in EiM #68 — calls for the reform, rather than the repeal, of Section 230 and the creation of a new US regulatory body to administer platform responsibility. His piece in MIT Tech Review explains more.

It’s timely because, just last week, three Republican senators proposed a new bill, the Online Freedom and Viewpoint Diversity Act, designed to remove the protections that Barrett and countless others have argued are worth keeping.

If you’re interested in content regulation in Canada, check out this free panel discussion on Monday (21st September).

💡 Products - features and functionality

I talked a bit about ‘friction tech’ here last month (EiM #76) and it seems the idea has crept into Netflix’s new documentary ‘The Social Dilemma’ (I haven’t got round to watching it yet — if you have, let me know what you think).

According to Aza Raskin, who is co-founder of the Center for Humane Technology and appears in the doc, designing features to slow the sharing of false content could give the “human brain the chance to catch up to this impulse”. It's a nice idea but one that feels light years away from how products and services are designed in pursuit of attention and profit.

💬 Platforms - dominant digital platforms

i. Community guidelines, content policies, moderation rules: the names that digital services use for their standards are many and varied. And so is the way that these policies are applied around the world.

We see this with Facebook in Ethiopia, where Vice News reported this week that a failure to stop and remove genocidal attacks on the platform has led directly to political violence and killings. Facebook’s community standards, I found out in the piece, are not available in Amharic or Oromo (Ethiopia's two main languages) and there are only 100 outsourced staff assigned to the whole continent. A similarly bleak report came out of Sudan recently, where one activist has had untold issues getting Instagram to remove pictures of women taken without their permission.

This two-tier enforcement system is creating conditions similar to those we saw in Myanmar, where hate spread and violence erupted against Rohingya Muslims. All sadly familiar.

ii. Not all policies are intended for the public en masse; some are about marshalling discussion among staff. So it was interesting to see that both Alphabet-owned Google and Facebook issued clarifications this week to the moderation guidelines for their internal communication tools.

According to CNBC, two of Google’s tools — Memegen (a meme generator) and Dory (its Q&A system) — were described as ‘rarely’ respectful and had seen a significant rise in the number of flags in 2020 compared to 2019. At Facebook, plans for its internal forum were vague but there is expected to be closer moderation of staff discussions.

The increase has been put down to the rise of remote working and the active discussion of emotionally charged topics. I wonder how many other companies are seeing — or would admit to — a similar trend.

👥 People - those shaping the future of content moderation

i. Like me, perhaps you don't immediately recognise the name Ajit Mohan. Since last year, the former Hotstar CEO has been managing director of Facebook India and, when you consider the number of users in the country, arguably one of the most powerful people in the platform’s leadership.

This week, following pressure over the alleged preferential treatment of the BJP party, Mohan publicly denied that Facebook had a political bias and explained that content enforcement decisions are made separately from policy decisions.

It comes just a few weeks after it was reported that Indian government officials were discussing the creation of ‘standard rules’ for all social platforms that have a presence in the country. There are many reasons why that's a bad idea. (ICYMI, I covered India's content moderation reckoning in EiM #65).

ii. I never thought Kim Kardashian West would concern herself with content moderation but then nothing is predictable about 2020. This week the social media behemoth joined other celebs in freezing her account as part of campaign group Stop Hate for Profit’s week of action. Her messages didn’t give the impression she knew a huge amount about the issues at play but that’s where EiM can help. KKW, feel free to subscribe at your leisure.

🐦 Tweets of note

  • ‘I cannot keep up with the pace of new scholarship on content moderation and human rights!’ - David Sullivan at the Global Network Initiative on something we all feel (indeed, it's partly why EiM exists).
  • ‘I continue to be concerned about the ideological discrimination’ - despite no evidence, Republican senator Mike Lee continues to believe anti-conservative bias is rife on social media.
  • ‘At what point should these companies pay journalists for doing their content moderation for them?’ - Julia Angwin, editor-in-chief of The Markup, on Amazon’s frankly pathetic efforts to remove drugs from its platform.

Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.