
📌 The great podcast moderation problem, 'Facebook's chief fixer' and another dislike button

The week in content moderation - edition #156

Hello and welcome to Everything in Moderation, your weekly shot of news and analysis about online safety and content moderation. It's written by me, Ben Whitelaw.

A warm welcome to new subscribers from Stanford University, USA Today, Salesforce, the Integrity Institute and OSTIA, as well as a host of other people presumably thinking about the same questions that I am. And thanks to everyone who spends time with EiM each week.

In today's edition, there's a mix of the practical and the political as well as an interesting job role that caught my eye. I hope you enjoy it — BW


📜 Policies - emerging speech regulation and legislation

The podcast ecosystem "can and should employ far more robust content-moderation measures", according to a detailed review from the Brookings Institution. Analyst Valerie Wirtschafter and fellow Chris Meserole note in a blog post how Spotify and Google Podcasts lack "easy-to-use mechanisms for user reporting" (with gifs to prove it) and advocate for clearer language in their moderation guidelines. For example, Apple's current creator guidelines prohibit "mean-spirited" content, whatever the hell that means.

The UK's Online Safety Bill had its second reading in Parliament this week, meaning we saw a flurry of articles about its so-called "fundamental protections":

  • In The Telegraph, Secretary of State Nadine Dorries and the Children's Commissioner for England Rachel de Souza write, with strong 'we don't care' vibes, that "it’s hardly a surprise that some in the tech world don’t like the Bill".
  • Conservative MP Steve Baker and LSE professor Paul Dolan write in The Times that "ministers will face a difficult time from their own benches if the most egregious parts are not changed".
  • Commentator Paul Goodman called the Bill a "Christmas tree (that) risks becoming so laden with decorations as to come crashing down". Yikes.

💡 Products - the features and functionality shaping speech

A dislike button is being tested on TikTok comments, according to a company blog post last week, as part of efforts to clear up the space under videos. The ByteDance-owned app is also experimenting with reminders for creators who receive large amounts of spam and abuse about the safety tools — such as filtering and bulk blocking — that they have at their disposal. It follows Twitter releasing downvoting to some users earlier this year, as well as changes YouTube made to the way it displays its dislike count back in November 2021 (EiM #136).

Remember back in February when Twitter launched a dedicated site of third-party apps, including ones to keep users safe? (EiM #146). Well, those apps will now be highlighted at timely moments on the platform. It means that, if you block a user, you'll be prompted to download either Block Party, Moderate or Bodyguard (EiM #151). It's a nice idea and, as this TechCrunch piece notes, about time. But there's one problem: the toolbox has limited information about how apps are vetted or, crucially, when apps get their permissions revoked. Cambridge Analytica, anyone? (Thanks Ian for sharing)

💬 Platforms - efforts to enforce company guidelines

Meta, which owns Facebook, Instagram and WhatsApp, is "just not taking [moderation in Africa] seriously enough", according to expert voices in this new Guardian read. Violent speech and anti-vaccine voices have been left to flourish as a result of a failure to invest in content reviewers with real local knowledge, echoing what Frances Haugen — and plenty of others before her — noted last year. It comes hot on the heels of revelations that Kenyan moderators working for Sama were denied wellness breaks and that some were even fired for trying to secure an increase to their $1.50-an-hour salary (EiM #148).

In more of the same, Instagram has come under fire for being slow to act on user reports of 'tribute pages' of children in swimwear or revealing clothing. One flagged account with 33,000 followers was told it "probably doesn't go against our community guidelines" and another was allowed to keep posting until campaign group Collective Live caused a fuss about it on Twitter.

This story was published just after I hit send last week, but TikTok is under investigation by US government agencies over its handling of child sexual abuse material, according to the Financial Times. Investigators are looking into how a privacy feature is being exploited by predators to groom children.

Another week, another tranche of Twitter/Elon Musk commentary. I've come around to the idea that this isn't going to go away anytime soon, so here's the best of what touches on how the buyout could affect online speech:

  • Techdirt's Mike Masnick goes through Musk's TED interview and comes to the conclusion that he has "not even begun to think through any of (the tradeoffs)" inherent in moderating content.
  • For The Washington Post, Elizabeth Dwoskin speaks to a number of former platform workers, who are less than complimentary about Musk's vision.
  • Ripple CTO David Schwartz, a crypto advocate who you might expect to side with the Tesla founder, said it "looks like Musk has literally never spoken to anyone who has tangled with social media moderation problems at all".

👥 People - folks changing the future of moderation

The physical edition of Wired dropped through my letterbox this week (old skool, I know) so I finally got around to reading the long read on Joel Kaplan that was bouncing around social media last month. His influence drips from every paragraph and rarely in a good way.

Benjamin Wofford, the author of the Wired piece, describes Kaplan as Facebook's "chief fixer" in a new podcast courtesy of Tech Policy Press. And you can see why: Wofford tells a frightening story about Kaplan jumping on an emergency company call in December 2015, prompted by Donald Trump posting on Facebook about banning Muslims from the United States, while he was actually in India schmoozing the BJP government (which, we now know, Facebook has close ties to).

Kaplan is still there, eight years after his elevation to global head of public policy, and shows no sign of moving any time soon.

🐦 Tweets of note

  • "This is a significant lobbying failure on the part of big tech" - Internet policy buff Konstantinos Komaitis reflects on the Digital Markets Act and why the platforms won't be happy.
  • "The FBI compiled an internal 83-page guide to internet slang which is now available online thanks to a 2014 FOIA" - Tech reporter Taylor Lorenz surfaces this absolute doozy of an article on internet slang.
  • "They kind of care about money, but mostly they wish you would shut up and be civil." - Former Reddit CEO (2012-2014) Yishan Wong says what we kinda all knew about the Silicon Valley tech giants and their approach to speech.

Bonus thread: Dave Willner, head of product policy at OpenAI, responding to Wong, talks about the ripple effect of abuse (also check out the thoughtful replies).

🦺 Job of the week

This is a new section of EiM designed to help companies find the best trust and safety professionals and enable folks like you to find impactful and fulfilling jobs making the internet a safer, better place. If you have a role you want shared with EiM subscribers, get in touch.

Amazon is looking for a Head of Customer Trust and Product Policy, EMEA to "apply and communicate internal and external content and safety policies".

The job can be based in one of four cities — London, Munich, Berlin or Luxembourg — and although the description doesn't disclose salary, LinkedIn's search filter says this role is paid between £60,000 and £70,000. Which, to be honest, isn't a lot for what feels like a senior position.