
📌 How to spot abuse, New Zealand's legislative review and mapping hate

The week in content moderation - edition #117

Welcome to Everything in Moderation, your weekly newsletter about content moderation. It’s curated and produced by me, Ben Whitelaw.

A special welcome to new subscribers from Georgetown University, Hogeschool Utrecht, Harvard University, the Australian eSafety Commission and elsewhere.

If this edition of EiM was forwarded to you, subscribe here. If you like what you read, forward it or share it in Slack, Discord or wherever you talk shop about content moderation. And if you're not getting what you want from the newsletter, feel free to unburden your inbox. I won't mind.

This is what you need to know from the past week — BW


📜 Policies - legislation and company guidelines

New Zealand has initiated a review of its content regulatory framework as it attempts to reduce the prevalence of online abuse and disinformation on the dominant digital platforms. In doing so, it joins a growing list of countries (EiM #100) designing legislation for their own ends.

The country, you may remember, has been a key proponent of online regulation since the Christchurch terror attack in March 2019, which led Prime Minister Jacinda Ardern to create the multilateral Christchurch Call to clamp down on terrorist content. This new review, which has no timeline as yet, takes that a step further.


💡 Products - features and functionality

Now, this is interesting. Facebook has announced that it will make several new moderation tools available to Group admins and mods, including the ability to limit how often individual users can post and to restrict how many comments all group members can post within a set timeframe. It is also testing "conflict alerts" that let admins know if there's an unhealthy conversation taking place.

It's notable because it's the first time (to my knowledge) that Facebook has created tools that work directly against its business model of driving more engagement and thus more ad revenue. A study out this week suggested placing limits on group sizes and contributions could help avoid the spread of hate. Perhaps Facebook staff read it and are taking note?
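For the technically minded, the comment restriction described above boils down to a sliding-window rate limit. Here's a minimal, purely illustrative sketch — the names, thresholds and logic are my own assumptions, not Facebook's actual implementation:

```python
import time
from collections import defaultdict, deque

# Illustrative admin settings: at most 5 comments per member per 10-minute window.
MAX_COMMENTS = 5
WINDOW_SECONDS = 600

# member_id -> timestamps of that member's recent comments
recent_comments = defaultdict(deque)

def can_comment(member_id, now=None):
    """Return True if the member is still under the group's comment limit."""
    now = time.time() if now is None else now
    window = recent_comments[member_id]

    # Drop timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()

    if len(window) >= MAX_COMMENTS:
        return False  # over the limit; the group's settings would block or hold this comment

    window.append(now)
    return True
```

The real feature presumably works at a very different scale and with far more nuance, but the basic idea — count a member's recent comments and stop accepting them once an admin-set threshold is hit — is the same.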

💬 Platforms - dominant digital platforms

New research by George Washington University shows for the first time how online abuse travels across social networks. Analysis of six networks, including Facebook, VKontakte and Gab, found over 1,200 'hate clusters' in data spanning a little over a year and mapped the many inbound and outbound interconnections between the six networks. The takeaway is that malicious activity can seem like it has been eradicated when it has in fact moved to another, less moderated platform. Worrying news for our information ecosystem.

A handful of Jewish creators on TikTok believe they have been targeted by a mass reporting campaign against them following a spate of bans for soft violations of the platform's community guidelines, according to NBC News. TikTok says the takedowns, which included counterspeech videos drawing attention to anti-Semitic abuse on the app, were made in error, an excuse that Twitter (EiM #99) and other sites (EiM #67) have become increasingly accustomed to using. TikTok, don't forget, has a history of hosting racists (EiM #61).

👥 People - those shaping the future of content moderation

A good piece by Rest of World goes deep into Nigeria's decision to suspend Twitter from operating in the country after it removed a tweet by President Muhammadu Buhari. And what's becoming clear is the important role played by Lai Mohammed, the country's Minister of Information and Culture.

It was Mohammed who previously accused the platform of "double standards" for not removing the posts of a separatist political leader, and he who has ignored Twitter's requests for a meeting this past week. The minister and other government officials are reportedly "gearing for war" and have even reached out to China for guidance on how to regulate social media. On Wednesday, Mohammed also said Twitter CEO Jack Dorsey was "vicariously liable" for the #EndSARS protests last year, adding fuel to an already fiercely burning fire.

🚨 Have your say

Do you work as part of a trust and safety team? Or do you know people that work in content moderation companies? I’m helping Kinzen, a Dublin-based technology company that uses a mix of human expertise and artificial intelligence to tackle harmful content, work out how to support and empower people creating and enforcing content policy. If you’d like to take part, complete this short survey (5 minutes) by Thursday 24th June 2021.

🐦 Tweets of note

  • "Everything is a content moderation problem" - law professor evelyn douek reflects on news that anti-vax fake reviews are flooding Yelp.
  • "Deplatforming is effective in minimizing the reach of far-right channels" - Adrian Rauchfleisch, assistant professor at National Taiwan University, shares his new preprint on what happens when the likes of Alex Jones are booted off YouTube.
  • "Aren't we all sick of this shit?" - marketer Jared Hatch doesn't hold back as he shares a worthwhile Columbia Journalism Review piece.

Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.