
📌 Verifying who you say you are, Canada's regulatory experts and banning climate misinfo

The week in content moderation - edition #154

Hello and welcome to Everything in Moderation, your online safety and content moderation week-in-review. It's written by me, Ben Whitelaw.

Welcome to new subscribers from ActiveFence, Ofcom, Discord and a host of other folks at the International Journalism Festival, where I'm on a panel tomorrow about the internet's essential workers.

To coincide with the festival, I wanted to highlight the particular challenge of abuse and threats faced by reporters, particularly women and journalists of colour. The subsequent Viewpoints Q&A with former Jigsaw research lead Tesh Goyal hopefully provides some hope that, with more investment, that won't always be the case. Do have a read.

Today's edition covers everything from Ukraine to moderating virtual experiences. Let me know what you think and, if you find EiM useful, consider becoming a supporting member — BW


📜 Policies - emerging speech regulation and legislation

Forty-four days into the Ukraine war, significant changes continue to be made to how major social media companies manage and moderate content about the conflict.

The major shift this week was at Twitter, which announced that it would stop amplifying state-run accounts via the timeline, search or explore pages and would limit accounts that post videos or images of prisoners of war.

Over the last two weeks, questions have been asked about how platforms will maintain evidence that might later be used in trials for war crimes and, as Tech Policy Press points out, it's clear that these sites "were never designed with atrocity documentation in mind". Twitter's announcement speaks to an acknowledgement of that vital function.

Elsewhere in Ukraine developments:

  • Findings by social media research collective Tracking Exposed and reported by Wired suggest that TikTok's policy to prevent Russian users from seeing videos from outside the country created a "vast echo chamber intended to pacify president Vladimir Putin’s government".
  • Enforcement teams at Meta, according to the New York Times, have been unable to "keep up with shifting rules about what kinds of posts were allowed about the war in Ukraine" because they were changing so frequently.

This one came out just before last week's edition hit your inboxes but is worth noting: the Canadian government has announced a new expert advisory group on online safety to help develop legislation to address harmful online content. The group, which consists of a mix of law and media professors (as well as a few EiM subscribers), will hold nine workshops and meet with Canadian representatives of the major social media platforms. There's no timeline for when their work will be completed but it will be published online.

Regulatory experts had criticised Canada's previous plans (EiM #118) to introduce new legislation within 100 days of the new parliament. That deadline passed in February and the unveiling of this group appears to usher in a less rushed, more transparent regulatory process.

💡 Products - the features and functionality shaping speech

Koo, the Indian Twitter clone, now allows users to verify themselves on the platform using a government-approved ID in a move the company claims will promote "responsible behavior on the platform". The process reportedly takes less than 30 seconds and, in return, verified users get a green tick, although it's not clear how that affects algorithmic ranking and recommendation systems. Koo has been featured in EiM for its widespread coverage of Indian languages (EiM #100) but also for its close ties with the country's ruling political party, the BJP.

They seem like unlikely bedfellows but Epic Games and the Lego Group announced a partnership this week to "build an immersive, creatively inspiring and engaging digital experience for kids". Although it's not yet clear what this will look like (A new game? A VR experience?), the release makes clear that safety and privacy will be paramount. Both companies have a decent record when it comes to online safety following significant investment over the last few years, notably Epic's purchase of kidstech company SuperAwesome.

Enough has been written about Elon Musk's new 9.2% stake in Twitter without me adding to the noise, but it's interesting to note the free speech/moderation angle of some of the analysis (as well as the appearance of names that will be familiar to regular readers of EiM):

  • Tracy Chou, the founder of Block Party, noted in a report from Slate that relaxing moderation guidelines means there's a need to "balance that by giving individuals more control over whether or not they want to engage".
  • FreePress' Tim Karr, in The Hill, says Musk might have "ideas about how to moderate better and in more sophisticated ways".
  • Richard Kramer of Arete Research says on CNBC that "content moderation is a new tax on social media" and that Twitter will lose advertisers if it allows free speech.

Exclusive read: Journalists, especially women reporters and people of colour, are facing a tidal wave of abuse for simply doing their job. I've seen it first hand and it's getting worse by the day.

To coincide with the International Journalism Festival in Perugia, I spoke to Tesh Goyal, formerly Jigsaw's user research lead for Conversation AI, about what can be done to mitigate the social media toxicity that journalists face. His recent research and the development of Harassment Manager — a tool to better evidence threats and abuse — are worth checking out, whether you're a journalist or not.

Viewpoints will always remain free to read thanks to the support of EiM members. If you're interested in becoming a founding member, join today and receive a 10% lifetime discount.

💬 Platforms - efforts to enforce company guidelines

Pinterest has become the first major platform to ban climate misinformation to "cultivate a space that's trusted and truthful". The policy, which was devised with help from the Climate Disinformation Coalition and the Conscious Advertising Network, applies to both user-generated content and advertising.

Facebook suspended 400 Filipino accounts, pages and groups responsible for hate speech and misinformation as the company gears up for the country's election on 9 May. In a blog post, the company also explained that it would activate an Election Operations Center, although it didn't say when or with how many people. The announcement comes just days after a reporter for the Philippine Daily Inquirer wrote about how her efforts to report content on the platform "always ended in disappointment" and were only successful "by flagging content as spam rather than flagging it as hate speech or fake news". I'm sure the two aren't linked.

👥 People - folks changing the future of moderation

Australia was an early force in the push for greater online safety regulation and, in 2016, appointed the world's first online safety regulator, Julie Inman Grant.

Grant, an American who has lived down under for over 20 years, was involved in the development of Section 230 of the Communications Decency Act and worked for Microsoft, Twitter and Adobe before taking up the role. She's a respected name in the online safety space and, as she explains in this webinar with All Tech is Human last year, can count the safety by design movement as one of her major achievements.

This week, Grant turned her attention to the risks posed by the metaverse and, in an interview with the Sydney Morning Herald, said that governments need to come together to "counter the might, the wealth, and the stealth of the technology giants".

Nothing particularly surprising perhaps, but it sounds to me like a prelude to her office assuming further powers in the not-so-distant future.

🐦 Tweets of note

  • "if the Oscars joke about a woman with alopecia were made in a tweet after the UK's OnlineSafetyBill had been enacted.." - UK internet law expert Graham Smith gently mocks the UK's incoming legislation in this poll.
  • "Ok, this expansion to Indian channels seems concerning." - Prateek Waghre, the editor of the Information Ecologist newsletter and a friend of EiM, is worried about the Indian government's takedown of 18 YouTube channels used to spread "anti-India" information.
  • "Who knows what kinds of legitimate content/debate/discussion may be obscured or hidden as a result of this so-called 'duty'" - Big Brother Watch legal and policy officer Mark Johnson doesn't see eye to eye with Twitter's safety mode.