Hello and welcome to Everything in Moderation, your all-in-one weekly guide to the world of online safety and content moderation. It's written by me, Ben Whitelaw.
Unforeseen circumstances meant the EiM summer break was a little longer than planned, but I'm glad to be back in your inbox.
A special welcome to those who have become subscribers to EiM since the last edition; people from the Technology Coalition, Microsoft, MSN, Bumble, Adobe, Google, Logically, Linklaters, Sidechat, ActiveFence and elsewhere. To the two dozen EiM members who support the newsletter, a big thanks to you too.
I write EiM (almost) every week to help people like you stay on top of this fast-evolving space and to try and unpack what the changes — to policies, products and platforms — mean for you and your organisation. Do drop me a line and let me know what you're working on and what brought you here.
Here's what you need to know this week - BW
New and emerging internet policy and online speech regulation
Hate speech in Japan is a huge problem for platforms like Twitter and Wikipedia and lurks "out of sight" of mainstream media coverage, according to an interesting read from TIME magazine. Under-resourced teams of moderators and the failure to create counter-narratives mean that historical revisionism and xenophobic views of Korea and China are rife online.
The article explains how the main culprits are the "netto-uyoku": loud, far-right Japanese netizens who represent just 2% of Japanese internet users but account for the majority of the abuse. They target many of the country's minority groups, including Zainichi Koreans. My read of the week.
Why does this happen? Well, there's a clue lurking in the fact that Yahoo! News Japan — which reportedly radicalised the gunman who shot former Japanese prime minister Shinzo Abe — has 10.5 million comments a month (around 350,000 a day, some 15,000 an hour) and just 70 moderators to review them all. You do the maths.
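If you'd rather not do the maths yourself, here's a quick back-of-envelope sketch (assuming a 30-day month and comments spread evenly across the day, neither of which the article specifies):

```python
# Back-of-envelope moderation load at Yahoo! News Japan,
# using the figures reported in the article.
monthly_comments = 10_500_000
moderators = 70

daily = monthly_comments / 30              # comments per day
hourly = daily / 24                        # comments per hour
per_moderator_daily = daily / moderators   # each moderator's daily queue

print(f"{daily:,.0f} a day, {hourly:,.0f} an hour")
print(f"{per_moderator_daily:,.0f} comments per moderator, every day")
```

That works out to roughly 5,000 comments per moderator per day — hundreds an hour even on a generous shift pattern, before you account for weekends, holidays or anything needing a second look.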
It's from a week or two ago but this Stratechery long read about Google’s approach to child sexual abuse material (CSAM) and how it unwittingly almost cost a man his whole digital life is worth a read. It’s primarily a story about privacy but there's also a strong thread about the unintended consequences of well-meaning policies and the “unprecedented power [of a company’s policies] over people’s lives”. That could equally be said for all rules relating to moderation too.
Features, functionality and startups shaping online speech
Twitter has announced that it will expand Birdwatch in the coming weeks, allowing around half of its US users to see notes providing additional context. It marks the biggest expansion of the user-led fact-checking pilot since March 2022, when the company revealed that users were 20-40% less likely to believe a misleading tweet if they read a note about it (EiM #151).
But it's not the only instance of Birdwatch in the news this week: an audit commissioned by Twitter's former head of security, Peiter Zatko, noted that the pilot was a conspiracy risk and was not prioritised by the company. The Washington Post reported that internal disinformation experts had "little guarantee that their safety advice would be implemented in new products such as Birdwatch." Which is a worry as it continues to expand.
Social networks and the application of content guidelines
Not for the first time, Instagram has removed Pornhub's account on the photo-sharing site, citing a breach of its community guidelines. The adult video company does not post explicit content on the platform (I'm not a follower, FWIW) but, four days later, its account — with some 13 million followers — remains unavailable.
What's particularly interesting here is the content and tone of the reaction from the Mindgeek-owned company. In a statement, it takes aim at "Instagram’s overly cautious censoring of the adult industry" and notes that "thousands of adult performers deal with [this] everyday [sic] despite not violating any of Instagram’s terms of service." Yes, that's the same Pornhub that was found to host videos of child abuse and rape and criticised for its own moderation practices just two years ago (EiM #94). Pretty high and mighty, you have to say.
There have been some updates in the ongoing 'right-wing apps in the Google Play Store' storyline: Parler has returned to the home of Android apps after upgrading its moderation practices: users can now block and report each other and the company says it has a team monitoring speech on the app. The news is not so good for Donald Trump's Truth Social, which still violates app store terms and conditions for lacking "effective systems for moderating user-generated content".
YouTube has removed a channel run by "Britain's most racist YouTuber" following an investigation by The Times to unmask his identity. James Owens, publishing under the moniker 'the ayotollah', used coded terms and algospeak to insult Jews and Black people, thus avoiding filters that might have seen his far-right videos flagged earlier. YouTube reiterated that it "strictly prohibits content that promotes violence or hatred against individuals or groups based on certain attributes" but a recent report noted that it has a long history of profiting from white supremacy.
Wikipedia is in hot water in India after a cricketer's page was edited to include hate-filled language, prompting the country's Minister of State for Electronics and IT to call out "this type of misinformation”, which is banned under India's IT Rules (EiM #103 and others). The Hindu has an in-depth piece about the whole saga and how Wikipedia conducts its moderation. A story to watch unfold.
Those impacting the future of online safety and moderation
Back in October 2021 (EiM #134), I included Sahar Massachi and Jeff Allen in this section after it was announced that they were starting a new organisation to help shape a better social internet.
The organisation deserves another mention following the announcement that it has raised $1m in funding to continue its work from the likes of the Omidyar Network, Knight Foundation and others (full list).
Unlike before, the Integrity Institute is more than just Sahar and Jeff; there are five of them (plus 14 fellows) working to create a community of integrity professionals. And it's why I partnered with the team on the mini-series featuring some of the Institute's members.
(FYI, I wasn’t paid for this and, for new EiM subscribers, I always label any partnerships or job ads which are paid for.)
The Institute’s presence and approach are a net plus for the web right now and I’m glad it will continue to be around.
Tweets of note
Handpicked posts that caught my eye this week
- "We know this legislation has the potential to make a huge difference to online safety" - Glitch UK takes a trip to Westminster with the Online Safety Bill in the balance.
- "Content moderation is hard" - US lawyer Andrew Fleischman gets on the wrong side of Twitter's policy.
- "Someone please leak Meta and Twitter's Queen death content moderation protocols" - UCL law professor Michael Veale gets all patriotic.
Job of the week
Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1200+ EiM subscribers.
Grindr is looking for a Content Moderation Operations Manager to join its team based out of Detroit but working remotely.
The role involves a mix of vendor management, product design, staff training and operationalising policy as well as acting as an escalation point for "high crisis incidents that pose a risk to the company brand or to our users". You'll score well in an interview if you know additional languages or have worked in a global role.
I've asked for more information on salary and will report back next week if I hear more. I'm a big fan of Grindr's work over the past few years and this is a great chance to join a smart, inclusive team.