Hello and welcome to Everything in Moderation, your guide to what's changing in content moderation and online safety. It's written by me, Ben Whitelaw, and supported by 15 EiM members.
(Support the newsletter by becoming an individual or organisation member)
It's been two weeks since EiM last appeared in your inbox, which means there's a bumper batch of subscribers to welcome to the list including folks from Zoom, Westend Strategy, Reuters, Pattr, Spotify, Unitary, Milltown Partners, Middlesex University, Discord and a host of others. Thanks for coming on board.
I've consciously focused on the news and analysis from the last seven days, although there are one or two exceptions. Let me know what stories caught your eye or drop me a line with what you're working on — your feedback helps shape EiM and ensures it hits the mark each week.
Onto this week's round-up - BW
New and emerging internet policy and online speech regulation
Nigeria has announced plans to regulate social media companies by sharing a draft of a new code of practice. It includes a number of familiar platform provisions, including appointing a designated representative in the country, providing a compliance mechanism to avoid publication of prohibited content and supplying timely information to the government on accounts and content that violate Nigerian law.
The wider context here is that Nigeria has not been hesitant to come down hard on platforms in the past; you'll remember that Twitter was banned for seven months starting in 2021 (EiM #143) after deleting a tweet from President Muhammadu Buhari (EiM #115). The move towards legislation is likely to exacerbate tensions that, according to TechCrunch, have yet to subside.
A ruling from the independent-but-Facebook-funded Oversight Board has overturned Meta's twice-made decision to remove a post containing Arabic homophobic slurs. The Board said the post did not violate the Hate Speech policy on the grounds that the slurs were “used self-referentially or in an empowering way”, something that was clear from the text of the post. In another case, the Board upheld the decision to restore a graphic video showing violence against a Sudanese civilian.
Amendments to India's Information Technology rules, announced last week, mean that platforms now have to respond to user complaints that "threaten the integrity of India" within 72 hours, rather than the previously stated 15 days. As outlined by Indian Express, there will also be a grandly-titled Grievance Appellate Committee to review content moderation decisions taken by platforms. Livemint summed it up nicely when it wrote that "it’s a bad idea for the government to wade into the messy business of social media restraints as a super-arbiter." Don't expect that to be the end, though.
A new child sexual abuse material (CSAM) policy paper shows that the European Commission is "vulnerable to lobbying from motivated coalitions seeking to securitize some types of policy", according to a platform governance researcher. Robert Gorwa, a postdoctoral fellow at the Berlin Social Science Center, looks at the "chat control" proposals in a detailed piece for Lawfare and concludes that maybe Ashton Kutcher and Demi Moore shouldn't have access to high-level European officials. My read of the week.
Not related to policy directly but worthy of inclusion: a new non-profit designed to improve the success rate of litigation against tech companies will be set up by whistleblower Frances Haugen, according to Politico. Beyond the Screen will recruit staff from around the world as part of its efforts to expose "the ills of social media companies".
Haugen also used this week to criticise the reliance of platforms on so-called 'artificial intelligence' during a panel discussion with Daniel Motaung, the Kenyan moderator and whistleblower currently in a legal battle with his former employer Sama and Facebook/Meta. TIME's Billy Perrigo, who moderated the panel, has the write-up.
Features, functionality and startups shaping online speech
Good news for moderators of Discord servers: the chat platform has made available a new tool to help admins manage conversations, bans and spam. AutoMod, much like the Reddit and Twitch equivalents, detects and deletes phrases and portions of words before they become visible and can also put in place time-outs for offending users, according to The Washington Post. Apparently, AutoMod has been in the works for “quite some time” and wasn't a reaction to the Buffalo shooting (EiM #160), which I can believe; the company has previously been proactive in trying to raise the standards of its moderators (EiM #94).
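For the curious, tools in this category generally boil down to a blocklist plus substring matching. Here's a minimal sketch in Python of that general approach — the word list and function names are my own illustration, emphatically not Discord's actual implementation:

```python
import re

# Hypothetical blocklist; real tools ship curated lists and let admins add their own.
BLOCKED_TERMS = ["badword", "slur"]

# One pattern matching any blocked term, even embedded inside a longer
# word (the "portions of words" behaviour described above).
BLOCKED_PATTERN = re.compile(
    "|".join(re.escape(term) for term in BLOCKED_TERMS),
    re.IGNORECASE,
)

def should_hide(message: str) -> bool:
    """Return True if the message should be blocked before it becomes visible."""
    return BLOCKED_PATTERN.search(message) is not None
```

Real systems layer on normalisation (leetspeak, Unicode lookalikes), allowlists and per-server configuration on top of this, which is where most of the hard moderation work actually lives.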
Google's Perspective API lacks transparency and would benefit from "well-designed archives of explanations, gifs, dials, and contextual examples" that help to explain its machine learning models, writes design researcher Caroline Sinders in the new edition of New_ Public's magazine. Sinders questions the lack of context of the open-source code and says it's easy for software companies to create the "appearance of transparency". Amen to that.
PS I've also shared a few other very good New_ Public articles via @eimdotco, in case you're a tweeter and not already following.
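For readers who haven't used it, Perspective is a REST API: you POST a comment and get back per-attribute scores. A minimal sketch follows, using a placeholder API key; the endpoint and attribute names come from Google's public documentation, but treat the rest as illustrative:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; issued via Google Cloud
ANALYZE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    f"comments:analyze?key={API_KEY}"
)

def build_request(text: str) -> dict:
    """Assemble the analyze-request body Perspective expects."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def score_toxicity(text: str) -> float:
    """POST the comment and return the summary toxicity score (0.0 to 1.0)."""
    req = urllib.request.Request(
        ANALYZE_URL,
        data=json.dumps(build_request(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```

Sinders' point stands regardless of the mechanics: the score arrives as a bare number, with none of the context about training data or model behaviour that would make it genuinely transparent.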
Social networks and the application of content guidelines
Spotify this week announced its Safety Advisory Council, an 18-strong group of academics, researchers and founders to advise the company on how it moderates content. The group, which will meet several times a year, is made up of partner organisations — such as the Dangerous Speech Project, Center for Democracy and Technology, Kinzen (full disclosure: I've produced research for Kinzen in a paid capacity) and the Institute of Strategic Dialogue — plus a host of independents (some of whom you may recognise from other safety councils).
If you're interested in more analysis on the makeup of safety councils, become an EiM member or drop me an email to talk.
YouTube contains a "troubling amount of extremist content" and yet is "almost inscrutable" compared to other social media platforms, according to a new report from NYU Stern Center for Business and Human Rights. Written by Paul Barrett and Justin Hendrix (with whom EiM has collaborated before), it also outlines a series of recommendations, including greater transparency, better researcher access to data and a "hybrid" regulatory approach. Lots in this one.
Bitchute, the alternative video hosting platform once described as a "hotbed of hate", has published its first transparency report. Covering the 12 months ending in March 2022, it gives a prevalence number (around 1.5%) and provides quarterly breakdowns of nine types of prohibited content. An attempt to provide a veneer of respectability? Perhaps.
One from last week but vitally important: human rights organisations have called on Meta to revise its flawed process for Persian language content on Instagram. It comes after the company blocked hashtags including #iwilllightacandle (created following the 2020 shooting down of a Ukraine International Airlines plane over Tehran) and took down content including "death to Khamenei", the country's violent supreme leader since 1989.
Those impacting the future of online safety and moderation
It's rare for a policy expert to have the chance to speak publicly about their work, especially in the New York Times' Style section. But that's just what Bumble's Payton Iheme got to do this week.
As head of public policy for the Americas, Iheme has been working hard to pass legislation that penalises "cyberflashing", in which men (let's be honest, it's men) send sexual images through an app, via text or through file-sharing. Her success in Virginia and ongoing efforts in Wisconsin, I assume, made her a perfect candidate for an interview in Style (the excellent earrings must help too).
A Bumble colleague notes that public policy experts like Payton are usually "sensitive to nuance" as well as "tenacious and nimble". Having worked in a similar role at Facebook and also been a White House advisor and US Army intelligence officer, Iheme must have those qualities in spades.
Tweets of note
Handpicked posts that caught my eye this week
- "The platform regulation bills coming out of states right now are bananas." - Stanford's Daphne Keller speaks her mind about the new social media bill coming out of Arizona.
- "Wouldn’t it be nice to summarize some agreed-upon best practices? What would be researchers’ collective recommendations for content moderation?" - Colorado PhD student Aaron Jiang shares more about his recently published paper.
- "One part of the article that did really resonate with me and I also think could have used even more focus is around the limitations of measuring harms based on what the average experience of a user is" - Brandon Silverman, former Crowdtangle CEO, reacts to a New Yorker piece on social media harm.
Job of the week
Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1000+ EiM subscribers or get in touch to enquire about a one-off posting.
The Oversight Board is looking for a Vice President, Operations to design and deliver a "cohesive and effective operations strategy which supports the Oversight Board in achieving its overarching mission and goals." It's a senior role at a still young organisation and the ad outlines that successful applicants will have "significant experience of operational leadership with a preference for experience in digital media, human rights, freedom of expression and/or trust and safety."
The role is based in Washington or London and hopefully pays considerably more than the $30,000 that LinkedIn suggests is the salary. I expect a number of EiM subscribers to have the requisite experience for this one. The deadline is 27th June.