Avoiding 'unevenly enacted' rules, anger at Meta's India report and why we need whistleblowers

Hello and welcome to Everything in Moderation, your handcrafted guide to the week's content moderation and online safety news. It's written by me, Ben Whitelaw.
Not all editions have a clear thread running through them but today's does. As you'll see, each section underlines the crucial role that non-governmental and civil society organisations play in seeking to address the abundance of content moderation challenges that we, as a global population, face.
Their importance can be seen in the fierce reaction to a delayed human rights report, in the protection of whistleblowers and in the holding of platforms to account for their (lack of) protection of minority groups. It is there and it needs recognising.
It goes without saying that new subscribers from Pinterest, Amazon, Genpact, ByteDance, Image Analyzer, Spotify, Hinge, Feeld and Strava are very welcome here, as is everyone who signed up this week. One day, I'll find a way to get you all together but until then, enjoy the newsletter and, if you can, become a member to support its upkeep and the writing of exclusive articles like this one from Jen Weedon.
Here's what you need to know from the last seven days - BW
Policies
New and emerging internet policy and online speech regulation
The story I've been drawn to most this week has been Facebook/Meta's first annual human rights report or, rather, the reaction to it. Like many others, I wasn't very impressed.
A bit of background before we go further: in 2019, Facebook's platform in India became plagued with false political information in the run-up to the general election. Investigation after investigation found that, as religious and caste violence accelerated, there was "little to no response from the company", despite India being Facebook's largest market. That led to calls for a human rights impact assessment (HRIA), which did not arrive until this week's four-page "synthesis", buried within the 83-page PDF. Hardly comprehensive.
Meta's director of human rights, Miranda Sissons, asked people to "read [the report] in the spirit we wrote it" but the criticism was swift and angry. Alaphia Zoyab, Luminate's media and campaigns lead, said it was "an insult to Indian civil society and the human rights community" and called for "more whistleblowers" to come forward, while Divij Joshi noted how the company "consistently refused to account for its complicity and inaction in human rights abuse in India". Access Now's Marwa Fatafta said the report meant she wasn't "holding my breath for its investigation on Israel/Palestine." To say Facebook hasn't covered itself in glory is an understatement.
Elsewhere, new research has found that content creators and influencers "don't have much of a say in their content moderation policies" and suffer from rules that are "unevenly enacted". Cornell University's Brooke Erin Duffy and Colten Meisner interviewed 30 creators on TikTok, Instagram, Twitch, YouTube, and Twitter about the algorithms and policies that govern their visibility, and found that creators invested significant amounts of time and energy "in [the] hopes of understanding them".
A light-hearted take on a serious topic: The Guardian's Alex Hern writes that the Online Safety Bill is a 'Goldilocks piece of legislation' that annoys all groups equally, which means that "probably it's time to throw the whole thing in the bin". How things change, eh?
Products
Features, functionality and startups shaping online speech
A blockchain startup designed to resolve online disputes has released a beta version of its social media moderation product. Kleros Moderate will initially work in Discord and Telegram, allowing users to request a ban on another user by posting an Ethereum bond. The company was founded in 2017 by entrepreneur and lecturer Federico Ast and computer scientist Clement Lesaege, and won a European Commission Horizon 2020 prize in 2020 (more on the company here).
I don't know enough about blockchain to know whether this could work at scale, but I'm interested in procedural justice (EiM #113) and moderation juries (#72) as possible means of shifting power away from more centralised forms of moderation decision-making. Let's see.
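For readers who, like me, are hazy on the mechanics, here's a rough toy sketch in Python of how a bond-backed ban request settled by a jury vote might work in principle. To be clear, this is my own guesswork based on the public description, not Kleros's actual protocol, and all names and amounts are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class BanRequest:
    """Toy model of a bond-backed moderation dispute (illustrative only)."""
    reporter: str
    target: str
    bond: float  # hypothetical amount of ETH the reporter stakes
    votes: dict = field(default_factory=dict)  # juror -> True (ban) / False (no ban)

    def cast_vote(self, juror: str, ban: bool) -> None:
        self.votes[juror] = ban

    def resolve(self) -> str:
        """A simple juror majority decides; the bond follows the outcome."""
        in_favour = sum(self.votes.values())
        against = len(self.votes) - in_favour
        if in_favour > against:
            return f"{self.target} banned; {self.reporter}'s {self.bond} ETH bond is returned"
        return f"no ban; {self.reporter}'s {self.bond} ETH bond is forfeited"

# Usage: a reporter stakes a bond against an allegedly abusive user,
# three jurors vote, and the majority outcome settles the dispute.
request = BanRequest(reporter="alice", target="bob", bond=0.1)
for juror, vote in [("j1", True), ("j2", True), ("j3", False)]:
    request.cast_vote(juror, vote)
print(request.resolve())
```

The appeal of the bond, presumably, is that frivolous or bad-faith ban requests cost the reporter something, which is one way of rationing moderation decisions without a central authority.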
We rarely hear directly from the people doing this work, so I approached the Integrity Institute to try and change that. Together, we've come up with "Getting to Know", a mini-series about the folks whose job it is to protect people from each other online.
Jen Weedon, former senior manager in Facebook's threat intelligence team, is the first interviewee in the series and has a ton of stuff to say about how to measure your team's impact, the importance of curiosity and watching Ted Lasso. Go read it.
Viewpoints will always remain free to read thanks to the support of EiM members. If you're interested in supporting more Q&As like this, become a member today.
Platforms
Social networks and the application of content guidelines
Over 80 organisations from around the world have signed a letter calling for Facebook to stop attempts to gag whistleblower Daniel Motaung (EiM #159) with contempt of court proceedings. The letter, addressed to Mark Zuckerberg and Sama CEO Wendy Gonzalez, argues that the company is failing to "meaningfully address his allegations" and contrasts Motaung's treatment with that of Frances Haugen (who is well-funded and white, lest we forget). It goes on to urge both companies to support the unionisation of their content moderation workforces, something that is massively overdue (EiM #19).
Five major social media platforms continue to have a lot of work to do when it comes to LGBTQ user safety, according to the second annual Social Media Safety Index. Facebook, Instagram, Twitter, YouTube, and TikTok all scored less than 50 out of 100, with the ByteDance-owned video app lagging behind the others. Produced by the Gay & Lesbian Alliance Against Defamation (GLAAD) in partnership with Ranking Digital Rights and Goodwin Simon Strategic Research, the Index is made up of 12 indicators, including policies on deadnaming and misgendering, pronoun options on profiles and the prohibition of harmful advertising.
YouTube has agreed to settle a 2020 class action lawsuit that claimed it failed to protect moderators from PTSD caused by viewing content on the platform. The video platform denied wrongdoing but agreed to pay $4.3m to moderators employed directly by the company or its contractors going back to 2016. The settlement represents a much smaller outlay than the $52m that Facebook agreed to pay, also in California, to settle a similar action. As part of the deal, YouTube must also provide on-site counselling and peer support groups.
Here's an interesting one: Amazon is targeting review brokers who use Facebook Groups to recruit users to post misleading reviews in exchange for money or products. A lawsuit filed in Seattle on Tuesday seeks to unmask the admins behind the groups, 16,000 of which were removed last year. Spam reviews will never go away, will they?
People
Those impacting the future of online safety and moderation
Content moderation has been the subject of documentaries (EiM #41), theatre productions (#111) and art projects (#81), and has even been turned into fiction (#161). Finally, it will become a film. Specifically, a horror film.
Set in the Philippines, Deleter charts the story of a content moderator who deletes a suicide video made by her co-worker. In doing so, she comes face to face with her past (isn't that always the case?).
Director Mikhail Red told Variety that the film "attempts to unlock the dark secrets and consequences of their world, especially in a world where the truth is filtered and distortion is prevalent.”
Shooting will take place in August and September with an edit due to be completed by the end of the year. I, for one, can't wait to see it.
Tweets of note
Handpicked posts that caught my eye this week
- "Moving #internetgovernance beyond content moderation means addressing online harassment at the design level of AI systems & data they use." - Dr Courtney Radsch shares a tidbit from the Internet Governance Forum USA, taking place this week.
- "..America's top fascism-leaning supergenius has no idea how content moderation or employee morale works" - Karl Bode of Techdirt and elsewhere on what we can expect to happen with our good pal Elon.
- "It will make a huge difference in cases of backlash & pile-on" - Dr Rebecca Whittington welcomes the arrival of a new Twitter safety tool.
Job of the week
Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1200+ EiM subscribers.
Element is looking for a Public Policy Intern to monitor incoming legislation, write policy briefs and pen blog posts communicating the company's views to the general public and wider community.
The company, which is known for building decentralised chat apps on Matrix, would ideally recruit a PhD candidate who can work 16 hours a week and is "well versed in 'translating' legalese into English".
Denise R. S. Almeida, Element's data protection officer, kindly responded to my salary query to let me know that the role pays £27,000 pro rata. Get your applications in now.