The adult content fightback, Meta achieves climate consensus and dropping a Grade

Hello and welcome to Everything in Moderation, your globally-minded content moderation and online safety week-in-review. It's written by me, Ben Whitelaw.
My jealousy levels have been through the roof this week as I've watched updates coming through from #TrustCon and the Trust and Safety Research Conference in Palo Alto. If you got to go, I'd love to hear what you thought. If you didn't, listen to this Tech Policy Press podcast explaining why conferences and communities like these are important for the maturation of the practice and the profession.
A warm welcome, as ever, to a gaggle of new subscribers from Yale University, Linklaters, Discord, Depop, University of Leuven, Pinterest and others. Do say hi.
Keep an eye out for my read of the week and, if you enjoy today's edition, consider becoming a member to support the growth of EiM to new corners of the world. Thanks for supporting and for reading — BW
Policies
New and emerging internet policy and online speech regulation
Last week's newsletter hit inboxes just too late for me to include arguably the most important policy audit report to emerge since the 2018 Myanmar assessment: the long-awaited impact assessment of Meta's moderation on Palestinian digital rights and Arabic users.
The report is worth reading in full but catalogues countless instances of over-enforcement of Arabic content which harmed "the ability of Palestinians to share information and insights about their experiences as they occurred". Individuals interviewed by BSR, which conducted the analysis, went as far as to say that Meta "appears to be another powerful entity repressing their voice that they are helpless to change", something I have catalogued in previous editions of EiM (EiM #112, #135).
Nadim Nashif, founder and director of digital rights organisation 7amleh, called it "a landmark in the struggle for digital justice" and urged Meta to "commit to the co-design of its policies and adopt more transparent policies" (more of which below). Meta's response was light on contrition but committed to 10 of the report's 21 recommendations while assessing the others for "feasibility". Let's hope we hear back before the next atrocity happens.
Texas' HB 20 law has been described as something "that is so blatantly and obviously misguided that trying to explain it rationally makes you sound ridiculous" in a wide-ranging piece in The Atlantic about the future of the web. Including insights from Techdirt's Mike Masnick and Daphne Keller of Stanford (and today's Tweets of note), the piece even imagines the creation of 'Chaos Versions' of platforms that contain beheadings, child sexual abuse material and anorexia content to appease states like Texas.
If you want more on the topic:
- There is a good Q&A with two Slate writers on the "political stunt" that is HB 20 and the timeline for it arriving at the Supreme Court.
- Enjoy Reddit moderators displaying how they feel about the law in the most Reddit way possible: by forcing commenters on the PoliticalHumor subreddit to include the phrase "Greg Abbott is a little piss baby". Reddit mods have a history of being right (EiM #105) so you'd be wise to pay attention.
Products
Features, functionality and startups shaping online speech
Three types of community label are being made available on Tumblr as part of the platform's effort to allow all users "to fully express themselves while also having control over what they encounter on their dashboards". The labels — for content referencing drug and alcohol addiction, violence and sexual themes — are designed to be added by posters, according to a blogpost, and enable Tumblr to blur or hide content based on user preferences and age. It marks an evolution of its strict 2018 ban on adult content (EiM #4), which went down like a lead balloon with its users.
Here's one from last week: Instagram is working on a safety feature that blocks nude photos from being sent via direct message. Like the Hidden Words feature launched last September (EiM #123), it will be enabled in Settings and act as an automatic filter. However, according to The Verge, testing is still a few weeks away so it could be months before release. The issue of unsolicited nudes is pervasive — but difficult to do well — so it will be interesting to see how this pans out.
Platforms
Social networks and the application of content guidelines
On the topic of adult content, 60+ sex workers and performers have signed a letter from Pornhub demanding that Instagram provide "an explanation to why our accounts are continuously deleted" and why content is removed "even when we do not breach any of Instagram's rules". The letter even takes aim at Kim Kardashian for posting her "fully exposed ass to her 330 million followers without any restrictive action". The irony is that the letter was flagged for some users on Twitter with a "sensitive content" warning, despite containing zero flesh or anything that could be deemed unsuitable.
A Facebook experiment that brought together 250 people to discuss solutions to problematic climate information on the platform led to "high amounts of both participant engagement and satisfaction" and could open up opportunities for users to help write speech rules. The Behavioural Insights Team ran three sessions across five countries back in February 2022 to ask people what should be done about climate misinformation. Facebook won't say what was decided but 80 percent of participants said users like them should have a say in policy development. Participative forms of decision-making, including digital juries (EiM #72), have huge potential in my opinion, and this experiment shows it again. My read of the week.
People
Those impacting the future of online safety and moderation
If you've been following EiM over the last year, you'll know a bit about the political hot potato that is the chairmanship of Ofcom, the intended regulator of the possibly doomed UK Online Safety Bill. It was going to be Paul Dacre, then it wasn't, and then it was announced that Tory peer Michael Grade would take the helm (EiM #153).
Well, Grade this week gave his first speech at the Royal Television Society's convention in London, in which he called for a "new era of accountability where companies have to prioritise trust and safety alongside clicks and profit". In other quotes reported by The Guardian, Grade said social media content was "shrill" and "shocking" and that:
“Big tech firms must shift their regulatory responsibilities from the public policy departments, where they sit today, to the frontline staff responsible for designing and operating their products.”
I'm not exactly sure who told him about the tone of social media because Grade famously told MPs upon his Ofcom appointment that he does not have an account on any platform. That doesn't seem to be exactly true though.
A quick Google finds a sparse but plausible-looking LinkedIn page (which has been updated with 'Chairman' but no organisation) and a Twitter account that looks like it was made at the tail end of his time at ITV. His one tweet? A gloating, coded message about the lack of media coverage of cuts that he imposed on ITV Consumer just a day before Channel 5, under Dawn Airey, followed with a similar job cut announcement.
Who's being shrill now, Michael?
Tweets of note
Handpicked posts that caught my eye this week
- "This should include shift in focus on *who* (which people/affiliations/institutions) gets heard to speak about "global" speech governance." - Jenny Domino, of the Oversight Board, makes a great point about what is a systemic issue.
- "Potentially one of the few tech sub-sectors that could do quite well out of the OSB" - Tony Blair Institute's Tom Westgarth thinks safety tech is on the way up.
- "So when the two work together, and platforms do the state's bidding, do users have any right to object?" - Daphne Keller, who appeared on the new Content Moderated podcast with evelyn douek and Genevieve Lakier last week, explains why the recent UK drill Oversight Board case was particularly interesting.
PS. Jenny is right about broadening who we listen to on the topic of speech governance. My Twitter list of moderation experts is global, although still not as diverse as I'd like. Is there anyone you think I've missed? Get in touch.
Job of the week
Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1400+ EiM subscribers.
The Electronic Frontier Foundation is on the hunt for an Associate General Counsel to identify and analyse legal issues relating to the civil liberties non-profit.
The successful candidate will have their hands full helping defend EFF from legal threats or litigation, as well as providing staff and management with legal advice for internal matters.
Preferred experience includes working on data privacy and/or cybersecurity law. In return, you'll get a salary of $140,000 to $150,000 DOE and the chance to work closely with EFF's General Counsel, Kurt Opsahl.