5 min read

Moderating royalty, Meta restructures integrity and California dreams up a new law

The week in content moderation - edition #173

Hello and welcome to Everything in Moderation, your compendium of content moderation and online safety news from the last seven days. It's written by me, Ben Whitelaw.

A big hello to EiM newcomers from Niantic Labs, Ofcom, DCMS, Depop, Tech Against Terrorism and other corners of the big wide web. If today's edition was forwarded to you, subscribe for yourself or, if you like what you see, become a member of the newsletter to ensure it reliably hits your inbox each week.

It's been just over a week since Elizabeth II died and, inevitably, the news cycle has moved on to the way platforms moderate views about her life and legacy. With next week likely to be a continuation of the theme, it's worth a look.

Without further ado, here's what you need to know — BW


Policies

New and emerging internet policy and online speech regulation

California's plans to regulate social media moved forward this week as Governor Gavin Newsom signed AB 587, despite fears that it is unconstitutional. The law requires platforms operating in the state to submit twice-yearly reports to the state attorney general "about automated content moderation, how many times people viewed content that was flagged for removal, and how the flagged content was handled". Similar laws in Texas (EiM #139) and Florida (EiM #151) are on pause.

At the start of the year, evelyn douek and Chris Riley predicted here on EiM that "greater regulatory scrutiny of American tech companies in 2022 is an inevitability" and, after a quiet summer period, that continues to be the case.

The deletion of a tweet by a US professor about Queen Elizabeth II "illustrates how criticisms of powerful people, however distasteful, can be disappeared from social media sites for murky reasons", according to The Intercept. Uju Anya's wish for the monarch's "pain [to] be excruciating" led to a series of frothing articles from UK media outlets, the deletion of her post on grounds of "abusive behaviour" and the temporary locking of her account. I'm not convinced at all on this one.

According to The Drum, Twitter and Snapchat were both "proactively monitoring for emerging narratives", which is a euphemism for "ensuring they don't find themselves in a PR nightmare" (EiM Exploration). Their hands will be full next week.

Products

Features, functionality and startups shaping online speech

First there were questions about DALL-E 2's moderation (EiM #171) and now comes the inevitable ethical dilemma about a new Chinese text-to-image service. ERNIE-ViLG has been created by technology company Baidu as part of its Wenxin NLP project but blocks any mention of Xi Jinping, Tiananmen Square or revolution. There is no published moderation policy on its site, although users whose prompts are blocked get the message: “The content entered doesn’t meet relevant rules. Please try again after adjusting it”. The computer, quite literally, says no.
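
To make the mechanism concrete, here's a minimal sketch of the kind of keyword blocklist that, judging by the reported behaviour, seems to sit in front of the model. This is an illustrative assumption on my part, not Baidu's actual code; only the blocked terms and the rejection message come from the reporting above.

```python
# Hypothetical sketch of keyword-based prompt filtering of the kind
# ERNIE-ViLG appears to apply. The blocked terms and rejection message
# mirror the reported behaviour; the code itself is illustrative only
# and is not Baidu's implementation.
BLOCKED_TERMS = {"xi jinping", "tiananmen square", "revolution"}

REJECTION_MESSAGE = (
    "The content entered doesn't meet relevant rules. "
    "Please try again after adjusting it."
)


def check_prompt(prompt: str) -> str | None:
    """Return the rejection message if the prompt mentions a blocked term."""
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return REJECTION_MESSAGE
    return None  # prompt passes the filter and continues to image generation


if __name__ == "__main__":
    print(check_prompt("an oil painting of a revolution"))  # rejected
    print(check_prompt("an oil painting of a harbour"))     # None: allowed
```

A filter this blunt rejects prompts outright rather than moderating outputs, which is why users see a generic refusal with no explanation of which rule was broken.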

Platforms

Social networks and the application of content guidelines  

The big story of the week comes from Meta, where it has been revealed that the ad and content integrity teams will be merged as the company cuts costs. Around 3,000 staff will work across Facebook and Instagram, Axios reported.

There are a few aspects to this which are interesting to me:

  • Ads (including fake or fraudulent job posts) are not always considered a safety threat in the same way that regular posts are, and they receive less media coverage because, I guess, the concepts of speech and censorship - and the politicians, celebrities and influencers involved - make for more interesting reading. But they are just as much of a problem, as the 3.4bn ads Google blocked in 2021 alone suggest (EiM #158). As such, it’s not the worst thing that these teams are working more closely together, provided the headcount remains the same.
  • Organisational structures and internal processes were among the biggest challenges cited by the trust and safety professionals I spoke to for user research in partnership with Kinzen last year. So this might actually be a step forward. Let's see.

A new report has found that YouTube plays host to a "pattern of unchecked hate speech, misogyny, racism, and targeted harassment singularly focused on famous and identifiable women" and does little to prevent it. The Bot Sentinel research found 29 YouTube channels that monetised harmful content about Meghan Markle, with the top three garnering 76 million views alone. It's the second big criticism of YouTube's approach to moderation in recent months following Paul Barrett and Justin Hendrix's report back in June. My read of the week.

People

Those impacting the future of online safety and moderation

Brian Boland is a dyed-in-the-wool Facebook exec, having been at the company for over a decade before quitting in 2020. He held several roles and reportedly pushed for more data transparency for journalists and researchers. Now that he has left, he wants even more visibility into the effects of the platform's safety features.

In front of the Homeland Security Committee this week, he drew a parallel with the automotive industry: “There’s almost no ability to protect our future and create a version of crash-testing a car”. It’s a metaphor I like and have written about in the past (EiM #19).

Boland's testimony came just before execs from Meta, TikTok, YouTube and Twitter, including Neil Mohan (EiM Exploration), ducked questions about their trust and safety work and were criticised for avoiding “sharing some really very important information with us” about how each moderates content.

Twitter’s Jay Sullivan at least noted that the company has 2,200 people working on trust and safety (perhaps the first time we’ve seen that number?) but I’m with Brian on this: more transparency, much more quickly.

Tweets of note

Handpicked posts that caught my eye this week

  • "But, in order to truly address the challenges of today’s Internet, it’s critical we move folks up this list, esp those in positions of influence or power." - I don't disagree with this at all from Thorn VP John Starr. In fact, it's partly what EiM exists to do.
  • "Labelling social media data for contested and complex categories like sexism is challenging and nuanced" - Hannah Rose Kirk invites us to improve English-language models for sexism detection.
  • "Looking for a limited amount of people for (paid) research interviews." - Had your account malicious flagged on TikTok or Instagram? Researcher Dr Carolina Are wants to talk to you.

Job of the week

Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1200+ EiM subscribers. This week's job of the week is from an EiM member.

ActiveFence is looking for a Senior Cyber Threat Intelligence Analyst to help drive its analysis of multiple cyber threat intelligence sources on the darknet and deep web.

The role involves leading Request for Information (RFI) analysis and quality assurance of the CTI team's deliverables. The successful candidate will need at least three years of experience.

The company is also looking for a Head of Mobile to manage its mobile intelligence team and work with ActiveFence’s clients. Experience with mobile malware and mobile app monetisation is key, as are strong management skills. This role reports to the VP Mobile.

There are no salary details but the deadline for both roles is October 1 so you have some time to prep your application.

Last thing: the Grindr role I shared in last week’s newsletter (EiM #172) has a salary of $100,000 and is fully remote.