Hello and welcome to Everything in Moderation, your need-to-know digest of content moderation news and, importantly, what it means. It's written by me, Ben Whitelaw.
There have been weeks in the past — say, when the former President was deplatformed (EiM #94) — that felt like moments in which online speech broke out to become an issue of national or even global interest. But those weeks pale in comparison with the last seven days.
I've tried to provide a comprehensive overview but hit reply if there are any interesting articles or analyses I haven't included. I'll make sure to share them via EiM's small but mighty Twitter account. For EiM members, I've written about whether authenticating users is a good way to foster free speech.
Welcome to new subscribers from Niantic Labs, Stanford University, FullFact, Twitter, Linklaters, Buzzfeed and others from the EFF CoMo list. If this email was forwarded to you, hello and feel free to sign up here to get it every Friday.
Here's the week that was — BW
📜 Policies - emerging speech regulation and legislation
"Watershed", "landmark", "sweeping", and "strict": the words used to describe the Digital Services Act might have varied but none of the coverage in the aftermath of its agreement last Saturday underplayed its significance.
(If you're wondering what the hell the DSA is, Stanford's Daphne Keller has written a very good 101 guide to the act and what it means).
Reading between the lines, European officials are pretty happy about the time that the DSA has taken to come together since it was proposed in December 2020. The words "swift", "speed" and "in record time" all appear in the official release. It's hard not to see the comments as a dig at UK politicians, who are still trying to introduce their bill despite starting three years earlier in 2017.
Not everyone was so pleased:
- The Electronic Frontier Foundation welcomed the DSA but said there was still "a lot to fear" about possible over-removal as well as the potential for "enforcement overreach" caused by a lack of "human rights focused checks and balances".
- EU Disinfo Lab pointed out that platforms' reporting and appeal processes need to "empower people, rather than confuse them" if the DSA is to do its job.
- Access Now called it a "step in the right direction" but said it had "missed an opportunity to take a strong stance in favour of privacy and confidentiality of communications".
As for its possible impact, look no further than possibly disrupting the other big online speech story of the week: Elon Musk's takeover of Twitter. Justin Hendrix over at Tech Policy Press notes that:
even if Musk may wish to make Twitter more free-wheeling, it may need to maintain a more judicial approach to satisfy the DSA.
Platformer's Casey Newton also notes that the DSA's penalties are "not the ticky-tack fines you’re used to from dealing with, say, the Securities and Exchange Commission".
Away from Europe now, new research has shown that legislation is urgently needed in the United States to stem its huge child sexual abuse material (CSAM) problem. The Internet Watch Foundation found that 30% of the global total of URLs hosting CSAM could be traced back to servers in the US. One of the proposed pieces of legislation — the EARN IT Act (EiM #58) — has been roundly criticised by human rights advocates.
💡 Products - the features and functionality shaping speech
Technology companies must offer better features than block and mute if VR/AR experiences are to address hate and harassment. That's one of the findings from the Institution of Engineering and Technology's recently published report, 'Safeguarding the Metaverse'. Five researchers spent "many hours" in VR spaces, during which they came across targeted instances of simulated violence, virtual groping and racist language. It comes as another non-profit released a white paper on children in the metaverse and a recent edition of Channel 4's Dispatches exposed abuse and racism.
UNICEF and the Lego Group have produced new research that they hope will "inform the design of digital products and services used by children, as well as the laws that govern them". The Responsible Innovation in Technology for Children (RITEC) report draws on 300 interviews with kids in 13 countries and existing insights from 34,000 others to produce a wellbeing framework of eight considerations. There are strong parallels to work done in the UK around age-appropriate design (EiM #126).
💬 Platforms - efforts to enforce company guidelines
I don't know how long it would take someone to read all of the commentary around Elon Musk's takeover of Twitter but I've given it a good shot. Here are the four major themes, as I see them, in coverage that touches on moderation/speech issues:
- Problems: with open-sourcing Twitter's algorithm (Wired, Technology Review); with "authenticating all humans" (The Verge) and distinguishing good bots from spambots (Washington Post).
- Predictions: the return of Trump (New York Times); the quick exit of many Twitter employees (Engadget); crypto integration (Bloomberg) and selling access to tweets (Ben Thompson/Vox).
- Concerns: women for whom the platform could become "a hostile environment" (New York Times); black people — particularly women — are worried that attacks will spike (Slate); trans users say the prospect of a Musk takeover is "frightening" (NBC News); human rights groups fear "disproportionate and sometimes devastating impacts, including offline violence" (Reuters); advertisers are worried it will become "more toxic and less brand-friendly" (TechCrunch) and investors aren't sure he has the cash (Reuters).
- Warnings: the UK government has told Musk he must be "responsible" (pot? kettle? black?) (The Times) while the EU somewhat ominously reminded the billionaire that "there are rules" (The Financial Times). I'm sure both will go down well.
There is a strain of commentary that believes that Musk will lose interest soon enough, in which case we can park this whole episode until the next billionaire wants to cheaply acquire almost a million followers a day.
While on the topic of Twitter, the platform has been sharply criticised for its approach to climate mis- and disinformation in a new report from Friends of the Earth, Avaaz and Greenpeace USA. Researchers used 27 assessment criteria to analyse content policies at five major platforms and found that Pinterest and, perhaps surprisingly, YouTube ranked top. Another thing for Elon to worry about.
The Washington Post has done an interesting long read on how Telegram has become the "most prominent platform for the right-wing fringe" and home to white nationalists, election-fraud conspiracy theorists and disinformation spreaders. It's still my read of the week, although there's no mention of Marjorie Taylor Greene, whose Telegram account was suspended for misinformation (EiM #142), or of the fact that the encrypted messaging app has agreed to monitor 100 popular channels in Brazil for misinformation (EiM #152).
👥 People - folks changing the future of moderation
Vijaya Gadde has appeared in EiM once or twice before (EiM #47, 116) and from what I've read about her, she's an extremely competent lawyer and incredibly well-liked by her colleagues at Twitter. The company's legal, policy and trust lead certainly didn't deserve the attention — and the abuse — she received this week.
As Qz reports, Gadde was thrown into the limelight after Elon Musk (him again) drew attention to her work by responding to a right-wing YouTuber who had called her Twitter's "top censorship advocate". Sadly, sexist and racist posts ensued.
Gadde hasn't tweeted herself since the incident but she has liked a post that said "guys my shit list is getting so long". You wouldn't blame her after the week she's had.
🐦 Tweets of note
- "Remember Elon Musk told the UN that if they gave him a breakdown of how $6 Billion could end world hunger, he would donate the amount. They broke it down and he didn't do it" - Article19's Mahsa Alimardani on why you shouldn't trust some people.
- "“More freedom of speech” isn’t a business plan" - Dr Sarah Roberts, UCLA professor, does the numbers.
- "WHY DOES EVERYONE THINK CONTENT MODERATION IS EASY?" - that's it. That's the tweet, via Lindsay aka linguangst.
🦺 Job of the week
This is a new section of EiM designed to help companies find the best trust and safety professionals and enable folks like you to find impactful and fulfilling jobs making the internet a safer, better place. If you have a role you want to be shared with EiM subscribers, get in touch.
Bumble is looking for a VP of Machine Learning and part of the role is driving customer happiness by working on "content moderation and abuse prevention".
Candidates must have 7+ years of experience delivering "productionalised ML driven solutions at scale" and should bring "diverse thinking with an inclusive attitude". The salary isn't stated but, when you add a filter on LinkedIn, it's between £40,000 and £50,000. If that is true, Bumble HR shouldn't expect many applications.