Welcome to this edition of Everything in Moderation, your weekly newsletter about the policies, products, platforms and people shaping content moderation. It’s curated and produced by me, Ben Whitelaw.
Here’s what you need to know from the last seven days, with a focus on UK legislative matters. As ever, you can get in touch with comments, questions or caffeine, ideally in that order — BW
📜 Policies - company guidelines and speech regulation
It's been touted for years, in white paper form since 2019 (EiM #51) and now it's finally here: the UK's draft Online Safety Bill (formerly Online Harms Bill). Announced during the Queen's speech on Wednesday, it outlines how social media firms must remove harmful content in a timely manner or face significant fines.
Not much has changed in its scope since the UK Government published its final response back in December 2020 (EiM #92) but that hasn't prevented it from going down like a lead balloon:
- The protective bubble around journalistic content, writes Demos' Ellen Judson, is "opaque" and "sets off multiple alarm bells".
- Tech UK's Lulu Freemont warns of its "over-prescriptive and complex approach which could overburden smaller companies" that don't have the same capacity as the dominant digital platforms.
- Alex Hern at The Guardian said the legislation had become "encrusted with artefacts of the all-consuming culture war".
- Politico's technology correspondent Mark Scott notes three big gaps but, as ever, Open Rights Group's Heather Burns has the real, unvarnished take.
The only consolation is that the bill will go into a period of pre-legislative scrutiny, involving oral and written evidence from concerned organisations, and is unlikely to come into force until 2022 at the earliest.
Elsewhere, Thomas Hughes, director of the independent-but-Facebook-funded Oversight Board, spoke to Protocol about the Donald Trump case (EiM #111) and called the outcome a "strong decision... that clearly prioritizes freedom of expression". Not sure Republicans would see it like that.
💡 Products - features and functionality
The use of age verification on platforms is not preventing children as young as nine from accessing harmful content, according to a report by NGO Thorn shared with Casey Newton's Platformer. 85% of the children surveyed use YouTube, while almost half use Facebook and TikTok (all of which have a minimum age of 13). The report also found a high prevalence of reporting, blocking and muting, and concluded that platforms could do more to make these features simpler and more child-friendly.
Instagram's automated moderation system limited posts with hashtags relating to Al-Aqsa, the third holiest site in Islam and the site of Israeli violence this week, "in error", according to Buzzfeed. Somehow, in 2021, "technical glitch" continues to be a valid excuse for human-made errors (EiM #67).
💬 Platforms - dominant digital platforms
Reddit's chief operating officer, Jen Wong, paid testament to its volunteer-led "unique layer of moderation" as the company outlined its growth plans for 2021. "There's still more work to be done," Wong said, "but at the heart of it, I think that what you see is working." The company has come a long way since 800 subreddits revolted against its policy towards white supremacy in June last year (EiM #69).
Amazon's lack of moderation "has always been inherent to the model", writes tech analyst Benedict Evans after noticing that 20 of the top 50 best-sellers in the “Children's Vaccination & Immunisation” category are by anti-vaccine polemicists. Really hope other platforms don't pivot to adopting such a strategy.
👥 People - those shaping the future of content moderation
Isabella Plunkett took a big risk this week. The Facebook moderator, employed via Dublin-based company Covalen, became the first working moderator to give evidence to a government body anywhere in the world when she spoke at an Irish Parliament hearing on Wednesday. The 26-year-old felt the "need to speak for the people that are too afraid".
Her testimony is worth reading in full and would be shocking were it not for the fact that we've heard much of this before:
- Plunkett was forced to work from the office during the pandemic despite living with her vulnerable mother, who has twice had cancer.
- She was told by the company's counsellor to do "karaoke and painting" but said, "sometimes you don't always feel like singing, frankly, after you've seen someone being battered to bits."
- Her lunch break is 34 minutes long and "if I go over for 1 second twice a month, I lose my bonus for the month" (via Foxglove).
- Her allocated 1.5 hours of "wellness" time a week is "not enough" and she sees "the content I view in work in my dreams."
Plunkett joins Chris Gray, Selena Scola (EiM #6), Viana Ferguson and Alison Trebacz (EiM #85) and Clifford Jeudy (EiM #51) in hitting out at Facebook. Slowly but surely, previously silenced voices are being heard.
🐦 Tweets of note
- "Is Insider Trading A Content Moderation Issue?" - Dan Hon, principal at digital consultation Very Little Gravitas, being tongue in cheek, I hope.
- "They made the same recommendations many have for years while receiving a 6-figure salary & are applauded" - former Facebook employee and now Berggruen fellow Yael Eisenstat goes in hard against the Oversight Board.
- "excellent thoughts today on powers & limitations of interoperability to make social media better and challenge monopoly power" - Ethan Zuckerman's timeline is awash with brilliant nuggets from the Knight First Amendment Institute's recent event, Reimagine the Internet. I couldn't make it so will be watching back the talks here.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.