📌 Bring moderators in-house, Online Safety Bill concerns and content filters
Welcome to Everything in Moderation, your weekly newsletter about content moderation. It’s curated and produced by me, Ben Whitelaw.
Thanks to everyone for their Covid-19 well-wishes. I'm recovering slowly but surely.
A warm welcome to new subscribers from Twitter, the House of Lords, Tech Against Terrorism, Libero and the University of Edinburgh; your interest has been a well-timed shot in the proverbial arm.
This is the last newsletter for a few weeks while I take my traditional summer break. EiM will return in August with a new look and a bumper edition. Thanks for reading and I'll be back in your inbox before you know it.
Before I go, here is the online speech news you need to know from the last seven days... — BW
📜 Policies - emerging speech regulation and legislation
Pakistan this week blocked TikTok for the fourth time in a year for continuing to host what the government called "immoral and obscene" content. You'll remember that the video-sharing app was put on 'final notice' back in September (EiM #80) and, since then, tensions have been exacerbated by new legislation that did little to hide its undertones of minority-group censorship (EiM #89).
In regulatory news, the Electronic Frontier Foundation became the latest organisation to raise concerns about the UK's Online Safety Bill, warning that it poses "serious threats to freedom of expression online" and "must be revised". It joins the Carnegie Trust, the Lords Communications and Digital Committee, the Committee to Protect Journalists (CPJ) and others in recently pointing out big holes in the proposed legislation.
Meanwhile, British MPs across the political divide continue to point fingers at each other over the speed and make-up of the government committee designed to scrutinise the Bill. Doesn't bode well, does it?
💡 Products - the features and functionality shaping speech
Instagram's latest attempt to protect users from harm is a sensitive content control that lets users decide what kinds of posts they see. The new setting, which applies to the Explore page but will presumably be extended to Reels and Shop in the future, limits content that is sexually suggestive, violent or features drugs and firearms.
However, having used the filter this week, it is severely lacking on (at least) two fronts:
1) There are no examples of "sensitive content" on the Settings page where the filter is enabled. This means it's impossible for users to make an informed call about whether they should limit such content or not.
2) It fails to explain to users how such content is categorised (AI? User reports? A combination of both?), so it's very difficult to have any confidence that the right content will be screened from view.
While the theory of giving users more control is a good one, this is an example of it being done badly.
💬 Platforms - efforts to enforce company guidelines
Facebook has a responsibility to stop authoritarian regimes on its platform and should "go further" than it does in relation to world leaders, according to a highly respected free-speech advocate. Courtney Radsch called for an end to the "newsworthy" exemption afforded to heads of state in a piece for Project Syndicate, insisting that "governments, public officials, and political parties must face swift and severe consequences if they violate a platform’s terms of service".
Sixty moderators from Ireland, Portugal, Spain and the United States have signed a letter calling for Facebook to bring all moderation in-house to equalise pay and benefits. The letter — written in collaboration with Foxglove and addressed directly to the CEOs of two of Facebook's outsourcing partners — also calls for proper mental health support and an end to the restrictive NDAs that prevent people from speaking out about working conditions. A potentially significant moment.
👥 People - folks changing the future of moderation
I often think that experience trumps expertise and Nicola Roberts is a great example of that. The Girls Aloud singer hasn't studied online speech but she knows, from suffering online abuse at the hands of an ex-boyfriend, what needs to be done to better protect people on the web.
Last week, she spoke out against the UK's Online Safety Bill, noting that it "ultimately was still contributing to countless people experiencing abuse online" by not closing loopholes that allow banned users to set up new accounts.
The Bill has many flaws and this is just one of them. But we only come to know those failings by listening to the testimonies of people like Roberts, who suffered when they needn't have done.
🐦 Tweets of note
- "I will continue saying this until everyone hears it" - former Pinterest employee and Earthseed founder Ifeoma Ozoma goes in hard on NDAs.
- "I think you can believe at the same time that Mosseri is sincere but also that this is an incredibly weak and slow move" - Bauer CEO (and a former colleague of mine) Chris Duncan picks holes in Adam Mosseri's response to the England footballer online abusef
- "Focus less on what platforms leave up and takedown, and more on what their fundamental design and algorithms incentivize and promote" - Will Oremus, now writing about tech at the Washington Post, on the Biden vs Facebook beef.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.