Moderation filter fail, Texas bill blocked and new platform policy
Hello and welcome to Everything in Moderation, rounding up the must-read content moderation news and analysis every Friday. It's written by me, Ben Whitelaw.
A big EiM welcome to new subscribers from Reddit, Microsoft, King's College London and Figma as well as other corners of the web.
A reminder for any tweeters that I have a regularly updated list of moderation experts that I use to help me put together EiM each week. Follow it here and let me know if there's anyone I'm missing (including you).
Here's what you need to know this week - BW
📜 Policies - emerging speech regulation and legislation
Following last week's news that the EU's Digital Services Act could be signed off as early as next year (EiM #138), you might be forgiven for thinking that everyone was happy with what's on the table. Not the Wikimedia Foundation.
General counsel Amanda Keton and executive director of Wikimedia Deutschland Christian Humborg have written an op-ed for TechCrunch outlining how Wikipedia "works in fundamentally different ways" to the platforms that the DSA seeks to rein in. They also warn that "proposed requirements could shift power further away from people to platform providers, stifling digital platforms that operate differently from the large commercial platforms."
As far as the Online Safety Bill is concerned, there are increasing signs that paid advertising may fall within its scope, following revelations that traffickers used Facebook ads to advertise Channel crossings. It comes just a few weeks after Martin Lewis, campaigner and founder of MoneySavingExpert, wrote a letter to the Prime Minister urging him to consider treating scam adverts "just like user-generated online scams" in the bill. Critics say there are already rules and authorities in place to deal with ads (see the Advertising Standards Authority) and we know culture secretary Nadine Dorries is not keen.
Over in the US, rational thinking prevailed as a federal judge in Texas blocked a law that sought to punish social media platforms for moderating content. The law was designed to correct for the so-called "anti-conservative bias" on large platforms, for which there is no evidence. But, like the Florida law before it (EiM #119), it fell at the same First Amendment hurdle.
Transparency reporting for platforms is all the rage when it comes to online regulation. However, as Daphne Keller of Stanford's Center for Internet and Society notes in the latest EFF podcast, "cumbersome processes" can become a huge advantage for large platforms "if they are things that the incumbent can do and other platforms can't". My listen of the week (thanks to Shane for flagging).
💡 Products - the features and functionality shaping speech
It's impossible to create moderation filters that catch every potentially harmful or discriminatory input, especially when it comes to personalised products sold online (remember Coca-Cola?). That said, the recent example from pyramid-shaped chocolate brand Toblerone is particularly egregious. Tell MAMA UK, the organisation that measures anti-Muslim attacks, found that some spellings of Mohamed were permitted on Toblerone's personalisation website while other, more common spellings and the word Islam (often a surname) "didn't pass our moderation check".
One from last week that evaded my, ahem, filters: Twitter is introducing aliases for users taking part in its Birdwatch fact-checking/moderation programme. Research showed that aliases "helped people feel more comfortable crossing partisan lines", avoided a focus on the author of the note and were preferred by women and people of colour. All good stuff, then.
💬 Platforms - efforts to enforce company guidelines
The big platform news this week came from Twitter, which announced that it was updating its private information policy to also cover "private media", in addition to the personal data (location, contact information, identity documents) that has been covered for some time. Twitter Safety explained the decision in a blog post, arguing that media can be used "as a tool to harass, intimidate, and reveal the identities of individuals". However, the reaction has been anything but positive, with Techdirt's Mike Masnick and Platformer's Casey Newton both raising their eyebrows at the move.
Your use of YouTube has always been subject to its Community Guidelines and a strikes process, but the platform announced this week that it will now explicitly include Community Guidelines strikes information in its Terms of Service as part of an effort to increase transparency. Nothing changes about how strikes operate, or when a channel or piece of content might receive one, but it perhaps signals a shift in YouTube's historically laissez-faire approach.
Don't believe Facebook when it says it's improving its "protections for marginalized communities", says an editorial published by The Washington Post this week. The article doesn't pull any punches: it notes that the company has a record of prioritising, as one former Facebook product manager put it, "rich White man friends" at the expense of people of colour, particularly women. Ouch.
👥 People - folks changing the future of moderation
Fundraising and petition platforms occupy an interesting, under-the-radar place in the moderation conundrum (see the case of GoFundMe). Now one of the biggest such sites has a new face at the helm of its trust and safety efforts.
Tony Sebro was this week announced as the new general counsel of Change.org, the San Francisco-based social change platform with some 500 million users. Sebro joins from the Wikimedia Foundation, where he was deputy general counsel and worked with the aforementioned Amanda Keton (see Policies).
Change.org hasn't, as far as I can recall, been caught up in any major online safety controversies, and Sebro will hope it stays that way.
🐦 Tweets of note
- "I'm not sure how much longer platforms can just look at individual pieces of content vs the broader context of an account" - former Facebooker Katie Harbath reacts to Susan Wojcicki's latest comments.
- "Even when well-intended, big tech censorship with vague rules will inevitably chill valuable journalism" - Documentarian Ford Fisher doesn't think much of Twitter's new personal information policy.
- "In the nick of time" - How EFF director David Greene referred to the Texas moderation judgement in this long and useful thread.