📌 Moderation as 'moral responsibility', new Oversight Board report and a billboard protest
Hello and welcome to Everything in Moderation, your weekly digest about the people, products, platforms and policies driving the brave new world of content moderation. It's curated and written by me, Ben Whitelaw.
New folks from Insight Centre, InternetLab, Cornell University and elsewhere, thanks for taking a punt on the newsletter.
Today's EiM is a little shorter than usual as I've been eating my own bodyweight in cornetti and scoping out career changes for when we're all forced to live as shapeless avatars. Despite spending (comparatively) little time online this week, I've noted a number of important stories that it's worth you knowing about. So here they are — BW
📜 Policies - emerging speech regulation and legislation
Canada's widely-derided online speech regulation plans are the result of dealing with "the most controversial aspect of internet governance — the issue of online speech regulation — in isolation", according to a researcher at the University of Ottawa. Yuan Stevens, writing for The Conversation, argues that the country "hasn’t reckoned with the business models of behemoth social media platforms" and urges the digital rights community to hold Justin Trudeau's recently re-elected Liberal government to account as it pushes through legislation.
The independent-but-Facebook-funded Oversight Board™ released its first quarterly transparency report this week and it's a mixed bag if you ask me. In a detailed blog post, the Board outlined that it had received over 500,000 appeals between 22 October 2020 and the end of June 2021 (which feels like a good take-up) but almost half were from the US and Canada (which isn't the point of the OB as I understood it). It also proudly boasts of almost 10,000 public comments on cases but, in the report itself, goes on to note that just 176 comments (!) were contributed across the 10 cases that didn't concern Donald Trump (which had 9,666 comments alone). That's fewer than 18 comments per case. Local planning applications get better engagement than that. There's work to do.
Michael McConnell, one of the Board's four co-chairs, was sent out to bat for the Oversight Board model and told Reuters (as you might expect) that "the impact (of the Board's existence) might seem to be positive". He graciously noted the downside, however:
“The most justifiable complaint is we’re using a thimble to deal with a fire hydrant, and there’s a lot of truth to that, but my impression is that those thimblefuls are having reverberations through the culture of the company.”
Snap's CEO Evan Spiegel waded into the online speech regulation debate by arguing that it wasn't "a substitute for moral responsibility". Speaking at WSJ Tech Live, he also noted that "unless businesses are proactively promoting the health and well being of their community, regulators are always going to be playing catch up." Well said, Evan.
💡 Products - the features and functionality shaping speech
It's easy to say this now but I just presumed that posts in Facebook Groups from users who had broken the platform's community guidelines weren't treated in the same way as rule-abiding posts. Turns out that wasn't the case until just this week, when Facebook announced it would demote such posts in other users' feeds to reduce their exposure. The announcement also revealed a new 'Flagged by Facebook' label that shows Group admins which content will soon be removed unless they say otherwise. I call this micro-transparency.
It seems crazy to conceive of an idea like the metaverse, in which we're surrounded by digital recreations of real-world friends and foes, without first getting to grips with plain ol' screen-based moderation. We are where we are, though, and, helpfully, this Slate article runs through the reasons for concern, including some on-the-nose comments from UCLA's Sarah T. Roberts. As she says, "It’s not like these companies don’t have a track record". My read of the week.
💬 Platforms - efforts to enforce company guidelines
A few weeks ago, we learned — via Frances Haugen — about Facebook's 'do not ban' list (EiM #128). This week, courtesy of the recent Twitch hack, we found out that the streaming site had one too, albeit one with more nuance and fewer A-list stars with dubious pasts. The Washington Post has all the details but what's particularly interesting to me is that especially toxic streamers had dedicated moderators who handled complaints about them, presumably because those moderators were more experienced and knew the streamer's content. Dystopian stuff.
Facebook's proposed name change has led to renewed calls for the hiring of more experts — including moderators — to address the network's abuse problem. In this Qz piece, Paul Barrett, the deputy director of the Center for Business and Human Rights at NYU, says that “all the staffing in the world won’t solve it” unless leaders take responsibility.
This brings me neatly onto the final story of this section: Accenture-contracted moderators working for Facebook have teamed up with Foxglove Legal to run a billboard protesting against poor conditions and low wages. The board will travel from Maryland, where Accenture CEO Julie Sweet lives, to Washington DC. These issues go as far back as March 2019 (EiM #19), so I'm glad to see this organising taking place.
👥 People - folks changing the future of moderation
You may or may not have noticed that a quiet revolution is taking place in the Trust and Safety team over at Airbnb.
Last week, Juniper Downs was hired from Google to become the accommodation marketplace's new Global Head of Community Policy and Partnerships. She follows Donald Hicks, VP of Trust Policy and Partnerships, who joined in February from Twitter and previously led operations teams at Facebook, Google and Amazon.
Downs' role is important in that she will oversee the global Community Policy team, including maintaining partnerships with external groups that help keep users safe through policy advice and implementation. It's an area where the company has done more than most other platforms: for example, the National Network to End Domestic Violence and Polaris have been engaged in Airbnb's respective domestic violence and human trafficking work for some time. In July, the company also announced the Trust and Safety Alliance, a group of UK-based "expert organisations" providing guides and information for hosts and other users.
This is what serious investment in Trust and Safety looks like: concerted, pre-emptive and at the senior level. I expect it will serve Airbnb well.
🐦 Tweets of note
- "I feel so lucky to be joining a company that centres women’s experiences and whose mission & values I align with so closely." - Azmina Dhrodia announces her great new policy gig at Bumble.
- "Please know that I have so much gratitude for you and that I will miss you terribly." — Twitter's VP Trust and Safety Del Harvey bids farewell to her team today (22 October) after almost 13 years at Twitter.
- "Any action rate taken alone is an unreliable narrator. A high action rate isn't fundamentally better" - TSPA's Charlotte Wilner goes deep on T&S metrics in this great thread (something that I wrote recently that we need more of).