Welcome to Everything in Moderation, your weekly newsletter about content moderation signed, sealed and delivered by me, Ben Whitelaw.
If you missed last week’s newsletter, I'll soon be getting to work on ‘Meet the Moderators’, a series of in-depth interviews designed to give content moderators a voice in the future of online speech. You can support the series in three different ways. Here's how.
Now, onto this week’s update — BW
📜 Policies - company guidelines and speech regulation
It’s just a few weeks before the European Commission launches the Digital Services Act, which will set out new rules for online companies, including how they should moderate content. December 2 is the big day.
From draft legislation and recent leaks, we expect the Commission to force platforms to report removal rates, complaints and the outcomes of user reports, as well as declare the tools they use to moderate and any co-operation with authorities. Large online platforms (LoPs) will have more obligations than their smaller counterparts and, overall, it should bring about a degree of transparency that academics and policymakers have been calling for (EiM #71).
Naturally, all the dominant digital platforms have been war-gaming the DSA’s launch. This week, a leak of Google’s plan to lobby against industry chief Thierry Breton (who has gone up against Facebook in the past, EiM #52) forced the company into an embarrassing apology. It’s going to be a bloody war.
While we're on Europe, Marietje Schaake — former MEP and current international policy director at Stanford University’s Cyber Policy Centre — touches on the Digital Services Act in this week's Lawfare podcast, noting the difference between the US and European approaches to online harms. Very much worth a listen.
Finally, if you’re free today, Yale and Wikimedia are hosting a virtual workshop on the impacts of content moderation at 3pm GMT. I have a bad record of making it to webinars I've signed up for, but perhaps I’ll see you there.
💡 Products - features and functionality
A stark reminder that content moderation isn’t just about comments or videos — it’s about usernames too. The Athletic has uncovered that the official Premier League fantasy football game contains hundreds of racist and anti-LGBT team names, despite the league's commitment to combating discrimination and some automated checks already being in place. Other team names included political phrases like ‘White Lives Matter’ as well as swear words.
As we know, and as the article makes clear, context is key. A term with anti-Semitic origins is used endearingly by Spurs fans as a nod to their club’s past, yet the same word can be deployed as a slur. I hope the Premier League consider adding the ability to report team names, and banning name changes during the season, at the very least.
💬 Platforms - dominant digital platforms
There was another US Senate hearing with Mark Zuckerberg and Jack Dorsey this week that focused in part on content moderation. However, there was little that was new or notable, so I won’t waste your time.
More notable were two Facebook/election stories I came across this week:
- In the aftermath of Election Day, Spanish-language accounts with large followings were claiming a Trump victory and vote rigging, raising questions about how well social media platforms were staffed to deal with Spanish-language content.
- The Civic Integrity Unit, which governs how the platform deals with elections, had its recommendations for limiting the spread of false claims vetoed because execs thought it would look too much like censorship.
👥 People - those shaping the future of content moderation
Not one person this week but a group of people, specifically the third-party contractors who are joining TikTok en masse.
CNBC found this week that 25 moderators have recently left Accenture and Cognizant, where they mainly worked on Facebook contracts, to join one of the video app's trust and safety hubs in Dublin, San Francisco and Singapore.
TikTok reportedly has 10,000 people working on trust and safety worldwide and plans to add 200 people in Dublin by January 2021. That's some growth.
🐦 Tweets of note
- "All reading material is public, including my teaching edits of some of the longer cases” - I can’t wait to dig into the Stanford 2020 Platform Regulation reading list, shared by Daphne Keller.
- "I uh, I'm actually kind of surprised at the extent of user-level moderation features Parler has” - Meedan content moderation lead Kat Lo looks at the free speech platform’s controls.
- "This is a big step forward” - Harvard lecturer Evelyn Douek on Facebook’s announcement that it will include the prevalence of hate speech in its quarterly public reports.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.