Human mods, changes to Groups and bringing law to Discord
Welcome to Everything in Moderation, your weekly newsletter about content moderation on the web, curated by me, Ben Whitelaw.
If you missed last week's edition, you'll notice EiM has a new, simpler structure (read the background as to why). Thanks to everyone who sent their feedback (particularly Matt and Nick) as well as those who forwarded it to unsuspecting colleagues and friends. Content moderation is a niche world and so word of mouth is everything. As always, you can support the newsletter here.
Onto this week's updates - BW
📜 Policies - company guidelines and speech regulation
It's always the way, isn't it: you wait for an announcement about the bodies overseeing content policy and then two come along at once.
Facebook sources this week divulged to the Financial Times that its Oversight Board would start accepting cases from mid-October, a mere two years after Mark Zuckerberg first announced the idea and five months after the board members were unveiled (see EiM #63, The world's highest-profile moderators).
Meanwhile, TikTok announced the seven members of its Asia Pacific Safety Advisory Council, including academics and activists from India, Pakistan, Singapore, Indonesia and elsewhere. The group, like its US equivalent launched last March, will convene quarterly.
The timing of both announcements is notable but not surprising: the US election is scheduled for 3 November and being able to point to a functional Oversight Board will give Facebook some breathing space in the inevitable post-election shakedown. Similarly, TikTok has already been banned in India and is "on final notice" in Pakistan following accusations that it is allowing the spread of "vulgar content". It needs all the help it can get if it wants to avoid being shuttered elsewhere too.
On the topic of regulation, I can recommend two interesting reads:
- The Brookings Institution's round-up of content moderation regulation from around the world and the risks associated with a "do something" approach.
- Postdoc researcher Garfield Benjamin writes for the London School of Economics blog about how UK regulation is held back by multiple regulators and little regulatory co-operation.
💡 Products - features and functionality
This went under my radar last week but feels like a significant move for the moderation of closed groups: Facebook is making it harder for moderators of banned groups to create new ones.
Mods now join admins in serving a 30-day stay of execution when a policy violation has occurred, as well as facing the removal of the group entirely if they approve violating posts. Admins and mods demonstrate different behaviours to regular users but, apart from the occasional policy tweak and a new tool here and there (see August 2019), have rarely been targeted in this way before. That's seemingly changing.
It's been a while since I've had to put a comments death notice here in EiM but here's one: Mumbrella, the Australian media industry site that I've covered previously (EiM #64), has closed its comments just eight months after trying to clamp down on toxic drive-by users.
💬 Platforms - social networks and their approach to online speech
Is this an admission of AI moderation's limitations? YouTube's chief product officer this week said that human moderators are back reviewing content for the platform following an unusually high video removal rate in the three months to June. In total, 11 million videos were removed, which is double the usual rate of removals, and far beyond what Neal Mohan made out to be "err(ing) on the side of caution".
I'm torn as to how I feel about this: as more than one Twitter user put it, this was hardly difficult to see coming but, at the same time, the platforms did warn it would happen.
For the last two months, Twitter has taken aim at QAnon (EiM #74, No pizzagate posting for you) and this week, its head of site integrity suggested that the approach was working. Yoel Roth said that impressions of QAnon-related content were down 50% and that Twitter would continue to "invest in the approach". My two cents: presuming that Twitter's product team implemented the easiest, highest-impact changes first, why has QAnon exposure only decreased by 50%? And what happens with the rest?
👥 People - those shaping content moderation
Sometimes it is easy to forget that Discord, the distributed community platform with a $3bn valuation, is a real startup and not just a collection of lawless servers.
I was reminded of that this week as I read a Q&A with Clint Smith, its recently appointed and first-ever chief legal officer. The article is registration-only but talks about:
- Being pro-moderation ('We want to make sure that on a policy basis we keep the right to do that moderation and keep a safe and trusted platform')
- Doing so with just three people in the policy team (although it is set to double in size over the next 12 months)
- Having to consider the regulations affecting non-US users, who make up 75% of Discord's user base
Not a simple job then.
She may only be known as Jane Doe at the moment but a moderator working for YouTube could be the next person to successfully sue a tech giant for failing to protect her wellbeing during work hours. In a suit filed in California, the moderator, who worked for Collabera, alleges she experienced panic attacks and came to fear crowded spaces. Following Facebook's $52m settlement back in May, more moderators like her will launch actions, and for good reason: content moderation takes its toll.
🐦 Tweets of note
- "We agreed that we should focus on how harmful content is distributed": VÄra JourovĆ” ā VP for Values and Transparency at The European Commission ā talking to Jack Dorsey about throttling vs removing.
- From a current European politician to a former one: Stanford Cyber's Marietje Schaake reflects on big brands' negotiation of fresh definitions of harmful content with the dominant digital platforms.
- And from one Stanford Cyber Policy Center director to another: Daphne Keller flags a piece by Stephen Wolfram (he of search engine fame) about content moderation, calling it "much more technically fleshed out than other things I've seen".
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.