📌 Is this moderation's toughest battleground?
Hello everyone. A nice reminder from The Verge’s Casey Newton about why it's important to keep tabs on the policies, people and platforms at the heart of the evolution of content moderation. He wrote in his newsletter this week:
"the swarm of headlines about content moderation over the past week should not be mistaken as a coincidence. What stays up — and what comes down — has never been a more salient question in people’s minds.”
As a subscriber to EiM, let me congratulate you on having your finger on the pulse.
One more thing: I asked Twitter's hive mind if anyone knew of research/studies that look at the motivations of voluntary moderators and got some great responses - let me know if you have any other ideas by hitting reply or, ya know, tweeting me.
Thanks for reading — BW
Moderating live video: a recent timeline
Here's a rundown of news about bad actors using live-streaming over just the last seven months.
15 March: 49 people are killed in a terror attack on two mosques in Christchurch, New Zealand. The gunman broadcasts the attack on Facebook Live, and the video is copied thousands of times before it can be taken down. The Australian prime minister calls for a moratorium on live-streaming, and The Christchurch Call is later created to mitigate terrorist content spread via live video.
14 May: Facebook announces a one-strike-and-out policy for Facebook Live, meaning anyone who violates its community policies will be ineligible to post live video. NZ Prime Minister Jacinda Ardern welcomes the move.
1 October: Two Hat, a Canadian security firm, signs a deal with British company Image Analyzer to moderate live-streaming in real time. Founded in 2005, Image Analyzer claims to be able to identify content like the Christchurch shooting and shut it down within seconds.
9 October: A lone gunman motivated by right-wing ideologies kills two people during an attack on a German synagogue. He live-streams the whole thing on Twitch.
10 October: Donald Trump, President of the United States, broadcasts his first Twitch stream, in which he regurgitates a far-right blog’s claim that Democratic representative Ilhan Omar married her brother for US citizenship. The claim goes against Twitch’s guidelines.
There are more examples, but these are enough to demonstrate that live-streaming is fast becoming one of content moderation's key battlegrounds. It has become the obvious medium to use if you want to do something you shouldn’t, whether that’s going on a killing spree or repeating falsehoods about political rivals.
Why do I bother spelling these developments out? Because they're a warning of what we can expect. The moderation processes associated with live-streaming are not as advanced or as well-tested as those for other mediums like text and pre-recorded video. Crucially, the technology is not there yet (despite Image Analyzer's claims). And as platforms like Twitch and Mixer gain critical mass, that poses a real problem not just for those playing online games there but for everyone.
Moderator lyf
Rob (who I've met before and who knows his stuff) is one of Reddit’s karma kings and moderates a bunch of subreddits. His picture, sadly, speaks for itself.
Not forgetting...
I’m interested in how users verify themselves online, so I was drawn to this New York Times report on how moderators of the subreddit Black People Twitter have been asking users to post pictures of their forearm to prove they’re not white. It's a topic I’d like to return to in the coming weeks.
Discussing Blackness on Reddit? Photograph Your Forearm First - The New York Times
Moderators of an online forum called Black People Twitter have caused an uproar by requiring participants to submit a photograph proving they are not white.
TikTok is creating 'a new committee to advise the company on a wide range of issues including censorship, child safety, hate speech, misinformation and bullying'. In PR speak, it means they’re trying to make people forget about the censoring of the Hong Kong protests (EiM 36, The ugly side of TikTok).
TikTok taps corporate law firm K&L Gates to advise on its US content moderation policies – TechCrunch
As TikTok continues its rapid U.S. growth, the company is being challenged to better explain its content moderation choices.
Three Australia-based academics have written for The Conversation about why Facebook’s content moderators don’t get the support that they should, reflecting on how community managers are organising in Germany and Australia (EiM 18, 'Moderators of the world, unite!').
Revenge of the moderators: Facebook's online workers are sick of being treated like bots
Mark Zuckerberg may try to minimise their concerns, but Facebook moderators and other online workers are beginning to organise for their own protection.
Twitter has given more clarity around its as-yet-unused approach to dealing with world leaders who violate its guidelines. It’s all theory until it happens for real.
World Leaders on Twitter: principles & approach
An update on Tweets from world leaders
Another paper on Alphabet’s Perspective API, this time from the University of Washington, shows that it is not just bad at dealing with spelling but racially biased too, disproportionately flagging tweets written in African-American English as toxic (a quick sketch of how the API is queried follows below).
Oh dear... AI models used to flag hate speech online are, er, racist against black people • The Register
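For anyone who hasn't come across it: Perspective is a free REST API that returns probability-style scores for attributes like toxicity. Below is a minimal sketch in Python of what a query looks like, based on Google's public v1alpha1 documentation; the API key is a placeholder you would obtain from the Google Cloud console.

```python
# A minimal sketch of querying the Perspective API for a toxicity score.
# The endpoint and request shape follow the public v1alpha1 documentation;
# API_KEY is a placeholder, not a real credential.
import requests

API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return Perspective's TOXICITY summary score (0.0 to 1.0) for `text`."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    body = response.json()
    return body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(toxicity_score("you are a lovely person"))
```

It's exactly these summary scores that the Washington researchers found skewed: the same sentiment can score very differently depending on the dialect it's written in.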
Facebook could do worse than appoint Kara Swisher to its Content Oversight Board, as she suggests in this New York Times piece.
Opinion | Facebook Finally Has a Good Idea - The New York Times
The company’s latest plan to police toxic social media content is intriguing — and even laudable.
This feels like an appropriate way to end this week’s EiM: Facebook has released documents admitting its long-heralded AI is a blunt instrument that lacks context.
Facebook admits its moderation tools are a "blunt instrument", can't understand context
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.