📌 Social norms FTW
I've been lax about welcoming new EiM subscribers recently, so a belated hello to folks from Spredfast, Medium and Reword.ly, and a special 👋🏾 to Giuseppe, whose weekly data visualisation newsletter I’ve subscribed to for a long while and highly recommend.
I took a break from EiM last week as I had a presentation that took up a lot of my time and energy (thankfully, it went well). Too much going on this week to miss another newsletter.
Thanks, as ever, for reading — BW
Put a pin in it
Sometimes the challenges of moderating content on the web seem insurmountable, especially if you’re working in a team reviewing content every day. This week, a newly published research paper provided a glimmer of hope that something can be done right now.
The paper, by Nate Matias, a postdoc at Princeton, showed that messages emphasising forum norms significantly reduced unruly behaviour and increased newcomer participation in the r/science subreddit (13.5m subscribers back in 2016, when the research was conducted; 21m now).
The Reddit post was pinned for 29 days at the top of some threads and not others and, in the threads where it featured, was found to increase the likelihood that first-time commenters would get involved in the discussion by 70% on average. It also led to an 8.4% higher chance that those comments wouldn’t be deleted. (details)
It’s amazing that something so simple could yield such dramatic results. And at the same time, I’m not at all surprised.
In my last job at The Times, a community journalist on my team sent an informal welcome email to users leaving their first comment. While we didn’t have the capacity to measure the test properly, anecdotally we saw those users behave much better in the long run.
So, if we know social norms work for improving civility, the question is where else they can be emphasised. One idea may be onboarding. Back in 2014, I wrote about how Twitter could reiterate what constitutes acceptable conduct on its platform by focusing on the terms of service in the sign-up journey (rather than simply on how to follow someone). It’s a different way of establishing the norms that Nate refers to in his paper.
Nate is inviting other Reddit communities to take part in the next stage of research. If you’re a moderator or know someone who is, sign up by the end of May.
Creators, unite!
Back in February (EiM 18) I suggested, somewhat speculatively, that content moderators had a case to unionise. So I note the creation of Unionized Memes with interest.
Unionized Memes has been set up to represent the interests of original content creators making memes across the world and is designed to put pressure on the platforms that monetise their work. While unlikely to be recognised by the USA's National Labor Relations Board anytime soon, they have over 100 members and are growing.
If it goes well, who knows, maybe mods will be next...
Not forgetting...
Black Lives Matter organisers, anti-racism podcast hosts, even black former Facebook staff: all have had posts removed for trying to start a conversation about racism they’ve faced. They call it being ‘Zucked’.
Facebook while black: Users call it getting 'Zucked,' say talking about racism is censored as hate speech
Black Facebook users complain they can't talk about racism without being censored for hate speech. Civil rights groups want Facebook to fix it.
Sometimes even Facebook’s staff sound like they’ve been Zucked. Joaquin Quiñonero Candela, Director of Applied Machine Learning, said in an interview with Bloomberg about new efforts to screen harmful content: “I feel like when I was doing super-complicated math, that felt a lot easier than this.” Poor Joaquin.
Facebook Wants AI to Screen Content, But Fairness Issues Remain - Bloomberg
One of Facebook Inc.’s biggest issues in trying to stop the spread of fake news on its platform is being able to train its algorithms on good examples of truth and falsehoods.
Following declarations from Valve’s Steam and Google’s Stadia about cleaning up their platforms, Microsoft has followed suit, strengthening its guidelines about what constitutes harassment and giving examples of the kind of ‘clean trash talk’ that is acceptable.
Xbox Targets Toxic Trash Talk, Updates Community Guidelines – Game Rant
Microsoft updates its community guidelines to more accurately represent its values, both acknowledging trash talk's role while setting transparent limitations.
There’s no such thing as a ‘decided issue’ anymore. That’s what Jillian C. York and David Greene argue in a new piece for EFF: a lack of recent research about an important, agreed-upon topic (e.g. immunisation) can create ‘data voids’ that allow counter-narratives to creep in.
Censorship Can't Be The Only Answer to Disinformation Online
With measles cases on the rise for the first time in decades and anti-vaccine (or “anti-vax”) memes spreading like wildfire on social media, a number of companies—including Facebook, Pinterest, YouTube, Instagram, and GoFundMe—recently banned anti-vax posts. But censorship cannot be the only answer...
Politico’s morning tech email contained some interesting quotes from Wikimedia Foundation CEO Katherine Maher, in which she shared her views on Section 230 and European regulation of tech.
Data privacy deadlock - POLITICO
Wikipedia behind the scenes — Tuesday’s robocall hearing
A series of preliminary recommendations from EFF about how to fix the broken content moderation system. Tools, similar to those that Facebook’s News Feed team recently announced, will be key.
Content Moderation is Broken. Let Us Count the Ways
Social media platforms regularly engage in “content moderation”—the depublication, downranking, and sometimes outright censorship of information and/or user accounts from social media and other digital platforms, usually based on an alleged violation of a platform’s “community standards” policy. In...
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.