It’s colder and darker here in the UK after the clocks went back on Sunday. But new folks signing up from the Global Network Initiative, Twitter, Cornell University, Harvard and Wikimedia have helped counteract the winter chill. Good to have you all on board.
I’ve tweaked the format of the links this week after some feedback from a reader (thanks Nick!). Whether you’re new to the list or have been here since July, do send me a note on whether I’m hitting the mark.
Thanks for reading — BW
Emojis are a moderator’s kryptonite
How do you deal with a guy like Glasses Guru?
If you weren’t watching last week’s Content Moderation III conference (all the sessions are online here), Glasses Guru is a fictional character who appeared in the You Be The Moderator session, in which conference participants had to vote on how to moderate provocative hypothetical content on a generic social network. It raised an interesting question I hadn’t considered.
To explain the scenario: Glasses Guru pens a comment advocating that all men should die (you can see the comment in full on the screen grab below). The moderation policy is strongly against hate speech, violence and dehumanising language. Does it come down?
20 people thought so, stating that it clearly went against the policy. But a larger group suggested it should stay up because a 😂, placed halfway through the comment, made the comment feel jokey or, at the very least, not something that posed an imminent threat.
A participant then made the point that adding an emoji couldn’t be allowed to become a means of cloaking hateful ideas and viewpoints. Another asked whether the meaning of the comment would change if the emoji was a 😉 rather than a 😂. And how do you clarify emoji use in a policy that spans multiple regions where that use varies widely?
No-one had a clear answer. But it was obvious that this was just the tip of the iceberg and that the job of unpicking what emojis mean isn’t going to get easier any time soon.
The ultimate irony
You might have read the piece by Mike Masnick at Techdirt that I linked to back in August about the difficulties of moderation at a large scale. Well, he revealed last week that that very article was later found to have violated Google’s AdSense policies for ‘dangerous or derogatory content’ (it was nothing of the sort).
Not understanding why, Mike asked for the decision to be reviewed and was told again that the article was ‘non-compliant’ and that the (unspecified) violations needed to be fixed before ads would be served on the page again. Cue a stalemate.
Luckily Mike doesn't much care about one article, but he knows it's a slippery slope when there's no way to fight back against unfathomable rules. The sooner Google starts to follow its own principles, the better.
Be more Benedict
As a lapsed Catholic, I should probably know more about St Benedict and his code of ethics than I do.
This week, someone found all 72 of his ‘instruments of good works’ in the code of conduct of a widely-used piece of open source software. (Some people do read CoCs after all.)
It turns out the founder of SQLite is a devout Christian and thought the list would help contributors live ‘a happier and more productive life’.
Has he ever been on the internet before?
If you’re going to ban someone, at least have the self-respect to tell them. That’s not what happened to Chris Hoffman, who left a review of a scam item he bought, only to find two years later that his review had been deleted and his review privileges removed.
Two years ago, I got scammed by a counterfeiter on Amazon. I left a review on the product warning others about my experience. Eventually, Amazon deleted my review and banned me from leaving reviews for “violating Community Guidelines.”
It wasn't long ago that Jack Dorsey said journalists needed to be fact-checkers on the platform for the good of society. That’s exactly what The Daily Beast had to do last week when a reporter spotted Milo glorifying the attempted bombings.
Instagram had first told The Daily Beast that Yiannopoulos’ post regretting that the bombs didn’t go off had not violated its standards.
Facebook have been opening their doors to media folk and academics at Menlo Park in the hope of getting people to understand how difficult a job they have on their hands. This Wired piece is interesting, although I'm no more sympathetic than before.
Moderating the posts of more than two billion people is a colossal job. AI doesn't necessarily provide the solution that Facebook wants – only democracy can do that
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.