📌 Bad actors vs insensitive users
Hello everyone and welcome to another EiM.
Some nice news for me: I got to give my dulcet tones a runout on Patrick O’Keefe’s Community Signal podcast this week, where we spent an hour talking about everything from the effect of Brexit commenting threads to how credibility is created and transferred. Have a listen.
This week (Tuesday) was the Content Removal at Scale conference at the European Parliament in Brussels, where policy folks from Facebook, Pinterest and other technology companies came to discuss their user content policies. I haven’t had time to comb through the reaction yet so let me know if you tuned in or if there are any interesting reactions you’ve read.
As ever, thanks for reading — BW
You can't regulate against people like Clayton
Amidst the calls for regulation of social media platforms in Europe and across the Atlantic, it’s good to be reminded about the challenge of differentiating between bad actors and regular, misguided folks. Two examples in the last few weeks do just that.
The first is the tragic case of Jackie Griffin, an Irish courier driver who died in a ‘horrific crash’ on the M50 outside Dublin a few weeks ago. Her death became controversial after several people, including a user reportedly named Clayton Mahoney, shared images of her body as their cars crawled past the wreckage.
The media in Ireland caught wind, leading Jackie’s family to urge people not to share these posts and police to ask for information about the offending users on the grounds that it’s illegal to use mobile phones while driving on the motorway.
As is customary with cases like this, academics and politicians followed suit. The platforms where these images appeared, they said, needed to be held accountable for the distribution of extreme images. Mary Aiken, psychologist and author of The Cyber Effect, wrote a piece in The Sunday Independent arguing that children are particularly at risk of trauma. Yet no one made the simple point that it was clearly wrong for Clayton, or anyone else, to share an image of a reportedly decapitated woman, and that, had they taken a second to think or had a drop of compassion, they would have refrained from doing so. But it's much simpler to bash Facebook et al.
Last week, a similar tragedy unfolded near where I live in London. Nedim Bilgin, a 17-year-old, was stabbed on Caledonian Road after an altercation with two other teenagers. There was a large police presence at the scene around 7pm, when the incident took place, and my housemate happened to see paramedics doing CPR on Nedim in the rain before he was declared dead. We didn’t need to know Nedim or his family to be deeply affected.
Around 9pm that evening, I saw a tweet about the stabbing posted with an image of a body bag. The picture was mid-range and had been zoomed in, as if taken from across the road. The user (who I won't name for obvious reasons) had clearly walked past at that very distressing moment and decided to take and share the picture with their 2k followers.
I thought about reporting it but figured the fastest way to have it removed was to get in touch myself. I sent the following message:
The response was swift and the user removed it within 10 minutes. I got a DM to say it had been taken down and, the next day, several more messages followed which showed that it was clearly a mistake, a lapse in concentration and compassion at a difficult time.
Afterwards, I thought about the similarities between the images of Jackie, which had gone viral, and the picture of Nedim, which had been pulled down just in time. In both cases, the people who posted them could hardly be called bad actors — they're not looking to disrupt the outcome of an election or extract data for unethical or illegal ends — but users whose only crime is insensitivity and a lack of compassion. They are people who need education and coaching, not regulation, to help them understand the impact of their online decisions.
They also act as a useful reminder that regulation, if and when it comes, won't solve all of our problems.
The people changing the 1/9/90 rule
The 1% rule has been around for 15 years, ever since two authors looked at data from Yahoo and ProductWiki about who creates content and noticed a strikingly similar pattern.
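For anyone unfamiliar with it, the 1/9/90 split can be illustrated with a few lines of code. This is a minimal sketch with made-up activity counts; the function name, the bucket thresholds and the sample data are all my own assumptions, not anything from the original research.

```python
def participation_split(post_counts, heavy_threshold=10):
    """Bucket users into the 1/9/90 pattern: heavy creators,
    occasional contributors, and lurkers who never post.
    post_counts is a list with one posts-per-user count per user."""
    total = len(post_counts)
    creators = sum(1 for c in post_counts if c >= heavy_threshold)
    contributors = sum(1 for c in post_counts if 0 < c < heavy_threshold)
    lurkers = sum(1 for c in post_counts if c == 0)
    return {
        "creators": creators / total,
        "contributors": contributors / total,
        "lurkers": lurkers / total,
    }

# Hypothetical community of 100 users: 1 prolific poster,
# 9 occasional commenters, 90 silent readers
counts = [50] + [2] * 9 + [0] * 90
print(participation_split(counts))
# {'creators': 0.01, 'contributors': 0.09, 'lurkers': 0.9}
```

The point of the rule — and of Jigsaw's work below — is that last bucket: the 90% who read but never feel it's worth (or safe) joining in.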
If one team of scientists and engineers at Alphabet have their way, it may not last much longer. As outlined in this good PCMag feature, Jigsaw, the incubator that sits within Google’s parent company, are working on democratising the debate by creating less toxic and more welcoming spaces online.
The product manager there, CJ Adams, puts it like this:
"That means that of that 1,000 people that could be in the room, you have only a handful represented in the discussion; let's say, 10 people. I have deep faith that we can build a structure that lets that other 990 back into the discussion and does it in a way that they find worth their time.”
Jigsaw's Perspective product, the API that estimates the probability that a comment is toxic, is already being used by the New York Times and in various Reddit threads, so progress is being made in that sense. One can only wonder what an internet where 5% or even 10% of users are posting happily will look like.
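For the curious, the shape of a Perspective request is simple: you POST a comment and the attributes you want scored, and get back a probability per attribute. The sketch below builds the request body and parses a response without making a network call (a real call needs an API key against the `commentanalyzer` endpoint); the helper function names and the trimmed sample response are my own.

```python
import json

def build_analyze_request(text, attributes=("TOXICITY",)):
    """Build the JSON body Perspective's comments:analyze
    endpoint expects: the comment text plus a dict of
    requested attributes (TOXICITY, etc.)."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {attr: {} for attr in attributes},
    }

def toxicity_score(response):
    """Pull the summary toxicity probability (0 to 1) out of
    a parsed comments:analyze response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

payload = build_analyze_request("You are a wonderful person")
print(json.dumps(payload))

# Trimmed example of the response shape, with an invented score
sample = {"attributeScores": {"TOXICITY": {"summaryScore": {"value": 0.03}}}}
print(toxicity_score(sample))  # 0.03
```

A publisher like the Times would typically hold comments above some score threshold for human review rather than block them outright, which is what keeps the "handful in the room" from shrinking further.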
The head of the Counter Extremism Project wishes Facebook a ‘happy 15th birthday’ by calling for more human intervention alongside AI to weed out extremist content
We have to force Facebook to be responsible on illegal content
Further proof that everyone hates TikTok. First, China stated that the video-sharing platform needed to moderate its own content, and now India is doing the same (possible paywall)
Financial Times India takes aim at popular Chinese social media apps
The Telegraph went looking for videos of self-harm on YouTube and, unsurprisingly and sadly, found some.
YouTube criticized for recommending 'self-harm' videos in searches - Business Insider
YouTube said it removes videos that promote self-harm (which violate its terms of service), but may allow others that offer support to remain.
Damian Hinds, the UK’s education secretary, has said tech giants (and schools) can do more to help prepare children for ‘the real world’. It comes after a spate of teenage suicides in which social media (particularly Instagram) has been flagged as a cause by parents
You have amazing ability, Damian Hinds tells tech chiefs — now use it to do good
Social media companies have a moral duty to do more to remove content that promotes suicide and self-harm and should use their technical genius to do “social good”, the education secretary has declared.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.