📌 France’s very own NetzDG
European regulation of social media has been discussed for so long that it can be easy to forget about its significance when it eventually happens. That’s why this week I’m looking at what France’s new ‘Avia Bill’ means for moderation across the continent.
Our run of moderation podcasts continues this week with Verge journalist Casey Newton’s appearance on NPR’s Fresh Air (thanks to Lauren Katz for the tip).
Thanks to Adam, Mark and Max for their positive feedback on last week's edition. Get in touch if you have comments (good or bad).
Thanks for reading — BW
What’s French for ‘regulation’?
It’s been just over two years since Germany passed NetzDG, the law that makes social network platforms responsible for unlawful content (EiM 13). Now its neighbour France is following suit.
On Tuesday, French MPs comprehensively passed a law that makes social media networks liable for offending content on their platforms. There are strong similarities to the German legislation, which came into force in January 2018 — failure to remove ‘obviously hateful’ posts within 24 hours will result in fines up to €1.25m and repeat offenders will be stripped of 3% of their revenue.
France’s new legislation isn’t surprising if you’ve followed the signals coming out of the country over the past six months:
- Last year, Emmanuel Macron agreed with Mark Zuckerberg that French officials would visit Facebook's offices to find out how they deal with hate speech (EiM 11).
- In April, a new Minister of Digital Affairs, Cédric O (yes, that’s his full name) was appointed and expressed strong views on moderation from the get-go.
“It’s just like banking regulators. They check that banks have implemented systems that are efficient, and they audit those systems. I think that’s how we should think about it.”
- A report titled ‘Creating a French Response to Make Social Media Responsible’ was published in May that laid out how social networks ‘create(d) problems in other countries’ and gave recommendations to ensure networks had greater responsibility for the abuse on their platforms.
- Charismatic MP Laetitia Avia, a 33-year-old former corporate lawyer, has been raising awareness of racist abuse she receives on Twitter for some time and led the campaign to bring the bill to French parliament.
As with NetzDG, there are reservations that the 'Avia Bill' reaches too far. Law professor Anne-Sophie Choné Grimaldi claims in a piece for Le Monde that this way of ’treating all the platforms identically is problematic’.
Meanwhile, Article 19, the British human rights and freedom of expression organisation, argues that it 'entrenches private censorship of a wide range of illegal content at the expense of the courts’. These reservations are justified.
So, what does this mean for moderation in France? Initially, most likely a host of new jobs in the policy and operations teams of tech companies based in France. You may remember Facebook hired 500 people in Essen to deal with the new demands of NetzDG.
Further down the line, you can also expect big fines, just like the €2m one that Germany's Federal Office for Justice dished out to Facebook earlier this month for underreporting complaints on its platform.
But this isn't about the money. It's about Europe's fightback against unchecked technology, and the as-yet-unknown, unintended consequences for online speech that will follow.
GitHub's stark stance on nudes
This week gave us a reminder that community guidelines and takedown disputes stretch beyond words and images.
GitHub, the widely-used code management tool, took down a repository (essentially a set of files) because it contained information about how to run DeepNude, an app that creates fake naked pictures of women.
Hosting or transmitting ’sexually obscene content’ is against user policy and so the repo was rightly pulled down. However, GitHub doesn’t monitor projects on the platform and relies on users flagging content to it. How long until something like this happens again?
Twitter has followed Facebook in getting civil rights groups to help it update its policy on hate speech. Like Facebook, it fell way short.
Civil Rights Groups Mostly Unimpressed by New Twitter Policy Against 'Dehumanizing' Language
Civil rights organizations were, somehow, both pleased and exasperated with Twitter on Tuesday after the social network announced the latest update to its rules against “hateful conduct”.
I'd never heard of Rich Kyanka, founder of the Something Awful forum, before The Verge interviewed him. But his advice to YouTube (to put 'proverbial heads on the pike around your site’) bears reading.
Something Awful’s founder thinks YouTube sucks at moderation - The Verge
YouTube has been particularly bad about moderation lately. Earlier this month, Susan Wojcicki, YouTube’s CEO, apologized to the LGBTQ community after the company didn’t take definitive action against the conservative YouTuber Steven Crowder, who had over the years been hurling a litany of homophobic slurs at Vox host Carlos Maza.
The lawyer behind the Facebook oversight board explains more about the concept and how it will respond in a timely way to content on the platform
Exclusive: The Harvard professor behind Facebook’s oversight board defends its role
The idea seems to have its merits, but how Facebook implements the plan will speak volumes about the company’s real motivations.
More hard-to-read tales from Facebook’s contractors in the Washington Post.
Jillian C. York reminds us in this Gizmodo piece that questions of moderation are questions of money and, for me, that’s why they’re so interesting.
How American Corporations Are Policing Online Speech Worldwide
In the winter of 2010, a 19-year-old Moroccan man named Kacem Ghazzali logged into his email to find a message from Facebook informing him that a group he had created just a few days prior had been removed from the platform without explanation.
Tuesday’s appeals court judgement proves the leader of the free world is powerful but he’s no match for Twitter’s moderation guidelines.
Appeals Court Asks Wrong Question in Trump Twitter Blocking Case - Bloomberg
An appeals court ruling that says the president can’t block followers is actually bad for free speech.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.