France's very own NetzDG
European regulation of social media has been discussed for so long that it can be easy to forget about its significance when it eventually happens. That's why this week I'm looking at what France's new 'Avia Bill' means for moderation across the continent.
Our run of moderation podcasts continues this week with Verge journalist Casey Newton's appearance on NPR's Fresh Air (thanks to Lauren Katz for the tip).
Thanks to Adam, Mark and Max for their positive feedback on last week's edition. Get in touch if you have comments (good or bad).
Thanks for reading - BW
What's French for 'regulation'?
It's been just over two years since France's neighbour Germany passed NetzDG, the law that makes social network platforms responsible for unlawful content (EiM 13). Now France is following suit.
On Tuesday, French MPs comprehensively passed a law that makes social media networks liable for offending content on their platforms. There are strong similarities to the German legislation, which came into force in January 2018: failure to remove 'obviously hateful' posts within 24 hours will result in fines of up to €1.25m, and repeat offenders will be stripped of 3% of their revenue.
France's new legislation isn't surprising if you've listened to the noises coming out of the country over the past six months:
- Last year, Emmanuel Macron agreed with Mark Zuckerberg that French officials would visit Facebook's offices to find out how they deal with hate speech (EiM 11).
- In April, a new Minister of Digital Affairs, Cédric O (yes, that's his full name), was appointed and expressed strong views on moderation from the get-go:
"It's just like banking regulators. They check that banks have implemented systems that are efficient, and they audit those systems. I think that's how we should think about it."
- A report titled "Creating a French Response to Make Social Media Responsible" was published in May that laid out how social networks "create(d) problems in other countries" and gave recommendations to ensure networks had greater responsibility for the abuse on their platforms.
- Charismatic MP Laetitia Avia, a 33-year-old former corporate lawyer, has been raising awareness of racist abuse she receives on Twitter for some time and led the campaign to bring the bill to French parliament.
As with NetzDG, there are reservations that the 'Avia Bill' reaches too far. Law professor Anne-Sophie Choné Grimaldi claims in a piece for Le Monde that this way of "treating all the platforms identically is problematic".
Meanwhile, Article 19, the British human rights and freedom of expression organisation, argues that it "entrenches private censorship of a wide range of illegal content at the expense of the courts". These reservations are justified.
So, what does this mean for moderation in France? Initially, most likely a host of new jobs in the policy and operations teams of tech companies based in France. You may remember Facebook hired 500 people in Essen to deal with the new demands of NetzDG.
Further down the line, you can also expect big fines, just like the €2 million one that Germany's Federal Office for Justice dished out to Facebook earlier this month for underreporting complaints on its platform.
But this isn't about the money. It's about Europe's fightback against unchecked technology, and the as-yet-unknown, unintended consequences that fightback will have for online speech.
GitHub's stark stance on nudes
This week gave us a reminder that community guidelines and takedown disputes stretch beyond words and images.
GitHub, the widely-used code management tool, took down a repository (essentially a set of files) because it contained information about how to run DeepNude, an app that creates fake naked pictures of women.
Hosting or transmitting "sexually obscene content" is against user policy and so the repo was rightly pulled down. However, GitHub doesn't monitor projects on the platform and instead relies on users flagging content. How long until something like this happens again?
Not forgetting
Twitter has followed Facebook in getting civil rights groups to help it update its policy on hate speech. Like Facebook, it fell way short.
Civil Rights Groups Mostly Unimpressed by New Twitter Policy Against 'Dehumanizing' Language
Civil rights organizations were, somehow, both pleased and exasperated with Twitter on Tuesday after the social network announced the latest update to its rules against "hateful conduct".
I'd never heard of Rich Kyanka before The Verge interviewed the founder of the Something Awful forum. But his advice to YouTube (to put "proverbial heads on the pike around your site") bears reading.
Something Awful's founder thinks YouTube sucks at moderation - The Verge
YouTube has been particularly bad about moderation lately. Earlier this month, Susan Wojcicki, YouTube's CEO, apologized to the LGBTQ community after the company didn't take definitive action against the conservative YouTuber Steven Crowder, who had over the years been hurling a litany of homophobic slurs at Vox host Carlos Maza.
The lawyer behind the Facebook oversight board explains more about the concept and how it will respond in a timely way to content on the platform.
Exclusive: The Harvard professor behind Facebook's oversight board defends its role
The idea seems to have its merits, but how Facebook implements the plan will speak volumes about the company's real motivations.
More hard-to-read tales from Facebook's contractors in the Washington Post.
Jillian C. York reminds us in this Gizmodo piece that questions of moderation are questions of money and, for me, that's why they're so interesting.
How American Corporations Are Policing Online Speech Worldwide
In the winter of 2010, a 19-year-old Moroccan man named Kacem Ghazzali logged into his email to find a message from Facebook informing him that a group he had created just a few days prior had been removed from the platform without explanation.
Tuesday's appeals court judgement proves the leader of the free world is powerful but he's no match for Twitter's moderation guidelines.
Appeals Court Asks Wrong Question in Trump Twitter Blocking Case - Bloomberg
An appeals court ruling that says the president can't block followers is actually bad for free speech.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.