Facebook is very pro-thresholds
Hello everyone. I spent the start of the week in the eye of Storm Dennis next to this full-to-bursting river and I think the experience helped me prepare for the onslaught of Facebook-inspired content moderation news this week.
Thank you to new subscribers from Ofcom, Yiibu, Technology Review and, erm, my mum (no idea what took her so long...). I'm now over the 200-subscriber mark. Do hit reply and say hi or send me a celebratory note.
Thanks for reading – BW
A 22-page-long line in the sand
It's an age-old policy trick: ahead of a meeting with a senior EU Commission representative, publish a white paper that sets out your stall, then put your CEO up on stage at a conference.
That's what Facebook did earlier this week when it published a 22-page white paper, written by VP of Content Policy Monika Bickert, outlining four key questions about what a regulatory framework might look like. Meanwhile, over in Munich, Mark Zuckerberg was in front of an audience at the Munich Security Conference explaining his preference for something 'between a publisher and a telco'. He also penned a punchy piece in the FT.
One thing stood out about the paper and the surrounding noise: Facebook's admission that it does not believe perfect content moderation is possible (see EiM #50). On page 7, it states:
Given the dynamic nature and scale of online speech, the limits of enforcement technology, and the different expectations that people have over their privacy and their experiences online, internet companies' enforcement of content standards will always be imperfect.
This conciliatory tone feels new. In the recent past, there's been little mention of Facebook failing to deal with the challenge of moderation at scale, only that the work is ongoing. There was no allusion to imperfection in any of the Oversight Board documentation (EiM #44) and it certainly didn't come up in Zuckerberg's Georgetown speech in October 2019, which is still the most comprehensive account of his views on free speech that we have. 'Imperfect' marks a definite change.
Why does it matter? Admitting that a flawless system of moderation is not possible allows Facebook to argue for 'thresholds' (a word that appears five times in Bickert's paper) and opens the door to 'performance targets' (six mentions). Staying below an agreed line on, say, the prevalence of hate speech is, according to Facebook, the best that both it and the EU Commission can hope for. In Bickert's view, eradicating it completely is not possible and shouldn't be the goal. Any regulation should be about being good enough, not about being great.
Sadly for Zuck et al, that's not going to wash: Thierry Breton, the French commissioner, called the proposals 'too slow... too low in terms of responsibility and regulation'. But nevertheless, admitting fallibility seems to be Facebook's latest tactic to secure the least bad form of regulation in Europe.
Bonus read: Facebook's proposed regulations are just things it's already doing (The Verge)
DoJ on the mic
Back on Zuckerberg's home soil, the US Department of Justice held a workshop on Section 230 with legal and policy experts, ominously entitled 'Nurturing innovation or fostering unaccountability?' and featuring the US Attorney General and the FBI director. Axios has a good read-through of what it could mean in the long run and how it had Donald Trump's chubby fingerprints all over it.
And if you're at a loose end this evening, you might decide to watch all four hours of the workshop here on YouTube. Or, you know, maybe not.
The UK presses on with regulation
After last week's EiM (#51) touched on the outcomes of the UK's Online Harms white paper consultation, I was struck by this tweet from the Guardian's media correspondent.
It's a reminder that, when you call for systems and frameworks and regulation, organisations you perhaps didn't expect get caught up in them.
Not forgetting...
The 100th edition of the Social Media and Politics podcast features Dr Tarleton Gillespie, author of Custodians of the Internet. Listen here.
Content Moderation and the Politics of Social Media Platforms, with Dr. Tarleton Gillespie
Social Media and Politics is a podcast bringing you innovative, first-hand insights into how social media is changing the political game.
Twitter will test what it is calling 'community labelling', a way for users to identify misleading information posted by public figures. It arrives soon (March 5th, according to Reuters) and might involve badges/points.
Twitter tests labels, community moderation for lies by public figures - Reuters
Twitter Inc said on Thursday that it was testing a new community moderation approach that would enable users to identify misleading information posted by politicians and public figures and add brightly colored labels under those tweets.
Remember Alison Parker, the US journalist who was shot and killed while conducting a live interview for her TV station back in 2015? Her father cannot get YouTube to remove the videos of her death.
YouTube refuses father's request to remove video of daughter's killing
The father of journalist Alison Parker, who was shot and killed while conducting an interview in 2015, is fighting with YouTube to have the video of his daughter's death removed from the site.
Kickstarter's new union, the first of its kind at a tech company, will have a say in the way content is moderated, as well as in company pay and benefits. Amazing stuff. Well done to them.
Kickstarter's new union rules give staff greater control over platform moderation
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.