Hello everyone. I spent the start of the week in the eye of Storm Dennis next to this full-to-bursting river and I think the experience helped me prepare for the onslaught of Facebook-inspired content moderation news this week.
Thank you to new subscribers from Ofcom, Yiibu, Technology Review and, erm, my mum (no idea what took her so long...). I’m now over the 200 subscriber mark 🎉— do hit reply and say hi or send me a celebratory note.
Thanks for reading — BW
📃 A 22-page-long line in the sand
It’s an age-old policy trick: ahead of a meeting with a senior EU Commission representative, produce a white paper that sets out your stall, then put your CEO up on stage at a conference.
That’s what Facebook did earlier this week when it published a 22-page white paper, written by VP of Content Policy Monika Bickert, that outlined four key questions about what a regulatory framework might look like. Meanwhile, over in Munich, Mark Zuckerberg was in front of an audience at the Munich Security Conference explaining his preference for something ‘between a publisher and a telco’. He also penned a punchy piece in the FT.
One thing stood out about the paper and the surrounding noise: Facebook’s admission that it does not believe perfect content moderation is possible (see EiM #50). On page 7, it states:
Given the dynamic nature and scale of online speech, the limits of enforcement technology, and the different expectations that people have over their privacy and their experiences online, internet companies’ enforcement of content standards will always be imperfect.
This conciliatory tone feels new. In the recent past, there’s been little mention of Facebook failing to meet the challenge of moderation at scale, only that the work is ongoing. There was no allusion to imperfection in any of the Oversight Board documentation (EiM #44) and it certainly didn’t come up in Zuckerberg’s Georgetown speech in October 2019, which is still the most comprehensive account of his views on free speech that we have. ‘Imperfect’ marks a definite change.
Why does it matter? Admitting that a flawless system of moderation is not possible allows Facebook to argue for ‘thresholds’ (a word that appears five times in Bickert’s paper) and opens the door to ‘performance targets’ (six mentions). Staying below an agreed line on, say, the prevalence of hate speech is, according to Facebook, the best that both it and the EU Commission can hope for; eradicating it completely is not possible and shouldn’t be the goal. Any regulation, in other words, should be about being good enough, not about being great.
Sadly for Zuck et al, that’s not going to wash — Thierry Breton, the French commissioner, called the proposals ‘too slow... too low in terms of responsibility and regulation’. But nevertheless, admitting fallibility seems to be Facebook’s latest tactic to secure the least bad form of regulation in Europe.
Bonus read: Facebook’s proposed regulations are just things it’s already doing (The Verge)
🎤 DoJ on the mic
Back on Zuckerberg’s home soil, the US Department of Justice held a workshop on Section 230 with legal and policy experts, ominously entitled ’Nurturing Innovation or Fostering Unaccountability?’ and featuring the US Attorney General and FBI director. Axios has a good read on what it could mean in the long run and how it had Donald Trump's chubby fingerprints all over it.
And if you’re at a loose end this evening, you might decide to watch all four hours of the workshop here on YouTube. Or, you know, maybe not.
🚩 The UK presses on with regulation
After last week’s EiM (#51) touched on the outcomes of the UK’s Online Harms white paper consultation, I was struck by this tweet from the Guardian’s media correspondent.
It's a reminder that, when you call for systems and frameworks and regulation, organisations you perhaps didn’t expect can get caught up in them.
⏰ Not forgetting...
Social Media and Politics is a podcast bringing you innovative, first-hand insights into how social media is changing the political game.
Twitter will test what it is calling ‘community labelling’, a way for users to identify misleading information posted by public figures. It’s coming soon — March 5th, according to Reuters — and might involve badges/points.
Twitter Inc said on Thursday that it was testing a new community moderation approach that would enable users to identify misleading information posted by politicians and public figures and add brightly colored labels under those tweets.
Remember Alison Parker, the US TV journalist who was shot and killed while conducting a live interview for her station back in 2015? Her father cannot get YouTube to remove the videos of her death.
The father of journalist Alison Parker, who was shot and killed while conducting an interview in 2015, is fighting with YouTube to have the video of his daughter’s death removed from the site.
Kickstarter’s new union, the first of its kind at a tech company, will have a say in the way content is moderated, as well as company pay and benefits. Amazing stuff. Well done to them.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.