📌 What does $130m get Facebook?
Hello from a dreary London, which this week is miserable in both the meteorological and political sense.
There were three significant content moderation stories relating to the big platforms this week — one each for Facebook, Twitter and YouTube — and you’ll find them all below.
This week’s edition is 534 words, with an estimated reading time of 2 minutes, 23 seconds.
Thanks for reading — BW
The price of independence
$130,000,000 over six years. That’s what Facebook’s Oversight Board will cost according to a company blog post published yesterday (Thursday).
Kate Klonick, law professor and content moderation expert, has an interesting thread on the announcement, but I'm mostly interested in the cost of the board and its value to the company.
First, let's look at how it stacks up in terms of moderation hours. If you think $15 is a sufficient hourly wage (I don’t, but it’s the wage Cognizant pays at its Texas operation, as reported by The Verge earlier this year), $130m gets you 8,666,667 hours of content moderation.
Over the six years — the time that the board has been guaranteed to exist — that works out at a little under 4,000 hours of content moderation per day. In terms of headcount, that's 494 people working eight-hour shifts.
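For anyone who wants to check my working, here's the back-of-envelope arithmetic as a short script. The $15 wage, 365-day years and eight-hour shifts are the assumptions stated above, not Facebook's figures:

```python
# Back-of-envelope check of the figures above.
# Assumptions: $15/hour wage, six 365-day years, eight-hour shifts.
BUDGET = 130_000_000   # Oversight Board cost in dollars over six years
HOURLY_WAGE = 15       # Cognizant's reported Texas wage
YEARS = 6
SHIFT_HOURS = 8

total_hours = BUDGET / HOURLY_WAGE            # ~8.67m moderation hours
hours_per_day = total_hours / (YEARS * 365)   # a little under 4,000 per day
headcount = hours_per_day / SHIFT_HOURS       # ~494 full-time moderators

print(f"{total_hours:,.0f} hours in total")
print(f"{hours_per_day:,.0f} hours per day")
print(f"{int(headcount)} moderators on eight-hour shifts")
```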
Now, that might not sound like much when you’ve already got some 15,000 people keeping tabs on content on your platform. But that people-power would go a long way to improving the lives of existing moderators, who would no doubt appreciate the additional support, longer toilet breaks and more than a matter of seconds to make decisions about complex, context-dependent content.
By contrast, $130m, according to yesterday's blog post, gets you:
- staff (exact number TBC) to run the board's organisation (legal, research, HR)
- office space (?)
- 40 oversight board members and travel expenses
Doesn't seem like a lot, does it?
Now, Facebook obviously isn’t short of money (it spends a reported $5bn on safety and security each year) but even it will want to ensure the Oversight Board isn’t cash chucked down the drain. Some, like Damon (below), believe the Board's mere creation will have justified its cost in crisis comms many times over. This time next year, we'll know for sure.
Yesterday’s other big news was Twitter announcing a new project to explore an open, decentralised standard for social media. Some smart people are sceptical, so it's worth keeping an eye on. Thanks to Nick W for the heads up.
Twitter wants to decentralize, but decentralized social network creators don’t trust it - The Verge
Twitter CEO Jack Dorsey has announced a project called Bluesky to work toward making Twitter a decentralized social network. Developers who work on similar projects like Mastodon and ActivityPub are wary.
YouTube has made three changes to its content moderation policies regarding harassment following the Carlos Maza incident in June.
6 months after a major public controversy, YouTube is changing its anti-harassment policies
The company is expanding its definition of harassment on the video platform, but there’s still plenty of room for debate.
I’d never heard of MeetMe until this week and, to be honest, it’s not a great first impression.
A convicted sex offender on MeetMe shows the difficulties of online screening - The Washington Post
MeetMe says it checks users to ensure they aren’t convicted sex offenders. But the success of one streamer came within weeks of his release from prison.
This deserves a larger look at a later date: over the next 6-12 months, tens of thousands of content moderation jobs are expected to be created in India, driven by the popularity of video-sharing apps like TikTok.
They’re watching rape and raunchy videos so that you don’t have to | India News - Times of India
India News: With video sharing apps facing heat for inappropriate content, the demand for content moderators to clean up the internet is only rising in India
This is a couple of weeks old, but France’s Avia Law (designed to regulate hate speech) is not liked by the European Commission and has been deemed ‘dead on arrival’ if it ever gets that far.
European Commission severely criticizes France's impossible "hate speech" laws, freezes the bill
You might not think it, but recently published data suggests LinkedIn has an issue with harassment and fake accounts.
For the first time, LinkedIn included data on its moderation efforts in its biannual transparency report for the H1 2019 reporting period.
Related: this Coda Story piece looks at the regulatory routes incoming Commission president Ursula von der Leyen might go down.
At the forefront of Europe’s battle for tech transparency - Coda Story
Europe is already home to some of the world’s strictest data privacy laws. The EU thinks it can do much more
A bit of positive news I missed: US funder the Knight Foundation is putting $3.5m into research on internet governance and policy. Excellent news.
Status Update: Big Tech at Crisis Point – Knight Foundation
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.