2 min read

📌 Google’s nine principles for policing the web

The week in content moderation - edition #8

Hello to a handful of new subscribers, who may have found their way here from a kind tweet from someone whose opinion I respect a great deal. Glad to have you on board.

I’ve tried to tighten up the spelling in this edition after a few clangers last week. As ever, my inbox is always open to feedback and ideas.

Thanks for reading — BW

The inner dialogue of a tech giant

How does Google (or any of the big tech giants) even start to tackle the issue of bad behaviour on the web? A lengthy internal report, of course.

Published in March 2018 but leaked this week, the 85-slide document, entitled ‘The Good Censor’, summarises interviews with authors, influential thinkers and ‘micro observers’ about the causes of the shift towards moderation and censorship.

Although it was authored by Google’s internal research group, and so isn’t official policy or strategy, it gives some indication of the company’s train of thought. Pretty early on (slide 21), there’s the admission that it’s no longer enough to say ‘we’re not responsible for what happens on our platforms’.

Towards the end of the presentation, in just four slides out of the 85, nine principles are outlined that are designed to help Google ‘make the most’ of the opportunity presented by the backlash against free speech online.

They revolve mainly around greater transparency and responsiveness and, by and large, make a lot of sense (particularly improve communications, justify global positions, create positive guidelines and enforce standards clearly).

However, what’s most striking is the size of the task, and the lack of detail around what needs to change, both internally at Google and elsewhere, for these nine principles to have the desired effect.

The world’s most cynical camera filter

Over at Instagram, they’re thinking about other, less rigorous ways to improve behaviour on their platform.

Amidst an announcement about the expansion of their comment filter, which hides offensive words and phrases, new head Adam Mosseri launched a ‘kindness camera effect’ which fills your selfie/cute photo of your dog/arty street scene shot with kind comments in different languages and encourages you to ‘tag a friend you want to support’.

Because, if you’re struggling with work or going through a rough time in your relationship, the thing you really want is a chirpy picture of your friend overlaid with text you can’t understand.

Who comes up with these ideas?

Not forgetting

Weibo will stop letting kids under 14 register new accounts (Abacus)

It’s also creating a kids-only version of the app, like Facebook’s Messenger Kids

Weibo are saving the next generation of social network users from themselves by banning them from signing up to China’s most popular platform and instead creating a special version for kids under 14. Do they not know how rude 13-year-olds can be?

After Troubles in Myanmar, Facebook Charges Ahead in Africa (WIRED)

Activists say Facebook has learned lessons from its experience in developing countries, but they question its ability, and willingness, to control misinformation and hate speech.

Facebook have expanded their connectivity efforts in Africa, working with a partner to set up 1,100 hotspots for locals to browse the web. However, civil society groups fear it could be another Myanmar.

Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.