5 min read

šŸ“Œ How China tech does moderation

The week in content moderation - edition #21

Hello from sunny London. Hope your week has been as productive as Katie's.

Thanks to Katherine who got in touch during the week to recommend Evan Hamilton's Community Manager Breakfast newsletter. A few of you also recommended Casey Newton’s The Interface which covers content moderation and online speech issues from time to time. If there are any others, do let me know and I’ll share.

Thanks for reading — BW


In the belly of the beast

The timing couldn’t have been better. In the week that the UK unveiled plans to introduce regulation to ensure social networks better tackle ā€˜online harms’, the South China Morning Post published a fascinating behind-the-scenes piece on the moderation practices of Inke, one of China’s biggest live-streaming apps.

Zhi Heng and his team at Inke

It’s the first time I’ve read a piece like it: a candid, open explanation of how a tech company dealing with the challenges of moderation at scale runs its team.

There are a few particularly interesting points that I wanted to pull out:

1. The restrictions put in place by the Chinese government mean moderators look at everything - the livestreams themselves but also text, images and audio. Every time a stream starts, it appears on someone’s dashboard. (A video shows team members using software to scan over every frame of a stream - it’s forensic work). There’s no mention of users flagging streams; the onus is on the moderators to spot offending content.

Screengrab from South China Morning Post video

2. The team is huge - 60% of Inke’s workforce is moderators, some 1200 people out of around 2000. 200 are on staff, the rest are contractors (many of them grad students). Despite that, Inke still made a $164m profit last year.

3. It’s difficult, exhausting work which takes a toll - annual team turnover is 10% (which feels high but is half the Chinese average). The team manager, Zhi Heng, even admits that watching for long periods ā€˜could make you question the meaning of your life’. ā€˜Many hires’, he says, don’t make it through the ā€˜one-month boot camp’.

4. The job is taken very seriously. Zhi Heng, a former power plant worker, calls himself a ā€˜public janitor’ and says he sees cleaning up the internet as his ā€˜duty’. It feels different to the way moderation is viewed in Western democracies.

5. The possibility of misuse is very clear. Zhi Heng recounts a time when they turned off live-streaming in a 10km area where a protest against the local government was taking place. The justification? To prevent ā€˜the incident from getting worse’.

Putting aside China’s censorship for a second, it’s hard not to draw a distinction between the way Inke talk about their team and policy and the way that Facebook, YouTube et al do. One is clandestine, covering people’s faces in photos, putting up posters when reporters arrive. The other lets a reporter into internal meetings, is open about the challenges and gives details on salaries and staff.

It’s just not the way round you’d perhaps expect.

Let's be brief

Facebook provided an update this week on their Remove, Reduce, Inform strategy, which they set up in 2016.

You can read the whole update here but I wanted to draw attention to two bits of news in particular: one positive, one less so.

1. Facebook has created a ā€˜Recent updates’ tab to show changes that have been made to its community guidelines. Community policies are living documents that are constantly evolving, so it’s a positive step that changes can be viewed in this way.

2. Instagram posts deemed ā€˜inappropriate’ but not against community guidelines will no longer appear on the Explore or hashtag pages. It’s an expansion of the ā€˜borderline content’ idea that Zuckerberg wrote about last year (in short, that people engage more with sensationalist and provocative content). Clearly, what counts as inappropriate differs vastly from person to person, and Instagram's policy doesn't include examples (TechCrunch does), so I can't see this going well.

Not forgetting...

The Register has the most nuanced take on Monday’s Online Harms white paper, the long-trailed UK government document setting out plans to make social media platforms more responsible for the content that users publish.

You were warned and you didn't do enough: UK preps Big Internet content laws • The Register

Alex Feerst, head of legal at Medium, spoke this week on a panel organised by the Human Rights Initiative of CSIS in Washington titled 'Everything in Moderation' (good name, I approve). He makes an interesting point about policy specialists seeing themselves as ā€˜product interns’ as much as enforcement officers.

Everything in Moderation: The Unintended Consequences of Regulating Hate Speech


PS His essay (on Medium, of course), with interviews with 15 people involved in content moderation policy creation at tech companies, is a real from-the-horse's-mouth must-read.

Your Speech, Their Rules: Meet the People Who Guard the Internet

Tech platform trust and safety employees are charged with policing the impossible. They open up to Medium’s head of trust and safety.

Sarah Jeong in the New York Times says there is a regulatory precedent in the Hays Code, a voluntary set of rules dreamed up by Hollywood studios in the 1930s to police on-screen displays of affection and depictions of the police, designed to avoid strict government legislation of the film industry. Familiar, huh?

Facebook Wants a Faux Regulator for Internet Speech. It Won’t Happen. - The New York Times

Not in the United States, anyway.

The WSJ is changing its approach to comment moderation and will rebrand its team as ā€˜audience voice reporters’. Nice move, Todd and co.

Goodbye ā€œmoderators,ā€ hello ā€œaudience voice reportersā€: how The Wall Street Journal is refocusing the comments

"Focusing on the small subset of users who comment frequently and want no one intervening at all in their comments is costing us the opportunity of engaging with our much larger, growing, and diversifying audience."

Facebook’s managing director of Indian operations explained how the company has been dealing with fake accounts and abusive behaviour as the country went to the polls yesterday.

Facebook's AI is deleting one million accounts every day as the Indian election heats up

Facebook India’s managing director, Ajit Mohan, just published a blog outlining Facebook’s 18-month journey to prepare for India’s general election.

In the latest of a long line of ā€˜Facebook Group selling bad things’ stories, a security firm found people's card details being sold for as little as $5 apiece.

Facebook Let Dozens of Cybercrime Groups Operate in Plain Sight

Who needs the dark web? Researchers found 74 groups offering stolen credit cards and hacking tools with simple Facebook searches.


Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.