How China tech does moderation
Hello from sunny London. Hope your week has been as productive as Katie's.
Thanks to Katherine, who got in touch during the week to recommend Evan Hamilton's Community Manager Breakfast newsletter. A few of you also recommended Casey Newton's The Interface, which covers content moderation and online speech issues from time to time. If there are any others, do let me know and I'll share.
Thanks for reading – BW
In the belly of the beast
The timing couldn't have been better. In the week that the UK unveiled plans to introduce regulation to ensure social networks better tackle 'online harms', the South China Morning Post published a fascinating behind-the-scenes piece on the moderation practices of Inke, one of China's biggest live-streaming apps.

It's the first time I've read a piece like it: a candid and open explanation of how a tech company dealing with the challenges of moderation at scale runs its team.
There are a few particularly interesting points that I wanted to pull out:
1. The restrictions put in place by the Chinese government mean moderators look at everything - the livestreams themselves but also text, images and audio. Every time a stream starts, it appears on someone's dashboard. (A video shows team members using software to scan every frame of a stream - it's forensic work.) There's no mention of users flagging streams; the onus is on the moderators to spot offending content.

2. The team is huge - 60% of Inke's workforce is moderators, some 1,200 people out of around 2,000. 200 are on staff; the rest are contractors (many of them grad students). Despite that, Inke still made a $164m profit last year.
3. It's difficult, exhausting work that takes a toll - annual team turnover is 10% (which feels high but is half the Chinese average). The team manager, Zhi Heng, even admits that watching for long periods 'could make you question the meaning of your life'. 'Many hires', he says, don't make it through the 'one-month boot camp'.
4. The job is taken very seriously. Zhi Heng, a former power plant worker, calls himself a 'public janitor' and says he sees cleaning up the internet as his 'duty'. It feels different to the way moderation is viewed in Western democracies.
5. The possibility of misuse is very clear. Zhi Heng recounts a story of when Inke turned off live-streaming in a 10km area where a protest against the local government was taking place. The justification? To prevent 'the incident from getting worse'.
Putting aside China's censorship for a second, it's hard not to draw a distinction between the way Inke talks about its team and policy and the way that Facebook, YouTube et al do. One is clandestine, covering people's faces in photos, putting up posters when reporters arrive. The other lets a reporter into internal meetings, is open about the challenges and gives details on salaries and staff.
It's just not the way round you'd perhaps expect.
Let's be brief
Facebook provided an update this week on their Remove, Reduce, Inform strategy, which they set up in 2016.
You can read the whole update here, but I wanted to draw attention to two bits of news in particular: one positive, one less so.
1. Facebook has created a 'Recent updates' tab to show changes that have been made to its community guidelines. Community policies are living documents and constantly evolving, so it's a positive step that changes can be viewed in this way.
2. Instagram posts deemed 'inappropriate' but not against community guidelines will now not appear on the Explore or hashtag pages. It's an expansion of the 'borderline content' idea that Zuckerberg wrote about last year (in short, that people engage more with sensationalist and provocative content). Clearly, what counts as inappropriate differs vastly from person to person, and Instagram's policy doesn't include examples (TechCrunch does), so I can't see this going well.
Not forgetting...
The Register has the most nuanced take on Monday's Online Harms white paper, the long-trailed UK government document setting out plans to make social media platforms more responsible for the content that users publish.
You were warned and you didn't do enough: UK preps Big Internet content laws • The Register
Alex Feerst, head of legal at Medium, spoke this week on a panel organised by the Human Rights Initiative of CSIS in Washington, titled 'Everything in Moderation' (good name, I approve). He makes an interesting point about policy specialists seeing themselves as 'product interns' as much as enforcement officers.
Everything in Moderation: The Unintended Consequences of Regulating Hate Speech
PS: his essay (on Medium, of course), featuring interviews with 15 people involved in content moderation policy creation at tech companies, is a real from-the-horse's-mouth must-read.
Your Speech, Their Rules: Meet the People Who Guard the Internet
Tech platform trust and safety employees are charged with policing the impossible. They open up to Medium's head of trust and safety.
Sarah Jeong in the New York Times says there is regulatory precedent in the Hays Code, a voluntary set of rules dreamed up by Hollywood studios in the 1930s to police on-screen displays of affection and depictions of the police, designed to avoid strict government legislation of the film industry. Familiar, huh?
Facebook Wants a Faux Regulator for Internet Speech. It Won't Happen. - The New York Times
Not in the United States, anyway.
The WSJ is changing their approach to comment moderation and will rebrand their team 'audience voice reporters'. Nice move, Todd and co.
Goodbye 'moderators', hello 'audience voice reporters': how The Wall Street Journal is refocusing the comments
"Focusing on the small subset of users who comment frequently and want no one intervening at all in their comments is costing us the opportunity of engaging with our much larger, growing, and diversifying audience."
Facebook's managing director of Indian operations explained how the company has been dealing with fake accounts and abusive behaviour as the country went to the polls yesterday.
Facebook's AI is deleting one million accounts every day as the Indian election heats up
Facebook India's managing director, Ajit Mohan, just published a blog outlining Facebook's 18-month journey to prepare for India's general election.
In the latest of a long line of 'Facebook Groups selling bad things' stories, a security firm found people's card details being sold for as little as $5 apiece.
Facebook Let Dozens of Cybercrime Groups Operate in Plain Sight
Who needs the dark web? Researchers found 74 groups offering stolen credit cards and hacking tools with simple Facebook searches.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.