
A playbook for increasing trust and safety, Indonesia's 'repressive' law and life after TikTok

The week in content moderation - edition #170
An aerial view of a Minecraft city. Minecraft users can now report one another on private servers (photo courtesy of Rawpixel under CC0 1.0)

Hello and welcome to Everything in Moderation, your content moderation week in review. It's written by me, Ben Whitelaw.

A hearty welcome to new subscribers from LinkedIn, Unitary, ByteDance, the Oversight Board and a bunch of other folks who (I presume) care about the future of speech and safety on the web too. Do drop me a line to say hi (or just put me right).

If you haven't already checked out the latest in the Getting to Know mini-series with Lauren Wagner, you should. And if you enjoy it, consider joining the growing ranks of members who support EiM in all its forms.

Here's what you need to know this week — BW


Policies

New and emerging internet policy and online speech regulation

Indonesia has blocked eight companies from operating in the country after they failed to register with the government as part of its new MR5 online speech law.

The bans, which included PayPal and Steam, came after repeated threats from the country's Ministry of Communication and Information, and despite the registration deadline being extended several times. In total, 200 foreign and 8,000 domestic electronic system providers (ESPs) registered, with Meta and Amazon signing up just before the cutoff.

The wider context here is that, back in February, the Electronic Frontier Foundation said MR5 "may be the most repressive [internet regulation] yet" and, just a month ago, a coalition of human rights organisations wrote to the Indonesian government urging its repeal on the grounds that it is "inconsistent with internationally recognised human rights" (EiM #167). The worst part is that, as Dr Pauline Leong recently noted, other Southeast Asian countries are following suit.

On the topic of draconian speech laws, this Global Voices piece outlines the unfettered power that the Sudanese government has given itself in recent years to filter content and block websites. The 2020 law opens "a huge space to lawfully block any access to web content", despite not adhering to international standards such as the International Covenant on Civil and Political Rights (ICCPR), which I touched upon in this Viewpoint article with research fellow Talita Das. A very concerning precedent.

Products

Features, functionality and startups shaping online speech

Mojang Studios, the creators of Minecraft, have taken significant steps to "hold players to its terms of service no matter where they’re playing the game" by releasing a player reporting feature in the game's latest version. The move has proved controversial because it means users can now report one another on private servers as well as on Mojang-hosted ones (known as Realms).

This has fuelled privacy fears and, as Kotaku reports, worries that malicious reports could get users banned from the game entirely. Reassurances from the Microsoft-owned company that reports will be reviewed by a human moderator have done little to calm those concerns.

The Digital Trust & Safety Partnership has published its inaugural best practices report, with insights from the likes of Discord, Google, LinkedIn and Reddit. Each company was asked to rate its maturity against the DTSP's five commitments, and there are some fascinating nuggets in there, particularly around how far platforms still have to go to integrate product development into the safety process. My read of the week.

Platforms

Social networks and the application of content guidelines  

TikTok is under the spotlight again following accusations of poor working conditions for its outsourced moderators in Morocco. A report by Business Insider described conditions at outsourcing firm Majorel as ones of "near-constant surveillance and near-impossible metric goals" and said moderators suffered "severe psychological distress as a result of their jobs". Nine current and former employees spoke out against the video-sharing app but did so pseudonymously for fear of reprisals from Moulay Hafid Elalamy, one of Africa's richest men and the owner of an investment firm that controls Majorel.

I somehow missed this last week but it's too important not to include: Facebook and Sama have backed down from their legal threat against former moderator Daniel Motaung and legal firm Foxglove. Two weeks ago, I covered how 80 organisations had written a joint letter calling for the contempt of court proceedings to be dropped (EiM #168), and that seems to have worked.

Catch up: Lauren Wagner is the second interviewee in the Getting to Know series, in collaboration with the Integrity Institute.

Lauren has a ton of experience to draw upon from her time at Facebook and Google and talks about the value of working both inside and outside the major platforms and funding "novel ideas about how to improve the internet". Take a look.

Articles with industry pros like Lauren will always remain free to read thanks to the support of EiM members. If you're interested in supporting more Q&As like this, become a member today.

People

Those impacting the future of online safety and moderation

Between December 2020 and May 2022, Marika Tedroff worked for TikTok as a policy manager for EMEA. That meant, in her own words, writing "global policies affecting one billion users from scratch in your third week!". And dealing with regulators who are "completely out of touch with reality and how things work in practice". Oh, and what she calls "China dynamics".

We know all this because Tedroff wrote about it herself on Substack just last week. And with so few policy professionals able to speak out about their work, her account stands out not only for its content but for its chatty tone and brutal honesty. This is trust and safety at its most raw.

Casey Newton spoke to Tedroff this week for Platformer and, in an interesting back-and-forth, she admitted that "back in 2020, many of the [TikTok] policies felt ad hoc and not culturally adapted to the specific markets".

I commend Tedroff for speaking out and hope this is part of a new culture of trust and safety professionals sharing their experience in the public realm.

Tweets of note

Handpicked posts that caught my eye this week

  • "Deplatforming isn’t useless. It can be impactful. But the long game…" - Jillian C York tries to keep her cool on finding out Alex Jones became more profitable after being banned from major social media networks.
  • "Flirty Legal / Trust and Safety talk!" - Good to see Duolingo get with the programme via Jamie Tomasello.
  • "Everything is content moderation, and any open platform that allows free participation and competition will be gamed." - Benedict Evans reminds us why we're all here.

Job of the week

Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1200+ EiM subscribers.

Google is looking for a Senior Engineering Quality Analyst for its Trust and Safety team in London or Dublin.

It's an interesting-looking role that involves working with a range of teams to find product vulnerabilities in Google's Ads product (something that its competitors are struggling with at the moment...). Note that you'll need a computer science degree.

If you get it, you'll spend your days doing "complex, exhaustive investigations of our systems, features, and processes". I've not been able to find salary information (when will they learn?) but it seems like a fun one.