
📌 TikTok’s first transparency report

The week in content moderation - edition #46

Hello, happy new year 🎇 (yes, I’m still wishing people HNY) and welcome back to Everything in Moderation. I trust everyone had a relaxing festive period? Yes? Good.

I’m presuming that no-one received a fix for all online speech issues in their stocking so we’ll continue as we left off in 2019: rounding up and reflecting on the big content moderation stories of the week. FYI: This week is link heavy.

Get in touch if you have any moderation-related resolutions for 2020.

Thanks for reading — BW

Obfuscation report isn't as catchy

Here’s a mini-prediction for you: transparency reports will be meaningless by the end of the year.

Why do I say that? Well, over the holiday break, TikTok published its first transparency report, detailing the number of requests from copyright owners and governments to obtain user information and remove content from the platform between January and the end of June 2019. As TechCrunch reports, India topped the list with 107 requests, followed by the US and Japan.

Now, this process was always about appealing to US regulators and was never going to be the same as the community standards reporting that Facebook and Twitter have done for the last few years.

But even so, the report doesn’t fill me with confidence. For one, it was published on 30 December, when most of the world’s media were drinking eggnog and having a little downtime. Then there’s the fact that China reportedly requested NO takedowns in the six-month period (which is pretty unlikely). No explanation for either. Hardly transparent.

The real test for TikTok will come later this year when its next transparency report (presuming they do it again?) covers the June to December period that saw the Hong Kong protests erupt and led the Washington Post to report about a suspicious lack of videos about the unrest on the platform.

For now, we’ve reason to be sceptical about the concept of transparency reports in general, what’s in them and why they are there. As Heidi Tworek wrote in December, the focus on numbers, rather than process, makes them easy to game and simple for an uninitiated journalist to report on badly.

In short, if questions haven’t been asked about them by the end of the year, I’ll be very surprised.

+ 💵 Bonus read: TikTok also updated its community guidelines this week to bring them in line with other social networks. Among other things, they clarify its stance on manipulated videos and content relating to terrorist groups.

Who are you talking to? No, really

Twitter announced at CES that it will begin experimenting with four new audience options - Global, Group, Panel, Statement - as a way of improving conversation and limiting abuse. Sounds good in theory, but J. Nathan Matias, an assistant professor at Cornell whose work I respect, isn’t hopeful...

Not forgetting...

Becca Lewis, a PhD student studying online subcultures, looks at the YouTube algorithm and suggests that far-right radicalisation on the platform is a more complex issue than we’ve been reporting

All of YouTube, Not Just the Algorithm, is a Far-Right Propaganda Machine

In recent years, the media has sounded a constant drumbeat about YouTube: Its recommendation algorithm is radicalizing people. First articulated by Zeynep Tufekci in a short piece for The New York…

Matt Halprin, YouTube’s global head of trust and safety, gave a short Q&A about moderation policy. If you get to the end, enjoy this somewhat sinister line: "For every workflow, for every policy, I get a measure of how accurate our reviewers have been regularly.” Great.

Insider Q&A: How YouTube Decides What to Ban - The New York Times

Matt Halprin, the global head of trust and safety for YouTube, has a tough job: He oversees the teams that decide what is allowed and what should be prohibited on YouTube.

Twitter suspended an account purporting to be a NY Post reporter after it was found to be sharing pro-Iranian regime propaganda

New York Post Reporter’s Identity Hijacked to Spread Pro-Iran Propaganda

It’s one of a number of bogus accounts that spread fake news about enemies of the Iranian regime.

TikTok removed a video of a user kissing his same-sex partner on New Year’s Eve, reinstating it only after learning that the user planned to write a story about the removal

Man ‘devastated’ after TikTok removed video of him kissing his boyfriend because it ‘violated community guidelines’

A man claimed that TikTok removed a video of him sharing a kiss with his boyfriend at midnight on New Year's Eve, saying that it violated their guidelines.

Pro-anorexia accounts continue to live and thrive on Instagram, despite the platform's attempts to make them more difficult to find, according to this Vice report

How pro-eating disorder communities are thriving on Instagram - i-D

Despite the platform's efforts at moderation, and the rise of body positivity content, pro-ana accounts are easy to find. Here's what can be done.

A dark but interesting story about Goodreads (via Adam Tinworth). Members of a defunct Reddit community have targeted an author, Patrick Tomlinson, leaving hundreds of one-star reviews on a book that doesn’t come out until October 2020, apparently in an attempt to drive him to suicide.

A former Facebook moderator in Dublin talks about how the company asked him to clock in and out when going TO THE TOILET. Valera Zaicev worked there in 2016 and is one of several people working with Coleman Legal Partners to bring an action against Facebook for failing to protect its staff.

Facebook Is Forcing Its Moderators to Log Every Second of Their Days – Even in the Bathroom - VICE

"People have to clock in and clock out even when going to the toilet and explain the reason why they were delayed, which is embarrassing and humiliating."

Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.