
Moderation data for researchers (finally), starting an 'Instarrection' and three thorny cases

The week in content moderation - edition #169
Close-up of a ripped face mask on the ground (courtesy of Ivan Radic on Flickr via CC BY 2.0 - colour applied)

Hello and welcome to Everything in Moderation, your Friday review of the past week's content moderation news. It's written by me, Ben Whitelaw.

A warm welcome to new subscribers from Ofcom, Tremau, Spotify, JAAG, Hivebrite and elsewhere, as well as a couple of new EiM members, whose contributions support the creation of the newsletter, occasional analysis and interviews with smart people working in the industry.

Talking of which, I'll be publishing another Getting to Know article, in partnership with the Integrity Institute, later today. Keep an eye out on the website and on Twitter and let me know who else you'd like to see appear in the series.

Warning: today's edition is very platform-heavy. Thanks for reading — BW


Policies

New and emerging internet policy and online speech regulation

This week's most significant regulatory development comes from New Zealand, where five of the world's biggest tech companies have agreed to "actively reduce harmful content" through voluntary self-regulation. Google, Meta, TikTok, Amazon and Twitter have signed up to the Aotearoa New Zealand Code of Practice for Online Safety and Harms, which has been shepherded by non-profit Netsafe over the last year.

Like the mandatory regulation we've seen employed by other countries, the code of practice means that signatories have to publish an annual report and will be fined if they fail to adhere to its rules. However, critics have called it "a Meta-led effort to subvert a New Zealand institution" and said that NZ Tech, which will oversee the code, "lacks the legitimacy and community accountability to administer a Code of Practice of this nature". All eyes on this one.

Some interesting comments coming out of Nigeria this week, where its minister of communications and digital economy argued that, without regulation, "Big Tech has more power than the Government". Professor Isa Pantami made the comments at the country's first content moderation and online safety summit in Abuja, organised by Africa-focused policy platform The Advocacy for Policy and Innovation (API). In June, Nigeria shared a draft of a code of practice (EiM #163) which has been criticised by civil society organisations for its far-reaching nature; the country also has a long history of wanting to make platforms pay (EiM #115).

The Oversight Board has announced some of its thorniest and arguably most important cases since it began its work in 2020 and has also accepted a request from Meta to advise on one of its most discussed policy areas, Covid-19 misinformation. The cases include a poem that urges the killing of Russian fascists and two photos posted by a trans and non-binary couple, the judgement on which, according to LGBTQ news site Them, "will affect the user experiences of trans and nonbinary people everywhere".

The third case — which is the first from the UK taken by the Board — has also raised eyebrows because of the involvement of an "internet referral unit", essentially a team of government moderators that responds to public reports, investigates users and pushes for content to be removed. I hope to return to IRUs in a future edition of EiM.

There's an interesting wider context to the Covid-19 policy referral too, which comes just days after Meta committed to funding the Oversight Board for another three years to the tune of $150 million.

Meta has previously refused to implement a third of the policy recommendations made by the Board, so it strikes me as strange that it now wants the very same group of individuals to, er, make a policy recommendation on the hot button topic of the last three years. Why this and why now? What has changed since it refused previous policy recommendations? I'm dubious.

Remember also that the company made a request as recently as March 2022 for guidance on the Russia/Ukraine war, only to withdraw it less than two months later. Could it do the same here?

Products

Features, functionality and startups shaping online speech

A new API will allow researchers to evaluate TikTok's moderation system and even test out individual pieces of content for themselves, the company has announced. The video app will also share the tool with its Content and Safety Advisory Council as well as expanding its transparency reports.

The move comes on the back of YouTube doing something similar a few weeks ago and represents a win for those, like campaign group Change The Terms and Zev Burton (EiM #109), who have pushed for this for some time. The bad news is that it will only be available in the fall via TikTok's Transparency and Accountability Centre, which means it comes too late for the Kenyan elections and US midterms (EiM #165).
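TikTok hasn't published technical documentation for the tool yet, so what the API will actually look like is anyone's guess. But the promise, letting a researcher submit a piece of content and see how the moderation system would treat it, implies a fairly simple request/response workflow. The endpoint, parameters and response fields in the sketch below are entirely hypothetical and are only meant to illustrate the kind of access being talked about:

```python
# A minimal, hypothetical sketch. TikTok has not published the real endpoint,
# parameter names or response format, so everything here is assumed.
import requests

API_BASE = "https://example.com/research/v1"   # placeholder URL, not TikTok's
API_TOKEN = "RESEARCHER_ACCESS_TOKEN"          # placeholder credential

def evaluate_content(video_id: str) -> dict:
    """Ask the (hypothetical) moderation API how a given video would be handled."""
    response = requests.post(
        f"{API_BASE}/moderation/evaluate",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"video_id": video_id},
        timeout=10,
    )
    response.raise_for_status()
    # An imagined response might look like:
    # {"decision": "remove", "policy": "violent_extremism", "appealable": true}
    return response.json()

if __name__ == "__main__":
    print(evaluate_content("1234567890"))
```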

Also, is it just me or is it strange that The Washington Post writes straight-faced articles about product improvements to Twitch without acknowledging that both are owned by the same man? No? Ok.

Platforms

Social networks and the application of content guidelines  

TikTok's enforcement of its Community Guidelines "frequently misses violations and fails to respond to ban evasion tactics". That's the top line from the new report from GNET's misinformation researcher Abbie Richards, who studied the video platform in the wake of the Buffalo shooting (EiM #160) and warns that "the glorification of perpetrators of violence" has become "so pervasive on TikTok that the pervasiveness itself has become a meme".

People

Those impacting the future of online safety and moderation

Ana, aka Neoliberalhell on Instagram, normally posts grabs of funny reply tweets, edited news headlines and stuff like this $65 mask nativity. But lately she's been posting more about Mark Zuckerberg and the 'Instarrection'.

The Instarrection is a protest organised by a handful of Instagram creators, including Ana, to demand changes to what they called "Meta's harmful and unjust moderation system". These include: greater transparency about its community guidelines, a stop to shadowbanning, a thorough review process for removed content and "real user support from real people".

The protest got a bunch of media traction when it took place last week, after some creators handcuffed themselves to the doors of Instagram's HQ, but there's a risk that it is dismissed as a frivolous stunt.

However, the demands are spot on and, well, creators have power to bring about change (see the aforementioned Zev Burton). As Ana says, "this is all a lot more serious than just memes".

Tweets of note

Handpicked posts that caught my eye this week

  • "You had the control so it was essentially your problem. That justification doesn't work anymore." - Former Meta employee Samidh Chakrabarti notes an interesting side-effect of this week's (rolled-back) Instagram feed changes.
  • "She's always available and willing to engage with civil society, and I've seen her take our input very seriously" - It's nice getting compliments, especially when it's Dia Kayyali who's giving the praise.
  • "It should be taxed so that it becomes cheaper for actors to properly fund content moderation rather than pay for not doing so." - Open source advocate Tobie Langel with a novel solution to the mess we find ourselves in.

Job of the week

Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1200+ EiM subscribers.

Snap is looking for a Senior Counsel to work on matters relating to law enforcement and platform safety in the UK and EU.

The core of the job involves supporting the trust and safety team's enforcement of the platform's community guidelines and terms of service as well as meeting "legal obligations to produce user data to governments while adhering to relevant laws and protecting user privacy". No easy task, then.

If you're up on government surveillance laws and have 8+ years' experience practicing law for a US multinational, this could be for you. My efforts to find the salary didn't get very far (if you work at Snap and know, please drop me a line) but this is a role at a company that thinks carefully about user safety, so I didn't want to not share it with you all.