
šŸ“Œ 'Content filtering as a matter of mission'

The week in content moderation - edition #47

Hello everyone, especially new folks from Twitter and Agora. You’re (hopefully) in the right place.

This week's edition is heavy on awesome female lawyers and all the better for it, IMHO.

Talking of the legal profession, here's a date for your diary: March 10 is the Cato Institute’s Return of the Gatekeepers event, featuring Reddit’s director of policy and the former head of legal, trust and safety at Medium (now general counsel at Neuralink) plus others. Sign up to be notified about the live webcast.

Thanks for reading — BW


Like books, but comments

It’s fair to say there’s a scramble happening right now to find a model for online content regulation. Platforms and the people working for them are trying to conceptualise a web that is open but not overrun. It isn't straightforward.

There are some early forerunners (many of which I’ve covered in previous editions of EiM): the oversight board, the 'magic API’ idea, a human rights approach and straight-up stricter regulation of what already exists. But none make complete sense and, as I noted in EiM #32, there’s definitely room for weirder and wilder ideas yet.

One such idea is the public library approach, which Rebecca Tushnet, a law professor at Harvard Law School, outlined in her keynote speech at the 2018 Journal of Law, Technology and the Internet Symposium, a copy of which was published as a paper last week.

Libraries, she says, are:

The real ā€œsharingā€ economies, and in the US, they have resisted government surveillance and content filtering as a matter of mission.

Designed to host ā€œmultiple overlapping and sometimes conflicting communitiesā€, they are a place for everyone: adults looking for saucy fiction, teens searching for revision material and young kids just starting to read. It’s almost what the internet should’ve been.

What makes libraries able to do this? Tushnet argues it's the tagging of content. When someone (in a library’s case, a librarian) categorises a piece of work, she notes, only those who want to see it, and seek it out, will do so. No content is exposed unnecessarily. She also gives good examples of how this tagging works in practice from Archive of Our Own (AO3), the fanfiction site run by a non-profit she helped found.
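To make the idea concrete, here's a minimal sketch of opt-in, tag-based filtering in Python. It's not AO3's actual implementation, just a toy illustration (all names here are made up) of the principle that readers only see what they explicitly seek out:

```python
# Toy sketch of opt-in, tag-based discovery (hypothetical; not AO3's code).
from dataclasses import dataclass, field

@dataclass
class Work:
    title: str
    tags: set = field(default_factory=set)

def browse(catalogue, wanted, excluded=frozenset()):
    """Return works carrying every requested tag and none of the excluded ones.

    Nothing is pushed at the reader: a work whose tags weren't asked for
    simply never appears.
    """
    return [w for w in catalogue
            if set(wanted) <= w.tags and not set(excluded) & w.tags]

catalogue = [
    Work("Saucy Fiction, Vol. 1", {"romance", "explicit"}),
    Work("GCSE Revision Notes", {"education", "teen"}),
    Work("My First Storybook", {"children", "picture-book"}),
]

# A teen after revision material sees only that; the explicit work stays unseen.
for work in browse(catalogue, wanted={"education"}, excluded={"explicit"}):
    print(work.title)  # -> GCSE Revision Notes
```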

At the same time, Tushnet admits that content categorisation goes completely against the ethos of the attention-economy, profit-seeking platforms at the heart of the debate about content moderation. They want people on their site for longer; they want repeat visits and they want habit. Tagging isn't a way to drive those behaviours.

So maybe it’s not libraries that will provide the model for content moderation, at least not now. But that doesn’t mean we should stop thinking, like Tushnet, that public institutions could help us come up with the answer.

The woman who can hide Trump’s tweets

This is a great Bloomberg profile of Vijaya Gadde, Twitter’s top lawyer and the woman ultimately responsible for deciding what stays up and what comes down on everyone's favourite microblog (lol) site. There's also nice detail on the process for hiding the batshit crazy 280-character ramblings of President Trump (if that were ever to happen).

A little birdie

On the topic of Twitter, there's a bunch of jobs (some new and some old) going in its Trust and Safety team, both in San Francisco and Singapore.

Not forgetting...

Conservative YouTubers have had pro-Trump and anti-CNN t-shirts removed from underneath their videos after the shirts were judged to go against community guidelines.

YouTube removes pro-Trump t-shirts from creator merch shelves, says they violate community guidelines

Iranian activists explain the implications of Instagram and Facebook removing posts from ordinary folks (via Casey Newton’s newsletter).

Why activists get frustrated with Facebook - The Verge

When political speech gets removed, they have little recourse — and a lot to lose. Instagram’s sanctions against Iran are only the latest example.

Draft intermediary rules from India's IT ministry would require platforms to act on government demands within just 24 hours, a ā€˜recipe for over-compliance’ according to Daphne Keller in this op-ed.

Shreya Singhal case was one of the defining rulings of modern internet law | The Indian Express

With Shreya Singhal judgment, India showed the world how to protect plurality and innovation online. Draft Intermediary Rules by the IT ministry move away from that achievement

The latest platform-ad-targeting-Nazi-madness. When will it stop? (actually, don't answer that)

Twitter apologises for letting ads target neo-Nazis and bigots - BBC News

Social network apologises for allowing the use of discriminatory ad keywords it had meant to ban.


Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.