'Content filtering as a matter of mission'
Hello everyone, especially new folks from Twitter and Agora. You're (hopefully) in the right place.
This week's edition is heavy on awesome female lawyers and all the better for it, IMHO.
Talking of the legal profession, here's a date for your diary: March 10 is the Cato Institute's Return of the Gatekeepers event, featuring Reddit's director of policy and the former head of legal, trust and safety at Medium (now general counsel at Neuralink) plus others. Sign up to be notified about the live webcast.
Thanks for reading - BW
Like books but comments
It's fair to say there's a scramble happening right now to find a model for online content regulation. Platforms and the people working for them are trying to conceptualise a web that is open but not overrun. It isn't straightforward.
There are some early forerunners (many of which I've covered in previous editions of EiM): the oversight board, the 'magic API' idea, a human rights approach and straight-up stricter regulation of what already exists. But none make complete sense and, as I noted in EiM #32, there's definitely room for weirder and wilder ideas yet.
One such idea is the public library approach, which Rebecca Tushnet, a law professor at Harvard Law School, outlined in her keynote speech at the 2018 Journal of Law, Technology and the Internet Symposium, a copy of which was published as a paper last week.
Libraries, she says, are:
The real 'sharing' economies, and in the US, they have resisted government surveillance and content filtering as a matter of mission.
Designed to host 'multiple overlapping and sometimes conflicting communities', they are a place for everyone: adults looking for saucy fiction, teens searching for revision material and young kids just starting to read. It's almost like what the internet should've been.
What makes libraries able to do this? Tushnet argues that it's the tagging of content. When someone (in a library's case, a librarian) categorises a piece of work, she notes, only those who want to see it, by seeking it out, will do so. No content is exposed unnecessarily. She also gives good examples from Archive of Our Own (AO3), a fanfiction website and non-profit she founded, of how this tagging works in practice.
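The opt-in logic Tushnet describes can be sketched in a few lines. This is a minimal, hypothetical data model (the titles, tags and function name are mine for illustration; AO3's real tagging system is far richer, with volunteer-curated tags), but it shows the core idea: a work only surfaces when a reader's own query matches its tags, so nothing is pushed at anyone.

```python
# Minimal sketch of opt-in, tag-based content discovery.
# Hypothetical catalogue and tags; real systems (e.g. AO3) are far richer.

def visible_works(works, requested_tags):
    """Return only works whose tags overlap the tags a reader asked for.

    Nothing is recommended or promoted: a work appears only when the
    reader has actively sought out at least one of its tags.
    """
    requested = set(requested_tags)
    return [w for w in works if requested & set(w["tags"])]

catalogue = [
    {"title": "Saucy fiction",        "tags": {"romance", "mature"}},
    {"title": "Exam revision notes",  "tags": {"study", "teen"}},
    {"title": "Picture book",         "tags": {"early-readers"}},
]

# A teen searching for revision material sees only that work;
# the 'mature' shelf stays invisible unless explicitly requested.
print([w["title"] for w in visible_works(catalogue, ["study"])])
```

Note the contrast with an engagement-driven feed: there is no ranking and no recommendation step, which is exactly why, as discussed below, this model sits awkwardly with attention-economy platforms.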
At the same time, Tushnet admits that content categorisation goes completely against the ethos of the attention-economy, profit-seeking platforms at the heart of the debate about content moderation. They want people on their site for longer; they want repeat visits and they want habit. Tagging isn't a way to drive those behaviours.
So maybe it's not libraries that will provide the model for content moderation, at least not now. But that doesn't mean we should stop thinking, like Tushnet, that public institutions could help us come up with the answer.
The woman who can hide Trump's tweets
This is a great Bloomberg profile on Vijaya Gadde, Twitter's top lawyer and the woman ultimately responsible for deciding what gets kept up and what comes down on everyone's favourite microblog (lol) site. There's also nice detail on the process of hiding the batshit crazy 280-character ramblings of President Trump (if it were to ever happen).
A little birdie
On the topic of Twitter, there's a bunch of jobs (some new and some old) going in its Trust and Safety team, both in San Francisco and Singapore.
Not forgetting...
Conservative YouTubers have had pro-Trump and anti-CNN t-shirts removed from underneath their videos after they were judged to go against community guidelines.
YouTube removes pro-Trump t-shirts from creator merch shelves, says they violate community guidelines
Iranian activists explain the implications of Instagram and Facebook removing posts from ordinary folks (via Casey Newton's newsletter).
Why activists get frustrated with Facebook - The Verge
When political speech gets removed, they have little recourse, and a lot to lose. Instagram's sanctions against Iran are only the latest example.
India's draft Intermediary Rules would require platforms to act on government demands within just 24 hours and are a 'recipe for over-compliance', according to Daphne Keller in this op-ed, which contrasts them with the Supreme Court's landmark Shreya Singhal judgment.
Shreya Singhal case was one of the defining rulings of modern internet law | The Indian Express
With Shreya Singhal judgment, India showed the world how to protect plurality and innovation online. Draft Intermediary Rules by the IT ministry move away from that achievement
The latest platform-ad-targeting-Nazi-madness. When will it stop? (actually, don't answer that)
Twitter apologises for letting ads target neo-Nazis and bigots - BBC News
Social network apologises for allowing the use of discriminatory ad keywords it had meant to ban.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.