📌 Monitoring platform takedowns, Israel's resolve to regulate, and jail for a Facebook racist
Hello and welcome to Everything in Moderation, your weekly compendium of content moderation news and analysis. It's curated and written by me, Ben Whitelaw.
If this week's edition can be characterised by anything, it is a collective move to action. Politicians, platforms and even slow-moving judicial systems are seemingly done with talking and are stepping into the issues around online safety, inspired either by lofty goals (like free speech and democracy) or the vast sums of money on the table. It's what makes the topic such a fascinating one.
As ever, a warm welcome to new subscribers from Twitter, the Mozilla Foundation, Kinzen and elsewhere. If you like and regularly read EiM, show your appreciation by sharing via social media.
Here are this week's links — BW
📜 Policies - emerging speech regulation and legislation
Israel's efforts to regulate social media are ramping up. In an interview this week, Communications Minister Yoaz Hendel described platform liability as an issue that has sat in "no man's land" for too long and has set up two parliamentary committees to look into it. It comes a week after Facebook's public policy director for Israel and the Jewish Diaspora wrote that the company was "pleased that this process has begun".
It's important to note that Facebook has occupied a problematic position in the Israel-Palestine conflict for some time. Last month, the company was criticised by Human Rights Watch for its suppression of content posted by Palestinians (see also EiM #112) while new documents from the Frances Haugen trove show that Facebook employees questioned the demotion of a Palestinian activist's Instagram Stories to little effect. Back in May, it even created a "special moderation centre" (EiM #113) to remove violating content more quickly although public information about its work is limited.
Related to this news, a new tool to combat unexplained Palestinian account suspensions and takedowns was launched this week. The Palestinian Observatory of Digital Rights Violations enables users to submit reports, which are sent to platforms including Facebook, and allows a team of researchers to "follow up on the case and provide all needed support to reverse the violation or get a justification for it".
This week has also seen a flood of stories about the Online Safety Bill, which are worth noting:
- The Times reported that trolls could face two years in prison for posting content that causes "likely psychological harm" (I don't like giving ad dollars to Breitbart but its coverage of the story is gold).
- The chief executive of Ofcom, the body nominated to act as the regulator of the bill, warned that the business models of platforms are "at the root of the problem we have with safety and harm".
- The recruitment process for the chair of Ofcom has been relaunched and the job description has some, let's say, interesting tweaks, courtesy of The Guardian.
- The Digital Regulation Co-operation Forum, set up in 2020 to ensure collaboration between the UK's different online regulators, has announced former Google employee Gill Whitehead as its chief executive.
💡 Products - the features and functionality shaping speech
The big news of this week as far as product is concerned broke just after I sent last week's newsletter: Microsoft announced that it had acquired TwoHat, the content moderation provider that works closely with Xbox (which Microsoft owns), for an undisclosed sum. It's been a good year for AI classification tools so far: Discord bought Sentropy back in July (EiM #121).
Mark Zuckerberg's plans for a world in which everyone is immersed in VR/AR "means that all the current dangers of the internet will be magnified", according to a new Lawfare blog piece. Quinta Jurecic and Alan Z Rozenshtein argue that the metaverse should be designed "in such a way that limits engagement, constrains virality, and in general makes for a more human-scale platform than Facebook itself is".
I touched on algorithmic amplification in last week's newsletter (EiM #134) and, following Frances Haugen's testimony, it feels like a weakness in the platforms' "we're just a pipe" argument. Meanwhile, Twitter has been pushing people towards algorithmically-organised timelines. Inc's Jason Aten, who spotted it, wrote that it was frustrating that Twitter was "trying to use a dark pattern to get you to opt into something you've already opted out of".
💬 Platforms - efforts to enforce company guidelines
A British Facebook user who live-streamed himself racially abusing three black England players in the wake of the Euro 2020 final has been jailed for 10 weeks. 52-year-old Jonathan Best's 18-second clip was removed by the company three days after it was posted. The question remains: what of the hundreds of others who did the same? (see EiM #121).
A few weeks ago, LinkedIn retreated from China (EiM #132) and this week Yahoo called time on its operations in the country. The announcement was mainly symbolic — the US company closed its last office in the country in 2015 — but it's a reminder that, as Freedom House reported earlier this year, "the push to regulate the tech industry... is being exploited to subdue free expression and gain greater access to private data".
👥 People - folks changing the future of moderation
Tracy Chou should be familiar to anyone who has been keeping tabs on the evolution of online safety in recent years. I've featured the software engineer and activist — and her startup, Block Party — many times in EiM because she's one of the few people seeking to address the structural issues at the heart of platform abuse. Chou also gave a great talk at Tech Policy Press' online event a few weeks back, which is worth listening to if you haven't already.
But, even as someone who has followed her work closely, I had no idea about some of the crap she, like many women and people of colour, has put up with. Stalkers showing up near her San Francisco home. Being asked to give quote after quote for press articles on diversity. Being sexually harassed while fundraising. This new profile from Fast Company is a horrifying account and only increases my respect for her. It's also my read of the week.
🐦 Tweets of note
- "uyghur and tibetan languages have been removed from the chinese language learning app, talkmat" - Aurora Chang notes the latest crackdown by the Chinese government.
- "Platforms have *a lot* to answer for here, but governments around the world also play a role" - Samantha Floreani on the problems with Australia's Basic Online Safety Expectations.
- "Never do any public presentation (or internal All Hands) without making safety 30% or more of the content" - Samidh Chakrabarti provides a roadmap for making the metaverse a safer place in this thread.