4 min read

📌 Moderating the metaverse, GitHub's 'least restrictive' approach and detecting toxicity

The week in content moderation - edition #137

Hello and welcome to Everything in Moderation, your Friday newsletter with all the content moderation news you need to know. It's curated and written by me, Ben Whitelaw.

This week saw the announcement of another big Trust and Safety hire, showing once again how heavily platforms are investing in smart, experienced online safety professionals. We've seen similar moves at Google (EiM #133) and Snap (#128) over the last few months, and the trend shows no sign of ending: there are open senior roles at Cloudflare (#126) and YouTube, and dozens more at all levels. One to keep an eye on as we head into 2022.

As ever, if you enjoy the newsletter, forward this to a colleague and let them know that they can subscribe here.

Onto this week's links — BW


📜 Policies - emerging speech regulation and legislation

Aspen Institute, the research think tank, this week published the results of its six-month study into mis- and disinformation. Happily, there was more than an honourable mention of content moderation within the 79-page pdf (why is it always a pdf) and its 15 recommendations on how governments, private companies and civil society can work together to reduce online harms. They included:

  • Adopting common definitions and standardized metrics in order to facilitate public and researcher understanding.
  • Releasing data about messages that are "shared at scale and by whom, whether they are paid, and how they are targeted".
  • Ensuring content moderation and amplification are "representative of the cultural terrain of marginalized communities impacted by disinformation".

This might not be new if you work in content moderation or are a regular EiM reader, but it serves as another reminder of the importance of the work that lots of you do.

Government legislation should adhere to international human rights law to "ensure that content moderation lives up to international standards instead of national legislations", according to a report published this week. Danish think tank Justitia recommends that platforms sign a non-binding free speech agreement to signal their commitment to freedom of expression and avoid what it calls "a regulatory race to the bottom".

If you're in the weeds of the Digital Services Act, or want to be, this guide from digital rights network EDRi on the 2,300+ amendments tabled by various European Parliament committees is very handy. For a condensed version, bookmark this Twitter thread from Jaqueline Rowe, Policy Officer at Global Partners Digital.

💡 Products - the features and functionality shaping speech

How do you build a machine-learning toxicity detector that operates in 50 languages across 150 countries? Massimo Belloni, data scientist at Bumble, explains in a new blog post how the company developed its Rude Message Detector and the architectures and validation routines it used in the process. Warning: contains phrases like "(multi-headed) self-attention mechanism" and "classical RNNs that compute mutual importance".
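If that jargon is new to you, here's a minimal toy sketch of what a self-attention-based classifier looks like in code. To be clear, everything below (the ToyRudeMessageDetector name, the vocabulary size, the dimensions) is made up for illustration and is not Bumble's actual model; it just shows the kind of building blocks the post is talking about.

```python
# A toy self-attention toxicity classifier, for illustration only.
# All names and sizes are invented for this sketch, not Bumble's model.
import torch
import torch.nn as nn

class ToyRudeMessageDetector(nn.Module):
    def __init__(self, vocab_size=250_000, dim=256, heads=8):
        super().__init__()
        # A single shared subword vocabulary is a common way to cover
        # many languages with one model.
        self.embed = nn.Embedding(vocab_size, dim)
        # Multi-headed self-attention lets every token weigh every other
        # token's relevance (the "mutual importance" the post mentions).
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classify = nn.Linear(dim, 1)  # one "rude vs. not rude" logit

    def forward(self, token_ids):
        x = self.embed(token_ids)   # (batch, seq_len, dim)
        x, _ = self.attn(x, x, x)   # tokens attend to each other
        pooled = x.mean(dim=1)      # average over the sequence
        return torch.sigmoid(self.classify(pooled))

model = ToyRudeMessageDetector()
fake_batch = torch.randint(0, 250_000, (2, 16))  # 2 messages, 16 token ids
print(model(fake_batch))  # two rudeness probabilities between 0 and 1
```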

If you use filters, muting and blocking on social media, you might want to take part in Amy X. Zhang's study on users' preferences for content settings. The University of Washington professor previously worked on a paper on digital juries (EiM #72) so you can bet the research will be worth reading. You'll even get paid for your input.

💬 Platforms - efforts to enforce company guidelines

GitHub came under fire back in September for hosting a project that ranked and rated women (EiM #127), but a new blog post on its approach to moderation demonstrates that the company has upped its game. Abby Vollmer, GitHub's Director of Platform Policy, outlines how the software development platform expects users to moderate their own projects and sets out its 'least restrictive approach' to addressing violations of its Terms of Service. My read of the week.

Here's an interesting debate for your next dinner party: should Facebook hold data about users indefinitely in case it can help law enforcement investigate real-world crimes, or does the company have a responsibility to delete that data when pages and accounts are taken down? That's the question at the heart of a court case playing out in Albuquerque, New Mexico, where prosecutors are seeking a civil injunction to bar an armed civilian group called the New Mexico Civil Guard from acting as a paramilitary organization at future protests. Will Oremus at The Washington Post has the story.

👥 People - folks changing the future of moderation

Niantic Labs might not mean much to some of you, but the games company, spun out of Google in 2015, was responsible for the augmented reality hits Pokémon Go and Ingress. And now it has its first Global Director for Trust & Safety and Policy.

Camille François joins from Graphika, where she was Chief Innovation Officer, and before that was Principal Researcher at Jigsaw, Google's innovation unit, which often crops up here in EiM for its work protecting users online.

In short, François has vast experience and doesn't seem to be messing about either. In a Twitter thread coinciding with the announcement, she outlined her goal of "re-invent(ing) how a trust and safety team can help steer towards a better future".

Her first task is not an insignificant one: helping Niantic define how it will moderate in the metaverse and "imagine a future of worlds that can be overlaid on the real world".

🐦 Tweets of note

  • "We need to think about platform systems/design much more broadly than 'what content moderation/reporting procedures do platforms have?" - Demos' Ellen Judson, who this week appeared on a UK parliament evidence session about tackling online abuse, talks up the benefits of 'safety by design'.
  • "These days, everyone seems to extoll the virtues of transparency, but nobody can agree on what it actually means." - UCLA Tech executive director Michael Karanicolas unpacks his new paper on establishing FOIA for social platforms.
  • "For the first time, we have the chance to directly see (online) Indian elected lawmakers seeking to hold a Facebook executive to account." - Raman Chima, Asia Pacific Policy Director at Access Now, on an important moment for India.