
Venessa Paech on the rise of AI speech governance and content moderation's PR problem

Covering: building a moderation community of practice and her new conference, All Things in Moderation
Venessa Paech, director of Australian Community Managers and founder of All Things in Moderation

'Viewpoints' is a space on EiM for people working in and adjacent to content moderation and online safety to share their knowledge and experience on a specific and timely topic.

Support articles like this by becoming a member of Everything in Moderation for less than $2 a week.


Depending on who you listen to, generative AI will diminish our lives, enhance them or both. But what effect will it have on content moderation?

Researchers from Microsoft recently noted that it would be some time before large language models could act as content moderation tools and went as far as saying that “we are always gonna have humans in the loop”. Phil Tomlinson, head of trust and safety at TaskUs, also said on a podcast that it'll be "an intersection of technology and humans" that cleans up the internet for the foreseeable future.

So where does this leave the people doing the work? Those working in commercial environments at platforms and for business process outsourcing companies (BPOs), but also the forum volunteers, server owners, Facebook Group admins and listserv maintainers? What do they need to continue to fulfil their role, paid or otherwise, as regulation and new tools change around them?

It's these questions that have got me excited for a new virtual conference catering specifically for "moderators of all types, across cultures, countries and contexts". All Things in Moderation will take place on 11-12 May and promises to be a platform for "knowledge sharing, learning and action-focused collaboration". Exactly the kind of thing that's needed right now, if you ask me, and it's why EiM is a proud partner of the event.

I spoke to Venessa Paech, who came up with the idea for the conference, to find out more about its genesis, why we have such a narrow view of moderation and her work setting the rules for internet communities for over two decades.

This interview has been lightly edited for clarity.


We're in a funny phase in which there is so much hype about generative AI but said AI was trained to be less toxic by outsourced human workers (EiM #188). How do you think about this?

Kate Crawford’s work in illuminating the visceral nature of AI creation and production (from raw materials to humans) makes this point clear - we ignore these factors at our peril. Humans are still a key part of our machine culture and are likely to be for some time yet. If prompt engineers are making six figures, what does that tell us about how we should support the humans we’re entrusting to steward social norms that, in turn, impact shared digital cultures?

What are you hopeful for and what do you worry about when it comes to AI?

My PhD is focused on AI and online communities, and I’d say I’m generally bullish. I see impressive progress, and I’m excited about the capabilities of AI tools to lift the burden, sharpen sense-making and harness collective intelligence. There’s little doubt it’s going to be transformative. Of course, we need to be very careful, considering the world we’d like to build, rather than the one that seems the most convenient - or the one that offers the quickest route to profit. I’m most concerned right now about existing power asymmetries and how AI might deepen them, and about the impacts of AI in a post-truth world.

There are different forms of moderation taking place under different guises; for example, brand safety and community management. Can you explain how you view the space?

Moderation has been part of our lives since the early web. It’s the foundation of how we interact with each other and with machines, and it shows up almost everywhere.

Social media management, online community management and commercial content moderation for large platforms are each different specialisms within the broader field of moderation. When moderation comes up in public discourse, most people think only of commercial content moderation. This is ultimately a narrow take that doesn’t help us address the interconnected issues impacting the building of safe and thriving digital spaces across incredibly diverse contexts. The more these practitioners can interact and support one another, the better. In my view, moderators are micro-regulators and cultural intermediaries - the ultimate integrity workers. Moderation itself is a critical social infrastructure, consisting of humans first and tools second.

You founded Australian Community Managers (ACM) in 2009 and have written about the need to build communities of community builders. Do moderators need the same?

Communities of practice are critically important for humans working to keep other humans safe online. Most humans who moderate in any context face similar challenges: how to deploy their power responsibly and equitably, how to comply with regulations while balancing freedom and agency, and how to maintain personal well-being and resilience while staving off online harms and bad actors.

Peer support from a community of practice is indispensable in facing these challenges. I’ve seen it make a huge difference and I’ve helped organisations develop this for their digital front-line staff. There is also tremendous value in exchanging knowledge about known bad actors, trends in the space, and emerging technologies and techniques.

Why hasn't it happened so far?

Moderators are gathering and organising across various regions, but there’s not been much movement globally - nor across different aspects of the practice, or across sectors (e.g. moderators in mental health communities talking with gaming moderators, etc.). I suspect this is largely due to the distractions and stresses most of us share. It’s difficult to find the time and space to forge those exchanges. I’m hopeful my work as a community builder can be a positive influence.

BECOME A MEMBER
Viewpoints are about sharing the wisdom of the smartest people working in online safety and content moderation so that you can stay ahead of the curve.

They will always be free to read thanks to the generous support of EiM members, who pay less than $2 a week to ensure insightful Q&As like this one and the weekly newsletter are accessible for everyone.

Join today as a monthly or yearly member and you'll also get regular analysis from me about how content moderation is changing the world — BW

Part of the issue is that moderation has a PR problem; people don't know what it is and what mods do. Why do you think that is?

Absolutely. There’s solid research about the structural marginalisation of moderation (Dr Jennifer Beckett likened it to 21st-century ‘sin-eating’). It is often positioned as ‘janitorial’, low-skill, low-context labour when in reality it’s usually high-context, cultural intermediation. Because certain types of moderation are invisible, it’s easy to misunderstand or minimise that work (and those who conduct it). The realities of moderation can be an uncomfortable mirror that many would rather ignore.

As a practice, moderation requires us to be intentional about the human behaviours we’re looking to incentivise or disincentivise. These are big questions and discussions that all of us should care about, whether we work in online safety or not.

All Things in Moderation seems to try to address that by focusing on the practitioners, those doing the work. How else is it different from other conferences?

Most gatherings where moderation is discussed focus on policy or the role of platforms. There are frustratingly few opportunities for the humans who actually moderate to gather, exchange knowledge, learn from and support one another across contexts and industries.

It’s also vitally important for humans who do the work of moderation to have their voices heard and to sit at the table with policy-makers, platforms and researchers. All Things in Moderation is designed to unite these groups for constructive outcomes, amplify those voices and shed light on the richness of moderation practice happening around the world.

What can people look forward to if they attend the conference?

We have two days of keynotes, talks, panel discussions, workshops, resources and more, covering topics from building cultures of care online to institutional support for moderation as a practice (chockas, as we say in Oz). We are looking at moderation through an Indigenous lens, a restorative justice lens, a technological lens and more.

We have remarkable speakers from APAC, Europe, America, China, Uzbekistan and beyond, and the attendees registered so far are from just as far and wide. Those who don’t moderate directly in the space will learn more about the nuances and challenges of the practice, including problems they can help solve. And digital front-line workers who moderate, including community managers, social media managers, commercial content moderators, facilitators and more, will hopefully take away actionable knowledge while feeling heard, supported and connected. I’m hopeful we can spark some productive collaborations, and build on this again in 2024!

Finally, you’ve helped govern internet communities for over two decades. What do you know now about how the web works that you wished you’d known then?

Like so many of us, I was far too naive about the capacity of the web to perpetuate inequalities and reinforce existing power structures. I’ve learned how important intentionality is in moderation (and in all things), and that we can never afford to take inclusion for granted. Systems consistently benefit when we do this work, and are impoverished when we fail to.


Want to share learnings from your work or research with thousands of people working in online safety and content moderation?

Get in touch to put yourself forward for a Viewpoint or recommend someone that you'd like to hear directly from.