
Jess Mason on how Clubhouse structures its trust and safety team

Covering: working with third-party groups to develop effective policy and how to get feedback from users on new guidelines
Jessica Mason, Head of Global Policy and Public Affairs at Clubhouse

'Viewpoints' is a space on EiM for people working in and adjacent to content moderation and online safety to share their knowledge and experience on a specific and timely topic.

Support articles like this by becoming a member of Everything in Moderation for less than $2 a week.


Back in 2017, just before the General Data Protection Regulation (GDPR) came into force, demand for data protection experts went through the roof. According to one estimate, vacancies surged by 700% as businesses rushed to hire Data Protection Officers to comply with the new legislation. Whole teams took shape where once there were none.

I predict a similar thing will happen with trust and safety roles following the recent publication of both the Online Safety Bill and the Digital Services Act. Companies that previously had little need or interest in policing their platforms will have to create roles and reporting lines, and those that already have teams will grow significantly. But how should they go about doing that?

A new episode of the Tech Against Terrorism podcast sheds a little light on how two technology companies have thought about setting up their policy teams. In it, Jessica Mason, Head of Global Policy and Public Affairs at Clubhouse, and Josh Parecki, Head of Trust & Safety at Zoom, explain:  

  • how their teams are structured
  • what goes into creating globally applicable policies
  • what happens when real-world events force policies to be updated

Jessica was kind enough to answer a number of follow-up questions via email about how Clubhouse's approach to trust and safety has evolved since she arrived at the company last year, as well as the policy areas her team have prioritised in that time.

The interview has been lightly edited for clarity.


You arrived at Clubhouse 12 months ago from Google. How did you prioritise your in-tray and what have you done in that time?

When Clubhouse first reached out to me the company was just nine people and had only been available in the App Store (so iOS only) for three months. The company was small but mighty, and everyone worked diligently on building safety into the product. I was lucky that I arrived at the same time as our incredible head of operations and head of community health engineering, so I had this great team to start building with to help support the explosive growth of the app. We focused all of our efforts on the three Ps: People, Product, and Policies.

People: We immediately set to work to make sure we had the right teams and the right language support and coverage. We’ve grown from a full-time team of just nine people in January 2021 to 100 people today, with a sizable portion of our team working on trust and safety. Additionally, we have invested in an extended team to help us keep the platform safe in different languages and time zones.

Product: We set out to understand what the biggest safety challenges were and what we could do in our product to mitigate them. That included everything from better blocking features to hash matching for child sexual abuse imagery to better tooling for users to report things to us. We also set up a cross-functional process to build safety features into our products before we launch them.
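
For readers unfamiliar with hash matching: the idea is to fingerprint an uploaded file and compare that fingerprint against a shared database of fingerprints of known abusive material, so known images can be blocked without anyone having to view them again. Here is a minimal sketch of the exact-match version in Python. It is illustrative only, not Clubhouse's implementation; production systems typically use perceptual hashes (such as Microsoft's PhotoDNA) so that resized or re-encoded copies still match, with hash lists supplied by organisations like NCMEC.

```python
import hashlib

# Illustrative exact-match hash check. Real CSAM-detection systems use
# perceptual hashing (e.g. PhotoDNA) rather than cryptographic hashes,
# and the hash list comes from an industry hash-sharing programme.

KNOWN_BAD_HASHES: set[str] = set()  # hex digests from a hash-sharing list

def is_known_match(image_bytes: bytes) -> bool:
    """Return True if the upload exactly matches a known-bad hash."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES
```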

Policies: Good content moderation requires clear policies, processes for enforcing them, and data on how you’re doing. We worked to make sure we had clear policies for everything we wanted to prohibit on the app, strong and detailed guidance and training for our teams on how to enforce, and ways to understand how well we’re doing and where we need to improve with more training or updated policy guidance.

BECOME A MEMBER
Viewpoints are about sharing the wisdom of the smartest people working in online safety and content moderation so that you can stay ahead of the curve.

They will always be free to read thanks to the generous support of EiM members, who pay less than $2 a week to ensure insightful Q&As like this one and the weekly newsletter are accessible for everyone.

Join today as a monthly or yearly member and you'll also get regular analysis from me about how content moderation is changing the world — BW

Before you arrived, the company had received some negative press coverage about its content moderation. How did that change how you approached the role?

Even before joining, I spent a lot of time with leaders at Clubhouse to understand whether they were committed to safety on the platform, and it was clear that they were. At the time, Clubhouse had a small and nimble team that worked swiftly to address incidents, often putting aside other launches to prioritize safety features. One of the things I really sought to understand before joining was whether our leadership was too fixated on negative press. In other words, would they react to a press article about a negative experience at the expense of making data-driven decisions based on need and severity? I wanted to make sure that I could focus on the types of abuse that are the most serious (e.g. violent extremism, hate, and child safety), not what happened to be in the news or on Twitter. It was clear to me that they were focused on tackling the most egregious types of abuse with as much effort and resources as we could muster.

As we were planning for the app’s general release for summer 2021, that’s how we made decisions—we focused on expanding the team and putting more manpower behind trust and safety and the health of our community. We prioritized issues according to data and severity.

How is your team structured and can you talk about where it interacts with the wider organisation?

My team covers global content policy, product policy, safety for new product launches, privacy and security policy, public policy and public affairs—i.e., our relationships with governments and civil society groups. So it is a lot of policy! My absolute favorite thing about it is that we interact with everyone! From our recruiting team, who wants to be able to tell candidates how we’ve approached this space responsibly, to our operations team, who manages our partners that help enforce our policies all around the world, to our product team, who works on baking safety in from design to launch.

On the podcast, you mention the policy team was intentionally created separately from the operations team. Why is that? And what, if any, are the downsides of that decision?

That’s right! We wanted our policy team to focus on research, principles, and data rather than react to individual cases of abuse or escalations. One of the best ways to do that was to pull them out of the daily work of reviewing individual cases and instead have them look at trends and data collected and analyzed with our ops team. We still spend a lot of time looking at individual examples, but usually as part of a larger research project for a policy change. We wanted our operations team, on the other hand, to focus on keeping users safe as quickly and consistently as possible. In our case there have not been any downsides because we have a world-class operations organization that runs in a very data-driven way and works in lockstep with our team.

You’re working with Tech Against Terrorism to support Clubhouse's policy development for terrorist and extremist groups. What other third parties do you work alongside, and to what end?

We work with a number of third parties—for example, we’re members of the Tech Coalition to combat online abuse of children, and similar to Tech Against Terrorism they provide resources and help facilitate conversations with experts and with other companies. We also work with a variety of civil society groups and deeply value their partnership and input—in the last few weeks alone my team and I have chatted with the American Jewish Committee, Witness, Access Now, and 7amleh, just to name a few!

I was very interested in your weekly town halls, which allow Clubhouse users to ask questions and give feedback on new policies. Where did that idea come from and how has it evolved since it started?

The weekly town halls were created by the founders and have been happening since the earliest days of the app. Paul and Rohan had a mission to build a platform that was a meeting place on the internet and truly community-first. A vital part of that is listening to the community and taking in their feedback to learn how we can best support them. A number of features on the app today were requested by the community.

We’re proud of the community we’ve built, and Paul and Rohan’s dedication to the town halls demonstrates how connected they are to it. It’s rare to have the founders of an app speak directly to their community to share highlights from the week, new product updates, and address questions and product thinking live.

We learn so much from our community through their feedback in town halls and through our support center. It is invaluable, and we’ve made important policy decisions because of what they’ve shared with us.

You mentioned that Clubhouse invested in trust and safety early and would recommend other companies do the same. What does that actually look like in headcount or budgetary terms?

I think what made the app so successful in the beginning was how thoughtful the founding team was and how measured their approach was in building it. The invite-only model helped control growth and allowed the team to build out the app’s core functions—especially trust and safety. Even now, we often roll out new features slowly to test our safety responses. The team set out to build a more human place on the internet, and establishing safety first is aligned with that mission and better enables us to do just that.

That being said, we are in a lucky position because of the funding we received. It meant we could build a global extended team across different time zones and languages, and that we could hire an engineering team focused on safety, in addition to our operations and policy teams. For other companies that do not have the same resources, I highly recommend engaging with third-party groups like Tech Against Terrorism and considering safety-by-design principles in your product teams.

What does success look like for you and your team? And what metrics do you look at to know you're on the right track?

As a company focused on the health of our community, we look at the prevalence of issues or abuse on the app, our response times in addressing user reports and requests, and the accuracy or quality of our responses. My team is particularly focused on prevalence: how we mitigate issues and lower prevalence with product solutions, in addition to how accurately we enforce our policies.
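
To make those three measures concrete: prevalence is typically the share of sampled content found to violate policy, response time is how long user reports take to be actioned, and accuracy is the share of enforcement decisions upheld on a second-pass quality review. A rough sketch of how they might be computed from moderation records (the field names here are hypothetical, not Clubhouse's schema):

```python
from statistics import median

# Hypothetical moderation records: 'violating' from content sampling,
# 'hours_to_action' from report-handling logs, 'decision_upheld' from
# a second-pass quality review. Field names are illustrative only.
records = [
    {"violating": True,  "hours_to_action": 1.5, "decision_upheld": True},
    {"violating": False, "hours_to_action": 0.4, "decision_upheld": True},
    {"violating": True,  "hours_to_action": 6.0, "decision_upheld": False},
]

prevalence = sum(r["violating"] for r in records) / len(records)
median_response = median(r["hours_to_action"] for r in records)
accuracy = sum(r["decision_upheld"] for r in records) / len(records)

print(f"prevalence={prevalence:.0%}, median response={median_response}h, "
      f"accuracy={accuracy:.0%}")
```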

The Tech Against Terrorism podcast is a deep dive into the evolving use of the internet by terrorists and violent extremists, how this relates to real-world harms, and what can be done to support the tech sector to disrupt this threat.

Listen to Anne Craanen talking to Jessica Mason, Head of Global Policy and Public Affairs at Clubhouse, and Josh Parecki, Head of Trust & Safety at Zoom, about what informs the development and implementation of their counter-terror policies.


Want to share learnings from your work or research with 1000+ people working in online safety and content moderation?

Get in touch to put yourself forward for a Viewpoint or recommend someone that you'd like to hear directly from.