
Bri Riggio on writing content policies and helping her team learn and grow

Covering: making the transition into the trust and safety industry and creating policies with trade-offs
Bri Riggio, Head of Platform Policy team at Discord

'Getting To Know' is a mini-series about what it's really like to work in trust and safety, in collaboration with the Integrity Institute.


Every week, new and interesting roles are appearing for people who want to make the web a safer space for those who use it. I'm regularly looking at what's out there because I include the most eye-catching roles in my weekly newsletter.

When you look closely at the job descriptions, two things stand out:

  1. Many of the roles expect applicants to have worked in other sectors (the trust and safety space is, after all, very young); and
  2. The skills and interests that are sought after are wide-ranging and varied, and rarely limited solely to legal or product management or engineering or policy.

Just take a look at the recent job ad at Grindr that I shared (EiM #172): it asks for experience dealing with user-generated content but also conducting research, doing data analysis, managing clients and, ideally, proficiency in languages. It's effectively half a dozen roles in one and something that an applicant fresh out of college or university would find tough to prove.

It's for this reason that the fourth interview in this mini-series is perhaps my favourite. Bri Riggio heads up the Platform Policy team at Discord but had an interesting career in a different sector before joining the instant messaging platform. In this Q&A, she expands upon:

  • Moving from higher education into trust and safety
  • The importance of having a good support network
  • Making policies, and decisions, that fail to please everyone
  • Getting better at setting work-life boundaries during the pandemic


If you enjoy this Q&A, get in touch to let me know or consider becoming a member of Everything in Moderation for less than $2 a week to support the creation of other articles like this.

This interview has been lightly edited for clarity.


What's your name?

Bri Riggio

What is it you do?

I currently head up Discord's Platform Policy Team, which focuses on developing and improving policies that govern what kind of content and behaviour is and is not allowed on the service. My team members do a mix of researching and drafting policy documents, as well as consulting with any other internal teams that want to better understand the policies or are in need of new policy recommendations for their own projects or work. I started this role in October 2021, and prior to that I was working on Discord’s Trust & Safety team focused primarily on countering violent extremism on the service.

How did you get into the industry?

Before joining Discord, I had an almost 10-year career working in higher education administration in which I did a little bit of everything — executive assistance, course catalogue planning, faculty affairs, student affairs, alumni services, and career advising. I loved working with students, but I eventually reached a breaking point with the bureaucracy and felt like I wasn’t having the impact on the world that I had originally wanted when I got my master’s degree in international conflict studies.

I was lucky to have a support network that allowed me to quit my job with nothing else lined up while I figured out what other kind of career I wanted to transition to. I won’t say it was easy — I upended my lifestyle to save money, and I definitely had some tear-filled days and took some hits to my self-confidence every time yet another job rejection came through. To stay sane through this time, I really leaned into playing video games and streaming on Twitch and hanging out with friends and connections online. I eventually picked up some part-time contracting work doing administrative tasks for a full-time Partnered Twitch streamer who I knew, and he ran his entire business over Discord, which was how I got really familiar with the chat service.

My long days reading job descriptions and doing informational interviews eventually ended when I discovered the field of Trust and Safety and realized that the work might be the type of non-technical job I could do in the tech industry. It’s funny to think back on it now because I did a Google search to see who was hiring for these roles at the time, and one of the first hits was Discord, and I thought, “Oh, well I like Discord, why don’t I just see if I could even get an interview for this type of job.” So I whipped up a cover letter explaining why this higher education professional wanted to transition to an entry-level Trust and Safety analyst role, and they called me back a few days later. Three weeks later, I was hired, and I moved to San Francisco and started the role in September 2019.

What are your main responsibilities?

While working in trust and safety, I started out reviewing online content, moderating abusive material, and learning and adapting existing enforcement policies as necessary. As I grew in the organisation, I specialized in reviewing violent extremist content and behaviour and ultimately took ownership of that abuse vector and a specialized team of ICs (which stands for “individual contributor” and designates workers who don’t have people management responsibilities) to help me do that work. In my newer Policy role, I still balance managing a team of IC policy specialists, but I also now develop platform policies myself, engage with stakeholders both within and outside of my company, and do strategic planning and visioning for the future of my team.

What does a typical day look like?

Every day is a little bit different for me, but generally, I'm doing a mix of various things: managing the people on my team as professionals, but also as human beings, as I believe we’re all people first and workers second. I also may spend time developing strategic plans and roadmaps for policy updates or development, or meeting with our Trust & Safety, Legal, and Communications teams to ensure that information is being shared as strategically and proactively as possible. Of course, I also actually write and review content policies myself. Finally, I am a point person for crisis escalation, so when real-world events begin to impact what we're seeing online, I lead the response from the content moderation perspective, often with support from one of my team members if that’s appropriate. I really try to balance giving my team members opportunities for growth and professional development, while also not asking them to regularly take on work that is well above their pay grades and compensation levels.

What do you enjoy most about your role?

There are a lot of things that I really enjoy about the role that I am in right now. I enjoy that the work is consistently interesting and allows me - no, forces me - to keep learning about different societal issues, international laws and policies, online subcultures, you name it. I also enjoy leading a team and helping facilitate the professional development of others. I enjoy the people I work with and am grateful for them because this work is not easy to do without a good support network, and I’m lucky to have colleagues who care. I enjoy feeling like I may be making a difference in the way that people think about and approach online issues and safety. It’s a responsibility that sometimes feels overwhelming, but I do enjoy the thought that I am hopefully having a positive impact on people’s lives - even if they never know who I am.

What do you keep track of to understand whether your work is, well, working? How can you tell you're making an impact?

This is a question I've come back to a lot over the past couple of years, and I'm still not entirely sure I have a great answer. When it comes to harm mitigation, I think it's difficult to measure or show the impact of something that didn't happen, which is always the goal of this kind of work - to prevent bad things from happening to people. There’s a lot of pressure to get quantitative metrics that show the impact of your work (this is true everywhere, but feels especially true in tech), but there’s not one single data point that can show this on its own. You can look, for instance, at the number of flagged pieces of violating content, but more flags don’t necessarily mean that the policy is “working.” It could mean that your detection methods got better, or it could mean that increased user education is prompting more people to make reports on the content, or it could mean that some real-world event has happened that is resulting in this type of content manifesting more online. Data is useful, but only when understood in a broader context, and only in tandem with other data points.

I’m biased because I focused on qualitative methodologies during my education, but I think the power of qualitative data, of being able to tell a really compelling story and extrapolate on case studies, can be really informative and powerful in understanding the impact of one’s work. Getting external signals from trusted partners or peers that a policy or enforcement action is in line with their research or recommendations can be helpful, and gauging user understanding of our policies and actions is another thing I'm always on the lookout for. But I admit that measuring the impact of this work both quantitatively and qualitatively in a way that is easy for others to understand has been a challenge. I'm still figuring out the best way to do this and communicate those successes to others.

What question are you grappling with most right now?

How do we prevent communication and social media platforms from facilitating what feels like a global descent into hateful authoritarianism and fascism? Related, but tangential to that: how can online communication or social media be used to provide more net positive to people and the world than net negative? Is that even possible? [BW: what’s your sense?] I think it's possible! I have to believe it's possible, or else I don't know what I’m doing staying in this line of work. But I do fear more and more that the incentive structures of our current economic, political, and social systems are ultimately incompatible with this. I hope I'm wrong!

What do you wish the world understood about your work?

There are tradeoffs to every policy decision that is made. Some issues are a bit more clear-cut than others if you come at the issues with human rights and ethics in mind, but every issue is always more complicated than it seems and every decision carries tradeoffs and unintended consequences. Even decisions that seem pretty straightforward can become complicated very quickly. For example, you might decide that certain topics of conversation — let’s say conversations about drinking alcohol — are not conversations that you want teens or children to have access to. Perhaps you decide that this content should only be made available to adults because the harm associated with underage drinking is too high to ignore and you definitely don’t want your platform associated with that behaviour. You can make that decision and have a good rationale behind it, particularly in the United States where the drinking age is 21 and where youth binge drinking is a problem.

But does that kind of approach make sense for users in other countries, where the drinking age might be lower or nonexistent? What if you were to discover a youth rehabilitation group on your platform where users are recovering from alcohol addiction and therefore are talking about alcohol, but aren’t encouraging each other to drink - is that maybe an exception? What about discussions revolving around alcohol consumption in a movie or video game? How do you decide what is allowable alcohol talk among teens versus non-allowable? These are the kinds of thought exercises that you have to go through with literally any policy decision that you might want to make for a platform or service, and it’s rare that you’ll ever come to a final decision that pleases everyone everywhere.

What was the last thing you read that resonated strongly with you?

This sounds a little silly, but I got really into "Designing Your Life" by Bill Burnett and Dave Evans back when I was in education and career advising. I found their style a little cheesy, but their idea of applying design thinking to life and career planning really spoke to me, and I used a lot of ideas and activities in that book to design career workshops for my students.

Since I'm hoping to do some career development sessions for my team in the coming weeks, I picked up the updated version of their newest book, "Designing Your New Work Life", partly as a refresher for myself, but also to see what new nuggets of wisdom I could mine from it. A lot of the book is focused on reframing "dysfunctional beliefs" that someone is holding into empowering or action-oriented new mantras, and in the very first chapter, their first reframing message is to reframe feelings of work inadequacy and lack of success into being "good enough - for now." That idea has really stuck with me.

Since I joined the world of tech, I feel like I've been on a neverending treadmill of trying to climb the promotion ladder and make up for lost time since I started this second career of mine a bit later in life. And harm mitigation work is a neverending cycle of dealing with and trying to respond to horrible things happening that will wear you down. "Good enough - for now" has become my grounding, stoic mantra these days, and it helps me remind myself that I'm doing the best that I can and I don't need to be going 110% all of the time, nor am I going to solve all the problems of online safety by myself or in one day, week, month, or year.

How do you spend time away from work?

For two years, there was rarely "time away from work" for me, so this year I've been aggressively trying to set better work-life boundaries and rediscover old passions. I love hiking and exploring state and national parks when I need to get away from screens, and I also play a lot of video games when I can stomach a bit more screen time. I recently started taking pottery lessons after a 13-year hiatus, and the tactile experience of getting my hands dirty and morphing clay on a pottery wheel has been incredibly restorative for me.

Question from fellow Integrity Institute member Allison Nixon at Unit 221B: “How do you deal with the anxiety that people will paint your work as privacy-violating rather than something that results in a net gain in privacy? I worry about that sometimes.”

Oh gosh, I completely understand where this question is coming from, particularly with the kind of work that Allison does. There are a lot of conversations and concerns about user privacy happening right now, and there is always some element of having to decide how much privacy is worth trading for safety in the integrity workspace. To answer truthfully, I guess I don’t really worry about this, at least not on a day-to-day basis. That’s partly because if I am ever making a policy enforcement recommendation that could be interpreted as invading privacy, I need to have a good, well-reasoned justification backing it up before I even suggest it, so if someone wants to paint that recommendation as privacy-violating, I have typically already done the work ahead of time to respond and defend my position. I do think, though, that there is some foundational and proactive work that individuals working in the integrity, safety, or policy space can be doing to help at least explain these concerns around privacy tradeoffs, and that is building relationships and explaining the task of safety and integrity work to individuals who otherwise don’t have a reason or need to think about these issues. It’s difficult sometimes because no one wants to hear about how this cool thing that they created is being used for bad or nefarious things, but one of my mentors once told me - and this has stayed with me since - that the main job of doing policy work is teaching people how to understand and care about safety and integrity. So that’s what I feel like I’m trying to do a lot of the time.

Before we go, what advice would you offer someone wanting to do your job?

Generally speaking? Keep an open mind, pursue opportunities that expose you to different viewpoints and experiences, read and learn, and never stop. Read tech newsletters like this one, but also read the regular news, read academic articles, and read books. Learn internet history, and learn about global history and global power politics. Those last bits sound a bit lofty, but I can’t count the number of times that my knowledge about a weird internet meme or the history of a regional conflict has actually been applicable to a policy recommendation or decision that I have made.

Tactically speaking, though I feel like people find their way into Trust and Safety from lots of different paths, the pathway to content or platform policy frequently seems to be by way of doing trust and safety content moderation work first. That’s obviously not the only way to go about it, but it can be really helpful to have direct, intimate knowledge of how content policies are implemented and enforced so that you know what needs to be considered when actually developing the policy. I also don’t know that there are many entry-level tech policy jobs out there right now, so it seems wise to think about these policy roles as more of a “next step” or “end goal” to work towards through a career transition or promotion within a company, rather than assuming that you will be able to step into a role with no experience or familiarity with a platform or app. I feel like I am effective at my current role in large part because I know Discord, and while I do think I could step into a similar role at another company I know less about, I don’t think I’d be as impactful or even as compelling an interview candidate. I do hope to see more entry-level roles for policy work in the future because I think new, fresh perspectives are just as important as old, more veteran perspectives, but right now, tech policy jobs are feeling more like a “long-term game plan” kind of thing.

What question would you like the next person to answer?

If you could wave a magic wand and fix one issue in the integrity space, what would it be?


Want to share learnings from your work or research with 1200+ people working in online safety and content moderation?

Get in touch to put yourself forward for a Viewpoint or recommend someone that you'd like to hear directly from.