
Jen Weedon on anticipating platform threats and how to manage burnout

Covering: empowering integrity professionals to do their job and aligning trust and safety goals with other teams
Jen Weedon, former senior manager, threat intelligence at Facebook

Getting To Know is a mini-series about what it's really like to work in trust and safety, in collaboration with the Integrity Institute.


If you listen to supporters of particular political parties or get your news from certain outlets, you'll soon arrive at the following viewpoint: that the people at social media platforms and tech companies who are in charge of protecting users from harm online are evil, inept or both.

Preventing predators from targeting children? Not a problem, they say. Creating artificial intelligence systems to spot hate speech? A cinch. Drawing the line on what a democratically elected president can and can't say? Easy.

The reality — as anyone who has tried their hand at writing policy or, worse, enforcing it will know — is altogether different. Content moderation decisions at all levels involve a series of trade-offs (eg speed vs accuracy) in which you have little to no chance of pleasing everyone. Minimising the risk of harm is just about the best you can do.

Since I started writing EiM back in 2018, I've been critical of individuals and companies who haven't fulfilled their responsibility to keep users safe. But I've also tried to highlight good work and best practice that goes unnoticed amidst the deluge of coverage about how it should be easy to create flawless moderation systems to mitigate the thousands of ways that humans seek to harm each other online.

That's why I'm really glad to have collaborated on this series with the Integrity Institute to demonstrate what it's really like to work in trust and safety.

Over the next five weeks, you'll hear from people working at different levels of the integrity process about their work, how they measure impact and (my favourite question) what they wish others knew about their job. I hope it's useful for peers in the industry as well as those wanting to pursue a career protecting people on the web.

Jen Weedon, former senior manager in Facebook's threat intelligence team, is a great first interviewee for the series and talks about:

  • Working with a host of teams at Facebook to make it "harder for the family of apps and services to be exploited"
  • The "intelligence cycle" her team used to find, prioritise and take action against harm
  • The "grinding" nature of integrity work and how to manage burnout
  • Her passion for empowering trust and safety professionals — and falconry

If you enjoy this Q&A, get in touch to let me know or consider becoming a member of Everything in Moderation for less than $2 a week to support the creation of other Q&As like this.

This interview has been lightly edited for clarity.


What's your name?

Jen Weedon

What is it you do?

Most recently I supported various teams at Meta/Facebook that worked on producing intelligence and conducting investigations about emerging or thorny threats that posed an outsized risk to our users and the company.

There were many teams focused on risk management and reduction, particularly around drilling into business and operational risk, but our team typically looked at our work through an adversarial lens. We’d identify who or what the ‘adversaries’ were, how they (ab)used the platforms and services, how they might respond when we took countermeasures, and how we could enable better decision-making around mitigating the associated harm with cross-functional partners, all with an eye towards scaling solutions over time.

In the best circumstances, we worked closely with our product (product management, engineering, user experience research, data science), operations, policy and communications, and legal partners to jointly scope what areas were most important, develop intelligence (or derive learnings from investigations), and then determine courses of action that took the cross-functional team’s perspectives and goals into account (sometimes these goals laddered up into top-line metrics, frequently not).

At a high level, there was analysis geared towards anticipating or preventing issues for our users, and retroactive analysis geared towards patching holes in existing detection or operational vulnerabilities, which we learned about through long-term threat tracking, investigations, and disruptions. We also tried to integrate “off-platform” intelligence into our work because, clearly, the harms that manifested on Facebook’s apps and services were part of a broader ecosystem (both online and offline).

Our output varied depending on the analysis and needs of our partners. Examples included: written products, presentations, sprints or ideation exercises (joint exercises with other teams to solve specific problems or illuminate specific threats), policy gap identification or “case law” to inform policy development, protocol development to help fill the gaps we'd identified, root cause analysis exercises, and sometimes long-form papers. All of this was geared towards an end goal of preventing or mitigating harms upstream or at least making it harder for the family of apps and services to be exploited.

I’ve since left Meta and will be joining another company in a few weeks to focus on related work around adversarial planning, red teaming, and integrity by design.

How did you get into the industry?

I’ve always been interested in different facets of security and how they intersect. At the start of my career, my focus was on human security: I had a Fulbright fellowship in Ukraine looking at the trafficking of women right after college, worked at a civil liberties law firm, and then got my master’s degree in international relations (while also interning in Tbilisi, Georgia). Geographically I was very focused on Russia and environs, and after graduate school I ended up working for a federal contractor.

At the time, there was a lot of contractor money for cybersecurity efforts, and firms sought people who could conduct analysis and then communicate it and apply it beyond describing the tactical and technical composition of the problem space. My liberal arts training, regional expertise, interest in security, and general curiosity positioned me well for working in the federal government as a contractor, and that was my entry into the industry more than 10 years ago.

I began by focusing on analysis of Russian military and information security doctrine and how this compared (read: didn’t) to how the West thinks about the use of the internet to achieve certain objectives. This is all openly talked about ad nauseam now, but at the time, it was really only military theorists and niche hobbyists who cared. It certainly wasn't something regularly covered by established journalism or an acceptable discussion topic at happy hour.

I transitioned to the private sector and worked at a number of threat intelligence firms that were at the forefront of bringing strategic threats into the light. I was on the team at Mandiant that wrote the APT1 report and several other seminal reports that helped mainstream discussion of information security as a business and national security issue, and exposed some of the efforts of government-backed threats to users online. In 2015, I joined the Facebook security team, where there wasn’t a lot of intentional work going into understanding how geopolitical realities affected users’ experience and how the platform could be gamed. So we built out the security presence in Facebook’s Washington DC office and the rest is history.

What are your main responsibilities?

As a leader supporting an integrity-focused team at a big platform, I wore many hats. Most importantly my role was to support, develop, and empower my teams. In the integrity space, the work can be grinding (some thoughts here). When you’ve got a team of smart and mission-oriented people, and they work in a space where they need to constantly advocate to get their problem spaces and analysis addressed, it can easily lead to burnout, so that was a key area of focus. There were also the basics of leadership: setting strategy, holding people accountable, and evaluating progress at the individual, team, and organisational level.

The managers and ICs [individual contributors] on the teams I supported were responsible for intelligence generation in their problem spaces (“problems” is another word for threats, and included areas like influence ops, counter-espionage, e-crime, core product security, human exploitation, child safety, dangerous organisations and individuals, and other emerging harms).

From an intelligence perspective, we sought to integrate our team’s work and findings into the greater Meta integrity machinery by creating demand, improving actionability, and, ultimately, challenging and refining the entrenched and unquestioned incentives for people’s work and how ‘impact’ was determined. It was not trivial to move beyond simple quantified measures of impact that reflected operational efficiency (vs actual measures of progress against solving a problem). This is particularly tricky at an organization that fetishises automation and scale (naturally so!), and where it was hard to point to preventative wins (how do you celebrate something you prevented from happening?). This is a structural and cultural challenge that remains, as far as I know, a topic of discussion and continued iteration.

What did a typical day look like?

The teams I supported were typically looking at adversarial trends, actors, and behaviours as opposed to content. They worked with partners to identify what topics we needed more information and intelligence on and figure out how to action it. On top of that, they would scope out proactive work ("where are the adversaries going next?"), evaluate new forms of detection, glean insights from investigations, and work with partner teams to try and enact change.

Management-wise, my job was to empower them and also work on engagement models between teams such as ours and more traditional product teams. We also had a fair amount of reactive and crisis-driven work but were always trying to tip the scales to the proactive side.

I often get asked, "How did the managers and teams doing the actual work even know where to start?". How, for example, does one “find” human traffickers, influence campaigns, or surveillance-for-hire firms targeting users? Well, there isn't a one-size-fits-all answer as it can vary based on who perpetrates the harm, the signals that can be derived from the behaviour, the product’s functionality, and the bad actors’ operational security (ie how well they cover their tracks).  

The intelligence cycle is one way to explain parts of how we worked, but we had hybrid intel and investigations teams. In the simplest and most abstract terms, our teams would (with more rigour than I outline below!):

  • Prioritize the harms (taking into account expertise as well as type of harm, region, event-based concerns like elections, and other business-driven variables) and then figure out what information is needed and by when to answer specific questions about harm reduction, risk mitigation, event prevention, etc.
  • Collect information and understand how the harms manifest, which includes both how they already look and how they could look. Sometimes the analyst or investigator’s understanding could be translated into signals or behavioural heuristics that the analysts could go and proactively look for (with rules and guidelines to account for privacy and acceptable use cases, of course), and then assess how effective their rules/signals were at finding genuinely bad actors and behaviours (see the simplified sketch after this list). Sometimes the harms were perpetrated by bad actors who could be ‘tracked’, who tended to reuse infrastructure, exhibit tell-tale tactics, techniques and procedures, or telegraph their intentions and motivations. Sometimes an existing dataset of “known bad” could be used to find more. Then there were external pieces of information that could be used to enrich and otherwise fully understand the whole picture.
  • Then there’s some level of analysis and actioning. This could look like scoping out an investigation (how widespread is X cluster of bad activity?), taking action (enforcing against the activity and/or actors, reducing its spread, etc), or evaluating why this matters, how it happened, and providing recommendations to reduce it moving forward.
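
To make that signal-and-evaluate loop a little more concrete, here is a minimal, entirely hypothetical sketch in Python. The account fields, the toy rule, and the labelled "known bad" set are invented for illustration; they are not Meta's actual signals, thresholds, or tooling, which the interview does not describe.

```python
# A hypothetical sketch of the "draft a heuristic, then evaluate it" step
# described above. Every field, threshold, and label here is invented for
# illustration; real signals would be richer and governed by privacy and
# acceptable-use rules.
from dataclasses import dataclass


@dataclass
class Account:
    id: str
    account_age_days: int          # how recently the account was created
    outbound_msgs_per_day: float   # volume of outbound messages
    shares_known_bad_infra: bool   # reuses infrastructure seen in past cases


def heuristic_flags(account: Account) -> bool:
    """A toy rule an analyst might draft: new, noisy accounts that reuse
    infrastructure seen in earlier investigations."""
    return (
        account.account_age_days < 30
        and account.outbound_msgs_per_day > 50
        and account.shares_known_bad_infra
    )


def evaluate_rule(accounts: list[Account], known_bad_ids: set[str]) -> dict:
    """Compare the rule's hits against a labelled 'known bad' set to estimate
    precision (how many flags were truly bad) and recall (how much of the
    known-bad set the rule caught)."""
    flagged = {a.id for a in accounts if heuristic_flags(a)}
    true_positives = flagged & known_bad_ids
    precision = len(true_positives) / len(flagged) if flagged else 0.0
    recall = len(true_positives) / len(known_bad_ids) if known_bad_ids else 0.0
    return {"flagged": len(flagged), "precision": precision, "recall": recall}


if __name__ == "__main__":
    accounts = [
        Account("a1", 5, 120.0, True),    # matches the pattern
        Account("a2", 400, 3.0, False),   # established, quiet account
        Account("a3", 10, 80.0, False),   # noisy, but no infrastructure overlap
    ]
    known_bad = {"a1", "a3"}  # labels from a past (hypothetical) investigation
    print(evaluate_rule(accounts, known_bad))
```

The point is the shape of the loop rather than the specifics: turn analyst understanding into an explicit rule, then measure how well that rule actually finds known bad actors before leaning on it at scale.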

What did you enjoy most about your role?

As I mentioned, I left Meta in April and I am in between roles. But what I enjoyed most about my last role was helping develop the integrity professionals of the future, giving voice to users and problem types that were chronically underinvested in, and applying subject matter expertise to challenging threat areas. These aspects of the work were intellectually satisfying and mission-oriented.

What do you keep track of to understand whether your work is, well, working? How can you tell you're making an impact?

I won’t bore readers with acronym soup and the varied approaches we took over the years to OKRs [Objectives and Key Results] and KPIs [Key Performance Indicators].

One element that was key, and yet was not easy to pull off, was to create joint goals and success metrics with our partner teams from the start so that they could adopt some of the team's findings and recommendations. On top of that, there were a few other high-level ways to gauge success, usually derived from a north star vision:

Progress on the problem area: Are we moving the needle on preventing or mitigating harm to users? This could look like policy creation or change, updates to protocols to enforce on given violating actors or behaviours (or adding friction there), influencing strategy and roadmap coverage to address gaps we’ve identified (for example, adding operational resources to help with coverage during an election, or shifting certain efforts to be more prevention-focused).

It could also look like taking action (enforcing) or helping expand aspects of detection based on misses, helping inform partner teams on how different abuses could manifest when they’re in the design phase of a new feature, or collaborating with other tech companies to root out abuse more widely.

Progress on how we operated as a team or system of teams: We tried to use maturity frameworks to assess how well we were operating as a team and evaluate half-over-half progress in how we did the work (which is distinct from what we did). It was far from perfect, but it was one way to track forward momentum.

Feedback on team and individual contributor health: We can’t make progress on the problems we work on or as an effective team without regularly evaluating how people are doing and feeling about their work, careers, and sense of wellness.

What question are you grappling with most right now?

My focus over the last few years, and moving into the future, is on how to build an integrity discipline within product design, with safety and security considerations incorporated from the beginning rather than bolted on as an afterthought. This won’t happen overnight and will need intentionality, cultural change, organizational redesign, and continued commitment on the part of leaders to hold teams accountable. This kind of culture change has to come from the top and be backed by commitments that go beyond compliance checklists or feel-good efforts that lack teeth.

I’d like to see integrity and trust and safety professionals feel more empowered, and their work be integrated into business machinery and top-line measures of success. Too often, their work is treated as a cost centre or the people as external naysayers. One of the fastest routes to burnout in this field is having the mandate to find gaps and get ahead of risks, only to find that the mechanisms meant to address what has been uncovered are broken, dysfunctional or, worse, a Potemkin village.

I’m also committed to this being a sustainable field. There’s a lot of churn and burn, and this is a high-pressure industry. The internet isn’t going away anytime soon, nor are the harms it enables. I’d like to ensure people don’t burn out after a few years because we need them. Leaders need to prioritize this.

What do you wish the world understood about your work?

For the tech industry: Being proactive about security and safety matters, even when it’s tricky to measure.

For the tech industry critics: There are some really smart, creative, dedicated people in "big tech" working behind the scenes to solve problems, not just move metrics. Their voices aren’t often heard, but they’re there.

For integrity workers: It's ok to rest.

What was the last thing you read that resonated strongly with you?

It wasn't something I read, but something I experienced recently. I went to a women's college for my undergraduate education, and I recently attended my 20th reunion. The power of its community and legacy of envelope-pushing and activism was palpable. It reminded me of my love of advocacy and social justice work, the shape of which has changed over my career. Being together again reminded me of all the opportunities to work collectively with like-minded people to build something better.

How do you spend time away from work?

I spend time with my husband and very active 5-year-old son. We go to the beach, hike, and read together, and recently vacationed in Maine in a yurt! I'm also into birdwatching — I recently did falconry for the first time — and I'm trying to teach myself about gardening and linocut printing.

I also enjoy true crime, amassing books I’ll never have time to read, tasting the local oysters where I live, listening to podcasts, and I’m a late convert to watching Ted Lasso.

Question from Integrity Institute founder Sahar: If you could mind-control Mark Zuckerberg, or be the dictator of TikTok for one day, what changes would you make?

I’d sit down with actual users and experts in the field that are studying the effects the platforms have, and listen to what they had to say with curiosity and openness. I’d surround myself with people who aren’t afraid to challenge me and my ideas, who come from different backgrounds and parts of the world. Both of these suggestions really boil down to exhibiting more empathy and curiosity.

Before we go, what advice would you offer someone wanting to do your job?

Pick your battles: there’s a lot of work that could be done, and knowing where to invest your time and energy is an important lesson, one I didn’t learn until later in my career. Also, and it sounds trite, but integrity work is a marathon and not a sprint. Take care of yourself and your people. Be realistic about what you can achieve, especially if you are working at a big platform. Have fun and keep a sense of humour.

What question would you like the next person in the series to answer?

What's been your biggest failure working in this space, and what did you learn from it?


Want to share learnings from your work or research with 1200+ people working in online safety and content moderation?

Get in touch to put yourself forward for a Viewpoint or recommend someone that you'd like to hear directly from.