
Glenn Ellingson on mitigating bad behaviour and the limitations of enforce/allow

Covering: adding friction to increase user safety and measuring on- and off-platform outcomes
Glenn Ellingson, former engineering manager at Meta

'Getting To Know' is a mini-series about what it's really like to work in trust and safety, in collaboration with the Integrity Institute.


Of all the job types within trust and safety (and there are many), engineers probably get the most criticism. Often characterised as ignorant of or detached from the issues at hand, they are otherwise cast as unable to stop users from being harmed.

I remember one particular story from the Wall Street Journal's Facebook Files in 2021, which reported that Meta data engineers found it difficult to detect and remove harmful content. The implication was twofold: that they were helpless to do anything themselves, but also that they struggled to get senior colleagues to take notice of the problem.

The reality is that those who work in product roles —engineers, but also designers and product managers— are among the deepest thinkers about how to address the difficulties of detection and scale. It is largely a technical challenge, after all.

Back in 2020, I wrote about the growing "movement of product folks" in trust and safety (EiM #76) and, more than two years on, I wanted to speak to someone with direct experience of that.

Glenn Ellingson is a former engineering manager at Meta, who worked for three years on a host of complex challenges, including electoral and health misinformation. He left the company in the summer to take a sabbatical but continues to be a member of the Integrity Institute.

In this Q&A, he discusses:

  • Making difficult resourcing trade-offs, even at the world's biggest platforms
  • His role as an engineering leader at Meta and what that means
  • The importance of friction to user safety (with a nod to India's highways)
  • Why it's crucial to look after yourself and celebrate the wins

This interview has been a while in the making so I'm particularly grateful to Glenn for his patience and understanding.

Getting to Know is a series of Q&As with people with deep experience working in trust and safety, in collaboration with the Integrity Institute.

If you enjoy this Q&A, get in touch to let me know or consider becoming a member of Everything in Moderation for less than $2 a week to support the creation of other articles like this.

This interview has been lightly edited for clarity.


What's your name?

Glenn Ellingson

What is it you do?

As an engineer and engineering manager, I work on internet platforms with user-generated content, making those platforms safer and more helpful for the people using them. Most recently, I worked on Instagram's responses to the Covid-19 pandemic, elections protections, and misinformation.

Before that, I supported civic integrity teams at Facebook working on civic misinformation, voter suppression, civic harassment, and trying to understand and mitigate how Meta's platforms could be particularly dangerous to users with lower digital literacy such as people coming online for the first time through a smartphone and Facebook.

I have also worked at PayPal/eBay and StyleSeat, non-social-media platforms where bad actors —or even just actors or content of variable quality— presented their own challenges to other users.

How did you get into the industry?

Actually, I don't view myself as being in the "integrity industry"—I'm not sure it should be called an industry yet, although there are certainly some companies selling moderation/detection/security products. I view myself as someone dedicated to making our technological lives resilient to the actions of a few that destroy value for us all.

The most valuable technologies do not just spin away in some hidden vaults somewhere— they enable humans to connect in new ways to generate value. The value derives from human actions; technology just enables them. But humans are gonna human and we all come to these platforms with varying levels of knowledge, social positions and motives. Some will inadvertently hurt others; others may cause damage without much caring; and a few intentionally cause damage.

This damage can be wildly disproportionate to the benefit an actor accrues if they only care about one side of that ledger. In the physical world, an equivalent would be someone who causes $2,500 of damage to a car to steal a catalytic converter they can fence for $100. Online, this could be someone who consumes entire lifetimes of fellow humans to make a few bucks. Imagine a spammer sending 1m emails, 100,000 of which evade automated filtering, with each recipient then spending 10 seconds realizing it is spam and deleting it. Maybe the spammer nets $100 somehow... but the frictionless scale of the internet means that other people have collectively wasted 1m seconds —nearly 280 hours— dealing with the spam, which is more leisure time than most humans have in a month. Left unchecked, this small minority of actors can and has destroyed the value of entire platforms.
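
To make the asymmetry concrete, here is a back-of-the-envelope sketch in Python using the same illustrative figures (these are the hypothetical numbers from the example above, not measured data):

```python
# Back-of-the-envelope version of the cost asymmetry described above,
# using the same illustrative figures (not real data).
emails_sent = 1_000_000
emails_past_filters = 100_000     # ~10% evade automated filtering
seconds_wasted_per_email = 10     # time for a recipient to spot and delete it
spammer_revenue_usd = 100         # generous assumption for the spammer

total_seconds_wasted = emails_past_filters * seconds_wasted_per_email
total_hours_wasted = total_seconds_wasted / 3600

print(f"Spammer's gain:        ${spammer_revenue_usd}")
print(f"Recipients' time lost: {total_hours_wasted:.0f} hours")  # ~278 hours
```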

I saw these patterns on Usenet in the 90s, in my email inbox earlier this century, then in e-commerce and now on social media. Today platform owners spend a lot of energy on "from all the content being created, how can I bias toward showing a user the content that is the best for them?" It's equally important to work on "from all the content being created, how can we bias away from distributing content that will cause real harm?" Without weighing both sides of the ledger, platforms will carry and even amplify content that causes real harm, whether that harm accumulates over many small doses (e.g. disproportionately consumed attention) or arrives in big whacks (e.g. identity theft, crypto scams, child endangerment).

What are your main responsibilities?

All platform companies come across integrity challenges at some point, and for a variety of reasons—not least to preserve profits— they must address them. At a big company like Meta, this integrity investment has swelled to thousands of people. There are dedicated teams—with deep academic and research backgrounds—evolving typologies of harm, around which teams coalesce to analyse the harmful behaviours, determine appropriate responses (Ranking changes? In-app warnings? Content removal? Deplatforming users?) and implement these mitigations.

Resources are always constrained, so there are strong pressures to a) use automation, such as machine learning classifiers that identify problematic content; b) leverage and reuse best practices and patterns across different types of harmful content; and, unfortunately, c) pick the problems you are going to actively work on, as well as the languages, geographies and cultural contexts you can spend enough time with to moderate effectively.

It is heartbreakingly common to see integrity teams say "we cannot address X because Y is a higher priority", and even "we have to pull the protections we have had in place for Z because we don't even have time to properly maintain them, so they have become dangerously inaccurate or biased." When X, Y, or Z could be something like "dangerous health scams" or "sexual harassment of teens over direct messaging" or "ethnic violence in Ethiopia", that's tragic.

This is why I am a strong believer that while moderation-style (identify and enforce) integrity plays are a necessary function —in the same way that police must solve crimes and arrest criminals— they are not sufficient; real success will require approaches that scale with little to no effort and function smoothly across problem areas, cultures, and contexts.

What does a typical day look like?

As an engineering leader, my primary responsibility is the team's health— supporting the mental health, productivity, and growth of the humans doing the "real work." Engineering leaders are also responsible for a lot of recruiting; creating clarity of the team's purpose and goals; and the ongoing work of how teams fit and work together.

So my day is spent almost entirely sitting (often virtually) with other humans. I meet 1:1 or in small groups with team members to offer support; with other stakeholders from policy teams or other engineering teams to align on priorities and goals; or even with prospective team members in formal interviews or other exploratory discussions. At Meta, a lot of time also goes into crafting and sharing (on Meta's internal social media platform, Workplace) strategy, planning, and other documents so everyone can keep abreast of what's happening across the team and the company.

What do you enjoy most about your role?

The ability to work to help and protect billions of people, and the social fabric we all rely upon. The more connected humans get, and the more technology reduces the costs of publication and distribution of information, the more critical it is to understand and manage that process, for the health of the platforms (Instagram or Reddit or whatever) and, given the huge impact these global platforms have, the health of our entire society. Trust —in traditional media, in governments, in each other as fellow human beings— has been cratering over the last couple of decades.

I can't believe the human species has suddenly gotten worse in the 21st century but, like seeing acne on athletes in HDTV closeups, we're now seeing a lot of detail, not all of it lovely, and we need to evolve how we live together in the information age. There is no more critical task for the world than humanizing our new media.

And I must add: one of the greatest privileges of working in this space is that integrity attracts absolutely fantastic humans. It's not easy or sexy, but it is some of the most impactful and user-centred work in the world. As a people manager, it's amazing to be able to bring such great people together to work on gnarly, global problems.

What do you keep track of to understand whether your work is, well, working? How can you tell you're making an impact?

At Meta, we measured many outcomes, on-platform and off. It's a huge investment. The company has researchers and data scientists, many with academic backgrounds in communication, social media, political science, health, and other related areas, who define the framing of specific problem spaces and then operationalize these definitions into classifiers that can identify the problem (usually "violating content", e.g. "misinfo" or "harassment" or "spam") at scale. And yes, this elides a lot of detail and hard work by human raters, ML engineers, and others!

We could then measure the "prevalence" (frequency) of these problems and, with some additional work, structure our interventions as randomized controlled trials (RCTs) so we could validate the impact of each intervention (as well as our overall program) on the problem. Teams might set goals like "reduce misinformation prevalence by an additional 10% this quarter" and then plan out a dozen interventions, each with a hoped-for impact on prevalence that would be validated via RCT. The overall platform-wide measure would be watched to ensure the collective work was indeed building toward the goal (rather than, for example, different projects overlapping too much and cancelling each other out, or a project from another team somewhere else in the company inadvertently making the problem worse).
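
As a rough illustration of that measurement loop, here is a minimal, hypothetical Python sketch; the data, labels and numbers are invented, and real prevalence measurement involves careful sampling, human review and statistical rigour not shown here:

```python
# A minimal, hypothetical sketch of prevalence measurement and an RCT-style
# impact estimate. Data, labels, and numbers are invented for illustration.
def prevalence(view_labels):
    """Share of sampled content views labelled as violating (True/False)."""
    return sum(view_labels) / len(view_labels)

# Imagine classifiers labelled a sample of views in each experiment arm.
control_labels = [True] * 100 + [False] * 9_900    # 1.00% prevalence
treatment_labels = [True] * 90 + [False] * 9_910   # 0.90% prevalence

p_control = prevalence(control_labels)
p_treatment = prevalence(treatment_labels)
relative_reduction = (p_control - p_treatment) / p_control

print(f"Control prevalence:   {p_control:.2%}")
print(f"Treatment prevalence: {p_treatment:.2%}")
print(f"Relative reduction:   {relative_reduction:.0%}")  # 10%, the quarterly goal
```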

Of course, not all problems fit this model well. For example, voter interference is not a steady-state problem: it only happens in the middle of an election, so you can't say "let's reduce it by 10% this quarter." This demands other types of goals, such as "during the election season we want there to be no more than two pieces of voter suppression information in the 10,000 most-viewed pieces of content each day."
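
A threshold goal of that kind could be checked with something as simple as the following hypothetical sketch (the post structure and flag names are invented for illustration):

```python
# Hypothetical check for a threshold-style election goal like the one above:
# at most two pieces of voter-suppression content among a day's 10,000
# most-viewed posts. Post structure and flags are invented for illustration.
def meets_election_goal(top_posts, max_allowed=2):
    """top_posts: the day's most-viewed posts, each flagged (or not) as
    voter suppression by classifiers and/or human review."""
    violations = sum(1 for post in top_posts if post.get("voter_suppression"))
    return violations <= max_allowed

# A day with one flagged post in the top 10,000 passes the goal.
sample_day = [{"voter_suppression": False}] * 9_999 + [{"voter_suppression": True}]
print(meets_election_goal(sample_day))  # True
```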

Meta also continuously runs survey-based measures, red-teaming, root cause analyses of prominent "misses," and other exercises to cross-check integrity systems' performance. The company also provides some data to external parties, such as the ad library and various quarterly or annual reports.

What question are you grappling with most right now?

"Transparency" and "legitimacy" are thrown around a lot to justify what is seen as inaction or non-interference with the distribution of speech that has somehow become blessed as "natural." Why is ranking for engagement (as a proxy for short-term corporate profit) seen as legitimate, but ranking for other forms of user value not legitimate? How can we reframe this for decision-makers in both corporate and oversight roles?

As I've alluded to, the moderation model has massive challenges with scale and coverage. When moderation can't scale, platforms usually end up providing users with distribution that is unmoderated (for at least some types of harm, in some languages, etc). These same platforms would never distribute user content without ranking for engagement, in any language or for any content type. If we know there is unacceptable harm on the platform in language X (which is why we moderate it), why is it OK for the platform to also support a "smaller" language Y with no moderation at all? And if we cannot afford to moderate in this second language, why does the platform default to "full open", with the same distribution algorithms as the well-supported language that is moderated to protect users? Maybe the algorithms should at least be set to something a bit more conservative where moderation is limited.

When I visited India, I encountered the famously chaotic city streets, often without lanes or lights or people directing traffic. It certainly felt dangerous, but it also "mostly worked" because the sheer number of actors kept speeds limited and forced a level of emergent cooperative behaviour. Then I left the city and headed out on a rural highway and encountered periodic barriers placed across the highways, forcing traffic in both directions to snarl together and single-file through. I thought this was the most wasteful thing in the world, until a local explained to me: they can't afford to effectively police these long highways, and they found that without adding these snarls people accelerated to unsafe speeds and many people were killed in accidents. The United States had the governmental and social framework to support high-speed rural highways; India did not, so they intentionally added friction to save lives.

What do you wish the world understood about your work?

Binary violating/non-violating, enforce/allow content moderation will never generate high-quality content streams or communities. I've talked about some of the scaling challenges, but additionally: this just isn't how the real world works. All speech is not equally valuable and useful until a bright line is crossed and the speech is so bad it must be suppressed. Sure, yelling "fire!" in a crowded theatre is sanction-worthy. But maybe some people would also like a quiet movie experience; others might like some banter called out over their movie. A joke might be funny to a friend but hurtful (or even frightening) to a stranger. As more and more of our communities are mediated, we must develop new ways for those communities to define and practice norms, with resilience to harmful interference.

What was the last thing you read that resonated strongly with you?

So many... I'll probably single out Yishan Wong's thread from a while back about Elon and content moderation, which did a fabulous job—certainly better than I could—of introducing some of the challenges and contradictions around moderation of online spaces.

How do you spend time away from work?

Honestly? Playing with cars to blow off some steam :). I've been involved in a sport called autocross—basically, time trial racing in parking lots—for more than 20 years. I also have a soft spot for automotive extremes, which explains some questionable decisions, such as owning a whole pack of Miatas—one of them over 400 horsepower—or a two-door Buick convertible that's just shy of 19' long.

Question from fellow Integrity Institute member Bri Riggio at Discord: “If you could wave a magic wand and fix one issue in the integrity work space, what would it be?”

Easy to say, hard to achieve: convince the core product/growth team to measure themselves on value created (and lost) for the community of users, not short-term engagement. Even known-flawed measures like net promoter score (NPS) seem more likely to build lasting product value than counting likes & comments.

Before we go, what advice would you offer someone wanting to do your job?

Integrity work is incredibly emotionally challenging. It's a job that can never be "finished;" where everything you can't complete means real-world harm happening to others; and where it will feel like nobody will ever be satisfied with your performance, and media coverage will be extensive and vary from unfriendly to openly hostile. Visible, important, impossible; what a combo! But the most important work generally looks like this. So please: take care of yourself and those around you. Make sure you have emotional support in your life. Take the time you need. And always always always celebrate the wins and the value you have been able to protect.


Want to share learnings from your work or research with 1500+ people working in online safety and content moderation?

Get in touch to put yourself forward for a Viewpoint or recommend someone that you'd like to hear directly from.