
Sahar Massachi on co-creating a community of integrity professionals from scratch

Covering: the difference between integrity and ethics and what it was like building the Integrity Institute
Sahar Massachi, Integrity Institute 

'Viewpoints' is a space for people working in and adjacent to content moderation and online safety to share their knowledge and experience on a specific and timely topic.

Support articles like this by becoming a member of Everything in Moderation for less than $2 a week.

Few announcements over the last few years have made ripples in the trust and safety space quite like the launch of the Integrity Institute in October last year.

In an interview with Protocol, former Facebook staffers Sahar Massachi and Jeff Allen laid out how they planned to create a network of integrity professionals with the aim of creating "some kind of public consensus about the nitty-gritty scientific and philosophical questions that integrity teams have mostly tried to answer behind closed doors." Nice idea, I thought, not to mention a change from the usual doom-and-gloom about online safety.

In the nine months since it launched, I've followed its work (EiM #143), dipped into its resources and even taken part in a panel discussion with one of its advisors. I've been impressed with the conversations it has fostered in a short time and decided to circle back and see how the Institute was doing.

When I spoke to Sahar and the team about an interview, it quickly became apparent that many of its members had vast amounts of experience and knowledge that they were prepared to share.

EiM has always sought to showcase the work of talented people working in trust and safety. So, in addition to this conversation with Sahar, I've collaborated with the Integrity Institute team on a mini-series of Q&As with a handful of members about what it's like to be an integrity worker.

The first Q&As will be published over the coming weeks and shared via EiM's newsletter, the Integrity Institute's email list and on Twitter. If people find them interesting or useful, get in touch and we'll find a way to do more.

The interview has been lightly edited for clarity.

A meaty one to begin with: can you talk about what you think it means to have integrity in a digital world?

First off, to my mind, "having" integrity is very different from "building" it, or instantiating it.

When people hear integrity, they might think of the word ethics, in the sense of “acting with integrity”. That makes sense! It’s important, it’s not wrong. When we talk about integrity, while ethical behaviour is 100% part of it (see the Integrity Oath), the type of integrity we mean is more like "structural integrity". A system with structural integrity is able to resist "attacks" or attempts at "hacks", even those that weren't explicitly predicted by the makers of the system.

One way I put it is like this:

Cybersecurity is protecting the code from outsiders, or hackers; protecting the system from being breached.

Ethics is protecting users from the company.

Integrity is protecting users from each other.

In this framework, integrity is about protecting users from each other – protecting them from hate speech, ad fraud, hoaxes, child exploitation imagery, and many other types of harm. Ethics is also part of integrity, which is why I love the word. Ethics is about protecting people from attacks coming from the inside. To do integrity work well, you need ethics. Otherwise, you'd see things like CEOs or lobbyists or salespeople undermining integrity work to help a favoured client, appease a loud media figure, or placate a politician or party.

To sum it up, systems that have integrity – or people whose work it is to embed integrity in systems – think about how we set up ecosystems that are robust from attack. And the meaning of “attack” is broad: it can encompass different kinds of things.

To what extent do you think social media platforms have had integrity — that is, been honest and had morals — over the last decade?

Platforms can't have morals. They're just code. Companies can, maybe. People, definitely. I'd say the morals of the companies are embedded in the platforms. And the things that a platform boosts or demotes matter, perhaps, far more than how the company PAC [political action committee] sends its donations or the PR team spins things, or the posters the company might have on its walls.

So the question might be: does the system that the company created tend to have outcomes that are consistent with its stated ethics?

Again, what you're calling integrity, I'd call ethics or values.

Also, companies are more than just collections of people who happen to work on a platform. They are entities that exist in the world, and interact (as companies, not platforms) with institutions in the world: the press, governments, employees, etc.

If you were to say: do companies act with ethics? I'd say: there's a lot of information that the public (and I!) don't know they have. We don’t have the transparency that we’d need to answer that well.

Also, beyond the concerns that whistleblowers might theoretically uncover about “company X did thing Y at time Z”, we live in a world of tradeoffs. For example, a company might have to capitulate to, say, the Turkish government, otherwise their employees would be arrested, maybe tortured (depending on the country). We’re seeing this recently with the Indian hostage-taking laws.

Companies should have a duty to protect their employees' lives. So bad long-term thinking might lead to terrible short-term choices.

Overall, rather than hoping, or pontificating on a little evidence, we can have the transparency to actually know the answers to these questions. That'd be nice.

One of those companies is Facebook, where you worked for almost four years on integrity and election interference. How do you look back on that time now?

I met a lot of great people. Some friendships that I hope become lifelong.

The civic team in particular was an amazing place full of good people. A lot of people are still in the company doing great work. I miss them, and I learned a lot.

We should applaud them for staying there, putting in the energy and stress to try to make things better, and just generally thank them for their service. That’s how I see it.

You and co-founder Jeff Allen spent a year working on the idea for the Integrity Institute before its launch in October 2021. What questions were you grappling with at that time?

It turns out that starting a nonprofit is really hard! Maybe especially ours. “Integrity Institute” is not just the name of a legal entity that holds some staffers. This sort of organisation had to be powered by our network of members, by our community. That takes work to build trust. We had to get people on the same page about what we were building and why.

We also had to figure out what our goals were. Our mission. Our vision statement. How do we relate to our members? Tiers of membership. How do you balance being community-powered but also a think tank? What sort of people should be members? What is the interface between staff and members? What kinds of roles, experience, or ideas are we open to for membership?

We had to make a lot of decisions! Here’s an example: imagine someone who is a scholar of platforms. A good one. They do good work. Are they eligible to be members? After talking a lot, we decided no.

I’m really proud of the time we spent thinking through what sort of organisation we wanted to be.

And it wasn’t just Jeff and I. We brought our community along. We had about a dozen or so people (who became fellows and the Community Advisory Board) who helped us think it through, together.

We modelled the good behaviour of bringing our community along and co-creating with them. It was really fun! But hard. I’m proud that we did it.

What do you hope to achieve through the work that the Institute does? What's the end game?

We have three goals at the Institute. The first is that platforms build integrity well. Or, as I put it: companies do integrity better. The second is that people understand the field of integrity well. Or, as I put it: people understand integrity better. The last is that integrity professionals have more power (to do the right thing).

Within those goals, there are a lot of outcomes this could look like. I think every organisation has a thing that makes them special or a secret sauce. For us, it’s not what we do, it’s who we are: the independent organisation of integrity professionals, independent of platforms, figuring it out together.

That gives us flexibility in what we end up doing. We are member-powered. So a lot depends on what members want to do. Maybe we end up advising on policy, maybe on self-regulation. Maybe we end up writing textbooks for college classes, maybe it looks more like professional development in another way, like bootcamps for how to be an integrity worker. Maybe it's about reimagining how companies are structured so that integrity work isn't siloed away as a cost centre.

There are many different ways it could go! But some things are clear: those three goals, and some other baseline commitments. Our vision is that the social internet should help individuals, societies, and democracies thrive. And all three — individuals, societies, maybe especially democracies — are important lenses. It’s important to me that democracy is in there as a value that we hold.

The next baseline commitment is our mission: Advance the theory and practice of protecting the social internet, powered by our community of integrity professionals.

We also talk about how we do three big things: build the community of integrity professionals, advance the integrity profession, and share this knowledge with stakeholders: policymakers, academics, NGOs, and companies.

So, I don’t know! Maybe we’ll end up like the FDA [US Food and Drug Administration] and become nationalised. Maybe we’ll be like the ACM [Association for Computing Machinery] and become a more professional society. Maybe we’ll be like the Associated Press and define what the industry becomes.

As long as we follow our mission and vision, I’m happy.

What is one thing that has surprised you most since you launched?

We were just much more successful, much more quickly, than I thought we would be.

Look at the EU Code of Practice on Disinformation. It’s hard to explain succinctly, but we were part of that process! We were in conversations with stakeholders and people writing documents that guide behaviour by platforms. We were in meetings, looking at drafts, and giving our expert opinion on how to firm them up and achieve their goals. I didn't think we'd be at that stage for another three years.

Things are moving very fast compared to our conservative estimates. It’s pretty cool!

A criticism of the Institute could be that its fellows are mostly former Facebook employees. What would you say to that and how are you growing the pool?

Okay! Let’s start with context. When we started, we hadn't consulted with many fancy lawyers to make sure what we were doing was safe and legal. Frankly, that was before we understood what kind of organisation we would be. So it made sense that the early group of people we reached out to were people we trusted. Surprise: many of those became fellows!

Also, it's more uncommon than you might think to find people in this space who didn’t work at Facebook at some time. Facebook has, to their credit, hired a lot of integrity people over the years. Our underlying criterion is “have worked on these issues for a platform”. Our pool of eligible members is determined by who the companies will hire.

For example, we have fellows right now whom I know only from their work at company X (Twitter, for example). I think of this person as a Twitter person. But it turns out that they also used to work at Facebook back in the day! I don't know them from Facebook; that's not how I experience them. But if you work in the field long enough, there's a good chance you'll have been at Facebook at some point.

So that’s the context. That throat-clearing aside: we are an institution that's about the "physics" of social platforms, not the specific details of one platform. Being focused on one platform or company isn't what we want to be, and it's not who we are. So it makes sense to clearly communicate that to the world, and also to put systems in place to emphasise it internally.

To that end, we have people in the pipeline to be fellows who have never worked at Facebook, or at any other giant tech company. We have members who cover at least 25 platforms right now. I would need to clean up some records and ask some questions of some members to be sure, but I don’t want to bug them right now.

Our staff, for example, includes Cass Marketos, whose background is Kickstarter and Urban Dictionary. That’s really cool!

I think you’ll see the proof in the pudding of “we really are focused on the ecosystem of this space, and not just locked into one company, or one size of company”. More on that in the next few months, but it’s coming.

We could go on but I want to finish with this: where are we on the path to building an internet where we can all thrive? And what makes you hopeful about the future?

The response to us going public was incredible. People came out of the woodwork to join us as members. People who I did not know, did not know of, and sometimes were working at places that I didn’t realise existed, which was really cool.

As we onboarded new members, one thing we kept hearing was a really strong need or desire to find others like them. People felt lonely at their jobs. They wanted a professional community.

Just as cybersecurity needed IRC chats and wikis and mailing lists, so too are we helping to build that stratum of connection for integrity work. And that helps our goal of giving integrity professionals a greater voice and more power on the job, which will mean improvements for the whole world.

But also, we’ve been talking about connection and other things that might feel too touchy-feely. So let’s get specific about knowledge. There are so many small startups reaching out to us asking for help. They want to learn how to do things correctly as they grow. And I have to tell them that I’m sorry, but we don’t have the bandwidth to help them. Sometimes we haven’t even had the capacity to email them back and say so. That’s a function of how much interest there is in us, but also of how constrained our capacity is.

Perhaps the coolest but saddest thing is how much the world has been emailing us, pinging us, etc, and we can't respond to everyone. We even had to turn away Senate staffers who wanted us to take a look at bills they were working on. We’ve turned away startups, and we’ve turned away teams at existing large companies who want to talk to us about specific projects they are working on.

So the hope is clear: there's a strong desire across many sectors to do it better.

The last thing that gives me hope is the increase in sophistication of the public conversation, and even conversations we've had or seen with policymakers, NGOs, academics, etc, in the year and a half that we've been around.

People are levelling up. Yes, the conversation really is still pretty remedial compared to the conversations that you and I, or experts in the space have. But it has improved a lot!

And I truly think that many people: builders, regulators, funders, thinkers — many of them truly want to do the right thing. And that's good to see.

To summarise: I feel excited that people in the world are excited by us. And I’m excited to see that people are refining their positions, increasing their sophistication, and trying to get it right.

We as a society have tackled huge challenges in our parents' lifetimes. We can do it still. That’s what gives me hope.

What question would you like the next person in the series to answer?

If you could mind control Mark Zuckerberg, or be the dictator of TikTok for one day, what changes would you make?

Want to share learnings from your work or research with 1200+ people working in online safety and content moderation?

Get in touch to put yourself forward for a Viewpoint or recommend someone that you'd like to hear directly from.