Allison Nixon on detecting threats and 'high-harm' actors
'Getting To Know' is a mini-series about what it's really like to work in trust and safety, in collaboration with the Integrity Institute.
During the course of a normal week, as I find interesting articles to include in Friday's newsletter, I end up reading the occasional cybersecurity website.
My intricate network of Google Alerts and Twitter lists means that I sometimes end up on Dark Reading, occasionally Infosecurity Magazine, or more often a cyber-related article on the technology news site The Register. The articles I read tend to be about new scam ad networks, platform features that could be exploited, or criticism of EU or UK regulation. They are interesting to someone working in online safety, if not always directly relevant to their day-to-day work.
One question that crops up regularly in my reading is where trust and safety fits alongside cybersecurity. An article published by Security Boulevard in 2021 asked:
is the new term [trust and safety] just about putting a positive spin on the same old task, or is there something new about focusing on building a culture of trust, as opposed to chasing fraud? Is it just about fraud prevention?
The distinction feels somewhat arbitrary, but it made me wonder whether there was more that the trust and safety community could learn from security and cybersecurity professionals, who have long fulfilled the role of protecting users online.
The third interview in the Getting to Know series is with one such person: Allison Nixon, chief research officer at Unit 221B and Integrity Institute member. She expands upon:
- What it's like researching "high-harm" threat actors — and why it's not easy
- The enjoyment that comes from disrupting a "scheme"
- The difficulty of getting a paid security role
Getting to Know is a series of Q&As with people with deep experience working in trust and safety, in collaboration with the Integrity Institute. Previous Q&As include:
- Jen Weedon on anticipating platform threats and how to manage burnout
- Lauren Wagner on shipping trust and safety products and reimagining the social web
If you enjoy this Q&A, get in touch to let me know or consider becoming a member of Everything in Moderation for less than $2 a week to support the creation of other articles like this.
This interview has been lightly edited for clarity.
What's your name and what is it you do?
I am Chief Research Officer at Unit 221B. We are an investigations company and we research cybercrime on behalf of our clients.
How did you get into the industry?
I’ve been working in cybersecurity and threat research for a while. It's not usually called integrity work but there is a lot of overlap.
What are your main responsibilities?
I do investigations and threat research into high-harm threat actors. When I say "high harm", I basically mean threat actors that are so irritating that companies will pay us huge amounts of money to find out more about these individual people specifically.
There have been cases where one person has caused hundreds of thousands of dollars in damages or caused a threat to life. In this scenario, the hope is to have them arrested or make them stop their fraud schemes.
What does a typical day look like?
There is no typical day for me, but generally, it involves investigations into threat actors, triage, and if the bad actor is a bad enough repeat offender, referring them to LE [law enforcement].
What do you enjoy most about your role?
I love following these schemes from start to finish. When we start investigating a scheme, it's usually in full swing, causing harm either financially or to life and safety. By the end of it, the scheme has fallen apart, either because its participants are in jail, on the run or lying low, or because whatever method they used got patched, and that's immensely gratifying.
What do you keep track of to understand whether your work is, well, working? How can you tell you're making an impact?
For a client, we provide a bundle of proof that the action was successful. This includes tracking the cessation of attacks, arrests, convictions, and complaints about bans from high-harm communities. On our side, the main piece of feedback we get is whether the client pays us for more work.
What question are you grappling with most right now?
How to scale my work. I’m thinking a lot about how I can delegate tasks and train people and also set aside time for that.
What do you wish the world understood about your work?
That online child abuse is way more common than many people estimate, and it's not some made-up problem.
What was the last thing you read that resonated strongly with you?
I don't have the attention span for books... but the "Darknet Diaries" podcast episode "Dirty Coms" was something very close to my work and I listened to it several times to absorb everything and learn more about the harmful communities I study.
How do you spend time away from work?
Gardening, playing with the dog. Weird pandemic hobbies — growing gourmet mushrooms and microgreens. Also breeding tomatoes.
Question from fellow Integrity Institute member Lauren Wagner at Link Ventures: What motivated you to work in integrity/trust and safety?
Back in 2012, I was working in a security operations centre analysing security alerts and I wondered to myself, "Who thinks it's a good idea to make our clients visit these exploit kits? Who are the kind of people who do this? Do they talk to each other? There's a forum for everything, so there has got to be a forum for this."
And I actually found some criminal forums by Googling related terms. I was astonished that such websites were allowed to exist, and I've spent the past ten years looking at attacks from this angle.
Before we go, what advice would you offer someone wanting to do your job?
It's really difficult to get a job doing actual paid "security research", but if you want to start down this career path, I suggest starting somewhere like a security operations centre or a helpdesk. Such jobs will expose you to all the attacks happening right now. You will have access to really good data.
If you want to get into security research for the glory or the money, then I will say your motivations are all wrong. You should be motivated by the love of the data and finding crazy stuff. The money will come after you get good at it.
Finally, what question would you like the next person to answer?
How do you deal with the anxiety that people will paint your work as privacy-violating rather than something that results in a net gain in privacy? I worry about that sometimes.
Want to share learnings from your work or research with 1200+ people working in online safety and content moderation?
Get in touch to put yourself forward for a Viewpoint or recommend someone that you'd like to hear directly from.