12 min read

Shelby Grossman on creating an academic discipline around trust and safety

Covering: the ethical challenges of researching misinformation, Stanford's trust and safety teaching consortium, and encouraging private research to be made public
Shelby Grossman, Research Scholar at the Stanford Internet Observatory and co-editor of the Journal of Online Trust and Safety

'Viewpoints' is a space on EiM for people working in and adjacent to content moderation and online safety to share their knowledge and experience on a specific and timely topic.

Support articles like this by becoming a member of Everything in Moderation for less than $2 a week.


Until just a few years ago, if you were a researcher or academic specialising in trust and safety issues, you typically published your work in a cybersecurity journal or perhaps a computer science one. If you wanted to present your work, you might have to go to a political science conference or psychology symposium. Despite many years of collective thinking about internet safety, and an even longer history of publishing academic literature, there was no natural space for folks to discuss cutting-edge research.

Nowadays, there is the Journal of Online Trust and Safety and the Trust and Safety Research Conference, both run by the Stanford Internet Observatory for academics and researchers working on issues relating to online harms. If you're like me, you may have jealously followed last week's conference via the social media posts of its participants.

Although I couldn't make the event, I've featured the Journal's work in EiM countless times — I'm making my way through the new issue of the Journal as I type — and appreciated the thoughtful work of its co-editors. So it made sense to reach out to one of them, Shelby Grossman, to find out more about the Journal's work over the last two years.

As with the last EiM Viewpoint, this interview was kindly conducted by Angelina Hue, who recently completed her Masters degree in Media and Communications at The London School of Economics and Political Science, specialising in media policy and content regulation.

Angelina spoke to Shelby over the summer and, because of my tardiness getting this published, they speak about the conference in the future tense. Nonetheless, this is a great primer on Stanford's efforts in trust and safety education with a ton of useful links that are worth bookmarking and returning to later on. Thanks to both of them for making the time.

This interview has been lightly edited for clarity.


I first came across your work when I was looking through the Stanford Internet Observatory. Can you give us a brief introduction about your personal research and work in online safety?

I'm a political scientist and a researcher at the Internet Observatory, and I research ways in which the Internet can be abused to cause harm to people. So there are a number of big topics that we focus on. One topic that I focus on is self-harm and suicide content online. I have a paper with co-authors that essentially audited the top search engines — Google, Bing, and DuckDuckGo — to see how they perform when someone types in queries that we know are associated with suicide and self-harm.

I also work on the information environment; I have a paper evaluating how effective large language models will be at creating propaganda, and then I have some newer research projects. In general, my team has a bunch of research projects on child safety.

I also read that a region of interest in your research is sub-Saharan Africa. I'm curious what your thoughts are on the latest ethical controversy surrounding content moderation outsourcing in countries like Kenya, such as the recent case of Daniel Motaung, Sama and Meta's alleged union busting. Human content moderation labour seems like a grey area right now; how do you see, broadly, the future of moderation to ensure online safety?

I think the role of human moderators is just super critical. There are so many moderation decisions that just can't be automated because you need context or you need local knowledge. As for the conditions that human content moderators face everywhere... there are still a lot of issues, I think.

Casey Newton [founder and editor of Platformer] has a number of really interesting proposals. I don't want to mis-state how he says this, so I'll just summarise some policy proposals that are out there – they include limiting the amount of time that moderators have to see some of the most egregious content online, like violent extremist content and child sexual abuse material.

I think there are also some interesting ideas to create a pathway for professional development for moderators so that if they do this kind of work for a period of time, there's a ladder that they can climb professionally. There are lots of issues with the conditions that human content moderators face, both in Africa and the US and elsewhere. But I think their jobs are really important and can't, at least at the moment, be automated away.

Right, so would you say sub-Saharan Africa would continue to be a significant market for human moderators?

It's not really my area of expertise, sorry. It's really far from my area.

OK, no problem. Moving on to the Journal: I saw that you're a co-editor of the Journal of Online Trust and Safety, which is just coming up on its second year. Why was the Journal started in the first place at Stanford, and where is the support coming from?

Broadly speaking, the Internet Observatory is trying to create an academic discipline around trust and safety. And so there are a number of ways we're doing this, and one of the main ways is through the Journal of Online Trust and Safety, which we started for a number of reasons.

The journal exists to bring together people who are doing cutting-edge research on online harms. It’s not that that research hasn't been happening, but one of the issues is that it's often siloed within various disciplinary journals. So for example, there might be a really cool article about an automated approach for detecting harassment online that gets published in a computer science journal, but as a political scientist, I just don't read those journals, so I'm not going to even know that that research is out there. And so we're trying to create a journal that brings together the best work that's happening across various disciplines.

We're also doing a lot of work to make sure that this work is accessible to practitioners in this space. The journal is open access, so you don't have to pay anything to read the articles; they are just posted on our website. Another issue that we noticed in this space is that it was taking a really long time for important research to get published because the academic journal publishing process is really, really slow; it's not uncommon for an article to be under review for a year, and then for it to take at least another two years until the article gets published. By that point, in this space, a lot of these issues are out of date.

So we have a rapid review process to make sure that important research gets published quickly, and we accomplished this without sacrificing rigour in the peer review process by paying reviewers. We pay reviewers $300 if they submit the review within two weeks, which is a decent amount of money, at least in this area. That's kind of the motivation behind the journal.

To clarify, is the Journal a part of the Internet Observatory or is it an independent project from the Internet Observatory?

Yeah — the editors of the journal are Jeff Hancock, who's the faculty director of the Internet Observatory, Alex Stamos, who's the director, and myself.

Great. This is kind of just a broader question about Stanford and the Journal's position within Stanford – Stanford is located at the epicentre of the Silicon Valley tech hub. How do you think this has impacted, if at all, the type of research emerging on online harms in tech companies?

One of the things the journal is really trying to do is encourage research that's happening in industry to be made public. There are lots of people with PhDs working at Meta and Google and all sorts of online platforms who are doing really rigorous and important safety research within their companies, but oftentimes this research is never published. Being in Silicon Valley, we actively try to encourage researchers in industry to publish their work in the Journal. That way, platforms that might not have the resources to hire lots of PhDs can benefit from the safety research that's being done at some of the bigger platforms.

That's a really interesting point. What would you say is the most interesting piece published [in the Journal] to date, based on what you've read?

I have lots of favourites, but one of my favourite articles that we've published is by researchers at a nonprofit called Protect Children. The journal also publishes work by researchers in civil society who are working on online safety issues, and this is a paper that surveyed people who were searching for child sexual abuse material on the dark web. The researchers partnered with a dark web search engine, and whenever someone typed in certain keywords that are known to be associated with child abuse material, the search engine showed a little pop-up that said "would you be interested in taking an anonymous survey?".

People were asked questions about, for example, their online behaviour, but also whether they reach out to children directly. And one of the big questions in this space is whether looking at child sexual abuse material images or videos increases the likelihood that people will reach out to children directly to harm them. It's a very difficult question to answer, but what this paper showed is that people self-report that there's a positive correlation between looking at this content and then reaching out to children, which highlights the importance of getting rid of this content online and deterring people from looking at it. I thought this was a really important paper that had a really clever research design.

Thank you for sharing that. Having read through so many submissions and so much research, what would you say are the strongest ethical challenges that tech or media companies face in the US, particularly regarding online safety?

Interesting. I'm going to answer the question slightly differently and say what are some of the biggest ethical challenges for researching online safety.

I think one of the big issues relates to research on misinformation. To study the effect of seeing misinformation, a lot of people, myself included, do survey work where you show people false information and then ask them questions about it. It's kind of tricky because, by just showing people this false information, you might be increasing the likelihood that they go on to believe it when they wouldn't have believed it before.

So to give an example from my own work, I did a survey in Nigeria about COVID misinformation, and we asked people if they believe that Bill Gates has put microchips into the Covid-19 vaccine. You can't ask someone that without stating the claim itself, so it's possible that someone who didn't think that before, and hadn't even heard of it, is all of a sudden saying "oh, maybe Bill Gates has put microchips in vaccines". We have ways to mitigate these harms – these potential harms. For example, we only ask about widely held pieces of misinformation; in Nigeria, most people have probably already heard this rumour. But there are those kinds of issues with misinformation research.

BECOME A MEMBER
Viewpoints are about sharing the wisdom of the smartest people working in online safety and content moderation so that you can stay ahead of the curve.

They will always be free to read thanks to the generous support of EiM members, who pay less than $2 a week to ensure insightful Q&As like this one and the weekly newsletter are accessible for everyone.

Join today as a monthly or yearly member and you'll also get regular analysis from me about how content moderation is changing the world — BW

Yeah, that's a really good point. To pivot into other initiatives that are happening at Stanford coming out of the Internet Observatory, can you talk about the gaps in research that motivated you to start, beyond the Journal, a [trust and safety] conference and consortium?

We're doing the second annual Trust and Safety Research Conference at Stanford in September, registration just opened and here’s the draft agenda. [Ed: the conference has now taken place]

The purpose of the conference, which we did for the first time last year, is to bring together researchers across professions who are studying online safety, so we want to bring together researchers in academia, industry, civil society, and government. Most of our panels include a speaker from industry and a speaker from academia, for example; we want to facilitate research partnerships, and to expose people to the cutting-edge work that's being done that they might not be aware of.

Last year, about 450 people came and we were sold out. We're really excited about the conference this year; there are a lot of really cool panels on generative AI, as well as a main panel on mental health and generative AI.

We also really care about teaching because we are at a university; we teach two classes on trust and safety. I teach a trust and safety class in the political science department called the Politics of Internet Abuse, and the Internet Observatory director Alex Stamos teaches a class in the computer science department called Trust and Safety Engineering. Both classes talk about various types of online harms, like child safety, misinformation and disinformation, harassment, hate speech, violent extremism, and suicide and self-harm. And for the final project, students work together across the classes, so the political science students work with the computer science students to build out a bot on Discord to moderate a certain type of abuse.

Students just presented their final projects last week and there was a project on trying to automatically detect bullying content. The political science students help with the policy dimension of this, and the computer science students program the bot and also work on the policy dimension. The final presentation is a poster session where we invite professionals from trust and safety teams in Silicon Valley to be guest judges, so it's a lot of fun. Broadly, we're trying to make sure that the next generation of people who are going to be working in tech or government or civil society are thinking about what we currently know about online harms and ways to mitigate them.
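[Ed: for readers curious what these class projects involve, here's a minimal sketch of that kind of Discord moderation bot, written with the discord.py library. The hard-coded keyword check is only a stand-in for whatever policy and detection logic the students actually build; the flagged terms and bot token below are placeholders.]

```python
# A hypothetical, minimal Discord moderation bot using the discord.py library.
# The FLAGGED_TERMS set stands in for a real bullying policy and classifier.
import discord

FLAGGED_TERMS = {"loser", "nobody likes you"}  # illustrative placeholders only

intents = discord.Intents.default()
intents.message_content = True  # required to read message text (discord.py 2.x)
client = discord.Client(intents=intents)

@client.event
async def on_message(message: discord.Message):
    # Ignore the bot's own messages to avoid feedback loops
    if message.author == client.user:
        return
    text = message.content.lower()
    if any(term in text for term in FLAGGED_TERMS):
        # Remove the message and explain why, per the (hypothetical) policy
        await message.delete()
        await message.channel.send(
            f"{message.author.mention}, your message was removed under the bullying policy."
        )

client.run("YOUR_BOT_TOKEN")  # placeholder token
```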

Along those same lines, we have started a trust and safety teaching consortium, which includes about 30 people from industry, academia and civil society, and we recently made all sorts of really cool teaching materials open access. Anyone can go to the [GitHub] link; it has a reading list for all sorts of online harms, and then it has 12 slide decks for different abuse types that you can just download and then edit and customise for your class.

So we're trying to make it easy for people to teach trust and safety. The challenge is that teaching a new class for the first time is just a lot of work, which can discourage you from creating one and make it so that you keep teaching the same stuff over and over again. Our hope is that this reduces the friction of teaching a class on online safety.

I graduated from Stanford in 2020, and at that time there was not this much going on in trust and safety. So it's really great to hear that there are so many initiatives being taken on education, because it all really does stem from the students who end up working in these professions and becoming industry professionals, so I really appreciate that you guys are focusing on the educational perspective.

All this kind of brings me to a broader question about how the Journal and the work of the Internet Observatory fall into Stanford's commitments to trust and safety. I can imagine there are so many developments happening in the trust and safety field, even upcoming lawsuits in the US – are you keeping materials updated on contemporary developments?

We definitely have to think about how to keep it updated. We have to be very careful to make sure that instructors look at the content before they teach it. But we also have a lot of ideas for next steps for the consortium so we might try to make some pre-recorded professional videos of people lecturing on these topics that then anyone could use. And when Alex and I teach these trust and safety classes, we always update our slides.

It must be nice to have a GitHub then that you can keep updating?

Yeah. You can check out my favourite deck in the GitHub; it's on government regulation and it's a really, really good deck created by Karen Maxim, who used to work at Zoom. It gives a good primer on what regulations are out there.

If our subscribers and readers want to get involved with the Journal, what is your formal submission process?

There are three ways that people can get involved. First they can apply to join – or attend – our conference, and again, registration is open right now. [Ed: the conference has now taken place]

The next deadline for the Journal of Online Trust and Safety is August 1st, and it publishes both peer-reviewed research articles and commentaries. So if someone wants to publish a commentary for the conference proceedings of the journal, that deadline is August 1st, and after that, the next deadline is October 6th – I'll put the link in right there [The next deadline for peer-reviewed research, for the special issue on Authoritarian Regimes and Online Safety, is November 1 2023]. There's generally about a five-month turnaround time, so from the day that you submit to the day the article is published is about five months.

The other way that people can get involved is by joining the teaching consortium if they want to help create teaching content. The way to get involved in that is to e-mail the journal's e-mail address and just express interest in the teaching consortium, and we can add you to the e-mail list for that.

OK, great. I feel like I've unearthed a trove of materials that I think everyone is going to be very, very eager to dig into. Thank you for sharing all this, thank you for your time today, Shelby.

Thank you.


Want to share learnings from your work or research with thousands of people working in online safety and content moderation?

Get in touch to put yourself forward for a Viewpoint or recommend someone that you'd like to hear directly from.