
Content policy is basically astrology? (part two)

Large language models (LLMs) promise to enforce policy rules more consistently, with no room for exceptions. But history, and my experience at Grindr, shows that's rarely how the world works.

I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that trust and safety professionals need to know about to do their job. This week, I'm thinking about:

  • Where I think moderation using AI won't work.
  • How to network (for introverts).

Get in touch if you'd like your questions answered or just want to share your feedback. Here we go! — Alice

Messy actions, high consequences

Why this matters: Large language models (LLMs) promise to enforce policy rules more consistently, with no room for exceptions. But history shows us that's rarely how the world works.

There are two kinds of moderation that platforms do:

  1. Mass, at-scale individual content moderation (looking at single lines of text, or single images) and...
  2. More investigative/artisanal/holistic moderation that takes into account more than just individual pieces of content (for example, including all profile history and messages and offline behaviour)

When talking about at-scale content moderation, I agree with Dave Willner, whose talk I summarised in last week's T&S Insider and who believes that LLMs should replace human moderators.

This is the kind of work I started out doing as a moderator 14 years ago. It feels robotic and is exhausting to do all day, every day. No one enjoys it. It's already being automated in many cases but, as Willner points out, the machine learning models are trained on human decisions, and they're not perfect. Replacing that work with an LLM seems smart.

However, I can still see cases where humans are needed. Willner left this out of his talk, perhaps to be more thought-provoking and avoid muddying his point, but he did address it briefly in the Q&A session. He conceded that there will be middle-ground issues that are hard to call, which human moderators will still need to review, perhaps assisted by an LLM.
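That concession, routing clear-cut cases to automation and the messy middle to humans, is essentially a confidence-threshold triage. Here's a minimal sketch; the thresholds and labels are my own illustration, not anything Willner or any platform has specified:

```python
def triage(violation_score: float, low: float = 0.2, high: float = 0.9) -> str:
    """Route a moderation case based on a model's estimated
    probability that the content violates policy.

    Clear-cut cases are handled automatically; the ambiguous middle
    band is queued for a human, with the model's assessment attached
    as context rather than acting as the final decision.
    """
    if violation_score >= high:
        return "auto_remove"
    if violation_score <= low:
        return "auto_approve"
    return "human_review"
```

In practice, the thresholds would be tuned per policy area, since the cost of a wrong call varies enormously between, say, spam and self-harm content.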

Humans for the win?

One example of this that I can see is moderation appeals, which is on my mind because I just wrote this PartnerHero guide to it. Ultimately, platforms make the decision to ban a user either because: a) they're doing something blatantly illegal and/or terrible (easy decision to make), or b) because they've done so many lower-level bad things that it's likely they'll continue.

Many platforms have a strike system for this reason: be mean to people X number of times, and you're gone forever, even if any one particular instance of being mean wasn't egregious. In the latter case, the point of banning the user is to keep the community positive (thus maintaining engagement, growth, etc), and nominally to save some money on content moderation.
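The strike mechanic itself is simple to sketch. Everything below (the threshold, the field names) is hypothetical; real platforms typically weight strikes by severity and decay them over time:

```python
from dataclasses import dataclass

STRIKE_LIMIT = 3  # hypothetical threshold, not any real platform's policy


@dataclass
class UserRecord:
    user_id: str
    strikes: int = 0
    banned: bool = False


def record_violation(user: UserRecord, egregious: bool = False) -> None:
    """Add a strike. Egregious violations ban immediately; otherwise
    the user is banned once accumulated strikes reach the limit, even
    if no single strike was serious on its own."""
    user.strikes += 1
    if egregious or user.strikes >= STRIKE_LIMIT:
        user.banned = True
```

The point of the sketch is the asymmetry it encodes: the system remembers cumulative behaviour, but it knows nothing about context, intent, or what the user was going through, which is exactly what an appeal process has to supply.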

Let's say the person writes in and says they're truly sorry, says they totally understand the rules, and promises they'll never do it again (or maybe claims their account was hacked and it wasn't them, or they were going through a mental health crisis but it's under control now, or any number of other excuses).

Now we have two questions:

  1. Do we believe them?
  2. If we do, is it safe to give them another chance?

Those who argue for controlled, consistent moderation may say that strict rules and consequences keep things fair. They'd say that it doesn't matter if one person is able to articulate how sorry they are, because they clearly broke the rules.

However, I believe it's important to consider the consequences of deplatforming someone when making these decisions. At Grindr, where I worked until recently, the consequence of banning a user is that they are potentially cut off from the queer community, especially if they don't live in a big city. Sure, there are other, smaller LGBTQ+ dating apps, but Grindr has far larger scale and reach. For many people around the world, Grindr is truly the only viable option to meet other queer people, and a ban would have significant real-life consequences.

When I was there, I saw some users appeal bans by saying that they'd kill themselves if we didn't let them back on the platform, because it was so important to them. Sometimes these pleas seemed like emotional blackmail (perhaps another reason to keep them banned), but sometimes they seemed like cries for help from someone who genuinely wanted to keep connecting with their community and just needed another chance.

Is it objectively fairer to enforce the rules completely consistently and allow no exceptions? Yes, absolutely. But it's not the most equitable or humane approach. Marginalised people are far more likely to experience severe online harassment, and I've seen plenty of cases where, after sustained harassment from many different people, someone suddenly snaps and lashes out in an attempt to make the abuse stop.

The story behind the content

In these cases, power dynamics come into play, as well as the full history of someone's account, their own identity, their intent and the intent of those who harassed them. Reviewing an individual message or piece of content may indeed reveal a clear violation of policy, but the story behind that content shows someone who was under an enormous amount of pressure and acted out of character.

If the consequence of keeping someone banned is high, then, to me at least, it makes sense to have a robust appeal system that takes intent into account and gives people a genuine chance to get back on the platform (again, not counting completely egregious violations). This is how I set up appeals at Grindr. It wasn't cheap, and it wasn't easy, but it was the right thing to do. These messy, high-consequence cases are also common on platforms that are someone's sole source of income, or where users meet offline and in-person behaviour is considered as part of a ban decision.

Could LLMs handle this kind of case? Maybe one day. But, right now, that feels like a human decision to me.

You ask, I answer

Send me your questions — or things you need help to think through — and I'll answer them in an upcoming edition of T&S Insider, only with Everything in Moderation*


A T&S Insider reader writes...

Dear Alice,
I am an introvert by nature, and even though I have worked in the Trust & Safety space for around 12 years, I had not ventured beyond the company I was working with. Now that I am out in the market, I am realising how critical it is to build those connections beyond my immediate colleagues. What would be your advice on how to go about it? Are there any networking events or groups that you recommend?

Thank you so much for your question! The good news is that networking gets easier with practice, and that there are plenty of opportunities out there, even for introverts. The idea of "networking" can feel awkward if you think about it in purely transactional terms, but really it's about being friendly, curious, and open to connections.

In-person events

Conferences are some of the best places to network professionally. Personally, although I'm not afraid to speak in front of a crowd, I have always felt really awkward at parties and social gatherings where I don't know anyone. (Shoutout to the multiple people who came up to me to say hi at TrustCon last year when I was nervously milling around unsure what to do!).

  • I've learned that I feel better if I am given a job to do (or have some structure to the interactions), so I look for volunteer opportunities at events when possible.
  • I try to arrange to go with or meet a couple of people I know, so that I always have a "buddy" to fall back on.
  • I come to events with a few questions scripted in advance. They don't have to be complicated! For a conference, I might ask things like: What has your favourite talk been? or What have you learned so far?

Two upcoming conferences I'll be at are Marketplace Risk and TrustCon, both in San Francisco. And if you live in a city with a sizable tech industry, keep an eye out for happy hours and talks. I regularly see T&S related events scheduled in SF, NYC, Austin, and London. LinkedIn is a great way to find out about these, or sign up for the mailing lists of some of the groups and organisations I mention below.

Online networking

If in-person events aren't your thing, the good news is that there are plenty of organisations that offer online networking for T&S professionals. The key to online networking is to provide value without expecting anything in return. Offer up kind comments, encouragement, posts about what you're reading or thinking about, and genuine appreciation for others. People will notice your authenticity, and you'll start to see the same people around and get to know them.

LinkedIn is probably the best networking option for introverts! If you're feeling brave, you can also directly reach out to people you admire or want to get to know. A kind, personalised message on LinkedIn often gets a response! I've been really surprised at who I've been able to chat with. If you're reaching out to someone, be mindful of their time though. The best messages start with a kind word of appreciation, and have one or two short and targeted questions (similar to the ones you sent me!). Don't expect a busy executive to answer lots of vague questions about career advice.

The Integrity Institute is a non-profit that is free to join for people working in T&S and integrity. You have to submit a membership application, but once accepted, there's a vibrant Slack group, online events, and a community of folks who genuinely want to help each other learn and grow professionally. I've made some genuine friendships through the Integrity Institute.

TSCollective (run by ActiveFence) hosts the occasional virtual wine tasting (they send you wine, a sommelier talks about it, and then there's time for networking), as well as webinars and in-person events. They have a members portal/forum, but I personally don't really participate in groups outside of Slack, so I can't speak to how active it is.

The Trust & Safety Professional Association is the group behind TrustCon, and they make a big effort to create networking opportunities for their members. The Slack group is a little slow but still very helpful, and their coffee chat mentorship program is excellent: a great way to meet people in a low-pressure environment.

All Tech is Human isn't Trust & Safety specific, but they did recently hire Matt Soeth (former head of community for TSCollective) as their head of T&S, so I expect them to ramp up T&S events this year. The ATIH Slack group has thousands of people and is really active, and their events in NYC are incredibly popular.

I hope this helps! Thanks so much for sending the question.

Also worth reading

Trust & Safety Hackathon: Safety by Design (T&S Hackathon)
Why? Australia's eSafety Commissioner is teaming up with the T&S Hackathon for a remote-friendly Safety by Design hackathon the week of April 22nd. Registration is open now!

Governable Spaces: Democratic Design for Online Life (Nathan Schneider)
Why? The internet was supposed to be democratic, but this book argues that online communities are feudalist fiefdoms. The link above provides access to the full book (which I confess I have not read in full). For a shorter excerpt, read New_Public's post.

The Octopus is Back: the imperial history of an AI meme (Tech Policy Press)
Why? Octopus imagery has historically symbolised imperialist world domination, and now we have Shoggoth jokes about AI. I would have loved it if the article had dug deeper into Lovecraft's complicated history, but it was fascinating nonetheless.