7 min read

A guide to protecting LGBTQ+ users

Anti-LGBTQ+ hate is on the rise worldwide, and hate speech is increasing across social media platforms. We can and should do more to protect the LGBTQ+ community online.

I'm Alice Hunsberger. Trust & Safety Insider is my weekly rundown on the topics, industry trends and workplace strategies that trust and safety professionals need to know about to do their job.

After being very philosophical last week, I'm thinking about more practical matters in today's edition, including:

  • Specific actions that T&S teams can take to protect the LGBTQ+ community (Happy Pride Month!)
  • Tips for individual contributors and people who are a "department of one"

Get in touch if you'd like your questions answered or just want to share your feedback. Here we go! — Alice

How T&S teams can protect the LGBTQ+ community

Why this matters: Anti-LGBTQ+ hate is on the rise worldwide, and hate speech is increasing across social media platforms. We can and should do more to protect the LGBTQ+ community online.

I wish it were an easier time to be part of the LGBTQ+ community. Unfortunately, anti-LGBTQ+ hate crime in the US rose a staggering 19% between 2021 and 2022 (2023 data isn't available yet), and we're seeing a worldwide trend of LGBTQ+ rights being removed and challenged.

While platforms can't be expected to solve all of society's problems, we must acknowledge the role that social media plays in exposing people to hateful ideas — including anti-LGBTQ+ speech — which then can sometimes lead to real-world harms.

Unfortunately, T&S practitioners are not always given the resources we need to do our jobs, and it can be easy to overlook some simple practices to protect the LGBTQ+ community when there is so much other work to do.

To help, I've put together a short guide that helps you think through some of the design/product, policy and operational ways that you can make a difference to the LGBTQ+ community this Pride season. As always, please reach out to me if you want to talk about any of this.

Design and Product

  • Consider the most vulnerable and marginalized users first. (If you want to learn more, there's an upcoming panel at TrustCon on designing from the margins for dating apps.)
  • Allow identity and pronoun self-identification, but make these fields optional and easily edited.
  • Create solid feedback loops so you know when something isn't working. Make sure there's a way to listen to users, and to incorporate their feedback into the product. Do segmented user research for the LGBTQ+ community.
  • Remember that privacy is safety for LGBTQ+ users. Well-thought-through settings protect people who don't feel safe being out. Consider features like encryption, disappearing messages, PINs, and opting out of targeted advertising.

Policy

  • Implement LGBTQ+ specific policies against targeted deadnaming and misgendering, conversion therapy advertising, and hate speech. GLAAD's Social Media Safety Index makes these recommendations, and documents how each of the major social media platforms is doing.
  • Make sure to have internal policy documentation showing examples of reclaimed language to help moderators avoid false positive bans.
  • Where possible, create gender-inclusive nudity policies or at least be aware of where nudity policies are harmful for trans and non-binary users. Be specific about what nudity rules non-binary users are expected to follow.
  • Clarify sex work policy carefully. SESTA-FOSTA has put platforms in a difficult situation - they must not allow sex work in order to ensure that there is no trafficking on their platform. However, taking a very hard stance against sex work (and sex work-adjacent) content disproportionately affects the LGBTQ+ community. It helps to be very specific about what is and is not allowed, so that sex workers can navigate platform policies successfully.
  • Have policies that allow trans and non-binary users to use their chosen name if a "real name" is required on your platform.
  • Know that being LGBTQ+ is criminalised in 60+ countries, so blanket policies against "illegal activity" could include simply being gay. Think through the ethics and repercussions of this when writing policy.

Operations and automation

  • Check whether content by people belonging to marginalised communities is falling into the "acceptable error rate". Are they more likely to be banned, but then appeal and get unbanned? Are they more likely to be reported but to have the report be dismissed? Where are the areas where your system is failing? With data, you can sometimes successfully ask for more resources.
  • Check moderation automations for false positive rates for the LGBTQ+ community. Are ML models catching instances of reclaimed speech, or mislabeling trans users' photos as disallowed nudity? (As far as I know, all of the off-the-shelf ML models for automated image moderation only have binary male/female labelling.)
  • Be aware of biometric bias against people who are trans or non-binary. Test age estimation models for accuracy with trans users. Ensure that ID verification can be done by people who no longer use their legal name.
  • Limit automated banning of LGBTQ+ users based on user-submitted reports and flags. Consider a separate moderation queue for flags by and against LGBTQ+ users, especially trans users. It happens all too often that innocent trans users get disproportionately flagged as "fake" or having violated a platform's policy by bigots.
  • Require anti-bias and LGBTQ+ awareness training for all moderators/frontline support. This is especially important if you are using moderators from areas that are traditionally religious and conservative. These moderators will have grown up with cultural assumptions about the LGBTQ+ community, and it's critical to ensure that they are not applying their own bias to decision making.
  • Where possible, review profiles in addition to content. Reviewing a full profile will give additional context to content, allowing moderators to make more accurate decisions. Sometimes seemingly innocent posts can be hateful when put into a wider context, or seemingly hateful posts can be reclaimed language used within an in-group.
  • Be as transparent as possible about decision making. Unfortunately, people in the LGBTQ+ community are often discriminated against, and this can sometimes lead people to assume that every action taken against them is due to their identity. Transparency about site violations can help users understand what they did wrong and why, so they don't jump to conclusions.
  • Have a robust appeals flow, and allow users to submit additional context and information about what they think happened. Track appeal decisions, and look at what caused false positive bans. Use this data to refine your moderation practices.
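
To make the data angle above concrete, here's a minimal sketch of how you might compare appeal-overturn rates across user segments to spot disparate moderation outcomes. This is illustrative only: the log fields (`segment`, `appealed`, `overturned`), the segment labels and the 1.5x disparity threshold are all hypothetical, and a real pipeline would need privacy-respecting segment definitions and statistical significance checks before drawing conclusions.

```python
from collections import defaultdict

def overturn_rates(actions):
    """Share of enforcement actions overturned on appeal, per user segment.

    `actions` is a list of dicts with hypothetical fields:
      segment    - user segment label, e.g. "lgbtq" or "baseline"
      appealed   - whether the user appealed the action
      overturned - whether the appeal succeeded (a likely false positive)
    """
    totals = defaultdict(lambda: {"actions": 0, "overturned": 0})
    for a in actions:
        t = totals[a["segment"]]
        t["actions"] += 1
        if a["appealed"] and a["overturned"]:
            t["overturned"] += 1
    return {seg: t["overturned"] / t["actions"] for seg, t in totals.items()}

def flag_disparity(rates, baseline="baseline", threshold=1.5):
    """Return segments whose overturn rate exceeds the baseline by `threshold`x."""
    base = rates.get(baseline, 0) or 1e-9  # avoid division by zero
    return [s for s, r in rates.items() if s != baseline and r / base > threshold]
```

If LGBTQ+ users' bans are overturned on appeal at, say, three times the baseline rate, that's exactly the kind of evidence that can back up a request for more resources.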

Further listening

If you want to hear more about these ideas in action, check out these podcasts that I've taken part in over the last few years:

You ask, I answer

Send me your questions — or things you need help to think through — and I'll answer them in an upcoming edition of T&S Insider, only with Everything in Moderation*

Get in touch

Tips for scaling your impact

One common theme in my career over the last decade and a half is that I've always started out as a department of one, and then built out my teams from there.

At OkCupid, I was the first support hire and built the department from the ground up, eventually taking on leadership for T&S as well. At Grindr, I came in as a senior leader with inherited outsourced teams, and built up the internal department focused on Customer Experience and Trust & Safety. And now, at PartnerHero, I'm again a team of one, this time working cross-functionally with all aspects of the business.

That's pretty unusual, now that I think about it, but I'm sure there are plenty of people reading this who are individual contributors. For those who are in similar positions, here are some tips and resources from my time scaling up from one to many:

  1. Embark on a listening tour when you first start - Talk to anyone and everyone about what they are working on, their challenges and recent wins, and what they think you should prioritise. Always ask them who you should talk to next.
  2. Be super meticulous about project tracking and transparency - I created this spreadsheet to track what I'm working on, and why. (Feel free to make a copy and use it yourself).
  3. Go on a speaking tour once you've set your priorities - Explain to everyone at the company what you heard from them, and how you're incorporating this feedback into your project plans.
  4. Give yourself clarity about what you need to do by creating RACI charts for any cross-functional work - There's the added benefit that you also keep collaborators accountable in the process.
  5. Track where you spend your time, and how your time is most effectively used - When you start out as a department of one, you're inevitably going to have to spend some of your time on work that is "below your level". I find this super valuable in the first few months of a job, as it can help me learn first-hand what's going on and why. But eventually, I find myself getting weighed down with too much to do and not enough time to do it. When that happens, I make sure I have documentation of how my time is spent, how it could be better spent, and what I want to hand off.
  6. Be vocal about your wins - Track all of your successes (I use a spreadsheet or a week in review template but you can also use this well-worn method), and make sure that others hear about it! It can feel weird to brag, but remember that this success came from everyone working together. When you acknowledge the hard work that others have put in to help make your priorities successful, it can earn goodwill as well as make people more likely to want to work with you in the future. And as you prove your impact, you also have more justification to grow your team.

Also, in job hunt news, there's a new T&S job board on the way.

Also worth reading

NIST reports first results from age estimation software evaluation (NIST)
Why? Now that regulators are calling for age assurance, one potential solution that is more privacy-protecting than others is age estimation software. Understanding how accurate it is matters more than ever, but there hasn't been much independent research on its effectiveness. NIST shows that the software has generally improved, but that there is still an error margin of just over three years, and that it's less accurate for women and people of colour.

AI moderation will cause more harm than good (GamesIndustry.biz)
Why? An opinion piece on the magical thinking about AI moderation and the importance of investing in moderation teams.

Quantifying the impact of misinformation and vaccine-skeptical content on Facebook (Science)
Why? Researchers found that "vaccine-skeptical" content, which didn't include outright lies or misinformation and therefore wasn't fact-checked on Facebook, led to overall vaccine hesitancy more than flagged misinformation did.

The DSA at 100 days (Tech Policy Press)
Why? Ben shared this series of posts in his Week in Review but I wanted to mention it again. I particularly liked this piece on observing manual vs automated moderation practices at major platforms, which pulled data to see how many moderators there are per platform (LinkedIn only has 146 moderators!).

Outsourcing with Integrity (PartnerHero)
Why? Four months into my new job, I wrote a guide about how I think outsourcing companies should think about T&S and moderation work. It also announces that we're working on a public set of standards for moderation operations. Transparency is just as important for vendor companies as it is for platforms.