6 min read

šŸ“Œ How to fight back against sexist trolls

The week in content moderation - edition #78

Hello everyone — this is Everything in Moderation, your weekly dispatch for and about folks interested in online content moderation.

This week, I take a look at BBC Sport’s recently announced efforts to curb abuse on its social media channels and am pleased to have a Q&A with one of the journalists who made it happen. It's a must-read for anyone involved in managing online communities or social media accounts.

As I mentioned here a while back, I’m taking a short break from sending the newsletter to catch up with friends and family. EiM will be back in your inboxes in September.

Stay safe and thanks for reading - BW


🚫 ā€˜There are plenty of grey areas to contend with’

For publishers and news outlets, having a social media presence has become a necessary evil. Yes, platforms help to distribute stories and attract readers, but they also expose you to the unfiltered opinions of followers and random users.

In my time working at two UK newspapers, the vitriol on social media was one of the hardest parts of the job. Stories about race or gender posted to Twitter and Facebook would bring out the worst in followers, however neutrally the article was framed, and staff writers were often personally abused on the back of a headline.

As a team, we often talked about being more active and vocal — replying to comments, challenging people’s views, that kind of thing — but, with a small team stretched thin, it was practically impossible.

That’s why I was interested in BBC Sport’s recent announcement that it will start to play a more active role in the management of social media comments relating to its stories.

It comes on the back of a survey of elite British sportswomen which found that a third had suffered abuse on social media, double the figure from the last such survey in 2015. Racism, misogyny and death threats were sadly common.

The results were a wake-up call for the BBC Sport team, who have now pledged to make their accounts — which have over 33 million followers across numerous platforms — ā€˜kind and respectful places’. Certainly an admirable goal.

Last week, a tweet was pinned to the top of the broadcaster’s 8-million-strong Twitter account explaining more about the new policy (although it didn’t take long to find the type of comments that the new policy presumably hopes to stamp out).

I wanted to find out what the new policy meant in practice and reached out to Caroline Chapman, a producer at BBC Sport, to understand a bit more about why the BBC felt the need to intervene and what systems were now in place to mitigate hate on social media.

Here is what she told me via email this week.

Q Your announcement makes clear that BBC Sport intends to take responsibility for discussion on its platforms. What made you realise there was a problem in the first place and who raised it internally?

I’ve worked as a producer on the social media team for a couple of years now and while there has always been a certain amount of negativity directed towards certain subjects (mainly women’s sport), we were seeing it more and more across all our platforms and hateful comments were also appearing frequently on any post to do with race, LGBTQ+ and equality issues. There had been a few occasions where a couple of blue tick accounts on Twitter had rightly called us out for seemingly not taking action on these comments. When the BBC Sport website surveyed over 500 elite British sportswomen, 30% said they had been trolled online. I didn’t feel like we could report on this stat and not do something to try and help the situation, so I approached BBC Sport’s editor with a plan for how we could practically tackle the issue.

Q How big is the BBC Sport team responsible for dealing with user feedback both on and off-site? And how will that change with this announcement?

Given the size of our following, we regularly have over 15k comments on our Twitter, Facebook and Instagram pages every day. The BBC’s moderation services department have access to our Facebook and Instagram accounts, and will largely hide/delete/block anything which overtly breaks our guidelines. But for technical reasons they can’t moderate Twitter for us, and it’s on this platform that we find the most negativity. Since we introduced our new stance, it has been the job of the daily producer to perform regular moderation checks on Twitter, as well as keeping across our new inbox where users can flag comments themselves. The producers also keep an eye on certain stories on the other platforms - the stories we know are likely to be a target for trolls.

A message pinned to @BBCSport

Q You've committed to making BBC Sport's accounts 'kind and respectful places'. How in practice do you plan to do that?

We are taking very small steps for the first few weeks and not making any wild promises about eradicating hate speech from our accounts entirely... because we’re not sure we can! In practice, I have drawn up a very rudimentary traffic light system of green, amber and red actions so the producers can look at the comments, see which bracket they fall into and take action accordingly. This includes replying to certain comments or reminding our audience of our stance, as well as having the power to block users who overtly break the rules. By hiding hateful comments and engaging with the most positive and astute replies, we hope that the so-called trolls will stop, or at least think before they type.
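For readers who like to think in systems, the traffic-light idea maps onto a very simple triage routine. Below is a minimal sketch in Python of how such a scheme could look; the three brackets and their actions are lifted from Caroline's description, but the code, the names and the toy classifier are my own assumptions, not BBC Sport's actual process or tooling.

```python
# Illustrative sketch of a green/amber/red triage for social media comments.
# Everything here is hypothetical: the names, the actions and the classify()
# stub are an illustration, not BBC Sport's actual process or tooling.

from dataclasses import dataclass
from enum import Enum

BLOCKLIST = {"example-slur"}  # placeholder for terms that overtly break the rules


class Severity(Enum):
    GREEN = "green"   # positive or constructive
    AMBER = "amber"   # borderline, needs a closer look
    RED = "red"       # overtly breaks the guidelines


@dataclass
class Comment:
    author: str
    text: str


def classify(comment: Comment) -> Severity:
    """Crude stand-in for the judgement a producer applies by hand."""
    words = comment.text.lower().split()
    if any(word in BLOCKLIST for word in words):
        return Severity.RED
    if comment.text.isupper():  # all-caps shouting: a weak signal worth a second look
        return Severity.AMBER
    return Severity.GREEN


def action_for(comment: Comment) -> str:
    """Map each bracket to a moderation action agreed in advance."""
    actions = {
        Severity.GREEN: "engage: reply to or amplify the comment",
        Severity.AMBER: "remind: restate the account's stance in a reply",
        Severity.RED: "enforce: hide the comment and consider blocking the user",
    }
    return actions[classify(comment)]


print(action_for(Comment("fan123", "What a goal that was!")))  # -> engage: ...
```

In practice, of course, the classification step is a producer's judgement call rather than a keyword match; the value of the scheme is that the action for each bracket is agreed before anyone wades into the replies.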

Q As many EiM readers will know, getting buy-in to invest in moderation/community management can be difficult. How did you do it and what tips would you recommend?

When I took the initial idea to various parts of the BBC to be signed off, the reaction was positive towards its intentions but I’d often hear ā€œthis is very ambitiousā€ or a simple ā€œgood luckā€! But I never saw it as ambitious. Finding our voice and saying enough is enough was simply the right thing to do. I was extremely lucky to have unwavering support from the BBC’s marketing and creative departments who helped fund the video assets and graphics on launch day. And as I said before, what we’re doing isn’t radical on a practical level, we are simply upping our game on a day-to-day basis and will build from there.

Q Some might say preventing hate on social media is a job for the platforms, not something the BBC should have to do. What would you say to that?

I absolutely agree that the platforms should be doing more and should make this issue a priority. But I also have first-hand experience of how difficult it can be to police, and there are plenty of grey areas to contend with, so I have some sympathy. My big hope is that platforms have taken notice of what we’ve done with our project and realise things need to change - and that big accounts won’t be staying silent on the issue.

Q Finally, what reaction have you seen since the announcement?

Anecdotally, it’s been really positive - both internally and externally. But the hard work starts now! If any of your readers have any feedback, I’d love to hear from you via our moderation email: socialmoderation.sport@bbc.co.uk

Thanks to Caroline for taking the time to answer my questions. You can read previous EiM Q&As here, here and here. Who else would you like me to do a Q&A with? Let me know.

šŸ“£ Not forgetting...

[Policy] Another story about Facebook’s public policy officials interfering in how conservative politicians are treated on its platform. This time, it’s in India (where, as I’ve written about, there is huge disagreement about the future of social media regulation).

Facebook’s Hate-Speech Rules Collide With Indian Politics - WSJ

A company executive in the vital market opposed a move to ban a controversial politician who staffers concluded had violated hate-speech rules. Some current and former employees allege the ruling party got favorable treatment.

As in the US after Zuckerberg's handwringing over Trump's tweets, Facebook employees in India have written a letter imploring senior leadership to enforce its community guidelines more consistently.

If you're keen to read more, Jason Kint, CEO of Digital Content Next, has written a good thread on this, while The Quint has a smart op-ed from research fellow Faiza Rahman that is worth a read too.

[Platforms] I mentioned in last week's newsletter that we were starting to see the effect of COVID-19 on content moderation. Issie Lapowsky at Protocol has a good readout of exactly how.

How COVID-19 helped — and hurt — Facebook’s fight against bad content - Protocol

The amount of child sexual abuse material Instagram caught and removed fell dramatically, while hate speech removals on Facebook and Instagram grew.

[Policy] Holocaust denial is still prevalent on Facebook, according to new research from a counter-extremist organisation, despite a recent ban on anti-Semitic conspiracy theories. Sigh.

Facebook algorithm found to 'actively promote' Holocaust denial

Similar content is also readily accessible across Twitter, YouTube and Reddit, says UK-based counter-extremist group


Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.