📌 How platforms tried to contain Russia, largest ever CSAM survey and Kenyan mods update
Hello and welcome to Everything in Moderation, your Friday round-up of what's new in the world of online safety and content moderation and, crucially, what it all means. It's written by me, Ben Whitelaw.
Today marks a big milestone in the young life of EiM — it's the 150th edition of the newsletter 🎉 Things have come a long way since the first dispatch back in 2018, in which I boldly asserted that "moderation is mainstream" and vowed not to make the newsletter just about Facebook.
I stand by both of those points but could never have expected online safety to receive the level of attention and importance that it has since then. Thanks alike to longstanding readers of EiM and to those just joining the party.
To coincide with EiM's sesquicentennial, you can now show your support by becoming a paying member.
Becoming an EiM member today demonstrates your appreciation of my work over the past three and a half years covering this complex and crazy niche, as well as the time I invest week in, week out to bring the best stories to your inbox. And, in time, being an EiM member will also come with benefits that I'll share more about very soon. Read on for a special EiM founding member offer.
Finally, I want to welcome new subscribers from Tech Against Terrorism, University College London, the St Gallen Endowment for Prosperity through Trade, Leeds City Council and elsewhere. It's great to have you as part of EiM. Here's how you can get in touch.
That's the lengthy anniversary celebrations over with. Here is this week's must-reads — BW
📜 Policies - emerging speech regulation and legislation
Has there been as unified and as swift a response to a single event by platforms as there has been to the Ukraine invasion by Russia? Over the last seven days, there has been a concerted effort to contain misinformation, with particular emphasis on limiting the reach of state-backed channels Russia Today (RT) and sister network Sputnik.
Here are some of the measures enacted this week:
- Meta, Microsoft, TikTok, Google and Apple banned the outlets within the space of 48 hours (Platformer).
- Both outlets also had their accounts limited by Twitter in EU member states following the announcement of European Union sanctions (Politico).
- Instagram blocked RT accounts in 27 European countries for the same reason. Interestingly, the photo-sharing app only started labelling accounts as "state-controlled media" 18 months ago (Engadget).
- RT France had its Telegram channel, which had over 50,000 users, deactivated. It's not clear to me if there are other RT Telegram accounts still out there.
- r/Russia and r/RussiaPolitics were quarantined by Reddit after being flooded by what the company called "a high volume of information not supported by credible sources." A moderator of both communities was also removed for "acting in bad faith" (Mashable).
Roskomnadzor — the Russian federal agency for supervising media — had already been slowing down traffic to Facebook (part of Meta) following the restriction of four Russian state media accounts on 24 February. It has since criticised TikTok for showing content to children with "a pronounced anti-Russian character" and yesterday demanded that Google "stop distributing false political information" via YouTube adverts but stopped short of any retaliatory measures. Expect more indignant press releases over the coming weeks.
Away from the conflict, viewing child sexual abuse material can increase the risk of contacting children online, according to the largest survey of CSAM viewers to date. Researchers at the Finnish human rights group Protect Children placed a 30+ question survey on the darknet and received 15,000 responses. 42% of respondents said they sought contact with children after viewing CSAM, and 58% said they worried about committing abuse in person. The findings were published in the latest Journal of Online Trust and Safety from the Stanford Internet Observatory. I hope to be able to bring you more from some of the papers in the coming weeks.
The topic of child protection online has "attracted new political attention" in the wake of Frances Haugen's testimony last year but continues to be a battlefield of ideas, according to this report in Italian outlet Il Post. It touches on the Kids Online Safety Act, recently presented to the US Senate, and the Kids Internet Design and Safety Act, also in the US, as well as ongoing plans in the UK, India and even Russia. A good read that you'll need to translate unless your Italian is better than mine.
💡 Products - the features and functionality shaping speech
Alerts to inform Facebook users when their posts have been taken down by automated systems are being tested "in certain locations". The development, reported by Protocol this week, was spotted in the latest report of the independent-but-Facebook-funded™ Oversight Board.
Governance tokens touted as a means of managing community moderators and even deciding who gets to speak on a platform are "a thrilling utopian idea" but could lead to "an explicitly classed internet", according to a Wired report. A close look at NFT platform SuperRare, which uses tokens to decide who is able to create a mini-gallery on the platform, found them in the hands of people with "experience putting together major exhibitions and sales, at times with institutional art world collaborators". Doesn't sound very decentralised, if you ask me. My read of the week.
Everything in Moderation is your guide to understanding how content moderation is changing the world.
Between the weekly digest (📌), regular perspectives (🪟) and occasional explorations (🔭), I try to help people like you working in online safety and content moderation stay ahead of threats and risks by keeping you up-to-date about what is happening in the space.
Becoming a member helps me connect you to the ideas and people you need in your work making the web a safer, better place for everyone.
To acknowledge the leap of faith you'd be taking in supporting EiM at this early stage, I've created a special 10% lifetime discount for founding members who come on board early. Become a founding member today.
💬 Platforms - efforts to enforce company guidelines
Two platforms recently released significant new updates to their guidelines related to misinformation, in a move that we can presume is linked to the Ukraine invasion:
- Twitch announced yesterday that it would ban "harmful misinformation superspreaders". A host of QAnon streamers were instantly removed but, from their posts elsewhere, seemed less bothered than I expected them to be.
- Discord last week introduced provisions for false or misleading information "that is likely to cause physical or societal harm" as what it hoped would be "an effective countermeasure against dangerous medical-related falsehoods".
Both made me recall Professor Lilian Edwards' warning that "it’s important that controversial debates on scientific matters not be closed down without careful consideration" (EiM 🔭).
Facebook moderators contracted via Sama will receive a salary increase of 30-50% following a recent TIME investigation about working conditions and low pay. Sama's HR director said that salary changes were due to happen anyway but regular readers will know that bad press is often a driver of changes to moderation practices. This looks like no exception.
👥 People - folks changing the future of moderation
“This is a crisis. It’s a crisis for kids, a crisis for parents, a crisis for lawmakers. And it’s a crisis for society.”
Baroness Beeban Kidron is very clear on the challenge we face to protect children online. She set up 5Rights Foundation, a charity that develops policy on digital issues relating to children, back in 2013 and has campaigned for kids to be treated differently to adults online ever since.
Kidron was instrumental in the passing of the UK's Age Appropriate Design Code last year, legislation that has led to YouTube disabling auto-play for children and TikTok preventing notifications after 9pm for kids under 15. A profile in last week's Sunday Times gives a good sense of the seriousness with which she takes the issue.
As someone who doesn't have a huge amount of faith in the UK politicians tasked with kicking the tyres on the Online Safety Bill, it is reassuring to have Baroness Kidron on the committee, inputting as she does.
🐦 Tweets of note
- "This decision by Twitter on TheTweetofGod is fully aligned with international freedom of expression standards" - Stanford intermediary liability expert Joan Barata notes how Russia hasn't yet got the better of Twitter's takedown process.
- "95% of content moderation has nothing to do with law. It has to do with social norms, org behavior, markets, etc etc. It's okay to say this." - DataSociety's Robyn Caplan on the real forces that impact takedown decisions.
- "but why are there 9.8 million pieces of terorrism (sic) content on Facebook and why did YOUR AD tell me about it?" - Jamal Jordan, Stanford lecturer and civic media fellow, raises a valid question about Meta's new ad campaign
🦺 Job of the week
This is a new section of EiM designed to help companies find the best trust and safety professionals and enable folks like you to find impactful and fulfilling jobs making the internet a safer, better place. If you have a role you want to be shared with EiM subscribers, get in touch.
Twitch is hiring a researcher to work in its Community Health product design teams, which build its safety and moderation features.
I reached out for more information about the salary and received the following clarification:
The base salary for this role in our San Francisco office is $167,400 to $226,500. We also offer competitive signing bonuses as well as equity in the form of Amazon RSUs (restricted stock units) as part of our overall offer package. Twitch also offers competitive benefits including breakfast, lunch and dinner in our San Francisco office, wellness benefits, unlimited PTO, and more.
Twitch doesn't have application deadlines, so you'd be advised to get your application in as soon as possible.