6 min read

The algorithm that scores 'racy' photos, Kenya case update and three strikes for TikTok

The week in content moderation - edition #191
OnlyFans CEO Amrapali Gan on stage during day one of Web Summit 2022 in Lisbon Portugal
Amrapali Gan on stage during day one of Web Summit 2022 by Sam Barnes/Web Summit via Sportsfile and licensed CC BY 2.0. Colour applied 

Hello and welcome to Everything in Moderation, your speech governance and content moderation week-in-review. It's written by me, Ben Whitelaw and supported by members like you.

EiM's mission, when I've tried to boil it down into something pithy, is to support and empower the people building a better internet. And it's for this reason that I'm excited about the new interview series I'm working on with the stellar Tim Bernard about how good (and bad) policy is created. We're starting to reach out to policy folks to ask them to participate but we also want to hear from you. Read on for how to do so.

New subscribers from Meedan, Common Cause, Counterhate, Snap, ActiveFence, Logically and other pockets of the web, don't be shy. You may be new around here but make sure you get involved as well. To the new EiM members, thanks for your support.

Here's everything in moderation from the last seven days — BW

Want to reach hundreds of trust and safety experts?

From March, you'll be able to sponsor this slot in a future edition of Everything in Moderation. In doing so, you'll get your message in front of 1500+ trust and safety experts working on the thorniest online safety problems at platforms, governments, academic institutions and technology companies around the world.

To find out more about the packages on offer, fill in the following short form...


New and emerging internet policy and online speech regulation

In the UK, two well-known think tanks caused a stir by calling the Online Safety Bill "not fit for purpose" and withdrawing support on the grounds that it "risks making the online world less safe for many". Writing for Politics Home, staff working for Demos and FairVote, who have given evidence to Parliament and produced work I've linked to here, noted that "good digital regulation is urgently needed" but the Bill "doesn’t get to grips with the nature and extent of online harms". The wise Heather Burns wrote that the bill had become "all narrative over substance" and was resembling "the end of a marriage". I await the divorce papers.

If you read last week's edition about India's Grievance Appellate Committee (EiM #190) and left concerned, well, there's even more reason to be worried. A proposed amendment to India's IT Rules (EiM #163) will introduce a new "fake" or "false" news justification for takedowns and "essentially mak[e] the Union Government the arbiter of permissible online speech". That's according to Prateek Waghre, Policy Director at the Internet Freedom Foundation, and Associate Policy Counsel Tejasi Panjiar in this excellent Tech Policy Press read. Consultation on the revised amendment closes on 20th February, so expect movement after that point.

Finally in this section, US President Joe Biden again used his State of the Union address to call for bipartisan support to pass legislation that holds social media companies "accountable for the experiment they are running on our children for profit". It was watched by whistleblower Frances Haugen (EiM #131), who was a guest of First Lady Jill Biden.

While there seems to be increasing alignment in Congress on the need for regulation, voters' attitudes towards moderation are another matter, according to a new Morning Consult poll. Almost 3 in 5 Republicans would rather platforms loosen their moderation or leave it alone entirely, and a similar number think censorship is a "major threat".


Features, functionality and technology shaping online speech

An investigation by The Guardian has shown how biased algorithms affect the reach of women's photos on social media platforms. Researchers found that Microsoft's Azure tool gave a higher "racy" score to women's photos than to similar images containing men, often leading to reduced distribution and shadowbanning. The most eye-opening part is a GIF of Gianluca Mauro, founder of AI Academy and co-author of the investigation, demonstrating the problems with Azure by putting on and taking off a bra. My read of the week.

The pitfalls of fully automated content moderation are also laid out in this Computer Weekly piece, which talks to academics and technologists about the likelihood that humans could ever be removed from the process entirely. My favourite quote — and not just because I enjoy the car industry metaphor (EiM #19) — comes from Full Fact's Glen Tarman:

“We repeatedly crash cars against walls to test that they are safe, but internet companies are not subject to the third-party independent open scrutiny or testing needed.”

In platform feature news, TikTok has announced an Account status page that lets users see what violations they've received as part of an effort to "better act against repeat offenders". Users will be able to see how many strikes they've received and also be able to appeal them, similar to the tool Instagram launched in December. The screenshot on the announcement blog post includes the phrase "Your account is in good standing", which sounds like a line from a Blackadder sketch.

Big news! Everything in Moderation is starting a new series of content policy case study interviews, conducted by the excellent Tim Bernard.

If you’re a professional who has led the process of creating or revising content policies and would like to share how one particular example went from impetus to impact, we want to hear from you.

Whether it was your proudest achievement, a cautionary tale, or just something you do every day, chances are that it will be of interest to the broader content moderation community.

Get in touch by filling in this short form and Tim will reach out to set up a discussion. Interviews can be conducted by Zoom or asynchronously and will be published from March onwards — BW


Social networks and the application of content guidelines  

The big story of the week came from Kenya, where a labour court ruled that Meta could be sued for alleged poor working conditions and union busting. Judge Jacob Gakeri said it "would be inopportune for the country to strike out the two respondents from the matter", the other respondent being Sama, the so-called ethical outsourcing company. It will be the first time the platform has been brought before a court outside the US, according to Amnesty International.

I've highlighted the work of Daniel Motaung and his lawyer Mercy Mutemi (EiM #153) here before, and it's right that their case will be heard in Nairobi. Wired has further background on the case.

Remember the TikTok transparency centre I wrote about back in early 2020? (EiM #55) Well, a host of technology journalists finally visited it to play with the software used by moderators, find out how its recommendation system works and hear from executives about Project Texas, the $1.5bn project to move US user data to US shores. The Verge's Alex Heath called it "a lot of smoke and mirrors designed to give the impression that it really cares" while Casey Newton over at Platformer was equally sceptical.


Those impacting the future of online safety and moderation

When CEOs are asked about content moderation, what's interesting is often not what they end up saying but what they can't answer.

The latest to step into the breach without a proper briefing is Amrapali Gan, OnlyFans' chief executive since December 2021, whose interview in The Times suggests she cares about safety but doesn't know a lot about how it works at the creator subscription platform.

The former chief marketing officer could not say how many moderators OnlyFans works with (it's over 1,000, a PR representative emails to say afterwards) and initially claims that "ultimately everything on the site is reviewed by a human". By the interviewer's maths, the platform is over 2,000 moderators short of even attempting that.

Gan also dodges questions on the Online Safety Bill, including one on what a penalty might look like for a big-tech boss found guilty. Probably for the best.

Tweets of note

Handpicked posts that caught my eye this week

  • "The first of my outputs on 'Radical infrastructure', thinking through material analyses of Internet infrastructure to imagine people-centered alternatives" - interesting-sounding paper from Dr Britt Paris and her co-authors.
  • "journalists usually play the role of moderators at tech policy conferences." - David Sullivan of the Digital Trust and Safety Partnership notes a stellar panel at a Colorado event this week.
  • "When people say 'the discourse really changed after x happened' they usually mean 'I first noticed this after x happened'" - Daphne Keller helps explain an all-too-common feeling.

Job of the week

Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1500+ EiM subscribers. This week's job of the week is from an EiM organisational member.

Ofcom is looking for a Market Intelligence Analyst to work within its Research and Intelligence team and inform its policy work.

The associate-level role involves working across R&I projects and will build on previous work to understand what types of online services UK consumers are using, including what services children are active on.

Understanding, assessing and communicating the opportunities and limitations of analysing data about a range of online platforms is key. As such, the successful candidate should be comfortable handling APIs, web scraping or third-party commercial datasets as well as engaging with stakeholders across the industry.

The role comes with a flex allowance of £1,500, which can be paid into your salary, added to your pension or used for extra benefits such as holidays. The deadline isn't far away — Monday 27th February — so don't mess about.