The reality of being an integrity worker, UK regulation stutters and content filters come to TikTok

Hello and welcome to Everything in Moderation, your carefully curated digest of content moderation news and analysis, delivered every Friday. It's written by me, Ben Whitelaw.
A warm welcome to the many new subscribers who found EiM via LinkedIn this week, including folks from GLAAD, Bytedance, Meedan, Berkeley University, the University of Waterloo, OkCupid, Stack Overflow, Twitter, INETCO, the Online Safety Exchange, TaskUs, Spotify, Microsoft, Kabuni, Yale, Flickr and many others. It's great to have you here — send me a sentence about what you do and what your biggest challenge is.
For the first-time recipients here, a quick need-to-know: every week, I read and critique the latest news, analysis and research about content moderation to give you a broader, more global perspective on online speech and internet safety. I also write my own analysis and do Q&As with smart folks working in the space, supported by EiM members.
I'm also excited to announce a brand new Q&A, published this week as part of a new mini-series about what it's like being an integrity worker. Read on for more — BW
Policies
New and emerging internet policy and online speech regulation
Some significant regulation news out of the UK this week: the Online Safety Bill is set to be delayed. The bill was due to be passed in parliament next week before Westminster goes on summer recess but, according to Politico, it is now expected to be dropped, ironically, in favour of a government vote of confidence in itself. It's notable because the new Conservative leader and Prime Minister may have different ideas for the bill and at least one candidate has said it will be scrapped if she wins.
The change of schedule provides some time to take stock of the much-criticised bill and I recommend this thread from Professor Paul Bernal on the free speech implications, this report from Matthew Lesh and Victoria Hewson of the Institute of Economic Affairs on how it threatens innovation and this blog post from my go-to UK policy person Heather Burns on, well, what a complete and utter nightmare the bill is.
Matters relating to the much-lauded Digital Services Act have been relatively quiet since it was agreed upon in April (EiM #157) but that hasn't stopped Human Rights Watch noting that it "falls short in some important ways". Senior researcher and advocate Deborah Brown welcomes the landmark bill but writes that there are "problematic loopholes, such as allowing “trade secrets” to be used to justify not providing researchers with access to data."
In Indonesia, a coalition of human rights organisations has written to the government to urge it to repeal a content moderation law, passed in 2020, that they say is "inconsistent with internationally recognised human rights". The letter, signed by AccessNow, Electronic Frontier Foundation and others, came after ministers insisted that over 4,500 digital platforms comply with new ID registration measures immediately or face fines (sound familiar?). News reports suggest TikTok and Linktree are the only major foreign platforms to have done so to date.
If you're looking to go deeper into the Twitter vs Indian government case that I covered last week (EiM #166) check out:
- The Indian Express' view that "government need to be more transparent in decision making".
- This podcast discussion from Tech Policy Press, featuring three Indian internet experts.
- This Washington Post editorial board piece on how "this battle is not only about Twitter, and not only about India".
Products
Features, functionality and startups shaping online speech
Content filters and maturity ratings are coming to TikTok in the coming weeks, the company announced on Wednesday. 'Content Levels' will see content classified by moderators and assigned a maturity score, which will then be used to screen what appears in the feeds of 13-to-17-year-old users.
A hashtag filtering system will also allow users to screen out content they don't want to see; TechCrunch gives the example of a vegan who doesn't want #meat videos hitting their feed, but — with all due respect — that is frankly the least of our worries right now.
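To make the mechanics concrete, here's a minimal sketch of how an age-gated maturity score and a user-chosen hashtag blocklist could combine in a feed filter. Everything in it (the names, the 0-100 scale and the threshold) is an assumption for illustration, not TikTok's actual implementation.

```python
# Illustrative sketch only: names, scale and threshold are assumptions,
# not TikTok's real 'Content Levels' system.
from dataclasses import dataclass, field


@dataclass
class Video:
    video_id: str
    maturity_score: int                       # assumed 0-100, assigned by moderators
    hashtags: set[str] = field(default_factory=set)


@dataclass
class Viewer:
    age: int
    blocked_hashtags: set[str] = field(default_factory=set)


# Assumed cut-off: viewers aged 13-17 only see low-maturity content.
TEEN_MAX_MATURITY = 40


def visible(video: Video, viewer: Viewer) -> bool:
    """Return True if the video should appear in the viewer's feed."""
    if viewer.age < 18 and video.maturity_score > TEEN_MAX_MATURITY:
        return False  # maturity gate for under-18 accounts
    if video.hashtags & viewer.blocked_hashtags:
        return False  # user-chosen hashtag filter (e.g. #meat)
    return True


if __name__ == "__main__":
    feed = [
        Video("a1", maturity_score=20, hashtags={"cooking", "meat"}),
        Video("b2", maturity_score=75, hashtags={"horror"}),
        Video("c3", maturity_score=10, hashtags={"vegan"}),
    ]
    teen_vegan = Viewer(age=15, blocked_hashtags={"meat"})
    print([v.video_id for v in feed if visible(v, teen_vegan)])  # ['c3']
```

In practice the score would come from moderator review and the gate from the account's registered age, but the shape of the check is the same: one platform-set threshold, one user-set preference.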
Platforms
Social networks and the application of content guidelines
A new report from UNESCO has found that Telegram hosts high levels of Holocaust distortion and is "a safe haven for those who wish to deny or distort the genocide". The research, published in partnership with the World Jewish Congress, was based on a manual review of 4,000 Holocaust-related posts across five platforms and found that 49% of references on the Russian-founded messaging app either deflected responsibility for the Holocaust onto its victims, minimised its impact or celebrated the perpetrators. Telegram's response, unsurprisingly, ducked the issue.
It's rare to see platform employees talk openly about their work so it's good to see TikTok's Eric Han invited to talk about work/life balance over on CNBC. The video app's head of safety explains how "mental health days and emotional support services" are vital for decompressing and how time off is encouraged. Gadear Ayed (EiM #124) and the former moderators who brought a case over a lack of adequate mental health support (EiM #153) might disagree.
I asked one of the Integrity Institute's co-founders, Sahar Massachi, to explain what it was like building a community of integrity workers and why he thinks it needs to happen now. As part of the exchange, Sahar unpacked the difference between integrity, cybersecurity and ethics, which I found fascinating.
Viewpoints will always remain free to read thanks to the support of EiM members. If you're interested in supporting more Q&As like this, become a member today (lifetime discount with this link).
PS Look out next week for more from this new mini-series on what it's like working in online integrity, produced in collaboration with the Integrity Institute.
People
Those impacting the future of online safety and moderation
The women's European football championships, which started last week, are being held against a backdrop of important campaigns about online abuse.
Mobile provider EE has enlisted male managers and players as part of its #HopeUnited initiative, which includes demo videos of how to block users, while governing body UEFA will show videos on stadium screens as part of its Real Scars campaign.
Former player and now TV presenter Alex Scott has seen her fair share of this abuse. In a recent interview, she explained how she'd received misogyny and death threats but feels a "responsibility to change perceptions by sitting in that [commentator's] chair and talking about football". Remarkably, England's manager Sarina Wiegman admitted that she'd had to speak to her players about "the best plan to stay away from the abuse and not getting it into your system." As if winning a tournament isn't hard enough already.
Male players also receive extensive abuse, as we saw during last year's men's Euro 2020 tournament (EiM #143) and as a recently published report confirmed. But I bet most male professionals don't feel the same need to justify their existence, or the pressure to alter the future of the game they play and love, in the way Scott describes.
More companies with more budget, doing more campaigns, please.
Tweets of note
Handpicked posts that caught my eye this week
- "A good overview of the rules, not just of TikTok" - Rebecca Tushnet from Harvard Law School recommends this Vox piece on the disclosure of ads.
- "I talked about recommender algorithms, what engagement is, why "amplification" is a silly word, and how platform design has consequences for peace and justice." - Berkley's Jonathan Stray teases his appearance on the Lawfare podcast (disclosure: it's good).
- "We tried HARD on trying to integrate intelligence into decision-making; sometimes successfully, sometimes not." - Jen Weedon, former Facebook security staffer, reflects on leaving the company after 6.5 years. (Look out for a Q&A with her in EiM next week).
Job of the week
Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1200+ EiM subscribers. This week's job of the week is from an EiM member.
ActiveFence, a technology company that enables Trust & Safety teams to be proactive about online integrity, is looking for an Intelligence Project Manager to manage its Extremism projects and oversee the day-to-day performance of its team of talented project managers.
You will be expected to guide colleagues on both intelligence and management-related issues, assist in their professional growth and improve their work processes. The role is based in Tel Aviv or Europe and the salary will be discussed during the application process.
The team is also looking for an Extremism Webint Analyst. This remote role involves gathering and analysing data from all corners of the web, reporting on your findings and turning them into actionable deliverables for the company's customers. Extremism is one of ActiveFence's core competencies so it's a chance to learn a lot, quickly.