Hello and welcome to Everything in Moderation, your at-a-glance review of the week's content moderation and online safety news and analysis. It's written by me, Ben Whitelaw.
A big welcome to new faces from Linternaverde, Alltrails, the University of Nottingham, Linklaters, Google, Unit 21, Khoros and other cracks and crevices of the web. If you were forwarded this email — or clicked a link and ended up here — subscribe now to get it in your inbox every Friday. You can also support the newsletter to help cover its costs and keep it free for others to enjoy.
This week's edition is shorter than usual as I'm in Amsterdam for the weekend seeing some friends (Want to say hi? Let me know). Here's your newsletter - BW
New and emerging internet policy and online speech regulation
The UK's proposed online speech regulator has announced that it will select "high-risk services for closer supervision", which will have less time to comply with the new duties of the Online Safety Bill once it is passed. Ofcom hinted at a tiered approach this week, as it published its roadmap to regulation and a call for evidence on key areas of the Bill, including the risk of harm from illegal content, child access assessments, and transparency requirements.
It also committed to publishing a draft Code of Practice on illegal content harms within 100 days of the bill being passed in 2023 (although after what happened this week, who knows when that will actually happen).
Mark Bunting, Ofcom's director of online safety, penned a sensible thread about what's coming and how "no service on which users freely communicate and share content can be entirely risk-free." The question is: how will the soon-to-be regulator define "high-risk" and crucially, how will it ensure such decisions are not politicised?
Russia has increased its hostilities towards platforms by passing a bill to fine companies that haven't set up an office in the country. It's an extension of the rule, introduced in July 2021, that required companies with more than 500,000 users to have a representative in the country or face a penalty.
Features, functionality and startups shaping online speech
Streamers on Twitch will soon be able to share ban lists with each other, according to a screengrab taken from the platform and reported by NME. The feature seems designed to save creators time and energy although, in theory at least, it could also be used to perpetuate unjust suspensions. Twitch users have called for safety improvements since the hate raids (EiM #130) so it will be interesting to see how well-received this latest measure is.
Social networks and the application of content guidelines
Twitter has accused the Indian government of applying its IT Laws "too broadly" in a legal suit filed in the Karnataka High Court in Bangalore. It comes after the government issued a takedown order for posts although, as The Register noted, what those posts were is not clear. In any case, it's the first time that the company has pushed back against the rules, which came into force last May (EiM #103), and it comes amid further accusations that prominent journalists and Sikh voices are being silenced by the ruling party.
Angrej Singh, writing for Tech Policy Press, has done a full rundown of the recent "wave of removals" as well as the concerns of civil society organisations about government proposals to extend control over social media platforms. It's my read of the week.
Clubhouse has become the latest platform to become a Tech Against Terrorism member, meaning it will commit to implementing "responsible industry practices with regard to online counterterrorism efforts". I did a Q&A with Jess Mason, Clubhouse's Head of Global Policy and Public Affairs, back in April about how to respond to real-world events which is worth reading if you missed it.
Those impacting the future of online safety and moderation
It's an interesting admission for a piece of company comms — rightly or wrongly, using AI for content moderation has almost become the norm for most platforms working at scale — but the unguarded honesty makes more sense once you know the blog is by Alice Hunsberger, Grindr's VP of customer experience.
I've mentioned Alice's work in EiM a number of times, and following her on LinkedIn (one of the ways I keep on top of company trust and safety news) shows her careful approach to policy, the care she exhibits for her team and her understanding of how policy and product can make users safer online.
And when she says "Because it’s important to get this right, we’re going to take our time to implement these models carefully over the next year", you get the sense she means it. And for that she deserves mention.
Tweets of note
Handpicked posts that caught my eye this week
- "Meta didn’t suddenly start restricting abortion content. We just suddenly started noticing." - Eva Galperin, director of security at EFF, on how the last few weeks are not new, just new for some.
- "Wendy, will you sit down with me for a proper interview? (All my private requests have gone unanswered...)" - TIME's Billy Perrigo reminds Sama's CEO Wendy Gonzalez that he's still waiting to hear back.
- "On the verge? It's been over the verge, through the hedge and into the next county" - IT lawyer and UK bill watcher Graham Smith reacts to news the Online Safety Bill is potentially "unworkable".
Job of the week
Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1000+ EiM subscribers or get in touch to enquire about a one-off posting.
Reddit is looking to hire a Content Policy Lead, Platform & Legal Policy to help protect "the expression rights and private information of communities and users across Reddit through strategic solutions that balance defensible risk". No small feat, then.
The ideal candidate will have five years of policy experience, a "global perspective on policy development, enhancement, and implementation" and a willingness to work across time zones. I've asked for salary information (yet another ad without it...) and hope someone will share more details.