Mitigating harm upstream, Snap's AI mistakes and limiting the spread of under-age nudes
Hello and welcome to Everything in Moderation, your Friday feast of content moderation news and analysis. It's written by me, Ben Whitelaw, and supported by members like you.
It's a smaller-than-usual list of new subscribers this week, but that doesn't make the folks from Wired, Mozilla and the Oversight Board any less welcome. Know someone who would enjoy EiM? Send them here to subscribe for themselves.
Here's everything in moderation from the past week — BW
Today's edition is in partnership with All Things in Moderation, a new two-day moderation conference
All Things in Moderation (May 11-12, 2023) will host global moderation practitioners, leading researchers, policymakers and those invested in the governance of digital social spaces.
The virtual event spans two days of keynotes, workshops and panels on topics including building online cultures of care, burnout and vicarious trauma, and institutional support for moderation. Early bird tickets are on sale now.
Policies
New and emerging internet policy and online speech regulation
The UK government's tech policy plans have been subject to "ever-changing policy priorities" and "ministers divided between looking tough against Big Tech and reining back the regulation from when the U.K. was part of the EU", according to a new long read from Politico. The piece touches on the Online Safety Bill, which it says is "expected to be completed by the autumn", but looks more widely at the reasons for the deathly slow introduction of supposedly world-beating tech policy. Thanks for nothing, Boris/Liz/Rishi/insert other.
In related news, a non-profit policy advisory organisation has called on the UK government to scrap its latest amendment to the Online Safety Bill, which would introduce jail terms for platform managers found not to comply with the regulation. Global Partners Digital notes that the criminal liability amendment "fails to provide sufficient clarity for an individual to reasonably know what conduct is prohibited under the law". Remember, it was only a matter of weeks ago that the bill was called "not fit for purpose" by two experts (EiM #191).
Oral arguments in last week's Gonzalez v. Google Supreme Court case (EiM #193) "focused more on what platforms do than on what users want", according to this helpful Q&A with legal scholars James Grimmelmann and Kate Klonick. The pair dive headfirst into Section 230, describing it as a "load-bearing wall", which I find a helpful metaphor for understanding where this case is likely to go (I don't know much about DIY, but I know you can't get rid of a load-bearing wall without a lot of mess and expense).
A new body has been set up to help platforms operating in Europe comply with new regulations related to online terrorist content. Tech Against Terrorism Europe, much like its global equivalent, will help what it calls "hosting service providers" (HSPs) develop one-hour removal mechanisms and transparency reporting. You can register your interest on its site.
Products
Features, functionality and technology shaping online speech
A 60-minute daily screen limit will be introduced for TikTok users under the age of 18, the latest safety feature released by the video app to placate government fears about its design and privacy. Younger users will now have to enter a passcode if they want to extend their time on the app and will be sent an inbox notification of their screen time. It comes 18 months on from the release of Direct Messaging privacy settings and mindful push notifications (EiM #98).
It could be seen as another win for the idea of mitigating harm upstream as well as the ICO's Age Appropriate Design Code, which came into force in September 2020 and was followed last year by its Californian equivalent.
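For the curious, the mechanics of a feature like this are pretty simple to sketch. Here's a minimal illustration of a daily limit with a passcode override; to be clear, the names and logic below are my own invention based on TikTok's announcement, not its actual implementation.

```python
from dataclasses import dataclass

DAILY_LIMIT_MINUTES = 60  # TikTok's reported default for under-18s


@dataclass
class Session:
    """Hypothetical per-user state; field names are invented for illustration."""
    age: int
    minutes_used_today: int = 0


def can_keep_watching(session: Session, passcode_entered: bool) -> bool:
    """Checked as watch time accumulates; returns False once the limit bites."""
    if session.age >= 18:
        return True  # the reported limit only applies to under-18s
    if session.minutes_used_today < DAILY_LIMIT_MINUTES:
        return True
    # Past the limit: per the announcement, younger users must actively
    # enter a passcode to continue, making extra time an opt-in choice.
    return passcode_entered


# Example: a 16-year-old at 75 minutes is blocked until they enter the code
teen = Session(age=16, minutes_used_today=75)
assert not can_keep_watching(teen, passcode_entered=False)
assert can_keep_watching(teen, passcode_entered=True)
```

The interesting design choice, safety-wise, is that the default flips: continuing past an hour requires an action rather than happening passively.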
Platforms
Social networks and the application of content guidelines
Five platforms have agreed to join Take It Down, a new initiative from the National Center for Missing & Exploited Children to remove nude photos of users under 18. As part of the initiative, Facebook, Instagram, OnlyFans, Yubo and Pornhub will use a new hash list to scan their services for explicit content and take action to limit its spread, according to a press release.
This approach has worked well for terrorism content (GIFCT owns and operates the hash list used by most major platforms, EiM #35) and I expect we'll see other collectively maintained and utilised resources like this in the future.
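If hash lists are new to you, the core mechanic is worth a quick sketch: each platform computes a fingerprint of uploaded media and checks it against a shared list of known-bad fingerprints, so the image itself never has to be exchanged between companies. The toy example below uses SHA-256, which only catches exact byte-for-byte copies; real deployments typically rely on perceptual hashes (such as PhotoDNA or PDQ) to match resized or re-encoded versions. The hash value and function names here are illustrative, not drawn from any platform's actual system.

```python
import hashlib

# Hypothetical shared list of known-bad fingerprints, e.g. distributed
# by a central body such as NCMEC or GIFCT. The value below is made up.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(media: bytes) -> str:
    """Exact-match fingerprint; real systems favour perceptual hashes."""
    return hashlib.sha256(media).hexdigest()


def matches_known_content(upload: bytes) -> bool:
    """True if the upload matches the shared list and should be actioned."""
    return fingerprint(upload) in KNOWN_HASHES


print(matches_known_content(b"example image bytes"))  # False in this demo
```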
Snap has apologised in advance for its new AI-powered chatbot, which it says is "prone to hallucination and can be tricked into saying just about anything". The bot, initially only available to $3.99-a-month Snapchat+ subscribers, will allow users to chat with it like any friend or contact. It's an odd move for a platform known for its cautious approach to trust and safety and whose CEO was espousing the importance of "moral responsibility" not too long ago (EiM #133). Sure doesn't feel very responsible.
People
Those impacting the future of online safety and moderation
A former Reddit trust and safety worker is taking the social media platform to court for reportedly ignoring her requests to move to a different position with less exposure to violent and disturbing content.
Maya Amerson made the request after being diagnosed with PTSD and taking 10 weeks off work, according to Coda Story, before she subsequently resigned in September 2022. She now works at Spotify.
Unlike previous examples of legal action against platforms (see EiM #112 for a list), Amerson was a staff member at Reddit (her role, according to LinkedIn, was Global Training and Quality Operations Leader, Trust & Safety Vendor Operations) rather than a contracted or outsourced worker. It's a reminder that being on staff should improve outcomes, including well-being, but doesn't always.
Tweets of note
Handpicked posts that caught my eye this week
- "Very pleased to see that @twitter's Community Notes (nee Birdwatch) seems to cover ads as well." - Georgetown law prof Anupam Chander pays testament to the good folks fact-checking the latest wild Twitter ad.
- "A special occasion for me cause it'll be my first time presenting the final, unpublished section of my PhD - the one with the big picture conclusion" - Paddy Leerssen, a University of Amsterdam postdoc, shares news of his upcoming PlatGov session.
- "We focus only on content moderation. It's like there is a polluted river. We take a glass, clean up the water & dump it back." - Defend Democracy's Alice Stollmeyer quotes the one and only Maria Ressa.
Job of the week
Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad with 1700+ EiM subscribers. This week's job is from an EiM organisational member.
Ofcom is hiring a Programme Manager on a fixed-term contract to lead its work in keeping users of video-sharing platforms (VSPs) safe.
The role involves mapping the regulator's VSP activity over the next 12-18 months, delivering updates on the work across the organisation and managing budget and resources.
The right person should have programme management capabilities, strong communication skills and an interest in emerging safety regulation (you wouldn't be here if you didn't have that).
If that's not the one for you, a Policy Manager role is also available to shape Ofcom's approach to regulating VSPs. The role involves developing innovative policy thinking, managing stakeholders and ensuring briefings and research are up to standard for internal use or external publication.
Applications for both roles close at 00:01 on Monday 6th March, so you've got the weekend to get yours in.