Designing online safety risk assessments, Germany's mod collective and digital ID funding

The week in content moderation - edition #196

Hello and welcome to Everything in Moderation, your speech governance and content moderation week-in-review. It's written by me, Ben Whitelaw, and supported by members like you.

It's been a busy week on the day job, so I'm glad to have found time to pull together today's newsletter, which feels more constructive and hopeful than usual. If you found it a useful round-up (or felt buoyed), please consider sharing it with your network.

Kevin kindly did and so I'm glad to be able to welcome a flurry of new subscribers who made it here from his network, including folks from Snap, the Open University, Ofcom, Glassdoor, Reddit, Uber, March, ActiveFence, Harvard and elsewhere. Hit reply to say hi and tell me what you do (for money or for fun, it's up to you).

Here's everything you need to know from the last week — BW


Policies

New and emerging internet policy and online speech regulation

Ofcom, the UK's incoming online speech regulator, has set out how it intends to approach the risk assessments that platforms will be required to submit under the Online Safety Bill. In a four-page discussion paper, it outlined a four-step assessment process and noted that it was working with regulators in other jurisdictions to "work towards international coherence around the novel regulatory approach that is risk assessments".

Stanford's Daphne Keller has given it a read and concludes that "the UK is continuing on its course toward two opposite and irreconcilable legal frameworks for platforms." A formal consultation process will begin once (if?) the Bill is passed.

I missed this last week but the White House has produced an executive summary of its initial blueprint for reducing technology-facilitated gender-based violence (GBV). It follows the establishment last year of the Task Force to Address Online Harassment and Abuse by President Joe Biden (who has come a long way in the last three years), which led to interviews with hundreds of stakeholders about the effect of online harassment on health, education and careers.

The blueprint includes digital equity grants for online safety projects, cybercrime training and resources for law enforcement, and workshops to identify safety research gaps. I'll do my best to track how this gets built out over the next year.

Products

Features, functionality and technology shaping online speech

We had two notable safety startup stories last week (EiM #195) and we have another this week: digital identity company Yoti has received £10m from Lloyds Banking Group. The cash injection will be used to develop "a new reusable digital identity proposition that will complement Yoti's existing solutions". No further details were shared but it feels like a step in a new direction beyond its previous work with Match Group (EiM #130) and Instagram (EiM #180), announced last November.

ChatGPT, everyone's new favourite technology flex, will be used to optimise Discord's AutoMod tool, it was announced this week. The OpenAI chatbot will interpret a server's rules and apply them to users' messages, which sounds wild, to be honest. PC World reports that, in a demo of the new functionality, the beefed-up AutoMod was able to flag messages promoting other social media channels and catch deliberate misspellings intended to evade moderation bans. Powerful if it works.
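
Discord hasn't published details of how the integration works, but the underlying pattern (send the server's rules plus a candidate message to the model and ask for a verdict) is simple to sketch. Here's a minimal, hypothetical example using OpenAI's Python client; the prompt wording and the `check_message` helper are my assumptions, not Discord's actual implementation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical server rules, standing in for whatever a Discord admin configures
SERVER_RULES = """1. No promotion of other social media channels.
2. No harassment, including via deliberate misspellings."""

def check_message(message: str) -> bool:
    """Ask the model whether a message breaks the server's rules."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # keep moderation verdicts as deterministic as possible
        messages=[
            {
                "role": "system",
                "content": "You are a moderation assistant. Given the server "
                           "rules, reply YES if the message violates them "
                           "(including intentional misspellings), else NO.",
            },
            {
                "role": "user",
                "content": f"Rules:\n{SERVER_RULES}\n\nMessage:\n{message}",
            },
        ],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

# A flagged message would then be surfaced to a human moderator for review
print(check_message("follow me on insta for way better content"))  # likely True
```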

Platforms

Social networks and the application of content guidelines  

Snap has unveiled details about how its moderation powers the algorithmic distribution of content, part of an ongoing effort to give parents greater control over what their children consume. Content, the company explains, is tagged as sensitive or suggestive and can then be screened out by parents via Snap's Family Center.
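
To make the mechanism concrete, here's a minimal sketch of that tag-then-screen pattern. Everything here (the `Story` type, the `filter_feed` function) is hypothetical; Snap hasn't published how Family Center filtering is implemented under the hood.

```python
from dataclasses import dataclass, field

# Tags a moderation pipeline might attach to a piece of content
RESTRICTED_TAGS = {"sensitive", "suggestive"}

@dataclass
class Story:
    creator: str
    tags: set[str] = field(default_factory=set)

def filter_feed(feed: list[Story], restrictions_on: bool) -> list[Story]:
    """Drop tagged content when a parent has enabled restrictions."""
    if not restrictions_on:
        return feed
    return [story for story in feed if not (story.tags & RESTRICTED_TAGS)]

feed = [Story("creator_a", {"suggestive"}), Story("creator_b")]
print([s.creator for s in filter_feed(feed, restrictions_on=True)])  # ['creator_b']
```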

It's a significant development in the camera company's policies: up until now, only vetted publishing partners and creators had access to the guidelines. Oddly, the recommendation guidelines linked from the announcement show an introduction copied and pasted 10 times. The correct version, if you're looking for it, is here.

A former employee of TikTok's Trust and Safety division has raised concerns about Project Texas, the $1.5bn plan to create a US version of the app to allay data privacy fears. The anonymous whistleblower, quoted in The Washington Post, is reported to have shared code snippets with authorities and said "data from more than 10 million US users (was) exposed to China-based employees". TikTok said the employee had "misconstrued the plan" and was not up to speed on its latest iteration.

Another former safety worker now, this time from MindGeek, has said "the rules changed constantly" and that it was "really hard" to determine if people (let's be honest, women) were minors. The man, who gave an interview as part of the new Netflix documentary Money Shot: The Pornhub Story, also said content that should have been removed "stayed up for months".

The life of a Pornhub moderator was laid bare in a piece for The Verge this time last year and was my read of the week (EiM #149). Not an easy job, I'll say that.

People

Those impacting the future of online safety and moderation

It's been described as "a rare moment of coordinated pushback by tech workers" and "the first industry-wide collective of its kind in Europe". So it makes sense to highlight the 40 TikTok and Meta moderators based in Germany who have joined forces to demand better workplace rights.

The group, which came together following a meeting organised by German trade union Verdi, Superr Lab, Aspiration and tech justice group Foxglove, want platforms to recognise their right to bargain or unionise and to form legally protected "works councils". Legal action is on the cards if they don't.

It's been just over four years since I asked "Is it time for moderators to organise?" (EiM #18) and a matter of months since Daniel Motaung tried to do just that in Nairobi and was sacked for his troubles (EiM #179 and others).

Maybe, just maybe, it's finally happening.

Tweets of note

Handpicked posts that caught my eye this week

  • "everyone having the ability to use AI systems to create never-ending autonomous harassment campaigns might be a bad thing!" - Aviv Ovadya on the need for collective decision-making on AI and the potential downsides.
  • "Social media platform companies are reeling from an onslaught of regulatory efforts by all the major governments in the world." - Digital rights lawyer Mishi Choudhary with a fascinating thread on why platforms no longer challenge government takedowns.
  • "I'll be speaking at the Community Clubhouse at GDC2023 on building and sustaining safe communities in games" - Netflix's Christina Camilleri shares an exciting looking trust and safety event next week.

Job of the week

Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1700+ EiM subscribers.

TikTok is seeking a Head of Content Compliance to oversee the company's adherence to the Digital Services Act and Online Safety Bill.

The interesting role involves implementing a monitoring and oversight programme, liaising with the European Commission and Ofcom, and conducting gap analyses on TikTok's products and services.

The successful individual will be an experienced compliance professional (15+ years is desired, with 5+ in a regulated environment, no less) and have management experience and "outstanding communication skills".

No salary information is available, unfortunately (if you work at TikTok and know more, I'll happily share next week).