
One year of the Oversight Board, China mulls over new rules and Ofcom's CEO on what's ahead

The week in content moderation - edition #164
Melanie Dawes, CEO of Ofcom, the soon-to-be UK harms regulator (Courtesy of WebSummit under Creative Commons 2.0 - colour and crop applied)

Hello and welcome to Everything in Moderation, your weekly exploration of news and analysis about online safety and content moderation. It's written by me, Ben Whitelaw.

(Support the newsletter by becoming an individual or organisation member)

To new subscribers from Nextdoor, Terra.do, Trinity College Dublin, Cyacomb, Spotify and SWGFL as well as a dozen others, thanks for coming on board. If you came via the latest edition of the New_ Public newsletter, a special welcome to you.

This week's edition feels like we've come full circle in many respects, and not in a good way: authoritarian countries taking regulatory inspiration from European speech laws, failures to take down upskirting content and the age-old issue of review-bombing rearing its ugly head.

Keep an eye out for my read of the week. And thanks for reading — BW


Policies

New and emerging internet policy and online speech regulation

Draft regulation proposed by China's internet watchdog could see every post reviewed by a moderator, a significant shift in the way online speech is managed.

Currently, a 2017 law — Provisions on the Management of Internet Post Comments Services — mandates that "comments under news information" are censored. But, according to MIT Technology Review, platforms will soon have to censor forum posts, replies and bullet chats on live videos as well. The new rules, which put more responsibility on content creators themselves, will mean that Chinese companies have to hire more people to carry out moderation and face warnings, fines and even suspension if they don't comply.

Elsewhere, Singapore is gearing up to introduce its own rules governing how platforms moderate content, as reported by The Straits Times. The proposed codes will focus on child safety, user reporting and platform accountability, and bear all the hallmarks of legislation making its way through parliaments in the UK, Ireland and elsewhere. Public consultation will begin in July, according to Minister for Communications and Information Josephine Teo.

Facebook continues to limit the data it shares with the Oversight Board and should be more transparent, according to the first annual report from the independent-but-Facebook-funded advisory group of lawyers, human rights experts and academics. The report lays out the scale of its work to date (1 million appeals submitted by users, 10,000 comments on 20 cases and 86 recommendations) as well as the challenges it faces going forward (just a third of cases to date came from users outside the US/Canada and Europe).

The report is full of interesting stats but the one that most stood out to me was the tiny number of cases — just 47 — referred to the Oversight Board by Meta itself. That's 0.004% of the total cases that came across its desk. What does that say about the company's confidence in its ability to audit itself? Very telling.

Products

Features, functionality and startups shaping online speech

Epic Games thinks it has a solution to the age-old problem of review bombing in its online store, reports The Verge. Rather than letting anyone rate a game, Epic will now randomly survey users who have been playing for two hours and ask them for a score out of five. Rotten Tomatoes did something similar in 2019 when it stopped users from commenting on a film before its release to prevent malicious reviews (EiM #18). Valve's Steam platform and even Disney+ have felt the effect of malicious review campaigns recently.
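
For the curious, here's a minimal sketch of how that kind of eligibility-gated sampling might work. Only the two-hour threshold comes from The Verge's report; the sampling rate and function names are assumptions for illustration, not Epic's actual implementation.

```python
import random

# Illustrative sketch, not Epic's actual code: only players past a playtime
# threshold are eligible to rate, and only a random subset of those are
# ever shown the 1-5 rating prompt.
PLAYTIME_THRESHOLD_HOURS = 2.0  # threshold reported by The Verge
SURVEY_PROBABILITY = 0.05       # sampling rate is an assumption

def maybe_request_rating(playtime_hours: float) -> bool:
    """Return True if this player should be shown the rating survey."""
    if playtime_hours < PLAYTIME_THRESHOLD_HOURS:
        return False  # drive-by accounts never see the prompt
    return random.random() < SURVEY_PROBABILITY

# A player with 3.5 hours might be surveyed; one with 20 minutes never will.
print(maybe_request_rating(3.5), maybe_request_rating(0.33))
```

The point of the design is that review-bombers have to sink real playtime into every account and still only get a random chance of being asked, which makes coordinated campaigns expensive.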

A new tool to help companies test hate speech models has been released by its creators. HateCheck.ai was built by Rewire, a startup founded by researchers from the University of Oxford, and seeks to support "the creation of fairer and more accurate hate speech detection models". The project puts special emphasis on counterspeech, which often gets mistaken for hate speech, and was supported by Google's Jigsaw team (although it's not clear from the website exactly how).
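
To see why counterspeech trips up detection models, here's a toy example of functional testing in the HateCheck spirit. The two test cases and the keyword classifier below are invented for this sketch; the real suite ships thousands of curated, labelled cases.

```python
# A minimal sketch of HateCheck-style functional testing (illustrative only).
TEST_CASES = [
    # (text, expected_label): counterspeech that quotes abuse is non-hateful
    ("I can't believe people still tell immigrants to 'go back home'.", "non-hate"),
    ("Immigrants should go back home.", "hate"),
]

def naive_keyword_classifier(text: str) -> str:
    """A deliberately weak model that flags any text containing a trigger phrase."""
    return "hate" if "go back home" in text.lower() else "non-hate"

for text, expected in TEST_CASES:
    got = naive_keyword_classifier(text)
    status = "PASS" if got == expected else "FAIL"
    print(f"{status}: expected {expected}, got {got}: {text!r}")
# The counterspeech case fails: exactly the weakness that targeted
# functional tests are designed to surface.
```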

Platforms

Social networks and the application of content guidelines  

Grindr has announced a deeper partnership with Spectrum Labs, an AI moderation vendor, as part of its effort to combat illegal activity, scams and harassment. The dating app has done some very thoughtful safety work over the last few years (EiM #160) and I recently spoke to Alice Hunsberger, senior director of customer experience at Grindr, for a podcast on how its design prevents user harm.

The agreement sees Spectrum further corner the dating moderation solutions market, having already partnered with dating behemoth Match Group. The company raised $32m in funding back in January (EiM #145).

Facebook has removed a large number of accounts and groups posting pictures of upskirting, following an investigation by BBC News. The video report, which contains real footage of men chasing underage girls, also notes that, when reported to the company, the posts were not deemed to "go against one of our specific Community Standards". They were eventually removed after the BBC contacted the company, but this is another example of a platform's takedown process not standing up to scrutiny.

One from last week that I'd missed: Rumble, the video-streaming platform famed for being home to high-profile right-wing video stars, is crowdsourcing its new moderation policy and appeals process. In a blogpost, the Canadian company shared how the new rules would be "designed by creators and anchored in transparency" and implemented by the end of 2022.

Interestingly, the first draft of the proposed policies prohibits "attacking other users or content creators on the platform, based on that user’s race, religion, or other legally protected status". Which, judging by some of the videos on its homepage, means its moderators will be busy.

People

Those impacting the future of online safety and moderation

The reaction to Bloomberg's interview with Melanie Dawes, the chief executive of Ofcom, concentrated on her reluctance to use Twitter, one of the services that her organisation will oversee when the Online Safety Bill is passed in the next 12 months or so.

But for me, it was Dawes' comments on the corporate culture of platforms which most stood out. The long-time civil servant explained how “too many of the platforms have prioritized growth and revenues over safety” before adding "It’s hard to say that we will see clear trends in the data, or anything like that" and closing with "Cultural change is the thing that is the most important of all”. A harms regulator not interested in the data? It doesn't bode well.

Ofcom has been on a hiring spree (several EiM subscribers have taken jobs there in recent months) and, according to Dawes, plans to add another 340 people to its already 1,100-strong workforce. That's a significant number of people but is it enough to regulate platforms with billions of users? Dawes must hope so. It's also my read of the week.

Tweets of note

Handpicked posts that caught my eye this week

  • "sex workers and other margnizalized users are the dolphins in content-moderation's tuna nets" - Cory Doctorow shares a long thread about feminist author Susie Bright and the struggle to create digital space for sex workers.
  • "Looking forward to discussing soon at #DiPLab, an interdisciplinary research group, the use of automated technologies in social media content moderation value chains for controlling workers in India and in Germany" - Phd candidate Sana Ahmad on her plans to share her offshoring fieldwork next week.
  • "Sometimes FB oversight board decisions feel nit-picky, but today's doesn't." - WSJ's Jeff Horowitz reflects upon the latest Oversight Board decision, on a anti-Serbian cartoon.

PS Check out this cute two-minute cartoon released by Discord to promote Automod (EiM #163). More mod product promos like this please.

Job of the week

Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1000+ EiM subscribers or get in touch to enquire about a one-off posting.

Social Simulator, a company that trains the crisis response teams of government agencies, FTSE 100 and Fortune 500 companies, is hiring for a US-based senior consultant and a digital account executive in Asia-Pacific.

Both jobs require experience in communications, media or crisis management, an active social media presence and a background in business development. There's a lot of flexibility in terms of work hours and you'll be joining a fast-growing team. Salaries are $55,000-65,000 and 30,000-35,000 respectively. No deadline but don't wait around.