5 min read

What The Wire did next, age limits on livestreaming and the speech risks of BeReal

The week in content moderation - edition #178

Hello and welcome to Everything in Moderation, your (later than usual) review of the week's content moderation and online safety news. It's written by me, Ben Whitelaw.

It's a pleasure to welcome a host of new subscribers from Witness, ActiveFence, Linklaters, Ofcom, Newsguard, London School of Economics, and elsewhere.

If you enjoy the newsletter and can afford to do so, become an EiM member for less than $2 a week. In return, you'll get exclusive Q&As with the deepest thinkers in trust and safety and a warm and fuzzy feeling of supporting a growing, independent newsletter.

Here's everything you need to know — BW


Policies

New and emerging internet policy and online speech regulation

The big story from last week — Meta's full-on fight with The Wire — has moved on significantly, as I predicted (EiM #177). If you struggled to keep tabs this week, the key developments were as follows:

  • An internal investigation by Meta revealed that the Workplace account which formed the core of the story and part of The Wire's rebuttal was set up three days after the first story was published.
  • One of the two experts who supposedly verified the emails purportedly sent by Meta spokesperson Andy Stone performed an abrupt U-turn, saying he was never contacted before the story was published.
  • On Tuesday, The Wire published a statement saying it would remove the articles while it undertook "an internal review of the materials at our disposal".
  • On the same day, the Instagram post that started the whole thing was silently reinstated by Meta and the takedown notification was removed, according to the user who posted it.
  • In an interview with Platformer, Siddharth Varadarajan, editor-in-chief of The Wire, admitted that the technical verification of the source material for the story was a "weakness for us" and that he was "not a technical guy, email headers are all gobbledygook to me".
  • The whole saga "can be best summed up as utter confusion", according to Rest of the World, which is the article to start with if you're just catching up.

What now? Well, the big question is still: who are the two people responsible for leaking the fake story and what was their motive? The Wire has suggested it has known one of them for at least five months, so it will want to get to the bottom of how this slipped through the cracks. But I'd also hope that Meta — armed with details of an individual who went to great lengths to create a fake free trial account via its Workplace platform — will want to find out too. In short: this story isn't done yet.

EiM member bonus read: Following the Wall Street Journal's reporting last year (aka the Facebook Files), I explored whether the media covers online safety in a way that helps platforms improve the role they play in society and keep users safe. The Wire's reporting, and the fractious nature of the briefing and counter-briefing over the last fortnight, made me return to it this week. See what you think.

Just a week after Ofcom published a report on the Buffalo shooting (EiM #177), New York's Attorney General Letitia James published her own about the role of online platforms in the May 2022 atrocity. The 49-page report notes that "a lack of oversight, transparency, and accountability of these platforms allowed hateful and extremist views to proliferate online" and makes recommendations, including restrictions on livestreaming and the reform of Section 230, to mitigate similar scenarios in future.

Products

Features, functionality and startups shaping online speech

New safety features are being added to livestreaming on TikTok following reports about girls as young as 14 being tipped to perform on the app. According to an announcement, the video app will send suggestions to creators about keywords they may want to consider adding to their filter list, following a trial in which creators who received the suggestions added nearly twice as many filters. From November 23, the minimum age for livestreaming will also increase from 16 to 18 and an adult-only audience targeting feature will be rolled out too.

Not-for-profit Meedan has received $5m in funding to build out a suite of fact-checking tools to help Asian-American and Pacific Islander (AAPI) communities flag misinformation and false stereotypes. From what I can gather from the promo video, Co-Insights will use human tips and artificial intelligence to proactively find misleading claims — such as the "model minority" myth — and enable the quick and early takedown of conspiracy theories and media manipulation. One to watch out for.

Platforms

Social networks and the application of content guidelines  

Researchers have found that BeReal, the popular Gen Z app, could be the next free speech concern following a review of its "vague terms of use [that] give the company a high degree of discretion over content moderation." Writing for The Conversation, academics Madeleine Hale and Colin Campbell from Deakin University in Victoria, Australia noted that users should "be concerned [that] material on BeReal could be removed without explanation, warning, transparency or avenue for appeal."

People

Those impacting the future of online safety and moderation

Who would have thought that Ye, formerly known as Kanye West, would be eligible for this spot in EiM? But, following his purchase of small-but-controversial social media platform Parler, he very much is.

As with Elon Musk and Twitter, what's interesting here is not the why — Parler has just 725,000 monthly users, according to Forbes, so it's hardly a commercial venture — but the what and the how.

There is speculation that the rapper may "strip away whatever content controls [that] Parler has put in place" before it returned to the App Store in May 2021.

If you read the recent edition of EiM (#172), you'll remember that the self-declared "premier global free speech platform" has only just been allowed back on the Play Store after finally satisfying Google that it will moderate users' posts.

With the cultural capital that the rapper has and the media attention that he garners, millions of people are about to get a Kanye-approved crash course in free speech and online governance. I'll be here, watching through my fingers.

Tweets of note

Handpicked posts that caught my eye this week

  • "One thing that keeps coming to mind is the power of machine learning models to "simulate humans" - Aviv Ovadya has a good thread on "jury learning" which I touched on in last week's newsletter
  • "Ah, I see the Oversight Board is strategically prioritizing.... almost all of content moderation" - evelyn douek assesses the to-do list of Meta's Supreme Court function following a new set of cases.
  • "I'll be joining the amazing team as their Product Director for Anti-Abuse where I hope to help make queer love and sex safer for everyone!" - A good team just got even better with news about Juliet Shen's move.

Job of the week

Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1400+ EiM subscribers.

Rockstar Games is looking for a Product Manager to oversee its online safety efforts.

The role is an exciting one working across its Social Club product teams, which include Grand Theft Auto Online and Red Dead Online among others, with a remit to develop guidelines and features that prevent socially harmful content and negative interactions between players.

The role is based in the bonnie city of Edinburgh, according to LinkedIn, but unfortunately, there's no salary listed. Good luck if you throw your hat into the ring.