Oversight Board gives verdict, verification is back but 4chan may not be
Hello and welcome to Everything in Moderation's Week in Review, your need-to-know news and analysis about platform policy, content moderation and internet regulation. It's written by me, Ben Whitelaw, and supported by members like you.
The Oversight Board is a topic I've covered regularly since its launch in 2020 and it continues to be one of the most fascinating internet governance experiments out there. There's a lot to discuss about this week's announcement so I've gone deeper than usual — do you want more analysis like this? Let me know by hitting the thumbs at the bottom of today's edition or sending me an email — ben@everythinginmoderation.co.
Welcome to new subscribers from eSafety Commission, TikTok, University of Southern California, the Netherlands Authority for Consumers and Markets (ACM), WeGlot and others. A reminder that you can customise the newsletters that you receive in your Account.
Here's everything in moderation from the last seven days — BW

VOX-Pol’s REASSURE Project is dedicated to understanding and improving the security, safety, and resiliency of online extremism and terrorism researchers and allied professionals.
Following our 2023 report, we are now conducting a quantitative survey to gather broader insights from those tasked with responding to online extremism and terrorism.
Do you routinely encounter online extremism and/or terrorism content in your work? If so, we invite you to contribute to this important survey.
Your anonymous and confidential responses will help us develop best practices and enhance protections for those researching, analysing and/or moderating online extremism and terrorism content.
Request from the REASSURE Project team: please do not share this link on social media
Policies
New and emerging internet policy and online speech regulation
After a long period of silence, the Oversight Board — the independent-but-Meta-funded body that critiques its content moderation decisions — has finally weighed in on the controversial policy changes announced in January (EiM #276). It also published decisions on 11 cases.
There’s a lot of documentation shared in the blogpost, which I've tried to comb through. A few reflections, in no particular order:
- The tone: As Mike and I mentioned on this week’s Ctrl-Alt-Speech, the tone of the announcement is somewhere between concern and mild despair. The Board doesn’t hide its feelings about the “hastily” announced January changes or the fact that “no information [was] shared as to what, if any, prior human rights due diligence the company performed”. Ouch.
- The timings: The case on the UK riots was selected on 3 December 2024, almost five months ago, while the EU Migration policies and immigrants case was picked for review on 17 October 2024, back when Kamala Harris still had a chance of becoming the US president. For an organisation that has made efforts to increase the pace and number of cases it covers, that’s not very fast and demonstrates at least an element of dysfunction.
- The relationship: Meta could have used the Board’s expertise to get feedback on January’s policy changes, and this week's announcement reminds the platform that the Board “is ready to accept a policy advisory opinion referral” (which Meta has done only three times in the last five years). The fact that it hasn’t sought help for a significant policy shift has already annoyed some board members (EiM #283) and suggests the relationship between company and Board is not what it should be. That a Board co-chair commented to Reuters about its future suggests nervousness about what might come next.
- The decision: The ruling to leave up videos questioning people's gender identity has received a lot of coverage (Platformer, GLAAD) and will understandably leave LGBTQ+ users worried. Many will argue that the case is not dissimilar to the 2023 case in which the Board overturned Meta’s decision to leave up a video referring to a woman as a “truck”, which it found violated the Bullying and Harassment policy. A symptom of the political climate?
Antitrust cases against Big Tech companies are like Ctrl-Alt-Speech episode titles with rhetorical questions: you get none for ages and then several in quick succession. While the FTC’s trial against Meta rumbles on (EiM #290), Google has been found to have an illegal ad monopoly and this week faces the Justice Department and a handful of US states in a ‘remedies trial’ related to its search monopoly. The judge is expected to rule by September.
What’s behind this: Ted Cruz met with Google CEO Sundar Pichai last month as part of what Politico calls “a pressure campaign meant to shift Google’s content policies to align with changes being made by its corporate rivals”. Talk about working the refs.
A public service announcement, more than anything else: UK regulator Ofcom has published new draft codes under the Online Safety Act, outlining how tech firms must protect children from harmful content, including through age checks and stricter content moderation. It follows the release of the first set of codes in December (EiM #275).
Also in this section...
- How activists want to bring Global Majority perspectives into EU tech policy (Netzpolitik)
- The trade war’s surprising targets: content creators (Mashable)
- Why deregulating online platforms is actually bad for free speech (The Conversation)

Products
Features, functionality and technology shaping online speech
Verified accounts are coming to Bluesky — but with a difference. Following limited take-up of its domain-based handle verification, the company will give “authentic and notable accounts” their own check mark (think government officials and media outlets) as well as enable trusted verifiers (see: NYT and Wired to begin with) to verify staff, according to a blogpost.
Wired noted how multiple organisations will be able to verify one account. I look forward to my Bluesky profile looking like this.
Respondology, a Colorado-based firm that works with 450+ brands and sports teams to “discreetly hide” spam, bots and inappropriate social media comments in real time, has secured $5m in Series A funding. Adweek reports the news and also links to the company’s pitch deck, which notes that brands have to ‘fend for themselves’ in the wake of Meta’s January announcement (deck). Who says that brand safety is dead? Investors clearly like it.
Also in this section...
- Exposing Pravda: How pro-Kremlin forces are poisoning AI models and rewriting Wikipedia (Atlantic Council)
- AI images of child sexual abuse getting ‘significantly more realistic’, says watchdog (The Guardian)
- Algorithmic Gatekeepers: Impacts of LLM Content Moderation on Civic Space and Human Rights (European Center for Not-for-Profit Law)
Platforms
Social networks and the application of content guidelines
Following a massive hack last week, notorious message board 4chan is still offline and may not be returning. An internet culture wellspring turned far-right hangout, the site seems to have been undone by a feud with a splinter imageboard called soyjak.party, as reported by PC Gamer. I fully expect someone to make a Netflix documentary about the beef that led to its demise.
Also in this section...
- Facebook Groups are fueling a black market for Uber and DoorDash accounts, says a new report (Fast Company)
People
Those impacting the future of online safety and moderation
Alex Mahadevan has straddled the worlds of editorial and internet speech for some time.
Formerly of a large Florida-based local media publisher and now director of Poynter’s digital media literacy program MediaWise, he’s spent a lot of time researching the rise of Community Notes and what platforms are getting wrong. With TikTok launching its own version (EiM #290), he’s a good person to know.
In a Q&A for Poynter this week, he argues that while crowdsourced context can be useful, recent changes amount to platforms “telling users: ‘Hey, it’s a hostile digital world out there, and you’re on your own.’” But he does note that X/Twitter’s Community Notes is “a brilliant system if it existed within a true trust and safety program.”
It’s a big if. TikTok T&S team, I hope you’re taking notes.
Posts of note
Handpicked posts that caught my eye this week
- "Had the absolute pleasure of facilitating a discussion on trust & AI with some of the sharpest minds in the field" - Trying not to be jealous of the stellar panel that Newsguard's Sarah Brandt got to take part in.
- "It is notable how often these case decisions refer to the "majority" and "minority" opinions of Oversight Board members, which illustrates how intractable content moderation can be in practice." - Dunstan Allison-Hope, member of the Christchurch Call board of trustees, does some Oversight Board close reading.
- "From platform governance and AI in moderation, to trust, safety, and rebuilding digital spaces that actually work for people, ATIM brings together the sharpest minds in the field. Think Bluesky Social, Front Porch Forum, Starlight Children's Foundation, One Future Collective, The Truth Initiative, and many more." - All Things in Moderation's Venessa Paech teases speakers for next month's event. Proud that EiM is a partner.