How to determine a 'dangerous organisation', age checks on Instagram and Marwick's model
Hello and welcome to Everything in Moderation, your companion to the fast-changing world of online content moderation. It's written by me, Ben Whitelaw, and supported by you.
This week has seen the biggest uptick in EiM subscribers since I started writing the newsletter back in 2018. A big welcome to new faces from Shopify, Luminate, GIJN, Amazon, Google, Zefr, ByteDance, Nintendo, Strava, Deloitte and many others who found their way via TSPA's latest newsletter (I recommend subscribing). You can find out about me and what I do when I'm not writing this newsletter here. Do hit reply and say hello.
A quick primer for those who are new: every week, I round up the key news and analysis relating to online speech, safety and moderation across four areas that have come to be foundational to EiM — Policy, Product, Platforms and People. The categories aren't comprehensive and the stories don't always fit but it's a framework that hopefully allows you to find something new or useful in every newsletter.
(Support the newsletter by becoming an individual or organisation member)
Enough from me, here's your round-up. Look out for my read of the week, it's a good one — BW
New and emerging internet policy and online speech regulation
The Supreme Court's decision to overturn Roe vs Wade has made women's reproductive rights (and I can't believe I'm having to write this) "the newest content moderation minefield", according to CNN.
Information about abortion clinics posted to Instagram has already been labelled as sensitive while hashtags relating to abortion pills have been restricted, in signs that platforms are struggling to react to the decision. Andy Stone, the infamous Meta spokesperson (EiM #131), called them "instances of incorrect enforcement", which is both better and worse than "technical glitch" (EiM #67).
Evan Greer and Lia Holland, writing in Wired, noted that the United States is on the verge of "mass censorship of online content about abortion" as a result of "dangerous legislation" that would weaken Section 230. In their own powerful words:
If Section 230 is weakened, online platforms like GoFundMe and Twitter, web hosting services, and payment processors like PayPal and Venmo will face a debilitating and expensive onslaught of state law enforcement actions and civil lawsuits claiming they are violating state laws.
Meanwhile, elsewhere in the US:
- New York became the latest state to attempt to control how platforms moderate content. The proposed law suffers from the same issues as similar laws in Florida and Texas (EiM #159), according to the Reporters Committee for Freedom of the Press but with one difference: this law encourages more moderation rather than less.
- Californian lawmakers met to discuss The California Age-Appropriate Design Code Act, a piece of legislation modelled on the UK's Age Appropriate Design Code but billed by experts as "a trojan horse for comprehensive regulation of Internet services" in the state.
In China, social media platforms clamped down on users following an online backlash caused by comments made by a senior Communist Party official about Covid-19 restrictions remaining in place "for the next five years". The quotes, published by Beijing Daily, a government-backed newspaper, were retracted but not before Weibo users questioned the policy and some threatened to leave the country. Weibo responded by banning the hashtag "for the next five years" from its platform, according to CNN. It raises further concerns about the recently published draft regulations for further internet censorship (EiM #164).
Bonus read: How China tech does moderation (EiM #21)
On the topic of the world's most populous country, Europe has been accused of following in its footsteps by "attempting to circumscribe the Internet within its own political, social and cultural confines", according to a Politico op-ed by an internet policy veteran. Konstantinos Komaitis, who currently works for the New York Times, makes a strong case that Europe's values don't chime with "the Internet's own values" and that the bloc is missing an opportunity to "promote an Internet that offers the best of both worlds". It's a great piece and my read of the week.
Features, functionality and startups shaping online speech
New methods of age verification are coming to Instagram, as the platform wages a fresh war on users who try to claim they are over 18 years old. A blog post that I missed last week announced a partnership with Yoti that will see the UK firm provide facial analysis technology to estimate a user's age (NB: it's not always very accurate). Instagram will also offer social vouching, where three adults are asked to confirm that the user is above 18 and thus suitable to graduate from the "age-appropriate" Instagram shown to users between 13 and 17 years old.
Meanwhile, PlayStation has launched a new online hub that brings together information about privacy, security and online safety. The detailed guides cover how to mute or block users and what to do if you're suspended. The gaming giant's only appearance in EiM came in December last year when it filed a patent to automatically detect disruptive behaviour (EiM #141), suggesting a renewed focus on safety.
Social networks and the application of content guidelines
Jane's Revenge, a US pro-abortion group, has been added to Facebook's Dangerous Individuals and Organisations list, according to a report from The Intercept. The group claimed responsibility for an attack on an anti-abortion centre on 10th May, a week after the Supreme Court's draft decision on Roe vs Wade was leaked, and was added to the watchlist a day later.
The move is notable for two reasons: 1) only two of the 4000+ DIO entities are associated with anti-abortion violence or terrorism 2) the Oversight Board, as well as other academics, have repeatedly called for additions to the database to be made public. Little is known about Jane's Revenge but this change, according to Facebook's rules, means that users cannot identify themselves as members of the organisation.
Finally in this section, TikTok has come under fire for its role in the recent Philippines election, won by the recently sworn-in Bongbong Marcos, son of a former dictator. Tim Culpan at Bloomberg writes that the video app surfaced false videos that disparaged vice-president Leni Robredo and warned that TikTok is unlikely to be able to "get its content moderation and platform policies hardened in time to stop its misuse ahead of the Kenyan elections or the US midterms this fall".
Kenyan journalist and Mozilla fellow Odanga Madung warned the same thing in an op-ed for The Guardian this week and I just hope we don't look back in six months and say how right they were.
Those impacting the future of online safety and moderation
If you've ever been harassed online outside of political discussions, you might wonder why you were targeted. If so, you might have come across Alice Marwick's morally motivated networked harassment model.
Marwick, who is an associate professor of communication at the University of North Carolina, developed the model as part of a paper published in 2021. In it, based on 37 interviews with people with experience of online abuse, she explains how a target is accused of violating the norms of the harasser, which triggers moral outrage among the harasser's community and the inevitable pile-on.
Marwick spoke about the model in a recent podcast episode from Untangled, a newsletter about technology, people, and power written by Charley Johnson, who works at Data & Society. It's a good listen between two people that know their stuff and, if you like what you hear, I recommend subscribing to Charley's thoughtful monthly posts on everything from DAOs to pseudonymity.
Tweets of note
Handpicked posts that caught my eye this week
- "I anticipate a tech policy agenda hell-bent on rampant content moderation, surveillance + policing online, all under the misnomer of 'safety'" - important thread from Digital Rights Watch's Samantha Floreani on the goings-on in the White House.
- "These aren't even particularly provocative tweets. They're basically dry policy analysis tweets by @freedomhouse on the impact of internet censorship." - Aroon Deep on India's latest use of its IT laws.
- "Over moderation of LGBTQ content and accounts by tech companies have created a digital closet. Now, those same content moderation forces are coming for discussions about abortion." - Alejandra Caraballo from Harvard Law traces the link between over-moderation of LGBTQ+ content and where we are today.
Job of the week
Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1000+ EiM subscribers or get in touch to enquire about a one-off posting.
Twitch is looking for a Director for Global Brand Safety, based out of London, to "support the efforts to ensure Twitch is a trusted and safe community for brands and advertisers".
The role isn't in the Trust and Safety team but works closely with it and will dedicate time to "support[ing] go-to-market strategy for brand safety tools and controls". It's listed on LinkedIn with a salary of at least $40,000 but expect more.