How to minimise misinformation, Meta disputes India leak and justice algorithms
Hello and welcome to Everything in Moderation, your weekly whistle-stop tour of the big stories in content moderation and online safety. It's written by me, Ben Whitelaw.
What a week. Between the Musk/Ye love-in, the fallout between Meta and parts of the Indian media and Birdwatch taking off in the US, there have been few like it since I started writing EiM back in 2018. I've done my best to make sense of it all in 1400 words. If you find today's edition helpful, consider becoming a supporting member or petitioning your organisation or institution to do the same.
Welcome to EiM's newest subscribers from Will Media, Taso Advisory, Pocket, Salesforce, UCL, Tech Against Terrorism and a flurry of folks from Meta. Thanks for letting me into your inbox — reach out if you have any questions.
Here's what you need to know this week — BW
Policies
New and emerging internet policy and online speech regulation
The big story of the week comes from India, where a row has erupted between Meta and independent media outlet The Wire over the outlet's reporting on the takedown of a satirical cartoon mocking a leading politician from the BJP, India's ruling party. I won't go into the ins and outs here — Newslaundry has a comprehensive chronology — but it's safe to say the story will have serious implications for Indian media and technology regulation, whether the documents on which the reports are based are found to be genuine or otherwise [if you want to share more information about what happened, drop me a line in confidence].
I will say one thing: it is not a good look, as a company exec, to publicly dispute the veracity of an email address in a leaked document only for several journalists to contradict that claim. It doesn't mean that the contents of the email are real but it's an overly confident rebuttal and one that, if experience tells us anything, we should be especially wary of. More on this next week.
Start with a dose of anti-semitism, add the world's richest free speech absolutist and sprinkle in some censorship laws in Florida and Texas and voilà, you have the "incredible difficulty of knowing what you’re supposed to do" as a platform, according to legal experts quoted in this Washington Post analysis. The piece majors on Kanye West/Ye's outburst but also notes that both Instagram and Twitter declined to say which specific rules his posts violated, which feels like a missed opportunity to educate the general public on how moderation works. Maybe next time (because there will be a next time).
Activists and human rights groups "must continue to demand that Meta respect peoples’ rights and hold it accountable for its censorship" following the publication of the recent impact assessment of its enforcement of speech rules on Arabic content on Facebook and Instagram (EiM #175). That's according to Access Now's Marwa Fatafta who, writing for 972 Mag, draws particular attention to the fact that Meta has failed to build speech classifiers in Hebrew. My read of the week.
Products
Features, functionality and startups shaping online speech
Users of Birdwatch — Twitter's decentralised fact-checking programme — have focused on reviewing tweets relating to COVID, vaccination, and the US government’s response to the pandemic, according to a new analysis by The Verge of almost 33,000 notes. It comes just a week after Birdwatch was made available to all US users and some 21 months since the first pilot was announced (EiM #97). Will be really interesting to see how this scales and what its impact is.
This one is from last week but an interesting development: Stanford researchers have developed a jury learning algorithm that seeks to represent marginalised voices, those traditionally and disproportionately affected by toxic content. The model, created by PhD student Mitchell Gordon and his seven-strong team of researchers, predicts individual jurors’ responses to controversial posts based on the annotations it has been trained on, then resamples the jury 100 times to produce a toxicity score.
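If you're curious how that resampling works in practice, here's a minimal sketch based on the paper's public description — the function names, the 0–4 rating scale and the stand-in rating model are all my own illustrative assumptions, not the team's actual code:

```python
import random
import statistics

def predict_rating(juror_id, post):
    """Stand-in for the trained jury-learning model, which would predict
    how a specific annotator rates a post given their annotation history.
    Here it just returns a deterministic pseudo-random rating from 0-4."""
    return random.Random(f"{juror_id}:{post}").randint(0, 4)

def jury_score(post, annotator_pool, jury_size=12, n_trials=100):
    """Sample a fresh jury n_trials times, take each jury's median rating
    as its verdict, then average the verdicts into a single score."""
    verdicts = []
    for _ in range(n_trials):
        jury = random.sample(annotator_pool, jury_size)
        ratings = [predict_rating(juror, post) for juror in jury]
        verdicts.append(statistics.median(ratings))
    return statistics.mean(verdicts)

pool = [f"annotator_{i}" for i in range(500)]
print(jury_score("an example post", pool))  # e.g. ~2.0 on the 0-4 scale
```

The interesting design choice is that disagreement between annotators is carried all the way through to the final score rather than being averaged away during training.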
Platforms
Social networks and the application of content guidelines
Not much detail to this one but Twitter is reportedly reviewing its policy of permanently banning users for so-called lesser offences, such as sharing misleading information (which I'm not sure you can call lesser after this week). That doesn't mean Donald Trump is likely to be back on the platform any time soon, according to reports, but it might mean fewer deplatformings in the future.
Talking of the former US president, Trump's Truth Social app was finally allowed into the Google Play Store this week following a commitment that it will enforce its moderation guidelines (EiM #172). How likely that is remains to be seen: its Moderation FAQ contains a lot of "watch this space".
A curious story now about PayPal, which has said that new plans to fine its customers up to $2,500 for sharing "misinformation" were released by mistake. The updated Acceptable Use Policy, due to take effect on November 3, was spotted over the weekend and was criticised in some quarters. Trial balloon or a real gaffe?
Twitch has claimed that it resolves 80% of user reports within 10 minutes, a frankly astonishing stat and one that, if true, marks a U-turn since the hate raids (EiM #131) this time last year. The claim was made in a panel featuring Angela Hession (VP of Trust & Safety), Alison Huffman (VP of Product, Community Health) and Connie Chung (Head of Global Policy, Trust and Safety) at TwitchCon 2022 last weekend and reported by Black Girl Nerds.
This story is almost too mad to include but also too crazy not to: Meta vice president of global policy Nick Clegg and two colleagues (including a trust and safety director) have been 'inadvertently' named in a lawsuit alleging that they accepted bribes from OnlyFans as part of a scheme to help the adult platform dominate its industry rivals. Erm yeah.
People
Those impacting the future of online safety and moderation
Tumblr has had a flurry of mentions in EiM over the past month (including its new community labels feature, EiM #174) and so it makes sense that the CEO of its parent company has come sharply into view too.
Automattic co-founder Matt Mullenweg doesn't crop up often in the technology press (he was once described as having the "quiet alter-ego thing down pat") but it was his blog post in September that sought to explain why Tumblr wasn't able to return to the laissez-faire moderation approach of old.
The Guardian dissects Mullenweg's blog post in this piece and the sense I get is of a guy who understands the complexities inherent within content moderation. Indeed, in a 2021 interview with Protocol, he said just that: “I have an appreciation for the challenge of moderation on Facebook.”
If you go back far enough on his blog, you can find clues about his attitude toward maintaining civility in online spaces. In 2009, a post listing six ways to "ruin whatever shred of community you had on your site" includes "Don't moderate". Seems like those views still apply, 13 years later.
Tweets of note
Handpicked posts that caught my eye this week
- "The new solution to content moderation challenges. Just have Elon call up everyone who violates the rules to have a heart to heart" - Techdirt's Mike Masnick is joking but it's not beyond Elon Musk to do this.
- "perfect encapsulation of the current state of center-left discussions around content moderation. i have no notes" - Evan Greer of Fight for the Future shares a screengrab that perfectly encapsulates these trying times.
- "Industry has come a long way since Christchurch but we still expect them to learn lessons." - Ofcom principal Dr Murtaza Shaikh shares a new report from the proposed UK speech regulator.
Job of the week
Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1400+ EiM subscribers.
I haven't come across a role that fits the bill for this week's Job of the Week. However, it's a good chance to remind all subscribers that if your organisation is hiring for roles related to speech governance, online safety or content moderation, drop me a line to find out how you can advertise right here.