
India's grievance committees, Yelp clamp down on paid reviews and Musk's main woman

The week in content moderation - edition #190
Narendra Modi at a meeting of BRICS heads of state in 2017 by Press Service of the President of the Russian Federation/www.kremlin.ru and licensed CC BY 4.0. Colour applied

Hello and welcome to Everything in Moderation, your at-a-glance guide to the week's content moderation and online safety news. It's written by me, Ben Whitelaw.

A warm welcome to new subscribers from Khoros, Bodyguard, Google, AnnualReviews.org, NextDoor and elsewhere. Whether you're receiving this for the first time or a long-time follower, I'd love to hear what you think - drop me an email or hit the thumbs at the end of today's edition.

That's enough preamble, here's everything in moderation this week - BW


New and emerging internet policy and online speech regulation

The big story this week comes from India, where the government have announced the first three committees that will hear complaints brought by users against internet intermediaries.

The concept of Grievance Appellate Committees was announced last June (EiM #163) to ensure "that Internet in India is Open, Safe & Trusted". Now we have the names of the people who will take on that sizeable task, including a retired police officer and a former general manager of Punjab National Bank. Users can't submit cases yet, though: the website for doing so reads "available soon" and will reportedly come online on March 1.

One case that doesn't need to go to Committee is that of the recently released BBC documentary, India: the Modi Question, which examines the prime minister's role in the 2002 Gujarat riots. That's because Twitter and YouTube have already complied with orders from the Ministry of Information and Broadcasting to remove links to it on grounds that it represents “hostile propaganda and anti-India garbage”, according to a government adviser. Mandating that public journalism be removed on a whim is obviously not a good look for a government and digital rights groups are not happy.

On the topic of platform relations with elected officials, Thierry Breton has warned there is "huge work ahead" to get Twitter to comply fully with the Digital Services Act. Following a call on Tuesday with Elon Musk —the first since the 'fly by our rules' tweet— Breton said that "the next few months will be crucial to transform commitments [to regulation] into reality".


Features, functionality and technology shaping online speech

The safety tech sector is "on track to hit £1bn in annual revenues" in the next five years, according to an industry group. The surprising stat was published by Safety Tech Network in a 2022 recap, which made special note of the influx of startup investment (something I cover regularly here in EiM).

£1 billion is obviously a long way from making safety tech a major UK industry (£60bn is needed just to break into the top ten), but it may yet pass declining sectors like steel on the way down.

(Disclaimer: I hosted a podcast for the Safety Tech Network last year in which I interviewed industry experts)

Interested in building products for a safer web? Re-read this Getting to Know Q&A with Lauren Wagner, formerly of Facebook and now at early-stage startup investor Link Ventures, about her efforts to build "the social web stack".

Lauren is just one of a dozen experts I've interviewed over the last year as part of my Viewpoints series, including Jess Mason (Clubhouse), Tesh Goyal (Google) and Bri Riggio (Discord). 


Social networks and the application of content guidelines  

Yelp has published its latest annual trust and safety transparency report, which tells a story of a booming trade in incentivised reviews. 1,740 accounts were closed in the 12 months to December, up 128% on 2021. However, the platform added fewer alerts to business pages, suggesting it is getting better at taking action against fraudulent reviews.

Buried deep in the report there's also a wild story of a lawyer who tried to exploit a loophole in California to obtain information about an anonymous Yelp poster who was critical of their casework. Yelp, thankfully, was able to protect their information.

Sidenote: I was hoping to compare this year's numbers to 2021's report (EiM #146) but it's no longer accessible via Yelp's website. Which raises the question: what's the point of a transparency report if you can only read it for a limited time?

Update: Yelp's comms team reached out to let me know that the 2021 trust and safety report is available on its site, after all. User error on my part.

In a move that demonstrates the platform's chaotic approach to online speech, Twitter suspended a white nationalist just 24 hours after he returned to the platform. Nick Fuentes was banned in July 2021 for "repeated violations of the Twitter Rules" and began posting antisemitic content almost immediately after being reinstated on Tuesday, reports Salon.

Beyond the headline, the piece has some good detail about how far-right activists are using platforms in different ways to stay on the right side of policies while spreading content and ideas. My read of the week.


Those impacting the future of online safety and moderation

Imagine being described as "chief executor of Elon Musk's whims". Well, that's how this detailed Bloomberg profile, published this week, characterises Ella Irwin, Twitter's head of trust and safety.

The piece has some interesting background on her previous roles in cyberattack prevention and marketplace abuse at Twilio and Amazon as well as a management style that one former colleague summed up as requiring "a certain standard of excellence."

Within days of joining Twitter, she reportedly sent her new team "a multi-page document advising them on how best to work with her" and often clashed with her predecessor Yoel Roth (EiM #182), who was at the same level as her.

Despite including emailed comment from Irwin, the piece gives no sense of why she's agreed to do Musk's bidding or what her long-term game is.

Tweets of note

Handpicked posts that caught my eye this week

  • "As Zuboff says, content moderation is quicksand" - DCN's Jason Kint shares a Financial Times interview with the one and only Shoshana Zuboff.
  • "It's time to rethink platform transparency reporting." - Anna Sophie Harling shares news of a new Ofcom paper in the latest edition of the Trust and Safety Journal.
  • "Everything is content moderation these days" - Felix Simon reacts to Medium's decision not to amplify content created by generative AI tools.

Job of the week

Share and discover jobs in trust and safety, content moderation and online safety. Become an EiM member to share your job ad for free with 1600+ EiM subscribers.

The Trevor Project, a suicide prevention charity for LGBTQ young people, is looking for a Manager to oversee the team responsible for its online community.

The core responsibility is managing a team of community moderators and acting as an escalation point for policy enforcement across the platform's 400,000 users. Not easy but certainly fulfilling.

The salary for the role is between $75,000 and $90,000, with health insurance and flexibility to work where you like in the US.