📌 Mendelsohn on moderation
Hello everyone — I hope you’ve had as good a week as these folks.
A special thanks to those of you who forwarded last week’s EiM to your friends and colleagues — it was my most-read newsletter yet by total opens.
Also, some personal news from me before we get into this link-heavy newsletter: I’m going freelance and moving to Sierra Leone for 12 months 🇸🇱. I've never lived abroad, so I'm excited and nervous in equal measure. It also means I'll have more time to grow EiM, which I can't wait to do.
This Twitter thread has more details, but if you have interesting media/digital/community strategy projects that I can help make happen, drop me a line: hello@benwhitelaw.co.uk.
Thanks, as ever, for reading — BW
Moderation gone mainstream
I almost didn't believe it happened: a discussion about content moderation first thing in the morning on live UK TV.
That's what you would have seen if you tuned into the popular morning TV show Good Morning Britain on Wednesday. Nicola Mendelsohn, Facebook’s EMEA chief, was sat on the famous GMB sofa, talking about the social network’s desire for international regulation. It was, in many ways, quite bizarre. (Thanks Julie, aka my mum, for the tipoff).
As well as reassuring presenter Susanna Reid about some fake ads bearing her name that had appeared on the platform (far from the first time that’s happened), Mendelsohn revealed that:
‘We now have 35,000 people around the world working to make sure the things you’re seeing each day, and the advertising that’s out there, is good and true.’
Here’s the clip if you missed it:
Dates for your diary
I came across a couple of interesting-looking events via my moderation experts Twitter list this week and thought I'd share:
- [UK] The Oxford Empirical Legal Studies discussion group is hosting a 3-hour academic workshop about platform governance on 6 March (the programme looks ace).
- [online] Article19 is hosting a Twitter Q&A about social network appeals processes with their Senior Legal Officer and digital rights lead, Gabrielle Guillemin, on 4 March at 2pm GMT.
Not forgetting...
L1ght, the online toxicity company whose mission is to keep children safe on the web, has raised $15 million in a seed round of funding which it will use to scale its platform and invest in R&D.
L1ght raises $15 million for AI that protects children from online toxicity
L1ght, a fledgling AI startup that wants to help technology companies combat online toxicity, bullying, and abuse, has raised $15 million.
Chris Gray, one of the Facebook moderators bringing a case against the social network, gave a very eloquent and entertaining talk at an event this week organised by technology ethics collective Tech Won’t Build It and Dublin Institute of Technology. Watch it here and read the round-up.
Life as a Facebook moderator: ‘People are awful. This is what my job has taught me’
Chris Gray, one of former employees suing tech giant, gives talk about gruelling work
Sizeable judgement in the US: YouTube has been ruled a private forum and is therefore not subject to the First Amendment. We are moving towards an internet made up of many differently policed sub-internets.
First Amendment doesn’t apply on YouTube; judges reject PragerU lawsuit
YouTube can restrict PragerU videos because it is a private forum, court rules
Reddit’s latest transparency report talks about how relying on users for content moderation is the best scalable solution it has come across. I wouldn’t disagree. (via good friend David Tvrdon)
Transparency Report 2019 - Reddit
Some nice work is being done at the University of Sheffield to train academics and early-stage researchers to understand and develop expertise in AI and NLP that can help alleviate online toxicity.
AI tools could consider context when detecting hate speech | E&T Magazine
University of Sheffield researchers are developing tools that could detect and tackle online abuse in a manner which accounts for differences in language between communities
Finally, a cool new bot from researcher Caroline Sinders and programmer Alex Fefegha about preventing harassment online.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.