Speech policy's 'Super Tuesday', photo bots and Clegg's greatest hits
Welcome to Everything in Moderation, your weekly newsletter about content moderation cooked up by me, Ben Whitelaw.
This is the last EiM newsletter of 2020. The next one will be filled with renewed hope and optimism and will hit your inboxes on 1st January 2021.
If you've enjoyed the newsletter this year, perhaps you'd be willing to share it with a friend, send it around on an email list or share a link on LinkedIn or maybe Parler (no judgement)? I'm edging towards 350 subscribers and each and every one is a reason to keep sending. Thank you for opening and reading.
There's an avalanche of news to get into this week. Here's what you need to know – BW
📜 Policies - company guidelines and speech regulation
You wait for months for an announcement about content legislation that might change the fabric of the internet and then two come along at once. That was the case on Tuesday: the Digital Services Act (European Union) and the Online Harms response (UK).
🇪🇺 In Brussels, legislators announced large fines (up to 6% of revenue) for companies that fail to limit illegal material on their platforms, as well as insisting on greater access to internal data and the appointment of independent auditors to ensure compliance with the new rules. Legal firm Allen & Overy has a broader overview.
The legislation, which will now be debated with EU member states and is unlikely to come into force before 2023, was accompanied by a competition counterpart, the Digital Markets Act, designed to ensure small platforms are not squeezed out by Google, Facebook et al.
I found the tone of coverage in US news outlets very interesting: The Washington Post noted that American companies would have to "submit to particularly aggressive rules" while the WSJ accused the EU of wanting to "expand their role as global tech enforcers". Talk about bitter.
🇬🇧 Across the Channel, the UK government published its final response to the 2019 Online Harms white paper (EiM #51), which was designed to force platforms to have a duty of care towards users.
As expected, Ofcom – the UK's communications regulator – has been granted the power to issue fines of up to 10% of a company's revenue. Previously mooted criminal liability for company execs has been dropped for now.
Graham Smith has written a legally detailed summary, with a particular focus on the definition (or lack thereof) of harm, while Heather Burns at Open Rights Group notes a real risk of 'collateral censorship' by service providers, who will have no choice but to define and remove content themselves.
🇺🇸 As if all that wasn't enough, over in the US, some of the web's biggest companies, including eBay, Cloudflare and Tripadvisor, have created Internet Works, an industry lobbying group to explain the benefits of Section 230 and the implications of legislative changes. Section 230 is in the firing line, courtesy of one Donald Trump, but this tech collective could mark the start of a fightback.
💡 Products - features and functionality
Flickr is launching moderation bots to 'automatically update mis-moderated content to the correct moderation levels according to our established policies', it announced this week (Thanks to Adam T for the tip).
It will also increase the visibility of the Flag Photo feature and overhaul the categories available via its Report Abuse system.
Flickr has continued to make a loss since being bought by SmugMug in 2018 and, just a year ago, CEO Don MacAskill said the company was burdened by the 'increasing cost of operating this enormous community'. But it has continued to develop tools for its paying Pro community and this seems like another feature designed to make the service worth shelling out for.
💬 Platforms - dominant digital platforms
Twitch has announced a new Hateful Conduct and Harassment Policy to better protect women, LGBTQ+ users and people of colour following a spate of toxic incidents.
The policy, which will go live on 22 January 2021, contains several updates and is worth reading in full. Highlights include:
- separating Hateful Conduct and Harassment into three areas: Harassment, Hateful Conduct, and Sexual Harassment
- banning the encouragement of doxxing, DDOS attacks and raiding of social media profiles
- prohibiting the labelling of victims as 'crisis actors'
Not being a regular user, I was surprised to hear TikTok is home to 'get rich quick' schemes. Although maybe not for long: the video platform amended its community guidelines this week to ban content that 'depicts or promotes Ponzi, multi-level marketing, or pyramid schemes'. Strong 1990s-email-chain-vibes, that.
👥 People - those shaping the future of content moderation
It's difficult to put my political views on Nick Clegg to one side but, even so, it's not hard to see that the former UK politician turned Facebook spokesperson is now shaping content policy on a global scale.
It was Clegg who was sent to speak at Facebook's two-day Fuel for India online event (an ironic name, considering the flames the platform has fanned in the country in recent months). During the discussion, he sang a few of Facebook's favourite hits, according to The Print: the high removal rate of illegal content through AI (now apparently 99%), the creation of the Oversight Board and, last but not least, a warm welcome for government regulation (India is seriously thinking about it).
I've written about Clegg and the revolving door between the regulator and the regulated (EiM #62). In a week when some of Clegg's former colleagues in Westminster made an announcement that would affect his new employer's whole business, my unease hasn't changed.
🐦 Tweets of note
- 'Thread breaking bits out follows' - Will Perrin produces a handy tweeted digest of the notable aspects of the final response to the Online Harms consultation, out this week. Ellen Judson, a DEMOS researcher, also produced a similarly useful thread of first thoughts.
- 'To other employees: I need your help. Signal contacts in my bio.' - WSJ reporter Jeff Horwitz calls out to Facebook employees in India following the latest story about violence linked to account takedowns.
- "Social media is us. Everyone who has used it has now experienced trauma and a mental health crisis." - Writer Heidi N.Moore on the effect of poorly thought through moderation systems.
Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.