
Talita Dias on tackling hate speech with civil and political rights

Covering: how the International Covenant on Civil and Political Rights (ICCPR) can tackle hate speech and maintain freedom of expression
Talita Dias, Shaw Foundation Junior Research Fellow in Law at Jesus College, Oxford University

'Viewpoints' is a space on EiM for people working in and adjacent to content moderation and online safety to share their knowledge and experience on a specific and timely topic.

Support articles like this by becoming a member of Everything in Moderation for less than $2 a week.


Myanmar. Syria. Yemen. Ethiopia. The devastating effects of both algorithmic and human-led content moderation processes at large online platforms have been widely reported over the last few years. Experts have increasingly called for a human-rights approach to content policy, which is slowly but surely turning into concrete policies (EiM #104) and instruments for self-regulation (#100).

The question I've been wrestling with is this: what human rights obligations do platforms and governments already have? And how, if at all, are they being applied online?

A few weeks back, I attended a webinar featuring Talita Dias, hosted by The Media and Peacebuilding Project, on this very topic. Dias is the Shaw Foundation Junior Research Fellow in Law at Jesus College, Oxford University, as well as a research fellow at the Oxford Institute for Ethics, Law and Armed Conflict. She recently wrote a chapter on how the International Covenant on Civil and Political Rights (ICCPR) — a human rights treaty, ratified by 173 states parties, that commits signatories to respect individuals' civil and political rights — can be applied to tackle hate speech while maintaining freedom of expression.

I'm not a legal expert by any means, so I asked Dias about the ICCPR, why human rights treaties have been underutilised in content moderation policymaking in the past and whether social media platforms have shown any interest in her work.

This interview has been edited for clarity.


For people who don't know, what is the International Covenant on Civil and Political Rights (ICCPR) and what relevance does it have to the question of combatting online hate speech?

The ICCPR is an international treaty that recognises a number of fundamental freedoms of a chiefly liberal nature, also known as ‘civil and political rights’. It builds on the (more famous) Universal Declaration of Human Rights, which, unlike the ICCPR, is not per se binding on states under international law. The ICCPR is one of the most widely ratified human rights treaties, with 173 states parties to date. Moreover, most of its provisions are reflective of customary international law – unwritten rules of international law that bind all states irrespective of treaty ratification.

Although the ICCPR and its customary counterparts are only legally binding on states and intergovernmental organisations, the human rights framework recognised therein is a powerful policy tool to guide the behaviour of other stakeholders. In particular, the ICCPR has inspired the social responsibilities of corporations to respect human rights, that is, to avoid infringing those rights and to address adverse impacts on them. The ICCPR can also guide individual behaviour – after all, it is through the actions or omissions of individuals that states, companies and other collective bodies may interfere with human rights. All in all, the ICCPR provides an internationally recognised legal framework and a common language that can be used by states, companies, civil society and other stakeholders to tackle various human rights abuses – offline and online.

Its relevance to the question of combatting online hate speech is twofold. On the one hand, Article 20 of the ICCPR requires states to prohibit particularly serious forms of online hate speech, namely, propaganda for war and advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence. This provision is a manifestation of the right to non-discrimination, which, more generally, obliges states to refrain from and prohibit any discrimination. It requires them to guarantee, online and offline, equal and effective protection to all persons against discrimination on any ground such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status. In sum, the most serious forms of online hate speech must be prohibited by law at all times, whereas states may be required to limit other, less serious forms of online hate speech to protect individuals from discrimination.

On the other hand, Article 19 of the ICCPR protects the rights to freedom of opinion and expression, which are fundamental in democratic societies. According to Article 19(2), individuals are, as a general rule, free to ‘seek, receive and impart information and ideas of all kinds, regardless of frontiers, either orally, in writing or in print, in the form of art, or through any other media of his choice’. This includes expressions and ideas that might shock, disturb, or even offend others, such as satire and criticism. Because online expressions of hatred are speech acts, they are in principle protected, unless they amount to prohibited speech acts (Article 20) or deserve to be limited for a legitimate purpose – i.e., to respect the rights or reputations of others, or to protect national security, public order, public health or morals. According to Article 19(3), these limitations are exceptional and must be laid down by clear laws, as well as necessary and proportionate to fulfil one of the legitimate purposes. This means that online hate speech acts falling short of prohibited speech (war propaganda and incitement) may be limited to give effect to, inter alia, the rights of others to non-discrimination. But both the definition of limited online hate speech acts and the measures limiting them must be clearly stipulated by law, and well-calibrated to the severity of the harm they cause.

BECOME A MEMBER
Viewpoints are about sharing the wisdom of the smartest people working in online safety and content moderation so that you can stay ahead of the curve.

They will always be free to read thanks to the generous support of EiM members, who pay less than $2 a week to ensure insightful Q&As like this one and the weekly newsletter are accessible for everyone.

Join today as a monthly or yearly member and you'll also get regular analysis from me about how content moderation is changing the world — BW

This might be a stupid question but why don't states that have signed up to the ICCPR adhere to its protections with regard to online speech?

This is a very good question. There are a number of reasons for non-compliance. First, with nationalism and xenophobia on the rise again, hatred is embedded in many governments and societies around the world. In this context, and given the pervasiveness of the internet and social media platforms, online hate speech has become a political strategy to sow division and polarisation in different societies, in developed (e.g. the United States, Poland and Hungary) and developing countries (e.g. Myanmar, the Philippines and Brazil) alike. In short, even though most states have signed up to the ICCPR, thereby undertaking a legal obligation to curb different forms of hate speech, many leaders have actively chosen to disregard this duty. Second, at the other end of the spectrum, in an effort to curb online hate speech, many governments and social media companies have clamped down on freedom of expression. Some of these efforts are genuine, and excess is often due to a lack of knowledge of the legal standards applicable under Articles 19 and 20 of the ICCPR. But without due regard to freedom of expression, and careful calibration of limiting measures, such as blanket content takedown policies for online hate speech, governments and companies may indiscriminately censor lawful speech acts, such as expressions of protest that cite or expose unlawful hate speech. This approach may ultimately undermine the very values it seeks to uphold, such as diversity and non-discrimination.

Other instances of unlawful limitation of freedom of expression occur under the pretext of protecting individuals from online hate speech and other online harms, such as disinformation and misinformation, when the real aim is to oppress dissenting voices and eliminate diversity and independence in the online media space. Finally, some governments are not doing anything about online hate speech – neither directly expressing or endorsing it nor limiting it – because they simply lack the necessary legal or institutional capacity to understand and enforce the ICCPR’s applicable legal framework. This is the reality in many developing countries, especially those facing armed conflict or struggling to transition to democracy.

Your recent chapter explains how platforms can moderate hate speech with the ICCPR in mind. Can you explain your framework?

Yes. The chapter focuses on content moderation of online hate speech as a means to give effect to the relevant provisions of the ICCPR, especially Articles 19(3) and 20(2). This focus is justified because content moderation is a flexible, comprehensive set of measures that could address different types of online hate speech that fall under the scope of the ICCPR. Although we tend to associate ‘content moderation’ with content deletion or takedown, whether by a human moderator or an automated system, it goes way beyond that to include a wide array of content governance measures. Examples include labelling, redacting or deprioritising content, warning or suspending users, or more long-term measures such as algorithmic reform. At its heart, content moderation is about balancing freedom of expression and the protection of other important rights and interests in the online environment. Quite literally, moderating content is about finding moderate, reasonable responses to different speech acts. As I mentioned earlier, this balancing exercise is indispensable to give effect to Articles 19 and 20 of the ICCPR.

So, my proposed framework is really about fleshing out, tailoring and applying those very general provisions (Articles 19 and 20 of the ICCPR), adopted in the 1960s, to the phenomenon of online hate speech, using content moderation as a key implementation measure. Specifically, I propose two things.

First, I put together a classification or taxonomy of online hate speech under Articles 19 and 20 of the ICCPR. Second, I propose specific content moderation and other measures to tackle different categories of online hate speech. After all, hate speech manifests itself in different ways, factually and legally, depending on, among other things, the seriousness of the content, the speaker’s intent, the audience targeted and the societal context. Online hate speech is not a single phenomenon or even a legal concept, but an umbrella term that captures different speech acts. It covers various expressions of hatred or opprobrium against individuals based on protected characteristics, and each of these expressions may have different legal consequences under the ICCPR.

Based on those legal consequences, I classify online expressions of hate into prohibited, limited and protected speech acts. ‘Prohibited online hate speech’ is speech that states are required to prohibit under Article 20 of the ICCPR, i.e., war propaganda and advocacy of hatred that explicitly or implicitly incites others to discriminate, be hostile or commit violence against individuals on the basis of race, nationality, religion or other characteristics protected under Article 26 of the ICCPR. Examples include direct calls for violence or expressions of racial superiority, such as those we have seen in the context of the Rohingya crisis and against black English footballers following the Euro 2020 final. Because these speech acts must be prohibited by law, states and companies should seek to take them down when their character is manifestly clear.

Conversely, ‘limited online hate speech’ covers speech acts that states are authorised, but not required, to limit for a legitimate purpose by law, provided the limitation is necessary and done in a proportionate way. This category includes Holocaust denial, which, in many countries, significantly increases the risk of discrimination or violence, and the use of emojis to attack individuals on the basis of protected characteristics. As I mentioned earlier, responses to or measures to limit/tackle these speech acts need to be calibrated to the seriousness of the speech act, taking into account who the speaker is, their intention, the means of dissemination, the audience, and the broader societal context. Thus, states and platforms should not simply take down these acts but adopt a number of content moderation measures that are tailored to the circumstances, such as content labelling, deprioritisation and redaction.

Lastly, ‘protected online hate speech’ covers acts that fall short of amounting to either prohibited or limited speech acts, and must be protected by states. These are usually expressions of hatred against institutions or religious tenets, rather than individuals. Because they are protected, they cannot be moderated but must be prevented or mitigated by other actions, such as educational or awareness-raising campaigns.
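To make this three-part taxonomy concrete, here is a minimal sketch of how a platform's policy layer might encode it. This is a hypothetical illustration in Python rather than anything drawn from Dias's chapter: the names (SpeechCategory, recommended_measures) and the severity threshold are assumptions for the sake of the example.

```python
from enum import Enum, auto

class SpeechCategory(Enum):
    """Dias's three ICCPR-based categories of online hate speech (hypothetical encoding)."""
    PROHIBITED = auto()  # Art. 20: war propaganda; incitement to discrimination, hostility or violence
    LIMITED = auto()     # Art. 19(3): may be limited by law, where necessary and proportionate
    PROTECTED = auto()   # Art. 19(2): lawful expression that states must protect

def recommended_measures(category: SpeechCategory, severity: float = 0.0) -> list[str]:
    """Map a category (plus an illustrative 0-1 severity score for 'limited' speech)
    to calibrated responses, instead of a binary takedown-or-ignore choice."""
    if category is SpeechCategory.PROHIBITED:
        # Must be prohibited by law; take down when its character is manifestly clear.
        return ["take down content", "warn or suspend the user"]
    if category is SpeechCategory.LIMITED:
        # Calibrate to speaker, intent, audience, means of dissemination and context.
        if severity >= 0.7:  # threshold is an arbitrary placeholder
            return ["deprioritise in feeds", "redact the offending portions"]
        return ["label content", "limit resharing"]
    # Protected speech: no content moderation; address hatred by other means.
    return ["no action on content", "education or awareness-raising campaigns"]

print(recommended_measures(SpeechCategory.LIMITED, severity=0.4))
# -> ['label content', 'limit resharing']
```

The point of the sketch is the shape of the decision rather than the specific measures: each category carries different legal consequences under the ICCPR, so the response space is graduated rather than binary.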

What can be gained by states and social media platforms adopting this approach? What's at stake here?

A lot is at stake here. Ultimately, this approach seeks to guarantee a free, diverse and safe online media environment for all of us. That is, a space where we can freely express our views without fear of discrimination or oppression from other users, platforms or governments, but with in-built safeguards against content that undermines this freedom and other human rights. These safeguards must be as diverse as the human rights impact of different speech acts, to ensure that the different rights and interests at stake are well-balanced. This is a very sensitive area – different views, interests and stakeholders are involved, from political and corporate leaders to content moderators and individual users in remote areas of the world. Thus, decisions may not be easily made, given the highly contextual nature of speech. But these lines should be drawn as carefully as possible, and, where wrong calls are made, effective remedies must be afforded to the individuals affected.

So a lot can be gained from adopting an ICCPR-consistent approach to content moderation of online hate speech. For one, states and platforms can avoid the existing binary and reactive approaches, where they either simply take down content or do nothing about it. The ICCPR’s flexibility allows states and companies to adopt more balanced responses to the phenomenon. For another, users can feel more empowered online, knowing that their human rights will be upheld and whatever decision is made about the content they post or receive, there will be avenues for redress. Finally, if states, platforms and users across the world follow the ICCPR when it comes to moderating online hate speech, we can ensure that responses are overall consistent, whilst affording the necessary margin of discretion in different social contexts.

In your opinion, why have core human rights treaties such as ICCPR been overlooked as a mechanism for tackling hate speech in the past?

I think the ICCPR has been overlooked because people tend to associate hate speech and content moderation with their most extreme manifestations, i.e., incitement to genocide or other atrocities and content removal or censorship. Thus, in previous instances where hate speech has caused violence and division, such as during the Second World War or the Rwandan genocide, greater attention was paid to the applicable international criminal law framework. What lies in between those extremes is often overlooked, i.e. less serious expressions of hatred that can be addressed by more ‘moderate’ measures, such as redaction or warnings to viewers. And because the ICCPR is precisely about moving away from extreme, binary approaches, to carefully balance the various human rights and interests at stake, it has been neglected. But the digital age and social media, in particular, have brought to the fore the diversity of online hate speech acts, as well as the diverse impact they can have on human rights in different corners of the world. So I hope this is a wake-up call for us to revive that fundamental legal instrument, the ICCPR.

What interest has there been in your research, particularly from those social media platforms that are under pressure to apply human rights to content moderation processes?

So far, my research has sparked interest from governments (e.g., diplomats, legal advisors and domestic policymakers), civil society organisations, the media and social media whistle-blowers, but, sadly, not social media platforms themselves. And I think this is a reflection of their lack of willingness to change their business practices and models, as recent revelations about Facebook seem to suggest. Social media platforms have been very reactive in their responses to online hate and other online harms: if there is too much hate online, they excessively take down content, and if these content takedowns get to the point where governments and users start complaining about private censorship, they revert back to a laissez-faire model.

But platforms need to understand that online hate speech is a complex problem that cannot be addressed by simple, binary solutions. There is no one-size-fits-all response to online hate, whether you're talking about the content itself or the context where it occurs. So they need to start thinking more holistically about how to diversify their content moderation measures, calibrate these to different types of online hate speech, and ensure that affected users can challenge their decisions. More fundamentally, companies need to understand that they are not simply passive observers or moderators of online hate but may actively contribute to it through their engagement-based recommendation algorithms and advertisement policies. The problem will never be addressed at its root until these features of their business models are fundamentally changed. And I think the ICCPR provides companies with the necessary guidance to undertake those pressing reforms.

You mentioned in your evidence to the UK Parliament's Sub-committee on Online Harms and Disinformation that "striking the balance between online safety and other fundamental rights remains the most pressing challenge facing Parliamentary scrutiny". How confident are you that such a balance will be struck?

I am confident that many parliamentarians and other policymakers in the UK and beyond are aware of the problem and genuinely willing to strike the appropriate balance between freedom and safety. This was my impression during my oral evidence session before the Subcommittee and from their recent report on the UK’s Online Safety Bill. But I am not very confident that the UK government is up to the challenge. I question how open the Department for Digital, Culture, Media & Sport is to amending the Online Safety Bill in light of the Subcommittee’s report and the vast amounts of evidence and suggestions for change presented before Parliament. I fear ministers will want to use the Bill to clamp down on freedom of expression without really addressing the problem of online hate and online harms, just for the sake of finding a quick, easy way out.

Likewise, the Bill leaves key decisions that must be made by law, such as what amounts to illegal content and what measures are needed to address it, to the government and the regulator, i.e., the Secretary of State and Ofcom. This is not a good sign in terms of compliance with the ICCPR, which requires prohibited and limited speech to be tightly defined by law. I do hope, however, that the work of the Subcommittee and the Joint Committee on the Online Safety Bill will encourage other MPs to make fundamental amendments to the Bill to, at the very least, afford greater protection to freedom of expression.


Want to share learnings from your work or research with 1000+ people working in online safety and content moderation?

Get in touch to put yourself forward for a Viewpoint or recommend someone that you'd like to hear directly from.