Perhaps one of the truly saddening facts about hate speech, and online hate speech in particular, is that it does not just stay online. Online hate speech can – and all too often does – lead to real-world hate and violence.
Not surprisingly, the advent of online platforms and the subsequent ability to distribute hate speech – anonymously – has made hate speech a global concern. The European Union, like many other jurisdictions, treats hate speech as a criminal offence.
In trying to define hate speech, one of the most common analogues is defamation. Roman law, for example, barred publicly shouting abuse at someone, which was seen as contrary to good morals. Yet, the Roman Empire itself might not always have had good morals.
Of course, there are two contradictory lines of argument on hate speech. The first holds that speech should remain unregulated – the anti-ban standpoint. The second advocates the enforcement of penalties aimed at preventing hate speech.
In both versions, the term hate speech carries political, legal, cultural, and sociological connotations. Taking this approach, Germany, for example, uses the term Volksverhetzung – the incitement to hatred – when alluding to hate speech.
This concept of hate speech suggests that hate speech is the public disturbance of peace through an attack on human dignity. It features three key elements:
- instigation of hate;
- urging others towards violence; and,
- using insults, scorn, slander, vilification, etc.
In other words, hate speech is generally seen as an agitation against a specific group within a certain population. In more severe cases, hate speech can even mean war crimes and crimes against humanity.
The EU currently prefers a rather functional classification of hate speech. It sees hate speech as incitement to violence or hatred against a group – usually the out-group – defined in relation to race, religion, or ethnicity. Simultaneously, the EU’s regulation stresses the need to safeguard freedom of speech.
Meanwhile, a recent book likens hate speech to poisonous mushrooms. A recent United Nations investigation has found that hate speech has been moving more and more into the mainstream, and that its increased prevalence is undermining peace, democracy, and our common humanity.
Given all this, the outlook for finding a single and, above all, universal definition of hate speech seems problematic, to say the least. Still, hate speech usually occurs through the communication of animosity or disparagement of an individual or a group.
Hate speech sets an in-group against an out-group, often by reference to the out-group’s specific characteristics such as race, color, national origin, sex, disability, religion, sexual orientation, etc.
Hate speech that targets the other – the out-group – often occurs online. To detect and eliminate online hate speech, many corporations that run platforms on which hate speech is likely to occur have been working on what is called “the automatic detection of hate speech”. Simultaneously, they argue that we should also be aware that negative speech is not necessarily hate speech.
Such acts on online platforms often express no more than discontent, resentment, blame, and rudeness. As a consequence, we might need to differentiate between a) merely uncivil speech and b) intolerable speech or hate speech.
In all that, it becomes clear that hate speech, in general, makes an explicit reference to the emotion of hate. For that, hate speech uses speech devices such as: negative stereotyping, dehumanizing speech, expressions of violence, harm, or killing, as well as offensive language such as insults, slurs, degrading metaphors, and wordplay. Yet, virtually all forms of hate speech share, more or less, eight common elements:
- Hate speech targets a group or individual as a member of a group;
- Its content, in online and other messages, expresses hatred;
- Hate speech causes harm;
- The speaker intends harm and often plans a hurtful activity;
- Hate speech incites destructive actions beyond the speech itself;
- Hate speech occurs either in public or is directed at members of an out-group;
- The context of hate speech makes violence possible; and finally,
- The speech has no redeeming purpose.
To counter hate speech, the EU’s Code of Conduct on Countering Illegal Hate Speech Online obliges online providers – e.g. Facebook, YouTube, Twitter (now X), Instagram, TikTok, etc. – to prohibit hate speech on their platforms; to respond quickly to the detection of hate speech; to provide updates on enforcement statistics; and to promote counter-speech.
As a general guideline, hate speech tends to involve race, ethnicity, nationality, religion, caste, disability, sexual orientation, and gender identity. Its expressions are all too often dehumanizing in character. Yet, hate speech is not limited to human targets; it can also relate to animals, as found, for example, in advocacy of animal cruelty.
In the end, virtually all forms of hate speech are a violation of social and moral norms and the common values of our society. Beyond that, hate speech also weakens the foundations of our common humanity – a term preferred by, for example, the United Nations.
In short, one might see four possible versions of a definition of hate speech. Firstly, a teleological definition would frame hate speech in terms of its inherent character and intention: such speech is directed towards having a specific negative impact.
Secondly, a consequentialist definition would focus on the impact of hate speech. It concentrates on hate speech that has an inherent and contextual causal relationship to specific negative impacts.
Within the consequentialist – or outcome-focused – definition, Susan Benesch’s model helps to ascertain hate speech. Her framework includes five variables designed to analyze the seriousness and harmfulness of hate speech, namely:
- The degree of a speaker’s influence over an audience;
- The grievances or fears of an audience that can be cultivated by the speaker;
- Whether or not a speech act is understood as a call to violence;
- The social and historical context – previous episodes of violence;
- Whether the means of dispersing hate speech is influential (media outlets, online platforms).
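Benesch’s five variables could, in principle, be combined into a rough severity assessment. The following sketch is purely illustrative: the attribute names, the 0–1 scale, and the equal weighting are my assumptions for demonstration purposes, not part of Benesch’s model itself.

```python
# Hypothetical sketch: scoring a speech act on Benesch's five variables.
# The 0-1 scale and equal weighting are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class SpeechActAssessment:
    speaker_influence: float   # degree of the speaker's influence over the audience
    audience_grievance: float  # grievances/fears the speaker can cultivate
    call_to_violence: float    # is the act understood as a call to violence?
    violent_context: float     # social/historical context, prior episodes of violence
    medium_reach: float        # influence of the dispersal medium (platforms, outlets)

    def severity(self) -> float:
        """Average the five variables into a rough 0-1 severity score."""
        scores = (self.speaker_influence, self.audience_grievance,
                  self.call_to_violence, self.violent_context, self.medium_reach)
        return sum(scores) / len(scores)


assessment = SpeechActAssessment(0.9, 0.8, 1.0, 0.7, 0.9)
print(round(assessment.severity(), 2))  # → 0.86 (high score: more dangerous speech)
```

A real assessment would of course weight and contextualize these variables qualitatively; the point here is only that the model is operational, not merely descriptive.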
Thirdly, a more formal and legalistic definition might disregard the impact of hate speech while concentrating on the actual form, idea, and ideology that is expressed in hate speech. Fourthly, a consensus definition would see hate speech in relation to some form of general agreement as to how hateful speech acts can be defined.
Whichever definition of hate speech prevails, it will provide the foundation for the aforementioned automated hate speech suppression. This is particularly crucial since a significant volume of hate speech occurs online.
An agreed definition could serve as the signpost for developing even more explicit lists of contextually relevant speech acts that constitute hate speech – lists useful to automatic, AI-based or algorithm-based detection.
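In its simplest form, such a list-based approach amounts to matching messages against a lexicon of dehumanizing terms. The sketch below is a minimal illustration of that idea; the lexicon contents and the whole keyword-matching approach are my assumptions here – production systems rely on trained classifiers and contextual models, precisely because, as noted above, negative or rude speech is not necessarily hate speech.

```python
# Minimal illustrative sketch of lexicon-based flagging of hate speech.
# The lexicon is hypothetical; real systems use trained, context-aware models.
import re

# Illustrative dehumanizing terms (an agreed definition would inform this list).
HATE_LEXICON = {"vermin", "subhuman", "exterminate"}


def flag_message(text: str) -> bool:
    """Return True if the message contains any term from the lexicon."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return bool(tokens & HATE_LEXICON)


print(flag_message("They are vermin and should go"))  # True  - dehumanizing term
print(flag_message("I strongly disagree with you"))   # False - rude is not hateful
```

The second example shows the limit of any keyword list: it cannot, on its own, separate merely uncivil speech from intolerable speech, which is why an explicit, contextually grounded definition matters.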
In the end, defining hate speech is not like Supreme Court Justice Potter Stewart’s famous statement on pornography – I know it when I see it. He also said, I shall not today attempt further to define the kinds of material. Unlike Justice Stewart, we can – and have – defined hate speech. Firstly, hate speech is dehumanizing. Secondly, hate speech is any kind of communication in speech, writing, or behavior that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are – that is, based on their religion, ethnicity, nationality, race, color, descent, gender, or other identity factors.
Photo (source: Lara Klikauer – Wednesday, November 15, 2023)