Propagandists have used hate for as long as propaganda has existed, but with the popularisation of online communication the opportunities for distributing (online) hate have proliferated.
Hate kills. It is an affect considered to be among the most detrimental in our societies. This is partially because of its own destructive force: it functions as an absorptive vortex that not only removes the humanity from those who hate, but also creates communities through the perverse pleasure that hate gives to haters. This destructive nature also applies to those who are being hated, because of the violence that hate so often produces.
The relationship between hate and violence is not automatic, after all (one can hate in solitude and silence), but the intensity of hate begs for communication, to convince the potentially likeminded and to hurt the objects of hate. This is where violence enters the equation, as the communication of hate is a form of symbolic violence. Communication can act as a sharp weapon in its own right, but the construction of an us/them dichotomy is also a condition of possibility for physical violence, for attempts to destroy the object of hate.
This is why the communication of hate is considered so problematic. It is not a neutral exercise of the freedom of expression, but a set of narratives that directly harm others and create the conditions for unleashing the worst possible human behaviour. The phenomenon itself is not new (propagandists have used hate for as long as propaganda has existed), but with the popularisation of online communication the opportunities for distributing hate have proliferated.
In turn, this has provoked societal debate about the need, desirability, nature and efficacy of intervention at, for instance, government and industry (e.g., Facebook) levels. Researchers from a variety of fields and disciplines also engage in these debates.
Recently, the prestigious academic journal Nature, mostly active in natural-science research, published an article entitled "Hidden resilience and adaptive dynamics of the global online hate ecology", an interdisciplinary collaboration between scholars from physics, international affairs and computer science. Even if their metaphors from the world of the natural sciences have a bit of a 19th-century feel (when sociology used biological metaphors, which did not work too well), their basic analysis makes sense.
Even if many hate communities have authoritarian tendencies, which support centralisation, they are revolutionary entities with decentralised organisational structures. They are networks and networks-of-networks, connected with each other in a variety of ways. This in itself is not new, but the decentralised nature of these networks aligns very well with the decentralised nature of the internet, which also makes countering their narratives and organisational structures difficult. The authors of the Nature article add a few elements to this model that are worth emphasising. First, these hate communities look for the weakest link in the online ecology: they migrate to the platforms that police hate the least, which shows that the struggle to reduce hate necessarily needs to be multiplatform.
To bring in my own take on this, and a flavour of French philosophy: these networks are rhizomes, root-shaped networks that move underground and whose visible (upper) parts can easily be replaced when eliminated. Second, the authors show that these hate communities are global, connecting activists of hate across different continents. Of course, as so often and regrettably happens in this kind of research, most of it is situated in Western countries (although South Africa and the Philippines, for example, seem to be included), but the idea that there are "highways" connecting different localities of hate, and different projects of hate, is valuable.
The authors make four policy suggestions in their article, which might provoke less agreement. Even if the possible consequences (and successes) of these suggestions are tested through the models that the authors have developed, it remains problematic that the policy recommendations are not grounded in theory or in a thorough policy analysis, but appear simply to have occurred to the authors while they were studying their data. Still, their four suggestions might stimulate debate, and should thus at least be mentioned here. The authors propose (1) to ban the small hate clusters first, rather than going for the large clusters; (2) to randomly ban "a small fraction of individual users across the online hate population", as this will reduce the (legal) backlash; (3) to set clusters against each other, mobilising "anti-hate users" for this purpose; and (4) to exploit the internal contradictions of hate communities.
One of the problems of this line of argument is that it turns the problem of hate into a technical issue, which might indeed work well with how online platforms function in practice, but not with ideologies of hate. This technical approach comes at a cost, as it pushes out the ethical and the ideological dimensions. Random sanctioning might sound acceptable, but it introduces a chance dimension to justice, which I would not consider an ethical policy. If anything, we should combat differences in how justice is served, but that is another discussion. The ideological dimension also disappears: hate is not only about networks of people, but about political struggles that do have an intellectual leadership. Hate is felt and experienced, but it is also used to serve political-ideological purposes. Moreover, hate is not only affect, as it becomes condensed into communication and discourse.
And this is where the article, and in particular its third suggestion, becomes inspirational, even if some variation is needed. We need to become anti-hate activists. Our governments need to become anti-hate. Our companies need to become anti-hate. There is, more than ever, a need to reinvigorate the maybe slightly naïve belief in a better world, driven by humanity and brotherhood. And this is why we should not focus exclusively on online hate. However important it is to reduce the visibility of hate communication, we need to start thinking in a much more integrated way, developing strategies that move hate outside our world of ideas, affects, actions and, yes, communication.
(The author is Docent at the Institute of Communication Studies and Journalism at Charles University in Prague)