Article

Reported User-Generated Online Hate Speech: The ‘Ecosystem’, Frames, and Ideologies

1 Peace Institute, Metelkova ulica 6, 1000 Ljubljana, Slovenia
2 Faculty of Social Sciences, University of Ljubljana, Kardeljeva ploščad 2, 1000 Ljubljana, Slovenia
* Author to whom correspondence should be addressed.
Soc. Sci. 2022, 11(8), 375; https://doi.org/10.3390/socsci11080375
Submission received: 16 June 2022 / Revised: 31 July 2022 / Accepted: 6 August 2022 / Published: 19 August 2022

Abstract

The spread of hate speech challenges the health of democracy and media systems in contemporary societies. This study aims to contribute to a better understanding of user-generated online hate speech reported by Internet users to national monitoring organizations, in particular its ‘ecosystem’, discursive elements, and links to political discourses. First, we analyzed the main characteristics of the reported statements (source, removal rate, and targets) to reveal the media and political context of reported user-generated online hate speech. Next, we focused on hate speech statements against migrants and analyzed their discursive elements with the method of critical frame analysis (frames, actors, metaphors, and references) to understand the corresponding discourse. The main discursive feature of these statements is the prognosis, which calls for death and violence, so we could label this communication as ‘executive speech.’ Other key features are the references to weapons and to Nazi crimes from WWII, which indicate the authors’ extreme-right ideological convictions, and the metaphors, which are employed to provoke disgust toward migrants, present them as culturally inferior, and raise fears about their supposedly violent behavior. The corresponding diagnoses frame migrants as a threat in a way similar to populist political discourses of othering and complement these discourses by providing ‘final’ solutions in prognoses.

1. Introduction

Fears related to the spread of hate speech in contemporary societies stem particularly from the experiences of fascist and Nazi regimes, which not only rhetorically enforced stereotypes, xenophobia, and racism but transformed these into state policies, resulting in millions of victims. Accordingly, for decades, hate speech was considered most dangerous in situations where it was disseminated from the top by influential persons and organizations, such as politicians or the mass media (Dangerous Speech Project 2020; Murphy 2021). This remains true and nowadays applies especially to populist political actors and far-right media, who often drown out reasoned debate with the rhetoric of hate (Lazaridis et al. 2016). However, with the expansion of user-generated content on the Internet, we now face a different phenomenon: a notable increase in ‘grassroots’ dissemination of hate speech and other socially unacceptable communication, sometimes referred to as ‘dark participation’ (Quandt 2018). This type of communication is especially problematic on social media, in online discussion forums, and in the comment sections of online media outlets (Mondal et al. 2017; Vehovar et al. 2020). With the increased use of the Internet during the COVID-19 pandemic, the problem accelerated further (Fan et al. 2020): the consumption of digital media grew together with discriminatory responses to fear, which disproportionately affect marginalized groups (e.g., Devakumar et al. 2020; Karpova et al. 2022) and stimulate false or unproven assertions, such as conspiracy theories (Bruder and Kunert 2020; Scan Project 2020). Hate speech is undoubtedly a pressing social issue that raises questions about the health of democracy and media systems in Europe and elsewhere, and it continuously generates the need for research to better understand and cope with the phenomenon.
In this context, the aim of the following exploratory study is to expand the knowledge about online hate speech reported by Internet users to hate speech monitoring organizations. Taking Slovenia as a case study, we analyze its ‘ecosystem’ and discursive structure. First, we focus on the media and political contexts of the user-generated online hate speech and analyze the sources of the publication, removal rate, and main targets. Next, we specifically focus on hate speech against migrants and analyze the corresponding discourse using critical frame analysis (Bacchi 2009; Dombos et al. 2012; Verloo 2016) by questioning how authors frame problems and solutions, what kind of metaphors and references they use, and the ideological substance of their statements. We explore the relationship between political and user-generated hate discourses, as well as the links of user-generated hate speech to extreme-right ideologies.
The focus on migrants is related to the very specific timeframe of our analysis, which covers a remarkable period in recent European history: the peak of the so-called ‘refugee crisis’ in 2015 and 2016. At that time, nearly half a million refugees and migrants crossed Slovenia (which has a population of 2 million) along the so-called Balkan migration route, and a vast mass of hateful comments flooded the Internet. The empirical case, based on Slovenian data, is informative for the entire European context because, with respect to attitudes toward hate speech and migration, Slovenia consistently holds the position of a median EU27 country, which is also true for the majority of general socioeconomic indicators, including social media usage and the share of households with Internet access (Vehovar and Jontes 2021, p. 3).
One original feature of this study is the data used. Empirical research on online hate speech is typically based on hate speech statements observed directly in online venues (e.g., Rossini 2020) and is increasingly conducted using automatic detection approaches (e.g., Alkomah and Ma 2022; Calderón et al. 2021; Lucas 2014; Mondal et al. 2017; Vehovar et al. 2020). However, databases stemming from self-regulatory mechanisms—where Internet users more or less promptly report hateful content generated by other users to content providers or to specialized hate speech monitoring organizations (Vehovar et al. 2012; Hughey and Daniels 2013), which may achieve the removal of such content—remain rather unexplored terrain from the academic research perspective.
In our case, we used data obtained from the national hotline for reporting illegal content, Spletno Oko (www.spletno-oko.si, accessed on 17 February 2022), a member of the international hotline network INHOPE (www.inhope.org, accessed on 5 August 2022) and the main civil society authority in Slovenia at the time of our analysis, to which citizens could anonymously report (supposedly) illegal online content, including hate speech. In most cases in our analysis, hateful content was later removed from the Internet due to internal moderation follow-up by the content provider or due to corresponding law enforcement interventions. This important part of online hate speech is thus typically unavailable to researchers, who usually capture such content with a considerable lag, leading to an incomplete view of the phenomenon (Waqas et al. 2019), particularly after changes in hate speech treatment from 2016 onwards, when global social network companies signed the EU Code of Conduct against illegal hate speech online (European Commission 2016).
Together with the introduction of modern computer algorithms, these measures to a large extent prevented hate speech from becoming publicly visible (Meta 2022). Researching these specific data thus means that we studied the most flagrant hate speech, so we justifiably expected that the underlying patterns, if they existed, would appear in their most articulated form.
Another original contribution is the application of the critical frame analysis method to the user-generated online hate speech. In recent years, online hate speech has been extensively studied (for systematic reviews, see Castaño-Pulgarín et al. 2021; Paz et al. 2020), particularly from legal and freedom of speech viewpoints (Massanari 2017), as well as from media (e.g., Saha et al. 2019), social (e.g., Lucas 2014), and psychological perspectives (e.g., Assimakopoulos et al. 2017). Nevertheless, the use of linguistic pragmatics and discourse analysis methods in hate speech research is still relatively rare (Dekker 2017) although not entirely absent (e.g., Sagredos and Nikolova 2022; Ghaffari 2022).
We address this aspect by treating hate speech as a discourse embedded in a specific political and media context and using the critical frame analysis method, which has been predominantly applied to policy documents (e.g., Verloo 2007) and political party documents (e.g., Lazaridis et al. 2016) but rarely to content generated by Internet users (e.g., Kuhar and Šori 2017). This disparity could, in part, be due to the empirical material itself as frame analysis presupposes a rather complex textual structure, while user-generated online hate speech statements are often short and simplified. Therefore, determining the extent to which frame analysis could be used to analyze user-generated online hate speech is one of the challenges we address in this study.
The article proceeds as follows. Firstly, we situate user-generated online hate speech in political, media, and legal contexts based on the findings of previous studies. Next, we outline our own methodological approach to the empirical research and present the results. Finally, we outline our main conclusions in the Discussion.

2. The ‘Ecosystem’ of User-Generated Online Hate Speech

2.1. Definition

Hate speech can be placed under the umbrella term socially unacceptable discourse, also comprising incivility, flaming, threats, defamation, insults, negative stereotyping, obscenity, intolerance, and vulgarity (Vehovar and Jontes 2021). Although the line between hate speech and various other forms of socially unacceptable communication is blurred, researchers have reached a fairly broad consensus that it includes violent threats or expressions of prejudice against particular groups on the basis of race, religion, sexual orientation, and other personal characteristics (e.g., Paasch-Colberg et al. 2021; Meza et al. 2019; Silva et al. 2016). While the goal of such speech is often to humiliate, intimidate, or incite violence against particular groups, some researchers have pointed out that not all hate speech is related to hatred but can also be an expression of, e.g., religious beliefs, attention-seeking, or boredom (Brown 2017).
We can identify hate speech by using three criteria—negative stereotyping, dehumanisation, and expressions of violence—although all three need not be present in a statement for it to be hate speech (Paasch-Colberg et al. 2021). It is a phenomenon with different forms and nuances, which is why some researchers avoid narrow definitions to encompass hate communication on the Internet more fully, using the terms ‘hateful’ (Perifanos and Goutsos 2021) or ‘dangerous’ (Dangerous Speech Project 2020) speech or employing hate speech intensity scales (Bahador and Kerchner 2019).
Hate speech ‘attributes to a class of people certain highly negative qualities taken to be inherent in members of the class, which typically include immorality, intellectual inferiority, criminality, lack of patriotism, laziness, untrustworthiness, greed and attempts or threats to dominate their “natural superiors”’ (Lakoff 2017). In this respect, metaphors are particularly important to study because they can reveal the underlying conceptual frame of their producer and give access to a set of assumptions about the “typical” aspects of a member of a minority (Baider et al. 2017, pp. 38–39).
In hate discourse, metaphors commonly used to describe migrants include ‘parasites’, ‘disease’, ‘dirt’, ‘amorality’, ‘subhuman/alien’, ‘outlaw’, ‘burden’, and ‘danger/threat’ (Musolff 2015; Baider et al. 2017; Bajt 2016). In this context, it is also important to be aware of coded racial language, which substitutes benign words that seem out of context for direct references to communities, hiding hate while obeying media rules and online community policies (Magu et al. 2017). Therefore, research on hate speech must be ‘racially literate’, i.e., scholars must become acquainted with the slang and language of virtual racial invective and messaging (Hughey and Daniels 2013, p. 338).
Strategic use of metaphors can structure how people think about issues, such as immigration, particularly when media or politicians use them (Chkhaidze et al. 2021; Marshall and Shapiro 2018). A US study demonstrated that media representations of immigrants that use vermin metaphors (‘water’, ‘animals’, and ‘invasion’) lead to increased anti-immigrant attitudes, particularly among participants identifying themselves as Americans (Marshall and Shapiro 2018). These findings are particularly important when we investigate the relations between user-generated online hate speech and other hate discourses.

2.2. Legal Context

The difficulty in defining hate speech arises particularly when it comes to the question of prosecution, but the Council of Europe, a major European authority on human rights, created a clear and operational legal definition—one often used in academic research, including ours. According to this definition, the term hate speech covers all forms of expression that ‘spread, incite, promote or justify racial hatred, xenophobia, anti-Semitism or other forms of hatred based on intolerance, including intolerance expressed by aggressive nationalism and ethnocentrism, and discrimination, as well as hostility against minorities, migrants and people of immigrant origin’ (Council of Europe Committee of Ministers 1997).
Accordingly, all 47 Council of Europe member states have passed national legislation that criminalises certain forms of hate speech. However, in coping with such speech, hard law is only one part of the solution, particularly if we consider the phenomenon’s scope and diversity, as well as the right of free expression in democratic societies. Another part of the solution is self-regulation, which can include a broad range of social communication networks and actions and can be defined as ‘a way of collectively involving civil society in and making it jointly responsible for promoting a social communication system adapted to social roles and to correct ethical action’ (Aznar 2019, p. 6). Self-regulation can mean various things, from media and social networks adopting rules on participating in online debates, to moderation, removing hate speech from online platforms, and monitoring. It is particularly important in countries such as Slovenia, where the threshold for hate speech prosecution is relatively high, i.e., preventing the dissemination of hate speech is largely dependent on other mechanisms (Kogovšek Šalamon 2018).

2.3. Media Context

The spread of user-generated hate speech is strongly dependent on wider media and the political ‘ecosystem’, which nowadays is characterised by commercialisation of the media and mediatisation of politics, creating systemic conditions for visibility and dissemination of hate discourses (Šori and Ivanova 2017). Extant studies indicate that the mainstream media’s use of us-versus-them discourse and their generous provision of space for populist political actors have contributed to the normalisation and legitimisation of radical views on migration in recent years (Ekman and Krzyzanowski 2021; Pajnik and Ribać 2021; Dolea et al. 2021; Terrón-Caro et al. 2022).
Mainstream media can also serve as platforms on which Internet users spread hate speech, particularly in comment sections. Over the past decade, media organisations at first enthusiastically introduced participatory journalism and encouraged users to like, share, comment, and submit content on their online platforms (Panagiotidis et al. 2020). However, the spread of hate speech in news portals’ comment sections has forced many responsible media outlets to tighten their policies, with some eventually disabling this function (Hughey and Daniels 2013).
In Slovenia, this process coincided with the so-called ‘refugee crisis’ in 2015 and 2016, which was accompanied by a wave of user-generated hate speech that led to increased moderation of, restrictions on, and even (temporary or permanent) disabling of user comments on online news portals (Bajt 2016). However, hate speech authors subsequently moved to sites that were less strictly moderated, particularly social media networks. As a result, several Facebook groups emerged in which administrators and users disseminated hostile anti-migrant propaganda. Outwardly, these groups declared themselves patriotic while spreading, sharing, and posting nationalist, xenophobic, and homophobic messages and hatred against migrants, Muslims, and Islam (Bajt 2016, p. 36).
Online hate speech does not differ from its offline counterpart in terms of aims, though extant research indicates that how it reaches and engages its audience is potentially different (Mondal et al. 2017; Cleland 2017; Mathew et al. 2019). People behave differently online, particularly when they publish comments anonymously, although this factor may not be the most distinctive one as users are aware that anonymity is never absolute (Mondal et al. 2017). More importantly, online communication’s instantaneous nature encourages spontaneous hate speech reactions, which is why hate speech can go viral and garner responses within minutes (Brown 2018). The online setting also creates new forms of emotional expression that stimulate and accelerate reactions to hate speech stimuli (Wahl-Jorgensen 2020).
Furthermore, extant research has demonstrated how hostile users’ posts on social media spread faster and reach a wider audience than regular users’ posts (Mathew et al. 2019). Therefore, the spread of online hate speech heavily depends on social media platforms’ design, algorithms, and policies, which can implicitly or explicitly support ‘toxic technocultures’ derived from retrograde notions of gender, sexuality, and race, and characterised by opposition to diversity, multiculturalism, and progressive ideas (Massanari 2017). Therefore, the dissemination of hate speech online is related directly to existing systemic policies and self-regulation, as well as to Internet content providers’ profits.
A study in Slovenia found that roughly half the comments on Facebook news items related to migrants and LGBT issues can be labelled as ‘socially unacceptable discourse’, among which comments containing elements of potential hate speech constituted a clear majority (Vehovar and Jontes 2021). In response to these developments, governments, particularly in Europe, have increased pressure on social media companies, e.g., Facebook and Twitter, to exert more control over content published and shared on their platforms. Thus, the European Commission created a code of conduct designed to counter illegal hate speech online among leading social network companies (European Commission 2016). Consequently, like ‘traditional’ media, social media have added restrictions to their community standards in recent years.

2.4. Political Context

Apart from the Internet and the mainstream media, hate speech in political discourse is also on the rise. Globally, political movements, parties, and politicians that oppose liberal democracy and build their support by spreading hate towards marginalised and minority groups—e.g., migrants, Muslims, Roma, and gays and lesbians—are gaining popularity (Lazaridis et al. 2016; Yerly 2022; Cervi and Tejedor Calvo 2020, 2021; Ballsun-Stanton et al. 2021). This phenomenon most often is termed (far-)right or exclusionary populism, which can be defined as an ideological concept that combines anti-elitism, authoritarianism, and nativism (Mudde and Kaltwasser 2013; Mudde 2007).
In Europe, populist parties have gained strength, particularly since 2015, when they focused their communication strategies on opposing the admission of migrants and began to scapegoat them for a range of social problems. In some regions, they have been given additional impetus by the COVID-19 pandemic, which they have used to discursively reinforce the notion of the border as protection against ‘intruders’ (Yerly 2022, p. 17). In countries such as Slovenia or Hungary, far-right populist parties have also come to power and have started to implement xenophobic and homophobic policies, repress civil society, and attack the rule of law.
Ideologically, nativism primarily informs exclusionary populism (Yerly 2022; Wodak 2015; Lazaridis et al. 2016; Cervi and Tejedor Calvo 2020, 2021). Nativism is a combination of nationalism and xenophobia, manifested through the view that a country should be inhabited exclusively by ‘indigenous’ people (the nation) and that non-indigenous elements (people and ideas) are a threat to a homogeneous nation-state (Mudde 2005, pp. 22–23).
The corresponding populist discourse is similar in many ways to hate speech, characterised by the discursive practice of othering, i.e., building antagonistic relationships between groups of people in which one group is represented as an enemy and an existential threat to another (Wojczewski 2020; Kuhar and Šori 2017; Frank and Šori 2015; Ballsun-Stanton et al. 2021; Lazaridis et al. 2016). Migrations are often presented as invasions and as part of a larger conspiracy theory about the replacement of white European populations by others (Yerly 2022; Thiele et al. 2022; Cervi and Tejedor Calvo 2021). In this respect, populist discourses are often nothing more than a form of hate speech adapted to the conditions of parliamentary democracy and established politics (Frank and Šori 2015). At the same time, distinct features of populist communication on migration can morph from exclusionary, ethno-nationalistic rhetoric into inclusionary calls for solidarity, demonstrating populism’s ‘chameleonic’ nature, which allows it to adapt and fit into specific political actors’ programmatic and ideological templates (Pajnik et al. 2019). The key finding is that such political messaging legitimises and inspires hate speech and hate crimes, particularly in debates following high-profile trigger events, such as terrorist attacks (Murphy 2021; Arcila-Calderón et al. 2021; Williams et al. 2020).
We anticipate tracing references to populism and extreme-right ideologies in our own empirical analysis, and we aim to examine the relationships between user-generated, media, and political hate discourses further.

3. Materials and Methods

3.1. Research Questions

Based on the above-described background, we formulated the following research questions:
RQ1: What are the main characteristics of the ‘ecosystem’ (i.e., the source of publication, removal rate, and main targets) and the discursive structure of hate speech statements reported to the national hotline?
RQ2: What are the frames and underlying ideologies of user-generated online hate speech against migrants?
To address these research questions, we adopted the frame analysis method, developed a corresponding coding scheme, and applied it to each statement obtained from hate speech reports to the national hotline.

3.2. The Frame Analysis

Frames are discursive elements in texts and can be defined as ‘problem-solving schemata’ or ‘mental orientations that organize perception and interpretation’ (Johnston 2004). The corresponding frame analysis approach helps to establish a consistent and sensible causal story from information on how a problem develops and should be solved (Entman 1993; Gamson and Modigliani 1989). This means that each text can be analyzed to determine the frames within which problems (diagnoses) and solutions (prognoses) are defined, either implicitly or explicitly (Verloo 2016). We specifically refer to the critical frame analysis developed by Verloo (2007, 2016), who further elaborated on (and specified) this discursive approach to analyze the (fundamental) norms, beliefs, and perceptions included in the texts under analysis. In our case, these texts are messages posted online by Internet users.
How the authors frame a certain problem or solution may be strategically chosen, for example, with the aim of influencing the discussion and decision-making. In addition, the authors can select frames that can most effectively dehumanize and incite hatred against other people and, at the same time, still obey the criminal prosecution and moderation policies of media outlets and social networks. However, framing can also be unintentional or unconscious and reflect (and reproduce) the dominant discourses that exist in a specific society, which means that deep cultural meanings influence the framing process (Bacchi 2009). According to some authors (Dombos et al. 2012), these can be even more important than the intentionality of the framing process.

3.3. The Data

We obtained the database of user-generated online hate speech statements from the national hotline Spletno Oko, to which Internet users anonymously reported illegal content posted on the Internet by either submitting a form on the hotline’s website or by using a special feature (i.e., the hotline Spletno Oko button) that some media outlets have incorporated into their online comment sections.
Typically, more than 500 hate-speech-related reports were received annually (Šulc and Motl 2020). Specially trained analysts reviewed and categorized the reported statements, and those that potentially violated Slovenian hate speech legislation (roughly 10% of reports) were forwarded to law enforcement. The timeframe of the statements collected for this analysis was between 1 January 2016 and 1 June 2017. This period was selected intentionally because it comprises the peak of the so-called refugee crisis in Slovenia, which was marked by an unprecedented escalation of online hate speech. We can thus expect that these reports about (non-moderated/non-censored) hate speech statements will reveal the most genuine frames in hate speech discourse.
The hotline provided 489 reports identified by their analysts as potentially containing elements of hate speech against persons or groups with protected characteristics, as defined by the national legislation, following Council of Europe recommendations. The hotline had already excluded reports that did not meet the definition of hate speech (e.g., indecent language, insults, or threats to persons or groups outside the protected characteristics). We further reviewed the database and excluded all statements where the initial author of the statement was not a regular Internet user (e.g., a politician who published hate speech on his social media profile), or where some of the crucial information was missing, such as the source of publication. During this review, we excluded 117 statements, so 372 statements were then coded.
Thus, our data were based on reports collected by a self-regulatory mechanism. At least three factors influence citizens’ reporting of hateful content posted by other people. The first factor is the national policy context, since the state sets the legal framework, which usually serves as the source for moderation policies adopted by media and Internet content providers. Another important factor is the political context: research findings show that the most prominent targets of hate speech are often connected to current events on the political and media agenda and that discursive patterns involve the proliferation of similar stereotypes about certain target groups (Meza et al. 2019). The third factor is the sensibility and awareness of the users themselves, as well as broader social norms related to what is considered acceptable communication; again, this largely depends on the policy and political context of a certain nation or state.
The coding sheet was first tested by two researchers on a smaller sample. One then coded the whole sample, while the other reviewed and validated the data. However, we found almost no discrepancies because the coding was related either to administrative aspects or to frame analysis characteristics, which were robust, objective, and unambiguous.

3.4. The Coding Scheme

Statements in the original hotline database were fully anonymized by the time we received them because the hotline application form follows the highest security standards and does not log any additional information, such as the submitter’s Internet protocol (IP) address and the device used. We additionally checked whether the hate speech statement itself might potentially reveal any personal information, but we did not find any such cases. The original database was also accompanied by administrative coding, including the report date, URL, statement text, and category of the statement, which denoted whether the statement complied with the definition of hate speech. We assigned additional administrative codes, including the coding date, coder ID, media, and target group. For the purpose of frame analysis, we coded each statement according to six markers using the methodology of sensitizing questions proposed by Verloo (2016): diagnosis (what is the problem?), prognosis (what is the solution?), diagnosis passive actor (who is affected by the problem?), prognosis active actor (who should provide the solution?), metaphors (how is the target group described?), and references (to what kinds of ideologies and other references do users refer?).
The markers served to identify the frames, which enabled us to analyze the discourse of user-generated hate speech, elaborate on its discursive and ideological conceptualizations, and identify strategic framing.
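To make the coding scheme concrete, the sketch below shows one way a coded record could be represented. This is a minimal illustration in Python; all field names are our own assumptions for exposition, not the hotline's actual database schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CodedStatement:
    """One reported statement with administrative codes and frame-analysis markers."""
    # Administrative codes (report metadata plus our additional coding)
    report_date: str
    url: str
    text: str
    source: str            # e.g., "social network", "news portal", "forum"
    target_group: str      # e.g., "migrants", "religion", "nationality"
    still_online: bool     # availability at coding time
    # Frame-analysis markers (sensitizing questions, Verloo 2016); None = absent
    diagnosis: Optional[str] = None                 # what is the problem?
    prognosis: Optional[str] = None                 # what is the solution?
    diagnosis_passive_actor: Optional[str] = None   # who is affected by the problem?
    prognosis_active_actor: Optional[str] = None    # who should provide the solution?
    metaphors: list[str] = field(default_factory=list)   # how is the target group described?
    references: list[str] = field(default_factory=list)  # ideologies, values, objects
```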

4. Results

In the first step, an analysis of the basic features of online hate speech was performed on the complete database (372 statements). The analysis addressed the following questions: where the statements were published, whether they were still accessible, who were the main targets, and how the hate speech was discursively structured according to the adopted markers. In the second step, we focused on hate speech against migrants (261 statements) and performed a detailed frame analysis.

4.1. The ‘Ecosystem’ and Discursive Structure of Hate Speech Reports

4.1.1. Sources, Availability, and Targets

Most of the reported 372 statements were published on social networks (62%), followed by news portals (32%) and online discussion forums (6%). When a frame analysis coding was performed (at least 1 year after reporting), the vast majority (89%) of hate speech comments in the database were no longer available online. Discussion forums had the lowest share of removed statements (43%), followed by news portals (80%) and social networks (98%), which seemed to have the strictest moderation policies.
These results also illustrate the instantaneous nature of user-generated online hate speech as the statements strike, arouse, and motivate in real time and then quickly move out of sight, losing their inflammatory power (Brown 2018).
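Shares such as those above amount to simple tabulations over the coded database. The following is a minimal sketch of how they can be derived, assuming a CSV export with illustrative column names ("source", "still_online") that are our assumption rather than the actual export format.

```python
import pandas as pd

# Illustrative sketch only: assumes a CSV export of the coded database with
# columns "source" and "still_online" (both names are our assumption).
df = pd.read_csv("coded_statements.csv")

# Share of reported statements per publication source
# (cf. 62% social networks, 32% news portals, 6% forums)
source_share = df["source"].value_counts(normalize=True).mul(100).round(1)

# Removal rate per source: share of statements no longer available at coding time
removal_rate = df.groupby("source")["still_online"].apply(
    lambda s: round(100 * (1 - s.mean()), 1)
)

print(source_share)
print(removal_rate)
```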
With respect to reports on social networks, 99% of the reported statements originated from Facebook, which was, thus, the central platform for spreading online hate speech, at least for statements reported to the national hotline. Most of the reported statements posted on Facebook originated from a single group named Slovenia Protect its Borders, which was very popular during the so-called migration crisis and had (at the peak of the crisis) more than 20,000 likes, which, in the Slovenian context, is a high number. We assume that the group was also followed by its opponents, who engaged in defending migrants’ human rights and regularly reported hate speech statements, hoping that the group would be shut down, which subsequently happened when Facebook began enforcing stricter policies.
As mentioned earlier, the reporting of hate speech observed in our study was heavily influenced by the then evolving migration crisis, which was accompanied by intensive politicized discussions in public, as well as among politicized Internet users. The influence of this context on the publishing and corresponding reporting of potential hate speech was reflected in the data and related hate speech targets (Table 1). Namely, most of the hate speech (i.e., 70% of statements) reported in the analyzed period targeted migrants,1 which clearly reflected the higher influx of migrants into the country after 2015 and the simultaneous increase in hostility in political and media discourse (Bajt 2016). This confirms the findings of other studies claiming that the most prominent targets of hate speech are connected to trigger events high on the political and media agenda (Meza et al. 2019; Murphy 2021).
The second most frequently attacked personal characteristic in our database was religion (20%), with most of the hate speech directed at Islam and Muslims. In 10% of the statements, the personal circumstance of nationality was attacked, most frequently targeting Albanians, who represent a considerable immigrant community in Slovenia. Some comments (13%) simultaneously attacked multiple personal circumstances and groups; the most common combination was migrants and Muslims.
Since the status of a migrant involves several personal circumstances (i.e., residential status, citizenship, ethnicity, nationality, and religion), it also intersects with hate speech against Muslims and certain nationalities, such as Albanians, making the amount of hate speech against migrants even more problematic. In 2016, migrants became the ‘quintessential other’ (Bajt 2016), which is why we can conclude that most of the hate speech in Slovenia during that period was related to racism and xenophobia rather than, for example, sexism or homophobia.

4.1.2. Discursive Structure

The numbers of different markers in the frame analysis coding (Table 2) show that the most important feature in the analyzed statements is the prognosis (solution). Of the 372 statements analyzed, the prognosis was identified in 80% of the statements, followed by references (58%), metaphors (48%), diagnosis (31%), prognosis active actor (16%), and diagnosis passive actor (12%).
High shares of missing features show that hate speech is an impoverished form of communication, as it often comprised only short statements, such as calls for the death of a particular group without any further explanation. The low share of statements with a diagnosis indicates that the users take hate against particular groups for granted, as natural and normal, as something that needs no further argumentation. Not only was the problem (diagnosis) often missing but so was the category of the passive actor (who or what is affected by the problem). The ratio of prognoses (80%) to diagnoses (31%) is very different from political othering discourses, where diagnoses usually outnumber prognoses. For example, when frame analysis was applied to texts published online by right-wing and populist political parties and movements, the number of identified diagnoses was 12% (see Ranieri 2016) or even 26% (see Pajnik and Sauer 2018) higher than the number of prognoses.
We can conclude that elaborated right-wing populist political speech uses othering to represent minorities as problems, while online hate speech is much more violent and predominantly offers solutions, which is why we can label user-generated online hate speech as executive speech. The last act in this chain would then be a hate crime.
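The marker shares in Table 2, and the prognosis-to-diagnosis comparison above, are again simple presence counts over the coded statements. The sketch below shows one way to derive them; as before, the column names are illustrative assumptions of our own.

```python
import pandas as pd

# Illustrative sketch only: one column per frame-analysis marker; an empty cell
# means the marker was not identified in the statement (column names assumed).
df = pd.read_csv("coded_statements.csv")
markers = ["prognosis", "references", "metaphors",
           "diagnosis", "prognosis_active_actor", "diagnosis_passive_actor"]

# Share of statements in which each marker is present (cf. Table 2)
prevalence = df[markers].notna().mean().mul(100).round(0)
print(prevalence.sort_values(ascending=False))

# A prognosis-to-diagnosis ratio well above 1 marks this 'executive speech',
# unlike othering discourses, where diagnoses typically outnumber prognoses
print(f"prognosis/diagnosis ratio: {prevalence['prognosis'] / prevalence['diagnosis']:.1f}")
```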

4.2. Frames of Hate Speech against Migrants

In further analysis, we elaborate on hate speech against migrants, which was the most reported form of hate speech in our database, comprising 261 statements.

4.2.1. Diagnosis: Overlapping with Populist Othering

In the diagnosis part of the analysis, we identified 64 problems (diagnoses), which we inductively clustered under eight different diagnosis frames (Table 3). To summarize, the diagnosis frames represent migrants as a threat to culture, demography, security, social wellbeing, or health, which is quite similar to political othering discourses (Frank and Šori 2015; Lazaridis et al. 2016; Šori 2015; Murphy 2021; Cervi and Tejedor Calvo 2020, 2021; Yerly 2022). A statement from a typical report can be used to illustrate this point: ‘refugees are coming to Europe to steal, kill, rape, and procure religious war.’
The most common diagnosis frame is ‘migrants endanger the existence of the Slovenian nation and Europe,’ which considers migrants as a cultural threat (18 instances). In this framing, Slovenians and Europeans in general are seen as ‘victims of genocide,’ perpetrated by migrants and their supporters. Migrants, especially Muslims, are constructed as culturally incompatible and unable to integrate, and multiculturalism is strongly rejected. The second most common diagnosis frame is that there are ‘too many migrants in the country and Europe,’ which represents migrants as a demographic threat, often with claims that Slovenia and other countries in Europe are being overrun.
For both frames, we can draw parallels with Nazi blood and soil ideology, which strongly connected the ideal of a pure national body with the settlement area. Today, such ideas are most loudly represented by far-right groups and right-wing populist parties and mainly labeled as nativism.
The third most common diagnosis frame—‘migrants are cheaters, criminals, and violent’—presents migrants as a security threat and accuses them of lying about their identities. In another frame, migrants are accused of being disrespectful and denied the right to stand up for their rights. One of the frames also represents migrants as a threat to people’s social wellbeing, emphasizing the amount of money spent by the state on migrants and that many Slovenians live in poverty. This suggests that hate speech should also be considered an economic and social issue.
Migrants are also depicted as part of a plan to erase white Europeans. Here, we can draw parallels with hate speech directed toward other groups, which is also supported by conspiracy theories, such as those against Jews (e.g., ‘the Jewish banking elite wants to rule the world’) or gays and lesbians (e.g., ‘the gay lobby wants to destroy the traditional family’). More rarely, migrants are depicted as a health threat, which might have changed with the COVID-19 pandemic in 2020 and 2021.
The discourse we are facing reflects the belief that the nation and civilization are reproduced biologically and threatened by culturally and essentially different and dangerous ‘others’. Most often, Muslim migrants are explicitly presented as a threat, which shows that religion and anti-Muslim hatred play an important role in hate against migrants. By its message, this discourse, to a remarkably high extent, corresponds with right-wing populist discourses, which have been labelled by Wodak (2015) as the ‘politics of fear.’

4.2.2. Prognosis: Strategic Incitement to Violence

Framing migrants as a threat serves as justification for the various solutions ‘to solve the problem,’ and all these solutions include extreme violence. We identified seven prognosis frames (Table 4).
By far, the strongest prognosis frame is ‘murder’ (167 statements), within which users call for the death of migrants, most often by shooting, burning, and gassing. For the users, the execution itself is often not enough but must be carried out in a culturally specific way that further humiliates and dehumanizes the victim. For example, some users seem to take sadistic pleasure in describing precisely how they would kill Muslims by respecting Islamic religious rules and rites.
Other prognosis frames appear less frequently and suggest different forms of violence against migrants (e.g., torture, insult, beating, expulsion, denying any help, and vigilante justice). The frame ‘protect the borders and our homes’ calls for closing the borders at any cost, including the use of weapons against migrants. The frame ‘revenge and vigilante justice’ calls on people to take up arms against migrants.
Clearly, according to the most hateful users, killing migrants is the ultimate solution and the central message of their hate speech. Since our database comprises reports provided by Internet users, the results also reveal that wishing death on someone is the ultimate social limit of acceptable communication; this is the point where other Internet users recognize hate speech and feel they must react with a report.
The immediate sanction most of the hate speech authors experienced was the removal of the statement from the platform on which it was published, which demonstrated that reporting hate speech was an important self-regulatory mechanism in coping with the phenomenon. With respect to law enforcement activities, due to the existing legislation and case law, very few cases actually progressed to criminal charges.
The question of the active actor or who should provide the ‘final’ solution remained unanswered in most statements. Most often, users referred to ‘we’ or ‘the people’ (15), followed by the users themselves (12). Other categories include ‘Slovenians,’ ‘patriots,’ or ‘death squads.’
This shows that authors strategically incite violence without exposing themselves or the groups they belong to as possible perpetrators, presumably to avoid criminal charges. Despite the extreme expressions of aggression and violence, hate speech authors seem to be aware of the legal limits of their ‘free speech’ and, in this respect, they apply strategic framing. Passive actors (i.e., those affected by migrations) are rarely explicitly defined; where they are, they are most often Slovenia (seven statements) and Europe (five), which are also the prevailing implicit categories.

4.2.3. Prevalence of Vermin Metaphors

The most common metaphors in our database compared migrants to vermin and pests in general (19% of statements). Other important metaphor clusters described migrants as uncivilised, violent and criminal, dirty, ethnically and religiously different, and animals. From this, we can see that the hate speech authors employ metaphors in three ways: to provoke disgust toward migrants, to present migrants as culturally inferior, and to arouse fears regarding migrants’ supposed violent behaviour (Table 5).
The vermin metaphor implicitly suggests a certain solution, which people may infer when confronted with this problem, and it is reminiscent of Nazi propaganda, which portrayed Jews as rats. The use of such metaphors can, therefore, be seen as an implicit threat of genocide. This raises the question of the extent to which Internet users understand the historical contexts of the metaphors they use. Musolff (2015, p. 41), who analysed parasite metaphors in relation to migrants in the UK, concluded that ‘it is improbable to assume a wholly “unconscious” or “automatic” use or reception in the respective community of practice, and that instead, it is more likely that they are used with a high degree of “deliberateness” and a modicum of discourse-historical awareness’. To this, we can add that, in our case, more research would be needed to determine to what extent users ‘consciously’ or ‘unconsciously’ create hate speech. However, as we will see in the next section, there are further indications that the authors sympathise with Nazism.

4.2.4. References to Weapons, Nazi Atrocities, and Positive Values

In our analysis, we were also interested in references the authors used to underpin their messages (i.e., ideologies, values, objects; Table 6). In their statements, users often refer to various types of weapons that should be used against migrants to enhance the intimidating message (20% of statements).
The data further confirmed that at least some of the authors were sympathetic to Nazism, particularly with the use of words such as ‘Hitler’, ‘chimney’, ‘Auschwitz’, and other similar expressions (17%). Writing the word ‘Hitler’ in the comment section under a news item about migrants is sufficient to understand what the author suggests as a solution. In a similar way to their use of metaphors, the authors implicitly threaten migrants with genocide or the ‘final solution’ by using Nazi references. This can be viewed as a form of coded racial language (Magu et al. 2017), but with the caveat that the meaning of most of these words is known to the general public and computer algorithms. There are also examples that are closer to the definition, such as the comment “There is a solution: 14/88”; however, comments with coded racial language are rather rarely reported. Other ideological stances include anti-communism and anti-multiculturalism.
The third group of references entails more positive values (15%), which supposedly are endangered by migrants and include ‘security’ (particularly border issues and children’s safety), ‘European civilisation’, ‘justice’, ‘peace’, ‘the Slovenian nation’, and ‘gender equality’. Among other references, the authors often used Islam and specific religious practices as a reference point for hate.
The analysis of references indicates that some users who publish hate speech online clearly share extreme-right values, including the endorsement of extreme violence to achieve political goals, and find inspiration in the most horrible authoritarian regimes. Previous online research indicates that extreme-right communities are characterised by networks of individuals, as opposed to formal groups, and that their violent elements represent a risk to pluralistic liberal democracy (Ballsun-Stanton et al. 2021). For Slovenia, research shows that small but vibrant extremist communities exist both online and offline (Bajt 2016; Valenčič 2021). Our analysis does not allow us to draw any conclusions about the prevalence of radicalisation in the population as a whole, but, undoubtedly, we are dealing with a phenomenon that needs to be addressed politically.

5. Discussion

In this article, we examined the media and political contexts of user-generated online hate speech and its discursive features. We applied the method of critical frame analysis to user-generated online hate speech reported by Internet users to the Slovenian monitoring organization Spletno Oko. Only statements with elements potentially meeting the definition of hate speech—adopted by national legislation following the Council of Europe recommendations—were included in the analysis.
We analyzed 372 hate speech statements posted on social media in the comment sections of online media outlets and online discussion forums during the surge of anti-migrant online hate speech in Slovenia in 2016–2017. In the first step, we analyzed the main characteristics of these statements to reveal the basic features of user-generated online hate speech and to understand its ‘ecosystem.’ In the second step, we focused on hate speech statements that targeted migrants (261 statements) and analyzed their discursive elements in detail to understand the corresponding discourse and detect the underlying ideologies of the authors. We were specifically interested in the question: how is contemporary user-generated online hate speech discursively and ideologically linked to political hate discourses and extreme-right ideologies?
With respect to the ‘ecosystem’ of hate speech (RQ1), the results showed that social media represent the main platform for spreading hate speech, which is consistent with the findings of other studies (Castaño-Pulgarín et al. 2021). At the same time, social networks enforce the strictest moderation policies, as the share of removed hate speech statements was much higher there than in the comment sections of media outlets or on discussion forums.
During the observed ‘refugee crisis,’ migrants were the most attacked group in these reports, which showed the strong embeddedness of the reported user-generated online hate speech in the political context. Our data confirm findings from previous research that the publishing (and, we may add, also the reporting) of hate speech occurs more frequently after high-profile trigger events and in situations where the attacked minority is the subject of political discussions and organized propaganda against it (see Murphy 2021; Arcila-Calderón et al. 2021).
The results of our analysis indicate that authors, to a large extent, strategically choose metaphors, references, and prognoses and are aware of their political message. Publishing (and reporting) can, therefore, be considered part of the grassroots political struggles of Internet users trying to influence public opinion and policy adoption, but we do not exclude the possibility that there is also spontaneous expression of hate speech on the Internet.
Within the discursive structure of hate speech against migrants (RQ2) lie its main features and indicators: prognoses, references, and metaphors. Prognoses were present in 80% of the reported statements, and nearly all called for death or the use of various forms of extreme violence against migrants. On a six-point hate speech intensity scale, most would be classified in the highest group (Bahador and Kerchner 2019). The question of who should provide the ‘solutions’ remains unanswered in most of the statements, indicating strategic framing by the authors to avoid criminal prosecution.
Besides the prognosis, another important feature of user-generated online hate speech against migrants is references to weapons and Nazi war crimes during WWII. A considerable proportion of Internet users who spread hate speech seem to sympathise with the most extreme-right ideas and ideologies, and they use online platforms to disseminate their views.
As in many other countries, the central concern of extremist and violent Internet users’ discourse is the preservation of the ‘purity’ of the nation and civilization (Ballsun-Stanton et al. 2021; Assimakopoulos et al. 2017; Krzyzanowski and Ledin 2017; Walsh 2021), which serves as a justification for hatred and leads to the conclusion that migrants, especially Muslims, represent an existential threat. Thus, hate speech against migrants is ideologically rooted in nativism and racism and strongly linked to widespread prejudices, stereotypes, and hatred towards Islam in society.
The final important feature in this discourse is metaphors (present in 48% of statements), which were employed in three ways: to provoke disgust toward migrants, to present migrants as culturally inferior, and to arouse fears about migrants’ supposed violent behaviour. Metaphors are used not only to dehumanise migrants but also to incite other Internet users against them, indicating a strong political motivation in the dissemination of user-generated online hate speech. The use of metaphors also distinguishes online hate speech from other discourses. In the case of Slovenia, it is quite rare even for far-right politicians to refer to migrants as ‘vermin’, which was the most common word in our database to describe migrants.
The diagnoses were much less common (present only in 31% of statements) than prognoses, and they framed migrants as a cultural, demographic, security, social wellbeing, and health threat and as part of a conspiracy against the existence of the nation and citizens, as well as European civilization. This is similar to right-wing populist othering as both discourses frame migrants and other minorities as threats and use similar stereotypes (e.g., Wodak 2015; Cervi and Tejedor Calvo 2021; Murphy 2021).
If we compare the discursive structure and content of hate speech statements with political othering discourses, we can see an overlapping of diagnoses, differences in the use of metaphors, and complementarity in the setting of prognoses. While political speech is characterized by the diagnosis and representation of minority and marginalized groups as a problem, hateful Internet users focus on prognosis and complement these diagnosis messages with providing ‘final’ solutions. This is why we can label the user-generated online hate speech as ‘executive speech’ and view it as complementing political hate discourses based on othering.
This study also contributed to two methodological challenges encountered in hate speech research. First, we demonstrated that frame analysis can be a very effective tool for understanding user-generated online hate discourses, especially their ideological underpinnings and embeddedness in media and political contexts, despite the relative shortness of the statements. Second, the analyzed material included statements that were mostly moderated out from the platforms on which they were originally published and were thus typically unavailable to researchers harvesting such data (with a certain time lag) directly from the Internet. Technically, removed statements could still be captured later, but, in practice, on external platforms such as Facebook, this is becoming increasingly difficult, so we encounter almost no research specifically on such data.
Therefore, we managed to analyze the hate speech cases that are the most radical and genuine. Since our research showed that executive speech (which directly calls for hate crimes) is the essential characteristic of online hate speech discourse, it is particularly important to prevent it and/or remove it. Furthermore, in recent years, online hate speech has been recognized as such, and it is increasingly being dealt with politically and with self-regulatory mechanisms, as shown by the efforts undertaken by social media after multinational government action (e.g., Meta 2022).
With respect to the limitations of this research, we observed a rather peculiar hate speech context, predominantly related to migrants, by collecting these data in a specific period and environment. There are, of course, many other environments and contexts that may function differently; however, our findings still point to the very intrinsic characteristics of user-generated online hate speech, particularly because we studied such extreme hate speech that other Internet users reported to the national authority. One could also criticize the lack of advanced analytical methods (e.g., clustering of cases), but we estimated that the added value of such elaboration would be negligible given the required resources.
Regarding further research, we encourage studies dealing with self-regulation mechanisms, specifically hate speech reported by Internet users in national contexts and comparative studies. To address the gaps in the research data, we suggest that forthcoming studies pay closer attention to hate speech removed from the platforms. This would expand our understanding of the most critical online hate speech, where hate speech characteristics are most clearly articulated.

Author Contributions

Conceptualization, I.Š.; methodology, I.Š.; validation, V.V.; formal analysis, I.Š.; investigation, I.Š.; resources, I.Š. and V.V.; data curation, I.Š. and V.V.; writing—original draft preparation, I.Š.; writing—review and editing, I.Š. and V.V.; visualization, I.Š.; supervision, V.V.; project administration, V.V.; funding acquisition, I.Š. and V.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Slovenian Research Agency through the project Resources, methods and tools for the understanding, identification and classification of various forms of socially unacceptable discourse in the information society (J7-8280, 2017–2020), and research programs Internet research (P5-0399) and Equality and human rights in times of global governance (P5-0413). The funder had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The codebook and coding results are available at https://doi.org/10.5281/zenodo.6652306 (accessed on 5 August 2022).

Conflicts of Interest

The authors declare no conflict of interest.

Note

1. We are using the more general term migrants in cases where asylum seekers or refugees have been attacked.

References

  1. Alkomah, Fatimah, and Xiaogang Ma. 2022. A Literature Review of Textual Hate Speech Detection Methods and Datasets. Information 13: 273. [Google Scholar] [CrossRef]
  2. Arcila-Calderón, Carlos, David Blanco-Herrero, Maximiliano Frías-Vázquez, and Francisco Seoane-Pérez. 2021. Refugees Welcome? Online Hate Speech and Sentiments in Twitter in Spain during the Reception of the Boat Aquarius. Sustainability 13: 2728. [Google Scholar] [CrossRef]
  3. Assimakopoulos, Stavros, Fabienne H. Baider, and Sharon Millar. 2017. Online Hate Speech in the European Union: A Discourse-Analytic Perspective. SpringerBriefs in Linguistics. Cham: Springer International Publishing. [Google Scholar] [CrossRef]
  4. Aznar, Hugo. 2019. Information Disorder and Self-Regulation in Europe: A Broader Non-Economistic Conception of Self-Regulation. Social Sciences 8: 280. [Google Scholar] [CrossRef]
  5. Bacchi, Carol. 2009. The Issue of Intentionality in Frame Theory: The Need for Reflexive Framing. In The Discursive Politics of Gender Equality. London: Routledge. [Google Scholar] [CrossRef]
  6. Bahador, Babak, and Daniel Kerchner. 2019. Monitoring Hate Speech in the US Media. Working Paper. Washington: The George Washington University. [Google Scholar]
  7. Baider, Fabienne H., Anna Constantinou, and Anastasia Petrou. 2017. Metaphors Related to Othering the Non-Natives. In Online Hate Speech in the European Union: A Discourse—Analytic Perspective. Cham: Springer International Publishing, pp. 38–42. [Google Scholar]
  8. Bajt, Veronika. 2016. Anti-Immigration Hate Speech in Slovenia. In Razor-Wired: Reflections on Migration Movements through Slovenia in 2015. Edited by Neža Kogovšek Šalamon. Ljubljana: Peace Institute, pp. 50–61. [Google Scholar]
  9. Ballsun-Stanton, Brian, Lise Waldek, and Julian Droogan. 2021. Online Right-Wing Extremism: New South Wales, Australia. Proceedings 77: 18. [Google Scholar] [CrossRef]
  10. Brown, Alexander. 2017. What Is Hate Speech? Part 1: The Myth of Hate. Law and Philosophy 36: 419–68. [Google Scholar] [CrossRef]
  11. Brown, Alexander. 2018. What Is so Special about Online (as Compared to Offline) Hate Speech? Ethnicities 18: 297–326. [Google Scholar] [CrossRef]
  12. Bruder, Martin, and Laura Kunert. 2020. The Conspiracy Hoax? Testing Key Hypotheses about the Correlates of Generic Beliefs in Conspiracy Theories during the COVID-19 Pandemic. Brief Research Report 57: 43–48. [Google Scholar] [CrossRef]
  13. Calderón, Fernando H., Namrita Balani, Jherez Taylor, Melvyn Peignon, Yen-Hao Huang, and Yi-Shin Chen. 2021. Linguistic Patterns for Code Word Resilient Hate Speech Identification. Sensors 21: 7859. [Google Scholar] [CrossRef]
  14. Castaño-Pulgarín, Sergio Andrés, Natalia Suárez-Betancur, Luz Magnolia Tilano Vega, and Harvey Mauricio Herrera López. 2021. Internet, Social Media and Online Hate Speech. Systematic Review. Aggression and Violent Behavior 58: 101608. [Google Scholar] [CrossRef]
  15. Cervi, Laura, and Santiago Tejedor Calvo. 2020. Framing “The Gypsy Problem”: Populist Electoral Use of Romaphobia in Italy (2014–2019). Social Sciences 9: 105. [Google Scholar] [CrossRef]
  16. Cervi, Laura, and Santiago Tejedor Calvo. 2021. “Africa Does Not Fit in Europe”: A Comparative Analysis of Anti-Immigration Parties’ Discourse in Spain and Italy. Migraciones 51: 241–68. [Google Scholar] [CrossRef]
  17. Chkhaidze, Ana, Parla Buyruk, and Lera Boroditsky. 2021. Linguistic Metaphors Shape Attitudes towards Immigration. PsyArXiv. [Google Scholar] [CrossRef]
  18. Cleland, Jamie. 2017. Online Racial Hate Speech. In Cybercrime and Its Victims. Edited by Elena Martellozzo and Emma A. Jane. Abingdon and New York: Routledge, pp. 131–47. [Google Scholar]
  19. Council of Europe Committee of Ministers. 1997. Recommendation No. R (97) 20 of the Committee of Ministers to Member States on “Hate Speech” (Adopted by the Committee of Ministers on 30 October 1997 at the 607th Meeting of the Ministers’ Deputies). Available online: https://rm.coe.int/1680505d5b (accessed on 5 August 2022).
  20. Dangerous Speech Project. 2020. Dangerous Speech: A Practical Guide. August 4. Available online: https://dangerousspeech.org/guide/ (accessed on 5 August 2022).
  21. Dekker, Rianne. 2017. Frame Ambiguity in Policy Controversies: Critical Frame Analysis of Migrant Integration Policies in Antwerp and Rotterdam. Critical Policy Studies 11: 127–45. [Google Scholar] [CrossRef]
  22. Devakumar, Delan, Geordan Shannon, Sunil S. Bhopal, and Ibrahim Abubakar. 2020. Racism and Discrimination in COVID-19 Responses. The Lancet 395: 1194. [Google Scholar] [CrossRef]
  23. Dolea, Alina, Diana Ingenhoff, and Anabella Beju. 2021. Country Images and Identities in Times of Populism: Swiss Media Discourses on the “Stop Mass Immigration” Initiative. International Communication Gazette 83: 301–25. [Google Scholar] [CrossRef]
  24. Dombos, Tamas, Andrea Krizsan, Mieke Verloo, and Violetta Zentai. 2012. Critical Frame Analysis: A Comparative Methodology for the ‘Quality in Gender+ Equality Policies’ (QUING) Project. Budapest: CEU. Available online: https://cps.ceu.edu/publications/working-papers/critical-frame-analysis-quing (accessed on 5 August 2022).
  25. Ekman, Mattias, and Michal Krzyzanowski. 2021. A Populist Turn?: News Editorials and the Recent Discursive Shift on Immigration in Sweden. Nordicom Review 42: 67–87. [Google Scholar] [CrossRef]
  26. Entman, Robert M. 1993. Framing: Toward Clarification of a Fractured Paradigm. Journal of Communication 43: 51–58. [Google Scholar] [CrossRef]
  27. European Commission. 2016. The EU Code of Conduct on Countering Illegal Hate Speech Online. Available online: https://ec.europa.eu/info/policies/justice-and-fundamental-rights/combatting-discrimination/racism-and-xenophobia/eu-code-conduct-countering-illegal-hate-speech-online_en (accessed on 5 August 2022).
  28. Fan, Lizhou, Huizi Yu, and Zhanyuan Yin. 2020. Stigmatization in Social Media: Documenting and Analyzing Hate Speech for COVID-19 on Twitter. Proceedings of the Association for Information Science and Technology 57: e313. [Google Scholar] [CrossRef]
  29. Frank, Ana, and Iztok Šori. 2015. Normalizacija Rasizma z Jezikom Demokracije: Primer Slovenske Demokratske Stranke. Časopis Za Kritiko Znanosti, Domišljijo in Novo Antropologijo 43: 89–103. [Google Scholar]
  30. Gamson, William A., and Andre Modigliani. 1989. Media Discourse and Public Opinion on Nuclear Power: A Constructionist Approach. American Journal of Sociology 95: 1–37. [Google Scholar] [CrossRef]
  31. Ghaffari, Soudeh. 2022. Discourses of Celebrities on Instagram: Digital Femininity, Self-Representation and Hate Speech. Critical Discourse Studies 19: 161–78. [Google Scholar] [CrossRef]
  32. Hughey, Matthew W., and Jessie Daniels. 2013. Racist Comments at Online News Sites: A Methodological Dilemma for Discourse Analysis. Media, Culture & Society 35: 332–47. [Google Scholar] [CrossRef]
  33. Johnston, Hank. 2004. A Methodology for Frame Analysis: From Discourse to Cognitive Schemata. In Social Movements and Culture. Edited by Hank Johnston and Bert Klandermans. Minneapolis: University of Minnesota Press, pp. 217–46. [Google Scholar] [CrossRef]
  34. Karpova, Anna, Aleksei Savelev, Alexander Vilnin, and Sergey Kuznetsov. 2022. Method for Detecting Far-Right Extremist Communities on Social Media. Social Sciences 11: 200. [Google Scholar] [CrossRef]
  35. Kogovšek Šalamon, Neža. 2018. Sovražni Govor: Vloga Prava in Pravosodja. In Svoboda Izražanja, Mediji in Demokracija v Postfaktični Družbi: Filozofske, Teoretične in Praktične Refleksije. Edited by Andraž Teršek. Ljubljana: Lexpera, GV založba, pp. 91–102. [Google Scholar]
  36. Krzyzanowski, Michal, and Per Ledin. 2017. Uncivility on the Web: Populism in/and the Borderline Discourses of Exclusion. Journal of Language and Politics 16: 566–81. [Google Scholar] [CrossRef]
  37. Kuhar, Roman, and Iztok Šori. 2017. Campaigning for Equality: The Frames of Homophobia in Slovenia. Ljubljana: Ljubljana University Press, Faculty of Arts. Available online: https://e-knjige.ff.uni-lj.si/znanstvena-zalozba/catalog/download/7/40/257-1?inline=1 (accessed on 5 August 2022).
  38. Lakoff, George. 2017. What Is Hate Speech? George Lakoff (blog). September 14. Available online: https://george-lakoff.com/2017/09/14/what-is-hate-speech/ (accessed on 5 August 2022).
  39. Lazaridis, Gabriella, Giovanna Campani, and Annie Benveniste, eds. 2016. The Rise of the Far Right in Europe: Populist Shifts and ‘Othering’. London: Palgrave Macmillan UK. [Google Scholar] [CrossRef]
  40. Lucas, Brian. 2014. Methods for Monitoring and Mapping Online Hate Speech. GSDRC Helpdesk Research Report No. 1121. Birmingham: University of Birmingham. [Google Scholar]
  41. Magu, Rijul, Kshitij Joshi, and Jiebo Luo. 2017. Detecting the Hate Code on Social Media. arXiv:1703.05443. [Google Scholar]
  42. Marshall, Shantal R., and Jenessa R. Shapiro. 2018. When “Scurry” vs. “Hurry” Makes the Difference: Vermin Metaphors, Disgust, and Anti-Immigrant Attitudes. Journal of Social Issues 74: 774–89. [Google Scholar] [CrossRef]
  43. Massanari, Adrienne. 2017. #Gamergate and The Fappening: How Reddit’s Algorithm, Governance, and Culture Support Toxic Technocultures. New Media & Society 19: 329–46. [Google Scholar] [CrossRef]
  44. Mathew, Binny, Ritam Dutt, Pawan Goyal, and Animesh Mukherjee. 2019. Spread of Hate Speech in Online Social Media. Paper presented at 10th ACM Conference on Web Science—WebSci ’19, Boston, MA, USA, June 30–July 3; Boston: ACM Press, pp. 173–82. [Google Scholar] [CrossRef]
  45. Meta. 2022. Community Standards Enforcement. Available online: https://transparency.fb.com/data/community-standards-enforcement/hate-speech/facebook/ (accessed on 5 August 2022).
  46. Meza, Radu Mihai, Hanna-Orsolya Vincze, and Andreea Mogos. 2019. Targets of Online Hate Speech in Context. Intersections 4: 26–50. [Google Scholar] [CrossRef]
  47. Mondal, Mainack, Leandro Araújo Silva, and Fabrício Benevenuto. 2017. A Measurement Study of Hate Speech in Social Media. Paper presented at 28th ACM Conference on Hypertext and Social Media—HT ’17, Prague, Czech Republic, July 4–7; Prague: ACM Press, pp. 85–94. [Google Scholar] [CrossRef]
  48. Mudde, Cas, and Cristóbal Rovira Kaltwasser. 2013. Exclusionary vs. Inclusionary Populism: Comparing Contemporary Europe and Latin America. Government and Opposition 48: 147–74. [Google Scholar] [CrossRef]
  49. Mudde, Cas, ed. 2005. Racist Extremism in Central & Eastern Europe. Milton: Routledge. [Google Scholar]
  50. Mudde, Cas, ed. 2007. Populist Radical Right Parties in Europe. Cambridge: Cambridge University Press. [Google Scholar] [CrossRef]
  51. Murphy, Alexander. 2021. Political Rhetoric and Hate Speech in the Case of Shamima Begum. Religions 12: 834. [Google Scholar] [CrossRef]
  52. Musolff, Andreas. 2015. Dehumanizing Metaphors in UK Immigrant Debates in Press and Online Media. Journal of Language Aggression and Conflict 3: 41–56. [Google Scholar] [CrossRef]
  53. Paasch-Colberg, Sünje, Christian Strippel, Joachim Trebbe, and Martin Emmer. 2021. From Insult to Hate Speech: Mapping Offensive Language in German User Comments on Immigration. Media and Communication 9: 171–80. [Google Scholar] [CrossRef]
  54. Pajnik, Mojca, and Birgit Sauer, eds. 2018. Populism and the Web: Communicative Practices of Parties and Movements in Europe. Abingdon and New York: Routledge. [Google Scholar]
  55. Pajnik, Mojca, and Marko Ribać. 2021. Medijski Populizem in Afektivno Novinarstvo: Časopisni Komentar O »begunski Krizi«. Javnost—The Public 28: S103–21. [Google Scholar] [CrossRef]
  56. Pajnik, Mojca, Emanuela Fabijan, and Mojca Frelih. 2019. “Chameleonic Populism”: Framing “the Refugee Crisis” in the Political Field. Paper presented at Societies and Spaces in Contact: Between Convergence and Divergence, Portorož/Portorose, Slovenia, Trieste, Italy, September 16; Available online: http://spacesincontact2019.splet.arnes.si/files/2018/09/Zbornik-povzetkov_Societies-and-Spaces-in-Contact_2019.pdf (accessed on 5 August 2022).
  57. Panagiotidis, Kosmas, Nikolaos Tsipas, Theodora Saridou, and Andreas Veglis. 2020. A Participatory Journalism Management Platform: Design, Implementation and Evaluation. Social Sciences 9: 21. [Google Scholar] [CrossRef]
  58. Paz, María Antonia, Julio Montero-Díaz, and Alicia Moreno-Delgado. 2020. Hate Speech: A Systematized Review. SAGE Open 10: 2158244020973022. [Google Scholar] [CrossRef]
  59. Perifanos, Konstantinos, and Dionysis Goutsos. 2021. Multimodal Hate Speech Detection in Greek Social Media. Multimodal Technologies and Interaction 5: 34. [Google Scholar] [CrossRef]
  60. Quandt, Thorsten. 2018. Dark Participation. Media and Communication 6: 36–48. [Google Scholar] [CrossRef]
  61. Ranieri, Maria, ed. 2016. Populism, Media and Education: Challenging Discrimination in Contemporary Digital Societies. Oxon and New York: Routledge. [Google Scholar]
  62. Rossini, Patrícia. 2020. Beyond Incivility: Understanding Patterns of Uncivil and Intolerant Discourse in Online Political Talk. Communication Research 49: 399–425. [Google Scholar] [CrossRef]
  63. Sagredos, Christos, and Evelin Nikolova. 2022. “Slut I Hate You”: A Critical Discourse Analysis of Gendered Conflict on YouTube. Journal of Language Aggression and Conflict 10: 169–96. [Google Scholar] [CrossRef]
  64. Saha, Koustuv, Eshwar Chandrasekharan, and Munmun De Choudhury. 2019. Prevalence and Psychological Effects of Hateful Speech in Online College Communities. Paper presented at 10th ACM Conference on Web Science—WebSci ’19, Boston, MA, USA, June 30–July 3; New York: Association for Computing Machinery, pp. 255–64. [Google Scholar] [CrossRef]
  65. Scan Project. 2020. Hate Speech Trends during the COVID-19 Pandemic in a Digital and Globalised Age. Available online: http://scan-project.eu/wp-content/uploads/sCAN-Analytical-Paper-Hate-speech-trends-during-the-Covid-19-pandemic-in-a-digital-and-globalised-age.pdf (accessed on 5 August 2022).
  66. Silva, Leandro, Mainack Mondal, Denzil Correa, Fabricio Benevenuto, and Ingmar Weber. 2016. Analyzing the Targets of Hate in Online Social Media. Cologne and Palo Alto: AAAI Press, p. 4. [Google Scholar]
  67. Šori, Iztok, and Ivanova Ivanova. 2017. Right-Wing Populist Convergences and Spillovers in Hybrid Media Systems. In Populism and the Web: Communicative Practices of Parties and Movements in Europe, 1st ed. Edited by Mojca Pajnik and Birgit Sauer. London: Routledge, pp. 55–71. [Google Scholar] [CrossRef]
  68. Šori, Iztok. 2015. Za narodov blagor: Skrajno desni populizem v diskurzu stranke Nova Slovenija. Časopis za Kritiko Znanosti, Domišljijo In Novo Antropologijo 43: 104–17. [Google Scholar]
  69. Šulc, Ajda, and Andrej Motl. 2020. Letno Poročilo Spletno Oko 2019. Ljubljana: Univerza v Ljubljani—Fakulteta za Družbene Vede, Center za Varnejši Internet, Spletno Oko. [Google Scholar]
  70. Terrón-Caro, Teresa, Rocío Cárdenas-Rodríguez, and Fabiola Ortega-de-Mora. 2022. Discourse, Immigration and the Spanish Press: Critical Analysis of the Discourse on the Ceuta and Melilla Border Incident. Societies 12: 56. [Google Scholar] [CrossRef]
  71. Thiele, Daniel, Birgit Sauer, and Otto Penz. 2022. Right-Wing Populist Affective Governing: A Frame Analysis of Austrian Parliamentary Debates on Migration. Patterns of Prejudice 5: 1–21. [Google Scholar] [CrossRef]
  72. Valenčič, Erik. 2021. Koalicija sovraštva II. Mladina 29. Available online: https://www.mladina.si/209276/koalicija-sovrastva-ii/ (accessed on 5 August 2022).
  73. Vehovar, Vasja, and Dejan Jontes. 2021. Hateful and Other Negative Communication in Online Commenting Environments: Content, Structure and Targets. Acta Informatica Pragensia 10: 257–74. [Google Scholar] [CrossRef]
  74. Vehovar, Vasja, Andrej Motl, Lija Mihelič, Boštjan Berčič, and Andraž Petrovčič. 2012. Zaznava sovražnega govora na slovenskem spletu. Teorija in Praksa 49: 19. [Google Scholar]
  75. Vehovar, Vasja, Blaž Povž, Darja Fišer, and Nikola Ljubešić. 2020. Družbeno nesprejemljivi diskurz na Facebookovih straneh novičarskih portalov. Teorija in Praksa 57: 27. [Google Scholar]
  76. Verloo, Mieke, ed. 2007. Multiple Meanings of Gender Equality: A Critical Frame Analysis of Gender Policies in Europe, English ed. Budapest: CPS Books. New York: CEU Press. [Google Scholar]
  77. Verloo, Mieke. 2016. Mainstreaming Gender Equality in Europe. A Critical Frame Analysis Approach. Επιθεώρηση Κοινωνικών Ερευνών 117: 11. [Google Scholar] [CrossRef]
  78. Wahl-Jorgensen, Karin. 2020. An Emotional Turn in Journalism Studies? Digital Journalism 8: 175–94. [Google Scholar] [CrossRef]
  79. Walsh, James P. 2021. Digital Nativism: Twitter, Migration Discourse and the 2019 Election. New Media & Society, 1–26. [Google Scholar] [CrossRef]
  80. Waqas, Ahmed, Joni Salminen, Soon-gyo Jung, Hind Almerekhi, and Bernard J. Jansen. 2019. Mapping Online Hate: A Scientometric Analysis on Research Trends and Hotspots in Research on Online Hate. PLoS ONE 14: e0222194. [Google Scholar] [CrossRef]
  81. Williams, Matthew L., Pete Burnap, Amir Javed, Han Liu, and Sefa Ozalp. 2020. Hate in the Machine: Anti-Black and Anti-Muslim Social Media Posts as Predictors of Offline Racially and Religiously Aggravated Crime. The British Journal of Criminology 60: 93–117. [Google Scholar] [CrossRef]
  82. Wodak, Ruth. 2015. The Politics of Fear. What Right-Wing Populist Discourses Mean. Los Angeles: Sage. [Google Scholar]
  83. Wojczewski, Thorsten. 2020. “Enemies of the People”: Populism and the Politics of (in)Security. European Journal of International Security 5: 5–24. [Google Scholar] [CrossRef]
  84. Yerly, Grégoire. 2022. Right-Wing Populist Parties’ Bordering Narratives in Times of Crisis: Anti-Immigration Discourse in the Genevan Borderland during the COVID-19 Pandemic. Swiss Political Science Review, 1–21. [Google Scholar] [CrossRef] [PubMed]
Table 1. Numbers and shares of hate speech statements attacking different personal characteristics (N = 372).

Personal Characteristics | N | %
Migration | 261 | 70
Religion | 76 | 20
Nationality | 38 | 10
Political orientation | 23 | 6
Sexual orientation | 10 | 3
Race | 9 | 2
Gender | 6 | 2
Table 2. Numbers and shares of identified coding markers of critical frame analysis in hate speech statements (N = 372).

Coding Markers | N | %
Prognosis | 298 | 80
Reference | 215 | 58
Metaphor | 180 | 49
Diagnosis | 116 | 31
Prognosis: active actor | 59 | 16
Diagnosis: passive actor | 43 | 12
Table 3. Diagnosis: frames, numbers, shares, and frame descriptions of the statements (N = 261).

Diagnosis Frame | N | % | Description
Migrants endanger the existence of the Slovenian nation and Europe. | 18 | 7 | Migrants, especially Muslims, pose a threat to the existence of the nation and European civilization. Politicians and supporters of migrants perpetrate genocide on Slovenians and other Europeans. Multiculturalism does not work.
There are too many migrants in Slovenia and in Europe. | 14 | 5 | There are already too many migrants living in Slovenia (e.g., too many migrant children in kindergartens). Users do not want any migrants in the country. Things have gone too far.
Migrants are cheaters, criminals, and violent. | 13 | 5 | Users accuse migrants of lying about their age, not fighting for their country, committing terrorism, raping, stealing, killing, and abusing animals and women, and they present them as uncivilized, radical, and violent.
Migrants do not behave properly. | 8 | 3 | Migrants behave disrespectfully and are rebellious (e.g., when demanding better housing conditions).
Asylum legislation is too generous. | 4 | 2 | Migrants abuse asylum legislation and the system.
Migrants endanger our wellbeing. | 3 | 1 | Because of migrants, the Slovenian people, especially families, will experience a lower standard of living. Migrants will be a burden on the social welfare system for life. Legislation is written for minorities.
Migrants are part of a conspiracy. | 3 | 1 | White heterosexual men are under attack by migrants and gays. The media incorrectly report on migrants.
Migrants are a health threat. | 1 | 0.5 | Migrants transmit diseases.
Table 4. Prognosis: frames, numbers, shares, and frame descriptions of the statements (N = 261).

Prognosis | N | % | Description
Murder | 167 | 64 | Migrants should be killed.
Protect the border and homes | 13 | 5 | A wall, electric fence, minefields, or similar barriers should be placed on the border.
Revenge and vigilante justice | 12 | 5 | For each death of a European in terrorist attacks, migrants should be killed. People should take up arms against migrants.
Torture and insult | 11 | 4 | Migrants should be tortured in various ways.
Expulsion of migrants | 10 | 4 | All migrants, especially Muslims, should be deported from Europe.
Beating | 8 | 3 | Migrants should be beaten.
Deny any help | 6 | 2 | Deny any help to migrants and reject all migrants who come to Europe.
Table 5. Metaphors: clusters, numbers, shares, and examples of statements (N = 261).

Metaphor Cluster | N | % | Examples
Pests | 49 | 19 | Vermin, parasites, rats
Uncivilized | 18 | 7 | Backward, cannibal, chimpanzee
Violent and criminal | 17 | 7 | War criminals, rapists, pedophiles
Dirty | 16 | 6 | Dirt, stink, scum
Ethnically different | 14 | 5 | African, Gypsies, niggers
Religious | 9 | 3 | Islam-lovers, Satanists, radicals
Animals | 9 | 3 | Monkey, pig, dogs
Intellectually inferior | 6 | 2 | Imbecile, idiots, no logic
General insult | 5 | 2 | Assholes, bitches, damned
Disease | 3 | 1 | Bacteria, pig flu, virus
Sexually deviant | 2 | 1 | Goat f**kers, over breeders, faggots
Lazy | 2 | 1 | No work habits
Not man enough | 2 | 1 | Cowards
Table 6. Reference: clusters, numbers, shares, and examples of statements (N = 261).

Reference Cluster | N | % | Examples
Weapons | 53 | 20 | 9 mm, AK47, machine gun, nuclear weapons, sterilization
Ideology | 48 | 18 | 14/88, Arbeit macht frei, Auschwitz, chimney, Dachau, Desinfektion, gas chambers, sieg heil, Hitler, Mauthausen, Treblinka, Zyklon, anti-multiculturalism, anti-communism
Values | 40 | 15 | European civilization, Slovenian nation, Security, Peace, Justice
Other | 36 | 14 | Islam, Putin, police, prime minister, Confederate flag
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

