1. Introduction
Fears related to the spread of hate speech in contemporary societies stem particularly from the experiences of fascist and Nazi regimes, which not only rhetorically enforced stereotypes, xenophobia, and racism but transformed these into state policies, resulting in millions of victims. Accordingly, for decades, hate speech was considered most dangerous when disseminated from the top by influential persons and organizations, such as politicians or the mass media (Dangerous Speech Project 2020; Murphy 2021). This remains true and nowadays applies especially to populist political actors and far-right media, who often drown out reasoned debate with the rhetoric of hate (Lazaridis et al. 2016). However, with the expansion of user-generated content on the Internet, we now face a different phenomenon: a notable increase in the ‘grassroots’ dissemination of hate speech and other socially unacceptable communication, sometimes referred to as ‘dark participation’ (Quandt 2018). This type of communication is especially problematic on social media, in online discussion forums, and in the comment sections of online media outlets (Mondal et al. 2017; Vehovar et al. 2020). With the increased use of the Internet during the COVID-19 pandemic, the problem accelerated further (Fan et al. 2020): the consumption of digital media grew together with discriminatory responses to fear, which disproportionately affect marginalized groups (e.g., Devakumar et al. 2020; Karpova et al. 2022) and stimulate false or unproven assertions, such as conspiracy theories (Bruder and Kunert 2020; Scan Project 2020). Hate speech is undoubtedly a pressing social issue that raises questions about the health of democracy and media systems in Europe and elsewhere, and it continuously generates the need for research to better understand and cope with the phenomenon.
In this context, the aim of the following exploratory study is to expand the knowledge about online hate speech reported by Internet users to hate speech monitoring organizations. Taking Slovenia as a case study, we analyze its ‘ecosystem’ and discursive structure. First, we focus on the media and political contexts of user-generated online hate speech and analyze the sources of publication, the removal rate, and the main targets. Next, we focus specifically on hate speech against migrants and analyze the corresponding discourse using critical frame analysis (Bacchi 2009; Dombos et al. 2012; Verloo 2016), asking how authors frame problems and solutions, what kinds of metaphors and references they use, and what the ideological substance of their statements is. We explore the relationship between political and user-generated hate discourses, as well as the links of user-generated hate speech to extreme-right ideologies.
The focus on migrants is related to the very specific timeframe of our analysis, which covers a remarkable period in recent European history: the peak of the so-called ‘refugee crisis’ in 2015 and 2016. At that time, nearly half a million refugees and migrants crossed Slovenia (which has a population of 2 million) along the so-called Balkan migration route, and a vast mass of hateful comments flooded the Internet. The empirical case, based on Slovenian data, is informative for the entire European context because, with respect to attitudes toward hate speech and migration, Slovenia consistently holds the position of a median EU27 country; the same is true for the majority of general socioeconomic indicators, including social media usage and the share of households with Internet access (Vehovar and Jontes 2021, p. 3).
One original feature of this study is the data used. Empirical research on online hate speech is typically based on hate speech statements observed directly in online venues (e.g., Rossini 2020) and is increasingly conducted using automatic detection approaches (e.g., Alkomah and Ma 2022; Calderón et al. 2021; Lucas 2014; Mondal et al. 2017; Vehovar et al. 2020). However, databases stemming from self-regulatory mechanisms remain rather unexplored terrain from the academic research perspective. In such mechanisms, Internet users more or less promptly report hateful content generated by other users to content providers or to specialized organizations for monitoring hate speech (Vehovar et al. 2012; Hughey and Daniels 2013), which may achieve the removal of such content.
In our case, we used data obtained from the national hotline for reporting illegal content: Spletno Oko (www.spletno-oko.si, accessed on 17 February 2022), a member of the international hotline network INHOPE (www.inhope.org, accessed on 5 August 2022). At the time of our analysis, Spletno Oko was the main civil society authority in Slovenia to which citizens could anonymously report (supposedly) illegal online content, including hate speech. In most cases in our analysis, the hateful content was later removed from the Internet due to internal moderation follow-up by the content provider or due to corresponding law enforcement interventions. This important part of online hate speech is thus typically unavailable to researchers, who usually capture such content with a considerable lag, leading to an incomplete view of the phenomenon (Waqas et al. 2019), particularly after the changes in the treatment of hate speech from 2016 onwards, when global social network companies signed the EU Code of Conduct on countering illegal hate speech online (European Commission 2016).
Together with the introduction of modern computer algorithms, these measures to a large extent prevented hate speech from becoming publicly visible (Meta 2022). Researching these specific data thus means that we studied the most flagrant hate speech, so we justifiably expected that the underlying patterns, if they existed, would appear in their most articulated form.
Another original contribution is the application of the critical frame analysis method to user-generated online hate speech. In recent years, online hate speech has been extensively studied (for systematic reviews, see Castaño-Pulgarín et al. 2021; Paz et al. 2020), particularly from legal and freedom of speech viewpoints (Massanari 2017), as well as from media (e.g., Saha et al. 2019), social (e.g., Lucas 2014), and psychological perspectives (e.g., Assimakopoulos et al. 2017). Nevertheless, the use of linguistic pragmatics and discourse analysis methods in hate speech research is still relatively rare (Dekker 2017), although not entirely absent (e.g., Sagredos and Nikolova 2022; Ghaffari 2022).
We address this aspect by treating hate speech as a discourse embedded in a specific political and media context and by using the critical frame analysis method, which has been applied predominantly to policy documents (e.g., Verloo 2007) and political party documents (e.g., Lazaridis et al. 2016) but rarely to content generated by Internet users (e.g., Kuhar and Šori 2017). This disparity could, in part, be due to the empirical material itself, as frame analysis presupposes a rather complex textual structure, while user-generated online hate speech statements are often short and simplified. Therefore, determining the extent to which frame analysis can be used to analyze user-generated online hate speech is one of the challenges we address in this study.
The article proceeds as follows. First, we situate user-generated online hate speech in its political, media, and legal contexts based on the findings of previous studies. Next, we outline our methodological approach to the empirical research and present the results. Finally, we outline our main conclusions in the Discussion.
3. Materials and Methods
3.1. Research Questions
Based on the above-described background, we formulated the following research questions:
RQ1: What are the main characteristics of the ‘ecosystem’ (i.e., the source of publication, removal rate, and main targets) and the discursive structure of hate speech statements reported to the national hotline?
RQ2: What are the frames and underlying ideologies of user-generated online hate speech against migrants?
To address these research questions, we applied the frame analysis method, developed a corresponding coding scheme, and applied it to each statement obtained from hate speech reports to the national hotline.
3.2. The Frame Analysis
Frames are discursive elements in texts and can be defined as ‘problem-solving schemata’ or ‘mental orientations that organize perception and interpretation’ (Johnston 2004). The corresponding frame analysis approach helps to establish a consistent and sensible causal story from information on how a problem develops and should be solved (Entman 1993; Gamson and Modigliani 1989). This means that each text can be analyzed to determine the frames within which problems (diagnoses) and solutions (prognoses) are defined, either implicitly or explicitly (Verloo 2016). We specifically refer to the critical frame analysis developed by Verloo (2007, 2016), who further elaborated on (and specified) this discursive approach to analyze the (fundamental) norms, beliefs, and perceptions included in the texts selected as the subject of analysis. In our case, these are textual messages posted online by Internet users.
How authors frame a certain problem or solution may be strategically chosen, for example, with the aim of influencing discussion and decision-making. In addition, authors can select frames that most effectively dehumanize and incite hatred against other people while still avoiding criminal prosecution and complying with the moderation policies of media outlets and social networks. However, framing can also be unintentional or unconscious and can reflect (and reproduce) the dominant discourses that exist in a specific society, which means that deep cultural meanings influence the framing process (Bacchi 2009). According to some authors (Dombos et al. 2012), these can be even more important than the intentionality of the framing process.
3.3. The Data
We obtained the database of user-generated online hate speech statements from the national hotline Spletno Oko, to which Internet users anonymously reported illegal content posted on the Internet, either by submitting a form on the hotline’s website or by using a special feature (i.e., the Spletno Oko button) that some media outlets have incorporated into their online comment sections.
Typically, more than 500 hate-speech-related reports were received annually (Šulc and Motl 2020). Specially trained analysts reviewed and categorized the reported statements and, when the statements potentially violated Slovenian hate speech legislation (roughly 10% of reports), forwarded them to law enforcement. The statements collected for this analysis were reported between 1 January 2016 and 1 June 2017. This period was selected intentionally because it comprises the peak of the so-called refugee crisis in Slovenia, which was marked by an unprecedented escalation of online hate speech. We can thus expect that these reports of (non-moderated/non-censored) hate speech statements reveal the most genuine frames in hate speech discourse.
The hotline provided 489 reports identified by its analysts as potentially containing elements of hate speech against persons or groups with protected characteristics, as defined by the national legislation following the Council of Europe recommendations. The hotline had already excluded reports that did not meet the definition of hate speech (e.g., indecent language, insults, or threats to persons or groups outside the protected characteristics). We further reviewed the database and excluded all statements where the initial author was not a regular Internet user (e.g., a politician who published hate speech on his social media profile) or where crucial information was missing, such as the source of publication. During this review, we excluded 117 statements, leaving 372 statements for coding.
Thus, our data were based on reports collected through a self-regulatory mechanism. At least three factors influence citizens’ reporting of other people’s hateful content. The first factor is the national policy context, since the state sets the legal framework, which usually serves as the basis for the moderation policies adopted by media and Internet content providers. The second is the political context: research findings show that the most prominent targets of hate speech are often connected to current events on the political and media agenda and that discursive patterns involve the proliferation of similar stereotypes about certain target groups (Meza et al. 2019). The third is the sensibility and awareness of the users themselves, as well as broader social norms regarding what is considered acceptable communication; again, this largely depends on the policy and political context of a certain nation or state.
The coding sheet was first tested by two researchers on a smaller sample. One researcher then coded the whole sample, while the other reviewed and validated the data. We found almost no discrepancies because the coding related either to administrative aspects or to frame analysis characteristics, which were robust, objective, and unambiguous.
3.4. The Coding Scheme
Statements in the original hotline database were fully anonymized by the time we received them because the hotline application form follows the highest security standards and does not log any additional information, such as the submitter’s Internet protocol (IP) address or the device used. We additionally checked whether the hate speech statements themselves might reveal any personal information, but we found no such cases. The original database was also accompanied by administrative coding, including the report date, URL, statement text, and category of the statement, which denoted whether the statement complied with the definition of hate speech. We assigned additional administrative codes, including the coding date, coder ID, media, and target group. For the purpose of frame analysis, we coded each statement according to six markers, using the methodology of sensitizing questions proposed by Verloo (2016): diagnosis (what is the problem?), prognosis (what is the solution?), diagnosis passive actor (who is affected by the problem?), prognosis active actor (who should provide the solution?), metaphors (how is the target group described?), and references (to what kinds of ideologies and other references do the authors refer?).
The markers served to identify the frames, which enabled us to analyze the discourse of user-generated hate speech, elaborate on its discursive and ideological conceptualizations, and identify strategic framing.
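For illustration, the following minimal sketch shows how such a coded record could be represented programmatically; the field names and the example values are hypothetical and are not drawn from the actual Spletno Oko database:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CodedStatement:
    """One reported statement with administrative codes and frame markers."""
    # Administrative codes (from the hotline database or added by coders)
    report_date: str        # e.g., "2016-03-14"
    url: str                # source of publication
    media: str              # e.g., "social media", "comment section", "forum"
    target_group: str       # e.g., "migrants"
    coder_id: str
    coding_date: str
    # Frame analysis markers (sensitizing questions after Verloo 2016);
    # optional, because short statements may lack some markers
    diagnosis: Optional[str] = None                 # what is the problem?
    prognosis: Optional[str] = None                 # what is the solution?
    diagnosis_passive_actor: Optional[str] = None   # who is affected by the problem?
    prognosis_active_actor: Optional[str] = None    # who should provide the solution?
    metaphors: List[str] = field(default_factory=list)   # how is the target group described?
    references: List[str] = field(default_factory=list)  # ideologies and other references

# A hypothetical coded record (content invented for illustration only)
example = CodedStatement(
    report_date="2016-03-14",
    url="https://example.com/comments/123",
    media="forum",
    target_group="migrants",
    coder_id="C1",
    coding_date="2017-09-01",
    diagnosis="migrants framed as a security threat",
    prognosis="call for violence",
    metaphors=["vermin"],
    references=["nativism"],
)
```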
5. Discussion
In this article, we examined the media and political contexts of user-generated online hate speech and its discursive features. We applied the method of critical frame analysis to user-generated online hate speech reported by Internet users to the Slovenian monitoring organization Spletno Oko. Only statements with elements potentially meeting the definition of hate speech—adopted by national legislation following the Council of Europe recommendations—were included in the analysis.
We analyzed 372 hate speech statements posted on social media, in the comment sections of online media outlets, and on online discussion forums during the surge of anti-migrant online hate speech in Slovenia in 2016–2017. In the first step, we analyzed the main characteristics of these statements to reveal the basic features of user-generated online hate speech and to understand its ‘ecosystem.’ In the second step, we focused on the hate speech statements that targeted migrants (261 statements) and analyzed their discursive elements in detail to understand the corresponding discourse and detect the underlying ideologies of the authors. We were specifically interested in the question of how contemporary user-generated online hate speech is discursively and ideologically linked to political hate discourses and extreme-right ideologies.
With respect to the ‘ecosystem’ of hate speech (RQ1), the results showed that social media represent the main platform for spreading hate speech, which is consistent with the findings of other studies (Castaño-Pulgarín et al. 2021). At the same time, social media platforms enforce the strictest moderation policies, since the number of hate speech statements removed from social networks was much higher than the number of removed ‘comments’ in the media or discussion forums.
During the observed ‘refugee crisis,’ migrants were the most attacked group in these reports, which shows that the reported user-generated online hate speech was strongly embedded in the political context. Our data confirm findings from previous research that the publishing (and, we may add, also the reporting) of hate speech occurs more frequently after high-profile trigger events and in situations where the attacked minority is the subject of political discussions and organized propaganda against it (see Murphy 2021; Arcila-Calderón et al. 2021).
The results of our analysis indicate that authors, to a large extent, strategically choose metaphors, references, and prognoses and are aware of their political message. Publishing (and reporting) can, therefore, be considered part of the grassroots political struggles of Internet users trying to influence public opinion and policy adoption, although we do not exclude the possibility that some hate speech on the Internet is expressed spontaneously.
Within the discursive structure of hate speech against migrants (RQ2) lie its main features and indicators: prognoses, references, and metaphors. Prognoses were present in 80% of the reported statements, and nearly all of them called for death or various forms of extreme violence against migrants. On a six-point hate speech intensity scale, most would be classified in the highest group (Bahador and Kerchner 2019). The question of who should provide the ‘solutions’ remains unanswered in most of the statements, indicating strategic framing by the authors to avoid criminal prosecution.
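To make this placement concrete, the following toy sketch maps statements onto such a scale; the six level labels are our paraphrase of the Bahador and Kerchner (2019) scale, and the keyword rules are purely hypothetical, since the coding in this study was performed manually by trained analysts:

```python
# Toy placement on a six-level hate speech intensity scale
# (labels paraphrase Bahador and Kerchner 2019; rules are hypothetical).
INTENSITY_LEVELS = {
    1: "disagreement",
    2: "negative actions",
    3: "negative character",
    4: "demonizing and dehumanizing",
    5: "violence",
    6: "death",
}

def toy_intensity(statement: str) -> int:
    """Keyword heuristic for illustration only; not a validated classifier."""
    text = statement.lower()
    if any(w in text for w in ("kill", "death", "exterminate")):
        return 6  # calls for death: the highest group on the scale
    if any(w in text for w in ("beat", "shoot", "attack")):
        return 5  # calls for other forms of violence
    if "vermin" in text:
        return 4  # dehumanizing metaphor
    return 1  # fallback in this toy sketch; real coding requires human judgment
```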
Besides the prognoses, another important feature of user-generated online hate speech against migrants is references to weapons and to Nazi war crimes during WWII. A considerable proportion of the Internet users who spread hate speech seem to sympathize with the most extreme-right ideas and ideologies, and they use online platforms to disseminate their views.
As in many other countries, the central concern of the discourse of extremist and violent Internet users is the preservation of the ‘purity’ of the nation and civilization (Ballsun-Stanton et al. 2021; Assimakopoulos et al. 2017; Krzyzanowski and Ledin 2017; Walsh 2021), which serves as a justification of hatred. This leads these users to the conclusion that migrants, especially Muslims, represent an existential threat. Thus, hate speech against migrants is ideologically rooted in nativism and racism and strongly linked to widespread prejudices, stereotypes, and hatred towards Islam in society.
The final important feature of this discourse is metaphors (present in 48% of statements), which were employed in three ways: to provoke disgust toward migrants, to present migrants as culturally inferior, and to arouse fears about migrants’ supposed violent behavior. Metaphors are used not only to dehumanize migrants but also to incite other Internet users against them, indicating a strong political motivation in the dissemination of user-generated online hate speech. The use of metaphors also distinguishes online hate speech from other discourses. In the case of Slovenia, it is quite rare even for far-right politicians to refer to migrants as ‘vermin’, which was the most common word used to describe migrants in our database.
Diagnoses were much less common than prognoses (present in only 31% of statements); they framed migrants as a cultural, demographic, security, social wellbeing, and health threat and as part of a conspiracy against the existence of the nation and its citizens, as well as European civilization. This is similar to right-wing populist othering, as both discourses frame migrants and other minorities as threats and use similar stereotypes (e.g., Wodak 2015; Cervi and Tejedor Calvo 2021; Murphy 2021).
If we compare the discursive structure and content of hate speech statements with political othering discourses, we see an overlap in diagnoses, differences in the use of metaphors, and complementarity in the setting of prognoses. While political speech is characterized by the diagnosis and representation of minority and marginalized groups as a problem, hateful Internet users focus on the prognosis and complement these diagnostic messages by providing ‘final’ solutions. This is why we can label user-generated online hate speech ‘executive speech’ and view it as complementing political hate discourses based on othering.
This study also contributed to two methodological challenges encountered in hate speech research. First, we demonstrated that frame analysis can be a very effective tool for understanding user-generated online hate discourses, especially their ideological underpinnings and their embeddedness in the media and political contexts, despite the relative shortness of the analyzed statements. Second, the analyzed material included statements that were mostly moderated out of the platforms on which they were originally published and were thus typically unavailable to researchers harvesting such data (with a certain time lag) directly from the Internet. Technically, removed statements can sometimes still be captured after their formal removal from public view, but, in practice, on external platforms such as Facebook, this is becoming increasingly complicated, so we encountered almost no research specifically on such data.
Therefore, we managed to analyze hate speech cases that are the most radical and genuine. Since our research showed that executive speech (which directly calls for hate crimes) is the essential characteristic of online hate speech discourse, it is particularly important to prevent and/or remove it. Furthermore, in recent years, online hate speech has been increasingly recognized as such and dealt with both politically and through self-regulatory mechanisms, as shown by the efforts undertaken by social media companies after multinational government action (e.g., Meta 2022).
With respect to the limitations of this research, we observed a rather specific hate speech context, predominantly related to migrants, as the data were collected in a particular period and environment. There are, of course, many other environments and contexts that may function differently; however, our findings still point to the intrinsic characteristics of user-generated online hate speech, particularly because we studied extreme hate speech that other Internet users had reported to the national authority. One could also criticize the lack of advanced analytical methods (e.g., clustering of cases), but we estimated that the added value of such elaboration would be negligible given the required resources.
Regarding further research, we encourage studies dealing with self-regulation mechanisms, specifically with hate speech reported by Internet users in national contexts, as well as comparative studies. To address the gaps in research data, we suggest that forthcoming studies pay closer attention to hate speech removed from the platforms. This would expand our understanding of the most critical online hate speech, in which hate speech characteristics are most clearly articulated.