Article

The Public Perception of Hate Speech Regulation in Unconventional Media

by
Ismael Crespo Martínez
,
Inmaculada Melero López
* and
María Isabel López Palazón
Department of Political Science, Social Anthropology and Public Finance, University of Murcia, Ronda de Levante, 10, 30008 Murcia, Spain
*
Author to whom correspondence should be addressed.
Soc. Sci. 2025, 14(12), 705; https://doi.org/10.3390/socsci14120705
Submission received: 15 November 2025 / Revised: 4 December 2025 / Accepted: 8 December 2025 / Published: 10 December 2025
(This article belongs to the Special Issue Understanding the Influence of Alternative Political Media)

Abstract

This study provides one of the first quantitative analyses of citizens’ perceptions of hate speech regulation in Spain, based on an empirical study of the Torre Pacheco case. The research statistically validates the association between the consumption of content through unconventional media and a reduced tendency to accept regulatory measures, a significant finding given the current climate of growing disinformation and digital polarization. The results indicate that women are more likely to support regulation, while individuals who are politically more conservative tend to reject such intervention. The conclusions highlight a potential association between political affiliation, trust in state institutions, and resistance to content regulation in the digital environment, which provides key insights into the current challenges facing democratic governance.

1. Introduction and Rationale

The growth of unconventional outlets such as social media, YouTube, and other nontraditional channels has changed not only the information ecosystem, but the frameworks that regulate hate speech as well. Specifically, those who publish hate speech have found social media to be a platform that allows the popularity of this type of discourse to be increased and strengthened. Among these, X (formerly Twitter) is susceptible to the propagation of narratives in the political sphere due to its link to current affairs content (Blanco Alfonso et al. 2022). WhatsApp is another channel with the potential for virality, allowing it to spread political disinformation as well (Avaaz 2019). In fact, the evidence suggests that these platforms have an ideal infrastructure for the propagation of disinformation and polarizing content, which is having a measurable effect on the deliberative climate (Tucker et al. 2018; Vosoughi et al. 2018).
Among the capabilities of social media is their vast potential to magnify polarizing content using algorithms (Sunstein 2001). However, it must be emphasized that the paradigms of interaction between users and algorithms differ significantly among platforms. Such dynamics can reduce exposure to certain perspectives while encouraging the creation of user groups with specific, like-minded ideas. This, in turn, reinforces shared content, or the classic and well-known echo chamber (Cinelli et al. 2021).
These practices have harmful effects on democracies, and their impact is not limited to those which are emerging or unstable. In fact, fear is spreading even in established democracies such as the United States (Tucker et al. 2018). Among the most serious consequences of using a negative discourse strategy is the creation of a climate of distrust in institutions, along with citizen apathy (Campos Domínguez et al. 2022). This situation is the result of fostering climates of polarization, intolerance, and mistrust toward institutions. Consequently, dialog is weakened and a discourse of rejection toward political adversaries is fomented, which exacerbates differences and fosters confrontational discourse (Koltay 2016). This state of affairs has sparked a debate on the need to regulate such content, which must involve public institutions, private platforms, and civil society itself through counter-narrative strategies, fact-checking, and media literacy.
As part of education in media literacy, some social media platforms have incorporated a series of technical instruments capable of detecting hate speech using algorithms, after which they either report the discourse or simply remove it (Jia and Schumann 2025). In addition to these measures, countries such as the United Kingdom and Germany have also implemented regulations on hate speech to set limits on discriminatory content.
This paper analyzes the association between the consumption of nontraditional media content and citizens’ opinion regarding hate speech regulation, incorporating ideological self-placement as a key variable. The study is based on research involving a specific region, along with the infamous Torre Pacheco case (July 2025), where online hate speech was associated with local unrest and the activation of far-right networks. The data used by the authors to examine this case were taken from a survey conducted by the CEMOP-Panel of the Murcia Center for Public Opinion Studies (CEMOP)1. This survey analyzed not only patterns of news consumption regarding the case, but also respondents’ dissemination of content related to the mobilization that took place, as well as their level of agreement with the strict regulation of information channels such as social media.
The political relevance of the incident being examined dates back to 2019, when the radical right-wing party Vox became the party with the most votes in the Murcia Region in the general elections.
Vox’s discourse is similar to that of other European far-right groups, whose main narratives are the following: anti-immigration, which appeals to fears about irregular and/or illegal immigration, specifically from Morocco and other North African countries; anti-establishment rhetoric, seen as distrust of elites and a disaffection with the system; narratives that defend traditional values; and a discourse on globalization and modernization (Crespo Martínez and Mora Rodríguez 2022). All the foregoing, together with especially relevant data on social media consumption and exposure to disinformation and hate speech, reinforces the aim of this research, which is to establish a framework for addressing content regulation in cases of discrimination. The case addressed in this study was widely reported in the media due to the intense mobilization that took place on social media after the attack. This resulted in a vast spread of disinformation, which fueled the tense climate even further. Lastly, the mobilization was carried out by ultra-right groups who orchestrated attacks against immigrants, which resulted in confrontations and rioting.

2. Literature Review

2.1. Unconventional Media, Polarization, and Hate Speech

One of the premises underlying this research is that fake news uses social media platforms to expand and magnify its reach (Tucker et al. 2018). From this point of view, theories such as selective exposure suggest that the Internet is a unique space in which political leanings, along with exposure to a diversity of information linked to patterns of political behavior, are interrelated (Garrett 2009). This supports the view that social media platforms tend to provide content based on the political and ideological stances of citizens by using algorithms that analyze the digital information generated by each user (Bucher 2012). These digital dynamics highlight the way in which information bubbles (Pariser 2017) encourage users to search for information that lies within their own frame of reference. This gives rise to echo chambers (Barberá et al. 2015) as a consequence of selective exposure, thereby duplicating messages and reinforcing previously held beliefs (Cho et al. 2020).
These consumption patterns are typical of social media, yet they are not exclusive to these platforms. They have also been found in emerging political blogs, other unconventional media, and audiovisual spaces such as YouTube, where behavior associated with political polarization is propagated (Adamic and Glance 2005). The absence of relationships between people who participate in debates and discussions in the same spaces has led to these environments being described as cyber ghettos (Johnson et al. 2009).
To further develop the theoretical construction, the term unconventional media should be clarified. Beyond traditional media, there is now a collection of alternative platforms that embrace journalistic tenets that differ from conventional standards. These outlets are critical of traditional media and have a less formal structure. Their main purpose is to interact with citizens through more direct community participation, and to challenge prevailing narratives. However, this model also faces significant challenges such as the lack of oversight, as well as potential vulnerability to biased funding sources (Müller and Schulz 2021).
In confronting the problem of false information, we start from the premise that novelty is an aspect that takes precedence over information accuracy. This makes people more likely to share novel content, which in turn foments the spread of fake news (Vosoughi et al. 2018). Added to this phenomenon is the tendency of audiences to consume messages that appeal to moral feelings through communicative resources that intensify psychological reactions (Brady et al. 2020). This can result in the creation of hostile climates, denigration of the image of an adversary, and limited dialog between political parties (Marín Albaladejo 2022). Consequently, stories are based on insults, threats, mockery, and sarcasm, which create polarized environments and widespread rejection of those who are the target of hate speech, and most affected by it (Rodríguez Pérez 2024). However, not all views are opposed to the consequences of polarization and polarized narratives. According to Waisbord (2020), the use of discourse that generates polarized climates can be beneficial, both in terms of attracting audiences and advertising, and for obtaining symbolic rewards on digital platforms such as followers, popularity, and relevance.
Nevertheless, polarized discourse may or may not share some characteristics with hate speech, the latter of which promotes prejudice and intolerance. In fact, the former can also be a factor in creating a hostile climate which, in some cases, can lead to discriminatory acts or even violent attacks (Gagliardone et al. 2015). Hate speech focuses its discursive strategy on the existence of an enemy (Albertazzi and McDonnell 2008), with the aim of dividing the population into antagonistic groups (Vázquez Barrio 2011). As a result, we are faced with a contagious phenomenon in which citizens tend to become radicalized, or what Lelkes (2016) calls affective polarization.
Furthermore, when hate speech fosters a climate of hostility and encourages discriminatory action, it originates with a group of individuals who have a series of traits that are detrimental and undesirable. These manifestations are not only expressed in acts of violence, but also through ridicule and innuendo (Parekh 2006), which is developed using five basic strategies: violation of established rules; engendering a feeling of shame in victims; using threats and intimidation to instill fear; dehumanization of victims through negative comparisons; and finally, disseminating disinformation about the groups to which the victims of hate speech belong (Williams 2021).

2.2. Hate Speech as a Manifestation of Political Polarization

The relationship between hate speech and affective polarization allows us to understand how digital environments magnify emotional and moral divisions between political parties. Affective polarization is defined as emotional distance and hostility toward those aligned with the opposing political camp (Iyengar et al. 2012; Lelkes 2016), which is fueled by identity reinforcement provided by homogeneous digital communities. In these communities, selective exposure and echo chambers produce continuous interaction between emotion and political morality: in other words, the content not only informs, but it also mobilizes feelings of grievance, fear, or resentment (Brady et al. 2020). This dynamic tends to moralize one’s own political identity while reducing empathy toward the opposing group. Thus, hate speech is an extreme, discursive display of this affective polarization, which transforms ideological differences into moral judgments about the legitimacy and humanity of the other. As highlighted by Parekh (2006), political hatred does not arise from rational disagreement, but from the perception of threat and dehumanization of the adversary. In this regard, the polarizing rhetoric transmitted by nonconventional media not only reflects pre-existing divisions, but also propagates and radicalizes these breaches, thereby legitimizing intolerance as a type of political identity (Waisbord 2020). Consequently, hate speech can be defined as an advanced stage of affective polarization, where the mistrust of institutions, media fragmentation, and the emotional nature of public debate converge, thereby eroding democratic deliberation.

2.3. Diverse Consumption and the Far Right

The global rise in the far right is a widely documented political phenomenon. Specialized studies have used various terms to refer to these manifestations, such as neo-fascism, far right, radical right, and alternative right. These terms are used to describe leaders and governments who promote hate speech and harassment, as well as authoritarian public policies capable of restricting human rights and civil liberties (Fair 2025).
Not only have unconventional media increased their presence in the global information landscape, but they are attracting an increasingly larger audience. However, traditional media continue to be the most visible information outlets. Moreover, the objectives of these unconventional outlets may range from state-funded propaganda strategies, the advancement and mobilization of the far right, and the creation of new populist parties, to increased revenue through clickbait without any initial political aims. As a consequence, unconventional news sources not only accept a certain type of distinct content, but they are also defined by their inflammatory and polarizing style (Müller and Freudenthaler 2022).
The main audiences for nontraditional news sources are either right-wing or far-right profiles who are highly interested in politics, distrust conventional media, and have a restrictive attitude toward immigration (Puschmann et al. 2024). They are also linked to conspiracy theories such as the idea that elites use immigration to promote the replacement of the autochthonous population, or that climate change is fabricated and its effects exaggerated, in order to achieve economic and political goals (Hameleers 2021).
Another notable feature of right-wing, alternative media is its proximity to political populism among certain sectors of voters (Puschmann et al. 2024). In fact, Müller and Schulz (2021) argue that the frequency and pace with which people are exposed to alternative news is a key factor in consolidating their right-wing political stance. Thus, continued exposure to this type of content is often associated with a stronger affinity for populist discourse and a higher probability of voting for right-wing parties.
In Spain, the media ecosystem and presence of nonconventional media reflect the aforementioned trends. The party in Spain known as Vox, which is considered far-right (Ferreira 2019), has been the subject of study for various reasons, one of which is their dissemination of a simplistic discourse capable of generating hatred, with social media serving as the ideal channel (Garde Eransus 2025). This strategy uses populist messages to incite hatred and fear, which can be used by both political parties and media outlets who share the same ideas. As a result, the polarized view is magnified and foments a negative perception of migrants in general, and immigrants in particular. These narratives offer a fragmented view of the facts and appeal only to one vision of reality (Mariscal Ríos 2025). Depending on their political leanings, voters tend to prefer certain media and platforms over others. For example, in the case of those who support Vox, there is a clear inclination toward using the X channel, due to its status as one of the most highly political and polarized platforms (Rivera et al. 2021).
Research by Guerrero Solé and Philippe (2020), which is based on analyzing content disseminated on X (formerly Twitter), revealed that Vox was the party that issued the highest number of inflammatory messages during the state of emergency decreed by the government due to the COVID-19 pandemic. This behavior is part of the strategic use of hate speech by populist movements, which resort to these tactics to spread their ideals and mobilize social sectors who are especially disenchanted. Given the context, issues such as feminism, LGTBI rights, and migration have become the main topics of far-right hate speech in Spain and other European countries (Ramírez Plascencia et al. 2022). This reinforces the idea that continuous exposure to xenophobic messages contributes to consolidating and magnifying existing prejudice toward certain social groups (Mele 2013).

2.4. The Debate Regarding Regulation in Europe and the United States

Due to the spread of hate speech in digital environments, discussion regarding the need to regulate such content is growing, the aim of which is to strike a balance between protecting the general, fundamental rights of citizens and freedom of expression. This debate is focused on the involvement of public institutions in promoting regulatory frameworks, as well as the participation of private platforms in applying their own community standards and mitigation systems. In the European Union, various regulations, recommendations, and rulings have been approved with the aim of reducing the negative effects of disinformation, including the Charter of Fundamental Rights of the European Union (2000), the European Media Freedom Act, the Code of Best Practice, and the Digital Services Act (DSA, EU Regulation 2022/2065) (Fernández Torres and Cea 2025). The DSA, approved in 2022, aims to combat illegal content and disinformation and to protect users by ensuring algorithmic transparency (EUR-Lex 2022).
Germany and the United Kingdom are pioneers in the approval of regulations against hate speech and disinformation. In Germany, the NetzDG (Network Enforcement Act), also known as the German social media law, was passed in 2017. It was the first European regulation to combat agitation on social media resulting from the viral spread of fake news. Social media platforms have a protocol whereby false content must be labeled as such, after which the post is reviewed in accordance with existing laws. In fact, if any illegal content is found, it must be examined and removed within 24 hours. While this law has been criticized for its potential to limit freedom of expression, the pros outweigh the cons due to the individual safety it provides, along with the advantage of encouraging the use of reliable sources on social media (Federal Ministry of Justice and Consumer Protection 2020).
Regarding the UK, although its regulation aimed to avoid the negative effects attributed to the German law, it has likewise been criticized for limiting freedom of expression. The Online Safety Act of 2023 broadened the responsibility of digital platforms for filtering content and required algorithmic transparency, with special emphasis on the protection of vulnerable sectors. Among its provisions, the act addresses the protection and safety of children and other vulnerable groups, as well as measures to mitigate the dangers associated with the posting of illegal content on social media platforms (Gov UK 2023).
In Spain, the main regulation of hate speech can be found in the Penal Code, which addresses hate crimes in Article 510. Along the same lines, digital content is also overseen by the Digital Services Act (DSA) of 2022, which stems from European regulations that apply to all member states. This law protects consumers and establishes clear responsibilities for online platforms and social media, with the aim of confronting illegal content, hate speech, and disinformation (EUR-Lex 2023). However, although the DSA does not regulate hate speech per se, it places the responsibility for this content on the intermediaries. Spain also approved the Second Action Plan to Combat Hate Crimes (2022–2024), which establishes various measures aimed at prevention, training, institutional coordination, and related objectives.
These regulations of the European Union and its member states differ from those of the United States, where the foundational principle and main tenet is the protection of free speech. In the USA, Section 230 of the Communications Decency Act (1996) exempts platforms from liability for content posted by their users, which limits the state’s regulatory capacity. Free speech is protected by the First Amendment with regard to political discourse, yet it must be distinguished from commercial discourse, which is for profit and subject to governmental regulation (Gómez Peralta 2023).
From the standpoint of political communication, the regulation of hate speech and disinformation cannot be viewed solely from a legal perspective, but rather as a problem of communicative governance (Gorwa 2019; Suzor 2019). Digital platforms have evolved toward a quasi-institutional role in the organization of the public domain (Gillespie 2018), where they manage the visibility, interaction, and reputation of political actors (Napoli 2019). This ability to modulate public conversation transforms content moderation into communicative power with democratic implications. Given the context, regulations such as the EU’s DSA and the UK’s Online Safety Act can be interpreted as attempts to rebalance the communication ecosystem and ensure algorithmic transparency, which is in line with theories regarding platform regulation (Helberger et al. 2018). The final goal is not to censor, but to protect the public interest in matters of communication, and to strengthen institutional trust in the digital sphere.
Among the various instruments for platform monitoring, we find counter-narratives, media literacy, fact-checking, and platform accountability. Regarding the first, counteractive discourse strategies could be implemented to offset polarizing narratives, making it possible to combat hate speech and promote peaceful, non-polarized speech. Garland et al. (2022) suggest that hate speech is associated with changes in public discourse, and that a counter-discourse can help to curb hateful rhetoric in online discussions. The second instrument to employ would be media literacy and digital education. This strategy offers technical mechanisms to detect hate speech using algorithms, which are then reported or removed (Jia and Schumann 2025). The third issue is transparency and monitoring of recommendation algorithms through mechanisms such as fact-checking and data verification, which could help improve political discourse due to their effect on reputation, as well as the potential risks to politicians who spread disinformation (Nyhan and Reifler 2014). Finally, addressing platform accountability through independent audits would enhance transparency as well. All these initiatives seek not only to curb the spread of hate speech, but also to strengthen social resilience, inclusion, and democratic cohesion (Benesch 2014).

3. Case Study: Torre Pacheco

The media coverage surrounding Vox is not limited to the events in Torre Pacheco; as mentioned above, this political party has sparked several controversies. One example was the placement of several billboards in El Ejido with xenophobic and discriminatory images, installed just a few days after the attack in Torre Pacheco, which was severely condemned by the public and various political actors (Sánchez 2025). Another example in the Murcia Region involved the significant level of political and social mobilization initiated by Vox, which led to the closure of one of the juvenile centers in Santa Cruz where 47 minors of different nationalities were housed (Ferrán 2025).
The events in Torre Pacheco, a small city with approximately 40,000 inhabitants in the Region of Murcia, took place between 9 and 19 July 2025, following an attack on a 68-year-old resident, allegedly by young people of Maghreb origin. This incident triggered a wave of protests and riots in the municipality, driven by the dissemination of information and narratives through unconventional outlets such as social media, instant messaging platforms, and YouTube channels, which were exacerbated even further by far-right social media platforms.
Added to this situation is the widespread impact that unconventional media currently have, due to their ability to magnify the impact of political party narratives. Specifically, the Digital News Report 2025 points out that 73% of Spanish citizens believe social media is the main instigator of disinformation, followed by instant messaging channels such as WhatsApp at 33%, with news websites coming in third at 25%. These data are in line with the findings of Avaaz (2019), who concluded that nearly ten million Spanish citizens received disinformation during the 2019 general election campaign. Furthermore, most of this hateful content was directed at immigration (14%), followed by feminism and the LGBTQ community (10%).
According to data from the Spanish Observatory on Racism and Xenophobia (Ministry of Inclusion, Social Security, and Migration), on the days when the demonstrations took place, 26,222 hate messages were disseminated. Moreover, 90% of the comments were directed at people from North Africa, and 33% used dehumanizing expressions that incited violence or called for deportation from the country.
The significance of the Torre Pacheco case lies in its illustration of how the dissemination of messages through unconventional media can generate social mobilization and exacerbate hate speech. Similar phenomena have been recorded in other municipalities in the Murcia Region, and in various communities in other regions as well. This highlights the relevance of this case for analyzing the impact of unconventional media consumption on public opinion, as well as the public’s willingness to support or oppose measures to regulate content on digital platforms.

4. Data and Methodology

The main objective of this research is to analyze the extent to which the consumption of unconventional media influences public opinion concerning the need to regulate hate speech and disinformation. These nontraditional outlets include social media, YouTube, online forums, and other nonconventional media. As a complement, the authors have also examined the relationship between people’s political ideology and their attitude toward such regulations.
Therefore, the study attempts to answer the following research questions:
Q1. To what extent does the consumption of nonconventional media influence the public’s opinion on the need to regulate hate speech and disinformation on outlets such as social media?
Q2. Is frequent consumption of unconventional media more prevalent among people with political views close to the far right?
Based on these questions, the working hypotheses (H) are as follows:
H1. 
The consumption of news through unconventional media is associated with a lower tendency to support regulation of hate speech and disinformation on social media.
H2. 
Regular consumption of nontraditional media is more prevalent among individuals who take a far-right political stance.
To verify these hypotheses, three blocks of binary logistic regression models were estimated, which used data from a survey based on a regional, representative sample of 611 cases2. The survey was part of a CEMOP-Panel study (Murcia Center for Public Opinion Studies [Centro de Estudios Murciano de Opinión Pública, CEMOP]) of the University of Murcia. The dependent variable measures the degree of support for stronger regulation of information channels in order to limit the spread of disinformation and hate speech. This variable was coded dichotomously as follows:
1 = Supports the regulation of hate speech.
0 = Does not support the regulation of hate speech, believing that each person should be free to decide what they consume.
These binary logistic regression models allowed us to estimate the probability that an individual will support strict regulation of social media content based on a set of explanatory variables, including gender, age, educational level, ideological self-placement, and the type of information consumed.
The independent variables included in the models were entered categorically, given that specific hypotheses were available for each of the categories of interest. The main independent variables were, firstly, the type of media used to obtain information about the Torre Pacheco case (traditional media, unconventional media, or personal contacts) and, secondly, political ideology. To fully understand the process, it should be noted that the variable unconventional media was created as an aggregate category, which recoded three items into a single group: (1) social media; (2) YouTube channels, podcasts, and other platforms; and (3) unconventional websites and news channels such as forums. This aggregation was justified by the fact that, despite their specific characteristics, these platforms share a core analytical feature: the absence of formal editorial control. Unlike traditional media, they represent an alternative ecosystem which, theoretically, adopts a critical stance toward dominant narratives. They were therefore grouped together in order to measure exposure to this environment in an aggregated manner, which is assumed to exert a unifying influence on mistrust of institutional regulation. In turn, the variable contacts refers to sources of interpersonal information; in other words, information obtained through comments from friends, family members, and acquaintances. Political ideology was measured using an ordinal scale of political self-placement with the following categories: extreme left (1–2), left-wing (3–4), center (5–6), right-wing (7–8), and far-right (9–10).
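As a rough illustration, the aggregation and recoding described above can be sketched with pandas. The column names, category labels, and example values below are hypothetical stand-ins; the actual item wording of the CEMOP questionnaire is not reproduced in this paper.

```python
import pandas as pd

# Hypothetical raw responses standing in for the survey items
df = pd.DataFrame({
    "info_source": ["social_media", "tv", "youtube", "forums", "friends", "press"],
    "ideology_1_10": [2, 5, 9, 7, 4, 6],  # 1-10 self-placement scale
})

# Collapse the three nontraditional items into the single aggregate category
unconventional = {"social_media", "youtube", "forums"}
df["media_type"] = df["info_source"].map(
    lambda s: "unconventional" if s in unconventional
    else ("contacts" if s == "friends" else "traditional")
)

# Band the 1-10 self-placement scale into the five ordinal categories used here
bins = [0, 2, 4, 6, 8, 10]
labels = ["extreme_left", "left", "center", "right", "far_right"]
df["ideology"] = pd.cut(df["ideology_1_10"], bins=bins, labels=labels)
```

The aggregate `media_type` category then enters the regression models as a single categorical predictor, as described in the text.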
Moreover, sociodemographic control variables were also introduced, specifically gender (0 = male, 1 = female), educational level (0 = non-university, 1 = university), and age, which was grouped according to generational criteria: Generation Z (18–28 years old), millennials (29–44 years old), Generation X (45–60 years old), and baby boomers and the Silent Generation (61 or older). The descriptive statistics for the variables considered in the study can be viewed in Table A1 of Appendix A.
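A minimal sketch of the model specification, using simulated data in place of the actual CEMOP-Panel survey (the variable names mirror the conceptual variables above, not the questionnaire's real labels), could look as follows with statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600  # illustrative size, close to the survey's N = 611

# Simulated placeholder data; the real survey data are not public here
df = pd.DataFrame({
    "supports_regulation": rng.integers(0, 2, n),  # 1 = supports, 0 = does not
    "female": rng.integers(0, 2, n),
    "university": rng.integers(0, 2, n),
    "generation": rng.choice(["gen_z", "millennial", "gen_x", "boomer"], n),
    "ideology": rng.choice(
        ["extreme_left", "left", "center", "right", "far_right"], n
    ),
    "media_type": rng.choice(["traditional", "unconventional", "contacts"], n),
})

# Binary logistic regression with categorical predictors entered as dummies,
# in the spirit of the fullest (Model 3) specification described in the text
model = smf.logit(
    "supports_regulation ~ female + university + C(generation)"
    " + C(ideology, Treatment('center'))"
    " + C(media_type, Treatment('traditional'))",
    data=df,
).fit(disp=0)

odds_ratios = np.exp(model.params)  # exponentiated coefficients
```

Choosing the center and traditional-media categories as reference levels is an assumption made for the sketch; exponentiated coefficients are then read as odds ratios relative to those baselines.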

Sampling and Weighting Procedures

Data for this research proceeded from the CEMOP-Panel study, which was based on an online panel (CAWI) with a non-probabilistic sampling design. The initial sample configuration was carried out using a quota system based on gender, age, and territory of the Murcia Region.
Although this method enables efficient access to the sample, it is prone to coverage and self-selection bias. To overcome these limitations and improve the representativeness of the data, the entire analysis was conducted using the corresponding sample weights. A weighting procedure was employed to adjust the sample obtained to the population parameters, using the following demographic variables: territory, gender, age, educational level, and occupation. This procedure corrects potential discrepancies between the sample and the population, thereby strengthening the external validity of the findings.
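The logic of this kind of adjustment can be sketched with cell weighting (post-stratification): each respondent receives the population share of their demographic cell divided by its sample share. The shares below are invented for illustration; the study adjusts on territory, gender, age, educational level, and occupation:

```python
# Hedged sketch of post-stratification weighting. All shares are
# invented placeholders, not figures from the CEMOP-Panel.

def cell_weights(population_shares, sample_shares):
    """Weight per cell = population share / sample share."""
    return {cell: population_shares[cell] / sample_shares[cell]
            for cell in population_shares}

pop = {("female", "18-28"): 0.10, ("female", "29-44"): 0.15,
       ("male", "18-28"): 0.10, ("male", "29-44"): 0.15}
smp = {("female", "18-28"): 0.08, ("female", "29-44"): 0.17,
       ("male", "18-28"): 0.12, ("male", "29-44"): 0.13}

w = cell_weights(pop, smp)
# Underrepresented cells get weights above 1 (e.g. 0.10 / 0.08 = 1.25),
# overrepresented cells get weights below 1.
```

Weighted analyses then multiply each respondent's contribution by their cell weight, pulling sample totals toward the population structure.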

5. Results

Three binary logistic regression models were estimated incrementally in order to analyze the factors that influence the likelihood of supporting stricter regulation of information channels such as social media, with the aim of curbing disinformation and hate speech in situations similar to the Torre Pacheco case. Successive blocks of variables were incorporated into each model in order to assess the evolution of the explanatory capacity and the impact of the different predictors on the dichotomous dependent variable (1 = supports regulation, 0 = does not support regulation).
For a fuller understanding of these models, it should be noted that the initial sample consisted of N = 611 cases. However, the logistic regression models were estimated using listwise deletion, a standard procedure that excludes any participant with a missing value on any of the variables included in the model, whether dependent or independent. This explains the reduction in sample size to N = 559 for Models 1 and 2. The N = 551 in Model 3 is due to eight additional missing cases on the information source variable.
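Listwise deletion itself is a one-line filter. The sketch below uses invented variable names and records purely for illustration:

```python
# Minimal sketch of listwise deletion: drop any respondent with a
# missing value (None) on any model variable. Records are invented.

def listwise_delete(rows, variables):
    """Keep only rows with no missing value on the listed variables."""
    return [r for r in rows if all(r.get(v) is not None for v in variables)]

sample = [
    {"support": 1, "gender": 1, "ideology": 5, "media": "traditional"},
    {"support": 0, "gender": 0, "ideology": None, "media": "traditional"},
    {"support": 1, "gender": 1, "ideology": 7, "media": None},
]
model_vars = ["support", "gender", "ideology", "media"]

complete = listwise_delete(sample, model_vars)  # only the first row survives
```

Because the filter is applied over all model variables at once, adding a predictor with its own missing cases (as with the information source in Model 3) shrinks the estimation sample further.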
Likewise, a collinearity diagnosis was performed prior to the logistic estimation in order to ensure the reliability of the results. The objective was to rule out high multicollinearity among the predictors, as this can inflate standard errors and affect the significance of the coefficients. To evaluate this assumption, tolerance statistics and variance inflation factors (VIFs) were calculated for the independent variables included in the analysis. Following standard criteria, tolerance values below 0.10 or, equivalently, VIF values above 10 were considered problematic. The test results show that all values were well within acceptable limits: the highest VIF was 1.184 and the lowest tolerance was 0.845 (both for the age variable), and all VIFs were below 1.5 across the models. These values rule out the presence of multicollinearity among the model predictors.
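For predictor j, tolerance is 1 − R²_j (the R² from regressing x_j on the remaining predictors) and VIF_j = 1 / tolerance_j. In the two-predictor case R²_j is simply the squared correlation, which makes the computation easy to sketch with toy data (the numbers below are invented, not the study's):

```python
# Tolerance and VIF for the two-predictor case, where R_j^2 reduces
# to the squared Pearson correlation. Toy data, for illustration only.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def vif_two_predictors(x1, x2):
    r2 = pearson_r(x1, x2) ** 2
    tolerance = 1.0 - r2          # share of x1's variance not explained by x2
    return 1.0 / tolerance, tolerance

# Moderately correlated toy predictors stay far below the VIF > 10 threshold.
x1 = [1, 2, 3, 4, 5, 6]
x2 = [2, 1, 4, 3, 6, 5]
vif, tol = vif_two_predictors(x1, x2)
```

With several predictors, the same formula applies with R²_j taken from an auxiliary regression of each predictor on all the others.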
Table 1 displays the classification performance of the model. The overall accuracy rate is 76.5%. Using a standard cut-off point of 0.500, the model demonstrates high sensitivity (84.4% of cases in the supports regulation category correctly classified) and moderate specificity (63.5% of cases in the does not support regulation category correctly classified).
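These rates come from a standard 2×2 classification table at the 0.500 cut-off. The counts below are invented placeholders chosen only to illustrate the formulas; Table 1 reports the study's actual figures:

```python
# How classification-table rates are computed. tp/fn count the
# "supports regulation" category, tn/fp the "does not support"
# category. The counts are invented, not the study's data.

def classification_rates(tp, fn, tn, fp):
    total = tp + fn + tn + fp
    accuracy = (tp + tn) / total       # overall correct classification
    sensitivity = tp / (tp + fn)       # correct among "supports"
    specificity = tn / (tn + fp)       # correct among "does not support"
    return accuracy, sensitivity, specificity

acc, sens, spec = classification_rates(tp=270, fn=50, tn=120, fp=69)
```

Raising the cut-off above 0.500 would trade sensitivity for specificity, which is why the cut-off is always reported alongside the rates.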
As can be observed in Table 2, the first model included only the sociodemographic variables of gender, age, and educational level. The results show that gender is a significant predictor: women are approximately 3.5 times more likely to support strict regulation than men, all other variables remaining constant (Exp(B) = 3.534; p < 0.001). In terms of age, the 45–60 age group (Exp(B) = 2.657; p < 0.001) and the 61 and over segment (Exp(B) = 3.452; p < 0.001) are more likely to support regulation than the youngest group (18–28 years), ceteris paribus. This indicates that the tendency to support control measures increases with age. Regarding education, although having a university degree might theoretically be expected to influence attitudes toward content regulation, the variable is not significant in this model (Exp(B) = 0.936; p = 0.754). This suggests that higher education is not clearly associated with support for regulation, at least not in this sample.
The second model incorporated political self-placement as an attitudinal variable reflecting individuals' perceptions, beliefs, or predispositions regarding the need to regulate information channels such as social media in order to limit disinformation and hate speech. After its inclusion, the effect of gender remained and even increased slightly: women are approximately 4.16 times more likely to support strict regulation than men, ceteris paribus (Exp(B) = 4.163; p < 0.001). The effect of age, however, weakened considerably (29–44 years: Exp(B) = 0.844, p = 0.608; 45–60 years: Exp(B) = 1.562, p = 0.168), with only the 61 and over category remaining significant at the 5% level (Exp(B) = 2.018, p = 0.041). This suggests that the influence of age observed in Model 1 is partly accounted for by political ideology.
Possession of a university education remains insignificant (Exp(B) = 0.792; p = 0.314), once again indicating that educational level is not clearly associated with a willingness to support content regulation on social media, at least not in this sample. As for ideology, a clear pattern emerges: those who identify as centrist, right-wing, or far-right are much less likely to support regulation than the extreme left category, all other variables remaining unchanged. The odds ratios are 0.026 for the center, 0.014 for the right, and 0.008 for the far right, indicating that the more conservative the political stance, the less willing a person is to support measures regulating content on social media (for a robustness check of the logistic regression model with ideology as a metric variable, see Table A2 in Appendix B).
In the last model, the type of media used to obtain information about the events in Torre Pacheco was included as the main explanatory variable. The results show that those who obtain information through unconventional media are significantly less likely to support content regulation (Exp(B) = 0.158; p < 0.001), ceteris paribus, representing an approximate 84% reduction in the odds compared with those who consume traditional media such as the mainstream press, radio, and television. After including this variable, political ideology and gender remain strong, independent predictors, with the other variables held constant. Specifically, women continue to show a greater willingness to support regulation, while people with centrist, right-wing, and far-right political views continue to be less in favor of such measures. This indicates that both ideology and the source of information act as independent and relevant factors in explaining support for content control measures. Age and educational level do not have a significant influence in the last model.
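The reading of these coefficients follows directly from the logit link: Exp(B) = e^B is an odds ratio, and (Exp(B) − 1) × 100 gives the percent change in the odds of supporting regulation. A minimal sketch, using the odds ratios reported above:

```python
# Converting logit output into the percentages quoted in the text.
import math

def odds_ratio(b):
    """Exp(B): multiplicative change in the odds per unit of the predictor."""
    return math.exp(b)

def pct_change_in_odds(exp_b):
    """Percent change in the odds implied by an odds ratio."""
    return (exp_b - 1.0) * 100.0

# Exp(B) = 0.158 for unconventional media: roughly 84% lower odds.
# Exp(B) = 3.534 for gender in Model 1: roughly 253% higher odds.
```

A coefficient of B = 0 gives Exp(B) = 1 (no change), which is why confidence intervals for odds ratios are checked against 1 rather than 0.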

The Regulation of Hate Speech

The results obtained in our analysis reinforce the idea that support for the regulation of hate speech is not solely a normative phenomenon. Instead, it is influenced by ideological and gender factors as well, in addition to media consumption patterns. Consistent with the findings of Matthes and Schmuck (2017), women express a stronger propensity for regulation, which could be construed as a higher level of sensitivity to the social consequences of hate speech, along with an enhanced perception of the risks to community cohesion. This pattern has also been observed in previous European research, where the gender variable is consistently associated with more restrictive attitudes toward potentially damaging speech in public spaces (Munzert et al. 2025).
In terms of political stance, and considering the observational nature of this research, the analysis shows that support for regulation decreases as ideological positions become more conservative, highlighting the possible association between ideology, level of trust in the state, and resistance to regulation. This finding suggests that individuals with conservative ideologies, or those who are more distrustful of democratic institutions, might see state intervention as limiting freedom of expression, and they are more reluctant to accept regulatory measures.
On the other hand, age and educational levels were not significant variables in explaining support for regulation, which is consistent with the results reported by Müller and Schulz (2021). This suggests that exposure to formal, higher education does not necessarily lead to a more favorable attitude toward institutional intervention. As such, these findings reinforce the hypothesis that although education is important for media literacy, it does not necessarily imply a higher level of awareness of hate speech, nor predict a stronger demand for regulation.
The most significant finding of the model is the impact of the type of media consumed. People who obtain their information mostly from unconventional media have 84% lower odds of supporting regulatory measures (Exp(B) = 0.158; p < 0.001) than those who consume traditional newspapers, radio, and television. This result is in line with the studies by Hameleers (2021) and Puschmann et al. (2024) on selective exposure and distrust of the state among ideologically polarized audiences, especially in conservative sectors or among those with an affinity for anti-establishment narratives. Furthermore, this lower tendency to support regulation suggests that exposure to unconventional environments erodes trust in regulatory solutions, thereby acting as a self-reinforcing mechanism of the disinformation ecosystem.
These findings reinforce the idea that support for hate speech regulation might be a significant yet indirect predictor of institutional trust. Individuals who confide in democratic institutions may be inclined to view regulation as a legitimate tool for safeguarding social harmony and information accuracy. By contrast, those who obtain information from unconventional sources, or perceive the state as an ominous force that tries to restrict freedom of expression, may be more inclined to reject any content control whatsoever.

6. Conclusions

As the main objective of this research was to examine how the consumption of unconventional media influences public opinion regarding the need to regulate hate speech and disinformation, the central finding is that consuming information from this type of media significantly reduces the likelihood of supporting regulatory measures (H1).
It is also crucial to differentiate between the nature of each outlet, as their format modulates their impact. Thus, each platform uses different psychological mechanisms to create an alternative reality (Brady et al. 2020). Governmental regulation is consistently viewed as a threat, or an attempt at censorship, which is why exposure to these scenarios could generate mistrust in regulatory solutions, and might even act as a self-empowering mechanism that enhances the disinformation ecosystem. This pattern confirms the effect of selective exposure and its interaction with political polarization, where the perception of bias or manipulation reinforces resistance to intervention.
The findings suggest that the regulation of hate speech is mainly perceived through the prism of institutional trust. In this regard, support for regulation is seen as a key indicator of people's confidence in their public institutions (Norris 2011; Zmerli and Newton 2017), whereas institutional mistrust results in a stronger tendency to reject intervention. Attitudes toward the regulation of hate speech and disinformation are largely explained by the source of information used by individuals, together with their political ideology. According to our findings, both factors operate directly or complementarily, but not interactively. The rejection of regulation is concentrated among people with extreme right-wing political views (H2), which are also linked to anti-immigration rhetoric.
The case of Torre Pacheco illustrates the connection between digital discourse and social mobilization. In local conflicts, unconventional media act in two ways: first, they operate as an emotional driving force that propagates a global far-right rationale and as architects of a collective identity of opposition; second, they magnify polarization in the public sphere. Their discourse mobilizes specific emotions, such as the fear of losing one's cultural identity, anger against a system perceived as corrupt, and resentment toward elites. Opposition to regulation is thus no longer merely an opinion; it is an act of identity affirmation and group loyalty, and a defense mechanism that exacerbates the feeling of "us" against "them."
Given this context, media literacy and counter-discourse strategies appear to be the most promising way to strengthen democratic resilience in the face of disinformation. The solution lies not only in legal regulation, but also in strengthening essential skills, ensuring algorithmic transparency, and promoting shared responsibility between platforms and citizens. Civil society interventions are therefore crucial complements to regulatory efforts. This work is especially pertinent to media literacy, given the strong political polarization found among heavy consumers of unconventional media. For future research, the authors recommend including variables such as trust in the media, civic participation, transnational consumption of information, and media credibility in order to refine future models and design more effective strategies for regulating and preventing disinformation. Another area worth exploring in greater depth is the association between political ideology and the type of media consumed. Although the present study did not delve into this connection, future analyses with more detailed measurements of media consumption could reveal nuances in the impact of exposure to unconventional media across ideological segments. Likewise, experimental or quasi-experimental designs would make it possible to better isolate the causal effects of media consumption and exposure to hate speech. Finally, longitudinal monitoring through periodic CEMOP-Panel studies could help scholars analyze the evolution of attitudes toward regulation in relation to political and media events, thereby offering a dynamic and comparative view of the phenomenon.

7. Limitations

This study is observational in nature and based on data from an online panel (CAWI). Consequently, there are certain limitations in generalizing the results. Although the CEMOP-Panel enabled a representative sample to be obtained from the Murcia Region based on quotas for gender, age, and territory, the non-probabilistic design could introduce coverage or self-selection bias associated with digital access or with the participants' level of political interest. To mitigate these limitations and improve the representativeness of the data, the authors conducted the analysis applying the corresponding sample weights. This procedure corrects possible discrepancies between the sample obtained and the population parameters, thereby reinforcing the external validity of the findings. It should also be noted that the study analyzes perceptions rather than actual behavior; therefore, it is not possible to establish causal relationships, only statistical associations between variables. Finally, the type of information media used was measured on the basis of the main source stated by each respondent, without considering the frequency or intensity of consumption, which makes it difficult to capture the true diversity of information exposure.
The findings obtained offer significant implications for public policies aimed at regulating hate speech and disinformation in digital environments. Firstly, the results highlight the need to move toward co-regulation models that combine the actions of public authorities with the joint responsibility of technology platforms and civil society. The growing obscurity of recommendation algorithms makes it essential to incorporate algorithmic transparency measures and independent external audits. In this way, both users and authorities will be able to understand and monitor the way priority is given to content that magnifies hate speech. These lines of research are consistent with the European framework of the Digital Services Act (2022), and are essential for safeguarding democratic deliberation in a fragmented media ecosystem. Finally, directly strengthening platform transparency and governance will contribute to achieving Sustainable Development Goal 16 (SDG 16), which aims to promote fair, peaceful, and inclusive societies through institutions that are not only strong, but accountable as well.

Author Contributions

Conceptualization: I.C.M. Methodology: I.C.M. and M.I.L.P. Formal analysis: I.M.L. Writing of the original draft: I.M.L. and M.I.L.P. Review and editing: I.C.M. and I.M.L. Supervision: I.C.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research has not received external funding, as the study was conducted in conjunction with CEMOP’s regular activities.

Institutional Review Board Statement

This study was conducted in accordance with the ethical principles set forth by the Murcian Centre for Public Opinion Studies (CEMOP), under the auspices of the University of Murcia, the latter of which operates in accordance with the framework established by Organic Law 3/2018 on the Protection of Personal Data, the Guarantee of Digital Rights, and the General Data Protection Regulation (EU Regulation 2016/679, GDPR). The study was conducted in accordance with the Declaration of Helsinki. The research protocol, as well as the questionnaire and data management procedures, were reviewed and approved by the CEMOP Ethics and Research Committee (approval reference: CEMOP-2025-07-TP). Moreover, this study followed the ethical guidelines for social research involving human participants, ensuring voluntary participation, anonymity, and the principles of data minimization throughout the process.

Informed Consent Statement

All participants were informed of the objectives of the study, the voluntary nature of their participation, the estimated time required to complete the questionnaire, and their right to withdraw at any time. Informed consent was obtained digitally prior to the start of the online survey, in accordance with CEMOP’s standard protocol for data collection using the CAWI system. No information related to the identity of the subjects was collected, and all responses were processed and analyzed anonymously.

Data Availability Statement

The anonymous microdata used in this study belong to the CEMOP-Panel. Due to privacy constraints and data protection regulations, the dataset is not publicly available. However, interested researchers may request access to the anonymized data by submitting a formal request to CEMOP, specifying the intended academic use. The statistical code and model specifications are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Summary of descriptive statistics for the variables considered in the study.
Gender: 0 (male) = 302, 49.5%; 1 (female) = 309, 50.5%
Age: 18–28/Generation Z = 103, 16.8%; 29–44/Millennials = 165, 27.0%; 45–60/Generation X = 175, 28.7%; 61 and over/Baby boomers + Silent = 168, 27.5%
Educational level: 0 (non-university) = 440, 72.0%; 1 (university) = 171, 28.0%
Support for regulation of hate speech and disinformation: 0 (does not support) = 208, 34.0%; 1 (supports) = 334, 54.7%
Ideological self-placement: 1–2/extreme left = 24, 4.0%; 3–4/left-wing = 73, 12.0%; 5–6/center = 319, 52.1%; 7–8/right-wing = 121, 19.8%; 9–10/far-right = 74, 12.1%
Information source/Media: Traditional media = 399, 65.4%; Unconventional media = 171, 28.0%; Contacts = 21, 3.4%
Source: created by the authors.

Appendix B

Table A2. Robustness check of the logistic regression model (ideology as a metric variable).
Model 3
Variable                        B (SE)            p     Exp(B)    95% CI
Gender (female)                 1.545 (0.235)     ***   4.688     [2.960, 7.423]
Age (29–44)                     −0.112 (0.376)          0.894     [0.428, 1.867]
Age (45–60)                     0.351 (0.364)           1.420     [0.696, 2.896]
Age (61 and over)               0.208 (0.368)           1.232     [0.599, 2.532]
Education (university degree)   −0.148 (0.246)          0.862     [0.532, 1.398]
Ideology                        −0.455 (0.064)    ***   0.634     [0.559, 0.720]
Media (unconventional media)    −1.625            ***   0.197     [0.117, 0.331]
Media (contacts)                1.743 (0.781)     **    5.716     [1.237, 26.403]
Constant                        2.828 (0.520)     ***   16.909
N                               551
Nagelkerke’s pseudo-R²          0.422
Chi-square                      194.507 ***
HL test                         8.407
AUC                             0.852
Sample weight                   Yes
Source: Created by the authors. *** p < 0.01, ** p < 0.05. Note: Standard errors are shown in parentheses. Cook’s distances were calculated for each observation; no case had a value above the threshold of 1.0.

Notes

1. The survey can be accessed at the following link: https://www.cemopmurcia.es/estudios/la-opinion-publica-ante-los-incidentes-de-torre-pacheco-la-verdad-cemop-panel/, accessed on 15 September 2025.
2. The study published on the CEMOP website involved a larger sample (1051 cases). However, with the aim of ensuring sample representativeness in terms of strata, gender, and age based on the population register, and in terms of ideological stance (according to the last two barometers published by CEMOP), an extract was taken from the database using an algorithm that prioritized compliance with the different quotas defined as population structures to be achieved. This allowed the authors to start with a more balanced sample at the outset, and to limit the effect and size of the weightings.
3. An interaction analysis was performed to see if there was any influence. Presence was observed at a bivariate level. However, given that one of the categories is not significant, and that the N of another is limited, we decided not to include the interaction in the complete model. In addition, all the analyses presented were performed using sample weighting. This procedure corrects any discrepancies between the sample obtained (CAWI online panel), and the population parameters of the Murcia Region, which included strata, gender, age, educational level, and occupation.

References

  1. Adamic, Lada A., and Natalie Glance. 2005. The Political Blogosphere and the 2004 US Election: Divided They Blog. Paper presented at the 3rd International Workshop on Link Discovery, Chicago, IL, USA, August 21–25; pp. 36–43. [Google Scholar] [CrossRef]
  2. Albertazzi, Daniele, and Duncan McDonnell. 2008. Twenty-First Century Populism. The Spectre of Western European Democracy. Houndmills: Palgrave-Macmillan. [Google Scholar]
  3. Avaaz. 2019. Whatsapp Social Media’s Dark Web. How the Messaging Service Is Being Flooded with Lies and Hate Speech Ahead of the Spanish Elections. Avaaz.org. Available online: https://bit.ly/3FIViyY (accessed on 15 September 2025).
  4. Barberá, Pablo, John T. Jost, Jonathan Nagler, Joshua A. Tucker, and Richard Bonneau. 2015. Tweeting from left to right: Is online political communication more than an echo chamber? Psychological Science 26: 1531–42. [Google Scholar] [CrossRef]
  5. Benesch, Susan. 2014. Countering Dangerous Speech: New Ideas for Genocide Prevention. SSRN, 1–26. [Google Scholar] [CrossRef]
  6. Blanco Alfonso, Ignacio, Leticia Rodríguez Fernández, and Sergio Arce García. 2022. Polarización y discurso de odio con sesgo de género asociado a la política: Análisis de las interacciones en Twitter. Revista De Comunicación 21: 33–50. [Google Scholar] [CrossRef]
  7. Brady, William J., Molly J. Crockett, and Jay J. Van Bavel. 2020. The MAD model of moral contagion: The role of motivation, attention, and design in the spread of moralized content online. Perspectives on Psychological Science 15: 978–1010. [Google Scholar] [CrossRef]
  8. Bucher, Taina. 2012. Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society 14: 1164–80. [Google Scholar] [CrossRef]
  9. Campos Domínguez, Eva, Marc Esteve Del Valle, and Cristina Renedo Farpón. 2022. Retóricas de desinformación parlamentaria en Twitter. Comunicar 72: 47–58. [Google Scholar] [CrossRef]
  10. Cho, Jaeho, Saifuddin Ahmed, Martin Hilbert, Billy Liu, and Jonathan Luu. 2020. Do Search Algorithms Endanger Democracy? An Experimental Investigation of Algorithm Effects on Political Polarization. Journal of Broadcasting & Electronic Media 64: 150–72. [Google Scholar] [CrossRef]
  11. Cinelli, Matteo, Gianmarco De Francisci Morales, Alessandro Galeazzi, Walter Quattrociocchi, and Michele Starnini. 2021. The echo chamber effect on social media. Psychological and Cognitive Science 118: e2023301118. [Google Scholar] [CrossRef] [PubMed]
  12. Crespo Martínez, Ismael, and Alberto Mora Rodríguez. 2022. El auge de la extrema derecha en Europa: El caso de Vox en la Región de Murcia. Política y Sociedad 59: 75974. [Google Scholar] [CrossRef]
  13. EUR-Lex. 2022. Reglamento (UE) 2022/2065 del Parlamento Europeo y del Consejo de 19 de Octubre de 2022 Relativo a un Mercado Único de Servicios Digitales y por el que se Modifica la Directiva 2000/31/CE (Reglamento de Servicios Digitales). EUR-Lex. Available online: https://eur-lex.europa.eu/legal-content/es/ALL/?uri=CELEX:32022R2065 (accessed on 20 September 2025).
  14. EUR-Lex. 2023. Reglamento de Servicios Digitales. EUR-Lex. Available online: https://eur-lex.europa.eu/ES/legal-content/summary/digital-services-act.html (accessed on 20 September 2025).
  15. Fair, Hernán. 2025. La extrema derecha neoliberal y autoritaria en la argentina de Milei. Kairos: Revista de Temas Sociales 15: 45–62. [Google Scholar]
  16. Federal Ministry of Justice and Consumer Protection. 2020. Law to Improve Law Enforcement on Social Media (NetzDG). Available online: https://www.gesetze-im-internet.de/netzdg/BJNR335210017.html (accessed on 15 September 2025).
  17. Fernández Torres, María J., and Nereida Cea. 2025. Europa ante el desafío de la desinformación en tiempos de inteligencia artificial. In Los Medios de Comunicación ante la Desinformación: Inteligencia Artificial, Discursos de odio, Teorías de la Conspiración y Verificación. Edited by Laura Teruel Rodríguez and Livia García Faroldi. Valencia: Tirant Humanidades, pp. 363–80. [Google Scholar]
  18. Ferrán, Jaime. 2025. Cierra el centro de menores de Santa Cruz. La Opinión, 14 de Octubre de 2025. Available online: https://www.laopiniondemurcia.es/comunidad/2025/10/14/cierra-centro-menores-santa-cruz-122624498.html (accessed on 15 September 2025).
  19. Ferreira, Carles. 2019. Vox como representante de la derecha radical en España: Un estudio sobre su ideología. Revista Española de Ciencia Política 51: 73–98. [Google Scholar] [CrossRef]
  20. Gagliardone, Iginio, Danit Gal, Thiago Alves, and Gabriela Martínez. 2015. Countering Online Hate Speech. Paris: UNESCO Publishing. [Google Scholar]
  21. Garde Eransus, Edurne. 2025. El discurso de Vox en Twitter sobre la llegada de inmigrantes a Ceuta: Una aproximación desde la teoría del encuadre. Círculo de Lingüística Aplicada a la Comunicación 102: 261–70. [Google Scholar] [CrossRef]
  22. Garland, Joshua, Keyan G. Zahedi, Jean G. Young, Laurent H. Dufresne, and Mirta Galesic. 2022. Impact and dynamics of hate and counter speech online. EPJ Data Science 11: 3. [Google Scholar] [CrossRef]
  23. Garrett, R. Kelly. 2009. Echo Chambers Online? Politically Motivated Selective Exposure Among Internet News Users. Journal of Computer-Mediated Communication 14: 265–85. [Google Scholar] [CrossRef]
  24. Gillespie, Tarleton. 2018. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven: Yale University Press. [Google Scholar]
  25. Gorwa, Robert. 2019. What is Platform Governance? Information, Communication & Society 22: 854–71. [Google Scholar] [CrossRef]
  26. Gov UK. 2023. Online Safety Act. Gov UK. Available online: https://www.gov.uk/government/collections/online-safety-act (accessed on 15 September 2025).
  27. Gómez Peralta, Héctor. 2023. El discurso de odio y los límites de la libertad de expresión en Estados Unidos. Revista Mexicana de Ciencias Políticas y Sociales 68: 281–305. [Google Scholar] [CrossRef]
  28. Guerrero Solé, Frederic, and Olivier Philippe. 2020. The toxicity of Spanish politics in Twitter during the COVID-19 pandemics. RACO 21: 133–39. [Google Scholar] [CrossRef]
  29. Hameleers, Michael. 2021. They are selling themselves out to the enemy! The content and effects of populist conspiracy theories. International Journal of Public Opinion Research 33: 38–56. [Google Scholar] [CrossRef]
  30. Helberger, Natali, Jo Pierson, and Thomas Poell. 2018. Governing Online Platforms: From Contested to Cooperative Responsibility. The Information Society 34: 1–14. [Google Scholar] [CrossRef]
  31. Iyengar, Shanto, Gaurav Sood, and Yphtach Lelkes. 2012. Affect, not ideology: A social identity perspective on polarization. Public Opinion Quarterly 76: 405–31. [Google Scholar] [CrossRef]
  32. Jia, Yue, and Sandy Schumann. 2025. Tackling hate speech online: The effect of counter-speech on subsequent bystander behavioral intentions. Cyberpsychology: Journal of Psychosocial Research on Cyberspace 19: 1–20. [Google Scholar] [CrossRef]
  33. Johnson, Thomas J., Shannon L. Bichard, and Weiwu Zhang. 2009. Communication Communities or ‘Cyberghettos?’: A Path Analysis Model Examining Factors that Explain Selective Exposure to Blogs. Journal of Computer-Mediated Communication 15: 60–82. [Google Scholar] [CrossRef]
  34. Koltay, András. 2016. Hate speech and democratic citizenship. Journal of Media Law 8: 302–6. [Google Scholar] [CrossRef]
  35. Lelkes, Yphtach. 2016. Mass polarization: Manifestations and measurements. Public Opinion Quarterly 80: 392–410. [Google Scholar] [CrossRef]
  36. Mariscal Ríos, Alicia. 2025. Análisis crítico de discursos y narrativas en torno a la inmigración. ELUA 44: 291–319. [Google Scholar] [CrossRef]
  37. Marín Albaladejo, Juan A. 2022. La polarización discursiva como estrategia de comunicación en las cuentas de líderes y partidos políticos en Twitter. In El Debate Público en la Red: Polarización, Consenso y Discursos del Odio. Edited by Elena Arroyas Langa, Pedro L. Pérez-Díaz and Marta Pérez-Escolar. Salamanca: Comunicación Social Ediciones y Publicaciones. [Google Scholar] [CrossRef]
  38. Matthes, Jörg, and Desirée Schmuck. 2017. The effects of anti-immigrant right-wing populist ads on implicit and explicit attitudes: A moderated mediation model. Communication Research 44: 556–81. [Google Scholar] [CrossRef]
  39. Mele, Nicco. 2013. The End of Big: How the Internet Makes David the New Goliath. New York: St. Martin’s Press. [Google Scholar]
  40. Munzert, Simon, Richard Traunmüller, Pablo Barberá, Andrew Guess, and JungHwan Yang. 2025. Citizen preferences for online hate speech regulation. PNAS Nexus 4: pgaf032. [Google Scholar] [CrossRef]
  41. Müller, Philipp, and Anne Schulz. 2021. Alternative media for a populist audience? Exploring political and media use predictors of exposure to Breitbart, Sputnik, and Co. Information Communication & Society 24: 277–93. [Google Scholar] [CrossRef]
  42. Müller, Philipp, and Rainer Freudenthaler. 2022. Right-wing, populist, controlled by foreign powers? Topic diversification and partisanship in the content structures of German-language alternative media. Digital Journalism 10: 1363–86. [Google Scholar] [CrossRef]
  43. Napoli, Philip M. 2019. Social Media and the Public Interest: Media Regulation in the Disinformation Age. New York: Columbia University Press. Available online: https://cup.columbia.edu/book/social-media-and-the-public-interest/9780231184540/ (accessed on 15 September 2025).
  44. Norris, Pippa. 2011. Democratic Deficit: Critical Citizens Revisited. New York: Cambridge University Press. [Google Scholar]
  45. Nyhan, Brendan, and Jason Reifler. 2014. The Effect of Fact-Checking on Elites: A Field Experiment on U.S. State Legislators. American Journal of Political Science 59: 529–788. [Google Scholar] [CrossRef]
  46. Parekh, Bhikhu. 2006. Hate speech. Is there a case for banning? Public Policy Research 12: 213–23. [Google Scholar] [CrossRef]
  47. Pariser, Eli. 2017. El Filtro Burbuja. Cómo la Red Decide lo que Leemos y lo que Pensamos. Barcelona: Taurus. [Google Scholar]
  48. Puschmann, Cornelius, Sebastian Stier, Patrick Zerrer, and Helena Rauxloh. 2024. Politicized and paranoid? Assessing attitudinal predictors of alternative news consumption. Journal of Communication 74: 725–47. [Google Scholar] [CrossRef]
  49. Ramírez Plascencia, David, Rosa M. Alonzo González, and Alejandra Ochoa Amezquita. 2022. Odio, polarización social y clase media en Las Mañaneras de López Obrador. Doxa Comunicación. Revista Interdisciplinar de Estudios de Comunicación y Ciencias Sociales 35: 83–96. [Google Scholar] [CrossRef]
  50. Rivera, José M., Nieves Lagares, María Pereira, and Erika Jaráiz. 2021. Relación entre diversos usos de las redes sociales, emociones y voto en España. Revista Latina de Comunicación Social 79: 73–98. [Google Scholar] [CrossRef]
  51. Rodríguez Pérez, Isaura. 2024. Discursos de odio en X: Aproximación a los mensajes de Javier Milei y el espacio político La Libertad Avanza. Revista Más Poder Local 58: 48–69. [Google Scholar] [CrossRef]
  52. Sánchez, Nacho. 2025. Vox Desata una Nueva Polémica en El Ejido con dos Vallas Xenófobas: “¿Qué Almería Quieres?” El País, August 1, 2025. Available online: https://elpais.com/espana/2025-08-01/vox-desata-una-nueva-polemica-en-el-ejido-con-dos-vallas-con-tintes-xenofobos-que-almeria-quieres.html (accessed on 20 September 2025).
  53. Sunstein, Cass R. 2001. #Republic: Divided Democracy in the Age of Social Media. Princeton: Princeton University Press. [Google Scholar]
  54. Suzor, Nicolas. 2019. Lawless: The Secret Rules That Govern Our Digital Lives. Cambridge: Cambridge University Press. Available online: https://www.cambridge.org/core/books/lawless/8504E4EC8A74E539D701A04D3EE8D8DE (accessed on 20 September 2025).
  55. Tucker, Joshua A., Andrew Guess, Pablo Barberá, Cristian Vaccari, Alexandra Siegel, Sergey Sanovich, Denis Stukal, and Brendan Nyhan. 2018. Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature. SSRN, 1–95. [Google Scholar] [CrossRef]
  56. Vázquez Barrio, Tamara. 2011. El discurso político en los programas de infoentretenimiento. In Periodismo Político: Nuevos Retos, Nuevas Prácticas: Actas de las Comunicaciones Presentadas en el XVII Congreso Internacional de la SEP, 5 y 6 de Mayo de 2011. Edited by Salomé Berrocal Gonzalo. Valladolid: Universidad de Valladolid, pp. 477–501. [Google Scholar]
  57. Vosoughi, Soroush, Deb Roy, and Sinan Aral. 2018. The spread of true and false news online. Science 359: 1146–51. [Google Scholar] [CrossRef] [PubMed]
  58. Waisbord, Silvio. 2020. ¿Es válido atribuir la polarización política a la comunicación digital? Sobre burbujas, plataformas y polarización afectiva. Revista Saap 14: 248–79. [Google Scholar] [CrossRef]
  59. Williams, Matthew. 2021. The Science of Hate. London: Faber & Faber. [Google Scholar]
  60. Zmerli, Sonja, and Kenneth Newton. 2017. Objects of Political and Social Trust: Scales and Hierarchies. In Handbook on Political Trust. Edited by Sonja Zmerli and Tom W. G. van der Meer. Cheltenham: Edward Elgar Publishing, pp. 104–24. [Google Scholar]
Table 1. Model classification table.

                                 Correct Percentage
Does not support regulation            63.5
Supports regulation                    84.4
Overall percentage                     76.5

Source: created by the authors. Note: the cut-off value is 0.500.
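The percentages in Table 1 are the standard classification-table output of a fitted logistic model: observed outcomes cross-tabulated against predictions dichotomized at the reported 0.500 cut-off. A minimal sketch with synthetic data (illustrative only, not the study's survey data) shows how the three figures are obtained:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for the model's fitted probabilities and the
# observed outcomes -- hypothetical data, not the study's sample.
p_hat = rng.uniform(0, 1, 500)                    # predicted P(supports regulation)
y = (rng.uniform(0, 1, 500) < p_hat).astype(int)  # observed outcome (1 = supports)

pred = (p_hat >= 0.5).astype(int)                 # cut-off value 0.500, as in Table 1

correct_no = 100 * (pred[y == 0] == 0).mean()     # % correct among "does not support"
correct_yes = 100 * (pred[y == 1] == 1).mean()    # % correct among "supports"
overall = 100 * (pred == y).mean()                # overall percentage
```

The overall percentage is simply the case-weighted average of the two row percentages, which is why it sits between them in the table.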
Table 2. Binary logistic regression analysis by blocks regarding the influence of media consumption and political ideology on the support for hate speech regulation.

                                Model 1                   Model 2                   Model 3
                                B (SE)                    B (SE)                    B (SE)
                                Exp(B) [95% CI]           Exp(B) [95% CI]           Exp(B) [95% CI]
Gender (female)                 1.262 (0.193) ***         1.426 (0.210) ***         1.554 (0.236) ***
                                3.534 [2.420, 5.159]      4.163 [2.760, 6.278]      4.729 [2.979, 7.507]
Age (29–44)                     0.379 (0.290)             −0.170 (0.331)            −0.047 (0.384)
                                1.461 [0.827, 2.581]      0.844 [0.441, 1.615]      0.955 [0.449, 2.027]
Age (45–60)                     0.977 (0.288) ***         0.446 (0.323)             0.385 (0.367)
                                2.657 [1.512, 4.671]      1.562 [0.829, 2.945]      1.469 [0.715, 3.019]
Age (61 and over)               1.239 (0.295) ***         0.702 (0.343) **          0.382 (0.383)
                                3.452 [1.938, 6.151]      2.018 [1.030, 3.957]      1.465 [0.691, 3.106]
Education (university degree)   −0.066 (0.210)            −0.233 (0.232)            −0.197 (0.255)
                                0.936 [0.621, 1.413]      0.792 [0.503, 1.247]      0.821 [0.499, 1.352]
Ideology (left-wing)                                      −1.606 (1.651)            −2.115 (1.700)
                                                          0.201 [0.008, 5.104]      0.121 [0.004, 3.374]
Ideology (center)                                         −3.641 (1.597) **         −4.638 (1.628) ***
                                                          0.026 [0.001, 0.599]      0.010 [0.000, 0.235]
Ideology (right-wing)                                     −4.282 (1.608) ***        −5.227 (1.644) ***
                                                          0.014 [0.001, 0.323]      0.005 [0.000, 0.135]
Ideology (far-right)                                      −4.841 (1.623) ***        −5.352 (1.652) ***
                                                          0.008 [0.000, 0.190]      0.005 [0.000, 0.121]
Media (unconventional media)                                                        −1.844 (0.289) ***
                                                                                    0.158 [0.090, 0.279]
Media (contacts)                                                                    1.141 (0.767)
                                                                                    3.131 [0.697, 14.070]
Constant                        −0.783 (0.244) ***        3.310 (1.618)             4.679 (1.661) ***
                                0.457                     27.381                    107.674
N                               559                       559                       551
Nagelkerke’s pseudo-R²          0.169                     0.322                     0.453
Chi-square                      71.775 ***                146.583 ***               212.032 ***
HL test                         13.390 *                  10.599                    16.157 **
AUC                             0.681                     0.795                     0.864
Sample weight                   Yes                       Yes                       Yes

Source: created by the authors. *** p < 0.01, ** p < 0.05, * p < 0.1. Note: Standard errors are shown in parentheses. Cook’s distances were calculated for each observation; no case had a value above the threshold of 1.0. Reference categories (absorbed in the constant): male, 18–28 years old, non-university educated, extreme left, and consumers of traditional media.
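The Exp(B) and 95% CI columns in Table 2 follow mechanically from each coefficient and its standard error: the odds ratio is exp(B), and the Wald interval is exp(B ± 1.96·SE). A quick check against the first row (Gender, Model 1), assuming nothing beyond the published figures:

```python
import math

def odds_ratio_ci(b, se, z=1.96):
    """Exponentiate a logit coefficient into an odds ratio with a
    95% Wald confidence interval."""
    return math.exp(b), math.exp(b - z * se), math.exp(b + z * se)

# Gender (female), Model 1: B = 1.262, SE = 0.193
or_, lo, hi = odds_ratio_ci(1.262, 0.193)
# Reproduces the reported Exp(B) = 3.534 and 95% CI [2.420, 5.159]
# up to rounding of the published coefficients.
```

The same transformation explains why coefficients near zero give odds ratios near one, and why the large ideology coefficients in Models 2 and 3 translate into odds ratios close to zero.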
