Article

Digital Resilience and the “Awareness Gap”: An Empirical Study of Youth Perceptions of Hate Speech Governance on Meta Platforms in Hungary

Department of Modern Technology and Cyber Security Law, Deák Ferenc Faculty of Law and Political Sciences, Széchenyi István University, 1 Egyetem Square, 9026 Győr, Hungary
*
Author to whom correspondence should be addressed.
J. Cybersecur. Priv. 2026, 6(1), 3; https://doi.org/10.3390/jcp6010003
Submission received: 13 September 2025 / Revised: 19 December 2025 / Accepted: 23 December 2025 / Published: 24 December 2025
(This article belongs to the Special Issue Multimedia Security and Privacy)

Abstract

Online hate speech poses a growing socio-technological threat that undermines democratic resilience and obstructs progress toward Sustainable Development Goal 16 (SDG 16). This study examines the regulatory and behavioral dimensions of this phenomenon through a combined legal analysis of platform governance and an empirical survey conducted on Meta platforms, based on a sample of young Hungarians (N = 301, aged 14–34). This study focuses on Hungary as a relevant case study of a Central and Eastern European (CEE) state. Countries in this region, due to their shared historical development, face similar societal challenges that are also reflected in the online sphere. The combination of high social media penetration, a highly polarized political discourse, and the tensions between platform governance and EU law (the DSA) makes the Hungarian context particularly suitable for examining digital resilience and the legal awareness of young users. The results reveal a significant “awareness gap”: While a majority of young users can intuitively identify overt hate speech, their formal understanding of platform rules is minimal. Furthermore, their sanctioning preferences often diverge from Meta’s actual policies, indicating a lack of clarity and predictability in platform governance. This gap signals a structural weakness that erodes user trust. The legal analysis highlights the limited enforceability and opacity of content moderation mechanisms, even under the Digital Services Act (DSA) framework. The empirical findings show that current self-regulation models fail to empower users with the necessary knowledge. The contribution of this study is to empirically identify and critically reframe this ‘awareness gap’. Moving beyond a simple knowledge deficit, we argue that the gap is a symptom of a deeper legitimacy crisis in platform governance. It reflects a rational user response—manifesting as digital resignation—to opaque, commercially driven, and unaccountable moderation systems. By integrating legal and behavioral insights with critical platform studies, this paper argues that achieving SDG 16 requires a dual strategy: (1) fundamentally increasing transparency and accountability in content governance to rebuild user trust, and (2) enhancing user-centered digital and legal literacy through a shared responsibility model. Such a strategy must involve both public and private actors in a coordinated, rights-based approach. Ultimately, this study calls for policy frameworks that strengthen democratic resilience not only through better regulation, but by empowering citizens to become active participants—rather than passive subjects—in the governance of online spaces.

1. Introduction

The content moderation practices of social media platforms now undeniably influence social processes and the nature and quality of public discourse. While the goal of moderation is to reduce harmful content [1], its opaque and inconsistent application often undermines user trust [2] and the sense of fairness [3], thereby actively impacting the functioning of democratic institutions.
A recent example is the 2024 annulment of the Romanian presidential election results by the country’s Constitutional Court, a decision justified precisely by the extensive, coordinated influence operations conducted via social media platforms, which undermined the integrity of the election and delegitimized its outcome. Similar patterns can be observed in other democratic elections, where coordinated online hate campaigns targeting election officials aim to further erode trust in democratic processes [4].
Sustainable Development Goal 16 (SDG 16) aims to promote peaceful, just, and inclusive societies. While traditionally applied to state actors, these objectives are increasingly relevant for the governance of powerful digital platforms, which function as quasi-public spaces where public discourse is shaped and democracy is influenced [5,6]. Particularly important in this context are:
  • Equal access to justice (Target 16.3), which in the digital sphere translates to users’ ability to understand platform rules and access effective remedies against content moderation decisions;
  • The development of effective, accountable and transparent institutions (Target 16.6), a principle that directly challenges the opaque, “black box” and often inconsistent enforcement of rules by social media companies [7,8];
  • The protection of fundamental freedoms (Target 16.10), which involves balancing freedom of expression with protection from harm, a task complicated when users lack awareness of their rights and responsibilities online.
When users do not understand the rules or trust the enforcement mechanisms due to perceived governance failures, the platform fails as an ‘accountable institution’, and ‘access to justice’ becomes illusory, undermining user trust and creating a legitimacy deficit [9,10]. This study argues that the ‘awareness gap’ is a critical barrier to achieving SDG 16 in the digital age.
Hate speech proliferating on social media platforms—whether based on ethnicity, religion, gender, nationality, or politics—not only violates individual dignity but also systematically corrodes the social and institutional structures upon which democratic governance is built. Inflammatory and divisive content, amplified by algorithms and (even unintentionally) favored by platform business models, can dominate public discourse, crowding out moderate and civilized exchange and creating a communication environment where political enemy-imaging, suspicion, and collective stigmatization become the norm. Furthermore, hate speech amplifies the formation of so-called “filter bubbles” and “echo chambers”. In these digital spaces, algorithmic filtering and users’ cognitive biases both contribute to individuals primarily encountering content that confirms their existing views [11,12]. This phenomenon further deepens social division and political polarization, reducing the potential for engaging with different viewpoints.
The destabilization of these relations of trust creates a particularly dangerous situation for the functioning of democratic institutions. Hate speech, used as a conscious political stratagem [13], is often employed to weaken citizens’ trust in public institutions [4]. The frequent presence of hate speech in the online sphere normalizes intolerance, which undermines the legitimacy of institutions responsible for upholding equality and justice, as well as the social order [14]. This process particularly affects the integrity of elections (SDG 16.7, inclusive and representative decision-making), as hate speech often accompanies disinformation, targeted smear campaigns, and the public stigmatization of minorities or political opponents. In such an environment, elections cease to be an undisturbed arena for democratic will-formation and become a potential ground for exclusion, intimidation, and withdrawal from public life. It is important to highlight that hate speech does not only restrict the participation of targeted groups. Exposure to online hate speech can also trigger a “chilling effect” among the wider public, leading to self-censorship and withdrawal from debates [15]. Although research suggests this effect is not universal and can be counteracted by strong professional motivation [16], it can nevertheless contribute to a reduction in the diversity of opinions and the quality of democratic debates. Thus, hate speech leads to a narrowing of the democratic public sphere, which contradicts the full realization of fundamental rights, including the freedom of expression and information (SDG 16.10).
However, state regulation alone is not sufficient to combat hate speech on social media. The regulatory practices of online platforms, particularly the transparency of content moderation decisions, the effectiveness of remedy mechanisms, and the enforcement of user rights, are receiving heightened attention today. The European Union’s Digital Services Act (DSA) aims to make progress in these areas by imposing obligations on platforms regarding risk assessment, transparency in content moderation, and the protection of user rights [17].
The literature on hate speech regulation can be divided into several dominant streams. A significant line of research focuses on legal and policy frameworks, analyzing the balance between freedom of speech and regulation and the roles of platforms and states [18,19]. Another extensive field deals with the automated, algorithmic detection of hate speech, where Transformer-based models (e.g., BERT) represent the current state of the art [20]. Although the literature also acknowledges the importance of dialog-based and educational strategies [21,22], it simultaneously points to a lack of research with a focus on communication and education [23].
Social resilience in the digital age can be interpreted as an adaptive capacity that enables society to maintain its functionality despite external shocks [24]. This capacity is based on a complex system in which legal consciousness, digital literacy, institutional transparency, and active civic participation collectively form the foundation of democratic resilience. The literature confirms that critical digital literacy is not just an individual skill but a fundamental prerequisite for resilience against disinformation and for democratic discourse [25,26]. Digital inequalities—embodied not only in access but also in use and skills—also directly affect the achievement of sustainable development goals such as quality education or gender equality [27]. Where this system is disrupted—whether by regulatory incoherence, procedural opacity, a lack of digital competence on the part of users (specific aspects of which, such as the ability to recognize hate speech and knowledge of platform sanctions, are the subject of our empirical investigation), or a combination of these—not only does the fight against hate speech become ineffective, but the embeddedness of democratic norms is also undermined, jeopardizing the long-term achievement of the SDG 16 framework.
It is precisely this disruption—the gap between top–down regulation and the bottom–up reality of users’ digital competence—that this study addresses. While the literature covers legal frameworks and technological solutions, there is a lack of empirical studies exploring how users themselves interpret platform rules and what level of legal consciousness they possess.
To investigate this gap, this paper focuses on Hungary as a relevant case study of a Central and Eastern European (CEE) state. Countries in this region, due to their shared historical development, face similar societal challenges that are also reflected in the online sphere. The combination of high social media penetration, a highly polarized political discourse, and the tensions between platform governance and EU law (DSA) makes the Hungarian context particularly suitable for examining digital resilience and the legal awareness of young users.
The primary contribution of this study is the empirical identification and analysis of the “awareness gap” between users’ intuitive recognition of hate speech and their formal understanding of platform rules. By examining this gap, we aim to provide insights into the effectiveness of current platform governance and formulate recommendations for a more user-centric approach.
This paper proceeds as follows. Section 2 provides a detailed theoretical background on the legal and platform-level challenges of defining and governing hate speech. Section 3 outlines the methodology of our empirical study, conducted on a sample of young Hungarian users of Meta platforms. Section 4 presents the results, which are then discussed in Section 5 (Discussion) in the context of democratic resilience and SDG 16. Finally, Section 6 (Conclusions) offers conclusions and policy recommendations.

2. Theoretical Background and Platform Governance

2.1. Definitional Challenges of Hate Speech in International and European Law

In the context of hate speech, the Universal Declaration of Human Rights (UDHR) is particularly noteworthy, as on the one hand it provides for the right to freedom of expression [28], while on the other, it prohibits arbitrary interference with privacy, including attacks upon honor and reputation [29]. Similar provisions can be found in the International Covenant on Civil and Political Rights (ICCPR), which sets several limits on the right to freedom of expression [30]. One of these limits is the advocacy of national, racial, or religious hatred that constitutes incitement to discrimination, hostility, or violence [31]; thus, the restriction on freedom of expression centers on human dignity to protect equality among people (Table 1).
The starting point of European regulation is the European Convention on Human Rights (ECHR), signed in Rome in 1950, which established the Strasbourg-based European Court of Human Rights (ECtHR) as the guardian of its provisions. The Convention stipulates that freedom of expression carries with it duties and responsibilities [32,33]. The grounds for restriction include national security, public safety, the prevention of disorder or crime, and the protection of health or morals, as well as the reputation or rights of others, which also encompasses the prohibition of hate speech [34]. The Convention also prohibits the abuse of rights, reflecting the lex specialis nature of restrictions: a restriction cannot extend beyond the right itself and the limits set by the ECHR [35]. The ECtHR does not only expect the person expressing hate speech to respect these boundaries; the Court can also hold media service providers accountable, as it is the duty of the content provider (e.g., a broadcaster or an online news portal) to conduct the necessity and proportionality test appropriately and to assess the situation (see Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v. Hungary; Delfi AS v. Estonia [36,37]).
In its extensive case law concerning Article 10 of the ECHR, the ECtHR relies on the so-called “three-part test” [38], during which the court examines three fundamental aspects: (1) whether the interference is prescribed by law; (2) whether it pursues a legitimate aim; and (3) whether it is necessary in a democratic society. However, a significant body of academic literature argues that the Court’s application of this test is inconsistent, as it has been shown to apply conflicting standards—sometimes a broad “bad tendency” approach, other times a stricter “incitement” test—to similar cases, leading to legal uncertainty [39]. If these three conditions are met, the restriction is considered not to violate the right to freedom of expression (Table 1).
Since the Treaty of Lisbon, the Charter of Fundamental Rights of the European Union (the Charter) has had the same legal value as the founding treaties of the European Union (EU) [40], thus becoming a legally binding document with horizontal and vertical effect. The Charter adopts the provisions of the ECHR in several places and complements or clarifies them elsewhere. The right to freedom of expression is regulated in Article 11 of the consolidated 2012 text, similarly to the UDHR [41]. The limitations appear much later, in Article 52, which provides that, subject to the principle of proportionality, restrictions must be provided for by law, respect the essence of the rights concerned, and genuinely meet objectives of general interest recognised by the Union or the need to protect the rights and freedoms of others [42]. In such a conflict of fundamental rights, a careful examination of proportionality is necessary [43].
In addition to the foundational legal documents declaring fundamental rights mentioned above, it is important to address the series of workshops organized by the UN Office of the High Commissioner for Human Rights (OHCHR) in 2011, which resulted in the formulation of the OHCHR’s annual report, better known as the Rabat Plan of Action. The document’s goal is to curb incitement to national, racial, or religious hatred and discrimination while ensuring the fullest possible exercise of the right to freedom of expression [44]. The Report also makes recommendations in several areas, including legislation. It emphasizes that the aforementioned three-part test should also be applied by member states in cases of hate speech, suggests differentiating three main types of expression, and calls for the precise definition of terms such as “hatred”, “discrimination”, and “hostility” in national laws [45]. In addition, it is important to mention the six-part threshold test created by the Report, which is to be applied to determine whether an expression qualifies as a criminal offense. The components of the six-part threshold test are (1) the context; (2) the speaker; (3) the intent; (4) the content and form; (5) the extent of the expression; and (6) the likelihood, including imminence, that the expression will lead to a subsequent crime [46].
Despite such efforts to introduce nuance, the overall legal framework remains at the level of high principles rather than providing a clear-cut, operational definition that can be easily applied by non-legal actors, such as platform moderators or everyday users. This context-heavy, interpretive approach, while judicially sound, fails to offer a simple, predictable rule for citizens navigating online spaces (Table 1).
The lack of a clear and universally accepted legal definition is a crucial issue—a challenge widely noted in the literature [47,48]. It not only complicates legal enforcement but also creates confusion for both social media platforms and the users who are expected to abide by their rules. Furthermore, a significant disconnect exists between formal legal definitions and the “ordinary meaning” of hate speech as understood by the public [49]. As our study will demonstrate, this top-level legal ambiguity trickles down, contributing to the “awareness gap” observed at the user level.

2.2. Hate Speech Governance on Meta Platforms

In addition to legal frameworks, major online platforms have established their own regulatory systems (Community Standards) to moderate content. Meta, one of the world’s largest platform providers, defines hate speech in its Community Standards. According to Meta’s definition, hate speech is a “direct attack on people based on what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, and serious disease”. In Meta’s interpretation, an “attack” includes violent or dehumanizing speech, statements of inferiority, and calls for exclusion.
The literature points out that such platform definitions are often intentionally broader than strict legal concepts. As private actors, platforms are not bound by the same constitutional guarantees for freedom of speech as states are, allowing them to create stricter rules [50]. However, this practice, especially under European regulatory pressure (e.g., the DSA), frequently leads to the excessive removal of content, a phenomenon known as “over-removal”. Empirical studies have shown that a significant portion of content deleted by platforms would have been legally permissible [51]. This tension between legal norms and the risk-averse practices of platforms creates significant uncertainty for users.
This platform-level regulation falls under the scope of the DSA, which mandates that terms of service must contain rules on content moderation in a simple, user-friendly, and clear manner. Article 14 of the DSA states that in applying content restrictions, platforms must act with due diligence, objectivity, and proportionality, taking into account the rights and legitimate interests of all parties involved [52]. The literature extensively discusses how the DSA attempts to balance platform responsibilities with users’ fundamental rights, although challenges related to monitoring obligations remain significant [17].
The tension between legal norms and platform practices, as highlighted by the issue of “over-removal”, points to a deeper structural phenomenon: the exercise of private power in a quasi-public domain. Scholars of platform governance argue that this is not merely a technical issue of imperfect rule application, but a form of ‘digital domination’ [6]. In this view, large platforms like Meta function as de facto private regulators that can arbitrarily interfere with users’ ability to participate in the public sphere. Their content moderation policies, therefore, are not just terms of service but instruments of power that shape public discourse.
This power is largely driven by commercial, rather than rights-based, imperatives. The primary goal is often to maintain an “advertiser-friendly” environment, which incentivizes platforms to be overly cautious and risk-averse, even if it means suppressing legitimate speech [7]. The reliance on opaque, automated “black box” systems for enforcement at scale exacerbates this problem. These algorithmic systems struggle with context and nuance, leading to inconsistent decisions that users perceive as arbitrary and unfair. This creates a state of constant uncertainty or ‘precarity’ for content creators, whose livelihoods can be jeopardized by an inexplicable change in the algorithm or a sudden moderation decision [7].
The consequences of this opaque and commercially driven governance are not distributed equally. Research consistently shows that automated moderation systems and vaguely worded policies disproportionately harm marginalized communities. Content from creators focusing on issues of race, gender, sexuality, and disability is more likely to be flagged as inappropriate, silenced, or de-platformed, often without clear justification or an effective appeals process [8]. This systemic bias not only reinforces existing social inequalities but also undermines the perceived legitimacy of platform governance, leaving many users feeling disempowered and distrustful of the very systems that mediate their social and professional lives [9,10].

2.3. A Proposed Operational Definition for Hate Speech Research

Hate speech is a phenomenon that falls outside the substantive scope of freedom of expression, acting as a limitation on this fundamental right to protect democratic values. To maintain a proportionate restriction on fundamental rights, the concept of hate speech must have a definition that is sufficiently abstract to be easily generalized and applied to individual cases, yet also sufficiently specific to ensure the restriction is not overly broad and does not provide an opportunity for the abuse of rights.
As Section 2.1 demonstrated the lack of a uniform legal definition, this study formulates an operational concept by synthesizing elements from jurisprudence. We adopted a question-based approach, also used in forensic investigations [53], to arrive at our definition by exploring the following questions: Who? What? Why? How? Where? Against whom?
Anyone (subjectum generalis) can be a perpetrator of hate speech. However, the issue of public figures, particularly politicians, warrants special attention [54]. Although national legal systems vary, in most democracies, public figures are not exempt from prohibitions on hate speech; in fact, their position may entail greater responsibility.
The act must occur in public, as the concept of hate speech is meaningless without impact. The effect must be capable of causing, or must actually lead to, the direct (“clear and present danger”) intimidation of others or the violation of their human dignity. The expression can be realized through speech, writing, or other forms of conduct.
A specific issue concerning public figures is their heightened obligation to tolerate criticism. According to international legal practice, public figures must tolerate even sharp criticism in public debates [55]. This obligation, however, applies to opinions on public matters, not to explicitly malicious attacks on their person. Hate speech does not qualify as legitimate criticism and is therefore not granted constitutional protection, even when directed at public figures [56]. This is particularly true for sexist and threatening remarks against female politicians, which represent a recurring problem in the digital public sphere [57].
Based on these findings, the operational definition of hate speech formulated and used in this paper is as follows:
Hate speech—as a limitation on freedom of expression—is any expression made by someone that is capable of or leads to the intimidation of others or the violation of their human dignity, made before a large public in connection with a protected characteristic of another person.
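To illustrate how this operational definition can be applied as a coding checklist, for example when classifying the case studies used later in this paper, the sketch below expresses its elements as boolean criteria. The field names and the is_hate_speech helper are our own illustrative choices, not terms drawn from any statute or from Meta’s Community Standards.

from dataclasses import dataclass

@dataclass
class Expression:
    """Coding sheet for one expression, following the paper's operational definition."""
    made_before_large_public: bool           # the act occurs in public and can have an impact
    targets_protected_characteristic: bool   # connection to a protected characteristic of another person
    intimidates_or_violates_dignity: bool    # capable of, or leading to, intimidation or a violation of human dignity

def is_hate_speech(e: Expression) -> bool:
    # All elements of the operational definition must be present simultaneously.
    return (
        e.made_before_large_public
        and e.targets_protected_characteristic
        and e.intimidates_or_violates_dignity
    )

# Example: a public post dehumanizing members of a protected group
post = Expression(True, True, True)
print(is_hate_speech(post))  # True under the paper's operational definition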

2.4. Moderation and Appeals: Meta’s Internal “Legal Order”

On Meta’s platforms, content is reviewed at multiple levels: on one hand by automated tools (artificial intelligence), and on the other, based on user reports. While essential for scale, the literature extensively examines the challenges of automated content moderation [20]. These systems are often described as “opaque” black boxes that struggle to interpret context, leading to unfair outcomes such as the disproportionate penalization of marginalized groups [58,59]. Against this backdrop of technological fallibility, users have the right to an internal complaint-handling system against the platform’s decisions [60], through which they can request a review. If the user disagrees with this review decision, they can turn to the Oversight Board, a body established by Meta that is described as independent.
The Oversight Board functions as a kind of “constitutional court” within Meta’s ecosystem, a novel form of “transnational hybrid adjudication” whose decisions are, in principle, binding on the platform and can set precedents for future cases [61]. However, the scholarly literature is deeply critical of the Board’s effectiveness and legitimacy. Critics argue that its operation is often more formalistic and performative, serving Meta’s public relations interests rather than genuine accountability, as it tends to focus on emblematic cases while avoiding systemic issues like algorithmic design. Furthermore, its policy recommendations are non-binding, which limits its potential for real, structural change [62].
The ‘algorithmic opacity’ mentioned earlier is not just a technical shortcoming but a core feature of the user experience. It creates a state of confusion and powerlessness, as users often receive vague, generic, or no explanations for moderation decisions, making it nearly impossible for them to understand what rule was broken or how to appeal effectively [7]. For content creators, this opacity fosters a condition of ‘precarity’, where their livelihoods are subject to the unpredictable whims of an automated system, leading to financial instability and emotional distress [8].
The scholarly critique of the Oversight Board further highlights this legitimacy deficit. While the Board increases transparency in specific cases, its jurisdiction is narrowly defined to individual content decisions, leaving systemic issues like algorithmic design unaddressed [63]. Critics argue that because Meta fundamentally controls the Board’s charter and funding, its independence is compromised [64]. This has led to the concern that the Board functions more as sophisticated “window dressing”—a form of performative governance designed to deflect regulatory pressure—rather than as a truly independent body of accountability [65].
As the scholarly critiques suggest, this internal system highlights a fundamental tension. While appearing to mimic a judicial structure, Meta’s “legal order” differs significantly from a rule-of-law model: it lacks guarantees such as legal certainty, predictability, and equality before the law. The separation of powers is also not upheld, as Meta simultaneously fulfills the roles of “legislator” (by setting the Community Standards), “executive” (by enforcing them), and “judiciary” (by adjudicating appeals). This subordinate relationship, which has evolved from a civil law contractual relationship, makes Meta a kind of “state within a state”, exercising quasi-public authority over its users according to its own rules. This phenomenon is discussed in the academic literature as an extension of the “public forum” doctrine, whereby private platforms assume public functions when they become primary arenas for public discourse [66].

3. Materials and Methods

3.1. Study Design and Participants

The aim of our research is to investigate the ability of Meta platform users to recognize hate speech and their awareness of Meta’s sanctioning system. As detailed in the Introduction, Hungary was chosen as a relevant case study due to its position as a Central and Eastern European EU member state characterized by high social media penetration and a polarized online public sphere. In June 2024, Facebook (Meta’s most popular platform) had over 7.1 million users in Hungary. As a Meta platform account can be registered from the age of 13, our research focused on the 14–34 age group to examine the awareness demonstrated by the younger generation during their use of Meta platforms (especially Facebook and Instagram).
The data collection period was from 1 August 2025 to 1 September 2025. During this period, 303 valid responses were received from young Hungarian users. Two individuals stated that they do not use any Meta platforms; therefore, their responses were excluded from the analysis and removed from the dataset. The remaining 301 completed questionnaires thus serve as the basis for our research (N = 301).
The age distribution of the respondents was as follows: The 14–18 age group constituted 29% of the sample, the 19–24 age group 35%, the 25–29 age group 21%, and the 30–34 age group 15% (Figure 1). Regarding educational attainment, 51% of respondents had completed primary school or held a high school diploma (Figure 2).

3.2. Survey Instrument and Case Studies

Data collection was conducted via an online questionnaire created on the Microsoft Forms platform, ensuring anonymity for the participants. The questionnaire was in Hungarian and was promoted on Facebook and Instagram.
The questionnaire was based on five case study descriptions derived from real cases discussed and decided by the Oversight Board. These cases were adapted to the Hungarian context to be objectively evaluable for the participants.
The cases used were:
  • Negative stereotypes about African Americans: This case was adapted to stereotypes concerning the Roma community to fit the Hungarian social context.
  • Hate-inciting meme video montage: A video featuring antisemitic, anti-LGBTIQ+, and other discriminatory content.
  • Holocaust denial: An Instagram post that questioned the facts of the Holocaust.
  • Post targeting transgender individuals: A post that incited transgender people to commit suicide.
  • Dehumanizing speech against a woman: A post that objectified a woman and used offensive language.
After each case-study description, participants were asked to answer two questions: (1) whether they classify the given case as hate speech (yes/no), and (2) what sanction they would apply to the content if they were in Meta’s position (choosing from six possible options from the lightest to the most severe sanctions, e.g., “delete post”, “suspend account”, etc.). Ethical considerations were addressed by adapting case studies from publicly available Oversight Board decisions, ensuring they were presented as hypothetical scenarios rather than direct quotes of harmful content. The research design focused on users’ perceptions of governance and sanctioning, not on generating or provoking hateful responses. All data was collected and processed anonymously.
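As an illustration of how the two answers per case study can be recorded for analysis, the sketch below codes one hypothetical response. Only “delete post” and “suspend account” are sanction labels taken from the questionnaire description; the remaining option names, and the response values themselves, are hypothetical placeholders.

# Hypothetical coding of one respondent's answers for a single case study.
# Sanction labels other than "delete post" and "suspend account" are placeholders.
SANCTIONS = [
    "no action",          # lightest option (placeholder label)
    "warning",            # placeholder label
    "restrict reach",     # placeholder label
    "delete post",        # Meta's actual decision in the adapted cases
    "suspend account",    # mentioned in the questionnaire
    "permanent ban",      # most severe option (placeholder label)
]

META_DECISION = "delete post"

response = {
    "case": "Holocaust denial",
    "classified_as_hate_speech": True,    # question (1): yes/no
    "chosen_sanction": "suspend account"  # question (2): one of six options
}

# A response "in line with Meta's decision" selects the same sanction as the platform did.
matches_meta = response["chosen_sanction"] == META_DECISION
print(matches_meta)  # False for this hypothetical respondent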

3.3. Hypotheses

Based on the literature discussed in Section 2, which highlights the opacity of platform governance and the significant disconnect between formal legal definitions and the ‘ordinary meaning’ of hate speech as understood by the public [49,62], the questionnaire was designed to answer two research questions. To address these, the following null hypotheses were formulated:
  • H0A: Young Hungarian Facebook users do not recognize posts containing hate speech.
  • H0B: Hungarian consumers would not sanction posts containing hate speech accordingly (i.e., in line with Meta’s decision).

3.4. Data Analysis

Microsoft Excel’s Data Analysis ToolPak was used for data analysis. The research relied on sample statistics intended to represent the whole statistical population [67]. To test the hypotheses, descriptive statistics (frequency tables) and one-sample t-tests were employed. The significance level was set at the value commonly accepted in the social sciences, 5% (α = 0.05) [68]. If the p-value was less than this alpha value, the null hypothesis was rejected.
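The sketch below reproduces the testing logic in Python with scipy rather than the Excel Data Analysis ToolPak actually used in the study; the five per-case counts of correct answers are hypothetical placeholders, while the threshold of 151 (more than half of N = 301) and α = 0.05 follow the text.

from scipy import stats

ALPHA = 0.05       # significance level used in the study
THRESHOLD = 151    # "more than half" of the 301 respondents

# Hypothetical per-case counts of correct answers across the five case studies
correct_counts = [180, 240, 265, 150, 220]

# One-sample t-test: is the mean number of correct answers greater than the threshold?
result = stats.ttest_1samp(correct_counts, popmean=THRESHOLD, alternative="greater")

print(f"t = {result.statistic:.3f}, one-tailed p = {result.pvalue:.3f}")
if result.pvalue < ALPHA:
    print("Reject the null hypothesis at the 5% level.")
else:
    print("Fail to reject the null hypothesis at the 5% level.")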

4. Results

This section presents the results of the empirical survey. First, a descriptive overview of the respondents’ familiarity with the concept of hate speech is provided. Subsequently, the outcomes of the hypothesis tests concerning the recognition of hate speech and the sanctioning preferences of users are reported (see Appendix A and Appendix B).

4.1. Preliminary Findings on Conceptual Awareness

Among the respondents, as presented in Table 2, only 37.5% selected at least one of the two definitions provided that contained the key legal attributes of hate speech. A fully correct answer, identifying both these options, was given by only 48 individuals, representing less than 16% of the sample.
This finding provides the initial empirical grounding for our central concept of the “awareness gap”. It indicates that the vast majority of users are unable to recognize the formal, content-related elements of hate speech as defined by platform policies, highlighting a significant disconnect between the rules as written and user comprehension.

4.2. Hypothesis Testing: Recognition of Hate Speech (H0A)

A one-sample t-test was used to determine whether respondents could recognize hate speech at a statistically significant level. First, we tested whether more than half of the respondents (151) answered correctly. This yielded a statistically significant result (p = 0.031), leading to the rejection of the null hypothesis. Second, we tested whether more than 75% of respondents (226) answered correctly. This result was not statistically significant (p = 0.338).
Based on these findings, it can be concluded that users recognized hate speech more easily in practical examples than in the conceptual definition task. This demonstrates a clear distinction between users’ ability to intuitively identify harmful content in context and their capacity to articulate or recognize the formal rules governing it.
This result further substantiates the “awareness gap”, suggesting that young users operate with a practical, intuition-based moral compass rather than conscious legal literacy or rule-following. The core of the gap lies precisely in this disconnect between an intuitive understanding of harm and the formal, often opaque, regulatory knowledge required by the platform.

4.3. Hypothesis Testing: Sanctioning Preferences (H0B)

The second null hypothesis (H0B: Hungarian consumers would not sanction posts containing hate speech accordingly (i.e., in line with Meta’s decision)) was tested by analyzing the sanctioning choices. In each case, the “correct” sanction—in alignment with Meta’s final decision—was to “delete the post”. The results are presented in Table 3.
Table 3 reveals a significant divergence between user preferences and Meta’s official sanctioning decisions. Across all five case studies, the majority of respondents chose sanctions that were either more lenient or more severe than the platform’s ‘correct’ action of deleting the post. This inconsistency in sanctioning preferences provides compelling evidence for the “awareness gap”, demonstrating that users’ normative expectations for justice and proportionality are poorly aligned with Meta’s opaque and often unpredictable enforcement logic.

5. Discussion

5.1. Situating the “Awareness Gap” Within Systemic Regulatory Divergences

Our empirical research in Hungary identified a significant “awareness gap”: a disconnect between users’ intuitive recognition of harm and their formal knowledge of platform rules. This local finding, however, should be understood as a manifestation of a well-documented global phenomenon: the systemic divergence between public legal frameworks and private platform governance. To contextualize our results, we synthesized the main conclusions from recent systematic literature reviews on the topic (see Table 4 and Table 5).
As Table 4 and Table 5 illustrate, the literature consistently identifies fundamental differences in the scope, enforcement, and standardization of hate speech rules. Our findings directly reflect the user-level consequences of these differences.
First, the broader and more ambiguous scope of platform definitions, as highlighted by Hietanen and Eddebo [69], likely contributes to our finding that less than 16% of respondents could correctly identify the formal attributes of hate speech. Users are faced with a moving target: a set of rules that are intentionally vaguer and more expansive than legal norms.
Second, the lack of transparency and predictability in platform enforcement provides a compelling explanation for the inconsistent sanctioning preferences observed in our sample. When the official moderation process is perceived as an opaque “black box”, users are more likely to rely on personal moral intuition rather than the platform’s poorly communicated logic. This resonates with findings by Sinpeng et al. [70], who documented a widespread sense of “reporting fatigue” and disempowerment among users who feel their reports have no impact.
Finally, the struggle to apply uniform rules in diverse local contexts, a challenge emphasized by Hatano [71], helps explain why our Hungarian sample’s intuitive understanding of harm—rooted in their local reality—clashes with the abstract, global policies of Meta. The “awareness gap” is, in essence, a gap between global corporate policy and local lived experience.
However, to fully grasp the “awareness gap”, we must look beyond these structural discrepancies. The gap is not merely a passive deficit of user knowledge but can be interpreted as an active, if often subconscious, user response to a platform governance model they perceive as fundamentally illegitimate. This critical perspective reframes the “awareness gap” as a symptom of a deeper crisis of trust and accountability.
A useful starting point for this analysis is the concept of “folk theories”. In opaque algorithmic environments where official explanations are scarce or untrustworthy, users are forced to construct their own informal “mental models” to make sense of the system and guide their actions [72]. Our finding that users rely on their intuition of harm—rather than attempting to master complex and ever-changing corporate policies—is a prime example of folk theorization in practice. This is not a failure of the user; it is a rational coping mechanism in an environment that lacks the transparency and predictability necessary for rule-based engagement.
This behavior can be further understood as a form of ‘algorithmic resistance as political disengagement’ [73]. Magalhães’s research on Brazilian Facebook users shows that when individuals feel the “visibility games” of a platform are demeaning to their civic voice, they may strategically withdraw from political expression. Similarly, the “awareness gap” can signify a user’s refusal to invest the cognitive labor required to learn the rules of a game they perceive as rigged. This withdrawal is a quiet form of resistance against a system that demands participation but offers little genuine agency in return.
Such disengagement often culminates in a state of ‘digital resignation’, where users continue to use the platform for social connection while simultaneously holding a deeply cynical view of its governance. They accept the platform’s flaws—such as inconsistent moderation and algorithmic bias—as an unavoidable feature of the digital landscape, feeling powerless to effect change [74]. This resonates with the previously mentioned findings on ‘reporting fatigue’, where users cease to engage with governance mechanisms after concluding their feedback is ignored [70].
Ultimately, all these user responses—folk theorization, strategic disengagement, and digital resignation—point to a profound legitimacy deficit at the heart of current platform governance models [65]. A system’s legitimacy hinges on whether those subject to its authority perceive it as fair, just, and accountable [75]. Our findings suggest that for many users, this is not the case. Therefore, addressing the “awareness gap” is not a simple matter of creating better user manuals or educational campaigns. It requires a fundamental rethinking of platform governance itself, moving towards a model built on transparency, procedural fairness, and genuine user participation—a system that can earn, rather than simply demand, the trust of its users.

5.2. Policy Implications and Recommendations

Our findings, contextualized by the broader academic literature, suggest that bridging the “awareness gap” requires a dual strategy targeting both platforms and users, in line with the goals of SDG 16, presented in Table 6. The policy recommendations emerging from the literature strongly advocate for more transparent, context-sensitive, and multi-stakeholder governance models. In light of this, and to achieve the user-friendly system envisioned by the DSA, we propose the following steps:
1. Increasing Regulatory Transparency (Supporting SDG 16.6 and 16.10): The academic literature consistently identifies a lack of transparency as a core failure of platform self-regulation. Antić [76] warns that leaving content removal to platforms without judicial oversight carries the risk of “non-competent censorship”, while Farrand [77] highlights the “significant discretion” platforms retain even under frameworks like the DSA. To counter this, our first recommendation is that platforms should adopt a clear and comprehensible definition of hate speech in their Community Standards. A user-centric, perhaps even question-based, operational definition would enhance user awareness and improve the consistency of moderation decisions.
Tying this to the broader goals of SDG 16, increasing transparency is a fundamental step toward building ‘effective, accountable and transparent institutions’ (Target 16.6). A constitutional approach to content moderation, as advocated by scholars like De Gregorio [78], argues for embedding democratic values and fundamental rights directly into platform governance. By making their rules and enforcement criteria clear and understandable, platforms would not only mitigate the “awareness gap” but also become more accountable actors. This clarity is also essential for protecting ‘fundamental freedoms’ (Target 16.10), as it provides users with foreseeable boundaries for their expression, a key principle under international human rights law [79].
2. Transparency of Sanctions and Remedies (Promoting SDG 16.3 and 16.6): The sense of procedural fairness is eroded when enforcement is perceived as an opaque “black box”. Therefore, our second recommendation is that greater emphasis must be placed on informing users about the sanctions associated with rule violations. The current fragmented information system, which contributes to user disempowerment as documented by Sinpeng et al. [70], hinders access to justice and accountability. It is necessary to consolidate information on sanctions into a single, easily accessible document linked directly from the Community Standards, which would significantly promote understanding and a sense of fairness.
The lack of transparent remedies has severe implications for ‘access to justice’ (Target 16.3), a cornerstone of SDG 16. As research on the Rohingya genocide demonstrates, the arbitrary removal of content by platforms can lead to the destruction of vital evidence of human rights violations, thereby obstructing transitional justice processes [80]. Similarly, activists and journalists documenting conflicts often find their content removed without a meaningful appeals process, a phenomenon described as “disappearing acts” [81]. A clear, accessible, and fair system for sanctions and remedies is therefore not just a matter of user satisfaction, but a prerequisite for platforms to avoid actively undermining the rule of law and access to justice on a global scale.
3. Raising Awareness (A Shared Responsibility of Meta and the State): Finally, our finding of a significant “awareness gap” underscores the urgent need for educational initiatives, a point strongly supported by the literature [76]. Our third recommendation is that more resources should be dedicated to enhancing users’ digital awareness and critical thinking skills. Echoing the call by Sinpeng et al. [70] for improving “regulatory literacy”, we argue this is a shared responsibility. Platforms should invest in user education, and states should integrate critical digital and legal literacy into public education. Such programs should build upon users’ intuitive understanding of harm, empowering them to navigate the complexities of online governance, a process made more difficult by the context-dependent and coded nature of online hate [71,77].
This call for shared responsibility aligns with the overarching ambition of SDG 16 to foster ‘just, peaceful and inclusive societies’. Such societies require empowered, digitally literate citizens. The literature shows that effective models for governing digital risks often rely on co-regulation and multi-stakeholder collaboration, rather than placing the burden solely on the user or the state [82]. This is evident in the ‘cooperative cyberfare state model’ for preventing online grooming, which emphasizes a partnership between the state and parents [83]. Applying this logic, a collaborative effort between platforms, public institutions, and civil society to raise awareness is crucial for transforming users from passive subjects of platform power into active participants in a more just digital environment, as envisioned by frameworks like the ‘Just Digital’ framework [84].

5.3. Limitations and Future Research

This research has its limitations. The sample consisted exclusively of Hungarian users aged 14–34, limiting the generalizability of the findings to other demographic groups or cultural contexts. Furthermore, the questionnaire-based method is subject to the biases inherent in self-reported data. Methodologically, our analysis relied on descriptive statistics and t-tests to identify the ‘awareness gap’. Future research should employ more advanced statistical methods, such as regression analysis, to explore the potential correlations between demographic variables (e.g., age, education) and users’ attitudes toward hate speech recognition and sanctioning.
Future research could be extended to other countries and older age groups, and could employ qualitative methods (e.g., interviews) to explore user understanding in greater depth. It would also be valuable to investigate how the practical implementation of the DSA changes user awareness and platform practices over the long term.

6. Conclusions

This study examined the complex relationship between hate speech, platform governance, and democratic resilience within the framework of SDG 16. Our empirical results identified a critical “awareness gap” among young users in Hungary—while they intuitively recognize harmful content, they lack a conscious understanding of platform rules and sanctions.
The primary contribution of this study, however, is to reframe this gap not as a simple knowledge deficit, but as a symptom of a deeper legitimacy crisis in platform governance. Drawing on recent critical scholarship, we argue that the “awareness gap” is a rational user response—manifesting as folk theories, political disengagement, and digital resignation—to opaque, unaccountable, and commercially driven moderation systems. This gap, therefore, is not a failure of the user, but a failure of the governance model to earn user trust and foster meaningful participation. It erodes the foundations of a just and inclusive digital public sphere, directly undermining the objectives of SDG 16.
Addressing this systemic problem requires a multi-pronged strategy rooted in the principles of accountability, transparency, and shared responsibility. Our policy recommendations—aligned with SDG 16 targets for accountable institutions (16.6), access to justice (16.3), and the protection of fundamental freedoms (16.10)—call for a fundamental shift in platform governance. Fulfilling this obligation is not merely a matter of better rule-making, it is a prerequisite for rebuilding trust and fostering a resilient digital public sphere where citizens can act as empowered participants, not just passive subjects of algorithmic power.

Author Contributions

Conceptualization, D.B., Z.R. and R.K.; methodology, D.B., Z.R. and R.K.; software, D.B., Z.R. and R.K.; validation, D.B., Z.R. and R.K.; formal analysis, D.B., Z.R. and R.K.; investigation, D.B., Z.R. and R.K.; resources, D.B., Z.R. and R.K.; data curation, D.B., Z.R. and R.K.; writing—original draft preparation, D.B., Z.R. and R.K.; writing—review and editing, D.B., Z.R. and R.K.; visualization, D.B., Z.R. and R.K.; supervision, R.K.; project administration, D.B., Z.R. and R.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding. The Article Processing Charge (APC) was covered by Széchenyi István University.

Institutional Review Board Statement

In Hungary, national legislation does not require formal ethics committee approval for anonymous, non-invasive social science research involving adult participants where no sensitive personal data is processed. Nevertheless, to ensure the highest ethical standards and international compliance, the study protocol was voluntarily submitted to, and approved by, the Scientific Ethics Committee of the Scientific Advisory Board of Széchenyi István University (protocol code SZE/ETT-49/2025). The study was conducted in accordance with the Declaration of Helsinki.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. For participants under 18 years of age, informed consent was obtained from their legal guardian(s) prior to their participation.

Data Availability Statement

The raw data collected during this questionnaire-based study are not publicly accessible due to ethical and privacy considerations related to the subject matter of the study. Publicly available research cited in this paper is detailed within the reference list and in Appendix A and Appendix B.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SDG: Sustainable Development Goal
DSA: Digital Services Act
BERT: Bidirectional Encoder Representations from Transformers
UDHR: Universal Declaration of Human Rights
ICCPR: International Covenant on Civil and Political Rights
ECHR: European Convention on Human Rights
ECtHR: European Court of Human Rights
LGBTIQ+: Lesbian, Gay, Bisexual, Transgender, Intersex, and Queer

Appendix A

The following table analyzes the user’s ability to recognize the attributes of hate speech based on Hypothesis 0A.
Table A1. One-sample t-test result at a hypothetical mean of 50% + 1 (statistically significant result for H0A).
t-Test: One-Sample Assuming Unequal Variances
Mean: 214.80
Variance: 3108.20
Observations: 5
Hypothesized Mean: 151
df: 4
t Stat: 2.558887559
P(T ≤ t) one-tail: 0.031355958
t Critical one-tail: 2.131846786
P(T ≤ t) two-tail: 0.062711916
t Critical two-tail: 2.776445105

Appendix B

The following table indicates the user’s ability to sanction hate speech on Meta platforms according to the Oversight Board’s decisions based on Hypothesis 0B.
Table A2. One-sample t-test result at a hypothetical mean of 50% + 1 (statistically significant result for H0B).
t-Test: One-Sample Assuming Unequal Variances
Mean: 247.40
Variance: 196.30
Observations: 5
Hypothesized Mean: 151
df: 4
t Stat: 15.3851554
P(T ≤ t) one-tail: 0.00005207
t Critical one-tail: 2.131846786
P(T ≤ t) two-tail: 0.000104138
t Critical two-tail: 2.776445105
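As a minimal cross-check (assuming scipy is available), the sketch below recomputes the t statistics and one-tailed p-values in Tables A1 and A2 from the reported means, variances, the sample size of five case studies (df = 4), and the hypothesized mean of 151; the outputs match the tabulated values of roughly 2.559 (p ≈ 0.031) and 15.385 (p ≈ 0.00005).

import math
from scipy import stats

HYPOTHESIZED_MEAN = 151   # more than half of the 301 respondents
N_CASES = 5               # five case studies, hence df = 4

def one_sample_t(mean: float, variance: float) -> tuple[float, float]:
    """Return (t statistic, one-tailed p-value) for the reported summary statistics."""
    se = math.sqrt(variance / N_CASES)
    t = (mean - HYPOTHESIZED_MEAN) / se
    p_one_tail = stats.t.sf(t, df=N_CASES - 1)
    return t, p_one_tail

print(one_sample_t(214.80, 3108.20))  # Table A1: ~(2.559, 0.031)
print(one_sample_t(247.40, 196.30))   # Table A2: ~(15.385, 0.00005)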

Figure 1. Age-based distribution of respondents.
Figure 2. Highest educational attainment of respondents.
Table 1. Summary of the main legal challenges in defining hate speech.
Challenge | Description
No universal definition | Varies by country, court, and context
Free speech vs. harm | Risk of overbroad or under-protective laws
Subjectivity/context | Intent, audience, and culture affect interpretation
Distinction from offense/defamation | Difficult to draw clear legal boundaries
Digital/operational enforcement | Automated systems struggle with legal nuance
Table 2. Recognition of the definition of hate speech: the number and proportion of correct and partially correct answers.
Response Type | Count (n) | Proportion (%)
At least one correct answer | 113 | 37.5
Both correct answers | 48 | 15.9
Table 3. Distribution of hate speech recognition across the cases.
Case Study | Recognition (“Yes” responses, n)
Case 1 (Roma ethnicity) | 137
Case 2 (Meme video) | 271
Case 3 (Holocaust denial) | 209
Case 4 (Transgender individuals) | 266
Case 5 (Violence against a woman) | 191
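The figures in Table 3 are raw counts of "Yes" responses out of the 301 respondents. The short snippet below converts them into sample shares; the derived percentages are our own arithmetic and serve only as a reading aid.

```python
# "Yes" counts per case study from Table 3, out of N = 301 respondents.
N = 301
yes_counts = {
    "Case 1 (Roma ethnicity)": 137,
    "Case 2 (Meme video)": 271,
    "Case 3 (Holocaust denial)": 209,
    "Case 4 (Transgender individuals)": 266,
    "Case 5 (Violence against a woman)": 191,
}

for case, count in yes_counts.items():
    # Express each raw count as a share of the full sample.
    print(f"{case}: {count}/{N} = {count / N:.1%}")
# e.g. "Case 1 (Roma ethnicity): 137/301 = 45.5%"
```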
Table 4. Sanctioning preferences of respondents across the five case studies. The “correct” sanction is defined as aligning with Meta’s final decision to delete the post.
Sanction Outcome | Case 1 | Case 2 | Case 3 | Case 4 | Case 5
Correct | 49 | 48 | 60 | 74 | 37
Incorrect | 252 | 253 | 241 | 227 | 264
More Lenient | 125 | 84 | 129 | 79 | 195
More Severe | 127 | 169 | 112 | 148 | 69
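As a reading aid for Table 4, note that "More Lenient" and "More Severe" are the two subcategories of "Incorrect", and that each case column sums to the full sample of 301 respondents. The brief consistency check below illustrates this; the data layout is ours and purely illustrative.

```python
# Table 4: sanction outcomes per case study (columns correspond to Cases 1-5).
table4 = {
    "Correct":      [49, 48, 60, 74, 37],
    "Incorrect":    [252, 253, 241, 227, 264],
    "More Lenient": [125, 84, 129, 79, 195],
    "More Severe":  [127, 169, 112, 148, 69],
}

for i in range(5):
    # "More Lenient" and "More Severe" together make up the "Incorrect" row ...
    assert table4["More Lenient"][i] + table4["More Severe"][i] == table4["Incorrect"][i]
    # ... and "Correct" plus "Incorrect" cover all 301 respondents in each case.
    assert table4["Correct"][i] + table4["Incorrect"][i] == 301

print("Table 4 is internally consistent (N = 301 per case).")
```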
Table 5. Key Divergences Between Legal and Platform Hate Speech Governance.
Feature | Legal Frameworks | Platform Governance | Implication for Users (as Identified in the Literature)
Scope of Definition | Typically narrow (incitement to violence/discrimination). Constrained by free speech rights. | Generally broader, covering “offensive” content not legally proscribed. | Confusion and “Over-removal”: Users find content removed that would be legally permissible.
Enforcement | Requires due process, judicial review. Slower. | Rapid and less transparent. Relies on automated systems and internal policies. | Lack of Predictability and Fairness: Decisions seem arbitrary; users feel disempowered by the opaque process.
Standardization | Jurisdiction-specific but based on established legal principles. | Aims for global uniformity but often struggles with local context. | Inconsistent Outcomes: Similar content may be treated differently across platforms or regions, leading to user frustration.
Table 6. Policy Recommendations for Bridging the “Awareness Gap” within the SDG 16 Framework.
Policy Recommendation | Addressed Problem (Root Cause of the “Gap”) | Corresponding SDG 16 Target
1. Increasing Regulatory Transparency | Legitimacy Deficit & Algorithmic Opacity: User distrust in opaque, “black box” governance models that lack democratic input and accountability. | Target 16.6: Develop effective, accountable and transparent institutions at all levels; Target 16.10: Ensure public access to information and protect fundamental freedoms.
2. Transparency of Sanctions & Remedies | Procedural Injustice & User Disempowerment: Inconsistent enforcement, lack of effective remedies, and destruction of evidence, leading to user frustration and “reporting fatigue”. | Target 16.3: Promote the rule of law and ensure equal access to justice for all.
3. Raising Awareness as a Shared Responsibility | Political Disengagement & Digital Resignation: Users’ withdrawal from active participation due to a sense of powerlessness and a breakdown of trust between individuals and institutions. | Overarching Goal of SDG 16: Promote just, peaceful and inclusive societies.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
