1. Introduction
Freedom of expression is a cornerstone of democratic societies and has long been recognized as a fundamental human right. In the digital age, however, its contours have significantly evolved, presenting novel legal, societal, and ethical dilemmas. The emergence of the internet, proliferation of social media platforms, and the global reach of digital communication have transformed how individuals engage with public discourse (
Brečka et al. 2010). These developments have also exacerbated tensions between the right to express one’s opinion and the need to protect individuals and democratic institutions from harm caused by disinformation, hate speech, and targeted manipulation (
European Parliament 2024).
Contemporary scholarship highlights the dual nature of the internet—both as a vehicle for robust civic engagement and as a medium vulnerable to abuse (
Křeček et al. 2023). The European Court of Human Rights (ECtHR) has played a key role in developing principles to navigate this tension, notably in the cases of Delfi AS v. Estonia and K.U. v. Finland, which set important precedents concerning platform liability and user anonymity (
Šikuta 2017). Recent decisions by the Court of Justice of the European Union (CJEU) further indicate a growing willingness to hold online intermediaries accountable, particularly in relation to unlawful content and electoral integrity (CJEU Case C-18/18, 2019).
Within the European Union (EU), harmonization efforts face challenges due to varying national approaches to freedom of expression. While several initiatives—such as the European Media Freedom Act (Regulation (EU) 2024/1083), the Action Plan Against Disinformation, and the Digital Services Act—seek to establish a common legal framework, a fully unified approach remains elusive (
European Commission 2024). Nonetheless, the EU has made important strides, particularly in addressing hate speech, safeguarding media pluralism, and combating strategic lawsuits against public participation (SLAPPs).
In its harmonization efforts, the EU faces divergences among Member States regarding the balance between protecting freedom of expression and combating disinformation and hate speech. On the one hand, it is essential to shield society from harmful content that may undermine democratic processes or provoke social unrest. On the other hand, it is necessary to ensure that regulation is not misused to silence critical voices or political opposition (
Funta and Plavcan 2021).
Freedom of expression within the European Union represents a particularly complex legal and political issue. Its complexity stems not only from substantive considerations but also from the legislative diversity of the Union. As a supranational entity composed of numerous Member States, the EU encompasses jurisdictions that differ significantly both in their constitutional traditions regarding freedom of expression and in their broader societal approaches to this right. While the Union has undertaken sustained efforts to harmonize regulation in this field (
Fee 2024), freedom of expression remains largely embedded in domestic legal orders. Consequently, despite important initiatives at the supranational level, it has not yet been fully unified across the EU. The central question this article addresses is whether the EU is effectively balancing the promotion of free expression with its regulation, especially in the context of digital communication and hybrid threats. This study is a doctrinal/theoretical analysis that synthesizes case law, legislation, and policy instruments into a structured framework. It develops the argument along four axes: (i) the core concepts and limits of freedom of expression, (ii) the social harms arising from its abuse and their legal assessment, (iii) the regulatory and judicial responses at national and supranational levels, and (iv) the cross-cutting challenges to coherence in EU law.
2. Materials and Methods
This paper is based on a doctrinal/theoretical research methodology, which entails analysis of primary and secondary legal sources relevant to the topic of freedom of expression in the European Union. Primary sources include constitutional texts, EU regulations and directives (e.g., Regulation (EU) 2024/1083 and Directive (EU) 2024/1069), and key decisions by the ECtHR and the CJEU. Case law was selected based on its jurisprudential importance and relevance to the scope of the research (e.g., Delfi AS v. Estonia, K.U. v. Finland, Meta Platforms Inc. v. Bundeskartellamt).
Secondary sources include academic papers, policy documents, official EU communications, and expert commentary, with citations following the Author-Date format. Legislative documents and official reports were accessed via EUR-Lex, Curia, HUDOC, and institutional repositories.
The research also incorporates qualitative content analysis of policy documents and institutional initiatives (e.g., Code of Conduct on Illegal Hate Speech Online; European Parliament INGE Reports), evaluating their objectives, implementation, and legal implications. The analysis respects ethical standards in legal scholarship and adheres to principles of academic integrity. All cited materials are publicly available.
The objective is not to provide a systematic literature review but to construct a conceptual and normative framework clarifying key themes and identifying regulatory challenges.
3. Problematization
While the general principles of freedom of expression are well established in European constitutionalism, the digital environment confronts legal scholarship and policymakers with particularly difficult questions that lack clear answers. This paper focuses on three interrelated challenges:
- (a)
Measuring social harm. Disinformation, hate speech, and manipulative online campaigns are regularly described as threats to democracy and public health, yet there is no agreed methodology for assessing their actual impact. Courts and regulators struggle with defining when expression crosses the threshold from protected opinion into a legally cognizable harm. Electoral interference, polarization, and loss of trust in institutions are often cited (
Tomz and Weeks 2025;
Essien 2025), but remain difficult to quantify in ways that justify restrictions without risking arbitrariness.
- (b)
Preventing censorship and illiberal misuse. Efforts to counter harmful speech inevitably raise the spectre of censorship. In EU Member States with illiberal tendencies (
Cabrera Cuadrado and Chrobak 2023), legislative tools against “fake news” or “threats to national security” may be weaponized to silence critics or opposition voices. The Czech example of draft legislation allowing broad website blocking illustrates the danger of vague legal definitions. The central dilemma is how the EU can encourage robust safeguards without enabling authoritarian misuse at the national level.
- (c)
National-supranational tensions. Freedom of expression remains primarily a national competence, with Member States applying divergent thresholds and remedies. At the same time, the EU is increasingly intervening through regulations such as the Digital Services Act, the Media Freedom Act, and the Anti-SLAPP Directive (
Farrington and Zabrocka 2023). This creates frictions: Member States may view EU harmonization as encroaching on sovereignty, while EU institutions emphasize cross-border risks and the need for uniform standards. The resulting tension between fragmentation and harmonization represents one of the most pressing unresolved issues.
4. Core Concepts and Limits of Freedom of Expression
For the purposes of this paper, freedom of expression is defined as the legally protected right of individuals to seek, receive, and impart information and ideas of all kinds, through any medium, subject only to restrictions that are necessary and proportionate in a democratic society to protect competing rights and legitimate public interests. This definition synthesises the guarantees of Article 10 ECHR and Article 11 of the Charter of Fundamental Rights of the EU, but also highlights the practical dimension of freedom of expression in the digital environment, where private intermediaries and technological infrastructures play a decisive role in shaping access to speech.
Freedom of expression represents a highly debated legal and societal issue, gaining increasing prominence in recent years among both the general public and the professional community (
Singh 2018;
Peonidis 2019;
Kapelańska-Pręgowska and Pucelj 2023;
Beaupert 2018). The topic is inherently complex and, due to its nature as a conflictual right, widely perceived as sensitive. In recent decades, the issue has intensified—particularly with the advent of the internet, the subsequent rise of social media (
Funta and Ondria 2023), and later during the COVID-19 pandemic and the ongoing conflict in Ukraine. These developments demonstrate that freedom of expression is no longer only a constitutional or doctrinal question, but also a highly practical challenge with direct implications for democratic stability, public health, and social cohesion.
The combination of technological disruption, geopolitical turbulence, and societal polarisation underscores both the urgency and the difficulty of establishing clear legal and policy standards in this domain. Interactive platforms that enable commentary and the communication of personal opinions, coupled with rising societal tensions, have exacerbated the clash between freedom of expression and the protection of personality rights, as well as the spread of disinformation. The perception of anonymity, the speed of communication, and ease of access have facilitated the dissemination of diverse information regardless of age, intellectual capacity, or educational background (
Funta 2020, p. 195).
From a legal perspective, not every form of expression is protected in the same manner. A fundamental distinction lies between statements of fact and value judgments. Statements of fact, in principle, require proof of veracity, whereas value judgments are not capable of being proven (
Smet 2011, p. 29). The Constitutional Court of the Slovak Republic has emphasized that circumstances play a decisive role: whether the speaker presents certain facts as certainties, or merely as suspicions driven by a good-faith intent to inform the public on matters of general interest (IV. ÚS/472/2012). Similarly, the Slovak Supreme Court has held that in assessing value judgments, the overall meaning must correspond to truth within generally accepted levels of simplification or imprecision, so that readers may form their own judgment (3Cdo/84/2011, 3MCdo/10/2005).
The issue of user anonymity on the internet (
Atanesyan et al. 2025) must be considered one of the principal catalysts in the discourse surrounding online freedom of expression. In Standard Verlagsgesellschaft v. Austria (No. 3), para. 74 et seq., the European Court of Human Rights (ECtHR) expressed a partial view on anonymity, referring to the Declaration on Freedom of Communication on the Internet, which underscores the principle of online anonymity as a means of supporting the free expression of opinions, information, and ideas. The Court further noted that requiring public disclosure of authorship in online communication could produce a chilling effect, deterring individuals from commenting; such deterrence could multiply and indirectly infringe upon a media company’s right to freedom of the press.
Freedom of expression in virtual space presents new legal threats not only to the authors and recipients of such speech but also to the operators of communication platforms. The internet and modern mobile and computer technologies enable the rapid and often anonymous dissemination of information with a broad reach through platforms such as YouTube, TikTok, Pinterest, Facebook, Instagram, or WhatsApp (
Šramel and Horváth 2021).
The case law on anonymity cuts both ways, however. While Standard Verlagsgesellschaft v. Austria (No. 3) underscored that anonymity supports free expression by preventing chilling effects, in K.U. v. Finland (Application no. 2872/02, judgment of 2 December 2008) the Court recognized that the anonymous nature of the internet also imposes a duty on States to ensure that perpetrators of unlawful acts can be identified and prosecuted (
Šikuta 2017). These cases illustrate the fragile balance between protecting anonymity as a condition of open debate and ensuring accountability for harmful conduct.
The issue is further complicated by divergent domestic understandings of freedom of expression and the associated concepts of disinformation and misinformation. While most Member States recognise the distinction between intentional disinformation and negligent misinformation, the scope of permissible restrictions varies considerably. Germany and France, informed by their historical experiences with totalitarian propaganda, permit broader limitations on hate speech and Holocaust denial. In contrast, Nordic countries traditionally emphasise a liberal approach, tolerating robust—even offensive—public debate (
Kenyon 2025). Hungary, however, demonstrates how vague anti-disinformation provisions can be instrumentalised to silence critical media, raising concerns about the abuse of regulation (
Metodieva 2025). These national divergences reveal that “freedom of expression” is not a uniformly applied standard but a contested principle whose application depends on constitutional traditions, political culture, and historical memory. Consequently, EU efforts to harmonise definitions and thresholds—such as in the EMFA or the Anti-SLAPP Directive—must contend with deeply rooted national differences that complicate convergence.
It may thus be concluded that freedom of expression is not confined solely to politics or art but extends into everyday matters, including the commercial sphere (
Krzeminska-Vamvaka 2008). In this context, the Constitutional Court of the Czech Republic ruled that even commercial speech is protected under Article 17(1) of the Charter of Fundamental Rights and Freedoms. Such guarantees apply proportionately even in interim injunction proceedings. If a commercial expression is to be restricted by an injunction on grounds of alleged falsity, the ordinary courts must adequately address—commensurate with the nature of interim proceedings—whether the factual basis of such commercial expression can indeed be deemed false (Constitutional Court of the Czech Republic, Case No. TZ 57/2021, 23 August 2021).
Taken together, these principles delineate the core limits of freedom of expression in the European legal context. Protection varies depending on the form of expression (fact vs. value judgment), the conditions of dissemination (anonymity vs. identifiability), and the field of application (political, artistic, or commercial speech). At the same time, the possibility of abuse—through disinformation, hate speech, or strategic lawsuits against public participation—creates a pressing need for legal systems to define boundaries without undermining the democratic value of open discourse.
5. Social Harms and Legal Proportionality
In this paper, disinformation is understood as the deliberate creation and dissemination of false or misleading information with the intent to deceive, manipulate public opinion, or cause harm to individuals, institutions, or democratic processes. This differs from misinformation, which denotes the unintentional sharing of inaccurate content (
Benkler et al. 2018). While both phenomena undermine trust and public debate, disinformation is distinguished by its intentionality and instrumental use for political, economic, or ideological purposes.
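Because the distinction turns on intent, it can be stated schematically. The following minimal Python sketch is purely illustrative (the function and its inputs are assumptions introduced here, not a legal test); it simply encodes the definitional logic set out above:

```python
# Illustrative sketch of the intent-based distinction drawn in this paper.
# The inputs are hypothetical; real legal assessment is contextual.

def classify_false_content(is_false_or_misleading: bool,
                           intent_to_deceive_or_harm: bool) -> str:
    """Apply the paper's working definitions (cf. Benkler et al. 2018)."""
    if not is_false_or_misleading:
        return "accurate content (outside the definitions)"
    if intent_to_deceive_or_harm:
        # Deliberate creation/dissemination to deceive, manipulate opinion,
        # or harm individuals, institutions, or democratic processes.
        return "disinformation"
    # Inaccurate content shared without such intent.
    return "misinformation"

print(classify_false_content(True, True))    # disinformation
print(classify_false_content(True, False))   # misinformation
```

The point of the sketch is only that intentionality, not falsity alone, is the classifying criterion.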
The digital environment amplifies the potential harms stemming from the misuse of freedom of expression. The problem does not lie in the mere dissemination of information or the exercise of the right per se, but in its deliberate or negligent misuse. The spread of false or distorted information can cause serious harm—legal, financial, or reputational damage, or even irreversible health consequences (
Čentéš et al. 2020;
Čentéš and Rampášek 2024).
One of the most serious harms of disinformation is its capacity to disrupt democratic processes and electoral integrity. In Decision No. PL. ÚS 26/2019 of 26 May 2021, the Constitutional Court of the Slovak Republic stressed that restrictions on electoral polls must not amount to dirigisme replacing the free dissemination of information, except where strictly necessary in a democratic society. Similarly, in Resolution No. IV. ÚS 362/2009, the Court underscored that freedom of expression is a conditio sine qua non of pluralistic democracy, with special protection for value judgments in political discourse. In Decision No. PL. ÚS 5/03, the Court highlighted the inherent tension between electoral law restrictions and constitutional guarantees of expression. These cases demonstrate the difficulty of balancing free speech with the integrity of elections.
Targeted attacks on certain candidates, alongside the promotion of others, constitute a strategy that may significantly influence electoral outcomes, whether at the national or European level. As further held by the Constitutional Court in Decision No. PL. ÚS 5/03 of 26 April 2004: “Free elections and freedom of expression form the foundation of any democratic system. However, these rights may, under certain circumstances, come into conflict. In the present case, such a conflict arose. On the one hand, electoral law, in pursuit of its legitimate aim of ensuring free elections to municipal self-government bodies, prohibits the publication of favourable or unfavourable information about candidates through mass media during the election period (§ 30(11) of the Electoral Act). On the other hand, the Constitution guarantees freedom of expression and the right to information (i.e., the right to seek and disseminate information), which may only be restricted by law where necessary in a democratic society to protect specified interests such as the rights and freedoms of others, national security, public order, public health, or morals (Art. 26(1) and (4) of the Constitution).”
To illustrate, consider the spread of disinformation related to the COVID-19 pandemic and its impact on the functioning of the EU and its Member States. The circulation of false narratives during the pandemic caused serious and often irreversible harm to individuals who believed them, particularly with respect to their health (
Chaufan et al. 2024). False narratives about vaccines, treatments, or the virus’s origins undermined public trust in scientific authorities and aggravated the health crisis (
Moyano et al. 2024). Social psychology research confirms that individuals exposed to false claims are statistically more likely to later accept them as true, especially when cognitive processing is interrupted (
Ľalík 2016). Thus, misinformation not only misleads individuals but erodes collective resilience to crises.
COVID-19 is not the sole subject of disinformation identified by the EU as constituting an abuse of freedom of expression. In recent years, misinformation concerning national elections in Member States has also gained prominence. Such campaigns aim to discredit certain candidates while portraying others in an unduly favourable light (
Nee 2023). Systematic efforts of this nature may directly influence electoral results and alter the composition of decision-making bodies. The dissemination of false information, whether in favour of or against particular candidates, may amount to manipulation of the electorate, potentially serving efforts to elect candidates committed to undermining the rule of law (
Starbird et al. 2023).
National examples highlight the risk of disproportionate responses. In March 2022, more than twenty internet domains were blocked in the Czech Republic without clear legal basis, with subsequent efforts to retroactively legitimize such measures. Draft legislation on restricting the dissemination of content threatening national security would allow blocking entire websites based on a single controversial article, under broadly defined conditions (
Křeček et al. 2023). Such vague legal standards risk transforming legitimate counter-disinformation efforts into tools of censorship.
A further dimension of harm concerns vulnerable groups, especially children. For instance, in Case C-252/21 (Meta Platforms Inc. et al. v. Bundeskartellamt, 4 July 2023), the Court of Justice of the European Union held that special protection of personal data must be ensured for children, given their reduced awareness of the risks, consequences, and rights associated with data processing. Such protection applies in particular to data processing for marketing purposes, user profiling, or data collection in the context of services targeted at children. Children remain one of the most vulnerable groups online, susceptible both to misinformation and to hate speech. The vast reach of the internet, its porous boundaries, and the limited capacity for oversight create the perception that it is a space in which freedom of expression can be exercised in both lawful and unlawful, ethical and unethical ways (
Brook and Eben 2024).
The European Parliament has described the information environment as increasingly “post-truth,” where objective facts are less influential than appeals to emotion and belief. This exacerbates polarisation, dividing society into antagonistic camps and weakening constructive public discourse. Such developments also create fertile ground for populist leaders and extremist movements (
Mouratidis et al. 2025).
These examples reveal the core difficulty in regulating freedom of expression: while harm is real, its measurement is uncertain. Courts rely on proportionality tests to justify restrictions, but the absence of clear metrics risks both under-enforcement (failing to address disinformation that damages democracy or health) and over-enforcement (enabling censorship and chilling effects). A sustainable regulatory balance requires legal standards capable of objectively assessing social harm while safeguarding pluralistic debate.
6. Regulatory and Judicial Responses: A Typology
Regulatory responses to the misuse of freedom of expression in the EU operate along two principal axes: legislative frameworks and judicial decision-making. Legislative action, at both national and supranational levels, provides ex ante rules defining permissible speech and the obligations of intermediaries, while judicial bodies intervene ex post to resolve specific disputes and apply proportionality tests to ensure that restrictions remain compatible with democratic standards (
Jahn 2021;
Psychogiopoulou 2024;
Mergaert 2015). Both dimensions are indispensable—legislation establishes general and preventive norms, whereas judicial reasoning offers contextual interpretation and safeguards against overreach.
In this broader framework, political and institutional initiatives also play a significant role. For instance, the European Parliament resolution of 23 November 2016 on EU strategic communication to counteract propaganda by third parties (procedure 2016/2030(INI)) sought to curb disinformation directed against the Union and to establish a strategic communication framework to counter such efforts. The report recognised that disinformation and propaganda form integral components of hybrid warfare, and therefore called for assertive responses through institutional and political communication, academic and think-tank research, social media campaigns, civil society engagement, and media literacy initiatives (
Samoilenko and Laruelle 2019).
A further step came with the Resolution of 17 April 2020 on EU coordinated action to combat the COVID-19 pandemic and its consequences (2020/2616(RSP)). Here, the European Parliament identified COVID-19-related disinformation as a direct threat to public health and stressed that every person must have access to accurate, reliable, and verified information, particularly in times of public health emergencies. It underlined the essential role of free, independent, pluralistic, and adequately funded media in safeguarding democratic values. Accordingly, the Parliament urged both Member States and the Commission to intensify support for professional journalism, to counter disinformation campaigns more effectively, and to guarantee that media outlets can operate free from political or financial pressure (
Vila Maior and Camisão 2021).
The regulation of freedom of expression within the European Union is shaped by a complex institutional division of competences. The European Commission proposes legislation, monitors implementation, and enforces compliance—most prominently under the Digital Services Act, where it holds direct supervisory powers over very large online platforms. The European Parliament acts as co-legislator, shaping the normative framework through resolutions, own-initiative reports, and its role in the ordinary legislative procedure; it also raises political visibility of disinformation and press freedom. The Council of the European Union, representing Member States, balances national sensitivities and ultimately decides, together with Parliament, on the adoption of legislative acts. The Court of Justice of the European Union (CJEU) interprets EU law, ensures its uniform application, and reviews the legality of acts by EU institutions. Alongside it, the European Court of Human Rights (ECtHR)—though not an EU body—remains central by interpreting Article 10 ECHR, which continues to frame standards applicable to all Member States.
The activities of the EU institutions in the field of freedom of expression can be observed at multiple levels and in various forms, such as the implementation of visual and informational campaigns (
Pamment 2020). One notable example is the European Parliament resolution of 9 March 2022 on foreign interference in all democratic processes in the European Union, including disinformation (2020/2268(INI)). The primary objective and intended purpose of this report is to restrict the dissemination of disinformation that aims to jeopardize the security of Member States, interfere with electoral processes, and create an effective sanctioning mechanism against the creators and distributors of such disinformation (
Besnier and Ketterlin 2025). The rapporteur of this European Parliament resolution was Sandra Kalniete, a Member of the European Parliament (MEP) from Latvia, representing the European People’s Party (EPP). She emphasized the danger posed by disinformation, stating: “At times I tend to compare the threat of disinformation to a monster, where online platforms and infrastructure form its nervous system and money constitutes its bloodstream. We may never be able to completely eliminate this monster, but we can weaken it to the extent that it becomes less dominant in our information space…”.
6.1. National Responses
Divergences between Member States remain a major obstacle to coherence. Some states emphasise restrictions to protect electoral integrity or national security, while others adopt a more liberal approach. The Slovak Constitutional Court has repeatedly intervened in electoral matters, highlighting tensions between free expression and regulation of campaigns (PL. ÚS 26/2019; PL. ÚS 5/03). In the Czech Republic, as noted above, more than twenty internet domains were blocked in March 2022 without clear legal basis, with subsequent legislative attempts to retroactively authorise such blocking (
Křeček et al. 2023). Draft laws allowing entire websites to be blocked for a single controversial article illustrate the risks of vague standards and potential censorship.
6.2. EU Legislative Initiatives (Hard Law)
At the supranational level, the European Union has undertaken significant legislative initiatives to address the problem of strategic lawsuits against public participation (SLAPPs). Such lawsuits are frequently employed to intimidate or silence journalists, activists, and public watchdogs by subjecting them to lengthy and costly legal proceedings. To counter this practice, the European Parliament and the Council adopted Directive (EU) 2024/1069 of 11 April 2024 on protecting persons engaged in public participation from manifestly unfounded claims or abusive court proceedings. The directive introduces mechanisms for the early dismissal of abusive lawsuits, strengthens procedural safeguards, and provides legal and financial support to affected individuals (
Van Calster 2024). By favouring civil remedies over criminal sanctions, it seeks to safeguard freedom of expression and democratic participation. More broadly, this initiative reflects the EU’s commitment to upholding the rule of law, protecting media freedom, and fostering an enabling environment for civic engagement across Member States (
Peribañez 2025).
A second pillar of this framework is the European Media Freedom Act (Regulation (EU) 2024/1083), which establishes common rules to safeguard media pluralism, ensure transparency in media ownership, and reinforce the protection of journalists against political interference and unlawful surveillance (
Verza 2025). A central objective of the Act is to guarantee that media professionals can perform their work in a secure environment, free from threats or intimidation, both online and offline. To this end, it introduces concrete protective measures, including the creation of independent national support services such as helplines, legal and psychological assistance, and safe shelters. In addition, it emphasises the need to strengthen security arrangements during public demonstrations and gatherings, thereby preventing coercion or intimidation that could undermine press freedom and journalistic independence (
Kerševan and Poler 2024).
Thirdly, the Digital Services Act (Regulation (EU) 2022/2065) establishes a harmonised and comprehensive regulatory framework for digital intermediaries within the Union. It introduces a layered system of obligations proportionate to the size and societal impact of online services, with particular emphasis on very large online platforms (VLOPs) and very large online search engines (VLOSEs), defined as those reaching more than 45 million users in the Union. The DSA requires providers to conduct regular and systematic risk assessments concerning the dissemination of illegal content, the potential adverse impact on fundamental rights—including freedom of expression and media pluralism—as well as threats to electoral integrity and public security. On the basis of such assessments, they must implement appropriate and proportionate risk-mitigation measures, including, inter alia, the adaptation of recommender systems, the modification of advertising practices, and the strengthening of internal moderation structures. In addition, they are under an obligation to ensure transparency through periodic reporting on content-moderation decisions, algorithmic functioning, and advertising practices. The DSA further requires the establishment of effective notice-and-action mechanisms, while at the same time safeguarding users’ rights to seek redress through internal complaint-handling procedures and out-of-court dispute-settlement mechanisms. Finally, it provides for access to platform data by vetted researchers and competent regulatory authorities, thereby enabling independent auditing and effective public oversight (
Lendvai 2024;
Genç-Gelgeç 2022;
Griffin 2025).
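The DSA’s layered architecture can be made concrete with a short sketch. The following Python fragment is a simplified illustration, not an implementation of the Regulation: the service categories and duty lists are heavily abridged, and only the 45-million-recipient threshold for VLOP/VLOSE designation is taken directly from the text above.

```python
# Simplified, illustrative model of the DSA's tiered obligations.
# Duty lists are abridged; designation in practice is made by the Commission.

from dataclasses import dataclass

VLOP_THRESHOLD = 45_000_000  # average monthly active recipients in the Union

@dataclass
class IntermediaryService:
    name: str
    monthly_active_eu_recipients: int
    is_online_platform: bool  # publicly disseminates user content

def dsa_duties(service: IntermediaryService) -> list[str]:
    """Return a cumulative (and simplified) list of applicable obligations."""
    duties = ["notice-and-action mechanism", "transparency reporting"]
    if service.is_online_platform:
        duties += ["internal complaint handling",
                   "out-of-court dispute settlement"]
        if service.monthly_active_eu_recipients >= VLOP_THRESHOLD:
            duties += ["systemic risk assessment",
                       "risk mitigation (recommender systems, advertising, moderation)",
                       "independent audits and data access for vetted researchers"]
    return duties

print(dsa_duties(IntermediaryService("ExamplePlatform", 50_000_000, True)))
```

The design point the sketch captures is proportionality: obligations accumulate with the size and societal impact of the service.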
This systemic approach goes significantly beyond the traditional liability exemptions under the E-Commerce Directive, acknowledging the active role of intermediaries in shaping the online information environment. Importantly, the DSA embeds procedural safeguards to prevent excessive removal of lawful content and to avert censorship, thereby seeking to reconcile the fight against disinformation and hate speech with the preservation of robust democratic discourse. In this respect, the DSA represents the European Union’s most concrete legal response to the phenomenon of information disorder (
Veleva 2024).
Fourthly, the Creative Europe Programme, in operation since 2014, continues to support Europe’s cultural and linguistic diversity while enhancing the competitiveness of the audiovisual sector (
Arreola and Niemkoff 2024).
All these instruments illustrate the EU’s preference for harmonisation through binding rules, while maintaining sensitivity to national constitutional traditions. They reflect a deliberate attempt to strike a balance between safeguarding fundamental rights and ensuring the resilience of democratic institutions in the digital age. By addressing abusive litigation, promoting media pluralism, and imposing systemic accountability on online platforms, the Union advances a multi-layered regulatory strategy that is both preventive and corrective in nature. This strategy is not merely sectoral but holistic, recognising that threats to freedom of expression may emanate from judicial abuse, political capture of the media, or structural imbalances in the digital sphere (
Caruso 2025). Accordingly, the EU framework situates the protection of freedom of expression within a broader rule-of-law paradigm, where transparency, accountability, and access to remedies constitute indispensable guarantees. Moreover, the cumulative effect of these measures signals an evolution from the Union’s earlier reliance on soft-law mechanisms and self-regulatory initiatives towards a more robust supranational governance model (
Bayer 2025). This development underscores the growing understanding that information disorders, disinformation campaigns, and strategic suppression of critical voices constitute not only challenges to individual rights but also systemic risks to democratic governance and public order. In this respect, the EU’s approach may serve as a normative reference point for other regional organisations and national jurisdictions grappling with similar phenomena. It exemplifies the Union’s capacity to combine legal innovation with respect for constitutional diversity, thereby strengthening the European legal order’s resilience against contemporary threats to freedom of expression and media pluralism.
6.3. Soft-Law Instruments and Platform Measures
Online freedom of expression creates challenges not only for individual users but also for the companies operating social media platforms (
Funta and Horváth 2024). These platforms therefore bear a responsibility to adopt measures that mitigate risks and contribute to a stable and safe online environment. In addition to binding legislation, the EU has promoted self-regulatory initiatives by digital intermediaries. The most prominent of these is the Code of Conduct on Countering Illegal Hate Speech Online, launched in May 2016 by Facebook, Microsoft, Twitter, and YouTube and later joined by other companies. The Code seeks to harmonize and formalize company procedures and employee responsibilities in situations where freedom of expression may be misused. Under this framework, signatories undertake to review and remove hate speech promptly upon valid notification, to enforce clear community standards, and to share best practices across the industry (
Psychogiopoulou 2024). They are further obliged to assess reports of illegal hate speech and, within 24 hours of receiving a substantiated request, either remove or disable access to such content. In addition, companies must adopt transparent rules governing the review of reported content that potentially infringes on protected expression, while also committing to broader duties aimed at promoting openness and accountability in the fight against abuse of freedom of expression (
Nave and Lane 2023).
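The central procedural commitment of the Code, review within 24 hours of a substantiated notification, can likewise be illustrated. The sketch below is a hypothetical compliance check, not part of the Code itself; the names and fields are assumptions:

```python
# Hypothetical check of the Code of Conduct's 24-hour review commitment.
# Timestamps and field names are illustrative assumptions.

from datetime import datetime, timedelta

REVIEW_WINDOW = timedelta(hours=24)

def review_deadline(notified_at: datetime) -> datetime:
    """Deadline for assessing a substantiated hate-speech notification."""
    return notified_at + REVIEW_WINDOW

def is_overdue(notified_at: datetime, now: datetime) -> bool:
    """True if the notification has not been assessed within the window."""
    return now > review_deadline(notified_at)

notified = datetime(2024, 5, 1, 9, 30)
print(review_deadline(notified))                          # 2024-05-02 09:30:00
print(is_overdue(notified, datetime(2024, 5, 2, 12, 0)))  # True
```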
Recent developments show the limits of voluntary commitments. In 2023–2024, several major platforms significantly revised their engagement with the Code of Practice on Disinformation. Most notably, X (formerly Twitter) formally withdrew from the Code, while other signatories such as Meta, Google, and TikTok narrowed the scope of their obligations, reducing transparency reports and cooperation with fact-checking initiatives. For the EU, this shift underscores the fragility of self-regulation: voluntary mechanisms may weaken precisely when political stakes are highest (
Chystoforova and Reviglio 2025). The Commission has responded by clarifying that compliance with the Code will now be assessed in conjunction with the Digital Services Act. Under the DSA, very large online platforms are legally obliged to identify and mitigate systemic risks related to disinformation, and failure to meet the Code’s commitments may serve as evidence of non-compliance in regulatory proceedings. This transformation reflects a broader trend: soft-law initiatives are no longer sufficient on their own, but operate as complementary instruments feeding into the hard-law framework of the DSA.
The Open Code (OpCode) project aimed to monitor and report hate speech online across Member States. In its first monitoring phase (2020), 302 messages were assessed, with removal rates varying significantly across platforms. Facebook proved the most responsive, while Twitter and YouTube showed minimal engagement. The project revealed disparities in platform responsiveness and underscored the need for consistent enforcement mechanisms (
Kassimeris 2021).
These initiatives demonstrate the growing reliance on private actors to regulate online discourse, but also the limits of voluntary commitments, which remain vulnerable to shifting corporate priorities and may falter in precisely those moments when robust safeguards are most needed. The uneven responsiveness of platforms, the selective withdrawal from established codes, and the narrowing of transparency obligations all reveal the fragility of self-regulation in the face of political or economic pressures. As a result, voluntary mechanisms cannot, in themselves, ensure the consistent protection of fundamental rights, nor can they guarantee a level playing field across the internal market. Against this backdrop, the EU has increasingly moved towards a hybrid governance model, in which soft-law initiatives serve as experimental laboratories or norm-setting baselines, while enforceable obligations under instruments such as the Digital Services Act provide the necessary legal backbone. This layered approach reflects a recognition that the governance of online communication is too central to democratic life, public security, and the integrity of elections to be left to corporate discretion alone (
Simpson 2024). In this respect, voluntary commitments may continue to play a supportive role, for instance by fostering industry cooperation, encouraging innovation in content moderation practices, and facilitating dialogue with civil society. However, their real value lies in complementing, and being scrutinised through, the legally binding duties imposed by EU legislation. Accordingly, the trajectory of EU law suggests an irreversible shift: while private actors remain indispensable partners in shaping the online information environment, their responsibilities are now increasingly anchored in supranational legal obligations (
Quintais et al. 2023). This ensures that the pursuit of economic interests does not override the Union’s foundational commitment to safeguarding fundamental rights, preserving media pluralism, and maintaining the conditions of democratic deliberation in the digital age.
The European Union also supports both Member States and civil society in implementing initiatives that strengthen media literacy. The European Commission is actively involved in sharing best practices and in developing guidance and tools to assist educators in improving pupils’ digital skills. In connection with the 2024 European elections, the Commission released a series of awareness-raising materials designed for both professionals and the wider public, complemented by a pan-European campaign organised in cooperation with the European Regulators Group for Audiovisual Media Services (
Sádaba and Salaverría 2023).
6.4. Judicial Decision-Making
One of the most serious threats to online freedom of expression is the spread of hate speech and false information (
Pejchal 2025). The European Court of Human Rights (ECtHR) first confronted these issues in Delfi AS v. Estonia (Chamber judgment of 10 October 2013, affirmed by the Grand Chamber on 16 June 2015), which concerned the liability of an online news portal for offensive comments posted by its readers (
Cox 2014). The Court considered whether imposing such liability was proportionate, assessing in particular the context of the posts, the preventive measures adopted by the portal, the possibility of holding the individual commenters liable, and the broader implications of the domestic courts’ findings. By upholding liability, the ECtHR underlined that the benefits of digital communication inevitably entail risks when harmful content circulates without effective safeguards.
Similarly, in K.U. v. Finland, the ECtHR emphasised the state’s positive obligation to establish frameworks enabling the identification of anonymous perpetrators of online abuse. These early cases demonstrate that courts at the European level play a corrective role in delineating the boundaries of online expression: while restrictions must remain strictly necessary in a democratic society, states and intermediaries alike are obliged to protect individuals from serious harms (
Koops and Sluijs 2012).
By contrast, in Standard Verlagsgesellschaft v. Austria (No. 3), the ECtHR cautioned that mandatory disclosure of online authorship may exert a chilling effect on journalistic activity and thereby indirectly undermine press freedom. Complementing this line of case law, the Court of Justice of the European Union (CJEU) in Glawischnig-Piesczek v. Facebook (Case C-18/18, 2019) held that hosting providers may be ordered to remove content identical or equivalent to material previously declared unlawful, potentially on a worldwide scale, even absent prior knowledge of its existence. More recently, in Meta Platforms Inc. v. Bundeskartellamt (Case C-252/21, 2023), the CJEU underscored the heightened protection owed to children’s data, acknowledging their particular vulnerability to profiling and manipulation.
In light of the considerations set out above, it can be inferred that these decisions illustrate the courts’ reliance on a proportionality framework: while restrictions on online expression must be strictly necessary in a democratic society, both states and private intermediaries are simultaneously subject to positive obligations to shield individuals from serious harms. What emerges is a distinctly European doctrine of “responsible freedom of expression online”, which seeks to reconcile the preservation of robust public debate with the imperative of protecting human dignity, privacy, and democratic integrity (
Nave and Lane 2023). This evolving jurisprudence reflects a dual commitment: on the one hand, safeguarding the marketplace of ideas against undue interference, and on the other, recognising that the digital environment magnifies the potential for harm through anonymity, viral dissemination, and algorithmic amplification. The courts thus position freedom of expression not as an absolute licence but as a relational right, exercised within a framework of responsibility shared between individuals, platforms, and states (
Frosio and Geiger 2023). By doing so, European adjudication contributes to a normative model that balances liberty with accountability, offering a corrective counterpoint to both laissez-faire approaches and overly restrictive regulatory regimes.
7. Cross-Cutting Challenges in the EU Context
Despite significant efforts at both national and supranational levels, several unresolved challenges complicate the regulation of freedom of expression in the European Union. These challenges cut across legal instruments, political practice, and judicial decision-making.
7.1. Fragmentation of National Approaches
Freedom of expression in the European Union remains primarily regulated within domestic legal orders, with Member States applying divergent thresholds and remedies. This divergence generates significant legal uncertainty in cross-border contexts, where a single online statement may simultaneously breach multiple national laws. Efforts by the EU to harmonize this area—through instruments such as the Digital Services Act (DSA), the European Media Freedom Act (EMFA), or the Anti-SLAPP Directive—inevitably encounter resistance rooted in distinct constitutional traditions and historical experiences with censorship.
National practices illustrate the scope of these divergences. Germany and France, drawing on their historical experience, have long maintained constitutional traditions permitting significant restrictions on hate speech. By contrast, in Hungary concerns persist regarding government control of media pluralism and the instrumental use of regulation to silence critical voices (
Metodieva 2025). Against this background, the EMFA is not merely a measure of market regulation, but a direct response to systemic deficiencies in some Member States, aiming to secure press independence and thereby protect freedom of expression where national safeguards prove insufficient (
Cantero Gamito 2023).
The challenges of fragmentation are particularly visible in the online environment. A single racist post may, within seconds, contravene legal provisions in numerous Member States simultaneously. More broadly, the combined issues of protecting journalists and media, addressing hate speech and disinformation, regulating online speech, managing the implications of artificial intelligence, and adapting to ever-growing internet activity create a system exceedingly difficult to regulate without more harmonised and effective rules (
Peña-Fernández et al. 2025).
An additional challenge arises from the inherently cross-border nature of online freedom of expression. Individuals can disseminate disinformation or hate speech across jurisdictions, and can relocate with ease, making localization and enforcement far more complex (
Šori and Vehovar 2022). This dynamic demonstrates that the abuse of freedom of expression is not only a pressing issue for individual Member States but has also become a significant concern for the Union as a whole.
7.2. Risk of Censorship and Illiberal Misuse
Efforts to counter harmful speech can be weaponized in Member States with weak rule-of-law safeguards. The Czech draft legislation discussed above, which would enable the blocking of entire websites for a single controversial article on the basis of vague definitions of “threats to national security,” illustrates the danger (
Křeček et al. 2023). Such measures highlight how legitimate concerns about disinformation can be transformed into tools of censorship, threatening pluralism rather than protecting it.
7.3. Tension Between National Sovereignty and Supranational Oversight
The EU’s growing regulatory role has provoked frictions with Member States. While the EU emphasises the cross-border nature of online harms—ranging from electoral interference to disinformation campaigns—some Member States view supranational intervention as encroaching upon their sovereignty in the field of constitutional rights. This tension is particularly visible in debates on the scope of the Digital Services Act and the Media Freedom Act (
Hulkó et al. 2025).
7.4. Power of Private Intermediaries
Large technology companies play a pivotal role in shaping the boundaries of expression online. Through content moderation, community standards, and algorithmic amplification, they exercise quasi-regulatory authority. Although initiatives such as the Code of Conduct and the DSA aim to constrain this power, concerns remain about transparency, accountability, and due process. The risk of both over-removal and under-enforcement persists, raising questions about the adequacy of safeguards for fundamental rights (
Griffin 2024;
Fathaigh et al. 2025).
7.5. Vulnerable Groups and Democratic Resilience
Children, minorities, and journalists face heightened risks in the online environment. Targeted disinformation or harassment campaigns can disproportionately silence or manipulate these groups, undermining both individual rights and democratic resilience (
Celuch et al. 2023). Protection mechanisms remain uneven across Member States, with the EU seeking to establish only minimum standards.
7.6. Artificial Intelligence
An emerging challenge concerns the growing role of artificial intelligence. AI systems have the potential to exacerbate disinformation—for instance, through deepfakes, automated bot networks, or the large-scale micro-targeting of users. At the same time, AI tools are increasingly used in content moderation, fact-checking, and detection of coordinated inauthentic behaviour. This dual role magnifies both the opportunities and risks: automated moderation may reduce the spread of harmful content, but also risks errors, bias, and lack of transparency (
Gomez et al. 2024). The current DSA framework addresses algorithmic accountability only indirectly. However, with the formal adoption of the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) in 2024, the two instruments will need to be read together to understand how Europe regulates the intersection between digital platforms, disinformation, and freedom of expression.
The Special Committee on Foreign Interference in All Democratic Processes in the European Union, including Disinformation (INGE) was established in 2020 to assess the magnitude of threats across various domains related to EU democracy (
Tiilikainen 2024). The committee focused on matters such as national and European elections, disinformation campaigns in traditional and social media, cyberattacks targeting critical infrastructure, and the financing of political groups and civil society. In its findings, the committee emphasized the coordinated nature of foreign influence operations, particularly those orchestrated by authoritarian regimes seeking to undermine democratic institutions, polarize societies, and erode trust in public authorities and the media. The INGE Committee also called for a comprehensive strategy to bolster resilience within the Union, including legal and institutional reforms, enhanced cooperation among Member States, increased transparency in online political advertising, and strengthened support for independent journalism and media literacy. Its final reports underscored the need for a proactive, whole-of-society approach to counter hybrid threats and protect the integrity of democratic processes across the EU (
Korkea-aho 2025).
The foregoing analysis suggests that the EU is pursuing an active response, recognising that the issue concerns both the security and trust of Member State citizens and the overall credibility of the Union itself. In this context, the European Centre of Excellence for Countering Hybrid Threats (Hybrid CoE) plays a key supporting role by enhancing the Union’s strategic capabilities to respond to hybrid threats. Established in 2017 and headquartered in Helsinki, the Centre facilitates cooperation among EU Member States, NATO allies, and partner countries through research, training, and strategic analysis. Its activities contribute to improving societal resilience and building a coordinated response to disinformation, cyberattacks, and other forms of hybrid interference that undermine democratic institutions and public trust (
Arkan 2025).
Overall, the challenges highlight the precarious path along which the EU must navigate. On one side lies the risk of fragmentation, illiberal misuse, and unchecked private power; on the other, the danger of over-regulation and chilling effects. Reconciling national diversity with supranational coherence, while preserving robust safeguards for freedom of expression, remains the Union’s most pressing unresolved task. In this sense, the future of European regulation in the digital sphere will depend on its ability to institutionalise a model of responsible freedom of expression that both withstands technological disruption and resists political instrumentalisation. Only through continuous judicial oversight, transparent regulatory practices, and sustained dialogue with civil society can the EU ensure that its legal framework protects not merely the formal right to speak, but the substantive conditions for democratic discourse in the twenty-first century.
8. Conclusions
Freedom of expression within a jurisdiction as vast and diverse as the European Union faces the challenge of varying legal systems, each with its own set of rules and restrictions. Based on the aforementioned developments, it may be concluded that the EU is actively undertaking initiatives aimed at addressing specific issues linked to freedom of expression, in particular hate speech and the dissemination of disinformation. It is a positive sign that the EU recognizes the seriousness of the current situation, wherein expressions are spread online with the primary intention of inciting hatred, misleading recipients, and causing harm.
Nevertheless, despite these efforts, there remains a lack of a binding and comprehensive legislative framework that uniformly regulates these issues throughout the EU. This absence may be attributable to several factors, such as differing historical experiences, fears of censorship, significant ideological divergence, and the diverse legislative approaches of Member States.
That said, the EU has already taken initial steps that are far from insignificant. It is understandable that the EU proceeds cautiously in its attempts to regulate freedom of expression, given the historical legacy of censorship and the associated public dissatisfaction—an issue that poses a substantial risk not only for individual Member States but for the Union as a whole.
However, it must not be overlooked that legislation is not the only effective instrument available; judicial decision-making plays an equally important role. The judiciary complements legislative measures by providing authoritative interpretations and applying legal reasoning that enhances public understanding of the limits and implications of freedom of expression. The contribution of the courts is therefore indispensable.
In addition to legislative efforts, the case law of the European Court of Human Rights (ECtHR) plays a vital role. It is essential that courts—and, in the European context, the ECtHR in particular—intervene in specific disputes concerning freedom of expression. As consistently affirmed in the case law interpreting Article 10 of the European Convention on Human Rights, any restriction on freedom of expression must meet strict criteria and is subject to narrowly defined exceptions. Such limitations are not to be interpreted broadly; rather, the necessity of each restriction must be convincingly demonstrated by the state. This high threshold is designed to prevent disproportionate interference by public authorities and to ensure that freedom of expression remains robustly protected within a democratic society (
Kráľ 2004, p. 72).
Judicial bodies play a key role in delineating the boundary between legitimate expression and its misuse. Their decisions are crucial, particularly in controversial cases where competing rights must be balanced. For example, in cases involving defamation or hate speech, the courts must assess whether any restriction of expression is proportionate and does not infringe upon the core values of democratic discourse.
Without judicial oversight, freedom of expression would risk two opposing extremes. On one end, there would be unregulated anarchy, enabling the unchecked spread of falsehoods, hate speech, and incitement to violence—conditions that could destabilize society, foment conflict, and threaten individual rights. On the other end, excessive suppression by powerful entities could stifle legitimate discourse, undermining democratic principles.
In the absence of effective judicial control over speech, these extremes would represent significant threats to the legal order and social cohesion. For this reason, we must closely monitor the trajectory of the EU’s activities in this domain and place our hope in its ability to maintain a fair and sustainable balance between safeguarding freedom of expression and protecting the rights that stand in potential conflict with it.
Freedom of expression in the European Union sits at the heart of contemporary democratic debate, but the digital environment exposes three particularly difficult dilemmas. First, the problem of measuring social harm: without reliable criteria, legal interventions risk being either toothless or disproportionate. Second, the risk of censorship and illiberal misuse: in Member States with weak rule-of-law safeguards, regulatory tools against disinformation can be instrumentalized to silence critics. Third, the tension between national sovereignty and supranational regulation: the EU has a legitimate interest in addressing cross-border information risks, yet divergent traditions and constitutional sensitivities hinder uniform standards.
Against this backdrop, the EU’s incremental steps—the Digital Services Act, the Media Freedom Act, and the Anti-SLAPP Directive—represent significant progress but do not amount to a coherent framework. Courts, especially the ECtHR and the CJEU, continue to play a vital role by applying proportionality and protecting against chilling effects, yet their jurisprudence is reactive rather than systemic.
The way forward requires more than additional instruments. Three priorities emerge from this analysis:
- (1)
Developing clearer standards for harm assessment. The EU should invest in methodologies that allow disinformation or hate speech to be evaluated objectively for their impact on democratic processes, public health, or vulnerable groups.
- (2)
Embedding procedural safeguards against censorship. Any national or EU measure must include transparency, narrow definitions, and avenues for judicial review to prevent authoritarian misuse.
- (3)
Reconciling national diversity with supranational coherence. The EU should not seek full uniformity but must ensure a baseline of rights protection and procedural guarantees that apply across all Member States.
Without addressing these issues directly, the EU risks oscillating between two extremes: unregulated anarchy, enabling manipulation and harm, and excessive suppression, undermining democratic discourse. A sustainable balance requires not only legislation but also judicial vigilance, robust safeguards, and continuous dialogue between EU institutions, Member States, platforms, and civil society.