Definition
Disinformation refers to false or misleading information created with the deliberate intention to deceive and cause individual or societal harm. It is typically distinguished from misinformation, which involves falsehoods shared without deceptive intent, and from malinformation, which uses accurate information in misleading or harmful ways. Terms often used interchangeably in public debate—such as fake news, propaganda, and conspiracy theories—describe related but distinct phenomena with differing aims and methods. The term derives from the Soviet concept of dezinformatsiya, originally associated with covert influence operations and strategic deception. Over time, however, its meaning has expanded to encompass a wide range of manipulative practices enacted by both state and non-state actors. Disinformation can take textual, visual, and multimodal forms, including fabricated images and AI-generated content such as deepfakes. Motivations vary and may include political influence, economic gain, ideological mobilisation, or efforts to stigmatise specific groups. Although these practices have long historical precedents, digital and platformised communication environments have amplified their scale, speed, and persuasive potential. This entry provides a narrative overview and conceptual synthesis structured around four dimensions: the history of disinformation, the supply and diffusion mechanisms, the psychological, social, and narrative drivers, and the interventions designed to mitigate its impact.
1. Introduction
The literature increasingly regards disinformation as a socio-technical challenge rather than merely a knowledge deficit [1,2]. This perspective emphasises that false and misleading content emerges not only from individual misunderstandings but from the interaction between human cognition, social dynamics, and technological infrastructures. During the 2016–2020 “infodemic” period, research converged around Wardle and Derakhshan’s “information disorder” framework [3], which distinguishes misinformation, disinformation, and malinformation and provided a shared vocabulary for debates about measurement and policy. The literature also highlights conceptual challenges in distinguishing misinformation from disinformation, particularly when the intentions of those who share false content are difficult to ascertain, an issue explored in depth in recent philosophical analyses [4].
The modern use of the term disinformation reflects this evolution. Western scholarship typically treats it as a loan translation of Soviet dezinformatsiya, a term reported in intelligence contexts from the 1920s and subsequently codified in Soviet reference works [5,6,7]. English usage of the term increased markedly after the 1950s, and it entered dictionaries more widely from the 1980s onward, in step with Cold War debates about propaganda and active measures [5,7,8]. This diffusion underscores a key distinction in the literature: while techniques of deception have deep historical roots, the modern category term disinformation, together with its policy salience, is a product of 20th-century statecraft.
In recent years, three research streams have developed concurrently. The first concerns supply and diffusion: who produces fabricated content, how it is spread and sustained, and how platform incentives shape visibility and reach [9]. The second examines susceptibility, integrating cognitive, affective, social, and narrative mechanisms to explain why falsehoods persist and why corrections frequently leave residual influence [10,11]. The third examines the efficacy of interventions designed to counter disinformation, encompassing prebunking, media literacy initiatives, accuracy prompts, interface friction, post hoc corrections, and provenance or policy safeguards [11,12].
Recent cumulative evidence clarifies who is most susceptible and which interventions work. An individual participant data meta-analysis (31 experiments; 11,561 U.S. participants; 256,337 headline judgments) distinguishes discrimination ability (the capacity to differentiate true from false) from response bias (a general inclination to classify items as true or false). It shows that displaying sources alongside headlines improves discrimination, with heterogeneous gains across subgroups, and identifies demographic and cognitive moderators [13]. A complementary toolbox synthesis in Nature Human Behaviour delineates nine individual-level interventions, aligning strategies with objectives and the robustness of the available evidence [12]. A meta-analysis of media literacy interventions encompassing 49 experiments (N = 81,155) indicates a moderate overall effect on resilience (d = 0.60), with stronger outcomes for multi-session programs, in cultures with higher uncertainty avoidance, and among college students compared to crowdsourced adults [14].
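For readers less familiar with this distinction, signal detection theory offers a standard way to formalise it; the sketch below is a generic illustration under that framework, and the exact estimators used in [13] may differ. With H the hit rate (the proportion of true headlines judged true), F the false-alarm rate (the proportion of false headlines judged true), and z(·) the inverse standard-normal cumulative distribution function,

\[
d' = z(H) - z(F), \qquad c = -\tfrac{1}{2}\bigl[ z(H) + z(F) \bigr],
\]

where larger d' indicates better discrimination between true and false headlines, and c captures response bias: negative values reflect a general tendency to judge items as true, positive values a tendency to judge them as false.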
Together, these strands reflect a growing consensus that disinformation must be understood systemically, as an interplay between producers, environments, and audiences. This entry synthesises these perspectives while offering an accessible overview of the history, drivers, and countermeasures of disinformation.
2. History
The English term disinformation is generally traced to the Russian dezinformatsiya, a Soviet intelligence term that entered official and semi-official usage by the mid-20th century [5,15]. Archival and lexicographic notes indicate early institutional use in the 1920s (reports of a “special disinformation office” in 1923), inclusion in Russian dictionaries by 1949, and a definition in the Great Soviet Encyclopedia (1952) as “false information with the intention to deceive public opinion” [5,6,7].
Historians and other specialists caution against “recentism,” since contemporary concerns about disinformation are closely associated with social platforms and AI [16,17]. While the modern concept of disinformation is rooted in twentieth-century political and intelligence practices [8,15], earlier media systems also enabled various forms of intentional deception, strategic persuasion, and fabricated content [18,19]. Examining these antecedents helps illustrate how evolving communication infrastructures shaped the possibilities for large-scale informational manipulation. The cases discussed below are selected for their illustrative value and to underscore, also from a methodological standpoint, the importance of taking a historical perspective on the evolution of unreliable information.
Across different epochs, a small set of recurring communicative logics can be observed. One concerns fabrication and forgery, the deliberate production of false documents or narratives to legitimise authority or mobilise hostility. The Donation of Constantine—exposed philologically by Lorenzo Valla but used for centuries to support papal temporal claims—illustrates how textual fabrications can stabilise power arrangements when they resonate with prevailing institutional and religious frameworks [20]. Later, the Protocols of the Elders of Zion became a paradigmatic case of politically instrumentalised forgery, circulating internationally as evidence for a fictitious Jewish conspiracy and informing antisemitic propaganda across regimes and decades [21,22].
A second enduring logic is strategic persuasion as an instrument of statecraft, observable long before the modern era. Ancient empires already made systematic use of political messaging to construct legitimacy, consolidate power, and shape collective memory. In Augustan Rome, monumental inscriptions such as the Res Gestae Divi Augusti and widespread visual programs documented by historians and classicists were carefully crafted to celebrate imperial achievements, glorify military victories, and present the emperor as restorer of order and prosperity [23,24]. These practices combined controlled narratives, symbolic communication, and curated public imagery to influence perceptions across the empire, illustrating an early and sophisticated form of state-sponsored persuasion. The twentieth-century Soviet doctrine of dezinformatsiya later formalised these strategies into a repertoire of “active measures,” including selective leaks, planted stories, forged documents, and psychological operations designed to manipulate geopolitical perceptions [8,15,19,25]. Western responses included counter-propaganda and public diplomacy efforts such as the United States Information Agency and transnational broadcasters like Radio Free Europe/Radio Liberty, which sought to shape perceptions behind and beyond the Iron Curtain [26,27]. These historical continuities show that many features of contemporary influence operations—narrative construction, symbolic legitimacy, and strategic manipulation of information—extend deep into antiquity, even as digital infrastructures have transformed their scale and speed.
A third historical pattern involves the acceleration and scaling of information enabled by successive media infrastructures. The Reformation pamphlet economy and later civil-war propaganda in England show how cheap print, literacy growth, and dense distribution networks could rapidly disseminate polemical or misleading materials, transforming political communication [28,29,30]. In the nineteenth century, the mass press and its commercial logics amplified sensational narratives: the Great Moon Hoax [31,32] of 1835 and the “yellow press” of the Spanish–American War era [33] illustrate how economic incentives favoured spectacular and often unreliable stories, foreshadowing contemporary attention-based dynamics. Studies of the Yugoslav wars and the Rwandan genocide have shown how broadcast and print media can be weaponised to dehumanise adversaries, mobilise violence, and entrench divisive narratives [34,35].
Finally, the contemporary hybrid media environment combines these long-standing tendencies with novel technological affordances. Analyses of Russian information operations describe a “firehose of falsehood” model, in which high-volume, multi-channel, and often contradictory messaging seeks to overwhelm verification and fragment shared reality [15,19,36]. In parallel, platform-based ecosystems and market-shaping incentives on digital media have created new conditions for disinformation production and amplification, where political, economic, and ideological motives intersect with algorithmic curation and engagement-optimised business models [18,37,38].
Viewed through these recurring logics, disinformation appears not as a uniquely contemporary anomaly but as a persistent feature of mediated communication, whose forms evolve alongside political institutions, media infrastructures, and economic incentives. This conceptual perspective provides the backdrop for examining how present-day socio-technical environments interact with cognitive, social, and narrative mechanisms to sustain and amplify misleading content.
3. Supply and Diffusion of Disinformation
Disinformation is intentionally created and disseminated to influence beliefs, behaviours, or public debate, typically for political, economic, or ideological purposes. A large body of research shows that multiple actors—state and non-state, collective and individual—participate in its production and spread. Political actors may deploy disinformation to undermine trust in institutions, polarise societies, or gain electoral advantage, while commercial actors often pursue financial incentives such as advertising revenue or increased visibility [9,18,37,38]. Ideological groups and individual users may circulate misleading content to reinforce group identity, express grievance, or mobilise collective action [39,40,41].
Disinformation is amplified not only by its creators but also by ordinary users, who often share false content because it aligns with their prior beliefs or emotional states. Emotional and identity-based motivations—particularly anger, moral outrage, or confirmation bias—play a central role in driving engagement and diffusion [42,43].
Digital platforms significantly shape these dynamics. Recommendation systems, virality metrics, and frictionless sharing create environments in which sensational or emotionally charged content travels quickly and broadly. Platform features such as closed groups, personalised feeds, and microtargeting can intensify echo chambers and increase exposure to misleading information [41,44,45]. At the same time, business models centred on engagement may inadvertently incentivise the spread of controversial or polarising content, complicating efforts to curb disinformation [46,47].
Research shows that disinformation unfolds differently across regions, shaped by diverse media systems, political pressures, and cultural practices. In many Asia-Pacific countries, where news consumption is strongly mediated by mobile devices and encrypted messaging apps, the speed and opacity of information flows have facilitated the circulation of misleading content within tightly knit social and family networks, influenced by cultural norms, local politics, and linguistic diversity [48,49]. Across Sub-Saharan Africa, motivations for sharing false content often differ from Western assumptions: people may circulate unverified information out of civic duty, amusement, or political engagement, relying on locally embedded cues of credibility rather than institutional trust [50,51]. Latin American countries, characterised by high social media penetration and historically low confidence in traditional media, face recurring waves of fabricated content and identity-driven polarisation, often amplified by partisan or commercial actors [52,53,54]. Although psychological factors underlying susceptibility—such as analytic thinking and accuracy motivation—appear consistent across countries, comparative studies reveal substantial regional variation in media trust, platform governance, and the effectiveness of interventions [44,55,56].
Taken together, these global perspectives underscore that disinformation is not a uniform phenomenon but one that reflects and reshapes local information cultures, power structures, and communicative norms.
4. Psychological, Social, and Narrative Drivers
A well-developed body of literature explains why people come to perceive certain falsehoods as true, share them widely, and resist correcting them. Understanding susceptibility to disinformation requires examining multiple interconnected dimensions: the cognitive mechanisms that make false information seem credible, the emotional and social dynamics that drive its spread, the narrative frameworks through which communities interpret information, and the platform architectures that shape exposure and engagement. Together, these factors create complex ecologies in which disinformation can take hold and persist despite correction efforts [57,58,59].
4.1. Cognitive Mechanisms and Emotional Influences
Repetition and processing fluency are fundamental: familiar statements are easier to process and thus perceived as more truthful (the illusory truth effect), even when audiences have pertinent knowledge [59,60]. Once encoded, misinformation can continue to shape inference after correction (the continued influence effect) unless the correction provides a causally adequate alternative that fills the explanatory gap left by the myth [57,60]. Many sharing decisions are made with little deliberation; accuracy prompts that draw attention to accuracy before the moment of sharing can therefore improve subsequent sharing discernment in both experimental and field studies [61].
These cognitive biases are influenced by emotion and identity [2,58]. Moral-emotional language in social media posts attracts attention and enhances sharing [62]; anger appears particularly effective at making content viral in networked environments [63].
4.2. Social Dynamics and Network Effects
People rely on others for epistemic guidance: beliefs are shaped by social trust, status signals, and network structure. Formal models and empirical studies demonstrate how homophily, conformity, and reputational dynamics can produce polarised group beliefs, even among individuals seeking truth [64], as illustrated by the simple sketch below. Online, interface cues such as social endorsement indicators, source attributions, and popularity metrics function as credibility heuristics that speed up evaluation and shape user behaviour; at scale, false news has been shown to spread faster and more widely than true news, owing in part to advantages in novelty and emotional arousal [65,66,67].
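The clustering tendency described above can be illustrated with a deliberately simple bounded-confidence model, offered here as a hypothetical sketch in the spirit of, but not identical to, the formal models cited in [64]: agents repeatedly average their opinions only with others whose opinions are already close to their own (a crude stand-in for homophily), and the population typically fragments into separated opinion clusters.

import random

def bounded_confidence(n_agents=100, epsilon=0.2, steps=50, seed=1):
    """Hegselmann-Krause-style dynamics: each agent adopts the mean opinion
    of all agents within a confidence bound epsilon of its own opinion."""
    random.seed(seed)
    opinions = [random.random() for _ in range(n_agents)]  # initial opinions in [0, 1)
    for _ in range(steps):
        updated = []
        for x in opinions:
            close = [y for y in opinions if abs(y - x) <= epsilon]
            updated.append(sum(close) / len(close))
        opinions = updated
    return opinions

if __name__ == "__main__":
    final = bounded_confidence()
    clusters = sorted({round(o, 2) for o in final})
    print("opinion clusters:", clusters)  # typically a few well-separated values

Despite every agent following the same truth-agnostic averaging rule, restricting influence to similar others is enough to produce stable, polarised clusters rather than consensus.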
4.3. Narrative Frameworks
Narratives persuade by inducing transportation, an immersive state that reduces counter-argumentation and facilitates enduring attitude change; meta-analytic evidence substantiates these effects across various contexts [68]. Compelling stories can bypass evaluative filters and outcompete fact-only corrections, an asymmetry particularly salient in conspiracy narratives that supply clear villains, dramatic conflicts, and coherent explanatory frameworks in emotionally engaging arcs [69,70].
Recent work on the narrative processes behind disinformation [71] represents a fundamental shift in approach: moving from treating it primarily as a problem of factual inaccuracy to recognising it as a manipulation of the narrative frameworks through which communities make sense of reality. While fact-checking remains essential, this research demonstrates that facts are never encountered in isolation—they are always interpreted through narratives, the mental structures that communities use to organise information, assign meaning, and construct coherent understandings of the world. Disinformation exploits this reality by embedding falsehoods within narrative frameworks that resonate with specific communities’ values, identities, and ways of seeing, making factual corrections insufficient when the underlying narrative structure remains intact. This approach shows that effective interventions—including fact-checking, prebunking, and media literacy—must therefore operate on two levels: addressing factual accuracy while simultaneously engaging with the narrative dimension that determines how facts are received, interpreted, and integrated into belief systems. This dual focus is particularly crucial in polarised contexts, where different communities may acknowledge the same facts yet interpret them through incompatible narrative frameworks, leading to parallel realities and communication breakdowns.
4.4. Platform Design
Platform architecture shapes exposure, attention, and engagement with misleading content [72,73]. Ranking systems reward material that elicits strong reactions; recommendation algorithms create feedback loops that increase the visibility of sensational or congruent content; frictionless sharing reduces opportunities for reflection; and design choices such as infinite scroll, autoplay, or prominent engagement metrics can amplify cognitive biases [57]. The PNAS individual-participant meta-analysis [13] shows how source visibility influences discrimination, illustrating how even minor interface decisions can affect judgment processes. Susceptibility emerges from the interplay of cognitive (fluency, memory), emotional (arousal, moral language), social (identity, group norms), and architectural (algorithmic curation, user-flow design) factors [57].
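To make the architectural argument concrete, the toy sketch below ranks posts by a hypothetical engagement-weighted score; the weights, field names, and arousal signal are illustrative assumptions rather than any platform’s actual algorithm. Because shares, comments, and predicted emotional arousal dominate the score, an emotionally charged rumour can outrank a sober report that attracted more clicks.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    clicks: int
    shares: int
    comments: int
    predicted_arousal: float  # 0..1, e.g. output of a hypothetical emotion classifier

def engagement_score(post: Post, w_click=1.0, w_share=3.0, w_comment=2.0) -> float:
    """Toy engagement-optimised ranking: strong-reaction signals (shares,
    comments, high arousal) dominate the ordering, irrespective of accuracy."""
    base = w_click * post.clicks + w_share * post.shares + w_comment * post.comments
    return base * (1.0 + post.predicted_arousal)

posts = [
    Post("sober-report", clicks=120, shares=5, comments=10, predicted_arousal=0.2),
    Post("outrage-rumour", clicks=80, shares=40, comments=60, predicted_arousal=0.9),
]
feed = sorted(posts, key=engagement_score, reverse=True)
print([p.post_id for p in feed])  # ['outrage-rumour', 'sober-report']

The point is not the specific formula but the feedback structure: whatever proxy for engagement is optimised, content engineered to trigger strong reactions gains visibility, which in turn generates further engagement signals.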
5. Strategies for Prevention and Mitigation
Interventions for tackling disinformation are most effective when regarded as complementary, situated at different phases of the disinformation life cycle, and implemented across socio-ecological levels. Across reviews, five families recur: boosting (building competence through prebunking and media literacy), nudging (choice architecture), debunking (post-exposure corrections), labels, and system-level guardrails [57,58,74,75,76]. A recent toolbox in Nature Human Behaviour synthesises 81 studies into nine individual-level categories: accuracy prompts; debunking/rebuttals; friction; inoculation; lateral reading/verification; media-literacy tips; social norms; source-credibility labels; and warning/fact-checking labels. Each category is documented with its targets, scope, and evidence summary, and maps onto the four individual-level families of the schema used here [12].
5.1. Boosting: Building Competencies Through Prebunking and Media Literacy
Building individual competences against disinformation involves two complementary approaches that enhance people’s ability to recognise and resist manipulation: prebunking (or psychological inoculation) and media and information literacy (MIL) interventions.
Brief inoculations alert individuals to manipulation techniques and either pre-refute them (passive inoculation) or engage users in simulated manipulation (active inoculation, for example through mini-games), thereby cultivating mental “antibodies.” A comprehensive research program, including a large YouTube field experiment, demonstrated that short (approximately 90-second) technique-based prebunking videos improved recognition of manipulation and reduced susceptibility [57]. Effects are moderate and scalable but decay without reinforcement, which calls for booster exposures and integration into ongoing education [77].
MIL strengthens verification skills (such as lateral reading), source evaluation, and understanding of platform incentives. A randomised trial conducted in the United States and India demonstrated that a brief digital media literacy module improved discernment between mainstream and false news [11]. A 2024 meta-analysis of 49 experiments (N = 81,155) indicates a moderate overall effect on resilience to misinformation (d = 0.60), with more pronounced effects for multi-session programs, in cultures with higher uncertainty avoidance, and among college students compared to crowdsourced adults. Specific improvements include reduced belief (d = 0.27), enhanced discernment (d = 0.76), and diminished sharing (d = 1.04) [14]. These findings extend inoculation theory by demonstrating that structured, repeated exposure and refutation strategies are more effective than single sessions, while also offering new recommendations for educational systems [78]. Recent policy guidance advocates treating MIL as ecosystem infrastructure, integrating schools, libraries, community organisations, creator training, newsroom standards, and platform partnerships across the disinformation life cycle and socio-ecological levels [1].
Some studies note unintended side effects of boosting activities: when poorly designed, these interventions can lead individuals to adopt an overly generalised scepticism, resulting in reduced trust even toward reliable content or institutions. While these effects tend to be small and context-dependent, they highlight the importance of designing interventions that promote critical discernment rather than indiscriminate distrust [79].
Together, these competence-building interventions are widely regarded as among the most effective long-term strategies for countering disinformation, though they require sustained investment and time to achieve widespread impact across populations.
5.2. Nudging (Choice Architecture)
Accuracy prompts—simple reminders to evaluate accuracy before sharing—consistently enhance sharing discernment, yielding small yet significant effects across multiple experiments [61]. As part of a broader suite, friction and social-norm nudges can be useful, provided they are tailored to context and audience and combined with complementary interventions [12]. Effects can vary depending on the context and may diminish without repetition; nudges should be viewed as low-cost adjuncts rather than substitutes for more intensive interventions [58].
In addition to general warnings, making source information clear at the time of judgment enhances performance. The PNAS meta-analysis indicates that source display with headlines enhances discrimination on average, yielding greater advantages for certain subgroups. This implies that specific, informative cues can more consistently enhance veracity judgments compared to general labels [13]. Decisions at the interface level about where and how to show provenance, authorship, or editorial processes are examples of choice architecture.
5.3. Debunking (Post-Exposure Correction)
Fact-checking and corrections are effective tools for reducing belief in misinformation, but their impact is moderate and varies by context, message design, and individual differences [80]. Fact-checking works best not in isolation but in combination with media literacy, inoculation, and accuracy prompts, which reinforce one another in countering false information.
The Debunking Handbook 2020 establishes best practices: present the fact first, caution against the myth, and elucidate the fallacy [74]. Meta-analyses show that fact-checking and corrections have small to moderate positive effects on belief accuracy, but these effects vary depending on the topic, audience, and format [76]. Concerns about “backfire effects” have diminished as methodologies have advanced; when corrections adhere to best practices, backfire occurrences are infrequent [61,74].
5.4. Warning and Labels
Labels can take multiple forms, including:
- Warning labels, which signal that content has been disputed or lacks verification;
- Source-credibility labels, indicating the trustworthiness or expertise of an outlet;
- Context or information-quality labels, providing additional details, links, or provenance information;
- Fact-checking labels, summarising third-party review.
(Semi-)automated labelling promises scale but does not always deliver it. Specific labels linked to evidence generally outperform generic warnings; performance differs across languages and platforms, and mislabelling may lead to confusion or excessive moderation [58]. Aligning labels with pathways to source transparency and verification appears important: the evidence reviewed here indicates that source visibility and task-relevant cues at the decision point may yield more dependable improvements than generic warning banners [12,13].
5.5. Guardrails at the System Level
Infrastructure and governance at the upstream and midstream levels set the rules for how individual-level measures work. Effective trust signals require distinguishing between two complementary dimensions: assertive provenance and inferred context [81]. Assertive provenance refers to claims made by content creators or sources about a piece of content’s origin, authenticity, and history—such as who created it, when, and whether it has been altered. Standards like C2PA enable this by specifying cryptographically verifiable Content Credentials for capture and edit history, allowing creator tools and platforms to attach tamper-evident provenance, which is especially important for synthetic media. Inferred context, by contrast, encompasses the broader ecosystem of information surrounding content: how it has circulated, been modified, or been verified by third parties, as well as the reputation and track record of sources within specific domains. Building robust trust frameworks requires both dimensions, especially as synthetic media becomes increasingly sophisticated and widespread.
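The tamper-evident logic behind assertive provenance can be illustrated with a deliberately simplified sketch: creator claims are bound to a hash of the content and the whole bundle is signed, so any later change to the content or the claims is detectable. For brevity this example uses a shared-secret HMAC; real Content Credentials under C2PA rely on certificate-based public-key signatures and a standardised manifest format, so the code below is a conceptual illustration only, with hypothetical names throughout.

import hashlib, hmac, json

SIGNING_KEY = b"demo-key-not-for-production"  # stands in for a creator tool's signing key

def sign_manifest(content: bytes, claims: dict) -> dict:
    """Bind claims about origin and edit history to the content hash, then sign."""
    manifest = {"content_sha256": hashlib.sha256(content).hexdigest(), "claims": claims}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Recompute the content hash and the signature; any tampering breaks one of them."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    if hashlib.sha256(content).hexdigest() != claimed["content_sha256"]:
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

image = b"...raw image bytes..."
manifest = sign_manifest(image, {"creator": "example-camera", "edits": []})
print(verify_manifest(image, manifest))         # True: content and claims intact
print(verify_manifest(image + b"x", manifest))  # False: content altered after signing

Inferred context, by comparison, cannot be reduced to a single check of this kind: it aggregates circulation history, third-party verification, and source reputation, which is why the two dimensions are complementary rather than interchangeable.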
The European Union’s Digital Services Act requires very large online platforms to conduct systemic-risk assessments, adopt mitigation measures, and undergo independent audits. The Strengthened Code of Practice on Disinformation adds commitments on demonetisation, fact-checking access, and political advertising transparency [82,83]. These measures shift responsibility upstream and midstream, counterbalancing the tendency to place the entire burden on end users.
6. Conclusions
The collected evidence depicts disinformation as a systemic, socio-technical issue. Processing fluency and memory dynamics render familiar falsehoods credible; emotional resonance, identity affirmation, and narrative coherence direct attention and influence persuasion; social trust, status indicators, and network configurations determine diffusion; and platform design influences exposure and subsequent action [2,10,58]. These mechanisms—and their interactions—explain why disinformation thrives in contemporary information environments and why no single intervention can fully address it.
Effective practice operates at multiple levels. At the preventive edge, prebunking (short, technique-based videos; active inoculation games) and media and information literacy (verification, lateral reading, and AI/media awareness) cultivate competencies that persist within individuals and communities [1,11,14,57]. At the moment of action, accuracy prompts, judicious friction, and prominent source cues direct attention toward accuracy [12,13,61,81]. After exposure, well-structured corrections generally reduce misperceptions, particularly when they offer credible alternatives [74,76,80]. Narrative-layer tools foster openness by prompting individuals to reflect on how they interpret facts [71]. Finally, at the system level, provenance and platform transparency create enabling conditions: end-user competence cannot carry the full burden when creation and distribution are optimised for speed and scale [81,82,83].
The effects of these interventions are real but usually small to moderate; they fade without reinforcement and vary by domain and audience [58]. Layered deployment with periodic boosters enhances durability. Gaps remain: non-Western contexts are underrepresented in the research; evidence on multimodal and AI-generated deception is still limited; and the field needs standardised outcome measures, transparent reporting, and deployment research that connects laboratory effects to platform-scale impact under credible governance regimes.
Taken together, the evidence suggests that disinformation is best approached as a socio-technical problem: one emerging from the coupling of human cognition, social dynamics, and digital infrastructures. Addressing it requires aligning individual-level competencies with system-level incentives and governance. As communication technologies continue to evolve, so too must the strategies designed to strengthen information integrity and support resilient democratic societies.
Author Contributions
N.B.: Conceptualisation, Methodology, Writing—Original Draft, Writing—Review and Editing; S.M.: Conceptualisation, Supervision, Writing—Review and Editing. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
No new data were created or analysed in this study. Data sharing is not applicable to this article.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- World Economic Forum. Rethinking Media Literacy: A New Ecosystem Model for Information Integrity. World Economic Forum. 2025. Available online: https://www.weforum.org/publications/rethinking-media-literacy-a-new-ecosystem-model-for-information-integrity/ (accessed on 20 October 2025).
- Lazer, D.M.J.; Baum, M.A.; Benkler, Y.; Berinsky, A.J.; Greenhill, K.M.; Menczer, F.; Metzger, M.J.; Nyhan, B.; Pennycook, G.; Rothschild, D.; et al. The science of fake news. Science 2018, 359, 1094–1096. [Google Scholar] [CrossRef] [PubMed]
- Wardle, C.; Derakhshan, H. Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making. Council of Europe, Report. 2017. Available online: https://edoc.coe.int/en/media/7495-information-disorder-toward-an-interdisciplinary-framework-for-research-and-policy-making.html (accessed on 20 October 2025).
- Hayward, T. The Problem of Disinformation: A Critical Approach. Soc. Epistemol. 2025, 39, 1–23. [Google Scholar] [CrossRef]
- Andrés, R.R. Fundamentos del concepto de desinformación como práctica manipuladora en la comunicación política y las relaciones internacionales. Hist. Comun. Soc. 2018, 23, 231–244. [Google Scholar] [CrossRef]
- Mahairas, A.; Dvilyanski, M. Disinformation—Дезинформация (Dezinformatsiya). Cyber Def. Rev. 2018, 3, 21–28. [Google Scholar]
- Cheyfitz, E. Disinformation: The limits of capitalism’s imagination and the end of ideology. Boundary 2 2014, 41, 55–91. [Google Scholar] [CrossRef]
- Martin, L.J. Disinformation: An instrumentality in the propaganda arsenal. Polit. Commun. 1982, 2, 47–64. [Google Scholar] [CrossRef]
- Buchanan, T. Why do people spread false information online? The effects of message and viewer characteristics on self-reported likelihood of sharing social media disinformation. PLoS ONE 2020, 15, e0239666. [Google Scholar] [CrossRef]
- Brashier, N.M.; Marsh, E.J. Judging truth. Annu. Rev. Psychol. 2020, 71, 499–515. [Google Scholar] [CrossRef]
- Guess, A.M.; Lerner, M.; Lyons, B.; Montgomery, J.M.; Nyhan, B.; Reifler, J.; Sircar, N. A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proc. Natl. Acad. Sci. USA 2020, 117, 15536–15545. [Google Scholar] [CrossRef]
- Kozyreva, A.; Lorenz-Spreen, P.; Herzog, S.M.; Ecker, U.K.H.; Lewandowsky, S.; Hertwig, R.; Ali, A.; Bak-Coleman, J.; Barzilai, S.; Basol, M.; et al. Toolbox of individual-level interventions against online misinformation. Nat. Hum. Behav. 2024, 8, 1044–1052. [Google Scholar] [CrossRef]
- Sultan, M.; Tump, A.N.; Ehmann, N.; Lorenz-Spreen, P.; Hertwig, R.; Gollwitzer, A.; Kurvers, R.H.J.M. Susceptibility to online misinformation: A systematic meta-analysis of demographic and psychological factors. Proc. Natl. Acad. Sci. USA 2024, 121, e2409329121. [Google Scholar] [CrossRef]
- Huang, G.; Jia, W.; Yu, W. Media Literacy Interventions Improve Resilience to Misinformation: A Meta-Analytic Investigation of Overall Effect and Moderating Factors. Commun. Res. 2024. [Google Scholar] [CrossRef]
- Colon, D. La Guerre de L’information. Les États à la Conquête de nos Cerveaux; Tallandier: Paris, France, 2023. [Google Scholar]
- Darnton, R. The True History of Fake News. The New York Review of Books. Available online: http://nrs.harvard.edu/urn-3:HUL.InstRepos:42667781 (accessed on 20 October 2025).
- Pretalli, M.; Zagni, G. Une Histoire de la Désinformation. Fake News et Théories du Complot des Pharaons aux Réseaux Sociaux; Éditions Mimésis: Sesto San Giovanni, Italy, 2025. [Google Scholar]
- Bennett, W.L.; Livingston, S.; Barr, D.J. Regime, information infrastructures, and epistemic crisis. In The Disinformation Age: Politics, Technology, and Disruptive Communication; Bennett, W.L., Livingston, S., Eds.; Cambridge University Press: Cambridge, UK, 2020; pp. 1–26. [Google Scholar]
- Rid, T. Active Measures: The Secret History of Disinformation and Political Warfare; Farrar, Straus and Giroux: New York, NY, USA, 2020. [Google Scholar]
- Valla, L. On the Donation of Constantine; Harvard University Press: Cambridge, MA, USA, 2007. [Google Scholar]
- Cohn, N. Warrant for Genocide: The Myth of the Jewish World-Conspiracy and the Protocols of the Elders of Zion; Eyre & Spottiswoode: Hertfordshire, UK, 1967. [Google Scholar]
- Hagemeister, M. The Protocols of the Elders of Zion: Between History and Fiction. New Ger. Crit. 2008, 103, 83–95. [Google Scholar] [CrossRef]
- Bosworth, A.B. Augustus, the Res Gestae and Hellenistic theories of apotheosis. J. Roman Stud. 1999, 89, 1–18. [Google Scholar] [CrossRef]
- Zanker, P. The Power of Images in the Age of Augustus; University of Michigan Press: Ann Arbor, MI, USA, 1990. [Google Scholar]
- U.S. Department of State. Active Measures: A Report on the Substance and Process of Anti-U.S. Disinformation and Propaganda; U.S. Government Printing Office: Washington, DC, USA, 1987. [Google Scholar]
- Cull, N.J. The Cold War and the United States Information Agency: American Propaganda and Public Diplomacy, 1945–1989; Cambridge University Press: Cambridge, UK, 2008. [Google Scholar]
- Johnson, A.R. Radio Free Europe/Radio Liberty: The CIA Years and Beyond; Stanford University Press: Redwood City, CA, USA, 2010. [Google Scholar]
- Pettegree, A. Reformation and the Culture of Persuasion; Cambridge University Press: Cambridge, UK, 2005. [Google Scholar]
- Pettegree, A. Brand Luther: 1517, Printing, and the Making of the Reformation; Penguin: Singapore, 2015. [Google Scholar]
- Peacey, J. Politicians and Pamphleteers: Propaganda in the English Civil Wars and Interregnum; Ashgate: Farnham, UK, 2004. [Google Scholar]
- Encyclopaedia Britannica. The Great Moon Hoax of 1835 Was Sci-Fi Passed Off as News. 2025. Available online: https://www.britannica.com/story/the-great-moon-hoax-of-1835-was-sci-fi-passed-off-as-news (accessed on 20 October 2025).
- Smithsonian Magazine. The Great Moon Hoax Was Simply a Sign of Its Time. 2 July 2015. Available online: https://www.smithsonianmag.com/smithsonian-institution/great-moon-hoax-was-simply-sign-its-time-180955761/ (accessed on 20 October 2025).
- Walker, M. The Spanish American War and the Yellow Press. Library of Congress, 2024. Available online: https://blogs.loc.gov/headlinesandheroes/2024/02/the-spanish-american-war-and-the-yellow-press/ (accessed on 20 October 2025).
- Thompson, M. Forging War: The Media in Serbia, Croatia and Bosnia-Herzegovina; University of Luton Press: Luton, UK, 1999. [Google Scholar]
- Thompson, A. (Ed.) The Media and the Rwanda Genocide; International Development Research Centre: Ottawa, ON, Canada, 2007; Available online: https://www.internews.org/wp-content/uploads/legacy/resources/TheMedia&TheRwandaGenocide.pdf (accessed on 20 October 2025).
- Paul, C.; Matthews, M. The Russian “Firehose of Falsehood” Propaganda Model. RAND Corporation. 2016. Available online: https://www.rand.org/pubs/perspectives/PE198.html (accessed on 20 October 2025).
- Pedriza, S. Sources, Channels and Strategies of Disinformation in the 2020 US Election: Social Networks, Traditional Media and Political Candidates. J. Media 2021, 2, 605–624. [Google Scholar] [CrossRef]
- Ruiz, C. Disinformation on digital media platforms: A market-shaping approach. New Media Soc. 2023, 27, 2188–2211. [Google Scholar] [CrossRef]
- Wintterlin, F.; Schatto-Eckrodt, T.; Frischlich, L.; Boberg, S.; Reer, F.; Quandt, T. “It’s us against them up there”: Spreading online disinformation as populist collective action. Comput. Hum. Behav. 2023, 146, 107784. [Google Scholar] [CrossRef]
- Ong, J.C.; Cabañes, J.V.A. When Disinformation Studies Meets Production Studies: Social Identities and Moral Justifications in the Political Trolling Industry. Int. J. Commun. 2019, 13, 20. [Google Scholar]
- Husandani, R.A.; Utari, P.; Rahmanto, A.N. Impact of social media disinformation explored in “The Social Dilemma”. J. ASPIKOM 2025, 9, 89–106. [Google Scholar] [CrossRef]
- Shu, K.; Bhattacharjee, A.; Alatawi, F.; Nazer, T.H.; Ding, K.; Karami, M.; Liu, H. Combating disinformation in a social media age. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2020, 10, e1385. [Google Scholar] [CrossRef]
- Treen, K.M.D.; Williams, H.T.P.; O’Neill, S.J. Online misinformation about climate change. Wiley Interdiscip. Rev. Clim. Chang. 2020, 11, e665. [Google Scholar] [CrossRef]
- Tomassi, A.; Falegnami, A.; Romano, E. Disinformation in the Digital Age: Climate Change, Media Dynamics, and Strategies for Resilience. Publications 2025, 13, 24. [Google Scholar] [CrossRef]
- Surjatmodjo, D.; Unde, A.A.; Cangara, H.; Sonni, A.F. Information Pandemic: A Critical Review of Disinformation Spread on Social Media and Its Implications for State Resilience. Soc. Sci. 2024, 13, 418. [Google Scholar] [CrossRef]
- Adebesin, F.; Smuts, H.; Mawela, T.; Maramba, G.; Hattingh, M. The Role of Social Media in Health Misinformation and Disinformation During the COVID-19 Pandemic: Bibliometric Analysis. JMIR Infodemiol. 2023, 3, e48620. [Google Scholar] [CrossRef]
- Iosifidis, P.; Nicoli, N. The battle to end fake news: A qualitative content analysis of Facebook announcements on how it combats disinformation. Int. Commun. Gaz. 2020, 82, 60–81. [Google Scholar] [CrossRef]
- Kaur, K.; Nair, S.; Kwok, Y.; Kajimoto, M.; Chua, Y.T.; Labiste, M.D.; Soon, C.; Jo, H.; Lin, L.; Le, T.T.; et al. Information Disorder in Asia and the Pacific: Overview of Misinformation Ecosystem in Australia, India, Indonesia, Japan, the Philippines, Singapore, South Korea, Taiwan, and Vietnam. Microecon. Asymmetric Priv. Inf. Ej. 2018. [Google Scholar] [CrossRef]
- Moran, R.; Nguyễn, S.; Bui, L. Sending News Back Home: Misinformation Lost in Transnational Social Networks. Proc. ACM Hum.-Comput. Interact. 2023, 7, 1–36. [Google Scholar] [CrossRef]
- Madrid-Morales, D.; Wasserman, H.; Gondwe, G.; Ndlovu, K.; Sikanku, E.; Tully, M.; Umejei, E.; Uzuegbunam, C. Comparative Approaches to Mis/Disinformation|Motivations for Sharing Misinformation: A Comparative Study in Six Sub-Saharan African Countries. Int. J. Commun. 2021, 15, 20. Available online: https://ijoc.org/index.php/ijoc/article/view/14801 (accessed on 20 October 2025).
- Vinhas, O.; Bastos, M. When Fact-Checking Is Not WEIRD: Negotiating Consensus Outside Western, Educated, Industrialized, Rich, and Democratic Countries. Int. J. Press. 2023, 30, 256–276. [Google Scholar] [CrossRef]
- Cazzamatta, R. Global misinformation trends: Commonalities and differences in topics, sources of falsehoods, and deception strategies across eight countries. New Media Soc. 2024, 27, 6334–6358. [Google Scholar] [CrossRef]
- Rodríguez-Virgili, J.; Serrano-Puche, J.; Fernández, C.B. Digital Disinformation and Preventive Actions: Perceptions of Users from Argentina, Chile, and Spain. Media Commun. 2021, 9, 323–337. [Google Scholar] [CrossRef]
- Mahl, D.; Zeng, J.; Schäfer, M.S.; Egert, F.A.; Oliveira, T. “We Follow the Disinformation”: Conceptualizing and Analyzing Fact-Checking Cultures Across Countries. Int. J. Press. 2024. [Google Scholar] [CrossRef]
- Arechar, A.A.; Allen, J.; Berinsky, A.J.; Cole, R.; Epstein, Z.; Garimella, K.; Gully, A.; Lu, J.G.; Ross, R.M.; Stagnaro, M.N.; et al. Understanding and combatting misinformation across 16 countries on six continents. Nat. Hum. Behav. 2023, 7, 1502–1513. [Google Scholar] [CrossRef]
- Pérez-Escolar, M.; Lilleker, D.; Tapia-Frade, A. A Systematic Literature Review of the Phenomenon of Disinformation and Misinformation. Media Commun. 2023, 11, 76–87. [Google Scholar] [CrossRef]
- Roozenbeek, J.; van der Linden, S.; Goldberg, B.; Rathje, S.; Lewandowsky, S. Psychological inoculation improves resilience against misinformation: Evidence from a large field experiment on YouTube. Sci. Adv. 2022, 8, eabl8203. [Google Scholar] [CrossRef]
- Ecker, U.K.H.; Lewandowsky, S.; Cook, J.; Schmid, P.; Fazio, L.K.; Brashier, N.; Kendeou, P.; Vraga, E.K.; Amazeen, M.A. The psychology of misinformation: A review of evidence regarding susceptibility to misinformation and beliefs in the effectiveness of interventions. Nat. Rev. Psychol. 2022, 1, 13–29. [Google Scholar] [CrossRef]
- Van Der Linden, S. Foolproof. Why We Fall for Misinformation and How to Build Immunity; 4th Estate: London, UK, 2023. [Google Scholar]
- Fazio, L.K. Knowledge does not protect against illusory truth. J. Exp. Psychol. Gen. 2015, 144, 993–1002. [Google Scholar] [CrossRef]
- Pennycook, G.; Epstein, Z.; Mosleh, M.; Arechar, A.A.; Eckles, D.; Rand, D.G. Shifting attention to accuracy can reduce misinformation online. Nature 2021, 592, 590–595. [Google Scholar] [CrossRef]
- Brady, W.J.; Wills, J.A.; Jost, J.T.; Tucker, J.A.; Van Bavel, J.J. Emotion shapes the diffusion of moralized content in social networks. Proc. Natl. Acad. Sci. USA 2017, 114, 7313–7318. [Google Scholar] [CrossRef]
- Chuai, Y.; Zhao, J. Anger can make fake news viral online. Front. Phys. 2022, 10, 970174. [Google Scholar] [CrossRef]
- O’Connor, C.; Weatherall, J.O. The Misinformation Age: How False Beliefs Spread; Yale University Press: New Haven, CT, USA, 2019. [Google Scholar]
- Metzger, M.J.; Flanagin, A.J. Credibility and trust of information in online environments: The use of cognitive heuristics. J. Pragmat. 2013, 59, 210–220. [Google Scholar] [CrossRef]
- Sundar, S.S. The MAIN model: A heuristic approach to understanding technology effects on credibility. In Digital Media, Youth, and Credibility; Metzger, M.J., Flanagin, A.J., Eds.; MIT Press: Cambridge, MA, USA, 2008; pp. 73–100. [Google Scholar]
- Vosoughi, S.; Roy, D.; Aral, S. The spread of true and false news online. Science 2018, 359, 1146–1151. [Google Scholar] [CrossRef]
- van Laer, T.; de Ruyter, K.; Visconti, L.M.; Wetzels, M. The extended transportation-imagery model: A meta-analysis of the antecedents and consequences of consumers’ narrative transportation. J. Consum. Res. 2014, 40, 797–817. [Google Scholar] [CrossRef]
- Brooks, P. Seduced by Story: The Use and Abuse of Narrative; New York Review Books: New York, NY, USA, 2022. [Google Scholar]
- Gottschall, J. The Story Paradox: How Our Love of Storytelling Builds Societies and Tears Them Down; Basic Books: New York, NY, USA, 2021. [Google Scholar]
- von Holstein, E.S.; Nowak, A.; Napiorkowski, M.; Perrot, S. The Power of Narratives: A Strategic Approach to Combatting Disinformation in Europe—Key Findings from the First European Narrative Observatory (NODES). Re-Imagine Europa. 2024. Available online: https://nodes.eu/wp-content/uploads/2024/11/NODES_WhitePaper_The-Power-of-Narratives.pdf (accessed on 20 October 2025).
- Chen, S.; Xiao, L.; Kumar, A. Spread of misinformation on social media: What contributes to it and how to combat it. Comput. Hum. Behav. 2023, 141, 107643. [Google Scholar] [CrossRef]
- Sanfilippo, M.R.; Zhu, X.A.; Yang, S. Sociotechnical governance of misinformation: An ARIST paper. J. Assoc. Inf. Sci. Technol. 2025, 76, 289–325. [Google Scholar] [CrossRef]
- Lewandowsky, S.; Cook, J.; Ecker, U.; Albarracín, D.; Amazeen, M.A.; Kendeou, P.; Lombardi, D.; Newman, E.J.; Pennycook, G.; Porter, E.; et al. The Debunking Handbook 2020. 2020. Available online: https://climatecommunication.gmu.edu/all/the-debunking-handbook-2020/ (accessed on 20 October 2025).
- Walter, N.; Cohen, J.; Holbert, R.L.; Morag, Y. Fact-Checking: A Meta-Analysis of What Works and for Whom. Polit. Commun. 2019, 37, 350–375. [Google Scholar] [CrossRef]
- Walter, N.; Brooks, J.J.; Saucier, C.J.; Suresh, S. Evaluating the Impact of Attempts to Correct Health Misinformation on Social Media: A Meta-Analysis. Health Commun. 2020, 36, 1776–1784. [Google Scholar] [CrossRef]
- Unkelbach, C.; Rom, S.C. A referential theory of the repetition-induced truth effect. Cognition 2017, 160, 110–126. [Google Scholar] [CrossRef]
- Bruno, N.; De Santis, A.; Moriggi, S. Teachers competencies in evaluating digital sources and tackling disinformation: Implications for media literacy education. J. E-Learn. Knowl. Soc. 2025, 21, 85–99. [Google Scholar] [CrossRef]
- Hoes, E.; Aitken, B.; Zhang, J.; Gackowski, T.; Wojcieszak, M. Prominent misinformation interventions reduce misperceptions but increase scepticism. Nat. Hum. Behav. 2024, 8, 1545–1553. [Google Scholar] [CrossRef]
- Young, D.G.; Jamieson, K.H.; Poulsen, S.; Goldring, A. Fact-checking effectiveness as a function of format and tone: Evaluating FactCheck.org and FlackCheck.org. J. Mass Commun. Q. 2021, 98, 205–220. [Google Scholar] [CrossRef]
- Hebbar, N.; Wolf, C. Determining Trustworthiness Through Provenance and Context. Google, Policy Paper. 2024. Available online: https://static.googleusercontent.com/media/publicpolicy.google/it//resources/determining_trustworthiness_en.pdf (accessed on 20 October 2025).
- European Commission. Strengthened Code of Practice on Disinformation. 2022. Available online: https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation (accessed on 20 October 2025).
- European Union. Digital Services Act (Regulation (EU) 2022/2065) and Implementing Guidance. 2022. Available online: https://eur-lex.europa.eu/eli/reg/2022/2065/oj/eng (accessed on 20 October 2025).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).