Search Results (322)

Search Parameters:
Keywords = disinformation

22 pages, 963 KB  
Article
Poor Journalism as a Distinct Phenomenon from Disinformation: Definition and Taxonomy
by Ernesto García-Ojeda and Marta Saavedra
Journal. Media 2026, 7(2), 87; https://doi.org/10.3390/journalmedia7020087 (registering DOI) - 22 Apr 2026
Abstract
Disinformation has become one of the main contemporary social and political concerns. However, both public and academic debates continue to exhibit an epistemological confusion between disinformation—characterized by a deliberate intention to deceive—and the errors or deficiencies arising from journalistic practice. The aim of this study is to conceptually define these errors under the phenomenon of poor journalism and to propose a taxonomy that allows it to be examined as distinct from disinformation. To this end, a qualitative integrative systematic review was conducted, based on the inductive analysis of peer-reviewed academic publications in Spanish and English, indexed in Scopus, Web of Science, and EBSCO Host. The analysis identifies two main analytical dimensions: deficient practices and structural causes. The findings show that poor journalism does not stem from a deliberate intention to deceive, but rather from structural factors, commercial logics, and corporate interests within the media ecosystem. This phenomenon is intensified by a circular logic in which the same causes that generate it also reinforce it. This study helps to clarify a relevant conceptual gap by offering a definition and a taxonomy that may be used in future research and media literacy initiatives. Full article
(This article belongs to the Special Issue Reimagining Journalism in the Era of Digital Innovation)

19 pages, 613 KB  
Article
Spanish Investigative Journalism in the Face of Verification and Information Disorders
by María Alcalá-Santaella, Roberto Gelado Marcos and Fernando Bonete Vizcaíno
Journal. Media 2026, 7(2), 84; https://doi.org/10.3390/journalmedia7020084 (registering DOI) - 21 Apr 2026
Abstract
This research focuses on the perception that Spanish investigative journalists have of disinformation, exploring its impact on their professional routines. It also assesses the methods deployed by these professionals to mitigate its spread. To this end, a quantitative methodology based on the survey technique was used, and a structured interview comprising 18 questions was designed. This interview combined 7 closed questions with a five-point Likert-type scale structure and 11 open-ended questions to ascertain the perceptions of respondents more accurately. The survey involved 28 journalists from the Association of Investigative Journalists (API, its Spanish acronym) and various relevant media outlets. The results underline the rigor and independence required in investigative journalism to combat disinformation while drawing attention to the need to train and adapt the practice of journalism through new formats. The tension between the potential of technology and uneasiness about its reliability—an ambivalence that is on the rise with the emergence of AI—is also emphasized alongside the importance of ethics and transparency to restore the credibility of the media. Full article
(This article belongs to the Special Issue Reimagining Journalism in the Era of Digital Innovation)

13 pages, 844 KB  
Viewpoint
Disinformation, Psychosocial Vulnerability, and Media Trust in the Digital Era: Implications for Health Behaviour and Societal Resilience
by João Miguel Alves Ferreira, Vaitsa Giannouli and Sergii Tukaiev
Healthcare 2026, 14(8), 1089; https://doi.org/10.3390/healthcare14081089 - 20 Apr 2026
Abstract
Disinformation, amplified by digital platforms and algorithmic distribution systems, represents a growing challenge for media trust, public health communication, and societal stability. This narrative literature review examines disinformation through an integrative psychosocial perspective, focusing on how patterns of exposure interact with individual vulnerability factors—including education, political beliefs, social identity, personality traits, and emotional responses to uncertainty—to influence the processing and acceptance of misleading information. The review synthesises interdisciplinary evidence on how algorithmic amplification and emotionally salient content increase susceptibility to disinformation and shape risk perception, health-related decision-making, and preventive behaviours. Findings indicate that repeated exposure to false or misleading information reinforces perceived credibility through familiarity effects, contributes to declining trust in institutional sources, and intensifies social and political polarisation. Disinformation is therefore conceptualised not only as an informational problem but also as a psychosocial process affecting emotional regulation, cognitive evaluation, and collective responses to crises, particularly in public health contexts. The analysis further highlights a recursive feedback loop in which reduced media trust increases vulnerability to subsequent disinformation, with broader implications for democratic participation and social cohesion. Mitigation strategies discussed include media literacy initiatives, critical thinking education, platform governance, regulatory approaches, and interventions targeting psychosocial drivers of susceptibility. Full article
(This article belongs to the Section Clinical Care)

32 pages, 1560 KB  
Article
Examining Narrative Patterns in Disinformation and Trustworthy News: A Comparative Analysis
by Justina Mandravickaitė and Tomas Krilavičius
Soc. Sci. 2026, 15(4), 255; https://doi.org/10.3390/socsci15040255 - 17 Apr 2026
Abstract
In this study, we examined how disinformation and trustworthy news differ in their narrative construction across nine theoretically motivated dimensions. We address the following research question: how do disinformation and trustworthy news differ in narrative organisation and epistemic grounding? We analysed 610 English-language news articles (308 pro-Kremlin disinformation and 302 trustworthy articles) covering selected international events from 2015 to 2023, using data derived from the EUvsDisinfo dataset. Narrative elements were extracted using a hybrid pipeline combining large language models and knowledge graphs, resulting in article-level representations for comparative analysis. Ordinal scores (1–5) were assigned for emotional intensity, cultural complexity, conspiracist structure, source diversity, crisis intensity, evidence support, media control, solutions orientation and memory work. Non-parametric comparisons showed significant differences in eight of these nine dimensions. Disinformation articles revealed stronger conspiracist structuring and greater meta-media hostility, as well as significantly lower source diversity, evidence support, cultural complexity and weaker memory work. Emotional intensity did not differ reliably across disinformation and trustworthy news. A simple additive NarrativeRisk score, which we designed as a transparent and interpretable summary measure, showed between-group differences in both parametric and non-parametric tests. As a univariate discrimination indicator, NarrativeRisk achieved ROC AUC ≈ 0.84. Cluster analysis identified three recurrent narrative profiles, including one dominated by disinformation, one by trustworthy news and one mixed profile. These findings indicate that disinformation is distinguished not only by factual unreliability but also by different patterns in narrative organisation. Full article
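The additive summary measure described in the abstract above can be illustrated with a short sketch: ordinal 1–5 ratings on several narrative dimensions are summed into a single risk score, and its discrimination is checked with a rank-based ROC AUC. The dimension names used here, the choice of which dimensions are reverse-coded, and the toy ratings are illustrative assumptions, not the authors' actual coding scheme; only the general idea of an additive score over ordinal dimensions comes from the abstract.

```python
# Hypothetical additive "NarrativeRisk"-style score. Risk-increasing
# dimensions are summed directly; protective dimensions (e.g. evidence
# support) are reverse-coded as (6 - rating) so that higher totals
# always mean higher narrative risk. Dimension lists are assumptions.
RISK_DIMENSIONS = ["conspiracist_structure", "media_control", "crisis_intensity"]
PROTECTIVE_DIMENSIONS = ["source_diversity", "evidence_support", "cultural_complexity"]

def narrative_risk(ratings: dict) -> int:
    """Additive score over ordinal 1-5 ratings; protective dims reverse-coded."""
    score = sum(ratings[d] for d in RISK_DIMENSIONS)
    score += sum(6 - ratings[d] for d in PROTECTIVE_DIMENSIONS)
    return score

def roc_auc(pos_scores, neg_scores):
    """Rank-based AUC: probability a positive outranks a negative (ties count 0.5)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Toy ratings for one disinformation-like and one trustworthy-like article.
disinfo = {"conspiracist_structure": 5, "media_control": 4, "crisis_intensity": 4,
           "source_diversity": 2, "evidence_support": 1, "cultural_complexity": 2}
trusted = {"conspiracist_structure": 1, "media_control": 2, "crisis_intensity": 3,
           "source_diversity": 4, "evidence_support": 5, "cultural_complexity": 4}
```

With these toy ratings the disinformation-like article scores 26 and the trustworthy-like one 11, so the univariate AUC on this two-item sample is 1.0; the study's reported AUC of about 0.84 reflects the same kind of rank comparison over its full 610-article sample.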

30 pages, 521 KB  
Article
Psychosocial and Social Security Risks Linked to Vaccine Misinformation in Romania: Implications for Vaccination Acceptance and Public Policy
by Flavius Cristian Mărcău, Cătălin Peptan, Olivia-Roxana Alecsoiu, Marian Emanuel Cojoaca, Alina Magdalena Musetescu, Genu Alexandru Căruntu, Alina Georgiana Holt, Lorena Duduială Popescu, Costina Sfinteș and Victor Gheorman
Behav. Sci. 2026, 16(4), 595; https://doi.org/10.3390/bs16040595 - 16 Apr 2026
Abstract
This study examines the influence of misinformation on vaccination decision-making and the perception of social security in Romania in the context of potential future pandemics. Using a survey-based design, data were collected through an online questionnaire administered to a sample of 1005 respondents. The analysis employed descriptive and inferential statistical methods, including chi-square tests, ANOVA, Kruskal–Wallis tests, principal component analysis (PCA), K-means clustering, random forest models, and Spearman correlations. The results indicate statistically significant associations between belief in misinformation and vaccination attitudes (p < 0.001), with moderate effect sizes. Effect size estimates indicated small-to-moderate associations (e.g., Cramér’s V up to 0.371 for key demographic differences, and Kendall’s W = 0.273 for the increase in willingness across the three severity scenarios). Individuals with higher levels of education, urban residence, and younger age were more likely to report higher willingness to vaccinate, whereas respondents from rural areas and those with lower educational levels showed greater susceptibility to misinformation. In addition, risk perception was significantly associated with vaccination intention, which increased as the severity of hypothetical pandemic scenarios intensified. Predictive modeling identified specific misinformation beliefs—particularly those related to vaccine safety and natural immunity—as key factors associated with vaccination decisions. These findings suggest that misinformation is strongly associated with both individual vaccination behavior and broader perceptions of social security. Full article
21 pages, 4306 KB  
Systematic Review
Artificial Intelligence and Disinformation: A State-of-the-Art Review Through a Systematized Literature Review
by José Casás García, Alba Silva Rodríguez and Ana-Isabel Rodríguez-Vázquez
Soc. Sci. 2026, 15(4), 247; https://doi.org/10.3390/socsci15040247 - 13 Apr 2026
Abstract
The impact of artificial intelligence (AI) extends across virtually all sectors of society, including communication. One of the areas in which its influence is expected to be most significant is disinformation, arguably one of the greatest challenges faced by networked societies over the past decade. Through a systematized literature review with a scoping orientation, this study examines how research on artificial intelligence and disinformation has evolved over the last five years and identifies the main thematic strands structuring this field. The analysis of 62 articles reveals a predominance of qualitative approaches (53.3%) and a technocentric perspective structured around five main research lines: (1) AI as a source of disinformation, (2) AI as a tool to combat it, (3) regulatory frameworks, (4) deepfakes, and (5) algorithmic literacy. These findings highlight both the consolidation of the field and the need to advance toward more interdisciplinary and transfer-oriented research. Full article
(This article belongs to the Special Issue Disinformation in the Age of Artificial Intelligence)

36 pages, 8897 KB  
Article
Evolutionary Game Analysis of AI-Generated Disinformation Governance on UGC Platforms Based on Prospect Theory
by Licai Lei, Yanyan Wu and Shang Gao
Systems 2026, 14(4), 416; https://doi.org/10.3390/systems14040416 - 9 Apr 2026
Abstract
While Generative Artificial Intelligence technology empowers content production on user-generated content platforms, it also gives rise to novel risks of disinformation dissemination. The effective governance of these risks is critical to ensuring the cybersecurity of the online ecosystem and maintaining long-term social stability. To address the collaborative governance dilemma, this study constructs a tripartite “platform-user-government” evolutionary game model based on prospect theory. It explores the evolutionarily stable strategies and stability conditions of each actor, supplemented by numerical simulations and practical case validation. The results indicate that: (1) under specific conditions, the system can converge to an ideal equilibrium {active platform governance, engaged user participation, stringent government supervision}; (2) the government’s reward–penalty mechanisms can drive the system towards this ideal equilibrium; (3) users’ digital literacy is a key variable influencing the system’s evolutionary path; (4) both the risk preference coefficient (β) and loss aversion coefficient (λ) from prospect theory have a significant moderating effect on the system’s evolution. Finally, targeted recommendations are proposed for the three aforementioned stakeholders to accelerate the improvement of China’s collaborative governance of the content ecosystem. Full article
(This article belongs to the Special Issue Advancing Open Innovation in the Age of AI and Digital Transformation)

14 pages, 249 KB  
Article
Perceptions of Pre-Service Teachers in Early Childhood and Primary Education on GenAI-Generated Deepfakes
by José María Campillo-Ferrer and Pedro Miralles-Sánchez
Educ. Sci. 2026, 16(4), 575; https://doi.org/10.3390/educsci16040575 - 4 Apr 2026
Abstract
This study explored pre-service teachers’ views on the use of generative artificial intelligence (Gen AI) in the production of misinformation, addressing the potential challenges posed by deepfakes generated by these online resources. A quantitative approach was used; 133 pre-service teachers participated in the study, all of whom were enrolled in primary education degree programmes in the Region of Murcia, Spain. The results indicated a clear awareness of the risks posed by these digital tools in the generation of deepfakes. Respondents were aware of the potential threats such content may pose on the internet, threats that can be further exacerbated when deepfakes are disseminated in educational environments. Recognising the relevance of pre-service teachers’ concerns can help educators and educational administrations take steps to limit Gen AI in accordance with ethical parameters and thus reduce the spread of misinformation. In social science teaching and learning, further research is needed to equip students with the essential skills to distinguish between accurate and inaccurate information. For all these reasons, it seems essential to strengthen research in media literacy education so that identification skills can be applied in assessment processes. These improvements can take the form of evidence-based approaches, such as AI literacy programmes or media literacy modules, to facilitate student learning and ensure better quality education. Full article
10 pages, 375 KB  
Entry
Deepfakes
by Sean William Maher
Encyclopedia 2026, 6(4), 80; https://doi.org/10.3390/encyclopedia6040080 - 2 Apr 2026
Definition
Deepfakes have emerged as one of the most significant developments in contemporary computational media, representing a sophisticated convergence of machine learning, computer vision, and audiovisual synthesis. Enabled primarily by deep neural networks such as generative adversarial networks (GANs) and transformer-based architectures, Deepfakes are realistic video fabrications, produced through sound and image alteration and substitution, that synthesise human likeness, speech, and behaviour. Deepfakes function simultaneously as creative tools, political instruments, security risks, and epistemic disruptors. They have generated widespread scholarly, regulatory, and public concern by contributing to the reshaping of visual communication and posing significant challenges to established norms of authenticity. This entry defines Deepfakes, outlines their technological foundations, synthesises insights from current research, and assesses implications for media industries, journalism, documentary, disinformation, governance, and digital culture. Full article
(This article belongs to the Section Social Sciences)

21 pages, 333 KB  
Article
Artificial Truth: Algorithmic Power, Epistemic Authority, and the Crisis of Democratic Knowledge
by Rosario Palese
Societies 2026, 16(3), 102; https://doi.org/10.3390/soc16030102 - 23 Mar 2026
Abstract
This article examines how artificial intelligence and algorithmic systems are reconfiguring truth regimes in digital societies, introducing the concept of “Artificial Truth” to describe an emerging form of epistemic governance where knowledge production and validation become infrastructural functions of sociotechnical systems. The study develops an integrated theoretical framework combining Foucault’s notion of truth regimes, Bourdieu’s theory of symbolic capital and fields, and Actor-Network Theory’s constructivist approach. Through conceptual analysis, the article investigates how algorithmic recommendation systems, generative AI, and automated fact-checking operate as epistemic devices that actively shape what is recognized as credible, authoritative, and true in public discourse. The analysis reveals three fundamental transformations: (1) the restructuring of trust economies, with epistemic authority shifting from institutional expertise to platform-native capital based on engagement metrics and affective proximity; (2) the emergence of generative AI as an epistemic actor producing “synthetic truth” through linguistic fluency rather than propositional understanding; (3) the institutionalization of computational veridiction in algorithmic fact-checking systems that translate situated epistemic judgments into probabilistic classifications presented as neutral. These dynamics configure a regime where truth is evaluated less by correspondence with reality and more by computational plausibility and platform integration. The article’s primary contribution lies in providing a unified theoretical framework for understanding contemporary transformations of epistemic authority, moving beyond disinformation studies to analyze AI as an epistemic actor. 
By integrating classical sociological perspectives with Science and Technology Studies, it conceptualizes algorithmic systems as epistemic infrastructures that embody specific power relations, restructure symbolic capital economies, and distribute epistemic authority asymmetrically, with profound implications for democratic knowledge, citizen epistemic agency, and public sphere pluralism. Full article
31 pages, 1934 KB  
Review
Artificial Intelligence for Detecting Electoral Disinformation on Social Media: Models, Datasets, and Evaluation
by Félix Díaz, Nhell Cerna, Rafael Liza and Bryan Motta
Information 2026, 17(3), 292; https://doi.org/10.3390/info17030292 - 17 Mar 2026
Abstract
During elections, information manipulation on social media has accelerated the use of artificial intelligence, yet the evidence is difficult to interpret without an integrated view of methods, data, and evaluation. We mapped 557 English-language journal articles from Scopus and Web of Science, combining performance indicators, science mapping, and a focused full-text synthesis of highly cited papers. The literature grows sharply after 2019, peaks in 2025, and shows geographically uneven production, with collaboration structured around a small set of hubs. The thematic structure suggests that, during the pandemic era, infodemic-related research served as a catalyst, intensifying scientific attention to fake news and disinformation and expanding the associated detection and monitoring agendas. In addition, socio-political harm constructs such as hate speech, extremism, and polarization appear as recurrent and structurally central targets, highlighting that election-relevant work often extends beyond veracity assessment toward monitoring discourse risks. Blockchain also emerges as a novel and adjacent integrity theme, aligned with authenticity and provenance-oriented mitigation rather than mainstream detection pipelines. AI for electoral disinformation is not reducible to veracity classification, as influential studies also target automation and coordinated behavior, verification support, diffusion analysis, and estimation frameworks that focus on exposure and impact. Evaluation remains heterogeneous and is often shaped by benchmark settings, making high accuracy values hard to compare and potentially misleading when labeling quality, topic leakage, or context shift are not characterized. 
Overall, the findings motivate evaluation protocols that align operational objectives with modeling roles and explicitly address robustness to temporal and platform changes, asymmetric error costs during election windows, and representativeness across electoral contexts and languages, while also guiding future work on emerging integrity challenges and governance-relevant deployment settings. Full article
(This article belongs to the Section Artificial Intelligence)

23 pages, 1690 KB  
Article
“Virality Alert”: The Construction, Imagination, and Algorithmic Falsification of a Local Disaster
by Giacomo Buoncompagni
Journal. Media 2026, 7(1), 58; https://doi.org/10.3390/journalmedia7010058 - 17 Mar 2026
Abstract
This paper investigates the strategies employed by local journalists to verify AI-generated and manipulated imagery during the 2026 Romagna earthquake. Drawing on a qualitative methodology, this study identifies a multi-layered process of “situated verification.” The findings reveal that verification efficacy is predicated on territorial familiarity, professional networks, and direct institutional triangulation, which collectively compensate for technological and resource constraints. Local journalists emerge as epistemic mediators who stabilize the information ecosystem, mitigate public anxiety, and curb the spread of disinformation. Furthermore, institutional interventions, such as police-led fact-checking, function as both pragmatic verification tools and symbolic signals that promote responsible information sharing. By highlighting how verification is deeply rooted in temporality, social embeddedness, and local expertise, this research underscores the critical role of proximity journalism in crisis communication. The study contributes to the fields of visual epistemology and media literacy, demonstrating that relational and context-aware practices are essential for maintaining information integrity in an era of AI-driven visual disinformation. Full article

25 pages, 321 KB  
Article
Fact-Checking Platforms in the Middle East: A Comparative Study in the Age of Artificial Intelligence
by Hala Alshwayyat and Jorge Vázquez-Herrero
Soc. Sci. 2026, 15(3), 185; https://doi.org/10.3390/socsci15030185 - 13 Mar 2026
Abstract
Information disorders are a significant global issue but are particularly relevant and underexplored in the Middle East, where political instability contributes to their spread. Despite the critical role fact-checking platforms play in combating information disorders, we need to learn more about how these platforms operate in such a complicated regional context. This study analyzes three fact-checking platforms: Akeed (Jordan), Teyit (Turkey), and Factnameh (Iran) to better understand the differences in how they approach fact-checking, the strategies they use, and the obstacles they face, including social and political conditions but also regarding the impact of AI. Using a multimethod qualitative approach based on document analysis and interviews, the study highlights recurring issues such as censorship, limited access to data, and audience engagement. The findings reveal how these platforms address these challenges and provide valuable insights into effective methodologies for fighting mis-/disinformation. The results offer broader implications for enhancing media literacy, strengthening the role of fact-checking platforms in the Middle East, and providing recommendations for best practices that can be applied regionally. Full article
(This article belongs to the Special Issue Disinformation in the Age of Artificial Intelligence)
23 pages, 362 KB  
Article
Conceptualising Digital Democracy—From Technocracy and Populism to a New Concept of Democratic Authority and Participation?
by Oliver Fernando Hidalgo
Soc. Sci. 2026, 15(3), 175; https://doi.org/10.3390/socsci15030175 - 9 Mar 2026
Abstract
According to the rather pessimistic diagnoses dominating contemporary political research, the digitisation of information and the digital transformation of modern society tend toward both a new form of (post-democratic) technocracy and a resurgence of populist democracy. These two main perils posed by the digital era can be confirmed by an in-depth theoretical approach showing that the practice of digital democracy generates several threats that could eventually outweigh all the options digital technologies offer in terms of facilitating democratic participation and deliberation. However, the focus on the existing risks of digital democracy must not neglect its inherent opportunities. Hence, this article demonstrates how the corresponding debate benefits from an overarching theoretical foundation contributing equally to a systematic and well-balanced analysis. By applying the theory of democratic antinomies, it becomes possible to manage the difficult traverse between the requested openness to new technological developments and the indispensable defence of classic democratic principles. On this path, an adequate reflection on the conceptual change to which the notions of authority and participation are exposed in the age of digitalisation is crucial. Full article
(This article belongs to the Special Issue Technology, Digital Transformation and Society)
13 pages, 481 KB  
Article
A Conceptual Framework for a Morphological Scenario Library and Playbook Mapping in Cognitive Warfare Defense
by Dojin Ryu
J. Cybersecur. Priv. 2026, 6(2), 46; https://doi.org/10.3390/jcp6020046 - 3 Mar 2026
Abstract
Cognitive warfare is a hybrid threat that combines information manipulation with psychological influence, often amplified by digital platforms and synthetic media. Conventional cybersecurity tooling is optimized for technical intrusion and offers limited support for anticipating and responding to influence operations. This paper presents a conceptual framework that structures cognitive warfare threats with General Morphological Analysis (GMA) and links plausible configurations to indicator profiles and response playbooks. We first conduct a PRISMA-informed literature review (2018–2025) to derive a five-dimensional taxonomy (actor, tactic, medium, target, objective). We then apply cross-consistency assessment to remove implausible state-pair combinations and obtain a reduced library of internally consistent scenarios. To support analyst-guided triage, we outline an AI-enabled workflow that maps observable signals to taxonomy states, matches events to scenarios, and prioritizes responses via an auditable, policy-set risk score. Finally, we illustrate the framework on three publicly documented cases and show how each case maps to scenario vectors, indicators, and playbooks. No end-to-end system implementation or performance metrics are reported; the contribution is the structured scenario library and the traceable mapping from observations to response guidance. Full article
(This article belongs to the Special Issue Building Community of Good Practice in Cybersecurity)
