Systematic Review

Artificial Intelligence and Disinformation: A State-of-the-Art Review Through a Systematized Literature Review

by José Casás García *, Alba Silva Rodríguez and Ana-Isabel Rodríguez-Vázquez
Department of Communication Sciences, Faculty of Communication Science, University of Santiago de Compostela, 15782 Santiago de Compostela, Spain
* Author to whom correspondence should be addressed.
Soc. Sci. 2026, 15(4), 247; https://doi.org/10.3390/socsci15040247
Submission received: 26 January 2026 / Revised: 9 March 2026 / Accepted: 8 April 2026 / Published: 13 April 2026
(This article belongs to the Special Issue Disinformation in the Age of Artificial Intelligence)

Abstract

The impact of artificial intelligence (AI) extends across virtually all sectors of society, including communication. One of the areas in which its influence is expected to be most significant is disinformation, arguably one of the greatest challenges faced by networked societies over the past decade. Through a systematized literature review with a scoping orientation, this study examines how research on artificial intelligence and disinformation has evolved over the last five years and identifies the main thematic strands structuring this field. The analysis of 62 articles reveals a predominance of qualitative approaches (53.3%) and a technocentric perspective structured around five main research lines: (1) AI as a source of disinformation, (2) AI as a tool to combat it, (3) regulatory frameworks, (4) deepfakes, and (5) algorithmic literacy. These findings highlight both the consolidation of the field and the need to advance toward more interdisciplinary and transfer-oriented research.

1. Introduction

A decade has now passed since the global shock triggered by the impact of disinformation on electoral processes, particularly during the United Kingdom’s Brexit referendum and the campaign that brought Donald Trump to the Oval Office of the White House for the first time. Both events have become key milestones in the development and evolution of one of the most serious threats facing contemporary democracies: disinformation (Casero-Ripollés 2018; Guallar et al. 2020; Adams et al. 2023). From the public sphere—especially within the European and media contexts—a range of strategies have been designed to counter this phenomenon, most notably fact-checking initiatives and media literacy programs (Graves 2016; Magallón Rosa 2019).
Artificial intelligence (AI) is commonly defined as the ability of systems to autonomously interpret and learn from external data in order to achieve specific objectives through flexible adaptive processes (Kaplan and Haenlein 2019). In recent years, AI has experienced remarkable development, leading to a progressive and widespread expansion of its fields of application (Adetayo 2023). The capacity of artificial intelligence to transcend certain human limitations—computational, cognitive, and even creative—has opened up new areas of application in sectors such as education and marketing, healthcare, finance, and manufacturing, with significant effects on productivity and performance (Dwivedi et al. 2021).
While artificial intelligence promises substantial benefits for users, organizations, and economies, it is also expected to replace millions of existing jobs and to cause a significant reduction in employment in certain occupations (Ng et al. 2021). This represents one of the major challenges associated with AI, though by no means the only one.
The expansion of generative artificial intelligence, particularly large multimodal language models such as ChatGPT or Gemini, has raised alarm due to their potential to exacerbate problems related to disinformation. At the same time, these technologies have begun to play a relevant role as part of the solution to this challenge (Vizoso et al. 2021). Several studies have already examined their implications for strengthening journalistic practices (Pavlik 2023; Gutiérrez-Caneda et al. 2023), especially in areas that are more readily “automatable”. Nevertheless, information professionals continue to express significant reservations regarding their integration into everyday newsroom routines. From a legal perspective, the adoption of the European Union’s AI Act constitutes one of the most important milestones in this domain, and the role of AI in disinformation is among the central concerns addressed by this legislation (Regulation-EU-2024/1689—EN—EUR-Lex n.d.).
This contribution adopts a systematized literature review approach with a scoping and mapping orientation. Rather than aiming at an exhaustive systematic synthesis or causal inference, its purpose is to identify, classify, and structure the main research trends, methodological approaches, and thematic areas in the field of artificial intelligence and disinformation.
The study makes three main contributions to the literature. First, it provides a systematized and up-to-date review of research on artificial intelligence and disinformation between 2020 and 2025 within the social sciences. Second, it combines bibliometric analysis with qualitative thematic coding to identify structural patterns in the field. Third, it proposes a coherent framework of five major research lines that helps to organize existing knowledge and to guide future research, particularly in relation to knowledge transfer toward media, institutions, and society.
Accordingly, the research questions guiding this study are as follows:
  • RQ1. Which authors have published most extensively on artificial intelligence and disinformation?
  • RQ2. Which academic journals publish most frequently on this topic?
  • RQ3. Which keywords are most commonly used in this field?
  • RQ4. How has the distribution of keywords evolved over time?
  • RQ5. Which methodological approaches are most frequently employed?
  • RQ6. What are the main research perspectives on artificial intelligence and disinformation, and which potential future research directions can be identified?

2. Materials and Methods

This study is based on a systematized literature review with a scoping orientation (Codina 2017), aimed at mapping and describing the main lines of research on artificial intelligence and disinformation within the field of the social sciences. The research design follows the guidelines of the PRISMA Extension for Scoping Reviews (PRISMA-ScR) (Tricco et al. 2018; Sánchez-Serrano et al. 2022), ensuring a systematic, transparent, and replicable process.
The bibliographic search was conducted exclusively in the Scopus database, selected due to its multidisciplinary coverage and its reliability in indexing peer-reviewed academic journals.
The following search string was applied in the TITLE-ABS-KEY field:
(TITLE-ABS-KEY (“AI” OR “artificial intelligence”)) AND (TITLE-ABS-KEY (“disinformation” OR “misinformation”)) AND PUBYEAR > 2019 AND PUBYEAR < 2027 AND (LIMIT-TO (DOCTYPE, “ar”)) AND (LIMIT-TO (SUBJAREA, “SOCI”)).
Articles explicitly addressing the relationship between artificial intelligence and disinformation were identified from both empirical and analytical perspectives. The search was restricted to the period between 2020 and 2025, in order to capture the most recent scientific production. In addition, results were limited to the document type “article”, belonging to the Social Sciences subject area (SOCI), and published in English, Spanish, or Portuguese. The search was completed in October 2025, and the retrieved records were exported in CSV format for data cleaning, coding, and subsequent analysis.
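The cleaning step applied to the exported records can be illustrated with a minimal sketch. This is not the authors' actual procedure (which is not specified beyond "data cleaning, coding, and subsequent analysis"); the column names (`DOI`, `Title`, `Year`, `Language of Original Document`) follow Scopus's usual CSV export format but are assumptions here.

```python
import csv

# Languages retained by the review's eligibility criteria.
KEEP_LANGUAGES = {"English", "Spanish", "Portuguese"}

def clean_scopus_export(csv_lines, start=2020, end=2025):
    """Deduplicate Scopus CSV records and apply the year and language filters.

    csv_lines: an iterable of CSV lines (e.g., an open file or list of strings).
    Returns the surviving records as dictionaries.
    """
    seen, kept = set(), []
    for row in csv.DictReader(csv_lines):
        # Use the DOI as the dedup key, falling back to the title.
        key = (row.get("DOI") or row.get("Title", "")).strip().lower()
        if not key or key in seen:
            continue  # drop duplicates and rows without an identifier
        seen.add(key)
        if start <= int(row["Year"]) <= end and \
           row["Language of Original Document"] in KEEP_LANGUAGES:
            kept.append(row)
    return kept
```

A subsequent title/abstract screening and full-text reading, as described below, would then be applied to the surviving records.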
Inclusion and exclusion criteria were established to ensure the quality and relevance of the analyzed corpus. The inclusion criteria comprised peer-reviewed, open-access articles published between 2020 and 2025 that presented empirical studies (quantitative, qualitative, or mixed-methods approaches) and were indexed in Q1 or Q2 journals according to the SCImago Journal Rank—SJR (Scimago Journal & Country Rank n.d.). In addition, a minimum threshold of ten citations was required as an indicator of scientific impact and relevance. Only studies written in English, Spanish, or Portuguese and explicitly focused on the relationship between artificial intelligence and disinformation within social science contexts were considered.
The exclusion criteria included theoretical essays, opinion pieces, or texts lacking an empirical basis, as well as theses, preprints, or documents not subject to peer review. Studies that included the keywords “artificial intelligence” or “disinformation” but did not maintain a substantive relationship with the object of study were also excluded, as were publications written in languages other than English, Spanish, or Portuguese. This linguistic criterion was established to ensure analytical consistency and reliability, given the research team’s academic proficiency in these languages. English was included due to its predominance in international scientific communication, while Spanish and Portuguese were selected because of their relevance to the geographical and epistemological scope of the study.
The study selection process was carried out in three consecutive phases. First, duplicate records were removed and a preliminary review of metadata was conducted to ensure the integrity of the initial dataset. In the second phase, title and abstract screening was performed to exclude false positives and documents not directly related to the research topic. Finally, in the third phase, the full texts of potentially eligible studies were read in order to verify compliance with the predefined inclusion and exclusion criteria.
The thematic categories were developed inductively through iterative reading and comparative analysis of the selected articles. No formal codebook was employed; instead, categories emerged progressively and were refined through discussion among the authors. Each study was assigned to a primary thematic area based on its dominant analytical focus. This procedure is consistent with the mapping and exploratory orientation of the review.
The process was conducted by two independent reviewers, who applied the same selection criteria consistently. Any discrepancies identified during the evaluation were resolved through consensus, thereby ensuring procedural coherence and transparency. As a result of this process, a final sample of 62 articles was obtained, forming the main analytical basis of this review, as shown in Figure 1.
A structured analysis sheet was developed, including the following recording variables: publication ID, year, number of citations, DOI, author(s), title, source, access type, journal quartile, keywords, link, abstract, scientific areas, object of study and objectives, research questions or hypotheses, methods employed, main results, and implications.
Data extraction and coding were performed manually using a Microsoft Excel spreadsheet. Subsequently, a mixed synthesis approach was applied, consisting of:
  • Descriptive analysis (distribution by year, language, area, and journal quartile).
  • Qualitative thematic coding of objectives, methods, and findings.
Study quality was assessed in an exploratory manner by considering the clarity of objectives, the appropriateness of methods, the coherence between results and conclusions, and the relevance to the topic of artificial intelligence and disinformation. No exclusions were made on the basis of methodological quality, in line with the open and mapping-oriented nature of a scoping review.
The results were grouped into five thematic areas, identified inductively through the coding of the 62 articles:
  • AI as a tool to combat disinformation (23 articles).
  • AI as a source or amplifier of disinformation (9 articles).
  • Regulation, ethics, and governance of AI in relation to disinformation (9 articles).
  • Deepfakes and audiovisual manipulation (15 articles).
  • AI as an educational tool and media literacy (4 articles).
The keyword co-occurrence analysis was performed in RStudio (version 2025.09.2+418) using the bibliometrix package for network construction and community detection, and the network visualization was generated with biblioshiny (version 5). The analysis was based on the keywords provided by the authors (Author Keywords). A minimum frequency threshold of ≥2 occurrences was required for a keyword to enter the network, and nodes were required to have at least one edge (minimum number of edges = 1) to appear in the visualization, thus excluding isolated terms that were not part of the relational structure.
The keywords provided by the authors were analyzed as indexed in Scopus, without exhaustive manual harmonization of spelling variants, acronyms, or morphological differences. This decision was made to preserve the authors' original terminology and to avoid introducing subjective interpretive biases through manual aggregation. Conceptually distinct terms (e.g., "disinformation" and "misinformation") were retained as separate nodes to maintain analytical differentiation. For Spanish- and Portuguese-language texts, the English keywords provided in the source documents were used.
Co-occurrence links were normalized using the association strength index, which corrects for frequency bias by weighting the co-occurrence of two keywords relative to their overall occurrence frequencies. This normalization favors the detection of structurally significant relationships over purely frequency-driven effects. Community detection was performed with the Louvain modularity optimization algorithm (Zhang et al. 2018), which partitions the network into clusters by maximizing the density of edges within clusters relative to the connections between clusters. The resulting communities represent structurally coherent thematic groupings that emerge from the relational architecture of the dataset.
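The frequency thresholding, isolated-node exclusion, and association-strength weighting described above can be sketched as follows. This is a minimal Python illustration, not the review's actual pipeline (which used R's bibliometrix and biblioshiny); the Louvain community-detection step is omitted here, and the sample keyword lists are hypothetical.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(keyword_lists, min_freq=2):
    """Build a keyword co-occurrence network with association-strength weights.

    keyword_lists: one list of author keywords per article.
    min_freq: minimum keyword frequency for inclusion (the review uses >= 2).
    Returns (nodes, edges) where edges maps a sorted keyword pair to its
    association strength c_ij / (f_i * f_j), which corrects for frequency bias.
    """
    # Count in how many articles each keyword occurs.
    freq = Counter(kw for kws in keyword_lists for kw in set(kws))
    retained = {kw for kw, n in freq.items() if n >= min_freq}

    # Count co-occurrences among retained keywords within each article.
    cooc = Counter()
    for kws in keyword_lists:
        for a, b in combinations(sorted(set(kws) & retained), 2):
            cooc[(a, b)] += 1

    # Normalize each link by the product of the endpoint frequencies.
    edges = {pair: c / (freq[pair[0]] * freq[pair[1]])
             for pair, c in cooc.items()}
    # Keep only nodes with at least one edge (minimum number of edges = 1).
    nodes = {kw for pair in edges for kw in pair}
    return nodes, edges
```

A community-detection algorithm such as Louvain (available, for instance, in the networkx or igraph libraries) would then be run on the weighted edge list to obtain the thematic clusters.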

3. Results

3.1. Scientific Output by Journals and Thematic Areas

Regarding the scientific journals in which the articles included in the analyzed sample were published, a total of 43 different journal titles were identified. Profesional de la Información stands out as the outlet with the highest number of published articles (N = 4), followed by IEEE Transactions on Computational Social Systems (N = 3) and Proceedings of the ACM on Human–Computer Interaction (N = 3).
Figure 2 illustrates, on the one hand, the distribution of the number of contributions by journal and, on the other, the most recurrent research areas, based on the thematic classification proposed by Scimago.
With respect to the research areas associated with the journals included in the sample, the results are particularly noteworthy, given that article selection was conducted using the Scopus database and restricted to the Social Sciences subject area. Despite this constraint, the analysis reveals the coexistence of multiple disciplinary areas, several of which recur significantly across different journal titles.
In this regard, Communication is the predominant field, with a total of 15 appearances, followed by Social Science (N = 9), Computer Science Applications (N = 8), Cultural Studies (N = 6), Sociology and Political Science (N = 6), Artificial Intelligence (N = 5) and Information Systems (N = 5).
This distribution shows that journals in the field of communication have paid the most attention to the intersection of disinformation and artificial intelligence. However, the data also reveal a clear hybridization between social science and communication approaches and those of a technical and computational nature, highlighting the interdisciplinary character of the scientific output analyzed.

3.2. Authors with the Highest Number of Publications

The results show that only seven authors have more than one publication within the sample analyzed. Specifically, Berta García-Orosa, Alejandro Martín, Guillermo Villar-Rodríguez, Walter J. Scheirer, Tim Weninger, Michael G. Yankoski, and Xosé López-García each have two publications. It should be noted that Martín and Villar-Rodríguez co-authored the same two articles, as did Scheirer, Weninger, and Yankoski.
In terms of the impact of these contributions, measured by the number of citations received, García-Orosa and López-García stand out as the most cited authors within this group, with a total of 52 and 72 citations, respectively. Despite these citation levels, none of the works authored by these scholars rank among the ten most cited articles in the overall sample, as shown in Table 1.

3.3. Most Frequently Used Keywords in the Sample

The analysis of the conceptual network was conducted using RStudio through thematic clustering based on the co-occurrence of keywords in the articles included in the study. In total, four clusters were identified, each represented by a different color. The results are displayed as a graph in Figure 3. Node size reflects the frequency of each keyword, while the thickness of the edges between nodes represents the strength of their association: the thicker the edge, the stronger the relationship.
The graph positions the keyword artificial intelligence at the center, both of its own cluster (in red) and of the overall network. From this node, the following thematic clusters emerge:
Red cluster. Artificial intelligence is the central node of the cluster and has a high degree of association with disinformation. Journalism is the second keyword with the highest intensity of relationship within the cluster, accompanied by terms such as automation, bots, hoaxes, algorithms, deep fakes, elections and democracy, among others.
Blue cluster. Misinformation is the central term in this cluster. It links directly to the network's central node with a strong co-occurrence, although weaker than the link between artificial intelligence and disinformation. Its second strongest link is with fake news; both fake news and misinformation itself also maintain strong relationships with disinformation. Concepts such as fact-checking, social bots, political deepfakes, COVID-19, and human–AI interaction appear in this second cluster.
Green cluster. The green cluster has the keyword deepfakes as its central node, connected to the two central concepts of the red cluster and linked to social media within its own cluster. Memes, AI, large language models, generative AI, and deep learning complete this cluster.
Purple cluster. A small cluster with only three concepts: ChatGPT (connected to the central node), generative artificial intelligence, and chat bots.
Overall, the results point to a technocentric approach in which the relationship between artificial intelligence and disinformation is primarily associated with processes of production and circulation of disinformation on social media platforms. This explains the prominence of concepts such as deepfakes and social bots, as well as their association with risks to democracy, expressed through terms such as elections and manipulation.

3.4. Most Frequently Used Keywords over Time

Figure 4 and Figure 5 illustrate the evolution of research on disinformation and artificial intelligence during the analysis period. The key year is 2022, when ChatGPT was launched: the keyword artificial intelligence then moved to the center of the graph, displacing disinformation to a secondary position within the same cluster. Even so, the technocentric approach was already present in 2020 and 2021, when the keyword disinformation occupied the center of the graph, linked to concepts such as natural language processing, social bots, algorithms, and deep learning (all closely related to artificial intelligence). Furthermore, even before ChatGPT, there was already a clear interest in political disinformation linked to social media: keywords such as elections, memes, political deepfakes, Twitter, and social media form their own cluster.
The influence of ChatGPT on the growth of research addressing artificial intelligence is undeniable. However, the use of the tool's name as a keyword remains relatively limited compared to the more general term artificial intelligence or to specific forms of AI-generated disinformation, such as deepfakes. On the disinformation side, the evolution throughout the period shows a marked trend toward diversification: from the keyword disinformation, a broad range of related terms emerges, including misinformation, fake news, deepfakes, manipulation, social bots, and hoaxes. This diversity highlights the scale of the problem and how research in the field has had to expand and specialize in order to address all areas of the subject. Table 2 illustrates the evolution of keyword density during the 2020–2025 period and identifies the main keywords in each year.

3.5. Research Techniques Used in the Articles

With regard to the methodological approaches used in the works that make up the sample, the results show a clear predominance of qualitative methodologies, present in more than half of the publications analyzed, with a total of 32 articles (53.3%), as shown in Table 3. Within this approach, the most frequently used instruments are systematic literature reviews, document analysis, in-depth interviews, and interpretative analysis.
Quantitative methodologies rank second, having been employed in 17 studies (28.3%). This category encompasses both research incorporating computational methods—such as data and text mining or semantic analysis—and studies based on more traditional techniques, notably structured surveys and quantitative content analysis.
Finally, mixed-methods approaches show a more limited presence across the sample, being used in a total of 11 publications (18.3%).

3.6. Research Approaches to Artificial Intelligence and Disinformation

The studies included in the sample address the intersection between artificial intelligence and disinformation from five distinct perspectives: artificial intelligence as a tool to combat disinformation; artificial intelligence as a source of disinformation; artificial intelligence for the creation of deepfakes (closely related to the previous perspective); the regulation and ethics of artificial intelligence and disinformation; and, finally, artificial intelligence as a tool for education and media literacy. Table 4 provides an overview of these approaches.

3.7. Artificial Intelligence as a Source of Disinformation

The widespread expansion of artificial intelligence use—particularly the prominent role of generative artificial intelligence, including ChatGPT and other large language models (LLMs) such as Gemini, Perplexity, or Grok—has not been without problems, especially those related to security, privacy, and disinformation. This constitutes one of the main research approaches identified in the sample analyzed. In particular, the exponential pace at which these technologies have been adopted emerges as a fundamental concern. Such rapid expansion hinders, or even directly prevents, the detection, analysis, awareness, and mitigation of both existing and potential risks associated with the use of artificial intelligence (Wach et al. 2023; Ferrara 2024; Bontridder and Poullet 2021; García-Orosa 2021; Douven and Hegselmann 2021).
A clear critical stance toward techno-optimistic attitudes surrounding artificial intelligence can be observed across the literature. According to several authors, this perspective tends to overlook issues of considerable democratic, social, and economic significance. These risks stem, on the one hand, from the technological tools themselves, including algorithmic opacity, biases in model design and training processes, insufficient quality control, and infringements of intellectual property, image, and privacy rights (Wach et al. 2023; Douven and Hegselmann 2021).
On the other hand, the analyzed studies warn against the misuse that these technologies facilitate and amplify, particularly the production of disinformation, the creation and dissemination of deepfakes, the manipulation of public opinion, and practices of political microsegmentation and targeted persuasion (Ferrara 2024; Bontridder and Poullet 2021). Table 5 provides a detailed overview of the main malicious uses associated with generative artificial intelligence.
The analysis of these phenomena is situated within a broad contextual framework, namely the so-called fourth wave of digital democracy (García-Orosa 2021), which makes it possible to understand the current situation beyond technodeterministic approaches. This perspective incorporates the political role of major technological platforms in communication processes (Google, Amazon, Meta, and OpenAI), as well as the growing influence of algorithms in the circulation of content.
This framework also takes into account the active involvement of users in the production and dissemination of content (prosumers), the intensification of polarization, the proliferation of disinformation campaigns driven by state, corporate, or individual actors, and the use of bots in communication processes (Assenmacher et al. 2020).
In the eyes of these authors, the keys to tackling this problem are regulation (Lian et al. 2024; Wach et al. 2023; Ferrara 2024), greater pressure on platforms for accountability, and analysis and training (Hausken 2024; Thomson et al. 2024). Holistic approaches propose, on the one hand, modifying the digital ecosystem itself in order to reduce incentives for the production of disinformation through artificial intelligence (Bontridder and Poullet 2021) and, on the other, reconfiguring the democratic system through the integration of new communication practices, ethical values, and more transparent power structures in the face of automation and digital manipulation (García-Orosa 2021).

3.8. Regulation and Ethics of Artificial Intelligence and Disinformation

Based on the above, the analyzed corpus agrees on the urgent need for more and better regulation in the field of artificial intelligence and disinformation, involving the political–institutional, business–technological, media and academic spheres.
European institutions—specifically the European Union and the European Commission—and their actions regarding artificial intelligence and disinformation are the focus of several studies. Critical reviews of the AI Act (AIA) identify gaps and areas for improvement, particularly in relation to the regulation of deepfakes (Romero Moreno 2024) and the role of the media (Porlezza 2023; Forja-Pena et al. 2024). In the former case, the legislation is considered insufficient, as the analyzed studies point to inconsistencies in risk classification and shortcomings in ensuring accountability and adequate protection for victims. In the latter, scholars emphasize the need to develop specific regulatory frameworks for the application of artificial intelligence in the media sector, since certain practices—such as recommendation systems or automated news production—may fall into medium- or high-risk categories under the criteria established by the AIA.
From a citizen’s perspective, the urgent need is to enact regulations focused on data protection on social media (Battista and Uva 2023), by prohibiting coercive practices linked to the acceptance of privacy terms and the transfer of personal data. In a context of imbalance between platforms and states, in which the former act as private regulators without democratic control (Marsden et al. 2020), the fight against disinformation should be oriented towards hybrid models of regulation or co-regulation, based on democratic principles.
This collaborative approach requires ongoing dialogue between artificial intelligence system developers, various interest groups, the media, academia, regulatory bodies and the public (Polyportis and Pahos 2024; Koplin 2023; Victor et al. 2023).
In the geopolitical sphere, Miller (2023) points to the concept of cognitive warfare and argues for the need to regulate disinformation and computational propaganda, without undermining freedom of expression. He proposes a shared responsibility between governments, the media and citizens to resist disinformation and justifies, in certain cases, the banning of foreign media outlets—such as Russia Today—when they are used as information weapons.
In sum, the academic focus is on the challenge of regulating AI and disinformation in Europe, in a context of geopolitical tension and questionable willingness on the part of large technology companies to collaborate, as they are more concerned with winning the 'AI race' than with creating a framework for responsible innovation grounded in ethical principles and the protection of citizens' rights.

3.9. Artificial Intelligence as a Tool to Combat Disinformation

One of the most prolific lines of research within the analyzed field concerns the incorporation of artificial intelligence in the fight against disinformation, addressed in a total of 23 studies in the sample. A large proportion of these rely on experimental research designs aimed at evaluating the capacity of AI-based technologies to (1) detect and verify potentially disinformative content (Naeem et al. 2020; Nasir et al. 2021; Shahid et al. 2024; Das et al. 2023; Santos 2023; Yankoski et al. 2021; Xiao et al. 2024) and (2) identify disinformation agents and their behavioral patterns (Villar-Rodríguez et al. 2022; Noguera Vivo et al. 2023). In addition, several studies examine the effectiveness of and perceived risks associated with the use of AI as a fact-checking tool (Wojcieszak et al. 2021; Lu et al. 2022; Pareek et al. 2024; Cuartielles et al. 2023; Canavilhas 2022; Liu et al. 2025).
Research on the automated analysis of disinformation reports results obtained using different models and tools and highlights the predominance of textual content analysis—an area that is continuously evolving and yielding positive results (Naeem et al. 2020; Nasir et al. 2021; Shahid et al. 2024). By contrast, multimedia analysis remains underdeveloped, despite the growing prevalence of audiovisual and audio-based disinformation (Nasir et al. 2021; Shahid et al. 2024).
The literature identifies several limitations. Most studies focus on text analysis and continue to face challenges related to data quality and the scarcity of multilingual datasets. Furthermore, models struggle to address complex phenomena such as irony, sarcasm, and specific cultural and linguistic nuances (Santos 2023; Montoro-Montarroso et al. 2023; Shahid et al. 2024).
Some studies propose the use of synthetic data as a potential response to these shortcomings, particularly in multilingual contexts (Aïmeur et al. 2023). Finally, although the tools analyzed are capable of identifying false information, evidence retrieval and the explainability of reasoning processes remain underdeveloped areas (Das et al. 2023; Kozik et al. 2024).
On the other hand, some studies point to the prevalence of memes and simple visual content over deepfakes as the main source of viral disinformation (Yankoski et al. 2021). AI tools find it difficult to identify coordinated narratives of emotional manipulation, which limits their effectiveness in the face of complex campaigns. In this regard, the authors propose the need to move from perceptual detection to cognitive detection, with the ability to interpret meanings, irony and context (Yankoski et al. 2021; Santos 2023).
Sociological, legal, and journalistic approaches allow for a structural rather than merely technical analysis of disinformation (Yankoski et al. 2020; Llorca-Asensi et al. 2021; Vicari and Komendatova 2023; Vizoso et al. 2021). The literature identifies systems that combine professional judgment with the scalability provided by algorithmic systems as the most effective solution in the fight against disinformation (Das et al. 2023; Santos 2023; Montoro-Montarroso et al. 2023; Flores-Saviaga et al. 2022; Sánchez González et al. 2022). In this regard, journalists and fact-checkers must remain at the center of the process to ensure discernment, transparency, and accountability.
This hybrid perspective also extends to the identification of disinformation agents. AI tools have demonstrated that disinformation is not transmitted solely through viral content but is sustained over time as a long tail of messages disseminated through low-interaction profiles (Villar-Rodríguez et al. 2022). These patterns make it possible to predict disinformative behavior and the evolution of disinformation campaigns (Noguera Vivo et al. 2023).
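The long-tail pattern described above can be made concrete with a small computation. All figures below are invented for illustration; they are not drawn from Villar-Rodríguez et al. (2022).

```python
# Synthetic illustration of the "long tail" of disinformation spread:
# a few viral messages versus many low-interaction messages that jointly
# sustain a campaign over time. All figures are invented.
messages = [12000, 8000, 5000] + [15] * 2000   # interactions per message

threshold = 100  # messages below this count as "low-interaction"
tail = [m for m in messages if m < threshold]
share = sum(tail) / sum(messages)
print(f"{len(tail)} of {len(messages)} messages are low-interaction, "
      f"carrying {share:.0%} of total interactions")
```

Even though each low-interaction message reaches few users, in this toy scenario the tail carries over half of all interactions, which is the kind of aggregate pattern that virality-focused monitoring misses.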
The studies reviewed in this section indicate that the effectiveness of AI as a fact-checker largely depends on user perception and trust. Without the construction of adequate trust mechanisms, situations of algorithmic aversion or the backfire effect may arise in users exposed to corrections produced by AI (Lu et al. 2022; Pareek et al. 2024). In the professional field of communication, journalists perceive tools such as ChatGPT as operational support, never as a substitute for people (Cuartielles et al. 2023).

3.10. Artificial Intelligence for the Creation of Deepfakes

The literature reviewed indicates that deepfakes currently generate more uncertainty than direct deception. Their presence contributes to a general loss of trust in the content circulating on the internet, especially in audiovisual material disseminated by the media (Brennen et al. 2021). Exposure to this type of content and, above all, the subsequent revelation of the deception reduces users’ perceived credibility and self-efficacy, which encourages attitudes of informational cynicism and a crisis of visual evidence (Vaccari and Chadwick 2020; Weikmann et al. 2025).
Nevertheless, recent studies have tempered alarmist narratives. In electoral contexts, the presence of deepfakes has thus far been limited, and in some cases the negative impact on democratic processes appears to stem more from media coverage of the threat than from the actual circulation of manipulated content (Łabuz and Nehring 2024).
With regard to detection and credibility, the analyzed studies indicate that deepfakes do not necessarily possess greater persuasive power than other forms of disinformation, whether graphical, textual, or audiovisual. In this respect, so-called cheapfakes (Hameleers 2024) may prove even more credible. Moreover, users’ ability to detect deepfakes is more closely related to their political interest and analytical thinking skills than to their capacity to identify visual flaws. This finding reinforces counter-disinformation approaches that prioritize reflective reasoning over the analysis of increasingly sophisticated deceptive content (Appel and Prietzel 2022; Murillo-Ligorred et al. 2023; Ali et al. 2021).
From a technical perspective, authors highlight both technical and social limitations. Detection systems have demonstrated critical vulnerabilities to minimal adversarial modifications, and models also face generalization problems. As standalone solutions, their effectiveness is therefore questionable (Hussain et al. 2022; Gambín et al. 2024). In parallel, the literature points to an underrepresentation of social sciences compared to computational and legal approaches (Godulla et al. 2021). Scholars call for integrated sociotechnical responses that combine media literacy, regulation, and strengthened journalism and fact-checking in order to safeguard democratic development (Gómez-de-Ágreda et al. 2021; García-Ull 2021; Millière 2022).

3.11. Artificial Intelligence for Education and Media Literacy

Media literacy constitutes another key axis from which the issue of artificial intelligence-driven disinformation is addressed. The selected studies show that algorithmic literacy has a measurable impact on students’ critical awareness of disinformation in digital environments. Workshop-based interventions with students (Calvo et al. 2020) improve their understanding of automation mechanisms and strengthen their perception of artificial intelligence as a relevant agent in shaping public debate. These findings support the systematic integration of such content into higher education curricula. Nevertheless, classical models of technology adoption are considered insufficient (Acosta-Enriquez et al. 2024), highlighting the need to incorporate specific ethical and psychosocial dimensions into analyses of AI use in educational contexts, as well as new pedagogical methodologies (Yim 2024), in order to mitigate the potentially harmful effects of AI usage (Osamor Ifelebuegu et al. 2023).
The educational context also reveals a divergence of perceptions between students and teachers. The former see generative AI as a way to reduce the burden of rote learning, while the latter remain reluctant, arguing that safeguards are needed against plagiarism, loss of pedagogical control, and unreliable systems (Wong 2024). In summary, there is a pressing need for clear university policies, ethical guidelines, and training programs that combine creative and critical use of generative AI. Collaboration between academia, industry, and government is essential to ensure the ethical and inclusive development of these technologies, including in the Global South (Gasaymeh et al. 2024).

4. Discussion and Conclusions

Since the emergence of ChatGPT in 2022, generative artificial intelligence has gained a growing presence in economic, political, social, media and academic spheres. Although research on the relationship between artificial intelligence and disinformation had already been underway, since that year there has been a significant increase in scientific output and greater diversification of approaches and topics, as reflected in bibliometric analysis.
In terms of keywords, the results depict a scenario centered on a technocentric core cluster, which is associated, to varying degrees, with other clusters linked to the role of social media in spreading disinformation and its effects on the public domain and democracy. Noteworthy is the limited weight of the concept of journalism in co-occurrence with the main nodes of the graph, which points to a greater prominence of approaches from other areas of knowledge.
In this sense, the sample includes contributions from journals belonging to different categories, with a notable emphasis on communication and other social sciences, along with contributions from technological fields such as computer science and artificial intelligence. Finally, the methodological analysis shows that the use of mixed methodologies is in the minority, while qualitative approaches predominate.
The literature review identifies five major lines of research in the study of AI and disinformation: AI as a source of disinformation, AI as a tool to combat disinformation, regulatory approaches, deepfakes, and algorithmic literacy.
The analyzed studies indicate that technodeterministic approaches to combating online disinformation must be superseded by broader research frameworks that integrate multiple areas of knowledge (García-Orosa 2021; Vicari and Komendatova 2023), particularly the social sciences. In this regard, model development should move toward cognitive and contextual detection, including the identification of irony, emotions, and manipulative narratives (Yankoski et al. 2021; Santos 2023). This approach is especially relevant given the predominance of memes and graphic or audiovisual content in disinformation campaigns, which pose significant challenges for current models (Yankoski et al. 2021).
In this context, opportunities for research emerge around hybrid human–AI models, both in fact-checking and in the detection of disinformation agents. Full automation of these processes is largely rejected, with professionals in information and journalism—as well as users—placed at the center of the process (Das et al. 2023; Santos 2023; Montoro-Montarroso et al. 2023; Vizoso et al. 2021; Flores-Saviaga et al. 2022). The reviewed literature consistently emphasizes the need to address disinformation and artificial intelligence through hybrid approaches. Accordingly, greater development of methodologies that integrate computational techniques within social science research is recommended in order to expand analytical capacity and generate knowledge with greater social impact.
The studies underline the importance of collaboration among institutional and regulatory bodies, media organizations, business and technological actors, and civil society as a necessary condition for addressing disinformation (Romero Moreno 2024; Porlezza 2023; Forja-Pena et al. 2024; Marsden et al. 2020; Polyportis and Pahos 2024). This need is particularly acute in a context where disinformation constitutes a key factor in the erosion of democratic processes. Future research should evaluate the actual effectiveness of current regulation, particularly at the European level (AI Act), analyze its gaps and the role of the media, and explore models of democratic co-regulation that balance the power asymmetry between platforms, states and citizens in a context of growing geopolitical tension.
On the other hand, it is important to study artificial intelligence, especially generative AI, as a structural source of disinformation (Ferrara 2024; Wach et al. 2023; Bontridder and Poullet 2021). It should not only be examined in terms of scalability, speed of dissemination, and segmentation capacity, but also regarding the incentives that favor this automated production of disinformation. With regard to deepfakes, the study of their effects on trust, information uncertainty, and democratic culture is a field of work that warrants further investigation. The reviewed literature indicates that these technologies tend to generate indeterminacy rather than direct deception, thereby eroding the credibility of content—whether deceptive or not—and diminishing citizens’ perceived self-efficacy (Vaccari and Chadwick 2020; Weikmann et al. 2025).
Progress is already evident, as reflected in the diversity of thematic approaches emerging in this field of research. Media and algorithmic literacy constitutes one such avenue. Experimental studies involving students provide valuable insights for strengthening this line of inquiry (Appel and Prietzel 2022; Murillo-Ligorred et al. 2023; Calvo et al. 2020; Osamor Ifelebuegu et al. 2023), in which journalism—particularly fact-checking—should play a central role. Monitoring large generative AI models as potential sources of disinformation (Ferrara 2024; Wach et al. 2023; Bontridder and Poullet 2021) and evaluating the quality and effectiveness of regulatory systems, especially the European framework (Romero Moreno 2024; Porlezza 2023; Forja-Pena et al. 2024; Marsden et al. 2020; Polyportis and Pahos 2024), complete this strategic axis without diminishing the importance of continuing to advance detection and verification efforts through hybrid human–AI models (Das et al. 2023; Santos 2023; Montoro-Montarroso et al. 2023; Vizoso et al. 2021; Sánchez González et al. 2022).
Finally, certain limitations derived from the study’s inclusion criteria should be acknowledged. Restricting the corpus to open-access articles published in Q1 and Q2 journals, along with a minimum citation threshold, may bias the sample toward more established and higher-impact publications. This could underrepresent recent studies, particularly those published between 2023 and 2025, that have not yet accumulated citations. The findings should therefore be interpreted as reflecting a subset of high-impact scientific literature rather than the full body of research on artificial intelligence and disinformation. Future research will incorporate sensitivity analyses that relax these criteria in order to assess the stability of the thematic patterns identified.

Author Contributions

Conceptualization, J.C.G.; methodology, J.C.G. and A.S.R.; software, J.C.G. and A.S.R.; validation, A.-I.R.-V., J.C.G. and A.S.R.; formal analysis, A.-I.R.-V., J.C.G. and A.S.R.; investigation, A.-I.R.-V., J.C.G. and A.S.R.; resources, A.-I.R.-V.; data curation, J.C.G. and A.S.R.; writing—original draft preparation, J.C.G. and A.-I.R.-V.; writing—review and editing, J.C.G. and A.-I.R.-V.; visualization, J.C.G.; supervision, A.-I.R.-V. and A.S.R.; project administration, A.-I.R.-V.; funding acquisition, A.-I.R.-V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the R&D project Artificial Intelligence in Digital Media in Spain: Effects and Roles (PID2024-156034OB-C22), funded by MICIU/AEI/10.13039/501100011033 and by ERDF/EU.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Acosta-Enriquez, Benicio Gonzalo, Marco Agustín Arbulú Ballesteros, Carmen Graciela Arbulu Perez Vargas, Milca Naara Orellana Ulloa, Cristian Raymound Gutiérrez Ulloa, Johanna Micaela Pizarro Romero, Néstor Daniel Gutiérrez Jaramillo, Héctor Ulises Cuenca Orellana, Diego Xavier Ayala Anzoátegui, and Carlos López Roca. 2024. Knowledge, Attitudes, and Perceived Ethics Regarding the Use of ChatGPT among Generation Z University Students. International Journal for Educational Integrity 20: 10. [Google Scholar] [CrossRef]
  2. Adams, Zoë, Magda Osman, Christos Bechlivanidis, and Björn Meder. 2023. (Why) Is Misinformation a Problem? Perspectives on Psychological Science: A Journal of the Association for Psychological Science 18: 1436–63. [Google Scholar] [CrossRef]
  3. Adetayo, Adebowale Jeremy. 2023. Artificial Intelligence Chatbots in Academic Libraries: The Rise of ChatGPT. Library Hi Tech News 40: 18–21. [Google Scholar] [CrossRef]
  4. Aïmeur, Esma, Sabrine Amri, and Gilles Brassard. 2023. Fake News, Disinformation and Misinformation in Social Media: A Review. Social Network Analysis and Mining 13: 30. [Google Scholar] [CrossRef] [PubMed]
  5. Ali, Safinah, Daniella DiPaola, Irene Lee, Victor Sindato, Grace Kim, Ryan Blumofe, and Cynthia Breazeal. 2021. Children as Creators, Thinkers and Citizens in an AI-Driven Future. Computers and Education: Artificial Intelligence 2: 100040. [Google Scholar] [CrossRef]
  6. Appel, Markus, and Fabian Prietzel. 2022. The Detection of Political Deepfakes. Journal of Computer-Mediated Communication: JCMC 27: zmac008. [Google Scholar] [CrossRef]
  7. Assenmacher, Dennis, Lena Clever, Lena Frischlich, Thorsten Quandt, Heike Trautmann, and Christian Grimme. 2020. Demystifying Social Bots: On the Intelligence of Automated Social Media Actors. Social Media + Society 6: 205630512093926. [Google Scholar] [CrossRef]
  8. Battista, Daniele, and Gabriele Uva. 2023. Exploring the Legal Regulation of Social Media in Europe: A Review of Dynamics and Challenges—Current Trends and Future Developments. Sustainability 15: 4144. [Google Scholar] [CrossRef]
  9. Bontridder, Noémi, and Yves Poullet. 2021. The Role of Artificial Intelligence in Disinformation. Data & Policy 3: e32. [Google Scholar] [CrossRef]
  10. Brennen, J. Scott, Felix M. Simon, and Rasmus Kleis Nielsen. 2021. Beyond (Mis)Representation: Visuals in COVID-19 Misinformation. The International Journal of Press/Politics 26: 277–99. [Google Scholar] [CrossRef]
  11. Calvo, Dafne, Lorena Cano-Orón, and Almudena Esteban. 2020. Materiales y Evaluación Del Nivel de Alfabetización Para El Reconocimiento de Bots Sociales En Contextos de Desinformación Política. Revista ICONO14 18: 111–37. [Google Scholar] [CrossRef]
  12. Canavilhas, João. 2022. Inteligencia Artificial Aplicada al Periodismo: Estudio de Caso Del Proyecto ‘A European Perspective’ (UER). Revista Latina de Comunicación Social 80: 1–13. [Google Scholar] [CrossRef]
  13. Casero-Ripollés, Andreu. 2018. Research on Political Information and Social Media: Key Points and Challenges for the Future. El Profesional de La Información 27: 964–74. [Google Scholar] [CrossRef]
  14. Codina, Lluís. 2017. Bases de datos Académicas para Investigar en Comunicación Social: Revisiones Sistematizadas, Grupo Óptimo y Protocolo de Búsqueda. Lluís Codina. July 12. Available online: https://www.lluiscodina.com/bases-de-datos-academicasi-comunicacion-social/ (accessed on 5 September 2025).
  15. Cuartielles, Roger, Xavier Ramon-Vegas, and Carles Pont-Sorribes. 2023. Retraining Fact-Checkers: The Emergence of ChatGPT in Information Verification. El Profesional de La Información 32: e320515. [Google Scholar] [CrossRef]
  16. Das, Anubrata, Houjiang Liu, Venelin Kovatchev, and Matthew Lease. 2023. The State of Human-Centered NLP Technology for Fact-Checking. Information Processing & Management 60: 103219. [Google Scholar] [CrossRef]
  17. Douven, Igor, and Rainer Hegselmann. 2021. Mis- and Disinformation in a Bounded Confidence Model. Artificial Intelligence 291: 103415. [Google Scholar] [CrossRef]
  18. Dwivedi, Yogesh K., Laurie Hughes, Elvira Ismagilova, Gert Aarts, Crispin Coombs, Tom Crick, Yanqing Duan, Rohita Dwivedi, John Edwards, Aled Eirug, and et al. 2021. Artificial Intelligence (AI): Multidisciplinary Perspectives on Emerging Challenges, Opportunities, and Agenda for Research, Practice and Policy. International Journal of Information Management 57: 101994. [Google Scholar] [CrossRef]
  19. Ferrara, Emilio. 2024. GenAI against Humanity: Nefarious Applications of Generative Artificial Intelligence and Large Language Models. Journal of Computational Social Science 7: 549–69. [Google Scholar] [CrossRef]
  20. Flores-Saviaga, Claudia, Shangbin Feng, and Saiph Savage. 2022. Datavoidant: An AI System for Addressing Political Data Voids on Social Media. Proceedings of the ACM on Human-Computer Interaction 6: 503. [Google Scholar] [CrossRef]
  21. Forja-Pena, Tania, Berta García-Orosa, and Xosé López-García. 2024. The Ethical Revolution: Challenges and Reflections in the Face of the Integration of Artificial Intelligence in Digital Journalism. Communication & Society 37: 237–54. [Google Scholar] [CrossRef]
  22. Gambín, Ángel Fernández, Anis Yazidi, Athanasios Vasilakos, Hårek Haugerud, and Youcef Djenouri. 2024. Deepfakes: Current and Future Trends. Artificial Intelligence Review 57: 64. [Google Scholar] [CrossRef]
  23. García-Orosa, Berta. 2021. Disinformation, Social Media, Bots, and Astroturfing: The Fourth Wave of Digital Democracy. El Profesional de La Información 30: e300603. [Google Scholar] [CrossRef]
  24. García-Ull, Francisco José. 2021. «Deepfakes»: El Pròxim Repte En La Detecció de Notícies Falses. Anàlisi 64: 103–20. [Google Scholar] [CrossRef]
  25. Gasaymeh, Al-Mothana M., Mohammad A. Beirat, and Asma’a A. Abu Qbeita. 2024. University Students’ Insights of Generative Artificial Intelligence (AI) Writing Tools. Education Sciences 14: 1062. [Google Scholar] [CrossRef]
  26. Godulla, Alexander, Christian P. Hoffmann, and Daniel Seibert. 2021. Dealing with Deepfakes—An Interdisciplinary Examination of the State of Research and Implications for Communication Studies. Studies in Communication and Media 10: 72–96. [Google Scholar] [CrossRef]
  27. Gómez-de-Ágreda, Ángel, Claudio Feijóo, and Idoia-Ana Salazar-García. 2021. Una Nueva Taxonomía Del Uso de La Imagen En La Conformación Interesada Del Relato Digital. Deep Fakes e Inteligencia Artificial. El Profesional de La Información 30: e300216. [Google Scholar] [CrossRef]
  28. Graves, Lucas. 2016. Deciding What’s True: The Rise of Political Fact-Checking in American Journalism. New York: Columbia University Press. [Google Scholar]
  29. Guallar, Javier, Lluís Codina, Pere Freixa, and Mario Pérez-Montoro. 2020. Desinformación, bulos, curación y verificación. Revisión de estudios en Iberoamérica 2017–2020. Telos 22: 595–613. [Google Scholar] [CrossRef]
  30. Gutiérrez-Caneda, Beatriz, Jorge Vázquez-Herrero, and Xosé López-García. 2023. AI Application in Journalism: ChatGPT and the Uses and Risks of an Emergent Technology. El Profesional de La Información 32: e320514. [Google Scholar] [CrossRef]
  31. Hameleers, Michael. 2024. Cheap versus Deep Manipulation: The Effects of Cheapfakes versus Deepfakes in a Political Setting. International Journal of Public Opinion Research 36: edae004. [Google Scholar] [CrossRef]
  32. Hausken, Liv. 2024. Photorealism versus Photography. AI-Generated Depiction in the Age of Visual Disinformation. Journal of Aesthetics & Culture 16: 2340787. [Google Scholar] [CrossRef]
  33. Hussain, Shehzeen, Paarth Neekhara, Brian Dolhansky, Joanna Bitton, Cristian Canton Ferrer, Julian McAuley, and Farinaz Koushanfar. 2022. Exposing Vulnerabilities of Deepfake Detection Systems with Robust Attacks. Digital Threats: Research and Practice 3: 30. [Google Scholar] [CrossRef]
  34. Kaplan, Andreas, and Michael Haenlein. 2019. Siri, Siri, in My Hand: Who’s the Fairest in the Land? On the Interpretations, Illustrations, and Implications of Artificial Intelligence. Business Horizons 62: 15–25. [Google Scholar] [CrossRef]
  35. Koplin, Julian J. 2023. Dual-Use Implications of AI Text Generation. Ethics and Information Technology 25: 32. [Google Scholar] [CrossRef]
  36. Kozik, Rafał, Aleksandra Pawlicka, Marek Pawlicki, Michał Choraś, Wojciech Mazurczyk, and Krzysztof Cabaj. 2024. A Meta-Analysis of State-of-the-Art Automated Fake News Detection Methods. IEEE Transactions on Computational Social Systems 11: 5219–29. [Google Scholar] [CrossRef]
  37. Lian, Ying, Huiting Tang, Mengting Xiang, and Xuefan Dong. 2024. Public Attitudes and Sentiments toward ChatGPT in China: A Text Mining Analysis Based on Social Media. Technology in Society 76: 102442. [Google Scholar] [CrossRef]
  38. Liu, Xingyu, Li Qi, Laurent Wang, and Miriam J. Metzger. 2025. Checking the Fact-Checkers: The Role of Source Type, Perceived Credibility, and Individual Differences in Fact-Checking Effectiveness. Communication Research 52: 719–46. [Google Scholar] [CrossRef]
  39. Llorca-Asensi, Elena, Alexander Sánchez Díaz, Maria-Elena Fabregat-Cabrera, and Raúl Ruiz-Callado. 2021. ‘Why Can’t We?’ Disinformation and Right to Self-Determination. The Catalan Conflict on Twitter. Social Sciences 10: 383. [Google Scholar] [CrossRef]
  40. Lu, Zhuoran, Patrick Li, Weilong Wang, and Ming Yin. 2022. The Effects of AI-Based Credibility Indicators on the Detection and Spread of Misinformation under Social Influence. Proceedings of the ACM on Human-Computer Interaction 6: 461. [Google Scholar] [CrossRef]
  41. Łabuz, Mateusz, and Christopher Nehring. 2024. On the Way to Deep Fake Democracy? Deep Fakes in Election Campaigns in 2023. European Political Science 23: 454–73. [Google Scholar] [CrossRef]
  42. Magallón Rosa, Raúl. 2019. La (No) Regulación de La Desinformación En La Unión Europea. Una Perspectiva Comparada. Revista de Derecho Político 1: 319–46. [Google Scholar] [CrossRef]
  43. Marsden, Chris, Trisha Meyer, and Ian Brown. 2020. Platform Values and Democratic Elections: How Can the Law Regulate Digital Disinformation? Computer Law & Security Review 36: 105373. [Google Scholar] [CrossRef]
  44. Miller, Seumas. 2023. Cognitive Warfare: An Ethical Analysis. Ethics and Information Technology 25: 46. [Google Scholar] [CrossRef]
  45. Millière, Raphaël. 2022. Deep Learning and Synthetic Media. Synthese 200: 231. [Google Scholar] [CrossRef]
  46. Montoro-Montarroso, Andrés, Javier Cantón-Correa, Paolo Rosso, Berta Chulvi, Ángel Panizo-Lledot, Javier Huertas-Tato, Blanca Calvo-Figueras, M. José Rementeria, and Juan Gómez-Romero. 2023. Fighting Disinformation with Artificial Intelligence: Fundamentals, Advances and Challenges. El Profesional de La Información 32: e320322. [Google Scholar] [CrossRef]
  47. Murillo-Ligorred, Víctor, Nora Ramos-Vallecillo, Irene Covaleda, and Leticia Fayos. 2023. Knowledge, Integration and Scope of Deepfakes in Arts Education: The Development of Critical Thinking in Postgraduate Students in Primary Education and Master’s Degree in Secondary Education. Education Sciences 13: 1073. [Google Scholar] [CrossRef]
  48. Naeem, Bilal, Aymen Khan, Mirza Omer Beg, and Hasan Mujtaba. 2020. A Deep Learning Framework for Clickbait Detection on Social Area Network Using Natural Language Cues. Journal of Computational Social Science 3: 231–43. [Google Scholar] [CrossRef]
  49. Nasir, Jamal Abdul, Osama Subhani Khan, and Iraklis Varlamis. 2021. Fake News Detection: A Hybrid CNN-RNN Based Deep Learning Approach. International Journal of Information Management Data Insights 1: 100007. [Google Scholar] [CrossRef]
  50. Ng, Davy Tsz Kit, Jac Ka Lok Leung, Samuel Kai Wah Chu, and Maggie Shen Qiao. 2021. Conceptualizing AI Literacy: An Exploratory Review. Computers and Education: Artificial Intelligence 2: 100041. [Google Scholar] [CrossRef]
  51. Noguera Vivo, José Manuel, María del Mar Grandío-Pérez, Guillermo Villar-Rodríguez, Alejandro Martín, and David Camacho. 2023. Desinformación y Vacunas En Redes: Comportamiento de Los Bulos En Twitter. Revista Latina de Comunicación Social 81: 44–62. [Google Scholar] [CrossRef]
  52. Osamor Ifelebuegu, Augustine, Peace Kulume, and Perpetua Cherukut. 2023. Chatbots and AI in Education (AIEd) Tools: The Good, the Bad, and the Ugly. Journal of Applied Learning & Teaching 6: 332–45. [Google Scholar] [CrossRef]
  53. Page, Matthew J., Joanne E. McKenzie, Patrick M. Bossuyt, Isabelle Boutron, Tammy C. Hoffmann, Cynthia D. Mulrow, Larissa Shamseer, Jennifer M. Tetzlaff, Elie A. Akl, Sue E. Brennan, and et al. 2021. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ 372: n71. [Google Scholar] [CrossRef] [PubMed]
  54. Pareek, Saumya, Niels van Berkel, Eduardo Velloso, and Jorge Goncalves. 2024. Effect of Explanation Conceptualisations on Trust in AI-Assisted Credibility Assessment. Proceedings of the ACM on Human-Computer Interaction 8: 383. [Google Scholar] [CrossRef]
  55. Pavlik, John V. 2023. Collaborating with ChatGPT: Considering the Implications of Generative Artificial Intelligence for Journalism and Media Education. Journalism & Mass Communication Educator 78: 84–93. [Google Scholar] [CrossRef]
  56. Polyportis, Athanasios, and Nikolaos Pahos. 2024. Navigating the Perils of Artificial Intelligence: A Focused Review on ChatGPT and Responsible Research and Innovation. Humanities & Social Sciences Communications 11: 107. [Google Scholar] [CrossRef]
  57. Porlezza, Colin. 2023. Promoting Responsible AI: A European Perspective on the Governance of Artificial Intelligence in Media and Journalism. Communications 48: 370–94. [Google Scholar] [CrossRef]
  58. Regulation-EU-2024/1689—EN—EUR-Lex. n.d. Europa.Eu. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689 (accessed on 6 March 2026).
  59. Romero Moreno, Felipe. 2024. Generative AI and Deepfakes: A Human Rights Approach to Tackling Harmful Content. International Review of Law Computers & Technology 38: 297–326. [Google Scholar] [CrossRef]
  60. Santos, Fátima C. Carrilho. 2023. Artificial Intelligence in Automated Detection of Disinformation: A Thematic Analysis. Journalism and Media 4: 679–87. [Google Scholar] [CrossRef]
  61. Sánchez González, María, Hada M. Sánchez Gonzales, and Sergio Martínez Gonzalo. 2022. Inteligencia Artificial En Verificadores Hispanos de La Red IFCN: Proyectos Innovadores y Percepción de Expertos y Profesionales. Estudios Sobre El Mensaje Periodístico 28: 867–79. [Google Scholar] [CrossRef]
  62. Sánchez-Serrano, Silvia, Inmaculada Pedraza-Navarro, and Macarena Donoso-González. 2022. ¿Cómo Hacer Una Revisión Sistemática Siguiendo El Protocolo PRISMA?: Usos y Estrategias Fundamentales Para Su Aplicación En El Ámbito Educativo a Través de Un Caso Práctico. Bordón Revista de Pedagogía 74: 51–66. [Google Scholar] [CrossRef]
  63. Scimago Journal & Country Rank. n.d. Scimagojr.com. Available online: https://www.scimagojr.com/ (accessed on 6 March 2026).
  64. Shahid, Wajiha, Bahman Jamshidi, Saqib Hakak, Haruna Isah, Wazir Zada Khan, Muhammad Khurram Khan, and Kim-Kwang Raymond Choo. 2024. Detecting and Mitigating the Dissemination of Fake News: Challenges and Future Research Opportunities. IEEE Transactions on Computational Social Systems 11: 4649–62. [Google Scholar] [CrossRef]
  65. Thomson, T. J., Ryan J. Thomas, and Phoebe Matich. 2024. Generative Visual AI in News Organizations: Challenges, Opportunities, Perceptions, and Policies. Digital Journalism 13: 1693–714. [Google Scholar] [CrossRef]
  66. Tricco, Andrea C., Erin Lillie, Wasifa Zarin, Kelly K. O’Brien, Heather Colquhoun, Danielle Levac, David Moher, Micah D. J. Peters, Tanya Horsley, Laura Weeks, and et al. 2018. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Annals of Internal Medicine 169: 467–73. [Google Scholar] [CrossRef]
  67. Vaccari, Cristian, and Andrew Chadwick. 2020. Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News. Social Media + Society 6: 205630512090340. [Google Scholar] [CrossRef]
  68. Vicari, Rosa, and Nadejda Komendatova. 2023. Systematic Meta-Analysis of Research on AI Tools to Deal with Misinformation on Social Media during Natural and Anthropogenic Hazards and Disasters. Humanities & Social Sciences Communications 10: 332. [Google Scholar] [CrossRef]
  69. Victor, Bryan G., Rebeccah L. Sokol, Lauri Goldkind, and Brian E. Perron. 2023. Recommendations for Social Work Researchers and Journal Editors on the Use of Generative AI and Large Language Models. Journal of the Society for Social Work and Research 14: 563–77. [Google Scholar] [CrossRef]
  70. Villar-Rodríguez, Guillermo, Mónica Souto-Rico, and Alejandro Martín. 2022. Virality, Only the Tip of the Iceberg: Ways of Spread and Interaction around COVID-19 Misinformation in Twitter. Communication & Society 35: 239–56. [Google Scholar] [CrossRef]
  71. Vizoso, Ángel, Martín Vaz-Álvarez, and Xosé López-García. 2021. Fighting Deepfakes: Media and Internet Giants’ Converging and Diverging Strategies against Hi-Tech Misinformation. Media and Communication 9: 291–300. [Google Scholar] [CrossRef]
  72. Wach, Krzysztof, Cong Doanh Duong, Joanna Ejdys, Rūta Kazlauskaitė, Pawel Korzynski, Grzegorz Mazurek, Joanna Paliszkiewicz, and Ewa Ziemba. 2023. The Dark Side of Generative Artificial Intelligence: A Critical Analysis of Controversies and Risks of ChatGPT. Entrepreneurial Business and Economics Review 11: 7–30. [Google Scholar] [CrossRef]
  73. Weikmann, Teresa, Hannah Greber, and Alina Nikolaou. 2025. After Deception: How Falling for a Deepfake Affects the Way We See, Hear, and Experience Media. The International Journal of Press/Politics 30: 187–210. [Google Scholar] [CrossRef]
  74. Wojcieszak, Magdalena, Arti Thakur, João Fernando Ferreira Gonçalves, Andreu Casas, Ericka Menchen-Trevino, and Miriam Boon. 2021. Can AI Enhance People’s Support for Online Moderation and Their Openness to Dissimilar Political Views? Journal of Computer-Mediated Communication: JCMC 26: 223–43. [Google Scholar] [CrossRef]
  75. Wong, Wilson Kia Onn. 2024. The Sudden Disruptive Rise of Generative Artificial Intelligence? An Evaluation of Their Impact on Higher Education and the Global Workplace. Journal of Open Innovation Technology Market and Complexity 10: 100278. [Google Scholar] [CrossRef]
  76. Xiao, Shuai, Guipeng Lan, Jiachen Yang, Yang Li, and Jiabao Wen. 2024. Securing the Socio-Cyber World: Multiorder Attribute Node Association Classification for Manipulated Media. IEEE Transactions on Computational Social Systems 11: 4809–18. [Google Scholar] [CrossRef]
  77. Yankoski, Michael, Tim Weninger, and Walter Scheirer. 2020. An AI Early Warning System to Monitor Online Disinformation, Stop Violence, and Protect Elections. The Bulletin of the Atomic Scientists 76: 85–90. [Google Scholar] [CrossRef]
  78. Yankoski, Michael, Walter Scheirer, and Tim Weninger. 2021. Meme Warfare: AI Countermeasures to Disinformation Should Focus on Popular, Not Perfect, Fakes. The Bulletin of the Atomic Scientists 77: 119–23. [Google Scholar] [CrossRef]
  79. Yim, Iris Heung Yue. 2024. Artificial Intelligence Literacy in Primary Education: An Arts-Based Approach to Overcoming Age and Gender Barriers. Computers and Education: Artificial Intelligence 7: 100321. [Google Scholar] [CrossRef]
  80. Zhang, Xiao, Zhixin Ma, Ze Zhang, Qijuan Sun, and Jun Yan. 2018. A Review of Community Detection Algorithms Based on Modularity Optimization. Journal of Physics: Conference Series 1069: 012123. [Google Scholar] [CrossRef]
Figure 1. PRISMA flow diagram of the article selection process (adapted from Page et al. 2021). Source: Authors’ own elaboration.
Figure 2. Journals contributing the largest number of publications in the sample (left) and the most frequent research areas (right). Source: Authors’ own elaboration.
Figure 3. Keyword co-occurrence map (author keywords) (minimum frequency = 2). Source: Authors’ own elaboration based on RStudio.
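The co-occurrence maps in Figures 3–5 rest on a simple counting step: tallying how often pairs of author keywords appear together in the same article, and keeping only pairs seen at least twice (minimum frequency = 2). The study itself built these maps in RStudio; purely as an illustration of that counting step, the Python sketch below (function name and toy data are our own, not taken from the study) shows how such an edge list can be derived:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(keyword_lists, min_freq=2):
    """Count keyword pairs across articles; keep pairs seen >= min_freq times."""
    pair_counts = Counter()
    for keywords in keyword_lists:
        # Sort and deduplicate so each unordered pair is counted once per article
        for a, b in combinations(sorted(set(keywords)), 2):
            pair_counts[(a, b)] += 1
    return {pair: n for pair, n in pair_counts.items() if n >= min_freq}

# Toy example: author keywords of three hypothetical articles
articles = [
    ["artificial intelligence", "disinformation", "fake news"],
    ["artificial intelligence", "disinformation"],
    ["deepfakes", "disinformation"],
]
edges = cooccurrence_edges(articles)
# Only the pair that co-occurs twice survives the frequency threshold
```

In practice, bibliometric packages such as bibliometrix in R perform this counting (plus normalization and clustering) directly on Scopus or Web of Science exports; the sketch above only conveys the core idea behind the threshold reported in the caption.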
Figure 4. Evolution of the keyword co-occurrence map during the period of analysis (2020–2022). Source: Authors’ own elaboration based on RStudio.
Figure 5. Evolution of the keyword co-occurrence map during the period of analysis (2023–2025). Source: Authors’ own elaboration based on RStudio.
Table 1. Authors with the highest number of publications on the topic and number of citations in Scopus. Source: Authors’ own elaboration.
| Author/s | Publications | Citations in Scopus | Refs. |
|---|---|---|---|
| Berta García-Orosa | García-Orosa, B. (2021). Disinformation, social media, bots, and astroturfing: the fourth wave of digital democracy. El profesional de la información. https://doi.org/10.3145/epi.2021.nov.03 | 35 | (García-Orosa 2021) |
| | Forja-Pena, T., García-Orosa, B., & López-García, X. (2024). The ethical revolution: Challenges and reflections in the face of the integration of artificial intelligence in digital journalism. Communication & Society, 237–54. https://doi.org/10.15581/003.37.3.237-254 | 17 | (Forja-Pena et al. 2024) |
| Alejandro Martín, Guillermo Villar-Rodríguez | Villar-Rodríguez, G., Souto-Rico, M., & Martín, A. (2022). Virality, only the tip of the iceberg: ways of spread and interaction around COVID-19 misinformation in Twitter. Communication & Society, 239–56. https://doi.org/10.15581/003.35.2.239-256 | 14 | (Villar-Rodríguez et al. 2022) |
| | Noguera Vivo, J. M., Grandío-Pérez, M. del M., Villar-Rodríguez, G., Martín, A., & Camacho, D. (2023). Desinformación y vacunas en redes: Comportamiento de los bulos en Twitter. Revista Latina de Comunicación Social, 81, 44–62. https://doi.org/10.4185/rlcs-2023-1820 | 11 | (Noguera Vivo et al. 2023) |
| Walter J. Scheirer, Tim Weninger, Michael G. Yankoski | Yankoski, M., Weninger, T., & Scheirer, W. (2020). An AI early warning system to monitor online disinformation, stop violence, and protect elections. The Bulletin of the Atomic Scientists, 76(2), 85–90. https://doi.org/10.1080/00963402.2020.1728976 | 15 | (Yankoski et al. 2020) |
| | Yankoski, M., Scheirer, W., & Weninger, T. (2021). Meme warfare: AI countermeasures to disinformation should focus on popular, not perfect, fakes. The Bulletin of the Atomic Scientists, 77(3), 119–23. https://doi.org/10.1080/00963402.2021.1912093 | 14 | (Yankoski et al. 2021) |
| Xosé López-García | Vizoso, Á., Vaz-Álvarez, M., & López-García, X. (2021). Fighting deepfakes: Media and Internet giants’ converging and diverging strategies against hi-tech misinformation. Media and Communication, 9(1), 291–300. https://doi.org/10.17645/mac.v9i1.3494 | 55 | (Vizoso et al. 2021) |
| | Forja-Pena, T., García-Orosa, B., & López-García, X. (2024). The ethical revolution: Challenges and reflections in the face of the integration of artificial intelligence in digital journalism. Communication & Society, 237–54. https://doi.org/10.15581/003.37.3.237-254 | 17 | (Forja-Pena et al. 2024) |
Table 2. Annual evolution of publications and keyword prominence in the AI–disinformation research field (2020–2025). Source: Authors’ own elaboration.
| Year | Number of Papers | Number of Keywords in the Graph | Most Frequent Keywords |
|---|---|---|---|
| 2020 | 6 | 8 | disinformation |
| 2021 | 13 | 11 | disinformation |
| 2022 | 8 | 16 | artificial intelligence; disinformation; fake news |
| 2023 | 15 | 24 | artificial intelligence; disinformation; fake news |
| 2024 | 18 | 28 | artificial intelligence; disinformation; fake news |
| 2025 | 2 | 33 | artificial intelligence; fake news; misinformation; disinformation |
Table 3. Typology of methodologies used in the studies included in the sample.
| Methods Type | Total Number | Percentage |
|---|---|---|
| Qualitative | 33 | 55% |
| Quantitative | 17 | 26.7% |
| Mixed Methods | 12 | 18.3% |
Source: Authors’ own elaboration.
Table 4. Thematic approaches of the studies included in the sample.
| Thematic Approach | Description of the Approach | Main Object of Analysis |
|---|---|---|
| AI as a tool to combat disinformation | Studies analyzing the use of AI algorithms to identify, classify, and track disinformative content (text, image, audio, or video). | Automated detection systems, algorithmic verification, diffusion pattern analysis. |
| AI as a source of disinformation | Research focusing on AI as an agent that produces or amplifies false or misleading content. | Generative models, automation of false narratives, scalability of disinformation. |
| AI for the creation of deepfakes | A specific line of research addressing the synthetic generation of hyper-realistic images, audio, and video for disinformative purposes. | Political, media, or personal deepfakes; audiovisual manipulation. |
| Regulation and ethics of AI and disinformation | Normative and legal approaches analyzing regulatory frameworks, public policies, and self-regulation mechanisms. | Legislation, ethical codes, algorithmic governance, platform accountability. |
| AI for education and media literacy | Studies exploring the use of AI to educate citizens and enhance resilience to disinformation. | Educational tools, intelligent assistants, personalized learning. |
Source: Authors’ own elaboration.
Table 5. Intentional malicious deployments of large language models (LLMs) and generative AI in real-world contexts.
| Goal | Application | Example | Proof-of-Concept |
|---|---|---|---|
| Dishonesty | Automated essay writing and academic dishonesty | Students could use LLMs to generate essays, research papers, or assignments, bypassing the learning process and undermining academic integrity | Inputting a prompt like “Write a 2000-word essay on the impact of the Industrial Revolution on European society” into an LLM and receiving a detailed, well-structured essay in return |
| | Generating fake research papers | LLMs can be used to produce fake research papers with fabricated data, results, and references, potentially polluting academic databases or misleading researchers | Feeding an LLM a prompt such as “Generate a research paper on the effects of a drug called ‘Zyphorin’ on Alzheimer’s disease” and obtaining a seemingly legitimate paper |
| Propaganda | Impersonating celebrities or public figures | LLMs can generate statements, tweets, or messages that mimic the style of celebrities or public figures, leading to misinformation or defamation | Inputting “Generate a tweet in the style of [Celebrity Name] discussing climate change” and getting a fabricated tweet that appears genuine |
| | Automated propaganda generation | Governments or organizations could use LLMs to produce propaganda material at scale, targeting different demographics or regions with tailored messages | Inputting “Generate a propaganda article promoting the benefits of a fictional government policy ‘GreenFuture Initiative’” and receiving a detailed article |
| | Creating fake historical documents or texts | LLMs can be used to fabricate historical documents, letters, or texts, potentially misleading historians or altering public perception of events | Prompting an LLM with “Generate a letter from Napoleon Bonaparte to Josephine discussing his strategies for the Battle of Waterloo” to produce a fabricated historical document |
| Deception | Generating fake product reviews | Businesses could use LLMs to generate positive reviews for their products or negative reviews for competitors, misleading consumers | Inputting “Generate 10 positive reviews for a fictional smartphone brand ‘NexaPhone’” and obtaining seemingly genuine user reviews |
| | Generating realistic but fake personal stories or testimonies | LLMs can be used to craft personal stories or testimonies for use in deceptive marketing, false legal claims, or to manipulate public sentiment | Inputting “Generate a personal story of someone benefiting from a fictional health supplement ‘VitaBoost’” to obtain a convincing but entirely fabricated testimony |
| | Crafting convincing scam emails | LLMs can be used to craft highly personalized scam emails that appear to come from legitimate sources, such as banks or service providers | Feeding the model information about a fictional user and a prompt like “Generate an email from a bank notifying the user of suspicious account activity” to produce a scam email |
| | Crafting legal documents with hidden clauses | Unscrupulous entities could use LLMs to generate legal documents that contain hidden, misleading, or exploitative clauses | Prompting an LLM with “Generate a rental agreement that subtly gives the landlord the right to increase rent without notice” to produce a deceptive legal document |
Source: Ferrara (2024).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
