Artificial Intelligence and Disinformation: A State-of-the-Art Review Through a Systematized Literature Review
Abstract
1. Introduction
- RQ1. Which authors have published most extensively on artificial intelligence and disinformation?
- RQ2. Which academic journals publish most frequently on this topic?
- RQ3. Which keywords are most commonly used in this field?
- RQ4. How has the distribution of keywords evolved over time?
- RQ5. Which methodological approaches are most frequently employed?
- RQ6. What are the main research perspectives on artificial intelligence and disinformation, and which potential future research directions can be identified?
2. Materials and Methods
- Descriptive analysis (distribution by year, language, area, and journal quartile).
- Qualitative thematic coding of objectives, methods, and findings.
- AI as a tool to combat disinformation (23 articles).
- AI as a source or amplifier of disinformation (9 articles).
- Regulation, ethics, and governance of AI in relation to disinformation (9 articles).
- Deepfakes and audiovisual manipulation (15 articles).
- AI as an educational tool and media literacy (4 articles).
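The category counts above sum to a sample of 60 articles, so each thematic share follows directly. A minimal sketch (variable names are illustrative, figures taken from the list above) of how the distribution could be tallied:

```python
from collections import OrderedDict

# Article counts per thematic category, as reported in the review.
categories = OrderedDict([
    ("AI as a tool to combat disinformation", 23),
    ("AI as a source or amplifier of disinformation", 9),
    ("Regulation, ethics, and governance", 9),
    ("Deepfakes and audiovisual manipulation", 15),
    ("AI as an educational tool and media literacy", 4),
])

total = sum(categories.values())  # 60 articles in the final sample
shares = {name: round(100 * n / total, 1) for name, n in categories.items()}

for name, pct in shares.items():
    print(f"{name}: {pct}%")
```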
3. Results
3.1. Scientific Output by Journals and Thematic Areas
3.2. Authors with the Highest Number of Publications
3.3. Most Frequently Used Keywords in the Sample
3.4. Most Frequently Used Keywords over Time
3.5. Research Techniques Used in the Articles
3.6. Research Approaches to Artificial Intelligence and Disinformation
3.7. Artificial Intelligence as a Source of Disinformation
3.8. Regulation and Ethics of Artificial Intelligence and Disinformation
3.9. Artificial Intelligence as a Tool to Combat Disinformation
3.10. Artificial Intelligence for the Creation of Deepfakes
3.11. Artificial Intelligence for Education and Media Literacy
4. Discussion and Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Acosta-Enriquez, Benicio Gonzalo, Marco Agustín Arbulú Ballesteros, Carmen Graciela Arbulu Perez Vargas, Milca Naara Orellana Ulloa, Cristian Raymound Gutiérrez Ulloa, Johanna Micaela Pizarro Romero, Néstor Daniel Gutiérrez Jaramillo, Héctor Ulises Cuenca Orellana, Diego Xavier Ayala Anzoátegui, and Carlos López Roca. 2024. Knowledge, Attitudes, and Perceived Ethics Regarding the Use of ChatGPT among Generation Z University Students. International Journal for Educational Integrity 20: 10. [Google Scholar] [CrossRef]
- Adams, Zoë, Magda Osman, Christos Bechlivanidis, and Björn Meder. 2023. (Why) Is Misinformation a Problem? Perspectives on Psychological Science: A Journal of the Association for Psychological Science 18: 1436–63. [Google Scholar] [CrossRef]
- Adetayo, Adebowale Jeremy. 2023. Artificial Intelligence Chatbots in Academic Libraries: The Rise of ChatGPT. Library Hi Tech News 40: 18–21. [Google Scholar] [CrossRef]
- Aïmeur, Esma, Sabrine Amri, and Gilles Brassard. 2023. Fake News, Disinformation and Misinformation in Social Media: A Review. Social Network Analysis and Mining 13: 30. [Google Scholar] [CrossRef] [PubMed]
- Ali, Safinah, Daniella DiPaola, Irene Lee, Victor Sindato, Grace Kim, Ryan Blumofe, and Cynthia Breazeal. 2021. Children as Creators, Thinkers and Citizens in an AI-Driven Future. Computers and Education: Artificial Intelligence 2: 100040. [Google Scholar] [CrossRef]
- Appel, Markus, and Fabian Prietzel. 2022. The Detection of Political Deepfakes. Journal of Computer-Mediated Communication: JCMC 27: zmac008. [Google Scholar] [CrossRef]
- Assenmacher, Dennis, Lena Clever, Lena Frischlich, Thorsten Quandt, Heike Trautmann, and Christian Grimme. 2020. Demystifying Social Bots: On the Intelligence of Automated Social Media Actors. Social Media + Society 6: 205630512093926. [Google Scholar] [CrossRef]
- Battista, Daniele, and Gabriele Uva. 2023. Exploring the Legal Regulation of Social Media in Europe: A Review of Dynamics and Challenges—Current Trends and Future Developments. Sustainability 15: 4144. [Google Scholar] [CrossRef]
- Bontridder, Noémi, and Yves Poullet. 2021. The Role of Artificial Intelligence in Disinformation. Data & Policy 3: e32. [Google Scholar] [CrossRef]
- Brennen, J. Scott, Felix M. Simon, and Rasmus Kleis Nielsen. 2021. Beyond (Mis)Representation: Visuals in COVID-19 Misinformation. The International Journal of Press/Politics 26: 277–99. [Google Scholar] [CrossRef]
- Calvo, Dafne, Lorena Cano-Orón, and Almudena Esteban. 2020. Materiales y Evaluación Del Nivel de Alfabetización Para El Reconocimiento de Bots Sociales En Contextos de Desinformación Política. Revista ICONO14 18: 111–37. [Google Scholar] [CrossRef]
- Canavilhas, Joao. 2022. Inteligencia Artificial Aplicada al Periodismo: Estudio de Caso Del Proyecto ‘A European Perspective’ (UER). Revista Latina de Comunicación Social 80: 1–13. [Google Scholar] [CrossRef]
- Casero-Ripollés, Andreu. 2018. Research on Political Information and Social Media: Key Points and Challenges for the Future. El Profesional de La Información 27: 964–74. [Google Scholar] [CrossRef]
- Codina, Lluís. 2017. Bases de datos Académicas para Investigar en Comunicación Social: Revisiones Sistematizadas, Grupo Óptimo y Protocolo de Búsqueda. Lluís Codina. July 12. Available online: https://www.lluiscodina.com/bases-de-datos-academicasi-comunicacion-social/ (accessed on 5 September 2025).
- Cuartielles, Roger, Xavier Ramon-Vegas, and Carles Pont-Sorribes. 2023. Retraining Fact-Checkers: The Emergence of ChatGPT in Information Verification. El Profesional de La Información 32: e320515. [Google Scholar] [CrossRef]
- Das, Anubrata, Houjiang Liu, Venelin Kovatchev, and Matthew Lease. 2023. The State of Human-Centered NLP Technology for Fact-Checking. Information Processing & Management 60: 103219. [Google Scholar] [CrossRef]
- Douven, Igor, and Rainer Hegselmann. 2021. Mis- and Disinformation in a Bounded Confidence Model. Artificial Intelligence 291: 103415. [Google Scholar] [CrossRef]
- Dwivedi, Yogesh K., Laurie Hughes, Elvira Ismagilova, Gert Aarts, Crispin Coombs, Tom Crick, Yanqing Duan, Rohita Dwivedi, John Edwards, Aled Eirug, and et al. 2021. Artificial Intelligence (AI): Multidisciplinary Perspectives on Emerging Challenges, Opportunities, and Agenda for Research, Practice and Policy. International Journal of Information Management 57: 101994. [Google Scholar] [CrossRef]
- Ferrara, Emilio. 2024. GenAI against Humanity: Nefarious Applications of Generative Artificial Intelligence and Large Language Models. Journal of Computational Social Science 7: 549–69. [Google Scholar] [CrossRef]
- Flores-Saviaga, Claudia, Shangbin Feng, and Saiph Savage. 2022. Datavoidant: An AI System for Addressing Political Data Voids on Social Media. Proceedings of the ACM on Human-Computer Interaction 6: 503. [Google Scholar] [CrossRef]
- Forja-Pena, Tania, Berta García-Orosa, and Xosé López-García. 2024. The Ethical Revolution: Challenges and Reflections in the Face of the Integration of Artificial Intelligence in Digital Journalism. Communication & Society 37: 237–54. [Google Scholar] [CrossRef]
- Gambín, Ángel Fernández, Anis Yazidi, Athanasios Vasilakos, Hårek Haugerud, and Youcef Djenouri. 2024. Deepfakes: Current and Future Trends. Artificial Intelligence Review 57: 64. [Google Scholar] [CrossRef]
- García-Orosa, Berta. 2021. Disinformation, Social Media, Bots, and Astroturfing: The Fourth Wave of Digital Democracy. El Profesional de La Información 30: e300603. [Google Scholar] [CrossRef]
- García-Ull, Francisco José. 2021. «Deepfakes»: El Pròxim Repte En La Detecció de Notícies Falses. Anàlisi 64: 103–20. [Google Scholar] [CrossRef]
- Gasaymeh, Al-Mothana M., Mohammad A. Beirat, and Asma’a A. Abu Qbeita. 2024. University Students’ Insights of Generative Artificial Intelligence (AI) Writing Tools. Education Sciences 14: 1062. [Google Scholar] [CrossRef]
- Godulla, Alexander, Christian P. Hoffmann, and Daniel Seibert. 2021. Dealing with Deepfakes—An Interdisciplinary Examination of the State of Research and Implications for Communication Studies. Studies in Communication and Media 10: 72–96. [Google Scholar] [CrossRef]
- Gómez-de-Ágreda, Ángel, Claudio Feijóo, and Idoia-Ana Salazar-García. 2021. Una Nueva Taxonomía Del Uso de La Imagen En La Conformación Interesada Del Relato Digital. Deep Fakes e Inteligencia Artificial. El Profesional de La Información 30: e300216. [Google Scholar] [CrossRef]
- Graves, Lucas. 2016. Deciding What’s True: The Rise of Political Fact-Checking in American Journalism. New York: Columbia University Press. [Google Scholar]
- Guallar, Javier, Lluís Codina, Pere Freixa, and Mario Pérez-Montoro. 2020. Desinformación, bulos, curación y verificación. Revisión de estudios en Iberoamérica 2017–2020. Telos 22: 595–613. [Google Scholar] [CrossRef]
- Gutiérrez-Caneda, Beatriz, Jorge Vázquez-Herrero, and Xosé López-García. 2023. AI Application in Journalism: ChatGPT and the Uses and Risks of an Emergent Technology. El Profesional de La Información 32: e320514. [Google Scholar] [CrossRef]
- Hameleers, Michael. 2024. Cheap versus Deep Manipulation: The Effects of Cheapfakes versus Deepfakes in a Political Setting. International Journal of Public Opinion Research 36: edae004. [Google Scholar] [CrossRef]
- Hausken, Liv. 2024. Photorealism versus Photography. AI-Generated Depiction in the Age of Visual Disinformation. Journal of Aesthetics & Culture 16: 2340787. [Google Scholar] [CrossRef]
- Hussain, Shehzeen, Paarth Neekhara, Brian Dolhansky, Joanna Bitton, Cristian Canton Ferrer, Julian McAuley, and Farinaz Koushanfar. 2022. Exposing Vulnerabilities of Deepfake Detection Systems with Robust Attacks. Digital Threats: Research and Practice 3: 30. [Google Scholar] [CrossRef]
- Kaplan, Andreas, and Michael Haenlein. 2019. Siri, Siri, in My Hand: Who’s the Fairest in the Land? On the Interpretations, Illustrations, and Implications of Artificial Intelligence. Business Horizons 62: 15–25. [Google Scholar] [CrossRef]
- Koplin, Julian J. 2023. Dual-Use Implications of AI Text Generation. Ethics and Information Technology 25: 32. [Google Scholar] [CrossRef]
- Kozik, Rafał, Aleksandra Pawlicka, Marek Pawlicki, Michał Choraś, Wojciech Mazurczyk, and Krzysztof Cabaj. 2024. A Meta-Analysis of State-of-the-Art Automated Fake News Detection Methods. IEEE Transactions on Computational Social Systems 11: 5219–29. [Google Scholar] [CrossRef]
- Lian, Ying, Huiting Tang, Mengting Xiang, and Xuefan Dong. 2024. Public Attitudes and Sentiments toward ChatGPT in China: A Text Mining Analysis Based on Social Media. Technology in Society 76: 102442. [Google Scholar] [CrossRef]
- Liu, Xingyu, Li Qi, Laurent Wang, and Miriam J. Metzger. 2025. Checking the Fact-Checkers: The Role of Source Type, Perceived Credibility, and Individual Differences in Fact-Checking Effectiveness. Communication Research 52: 719–46. [Google Scholar] [CrossRef]
- Llorca-Asensi, Elena, Alexander Sánchez Díaz, Maria-Elena Fabregat-Cabrera, and Raúl Ruiz-Callado. 2021. ‘Why Can’t We?’ Disinformation and Right to Self-Determination. The Catalan Conflict on Twitter. Social Sciences 10: 383. [Google Scholar] [CrossRef]
- Lu, Zhuoran, Patrick Li, Weilong Wang, and Ming Yin. 2022. The Effects of AI-Based Credibility Indicators on the Detection and Spread of Misinformation under Social Influence. Proceedings of the ACM on Human-Computer Interaction 6: 461. [Google Scholar] [CrossRef]
- Łabuz, Mateusz, and Christopher Nehring. 2024. On the Way to Deep Fake Democracy? Deep Fakes in Election Campaigns in 2023. European Political Science 23: 454–73. [Google Scholar] [CrossRef]
- Magallón Rosa, Raúl. 2019. La (No) Regulación de La Desinformación En La Unión Europea. Una Perspectiva Comparada. Revista de Derecho Político 1: 319–46. [Google Scholar] [CrossRef]
- Marsden, Chris, Trisha Meyer, and Ian Brown. 2020. Platform Values and Democratic Elections: How Can the Law Regulate Digital Disinformation? Computer Law & Security Review 36: 105373. [Google Scholar] [CrossRef]
- Miller, Seumas. 2023. Cognitive Warfare: An Ethical Analysis. Ethics and Information Technology 25: 46. [Google Scholar] [CrossRef]
- Millière, Raphaël. 2022. Deep Learning and Synthetic Media. Synthese 200: 231. [Google Scholar] [CrossRef]
- Montoro-Montarroso, Andrés, Javier Cantón-Correa, Paolo Rosso, Berta Chulvi, Ángel Panizo-Lledot, Javier Huertas-Tato, Blanca Calvo-Figueras, M. José Rementeria, and Juan Gómez-Romero. 2023. Fighting Disinformation with Artificial Intelligence: Fundamentals, Advances and Challenges. El Profesional de La Información 32: e320322. [Google Scholar] [CrossRef]
- Murillo-Ligorred, Víctor, Nora Ramos-Vallecillo, Irene Covaleda, and Leticia Fayos. 2023. Knowledge, Integration and Scope of Deepfakes in Arts Education: The Development of Critical Thinking in Postgraduate Students in Primary Education and Master’s Degree in Secondary Education. Education Sciences 13: 1073. [Google Scholar] [CrossRef]
- Naeem, Bilal, Aymen Khan, Mirza Omer Beg, and Hasan Mujtaba. 2020. A Deep Learning Framework for Clickbait Detection on Social Area Network Using Natural Language Cues. Journal of Computational Social Science 3: 231–43. [Google Scholar] [CrossRef]
- Nasir, Jamal Abdul, Osama Subhani Khan, and Iraklis Varlamis. 2021. Fake News Detection: A Hybrid CNN-RNN Based Deep Learning Approach. International Journal of Information Management Data Insights 1: 100007. [Google Scholar] [CrossRef]
- Ng, Davy Tsz Kit, Jac Ka Lok Leung, Samuel Kai Wah Chu, and Maggie Shen Qiao. 2021. Conceptualizing AI Literacy: An Exploratory Review. Computers and Education: Artificial Intelligence 2: 100041. [Google Scholar] [CrossRef]
- Noguera Vivo, José Manuel, María del Mar Grandío-Pérez, Guillermo Villar-Rodríguez, Alejandro Martín, and David Camacho. 2023. Desinformación y Vacunas En Redes: Comportamiento de Los Bulos En Twitter. Revista Latina de Comunicación Social 81: 44–62. [Google Scholar] [CrossRef]
- Osamor Ifelebuegu, Augustine, Peace Kulume, and Perpetua Cherukut. 2023. Chatbots and AI in Education (AIEd) Tools: The Good, the Bad, and the Ugly. Journal of Applied Learning & Teaching 6: 332–45. [Google Scholar] [CrossRef]
- Page, Matthew J., Joanne E. McKenzie, Patrick M. Bossuyt, Isabelle Boutron, Tammy C. Hoffmann, Cynthia D. Mulrow, Larissa Shamseer, Jennifer M. Tetzlaff, Elie A. Akl, Sue E. Brennan, and et al. 2021. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ 372: n71. [Google Scholar] [CrossRef] [PubMed]
- Pareek, Saumya, Niels van Berkel, Eduardo Velloso, and Jorge Goncalves. 2024. Effect of Explanation Conceptualisations on Trust in AI-Assisted Credibility Assessment. Proceedings of the ACM on Human-Computer Interaction 8: 383. [Google Scholar] [CrossRef]
- Pavlik, John V. 2023. Collaborating with ChatGPT: Considering the Implications of Generative Artificial Intelligence for Journalism and Media Education. Journalism & Mass Communication Educator 78: 84–93. [Google Scholar] [CrossRef]
- Polyportis, Athanasios, and Nikolaos Pahos. 2024. Navigating the Perils of Artificial Intelligence: A Focused Review on ChatGPT and Responsible Research and Innovation. Humanities & Social Sciences Communications 11: 107. [Google Scholar] [CrossRef]
- Porlezza, Colin. 2023. Promoting Responsible AI: A European Perspective on the Governance of Artificial Intelligence in Media and Journalism. Communications 48: 370–94. [Google Scholar] [CrossRef]
- Regulation (EU) 2024/1689. n.d. EUR-Lex. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689 (accessed on 6 March 2026).
- Romero Moreno, Felipe. 2024. Generative AI and Deepfakes: A Human Rights Approach to Tackling Harmful Content. International Review of Law Computers & Technology 38: 297–326. [Google Scholar] [CrossRef]
- Santos, Fátima C. Carrilho. 2023. Artificial Intelligence in Automated Detection of Disinformation: A Thematic Analysis. Journalism and Media 4: 679–87. [Google Scholar] [CrossRef]
- Sánchez González, María, Hada M. Sánchez Gonzales, and Sergio Martínez Gonzalo. 2022. Inteligencia Artificial En Verificadores Hispanos de La Red IFCN: Proyectos Innovadores y Percepción de Expertos y Profesionales. Estudios Sobre El Mensaje Periodístico 28: 867–79. [Google Scholar] [CrossRef]
- Sánchez-Serrano, Silvia, Inmaculada Pedraza-Navarro, and Macarena Donoso-González. 2022. ¿Cómo Hacer Una Revisión Sistemática Siguiendo El Protocolo PRISMA?: Usos y Estrategias Fundamentales Para Su Aplicación En El Ámbito Educativo a Través de Un Caso Práctico. Bordón Revista de Pedagogía 74: 51–66. [Google Scholar] [CrossRef]
- Scimago Journal & Country Rank. n.d. Scimagojr.com. Available online: https://www.scimagojr.com/ (accessed on 6 March 2026).
- Shahid, Wajiha, Bahman Jamshidi, Saqib Hakak, Haruna Isah, Wazir Zada Khan, Muhammad Khurram Khan, and Kim-Kwang Raymond Choo. 2024. Detecting and Mitigating the Dissemination of Fake News: Challenges and Future Research Opportunities. IEEE Transactions on Computational Social Systems 11: 4649–62. [Google Scholar] [CrossRef]
- Thomson, T. J., Ryan J. Thomas, and Phoebe Matich. 2024. Generative Visual AI in News Organizations: Challenges, Opportunities, Perceptions, and Policies. Digital Journalism 13: 1693–714. [Google Scholar] [CrossRef]
- Tricco, Andrea C., Erin Lillie, Wasifa Zarin, Kelly K. O’Brien, Heather Colquhoun, Danielle Levac, David Moher, Micah D. J. Peters, Tanya Horsley, Laura Weeks, and et al. 2018. PRISMA Extension for Scoping Reviews (PRISMA-ScR): Checklist and Explanation. Annals of Internal Medicine 169: 467–73. [Google Scholar] [CrossRef]
- Vaccari, Cristian, and Andrew Chadwick. 2020. Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News. Social Media + Society 6: 205630512090340. [Google Scholar] [CrossRef]
- Vicari, Rosa, and Nadejda Komendatova. 2023. Systematic Meta-Analysis of Research on AI Tools to Deal with Misinformation on Social Media during Natural and Anthropogenic Hazards and Disasters. Humanities & Social Sciences Communications 10: 332. [Google Scholar] [CrossRef]
- Victor, Bryan G., Rebeccah L. Sokol, Lauri Goldkind, and Brian E. Perron. 2023. Recommendations for Social Work Researchers and Journal Editors on the Use of Generative AI and Large Language Models. Journal of the Society for Social Work and Research 14: 563–77. [Google Scholar] [CrossRef]
- Villar-Rodríguez, Guillermo, Mónica Souto-Rico, and Alejandro Martín. 2022. Virality, Only the Tip of the Iceberg: Ways of Spread and Interaction around COVID-19 Misinformation in Twitter. Communication & Society 35: 239–56. [Google Scholar] [CrossRef]
- Vizoso, Ángel, Martín Vaz-Álvarez, and Xosé López-García. 2021. Fighting Deepfakes: Media and Internet Giants’ Converging and Diverging Strategies against Hi-Tech Misinformation. Media and Communication 9: 291–300. [Google Scholar] [CrossRef]
- Wach, Krzysztof, Cong Doanh Duong, Joanna Ejdys, Rūta Kazlauskaitė, Pawel Korzynski, Grzegorz Mazurek, Joanna Paliszkiewicz, and Ewa Ziemba. 2023. The Dark Side of Generative Artificial Intelligence: A Critical Analysis of Controversies and Risks of ChatGPT. Entrepreneurial Business and Economics Review 11: 7–30. [Google Scholar] [CrossRef]
- Weikmann, Teresa, Hannah Greber, and Alina Nikolaou. 2025. After Deception: How Falling for a Deepfake Affects the Way We See, Hear, and Experience Media. The International Journal of Press/Politics 30: 187–210. [Google Scholar] [CrossRef]
- Wojcieszak, Magdalena, Arti Thakur, João Fernando Ferreira Gonçalves, Andreu Casas, Ericka Menchen-Trevino, and Miriam Boon. 2021. Can AI Enhance People’s Support for Online Moderation and Their Openness to Dissimilar Political Views? Journal of Computer-Mediated Communication: JCMC 26: 223–43. [Google Scholar] [CrossRef]
- Wong, Wilson Kia Onn. 2024. The Sudden Disruptive Rise of Generative Artificial Intelligence? An Evaluation of Their Impact on Higher Education and the Global Workplace. Journal of Open Innovation Technology Market and Complexity 10: 100278. [Google Scholar] [CrossRef]
- Xiao, Shuai, Guipeng Lan, Jiachen Yang, Yang Li, and Jiabao Wen. 2024. Securing the Socio-Cyber World: Multiorder Attribute Node Association Classification for Manipulated Media. IEEE Transactions on Computational Social Systems 11: 4809–18. [Google Scholar] [CrossRef]
- Yankoski, Michael, Tim Weninger, and Walter Scheirer. 2020. An AI Early Warning System to Monitor Online Disinformation, Stop Violence, and Protect Elections. The Bulletin of the Atomic Scientists 76: 85–90. [Google Scholar] [CrossRef]
- Yankoski, Michael, Walter Scheirer, and Tim Weninger. 2021. Meme Warfare: AI Countermeasures to Disinformation Should Focus on Popular, Not Perfect, Fakes. The Bulletin of the Atomic Scientists 77: 119–23. [Google Scholar] [CrossRef]
- Yim, Iris Heung Yue. 2024. Artificial Intelligence Literacy in Primary Education: An Arts-Based Approach to Overcoming Age and Gender Barriers. Computers and Education: Artificial Intelligence 7: 100321. [Google Scholar] [CrossRef]
- Zhang, Xiao, Zhixin Ma, Ze Zhang, Qijuan Sun, and Jun Yan. 2018. A Review of Community Detection Algorithms Based on Modularity Optimization. Journal of Physics. Conference Series 1069: 012123. [Google Scholar] [CrossRef]





| Author/s | Publications | Citations in Scopus | Refs. |
|---|---|---|---|
| Berta García-Orosa | García-Orosa, B. (2021). Disinformation, social media, bots, and astroturfing: the fourth wave of digital democracy. El profesional de la información. https://doi.org/10.3145/epi.2021.nov.03 | 35 | (García-Orosa 2021) |
| | Forja-Pena, T., García-Orosa, B., & López-García, X. (2024). The ethical revolution: Challenges and reflections in the face of the integration of artificial intelligence in digital journalism. Communication & Society, 237–54. https://doi.org/10.15581/003.37.3.237-254 | 17 | (Forja-Pena et al. 2024) |
| Alejandro Martín, Guillermo Villar-Rodríguez | Villar-Rodríguez, G., Souto-Rico, M., & Martín, A. (2022). Virality, only the tip of the iceberg: ways of spread and interaction around COVID-19 misinformation in Twitter. Communication & Society, 239–56. https://doi.org/10.15581/003.35.2.239-256 | 14 | (Villar-Rodríguez et al. 2022) |
| | Noguera Vivo, J. M., Grandío-Pérez, M. del M., Villar-Rodríguez, G., Martín, A., & Camacho, D. (2023). Desinformación y vacunas en redes: Comportamiento de los bulos en Twitter. Revista latina de comunicación social, 81, 44–62. https://doi.org/10.4185/rlcs-2023-1820 | 11 | (Noguera Vivo et al. 2023) |
| Walter J. Scheirer, Tim Weninger, Michael G. Yankoski | Yankoski, M., Weninger, T., & Scheirer, W. (2020). An AI early warning system to monitor online disinformation, stop violence, and protect elections. The Bulletin of the Atomic Scientists, 76(2), 85–90. https://doi.org/10.1080/00963402.2020.1728976 | 15 | (Yankoski et al. 2020) |
| | Yankoski, M., Scheirer, W., & Weninger, T. (2021). Meme warfare: AI countermeasures to disinformation should focus on popular, not perfect, fakes. The Bulletin of the Atomic Scientists, 77(3), 119–23. https://doi.org/10.1080/00963402.2021.1912093 | 14 | (Yankoski et al. 2021) |
| Xosé López-García | Vizoso, Á., Vaz-Álvarez, M., & López-García, X. (2021). Fighting deepfakes: Media and Internet giants’ converging and diverging strategies against hi-tech misinformation. Media and Communication, 9(1), 291–300. https://doi.org/10.17645/mac.v9i1.3494 | 55 | (Vizoso et al. 2021) |
| | Forja-Pena, T., García-Orosa, B., & López-García, X. (2024). The ethical revolution: Challenges and reflections in the face of the integration of artificial intelligence in digital journalism. Communication & Society, 237–54. https://doi.org/10.15581/003.37.3.237-254 | 17 | (Forja-Pena et al. 2024) |
| Year | Number of Papers | Number of Keywords in the Graph | Most Frequent Keywords |
|---|---|---|---|
| 2020 | 6 | 8 | disinformation |
| 2021 | 13 | 11 | disinformation |
| 2022 | 8 | 16 | artificial intelligence; disinformation; fake news |
| 2023 | 15 | 24 | artificial intelligence; disinformation; fake news |
| 2024 | 18 | 28 | artificial intelligence; disinformation; fake news |
| 2025 | 2 | 33 | artificial intelligence; fake news; misinformation; disinformation |
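The per-year keyword ranking above amounts to a frequency count over each year's author keywords. A minimal sketch of that tally, assuming hypothetical per-article keyword lists (the real study derives these from the 60 sampled papers):

```python
from collections import Counter

def top_keywords(articles, k=2):
    """Return the k most frequent keywords across a year's articles."""
    counts = Counter(kw for kws in articles for kw in kws)
    return [kw for kw, _ in counts.most_common(k)]

# Illustrative data only; keys are publication years, values are
# lists of author-keyword lists, one per article.
keywords_by_year = {
    2020: [["disinformation", "bots"], ["disinformation", "elections"]],
    2024: [["artificial intelligence", "fake news"],
           ["artificial intelligence", "disinformation"]],
}

for year, articles in sorted(keywords_by_year.items()):
    print(year, top_keywords(articles))
```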
| Method Type | Total Number | Percentage |
|---|---|---|
| Qualitative | 33 | 55% |
| Quantitative | 17 | 26.7% |
| Mixed Methods | 12 | 18.3% |
| Thematic Approach | Description of the Approach | Main Object of Analysis |
|---|---|---|
| AI as a tool to combat disinformation | Studies analyzing the use of AI algorithms to identify, classify, and track disinformative content (text, image, audio, or video). | Automated detection systems, algorithmic verification, diffusion pattern analysis. |
| AI as a source of disinformation | Research focusing on AI as an agent that produces or amplifies false or misleading content. | Generative models, automation of false narratives, scalability of disinformation. |
| AI for the creation of deepfakes | A specific line of research addressing the synthetic generation of hyper-realistic images, audio, and video for disinformative purposes. | Political, media, or personal deepfakes; audiovisual manipulation. |
| Regulation and ethics of AI and disinformation | Normative and legal approaches analyzing regulatory frameworks, public policies, and self-regulation mechanisms. | Legislation, ethical codes, algorithmic governance, platform accountability. |
| AI for education and media literacy | Studies exploring the use of AI to educate citizens and enhance resilience to disinformation. | Educational tools, intelligent assistants, personalized learning. |
| Goal | Application | Example | Proof-of-Concept |
|---|---|---|---|
| Dishonesty | Automated essay writing and academic dishonesty | Students could use LLMs to generate essays, research papers, or assignments, bypassing the learning process and undermining academic integrity | Inputting a prompt like “Write a 2000-word essay on the impact of the Industrial Revolution on European society” into an LLM and receiving a detailed, well-structured essay in return |
| | Generating fake research papers | LLMs can be used to produce fake research papers with fabricated data, results, and references, potentially polluting academic databases or misleading researchers | Feeding an LLM a prompt such as “Generate a research paper on the effects of a drug called ‘Zyphorin’ on Alzheimer’s disease” and obtaining a seemingly legitimate paper |
| Propaganda | Impersonating celebrities or public figures | LLMs can generate statements, tweets, or messages that mimic the style of celebrities or public figures, leading to misinformation or defamation | Inputting “Generate a tweet in the style of [Celebrity Name] discussing climate change” and getting a fabricated tweet that appears genuine |
| | Automated propaganda generation | Governments or organizations could use LLMs to produce propaganda material at scale, targeting different demographics or regions with tailored messages | Inputting “Generate a propaganda article promoting the benefits of a fictional government policy ‘GreenFuture Initiative’” and receiving a detailed article |
| | Creating fake historical documents or texts | LLMs can be used to fabricate historical documents, letters, or texts, potentially misleading historians or altering public perception of events | Prompting an LLM with “Generate a letter from Napoleon Bonaparte to Josephine discussing his strategies for the Battle of Waterloo” to produce a fabricated historical document |
| Deception | Generating fake product reviews | Businesses could use LLMs to generate positive reviews for their products or negative reviews for competitors, misleading consumers | Inputting “Generate 10 positive reviews for a fictional smartphone brand ‘NexaPhone’” and obtaining seemingly genuine user reviews |
| | Generating realistic but fake personal stories or testimonies | LLMs can be used to craft personal stories or testimonies for use in deceptive marketing, false legal claims, or to manipulate public sentiment | Inputting “Generate a personal story of someone benefiting from a fictional health supplement ‘VitaBoost’” to obtain a convincing but entirely fabricated testimony |
| | Crafting convincing scam emails | LLMs can be used to craft highly personalized scam emails that appear to come from legitimate sources, such as banks or service providers | Feeding the model information about a fictional user and a prompt like “Generate an email from a bank notifying the user of suspicious account activity” to produce a scam email |
| | Crafting legal documents with hidden clauses | Unscrupulous entities could use LLMs to generate legal documents that contain hidden, misleading, or exploitative clauses | Prompting an LLM with “Generate a rental agreement that subtly gives the landlord the right to increase rent without notice” to produce a deceptive legal document |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
García, J.C.; Rodríguez, A.S.; Rodríguez-Vázquez, A.-I. Artificial Intelligence and Disinformation: A State-of-the-Art Review Through a Systematized Literature Review. Soc. Sci. 2026, 15, 247. https://doi.org/10.3390/socsci15040247
