AI in Academic Metrics and Impact Analysis

A special issue of Publications (ISSN 2304-6775).

Deadline for manuscript submissions: 31 December 2026

Special Issue Editors


Guest Editor
School of Economics and Management, Beijing University of Technology, Beijing 100124, China
Interests: complex network methods and applications; scientific prediction and evaluation; data mining

Guest Editor
School of Economics and Management, Beijing University of Technology, Beijing 100124, China
Interests: technology forecasting; scientometrics; complex networks

Special Issue Information

Dear Colleagues,

For centuries, experimental and theoretical science have served as the two foundational paradigms of the scientific community, but a new generation of artificial intelligence (AI) is now giving rise to a new paradigm of scientific research. Applying AI technologies to scientific problems will greatly improve the efficiency of scientific investigation, and this gain in efficiency will, in turn, enhance AI capabilities, ultimately creating a research paradigm driven by the twin spiral of AI4Science and Science4AI. Imagine a future in which scientific research involves the automatic generation and verification of hypotheses, the intelligent discovery of new laws, and the automated proposal of new principles: a whole new era in science. How scientometrics can seize this wave of AI to build a more scientific and intelligent research evaluation system is a question that forward-thinking scientometricians need to consider and address.

This Special Issue aims to provide a platform for peers to discuss cutting-edge advances and key breakthroughs of AI in research evaluation. The application of AI promises to fundamentally change traditional academic assessment, citation analysis, and measures of research impact, significantly improving the accuracy, interpretability, and automation of evaluation. In the context of human–AI collaboration, AI can also serve as an intelligent assessment assistant, improving how efficiently academic outputs are understood and applied. This Special Issue particularly welcomes the following research topics: (1) AI approaches to identifying original research; (2) AI-based automation of research output screening and recommendation; (3) AI-based evaluation of high-value patents; (4) the impact and challenges of AI technology in scientometrics; and (5) the trustworthiness of AI in research evaluation, among others.

Dr. Guoqiang Liang
Dr. Shuo Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Publications is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • scientometric assessment
  • AI4Science
  • LLMs
  • AI agents
  • patent analysis

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

22 pages, 1449 KB  
Article
On the Vulnerability of Citation Metrics in the Era of Generative Artificial Intelligence
by Kay Smarsly
Publications 2026, 14(2), 23; https://doi.org/10.3390/publications14020023 - 11 Apr 2026
Abstract
Large language model (LLM) chatbots, as a widely used form of generative artificial intelligence, have reduced the marginal cost of producing publication-style manuscripts and have expanded feasible routes for manipulating citation metrics within the publishing ecosystem. Citation-based indicators (e.g., the h-index, the i10-index, and total citation counts) remain embedded in research evaluation and are sensitive to indexing practices of bibliographic databases, with Google Scholar providing broad coverage combined with comparatively limited curation. In this study, a systematic literature review is conducted to synthesize reported mechanisms of citation-metric manipulation and to examine limitations of citation-metric use, including evidence reported in civil engineering. A Google Scholar proof-of-concept case study examines whether the indexing of LLM-assisted, non-peer-reviewed documents with concentrated references to a target author is associated with changes in author-level citation metrics under platform-specific conditions. After indexing, a stepwise increase in author-level metrics is observed, demonstrating the feasibility of citation-metric manipulation under the platform-specific conditions. Finally, this paper discusses the implications for research integrity and citation manipulation in the era of generative artificial intelligence. It also presents recommendations for researchers, academic institutions and evaluation committees, publishers and editors, bibliographic database providers, and funding institutions and policymakers. Full article
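The citation-based indicators named in this abstract have simple operational definitions: the h-index is the largest h such that an author has at least h papers with at least h citations each, and the i10-index counts papers with at least 10 citations. A minimal illustrative sketch of both (not code from the paper, and independent of any particular database):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank  # this paper still supports a larger h
        else:
            break  # remaining papers are cited even less
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for c in citations if c >= 10)

profile = [25, 8, 5, 3, 3, 1]  # hypothetical per-paper citation counts
print(h_index(profile))    # 3
print(i10_index(profile))  # 1
```

Because both indicators depend only on raw per-paper citation counts, a handful of indexed documents concentrating references on one author can shift them stepwise, which is the vulnerability the study demonstrates.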
(This article belongs to the Special Issue AI in Academic Metrics and Impact Analysis)

20 pages, 602 KB  
Article
Policies and Guidelines for the Use of Artificial Intelligence in Latin American Journals Indexed in Scopus and Classified According to the Scimago Journal Rank (SJR)
by Cristian Zahn-Muñoz, Patricio Viancos-González, Nancy Alarcón-Henríquez, Bastián Aravena-Niño and Ezequiel Martínez-Rojas
Publications 2026, 14(1), 17; https://doi.org/10.3390/publications14010017 - 6 Mar 2026
Abstract
The emergence of artificial intelligence tools in scientific production is generating significant challenges for scientific integrity and editorial governance, prompting journals and publishers to develop normative guidelines for their use. This study analyzes the current state of guideline implementation among Latin American journals indexed in Scopus and classified according to the Scimago Journal Rank (SJR). A quantitative approach was adopted, complemented by a descriptive documentary analysis based on a detailed review of the websites of 1119 journals from 17 Latin American countries. The collected data were systematized using Excel and analyzed through descriptive and inferential statistical techniques. The results indicate that only 27.2% of journals have explicit guidelines on the use of artificial intelligence, with a predominantly regulatory rather than punitive orientation that prioritizes technical support while restricting practices that compromise human intellectual control. Additionally, statistically significant differences were identified according to quality indicators, showing that journals with higher quality levels are more likely to have such guidelines. Overall, the findings reveal an incipient and heterogeneous regulatory development, underscoring the need to strengthen and harmonize editorial guidelines on artificial intelligence in order to safeguard transparency, clarify the responsibilities of the actors involved in the production and publication process, and protect the integrity of scientific communication. Full article

29 pages, 3634 KB  
Article
Human–AI Complementarity in Peer Review: Empirical Analysis of PeerJ Data and Design of an Efficient Collaborative Review Framework
by Zhihe Yang, Xiaoyu Zhou, Yuxin Jiang, Xinjie Zhang, Qihui Gao, Yanzhu Lu and Anqi Yang
Publications 2026, 14(1), 1; https://doi.org/10.3390/publications14010001 - 19 Dec 2025
Cited by 1
Abstract
In response to the persistent imbalance between the rapid expansion of scholarly publishing and the constrained availability of qualified reviewers, an empirical investigation was conducted to examine the feasibility and boundary conditions of employing Large Language Models (LLMs) in journal peer review. A parallel corpus of 493 pairs of human expert reviews and GPT-4o-generated reviews was constructed from the open peer-review platform PeerJ Computer Science. Analytical techniques, including keyword co-occurrence analysis, sentiment and subjectivity assessment, syntactic complexity measurement, and n-gram distributional entropy analysis, were applied to compare cognitive patterns, evaluative tendencies, and thematic coverage between human and AI reviewers. The results indicate that human and AI reviews exhibit complementary functional orientations. Human reviewers were observed to provide integrative and socially contextualized evaluations, while AI reviews emphasized structural verification and internal consistency, especially regarding the correspondence between abstracts and main texts. Contrary to the assumption of excessive leniency, GPT-4o-generated reviews demonstrated higher critical density and functional rigor, maintaining substantial topical alignment with human feedback. Based on these findings, a collaborative human–AI review framework is proposed, in which AI systems are positioned as analytical assistants that conduct structured verification prior to expert evaluation. Such integration is expected to enhance the efficiency, consistency, and transparency of the peer-review process and to promote the sustainable development of scholarly communication. Full article
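One of the analytical techniques listed above, n-gram distributional entropy, quantifies how evenly a review spreads its phrasing across distinct n-grams, with higher entropy indicating more varied wording. A minimal sketch assuming simple whitespace tokenization (the paper's exact preprocessing is not specified here):

```python
import math
from collections import Counter

def ngram_entropy(text, n=2):
    """Shannon entropy (in bits) of the n-gram frequency distribution."""
    tokens = text.lower().split()
    grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    counts = Counter(grams)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

A review that repeats the same phrases yields low entropy, while one with varied wording yields entropy approaching the log of the number of distinct n-grams.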

29 pages, 4365 KB  
Article
A Multidisciplinary Bibliometric Analysis of Differences and Commonalities Between GenAI in Science
by Kacper Sieciński and Marian Oliński
Publications 2025, 13(4), 67; https://doi.org/10.3390/publications13040067 - 11 Dec 2025
Abstract
Generative artificial intelligence (GenAI) is rapidly permeating research practices, yet knowledge about its use and topical profile remains fragmented across tools and disciplines. In this study, we present a cross-disciplinary map of GenAI research based on the Web of Science Core Collection (as of 4 November 2025) for the ten tool lines with the largest number of publications. We employed a transparent query protocol in the Title (TI) and Topic (TS) fields, using Boolean and proximity operators together with brand-specific exclusion lists. Thematic similarity was estimated with the Jaccard index for the Top–50, Top–100, and Top–200 sets. In parallel, we computed volume and citation metrics using Python and reconstructed a country-level co-authorship network. The corpus comprises 14,418 deduplicated publications. A strong concentration is evident around ChatGPT, which accounts for approximately 80.6% of the total. The year 2025 shows a marked increase in output across all lines. The Jaccard matrices reveal two stable clusters: general-purpose tools (ChatGPT, Gemini, Claude, Copilot) and open-source/developer-led lines (LLaMA, Mistral, Qwen, DeepSeek). Perplexity serves as a bridge between the clusters, while Grok remains the most distinct. The co-authorship network exhibits a dual-core structure anchored in the United States and China. The study contributes to bibliometric research on GenAI by presenting a perspective that combines publication dynamics, citation structures, thematic profiles, and similarity matrices based on the Jaccard algorithm for different tool lines. In practice, it proposes a comparative framework that can help researchers and institutions match GenAI tools to disciplinary contexts and develop transparent, repeatable assessments of their use in scientific activities. Full article
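The Jaccard index used above for thematic similarity is the size of the intersection of two keyword sets divided by the size of their union. A minimal sketch with hypothetical Top-k keyword lists (the actual Top-50/100/200 sets come from the paper's corpus):

```python
def jaccard(a, b):
    """Jaccard similarity of two collections, treated as sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention for two empty sets
    return len(a & b) / len(a | b)

# Hypothetical top keywords for two tool lines
chatgpt_keywords = ["education", "llm", "chatbot", "evaluation"]
gemini_keywords = ["llm", "chatbot", "multimodal", "evaluation"]
print(jaccard(chatgpt_keywords, gemini_keywords))  # 3 shared / 5 total = 0.6
```

Computing this pairwise over all tool lines yields the similarity matrices from which the two keyword clusters are identified.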

32 pages, 6227 KB  
Article
A Decade of Deepfake Research in the Generative AI Era, 2014–2024: A Bibliometric Analysis
by Btissam Acim, Mohamed Boukhlif, Hamid Ouhnni, Nassim Kharmoum and Soumia Ziti
Publications 2025, 13(4), 50; https://doi.org/10.3390/publications13040050 - 2 Oct 2025
Cited by 1
Abstract
The recent growth of generative artificial intelligence (AI) has brought new possibilities and revolutionary applications in many fields. It has also, however, created important ethical and security issues, especially with the abusive use of deepfakes, which are artificial media that can propagate very realistic but false information. This paper provides an extensive bibliometric, statistical, and trend analysis of deepfake research in the age of generative AI. Utilizing the Web of Science (WoS) database for the years 2014–2024, the research identifies key authors, influential publications, collaboration networks, and leading institutions. Biblioshiny (Bibliometrix R package, University of Naples Federico II, Naples, Italy) and VOSviewer (version 1.6.20, Centre for Science and Technology Studies, Leiden University, Leiden, The Netherlands) are utilized in the research for mapping the science production, theme development, and geographical distribution. The cutoff point of ten keyword frequencies by occurrence was applied to the data for relevance. This study aims to provide a comprehensive snapshot of the research status, identify gaps in the knowledge, and direct upcoming studies in the creation, detection, and mitigation of deepfakes. The study is intended to help researchers, developers, and policymakers understand the trajectory and impact of deepfake technology, supporting innovation and governance strategies. The findings highlight a strong average annual growth rate of 61.94% in publications between 2014 and 2024, with China, the United States, and India as leading contributors, IEEE Access among the most influential sources, and three dominant clusters emerging around disinformation, generative models, and detection methods. Full article
