Publications

Publications is an international, peer-reviewed, open access journal on scholarly publishing, published quarterly online by MDPI. 

Quartile Ranking JCR - Q2 (Information Science and Library Science)

All Articles (550)

In response to the persistent imbalance between the rapid expansion of scholarly publishing and the constrained availability of qualified reviewers, an empirical investigation was conducted to examine the feasibility and boundary conditions of employing Large Language Models (LLMs) in journal peer review. A parallel corpus of 493 pairs of human expert reviews and GPT-4o-generated reviews was constructed from the open peer-review platform PeerJ Computer Science. Analytical techniques, including keyword co-occurrence analysis, sentiment and subjectivity assessment, syntactic complexity measurement, and n-gram distributional entropy analysis, were applied to compare cognitive patterns, evaluative tendencies, and thematic coverage between human and AI reviewers. The results indicate that human and AI reviews exhibit complementary functional orientations. Human reviewers were observed to provide integrative and socially contextualized evaluations, while AI reviews emphasized structural verification and internal consistency, especially regarding the correspondence between abstracts and main texts. Contrary to the assumption of excessive leniency, GPT-4o-generated reviews demonstrated higher critical density and functional rigor, maintaining substantial topical alignment with human feedback. Based on these findings, a collaborative human–AI review framework is proposed, in which AI systems are positioned as analytical assistants that conduct structured verification prior to expert evaluation. Such integration is expected to enhance the efficiency, consistency, and transparency of the peer-review process and to promote the sustainable development of scholarly communication.
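As a rough illustration of one of the measures named in this abstract, the sketch below computes n-gram distributional entropy for a pair of review texts in Python. The tokenizer, the sample sentences, and the variable names are placeholders for illustration, not the authors' actual pipeline or the PeerJ Computer Science corpus.

```python
# Minimal sketch: Shannon entropy of the n-gram frequency distribution of a text,
# one of the comparison measures mentioned in the abstract. Sample texts and
# names are hypothetical, not the study's data.
import math
from collections import Counter

def ngram_entropy(text: str, n: int = 2) -> float:
    """Shannon entropy (bits) of the n-gram distribution of a whitespace-tokenized text."""
    tokens = text.lower().split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical paired reviews; a higher value indicates a flatter, more varied n-gram distribution.
human_review = "The methodology is sound but the sample size limits generality."
ai_review = "The abstract claims three contributions, yet the main text reports only two."
print(ngram_entropy(human_review), ngram_entropy(ai_review))
```

In the study this kind of distributional measure is combined with keyword co-occurrence, sentiment and subjectivity, and syntactic-complexity analyses to contrast human and AI reviews.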

19 December 2025

Technical Roadmap.

Generative artificial intelligence (GenAI) is rapidly permeating research practices, yet knowledge about its use and topical profile remains fragmented across tools and disciplines. In this study, we present a cross-disciplinary map of GenAI research based on the Web of Science Core Collection (as of 4 November 2025) for the ten tool lines with the largest number of publications. We employed a transparent query protocol in the Title (TI) and Topic (TS) fields, using Boolean and proximity operators together with brand-specific exclusion lists. Thematic similarity was estimated with the Jaccard index for the Top-50, Top-100, and Top-200 sets. In parallel, we computed volume and citation metrics using Python and reconstructed a country-level co-authorship network. The corpus comprises 14,418 deduplicated publications. A strong concentration is evident around ChatGPT, which accounts for approximately 80.6% of the total. The year 2025 shows a marked increase in output across all lines. The Jaccard matrices reveal two stable clusters: general-purpose tools (ChatGPT, Gemini, Claude, Copilot) and open-source/developer-led lines (LLaMA, Mistral, Qwen, DeepSeek). Perplexity serves as a bridge between the clusters, while Grok remains the most distinct. The co-authorship network exhibits a dual-core structure anchored in the United States and China. The study contributes to bibliometric research on GenAI by presenting a perspective that combines publication dynamics, citation structures, thematic profiles, and Jaccard-based similarity matrices for different tool lines. In practice, it proposes a comparative framework that can help researchers and institutions match GenAI tools to disciplinary contexts and develop transparent, repeatable assessments of their use in scientific activities.
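A minimal sketch of the thematic-similarity step described above, assuming each tool line has been reduced to a Top-N keyword set: the Jaccard index is the size of the intersection divided by the size of the union. The keyword sets below are invented placeholders, not the study's Web of Science data.

```python
# Jaccard similarity between Top-N keyword sets of two tool lines.
# Keyword sets are hypothetical examples, not the study's extracted profiles.
def jaccard(a: set, b: set) -> float:
    """|A ∩ B| / |A ∪ B|, with 0.0 for two empty sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

top_keywords = {
    "ChatGPT": {"education", "peer review", "medical exams", "prompting"},
    "Gemini":  {"education", "multimodal", "prompting", "benchmarks"},
    "LLaMA":   {"fine-tuning", "open source", "benchmarks", "quantization"},
}

for t1 in top_keywords:
    for t2 in top_keywords:
        if t1 < t2:  # visit each unordered pair once
            print(f"{t1} vs {t2}: {jaccard(top_keywords[t1], top_keywords[t2]):.2f}")
```

Computing this matrix for Top-50, Top-100, and Top-200 sets, as the study does, shows how stable the resulting clusters are under different cutoffs.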

11 December 2025

Measuring Group Performance Fairly: The h-Group, Homogeneity, and the α-Index

  • Roberto da Silva,
  • José Palazzo M. de Oliveira and
  • Viviane Moreira

Ranking research groups plays a crucial role in various contexts, such as ensuring the fair allocation of research grants, assigning projects, and evaluating journal editorial boards. In this paper, we analyze the distribution of h-indexes within research groups and propose a single metric to quantify their overall performance, termed the α-index. This index integrates two complementary aspects: the homogeneity of members’ h-indexes, captured by the Gini coefficient (g), and the h-group, an extension of the individual h-index to groups. By combining both uniformity and collective research output, the α-index provides a consistent and equitable metric for comparative evaluation, essentially calculated as the average relative h-group weighted by the group’s homogeneity and normalized by the maximum value of this quantity across all analyzed groups. We describe the full procedure for computing the index and its components and illustrate its application to computer science conferences, where program committees are compared through a resampling procedure that ensures fair comparisons across groups of different sizes. Additional results are presented for postgraduate programs, further demonstrating the method’s applicability. Correlation analyses are used to establish rankings; however, our primary goal is to recommend a fairer index that reduces deviations from those currently used by governmental agencies to evaluate conferences and graduate programs. The proposed approach offers a more nuanced assessment than simply averaging members’ h-indexes and can be applied broadly, for example to university departments and research councils, contributing to a more equitable distribution of research funding, an issue of increasing importance.
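The sketch below illustrates the two ingredients of the α-index named in this abstract: the Gini coefficient g of the members’ h-indexes and an h-group computed over those h-indexes. Both the h-group reading used here (the largest k such that at least k members have an h-index of at least k) and the final (1 − g) weighting are assumptions made for illustration, not the paper’s exact formula or normalization.

```python
# Illustrative computation of a group's homogeneity (Gini coefficient) and an
# h-index-style group cutoff. The combination on the last line is an assumed
# (1 - g) weighting, not the paper's published α-index formula.
def gini(values):
    """Gini coefficient of a list of non-negative numbers (0 = perfectly equal)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

def h_group(h_indexes):
    """Largest k such that at least k members have an h-index >= k (assumed reading)."""
    xs = sorted(h_indexes, reverse=True)
    return max((k + 1 for k, h in enumerate(xs) if h >= k + 1), default=0)

group = [35, 22, 18, 12, 9, 4]  # hypothetical members' h-indexes
g, hg = gini(group), h_group(group)
print(f"g = {g:.3f}, h-group = {hg}, illustrative score = {hg * (1 - g):.2f}")
```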

11 December 2025

  • Perspective
  • Open Access

Regaining Scientific Authority in a Post-Truth Landscape

  • Andrew M. Petzold and
  • Marcia D. Nichols

Recent decades have seen a rise in anti-science rhetoric, fueled by scientific scandals, failures of peer review, and the spread of misinformation by trainable generative AI. We argue, moreover, that the continued erosion of scientific authority also arises from features inherent to science and academia, including a reliance on publication as a method for gaining professional credibility and success. Addressing this multifaceted challenge necessitates a concerted effort across several key areas: strengthening scientific messaging, combating misinformation, rebuilding trust in scientific authority, and fundamentally rethinking academic professional norms. Taking these steps will require widespread effort, but if we want to rebuild trust with the public, we must make significant and structural changes to the production and dissemination of science.

9 December 2025

Publications - ISSN 2304-6775