Article

A Multidisciplinary Bibliometric Analysis of Differences and Commonalities Between GenAI in Science

Faculty of Economic Sciences, Institute of Management and Quality Sciences, University of Warmia and Mazury in Olsztyn, 10-719 Olsztyn, Poland
*
Author to whom correspondence should be addressed.
Publications 2025, 13(4), 67; https://doi.org/10.3390/publications13040067
Submission received: 15 September 2025 / Revised: 29 November 2025 / Accepted: 5 December 2025 / Published: 11 December 2025
(This article belongs to the Special Issue AI in Academic Metrics and Impact Analysis)

Abstract

Generative artificial intelligence (GenAI) is rapidly permeating research practices, yet knowledge about its use and topical profile remains fragmented across tools and disciplines. In this study, we present a cross-disciplinary map of GenAI research based on the Web of Science Core Collection (as of 4 November 2025) for the ten tool lines with the largest number of publications. We employed a transparent query protocol in the Title (TI) and Topic (TS) fields, using Boolean and proximity operators together with brand-specific exclusion lists. Thematic similarity was estimated with the Jaccard index for the Top–50, Top–100, and Top–200 sets. In parallel, we computed volume and citation metrics using Python and reconstructed a country-level co-authorship network. The corpus comprises 14,418 deduplicated publications. A strong concentration is evident around ChatGPT, which accounts for approximately 80.6% of the total. The year 2025 shows a marked increase in output across all lines. The Jaccard matrices reveal two stable clusters: general-purpose tools (ChatGPT, Gemini, Claude, Copilot) and open-source/developer-led lines (LLaMA, Mistral, Qwen, DeepSeek). Perplexity serves as a bridge between the clusters, while Grok remains the most distinct. The co-authorship network exhibits a dual-core structure anchored in the United States and China. The study contributes to bibliometric research on GenAI by presenting a perspective that combines publication dynamics, citation structures, thematic profiles, and similarity matrices based on the Jaccard algorithm for different tool lines. In practice, it proposes a comparative framework that can help researchers and institutions match GenAI tools to disciplinary contexts and develop transparent, repeatable assessments of their use in scientific activities.

1. Introduction

In recent years, generative artificial intelligence (GenAI) has undergone a rapid evolution from general-purpose language models to specialized solutions grounded in self-supervised learning. These changes have translated into new approaches to hypothesis formulation, experimental design, and the analysis of large-scale datasets (Gangwal & Lavecchia, 2024; H. Wang et al., 2023). In this context, GenAI contributes to new scientific discoveries and enables the identification of patterns previously inaccessible to traditional methods. This phenomenon is most visible in drug design, protein engineering, and the development of next-generation materials (Buehler, 2024; Gangwal & Lavecchia, 2024; H. Wang et al., 2023).
Large language models (LLMs) are particularly important in academic settings, supporting manuscript writing, literature reviews, programming, and scientific inference (Bail, 2024; Banh & Strobel, 2023; Pu et al., 2024). In these areas, they shorten research cycles and increase the potential for interdisciplinary collaboration. Significant changes are also occurring in knowledge management and education, where LLMs facilitate the personalization of learning, the creation of instructional materials, scalable learner support, and more efficient knowledge transfer and innovation uptake in organizational practice (Alavi et al., 2024; Lin, 2023; Yan et al., 2024).
From a user’s perspective, contemporary language models may appear nearly identical. They share a conversational interface, a similar chat-window layout, prompt–response interaction, and integrations with office tools and the web browser. Many systems also offer comparable features that facilitate scholarly work, such as abstract generation, coding assistance, file handling, and basic multimodality. This convergence in design and interaction style fosters the impression that these tools constitute a single class with comparable use profiles.
However, empirical and review studies indicate substantial qualitative differences across models. These include, among others, the balance between creativity and factual precision, the depth of inferential reasoning, the scope and maturity of multimodality, stability over extended contexts, and strategies for mitigating errors and unattributed reuse (Hochmair et al., 2024; Shukla et al., 2024; Tosun, 2025). Effectiveness also varies in tasks specific to science, such as critical content appraisal, inconsistency detection, and work with up-to-date sources (Bolgova et al., 2025; Kaftan et al., 2024).
Prior studies have conducted bibliometric analyses of individual GenAI tools (Gande et al., 2024; Oliński et al., 2024; Yalcinkaya & Sebnem, 2024) or of groups of large language models within selected disciplines (Gencer & Gencer, 2025; Pwanedo Amos et al., 2025; S. Wang et al., 2024). Accordingly, a research gap has been identified concerning the insufficient delineation of differences and commonalities in the bibliometric records of selected GenAI tools. The present study addresses this gap by conducting a cross-sectional analysis of available bibliometric data to translate the tools’ functional differences into empirical insights. This study is warranted for several reasons:
  • Bibliometric analysis will make it possible to determine the dynamics of the diffusion of selected GenAI tools in science.
  • Comparative analysis will help define the disciplinary and geographical specialization profiles of GenAI tools.
  • Comparative analysis will enable a segmentation of GenAI tool applications by topical area.
  • The resulting findings will allow us to assess the substitutability of GenAI tools.
The aim of this study is to identify differences and commonalities across the publication corpora associated with selected GenAI tools. For the purposes of this study, we use the term GenAI to refer to large language models such as ChatGPT, Claude, Gemini, and related systems; many different versions of each tool were taken into account, given that the research concerns articles written between 2023 and 2025. In line with this aim, we pose the following research questions:
  • Which of the analyzed GenAI tools exhibits the highest growth rate in the number of publications over the years 2023–2025?
  • Which of the analyzed GenAI tools exhibits the highest citation-per-publication rate?
  • Do the analyzed publication corpora display geographic concentration in the same regions?
  • What differences exist in the topical scopes of the analyzed publication corpora?
  • What is the scale of the shared keyword corpus across the analyzed publications?
We treat this study as a preliminary map of the GenAI tool ecosystem in science. It can provide measurable guidance for matching tools to tasks and serve as a point of reference for further research on the differences and similarities among them.

2. Literature Review

2.1. Previous Bibliometric Analyses Based on GenAI

Since the public release of ChatGPT in November 2022, interest in large language models has risen markedly, rapidly translating into an intensification of publications and the emergence of a bibliometrics stream analyzing this area of science. An initial query in the Web of Science database identified 121 articles devoted to the bibliometric analysis of GenAI use, confirming the scale and momentum of this domain (Figure 1).
In total, these publications have been cited 846 times, with thirty-eight articles receiving no citations. This primarily reflects their recent publication, as forty-five percent of all articles were published in 2025.
The largest share of these publications concerns research in Computer Science and Information Systems (Figure 2). This set consists mainly of articles taking a general perspective on the impact of GenAI on science (e.g., Fan et al., 2024; Farhat et al., 2024; Nan et al., 2025).
It is also noteworthy that publications in the social sciences and in medicine account for a substantial proportion. In the social sciences, the main line of research consists of bibliometric analyses addressing the influence of LLM tools on education across stages—from primary schools to higher education (e.g., Bhullar et al., 2024; Polat et al., 2024; Pradana et al., 2023). In medicine, researchers’ interests are diversified (focusing on multiple disciplines, including anesthesiology, surgery, and nursing), although the most frequently cited publications are general in scope (e.g., Barrington et al., 2023; Gencer & Gencer, 2025; J. Wu et al., 2024).
The literature is dominated by bibliometric analyses focused on the influence of ChatGPT as the leading LLM tool. Importantly, the general popularity of ChatGPT correlates with the number of bibliometric analyses that center on this tool (and with the overall number of publications on GenAI systems, as shown in Table A1 and Table A2—see Appendix B). In September 2025, the number of visits to chatgpt.com, according to Similarweb, was 5.904 billion (for comparison: deepseek.com, 333.0 million; perplexity.ai, 169.5 million; claude.ai, 156.9 million). Of the analyzed set of 121 publications, applying exclusion phrases for ChatGPT yielded sixteen results (only thirteen percent), which primarily focused on LLMs in general and their impact across scientific fields.
This example illustrates how other GenAI tools remain underrepresented in bibliometrics. Accordingly, the present study seeks to outline broad differences and similarities in research on LLM tools.

2.2. The Essence of Differences and Commonalities Between GenAI

Although bibliometric analyses of large language models often treat them as a homogeneous technological class, these tools differ substantially in architecture, accessibility, functionality, and scope of application. The most frequently cited models—such as ChatGPT, Claude, Gemini, and LLaMA—are based on the transformer architecture, yet they vary in size (from billions to trillions of parameters), optimization methods, level of accessibility (commercial or open source), and the degree of control over the fine-tuning process (Patil & Gudivada, 2024; Shao et al., 2024). Open-source models such as LLaMA offer greater flexibility and adaptability to specific environments, whereas commercial solutions are typically optimized for performance, stability, and safety but are less amenable to modification (Patil & Gudivada, 2024; Savage et al., 2025; Shao et al., 2024).
Another important axis of differentiation is functionality. Classical LLMs are limited to text processing and generation, whereas newer multimodal models (multimodal large language models, MLLMs), such as Gemini, GPT-5, or Claude 3, can analyze images, audio, and even video, thereby extending their usefulness to new sectors (e.g., medical imaging, analysis of sensor data). These differences affect response precision, robustness to hallucinations, the ability to transfer knowledge across tasks, and the efficiency of adaptation to specific industries such as education, law, or public health (Patil & Gudivada, 2024; Shao et al., 2024). In practice, tool selection also depends on the level of transparency, availability of documentation, implementation costs, interface language, and compliance with privacy and security policies in a given sector (Scherbakov et al., 2025; Shool et al., 2025).
From a bibliometric perspective, disregarding functional differences among widely available GenAI tools may distort the picture of their actual use in science. Although many analyses focus on ChatGPT as a representative model, tools such as Claude, Gemini, DeepSeek, Perplexity, and Copilot differ in functional scope, interface, mode of operation, and target user groups. These differences may influence both the topical profile of the publications in which they are employed and their presence across fields of knowledge. It is therefore warranted to conduct a comparative bibliometric analysis of these tools to capture their diverse applications and to better understand their role in shaping the contemporary landscape of scholarly communication.

3. Materials and Methods

3.1. Data Collection Process

The present study employed the Web of Science Core Collection due to its high indexing quality, selectivity, and data stability. The database covers more than 34,000 journals and is regarded as one of the most prestigious sources of bibliometric data (Birkle et al., 2020; Pranckutė, 2021). It enables advanced analyses of citations and bibliometric indicators, as well as visualizations of co-authorship and keyword co-occurrence networks (Pranckutė, 2021).
A key advantage of this database is the ability to classify research topics using dedicated Web of Science categories. This structure permits precise delineation of the topical profiles of publications and of differences in the uses of GenAI tools across domains, which constitutes added value relative to other databases. In addition, the extensive set of operators and options for advanced searching allows for the fine-grained extraction of desired records, a feature that is particularly important for the present study.
To retrieve the most relevant records, we prepared tailored protocols for advanced publication searches. The study focused on assembling publications based on information contained in titles, abstracts, and keywords (using the TI OR TS functions, where TI = title; TS = title, abstract, keywords). The unit of analysis was the individual model line, for which separate query strings were constructed to include official names and versions (e.g., “GPT-4o,” “Claude 3.5,” “Llama 3.1”) as well as simple brand forms (“ChatGPT,” “Grok,” “Copilot”). To reduce ambiguity, we applied the NEAR/n proximity operator in a conservative manner (e.g., (Gemini NEAR/3 Google) OR (Gemini NEAR/3 “DeepMind”)) and used extended brand-specific exclusion lists with NOT (for Gemini—astrological/astronomical and chemical terms; for Llama/Alpaca—zoological vocabulary; for Copilot—aviation/autopilot). At all stages, we maintained constant quality filters: document type Article/Review and publication period 2023–2025 (based on early online access). Bibliographic data were retrieved on 4 November 2025. Detailed search codes and the total number of publications for each model line are presented in Table A1 and Table A2 (see Appendix B).
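The query-assembly logic described above can be sketched in Python. This is a minimal illustration, not the study's full protocol (the complete search strings are in Appendix B); the term lists, the helper name `build_query`, and the exclusion terms below are illustrative assumptions.

```python
# Sketch: assemble a Web of Science advanced-search string for one model line.
# Combines quoted name variants, conservative NEAR/3 disambiguation pairs,
# and a NOT block of brand-specific exclusion terms, searched in TI and TS.
def build_query(names, proximity_pairs=None, exclusions=None):
    parts = [f'"{n}"' for n in names]
    for brand, anchor in (proximity_pairs or []):
        parts.append(f'({brand} NEAR/3 "{anchor}")')
    core = " OR ".join(parts)
    query = f"TI=({core}) OR TS=({core})"
    if exclusions:
        query += " NOT TS=(" + " OR ".join(f'"{t}"' for t in exclusions) + ")"
    return query

# Illustrative example for the Gemini line (exclusion terms are placeholders
# standing in for the study's astrological/astronomical/chemical lists).
q = build_query(
    names=["Gemini 1.5", "Gemini Pro"],
    proximity_pairs=[("Gemini", "Google"), ("Gemini", "DeepMind")],
    exclusions=["zodiac", "constellation"],
)
print(q)
```

In practice, one such string would be built per model line and pasted into the Web of Science advanced-search interface, with the document-type and date filters applied on top.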
For the purposes of the study, we delineated twenty-three GenAI model lines, drawing on the public compilation of LLMs from the Exploding Topics platform (Cardillo, 2025), and subsequently grouped them at the “brand/family” level. For each line, we verified the number of publications available in Web of Science by running the prepared protocols. As a result, we qualified ten lines with the largest number of publications. We excluded model lines with five or fewer articles, namely Nova, Pythia, Alpaca, XGen, Falcon, Stable LM, Command R, DBRX, Jamba, Nemotron, Phi, GitHub Copilot, and Gemma.

3.2. Data Preparation and Analysis

Search results were exported to text files and organized by tool line, which enabled the preparation of tabulations of publication counts, citation counts, and inputs for network and topical analyses. The same results were also downloaded in .xlsx format, which facilitated the subsequent production of charts and tables.
Co-citation analyses, co-authorship networks, and keyword co-occurrence analyses were conducted in VOSviewer v1.6.20. For co-citation, we considered the one hundred most frequently cited authors. The co-authorship network map was prepared using all countries present in the corpus (seventy-five countries). For keyword co-occurrence, we created maps for each set with the unit of analysis set to “All Keywords” and the counting method set to “Full counting.” The minimum occurrence threshold was adjusted to yield approximately three hundred or more input terms per line.
We then estimated thematic similarity across tool lines using the Jaccard index, based on the one hundred most frequent keywords, with a sensitivity analysis at thresholds of fifty and two hundred terms. The entire procedure was automated in Python (v3.14.0) using the pandas, numpy, and openpyxl libraries. We first ingested the VOSviewer exports, sorted them in descending order by frequency, and generated Top–K lists for each line. We then performed cleaning and normalization. First, we applied Unicode NFKC normalization and lowercasing. Second, we harmonized hyphens and punctuation via regular expressions. Third, we applied a substitution dictionary to merge variants and abbreviations (for example, “chat gpt → chatgpt,” “llms → large language model,” “google bard → gemini”). Fourth, we filtered out generic terms and brand names (including tool and vendor names as well as highly generic methodological terms such as artificial intelligence, machine learning, language model, prompt engineering, algorithm, dataset, evaluation, and benchmark).
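A minimal sketch of this cleaning pipeline is given below. The substitution dictionary and stoplist shown here are small illustrative subsets of the study's full lists, and the function names are our own.

```python
import re
import unicodedata

# Illustrative subsets; the study's full dictionaries are larger.
SUBSTITUTIONS = {"chat gpt": "chatgpt",
                 "llms": "large language model",
                 "google bard": "gemini"}
STOPLIST = {"artificial intelligence", "machine learning", "language model",
            "prompt engineering", "algorithm", "dataset", "evaluation",
            "benchmark"}

def normalize(term):
    """NFKC-normalize, lowercase, harmonize hyphens/punctuation, merge variants."""
    term = unicodedata.normalize("NFKC", term).lower()
    term = re.sub(r"[‐‑–—]", "-", term)    # unify hyphen/dash variants
    term = re.sub(r"[^\w\s-]", " ", term)  # drop stray punctuation
    term = re.sub(r"\s+", " ", term).strip()
    return SUBSTITUTIONS.get(term, term)

def clean_list(terms):
    """Normalize, drop generic/stoplisted terms, deduplicate preserving order."""
    seen, out = set(), []
    for t in terms:
        t = normalize(t)
        if t and t not in STOPLIST and t not in seen:
            seen.add(t)
            out.append(t)
    return out

print(clean_list(["Chat GPT", "chatgpt", "Hallucination", "Algorithm"]))
```

Running this on the sample list merges the “Chat GPT”/“chatgpt” variants, removes the stoplisted generic term, and keeps first-occurrence order.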
Additionally, we filtered out editorial phrases based on prefixes and suffixes such as introduction to, review, and survey, and removed within-list duplicates after cleaning while preserving the order of first occurrence. From these prepared lists, we constructed Top–K sets and computed Jaccard matrices according to the definition |A ∩ B| / |A ∪ B| for every pair of tool lines.
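The Jaccard computation itself reduces to a few lines; a plain-Python sketch (the study automated this step with pandas) with hypothetical keyword lists:

```python
def jaccard(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| of two keyword lists."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

def jaccard_matrix(top_k):
    """Pairwise Jaccard values for a {tool: [keywords]} mapping of Top-K lists."""
    tools = list(top_k)
    return {r: {c: jaccard(top_k[r], top_k[c]) for c in tools} for r in tools}

# Hypothetical Top-K lists, not the study's data.
lists = {"toolA": ["llm", "education", "ethics"],
         "toolB": ["llm", "education", "benchmarking"]}
m = jaccard_matrix(lists)
# toolA and toolB share 2 of 4 distinct terms, so m["toolA"]["toolB"] == 0.5
```

In the study, this matrix was computed for each pair of tool lines at K = 50, 100, and 200.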
Citation indicators are based on raw Web of Science citation counts as of 4 November 2025 and are not normalized by field or publication year. Extremely highly cited papers were retained in the distributions. To reduce their influence, we complement mean-based measures such as citations per publication with distribution-sensitive indicators such as the i10-index and the h-index. When interpreting cross-tool differences, we remain aware of possible sources of bias, including shorter citation windows for later-introduced tools and domain imbalances that result from uneven subject-area profiles.
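The three measures named above can all be computed directly from a list of raw per-paper citation counts; the sample counts below are hypothetical, used only to illustrate the definitions.

```python
def citations_per_publication(citations):
    """Mean citations per paper (the CPP measure)."""
    return sum(citations) / len(citations) if citations else 0.0

def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least ten citations."""
    return sum(c >= 10 for c in citations)

sample = [25, 18, 12, 9, 4, 1]  # hypothetical citation counts
print(citations_per_publication(sample), h_index(sample), i10_index(sample))
```

For the sample, four papers have at least four citations each (h-index 4), while only three reach ten citations (i10-index 3), illustrating how the two indicators respond differently to the tail of the distribution.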

4. Results

4.1. Analysis of the Number of Publications

In this subsection, we focus on the current scale of publication counts and observed changes in output among selected GenAI tools. The total number of publications for the entire set is 14,418 (deduplicated). Figure 3 presents each tool’s share of the aggregate publication count (for the group of analyzed GenAI tools).
Figure 3 shows a strong concentration of publications around ChatGPT. This line accounts for approximately 80.6% of the total pool. Gemini ranks second with 17.0%, followed by Claude (7.5%) and LLaMA (7.4%). The remaining tools form a long tail with low shares. The structure is thus clearly “ChatGPT-centric,” which suggests caution when generalizing conclusions to the entire GenAI class. At the same time, it justifies a separate, in-depth analysis for ChatGPT and a comparative treatment of the remaining lines as a lower-visibility group (these data correlate with the number of bibliometric analyses centered on ChatGPT).
Figure 4 presents the percentage shares of publications for selected GenAI tools in the years 2023–2025.
A substantial increase in the share of publications in 2025 can be observed for all analyzed GenAI lines, making 2025 a clear turning point. The share of works attributable to 2025 exceeds one-half for most tools: ChatGPT (11.66% → 36.84% → 51.50%), Gemini (12.79% → 31.98% → 55.23%), Claude (14.15% → 24.51% → 61.33%), LLaMA (6.00% → 25.52% → 68.48%), Mistral (6.38% → 21.14% → 72.48%), Copilot (0.99% → 21.78% → 77.23%). The “steepest” curves are seen for later entrants: DeepSeek (0.00% → 0.71% → 99.29%), Qwen (0.00% → 7.09% → 92.91%), and Grok (3.51% → 3.51% → 92.98%), suggesting that researcher attention emerged only recently and grew rapidly in 2025. The only relatively “evenly” distributed case is Perplexity (20.93% → 35.31% → 43.76%), indicating earlier and more consistent interest.
The moderate distribution of publication shares for some GenAI tools prompts verification of growth dynamics over the period under study. The results are presented in Figure 5.
As shown in Figure 5, the strongest jump in publication counts occurred in 2023–2024 for most tools, followed by a general deceleration in 2024–2025, with one marked exception (Mistral). ChatGPT grew by 215.9% in 2023–2024, then by 39.8%. Gemini and Claude show similar profiles, about 150% in the first interval and approximately 73% in the second. LLaMA posts very high growth in both segments, 325.4% and 168.3%, indicating sustained expansion of the open-source family. Perplexity grows moderately, 68.7% and 24.0%, suggesting earlier saturation. Mistral is the only tool with an acceleration: 231.6% in 2023–2024 and 242.9% in 2024–2025. Two inferences are possible—first, consolidation among the earliest-adopted tools; second, strong growth momentum driven by open model lines, especially LLaMA and Mistral.
Because earlier analyses revealed a strong concentration of research on a few GenAI lines and varied maturation patterns, it is useful to complement the picture with total volume. Figure 6 shows the increase in the total number of articles in 2023–2025. This view distinguishes structural change from the absolute growth in outputs and provides a reference point for comparisons across tools.
The total number of publications rises sharply from 2254 in 2023 to 5555 in 2024, and then to 6609 in 2025. This corresponds to an increase of approximately 146% in 2023–2024 and about 19% in 2024–2025. Thus, there was a rapid boom in the first period and a marked slowdown thereafter, while maintaining a high absolute scale. The logarithmic smoothing is consistent with this picture and indicates, illustratively, a level of roughly eight thousand publications in 2026 if the current dynamics persist. This is an illustrative estimate without uncertainty intervals and should therefore be interpreted with caution. The trend remains consistent with earlier findings, where growth was strongest in 2023–2024 and 2025 ushered in a phase of consolidation.
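The year-over-year growth rates reported above follow from the annual totals by simple arithmetic; a sketch (the variable names are ours):

```python
def growth_pct(prev, curr):
    """Year-over-year growth rate in percent."""
    return 100 * (curr - prev) / prev

# Annual publication totals reported in the text.
totals = {2023: 2254, 2024: 5555, 2025: 6609}
years = sorted(totals)
rates = {y1: growth_pct(totals[y0], totals[y1])
         for y0, y1 in zip(years, years[1:])}
print(rates)  # growth of roughly 146% in 2024 and roughly 19% in 2025
```

The same helper applied to the per-tool counts underlying Figure 5 yields the tool-level growth rates discussed earlier.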

4.2. Analysis of Publication Citations

In this subsection, we present a citation analysis for the examined publication sets. Figure 7 displays citations per publication in 2023–2025, which visualizes the dynamics of this phenomenon for each article group and for the corpus.
Figure 7 shows a clear increase in citations per publication across the entire set. The strongest jump is observed between 2023 and 2024, with further growth in 2025. ChatGPT remains the leader, recording the highest values in each year (approximately 4.8 → 10.5 → 15.2). Gemini joins the second tier, rising rapidly and reaching about 8.4 citations per publication in 2025. Perplexity improves to roughly 7.0 in 2025. LLaMA and Claude stabilize in the 4–5.5 range. Later-introduced lines, such as DeepSeek, Qwen, Grok, and Mistral, show lower values, which can be attributed to a shorter citation window and to recency effects. We therefore interpret their citation indicators as conservative lower bounds rather than as fully comparable steady-state levels. Copilot increases its impact to approximately 5.5 in 2025. The inference is as follows: not only is the number of studies rising, but the average citation impact is also increasing, with tools adopted earliest maintaining an advantage.
Following the analysis of average citations per publication shown in Figure 7, we expand the assessment of impact by considering measures of concentration. Table 1 reports, for each GenAI line, the title of the most-cited publication and the cumulative citations of the ten most influential studies within that line. We additionally report the h-index calculated for the entire corpus, enabling a simultaneous appraisal of the scale and strength of impact.
The table indicates a strong concentration of impact alongside ChatGPT’s dominance. This line has both the single most-cited publication and the highest i10-index and h-index values, indicating a broad and deep citation base. Gemini ranks second at a clearly lower, yet still substantial, level relative to the remaining lines. LLaMA stands out with a relatively high h-index despite a more modest “strongest” single paper, suggesting a more distributed impact profile.
Across the corpus, the most-cited articles concern applications in medicine, education, and writing practices, confirming the early anchoring of research in these areas. Later-introduced lines have lower i10-index and h-index values, partly due to a shorter citation window. The overall picture aligns with earlier results: the field is distinctly oriented toward ChatGPT, while the remaining tools form a diversified “long tail,” within which open model families are gaining prominence.
Having analyzed volume, growth dynamics, and citation impact, we now turn to the intellectual structure of the corpus. Figure 8 presents the author co-citation map for the one hundred most frequently cited scholars across the entire set. This view captures the principal research communities and identifies authors who act as bridges between areas.
The network is clearly modular, with several dominant clusters and a limited number of bridges among them. The largest clusters correspond to applications in health care (e.g., Ayers et al., 2023; Cascella et al., 2023; Gilson et al., 2023) and in education (e.g., Chan & Hu, 2023; Cotton et al., 2024; Farrokhnia et al., 2024); further visible are nodes associated with management and scholarly information, as well as threads on LLM and NLP methods. Strong central nodes indicate a concentration of impact around authors recognized as leaders in these fields, whereas intercluster connections suggest active exchange of methods and case studies among medicine, education, and management sciences.

4.3. Analysis of Authors’ Countries of Origin

In this subsection, we analyze patterns of co-authorship among authors from different countries. Figure A1, Figure A2 and Figure A3 (see Appendix C) present the share of authors by country of affiliation within the corpora for individual GenAI tools. Each panel lists the ten most frequently represented countries in a given corpus. The aim is to identify centers that drive publication output and to assess the degree of internationalization of each tool line.
The figures reveal a clear split between “Western” and “Chinese” corpora. In ChatGPT, Gemini, Claude, and LLaMA, the United States predominates, typically with a share of about 18–23 percent, followed by China in second place with shares on the order of 9–15 percent. For tools developed in China, we observe strong domestic dominance. Qwen shows roughly a 42 percent share of authors from China, and DeepSeek about 35 percent. Perplexity likewise shows the highest share for China and a slightly lower share for the United States, suggesting robust research demand in that region.
Turkey appears regularly in the top three and reaches the highest or near-highest values in Copilot and Grok, indicating active adoption of these tools in that country. Western Europe contributes a steady, mid-range presence. The United Kingdom, Germany, Italy, France, and Spain typically fall in the 3–6 percent range. Mistral exhibits a more pronounced European footprint, consistent with its origin and open distribution.
The overall picture aligns with the conclusions of previous sections. Corpora associated with globally popular lines are broadly internationalized, albeit with a clear primacy of the United States. Corpora for tools developed in China are highly geographically concentrated. Open-source and developer-oriented lines, such as LLaMA and Mistral, display greater geographic diffusion, confirming their broad accessibility and permeability across academic systems.
Having examined country shares within individual GenAI lines, we now turn to the pattern of collaboration at the country level. Figure 9 presents the co-authorship layout for the entire corpus from 2023 to 2025. The objective is to identify the system’s core and the channels through which regional blocks are connected.
The network exhibits a dual-core structure. The largest and most densely connected node is the United States. The second core is formed by China. The United States is linked by a dense web of ties to England, Canada, and Australia, as well as to the principal centers of Western Europe. Within this group, Germany, Italy, Spain, the Netherlands, and France are clearly visible; they collaborate intensively within the region while maintaining strong channels with the United States. England acts as a bridge between Europe and the American core and maintains distinct bilateral relationships, including with Israel and countries in Central Europe.
The Chinese core connects primarily with India and with countries in Asia and the Persian Gulf. Nodes such as Malaysia, the United Arab Emirates, and Saudi Arabia form a dense band of collaboration with India and China. Japan and South Korea anchor the Pacific arc, with numerous connections to the United States and Australia. Turkey is embedded in both the Middle Eastern and European blocs.
This picture confirms earlier results. U.S. dominance across many tool lines coincides with centrality in the co-authorship network. China is building its own hub with strong regional backing. The Anglophone countries and several EU member states serve as intermediaries, facilitating the flow of topics and methods across regions. We also observe highly intensive bilateral relationships that coexist with multinational consortia.

4.4. Analysis of Publication Topics

In this subsection, we examine the thematic structure of the individual GenAI tool corpora. Figure A4, Figure A5 and Figure A6 show the shares of the ten most frequently represented Web of Science subject categories in each set.
The results indicate a shared technical core alongside differentiated specializations. Computer Science dominates across all corpora, especially in open-source and developer-oriented lines, where it reaches the highest shares. The highest values are observed for LLaMA at approximately 38.6 percent, Mistral at about 39.7 percent, Qwen at roughly 31.5 percent, and DeepSeek at around 25.4 percent. The same sets also show elevated Engineering shares, confirming a strong developmental component. In parallel, a health-related block persists in many corpora. Health Care Sciences & Services, General & Internal Medicine, and Medical Informatics are clearly present, with increased shares in Qwen, where Medical Informatics accounts for approximately 18.9 percent and Health Care Sciences & Services about 13.4 percent, as well as in Copilot and Grok.
ChatGPT combines the technical component with an elevated share of Education & Educational Research at roughly 18.9 percent, reflecting early and broad uptake in teaching and learning. The sets also bear tool-specific signatures. Gemini engages in the natural sciences more strongly than others, with Astronomy & Astrophysics and Physics appearing more frequently in its top ten. Claude contributes a more pronounced presence of the humanities, particularly Literature and History. Open-source lines more often include infrastructural categories, such as Telecommunications, Radiology, Nuclear Medicine & Medical Imaging, Information Science & Library Science, and Instruments & Instrumentation. In Copilot, additional clinical categories are visible, including Ophthalmology, Dentistry, Oral Surgery & Medicine, and Emergency Medicine.
Taken together, these profiles reveal a field with a common informatics and engineering foundation and a concurrent shift toward applications in education and health care. Differences across brands are substantial and stem from their ecosystems, availability, and dominant use cases. Comparative analyses across tools should therefore control for topical mix, as subject-profile composition influences observed bibliometric metrics.
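For readers who wish to reproduce subject-category profiles of this kind, the tallying step can be sketched as follows. This is a minimal illustration, not our actual pipeline: it assumes records exported from the Web of Science Core Collection with a semicolon-separated subject-category field, and the "WC" key and toy records are assumptions for demonstration.

```python
from collections import Counter

def top_category_shares(records, top_n=10):
    """Share (in percent) of the top_n subject categories in one tool
    corpus; 'WC' is assumed to hold a semicolon-separated category
    string, as in a standard Web of Science export."""
    counts = Counter()
    for rec in records:
        for cat in rec.get("WC", "").split(";"):
            cat = cat.strip()
            if cat:
                counts[cat] += 1
    total = len(records)
    # Shares are computed against the number of records, so multi-category
    # records contribute to several shares (full counting).
    return [(cat, 100 * n / total) for cat, n in counts.most_common(top_n)]

# Toy corpus: three records, two tagged Computer Science.
corpus = [
    {"WC": "Computer Science; Engineering"},
    {"WC": "Computer Science"},
    {"WC": "Education & Educational Research"},
]
print(top_category_shares(corpus, top_n=2))
```

Because full counting is used, the shares of a corpus can sum to more than 100 percent whenever publications carry multiple subject categories.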
After identifying the topical profiles of the individual corpora, we extend the analysis to citations per publication within the three most frequently represented areas—Education & Educational Research, Computer Science, and Engineering (Figure 10). This approach partially controls for field effects on visibility and allows us to ask whether disparities among tools persist within the dominant categories. Intuitively, one might expect less dispersion in Computer Science, given the strong technical core of many GenAI tools.
Differences among tools are smallest in Computer Science. The range in this category spans from 0.8 to 9.4, whereas in Education it extends from 0.4 to 15.6, and in Engineering from 0.4 to 10.3. This means that ChatGPT’s advantage is strongest in Education, moderate in Engineering, and least pronounced in Computer Science. Two conclusions follow. First, the advantage of tools with the earliest and broadest adoption persists even after controlling for subject domain. Second, Computer Science functions as a “leveling” area in which cross-brand differences are relatively smaller, bringing the citations-per-publication impact of technically oriented tools closer to a more uniform level.
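The within-category citation indicator can be computed along these lines. This is a minimal sketch under stated assumptions: each exported record is taken to carry a "WC" category string and a "TC" times-cited count (standard Web of Science field tags), and the toy corpus is illustrative.

```python
def citations_per_publication(records, category):
    """Mean citations per publication among records tagged with the
    given subject category; 'TC' is the times-cited count and 'WC'
    the semicolon-separated category string (WoS export assumptions)."""
    cited = [r["TC"] for r in records
             if category in (c.strip() for c in r["WC"].split(";"))]
    return sum(cited) / len(cited) if cited else 0.0

corpus = [
    {"WC": "Computer Science", "TC": 12},
    {"WC": "Computer Science; Engineering", "TC": 4},
    {"WC": "Education & Educational Research", "TC": 30},
]
print(citations_per_publication(corpus, "Computer Science"))  # 8.0
```

Running the same function per tool corpus and per category yields the ranges reported above, so cross-brand gaps can be compared while holding the subject domain fixed.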

4.5. Analysis of Keywords

To capture topical proximity among the publication corpora for individual GenAI tools, we computed pairwise Jaccard indices based on the one hundred most frequent keywords in each set. The results are presented in Figure 11, which allows us to read off the degree of thematic overlap between tools. For sensitivity control, we include analogous matrices for the Top–50 and Top–200 terms (Figure A7 and Figure A8), enabling a comparison of how threshold changes affect similarity relations.
Overall similarity levels are moderate—typically in the range of about 8–23 percentage points—confirming that the GenAI ecosystem is diverse in terms of the topics it addresses. The highest values appear between the ChatGPT–Gemini, ChatGPT–Claude, and Gemini–Claude corpora, as well as between Copilot and this trio. This indicates a loosely delineated cluster of general-purpose tools with a strong representation of “general LLM” themes and user practices for working with text and code. On the other side, we observe a cluster of open-source and developer-oriented tools, where relatively higher similarities occur for the LLaMA–DeepSeek, LLaMA–Mistral, Mistral–Qwen, and DeepSeek–Qwen pairs. Perplexity occupies a middle position, maintaining moderate overlap with both clusters, which is consistent with its search-and-advisory character. Grok exhibits the lowest average similarity to the remaining sets, suggesting a more distinct topical profile or a more limited diffusion of keyword vocabularies in the literature.
The sensitivity analysis confirms the stability of these conclusions. For the Top–50 set (Figure A7), the highest pairs increase slightly in value (especially Copilot–ChatGPT and Claude–LLaMA), indicating a strongly shared core of the most frequent terms for these tools. When expanded to the Top–200 (Figure A8), values average out and converge slightly as the “long tail” of topics introduces greater diversification, yet the relative ordering of pairs remains similar. Consequently, the Jaccard matrices reveal two clear vectors of thematic convergence (general-purpose chatbots and open-source lines), an intermediate, search-oriented tool (Perplexity), and a clear outlier (Grok).
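The pairwise Jaccard computation underlying these matrices can be sketched as follows. This is a simplified illustration assuming per-publication keyword lists; the toy corpora and tool labels stand in for the real tool sets.

```python
from collections import Counter
from itertools import combinations

def top_keywords(keyword_lists, k=100):
    """Set of the k most frequent keywords across a corpus, where
    keyword_lists holds one keyword list per publication."""
    counts = Counter(kw for kws in keyword_lists for kw in kws)
    return {kw for kw, _ in counts.most_common(k)}

def jaccard(a, b):
    """Jaccard index |A intersect B| / |A union B| of two keyword sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Toy corpora for two hypothetical tool lines.
tool_a = [["llm", "prompt engineering", "education"], ["llm", "chatbot"]]
tool_b = [["llm", "fine-tuning"], ["llm", "chatbot", "benchmark"]]

sets = {"A": top_keywords(tool_a, k=3), "B": top_keywords(tool_b, k=3)}
for x, y in combinations(sets, 2):
    print(x, y, round(jaccard(sets[x], sets[y]), 2))
```

Rerunning with k = 50, 100, and 200 reproduces the sensitivity design: the threshold changes the absolute values, while the relative ordering of pairs can be compared across matrices.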

5. Discussion

Our analytical design follows established science mapping practices in bibliometrics. We combine performance indicators with network-based visualizations of co-citation, co-authorship, and keyword co-occurrence. The use of full counting, Jaccard similarity, and field-specific Web of Science categories is consistent with widely adopted approaches to mapping research fronts and intellectual structures in large corpora; for example, in Birkle et al. (2020), Farhat et al. (2024), Polat et al. (2024), and Pranckutė (2021). In this respect, the study offers a domain-specific application of standard bibliometric methodology to the emerging GenAI tool ecosystem.

5.1. Differences in the Bibliometric Analysis Across GenAI Tools

The publication ecosystem on GenAI is clearly differentiated in scale, dynamics, and topical profile, and these differences also have technical foundations. Tool lines vary in architecture and model type, which translates into distinct research “affordances” and into the task ranges in which they are most frequently used (S. Pan et al., 2024; Shao et al., 2024). The growing prominence of multimodal models has broadened applications beyond text and, in many corpora, introduced new topical vocabularies related to images, audio, and video (Nazi & Peng, 2024; Shao et al., 2024; Yin et al., 2024). Against this backdrop, ChatGPT’s advantage in volume and citations stems not only from first-mover effects and broad adoption, but also from its functional profile as a general-purpose tool that rapidly penetrates education and teaching practice, whereas open-source lines have grown faster in methodological and engineering work.
A second dimension of difference concerns availability and deployment model. Commercial systems offer high model quality but less control over data and modification, while open-source lines enable local implementation, precise customization, and greater privacy control at the cost of technical resources (Jahan et al., 2023; Savage et al., 2025). This contributes to two stable thematic clusters: general-purpose tools (ChatGPT, Gemini, Claude, Copilot) and developer-oriented, open-source lines (LLaMA, Mistral, Qwen, DeepSeek). Performance differences are also task specific. In some review and classification tasks, GPT-class models achieve higher sensitivity and specificity than competitors such as LLaMA, whereas in others the advantages diminish or reverse (Li et al., 2024). In specialized applications there is no single dominant winner; tool choice depends on task context, data, and deployment constraints (Jahan et al., 2023; L. Wu et al., 2023).
The results should also be interpreted with caution. Individual lines differ in susceptibility to hallucinations, interpretability, and capacity to work with domain knowledge, all of which affect citation structures and visibility across fields (S. Pan et al., 2024; Zhao et al., 2024). At the same time, model compression and efficiency optimization are gaining importance, especially in resource-constrained environments, strengthening the appeal of open-source lines and promoting methodological research on adaptation and fine-tuning (Zhu et al., 2024). In practice, this recommends that cross-tool comparisons control for task and domain profile, and that assessments of substitutability rely on objective measures of topical convergence (e.g., Jaccard) and on explicit deployment assumptions, rather than on a context-free ranking of the “best” model.

5.2. Similarities in the Bibliometric Analysis Across GenAI Tools

The findings indicate strong commonalities in the dynamics and topical profiles of the analyzed corpora. All tools exhibit a synchronous jump in publication counts in 2025 and a high share of works classified under Computer Science, with frequent intersections with education and medical areas. This convergence is explained by the models’ shared technological base—transformer architecture and training on very large data collections—which fosters broadly applicable use cases across many disciplines (Mohan et al., 2024; Naveed et al., 2025; Shao et al., 2024). The geographic picture is likewise convergent, with the United States and China playing core roles and co-authorship patterns forming similar core–periphery configurations regardless of tool.
The Jaccard analysis reveals a stable, shared core of topical vocabulary across tool pairs. For the Top–100, similarity values typically fall in the mid-teens to roughly the low twenties (percentage points), and the ordering of relationships persists when narrowed to the Top–50 and expanded to the Top–200. The shared lexicon encompasses operational and methodological motifs of working with LLMs—such as fine-tuning, prompt engineering, and evaluation—as well as application themes that recur across fields (Naveed et al., 2025; Zhao et al., 2024). Increasingly widespread multimodality further reinforces these recurring patterns by introducing similar task categories and metrics regardless of the model line (Shao et al., 2024; Yin et al., 2024). Perplexity serves as a bridge between sets, yet the core of recurring terms is present throughout the ecosystem.
Convergences also appear in impact structure. For many tools, citations per publication increase in 2024 and 2025, suggesting simultaneous maturation of evaluation practices and the diffusion of similar benchmark suites. There is also a shared block of ethical and social issues—covering privacy, safety, and equity of access—that recurs across corpora independent of brand (Naveed et al., 2025; S. Pan et al., 2024; Raza et al., 2025). Taken together, these similarities indicate that some research questions and assessment frameworks can be designed in a cross-tool manner, facilitating comparable and transferable conclusions.
From a theoretical standpoint, our findings complement the emerging literature on GenAI in science, suggesting an ecosystem view of model lines and illustrating how Jaccard-based similarity matrices can be used to approximate substitutability and complementarity between tools. From a practical standpoint, the study proposes an evidence-based comparative template that can be reused to classify additional GenAI families, compare individual tools with their counterparts, and design cross-tool evaluations that combine impact metrics with topic overlap. These elements can support the use of descriptive maps as one possible framework for comparative analysis of GenAI tool ecosystems.

6. Conclusions

This article presents a comparative bibliometric map of the GenAI tool ecosystem in science. The analysis shows differences in volume and citations, with ChatGPT receiving the most attention, as well as accelerated growth among later entrants. It also points to a split of the ecosystem into two modes of development, distinguishing general-purpose tools from open-source lines with a more engineering and methodological profile. Differences also appear in specific fields. ChatGPT’s advantage is greatest in education and smaller in computer science, where common technical practice seems to mitigate the differences between tools. At the same time, the tools share a common technological base, convergent methodological vocabularies, and a recurring pattern of a publication surge in 2025.
The Jaccard matrices indicate two stable thematic clusters. The first cluster comprises ChatGPT, Gemini, Claude, and Copilot. The second cluster includes LLaMA, Mistral, Qwen, and DeepSeek. Perplexity serves as a bridge between these two groups, while Grok appears as the most distinct case. These patterns point to a combination of differentiation and overlap in the way GenAI tools are used in scientific work.
In addition to providing a descriptive overview, the study aims to outline a comparative framework for positioning GenAI tools in the scientific environment. By combining publication dynamics, citation concentration, subject area profiles, and keyword-based Jaccard matrices, it describes a structured way to group tools into clusters, to identify bridge and outlier positions, and to assess the extent of topical overlap across different model lines. This framework can be applied to domain specific specialized models and to future generations of GenAI systems. In this way, it may support cumulative comparisons over time.
This gives rise to three practical implications for researchers and institutions that consider the implementation and evaluation of GenAI tools. First, the choice of tool should depend on the purpose and field. Applied and educational work may benefit from general-purpose tools, while methodological and infrastructure projects may benefit from open-source families. Second, comparisons between tools should take into account the subject area and implementation time. A prudent standard is to combine impact indicators with keyword similarity measures such as the Jaccard index and to report model versions, usage parameters, and literature selection criteria. This practice can help transform descriptive bibliometric profiles into a reusable comparison template. Third, it is advisable to document the work process in the spirit of reproducibility. Database query codes, data-cleaning scripts, and result files should be made available to facilitate independent verification and comparability of research.

7. Limitations and Future Recommendations

This study relies on data from a single database (Web of Science) and a 2023–2025 horizon, which limits coverage, exposes the results to indexing delays, and constrains the normalization of citation indicators. The thematic analyses use keywords rather than full texts and therefore may not capture entire article contents. The comparisons do not differentiate between model versions or detailed configurations, and some differences may arise from mixed disciplinary profiles within individual corpora. Specialized tools (e.g., Med-PaLM 2, BioGPT, ClinicalGPT) were not examined in depth.
We recommend expanding data sources to include additional databases and preprints, as well as incorporating full-text models for topic discovery and semantic similarity. It is worthwhile to normalize citations by field and year, track model versioning, and study the co-use of multiple tools within a single publication. Complementary avenues include altmetric analyses, data on code and repository usage, and triangulation with expert qualitative reviews. Such a research program would allow for more precise measurement of the substitutability and complementarity of GenAI tools and would better inform recommendations for institutional policy, pedagogy, and research practice.
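As an illustration of the recommended field-and-year normalization, each paper's citation count can be rescaled by the mean of its (field, year) stratum, in the spirit of the mean normalized citation score. The record layout below ("field", "year", "TC" keys) is an assumption for demonstration, not a prescribed schema.

```python
from collections import defaultdict

def normalized_citations(records):
    """Field- and year-normalized citation scores: each paper's
    citation count divided by the mean citations of all papers
    sharing its (field, year) pair."""
    totals = defaultdict(lambda: [0, 0])  # (field, year) -> [sum, count]
    for r in records:
        key = (r["field"], r["year"])
        totals[key][0] += r["TC"]
        totals[key][1] += 1
    out = []
    for r in records:
        s, n = totals[(r["field"], r["year"])]
        mean = s / n
        out.append(r["TC"] / mean if mean else 0.0)
    return out

papers = [
    {"field": "CS", "year": 2024, "TC": 10},
    {"field": "CS", "year": 2024, "TC": 2},
    {"field": "EDU", "year": 2024, "TC": 6},
]
print(normalized_citations(papers))
```

A score above 1.0 marks a paper cited more than its field-year average, which makes tool corpora with different disciplinary mixes and publication years more comparable.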

Author Contributions

Conceptualization, K.S. and M.O.; Methodology, K.S. and M.O.; Software, K.S.; Validation, K.S.; Formal analysis, K.S.; Investigation, M.O.; Resources, M.O.; Data curation, K.S.; Writing—original draft, K.S.; Writing—review & editing, M.O.; Visualization, K.S.; Supervision, M.O.; Project administration, K.S. and M.O. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
EDU: Education & Educational Research
CS: Computer Science
ENG: Engineering
HCSS: Health Care Sciences & Services
GIM: General & Internal Medicine
MEDINF: Medical Informatics
BE: Business & Economics
SURG: Surgery
STOT: Science & Technology Other Topics
SSOT: Social Sciences Other Topics
PHYS: Physics
REL: Religion
ASTRO: Astronomy & Astrophysics
CHEM: Chemistry
LIT: Literature
HIST: History
RADNMMI: Radiology, Nuclear Medicine & Medical Imaging
LING: Linguistics
MATS: Materials Science
INSTR: Instruments & Instrumentation
ISLS: Information Science & Library Science
RS: Remote Sensing
ONC: Oncology
TEL: Telecommunications
DENT: Dentistry, Oral Surgery & Medicine
EMERG: Emergency Medicine
OPH: Ophthalmology

Appendix A. Code for Advanced Search of Bibliometric Studies on LLM in the Web of Science Core Collection Database

(TI = (LLM OR “large language model*” OR ChatGPT OR “GPT-4” OR “GPT 4” OR GPT4 OR “GPT-3” OR GPT3 OR Gemini OR Claude OR “Microsoft Copilot” OR LLaMA OR “Meta Llama” OR Mistral OR DeepSeek) AND TS = (“bibliometric*” OR “scientometric*” OR “science mapping” OR “co-citation*” OR “bibliographic coupling” OR “keyword co-occurrence” OR “co-word” OR “co-occurrence”)) AND (DT = (“Article”)) AND PY = (2022 OR 2023 OR 2024 OR 2025).

Appendix B

Table A1. Codes used for advanced search of selected GenAI in the Web of Science Core Collection database (part 1).
GenAI name (number of publications) and search code:

ChatGPT (11,619 publications):
(TI = (ChatGPT OR “OpenAI ChatGPT” OR “GPT-4o” OR “GPT-4.1” OR “GPT-4.5” OR “GPT-o1” OR “GPT-o3” OR “GPT-o4” OR “o3-mini” OR “o4-mini” OR “GPT-5”) OR TS = (ChatGPT OR “Chat GPT” OR “OpenAI ChatGPT” OR “GPT-4o” OR “GPT-4.1” OR “GPT-4.5” OR “GPT-o1” OR “GPT-o3” OR “GPT-o4” OR “o3-mini” OR “o4-mini” OR “GPT-5” OR (“OpenAI” NEAR/3 ChatGPT) OR (“OpenAI” NEAR/3 GPT))) AND DT = (“Article”) AND PY = (2023 OR 2024 OR 2025)

Gemini (2444 publications):
(TI = (“Google Gemini” OR “Gemini 2.5” OR “Gemini 2.0” OR “Gemini 1.5” OR “Google Bard” OR Gemini OR Bard) OR TS = (“Google Gemini” OR (Gemini NEAR/3 Google) OR (Gemini NEAR/3 “DeepMind”) OR “Google Bard” OR (Bard NEAR/2 Google) OR Gemini OR Bard)) NOT TS = (zodiac OR astrology OR constellation OR telescope OR observatory OR “Project Gemini” OR NASA OR surfactant* OR amphiphile* OR “Gemini quaternary” OR “Gemini cationic” OR “Gemini ionic liquid*”) AND DT = (“Article”) AND PY = (2023 OR 2024 OR 2025)

Claude (1083 publications):
(TI = (“Anthropic Claude” OR “Claude 4.1” OR “Claude 3.7” OR “Claude 3.5” OR “Claude 3” OR “Claude 2” OR “Claude Sonnet” OR “Claude Opus” OR “Claude Haiku” OR Claude) OR TS = (“Anthropic Claude” OR (Claude NEAR/2 Anthropic) OR Claude)) NOT TS = (“Claude Shannon” OR Monet OR “Claude Bernard” OR “Claude Lévi-Strauss” OR “Claude Levi-Strauss” OR “Saint-Claude”) AND DT = (“Article”) AND PY = (2023 OR 2024 OR 2025)

LLaMA (1067 publications):
(TI = (“Meta Llama” OR “Meta AI” OR LLaMA OR “LLaMA 2” OR “LLaMA 3” OR “Llama 3.1” OR “Llama 4 Scout” OR “Llama 2” OR “Llama 3” OR Llama) OR TS = (“Meta Llama” OR LLaMA OR (“Llama” NEAR/2 Meta) OR Llama)) NOT TS = (animal OR mammal OR camelid OR camelidae OR alpaca OR vicuna OR guanaco OR zoo OR wildlife OR herd OR wool OR fleece OR livestock OR veterinary OR “se llama” OR “llamado” OR “llamada” OR “llamados”) AND DT = (“Article”) AND PY = (2023 OR 2024 OR 2025)

Perplexity (574 publications):
(TI = (“Perplexity AI” OR Perplexity) OR TS = (“Perplexity AI” OR (Perplexity NEAR/2 “answer engine”) OR (Perplexity NEAR/2 search) OR Perplexity)) NOT TS = ((perplexity NEAR/3 language) OR (perplexity NEAR/3 model) OR (perplexity NEAR/3 NLP) OR (perplexity NEAR/3 metric) OR “Shannon perplexity”) AND DT = (“Article”) AND PY = (2022 OR 2023 OR 2024 OR 2025)
Source: own elaboration based on Web of Science Core Collection.
Table A2. Codes used for advanced search of selected GenAI in the Web of Science Core Collection database (part 2).
GenAI name (number of publications) and search code:

DeepSeek (426 publications):
(TI = (“DeepSeek” OR “DeepSeek-V2” OR “DeepSeek-V2.5” OR “DeepSeek-V3” OR “DeepSeek R1” OR “DeepSeek Coder” OR DeepSeek) OR TS = (“DeepSeek” OR (“DeepSeek” NEAR/3 model) OR (“DeepSeek” NEAR/3 “language model”) OR (“DeepSeek” NEAR/3 “AI assistant”) OR DeepSeek)) AND DT = (“Article”) AND PY = (2023 OR 2024 OR 2025)

Mistral (300 publications):
(TI = (“Mistral Large” OR “Mistral Large 2” OR “Mistral 7B” OR “Mixtral 8x22B” OR “Mixtral 8x7B” OR “Mistral AI” OR Mistral) OR TS = (“Mistral AI” OR “Mistral Large” OR “Mixtral 8x22B” OR (Mistral NEAR/3 “language model”) OR (Mixtral NEAR/3 model) OR Mistral)) NOT TS = (wind OR meteorology* OR Provence OR “Frédéric Mistral”) AND DT = (“Article”) AND PY = (2023 OR 2024 OR 2025)

Qwen (127 publications):
(TI = (“Alibaba Qwen” OR “Qwen 3” OR “Qwen-3” OR “Qwen 2.5” OR “Qwen-2.5” OR “Tongyi Qianwen” OR Qwen) OR TS = (“Alibaba Qwen” OR “Qwen 3” OR “Qwen-3” OR “Tongyi Qianwen” OR (Qwen NEAR/3 Alibaba) OR Qwen)) AND DT = (“Article”) AND PY = (2023 OR 2024 OR 2025)

Copilot (101 publications):
(TI = (“Microsoft 365 Copilot” OR “Copilot for Microsoft 365” OR “Microsoft Copilot” OR Copilot) OR TS = (“Microsoft 365 Copilot” OR (Copilot NEAR/3 “Microsoft 365”) OR (Copilot NEAR/3 “Office 365”) OR (Copilot NEAR/3 Word) OR (Copilot NEAR/3 Excel) OR (Copilot NEAR/3 PowerPoint) OR (Copilot NEAR/3 Outlook) OR (Copilot NEAR/3 Teams) OR Copilot)) NOT TS = (GitHub OR “Git Hub” OR aircraft OR airline OR aviation OR airplane OR “auto pilot” OR autopilot OR UAV OR drone OR cockpit OR pilot OR co-pilot OR “co pilot”) AND DT = (“Article”) AND PY = (2023 OR 2024 OR 2025)

Grok (57 publications):
(TI = (“xAI Grok” OR “Grok-5” OR “Grok-3” OR “Grok-2” OR “Grok-1” OR Grok) OR TS = (“xAI Grok” OR (Grok NEAR/3 xAI) OR (Grok NEAR/3 “Elon Musk”) OR Grok)) NOT TS = (“Grokking” OR “to grok” OR “Grokking Algorithms” OR Heinlein OR “Stranger in a Strange Land”) AND DT = (“Article”) AND PY = (2023 OR 2024 OR 2025)
Source: own elaboration based on Web of Science Core Collection.

Appendix C

Figure A1. Percentage share in the authors’ countries of origin broken down by individual GenAI tools (for the period 2023–2025) (part 1). Source: own elaboration based on Web of Science Core Collection.
Figure A2. Percentage share in the authors’ countries of origin broken down by individual GenAI tools (for the period 2023–2025) (part 2). Source: own elaboration based on Web of Science Core Collection.
Figure A3. Percentage share in the authors’ countries of origin broken down by individual GenAI tools (for the period 2023–2025) (part 3). Source: own elaboration based on Web of Science Core Collection.
Figure A4. Percentage share of publications from selected GenAI collections in the ten most popular research areas (for the period 2023–2025) (part 1).
Figure A5. Percentage share of publications from selected GenAI collections in the ten most popular research areas (for the period 2023–2025) (part 2). Note: The abbreviations are the same as in Figure A4. Source: own elaboration based on Web of Science Core Collection.
Figure A6. Percentage share of publications from selected GenAI collections in the ten most popular research areas (for the period 2023–2025) (part 3). Note: The abbreviations are the same as in Figure A4. Source: own elaboration based on Web of Science Core Collection.
Figure A7. Jaccard similarity across tool-specific corpora based on the Top–50 keywords. Source: own elaboration based on Web of Science Core Collection.
Figure A8. Jaccard similarity across tool-specific corpora based on the Top–200 keywords. Source: own elaboration based on Web of Science Core Collection.

References

1. Alavi, M., Leidner, D. E., & Mousavi, R. (2024). Knowledge management perspective of generative artificial intelligence. Journal of the Association for Information Systems, 25(1), 1–12.
2. Ayers, J. W., Poliak, A., Dredze, M., Leas, E. C., Zhu, Z., Kelley, J. B., Faix, D. J., Goodman, A. M., Longhurst, C. A., Hogarth, M., & Smith, D. M. (2023). Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Internal Medicine, 183(6), 589.
3. Bail, C. A. (2024). Can generative AI improve social science? Proceedings of the National Academy of Sciences, 121(21), e2314021121.
4. Banh, L., & Strobel, G. (2023). Generative artificial intelligence. Electronic Markets, 33(1), 63.
5. Barrington, N. M., Gupta, N., Musmar, B., Doyle, D., Panico, N., Godbole, N., Reardon, T., & D’Amico, R. S. (2023). A bibliometric analysis of the rise of ChatGPT in medical research. Medical Sciences, 11(3), 61.
6. Bhullar, P. S., Joshi, M., & Chugh, R. (2024). ChatGPT in higher education—A synthesis of the literature and a future research agenda. Education and Information Technologies, 29(16), 21501–21522.
7. Birkle, C., Pendlebury, D. A., Schnell, J., & Adams, J. (2020). Web of Science as a data source for research on scientific and scholarly activity. Quantitative Science Studies, 1(1), 363–376.
8. Bolgova, O., Ganguly, P., & Mavrych, V. (2025). Comparative analysis of LLMs performance in medical embryology: A cross-platform study of ChatGPT, Claude, Gemini, and Copilot. Anatomical Sciences Education, 18(7), 718–726.
9. Buehler, M. J. (2024). Accelerating scientific discovery with generative knowledge extraction, graph-based representation, and multimodal intelligent graph reasoning. Machine Learning: Science and Technology, 5(3), 035083.
10. Cardillo, A. (2025, October 17). Best 44 Large Language Models (LLMs) in 2025. Exploding Topics. Available online: https://explodingtopics.com/blog/list-of-llms (accessed on 3 November 2025).
11. Cascella, M., Montomoli, J., Bellini, V., & Bignami, E. (2023). Evaluating the feasibility of ChatGPT in healthcare: An analysis of multiple clinical and research scenarios. Journal of Medical Systems, 47(1), 33.
12. Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 43.
13. Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2024). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239.
14. Dergaa, I., Chamari, K., Zmijewski, P., & Ben Saad, H. (2023). From human writing to artificial intelligence generated text: Examining the prospects and potential threats of ChatGPT in academic writing. Biology of Sport, 40(2), 615–622.
15. Fan, L., Li, L., Ma, Z., Lee, S., Yu, H., & Hemphill, L. (2024). A bibliometric review of large language models research from 2017 to 2023. ACM Transactions on Intelligent Systems and Technology, 15(5), 1–25.
16. Farhat, F., Silva, E. S., Hassani, H., Madsen, D. Ø., Sohail, S. S., Himeur, Y., Alam, M. A., & Zafar, A. (2024). The scholarly footprint of ChatGPT: A bibliometric analysis of the early outbreak phase. Frontiers in Artificial Intelligence, 6, 1270749.
17. Farrokhnia, M., Banihashem, S. K., Noroozi, O., & Wals, A. (2024). A SWOT analysis of ChatGPT: Implications for educational practice and research. Innovations in Education and Teaching International, 61(3), 460–474.
18. Gande, S., Gould, M., & Ganti, L. (2024). Bibliometric analysis of ChatGPT in medicine. International Journal of Emergency Medicine, 17(1), 50.
19. Gangwal, A., & Lavecchia, A. (2024). Unleashing the power of generative AI in drug discovery. Drug Discovery Today, 29(6), 103992.
20. Gencer, G., & Gencer, K. (2025). Large language models in healthcare: A bibliometric analysis and examination of research trends. Journal of Multidisciplinary Healthcare, 18, 223–238.
21. Gilson, A., Safranek, C. W., Huang, T., Socrates, V., Chi, L., Taylor, R. A., & Chartash, D. (2023). How does ChatGPT perform on the United States Medical Licensing Examination (USMLE)? The implications of large language models for medical education and knowledge assessment. JMIR Medical Education, 9, e45312.
22. Hochmair, H. H., Juhász, L., & Kemp, T. (2024). Correctness comparison of ChatGPT-4, Gemini, Claude-3, and Copilot for spatial tasks. Transactions in GIS, 28(7), 2219–2231.
23. Jahan, I., Laskar, M. T. R., Peng, C., & Huang, J. (2023). A comprehensive evaluation of large language models on benchmark biomedical text processing tasks (version 3). arXiv.
24. Kaftan, A. N., Hussain, M. K., & Naser, F. H. (2024). Response accuracy of ChatGPT 3.5, Copilot and Gemini in interpreting biochemical laboratory data: A pilot study. Scientific Reports, 14(1), 8233.
25. Kung, T. H., Cheatham, M., Medenilla, A., Sillos, C., De Leon, L., Elepaño, C., Madriaga, M., Aggabao, R., Diaz-Candido, G., Maningo, J., & Tseng, V. (2023). Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLoS Digital Health, 2(2), e0000198.
26. Li, M., Sun, J., & Tan, X. (2024). Evaluating the effectiveness of large language models in abstract screening: A comparative analysis. Systematic Reviews, 13(1), 219.
27. Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023). Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. The International Journal of Management Education, 21(2), 100790.
28. Lin, Z. (2023). Why and how to embrace AI such as ChatGPT in your academic life. Royal Society Open Science, 10(8), 230658.
29. Lu, M. Y., Chen, B., Williamson, D. F. K., Chen, R. J., Zhao, M., Chow, A. K., Ikemura, K., Kim, A., Pouli, D., Patel, A., & Soliman, A. (2024). A multimodal generative AI copilot for human pathology. Nature, 634(8033), 466–473.
30. Mohan, G., Prasanna Kumar, R., Krishh, P., Keerthinathan, A., Lavanya, G., Meghana, M. K. U., Sulthana, S., & Doss, S. (2024). An analysis of large language models: Their impact and potential applications. Knowledge and Information Systems, 66(9), 5047–5070.
31. Nan, D., Zhao, X., Chen, C., Sun, S., Lee, K. R., & Kim, J. H. (2025). Bibliometric analysis on ChatGPT research with CiteSpace. Information, 16(1), 38.
32. Naveed, H., Khan, A. U., Qiu, S., Saqib, M., Anwar, S., Usman, M., Akhtar, N., Barnes, N., & Mian, A. (2025). A comprehensive overview of large language models. ACM Transactions on Intelligent Systems and Technology, 16(5), 1–72.
33. Nazi, Z. A., & Peng, W. (2024). Large language models in healthcare and medical domain: A review (version 2). arXiv.
34. Oliński, M., Krukowski, K., & Sieciński, K. (2024). Bibliometric overview of ChatGPT: New perspectives in social sciences. Publications, 12(1), 9.
35. Pan, A., Musheyev, D., Bockelman, D., Loeb, S., & Kabarriti, A. E. (2023). Assessment of artificial intelligence chatbot responses to top searched queries about cancer. JAMA Oncology, 9(10), 1437.
36. Pan, S., Luo, L., Wang, Y., Chen, C., Wang, J., & Wu, X. (2024). Unifying large language models and knowledge graphs: A roadmap. IEEE Transactions on Knowledge and Data Engineering, 36(7), 3580–3599.
37. Patil, R., & Gudivada, V. (2024). A review of current trends, techniques, and challenges in large language models (LLMs). Applied Sciences, 14(5), 2074.
38. Polat, H., Topuz, A. C., Yıldız, M., Taşlıbeyaz, E., & Kurşun, E. (2024). A bibliometric analysis of research on ChatGPT in education. International Journal of Technology in Education, 7(1), 59–85.
39. Pradana, M., Elisa, H. P., & Syarifuddin, S. (2023). Discussing ChatGPT in education: A literature review and bibliometric analysis. Cogent Education, 10(2), 2243134.
40. Pranckutė, R. (2021). Web of Science (WoS) and Scopus: The Titans of bibliographic information in today’s academic world. Publications, 9(1), 12.
  41. Pu, Z., Shi, C., Jeon, C. O., Fu, J., Liu, S., Lan, C., Yao, Y., Liu, Y., & Jia, B. (2024). ChatGPT and generative AI are revolutionizing the scientific community: A Janus-faced conundrum. iMeta, 3(2), e178. [Google Scholar] [CrossRef] [PubMed]
  42. Pwanedo Amos, J., Ahmed Amodu, O., Azlina Raja Mahmood, R., Bolakale Abdulqudus, A., Zakaria, A. F., Rhoda Iyanda, A., Ali Bukar, U., & Mohd Hanapi, Z. (2025). A Bibliometric exposition and review on leveraging LLMs for programming education. IEEE Access, 13, 58364–58393. [Google Scholar] [CrossRef]
  43. Raza, M., Jahangir, Z., Riaz, M. B., Saeed, M. J., & Sattar, M. A. (2025). Industrial applications of large language models. Scientific Reports, 15(1), 13755. [Google Scholar] [CrossRef]
  44. Savage, C. H., Kanhere, A., Parekh, V., Langlotz, C. P., Joshi, A., Huang, H., & Doo, F. X. (2025). Open-source large language models in radiology: A review and tutorial for practical research and clinical deployment. Radiology, 314(1), e241073. [Google Scholar] [CrossRef]
  45. Scherbakov, D., Hubig, N., Jansari, V., Bakumenko, A., & Lenert, L. A. (2025). The emergence of large language models as tools in literature reviews: A large language model-assisted systematic review. Journal of the American Medical Informatics Association, 32(6), 1071–1086. [Google Scholar] [CrossRef]
  46. Shao, M., Basit, A., Karri, R., & Shafique, M. (2024). Survey of different large language model architectures: Trends, benchmarks, and challenges. IEEE Access, 12, 188664–188706. [Google Scholar] [CrossRef]
  47. Shool, S., Adimi, S., Saboori Amleshi, R., Bitaraf, E., Golpira, R., & Tara, M. (2025). A systematic review of large language model (LLM) evaluations in clinical medicine. BMC Medical Informatics and Decision Making, 25(1), 117. [Google Scholar] [CrossRef]
  48. Shukla, M., Goyal, I., Gupta, B., & Sharma, J. (2024). A comparative study of ChatGPT, Gemini, and perplexity. International Journal of Innovative Research in Computer Science and Technology, 12(4), 10–15. [Google Scholar] [CrossRef]
  49. Şahin, M. F., Topkaç, E. C., Doğan, Ç., Şeramet, S., Özcan, R., Akgül, M., & Yazıcı, C. M. (2024). Still Using Only ChatGPT? The comparison of five different artificial intelligence chatbots’ answers to the most common questions about kidney stones. Journal of Endourology, 38(11), 1172–1177. [Google Scholar] [CrossRef]
  50. Tosun, B. (2025). Performance of five large language models in managing acute dental pain: A comprehensive analysis. Turkish Endodontic Journal, 39–49. [Google Scholar] [CrossRef]
  51. Wang, H., Fu, T., Du, Y., Gao, W., Huang, K., Liu, Z., Chandak, P., Liu, S., Van Katwyk, P., Deac, A., & Anandkumar, A. (2023). Scientific discovery in the age of artificial intelligence. Nature, 620(7972), 47–60. [Google Scholar] [CrossRef]
  52. Wang, S., Hu, T., Xiao, H., Li, Y., Zhang, C., Ning, H., Zhu, R., Li, Z., & Ye, X. (2024). GPT, large language models (LLMs) and generative artificial intelligence (GAI) models in geospatial science: A systematic review. International Journal of Digital Earth, 17(1), 2353122. [Google Scholar] [CrossRef]
  53. Wilhelm, T. I., Roos, J., & Kaczmarczyk, R. (2023). Large language models for therapy recommendations across 3 clinical specialties: Comparative study. Journal of Medical Internet Research, 25, e49324. [Google Scholar] [CrossRef]
  54. Wu, J., Ma, Y., Wang, J., & Xiao, M. (2024). The application of ChatGPT in medicine: A scoping review and bibliometric analysis. Journal of Multidisciplinary Healthcare, 17, 1681–1692. [Google Scholar] [CrossRef]
  55. Wu, L., Zheng, Z., Qiu, Z., Wang, H., Gu, H., Shen, T., Qin, C., Zhu, C., Zhu, H., Liu, Q., Xiong, H., & Chen, E. (2023). A survey on large language models for recommendation (version 5). arXiv. [Google Scholar] [CrossRef]
  56. Yalcinkaya, T., & Sebnem, C. Y. (2024). Bibliometric and content analysis of ChatGPT research in nursing education: The rabbit hole in nursing education. Nurse Education in Practice, 77, 103956. [Google Scholar] [CrossRef] [PubMed]
  57. Yan, L., Greiff, S., Teuber, Z., & Gašević, D. (2024). Promises and challenges of generative artificial intelligence for human learning (version 3). arXiv. [Google Scholar] [CrossRef]
  58. Yang, R., Zhu, J., Man, J., Fang, L., & Zhou, Y. (2024). Enhancing text-based knowledge graph completion with zero-shot large language models: A focus on semantic enhancement. Knowledge-Based Systems, 300, 112155. [Google Scholar] [CrossRef]
  59. Yin, S., Fu, C., Zhao, S., Li, K., Sun, X., Xu, T., & Chen, E. (2024). A survey on multimodal large language models. National Science Review, 11(12), nwae403. [Google Scholar] [CrossRef]
  60. Zhang, W., Wang, Q., Kong, X., Xiong, J., Ni, S., Cao, D., Niu, B., Chen, M., Li, Y., Zhang, R., Wang, Y., Zhang, L., Li, X., Xiong, Z., Shi, Q., Huang, Z., Fu, Z., & Zheng, M. (2024). Fine-tuning large language models for chemical text mining. Chemical Science, 15(27), 10600–10611. [Google Scholar] [CrossRef]
  61. Zhao, H., Chen, H., Yang, F., Liu, N., Deng, H., Cai, H., Wang, S., Yin, D., & Du, M. (2024). Explainability for large language models: A survey. ACM Transactions on Intelligent Systems and Technology, 15(2), 1–38. [Google Scholar] [CrossRef]
  62. Zhou, M., Pan, Y., Zhang, Y., Song, X., & Zhou, Y. (2025). Evaluating AI-generated patient education materials for spinal surgeries: Comparative analysis of readability and DISCERN quality across ChatGPT and deepseek models. International Journal of Medical Informatics, 198, 105871. [Google Scholar] [CrossRef] [PubMed]
  63. Zhu, X., Li, J., Liu, Y., Ma, C., & Wang, W. (2024). A survey on model compression for large language models. Transactions of the Association for Computational Linguistics, 12, 1556–1577. [Google Scholar] [CrossRef]
Figure 1. Number of publications and citations in the bibliometric analysis of large language models. Note: The code used for advanced searching of bibliometric studies on GenAI in the Web of Science database can be found in Appendix A. No results were found for the period before 2023. Source: own elaboration based on Web of Science database (accessed on 1 November 2025).
Figure 2. Top-10 Web of Science Categories by number of GenAI bibliometric analyses. Source: own elaboration based on Web of Science Core Collection.
Figure 3. Percentage share of publications for selected GenAI tools within the total of all analyzed articles. Note: code used for the advanced publication search: #1 OR #2 OR #3 OR #4 OR #5 OR #6 OR #7 OR #8 OR #9 OR #10; each number corresponds to the search query for one of the analyzed tools (see Table A1 and Table A2, Appendix B). This procedure yielded the total number of publications without duplicates. Note 2: Publications may be assigned to more than one line; therefore, the shares may sum to more than 100%. Source: own elaboration based on Web of Science Core Collection.
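The OR-combination and deduplication described in the Figure 3 note can be mirrored in a few lines of Python: each tool-specific query returns a set of record IDs, and combining the queries with OR corresponds to taking the union of those sets. This is only an illustrative sketch; the record IDs and per-tool sets are hypothetical, not taken from the corpus.

```python
# Each tool query returns a set of Web of Science record IDs (hypothetical here).
per_tool_hits = {
    "ChatGPT": {"WOS:0001", "WOS:0002", "WOS:0003"},
    "Gemini": {"WOS:0002", "WOS:0004"},
    "Claude": {"WOS:0003", "WOS:0004"},
}

# Union of all sets mirrors #1 OR #2 OR ... OR #10 and removes duplicates.
deduplicated = set().union(*per_tool_hits.values())

# Shares are computed against the deduplicated total, so overlapping records
# make the shares sum to more than 100%.
shares = {tool: len(hits) / len(deduplicated) for tool, hits in per_tool_hits.items()}

print(len(deduplicated))               # 4 unique records
print(round(sum(shares.values()), 2))  # 1.75, i.e., more than 100% due to overlap
```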
Figure 4. Percentage share of publications among selected GenAI tools in 2023–2025. Source: own elaboration based on Web of Science Core Collection.
Figure 5. Growth dynamics in the number of publications on selected GenAI tools in 2023–2025. Note: DeepSeek, Qwen, Copilot, and Grok are omitted because their growth accelerated much later than that of the other tools; the figure therefore covers only tools that exhibited moderate growth throughout 2023–2025. Source: own elaboration based on Web of Science Core Collection.
Figure 6. Growth dynamics of the total number of publications on the analyzed GenAI tools in 2023–2025 (with the projected value for 2026). Note: The plot includes an indicative projection for 2026 based on a fitted logarithmic function y = 4050ln(x) + 2387.1, where x = 1, 2, 3, 4 correspond to the years 2023–2026. The projection is intended solely to suggest a possible trajectory. The 2025 record may continue to be updated in the database. Source: own elaboration based on Web of Science Core Collection.
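The fitted trend reported in the Figure 6 note can be evaluated directly from the stated coefficients. A minimal sketch (the function name is ours; the coefficients are those given in the note):

```python
import math

# Fitted logarithmic trend from Figure 6: y = 4050*ln(x) + 2387.1,
# where x = 1, 2, 3, 4 index the years 2023-2026.
def projected_publications(year: int) -> float:
    x = year - 2022  # map 2023 -> 1, ..., 2026 -> 4
    return 4050 * math.log(x) + 2387.1

for year in range(2023, 2027):
    print(year, round(projected_publications(year)))
```

At x = 1 (the year 2023) the formula returns the intercept 2387.1 exactly, since ln(1) = 0; the 2026 value is the indicative projection mentioned in the note.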
Figure 7. Citations per publication for the analyzed groups of publications and for the entire set in 2023–2025. Source: own elaboration based on Web of Science Core Collection.
Figure 8. Author co-citation network for the one hundred most frequently cited researchers in the GenAI literature (2023–2025). Source: own elaboration based on Web of Science Core Collection.
Figure 9. Global co-authorship network for the entire set of analyzed publications (2023–2025). Source: own elaboration based on Web of Science Core Collection.
Figure 10. Comparison of citations per publication across the three dominant subject categories among GenAI tools (2023–2025). Source: own elaboration based on Web of Science Core Collection.
Figure 11. Jaccard similarity across tool-specific corpora based on the Top–100 keywords. Source: own elaboration based on Web of Science Core Collection.
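For reference, the Jaccard index underlying Figure 11 is the size of the intersection of two keyword sets divided by the size of their union. A minimal sketch (the keyword lists are illustrative, not drawn from the actual Top–100 sets):

```python
def jaccard(a, b):
    """Jaccard similarity between two keyword sets: |A & B| / |A | B|."""
    a, b = set(a), set(b)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Illustrative keyword sets for two tool-specific corpora.
top_chatgpt = {"chatgpt", "education", "large language models", "ethics"}
top_llama = {"llama", "fine-tuning", "large language models", "benchmarks"}

print(jaccard(top_chatgpt, top_llama))  # 1 shared keyword out of 7 unique
```

In the study this comparison is run pairwise over the Top–50, Top–100, and Top–200 keyword sets of each tool line, producing the similarity matrices summarized in Figure 11.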
Table 1. Detailed citation characteristics for each analyzed set of publications (values cumulated for 2023–2025).
GenAI Name | Authors (Most-Cited Publication) | Title | Citation Count | i10-Index * | h-Index *
ChatGPT | Kung et al. (2023) | Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models | 1973 | 10,608 | 136
Gemini | Lim et al. (2023) | Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators | 566 | 2816 | 48
Claude | Wilhelm et al. (2023) | Large language models for therapy recommendations across 3 clinical specialties: Comparative study | 80 | 486 | 24
LLaMA | Dergaa et al. (2023) | From human writing to artificial intelligence generated text: Examining the prospects and potential threats of ChatGPT in academic writing | 308 | 1411 | 30
Perplexity | A. Pan et al. (2023) | Assessment of artificial intelligence chatbot responses to top searched queries about cancer | 150 | 701 | 19
DeepSeek | Zhou et al. (2025) | Evaluating AI-generated patient education materials for spinal surgeries: Comparative analysis of readability and DISCERN quality across ChatGPT and DeepSeek models | 25 | 144 | 9
Mistral | Zhang et al. (2024) | Fine-tuning large language models for chemical text mining | 39 | 225 | 12
Qwen | Yang et al. (2024) | Enhancing text-based knowledge graph completion with zero-shot large language models: A focus on semantic enhancement | 21 | 108 | 8
Copilot | Lu et al. (2024) | A multimodal generative AI copilot for human pathology | 148 | 336 | 10
Grok | Şahin et al. (2024) | Still using only ChatGPT? The comparison of five different artificial intelligence chatbots’ answers to the most common questions about kidney stones | 13 | 55 | 4
* based on Web of Science citations. Source: own elaboration based on Web of Science Core Collection.
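The i10- and h-index values reported in Table 1 can be recomputed from a list of per-publication citation counts using the standard definitions. A minimal sketch (the sample citation list is illustrative, not a tool's actual corpus):

```python
def h_index(citations):
    """Largest h such that h publications have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    # For a descending list, ranks satisfying cites >= rank form a prefix,
    # so counting them yields the h-index.
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

def i10_index(citations):
    """Number of publications with at least 10 citations."""
    return sum(1 for cites in citations if cites >= 10)

sample = [150, 42, 18, 10, 9, 3, 0]
print(h_index(sample))    # 5: five publications have >= 5 citations each
print(i10_index(sample))  # 4: four publications have >= 10 citations
```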
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Sieciński, K.; Oliński, M. A Multidisciplinary Bibliometric Analysis of Differences and Commonalities Between GenAI in Science. Publications 2025, 13, 67. https://doi.org/10.3390/publications13040067