Review

Tracing the Influence of Large Language Models across the Most Impactful Scientific Works

by Dana-Mihaela Petroșanu 1,*, Alexandru Pîrjan 2 and Alexandru Tăbușcă 2
1 Department of Mathematics-Informatics, National University of Science and Technology Politehnica Bucharest, Splaiul Independenței 313, 060042 Bucharest, Romania
2 Department of Informatics, Statistics and Mathematics, Romanian-American University, Expoziției 1B, 012101 Bucharest, Romania
* Author to whom correspondence should be addressed.
Electronics 2023, 12(24), 4957; https://doi.org/10.3390/electronics12244957
Submission received: 12 November 2023 / Revised: 30 November 2023 / Accepted: 8 December 2023 / Published: 10 December 2023
(This article belongs to the Special Issue Artificial Intelligence Empowered Internet of Things)

Abstract: In recent years, large language models (LLMs) have emerged as one of the most transformative developments in the technical domain, influencing diverse sectors ranging from natural language processing (NLP) to the creative arts. Their rise signifies an unprecedented convergence of computational prowess, sophisticated algorithms, and expansive datasets, pushing the boundaries of what was once thought achievable. Such a profound impact mandates a thorough exploration of the LLMs’ evolutionary trajectory. Consequently, this article conducts a literature review of the most impactful scientific works, using the reliable Web of Science (WoS) indexing database as a data source in order to attain a thorough and quality-assured analysis. This review identifies relevant patterns, provides research insights, traces technological growth, and anticipates potential future directions. Beyond mapping the known, this study aims to highlight uncharted areas within the LLM landscape, thereby catalyzing future research endeavors. The ultimate goal is to enhance collective understanding, encourage collaboration, and guide subsequent innovations in harnessing the potential of LLMs for societal and technological advancement.

1. Introduction

In recent years, the technical landscape has been profoundly transformed by myriad advancements in various domains, with LLMs standing out as one of the most influential. These models, driven by advancements in artificial intelligence (AI) and machine learning (ML), have become essential not only in the realm of NLP but also in various areas such as business intelligence [1,2,3], healthcare [4,5,6], legal analytics [1,7,8], and even creative arts like music and literature [9]. The sheer scale, complexity, and potential of LLMs necessitate an analysis of their influence across the most impactful scientific works. This endeavor is not a chronological recounting of events but an examination of the models’ expanding influence and of the multitudinous challenges and solutions that have surfaced over time. This article seeks to fill this knowledge gap by reviewing the most impactful scientific literature on LLMs’ influence, applications, advantages, disadvantages, challenges, benefits, and risks. The rationale for this undertaking, the chosen methodology, and its prospective contribution to the academic community are delineated below.
The importance of LLMs in today’s technical sphere cannot be overstated. At a fundamental level, they signify the convergence of computational power [10,11,12,13], advanced algorithms [14,15,16], and vast datasets [10,11,17], which have collectively propelled the capabilities of these models to levels previously deemed unattainable. These models are at the core of many contemporary applications, from chatbots that offer near-human conversational experiences [6,8,18,19,20] to automated content generation tools [6,7,8] that are revolutionizing industries worldwide. The global economy, in terms of both industry and academia, has borne witness to the transformative power of LLMs [7,8,18], adapting to and evolving with these technological advancements. As such, understanding their progression and their ever-expanding influence is not just an academic exercise but a necessity for anyone invested in the future of technology.
In the realm of AI and NLP, LLMs have emerged as a disruptive force, reshaping our understanding and interaction with human language through computational means. The evolution of these models from early rule-based systems and statistical methods to complex neural network-based architectures represents a significant shift in the field. These early models, while foundational, were inherently limited in their ability to grasp the complexities and subtleties inherent in human language.
The advent of neural networks, particularly recurrent neural networks (RNNs) and the innovative transformer architecture, marked a fundamental shift. The introduction of transformers, as presented in the seminal work “Attention Is All You Need” by Vaswani et al., brought to light the self-attention mechanism [21]. This mechanism enables models to assign varying degrees of importance to different words within a sentence, thereby capturing context and relationships that were previously elusive. At the core of the LLM architecture lies the embedding layer, which translates tokens, be they words or sub-words, into numerical vectors that the ML algorithms then operate on. The self-attention mechanism, a hallmark of this architecture, allows the model to dynamically focus on different segments of the input sequence. This ability to discern context and relationships between words is what sets these models apart in their understanding of language.
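To ground this description, the following minimal NumPy sketch is our own illustration of scaled dot-product self-attention; the toy embeddings and the randomly initialized projection matrices W_q, W_k, and W_v are placeholders, not parameters of any model discussed here:

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention over a sequence of token embeddings X."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v            # project tokens into queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # pairwise token-to-token relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True) # softmax rows: attention weights per token
    return weights @ V                             # each output row mixes the whole sequence

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                            # toy sequence: 4 tokens, 8-dim embeddings
X = rng.normal(size=(seq_len, d_model))            # stand-in for the embedding layer's output
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)      # (4, 8): one context-aware vector per token
```

Each row of the result is a weighted mixture of the entire sequence, which is precisely the dynamic focus on different input segments described above.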
The transformer architecture revolutionized NLP. It is characterized by its distinct separation into two main components: the encoder and the decoder. The encoder typically comprises a stack of six layers, each containing a self-attention mechanism and a feed-forward neural network. This structure allows the model to weigh the importance of different words in a sentence, a process known as self-attention. The decoder mirrors the encoder’s structure but includes an additional layer that focuses attention on the encoder’s output, facilitating translation or summarization tasks [21].
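The stacking described here maps directly onto off-the-shelf building blocks. As an illustration of ours (not drawn from the reviewed papers, and assuming PyTorch is available), the sketch below assembles a six-layer encoder with PyTorch’s built-in transformer layer, using the dimensions from the original transformer (d_model = 512, 8 attention heads):

```python
import torch
from torch import nn

d_model, n_heads, n_layers = 512, 8, 6   # dimensions used in the original transformer [21]
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

tokens = torch.randn(1, 10, d_model)     # one already-embedded 10-token sequence
contextualized = encoder(tokens)         # each layer applies self-attention + feed-forward
print(contextualized.shape)              # torch.Size([1, 10, 512])
```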
The generative pretrained transformer (GPT) architecture deviates significantly from the original transformer by employing a decoder-only structure. This design choice is essential for its functionality in generating text. The GPT model processes text input through multiple decoder layers to predict the next word in a sequence. This architecture underpins its ability to generate coherent and contextually relevant text. GPT’s training involves two key stages: pretraining on a large corpus of text to learn language patterns and fine-tuning for specific tasks, allowing it to adapt to a wide range of applications [1].
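This next-word loop can be made concrete with any public decoder-only checkpoint. The sketch below is an illustration under the assumption that the Hugging Face transformers library and the public GPT-2 checkpoint are available; it performs greedy decoding, appending the single most likely token at each step:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("Large language models are", return_tensors="pt").input_ids
for _ in range(10):                                  # greedy decoding: one token per step
    with torch.no_grad():
        logits = model(input_ids).logits             # scores over the vocabulary per position
    next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)   # most likely next token
    input_ids = torch.cat([input_ids, next_id], dim=-1)       # append and repeat

print(tokenizer.decode(input_ids[0]))
```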
In architectures like the GPT, the emphasis is on the decoder, which is tasked with generating language outputs. In contrast, architectures like the bidirectional encoder representations from transformers (BERT) employ the encoder part to develop a deep understanding of the nuances of language context. These models also incorporate a fully connected neural network, which processes the outputs from the attention mechanism and encoder/decoder layers, culminating in the generation of the final output. The training process of these models involves two critical phases: pretraining and fine-tuning. During pretraining, the model undergoes training on vast datasets, absorbing general language patterns through unsupervised learning techniques. This phase often involves predicting missing words or sentences in a given text. The fine-tuning phase, on the other hand, tailors the pretrained model to specific tasks using smaller, task-oriented datasets, thereby enhancing its precision and applicability to specialized domains [2].
The BERT architecture represents another significant shift in transformer-based models, focusing exclusively on an encoder-only structure. This design enables BERT to understand the context from both sides of a word in a sentence, a capability known as bidirectional context. Central to BERT’s functionality is the masked language model (MLM), where some input tokens are intentionally masked and the model learns to predict them, thereby gaining a deeper understanding of language context and structure.
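The MLM objective can be exercised directly with a pretrained encoder. As a brief illustration (assuming, again, the Hugging Face transformers library and the public bert-base-uncased checkpoint), the fill-mask pipeline asks BERT to predict an intentionally masked token from bidirectional context:

```python
from transformers import pipeline

# fill-mask wraps BERT's masked language modeling head
unmasker = pipeline("fill-mask", model="bert-base-uncased")

# the model ranks candidate tokens for the [MASK] position using context
# from both the left and the right of the gap
for candidate in unmasker("Large language models can [MASK] human-like text."):
    print(candidate["token_str"], round(candidate["score"], 3))
```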
As one explores the capabilities and advancements in LLMs, one encounters a variety of model variants, each contributing unique features and improvements. The GPT series, developed by OpenAI, is renowned for its proficiency in generating coherent and contextually relevant text. Google’s BERT excels in understanding contextual nuances in language, proving invaluable in applications like question answering and sentiment analysis [22,23].
Other models like T5 and XLNet have further expanded the possibilities, introducing novel mechanisms that continue to push the frontiers of language modeling [24,25]. The applications and impacts of these models are vast and varied. They have demonstrated exceptional skills in natural language understanding and generation, facilitating tasks ranging from text summarization to complex language translation. Beyond the realm of language, their usefulness extends to interdisciplinary applications, aiding research and development in fields as diverse as bioinformatics and law. Nevertheless, the development and deployment of LLMs are not without challenges and ethical considerations. The scalability of these models comes at the cost of substantial computational resources, raising concerns about their environmental impact. Additionally, the potential for these models to learn and perpetuate biases present in training data is a pressing issue, necessitating a thorough design and rigorous evaluation to ensure fairness and responsible usage.
A comparative overview of the main architectures, namely transformer, GPT, and BERT, reveals both their shared characteristics and distinct functionalities. The original transformer model presents a balanced encoder–decoder structure, adept at handling a variety of language processing tasks. In contrast, GPT, with its decoder-only design, excels in text generation, harnessing its layers to predict subsequent words in a sequence. BERT, with its encoder-only approach, focuses on understanding and interpreting language, and is adept at tasks like sentence classification and question answering. This comparative analysis underscores the versatility and adaptability of the transformer architecture, thereby setting a foundation for continual advancements in the field of generative AI (Table 1).
While the present-day achievements of LLMs are visible and palpable, understanding their trajectory requires a thorough analysis of the specific scientific literature. Several innovative works, incremental improvements, and defining moments have contributed to the current state of LLMs [26,27,28]. A review of the most impactful scientific literature on LLMs is not just about tracing this journey, but about understanding the patterns, identifying significant works, and foreseeing potential future directions. Such a review has the potential to inspire researchers, guiding both newcomers and veteran practitioners in the field and ensuring that the cumulative knowledge is accessible, comprehensible, and actionable.
The landscape of LLMs, while vast, is also replete with uncharted scientific domains and areas awaiting exploration. This review not only maps the known but also attains insights into the unknown, providing a comprehensive understanding while also igniting the desire for knowledge. By analyzing and presenting the most impactful scientific literature on LLMs, this paper serves as a foundation for future research endeavors. It seeks to foster collaboration, inspire innovation, and guide the direction of subsequent research, ensuring that the journey of LLMs is not just acknowledged but also aptly leveraged for the betterment of technology and society at large.
Consequently, the inexorable rise of LLMs in the technical domain, coupled with the profound impact they have had across various sectors, necessitates a detailed review. Leveraging the robust capabilities of the WoS indexing database [29], this article endeavors to provide an insightful and impactful overview of the most influential papers regarding LLMs, thus enriching the existing body of knowledge and paving the way for future explorations.
This study offers a critical and in-depth exploration of LLMs from a multifaceted perspective. The review’s contributions extend across important bibliometric, technological, societal, ethical, and institutional insights, serving as a guide for future research, implementation, and policymaking in the field of AI. The main contributions of the conducted review study comprise insights into:
  • Technological impact and integration of LLMs: This article provides a comprehensive evaluation of the technological impact and integration of LLMs. It catalogues the multitude of ways in which LLMs have been assimilated into various technological sectors, offering readers a holistic view of these models’ influence on current technology paradigms. This study presents a detailed analysis of LLMs, enhancing the understanding of their functionality within different research areas. This review contributes by mapping out the interdisciplinary applications of LLMs, revealing their role as a milestone in the evolution of machine interaction with human languages.
  • The personalization revolution in technology and its societal impact: This study elucidates the transformative role of LLMs in personalizing technology, spotlighting the shift toward more intuitive user experiences. By exploring the adaptation of LLMs in personalizing user interaction, this article underscores a significant move toward more inclusive and democratized digital access. It contributes to the scientific literature by evaluating the societal impacts of these personalized experiences, balancing the advantages of global communication with the critical analysis of associated challenges, such as privacy concerns.
  • Trust, reliability, and ethical considerations of LLMs: Through a thorough examination, this review sheds light on the trust and reliability concerns surrounding LLMs, as well as their ethical implications. This article serves as an important discussion on the creation of trust in the digital realm, offering evidence of LLMs enhancing human decision-making. Furthermore, it highlights the necessity for ethical frameworks and bias mitigation strategies in LLM development, thereby contributing to the foundation for future policy and ethical guidelines in AI.
  • Institutional challenges and the road ahead: This review addresses the institutional challenges in integrating LLMs into existing frameworks, providing a strategic perspective for future adoption. It identifies the obstacles that institutions face, from infrastructural limitations to the need for enhanced AI literacy, offering valuable insights for overcoming these barriers. The article posits a forward-looking approach, considering potential regulatory and technological evolutions, and in doing so, it charts a path for institutions looking to responsibly harness the potential of LLMs.
The subsequent structure of this paper is as follows. Section 2 describes the rationale for the database querying, filtering, and data curation approach. Section 3 presents the outcomes of analyzing the obtained scientific pool of articles, followed by an analysis of the expanding and multidisciplinary influence of LLMs within the scientific literature, along with a synthesis of relevant papers from the pool. Section 4 contrasts the advantages and benefits with the potential disadvantages and risks of LLMs, while Section 5 presents the main insights of the conducted study.

2. Research Methodology

In conducting such a review, the choice of indexing database is essential. WoS [29], with its comprehensive coverage of the multidisciplinary scientific literature, stands out as a premium choice. Its rigorous selection criteria ensure that only the most impactful and credible journals are indexed, thereby ensuring the quality of the sources. Moreover, its advanced citation tracking enables a deeper understanding of the relationships between different research works, thereby aiding in unraveling the intricate web of LLM research over time. The vastness and credibility of WoS make it an ideal platform for conducting this scientific review.
In the following, we present a series of considerations regarding the database query rationale, results filtering, and data curation in LLM research. The WoS database is recognized as one of the premier platforms for accessing scholarly articles across diverse disciplines. Given the vast array of the published literature, it is essential to tailor search queries in order to retrieve the most relevant articles, particularly when analyzing a specialized topic like the impact of LLMs across research areas.
The query “TS = ((LLM* OR LARGE LANGUAGE MODEL*) AND (AI OR ARTIFICIAL INTELLIGENCE OR MACHINE LEARNING))” has been constructed to cover the comprehensive breadth and depth of the scientific literature associated with LLMs.
The usage of the wildcard symbol (*) ensures the capture of variations of LLM-related terms, be it abbreviations or the full phrase. This broadens the search, ensuring that the relevant literature is not overlooked. Including AI- and ML-related terms ensured that the retrieved articles specifically captured the relationship and relevance of LLMs within these domains. Given the intertwined nature of these fields, it is indispensable to explore their mutual implications.
The academic landscape is replete with a variety of publication formats, each serving a unique purpose. When crafting a review of the scientific literature, especially on a topic as nuanced as LLMs, it becomes imperative that the foundational sources uphold the highest standards of rigor and comprehensiveness. While review articles and conference proceedings hold value, there are compelling reasons for prioritizing scientific research articles: the primary sources of data should remain untouched by potential biases and preexisting summarizations.
The next filtering process that we have applied within the Clarivate database consists of prioritizing highly cited (HC) articles and hot articles (HA); the rationale of this approach is described in the following. The first aspect that we wanted to focus on was proper indicators of influence. Selecting “Highly Cited” and “Hot Articles” further refined our obtained scientific works pool, ensuring the inclusion of articles that have made a significant impact in the field. According to Clarivate [29], highly cited papers, by virtue of being in the top 1% in their field for a given year, are proof of their relevance and influence. Such articles have shaped the discourse and research directions in their domain, making them indispensable for understanding the impact of LLMs.
An additional aspect that we wanted to grasp was time relevance. Hot articles, being recently published yet swiftly gathering citations, signify cutting-edge developments in the field. These papers highlight the most current advancements and discussions, ensuring that the review remains contemporarily pertinent. Because citations are a form of academic endorsement, we focused our review on HC and HA documents, as they indicate widespread recognition and validation by peers, reinforcing their importance in the scholarly landscape.
Consequently, the systematic approach of querying and refining search results has ensured that the conducted review captures the most influential, relevant, and novel insights on LLMs. These aspects are vital for portraying an accurate and profound picture of the LLMs’ trajectory, thereby optimally serving the academic and wider tech community’s interests.
During the analysis of the returned results, it was observed that the database, despite being set to return only documents of the “Article” type, inadvertently included three articles that were actually of the “Review” type but had been misclassified as the “Article” type [30,31,32]. This discrepancy necessitated an immediate and careful response, leading to the initiation of a data curation process.
The essence of data curation in this context revolves around rectifying the identified classification error so that the dataset aligns with the predefined methodological criteria of the systematic review, thereby avoiding misleading results.
The scientific pool of articles retrieved after querying, filtering, and curating the Clarivate WoS database consists of a total of 47 article-type documents associated with LLMs, covering various subject areas. The following section depicts the main results obtained by analyzing the scientific pool of papers obtained in accordance with the above-mentioned research methodology. The datasets based on which the analysis has been performed can be found in the “The Datasets.xlsx” Excel file in the Supplementary Materials.

3. Results

3.1. Analyzing the Obtained Scientific Pool of Articles Based on the Clarivate Research Area Criterion

A data analysis of the scientific pool of articles reveals that “Computer Science” is the most prevalent research area, accounting for 16.46% of the records [4,5,8,10,11,12,17,33,34,35,36,37,38], followed by “Engineering” [1,9,13,33,36,38,39], “Health Care Sciences & Services” [4,5,6,40,41,42,43], and “Medical Informatics” [4,5,6,40,41,42,43], which each hold 8.86% of the records, while “Information Science & Library Science” makes up 6.33% of the records [3,4,5,8,37]. Beyond these primary fields, a diverse set of research areas contribute to the LLM landscape, albeit in smaller proportions. To obtain an in-depth analysis, the prominence and implications of these research areas in the context of LLMs were analyzed (Figure 1).
Unsurprisingly, given the context of LLMs and their technological foundations, “Computer Science” emerges as the most dominant research area. This underscores the centrality of computational techniques and methods in the development and application of LLMs. With 16.46% of the records associated with “Computer Science” [4,5,8,10,11,12,17,33,34,35,36,37,38], it is evident that this domain serves as the nexus of LLM development, being the bedrock for their development and fine-tuning. The intricate algorithms, deep learning techniques, and neural network architectures that underpin LLMs predominantly originate from computer science research studies. It is unsurprising that this domain occupies a central position, given that the fundamental tenets of LLMs—from data processing to model training—are deeply rooted in computer science.
The substantial representation of the “Engineering” research area [1,9,13,33,36,38,39] offers interesting insights. This suggests that beyond pure computational methods, there is a significant engineering component, namely potential hardware advancements or optimization techniques, that plays a role in the impact of LLMs. This research area, constituting 8.86% of the records, plays an essential role in materializing the theoretical constructs of LLMs into tangible applications. Whether the focus is on designing the hardware infrastructure to train massive models or on developing software tools to seamlessly integrate LLMs into applications, engineering bridges the gap between theory and practice. It ensures that the computational power and infrastructure are in place to harness the full potential of LLMs.
The prominence of the “Health Care Sciences & Services” [4,5,6,40,41,42,43] and “Medical Informatics” [4,5,6,40,41,42,43] research areas suggests that LLMs have made significant inroads into the healthcare sector, possibly in areas like medical literature analysis, patient data processing, or diagnostic assistance. These domains, each holding 8.86% of the records, underscore the transformative impact of LLMs in the healthcare sector. LLMs have been instrumental in tasks ranging from patient data analysis and diagnostic assistance to drug discovery and real-time patient interaction. The synergy between the “Health Care Sciences & Services” [4,5,6,40,41,42,43] and “Medical Informatics” [4,5,6,40,41,42,43] research areas signifies the convergence of healthcare domain knowledge with advanced computational techniques, paving the way for innovations like personalized medicine and predictive health analytics.
The representation of the “Information Science & Library Science” [3,4,5,8,37] research area indicates that LLMs have applications in information retrieval, data organization, and potentially in digital libraries, underscoring their versatility. Holding 6.33% of the records, this domain highlights the indispensable role of LLMs in managing, categorizing, and retrieving valuable information. As the volume of digital data increases, LLMs have become essential in semantic search, information retrieval, and automated content summarization. Their ability to comprehend context and generate texts that resemble human-written content has made them important tools in the realm of digital libraries and information repositories. The obtained data underscore the multifaceted influence of LLMs across various research areas. While “Computer Science” and “Engineering” lay the foundational groundwork for LLMs, applications in sectors like “Health Care Sciences & Services” [4,5,6,40,41,42,43] and “Medical Informatics” [4,5,6,40,41,42,43] showcase LLMs’ transformative potential.
To gain deeper insights, in addition to the distribution of the record counts across research areas, we analyzed the cumulative percentage representation of the top research areas and identified potential patterns or anomalies in the retrieved dataset. The analysis of this representation highlights that the dominant research clusters in the context of LLMs are represented by the “Computer Science”, “Engineering”, “Health Care Sciences & Services”, “Medical Informatics”, and “Information Science & Library Science” areas, as depicted by the steep rise in cumulative percentages. It is evident that a few top areas mark a rapid accumulation, as they already cover a substantial portion of the total representation. For instance, by the time one reaches “Information Science & Library Science” (namely after traversing the first 5 research areas out of the 29 existing ones), these areas already account for over 49.37% of the total representation.
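The cumulative figure itself is straightforward to reproduce. The short pandas sketch below is our own illustration, using the per-area record counts implied by the reference lists above and a total of 79 area assignments; that total is inferred from the reported percentages (13/79 ≈ 16.46%) rather than stated in the text, since articles can be assigned to multiple research areas:

```python
import pandas as pd

# Record counts per research area, as implied by the reference lists cited above;
# the total of 79 area assignments is an inference from the reported percentages.
top_areas = pd.Series({
    "Computer Science": 13,
    "Engineering": 7,
    "Health Care Sciences & Services": 7,
    "Medical Informatics": 7,
    "Information Science & Library Science": 5,
})
total_records = 79
cumulative_pct = top_areas.cumsum() / total_records * 100
print(cumulative_pct.round(2))   # last value ≈ 49.37, matching the share reported above
```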
In light of the analysis of the obtained scientific pool of articles based on the Clarivate research area criterion, one can notice that LLMs are not just restricted to “Computer Science” because their applications span multiple domains, from healthcare to information retrieval. The evolution of LLMs can also be regarded as a combination of pure computational advancements (as seen by the prominence of “Computer Science”) and practical applications in diverse sectors (e.g., “Health Care Sciences & Services”). The data underscore the importance of conducting a scientific literature review of LLM development and influence, as their landscape is both vast and varied. With dominant clusters and diverse smaller contributors, understanding the intricate web of research is important for anyone invested in the domain.
To identify any potential patterns or anomalies in the dataset, such as any unexpected research areas that might be less intuitive in the context of LLMs, we analyzed the domains with the lowest record counts. While LLMs are primarily associated with computational domains, their presence in “Environmental Sciences Ecology” [44] might suggest applications in data analysis, environmental monitoring, or even predictive modeling related to ecological phenomena. The inclusion of the “Geochemistry Geophysics” [15] and “Geography” [44] areas indicates the potential applications of LLMs in geospatial data analysis, modeling geological phenomena, and even understanding geographical patterns. The “Microbiology” [45] research area highlights the potential of using LLMs in microbiological research, parsing vast amounts of data, or even predicting microbial behaviors. The “Operations Research Management Science” [38] domain shows that LLMs are not just restricted to purely technical applications but also have potential in optimization, decision-making, and management scenarios. LLMs can have applications such as public service bots, administrative document analysis, or policymaking support, as depicted by the “Public Administration” [44] research area. The “Urban Studies” [44] domain reflects the fact that LLMs can play a role in urban planning, modeling urban growth, or even understanding urban social dynamics. The broad “Social Sciences Other Topics” [19] research area indicates that LLMs have permeated even the more human-centric fields, which could involve qualitative data analysis, social trend prediction, or aiding in sociological research.
These research areas, while contributing less in terms of record count, offer fascinating insights. They underscore the vast potential and versatility of LLMs, extending their reach beyond conventional technical domains into diverse fields. Such a wide span highlights the transformative power of LLMs, as they find relevance in areas ranging from the core technical to the deeply human-centric, reinforcing the emphasis on their profound impact across various sectors.
The retrieved and analyzed data provide valuable insights into the research landscape surrounding LLMs. While the dominant areas are expected, the presence of LLMs in diverse and sometimes unexpected fields demonstrates their adaptability and the ever-expanding horizon of their applications. The cross-disciplinary nature of LLM research, as highlighted by the registered data, accentuates their versatility and underscores the necessity for collaborative efforts to harness their full potential.
In the following subsection, we investigate the retrieved scientific pool of articles with regard to their publication year.

3.2. Analyzing the Obtained Scientific Pool of Articles Based on the Clarivate Publication Year Criterion

To obtain a detailed analysis of the obtained scientific pool of articles from the publication year perspective, we examined the distribution of publications over the years (Figure 2), identified trends in the publication of LLM-related literature to understand the progression in the field, analyzed peak years to highlight any noticeable peaks or troughs in the data, interrelated the identified data trends to the larger scientific context, and computed the cumulative contribution of publications over the years to gauge the growing interest and importance of LLMs in the scientific literature.
This chart provides a clear picture of the distribution of publications on LLMs over the years. Before 2019, the number of publications on LLMs was relatively limited [5,11,14,46,47,48,49]. This suggests that while there might have been emerging interest and research on LLMs, they had not yet gained significant traction in the academic community. In 2019, there was a noticeable increase in publications. Although it was not yet over at the time of writing, the year 2023, in particular, stands out with the highest number of publications, indicating rapidly increasing interest in and recognition of the LLMs’ significance in scientific research [1,2,3,4,6,8,18,19,20,26,27,28,33,42,43,50,51,52].
The years 2020 [9,35,36,37], 2021 [10,34,41,53,54], and 2022 [15,39,40,44,55,56] show fluctuations, with 2022 having more publications than 2020 and 2021. This might indicate that while the interest in LLMs is consistently growing, there have been specific challenges or shifts in the research focus during 2019–2022. The top three years with the highest number of LLM-related publications are 2023 with 18 publications, accounting for 38.30% of the total records, 2019 with 7 publications [7,12,13,17,38,45,57] that represent 14.89% of the records, and 2022, which accounts for 12.77% of the total obtained scientific pool of articles. Therefore, these years represent significant moments in the evolution of LLMs, with 2023 standing out as a particularly impactful year. The surge in 2023 might indicate the culmination of various research strands leading to heightened activity in the field.
The sudden spike in publications in 2023 outlines an evolutionary surge period and indicates a turning point in the academic and technical landscape concerning LLMs. This aspect substantiates the emphasis on LLMs’ transformative potential in recent years, as such a surge signifies breakthroughs, the introduction of novel models, and significant advancements in existing methodologies. The data point to a convergence of factors: computational power, advanced algorithms, and vast datasets. The escalating number of publications indicates these factors coming into effect, enabling the development and understanding of more complex LLMs.
With 18 publications (38.30% of the total), 2023 emerged as the pinnacle year for LLM research. This is a result of accumulated knowledge from previous years, leading to a more in-depth understanding, application, and exploration of LLMs. Such a concentration of publications also suggests that the academic community might be bracing for even more advanced models, applications, or theoretical breakthroughs in the immediate future. The trajectory of publications, especially the recent surge, emphasizes the rising importance of LLMs, leading to broad implications for stakeholders. In the case of industry stakeholders, this can be interpreted as a sharp focus on integrating LLMs into their systems, expecting advancements that could redefine business models. However, for the academic community, this trajectory suggests rich grounds for further exploration, potential collaborations, and the need to keep abreast of rapid developments.
The dynamic nature of LLM-related publications, as reflected in the data, reiterates the need for conducting scientific reviews concerning this phenomenon and for understanding future publication trends. Such reviews are instrumental in navigating the existing body of knowledge and identifying highly cited and influential scientific works that will guide future research endeavors. Given the expansive growth in the LLM literature, as evident from the data, stakeholders can focus on setting priorities for future research funding. The current trajectory indicates the potential of uncharted territories and novel applications that might redefine the boundaries of what LLMs can achieve.
The rise in the number of highly cited and influential scientific articles on LLM topics, as documented by the obtained scientific pool of articles, is proof of their transformative potential. As the technical world continues to evolve, the significance of understanding, analyzing, and foreseeing the trajectory of LLMs becomes paramount for both academic and industrial communities. The data serve as a compelling foreword to the expansive evolution of LLMs, inviting readers to explore the scientific literature and myriad facets of these powerful models.
The computed cumulative contribution of publications over the years reveals several significant insights. This analysis showcases a consistent accumulation of LLM-related publications over the years. This steady growth underscores the increasing importance and recognition of LLMs in the scientific community. In particular, in the most recent years, there has been an exponential growth in the cumulative number of publications that highlights the increasing interest in LLMs and suggests a rapid expansion of the field. The sharp ascent in the last few years, particularly in 2023, indicates that a significant portion of all LLM-related publications has been very recent. All of these aspects emphasize the rising importance of LLMs in the current technical landscape.
To obtain a more detailed image concerning the granularity level of our database of articles, we have analyzed it starting with the Clarivate Citation Meso criterion [58].

3.3. Analyzing the Obtained Scientific Pool of Articles Based on the Clarivate Citation Meso Criterion

To provide an in-depth analysis, we have investigated the obtained scientific pool of articles based on the Clarivate Citation Meso metric (Figure 3).
As we wanted to understand the distribution of records across the topics and to identify whether any topics are particularly dominant, we conducted a distribution analysis, compared the top topics with the least discussed ones to highlight disparities, assessed their relevance to LLMs, and discussed how these topics relate to LLMs and their impact in those areas.
The distribution analysis of the record counts across the topics provides several insights. The topic “Computer Vision & Graphics” [12,13,14,17,18,33,45,46] stands out with the highest record count, more than double that of the majority of other topics, making it a prominent topic in the scientific literature related to LLMs. This underscores the importance of computer vision in conjunction with language models, reflecting the interdisciplinary nature of advancements in AI, and indicates a pronounced research focus in this area, which is not surprising given the surge in applications related to image recognition, augmented reality, and other graphic-intensive technologies. This can be explained by considering that while LLMs primarily deal with text, their integration with computer vision models can lead to multimodal models capable of understanding and generating content that combines both text and images. Such advancements can revolutionize fields like automated content creation, virtual reality, and augmented reality.
Despite the dominance of a few topics, a diverse range of topics is represented. This diversity emphasizes the interdisciplinary nature of modern scientific endeavors. When analyzing the obtained scientific pool of articles, we noticed several unexpected entrants. Interestingly, areas like “Oceanography, Meteorology & Atmospheric Sciences” [47,48,49,57] have a noteworthy presence. This result could be indicative of the increasing intersection of technology with traditionally distinct scientific domains, possibly in areas such as climate modeling or oceanic data analysis.
The “Knowledge Engineering & Representation” area [37,38,52] is directly tied to the foundation of LLMs. Knowledge representation plays an important role in how these models understand and generate human-like text. The presence of this topic emphasizes ongoing efforts to refine and advance the underlying mechanisms of LLMs, highlighting their role in knowledge representation and data transmission; this might be indicative of efforts to improve knowledge graphs, semantic understanding, and the ways in which information is shared and processed. The “Telecommunications” [34,39,40] area highlights the fact that LLMs can play an essential role in enhancing communication systems, whether through optimizing data transmission or by aiding in the development of intelligent communication interfaces.
The obtained scientific pool of articles reveals an interesting intersection between LLMs and the physical sciences. The representation of topics like “Physical Chemistry” [10,11,35] and “Oceanography, Meteorology & Atmospheric Sciences” [47,48,49,57] hints at the broader applications of LLMs. They could be used in these domains to analyze vast amounts of textual data and predict trends based on historical entries. Given the trajectory of LLMs and their increasing integration into various fields, it is essential to recognize and understand these intersections. It is evident that LLMs have transcended their traditional domains and have found relevance in diverse scientific areas. As LLMs continue to evolve, their influence will likely permeate even more domains, reinforcing the necessity for interdisciplinary collaboration and research.
To understand the relative prominence of each topic’s record count out of the existing ones, we also analyzed their percentage representation, which reinforces and supplements our earlier insights. The prominence of “Computer Vision & Graphics” [12,13,14,17,18,33,45,46], with a representation of 17.02%, signifies the convergence of vision and language tasks. As LLMs become more sophisticated, their integration with computer vision tasks, such as image captioning, visual question answering, and object detection, has grown, solidifying their interdisciplinary nature.
This analysis reaffirms that LLMs have influenced a wide spectrum of scientific domains. The percentage representation of “Knowledge Engineering & Representation” [37,38,52] (6.38%) and “Oceanography, Meteorology & Atmospheric Sciences” [47,48,49,57] (8.51%) indicates that LLMs play an essential role in both knowledge representation and environmental sciences. The presence of the “Telecommunications” [34,39,40] citation Meso topic with a 6.38% representation might suggest the relevance of LLMs in enhancing communication systems, possibly in areas like signal processing, data compression, or even in the semantics of communication.
These data underscore the extensive reach of LLMs across varied scientific domains. Their influence is not just confined to core computational tasks but also extends to fields as diverse as environmental sciences and telecommunications. This wide-ranging applicability of LLMs resonates within the body of knowledge, where their transformative power across various sectors is emphasized.
To attain an even more in-depth analysis of the information gathered based on the citations, we have also analyzed the obtained scientific pool of articles based on the Clarivate Citation Micro criterion [58].

3.4. Analyzing the Obtained Scientific Pool of Articles Based on the Clarivate Citation Micro Criterion

In the following, we identify the top and least cited topics based on record count, analyze the distribution of the record counts, identify potential patterns or anomalies in the dataset, and relate the findings to LLMs and their significance in the scientific literature (Figure 4).
After analyzing the obtained scientific pool of articles, we have remarked that the citation topics at the Micro level, sorted in descending order, are “Deep Learning” with seven records [12,13,14,18,33,45,46], representing 14.89% of the total; “Bulk Modulus” with three records [10,11,35], which is 6.38% of the total; “Tropical Cyclones” with three records [47,48,49], accounting for 6.38% of the total; “Natural Language Processing” (NLP) with two records [37,38]; and “Health Literacy” with two records [6,41], each contributing 4.26% to the scientific pool. The average record count for citation topics is approximately 1.38, and the standard deviation of the record counts is about 1.13, suggesting some variability in the dataset, while half of the citation topics have a record count of just one.
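For the Micro-level statistics quoted above, the computation is equally direct. The sketch below uses a placeholder list of record counts: the five multi-record topics are taken from the text, while the number of singleton topics is illustrative, as the authoritative per-topic table resides in “The Datasets.xlsx” (Supplementary Materials); the printed values therefore approximate, rather than exactly reproduce, the reported mean of 1.38 and standard deviation of 1.13:

```python
import statistics

# Multi-record Micro topics taken from the text; the tail of singleton topics is
# an illustrative placeholder, as the authoritative per-topic counts reside in
# "The Datasets.xlsx" (Supplementary Materials).
record_counts = [7, 3, 3, 2, 2] + [1] * 29

print(f"mean  = {statistics.mean(record_counts):.2f}")    # reported value: ~1.38
print(f"stdev = {statistics.pstdev(record_counts):.2f}")  # reported value: ~1.13
```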
After analyzing the data, we can state that “Deep Learning” [12,13,14,18,33,45,46] stands out as the most frequently cited Micro topic with seven records, its record count being more than two times higher than that of the following topic on the list. This prominence aligns with the emphasis on the significance of LLMs, as these models often incorporate deep learning techniques. It is interesting to note the appearance of seemingly unrelated topics like “Tropical Cyclones” [47,48,49] and “Bulk Modulus” [10,11,35]. This indicates the diverse range of citation Micro topics represented in the dataset. “Natural Language Processing” [37,38] is among the top-cited topics, further highlighting the growing importance of LLMs in the scientific literature. The inclusion of NLP is also consistent with the context of LLMs, underscoring its foundational role in the development and application of these models.
Based on the visual representation of the distribution of the record counts for all the citation Micro topics (Figure 4), several interesting insights regarding the dataset’s landscape can be obtained. The topic of “Deep Learning” [12,13,14,18,33,45,46] indisputably dominates the citation landscape. Its prevalence suggests a pronounced interest and engagement with this area, reinforcing its essential role in the evolution and application of LLMs and other advanced AI methodologies. Most topics have a record count of three or less, with “Deep Learning” [12,13,14,18,33,45,46] being the only significant outlier. This polarization indicates that while a variety of topics are being cited, only a few, particularly “Deep Learning” [12,13,14,18,33,45,46], are at the forefront of research studies.
Beyond the domain of AI, the dataset showcases a diverse range of topics, from “Bulk Modulus” [10,11,35] (a topic in material science) to “Tropical Cyclones” [47,48,49] (a meteorological phenomenon) and “Health Literacy” [6,41]. This diversity might suggest that the implications of advancements like LLMs are resonating across various scientific domains, either directly or indirectly. The presence of the “Natural Language Processing” [37,38] topic is significantly less pronounced than that of “Deep Learning” [12,13,14,18,33,45,46]. Nevertheless, its presence is consistent with the foundational role that NLP plays in the development of LLMs. In light of these observations, it is evident that while “Deep Learning” [12,13,14,18,33,45,46] is central to discussions, a wide range of interdisciplinary topics are being cited. This diversity can be attributed to the wide-ranging impacts of AI and LLM advancements in multiple scientific domains.
A considerable number of topics have only one record. Some of these include “Methanol Poisoning” [50], “Emergency Department” [42], “Electronic Health Records” [4], “Rhinoplasty” [20], “Dementia” [53], and “Gene Expression Data” [5], among others. The presence of diverse topics like “Dementia” [53], “Gene Expression Data” [5], and “Salt Intake” [27] suggests the potential interdisciplinary applications of LLMs. For instance, LLMs could be instrumental in analyzing electronic health records or aiding in diagnostics based on gene expression data. The significance of topics like “Human-robot Interaction” [19], “Face Recognition” [17], and “Speech Recognition” [9] alludes to the ever-expanding capabilities of LLMs in facilitating human–computer interactions and automating tasks that traditionally required human intervention. Lastly, topics like “Internet of Things” [34] and “Wireless Ad Hoc Network” [40] highlight the growing convergence of LLMs with other technological domains, suggesting synergistic advancements and potential future research directions.
Table 2 offers a summarizing overview of the most impactful scientific works from the obtained pool of papers across various categories and criteria. The table plays an important role in delineating the intricate relationships and categories within the research landscape. The criteria analyzed throughout Section 3.1, Section 3.2, Section 3.3 and Section 3.4 are “Research Areas”, “Publication Years”, “Citation Meso”, and “Citation Micro”. Each criterion comprises its main categories, and each category is further associated with its specific scientific papers, offering an overview that highlights the main research areas and the temporal evolution from 2015 to 2023, along with a more granular classification based on the “Citation Meso” and “Citation Micro” criteria.
Applications of LLMs have become an increasingly frequent subject in the scientific literature. The data support the claim that LLMs have not only transformed the realm of NLP but also have potential applications across diverse scientific domains. Some of these are highlighted in the following subsection, in which a series of papers belonging to our resulting pool of papers is analyzed.

3.5. The Expanding and Multidisciplinary Influence of the LLMs within the Scientific Literature

In the following, we emphasize the expanding and multidisciplinary influence that LLMs exert within the scientific literature by analyzing several relevant papers from the scientific pool of articles, obtained in accordance with the above-mentioned research methodology.
Lecler et al. explored the utilization and potential of LLMs in the field of radiology [18]. Emphasis is placed on the potential of these models to revolutionize radiology by improving performance in ways that benefit healthcare delivery and patients’ wellbeing. The article highlights the current applications of “Generative Pre-trained Transformer” (GPT) models in radiology, which encompass areas such as the creation of informational reports based on data analysis, offering medical guidelines, supporting the medical decision-making process, and improving patient interactions. Additionally, the article focuses on Chat Generative Pre-trained Transformer (ChatGPT), a GPT version intended to understand and generate conversations. The scientific article provides answers from ChatGPT to various questions posed by radiologists and discusses both the potential benefits and current limitations of this technology in their daily practice.
Recognizing the profound importance of research in plastic surgery, Gupta et al. embarked on a journey to determine the potential contributions of ChatGPT to this specialized field [43]. Their exploration aimed to discern whether ChatGPT was adept at generating novel systematic review ideas relevant to plastic surgery. The results from the study were striking. Out of the 80 systematic review ideas formulated by ChatGPT, the model showcased impressive accuracy in generating novel and relevant concepts. The implications of this capability extend far beyond merely aiding research. The authors postulate that ChatGPT has profound potential benefits in the realm of virtual consultations, patient education, preoperative planning, and postoperative care. These prospects position ChatGPT as a potential solution to a plethora of intricate challenges that the plastic surgery community encounters. In the article’s discussion section, the authors elaborated on the capabilities of ChatGPT. They highlighted the inherent advantage of this model over other software solutions: the seamless incorporation of humanistic features. This enhancement augments ChatGPT’s behavior and task completion capacity. Furthermore, due to its multifaceted abilities, ranging from answering queries to creative writing, it has been embraced as an invaluable asset by researchers, businesses, and individuals alike.
Samaan et al. aimed to assess the accuracy and reproducibility of responses by ChatGPT, a language model, when answering patient queries related to bariatric surgery [27]. The questions were sourced from professional societies, health institutions, and groups from social networks, namely Facebook. Specialized personnel and certified surgery providers assessed responses using a scale that ranged from comprehensive to completely incorrect. Reproducibility was analyzed by querying the model twice and observing the consistency of its responses. The results showed that out of 151 bariatric surgery-related questions, ChatGPT provided comprehensive answers to 86.8% of them. Additionally, the model demonstrated high reproducibility, providing consistent answers to 90.7% of the questions. Thus, the research concludes that ChatGPT could be a valuable supplementary resource for patients seeking information on bariatric surgery, complementing the standard of care offered by healthcare professionals. The potential of this disruptive technology to enhance patient outcomes and quality of life is also highlighted, prompting a call for further studies in this domain.
In recent years, the advent of sophisticated language models, as exemplified by ChatGPT, has made it feasible to produce increasingly realistic texts. Gao et al. conducted a rigorous analysis of the authenticity and accuracy of abstracts generated by LLMs like ChatGPT, juxtaposed against original research abstracts from leading medical journals [6]. The primary objective is to ascertain the accuracy and integrity of these models when used in scientific writing. To achieve this, the authors gathered research abstracts from renowned scientific journals and instructed the AI model to create scientific abstracts considering the scientific journal and the respective titles. They then employed AI output detectors to distinguish between the generated and original abstracts. Additionally, human reviewers were tasked with identifying the abstracts generated by ChatGPT. The study also touched upon the ethical considerations surrounding the use of LLMs in scientific writing.
The rapid emergence of generative artificial intelligence (GAI) has presented transformative possibilities within the educational sector. Cooper embarked on a comprehensive investigation into these possibilities [26]. In his study, three pivotal areas of inquiry are delineated: firstly, the capacity of ChatGPT to address questions pertinent to science education; secondly, the potential ways educators can assimilate ChatGPT into their scientific pedagogical approaches; and thirdly, a reflective analysis of the utilization of ChatGPT within the study itself as a research tool. By leveraging a self-study methodology, the research performs an in-depth analysis of the nuanced interactions between this technological innovation and its educational applications. One notable observation is that the outputs produced by ChatGPT frequently resonated with the principal themes present in the research. Nevertheless, certain reservations have been identified. In its present form, ChatGPT has a propensity to inadvertently position itself as the pinnacle of epistemic authority. This can manifest in presenting information as a monolithic truth without adequate anchoring in evidence or appropriate qualifiers. Furthermore, the research considers a few ethical quandaries tied to AI, such as its environmental footprint, challenges surrounding content regulation, and potential encroachments on copyright norms. The article underscores the imperative for educators to exemplify responsible engagement with ChatGPT. There is an emphasis on cultivating a culture of critical thinking, setting transparent expectations, and ensuring that AI-forged resources are thoroughly assessed and modified to suit distinct educational contexts.
GAI has made significant strides with the introduction and advancement of LLMs. These models are instrumental in creating diverse content, from text to videos, when provided with textual instructions. However, their potential remains underutilized and fraught with risks, especially when they operate without human intervention. Without proper guidance and responsible design, LLMs are vulnerable to generating and disseminating misinformation or content that may be harmful, inaccurate, or both. Their massive scale further amplifies the consequences of such missteps. Nonetheless, if appropriately harnessed, they can serve as valuable human companions, enhancing various cognitive processes, particularly decision-making and knowledge retrieval. The article by Harrer emphasizes the transformative capabilities that LLMs bring to data management workflows in the medical field [50]. By providing insights into the intricacies of the underlying technology, the article not only sheds light on its potential but also highlights the associated risks and limitations. Moreover, it advocates a structured moral, technological, and cultural approach to the design, development, and deployment of such tools. The overarching goal is to ensure that all stakeholders, from developers and providers to users and regulators, are adequately equipped to maximize the advantages of LLMs, especially in sectors reliant on evidence-based decision-making. The conclusions emphasize the multi-faceted approach needed to integrate LLMs effectively and ethically in healthcare. The authors underscore that the Silicon Valley mindset of rapid innovation and risk-taking does not align with the delicacies of healthcare and medicine. As a result, the authors note that there is a palpable tension between the rapid pace of technological innovation and the careful, considered approach necessary for health applications. Overcoming these challenges necessitates concerted efforts among all stakeholders.
Lund et al. posited that in the contemporary era of digital scholarship, OpenAI’s ChatGPT, a manifestation of the GPT architecture, has emerged as a focal point of discussion, both for its technological prowess and for the implications it carries for academic practices [8]. Predicated on the principles of NLP, ChatGPT operates as a chatbot programmed to cater to text-based user inquiries. The paper provides an in-depth analysis of the historical evolution and foundational principles underpinning ChatGPT. The nexus between this rapidly expanding technology and academia is subsequently explored, illustrating the prospective avenues it could open within scholarly research and publishing. Notably, ChatGPT’s potential to mechanize the compilation of essays and diverse scholarly manuscripts is examined. Yet, alongside its potential, the article underscores the ethical quandaries that could emerge from the widespread adoption of LLMs like GPT-3, ChatGPT’s technological backbone. These ethical considerations are contextualized against the backdrop of sweeping advancements across the AI, ML, and NLP realms in relation to research and scholarly publishing.
Bouschery et al. explored the use of GPT models and their potential to augment human innovation teams [1]. The focus is on how these models can enhance new product development by covering wider problem domains and generating candidate solutions within them, ultimately improving innovation performance. The authors put forward an AI-augmented guiding framework for investigating how recent developments in the field of LLMs can support text processing, the analysis of subtle nuances, and the creation of novel insights. They also discuss the technological constraints involved and the influence these models exert on established new product development methods. Finally, the article outlines a research agenda for studying LLMs, their applications, and the roles humans hold within hybrid innovation teams.
Huang et al. introduced and discussed “FinBERT”, a state-of-the-art large language model tailored for the finance domain [2]. The authors claimed that “FinBERT” is adept at incorporating financial knowledge and summarizing contextual information in financial texts. They benchmarked “FinBERT’s” performance against established artificial intelligence-based techniques, particularly in the context of sentiment classification. The article underscores “FinBERT’s” proficiency in discerning sentiment tone, focusing on texts that other artificial intelligence-based methods might mislabel as neutral. This capability is attributed to “FinBERT’s” adeptness at leveraging contextual information in financial texts. The authors further highlight the model’s distinct advantages when dealing with a smaller training sample size and when processing texts containing financial terms that are not commonly found in general texts. Moreover, “FinBERT” is lauded for its efficiency in recognizing discussions related to environmental, social, and governance issues. The article also provides evidence that, in contrast to “FinBERT”, other methods may misjudge the amount of information contained in the body of a text. Finally, the authors highlight the importance of these results for the numerous interested stakeholders.
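For illustration, the snippet below is a minimal sketch of domain-specific sentiment classification in the spirit of “FinBERT”, written with the Hugging Face transformers pipeline API; the checkpoint name “yiyanghkust/finbert-tone” is an assumption on our part, and any BERT variant fine-tuned on financial sentiment labels could be substituted.

```python
# A minimal sketch: finance-domain sentiment classification with a
# FinBERT-style model. The checkpoint name is an assumption; substitute
# any model fine-tuned on financial sentiment labels.
from transformers import pipeline

classifier = pipeline("text-classification", model="yiyanghkust/finbert-tone")

sentences = [
    "The company expects margins to compress amid rising input costs.",
    "Quarterly revenue grew 18% year over year, beating guidance.",
]
for sentence in sentences:
    result = classifier(sentence)[0]
    print(f"{result['label']:>9}  {result['score']:.3f}  {sentence}")
```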
Rezaeinia et al. acknowledged that sentiment analysis, a rapidly expanding research field within NLP and text classification, is becoming extremely important for various domains, including politics, business, and marketing [38]. Although word embedding techniques such as Word2Vec and GloVe are predominantly employed in sentiment classification, they often neglect sentiment-specific nuances in texts and require extensive corpora for optimal performance. Due to limited corpus sizes, researchers frequently resort to utilizing pretrained word embeddings, such as those trained on expansive corpora like Google News. The accuracy of these pretrained embeddings has substantially influenced sentiment analysis research. The paper introduces a new methodology, termed “Improved Word Vectors (IWV)”, devised to enhance the accuracy of pretrained embeddings specifically for sentiment analysis. This method synthesizes techniques from part-of-speech (POS) tagging, lexicon-based strategies, word position algorithms, and traditional Word2Vec/GloVe methodologies. Experimental results, ascertained using multiple deep learning models and standard sentiment datasets, underscore the efficacy of IWV in sentiment analysis tasks.
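The core idea can be sketched as follows: a pretrained word vector is concatenated with part-of-speech and lexicon-derived sentiment features to form an “improved” vector. The toy tag set, toy lexicon, and four-dimensional embedding below are our own illustrative assumptions; the original method uses full Word2Vec/GloVe vectors and considerably richer feature sets.

```python
# A minimal sketch of the idea behind Improved Word Vectors (IWV):
# augment a pretrained embedding with POS and lexicon-based sentiment
# features before feeding it to a classifier. All values are toy data.
import numpy as np

pretrained = {"good": np.array([0.2, 0.7, -0.1, 0.4]),
              "film": np.array([0.5, -0.3, 0.8, 0.1])}
pos_tags = ["NOUN", "VERB", "ADJ"]                  # toy tag set
lexicon = {"good": 1.0, "bad": -1.0}                # toy polarity lexicon

def improved_vector(word: str, pos: str) -> np.ndarray:
    base = pretrained.get(word, np.zeros(4))        # pretrained embedding
    pos_onehot = np.array([1.0 if pos == t else 0.0 for t in pos_tags])
    polarity = np.array([lexicon.get(word, 0.0)])   # lexicon feature
    return np.concatenate([base, pos_onehot, polarity])

print(improved_vector("good", "ADJ"))  # 4 + 3 + 1 = 8-dimensional vector
```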
In the following, we highlight the research areas, keywords, and sustainable development goals of the relevant papers from the scientific pool of articles described above, which were obtained in accordance with the developed research methodology and are summarized in Table 3.
After analyzing the summary depicted above, we noted that 50% of the papers in Table 3 belong to medical-related research areas [6,18,27,43,50], 30% belong to education or computer science-related research areas [8,26,38], and 20% relate mainly to business and economics research areas [1,2]. The distribution of the previously synthesized scientific papers over the years demonstrates the growing interest in LLMs, with the current year (2023) alone accounting for 80% of the publications.
The following section contrasts the advantages and benefits of LLMs with their potential disadvantages and risks, distilling several key insights into these models and their capabilities across a multitude of domains in today’s world.

4. Discussion

The advent and subsequent success of LLMs in the contemporary technological environment have presented multifaceted implications and advancements. Central to this discussion is the superior capability of LLMs in the realm of natural language understanding (NLU). As elucidated in the preceding sections, LLMs are not just another incremental advancement in computational linguistics but represent a significant leap in how machines comprehend and engage with human language.
The quintessence of LLM success lies in their profound NLU. Their training, which encompasses an eclectic mix of text sources, has empowered these models to grasp the subtleties, nuances, and complexities inherent to human languages. Traditional language models, while effective, often falter when confronted with intricate human language, marked by idioms, metaphors, cultural context, and emotional undertones. However, LLMs, with their expansive dataset and sophisticated algorithms, have managed to significantly bridge this gap.
The enhanced NLU capabilities have notably transformed sentiment analysis. In the past, discerning the sentiment behind texts, especially those replete with complex emotions or sarcasm, proved challenging. LLMs have heralded an era where machines can identify and interpret layered sentiments with a higher degree of precision. This has vast implications for sectors such as market research, social media analytics, and customer feedback processing, where accurate sentiment interpretation is essential.
LLMs have also made a mark in question-answering systems. Previous models often provided answers based on keyword matching or simplistic logic. The deep NLU of LLMs enables them to grasp the essence of queries, considering context, intent, and depth, to generate more relevant and accurate responses. This enhancement bolsters domains like customer support, academic research, and intelligent tutoring systems.
Content summarization, another beneficiary of LLMs’ prowess, has evolved from the mere extraction of key sentences to a more refined abstraction of core ideas. LLMs can process vast amounts of text, understand the overarching themes, and produce concise yet comprehensive summaries. This is invaluable in areas like academic research, news aggregation, and business intelligence, where distilling vast quantities of information into accessible formats is essential.
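Both capabilities discussed above, question answering and abstractive summarization, can be exercised with off-the-shelf tooling. The sketch below uses the Hugging Face transformers pipelines; the default checkpoints the library downloads are assumptions of convenience, and any suitably fine-tuned models could be substituted.

```python
# A minimal sketch of LLM-backed question answering and abstractive
# summarization via Hugging Face pipelines; default checkpoints are
# assumed for brevity.
from transformers import pipeline

context = ("Large language models are trained on diverse text corpora, "
           "which lets them answer questions in context and condense "
           "long documents into short abstracts.")

qa = pipeline("question-answering")
answer = qa(question="What are large language models trained on?",
            context=context)
print(answer["answer"])

summarizer = pipeline("summarization")
print(summarizer(context, max_length=25, min_length=5)[0]["summary_text"])
```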
The evolutionary trajectory of LLMs, as analyzed in this article, underscores an important shift toward a more harmonious machine–human language interface. As these models continue to learn and adapt, they are close to mimicking human-like text comprehension. This is not merely a technical achievement but signifies a broader cultural and societal shift. Machines that truly “understand” can lead to more meaningful human–machine interactions and enrich user experiences.
While the present accomplishments of LLMs in NLU are commendable, it is important to recognize the future steps that need to be taken. Continuous research, refinements, and ethical considerations will shape the future trajectory of LLMs. It is anticipated that as LLMs evolve, their NLU capabilities will further refine, ushering in an era where the delineation between human and machine language understanding becomes increasingly blurred.
Historically, the analysis of extensive datasets required significant human labor, time, and computational resources. Traditional methods, although effective, were constrained by the sheer volume and complexity of the data. With LLMs, there is a paradigm shift in which large-scale document analysis can be achieved swiftly and accurately. The automated analytical capabilities of LLMs transcend the linear growth of traditional methods, offering exponential improvements in both speed and scale. This not only allows for more extensive data analysis but also ensures a depth of analysis that would be cumbersome, if not impossible, for humans to achieve within realistic timeframes.
Data extraction, especially from unstructured sources, is a bottleneck in information processing. The ability of LLMs to comprehend context, discern patterns, and extract relevant information from vast volumes of text is revolutionary. Organizations can now seamlessly derive insights from diverse sources without the need for exhaustive manual parsing. This efficiency translates to faster decision-making processes and more informed strategy development.
Beyond analysis and extraction, the scalability of LLMs has ushered in a new era in content generation. Their capacity to produce vast amounts of coherent and contextually relevant content, whether for research, marketing, or entertainment purposes, is a true milestone. This not only augments productivity but also allows for the tailoring of content to specific audiences on an unprecedented scale.
In the case of businesses and institutions, the scalable processing capabilities of LLMs represent both an opportunity and a challenge. The opportunity lies in harnessing this power for enhanced productivity, tailored solutions, and robust data-driven strategies. Nevertheless, the challenge arises in ensuring ethical use, data privacy, and avoiding over-reliance on these tools. Organizations must strike a balance between leveraging the potential of LLMs and ensuring that the human element, with its critical thinking and ethical considerations, remains integral to the decision-making process.
While the current capabilities of LLMs in scalable information processing are profound, it is essential to consider potential future trajectories. As LLMs continue to evolve, there could be further advancements in their efficiency, accuracy, and contextual understanding. The integration of LLMs with other advanced technologies, like quantum computing or neuromorphic chips, might redefine scalability limits. Furthermore, as LLMs become more ubiquitous, there might be a need for standardized benchmarks, best practices, and regulations to ensure their optimal and ethical use.
The pervasive integration of LLMs into diverse sectors, as underscored by the review conducted in this article, ushers in an era characterized by profound personalization in technology. One of the most salient impacts of LLMs is their ability to cultivate personalized user experiences, a dimension worth analyzing in-depth due to its ubiquity in contemporary applications and its potential to revolutionize user–technology interactions.
LLMs’ proficiency in generating contextually relevant content is paramount. Traditional systems, which are governed by predefined algorithms, often provide uniform outputs irrespective of the user’s unique attributes or histories. Nevertheless, LLMs, with their expansive training on diverse datasets and superior computational capabilities, can comprehend nuances and deliver outputs that resonate with a user’s specific context. For instance, in the realm of chatbots, while earlier iterations could only offer generic responses, LLM-driven chatbots can now understand user sentiment, previous interactions, and contextual clues, enabling a conversation that feels uniquely tailored to the individual.
The repercussions of such personalization extend well beyond chatbots. Recommendation systems, an integral part of e-commerce platforms, streaming services, and even news aggregators, have witnessed significant enhancements with the integration of LLMs. By analyzing user behaviors, preferences, and histories, LLMs can suggest products, songs, movies, or articles that align more closely with individual tastes, thereby increasing user engagement and satisfaction. Similarly, content curation platforms now have the tools to offer a bespoke content feed, ensuring that users are not just passive recipients but also active participants in a dialogue shaped by their preferences.
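At its simplest, such LLM-assisted recommendation reduces to ranking items by the similarity of their embeddings to a user profile vector. The sketch below uses random toy vectors purely for illustration; in practice, the embeddings would come from an LLM encoder applied to item descriptions and user histories.

```python
# A minimal sketch of embedding-based recommendation: rank items by
# cosine similarity between item vectors and a user profile vector.
# The random 16-dimensional vectors are toy stand-ins for LLM embeddings.
import numpy as np

rng = np.random.default_rng(0)
items = {f"article_{i}": rng.normal(size=16) for i in range(5)}
user_profile = rng.normal(size=16)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(items, key=lambda k: cosine(items[k], user_profile),
                reverse=True)
print("Recommended order:", ranked)
```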
Personalization, enabled by LLMs, is not just about enhancing user satisfaction in the short term. By ensuring that each interaction is tailored to individual users, platforms can foster a deeper sense of loyalty and connection. Users are more likely to return to platforms that “understand” them, creating a symbiotic relationship in which the more the user interacts, the better the LLM becomes at providing personalized experiences, and the more likely the user is to continue engaging with the platform.
In light of the insights presented in this article, it is evident that the integration of LLMs into various technological platforms offers a paradigm shift in how users interact with technology. The journey from generic to personalized experiences, while filled with opportunities, also requires careful navigation to ensure that the potential of LLMs is harnessed responsibly and ethically. As researchers and practitioners continue to analyze LLMs more profoundly, the promise of creating more meaningful, personalized, and ethically grounded user experiences remains an exciting frontier for future explorations.
The ascent of LLMs and their subsequent applications has fostered a new paradigm in the computational world. One of the most salient features, and arguably a cornerstone of their transformative capacity, is their multilingual capability. When trained on text from diverse linguistic backgrounds, LLMs demonstrate the prowess to process and generate multilingual content, a capability that remains a great challenge for traditional systems.
In the case of global businesses, this multilingual capability has metamorphosed their operational dynamics. Prior to the advent of LLMs with multilingual capacities, companies seeking to operate in different linguistic territories had to invest heavily in translators, local content creators, and region-specific marketing teams. The challenges were not solely monetary; the time-consuming nature of these translations and the potential loss of context or cultural nuance were considerable operational hurdles. With LLMs, these concerns are substantially alleviated. Now, businesses can generate content, answer queries, and address concerns in multiple languages with decreased lead times and increased accuracy. This not only enhances their global reach but also fosters an environment of inclusivity, where consumers and stakeholders from different linguistic backgrounds feel catered to and acknowledged.
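As a minimal illustration of this capability, the sketch below translates a customer-facing reply with a pretrained multilingual model; the Helsinki-NLP OPUS-MT checkpoint named here is one publicly available option we assume for the example, not the only possibility.

```python
# A minimal sketch of multilingual content generation: translating an
# English customer-service reply into German with a pretrained model.
# The checkpoint choice is an assumption; any multilingual
# sequence-to-sequence model would serve the same purpose.
from transformers import pipeline

translator = pipeline("translation_en_to_de",
                      model="Helsinki-NLP/opus-mt-en-de")
reply = "Your order has shipped and should arrive within three days."
print(translator(reply)[0]["translation_text"])
```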
This multilingual proficiency of LLMs has broader implications beyond business. In the realm of academia, researchers can now access and analyze content from multiple languages without the need for translation, thereby ensuring original context and meaning are preserved. This is particularly significant in fields like cultural studies, linguistics, and history, where the nuances of language play an essential role.
Furthermore, the integration of multilingual LLMs into applications like chatbots or customer service platforms has the potential to create a unified global digital interface. One can imagine a scenario in which a single chatbot can cater to queries from different parts of the world without any linguistic barrier. Such a scenario not only elevates the user experience but also signifies a step toward a truly globalized digital ecosystem.
Conversely, while the benefits are manifold, it is crucial to approach this capability with a degree of caution. Training LLMs on multilingual datasets requires a comprehensive understanding of the cultural, contextual, and colloquial nuances of each language. The potential for mistranslation or misinterpretation remains, which could lead to misunderstandings or even unintended consequences in certain scenarios. It emphasizes the importance of continuous refinement, feedback, and updates to these models to ensure that their multilingual capabilities are both accurate and culturally sensitive.
LLMs have undeniably etched a transformative mark on the technological landscape. As analyzed in this article, their foundation lies in their advanced processing capabilities, advanced algorithms, and vast datasets. Nevertheless, one of the most salient characteristics of LLMs, as observed in their practical application, is their remarkable flexibility across diverse domains. This facet of adaptability emerges as a defining factor in their widespread integration into various industries.
At the core of an LLM’s ability to function effectively across domains is its generalized training. These models, originally trained on large and diverse datasets, possess a broad understanding of language and concepts. It is this foundation that grants them the capacity to be fine-tuned to cater to specific industrial or academic needs. While some critics may argue that such generalized training leaves the model without specialized skills and therefore irrelevant to niche tasks, in reality this broad foundation allows LLMs to be highly adaptable, making them relevant and invaluable across myriad sectors.
Industries, be it finance, healthcare, entertainment, or law, each have their unique lexicon, idioms, and conceptual intricacies. The versatility of LLMs lies in their ability to be trained further on domain-specific data, allowing them to comprehend and generate content that resonates with the particularities of each sector. For instance, in finance, an LLM fine-tuned with sectoral data can understand intricate terminologies and market dynamics, offering insights or analytics that are contextually relevant. Similarly, in the realm of law, an LLM can be trained to understand legal terminologies and case law references, aiding in tasks ranging from legal research to contract analysis.
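The sketch below outlines what such domain adaptation can look like in practice: a general-purpose pretrained model is fine-tuned on a handful of domain-specific labeled texts with the Hugging Face Trainer API. The base checkpoint, the two-sentence toy legal corpus, and the hyperparameters are illustrative assumptions only.

```python
# A minimal sketch of domain adaptation: fine-tune a general-purpose
# pretrained model on a tiny, assumed legal-text dataset.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

checkpoint = "distilbert-base-uncased"                 # assumed base model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint,
                                                           num_labels=2)

data = Dataset.from_dict({
    "text": ["The plaintiff filed a motion to dismiss.",   # toy corpus
             "The court granted summary judgment."],
    "label": [0, 1],
})
data = data.map(lambda x: tokenizer(x["text"], truncation=True,
                                    padding="max_length", max_length=64))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()
```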
This adaptability of LLMs is not merely about fitting into existing systems but also about fostering innovation within them. By catering to niche requirements, LLMs enable businesses and researchers to push boundaries and explore new avenues. In healthcare, for instance, LLMs could aid in parsing through vast amounts of medical literature, assisting doctors in diagnosis or treatment suggestions. In entertainment, they might be used to generate creative content or scripts, collaborating with human creators in unique ways.
Moreover, the flexibility of LLMs presents an exciting potential for cross-industry collaborations. A model trained in both healthcare and law might aid in navigating the intricate labyrinth of healthcare regulations. Similarly, the intersection of finance and technology could see LLMs playing important roles in fintech innovations.
The rise of LLMs and their transformative impact on diverse sectors, as highlighted in the preceding sections, brings with it a multitude of challenges and considerations, not least of which are the computational demands associated with their training. The findings of this review reveal some critical insights into the complexities and implications of these computational necessities, which warrant an in-depth discussion.
Central to the operation and optimization of LLMs is the undeniable requirement for vast computational resources. The training processes for these models often necessitate the deployment of clusters comprising high-performance “Graphics Processing Units” (GPUs) or “Tensor Processing Units” (TPUs). Such hardware-intensive processes underscore the immense computational prowess that undergirds the functioning of LLMs. Nonetheless, the implications are multifaceted. On one hand, the need for such powerful computational infrastructures means that the barrier to entry in the realm of LLM research and application is significantly high. The financial overhead associated with procuring, maintaining, and running these powerful clusters can be a significant deterrent for many organizations, especially for smaller entities or those from resource-limited settings. This potentially leads to the centralization of capabilities and expertise in well-funded organizations or institutions, therefore raising concerns about equity and accessibility in the LLM landscape.
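The scale of these demands can be made concrete with a back-of-the-envelope calculation using the widely cited approximation that training compute is roughly six floating-point operations per parameter per training token; the model size, token count, and per-GPU throughput below are assumed values chosen only to convey the order of magnitude.

```python
# A rough estimate of LLM training cost via FLOPs ≈ 6 × params × tokens.
# All numbers are illustrative assumptions, not measurements.
params = 70e9             # 70-billion-parameter model (assumed)
tokens = 1.4e12           # 1.4 trillion training tokens (assumed)
flops = 6 * params * tokens

gpu_flops_per_s = 150e12  # ~150 TFLOP/s sustained per GPU (assumed)
gpu_seconds = flops / gpu_flops_per_s
print(f"Total compute: {flops:.2e} FLOPs")
print(f"About {gpu_seconds / 3600 / 24 / 365:,.0f} GPU-years on one such GPU")
```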
Beyond the financial implications, there is a growing awareness and concern about the environmental footprint of LLM operations. The extensive energy consumption associated with training these models, particularly in vast data centers, has raised alarms about their carbon footprint. In an age increasingly defined by concerns over climate change and environmental degradation, the sustainability of LLMs has become a pressing issue. Extensive energy requirements not only magnify operational costs but also position LLMs at an intersection where technological advancement could potentially be at odds with environmental stewardship. This necessitates a reevaluation of practices, as well as innovations in energy-efficient training methods or the incorporation of renewable energy sources in data centers.
Given these insights, it becomes imperative for the scientific community to address these challenges proactively. Potential pathways include the development of more efficient training algorithms that reduce computational demands, collaborative efforts that pool resources to democratize access, and a conscious push toward sustainable practices in LLM research and deployment. Additionally, a deeper engagement with interdisciplinary experts, especially from the environmental science and sustainable energy sectors, could pave the way for solutions that reconcile the dual imperatives of technological advancement and environmental responsibility.
In the ever-evolving landscape of LLMs, their remarkable capabilities, as analyzed within the conducted review, are closely intertwined with some pressing challenges, notably the matters of data sensitivity and privacy. The benefits of LLMs, ranging from advanced NLP applications to groundbreaking contributions in creative arts, business intelligence, and healthcare, are fundamentally grounded in their training on expansive datasets. Conversely, this very strength has given rise to pressing concerns regarding the potential for these models to inadvertently leak sensitive data.
LLMs’ ability to generalize from vast datasets, which often include user-generated content, poses a significant risk. There is an underlying possibility, albeit minimal, that these models might memorize specific patterns or even direct inputs from the training data. Given the diverse nature of their training data, which may span from public web pages to academic articles, the inadvertent reproduction of sensitive or personally identifiable information is a tangible concern. Such occurrences, although rare, could have far-reaching implications, including potential breaches of confidentiality agreements, exposure of proprietary information, or even the unauthorized disclosure of personal data.
While the field has made strides in addressing these concerns, notably through techniques such as differential privacy, these solutions are not infallible. Differential privacy, which adds a degree of randomness to data or outputs to obfuscate individual data points, has shown promise in curbing the likelihood of data leakage. Nevertheless, ensuring absolute anonymity, especially in a domain characterized by the enormity and diversity of data as with LLMs, remains an elusive goal. Additionally, introducing differential privacy can sometimes come at the cost of model performance, creating a tradeoff between usefulness and privacy.
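The intuition behind differential privacy can be conveyed with its basic building block, the Laplace mechanism: noise calibrated to a query’s sensitivity and a privacy budget epsilon is added to an aggregate statistic, trading accuracy for privacy. The values in the sketch below are toy assumptions.

```python
# A minimal sketch of the Laplace mechanism used in differential privacy:
# smaller epsilon means stronger privacy and therefore more noise.
import numpy as np

rng = np.random.default_rng(42)

def laplace_mechanism(true_value: float, sensitivity: float,
                      epsilon: float) -> float:
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

exact_count = 128.0  # e.g., users matching a query (toy value)
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps:>4}: {laplace_mechanism(exact_count, 1.0, eps):.1f}")
```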
Recognizing the profound societal and technological impact of LLMs, it becomes paramount for researchers, developers, and policymakers to place increased emphasis on fortifying data privacy measures. Beyond merely refining existing techniques like differential privacy, there is a pressing need to innovate novel methodologies that can ensure data privacy without compromising the efficiency and efficacy of LLMs.
Collaborative efforts between academia and industry can lead the way in setting standardized protocols for training data curation, ensuring that sensitive information is systematically identified and excluded. Moreover, developing mechanisms for regular audits of model outputs against known sensitive data patterns can act as a safeguard against unintentional disclosures.
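Such an audit can start from something as simple as scanning model outputs for known sensitive-data patterns. The sketch below checks for two illustrative patterns, email addresses and US-style social security numbers; a real audit would rely on far broader pattern sets and curated blocklists.

```python
# A minimal sketch of auditing model outputs against sensitive-data
# patterns; the two regular expressions are illustrative assumptions.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def audit_output(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, match) pairs found in a model output."""
    return [(name, m.group()) for name, rx in PATTERNS.items()
            for m in rx.finditer(text)]

sample = "Contact jane.doe@example.com; SSN 123-45-6789 on file."
print(audit_output(sample))
```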
The emergence and proliferation of LLMs in various technical and nontechnical sectors have undeniably created environments of increased automation, efficiency, and innovation. While the transformative power of these models has brought forth numerous advantages, it has simultaneously highlighted pressing concerns regarding their societal and economic implications, specifically in the domain of job displacement.
The rapid integration of LLMs into different industries implies a significant shift in the nature of many tasks previously undertaken by humans. For instance, the deployment of advanced chatbots that offer near-human conversational experiences may decrease the demand for customer service representatives in certain sectors. Similarly, automated content generation tools could challenge the roles of writers, journalists, and content creators. Such changes inevitably raise pertinent questions about the potential for job losses and their subsequent impact on the workforce.
The economic ramifications of LLMs cannot be viewed in isolation. A broader pattern emerges when one considers the trajectory of technological advancement throughout history. Historically, every major technological upheaval, from the Industrial Revolution to the rise of computerization, has been accompanied by fears of widespread job displacement. While it is true that certain jobs become obsolete with technological progress, new roles, industries, and opportunities often emerge in tandem. However, the transition is not always seamless, and the displacement of one job does not always equate to the immediate creation of another.
The issue at hand is multifaceted. There is an immediate economic impact, in which sectors heavily reliant on tasks that LLMs can automate might witness a sharp decline in job opportunities. Even more significantly, there is also the long-term concern of a mismatch between the skills that the future job market demands and the skills that the current workforce possesses. Consequently, the importance of upskilling and workforce retraining becomes paramount.
Without parallel initiatives aimed at upskilling or retraining, widespread LLM adoption could exacerbate socioeconomic inequalities. The vulnerable segments of the workforce, whose tasks are most easily automatable, are at heightened risk. Addressing these aspects requires concerted efforts from policymakers, industry leaders, and educational institutions to ensure that the workforce is prepared for the evolving job landscape. Programs that focus on equipping individuals with skills that are complementary to what LLMs offer are significant. Moreover, a multipronged approach that emphasizes not only technical skills but also soft skills like critical thinking, creativity, and emotional intelligence, which are less susceptible to automation, will be essential.
The extensive deployment and growing influence of LLMs in a plethora of technical sectors have ushered in a contemporary era of AI and NLP. While the potential benefits of LLMs are substantial, it is also critical to acknowledge and address the intricacies and challenges they introduce into the human–technology interface.
A salient concern that surfaces from the evolving LLM landscape is the propensity for overreliance on these models. The advanced capabilities of LLMs, such as their ability to generate coherent and often insightful outputs, may lead users to an unchecked acceptance of the information that LLMs produce. This raises the critical issue of trust calibration. It is of utmost importance that users of LLMs understand the inherent limitations and potential biases present in these models. Misplaced trust or an overestimation of an LLM’s accuracy can lead to misinformed decisions, especially in critical sectors such as healthcare, legal analytics, and business intelligence.
As demonstrated in this review, LLMs have numerous applications and the potential to bring significant solutions to numerous domains, such as the medical field (one of the most frequently encountered application areas). Consequently, serious problems that emerged during the COVID-19 pandemic and are still plaguing the medical field, such as the need for round-the-clock psychotherapeutic support for medical personnel suffering from severe burnout [59], or supporting the medical decision-making process in order to avoid mistakes that are more likely to occur in telemedicine environments [60], may someday be alleviated by LLMs.
While LLMs often produce outputs that mimic human-like expertise, it is important to remember that they are inherently a product of their training data and algorithms. They lack human intuition, ethical reasoning, and the vastness of experiential knowledge. As such, there exists an imperative need to ensure that users are educated about the model’s nature, thereby calibrating their trust and reliance on its outputs appropriately.
Beyond individual users, institutional challenges must also be overcome. Organizations that deploy LLMs in their operations need to establish guidelines and protocols to ensure that the model’s outputs are cross-verified, especially when consequential decisions are at stake. Moreover, feedback loops should be incorporated to rectify any inaccuracies or biases, thereby refining the model over time.
Addressing these challenges is not just a technological endeavor but also an ethical and societal one. Technological innovation is only one facet of the solution. Ethical considerations come into play when determining the boundaries of LLM usage, especially in areas in which misinformation or biases could have severe consequences. Meanwhile, societal collaboration is extremely important for creating a collective awareness of LLMs’ capabilities and limitations, fostering an environment in which technology complements human expertise rather than uncritically overriding it.
The next section presents the conclusions of this study and its importance to the scientific community, specialists, and users.

5. Conclusions

The transformative potential of LLMs, particularly in the domain of NLU, has been substantially demonstrated. Their ability to provide enhanced sentiment analysis, question answering, and content summarization is proof of their efficiency and efficacy. As the technical community stands on this precipice of advancement, it is imperative to harness the potential of LLMs judiciously, ensuring that their contributions align with societal betterment and technological enlightenment.
The scalability in information processing offered by LLMs is reshaping the landscape of data analysis, extraction, and content creation. While the benefits are manifold, it is imperative for the scientific and professional communities to engage in continuous dialogue, ensuring that as we harness the potential of LLMs, we do so responsibly, ethically, and innovatively. The trajectory of LLMs, as analyzed in the preceding sections, underscores their transformative potential. As we anticipate future developments, a holistic understanding and a forward-looking perspective will be paramount in guiding this technological behemoth toward societal and technological advancement.
While the potential of LLM-driven personalization is vast, it is also important to address associated challenges. Personalization, if not wielded judiciously, can lead to the creation of “echo chambers”, in which users are only exposed to content that aligns with their existing beliefs or preferences. Furthermore, ethical considerations regarding user data privacy and the extent to which personalization algorithms should influence user decisions need to be rigorously examined.
The multilingual capabilities of LLMs have ushered in a new era of global communication and operation, breaking down linguistic barriers and creating a more interconnected world. The burden is on developers, researchers, businesses, and policymakers to harness this potential responsibly, ensuring that as we embrace a multilingual digital future, we do so with precision, empathy, and understanding.
The adaptability of LLMs across domains underscores their revolutionary potential. Their ability to seamlessly integrate, adapt, and innovate within varied industries is illustrative of their transformative power. The literature review of the most impactful scientific works offers insights into their growth, especially into their flexibility, which truly highlights their potential future trajectory. As industries continue to evolve and intersect, the role of LLMs, with their unparalleled adaptability, will undoubtedly be at the forefront of technological and societal advancement.
Although the evolution of LLMs paints a promising picture of technological progress and myriad applications, it is accompanied by pressing challenges. Addressing the computational and environmental demands of LLMs is not only a technical necessity but also a moral and ecological imperative. The journey forward will require a harmonization of innovation with responsibility, ensuring that the LLM landscape evolves in a manner that is both pioneering and sustainable.
The remarkable ascent of LLMs in the technical domain, highlighted in the preceding sections, underscores their transformative potential. As we stand on the cusp of further advancements and wider LLM adoption, responsibly addressing the intertwined challenges of data sensitivity and privacy will be decisive. Through collaborative research, innovation, and vigilance, the goal is to harness the unparalleled capabilities of LLMs, ensuring that they serve as a boon for societal and technological progress while safeguarding individual and collective data rights.
Although LLMs represent an exciting frontier in the realm of technology, their widespread adoption must be approached with a nuanced understanding of their broader socioeconomic implications. Embracing the benefits of LLMs should not come at the expense of the workforce. A balanced approach that acknowledges the transformative potential of LLMs while actively addressing the challenges they pose will be key to harnessing their power for the collective advancement of society.
Even though the evolutionary trajectory of LLMs as charted in this review underscores their transformative potential, it is imperative that the scientific and global community at large remain vigilant. The very essence of our reliance on technology, and more specifically on LLMs, must be anchored in informed and judicious trust. Only by navigating these multifaceted challenges with a holistic approach that encompasses technological, ethical, and societal dimensions can we truly harness the full potential of LLMs for the advancement of society and technology.
Despite the numerous opportunities and technological advancements, there are considerable limitations inherent in LLMs, primarily revolving around their dependency on vast datasets and computational resources that raise considerable concerns. Considering the previous work related to LLMs and the foregoing literature review and discussions, it is evident that while LLMs represent a significant leap forward in artificial intelligence, their current limitations necessitate a cautious and discerning application.
A paramount concern is the phenomenon of “hallucinations”, in which LLMs generate plausible but factually incorrect or nonsensical information. This not only calls into question the reliability of these models but also raises serious concerns in applications in which accuracy is critical, such as informational and educational contexts. Furthermore, the potential for misuse of LLMs in generating misleading information or “deepfakes” poses a profound societal risk. The ease with which persuasive but factually incorrect or manipulative content can be generated necessitates robust safeguards and ethical guidelines to prevent harm.
Another important limitation is the fact that these models are constrained by the biases inherent in their training data. Despite advancements in algorithmic neutrality, LLMs continue to reflect and sometimes amplify societal prejudices, underscoring the need for more rigorous and inclusive data curation.
Another salient limitation lies in the interpretability of LLMs. As these models grow in complexity, their decision-making processes become increasingly opaque, posing challenges not only for validation and trust but also for compliance with emerging regulations that mandate explainability in AI systems. This “black box” nature hinders the capacity for human oversight and raises ethical concerns, especially in high-stakes domains such as healthcare and law.
Furthermore, the environmental impact of LLMs cannot be overlooked. The immense computational resources required for training and operating these models translate into significant carbon footprints, which is contrary to global efforts aimed at sustainability. Although strides are being made toward more energy-efficient algorithms, the scale of improvement required is substantial.
Just as LLMs offer transformative potential, their current limitations highlight an essential need for continued research and responsible stewardship. Addressing these challenges will not only enhance the efficacy and reliability of LLMs but also ensure their alignment with societal values and ethical norms. As we are part of this disruptive technological evolution, it is imperative that we manage these limitations with a balanced approach, harmonizing technological advancement with human-centric principles.
Future research should therefore focus on developing more efficient, unbiased, and environmentally sustainable LLMs. This includes exploring novel training methodologies, implementing rigorous ethical guidelines, and embracing energy-efficient technologies. Future research should also aim to enhance the contextual understanding of LLMs, thereby bridging the gap between technological complexity and human-centric applications. This involves deepening the models’ grasp of cultural nuances and ethical considerations, ensuring that their outputs are not only accurate but also culturally sensitive and morally sound. Moreover, the societal implications of LLMs, particularly in terms of job displacement and data privacy, require a balanced and multidisciplinary approach. Future work should foster collaboration across fields such as economics, sociology, and law to develop frameworks that mitigate the risks of automation while harnessing its benefits. This includes creating upskilling programs, advocating for fair data usage policies, and ensuring equitable access to these technologies.
The future of LLMs should be guided by a commitment to responsible innovation, in which technological advancement coexists with ethical integrity, environmental sustainability, and societal well-being. Embracing this complex approach from multiple angles will not only maximize the potential of LLMs but also ensure their evolution aligns with the overarching goal of bringing about a more inclusive, equitable, and enlightened digital future.
In addressing the intricacies of LLMs, this study has endeavored to provide a comprehensive analysis within its stated parameters. Nevertheless, it is important to acknowledge a notable limitation in our research scope. The current investigation did not extend to an examination of patents related to LLMs. This omission is not an oversight but rather a deliberate scope delineation, considering the complex and extensive nature of patent data. Consequently, the insights derived from patent analyses, which can offer valuable perspectives on technological advancements and intellectual property trends in LLMs, remain unexplored in this study. This gap in our research underscores a significant avenue for future work, in which a detailed exploration of patents could expose additional dimensions of LLM development and deployment, thereby enriching our understanding of this rapidly evolving field.
In conclusion, while LLMs offer transformative advantages that can revolutionize various sectors, it is fundamental to navigate their challenges and disadvantages with caution and foresight. The balanced harnessing of their potential, while addressing their pitfalls, will determine their trajectory in reshaping the technological landscape for the betterment of humankind.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/electronics12244957/s1: the sheet “Research Areas” of the “The Datasets.xlsx” Excel file, which contains detailed information regarding the Clarivate WoS Research Areas; the sheet “Publication Years” of the “The Datasets.xlsx” Excel file, which comprises the details regarding the publication years of the papers from the obtained scientific pool of articles; the sheet “Citation Meso” of the “The Datasets.xlsx” Excel file, which contains information regarding the classification of the scientific pool of papers according to the Clarivate Citation Meso Criterion; and the sheet “Citation Micro” of the “The Datasets.xlsx” Excel file, which comprises information regarding the classification of the scientific pool of papers according to the Clarivate Citation Micro Criterion.

Author Contributions

Conceptualization, D.-M.P., A.P. and A.T.; methodology, D.-M.P., A.P. and A.T.; software, D.-M.P., A.P. and A.T.; validation, D.-M.P., A.P. and A.T.; formal analysis, D.-M.P., A.P. and A.T.; investigation, D.-M.P., A.P. and A.T.; resources, D.-M.P., A.P. and A.T.; data curation, D.-M.P., A.P. and A.T.; writing—original draft preparation, D.-M.P., A.P. and A.T.; writing—review and editing, D.-M.P., A.P. and A.T.; visualization, D.-M.P., A.P. and A.T.; supervision, A.P.; project administration, D.-M.P.; funding acquisition, D.-M.P., A.P. and A.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The datasets are available in “The Datasets.xlsx” Excel document from within the Supplementary Materials file available online at: https://www.mdpi.com/article/10.3390/electronics12244957/s1.

Acknowledgments

The authors would like to express their gratitude for the logistics support received from the Center of Research, Consultancy and Training in Economic Informatics and Information Technology RAU-INFORTIS of the Romanian-American University.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bouschery, S.G.; Blazevic, V.; Piller, F.T. Augmenting Human Innovation Teams with Artificial Intelligence: Exploring Transformer-Based Language Models. J. Prod. Innov. Manag. 2023, 40, 139–153. [Google Scholar] [CrossRef]
  2. Huang, A.H.; Wang, H.; Yang, Y. FinBERT: A Large Language Model for Extracting Information from Financial Text. Contemp. Account. Res. 2023, 40, 806–841. [Google Scholar] [CrossRef]
  3. Dwivedi, Y.K.; Kshetri, N.; Hughes, L.; Slade, E.L.; Jeyaraj, A.; Kar, A.K.; Baabdullah, A.M.; Koohang, A.; Raghavan, V.; Ahuja, M.; et al. “So What If ChatGPT Wrote It?” Multidisciplinary Perspectives on Opportunities, Challenges and Implications of Generative Conversational AI for Research, Practice and Policy. Int. J. Inf. Manag. 2023, 71, 102642. [Google Scholar] [CrossRef]
  4. Liu, S.; Wright, A.P.; Patterson, B.L.; Wanderer, J.P.; Turer, R.W.; Nelson, S.D.; McCoy, A.B.; Sittig, D.F.; Wright, A. Using AI-Generated Suggestions from ChatGPT to Optimize Clinical Decision Support. J. Am. Med. Inform. Assoc. 2023, 30, 1237–1245. [Google Scholar] [CrossRef] [PubMed]
  5. Nikfarjam, A.; Sarker, A.; O’connor, K.; Ginn, R.; Gonzalez, G. Pharmacovigilance from Social Media: Mining Adverse Drug Reaction Mentions Using Sequence Labeling with Word Embedding Cluster Features. J. Am. Med. Inform. Assoc. 2015, 22, 671–681. [Google Scholar] [CrossRef]
  6. Gao, C.A.; Howard, F.M.; Markov, N.S.; Dyer, E.C.; Ramesh, S.; Luo, Y.; Pearson, A.T. Comparing Scientific Abstracts Generated by ChatGPT to Real Abstracts with Detectors and Blinded Human Reviewers. npj Digit. Med. 2023, 6, 75. [Google Scholar] [CrossRef]
  7. Timoshenko, A.; Hauser, J.R. Identifying Customer Needs from User-Generated Content. Mark. Sci. 2019, 38, 1–20. [Google Scholar] [CrossRef]
  8. Lund, B.D.; Wang, T.; Mannuru, N.R.; Nie, B.; Shimray, S.; Wang, Z. ChatGPT and a New Academic Reality: Artificial Intelligence-Written Research Papers and the Ethics of the Large Language Models in Scholarly Publishing. J. Assoc. Inf. Sci. Technol. 2023, 74, 570–581. [Google Scholar] [CrossRef]
  9. Kong, Q.; Cao, Y.; Iqbal, T.; Wang, Y.; Wang, W.; Plumbley, M.D. PANNs: Large-Scale Pretrained Audio Neural Networks for Audio Pattern Recognition. IEEE/ACM Trans. Audio Speech Lang. Process. 2020, 28, 2880–2894. [Google Scholar] [CrossRef]
  10. Thompson, A.P.; Aktulga, H.M.; Berger, R.; Bolintineanu, D.S.; Brown, W.M.; Crozier, P.S.; Veld, P.J.I.; Kohlmeyer, A.; Moore, S.G.; Nguyen, T.D.; et al. LAMMPS—A Flexible Simulation Tool for Particle-Based Materials Modeling at the Atomic, Meso, and Continuum Scales. Comput. Phys. Commun. 2022, 271, 108171. [Google Scholar] [CrossRef]
  11. Khorshidi, A.; Peterson, A.A. Amp: A Modular Approach to Machine Learning in Atomistic Simulations. Comput. Phys. Commun. 2016, 207, 310–324. [Google Scholar] [CrossRef]
  12. Bingham, E.; Chen, J.P.; Jankowiak, M.; Obermeyer, F.; Pradhan, N.; Karaletsos, T.; Singh, R.; Szerlip, P.; Horsfall, P.; Goodman, N.D. Pyro: Deep Universal Probabilistic Programming. J. Mach. Learn. Res. 2019, 20, 1–6. [Google Scholar]
  13. Park, J.; Samarakoon, S.; Bennis, M.; Debbah, M. Wireless Network Intelligence at the Edge. Proc. IEEE 2019, 107, 2204–2239. [Google Scholar] [CrossRef]
  14. Lake, B.M.; Salakhutdinov, R.; Tenenbaum, J.B. Human-Level Concept Learning through Probabilistic Program Induction. Science 2015, 350, 1332–1338. [Google Scholar] [CrossRef] [PubMed]
  15. Münchmeyer, J.; Woollam, J.; Rietbrock, A.; Tilmann, F.; Lange, D.; Bornstein, T.; Diehl, T.; Giunchi, C.; Haslinger, F.; Jozinović, D.; et al. Which Picker Fits My Data? A Quantitative Evaluation of Deep Learning Based Seismic Pickers. J. Geophys. Res. Solid Earth 2022, 127, e2021JB023499. [Google Scholar] [CrossRef]
  16. Rasmy, L.; Xiang, Y.; Xie, Z.; Tao, C.; Zhi, D. Med-BERT: Pretrained Contextualized Embeddings on Large-Scale Structured Electronic Health Records for Disease Prediction. npj Digit. Med. 2021, 4, 86. [Google Scholar] [CrossRef]
  17. Mollahosseini, A.; Hasani, B.; Mahoor, M.H. AffectNet: A Database for Facial Expression, Valence, and Arousal Computing in the Wild. IEEE Trans. Affect. Comput. 2019, 10, 18–31. [Google Scholar] [CrossRef]
  18. Lecler, A.; Duron, L.; Soyer, P. Revolutionizing Radiology with GPT-Based Models: Current Applications, Future Possibilities and Limitations of ChatGPT. Diagn. Interv. Imaging 2023, 104, 269–274. [Google Scholar] [CrossRef]
  19. Carvalho, I.; Ivanov, S. ChatGPT for Tourism: Applications, Benefits and Risks. Tour. Rev. 2023. ahead-of-print. [Google Scholar] [CrossRef]
  20. Xie, Y.; Seth, I.; Hunter-Smith, D.J.; Rozen, W.M.; Ross, R.; Lee, M. Aesthetic Surgery Advice and Counseling from Artificial Intelligence: A Rhinoplasty Consultation with ChatGPT. Aesthetic Plast. Surg. 2023, 47, 1985–1993. [Google Scholar] [CrossRef]
  21. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. In Proceedings of the Advances in Neural Information Processing Systems; Guyon, I., Von Luxburg, U., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, New York, USA, 2017; Volume 30. [Google Scholar]
  22. Zhu, L.; Zhu, Z.; Zhang, C.; Xu, Y.; Kong, X. Multimodal Sentiment Analysis Based on Fusion Methods: A Survey. Inf. Fusion 2023, 95, 306–325. [Google Scholar] [CrossRef]
  23. Zhu, L.; Xu, M.; Bao, Y.; Xu, Y.; Kong, X. Deep Learning for Aspect-Based Sentiment Analysis: A Review. PeerJ Comput. Sci. 2022, 8, e1044. [Google Scholar] [CrossRef]
  24. Sams, A.S.; Zahra, A. Multimodal Music Emotion Recognition in Indonesian Songs Based on CNN-LSTM, XLNet Transformers. Bull. Electr. Eng. Inform. 2023, 12, 355–364. [Google Scholar] [CrossRef]
  25. Bird, J.J.; Ekárt, A.; Faria, D.R. Chatbot Interaction with Artificial Intelligence: Human Data Augmentation with T5 and Language Transformer Ensemble for Text Classification. J. Ambient. Intell. Humaniz. Comput. 2023, 14, 3129–3144. [Google Scholar] [CrossRef]
  26. Cooper, G. Examining Science Education in ChatGPT: An Exploratory Study of Generative Artificial Intelligence. J. Sci. Educ. Technol. 2023, 32, 444–452. [Google Scholar] [CrossRef]
  27. Samaan, J.S.; Yeo, Y.H.; Rajeev, N.; Hawley, L.; Abel, S.; Ng, W.H.; Srinivasan, N.; Park, J.; Burch, M.; Watson, R.; et al. Assessing the Accuracy of Responses by the Language Model ChatGPT to Questions Regarding Bariatric Surgery. Obes. Surg. 2023, 33, 1790–1796. [Google Scholar] [CrossRef] [PubMed]
  28. Hallsworth, J.E.; Udaondo, Z.; Pedrós-Alió, C.; Höfer, J.; Benison, K.C.; Lloyd, K.G.; Cordero, R.J.B.; de Campos, C.B.L.; Yakimov, M.M.; Amils, R. Scientific Novelty beyond the Experiment. Microb. Biotechnol. 2023, 16, 1131–1173. [Google Scholar] [CrossRef]
  29. Clarivate Web of Science Journal Evaluation Process and Selection Criteria. Available online: https://clarivate.com/products/scientific-and-academic-research/research-discovery-and-workflow-solutions/webofscience-platform/web-of-science-core-collection/editorial-selection-process/editorial-selection-process/ (accessed on 6 November 2023).
  30. Roh, Y.; Heo, G.; Whang, S.E. A Survey on Data Collection for Machine Learning: A Big Data—AI Integration Perspective. IEEE Trans. Knowl. Data Eng. 2021, 33, 1328–1347. [Google Scholar] [CrossRef]
  31. Otter, D.W.; Medina, J.R.; Kalita, J.K. A Survey of the Usages of Deep Learning for Natural Language Processing. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 604–624. [Google Scholar] [CrossRef]
  32. Choudhary, T.; Mishra, V.; Goswami, A.; Sarangapani, J. A Comprehensive Survey on Model Compression and Acceleration. Artif. Intell. Rev. 2020, 53, 5113–5155. [Google Scholar] [CrossRef]
  33. Wu, Y.; Jiang, L.; Yang, Y. Switchable Novel Object Captioner. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 1162–1173. [Google Scholar] [CrossRef] [PubMed]
  34. Ali, F.; El-Sappagh, S.; Islam, S.R.; Ali, A.; Attique, M.; Imran, M.; Kwak, K.-S. An Intelligent Healthcare Monitoring Framework Using Wearable Sensors and Social Networking Data. Futur. Gener. Comput. Syst. 2021, 114, 23–43. [Google Scholar] [CrossRef]
  35. Zhang, Y.; Wang, H.; Chen, W.; Zeng, J.; Zhang, L.; Wang, H.; Weinan, E. DP-GEN: A Concurrent Learning Platform for the Generation of Reliable Deep Learning Based Potential Energy Models. Comput. Phys. Commun. 2020, 253, 107206. [Google Scholar] [CrossRef]
  36. Fink, O.; Wang, Q.; Svensén, M.; Dersin, P.; Lee, W.-J.; Ducoffe, M. Potential, Challenges and Future Directions for Deep Learning in Prognostics and Health Management Applications. Eng. Appl. Artif. Intell. 2020, 92, 103678. [Google Scholar] [CrossRef]
  37. Elnagar, A.; Al-Debsi, R.; Einea, O. Arabic Text Classification Using Deep Learning Models. Inf. Process. Manag. 2020, 57, 102121. [Google Scholar] [CrossRef]
  38. Rezaeinia, S.M.; Rahmani, R.; Ghodsi, A.; Veisi, H. Sentiment Analysis Based on Improved pre-Trained Word Embeddings. Expert Syst. Appl. 2019, 117, 139–147. [Google Scholar] [CrossRef]
  39. Tu, Y.; Lin, Y.; Zha, H.; Zhang, J.; Wang, Y.; Gui, G.; Mao, S. Large-Scale Real-World Radio Signal Recognition with Deep Learning. Chin. J. Aeronaut. 2022, 35, 35–48. [Google Scholar] [CrossRef]
  40. Yang, X.; Chen, A.; PourNejatian, N.; Shin, H.C.; Smith, K.E.; Parisien, C.; Compas, C.; Martin, C.; Costa, A.B.; Flores, M.G.; et al. A Large Language Model for Electronic Health Records. npj Digit. Med. 2022, 5, 194.
  41. Kwok, S.W.H.; Vadde, S.K.; Wang, G. Tweet Topics and Sentiments Relating to COVID-19 Vaccination Among Australian Twitter Users: Machine Learning Analysis. J. Med. Internet Res. 2021, 23, e26953.
  42. Cascella, M.; Montomoli, J.; Bellini, V.; Bignami, E. Evaluating the Feasibility of ChatGPT in Healthcare: An Analysis of Multiple Clinical and Research Scenarios. J. Med. Syst. 2023, 47, 33.
  43. Gupta, R.; Herzog, I.; Weisberger, J.; Chao, J.; Chaiyasate, K.; Lee, E.S. Utilization of ChatGPT for Plastic Surgery Research: Friend or Foe? J. Plast. Reconstr. Aesthetic Surg. 2023, 80, 145–147.
  44. Huai, S.; Van de Voorde, T. Which Environmental Features Contribute to Positive and Negative Perceptions of Urban Parks? A Cross-Cultural Comparison Using Online Reviews and Natural Language Processing Methods. Landsc. Urban Plan. 2022, 218, 104307.
  45. Liang, H.; Tsui, B.Y.; Ni, H.; Valentim, C.C.S.; Baxter, S.L.; Liu, G.; Cai, W.; Kermany, D.S.; Sun, X.; Chen, J.; et al. Evaluation and Accurate Diagnoses of Pediatric Diseases Using Artificial Intelligence. Nat. Med. 2019, 25, 433–438.
  46. Chartrand, G.; Cheng, P.M.; Vorontsov, E.; Drozdzal, M.; Turcotte, S.; Pal, C.J.; Kadoury, S.; Tang, A. Deep Learning: A Primer for Radiologists. RadioGraphics 2017, 37, 2113–2131.
  47. Gentine, P.; Pritchard, M.; Rasp, S.; Reinaudi, G.; Yacalis, G. Could Machine Learning Break the Convection Parameterization Deadlock? Geophys. Res. Lett. 2018, 45, 5742–5751.
  48. O’Gorman, P.A.; Dwyer, J.G. Using Machine Learning to Parameterize Moist Convection: Potential for Modeling of Climate, Climate Change, and Extreme Events. J. Adv. Model. Earth Syst. 2018, 10, 2548–2563.
  49. Brenowitz, N.D.; Bretherton, C.S. Prognostic Validation of a Neural Network Unified Physics Parameterization. Geophys. Res. Lett. 2018, 45, 6289–6298.
  50. Harrer, S. Attention Is Not All You Need: The Complicated Case of Ethically Using Large Language Models in Healthcare and Medicine. EBioMedicine 2023, 90, 104512.
  51. Lin, Z.; Akin, H.; Rao, R.; Hie, B.; Zhu, Z.; Lu, W.; Smetanin, N.; Verkuil, R.; Kabeli, O.; Shmueli, Y.; et al. Evolutionary-Scale Prediction of Atomic-Level Protein Structure with a Language Model. Science 2023, 379, 1123–1130.
  52. Hu, Y.-H.; Fu, J.S.; Yeh, H.-C. Developing an Early-Warning System through Robotic Process Automation: Are Intelligent Tutoring Robots as Effective as Human Teachers? Interact. Learn. Environ. 2023, 31, 1–14.
  53. El-Sappagh, S.; Alonso, J.M.; Islam, S.M.R.; Sultan, A.M.; Kwak, K.S. A Multilayer Multimodal Detection and Prediction Model Based on Explainable Artificial Intelligence for Alzheimer’s Disease. Sci. Rep. 2021, 11, 2660.
  54. Wu, W.-T.; Li, Y.-J.; Feng, A.-Z.; Li, L.; Huang, T.; Xu, A.-D.; Lyu, J. Data Mining in Clinical Big Data: The Frequently Used Databases, Steps, and Methodological Models. Mil. Med. Res. 2021, 8, 44.
  55. Chen, X.; Zou, D.; Xie, H.; Cheng, G.; Liu, C. Two Decades of Artificial Intelligence in Education: Contributors, Collaborations, Research Topics, Challenges, and Future Directions. Educ. Technol. Soc. 2022, 25, 28–47.
  56. Høie, M.H.; Kiehl, E.N.; Petersen, B.; Nielsen, M.; Winther, O.; Nielsen, H.; Hallgren, J.; Marcatili, P. NetSurfP-3.0: Accurate and Fast Prediction of Protein Structural Features by Protein Language Models and Deep Learning. Nucleic Acids Res. 2022, 50, W510–W515.
  57. Bolton, T.; Zanna, L. Applications of Deep Learning to Ocean Data Inference and Subgrid Parameterization. J. Adv. Model. Earth Syst. 2019, 11, 376–399.
  58. Clarivate Web of Science Citation Topics. Available online: https://incites.help.clarivate.com/Content/Research-Areas/citation-topics.htm?Highlight=Citation%20Topics (accessed on 11 November 2023).
  59. Dimitriu, M.C.; Pantea-Stoian, A.; Smaranda, A.C.; Nica, A.A.; Carap, A.C.; Constantin, V.D.; Davitoiu, A.M.; Cirstoveanu, C.; Bacalbasa, N.; Bratu, O.G.; et al. Burnout Syndrome in Romanian Medical Residents in Time of the COVID-19 Pandemic. Med. Hypotheses 2020, 144, 109972.
  60. Moroianu, M.; Bogdan-Goroftei, R.E.; Salmen, T.; Bica, C.I.; Pietrosel, V.-A.; Hainarosie, R.; Stoian, A.P. Evaluation of Medical Decision Errors during the Transition Period to Telemedicine. J. Mind Med. Sci. 2023, 10, 72–78.
Figure 1. The distribution of papers based on the Research Area Criterion (Source: This figure was devised based on the official data retrieved from Clarivate Web of Science on 8 September 2023).
Figure 2. The distribution of publications over the years (Source: This figure was devised based on the official data retrieved from Clarivate Web of Science on 8 September 2023).
Figure 3. The representation of citation topics at the Meso level (Source: This figure was devised based on the official data retrieved from Clarivate Web of Science on 8 September 2023).
Figure 4. The representation of citation topics at the Micro level (Source: This figure was devised based on the official data retrieved from Clarivate Web of Science on 8 September 2023).
Table 1. The comparative characteristics of the transformer, GPT, and BERT architectures.

| Characteristics | Transformer | GPT | BERT |
| --- | --- | --- | --- |
| Primary Function | Sequence-to-sequence tasks (e.g., translation, summarization) | Text generation | Understanding language context (e.g., sentence classification, question answering) |
| Structure | Encoder–decoder | Decoder-only | Encoder-only |
| Components | Multiple encoder and decoder layers, each with self-attention and a feed-forward network | Stack of decoder layers with a large number of parameters (e.g., 175 billion in GPT-3) | Stack of encoder layers, bidirectional context processing |
| Attention Mechanism | Self-attention in both encoder and decoder | Self-attention in decoder layers | Self-attention in encoder layers |
| Training Stages | Not explicitly divided into stages | Pretraining on a large corpus, followed by fine-tuning | Pretraining with masked language modeling (MLM) and next sentence prediction (NSP), followed by fine-tuning |
| Typical Application | Machine translation, text summarization | Content creation, conversational agents | Sentiment analysis, language understanding tasks |
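To make the structural distinctions in Table 1 concrete, the following minimal Python sketch implements single-head scaled dot-product self-attention, the mechanism shared by all three architectures. It is an illustrative toy under stated assumptions, not code from any of the surveyed models: the `self_attention` helper, the weight matrices, and all dimensions are hypothetical choices made for readability. Setting `causal=True` applies the look-ahead mask characteristic of GPT-style decoder layers, while `causal=False` corresponds to the bidirectional context processing of BERT-style encoder layers.

```python
# Illustrative sketch only: a single attention head, with toy sizes and random weights.
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv, causal=False):
    """Single-head scaled dot-product self-attention.

    X is a (seq_len, d_model) matrix of token representations.
    causal=False mirrors BERT's bidirectional encoder layers;
    causal=True mirrors GPT's decoder-only layers, where each token
    may attend only to itself and earlier positions.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(Q.shape[-1])     # (seq_len, seq_len) pairwise affinities
    if causal:
        future = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(future, -np.inf, scores)  # mask out future positions
    return softmax(scores) @ V                  # attention-weighted mix of values

# Toy setup: 4 tokens, model width 8, one head of width 8 (all sizes illustrative).
rng = np.random.default_rng(0)
seq_len, d_model, d_head = 4, 8, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))

encoder_style = self_attention(X, Wq, Wk, Wv, causal=False)  # BERT-like
decoder_style = self_attention(X, Wq, Wk, Wv, causal=True)   # GPT-like
print(encoder_style.shape, decoder_style.shape)              # (4, 8) (4, 8)
```

In a full transformer block, this attention output would feed the per-layer feed-forward network listed in the Components row of Table 1, and multi-head attention simply runs several such heads in parallel and concatenates their outputs.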
Table 2. Summarization of the main criteria and categories from the obtained scientific pool of papers, along with their corresponding scientific articles.

| Criteria | Main Categories, According to the Criteria | Scientific Papers, According to the Categories |
| --- | --- | --- |
| Research Areas | Computer Science | [4,5,8,10,11,12,17,33,34,35,36,37,38] |
| | Engineering | [1,9,13,33,36,38,39] |
| | Health Care Sciences & Services | [4,5,6,40,41,42,43] |
| | Medical Informatics | [4,5,6,40,41,42,43] |
| | Information Science & Library Science | [3,4,5,8,37] |
| Publication Years | 2015–2018 | [5,11,14,46,47,48,49] |
| | 2019 | [7,12,13,17,38,45,57] |
| | 2020 | [9,35,36,37] |
| | 2021 | [10,34,41,53,54] |
| | 2022 | [15,39,40,44,55,56] |
| | 2023 | [1,2,3,4,6,8,18,19,20,26,27,28,33,42,43,50,51,52] |
| Citation Topics (Meso) | Computer Vision and Graphics | [12,13,14,17,18,33,45,46] |
| | Oceanography, Meteorology and Atmospheric Sciences | [47,48,49,57] |
| | Knowledge Engineering and Representation | [37,38,52] |
| | Physical Chemistry | [10,11,35] |
| | Telecommunications | [34,39,40] |
| Citation Topics (Micro) | Deep Learning | [12,13,14,18,33,45,46] |
| | Bulk Modulus | [10,11,35] |
| | Tropical Cyclones | [47,48,49] |
| | Natural Language Processing | [37,38] |
| | Health Literacy | [6,41] |
Table 3. A synthesis of relevant papers from the scientific pool of articles.

| No. | Reference Number/Publication Year | Number of Citations ¹ | Research Areas | Keywords | Purpose |
| --- | --- | --- | --- | --- | --- |
| 1 | [18]/2023 | 32 | Radiology, Nuclear Medicine and Medical Imaging | Artificial intelligence; ChatGPT; Generative pretrained transformer (GPT); Radiology | Explore the utilization and potential of LLMs in the field of radiology |
| 2 | [43]/2023 | 9 | Surgery | Artificial intelligence (AI); Education; Systematic reviews | Potential contributions of ChatGPT to plastic surgery |
| 3 | [27]/2023 | 10 | Surgery | Artificial intelligence; ChatGPT; Language learning models; Bariatric surgery; Weight loss; Health literacy | Assess the accuracy and reproducibility of ChatGPT's responses to patient queries related to bariatric surgery |
| 4 | [6]/2023 | 27 | Health Care Sciences and Services; Medical Informatics | N/A | An analysis of the authenticity and accuracy of abstracts generated by LLMs like ChatGPT, juxtaposed against original research abstracts from leading medical journals |
| 5 | [26]/2023 | 17 | Education and Educational Research | Generative artificial intelligence and science education; Large language models; ChatGPT; Digital technologies | An in-depth analysis of the nuanced interactions between this technological innovation and its educational applications |
| 6 | [50]/2023 | 29 | General and Internal Medicine; Research and Experimental Medicine | Generative artificial intelligence; Large language models; Foundation models; AI ethics; Augmented human intelligence; Information management; AI trustworthiness | The shape-shifting features that LLMs provide with regard to data management workflows in the medical field |
| 7 | [8]/2023 | 26 | Computer Science; Information Science and Library Science | AI; Communication; Construction; Plagiarism; Citation | An in-depth analysis of the historical evolution and foundational principles underpinning ChatGPT and similar models, followed by an exploration of the nexus between this expanding technology and academia |
| 8 | [1]/2023 | 14 | Business and Economics; Engineering | Performance; Creativity; Knowledge; Search; Idea | Explores how GPT models can augment human innovation teams, enhancing new-product development by covering wider problem domains and providing solutions within them |
| 9 | [2]/2022 | 9 | Business and Economics | Deep learning; Large language model; Transfer learning; Interpretable machine learning; Sentiment classification; Environment, social and governance (ESG) | Introduces and discusses "FinBERT", a state-of-the-art large language model tailored for the finance domain |
| 10 | [38]/2019 | 153 | Computer Science; Engineering; Operations Research and Management Science | Sentiment analysis; Deep learning; Word embeddings; Word2Vec; GloVe; Natural language processing | Introduces a new methodology devised to enhance the accuracy of pretrained embeddings specifically for sentiment analysis |

¹ Retrieved from Clarivate Web of Science on 8 September 2023.