Recent Advances in Large Language Models

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: 15 July 2025

Special Issue Editors

Dr. Zeyd Boukhers
Guest Editor
Faculty of Computer Science, University of Koblenz-Landau, 56070 Koblenz, Germany
Interests: natural language processing; computer vision; time series analysis; AI and its sub-disciplines

Dr. Cong Yang
Guest Editor
School of Future Science and Engineering, Soochow University, Suzhou 215006, China
Interests: computer vision and pattern recognition

Special Issue Information

Dear Colleagues,

This Special Issue, entitled "Recent Advances in Large Language Models (LLMs)", invites original research papers that explore cutting-edge developments in the field of LLMs. We are particularly interested in contributions that delve into a broad spectrum of applications, including, but not limited to, image processing, text analysis, speech recognition, and time series analysis.

The scope of this Special Issue encompasses both advancements in the core architectures of LLMs and innovative applications of existing models. We encourage submissions highlighting novel methodologies for enhancing LLMs' efficiency, accuracy, and versatility. This includes, but is not limited to, improvements in training methods, optimization techniques, model scaling, and the integration of multimodal data processing.

Papers demonstrating the application of LLMs in diverse domains such as natural language processing, computer vision, healthcare, finance, and environmental science are especially welcome. We are also interested in studies that address the challenges associated with LLMs, such as ethical considerations, bias mitigation, and computational efficiency.

This Special Issue aims to provide a comprehensive overview of state-of-the-art LLMs, offering insights into their current capabilities and future potential. We look forward to receiving submissions that push the boundaries of what is possible with large language models.

Dr. Zeyd Boukhers
Dr. Cong Yang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image processing
  • text analysis
  • speech recognition
  • time series analysis

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (2 papers)


Research

23 pages, 7433 KiB  
Article
Harnessing Response Consistency for Superior LLM Performance: The Promise and Peril of Answer-Augmented Prompting
by Hua Wu, Haotian Hong, Li Sun, Xiaojing Bai and Mengyang Pu
Electronics 2024, 13(23), 4581; https://doi.org/10.3390/electronics13234581 - 21 Nov 2024
Abstract
This paper introduces Answer-Augmented Prompting (AAP), an innovative approach that leverages the Response Consistency of History of Dialogue (HoD) phenomenon in Large Language Models (LLMs). AAP not only achieves significantly superior performance enhancements compared to traditional augmentation methods but also exhibits a stronger potential for “jailbreaking”, allowing models to produce unsafe or misleading responses. By strategically modifying the HoD, AAP influences LLM performance in a dual manner: it promotes accuracy while amplifying risks associated with bypassing built-in safeguards. Our experiments demonstrate that AAP outperforms standard methods in both effectiveness and the ability to elicit harmful content. To address these risks, we propose comprehensive mitigation strategies for both LLM service providers and end-users. This research offers valuable insights into the implications of Response Consistency in LLMs, underscoring the promise and peril of this powerful capability.
(This article belongs to the Special Issue Recent Advances in Large Language Models)

Review

83 pages, 14385 KiB  
Review
A Review of Large Language Models: Fundamental Architectures, Key Technological Evolutions, Interdisciplinary Technologies Integration, Optimization and Compression Techniques, Applications, and Challenges
by Songyue Han, Mingyu Wang, Jialong Zhang, Dongdong Li and Junhong Duan
Electronics 2024, 13(24), 5040; https://doi.org/10.3390/electronics13245040 - 21 Dec 2024
Abstract
Large language model-related technologies have shown astonishing potential in tasks such as machine translation, text generation, logical reasoning, task planning, and multimodal alignment. Consequently, their applications have continuously expanded from natural language processing to computer vision, scientific computing, and other vertical industry fields. This rapid surge in research work in a short period poses significant challenges for researchers to comprehensively grasp the research dynamics, understand key technologies, and develop applications in the field. To address this, this paper provides a comprehensive review of research on large language models. First, it organizes and reviews the research background and current status, clarifying the definition of large language models in both Chinese and English communities. Second, it analyzes the mainstream infrastructure of large language models and briefly introduces the key technologies and optimization methods that support them. Then, it conducts a detailed review of the intersections between large language models and interdisciplinary technologies such as contrastive learning, knowledge enhancement, retrieval enhancement, hallucination dissolution, recommendation systems, reinforcement learning, multimodal large models, and agents, pointing out valuable research ideas. Finally, it organizes the deployment and industry applications of large language models, identifies the limitations and challenges they face, and provides an outlook on future research directions. Our review paper aims not only to provide systematic research but also to focus on the integration of large language models with interdisciplinary technologies, hoping to provide ideas and inspiration for researchers to carry out industry applications and the secondary development of large language models.
(This article belongs to the Special Issue Recent Advances in Large Language Models)
