Topic Editors

Prof. Dr. Debiao He
School of Cyber Science and Engineering, Wuhan University, Wuhan 430000, China
Prof. Dr. Hu Xiong
School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China

Challenges and Solutions in Large Language Models

Abstract submission deadline: 30 April 2026
Manuscript submission deadline: 30 June 2026

Topic Information

Dear Colleagues,

Large Language Models (LLMs) are a class of deep-learning-based artificial intelligence systems designed to understand and generate natural language. With their powerful language understanding and generation abilities, these models have shown tremendous potential in various fields, including machine translation, text generation, sentiment analysis, and question-answering systems. However, LLMs face challenges related to safety, security, and privacy, which pose significant risks to their reliability, controllability, and long-term applicability. Developing effective solutions is therefore crucial for improving the reliability of LLMs in real-world applications. In this Topic, research areas may include adversarial robustness in LLMs, backdoor and poisoning attacks, model extraction attacks, prompt injection defense, privacy leakage of LLMs, privacy preservation and data security for LLMs, privacy-preserving fine-tuning, privacy-preserving inference for LLMs, trustworthiness and explainability of LLMs, and the security of multimodal LLMs.

Prof. Dr. Debiao He
Prof. Dr. Hu Xiong
Topic Editors

Keywords

  • adversarial robustness in LLMs
  • backdoor and poisoning attacks
  • model extraction attacks
  • prompt injection defense
  • privacy leakage of LLMs
  • privacy preservation and data security for LLMs
  • privacy-preserving fine-tuning
  • privacy-preserving inference for LLMs
  • trustworthiness and explainability of LLMs
  • security of multimodal LLMs

Participating Journals

Journal Name        Impact Factor   CiteScore   Launched Year   First Decision (median)   APC
Applied Sciences    2.5             5.5         2011            19.8 days                 CHF 2400
Cryptography        2.1             5.0         2017            23.1 days                 CHF 1800
Mathematics         2.2             4.6         2013            18.4 days                 CHF 2600
Symmetry            2.2             5.3         2009            17.1 days                 CHF 2400
AI                  5.0             6.9         2020            20.7 days                 CHF 1600

Preprints.org is a multidisciplinary platform offering a preprint service designed to facilitate the early sharing of your research. It supports and empowers your research journey from the very beginning.

MDPI Topics is collaborating with Preprints.org and has established a direct connection between MDPI journals and the platform. Authors are encouraged to take advantage of this opportunity by posting their preprints at Preprints.org prior to publication:

  1. Share your research immediately: Disseminate your ideas prior to publication and establish priority for your work.
  2. Safeguard your intellectual contribution: Protect your ideas with a time-stamped preprint that serves as proof of your research timeline.
  3. Boost visibility and impact: Increase the reach and influence of your research by making it accessible to a global audience.
  4. Gain early feedback: Receive valuable input and insights from peers before submitting to a journal.
  5. Ensure broad indexing: preprints are indexed by Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit, and Europe PMC.

Published Papers (1 paper)

37 pages, 2412 KB  
Systematic Review
Unlocking the Potential of the Prompt Engineering Paradigm in Software Engineering: A Systematic Literature Review
by Irdina Wanda Syahputri, Eko K. Budiardjo and Panca O. Hadi Putra
AI 2025, 6(9), 206; https://doi.org/10.3390/ai6090206 - 28 Aug 2025
Abstract
Prompt engineering (PE) has emerged as a transformative paradigm in software engineering (SE), leveraging large language models (LLMs) to support a wide range of SE tasks, including code generation, bug detection, and software traceability. This study conducts a systematic literature review (SLR) combined with a co-citation network analysis of 42 peer-reviewed journal articles to map key research themes, commonly applied PE methods, and evaluation metrics in the SE domain. The results reveal four prominent research clusters: manual prompt crafting, retrieval-augmented generation, chain-of-thought prompting, and automated prompt tuning. These approaches demonstrate notable progress, often matching or surpassing traditional fine-tuning methods in terms of adaptability and computational efficiency. Interdisciplinary collaboration among experts in AI, machine learning, and software engineering is identified as a key driver of innovation. However, several research gaps remain, including the absence of standardized evaluation protocols, sensitivity to prompt brittleness, and challenges in scalability across diverse SE applications. To address these issues, a modular prompt engineering framework is proposed, integrating human-in-the-loop design, automated prompt optimization, and version control mechanisms. Additionally, a conceptual pipeline is introduced to support domain adaptation and cross-domain generalization. Finally, a strategic research roadmap is presented, emphasizing future work on interpretability, fairness, and collaborative development platforms. This study offers a comprehensive foundation and practical insights to advance prompt engineering research tailored to the complex and evolving needs of software engineering.
(This article belongs to the Topic Challenges and Solutions in Large Language Models)
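For readers who want a concrete picture of the "version control mechanisms" and "human-in-the-loop design" mentioned in the abstract above, the following is a minimal, hypothetical Python sketch of a versioned prompt registry with an approval field. It is purely illustrative and is not the framework proposed by the authors; the names PromptRegistry, PromptVersion, approved_by, and the example prompt are assumptions introduced here.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class PromptVersion:
    """A single immutable revision of a prompt template (illustrative only)."""
    version: int
    template: str
    approved_by: str | None = None  # hypothetical human-in-the-loop sign-off
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class PromptRegistry:
    """Keeps every revision of a named prompt so changes can be audited or rolled back."""

    def __init__(self) -> None:
        self._history: dict[str, list[PromptVersion]] = {}

    def register(self, name: str, template: str, approved_by: str | None = None) -> PromptVersion:
        # Append a new revision; earlier versions remain available for auditing.
        revisions = self._history.setdefault(name, [])
        revision = PromptVersion(version=len(revisions) + 1,
                                 template=template,
                                 approved_by=approved_by)
        revisions.append(revision)
        return revision

    def latest(self, name: str) -> PromptVersion:
        return self._history[name][-1]

    def render(self, name: str, **variables: str) -> str:
        """Fill the latest revision of a template with task-specific values."""
        return self.latest(name).template.format(**variables)


if __name__ == "__main__":
    registry = PromptRegistry()
    registry.register(
        "bug_triage",
        "You are a software engineer. Classify the severity of this bug report:\n{report}",
        approved_by="reviewer@example.com",  # hypothetical reviewer identity
    )
    print(registry.render("bug_triage", report="App crashes when the settings page is opened."))
```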
