Challenges and Solutions in Large Language Models
Topic Information
Dear Colleagues,
Large Language Models (LLMs) are a class of deep-learning-based artificial intelligence systems designed to understand and generate natural language. Owing to their powerful language understanding and generation capabilities, these models have shown tremendous potential in various fields, including machine translation, text generation, sentiment analysis, and question-answering systems. However, LLMs face challenges related to safety, security, and privacy, which pose significant risks to their reliability, controllability, and long-term applicability. Developing effective solutions to these challenges is crucial for improving the reliability of LLMs in real-world applications. In this topic, research areas may include adversarial robustness in LLMs, backdoor and poisoning attacks, model extraction attacks, prompt injection defense, privacy leakage of LLMs, privacy preservation and data security for LLMs, privacy-preserving fine-tuning, privacy-preserving inference for LLMs, trustworthiness and explainability of LLMs, and the security of multimodal LLMs.
Prof. Dr. Debiao He
Prof. Dr. Hu Xiong
Topic Editors
Keywords
- adversarial robustness in LLMs
- backdoor and poisoning attacks
- model extraction attacks
- prompt injection defense
- privacy leakage of LLMs
- privacy preservation and data security for LLMs
- privacy-preserving fine-tuning
- privacy-preserving inference for LLMs
- trustworthiness and explainability of LLMs
- security of multimodal LLMs
Participating Journals