Innovative Applications of Large Language Models in Natural Language Processing (NLP)

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 15 September 2025

Special Issue Editors

School of Computer Science and Technology, Dalian University of Technology, Dalian 116081, China
Interests: information retrieval; question answering and dialogue; natural language processing; large language models
School of Computer Science and Technology, Dalian University of Technology, Dalian 116081, China
Interests: natural language processing; biomedical text mining; machine learning

School of Computer Science and Technology, Dalian University of Technology, Dalian 116081, China
Interests: natural language processing; sentiment analysis and opinion mining; large language models

Special Issue Information

Dear Colleagues,

The rapid growth of big data and advances in computational power have spurred remarkable progress in natural language processing (NLP). Central to this progress are large language models (LLMs) such as GPT, BERT, and T5, which have shown exceptional capabilities in understanding and generating human-like text. These models, a subset of artificial intelligence (AI), leverage deep learning techniques to build predictive models that can handle diverse NLP tasks. This Special Issue will showcase advances in NLP and its applications, including significant progress in sentiment analysis, machine translation, semantic understanding, and more.

However, despite their impressive performance, challenges remain in the interpretability, scalability, and ethical implications of these models. This Special Issue invites experts and scholars from around the world to share their latest research results and technological advances, providing inspiration and ideas for the future development of NLP.

In this Special Issue, research areas may include (but are not limited to) the following:

  1. Large language models (LLMs)—capabilities, limitations, dangers, and evaluation;
  2. Knowledge-based, deep learning-based, and neuro-symbolic approaches to text processing;
  3. Information extraction;
  4. Information retrieval;
  5. Semantic analysis;
  6. Natural language processing;
  7. Question answering.

Dr. Bo Xu
Dr. Ling Luo
Dr. Liang Yang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • natural language processing
  • information retrieval
  • large language models
  • machine translation
  • artificial intelligence
  • sentiment analysis

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

18 pages, 1008 KiB  
Article
LLM-Based Query Expansion with Gaussian Kernel Semantic Enhancement for Dense Retrieval
by Min Pan, Wenrui Xiong, Shuting Zhou, Mengfei Gao and Jinguang Chen
Electronics 2025, 14(9), 1744; https://doi.org/10.3390/electronics14091744 - 24 Apr 2025
Abstract
In the field of Information Retrieval (IR), user-submitted keyword queries often fail to accurately represent users’ true search intent. With the rapid advancement of artificial intelligence, particularly in natural language processing (NLP), query expansion (QE) based on large language models (LLMs) has emerged as a key strategy for improving retrieval effectiveness. However, such methods often introduce query topic drift, which negatively impacts retrieval accuracy and efficiency. To address this issue, this study proposes an LLM-based QE framework that incorporates a Gaussian kernel-enhanced semantic space for dense retrieval. Specifically, the model first employs LLMs to expand the semantic dimensions of the initial query, generating multiple query representations. Then, by introducing a Gaussian kernel semantic space, it captures deep semantic relationships among these query vectors, refining their semantic distribution to better represent the original query’s intent. Finally, the ColBERTv2 model is utilized to retrieve documents based on the enhanced query representations, enabling precise relevance assessment and improving retrieval performance. To validate the effectiveness of the proposed approach, extensive empirical evaluations were conducted on the MS MARCO passage ranking dataset. The model was systematically assessed using key metrics, including MAP, NDCG@10, MRR@10, and Recall@1000. Experimental results demonstrate that the proposed method outperforms existing approaches across multiple metrics, significantly improving retrieval precision while effectively mitigating query drift, offering a novel approach for building efficient QE mechanisms.
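The general idea behind the abstract's pipeline (weighting LLM-generated query expansions by their Gaussian-kernel similarity to the original query, so that off-topic expansions contribute less and query drift is reduced) can be sketched in simplified form. The code below is a hypothetical illustration, not the authors' implementation: the function names, the single fused query vector, and the fixed `sigma` bandwidth are all assumptions, and the actual paper operates on ColBERTv2's multi-vector query representations rather than a single embedding.

```python
import numpy as np

def gaussian_kernel(u, v, sigma=1.0):
    """RBF (Gaussian) kernel between two embedding vectors."""
    return np.exp(-np.sum((u - v) ** 2) / (2 * sigma ** 2))

def fuse_expansions(query_vec, expansion_vecs, sigma=1.0):
    """Combine LLM-generated expansion embeddings into one query vector.

    Each expansion embedding is weighted by its Gaussian-kernel similarity
    to the original query embedding, so semantically distant expansions
    (potential sources of topic drift) receive small weights.
    """
    weights = np.array(
        [gaussian_kernel(query_vec, v, sigma) for v in expansion_vecs]
    )
    weights = weights / weights.sum()  # normalize weights to sum to 1
    fused = query_vec + (weights[:, None] * expansion_vecs).sum(axis=0)
    return fused / np.linalg.norm(fused)  # unit-normalize for dense retrieval

# Toy example with 2-D embeddings standing in for real encoder outputs:
query = np.array([1.0, 0.0])
expansions = np.array([[0.9, 0.1],   # close to the query intent
                       [0.0, 1.0]])  # off-topic expansion
fused_query = fuse_expansions(query, expansions)
```

The fused vector would then be handed to a dense retriever in place of the raw query embedding; the bandwidth `sigma` controls how aggressively off-topic expansions are down-weighted.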