Natural Language Processing (NLP) and Large Language Modelling (2nd Edition)

A special issue of Computers (ISSN 2073-431X). This special issue belongs to the section "AI-Driven Innovations".

Deadline for manuscript submissions: 30 April 2026 | Viewed by 433

Special Issue Editor


Guest Editor
School of Information Technology, Faculty of Science, Engineering and Built Environment, Geelong Waurn Ponds Campus, Deakin University, Geelong, VIC 3216, Australia
Interests: natural language processing; small efficient language modelling; continual learning; text generation; adversarial learning; scientific text mining; multimodality; conversational systems

Special Issue Information

Dear Colleagues,

Natural Language Processing is a rapidly evolving field that plays a crucial role in shaping the future of human–computer interaction, with applications ranging from sentiment analysis and machine translation to question answering and dialogue systems.

We invite researchers, practitioners, and enthusiasts to submit original research articles, reviews, and case studies that contribute to the advancement of NLP. We also welcome extended conference papers that comprise at least 50% of original material, e.g., in the form of technical extensions, more in-depth evaluations, or additional use cases. Topics of interest for this Special Issue include, but are not limited to, the following:

  • Large language modelling and its applications;
  • Sentiment analysis and opinion mining;
  • Machine translation and multilingual processing;
  • Question-answering and information retrieval;
  • Dialogue systems and conversational agents;
  • Text summarization and generation;
  • Natural language understanding and generation;
  • NLP applications in healthcare, finance, education, and other domains.

Submissions should present novel research findings, innovative methodologies, and practical applications that demonstrate the current state of the art in NLP. We welcome interdisciplinary approaches and encourage submissions that explore the intersection of NLP and other fields, such as machine learning, artificial intelligence, and cognitive science.

Dr. Ming Liu
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • natural language processing
  • small efficient language modelling
  • continual learning
  • text generation
  • adversarial learning
  • scientific text mining
  • multimodality
  • conversational systems
  • large language model

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)

Research

21 pages, 356 KB  
Article
Integrating Large Language Models with Near Real-Time Web Crawling for Enhanced Job Recommendation Systems
by David Gauhl, Kevin Kakkanattu, Melbin Mukkattu and Thomas Hanne
Computers 2025, 14(9), 387; https://doi.org/10.3390/computers14090387 - 15 Sep 2025
Viewed by 137
Abstract
This study addresses the limitations of traditional job recommendation systems that rely on static datasets, which make them less responsive to dynamic job-market changes. While existing job platforms approach job search with opaque logic that follows their own business goals, job seekers may benefit from a solution that actively and dynamically crawls and evaluates job offers from a variety of sites according to the seekers' objectives. To address this gap, a hybrid system was developed that integrates large language models (LLMs) for semantic analysis with near real-time data acquisition through web crawling. The system extracts and ranks job-specific keywords from user inputs, such as resumes, while dynamically retrieving job listings from online platforms. User evaluations indicated strong performance in keyword extraction and system usability but revealed challenges in web-crawler performance that affected recommendation accuracy. Compared with a state-of-the-art commercial tool, user tests indicated lower accuracy for the prototype but higher satisfaction with its functionality, and test users highlighted its potential for further development. The results highlight the benefits of combining LLMs and web crawling while emphasizing the need for improved near real-time data handling to enhance recommendation precision and user satisfaction.
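The matching step sketched in this abstract, ranking crawled listings against keywords extracted from a resume, might look roughly like the following. This is an illustrative sketch, not the paper's implementation: the function names, keyword weights, and listing fields are all hypothetical.

```python
def rank_jobs(resume_keywords, job_listings):
    """Rank crawled job listings by weighted overlap with resume keywords.

    resume_keywords: dict mapping a keyword to its importance weight
                     (in the paper, keywords are extracted and ranked by an LLM).
    job_listings:    list of dicts with "title" and "description" fields,
                     standing in for near real-time crawled postings.
    """
    ranked = []
    for job in job_listings:
        text = job["description"].lower()
        # Sum the weights of every resume keyword found in the posting.
        score = sum(w for kw, w in resume_keywords.items() if kw in text)
        ranked.append((score, job["title"]))
    ranked.sort(reverse=True)  # highest-scoring listings first
    return [title for _, title in ranked]

# Toy inputs; a real system would obtain both from an LLM and a crawler.
keywords = {"python": 3.0, "nlp": 2.5, "crawling": 1.0}
listings = [
    {"title": "NLP Engineer", "description": "Build NLP pipelines in Python."},
    {"title": "Accountant", "description": "Prepare financial statements."},
]
print(rank_jobs(keywords, listings))  # NLP role ranks first
```

A production version would of course use semantic matching rather than substring tests, which is precisely where the paper's LLM component comes in.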

18 pages, 4208 KB  
Article
Transformer Models for Paraphrase Detection: A Comprehensive Semantic Similarity Study
by Dianeliz Ortiz Martes, Evan Gunderson, Caitlin Neuman and Nezamoddin N. Kachouie
Computers 2025, 14(9), 385; https://doi.org/10.3390/computers14090385 - 14 Sep 2025
Viewed by 191
Abstract
Semantic similarity, the task of determining whether two sentences convey the same meaning, is central to applications such as paraphrase detection, semantic search, and question answering. Despite the widespread adoption of transformer-based models for this task, their performance is influenced by both the choice of similarity measure and the underlying embedding model. This study evaluated BERT (bert-base-nli-mean-tokens), RoBERTa (all-roberta-large-v1), and MPNet (all-mpnet-base-v2) on the Microsoft Research Paraphrase Corpus (MRPC). Sentence embeddings were compared using cosine similarity, dot product, Manhattan distance, and Euclidean distance, with thresholds optimized for accuracy, balanced accuracy, and F1-score. Results indicate a consistent advantage for MPNet, which achieved the highest accuracy (75.6%), balanced accuracy (71.0%), and F1-score (0.836) when paired with cosine similarity at an optimized threshold of 0.671. BERT and RoBERTa performed competitively but exhibited greater sensitivity to the choice of similarity metric, with BERT notably underperforming when using cosine similarity compared with Manhattan or Euclidean distance. Optimal thresholds varied widely (0.334–0.867), underscoring the difficulty of establishing a single, generalizable cut-off for paraphrase classification. These findings highlight the value of tuning both similarity metrics and thresholds alongside model selection, offering practical guidance for designing high-accuracy semantic similarity systems in real-world NLP applications.
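The decision rule evaluated in this abstract, comparing sentence embeddings by cosine similarity against a tuned threshold, can be sketched in a few lines. The 0.671 threshold is the MPNet value reported above; the three-dimensional vectors are toy placeholders for real model embeddings.

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def is_paraphrase(emb_a, emb_b, threshold=0.671):
    # Classify a sentence pair as a paraphrase when the cosine
    # similarity of its embeddings meets the tuned threshold.
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy "embeddings" standing in for sentence-transformer outputs.
near = [0.9, 0.1, 0.2]
far = [0.1, 0.9, -0.3]
print(is_paraphrase(near, near))  # identical vectors: True
print(is_paraphrase(near, far))
```

As the abstract notes, the optimal threshold varies widely across models and metrics, so in practice it must be re-tuned on a validation split rather than reused as a fixed constant.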
