Applications of Artificial Intelligence and Natural Language Processing

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E1: Mathematics and Computer Science".

Deadline for manuscript submissions: 31 May 2026

Special Issue Editor

Dr. Maaz Amjad
Department of Computer Science, Texas Tech University, Lubbock, TX, USA
Interests: artificial intelligence; AI education; natural language processing

Special Issue Information

Dear Colleagues,

We are pleased to invite you to submit your latest research to this Special Issue entitled “Applications of Artificial Intelligence and Natural Language Processing”. This Special Issue aims to highlight recent advancements in and innovative applications of AI/NLP methodologies—including machine learning, deep learning, and large language models—to address real-world challenges through interdisciplinary and convergent approaches. We invite papers that present original research on related topics, including, but not limited to, the following:

  • Health Informatics: Clinical text mining, predictive analytics, and decision-support systems;
  • Sentiment and Opinion Mining: Fine-grained sentiment analysis, emotion detection, and opinion extraction from social media and user-generated content;
  • Low-Resource and Multilingual NLP: Transfer learning, cross-lingual adaptation, and solutions tailored to underrepresented languages.

We look forward to receiving your valuable contributions that push the boundaries of intelligent language technologies and create a meaningful impact across domains such as education, healthcare, social media analytics, and beyond.

Dr. Maaz Amjad
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • deep learning
  • neural networks
  • natural language processing
  • health informatics
  • AI for social good
  • data for good

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

14 pages, 301 KB  
Article
External Knowledge-Guided Tuning for Critical Error Detection in Machine Translation
by Sugyeong Eo and Chanjun Park
Mathematics 2026, 14(9), 1484; https://doi.org/10.3390/math14091484 - 28 Apr 2026
Abstract
With the advent of large language models (LLMs), significant progress has been made in improving the fluency of machine translation (MT). However, hallucination remains a persistent challenge to translation accuracy, making Critical Error Detection (CED) increasingly important. In this paper, we introduce a simple yet effective approach, termed external knowledge-guided tuning, for the CED task. We focus on sentence-level CED, formulated as a binary classification task that determines whether an MT output contains critical errors. Although the task is binary, the data consist of diverse error cases, including issues related to toxicity, safety, named entities, sentiment, and numerical information, which may manifest as hallucination, mistranslation, or deletion. Our approach restructures model inputs in a cloze-style format and incorporates auxiliary descriptions, casting CED within a masked language modeling framework. By integrating additional contextual signals, including demonstration examples and outputs from commercial systems, our method guides the model to acquire task-specific knowledge and compare alternative MT outputs. Experimental results demonstrate the effectiveness of our approach, achieving state-of-the-art (SOTA) performance on the English–Czech language pair and a second-place ranking on English–German. We further provide a comprehensive analysis of the aggregated effects of external knowledge and examine the contribution of each component within the proposed framework. Our proposed method enables the model to internalize task-relevant knowledge through parameter updates within a prompt-based formulation, providing a principled way to incorporate external knowledge into CED and enhancing the model’s ability to identify critical errors in practice.
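The cloze-style input restructuring the abstract describes can be sketched as follows. The template, the `[MASK]` placement, the demonstration format, and the yes/no verbalizer below are hypothetical illustrations of the general technique, not the authors' actual prompt design:

```python
def build_cloze_input(source, translation, demonstrations=(), mask_token="[MASK]"):
    """Restructure a source/MT pair into a cloze-style prompt for a masked
    language model. Demonstration examples (optional) are prepended as
    additional contextual signals; the model fills the mask, and a
    verbalizer maps the predicted token back to a binary CED label.
    Template and wording are hypothetical, not the paper's specification."""
    parts = []
    for demo_src, demo_mt, demo_label in demonstrations:
        parts.append(
            f"Source: {demo_src} Translation: {demo_mt} "
            f"The translation contains a critical error: {demo_label}."
        )
    parts.append(
        f"Source: {source} Translation: {translation} "
        f"The translation contains a critical error: {mask_token}."
    )
    return "\n".join(parts)

# Hypothetical verbalizer: maps the token predicted at [MASK] to a label.
VERBALIZER = {"yes": 1, "no": 0}

prompt = build_cloze_input(
    "Der Vertrag endet am 1. Mai.",
    "The contract begins on May 1.",  # mistranslation: a critical error case
    demonstrations=[("Guten Morgen.", "Good morning.", "no")],
)
print(prompt)
```

Feeding such a prompt to a masked language model and scoring the candidate fill tokens turns binary classification into the model's native fill-in-the-blank objective, which is what lets tuning update parameters within a prompt-based formulation.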

21 pages, 1627 KB  
Article
EGTJ: An Unsupervised and Non-Parametric Approach for Efficient Text Classification Under Resource-Limited Environments
by Haifeng Lv and Yong Ding
Mathematics 2026, 14(5), 801; https://doi.org/10.3390/math14050801 - 27 Feb 2026
Abstract
Deep neural networks (DNNs) dominate text classification but suffer from high computational costs and poor generalization in data-scarce or out-of-distribution (OOD) environments. Conversely, non-parametric methods such as compression-based classifiers offer robustness but incur prohibitive inference latency due to their reliance on exhaustive pairwise comparisons. To bridge this gap, this study proposes EGTJ, a training-free framework that introduces a novel retrieval-augmented compression architecture. Unlike prior work that applies similarity metrics in isolation, EGTJ uses an inverted-index pre-filtering mechanism to dynamically constrain the comparison scope, effectively reducing algorithmic complexity from linear to constant time relative to the training set size. Furthermore, a tri-metric fusion strategy integrates information-theoretic (gzip), lexical (TF-IDF), and structural (Jaccard) similarities to mitigate the inherent biases of individual metrics. Experimental results across five in-distribution and four OOD datasets demonstrate that EGTJ achieves superior accuracy over all baseline methods, notably outperforming BERT by over 30% in 5-shot OOD scenarios, while slashing inference latency by orders of magnitude compared with standard compression-based approaches. These findings establish EGTJ as a scalable, high-performance alternative for resource-constrained NLP that resolves the scalability bottleneck of non-parametric classification.
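The ingredients the abstract names (inverted-index pre-filtering plus gzip, TF-IDF, and Jaccard fusion) can be sketched in plain Python. Whitespace tokenization, equal fusion weights, and the 1-nearest-neighbor decision rule are simplifying assumptions here, not EGTJ's actual configuration:

```python
import gzip
import math
from collections import Counter

def ncd(x, y):
    """Normalized compression distance via gzip (information-theoretic signal)."""
    cx = len(gzip.compress(x.encode()))
    cy = len(gzip.compress(y.encode()))
    cxy = len(gzip.compress((x + " " + y).encode()))
    return (cxy - min(cx, cy)) / max(cx, cy)

def jaccard(x, y):
    """Token-set overlap (structural signal)."""
    a, b = set(x.split()), set(y.split())
    return len(a & b) / len(a | b) if a | b else 0.0

def tfidf_cosine(x, y, corpus):
    """Cosine similarity of TF-IDF vectors over a small corpus (lexical signal)."""
    n = len(corpus)
    df = Counter(w for doc in corpus for w in set(doc.split()))
    def vec(doc):
        tf = Counter(doc.split())
        return {w: c * math.log((1 + n) / (1 + df[w])) for w, c in tf.items()}
    vx, vy = vec(x), vec(y)
    dot = sum(vx[w] * vy.get(w, 0.0) for w in vx)
    nx = math.sqrt(sum(v * v for v in vx.values()))
    ny = math.sqrt(sum(v * v for v in vy.values()))
    return dot / (nx * ny) if nx and ny else 0.0

def fused_similarity(x, y, corpus, weights=(1.0, 1.0, 1.0)):
    """Weighted average of the three signals; 1 - NCD turns distance into similarity."""
    w1, w2, w3 = weights
    s = w1 * (1 - ncd(x, y)) + w2 * tfidf_cosine(x, y, corpus) + w3 * jaccard(x, y)
    return s / (w1 + w2 + w3)

def prefilter(query, labeled):
    """Inverted-index pre-filtering: compare only against training texts that
    share at least one token with the query, shrinking the comparison scope."""
    index = {}
    for i, (text, _) in enumerate(labeled):
        for w in set(text.split()):
            index.setdefault(w, set()).add(i)
    cand = set().union(*(index.get(w, set()) for w in query.split()))
    cand = cand or set(range(len(labeled)))  # fall back to the full set
    return [labeled[i] for i in sorted(cand)]

def classify(query, labeled, corpus):
    """Training-free, non-parametric decision: label of the most similar
    pre-filtered training text under the fused metric."""
    candidates = prefilter(query, labeled)
    best = max(candidates, key=lambda pair: fused_similarity(query, pair[0], corpus))
    return best[1]

train = [("the stock market rallied today", "finance"),
         ("the team won the championship game", "sports")]
corpus = [text for text, _ in train]
print(classify("stock prices rallied on the market", train, corpus))  # finance
```

Fusing the three metrics is what mitigates each one's bias: gzip alone over-rewards repeated substrings, TF-IDF alone ignores word order and structure, and Jaccard alone treats all tokens as equally informative.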
