Industrial Improvement with AI in Applied Mathematics

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E1: Mathematics and Computer Science".

Deadline for manuscript submissions: 30 July 2026

Special Issue Editor


Guest Editor
Institute of Interdisciplinary Studies, Hunan Normal University, Changsha 410081, China
Interests: computer vision; artificial intelligence; intelligent system; large language model

Special Issue Information

Dear Colleagues,

The intelligent industry, also called Industry 4.0, has developed rapidly in recent years, demonstrating significant general capabilities across various domains as AI has expanded. For example, Generative AI (GAI) has become a research hotspot for question answering (QA) over domain knowledge in industry, since AI has shown strong speech and action recognition abilities pertaining to industrial behavior. However, issues of response trustworthiness and understanding, caused by cognitive biases and knowledge deficiencies, hinder its widespread application.

Thus, this Special Issue is proposed to explore how applied mathematics, including probability theory, mathematical statistics, combinatorics, and set theory, can enhance the stability of AI responses and mitigate cognitive hallucinations, thereby broadening the potential future applications of the intelligent industry.

Therefore, this Special Issue aims to provide an opportunity for researchers to publish their theoretical and technological studies with AI in applied mathematics, as well as novel engineering applications for the intelligent industry.

Prof. Dr. Shuai Liu
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent industry
  • mathematical analysis
  • AI
  • cognitive hallucination
  • knowledge deficiency

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

39 pages, 2921 KB  
Article
Reasoning-Enhanced Query–Service Matching: A Large Language Model Approach with Adaptive Scoring and Diversity Optimization
by Yue Xiang, Jing Lu, Jinqian Wei and Yaowen Hu
Mathematics 2026, 14(6), 950; https://doi.org/10.3390/math14060950 - 11 Mar 2026
Abstract
Query–service matching in customer service systems faces a critical challenge of accurately aligning user queries expressed in colloquial language with formally defined services while balancing business objectives. Traditional keyword-based and embedding approaches fail to capture complex semantic nuances and cannot provide interpretable explanations. We address this problem by proposing a novel reasoning-enhanced framework that leverages large language models (LLMs) for structured multi-criteria evaluation. Our key innovation is a reasoning-first scoring architecture where the model generates detailed explanations before numerical scores, reducing score variance by 18% through conditional mutual information. We introduce a controlled stochastic perturbation mechanism with theoretically derived optimal parameters that balance diversity and relevance, alongside a knowledge distillation pipeline enabling 960× model compression (480B→0.5B parameters) while retaining 94% performance. Rigorous theoretical analysis establishes Pareto optimality guarantees for multi-criteria evaluation, information-theoretic entropy reduction bounds, and PAC learning guarantees for distillation. Experimental validation on real-world telecommunications data demonstrates 89% Precision@1 (15.3% improvement over baselines), 23% diversity enhancement, and 96× latency reduction, with deployment cost decreasing 1200× compared to direct LLM inference. This work bridges the gap between LLM capabilities and production deployment requirements through principled mathematical foundations and practical system design.
(This article belongs to the Special Issue Industrial Improvement with AI in Applied Mathematics)
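The reasoning-first scoring architecture described in the abstract, in which the model must generate its explanation before emitting a numerical score, might be sketched as follows. The prompt template, the `SCORE:` output format, and the parser are illustrative assumptions for this listing, not the authors' implementation:

```python
import re

# Hypothetical prompt template: the model is instructed to reason first
# and only then emit a score line, per the reasoning-first architecture.
PROMPT = (
    "Query: {query}\n"
    "Service: {service}\n"
    "First explain, step by step, how well the service matches the query.\n"
    "Then end your answer with a line of the form 'SCORE: <0-10>'."
)


def parse_reasoned_score(response: str):
    """Split a reasoning-first response into (reasoning, score).

    Returns None for the score when the model failed to emit one,
    so callers can retry rather than trust an unexplained number.
    """
    text = response.strip()
    match = re.search(r"SCORE:\s*(\d+(?:\.\d+)?)\s*$", text)
    if not match:
        return text, None
    return text[: match.start()].strip(), float(match.group(1))
```

Keeping the score on a trailing, machine-checkable line lets the explanation condition the score (the variance-reduction effect the abstract reports) while the numeric value stays trivial to extract.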

18 pages, 1927 KB  
Article
Utility-Based Preference Training for Effective Synthetic Text Classification
by Jiho Gwak and Yuchul Jung
Mathematics 2026, 14(3), 507; https://doi.org/10.3390/math14030507 - 31 Jan 2026
Abstract
High-quality synthetic text can mitigate annotation scarcity in text classification. However, standard preference optimization often produces samples that are fluent but weakly label-specific. We present Utility-weighted Direct Preference Optimization (U-DPO), a preference-optimization framework for class-conditional synthetic data generation. In U-DPO, a task-specific classifier provides a margin-based external score for each candidate generation, which is combined with an embedding-based internal similarity score to form an overall utility. These utilities are used (i) to mine preference pairs from multiple candidates per class and (ii) to weigh each DPO update by the utility gap between preferred and dispreferred samples. This design encourages the generator to concentrate on learning informative, label-discriminative preference comparisons rather than treating all pairs equally. Across two multiclass scientific-abstract benchmarks (arXiv and WOS-11967), U-DPO consistently improves downstream SciBERT classification accuracy compared with both vanilla synthetic generation and standard DPO fine-tuning, with gains up to 0.88 percentage points on arXiv and 0.83 percentage points on WOS-11967 depending on the generator. An additional GPT-4.5-based evaluation also indicates a higher mean quality score for U-DPO samples with reduced variance.
(This article belongs to the Special Issue Industrial Improvement with AI in Applied Mathematics)
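The utility-weighted mechanism in the U-DPO abstract, combining an external classifier margin with an internal embedding similarity, mining pairs, and scaling each DPO update by the utility gap, can be sketched minimally as below. The mixing weight `alpha`, the linear combination, and the function names are assumptions for illustration, not the authors' formulation:

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def utility(classifier_margin, embed_similarity, alpha=0.5):
    # Overall utility: a convex combination of the classifier's
    # margin-based external score and the embedding-based internal
    # similarity score. alpha is a hypothetical mixing weight.
    return alpha * classifier_margin + (1.0 - alpha) * embed_similarity


def mine_pair(candidates):
    # candidates: list of (policy_log_ratio, utility) tuples for one class.
    # The highest-utility candidate is preferred, the lowest dispreferred.
    pref = max(candidates, key=lambda c: c[1])
    disp = min(candidates, key=lambda c: c[1])
    return pref, disp


def u_dpo_loss(logratio_pref, logratio_disp, u_pref, u_disp, beta=0.1):
    # Standard DPO loss, -log sigmoid(beta * (log-ratio difference)),
    # scaled by the utility gap so that informative, label-discriminative
    # comparisons contribute more to each update.
    gap = max(u_pref - u_disp, 0.0)
    base = -math.log(sigmoid(beta * (logratio_pref - logratio_disp)))
    return gap * base
```

A pair whose preferred and dispreferred samples have equal utility contributes a zero-weight update, which is how this weighting avoids treating all pairs equally.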
