Applied Mathematics, Computing, and Machine Learning

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E1: Mathematics and Computer Science".

Deadline for manuscript submissions: 31 May 2026 | Viewed by 2744

Special Issue Editor


Dr. Zhongmei Yao
Guest Editor
Department of Computer Science, University of Dayton, Dayton, OH 45469, USA
Interests: artificial intelligence; cybersecurity; privacy

Special Issue Information

Dear Colleagues,

The fields of artificial intelligence (AI) and machine learning (ML) are experiencing rapid growth and will fundamentally transform industries, research, education, and our daily lives. Nevertheless, current AI technologies remain far from satisfactory and still lack many of the brain’s capabilities. This Special Issue, entitled “Applied Mathematics, Computing, and Machine Learning”, calls for papers in the following areas:

  • Optimization Algorithms for Machine Learning (e.g., novel gradient descent methods and constrained optimization).
  • Numerical Linear Algebra in AI (e.g., tensor decomposition and sparse methods).
  • Probability, Statistics, Stochastic Processes, and Control Theory in ML (e.g., time series analysis and reinforcement learning theory).
  • High-Performance Computing for AI (e.g., parallel and distributed machine learning, GPU acceleration, and specialized hardware).
  • Quantum Computing for Machine Learning (e.g., quantum algorithms for AI tasks and quantum machine learning architectures).
  • Uncertainty Quantification in AI (e.g., probabilistic AI and error analysis in machine learning models).
  • Ethical AI and Explainable AI (XAI) (e.g., mathematical and computational approaches to fairness, transparency, and accountability).
  • Applied Machine Learning in Science and Engineering (e.g., applications in fields such as healthcare, finance, mechanical engineering, materials science, environmental modeling, etc.).
  • AI for Scientific Discovery (e.g., using AI to accelerate research in mathematics, physics, chemistry, and biology).
  • Computational Neuroscience and AI (e.g., bridging the gap between biological and artificial intelligence).

Dr. Zhongmei Yao
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • applied mathematics
  • machine learning
  • artificial intelligence
  • optimization algorithms
  • uncertainty quantification
  • ethical AI
  • explainable AI
  • computational neuroscience
  • quantum machine learning
  • error analysis
  • transparency of AI
  • fairness of AI
  • accountability of AI
  • applied machine learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (3 papers)


Research

25 pages, 389 KB  
Article
FedQuAD: Fast-Converging Curvature-Aware Federated Learning for Credit Default Prediction from Private Accounting Data
by Dingwen Bai, MuGa WaEr and Qichun Wu
Mathematics 2026, 14(6), 1012; https://doi.org/10.3390/math14061012 - 17 Mar 2026
Viewed by 397
Abstract
Credit default prediction from firm-level accounting statements is central to risk management, yet the underlying financial data are highly sensitive and often siloed across banks, auditors, and platforms. Federated learning (FL) offers a practical route to collaborative modeling without centralizing raw records, but standard FL optimization can converge slowly under severe client heterogeneity, heavy-tailed accounting features, and label imbalance typical of default events. This paper proposes FedQuAD, a novel fast-converging FL algorithm that couples (i) quasi-Newton curvature aggregation on the server with a lightweight limited-memory update to accelerate global progress, (ii) a proximal variance-reduced local solver that stabilizes client drift under non-IID accounting distributions, and (iii) federated robust standardization of tabular financial ratios via secure aggregated quantile statistics to mitigate scale instability and outliers. FedQuAD is communication-efficient by design: It transmits compact gradient and curvature sketches and adapts local computation to each client’s stochasticity and drift. We provide convergence guarantees for strongly convex default-risk objectives (logistic and calibrated GLM losses) under bounded heterogeneity, and extend the analysis to nonconvex deep tabular models via expected stationarity bounds. Experiments on public credit-risk benchmarks with simulated cross-silo (institutional) partitions demonstrate that FedQuAD reaches target AUC and calibration error with substantially fewer communication rounds than representative baselines while maintaining privacy constraints compatible with secure aggregation and optional client-level differential privacy accounting.
(This article belongs to the Special Issue Applied Mathematics, Computing, and Machine Learning)
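The abstract sketches a server-side quasi-Newton aggregation scheme. As a rough illustration of that idea, the toy below runs an L-BFGS-style two-loop recursion over gradients averaged from simulated clients on a synthetic logistic-regression task. The synthetic data, the two-client split, the fixed step size, and the memory size m = 5 are all illustrative assumptions, not the paper's actual FedQuAD algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 0.5 * (1.0 + np.tanh(0.5 * z))  # numerically stable sigmoid

def client_grad(w, X, y):
    """Averaged logistic-loss gradient on one client's local shard."""
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def loss(w, X, y):
    z = X @ w
    return np.mean(np.logaddexp(0.0, z) - y * z)  # stable logistic loss

def two_loop(g, S, Y):
    """L-BFGS two-loop recursion: map gradient g to a curvature-aware direction."""
    q, alphas = g.copy(), []
    for s, yv in zip(reversed(S), reversed(Y)):
        a = (s @ q) / (yv @ s)
        alphas.append(a)
        q = q - a * yv
    gamma = (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1]) if S else 1.0
    r = gamma * q
    for (s, yv), a in zip(zip(S, Y), reversed(alphas)):
        r = r + (a - (yv @ r) / (yv @ s)) * s
    return r

# Two simulated clients holding disjoint shards of a synthetic binary dataset.
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (X @ w_true + 0.1 * rng.normal(size=200) > 0).astype(float)
shards = [(X[:100], y[:100]), (X[100:], y[100:])]

w = np.zeros(5)
S, Y, w_prev, g_prev = [], [], None, None
history = []
for _ in range(30):
    # Server averages client gradients (a stand-in for secure aggregation).
    g = np.mean([client_grad(w, Xi, yi) for Xi, yi in shards], axis=0)
    if g_prev is not None and (w - w_prev) @ (g - g_prev) > 1e-12:
        S.append(w - w_prev)
        Y.append(g - g_prev)
        S, Y = S[-5:], Y[-5:]          # limited memory (m = 5)
    d = two_loop(g, S, Y)
    history.append(np.mean([loss(w, Xi, yi) for Xi, yi in shards]))
    w_prev, g_prev = w, g
    w = w - 0.5 * d                    # fixed server step size
```

The design point the sketch makes concrete: clients ship only gradients, and the curvature pairs (S, Y) are maintained entirely on the server, so local computation stays cheap while global steps become curvature-aware.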

24 pages, 14077 KB  
Article
Efficient and Interpretable Machine Learning for Student Academic Outcome Prediction
by Hongwen Gu and Yuqi Zhang
Mathematics 2026, 14(4), 626; https://doi.org/10.3390/math14040626 - 11 Feb 2026
Viewed by 622
Abstract
Understanding and preventing student dropout presents a decision-critical modeling problem involving heterogeneous variables, nonlinear relationships, and the need for transparent inference. This study addresses the prediction of undergraduate academic outcomes, including Graduation, Enrolled, and Dropout, by proposing an efficient and interpretable machine learning framework that explicitly balances predictive performance, feature efficiency, and algorithmic explainability. The empirical analysis relies on a dataset of 4424 student records across 17 undergraduate programs from the Polytechnic Institute of Portalegre, Portugal. In contrast to existing approaches that rely on high-dimensional input spaces and opaque predictive architectures, we develop a reduced-dimensional classification pipeline based on recursive feature elimination with Gradient Boosting and Random Forest models. Starting from a comprehensive set of demographic, academic, and financial indicators, only 20 informative predictors are retained for model construction, substantially reducing input complexity while preserving predictive capacity. Comparative evaluation across multiple learning algorithms identifies Gradient Boosting as the most effective model, achieving an AUC of 0.891. Beyond predictive accuracy, the proposed framework emphasizes model interpretability through the integration of SHapley Additive exPlanations (SHAP), enabling quantitative attribution of feature contributions at both global and instance levels. The analysis reveals that second-semester academic engagement variables—including the number of courses approved, evaluated, and enrolled—as well as tuition fee payment status and age at enrollment, are the dominant factors shaping student outcomes. Overall, the results demonstrate that strong classification performance can be achieved using a compact feature set while maintaining transparent and explainable model behavior. By combining mathematically grounded feature selection with principled model explanation, this study advances methodological understanding of how efficiency, interpretability, and predictive accuracy can be jointly optimized in applied machine learning, with implications for decision-support systems in educational analytics.
(This article belongs to the Special Issue Applied Mathematics, Computing, and Machine Learning)
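The pipeline the abstract describes, recursive feature elimination down to 20 predictors followed by a gradient-boosting classifier, can be sketched in scikit-learn. The sketch below uses a synthetic binary dataset as a stand-in for the Portalegre student records (the paper models three classes), and it omits the SHAP attribution step (which would use `shap.TreeExplainer` on the fitted model) to keep the dependencies light; none of these choices reflect the paper's exact configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFE
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the student-records table (binary outcome for
# brevity; the paper predicts Graduation / Enrolled / Dropout).
X, y = make_classification(n_samples=1000, n_features=36, n_informative=12,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Recursive feature elimination with a gradient-boosting ranker: drop the
# least important features in steps until 20 predictors remain, mirroring
# the paper's reduced-dimensional pipeline.
selector = RFE(GradientBoostingClassifier(random_state=0),
               n_features_to_select=20, step=4)
selector.fit(X_tr, y_tr)

# Refit the final classifier on the retained features and score by AUC.
clf = GradientBoostingClassifier(random_state=0)
clf.fit(selector.transform(X_tr), y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(selector.transform(X_te))[:, 1])
```

RFE ranks features by the booster's own `feature_importances_`, so the selection criterion and the final model agree on what "informative" means, which is part of why a compact 20-feature set can preserve predictive capacity.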

18 pages, 1825 KB  
Article
Fast Deep Belief Propagation: An Efficient Learning-Based Algorithm for Solving Constraint Optimization Problems
by Shufeng Kong, Feifan Chen, Zijie Wang and Caihua Liu
Mathematics 2025, 13(20), 3349; https://doi.org/10.3390/math13203349 - 21 Oct 2025
Viewed by 1305
Abstract
Belief Propagation (BP) is a fundamental heuristic for solving Constraint Optimization Problems (COPs), yet its practical applicability is constrained by slow convergence and instability in loopy factor graphs. While Damped BP (DBP) improves convergence by using manually tuned damping factors, its reliance on labor-intensive hyperparameter optimization limits scalability. Deep Attentive BP (DABP) addresses this by automating damping through recurrent neural networks (RNNs), but introduces significant memory overhead and sequential computation bottlenecks. To reduce memory usage and accelerate deep belief propagation, this paper introduces Fast Deep Belief Propagation (FDBP), a deep learning framework that improves COP solving through online self-supervised learning and graphics processing unit (GPU) acceleration. FDBP decouples the learning of damping factors from BP message passing, inferring all parameters for an entire BP iteration in a single step, and leverages mixed precision to further optimize GPU memory usage. This approach substantially improves both the efficiency and scalability of BP optimization. Extensive evaluations on synthetic and real-world benchmarks highlight the superiority of FDBP, especially for large-scale instances where DABP fails due to memory constraints. Moreover, FDBP achieves an average speedup of 2.87× over DABP with the same restart counts. Because BP for COPs is a mathematically grounded GPU-parallel message-passing framework that bridges applied mathematics, computing, and machine learning, and is widely applicable across science and engineering, our work offers a promising step toward more efficient solutions to these problems.
(This article belongs to the Special Issue Applied Mathematics, Computing, and Machine Learning)
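FDBP's contribution is learning the damping factors that classic DBP hand-tunes. To make the damping update itself concrete, the toy below runs damped min-sum message passing on a three-variable chain COP and checks the decoded assignment against brute force. The chain topology (a tree, where min-sum is exact), the random cost tables, and the fixed damping factor λ = 0.5 are illustrative choices standing in for the learned, per-iteration factors of the paper.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# A tiny COP: chain x0 - x1 - x2, binary domains, random unary/pairwise costs.
U = rng.uniform(0.0, 1.0, size=(3, 2))            # unary costs U[i][x_i]
P = {(0, 1): rng.uniform(0.0, 1.0, size=(2, 2)),  # pairwise cost tables
     (1, 2): rng.uniform(0.0, 1.0, size=(2, 2))}
nbrs = {0: [1], 1: [0, 2], 2: [1]}

def pair_cost(i, j):
    return P[(i, j)] if (i, j) in P else P[(j, i)].T

def new_message(m, i, j):
    """Min-sum message from x_i to x_j: minimize local cost plus incoming."""
    inc = U[i] + sum(m[(k, i)] for k in nbrs[i] if k != j)
    return np.min(inc[:, None] + pair_cost(i, j), axis=0)

lam = 0.5  # fixed damping factor (FDBP would infer these per iteration)
m = {(i, j): np.zeros(2) for i in nbrs for j in nbrs[i]}
for _ in range(60):
    # Damped update: convex combination of old and freshly computed messages.
    m = {e: lam * m[e] + (1.0 - lam) * new_message(m, *e) for e in m}

# Decode by taking each variable's belief minimizer.
x_hat = [int(np.argmin(U[i] + sum(m[(k, i)] for k in nbrs[i]))) for i in nbrs]

def total_cost(x):
    return sum(U[i][x[i]] for i in range(3)) + \
           sum(P[e][x[e[0]], x[e[1]]] for e in P)

best = min(total_cost(x) for x in itertools.product((0, 1), repeat=3))
```

Note that the damped update above recomputes every message from the previous iterate in one vectorized sweep; that same decoupling of message computation from the damping schedule is what lets FDBP infer a full iteration's parameters in a single step on the GPU.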
