Advanced Mathematical Methods for Machine Learning, Neural Networks, and Computer Vision

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E1: Mathematics and Computer Science".

Deadline for manuscript submissions: 14 June 2026 | Viewed by 322

Special Issue Editor


Prof. Dr. Guodong Li
Guest Editor
School of Mathematics and Computational Science, Guilin University of Electronic Technology, Guilin 541002, China
Interests: statistical analysis; data mining; machine learning; chaos theory; information security; image processing

Special Issue Information

Dear Colleagues,

The integration of mathematical methods into machine learning, neural networks, and computer vision has become increasingly vital as AI applications expand across industries. These methods provide the theoretical foundation for building reliable, interpretable, and efficient AI systems. Recent advances in geometric deep learning, statistical learning theory, and sparse representation have significantly enhanced the performance of AI systems, yet many theoretical challenges remain to be addressed.

We are pleased to invite you to contribute to this Special Issue, which focuses on the application and theoretical underpinnings of mathematical methods in machine learning, neural networks, and computer vision. This Issue will highlight how mathematical theories drive innovative breakthroughs in AI algorithms and promote the practical application of AI technologies.

This Special Issue aims to collect original research articles and reviews exploring the application of mathematical methods in the aforementioned fields. We particularly encourage submissions on geometric deep learning, sparse representation, optimization algorithms, statistical learning theory, and generative models, as well as related topics such as sparse models, low-rank structures, stochastic algorithms, and explainability analysis. These studies should provide theoretical support for AI systems and facilitate their application in practical tasks.

We look forward to receiving your contributions.

Prof. Dr. Guodong Li
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • mathematical optimization
  • geometric deep learning
  • neural network analysis
  • robust AI design
  • machine learning theory
  • computer vision modeling

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the journal's website.

Published Papers (1 paper)


Research

18 pages, 1759 KB  
Article
VLGA: A Chaos-Enhanced Genetic Algorithm for Optimizing Transformer-Based Prediction of Infectious Diseases
by Guodong Li, Lu Zhang, Fuxin Zhang and Wenxia Xu
Mathematics 2025, 13(24), 3908; https://doi.org/10.3390/math13243908 - 6 Dec 2025
Viewed by 192
Abstract
Accurate and generalizable prediction of infectious disease incidence is essential for proactive public health response. This study proposes a novel hybrid VLGA-Transformer model to address this challenge, validated through tuberculosis (TB) and hepatitis B case studies. Utilizing monthly TB data from Zhejiang Province (2013–2023), raw sequences were first decomposed via Variational Mode Decomposition (VMD) to extract intrinsic temporal patterns. To overcome Transformer parameter optimization difficulties, we innovatively integrated the Lorenz attractor into a Genetic Algorithm (GA), creating a Lorenz-attractor-enhanced GA (LGA) that dynamically balances exploration and exploitation. The resulting VLGA-Transformer framework demonstrated superior performance, achieving R² values of 0.96 for TB and 0.93 for hepatitis B prediction, significantly outperforming benchmark models in both accuracy and stability. When tested on hepatitis B data, the model confirmed its robust cross-disease generalizability. These findings highlight the framework's dual strengths of high-precision forecasting and robust generalization, providing actionable insights for public health authorities to optimize resource allocation and intervention strategies, thereby advancing data-driven infectious disease control systems.
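For readers unfamiliar with chaos-enhanced evolutionary search, the sketch below illustrates the general idea of driving a genetic algorithm's mutation rate with a Lorenz-attractor sequence. It is a minimal, hypothetical Python illustration, not the paper's LGA-Transformer: the function names (lorenz_sequence, chaos_enhanced_ga), the fitness function, the parameter ranges, and the coupling between the chaotic sequence and the mutation rate are assumptions introduced purely for demonstration.

# Hypothetical sketch: a Lorenz-attractor sequence modulating GA mutation rates.
# All names, constants, and the toy fitness function are illustrative only and
# do not reproduce the VLGA-Transformer method described in the abstract.
import numpy as np

def lorenz_sequence(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler-integrate the Lorenz system and return n x-values scaled to [0, 1]."""
    x, y, z = 1.0, 1.0, 1.0
    xs = np.empty(n)
    for i in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[i] = x
    return (xs - xs.min()) / (xs.max() - xs.min())

def chaos_enhanced_ga(fitness, dim, pop_size=30, generations=100):
    """Minimal real-coded GA (minimization) whose mutation rate follows a chaotic sequence."""
    rng = np.random.default_rng(0)
    pop = rng.uniform(0.0, 1.0, size=(pop_size, dim))
    chaos = lorenz_sequence(generations)
    for g in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[: pop_size // 2]]        # keep the best half
        children = elite[rng.integers(0, len(elite), pop_size - len(elite))].copy()
        mut_rate = 0.05 + 0.25 * chaos[g]                       # chaotic exploration/exploitation balance
        mask = rng.random(children.shape) < mut_rate
        children[mask] += rng.normal(0.0, 0.1, size=mask.sum()) # Gaussian mutation
        pop = np.vstack([elite, np.clip(children, 0.0, 1.0)])
    return pop[np.argmin([fitness(ind) for ind in pop])]

# Example usage: minimize the sphere function as a stand-in for a validation loss.
print(chaos_enhanced_ga(lambda v: float(np.sum(v ** 2)), dim=4))

In practice, the same chaotic schedule could be applied to crossover probability or selection pressure; the intent of the illustration is only to show how a deterministic chaotic signal can replace a fixed or purely random hyperparameter schedule in an evolutionary optimizer.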
