Mathematical Foundations and Advances in Machine Learning and Data Mining

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E1: Mathematics and Computer Science".

Deadline for manuscript submissions: 29 September 2025

Special Issue Editor


Prof. Dr. Kan Li
Guest Editor
School of Computer Science & Technology, Beijing Institute of Technology, Beijing 100081, China
Interests: machine learning; data mining; distributed systems

Special Issue Information

Dear Colleagues,

We are pleased to invite contributions to the Special Issue "Mathematical Foundations and Advances in Machine Learning and Data Mining" in Mathematics. Machine learning and data mining are critical areas in modern computational science, offering powerful tools for extracting knowledge from large and complex datasets. These techniques have transformed various domains, including healthcare, finance, and social sciences, by enabling predictive modeling, anomaly detection, and decision-making automation. The integration of mathematical theories with practical applications in these fields highlights their importance and the need for continued research and innovation.

This Special Issue aims to bring together recent advances and applications of mathematical foundations in machine learning and data mining. It aligns with the journal's scope by focusing on the development and analysis of mathematical models and algorithms that underpin these technologies. We seek to explore both theoretical advancements and practical implementations, fostering a comprehensive understanding of how mathematical principles drive progress in this dynamic field.

In this Special Issue, original research articles and reviews are welcome. Research areas may include (but are not limited to) the following: mathematical foundations of machine learning; algorithm development and optimization; statistical methods in data mining; big data analytics; advances in explainable AI; applications of machine learning in various domains (e.g., healthcare, finance, and engineering); neural networks and deep learning; pattern recognition; data preprocessing and feature selection; clustering and classification techniques; and predictive modeling and analytics.

I look forward to receiving your valuable contributions.

Prof. Dr. Kan Li
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • machine learning
  • data mining
  • explainable AI
  • pattern recognition
  • big data

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (2 papers)


Research

13 pages, 1043 KiB  
Article
LLM-Guided Reinforcement Learning for Interactive Environments
by Fuxue Yang, Jiawen Liu and Kan Li
Mathematics 2025, 13(12), 1932; https://doi.org/10.3390/math13121932 - 10 Jun 2025
Abstract
We propose LLM-Guided Reinforcement Learning (LGRL), a novel framework that leverages large language models (LLMs) to decompose high-level objectives into a sequence of manageable subgoals in interactive environments. Our approach decouples high-level planning from low-level action execution by dynamically generating context-aware subgoals that guide the reinforcement learning (RL) agent. During training, intermediate subgoals, each associated with partial rewards, are generated based on the agent's current progress, providing fine-grained feedback that facilitates structured exploration and accelerates convergence. At inference, a chain-of-thought strategy is employed, enabling the LLM to adaptively update subgoals in response to evolving environmental states. Although demonstrated in a representative interactive setting, our method generalizes to a wide range of complex, goal-oriented tasks. Experimental results show that LGRL achieves higher success rates, improved efficiency, and faster convergence compared with baseline approaches.
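The abstract's training loop (planner proposes subgoals, agent earns partial rewards per completed subgoal) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the planner stub, subgoal names, and reward values are all hypothetical stand-ins for the LLM and environment.

```python
# Sketch of the LGRL reward-shaping loop: a planner (here a stub standing
# in for an LLM) decomposes a goal into subgoals, and the agent collects a
# partial reward for each completed subgoal plus a terminal success bonus.
# All names and values below are illustrative assumptions.

def mock_planner(goal, progress):
    """Stand-in for the LLM: returns the subgoals still remaining."""
    all_subgoals = ["reach_key", "open_door", "reach_goal"]
    return all_subgoals[progress:]

def run_episode(goal, complete_subgoal, partial_reward=0.3):
    """Attempt subgoals in order, accumulating shaped rewards."""
    progress, total_reward = 0, 0.0
    while True:
        subgoals = mock_planner(goal, progress)
        if not subgoals:                      # all subgoals done: success bonus
            return total_reward + 1.0
        if complete_subgoal(subgoals[0]):
            total_reward += partial_reward    # fine-grained intermediate feedback
            progress += 1
        else:
            return total_reward               # failed; keep partial credit

reward = run_episode("exit_room", lambda sg: True)
```

Even on failure the agent keeps the partial credit earned so far, which is the fine-grained feedback the abstract credits with structured exploration and faster convergence.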

13 pages, 345 KiB  
Article
Novel Iterative Reweighted ℓ1 Minimization for Sparse Recovery
by Qi An, Li Wang and Nana Zhang
Mathematics 2025, 13(8), 1219; https://doi.org/10.3390/math13081219 - 8 Apr 2025
Abstract
Data acquisition and high-dimensional signal processing often require the recovery of sparse representations of signals to minimize the resources needed for data collection. ℓp quasi-norm minimization excels at exactly reconstructing sparse signals from fewer measurements, but it is NP-hard and challenging to solve. In this paper, we propose two distinct Iteratively Reweighted ℓ1 Minimization (IRℓ1) formulations for solving this non-convex sparse recovery problem by introducing two novel reweighting strategies. These strategies ensure that the ϵ-regularizations adjust dynamically based on the magnitudes of the solution components, leading to more effective approximations of the non-convex sparsity penalty. The resulting IRℓ1 formulations provide first-order approximations of tighter surrogates for the original ℓp quasi-norm objective. We prove that both algorithms converge to the true sparse solution under appropriate conditions on the sensing matrix. Our numerical experiments demonstrate that the proposed IRℓ1 algorithms outperform the conventional approach in recovery success rate and computational efficiency, especially for small values of p.
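The general IRℓ1 scheme the abstract builds on can be sketched as follows. This is a generic illustration, not the paper's method: the weight rule w_i = p/(|x_i| + ϵ)^(1-p) is the standard first-order surrogate of the ℓp quasi-norm (the paper's two novel reweighting strategies differ), and the inner weighted-ℓ1 problem is solved here by plain proximal gradient descent (ISTA) as an assumed solver.

```python
import numpy as np

# Generic iteratively reweighted l1 sketch for sparse recovery:
# min 0.5*||Ax - b||^2 + lam * sum_i w_i |x_i|, where the weights
# w_i = p / (|x_i| + eps)**(1 - p) are refreshed from the current iterate
# so that large components are penalized less, approximating the lp
# quasi-norm. The inner solver is ISTA with weighted soft-thresholding.

def ir_l1(A, b, p=0.5, lam=0.1, eps=1e-2, outer=8, inner=300):
    n = A.shape[1]
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = ||A||_2^2
    for _ in range(outer):
        w = p / (np.abs(x) + eps) ** (1.0 - p)    # reweighting step
        for _ in range(inner):
            z = x - step * (A.T @ (A @ x - b))    # gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
    return x

# Toy noiseless recovery: 3-sparse signal from 40 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 30, 70]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = ir_l1(A, b)
```

With uniform initial weights the first outer pass reduces to ordinary ℓ1 minimization; subsequent passes shrink the weights on the detected support, debiasing the estimate toward the ℓp solution.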
