Advanced Neural Network and Machine Learning Algorithms, Models and Architectures in Data Mining

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E1: Mathematics and Computer Science".

Deadline for manuscript submissions: 31 March 2026 | Viewed by 2553

Special Issue Editors


Dr. Jincai Huang
Guest Editor
Big Data Institute, Central South University, Changsha 518060, China
Interests: spatial data mining; graph neural networks; knowledge graphs; intelligent transportation

Dr. Jianbo Tang
Guest Editor
Department of Geo-Informatics, Central South University, Changsha 410086, China
Interests: geographical data analysis; crowdsourced mapping; human mobility patterns; urban functional zone identification and sustainable development

Special Issue Information

Dear Colleagues,

With the rise of large models, the field of data mining faces new challenges and opportunities, and new machine learning methods and neural network models are continuously emerging. This Special Issue, "Advanced Neural Network and Machine Learning Algorithms, Models and Architectures in Data Mining", focuses on new machine learning and neural network models for data mining, including graph neural networks, transformers, and large language models. Key topics include machine learning architectures (such as deep neural networks and reinforcement learning), data analysis methods in intelligent systems, and clustering and prediction models for multimodal data. Contributions should emphasize mathematical innovations, such as new neural network learning models and memory models, as well as new methods and techniques for processing data such as images, text, sequences, and spatiotemporal data. This Issue aims to connect emerging data mining methods with real-world applications of data-driven machine learning.

Dr. Jincai Huang
Dr. Jianbo Tang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • data mining
  • transformers
  • graph neural networks
  • data clustering models
  • prediction models
  • multimodal data analysis
  • data-driven machine learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (4 papers)


Research

29 pages, 19410 KB  
Article
Supervised Feature Selection Method Using Stackable Attention Networks
by Zhu Chen, Wei Jiang, Jun Tan, Zhiqiang Li and Ning Gui
Mathematics 2025, 13(22), 3703; https://doi.org/10.3390/math13223703 - 18 Nov 2025
Viewed by 207
Abstract
Mainstream DNN-based feature selection methods share a similar design strategy: employing one specially designed feature selection module to learn the importance of features along with the model-training process. While these works achieve great success in feature selection, their shallow structures, which evaluate feature importance from one perspective, are easily disturbed by noisy samples, especially in datasets with high-dimensional features and complex structures. To alleviate this limitation, this paper innovatively introduces a Stackable Attention architecture for Feature Selection (SAFS), which can calculate stable and accurate feature weights through a set of Stackable Attention Blocks (SABlocks) rather than from a single module. To avoid information loss from stacking, a feature jump concatenation structure is designed. Furthermore, an inertia-based weight update method is proposed to generate a more robust feature weight distribution. Experiments on twelve real-world datasets, including multiple domains, demonstrate that SAFS produced the best results with significant performance edges compared to thirteen baselines.
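The core idea — accumulating feature weights across several stacked scoring blocks with an inertia-based update — can be sketched as follows. This is a toy illustration only: the scoring rule, block structure, and all names here are assumptions, not the paper's SABlocks.

```python
import numpy as np

def attention_feature_weights(X, n_blocks=3, momentum=0.9, seed=0):
    """Toy sketch: stack several attention-style scoring blocks and
    blend their per-feature weights with an inertia (momentum) update,
    rather than trusting a single scoring module."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    weights = np.full(n_features, 1.0 / n_features)  # uniform start
    for _ in range(n_blocks):
        # Each "block" scores features with a random linear attention head
        # (stand-in for a learned block).
        scores = X @ rng.normal(size=(n_features, n_features))
        block_w = np.exp(scores.mean(axis=0))
        block_w /= block_w.sum()
        # Inertia-based update: keep most of the previous distribution,
        # so one noisy block cannot swing the final weights.
        weights = momentum * weights + (1 - momentum) * block_w
    return weights
```

Because each block only nudges the running distribution, the final weights are a smoothed consensus across blocks — the stability property the abstract emphasizes.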

16 pages, 451 KB  
Article
Uncertainty-Aware Multi-Branch Graph Attention Network for Transient Stability Assessment of Power Systems Under Disturbances
by Ke Wang, Shixiong Fan, Haotian Xu, Jincai Huang and Kezheng Jiang
Mathematics 2025, 13(22), 3575; https://doi.org/10.3390/math13223575 - 7 Nov 2025
Viewed by 618
Abstract
With the rapid development of modern society and the continuous growth of electricity demand, the stability of power systems has become increasingly critical. In particular, Transient Stability Assessment (TSA) plays a vital role in ensuring the secure and reliable operation of power systems. Existing studies have employed Graph Attention Networks (GAT) to model both the topological structure and vertex attributes of power systems, achieving excellent results under ideal test environments. However, the continuous expansion of power systems and the large-scale integration of renewable energy sources have significantly increased system complexity, posing major challenges to TSA. Traditional methods often struggle to handle various disturbances. To address this issue, we propose a graph attention network framework with multi-branch feature aggregation. This framework constructs multiple GAT branches from different information sources and employs a learnable mask mechanism to enhance diversity among branches. In addition, this framework adopts an uncertainty-aware aggregation strategy to efficiently fuse the information from all branches. Extensive experiments conducted on the IEEE-39 bus and IEEE-118 bus systems demonstrate that our method consistently outperforms existing approaches under different disturbance scenarios, providing more accurate and reliable identification of potential instability risks.
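One common way to realize uncertainty-aware aggregation is inverse-variance fusion: branches that are less certain contribute less to the fused prediction. The sketch below illustrates that idea; it is an assumption for illustration, not necessarily the exact strategy used in the paper.

```python
import numpy as np

def uncertainty_weighted_fusion(branch_means, branch_vars, eps=1e-8):
    """Fuse per-branch predictions with inverse-variance weights.

    branch_means, branch_vars: arrays of shape (n_branches, n_outputs),
    where each branch reports a prediction and an uncertainty estimate.
    """
    means = np.asarray(branch_means, dtype=float)
    vars_ = np.asarray(branch_vars, dtype=float)
    # Weight each branch by the inverse of its reported variance,
    # then normalize across branches.
    w = 1.0 / (vars_ + eps)
    w /= w.sum(axis=0, keepdims=True)
    return (w * means).sum(axis=0)
```

With this rule, a branch reporting near-zero variance dominates the fused output, while high-variance branches are effectively down-weighted.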

31 pages, 4670 KB  
Article
Survival Analysis as Imprecise Classification with Trainable Kernels
by Andrei Konstantinov, Lev Utkin, Vlada Efremenko, Vladimir Muliukha, Alexey Lukashin and Natalya Verbova
Mathematics 2025, 13(18), 3040; https://doi.org/10.3390/math13183040 - 21 Sep 2025
Viewed by 691
Abstract
Survival analysis is a fundamental tool for modeling time-to-event data in healthcare, engineering, and finance, where censored observations pose significant challenges. While traditional methods like the Beran estimator offer nonparametric solutions, they often struggle with complex data structures and heavy censoring. This paper introduces three novel survival models, iSurvM (imprecise Survival model based on Mean likelihood functions), iSurvQ (imprecise Survival model based on Quantiles of likelihood functions), and iSurvJ (imprecise Survival model based on Joint learning), that combine imprecise probability theory with attention mechanisms to handle censored data without parametric assumptions. The first idea behind the models is to represent censored observations by interval-valued probability distributions for each instance over time intervals between event moments. The second idea is to employ kernel-based Nadaraya–Watson regression with trainable attention weights for computing the imprecise probability distribution over time intervals for the entire dataset. The third idea is to consider three decision strategies for training, which correspond to the three proposed models. Experiments on synthetic and real datasets demonstrate that the proposed models, especially iSurvJ, consistently outperform the Beran estimator in terms of both accuracy and computational complexity. Codes implementing the proposed models are publicly available.
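Nadaraya–Watson regression predicts a query value as a kernel-weighted average of training targets — structurally the same as softmax attention. The minimal sketch below uses a fixed Gaussian bandwidth; in the paper the attention weights are trainable, so this is only an illustration of the underlying estimator.

```python
import numpy as np

def nadaraya_watson(x_query, x_train, y_train, bandwidth=1.0):
    """Nadaraya-Watson kernel regression with a Gaussian kernel.

    Each prediction is a softmax-weighted (attention-style) average of
    y_train, with weights decaying in the squared distance to x_train.
    """
    # Squared distances between every query and every training point.
    d2 = (np.asarray(x_query)[:, None] - np.asarray(x_train)[None, :]) ** 2
    logits = -d2 / (2.0 * bandwidth**2)
    # Numerically stable softmax over the training points.
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ np.asarray(y_train)
```

A small bandwidth makes the estimator nearly interpolate the training data; a large one averages broadly — replacing the fixed bandwidth with learned parameters is what makes the attention weights "trainable".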

14 pages, 1313 KB  
Article
A Fast and Privacy-Preserving Outsourced Approach for K-Means Clustering Based on Symmetric Homomorphic Encryption
by Wanqi Tang and Shiwei Xu
Mathematics 2025, 13(17), 2893; https://doi.org/10.3390/math13172893 - 8 Sep 2025
Viewed by 555
Abstract
Training a machine learning (ML) model always requires substantial computing resources, and cloud-based outsourced training is a good solution to the shortage of computing resources. However, the cloud may be untrustworthy and may pose a privacy threat to the training process. Currently, most work makes use of multi-party computation protocols and lattice-based homomorphic encryption algorithms to solve the privacy problem, but these tools are inefficient in communication or computation. Therefore, in this paper, we focus on k-means and propose a fast and privacy-preserving method for outsourced clustering of k-means models based on symmetric homomorphic encryption (SHE), which is used to encrypt the clustering dataset and model parameters in our scheme. We design an interactive protocol and use various tools to optimize the protocol's time overheads. We perform a security analysis and a detailed evaluation of the performance of our scheme, and the experimental results show that our scheme achieves better prediction accuracy, as well as lower computation and total overheads.
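For context, the computation that any outsourced k-means scheme must carry out is one Lloyd iteration: assign points to the nearest center, then recompute centers. The plaintext sketch below shows that iteration; the SHE layer (encrypting the dataset and parameters, and the interactive protocol around it) is deliberately omitted here.

```python
import numpy as np

def kmeans_step(X, centers):
    """One plaintext Lloyd iteration of k-means.

    In an outsourced scheme, the distance comparisons and the mean
    updates below are the operations performed over encrypted data.
    """
    # Assign each point to its nearest center (squared Euclidean distance).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    # Recompute each center as the mean of its assigned points;
    # keep the old center if a cluster received no points.
    new_centers = np.array([
        X[labels == k].mean(axis=0) if (labels == k).any() else centers[k]
        for k in range(len(centers))
    ])
    return labels, new_centers
```

The squared-distance and averaging steps involve only additions and multiplications, which is why additively/multiplicatively homomorphic schemes are a natural fit for outsourcing this workload.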
