Applications of Artificial Intelligence, Machine Learning and Data Science

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "E1: Mathematics and Computer Science".

Deadline for manuscript submissions: 31 December 2025 | Viewed by 11985

Special Issue Editors


Guest Editor
Department of Statistics, Guangzhou University, Guangzhou 510006, China
Interests: statistical machine learning; pattern recognition; data mining; computer vision; biomedical image processing

Guest Editor
Department of Computer Science, Norwegian University of Science and Technology, 2815 Gjovik, Norway
Interests: pattern recognition; computer vision; deep learning

Special Issue Information

Dear Colleagues,

This Special Issue focuses on applications of mathematical methods in artificial intelligence, machine learning, and data science. Over the past decade, mathematical theories and methods have driven the rapid development of new technologies in these fields, and the resulting systems now match or even exceed human performance in a variety of applications, with the potential to enable new high-impact applications across many domains. We encourage researchers to contribute to this Special Issue on topics including, but not limited to, the following subject areas: deep learning models and their applications; machine learning methodologies and theoretical analysis; image processing technologies and applications; computer vision methods and algorithms; and data mining and data analytics.

Dr. Yufeng Yu
Dr. Guoxia Xu
Prof. Dr. Hu Zhu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • deep learning
  • data mining
  • computer vision
  • artificial intelligence
  • image processing
  • pattern recognition

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (9 papers)


Research

37 pages, 8656 KB  
Article
Anomaly-Aware Graph-Based Semi-Supervised Deep Support Vector Data Description for Anomaly Detection
by Taha J. Alhindi
Mathematics 2025, 13(24), 3987; https://doi.org/10.3390/math13243987 - 14 Dec 2025
Viewed by 139
Abstract
Anomaly detection in safety-critical systems often operates under severe label constraints, where only a small subset of normal and anomalous samples can be reliably annotated, while large unlabeled data streams are contaminated and high-dimensional. Deep one-class methods, such as deep support vector data description (DeepSVDD) and deep semi-supervised anomaly detection (DeepSAD), address this setting. However, they treat samples largely in isolation and do not explicitly leverage the manifold structure of unlabeled data, which can limit robustness and interpretability. This paper proposes Anomaly-Aware Graph-based Semi-Supervised Deep Support Vector Data Description (AAG-DSVDD), a boundary-focused deep one-class approach that couples a DeepSAD-style hypersphere with a label-aware latent k-nearest neighbor (k-NN) graph. The method combines a soft-boundary enclosure for labeled normals, a margin-based push-out for labeled anomalies, an unlabeled center-pull, and a k-NN graph regularizer on the squared distances to the center. The resulting graph term propagates information from scarce labels along the latent manifold, aligns anomaly scores of neighboring samples, and supports sample-level interpretability through graph neighborhoods, while test-time scoring remains a single distance-to-center computation. On a controlled two-dimensional synthetic dataset, AAG-DSVDD achieves a mean F1-score of 0.88±0.02 across ten random splits, improving on the strongest baseline by about 0.12 absolute F1. On three public benchmark datasets (Thyroid, Arrhythmia, and Heart), AAG-DSVDD attains the highest F1 on all datasets with F1-scores of 0.719, 0.675, and 0.8, respectively, compared to all baselines. In a multi-sensor fire monitoring case study, AAG-DSVDD reduces the average absolute error in fire starting time to approximately 473 s (about 30% improvement over DeepSAD) while keeping the average pre-fire false-alarm rate below 1% and avoiding persistent pre-fire alarms. 
These results indicate that graph-regularized deep one-class boundaries offer an effective and interpretable framework for semi-supervised anomaly detection under realistic label budgets. Full article
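As an aid to readers, the loss structure sketched in the abstract can be illustrated with a minimal NumPy reconstruction. This is not the authors' code: the function names (`anomaly_scores`, `knn_graph`, `graph_regularizer`) and the binary k-NN weighting are assumptions, and the soft-boundary, margin push-out, and center-pull terms are omitted for brevity.

```python
import numpy as np

def anomaly_scores(z, center):
    # Score = squared latent distance to the hypersphere center;
    # test-time scoring is exactly this single computation.
    return np.sum((z - center) ** 2, axis=1)

def knn_graph(z, k):
    # Symmetric binary k-NN adjacency in latent space.
    n = len(z)
    d2 = np.sum((z[:, None] - z[None, :]) ** 2, axis=2)
    np.fill_diagonal(d2, np.inf)
    W = np.zeros((n, n))
    idx = np.argsort(d2, axis=1)[:, :k]
    for i in range(n):
        W[i, idx[i]] = 1.0
    return np.maximum(W, W.T)  # symmetrize

def graph_regularizer(scores, W):
    # Penalize score differences between latent neighbors,
    # aligning anomaly scores along the data manifold.
    diff = scores[:, None] - scores[None, :]
    return 0.5 * np.sum(W * diff ** 2)
```

The graph term is the one piece that distinguishes this sketch from plain DeepSAD-style scoring: labels propagate because neighbors are pushed toward similar distances to the center.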

21 pages, 10260 KB  
Article
Machine Learning for Enabling High-Data-Rate Secure Random Communication: SVM as the Optimal Choice over Others
by Areeb Ahmed and Zoran Bosnić
Mathematics 2025, 13(22), 3590; https://doi.org/10.3390/math13223590 - 8 Nov 2025
Viewed by 403
Abstract
Machine learning (ML) has become a key ingredient in revolutionizing the physical layer security of next-generation devices across Industry 4.0, healthcare, and communication networks. Many conventional and unconventional communication architectures now incorporate ML algorithms for performance and security enhancement. In this study, we propose an unconventional, high-data-rate, machine-learning-driven, secure random communication system (HDR-MLRCS). Instead of utilizing traditional static methods to encrypt and decrypt alpha-stable (α-stable) noise as a random carrier, we integrated several ML algorithms to convey binary information to the intended receivers covertly. A support vector machine-aided receiver (SVM-R), Naïve Bayes-aided receiver (NB-R), k-Nearest Neighbor-aided receiver (kNN-R), and decision tree-aided receiver (DT-R) were integrated into a single architecture to provide an accelerated data rate with robust security. All intended receivers were pre-trained on a restricted-access dataset (R- Full article
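The receiver side of such a scheme can be sketched under stated assumptions: NumPy has no alpha-stable sampler, so a heavy-tailed Student-t carrier (df = 1.5) stands in for the α-stable noise; robust scale statistics serve as features; and the linear SVM is a bare subgradient implementation, not the authors' SVM-R.

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_frame(bit, length=256):
    # Bit modulates the scale of a heavy-tailed noise carrier.
    scale = 1.0 if bit == 0 else 3.0
    return scale * rng.standard_t(1.5, size=length)

def features(frame):
    # Robust scale statistics survive heavy tails better than variance.
    a = np.abs(frame)
    return np.array([np.median(a), np.quantile(a, 0.75)])

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.01):
    # Plain hinge-loss subgradient descent; labels in {-1, +1}.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:
                w = (1 - lr * lam) * w + lr * yi * xi
                b += lr * yi
            else:
                w = (1 - lr * lam) * w
    return w, b

bits = rng.integers(0, 2, size=300)
X = np.array([features(noise_frame(b)) for b in bits])
y = 2 * bits - 1
w, b = train_linear_svm(X[:200], y[:200])
pred = np.sign(X[200:] @ w + b)
accuracy = np.mean(pred == y[200:])
```

An eavesdropper who only observes individual samples sees heavy-tailed noise in both cases; the receiver recovers bits from frame-level statistics it was pre-trained on.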

17 pages, 5623 KB  
Article
Deep Learning-Based Back-Projection Parameter Estimation for Quantitative Defect Assessment in Single-Framed Endoscopic Imaging of Water Pipelines
by Gaon Kwon and Young Hwan Choi
Mathematics 2025, 13(20), 3291; https://doi.org/10.3390/math13203291 - 15 Oct 2025
Viewed by 422
Abstract
Aging water pipelines are increasingly prone to structural failure, leakage, and ground subsidence, creating critical risks to urban infrastructure. Closed-circuit television endoscopy is widely used for internal assessment, but it depends on manual interpretation and lacks reliable quantitative defect information. Traditional vanishing point detection techniques, such as the Hough Transform, often fail under practical conditions due to irregular lighting, debris, and deformed pipe surfaces, especially when pipes are water-filled. To overcome these challenges, this study introduces a deep learning-based method that estimates inverse projection parameters from monocular endoscopic images. The proposed approach reconstructs a spatially accurate two-dimensional projection of the pipe interior from a single frame, enabling defect quantification for cracks, scaling, and delamination. This eliminates the need for stereo cameras or additional sensors, providing a robust and cost-effective solution compatible with existing inspection systems. By integrating convolutional neural networks with geometric projection estimation, the framework advances computational intelligence applications in pipeline condition monitoring. Experimental validation demonstrates high accuracy in pose estimation and defect size recovery, confirming the potential of the system for automated, non-disruptive pipeline health evaluation. Full article
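The geometric step the abstract describes can be illustrated with a simple polar unwrapping around an estimated vanishing point; the function name and the nearest-neighbor sampling are illustrative assumptions, not the paper's method, which learns the projection parameters with a CNN.

```python
import numpy as np

def unwrap_pipe(image, vp, n_theta=360, n_r=100, r_max=None):
    # Polar "unwrapping" of a pipe-interior image around an estimated
    # vanishing point vp = (cx, cy): rows index radial distance
    # (a proxy for depth along the pipe), columns index angle.
    h, w = image.shape
    if r_max is None:
        r_max = min(h, w) / 2
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rs = np.linspace(1, r_max, n_r)
    xs = vp[0] + rs[:, None] * np.cos(thetas)[None, :]
    ys = vp[1] + rs[:, None] * np.sin(thetas)[None, :]
    xi = np.clip(np.round(xs).astype(int), 0, w - 1)
    yi = np.clip(np.round(ys).astype(int), 0, h - 1)
    return image[yi, xi]  # nearest-neighbor resampling
```

With an accurate vanishing point and camera parameters, distances on the unwrapped grid become metrically meaningful, which is what enables defect-size quantification from a single frame.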

27 pages, 1030 KB  
Article
A Hybrid Mathematical Framework for Dynamic Incident Prioritization Using Fuzzy Q-Learning and Text Analytics
by Arturo Peralta, José A. Olivas, Pedro Navarro-Illana and Juan Alvarado
Mathematics 2025, 13(12), 1941; https://doi.org/10.3390/math13121941 - 11 Jun 2025
Cited by 1 | Viewed by 1322
Abstract
This paper presents a hybrid framework for dynamic incident prioritization in enterprise environments, combining fuzzy logic, natural language processing, and reinforcement learning. The proposed system models incident descriptions through semantic embeddings derived from advanced text analytics, which serve as state representations within a fuzzy Q-learning model. Severity and urgency are encoded as fuzzy variables, enabling the prioritization process to manage linguistic vagueness and operational uncertainty. A mathematical formulation of the fuzzy Q-learning algorithm is developed, including fuzzy state definition, reward function design, and convergence analysis. The system continuously updates its prioritization policy based on real-time feedback, adapting to evolving patterns in incident reports and resolution outcomes. Experimental evaluation on a dataset of 10,000 annotated incident descriptions demonstrates improved prioritization accuracy, particularly for ambiguous or borderline cases, and reveals a 19% performance gain over static fuzzy and deep learning-based baselines. The results validate the effectiveness of integrating fuzzy inference and reinforcement learning in incident management tasks requiring adaptability, transparency, and mathematical robustness. Full article
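A minimal sketch of a fuzzy Q-learning update, assuming triangular memberships over a normalized severity score in [0, 1]; the membership shapes, learning rate, and reward are hypothetical and not taken from the paper, which additionally feeds semantic text embeddings into the state representation.

```python
import numpy as np

def tri(x, a, b, c):
    # Triangular membership function with peak at b.
    return max(0.0, min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)))

def memberships(severity):
    # Degrees of "low", "medium", "high" severity for a score in [0, 1].
    return np.array([tri(severity, -0.5, 0.0, 0.5),
                     tri(severity, 0.0, 0.5, 1.0),
                     tri(severity, 0.5, 1.0, 1.5)])

def fuzzy_q_update(Q, severity, action, reward, next_severity,
                   alpha=0.1, gamma=0.9):
    # Distribute a standard Q-learning update over the fuzzy states,
    # weighted by each fuzzy set's membership degree.
    mu, mu_next = memberships(severity), memberships(next_severity)
    target = reward + gamma * np.max(mu_next @ Q)  # blended next-state value
    q_sa = mu @ Q[:, action]                       # blended current estimate
    Q[:, action] += alpha * mu * (target - q_sa)
    return Q
```

Because the update is spread across overlapping fuzzy sets, a borderline incident (e.g., severity 0.5) adjusts both the "medium" and neighboring entries, which is how linguistic vagueness is handled without hard state boundaries.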

16 pages, 4676 KB  
Article
Application of Dual-Stage Attention Temporal Convolutional Networks in Gas Well Production Prediction
by Xianlin Ma, Long Zhang, Jie Zhan and Shilong Chang
Mathematics 2024, 12(24), 3896; https://doi.org/10.3390/math12243896 - 10 Dec 2024
Cited by 3 | Viewed by 1603
Abstract
Effective production prediction is vital for optimizing energy resource management, designing efficient extraction strategies, minimizing operational risks, and informing strategic investment decisions within the energy sector. This paper introduces a Dual-Stage Attention Temporal Convolutional Network (DA-TCN) model to enhance the accuracy and efficiency of gas production forecasting, particularly for wells in tight sandstone reservoirs. The DA-TCN architecture integrates feature and temporal attention mechanisms within the Temporal Convolutional Network (TCN) framework, improving the model’s ability to capture complex temporal dependencies and emphasize significant features, resulting in robust forecasting performance across multiple time horizons. Application of the DA-TCN model to gas production data from two wells in Block T of the Sulige gas field in China demonstrated a 19% improvement in RMSE and a 21% improvement in MAPE compared to traditional TCN methods for long-term forecasts. These findings confirm that dual-stage attention not only increases predictive accuracy but also enhances forecast stability over short-, medium-, and long-term horizons. By enabling more reliable production forecasting, the DA-TCN model reduces operational uncertainties, optimizes resource allocation, and supports cost-effective management of unconventional gas resources. Leveraging existing knowledge, this scalable and data-efficient approach represents a significant advancement in gas production forecasting, delivering tangible economic benefits for the energy industry. Full article
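The two building blocks named in the abstract, feature attention and causal dilated temporal convolution, can be sketched in NumPy as follows; the temporal-attention stage is omitted, and all parameter values are placeholders rather than learned weights from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

def feature_attention(x, w):
    # x: (T, F) multivariate series; w: (F,) learned feature scores.
    # Re-weights input features before the convolutional stack.
    return x * softmax(w)[None, :]

def causal_dilated_conv(x, kernel, dilation):
    # x: (T,) single channel; left-pad so the output at time t
    # depends only on inputs at times t' <= t (no future leakage).
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([np.dot(kernel, xp[t : t + pad + 1 : dilation])
                     for t in range(len(x))])
```

Stacking such convolutions with dilations 1, 2, 4, ... gives the exponentially growing receptive field that lets a TCN capture long-range dependencies in production histories.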

21 pages, 4569 KB  
Article
Pairwise-Constraint-Guided Multi-View Feature Selection by Joint Sparse Regularization and Similarity Learning
by Jinxi Li and Hong Tao
Mathematics 2024, 12(14), 2278; https://doi.org/10.3390/math12142278 - 21 Jul 2024
Viewed by 1812
Abstract
Feature selection is a basic and important step in real applications, such as face recognition and image segmentation. In this paper, we propose a new weakly supervised multi-view feature selection method that utilizes pairwise constraints, i.e., the pairwise-constraint-guided multi-view feature selection (PCFS for short) method. In this method, linear projections of all views and a consistent similarity graph with pairwise constraints are jointly optimized to learn discriminative projections. Meanwhile, an l2,0-norm-based row-sparsity constraint is imposed on the concatenation of the projections for discriminative feature selection. An iterative algorithm with theoretically guaranteed convergence is then developed to optimize PCFS. The performance of the proposed PCFS method was evaluated through comprehensive experiments on six benchmark datasets and applications to cancer clustering. The experimental results demonstrate that PCFS exhibits competitive feature selection performance in comparison with related models. Full article
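The l2,0-style row selection mentioned above can be illustrated with a small sketch: concatenate the per-view projection matrices and keep the k rows with the largest l2 norms (a row with zero norm means the corresponding feature is discarded). The function name is hypothetical, and the joint optimization of projections and similarity graph is omitted entirely.

```python
import numpy as np

def select_features_l20(projections, k):
    # projections: list of per-view projection matrices, each (d_v, c).
    # Stacking them row-wise aligns one row per original feature across
    # all views; an l2,0 row-sparsity constraint keeps only k nonzero
    # rows, i.e., the k features with the largest row l2 norms.
    W = np.vstack(projections)
    row_norms = np.linalg.norm(W, axis=1)
    return np.argsort(row_norms)[::-1][:k]
```

In the actual method the row support is determined inside the optimization rather than by a post-hoc sort, but the selection criterion, row norm of the concatenated projection, is the same quantity.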

22 pages, 2390 KB  
Article
Variational Online Learning Correlation Filter for Visual Tracking
by Zhongyang Wang, Feng Liu and Lizhen Deng
Mathematics 2024, 12(12), 1818; https://doi.org/10.3390/math12121818 - 12 Jun 2024
Cited by 1 | Viewed by 1357
Abstract
Recently, discriminative correlation filters (DCF) have been successfully applied to visual tracking. However, traditional DCF trackers tend to solve the boundary-effect and temporal-degradation problems separately and typically rely only on first-order temporal constraints, an approach that leads to overfitting and filter degradation. In this paper, a variational online learning correlation filter (VOLCF) is proposed for visual tracking to improve the robustness and accuracy of the tracking process. First, beyond the standard filter-training requirement, the proposed VOLCF method introduces a model confidence term, which leverages the temporal information of adjacent frames during filter training. Second, to ensure the consistency of the temporal and spatial characteristics of the video sequence, the model introduces Kullback–Leibler (KL) divergence to obtain second-order information about the filter. In contrast to traditional target-tracking models that rely solely on first-order feature information, this approach establishes a generalized connection between the previous and current filters and thus enables jointly regulated filter updating. Quantitative and qualitative experimental analyses show that the VOLCF model achieves excellent tracking performance. Full article
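The KL term the abstract refers to can be illustrated for diagonal Gaussian filter distributions; this closed form is standard, but its use here as a consistency regularizer between the previous and current filters is only an interpretation of the abstract, not the paper's exact formulation.

```python
import numpy as np

def kl_gauss(mu0, var0, mu1, var1):
    # KL(N(mu0, var0) || N(mu1, var1)) for diagonal Gaussians.
    # Penalizing this between consecutive filter distributions couples
    # both means and variances, i.e., second-order information, rather
    # than only the first-order (mean) difference.
    return 0.5 * np.sum(np.log(var1 / var0)
                        + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)
```

The divergence is zero only when both the means and the variances agree, which is the sense in which it carries more information than a plain first-order difference of filter weights.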

18 pages, 7887 KB  
Article
A Two-Stage Method for Aerial Tracking in Adverse Weather Conditions
by Yuan Feng, Xinnan Xu, Nuoyi Chen, Quanjian Song and Lufang Zhang
Mathematics 2024, 12(8), 1216; https://doi.org/10.3390/math12081216 - 18 Apr 2024
Viewed by 1380
Abstract
To tackle the issue of aerial tracking failure in adverse weather conditions, we developed an innovative two-stage tracking method that incorporates a lightweight image-restoration model, DADNet, and a strong pretrained tracker. Our method begins by restoring the degraded image, which yields a refined intermediate result; the tracker then uses this intermediate result to produce precise tracking bounding boxes. To extend the UAV123 dataset to various weather scenarios, we estimated the depth of the images in the dataset. Our method was tested with two well-known trackers, and the experimental results highlight its superiority. The comparison experiments also validate the dehazing effectiveness of our restoration model, and ablation studies confirm the efficiency of the components of our dehazing module. Full article

16 pages, 1391 KB  
Article
Dynamic Merging for Optimal Onboard Resource Utilization: Innovating Mission Queue Constructing Method in Multi-Satellite Spatial Information Networks
by Jun Long, Shangpeng Wang, Yakun Huo, Limin Liu and Huilong Fan
Mathematics 2024, 12(7), 986; https://doi.org/10.3390/math12070986 - 26 Mar 2024
Cited by 2 | Viewed by 1482
Abstract
The purpose of constructing onboard observation mission queues is to improve the execution efficiency of onboard tasks and reduce energy consumption, representing a significant challenge in achieving efficient global military reconnaissance and target tracking. Existing research often focuses on the aspect of task scheduling, aiming at optimizing the efficiency of single-task execution, while neglecting the complex dependencies that might exist between multiple tasks and payloads. Moreover, traditional task scheduling schemes are no longer suitable for large-scale tasks. To effectively reduce the number of tasks within the network, we introduce a network aggregation graph model based on multiple satellites and tasks, and propose a task aggregation priority dynamic calculation algorithm based on graph computations. Subsequently, we present a dynamic merging-based method for multi-satellite, multi-task aggregation, a novel approach for constructing onboard mission queues that can dynamically optimize the task queue according to real-time task demands and resource status. Simulation experiments demonstrate that, compared to baseline algorithms, our proposed task aggregation method significantly reduces the task size by approximately 25% and effectively increases the utilization rate of onboard resources. Full article
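The aggregation idea can be illustrated with a deliberately simplified greedy merge over time windows and payloads; the tuple format, the `max_span` parameter, and the greedy rule are assumptions for illustration, not the paper's graph-based priority algorithm.

```python
def merge_tasks(tasks, max_span):
    # tasks: list of (start, end, payload) observation requests.
    # Greedy aggregation: sort by start time and merge each task into the
    # previous one when both use the same payload, their windows overlap,
    # and the combined window fits within max_span. Fewer resulting tasks
    # means fewer payload activations and lower onboard energy use.
    merged = []
    for start, end, payload in sorted(tasks):
        if merged:
            s, e, p = merged[-1]
            if p == payload and start <= e and max(e, end) - s <= max_span:
                merged[-1] = (s, max(e, end), p)
                continue
        merged.append((start, end, payload))
    return merged
```

The paper's method instead builds an aggregation graph over satellites and tasks and computes merge priorities dynamically, but the payoff is the same: a shorter mission queue covering the same observation demands.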
