Algorithms, Volume 18, Issue 3 (March 2025) – 58 articles

Cover Story: X-ray computed tomography is crucial for medical diagnostics, but frequent exposure poses health risks, driving the use of low-dose CT (LDCT) imaging. However, LDCT introduces noise, compromising diagnostic accuracy. This article presents a pure vision transformer (PViT) with a gradient–Laplacian attention module (GLAM) for LDCT denoising, enhancing edge preservation and structural detail. The model was validated on five datasets, consistently preserving the anatomical structures of piglet, head, abdomen, chest, and thoracic CT images. Extensive ablation studies on the attention and loss functions also confirmed the contribution of each module component. Compared with state-of-the-art models, the proposed model shows superior noise suppression, sharper anatomical boundaries, and more efficient training, highlighting its clinical applicability.
32 pages, 13498 KiB  
Article
Solving Multidimensional Partial Differential Equations Using Efficient Quantum Circuits
by Manu Chaudhary, Kareem El-Araby, Alvir Nobel, Vinayak Jha, Dylan Kneidel, Ishraq Islam, Manish Singh, Sunday Ogundele, Ben Phillips, Kieran Egan, Sneha Thomas, Devon Bontrager, Serom Kim and Esam El-Araby
Algorithms 2025, 18(3), 176; https://doi.org/10.3390/a18030176 - 20 Mar 2025
Abstract
Quantum computing has the potential to solve certain compute-intensive problems faster than classical computing by leveraging the quantum mechanical properties of superposition and entanglement. This capability can be particularly useful for solving Partial Differential Equations (PDEs), which are challenging even for High-Performance Computing (HPC) systems, especially multidimensional PDEs. This has led researchers to investigate Quantum-Centric High-Performance Computing (QC-HPC) for solving multidimensional PDEs in various applications. However, current quantum computing-based PDE-solvers, especially those based on Variational Quantum Algorithms (VQAs), suffer from limitations such as low accuracy, long execution times, and limited scalability. In this work, we propose an innovative algorithm for solving multidimensional PDEs, with two variants. The first variant uses the Finite Difference Method (FDM), Classical-to-Quantum (C2Q) encoding, and numerical instantiation, whereas the second uses FDM, C2Q encoding, and Column-by-Column Decomposition (CCD). We evaluated the proposed algorithm using the Poisson equation as a case study and validated it through experiments on noise-free and noisy simulators, as well as hardware emulators and real quantum hardware from IBM. Our results show higher accuracy, improved scalability, and faster execution times than variational PDE-solvers, demonstrating the advantage of our approach for multidimensional PDEs.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
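The classical starting point both variants share is the finite-difference discretization of the Poisson equation. A minimal sketch of that step (classical only; the C2Q encoding and quantum circuits of the paper are not reproduced here), for the 1D problem -u''(x) = f(x) with zero boundary values:

```python
import numpy as np

# Classical finite-difference baseline for -u''(x) = f(x) on (0, 1),
# u(0) = u(1) = 0, using the standard 3-point stencil. Illustrative only.
def solve_poisson_1d(f, n):
    h = 1.0 / (n + 1)                    # grid spacing
    x = np.linspace(h, 1 - h, n)         # interior grid points
    # Tridiagonal stiffness matrix from the 3-point stencil
    A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return x, np.linalg.solve(A, f(x))

# Manufactured solution u(x) = sin(pi x), so f(x) = pi^2 sin(pi x)
x, u = solve_poisson_1d(lambda x: np.pi**2 * np.sin(np.pi * x), 63)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

The second-order stencil gives an error that shrinks like h², which is the accuracy target any quantum variant is then compared against.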

21 pages, 681 KiB  
Article
A PSO-Based Approach for the Optimal Allocation of Electric Vehicle Parking Lots to the Electricity Distribution Network
by Marzieh Sadat Arabi and Anjali Awasthi
Algorithms 2025, 18(3), 175; https://doi.org/10.3390/a18030175 - 20 Mar 2025
Abstract
Electric vehicles can serve as controllable loads, storing energy during off-peak periods and acting as generation units during peak periods or periods with high electricity prices. They function as distributed generation resources within distribution systems, requiring controlled charging and discharging of batteries. In this paper, we address the optimal allocation of parking lots within a distribution system to efficiently supply electric vehicle loads. The goal is to determine the best capacity and size of parking lots to meet peak-hour demands while considering constraints on the permanent operation of the distribution system. Using the particle swarm optimization (PSO) algorithm, the study maximizes total benefits, taking into account network parameters, vehicle data, and market prices. Results show that installing parking lots could be economically profitable for distribution companies and could improve voltage profiles.
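The PSO update rule the paper builds on is generic. A minimal sketch on a toy objective (the sphere function), with illustrative parameter values that are assumptions rather than the paper's settings for the parking-lot model:

```python
import numpy as np

# Minimal particle swarm optimization (PSO) sketch: velocity update pulls each
# particle toward its personal best and the global best. Toy objective only.
def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, val = pso(lambda p: np.sum(p**2), dim=3)
```

In the allocation problem, each particle would instead encode candidate parking-lot sites and capacities, with the objective evaluating network benefit under the operational constraints.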

25 pages, 653 KiB  
Review
Algorithms Facilitating the Observation of Urban Residential Vacancy Rates: Technologies, Challenges and Breakthroughs
by Binglin Liu, Weijia Zeng, Weijiang Liu, Yi Peng and Nini Yao
Algorithms 2025, 18(3), 174; https://doi.org/10.3390/a18030174 - 20 Mar 2025
Abstract
In view of the challenges posed by a complex environment, diverse data sources, and urban development needs, our study comprehensively reviews the application of algorithms in urban residential vacancy rate observation. First, we explore the definition and measurement of the urban residential vacancy rate, pointing out the difficulties in accurately defining vacant houses and obtaining reliable data. We then introduce algorithm families such as traditional statistical learning, machine learning, deep learning, and ensemble learning, and analyze their applications in vacancy rate observation. Traditional statistical learning builds prediction models from historical data mining and analysis, and has certain advantages for linear problems and regular data; however, given the highly nonlinear relationships and complexity of the data in vacancy rate observation, its prediction accuracy falls short of practical needs. Machine learning algorithms, with their powerful nonlinear modeling ability, have significant advantages in capturing the nonlinear relationships in the data, but they require high data quality and are prone to overfitting. Deep learning algorithms automatically learn feature representations, perform well on large amounts of high-dimensional, complex data, and can effectively handle the challenges of varied data sources, but training is complex and computationally costly. Ensemble learning combines multiple prediction models to improve prediction accuracy and stability. Comparing these algorithms clarifies their advantages and suitability in different scenarios.

In a complex environment, vacancy rate data are affected by many factors. Unbalanced urban development leads to significant differences in residential vacancy rates across areas. Spatiotemporal heterogeneity means that vacancy rates vary across geographical locations and over time. The vacancy rate is jointly affected by macroeconomic, policy and regulatory, market supply-and-demand, and individual resident factors; these intertwined factors increase the complexity of the data and the difficulty of analysis. Given the diversity of data sources, we discuss multi-source data fusion technology, which integrates different sources to improve the accuracy of vacancy rate observation. Sources such as Geographic Information System (GIS) data, remote sensing images, statistical data, social media data, and urban grid management data must be aligned in format, scale, precision, and spatiotemporal resolution through data preprocessing, standardization, and normalization. A multi-source data fusion algorithm should not only support intelligent feature extraction and correlation analysis, but also handle data uncertainty and redundancy to adapt to the dynamic needs of urban development. We also elaborate on algorithm optimization methods for different data sources. Through this study, we find that algorithms play a vital role in improving the accuracy of vacancy rate observation and in enhancing the understanding of urban housing conditions: they can handle complex spatial data, integrate diverse data sources, and explore the social and economic factors behind vacancy rates. In the future, we will continue to deepen the application of algorithms in data processing, model building, and decision support, striving to provide smarter and more accurate solutions for urban housing management and sustainable development.
(This article belongs to the Special Issue Algorithms for Smart Cities (2nd Edition))
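The standardization step the review describes, bringing sources on incompatible scales onto a common footing before fusion, can be sketched in a few lines. The feature names and weights below are hypothetical; the review does not prescribe a specific recipe:

```python
import numpy as np

# Z-score standardization of two hypothetical sources before a simple
# weighted-average fusion. Values are made up for illustration.
def zscore(col):
    return (col - col.mean()) / col.std()

night_light = np.array([12.0, 55.0, 30.0, 8.0])    # remote-sensing proxy
survey_rate = np.array([0.05, 0.22, 0.11, 0.03])   # statistical data
fused = 0.5 * zscore(night_light) + 0.5 * zscore(survey_rate)
```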

32 pages, 2702 KiB  
Article
Data Science in the Management of Healthcare Organizations
by Pedro Faria, Victor Alves, José Neves and Henrique Vicente
Algorithms 2025, 18(3), 173; https://doi.org/10.3390/a18030173 - 19 Mar 2025
Abstract
The transformation of healthcare organizations is essential to address their inherent complexity and dynamic nature. This study emphasizes the role of Data Science, incorporating Artificial Intelligence tools, in enabling data-driven and interconnected management strategies. To achieve this, a thermodynamic approach to Knowledge Representation and Reasoning was employed, capturing healthcare workers' perceptions of their work environment through structured questionnaires. Over several months, the entropic efficiency of healthcare workers' responses was analyzed, offering insights into the intricate relationships between leadership, teamwork, and work engagement, and their influence on organizational performance and worker satisfaction. This approach demonstrates Data Science's potential to enhance organizational effectiveness and adaptability while empowering healthcare workers. By bridging technological innovation with human-centric management, it provides actionable insights for sustainable improvements in healthcare systems. The study underscores that involving healthcare workers in decision-making could not only enhance satisfaction but also facilitate meaningful organizational transformation, creating more responsive and resilient healthcare organizations capable of navigating the complexities of modern healthcare.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
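The abstract's "entropic efficiency" formulation is not spelled out here; as a hedged illustration only, a plain Shannon entropy over the distribution of Likert-style questionnaire responses (hypothetical data) looks like this:

```python
import numpy as np

# Shannon entropy of a 5-level Likert response distribution.
# Not the paper's measure; an illustrative stand-in.
def shannon_entropy(responses, levels=5):
    counts = np.bincount(responses, minlength=levels + 1)[1:]  # levels 1..5
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

answers = np.array([4, 5, 3, 4, 4, 2, 5, 3])  # hypothetical responses
h = shannon_entropy(answers)
```

A perfectly uniform spread over the five levels gives the maximum entropy log2(5); concentrated agreement gives lower values.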

22 pages, 6381 KiB  
Article
MPVF: Multi-Modal 3D Object Detection Algorithm with Pointwise and Voxelwise Fusion
by Peicheng Shi, Wenchao Wu and Aixi Yang
Algorithms 2025, 18(3), 172; https://doi.org/10.3390/a18030172 - 19 Mar 2025
Abstract
3D object detection plays a pivotal role in accurate environmental perception, particularly in complex traffic scenarios where single-modal detection methods often fail to meet precision requirements. This highlights the necessity of multi-modal fusion approaches to enhance detection performance. However, existing camera-LiDAR intermediate fusion methods suffer from insufficient interaction between local and global features and limited fine-grained feature extraction, resulting in inadequate small-object detection and unstable performance in complex scenes. To address these issues, the multi-modal 3D object detection algorithm with pointwise and voxelwise fusion (MPVF) is proposed, which enhances multi-modal feature interaction and optimizes feature extraction strategies to improve detection precision and robustness. First, the pointwise and voxelwise fusion (PVWF) module combines local features from the pointwise fusion (PWF) module with global features from the voxelwise fusion (VWF) module, enhancing cross-modal feature interaction, improving small-object detection, and boosting performance in complex scenes. Second, an expressive feature extraction module, improved ResNet-101 and feature pyramid (IRFP), is developed, comprising the improved ResNet-101 (IR) and feature pyramid (FP) modules. The IR module uses a group convolution strategy to inject high-level semantic features into the PWF and VWF modules, improving extraction efficiency. The FP module, placed at an intermediate stage, captures fine-grained features at various resolutions, enhancing the model's precision and robustness. Finally, evaluation on the KITTI dataset demonstrates a mean Average Precision (mAP) of 69.24%, a 2.75% improvement over GraphAlign++. Detection accuracy for cars, pedestrians, and cyclists reaches 85.12%, 48.61%, and 70.12%, respectively, with the proposed method excelling in pedestrian and cyclist detection.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

37 pages, 48147 KiB  
Article
Generation of Sparse Antennas and Scatterers Based on Optimal Current Grid Approximation
by Tuan Phuong Dang, Manh Tuan Nguyen, Adnan F. Alhaj Hasan and Talgat R. Gazizov
Algorithms 2025, 18(3), 171; https://doi.org/10.3390/a18030171 - 18 Mar 2025
Abstract
Reducing the mass and surface area of antennas and scattering structures is an important task in designing electromagnetic devices. To solve this task, we present a novel algorithm based on the optimal current grid approximation for generating sparse scattering structures and evaluating their effectiveness. The main contributions of the work include analyzing the performance of the optimal current grid approximation and its modifications in generating sparse antennas, as well as demonstrating the effectiveness of the algorithm in generating sparse scattering structures, using a scattering plate and corner reflectors as examples. To validate the accuracy of the algorithm, we compared the scattering properties of the obtained sparse scattering structures. The results show that the algorithm works accurately and effectively in generating sparse scattering structures while reducing their mass and surface area.

53 pages, 24859 KiB  
Article
Investigations into the Design and Implementation of Reinforcement Learning Using Deep Learning Neural Networks
by Roxana-Elena Tudoroiu, Mohammed Zaheeruddin, Daniel-Ioan Curiac, Mihai Sorin Radu and Nicolae Tudoroiu
Algorithms 2025, 18(3), 170; https://doi.org/10.3390/a18030170 - 16 Mar 2025
Abstract
This paper investigates the design and MATLAB/Simulink implementation of two intelligent neural reinforcement learning control algorithms based on deep learning neural network structures (RL DLNNs) for a complex Heating, Ventilation and Air Conditioning (HVAC) centrifugal chiller system (CCS). Our motivation for designing such control strategies lies in this system's significant control-related challenges, namely its high dimensionality and strongly nonlinear multi-input multi-output (MIMO) structure, coupled with strong constraints and a substantial impact of measured disturbance on tracking performance. As a vehicle for proof of concept, two simplified CCS MIMO models were derived, and an extensive number of simulations were run to demonstrate the effectiveness of both RL DLNN control implementations compared with two conventional control algorithms. The experiments involving the two investigated data-driven advanced neural control algorithms demonstrate their high potential to adapt to the various nonlinearities, singularities, dimensions, disruptions, constraints, and uncertainties that inherently characterize real-world processes.
(This article belongs to the Special Issue Machine Learning for Pattern Recognition (2nd Edition))

16 pages, 841 KiB  
Article
An Alternative Estimator for Poisson–Inverse-Gaussian Regression: The Modified Kibria–Lukman Estimator
by Rasha A. Farghali, Adewale F. Lukman, Zakariya Algamal, Murat Genc and Hend Attia
Algorithms 2025, 18(3), 169; https://doi.org/10.3390/a18030169 - 14 Mar 2025
Abstract
Poisson regression is used to model count response variables. The method strictly assumes that the mean and variance of the response variable are equal, whereas in practice overdispersion is common. In addition, under multicollinearity, the parameter estimates obtained with the maximum likelihood estimator are adversely affected. This paper introduces a new biased estimator that extends the modified Kibria–Lukman estimator to the Poisson–Inverse-Gaussian regression model to deal with overdispersion and multicollinearity in the data. The superiority of the proposed estimator over existing biased estimators is established in terms of matrix and scalar mean square error. Moreover, its performance is examined through a simulation study. Finally, its superiority over other estimators is demonstrated on a real dataset.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
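In its linear-model form, the Kibria–Lukman estimator shrinks the OLS solution as beta_KL = (X'X + kI)^(-1)(X'X - kI) beta_OLS. The sketch below shows this on a nearly collinear toy design; the paper's actual contribution, adapting the idea to Poisson–Inverse-Gaussian regression, is deliberately not reproduced:

```python
import numpy as np

# KL shrinkage on a collinear linear model. k_shrink and the data are
# illustrative; every eigenvalue of (G+kI)^(-1)(G-kI) lies in (-1, 1),
# so the estimate is strictly shrunk relative to OLS.
rng = np.random.default_rng(1)
n, k_shrink = 100, 0.5
x1 = rng.normal(size=n)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=n)])  # nearly collinear
beta_true = np.array([1.0, 1.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

G = X.T @ X
beta_ols = np.linalg.solve(G, X.T @ y)
I = np.eye(2)
beta_kl = np.linalg.solve(G + k_shrink * I, (G - k_shrink * I) @ beta_ols)
```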

20 pages, 1321 KiB  
Article
Chinese Story Generation Based on Style Control of Transformer Model and Content Evaluation Method
by Jhe-Wei Lin, Tang-Wei Su and Che-Cheng Chang
Algorithms 2025, 18(3), 168; https://doi.org/10.3390/a18030168 - 14 Mar 2025
Abstract
Natural language processing (NLP) has numerous applications and has been extensively developed with deep learning. In recent years, language models such as the Transformer, BERT, and GPT have frequently served as the foundation for related research. However, relatively few studies have focused on evaluating the quality of generated sentences. While traditional evaluation methods like BLEU can be applied, the challenge is that generated sentences have no ground-truth reference, making it difficult to establish a reliable evaluation criterion. Therefore, this study examines content generated by Bidirectional Encoder Representations and related recurrent methods based on the Transformer model. Specifically, we analyze sentence fluency by assessing the degree of part-of-speech (PoS) matching and the coherence of PoS context ordering. Determining whether generated sentences align with the expected PoS structure is crucial, as it significantly impacts the readability of the generated text.
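A PoS-based fluency check in the spirit the abstract describes can be sketched as comparing PoS n-gram statistics of generated text against a reference. The tagger below is stubbed with a toy English lexicon purely for illustration; a real system would use a trained Chinese PoS tagger, and the overlap measure here is an assumption, not the paper's exact metric:

```python
from collections import Counter

# Toy lexicon standing in for a real PoS tagger.
TOY_LEXICON = {"the": "DET", "cat": "NOUN", "sat": "VERB", "on": "ADP", "mat": "NOUN"}

def pos_bigrams(tokens):
    tags = [TOY_LEXICON.get(t, "X") for t in tokens]
    return Counter(zip(tags, tags[1:]))

def bigram_overlap(generated, reference):
    # Fraction of generated PoS bigrams also present in the reference.
    g, r = pos_bigrams(generated), pos_bigrams(reference)
    shared = sum((g & r).values())
    return shared / max(sum(g.values()), 1)

score = bigram_overlap(["the", "cat", "sat"], ["the", "mat", "sat"])
```

Both toy sentences tag as DET NOUN VERB, so their PoS bigrams match fully even though the words differ, which is exactly the structural (rather than lexical) notion of fluency being assessed.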

35 pages, 9522 KiB  
Article
Decoding PM2.5 Prediction in Nanning Urban Area, China: Unraveling Model Superiorities and Drawbacks Through SARIMA, Prophet, and LightGBM
by Minru Chen, Binglin Liu, Mingzhi Liang and Nini Yao
Algorithms 2025, 18(3), 167; https://doi.org/10.3390/a18030167 - 14 Mar 2025
Abstract
With the rapid development of industrialization and urbanization, air pollution is becoming increasingly serious. Accurate prediction of PM2.5 concentration is of great significance to environmental protection and public health. Our study takes the Nanning urban area, which has distinctive geographical, climatic, and pollution-source characteristics, as its object. Based on dual-time-resolution raster data from the China High-resolution and High-quality PM2.5 Dataset (CHAP) for 2012–2023, we carry out PM2.5 concentration prediction using the SARIMA, Prophet, and LightGBM models. The study systematically compares the performance of each model across spatial and temporal dimensions using indicators such as mean square error (MSE), mean absolute error (MAE), and the coefficient of determination (R²). The results show that the LightGBM model has a strong ability to mine complex nonlinear relationships, but its stability is poor. The Prophet model has clear advantages in handling the seasonality and trend of time series, but lacks adaptability to complex changes. The SARIMA model, grounded in time-series prediction theory, performs well in some scenarios but is limited when dealing with non-stationary data and spatial heterogeneity. Our research provides a multi-dimensional model performance reference for subsequent PM2.5 concentration predictions, helps researchers select models according to different scenarios and needs, offers new ideas for analyzing concentration change patterns, and promotes related research in environmental science.
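The three comparison metrics named in the abstract are straightforward to state directly. A sketch in plain numpy, on fabricated PM2.5-like values (the numbers are illustrative, not from the study):

```python
import numpy as np

# MSE, MAE and R^2 as used to compare SARIMA, Prophet and LightGBM.
def mse(y, yhat):
    return float(np.mean((y - yhat) ** 2))

def mae(y, yhat):
    return float(np.mean(np.abs(y - yhat)))

def r2(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)          # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)      # total sum of squares
    return float(1 - ss_res / ss_tot)

y = np.array([35.0, 42.0, 55.0, 48.0, 60.0])     # observed PM2.5 (ug/m3), made up
yhat = np.array([33.0, 45.0, 52.0, 50.0, 58.0])  # one model's predictions, made up
```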

14 pages, 799 KiB  
Article
A GNN-Based False Data Detection Scheme for Smart Grids
by Junhong Qiu, Xinxin Zhang, Tao Wang, Huiying Hou, Siyuan Wang and Tiejun Yang
Algorithms 2025, 18(3), 166; https://doi.org/10.3390/a18030166 - 14 Mar 2025
Abstract
A Cyber-Physical System (CPS) incorporates communication dynamics and software into physical processes, providing abstractions, modeling, design, and analytical techniques for the system. Anomaly detection based on spatial-temporal graph neural networks (STGNNs) has been shown to detect anomalous data in smart grids with good performance. However, since topological changes of power networks in smart grids often already foreshadow anomalies, traditional STGNN-based models of network evolution cannot be directly applied to smart grids. Our research proposes an STGNN-based smart grid anomaly detection method that represents the evolution of the power network through the evolution of several attribute information networks that affect it, uses STGNNs to capture the spatio-temporal dependencies of nodes in these information networks, and applies a cross-domain method that supports power network anomaly detection with anomaly information from other related networks. Experimental results show that the abnormal-data detection rate of our scheme reaches 90% in the initial stage of data transmission, outperforming the comparative methods, and the detection rate increases further over time.

19 pages, 293 KiB  
Article
Where to Split in Hybrid Genetic Search for the Capacitated Vehicle Routing Problem
by Lars Magnus Hvattum
Algorithms 2025, 18(3), 165; https://doi.org/10.3390/a18030165 - 13 Mar 2025
Abstract
One of the best heuristic algorithms for solving the capacitated vehicle routing problem is hybrid genetic search. A critical component of the search is a splitting procedure, where a solution encoded as a giant tour of nodes is optimally split into vehicle routes using dynamic programming. However, the current state-of-the-art implementation of the splitting procedure assumes that the start of the giant tour is fixed as part of the encoded solution. This paper examines whether the fixed starting point is a significant drawback. Results indicate that simple adjustments of the starting point for the splitting procedure can improve the performance of the genetic search, as measured by the average primal gaps of the final solutions obtained, by 3.9%.
(This article belongs to the Special Issue Heuristic Optimization Algorithms for Logistics)
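The splitting procedure the abstract refers to is a classical dynamic program: given a giant tour (a fixed customer ordering), choose where to cut it into capacity-feasible routes so that total routing cost is minimal. A minimal O(n²) sketch on a toy instance (depot and distances are made up; the paper's contribution concerns where the tour is allowed to start, which this basic version fixes at position 0):

```python
import math

# Split a giant tour into capacitated routes by dynamic programming.
# cost[i] = best cost to serve the first i customers of the tour.
def split(tour, demand, dist, capacity):
    n = len(tour)
    cost = [math.inf] * (n + 1)
    cost[0] = 0.0
    for i in range(n):                 # a route starts after position i
        load, route_cost = 0.0, 0.0
        for j in range(i, n):          # the route serves tour[i..j]
            load += demand[tour[j]]
            if load > capacity:
                break
            if j == i:
                route_cost = dist[0][tour[j]] + dist[tour[j]][0]
            else:   # extend route: replace return arc, add the new leg
                route_cost += (dist[tour[j - 1]][tour[j]]
                               + dist[tour[j]][0] - dist[tour[j - 1]][0])
            cost[j + 1] = min(cost[j + 1], cost[i] + route_cost)
    return cost[n]

# Toy instance: depot 0 and customers 1-3 on a line, unit demands, capacity 2.
dist = [[abs(a - b) for b in range(4)] for a in range(4)]
best_cost = split([1, 2, 3], [0, 1, 1, 1], dist, capacity=2)
```

On this instance the optimal split is routes (1) and (2, 3), with total cost 8; keeping predecessor indices alongside `cost` would recover the cut points themselves.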

17 pages, 1344 KiB  
Article
A Two-Stage Multi-Objective Optimization Algorithm for Solving Large-Scale Optimization Problems
by Jiaqi Liu and Tianyu Liu
Algorithms 2025, 18(3), 164; https://doi.org/10.3390/a18030164 - 13 Mar 2025
Abstract
For large-scale multi-objective optimization, it is particularly challenging for evolutionary algorithms to converge to the Pareto front. Most existing multi-objective evolutionary algorithms (MOEAs) handle convergence and diversity in a mutually dependent manner during evolution, so degradation in one may lead to deterioration of the other. This paper proposes a two-stage multi-objective optimization algorithm based on decision variable clustering (LSMOEA-VT) to solve large-scale optimization problems. In LSMOEA-VT, decision variables are divided into two categories, and dimensionality reduction methods are used to optimize the variables that affect convergence. An interdependence analysis then breaks the convergence variables into multiple more tractable subcomponents. Furthermore, a non-dominated dynamic weight aggregation method is used to enhance the diversity of the population. To evaluate the proposed algorithm, we performed extensive comparative experiments against four optimization algorithms across a diverse set of benchmarks, including eight multi-objective optimization problems and nine large-scale optimization problems. The experimental results show that the proposed algorithm performs well on several test functions and is competitive.
(This article belongs to the Special Issue Algorithms for Complex Problems)
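The building block behind the non-dominated selection mentioned in the abstract is a Pareto-dominance filter. A sketch for minimization of all objectives, on a made-up four-point population:

```python
import numpy as np

# Return a boolean mask of non-dominated (Pareto-optimal) rows of F,
# where F has shape (n_points, n_objectives) and all objectives are minimized.
def non_dominated(F):
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # A point is dominated if some point is no worse in every objective
        # and strictly better in at least one.
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominates_i.any():
            mask[i] = False
    return mask

F = np.array([[1.0, 4.0], [2.0, 3.0], [3.0, 3.0], [4.0, 1.0]])
front = F[non_dominated(F)]
```

Here (3, 3) is dominated by (2, 3), so the front keeps the other three points.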

32 pages, 12235 KiB  
Article
Explainable MRI-Based Ensemble Learnable Architecture for Alzheimer’s Disease Detection
by Opeyemi Taiwo Adeniran, Blessing Ojeme, Temitope Ezekiel Ajibola, Ojonugwa Oluwafemi Ejiga Peter, Abiola Olayinka Ajala, Md Mahmudur Rahman and Fahmi Khalifa
Algorithms 2025, 18(3), 163; https://doi.org/10.3390/a18030163 - 13 Mar 2025
Abstract
With advances in deep learning methods, AI systems now perform at or above human level on many complex real-world problems. The data and algorithmic opacity of deep learning models, however, makes comprehending the input data, the model, and the model's decisions quite challenging. This lack of transparency is both a practical and an ethical issue. For the present study, it is a major drawback to deploying deep learning methods for detecting patterns and prognosticating Alzheimer's disease. Many approaches in the AI and medical literature overcome this critical weakness at the cost of sacrificing accuracy for interpretability. This study attempts to address this challenge and foster transparency and reliability in AI-driven healthcare solutions. It explores commonly used perturbation-based (LIME) and gradient-based (Saliency and Grad-CAM) interpretability approaches for visualizing and explaining the dataset, models, and decisions of MRI-based Alzheimer's disease identification, using the diagnostic and predictive strengths of an ensemble framework comprising Convolutional Neural Network (CNN) architectures (custom multi-classifier CNN, VGG-19, ResNet, MobileNet, EfficientNet, DenseNet) and a Vision Transformer (ViT). The experimental results show the stacking ensemble achieving a remarkable accuracy of 98.0%, while the hard-voting ensemble reached 97.0%. The findings contribute to the growing field of explainable artificial intelligence (XAI) in medical imaging, helping end users and researchers gain a deeper understanding of medical image datasets and of deep learning models' decisions.
(This article belongs to the Special Issue Algorithms for Computer Aided Diagnosis: 2nd Edition)
Show Figures

Figure 1
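The hard-voting ensemble this abstract reports (97.0% accuracy) amounts to a majority vote over the per-model class predictions. A minimal sketch, with illustrative model outputs and placeholder class labels rather than the authors' data:

```python
from collections import Counter

def hard_vote(predictions):
    """Majority vote over per-model class predictions for one sample.
    Ties are broken by whichever class was seen first among the votes."""
    return Counter(predictions).most_common(1)[0][0]

# Illustrative per-model predictions for three samples; the five votes per
# sample stand in for the CNN/ViT members of the ensemble.
per_model_votes = [
    ["mild", "mild", "normal", "mild", "mild"],       # sample 1
    ["normal", "normal", "normal", "mild", "mild"],   # sample 2
    ["severe", "severe", "mild", "severe", "severe"], # sample 3
]
ensemble = [hard_vote(votes) for votes in per_model_votes]
```

A stacking ensemble, by contrast, would feed these per-model outputs into a second-level classifier instead of counting votes.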

21 pages, 2339 KiB  
Article
Control of High-Power Slip Ring Induction Generator Wind Turbines at Variable Wind Speeds in Optimal and Reliable Modes
by Mircea-Bogdan Radac, Valentin-Dan Muller and Samuel Ciucuriță
Algorithms 2025, 18(3), 162; https://doi.org/10.3390/a18030162 - 11 Mar 2025
Viewed by 412
Abstract
This work analyzes high-power wind turbines (WTs) from the Oravita region, Romania. These WTs are based on a slip-ring induction generator with a wound rotor, and we propose a modified architecture with power converters on both the stator and the rotor, functioning at variable wind speeds spanning a large interval. Investigations developed around a realistic WT model with a doubly fed induction generator show how WT control enables variable-wind-speed operation at the optimal mechanical angular speed (MAS), guaranteeing the maximal power point (MPP), but only up to a critical wind speed value, after which the electrical power must saturate for reliable operation. In this reliable operating region, blade pitch angle control must be enforced. Variable wind speed acts as a time-varying parameter disturbance but also imposes the MPP operation setpoint in one of the two analyzed regions. To achieve null tracking errors, a double integrator must appear within the MAS controller when the wind speed disturbance is realistically modeled as a ramp-like input; however, inspecting the linearized model reveals several difficulties described in the paper, together with the proposed solution tradeoff. In the study, developed around the Fuhrlander FL-MD-70 1.5 MW WT model, several competitive controllers are designed and tested in the identified operating regions of interest, validating the reliable and performant functioning specifications. Full article
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)
Show Figures

Graphical abstract

15 pages, 2209 KiB  
Article
Deep Learning in Financial Modeling: Predicting European Put Option Prices with Neural Networks
by Zakaria Elbayed and Abdelmjid Qadi El Idrissi
Algorithms 2025, 18(3), 161; https://doi.org/10.3390/a18030161 - 11 Mar 2025
Viewed by 661
Abstract
This paper explores the application of deep neural networks (DNNs) as an alternative to the traditional Black–Scholes model for predicting European put option prices. Using synthetic datasets generated under the Black–Scholes framework, the proposed DNN achieved strong predictive performance, with a Mean Squared Error (MSE) of 0.0021 and a coefficient of determination (R²) of 0.9533. This study highlights the scalability and adaptability of DNNs to complex financial systems, offering potential applications in real-time risk management and the pricing of exotic derivatives. While synthetic datasets provide a controlled environment, this study acknowledges the challenges of extending the model to real-world financial data, paving the way for future research to address these limitations. Full article
(This article belongs to the Special Issue Emerging Trends in Distributed AI for Smart Environments)
Show Figures

Figure 1
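The synthetic training targets described in this abstract come from the closed-form Black–Scholes put price, P = K e^(−rT) N(−d₂) − S N(−d₁). A minimal sketch of that generator (parameter values here are illustrative, not the paper's dataset):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_put(S, K, r, sigma, T):
    """Closed-form Black–Scholes price of a European put."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)

# One synthetic sample: at-the-money put, 5% rate, 20% vol, 1-year maturity.
price = bs_put(S=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0)
```

Sweeping (S, K, r, sigma, T) over grids of such calls yields the kind of labeled dataset a DNN can then be fit to.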

44 pages, 4296 KiB  
Article
Hybrid Optimization Algorithm for Solving Attack-Response Optimization and Engineering Design Problems
by Ahmad K. Al Hwaitat, Hussam N. Fakhouri, Jamal Zraqou and Najem Sirhan
Algorithms 2025, 18(3), 160; https://doi.org/10.3390/a18030160 - 10 Mar 2025
Viewed by 474
Abstract
This paper presents JADEDO, a hybrid optimization method that merges the dandelion optimizer’s (DO) dispersal-inspired stages with JADE’s (adaptive differential evolution) dynamic mutation and crossover operators. By integrating these complementary mechanisms, JADEDO effectively balances global exploration and local exploitation for both unimodal and multimodal search spaces. Extensive benchmarking against classical and cutting-edge metaheuristics on the IEEE CEC2022 functions—encompassing unimodal, multimodal, and hybrid landscapes—demonstrates that JADEDO achieves highly competitive results in terms of solution accuracy, convergence speed, and robustness. Statistical analysis using Wilcoxon rank-sum tests further underscores JADEDO’s consistent advantage over several established optimizers, reflecting its proficiency in navigating complex, high-dimensional problems. To validate its real-world applicability, JADEDO was also evaluated on three engineering design problems (pressure vessel, spring, and speed reducer). Notably, it achieved top-tier or near-optimal designs in constrained, high-stakes environments. Moreover, to demonstrate suitability for security-oriented tasks, JADEDO was applied to an attack-response optimization scenario, efficiently identifying cost-effective, low-risk countermeasures under stringent time constraints. These collective findings highlight JADEDO as a robust, flexible, and high-performing framework capable of tackling both benchmark-oriented and practical optimization challenges. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
Show Figures

Figure 1
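The JADE side of the hybrid contributes the "DE/current-to-pbest/1" mutation, v_i = x_i + F(x_pbest − x_i) + F(x_r1 − x_r2). A sketch of that single operator (population, indices, and F are illustrative; the full JADEDO pipeline also adapts F and blends in the DO dispersal stages):

```python
import numpy as np

def current_to_pbest_mutation(pop, i, pbest_idx, r1, r2, F=0.5):
    """JADE's DE/current-to-pbest/1 mutant vector for individual i:
    v_i = x_i + F*(x_pbest - x_i) + F*(x_r1 - x_r2)."""
    return pop[i] + F * (pop[pbest_idx] - pop[i]) + F * (pop[r1] - pop[r2])

rng = np.random.default_rng(0)
pop = rng.standard_normal((10, 5))   # 10 individuals, 5 dimensions
v = current_to_pbest_mutation(pop, i=0, pbest_idx=3, r1=1, r2=2)
```

Pulling toward one of the p best individuals (rather than the single best) is what keeps the operator exploratory while still exploiting good regions.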

18 pages, 4249 KiB  
Article
Total Outer-Independent Domination Number: Bounds and Algorithms
by Paul Bosch, Ernesto Parra Inza, Ismael Rios Villamar and José Luis Sánchez-Santiesteban
Algorithms 2025, 18(3), 159; https://doi.org/10.3390/a18030159 - 10 Mar 2025
Viewed by 438
Abstract
In graph theory, the study of dominating sets has garnered significant interest due to its applications in network design and analysis. Consider a graph G(V,E); a subset of its vertices is a total dominating set (TDS) if, for each x ∈ V(G), there exists an edge in E(G) connecting x to at least one vertex within this subset. If the subgraph induced by the vertices outside the TDS has no edges, the set is called a total outer-independent dominating set (TOIDS). The total outer-independent domination number, denoted as γtoi(G), represents the smallest cardinality of such a set. Deciding if a given graph has a TOIDS with at most r vertices is an NP-complete problem. This study introduces new lower and upper bounds for γtoi(G) and presents an exact solution approach using integer linear programming (ILP). Additionally, we develop a heuristic and a procedure to efficiently obtain minimal TOIDS. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
Show Figures

Figure 1
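The two defining conditions of a TOIDS (every vertex has a neighbour in the set; the outside vertices induce no edges) translate directly into a feasibility check. A sketch of that verifier on an adjacency-dict graph (this is the definition from the abstract, not the paper's ILP or heuristic):

```python
def is_toids(adj, S):
    """Check whether S is a total outer-independent dominating set of the
    graph given by adjacency dict `adj`: every vertex must have at least one
    neighbour in S (total domination), and the vertices outside S must be
    pairwise non-adjacent (outer independence)."""
    S = set(S)
    total_dom = all(any(u in S for u in adj[v]) for v in adj)
    outside = set(adj) - S
    outer_indep = all(u not in outside for v in outside for u in adj[v])
    return total_dom and outer_indep

# Path graph 0-1-2-3: {1, 2} dominates totally and leaves {0, 3} independent.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

An exact solver would minimise |S| subject to these two constraints, which is exactly what the paper's ILP formulation encodes.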

19 pages, 1715 KiB  
Article
Gradual Optimization of University Course Scheduling Problem Using Genetic Algorithm and Dynamic Programming
by Xu Han and Dian Wang
Algorithms 2025, 18(3), 158; https://doi.org/10.3390/a18030158 - 10 Mar 2025
Viewed by 776
Abstract
The university course scheduling problem (UCSP) is a challenging combinatorial optimization problem that requires optimizing schedule quality and resource utilization while meeting multiple constraints involving courses, teachers, students, and classrooms. Although various algorithms have been applied to solve the UCSP, most existing methods are limited to scheduling independent courses, neglecting the impact of joint courses on the overall scheduling results. To address this limitation, this paper proposes an innovative mixed-integer linear programming model capable of handling the complex constraints of both joint and independent courses simultaneously. To improve the computational efficiency and solution quality, a hybrid method combining a genetic algorithm and dynamic programming, named POGA-DP, was designed. Compared to traditional algorithms, POGA-DP introduces exchange operations based on a judgment mechanism and mutation operations with a forced repair mechanism to effectively avoid local optima. Additionally, by incorporating a greedy algorithm for classroom allocation, the utilization of classroom resources was further enhanced. To verify the performance of the new method, this study not only tested it on real UCSP instances at Beijing Forestry University but also conducted comparative experiments with several classic algorithms, including a traditional GA, Ant Colony Optimization (ACO), the Producer–Scrounger Method (PSM), and particle swarm optimization (PSO). The results showed that POGA-DP improved the scheduling quality by 46.99% compared to the traditional GA and reduced classroom usage by up to 29.27%. Furthermore, POGA-DP increased classroom utilization by 0.989% compared to the traditional GA and demonstrated an outstanding performance in solving joint course scheduling problems. This study also analyzed the stability of the scheduling results, revealing that POGA-DP maintained a high level of consistency in scheduling across adjacent weeks, proving its feasibility and stability in practical applications. In conclusion, POGA-DP outperformed the existing algorithms in the UCSP, making it particularly suitable for efficient scheduling under complex constraints. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
Show Figures

Figure 1

14 pages, 3621 KiB  
Article
AI Under Attack: Metric-Driven Analysis of Cybersecurity Threats in Deep Learning Models for Healthcare Applications
by Sarfraz Brohi and Qurat-ul-ain Mastoi
Algorithms 2025, 18(3), 157; https://doi.org/10.3390/a18030157 - 10 Mar 2025
Viewed by 736
Abstract
Incorporating Artificial Intelligence (AI) in healthcare has transformed disease diagnosis and treatment by offering unprecedented benefits. However, it has also revealed critical cybersecurity vulnerabilities in Deep Learning (DL) models, which pose significant risks to patient safety and to trust in AI-driven applications. Existing studies primarily focus on theoretical vulnerabilities or specific attack types, leaving a gap in understanding the practical implications of multiple attack scenarios on healthcare AI. In this paper, we provide a comprehensive analysis of key attack vectors, including adversarial attacks such as the gradient-based Fast Gradient Sign Method (FGSM), evasion attacks (perturbation-based), and data poisoning, which threaten the reliability of DL models, with a specific focus on breast cancer detection. We propose the Healthcare AI Vulnerability Assessment Algorithm (HAVA), which systematically simulates these attacks, calculates the Post-Attack Vulnerability Index (PAVI), and quantitatively evaluates their impacts. Our findings revealed that the adversarial FGSM and evasion attacks significantly reduced model accuracy from 97.36% to 61.40% (PAVI: 0.385965) and 62.28% (PAVI: 0.377193), respectively, demonstrating their severe impact on performance, whereas data poisoning had a milder effect, retaining 89.47% accuracy (PAVI: 0.105263). The confusion matrices also revealed a higher rate of false positives under the adversarial FGSM and evasion attacks than the more balanced misclassification patterns observed under data poisoning. By proposing a unified framework for quantifying and analyzing these post-attack vulnerabilities, this research contributes to formulating resilient AI models for critical domains where accuracy and reliability are paramount. Full article
Show Figures

Figure 1
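The FGSM attack the abstract names perturbs an input by x_adv = x + ε · sign(∇ₓ L). A minimal sketch on a two-feature logistic model (weights, data, and ε are illustrative; HAVA itself operates on the authors' breast-cancer model):

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """FGSM against a logistic model: for cross-entropy loss the input
    gradient is (sigmoid(w.x + b) - y) * w, and the adversarial example
    steps eps in the direction of its sign."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted probability of class 1
    grad_x = (p - y) * w                    # dL/dx for cross-entropy loss
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0]); b = 0.0
x = np.array([1.0, 1.0]); y = 1.0   # correctly classified: w @ x = 1 > 0
x_adv = fgsm_perturb(x, w, b, y, eps=0.6)   # pushes the score below zero
```

Even this tiny example shows the failure mode HAVA quantifies: a correctly classified sample is flipped by a bounded, gradient-aligned perturbation.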

19 pages, 2026 KiB  
Review
Quantum Computing and Machine Learning in Medical Decision-Making: A Comprehensive Review
by James C. L. Chow
Algorithms 2025, 18(3), 156; https://doi.org/10.3390/a18030156 - 9 Mar 2025
Cited by 2 | Viewed by 1429
Abstract
Medical decision-making is increasingly integrating quantum computing (QC) and machine learning (ML) to analyze complex datasets, improve diagnostics, and enable personalized treatments. While QC holds the potential to accelerate optimization, drug discovery, and genomic analysis as hardware capabilities advance, current implementations remain limited compared to classical computing in many practical applications. Meanwhile, ML has already demonstrated significant success in medical imaging, predictive modeling, and decision support. Their convergence, particularly through quantum machine learning (QML), presents opportunities for future advancements in processing high-dimensional healthcare data and improving clinical outcomes. This review examines the foundational concepts, key applications, and challenges of these technologies in healthcare, explores their potential synergy in solving clinical problems, and outlines future directions for quantum-enhanced ML in medical decision-making. Full article
Show Figures

Figure 1

30 pages, 34873 KiB  
Article
Text-Guided Synthesis in Medical Multimedia Retrieval: A Framework for Enhanced Colonoscopy Image Classification and Segmentation
by Ojonugwa Oluwafemi Ejiga Peter, Opeyemi Taiwo Adeniran, Adetokunbo MacGregor John-Otumu, Fahmi Khalifa and Md Mahmudur Rahman
Algorithms 2025, 18(3), 155; https://doi.org/10.3390/a18030155 - 9 Mar 2025
Viewed by 777
Abstract
The lack of extensive, varied, and thoroughly annotated datasets impedes the advancement of artificial intelligence (AI) for medical applications, especially colorectal cancer detection. Models trained with limited diversity often display biases, especially when utilized on disadvantaged groups. Generative models (e.g., DALL-E 2 and the Vector-Quantized Generative Adversarial Network (VQ-GAN)) have been used to generate images, but not colonoscopy data, for intelligent data augmentation. This study developed an effective method for producing synthetic colonoscopy image data, which can be used to train advanced medical diagnostic models for robust colorectal cancer detection and treatment. Text-to-image synthesis was performed using fine-tuned Visual Large Language Models (LLMs). Stable Diffusion and DreamBooth Low-Rank Adaptation produce images that look authentic, with an average Inception score of 2.36 across three datasets. The validation accuracies of the classification models Big Transfer (BiT), Fixed Resolution Residual Next Generation Network (FixResNeXt), and Efficient Neural Network (EfficientNet) were 92%, 91%, and 86%, respectively; the Vision Transformer (ViT) and Data-Efficient Image Transformers (DeiT) reached 93%. Secondly, for the segmentation of polyps, ground truth masks were generated using the Segment Anything Model (SAM). Then, five segmentation models (U-Net, the Pyramid Scene Parsing Network (PSPNet), the Feature Pyramid Network (FPN), the Link Network (LinkNet), and the Multi-scale Attention Network (MANet)) were adopted. The FPN produced excellent results, with an Intersection over Union (IoU) of 0.64, an F1 score of 0.78, a recall of 0.75, and a Dice coefficient of 0.77, demonstrating strong segmentation accuracy and overlap, with particularly robust balanced detection capability as shown by the high F1 score and Dice coefficient. This highlights how AI-generated medical images can improve colonoscopy analysis, which is critical for early colorectal cancer detection. Full article
Show Figures

Figure 1
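The IoU and Dice scores this abstract reports are both overlap ratios between a predicted mask and the ground truth, related by Dice = 2·IoU / (1 + IoU). A sketch on toy binary masks represented as pixel sets (the masks here are illustrative, not the study's data):

```python
def iou_dice(pred, truth):
    """IoU and Dice coefficient for binary masks given as sets of pixel
    coordinates. Dice weights the intersection twice, so it is always at
    least as large as IoU for overlapping masks."""
    inter = len(pred & truth)
    union = len(pred | truth)
    iou = inter / union
    dice = 2 * inter / (len(pred) + len(truth))
    return iou, dice

pred  = {(0, 0), (0, 1), (1, 0)}   # 3-pixel predicted mask
truth = {(0, 1), (1, 0), (1, 1)}   # 3-pixel ground-truth mask
iou, dice = iou_dice(pred, truth)
```

For the paper's FPN numbers the identity checks out to rounding: 2·0.64 / 1.64 ≈ 0.78, consistent with the reported Dice of 0.77.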

25 pages, 2129 KiB  
Article
An Adaptive Feature-Based Quantum Genetic Algorithm for Dimension Reduction with Applications in Outlier Detection
by Tin H. Pham and Bijan Raahemi
Algorithms 2025, 18(3), 154; https://doi.org/10.3390/a18030154 - 8 Mar 2025
Viewed by 520
Abstract
Dimensionality reduction is essential in machine learning, reducing dataset dimensions while enhancing classification performance. Feature selection, a key subset of dimensionality reduction, identifies the most relevant features. Genetic Algorithms (GAs) are widely used for feature selection due to their robust exploration and efficient convergence. However, GAs often suffer from premature convergence, getting stuck in local optima. The Quantum Genetic Algorithm (QGA) addresses this limitation by introducing quantum representations to enhance the search process. To further improve QGA performance, we propose an Adaptive Feature-Based Quantum Genetic Algorithm (FbQGA), which strengthens exploration and exploitation through quantum representation and adaptive quantum rotation. The rotation angle dynamically adjusts based on feature significance, optimizing feature selection. FbQGA is applied to outlier detection tasks and benchmarked against basic GA and QGA variants on five high-dimensional, imbalanced datasets. Performance is evaluated using metrics such as classification accuracy, F1 score, precision, recall, selected feature count, and computational cost. The results consistently show FbQGA outperforming the other methods, with significant improvements in feature selection efficiency and computational cost. These findings highlight FbQGA’s potential as an advanced tool for feature selection in complex datasets. Full article
(This article belongs to the Special Issue Evolutionary and Swarm Computing for Emerging Applications)
Show Figures

Figure 1
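In a QGA, each feature is a qubit (α, β) with α² + β² = 1, updated by a rotation gate; the "adaptive" part here is scaling the rotation angle by feature significance. A sketch of that update (the significance weighting below is a placeholder assumption, not the paper's exact rule):

```python
import math

def rotate_qubit(alpha, beta, theta):
    """Apply the QGA rotation gate [[cos t, -sin t], [sin t, cos t]] to a
    qubit's amplitudes; being a rotation, it preserves alpha^2 + beta^2."""
    a = math.cos(theta) * alpha - math.sin(theta) * beta
    b = math.sin(theta) * alpha + math.cos(theta) * beta
    return a, b

def adaptive_angle(significance, base=0.05 * math.pi):
    """Illustrative adaptive rule: scale a base rotation step by a
    per-feature significance weight in [0, 1]."""
    return base * significance

a, b = 1 / math.sqrt(2), 1 / math.sqrt(2)   # start in uniform superposition
a, b = rotate_qubit(a, b, adaptive_angle(significance=0.5))
```

Measuring each qubit (selecting the feature with probability β²) then yields a candidate feature subset for the fitness evaluation.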

28 pages, 2644 KiB  
Article
The Euler-Type Universal Numerical Integrator (E-TUNI) with Backward Integration
by Paulo M. Tasinaffo, Gildárcio S. Gonçalves, Johnny C. Marques, Luiz A. V. Dias and Adilson M. da Cunha
Algorithms 2025, 18(3), 153; https://doi.org/10.3390/a18030153 - 8 Mar 2025
Viewed by 361
Abstract
The Euler-Type Universal Numerical Integrator (E-TUNI) is a discrete numerical structure that couples a first-order Euler-type numerical integrator with a feed-forward neural network architecture. Thus, E-TUNI can be used to model non-linear dynamic systems when the real-world plant’s analytical model is unknown. From the discrete solution provided by E-TUNI, the integration process can be either forward or backward. In this article, we use E-TUNI in a backward integration framework to model autonomous non-linear dynamic systems. Three case studies, including the dynamics of the non-linear inverted pendulum, were developed to verify the computational and numerical validation of the proposed model. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 3rd Edition)
Show Figures

Figure 1
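The forward/backward structure the abstract describes can be sketched with a plain first-order Euler scheme; in E-TUNI the derivative function would be the trained feed-forward network, while here a known ODE (dx/dt = −x) stands in for it:

```python
def euler_path(f, x0, h, steps):
    """First-order Euler trajectory x_{k+1} = x_k + h * f(x_k).
    Backward integration is obtained by stepping with a negated h."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + h * f(xs[-1]))
    return xs

f = lambda x: -x                      # stand-in for the trained network
fwd = euler_path(f, 1.0, 0.01, 100)   # forward integration from x(0) = 1
back = euler_path(f, fwd[-1], -0.01, 100)   # backward from the endpoint
```

The backward pass approximately retraces the forward trajectory, with a small first-order discrepancy that shrinks with the step size h.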

23 pages, 69279 KiB  
Article
A Novel Equivariant Self-Supervised Vector Network for Three-Dimensional Point Clouds
by Kedi Shen, Jieyu Zhao and Min Xie
Algorithms 2025, 18(3), 152; https://doi.org/10.3390/a18030152 - 7 Mar 2025
Viewed by 579
Abstract
For networks that process 3D data, estimating the orientation and position of 3D objects is a challenging task. This is because the traditional networks are not robust to the rotation of the data, and their internal workings are largely opaque and uninterpretable. To solve this problem, a novel equivariant self-supervised vector network for point clouds is proposed. The network can learn the rotation direction information of the 3D target and estimate the rotational pose change of the target, and the interpretability of the equivariant network is studied using information theory. The utilization of vector neurons within the network lifts the scalar data to vector representations, enabling the network to learn the pose information inherent in the 3D target. The network can perform complex rotation-equivariant tasks after pre-training, and it shows impressive performance in complex tasks like category-level pose change estimation and rotation-equivariant reconstruction. We demonstrate through experiments that our network can accurately detect the orientation and pose change of point clouds and visualize the latent features. Moreover, it performs well in invariant tasks such as classification and category-level segmentation. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
Show Figures

Figure 1

31 pages, 3248 KiB  
Systematic Review
Diagnosis and Management of Sexually Transmitted Infections Using Artificial Intelligence Applications Among Key and General Populations in Sub-Saharan Africa: A Systematic Review and Meta-Analysis
by Claris Siyamayambo, Edith Phalane and Refilwe Nancy Phaswana-Mafuya
Algorithms 2025, 18(3), 151; https://doi.org/10.3390/a18030151 - 7 Mar 2025
Viewed by 695
Abstract
The Fourth Industrial Revolution (4IR) has significantly impacted healthcare, including sexually transmitted infection (STI) management in Sub-Saharan Africa (SSA), particularly among key populations (KPs) with limited access to health services. This review investigates 4IR technologies, including artificial intelligence (AI) and machine learning (ML), that assist in diagnosing, treating, and managing STIs across SSA. By leveraging affordable and accessible solutions, 4IR tools support KPs who are disproportionately affected by STIs. Following systematic review guidelines using Covidence, this study examined 20 relevant studies conducted across 20 SSA countries, with Ethiopia, South Africa, and Zimbabwe emerging as the most researched nations. All the studies reviewed used secondary data and favored supervised ML models, with random forest and XGBoost frequently demonstrating high performance. These tools assist in tracking access to services, predicting risks of STI/HIV, and developing models for community HIV clusters. While AI has enhanced the accuracy of diagnostics and the efficiency of management, several challenges persist, including ethical concerns, issues with data quality, and a lack of expertise in implementation. There are few real-world applications or pilot projects in SSA. Notably, most of the studies primarily focus on the development, validation, or technical evaluation of the ML methods rather than their practical application or implementation. As a result, the actual impact of these approaches on the point of care remains unclear. This review highlights the effectiveness of various AI and ML methods in managing HIV and STIs through detection, diagnosis, treatment, and monitoring. The study strengthens knowledge on the practical application of 4IR technologies in diagnosing, treating, and managing STIs across SSA. Understanding this has the potential to improve sexual health outcomes, address gaps in STI diagnosis, and surpass the limitations of traditional syndromic management approaches. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
Show Figures

Figure 1

32 pages, 1286 KiB  
Article
Real-Time Fuzzy Record-Matching Similarity Metric and Optimal Q-Gram Filter
by Ondřej Rozinek, Jaroslav Marek, Jan Panuš and Jan Mareš
Algorithms 2025, 18(3), 150; https://doi.org/10.3390/a18030150 - 6 Mar 2025
Viewed by 536
Abstract
In this paper, we introduce an advanced Fuzzy Record Similarity Metric (FRMS) that improves approximate record matching and models human perception of record similarity. The FRMS utilizes a newly developed similarity space with favorable properties combined with a metric space, employing a bag-of-words model with general applications in text mining and cluster analysis. To optimize the FRMS, we propose a two-stage method for approximate string matching and search that outperforms baseline methods in terms of average time complexity and F-measure on various datasets. In the first stage, we construct an optimal Q-gram count filter as an optimal lower bound for fuzzy token similarities such as the FRMS. The approximated Q-gram count filter achieves a high accuracy rate, filtering out over 99% of dissimilar records with a constant time complexity of O(1). In the second stage, the FRMS runs in polynomial time of approximately O(n⁴) and models human perception of record similarity by maximum-weight matching in a bipartite graph. The FRMS architecture has widespread applications in structured document storage such as databases and has already been commercialized by one of the largest IT companies. As a side result, we explain the behavior of the singularity of the Q-gram filter and the advantages of a padding extension. Overall, our method provides a more accurate and efficient approach to approximate string matching and search with real-time performance. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
Show Figures

Figure 1
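The first-stage idea rests on the classical q-gram lemma: strings within edit distance k share at least max(|s|,|t|) − q + 1 − k·q q-grams, so a cheap overlap count can reject most dissimilar pairs before the expensive metric runs. A sketch of that count filter (the paper's optimal filter is tuned for fuzzy token similarities; this shows only the textbook bound):

```python
from collections import Counter

def qgrams(s, q=2):
    """Multiset of overlapping q-grams of s."""
    return Counter(s[i:i + q] for i in range(len(s) - q + 1))

def qgram_overlap(s, t, q=2):
    """Size of the multiset intersection of the two q-gram profiles."""
    return sum((qgrams(s, q) & qgrams(t, q)).values())

def passes_count_filter(s, t, k, q=2):
    """q-gram lemma: strings within edit distance k share at least
    max(|s|,|t|) - q + 1 - k*q q-grams; pairs below that bound can be
    discarded without computing the full similarity."""
    need = max(len(s), len(t)) - q + 1 - k * q
    return qgram_overlap(s, t, q) >= need
```

Because the filter only counts shared q-grams, it admits some false positives but never discards a truly close pair, which is what makes it a safe pre-stage for the FRMS.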

19 pages, 587 KiB  
Article
Simple Rules of a Discrete Stochastic Process Leading to Catalan-like Recurrences
by Mariusz Białecki
Algorithms 2025, 18(3), 149; https://doi.org/10.3390/a18030149 - 6 Mar 2025
Viewed by 472
Abstract
A method for obtaining integer sequences is presented by defining simple rules for the evolution of a discrete dynamical system. This paper demonstrates that various Catalan-like recurrences of known integer sequences can be obtained from a single stochastic process defined by simple rules. The resulting exact equations that describe the stationary state of the process are derived using combinatorial analysis. A specific reduction of the process is applied, and the solvability of the reduced system of equations is demonstrated. Then, a procedure for providing appropriate parameters for a given sequence is formulated. The general method is illustrated with examples of Catalan, Motzkin, Schröder, and A064641 integer sequences. We also point out that by appropriately changing the parameters of the system, one can smoothly transition between distributions related to Motzkin numbers and shifted Catalan numbers. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
Show Figures

Figure 1
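The Catalan and Motzkin sequences named in the abstract obey the convolution-style recurrences that the paper derives from its stochastic process. A sketch of both recurrences directly (these are the standard formulas, not the paper's stationary-state equations):

```python
def catalan(n_terms):
    """Catalan numbers via C_0 = 1, C_{n+1} = sum_{i=0}^{n} C_i * C_{n-i}."""
    C = [1]
    for n in range(n_terms - 1):
        C.append(sum(C[i] * C[n - i] for i in range(n + 1)))
    return C

def motzkin(n_terms):
    """Motzkin numbers via M_0 = M_1 = 1,
    M_n = M_{n-1} + sum_{i=0}^{n-2} M_i * M_{n-2-i}."""
    M = [1, 1]
    for n in range(2, n_terms):
        M.append(M[n - 1] + sum(M[i] * M[n - 2 - i] for i in range(n - 1)))
    return M[:n_terms]
```

The paper's point is that tuning the parameters of one simple stochastic process interpolates between distributions governed by such recurrences.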

24 pages, 7248 KiB  
Article
CEEMDAN-IHO-SVM: A Machine Learning Research Model for Valve Leak Diagnosis
by Ruixue Wang and Ning Zhao
Algorithms 2025, 18(3), 148; https://doi.org/10.3390/a18030148 - 5 Mar 2025
Viewed by 437
Abstract
Due to the complex operating environment of valves, when a fault occurs inside a valve, the vibration signal generated by the fault is easily affected by environmental noise, making the extraction of fault features difficult. To address this problem, this paper proposes a feature extraction method based on the combination of Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Fuzzy Entropy (FE). Because the Hippopotamus Optimization (HO) algorithm converges slowly and tends to fall into local optimal solutions, an improved Hippopotamus Optimization (IHO) algorithm-optimized Support Vector Machine (SVM) model for valve leakage diagnosis is introduced to further enhance the accuracy of valve leakage diagnosis. The improved algorithm initializes the hippopotamus population with Tent chaotic mapping, designs an adaptive weight factor, and incorporates adaptive variation perturbation. Moreover, the performance of IHO was shown to be optimal compared to HO, Particle Swarm Optimization (PSO), Grey Wolf Optimization (GWO), the Whale Optimization Algorithm (WOA), and the Sparrow Search Algorithm (SSA) on twelve test functions. Subsequently, the IHO-SVM classification model was established and applied to valve leakage diagnosis. The prediction performance of seven models, IHO-SVM, HO-SVM, PSO-SVM, GWO-SVM, WOA-SVM, SSA-SVM, and SVM, was compared and analyzed on actual data. The comparison indicated that IHO-SVM has desirable robustness and generalization, successfully improving the classification efficiency and the recognition rate in fault diagnosis. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
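Of the modifications described in the abstract, Tent chaotic mapping for population initialization is the most self-contained. The following is a minimal, hypothetical sketch of that step only; the function name, parameters, and the choice of tent parameter a = 0.5 are our own assumptions, not the authors' implementation.

```python
import numpy as np

def tent_chaotic_init(pop_size, dim, lower, upper, a=0.5, seed=None):
    """Illustrative Tent-chaotic-map population initialization.

    The tent map x' = x/a (x < a) else (1-x)/(1-a) produces a
    well-spread sequence in (0, 1), which is then scaled into the
    search bounds [lower, upper] for each individual and dimension.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.01, 0.99)  # avoid the map's fixed points 0 and 1
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        for j in range(dim):
            x = x / a if x < a else (1.0 - x) / (1.0 - a)
            pop[i, j] = lower + x * (upper - lower)
    return pop
```

Compared with uniform random initialization, chaotic sequences are commonly used in metaheuristics to cover the search space more evenly, which is the stated motivation for this modification.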
14 pages, 6384 KiB  
Article
Parallel CUDA-Based Optimization of the Intersection Calculation Process in the Greiner–Hormann Algorithm
by Jiwei Zuo, Junfu Fan, Kuan Li, Qingyun Liu, Yuke Zhou and Yi Zhang
Algorithms 2025, 18(3), 147; https://doi.org/10.3390/a18030147 - 5 Mar 2025
Viewed by 429
Abstract
The Greiner–Hormann algorithm is a commonly used polygon overlay analysis algorithm. It stores vertex data in a doubly linked list, and its intersection calculation step has a significant effect on the overall efficiency of the algorithm. To address this time-consuming step, this paper presents two kernel functions that implement a GPU-parallel improvement of the algorithm based on CUDA multi-threading. The method allocates a thread to each edge of the subject polygon, determines in parallel whether that edge intersects each edge of the clipping polygon, transfers the intersection counts back to the CPU, allocates corresponding storage on the GPU based on the total number of intersection points, and then computes information such as intersection coordinates in parallel. In addition, experiments were conducted on eight polygons of different complexities, and the optimal thread configuration, running time, and speedup ratio of the parallel algorithm were statistically analyzed. The results show that the parallelized step of the Greiner–Hormann algorithm achieves its highest computational efficiency when a single CUDA thread block contains 64 or 128 threads. When the complexity of the subject polygon exceeds 53,000, the parallel algorithm obtains a speedup of approximately three times over the serial algorithm. This shows that the proposed design can effectively improve the efficiency of polygon overlay analysis in the current large-scale data context. Full article
(This article belongs to the Collection Parallel and Distributed Computing: Algorithms and Applications)
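The counting pass described in the abstract (one thread per subject edge, each testing every clipping edge, with the counts reduced on the CPU to size the GPU-side intersection buffer) can be illustrated with a serial Python sketch. This is not the authors' CUDA code: the orientation-based proper-intersection test and all names are our own, and the outer loop over subject edges stands in for the per-edge CUDA threads.

```python
import numpy as np

def cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, q1, q2):
    """Proper (interior) intersection test via orientation signs."""
    d1 = cross(q1, q2, p1)
    d2 = cross(q1, q2, p2)
    d3 = cross(p1, p2, q1)
    d4 = cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def count_intersections(subject, clip):
    """Per-edge intersection counts for the subject polygon.

    Each iteration of the outer loop corresponds to one CUDA thread
    in the paper's scheme; summing the counts on the host would then
    determine how much GPU storage to allocate for the coordinates.
    """
    n, m = len(subject), len(clip)
    counts = np.zeros(n, dtype=int)
    for i in range(n):
        p1, p2 = subject[i], subject[(i + 1) % n]
        for j in range(m):
            q1, q2 = clip[j], clip[(j + 1) % m]
            if segments_intersect(p1, p2, q1, q2):
                counts[i] += 1
    return counts
```

For example, two overlapping unit-offset squares produce exactly two proper edge crossings, one on each of the two subject edges that enter the clipping polygon.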