Algorithms, Volume 18, Issue 11 (November 2025) – 59 articles

Cover Story: This article presents a MATLAB-based application for automated ECG signal analysis and abnormality detection. Using single-lead ECG inputs, the system filters noise, detects QRS complexes, identifies P- and T-wave boundaries, computes PQ and QT intervals, and evaluates heart rate. A multi-class, multi-label SVM classifier, trained on the LUDB dataset, assigns clinically meaningful diagnoses across eight diagnostic categories. The tool enables efficient processing of raw ECG signals and supports both database and user-uploaded inputs, offering a practical solution for rapid, accurate ECG assessment.
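For readers who want to see the signal-processing stage in concrete form, the sketch below implements a generic band-pass-and-threshold QRS detector in Python. The paper's tool is MATLAB-based; the 5-15 Hz band, the 0.4 threshold, and the 500 Hz sampling rate here are illustrative assumptions, not the authors' settings.

```python
# A minimal sketch of a band-pass-and-threshold QRS detector, assuming a
# single-lead signal sampled at fs Hz; filter band and threshold are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_qrs(ecg, fs=500):
    """Band-pass filter the signal and locate QRS peaks by energy thresholding."""
    b, a = butter(3, [5 / (fs / 2), 15 / (fs / 2)], btype="band")  # 5-15 Hz QRS band
    filtered = filtfilt(b, a, ecg)
    energy = filtered ** 2                                 # emphasize QRS energy
    threshold = 0.4 * np.max(energy)                       # illustrative fixed threshold
    peaks, _ = find_peaks(energy, height=threshold, distance=int(0.25 * fs))
    return peaks                                           # sample indices of QRS complexes

def heart_rate(peaks, fs=500):
    """Mean heart rate in beats per minute from successive R-R intervals."""
    rr = np.diff(peaks) / fs                               # R-R intervals in seconds
    return 60.0 / np.mean(rr)
```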
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
31 pages, 2303 KB  
Article
Segmenting Action-Value Functions over Time Scales in SARSA via TD(Δ)
by Mahammad Humayoo, Gengzhong Zheng, Xiaoqing Dong, Wei Huang, Liming Miao, Shuwei Qiu, Zexun Zhou, Peitao Wang, Zakir Ullah, Naveed Ur Rehman Junejo and Xueqi Cheng
Algorithms 2025, 18(11), 729; https://doi.org/10.3390/a18110729 - 20 Nov 2025
Viewed by 150
Abstract
In numerous episodic reinforcement learning (RL) environments, SARSA-based methodologies are employed to improve policies aimed at maximizing returns over long horizons. Traditional SARSA algorithms face challenges in achieving an optimal balance between bias and variance, primarily due to their dependence on a single, constant discount factor (η). This study extends the temporal difference decomposition method, TD(Δ), to SARSA, a widely used on-policy RL method that improves action-value functions via temporal difference updates. In the resulting technique, SARSA(Δ), the action-value function is segmented into several components based on the differences between action-value functions associated with distinct discount factors; each component, referred to as a delta estimator (D), is linked to a specific discount factor and learned independently. This decomposition facilitates learning across a range of time scales, making learning more effective and ensuring consistency, especially in situations where long-horizon improvement is needed. The results show that the proposed technique lowers the bias of SARSA's updates and accelerates convergence in both deterministic and stochastic settings, even in dense-reward Atari environments. Experimental results from a variety of benchmark settings show that SARSA(Δ) outperforms existing TD learning techniques in both tabular and deep RL environments.
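To make the decomposition concrete, the tabular sketch below maintains one delta estimator per discount factor and gives each its own SARSA-style update, following the standard TD(Δ) recursion for the higher rungs; the discount ladder, learning rate, and table sizes are illustrative assumptions, not the paper's experimental settings.

```python
# A minimal tabular sketch of the TD(Δ) decomposition applied to SARSA, assuming
# discrete states/actions; gammas and alpha are illustrative choices.
import numpy as np

gammas = [0.0, 0.5, 0.9, 0.99]           # increasing discount factors
n_states, n_actions, alpha = 10, 2, 0.1
# W[0] approximates Q_{gamma_0}; W[z] approximates Q_{gamma_z} - Q_{gamma_{z-1}}
W = [np.zeros((n_states, n_actions)) for _ in gammas]

def sarsa_delta_update(s, a, r, s2, a2):
    """One on-policy update of every delta estimator after (s, a, r, s', a')."""
    for z, gamma in enumerate(gammas):
        if z == 0:
            target = r + gamma * W[0][s2, a2]              # plain SARSA, shortest horizon
        else:
            q_prev = sum(W[i][s2, a2] for i in range(z))   # Q under previous discount
            target = (gamma - gammas[z - 1]) * q_prev + gamma * W[z][s2, a2]
        W[z][s, a] += alpha * (target - W[z][s, a])

def q_value(s, a):
    """Action value at the longest horizon: the sum of all delta estimators."""
    return sum(Wz[s, a] for Wz in W)
```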

25 pages, 515 KB  
Article
Prioritizing Longitudinal Gene–Environment Interactions Using an FDR-Assisted Robust Bayesian Linear Mixed Model
by Xiaoxi Li, Kun Fan and Cen Wu
Algorithms 2025, 18(11), 728; https://doi.org/10.3390/a18110728 - 19 Nov 2025
Viewed by 307
Abstract
Analysis of longitudinal data in high-dimensional gene–environment interaction studies has been extensively conducted using variable selection methods. Despite their success, these studies have been consistently challenged by the lack of uncertainty quantification procedures for identifying main and interaction effects under longitudinal phenotypes that follow heavy-tailed distributions due to disease heterogeneity. In this article, to improve the statistical rigor of variable selection-based G × E analysis, we propose applying a robust Bayesian linear mixed-effect model with a false discovery rate (FDR) control procedure to tackle these challenges. The Bayesian mixed model adopts a robust likelihood function to account for skewness in longitudinal phenotypic measurements, and it imposes spike-and-slab priors to detect important main and interaction effects. Leveraging the parallelism between spike-and-slab priors and the Bayesian approach to hypothesis testing, we perform variable selection and uncertainty quantification through a Bayesian FDR-assisted procedure. Numerical analyses have demonstrated the advantage of our proposal over alternative approaches. A case study of a longitudinal cancer prevention study with high-dimensional lipid measures yields main and interaction effects with important biological implications.
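The FDR-assisted selection step can be read against the standard Bayesian FDR construction built from posterior inclusion probabilities; the formulation below is that generic construction, not necessarily the authors' exact variant.

```latex
% p_j: posterior inclusion probability of effect j from the spike-and-slab posterior
% (standard Bayesian FDR rule; shown as the usual construction, not the paper's exact variant).
p_j = \Pr(\beta_j \neq 0 \mid \text{data}), \qquad
\widehat{\mathrm{BFDR}}(t) =
  \frac{\sum_{j=1}^{p} (1 - p_j)\, \mathbf{1}\{p_j \ge t\}}
       {\sum_{j=1}^{p} \mathbf{1}\{p_j \ge t\}}, \qquad
t^{*} = \min\{\, t : \widehat{\mathrm{BFDR}}(t) \le \alpha \,\},
```

after which the effects with p_j ≥ t* are declared important at nominal FDR level α.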

26 pages, 1042 KB  
Article
Development and Application of a Fuzzy-Apriori-Based Algorithmic Model for the Pedagogical Evaluation of Student Background Data and Question Generation
by Éva Karl and György Molnár
Algorithms 2025, 18(11), 727; https://doi.org/10.3390/a18110727 - 19 Nov 2025
Viewed by 217
Abstract
This study presents a fuzzy-Apriori model that analyses student background data, along with end-of-lesson student-generated questions, to identify interpretable rules. After linguistic and semantic preprocessing, questions are represented in fuzzy form and combined with background and performance variables to generate association rules together with their support, confidence, and lift. The dataset includes 202 students, parent reports from 174 families, 5832 student-generated questions, and 510 teacher-generated questions collected in regular lessons in grades 7–8. The model also incorporates a topic-level dynamic updating step that refreshes the rule set over time. The findings indicate descriptive associations between background characteristics, question complexity and alignment, and classroom performance. It is important to note that this phase explores possibilities rather than providing a validated instructional method. Question coding inevitably involves subjective elements, and while we conducted the study in real classroom settings, we did not perform causal analyses at this stage. The next step will be to develop reliability metrics through longitudinal studies across multiple classroom environments. Future work will test whether these patterns can inform instructional adjustments and support student learning.
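The three rule metrics the model reports are easy to state concretely. The toy snippet below computes them over crisp itemsets; in the fuzzy-Apriori setting, set membership would be replaced by membership degrees. Item names are invented for illustration.

```python
# A minimal sketch of support, confidence, and lift for an association rule;
# transactions and item names are invented, and the fuzzy extension would use
# membership degrees instead of crisp set membership.
transactions = [
    {"high_support_home", "complex_question", "good_performance"},
    {"high_support_home", "simple_question"},
    {"complex_question", "good_performance"},
    {"high_support_home", "complex_question", "good_performance"},
]

def support(itemset):
    """Fraction of transactions containing every item in the set."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def rule_metrics(antecedent, consequent):
    """Support, confidence, and lift for the rule antecedent -> consequent."""
    s_both = support(antecedent | consequent)
    conf = s_both / support(antecedent)
    lift = conf / support(consequent)
    return s_both, conf, lift

print(rule_metrics({"high_support_home"}, {"good_performance"}))  # (0.5, 0.667, 0.889)
```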

16 pages, 1455 KB  
Article
Key Aspects to Promote the Safe Use of GenAI Tools by Undergraduate Education and Architecture Students: Similarities and Differences
by María-Carmen Ricoy, Joseba Delgado-Parada, Sálvora Feliz and Tiberio Feliz-Murias
Algorithms 2025, 18(11), 726; https://doi.org/10.3390/a18110726 - 18 Nov 2025
Viewed by 330
Abstract
Generative Artificial Intelligence (GenAI) is transforming higher education, yet concerns remain about its ethical use. Students' perceptions of GenAI may differ depending on the university degree in which they are enrolled; thus, field-specific training approaches are essential to ensure effective GenAI adoption. The objective of this research is to analyze the use of GenAI by undergraduate Education and Architecture students, evaluate its potential and associated risks, and identify proposals for its safe use. A qualitative study was conducted with 165 Education and Architecture students, examining similarities and differences in their perceptions through an open-ended questionnaire. GenAI tools, especially ChatGPT, are mostly used on computers. Architecture students use a wide variety of GenAI tools, while Education students, who started using GenAI later, focus on text generators. The benefits identified by future educators are mainly academic, while future architects emphasize personal benefit. However, all participants agree on the negative repercussions of GenAI on their personal development. While some Education students encourage promoting the use of these tools, Architecture students call for training initiatives, which should be differentiated according to the field of study.
(This article belongs to the Special Issue Evolution of Algorithms in the Era of Generative AI)

21 pages, 23184 KB  
Article
FDC-YOLO: A Blur-Resilient Lightweight Network for Engine Blade Defect Detection
by Xinyue Xu, Fei Li, Lanhui Xiong, Chenyu He, Haijun Peng, Yiwen Zhao and Guoli Song
Algorithms 2025, 18(11), 725; https://doi.org/10.3390/a18110725 - 17 Nov 2025
Viewed by 322
Abstract
The synergy between continuum robots and visual inspection technology provides an efficient automated solution for aero-engine blade defect detection. However, flexible end-effector instability and complex internal illumination conditions cause defect image blurring and defect feature loss, leading existing detection methods to fail to meet high-precision and high-speed requirements simultaneously. To address this, we propose the real-time defect detection algorithm FDC-YOLO, enabling precise and efficient identification of blurred defects. We design a dynamic subtractive attention sampling module (DSAS) that dynamically compensates for information discrepancies during sampling, reducing the loss of critical information caused by multi-scale feature fusion. We design a high-frequency information processing module (HFM) that enhances defect feature representation in the frequency domain, significantly improving the visibility of defect regions while mitigating blur-induced noise interference. Additionally, we design a classification domain detection head (CDH) that focuses on domain-invariant features across categories. FDC-YOLO achieves 7.9% and 3.5% mAP improvements on the aero-engine blade defect dataset and the low-resolution NEU-DET dataset, respectively, with only 2.68 M parameters and 7.0 GFLOPs. These results validate the algorithm's generalizability in addressing low-accuracy issues across diverse blur artifacts in defect detection. Furthermore, the algorithm is combined with a tensegrity continuum robot to construct an automatic defect detection system for aircraft engines, providing an efficient and reliable solution to the problem of internal damage detection in engines.
(This article belongs to the Special Issue Machine Learning for Pattern Recognition (3rd Edition))

22 pages, 1524 KB  
Article
Hypergraph Neural Networks for Coalition Formation Under Uncertainty
by Gerasimos Koresis, Charilaos Akasiadis and Georgios Chalkiadakis
Algorithms 2025, 18(11), 724; https://doi.org/10.3390/a18110724 - 17 Nov 2025
Viewed by 365
Abstract
Identifying effective coalitions of agents for task execution within large multiagent settings is a challenging endeavor. The problem is exacerbated by the presence of coalitional value uncertainty, which is due to uncertainty regarding the values of synergies among the different collaborating agent types. Intuitively, in such environments, a hypergraph can be used to concisely represent coalition–task pairs in the form of hyperedges, along with their associated rewards. Therefore, this paper proposes harnessing the power of Hypergraph Neural Networks (HGNNs), fitted to generic hypergraph-structured historical representations of coalitional task executions, to learn the unknown values of coalitional configurations undertaking the tasks. However, the fitted model by itself cannot be used to provide suggestions on which coalitions to form; it can only be queried for the values of given coalition–task configurations. To actually provide coalitional suggestions, this work relies on informed search approaches that incorporate the output of the HGNN as an indicator of the quality of the proposed coalition configurations. Simulation results illustrate that the resulting approach effectively captures the uncertain values of multiagent synergies and thus suggests highly rewarding coalitional configurations. Specifically, the proposed hybrid approach outperforms competing baseline approaches and achieves close to 80% of the theoretical maximum performance in this setting.
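For concreteness, one standard hypergraph convolution layer (the HGNN formulation of Feng et al.) is sketched below in NumPy, with hyperedges standing in for coalition–task executions; the incidence matrix and dimensions are toy values, and the paper's architecture may differ.

```python
# A minimal NumPy sketch of a standard hypergraph convolution layer; the
# incidence matrix, sizes, and weights are illustrative, not from the paper.
import numpy as np

def hgnn_layer(X, H, w, Theta):
    """One HGNN convolution: X (nodes x d_in), H (nodes x edges), w edge weights."""
    Dv = np.diag(H @ w)                        # weighted node degrees
    De = np.diag(H.sum(axis=0))                # hyperedge degrees
    Dv_is = np.linalg.inv(np.sqrt(Dv))         # D_v^{-1/2}
    conv = Dv_is @ H @ np.diag(w) @ np.linalg.inv(De) @ H.T @ Dv_is
    return np.maximum(conv @ X @ Theta, 0.0)   # ReLU activation

H = np.array([[1, 0, 1],       # 5 agents (rows), 3 coalition-task hyperedges (columns)
              [1, 1, 0],
              [0, 1, 1],
              [1, 0, 0],
              [0, 1, 1]], dtype=float)
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                    # agent-type features (toy values)
out = hgnn_layer(X, H, np.ones(3), rng.normal(size=(8, 16)))
print(out.shape)                               # (5, 16): updated node embeddings
```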
(This article belongs to the Special Issue Graph and Hypergraph Algorithms and Applications)

14 pages, 398 KB  
Article
Efficient Record Linkage in the Age of Large Language Models: The Critical Role of Blocking
by Nidhibahen Shah, Sreevar Patiyara, Joyanta Basak, Sartaj Sahni, Anup Mathur, Krista Park and Sanguthevar Rajasekaran
Algorithms 2025, 18(11), 723; https://doi.org/10.3390/a18110723 - 16 Nov 2025
Viewed by 313
Abstract
Record linkage is an essential task in data integration in fields such as healthcare, law enforcement, fraud detection, transportation, biology, and supply chain management. The problem of record linkage is to cluster records from various sources such that each cluster belongs to a single entity. Scalability in record linkage is limited by the large number of pairwise comparisons required. Blocking addresses this challenge by partitioning the data into smaller parts, substantially reducing the computational cost. With the advancement of Large Language Models (LLMs), there are several possibilities for improving record linkage by leveraging their semantic understanding of textual attributes. However, LLM-based record linkage algorithms in the literature have very large runtimes. In this paper, we show that employing blocking can yield significant improvements not only in runtime but also in accuracy. Specifically, we propose a record linkage algorithm that combines LLMs with blocking. Experimental evaluation demonstrates that our algorithm achieves lower runtimes while simultaneously improving F1 scores compared to approaches relying solely on LLMs. These findings demonstrate the importance of blocking even in the era of advanced machine learning models.
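The role of blocking is easy to see in miniature: a cheap deterministic key partitions the records, and the expensive comparator (an LLM in the paper, a trivial stub here) runs only within blocks. The keys, records, and matcher below are invented for illustration.

```python
# A minimal sketch of blocking before expensive pairwise matching: only records
# sharing a blocking key are compared, shrinking the candidate-pair set.
from collections import defaultdict
from itertools import combinations

records = [
    {"id": 1, "name": "Jon Smith",  "zip": "10001"},
    {"id": 2, "name": "John Smith", "zip": "10001"},
    {"id": 3, "name": "Ana Lopez",  "zip": "94110"},
    {"id": 4, "name": "Anna Lopez", "zip": "94110"},
]

def blocking_key(rec):
    """Cheap deterministic key: zip code plus first letter of the name."""
    return rec["zip"] + rec["name"][0].upper()

def llm_match(a, b):
    """Placeholder for an expensive LLM comparison; here, a trivial heuristic."""
    return a["name"].split()[-1] == b["name"].split()[-1]

blocks = defaultdict(list)
for rec in records:
    blocks[blocking_key(rec)].append(rec)

pairs = [(a["id"], b["id"])
         for block in blocks.values()
         for a, b in combinations(block, 2)
         if llm_match(a, b)]
print(pairs)   # [(1, 2), (3, 4)] found without quadratic comparison
```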

22 pages, 4107 KB  
Article
Hybrid CNN–MLP for Robust Fault Diagnosis in Induction Motors Using Physics-Guided Spectral Augmentation
by Alexander Shestakov, Dmitry Galyshev, Olga Ibryaeva and Victoria Eremeeva
Algorithms 2025, 18(11), 722; https://doi.org/10.3390/a18110722 - 15 Nov 2025
Viewed by 285
Abstract
The diagnosis of faults in induction motors, such as broken rotor bars, is critical for preventing costly emergency shutdowns and production losses. The complexity of this task lies in the diversity of induction motor operating regimes. Specifically, a change in load alters the signal's frequency composition and, consequently, the values of fault diagnostic features. Developing a reliable diagnostic model requires data covering the entire range of motor loads, but the volume of available experimental data is often limited. This work investigates a data augmentation method based on the physical relationship between the frequency content of diagnostic signals and the motor's operating regime. The method enables stretching and compression of the signal in the spectral domain while preserving Fourier transform symmetry and energy consistency, facilitating the generation of synthetic data for various load regimes. We evaluated the method on experimental data from a 0.37 kW induction motor with broken rotor bars. The synthetic data were used to train three diagnostic models: a Multilayer Perceptron (MLP), a Convolutional Neural Network (CNN), and a hybrid CNN-MLP model. Results indicate that the proposed augmentation method enhances classification quality across different load levels. The hybrid CNN-MLP model achieved the best performance, with an F1-score of 0.98 when augmentation was employed. These findings demonstrate the practical efficacy of physics-guided spectral augmentation for induction motor fault diagnosis.
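One way to implement the spectral stretch-and-compress idea is sketched below: the real signal's half spectrum is resampled along a scaled frequency axis, conjugate symmetry is preserved by construction via rfft/irfft, and total energy is re-normalized afterwards. The warp factor and test signal are illustrative assumptions, not the paper's method in detail.

```python
# A minimal sketch of load-dependent spectral warping for a real-valued signal;
# the 50 Hz tone plus sideband and the warp factor are illustrative only.
import numpy as np

def spectral_warp(signal, factor):
    """Stretch (factor > 1) or compress (factor < 1) the spectrum of a real signal."""
    spectrum = np.fft.rfft(signal)
    bins = np.arange(len(spectrum))
    warped = np.interp(bins / factor, bins, spectrum.real) \
        + 1j * np.interp(bins / factor, bins, spectrum.imag)   # resample at scaled bins
    augmented = np.fft.irfft(warped, n=len(signal))            # symmetry by construction
    ratio = np.sqrt(np.sum(signal**2) / (np.sum(augmented**2) + 1e-12))
    return augmented * ratio                                   # energy consistency

t = np.arange(0, 1, 1 / 10_000)
current = np.sin(2 * np.pi * 50 * t) + 0.1 * np.sin(2 * np.pi * 44 * t)
synthetic = spectral_warp(current, factor=1.1)  # emulate a different load regime
```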

39 pages, 4244 KB  
Article
A Neuro-Symbolic Multi-Agent Architecture for Digital Transformation of Psychological Support Systems via Artificial Neurotransmitters and Archetypal Reasoning
by Gerardo Iovane, Iana Fominska and Raffaella Di Pasquale
Algorithms 2025, 18(11), 721; https://doi.org/10.3390/a18110721 - 15 Nov 2025
Viewed by 466
Abstract
The digital transformation in the treatment of mental health and emotional disharmony requires artificial intelligence architectures that overcome the limitations of purely neural approaches, such as temporal inconsistency, opacity, and lack of theoretical foundations. Assuming the existence and use of generalist LLMs currently deployed in clinical settings, and considering the limitations indicated by experts, this article offers clinicians an alternative Neuro-symbolic-Psychological multi-agent architecture (NSPA-AI) that integrates archetypal symbolic reasoning with neurobiological modelling. The architecture builds on our established framework of artificial neurotransmitters for modelling and analysing affective-emotional stimuli, enabling interpretable AI-assisted psychological intervention. The system implements a hub-and-spoke topology that coordinates five specialized agents (symbolic, psychological, neurofunctional, decision fusion, learning), which process heterogeneous information via SPADE protocols. Seven archetypal constructs from Jungian psychology and narrative identity theory provide stable symbolic frameworks for longitudinal therapeutic consistency. An empirical study of 156 university students demonstrated significant improvements in depression (Cohen's d = 1.03), stress (d = 0.89), and narrative identity integration (d = 0.75), which were maintained at a 12-week follow-up and superior to GPT-4 controls (d = 0.34). Neurofunctional correlations, namely downregulation of cortisol (r = 0.71 with stress reduction) and an increase in serotonin (r = −0.68 with depression improvement), validated the neurobiological basis of the entropy-energy framework. Qualitative analysis revealed four mechanisms of improvement: symbolic emotional support (93%), increased self-awareness through neurotransmitter visualization (84%), non-judgmental AI interaction (98%), and archetypal narrative organization (87%). The results establish that neuro-symbolic architectures are viable alternatives to large language models for digital mental health, providing the interpretability and clinical validity essential for adoption in the healthcare sector.
(This article belongs to the Special Issue Algorithms in Multi-Sensor Imaging and Fusion)

29 pages, 3845 KB  
Article
Modeling Approaches for Digital Plant Phenotyping Under Dynamic Conditions of Natural, Climatic and Anthropogenic Factors
by Bagdat Yagaliyeva, Olga Ivashchuk and Dmitry Goncharov
Algorithms 2025, 18(11), 720; https://doi.org/10.3390/a18110720 - 15 Nov 2025
Viewed by 331
Abstract
Methods, algorithms, and models for the creation and practical application of digital twins (3D models) of agricultural crops are presented, illustrating their condition under different levels of atmospheric CO2 concentration and different soil and meteorological conditions. An algorithm for digital phenotyping using machine learning methods with the U2-Net architecture is proposed for segmenting plants into elements and assessing their condition. To obtain a dataset and conduct verification experiments, a prototype of a software and hardware complex has been developed that implements cultivation and digital phenotyping without disturbing the microclimate inside the chamber and eliminates the subjectivity of measurements. In order to identify new data and confirm the data published in open scientific sources on the effects of CO2 on crop growth and development, plants (ten species) were grown at different CO2 concentrations (0.015–0.03% and 0.07–0.09%) with 10-fold repetition. A model has been built and trained to distinguish between cases when plant segments need to be combined because they belong to the same leaf (p-value = 0.05) and when they belong to separate leaves (p-value = 0.03). A knowledge base has been formed, including 790 3D models of plants and data on their physiological characteristics.
(This article belongs to the Special Issue AI Applications and Modern Industry)

19 pages, 1202 KB  
Article
Optimizing Navigation in Mobile Robots: Modified Particle Swarm Optimization and Genetic Algorithms for Effective Path Planning
by Mohamed Amr, Ahmed Bahgat, Hassan Rashad, Azza Ibrahim and Ayman Youssef
Algorithms 2025, 18(11), 719; https://doi.org/10.3390/a18110719 - 14 Nov 2025
Viewed by 359
Abstract
Mobile robots are increasingly integral to diverse applications, with path-planning algorithms being essential for efficient and secure mobile robot navigation. Mobile robot path planning is defined as the design of the least time-consuming, shortest-distance, and collision-free path from the starting point to the endpoint for the mobile robot's autonomous movement. This study investigates and assesses two widely used algorithms in artificial intelligence (AI), Improved Particle Swarm Optimization (IPSO) and the Improved Genetic Algorithm (IGA), for mobile robot path planning. In this work, Manhattan movements are proposed as the distance formula to modify both algorithms for the path-planning problem. Unlike traditional GA and PSO, which use horizontal search, the proposed algorithms rely on vertical search, which offers an advantage. The results demonstrate the effectiveness of the modified algorithms in barrier detection and obstacle avoidance. Six experiments were run using both improved algorithms to show their ability to reach the goal and avoid obstacles in scenarios of varying complexity. Across these scenarios, the tested AI algorithms performed effectively, regardless of map scale and complexity. The paper provides a complete comparison between the two improved algorithms in different scenarios. The results show that the algorithms' performance is influenced more by the density of walls and obstacles than by the size or complexity of the map.
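A typical fitness function that grid-based PSO/GA variants of this kind minimize combines Manhattan path length with a collision penalty; the sketch below (checking waypoints only, with an illustrative penalty weight) shows the shape of such an objective, not the paper's exact formulation.

```python
# A minimal sketch of a Manhattan-distance path fitness on a grid map; the
# penalty weight and waypoint-only collision check are illustrative simplifications.
def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def path_fitness(path, obstacles):
    """Total Manhattan length of the waypoint sequence plus a collision penalty."""
    length = sum(manhattan(path[i], path[i + 1]) for i in range(len(path) - 1))
    collisions = sum(1 for p in path if p in obstacles)
    return length + 1000 * collisions          # heavy penalty keeps paths feasible

path = [(0, 0), (0, 3), (2, 3), (2, 5)]        # candidate encoded by GA/PSO
print(path_fitness(path, obstacles={(1, 3)}))  # 7 (no waypoint on an obstacle)
```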
(This article belongs to the Special Issue AI and Computational Methods in Engineering and Science: 2nd Edition)

42 pages, 3632 KB  
Article
Logistic Biplots for Ordinal Variables Based on Alternating Gradient Descent on the Cumulative Probabilities, with an Application to Survey Data
by Julio C. Hernández-Sánchez, Laura Vicente-González, Elisa Frutos-Bernal and José L. Vicente-Villardón
Algorithms 2025, 18(11), 718; https://doi.org/10.3390/a18110718 - 14 Nov 2025
Viewed by 207
Abstract
Biplot methods provide a framework for the simultaneous graphical representation of both rows and columns of a data matrix. Classical biplots were originally developed for continuous data in conjunction with principal component analysis (PCA). In recent years, several extensions have been proposed for binary and nominal data. These variants, referred to as logistic biplots (LBs), are based on logistic rather than linear response models. However, existing formulations remain insufficient for analyzing ordinal data, which are common in many social and behavioral research contexts. In this study, we extend the biplot methodology to ordinal data and introduce the ordinal logistic biplot (OLB). The proposed method estimates row scores that generate ordinal logistic responses along latent dimensions, whereas column parameters define logistic response surfaces. When these surfaces are projected onto the space defined by the row scores, they form a linear biplot representation. The model is based on a cumulative-probability framework, leading to a multidimensional structure analogous to the graded response model used in Item Response Theory (IRT). We further examine the geometric properties of this representation and develop computational algorithms, based on an alternating gradient descent procedure, for parameter estimation and computation of prediction directions to facilitate visualization. The OLB method can be viewed as an extension of multidimensional IRT models, incorporating a graphical representation that enhances interpretability and exploratory power. Its primary goal is to reveal meaningful patterns and relationships within ordinal datasets. To illustrate its usefulness, we apply the methodology to the analysis of job satisfaction among PhD holders in Spain. The results reveal two dominant latent dimensions: one associated with intellectual satisfaction and another related to job-related aspects such as salary and benefits. Comparative analyses with alternative techniques indicate that the proposed approach achieves superior discriminatory power across variables.
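The cumulative-probability structure at the heart of the OLB follows the graded-response pattern; the equations below use generic IRT notation (row score a_i, column slopes b_j, thresholds d_jk), which may differ from the paper's exact parameterization.

```latex
% Cumulative logit for ordinal item j with categories k = 1, ..., K_j
% (generic graded-response notation, not necessarily the paper's):
P\left(x_{ij} \le k \mid \mathbf{a}_i\right)
  = \frac{1}{1 + \exp\left[-\left(d_{jk} - \mathbf{b}_j^{\top}\mathbf{a}_i\right)\right]},
\qquad k = 1, \dots, K_j - 1,
% with category probabilities obtained by differencing:
P\left(x_{ij} = k\right) = P\left(x_{ij} \le k\right) - P\left(x_{ij} \le k - 1\right).
```

Alternating gradient descent then updates row scores and column parameters in turn to maximize the resulting likelihood.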
(This article belongs to the Special Issue Recent Advances in Numerical Algorithms and Their Applications)

64 pages, 2732 KB  
Systematic Review
Artificial Intelligence in Software Testing: A Systematic Review of a Decade of Evolution and Taxonomy
by Alex Escalante-Viteri and David Mauricio
Algorithms 2025, 18(11), 717; https://doi.org/10.3390/a18110717 - 14 Nov 2025
Viewed by 1278
Abstract
Software testing is fundamental to ensuring the quality, reliability, and security of software systems. Over the past decade, artificial intelligence (AI) algorithms have been increasingly applied to automate testing processes, predict and detect defects, and optimize evaluation strategies. This systematic review examines studies published between 2014 and 2024, focusing on the taxonomy and evolution of algorithms across problems, variables, and metrics in software testing. A taxonomy of testing problems is proposed by categorizing issues identified in the literature and mapping the AI algorithms applied to them. In parallel, the review analyzes the input variables and evaluation metrics used by these algorithms, organizing them into established categories and exploring their evolution over time. The findings reveal three complementary trajectories: (1) the evolution of problem categories, from defect prediction toward automation, collaboration, and evaluation; (2) the evolution of input variables, highlighting the increasing importance of semantic, dynamic, and interface-driven data sources beyond structural metrics; and (3) the evolution of evaluation metrics, from classical performance indicators to advanced, testing-specific, and coverage-oriented measures. Finally, the study integrates these dimensions, showing how interdependencies among problems, variables, and metrics have shaped the maturity of AI in software testing. This review contributes a novel taxonomy of problems, a synthesis of variables and metrics, and a future research agenda emphasizing scalability, interpretability, and industrial adoption.
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)

18 pages, 3175 KB  
Article
AudioFakeNet: A Model for Reliable Speaker Verification in Deepfake Audio
by Samia Dilbar, Muhammad Ali Qureshi, Serosh Karim Noon and Abdul Mannan
Algorithms 2025, 18(11), 716; https://doi.org/10.3390/a18110716 - 13 Nov 2025
Viewed by 517
Abstract
Deepfake audio refers to the generation of voice recordings using deep neural networks that replicate a specific individual's voice, often for deceptive or fraudulent purposes. Although this has been an area of research for quite some time, deepfakes still pose substantial challenges for reliable speaker authentication. To address the issue, we propose AudioFakeNet, a hybrid deep learning architecture that uses Convolutional Neural Networks (CNNs) along with Long Short-Term Memory (LSTM) units and Multi-Head Attention (MHA) mechanisms for robust deepfake detection. The CNN extracts spatial and spectral features, the LSTM captures temporal dependencies, and MHA strengthens the focus on informative audio segments. The model is trained using Mel-Frequency Cepstral Coefficients (MFCCs) from a publicly available dataset and validated on a self-collected dataset, ensuring reproducibility. Performance comparisons with state-of-the-art machine learning and deep learning models show that AudioFakeNet achieves higher accuracy, better generalization, and a lower Equal Error Rate (EER). Its modular design allows for broader adaptability in fake-audio detection tasks, offering significant potential across diverse speech synthesis applications.
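The CNN, LSTM, and multi-head-attention pattern described above can be sketched in a few lines of PyTorch; the layer sizes, kernel choices, and two-class head below are illustrative assumptions, not the published AudioFakeNet configuration.

```python
# A minimal PyTorch sketch of a CNN -> LSTM -> multi-head-attention classifier
# over MFCC inputs of shape (batch, 1, n_mfcc, frames); sizes are illustrative.
import torch
import torch.nn as nn

class AudioFakeSketch(nn.Module):
    def __init__(self, n_mfcc=40, hidden=64, heads=4):
        super().__init__()
        self.cnn = nn.Sequential(                      # spatial/spectral features
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
        )
        self.lstm = nn.LSTM(32 * (n_mfcc // 4), hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.head = nn.Linear(hidden, 2)               # real vs. fake logits

    def forward(self, x):                              # x: (B, 1, n_mfcc, T)
        f = self.cnn(x)                                # (B, 32, n_mfcc//4, T)
        f = f.permute(0, 3, 1, 2).flatten(2)           # (B, T, 32 * n_mfcc//4)
        seq, _ = self.lstm(f)                          # temporal dependencies
        ctx, _ = self.attn(seq, seq, seq)              # focus on informative frames
        return self.head(ctx.mean(dim=1))              # pooled classification

logits = AudioFakeSketch()(torch.randn(8, 1, 40, 200))  # -> (8, 2)
```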
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

49 pages, 1835 KB  
Article
Reinforcement Learning-Guided Hybrid Metaheuristic for Energy-Aware Load Balancing in Cloud Environments
by Yousef Sanjalawe, Salam Al-E’mari, Budoor Allehyani and Sharif Naser Makhadmeh
Algorithms 2025, 18(11), 715; https://doi.org/10.3390/a18110715 - 13 Nov 2025
Viewed by 362
Abstract
Cloud computing has transformed modern IT infrastructure by enabling scalable, on-demand access to virtualized resources. However, the rapid growth of cloud services has intensified energy consumption across data centres, increasing operational costs and carbon footprints. Traditional load-balancing methods, such as Round Robin and First-Fit, often fail to adapt dynamically to fluctuating workloads and heterogeneous resources. To address these limitations, this study introduces a Reinforcement Learning-guided hybrid optimization framework that integrates the Black Eagle Optimizer (BEO) for global exploration with the Pelican Optimization Algorithm (POA) for local refinement. A lightweight RL controller dynamically tunes algorithmic parameters in response to real-time workload and utilization metrics, ensuring adaptive and energy-aware scheduling. The proposed method was implemented in CloudSim 3.0.3 and evaluated under multiple workload scenarios (ranging from 500 to 2000 cloudlets and up to 32 VMs). Compared with state-of-the-art baselines, including PSO-ACO, MS-BWO, and BSO-PSO, the RL-enhanced hybrid BEO–POA achieved up to 30.2% lower energy consumption, 45.6% shorter average response time, 28.4% higher throughput, and 12.7% better resource utilization. These results confirm that combining metaheuristic exploration with RL-based adaptation can significantly improve the energy efficiency, responsiveness, and scalability of cloud scheduling systems, offering a promising pathway toward sustainable, performance-optimized data-centre management.
(This article belongs to the Special Issue AI Algorithms for 6G Mobile Edge Computing and Network Security)

15 pages, 1171 KB  
Article
Person Re-Identification Under Non-Overlapping Cameras Based on Advanced Contextual Embeddings
by Chi-Hung Chuang, Tz-Chian Huang, Chong-Wei Wang, Jung-Hua Lo and Chih-Lung Lin
Algorithms 2025, 18(11), 714; https://doi.org/10.3390/a18110714 - 12 Nov 2025
Viewed by 354
Abstract
Person Re-identification (ReID), a critical technology in intelligent surveillance, aims to accurately match specific individuals across non-overlapping camera networks. However, factors in real-world scenarios such as variations in illumination, viewpoint, and pose continuously challenge the matching accuracy of existing models. Although Transformer-based models like TransReID have demonstrated a strong capability for capturing global context in feature extraction, the features they produce still have room for optimization at the metric matching stage. To address this issue, this study proposes a hybrid framework that combines advanced feature extraction with post-processing optimization. We employed a fixed, pre-trained TransReID model as the feature extractor and introduced a camera-aware Jaccard distance re-ranking algorithm (CA-Jaccard) as a post-processing module. Without retraining the main model, this framework refines the initial distance matrix by analyzing the local neighborhood topology among feature vectors and incorporating camera information. Experiments were conducted on two major public datasets, Market-1501 and MSMT17. The results show that our framework significantly improved the overall ranking quality of the model, increasing the mean Average Precision (mAP) on Market-1501 from 88.2% to 93.58% compared to using TransReID alone and achieving a gain of nearly 4% in mAP on MSMT17. This research confirms that advanced post-processing techniques can effectively complement powerful feature extraction models, providing an efficient pathway to enhance the robustness of ReID systems in complex scenarios. Additionally, this is the first work to analyze how the modified distance metric improves the ReID task when used specifically with the ViT-based feature extractor TransReID.
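A stripped-down version of Jaccard-distance re-ranking is sketched below: each sample's k-nearest-neighbor set is compared via Jaccard distance and blended with the original metric. The paper's CA-Jaccard additionally restricts neighbor sets using camera labels, which this toy version omits; k and the blend weight are illustrative.

```python
# A minimal sketch of Jaccard re-ranking over k-NN sets (camera-aware step omitted).
import numpy as np

def knn_sets(dist, k):
    """Index set of the k nearest neighbors of every sample (excluding itself)."""
    return [set(np.argsort(row)[1:k + 1]) for row in dist]

def jaccard_rerank(dist, k=5, lam=0.3):
    """Blend the original distance with Jaccard distance between neighbor sets."""
    neighbors = knn_sets(dist, k)
    n = len(dist)
    jac = np.zeros_like(dist)
    for i in range(n):
        for j in range(n):
            inter = len(neighbors[i] & neighbors[j])
            union = len(neighbors[i] | neighbors[j])
            jac[i, j] = 1.0 - inter / union if union else 0.0
    return lam * dist + (1 - lam) * jac    # refined metric used for ranking

feats = np.random.default_rng(0).normal(size=(12, 8))
d = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
refined = jaccard_rerank(d / d.max())      # normalize before blending
```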
(This article belongs to the Special Issue Machine Learning for Pattern Recognition (3rd Edition))

25 pages, 4855 KB  
Article
Improved Flood Management and Risk Communication Through Large Language Models
by Divas Karimanzira, Thomas Rauschenbach, Tobias Hellmund and Linda Ritzau
Algorithms 2025, 18(11), 713; https://doi.org/10.3390/a18110713 - 12 Nov 2025
Viewed by 437
Abstract
In light of urbanization, climate change, and the escalation of extreme weather events, flood management is becoming more and more important. Improving community resilience and reducing flood risks require prompt decision-making and effective communication. This study investigates how flood management systems can incorporate Large Language Models (LLMs), especially those that use Retrieval-Augmented Generation (RAG) architectures. We propose a multimodal framework that uses a Flood Knowledge Graph to aggregate data from various sources, such as social media and hydrological and meteorological inputs. Although LLMs have the potential to be transformative, we also address important drawbacks such as governance issues, hallucination risks, and a lack of physical modeling capabilities. When compared to text-only LLMs, the RAG system significantly improves the reliability of flood-related decision support by reducing factual inconsistency rates by more than 75%. Our architecture includes expert validation and security layers to guarantee dependable, useful results, such as flood-constrained evacuation route planning. In areas that are vulnerable to flooding, this strategy seeks to strengthen warning systems, enhance information sharing, and build resilient communities.
(This article belongs to the Special Issue Artificial Intelligence Algorithms in Sustainability)

28 pages, 514 KB  
Article
Dynamic Assessment with AI (Agentic RAG) and Iterative Feedback: A Model for the Digital Transformation of Higher Education in the Global EdTech Ecosystem
by Rubén Juárez, Antonio Hernández-Fernández, Claudia de Barros-Camargo and David Molero
Algorithms 2025, 18(11), 712; https://doi.org/10.3390/a18110712 - 11 Nov 2025
Viewed by 842
Abstract
This article formalizes AI-assisted assessment as a discrete-time, policy-level design for iterative feedback and evaluates it in a digitally transformed higher-education setting. We integrate an agentic retrieval-augmented generation (RAG) feedback engine, operationalized through planning (rubric-aligned task decomposition), tool use beyond retrieval (tests, static/dynamic analyzers, rubric checker), and self-critique (checklist-based verification), into a six-iteration dynamic evaluation cycle. Learning trajectories are modeled with three complementary formulations: (i) an interpretable update rule with explicit parameters η and λ that links next-step gains to feedback quality and the gap-to-target and yields iteration-complexity and stability conditions; (ii) a logistic-convergence model capturing diminishing returns near ceiling; and (iii) a relative-gain regression quantifying the marginal effect of feedback quality on the fraction of the gap closed per iteration. In a Concurrent Programming course (n=35), the cohort mean increased from 58.4 to 91.2 (0–100), while dispersion decreased from 9.7 to 5.8 across six iterations; a Greenhouse–Geisser corrected repeated-measures ANOVA indicated significant within-student change. Parameter estimates show that higher-quality, evidence-grounded feedback is associated with larger next-step gains and faster convergence. Beyond performance, we engage the broader pedagogical question of what to value and how to assess in AI-rich settings: we elevate process and provenance (planning artifacts, tool-usage traces, test outcomes, and evidence citations) to first-class assessment signals, and outline defensible formats (trace-based walkthroughs and oral/code defenses) that our controller can instrument. We position this as a design model for feedback policy, complementary to state-estimation approaches such as knowledge tracing. We discuss implications for instrumentation, equity-aware metrics, reproducibility, and epistemically aligned rubrics. Limitations include the observational, single-course design; future work should test causal variants (e.g., stepped-wedge trials) and cross-domain generalization.
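The abstract's interpretable update rule is not reproduced in this listing, so the following is only one plausible instantiation consistent with its description (η scaling the effect of feedback quality q_t on the gap to the target score s*, λ a decay term), together with a logistic-convergence form; neither equation should be read as the authors' exact formulation.

```latex
% One plausible form of the update rule (assumed, not the paper's equation):
s_{t+1} = s_t + \eta \, q_t \, (s^{*} - s_t) - \lambda \, s_t,
\qquad 0 < \eta \, q_t < 1 .
% Logistic-convergence alternative capturing diminishing returns near ceiling:
s_t = \frac{s^{*}}{1 + e^{-k (t - t_0)}} .
```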

17 pages, 1075 KB  
Article
A Multi-Branch Convolutional Neural Network for Student Graduation Prediction
by Zhifeng Zhang, Xiaoyun Qin, Junxia Ma, Yangyang Chu and Bo Wang
Algorithms 2025, 18(11), 711; https://doi.org/10.3390/a18110711 - 10 Nov 2025
Viewed by 243
Abstract
Accurate prediction of student graduation status is crucial for higher education institutions to implement timely interventions and improve student success. While existing methods often rely on single data sources or generic model architectures, this paper proposes a novel Multi-Branch Convolutional Neural Network (MBCNN) that systematically integrates multi-dimensional factors influencing student outcomes. The model employs eight dedicated branches to capture both subjective and objective features from four key dimensions: student characteristics, school resources, family environment, and societal factors. Through robust normalization and hierarchical feature fusion, MBCNN effectively learns discriminative representations from these heterogeneous data sources. Evaluated on a real-world dataset from the Polytechnic Institute of Portalegre, our approach demonstrates superior performance compared to traditional and recent machine learning methods, achieving improvements of 4.07–17.35% in accuracy, 4.60–20.19% in weighted precision, 4.07–17.35% in weighted recall, and 4.59–18.73% in weighted F1-score. The results validate that domain-specific neural architectures, designed to align with the inherent structure of educational data, significantly enhance prediction accuracy and generalization capability.

16 pages, 2828 KB  
Article
Classification of Earthquakes Using Grammatical Evolution
by Constantina Kopitsa, Ioannis G. Tsoulos, Vasileios Charilogis and Chrysostomos Stylios
Algorithms 2025, 18(11), 710; https://doi.org/10.3390/a18110710 - 10 Nov 2025
Viewed by 446
Abstract
Earthquake predictability remains a central challenge in seismology. Are earthquakes inherently unpredictable phenomena, or can they be forecasted through advances in technology? Contemporary seismological research continues to pursue this scientific milestone, often referred to as the 'Holy Grail' of earthquake prediction. In the direction of earthquake prediction based on historical data, the Grammatical Evolution technique of GenClass has demonstrated high predictive accuracy for earthquake magnitude. Our research team follows this line of reasoning, operating under the belief that nature provides a pattern that, with the appropriate tools, can be decoded. Over the past 30 years, scientists and researchers have made significant strides in seismology, largely aided by the development and application of artificial intelligence techniques: Artificial Neural Networks (ANNs) were first applied to seismology in 1994; deep neural networks (DNNs), characterized by architectures incorporating two hidden layers, followed in 2002; recurrent neural networks (RNNs) were implemented within seismological studies as early as 2007; and, most recently, grammatical evolution (GE) was introduced in 2025. Despite continuous progress, achieving the so-called 'triple prediction', the precise estimation of the time, location, and magnitude of an earthquake, remains elusive. Nevertheless, machine learning and soft computing approaches have long played a significant role in seismological research, with significant advances both in mapping seismic patterns and in predicting seismic characteristics on smaller geographical scales. Accordingly, our research analyzes historical seismic events from 2004 to 2011 within the latitude range of 21°–79° and the longitude range of 33°–176°. The data are categorized and classified with the aim of employing grammatical evolution techniques to achieve more accurate and timely predictions of earthquake magnitudes. This paper presents a systematic effort to enhance magnitude prediction accuracy using GE, contributing to the broader goal of reliable earthquake forecasting, and demonstrates the superiority of GenClass, a key element of the grammatical evolution techniques, with an average error of 19%, corresponding to an overall accuracy of 81%.

16 pages, 1871 KB  
Review
Foundational Algorithms for Modern Cybersecurity: A Unified Review on Defensive Computation in Adversarial Environments
by Paul A. Gagniuc
Algorithms 2025, 18(11), 709; https://doi.org/10.3390/a18110709 - 7 Nov 2025
Viewed by 643
Abstract
Cyber defense has evolved into an algorithmically intensive discipline where mathematical rigor and adaptive computation underpin the robustness and continuity of digital infrastructures. This review consolidates the algorithmic spectrum that supports modern cyber defense, from cryptographic primitives that ensure confidentiality and integrity to behavioral intelligence algorithms that provide predictive security. Classical symmetric and asymmetric schemes such as AES, ChaCha20, RSA, and ECC define the computational backbone of confidentiality and authentication in current systems. Intrusion and anomaly detection mechanisms range from deterministic pattern matchers, exemplified by Aho-Corasick and Boyer-Moore, to probabilistic inference models such as Markov chains and hidden Markov models (HMMs), as well as deep architectures such as CNNs, RNNs, and autoencoders. Malware forensics combines graph theory, entropy metrics, and symbolic reasoning into a unified diagnostic framework, while network defense employs graph-theoretic algorithms for routing, flow control, and intrusion propagation. Behavioral paradigms such as reinforcement learning, evolutionary computation, and swarm intelligence transform cyber defense from reactive automation to adaptive cognition. Hybrid architectures now merge deterministic computation with distributed learning and explainable inference to create systems that act, reason, and adapt. This review identifies and contextualizes over 50 foundational algorithms, ranging from AES and RSA to LSTMs, graph-based models, and post-quantum cryptography, and redefines them not as passive utilities but as the cognitive genome of cyber defense: entities that shape, sustain, and evolve resilience within adversarial environments.
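Among the named building blocks, entropy metrics are the easiest to make concrete: Shannon entropy over byte windows is a standard way malware forensics flags packed or encrypted regions. The snippet below is a generic illustration, not taken from the review; the commonly quoted "high entropy" threshold of roughly 6.5-7 bits per byte is a rule of thumb.

```python
# A minimal sketch of Shannon entropy over a byte buffer, the kind of metric
# used to flag packed or encrypted regions in malware forensics.
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte: 0 for constant data, 8 for uniformly random bytes."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

plain = b"the quick brown fox jumps over the lazy dog" * 20
random_ish = bytes(range(256)) * 4
print(shannon_entropy(plain), shannon_entropy(random_ish))  # low vs. 8.0 bits/byte
```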

27 pages, 2706 KB  
Article
Enhancing Cardiovascular Disease Classification with Routine Blood Tests Using an Explainable AI Approach
by Nurdaulet Tasmurzayev, Bibars Amangeldy, Zhanel Baigarayeva, Assiya Boltaboyeva, Baglan Imanbek, Naoya Maeda-Nishino, Sarsenbek Zhussupbekov and Aliya Baidauletova
Algorithms 2025, 18(11), 708; https://doi.org/10.3390/a18110708 - 7 Nov 2025
Viewed by 674
Abstract
Background: While machine learning (ML) is widely applied in cardiology, a critical research gap persists. The incremental diagnostic value of routine blood tests for classifying cardiovascular disease (CVD) remains largely unquantified, and many models operate as non-interpretable "black boxes," limiting their clinical adoption. This study aims to address these gaps by quantifying the contribution of readily available laboratory panels and demonstrating the utility of transparent diagnostic modeling within a real-world clinical cohort. Methods: We conducted a retrospective study on the clinical data of 896 adult patients from a hospital database. A baseline feature set (demographics, vital signs) was compared against an enhanced set that additionally included results from routine hematology and biochemistry panels. Five machine learning classifiers were trained and evaluated. To ensure transparency, SHAP (SHapley Additive exPlanations) analysis, a key component of explainable AI (XAI), was used to interpret the predictions of the top-performing model. Results: The inclusion of routine blood tests consistently and significantly improved the performance of all classifiers. The XGBoost model demonstrated the best performance (accuracy 91.62%, precision 95.00%, recall 87.36%). Critically, SHAP analysis identified aspartate aminotransferase (AST), glucose, and creatinine as the most significant biomarkers, providing clear, interpretable insights into the biochemical drivers of the model's predictions. Conclusion: Routine laboratory markers contain a strong, interpretable signal indicative of CVD that is crucial for accurate risk stratification. These findings underscore the diagnostic relevance of common blood biomarkers and demonstrate how explainable AI can transform routine clinical data into transparent and actionable cardiovascular insights. Further validation in larger and demographically diverse cohorts is warranted.
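The modeling-plus-XAI pattern the study follows (gradient-boosted trees on tabular clinical features, then SHAP attributions) looks roughly like the sketch below; the synthetic data, feature count, and hyperparameters are placeholders, not the study's configuration.

```python
# A minimal sketch of XGBoost + SHAP on tabular data; synthetic features stand
# in for the clinical cohort, and hyperparameters are illustrative.
import numpy as np
import shap
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(896, 6))                      # e.g., AST, glucose, creatinine, ...
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=896) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_tr, y_tr)

explainer = shap.TreeExplainer(model)              # per-prediction feature attributions
shap_values = explainer.shap_values(X_te)
mean_impact = np.abs(shap_values).mean(axis=0)     # global ranking of biomarkers
print(mean_impact.argsort()[::-1])                 # most influential features first
```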

35 pages, 20479 KB  
Article
Comprehensive Forensic Tool for Crime Scene and Traffic Accident 3D Reconstruction
by Alejandra Ospina-Bohórquez, Esteban Ruiz de Oña, Roy Yali, Emmanouil Patsiouras, Katerina Margariti and Diego González-Aguilera
Algorithms 2025, 18(11), 707; https://doi.org/10.3390/a18110707 - 7 Nov 2025
Viewed by 880
Abstract
This article presents a comprehensive forensic tool for crime scene and traffic accident investigations, integrating advanced 3D reconstruction with semantic and dynamic analyses. The tool facilitates the accurate documentation and preservation of crime scenes through photogrammetric techniques, producing detailed 3D models based on images or video captured under specified protocols. The system includes modules for semantic analysis, enabling object detection and classification in 3D point clouds and 2D images. By employing machine learning methods such as the Random Forest model for point cloud classification and the YOLOv8 architecture for object detection, the tool enhances the accuracy and reliability of forensic analysis. Furthermore, a dynamic analysis module supports ballistic trajectory calculations for crime scene investigations and vehicle impact speed estimation using the Equivalent Barrier Speed (EBS) model for traffic accidents. These capabilities are integrated into a single, user-friendly platform, offering significant improvements over existing forensic tools, which often focus on single tasks and require specialist expertise. This tool provides a robust, accessible solution for law enforcement agencies, enabling more efficient and precise forensic investigations across different scenarios.
(This article belongs to the Special Issue Modern Algorithms for Image Processing and Computer Vision)

37 pages, 960 KB  
Article
Product Recommendation with Price Personalization According to Customer’s Willingness to Pay Using Deep Reinforcement Learning
by Ali Mahdavian, Hadi Moradi and Behnam Bahrak
Algorithms 2025, 18(11), 706; https://doi.org/10.3390/a18110706 - 5 Nov 2025
Viewed by 704
Abstract
Integrating recommendation systems with dynamic pricing strategies is essential for enhancing product sales and optimizing revenue in modern business. This study proposes a novel product recommendation model that uses Reinforcement Learning to tailor pricing strategies to customer purchase intentions. While traditional recommendation systems focus on identifying products customers prefer, they often neglect the critical factor of pricing. To improve effectiveness and increase conversion, it is crucial to personalize product prices according to the customer's willingness to pay (WTP). Businesses often use fixed-budget promotions to boost sales, emphasizing the importance of strategic pricing. Designing intelligent promotions requires recommending products aligned with customer preferences and setting prices reflecting their WTP, thus increasing the likelihood of purchase. This research advances existing recommendation systems by integrating dynamic pricing into the system's output, offering a significant innovation in business practice. However, this integration introduces technical complexities, which are addressed through a Markov Decision Process (MDP) framework and solved using Reinforcement Learning. Empirical evaluation using the Dunnhumby dataset shows promising results. Due to the lack of direct comparisons between combined product recommendation and pricing models, the outputs were simplified into two categories: purchase and non-purchase. This approach revealed significant improvements over comparable methods, demonstrating the model's efficacy.
27 pages, 3210 KB  
Article
A Robust Lyapunov-Based Control Strategy for DC–DC Boost Converters
by Mario Ivan Nava-Bustamante, José Luis Meza-Medina, Rodrigo Loera-Palomo, Cesar Alberto Hernández-Jacobo and Jorge Alberto Morales-Saldaña
Algorithms 2025, 18(11), 705; https://doi.org/10.3390/a18110705 - 5 Nov 2025
Viewed by 403
Abstract
This paper presents a robust and reliable voltage regulation method for DC–DC converters, for which a multiloop control strategy is developed and analyzed on a boost converter. The proposed control scheme consists of an inner current loop and an outer voltage loop, both systematically designed using the control Lyapunov function (CLF) methodology. The main contributions of this work are (1) the formulation of a control structure capable of maintaining performance under variations in load, reference voltage, and input voltage; (2) the theoretical demonstration of global asymptotic stability of the closed-loop system in the Lyapunov sense; and (3) the experimental validation of the proposed controller on a physical DC–DC boost converter, confirming its effectiveness. The results support the advancement of high-efficiency nonlinear control methods for power electronics applications, and the experimental findings reinforce the practical relevance and real-world applicability of the proposed approach. Full article
(This article belongs to the Special Issue Algorithmic Approaches to Control Theory and System Modeling)
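For intuition about the multiloop structure, here is a crude numerical sketch of the averaged boost-converter model with an inner current loop and an outer voltage loop. The gains and the simple proportional/PI laws are illustrative stand-ins; the paper's CLF-derived design is not reproduced here.

```python
# Crude averaged-model simulation of a boost converter with a cascaded
# current/voltage loop (illustrative parameters, not the paper's CLF design).
L, C, R = 1e-3, 470e-6, 20.0          # inductance (H), capacitance (F), load (ohm)
Vin, Vref = 12.0, 24.0                # input and target output voltage (V)
dt, T = 1e-6, 0.08                    # Euler step and horizon (s)

iL, vC, integ = 0.0, Vin, 0.0
log = []
for _ in range(int(T / dt)):
    # Outer voltage loop: feedforward power balance plus PI correction
    # generates the inductor-current reference.
    err_v = Vref - vC
    integ += err_v * dt
    i_ref = Vref * Vref / (R * Vin) + 2.0 * err_v + 150.0 * integ
    # Inner current loop: drive iL toward i_ref through the duty cycle.
    d = 1.0 - (Vin - 5.0 * (i_ref - iL)) / max(vC, 1e-3)
    d = min(max(d, 0.0), 0.95)
    # Forward-Euler step of the averaged boost dynamics.
    iL += dt / L * (Vin - (1.0 - d) * vC)
    vC += dt / C * ((1.0 - d) * iL - vC / R)
    log.append(vC)

print(f"output after {T}s ~ {sum(log[-1000:]) / 1000:.2f} V (target {Vref} V)")
```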
18 pages, 1456 KB  
Article
Hybrid Deep Learning Framework for Anomaly Detection in Power Plant Systems
by Shuchong Wang, Changxiang Zhao, Xingchen Liu, Xianghong Ni, Xu Chen, Xinglong Gao and Li Sun
Algorithms 2025, 18(11), 704; https://doi.org/10.3390/a18110704 - 5 Nov 2025
Viewed by 610
Abstract
Currently, thermal power units undertake peak- and frequency-regulation duties, so their internal equipment operates under non-conventional conditions and can easily fail, leading to unplanned unit shutdowns. To realize condition monitoring and early warning for key equipment inside coal power units, this study proposes a deep learning-based anomaly detection model that combines a deep autoencoder (DAE), a Transformer, and a Gaussian mixture model (GMM). The DAE and the Transformer encoder extract static and time-series features from multi-dimensional operation data, and the GMM learns the feature distribution of normal data to detect anomalies. Validation on data from boiler superheater equipment and turbine bearings in real power plants shows that the model detects equipment anomalies earlier than traditional methods and is more stable, with fewer false alarms. Applied to the superheater equipment, the proposed model triggered early warnings approximately 90 h before the actual failure and reduced the missed detection (false negative) rate by 70% compared to the Transformer-GMM (TGMM) model, verifying the model's validity and early warning capability. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
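A minimal sketch of the GMM anomaly-scoring stage follows, with PCA standing in for the paper's DAE/Transformer feature extractor; the synthetic data, component counts, and threshold choice are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in data: 'normal' multi-sensor operation vs. a drifted fault regime.
normal = rng.normal(0.0, 1.0, size=(2000, 12))
faulty = rng.normal(0.8, 1.3, size=(200, 12))

# PCA here stands in for the paper's DAE/Transformer feature extractor.
enc = PCA(n_components=4).fit(normal)
gmm = GaussianMixture(n_components=3, random_state=0).fit(enc.transform(normal))

# Alarm threshold: a low percentile of the normal-data log-likelihood.
thr = np.percentile(gmm.score_samples(enc.transform(normal)), 1)
scores = gmm.score_samples(enc.transform(faulty))
print(f"flagged {np.mean(scores < thr):.0%} of fault-regime samples as anomalous")
```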
25 pages, 3646 KB  
Article
An Explainable YOLO-Based Deep Learning Framework for Pneumonia Detection from Chest X-Ray Images
by Ali Ahmed, Ali I. Siam, Ahmed E. Mansour Atwa, Mohamed Ahmed Atwa, Elsaid Md. Abdelrahim and El-Sayed Atlam
Algorithms 2025, 18(11), 703; https://doi.org/10.3390/a18110703 - 4 Nov 2025
Viewed by 775
Abstract
Pneumonia remains a serious global health issue, particularly affecting vulnerable groups such as children and the elderly, where timely and accurate diagnosis is critical for effective treatment. Recent advances in deep learning have significantly enhanced pneumonia detection using chest X-rays, yet many current methods still face challenges with interpretability, efficiency, and clinical applicability. In this work, we proposed a YOLOv11-based deep learning framework designed for real-time pneumonia detection, strengthened by the integration of Grad-CAM for visual interpretability. To further enhance robustness, the framework incorporated preprocessing techniques such as Contrast Limited Adaptive Histogram Equalization (CLAHE) for contrast improvement, region-of-interest extraction, and lung segmentation, ensuring both precise localization and improved focus on clinically relevant features. Evaluation on two publicly available datasets confirmed the effectiveness of the approach. On the COVID-19 Radiography Dataset, the system reached a macro-average accuracy of 98.50%, precision of 98.60%, recall of 97.40%, and F1-score of 97.99%. On the Chest X-ray COVID-19 & Pneumonia dataset, it achieved 98.06% accuracy, with corresponding high precision and recall, yielding an F1-score of 98.06%. The Grad-CAM visualizations consistently highlighted pathologically relevant lung regions, providing radiologists with interpretable and trustworthy predictions. Comparative analysis with other recent approaches demonstrated the superiority of the proposed method in both diagnostic accuracy and transparency. With its combination of real-time processing, strong predictive capability, and explainable outputs, the framework represents a reliable and clinically applicable tool for supporting pneumonia and COVID-19 diagnosis in diverse healthcare settings. Full article
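The CLAHE preprocessing step can be sketched in a few lines with OpenCV; the clip limit, tile size, file paths, and the (commented) detector checkpoint are illustrative assumptions, not the paper's exact pipeline.

```python
import cv2

def preprocess_cxr(path):
    """CLAHE contrast enhancement for a chest X-ray, as described in the
    abstract; the clip limit and tile grid are illustrative settings."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    return cv2.cvtColor(enhanced, cv2.COLOR_GRAY2BGR)  # 3 channels for the detector

# Hypothetical inference step via the ultralytics package (checkpoint name assumed):
# from ultralytics import YOLO
# model = YOLO("pneumonia_yolo11.pt")          # assumed fine-tuned weights
# results = model(preprocess_cxr("cxr_0001.png"))
```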
21 pages, 18400 KB  
Article
An Improved Bi-RRT Algorithm for Optimal Puncture Path Planning
by Shigang Wang, Yunqi Ran and Zhan Chen
Algorithms 2025, 18(11), 702; https://doi.org/10.3390/a18110702 - 4 Nov 2025
Viewed by 389
Abstract
Percutaneous puncture has become one of the most widely used minimally invasive techniques in clinical practice due to its low trauma, quick recovery, and ease of operation. However, restricted needle-tip motion, tissue barriers, and the complex distribution of sensitive organs make it difficult to balance puncture accuracy and safety. To this end, this paper proposes a new puncture path planning algorithm for flexible needles that integrates gravitational guidance, bi-directional adaptive expansion, optimal node selection based on the A* algorithm, and path optimization strategies, with bidirectional Rapidly-exploring Random Trees (Bi-RRT) at its core, to significantly improve obstacle avoidance capability and computational efficiency. Simulation results for 2D and 3D complex scenes in MATLAB show that, compared with the traditional RRT and Bi-RRT algorithms, the GBOPBi-RRT algorithm achieves significant advantages in path length, computation time, and number of expanded nodes. In particular, in the 3D environment, GBOPBi-RRT shortens the planned path by 43.21% compared with RRT, 27.47% compared with RRT*, and 30.33% compared with Bi-RRT, providing a reliable solution for efficient planning of percutaneous puncture with flexible needles. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
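For readers unfamiliar with the bidirectional scheme, the sketch below is a bare-bones 2D Bi-RRT (uniform sampling, point-only collision checks) rather than the GBOPBi-RRT algorithm itself; the obstacle layout, step size, and connection threshold are assumptions.

```python
import math
import random

OBSTACLES = [((4.0, 4.0), 1.5)]             # (center, radius) circles, illustrative
STEP, CONNECT_EPS = 0.5, 0.5

def collision_free(p):
    return all(math.dist(p, c) > r for c, r in OBSTACLES)

def steer(a, b):
    """Move from a toward b by at most STEP."""
    d = math.dist(a, b)
    if d <= STEP:
        return b
    t = STEP / d
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

def extend(tree, parent, target):
    """Grow `tree` one step toward `target`; return the new node or None."""
    near = min(tree, key=lambda n: math.dist(n, target))
    new = steer(near, target)
    if collision_free(new):
        tree.append(new)
        parent[new] = near
        return new
    return None

def bi_rrt(start, goal, iters=5000):
    ta, tb = [start], [goal]
    pa, pb = {start: None}, {goal: None}
    for _ in range(iters):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        new = extend(ta, pa, sample)
        if new:
            # Greedily grow the opposite tree toward the newly added node.
            other = extend(tb, pb, new)
            if other and math.dist(other, new) < CONNECT_EPS:
                return True                  # trees connected: a path exists
        ta, tb, pa, pb = tb, ta, pb, pa      # swap the trees' roles each iteration
    return False

print("path found:", bi_rrt((0.5, 0.5), (9.5, 9.5)))
```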
16 pages, 1975 KB  
Article
Explainable Schizophrenia Classification from rs-fMRI Using SwiFT and TransLRP
by Julian Weaver, Emerald Zhang, Nihita Sarma, Alaa Melek and Edward Castillo
Algorithms 2025, 18(11), 701; https://doi.org/10.3390/a18110701 - 4 Nov 2025
Viewed by 476
Abstract
Schizophrenia is challenging to identify from resting-state functional MRI (rs-fMRI) due to subtle, distributed changes and the clinical need for transparent models. We build on the Swin 4D fMRI Transformer (SwiFT) to classify schizophrenia vs. controls and explain predictions with Transformer Layer-wise Relevance Propagation (TransLRP). We further introduce Swarm-LRP, a particle swarm optimization (PSO) scheme that tunes Layer-wise Relevance Propagation (LRP) rules against model-agnostic explainability (XAI) metrics from Quantus. On the COBRE dataset, TransLRP yields higher faithfulness and lower sensitivity/complexity than Integrated Gradients, and highlights physiologically plausible regions. Swarm-LRP improves single-subject explanation quality over baseline LRP by optimizing (α,γ,ϵ) values and discrete layer-rule assignments. These results suggest that architecture-aware explanations can recover spatiotemporal patterns of rs-fMRI relevant to schizophrenia while improving attribution robustness. This feasibility study indicates a path toward clinically interpretable neuroimaging models. Full article
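The ε-rule that Swarm-LRP tunes can be illustrated on a single dense layer; this toy NumPy sketch omits bias relevance and SwiFT's attention-specific rules, and all shapes and values are assumptions.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-2):
    """LRP epsilon-rule for one dense layer: redistribute the output
    relevance R_out to the inputs a through the weights W (in_dim x out_dim)."""
    z = a @ W + b                                    # forward pre-activations
    z_stab = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilize near zero
    s = R_out / z_stab                               # relevance per unit activation
    return a * (W @ s)                               # input relevance

rng = np.random.default_rng(0)
a = rng.normal(size=4)
W = rng.normal(size=(4, 3))
b = np.zeros(3)
R = np.array([0.0, 1.0, 0.0])        # explain a single output unit
print(lrp_epsilon(a, W, b, R))       # input relevances; sum roughly conserves sum(R)
```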
20 pages, 3937 KB  
Article
Squeeze-and-Excitation Networks and the Improved Informer Model for Bearing Fault Diagnosis
by Bin Yuan, Yanghui Du, Zengbiao Xie and Suifan Chen
Algorithms 2025, 18(11), 700; https://doi.org/10.3390/a18110700 - 4 Nov 2025
Viewed by 482
Abstract
This paper presents a fault diagnosis model for rolling bearings that addresses the challenges of establishing long-sequence correlations and extracting spatial features in deep-learning models. The proposed model combines SENet with an improved Informer model. Initially, local features are extracted using Conv1d, and the input data are optimized through normalization and embedding. Next, the SE-Conv1d network adaptively enhances key features while suppressing noise interference. In the improved Informer model, the ProbSparse self-attention mechanism and self-attention distillation efficiently capture global dependencies in long sequences within the rolling bearing dataset, significantly reducing computational complexity and improving accuracy. Finally, experiments on the CWRU and HUST datasets demonstrate that the proposed model achieves accuracy rates of 99.78% and 99.45%, respectively. Compared to other deep learning methods, the proposed model offers superior fault diagnosis accuracy, stability, and generalization ability. Full article
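As a sketch of the SE-Conv1d idea, the block below implements standard squeeze-and-excitation gating over 1-D convolutional channels in PyTorch; the layer sizes and reduction ratio are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    """Squeeze-and-Excitation for 1-D convolutional features:
    global average pool -> bottleneck MLP -> channel-wise gating."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                 # x: (batch, channels, length)
        w = x.mean(dim=-1)                # squeeze: (batch, channels)
        w = self.fc(w).unsqueeze(-1)      # excitation: per-channel gates
        return x * w                      # recalibrate the feature map

# Illustrative SE-Conv1d stage (sizes are assumptions, not the paper's):
stage = nn.Sequential(nn.Conv1d(1, 16, kernel_size=7, padding=3), SEBlock1d(16))
print(stage(torch.randn(8, 1, 1024)).shape)   # torch.Size([8, 16, 1024])
```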