Journal Description
Algorithms is a peer-reviewed, open access journal that provides an advanced forum for studies related to algorithms and their applications, published monthly online by MDPI.
- Open Access — free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Theory and Methods) / CiteScore - Q1 (Numerical Analysis)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 19.2 days after submission; acceptance to publication is undertaken in 3.7 days (median values for papers published in this journal in the second half of 2025).
- Testimonials: See what our editors and authors say about Algorithms.
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Artificial Intelligence: AI, AI in Medicine, Algorithms, BDCC, MAKE, MTI, Stats, Virtual Worlds and Computers.
Impact Factor: 2.1 (2024); 5-Year Impact Factor: 2.0 (2024)
Latest Articles
Hybrid Particle Swarm Optimization with Chaotic Opposition-Based Initialization and Adaptive Learning Strategy
Algorithms 2026, 19(5), 344; https://doi.org/10.3390/a19050344 - 30 Apr 2026
Abstract
Particle swarm optimization (PSO) is an optimization method based on swarm intelligence. Compared with many other methods, PSO searches the solution space effectively and in parallel. However, PSO tends toward local optima when tackling complex multimodal optimization problems, and it suffers from slow convergence and poor stability in the later evolutionary period. In view of these demerits, a hybrid PSO method based on chaotic opposition-based initialization and an adaptive learning strategy (abbreviated as ACMPSO) is presented in this work. First, chaos initialization and opposition-based learning (OBL) are employed to produce high-quality initial particles in the feasible region, which improves the quality of the initial solutions. Second, an inertia weight embedding the logistic mapping is formulated to better trade off the global and local search processes. Third, the global optimal particle is regulated by an exclusive velocity and position updating strategy, whereas the remaining particles are adjusted by the standard updating mechanism so as to prevent premature convergence. Furthermore, an adaptive position update paradigm is developed to finely regulate global exploration and local exploitation. Finally, experiments conducted on CEC’13 and CEC’22 reveal that the proposed ACMPSO outperforms several other advanced PSO variants in convergence rate and accuracy. To further illustrate the effect of ACMPSO, we have applied it to two real-world problems, and simulation results ascertain its effectiveness and robustness.
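The chaos-plus-OBL initialization step described in the abstract can be sketched as follows. This is a generic illustration, not the paper's exact scheme: the function name, the logistic-map iteration count, and the sphere cost used for ranking are all illustrative assumptions.

```python
import numpy as np

def chaotic_obl_init(n_particles, dim, lb, ub, seed=0):
    """Chaos + opposition-based initialization sketch (name, iteration
    count, and ranking cost are illustrative assumptions)."""
    rng = np.random.default_rng(seed)
    # Logistic map x_{k+1} = 4 x_k (1 - x_k), iterated per coordinate,
    # spreads candidates chaotically over [0, 1].
    x = rng.uniform(0.1, 0.9, size=(n_particles, dim))
    for _ in range(100):
        x = 4.0 * x * (1.0 - x)
    pop = lb + x * (ub - lb)        # chaotic candidates in [lb, ub]
    opp = lb + ub - pop             # opposition-based learning (OBL) mirrors
    # Keep the best half of the combined set (sphere cost as a stand-in
    # for the real objective function).
    combined = np.vstack([pop, opp])
    cost = np.sum(combined ** 2, axis=1)
    return combined[np.argsort(cost)[:n_particles]]

swarm = chaotic_obl_init(20, 5, -10.0, 10.0)
```

Evaluating each candidate together with its opposite and keeping the better half is what gives OBL its benefit: the initial swarm starts closer to promising regions at the cost of one extra batch of evaluations.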
Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
Open Access Article
Bionic Corner Detection Based on Cooperative Processing of Simple Cells and End-Stopped Cells
by
Shuo Sun and Haiyang Yu
Algorithms 2026, 19(5), 343; https://doi.org/10.3390/a19050343 - 30 Apr 2026
Abstract
Corner detection is a fundamental task in computer vision that plays a critical role in applications such as image registration, 3D reconstruction, and object tracking. In biological visual systems, simple cells in the primary visual cortex exhibit high selectivity to edge stimuli of specific orientations, while end-stopped cells can detect geometric singular structures such as line segment endpoints and corners. Existing corner detection methods based on visual neural computation typically employ a strategy of densely distributed end-stopped cells for corner localization, which suffers from significant localization deviation under small-angle conditions due to mutual interference between responses of adjacent neurons. To address this problem, this paper proposes a bionic corner detection method based on cooperative processing of simple cells and end-stopped cells. The method constructs a three-stage cooperative processing framework: the edge filtering stage employs a Gabor filter bank to simulate the orientation selectivity of simple cells, extracting edge positions and orientation information; the dynamic construction stage builds unilateral end-stopped cells only at filtered edge positions based on local orientation information, fundamentally avoiding the computational redundancy and response interference caused by global dense distribution; the corner localization stage determines precise corner coordinates through hierarchical clustering and dual-cluster centroid fusion strategies.
Experimental results demonstrate that, in the 15° acute-angle regime where dense end-stopped schemes are most severely affected by response interference, the proposed method reduces the mean localization error from 8.76 to 2.34 pixels, corresponding to a 73.3% improvement; averaged across the eight tested angle levels from 15° to 165°, the improvement is approximately 40.9%, and all per-angle differences are statistically significant (paired t-test, p < 0.01 or below, N = 10 independent runs). On standard test images, the method attains the lowest mean localization error among the eight compared detectors (1.58 pixels, versus 1.68–3.42 pixels for Harris, FAST, COSFIRE, KAZE, SuperPoint, Deep Corner, and Wei et al.), while maintaining competitive detection rate, false-alarm rate, and runtime. Physiological plausibility validation experiments show that the correlation coefficient between the detection deviation of this method and human perceptual deviation reaches 0.923, indicating that the output of the framework aligns with previously reported human perceptual bias patterns and supporting its biological plausibility as a biologically inspired—rather than mechanistic—model of corner perception. The source code, dataset, and experimental results are publicly available (see Data Availability Statement).
Full article

Open Access Article
Perturbation of Highly Dispersive Solitons in Optical Metamaterials with Twin-Core Couplers and Power-Law of Self-Phase Modulation by Laplace–Adomian Decomposition
by
Oswaldo González-Gaxiola, Jehan Saleh Ahmed, Lina S. Calucag and Anjan Biswas
Algorithms 2026, 19(5), 342; https://doi.org/10.3390/a19050342 - 29 Apr 2026
Abstract
This paper utilizes the Laplace–Adomian decomposition method to numerically investigate the highly dispersive bright soliton solutions in twin-core optical couplers that employ metamaterials as waveguides. The focus of the study is on the power-law self-phase modulation. The results of the simulations and the accompanying error analysis demonstrate exceptional accuracy for this numerical approach. These findings suggest that the Laplace–Adomian decomposition method is a robust tool for tackling complex nonlinear problems in optical systems. Furthermore, the implications of this research could pave the way for advancements in the design and optimization of metamaterial-based waveguides, potentially leading to improved performance in applications such as telecommunications and sensing technologies.
Full article
(This article belongs to the Special Issue Recent Advances in Numerical Algorithms and Their Applications)
Open Access Article
Frequency-Guided Multi-Scale Dehazing Network with Cross-Domain Spatial–Spectral Gating
by
Fangyuan Jin, Hui Lin, Lu Zhang and Yiwei Chen
Algorithms 2026, 19(5), 341; https://doi.org/10.3390/a19050341 - 28 Apr 2026
Abstract
Single-image dehazing is still a challenging problem because haze mainly corrupts low-frequency structures such as global contrast and color consistency, while fine textures and object boundaries are degraded in a different manner. In this paper, we present a frequency-guided multi-scale dehazing network (FGDNet) that explicitly couples spatial-domain restoration and Fourier-domain feature decomposition in a compact U-Net-like architecture. Built on a gated U-Net backbone, the proposed model inserts a frequency processing branch into encoder stages. In detail, the feature maps are transformed by fast Fourier transform, split into low- and high-frequency components through a radial mask, refined separately, and fused by a lightweight cross-domain gating module. The low-frequency pathway emphasizes color and illumination recovery, whereas the high-frequency pathway enhances edges and textures. Moreover, an additional Fourier amplitude supervision term aligns the spectral distribution of restored images with haze-free targets. Experimental results on RESIDE ITS, RESIDE OTS, O-HAZE, and NH-HAZE show that the proposed method achieves 33.3 dB PSNR/0.983 SSIM on ITS, 35.1 dB PSNR/0.988 SSIM on OTS, 19.1 dB PSNR/0.786 SSIM for OTS-trained generalization to O-HAZE, and 15.8 dB PSNR/0.648 SSIM for OTS-trained generalization to NH-HAZE. Furthermore, both quantitative and qualitative results demonstrate that the proposed method provides a more effective and more robust solution than representative dehazing methods. In addition, ablation studies confirm that both the Fourier branch and the spatial–spectral gating mechanism contribute consistently to performance gains. These results support the effectiveness of explicit frequency-aware representation learning for image dehazing and suggest a practical direction for improving generalization from synthetic to real haze.
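The radial low/high-frequency split described in the abstract can be illustrated with plain NumPy. The cutoff value and the function name below are assumptions; FGDNet's learned refinement and cross-domain gating modules are not reproduced here.

```python
import numpy as np

def radial_split(feat, cutoff=0.25):
    """Split a 2-D map into low/high-frequency parts via a radial mask
    in the Fourier domain (illustrative cutoff; FGDNet's learned gating
    is not reproduced)."""
    H, W = feat.shape
    F = np.fft.fftshift(np.fft.fft2(feat))        # center DC component
    yy, xx = np.mgrid[:H, :W]
    r = np.hypot(yy - H / 2, xx - W / 2) / (min(H, W) / 2)
    low_mask = (r <= cutoff).astype(float)
    low = np.fft.ifft2(np.fft.ifftshift(F * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(F * (1 - low_mask))).real
    return low, high

img = np.random.rand(64, 64)
low, high = radial_split(img)
```

Because the two masks are complementary, `low + high` reconstructs the input exactly; the network can therefore refine the color/illumination and edge/texture pathways separately without losing information.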
Full article

Open Access Article
Scalable Bayesian–XAI Framework for Multi-Objective Decision-Making in Uncertain Dynamic Systems
by
Mostafa Aboulnour Salem and Zeyad Aly Khalil
Algorithms 2026, 19(5), 340; https://doi.org/10.3390/a19050340 - 28 Apr 2026
Abstract
This study proposes a scalable Explainable Artificial Intelligence (XAI)–driven Bayesian–AI decision–control framework for multi-objective optimisation in uncertain and dynamic systems. The framework integrates Bayesian networks, stochastic control, and expected utility theory within a unified probabilistic architecture. Unlike traditional black-box models, the proposed framework provides intrinsic interpretability through probabilistic reasoning and dependency-aware modelling. This allows users to understand how decisions are formed and how variables influence outcomes. To further strengthen explainability, the framework incorporates post hoc XAI techniques, including SHAP-based feature attribution and sensitivity-based local explanations. These methods quantify the contribution of each variable and provide clear explanations at both global and local levels. The system is formulated as a stochastic state-space model and implemented as a closed-loop adaptive architecture. It updates decisions continuously as new data becomes available. Scalable inference is achieved using variational inference, Markov Chain Monte Carlo, and Sequential Monte Carlo methods. This ensures efficient performance in complex and high-dimensional environments. A simulation study based on 370 observations shows that the proposed framework improves decision quality, robustness under uncertainty, and transparency compared to conventional methods. Explainability is evaluated using Fidelity, Stability, and Transparency metrics. The results confirm that the model produces consistent and reliable explanations. The framework supports human-centred decision-making by providing visual analytics and clear probabilistic explanations. This makes it suitable for high-stakes applications such as cyber–physical systems, intelligent platforms, and real-time AI systems. 
The main contribution of this study is the integration of intrinsic probabilistic interpretability with post hoc XAI techniques into a single, scalable framework. This approach bridges a key gap in XAI research and offers a practical and transparent solution for decision-making under uncertainty.
Full article
(This article belongs to the Special Issue Explainable AI: Advances in Interpretability Algorithms and Applications)
Open AccessArticle
A Short-Term Wind Power Prediction Method Based on Multi-Model Fusion with an Improved Gray Wolf Optimization Algorithm
by
Zaijiang Yu, He Jiang and Yan Zhao
Algorithms 2026, 19(5), 339; https://doi.org/10.3390/a19050339 - 28 Apr 2026
Abstract
In the current energy context, enhancing the precision of wind power prediction serves as a key enabler for the stable development of the power grid. Existing wind power prediction models often suffer from modal aliasing and noise residue, or from low prediction accuracy. To address short-term wind power forecasting, a wind power series decomposition and reconstruction method based on improved complete ensemble empirical mode decomposition with adaptive noise and variational mode decomposition (ICEEMDAN-VMD) secondary decomposition is proposed. First, using ICEEMDAN, the wind power data (wind direction, wind speed, temperature, humidity, air pressure, etc.) are decomposed into several IMF sub-series, which are categorized into three frequency components by combining sample entropy, Q statistics, and sequence frequency. Second, the gray wolf optimization (GWO) algorithm is improved with an empirical exchange strategy (EES), and the optimization performance of the proposed EES-GWO is verified on 10 test functions. Third, the EES-GWO-convolutional neural network–bidirectional gated recurrent unit–global attention (EES-GWO-CNN-BiGRU–Global attention) model is constructed to predict the high-frequency components, and the XGBoost model is employed to forecast the mid- and low-frequency components. Finally, a support vector machine (SVM) model nonlinearly integrates all the forecasting results to produce the final forecast. Through example analysis and comparison, the performance of the proposed model is verified from two perspectives.
Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
Open AccessArticle
Enabling Reliable Industrial Energy Savings Verification Through Hybrid Factored Conditional Restricted Boltzmann Machine and Generative Adversarial Network
by
Suziee Sukarti, Mohamad Fani Sulaima, Norashikin Sahadan, Muhamad Hafizul Shamsor, Siaw Wei Yao and Aida Fazliana Abdul Kadir
Algorithms 2026, 19(5), 338; https://doi.org/10.3390/a19050338 - 28 Apr 2026
Abstract
Reliable quantification of industrial energy savings requires accurate detection of non-routine events (NREs) that distort post-retrofit baselines. Conventional statistical and rule-based anomaly detection methods often misinterpret operational variability, leading to biased or overstated savings under the International Performance Measurement and Verification Protocol (IPMVP). This study develops a novel IPMVP-compliant hybrid deep learning framework that integrates a deterministic Deep Neural Network (DNN) for baseline modeling with stochastic architectures, namely the Factored Conditional Restricted Boltzmann Machine (FCRBM) and Generative Adversarial Network (GAN), to capture probabilistic reconstruction patterns. Their outputs are fused using a hybrid thresholding mechanism that balances detection sensitivity and specificity. Using high-resolution data from an industrial glove manufacturing facility, the hybrid DNN–FCRBM model achieved the best trade-off, demonstrating an accuracy of 94.3%, a precision of 91.1%, and a low false positive rate of 5.1%. This model validated 11.32% industrial energy savings (approximately 478,050 kWh), equivalent to 237 tonnes of CO2 avoided. The integration of stochastic generative learning within a deterministic framework strengthens transparency, auditability, and IPMVP compliance, offering a scalable pathway for credible industrial energy savings verification.
Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
Open Access Article
A Path Optimization Simulation Method for Nuclear Power Plant Inspection and Maintenance Robots Based on the Integration of Bi-RRT and APF
by
Tong Wu, Meihao Zhu, Zhansheng Liu, Xiaofeng Zhang, Fengjuan Chen, Xiaoqing Zhu, Haowen Sun, Chuan Zhang and Jiahao Wu
Algorithms 2026, 19(5), 337; https://doi.org/10.3390/a19050337 - 27 Apr 2026
Abstract
Path planning for inspection and maintenance robots in nuclear power plants often suffers from limited adaptability, high computational cost, and unstable convergence in obstacle-dense confined environments. To address these issues, this paper proposes an improved Bi-RRT–APF path optimization framework for complex industrial scenarios. The method integrates (1) a hybrid sampling strategy combining random, goal-biased, and potential-field-guided sampling to enhance global exploration and convergence efficiency; (2) a potential-field-guided perturbation and stagnation detection mechanism to improve escape capability from local minima; and (3) a dynamic target switching and constrained segmented connection strategy to improve path feasibility and safety. A digital twin-based simulation platform is further developed to validate the engineering applicability of the proposed approach. Simulation results demonstrate significant quantitative improvements over baseline methods. Compared with conventional RRT and Bi-RRT, the proposed method reduces iteration count by 65.3% and 43.8%, respectively, and decreases computation time by 76.1% and 48.4%, respectively, while increasing the success rate to 95% (from 82% and 93%) and improving path smoothness (reduced from 5.3 and 3.3 to 2.9). Compared with advanced variants (Quad-RRT and KB-RRT*), the method further reduces computation time by 25.2% and 10.3% and iteration count by 29.3% and 8.4%, respectively. These results indicate that the proposed method achieves a balanced improvement in efficiency, robustness, and path quality. This work provides an efficient and reliable solution for autonomous path planning of robots in complex nuclear power plant environments.
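For background, the artificial potential field (APF) component of such a hybrid planner combines an attractive pull toward the goal with repulsion near obstacles. A textbook sketch follows; the gain constants and influence radius `rho0` are illustrative defaults, not the paper's tuned values.

```python
import numpy as np

def apf_force(q, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=2.0):
    """Classic attractive/repulsive APF force at configuration q
    (illustrative gains; not the paper's tuned parameters)."""
    f = k_att * (goal - q)                    # pull toward the goal
    for obs in obstacles:
        d = np.linalg.norm(q - obs)
        if 0.0 < d < rho0:                    # repel only near obstacles
            f = f + k_rep * (1.0 / d - 1.0 / rho0) / d ** 2 * (q - obs) / d
    return f

q = np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
force = apf_force(q, goal, [np.array([1.0, 0.5])])
```

In a Bi-RRT–APF hybrid of the kind the abstract describes, such a gradient typically biases tree sampling and perturbs stagnating nodes rather than steering the robot directly, which is what helps escape local minima of the raw potential.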
Full article

Open Access Article
Representation-Centric Deep Learning for Multi-Class, Multi-Organ Histopathology Image Classification
by
Li Hao and Ma Ning
Algorithms 2026, 19(5), 336; https://doi.org/10.3390/a19050336 - 25 Apr 2026
Abstract
Imaging-based multi-omics derived from digital histopathology provides a valuable approach for characterizing tumor heterogeneity from routine clinical specimens. However, robust multi-cancer histopathological analysis remains challenging due to pronounced intra-tumor variability, inter-organ morphological overlap, and sensitivity to staining and acquisition variations, which can limit the generalizability of deep learning models. These limitations are largely driven by insufficient representation learning, particularly in multi-organ and multi-class diagnostic settings. In this study, we propose a hierarchically regularized representation learning framework for multi-cancer histopathological image analysis that models imaging-based features across multiple organs and diagnostic categories. The framework integrates complementary mechanisms to capture fine-grained cellular morphology, long-range tissue architecture, and organ-aware diagnostic semantics within a unified computational model. A hierarchical supervision strategy guides the network to reduce entanglement between organ-level structural characteristics and disease-specific diagnostic patterns in the learned representations. The method operates without pixel-level annotations or handcrafted morphological priors, supporting scalable experimental evaluation. We demonstrate the approach on balanced lung and colon cancer histopathology cohorts, achieving 96.5% accuracy on lung cancer classification and 96.8% accuracy on colon cancer classification. Ablation and robustness analyses further validate the contributions of hierarchical regularization and consistency learning. Overall, this work provides a proof-of-concept framework for representation-centric imaging-based analysis in multi-organ histopathology under the evaluated dataset conditions.
Full article
Open Access Article
Neuro-Fuzzy Control of a Bidirectional DC-DC Converter Applied in the Powertrain of Electric Vehicles
by
Erik Martínez-Vera, Pedro Bañuelos-Sánchez, Alfredo Rosado-Muñoz, Juan Manuel Ramirez-Cortes and Pilar Gomez-Gil
Algorithms 2026, 19(5), 335; https://doi.org/10.3390/a19050335 - 25 Apr 2026
Abstract
Power converters are fundamental components in vehicle electrification systems. However, their inherently nonlinear and time-varying behavior requires complex design procedures when conventional control strategies based on linear small-signal models are employed. This work proposes a simplified and hardware-oriented DC-DC converter control methodology that combines fuzzy logic and neural networks in a sequential manner. A fuzzy logic controller is first used to generate a dataset of control actions under closed-loop operation. A lightweight neural network is then trained on the obtained data to approximate this mapping and subsequently replace the fuzzy controller in real-time operation. To validate the approach, a bidirectional buck–boost DC-DC converter is designed for applications in the powertrain of electric vehicles with a 500 kHz switching frequency and a 13 kW power rating. The control algorithm is embedded in an FPGA to demonstrate its suitability for hardware deployment. The experimental results show a reduction in RMSE of 33.7% and a decrease in settling time of at least 51.7% compared with a benchmark PID control.
Full article
(This article belongs to the Special Issue Data-Driven Intelligent Modeling and Optimization Algorithms for Industrial Processes: 3rd Edition)
Open Access Article
Complex-Time Neural Networks: Geometric Temporal Access for Long-Range Reasoning
by
Gerardo Iovane, Giovanni Iovane and Antonio De Rosa
Algorithms 2026, 19(5), 334; https://doi.org/10.3390/a19050334 - 25 Apr 2026
Abstract
Most neural architectures model time as a one-dimensional real-valued variable, constraining temporal reasoning to sequential propagation along a single axis. We introduce Complex-Time Neural Networks (CTNN), a new class of architectures in which temporal coordinates are elements of the complex plane T = t + iτ ∈ ℂ, where Re(T) preserves chronological ordering and Im(T) encodes an orthogonal experiential dimension. Within this geometry, Im(T) < 0 defines a memory domain enabling retrospective retrieval, Im(T) = 0 corresponds to present-moment computation, and Im(T) > 0 defines an imagination domain for prospective projection. We prove the Expressive Separation Theorem (Theorem 1), establishing that, within the temporally coupled function class GTCP and under explicit Assumptions A1–A4 (in particular the bounded projection Assumption A3), CTNN accesses temporally coupled functions at O(1) cost with respect to temporal distance Δ1, Δ2, while real-time architectures incur Ω(Δ1 + Δ2) sequential steps. For layered compositions, this yields an exponential composition gap within GTCP under A1–A4. These advantages hold under the stated assumptions and may not directly generalize to broader function classes or large-scale settings where A3 cannot be maintained. Therefore, Theorem 1 provides a formal separation result for GTCP, while CTNN more broadly defines a geometric framework for temporal computation. As the first concrete instantiation of this framework, we develop Complex-Time Convolutional Neural Networks (CTCNN). CTCNN achieves state-of-the-art performance on Something-Something V2 (70.2 ± 0.4%, +1.1% over VideoMAE v2, p < 0.01), strong performance on Kinetics-400 (78.4 ± 0.3%), and substantial gains on Long Range Arena Path-X (87.3% vs. 79.6%, +7.7%), using 3.4× fewer parameters than VideoMAE v2. 
Learnable angular parameters α and β provide computationally interpretable parameters related to memory-access span and prospection breadth, with values varying systematically across task families.
Full article
(This article belongs to the Special Issue Deep Neural Networks and Optimization Algorithms (2nd Edition))
Open Access Article
A Novel and Practical Algorithmic Enhancement for Enumerating Maximal and Maximum k-Partite Cliques in k-Partite Graphs
by
Cheng Chen, Faisal N. Abu-Khzam, Levente Dojcsak and Michael A. Langston
Algorithms 2026, 19(5), 333; https://doi.org/10.3390/a19050333 - 25 Apr 2026
Abstract
A k-partite graph is one whose vertices can be partitioned into k disjoint partite sets, with edges allowed between but not within these sets. In such a graph, a maximal k-partite clique is a subgraph with at least one vertex from each partite set and every allowable edge, such that the subgraph cannot be enlarged by incorporating additional vertices. A maximum k-partite clique is a maximal k-partite clique of the greatest size. The results reported here describe a novel and practical modification of the best previously published algorithm for enumerating these special subgraphs. The new method relies on implicit edge addition and search-tree pruning; its relative performance is evaluated on graphs constructed from both pseudorandom and real-world data.
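The definitions above can be restated directly as a checker. This is a naive verification utility under assumed data structures (a list of partite sets and an adjacency dict), not the paper's enumeration algorithm.

```python
from itertools import combinations

def is_maximal_kpartite_clique(cand, parts, adj):
    """Verify that `cand` is a k-partite clique touching every partite
    set and that no outside vertex can extend it. A direct restatement
    of the definition, not the paper's enumeration algorithm."""
    # Must contain at least one vertex from each partite set.
    if any(not (cand & p) for p in parts):
        return False
    # Every allowable (cross-part) pair must be an edge.
    for u, v in combinations(cand, 2):
        same_part = any(u in p and v in p for p in parts)
        if not same_part and v not in adj[u]:
            return False
    # Maximality: no outside vertex is adjacent to all of cand's
    # vertices in the other partite sets.
    for p in parts:
        for w in p - cand:
            if all(w in adj[x] for x in cand - p):
                return False
    return True

# Tripartite example: parts {0,1}, {2}, {3}; edges 0-2, 0-3, 2-3, 1-2.
parts = [{0, 1}, {2}, {3}]
adj = {0: {2, 3}, 1: {2}, 2: {0, 1, 3}, 3: {0, 2}}
print(is_maximal_kpartite_clique({0, 2, 3}, parts, adj))  # True
```

Note that more than one vertex per partite set is allowed, since within-part edges are simply not "allowable"; that is why the pairwise check skips same-part pairs.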
Full article
(This article belongs to the Special Issue 2026 and 2027 Selected Papers from Algorithms Editorial Board Members)
Open Access Article
Improvements to the Modified Anderson–Björck (modAB) Root-Finding Algorithm
by
Nedelcho Ganchovski, Oscar Smith, Christopher Rackauckas, Lachezar Tomov and Alexander Traykov
Algorithms 2026, 19(5), 332; https://doi.org/10.3390/a19050332 - 24 Apr 2026
Abstract
The Modified Anderson–Björck method is a new, robust, and efficient bracketing root-finding algorithm. It combines bisection with the Anderson–Björck method to achieve both fast performance and worst-case optimality. It relies on linearity-check criteria for switching between methods and uses Anderson–Björck corrections to overcome the fixed-endpoint issue of false position. Initial benchmarks of this method have shown certain performance advantages over other methods, such as Ridders, Brent, and ITP. In this paper, we propose further improvements to this method and perform additional analysis and benchmarks of its behavior and performance.
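For reference, the underlying Anderson–Björck iteration can be sketched as follows. This is the classic variant (false position with Björck's rescaling of the stagnant endpoint's function value), not the paper's modAB algorithm, which additionally switches to bisection via linearity checks.

```python
def anderson_bjorck(f, a, b, tol=1e-12, max_iter=100):
    """Classic Anderson–Björck bracketing iteration (background only;
    modAB's bisection switching is not included in this sketch)."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "root must be bracketed"
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)     # false-position (secant) step
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fb * fc < 0:                      # sign change: shift the bracket
            a, fa = b, fb
        else:                                # endpoint a stagnates: rescale fa
            m = 1.0 - fc / fb
            fa *= m if m > 0 else 0.5        # Björck scaling, Illinois fallback
        b, fb = c, fc
    return b

root = anderson_bjorck(lambda x: x**3 - 2.0, 0.0, 2.0)
print(abs(root - 2.0 ** (1.0 / 3.0)) < 1e-9)  # True
```

The rescaling of `fa` is what prevents the fixed-endpoint stagnation of plain false position that the abstract mentions.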
Full article
Open Access Article
Performance Enhancement of Quadrotor UAVs via Gray Wolf Optimized Algorithm for Sliding Mode Control
by
Mustafa B. Nidham, Khalid Yahya, Mehdi Safaei, Nawal Rai and Saleh Al Dawsari
Algorithms 2026, 19(5), 331; https://doi.org/10.3390/a19050331 - 24 Apr 2026
Abstract
This article is an in-depth analysis of the performance and efficiency of various control systems used in quadrotor unmanned aerial vehicles (UAVs). The study is focused on the comparison of three main control approaches, including Sliding Mode Control (SMC), Fuzzy Logic Control (FLC), and an extended version of Sliding Mode Control with the use of the Gray Wolf Optimizer (SMC-GWO), as well as a supportive validation model the Genetic Algorithm (SMC-GA). Based on the Newton–Euler formulation, the mathematical model of a quadrotor has been developed to provide a true picture of the dynamic behavior of the quadrotor. The model was then implemented in MATLAB/Simulink 2025b to test the performance of the system in its nominal and perturbed conditions. The findings have shown that the hybrid SMC-GWO controller has significant improvement in response speed, accuracy, and stability compared to the other controllers. Precisely, the SMC-GWO demonstrated 78.46 percent decrease in rise time and 23.40 percent decrease in settling time compared to the traditional SMC, as well as a nearly negligible steady-state error (SSE = 0.0008) in the roll channel. The proposed controller in the pitch channel reduced the rise time by 93.65 percent and the settling time by 20.22 percent, with a much smoother and more stable tracking and an effectively negligible steady-state error (SSE = 0.0001). The hybrid controller in the yaw channel had a 77.94 percent better rise time and 23.16 percent better settling time, resulting in a steady-state error of 0.0022. In relation to altitude control, SMC -GWO decreased the rise time by 91.87 percent and settling time by 25.04 percent over classical SMC, yet the steady-state error was almost zero. 
Under both constant and time-varying actuator disturbances, the SMC-GWO controller also demonstrated better system stabilization and trajectory-tracking behavior than both SMC and FLC, as well as slightly better behavior than SMC-GA in the presence of faults and disturbances. These results verify that a UAV control framework combining the Gray Wolf Optimizer with Sliding Mode Control is more resilient, faster, and significantly more precise.
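The classical sliding-mode law that SMC-GWO builds on can be sketched in a few lines. This is a minimal illustration, not the paper's controller: the surface slope `lam` and switching gain `k` are hypothetical placeholder values, whereas in the paper such gains are tuned by the Gray Wolf Optimizer.

```python
import math

def smc_control(e, e_dot, lam=2.0, k=5.0):
    """Classical sliding-mode control for a tracking error e.

    Sliding surface: s = e_dot + lam * e
    Switching law:   u = -k * sign(s)

    lam and k are illustrative gains only; the paper obtains
    such parameters via the Gray Wolf Optimizer.
    """
    s = e_dot + lam * e
    if s == 0.0:
        return 0.0
    return -k * math.copysign(1.0, s)
```

The discontinuous `sign` term drives the state onto the surface s = 0, after which the error decays with time constant 1/lam; in practice a boundary layer or saturation function is often substituted for `sign` to reduce chattering.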
Full article
(This article belongs to the Special Issue Algorithmic Approaches to Control Theory and System Modeling)
Open Access Editorial
From Interpretable Models to Clinical Implementation: Advances in AI-Assisted Medical Diagnostics
by
Milan Toma
Algorithms 2026, 19(5), 330; https://doi.org/10.3390/a19050330 - 24 Apr 2026
Abstract
The integration of artificial intelligence into medical diagnostics has evolved from controlled research demonstrations to real-world clinical deployment, creating both unprecedented opportunities and substantial challenges [...]
Full article
(This article belongs to the Special Issue AI-Assisted Medical Diagnostics)
Open Access Article
An RMST-Integrated Machine Learning Framework for Interpretable Survival Analysis Under Non-Proportional Hazards: Application to the METABRIC Cohort
by
Fangya Tan, Yang Zhou, Shuqiao Li, Chun Jiang, Jian-Guo Zhou and Srikar Bellur
Algorithms 2026, 19(5), 329; https://doi.org/10.3390/a19050329 - 24 Apr 2026
Abstract
(1) Background: Advances in machine learning (ML)-based survival modeling enable the analysis of high-dimensional biomedical data. However, many approaches rely on the proportional hazards (PH) assumption, which is frequently violated in oncology and can limit the interpretability of hazard ratio-based results. Using Estrogen Receptor (ER) status in the METABRIC breast cancer cohort as a case study, we propose a framework that integrates machine learning survival models with Restricted Mean Survival Time (RMST) to provide a more robust and clinically interpretable approach for survival analysis under non-proportional hazards. (2) Methods: Overall survival was analyzed in 1104 patients. PH violations were confirmed using Schoenfeld residuals and Kaplan–Meier inspection. We compared four models: stratified Cox Elastic Net (Cox E-Net), Random Survival Forest (RSF), Gradient Boosting Survival Analysis (GBSA), and DeepHit. Performance was assessed using Harrell’s C-index, time-dependent IPCW C-index, and Integrated Brier Score (IBS). RMST at 180 months was used to quantify absolute survival differences between ER subgroups. To improve the stability of the estimates, 200 bootstrap resamples were performed, and 95% confidence intervals were derived from the bootstrap distribution. (3) Results: ER status demonstrated significant PH violation (p < 0.005) with crossing survival curves. Discrimination (C-index 0.664–0.725) and calibration (IBS 0.149–0.169) were comparable across models, with RSF achieving the highest overall performance. Despite similar accuracy, survival curve structures differed substantially. Cox E-Net and RSF reproduced the observed crossing pattern, whereas GBSA generated smoother trajectories and DeepHit showed marked compression of subgroup separation. In the independent test cohort, the empirical RMST difference at 180 months was 16.6 months (ER-positive: 130.4; ER-negative: 113.8).
Model-based RMST differences ranged from 1 month (DeepHit) to 27 months (Cox E-Net), with RSF and GBSA (12.8 and 13.8 months) most closely approximating the empirical benchmark. (4) Conclusions: We propose a novel, model-agnostic ML + RMST framework that addresses non-proportional hazards while providing quantifiable, time-specific clinical benefit. Moreover, models with similar discrimination and calibration produced markedly different survival curve behavior and absolute RMST estimates, demonstrating that accuracy metrics alone are insufficient for clinical interpretation. By linking prognostic modeling with absolute survival quantification, this framework advances survival evaluation beyond relative risk ranking toward individualized, clinically meaningful decision support.
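The RMST at a horizon tau is the area under the survival curve S(t) on [0, tau]. A minimal sketch for a step (Kaplan–Meier-style) survival curve follows; the drop times and survival probabilities are illustrative toy inputs, not METABRIC estimates.

```python
def rmst(drop_times, surv_after, tau):
    """Restricted mean survival time: area under a step survival
    curve up to horizon tau.

    S(t) = 1 before the first drop; surv_after[i] is the value of
    S(t) on the interval [drop_times[i], drop_times[i+1]).
    """
    area, prev_t, prev_s = 0.0, 0.0, 1.0
    for t, s in zip(drop_times, surv_after):
        if t >= tau:
            break
        area += prev_s * (t - prev_t)  # rectangle up to this drop
        prev_t, prev_s = t, s
    area += prev_s * (tau - prev_t)    # final segment up to tau
    return area
```

A between-group RMST difference, as reported in the abstract, is then simply the RMST of one subgroup minus that of the other at the same horizon (here, 180 months).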
Full article
Open Access Article
Comparative Development of Machine Learning Models for Short-Term Indoor CO2 Forecasting Using Low-Cost IoT Sensors: A Case Study in a University Smart Laboratory
by
Zhanel Baigarayeva, Assiya Boltaboyeva, Zhuldyz Kalpeyeva, Raissa Uskenbayeva, Maksat Turmakhan, Adilet Kakharov, Aizhan Anartayeva and Aiman Moldagulova
Algorithms 2026, 19(5), 328; https://doi.org/10.3390/a19050328 - 24 Apr 2026
Abstract
Unlike reactive systems, mechanical ventilation controlled by CO2 concentration operates at a target efficiency that dynamically increases whenever the target CO2 level is exceeded. This approach eliminates the typical ‘dead-time’ and prevents air quality degradation by ensuring the system adjusts its performance immediately in response to concentration changes. This study focuses on the development and evaluation of data-driven predictive models for near-term indoor CO2 forecasting that can be integrated into pre-occupancy ventilation strategies, rather than on designing a complete control scheme. Experimental data were collected over four months in a 48 m2 smart laboratory configured as an open-plan office, where a heterogeneous IoT sensing architecture logged synchronized time-series measurements of CO2 and microclimate variables (temperature, relative humidity, PM2.5, TVOCs), together with acoustic noise levels and appliance-level energy consumption used as indirect occupancy-related signals. Raw telemetry was transformed into a 22-feature state vector using a structured feature engineering method incorporating z-score standardization, cyclic time encodings, multi-horizon CO2 lags, rolling statistics, momentum features, and non-linear interactions to represent temporal autocorrelation and daily periodicity. The study benchmarks multiple regression paradigms, including simple baselines and ensemble methods, and finds that an automated multi-level stacked ensemble achieved the highest predictive fidelity for short-term forecasting, with a Mean Absolute Error (MAE) of 32.97 ppm across an observed CO2 range of 403–2305 ppm, representing improvements of approximately 24% and 43% over Linear Regression and K-Nearest Neighbors (KNN), respectively. Temporal diagnostics showed strong phase alignment with observed CO2 rises during occupancy transitions and statistically reliable prediction intervals.
Five-fold walk-forward cross-validation confirmed the temporal stability of these results, with top models achieving consistent R2 values of 0.93–0.95 across Folds 2–5. These results demonstrate that, within a single-room university laboratory setting, historical sensor data from low-cost IoT devices can support accurate short-term CO2 forecasting, providing a predictive layer that could support future proactive ventilation scheduling aimed at reducing CO2 lag at the start of occupancy while avoiding unnecessary ventilation runtime. Generalization to other building types and occupancy profiles requires further validation.
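The lag and rolling-statistics feature construction the abstract describes can be sketched as follows. The lag set, window length, and momentum term below are hypothetical simplifications of the paper's 22-feature state vector, shown only to illustrate the general technique.

```python
import statistics

def make_features(co2, lags=(1, 2, 3), window=3):
    """Turn a CO2 time series into (features, target) training rows.

    Each row contains multi-horizon lagged values, a rolling mean
    over the preceding window, and a one-step momentum term; the
    target is the current reading. Lag set and window are
    illustrative choices, not the paper's configuration.
    """
    rows = []
    start = max(max(lags), window)
    for i in range(start, len(co2)):
        feats = [co2[i - l] for l in lags]                # lags
        feats.append(statistics.mean(co2[i - window:i]))  # rolling mean
        feats.append(co2[i - 1] - co2[i - 2])             # momentum
        rows.append((feats, co2[i]))
    return rows
```

Rows built this way feed directly into any regression model (linear baseline, KNN, or a stacked ensemble); in a walk-forward evaluation, folds must respect time order so that no future reading leaks into the features.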
Full article
(This article belongs to the Special Issue Emerging Trends in Distributed AI for Smart Environments)
Open Access Article
Integration of Building Information Modelling and Economic Multi-Criteria Decision-Making with Neural Networks: Towards a Smart Renewable Energy Community
by
Helena M. Ramos, Ana Paula Falcao, Praful Borkar, Oscar E. Coronado-Hernández, Francisco-Javier Sánchez-Romero and Modesto Pérez-Sánchez
Algorithms 2026, 19(5), 327; https://doi.org/10.3390/a19050327 - 23 Apr 2026
Abstract
This research introduces a novel methodology that combines Building Information Modelling (BIM) and Economic Multi-Criteria Decision-Making (EMCDM) with Neural Networks to optimize hybrid renewable energy systems in small communities. Its core aim is to improve sustainability, technical performance, and financial viability through integrated modelling and decision-making. The approach is applied to a hydropower site, evaluating five Scenarios (IDs 1–5) under a Community and Industry model. Financial benchmarks include a 10% Minimum Required Return and a 7-year payback period. ID3—hydropower, solar, and wind—proves most effective, with ANPV of €10,905 (wet) and €4501 (dry), and ROI of 155%/64%. Its ROIA/MRA Index peaks at 539%, and Payback/N ratios remain within acceptable limits (55%/96%). LCOE stays stable in average conditions (0.042–0.046 €/kWh), rising in dry years (0.07–0.10 €/kWh). Profitability differences primarily stem from demand and curtailment, rather than production costs. The NARX neural network reliably models SS% values from renewable inputs with low error across scenarios. The integrated BIM–EMCDM framework ensures transparent, sustainable, and risk-balanced energy system decisions for long-term autonomy.
Full article
(This article belongs to the Special Issue Computational Modeling and Intelligent Simulation of Next-Generation Energy Systems)
Open Access Article
High-Resolution Numerical Simulations of Urban Air Quality Using Computational Fluid Dynamics Model: Applications in Madrid, Spain
by
Roberto San Jose, Juan L. Perez-Camanyo and Miguel Jimenez-Gañan
Algorithms 2026, 19(5), 326; https://doi.org/10.3390/a19050326 - 22 Apr 2026
Abstract
This paper presents a high-spatial-resolution 3D system to simulate air quality in urban environments by coupling the WRF/Chem regional model with the PALM4U computational fluid dynamics model, together with an emission model using the SUMO microscopic traffic model. The system has been applied to two experiments in the city of Madrid, Spain. The first study quantifies the impact of four high-rise buildings on pollutant dispersion. The second evaluates the effect of changing tree types (broad-leaf vs. needle-leaf) in the Retiro Park on NO2 and O3 concentrations. Both simulations adopt a multiscale approach, using detailed 3D urban morphology, traffic flow data and meteorological conditions. In the first experiment, high-rise buildings caused local variations in NO2 and O3 of up to 15% and 20%, respectively. In the second experiment, replacing broad-leaf trees with needle-leaf trees led to a mean NO2 reduction of 1.69% across 90.67% of the study area. This research demonstrates the value of integrated CFD modeling for planning urban mitigation strategies and optimizing air quality in complex urban environments.
Full article
Open Access Article
SDS-Former: A Transformer-Based Method for Semantic Segmentation of Arid Land Remote Sensing Imagery
by
Yujie Du, Junfu Fan, Kuan Li and Yongrui Li
Algorithms 2026, 19(5), 325; https://doi.org/10.3390/a19050325 - 22 Apr 2026
Abstract
Semantic segmentation of land use and land cover (LULC) in arid regions remains challenging due to severe class imbalance, fragmented spatial distributions, and high spectral similarity among different land cover types. These characteristics often lead to an information bottleneck in deep segmentation networks and hinder the extraction of discriminative semantic representations. To address these issues, we propose SDS-Former, a lightweight semantic segmentation network specifically designed for remote sensing imagery in arid environments. SDS-Former incorporates an SSM-inspired Lightweight Semantic Enhancement (LSE) module to strengthen contextual modeling and alleviate the loss of discriminative information in deep features. To tackle scale variations, a Dynamic Selective Feature Fusion (DSFF) module is employed in the decoder to adaptively weight and fuse high-level semantics with low-level spatial details. Furthermore, a Feature Refinement Head (FRH) is introduced to enhance boundary localization and improve the recognition of small-scale and sparsely distributed land cover objects. Extensive ablation and comparative experiments demonstrate that SDS-Former consistently outperforms representative semantic segmentation methods across multiple evaluation metrics. On the Tarim Basin dataset, the proposed network achieves a mean Intersection over Union (mIoU) of 82.51% and an F1 score of 86.47%, indicating its superior effectiveness and robustness. Qualitative results further verify that SDS-Former exhibits clear advantages in distinguishing spectrally similar land cover types and preserving the spatial continuity of ground objects in complex arid-region scenes.
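The mean Intersection over Union (mIoU) metric reported above is computed per class from a confusion matrix and then averaged. A minimal sketch follows; the toy two-class matrix in the test is illustrative, not the paper's data.

```python
def mean_iou(conf):
    """Mean Intersection over Union from a confusion matrix where
    conf[t][p] counts pixels of true class t predicted as class p.

    Per class: IoU = TP / (TP + FP + FN); classes absent from both
    prediction and ground truth are skipped.
    """
    n = len(conf)
    ious = []
    for c in range(n):
        tp = conf[c][c]
        fp = sum(conf[r][c] for r in range(n)) - tp  # predicted c, wrong
        fn = sum(conf[c]) - tp                       # true c, missed
        denom = tp + fp + fn
        if denom:
            ious.append(tp / denom)
    return sum(ious) / len(ious)
```

Because each class contributes equally to the average regardless of its pixel count, mIoU is sensitive to the rare, fragmented classes that make arid-region LULC segmentation difficult, which is why it is the headline metric here alongside F1.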
Full article
(This article belongs to the Special Issue Artificial Intelligence, Image Processing and Spatial Analytics in Environmental Informatics)
Topics
Topic in
Actuators, Algorithms, BDCC, Future Internet, JMMP, Machines, Robotics, Systems
Smart Product Design and Manufacturing on Industrial Internet
Topic Editors: Pingyu Jiang, Jihong Liu, Ying Liu, Jihong Yan
Deadline: 30 June 2026
Topic in
Algorithms, Data, Earth, Geosciences, Mathematics, Land, Water, IJGI
Applications of Algorithms in Risk Assessment and Evaluation
Topic Editors: Yiding Bao, Qiang Wei
Deadline: 31 July 2026
Topic in
Algorithms, Applied Sciences, Electronics, MAKE, AI, Software
Applications of NLP, AI, and ML in Software Engineering
Topic Editors: Affan Yasin, Javed Ali Khan, Lijie Wen
Deadline: 30 August 2026
Topic in
Agriculture, Energies, Vehicles, Sensors, Sustainability, Urban Science, Applied Sciences, Algorithms
Sustainable Energy Systems
Topic Editors: Luis Hernández-Callejo, Carlos Meza Benavides, Jesús Armando Aguilar Jiménez
Deadline: 31 October 2026
Special Issues
Special Issue in
Algorithms
Bio-Inspired Algorithms: 2nd Edition
Guest Editors: Sándor Szénási, Gábor Kertész
Deadline: 30 May 2026
Special Issue in
Algorithms
Evolution of Algorithms in the Era of Generative AI
Guest Editors: Domenico Ursino, Gianluca Bonifazi, Enrico Corradini, Michele Marchetti
Deadline: 31 May 2026
Special Issue in
Algorithms
Machine Learning Algorithms and Optimization in the Digital Transition (2nd Edition)
Guest Editors: Mateus Mendes, Balduíno Mateus, Nuno Lavado
Deadline: 31 May 2026
Special Issue in
Algorithms
Algorithms and Innovations for Real-Time Processing in Streaming Systems and Applications
Guest Editor: Vishnu S. Pendyala
Deadline: 31 May 2026
Topical Collections
Topical Collection in
Algorithms
Parallel and Distributed Computing: Algorithms and Applications
Collection Editors: Charalampos Konstantopoulos, Grammati Pantziou
Topical Collection in
Algorithms
Feature Papers in Combinatorial Optimization, Graph, and Network Algorithms
Collection Editor: Roberto Montemanni
Topical Collection in
Algorithms
Feature Papers in Algorithms for Multidisciplinary Applications
Collection Editor: Francesc Pozo
Topical Collection in
Algorithms
Feature Papers in Randomized, Online and Approximation Algorithms
Collection Editor: Frank Werner