Journal Description
Algorithms is a peer-reviewed, open access journal which provides an advanced forum for studies related to algorithms and their applications, and is published monthly online by MDPI.
- Open Access — free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Theory and Methods) / CiteScore - Q1 (Numerical Analysis)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 19.2 days after submission; acceptance to publication takes 3.7 days (median values for papers published in this journal in the second half of 2025).
- Testimonials: See what our editors and authors say about Algorithms.
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Artificial Intelligence: AI, AI in Medicine, Algorithms, BDCC, MAKE, MTI, Stats, Virtual Worlds and Computers.
Impact Factor: 2.1 (2024); 5-Year Impact Factor: 2.0 (2024)
Latest Articles
Improved Dhole Optimization Algorithm for Optimal Parameter Estimation of PEMFC Models for High-Fidelity Energy Conversion
Algorithms 2026, 19(5), 385; https://doi.org/10.3390/a19050385 - 11 May 2026
Abstract
Proton Exchange Membrane Fuel Cells (PEMFCs) are crucial for the advancement of environmentally friendly hydrogen cars. They are among the most promising alternatives to conventional engines, primarily due to their ability to convert hydrogen into electricity. Fuel cell systems have a complex and non-linear mathematical model, and accurate identification of the unknown parameters of the PEMFC mathematical model is an important aspect of energy conversion. This research presents a novel meta-heuristic algorithm, the Improved Dhole Optimization Algorithm (IDOA), to estimate the unknown parameters of PEMFC models. The proposed IDOA is inspired by the collective hunting behavior of dholes; candidate solutions are systematically arranged and dynamically updated to enhance the overall search process. The objective function minimizes the sum of squared errors (SSE) between the actual and model-estimated voltages. Three commonly used PEMFC benchmark models, NedStack PS6, BCS 500 W, and Horizon 500 W, are utilized to assess the performance of IDOA, and the obtained results are compared against each other to validate their effectiveness. Comparison with an array of known optimization algorithms reported in the literature indicates that IDOA has low estimation error, an excellent convergence rate, and superior robustness. These results support the appropriateness of the proposed algorithm for high-accuracy PEMFC modeling in energy conversion applications.
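The SSE objective driving the parameter search can be sketched in a few lines. The polarization model below is a toy affine stand-in (the actual NedStack/BCS/Horizon models are nonlinear with several unknowns), so the names and values are illustrative only:

```python
import numpy as np

def sse_objective(params, model_voltage, i_data, v_data):
    """Sum of squared errors between measured and model-predicted voltages.

    `model_voltage(params, i)` is a placeholder for a PEMFC polarization
    model; the real benchmark models have several unknown parameters.
    """
    v_pred = np.array([model_voltage(params, i) for i in i_data])
    return float(np.sum((np.asarray(v_data) - v_pred) ** 2))

# Toy stand-in model: affine polarization curve v = a - b*i (illustration only).
toy_model = lambda p, i: p[0] - p[1] * i
currents = [1.0, 2.0, 3.0]
voltages = [9.0, 8.0, 7.0]
print(sse_objective([10.0, 1.0], toy_model, currents, voltages))  # exact fit -> 0.0
```

Any optimizer (IDOA in the paper) then searches the parameter space for the vector minimizing this value.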
Full article
(This article belongs to the Special Issue AI-Driven Engineering Optimization)
Open Access Review
A Survey of Machine Learning Approaches to IoT Security
by
Iosef Georgian, Teșulă Adrian Zamfirel, Nicolae Goga and Răzvan Crăciunescu
Algorithms 2026, 19(5), 384; https://doi.org/10.3390/a19050384 - 11 May 2026
Abstract
The explosive growth of the Internet of Things (IoT) has expanded the attack surface across industrial systems, smart cities, healthcare, and homes, motivating a synthesis of recent advances in machine learning for IoT security and a clear statement of remaining gaps. This review conducted a systematic search of MDPI, IEEE Xplore, Nature, ScienceDirect, and SpringerLink for publications from 2023 to 2025, screening them for domain relevance and organizing findings into a taxonomy of ML methods, threat types, and deployment contexts, with particular attention to datasets, edge constraints, and privacy considerations. We find that the field is shifting from signature-based detection to supervised and deep learning approaches that report high accuracy on benchmark traffic, while federated learning enables privacy-preserving, distributed intrusion detection with near-real-time edge performance. Across domains, prevalent threats include DDoS, unauthorized access, and malware; persistent challenges include device heterogeneity, rapid exploit weaponization, nonstandardized evaluation, concept drift, adversarial/poisoning risks, and governance and privacy constraints that hinder real world rollouts. We conclude that ML materially strengthens IoT resilience but requires rigorous, industry-scale validation, lightweight and explainable models, protocol-aware designs, robust federated aggregation, and SDN/NFV orchestration; we outline benchmark and deployment priorities to translate laboratory gains into operational security.
Full article
(This article belongs to the Special Issue Algorithms for Cyber Defense: From Cryptography to Behavioral Analysis)
Open Access Article
FictionRAG: A Stateful Metacognitive Framework for High-Fidelity Long-Narrative Role-Playing
by
Yifei Deng, Yudong Zhang, Jingpu Yang and Miao Fang
Algorithms 2026, 19(5), 383; https://doi.org/10.3390/a19050383 - 11 May 2026
Abstract
Maintaining high-fidelity character personas and tracking trusted narrative facts remain significant challenges for LLM-based role-playing systems, particularly in long-context scenarios. Traditional Retrieval-Augmented Generation (RAG) approaches, which typically rely on static, stateless retrieval, often struggle to capture evolving plot dynamics, leading to character hallucinations and logical inconsistencies over prolonged interactions. To address these limitations, we present FictionRAG, a novel stateful retrieval-augmented framework designed to enhance long-narrative role-playing. FictionRAG introduces a hierarchical memory architecture that decouples narrative information into three distinct lanes: factual events, persona traits, and worldview constraints. Furthermore, it employs a failure-driven metacognitive regulatory loop that dynamically identifies and corrects retrieval deficiencies—such as persona drift or conflicting world rules—before response generation. By treating role-playing as a dynamic state tracking problem rather than simple question answering, FictionRAG ensures that generated responses are strictly grounded in both the narrative timeline and the character’s psychological profile. Extensive experiments on a dataset comprising twenty classic novels demonstrate that FictionRAG significantly outperforms existing baselines in factual accuracy, persona stability, and worldview consistency. Beyond literary role-playing, these results suggest that stateful, evidence-constrained retrieval can serve as a general mechanism for long-form controllable generation tasks that require persistent state tracking and multi-dimensional consistency.
Full article
(This article belongs to the Special Issue Generative AI Meets Agent-Based Modelling and Simulation)
Open Access Article
Data-Driven Dynamic Pricing for Mitigating the Hockey Stick Effect: A Hybrid Forecasting and Actor-Critic Reinforcement Learning Framework
by
Shanshan Peng, Dandan Wang and Fang Zhu
Algorithms 2026, 19(5), 382; https://doi.org/10.3390/a19050382 - 11 May 2026
Abstract
Demand at fabric warehouses exhibits pronounced hockey-stick characteristics, which leads to problems such as peak congestion and labor shortages during operation. To alleviate this phenomenon, we propose a combined strategy that uses a SARIMA–Markov hybrid model for demand forecasting and then applies Actor-Critic reinforcement learning for dynamic pricing. The hybrid model integrates SARIMA with Markov chains for residual correction, capturing linear trends and seasonal patterns while correcting residuals, yielding more accurate predictions for highly volatile demand in textile logistics. Experimental results indicate that our approach achieves better performance than SARIMA, the Temporal Fusion Transformer (TFT), and ensemble baselines, especially in identifying and reproducing sharp demand peaks. By combining forecasting results with price elasticity, the proposed dynamic pricing scheme cuts peak-hour demand by 12.54%, which in turn eases pressure on labor scheduling and boosts the efficiency of workforce allocation. This work offers a data-driven approach to flattening demand fluctuations via intelligent pricing, improves operational efficiency without requiring extra hardware investment, and provides a practical response to a long-standing bottleneck in the textile logistics sector.
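The pricing lever rests on price elasticity of demand. A minimal constant-elasticity sketch (the elasticity value and numbers below are hypothetical, not taken from the paper) looks like:

```python
def demand_after_reprice(q_base, p_base, p_new, elasticity):
    """Constant-elasticity demand response: q = q_base * (p_new / p_base) ** e.
    elasticity is negative for ordinary goods (higher price -> lower demand)."""
    return q_base * (p_new / p_base) ** elasticity

# Raising the peak-hour price by 10% with a hypothetical elasticity of -1.25
# trims predicted peak demand by roughly 11%.
q = demand_after_reprice(1000.0, 10.0, 11.0, -1.25)
```

A dynamic pricing policy (Actor-Critic in the paper) chooses `p_new` per time slot so that the elasticity-adjusted demand profile flattens the forecast peaks.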
Full article
Open Access Article
RNAFoldDiff-Based Sequence-Aware Graph Diffusion for Accurate RNA 3D Structure Prediction
by
Abdullah Al-Refai, Mohammad F. Al-Hammouri, Bandi Vamsi and Ali Al Bataineh
Algorithms 2026, 19(5), 381; https://doi.org/10.3390/a19050381 - 11 May 2026
Abstract
Accurate prediction of RNA tertiary structure remains a core challenge in computational biology. Existing models frequently struggle with diverse topologies and intricate long-range interactions. We introduce RNAFoldDiff, a generative framework that integrates a sequence-aware graph transformer with a geometric diffusion process for end-to-end RNA 3D structure prediction. RNA sequences and secondary structures are converted into graph representations that capture backbone connectivity and base pair topology. The transformer models local motifs and global dependencies, while the diffusion module iteratively denoises coordinates into physically consistent conformations. The model was pretrained on more than 15,000 structural motifs from the RNA 3D Hub and fine-tuned on complete RNAs from the RNA-Puzzles dataset. In benchmarking tests, RNAFoldDiff achieved an average root mean square deviation (RMSD) of 2.64 Å, a Global Distance Test (GDT) score of 68.7%, and a base pair accuracy of 89.5%, reducing RMSD by nearly 30% and improving GDT by 9 points compared to RoseTTAFoldNA. The framework also outperformed FARFAR2, SimRNA, and RNAformer. Ablation experiments confirmed the contributions of diffusion refinement, edge-aware graph encoding, and motif-level pretraining, while qualitative analyses showed biologically plausible folds including helices, junctions, and multiloops. By combining topology-aware graph learning with generative diffusion, RNAFoldDiff advances RNA tertiary structure modeling and provides a practical tool for RNA design, ribozyme analysis, and structure-guided drug discovery.
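The headline RMSD metric is simple to compute once structures are superposed. A minimal sketch, assuming pre-aligned coordinate sets (benchmarks typically superpose first, e.g. with the Kabsch algorithm):

```python
import numpy as np

def rmsd(P, Q):
    """Root mean square deviation between two pre-aligned N x 3 coordinate
    sets: sqrt of the mean squared distance between paired atoms."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    return float(np.sqrt(np.mean(np.sum((P - Q) ** 2, axis=1))))

# Two 2-atom toy structures differing by 1 unit at one atom.
print(rmsd([[0, 0, 0], [1, 0, 0]], [[0, 0, 0], [1, 0, 1]]))  # ~0.707
```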
Full article
Open Access Article
MaLCA: Point Cloud Registration with Mamba-Enhanced Features and Local Correspondence Augmentation
by
Yuchen Huo, Longyun Zhang, Huijuan Guo, Jingyi Gong, Liqun Kuang, Xie Han and Fengguang Xiong
Algorithms 2026, 19(5), 380; https://doi.org/10.3390/a19050380 - 11 May 2026
Abstract
High-quality correspondences are critical to the accuracy and robustness of point cloud registration. Existing Transformer-based methods are fundamentally constrained by the quadratic computational complexity of self-attention, resulting in limited scalability. Moreover, conventional outlier removal paradigms operate by pruning initial correspondences, and thus fail catastrophically in low-overlap scenarios where initial inliers are inherently scarce. To address these challenges, we propose MaLCA, a point cloud registration method based on Mamba-enhanced features and local correspondence augmentation. We first adopt KPFCN as the backbone to extract multi-scale geometric features from raw point clouds. A Mamba selective state space model then replaces self-attention for global context modeling with linear complexity, while cross-attention is retained to facilitate inter-point-cloud feature interaction. Rather than following the conventional subtraction-based outlier removal paradigm, we introduce a prior-guided local rematching strategy combined with a fused neighbor matching mechanism that iteratively constructs dense, high-quality correspondences from sparse initial inliers, fundamentally overcoming the bottleneck of inlier scarcity in challenging scenes. Extensive experiments on the 3DMatch/3DLoMatch and 4DMatch/4DLoMatch benchmarks demonstrate that MaLCA achieves competitive registration performance across both rigid and deformable scenarios, with particular advantages in low-overlap cases.
Full article
Open Access Systematic Review
A Systematic Review of Quantum Machine Learning in Education 5.0: Applications and Future Research Directions
by
Jimmy Aurelio Rosales Huamani, Jose Ogosi Auqui, Pedro Toribio Pando, Ernan Capcha Milla, Jorge Luis Quinto Esquivel and Jose Luis Castillo Sequera
Algorithms 2026, 19(5), 379; https://doi.org/10.3390/a19050379 - 11 May 2026
Abstract
Quantum computing is one of the most promising emerging technologies, and quantum machine learning (QML), as one of its key branches, is attracting growing interest for intelligent data processing in education. This study conducted a systematic review of QML in the context of Education 5.0 using the PRISMA 2020 methodology. A total of 48 peer-reviewed articles from Springer, Scopus, IEEE Xplore, PubMed, MDPI, arXiv, and APS were analyzed. The results indicate that QML has significant potential to enhance personalized learning, optimize educational data processing, support curriculum innovation, and foster the development of quantum-related competencies. Representative QML algorithms, including Quantum Support Vector Machines, variational quantum circuits, and quantum neural networks, are identified as key technological enablers for future educational applications. However, significant challenges remain, such as limited access to quantum infrastructure, lack of specialized curricula, hardware constraints, and the need for interdisciplinary training. Overall, this study highlights the growing relevance of QML for adaptive learning, learning analytics, and intelligent educational systems, while emphasizing the need for further empirical validation and scalable implementation in real educational environments.
Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms and Generative AI in Education (2nd Edition))
Open Access Article
Multi-Route Search and Adaptive Fusion for Power QA with Small Language Model Guidance
by
Zhijun Shen, Qian Guo, Lizhou Jiang, Jingkang Huang, Zhenfan Yu, Xinlei Cai, Hailin Pang and Tao Yu
Algorithms 2026, 19(5), 378; https://doi.org/10.3390/a19050378 - 11 May 2026
Abstract
Power documentation serves as the core guideline for the safe operation of power systems, and its precise retrieval is crucial for ensuring grid stability and safety. In this context, Retrieval-Augmented Generation (RAG) emerges as an effective technique by combining the natural language understanding capabilities of LLMs with the traceability of retrieval-based models. However, existing RAG frameworks face several main challenges for power-system documents: semantic drift caused by non-standardized industry terminology, increased semantic noise due to fixed-window segmentation, and knowledge conflicts in the multi-source retrieval context. To address these challenges, we propose a multi-path adaptive fusion retrieval framework based on small language models (SLMs). To map queries to standard terminology, our framework first constructs a common terminology repository and a section-structure-aware index for the power industry while fully preserving the physical hierarchical logic of related documents. Subsequently, the SLM in our framework assigns prior weights based on query features and retrieved context, which enables adaptive fusion of retrieval paths through confidence assessment and consistency verification. With the help of this fusion process, our method effectively filters retrieval noise and resolves knowledge conflicts. Experimental results on real-world power-document datasets covering dispatch, energy storage, and emergency response show that our framework achieves an average recall of 91%, outperforming DENSE and BM25 by 21% and 28%, respectively. Compared with other methods, it yields the best BERTScore F1 (0.7798) and Rouge-1/2/L F1 (0.2430, 0.1588, 0.2098) and achieves the best results in the RAGAS framework evaluation, which significantly enhances the rigor and reliability of the question-answering system in the power engineering domain.
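The paper's adaptive fusion assigns path weights with an SLM. As a point of reference, a fixed-weight reciprocal-rank fusion baseline over several retrieval paths can be sketched as follows (the constant k = 60 is the conventional RRF default, not a value from the paper):

```python
def rrf_fuse(rankings, weights=None, k=60):
    """Weighted reciprocal-rank fusion of several retrieval paths.
    `rankings`: list of ranked doc-id lists (best first); `weights`: per-path
    priors (assigned adaptively by an SLM in the paper; fixed here)."""
    weights = weights or [1.0] * len(rankings)
    scores = {}
    for w, ranking in zip(weights, rankings):
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + w / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Two paths disagree on order; the fused ranking rewards consistent top ranks.
fused = rrf_fuse([["d1", "d2", "d3"], ["d2", "d3", "d1"]])  # -> d2 first
```

Raising a path's weight (e.g. boosting a dense retriever for terminology-heavy queries) shifts the fused order toward that path, which is the knob the SLM turns in the proposed framework.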
Full article
Open Access Article
Exact Pattern-Aware Extraction for Equality Saturation via Bounded-Depth Tree Covering
by
Zi Cheng, Mengting Yuan and Lefei Zhang
Algorithms 2026, 19(5), 377; https://doi.org/10.3390/a19050377 - 11 May 2026
Abstract
Equality saturation explores equivalent program expressions via e-graphs, and its extraction step selects one representative per equivalence class to form an output tree. Standard extraction minimizes a decomposable per-node cost function that cannot capture multi-node structural patterns arising in SMT preprocessing and compiler instruction selection. We formalize pattern-aware extraction as a weighted pattern cover problem on AND-OR DAGs and establish its correspondence to tree covering in instruction selection. Three challenges arise when migrating tree covering to e-graphs: annotation ambiguity from multiple candidates per class, context-dependent selection from depth-2 templates, and DAG sharing conflict. We show that the coupled selection–tiling problem reduces to a tree DP with three mutually exclusive tile-role states, generalizing BURS tree covering from fixed trees to AND-OR DAGs. A bottom-up pass computes optimal DP values, and a top-down pass traces back decisions to produce the output tree. For template depth at most two, the algorithm computes an exact optimum in time. The evaluation targets extraction-level coverage, since end-to-end performance additionally depends on rewrite-rule design and saturation completeness. On SMT-COMP benchmarks, the algorithm achieves up to higher weighted pattern coverage than standard extraction. Depth-2 tiling contributes 45–51% additional improvement, with overhead within of standard extraction.
Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
Open Access Article
Remaining Useful Life Prediction for Special Gas Cylinders Based on SSA–PSO–ResNet–LSTM–Attention Framework
by
Hao Hu, Yujie Liu, Xiaojin Jin and Bo Hu
Algorithms 2026, 19(5), 376; https://doi.org/10.3390/a19050376 - 11 May 2026
Abstract
Accurate prediction of the Remaining Useful Life (RUL) of special gas cylinders is critical for industrial safety management. However, the nonlinear, strongly coupled degradation behaviors of these cylinders, combined with non-stationary and high-noise monitoring data, limit the performance of single deep learning models. Traditional hyperparameter tuning and signal processing methods often fail to meet the required prediction accuracy. To address these challenges, this study proposes a hybrid SSA–PSO–ResNet–LSTM–Attention framework for RUL prediction of special gas cylinders. The framework first applies Singular Spectrum Analysis (SSA) to decompose and reconstruct the 12-dimensional multi-source sensor signals, effectively suppressing noise while extracting core degradation trends. Subsequently, a ResNet–LSTM–Attention collaborative model is constructed, where ResNet ensures stable spatial feature propagation, LSTM captures long- and short-term temporal dependencies, and a multi-head attention mechanism emphasizes critical time steps associated with abrupt degradation. Furthermore, a Particle Swarm Optimization (PSO) algorithm is employed to globally optimize key hyperparameters, including the number of convolutional kernels, LSTM hidden units, and learning rate, mitigating the subjectivity of manual tuning. Experimental validation is conducted on 1000 real monitoring samples from 100 composite material gas cylinders, with a cylinder ID-based 7:1:2 train–validation–test split and stratified sampling covering four operating conditions. PSO optimizes hyperparameters using the validation set RMSE as the fitness function, and the test set is exclusively used for final performance evaluation. All results are reported as the mean ± standard deviation from grouped 5-fold cross-validation on the cylinder-wise partition. 
The proposed model achieves a test RMSE of 71.55, MAE of 50.63, and R2 of 0.9584, representing a 34.2% and 30.2% reduction in RMSE and MAE, respectively, compared with the second-best CNN-LSTM model, and significantly outperforming SVR, MLP, and other benchmark models. Ablation studies confirm the positive synergistic effect of each component, with the removal of either the attention mechanism or the ResNet module causing substantial performance degradation. By employing physically calibrated RUL labels and a balanced multi-condition dataset, the proposed framework achieves high predictive accuracy and good potential for industrial application, providing an effective solution for RUL prediction of special gas cylinders and similar high-pressure vessels, with potential applications in intelligent maintenance of complex industrial equipment.
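The SSA decomposition step can be sketched with a Hankel embedding and truncated SVD. This is a generic single-channel illustration; the paper processes 12-dimensional sensor signals, and its window and component choices are not stated here:

```python
import numpy as np

def ssa_reconstruct(x, window, n_components):
    """Basic Singular Spectrum Analysis: embed the series in a Hankel
    trajectory matrix, take its SVD, and rebuild a denoised series from the
    leading components by diagonal averaging (Hankelization)."""
    x = np.asarray(x, float)
    n = len(x)
    K = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(K)])  # window x K
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(K):                 # average anti-diagonals back to 1-D
        out[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return out / counts

t = np.linspace(0, 4 * np.pi, 200)
noisy = np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
smooth = ssa_reconstruct(noisy, window=40, n_components=2)  # keep the trend pair
```

Keeping only the leading singular components suppresses noise while preserving the slow degradation trend, which is then fed to the downstream predictor.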
Full article
Open Access Article
The Arithmetic Jump: A Branch-Free Index Inversion for 3D Arrays
by
Paul A. Gagniuc
Algorithms 2026, 19(5), 375; https://doi.org/10.3390/a19050375 - 11 May 2026
Abstract
This work presents a compact arithmetic formulation for inverting row-major linear indices into three-dimensional coordinates. The formulation defines a bijective and reversible mapping based solely on integer division and modulo operations and avoids iteration and control-flow constructs. A traversal-based reconstruction strategy and the arithmetic formulation are evaluated on Graphics Processing Unit (GPU) hardware across multiple volumetric configurations. The experimental results show that arithmetic index decomposition yields uniform execution behavior, low run-to-run timing variability, and constant per-thread execution cost under massively parallel execution. The observed differences follow from GPU architectural characteristics, particularly sensitivity to control-flow divergence. The formulation provides a portable reference model for multidimensional index inversion suitable for parallel kernels and hardware-oriented implementations.
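The div/mod inversion for a row-major layout n = (i·Y + j)·Z + k can be written branch-free as below (the axis naming is ours, chosen to match the usual row-major convention):

```python
def invert_index(n, Y, Z):
    """Branch-free inversion of a row-major linear index n into (i, j, k)
    for an array of shape (X, Y, Z), where n = (i * Y + j) * Z + k.
    Only integer division and modulo -- no loops or branches."""
    i = n // (Y * Z)
    j = (n // Z) % Y
    k = n % Z
    return i, j, k

# Round trip over a small 4 x 3 x 5 volume: the mapping is bijective.
assert all(invert_index((i * 3 + j) * 5 + k, 3, 5) == (i, j, k)
           for i in range(4) for j in range(3) for k in range(5))
```

The absence of branches is what gives uniform per-thread cost on a GPU: every thread executes the same two divisions and two modulo operations regardless of its index.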
Full article
Open Access Article
GRU Learning of Asymmetric Sequence Structure in Penney’s Game
by
Huijuan Liao and Yanlong Sun
Algorithms 2026, 19(5), 374; https://doi.org/10.3390/a19050374 - 10 May 2026
Abstract
Alternation preference in random-sequence judgments has been linked to objective differences in pattern waiting times. The present study asks whether recurrent networks can learn such temporal asymmetries beyond single-pattern regularities and capture the more complex competitive structure of Penney’s game. To address this question, we adopt Penney’s game as a mathematically tractable testbed, in which competitive advantage is determined not by marginal sequence frequency but by the joint effect of self-overlap and cross-overlap structure. Based on Conway’s formula, we formulate two complementary tasks for gated recurrent units (GRUs): optimal counterstrategy prediction and win-probability estimation. Experimental results show that the GRU achieves strong performance on both tasks, recovering optimal or near-optimal second player responses and accurately estimating theoretical winning probabilities with good ranking consistency. These findings suggest that recurrent networks can learn structural regularities underlying asymmetric sequence competition, extending from single-pattern waiting-time effects to more complex competitive sequence settings.
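Conway's formula, on which both tasks are built, is compact enough to state in code. The sketch below computes the second player's win probability from Conway's leading numbers:

```python
def leading_number(x, y):
    """Conway's leading number L(x, y): sum of 2**(k-1) over every k such
    that the last k symbols of x equal the first k symbols of y."""
    return sum(2 ** (k - 1) for k in range(1, min(len(x), len(y)) + 1)
               if x[-k:] == y[:k])

def p_second_wins(a, b):
    """Probability that pattern b appears before pattern a in fair coin
    flips (Conway's formula): odds b:a = (L(a,a) - L(a,b)) : (L(b,b) - L(b,a))."""
    num = leading_number(a, a) - leading_number(a, b)
    den = leading_number(b, b) - leading_number(b, a)
    return num / (num + den)

print(p_second_wins("HHH", "THH"))  # the classic 7/8 advantage -> 0.875
```

These exact probabilities serve as the targets that the GRU is trained to recover from sequence data.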
Full article
Open Access Article
Multiscale Model—Differential Evolutionary Algorithm for Inverse Solution of T-Wave Inversion in Electrocardiography
by
Tengda Guo, Junjiang Zhu and Yunjie Li
Algorithms 2026, 19(5), 373; https://doi.org/10.3390/a19050373 - 9 May 2026
Abstract
T-wave inversion (TWI) on an electrocardiogram (ECG) is a key indicator of myocardial ischemia, yet existing inverse ECG methods lack quantitative physiological parameter resolution. This study aims to propose a novel multiscale computational framework to inversely identify the ionic mechanisms underlying TWI. A cell–tissue–torso cardiac electrophysiological model was integrated with a differential evolution (DE) algorithm. The forward model combined the Grandi atrial model and BPS2020 ventricular model, simulating action potential propagation via cellular automata and body surface ECGs via field point potentials. The inverse solution optimized 29 physiological parameters by minimizing the root-mean-square error between the simulated and clinical ECGs. The method was applied to 30 normal and 30 TWI cases to analyze the repolarization abnormalities. The study revealed that extracellular Ca2+ > 2.88 mmol/L and K+ < 3.4 mmol/L in ventricular myocytes (Endo, M, Epi) induce TWI. Quantitative analysis identified specific 95% confidence intervals for ionic imbalances in three scenarios: Case 1 ( ) with [Ca2+] 2.60–3.30 mmol/L and [K+] 1.9–4.7 mmol/L; Case 2 ( ) with [Ca2+] 2.36–3.68 mmol/L and [K+] 3.13–4.07 mmol/L; and Case 3 ( ) with [Ca2+] 2.67–3.91 mmol/L and [K+] 3.11–3.45 mmol/L. This approach enables cellular-scale mechanistic insights into TWI by quantifying ionic concentration changes. The framework supports the advancement of personalized cardiac diagnostics and drug development.
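The DE optimizer at the framework's core follows the standard DE/rand/1/bin scheme. A generic sketch on a toy objective (the population size, F, and CR below are common defaults, not the paper's settings, which fit 29 electrophysiological parameters by RMSE):

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, gens=200, F=0.8, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin: mutate with a scaled difference vector,
    apply binomial crossover, keep the trial only if it is no worse."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    d = len(lo)
    X = rng.uniform(lo, hi, size=(pop, d))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3,
                                   replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True      # at least one gene crosses over
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft <= fit[i]:                   # greedy selection
                X[i], fit[i] = trial, ft
    return X[fit.argmin()], float(fit.min())

# Toy run on the 3-D sphere function; the paper's objective is ECG RMSE.
best, val = differential_evolution(lambda x: float(np.sum(x ** 2)),
                                   bounds=[(-5, 5)] * 3)
```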
Full article
Open Access Article
Region-Based Algorithm for Switching Frequency Reduction in Predictive Control of Converter Supplied Electric Drives
by
Manuel R. Arahal, Manuel G. Satué, Francisco Colodro and Alfredo P. Vega-Leal
Algorithms 2026, 19(5), 372; https://doi.org/10.3390/a19050372 - 9 May 2026
Abstract
Switching losses make up a notable portion of all losses in converter-supplied electric drives. Control algorithms such as Finite State Model Predictive Control (FSMPC) have tackled this issue in different ways, in particular by incorporating a switching penalty into the cost function. This, however, results in an optimization problem with increased computational load, restricting the attainable sampling frequency for a given computing hardware. Recently, fast algorithms have been developed that reduce the computational load; however, they cannot incorporate the switching penalty term. This paper explores a way around this problem for the particular case of stator current control of a five-phase induction motor. The proposal achieves fast computation even if a term for switching frequency reduction is present in the cost function. Experimental results show how stator current tracking performance is affected in both the torque-producing plane and the harmonic subspace.
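The switching-penalized FSMPC cost amounts to enumerating the converter's finite states each sampling period. A toy two-level, three-leg sketch follows; the current "model" here is a made-up linear map standing in for the real five-phase motor model, and `lam` plays the role of the switching weight:

```python
def fsmpc_select(states, i_ref, i_pred, prev_state, lam):
    """One FSMPC step: enumerate the converter's finite switching states and
    pick the one minimizing ||i_ref - i_pred(s)||^2 + lam * (leg commutations
    relative to prev_state). `i_pred` maps a state to predicted currents."""
    def cost(s):
        err = sum((r - p) ** 2 for r, p in zip(i_ref, i_pred(s)))
        switches = sum(a != b for a, b in zip(s, prev_state))
        return err + lam * switches
    return min(states, key=cost)

# Toy 2-level, 3-leg inverter: 8 states; a hypothetical linear "prediction"
# stands in for the motor model. With lam = 0 the best-tracking state wins;
# a larger lam keeps legs from toggling.
states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
pred = lambda s: (s[0] - s[2], s[1] - s[0])
choice = fsmpc_select(states, i_ref=(1.0, 0.0), i_pred=pred,
                      prev_state=(0, 0, 0), lam=0.0)
```

The paper's contribution is avoiding this full enumeration while still honoring the `lam` term; the sketch only illustrates the cost structure being optimized.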
Full article
(This article belongs to the Special Issue Advanced Predictive Control Algorithms for Electric Drives)
Open Access Article
Incremental Multi-Camera Extrinsic Calibration Method Based on PnP Integrating Weighted AprilTag Detections and Multi-View Triangulation
by
Liliya A. Demidova and Vladimir E. Zhuravlev
Algorithms 2026, 19(5), 371; https://doi.org/10.3390/a19050371 - 8 May 2026
Abstract
Accurate extrinsic calibration of multi-camera systems is a central problem in three-dimensional computer vision, as errors in the relative positioning of sensors directly propagate into geometric distortions that critically degrade the quality of downstream applications. This paper proposes an incremental extrinsic camera parameter initialization method that improves upon the baseline iterative registration algorithm based on the Perspective-n-Point (PnP) problem. Unlike board-based calibration frameworks, the proposed approach operates on individually placed markers with no prior knowledge of their mutual positions, enabling recalibration without dedicated calibration sessions. The accuracy improvement is achieved through the introduction of heuristic weighting of fiducial marker detections using AprilTags, as well as the application of a multi-view triangulation algorithm for dynamic refinement of marker spatial coordinates at each stage of scene expansion. Theoretical analysis demonstrates that the incorporation of these mechanisms does not increase the overall asymptotic computational complexity of the complete calibration cycle (including the global optimization stage), despite the higher computational cost of the initialization stage itself. Empirical validation of the method is performed on both synthetic datasets with known ground-truth camera parameters and real-world capture data through the evaluation of geometric errors and their comparison with the baseline method. Experimental results, supplemented by an ablation study, indicate that the proposed algorithm achieves statistically significant improvements on synthetic data in more than 80% of cases, while on real data it is on average 85% more accurate in terms of reprojection error.
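The multi-view triangulation step — refining a marker's 3D position from several weighted ray observations — can be sketched as a weighted least-squares intersection of camera rays. This midpoint-style formulation and the example geometry are illustrative; the paper's exact triangulation algorithm and AprilTag weighting heuristics may differ.

```python
import math

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [u - f * v for u, v in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

def triangulate(origins, dirs, weights):
    """Weighted least-squares point closest to a bundle of camera rays:
    minimize sum_i w_i * ||(I - d_i d_i^T)(p - o_i)||^2 over p."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for o, d, w in zip(origins, dirs, weights):
        n = math.sqrt(sum(c * c for c in d))
        d = [c / n for c in d]                   # normalize ray direction
        for r in range(3):
            for c in range(3):
                m = w * ((1.0 if r == c else 0.0) - d[r] * d[c])
                A[r][c] += m
                b[r] += m * o[c]
    return solve3(A, b)

# Three rays that all pass through the point (1, 2, 3); the heavier weight
# mimics a more trusted (e.g. sharper or closer) marker detection.
origins = [(0, 0, 0), (5, 0, 0), (0, 5, 0)]
dirs = [(1, 2, 3), (-4, 2, 3), (1, -3, 3)]
point = triangulate(origins, dirs, weights=[1.0, 2.0, 0.5])
```

With noisy detections the weights pull the estimate toward the more reliable rays, which is the intuition behind weighting detections before triangulating.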
Full article
(This article belongs to the Special Issue Visual Attributes in Computer Vision Applications)
Open Access Article
Continuous-Variable Quantum Fourier Layer: Applications to Filtering and PDE Solving
by
Paolo Marcandelli, Stefano Mariani, Martina Siena and Stefano Markidis
Algorithms 2026, 19(5), 370; https://doi.org/10.3390/a19050370 - 8 May 2026
Abstract
Fourier representations play a central role in operator learning for partial differential equations and are increasingly being explored in quantum machine learning architectures. The classical fast Fourier transform (FFT), particularly in its Cooley–Tukey decomposition, exhibits a structure that naturally matches continuous-variable quantum circuits. This correspondence establishes a direct structural isomorphism between the Cooley–Tukey butterfly network and Gaussian photonic gates, enabling the FFT to be realized as a native optical computation in continuous-variable quantum computing. Building on this observation, we introduce a continuous-variable Quantum Fourier Layer (CV–QFL) based on a bipartite Gaussian encoding and a Cooley–Tukey quantum Fourier transform, enabling exact two-dimensional spectral processing within a Gaussian photonic circuit. We test the CV–QFL on two representative tasks: spectral low-pass filtering and Fourier-domain integration of the heat equation. In both cases, the results match the classical reference to machine precision. More broadly, this work lays the foundation for continuous-variable approaches to quantum scientific computing and for the development of native spectral architectures in quantum machine learning.
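The Cooley–Tukey decomposition referenced above splits a length-N DFT into even- and odd-indexed halves recombined by butterfly operations — the recursive structure the paper maps onto Gaussian photonic gates. A minimal radix-2 sketch:

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return [complex(x[0])]
    even = fft(x[0::2])   # DFT of even-indexed samples
    odd = fft(x[1::2])    # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + tw             # upper butterfly output
        out[k + n // 2] = even[k] - tw    # lower butterfly output
    return out

spectrum = fft([1, 2, 3, 4])
```

Each `+`/`−` pair in the loop is one butterfly; it is this fixed interference pattern, independent of the data, that admits a native optical realization.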
Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
Open Access Article
The Nonlinear Relationship Between Fasting Plasma Glucose, HbA1c, and Blood Pressure: A Cross-Sectional Analysis of 54,881 Adults from NHANES 1999–2023
by
Mikhail Kolev, Irina Naskinova, Mariyan Milev, Hristo Kalinov, Gabriela Vasileva and Penko Mitev
Algorithms 2026, 19(5), 369; https://doi.org/10.3390/a19050369 - 7 May 2026
Abstract
The relationship between blood glucose levels and blood pressure is well established in the clinical literature, yet its precise quantitative characterization, including nonlinear effects, threshold phenomena, and demographic modifiers, remains incompletely understood. In this study, we conducted a comprehensive cross-sectional analysis of the National Health and Nutrition Examination Survey (NHANES) spanning 11 survey cycles (1999–2023), comprising 54,881 adult participants with at least one glycemic marker and standardized blood pressure measurements. Of these, 26,981 had valid fasting plasma glucose (FPG) measurements, and 49,327 had valid glycated hemoglobin (HbA1c) measurements. We employed restricted cubic splines (RCS), generalized additive models (GAMs), and segmented regression to characterize the dose–response relationship between glycemic markers and both systolic (SBP) and diastolic blood pressure (DBP). A 10 mg/dL increase in FPG was associated with a 0.32 mmHg increase in SBP (95% CI: 0.26–0.38, p < 0.001) after adjusting for age, sex, and body mass index (BMI). Nonlinearity was statistically significant for all exposure–outcome combinations (p < 10⁻⁷ for Wald tests). Segmented regression identified an FPG breakpoint at 122.1 mg/dL (95% CI: 119.5–125.6), below which SBP increased at 0.39 mmHg per mg/dL and above which the association was essentially flat. Stratified analyses revealed that the glucose–BP association was strongest in females (β = 0.048 per mg/dL) compared with males (β = 0.021), and in prediabetic individuals (β = 0.065) compared with those with established diabetes (β = 0.014). In the statistical mediation decomposition, body mass index accounted for 23.5% of the total FPG–SBP association (95% bootstrap CI: 19.3–28.9%; 1000 resamples). A significant FPG × BMI interaction (p < 0.001) indicated that the glucose–BP relationship is modulated by adiposity. These findings provide a large-scale population-level analysis of the glucose–blood pressure dose–response relationship and identify potential thresholds warranting further investigation for integrated cardiometabolic risk management. Given the cross-sectional design and BMI's plausible role as a shared upstream determinant of glucose and blood pressure, the 23.5% figure is reported as a confounding decomposition rather than as evidence of causal mediation. Insulin resistance (HOMA-IR) and C-reactive protein did not contribute significantly as additional decomposition pathways.
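Segmented regression of the kind used to locate the FPG breakpoint can be sketched as a grid search over candidate breakpoints, fitting the hinge model y ≈ b0 + b1·x + b2·max(0, x − c) by least squares at each candidate and keeping the best fit. The data below are synthetic, built to kink at 122; this is a sketch, not the study's NHANES pipeline.

```python
def ols(X, y):
    """Least squares via normal equations (fine for small, well-posed designs)."""
    p = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(p):                      # Gauss-Jordan with pivoting
        piv = max(range(col, p), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(p):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [u - f * v for u, v in zip(M[r], M[col])]
    return [M[i][p] / M[i][i] for i in range(p)]

def segmented_fit(x, y, candidates):
    """Grid-search the breakpoint c of y ~ b0 + b1*x + b2*max(0, x - c)."""
    best = None
    for c in candidates:
        X = [[1.0, xi, max(0.0, xi - c)] for xi in x]
        beta = ols(X, y)
        sse = sum((yi - sum(b * f for b, f in zip(beta, row))) ** 2
                  for row, yi in zip(X, y))
        if best is None or sse < best[0]:
            best = (sse, c, beta)
    return best[1], best[2]

# Synthetic outcome rising at 0.39 per unit up to a kink at 122, flat above --
# mimicking the reported breakpoint pattern, not real NHANES data.
xs = list(range(80, 180, 2))
ys = [100 + 0.39 * min(xi, 122) for xi in xs]
c_hat, beta = segmented_fit(xs, ys, candidates=range(100, 150, 2))
```

Here `beta[1]` recovers the below-breakpoint slope and `beta[2]` the slope change at the kink; a flat upper segment shows up as `beta[2] ≈ -beta[1]`.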
Full article
(This article belongs to the Special Issue Advanced Algorithms for Biomedical Data Analysis)
Open Access Article
Stroke Rehabilitation in Virtual Reality Through Enhanced Plantar Pressure Detection Using Sensor Resolution and Adaptive Thresholding
by
Audrey Rah and Yuhua Chen
Algorithms 2026, 19(5), 368; https://doi.org/10.3390/a19050368 - 6 May 2026
Abstract
Early-stage stroke rehabilitation increasingly incorporates virtual reality (VR) systems to provide interactive motor training and positive reinforcement. However, the minimal voluntary plantar pressure activations generated during early recovery are often below the detection limits of conventional pressure-sensing platforms, restricting timely feedback. This study quantitatively evaluates the detectability of low-amplitude plantar micro-intent signals under varying sensor resolution and adaptive threshold conditions. Publicly available plantar pressure recordings from the PhysioNet Center for Verification and Evaluation of Stroke (CVES) database were used as physiological baseline signals. Micro-intent was modeled as short-duration half-sine pressure pulses with systematically varied amplitudes and integrated into low-load baseline segments. Sensor resolution was represented through controlled noise modeling to emulate low-, medium-, and high-resolution sensing scenarios. A sliding-window adaptive threshold detector was evaluated across multiple amplitudes and sensitivity stages. The detection probability, false positive rate, and minimum detectable amplitude (defined as ≥80% detection probability) were quantified. The results show that detection probability increases with signal amplitude and shifts toward lower amplitudes with improved sensor resolution and more sensitive threshold configurations. Higher-resolution sensing reduced the minimum detectable amplitude, while adaptive thresholding enabled earlier detection of weak plantar activations without substantial increases in false positives. These findings provide quantitative design guidance for pressure-sensing VR rehabilitation systems targeting early-stage motor recovery.
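A sliding-window adaptive threshold detector of the kind evaluated above can be sketched as: flag a sample when it exceeds the trailing-window mean by k trailing standard deviations. The window length, sensitivity k, pulse amplitude, and noise level below are illustrative choices, not the study's calibrated settings.

```python
import math
import random
from collections import deque

def adaptive_detect(signal, window=50, k=3.0):
    """Flag indices whose sample exceeds mean + k*std of the preceding
    `window` samples (trailing statistics only, so no look-ahead)."""
    buf = deque(maxlen=window)
    hits = []
    for i, v in enumerate(signal):
        if len(buf) == window:
            mean = sum(buf) / window
            var = sum((b - mean) ** 2 for b in buf) / window
            if v > mean + k * math.sqrt(var):
                hits.append(i)
        buf.append(v)
    return hits

# Baseline noise plus a half-sine "micro-intent" pulse at samples 200-219,
# echoing the pulse model described in the abstract.
rng = random.Random(1)
sig = [rng.gauss(0.0, 0.05) for _ in range(400)]
for j in range(20):
    sig[200 + j] += 0.5 * math.sin(math.pi * j / 19)
events = adaptive_detect(sig, window=50, k=3.0)
```

Lowering k or the noise level (i.e. raising sensor resolution) shifts detection toward weaker pulses, which is the trade-off the study quantifies.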
Full article
(This article belongs to the Special Issue Advanced Algorithms for Biomedical Data Analysis)
Open Access Article
Prediction of Percutaneous Coronary Intervention from Clinical and ECG Data Using Machine Learning: A Retrospective Single-Center Observational Study
by
Zhadyra Alimbayeva, Chingiz Alimbayev, Kassymbek Ozhikenov, Kairat Karibayev, Aiman Ozhikenova, Ussen Shylmyrza and Dilfuza Akhmedova
Algorithms 2026, 19(5), 367; https://doi.org/10.3390/a19050367 - 6 May 2026
Abstract
The aim of this study was to evaluate the feasibility of predicting percutaneous coronary intervention (PCI) based on clinical, laboratory, and electrocardiographic data available at various stages of hospitalization. A retrospective single-center study was conducted, including 137 patients with suspected coronary artery disease. Performance of PCI during the current hospitalization was taken as the endpoint. Taking into account when each variable becomes available during hospitalization, three feature sets were formed: basic (SAFE), including indicators available at admission; clinical (CLINICAL); and extended (EXTENDED), supplemented with glycemic parameters. Logistic regression, random forest, and gradient boosting were used to build the models. The assessment was carried out using repeated stratified cross-validation (5 × 10). The main metrics were ROC-AUC, PR-AUC, accuracy, and F1-score. The models demonstrated moderate predictive ability. The basic model (SAFE) showed a ROC-AUC of 0.734 ± 0.092, while the best results were achieved using an extended model based on a random forest (ROC-AUC 0.755 ± 0.079). The addition of glycemic parameters provided a moderate improvement in prediction quality. In the logistic regression, the most significant predictor was the presence of type 2 diabetes mellitus (OR = 7.36; p < 0.001). The results indicate the potential of non-invasive data for assessing the likelihood of PCI in the early stages of hospitalization. However, the models show only moderate accuracy and require further validation on larger, independent samples.
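The headline metric above, ROC-AUC, can be computed without any ML library via the rank-sum (Mann–Whitney U) identity: the AUC equals the probability that a random positive case is scored above a random negative one. A sketch, not the study's evaluation code:

```python
def roc_auc(y_true, scores):
    """ROC-AUC via the rank-sum (Mann-Whitney U) identity; tied scores
    receive mid-ranks so the estimate matches the trapezoidal ROC area."""
    pairs = sorted(zip(scores, y_true))
    rank_of = [0.0] * len(pairs)
    i = 0
    while i < len(pairs):                 # assign 1-based mid-ranks to ties
        j = i
        while j < len(pairs) and pairs[j][0] == pairs[i][0]:
            j += 1
        mid = (i + j + 1) / 2             # average of ranks i+1 .. j
        for k in range(i, j):
            rank_of[k] = mid
        i = j
    n_pos = sum(y_true)
    n_neg = len(y_true) - n_pos
    pos_rank_sum = sum(r for r, (_, y) in zip(rank_of, pairs) if y == 1)
    return (pos_rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```

In a repeated stratified cross-validation scheme like the study's 5 × 10, this would be evaluated on each held-out fold and the fold scores averaged.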
Full article
Open Access Article
Chainguard: A Blockchain-Based Aid Distribution System with Mobile Application and System Architecture Design
by
Enes Rayman, Serra Öğütcen, Okan Yaman and Yusuf Murat Erten
Algorithms 2026, 19(5), 366; https://doi.org/10.3390/a19050366 - 5 May 2026
Abstract
Natural disasters are devastating occurrences that have a major influence on the well-being of numerous individuals on a global scale. The primary goal of this study is to facilitate the rapid, transparent, and safe delivery of various forms of aid, such as food and clothing, to people in disaster areas. For this purpose, a system has been established using blockchain technology in cooperation with institutions and humanitarian organizations. This system is designed to be accountable and reliable; it supervises all processes from the source of aid materials to their distribution while protecting the personal information of disaster victims. The assistance process is improved using Smart Contracts in order to provide fast, effective, and coordinated assistance. Unlike existing humanitarian frameworks that rely on permissionless networks such as Bitcoin or Ethereum, this study proposes Hyperledger Fabric to ensure beneficiary privacy and eliminate per-transaction fees for end-users, thereby offering a more sustainable economic model for high-frequency aid distribution compared to public blockchains. The proposed system (Chainguard) addresses the 'efficiency gap' in the current literature through a JSON Web Token (JWT)-based authentication layer. The results showed that Chainguard achieves a stable throughput of ~180 TPS with an end-to-end latency of less than 1.5 s, outperforming traditional heavy-cryptography models in terms of scalability and resource efficiency during real-time disaster response.
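The JWT-based authentication layer mentioned above can be illustrated with a minimal HS256-style token built from Python's standard library. The claim names and secret are made up, and this is a sketch rather than Chainguard's implementation — a production system would also set and check standard claims such as `exp` and `iss`.

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    """Base64url without padding, as used in compact JWT serialization."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(payload: dict, secret: bytes) -> str:
    """Create a compact HS256 JWT: header.payload.signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_token(token: str, secret: bytes):
    """Return the payload if the signature checks out, else None."""
    try:
        header, body, sig = token.split(".")
    except ValueError:
        return None
    expected = _b64url(hmac.new(secret, f"{header}.{body}".encode(),
                                hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):   # constant-time comparison
        return None
    pad = "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(body + pad))

secret = b"demo-secret"
tok = issue_token({"beneficiary": "anon-042", "role": "recipient"}, secret)
claims = verify_token(tok, secret)
```

Because verification needs only the shared secret, an aid-distribution API can authenticate requests without a per-request blockchain transaction — the efficiency argument the abstract alludes to.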
Full article
(This article belongs to the Special Issue Blockchain and Big Data Analytics: AI-Driven Data Science)
Topics
Topic in
Actuators, Algorithms, BDCC, Future Internet, JMMP, Machines, Robotics, Systems
Smart Product Design and Manufacturing on Industrial Internet
Topic Editors: Pingyu Jiang, Jihong Liu, Ying Liu, Jihong Yan
Deadline: 30 June 2026
Topic in
Algorithms, Data, Earth, Geosciences, Mathematics, Land, Water, IJGI
Applications of Algorithms in Risk Assessment and Evaluation
Topic Editors: Yiding Bao, Qiang Wei
Deadline: 31 July 2026
Topic in
Algorithms, Applied Sciences, Electronics, MAKE, AI, Software
Applications of NLP, AI, and ML in Software Engineering
Topic Editors: Affan Yasin, Javed Ali Khan, Lijie Wen
Deadline: 30 August 2026
Topic in
Agriculture, Energies, Vehicles, Sensors, Sustainability, Urban Science, Applied Sciences, Algorithms
Sustainable Energy Systems
Topic Editors: Luis Hernández-Callejo, Carlos Meza Benavides, Jesús Armando Aguilar Jiménez
Deadline: 31 October 2026
Special Issues
Special Issue in
Algorithms
Bio-Inspired Algorithms: 2nd Edition
Guest Editors: Sándor Szénási, Gábor Kertész
Deadline: 30 May 2026
Special Issue in
Algorithms
Evolution of Algorithms in the Era of Generative AI
Guest Editors: Domenico Ursino, Gianluca Bonifazi, Enrico Corradini, Michele Marchetti
Deadline: 31 May 2026
Special Issue in
Algorithms
Machine Learning Algorithms and Optimization in the Digital Transition (2nd Edition)
Guest Editors: Mateus Mendes, Balduíno Mateus, Nuno Lavado
Deadline: 31 May 2026
Special Issue in
Algorithms
Algorithms and Innovations for Real-Time Processing in Streaming Systems and Applications
Guest Editor: Vishnu S. Pendyala
Deadline: 31 May 2026
Topical Collections
Topical Collection in
Algorithms
Parallel and Distributed Computing: Algorithms and Applications
Collection Editors: Charalampos Konstantopoulos, Grammati Pantziou
Topical Collection in
Algorithms
Feature Papers in Combinatorial Optimization, Graph, and Network Algorithms
Collection Editor: Roberto Montemanni
Topical Collection in
Algorithms
Feature Papers in Algorithms for Multidisciplinary Applications
Collection Editor: Francesc Pozo
Topical Collection in
Algorithms
Feature Papers in Randomized, Online and Approximation Algorithms
Collection Editor: Frank Werner
