Journal Description
Algorithms is a peer-reviewed, open access journal which provides an advanced forum for studies related to algorithms and their applications, and is published monthly online by MDPI.
- Open Access — free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Theory and Methods) / CiteScore - Q1 (Numerical Analysis)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 19.2 days after submission; accepted papers are published within 3.7 days (median values for papers published in this journal in the second half of 2025).
- Testimonials: See what our editors and authors say about Algorithms.
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Artificial Intelligence: AI, AI in Medicine, Algorithms, BDCC, MAKE, MTI, Stats, Virtual Worlds, and Computers.
Impact Factor: 2.1 (2024)
5-Year Impact Factor: 2.0 (2024)
Latest Articles
Prediction of Percutaneous Coronary Intervention from Clinical and ECG Data Using Machine Learning: A Retrospective Single-Center Observational Study
Algorithms 2026, 19(5), 367; https://doi.org/10.3390/a19050367 - 6 May 2026
Abstract
The aim of this study was to evaluate the feasibility of predicting percutaneous coronary intervention (PCI) from clinical, laboratory, and electrocardiographic data available at various stages of hospitalization. A retrospective single-center study was conducted, including 137 patients with suspected coronary artery disease. Performance of PCI during the current hospitalization was taken as the endpoint. Taking the temporal availability of data into account, three feature sets were formed: basic (SAFE), including indicators available at admission; clinical (CLINICAL); and extended (EXTENDED), supplemented with glycemic parameters. Logistic regression, random forest, and gradient boosting were used to build the models. Assessment was carried out using repeated stratified cross-validation (5 × 10). The main metrics were ROC-AUC, PR-AUC, accuracy, and F1-score. The models demonstrated moderate predictive ability. The basic model (SAFE) showed a ROC-AUC of 0.734 ± 0.092, while the best results were achieved with an extended model based on a random forest (ROC-AUC 0.755 ± 0.079). The addition of glycemic parameters provided a moderate improvement in prediction quality. In the logistic regression, the most significant predictor was the presence of type 2 diabetes mellitus (OR = 7.36; p < 0.001). The results indicate the potential of non-invasive data for assessing the likelihood of PCI in the early stages of hospitalization. However, the models show moderate accuracy and require further validation on larger, independent samples.
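The 5 × 10 repeated stratified cross-validation protocol above maps directly onto standard tooling; a minimal sketch with scikit-learn, using placeholder data in place of the study's clinical features and logistic regression as one of the three reported model families:

```python
# Minimal sketch of the 5 x 10 repeated stratified CV protocol described above.
# X, y are placeholders for the (unavailable) clinical feature matrix and the
# PCI endpoint; logistic regression is one of the three reported model families.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=137, n_features=20, random_state=0)  # placeholder data

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")

# Mean +/- std over the 50 folds, matching the "ROC-AUC 0.734 +/- 0.092" style.
print(f"ROC-AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```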
Open Access Article
Chainguard: A Blockchain-Based Aid Distribution System with Mobile Application and System Architecture Design
by Enes Rayman, Serra Öğütcen, Okan Yaman and Yusuf Murat Erten
Algorithms 2026, 19(5), 366; https://doi.org/10.3390/a19050366 - 5 May 2026
Abstract
Natural disasters are devastating occurrences that have a major influence on the well-being of numerous individuals on a global scale. The primary goal of this study is to facilitate the rapid, transparent, and safe delivery of various aid, such as food and clothing, to people in disaster areas. For this purpose, a system has been established using blockchain technology in cooperation with institutions and humanitarian organizations. This system is designed to be accountable and reliable; it supervises all processes from the source of aid materials to their distribution while protecting the personal information of disaster victims. The assistance process is improved using Smart Contracts in order to provide fast, effective, and coordinated assistance. Unlike existing humanitarian frameworks that rely on permissionless networks such as Bitcoin or Ethereum, this study proposes Hyperledger Fabric to ensure beneficiary privacy and eliminate per-transaction fees for end-users, thereby offering a more sustainable economic model for high-frequency aid distribution than public blockchains. The proposed system (Chainguard) addresses the 'efficiency gap' in the current literature through a JSON Web Token (JWT)-based authentication layer. The results showed that Chainguard achieves a stable throughput of ~180 TPS with an end-to-end latency of less than 1.5 s, outperforming traditional heavy-cryptography models in terms of scalability and resource efficiency during real-time disaster response.
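A minimal sketch of a JWT-based authentication layer of the kind the abstract mentions, using the PyJWT library; the secret, claim names, and expiry policy are illustrative assumptions, not Chainguard's actual scheme:

```python
# Minimal sketch of a JWT authentication layer (PyJWT). The secret, claim
# names, and 30-minute expiry are illustrative assumptions only.
import datetime
import jwt  # pip install PyJWT

SECRET = "replace-with-a-real-secret"

def issue_token(beneficiary_id: str) -> str:
    """Sign a short-lived token identifying an aid beneficiary (hypothetical claim)."""
    payload = {
        "sub": beneficiary_id,
        "exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(minutes=30),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> str:
    """Validate signature and expiry; return the beneficiary id."""
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    return claims["sub"]

token = issue_token("beneficiary-123")
print(verify_token(token))
```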
(This article belongs to the Special Issue Blockchain and Big Data Analytics: AI-Driven Data Science)
Open Access Article
Distribution Network Planning Considering Harmonics Based on a Parallel Genetic Algorithm Using Message Passing Interface
by Vincent Roberge and Mohammed Tarbouchi
Algorithms 2026, 19(5), 365; https://doi.org/10.3390/a19050365 - 5 May 2026
Abstract
This paper presents a parallel genetic algorithm (GA) for the planning of power distribution networks considering harmonics. Power distribution systems are generally operated in a radial configuration, supplemented by tie switches that enable network reconfiguration during unexpected outages or planned maintenance. They can also include distributed generators (DGs), capacitor banks (CBs), and soft open points (SOPs) to lower distribution losses and improve the voltage profile. Some of the loads and DG units may be nonlinear, generating harmonic currents in the system, polluting the power, and increasing losses. This paper makes use of a parallel GA to find an optimized network configuration and the optimized location and sizing of DGs, CBs, and SOPs to lower real power distribution losses while considering harmonics and the physical constraints of the network. The proposed algorithm uses a solution encoding based on the minimum spanning tree to guarantee the radial topology of candidate solutions. It uses the backward–forward power flow method to compute the fundamental voltages and a decoupled harmonic power flow for the harmonic components. The algorithm is parallelized on a small computer cluster using the Message Passing Interface (MPI) to reduce its execution time. The proposed solver is validated on distribution systems ranging from 16 to 880 buses. The results show that simultaneously optimizing the topology, the DGs, the CBs, and the SOPs reduces power losses by 37% to 93%, improving the overall efficiency of the distribution system. The parallelization using MPI allows for a 90.9× speedup on a 96-core cluster.
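A minimal sketch of the parallelization pattern, assuming mpi4py: every rank holds the same population and scores only its slice, with a toy fitness standing in for the paper's harmonic power-flow evaluation:

```python
# Minimal sketch of fitness evaluation parallelized with mpi4py: each rank
# scores a slice of the population and the slices are gathered everywhere.
# Run with e.g. `mpiexec -n 4 python ga_mpi.py`.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

POP, GENES = 96, 32
rng = np.random.default_rng(42)            # same seed -> identical population on all ranks
population = rng.random((POP, GENES))

def fitness(ind):                          # placeholder for the harmonic power-flow evaluation
    return -np.sum((ind - 0.5) ** 2)

# Each rank scores a contiguous chunk; allgather reassembles the full vector.
chunk = np.array_split(np.arange(POP), size)[rank]
local = [fitness(population[i]) for i in chunk]
scores = np.concatenate(comm.allgather(np.asarray(local)))

if rank == 0:
    print("best fitness:", scores.max())
```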
(This article belongs to the Special Issue Swarm Intelligence and Evolutionary Algorithms for Real World Applications (3rd Edition))
Open Access Article
QuantFT-VL: Harmonizing Quantization and LoRA for Efficient Mobile Vision–Language Model Fine-Tuning
by Fangyuan Jin, Hui Lin, Lu Zhang and Yiwei Chen
Algorithms 2026, 19(5), 364; https://doi.org/10.3390/a19050364 - 4 May 2026
Abstract
Vision–language models (VLMs) are increasingly deployed in resource-constrained environments, yet efficient fine-tuning remains challenging because post-training quantization often degrades the effectiveness of low-rank adaptation. This paper revisits that mismatch in the context of MobileVLM-1.7B and presents QuantFT-VL, a novel initialization strategy that follows the quantization phase and aligns seamlessly with the LoRA technique. The key idea is to initialize LoRA using a low-rank approximation of the quantization residual instead of the default zero-initialization used in QLoRA-style pipelines. After quantizing a pretrained weight matrix W into Q, we compute the residual W − Q and use truncated singular value decomposition to initialize the LoRA factors A and B so that the starting adapted weight Q + ABᵀ better matches the full-precision model. This residual-aware initialization reduces the discrepancy introduced by quantization and leads to faster and more stable optimization. Experiments on six standard VLM benchmarks show that QuantFT-VL consistently improves over QLoRA and recovers performance close to or better than full-precision LoRA in the best setting. On two RTX 3090 GPUs, QuantFT-VL improves the average benchmark score by 3.27 percentage points over QLoRA while preserving the memory and speed advantages of quantized fine-tuning.
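The residual-aware initialization can be stated in a few lines of NumPy; this sketch uses a crude uniform quantizer as a stand-in for the actual quantization scheme:

```python
# Minimal sketch of residual-aware LoRA initialization: truncate the SVD of
# the quantization residual W - Q at the LoRA rank r and split it into
# factors A, B so that Q + A @ B.T starts close to the full-precision W.
# The 4-bit-style uniform rounding below is a placeholder quantizer.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512)).astype(np.float32)   # pretrained weight (placeholder)

step = (W.max() - W.min()) / 15.0                        # crude 4-bit uniform quantizer
Q = np.round((W - W.min()) / step) * step + W.min()

r = 16                                                   # LoRA rank
U, s, Vt = np.linalg.svd(W - Q, full_matrices=False)     # SVD of the residual
A = U[:, :r] * np.sqrt(s[:r])                            # (out, r)
B = Vt[:r, :].T * np.sqrt(s[:r])                         # (in, r), so A @ B.T ~ W - Q

for name, start in [("zero-init (QLoRA)", Q), ("residual-init", Q + A @ B.T)]:
    print(name, "||W - start||_F =", np.linalg.norm(W - start).round(2))
```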
Open Access Article
A Reproducible Benchmarking Methodology for Machine Learning Hardware: Performance–Energy Trade-Offs from GPUs to Apple Silicon
by Oscar H. Sierra-Herrera, Mario Eduardo González Niño, Edwin Francis Cárdenas Correa, Jersson X. Leon-Medina and Francesc Pozo
Algorithms 2026, 19(5), 363; https://doi.org/10.3390/a19050363 - 4 May 2026
Abstract
While hardware selection is widely recognized as a key factor in machine learning performance, systematic and reproducible evaluation across heterogeneous and accessible platforms remains limited, particularly when jointly considering execution time, energy consumption, stability, and cost-efficiency. This work presents a unified and fully reproducible benchmarking framework for supervised learning, designed to enable controlled and comparable evaluation across diverse hardware environments. The proposed methodology enforces consistent training pipelines, fixed hyperparameter configurations, and repeated executions to ensure statistical reliability, while incorporating performance metrics such as execution time, power consumption, and energy usage, as well as performance-per-dollar. The framework is validated on a representative set of platforms, including CUDA-enabled GPUs, Apple Silicon (CPU/GPU), x86 processors, ARM-based embedded systems, and cloud-based environments, using convolutional, recurrent (RNN, LSTM, BiLSTM), and tree-based (XGBoost) models. The results reveal that hardware efficiency is strongly model-dependent. GPUs provide the highest computational performance and stability for parallel workloads, whereas Apple Silicon achieves superior energy efficiency with competitive execution times, particularly for recurrent architectures. The batch size analysis shows that performance can vary significantly depending on workload configuration, especially on CPU-based platforms, while epoch-based evaluation confirms that the measured performance reflects steady-state behavior rather than initialization overhead. In contrast, conventional CPUs and embedded systems exhibit significant scalability limitations for deep learning training, although they remain competitive for tree-based methods such as XGBoost, which demonstrates near hardware-independent predictive performance. These findings highlight the limitations of generalized hardware selection criteria and emphasize the need for model-aware and hardware-aware benchmarking. The proposed framework offers a practical and extensible foundation for reproducible, hardware-aware evaluation of machine learning systems, supporting informed decision-making in research, deployment, and cost-constrained scenarios.
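The core of the protocol, fixed workload, discarded warm-up, repeated timed executions, summary statistics, can be sketched as below; energy and cost metrics would be layered on top via platform-specific counters, which are not shown:

```python
# Minimal sketch of the repeated-execution measurement protocol: a warm-up
# run is discarded so steady-state behavior is measured, then several timed
# repetitions give a mean and a stability estimate.
import statistics
import time

def benchmark(train_step, repeats=10, warmup=1):
    """Time `train_step` repeatedly; return mean and stdev in seconds."""
    for _ in range(warmup):
        train_step()                      # discard cold-start / initialization overhead
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        train_step()
        times.append(time.perf_counter() - t0)
    return statistics.mean(times), statistics.stdev(times)

def dummy_workload():                      # placeholder for one training epoch
    sum(i * i for i in range(200_000))

mean_s, std_s = benchmark(dummy_workload)
print(f"{mean_s:.4f} s +/- {std_s:.4f} s over 10 runs")
```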
(This article belongs to the Collection Feature Papers in Algorithms for Multidisciplinary Applications)
Open Access Article
Multiple String Pattern Matching Algorithm Using Multi-Character Inverted Lists
by Chouvalit Khancome
Algorithms 2026, 19(5), 362; https://doi.org/10.3390/a19050362 - 4 May 2026
Abstract
Multiple string matching is a fundamental operation in real-time analytics, cybersecurity, bioinformatics, and large-scale information retrieval. Nevertheless, existing approaches continue to face inherent trade-offs among preprocessing efficiency, verification overhead, and support for dynamic pattern updates, particularly in large and continuously evolving environments. This paper presents MMIVL, a high-performance algorithm founded on the multi-character inverted list (m-CIVL), a unified and inherently dynamic indexing framework for pattern management. By integrating positional information, termination semantics, and pattern associations within a single structure, m-CIVL enables direct matching without requiring a separate verification stage. MMIVL achieves a preprocessing complexity of O(|P|/s), a search complexity of O(|T| + n_occ), and an update complexity of O(|p|/s), where s denotes the segment length. Extensive experiments on synthetic and real-world datasets demonstrate that MMIVL consistently outperforms representative baselines, with especially strong gains in large-scale scenarios, while maintaining stable performance and favorable memory efficiency. Overall, these results establish m-CIVL as an effective, scalable, and practically viable solution that unifies efficient preprocessing, high-throughput searching, and dynamic update capability for modern multiple string-matching applications.
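A minimal sketch of the underlying inverted-list idea, patterns cut into fixed-length segments that map back to (pattern id, offset) pairs; m-CIVL additionally stores termination semantics and pattern associations so that the confirmation pass below becomes unnecessary:

```python
# Minimal sketch of multi-pattern matching via an inverted list of s-grams.
# Each pattern is cut into fixed-length segments mapping to (pattern id,
# offset); candidate alignments found while scanning the text are confirmed
# in place. This shows the inverted-list idea only, not m-CIVL itself.
from collections import defaultdict

def build_index(patterns, s=2):
    index = defaultdict(list)                     # segment -> [(pattern id, offset)]
    for pid, p in enumerate(patterns):
        for off in range(0, len(p) - s + 1, s):
            index[p[off:off + s]].append((pid, off))
    return index

def search(text, patterns, index, s=2):
    hits = set()
    for i in range(len(text) - s + 1):
        for pid, off in index.get(text[i:i + s], ()):
            start = i - off
            if start >= 0 and text.startswith(patterns[pid], start):
                hits.add((start, patterns[pid]))  # (position, matched pattern)
    return sorted(hits)

patterns = ["ACGT", "GTGA", "TTAC"]
idx = build_index(patterns)
print(search("TTACGTGACGT", patterns, idx))
# [(0, 'TTAC'), (2, 'ACGT'), (4, 'GTGA'), (7, 'ACGT')]
```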
(This article belongs to the Special Issue Algorithmic Innovations: Bridging Theoretical Foundations and Practical Applications (2nd Edition))
Open Access Article
Bayesian Optimization for Categorical and Mixed Variables Using a Multinomial Logit Surrogate
by Muhammad Amir Saeed and Antonio Candelieri
Algorithms 2026, 19(5), 361; https://doi.org/10.3390/a19050361 - 4 May 2026
Abstract
Bayesian optimization (BO) is a widely used framework for optimizing expensive black-box functions. Most BO methods rely on Gaussian process (GP) surrogates, which perform well in continuous domains but encounter difficulties when decision variables include categorical or mixed discrete–continuous components. In particular, GP-based approaches typically require ad hoc numerical encodings of categorical variables that may fail to capture the structure of discrete decision spaces. In this work, we propose MNL-BO (Multinomial Logit Bayesian Optimization), a preference-based Bayesian optimization framework that replaces the GP surrogate with a multinomial logit (MNL) model trained from pairwise preference comparisons. The resulting surrogate provides a natural and interpretable representation of categorical alternatives while allowing continuous, discrete, and categorical variables to be handled within a unified optimization framework. The predictive utility estimates and uncertainty indicators generated by the MNL model are employed to formulate acquisition functions that reconcile exploration with exploitation. The proposed methodology is evaluated on three progressively complex optimization challenges: a purely categorical benchmark, a combinatorial Traveling Salesman problem, and a constrained mixed-variable engineering design problem concerning material selection in pressure vessel optimization. Multi-run tests provide consistent advantages over random search and exhibit stable convergence behavior across diverse random initializations. In addition to heuristic baselines such as local search and classical metaheuristics, we also compare against tree-based Bayesian optimization baselines inspired by the Sequential Model-based Algorithm Configuration (SMAC) framework. The results indicate that the proposed MNL-BO method achieves competitive performance under comparable evaluation budgets while providing an interpretable probabilistic surrogate for categorical decision spaces. These findings suggest that preference-based surrogate modeling provides a practical and flexible alternative for Bayesian optimization in categorical and mixed-variable optimization problems.
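A minimal sketch of the surrogate idea, fitting utilities for categorical alternatives from pairwise preference comparisons with a Bradley–Terry-style gradient ascent; acquisition functions and mixed variables are omitted:

```python
# Minimal sketch of a preference-based logit surrogate: hidden utilities are
# recovered from simulated pairwise duels by maximizing the logit likelihood.
# This illustrates the modeling idea only, not the MNL-BO acquisition loop.
import numpy as np

rng = np.random.default_rng(1)
n_alt = 5
true_u = rng.standard_normal(n_alt)                 # hidden utilities (placeholder)

# Simulated duels: (winner, loser) sampled with logit choice probabilities.
duels = []
for _ in range(200):
    i, j = rng.choice(n_alt, size=2, replace=False)
    p_i = 1.0 / (1.0 + np.exp(true_u[j] - true_u[i]))
    duels.append((i, j) if rng.random() < p_i else (j, i))

u = np.zeros(n_alt)                                 # estimated utilities
for _ in range(500):                                # gradient ascent on the log-likelihood
    grad = np.zeros(n_alt)
    for w, l in duels:
        p_w = 1.0 / (1.0 + np.exp(u[l] - u[w]))
        grad[w] += 1.0 - p_w                        # d/du_w log sigma(u_w - u_l)
        grad[l] -= 1.0 - p_w
    u += 0.5 * grad / len(duels)

print("estimated ranking:", np.argsort(-u), "true ranking:", np.argsort(-true_u))
```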
Open Access Article
Optimization of Gabor Filters Based on Quaternions for Image Preprocessing in the Automated Detection of Bemisia tabaci in Yellow Traps
by Ramiro Esquivel-Felix, Mireya Moreno-Lucio, Celina Lizeth Castañeda-Miranda, Héctor Alonso Guerrero-Osuna, Rodrigo Castañeda-Miranda, Carlos A. Olvera-Olvera, Ma. del Rosario Martínez-Blanco and Luis Octavio Solís-Sánchez
Algorithms 2026, 19(5), 360; https://doi.org/10.3390/a19050360 - 4 May 2026
Abstract
In precision agriculture, identifying pests such as the whitefly (Bemisia tabaci) is a significant challenge, as precise knowledge of these insects is essential for developing effective Integrated Pest Management (IPM) strategies. Automated daily monitoring within IPM programs optimizes the diagnostic registration stage by reducing logistical expenses and manual errors, enabling early pest treatment interventions and providing quantitative data for informed decision-making. In this study, an image bank was processed using a Quaternionic Gabor Filter (QGF) algorithm to highlight textural features through hypercomplex correlation. The highlighted objects were then processed by a YOLOv8 pretrained model to identify Bemisia tabaci. Experimental results demonstrate that this combination achieves a precision of 0.868 and an mAP@0.5 of 0.950, while a PSNR of 34.10 dB ensures the structural integrity of the enhanced images. Although the total execution time averages 2.3 s per image due to preprocessing complexity, the GPU inference time of 10.3 ms confirms the potential for high-speed detection. This approach significantly enhanced the morphological features of Bemisia tabaci, increasing the robustness of the detection model and narrowing down processing conditions for yellow trap samples to strengthen precision in the semi-arid regions of Zacatecas, Mexico.
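A conventional grayscale Gabor bank, sketched with OpenCV, shows the preprocessing principle; the quaternionic formulation extends this to treat color channels jointly via hypercomplex correlation, which the sketch does not attempt:

```python
# Minimal sketch of Gabor-based texture enhancement: a small bank of oriented
# Gabor kernels is applied and the strongest per-pixel response is kept.
# Kernel parameters are illustrative, not the paper's tuned values.
import cv2
import numpy as np

def gabor_enhance(gray, ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 4):          # 4 orientations
        kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kern))
    return np.max(responses, axis=0)                      # strongest response per pixel

gray = np.random.randint(0, 255, (128, 128), dtype=np.uint8)  # placeholder trap image
enhanced = gabor_enhance(gray)
print(enhanced.shape, enhanced.dtype)
```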
(This article belongs to the Special Issue Advances in Computer Vision: Emerging Trends and Applications)
Open Access Article
Eigenvalue Bounds for Symmetric, Multiple Saddle-Point Matrices with SPD Preconditioners
by Luca Bergamaschi and Michele Bergamaschi
Algorithms 2026, 19(5), 359; https://doi.org/10.3390/a19050359 - 4 May 2026
Abstract
We derive the eigenvalue bounds for symmetric block-tridiagonal multiple saddle-point systems preconditioned with the symmetric positive definite (SPD) preconditioner proposed by J. Pearson and A. Potschka in 2024 and further studied by L. Bergamaschi and coauthors, and for double saddle-point problems with inexact Schur complement matrices. The analysis applies to an arbitrary number of blocks. We validate the proposed estimates with both synthetic and realistic test problems, and show the good performance of the proposed preconditioner under the condition that the Schur complements are accurately approximated.
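A quick numerical illustration of the classical 2 × 2 analogue (Murphy–Golub–Wathen): with the exact Schur complement, the SPD block-diagonal preconditioner clusters the spectrum at 1 and (1 ± √5)/2. The paper's bounds generalize this picture to block-tridiagonal multiple saddle points and inexact Schur complements, which this sketch does not cover:

```python
# Numerical check of the Murphy-Golub-Wathen clustering for a 2x2 saddle-point
# matrix preconditioned by the SPD block-diagonal diag(A, S) with exact S.
import numpy as np

rng = np.random.default_rng(3)
n, m = 40, 15
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                  # SPD (1,1) block
B = rng.standard_normal((m, n))              # constraint block

K = np.block([[A, B.T], [B, np.zeros((m, m))]])          # saddle-point matrix
S = B @ np.linalg.solve(A, B.T)                          # exact Schur complement
P = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), S]])

eigs = np.sort(np.linalg.eigvals(np.linalg.solve(P, K)).real)
print(np.unique(eigs.round(3)))              # ~ {(1-sqrt(5))/2, 1, (1+sqrt(5))/2}
```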
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
Open Access Article
Explainability as a Structural Property: An Empirical Analysis of Rashomon Sets and Pareto Fronts
by Roberto Stevens Porto Solano, Antonio Berlanga de Jesús, José M. Molina López and Yair Rivera Julio
Algorithms 2026, 19(5), 358; https://doi.org/10.3390/a19050358 - 4 May 2026
Abstract
While most current work on interpretable models has centered on post hoc explainability of individual predictive models, the structure of the hypothesis space from which such models are drawn has been largely neglected. This paper proposes a contrasting perspective in which explainability is treated not as an attribute of a single solution but as a structural property of the model space. By combining Rashomon set analysis with Pareto-based performance–model complexity trade-offs, we formulate a computational framework for identifying near-optimal and structurally simple models. A performance–model complexity trade-off landscape is constructed by systematically generating models under controlled complexity bounds and extracting Pareto-optimal solutions. The results show that explainability can emerge as a regional property of hypothesis spaces in which multiple interpretable models achieve competitive predictive performance. This perspective supports the identification of robust and auditable predictive solutions and complements traditional explainability approaches centered on isolated models. Cross-dataset replication on Wine (UCI) and Vehicle (UCI) confirms the generalizability of these findings.
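A minimal sketch of the two ingredients combined here: keep every model within ε of the best score (the Rashomon set), then extract the Pareto front over a complexity–accuracy trade-off. Decision trees of varying size stand in for the controlled-complexity model family, and the complexity proxy is illustrative:

```python
# Minimal sketch: (1) build the Rashomon set of models within epsilon of the
# best cross-validated accuracy, (2) extract the Pareto front over the
# (complexity, accuracy) trade-off. Wine (UCI) matches the abstract's
# replication dataset; the complexity proxy is a crude illustration.
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)
candidates = []                                           # (complexity, accuracy)
for depth in range(1, 11):
    for leaves in (4, 8, 16, 32):
        clf = DecisionTreeClassifier(max_depth=depth, max_leaf_nodes=leaves, random_state=0)
        acc = cross_val_score(clf, X, y, cv=5).mean()
        candidates.append((depth * leaves, acc))          # crude complexity proxy

best = max(a for _, a in candidates)
rashomon = [(c, a) for c, a in candidates if a >= best - 0.02]   # epsilon = 0.02

# Pareto front: keep models not dominated in both complexity and accuracy.
pareto = [(c, a) for c, a in rashomon
          if not any((c2 < c and a2 >= a) or (c2 <= c and a2 > a)
                     for c2, a2 in rashomon)]
print(sorted(pareto))
```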
(This article belongs to the Special Issue Explainable AI: Advances in Interpretability Algorithms and Applications)
Open Access Article
An Agricultural Product Price Prediction Model Based on Quadratic Clustering Decomposition and TOC-Optimized Deep Learning
by Fengkai Ye, Ruoqian Li, Danping Wang and Mengyang Li
Algorithms 2026, 19(5), 357; https://doi.org/10.3390/a19050357 - 3 May 2026
Abstract
Accurate forecasting of agricultural product prices is crucial for informed decision-making in agricultural markets; however, such time series are inherently characterized by non-stationarity, multi-scale dynamics, and substantial noise, posing significant challenges to conventional methods. To overcome these limitations, this study proposes a novel hybrid framework, termed TOC-CNN-BiLSTM-SA, built upon a "quadratic decomposition–clustering–optimization" paradigm. Specifically, a composite CEEMDAN–K-means++–VMD approach is first employed to hierarchically decompose the raw price series via coarse decomposition, feature clustering, and refined decomposition, enabling effective noise suppression and multi-scale feature extraction. Subsequently, a deep learning architecture integrating Convolutional Neural Networks (CNNs), Bidirectional Long Short-Term Memory networks (BiLSTM), and a self-attention mechanism is developed, where CNN captures local patterns, BiLSTM models bidirectional temporal dependencies, and the attention mechanism enhances global feature representation. Furthermore, the Tornado Optimizer with Coriolis force (TOC) is introduced to adaptively tune key hyperparameters, thereby improving model robustness and generalization capability. Empirical results based on wheat price data from Henan Province, China, demonstrate that the proposed model achieves outstanding predictive performance, with RMSE, MAE, MAPE, and R² values of 4.425, 3.9372, 0.16%, and 99.97%, respectively, significantly outperforming existing benchmark models. These results indicate that the proposed framework effectively captures complex price dynamics and offers a reliable and practical solution for agricultural price forecasting.
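A minimal sketch of the clustering stage between the two decompositions, assuming scikit-learn: IMFs from the coarse decomposition are grouped by K-means++ on simple per-IMF features, and selected clusters would then be re-decomposed by VMD. Sine mixtures stand in for CEEMDAN outputs, and the feature choice is illustrative rather than the paper's:

```python
# Minimal sketch of the feature-clustering stage: group intrinsic mode
# functions (IMFs) with K-means++ on per-IMF features (energy, zero-crossing
# rate). The IMFs here are synthetic sine mixtures, not CEEMDAN output.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
t = np.linspace(0, 10, 1000)
imfs = np.array([np.sin(2 * np.pi * f * t) + 0.05 * rng.standard_normal(t.size)
                 for f in (0.2, 0.5, 1.0, 4.0, 5.0, 12.0)])   # placeholder IMFs

energy = (imfs ** 2).mean(axis=1)                             # one feature vector per IMF
zcr = (np.diff(np.sign(imfs), axis=1) != 0).mean(axis=1)
features = np.column_stack([energy, zcr])

labels = KMeans(n_clusters=3, init="k-means++", n_init=10, random_state=0).fit_predict(features)
print("IMF cluster labels:", labels)          # similar-frequency IMFs land together
```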
Open Access Article
Trajectory-Based Behavioral Analytics for Blockchain Systems
by Francisco Javier Moreno Arboleda, Luzarait Cañas Quintero and Georgia Garani
Algorithms 2026, 19(5), 356; https://doi.org/10.3390/a19050356 - 2 May 2026
Abstract
Blockchain systems generate massive volumes of transactional data, yet most existing analytical approaches rely on query-based retrieval mechanisms that treat transactions as isolated records. In this paper, a trajectory-based framework for blockchain analysis is introduced where user activity is modeled as temporally ordered behavioral patterns. Four types of blockchain trajectories are formally defined: miner reward trajectories, sender value-and-fee trajectories, receiver value trajectories, and sender–receiver interaction trajectories. Unlike traditional query frameworks, trajectories are treated as first-class analytical objects, explicitly constructed and returned as outputs, thereby enabling structured temporal reasoning over blockchain behavior. To demonstrate the practicality of the approach, the proposed trajectory functions are implemented in Python 3.12 and experiments are conducted using real data from the Ethereum blockchain. Compared with conventional query-based approaches that return isolated transactions, the experimental results show that the proposed trajectory-based framework enables a more systematic identification of temporal behavioral patterns, including persistent miner dominance, recurrent zero-value interactions, sender–receiver role reversals, and sender dominance through the highest transferred values across several periods. The results show that trajectory-based modeling provides a systematic lens for uncovering temporal and structural regularities that are not readily observable through conventional query techniques. This work establishes a formal foundation for behavioral blockchain analytics and opens new research directions in centralization measurement, predictive modeling, and trajectory similarity analysis.
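A minimal sketch of trajectories as first-class objects: raw transactions are folded into per-sender, temporally ordered (timestamp, value, fee) sequences that can then be queried for patterns. Field names are illustrative, not the paper's schema:

```python
# Minimal sketch of sender value-and-fee trajectories built from raw
# transactions, plus one example pattern query over the trajectories.
from collections import defaultdict

transactions = [                                     # placeholder Ethereum-style records
    {"from": "0xA", "to": "0xB", "value": 2.0, "fee": 0.01, "ts": 100},
    {"from": "0xA", "to": "0xC", "value": 5.0, "fee": 0.02, "ts": 180},
    {"from": "0xB", "to": "0xA", "value": 0.0, "fee": 0.01, "ts": 240},
    {"from": "0xA", "to": "0xB", "value": 9.0, "fee": 0.03, "ts": 300},
]

def sender_trajectories(txs):
    """Build sender trajectories: timestamp-ordered (ts, value, fee) sequences."""
    traj = defaultdict(list)
    for tx in sorted(txs, key=lambda t: t["ts"]):
        traj[tx["from"]].append((tx["ts"], tx["value"], tx["fee"]))
    return dict(traj)

trajs = sender_trajectories(transactions)
# Example pattern query: senders whose transferred value rises monotonically.
for sender, steps in trajs.items():
    values = [v for _, v, _ in steps]
    if len(values) > 1 and all(a < b for a, b in zip(values, values[1:])):
        print(sender, "shows strictly increasing sent values:", values)
```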
Open Access Article
CausalAgent: A Hierarchical Graph-Enhanced Multi-Agent Framework for Causal Question Answering in Production Safety Accident Reports
by Tianyi Wang, Tao Shen, Zhiyuan Zhang, Shuangping Huang, Huiguo He, Qingguang Chen and Houqiang Yang
Algorithms 2026, 19(5), 355; https://doi.org/10.3390/a19050355 - 2 May 2026
Abstract
Accident reports provide a detailed account of environmental causes, unsafe human behaviors, and subsequent chain reactions. These records serve as essential resources for analyzing accident mechanisms and exploring potential risk patterns within production safety processes. Currently, Graph-based Retrieval-Augmented Generation (RAG), which integrates Large Language Models (LLMs) with Knowledge Graphs (KGs), has emerged as a leading approach for complex causal question answering over extensive unstructured accident documentation. However, the application of this technology in the production safety domain still encounters two primary challenges. First, knowledge graph construction using a single granularity fails to capture fine-grained case details and macro-level standard systems. Second, traditional one-step retrieval paradigms lack the capacity to track deep causal chains or interpret the complex logic of multi-factor coupling. To address these limitations, we propose CausalAgent, a hierarchical graph-enhanced multi-agent framework for causal question answering in production safety accident reports. This framework innovatively combines a Hierarchical Causal Graph (HC-Graph) and a Multi-Agent Collaborative Reasoning (MACR) mechanism. Specifically, the HC-Graph employs a two-layer architecture that links a fine-grained instance layer with a national standard causation layer to resolve conflicts in semantic granularity. The MACR mechanism converts complex natural language queries into executable structured queries and logic verification steps through the sequential cooperation of four specialized agents, namely the Graph Parsing Agent, the Problem Analysis Agent, the Query Generation Agent, and the Reasoning Insight Agent. CausalAgent enables in-depth mining of accident causation mechanisms and provides scientific, robust, and interpretable intelligent support for data-driven risk assessment and emergency decision-making. Experiments on real-world accident datasets demonstrate that CausalAgent achieves a query execution rate and a reasoning accuracy, outperforming the SOTA baseline by in terms of absolute accuracy.
(This article belongs to the Special Issue Intelligent Information Processing Methods in Interdisciplinary)
Open Access Review
A Survey of Machine Learning and Deep Learning for Financial Fraud Detection: Architectures, Data Modalities, and Real-World Deployment Challenges
by Spiros Thivaios, Georgios Kostopoulos, Antonia Stefani and Sotiris Kotsiantis
Algorithms 2026, 19(5), 354; https://doi.org/10.3390/a19050354 - 2 May 2026
Abstract
Financial fraud has become a critical challenge for modern financial systems due to the rapid growth of digital transactions, online banking services, and electronic payment platforms. Traditional rule-based fraud detection systems are increasingly inadequate in addressing the evolving and adaptive strategies employed by fraudsters. Consequently, Machine Learning (ML) and Deep Learning (DL) techniques have emerged as powerful tools for detecting fraudulent activities in large-scale financial datasets. This paper presents a comprehensive survey of ML/DL approaches for financial fraud detection. The survey systematically reviews existing research across multiple methodological paradigms, including classical supervised learning, anomaly detection, graph-based methods, deep neural networks, multimodal architectures, and cost-sensitive learning frameworks. Particular emphasis is placed on emerging techniques such as graph neural networks, transformer-based architectures, and federated learning approaches designed to address privacy and scalability challenges. In addition to reviewing model architectures, this work analyzes key challenges inherent to fraud detection systems, including extreme class imbalance, concept drift, adversarial behavior, data privacy constraints, and real-time deployment requirements. Furthermore, the survey examines evaluation methodologies, highlighting the limitations of commonly used metrics and discussing more realistic evaluation strategies that incorporate operational costs and risk management considerations. This paper also provides a structured taxonomy of fraud detection methods, comparative analyses of commonly used datasets, and a synthesis of current research trends. Finally, open challenges and promising research directions are identified, including adaptive learning systems, interpretable Artificial Intelligence models, graph-based behavioral modeling, and privacy-preserving collaborative fraud detection frameworks.
(This article belongs to the Special Issue AI-Driven Business Analytics Revolution)
Open Access Article
A QUBO-Driven Simulated Annealing Methodology for Solving the Shortest Path Problem in Urban Transportation Networks
by Isaac Oliva-González and Hugo Jiménez-Hernández
Algorithms 2026, 19(5), 352; https://doi.org/10.3390/a19050352 - 2 May 2026
Abstract
The shortest path problem presents formidable challenges in graph optimization, particularly within dense or large-scale networks, where traditional algorithms face serious scalability limitations. This paper presents a QUBO-based simulated annealing (QUBO-SA) methodology that uses a Quadratic Unconstrained Binary Optimization (QUBO) framework to encode path costs and structural constraints simultaneously. Our approach has been evaluated on synthetic graphs with controlled connectivity, varying from to , and on a real-world urban transportation network from Querétaro, Mexico, comprising nodes. We assess performance through probabilistic reliability indicators, notably the success probability , Time-to-Solution, and the relative runtime ratio , benchmarked against Dijkstra’s algorithm. In small synthetic instances ( ), the QUBO-SA method demonstrates outstanding success rates ( ) with runtimes on par with the deterministic baseline ( ). However, as the problem size increases, success probabilities diminish while computational overhead rises, with soaring from approximately at to between and at . For the urban network, our solver achieves success probabilities between and , depending on the specified path length, with values ranging from to . Notably, reducing the target confidence level from to cuts runtime overhead by approximately fifty percent across all configurations. Although the QUBO formulation demonstrates scalability in relation to , potentially limiting its use in dense graphs, the sparse structure typical of real-world road networks enables competitive performance in moderately large instances. These findings highlight the trade-off between solution reliability and computational efficiency, pinpointing specific problem regimes where QUBO-based optimization methods are not only viable but advantageous for path-optimization tasks.
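A minimal sketch of the solver layer: Metropolis simulated annealing over binary vectors minimizing a QUBO energy xᵀQx. Much of the method lies in how Q encodes edge costs plus penalties enforcing a valid source–target path; here Q is a small arbitrary instance so the annealer itself stays visible:

```python
# Minimal sketch of simulated annealing over a QUBO energy x^T Q x with
# single-bit-flip moves, Metropolis acceptance, and geometric cooling.
# Q is a small random symmetric instance, not a shortest-path encoding.
import numpy as np

rng = np.random.default_rng(5)
n = 12
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                                        # symmetric QUBO matrix (placeholder)

def energy(x):
    return x @ Q @ x

x = rng.integers(0, 2, n)
best_x, best_e = x.copy(), energy(x)
T = 2.0
for step in range(5000):
    i = rng.integers(n)
    x_new = x.copy()
    x_new[i] ^= 1                                        # single bit flip
    dE = energy(x_new) - energy(x)
    if dE < 0 or rng.random() < np.exp(-dE / T):         # Metropolis acceptance
        x = x_new
        if (e := energy(x)) < best_e:
            best_x, best_e = x.copy(), e
    T *= 0.999                                           # geometric cooling

print("best energy:", round(best_e, 3), "assignment:", best_x)
```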
Open Access Article
Machine Learning and Ranking-Based Evaluation for Prioritizing High-Potency Ionizable Lipids in LNP-Mediated RNA Delivery
by Mostafa Zahed, Maryam Skafyan and Morteza Rasoulianboroujeni
Algorithms 2026, 19(5), 353; https://doi.org/10.3390/a19050353 - 1 May 2026
Abstract
The application of machine learning (ML) models to accelerate the discovery of high-transfection-potency ionizable lipids has gained significant momentum in advancing lipid nanoparticle (LNP)-mediated RNA delivery. In the present study, we adopt a screening-oriented evaluation framework based on early-recognition ranking metrics tailored to high-throughput discovery. Model performance was assessed using the enrichment factor (EF), normalized discounted cumulative gain (NDCG), and HitRate at the top 10% of the ranked list, with uncertainty quantified via 1000 nonparametric bootstrap resamples. To assess robustness of conclusions, additional analyses were conducted at the top 1% and top 5% thresholds, reflecting increasingly stringent prioritization scenarios. Four predictive models—XGBoost, Random Forest, Elastic Net, and Quantile Regression Forest—were evaluated across three molecular feature representations, circular Morgan fingerprints, expert-crafted descriptors, and Grover graph embeddings, using a held-out test set. Across all models and thresholds, Morgan fingerprints consistently yielded superior early-recognition performance. The best-performing configuration—XGBoost with Morgan fingerprints—achieved EF@10% = 4.850 (95% CI [3.182, 6.818]), NDCG@10% = 0.628 (95% CI [0.234, 0.909]), and HitRate@10% = 0.493 (95% CI [0.318, 0.683]), corresponding to nearly fivefold enrichment over random selection and identification of highly potent lipids in approximately half of the prioritized candidates. Threshold-sensitivity analyses revealed that although stricter cutoffs (top 1% and top 5%) exhibit greater variability, the relative performance ordering of molecular representations remains stable. Bootstrap distributional comparisons further demonstrate that Morgan fingerprints provide not only higher but also more consistent screening performance than expert descriptors and Grover embeddings. Collectively, these results indicate that molecular representation—rather than model architecture—is the primary determinant of early-recognition performance in ionizable lipid discovery and that this conclusion is robust across multiple screening depths.
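The early-recognition metrics are short to compute; a minimal sketch of EF@10% and HitRate@10% with a nonparametric bootstrap CI, on synthetic placeholder scores and labels:

```python
# Minimal sketch of enrichment factor (EF) and HitRate at the top 10% of a
# ranked list, with a 1000-resample nonparametric bootstrap CI as in the
# abstract. Scores and labels are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(11)
n = 500
labels = (rng.random(n) < 0.1).astype(int)              # 1 = high-potency lipid
scores = labels * 0.8 + rng.standard_normal(n) * 0.5    # imperfect model scores

def ef_at(scores, labels, frac=0.10):
    k = max(1, int(len(scores) * frac))
    top = np.argsort(-scores)[:k]                       # indices of the top-ranked frac
    hit_rate = labels[top].mean()
    return hit_rate / labels.mean(), hit_rate           # (EF@frac, HitRate@frac)

ef, hr = ef_at(scores, labels)
boot = [ef_at(scores[idx], labels[idx])[0]
        for idx in (rng.integers(0, n, n) for _ in range(1000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"EF@10% = {ef:.2f} (95% CI [{lo:.2f}, {hi:.2f}]), HitRate@10% = {hr:.2f}")
```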
(This article belongs to the Special Issue Integrating Machine Learning and Physics in Engineering and Biology)
Open Access Article
Optimizing EMG-Based Transtibial Movement Classification for Real-Time Prosthetic Control: A Feature Engineering and Multi-Window Voting Study
by Carlos Gabriel Mireles-Preciado, Diana Carolina Toledo-Pérez, Roberto Augusto Gómez-Loenzo, Marcos Aviles and Juvenal Rodríguez-Reséndiz
Algorithms 2026, 19(5), 351; https://doi.org/10.3390/a19050351 - 1 May 2026
Abstract
Objective: This study investigates the optimization of surface EMG (sEMG) classification for seven transtibial movements using short analysis windows (64 ms) suitable for real-time control of below-knee prostheses. Methods: We systematically evaluated feature engineering strategies, dimensionality reduction techniques, and classification approaches using linear Support Vector Machines on four-channel sEMG data from the transtibial region. We compared amplitude-based versus derivative-based time-domain features, integrated frequency-domain features, and implemented multi-window majority voting with 50% overlap. Results: Evaluated across nine subjects (four male, five female), the optimized system achieves a population-level accuracy of with multi-window majority voting (per-subject range: 60.71–78.57%), with voting consistently improving accuracy over single-window classification by % on average. We demonstrate that PCA provides zero benefit for linear classifiers when all features are retained. Documented failed approaches include adaptive windowing and spectral entropy features. Conclusion: Careful feature engineering combining time-domain (MAV2, RMS, VAR, MAX, LOG, IEMG) and frequency-domain features (MPF, MF, band powers) with multi-window voting substantially recovers accuracy losses from aggressive window reduction while maintaining sub-100 ms latency suitable for prosthetic control. This work provides a validated methodology across multiple subjects for optimizing EMG classification latency–accuracy trade-offs, demonstrates that PCA is unnecessary for linear classifiers with well-engineered features, and documents negative results to guide future prosthetic control research.
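A minimal sketch of the recipe, a few time-domain features per 64 ms window plus majority voting over overlapping windows; random noise stands in for four-channel sEMG, and dummy per-window predictions replace the SVM so the voting logic stays visible:

```python
# Minimal sketch: time-domain features over 64 ms windows with 50% overlap,
# then majority voting across runs of consecutive window predictions.
# The 1 kHz sampling rate is an assumption; the SVM is replaced by dummy labels.
import numpy as np

def window_features(w):
    """MAV, RMS, VAR, MAX, IEMG for one window of one channel."""
    return [np.abs(w).mean(), np.sqrt((w ** 2).mean()), w.var(),
            np.abs(w).max(), np.abs(w).sum()]

fs = 1000                                        # Hz (assumed sampling rate)
win, hop = int(0.064 * fs), int(0.032 * fs)      # 64 ms windows, 50% overlap
rng = np.random.default_rng(0)
emg = rng.standard_normal((4, fs))               # placeholder: 4 channels, 1 s

feats = []
for start in range(0, emg.shape[1] - win + 1, hop):
    seg = emg[:, start:start + win]
    feats.append(np.concatenate([window_features(ch) for ch in seg]))
feats = np.array(feats)                          # (n_windows, 4 channels * 5 features)

# Majority vote over each run of 5 consecutive window predictions.
window_preds = rng.integers(0, 7, len(feats))    # stand-in classifier outputs, 7 classes
votes = [np.bincount(window_preds[i:i + 5], minlength=7).argmax()
         for i in range(len(window_preds) - 4)]
print(feats.shape, "voted predictions:", votes[:5])
```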
(This article belongs to the Special Issue Machine Learning Techniques for Brain Data Analysis Using EEG, EMG or Image Data)
Open Access Article
Trust, Education, and Artificial Intelligence: Adoption, Explainability, and Epistemic Authority Among Teacher-Education Undergraduates in Greece
by Epameinondas Panagopoulos, Charalampos M. Liapis, Anthi Adamopoulou, Ioannis Kamarianos and Sotiris Kotsiantis
Algorithms 2026, 19(5), 350; https://doi.org/10.3390/a19050350 - 1 May 2026
Abstract
This study investigates how teacher-education undergraduates in Greece use, evaluate, and trust Artificial Intelligence (AI) in higher education, with particular attention to the gap between widespread adoption and limited epistemic trust. The topic is important because generative AI is rapidly entering universities, reshaping learning practices, academic integrity, and the legitimacy of knowledge, while learners often rely on systems whose outputs are not easily verifiable. The study focuses on future teachers because they are both current users of AI in higher education and likely future mediators of its use in school settings. Addressing this problem, the study contributes empirical evidence on how AI adoption relates to epistemic authority and institutional legitimacy within teacher education rather than across university students in general. A mixed-methods design was employed using a structured questionnaire completed by 363 teacher-education undergraduates from the University of Patras and the University of Ioannina in Greece; the sample was predominantly women (86.0%) and first-year students (92.6%). Quantitative responses were analyzed statistically, open-ended answers were examined thematically, and factor analysis was used to identify latent attitudinal dimensions. The findings indicate very high AI use in everyday life (92.6%) and study practices (81.3%), but only moderate trust: 1.4% reported complete trust and 12.1% generally trusted AI-generated answers. Six dimensions explained 61.73% of total variance, pointing to a layered attitudinal structure within this teacher-education population, consistent with an adoption–trust paradox and with the need for transparent, verifiable, human-supervised educational AI. The observed verification-based trust calibration may partly reflect an emerging pedagogical orientation toward source checking and responsibility for knowledge mediation, but given the strong concentration of first-year students, this should be interpreted as characteristic of early-stage teacher education rather than of university students more broadly.
(This article belongs to the Special Issue Explainable AI: Advances in Interpretability Algorithms and Applications)
Open Access Article
Dynamic Fine-Tuning Rotation Network for Semantic Segmentation of Rock Paintings
by Chuanping Bai, Donglin Jing, Zhixue Wang and Fangqin Zhang
Algorithms 2026, 19(5), 349; https://doi.org/10.3390/a19050349 - 1 May 2026
Abstract
The scale features of rock art exhibit significant diversity and graduality. Among the existing semantic segmentation methods for rock art, although some models have taken note of the scale differences in rock art patterns and the complexity of directional features, and proposed targeted improvement strategies, most of these methods view scale adaptation and directional representation as unconnected problems. They fail to model the intrinsic correlation between the scale adaptation and directional representation, and particularly overlook the restrictive effect of scale accuracy on the extraction of directional features. This ultimately leads to the problem of "spatial representation misalignment" in the semantic segmentation of rock art. To address the above problems, this paper proposes a Dynamic Fine-tuning Rotation Network (DFTR-Net), which aims to solve the problems of imprecise scale feature extraction and directional misalignment for rock art patterns with arbitrary orientations. The network consists of a dynamic selective convolution structure and a shape-aware spatial feature extraction module. Specifically, the dynamic selective convolution dynamically adjusts the coverage range of the receptive field through inter-layer feature aggregation. It uses stacked small dilated convolution kernels to replace large convolution kernels with the same receptive field for extracting the neighborhood details of patterns. Then, by combining with feature aggregation, it constructs spatial feature differences and realizes intra-layer dynamic weighted fusion, thereby achieving accurate scale feature extraction. After obtaining fine-grained scale features, the shape-aware module first corrects the initial segmentation candidate regions of the patterns to generate directional guide boxes. Subsequently, it drives the rotational sampling of convolution kernels based on the angles of the guide boxes, forming region-constrained deformable convolutions that adapt to the shape of the patterns. These convolution kernels obtain strong supervision based on pixel-level annotations, which enhances the sensitivity to the directional features of the patterns and effectively alleviates the problem of directional misalignment. Extensive experiments show that DFTR-Net can achieve higher performance on the 3D-pitoti and Petroglyph Annotation datasets compared with the existing methods.
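The receptive-field argument for stacked small dilated kernels is easy to verify; a PyTorch sketch comparing three stacked 3 × 3 convolutions (dilations 1, 2, 3) with a single 13 × 13 kernel of the same receptive field (the dynamic weighting and rotation mechanisms are omitted):

```python
# Minimal sketch: stacking 3x3 convs with dilations 1, 2, 3 reaches a 13x13
# receptive field (3 + 2*2 + 2*3 = 13) with far fewer parameters than one
# 13x13 kernel, while preserving fine neighborhood detail.
import torch
import torch.nn as nn

stacked = nn.Sequential(
    nn.Conv2d(16, 16, 3, padding=1, dilation=1),
    nn.Conv2d(16, 16, 3, padding=2, dilation=2),
    nn.Conv2d(16, 16, 3, padding=3, dilation=3),
)
large = nn.Conv2d(16, 16, 13, padding=6)      # single kernel, same receptive field

x = torch.randn(1, 16, 64, 64)
print(stacked(x).shape, large(x).shape)       # both (1, 16, 64, 64)
n_params = lambda m: sum(p.numel() for p in m.parameters())
print("stacked params:", n_params(stacked), "vs large kernel:", n_params(large))
```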
(This article belongs to the Special Issue Advances in Deep Learning-Based Data Analysis)
Open Access Article
GPU-Accelerated Tensorized Flexible Differential Evolution for Large-Scale Constrained Multi-Objective Optimization
by Zihao Wang, Li Huang, Hua Han and Mingyang Chen
Algorithms 2026, 19(5), 348; https://doi.org/10.3390/a19050348 - 1 May 2026
Abstract
Large-scale constrained multi-objective optimization problems (LCMOPs) pose significant challenges due to the curse of dimensionality, complex constraint landscapes, and high computational overhead. In time-sensitive scenarios, existing large-scale constrained multi-objective evolutionary algorithms (LCMOEAs) often incur high computational costs and therefore struggle to meet efficiency requirements. This paper proposes a GPU-accelerated tensorized flexible differential evolution algorithm (TFDEMO) for LCMOPs. To address the curse of dimensionality and complex constraint landscapes in LCMOPs while maintaining GPU-level parallel efficiency, a tensorized flexible differential evolution operator (FlexDE) is developed. It utilizes a Bernoulli masking mechanism to switch between guided and random mutation modes in parallel on the GPU. The guidance probability is adaptively adjusted based on historical performance and the evolutionary state. Furthermore, a dual-population collaborative neighborhood selection mechanism is designed. For the main population, a Boolean mask tensor method is proposed, which constructs four Boolean mask tensors in parallel to encode feasibility states and dominance relations across all subproblems and their neighborhoods, and aggregates them via bitwise operations to produce the dominance tensor in a single pass. The auxiliary population performs constraint-ignoring neighborhood selection and shares its offspring with the main population to assist the main population in crossing large infeasible regions. The experimental results on the LIRCMOP and ZXH_CF benchmark suites with decision variable dimensions ranging from 100 to 800 demonstrate that TFDEMO achieves the best overall performance among the compared algorithms under both fixed-time and fixed function-evaluation settings. Additionally, a portfolio rebalancing problem with three objectives, five constraints, and scalable dimensions is designed to evaluate the performance of the proposed algorithm in time-sensitive application scenarios.
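A minimal sketch of the FlexDE switching idea in tensorized form, with NumPy standing in for the GPU tensor library: a single Bernoulli mask selects, per individual and fully vectorized, between a guided mutation toward an elite solution and a classic DE/rand/1 donor:

```python
# Minimal sketch of Bernoulli-masked mutation switching: every individual is
# mutated in parallel, with no Python-level branching per individual.
# Elite choice, F, and p_guided are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(9)
NP, D = 64, 100                                  # population size, dimensions
pop = rng.random((NP, D))
elite = pop[0]                                   # placeholder elite individual
F, p_guided = 0.5, 0.6                           # scale factor, guidance probability

# Random donor indices for DE/rand/1 (index collisions ignored for brevity).
r1, r2, r3 = (rng.integers(0, NP, NP) for _ in range(3))

guided = pop + F * (elite - pop)                 # guided mode: pull toward the elite
random_ = pop[r1] + F * (pop[r2] - pop[r3])      # random mode: classic rand/1 donor

mask = rng.random((NP, 1)) < p_guided            # Bernoulli switch, one draw per row
mutants = np.where(mask, guided, random_)        # fully vectorized mode selection
print(mutants.shape)
```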
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
Topics
Topic in Actuators, Algorithms, BDCC, Future Internet, JMMP, Machines, Robotics, Systems
Smart Product Design and Manufacturing on Industrial Internet
Topic Editors: Pingyu Jiang, Jihong Liu, Ying Liu, Jihong Yan
Deadline: 30 June 2026
Topic in Algorithms, Data, Earth, Geosciences, Mathematics, Land, Water, IJGI
Applications of Algorithms in Risk Assessment and Evaluation
Topic Editors: Yiding Bao, Qiang Wei
Deadline: 31 July 2026
Topic in Algorithms, Applied Sciences, Electronics, MAKE, AI, Software
Applications of NLP, AI, and ML in Software Engineering
Topic Editors: Affan Yasin, Javed Ali Khan, Lijie Wen
Deadline: 30 August 2026
Topic in Agriculture, Energies, Vehicles, Sensors, Sustainability, Urban Science, Applied Sciences, Algorithms
Sustainable Energy Systems
Topic Editors: Luis Hernández-Callejo, Carlos Meza Benavides, Jesús Armando Aguilar Jiménez
Deadline: 31 October 2026
Special Issues
Special Issue in Algorithms
Bio-Inspired Algorithms: 2nd Edition
Guest Editors: Sándor Szénási, Gábor Kertész
Deadline: 30 May 2026
Special Issue in Algorithms
Evolution of Algorithms in the Era of Generative AI
Guest Editors: Domenico Ursino, Gianluca Bonifazi, Enrico Corradini, Michele Marchetti
Deadline: 31 May 2026
Special Issue in Algorithms
Machine Learning Algorithms and Optimization in the Digital Transition (2nd Edition)
Guest Editors: Mateus Mendes, Balduíno Mateus, Nuno Lavado
Deadline: 31 May 2026
Special Issue in Algorithms
Algorithms and Innovations for Real-Time Processing in Streaming Systems and Applications
Guest Editor: Vishnu S. Pendyala
Deadline: 31 May 2026
Topical Collections
Topical Collection in Algorithms
Parallel and Distributed Computing: Algorithms and Applications
Collection Editors: Charalampos Konstantopoulos, Grammati Pantziou
Topical Collection in Algorithms
Feature Papers in Combinatorial Optimization, Graph, and Network Algorithms
Collection Editor: Roberto Montemanni
Topical Collection in Algorithms
Feature Papers in Algorithms for Multidisciplinary Applications
Collection Editor: Francesc Pozo
Topical Collection in Algorithms
Feature Papers in Randomized, Online and Approximation Algorithms
Collection Editor: Frank Werner