Journal Description
Algorithms is a peer-reviewed, open access journal which provides an advanced forum for studies related to algorithms and their applications, published monthly online by MDPI.
- Open Access — free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Theory and Methods) / CiteScore - Q1 (Numerical Analysis)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 19.2 days after submission; acceptance to publication takes 3.7 days (median values for papers published in this journal in the second half of 2025).
- Testimonials: See what our editors and authors say about Algorithms.
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Artificial Intelligence: AI, AI in Medicine, Algorithms, BDCC, MAKE, MTI, Stats, Virtual Worlds and Computers.
Impact Factor: 2.1 (2024); 5-Year Impact Factor: 2.0 (2024)
Latest Articles
Integration of Building Information Modelling and Economic Multi-Criteria Decision-Making with Neural Networks: Towards a Smart Renewable Energy Community
Algorithms 2026, 19(5), 327; https://doi.org/10.3390/a19050327 - 23 Apr 2026
Abstract
This research introduces a novel methodology that combines Building Information Modelling (BIM) and Economic Multi-Criteria Decision-Making (EMCDM) with Neural Networks to optimize hybrid renewable energy systems in small communities. Its core aim is to improve sustainability, technical performance, and financial viability through integrated modelling and decision-making. The approach is applied to a hydropower site, evaluating five scenarios (IDs 1–5) under a Community and an Industry model. Financial benchmarks include a 10% Minimum Required Return and a 7-year payback period. ID3 (hydropower, solar, and wind) proves most effective, with an ANPV of €10,905 (wet) and €4501 (dry), and an ROI of 155%/64%. Its ROIA/MRA Index peaks at 539%, and Payback/N ratios remain within acceptable limits (55%/96%). LCOE stays stable in average conditions (0.042–0.046 €/kWh), rising in dry years (0.07–0.10 €/kWh). Profitability differences stem primarily from demand and curtailment rather than production costs. The NARX neural network reliably models SS% values from renewable inputs with low error across scenarios. The integrated BIM–EMCDM framework ensures transparent, sustainable, and risk-balanced energy system decisions for long-term autonomy.
Full article
(This article belongs to the Special Issue Computational Modeling and Intelligent Simulation of Next-Generation Energy Systems)
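The scenario ranking above rests on standard project-finance measures (ROI, payback period, LCOE). As an illustration only, with made-up figures rather than the article's data, the three measures reduce to:

```python
# Illustrative-only sketch of the project-finance screens named above
# (ROI, payback, LCOE). All figures are made up for demonstration and
# are not taken from the article.

def roi(net_profit: float, investment: float) -> float:
    """Return on investment, as a fraction of the initial outlay."""
    return net_profit / investment

def simple_payback(investment: float, annual_cashflow: float) -> float:
    """Years of constant cash flow needed to repay the investment."""
    return investment / annual_cashflow

def lcoe(total_lifetime_cost: float, total_lifetime_kwh: float) -> float:
    """Levelized cost of energy: lifetime cost per kWh produced."""
    return total_lifetime_cost / total_lifetime_kwh

# e.g. a hypothetical plant costing EUR 50,000 that nets EUR 40,000 over its life
example_roi = roi(40_000.0, 50_000.0)                # 0.8, i.e. 80%
example_payback = simple_payback(50_000.0, 9_000.0)  # about 5.6 years
```

A Payback/N ratio like the 55%/96% quoted above is then simply the payback period divided by the project lifetime N.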
Open Access Article
High-Resolution Numerical Simulations of Urban Air Quality Using Computational Fluid Dynamics Model: Applications in Madrid, Spain
by
Roberto San Jose, Juan L. Perez-Camanyo and Miguel Jimenez-Gañan
Algorithms 2026, 19(5), 326; https://doi.org/10.3390/a19050326 - 22 Apr 2026
Abstract
This paper presents a high-spatial-resolution 3D system to simulate air quality in urban environments by coupling the WRF/Chem regional model with the PALM4U computational fluid dynamics model, together with an emission model using the SUMO microscopic traffic model. The system has been applied to two experiments in the city of Madrid, Spain. The first study quantifies the impact of four high-rise buildings on pollutant dispersion. The second evaluates the effect of changing tree types (broad-leaf vs. needle-leaf) in the Retiro Park on NO2 and O3 concentrations. Both simulations adopt a multiscale approach, using detailed 3D urban morphology, traffic flow data and meteorological conditions. In the first experiment, high-rise buildings caused local variations in NO2 and O3 of up to 15% and 20%, respectively. In the second experiment, replacing broad-leaf trees with needle-leaf trees led to a mean NO2 reduction of 1.69% across 90.67% of the study area. This research demonstrates the value of integrated CFD modeling for planning urban mitigation strategies and optimizing air quality in complex urban environments.
Full article
Open Access Article
SDS-Former: A Transformer-Based Method for Semantic Segmentation of Arid Land Remote Sensing Imagery
by
Yujie Du, Junfu Fan, Kuan Li and Yongrui Li
Algorithms 2026, 19(5), 325; https://doi.org/10.3390/a19050325 - 22 Apr 2026
Abstract
Semantic segmentation of land use and land cover (LULC) in arid regions remains challenging due to severe class imbalance, fragmented spatial distributions, and high spectral similarity among different land cover types. These characteristics often lead to an information bottleneck in deep segmentation networks and hinder the extraction of discriminative semantic representations. To address these issues, we propose SDS-Former, a lightweight semantic segmentation network specifically designed for remote sensing imagery in arid environments. SDS-Former incorporates an SSM-inspired Lightweight Semantic Enhancement (LSE) module to strengthen contextual modeling and alleviate the loss of discriminative information in deep features. To tackle scale variations, a Dynamic Selective Feature Fusion (DSFF) module is employed in the decoder to adaptively weight and fuse high-level semantics with low-level spatial details. Furthermore, a Feature Refinement Head (FRH) is introduced to enhance boundary localization and improve the recognition of small-scale and sparsely distributed land cover objects. Extensive ablation and comparative experiments demonstrate that SDS-Former consistently outperforms representative semantic segmentation methods across multiple evaluation metrics. On the Tarim Basin dataset, the proposed network achieves a mean Intersection over Union (mIoU) of 82.51% and an F1 score of 86.47%, indicating its superior effectiveness and robustness. Qualitative results further verify that SDS-Former exhibits clear advantages in distinguishing spectrally similar land cover types and preserving the spatial continuity of ground objects in complex arid-region scenes.
Full article
(This article belongs to the Special Issue Artificial Intelligence, Image Processing and Spatial Analytics in Environmental Informatics)
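For reference, the two headline metrics reported above, mean Intersection over Union and F1, reduce to simple confusion-matrix arithmetic. The sketch below uses toy counts, not the Tarim Basin results:

```python
# Confusion-matrix arithmetic behind the reported mIoU and F1 scores.
# The counts used with these functions are toy values, not the paper's.

def per_class_iou(conf):
    """IoU_c = TP_c / (TP_c + FP_c + FN_c), one value per class."""
    n = len(conf)
    ious = []
    for c in range(n):
        tp = conf[c][c]
        fp = sum(conf[r][c] for r in range(n)) - tp  # predicted c, truly other
        fn = sum(conf[c]) - tp                       # truly c, predicted other
        denom = tp + fp + fn
        ious.append(tp / denom if denom else 0.0)
    return ious

def mean_iou(conf):
    """mIoU: unweighted average of the per-class IoU values."""
    ious = per_class_iou(conf)
    return sum(ious) / len(ious)

def f1(tp, fp, fn):
    """F1: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```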
Open Access Article
A Hybrid Heuristic Algorithm for the Traveling Salesman Problem with Structured Initialization in Global–Local Search
by
Eduardo Chandomí-Castellanos, Elías N. Escobar-Gómez, Jorge Antonio Orozco Torres, Alejandro Medina Santiago, Betty Yolanda López Zapata, Juan Antonio Arizaga Silva, José Roberto-Bermúdez and Héctor Daniel Vázquez-Delgado
Algorithms 2026, 19(5), 324; https://doi.org/10.3390/a19050324 - 22 Apr 2026
Abstract
This work proposes solving the Traveling Salesman Problem by combining heuristic global and local search methods. The proposed method is divided into three phases: first, it evaluates an initial route built by choosing the minimum value in each row of a distance matrix. The second phase improves the route’s cost globally and, with a 2-opt local search, removes crossings to further reduce the tour cost. Finally, the last phase evaluates and retains each cost using tabu search, proposing a parameter that describes the algorithm’s convergence factor. The method is assessed on 29 TSPLIB instances and compared with other algorithms: ant colony optimization (ACO), an artificial neural network (ANN), particle swarm optimization (PSO), and a genetic algorithm (GA), obtaining results close to the known optima. Based on 30 independent runs per instance, the method achieves a mean absolute percentage error (MAPE) of 1.4484% relative to the known optima. Furthermore, statistical comparisons using the coefficient of variation (CV) for runtime and the Wilcoxon signed-rank test confirm that the proposed hybrid algorithm is significantly faster than traditional ant colony optimization (T-ACO) and a new ant colony optimization algorithm (N-ACO) while maintaining competitive solution quality.
Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
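The 2-opt step in the second phase, which removes route crossings, is a standard local search; a minimal sketch on a toy distance matrix (not a TSPLIB instance) looks like:

```python
# Minimal 2-opt local search of the kind the second phase applies.
# The distance matrix passed in is a toy example, not a TSPLIB instance.

def tour_length(tour, dist):
    """Total length of a closed tour over a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Reverse tour segments as long as a reversal shortens the tour;
    this is exactly the move that removes edge crossings."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour, improved = candidate, True
    return tour
```

On four cities at the corners of a unit square, the crossing tour `[0, 2, 1, 3]` is repaired to a perimeter tour of length 4.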
Open Access Review
Closing the Loop in Neuromodulation: A Review of Machine Learning Approaches for EEG-Guided Transcranial Magnetic Stimulation
by
Elena Mongiardini and Paolo Belardinelli
Algorithms 2026, 19(4), 323; https://doi.org/10.3390/a19040323 - 21 Apr 2026
Abstract
Transcranial magnetic stimulation (TMS) combined with electroencephalography (EEG) provides a powerful framework to probe and modulate human cortical and corticospinal excitability. In recent years, brain state-dependent EEG–TMS paradigms have gained increasing interest by synchronizing stimulation to ongoing neural activity. However, traditional approaches relying on single oscillatory features or fixed thresholds have yielded heterogeneous and often inconsistent results, motivating the adoption of machine learning (ML) and artificial intelligence (AI) methods to model brain state in a multivariate, data-driven manner. This review synthesizes current ML and deep learning (DL) approaches aimed at predicting cortical and corticospinal excitability from pre-stimulus EEG. We contextualize these methods within brain state-dependent EEG–TMS frameworks based on oscillatory phase, power, and network-level features, and within evolving definitions of brain state that move beyond local biomarkers toward distributed, large-scale, and dynamically evolving neural representations. The reviewed studies span feature-engineered models, data-driven decoding approaches, and emerging adaptive closed-loop frameworks. Finally, we discuss key methodological challenges, translational barriers, and future directions toward personalized, interpretable, and fully closed-loop neuromodulation systems.
Full article
(This article belongs to the Special Issue Machine Learning and Signal Processing for EEG, ECG, EDA, and Other Biosignals)
Open Access Article
A Hybrid Deep Learning Framework for Multi-Symbol Recognition and Positional Decoding of Handwritten Babylonian Numerals
by
Loay Alzubaidi, Kheir Eddine Bouazza and Islam Al-Qudah
Algorithms 2026, 19(4), 322; https://doi.org/10.3390/a19040322 - 20 Apr 2026
Abstract
The Babylonian numeral system, developed more than four thousand years ago, is one of the earliest known positional number systems, employing a sexagesimal (base-60) structure and a limited set of wedge-shaped symbols. Despite their visual simplicity, Babylonian numerals exhibit substantial structural and positional complexity, particularly when multiple symbols are combined to represent larger numerical values. This complexity presents significant challenges for modern computational recognition, especially in handwritten and degraded archaeological contexts. Most existing research has focused on the recognition of isolated Babylonian numeral symbols, which does not adequately reflect real inscriptions where numerals typically appear as composite sequences. To address this limitation, this paper proposes a hybrid deep learning framework capable of identifying, interpreting, and computing the decimal values of multi-symbol handwritten Babylonian numerals. Building on prior work in single-symbol recognition, we construct a synthetic yet realistic dataset of composite numeral images by combining handwritten glyphs into sequences of two to four symbols while incorporating natural variations in spacing, alignment, and handwriting style. The proposed framework integrates a Convolutional Neural Network (CNN) for visual feature extraction with optional structural feature fusion, followed by a Support Vector Machine (SVM) classifier for reliable multi-class discrimination. A rule-based positional decoder is then applied to convert recognized symbol sequences into their corresponding decimal values using Babylonian base-60 logic. By combining visual recognition with positional numerical reasoning, the proposed system enables end-to-end interpretation of handwritten Babylonian numeral sequences. 
To the best of our knowledge, this work represents one of the first approaches to jointly classify, decode, and compute numerical values from multi-symbol handwritten Babylonian numerals, contributing to digital epigraphy, archaeological text analysis, and cultural heritage preservation.
Full article
(This article belongs to the Special Issue Artificial Intelligence and Data-Driven Approaches for Cultural Heritage)
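The final rule-based positional decoding stage maps a recognized digit sequence to a decimal value via base-60 place values. A minimal sketch of that arithmetic (our illustration, not the authors' code; the digit lists stand in for the classifier's recognized symbol sequences):

```python
# Minimal base-60 positional decoder of the kind the rule-based stage
# applies: digits (each 0-59, most significant first) are folded into
# a decimal value.

def decode_sexagesimal(digits):
    """Interpret a digit sequence in base 60, e.g. [1, 25] -> 1*60 + 25 = 85."""
    value = 0
    for d in digits:
        if not 0 <= d <= 59:
            raise ValueError(f"invalid base-60 digit: {d}")
        value = value * 60 + d
    return value
```

For instance, `decode_sexagesimal([2, 0, 30])` evaluates to 2*3600 + 0*60 + 30 = 7230.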
Open Access Article
D3QN-Guided Sand Cat Swarm Optimization with Hybrid Exploration for Multi-Objective Cloud Task Scheduling
by
Minghao Shao, Ying Guo, Jibin Wang and Hu Zhang
Algorithms 2026, 19(4), 321; https://doi.org/10.3390/a19040321 - 20 Apr 2026
Abstract
Task scheduling in cloud computing environments is a complex NP-hard problem that requires maximizing resource utilization while satisfying quality-of-service (QoS) constraints. Traditional meta-heuristic algorithms often become stuck in local optima, while single deep reinforcement learning (DRL) models exhibit instability when exploring large-scale solution spaces. To address this, this paper proposes a hybrid scheduling algorithm based on multi-objective sand cat colony optimization (MoSCO). This algorithm utilizes a D3QN network to extract task features and guide population initialization, followed by a multi-objective Sand Cat Swarm Optimization (SCSO) algorithm for refined local search. Results from 50 independent replicate experiments conducted in a simulated cloud environment, coupled with an analysis of the dynamic convergence process, demonstrate that MoSCO exhibits significant superiority and robustness. Scatter plot convergence analysis further confirms that MoSCO’s knowledge injection mechanism effectively overcomes the blind exploration phase of traditional algorithms and successfully breaks through the local optimum bottleneck in the late iteration stages of single reinforcement learning, achieving higher-quality, denser, and more stable convergence. Furthermore, 3D and 2D Pareto front analyses show that MoSCO generates highly competitive, well-distributed non-dominated solutions, offering flexible trade-off options for conflicting objectives. Compared to PureD3QN, H-SCSO, and NSGA-II, MoSCO exhibits the smallest performance fluctuations in box plots. Specifically, MoSCO elevates the average resource utilization of clusters to 92.20%, while reducing the average maximum Makespan and Tardiness to 528 and 4187, respectively. Experimental data confirm that MoSCO effectively balances global exploration with local exploitation, delivering stable, high-quality solutions for dynamic cloud task scheduling.
Full article
(This article belongs to the Special Issue Evolutionary and Swarm Computing for Emerging Applications (2nd Edition))
Open Access Article
Assessment of Distributed PV Hosting Capacity in Distribution Areas Based on Operating Region Analysis
by
Xiaofeng Dong, Can Liu, Junting Li, Qiong Zhu, Yuying Wang and Junpeng Zhu
Algorithms 2026, 19(4), 320; https://doi.org/10.3390/a19040320 - 20 Apr 2026
Abstract
With the high penetration of distributed photovoltaics (PV) in distribution areas, transformer capacity limits and source–load fluctuations have become key factors constraining PV accommodation. To accurately assess the PV hosting capacity under energy storage regulation, this paper proposes an assessment method based on operating region analysis. First, a coordinated operation model for the distribution area is established, incorporating the transformer capacity, energy storage constraints, and power balance. On this basis, the calculation boundaries for the PV hosting capacity are discussed in two scenarios: Model 1 ignores power curve uncertainty, characterizing the geometry of the conventional operating region to find the maximum deterministic hosting capacity that keeps the region non-empty. Model 2 introduces box-type uncertainty sets for the source and load, proposes the concept of a “Self-Balanced Operating Region”, and constructs a robust feasibility determination model based on a Min–Max–Min structure. To solve this multi-layer nested non-convex model, an iterative algorithm based on duality theory and Benders decomposition is employed, determining the robust hosting capacity under uncertainty as the critical point at which the feasibility model’s optimal value shifts from zero to non-zero. Case studies show that source–load uncertainty leads to a significant contraction of the operating region, and that the robust hosting capacity under uncertainty is strictly less than its deterministic counterpart. This method quantifies the reduction effect of uncertainty on the accommodation capability, providing a theoretical basis for planning high-renewable-penetration distribution areas and configuring energy storage.
Full article
(This article belongs to the Special Issue Algorithms for Electrical and Electronic Engineering with a Focus on Renewable Energy Sources (2nd Edition))
Open Access Article
Strategic Capacity Planning Algorithm for Last-Mile Delivery Under High-Volume Demand Surges
by
Didar Yedilkhan, Aidarbek Shalakhmetov, Bakbergen Mendaliyev and Nursultan Khaimuldin
Algorithms 2026, 19(4), 319; https://doi.org/10.3390/a19040319 - 18 Apr 2026
Abstract
Last-mile delivery companies can face demand surges where large-volume order requests exceed daily courier capacity. In such cases fast and robust feasibility-first planning becomes more practical and valuable than building optimal routes. This paper proposes a hierarchical, computationally feasible decomposition pipeline that produces shift-feasible clusters under a strict shift-duration limit using travel-time-based duration estimates. While decomposition methods for large-scale VRPs are well established, they typically remain oriented toward route-construction quality within a single operational day or toward balancing customer counts, demand, or Euclidean territory partitions. In contrast, the proposed method targets a different decision problem: rapid feasibility-first strategic capacity planning for one-time extreme demand surges, where the primary requirement is to estimate, within seconds, a conservative upper bound on the number of courier shifts under a strict shift-duration limit. When end-to-end latency is evaluated from raw geographic points, including distance-matrix preparation for monolithic baselines, the proposed pipeline becomes 187 to 1315 times faster than matrix-based monolithic optimization on the common benchmark sizes. Methodologically, the contribution lies in combining (i) topology-preserving spatial linearization with a Hilbert Space-Filling Curve, (ii) adaptive greedy microclustering driven by empirical travel-time quantiles, and (iii) lexicographic dynamic-programming merge that minimizes the number of shifts first and total travel time second. This yields a planning-oriented decomposition mechanism that is distinct from classical route-quality-centered hierarchical VRP approaches.
Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
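Step (i) of the pipeline linearizes 2D locations with a Hilbert space-filling curve so that nearby points receive nearby 1D indices. A compact sketch of the classic iterative index computation (our illustration; the paper's own implementation details are not shown):

```python
# Classic iterative computation of a point's index along the Hilbert
# curve on a side x side grid (side a power of two). Nearby points in
# the plane tend to receive nearby indices, which is what makes the
# linearization useful before 1D (micro)clustering.

def hilbert_index(side, x, y):
    """Index of grid cell (x, y) along the Hilbert curve of the given side."""
    d = 0
    s = side // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/reflect the quadrant into canonical orientation
        if ry == 0:
            if rx == 1:
                x = side - 1 - x
                y = side - 1 - y
            x, y = y, x
        s //= 2
    return d
```

Sorting delivery points by `hilbert_index` yields the 1D ordering on which a greedy microclustering pass can then operate.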
Open Access Review
On-Orbit Space AI: Federated, Multi-Agent, and Collaborative Algorithms for Satellite Constellations
by
Ziyang Wang
Algorithms 2026, 19(4), 318; https://doi.org/10.3390/a19040318 - 17 Apr 2026
Abstract
Satellite constellations are transforming space systems from isolated spacecraft into networked, software-defined platforms capable of on-orbit perception, decision making, and adaptation. Yet many of the existing AI studies remain centered on single-satellite inference, while constellation-scale autonomy introduces fundamentally new algorithmic requirements: learning and coordination under dynamic inter-satellite connectivity, strict SWaP-C limits, radiation-induced faults, non-IID data, concept drift, and safety-critical operational constraints. This survey consolidates the emerging field of on-orbit space AI through three complementary paradigms: (i) federated learning for cross-satellite training, personalization, and secure aggregation; (ii) multi-agent algorithms for cooperative planning, resource allocation, scheduling, formation control, and collision avoidance; and (iii) collaborative sensing and distributed inference for multi-satellite fusion, tracking, split/early-exit inference, and cross-layer co-design with constellation networking. We provide a system-level view and a taxonomy that unifies collaboration architectures, temporal mechanisms, and trust models.
Full article
(This article belongs to the Collection Feature Papers on Artificial Intelligence Algorithms and Their Applications)
Open Access Article
A Hybrid Physics-Informed ML Framework for Emission and Energy Flow Prediction in a Retrofitted Heavy-Duty Vehicle
by
Talha Mujahid, Teresa Donateo and Pietropaolo Morrone
Algorithms 2026, 19(4), 317; https://doi.org/10.3390/a19040317 - 17 Apr 2026
Abstract
This study introduces a physics-informed machine learning framework for predicting transient emissions and energy variables in a retrofitted heavy-duty diesel vehicle. It merges data-driven modeling with physically derived features for reliable real-world analysis. A Random Forest regressor is trained on a public dataset (26 trips from one instrumented vehicle) to predict CO2 and NOx mass rates, exhaust temperature, exhaust mass flow rate, and fuel flow rate from synchronized multi-sensor inputs using past-only, time-lagged features. On held-out trips, exhaust temperature prediction achieves R2 = 0.9997 with an RMSE of 0.53 g/s; for CO2, with R2 = 0.9985 and an RMSE of 0.38 g/s, comparable performance is reported for NOx, exhaust flow, and fuel rate. The trained model is integrated into a simulation framework to enable the evaluation of alternative operating conditions and powertrain configurations. First, the impact of cold-start versus hot-start operation is assessed, showing cumulative emission penalties of up to +28% for CO2 and +30% for NOx. Second, the effect of hybridization is investigated by comparing the baseline thermal configuration with a hybrid electric architecture, resulting in estimated reductions of −12.2% in CO2 and −10.5% in NOx emissions. This tool excels in high-fidelity emission prediction and system-level energy analysis, aiding advanced powertrain assessments under realistic driving conditions.
Full article
(This article belongs to the Special Issue Computational Modeling and Intelligent Simulation of Next-Generation Energy Systems)
Open Access Article
Predicting Enterprise AI Adoption in Europe from Cloud Sophistication, Digital Sales Capabilities, and Enterprise Size
by
Cristiana Tudor
Algorithms 2026, 19(4), 316; https://doi.org/10.3390/a19040316 - 17 Apr 2026
Abstract
This paper examines whether broad enterprise AI adoption in Europe is best understood as an isolated technology decision or as the outcome of a wider bundle of digital capabilities. Using harmonized Eurostat data for European enterprises, the analysis builds a repeated cross-section at the country–size-class–year level and models high AI adoption with a combination of random forest and elastic-net estimation. The dependent variable captures enterprises using at least one AI technology, while the explanatory set focuses on cloud adoption, cloud CRM, cloud ERP, cloud database hosting, cloud security, cloud software use, e-sales intensity, and enterprise size. The findings reveal a stable predictive structure and consistent classification performance across specifications. Across models, cloud CRM and e-sales emerge as the strongest predictors of high AI adoption, followed by general cloud use and selected data-related cloud capabilities. This ordering remains largely stable in threshold-sensitivity checks based on alternative definitions of high adoption. The pattern also remains visible when country controls are removed, which suggests that the result is not merely a reflection of national heterogeneity. The paper contributes by shifting attention from broad claims about “digital readiness” to a narrower and more operational notion of capability complementarity: AI uptake tends to cluster where firms already possess customer-facing, cloud-based, and commercially digital infrastructures. In that sense, the paper offers a transparent, reproducible, and policy-relevant account of the digital foundations of enterprise AI adoption in Europe.
Full article
(This article belongs to the Special Issue AI-Driven Business Analytics Revolution)
Open Access Article
Agentic Hallucination Risk Scoring for Medical LLMs via Uncertainty Quantification and Clinical Knowledge Injection
by
Mayank Kapadia and Mohammad Masum
Algorithms 2026, 19(4), 315; https://doi.org/10.3390/a19040315 - 17 Apr 2026
Abstract
Large Language Models (LLMs) have witnessed significant adoption across numerous domains since 2020, but their proclivity to hallucinate creates unacceptable dangers in high-risk environments like healthcare, where wrong outputs can directly jeopardize human safety. While present systems focus on pre-generation mitigation strategies, they cannot ensure the safety of individual outputs during inference. We provide a post hoc Hallucination Risk Scoring (HRS) methodology that intercepts questionable outputs before they reach patients via an agentic pipeline. Given a medical question, a domain-specific LLM generates an initial response from which five complementary uncertainty signals are computed; these are then separated into a decision layer that governs escalation and a guidance layer that directs clinical knowledge injection by a GPT. The framework is tested on three biomedical question-answering datasets of varying complexity: PubMedQA-Labeled, PubMedQA-Artificial, and BioASQ Task B. The results show an up to 38% safety increase at the most sensitive threshold configuration, zero deterioration across all experimental configurations enforced by the Revert Baseline method, and complexity-aware escalation rates that scale organically with dataset difficulty. Tunable thresholds allow physicians to calibrate system behavior based on deployment requirements, providing a practical safety–accuracy trade-off. Statistical analysis identifies entropy as the primary uncertainty signal separating escalated from non-escalated cases across all datasets. These findings provide a deployable, interpretable, and configurable post hoc safety paradigm for reliable medical AI implementation.
Full article
(This article belongs to the Special Issue Evolution of Algorithms in the Era of Generative AI)
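Predictive entropy, identified above as the signal that best separates escalated from non-escalated outputs, is straightforward to compute from a model's class or token probabilities. A generic sketch (the threshold and probability values are illustrative, not the paper's configuration):

```python
# Generic predictive-entropy gate: high entropy over the model's answer
# distribution triggers escalation. Threshold and probabilities are
# illustrative only.
import math

def shannon_entropy(probs):
    """H(p) = -sum p_i * log2(p_i), skipping zero-probability entries."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def escalate(probs, threshold):
    """Flag an output for knowledge injection/review when entropy is high."""
    return shannon_entropy(probs) > threshold
```

A uniform distribution over four answers has entropy 2 bits and would be escalated at any threshold below 2; a near-certain answer would not.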
Open Access Article
A Data-Driven Spatiotemporal Feature Fusion Method for Traffic Flow Prediction
by
Long Li, Zhiwen Wang and Haoxu Wang
Algorithms 2026, 19(4), 314; https://doi.org/10.3390/a19040314 - 16 Apr 2026
Abstract
In response to current severe traffic congestion, highly reliable traffic flow prediction serves as a fundamental prerequisite for optimizing municipal road networks and mitigating systemic vehicular congestion. Aiming to improve the precision of short-term traffic flow prediction, this paper first addresses the low precision of the Dung Beetle Optimizer (DBO) algorithm by introducing an exponential adaptive weight into the position-update rule of the ball-rolling dung beetle and incorporating a Cauchy–Gaussian mutation strategy. We propose the Multi-strategy improved Dung Beetle Optimizer (MDBO), which is validated on eight benchmark test functions, demonstrating that MDBO outperforms common optimization algorithms in solution accuracy. Secondly, we adopt a combined prediction model, the Traffic Flow Temporal-Spatio Network (TFTSNet), which fuses spatial and temporal feature modules constructed in parallel. Finally, we achieve short-term traffic flow prediction by optimizing the TFTSNet combined prediction model with MDBO. Experiments evaluated model performance on publicly available traffic flow datasets. The results demonstrate that, compared to other state-of-the-art models, the proposed joint prediction model based on MDBO-optimized TFTSNet achieves substantial improvements in both prediction precision and generalization capability. Root mean square error (RMSE) decreased by 8.7–35.7%, mean absolute error (MAE) decreased by 6.6–40.0%, and R2 reached 0.975, showcasing robust predictive capability and engineering reference value.
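The two improvement strategies named in the abstract can be sketched in a few lines. The exact expressions (decay rate, scale, and how the two mutations are blended) are assumptions for illustration; the paper's formulas may differ:

```python
import math
import random

def adaptive_weight(t, t_max, w_min=0.2, w_max=0.9):
    """Exponentially decaying weight for the ball-rolling beetle's
    position update: large early (exploration), small late (exploitation).
    The decay constant 4.0 is an illustrative choice."""
    return w_min + (w_max - w_min) * math.exp(-4.0 * t / t_max)

def cauchy_gaussian_mutation(x, t, t_max, scale=0.1):
    """Cauchy-Gaussian mutation: blend a heavy-tailed Cauchy perturbation
    (exploration) with a Gaussian one (exploitation), shifting from
    Cauchy-dominated early in the run to Gaussian-dominated late."""
    lam = t / t_max  # ramps from 0 to 1 over the run
    cauchy = scale * math.tan(math.pi * (random.random() - 0.5))
    gauss = scale * random.gauss(0.0, 1.0)
    return [xi + (1 - lam) * cauchy + lam * gauss for xi in x]
```

The exponential form keeps the step size high for longer than a linear decay before collapsing, one common motivation for this family of adaptive weights.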
Full article
(This article belongs to the Special Issue Bio-Inspired Algorithms: 2nd Edition)
Open Access Article
Design and Analysis of a Reduced Switched-Capacitor Multilevel Inverter-Fed PMSM Drive for Solar–Battery Electric Vehicles Using Rat Swarm Optimization
by
Vijaychandra Joddumahanthi, Ramesh Devarapalli and Łukasz Knypiński
Algorithms 2026, 19(4), 313; https://doi.org/10.3390/a19040313 - 16 Apr 2026
Abstract
Solar photovoltaic (PV)-powered electric vehicles (EVs) have gained greater significance in the present-day era of transportation across the globe. This work presents an analysis of a five-level reduced switched-capacitor multilevel inverter (RSC-MLI)-powered permanent magnet synchronous motor (PMSM) drive for solar PV-powered battery vehicles enabled by a rat swarm optimization (RSO) maximum power point tracking (MPPT) control mechanism. The proposed system integrates solar PV arrays and battery storage for efficient power transfer to the EV propulsion system. The RSO technique is used to achieve fast, accurate tracking of the optimal maximum power point. A five-level RSC-MLI is employed, which boosts the voltage while lowering switching losses in the system. The performance of the PMSM is further analyzed to obtain steady-state parameters, such as the speed and torque of the electric vehicle.
Full article
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimal Design of Engineering Problems (2nd Edition))
Open Access Article
CAHT: A Constraint-Aware Heterogeneous Transformer for Real-Time Multi-Robot Task Allocation in Warehouse Environments
by
Shengshuo Gong and Oleg Varlamov
Algorithms 2026, 19(4), 312; https://doi.org/10.3390/a19040312 - 16 Apr 2026
Abstract
The NP-hard coordination of heterogeneous robots for time-windowed warehouse tasks remains challenging: metaheuristics are precise but slow, whereas neural methods cannot handle heterogeneous constraints, leading to infeasible allocations. This paper presents the Constraint-Aware Heterogeneous Transformer (CAHT), a lightweight encoder–decoder architecture that performs end-to-end task assignment and sequencing in a single forward pass. The central innovation is a dynamic feasibility masking mechanism that enforces capacity and energy constraints directly within the softmax computation, eliminating infeasible allocations at the architectural level. This is complemented by a spatial-bias Transformer encoder and a two-stage supervised–reinforcement learning training paradigm using ALNS-generated labels. Experiments across four problem scales (5–20 robots, 50–200 tasks) demonstrate that CAHT achieves objective values within 7–13% of the ALNS reference while being 29–91× faster (23–104 ms vs. 2–3 s). Constraint violation rates remain below 6%, with time-window satisfaction above 94%. Ablation analysis identifies dynamic masking as the dominant contribution (+213% degradation upon removal), and cross-scale generalization reveals that the optimality gap decreases from 13.0% to 10.7% as the problem scale grows. With only 0.91 M parameters, CAHT occupies a new trade-off point on the Pareto frontier, offering a practical path toward real-time autonomous warehouse coordination.
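The dynamic feasibility masking that the abstract identifies as CAHT's central innovation can be illustrated in plain Python: infeasible assignments are set to negative infinity before the softmax, so they receive exactly zero probability. The constraint check below is a hypothetical stand-in for the paper's capacity and energy constraints:

```python
import math

def masked_softmax(scores, feasible):
    """Softmax over task-assignment scores with infeasible entries masked
    to -inf, guaranteeing them zero probability at the architectural level
    (an illustrative sketch of feasibility masking, not the CAHT code)."""
    masked = [s if ok else float("-inf") for s, ok in zip(scores, feasible)]
    m = max(masked)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in masked]
    z = sum(exps)
    return [e / z for e in exps]

def feasibility(tasks, robot):
    """Hypothetical constraint check: a task is feasible only if the robot
    has enough remaining capacity and energy to serve it."""
    return [t["load"] <= robot["capacity"] and t["energy"] <= robot["energy"]
            for t in tasks]
```

Because masking happens inside the probability computation rather than as a post hoc repair step, the decoder can never sample an infeasible allocation, which is the property the ablation analysis credits with the largest contribution.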
Full article
Open Access Article
Strategic Approach for Enhancing Deep Learning Models
by
Oded Koren, Yoav Gvili and Liron J. Friedman
Algorithms 2026, 19(4), 311; https://doi.org/10.3390/a19040311 - 16 Apr 2026
Abstract
Modern large language models have achieved remarkable growth and performance across domains, yet their intense use of resources and high computational costs present challenges to scalability and sustainability. Current attempts to surpass baseline (naïve) AutoDL (Automated Deep Learning) models often rely on complex manipulations that yield marginal accuracy gains while demanding deep domain knowledge and heavy computation. To address known inefficiencies in computation and implementation, this study proposes a strategic approach for enhancing processing without compromising model accuracy or performance through a simplified, scalable methodology. We present a novel AutoDL weight-optimization method that identifies the most accurate deep learning starting point and achieves the best outcomes while accounting for the additional “presetting” analysis overhead. Using 20 real-world datasets, we conducted experiments across three models, six weight configurations, and ten seeds, totaling 62,400 epochs. In all experiments, the optimized model outperformed baselines, achieving higher accuracy across every dataset while limiting preprocessing to only two epochs per seed. These results demonstrate that such minimal preprocessing can substantially lower computational demand while maintaining precision. As the global demand for AI deployment accelerates, this conservation-oriented approach will be critical to sustaining innovation within resource and infrastructural constraints, enabling advances in computational sustainability and responsible AI development, tangible savings across multiple dimensions of resource consumption, and broader access to deep learning technologies.
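The two-epoch "presetting" idea can be sketched abstractly: train each candidate initialization briefly, keep the most accurate starting point, then continue full training from it. The callables `build_model`, `train_epoch`, and `evaluate` are placeholders, and the selection rule is an assumption about the method rather than the paper's exact procedure:

```python
def preset_best_start(build_model, train_epoch, evaluate, seeds, preset_epochs=2):
    """Cheap presetting pass: for each seed, build a model, train it for
    only `preset_epochs` epochs, evaluate, and return the model with the
    best short-run accuracy as the starting point for full training.
    (Illustrative sketch; all callables are hypothetical.)"""
    best = None
    for seed in seeds:
        model = build_model(seed)
        for _ in range(preset_epochs):
            train_epoch(model)
        acc = evaluate(model)
        if best is None or acc > best[0]:
            best = (acc, model)
    return best[1]
```

The appeal of the approach is that the presetting overhead is fixed at `preset_epochs * len(seeds)` cheap epochs, while a poor starting point would otherwise cost many full training runs.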
Full article
(This article belongs to the Special Issue AI Applications and Modern Industry)
Open Access Review
Traffic Flow Prediction in Intelligent Transportation Systems: A Comprehensive Review of Graph Neural Networks and Hybrid Deep Learning Methods
by
Zhenhua Wang, Xinmeng Wang, Lijun Wang, Zheng Wu, Jiangang Hu, Fujiang Yuan and Zhen Tian
Algorithms 2026, 19(4), 310; https://doi.org/10.3390/a19040310 - 16 Apr 2026
Abstract
Traffic flow prediction is a key component of Intelligent Transportation Systems (ITS), crucial for alleviating urban congestion, optimizing traffic management, and improving the overall efficiency of road networks. With the rapid growth in vehicle numbers and the increasing complexity of urban traffic patterns, accurate short-term traffic flow prediction has become increasingly important. This paper comprehensively reviews the latest advancements in traffic flow prediction methods, focusing on graph neural network (GNN)-based approaches and hybrid deep learning frameworks. First, we introduce the fundamental theoretical foundations, including graph neural networks, deep learning algorithms, heuristic optimization methods, and attention mechanisms. Subsequently, we group GNN-based prediction methods into four paradigms: (1) federated learning and privacy-preserving methods, enabling cross-regional collaboration while protecting sensitive data; (2) dynamically adaptive graph structure methods, capturing time-varying spatial dependencies; (3) multi-graph fusion and attention mechanism methods, enhancing feature representations from multiple perspectives; and (4) cross-domain technology integration methods, fusing novel architectures and interdisciplinary technologies. Furthermore, we investigate hybrid methods that combine signal decomposition, heuristic optimization, and attention mechanisms with LSTM networks to address challenges related to non-stationarity and model optimization. For each category, we analyze representative works and summarize their core innovations, strengths, and limitations in a systematic comparative table. Finally, we discuss current challenges, including computational complexity, model interpretability, and generalization ability, and outline future research directions such as lightweight model design, uncertainty quantification, multimodal data fusion, and integration with traffic control systems. This review provides researchers and practitioners with a systematic understanding of the latest advances in traffic flow prediction and offers guidance for methodological selection and future research.
Full article
Open Access Article
An Attention-Enhanced Network for Visual Attitude Estimation
by
Lu Liu, Jiahao Duan, Yaoyang Shen, Shihan Wang, Jiale Mao, Wei Liu, Yuyan Guo, Lan Wu, Ming Kong and Hang Yu
Algorithms 2026, 19(4), 309; https://doi.org/10.3390/a19040309 - 15 Apr 2026
Abstract
Accurate estimation of object attitude is essential for understanding motion behavior and achieving dynamic tracking. Existing image-based methods often suffer from low efficiency and limited accuracy, while the potential of deep learning has not been fully exploited in this field. To address these limitations, a lightweight deep learning method for attitude estimation is proposed and validated on spherical particles. A synthetic dataset is generated through VTK-based rendering and automatic annotation, providing large-scale training samples with known Euler angles. An improved MobileNetV1 backbone is developed by integrating Squeeze-and-Excitation blocks, a dual-scale Pyramid Pooling Module, global average pooling, and a regression-oriented multilayer perceptron, which enhances feature extraction and enables direct Euler angle prediction. Experimental results show that the proposed method achieves an average error of 0.308° on synthetic test images. Furthermore, a solid particle was fabricated through 3D printing and physical measurements were conducted, where the network combined with image preprocessing and augmentation achieved an average error of about 0.5° on real images, demonstrating a lightweight and deployment-friendly framework for practical attitude estimation. The results verify the effectiveness of the method and demonstrate its potential for accurate and computationally efficient attitude measurement in applications such as fluid dynamics, industrial inspection, and motion tracking.
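The Squeeze-and-Excitation blocks integrated into the backbone can be illustrated in plain Python: global average pooling "squeezes" each channel to a scalar, a small bottleneck "excites" per-channel scales through a sigmoid, and the feature map is rescaled channel-wise. The weight shapes and reduction ratio are illustrative, not the paper's configuration:

```python
import math

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation channel reweighting, written without a DL
    framework for illustration.

    feature_map: list of C channels, each an H x W list of lists.
    w1: C x (C // r) reduction weights; w2: (C // r) x C expansion weights.
    Returns the channel-rescaled feature map.
    """
    # squeeze: global average pooling per channel
    squeezed = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
                for ch in feature_map]
    # excitation: FC -> ReLU -> FC -> sigmoid
    hidden = [max(0.0, sum(s * w1[i][j] for i, s in enumerate(squeezed)))
              for j in range(len(w1[0]))]
    scale = [1.0 / (1.0 + math.exp(-sum(h * w2[i][j]
                                        for i, h in enumerate(hidden))))
             for j in range(len(w2[0]))]
    # rescale each channel by its learned importance
    return [[[v * scale[c] for v in row] for row in ch]
            for c, ch in enumerate(feature_map)]
```

In a real network the two weight matrices are learned, letting the block amplify informative channels and suppress uninformative ones at negligible parameter cost, which fits the lightweight design goal stated above.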
Full article
Open Access Article
A Hybrid Transformer–Graph Framework for Curriculum Sequencing and Prerequisite Optimization in Computer Science Education with Explainable AI
by
Ritika Awasthi, Abhinav Shukla, Ayush Kumar Agrawal, Parul Dubey and R Kanesaraj Ramasamy
Algorithms 2026, 19(4), 308; https://doi.org/10.3390/a19040308 - 14 Apr 2026
Abstract
Curriculum redesign in Computer Science and Information Technology has become increasingly complex due to rapid technological advancements, interdisciplinary knowledge requirements, and evolving industry expectations. Recent progress in artificial intelligence, particularly Transformer-based language models, offers new opportunities for data-driven and scalable curriculum analysis. This study utilizes syllabus-level textual datasets collected from multiple universities, comprising structured and unstructured course descriptions across diverse CS and IT programs. The dataset enables semantic representation learning and prerequisite inference while supporting cross-institutional curriculum analysis. We propose a hybrid framework that combines Transformer-based semantic encoding with graph-based prerequisite optimization and constraint-aware curriculum sequencing. The novelty of this work lies in integrating semantic prerequisite discovery, optimization-driven curriculum structuring, and explainable AI within a unified decision-support framework. Experimental results demonstrate that the proposed approach consistently outperforms existing machine learning and deep learning baselines, achieving higher prerequisite prediction accuracy, improved curriculum feasibility, and more coherent course sequencing, thereby offering a scalable and interpretable solution for evidence-based curriculum redesign in higher education.
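The prerequisite-feasibility side of such a framework can be illustrated with a plain topological sort: once prerequisite edges are inferred, any valid curriculum sequence must order every prerequisite before its dependents. This is a simplified stand-in for the paper's optimization-driven sequencing, and all names are illustrative:

```python
from collections import defaultdict, deque

def sequence_courses(prereqs, courses):
    """Order courses so every prerequisite precedes its dependents
    (Kahn's topological sort; a toy stand-in for constraint-aware
    curriculum sequencing).

    prereqs: list of (before, after) pairs; courses: list of course names.
    Raises ValueError if the prerequisite graph contains a cycle.
    """
    indeg = {c: 0 for c in courses}
    deps = defaultdict(list)
    for before, after in prereqs:
        deps[before].append(after)
        indeg[after] += 1
    queue = deque(c for c in courses if indeg[c] == 0)
    order = []
    while queue:
        course = queue.popleft()
        order.append(course)
        for nxt in deps[course]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(courses):
        raise ValueError("cycle detected: prerequisite graph is infeasible")
    return order
```

A cycle check like the one above is also a cheap feasibility test: a curriculum whose inferred prerequisite graph is cyclic cannot be scheduled at all, regardless of how the remaining constraints are optimized.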
Full article
Journal Menu
- Algorithms Home
- Aims & Scope
- Editorial Board
- Reviewer Board
- Topical Advisory Panel
- Instructions for Authors
- Special Issues
- Topics
- Sections & Collections
- Article Processing Charge
- Indexing & Archiving
- Editor’s Choice Articles
- Most Cited & Viewed
- Journal Statistics
- Journal History
- Journal Awards
- Conferences
- Editorial Office
Journal Browser
Highly Accessed Articles
Latest Books
E-Mail Alert
News
Topics
Topic in
Actuators, Algorithms, BDCC, Future Internet, JMMP, Machines, Robotics, Systems
Smart Product Design and Manufacturing on Industrial Internet
Topic Editors: Pingyu Jiang, Jihong Liu, Ying Liu, Jihong Yan
Deadline: 30 June 2026
Topic in
Algorithms, Data, Earth, Geosciences, Mathematics, Land, Water, IJGI
Applications of Algorithms in Risk Assessment and Evaluation
Topic Editors: Yiding Bao, Qiang Wei
Deadline: 31 July 2026
Topic in
Algorithms, Applied Sciences, Electronics, MAKE, AI, Software
Applications of NLP, AI, and ML in Software Engineering
Topic Editors: Affan Yasin, Javed Ali Khan, Lijie Wen
Deadline: 30 August 2026
Topic in
Agriculture, Energies, Vehicles, Sensors, Sustainability, Urban Science, Applied Sciences, Algorithms
Sustainable Energy Systems
Topic Editors: Luis Hernández-Callejo, Carlos Meza Benavides, Jesús Armando Aguilar Jiménez
Deadline: 31 October 2026
Conferences
Special Issues
Special Issue in
Algorithms
Advances in Computer Vision: Emerging Trends and Applications
Guest Editors: Noman Khan, Samee Ullah Khan
Deadline: 30 April 2026
Special Issue in
Algorithms
Artificial Intelligence in Modeling and Simulation (2nd Edition)
Guest Editors: Nuno Fachada, Nuno David
Deadline: 30 April 2026
Special Issue in
Algorithms
Algorithms in Multi-Sensor Imaging and Fusion
Guest Editors: Guanqiu Qi, Zhiqin Zhu
Deadline: 30 April 2026
Special Issue in
Algorithms
Advances in Parallel and Distributed AI Computing
Guest Editor: Stefan Bosse
Deadline: 30 April 2026
Topical Collections
Topical Collection in
Algorithms
Parallel and Distributed Computing: Algorithms and Applications
Collection Editors: Charalampos Konstantopoulos, Grammati Pantziou
Topical Collection in
Algorithms
Feature Papers in Algorithms and Mathematical Models for Computer-Assisted Diagnostic Systems
Collection Editor: Francesc Pozo
Topical Collection in
Algorithms
Algorithms for Games AI
Collection Editors: Wenxin Li, Haifeng Zhang
Topical Collection in
Algorithms
Feature Papers on Artificial Intelligence Algorithms and Their Applications
Collection Editors: Ulrich Kerzel, Mostafa Abbaszadeh, Andres Iglesias, Akemi Galvez Tomida


