Algorithms, Volume 19, Issue 2 (February 2026) – 73 articles

Cover Story: Engineering is undergoing a significant transformation driven by the integration of artificial intelligence (AI) into engineering optimization, spanning design, analysis, and operational efficiency across numerous disciplines. Several frameworks for AI-based engineering optimization have been identified: (1) machine learning models are trained as objective and constraint functions for optimization problems; (2) machine learning techniques are used to improve the efficiency of optimization algorithms; (3) neural networks approximate complex simulation models such as finite element analysis (FEA) and computational fluid dynamics (CFD), making it possible to optimize complex engineering systems; and (4) machine learning predicts design parameters or initial solutions that are subsequently refined by optimization.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats, with PDF as the version of record. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
18 pages, 3861 KB  
Article
A Joint Estimation Method for Suspension Status and Road Gradient Under Sparse Sensor Conditions
by Mengdong Zheng, Xiaolin Wang, Zhaoxue Deng, Xingquan Li, Kun Yuan and Tao Gou
Algorithms 2026, 19(2), 165; https://doi.org/10.3390/a19020165 - 23 Feb 2026
Abstract
In the domain of engineering applications, this study addresses the challenge of achieving a unified estimation of the suspension relative velocity and road gradient during vehicle operation while mitigating the significant costs associated with automotive sensors. This paper proposes a simplified system model that integrates discretized vehicle vertical dynamics and longitudinal kinematics, intentionally excluding wheel dynamics. Utilizing the front axle vertical velocity, vehicle speed, and longitudinal and vertical accelerations as inputs, an estimator is employed in conjunction with the Extended Kalman filter algorithm to concurrently predict the relative velocity of the vehicle suspension, the sprung mass velocity, and the road gradient. The feasibility of the proposed methodology is corroborated through simulation experiments. Furthermore, real-world road tests validate the efficacy and timeliness of the joint estimation approach based on a “2 + 1” sensor arrangement. This methodology not only optimizes sensor system configuration and reduces engineering costs but also provides substantial technical support for further advancements in vehicle parameter estimation and suspension control applications. Full article
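The estimator pattern behind this approach can be illustrated with a minimal (extended) Kalman filter cycle: a model-based prediction is corrected by each incoming measurement. The scalar random-walk model, noise levels, and data below are illustrative assumptions, not the authors' actual vehicle dynamics model.

```python
# Minimal discrete-time (extended) Kalman filter sketch for state estimation.
# All model functions, noise values, and measurements here are hypothetical.

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict/update cycle of an extended Kalman filter (scalar case)."""
    # Predict: propagate the state and covariance through the (linearised) dynamics.
    x_pred = f(x)
    P_pred = F * P * F + Q
    # Update: correct the prediction with the measurement z.
    y = z - h(x_pred)              # innovation
    S = H * P_pred * H + R         # innovation covariance
    K = P_pred * H / S             # Kalman gain
    x_new = x_pred + K * y
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Toy example: estimate a constant road-gradient-like state from noisy readings.
x, P = 0.0, 1.0
for z in [0.9, 1.1, 1.05, 0.95, 1.0]:
    x, P = ekf_step(x, P, z,
                    f=lambda s: s, F=1.0,   # random-walk dynamics
                    h=lambda s: s, H=1.0,   # direct (noisy) observation
                    Q=1e-4, R=0.05)
```

In the paper's setting, the state vector would instead collect suspension relative velocity, sprung mass velocity, and road gradient, with the "2 + 1" sensor readings entering through the measurement model.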

23 pages, 5752 KB  
Article
MDF-iTransformer: Multi Data Fusion-Based iTransformer for Load Prediction of Zero-Carbon Emission Integrated Energy System in Urban Park
by Yang Wei, Zhengwei Chang, Feng Yang, Han Zhang, Jie Zhang, Yumin Chen and Maomao Yan
Algorithms 2026, 19(2), 164; https://doi.org/10.3390/a19020164 - 21 Feb 2026
Abstract
To predict the output power of integrated energy systems (IES) under zero-carbon conditions, this research presents a Multi Data Fusion-based iTransformer prediction network (MDF-iTransformer). The network uses Multivariate Singular Spectrum Analysis (MSSA) to identify nonlinear relationships among variables and extract dynamic features from multi-modal data. It integrates an embedding block and multivariate attention module into the iTransformer network to capture complex patterns and long-term temporal dependencies in multi-dimensional data, thereby extracting dynamic features across different time scales and spatial dimensions. Subsequently, to address the issue of imbalanced datasets, the improved K-means-SMOTE (KS) algorithm is adopted to augment the number of small-class samples, effectively reducing model bias. Experimental results indicate that the proposed MDF-iTransformer achieves a root-mean-square error (RMSE) of 7.2 kW, mean absolute error (MAE) of 5.6 kW, mean absolute percentage error (MAPE) of 2.7%, and an R-squared value (R2) of 0.92 for a 1 h prediction horizon. It still maintains an RMSE of 14.4 kW, MAE of 11.9 kW, MAPE of 3.68%, and R2 of 0.74 at the 10 h horizon, with cross-season load forecasting errors consistently below 4%. Compared with other algorithms, MDF-iTransformer demonstrates higher accuracy and stronger robustness, playing a crucial role in the optimal operation of integrated energy systems. Full article
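The K-means-SMOTE rebalancing step mentioned above can be sketched simply: minority samples are first grouped into clusters, and synthetic points are then interpolated only between members of the same cluster, so new samples stay inside dense minority regions. The clusters below are given by hand (a real implementation would obtain them from k-means), and all data are illustrative.

```python
import random

def smote_within_clusters(clusters, n_new, rng):
    """Generate n_new synthetic samples by interpolating within minority clusters."""
    synthetic = []
    for _ in range(n_new):
        cluster = rng.choice([c for c in clusters if len(c) >= 2])
        a, b = rng.sample(cluster, 2)          # two distinct cluster members
        u = rng.random()                       # interpolation factor in [0, 1)
        synthetic.append(tuple(ai + u * (bi - ai) for ai, bi in zip(a, b)))
    return synthetic

rng = random.Random(0)
minority_clusters = [[(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)],
                     [(5.0, 5.0), (5.1, 4.8)]]
new_points = smote_within_clusters(minority_clusters, n_new=10, rng=rng)
```

Because interpolation stays between same-cluster members, every synthetic point falls inside one cluster's bounding box, avoiding the noisy bridge samples plain SMOTE can create between distant minority groups.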

31 pages, 686 KB  
Article
On a Method for Constructing Optimal Difference Formulas Using Discrete Operators with Variable Coefficients
by Kholmat Shadimetov and Shermamat Esanov
Algorithms 2026, 19(2), 163; https://doi.org/10.3390/a19020163 - 19 Feb 2026
Abstract
This paper deals with the problem of constructing optimal difference formulas in the Hilbert space $H_2^{(m)}(0,1)$ through Sobolev's method. Firstly, Sobolev's method for constructing optimal difference formulas in $H_2^{(m)}(0,1)$, which is based on the discrete analogue $L_h[\beta]$, is described. Secondly, a discrete analogue $L_h[\beta]$ of the variable-coefficient differential operator $\left(\frac{d^2}{dx^2}+2\,\operatorname{sgn}(x)\,\frac{d}{dx}+1\right)\left(1-\frac{d^2}{dx^2}\right)^{m-2}$ is constructed. Thirdly, the optimal difference formula is obtained for $m=2$. Finally, we present some numerical results, which serve to confirm the numerical convergence of the optimal difference formula. Full article

30 pages, 15965 KB  
Article
A Trust-Centered Explainable Deep-Learning Framework for Acute Lymphoblastic Leukemia Detection Using Multi-Model Fusion and Interpretability Scoring
by Khadija Parwez, Syed Irfan Sohail, Muhammad Bilal, Wesam Shishah, Ali Raza and Mubeen Javed
Algorithms 2026, 19(2), 162; https://doi.org/10.3390/a19020162 - 19 Feb 2026
Abstract
Currently, advances in healthcare technologies are transforming medical diagnostics, particularly for data-driven disease detection. Acute lymphoblastic leukemia is a common and life-threatening blood cancer, especially prevalent in children. Although existing artificial intelligence-based diagnostic models have demonstrated promising accuracy, their black-box nature limits transparency, explainability, and clinical trust. Moreover, most explainable artificial intelligence approaches for leukemia diagnosis rely primarily on qualitative visual explanations and lack a unified quantitative mechanism to measure trust. This study presents a novel, trust-centered explainable deep-learning framework for automated acute lymphoblastic leukemia detection using 153 publicly available microscopic blood smear images and multiple transfer-learning models. Among the evaluated architectures, the fine-tuned EfficientNetB4 achieved high diagnostic performance, attaining an accuracy of 98.31%, and was selected as the base model of the proposed framework. Beyond diagnostic accuracy, this work introduces a novel unified interpretability score that quantitatively assesses model trustworthiness by integrating diagnostic performance, explanation robustness (evaluated using deletion and stability metrics), and clinician trust feedback into a single reliability measure. This quantitative trust formulation represents a distinct advancement over existing studies based on explainable artificial intelligence, which typically rely on isolated or qualitative explainability assessments. The framework further enhances transparency through visual and textual explanations generated using standard post hoc explainable methods and advanced fusion heatmaps, while human-centric evaluation is conducted using two clinician-based trust scales. 
Overall, this study provides a unified and transparent framework that jointly evaluates accuracy, explainability, and clinician trust, representing a step towards the clinical adoption of trustworthy artificial intelligence-driven leukemia diagnostic systems. Full article
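A unified trust score of the kind described above can be sketched as a weighted combination of normalised components. The weights and component values below are hypothetical illustrations, not the paper's actual formulation.

```python
# Hypothetical sketch: combine diagnostic performance, explanation robustness,
# and clinician feedback (each normalised to [0, 1]) into one reliability number.
# The weights (0.5, 0.3, 0.2) are invented for illustration.

def unified_trust_score(accuracy, robustness, clinician_trust,
                        weights=(0.5, 0.3, 0.2)):
    components = (accuracy, robustness, clinician_trust)
    assert all(0.0 <= c <= 1.0 for c in components)
    assert abs(sum(weights) - 1.0) < 1e-9      # weights form a convex combination
    return sum(w * c for w, c in zip(weights, components))

score = unified_trust_score(accuracy=0.9831, robustness=0.88, clinician_trust=0.80)
```

Keeping the combination convex guarantees the resulting score stays in [0, 1], so it can be read as a single interpretable reliability measure.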

4 pages, 141 KB  
Editorial
Special Issue “Selected 2024 and 2025 Papers from Algorithms Editorial Board Members”
by Frank Werner
Algorithms 2026, 19(2), 161; https://doi.org/10.3390/a19020161 - 19 Feb 2026
Abstract
This is the fourth edition of a Special Issue of Algorithms that is of a different nature than the other Special Issues in the journal, which are usually dedicated to a particular subject in the area of algorithms [...] Full article
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)
25 pages, 639 KB  
Article
A Sparse L∞-Norm Regularized Least Squares Support Vector Regression
by Xiaoyong Liu, Dong Li and Chengbin Zeng
Algorithms 2026, 19(2), 160; https://doi.org/10.3390/a19020160 - 18 Feb 2026
Abstract
Although Least Squares Support Vector Regression (LSSVR) reduces the hyperparameter space to two, it sacrifices sparsity, causing all training samples to become support vectors and increasing storage costs. In contrast, standard Support Vector Regression (SVR) preserves sparsity but requires tuning three highly coupled hyperparameters, leading to higher computational burden. To address these limitations, this paper proposes a sparse L∞-norm regularized least squares SVR framework that incorporates the infinity norm of approximation errors into both the objective function and inequality constraints. The resulting optimization problem minimizes model complexity while controlling the maximum prediction deviation through a single slack variable, thereby transforming the conventional three-hyperparameter SVR tuning task into a two-parameter problem involving only the regularization coefficient and kernel width. This formulation restores sparsity by enabling a compact support vector set, while preserving the stability and convexity advantages of LSSVR. Experiments on both static and dynamic datasets demonstrate that the proposed method consistently achieves higher predictive accuracy and improved robustness compared with standard SVR and LSSVR. These results indicate that the proposed L∞-norm regularized framework offers a mathematically principled and computationally efficient alternative for sparse, robust, and scalable regression modeling. Full article
(This article belongs to the Topic Machine Learning and Data Mining: Theory and Applications)

21 pages, 2598 KB  
Article
AG2: Attention-Guided Dynamic Adaptation for Adversarial Attacks in Computer Vision
by Jie Tian and Vladimir Y. Mariano
Algorithms 2026, 19(2), 159; https://doi.org/10.3390/a19020159 - 18 Feb 2026
Abstract
Deep neural networks (DNNs) have achieved remarkable success in computer vision yet remain vulnerable to adversarial examples. Existing attacks typically distribute perturbations uniformly across the input, without leveraging the model’s internal attention mechanism, and fail to adapt to model responses. To tackle these limitations, we propose AG2 (Attention-Guided Adversarial Sample Generation), an adversarial attack algorithm that uses dynamically updated attention maps to guide perturbation placement and a dynamic feedback mechanism for adaptive optimization. AG2 comprises three steps: feature extraction and attention-weight computation, iterative optimization of perturbations guided by attention maps, and adjustment of optimization parameters based on attention shifts. By concentrating perturbations in regions receiving high attention from the victim model, AG2 improves attack effectiveness while preserving visual imperceptibility. The dynamic feedback mechanism further maintains robustness against defended models such as those trained with defensive distillation. Experiments on MNIST, CIFAR-10, and ImageNet show that AG2 achieves attack success rates of 93.7%, 93.5%, and 85.0%, respectively, outperforming prior methods. Moreover, AG2 exhibits strong cross-architecture transferability, achieving a 69.5% success rate on Vision Transformers, exceeding the prior method’s 55.3% by 14.2 percentage points. Theoretical analysis provides convergence guarantees and stability bounds for the proposed attention-guided optimization. Full article
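The attention-guided placement idea can be sketched as a signed-gradient step scaled per pixel by a normalised attention map, so the perturbation budget concentrates where the victim model attends. The gradients and attention values below are invented for illustration; AG2's actual optimization is iterative and feedback-driven.

```python
# Sketch of attention-guided perturbation placement: an FGSM-like signed-gradient
# step, weighted per pixel by normalised attention. All values are illustrative.

def attention_guided_perturb(image, grad, attention, eps):
    peak = max(attention) or 1.0               # normalise attention to [0, 1]
    sign = lambda g: (g > 0) - (g < 0)         # sign of the loss gradient
    return [x + eps * (a / peak) * sign(g)
            for x, g, a in zip(image, grad, attention)]

image     = [0.5, 0.5, 0.5, 0.5]
grad      = [0.2, -0.1, 0.3, 0.0]
attention = [1.0, 0.25, 0.5, 0.0]              # high attention on the first pixel
adv = attention_guided_perturb(image, grad, attention, eps=0.1)
```

Pixels with zero attention are left untouched, which is how such schemes keep perturbations visually inconspicuous while still moving high-salience regions across the decision boundary.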

22 pages, 2827 KB  
Article
An Integer Ambiguity Resolution Method Based on the Hybrid Adaptive Differential Evolution Grey Wolf Optimizer Algorithm
by Jiangchao Tian, Xiyan Sun, Yuanfa Ji, Wuzheng Guo and Xizi Jia
Algorithms 2026, 19(2), 158; https://doi.org/10.3390/a19020158 - 18 Feb 2026
Abstract
In Global Navigation Satellite Systems (GNSS), high-precision position coordinates are typically determined by establishing a double-difference carrier phase observation model and resolving the integer ambiguities within it. Therefore, the ability to fix integer ambiguities rapidly and accurately is a critical challenge in carrier phase measurements. To address the problem of double-difference integer ambiguity, this paper proposes a Hybrid Adaptive Differential Evolution Grey Wolf Optimizer (HADE-GWO) algorithm. Comparative experiments focusing on computation speed and stability were conducted against the GWO, LAMBDA, and M-LAMBDA algorithms. The results show that while achieving the same fixing success rate as the LAMBDA and M-LAMBDA algorithms, the HADE-GWO algorithm finds the optimal ambiguity solution in less time. To validate the high-dimensional ambiguity resolution capability of the HADE-GWO algorithm, 6-dimensional and 12-dimensional integer ambiguity resolution tests were performed. The outcomes indicate that the HADE-GWO algorithm possesses excellent high-dimensional resolution capabilities. Finally, an application experiment was conducted using single-frequency data from GPS and BeiDou (BDS) systems. The results demonstrate that the algorithm can achieve centimeter-level positioning accuracy in a combined single-frequency GPS+BDS solution. Full article
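Integer ambiguity resolution reduces to an integer least-squares problem: minimise the weighted squared distance between an integer vector and the float solution. A brute-force search over a small box, as sketched below, is only viable in low dimensions; HADE-GWO (like LAMBDA and M-LAMBDA) targets the same objective efficiently. The float solution and weight matrix here are illustrative.

```python
import itertools

def ils_brute_force(a_float, W, radius=2):
    """Minimise (a - a_float)^T W (a - a_float) over integers near the rounded float fix."""
    centre = [round(v) for v in a_float]
    best, best_cost = None, float("inf")
    for offset in itertools.product(range(-radius, radius + 1), repeat=len(centre)):
        a = [c + o for c, o in zip(centre, offset)]
        d = [ai - fi for ai, fi in zip(a, a_float)]
        cost = sum(d[i] * W[i][j] * d[j]
                   for i in range(len(d)) for j in range(len(d)))
        if cost < best_cost:
            best, best_cost = a, cost
    return best, best_cost

a_float = [2.3, -1.1]                 # illustrative float ambiguity estimate
W = [[4.0, 1.0], [1.0, 2.0]]          # illustrative inverse-covariance weight matrix
a_fixed, cost = ils_brute_force(a_float, W)
```

The candidate count grows as (2·radius+1)^n, which is exactly why 6- and 12-dimensional problems like those tested in the paper require smarter search strategies than enumeration.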

19 pages, 31453 KB  
Article
Performance Evaluation of Burn Area Indices for Effective Fire Detection Using Sentinel-2 Satellite Imagery
by Juan C. Valdiviezo-Navarro, Miguel Ángel Castillo-Santiago, Alejandro Téllez-Quiñones and Alejandra A. López-Caloca
Algorithms 2026, 19(2), 157; https://doi.org/10.3390/a19020157 - 16 Feb 2026
Abstract
In recent years, different spectral indices have been adapted or proposed for burn area (BA) extraction from satellite imagery. Many such indices have been particularly designed for specific satellite sensors, which could limit their applicability to other platforms. This research aims to explore the performance of spectral indices for burn area detection and post-fire recovery evaluation tasks in forest ecosystems. For this purpose, nine vegetation and burn area indices, commonly used in the current literature, were chosen to perform different experiments using Sentinel-2 images collected from three study areas characterised by large fire events. A separability analysis using the Spectral Discrimination index (SDI) led us to determine that the A New Burned Area Index (ABAI), the Normalised Burn Ratio Plus (NBR+), and the Normalised Burn Ratio (NBR) indices were capable of discriminating burn areas when clouds and shadows were present in the imagery. Moreover, a short-term time series analysis allowed the identification of particular spectral index methods that could be useful for post-fire recovery evaluation in forest ecosystems. Full article
(This article belongs to the Special Issue Algorithms and Application for Spatiotemporal Data Processing)
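The burn indices compared in this study reduce to simple band ratios. For example, NBR is conventionally computed from NIR and SWIR reflectance (for Sentinel-2, typically bands 8 and 12), and dNBR contrasts pre- and post-fire values; the reflectance numbers below are illustrative.

```python
# Normalised Burn Ratio and its pre/post-fire difference (dNBR).
# Reflectance values are illustrative, not from the study areas.

def nbr(nir, swir):
    return (nir - swir) / (nir + swir)

def dnbr(nbr_pre, nbr_post):
    # Positive dNBR indicates vegetation loss, i.e. burn severity.
    return nbr_pre - nbr_post

pre  = nbr(nir=0.45, swir=0.15)   # healthy vegetation: high NBR
post = nbr(nir=0.20, swir=0.35)   # burned surface: low or negative NBR
severity = dnbr(pre, post)
```

Burned surfaces reflect less in the NIR and more in the SWIR than healthy vegetation, which is why NBR drops sharply after a fire and dNBR serves as a severity proxy.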

27 pages, 2855 KB  
Article
Research on Workshop Dynamic Scheduling Method Considering Equipment Occupation Under Emergency Insertion Order
by Xuan Su, Jitai Han, Tongtong Gu, Junjie Yu and Weimin Ma
Algorithms 2026, 19(2), 156; https://doi.org/10.3390/a19020156 - 16 Feb 2026
Abstract
With the increasing demand for personalized services in the market, manufacturing enterprises face frequent emergency order insertion and equipment resource shortages, and traditional scheduling methods lack flexibility. This article focuses on the workshop scheduling problem under emergency insertion disturbance and constructs a dynamic scheduling optimization method considering equipment occupancy status. Firstly, a dynamic scheduling framework is proposed, and a real-time status model is established to monitor emergency insertion and equipment occupancy status in real time. An event-driven dynamic scheduling mechanism is also constructed. Secondly, with the optimization objective of minimizing the maximum completion time, a mixed integer programming model is established, and an improved genetic simulated annealing algorithm is proposed to solve it. Finally, the proposed method was validated using a standard case set and real production scenarios. The experimental results show that the proposed method outperforms similar algorithms at three different problem scales. In three emergency insertion scenarios, the proposed method reduces the disturbance that order insertion causes to the original plan while ensuring equipment utilization, verifying the practicality and effectiveness of the proposed dynamic scheduling method. Full article
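Genetic simulated-annealing hybrids like the one described rely on the classical Metropolis acceptance rule: a worse makespan is accepted with probability exp(-delta/T), which decays as the temperature T cools. The costs and temperatures below are illustrative; a real solver would evaluate actual schedule makespans.

```python
import math, random

# Metropolis acceptance rule used by simulated-annealing-style schedulers.
# Costs and temperatures are illustrative.

def accept(current_cost, candidate_cost, T, rng):
    if candidate_cost <= current_cost:
        return True                                    # always take improvements
    return rng.random() < math.exp(-(candidate_cost - current_cost) / T)

rng = random.Random(42)
# At high temperature, uphill moves are accepted often...
hot  = sum(accept(100.0, 105.0, T=50.0, rng=rng) for _ in range(1000))
# ...at low temperature, almost never.
cold = sum(accept(100.0, 105.0, T=0.5, rng=rng) for _ in range(1000))
```

Early acceptance of worse schedules lets the search escape local optima created by the genetic operators; the cooling schedule then gradually turns the search greedy.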

23 pages, 6166 KB  
Article
Efficient Multivariate Time Series Forecasting with SC-TWRNet: Combining Adaptive Multi-Resolution Wavelet and Parallelizable Decomposition
by Yu Chen and Hanshen Li
Algorithms 2026, 19(2), 155; https://doi.org/10.3390/a19020155 - 15 Feb 2026
Abstract
Long-term multivariate time series forecasting serves as a fundamental analytical tool across diverse domains, such as energy management, transportation analysis, and meteorology. However, conventional modeling paradigms often yield suboptimal results as they fail to adequately capture non-stationarity and multi-scale temporal correlations. While frequency-domain methods offer theoretical clarity, representative efficient spectral-domain architectures often rely on magnitude-based spectral pruning to ensure efficiency, inadvertently discarding high-frequency transient signals essential for non-stationary forecasting. To address these limitations, we propose the Structural Component-based Temporal Wavelet-Refine Network (SC-TWRNet), a framework that orchestrates adaptive wavelet filtering with explicit structural temporal decomposition. The architecture is anchored by the Adaptive Multi-Resolution Wavelet (AMRW) filter, designed to generate time-frequency representations while maintaining linear computational complexity. Concurrently, a structural temporal decomposition module decouples the input stream into distinct trend, seasonal, and residual components for targeted modeling. Extensive experiments on eight standard datasets demonstrate that SC-TWRNet achieves superior predictive accuracy compared to state-of-the-art baselines while maintaining linear computational complexity for efficient high-dimensional modeling. Full article
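The structural decomposition step can be illustrated with the classical procedure it generalises: split a series into trend (moving average), seasonal (per-phase means of the detrended series), and residual components, each then modelled separately. The window, period, and data below are illustrative, and SC-TWRNet's actual module is learned, not this fixed recipe.

```python
# Classical trend/seasonal/residual decomposition, as a sketch of the idea
# behind structural temporal decomposition. Data and period are illustrative.

def decompose(series, period):
    n = len(series)
    # Trend: centred moving average over roughly one period.
    trend = []
    for i in range(n):
        window = series[max(0, i - period // 2): i + period // 2 + 1]
        trend.append(sum(window) / len(window))
    detrended = [x - t for x, t in zip(series, trend)]
    # Seasonal: mean detrended value at each phase of the period.
    seasonal_means = [sum(detrended[p::period]) / len(detrended[p::period])
                      for p in range(period)]
    seasonal = [seasonal_means[i % period] for i in range(n)]
    # Residual: whatever trend and seasonality do not explain.
    residual = [x - t - s for x, t, s in zip(series, trend, seasonal)]
    return trend, seasonal, residual

series = [10, 12, 14, 12, 11, 13, 15, 13, 12, 14, 16, 14]
trend, seasonal, residual = decompose(series, period=4)
```

By construction the three components sum back to the original series, which is the property that lets each stream be forecast independently and recombined.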

18 pages, 4973 KB  
Project Report
Data Management and Data Services in Large Collaborative Projects—DiverSea Experience
by Vassil Vassilev, Georgi Petkov, Boris Kraychev, Stoyan Haydushki, Stoyan Nikolov, Viktor Sowinski-Mydlarz, Ensiye Kiyamousavi, Nikolay Shivarov and Denitsa Stoilova
Algorithms 2026, 19(2), 154; https://doi.org/10.3390/a19020154 - 15 Feb 2026
Abstract
Collaborative projects under the Horizon Europe Framework Program of the European Union typically involve a large number of partners from multiple countries. Data-centric projects among them often require integration of disparate data source formats and collection methods, leading to complex data management architectures and policies. This article is an extended version of an article presented at the 1st International Conference on Big Data Analytics and Applications (BDAA’2025). It explores design decisions, organisational principles, and technological solutions to address these challenges, focusing on the integration of disparate data sources and the hybridisation of data services. This experience was gathered while working on DiverSea, a project dedicated to the analysis of biodiversity dynamics along European coastlines—ranging from the Black Sea to the Mediterranean and the North Sea. While grounded in established technologies, the project’s takeaways offer valuable insights for environmental data projects across aquatic, terrestrial, and atmospheric domains. Full article
(This article belongs to the Special Issue Blockchain and Big Data Analytics: AI-Driven Data Science)

33 pages, 4353 KB  
Article
Probability Distribution Tree-Based Dishonest-Participant-Resistant Visual Secret Sharing Using Linearly Polarized Shares
by Shuvroo JadidAhabab and Laxmisha Rai
Algorithms 2026, 19(2), 153; https://doi.org/10.3390/a19020153 - 14 Feb 2026
Abstract
With the rapid growth of data transmission and visual encryption technologies, Visual Secret Sharing (VSS) has become an important technique for image-based information protection. However, many existing VSS schemes remain vulnerable to dishonest participants who attempt to recover secret images through unauthorized stacking or manipulation of shares. To address this issue, this paper proposes a dishonest-participant-resistant VSS scheme based on linearly polarized shares and Probability Distribution Trees (PDTs). The proposed method embeds both secret and fake images into polarized shares, such that any unauthorized stacking of ordinary shares produces a visually plausible fake image or random noise, while only stacking that includes the master share under a predefined optical ordering reveals the true secret image. Image binarization and probability-guided polarization assignment are employed to improve computational efficiency and increase uncertainty against adaptive attacks. In addition to visual inspection and contrast analysis, peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and visual information fidelity (VIF) are used as complementary metrics to distinguish authorized reconstructions from unauthorized and partial ones. Experimental results show that authorized reconstructions achieve high visual fidelity and perceptual recognizability, whereas unauthorized and partial reconstructions yield significantly degraded or misleading outputs, demonstrating effective suppression of information leakage and strong resistance against dishonest behavior. Consequently, the proposed scheme enhances security and practical usability compared with existing polarization-based VSS approaches. Full article
(This article belongs to the Special Issue Visual Attributes in Computer Vision Applications)

23 pages, 3619 KB  
Article
Unbalanced Data Mining Algorithms from IoT Sensors for Early Cockroach Infestation Prediction in Sewer Systems
by Joaquín Aguilar, Cristóbal Romero, Carlos de Castro Lozano and Enrique García
Algorithms 2026, 19(2), 152; https://doi.org/10.3390/a19020152 - 14 Feb 2026
Abstract
Predictive pest management in urban sewer networks represents a sustainable alternative to reactive, biocide-based methods. Using data collected through an IoT architecture and validated with manual inspections across eight manholes over 113 days, we implemented a rigorous comparative framework evaluating eleven data mining algorithms, including classical methods (KNN, SVM, decision trees) and advanced ensemble techniques (XGBoost, LightGBM, CatBoost) optimized for unbalanced datasets. Gradient boosting models with explicit handling of class imbalance—where the absence of pests exceeds 77% of observations—showed exceptional performance, achieving a Macro-F1 score above 0.92 and high precision in identifying the minority high-risk class. Explainability analysis using SHAP consistently revealed that elevated CO2 concentrations are the primary predictor of infestation, enabling early identification of critical zones. This study demonstrates that carbon dioxide (CO2) acts as the most robust bioindicator for predicting severe infestations of Periplaneta americana, significantly outperforming conventional environmental variables such as temperature and humidity. The implementation of the model in a real-time monitoring platform generates interpretable heat maps that support proactive and localized interventions, optimizing resource use and reducing dependence on biocides. Overall, this study presents a scalable, interpretable, and operationally viable predictive system designed for direct integration into municipal asset management workflows, transforming pest control from a reactive, labor-intensive process into a data-driven, proactive paradigm aligned with the Sustainable Development Goals. Full article
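Macro-F1, the headline metric here, averages per-class F1 scores with equal weight, so a rare high-risk class counts as much as the dominant "no pests" class. A minimal sketch with illustrative labels:

```python
# Macro-F1 from scratch: unweighted mean of per-class F1 scores.
# The labels below are invented to mimic a 77%-plus majority class.

def macro_f1(y_true, y_pred):
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return sum(f1s) / len(f1s)

y_true = ["none"] * 8 + ["high"] * 2
y_pred = ["none"] * 7 + ["high"] + ["high", "none"]
score = macro_f1(y_true, y_pred)
```

Plain accuracy on this toy example would be 0.8 despite half the high-risk cases being missed; Macro-F1 penalises exactly that failure mode, which is why it suits imbalanced infestation data.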

23 pages, 16195 KB  
Article
Integrating ShuffleNetV2 with Multi-Scale Feature Extraction and Coordinate Attention Combined with Knowledge Distillation for Apple Leaf Disease Recognition
by Wei-Chia Lo and Chih-Chin Lai
Algorithms 2026, 19(2), 151; https://doi.org/10.3390/a19020151 - 13 Feb 2026
Abstract
Misdiagnosing plant diseases often leads to a range of negative consequences, including the overuse of pesticides and unnecessary food waste. Traditionally, identifying diseases on plant leaves has relied on manual visual inspection, making it a complex and time-consuming task. Since the advent of convolutional neural networks, however, recognition performance for leaf diseases has improved significantly. Most contemporary studies that apply AI techniques to plant-leaf disease classification focus primarily on boosting accuracy, frequently overlooking the limitations posed by resource-constrained real-world environments. To address these challenges, this study employs knowledge distillation to enable small models to approximate the recognition capabilities of larger ones. We enhance a ShuffleNetV2-based model by integrating multi-scale feature extraction and a coordinate-attention mechanism, and we further improve the lightweight student model through knowledge distillation to boost its recognition performance. Experimental results show that the proposed model achieves 93.15% accuracy on the Plant Pathology 2021-FGVC8 dataset, utilizing only 0.36 M parameters and 0.0931 GFLOPs. Compared to the ResNet50 baseline, our architecture reduces parameters by nearly 98% while limiting the accuracy gap to a mere 1.6%. These results confirm the model’s ability to maintain robust performance with minimal computational overhead, providing a practical solution for precision agriculture on resource-limited edge devices. Full article
(This article belongs to the Special Issue Machine Learning for Pattern Recognition (3rd Edition))
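The distillation step described above can be illustrated with the standard temperature-scaled formulation; the abstract does not give the paper's exact loss, so the temperature, blending weight, and logits below are illustrative assumptions:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; a higher T softens the distribution.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, true_label, T=4.0, alpha=0.7):
    """Blend soft-target cross-entropy (teacher -> student) with the usual
    hard-label cross-entropy, the classic knowledge-distillation recipe."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # Soft loss is scaled by T^2 to keep gradient magnitudes comparable.
    soft = -np.sum(p_teacher * np.log(p_student + 1e-12)) * T * T
    hard = -np.log(softmax(student_logits)[true_label] + 1e-12)
    return alpha * soft + (1 - alpha) * hard

# Hypothetical logits for a 3-class leaf-disease example.
loss = distillation_loss([2.0, 0.5, -1.0], [3.0, 1.0, -2.0], true_label=0)
```

Only the blending weight `alpha` and temperature `T` need tuning; the student is then trained to minimize this loss over the dataset.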
25 pages, 1591 KB  
Article
Leveraging Semi-Markov Models to Identify Anomalies of Activities of Daily Living in Smart Homes Processes
by Eman Shaikh, Sally McClean, Zeeshan Tariq, Bryan Scotney and Nazeeruddin Mohammad
Algorithms 2026, 19(2), 150; https://doi.org/10.3390/a19020150 - 12 Feb 2026
Viewed by 278
Abstract
Stochastic Process Mining, in particular using Markov processes, is used to represent uncertainty and variability in Activities of Daily Living (ADLs). However, Markov models inherently assume that the time spent in each state follows an exponential distribution, which presents a significant challenge when modelling the real-life complexities of ADLs. Therefore, this paper employs semi-Markov models on publicly available ADL event logs to model state durations, with results validated via goodness-of-fit tests (Kullback–Leibler, Kolmogorov–Smirnov, Cramér–von Mises). Synthetic durations are generated using the inverse transform sampling technique. To simulate dementia-related behaviours, the weights of the mixture model are altered to reflect prolonged durations in napping, toileting, and meal and drink preparation. These anomalies are then detected using log-likelihood ratio and chi-square tests. Experimental results demonstrate that the proposed approach can reliably identify abnormal ADL durations, offering a validated framework for the early detection of behavioural shifts and showcasing the effectiveness of duration-based anomaly detection in ADLs. By identifying such anomalies, our work aims to detect deterioration in the smart home resident’s condition, focusing in particular on their ability to execute different ADLs. Full article
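The duration-generation step, drawing a mixture component and then inverting its CDF, can be sketched as follows; the exponential mixture, its weights, and the "prolonged" shift are illustrative assumptions, not the distributions fitted in the paper:

```python
import math
import random

def sample_duration(weights, rates, rng):
    """Draw one state duration from an exponential mixture: pick a component
    by weight, then apply the inverse transform of that component's CDF."""
    u = rng.random()
    acc, k = 0.0, 0
    for k, w in enumerate(weights):          # choose mixture component
        acc += w
        if u <= acc:
            break
    v = rng.random()
    # Inverse of the exponential CDF: F^{-1}(v) = -ln(1 - v) / rate
    return -math.log(1.0 - v) / rates[k]

rng = random.Random(42)
# Prolonged behaviour is simulated by shifting weight onto the slow component.
normal    = [sample_duration([0.8, 0.2], [1.0, 0.1], rng) for _ in range(5000)]
prolonged = [sample_duration([0.3, 0.7], [1.0, 0.1], rng) for _ in range(5000)]
mean_normal = sum(normal) / len(normal)
mean_prolonged = sum(prolonged) / len(prolonged)
```

Shifting the mixture weights raises the mean duration markedly, which is exactly the kind of deviation the log-likelihood ratio and chi-square tests can then flag.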
30 pages, 2492 KB  
Article
A Hybrid Deep Reinforcement Learning Framework for Vehicle Path Optimization with Time Windows
by Zhiguo Xiao, Changgen Li, Junli Liu and Xinyao Cao
Algorithms 2026, 19(2), 149; https://doi.org/10.3390/a19020149 - 11 Feb 2026
Viewed by 304
Abstract
The vehicle routing problem with time windows (VRPTW) is a core challenge in logistics optimization, requiring the minimization of transportation costs under constraints such as time windows and vehicle capacity. Deep reinforcement learning (DRL) provides an effective approach for solving such complex combinatorial optimization problems. However, existing DRL methods still suffer from shortcomings, including insufficient modeling of spatiotemporal correlations among customer nodes, inadequate capture of path temporal dependencies, and policy exploration prone to local optima. To address these issues, this paper proposes an end-to-end hybrid DRL framework: the encoder employs a graph attention network (GATv2) with adaptive gating to effectively model the coupling between customer spatial proximity and time window constraints; the decoder integrates multi-head attention (MHA) and a dynamic context-aware long short-term memory network (LSTM) to synergistically enhance the overall quality and constraint feasibility of route solutions; during the training phase, an improved proximal policy optimization (PPO) algorithm and a constraint-aware composite reward function are used to enhance optimization stability. Experiments on random instances, Solomon benchmark datasets, and real-world logistics datasets show that, compared to mainstream DRL methods and classical heuristic algorithms, the proposed framework reduces transportation costs by 2–10%, achieves a demand fulfillment rate exceeding 99%, and exhibits a performance degradation of only 3.2% in cross-distribution testing. This study provides an integrated DRL solution paradigm for combinatorial optimization problems with complex constraints, promoting the application of DRL in the field of intelligent logistics. Full article
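The constraints the framework must satisfy can be made concrete with a minimal feasibility check for a single route; the instance data, the per-customer travel-time encoding, and the omission of the return leg to the depot are simplifying assumptions:

```python
def route_feasible(route, depot_due, capacity, customers):
    """Check one vehicle route against capacity and time-window constraints.
    customers: id -> (demand, ready, due, service, travel_from_previous).
    A real instance would use a distance matrix; travel times here are
    pre-attached per customer for brevity."""
    t, load = 0.0, 0.0
    for c in route:
        demand, ready, due, service, travel = customers[c]
        t += travel
        t = max(t, ready)        # wait if the vehicle arrives early
        if t > due:              # time-window violation
            return False
        t += service
        load += demand
        if load > capacity:      # capacity violation
            return False
    return t <= depot_due        # finish by the depot's due time (return leg omitted)

# Hypothetical 2-customer instance: id -> (demand, ready, due, service, travel).
customers = {1: (2, 0, 8, 1, 3), 2: (3, 5, 12, 1, 2)}
ok = route_feasible([1, 2], depot_due=20, capacity=10, customers=customers)
bad = route_feasible([2, 1], depot_due=20, capacity=10, customers=customers)
```

Visiting the customers in the wrong order misses customer 1's window, which is the kind of infeasibility the constraint-aware reward function penalizes during training.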
19 pages, 644 KB  
Article
A Patch-Based State-Space Hybrid Network for Container Resource Usage Forecasting
by Zhilong Song, Xiangguo Yin, Chencheng Li, He Ba and Lin Li
Algorithms 2026, 19(2), 148; https://doi.org/10.3390/a19020148 - 11 Feb 2026
Viewed by 295
Abstract
Accurate forecasting of container resource usage is crucial for efficient resource scheduling and ensuring Quality of Service (QoS) in cloud data centers. The inherent complexity of container workloads, characterized by strong temporal dependencies, multivariate correlations, and non-stationarity, challenges existing forecasting models, which often fail to efficiently capture both fine-grained local patterns and global trends. To address this gap, this paper proposes a novel Patch-based State-space Hybrid Network (PSH). PSH features a dual-branch architecture: a Local Transformer Path to model complex short-range dependencies and a Global Mamba Path, leveraging a State-Space Model (SSM) with linear complexity, to efficiently capture long-range dependencies. This method uses an initial patching mechanism to reduce sequence length, which lowers computational overhead and supports efficient feature processing, and a cross-attention fusion module to integrate representations from the two paths. The fusion module enables bidirectional interaction between the two paths: global context from the Global Mamba Path refines local features from the Local Transformer Path, balancing the model’s ability to capture both local patterns and global trends while maintaining high computational efficiency. Extensive experiments on the large-scale, real-world Alibaba Cluster Traces 2018 dataset demonstrate that PSH significantly outperforms existing state-of-the-art forecasting models in terms of accuracy and robustness. Full article
(This article belongs to the Special Issue AI and Computational Methods in Engineering and Science: 2nd Edition)
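The initial patching mechanism, which shortens the token sequence before the dual branches, can be sketched as follows; the patch length, stride, and three-metric series are illustrative assumptions:

```python
import numpy as np

def patchify(series, patch_len, stride):
    """Split a (T, C) multivariate series into overlapping patches of shape
    (num_patches, patch_len, C); the token count drops from T to num_patches,
    which is what lowers the downstream attention/SSM cost."""
    T = series.shape[0]
    starts = range(0, T - patch_len + 1, stride)
    return np.stack([series[s:s + patch_len] for s in starts])

# 96 time steps of 3 hypothetical container metrics (CPU, memory, network).
x = np.random.default_rng(0).normal(size=(96, 3))
patches = patchify(x, patch_len=16, stride=8)
```

Here 96 raw steps become 11 patch tokens, an order-of-magnitude shorter sequence for the Transformer and Mamba paths to process.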
29 pages, 2553 KB  
Article
Adaptive Path Planning for Autonomous Underwater Vehicle (AUV) Based on Spatio-Temporal Graph Neural Networks and Conditional Normalizing Flow Probabilistic Reconstruction
by Guoshuai Li, Jinghua Wang, Jichuan Dai, Tian Zhao, Danqiang Chen and Cui Chen
Algorithms 2026, 19(2), 147; https://doi.org/10.3390/a19020147 - 11 Feb 2026
Viewed by 242
Abstract
In underwater reconnaissance and patrol, an AUV has to sense and judge traversability in cluttered areas that include reefs, cliffs, and seabed infrastructure. A narrow sonar field of view, occlusion, and current-driven disturbances leave the vehicle with local, time-varying information, so decisions are made with incomplete and uncertain observations. A path-planning framework is built around two coupled components: spatiotemporal graph neural network prediction and conditional normalizing flow (CNF)-based probabilistic environment reconstruction. Forward-looking sonar and inertial navigation system (INS) measurements are fused online to form a local environment graph with temporal encoding. Cross-temporal message passing captures how occupancy and maneuver patterns evolve, which supports path prediction under dynamic reachability and collision-avoidance constraints. For regions that remain unobserved, CNF performs conditional generation from the available local observations, producing probabilistic completion and an explicit uncertainty output. Conformal calibration then maps model confidence to credible intervals with controlled miscoverage, giving a consistent probabilistic interface for risk budgeting. To keep pace with ocean currents and moving targets, edge weights and graph connectivity are updated online as new observations arrive. Compared with Informed Rapidly exploring Random Tree star (Informed RRT*), D* Lite, Soft Actor-Critic (SAC), and Graph Neural Network-Probabilistic Roadmap (GNN-PRM), the proposed method achieves a near 100% success rate at 20% occlusion and maintains about an 80% success rate even under 70% occlusion. In dynamic obstacle scenarios, it yields about a 4% collision rate at low speeds and keeps the collision rate below 20% when obstacle speed increases to 3 m/s. Ablation studies further demonstrate that temporal modeling improves success rate by about 7.1%, CNF-based probabilistic completion boosts success rate by about 13.2% and reduces collisions by about 17%, while conformal calibration reduces coverage error by about 6.6%, confirming robust planning under heavy occlusion and time-varying uncertainty. Full article
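The conformal-calibration step, mapping model confidence to intervals with controlled miscoverage, can be illustrated with a plain split-conformal sketch; the Gaussian residuals and the absolute-error score are illustrative assumptions, not the paper's calibration scheme:

```python
import numpy as np

def conformal_radius(cal_errors, alpha=0.1):
    """Split-conformal radius: the ceil((n+1)(1-alpha))/n empirical quantile
    of the calibration scores yields intervals with ~(1-alpha) coverage."""
    n = len(cal_errors)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return float(np.quantile(np.abs(cal_errors), min(q, 1.0)))

rng = np.random.default_rng(1)
cal = rng.normal(0.0, 1.0, 500)            # held-out calibration residuals
radius = conformal_radius(cal, alpha=0.1)  # target 90% coverage
test_resid = rng.normal(0.0, 1.0, 2000)
coverage = float(np.mean(np.abs(test_resid) <= radius))
```

The empirical coverage lands near the 90% target regardless of the underlying model, which is what makes the interface usable for risk budgeting.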
22 pages, 3651 KB  
Article
Preliminary Exploration of a Gait Alteration Index to Detect Abnormal Walking Through a RGB-D Camera and Human Pose Estimation
by Gianluca Amprimo, Lorenzo Priano, Luca Vismara and Claudia Ferraris
Algorithms 2026, 19(2), 146; https://doi.org/10.3390/a19020146 - 11 Feb 2026
Viewed by 201
Abstract
Quantitative gait analysis is essential for assessing motor function, as altered walking patterns are linked to functional decline and increased fall risk. Although recent advances in markerless motion analysis and human pose estimation enable gait feature extraction from low-cost video systems compared to expensive motion analysis laboratories, clinical translation remains limited by fragmented descriptors or approaches that directly regress clinical scores, often reducing interpretability and generalizability. We propose the Gait Alteration Index (GAI), an interpretable index that quantifies gait abnormality as a functional deviation from typical walking patterns, independently of specific pathologies. The GAI is computed from a small set of gait parameters and integrates three complementary domains: spatio-temporal characteristics, surrogates of dynamic stability, and arm swing behaviour, providing both a global index and domain-specific sub-indices. Preliminary evaluation on a heterogeneous cohort using clinician-derived assessments showed that the GAI captures clinically meaningful gait alterations (Spearman’s ρ=0.65), with the strongest agreement for spatio-temporal features (ρ=0.77). These results suggest that the GAI is a promising low-cost and interpretable tool for objective gait assessment, screening, and longitudinal monitoring. Full article
16 pages, 3339 KB  
Article
PolygonTailor: A Parallel Algorithm for Polygon Boolean Operations in IC Layout Processing
by Zhirui Niu, Ruian Ji, Guan Wang, Siao Guo, Shijie Ye and Lan Chen
Algorithms 2026, 19(2), 145; https://doi.org/10.3390/a19020145 - 10 Feb 2026
Viewed by 220
Abstract
Polygon Boolean operations are widely used in integrated circuit (IC) layout processing tasks such as design rule checking (DRC) and optical proximity correction (OPC). Single-threaded Boolean algorithms cannot meet the efficiency demand of modern IC layouts, necessitating parallel algorithms for acceleration. However, existing parallel algorithms exhibit unsatisfactory parallel speedups and limited scalability, which typically stem from an inefficient merging phase that uses generic Boolean OR operations and redundantly reprocesses all edges of polygons on grid boundaries. To solve these problems, we propose PolygonTailor, a novel parallel algorithm for polygon Boolean operations that employs a data-parallel strategy and a new merging approach performing incremental XOR operations solely on edges along grid boundaries, eliminating redundant computations in previous methods. This innovation drastically reduces the grid-merging time by 1–2 orders of magnitude. Compared with the parallel implementation from a commercial layout processing tool, PolygonTailor is on average 5.08× faster and up to 14.36× faster for OR operations that generate highly complex polygons. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
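The core idea of the merging phase, XOR-cancellation restricted to edges on the shared grid boundary, can be illustrated with a toy sketch; the edge representation and the two-tile rectangle below are illustrative assumptions, not the paper's data structures:

```python
def norm(edge):
    # Orientation-free key so the same segment reported by two tiles matches.
    return tuple(sorted(edge))

def merge_boundary_edges(tile_a, tile_b):
    """XOR merge: a segment reported by both tiles lies on the shared grid
    boundary and is interior to the merged shape, so it cancels; a segment
    reported by exactly one tile survives as true outline."""
    return {norm(e) for e in tile_a} ^ {norm(e) for e in tile_b}

# Two grid tiles split a 2x1 rectangle at x = 1; each reports its own outline.
left  = [((0, 0), (1, 0)), ((1, 0), (1, 1)), ((1, 1), (0, 1)), ((0, 1), (0, 0))]
right = [((1, 0), (2, 0)), ((2, 0), (2, 1)), ((2, 1), (1, 1)), ((1, 1), (1, 0))]
merged = merge_boundary_edges(left, right)
```

The shared segment at x = 1 cancels and the six remaining edges trace the merged rectangle's outline; because only boundary edges participate, interior edges never need reprocessing.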
26 pages, 15341 KB  
Article
A Multimodal Three-Channel Bearing Fault Diagnosis Method Based on CNN Fusion Attention Mechanism Under Strong Noise Conditions
by Yingyong Zou, Chunfang Li, Yu Zhang, Zhiqiang Si and Long Li
Algorithms 2026, 19(2), 144; https://doi.org/10.3390/a19020144 - 10 Feb 2026
Viewed by 308
Abstract
Bearings, as core components of mechanical equipment, play a critical role in ensuring equipment safety and reliability. Early fault detection holds significant importance. Addressing the challenges of insufficient robustness in bearing fault diagnosis under industrial high-noise conditions and the difficulty of extracting fault features from a single modality, this study proposes a three-channel multimodal fault diagnosis method that integrates a Convolutional Auto-Encoder (CAE) with a dual attention mechanism (M-CNNBiAM). This approach provides an effective technical solution for the precise diagnosis of bearing faults in high-noise environments. To suppress substantial noise interference, a CAE denoising module was designed to filter out intense noise, providing high-quality input for subsequent diagnostic networks. To address the limitations of single-modal feature extraction and restricted generalization capabilities, a three-channel time–frequency signal joint diagnosis model combining the Continuous Wavelet Transform (CWT) with an attention mechanism was proposed. This approach enables deep mining and efficient fusion of multi-domain features, thereby enhancing fault diagnosis accuracy and generalization capabilities. Experimental results demonstrate that the designed CAE module maintains excellent noise reduction performance even under −10 dB strong noise conditions. When combined with the proposed diagnostic model, it achieves an average diagnostic accuracy of 98% across both the CWRU and self-test datasets, demonstrating outstanding diagnostic precision. Furthermore, under −4 dB noise conditions, it achieves a 94% diagnostic accuracy even without relying on the CAE denoising module. With a single training cycle taking only 6.8 s, it balances training efficiency and diagnostic performance, making it well-suited for real-time, reliable bearing fault diagnosis in industrial environments with high noise levels. Full article
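The noise conditions quoted above (−10 dB, −4 dB) refer to the signal-to-noise ratio of the corrupted input; a minimal sketch of injecting white Gaussian noise at a target SNR, with a synthetic sine standing in for a bearing vibration signal:

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, rng):
    """Corrupt a signal with white Gaussian noise scaled so the resulting
    signal-to-noise ratio equals snr_db (negative dB = noise dominates)."""
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
    return signal + noise

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4096)
clean = np.sin(2 * np.pi * 157 * t)   # stand-in for a fault signature
noisy = add_noise_at_snr(clean, snr_db=-10, rng=rng)
snr_est = 10 * np.log10(np.mean(clean ** 2) / np.mean((noisy - clean) ** 2))
```

At −10 dB the noise carries ten times the signal's power, which is the regime the CAE denoising module is evaluated in.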
30 pages, 2403 KB  
Article
Gamification in Education and Its Impact on Student Academic Performance: A Conceptual Model Based on Systematic Literature Review and PLS-SEM Analysis
by Ahmad Almufarreh
Algorithms 2026, 19(2), 143; https://doi.org/10.3390/a19020143 - 10 Feb 2026
Viewed by 1273
Abstract
Enhancing student academic performance remains a critical challenge for educators and administrators. Among various interventions, gamification has gained increasing attention as a promising approach to improving learning outcomes. However, existing research on the role of gamification in education remains fragmented due to its multidisciplinary nature. This study aims to synthesize current knowledge through a systematic review in order to develop and validate a conceptual framework linking gamification to student academic performance. By following the PRISMA framework, 62 relevant studies were reviewed, highlighting several recurring themes, including game design and development, student performance outcomes, and critical aspects of gamification such as cognitive development, motivation, and empowerment. Key mechanisms identified include active learning, personalized and adaptive learning, and collaborative interaction. Reported outcomes of gamification interventions include higher test scores, reduced anxiety and stress, and increased engagement and positive attitudes toward learning. Building on these findings, the resulting conceptual framework was validated through empirical research. Data were collected from 289 students from Saudi Arabia using a structured survey instrument, and Partial Least Squares Structural Equation Modeling (PLS-SEM) was employed for analysis. The results provide empirical support for the proposed framework, confirming gamification as a significant driver of improved student academic performance. The findings provide practical implications for educators and policymakers seeking to leverage gamification as a strategic tool for enhancing student learning outcomes. Full article
(This article belongs to the Special Issue Artificial Intelligence in Education: Innovations and Implications)
24 pages, 7462 KB  
Article
Graph-Based Pattern Restoration for Occlusion-Robust Human Pose Estimation in Crowded Scenes
by Mansoor Iqbal, Syed Zarak Shah and Zahid Ullah
Algorithms 2026, 19(2), 142; https://doi.org/10.3390/a19020142 - 10 Feb 2026
Viewed by 315
Abstract
Human pose estimation is a core computer vision task with broad applications, yet its performance degrades significantly in crowded scenes and under heavy occlusion due to missing or unreliable visual evidence. To address this limitation, this work reformulates occluded pose estimation as a structured pattern restoration problem and proposes a graph-based framework that models the human body as a relational skeletal graph. Starting from noisy or incomplete keypoint detections, the proposed method employs a graph neural network to propagate contextual information from visible joints to occluded ones through iterative message passing. Geometry-aware constraints on bone lengths and joint angles are integrated to enforce anatomical plausibility, while an occlusion-aware prediction mechanism distinguishes visible from missing joints during inference. Experiments on COCO-Keypoints, CrowdPose, and OCHuman demonstrate consistent improvements over strong baselines, particularly under moderate and severe occlusion, confirming that explicit structural reasoning enables more accurate, stable, and reliable human pose estimation in real-world, occlusion-heavy environments. Full article
(This article belongs to the Special Issue Machine Learning for Pattern Recognition (3rd Edition))
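The propagation of context from visible joints to occluded ones can be illustrated with a plain averaging scheme over the skeletal graph; the three-joint chain and mean-aggregation rule are illustrative assumptions, far simpler than the learned message functions in the paper:

```python
import numpy as np

def propagate(features, adjacency, visible, steps=3):
    """Restore occluded joints by repeatedly averaging their graph neighbours;
    visible joints keep their detected coordinates throughout."""
    feats = features.copy()
    deg = adjacency.sum(axis=1, keepdims=True)
    for _ in range(steps):
        msg = adjacency @ feats / np.maximum(deg, 1.0)   # mean over neighbours
        feats = np.where(visible[:, None], feats, msg)   # update occluded only
    return feats

# Toy 3-joint chain (shoulder - elbow - wrist); the elbow (index 1) is occluded
# and its detected coordinates are garbage.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
coords = np.array([[0.0, 0.0], [99.0, 99.0], [2.0, 0.0]])
vis = np.array([True, False, True])
restored = propagate(coords, A, vis)
```

The occluded elbow converges to the midpoint of its visible neighbours; a learned GNN replaces this fixed average with trainable messages and adds the bone-length and joint-angle constraints.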
19 pages, 388 KB  
Article
Scheduling with Multitasking and Outsourcing
by John Sum and Kevin I. J. Ho
Algorithms 2026, 19(2), 141; https://doi.org/10.3390/a19020141 - 9 Feb 2026
Viewed by 198
Abstract
In the presence of multitasking, a worker has to concurrently handle interruptions from the waiting jobs and routine jobs while processing a primary job. For over a decade, various studies in this research direction have been conducted aiming to figure out how jobs are scheduled so as to reduce the effects of multitasking. In this paper, two late-job problems in line with the classical late-job problems are tackled. In contrast to the classical setting in which all jobs must be completed, we suggest the idea of outsourcing. Some jobs are outsourced. Thus, the worker only processes the on-time jobs and handles the interruptions from the waiting jobs. Each outsourced job is assigned to a single freelancer to ensure that all jobs are completed on time. The overhead is the charges to the freelancers, i.e., the total outsourcing cost. If the service charges of all the jobs are the same, the late-job problem is called the total number of outsourcing jobs (TNOJ) problem, which is in line with the classical total number of late-job problems. If the service charges are different, the late-job problem is called the total weighted number of outsourcing jobs (TWNOJ) problem, which is in line with the classical total weighted number of late-job problems. For general settings, it is proved that the TNOJ problem is NP-hard and the TWNOJ problem is strongly NP-hard. If the interruption of a waiting job is proportional to its remaining processing time, the TNOJ problem can be solved in O(nlog(n)P)-time and the TWNOJ problem can be solved in O(nP2)-time, where n is the number of jobs and P denotes the sum of their processing times. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
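The classical late-job problem these formulations extend is solved by the Moore–Hodgson algorithm; a sketch of that single-machine baseline (without the multitasking interruptions or outsourcing costs studied in the paper) for reference:

```python
import heapq

def moore_hodgson(jobs):
    """Classical single-machine minimum-late-jobs problem: maximise the number
    of on-time jobs. jobs: list of (processing_time, due_date).
    Greedy: scan in earliest-due-date order; whenever the schedule runs late,
    evict the longest job scheduled so far."""
    heap, t, kept = [], 0, 0
    for p, d in sorted(jobs, key=lambda j: j[1]):  # EDD order
        heapq.heappush(heap, -p)                   # max-heap via negation
        t += p
        kept += 1
        if t > d:                  # late: drop the longest job so far
            t += heapq.heappop(heap)   # adds -max_p, so t decreases
            kept -= 1
    return kept

# Hypothetical instance: (processing_time, due_date) per job.
on_time = moore_hodgson([(2, 3), (2, 4), (3, 5), (4, 9)])
```

Runtime is O(n log n); the TNOJ/TWNOJ variants add interruption effects and per-job outsourcing charges, which is what pushes them to pseudo-polynomial dynamic programs.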
14 pages, 306 KB  
Article
Properties of Some Classes of Structured Minmaxmin Problems
by Narges Araboljadidi, Manlio Gaudioso, Giovanni Giallombardo and Giovanna Miglionico
Algorithms 2026, 19(2), 140; https://doi.org/10.3390/a19020140 - 9 Feb 2026
Viewed by 238
Abstract
Minmaxmin problems are well suited for representing some significant decision making problems, where both strategic and tactical decisions are to be made, at different points of time, in the presence of uncertain scenarios. We survey some basic properties and introduce some classes of structured minmaxmin problems. The main focus is on linear and bilinear minmaxmin problems, which reduce to classic nonsmooth optimization problems. Moreover, two classes of examples are introduced to highlight the practical role of such formulations. The first one is related to the optimal capacity planning of a production–distribution system, and the second one deals with product pricing and distribution in a profit-maximization framework. Finally, focusing on the capacity-planning and product-distribution problem, a computational study has been carried out to illustrate the practical performance of a cutting-plane and proximity-control algorithm for solving the resulting convex nonsmooth minmaxmin model. The numerical results confirm the robustness of the approach and its scalability with respect to both the network size and the number of scenarios. Full article
(This article belongs to the Special Issue Nonsmooth Optimization and Its Applications)
22 pages, 12186 KB  
Article
BIF-RCNN: Fusing Background Information for Rotated Object Detection
by Jianbin Zhao, Xing Xu, Shaoying Wang, Pengfei Zhang, Shengyi Shen, Hui Zeng, Xiangshuai Bu, Yiran Shen, Kaiwen Xue, Ping Zong, Guoxin Zhang, Zhonghong Ou, Meina Song and Yifan Zhu
Algorithms 2026, 19(2), 139; https://doi.org/10.3390/a19020139 - 9 Feb 2026
Viewed by 195
Abstract
Rotated object detection aims to achieve precise localization by strictly aligning bounding boxes with object orientations, thereby minimizing background interference. Existing methods predominantly focus on extracting intra-object features within rotated bounding boxes. However, these approaches often overlook the discriminative contextual information from the surrounding background, leading to classification ambiguity when internal features are indistinguishable. To address this limitation, we propose Background Information Fusion R-CNN (BIF-RCNN), a novel rotated object detection framework that strategically re-integrates the background context from the object’s horizontal enclosing region to validate its category, turning previously discarded “noise” into auxiliary discriminative cues. Specifically, we introduce a dual-level rotation-horizontal feature fusion module (DFM), which leverages horizontal bounding boxes enclosing the rotated objects to extract contextual background features. These features are then adaptively fused with the internal object features to enhance the overall representation capability of the model. In addition, we design a Prediction Difference and Entropy-Constrained Loss (PDE Loss), which guides the model to focus on hard-to-classify samples that are prone to confusion due to similar feature representations. This loss function improves the model’s robustness and discriminative power. Extensive experiments conducted on the DOTA benchmark dataset demonstrate the effectiveness of the proposed method. Notably, our approach achieves up to a 4.02% AP improvement in single-category detection performance compared to a strong baseline, highlighting its superiority in rotated object detection tasks. Full article
25 pages, 1947 KB  
Article
A Multi-Strategy Improved Dung Beetle Optimizer for the Kapur Entropy Multi-Threshold Image Segmentation Algorithm
by Jinjin Li, Yecai Guo, Meiyu Liang, Haiyan Long and Tianfei Zhang
Algorithms 2026, 19(2), 138; https://doi.org/10.3390/a19020138 - 9 Feb 2026
Viewed by 203
Abstract
To address the issues of detail loss and unstable segmentation quality in image segmentation, this paper proposes applying a multi-strategy improved dung beetle optimization algorithm to multi-threshold image segmentation. Thus, we have developed a multi-strategy improved dung beetle optimizer Kapur entropy multi-threshold image segmentation algorithm (MIDBO-KMIA). This algorithm enhances global search capability and convergence stability, improves segmentation accuracy and robustness, and addresses the problems of detail preservation and segmentation quality in complex scenarios. Firstly, Sobol sequences were adopted to initialize the population, enhancing its diversity. Secondly, a multi-stage perturbation update mechanism was introduced to prevent convergence to local optima and improve global exploration. Thirdly, the convergence precision was further improved by optimizing the hybrid dynamic switching mechanism and proposing dynamic mutation update and distance selection update strategies. Finally, the MIDBO algorithm was applied to Kapur entropy multi-threshold image segmentation, and experimental research was conducted using Peak Signal-to-Noise Ratio (PSNR), Structural SIMilarity index (SSIM), and Feature SIMilarity index (FSIM) as evaluation metrics. The experimental results demonstrate that the performance of the proposed MIDBO-KMIA algorithm is significantly better than that of competing algorithms and that it more effectively preserves detail and maintains segmentation quality in complex image scenes. Full article
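The Kapur entropy objective that the optimizer maximises can be written down directly; the synthetic bimodal histogram below is an illustrative assumption, and an exhaustive single-threshold search stands in for the dung beetle optimizer:

```python
import numpy as np

def kapur_entropy(hist, thresholds):
    """Kapur's criterion: sum of the Shannon entropies of the grey-level
    classes induced by the thresholds; the optimizer searches for the
    threshold set maximising this total."""
    p = hist / hist.sum()
    bounds = [0] + [t + 1 for t in sorted(thresholds)] + [len(p)]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            continue
        q = p[lo:hi] / w
        q = q[q > 0]
        total += -np.sum(q * np.log(q))
    return total

rng = np.random.default_rng(0)
# Synthetic bimodal image: dark background plus bright foreground.
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 10, 5000)])
hist, _ = np.histogram(np.clip(img, 0, 255), bins=256, range=(0, 255))
best_t = max(range(1, 255), key=lambda t: kapur_entropy(hist, [t]))
```

For several thresholds the search space grows combinatorially, which is why a metaheuristic such as MIDBO replaces exhaustive search.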
21 pages, 5257 KB  
Article
Usefulness of Wearable Devices for Monitoring Motor Activity in Patients with Early Myocardial Infarction
by Fabiola Boccuto, Ugo Lomoio, Salvatore De Rosa, Daniele Torella, Pierangelo Veltri and Pietro Hiram Guzzi
Algorithms 2026, 19(2), 137; https://doi.org/10.3390/a19020137 - 9 Feb 2026
Viewed by 357
Abstract
The rising incidence of myocardial infarction (MI) in individuals under 50 years of age underscores an urgent need for innovative rehabilitation strategies that extend beyond hospital care, empowering young patients to reclaim active lives through sustained physical activity and remote monitoring. Wearable health technologies hold transformative potential here, as studies demonstrate their ability to boost exercise capacity and daily step counts while reducing rehospitalizations in post-MI recovery. This study thus assesses the clinical value of wearable devices in remotely tracking motor activity among young adults during early MI rehabilitation. Using the SiDLY Care Pro wristband, continuous non-invasive measurements of heart rate, oxygen saturation, and physical activity were collected from 62 of 80 post-MI patients (<50 years) over seven days, alongside validated questionnaires (IPAQ, SF-36, DASS-21). Time-series clustering and principal component analysis characterized heart rate dynamics and activity patterns. Most participants showed sedentary behavior (2000–4000 steps/day), though self-reported health and psychological well-being were satisfactory. The device provided reliable, clinically meaningful data, particularly when linked to clinician feedback. Participants expressed an interest in using such technologies, especially if supported through reimbursement and professional guidance. Despite limitations—short monitoring timelines, small heterogeneous samples, and accuracy constraints—the findings suggest that wearable systems can enhance remote monitoring, patient engagement, and early intervention in post-MI care. Broader studies and supportive policies are recommended. Overall, integrating wearable technologies with professional oversight and patient participation may substantially improve recovery and outcomes for young MI survivors. Full article
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)
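The abstract's analysis pipeline (time-series clustering plus principal component analysis over daily heart-rate profiles) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' code: the two behavioral groups, the 24-hour sampling grid, and the hand-rolled k-means are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 24-hour heart-rate profiles (one sample per hour) for two
# hypothetical behavioral groups: "sedentary" (flat) and "active"
# (afternoon peak). These groups are illustrative, not from the study.
hours = np.arange(24)
sedentary = 70 + 3 * np.sin(2 * np.pi * hours / 24)
active = 70 + 20 * np.exp(-((hours - 14) ** 2) / 18)
X = np.vstack([
    sedentary + rng.normal(0, 2, (30, 24)),
    active + rng.normal(0, 2, (30, 24)),
])

# PCA via SVD on the centered data: the leading components summarize
# the dominant modes of variation across the daily profiles.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                      # project onto first two PCs
explained = S[:2] ** 2 / (S ** 2).sum()     # variance explained by PC1, PC2

# Minimal k-means (k=2) on the PCA scores to recover activity clusters.
centroids = scores[rng.choice(len(scores), 2, replace=False)]
for _ in range(50):
    labels = np.argmin(((scores[:, None] - centroids) ** 2).sum(-1), axis=1)
    centroids = np.array([scores[labels == k].mean(axis=0) for k in range(2)])
```

In practice the paper would work with real wristband recordings and likely a library clusterer; the sketch only shows how clustering in a low-dimensional PCA space can separate distinct daily activity patterns.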
19 pages, 311 KB  
Article
Investing in AI Interpretability, Control, and Robustness
by Maikel Leon
Algorithms 2026, 19(2), 136; https://doi.org/10.3390/a19020136 - 9 Feb 2026
Viewed by 372
Abstract
Artificial intelligence (AI) powers breakthroughs in language processing, computer vision, and scientific discovery; yet, the increasing complexity of frontier models makes their reasoning opaque. This opacity undermines public trust, complicates deployment in safety-critical settings, and frustrates compliance with emerging regulations. In response to initiatives such as the White House AI Action Plan, we synthesize the scientific foundations and policy landscape for interpretability, control, and robustness. We clarify key concepts and survey intrinsically interpretable and post-hoc explanation techniques, discuss human-centered evaluation and governance, and analyze how adversarial threats and distributional shifts motivate robustness research. An empirical case study compares logistic regression, random forests, and gradient boosting on a synthetic dataset with a binary sensitive attribute using accuracy, F1 score, and group-fairness metrics, and illustrates trade-offs between performance and fairness. We integrate ethical and policy perspectives, including recommendations from America's AI Action Plan and recent civil rights frameworks, and conclude with guidance for researchers, practitioners, and policymakers on advancing trustworthy AI. Full article
(This article belongs to the Special Issue AI-Driven Business Analytics Revolution)
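The case study described in the abstract (three classifiers evaluated on accuracy, F1, and a group-fairness metric over a synthetic dataset with a binary sensitive attribute) can be sketched as below. The data-generating process, model hyperparameters, and the choice of demographic parity as the fairness metric are assumptions for illustration; the paper's actual setup may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
A = rng.integers(0, 2, n)                        # binary sensitive attribute
X = rng.normal(size=(n, 4)) + 0.5 * A[:, None]   # features correlated with A
logit = X @ np.array([1.0, -0.5, 0.8, 0.3]) + 0.4 * A
y = (logit + rng.normal(0, 1, n) > 0.5).astype(int)

Xtr, Xte, ytr, yte, Atr, Ate = train_test_split(
    X, y, A, test_size=0.3, random_state=0)

def demographic_parity_gap(pred, a):
    """Absolute difference in positive-prediction rates between groups."""
    return abs(pred[a == 1].mean() - pred[a == 0].mean())

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(n_estimators=100, random_state=0),
    "gb": GradientBoostingClassifier(random_state=0),
}
results = {}
for name, model in models.items():
    pred = model.fit(Xtr, ytr).predict(Xte)
    results[name] = (accuracy_score(yte, pred),
                     f1_score(yte, pred),
                     demographic_parity_gap(pred, Ate))
```

Because the features and labels both depend on A, a model with higher accuracy can also show a larger parity gap, which is the performance-fairness trade-off the abstract refers to.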