Algorithms, Volume 19, Issue 1 (January 2026) – 92 articles

Cover Story: Artificial intelligence has quickly become one of the key tools of modern clinical medicine. This paper presents an up-to-date overview of AI applications in internal medicine, including cardiology, pulmonology, neurology, hepatology, and pancreatic diseases. The analyzed clinical studies and real-world implementations demonstrate that AI algorithms outperform traditional diagnostic and therapeutic methods in detecting atrial fibrillation, heart failure, and cancer, as well as in optimizing invasive treatment and palliative care. AI has evolved from an experimental technology into a tool with a real impact on patient prognosis, hospitalization rates, and healthcare system efficiency, although challenges related to validation and safety remain.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click its "PDF Full-text" link and open the file with the free Adobe Reader.
21 pages, 1401 KB  
Article
Embedding-Based Detection of Indirect Prompt Injection Attacks in Large Language Models Using Semantic Context Analysis
by Mohammed Alamsabi, Michael Tchuindjang and Sarfraz Brohi
Algorithms 2026, 19(1), 92; https://doi.org/10.3390/a19010092 - 22 Jan 2026
Viewed by 252
Abstract
Large Language Models (LLMs) are vulnerable to Indirect Prompt Injection Attacks (IPIAs), where malicious instructions are embedded within external content rather than direct user input. This study presents an embedding-based detection approach that analyses the semantic relationship between user intent and external content, enabling the early identification of IPIAs that conventional defences overlook. We also provide a dataset of 70,000 samples, constructed using 35,000 malicious instances from the Benchmark for Indirect Prompt Injection Attacks (BIPIA) and 35,000 benign instances generated using ChatGPT-4o-mini. Furthermore, we performed a comparative analysis of three embedding models, namely OpenAI text-embedding-3-small, GTE-large, and MiniLM-L6-v2, evaluated in combination with XGBoost, LightGBM, and Random Forest classifiers. The best-performing configuration using OpenAI embeddings with XGBoost achieved an accuracy of 97.7% and an F1-score of 0.977, matching or exceeding the performance of existing IPIA detection methods while offering practical deployment advantages. Unlike prevention-focused approaches that require modifications to the underlying LLM architecture, the proposed method operates as a model-agnostic external detection layer with an average inference time of 0.001 ms per sample. This detection-based approach complements existing prevention mechanisms by providing a lightweight, scalable solution that can be integrated into LLM pipelines without requiring architectural changes. Full article
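
The core mechanism, comparing an embedding of the user's intent against an embedding of retrieved external content, can be sketched as follows. The 4-dimensional vectors and the 0.5 threshold are illustrative stand-ins: in the paper, embedding models such as text-embedding-3-small produce the vectors and a trained classifier (e.g., XGBoost) replaces the fixed threshold.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def flag_possible_injection(intent_vec, content_vec, threshold=0.5):
    """Flag external content whose embedding drifts far from the user's intent.

    Illustrative only: the paper feeds the embeddings to a trained classifier
    such as XGBoost rather than using a fixed similarity threshold.
    """
    return cosine_similarity(intent_vec, content_vec) < threshold

# Toy embeddings: content aligned with the intent vs. content pulling elsewhere.
intent  = [0.9, 0.1, 0.0, 0.1]
benign  = [0.8, 0.2, 0.1, 0.1]
suspect = [0.0, 0.1, 0.9, 0.1]

print(flag_possible_injection(intent, benign))   # False: semantically aligned
print(flag_possible_injection(intent, suspect))  # True: semantic drift detected
```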

22 pages, 372 KB  
Review
A Structured Review of EEG-Based Machine Learning Approaches for Brain Age Prediction
by Ruslan Zhulduzbayev, Arian Ashourvan, Diana Arman, Alibek Bissembayev and Almira Kustubayeva
Algorithms 2026, 19(1), 91; https://doi.org/10.3390/a19010091 - 22 Jan 2026
Viewed by 151
Abstract
The determination of brain age based on electroencephalography (EEG) data has become widely developed with the spread of machine learning in recent years. In this research paper, we analyzed 21 articles published no earlier than 2015, focusing particularly on features, machine learning and deep learning models, and the validation process. The studies reviewed presented model performance on EEG data using machine learning or deep learning techniques. Deep convolutional and transformer-based models trained on well-curated features forecasted chronological age most precisely. In newborns, time–frequency and entropy-based characteristics showed good predictive power for the brain age index (BAI) and functional brain age (FBA). Consistently, spectral and nonlinear descriptors ranked among the most informative characteristics. Methodological rigor, meanwhile, differed: only a small number of studies used bias correction techniques, addressed statistical assumptions, or reported external validation. Preprocessing techniques also showed significant variation. Although EEG-based models have good accuracy, problems of interpretability and generalizability restrict their clinical and developmental use. Advancing this discipline will call for biologically based outcome definitions, uniform evaluation systems, and open source processing pipelines. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (4th Edition))

12 pages, 359 KB  
Article
Mathematical Approach for Ameliorated Inventory Models
by Scott Shu-Cheng Lin
Algorithms 2026, 19(1), 90; https://doi.org/10.3390/a19010090 - 22 Jan 2026
Viewed by 45
Abstract
Hwang developed inventory models with amelioration items and applied the graphical method to locate the optimal solution. In this study, we derive an analytical method to find two local maximum points and one local minimum point. Our maximum profit substantially exceeds Hwang's, whose maximum profit is only about 0.246% of ours. The local maximum point near the starting point (denoted as 3×10⁷) is almost impossible to discover by numerical methods, illustrating the effectiveness of our analytical method. Full article

17 pages, 1555 KB  
Article
Path Planning in Sparse Reward Environments: A DQN Approach with Adaptive Reward Shaping and Curriculum Learning
by Hongyi Yang, Bo Cai and Yunlong Li
Algorithms 2026, 19(1), 89; https://doi.org/10.3390/a19010089 - 21 Jan 2026
Viewed by 238
Abstract
Deep reinforcement learning (DRL) has shown great potential in path planning tasks. However, in sparse reward environments, DRL still faces significant challenges such as low training efficiency and a tendency to converge to suboptimal policies. Traditional reward shaping methods can partially alleviate these issues, but they typically rely on hand-crafted designs, which often introduce complex reward coupling, make hyperparameter tuning difficult, and limit generalization capability. To address these challenges, this paper proposes Curriculum-guided Learning with Adaptive Reward Shaping for Deep Q-Network (CLARS-DQN), a path planning algorithm that integrates Adaptive Reward Shaping (ARS) and Curriculum Learning (CL). The algorithm consists of two key components: (1) ARS-DQN, which augments the DQN framework with a learnable intrinsic reward function to reduce reward sparsity and dependence on expert knowledge; and (2) a curriculum strategy that guides policy optimization through a staged training process, progressing from simple to complex tasks to enhance generalization. Training also incorporates Prioritized Experience Replay (PER) to improve sample efficiency and training stability. CLARS-DQN outperforms baseline methods in task success rate, path quality, training efficiency, and hyperparameter robustness. In unseen environments, the method improves task success rate and average path length by 12% and 26%, respectively, demonstrating strong generalization. Ablation studies confirm the critical contribution of each module. Full article
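
For readers unfamiliar with reward shaping, the classic potential-based form illustrates the mechanism that ARS-DQN generalizes with a learnable intrinsic reward; the grid-world potential below is an illustrative choice, not the paper's design.

```python
def shaped_reward(env_reward, phi_s, phi_s_next, gamma=0.99):
    """Potential-based shaping: r' = r + gamma * phi(s') - phi(s).
    This fixed form provably preserves the optimal policy; CLARS-DQN instead
    learns its intrinsic reward, so this is only a sketch of the idea.
    """
    return env_reward + gamma * phi_s_next - phi_s

def potential(pos, goal):
    """Negative Manhattan distance to the goal: a natural grid-world potential."""
    return -(abs(pos[0] - goal[0]) + abs(pos[1] - goal[1]))

goal = (4, 4)
# Sparse task: the environment reward is 0 on every non-goal step, yet shaping
# gives the agent an immediate signal for moving toward (or away from) the goal.
toward = shaped_reward(0.0, potential((0, 0), goal), potential((0, 1), goal))
away = shaped_reward(0.0, potential((0, 1), goal), potential((0, 0), goal))
print(toward > 0, away < 0)  # True True
```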

15 pages, 595 KB  
Article
Collision of an Obstacle by an Elastic Bar in a Gravity Field: Solution with Discontinuous Velocity and Space-Time Primal-Dual Active Set Algorithm
by Victor A. Kovtunenko
Algorithms 2026, 19(1), 88; https://doi.org/10.3390/a19010088 - 20 Jan 2026
Viewed by 89
Abstract
A class of one-dimensional dynamic impact models is investigated with respect to non-smooth velocities using variational inequalities and space-time finite element approximation. For the problem of collision of a rigid obstacle by an elastic bar in the gravitational field, a benchmark based on particular solutions to the wave equation is constructed on a partition of rectangular domains. The full discretization of the collision problem is carried out over a uniform space-time triangulation and extended to distorted meshes. For the solution of the corresponding variational inequality, a semi-smooth Newton-based primal-dual active set algorithm is applied. Numerical experiments demonstrate advantages over time-stepping approximation: a high-precision numerical solution is computed in a few iterations without any spurious oscillations. Full article
(This article belongs to the Special Issue Nonsmooth Optimization and Its Applications)

24 pages, 1576 KB  
Article
Non-Imaging Differential Diagnosis of Lower Limb Osteoarthritis: An Interpretable Machine Learning Framework
by Zhanel Baigarayeva, Assiya Boltaboyeva, Baglan Imanbek, Bibars Amangeldy, Nurdaulet Tasmurzayev, Kassymbek Ozhikenov, Assylbek Ozhiken, Zhadyra Alimbayeva and Naoya Maeda-Nishino
Algorithms 2026, 19(1), 87; https://doi.org/10.3390/a19010087 - 20 Jan 2026
Viewed by 214
Abstract
Background: Osteoarthritis (OA) is a prevalent chronic degenerative disorder, with coxarthrosis (hip OA) and gonarthrosis (knee OA) representing its most significant clinical manifestations. While diagnosis typically relies on imaging, such methods can be resource-intensive and insensitive to early disease trajectories. Objective: This study aims to achieve the differential diagnosis of coxarthrosis and gonarthrosis using solely routine preoperative clinical and laboratory data, benchmarking state-of-the-art machine learning algorithms. Methods: A retrospective analysis was conducted on 893 patients (617 with knee OA, 276 with hip OA) from a clinical hospital in Almaty, Kazakhstan. The study evaluated a diverse portfolio of models, including gradient boosting decision trees (LightGBM, XGBoost, CatBoost), deep learning architectures (RealMLP, TabDPT, TabM), and the pretrained tabular foundation model RealTabPFN v2.5. Results: The RealTabPFN v2.5 (Tuned) model achieved superior performance, recording a mean ROC–AUC of 0.9831, accuracy of 0.9485, and an F1-score of 0.9474. SHAP interpretability analysis identified heart rate (66.2%) and age (18.1%) as the dominant predictors driving the model’s decision-making process. Conclusion: Pretrained tabular foundation models demonstrate exceptional capability in distinguishing OA subtypes using limited clinical datasets, outperforming traditional ensemble methods. This approach offers a practical, high-performance triage tool for primary clinical assessment in resource-constrained settings. Full article

23 pages, 1109 KB  
Review
A Review of End-to-End Decision Optimization Research: An Architectural Perspective
by Wenya Zhang and Gendao Li
Algorithms 2026, 19(1), 86; https://doi.org/10.3390/a19010086 - 20 Jan 2026
Viewed by 239
Abstract
Traditional decision optimization methods primarily focus on model construction and solution, leaving parameter estimation and inter-variable relationships to statistical research. The traditional approach divides problem-solving into two independent stages: predict first and then optimize. This decoupling leads to the propagation of prediction errors: even minor inaccuracies in predictions can be amplified into significant decision biases during the optimization phase. To tackle this issue, scholars have proposed end-to-end decision optimization methods, which integrate the prediction and decision-making stages into a unified framework. By doing so, these approaches effectively mitigate error propagation and enhance overall decision performance. From an architectural design perspective, this review focuses on categorizing end-to-end decision optimization methods based on how the prediction and decision modules are integrated. It classifies mainstream approaches into three typical paradigms: constructing closed-loop loss functions, building differentiable optimization layers, and parameterizing the representation of optimization problems. It also examines their implementation pathways leveraging deep learning technologies. The strengths and limitations of these paradigms essentially stem from the inherent trade-offs in their architectural designs. Through a systematic analysis of existing research, this paper identifies key challenges in three core areas: data, variable relationships, and gradient propagation. Among these, handling non-convexity and complex constraints is critical for model generalization, while quantifying decision-dependent endogenous uncertainty remains an unavoidable challenge for practical deployment. Full article
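
The error-propagation argument can be made concrete with a two-option toy problem (all numbers illustrative): a predictor with a much lower mean squared error can still induce the worse decision, which is precisely the gap that end-to-end training targets.

```python
def decide(pred):
    """Downstream optimizer: pick the option with the highest predicted profit."""
    return max(range(len(pred)), key=pred.__getitem__)

def mse(pred, true):
    """Prediction loss: mean squared error."""
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)

def regret(pred, true):
    """Decision loss: profit lost by optimizing against predictions."""
    return max(true) - true[decide(pred)]

true_profit = [100.0, 99.0]
pred_a = [98.0, 97.0]   # larger errors, but the ranking is preserved
pred_b = [99.4, 99.5]   # smaller errors, but the ranking flips

print(mse(pred_a, true_profit), regret(pred_a, true_profit))           # 4.0 0.0
print(round(mse(pred_b, true_profit), 3), regret(pred_b, true_profit)) # 0.305 1.0
```

Predictor B is better by the prediction metric yet strictly worse by the decision metric; end-to-end methods train against the second quantity rather than the first.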

20 pages, 4501 KB  
Article
Improving Prostate Cancer Segmentation on T2-Weighted MRI Using Prostate Detection and Cascaded Networks
by Nikolay Nefediev, Nikolay Staroverov and Roman Davydov
Algorithms 2026, 19(1), 85; https://doi.org/10.3390/a19010085 - 19 Jan 2026
Viewed by 148
Abstract
Prostate cancer is one of the most lethal cancers in the male population, and accurate localization of intraprostatic lesions on MRI remains challenging. In this study, we investigated methods for improving prostate cancer segmentation on T2-weighted pelvic MRI using cascaded neural networks. We used an anonymized dataset of 400 multiparametric MRI scans from two centers, in which experienced radiologists had delineated the prostate and clinically significant cancer on the T2 series. Our baseline approach applies 2D and 3D segmentation networks (UNETR, UNET++, Swin-UNETR, SegResNetDS, and SegResNetVAE) directly to full MRI volumes. We then introduce additional stages that filter slices using DenseNet-201 classifiers (cancer/no-cancer and prostate/no-prostate) and localize the prostate via a YOLO-based detector to crop the 3D region of interest before segmentation. Using Swin-UNETR as the backbone, the prostate segmentation Dice score increased from 71.37% for direct 3D segmentation to 76.09% when using prostate detection and cropped 3D inputs. For cancer segmentation, the final cascaded pipeline—prostate detection, 3D prostate segmentation, and 3D cancer segmentation within the prostate—improved the Dice score from 55.03% for direct 3D segmentation to 67.11%, with an ROC AUC of 0.89 on the test set. These results suggest that cascaded detection- and segmentation-based preprocessing of the prostate region can substantially improve automatic prostate cancer segmentation on MRI while remaining compatible with standard segmentation architectures. Full article
(This article belongs to the Special Issue AI-Powered Biomedical Image Analysis)
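
The Dice scores reported above follow the standard overlap definition, which for binary masks represented as sets of voxel indices can be sketched as follows (the masks below are hypothetical toy data):

```python
def dice_score(pred_mask, true_mask):
    """Dice coefficient between two binary masks given as sets of voxel indices:
    2|A ∩ B| / (|A| + |B|). The paper reports this quantity in percent.
    """
    pred, true = set(pred_mask), set(true_mask)
    if not pred and not true:
        return 1.0  # convention: two empty masks agree perfectly
    return 2 * len(pred & true) / (len(pred) + len(true))

# Toy 2D masks: three of four predicted voxels overlap the ground truth.
pred = {(0, 0), (0, 1), (1, 0), (1, 1)}
true = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice_score(pred, true))  # 2*3/(4+4) = 0.75
```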

22 pages, 1240 KB  
Article
An Iterative Reinforcement Learning Algorithm for Speed Drop Compensation in Rolling Mills
by Shengyue Zong, Jiwei Chen, Yanpeng Hu and Jinyan Li
Algorithms 2026, 19(1), 84; https://doi.org/10.3390/a19010084 - 18 Jan 2026
Viewed by 120
Abstract
In steel rolling production, speed-drop compensation of the rolling mill is essential for ensuring the stability of slab rolling and product quality. This paper proposes a hybrid compensation method that integrates motor dynamic modeling with reinforcement learning to minimize mass flow error between adjacent rolling mills during slab rolling. A two-stage compensation strategy is designed, consisting of a constant-gain compensation phase followed by a decaying compensation phase, which explicitly accounts for the repetitive and consistent rolling conditions in batch slab production. Based on a motor dynamics-based theoretical model, an initial estimation of compensation parameters is first obtained, providing a physically interpretable starting point for optimization. Subsequently, a Deep Deterministic Policy Gradient (DDPG) algorithm is employed to iteratively refine the compensation parameters by learning from the mass flow error of each rolled slab, enabling data-driven adaptation while preserving physical consistency. Simulation results demonstrate that the proposed hybrid approach significantly reduces the mass flow error and achieves stable convergence, outperforming strategies with randomly initialized parameters. The results verify the effectiveness and novelty of the proposed method in combining model-based insight with reinforcement learning for intelligent and adaptive rolling mill speed-drop compensation. Full article
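
A minimal sketch of the two-stage profile described above, assuming a constant hold followed by exponential decay; the gain, hold time, and time constant are hypothetical placeholders, since the paper initializes such parameters from a motor-dynamics model and then refines them per slab with DDPG.

```python
import math

def compensation_gain(t, k0, t_hold, tau):
    """Two-stage compensation profile: constant gain k0 during the hold phase,
    then exponential decay with time constant tau. All constants here are
    illustrative placeholders, not values from the paper.
    """
    if t <= t_hold:
        return k0
    return k0 * math.exp(-(t - t_hold) / tau)

# Gain is flat during the hold phase, then decays smoothly toward zero.
print(compensation_gain(0.1, 1.5, 0.2, 0.1))  # still in the hold phase: 1.5
print(compensation_gain(0.3, 1.5, 0.2, 0.1))  # one time constant into the decay
```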

21 pages, 14300 KB  
Article
A Lightweight Embedded PPG-Based Authentication System for Wearable Devices via Hyperdimensional Computing
by Ruijin Zhuang, Haiming Chen, Daoyong Chen and Xinyan Zhou
Algorithms 2026, 19(1), 83; https://doi.org/10.3390/a19010083 - 18 Jan 2026
Viewed by 208
Abstract
In the realm of wearable technology, achieving robust continuous authentication requires balancing high security with the strict resource constraints of embedded platforms. Conventional machine learning approaches and deep learning-based biometrics often incur high computational costs, making them unsuitable for low-power edge devices. To address this challenge, we propose H-PPG, a lightweight authentication system that integrates photoplethysmography (PPG) and inertial measurement unit (IMU) signals for continuous user verification. Using Hyperdimensional Computing (HDC), a lightweight classification framework inspired by brain-like computing, H-PPG encodes user physiological and motion data into high-dimensional hypervectors that comprehensively represent individual identity, enabling robust, efficient and lightweight authentication. An adaptive learning process is employed to iteratively refine the user’s hypervector, allowing it to progressively capture discriminative information from physiological and behavioral samples. To further enhance identity representation, a dimension regeneration mechanism is introduced to maximize the information capacity of each dimension within the hypervector, ensuring that authentication accuracy is maintained under lightweight conditions. In addition, a user-defined security level scheme and an adaptive update strategy are proposed to ensure sustained authentication performance over prolonged usage. A wrist-worn prototype was developed to evaluate the effectiveness of the proposed approach and extensive experiments involving 15 participants were conducted under real-world conditions. The experimental results demonstrate that H-PPG achieves an average authentication accuracy of 93.5%. Compared to existing methods, H-PPG offers a lightweight and hardware-efficient solution suitable for resource-constrained wearable devices, highlighting its strong potential for integration into future smart wearable ecosystems. Full article
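
The encode-bundle-compare cycle of HDC can be sketched with bipolar hypervectors; the random encodings below are hypothetical stand-ins for the paper's actual PPG/IMU feature encoders, and the dimensionality is a typical choice rather than the paper's setting.

```python
import random

DIM = 10_000  # hyperdimensional representations are typically ~10k-dimensional

def random_hypervector(rng):
    """Random bipolar hypervector; high dimensionality makes random pairs
    quasi-orthogonal, which is what makes the comparison below discriminative."""
    return [rng.choice((-1, 1)) for _ in range(DIM)]

def bundle(vectors):
    """Elementwise majority vote: superimposes samples into one class prototype."""
    return [1 if sum(col) >= 0 else -1 for col in zip(*vectors)]

def similarity(a, b):
    """Normalized dot product in [-1, 1]."""
    return sum(x * y for x, y in zip(a, b)) / DIM

rng = random.Random(0)
# Hypothetical encodings of three sensor windows from the enrolled user,
# plus one window from an impostor.
user_samples = [random_hypervector(rng) for _ in range(3)]
impostor = random_hypervector(rng)
prototype = bundle(user_samples)

print(similarity(prototype, user_samples[0]))  # well above 0: accepted
print(similarity(prototype, impostor))         # near 0: rejected
```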

15 pages, 671 KB  
Article
Algorithms for Solving Ordinary Differential Equations Based on Orthogonal Polynomial Neural Networks
by Roman Parovik
Algorithms 2026, 19(1), 82; https://doi.org/10.3390/a19010082 - 17 Jan 2026
Viewed by 131
Abstract
This article proposes single-layer neural network algorithms for solving second-order ordinary differential equations, based on the principles of functional connection. According to this principle, the hidden layer of the neural network is replaced by a functional expansion unit to improve input patterns using orthogonal Chebyshev, Legendre, and Laguerre polynomials. The polynomial neural network algorithms were implemented in the Python programming language using the PyCharm environment. The performance of the polynomial neural network algorithms was tested by solving initial-boundary value problems for the nonlinear Lane–Emden equation. The solution results are compared with the exact solution of the problems under consideration, as well as with the solution obtained using a multilayer perceptron. It is shown that polynomial neural networks can perform more efficiently than multilayer neural networks. Furthermore, a neural network based on Laguerre polynomials can, in some cases, perform more accurately and faster than neural networks based on Legendre and Chebyshev polynomials. Overfitting in polynomial neural networks and strategies for overcoming it are also considered. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
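
The functional expansion unit that replaces the hidden layer can be sketched for the Chebyshev case; a single trainable linear layer would then sit on top of these features. The recurrence and the closed-form check are standard properties, not specific to the paper.

```python
import math

def chebyshev_features(x, degree):
    """Functional expansion unit: [T_0(x), ..., T_degree(x)] via the recurrence
    T_{n+1}(x) = 2x * T_n(x) - T_{n-1}(x), for x in [-1, 1]. In a functional-link
    network these features replace the hidden layer entirely."""
    feats = [1.0, x]
    for _ in range(degree - 1):
        feats.append(2 * x * feats[-1] - feats[-2])
    return feats[: degree + 1]

# Sanity check against the closed form T_n(cos t) = cos(n t).
t = 0.7
x = math.cos(t)
for n, value in enumerate(chebyshev_features(x, 5)):
    assert abs(value - math.cos(n * t)) < 1e-12

print(chebyshev_features(0.5, 3))  # [1.0, 0.5, -0.5, -1.0]
```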

26 pages, 544 KB  
Article
Physics-Aware Deep Learning Framework for Solar Irradiance Forecasting Using Fourier-Based Signal Decomposition
by Murad A. Yaghi and Huthaifa Al-Omari
Algorithms 2026, 19(1), 81; https://doi.org/10.3390/a19010081 - 17 Jan 2026
Viewed by 144
Abstract
Photovoltaic systems have long posed a challenge for integration with electrical power grids due to the randomness of solar irradiance. Deep Learning (DL) has the potential to forecast solar irradiance; however, black-box DL models typically offer no interpretation, nor can they easily distinguish between deterministic astronomical cycles and random meteorological variability. The objective of this study was to develop and apply a new Physics-Aware Deep Learning Framework that identifies and utilizes physical attributes of solar irradiance via Fourier-based signal decomposition. The proposed method decomposes the time series into a polynomial trend, a Fourier-based seasonal component, and a stochastic residual, each of which is processed within a different neural network path. A wide variety of architectures were tested (Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Convolutional Neural Network (CNN)) at multiple historical window sizes and forecast horizons on a diverse dataset spanning three years. All of the architectures tested demonstrated improved accuracy and robustness when using the physics-aware decomposition compared with the alternatives. Of the architectures tested, the GRU was the most accurate and performed well in overall evaluation, with an RMSE of 78.63 W/m² and an R² value of 0.9281 for 15 min ahead forecasting. Additionally, the Fourier-based methodology reduced the maximum absolute error by approximately 15% to 20%, depending on the architecture used, thereby mitigating the largest forecasting errors during periods of unstable weather. Overall, this framework represents a viable option for physically interpretable and computationally efficient real-time solar forecasting, bridging physical modeling and data-driven intelligence. Full article
(This article belongs to the Special Issue Artificial Intelligence in Sustainable Development)
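
A toy version of the decomposition can be sketched by assuming the seasonal period is known, reducing the polynomial trend to a mean level, and keeping a single Fourier harmonic; the synthetic "irradiance" series is purely illustrative.

```python
import math

def decompose(signal, period):
    """Split a series into a mean level, one Fourier harmonic at a known
    period, and a residual. A simplified stand-in for the paper's scheme,
    which fits a polynomial trend and a full Fourier seasonal component."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [y - mean for y in signal]
    # Project the centered series onto a sine/cosine pair at the known period.
    w = 2 * math.pi / period
    c = 2 / n * sum(y * math.cos(w * t) for t, y in enumerate(centered))
    s = 2 / n * sum(y * math.sin(w * t) for t, y in enumerate(centered))
    seasonal = [c * math.cos(w * t) + s * math.sin(w * t) for t in range(n)]
    residual = [y - se for y, se in zip(centered, seasonal)]
    return mean, seasonal, residual

# Synthetic "hourly irradiance": a 24-sample daily cycle around a fixed level.
series = [500 + 200 * math.sin(2 * math.pi * t / 24 - 1.0) for t in range(240)]
mean, seasonal, residual = decompose(series, period=24)
print(round(mean, 3), max(abs(r) for r in residual) < 1e-6)  # 500.0 True
```

Because the series contains exactly one harmonic at the assumed period, the residual vanishes to floating-point precision; on real irradiance data the residual carries the stochastic weather component that the framework routes to its own network path.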

17 pages, 2530 KB  
Article
Hybrid Optimization Technique for Finding Efficient Earth–Moon Transfer Trajectories
by Lorenzo Casalino, Andrea D’Ottavio, Giorgio Fasano, Janos D. Pintér and Riccardo Roberto
Algorithms 2026, 19(1), 80; https://doi.org/10.3390/a19010080 - 17 Jan 2026
Viewed by 305
Abstract
The Lunar Gateway is a planned small space station that will orbit the Moon and serve as a central hub for NASA’s Artemis program to return humans to the lunar surface and to prepare for Mars missions. This work presents a hybrid optimization strategy for designing minimum-fuel transfers from an Earth orbit to a Lunar Near-Rectilinear Halo Orbit. The corresponding optimal control problem—crucial for missions to NASA’s Lunar Gateway—is characterized by a high-dimensional, non-convex solution space due to the multi-body gravitational environment. To tackle this challenge, a two-stage hybrid optimization scheme is employed. The first stage uses a Genetic Algorithm heuristic as a global search strategy, to identify promising feasible trajectory solutions. Subsequently, the initial solution guess (or guesses) produced by GA are improved by a local optimizer based on a Sequential Quadratic Programming method: from a suitable initial guess, SQP rapidly converges to a high-precision feasible solution. The proposed methodology is applied to a representative cargo mission case study, demonstrating its efficiency. Our numerical results confirm that the hybrid optimization strategy can reliably generate mission-grade quality trajectories that satisfy stringent constraints while minimizing propellant consumption. Our analysis validates the combined GA-SQP optimization approach as a robust and efficient tool for space mission design in the cislunar environment. Full article

16 pages, 401 KB  
Article
Heuristic Conductance-Aware Local Clustering for Heterogeneous Hypergraphs
by Jingtian Wei, Xuan Li and Hongen Lu
Algorithms 2026, 19(1), 79; https://doi.org/10.3390/a19010079 - 16 Jan 2026
Viewed by 124
Abstract
Graphs are widely used to model complex interactions among entities, yet they struggle to capture higher-order and multi-typed relationships. Hypergraphs overcome this limitation by allowing for edges to connect arbitrary sets of nodes, enabling richer modelling of higher-order semantics. Real-world systems, however, often exhibit heterogeneity in both entities and relations, motivating the need for heterogeneous hypergraphs as a more expressive structure. In this study, we address the problem of local clustering on heterogeneous hypergraphs, where the goal is to identify a semantically meaningful cluster around a given seed node while accounting for type diversity. Existing methods typically ignore node-type information, resulting in clusters with poor semantic coherence. To overcome this, we propose HHLC, a heuristic heterogeneous hyperedge-based local clustering algorithm, guided by a heterogeneity-aware conductance measure that integrates structural connectivity and node-type consistency. HHLC employs type-filtered expansion, cross-type penalties, and low-quality hyperedge pruning to produce interpretable and compact clusters. Comprehensive experiments on synthetic and real-world heterogeneous datasets demonstrate that HHLC consistently outperforms strong baselines across metrics such as conductance, semantic purity, and type diversity. These results highlight the importance of incorporating heterogeneity into hypergraph algorithms and position HHLC as a robust framework for semantically grounded local analysis in complex multi-relational networks. Full article
(This article belongs to the Special Issue Graph and Hypergraph Algorithms and Applications)
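
One standard way to compute a structural hypergraph conductance for a candidate cluster is sketched below; the convention used here (cut hyperedges over the smaller volume) is a common one, not necessarily the paper's, and HHLC's heterogeneity-aware measure adds node-type consistency terms on top of such a quantity.

```python
def hypergraph_conductance(hyperedges, cluster):
    """Conductance of a node set S in a hypergraph:
    (# hyperedges cut by S) / min(vol(S), vol(V \\ S)),
    where vol(S) sums hyperedge memberships (degrees) over S.
    A hyperedge is cut when it has nodes both inside and outside S."""
    cluster = set(cluster)
    nodes = set().union(*hyperedges)
    cut = sum(1 for e in hyperedges
              if set(e) & cluster and set(e) - cluster)
    degree = {v: sum(1 for e in hyperedges if v in e) for v in nodes}
    vol_s = sum(degree[v] for v in cluster)
    vol_rest = sum(degree.values()) - vol_s
    return cut / min(vol_s, vol_rest)

# Two dense groups joined by a single bridging hyperedge.
edges = [("a", "b", "c"), ("a", "c"),
         ("d", "e", "f"), ("e", "f"),
         ("c", "d")]  # the bridge
print(hypergraph_conductance(edges, {"a", "b", "c"}))  # 1 cut / volume 6
```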

28 pages, 2028 KB  
Article
Dynamic Resource Games in the Wood Flooring Industry: A Bayesian Learning and Lyapunov Control Framework
by Yuli Wang and Athanasios V. Vasilakos
Algorithms 2026, 19(1), 78; https://doi.org/10.3390/a19010078 - 16 Jan 2026
Viewed by 185
Abstract
Wood flooring manufacturers face complex challenges in dynamically allocating resources across multi-channel markets, characterized by channel conflicts, demand uncertainty, and long-term cumulative effects of decisions. Traditional static optimization or myopic approaches struggle to address these intertwined factors, particularly when critical market states like brand reputation and customer base cannot be precisely observed. This paper establishes a systematic and theoretically grounded online decision framework to tackle this problem. We first model the problem as a Partially Observable Stochastic Dynamic Game. The core innovation lies in introducing an unobservable market position vector as the central system state, whose evolution is jointly influenced by firm investments, inter-channel competition, and macroeconomic randomness. The model further captures production lead times, physical inventory dynamics, and saturation/cross-channel effects of marketing investments, constructing a high-fidelity dynamic system. To solve this complex model, we propose a hierarchical online learning and control algorithm named L-BAP (Lyapunov-based Bayesian Approximate Planning), which innovatively integrates three core modules. It employs particle filters for Bayesian inference to nonparametrically estimate latent market states online. Simultaneously, the algorithm constructs a Lyapunov optimization framework that transforms long-term discounted reward objectives into tractable single-period optimization problems through virtual debt queues, while ensuring stability of physical systems like inventory. Finally, the algorithm embeds a game-theoretic module to predict and respond to rational strategic reactions from each channel. We provide theoretical performance analysis, rigorously proving the mean-square boundedness of system queues and deriving the performance gap between long-term rewards and optimal policies under complete information. 
This bound clearly quantifies the trade-off between estimation accuracy (determined by particle count) and optimization parameters. Extensive simulations demonstrate that our L-BAP algorithm significantly outperforms several strong baselines—including myopic learning and decentralized reinforcement learning methods—across multiple dimensions: long-term profitability, inventory risk control, and customer service levels. Full article
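The virtual-queue mechanism can be illustrated with a generic drift-plus-penalty step; treating each action as a (cost, resource usage, per-period budget) triple is an assumption here for illustration, not L-BAP's actual formulation:

```python
def lyapunov_step(Q, actions, V):
    """One drift-plus-penalty step: pick the action minimizing
    V * cost + Q * usage, then update the virtual queue
    Q(t+1) = max(Q(t) + usage - budget, 0)."""
    cost, usage, budget = min(actions, key=lambda a: V * a[0] + Q * a[1])
    return max(Q + usage - budget, 0.0), cost
```

A larger V favours low immediate cost; a growing Q steers later decisions back toward the budget, which is how a long-term constraint becomes a sequence of tractable single-period terms.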
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)

37 pages, 4259 KB  
Article
Image-Based Segmentation of Hydrogen Bubbles in Alkaline Electrolysis: A Comparison Between Ilastik and U-Net
by José Pereira, Reinaldo Souza, Arthur Normand and Ana Moita
Algorithms 2026, 19(1), 77; https://doi.org/10.3390/a19010077 - 16 Jan 2026
Viewed by 224
Abstract
This study aims to enhance the efficiency of hydrogen production through alkaline water electrolysis by analyzing hydrogen bubble dynamics using high-speed image processing and machine learning algorithms. The experiments were conducted to evaluate the effects of electrical current and ultrasound oscillations on the system performance. The bubble formation and detachment processes were recorded and analyzed using two segmentation models: Ilastik, a GUI-based tool, and U-Net, a deep learning convolutional network implemented in PyTorch v. 2.9.0. Both models were trained on a dataset of 24 images under varying experimental conditions. The evaluation metrics included Intersection over Union (IoU), Root Mean Square Error (RMSE), and bubble diameter distribution. Ilastik achieved better accuracy and lower RMSE, while U-Net offered higher scalability and integration flexibility within Python environments. Both models faced challenges when detecting small bubbles and under complex lighting conditions. Improvements such as expanding the training dataset, increasing image resolution, and adopting patch-based processing were proposed. Overall, the results demonstrate that automated image segmentation can provide reliable bubble characterization, contributing to the optimization of electrolysis-based hydrogen production. Full article
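For binary masks, the IoU metric used in the evaluation reduces to a simple set ratio; representing masks as sets of pixel coordinates is purely for illustration:

```python
def iou(pred, truth):
    """Intersection over Union for binary masks given as sets of
    (row, col) pixels; two empty masks score 1.0 by convention."""
    union = pred | truth
    if not union:
        return 1.0
    return len(pred & truth) / len(union)
```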

29 pages, 7379 KB  
Article
Boundary-Aware Multi-Point Preview Control: An Algorithm for Autonomous Articulated Mining Vehicles Operating in Highly Constrained Underground Spaces
by Shuo Huang, Yiting Kang, Jue Yang, Xiao Lv and Ming Zhu
Algorithms 2026, 19(1), 76; https://doi.org/10.3390/a19010076 - 16 Jan 2026
Viewed by 194
Abstract
To achieve the automation and intelligence of mining equipment, it is essential to address the challenge of autonomous driving, with the core task being how to navigate safely from the starting point to the mining area endpoint. This paper proposes a boundary-aware multi-point preview control algorithm to tackle the strong dependency on predefined paths and the lack of foresight in the autonomous driving of underground articulated mining vehicles in highly confined underground spaces. The algorithm determines the driving direction by calculating the vehicle’s real-time state and LiDAR data, previewing road conditions without relying on preset path planning. Experiments conducted in a ROS Noetic/GAZEBO 11 simulation environment compared the proposed method with single-point and two-point preview algorithms, validating the effectiveness of the boundary-aware multi-point preview control. The results show that the proposed control strategy yields the lowest lateral deviation and the highest steering smoothness, outperforming the single-point, two-point, and standard multi-point preview algorithms. In particular, its advantages in steering smoothness and stability significantly enhance the vehicle system’s adaptability, robustness, and safety. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

26 pages, 3391 KB  
Article
An Intelligent Browser History Forensics Method for Automated Analysis of Web Activity Logs, Credentials, and User Behavioral Profiles
by Leila Rzayeva, Aliya Zhetpisbayeva, Alisher Batkuldin, Nursultan Nyssanov, Alissa Ryzhova and Faisal Saeed
Algorithms 2026, 19(1), 75; https://doi.org/10.3390/a19010075 - 16 Jan 2026
Viewed by 284
Abstract
In digital forensics, one of the most complicated tasks is analyzing web browser data, owing to the variety of devices and browsers and the absence of modern analytical approaches. Browsers store a large amount of information about user activity because users most often access the internet through them. However, existing approaches to analyzing this browser data still have gaps and fail to provide a comprehensive and precise representation of user activity. This article examines the internal architecture of web browsers as stored in the memory and storage subsystems of various devices, including desktop and mobile platforms. A novel method is proposed that integrates machine learning algorithms, such as k-nearest neighbors and Naive Bayes, to automatically analyze browser data, identify suspicious login activities, and construct user behavior profiles. The results indicate that the proposed method and the developed platform can effectively construct individual user behavior profiles. Moreover, this approach not only identifies top visited domains and the user’s favorite website categories, but also highlights suspicious websites and login attempts. Compared to existing browser forensic tools, which have more limited capabilities, the proposed technique provides increased accuracy (more than 90%) in automated user profiling and detection of suspicious user activity. Full article
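The k-nearest-neighbors component can be sketched as a majority vote over labelled feature vectors; the two-dimensional features and labels below are illustrative stand-ins, not the platform's actual feature set:

```python
from collections import Counter

def knn_label(sample, labelled, k=3):
    """Majority vote among the k nearest labelled points
    (Euclidean distance over numeric feature vectors);
    `labelled` is a list of (vector, label) pairs."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(labelled, key=lambda p: dist(sample, p[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```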

23 pages, 2992 KB  
Article
Key-Value Mapping-Based Text-to-Image Diffusion Model Backdoor Attacks
by Lujia Chai, Yang Hou, Guozhao Liao and Qiuling Yue
Algorithms 2026, 19(1), 74; https://doi.org/10.3390/a19010074 - 15 Jan 2026
Viewed by 194
Abstract
Text-to-image (T2I) generation, a core component of generative artificial intelligence (AI), is increasingly important for creative industries and human–computer interaction. Despite impressive progress in realism and diversity, diffusion models still exhibit critical security blind spots, particularly in the Transformer key-value mapping mechanism that underpins cross-modal alignment. Existing backdoor attacks often rely on large-scale data poisoning or extensive fine-tuning, leading to low efficiency and limited stealth. To address these challenges, we propose two efficient backdoor attack methods, AttnBackdoor and SemBackdoor, grounded in the Transformer’s key-value storage principle. AttnBackdoor injects precise mappings between trigger prompts and target instances by fine-tuning the key-value projection matrices in U-Net cross-attention layers (≈5% of parameters). SemBackdoor establishes semantic-level mappings by editing the text encoder’s MLP projection matrix (≈0.3% of parameters). Both approaches achieve high attack success rates (>90%), with SemBackdoor reaching 98.6% and AttnBackdoor 97.2%. They also reduce parameter updates and training time by 1–2 orders of magnitude compared to prior work while preserving benign generation quality. Our findings reveal dual vulnerabilities at the visual and semantic levels and provide a foundation for developing next-generation defenses for secure generative AI. Full article

25 pages, 91838 KB  
Article
ICCA: Independent Multi-Agent Algorithm for Distributed Jamming Scheduling
by Wenpeng Wu, Zhenhua Wei, Haiyang You, Zhaoguang Zhang, Chenxi Li, Jianwei Zhan and Shan Zhao
Algorithms 2026, 19(1), 73; https://doi.org/10.3390/a19010073 - 15 Jan 2026
Viewed by 131
Abstract
In extreme scenarios, to prevent the leakage of jamming coordination information, the jammers must proactively terminate their communication functions and implement jamming resource scheduling via Non-Networked Cooperation. However, current research on this non-networked jamming approach is relatively limited. Furthermore, existing algorithms either rely on networked interactions or lack cognitive strategies for the surrounding communication countermeasure situation. For example, they fail to adapt to dynamic changes in electromagnetic noise and struggle to determine jamming effectiveness, leading to low jamming efficiency and severe energy waste in non-networked scenarios. To address this issue, this paper establishes a game process and corresponding algorithm for non-networked communication countermeasures and designs cognitive, cooperative, and scheduling strategies for individual jammers. Meanwhile, a novel performance metric called the “Overall Communication Suppression Ratio (OCSR)” is proposed. This metric quantifies the relationship between “sustained full-suppression duration” and “operating duration of the jamming system,” overcoming the defect that traditional metrics cannot evaluate the dynamic jamming effectiveness in non-networked scenarios. Experimental results indicate that although the OCSR of the proposed Intelligent Concentric Circle Algorithm (ICCA) is significantly lower than that of the Full-Power Jamming Algorithm (FPJA), ICCA extends the operating duration of the jamming system by 4.8%. This achieves non-uniform power setting of jammers, enabling flexible and dynamic jamming in non-networked scenarios and retaining more battery capacity for jammers after overall jamming failure. Full article

18 pages, 3360 KB  
Article
ZechariahNet: A Novel Method of MS Lesion Diagnosis Through MRI Images by the Combination of C-LSTM and 3D CNN Algorithms
by Mahshid Dehghanpour, Mansoor Fateh, Zeynab Mohammadpoory and Saideh Ferdowsi
Algorithms 2026, 19(1), 72; https://doi.org/10.3390/a19010072 - 15 Jan 2026
Viewed by 172
Abstract
In light of the growing prevalence of the autoimmune disease multiple sclerosis (MS), accurate detection of MS lesions in brain magnetic resonance imaging (MRI) images plays a critical role in assisting neurologists with timely diagnosis. The high similarity between MS lesions and normal brain tissues, however, makes this task particularly challenging. Although numerous deep-learning-based approaches have been proposed for the automatic segmentation of MS lesions, the method presented in this study has achieved superior results. ZechariahNet is a U-Net-based architecture that integrates transition down blocks, squeeze-attention (SA) blocks, dense blocks, and Convolutional LSTM (C-LSTM) blocks within a 3D CNN framework. By jointly exploiting spatial–temporal information from three consecutive MRI slices (previous, current, and subsequent) and strategically applying C-LSTM modules across the encoder and decoder paths, the proposed model effectively captures the neighborhood dependencies for enhanced feature extraction and reconstruction. These architectural innovations significantly improve segmentation accuracy, enabling ZechariahNet to achieve a Dice similarity coefficient (DSC) of 84.72%, outperforming existing state-of-the-art methods. Full article
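The Dice similarity coefficient reported above is, for binary segmentation masks, 2|A∩B| / (|A| + |B|); a minimal set-based sketch:

```python
def dice(pred, truth):
    """Dice similarity coefficient for binary masks given as sets
    of pixels; two empty masks score 1.0 by convention."""
    if not pred and not truth:
        return 1.0
    return 2 * len(pred & truth) / (len(pred) + len(truth))
```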

36 pages, 9776 KB  
Article
Signal Timing Optimization Method for Intersections Under Mixed Traffic Conditions
by Hongwu Li, Yangsheng Jiang and Bin Zhao
Algorithms 2026, 19(1), 71; https://doi.org/10.3390/a19010071 - 14 Jan 2026
Viewed by 143
Abstract
The increasing proliferation of new energy vehicles and autonomous vehicles has led to the formation of mixed traffic flows characterized by diverse driving behaviors, posing new challenges for intersection signal control. To address this issue, this study proposes a multi-class customer feedback queuing network (MCFFQN) model that incorporates state-dependent road capacity and congestion propagation mechanisms to accurately capture the stochastic and dynamic nature of mixed traffic flows. An evaluation framework for intersection performance is established based on key indicators such as vehicle delay, the energy consumption of new energy vehicles, and the fuel consumption and emissions of conventional vehicles. A recursive solution algorithm is developed and validated through simulations under various traffic demand scenarios. Building on this model, a signal timing optimization model aimed at minimizing total costs—including delay and environmental impacts—is formulated and solved using the Mesh Adaptive Direct Search (MADS) algorithm. A case study demonstrates that the optimized signal timing scheme significantly enhances intersection performance, reducing vehicle delay, energy consumption, fuel consumption, and emissions by over 20%. The proposed methodology provides a theoretical foundation for sustainable traffic management under mixed traffic conditions. Full article

32 pages, 999 KB  
Article
A Robust Hybrid Metaheuristic Framework for Training Support Vector Machines
by Khalid Nejjar, Khalid Jebari and Siham Rekiek
Algorithms 2026, 19(1), 70; https://doi.org/10.3390/a19010070 - 13 Jan 2026
Viewed by 109
Abstract
Support Vector Machines (SVMs) are widely used in critical decision-making applications, such as precision agriculture, due to their strong theoretical foundations and their ability to construct an optimal separating hyperplane in high-dimensional spaces. However, the effectiveness of SVMs is highly dependent on the efficiency of the optimization algorithm used to solve their underlying dual problem, which is often complex and constrained. Classical solvers, such as Sequential Minimal Optimization (SMO) and Stochastic Gradient Descent (SGD), present inherent limitations: SMO ensures numerical stability but lacks scalability and is sensitive to heuristics, while SGD scales well but suffers from unstable convergence and limited suitability for nonlinear kernels. To address these challenges, this study proposes a novel hybrid optimization framework based on Open Competency Optimization and Particle Swarm Optimization (OCO–PSO) to enhance the training of SVMs. The proposed approach combines the global exploration capability of PSO with the adaptive competency-based learning mechanism of OCO, enabling efficient exploration of the solution space, avoidance of local minima, and strict enforcement of dual constraints on the Lagrange multipliers. Across multiple datasets spanning medical (diabetes), agricultural yield, signal processing (sonar and ionosphere), and imbalanced synthetic data, the proposed OCO-PSO–SVM consistently outperforms classical SVM solvers (SMO and SGD) as well as widely used classifiers, including decision trees and random forests, in terms of accuracy, macro-F1-score, Matthews correlation coefficient (MCC), and ROC-AUC. On the Ionosphere dataset, OCO-PSO achieves an accuracy of 95.71%, an F1-score of 0.954, and an MCC of 0.908, matching the accuracy of random forest while offering superior interpretability through its kernel-based structure. 
In addition, the proposed method yields a sparser model with only 66 support vectors compared to 71 for standard SVC (a reduction of approximately 7%), while strictly satisfying the dual constraints with a near-zero violation of 1.3 × 10⁻³. Notably, the optimal hyperparameters identified by OCO-PSO (C = 2, γ ≈ 0.062) differ substantially from those obtained via Bayesian optimization for SVC (C = 10, γ ≈ 0.012), indicating that the proposed approach explores alternative yet equally effective regions of the hypothesis space. The statistical significance and robustness of these improvements are confirmed through extensive validation using 1000 bootstrap replications, paired Student’s t-tests, Wilcoxon signed-rank tests, and Holm–Bonferroni correction. These results demonstrate that the proposed metaheuristic hybrid optimization framework constitutes a reliable, interpretable, and scalable alternative for training SVMs in complex and high-dimensional classification tasks. Full article
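The box constraints 0 ≤ αᵢ ≤ C on the Lagrange multipliers can be enforced inside a standard PSO update by clipping, as sketched below; the inertia and acceleration coefficients are conventional defaults rather than the paper's settings, and the equality constraint Σ αᵢ yᵢ = 0 is omitted:

```python
import random

def pso_step(positions, velocities, pbest, gbest, C,
             w=0.7, c1=1.5, c2=1.5):
    """One in-place PSO update over candidate multiplier vectors,
    clipping each coordinate into the dual-feasible box [0, C]."""
    for i, (x, v) in enumerate(zip(positions, velocities)):
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            v[d] = (w * v[d]
                    + c1 * r1 * (pbest[i][d] - x[d])
                    + c2 * r2 * (gbest[d] - x[d]))
            x[d] = min(max(x[d] + v[d], 0.0), C)  # keep 0 <= alpha <= C
```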

22 pages, 884 KB  
Article
Sentiment-Augmented RNN Models for Mini-TAIEX Futures Prediction
by Yu-Heng Hsieh, Keng-Pei Lin, Ching-Hsi Tseng, Xiaolong Liu and Shyan-Ming Yuan
Algorithms 2026, 19(1), 69; https://doi.org/10.3390/a19010069 - 13 Jan 2026
Viewed by 174
Abstract
Accurate forecasting in low-liquidity futures markets is essential for effective trading. This study introduces a hybrid decision-support framework that combines Mini-TAIEX (MTX) futures data with sentiment signals extracted from 13 financial news sources and PTT forum discussions. Sentiment features are generated using three domain-adapted large language models—FinGPT-internLM, FinGPT-llama, and FinMA—trained on more than 360,000 finance-related texts. These features are integrated with technical indicators in four deep learning models: LSTM, GRU, Informer, and PatchTST. Experiments from June 2024 to June 2025 show that sentiment-augmented models consistently outperform baselines. Backtesting further demonstrates that the sentiment-enhanced PatchTST achieves a 526% cumulative return with a Sharpe ratio of 0.407, highlighting the value of incorporating sentiment into AI-driven futures trading systems. Full article
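The backtest statistics quoted above follow from per-period returns; the sketch below omits annualization and assumes a zero risk-free rate, since neither is specified here:

```python
def cumulative_return(returns):
    """Compound per-period returns into a total return."""
    total = 1.0
    for r in returns:
        total *= 1.0 + r
    return total - 1.0

def sharpe(returns, rf=0.0):
    """Mean excess return over its (population) standard deviation."""
    n = len(returns)
    mean = sum(r - rf for r in returns) / n
    var = sum((r - rf - mean) ** 2 for r in returns) / n
    return mean / var ** 0.5 if var > 0 else float("inf")
```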

18 pages, 2883 KB  
Article
A Multi-Objective Giant Trevally Optimizer with Feasibility-Aware Archiving for Constrained Optimization
by Nashwan Hussein and Adnan Abdulazeez
Algorithms 2026, 19(1), 68; https://doi.org/10.3390/a19010068 - 13 Jan 2026
Viewed by 236
Abstract
Multi-objective optimization (MOO) plays a critical role in mechanical and industrial engineering, where conflicting design goals must be balanced under complex constraints. In this study, we introduce the Multi-Objective Giant Trevally Optimizer (MOGTO), a novel extension of the Giant Trevally Optimizer inspired by predatory foraging dynamics. MOGTO integrates predation-regime switching into a Pareto-based framework, enhanced with feasibility-aware archiving, knee-biased selection, and adaptive constraint handling. We benchmark MOGTO against established algorithms—NSGA-II, SPEA2, MOEA/D, and ParetoSearch—using synthetic test suites (ZDT1–3, DTLZ2) and classical engineering problems (welded beam, spring, and pressure vessel). Performance was assessed with Hypervolume (HV), Inverted Generational Distance (IGD), Spacing, and coverage metrics across 30 independent runs. The results demonstrate that MOGTO consistently achieves competitive or superior HV and IGD, maintains more uniform spacing, and generates larger feasible archives than the baselines. Particularly on constrained engineering problems, MOGTO yields more feasible non-dominated solutions, confirming its robustness and industrial applicability. These findings establish MOGTO as a reliable and general-purpose metaheuristic for multi-objective optimization in engineering design. Full article
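Of the quality indicators used, IGD has the simplest definition: the mean distance from each reference-front point to its nearest obtained solution (lower is better). A minimal sketch:

```python
def igd(reference, obtained):
    """Inverted Generational Distance between a reference front and
    an obtained solution set (Euclidean distance in objective space)."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return sum(min(dist(r, s) for s in obtained)
               for r in reference) / len(reference)
```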
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)

18 pages, 998 KB  
Article
A Stock Price Prediction Network That Integrates Multi-Scale Channel Attention Mechanism and Sparse Perturbation Greedy Optimization
by Jiarun He, Fangying Wan and Mingfang He
Algorithms 2026, 19(1), 67; https://doi.org/10.3390/a19010067 - 12 Jan 2026
Viewed by 136
Abstract
The stock market is of paramount importance to economic development. Given the market’s high volatility, investors who accurately predict stock price fluctuations can effectively mitigate investment risks and achieve higher returns. Traditional time series models face limitations when dealing with long sequences and short-term volatility, often yielding unsatisfactory predictive outcomes. This paper proposes a novel algorithm, MSNet, which integrates a Multi-scale Channel Attention mechanism (MSCA) and Sparse Perturbation Greedy Optimization (SPGO) into an xLSTM framework. The MSCA enhances the model’s spatio-temporal information modeling capabilities, effectively preserving key price features within stock data. Meanwhile, SPGO improves the exploration of optimal solutions during training, thereby strengthening the model’s generalization stability against short-term market fluctuations. Experimental results demonstrate that MSNet achieves an MSE of 0.0093 and an MAE of 0.0152 on our proprietary dataset. This approach effectively extracts temporal features from complex stock market data, providing empirical insights and guidance for time series forecasting. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

24 pages, 4886 KB  
Article
YOLOv8-ECCα: Enhancing Object Detection for Power Line Asset Inspection Under Real-World Visual Constraints
by Rita Ait el haj, Badr-Eddine Benelmostafa and Hicham Medromi
Algorithms 2026, 19(1), 66; https://doi.org/10.3390/a19010066 - 12 Jan 2026
Viewed by 163
Abstract
Unmanned Aerial Vehicles (UAVs) have revolutionized power-line inspection by enhancing efficiency, safety, and enabling predictive maintenance through frequent remote monitoring. Central to automated UAV-based inspection workflows is the object detection stage, which transforms raw imagery into actionable data by identifying key components such as insulators, dampers, and shackles. However, the real-world complexity of inspection scenes poses significant challenges to detection accuracy. For example, the InsPLAD-det dataset—characterized by over 30,000 annotations across diverse tower structures and viewpoints, with more than 40% of components partially occluded—illustrates the visual and structural variability typical of UAV inspection imagery. In this study, we introduce YOLOv8-ECCα, a novel object detector tailored for these demanding inspection conditions. Our contributions include: (1) integrating CoordConv, selected over deformable convolution for its efficiency in preserving fine spatial cues without heavy computation; (2) adding Efficient Channel Attention (ECA), preferred to SE or CBAM for its ability to enhance feature relevance using only a single 1D convolution and no dimensionality reduction; and (3) adopting Alpha-IoU, chosen instead of CIoU or GIoU to produce smoother gradients and more stable convergence, particularly under partial overlap or occlusion. Evaluated on the InsPLAD-det dataset, YOLOv8-ECCα achieves an mAP@50 of 82.75%, outperforming YOLOv8s (81.89%) and YOLOv9-E (82.61%) by +0.86% and +0.14%, respectively, while maintaining real-time inference at 86.7 FPS—exceeding the baseline by +2.3 FPS. Despite these improvements, the model retains a compact footprint (28.5 GFLOPs, 11.1 M parameters), confirming its suitability for embedded UAV deployment in real inspection environments. Full article
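The Alpha-IoU loss preferred in contribution (3) is 1 − IoU^α for matched boxes; the sketch below uses α = 3, the value common in the Alpha-IoU literature, which may differ from this model's setting:

```python
def alpha_iou_loss(box_a, box_b, alpha=3.0):
    """Alpha-IoU loss 1 - IoU**alpha for axis-aligned boxes given
    as (x1, y1, x2, y2); larger alpha up-weights high-IoU matches."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    iou = inter / union if union > 0 else 0.0
    return 1.0 - iou ** alpha
```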

34 pages, 5835 KB  
Review
RIS-UAV Cooperative ISAC Technology for 6G: Architecture, Optimization, and Challenges
by Yuanfei Zhang, Zhongqiang Luo, Wenjie Wu and Wencheng Tian
Algorithms 2026, 19(1), 65; https://doi.org/10.3390/a19010065 - 12 Jan 2026
Viewed by 360
Abstract
With the development of 6G technology, conventional wireless communication systems are increasingly unable to meet stringent performance requirements in complex and dynamic environments. Therefore, integrated sensing and communication (ISAC), which enables efficient spectrum sharing, has attracted growing attention as a promising solution. This paper provides a comprehensive survey of reconfigurable intelligent surface (RIS)-unmanned aerial vehicle (UAV)-assisted ISAC systems. It first introduces a four-dimensional quantitative evaluation framework grounded in information theory. Then, we provide a structured overview of coordination mechanisms between different types of RIS and UAV platforms within ISAC architectures. Furthermore, we analyze the application characteristics of various multiple access schemes in these systems. Finally, the main technical challenges and potential future research directions are discussed and analyzed. Full article

20 pages, 11896 KB  
Article
Improved Secretary Bird Optimization Algorithm for UAV Path Planning
by Huanlong Zhang, Hang Cheng, Xin Wang, Liao Zhu, Dian Jiao and Zhoujingzi Qiu
Algorithms 2026, 19(1), 64; https://doi.org/10.3390/a19010064 - 12 Jan 2026
Viewed by 168
Abstract
In view of the complex flight scenarios existing in UAV path planning, it is necessary to model the UAV flight trajectory. When constructing the model, cost factors such as the minimum flight path of the UAV, obstacle avoidance, flight altitude, and trajectory smoothness [...] Read more.
In view of the complex flight scenarios existing in UAV path planning, it is necessary to model the UAV flight trajectory. When constructing the model, cost factors such as the minimum flight path of the UAV, obstacle avoidance, flight altitude, and trajectory smoothness are fully taken into account. To reduce the overall flight cost, a novel secretary bird optimization algorithm (NSBOA) is proposed in this paper, which effectively addresses the limitations of traditional algorithms in handling UAV path planning tasks. First, the Singer chaotic map is adopted to initialize the population instead of the conventional random initialization method. This improvement increases population diversity, enables the initial population to be more evenly distributed in the search space, and further accelerates the algorithm’s convergence speed in the subsequent optimization process. Second, an adaptive adjustment mechanism is integrated with the Lévy flight mechanism to optimize the core logic of the algorithm, with a specific focus on improving the exploitation stage. By introducing appropriate perturbations near the current optimal solution, the algorithm is guided to jump out of local optimal traps, thereby enhancing its global optimization capability and avoiding premature convergence caused by insufficient population diversity. By comparing and analyzing NSBOA with the SBOA, WOA, PSO, POA, NGO, and HHO algorithms on 12 common evaluation functions and the CEC 2017 test functions, and applying NSBOA to the UAV path optimization problem, the simulation results show the effectiveness and superiority of the proposed scheme. Full article
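The Singer map used for initialization is a one-dimensional chaotic recurrence; the constants below are the ones usually quoted for it, and whether NSBOA uses exactly this parameterization is an assumption:

```python
def singer_sequence(x0, n, mu=1.07):
    """Iterate the Singer chaotic map
    x_{k+1} = mu * (7.86x - 23.31x^2 + 28.75x^3 - 13.302875x^4),
    typically run with mu in [0.9, 1.08] and x0 in (0, 1)."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * (7.86 * x - 23.31 * x ** 2
                  + 28.75 * x ** 3 - 13.302875 * x ** 4)
        xs.append(x)
    return xs
```

Mapping each chaotic value into a decision variable's range then yields an initial population spread more evenly than uniform random draws.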
32 pages, 3198 KB  
Review
Explainability in Deep Learning in Healthcare and Medicine: Panacea or Pandora’s Box? A Systemic View
by Wullianallur Raghupathi
Algorithms 2026, 19(1), 63; https://doi.org/10.3390/a19010063 - 12 Jan 2026
Viewed by 220
Abstract
Explainability in deep learning (XDL) for healthcare is increasingly portrayed as essential for addressing the “black box” problem in clinical artificial intelligence. However, this universal transparency mandate may create unintended consequences, including cognitive overload, spurious confidence, and workflow disruption. This paper examines a fundamental question: is explainability a panacea that resolves AI’s trust deficit, or a Pandora’s box that introduces new risks? Drawing on general systems theory, we demonstrate that the answer is profoundly context dependent. Through a systemic analysis of current XDL methods (saliency maps, LIME, SHAP, and attention mechanisms), we reveal systematic disconnects between technical transparency and clinical utility. This paper argues that XDL is a context-dependent systemic property rather than a universal requirement. It functions as a panacea when proportionately applied to high-stakes reasoning tasks (cancer treatment planning, complex diagnosis) within integrated socio-technical architectures. Conversely, it becomes a Pandora’s box when superficially imposed on routine operational functions (scheduling, preprocessing) or time-critical emergencies (e.g., cardiac arrest), where comprehensive explanation delays lifesaving intervention. The paper proposes a risk-stratified framework recognizing that a specific subset of healthcare AI applications, those involving high-stakes clinical reasoning, requires comprehensive explainability, while other applications benefit from calibrated transparency appropriate to their clinical context. We conclude that explainability is neither a cure-all nor an inevitable harm, but rather a dynamic equilibrium requiring continuous rebalancing across technical, cognitive, and organizational dimensions.