Journal Description
Algorithms is a peer-reviewed, open access journal that provides an advanced forum for studies related to algorithms and their applications, published monthly online by MDPI. The European Society for Fuzzy Logic and Technology (EUSFLAT) is affiliated with Algorithms, and its members receive discounts on the article processing charges.
- Open Access — free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Theory and Methods) / CiteScore - Q1 (Numerical Analysis)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 19.2 days after submission; acceptance to publication takes 3.7 days (median values for papers published in this journal in the second half of 2025).
- Testimonials: See what our editors and authors say about Algorithms.
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Artificial Intelligence: AI, AI in Medicine, Algorithms, BDCC, MAKE, MTI, Stats, Virtual Worlds, and Computers.
Impact Factor: 2.1 (2024)
5-Year Impact Factor: 2.0 (2024)
Latest Articles
Probabilistic Orchestrator for Indeterministic Multi-Agent Systems in Real-Time Environments
Algorithms 2026, 19(4), 261; https://doi.org/10.3390/a19040261 - 29 Mar 2026
Abstract
Multi-agent perception systems must operate under fundamental asymmetries: some agents provide fast but unreliable observations, while others deliver higher-quality evidence with delay and uncertain correspondence. Traditional deterministic orchestration and rule-based fusion struggle to manage these trade-offs, often producing brittle or unstable behavior. We introduce a probabilistic orchestration framework that treats coordination as an epistemic generation problem—constructing and updating belief states under uncertainty—rather than a selection problem. Instead of committing to a single agent’s output, the orchestrator constructs a belief state that explicitly represents uncertainty, evidential provenance, and temporal relevance. Decisions are produced through latency-aware, association-weighted fusion, and uncertainty itself becomes a first-class signal governing action, deferral, and learning. Crucially, the orchestrator enables controlled teacher–student adaptation: high-confidence, well-associated stationary observations are gated into a feedback loop that improves ego perception over time while mitigating error amplification. We demonstrate the approach on an infrastructure-assisted dual-camera obstacle-recognition task. Experimental results show improved robustness to distance, occlusion, and delayed evidence compared to ego-only and deterministic orchestration baselines. By operationalizing orchestration as epistemic generation, this work provides a unifying framework for robust decision-making and safe adaptation in multi-agent systems, with implications that extend beyond perception to agentic and generative AI architectures.
Full article
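The fusion rule itself is not spelled out in the abstract; as a minimal sketch of what latency-aware, association-weighted fusion can look like under Gaussian beliefs, the snippet below weights each observation by inverse variance, an exponential age decay, and an association score (the function, decay constant, and data layout are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def fuse_observations(obs, now, tau=0.5):
    """Fuse scalar observations into a belief (mean, variance).

    obs: list of (value, variance, timestamp, association score in [0, 1]).
    Weights combine inverse variance, an exponential decay with age
    (latency awareness), and the association score -- a generic stand-in
    for the paper's latency-aware, association-weighted fusion.
    """
    w = np.array([(a / v) * np.exp(-(now - t) / tau) for _, v, t, a in obs])
    x = np.array([val for val, _, _, _ in obs])
    mean = float(np.sum(w * x) / np.sum(w))
    var = float(1.0 / np.sum(w))   # optimistic: treats decayed terms as independent
    return mean, var

# Fast-but-noisy ego reading vs. delayed-but-precise infrastructure reading.
belief = fuse_observations(
    [(2.1, 0.04, 0.95, 0.9), (2.4, 0.01, 0.60, 0.7)], now=1.0)
```

The returned variance can then drive the act/defer choice the abstract describes, e.g., deferring whenever it exceeds a task-specific threshold.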
Open Access Article
Multi-Objective Optimized Differential Privacy with Interpretable Machine Learning for Brain Stroke and Heart Disease Diagnosis
by
Mohammed Ibrahim Hussain, Arslan Munir, Safiul Haque Chowdhury, Mohammad Mamun and Muhammad Minoar Hossain
Algorithms 2026, 19(4), 260; https://doi.org/10.3390/a19040260 - 27 Mar 2026
Abstract
Brain stroke (BS) and heart disease (HD) are leading causes of global mortality and long-term disability, underscoring the critical need for early and accurate diagnostic tools. This research addresses the dual challenge of developing high-performance predictive models while ensuring the privacy of sensitive patient data. We propose a framework that integrates ensemble machine learning (ML) models with a formal differential privacy (DP) mechanism. Using a dataset of 5110 samples with clinical features, we evaluate Extreme Gradient Boosting (XGB), Random Forest (RF), Light Gradient Boosting Machine (LGBM), and Categorical Boosting (CAT) for BS and HD prediction. To protect individual privacy, we apply the Gaussian mechanism of DP with two probabilities of failure (POF) parameters (10⁻⁵ and 10⁻⁶) and a privacy budget ranging from 0.5 to 5.0. A key novelty of this work is the application of Pareto frontier multi-objective optimization (PFMOO) to systematically identify the optimal trade-off between model accuracy and privacy constraints. Our approach successfully identifies optimal, privacy-preserving models: XGB achieves top performance for BS prediction (92.3% accuracy, 92.29% F1 score), with a POF of 10⁻⁶, while RF excels for HD detection (95.61% accuracy, 97.8% precision), with a POF of 10⁻⁵. Furthermore, we employ explainable AI (XAI) techniques, SHAP and LIME, to provide interpretability of the model decisions, enhancing clinical trust. This research delivers a robust, interpretable, and privacy-conscious framework for early disease detection, offering a significant advancement over existing methods by holistically balancing accuracy, data security, and transparency.
Full article
(This article belongs to the Special Issue 2026 and 2027 Selected Papers from Algorithms Editorial Board Members)
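The Gaussian mechanism and its two knobs (privacy budget ε and POF δ) admit a compact worked example. The calibration below is the classical one, σ = Δ·√(2 ln(1.25/δ))/ε, valid for ε < 1; the query and its sensitivity are illustrative, not taken from the paper:

```python
import numpy as np

def gaussian_mechanism(value, sensitivity, epsilon, delta, rng=None):
    """Release value + N(0, sigma^2) noise, satisfying (epsilon, delta)-DP.

    delta plays the role of the paper's probability of failure (POF);
    sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon (epsilon < 1).
    """
    rng = rng or np.random.default_rng()
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma)

# A mean over n = 5110 records bounded in [0, 1] has sensitivity 1/n;
# epsilon = 0.5 and POF = 1e-6 match the low end of the paper's sweep.
noisy_mean = gaussian_mechanism(0.42, sensitivity=1 / 5110,
                                epsilon=0.5, delta=1e-6)
```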
Open Access Article
Adapting Vision–Language Models for Few-Shot Industrial Defect Detection
by
Chayanon Sub-r-pa and Rung-Ching Chen
Algorithms 2026, 19(4), 259; https://doi.org/10.3390/a19040259 - 27 Mar 2026
Abstract
Automated surface defect detection often faces a “cold-start” problem due to limited annotated data for new anomalies. Traditional object detectors struggle to converge in such few-shot settings. To address this, we adapt Vision–Language Models (VLMs), specifically YOLO-World. We use semantic pre-training to mitigate data scarcity. We evaluate this approach on the MVTec AD dataset in bounding-box format. We use a strict 1:9 train-validation split, resulting in an average of 11.8 defect instances per category. YOLO-World surpasses traditional baselines, like YOLOv11s and YOLOv26s, in 12 of 15 categories. The optimized VLM pipeline achieves up to 64.9% mAP@50 on texture-heavy categories, such as Tile, with only nine training instances. Ablation studies show standard optimization techniques are limited under 10-shot constraints. We find a critical augmentation divide. Disabling spatial distortions (Mosaic) is vital to preserving rigid-object geometry. The Normalized Wasserstein Distance (NWD) improves the localization of microscopic anomalies. Varifocal Loss (VFL) often causes model collapse. Ultimately, VLMs offer a superior foundation for cold-start inspection but require carefully tailored pipelines for robustness.
Full article
(This article belongs to the Special Issue Data-Driven Intelligent Modeling and Optimization Algorithms for Industrial Processes: 3rd Edition)
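For readers unfamiliar with NWD: it scores box pairs by modeling each box as a 2D Gaussian and exponentiating the negative 2-Wasserstein distance, which stays informative even when tiny boxes have no IoU overlap. The sketch below follows the formula from the original NWD paper; the constant C and its value here are assumptions, not this article's configuration:

```python
import math

def nwd(box_a, box_b, C=12.8):
    """Normalized Wasserstein Distance between boxes given as (cx, cy, w, h).

    Each box is modeled as a Gaussian N((cx, cy), diag((w/2)^2, (h/2)^2));
    the 2-Wasserstein distance between the two Gaussians then has the
    closed form below, normalized by a dataset-dependent constant C.
    """
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    w2 = math.sqrt((ax - bx) ** 2 + (ay - by) ** 2
                   + ((aw - bw) / 2) ** 2 + ((ah - bh) / 2) ** 2)
    return math.exp(-w2 / C)

# Two adjacent 4x4-pixel defects with zero IoU still get a usable score.
print(nwd((10, 10, 4, 4), (14, 10, 4, 4)))   # ~0.73
```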
Open Access Review
Optimization Algorithms: Comprehensive Classification, Principles, and Scientometric Trends
by
Khadija Abouhssous, Rasha Hasan, Asmaa Zugari and Alia Zakriti
Algorithms 2026, 19(4), 258; https://doi.org/10.3390/a19040258 - 27 Mar 2026
Abstract
In recent years, optimization algorithms have emerged as powerful computational tools for addressing complex and dynamic challenges across diverse domains. These domains include engineering, technology, management, and decision-making. Their growing importance is motivated by (a) the increasing complexity of modern systems, (b) the need for efficient resource utilization, and (c) the demand for scalable algorithmic solutions. These algorithms enable the systematic and computational exploration of large solution spaces, supporting decision-making and design under uncertainty, large-scale data, and evolving requirements. This study provides a structured review and comparative scientometric analysis of optimization algorithms, covering: (a) exact methods, (b) approximation techniques, (c) metaheuristics, and (d) emerging physics-informed frameworks. The analysis highlights algorithmic trends, performance-oriented research directions, and the increasing integration of mathematical programming, machine learning, and numerical methods. The results show a renewed focus on classical algorithmic paradigms. Moreover, rapid growth in hybrid and physics-informed optimization approaches is observed. These findings confirm the central role of optimization algorithms in modern algorithm engineering and interdisciplinary computational research.
Full article
Open Access Article
Enhanced Facial Realism in Personalized Diffusion Models: A Memory-Optimized DreamBooth Implementation for Consumer Hardware
by
Sandeep Gupta, Kanad Ray, Shamim Kaiser, Sazzad Hossain and Jocelyn Faubert
Algorithms 2026, 19(4), 257; https://doi.org/10.3390/a19040257 - 27 Mar 2026
Abstract
Despite significant progress in general-purpose diffusion-based models capable of producing high-quality media, this approach is still too demanding to run on consumer/gamer hardware. We present a memory-optimized DreamBooth framework, designed for consumer-grade GPUs with 16 GB of VRAM, that allows end-to-end image personalization and addresses some of the limitations of existing solutions. Our system reduces peak GPU memory from 22 GB (baseline DreamBooth) to 14.2 GB through novel hierarchical memory management, including attention slicing, Variational Autoencoder (VAE) tiling, gradient accumulation, and gradient checkpointing integrated within the Hugging Face Accelerate ecosystem. The framework further incorporates state-of-the-art techniques for preserving facial features and a comprehensive automated quality management system. The result is a complete end-to-end pipeline achieving a peak memory of 14.2 GB, with quantitative performance (LPIPS: 0.139, SSIM: 0.879, identity: 0.852, and FID: 23.1) competitive with methods requiring significantly more hardware resources.
Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
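The four memory levers named in the abstract all have public switches in the diffusers/Accelerate stack. A minimal sketch, assuming a Stable Diffusion 1.5 backbone purely as an example (the paper's exact base model, library versions, and training loop are not reproduced here):

```python
import torch
from accelerate import Accelerator
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe.enable_attention_slicing()            # compute attention in chunks
pipe.enable_vae_tiling()                   # encode/decode the VAE tile by tile
pipe.unet.enable_gradient_checkpointing()  # recompute activations in backward

# Gradient accumulation: small micro-batches, larger effective batch size.
accelerator = Accelerator(gradient_accumulation_steps=4)
# ... DreamBooth training loop wrapped with accelerator.accumulate(pipe.unet) ...
```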
Open Access Article
Genetic Programming Algorithm Evolving Robust Unary Costs for Efficient Graph Cut Segmentation
by
Reem M. Mostafa, Emad Mabrouk, Ahmed Ayman, Hamdy Z. Zidan and Abdelmonem M. Ibrahim
Algorithms 2026, 19(4), 256; https://doi.org/10.3390/a19040256 - 27 Mar 2026
Abstract
Accurate cell and nuclei segmentation remains challenging due to the sensitivity of classical graph-cut methods to parameter tuning. While deep learning models like U-Net offer strong performance, they require large annotated datasets and substantial GPU resources. This work presents a cost-effective alternative: a genetic programming (GP) framework that jointly optimizes unary cost functions and regularization parameters for graph-cut segmentation, coupled with automatic seed selection. Evaluation is conducted under two distinct protocols: (1) oracle-guided per-image optimization, establishing upper-bound performance (mean Dice 0.822, IoU 0.733), and (2) true generalization via train/test split, where expressions learned on 50 images are applied to 50 unseen images (mean Dice 0.695, IoU 0.588). The fixed-model generalization still significantly outperforms the baseline graph cut. Cross-dataset validation on MoNuSeg (H&E histopathology) achieves a Dice score of 0.823 with the fixed GP model, significantly outperforming the baseline (+0.272). This result uses a single fixed model—the best-performing expression from BBBC038 training—applied in a zero-shot manner to MoNuSeg without any retraining or domain adaptation. All 100 images showed non-negative improvement under oracle optimization in the experiments. The method requires no GPU training, runs in 550 s per image for the oracle search, and offers interpretable symbolic cost functions. Code and annotations are provided to ensure reproducibility. This approach offers a practical, interpretable alternative in resource-constrained biomedical imaging settings.
Full article
(This article belongs to the Special Issue Bio-Inspired Algorithms: 2nd Edition)
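To make the idea concrete: a GP individual is a symbolic expression mapping per-pixel evidence to the source/sink capacities (t-links) of the graph cut. The expression below is a made-up example of such an individual, not one evolved in the paper:

```python
import numpy as np

def example_evolved_cost(I, fg_mean, bg_mean):
    """One hypothetical GP individual: per-pixel unary costs from intensity.

    Returns (cost of labeling foreground, cost of labeling background),
    which would feed the t-links of a standard min-cut formulation; the GP
    additionally tunes the pairwise regularization weight.
    """
    d_fg, d_bg = np.abs(I - fg_mean), np.abs(I - bg_mean)
    return np.log1p(d_fg) + 0.5 * d_fg**2, np.log1p(d_bg) + 0.5 * d_bg**2

I = np.random.default_rng(0).random((64, 64))          # toy image
cost_fg, cost_bg = example_evolved_cost(I, fg_mean=0.8, bg_mean=0.2)
```

The fitness of an individual is then the Dice/IoU of the resulting cut against the annotation, which is what the evolutionary loop maximizes.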
Open Access Article
Outlook for the Development of the Chip and Artificial Intelligence Industries—Application Perspective
by
Bao Rong Chang and Hsiu-Fen Tsai
Algorithms 2026, 19(4), 255; https://doi.org/10.3390/a19040255 - 26 Mar 2026
Abstract
This review examines the transformative interplay between computing chips and Artificial Intelligence (AI), which is driving a revolution across various industries. First, the broader artificial intelligence and semiconductor ecosystem is analyzed, including hardware manufacturers, software frameworks, and system integration. Next, the development prospects are examined, revealing current challenges such as power consumption, manufacturing complexity, supply chain constraints, and ethical considerations. Further discussion focuses on cloud-edge collaboration in relation to system architecture and workload allocation strategies. Then, cutting-edge AI technologies are analyzed and key insights are summarized. Finally, the overall trends in the artificial intelligence and chip industries are summarized, clearly presenting findings about the future; this synthesis is the unique contribution of this review.
Full article
(This article belongs to the Special Issue AI and Computational Methods in Engineering and Science: 2nd Edition)
Open Access Article
CSFADet: Dual-Modal Anti-UAV Detection via Cross-Spectral Feature Alignment and Adaptive Multi-Scale Refinement
by
Heqin Yuan and Yuheng Li
Algorithms 2026, 19(4), 254; https://doi.org/10.3390/a19040254 - 26 Mar 2026
Abstract
Anti-unmanned aerial vehicle (Anti-UAV) detection is critical for airspace security, yet existing single-modality approaches suffer from severe performance degradation under adverse illumination, thermal crossover, and extreme scale variation. In this paper, we propose CSFADet, a dual-modal detection framework that jointly exploits visible and infrared imagery through four tightly integrated modules. First, a Cross-Spectral Feature Alignment (CSFA) module performs early-stage spectral calibration by computing cross-modal query–value attention maps, generating modality-aware channel descriptors that re-weight and concatenate the two spectral streams. Second, a Dual-path Texture Enhancement Module (DTEM) enriches fine-grained spatial details via cascaded convolutions with residual connections. Third, a Dual-path Cross-Attention Module (DCAM) introduces a feature-shrinking token generation strategy followed by symmetric cross-attention branches with learnable scaling factors, Squeeze-and-Excitation recalibration, and a convolution fusion head, enabling deep bidirectional interaction between modalities. Fourth, a Dual-path Information Refinement Module (DIRM) embeds Adaptive Residual Groups (ARGs) that cascade Multi-modal Spatial Attention Blocks (MSABs) with channel and dynamic spatial attention, culminating in a Multi-scale Scale-aware Fusion Refinement (MSFR) unit that employs three parallel multi-head attention branches with a Scale Reasoning Gate and Channel Fusion Layer to produce scale-discriminative enhanced features. Experiments on the public Anti-UAV300 benchmark show that CSFADet achieves 91.4% mAP@0.5 and 58.7% mAP@0.5:0.95, surpassing fifteen representative detectors spanning single-stage, two-stage, YOLO-family, and Transformer-based categories. Ablation studies confirm the complementary contributions of each module, and heatmap visualizations verify the model’s capacity to focus on small, distant UAV targets under challenging conditions.
Full article
(This article belongs to the Topic Theoretical Foundations and Applications of Deep Learning Techniques)
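As a rough intuition for the CSFA step, each spectral stream can be re-weighted channel-wise by a descriptor computed from the other stream before concatenation. The PyTorch sketch below collapses the paper's query–value attention maps into a simple cross-modal gate, so it illustrates the idea rather than the published module:

```python
import torch
import torch.nn as nn

class CrossSpectralGate(nn.Module):
    """Each modality is re-weighted channel-wise by a descriptor computed
    from the other modality, then the streams are concatenated: a reduced
    stand-in for CSFA's query-value attention, for illustration only."""

    def __init__(self, c):
        super().__init__()
        self.rgb_from_ir = nn.Sequential(nn.Linear(c, c), nn.Sigmoid())
        self.ir_from_rgb = nn.Sequential(nn.Linear(c, c), nn.Sigmoid())

    def forward(self, rgb, ir):                       # each (B, C, H, W)
        rgb_desc = rgb.mean(dim=(2, 3))               # global channel descriptors
        ir_desc = ir.mean(dim=(2, 3))
        rgb = rgb * self.rgb_from_ir(ir_desc)[..., None, None]
        ir = ir * self.ir_from_rgb(rgb_desc)[..., None, None]
        return torch.cat([rgb, ir], dim=1)            # (B, 2C, H, W)

fused = CrossSpectralGate(c=64)(torch.randn(2, 64, 80, 80),
                                torch.randn(2, 64, 80, 80))
```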
Open Access Article
Cross-Project Software Defect Prediction Based on Domain Adaptation and Feature Fusion
by
Guanhua Guo, Yinglei Song and Peng Zhang
Algorithms 2026, 19(4), 253; https://doi.org/10.3390/a19040253 - 26 Mar 2026
Abstract
With the advancement of computer science, software has become increasingly prevalent across all facets of society, making software quality a focal point of industry concern. The scarcity of sufficient defect data in the early stages of projects undermines prediction accuracy, driving research into cross-project software defect prediction. Traditional hand-crafted metric features face challenges from the data distribution discrepancies between the original and cross-project contexts, which hinder prediction effectiveness; furthermore, single features fail to comprehensively characterize software information. This paper proposes a domain adaptation and feature fusion-based cross-project software defect prediction method (DAFF-CPDP). The model employs the TCA+ algorithm for domain adaptation and utilizes an encoder layer for progressive feature fusion. Multiple Java projects were selected for evaluation. Comparisons with various baseline models demonstrated that the proposed model outperforms both traditional machine learning-based feature models and diverse deep learning-based single-feature or multi-feature models. Concurrently, this paper analyzes the impact of different source projects on target projects, confirming that class-balanced datasets and datasets with smaller distribution differences are more conducive to prediction.
Full article
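The TCA+ step inherits its core from plain Transfer Component Analysis: project source and target metrics into a shared space in which their distribution discrepancy (an MMD term) is minimized. A linear-kernel sketch under that reading (TCA+'s normalization-strategy selection is omitted; the dimensions and μ are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

def tca(Xs, Xt, dim=5, mu=1.0):
    """Linear-kernel Transfer Component Analysis: the core of the TCA+
    step used by DAFF-CPDP (TCA+'s normalization selection is omitted)."""
    X = np.vstack([Xs, Xt])
    n1, n2 = len(Xs), len(Xt)
    n = n1 + n2
    K = X @ X.T                                    # linear kernel matrix
    e = np.concatenate([np.full(n1, 1 / n1), np.full(n2, -1 / n2)])
    L = np.outer(e, e)                             # MMD coefficient matrix
    H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    # Trailing generalized eigenvectors of K H K w = lambda (K L K + mu I) w
    vals, vecs = eigh(K @ H @ K, K @ L @ K + mu * np.eye(n))
    Wp = vecs[:, -dim:]                            # components with largest lambda
    Z = K @ Wp                                     # shared representation
    return Z[:n1], Z[n1:]

rng = np.random.default_rng(0)
Zs, Zt = tca(rng.normal(size=(40, 20)),            # source-project metrics
             rng.normal(loc=0.5, size=(30, 20)))   # distribution-shifted target
```

A classifier trained on Zs can then be applied to Zt, which is where the paper's encoder-based feature fusion takes over.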
Open Access Article
Rigid-Chain Following and Kinematic Response Analysis on Piecewise Non-Smooth Paths: A DGPS-Based Solution Method
by
Yaxuan Zhao, Ziheng Li and Hualu Liu
Algorithms 2026, 19(4), 252; https://doi.org/10.3390/a19040252 - 25 Mar 2026
Abstract
Rigid-body chain following on piecewise analytic paths is a fundamental subroutine in motion planning and multibody simulation. The problem is nontrivial when only the leader trajectory of the first node is available: enforcing fixed inter-node distances reduces to circle–curve intersection, which is generally multi-valued and becomes particularly challenging near non-smooth junctions. We present a Dichotomy Geometric Path Search (DGPS) framework that converts each constraint into a one-dimensional root-finding task and resolves the branch selection through no-backtracking ordering: at every time step, the admissible solution for the current node is the nearest feasible root in the past relative to its immediately preceding node. DGPS combines backward bracketing with bisection, achieving robust convergence. Compared with the inverse Jacobian method, which maps end-effector velocities to joint velocities via explicit inversion, the proposed approach avoids Jacobian inversion and globally coupled nonlinear solves. We further characterize the local structure of the zero set and establish monotonicity/uniqueness conditions that justify stable root selection across piecewise junctions. Extensive tests on representative piecewise trajectories (line–arc–line, polylines with corners, piecewise sinusoids, and time reparameterization) show that DGPS enforces distance constraints to near machine precision, produces interpretable speed/acceleration transients around non-smooth events, and exhibits computational costs consistent with iteration difficulty. The results support DGPS as a general, efficient solver requiring only the prescribed leader trajectory.
Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
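The abstract fully determines the numerical core: each follower position solves ||p(t) − anchor|| = d for the nearest past root, via backward bracketing plus bisection. A generic reconstruction under those assumptions (the step sizes, tolerances, and circular test path are mine, not the authors'):

```python
import numpy as np

def next_node_param(path, t_prev, d, t_min=0.0, tol=1e-12, coarse=200):
    """Backward bracketing + bisection for ||path(t) - path(t_prev)|| = d,
    t <= t_prev: a generic reconstruction of the DGPS root-finding core,
    not the authors' code. The no-backtracking rule is realized by
    searching only into the past and keeping the root nearest to t_prev.
    """
    anchor = path(t_prev)
    f = lambda t: np.linalg.norm(path(t) - anchor) - d
    step = max((t_prev - t_min) / coarse, 1e-6)
    lo = hi = t_prev                       # f(t_prev) = -d < 0
    while f(lo) < 0 and lo > t_min:        # march backward until distance >= d
        hi, lo = lo, max(lo - step, t_min)
    while hi - lo > tol:                   # bisection: f(lo) >= 0 > f(hi)
        mid = 0.5 * (lo + hi)
        if f(mid) >= 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Trailing node at chord distance 0.5 behind the leader on a unit circle.
circle = lambda t: np.array([np.cos(t), np.sin(t)])
t_follow = next_node_param(circle, t_prev=2.0, d=0.5)   # ~ 2.0 - 0.5054
```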
Open Access Article
Extended LSTM to Enhance Learner Performance Prediction
by
Adel Ihichr, Soukaina Hakkal, Omar Oustous, Younès El Bouzekri El Idrissi and Ayoub Ait Lahcen
Algorithms 2026, 19(4), 251; https://doi.org/10.3390/a19040251 - 25 Mar 2026
Abstract
Knowledge Tracing (KT) is a fundamental task in intelligent education systems, designed to track students’ evolving knowledge states and predict their future performance. While Deep Learning-based Knowledge Tracing (DLKT) models have advanced the field, they often face significant limitations in jointly capturing short-term performance fluctuations and long-term knowledge retention, which restricts their predictive precision in complex learning trajectories. This paper proposes the Extended Deep Knowledge Tracing (xDKT) model, which integrates the Extended Long Short-Term Memory (xLSTM) architecture to enhance multi-scale temporal learning representations. Specifically, through rigorous ablation studies over extended learning sequences (up to 1000 steps), our analysis indicates that the exponential gating and advanced scalar memory of sLSTM units are the primary drivers of performance. This architecture effectively captures both short-term performance shifts and long-term knowledge retention without the vanishing gradient degradation inherent to standard LSTMs. We evaluate xDKT across six diverse benchmark datasets, including Synthetic, Algebra2005–2006, Statics2011, and the ASSISTments series, covering over 22,000 learners. Experimental results show that xDKT yields improved Area Under the ROC Curve (AUC) scores on Statics2011 (0.8562) and ASSISTments2009 (0.8318) compared to baseline models such as DKT, DKVMN, and AKT. Finally, through extensive validation, these findings suggest that xDKT architecture provides a robust and promising framework for accurate and adaptive learning environments.
Full article
(This article belongs to the Special Issue Advances in Deep Learning-Based Data Analysis)
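The sLSTM unit credited with the gains is specified in the xLSTM paper (Beck et al., 2024): exponential input/forget gates, a normalizer state, and a log-space stabilizer. A plain-numpy rendering of one step under that specification (shapes and initialization are illustrative; this is not the xDKT code):

```python
import numpy as np

def slstm_step(x, h, c, n, m, W, R, b):
    """One sLSTM step with exponential gating and log-space stabilizer m,
    following the xLSTM formulation (Beck et al., 2024).

    W: (4H, D) input weights, R: (4H, H) recurrent weights, b: (4H,) bias,
    stacked in the order z (cell input), i, f, o.
    """
    z_pre, i_pre, f_pre, o_pre = (W @ x + R @ h + b).reshape(4, -1)
    m_new = np.maximum(f_pre + m, i_pre)           # stabilizer state
    i_gate = np.exp(i_pre - m_new)                 # stabilized exponential gates
    f_gate = np.exp(f_pre + m - m_new)
    c_new = f_gate * c + i_gate * np.tanh(z_pre)   # scalar memory mixing
    n_new = f_gate * n + i_gate                    # normalizer state
    h_new = 1.0 / (1.0 + np.exp(-o_pre)) * (c_new / n_new)
    return h_new, c_new, n_new, m_new

H, D = 8, 16                                       # hidden size, input size
rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(4 * H, D))
R = 0.1 * rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h = c = n = m = np.zeros(H)
for x in rng.normal(size=(5, D)):                  # a short response sequence
    h, c, n, m = slstm_step(x, h, c, n, m, W, R, b)
```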
Open Access Article
Human-Executable Algorithms for Phishing Avoidance
by
Paul A. Gagniuc, Ana Apetroaiei, Marius Claudiu Langa, Adriana Nicoleta Lazar, Ionut Marius Bulgaru, Maria-Iuliana Dascalu and Ionel-Bujorel Pavaloiu
Algorithms 2026, 19(4), 250; https://doi.org/10.3390/a19040250 - 25 Mar 2026
Abstract
Phishing attacks remain effective because they exploit human decisions at the moment of action, often before automated defenses intervene. Established countermeasures focus on detection systems or awareness campaigns but rarely provide non-expert users with a formally specified decision procedure. This work presents a lightweight, deterministic phishing avoidance algorithm that users can execute without specialized tools. The algorithm evaluates a finite set of observable indicators and applies a monotonic risk score to produce allow, caution, or block decisions. Formal properties of the procedure include monotonicity, bounded complexity, and decision traceability. A controlled study with 96 participants and 72 messages per participant showed that algorithm use increased mean classification accuracy from 68.4% to 84.7% and reduced the false-negative rate from 31.9% to 11.3%. Median decision time rose from 6.2 s to 8.7 s. These results show that phishing avoidance can be expressed as a human-executable algorithm rather than as advisory guidance, and that structured decision rules can measurably improve user level security outcomes.
Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
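The abstract pins down the algorithm's shape: a finite indicator set, a monotonic score, and a three-way threshold. A minimal executable rendering of that shape, with made-up indicators, weights, and thresholds (the published procedure's actual cue list and cut-offs are not reproduced here):

```python
# Illustrative indicator weights; not the paper's published cue list.
INDICATORS = {
    "sender_domain_mismatch": 3,
    "link_text_url_mismatch": 3,
    "credential_request": 3,
    "urgent_call_to_action": 2,
    "unexpected_attachment": 2,
    "generic_greeting": 1,
}

def classify(observed: set) -> str:
    """Monotonic by construction: observing an extra cue can never lower
    the score, and the decision trace is simply the set of matched cues."""
    score = sum(w for cue, w in INDICATORS.items() if cue in observed)
    if score >= 5:
        return "block"
    if score >= 2:
        return "caution"
    return "allow"

assert classify({"generic_greeting"}) == "allow"
assert classify({"urgent_call_to_action"}) == "caution"
assert classify({"sender_domain_mismatch", "credential_request"}) == "block"
```

Because every weight is non-negative and thresholds are fixed, the monotonicity and bounded-complexity properties the paper proves hold for any table of this form.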
Open Access Systematic Review
Recent Advances in Multi-Camera Computer Vision for Industry 4.0 and Smart Cities: A Systematic Review
by
Carlos Julio Fierro-Silva, Carolina Del-Valle-Soto, Samih M. Mostafa and José Varela-Aldás
Algorithms 2026, 19(4), 249; https://doi.org/10.3390/a19040249 - 25 Mar 2026
Abstract
The rapid deployment of surveillance cameras in urban, industrial, and domestic environments has intensified the need for intelligent systems capable of analyzing video streams beyond the limitations of single-camera setups. Unlike traditional single-camera approaches, multi-camera systems expand spatial coverage, reduce blind spots, and enable consistent tracking of people and objects across non-overlapping views, thereby improving robustness against occlusions and viewpoint changes. This article presents a comprehensive review of multi-camera vision systems published between 2020 and 2025, covering application domains including public security and biometrics, intelligent transportation, smart cities and IoT, healthcare monitoring, precision agriculture, industry and robotics, pan–tilt–zoom (PTZ) camera networks, and emerging areas such as retail and forensic analysis. The review synthesizes predominant technical approaches, including deep-learning-based detection, multi-target multi-camera tracking (MTMCT), re-identification (Re-ID), spatiotemporal fusion, and edge computing architectures. Persistent challenges are identified, particularly in inter-camera data association, scalability, computational efficiency, privacy preservation, and dataset availability. Emerging trends such as distributed edge AI, cooperative camera networks, and active perception are discussed to outline future research directions toward scalable, privacy-aware, and intelligent multi-camera infrastructures.
Full article
(This article belongs to the Special Issue Algorithmic Innovations: Bridging Theoretical Foundations and Practical Applications (2nd Edition))
Open Access Article
Efficient Word-Level Sign Language Recognition Using Quantized Spatiotemporal Deep Learning for Low-Power Microcontrollers
by
Samuel Longwani Kimpinde and Peter O. Olukanmi
Algorithms 2026, 19(4), 248; https://doi.org/10.3390/a19040248 - 25 Mar 2026
Abstract
Deploying efficient sign language recognition models on edge devices advances inclusive, affordable, and privacy-preserving human–computer interaction. Yet most state-of-the-art architectures target server-class hardware and fail under the strict memory, computation, and energy constraints of microcontrollers. This work introduces S3D-Conv1D, a separable spatiotemporal architecture for isolated word-level sign language recognition, tailored for TinyML deployment. While the idea of separating spatial and temporal processing has been explored in earlier models, the novelty here lies in a deployment pipeline designed from the outset for microcontroller-class constraints: every operator has native INT8 support in TensorFlow Lite, CMSIS-NN, and NNoM; the architecture achieves full integer-only execution with competitive accuracy; and the evaluation scale (100 and 300 classes) substantially exceeds prior TinyML sign language recognition studies. Evaluations show that S3D-Conv1D achieves competitive float32 accuracy on WLASL100 with stable cross-dataset generalization on SemLex100. After INT8 quantization, accuracy remains high on WLASL100 while the model compresses to 883 KB, the smallest across all evaluated architectures. An ultralight variant further reduces the size to 24.7 KB while sustaining accuracy on WLASL100 and WLASL300. Quantization-aware training improves stability, particularly at larger vocabulary scales. Among baselines, S3D achieves strong performance but negligible compression (30.3 MB) due to non-quantization-friendly operators. The MobileNet variant generalizes better on WLASL100 and SemLex100 but remains large at 2.71 MB in INT8 form. CNN + RNN and e-LSTM depend on unsupported recurrent or attention operators. In contrast, S3D-Conv1D meets all operator compatibility requirements and delivers full INT8 execution with a compact sub-1 MB footprint and real-time performance. These results demonstrate that competitive word-level sign language recognition is achievable under embedded constraints when architectural design prioritizes quantization stability, operator compatibility, and deployment feasibility from the outset.
Full article
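The integer-only constraint the paper designs around corresponds to TensorFlow Lite's full-integer conversion path. A sketch with a toy separable-Conv1D classifier standing in for S3D-Conv1D (the architecture, input shape, and calibration data are placeholders):

```python
import numpy as np
import tensorflow as tf

# Toy separable-Conv1D classifier; layers and shapes are illustrative.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 64)),              # (time steps, features)
    tf.keras.layers.SeparableConv1D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(100, activation="softmax"),   # 100-sign vocabulary
])

def representative_data():                               # calibration samples
    for _ in range(100):
        yield [np.random.rand(1, 32, 64).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8                 # integer-only I/O
converter.inference_output_type = tf.int8
tflite_int8 = converter.convert()                        # deployable flatbuffer
```

Restricting the supported ops to TFLITE_BUILTINS_INT8 is what makes conversion fail fast on operators without INT8 kernels, the compatibility issue the paper reports for recurrent and attention baselines.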
Open Access Article
A Real-Time Detection Approach for Bridge Crack
by
Tingjuan Wang, Jiuyuan Huo and Xinping Wu
Algorithms 2026, 19(4), 247; https://doi.org/10.3390/a19040247 - 25 Mar 2026
Abstract
To meet the requirement of real-time bridge crack detection, this paper proposes a lightweight detection model based on YOLOv7-tiny. First, an edge-preserved image enhancement method is proposed; it effectively enhances image contrast while preserving the structural features of crack edges, providing a high-quality data foundation for the detection network. Second, a LWCSP module is introduced that integrates hybrid convolution and shuffle operations, reducing the model's parameter count and computation while maintaining strong feature representation capability, and thereby achieving a good balance between detection performance and efficiency. Finally, an improved SWise-IoU is proposed to optimize bounding box regression in YOLOv7-tiny. It dynamically evaluates sample quality and enables differentiated gradient adjustment for samples of different qualities, promoting sufficient learning of sample features and improving detection accuracy. Experimental results show that the proposed model delivers strong performance on a public bridge crack dataset. Compared to the baseline, mAP@0.5 is 12.1 percentage points higher, and model size, parameter count, and FLOPs are reduced by 7.3%, 8.03%, and 10%, respectively. The final model size is only 11.4 MB with an mAP@0.5 of 86.1%, suitable for real-time crack detection tasks.
Full article
Open Access Article
Distributed and Data-Driven Optimization Frameworks for Logistics-Oriented Decision Support Under Partial and Asynchronous Information
by
Manuel J. C. S. Reis
Algorithms 2026, 19(4), 246; https://doi.org/10.3390/a19040246 - 24 Mar 2026
Abstract
This paper introduces D3O-GT, a distributed optimization framework designed to operate under partial, heterogeneous, and delayed information—conditions commonly encountered in large-scale logistics and networked decision support systems. The proposed approach integrates gradient tracking with delay-aware updates to address the steady-state bias and instability that often affect classical distributed gradient methods. We formulate a consensus optimization model that captures decentralized decision variables while preserving global optimality, and we develop an algorithmic structure that balances convergence accuracy, communication efficiency, and robustness to asynchronous updates. Extensive numerical experiments demonstrate that D3O-GT achieves machine precision convergence in synchronous settings and remains stable under bounded communication delays, converging to a small neighborhood of the optimum. In contrast, conventional distributed gradient descent exhibits significant residual error under the same conditions. Scalability analyses further indicate that the proposed method maintains favorable iteration complexity as the number of agents increases. These results position D3O-GT as a practical and scalable solution for distributed decision-making environments, with direct relevance to logistics-oriented applications such as resource allocation, coordination of networked services, and real-time operational planning.
Full article
(This article belongs to the Special Issue Optimizing Logistics Activities: Models and Applications)
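For orientation, the synchronous gradient-tracking baseline that D3O-GT extends keeps, at each agent, both a decision estimate and a tracker of the network-average gradient. A minimal numpy version on a three-agent quadratic problem (the delay-aware modifications that define D3O-GT are not reproduced):

```python
import numpy as np

def gradient_tracking(grad_fns, W, x0, alpha=0.1, iters=500):
    """Synchronous gradient tracking over a network with doubly stochastic
    mixing matrix W; the classical baseline D3O-GT builds on.

    grad_fns: per-agent gradient oracles; x0: (n_agents, dim) initial guesses.
    """
    x = x0.copy()
    g_old = np.stack([g(xi) for g, xi in zip(grad_fns, x)])
    y = g_old.copy()                       # tracker of the average gradient
    for _ in range(iters):
        x = W @ x - alpha * y              # consensus step + descent along tracker
        g_new = np.stack([g(xi) for g, xi in zip(grad_fns, x)])
        y = W @ y + g_new - g_old          # dynamic average consensus update
        g_old = g_new
    return x

# Three agents minimizing sum_i ||x - t_i||^2; the global optimum is mean(t_i) = 2.
targets = [0.0, 1.0, 5.0]
grad_fns = [lambda x, t=t: 2.0 * (x - t) for t in targets]
W = np.array([[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]])
x_final = gradient_tracking(grad_fns, W, np.zeros((3, 1)))   # all rows -> ~2.0
```

The tracker update is what removes the steady-state bias of plain distributed gradient descent that the abstract contrasts against.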
Open Access Article
An Improved AlexNet-Based Image Recognition Method for Transmission Line Wildfires
by
Zilin Zhao and Guoyong Duan
Algorithms 2026, 19(4), 245; https://doi.org/10.3390/a19040245 - 24 Mar 2026
Abstract
Wildfires in the vicinity of power transmission corridors are characterized by sudden occurrence, rapid growth, and susceptibility to confusion with fire-like light sources at night, which can easily lead to line discharge and trip accidents and thus affect the safe operation of the power system. To address the high false alarm rate and poor generalization of wildfire image recognition in complex power transmission corridor environments, a wildfire image recognition method based on an improved AlexNet is proposed in this paper. The proposed method improves the description of flame and smoke properties at different scales by designing a reparameterized multi-scale feature extraction structure, and effectively alleviates the influence of strong light reflection and fire-like interference at night by using lightweight multi-scale attention and hybrid pooling attention mechanisms. A wildfire image dataset is constructed from 1246 on-site images of power transmission corridors captured by a visual monitoring device and 600 wildfire images downloaded from the internet, and the method is tested in real-world imbalanced distribution scenarios. The experimental results show that the proposed method recognizes wildfire images with an accuracy of 96.9% and an F1 score of 94.9% on the test dataset, much higher than the original AlexNet, and generalizes well in cross-dataset tests. This work can provide technical support for online wildfire monitoring and the operation and maintenance of power transmission corridors.
Full article
(This article belongs to the Special Issue AI-Based Techniques in Smart Grid Operations)
Open Access Review
Smart Medical Image Processing System Based on Explainable and Generative Artificial Intelligence: A Comprehensive Review
by
Cosmin George Nicolăescu, Florentina Magda Enescu, Alin Gheorghiță Mazăre, Nicu Bizon and Cristian Toma
Algorithms 2026, 19(4), 244; https://doi.org/10.3390/a19040244 - 24 Mar 2026
Abstract
In recent years, the integration of advanced methods in medical imaging has become a major topic of interest due to its potential to enhance diagnostic accuracy, improve clinical efficiency, and increase specialists’ confidence in Artificial Intelligence (AI)-based decision-making. This paper explores the synthesis of Explainable AI (XAI) and Generative AI (GAI) in medical imaging, highlighting the advantages and challenges of these emerging technologies. The objective of this paper is to explore how the combined use of XAI and GAI contributes both to interpretability and to diagnostic accuracy. This research represents a systematic literature review conducted in accordance with PRISMA 2020, based on searches carried out in the PubMed, Scopus, IEEE Xplore, MDPI and ScienceDirect databases. Thus, a comprehensive overview of the integration of XAI and GAI in medical imaging is presented, based on recent studies and validated clinical applications. The advantages of combining transparency and data amplification in diagnostic models are highlighted, demonstrating their complementary roles in improving diagnosis using medical imaging. Ongoing challenges in clinical adoption are also emphasised, including interpretability and the need for validated assessment metrics. Beyond technological benefits, the paper also underlines the importance of ethical and legal considerations in the use of XAI and GAI in medical imaging. Based on the detailed analysis of the investigated studies, the paper also proposes a visual and architectural system concept intended for medical imaging, oriented towards research into the development of a unified system capable of detecting multiple types of pathologies. This research provides a detailed perspective on how XAI and GAI can revolutionise medical imaging by optimising data interpretation, enhancing human-AI collaboration, and increasing patient safety.
Full article
(This article belongs to the Special Issue Machine Learning and Deep Learning in Medical Imaging Diagnostics)
Open Access Article
Fast Approximate ℓ-Center Clustering in High-Dimensional Spaces
by
Mirosław Kowaluk, Andrzej Lingas and Mia Persson
Algorithms 2026, 19(3), 243; https://doi.org/10.3390/a19030243 - 23 Mar 2026
Abstract
We study the design of efficient approximation algorithms for the ℓ-center clustering and minimum-diameter ℓ-clustering problems in high-dimensional Euclidean and Hamming spaces. Our main tool is randomized dimension reduction. First, we present a general method of reducing the dependency of the running time of a hypothetical algorithm for the ℓ-center problem in a high-dimensional Euclidean space on the dimension. Utilizing this method in part, we provide -approximation algorithms for the ℓ-center clustering and minimum-diameter ℓ-clustering problems in Euclidean and Hamming spaces that are substantially faster than the known 2-approximation algorithms when both ℓ and the dimension are super-logarithmic. Next, we apply the general method to the recent fast approximation algorithms with higher approximation guarantees for the ℓ-center clustering problem in a high-dimensional Euclidean space. Finally, we provide a speed-up of the known -approximation method for the generalization of the ℓ-center clustering problem that allows z outliers (i.e., z input points may be ignored when computing the maximum distance from an input point to a center) in high-dimensional Euclidean and Hamming spaces.
Full article
(This article belongs to the Section Randomized, Online, and Approximation Algorithms)
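Two of the abstract's ingredients are easy to exhibit: the classical greedy (farthest-first) 2-approximation for center selection, and random Gaussian dimension reduction applied before it. The sketch below combines them with illustrative parameters; it shows the general recipe, not the paper's specific algorithms or guarantees (ℓ is written k here):

```python
import numpy as np

def gonzalez(P, k, rng=np.random.default_rng(0)):
    """Farthest-first traversal: the classical greedy 2-approximation
    for the k-center objective."""
    idx = [int(rng.integers(len(P)))]
    d = np.linalg.norm(P - P[idx[0]], axis=1)         # distance to nearest center
    for _ in range(k - 1):
        idx.append(int(np.argmax(d)))                 # farthest point becomes a center
        d = np.minimum(d, np.linalg.norm(P - P[idx[-1]], axis=1))
    return idx, float(d.max())                        # centers and clustering radius

def jl_project(P, eps=0.5, rng=np.random.default_rng(0)):
    """Random Gaussian (Johnson-Lindenstrauss-style) dimension reduction;
    the target dimension below is an illustrative choice."""
    n, D = P.shape
    d = max(1, int(8 * np.log(n) / eps**2))
    return P @ rng.normal(size=(D, d)) / np.sqrt(d)

P = np.random.default_rng(1).normal(size=(2000, 512)) # high-dimensional points
centers, radius = gonzalez(jl_project(P), k=10)
```

Since pairwise distances are preserved up to 1 ± ε with high probability, clustering in the projected space degrades the approximation factor only slightly while removing the dependence on the original dimension, which is the effect the paper exploits.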
Open Access Article
A Hybrid Algorithm Combining Wavelet Analysis and Deep Learning for Predicting Agroclimatic Pest Infestations
by
Akerke Akanova, Nazira Ospanova, Gulzhan Muratova, Saltanat Sharipova, Nurgul Tokzhigitova and Galiya Anarbekova
Algorithms 2026, 19(3), 242; https://doi.org/10.3390/a19030242 - 23 Mar 2026
Abstract
Forecasting crop pest outbreaks under conditions of increasing agroclimatic variability is a critical task for intelligent decision support systems in agriculture. Traditional statistical and empirical models typically have limited transferability and insufficient accuracy when describing nonlinear and multiscale relationships between climatic factors and pest population dynamics. This paper proposes a hybrid algorithm combining wavelet analysis and deep learning methods for forecasting agroclimatic pest infestation levels. The algorithm is based on multiscale decomposition of time series using a discrete wavelet transform, after which the extracted components are used as input features for a deep neural network implementing a nonlinear mapping between climatic parameters and infestation indicators. The developed computational framework includes the stages of data preprocessing, feature space formation, model training, and forecast generation in a single, reproducible pipeline. An experimental evaluation using long-term agroclimatic and phytosanitary data showed that the proposed algorithm outperforms classical regression and individual neural network models in terms of RMSE, MAE, and the coefficient of determination. The results confirm the effectiveness of integrating wavelet analysis and deep learning for developing phytosanitary risk forecasting algorithms and demonstrate the potential of the proposed approach for implementation in intelligent precision farming systems.
Full article
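The pipeline's skeleton (DWT sub-band features feeding a neural regressor) can be sketched with PyWavelets and scikit-learn. The wavelet family, decomposition level, summary statistics, and synthetic data below are illustrative choices, not the paper's configuration:

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def dwt_features(window, wavelet="db4", level=3):
    """Summary statistics of each DWT sub-band, used as model inputs."""
    coeffs = pywt.wavedec(window, wavelet, level=level)
    return np.array([s for c in coeffs
                     for s in (c.mean(), c.std(), np.abs(c).max())])

rng = np.random.default_rng(0)
climate = rng.normal(size=(200, 64))                  # 200 windows of climate series
infestation = climate.mean(axis=1) + 0.1 * rng.normal(size=200)  # toy target

X = np.stack([dwt_features(w) for w in climate])      # multiscale feature space
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X[:150], infestation[:150])                 # train split
r2 = model.score(X[150:], infestation[150:])          # R^2 on held-out windows
```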
Journal Menu
- Algorithms Home
- Aims & Scope
- Editorial Board
- Reviewer Board
- Topical Advisory Panel
- Instructions for Authors
- Special Issues
- Topics
- Sections & Collections
- Article Processing Charge
- Indexing & Archiving
- Editor’s Choice Articles
- Most Cited & Viewed
- Journal Statistics
- Journal History
- Journal Awards
- Society Collaborations
- Conferences
- Editorial Office
Journal Browser
- Highly Accessed Articles
- Latest Books
- E-Mail Alert
- News
Topics
Topic in
Actuators, Algorithms, BDCC, Future Internet, JMMP, Machines, Robotics, Systems
Smart Product Design and Manufacturing on Industrial Internet
Topic Editors: Pingyu Jiang, Jihong Liu, Ying Liu, Jihong Yan
Deadline: 30 June 2026
Topic in
Algorithms, Data, Earth, Geosciences, Mathematics, Land, Water, IJGI
Applications of Algorithms in Risk Assessment and Evaluation
Topic Editors: Yiding Bao, Qiang Wei
Deadline: 31 July 2026
Topic in
Algorithms, Applied Sciences, Electronics, MAKE, AI, Software
Applications of NLP, AI, and ML in Software Engineering
Topic Editors: Affan Yasin, Javed Ali Khan, Lijie Wen
Deadline: 30 August 2026
Topic in
Agriculture, Energies, Vehicles, Sensors, Sustainability, Urban Science, Applied Sciences, Algorithms
Sustainable Energy Systems
Topic Editors: Luis Hernández-Callejo, Carlos Meza Benavides, Jesús Armando Aguilar Jiménez
Deadline: 31 October 2026
Conferences
Special Issues
Special Issue in
Algorithms
Algorithms for Dynamical Systems and Differential Equations: Theory, Computation Innovations and Applications
Guest Editor: Adil Jhangeer
Deadline: 31 March 2026
Special Issue in
Algorithms
Optimization in Renewable Energy Systems (2nd Edition)
Guest Editors: Cristina Requejo, Adelaide Cerveira
Deadline: 31 March 2026
Special Issue in
Algorithms
Bayesian Machine Learning for Ecological and Environmental Applications
Guest Editor: Nezamoddin N. Kachouie
Deadline: 31 March 2026
Special Issue in
Algorithms
Multi-Objective and Multi-Level Optimization: Algorithms and Applications (2nd Edition)
Guest Editor: Massimiliano Caramia
Deadline: 31 March 2026
Topical Collections
Topical Collection in
Algorithms
Feature Papers in Evolutionary Algorithms and Machine Learning
Collection Editor: Stefano Mariani
Topical Collection in
Algorithms
Feature Papers in Algorithms and Mathematical Models for Computer-Assisted Diagnostic Systems
Collection Editor: Francesc Pozo
Topical Collection in
Algorithms
Traditional and Machine Learning Methods to Solve Imaging Problems
Collection Editors: Laura Antonelli, Lucia Maddalena
Topical Collection in
Algorithms
Feature Papers on Artificial Intelligence Algorithms and Their Applications
Collection Editors: Ulrich Kerzel, Mostafa Abbaszadeh, Andres Iglesias, Akemi Galvez Tomida