Algorithms, Volume 19, Issue 3 (March 2026) – 78 articles

Cover Story: This study explores how Generative Artificial Intelligence (GenAI) is reshaping the development of critical thinking in higher education. Through a systematic literature review, we synthesize emerging evidence on how GenAI tools influence reasoning, evaluation, and reflection processes in students. Rather than replacing thinking, GenAI introduces new cognitive dynamics that can both support and challenge critical engagement. The findings highlight the importance of intentional pedagogical design to harness GenAI as a catalyst for deeper learning, while addressing risks related to overreliance and reduced cognitive effort. This work contributes a timely perspective on the evolving relationship between human cognition and intelligent technologies.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF as the official format. To view a paper in PDF, click its "PDF Full-text" link and open it with the free Adobe Reader.
16 pages, 333 KB  
Article
Fast Approximate k-Center Clustering in High-Dimensional Spaces
by Mirosław Kowaluk, Andrzej Lingas and Mia Persson
Algorithms 2026, 19(3), 243; https://doi.org/10.3390/a19030243 - 23 Mar 2026
Viewed by 205
Abstract
We study the design of efficient approximation algorithms for the k-center clustering and minimum-diameter k-clustering problems in high-dimensional Euclidean and Hamming spaces. Our main tool is randomized dimension reduction. First, we present a general method of reducing the dependency of the running time of a hypothetical algorithm for the k-center problem in a high-dimensional Euclidean space on the dimension. Utilizing this method in part, we provide (2+ϵ)-approximation algorithms for the k-center clustering and minimum-diameter k-clustering problems in Euclidean and Hamming spaces that are substantially faster than the known 2-approximation algorithms when both k and the dimension are super-logarithmic. Next, we apply the general method to the recent fast approximation algorithms with higher approximation guarantees for the k-center clustering problem in a high-dimensional Euclidean space. Finally, we provide a speed-up of the known O(1)-approximation method for the generalization of the k-center clustering problem that allows z outliers (i.e., z input points may be ignored when computing the maximum distance from an input point to a center) in high-dimensional Euclidean and Hamming spaces. Full article
(This article belongs to the Section Randomized, Online, and Approximation Algorithms)
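For context, the classical baseline that such (2+ϵ)-approximations are measured against is Gonzalez's greedy farthest-point heuristic, a 2-approximation for k-center. The sketch below illustrates that baseline only; it is not the paper's algorithm, and the randomized dimension-reduction step is omitted.

```python
import math

def gonzalez_k_center(points, k):
    """Greedy farthest-point heuristic: a classic 2-approximation
    for Euclidean k-center (illustrative baseline, not the paper's method)."""
    centers = [points[0]]                      # arbitrary first center
    # distance from each point to its nearest chosen center
    dist = [math.dist(p, centers[0]) for p in points]
    while len(centers) < k:
        i = max(range(len(points)), key=dist.__getitem__)
        centers.append(points[i])              # farthest point becomes a center
        for j, p in enumerate(points):
            dist[j] = min(dist[j], math.dist(p, points[i]))
    return centers, max(dist)                  # centers and clustering radius

pts = [(0.0, 0.0), (0.1, 0.0), (10.0, 0.0), (10.1, 0.1), (5.0, 9.0)]
centers, radius = gonzalez_k_center(pts, 3)
```

Each iteration costs O(nd) per center, which is exactly the dimension dependency the paper's reduction targets.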

34 pages, 5101 KB  
Article
A Hybrid Algorithm Combining Wavelet Analysis and Deep Learning for Predicting Agroclimatic Pest Infestations
by Akerke Akanova, Nazira Ospanova, Gulzhan Muratova, Saltanat Sharipova, Nurgul Tokzhigitova and Galiya Anarbekova
Algorithms 2026, 19(3), 242; https://doi.org/10.3390/a19030242 - 23 Mar 2026
Viewed by 209
Abstract
Forecasting crop pest outbreaks under conditions of increasing agroclimatic variability is a critical task for intelligent decision support systems in agriculture. Traditional statistical and empirical models typically have limited transferability and insufficient accuracy when describing nonlinear and multiscale relationships between climatic factors and pest population dynamics. This paper proposes a hybrid algorithm combining wavelet analysis and deep learning methods for forecasting agroclimatic pest infestation levels. The algorithm is based on multiscale decomposition of time series using a discrete wavelet transform, after which the extracted components are used as input features for a deep neural network implementing a nonlinear mapping between climatic parameters and infestation indicators. The developed computational framework includes the stages of data preprocessing, feature space formation, model training, and forecast generation in a single, reproducible pipeline. An experimental evaluation using long-term agroclimatic and phytosanitary data showed that the proposed algorithm outperforms classical regression and individual neural network models in terms of RMSE, MAE, and the coefficient of determination. The results confirm the effectiveness of integrating wavelet analysis and deep learning for developing phytosanitary risk forecasting algorithms and demonstrate the potential of the proposed approach for implementation in intelligent precision farming systems. Full article
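As background for the decomposition stage, a multiscale discrete wavelet transform can be sketched with the Haar wavelet in pure Python. This is a stand-in for illustration only; the paper does not specify the wavelet family or pipeline used here.

```python
import math

def haar_dwt(signal):
    """One level of the discrete Haar wavelet transform: splits a time
    series into approximation (trend) and detail (fluctuation) coefficients."""
    s2 = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def multiscale_features(signal, levels):
    """Cascade the transform: each level's approximation is decomposed
    again, yielding per-scale detail vectors usable as network inputs."""
    features = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        features.append(detail)
    features.append(approx)   # coarsest trend component
    return features

feats = multiscale_features([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0], 2)
```

The resulting per-scale vectors would then be concatenated as the feature space for the deep network in a pipeline of the kind the abstract describes.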

21 pages, 6402 KB  
Article
A New Method for Diagnosing Transformer Winding Faults Based on mRMR-RF Feature Selection and an Inverse Distance Weighted KNN Model
by Chenyang Wang, Huan Peng, Zirui Liu, Song Wang, Danyu Li, Fei Xie and Jian Yang
Algorithms 2026, 19(3), 241; https://doi.org/10.3390/a19030241 - 23 Mar 2026
Viewed by 176
Abstract
Accurately extracting deviation features in frequency response curves, which reflect winding deformation states, and selecting appropriate machine learning algorithms are critical for achieving a precise quantitative diagnosis of winding deformation based on frequency response analysis (FRA). To address the existing challenges in transformer winding fault diagnosis, including the absence of a systematic feature evaluation framework for frequency response data and the limited recognition accuracy of machine learning models, a novel hybrid feature selection and diagnostic framework was developed. First, a high-dimensional feature pool comprising 25 numerical indices was extracted from experimental FRA curves. To eliminate feature redundancy and arbitrary selection, a hybrid mechanism integrating maximum-relevance, minimum-redundancy (mRMR) with random forest (RF) was developed to dynamically construct task-specific optimal feature subsets. Furthermore, an inverse-distance-weighted K-nearest neighbors (IKNN) model was introduced to enhance diagnostic sensitivity by accounting for feature-space distance variations. Experimental results obtained from a laboratory winding model demonstrate that the proposed mRMR-RF-IKNN model significantly outperforms traditional and optimized benchmarks across multiple macro-evaluation metrics. This study provides a systematic, intelligent screening mechanism that ensures high-precision identification of both the types and severity of faults in power transformers. Full article
(This article belongs to the Special Issue Optimization in Renewable Energy Systems (2nd Edition))
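The inverse-distance weighting idea behind the IKNN model can be sketched generically: nearer neighbours in feature space get proportionally larger votes. The function and toy data below are illustrative assumptions, not the paper's implementation.

```python
import math
from collections import defaultdict

def idw_knn_predict(train, query, k=3, eps=1e-9):
    """Inverse-distance-weighted k-NN: each of the k nearest training
    samples votes for its label with weight 1/distance, so close
    neighbours dominate. `train` is a list of (features, label) pairs."""
    neighbours = sorted(train, key=lambda fl: math.dist(fl[0], query))[:k]
    votes = defaultdict(float)
    for features, label in neighbours:
        votes[label] += 1.0 / (math.dist(features, query) + eps)
    return max(votes, key=votes.get)

# hypothetical 2-D frequency-response features for two winding states
train = [((0.0, 0.0), "normal"), ((0.2, 0.1), "normal"),
         ((1.0, 1.0), "deformed"), ((1.1, 0.9), "deformed")]
label = idw_knn_predict(train, (0.9, 0.9), k=3)
```

Unlike plain majority-vote k-NN, the weighted vote is sensitive to feature-space distance variations, which is the property the abstract credits for the improved diagnostic sensitivity.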

37 pages, 5953 KB  
Article
Fire Detection Using Sound Analysis Based on a Hybrid Artificial Intelligence Algorithm
by Robert-Nicolae Boştinaru, Sebastian-Alexandru Drǎguşin, Nicu Bizon, Dumitru Cazacu and Gabriel-Vasile Iana
Algorithms 2026, 19(3), 240; https://doi.org/10.3390/a19030240 - 23 Mar 2026
Viewed by 290
Abstract
Fire detection is a critical task for early warning systems, particularly in environments where visual sensing is unreliable. While most existing approaches rely on image-based or smoke-based detection, acoustic signals provide complementary information capable of capturing early combustion-related events. This study investigates deep learning models for sound-based fire detection, focusing on convolutional and Transformer-based architectures. VGG16 and VGG19 convolutional neural networks are adapted to process time-frequency audio representations for binary classification into Fire and No-Fire classes. An Audio Spectrogram Transformer (AST) is further employed to model long-range temporal dependencies in acoustic data. Finally, a hybrid VGG19-AST architecture is proposed, in which convolutional layers extract local spectral–temporal features, and Transformer-based self-attention performs global sequence modeling. The models are evaluated on a curated dataset containing fire sounds and diverse environmental background noises under multiple noise conditions. Experimental results demonstrate competitive performance across convolutional and Transformer-based models, while the proposed hybrid VGG19-AST architecture achieves the most consistent overall results. The findings suggest that integrating convolutional feature extraction with self-attention-based global modeling enhances robustness under complex acoustic variability. The proposed hybrid framework provides a scalable and cost-effective solution for sound-based fire detection, particularly in scenarios where visual monitoring may be obstructed or ineffective. Full article

20 pages, 613 KB  
Article
Automated Electronic Health Record Phenotyping of Acute and Subacute Subdural Hematoma
by Gregory B. Hooke, Haoqi Sun, Catherine Clive, Spencer Boris, Niels Turley, Lydia Petersen, Jaden Searle, Bram Overmeer, Ali Han Yaramis, Karan Singh, Arjun Singh, Daniel Sumsion, Aditya Gupta, Manohar Ghanta, Valdery F. Moura Junior, Marta Fernandes, Katie L. Stone, Dennis Hwang, Lynn Marie Trotti, Gari D. Clifford, Umakanth Katwa, Shibani S. Mukerji, Sahar F. Zafar, Robert J. Thomas and M. Brandon Westover
Algorithms 2026, 19(3), 239; https://doi.org/10.3390/a19030239 - 23 Mar 2026
Viewed by 328
Abstract
Accurate identification of acute and subacute subdural hematoma (acute/subacute SDH) is critical for improved patient outcomes. However, large-scale research is hindered by unreliable identification methods in electronic health records (EHRs). Current approaches relying on International Classification of Diseases (ICD) codes lack specificity and cannot distinguish acute, subacute, and chronic cases; manual chart review is too labor-intensive to scale. We developed an automated phenotyping algorithm using structured data and unstructured clinical notes for high-accuracy retrospective identification of acute/subacute SDH. We analyzed 2999 records from two hospitals, including ICD-positive and ICD-negative acute/subacute SDH cases verified by manual chart review. Features for model training included ICD codes, Current Procedural Terminology (CPT) codes, and clinical note keywords. Logistic regression and random forest models were trained using cross-validation and evaluated using AUROC and AUPRC. External validation involved training on one hospital and testing on the other. The random forest keywords-only model performed best, achieving an AUROC of 0.985 (95% CI: 0.980–0.990) and AUPRC of 0.944 (95% CI: 0.923–0.962) on the test set. External validation demonstrated strong AUROCs of 0.965 and 0.971 and AUPRCs of 0.831 and 0.840. The overall error rate was <1%. This model provides a scalable, highly accurate approach to acute/subacute SDH detection in EHR research. Full article

24 pages, 39455 KB  
Article
Information Bottleneck Scores for Identifying Causally Informative Attention Heads in Vision–Language Models
by Yiyou Zhang and Liyan Ma
Algorithms 2026, 19(3), 238; https://doi.org/10.3390/a19030238 - 23 Mar 2026
Viewed by 262
Abstract
Vision–language models (VLMs) have demonstrated remarkable performance on a wide range of multimodal reasoning tasks, yet their visual grounding mechanisms remain poorly understood and are often unreliable for fine-grained visual concepts. Existing approaches typically rely on raw attention maps or gradient-based saliency, which provide heuristic explanations but lack a causal interpretation of how visual evidence contributes to model predictions. In this paper, we propose an Information Bottleneck Score (IBS) framework that explicitly quantifies the causal importance of visual patches through interventional analysis. By masking candidate image patches and measuring the induced change in the model prediction, the IBS captures patch-level causal contributions rather than correlation-based signals. We further lift patch-level importance to the attention-head level by aggregating the IBS with text-to-image attention, enabling the identification of a small subset of information-transmitting attention heads responsible for visual grounding. Building on the selected heads, we construct refined importance maps that guide visual cropping in a fully training-free manner. Extensive experiments on multiple detail-sensitive benchmarks, including TextVQA, V*, POPE, and DocVQA, demonstrate consistent improvements in fine-grained visual understanding, while evaluations on general-purpose datasets such as GQA, AOKVQA, and VQAv2 confirm that overall reasoning performance is preserved. Additional ablation studies further validate the effectiveness of each component in the proposed framework. Overall, our work provides a causal perspective on visual grounding in VLMs and offers a model-agnostic, training-free approach for both interpreting and enhancing multimodal reasoning. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
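The interventional masking idea behind the IBS can be illustrated with a toy model: mask each patch, re-score, and record the drop in the model's confidence as that patch's causal contribution. All names, the toy "model", and the scoring rule below are assumptions for illustration, not the paper's API.

```python
def patch_importance(image, predict, patch):
    """Interventional patch saliency: zero out each patch-sized region
    and measure how much the model's score drops relative to the
    unmasked image. Larger drop = more causally important patch."""
    h, w = len(image), len(image[0])
    base = predict(image)
    scores = {}
    for top in range(0, h, patch):
        for left in range(0, w, patch):
            masked = [row[:] for row in image]          # copy, then zero the patch
            for r in range(top, min(top + patch, h)):
                for c in range(left, min(left + patch, w)):
                    masked[r][c] = 0.0
            scores[(top, left)] = base - predict(masked)  # causal drop
    return scores

# toy "model": confidence is just the mean pixel intensity
img = [[1.0, 1.0, 0.0, 0.0],
       [1.0, 1.0, 0.0, 0.0],
       [0.0, 0.0, 0.0, 0.0],
       [0.0, 0.0, 0.0, 0.0]]
mean_score = lambda im: sum(map(sum, im)) / 16.0
scores = patch_importance(img, mean_score, 2)
```

Because the score is computed by intervention rather than by reading attention weights, it captures contribution to the prediction, not mere correlation — the distinction the abstract emphasizes.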

45 pages, 2643 KB  
Article
From Complexity Theory to Computational Wisdom: Enhancing EEG–Neurotransmitter Models Through Sophimatics for Brain Data Analysis
by Gerardo Iovane and Giovanni Iovane
Algorithms 2026, 19(3), 237; https://doi.org/10.3390/a19030237 - 22 Mar 2026
Viewed by 286
Abstract
The analysis of brain data through electroencephalography (EEG) has become essential in neuroscience, affective computing, and brain–computer interfaces. Recent work associates EEG features with artificial neurotransmitter models, simulating emotions and rational–emotional decision-making using complexity theory. However, current methods face limitations: (1) linear temporal representations lacking memory and anticipation, (2) limited contextual adaptation, (3) difficulty with paradoxical affective states, and (4) absence of ethical reasoning in decision-making. We present a framework based on Sophimatics, using complex time (t = t_real + i·t_imag ∈ ℂ), where t_real represents chronology and t_imag encodes experiential dimensions including memory depth and anticipatory imagination. The Super Time Cognitive Neural Network (STCNN) architecture enables the parallel processing of objective time sequences and subjective cognitive experiences. Our Sophimatics-assisted EEG analysis achieves: (1) two-dimensional temporal coherence integrating past experiences and future projections, (2) context-sensitive adaptation via ontological knowledge graphs, (3) interpretable symbolic reasoning compatible with clinical psychology, (4) mechanisms for resolving affective paradoxes, and (5) ethical constraints ensuring value-based decision-making. Across three case studies (emotion recognition, meditation-induced transitions, and brain–computer interface decision support), integrated Sophimatics models outperform traditional machine learning (15–22% accuracy improvement) and complexity theory models (8–14% improvement), while offering greater cognitive richness and immunity to incomplete data. Results establish a post-generative AI framework with computational wisdom: relationally interactive, ethically informed, and temporally consistent with human cognitive and affective life. The framework outlines paths toward next-generation neuromorphic systems achieving genuine understanding beyond pattern recognition. Full article

26 pages, 1172 KB  
Article
Channel Segmentation Proofreading Network for Crack Counting with Imbalanced Samples
by Mingsi Sun, Fangai Xu, Fachao Zhang, Jian Zhao and Hongwei Zhao
Algorithms 2026, 19(3), 236; https://doi.org/10.3390/a19030236 - 22 Mar 2026
Viewed by 268
Abstract
This paper presents a channel segmentation proofreading network for crack counting with imbalanced samples. The network is built by stacking basic blocks called channel segmentation proofreading blocks, which are composed of the Approximate Overlapping Window Transformer and the Counting Proofreading Module. The former is designed to extract sufficient high-level semantic information, enhancing the ability of the network to judge crack quantities. Guided by the calculation results of the self-attention mechanism in the classical Transformer, Approximate Overlapping Window Transformer employs distinct computation steps to obtain the same results. Confining the computation process within overlapping windows, we continuously adjust to obtain the most suitable feature extraction process and internal structure for crack counting. Furthermore, to prevent the misidentification of multiple cracks as a single crack due to incorrect connection predictions of crack regions, the Counting Proofreading Module employs channel separation techniques. Following the concept of splitting positive and negative weights, it constructs positive and negative values with different characteristics, further confirming crack regions. Through the combined action of both components, when trained and tested on the crack counting dataset, our network achieves optimal results across all metrics. Full article

20 pages, 19133 KB  
Article
Uncovering Several Degrees of Anxiety in Mexican Students Through Advanced Deep Learning Techniques
by Marco A. Moreno-Armendáriz, Arturo Lara-Cázares, Jared Castillo-González and Halder V. Galdo-Navarro
Algorithms 2026, 19(3), 235; https://doi.org/10.3390/a19030235 - 20 Mar 2026
Viewed by 225
Abstract
Emotion identification via computer vision has made continuous progress over the last few years. Although images have been the gold standard for the past two decades, video is increasingly common. Video is particularly suitable for the study of emotions, as it allows them to be treated as spatiotemporal phenomena. In particular, detecting anxiety among Mexican students is a key element for improving their learning in the classroom. In pursuit of this goal, we addressed the following challenges: first, the scarcity of specialized datasets for this task prompted us to develop an experimental protocol to generate a specific dataset; second, we conducted a thorough study of the appropriate number of emotional intensity levels; and third, we developed a suitable deep learning architecture. Our pivotal results include a new dataset labeled with three different emotion levels and appropriate ConvNet architectures, complemented by a study of various intensity levels. The optimal architecture achieved an F1-score of 0.7620 across five intensity levels and provides an adequate baseline for multiclass classification. Full article
(This article belongs to the Special Issue Modern Algorithms for Image Processing and Computer Vision)

22 pages, 6052 KB  
Article
HSMD-YOLO: An Anti-Aliasing Feature-Enhanced Network for High-Speed Microbubble Detection
by Wenda Luo, Yongjie Li and Siguang Zong
Algorithms 2026, 19(3), 234; https://doi.org/10.3390/a19030234 - 20 Mar 2026
Viewed by 222
Abstract
Underwater micro-bubble detection entails multiple challenges, including diminutive target sizes, sparse pixel information, pronounced specular highlights and water scattering, indistinct bubble boundaries, and adhesion or overlap between instances. To address these issues, we propose HSMD-YOLO, an improved detector tailored for high-resolution micro-bubble detection and built upon YOLOv11. The model incorporates three novel components: the Scale Switch Block (SSB), a scale-transformation module that suppresses artifacts and background noise, thereby stabilizing edges in thin-walled bubble regions and enhancing sensitivity to geometric contours; the Global Local Refine Block (GLRB), which achieves efficient global relationship modeling with asymptotic linear complexity (O(N)) in spatial dimensions while further refining local features, thereby strengthening boundary perception and improving bubble–background separability; and the Bidirectional Exponential Moving Attention Fusion (BEMAF), which accommodates the multi-scale nature of bubbles by employing a parallel multi-kernel architecture to extract spatial features across scales, coupled with a multi-stage EMA-based attention mechanism to enhance detection robustness under weak boundaries and complex backgrounds. Experiments conducted on a Side-Illuminated Light Field Bubble Database (SILB-DB) and a public gas–liquid two-phase flow dataset (GTFD) demonstrate that HSMD-YOLO achieves mAP@50 scores of 0.911 and 0.854, respectively, surpassing mainstream detection methods. Ablation studies indicate that SSB, GLRB, and BEMAF contribute performance gains of 1.3%, 2.0%, and 0.4%, respectively, thereby corroborating the effectiveness of each module for micro-scale object detection. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)

19 pages, 2404 KB  
Article
Flight Schedule Problem Optimization Based on Discrete Memory-Enhanced Restructured Particle Swarm Optimization Algorithm
by Wei Gao, Bingnan Wu, Jianhua Liu and Daoming Tang
Algorithms 2026, 19(3), 233; https://doi.org/10.3390/a19030233 - 19 Mar 2026
Viewed by 183
Abstract
Flight Schedule Problem optimization is a typical NP-hard combinatorial optimization problem that is challenging to solve using traditional algorithms, so metaheuristic algorithms are commonly adopted for such problems. This paper proposes a Discrete Memory-Enhanced Restructured Particle Swarm Optimization algorithm (DMERPSO) to address Flight Schedule Problem optimization. Firstly, this paper designs a hybrid particle encoding scheme capable of simultaneously handling flight time adjustments (integer variables) and route selections (categorical variables) for the Flight Schedule Problem. Secondly, a new update equation for particle positions is provided, based on probability selection among the three terms of the Memory-Enhanced Restructured Particle Swarm Optimization (MERPSO) algorithm, and the calculation of the selection probability is designed. Thirdly, the two strategies and the perturbation terms of MERPSO are improved to adapt them to the discrete Flight Schedule Problem. Finally, simulation experiments are conducted using DMERPSO on real flight data from multiple Chinese airports, with the objective of minimizing total flight delays, obtaining better solutions faster than various benchmark algorithms. The DMERPSO algorithm exhibits significant advantages in reducing total delays, improving solution stability, and enhancing robustness, which validates that DMERPSO provides an effective new approach for solving Flight Schedule Problem optimization. Full article
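The probability-selection idea for discrete particle updates can be sketched generically: each component of the new position is drawn from the current position, the personal best, or the global best with fixed probabilities, followed by a perturbation. This is a generic discretisation for illustration; the exact DMERPSO update equation, probability calculation, and perturbation terms differ.

```python
import random

def discrete_pso_step(position, pbest, gbest, probs=(0.2, 0.4, 0.4)):
    """One particle-position update for a discrete PSO: each component is
    taken from the current position, the personal best, or the global best
    according to the given selection probabilities, then one component is
    re-drawn as a simple perturbation to preserve exploration."""
    p_keep, p_pbest, _ = probs
    new_pos = []
    for cur, pb, gb in zip(position, pbest, gbest):
        r = random.random()
        if r < p_keep:
            new_pos.append(cur)          # keep current component
        elif r < p_keep + p_pbest:
            new_pos.append(pb)           # copy from personal best
        else:
            new_pos.append(gb)           # copy from global best
    i = random.randrange(len(new_pos))   # perturbation term
    new_pos[i] = random.choice([new_pos[i], position[i], pbest[i], gbest[i]])
    return new_pos

# toy flight-schedule encoding: one integer slot adjustment per flight
pos = discrete_pso_step([0, 1, 2, 3], [1, 1, 2, 2], [0, 2, 2, 3])
```

Every component of the result is guaranteed to come from one of the three source vectors, which keeps the update closed over the discrete encoding.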

15 pages, 444 KB  
Article
Steiner Tree Approximations in Graphs and Hypergraphs
by Miklós Molnár and Basma Mostafa Hassan
Algorithms 2026, 19(3), 232; https://doi.org/10.3390/a19030232 - 19 Mar 2026
Viewed by 259
Abstract
The construction of partial minimum spanning trees is an NP-hard problem, leading to the development of various heuristic algorithms. Existing heuristics, including Kruskal’s algorithm, frequently employ shortest paths to connect tree components. This study introduces an approximate algorithm for constructing the minimum Steiner tree, which serves as the optimal structure for diffusion multicast. The proposed approach utilizes graph-based structures that provide advantages over conventional shortest-path methods. The algorithm incorporates connections analogous to those in simple Steiner trees when required. These simple trees are represented by hyperedges, and a Hyper Metric Closure can also be applied. Experimental results indicate that this hypergraph-based method enables constructions that more closely approximate the optimal Steiner tree cost compared to traditional pairwise techniques, offering a scalable balance between computational complexity and routing efficiency. Full article
(This article belongs to the Special Issue Graph and Hypergraph Algorithms and Applications)
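The "traditional pairwise techniques" the abstract compares against are typified by the metric-closure 2-approximation (Kou–Markowsky–Berman style): compute terminal-to-terminal shortest paths, build an MST over that closure, and expand MST edges back into graph paths. A minimal sketch of that baseline, for contrast with the hypergraph approach:

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances and predecessor map from `src`."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

def steiner_approx(adj, terminals):
    """Metric-closure 2-approximation: Prim's MST over pairwise terminal
    shortest-path distances, with each closure edge expanded back into
    its underlying graph path. Returns the tree's edge set."""
    sp = {t: dijkstra(adj, t) for t in terminals}
    in_tree, edges = {terminals[0]}, set()
    while len(in_tree) < len(terminals):
        a, b = min(((u, v) for u in in_tree for v in terminals if v not in in_tree),
                   key=lambda uv: sp[uv[0]][0][uv[1]])
        in_tree.add(b)
        _, prev = sp[a]
        node = b                         # expand the closure edge into graph edges
        while node != a:
            edges.add(frozenset((node, prev[node])))
            node = prev[node]
    return edges

# star graph: three terminals around one Steiner node "m"
graph = {"s1": [("m", 1.0)], "s2": [("m", 1.0)], "s3": [("m", 1.0)],
         "m": [("s1", 1.0), ("s2", 1.0), ("s3", 1.0)]}
tree = steiner_approx(graph, ["s1", "s2", "s3"])
```

On this star instance the expanded paths share the Steiner node, so the heuristic recovers the optimal tree of cost 3; the hypergraph method in the paper generalises exactly this kind of shared-substructure connection.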

34 pages, 7523 KB  
Article
Stroke2Font: A Hierarchical Vector Model with AI-Driven Optimization for Chinese Font Generation
by Qing-Sheng Li, Yu-Lin Bian and Zhen-Hui Chai
Algorithms 2026, 19(3), 231; https://doi.org/10.3390/a19030231 - 18 Mar 2026
Viewed by 269
Abstract
Chinese font generation is important for digital typography, cultural preservation, and personalized user interfaces. However, existing methods often face challenges in maintaining structural consistency, supporting diverse stylistic variations, and achieving computational efficiency simultaneously, especially in cloud-based environments. A key application is bandwidth-efficient font delivery, where compact structural templates replace large font files for on-demand style customization. To address these issues, this paper proposes Stroke2Font—a hierarchical vector model with AI-driven optimization for dynamic Chinese font generation. The core model decouples structural representation from style rendering through stroke element decomposition and Bézier curve parameterization. To further balance structural fidelity, style diversity, and real-time performance, we introduce a three-module optimization framework: (1) a reinforcement learning policy for dynamic selection of Bézier control parameters to minimize rendering latency; (2) a genetic algorithm for exploring style vector spaces and generating novel font variants; and (3) an adaptive complexity-aware optimization strategy that dynamically configures parameters based on character structural complexity. Experimental results on a dataset of 150 Chinese characters with 1123 stroke trajectories and 5287 feature points demonstrate that the adaptive complexity-aware optimization achieves the highest trajectory similarity of 65.2%, representing a 6.4% relative improvement over baseline (61.3%). The evaluation covers characters ranging from 1 to 18 strokes across 6 stroke types, with standard deviation reduced to ±5.7% (compared to ±6.5% baseline), indicating more consistent performance. Quantitative analysis confirms that the method generalizes effectively across varying character complexity, with the optimization showing stable improvement regardless of stroke count distribution. These results validate that Stroke2Font provides an effective solution for high-quality, efficient, and scalable Chinese font generation in cloud-based applications. Full article
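Bézier curve parameterization of strokes is evaluated in practice with De Casteljau's algorithm, i.e., repeated linear interpolation between control points. The sketch below shows that standard evaluation step on a hypothetical cubic stroke segment; the paper's actual control-point selection and rendering pipeline are not reproduced here.

```python
def de_casteljau(control_points, t):
    """Evaluate a Bézier curve of any degree at parameter t by
    repeatedly interpolating between adjacent control points until
    a single point remains (De Casteljau's algorithm)."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# a hypothetical cubic stroke segment: endpoints plus two shaping handles
stroke = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
midpoint = de_casteljau(stroke, 0.5)
```

Sampling t over [0, 1] traces the stroke outline; a latency-aware policy like the one the abstract describes would tune how many such samples (and control points) each character's complexity warrants.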

25 pages, 2297 KB  
Article
A Multi-Agent Advisory Board Reinforcement Learning Framework for Adaptive Cooperative Control
by Onur Osman, Tolga Kudret Karaca, Bahar Yalcin Kavus, Gokalp Tulum and Sajjad Nematzadeh
Algorithms 2026, 19(3), 230; https://doi.org/10.3390/a19030230 - 18 Mar 2026
Viewed by 229
Abstract
This study proposes Advisory Board Reinforcement Learning (AdvB-RL), a cooperative reinforcement-learning framework that integrates multiple advisory neural networks to guide policy optimization. Unlike conventional single-agent architectures, AdvB-RL maintains a set of independently trained advisory networks that contribute to action selection through a dynamic aggregation mechanism. This design preserves diverse experiential knowledge while improving learning stability and the exploration–exploitation balance. The framework is evaluated on three benchmark control tasks, namely LunarLander-v2, CartPole-v1, and MountainCar-v0, using advisory board sizes of 1, 5, and 10 members against a Double Deep Q-Network (DDQN) baseline. The best-performing configuration, 10 AdvB, achieved 270.02 ± 24.74 on LunarLander-v2 versus 227.92 ± 86.02 for DDQN, 497.79 ± 5.18 on CartPole-v1 versus 304.37 ± 144.04, and −103.16 ± 15.46 on MountainCar-v0 versus −130.71 ± 31.64, indicating higher returns together with markedly lower variability. Across the three environments, these results show that increasing the number of advisory members improves both reward consistency and overall robustness, with the 10-member setting providing the strongest performance. Within the tested configurations, the advisory board mechanism remains computationally feasible, while preliminary experiments beyond 10 advisors show diminishing returns relative to added complexity. Overall, AdvB-RL provides a robust and modular alternative to single-policy reinforcement learning for adaptive cooperative control. Full article
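The advisory-board idea of aggregating several independently trained networks into one action choice can be sketched as a weighted vote over per-advisor Q-values. This is a hedged illustration of the general mechanism, not the paper's exact dynamic aggregation rule; the reliability weights are an assumption:

```python
def aggregate_action(advisor_q_values, weights=None):
    """Combine per-advisor Q-value vectors into one action choice.

    advisor_q_values: list of lists, advisor_q_values[i][a] is advisor i's
    Q-estimate for action a. weights (optional, hypothetical) is one
    reliability weight per advisor; uniform weights recover a plain board
    average. Returns (chosen action, aggregated board Q-values).
    """
    n_advisors = len(advisor_q_values)
    n_actions = len(advisor_q_values[0])
    if weights is None:
        weights = [1.0] * n_advisors
    total = sum(weights)
    board_q = [
        sum(w * q[a] for w, q in zip(weights, advisor_q_values)) / total
        for a in range(n_actions)
    ]
    return max(range(n_actions), key=board_q.__getitem__), board_q
```

Changing the weights can flip the selected action, which is the lever a dynamic aggregation mechanism would adjust during training.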
17 pages, 320 KB  
Article
PSO-FSPMiner: A Metaheuristic Approach for Mining a Representative Subset of Frequent Similar Patterns
by Ansel Y. Rodríguez-González, Rosa María Valdovinos-Rosas, Gretel Bernal Baró, Ramón Aranda, Angel Díaz-Pacheco and Miguel Á. Álvarez-Carmona
Algorithms 2026, 19(3), 229; https://doi.org/10.3390/a19030229 - 18 Mar 2026
Viewed by 208
Abstract
In recent years, algorithms employing similarity functions beyond equality to unveil hidden knowledge have surged in popularity. Nonetheless, a notable challenge accompanying these algorithms is the proliferation of numerous frequent similar patterns, leading to heightened computational overhead and complicating analysis for humans. This paper proposes a metaheuristic approach based on Particle Swarm Optimization (PSO-FSPMiner) that extracts a representative subset of patterns to tackle this issue. Our experiments on real-world datasets demonstrate that the subset of frequent similar patterns mined by PSO-FSPMiner captures approximately 86.4% of the dataset’s knowledge, with a substantial reduction in frequent similar patterns of around 85.9%. Full article
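For readers unfamiliar with the underlying metaheuristic, a plain global-best PSO loop (the base of PSO-FSPMiner) looks as follows. The sketch minimizes a toy continuous sphere function; the inertia and acceleration constants are conventional defaults, and the pattern-space encoding used in the paper is not reproduced here:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain global-best PSO. f maps a list of floats to a score to minimize."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: sphere function, optimum 0 at the origin.
best, best_val = pso_minimize(lambda x: sum(v * v for v in x), dim=3)
```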
21 pages, 6949 KB  
Article
Cross-Domain Bearing Fault Diagnosis Under Class Imbalance: A Dynamic Maximum Triple-View Classifier Discrepancy Network
by Rui Luo, Huiyang Xie, Haitian Wen, Hongying He, Yitong Li and Kai Wang
Algorithms 2026, 19(3), 228; https://doi.org/10.3390/a19030228 - 18 Mar 2026
Viewed by 180
Abstract
Traditional domain adaptation methods often assume balanced data distributions. However, this assumption is frequently violated in real-world industrial scenarios, where normal samples predominate while fault samples are inherently scarce. Under severe class imbalance, conventional decision boundaries tend to shift toward minority fault regions. This shift leads to persistently high misclassification rates for rare fault samples. To overcome this limitation, we propose the Dynamic Maximum Triple-View Classifier Discrepancy (DMTVCD) network, which integrates a Triple-View Classifier (TVC) Architecture and a Primary–Auxiliary Fused Cooperative Loss (PAFL). Specifically, the TVC employs auxiliary binary classifiers to aggregate fine-grained fault sub-classes into a unified “Fault Super-class.” This constructs a robust “normal-fault” binary boundary that effectively counteracts class imbalance. Driven by the PAFL, this boundary acts as a hierarchical geometric constraint to suppress the primary classifier’s tendency to misclassify faults as normal samples, thereby enhancing feature discriminability. Furthermore, a dynamic weighting strategy is introduced to assign large initial weights. This forces the model to bypass simple decision logic dominated by the majority class, ensuring a smooth transition from global exploration to fine-grained alignment. Extensive evaluations on the CWRU and JNU datasets demonstrate that DMTVCD consistently outperforms state-of-the-art approaches under high imbalance ratios (e.g., 20:1). Full article
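The super-class aggregation performed by the auxiliary binary classifiers can be illustrated by pooling softmax mass over fault sub-classes into a single "fault" probability. A minimal sketch, assuming class index 0 denotes the normal class (an assumption for illustration, not the paper's convention):

```python
def to_superclass(probs, normal_index=0):
    """Collapse a per-class probability vector into a binary
    (normal, fault) pair by pooling all fault sub-class mass."""
    p_normal = probs[normal_index]
    p_fault = sum(p for i, p in enumerate(probs) if i != normal_index)
    return p_normal, p_fault

def is_fault(probs, normal_index=0):
    """Binary normal-vs-fault decision on the pooled probabilities."""
    p_normal, p_fault = to_superclass(probs, normal_index)
    return p_fault > p_normal
```

Pooling makes the binary boundary insensitive to how the scarce fault mass is split across sub-classes, which is the point of the "Fault Super-class" construction under imbalance.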
12 pages, 262 KB  
Article
On the Convergence of Weak Greedy Algorithm for a Class of Non-Smooth Optimization Problems in Banach Spaces
by Sergei Sidorov
Algorithms 2026, 19(3), 227; https://doi.org/10.3390/a19030227 - 17 Mar 2026
Viewed by 169
Abstract
The paper discusses a greedy algorithm that can be used to solve non-smooth optimization problems whose objective function can be represented as a minimum of a compactly parameterized family of uniformly smooth functions. The algorithm guarantees a sparse solution by adding one atom from the dictionary to the solution at each iteration. The algorithm employs a gradient-greedy step that maximizes a linear functional using gradient information from the previous iteration. However, the algorithm is considered “weak” because it only solves the linear subproblems approximately. By employing the duality gap evaluated at each gradient-greedy step, the paper proves convergence of the algorithm to Clarke stationary points. Explicit upper bounds on the duality gap are derived, yielding a quantitative measure of proximity to stationarity and establishing the corresponding rates of convergence. Full article
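A minimal executable analogue of such a greedy scheme is the Frank–Wolfe method with duality-gap tracking on a smooth toy objective; the sketch below is illustrative only and does not reproduce the paper's Banach-space setting or its weakness parameter:

```python
def greedy_minimize(target, atoms, iters=5000):
    """Greedy minimization of f(x) = 0.5 * ||x - target||^2 over the convex
    hull of `atoms`. The duality gap <grad, x - s> is evaluated at every
    gradient-greedy step as a certificate of proximity to optimality."""
    dim = len(target)
    x = list(atoms[0])               # start at one atom: a 1-sparse iterate
    gap = float("inf")
    for k in range(iters):
        grad = [x[d] - target[d] for d in range(dim)]
        # Gradient-greedy step: atom minimizing the linearized objective.
        s = min(atoms, key=lambda a: sum(g * ad for g, ad in zip(grad, a)))
        gap = sum(g * (xd - sd) for g, xd, sd in zip(grad, x, s))
        step = 2.0 / (k + 2)         # standard diminishing step size
        x = [(1 - step) * xd + step * sd for xd, sd in zip(x, s)]
    f_val = 0.5 * sum((xd - td) ** 2 for xd, td in zip(x, target))
    return x, f_val, gap

# Toy instance: dictionary = unit basis of R^2, target on the simplex.
x, f_val, gap = greedy_minimize([0.3, 0.7], [[1.0, 0.0], [0.0, 1.0]])
```

The gap is nonnegative by construction and upper-bounds the optimality error, which is the quantitative role it plays in the convergence analysis.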
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
30 pages, 15769 KB  
Article
A Feature-Fusion Deep Reinforcement Learning Framework for Multi-Configuration Engineering Drawing Layout
by Yunlei Sun, Peng Dai, Yangxingyue Liu and Chao Liu
Algorithms 2026, 19(3), 226; https://doi.org/10.3390/a19030226 - 17 Mar 2026
Viewed by 283
Abstract
Engineering drawings are fundamental to industries such as oil and gas, construction, and manufacturing. However, current practices relying on manual design or rigid parametric templates often suffer from inefficiency and layout inconsistencies. To address these issues, the layout task is formulated as the Orthogonal Rectangle Packing Problem with Multiple Configurations and Complex Constraints (ORPPMC). The Deep Reinforcement Learning for Multi-Configuration Drawing Layout (DRL-MCDL) framework is proposed, which integrates the Pointer Network for Drawing Element Sequencing (PN-DES) with the Target-Type-Matching-based Multi-Pattern Positioning Strategy (TTM-MPPS). Within this framework, PN-DES employs deep reinforcement learning and feature fusion to combine element attributes with layout configurations for optimal sequence inference, while TTM-MPPS performs precise positioning in accordance with industrial rules to ensure strict adherence to aesthetic requirements. Ablation experiments validate the contribution of each module. Experimental results on real-world engineering drawings demonstrate that DRL-MCDL achieves a Feasibility Rate (FR) exceeding 98.5% on standard instances (12–40 elements), significantly outperforming traditional methods. Furthermore, it maintains a high inference efficiency with an Average Time (AT) of less than 0.3 s, striking an optimal balance between layout quality and computational speed. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
21 pages, 420 KB  
Article
Grey Target Group Decision Making Based on Three-Parameter Interval Grey Numbers and Bidirectional Projection Method
by Huabin Cheng, Yingchun Chen, Yu Chen and Ping Xiong
Algorithms 2026, 19(3), 225; https://doi.org/10.3390/a19030225 - 16 Mar 2026
Viewed by 183
Abstract
To improve the grey target group decision-making method (GDMM) in both the aggregation of decision-maker information and the ranking of alternatives via bidirectional projection (BP), this paper develops a novel methodological framework that optimizes both information integration and alternative evaluation, where a new transformation mechanism between the three-parameter interval grey number (TPIGN) and dual connection number (CN) is established by integrating an induced TPIGN weighted average (ITPIGN-WA) operator with an improved bidirectional projection method (IBPM). The results confirm that the new framework leads to more reliable and effective group decision making (GDM). The new TPIGN conversion mechanism and the set pair potential (SPP)-based ITPIGN-WA capture decision-maker preferences better than previous operators, and the introduction of the IBPM further refines alternative ranking. This dual innovation enriches the theoretical system of grey target decision making and provides significant applied value for enhancing the quality and effectiveness of GDM processes in uncertain environments. Full article
31 pages, 7528 KB  
Article
Shield Machine Attitude Prediction Method Based on Causal Graph Convolutional Network
by Liang Zeng, Xingao Yan, Chenning Zhang, Xue Wang and Shanshan Wang
Algorithms 2026, 19(3), 224; https://doi.org/10.3390/a19030224 - 16 Mar 2026
Viewed by 242
Abstract
Accurately predicting and controlling the attitude of a shield tunneling machine is critical for quality assurance in shield tunneling projects. Existing prediction methods utilize historical data to construct a machine learning framework to predict future attitude deviations. However, such methods are poorly interpretable and offer little practical engineering guidance. To address these shortcomings, this study proposes a causal graph convolutional network (C-GCN-GRU), an innovative deep learning method aimed at improving the interpretability of shield attitude prediction. The causal relationships between key attitude features of the shield machine are recognized and quantified by the PCMCI+ method. The identified causal relationships are converted into collocation matrices and input into a model consisting of a GCN and a GRU, combined with multi-head causal attention to better forecast the shield machine attitude. Results on a dataset from the Karnaphuli River Tunnel Project in Bangladesh show that the C-GCN-GRU model predicts the four variables characterizing shield attitude and position more accurately than four comparable models and provides decision support for attitude and position adjustments in shield tunnels. Full article
37 pages, 742 KB  
Article
A Life-Cycle Technology Upgrade Scheduling Model
by Massimiliano Caramia
Algorithms 2026, 19(3), 223; https://doi.org/10.3390/a19030223 - 16 Mar 2026
Viewed by 279
Abstract
Technology upgrades are a central lever for sustainability, yet many optimization models primarily account for use-phase emissions and treat embodied impacts and technological change exogenously. We propose a multi-period mixed-integer optimization framework that couples upgrade timing, technology choice, and operations with a life-cycle assessment (LCA) structure. The model (i) separates use-phase and embodied impacts at the transition level, (ii) supports time-weighted valuation of impacts through a flexible weighting sequence (time value of carbon), and (iii) incorporates endogenous learning-by-doing that can reduce both investment costs and embodied impacts of future upgrades. We derive an exact Benders (L-shaped) decomposition that separates discrete upgrade dynamics from a linear operating subproblem. Computational experiments illustrate model behavior and report runtimes under an outer-loop implementation with open-source solvers, highlighting that decomposition becomes most beneficial when extensions substantially enlarge the dispatch layer (e.g., scenario expansion). Experiments also show that ignoring embodied impacts can mis-rank upgrade schedules and even violate life-cycle caps, that stronger time-weighting pushes upgrades earlier, and that learning can make staged upgrades economically preferable. Full article
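The L-shaped logic of separating discrete upgrade decisions from a linear operating subproblem can be demonstrated on a deliberately tiny instance. In the sketch below the subproblem is analytic rather than an LP solver call, and all cost figures (investment cost, operating cost, upgrade saving) are invented for illustration:

```python
def benders_toy(invest_cost=4.0, base_opcost=10.0, saving=6.0, tol=1e-6):
    """L-shaped loop for min over y in {0,1} of invest_cost*y + Q(y), where
    the operating subproblem is the (here analytic) linear cost
    Q(y) = base_opcost - saving*y. The master enumerates the single binary
    upgrade decision and keeps theta above all accumulated optimality cuts."""
    cuts = []                       # each cut: theta >= intercept + slope*y
    upper = float("inf")
    best_y = None
    for _ in range(10):             # a handful of iterations suffices here
        # Master: pick y minimizing invest_cost*y + theta(y) under the cuts.
        def theta(y):
            return max((a + b * y for a, b in cuts), default=0.0)
        y = min((0, 1), key=lambda yy: invest_cost * yy + theta(yy))
        lower = invest_cost * y + theta(y)
        # Subproblem: exact operating cost; the cut is exact since Q is linear.
        q = base_opcost - saving * y
        if invest_cost * y + q < upper:
            upper, best_y = invest_cost * y + q, y
        if upper - lower <= tol:
            break                   # bounds met: proven optimal
        cuts.append((base_opcost, -saving))
    return best_y, upper
```

With these numbers the loop converges in two iterations to the upgrade decision y = 1 at total cost 8: the cut added after probing y = 0 makes staying with the old technology provably more expensive.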
(This article belongs to the Special Issue 2026 and 2027 Selected Papers from Algorithms Editorial Board Members)
22 pages, 2762 KB  
Article
Automated Classification of Medical Image Modality and Anatomy
by Jean de Smidt, Kian Anderson and Andries Engelbrecht
Algorithms 2026, 19(3), 222; https://doi.org/10.3390/a19030222 - 16 Mar 2026
Viewed by 280
Abstract
Radiological departments face challenges in efficiency and diagnostic consistency. The interpretation of radiographs remains highly variable between practitioners, which creates potential disparities in patient care. This study explores how artificial intelligence (AI), specifically transfer learning techniques, can automate parts of the radiological workflow to improve service quality and efficiency. Transfer learning methods were applied to various convolutional neural network (CNN) architectures and compared to classify medical images across different modalities, i.e., X-rays, ultrasound, magnetic resonance imaging (MRI), and angiography, through a two-component model: medical image modality prediction and anatomical region prediction. Several publicly available datasets were combined to create a representative dataset to evaluate residual networks (ResNet), dense networks (DenseNet), efficient networks (EfficientNet), and the Swin Transformer (Swin-T). The models were evaluated through accuracy, precision, recall, and F1-score metrics with macro-averaging to account for class imbalance. The results demonstrate that lightweight transfer learning methods effectively classify medical imagery, with an accuracy of 97.21% on test data for the combined transfer learning pipeline. EfficientNet-B4 demonstrated the best performance on both components of the proposed pipeline and achieved a 99.6% accuracy for modality prediction and 99.21% accuracy for anatomical region prediction on unseen test data. This approach offers the potential for streamlined radiological workflows while maintaining diagnostic quality. The strong model performance across diverse modalities and anatomical regions indicates robust generalisability for practical implementation in clinical settings. Full article
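Since the evaluation relies on macro-averaging to account for class imbalance, it may help to see that metric spelled out. A self-contained sketch of macro-averaged F1 (not the authors' evaluation code):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute per-class F1 and average with equal class
    weight, so minority classes count as much as majority ones."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)
```

Unlike plain accuracy, this score drops sharply when a rare modality or anatomical region is systematically missed, even if overall accuracy stays high.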
(This article belongs to the Special Issue Advances in Deep Learning-Based Data Analysis)
21 pages, 1611 KB  
Article
Mobility-Aware Cooperative Optimization for Task Offloading and Resource Allocation in Multi-Edge Computing
by Dong Chen, Ximing Zhang, Kequan Lin, Chunhua Mei and Ru Huo
Algorithms 2026, 19(3), 221; https://doi.org/10.3390/a19030221 - 16 Mar 2026
Viewed by 256
Abstract
The rapid proliferation of mobile Internet of Things (IoT) devices has introduced significant resource scheduling challenges in multi-edge computing networks, where device mobility leads to dynamic network connectivity and load imbalance, complicating task offloading and resource management. To address these issues, this paper presents a mobility-driven, functionally coupled hierarchical optimization framework for task offloading and computation resource allocation in multi-region edge computing environments, integrating mobility-aware heuristic offloading with multi-agent deep deterministic policy gradient (MADDPG)-based resource allocation. Devices are first clustered according to their mobility patterns, and offloading decisions are dynamically made based on trajectory and dwell-time characteristics. Each edge server is modeled as an autonomous agent, and an MADDPG framework is adopted to collaboratively optimize resource allocation, with the joint objective of minimizing task processing delay and system energy consumption. Experimental evaluations under diverse mobility and workload conditions show that the proposed approach achieves a 19.0% reduction in task delay compared to the Multi-Objective Gray Wolf Optimization (MOGWO) method at the largest device scale (60 devices) and maintains comparable energy efficiency. Furthermore, it exhibits stronger adaptability and scheduling performance across varying mobility group distributions. These results confirm the effectiveness of the proposed method in enhancing system performance within dynamic mobile edge computing scenarios. Full article
16 pages, 2394 KB  
Article
A TSS-Compliant Ship Automatic Route-Planning Algorithm
by Ning Zhang, Fang He, Lubin Chang and Jingwen Zong
Algorithms 2026, 19(3), 220; https://doi.org/10.3390/a19030220 - 15 Mar 2026
Viewed by 240
Abstract
Aiming at solving the problem that existing automatic route-planning algorithms fail to consider the navigation rules in traffic separation scheme (TSS) zones, this paper proposes a ship automatic route-planning algorithm that fully considers TSS-zone navigation constraints. First, a formalized TSS-zone automatic planning module with a quadrilateral decomposition mechanism is designed, which realizes standardized processing of regular TSS zones and completes TSS-compliant route replanning through three core steps: invalid waypoint deletion, TSS-zone-traversal-order determination, and constrained route replanning. Second, a particle swarm optimization (PSO) algorithm is selected as the base global route-planning algorithm via a multi-algorithm comparative framework, considering the requirements of optimality, stability and real-time performance for ship navigation. The TSS module is deeply integrated with the PSO algorithm, forming a unified global route-planning algorithm that balances TSS compliance and route optimality. Comparative experiments with four mainstream swarm intelligence algorithms (PSO/SSA/IVY/GOA) show that the PSO algorithm outperforms the others in terms of route length, stability and comprehensive efficiency, with an optimal route length of 57.71 and a low standard deviation of 3.42. Furthermore, the proposed algorithm is validated by real nautical chart data of Bohai Bay under single- and double-TSS-zone scenarios. The results indicate that the algorithm can stably generate TSS-compliant routes, with only a small increase in route length (0.6% and 4.4% for a single TSS zone, 1.1% and 1.8% for two TSS zones) and computational time, and can automatically adjust the traversal strategy according to the start–end point settings. The designed TSS module has good scalability and can be integrated with other optimization algorithms, providing a feasible technical solution for an intelligent ship navigation system to realize automatic and compliant route planning in TSS zones with dense traffic. Full article
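The "invalid waypoint deletion" step can be illustrated with a standard point-in-polygon test against a quadrilateral TSS zone. This is a hedged sketch with invented coordinates, not the paper's module:

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test: count crossings of a rightward ray from pt with the
    polygon's edges; an odd count means the point is inside."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the horizontal line at y.
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def delete_invalid_waypoints(route, tss_zone):
    """Drop waypoints falling inside a (quadrilateral) TSS zone so the
    replanner can insert compliant crossing points instead."""
    return [wp for wp in route if not point_in_polygon(wp, tss_zone)]

# Invented example: an axis-aligned quadrilateral zone and a 3-point route.
zone = [(0.0, 0.0), (4.0, 0.0), (4.0, 2.0), (0.0, 2.0)]
```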
19 pages, 3461 KB  
Article
DCDRNet: Detail–Context Decoupled Representation Learning Network for Efficient Crack Segmentation
by Rihua Huang, Miaolin Feng and Yandong Hu
Algorithms 2026, 19(3), 219; https://doi.org/10.3390/a19030219 - 14 Mar 2026
Viewed by 303
Abstract
Accurate crack segmentation is critical for automated infrastructure inspection but remains challenging due to the inherent conflict between preserving fine-grained geometric details and modeling global semantic context. Existing deep learning approaches typically encode both requirements within a single hierarchical representation, leading to irreversible boundary degradation or fragmented predictions under complex backgrounds. To address this limitation, we propose DCDRNet, a detail–context decoupled network that explicitly separates geometry-sensitive and context-aware representations into parallel encoding streams. The Detail Encoder maintains high-resolution features to preserve thin crack boundaries, while the Context Encoder performs adaptive global reasoning to reinforce structural continuity. Their controlled interaction enables effective integration of local precision and long-range context without representational interference. Extensive experiments on three public crack segmentation benchmarks demonstrate that DCDRNet consistently outperforms state-of-the-art methods in accuracy and robustness, achieving superior performance especially on challenging datasets with thin and fragmented cracks. Moreover, DCDRNet delivers a favorable accuracy–efficiency trade-off, combining compact model size with near real-time inference speed, making it well-suited for practical deployment in real-world inspection scenarios. Full article
28 pages, 1638 KB  
Article
A Self-Deciding Adaptive Digital Twin Framework Using Agentic AI for Fuzzy Multi-Objective Optimization of Food Logistics
by Hamed Nozari and Zornitsa Yordanova
Algorithms 2026, 19(3), 218; https://doi.org/10.3390/a19030218 - 14 Mar 2026
Viewed by 400
Abstract
Due to the perishable nature of products, high uncertainty, and conflicting objectives, food supply chain logistics management requires dynamic and adaptive decision-making frameworks. In this study, an integrated decision-making architecture is presented that integrates a multi-objective fuzzy optimization model into an adaptive digital twin along with an agentic AI-based dynamic goal reset mechanism. The main methodological innovation of this study is not in the separate development of each of these components but in their structured integration in the form of a self-regulating decision-making loop in which the priority of goals is dynamically adjusted based on the current state of the system. Computational results based on real and simulated data show that the proposed framework reduces the total logistics cost by about 4–5% and reduces product waste by about 13% while simultaneously improving the service level by about 4%. Resilience analysis shows faster performance recovery in the face of operational disruptions, and scalability results confirm the controlled growth of computational time with increasing problem size. These findings demonstrate the effectiveness of integrating adaptive digital twins and agentic AI in a multi-objective fuzzy optimization environment for intelligent and resilient food logistics management. Full article
(This article belongs to the Special Issue Optimizing Logistics Activities: Models and Applications)
19 pages, 707 KB  
Article
Performance Analysis of Half-Hyperbolic Convolution (HHC)-Type Operators via Regression-Based Metrics
by George A. Anastassiou, Seda Karateke and Metin Zontul
Algorithms 2026, 19(3), 217; https://doi.org/10.3390/a19030217 - 13 Mar 2026
Viewed by 204
Abstract
In this paper, we first introduce the adjustable half-hyperbolic (adj HH) tangent function as an activation function. We then establish both quantitative and qualitative convergence results for HH-activated convolution-type positive linear operators (PLOs) acting on the space of bounded and continuous functions on the real line. The theoretical convergence results are numerically validated by means of error decay plots obtained using Python (version 3.13). Moreover, we compare three different classes of HHC-type operators in terms of their convergence behavior and approximation performance. Finally, we conclude by discussing several potential application areas that illustrate the relevance of the presented theoretical framework. Full article
(This article belongs to the Special Issue Recent Advances in Numerical Algorithms and Their Applications)
23 pages, 8147 KB  
Article
SDENet: A Novel Approach for Single Image Depth of Field Extension
by Xu Zhang, Miaomiao Wen, Junyang Jia and Yan Liu
Algorithms 2026, 19(3), 216; https://doi.org/10.3390/a19030216 - 13 Mar 2026
Viewed by 262
Abstract
Traditional hardware-based approaches for depth-of-field extension (DOF-E), such as optimized lens design or focus-stacking via layer scanning, are often plagued by bulkiness and prohibitive costs. Meanwhile, conventional multi-focus image fusion algorithms demand precise spatial alignment, a challenge that becomes particularly acute in applications like microscopy. To address these limitations, this paper proposes a novel single-image DOF-E method termed SDENet. The method adopts an encoder–decoder architecture enhanced with multi-scale self-attention and depth enhancement modules, enabling the transformation of a single partially focused image into a fully focused output while effectively recovering regions outside the original depth of field (DOF). To support model training and performance evaluation, we introduce a dedicated dataset (MSED) containing 1772 pairs of single-focus and all-focus images covering diverse scenes. Experimental results on multiple datasets verify that SDENet significantly outperforms state-of-the-art deblurring methods, achieving a PSNR of 26.98 dB and SSIM of 0.846 on the DPDD dataset, which represents a substantial improvement in clarity and visual coherence compared to existing techniques. Furthermore, SDENet demonstrates competitive performance with multi-image fusion methods while requiring only a single input. Full article
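For reference, the PSNR figure quoted above is computed from the mean squared error between reference and output images; a minimal sketch over flat intensity lists (not the authors' evaluation code):

```python
import math

def psnr(reference, output, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized images,
    given here as flat lists of pixel intensities in [0, peak]."""
    mse = sum((r - o) ** 2 for r, o in zip(reference, output)) / len(reference)
    if mse == 0:
        return float("inf")            # identical images: infinite PSNR
    return 10.0 * math.log10(peak * peak / mse)
```

A uniform error of 16 intensity levels (MSE = 256) yields roughly 24 dB, which gives a feel for what the reported 26.98 dB means in pixel terms.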
21 pages, 526 KB  
Article
Understanding Tradeoffs in Clinical Text Extraction: Prompting, Retrieval-Augmented Generation, and Supervised Learning on Electronic Health Records
by Tanya Yadav, Aditya Tekale, Jeff Chong and Mohammad Masum
Algorithms 2026, 19(3), 215; https://doi.org/10.3390/a19030215 - 13 Mar 2026
Viewed by 295
Abstract
Clinical discharge summaries contain rich patient information but remain difficult to convert into structured representations for downstream analysis. Recent advances in large language models (LLMs) have introduced new approaches for clinical text extraction, yet their relative strengths compared with supervised methods remain unclear. This study presents a controlled evaluation of three dominant strategies for structured clinical information extraction from electronic health records: prompting-based extraction using LLMs, retrieval-augmented generation for terminology canonicalization, and supervised fine-tuning of domain-specific transformer models. Using discharge summaries from the MIMIC-IV dataset, we compare zero-shot, few-shot, and verification-based prompting across closed-source and open-source LLMs, evaluate retrieval-augmented canonicalization as a post-processing mechanism, and benchmark these methods against a fine-tuned BioClinicalBERT model. Performance is assessed using a multi-level evaluation framework that combines exact matching, fuzzy lexical matching, and semantic assessment via an LLM-based judge. The results reveal clear tradeoffs across approaches: prompting achieves strong semantic correctness with minimal supervision, retrieval augmentation improves terminology consistency without expanding extraction coverage, and supervised fine-tuning yields the highest overall accuracy when labeled data are available. Across all methods, we observe a consistent 40–50% gap between exact-match and semantic correctness, highlighting the limitations of string-based metrics for clinical Natural Language Processing (NLP). These findings provide practical guidance for selecting extraction strategies under varying resource constraints and emphasize the importance of evaluation methodologies that reflect clinical equivalence rather than surface-form similarity. Full article
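The gap between exact and fuzzy lexical matching that the multi-level evaluation measures can be illustrated with a small matcher. This sketch uses the stdlib difflib and an invented similarity threshold; the paper's semantic LLM-judge level is omitted:

```python
from difflib import SequenceMatcher

def match_level(predicted, gold, fuzzy_threshold=0.85):
    """Classify one extracted value against its gold annotation at the first
    two levels of a multi-level scheme: exact string match after
    normalization, then fuzzy lexical match by character similarity ratio.
    The 0.85 threshold is an illustrative assumption."""
    a, b = predicted.strip().lower(), gold.strip().lower()
    if a == b:
        return "exact"
    if SequenceMatcher(None, a, b).ratio() >= fuzzy_threshold:
        return "fuzzy"
    return "no_match"
```

A single-character misspelling lands in the "fuzzy" bucket while a plain exact-match metric would count it as a failure, which is one mechanism behind the exact-versus-semantic gap the study reports.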
(This article belongs to the Special Issue Advanced Algorithms for Biomedical Data Analysis)
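The multi-level evaluation framework described in the abstract (exact matching, fuzzy lexical matching, and a semantic fallback) can be sketched as below. This is an illustrative approximation, not the paper's implementation: the function names, the 0.8 fuzzy threshold, and the use of `difflib` are assumptions, and the LLM-based semantic judge is reduced to a placeholder label.

```python
from difflib import SequenceMatcher

def normalize(term: str) -> str:
    # Lowercase and collapse whitespace before comparison.
    return " ".join(term.lower().split())

def match_level(predicted: str, gold: str, fuzzy_threshold: float = 0.8) -> str:
    """Return the strictest level at which two extracted terms match."""
    p, g = normalize(predicted), normalize(gold)
    if p == g:
        return "exact"
    # Fuzzy lexical matching via a character-level similarity ratio.
    if SequenceMatcher(None, p, g).ratio() >= fuzzy_threshold:
        return "fuzzy"
    # A full pipeline would now ask an LLM judge whether the two terms
    # are clinically equivalent (e.g., "heart attack" vs. "myocardial
    # infarction"); here that stage is only a placeholder.
    return "needs_semantic_judge"
```

A tiered scheme like this makes the abstract's reported gap concrete: pairs that fail exact and fuzzy matching may still be counted correct once the semantic stage judges them clinically equivalent.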
14 pages, 1187 KB  
Article
Efficient and Verified Research Data Extraction with LLM
by Aleksandr Serdiukov, Vitaliy Dravgelis, Daniil Smutin, Amir Taldaev, Artem Ivanov, Leonid Adonin and Sergey Muravyov
Algorithms 2026, 19(3), 214; https://doi.org/10.3390/a19030214 - 13 Mar 2026
Viewed by 490
Abstract
Large language models (LLMs) hold promise for automated extraction of structured biological information from scientific literature, yet their reliability in some domain-specific tasks, such as DNA probe parsing, remains underexplored. We developed a verification-focused, schema-guided extraction pipeline that transforms unstructured texts from scientific [...] Read more.
Large language models (LLMs) hold promise for automated extraction of structured biological information from scientific literature, yet their reliability in some domain-specific tasks, such as DNA probe parsing, remains underexplored. We developed a verification-focused, schema-guided extraction pipeline that transforms unstructured texts from scientific articles into a normalized database of oligonucleotide probes, primers, and associated metadata. The system combined multi-turn JSON generation, strict schema validation, sequence-specific rule checks, and a post-processing recovery module that rescues systematically corrupted nucleotide outputs. Benchmarking across nine contemporary LLMs revealed distinct accuracy–hallucination trade-offs, with the context-optimized Qwen3 model achieving the highest overall extraction efficiency while maintaining low hallucination rates. Iterative prompting substantially improved fidelity but introduced notable latency and variance. Across all models, stable error profiles and the success of the recovery module indicated that most extraction failures stem from systematic and correctable formatting issues rather than semantic misunderstandings. These findings highlight both the potential and the current limitations of LLMs for structured scientific data extraction. The research provides a reproducible benchmark and extensible framework for future large-scale curation of molecular biology datasets. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
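The combination of schema validation with sequence-specific rule checks described in the abstract can be sketched as follows. The field names (`probe_name`, `sequence`, `target_organism`) and the nucleotide alphabet check are illustrative assumptions for a minimal validator, not the authors' actual schema.

```python
import re

# Hypothetical required fields for one extracted probe record.
REQUIRED_FIELDS = {"probe_name", "sequence", "target_organism"}
# Sequence-specific rule: only DNA/RNA nucleotide letters are allowed.
VALID_SEQUENCE = re.compile(r"^[ACGTUacgtu]+$")

def validate_record(record: dict) -> list:
    """Return a list of validation errors; an empty list means the record passes."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    seq = record.get("sequence", "")
    if seq and not VALID_SEQUENCE.match(seq):
        errors.append("sequence contains non-nucleotide characters")
    return errors
```

In a verification-focused pipeline of this kind, records failing such checks would be routed back for re-prompting or to a recovery step rather than entering the database.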
