Algorithms, Volume 18, Issue 6 (June 2025) – 63 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
29 pages, 351 KiB  
Article
The Computability of the Channel Reliability Function and Related Bounds
by Holger Boche and Christian Deppe
Algorithms 2025, 18(6), 361; https://doi.org/10.3390/a18060361 - 11 Jun 2025
Abstract
The channel reliability function is a crucial tool for characterizing the dependable transmission of messages across communication channels. In many cases, only upper and lower bounds of this function are known. We investigate the computability of the reliability function and its associated functions, demonstrating that the reliability function is not Turing computable. This also holds true for functions related to the sphere packing bound and the expurgation bound. Additionally, we examine the R function and the zero-error feedback capacity, as they are vital in the context of the reliability function. Both the R function and the zero-error feedback capacity are not Banach–Mazur computable.
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 3rd Edition)
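For orientation, the bounds named above are classical Gallager-style exponents: for a discrete memoryless channel, the sphere packing exponent is E_sp(R) = sup_{ρ≥0} max_Q [E_0(ρ, Q) − ρR]. The sketch below evaluates it numerically for a binary symmetric channel, assuming a uniform input distribution (optimal for symmetric channels) and a finite grid over ρ; the article's point is precisely that no such finite procedure yields the reliability function itself in general.

```python
import numpy as np

def gallager_E0(rho, Q, P):
    """Gallager's E0(rho, Q) in bits; P[x, y] are channel transition
    probabilities and Q[x] is the input distribution."""
    s = 1.0 / (1.0 + rho)
    inner = (Q[:, None] * P ** s).sum(axis=0)          # sum over inputs x
    return -np.log2((inner ** (1.0 + rho)).sum())      # sum over outputs y

def sphere_packing_exponent(R, P, rho_grid=np.linspace(0.0, 50.0, 5001)):
    """E_sp(R) = sup_{rho >= 0} [E0(rho, Q) - rho * R]; Q is fixed uniform,
    which is optimal for symmetric channels such as the BSC."""
    Q = np.full(P.shape[0], 1.0 / P.shape[0])
    return max(gallager_E0(r, Q, P) - r * R for r in rho_grid)

p = 0.1                                                # BSC crossover probability
P = np.array([[1 - p, p], [p, 1 - p]])
for R in (0.1, 0.3, 0.5):                              # rates in bits per channel use
    print(f"R = {R}: E_sp ~ {sphere_packing_exponent(R, P):.4f}")
```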
17 pages, 370 KiB  
Article
A Box-Bounded Non-Linear Least Square Minimization Algorithm with Application to the JWL Parameter Determination in the Isentropic Expansion for Highly Energetic Material Simulation
by Yuri Caridi, Andrea Cucuzzella, Fabio Vicini and Stefano Berrone
Algorithms 2025, 18(6), 360; https://doi.org/10.3390/a18060360 - 11 Jun 2025
Abstract
This work presents a robust box-constrained nonlinear least-squares algorithm for accurately fitting the parameters of the Jones–Wilkins–Lee (JWL) equation of state, which describes the isentropic expansion of detonation products from high-energy materials. Although the energetic-materials literature offers plenty of methods that address this problem, in some cases it is not fully clear which method is employed. We provide a fully detailed numerical framework that explicitly enforces Chapman–Jouguet (CJ) constraints and systematically separates the contributions of different terms in the JWL expression. The algorithm leverages a trust-region Gauss–Newton method combined with singular value decomposition to ensure numerical stability and rapid convergence, even in highly overdetermined systems. The methodology is validated through comprehensive comparisons with leading thermochemical codes such as CHEETAH 2.0, ZMWNI, and EXPLO5. The results demonstrate that the proposed approach yields lower residual fitting errors and improved consistency with CJ thermodynamic conditions compared to standard fitting routines. By providing a reproducible and theoretically grounded methodology, this study advances the state of the art in JWL parameter determination and improves the reliability of energetic material simulations.
(This article belongs to the Collection Feature Papers in Algorithms for Multidisciplinary Applications)
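For reference, the JWL principal isentrope has the standard form p(V) = A e^(−R1·V) + B e^(−R2·V) + C·V^(−(1+ω)) in relative volume V. The sketch below fits its six parameters with SciPy's box-constrained trust-region-reflective least squares on synthetic reference data; the data, bounds, and starting point are illustrative only, and the paper's full method additionally enforces the CJ constraints and separates the term contributions.

```python
import numpy as np
from scipy.optimize import least_squares

def jwl_isentrope(v, A, B, C, R1, R2, omega):
    """JWL principal isentrope p(v) = A e^{-R1 v} + B e^{-R2 v} + C v^{-(1+omega)},
    with v the relative volume."""
    return A * np.exp(-R1 * v) + B * np.exp(-R2 * v) + C * v ** -(1.0 + omega)

def residuals(theta, v, p_ref):
    # Relative residuals keep the widely different pressure scales comparable.
    return (jwl_isentrope(v, *theta) - p_ref) / p_ref

# Hypothetical reference expansion data (v, p), standing in for thermochemical-code output.
v = np.linspace(1.0, 7.0, 40)
true = (3.7e2, 3.7, 0.8, 4.2, 1.1, 0.3)               # GPa-scale, illustrative only
p_ref = jwl_isentrope(v, *true) * (1 + 1e-3 * np.random.default_rng(0).standard_normal(v.size))

theta0 = (2e2, 1.0, 0.5, 4.0, 1.0, 0.3)               # initial guess (A, B, C, R1, R2, omega)
lb = (0.0, 0.0, 0.0, 1.0, 0.1, 0.05)                  # box lower bounds
ub = (1e3, 50.0, 10.0, 10.0, 5.0, 1.0)                # box upper bounds
sol = least_squares(residuals, theta0, bounds=(lb, ub), method="trf",  # trust-region reflective
                    args=(v, p_ref))
print(sol.x)
```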
23 pages, 1093 KiB  
Article
ADDAEIL: Anomaly Detection with Drift-Aware Ensemble-Based Incremental Learning
by Danlei Li, Nirmal-Kumar C. Nair and Kevin I-Kai Wang
Algorithms 2025, 18(6), 359; https://doi.org/10.3390/a18060359 - 11 Jun 2025
Abstract
Time series anomaly detection in streaming environments faces persistent challenges due to concept drift, which gradually degrades model reliability. In this paper, we propose Anomaly Detection with Drift-Aware Ensemble-based Incremental Learning (ADDAEIL), an unsupervised anomaly detection framework that incrementally adapts to concept drift in non-stationary streaming time series data. ADDAEIL integrates a hybrid drift detection mechanism that combines statistical distribution tests with structure-based performance evaluation of the base detectors in an Isolation Forest. This design enables unsupervised detection and continuous adaptation to evolving data patterns. Based on the estimated drift intensity, an adaptive update strategy selectively replaces degraded base detectors. This allows the anomaly detection model to incorporate new information while preserving useful historical behavior. Experiments on both real-world and synthetic datasets show that ADDAEIL consistently outperforms existing state-of-the-art methods and maintains robust long-term performance in non-stationary data streams.
(This article belongs to the Special Issue Machine Learning for Edge Computing)
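A minimal sketch of the ensemble-update idea, assuming a two-sample Kolmogorov–Smirnov test as the distribution check and plain Isolation Forests as base detectors; ADDAEIL's actual mechanism also scores detector structure and scales replacement by the estimated drift intensity, which this sketch omits.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.ensemble import IsolationForest

class DriftAwareEnsemble:
    """Toy drift-adaptive anomaly ensemble: a pool of Isolation Forests whose
    oldest member is replaced when a KS test flags drift between the previous
    and the newest data window."""
    def __init__(self, n_members=5, alpha=0.01):
        self.n_members, self.alpha = n_members, alpha
        self.members, self.reference = [], None

    def score(self, X):
        # Average anomaly score across members (lower = more anomalous).
        return np.mean([m.score_samples(X) for m in self.members], axis=0)

    def update(self, window):
        new = IsolationForest(n_estimators=100, random_state=0).fit(window)
        if len(self.members) < self.n_members:
            self.members.append(new)
        elif ks_2samp(self.reference.ravel(), window.ravel()).pvalue < self.alpha:
            self.members = self.members[1:] + [new]   # drop oldest, keep useful history
        self.reference = window

rng = np.random.default_rng(0)
ens = DriftAwareEnsemble()
for t in range(10):                                   # stream of 1-D windows
    shift = 0.0 if t < 6 else 3.0                     # distribution drifts after step 6
    ens.update(rng.normal(shift, 1.0, size=(500, 1)))
print(ens.score(rng.normal(3.0, 1.0, size=(5, 1))))
```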
20 pages, 25324 KiB  
Article
DGSS-YOLOv8s: A Real-Time Model for Small and Complex Object Detection in Autonomous Vehicles
by Siqiang Cheng, Lingshan Chen and Kun Yang
Algorithms 2025, 18(6), 358; https://doi.org/10.3390/a18060358 - 11 Jun 2025
Abstract
Object detection in complex road scenes is vital for autonomous driving, facing challenges such as object occlusion, small target sizes, and irregularly shaped targets. To address these issues, this paper introduces DGSS-YOLOv8s, a model designed to enhance detection accuracy and high-FPS performance within the You Only Look Once version 8 small (YOLOv8s) framework. The key innovation lies in the synergistic integration of several architectural enhancements: the DCNv3_LKA_C2f module, leveraging Deformable Convolution v3 (DCNv3) and Large Kernel Attention (LKA) for better capture of complex object shapes; an Optimized Feature Pyramid Network structure (Optimized-GFPN) for improved multi-scale feature fusion; the Detect_SA module, incorporating spatial Self-Attention (SA) at the detection head for broader context awareness; and an Inner-Shape Intersection over Union (IoU) loss function to improve bounding box regression accuracy. These components collectively target the aforementioned challenges in road environments. Evaluations on the Berkeley DeepDrive 100K (BDD100K) and Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) datasets demonstrate the model’s effectiveness. Compared to baseline YOLOv8s, DGSS-YOLOv8s achieves mean Average Precision (mAP)@50 improvements of 2.4% (BDD100K) and 4.6% (KITTI). Significant gains were observed for challenging categories, notably 87.3% mAP@50 for cyclists on KITTI, and small object detection (AP-small) improved by up to 9.7% on KITTI. Crucially, DGSS-YOLOv8s achieved high processing speeds suitable for autonomous driving, operating at 103.1 FPS (BDD100K) and 102.5 FPS (KITTI) on an NVIDIA GeForce RTX 4090 GPU. These results highlight that DGSS-YOLOv8s effectively balances enhanced detection accuracy for complex scenarios with high processing speed, demonstrating its potential for demanding autonomous driving applications.
(This article belongs to the Special Issue Advances in Computer Vision: Emerging Trends and Applications)
25 pages, 1991 KiB  
Article
Crude Oil and Hot-Rolled Coil Futures Price Prediction Based on Multi-Dimensional Fusion Feature Enhancement
by Yongli Tang, Zhenlun Gao, Ya Li, Zhongqi Cai, Jinxia Yu and Panke Qin
Algorithms 2025, 18(6), 357; https://doi.org/10.3390/a18060357 - 11 Jun 2025
Abstract
To address the challenges in forecasting crude oil and hot-rolled coil futures prices, the aim is to transcend the constraints of conventional approaches. This involves effectively predicting short-term price fluctuations, developing quantitative trading strategies, and modeling time series data. The goal is to enhance prediction accuracy and stability, thereby supporting decision-making and risk management in financial markets. A novel approach, the multi-dimensional fusion feature-enhanced (MDFFE) prediction method, has been devised. Additionally, a data augmentation framework leveraging multi-dimensional feature engineering has been established. Technical indicators, volatility indicators, time features, and cross-variety linkage features are integrated to build the prediction system, and lag feature design is used to prevent data leakage. In addition, a deep fusion model is constructed, which combines the temporal feature extraction ability of a convolutional neural network with the nonlinear mapping advantage of an extreme gradient boosting tree. With the help of a three-layer convolutional neural network structure and an adaptive weight fusion strategy, an end-to-end prediction framework is constructed. Experimental results demonstrate that the MDFFE model excels in various metrics, including mean absolute error, root mean square error, mean absolute percentage error, coefficient of determination, and sum of squared errors. The mean absolute error reaches as low as 0.0068, while the coefficient of determination can be as high as 0.9970. In addition, the significance and stability of the model performance were verified by statistical methods such as a paired t-test and analysis of variance (ANOVA). This MDFFE algorithm offers a robust and practical approach for predicting commodity futures prices. It holds significant theoretical and practical value in financial market forecasting, enhancing prediction accuracy and mitigating forecast volatility.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
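The leakage guard described above, building predictors only from information available strictly before the prediction time, is easy to illustrate. Below is a small pandas sketch with a hypothetical close price column; the paper's full feature set also spans technical, volatility, time, and cross-variety features.

```python
import pandas as pd

def make_lag_features(df, target="close", lags=(1, 2, 3, 5, 10)):
    """Leakage-safe feature table: every predictor at time t uses only
    values observed before t (hence every column is shifted by >= 1)."""
    out = pd.DataFrame(index=df.index)
    for k in lags:
        out[f"{target}_lag{k}"] = df[target].shift(k)
    out["ret_1d"] = df[target].pct_change().shift(1)                    # previous day's return
    out["vol_5d"] = df[target].pct_change().rolling(5).std().shift(1)   # trailing volatility
    out["y"] = df[target]                                               # prediction target at t
    return out.dropna()
```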
25 pages, 5824 KiB  
Article
Identifying Hubs Through Influential Nodes in Transportation Network by Using a Gravity Centrality Approach
by Worawit Tepsan, Aniwat Phaphuangwittayakul, Saronsad Sokantika and Napat Harnpornchai
Algorithms 2025, 18(6), 356; https://doi.org/10.3390/a18060356 - 10 Jun 2025
Abstract
Hubs are strategic locations that function as central nodes within clusters of cities, playing a pivotal role in the distribution of goods, services, and connectivity. Identifying these vital hubs—through analyzing influential locations within transportation networks—is essential for effective urban planning, logistics optimization, and enhancing infrastructure resilience. This task becomes even more crucial in developing and less-developed countries, where such hubs can significantly accelerate urban growth and drive economic development. However, existing hub identification approaches face notable limitations. Traditional centrality measures often yield low variance in node scores, making it difficult to distinguish truly influential nodes. Moreover, these methods typically rely solely on either local metrics or global network structures, limiting their effectiveness. To address these challenges, we propose a novel method called Hybrid Community-based Gravity Centrality (HCGC), which integrates local influence measures, community detection, and gravity-based modeling to more effectively identify influential nodes in complex networks. Through extensive experiments, we demonstrate that HCGC consistently outperforms existing methods in terms of spreading ability across varying truncation radii. To further validate our approach, we introduce ThaiNet, a newly constructed real-world transportation network dataset. The results show that HCGC not only preserves the strengths of traditional local approaches but also captures broader structural patterns, making it a powerful and practical tool for real-world network analysis.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
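For context, classical gravity centrality treats node degree as mass and shortest-path distance as separation, truncated at a radius; a small networkx sketch follows. HCGC as proposed additionally folds in community detection and local influence measures, which this sketch does not reproduce.

```python
import networkx as nx

def gravity_centrality(G, radius=3):
    """Classical gravity centrality: score(i) = sum over j within `radius`
    of deg(i) * deg(j) / d(i, j)^2, with d the shortest-path distance."""
    deg = dict(G.degree())
    scores = {}
    for i in G:
        dist = nx.single_source_shortest_path_length(G, i, cutoff=radius)
        scores[i] = sum(deg[i] * deg[j] / d ** 2
                        for j, d in dist.items() if d > 0)   # skip the node itself
    return scores

G = nx.karate_club_graph()                                   # stand-in network
top = sorted(gravity_centrality(G).items(), key=lambda kv: -kv[1])[:5]
print("most influential nodes:", top)
```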
27 pages, 2140 KiB  
Article
Effective Detection of Malicious Uniform Resource Locator (URLs) Using Deep-Learning Techniques
by Yirga Yayeh Munaye, Aneas Bekele Workneh, Yenework Belayneh Chekol and Atinkut Molla Mekonen
Algorithms 2025, 18(6), 355; https://doi.org/10.3390/a18060355 - 7 Jun 2025
Abstract
The rapid growth of internet usage in daily life has led to a significant increase in cyber threats, with malicious URLs serving as a common vector for cybercrime. Traditional detection methods often suffer from high false alarm rates and struggle to keep pace with evolving threats due to outdated feature extraction techniques and datasets. To address these limitations, we propose a deep learning-based approach aimed at developing an effective model for detecting malicious URLs. Our proposed method, the Char2B model, leverages a fusion of BERT and CharBiGRU embeddings, further enhanced by a Conv1D layer with a kernel size of three and unit-sized stride and padding. After combining the embeddings, we used the BERT model as a baseline for comparison. The study involved collecting a dataset of 87,216 URLs, comprising both benign and malicious samples sourced from the Open Directory Project (DMOZ), PhishTank, and Any.Run. Models were trained using the training set and evaluated on the test set using standard metrics, including accuracy, precision, recall, and F1-score. Through iterative refinement, we optimized the model’s performance to maximize its effectiveness. As a result, our proposed model achieved 98.50% accuracy, 98.27% precision, 98.69% recall, and a 98.48% F1-score, outperforming the baseline BERT model. Additionally, our model’s false positive rate of 0.017 improved on the baseline model’s 0.018. By effectively extracting and utilizing informative features, the model accurately classified URLs into benign and malicious categories, thereby improving detection capabilities. This study highlights the significance of our deep learning approach in strengthening cybersecurity by integrating advanced algorithms that enhance detection accuracy, bolster defense mechanisms, and contribute to a safer digital environment.
(This article belongs to the Collection Feature Papers in Algorithms for Multidisciplinary Applications)
40 pages, 3827 KiB  
Review
A Review of Hybrid Vehicles Classification and Their Energy Management Strategies: An Exploration of the Advantages of Genetic Algorithms
by Yuede Pan, Kaifeng Zhong, Yubao Xie, Mingzhang Pan, Wei Guan, Li Li, Changye Liu, Xingjia Man, Zhiqing Zhang and Mantian Li
Algorithms 2025, 18(6), 354; https://doi.org/10.3390/a18060354 - 6 Jun 2025
Abstract
This paper presents a comprehensive analysis of hybrid electric vehicle (HEV) classification and energy management strategies (EMS), with a particular emphasis on the application and potential of genetic algorithms (GAs) in optimizing energy management strategies for hybrid electric vehicles. Initially, the paper categorizes hybrid electric vehicles based on mixing rates and power source configurations, elucidating the operational principles and the range of applicability for different hybrid electric vehicle types. Following this, the two primary categories of energy management strategies—rule-based and optimization-based—are introduced, emphasizing their significance in enhancing energy efficiency and performance, while also acknowledging their inherent limitations. Furthermore, the advantages of utilizing genetic algorithms in optimizing energy management systems for hybrid vehicles are underscored. As a global optimization technique, genetic algorithms are capable of effectively addressing complex multi-objective problems by circumventing local optima and identifying the global optimal solution. The adaptability and versatility of genetic algorithms allow them to conduct real-time optimization across diverse driving conditions. Genetic algorithms play a pivotal role in hybrid vehicle energy management and exhibit a promising future. When combined with other optimization techniques, genetic algorithms can augment the optimization potential for tackling complex tasks. Nonetheless, the advancement of this technique is confronted with challenges such as cost, battery longevity, and charging infrastructure, which significantly influence its widespread adoption and application.
(This article belongs to the Section Parallel and Distributed Algorithms)
15 pages, 349 KiB  
Article
Evolutionary Optimization for the Classification of Small Molecules Regulating the Circadian Rhythm Period: A Reliable Assessment
by Antonio Arauzo-Azofra, Jose Molina-Baena and Maria Luque-Rodriguez
Algorithms 2025, 18(6), 353; https://doi.org/10.3390/a18060353 - 6 Jun 2025
Abstract
The circadian rhythm plays a crucial role in regulating biological processes, and its disruption is linked to various health issues. Identifying small molecules that influence the circadian period is essential for developing targeted therapies. This study explores the use of evolutionary optimization techniques to enhance the classification of these molecules. We applied a genetic algorithm to optimize feature selection and classification performance. Several tree-based learning classification algorithms (Decision Trees, Extra Trees, Random Forest, XGBoost) and a distance-based classifier (kNN) were employed. Their performance was evaluated using accuracy and F1-score, while considering their generalization ability with a validation set. The findings demonstrate that the proposed genetic algorithm improves classification accuracy and reduces overfitting compared to baseline models. Additionally, the use of variance in accuracy as a penalty factor may enhance the model’s reliability for real-world applications. Our study confirms that evolutionary optimization is an effective strategy for classifying small molecules regulating the circadian rhythm. The proposed approach not only improves predictive performance but also ensures a more robust model.
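A minimal sketch of the wrapper idea on synthetic data: a bit-mask genome selects features, and fitness is cross-validated accuracy penalized by its standard deviation, echoing the variance penalty mentioned above. Selection, crossover, and mutation here are the simplest textbook choices, not necessarily the paper's operators.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=40, n_informative=8, random_state=0)

def fitness(mask):
    """Mean CV accuracy minus its standard deviation: subsets whose score
    varies across folds are penalized as less reliable."""
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    scores = cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3)
    return scores.mean() - scores.std()

pop = rng.integers(0, 2, size=(20, X.shape[1]))          # random bit-mask population
for gen in range(15):
    fit = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(fit)[-10:]]                 # keep the 10 fittest
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])       # one-point crossover
        flip = rng.random(X.shape[1]) < 0.02             # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```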
28 pages, 1589 KiB  
Systematic Review
ChatGPT in Education: A Systematic Review on Opportunities, Challenges, and Future Directions
by Yirga Yayeh Munaye, Wasyihun Admass, Yenework Belayneh, Atinkut Molla and Mekete Asmare
Algorithms 2025, 18(6), 352; https://doi.org/10.3390/a18060352 - 6 Jun 2025
Abstract
This study presents a systematic review on the integration of ChatGPT in education, examining its opportunities, challenges, and future directions. Utilizing the PRISMA framework, the review analyzes 40 peer-reviewed studies published from 2020 to 2024. Opportunities identified include the potential for ChatGPT to foster individualized educational experiences, tailoring learning to meet the needs of individual students. Its capacity to automate grading and assessments is noted as a time-saving measure for educators, allowing them to focus on more interactive and engaging teaching methods. However, the study also addresses significant challenges associated with utilizing ChatGPT in educational contexts. Concerns regarding academic integrity are paramount, as students might misuse ChatGPT for cheating or plagiarism. Additionally, issues such as ChatGPT bias are highlighted, raising questions about the fairness and inclusivity of ChatGPT-generated content in educational materials. The necessity for ethical governance is emphasized, underscoring the importance of establishing clear policies to guide the responsible use of AI in education. The findings highlight several key trends regarding ChatGPT’s role in enhancing personalized learning, automating assessments, and providing support to educators. The review concludes by stressing the importance of identifying best practices to optimize ChatGPT’s effectiveness in teaching and learning environments. There is a clear need for future research focusing on adaptive ChatGPT regulation, which will be essential as educational stakeholders seek to understand and manage the long-term impacts of ChatGPT integration on pedagogy.
(This article belongs to the Special Issue Artificial Intelligence Algorithms and Generative AI in Education)
16 pages, 1400 KiB  
Article
An RMSprop-Incorporated Latent Factorization of Tensor Model for Random Missing Data Imputation in Structural Health Monitoring
by Jingjing Yang
Algorithms 2025, 18(6), 351; https://doi.org/10.3390/a18060351 - 6 Jun 2025
Abstract
In structural health monitoring (SHM), ensuring data completeness is critical for enhancing the accuracy and reliability of structural condition assessments. SHM data are prone to random missing values due to signal interference or connectivity issues, making precise data imputation essential. A latent factorization of tensor (LFT)-based method has proven effective for such problems, with optimization typically achieved via stochastic gradient descent (SGD). However, SGD-based LFT models and other imputation methods exhibit significant sensitivity to learning rates and slow tail-end convergence. To address these limitations, this study proposes an RMSprop-incorporated latent factorization of tensor (RLFT) model, which integrates an adaptive learning rate mechanism to dynamically adjust step sizes based on gradient magnitudes. Experimental validation on a scaled bridge accelerometer dataset demonstrates that RLFT achieves faster convergence and higher imputation accuracy compared to state-of-the-art models including SGD-based LFT and the long short-term memory (LSTM) network, with improvements of at least 10% in both imputation accuracy and convergence rate, offering a more efficient and reliable solution for missing data handling in SHM.
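A compact sketch of the core update, assuming a CP-style three-way factorization trained only on observed entries, with an RMSprop accumulator rescaling each gradient; the dimensions, rank, and hyperparameters below are illustrative, not the paper's settings.

```python
import numpy as np

def rlft(T, mask, rank=3, lr=0.01, beta=0.9, eps=1e-8, iters=500, lam=0.01):
    """RMSprop-driven latent factorization of a 3-way tensor: CP factors
    (U, V, W) are fit on observed cells (mask == 1); each gradient is scaled
    by a running average of its square, the RMSprop adaptive step size."""
    rng = np.random.default_rng(0)
    I, J, K = T.shape
    U, V, W = (0.1 * rng.standard_normal(s) for s in ((I, rank), (J, rank), (K, rank)))
    acc = [np.zeros_like(F) for F in (U, V, W)]
    for _ in range(iters):
        E = mask * (np.einsum('ir,jr,kr->ijk', U, V, W) - T)    # residual on observed cells
        grads = (np.einsum('ijk,jr,kr->ir', E, V, W) + lam * U,
                 np.einsum('ijk,ir,kr->jr', E, U, W) + lam * V,
                 np.einsum('ijk,ir,jr->kr', E, U, V) + lam * W)
        for F, a, g in zip((U, V, W), acc, grads):
            a *= beta
            a += (1 - beta) * g * g                              # running E[g^2]
            F -= lr * g / (np.sqrt(a) + eps)                     # RMSprop step
    return np.einsum('ir,jr,kr->ijk', U, V, W)

rng = np.random.default_rng(1)
true = np.einsum('ir,jr,kr->ijk', *(rng.standard_normal((d, 3)) for d in (10, 8, 6)))
mask = (rng.random(true.shape) < 0.7).astype(float)              # 30% of entries missing
recon = rlft(true, mask)
print("RMSE on missing entries:",
      np.sqrt(np.mean((recon - true)[mask == 0] ** 2)))
```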
24 pages, 2877 KiB  
Article
Memory-Efficient Batching for Time Series Transformer Training: A Systematic Evaluation
by Phanwadee Sinthong, Nam Nguyen, Vijay Ekambaram, Arindam Jati, Jayant Kalagnanam and Peeravit Koad
Algorithms 2025, 18(6), 350; https://doi.org/10.3390/a18060350 - 5 Jun 2025
Abstract
Transformer-based time series models are being increasingly employed for time series data analysis. However, their training remains memory intensive, especially with high-dimensional data and extended look-back windows. While model-level memory optimizations are well studied, the batch formation process remains an underexplored source of performance inefficiency. This paper introduces a memory-efficient batching framework based on view-based sliding windows operating directly on GPU-resident tensors. This approach eliminates redundant data materialization caused by tensor stacking and reduces data transfer volumes without modifying model architectures. We present two variants of our solution: (1) per-batch optimization for datasets exceeding GPU memory, and (2) dataset-wise optimization for in-memory workloads. We evaluate our proposed batching framework systematically, using peak GPU memory consumption and epoch runtime as efficiency metrics across varying batch sizes, sequence lengths, feature dimensions, and model architectures. Results show consistent memory savings, averaging 90%, and runtime improvements of up to 33% across multiple transformer-based models (Informer, Autoformer, Transformer, and PatchTST) and a linear baseline (DLinear) without compromising model accuracy. We extensively validate our method using synthetic and standard real-world benchmarks, demonstrating accuracy preservation and practical scalability in distributed GPU environments. The proposed method highlights the batch formation process as a critical component for improving training efficiency.
(This article belongs to the Section Parallel and Distributed Algorithms)
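In PyTorch, the view-based idea can be sketched with Tensor.unfold, which exposes sliding windows over a tensor as views rather than stacked copies; the shapes and the lookback/horizon split below are illustrative assumptions, not the paper's exact pipeline.

```python
import torch

def view_batches(series, lookback, horizon, batch_size):
    """Sliding windows as views over a (time, features) tensor via unfold:
    no per-window copies are materialized by stacking, so peak memory stays
    close to the size of the underlying series."""
    n = series.size(0) - lookback - horizon + 1        # number of usable windows
    x = series.unfold(0, lookback, 1)[:n]              # (n, features, lookback) view
    y = series.unfold(0, horizon, 1)[lookback:lookback + n]
    for i in range(0, n, batch_size):
        # transpose to the usual (batch, time, features) layout; still a view
        yield x[i:i + batch_size].transpose(1, 2), y[i:i + batch_size].transpose(1, 2)

series = torch.randn(10_000, 7)                        # move to .cuda() if available
for xb, yb in view_batches(series, lookback=96, horizon=24, batch_size=64):
    pass                                               # feed xb (B, 96, 7), yb (B, 24, 7) to the model
```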
22 pages, 9553 KiB  
Article
Testing the Effectiveness of Voxels for Structural Analysis
by Sara Gonizzi Barsanti and Ernesto Nappi
Algorithms 2025, 18(6), 349; https://doi.org/10.3390/a18060349 - 5 Jun 2025
Abstract
To assess the condition of cultural heritage assets for conservation, reality-based 3D models can be analyzed using finite element analysis (FEA) software, yielding valuable insights into their structural integrity. Three-dimensional point clouds obtained through photogrammetric and laser scanning techniques can be transformed into volumetric data suitable for FEA by utilizing voxels. When directly using the point cloud data in this process, it is crucial to employ the highest level of accuracy. The fidelity of point clouds can be compromised by various factors, including uncooperative materials or surfaces, poor lighting conditions, reflections, intricate geometries, and limitations in the precision of the instruments. Such noise not only skews the inherent structure of the point cloud but also introduces extraneous information. Hence, the geometric accuracy of the resulting model may be diminished, ultimately impacting the reliability of any analyses conducted upon it. The removal of noise from point clouds, a crucial aspect of 3D data processing known as point cloud denoising, is gaining significant attention due to its ability to reveal the true underlying point cloud structure. This paper focuses on evaluating the geometric precision of the voxelization process, which transforms denoised 3D point clouds into volumetric models suitable for structural analyses.
(This article belongs to the Collection Feature Papers in Algorithms)
23 pages, 676 KiB  
Article
Numerical and Theoretical Treatments of the Optimal Control Model for the Interaction Between Diabetes and Tuberculosis
by Saburi Rasheed, Olaniyi S. Iyiola, Segun I. Oke and Bruce A. Wade
Algorithms 2025, 18(6), 348; https://doi.org/10.3390/a18060348 - 5 Jun 2025
Abstract
We primarily focus on the formulation, theoretical, and numerical analyses of a non-autonomous model for tuberculosis (TB) prevention and control programs in a population where individuals suffering from the double trouble of tuberculosis and diabetes are present. The model incorporates four time-dependent control functions, saturated treatment of non-infectious individuals harboring tuberculosis, and a saturated incidence rate. Furthermore, the basic reproduction number of the autonomous form of the proposed optimal control mathematical model is calculated. Sensitivity indexes regarding the constant control parameters reveal that the proposed control and preventive measures will reduce the tuberculosis burden in the population. This study establishes that the combination of campaigns that teach people how the development of tuberculosis and diabetes can be prevented, a treatment strategy that provides saturated treatment to non-infectious individuals exposed to tuberculosis infections, and prompt effective treatment of individuals infected with tuberculosis disease is the optimal strategy to achieve zero TB by 2035.
16 pages, 2603 KiB  
Article
A Novel Model for Accurate Daily Urban Gas Load Prediction Using Genetic Algorithms
by Xi Chen, Feng Wang, Li Xu, Taiwu Xia, Minhao Wang, Gangping Chen, Longyu Chen and Jun Zhou
Algorithms 2025, 18(6), 347; https://doi.org/10.3390/a18060347 - 5 Jun 2025
Abstract
As natural gas consumption increases year by year, the shortage of urban natural gas reserves makes the gas supply–demand imbalance increasingly serious. Establishing a correct and reasonable daily gas load forecasting model is therefore particularly important to ensure both the forecasting function and the accuracy and reliability of the results. Most current prediction models focus on combining the characteristics of gas data with forecasting models, while the influencing factors are often given little consideration. To address this problem, the basic concept of multiple weather parameters (MWP) was introduced, and the influence of factors such as average temperature, solar radiation, cumulative temperature, wind power, and the temperature change of the building foundation on the daily load of urban gas was analyzed. A multiple weather parameter–daily load prediction (MWP-DLP) model based on System Thermal Days (STD) was established, and a genetic algorithm was used to solve it. The daily gas load in a city was predicted, and the results were analyzed. The results show that the trend of the daily gas load predicted by the MWP-DLP model was basically consistent with the actual values. The maximum relative error was 8.2%, and the mean absolute percentage error (MAPE) was 2.68%. The feasibility of the MWP-DLP prediction model was verified, which has practical significance for gas companies in reasonably formulating peak-shaving schemes and deciding natural gas reserves.
(This article belongs to the Special Issue Artificial Intelligence for More Efficient Renewable Energy Systems)
14 pages, 698 KiB  
Article
Inferring the Timing of Antiretroviral Therapy by Zero-Inflated Random Change Point Models Using Longitudinal Data Subject to Left-Censoring
by Hongbin Zhang, McKaylee Robertson, Sarah L. Braunstein, David B. Hanna, Uriel R. Felsen, Levi Waldron and Denis Nash
Algorithms 2025, 18(6), 346; https://doi.org/10.3390/a18060346 - 5 Jun 2025
Abstract
We propose a new random change point model that utilizes routinely recorded individual-level HIV viral load data to estimate the timing of antiretroviral therapy (ART) initiation in people living with HIV. The change point is assumed to follow a zero-inflated exponential distribution, the longitudinal data are subject to left-censoring, and the underlying data-generating mechanism is a nonlinear mixed-effects model. We extend the Stochastic EM (StEM) algorithm by combining a Gibbs sampler with Metropolis–Hastings sampling. We apply the method to real HIV data to infer the timing of ART initiation since diagnosis. Additionally, we conduct simulation studies to assess the performance of our proposed method.
27 pages, 552 KiB  
Article
Automatic Generation of Synthesisable Hardware Description Language Code of Multi-Sequence Detector Using Grammatical Evolution
by Bilal Majeed, Rajkumar Sarma, Ayman Youssef, Douglas Mota Dias and Conor Ryan
Algorithms 2025, 18(6), 345; https://doi.org/10.3390/a18060345 - 5 Jun 2025
Abstract
Quickly designing digital circuits that are both correct and efficient poses significant challenges. Electronics, especially those incorporating sequential logic circuits, are complex to design and test. While Electronic Design Automation (EDA) tools aid designers, they do not fully automate the creation of synthesisable circuits that can be directly translated into hardware. This paper introduces a system that employs Grammatical Evolution (GE) to automatically generate synthesisable Hardware Description Language (HDL) code for the Finite State Machine (FSM) of a Multi-Sequence Detector (MSD). This MSD differs significantly from prior work as it can detect multiple sequences, in contrast to the single-sequence detectors discussed in the existing literature. Sequence Detectors (SDs) are essential in circuits that detect sequences of specific events to produce timely alerts. The proposed MSD applies to a real-time vending machine scenario, enabling customer selections upon successful payment. However, this technique can evolve any MSD, such as a traffic light control system or a robot navigation system. We examine two parent selection techniques, Tournament Selection (TS) and Lexicase Selection (LS), demonstrating that LS performs better than TS, although both techniques successfully produce synthesisable hardware solutions. Both hand-crafted “Gold” and evolved circuits are synthesised using Generic Process Design Kit (GPDK) technologies at 45 nm, 90 nm, and 180 nm scales, demonstrating their efficacy.
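The genotype-to-phenotype mapping at the heart of GE is compact enough to sketch: integer codons repeatedly choose productions (codon mod number-of-choices) until no non-terminals remain. The toy Boolean-expression grammar below stands in for the paper's far richer synthesisable-HDL grammar.

```python
# Minimal grammatical-evolution mapping. The grammar is a toy stand-in;
# the paper evolves FSM HDL code from a much larger grammar.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"]],
    "<op>":   [["and"], ["or"], ["xor"]],
    "<var>":  [["a"], ["b"], ["c"]],
}

def ge_map(genome, start="<expr>", max_wraps=2):
    """Expand the leftmost non-terminal using successive codons; wrap the
    genome when exhausted, and give up after max_wraps (invalid individual)."""
    seq, i, wraps = [start], 0, 0
    while any(s in GRAMMAR for s in seq):
        if i >= len(genome):
            i, wraps = 0, wraps + 1
            if wraps > max_wraps:
                return None
        j = next(k for k, s in enumerate(seq) if s in GRAMMAR)
        choices = GRAMMAR[seq[j]]
        seq[j:j + 1] = choices[genome[i] % len(choices)]   # codon picks a production
        i += 1
    return " ".join(seq)

print(ge_map([0, 1, 3, 2, 1, 1]))   # -> "a xor b" for this codon sequence
```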
15 pages, 920 KiB  
Article
A Novel Connected-Components Algorithm for 2D Binarized Images
by Costin-Anton Boiangiu, Giorgiana-Violeta Vlăsceanu, Constantin-Eduard Stăniloiu, Nicolae Tarbă and Mihai-Lucian Voncilă
Algorithms 2025, 18(6), 344; https://doi.org/10.3390/a18060344 - 5 Jun 2025
Abstract
This paper introduces a new memory-efficient algorithm for connected-components labeling in binary images, which is based on run-length encoding. Unlike conventional pixel-based methods that scan and label individual pixels using global buffers or disjoint-set structures, our approach encodes rows as linked segments and merges them using a union-by-size strategy. We accelerate run detection by using a precomputed 16-bit cache of binary patterns, allowing for fast decoding without relying on bitwise CPU instructions. When compared against other run-length encoded algorithms, such as the Scan-Based Labeling Algorithm or Run-Based Two-Scan, our method runs up to 35% faster on most real-world datasets. While other binary-optimized algorithms, such as Bit-Run Two-Scan and Bit-Merge Run Scan, are up to 45% faster than our algorithm, they require much higher memory usage. Compared to them, our method tends to reduce memory consumption on some large document datasets by up to 80%.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
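A minimal sketch of run-length connected-components labeling with union-find (4-connectivity): each row is decoded into runs, and runs that overlap a run in the previous row are merged. The paper's method additionally decodes runs through a precomputed 16-bit pattern cache and uses union-by-size, both of which this sketch omits.

```python
import numpy as np

def rle_label(img):
    """Label connected components of a binary image via row runs + union-find."""
    parent = {}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    runs_prev, all_runs = [], []            # a run is (row, start, end, id)
    for r, row in enumerate(img):
        runs, c = [], 0
        while c < len(row):
            if row[c]:
                s = c
                while c < len(row) and row[c]:
                    c += 1
                rid = len(all_runs)
                parent[rid] = rid
                runs.append((s, c, rid))
                all_runs.append((r, s, c, rid))
            else:
                c += 1
        for s, e, rid in runs:              # merge with overlapping runs above
            for ps, pe, pid in runs_prev:
                if ps < e and s < pe:       # half-open intervals overlap
                    union(pid, rid)
        runs_prev = runs

    labels, roots = np.zeros(img.shape, dtype=int), {}
    for r, s, e, rid in all_runs:
        labels[r, s:e] = roots.setdefault(find(rid), len(roots) + 1)
    return labels

img = np.array([[1, 1, 0, 0, 1],
                [0, 1, 0, 1, 1],
                [0, 0, 0, 0, 1]], dtype=bool)
print(rle_label(img))                        # two components expected
```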
26 pages, 383 KiB  
Article
A Standardized Validation Framework for Clinically Actionable Healthcare Machine Learning with Knee Osteoarthritis Grading as a Case Study
by Daniel Nasef, Demarcus Nasef, Michael Sher and Milan Toma
Algorithms 2025, 18(6), 343; https://doi.org/10.3390/a18060343 - 5 Jun 2025
Abstract
Background: High in-domain accuracy in healthcare machine learning (ML) models does not guarantee reliable clinical performance, especially when training and validation protocols are insufficiently robust. This paper presents a standardized framework for training and validating ML models intended for classifying medical conditions, emphasizing the need for clinically relevant evaluation metrics and external validation. Methods: We apply this framework to a case study in knee osteoarthritis grading, demonstrating how overfitting, data leakage, and inadequate validation can lead to deceptively high accuracy that fails to translate into clinical reliability. In addition to conventional metrics, we introduce composite clinical measures that better capture real-world utility. Results: Our findings show that models with strong in-domain performance may underperform on external datasets, and that composite metrics provide a more nuanced assessment of clinical applicability. Conclusions: Standardized training and validation protocols, together with clinically oriented evaluation, are essential for developing ML models that are both statistically robust and clinically reliable across a range of medical classification tasks.
18 pages, 934 KiB  
Article
Optimization of PFMEA Team Composition in the Automotive Industry Using the IPF-RADAR Approach
by Nikola Komatina and Dragan Marinković
Algorithms 2025, 18(6), 342; https://doi.org/10.3390/a18060342 - 4 Jun 2025
Abstract
In the automotive industry, the implementation of Process Failure Mode and Effect Analysis (PFMEA) is conducted by a PFMEA team comprising employees who are connected to the production process or a specific product. Core PFMEA team members are actively engaged in PFMEA execution through meetings, analysis, and the implementation of corrective actions. Although the current handbook provides guidelines on the potential composition of the PFMEA team, it does not strictly define its members, allowing companies the flexibility to determine the team structure independently. This study aims to identify the core PFMEA team members by adhering to criteria based on the recommended knowledge and competencies outlined in the current handbook. By applying the RAnking based on the Distances and Range (RADAR) approach, extended with Interval-Valued Pythagorean Fuzzy Numbers (IVPFNs), a ranking of potential candidates was conducted. A case study was performed in a Tier-1 supplier company within the automotive supply chain.
35 pages, 4147 KiB  
Article
S-EPSO: A Socio-Emotional Particle Swarm Optimization Algorithm for Multimodal Search in Low-Dimensional Engineering Applications
by Raynald Guilbault
Algorithms 2025, 18(6), 341; https://doi.org/10.3390/a18060341 - 4 Jun 2025
Abstract
This paper examines strategies aimed at improving search procedures in multimodal, low-dimensional domains. Here, low-dimensional refers to domains with at most five dimensions. The present analysis assembles strategies to form an algorithm named S-EPSO, which, at its core, locates and maintains multiple optima without relying on external niching parameters, instead adapting this functionality internally. The first proposed strategy assigns socio-emotional personalities to the particles forming the swarm. The analysis also introduces a technique to help them visit secluded zones. It allocates the particles of the initial distribution to subdomains based on biased decisions. The biases reflect the subdomain’s potential to contain optima. This potential is established from a balanced combination of the jaggedness and the mean-average interval descriptors developed in the study. The study compares the performance of S-EPSO to that of state-of-the-art algorithms over seventeen functions of the CEC benchmark, and S-EPSO is revealed to be highly competitive. Across 30 relevant evaluations, it outperformed the reference algorithms 14 times, whereas the best of the latter outperformed the other two 10 times. S-EPSO performed best with the most challenging 5D functions of the benchmark. These results clearly illustrate the potential of S-EPSO when it comes to dealing with practical engineering optimization problems limited to five dimensions.
28 pages, 1043 KiB  
Article
Beyond Kanban: POLCA-Constrained Scheduling for Job Shops
by Antonio Grieco, Pierpaolo Caricato and Paolo Margiotta
Algorithms 2025, 18(6), 340; https://doi.org/10.3390/a18060340 - 4 Jun 2025
Abstract
This study investigates the integration of finite capacity scheduling with POLCA-based workload control in high-mix, low-volume production environments. We propose a proactive scheduling approach that embeds POLCA constraints into a constraint programming (CP) model, aiming to reconcile the trade-offs between utilization efficiency and system responsiveness. The proposed methodology is evaluated in two phases. First, a simplified job shop simulation compares a traditional reactive POLCA implementation with the CP-based proactive approach under varying system configurations, demonstrating significant reductions in lead times, tardiness, and deadlock occurrences. Second, an industrial case study in an aerospace manufacturing firm validates the practical applicability of the approach by retrospectively comparing the CP model against an existing commercial scheduler. The results underscore that the integrated framework not only enhances scheduling performance through improved workload control but also provides a more stable operational environment.
(This article belongs to the Collection Feature Papers in Algorithms)
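One plausible CP encoding of a POLCA constraint, sketched with OR-Tools CP-SAT: a card for an A→B loop is modeled as an interval held from the start of a job's A-operation to the end of its B-operation, and a cumulative constraint caps concurrent holders at the number of cards. The two-cell instance below is a toy, not the paper's model.

```python
from ortools.sat.python import cp_model

# Toy POLCA-constrained two-cell job shop: each job runs in cell A, then cell B.
jobs = [(4, 3), (2, 5), (3, 2), (5, 4)]   # (duration in A, duration in B)
N_CARDS, H = 2, 60                        # POLCA cards for the A->B loop, horizon

m = cp_model.CpModel()
a_ivs, b_ivs, card_ivs, ends = [], [], [], []
for j, (da, db) in enumerate(jobs):
    sa, ea = m.NewIntVar(0, H, f"sa{j}"), m.NewIntVar(0, H, f"ea{j}")
    sb, eb = m.NewIntVar(0, H, f"sb{j}"), m.NewIntVar(0, H, f"eb{j}")
    a_ivs.append(m.NewIntervalVar(sa, da, ea, f"a{j}"))
    b_ivs.append(m.NewIntervalVar(sb, db, eb, f"b{j}"))
    m.Add(sb >= ea)                                    # fixed route: A before B
    hold = m.NewIntVar(0, H, f"hold{j}")               # card-holding duration
    m.Add(hold == eb - sa)
    card_ivs.append(m.NewIntervalVar(sa, hold, eb, f"card{j}"))
    ends.append(eb)

m.AddNoOverlap(a_ivs)                                  # one machine per cell
m.AddNoOverlap(b_ivs)
m.AddCumulative(card_ivs, [1] * len(jobs), N_CARDS)    # POLCA card capacity

makespan = m.NewIntVar(0, H, "makespan")
m.AddMaxEquality(makespan, ends)
m.Minimize(makespan)

solver = cp_model.CpSolver()
status = solver.Solve(m)
print(solver.StatusName(status), "makespan =", solver.Value(makespan))
```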
23 pages, 2559 KiB  
Article
Cost-Effective Design, Content Management System Implementation and Artificial Intelligence Support of Greek Government AADE, myDATA Web Service for Generic Government Infrastructure, a Complete Analysis
by George Tsamis, Georgios Evangelos, Aris Papakostas, Giannis Vassiliou, Michael Grafanakis, Alexandros Garefalakis, Michalis Vassalos, Anastasia Mylona and Nikos Papadakis
Algorithms 2025, 18(6), 339; https://doi.org/10.3390/a18060339 - 4 Jun 2025
Abstract
One significant digital initiative that is changing Greece’s tax environment is the myDATA platform. The platform, which is a component of the wider digital governance agenda, provides significant added value to enterprises and the tax administration, despite the challenges of adaptation. Despite the positive response, we find that the development of the platform could have been carried out more quickly and at a significantly lower cost, and could have coped much faster with the rapid and necessary changes the platform will have to comply with. For these reasons, development in WordPress would be considered essential, as this CMS platform guarantees a fast and developer-friendly environment. In this publication, as a contribution, we provide all the necessary information to develop a myDATA-like platform in a fast, economical, and functional way using the WordPress CMS. Our contribution also contains an analysis of the minimum necessary set of myDATA services needed to perform its basic functionalities, a description of the corresponding relational database model that must be implemented to provide the same functionality as the myDATA platform, and an analysis of available methods to quickly create the necessary forms and services. In addition, we study how to develop Artificial Intelligence mechanisms, with a success rate reaching up to 90%, for automatic tax violation detection algorithms.
(This article belongs to the Collection Feature Papers in Evolutionary Algorithms and Machine Learning)
17 pages, 775 KiB  
Article
A Multi-Objective Bio-Inspired Optimization for Voice Disorders Detection: A Comparative Study
by Maria Habib, Victor Vicente-Palacios and Pablo García-Sánchez
Algorithms 2025, 18(6), 338; https://doi.org/10.3390/a18060338 - 4 Jun 2025
Abstract
As early detection of voice disorders can significantly improve patients’ situation, automated detection using Artificial Intelligence techniques can be crucial in various applications in this scope. This paper introduces a multi-objective, bio-inspired, AI-based optimization approach for the automated detection of voice disorders. Different multi-objective evolutionary algorithms (the Non-dominated Sorting Genetic Algorithm (NSGA-II), the Strength Pareto Evolutionary Algorithm (SPEA-II), and the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D)) have been compared for detecting voice disorders by optimizing two conflicting objectives: error rate and the number of features. The optimization problem has been formulated as a wrapper-based algorithm for feature selection and multi-objective optimization relying on four machine learning algorithms: the K-Nearest Neighbour algorithm (KNN), Random Forest (RF), Multilayer Perceptron (MLP), and Support Vector Machine (SVM). Three publicly available voice disorder datasets have been utilized, and results have been compared based on Inverted Generational Distance, Hypervolume, spacing, and spread. The results reveal that NSGA-II with the MLP algorithm attained the best convergence and performance. Further, conformal prediction is leveraged to quantify uncertainty in the feature-selected models, ensuring statistically valid confidence intervals for predictions.
3 pages, 126 KiB  
Editorial
Machine Learning Algorithms for Biomedical Image Analysis and Their Applications
by Francesco Prinzi, Ines Prata Machado and Carmelo Militello
Algorithms 2025, 18(6), 337; https://doi.org/10.3390/a18060337 - 4 Jun 2025
Abstract
In recent years, architectural and algorithmic innovations in machine learning have revolutionized the analysis of medical images [...]
12 pages, 209 KiB  
Article
Navigating Stakeholders Perspectives on Artificial Intelligence in Higher Education
by Aleida Chavarria, Ramon Palau and Raúl Santiago
Algorithms 2025, 18(6), 336; https://doi.org/10.3390/a18060336 - 3 Jun 2025
Abstract
As artificial intelligence (AI) becomes increasingly integrated into higher education, understanding perceptions across different demographic groups is essential for its effective implementation. This study examines attitudes toward AI among students, lecturers, and academic staff, considering factors such as gender, age, occupation, academic discipline, ethical concerns, and experience level. The findings indicate that while overall perceptions of AI in education are positive, concerns about ethics and uncertainty regarding its role persist. Gender and age differences in AI perceptions are minimal, though female students, educators, and individuals in humanities disciplines express slightly higher ethical concerns. Teachers exhibit greater skepticism, emphasizing the need for transparency, ethical guidelines, and training to build trust. The study also highlights the influence of AI experience on perceptions: frequent users tend to have a more positive outlook, whereas those with advanced expertise engage with AI more selectively, suggesting a shift toward intentional and strategic use.
(This article belongs to the Special Issue Artificial Intelligence Algorithms and Generative AI in Education)
16 pages, 649 KiB  
Article
Adapted B-Spline Quasi-Interpolation for Approximating Piecewise Smooth Functions
by David Levin and Nira Gruberger
Algorithms 2025, 18(6), 335; https://doi.org/10.3390/a18060335 - 3 Jun 2025
Abstract
We address the challenge of efficiently approximating piecewise smooth functions, particularly those with jump discontinuities. Given function values on a uniform grid over a domain Ω in R^d, we present a novel B-spline-based approximation framework, using new adaptable quasi-interpolation operators. This approach integrates discontinuity detection techniques, allowing the quasi-interpolation operator to selectively use points from only one side of a discontinuity in both one- and two-dimensional cases. Among a range of candidate operators, the most suitable quasi-interpolation scheme is chosen to ensure high approximation accuracy and efficiency, while effectively suppressing spurious oscillations in the vicinity of discontinuities.
(This article belongs to the Special Issue Nonsmooth Optimization and Its Applications)
18 pages, 6973 KiB  
Article
Adaptive Grid Generation by Solving One-Dimensional Diffusion Equation Using Physics-Informed Neural Networks
by Olzhas Turar, Maksat Mustafin and Darkhan Akhmed-Zaki
Algorithms 2025, 18(6), 334; https://doi.org/10.3390/a18060334 - 3 Jun 2025
Abstract
In this work, the construction of adaptive one-dimensional computational grids is considered using a numerical method and physics-informed neural networks (PINNs). The grid adaptation process is described by the diffusion equation, which allows for the redistribution of nodes based on a control function. The numerical method employs an iterative scheme that adjusts the node positions according to the solution gradients, ensuring local refinement in key regions. In the PINN approach, the governing equation is incorporated into the loss function, enabling the neural network model to generate the grid based on physical constraints. To evaluate the performance of both methods, tests are conducted with different control function parameters, and the influence of diffusion coefficients on grid adaptation is analyzed. The results show that the numerical method leads to sharper variations in node spacing, while PINNs produce a smoother grid distribution. A comparative analysis of the deviations between the methods is performed. The obtained data allow for assessing the characteristics of grid adaptation in both approaches and for identifying possible directions for their further application in numerical modeling tasks.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
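The numerical route can be sketched as relaxation of the steady 1D diffusion (equidistribution) equation (w(x) x_ξ)_ξ = 0 on a uniform computational coordinate ξ, which clusters nodes where the control function w is large; the control function and grid size below are illustrative, not the paper's test cases.

```python
import numpy as np

def adapt_grid(w, n=41, iters=2000, tol=1e-10):
    """Relax (w(x) x_xi)_xi = 0: each interior node moves to the weighted
    average of its neighbours, with w evaluated at cell midpoints. At the
    fixed point, w * spacing is constant, so spacing ~ 1/w."""
    x = np.linspace(0.0, 1.0, n)
    for _ in range(iters):
        wm = w(0.5 * (x[:-1] + x[1:]))                 # w at cell midpoints
        new = (wm[:-1] * x[:-2] + wm[1:] * x[2:]) / (wm[:-1] + wm[1:])
        done = np.max(np.abs(new - x[1:-1])) < tol
        x[1:-1] = new                                  # endpoints stay fixed
        if done:
            break
    return x

# Control function peaked at x = 0.5 clusters nodes around that point.
grid = adapt_grid(lambda x: 1.0 + 20.0 * np.exp(-200 * (x - 0.5) ** 2))
print("spacings near the edge:", np.round(np.diff(grid)[:3], 4))
print("spacings near x = 0.5: ", np.round(np.diff(grid)[19:22], 4))
```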
29 pages, 3354 KiB  
Article
Enhancing Heart Attack Prediction: Feature Identification from Multiparametric Cardiac Data Using Explainable AI
by Muhammad Waqar, Muhammad Bilal Shahnawaz, Sajid Saleem, Hassan Dawood, Usman Muhammad and Hussain Dawood
Algorithms 2025, 18(6), 333; https://doi.org/10.3390/a18060333 - 2 Jun 2025
Abstract
Heart attack is a leading cause of mortality, necessitating timely and precise diagnosis to improve patient outcomes. However, timely diagnosis remains a challenge due to the complex and nonlinear relationships between clinical indicators. Machine learning (ML) and deep learning (DL) models have the potential to predict cardiac conditions by identifying complex patterns within data, but their “black-box” nature restricts interpretability, making it challenging for healthcare professionals to comprehend the reasoning behind predictions. This lack of interpretability limits their clinical trust and adoption. The proposed approach addresses this limitation by integrating predictive modeling with Explainable AI (XAI) to ensure both accuracy and transparency in clinical decision-making. The proposed study enhances heart attack prediction using the University of California, Irvine (UCI) dataset, which includes various heart analysis parameters collected through electrocardiogram (ECG) sensors, blood pressure monitors, and biochemical analyzers. Due to class imbalance, the Synthetic Minority Over-sampling Technique (SMOTE) was applied to enhance the representation of the minority class. After preprocessing, various ML algorithms were employed, among which Artificial Neural Networks (ANN) achieved the highest performance with 96.1% accuracy, 95.7% recall, and 95.7% F1-score. To enhance the interpretability of ANN, two XAI techniques, specifically SHapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME), were utilized. This study incrementally benchmarks SMOTE, ANN, and XAI techniques such as SHAP and LIME on standardized cardiac datasets, emphasizing clinical interpretability and providing a reproducible framework for practical healthcare implementation. These techniques enable healthcare practitioners to understand the model’s decisions, identify key predictive features, and enhance clinical judgment. By bridging the gap between AI-driven performance and practical medical implementation, this work contributes to making heart attack prediction both highly accurate and interpretable, facilitating its adoption in real-world clinical settings.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
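A minimal end-to-end sketch of the pipeline shape on stand-in data: SMOTE rebalances the training split, an MLP (a small ANN) is fit, and a model-agnostic importance measure highlights influential features. The study itself uses the UCI heart data and SHAP/LIME for the explanation step; permutation importance is substituted here only to keep the sketch dependency-light.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in data with a 9:1 class imbalance (the UCI heart data is not bundled).
X, y = make_classification(n_samples=1000, n_features=13, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # oversample minority class
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0))
clf.fit(X_bal, y_bal)
print("test accuracy:", clf.score(X_te, y_te))

# Model-agnostic feature attribution; the paper itself uses SHAP and LIME.
imp = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
print("top features:", imp.importances_mean.argsort()[::-1][:5])
```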
15 pages, 592 KiB  
Article
Kriging-Based Variable Screening Method for Aircraft Optimization Problems with Expensive Functions
by Yadong Wang, Xinyao Duan, Jiang Wang, Jin Guo and Minglei Han
Algorithms 2025, 18(6), 332; https://doi.org/10.3390/a18060332 - 2 Jun 2025
Abstract
The computational complexity of airfoil optimization for aircraft wing designs typically involves high-dimensional parameter spaces defined by geometric variables, where each Computational Fluid Dynamics (CFD) simulation cycle may require significant processing resources. Therefore, performing variable selection to identify influential inputs becomes crucial for minimizing the number of necessary model evaluations, particularly when dealing with complex systems exhibiting nonlinear and poorly understood input–output relationships. As a result, it is desirable to use fewer samples to determine the influential inputs, yielding a simpler, more efficient optimization process. This article provides a systematic, novel approach to solving aircraft optimization problems. Initially, a Kriging-based variable screening method (KRG-VSM) is proposed to determine the active inputs using a likelihood-based screening method, and new stopping criteria for KRG-VSM are proposed and discussed. A genetic algorithm (GA) is employed to achieve the global optimum of the log-likelihood function. Subsequently, the airfoil optimization is conducted using the identified active design variables. According to the results, the Kriging-based variable screening method can select all the active inputs with a few samples. The Kriging-based variable screening method is then tested on numerical benchmarks and applied to the airfoil aerodynamic optimization problem. Applying the variable screening technique can enhance the efficiency of the airfoil optimization process with acceptable accuracy.
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
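A related screening heuristic, sketched with scikit-learn: fit a Kriging (Gaussian process) model with one ARD length-scale per input by maximizing the likelihood, then rank inputs by their fitted length-scales (a short scale signals strong influence). The paper's KRG-VSM instead screens via likelihood comparisons with GA-driven optimization and dedicated stopping criteria, which this sketch does not reproduce.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)
X = rng.uniform(size=(60, 6))                          # 6 candidate design variables
y = np.sin(3 * X[:, 0]) + 2 * X[:, 2] ** 2 \
    + 0.01 * rng.standard_normal(60)                   # only x0 and x2 are active

# One RBF length-scale per input dimension (ARD); hyperparameters are set by
# maximizing the marginal likelihood inside fit().
kernel = ConstantKernel() * RBF(length_scale=np.ones(6))
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True,
                              n_restarts_optimizer=5, random_state=0).fit(X, y)

ls = gp.kernel_.k2.length_scale                        # fitted ARD length-scales
print("activity ranking (most active first):", np.argsort(ls))
```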