Algorithms, Volume 18, Issue 5 (May 2025) – 57 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
31 pages, 2956 KiB  
Article
A Heuristics-Guided Simplified Discrete Harmony Search Algorithm for Solving 0-1 Knapsack Problem
by Fuyuan Zheng, Kanglong Cheng, Kai Yang, Ning Li, Yu Lin and Yiwen Zhong
Algorithms 2025, 18(5), 295; https://doi.org/10.3390/a18050295 - 19 May 2025
Abstract
The harmony search (HS) algorithm is a novel metaheuristic which has been widely used to solve both continuous and discrete optimization problems. To improve the performance and simplify the implementation of the HS algorithm for solving the 0-1 knapsack problem (0-1KP), this paper proposes a heuristics-guided simplified discrete harmony search (SDHS) algorithm which does not use a random search operator and has only one intrinsic parameter, the harmony memory size. The SDHS algorithm uses a memory consideration operator to construct a feasible solution, which is then further enhanced by a solution-level pitch adjustment operator. Two heuristics, the profit–weight ratio of an item and the profit of an item, are used to greedily guide the memory consideration operator and the solution-level pitch adjustment operator, respectively. In the memory consideration operator, items assigned from the harmony memory are considered in non-ascending order of profit–weight ratio. In the solution-level pitch adjustment operator, items not yet in the knapsack are tried in non-ascending order of profit. The SDHS algorithm outperforms several state-of-the-art algorithms, with an average improvement of 0.55% in solution quality on large problem instances. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
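As an illustration of the two heuristics described above, the following sketch (hypothetical, not the authors' code) builds a knapsack solution by copying item bits from a harmony memory in non-ascending profit–weight-ratio order and then greedily adds unselected items in non-ascending profit order; the instance data are invented.

```python
import random

def construct_and_adjust(profits, weights, capacity, memory):
    """memory: list of previous solutions (bit lists) acting as the harmony memory."""
    n = len(profits)
    # Memory consideration: copy each item's bit from a random memory member,
    # visiting items in non-ascending profit/weight ratio and respecting capacity.
    solution, load = [0] * n, 0
    for i in sorted(range(n), key=lambda i: profits[i] / weights[i], reverse=True):
        if random.choice(memory)[i] and load + weights[i] <= capacity:
            solution[i], load = 1, load + weights[i]
    # Solution-level pitch adjustment: try to add unselected items
    # in non-ascending order of profit.
    for i in sorted(range(n), key=lambda i: profits[i], reverse=True):
        if not solution[i] and load + weights[i] <= capacity:
            solution[i], load = 1, load + weights[i]
    return solution, sum(p for p, s in zip(profits, solution) if s)

profits, weights, capacity = [10, 7, 5, 3], [4, 3, 2, 1], 6   # invented toy instance
memory = [[1, 0, 1, 0], [0, 1, 0, 1]]
print(construct_and_adjust(profits, weights, capacity, memory))
```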
25 pages, 5349 KiB  
Review
The Scientific Landscape of Hyper-Heuristics: A Bibliometric Analysis Based on Scopus
by Helen C. Peñate-Rodríguez, Gilberto Rivera, J. Patricia Sánchez-Solís and Rogelio Florencia
Algorithms 2025, 18(5), 294; https://doi.org/10.3390/a18050294 - 19 May 2025
Abstract
Hyper-heuristics emerged as a broader metaheuristic framework to address the limitations of traditional optimization heuristics. By abstracting the design of low-level heuristics, hyper-heuristics offer a flexible and adaptable approach to solving complex problems. This study conducts a bibliometric analysis of the hyper-heuristic-algorithms-related literature indexed in the Scopus database to map its evolution, identify key research trends, and pinpoint influential authors and journals. The study encompasses document growth over time, predominant author keywords, high-impact journals, and prolific authors ranked by publication count and citation impact. A detailed examination of author keywords unveils the core research themes within the hyper-heuristic domain. The findings of this study provide valuable insights into the current literature in hyper-heuristic research and offer guidance for novice and experienced researchers. Full article
18 pages, 1392 KiB  
Article
A Simulation of Contact Graph Routing for Mars–Earth Data Communication
by Basuki Suhardiman, Kuntjoro Adji Sidarto and Novriana Sumarti
Algorithms 2025, 18(5), 293; https://doi.org/10.3390/a18050293 - 19 May 2025
Abstract
In this study, we develop a simulation of Contact Graph Routing (CGR) for data communication between Mars, Earth, and relay satellites. Because the distances from the relay satellites to Mars and to Earth are constantly changing, there are specific contact windows between NASA's Mars rovers and orbiting relay satellites, and between these relay satellites and NASA's global system of antennas on Earth. Link propagation introduces long communication delays, so a Delay Tolerant Network (DTN) is needed to route data among the nodes (satellites and antennas) by storing data and forwarding it whenever a contact window is open. We construct an efficient algorithm for CGR which puts all objects into a general framework of numbered nodes, so that we can easily develop other applications of networks with a larger number of nodes. Simulated data are generated randomly to mimic the unpredictable data volumes that are sent from Mars to Earth. We construct several cases involving delivering data over one Martian day, and the simulation performs well in carrying, storing, and forwarding data from Mars to Earth, even though the relay satellites are unable to contact Earth for periods of time. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
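The store-and-forward idea behind CGR can be illustrated with an earliest-arrival-time search over a contact plan. The sketch below is a simplified stand-in for the paper's algorithm; node names, contact windows, and one-way light times are invented.

```python
import heapq

def earliest_arrival(contacts, source, dest, t0):
    """contacts: list of (from_node, to_node, t_start, t_end, one_way_light_time)."""
    best = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if node == dest:
            return t
        for u, v, ts, te, owlt in contacts:
            if u != node:
                continue
            start = max(t, ts)              # wait (store) until the window opens
            if start > te:
                continue                    # window already closed
            arrival = start + owlt          # forward across the link
            if arrival < best.get(v, float("inf")):
                best[v] = arrival
                heapq.heappush(heap, (arrival, v))
    return None

# rover -> relay -> Earth antenna; times in minutes, OWLT ~ Mars-Earth delay
plan = [("rover", "relay", 0, 30, 0.01), ("relay", "earth", 60, 120, 12.0)]
print(earliest_arrival(plan, "rover", "earth", t0=5))  # -> 72.0
```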
14 pages, 1771 KiB  
Article
Adjustment Algorithm for Free Station Control Network of Ultra-Large Deepwater Jacket
by Xianyang Yang, Wei Shu, Huoping Wang, Haifeng Li, Yi Wang, Di Zhang, Jiayu Liu, Deyang Wang and Wangsui Xiao
Algorithms 2025, 18(5), 292; https://doi.org/10.3390/a18050292 - 19 May 2025
Abstract
The offshore oil engineering jacket is a giant super-heavy steel frame structure with dimensions in the hundreds of meters. A high-precision free station control network is usually arranged around it to ensure construction accuracy. However, as the jacket is gradually assembled, its extreme weight causes widespread deformation of the surrounding ground surface, and each control point may be affected to a different degree, resulting in non-uniform deformation of the entire network. When adjusting the control network in subsequent phases, if the same starting points as in the first phase are chosen without careful analysis, their non-uniform deformation will degrade the accuracy of the whole network. Considering the particularities of the free station control network, this paper proposes an adjustment algorithm consisting of a three-step analytical method. First, the initial coordinates of the points in the current phase are obtained through classical free network adjustment; second, stable and unstable points are identified via a coordinate similarity transformation between the current phase and the first phase; and finally, quasi-stable adjustment is conducted. Experimental data analysis of a jacket control network shows that this method can effectively identify stable and unstable points, thereby ensuring construction accuracy and jacket stability. Full article
(This article belongs to the Special Issue Algorithms and Application for Spatiotemporal Data Processing)
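The stable-point identification step can be illustrated with a 2D similarity (Helmert) transformation fitted between epochs, flagging the point with the largest residual as a candidate unstable point. The sketch below uses synthetic coordinates (in meters) and is not the paper's three-step procedure.

```python
import numpy as np

def helmert_2d(xy_ref, xy_cur):
    """Least-squares fit of X = a*x - b*y + tx, Y = b*x + a*y + ty."""
    A, l = [], []
    for (x, y), (X, Y) in zip(xy_ref, xy_cur):
        A += [[x, -y, 1, 0], [y, x, 0, 1]]
        l += [X, Y]
    a, b, tx, ty = np.linalg.lstsq(np.array(A), np.array(l), rcond=None)[0]
    return lambda p: np.array([a * p[0] - b * p[1] + tx, b * p[0] + a * p[1] + ty])

ref = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], float)
cur = ref + [[0.001, 0.002], [0.002, 0.001], [0.080, 0.060], [0.001, 0.002]]  # point 2 moved
fit = helmert_2d(ref, cur)
residuals = np.linalg.norm(np.array([fit(p) for p in ref]) - cur, axis=1)
print(residuals.round(3), "candidate unstable point:", int(residuals.argmax()))
```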
23 pages, 4463 KiB  
Article
Dual-Priority Delayed Deep Double Q-Network (DPD3QN): A Dueling Double Deep Q-Network with Dual-Priority Experience Replay for Autonomous Driving Behavior Decision-Making
by Shuai Li, Peicheng Shi, Aixi Yang, Heng Qi and Xinlong Dong
Algorithms 2025, 18(5), 291; https://doi.org/10.3390/a18050291 - 19 May 2025
Abstract
The behavior decision control of autonomous vehicles is a critical aspect of advancing autonomous driving technology. However, current behavior decision algorithms based on deep reinforcement learning still face several challenges, such as insufficient safety and sparse reward mechanisms. To solve these problems, this paper proposes a dueling double deep Q-network based on dual-priority experience replay—DPD3QN. Initially, the dueling network is integrated with the double deep Q-network, and the original network's output layer is restructured to enhance the precision of action value estimation. Subsequently, dual-priority experience replay is incorporated to facilitate the model's ability to swiftly recognize and leverage critical experiences. Finally, training and evaluation are conducted on the OpenAI Gym simulation platform. The test results show that DPD3QN helps to improve the convergence speed of driverless vehicle behavior decision-making. Compared with the currently popular DQN and DDQN algorithms, this algorithm achieves higher success rates in challenging scenarios: the success rate in test scenario I increases by 11.8 and 25.8 percentage points, respectively, and in test scenario II it rises by 8.8 and 22.2 percentage points, respectively, indicating a more secure and efficient autonomous driving decision-making capability. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
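For readers unfamiliar with the building blocks, the sketch below shows a generic dueling Q-head and a double-DQN target in PyTorch; it illustrates those components only, not DPD3QN itself, and omits the dual-priority replay. Dimensions are arbitrary.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)           # state value V(s)
        self.adv = nn.Linear(hidden, n_actions)     # advantages A(s, a)

    def forward(self, obs):
        h = self.trunk(obs)
        v, a = self.value(h), self.adv(h)
        return v + a - a.mean(dim=1, keepdim=True)  # Q(s,a) = V + A - mean(A)

def double_dqn_target(online, target, reward, next_obs, done, gamma=0.99):
    # Online net selects the next action, target net evaluates it.
    next_a = online(next_obs).argmax(dim=1, keepdim=True)
    next_q = target(next_obs).gather(1, next_a).squeeze(1)
    return reward + gamma * (1.0 - done) * next_q

online, target = DuelingQNet(4, 3), DuelingQNet(4, 3)
obs = torch.randn(8, 4)
y = double_dqn_target(online, target, torch.zeros(8), obs, torch.zeros(8))
print(y.shape)  # torch.Size([8])
```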
18 pages, 4535 KiB  
Article
Quantifying Intra- and Inter-Observer Variabilities in Manual Contours for Radiotherapy: Evaluation of an MR Tumor Autocontouring Algorithm for Liver, Prostate, and Lung Cancer Patients
by Gawon Han, Arun Elangovan, Jordan Wong, Asmara Waheed, Keith Wachowicz, Nawaid Usmani, Zsolt Gabos, Jihyun Yun and B. Gino Fallone
Algorithms 2025, 18(5), 290; https://doi.org/10.3390/a18050290 - 19 May 2025
Abstract
Real-time tumor-tracked radiotherapy with a linear accelerator-magnetic resonance (linac-MR) hybrid system requires accurate tumor delineation at a fast MR imaging rate. Various autocontouring methods have been previously evaluated against “gold standard” manual contours by experts. However, manually drawn contours have inherent intra- and inter-observer variations. We aim to quantify these variations and evaluate our tumor-autocontouring algorithm against the manual contours. Ten liver, ten prostate, and ten lung cancer patients were scanned using a 3 tesla (T) magnetic resonance imaging (MRI) scanner with a 2D balanced steady-state free precession (bSSFP) sequence at 4 frames/s. Three experts manually contoured the tumor in two sessions. For autocontouring, an in-house built U-Net-based autocontouring algorithm was used, whose hyperparameters were optimized for each patient, expert, and session (PES). For evaluation, (A) Automatic vs. Manual and (B) Manual vs. Manual contour comparisons were performed. For (A) and (B), three types of comparisons were performed: (a) same expert same session, (b) same expert different session, and (c) different experts, using Dice coefficient (DC), centroid displacement (CD), and the Hausdorff distance (HD). For (A), the algorithm was trained using one expert’s contours and its autocontours were compared to contours from (a)–(c). For Automatic vs. Manual evaluations (Aa–Ac), DC = 0.91, 0.86, 0.78, CD = 1.3, 1.8, 2.7 mm, and HD = 3.1, 4.6, 7.0 mm averaged over 30 patients were achieved, respectively. For Manual vs. Manual evaluations (Ba–Bc), DC = 1.00, 0.85, 0.77, CD = 0.0, 2.1, 2.8 mm, and HD = 0.0, 4.9, 7.2 mm were achieved, respectively. We have quantified the intra- and inter-observer variations in manual contouring of liver, prostate, and lung patients. Our PES-specific optimized algorithm generated autocontours with agreement levels comparable to these manual variations, but with high efficiency (54 ms/autocontour vs. 9 s/manual contour). Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (3rd Edition))
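Two of the agreement metrics used in the study, the Dice coefficient and centroid displacement, can be computed on binary masks as in the following toy sketch (illustrative only, not the authors' evaluation code).

```python
import numpy as np

def dice(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def centroid_displacement(mask_a, mask_b, pixel_mm=1.0):
    ca = np.array(np.nonzero(mask_a)).mean(axis=1)
    cb = np.array(np.nonzero(mask_b)).mean(axis=1)
    return np.linalg.norm(ca - cb) * pixel_mm

a = np.zeros((64, 64), bool); a[20:40, 20:40] = True   # "manual" contour (toy)
b = np.zeros((64, 64), bool); b[22:42, 21:41] = True   # "automatic" contour (toy)
print(round(dice(a, b), 3), round(centroid_displacement(a, b), 2))
```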
15 pages, 2084 KiB  
Article
Analysis of Short Texts Using Intelligent Clustering Methods
by Jamalbek Tussupov, Akmaral Kassymova, Ayagoz Mukhanova, Assyl Bissengaliyeva, Zhanar Azhibekova, Moldir Yessenova and Zhanargul Abuova
Algorithms 2025, 18(5), 289; https://doi.org/10.3390/a18050289 - 19 May 2025
Abstract
This article presents a comprehensive review of short text clustering using state-of-the-art methods: Bidirectional Encoder Representations from Transformers (BERT), Term Frequency-Inverse Document Frequency (TF-IDF), and the novel hybrid method Latent Dirichlet Allocation + BERT + Autoencoder (LDA + BERT + AE). The article begins by outlining the theoretical foundation of each technique and its merits and limitations. BERT is examined for its ability to capture word dependencies in text, while TF-IDF is noted for its usefulness in assessing term importance. The experimental section compares the efficacy of these methods in clustering short texts, with a specific focus on the hybrid LDA + BERT + AE approach. A detailed examination of the LDA-BERT model's training and validation loss over 200 epochs shows that the loss values start above 1.2, quickly decrease to around 0.8 within the first 25 epochs, and eventually stabilize at approximately 0.4. The close alignment of these curves indicates effective learning and generalization, with minimal overfitting. The study demonstrates that the hybrid LDA + BERT + AE method significantly enhances text clustering quality compared to the individual methods. Based on the findings, the study recommends how to choose and use clustering methods for different kinds of short texts and natural language processing tasks. The applications of these methods in industrial and educational settings, where effective text handling and categorization are critical, are also addressed. The study ends by emphasizing the importance of the holistic handling of short texts for deeper semantic comprehension and effective information retrieval. Full article
(This article belongs to the Section Databases and Data Structures)
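A rough, illustrative version of the LDA + BERT + AE pipeline is sketched below: topic proportions from LDA are concatenated with sentence embeddings (a random placeholder stands in for BERT vectors), compressed by a small autoencoder, and clustered. It is a conceptual stand-in, not the authors' implementation, and the texts are invented.

```python
import numpy as np
import torch, torch.nn as nn
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

texts = ["cheap flights to rome", "rome hotel deals", "python list sort", "sort a python dict"]
counts = CountVectorizer().fit_transform(texts)
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)
bert_like = np.random.RandomState(0).randn(len(texts), 8)     # placeholder for BERT vectors
fused = torch.tensor(np.hstack([topics, bert_like]), dtype=torch.float32)

ae = nn.Sequential(nn.Linear(fused.shape[1], 4), nn.ReLU(), nn.Linear(4, fused.shape[1]))
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
for _ in range(200):                                           # reconstruct the fused vectors
    opt.zero_grad(); loss = ((ae(fused) - fused) ** 2).mean(); loss.backward(); opt.step()
latent = ae[:2](fused).detach().numpy()                        # encoder half as latent features
print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(latent))
```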
23 pages, 1894 KiB  
Article
Innovative Control Techniques for Enhancing Signal Quality in Power Applications: Mitigating Electromagnetic Interference
by N. Manoj Kumar, Yousef Farhaoui, R. Vimala, M. Anandan, M. Aiswarya and A. Radhika
Algorithms 2025, 18(5), 288; https://doi.org/10.3390/a18050288 - 18 May 2025
Abstract
Electromagnetic interference (EMI) remains a difficult challenge in the design and operation of contemporary power electronic systems, especially in applications where signal quality has a direct impact on overall performance and efficiency. Conventional control schemes developed to counteract the effects of EMI generally suffer from greater design complexity, higher error rates, poor control accuracy, and large amounts of harmonic distortion. To overcome these constraints, this paper introduces an intelligent and advanced control approach founded on the signal randomization principle. The suggested approach controls the switching activity of a DC–DC converter through dynamically tuned parameters such as duty cycle, switching frequency, and signal modulation. A boost interleaved topology is utilized to maximize the current distribution and minimize ripple, and an innovative space vector-dithered sigma delta modulation (SV-DiSDM) scheme is proposed for cancelling harmonics via a digitalized control action. The modulation scheme effectively distributes the harmonic energy across a larger range of frequencies, largely eliminating EMI and boosting the stability of the system. Performance analysis is conducted using significant measures such as total harmonic distortion (THD), switching frequency deviation, switching loss, and distortion product. Verification against conventional control models confirms the increased efficiency, lower EMI, and greater signal integrity of the proposed method, making it a viable alternative for EMI-aware power electronics applications. Full article
(This article belongs to the Special Issue Emerging Trends in Distributed AI for Smart Environments)
24 pages, 6317 KiB  
Article
Generation of Realistic Synthetic Load Profile Based on the Markov Chains Theory: Methodology and Case Studies
by Irena Valova, Katerina G. Gabrovska-Evstatieva, Tsvetelina Kaneva and Boris I. Evstatiev
Algorithms 2025, 18(5), 287; https://doi.org/10.3390/a18050287 - 17 May 2025
Abstract
Digital energy systems rely on actual data about power consumption and generation, which are not always available and, in certain situations, can be replaced with synthetic forms. This study presents a methodology for generating synthetic time-series data of electrical power consumers. It is based on the Markov chains theory, and unlike previous studies, the data are divided into hourly and hour-change monthly records, which leads to the generation of 48 transition matrices for each month. This study aimed to ensure statistical and probabilistic similarity between the original and synthetic data, which was assessed using the Frobenius distance, the coefficient of determination, variance, and standard deviation. The methodology was applied to three load profiles obtained from different types of consumers—domestic, agricultural, and industrial. In all three cases, the statistical and probabilistic characteristics of the generated data were very similar to those of the original datasets; however, the visual comparison showed that it is recommended to increase the number of states to lower the data scattering. Based on the results, recommendations are proposed on choosing the number of states for the transition matrices to optimize the statistical and probabilistic similarity. The described methodology can be used by experts involved in the design of systems with renewable energy sources and by scientists dealing with long-term studies. Full article
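The core Markov-chain mechanism, estimating a transition matrix from a discretized load series and sampling a synthetic series from it, can be sketched as follows; the series, state count, and binning are invented and do not reproduce the paper's hourly and hour-change monthly matrices.

```python
import numpy as np

def fit_transition_matrix(states, n_states):
    T = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        T[a, b] += 1
    T += 1e-9                                    # avoid empty rows
    return T / T.sum(axis=1, keepdims=True)

def sample_chain(T, start, length, rng=np.random.default_rng(0)):
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(len(T), p=T[out[-1]]))
    return np.array(out)

rng = np.random.default_rng(1)
load = np.clip(np.cumsum(rng.normal(0, 0.1, 24 * 30)) + 2.0, 0.5, 4.0)  # toy hourly kW series
n_states = 8
edges = np.linspace(load.min(), load.max(), n_states + 1)
states = np.clip(np.digitize(load, edges) - 1, 0, n_states - 1)
T = fit_transition_matrix(states, n_states)
synthetic_states = sample_chain(T, states[0], len(states))
print(T.shape, synthetic_states[:10])
```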
15 pages, 28684 KiB  
Article
Efficient Expiration Date Recognition in Food Packages for Mobile Applications
by Hao Peng, Juan Bayon, Joaquin Recas and Maria Guijarro
Algorithms 2025, 18(5), 286; https://doi.org/10.3390/a18050286 - 15 May 2025
Abstract
The manuscript introduces an innovative framework for expiration date recognition aimed at improving accessibility for visually impaired individuals. The study underscores the pivotal role of convolutional neural networks (CNNs) in addressing complex challenges, such as variations in typography and image degradation. The system attained an F1-score of 0.9303 for the detection task and an accuracy of 97.06% for the recognition model, with a total inference time of 63 milliseconds on a single GeForce GTX 1080 GPU. A comparative analysis of quantized models—FP32, FP16, and INT8—emphasizes the trade-offs in inference speed, energy efficiency, and accuracy on mobile devices. The experimental results indicate that the FP16 model operating in CPU mode achieves an optimal equilibrium between precision and energy consumption, underscoring its suitability for resource-constrained environments. Full article
(This article belongs to the Collection Feature Papers in Evolutionary Algorithms and Machine Learning)
19 pages, 3576 KiB  
Article
Development of a Model for Soil Salinity Segmentation Based on Remote Sensing Data and Climate Parameters
by Gulzira Abdikerimova, Dana Khamitova, Akmaral Kassymova, Assyl Bissengaliyeva, Gulsara Nurova, Murat Aitimov, Yerlan Alimzhanovich Shynbergenov, Moldir Yessenova and Roza Bekbayeva
Algorithms 2025, 18(5), 285; https://doi.org/10.3390/a18050285 - 14 May 2025
Abstract
The paper presents a hybrid machine learning model for the spatial segmentation of soils by salinity using multispectral satellite data from Sentinel-2 and climate parameters of the ERA5-Land model. The proposed method aims to solve the problem of accurate soil cover segmentation under climate change and high spatial heterogeneity of data. The approach includes the sequential application of unsupervised learning algorithms (K-Means, hierarchical clustering, DBSCAN), the XGBoost model, and a multitasking neural network that performs simultaneous classification and regression. At the first stage, pseudo-labels are formed using K-Means, then a probabilistic assessment of object membership in classes and ensemble voting of clustering algorithms are carried out. The final model is trained on an extended feature space and demonstrates improved results compared to traditional approaches. Experiments on a sample of 33,624 observations (23,536—training sample, 10,088—test sample) showed an increase in the Silhouette Score value from 0.7840 to 0.8156 and a decrease in the Davies–Bouldin Score from 0.3567 to 0.3022. The classification accuracy was 99.99%, with only one error in more than 10,000 test objects. The results confirmed the proposed method’s high efficiency and applicability for remote monitoring, environmental analysis, and sustainable land management. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
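The pseudo-labelling step can be illustrated as below: K-Means produces pseudo-classes on synthetic feature vectors, and a boosted-tree classifier is trained on them. sklearn's GradientBoostingClassifier stands in for XGBoost here, and the data and class count are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, silhouette_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, (300, 6)) for m in (0.0, 1.5, 3.0)])  # toy "band" features

pseudo = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)  # pseudo-labels
print("silhouette:", round(silhouette_score(X, pseudo), 3))

X_tr, X_te, y_tr, y_te = train_test_split(X, pseudo, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```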
29 pages, 1821 KiB  
Article
Learning Analytics in a Non-Linear Virtual Course
by Jhon Mercado, Carlos Mendoza-Cardenas, Luis Fletscher and Natalia Gaviria-Gomez
Algorithms 2025, 18(5), 284; https://doi.org/10.3390/a18050284 - 13 May 2025
Abstract
Researchers have extensively explored learning analytics in online courses, primarily focusing on linear course structures where students progress sequentially through lessons and assessments. However, non-linear courses, which allow students to complete tasks in any order, present unique challenges for learning analytics due to the variability in course progression among students. This study proposes a method for applying learning analytics to non-linear, self-paced MOOC-style courses, addressing early performance prediction and online learning pattern detection. The novelty of our approach lies in introducing a personalized feature aggregation that adapts to each student’s progress rather than being defined at fixed timelines. We evaluated three types of features—engagement, behavior, and performance—using data from a non-linear large-scale Moodle course designed to prepare high school students for a public university entrance exam. Our approach predicted early student performance, achieving an F1-score of 0.73 at a 20% cumulative weight assessment. Feature importance analysis revealed that performance and behavior were the strongest predictors, while engagement features, such as time spent on educational resources, also played a significant role. In addition to performance prediction, we conducted a clustering analysis that identified four distinct online learning patterns recurring across various cumulative weight assessments. These patterns provide valuable insights into student behavior and performance and have practical implications, enabling educators to deliver more personalized feedback and targeted interventions to meet individual student needs. Full article
(This article belongs to the Collection Feature Papers in Algorithms for Multidisciplinary Applications)
23 pages, 5766 KiB  
Article
Modeling of Global and Individual Kinetic Parameters in Wheat Straw Torrefaction: Particle Swarm Optimization and Its Impact on Elemental Composition Prediction
by Ismael Urbina-Salas, David Granados-Lieberman, Martín Valtierra-Rodríguez, Claudia Adriana Ramírez-Valdespino and David Aarón Rodríguez-Alejandro
Algorithms 2025, 18(5), 283; https://doi.org/10.3390/a18050283 - 13 May 2025
Abstract
With the growing demand for sustainable energy solutions, biomass torrefaction has emerged as a crucial technology for converting agricultural waste into high-value biofuels. This work develops dual kinetic modeling with global and individual parameters, fitted using particle swarm optimization (PSO), to predict energy densification based on elemental composition (CHNO) and high heating values (HHVs). The global parameters are calculated from experiments conducted at 250 °C, 275 °C, and 300 °C, and the individual parameters are obtained by fitting the experimental points at each temperature. A two-step kinetic model was used and optimized to achieve exceptional fitting accuracy (98.073–99.999%). The experiments were carried out in an inert nitrogen atmosphere with a heating rate of 20 °C/min and a 100 min residence time. The results demonstrate a crucial trade-off: while the individual parameters provide superior accuracy (an average fit of 99.516%) for predicting degradation by weight loss, the global parameters offer better predictions of elemental composition, with average errors of 2.129% (carbon), 1.038% (hydrogen), 9.540% (nitrogen), and 3.997% (oxygen). Furthermore, by determining the kinetic parameters at a torrefaction temperature higher than the maximum peak observed in the derivative thermogravimetric (DTG) curve (275 °C), it is possible to predict the behavior of the process within the 250–325 °C range with an R-squared value corresponding to an error lower than 3%. This approach significantly reduces the number of required experiments from twelve to only four by relying on a single isothermal condition for parameter estimation. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms in Sustainability)
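PSO-based fitting of kinetic parameters can be illustrated with a bare-bones swarm that recovers the rate constant of a simple first-order mass-loss model from noisy synthetic data; the model and data below are stand-ins and do not reproduce the paper's two-step scheme.

```python
import numpy as np

def mass_loss(k, t, m_inf=0.7):                 # m(t) = m_inf + (1 - m_inf) * exp(-k t)
    return m_inf + (1.0 - m_inf) * np.exp(-k * t)

rng = np.random.default_rng(0)
t = np.linspace(0, 100, 50)
observed = mass_loss(0.05, t) + rng.normal(0, 0.002, t.size)   # synthetic TG data
sse = lambda k: np.sum((mass_loss(k, t) - observed) ** 2)

# Particle swarm over the single parameter k
pos = rng.uniform(0.001, 0.5, 30); vel = np.zeros(30)
pbest = pos.copy(); pbest_val = np.array([sse(k) for k in pos])
gbest = pbest[pbest_val.argmin()]
for _ in range(100):
    r1, r2 = rng.random(30), rng.random(30)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.001, 0.5)
    vals = np.array([sse(k) for k in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()]
print(round(float(gbest), 4))   # should recover a value close to 0.05
```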
30 pages, 2098 KiB  
Article
A Transfer Matrix Norm-Based Framework for Multimaterial Topology Optimization: Methodology and MATLAB Implementation
by Giacomo Galuppini and Paolo Venini
Algorithms 2025, 18(5), 282; https://doi.org/10.3390/a18050282 - 12 May 2025
Abstract
This paper introduces a novel method for the topology optimization of multi-material structures. The core innovation lies in minimizing an appropriate norm of the transfer matrix that links external loads to system outputs. Several formulations of structural compliance are considered within a framework that accommodates both single and multiple inputs and outputs. The multimaterial interpolation scheme follows the approach proposed by Yi and collaborators, adapted to the general topology optimization framework proposed by Venini and collaborators. While the proposed method is inherently capable of addressing the dynamic response of multimaterial structures, this study focuses exclusively on static topology optimization. The extension to dynamic topology optimization is deferred to a forthcoming study, as it requires a multimaterial interpolation of inertial properties that is currently at an advanced stage of development. Finally, a series of numerical case studies illustrates the key features of the proposed approach. Full article
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)
16 pages, 2845 KiB  
Article
HPANet: Hierarchical Path Aggregation Network with Pyramid Vision Transformers for Colorectal Polyp Segmentation
by Yuhong Ying, Haoyuan Li, Yiwen Zhong and Min Lin
Algorithms 2025, 18(5), 281; https://doi.org/10.3390/a18050281 - 11 May 2025
Abstract
The automatic segmentation of colorectal polyps in colonoscopy is considered critical for aiding physicians in real-time lesion identification and minimizing diagnostic errors such as false positives and missed lesions. Despite significant progress in existing research, accurate segmentation of colorectal polyps remains technically challenging due to persistent issues such as low contrast between polyps and mucosa, significant morphological heterogeneity, and susceptibility to imaging artifacts caused by bubbles in the colorectal lumen and poor lighting conditions. To address these limitations, this study proposes a novel pyramid vision transformer-based hierarchical path aggregation network (HPANet) for polyp segmentation. First, the backward multi-scale feature fusion module (BMFM) was developed to improve the handling of polyps at different scales. Second, the forward noise reduction module (FNRM) was designed to learn the texture features of the upper and lower layers to reduce the influence of noise such as bubbles. Finally, to address the boundary ambiguity caused by repeated up- and down-sampling, the boundary feature refinement module (BFRM) was developed to further refine the boundary. The proposed network was compared with several representative networks on five public polyp datasets. Experimental results show that the proposed network achieves better segmentation performance, especially on the Kvasir-SEG dataset, where the mDice and mIoU coefficients reach 0.9204 and 0.8655. Full article
16 pages, 289 KiB  
Article
A Local 6-Approximation Distributed Algorithm for Minimum Dominating Set Problem in Planar Triangle-Free Graphs
by Wojciech Wawrzyniak
Algorithms 2025, 18(5), 280; https://doi.org/10.3390/a18050280 - 10 May 2025
Abstract
In this paper, we present a new distributed approximation algorithm for the minimum dominating set problem in planar triangle-free graphs. The algorithm operates in a constant number of rounds in the LOCAL model. Using the bunch technique, we prove that our algorithm achieves an approximation ratio of 6, which is a significant improvement over previous results for distributed algorithms, where the best known approximation ratio was 8+ϵ for any ϵ>0. While sequential algorithms can achieve approximation ratios below 5 for this problem, our distributed algorithm achieves the best known approximation ratio in the LOCAL model. We provide a detailed proof and analysis of the algorithm, which can be implemented in a distributed manner. Full article
(This article belongs to the Collection Parallel and Distributed Computing: Algorithms and Applications)
17 pages, 2144 KiB  
Article
DEPANet: A Differentiable Edge-Guided Pyramid Aggregation Network for Strip Steel Surface Defect Segmentation
by Yange Sun, Siyu Geng, Chengyi Zheng, Chenglong Xu, Huaping Guo and Yan Feng
Algorithms 2025, 18(5), 279; https://doi.org/10.3390/a18050279 - 9 May 2025
Abstract
The steel strip is an important and ideal material for the automotive and aerospace industries due to its superior machinability, cost efficiency, and flexibility. However, surface defects such as inclusions, spots, and scratches can significantly impact product performance and durability. Accurately identifying these defects remains challenging due to the complex texture structures and subtle variations in the material. To tackle this challenge, we propose a Differentiable Edge-guided Pyramid Aggregation Network (DEPANet) that utilizes edge information to improve segmentation performance. DEPANet adopts an end-to-end encoder-decoder framework, where the encoder consists of three key components: a backbone network, a Differentiable Edge Feature Pyramid network (DEFP), and Edge-aware Feature Aggregation Modules (EFAMs). The backbone network extracts overall features from the strip steel surface, while the proposed DEFP utilizes learnable Laplacian operators to extract multiscale edge information of defects across scales. In addition, the proposed EFAMs aggregate the overall features generated by the backbone and the edge information obtained from DEFP using the Convolutional Block Attention Module (CBAM), which combines channel and spatial attention mechanisms to enhance feature expression. Finally, through the decoder, implemented as a Feature Pyramid Network (FPN), the multiscale edge-enhanced features are progressively upsampled and fused to reconstruct high-resolution segmentation maps, enabling precise defect localization and robust handling of defects of various sizes and shapes. DEPANet demonstrates superior segmentation accuracy, edge preservation, and feature representation on the SD-saliency-900 dataset, outperforming other state-of-the-art methods and delivering more precise and reliable defect segmentation. Full article
(This article belongs to the Special Issue Machine Learning Algorithms for Image Understanding and Analysis)
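The learnable Laplacian idea in DEFP can be illustrated with a depthwise convolution whose weights are initialized to the Laplacian kernel but remain trainable; the sketch below is a generic illustration, not DEPANet's module, and the input is random.

```python
import torch
import torch.nn as nn

class LearnableLaplacian(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1,
                              groups=channels, bias=False)
        lap = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
        with torch.no_grad():
            self.conv.weight.copy_(lap.expand(channels, 1, 3, 3))
        # Weights start as the Laplacian edge operator but stay trainable.

    def forward(self, x):
        return self.conv(x)

edges = LearnableLaplacian(16)(torch.randn(1, 16, 64, 64))
print(edges.shape)  # torch.Size([1, 16, 64, 64])
```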
16 pages, 4397 KiB  
Article
Simulation and Optimization of Multi-Phase Terminal Trajectory for Three-Dimensional Anti-Ship Missiles Based on Hybrid MOPSO
by Jiandong Sun, Shixun You, Di Hua, Zhiwei Xu, Peiyao Wang and Zihang Yang
Algorithms 2025, 18(5), 278; https://doi.org/10.3390/a18050278 - 8 May 2025
Abstract
In high-dynamic battlefield environments, anti-ship missiles must perform intricate attitude adjustments and energy management within time constraints to hit a target accurately. Traditional optimization methods face challenges due to the high speed, flexibility, and varied constraints inherent to anti-ship missiles. To overcome these challenges, this research introduces a three-dimensional (3D) multi-stage trajectory optimization approach based on the hybrid multi-objective particle swarm optimization algorithm (MOPSO-h). A multi-stage optimization model is developed for terminal trajectory, dividing the flight process into three stages: cruising, altitude adjustment, and penetration dive. Dynamic equations are formulated for each stage, incorporating real-time observations and overload constraints and ensuring the trajectory remains smooth, continuous, and compliant with physical limitations. The proposed algorithm integrates an adaptive hybrid mutation strategy, effectively balancing global search with local exploitation, thus preventing premature convergence. The simulation results demonstrate that, in typical scenarios, the mean miss distance optimized by MOPSO-h remains no greater than 2.34 m, while the terminal landing angle is consistently no less than 85.68°. Furthermore, MOPSO-h enables the missile’s cruise altitude and speed, driven by multiple models, to maintain long-term stability, ensuring that the maneuver overload adheres to physical constraints. This research provides a rigorous and practical solution for anti-ship missile trajectory design and engagement with shipborne air defense systems in high-dynamic environments, achieved through a multi-stage collaborative optimization mechanism and error analysis. Full article
24 pages, 2910 KiB  
Article
Fast Equipartition of Complex 2D Shapes with Minimal Boundaries
by Costas Panagiotakis
Algorithms 2025, 18(5), 277; https://doi.org/10.3390/a18050277 - 8 May 2025
Abstract
In this paper, we study the 2D Shape Equipartition Problem (2D-SEP) with minimal boundaries and propose an efficient method that solves the problem at low computational cost. The goal of 2D-SEP is to obtain a segmentation into N equal-area segments (regions), where the number of segments (N) is given by the user, under the constraint that the length of the boundaries between segments is minimized. We define the 2D-SEP and study problem solutions for basic geometric shapes. We propose a 2D Shape Equipartition algorithm based on a fast balanced clustering method (SEP-FBC) that efficiently solves the 2D-SEP problem for complex 2D shapes in O(N·|S|·log(|S|)), where |S| denotes the number of image pixels. The proposed SEP-FBC method initializes clustering using centroids provided by the k-means algorithm, which is executed first. During each iteration of the main SEP-FBC process, a region-growing procedure is applied, starting from the smallest region and expanding until regions of equal area are achieved. Additionally, a Particle Swarm Optimization (PSO) method that runs SEP-FBC with different initial centroids is proposed to explore better 2D-SEP solutions and to show how the selection of the initial centroids affects the performance of the proposed method. Finally, we present experimental results on more than 2800 2D shapes to evaluate the performance of the proposed methods and show that their solutions outperform other methods from the literature. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
19 pages, 755 KiB  
Review
Artificial Intelligence and the Human–Computer Interaction in Occupational Therapy: A Scoping Review
by Ioannis Kansizoglou, Christos Kokkotis, Theodoros Stampoulis, Erasmia Giannakou, Panagiotis Siaperas, Stavros Kallidis, Maria Koutra, Paraskevi Malliou, Maria Michalopoulou and Antonios Gasteratos
Algorithms 2025, 18(5), 276; https://doi.org/10.3390/a18050276 - 8 May 2025
Abstract
Occupational therapy (OT) is a client-centered health profession focused on enhancing individuals’ ability to perform meaningful activities and daily tasks, particularly for those recovering from injury, illness, or disability. As a core component of rehabilitation, it promotes independence, well-being, and quality of life through personalized, goal-oriented interventions. Identifying and measuring the role of artificial intelligence (AI) in the human–computer interaction (HCI) within OT is critical for improving therapeutic outcomes and patient engagement. Despite AI’s growing significance, the integration of AI-driven HCI in OT remains relatively underexplored in the existing literature. This scoping review identifies and maps current research on the topic, highlighting applications and proposing directions for future work. A structured literature search was conducted using the Scopus and PubMed databases. Articles were included if their primary focus was on the intersection of AI, HCI, and OT. Out of 55 retrieved articles, 26 met the inclusion criteria. This work highlights three key findings: (i) machine learning, robotics, and virtual reality are emerging as prominent AI-driven HCI techniques in OT; (ii) the integration of AI-enhanced HCI offers significant opportunities for developing personalized therapeutic interventions; (iii) further research is essential to evaluate the long-term efficacy, ethical implications, and patient outcomes associated with AI-driven HCI in OT. These insights aim to guide future research efforts and clinical applications within this evolving interdisciplinary field. In conclusion, AI-driven HCI holds considerable promise for advancing OT practice, yet further research is needed to fully realize its clinical potential. Full article
(This article belongs to the Collection Feature Papers in Evolutionary Algorithms and Machine Learning)
19 pages, 1851 KiB  
Article
Generating Job Recommendations Based on User Personality and Gallup Tests
by Shakhmar Sarsenbay, Asset Kabdiyev, Iraklis Varlamis, Christos Sardianos, Cemil Turan, Bobir Razhametov and Yermek Kazym
Algorithms 2025, 18(5), 275; https://doi.org/10.3390/a18050275 - 8 May 2025
Abstract
This paper introduces a novel approach to job recommendation systems by incorporating personality traits evaluated through the Gallup CliftonStrengths assessment, aiming to enhance the traditional matching process beyond skills and qualifications. Unlike broad models like the Big Five, Gallup’s CliftonStrengths assesses 34 specific talents (e.g., ‘Analytical’, ‘Empathy’), enabling finer-grained, actionable job matches. While existing systems focus primarily on hard skills, this paper argues that personality traits—such as those measured by the Gallup test—play a crucial role in determining career satisfaction and long-term job retention. The proposed approach offers a more granular and actionable method for matching candidates with job opportunities that align with their natural strengths. Leveraging Gallup tests, we develop a job-matching approach that identifies personality traits and integrates them with recommendation algorithms to generate a list of the most suitable specializations for the user. By utilizing a GPT-4 model to process job descriptions and rank relevant personality traits, the system generates more personalized recommendations that account for both hard and soft skills. The empirical experiments demonstrate that this integration can improve the accuracy and relevance of job recommendations, leading to better career outcomes. The paper contributes to the field by offering a comprehensive framework for personality-based job matching and validating its effectiveness, paving the way for a more holistic approach to recruitment and talent management. Full article
12 pages, 687 KiB  
Article
A Novel Algorithm for Personalized Federated Learning: Knowledge Distillation with Weighted Combination Loss
by Hengrui Hu, Anai N. Kothari and Anjishnu Banerjee
Algorithms 2025, 18(5), 274; https://doi.org/10.3390/a18050274 - 7 May 2025
Abstract
Federated learning (FL) offers a privacy-preserving framework for distributed machine learning, enabling collaborative model training across diverse clients without centralizing sensitive data. However, statistical heterogeneity, characterized by non-independent and identically distributed (non-IID) client data, poses significant challenges, leading to model drift and poor generalization. This paper proposes a novel algorithm, pFedKD-WCL (Personalized Federated Knowledge Distillation with Weighted Combination Loss), which integrates knowledge distillation with bi-level optimization to address non-IID challenges. pFedKD-WCL leverages the current global model as a teacher to guide local models, optimizing both global convergence and local personalization efficiently. We evaluate pFedKD-WCL on the MNIST dataset and a synthetic dataset with non-IID partitioning, using multinomial logistic regression (MLR) and multilayer perceptron models (MLP). Experimental results demonstrate that pFedKD-WCL outperforms state-of-the-art algorithms, including FedAvg, FedProx, PerFedAvg, pFedMe, and FedGKD in terms of accuracy and convergence speed. For example, on MNIST data with an extreme non-IID setting, pFedKD-WCL achieves accuracy improvements of 3.1%, 3.2%, 3.9%, 3.3%, and 0.3% for an MLP model with 50 clients compared to FedAvg, FedProx, PerFedAvg, pFedMe, and FedGKD, respectively, while gains reach 24.1%, 22.6%, 2.8%, 3.4%, and 25.3% for an MLR model with 50 clients. Full article
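The loss structure described above — a weighted combination of local cross-entropy and a distillation term toward the global (teacher) model — can be sketched as follows; the weighting scheme and temperature are placeholders, not the exact pFedKD-WCL formulation.

```python
import torch
import torch.nn.functional as F

def kd_weighted_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    # Cross-entropy on local labels plus KL distillation toward teacher outputs.
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    return alpha * ce + (1.0 - alpha) * kd

student = torch.randn(16, 10, requires_grad=True)   # local model logits (toy)
teacher = torch.randn(16, 10)                        # global model logits (toy)
labels = torch.randint(0, 10, (16,))
loss = kd_weighted_loss(student, teacher, labels)
loss.backward()
print(float(loss))
```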
21 pages, 3370 KiB  
Article
An Improved Density-Based Spatial Clustering of Applications with Noise Algorithm with an Adaptive Parameter Based on the Sparrow Search Algorithm
by Zicheng Huang, Zuopeng Liang, Shibo Zhou and Shuntao Zhang
Algorithms 2025, 18(5), 273; https://doi.org/10.3390/a18050273 - 6 May 2025
Abstract
The density-based spatial clustering of applications with noise (DBSCAN) algorithm is able to cluster arbitrarily structured datasets. However, its clustering result is exceptionally sensitive to the neighborhood radius (Eps) and to noise points, making it hard to obtain the best result quickly and accurately. To address this issue, a parameter-adaptive DBSCAN clustering algorithm based on the Sparrow Search Algorithm (SSA), referred to as SSA-DBSCAN, is proposed. This method leverages the fast local search ability of SSA, using the optimal number of clusters and the silhouette coefficient of the dataset as the objective functions to iteratively optimize and select the two input parameters of DBSCAN. This avoids the adverse impact of manually chosen parameters and enables adaptive clustering with DBSCAN. Experiments on typical synthetic datasets, UCI (University of California, Irvine) real-world datasets, and image segmentation tasks validate the effectiveness of the SSA-DBSCAN algorithm. Comparative analysis with DBSCAN and other related optimization algorithms demonstrates the clustering performance of SSA-DBSCAN. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
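The parameter-selection idea can be illustrated by searching DBSCAN's Eps and MinPts to maximize the silhouette coefficient; in the sketch below a plain random search stands in for the Sparrow Search Algorithm, and the data are synthetic.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons
from sklearn.metrics import silhouette_score

X, _ = make_moons(n_samples=400, noise=0.06, random_state=0)
rng = np.random.default_rng(0)

best = (-1.0, None)
for _ in range(60):                                   # candidate (eps, min_samples) pairs
    eps = rng.uniform(0.05, 0.5)
    min_samples = int(rng.integers(3, 20))
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    if len(set(labels)) < 2:
        continue                                      # degenerate clustering, skip
    score = silhouette_score(X, labels)
    if score > best[0]:
        best = (score, (eps, min_samples))
print(best)
```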
30 pages, 6100 KiB  
Systematic Review
Speech Enhancement Algorithms: A Systematic Literature Review
by Sally Taha Yousif and Basheera M. Mahmmod
Algorithms 2025, 18(5), 272; https://doi.org/10.3390/a18050272 - 6 May 2025
Abstract
A growing and pressing need for Speech Enhancement Algorithms (SEAs) has emerged with the proliferation of hearing devices and mobile devices that aim to improve speech intelligibility without sacrificing speech quality. Recently, a tremendous number of studies have been conducted in the field of speech enhancement. This study aims to map the field of speech enhancement by conducting a systematic literature review to provide comprehensive details of recently proposed SEAs. This systematic review aims to highlight research trends in SEAs and direct researchers to the most important topics published between 2015 and 2024. It attempts to address seven key research questions related to this topic. Moreover, it covers articles available in five research databases that were selected in accordance with the PRISMA protocol. Different inclusion and exclusion criteria have been performed. Across the selected databases, 47 studies met the defined inclusion criteria. A detailed explanation of SEAs in the recent literature is provided, with existing SEAs studied in a comparative fashion along with the factors influencing the choice of one over the others. This review presents different criteria related to the approaches utilized for signal modeling, the different datasets employed, types of transform-based SEAs, and the effectiveness of different measurements, among other topics. This study presents a systematic review of SEAs along with existing challenges in this field. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
21 pages, 1407 KiB  
Article
A Reconfigurable Framework for Hybrid Quantum–Classical Computing
by Pratibha and Naveed Mahmud
Algorithms 2025, 18(5), 271; https://doi.org/10.3390/a18050271 - 6 May 2025
Abstract
Hybrid quantum–classical (HQC) computing refers to the approach of executing algorithms coherently on both quantum and classical resources. This approach makes the best use of current or near-term quantum computers by sharing the workload with classical high-performance computing. However, HQC algorithms often require a back-and-forth exchange of data between quantum and classical processors, causing system bottlenecks and leading to high latency in applications. The objective of this study is to investigate novel frameworks that unify quantum and reconfigurable resources for HQC and mitigate system bottleneck and latency issues. In this paper, we propose a reconfigurable framework for hybrid quantum–classical computing. The proposed framework integrates field-programmable gate arrays (FPGAs) with quantum processing units (QPUs) for deploying HQC algorithms. The classical subroutines of the algorithms are accelerated on FPGA fabric using a high-throughput processing pipeline, while quantum subroutines are executed on the QPUs. High-level software is used to seamlessly facilitate data exchange between classical and quantum workloads through high-performance channels. To evaluate the proposed framework, an HQC algorithm, namely variational quantum classification, and the MNIST dataset are used as a test case. We present a quantitative comparison of the proposed framework with a state-of-the-art quantum software framework running on a server-grade CPU. The results demonstrate that the FPGA pipeline achieves up to 8× improvement in runtime compared to the CPU baseline. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
21 pages, 1676 KiB  
Article
An Evolutionary Algorithm for Efficient Flexible Workstation Layout in Multi-Product Handcrafted Manufacturing
by Eduardo Fernández-Echeverría, Gregorio Fernández-Lambert, Luis Enrique García-Santamaría, Loecelia Ruvalcaba-Sánchez, Roberto Angel Melendez-Armenta, Juan Manuel Carrión-Delgado and José Aparicio-Urbano
Algorithms 2025, 18(5), 270; https://doi.org/10.3390/a18050270 - 5 May 2025
Abstract
This study proposes an Evolutionary Algorithm (EA) to optimize the workstation layout in multi-product handcrafted furniture workshops with simultaneous manufacturing. The algorithm models the arrangement of workstations within a limited space, enhancing activity coordination and reducing unnecessary worker movement. The optimized solution is obtained through the application of evolutionary operators, including selection, crossover, mutation, and refinement, iterating over successive generations. To evaluate the EA’s performance, a computational simulation is conducted using ProModel®, comparing its efficiency against conventional methodologies such as Systematic Layout Planning (SLP) and the CRAFT algorithm. In a case study involving the simultaneous elaboration of three different products, each by a different artisan, the EA successfully reduces the total worker travel distance by 51.45% and the system’s total processing time by 13.2%. The results indicate that the proposed approach not only enhances operational efficiency in a smaller environment but also lays the groundwork for integrating advanced strategies. These include cellular manufacturing and hybrid production schemes, ultimately enhancing flexibility and sustainability in this sector. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
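As a rough illustration of the operator pipeline described in the abstract, the sketch below evolves a permutation that assigns nine workstations to nine grid slots while minimizing the total rectilinear distance walked along per-product routes. The slot coordinates, routes, and operator choices (truncation selection, order crossover, swap mutation) are assumptions made for illustration; they are not the authors' exact encoding or operators.

```python
import random

SLOTS = [(x, y) for x in range(3) for y in range(3)]     # candidate grid positions
ROUTES = [[0, 2, 5, 7], [1, 3, 5, 8], [0, 4, 6, 8]]      # one route per product

def travel_distance(layout):
    """Total rectilinear distance walked by artisans, summed over all routes."""
    pos = {station: SLOTS[slot] for slot, station in enumerate(layout)}
    total = 0.0
    for route in ROUTES:
        for a, b in zip(route, route[1:]):
            (x1, y1), (x2, y2) = pos[a], pos[b]
            total += abs(x1 - x2) + abs(y1 - y2)
    return total

def order_crossover(p1, p2):
    """Copy a slice from one parent, fill the rest in the order of the other."""
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    fill = [g for g in p2 if g not in child]
    for k in range(len(child)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def evolve(pop_size=30, generations=200, mut_rate=0.2):
    pop = [random.sample(range(9), 9) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=travel_distance)
        survivors = pop[: pop_size // 2]                  # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            c = order_crossover(a, b)                     # crossover
            if random.random() < mut_rate:                # swap mutation
                i, j = random.sample(range(9), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = survivors + children
    return min(pop, key=travel_distance)
```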
17 pages, 1380 KiB  
Article
Assessing Environmental Influences on Flash Storage for Vehicle Computing: A Quantitative and Analytical Investigation
by Ying He, Donger Chen, Isabella Xu, Wang Feng, Qing Yang, Ted Tsao and Song Fu
Algorithms 2025, 18(5), 269; https://doi.org/10.3390/a18050269 - 5 May 2025
Abstract
As automotive technology advances, ensuring efficient and reliable vehicle storage systems becomes increasingly important for vehicle edge computing. Environmental factors, such as extreme cold or intense heat, can greatly affect how well these critical components function. In this paper, we study the effects of different [...] Read more.
As automotive technology advances, ensuring efficient and reliable vehicle storage systems becomes increasingly important for vehicle edge computing. Environmental factors, such as extreme cold or intense heat, can greatly affect how well these critical components function. In this paper, we study the effects of different temperatures on flash-based vehicle storage systems, in particular how these conditions impact data storage workloads, machine learning workloads, and vehicle edge computing, by analyzing the read and write performance of in-vehicle flash memory. Our approach combines environmental simulations, performance testing, and data analysis to examine how temperature changes affect the performance and reliability of vehicle storage. By testing conditions ranging from standard room temperature to extreme heat, this study explores how such environments influence the speed, dependability, and overall functionality of flash memory in automotive systems. The results reveal detailed relationships between temperature changes and the throughput and latency of flash storage, identifying areas where these systems may be vulnerable or where improvements could be made. Understanding these dynamics is essential for improving the durability and flexibility of automotive storage systems and vehicle edge nodes under various environmental conditions. While our findings provide insights into temperature-related performance shifts, they represent one piece of a broader set of design considerations for engineers and manufacturers; rather than offering definitive guidance for policymakers, they primarily illustrate potential thermal vulnerabilities, informing ongoing work toward more robust and reliable vehicle storage systems. As the automotive industry continues to innovate, this study offers an initial foundation for future developments in vehicle storage technology. Full article
(This article belongs to the Special Issue Algorithms for Games AI)
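For readers interested in reproducing this kind of measurement, the following simplified Python microbenchmark records write throughput and per-block latency for a file on the target flash device. The target path, 4 KiB block size, and fsync-based flushing are illustrative assumptions rather than the paper's exact workload generator.

```python
import os
import time

def write_benchmark(path, block_size=4096, blocks=1024):
    """Sequential write benchmark: returns (throughput in MB/s, mean latency in s)."""
    latencies = []
    buf = os.urandom(block_size)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        start = time.perf_counter()
        for _ in range(blocks):
            t0 = time.perf_counter()
            os.write(fd, buf)
            os.fsync(fd)                    # push each block through to the device
            latencies.append(time.perf_counter() - t0)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    throughput = block_size * blocks / elapsed / 1e6
    return throughput, sum(latencies) / len(latencies)

# Usage: repeat the run at each chamber temperature set-point and log the tuple
# (temperature, throughput, mean latency) for later analysis.
```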
25 pages, 2444 KiB  
Article
Adam Algorithm with Step Adaptation
by Vladimir Krutikov, Elena Tovbis and Lev Kazakovtsev
Algorithms 2025, 18(5), 268; https://doi.org/10.3390/a18050268 - 4 May 2025
Abstract
Adam (Adaptive Moment Estimation) is a well-known algorithm for the first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. As shown by computational experiments, with an increase in the degree of conditionality of the problem and in the [...] Read more.
Adam (Adaptive Moment Estimation) is a well-known algorithm for the first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. As shown by computational experiments, with an increase in the degree of conditionality of the problem and in the presence of interference, Adam is prone to looping, which is associated with difficulties in adjusting the step size. In this paper, a step-adaptation algorithm for the Adam method is proposed. The step-adaptation scheme is based on reproducing the geometric relationship between the descent direction and the new gradient that arises during one-dimensional descent. In the case of exact one-dimensional descent, the angle between these directions is a right angle. For inexact descent, if the angle between the descent direction and the new gradient is obtuse, the step is too large and should be reduced; if the angle is acute, the step is too small and should be increased. For the experimental analysis of the new algorithm, test functions with a controlled degree of conditioning and with interference on the gradient, as well as learning problems with mini-batch gradient estimates, were used. As the computational experiments showed, in stochastic optimization problems the proposed Adam modification with step adaptation turned out to be significantly more efficient than both the standard Adam algorithm and the other step-adaptation methods studied in this work. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
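A hedged sketch of the idea follows: a standard Adam update whose global step size is multiplicatively adapted from the angle test described above (acute angle: increase the step; obtuse angle: decrease it). Measuring the angle through the inner product of consecutive gradients and the 1.1/0.7 adjustment factors are assumptions made here for illustration, not the authors' exact rule.

```python
import numpy as np

def adam_step_adaptive(grad_fn, x0, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8,
                       up=1.1, down=0.7, iters=1000):
    """Adam with a multiplicative step-size adaptation heuristic (illustrative)."""
    x = x0.astype(float).copy()
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    g_prev = grad_fn(x)
    for t in range(1, iters + 1):
        m = beta1 * m + (1 - beta1) * g_prev
        v = beta2 * v + (1 - beta2) * g_prev ** 2
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)      # standard Adam update
        g_new = grad_fn(x)
        # Acute angle (positive inner product): step likely too small, increase it.
        # Obtuse angle (negative inner product): overshoot, decrease the step.
        lr *= up if np.dot(g_prev, g_new) > 0 else down
        g_prev = g_new
    return x

# Example on a poorly conditioned quadratic f(x) = 0.5 * (x1^2 + 100 * x2^2):
# x_opt = adam_step_adaptive(lambda x: np.array([1.0, 100.0]) * x, np.array([1.0, 1.0]))
```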
18 pages, 2156 KiB  
Article
Data-Driven Distributed Model-Free Adaptive Predictive Control for Multiple High-Speed Trains Under False Data Injection Attacks
by Bin Zhang, Dan Wang and Fuzhong Wang
Algorithms 2025, 18(5), 267; https://doi.org/10.3390/a18050267 - 4 May 2025
Abstract
This paper investigates the problem of ensuring the stable operation of multiple high-speed train systems under the threat of False Data Injection (FDI) attacks. Due to the wireless communication characteristics of railway networks, high-speed train systems are particularly vulnerable to FDI attacks, which [...] Read more.
This paper investigates the problem of ensuring the stable operation of multiple high-speed train systems under the threat of False Data Injection (FDI) attacks. Due to the wireless communication characteristics of railway networks, high-speed train systems are particularly vulnerable to FDI attacks, which can compromise the accuracy of train data and disrupt cooperative control strategies. To mitigate this risk, we propose a Distributed Model-Free Adaptive Predictive Control (DMFAPC) scheme, which is data-driven and does not rely on an accurate system model. First, using a dynamic linearization method, we transform the nonlinear high-speed train system model into a dynamically linearized model. Then, based on this linearized model, we design a DMFAPC strategy that ensures bounded train velocity tracking errors even in the presence of FDI attacks. Finally, the stability of the proposed scheme is rigorously analyzed using the contraction mapping method, and simulation results demonstrate that the scheme exhibits excellent robustness and stability under attack conditions. Full article
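To illustrate the data-driven principle behind such schemes, the sketch below implements a single-train, compact-form dynamic linearization with a model-free adaptive control law. The pseudo-partial-derivative update, the tuning constants, and the first-order stand-in for train velocity dynamics are illustrative assumptions; the paper's distributed, predictive, attack-resilient scheme is considerably richer.

```python
def mfac_track(reference, plant, steps, eta=0.5, mu=1.0, rho=0.6, lam=1.0):
    """Compact-form dynamic linearization + model-free adaptive control (sketch)."""
    y_prev, u_prev, du_prev = 0.0, 0.0, 0.0
    phi = 1.0                                    # pseudo-partial-derivative estimate
    y = plant(y_prev, u_prev)
    outputs = []
    for k in range(steps):
        dy = y - y_prev
        # Update the pseudo-partial derivative from measured I/O increments only.
        phi += eta * du_prev * (dy - phi * du_prev) / (mu + du_prev ** 2)
        if abs(phi) < 1e-5:                      # standard reset condition
            phi = 1.0
        # Model-free control law driving the output toward the reference.
        u = u_prev + rho * phi * (reference(k + 1) - y) / (lam + phi ** 2)
        y_prev, du_prev, u_prev = y, u - u_prev, u
        y = plant(y_prev, u)                     # unknown nonlinear train dynamics
        outputs.append(y)
    return outputs

# Example with a simple first-order stand-in for train velocity dynamics:
# vels = mfac_track(lambda k: 70.0, lambda y, u: 0.9 * y + 0.5 * u, steps=200)
```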
26 pages, 4881 KiB  
Article
Generative Neural Networks for Addressing the Bioequivalence of Highly Variable Drugs
by Anastasios Nikolopoulos and Vangelis D. Karalis
Algorithms 2025, 18(5), 266; https://doi.org/10.3390/a18050266 - 4 May 2025
Abstract
Bioequivalence assessment of highly variable drugs (HVDs) remains a significant challenge, as the application of scaled approaches requires replicate designs, complex statistical analyses, and varies between regulatory authorities (e.g., FDA and EMA). This study introduces the use of artificial intelligence, specifically Wasserstein Generative [...] Read more.
Bioequivalence assessment of highly variable drugs (HVDs) remains a significant challenge, as the application of scaled approaches requires replicate designs, complex statistical analyses, and varies between regulatory authorities (e.g., FDA and EMA). This study introduces the use of artificial intelligence, specifically Wasserstein Generative Adversarial Networks (WGANs), as a novel approach for bioequivalence studies of HVDs. Monte Carlo simulations were conducted to evaluate the performance of WGANs across various variability levels, population sizes, and data augmentation scales (2× and 3×). The generated data were tested for bioequivalence acceptance using both the EMA and FDA scaled approaches. The WGAN approach, even when applied without scaling, consistently outperformed the scaled EMA/FDA methods by effectively reducing the required sample size. Furthermore, the WGAN approach not only minimizes the sample size needed for bioequivalence studies of HVDs but also eliminates the need for complex, costly, and time-consuming replicate designs that are prone to high dropout rates. This study demonstrates that using WGANs with 3× data augmentation can achieve bioequivalence acceptance rates exceeding 89% across all FDA and EMA criteria, with 10 out of 18 scenarios reaching 100%, highlighting the WGAN method’s potential to transform the design and efficiency of bioequivalence studies. This work is a foundational step toward utilizing WGANs for the bioequivalence assessment of HVDs and suggests that, with clear regulatory criteria, a new era of bioequivalence evaluation can begin. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
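As a rough illustration of the generative component, the following PyTorch sketch trains an original-style WGAN (weight clipping, RMSprop) on a small table of pharmacokinetic samples and then allows synthetic subjects to be drawn for augmentation. The network sizes, clipping threshold, and the assumption of low-dimensional log-transformed AUC/Cmax data are illustrative and are not the authors' architecture.

```python
import torch
import torch.nn as nn

def make_wgan(latent_dim=8, data_dim=1, hidden=32):
    gen = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                        nn.Linear(hidden, data_dim))
    critic = nn.Sequential(nn.Linear(data_dim, hidden), nn.ReLU(),
                           nn.Linear(hidden, 1))
    return gen, critic

def train_wgan(real_data, epochs=2000, n_critic=5, clip=0.01, latent_dim=8):
    """real_data: float tensor of shape (n_subjects, n_features)."""
    gen, critic = make_wgan(latent_dim, real_data.shape[1])
    opt_g = torch.optim.RMSprop(gen.parameters(), lr=5e-5)
    opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
    for _ in range(epochs):
        for _ in range(n_critic):                       # critic updates
            z = torch.randn(real_data.shape[0], latent_dim)
            loss_c = critic(gen(z).detach()).mean() - critic(real_data).mean()
            opt_c.zero_grad(); loss_c.backward(); opt_c.step()
            for p in critic.parameters():               # weight clipping (Lipschitz proxy)
                p.data.clamp_(-clip, clip)
        z = torch.randn(real_data.shape[0], latent_dim)
        loss_g = -critic(gen(z)).mean()                 # generator update
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return gen

# Usage: fit on observed log-transformed pharmacokinetic metrics, sample 2x or 3x
# synthetic subjects from the generator, and apply the usual 90% confidence-interval
# bioequivalence check to the augmented dataset.
```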