Algorithms, Volume 18, Issue 5 (May 2025) – 21 articles

14 pages, 3036 KiB  
Article
Particle Swarm Optimization Support Vector Machine-Based Grounding Fault Detection Method in Distribution Network
by Zhongqin Xiong, Shichang Huang, Shen Ren, Yutong Lin, Zewen Li, Dongyu Li and Fangming Deng
Algorithms 2025, 18(5), 259; https://doi.org/10.3390/a18050259 - 29 Apr 2025
Abstract
With present fault detection methods for low-voltage distribution networks, it is difficult to detect single-phase grounding faults under complex working conditions. In this paper, a particle swarm optimization (PSO) support vector machine (SVM)-based grounding fault detection method is proposed for distribution networks. The PSO algorithm is improved by adapting the inertia weight and introducing a flight-time factor, and the parameters C and g of the SVM are then optimized with the improved PSO. On this basis, a grounding fault detection method is established. Testing the proposed model in simulation and experiment validates its effectiveness and detection accuracy. Full article
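The tuning loop the abstract describes — a PSO with a modified inertia weight searching over the SVM penalty C and kernel parameter g — can be sketched in a few lines. The sketch below is a generic PSO with a linearly decaying inertia weight (the paper's flight-time factor and exact weight schedule are not reproduced), and a smooth stand-in objective replaces the actual SVM cross-validation error:

```python
import random

def pso_minimize(objective, bounds, n_particles=20, n_iter=60, seed=0):
    """Generic PSO with a linearly decaying inertia weight (0.9 -> 0.4),
    echoing the improved-inertia idea; the paper's flight-time factor
    and exact schedule are not reproduced here."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    gi = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[gi][:], pbest_f[gi]
    for t in range(n_iter):
        w = 0.9 - 0.5 * t / (n_iter - 1)          # inertia weight decay
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * r1 * (pbest[i][d] - pos[i][d])
                             + 2.0 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Stand-in for the SVM cross-validation error as a function of (C, g):
# a smooth bowl with its optimum at C = 10, g = 0.1 (purely illustrative).
cv_error = lambda p: (p[0] - 10.0) ** 2 / 100.0 + (p[1] - 0.1) ** 2

(best_C, best_g), err = pso_minimize(cv_error, [(0.1, 100.0), (0.001, 1.0)])
```

In the real method, `cv_error` would be replaced by the cross-validated SVM error at each candidate (C, g).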
24 pages, 3776 KiB  
Article
Combination of Conditioning Factors for Generation of Landslide Susceptibility Maps by Extreme Gradient Boosting in Cuenca, Ecuador
by Esteban Bravo-López, Tomás Fernández, Chester Sellers and Jorge Delgado-García
Algorithms 2025, 18(5), 258; https://doi.org/10.3390/a18050258 - 29 Apr 2025
Abstract
Landslides are hazardous events that occur mainly in mountainous areas and cause substantial losses of various kinds worldwide; therefore, it is important to investigate them. In this study, a specific Machine Learning (ML) method was further analyzed due to the good results obtained in the previous stage of this research. The algorithm implemented is Extreme Gradient Boosting (XGBoost), which was used to evaluate the susceptibility to landslides recorded in the city of Cuenca (Ecuador) and its surroundings, generating the respective Landslide Susceptibility Maps (LSM). For the model implementation, a landslide inventory updated to 2019 was used and several sets from 15 available conditioning factors were considered, applying two different methods of random point sampling. Additionally, a hyperparameter tuning process of XGBoost was employed in order to optimize the predictive and computational performance of each model. The results obtained were validated using AUC-ROC, F-Score, and the degree of landslide coincidence adjustment at high and very high susceptibility levels, showing a good predictive capacity in most cases. The best results were obtained with the set of the six best conditioning factors previously determined, as it produced good values in validation metrics (AUC = 0.83; F-Score = 0.73) and a degree of coincidence of landslides in the high and very high susceptibility levels above 90%. The Wilcoxon test established significant differences between the sampling methods. These results show the need to perform susceptibility analyses with different data sets to determine the most appropriate ones. Full article
(This article belongs to the Special Issue AI and Computational Methods in Engineering and Science)
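The validation metrics quoted above (AUC = 0.83, F-Score = 0.73) are straightforward to compute from scratch; a minimal sketch of both, using the rank (Mann-Whitney) formulation of AUC:

```python
def auc_roc(y_true, scores):
    """AUC via the rank (Mann-Whitney) formulation: the probability that
    a randomly chosen positive is scored above a randomly chosen
    negative, with ties counted as half."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def f_score(y_true, y_pred):
    """Harmonic mean of precision and recall for binary labels."""
    tp = sum(y == 1 and p == 1 for y, p in zip(y_true, y_pred))
    fp = sum(y == 0 and p == 1 for y, p in zip(y_true, y_pred))
    fn = sum(y == 1 and p == 0 for y, p in zip(y_true, y_pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

auc = auc_roc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.1])  # 3 of 4 pairs ordered correctly
f1 = f_score([1, 1, 0, 0], [1, 0, 1, 0])
```

In practice these would be applied to the held-out test split of the susceptibility model.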
25 pages, 2566 KiB  
Article
Early Risk Prediction in Acute Aortic Syndrome on Clinical Data Using Machine Learning
by Mehdi Tavafi, Kalpdrum Passi and Robert Ohle
Algorithms 2025, 18(5), 257; https://doi.org/10.3390/a18050257 - 28 Apr 2025
Abstract
This study explores machine learning’s potential for early Acute Aortic Syndrome (AAS) prediction by integrating and cleaning extensive clinical datasets from 68 emergency departments in the USA, covering the medical histories of nearly 150,000 patients from 2021 to 2022. Utilizing various data-splitting strategies and classifiers, the research constructs predictive models and addresses dataset size limitations, achieving an exceptional accuracy of 99.3% with the Relief feature selection method and a random forest classifier, facilitating further research on AAS and other cardiovascular diseases. Full article
20 pages, 8096 KiB  
Article
Simulating Intraday Electricity Consumption with ForGAN
by Ralf Korn and Laurena Ramadani
Algorithms 2025, 18(5), 256; https://doi.org/10.3390/a18050256 - 27 Apr 2025
Abstract
Sparse data and an unknown conditional distribution of future values are challenges for managing risks inherent in the evolution of time series. This contribution addresses both aspects through the application of ForGAN, a special form of a generative adversarial network (GAN), to German electricity consumption data. Electricity consumption time series have been selected due to their typical combination of (non-linear) seasonal behavior on different time scales and of local random effects. The primary objective is to demonstrate that ForGAN is able to capture such complicated seasonal figures and to generate data with the correct underlying conditional distribution without data preparation, such as de-seasonalization. In particular, ForGAN does so without assuming an underlying model for the evolution of the time series and is purely data-based. The training and validation procedures are described in great detail. Specifically, a long iteration process of the interplay between the generator and discriminator is required to obtain convergence of the parameters that determine the conditional distribution from which additional artificial data can be generated. Additionally, extensive quality assessments of the generated data are conducted by looking at histograms, auto-correlation structures, and further features comparing the real and the generated data. As a result, the generated data match the conditional distribution of the next consumption value of the training data well. Thus, the trained generator of ForGAN can be used to simulate additional time series of German electricity consumption. This can be seen as a kind of proof of the applicability of ForGAN.
Through a detailed description of the necessary training and validation steps, a thorough quality check before the actual use of the simulated data, and the intuition and mathematical background behind ForGAN, this contribution aims to demystify the application of GANs and to motivate both theorists and researchers in the applied sciences to use them for data generation in similar applications. The proposed framework lays out a plan for doing so. Full article
52 pages, 748 KiB  
Systematic Review
Advancements in Non-Destructive Detection of Biochemical Traits in Plants Through Spectral Imaging-Based Algorithms: A Systematic Review
by Aleksander Dabek, Lorenzo Mantovani, Susanna Mirabella, Michele Vignati and Simone Cinquemani
Algorithms 2025, 18(5), 255; https://doi.org/10.3390/a18050255 - 27 Apr 2025
Abstract
This paper provides a comprehensive overview of state-of-the-art non-destructive methods for detecting plant biochemical traits through spectral imaging of leafy greens. It offers insights into the various detection techniques and their effectiveness. The review emphasizes the algorithms used for spectral data analysis, highlighting advancements in computational methods that have contributed to improving detection accuracy and efficiency. This systematic review, following the PRISMA 2020 guidelines, explores the applications of non-destructive measurements, techniques, and algorithms, including hyperspectral imaging and spectrometry, for detecting a wide range of chemical compounds and elements in lettuce, basil, and spinach. The review covers studies published from 2019 onward, focusing on the detection of compounds such as chlorophyll, carotenoids, nitrogen, nitrate, and anthocyanin. Additional compounds such as phosphorus, vitamin C, magnesium, glucose, sugar, water content, calcium, soluble solid content, sulfur, and pH are also mentioned, although they were not the primary focus of this study. The techniques used are showcased and highlighted for each compound, and the accuracies achieved are presented to demonstrate effective detection. Full article
(This article belongs to the Collection Feature Papers in Algorithms for Multidisciplinary Applications)
25 pages, 362 KiB  
Article
Cutting-Edge Stochastic Approach: Efficient Monte Carlo Algorithms with Applications to Sensitivity Analysis
by Ivan Dimov and Rayna Georgieva
Algorithms 2025, 18(5), 252; https://doi.org/10.3390/a18050252 - 27 Apr 2025
Abstract
Many important practical problems connected to energy efficiency in buildings, ecology, metallurgy, the development of wireless communication systems, the optimization of radar technology, quantum computing, pharmacology, and seismology are described by large-scale mathematical models that are typically represented by systems of partial differential equations. Such systems often involve numerous input parameters. It is crucial to understand how susceptible the solutions are to uncontrolled variations or uncertainties within these input parameters. This knowledge helps in identifying critical factors that significantly influence the model’s outcomes and can guide efforts to improve the accuracy and reliability of predictions. Sensitivity analysis (SA) is a method used efficiently to assess the sensitivity of the output results from large-scale mathematical models to uncertainties in their input data. By performing SA, we can better manage risks associated with uncertain inputs and make more informed decisions based on the model’s outputs. In recent years, researchers have developed advanced algorithms based on the analysis of variance (ANOVA) technique for computing numerical sensitivity indicators. These methods have also incorporated computationally efficient Monte Carlo integration techniques. This paper presents a comprehensive theoretical and experimental investigation of Monte Carlo algorithms based on “symmetrized shaking” of Sobol’s quasi-random sequences. The theoretical proof demonstrates that these algorithms exhibit an optimal rate of convergence for functions with continuous and bounded first derivatives and for functions with continuous and bounded second derivatives, respectively, both in terms of probability and mean square error. For the purposes of numerical study, these approaches were successfully applied to a particular problem. A specialized software tool for the global sensitivity analysis of an air pollution mathematical model was developed. 
Sensitivity analyses were conducted regarding some important air pollutant levels, calculated using a large-scale mathematical model describing the long-distance transport of air pollutants—the Unified Danish Eulerian Model (UNI-DEM). The sensitivity of the model was explored focusing on two distinct categories of key input parameters: chemical reaction rates and input emissions. To validate the theoretical findings and study the applicability of the algorithms across diverse problem classes, extensive numerical experiments were conducted to calculate the main sensitivity indicators—Sobol’ global sensitivity indices. Various numerical integration algorithms were employed to meet this goal—Monte Carlo, quasi-Monte Carlo (QMC), scrambled quasi-Monte Carlo methods based on Sobol’s sequences, and a sensitivity analysis approach implemented in the SIMLAB software. During the study, an essential task arose: computing sensitivity measures that are small in value. These require numerical integration approaches of higher accuracy to ensure reliable predictions based on the mathematical model, which gives small sensitivity measures a vital role. Both the analysis and numerical results highlight the advantages of one of the proposed approaches in terms of accuracy and efficiency, particularly for relatively small sensitivity indices. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
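The first-order Sobol' indices at the heart of this analysis can be estimated with a plain Monte Carlo pick-freeze scheme; the sketch below uses an additive test model with known analytic indices, and ordinary pseudo-random sampling stands in for the paper's symmetrized-shaking Sobol' sequences (which are precisely what improves the convergence rate):

```python
import random

def sobol_first_order(f, dim, n=20000, seed=1):
    """Estimate first-order Sobol' indices S_i by Monte Carlo pick-freeze.
    Plain pseudo-random sampling stands in for the quasi-random Sobol'
    sequences (with symmetrized shaking) studied in the paper."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(x) for x in A]
    mean = sum(fA) / n
    var = sum(v * v for v in fA) / n - mean * mean
    S = []
    for i in range(dim):
        # B with its i-th column "frozen" to A's i-th column
        fAB = [f(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        cov = sum(x * y for x, y in zip(fA, fAB)) / n - mean * mean
        S.append(cov / var)
    return S

# Additive test model: the analytic indices are a_i^2 / sum_j a_j^2,
# i.e. 16/21, 4/21, 1/21 for coefficients (4, 2, 1) on U(0,1) inputs.
model = lambda x: 4.0 * x[0] + 2.0 * x[1] + x[2]
S = sobol_first_order(model, dim=3)
```

The small-index difficulty the abstract mentions is visible here: the absolute Monte Carlo error is roughly the same for every index, so the relative error on S_3 ≈ 0.048 is much worse than on S_1 ≈ 0.76.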
49 pages, 13540 KiB  
Article
Integrated Model Selection and Scalability in Functional Data Analysis Through Bayesian Learning
by Wenzheng Tao, Sarang Joshi and Ross Whitaker
Algorithms 2025, 18(5), 254; https://doi.org/10.3390/a18050254 - 26 Apr 2025
Abstract
Functional data, including one-dimensional curves and higher-dimensional surfaces, have become increasingly prominent across scientific disciplines. They offer a continuous perspective that captures subtle dynamics and richer structures compared to discrete representations, thereby preserving essential information and facilitating the more natural modeling of real-world phenomena, especially in sparse or irregularly sampled settings. A key challenge lies in identifying low-dimensional representations and estimating covariance structures that capture population statistics effectively. We propose a novel Bayesian framework with a nonparametric kernel expansion and a sparse prior, enabling the direct modeling of measured data and avoiding the artificial biases from regridding. Our method, Bayesian scalable functional data analysis (BSFDA), automatically selects both subspace dimensionalities and basis functions, reducing the computational overhead through an efficient variational optimization strategy. We further propose a faster approximate variant that maintains comparable accuracy but accelerates computations significantly on large-scale datasets. Extensive simulation studies demonstrate that our framework outperforms conventional techniques in covariance estimation and dimensionality selection, showing resilience to high dimensionality and irregular sampling. The proposed methodology proves effective for multidimensional functional data and showcases practical applicability in biomedical and meteorological datasets. Overall, BSFDA offers an adaptive, continuous, and scalable solution for modern functional data analysis across diverse scientific domains. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
27 pages, 6303 KiB  
Article
Detecting and Analyzing Botnet Nodes via Advanced Graph Representation Learning Tools
by Alfredo Cuzzocrea, Abderraouf Hafsaoui and Carmine Gallo
Algorithms 2025, 18(5), 253; https://doi.org/10.3390/a18050253 - 26 Apr 2025
Abstract
Private consumers, small businesses, and even large enterprises are all at risk from botnets. These botnets are known for spearheading Distributed Denial-Of-Service (DDoS) attacks, spamming large populations of users, and causing critical harm to major organizations. The development of Internet of Things (IoT) devices led to the use of these devices for cryptocurrency mining, in-transit data interception, and sending logs containing private data to the master botnet. Different techniques were developed to identify these botnet activities, but only a few use Graph Neural Networks (GNNs) to analyze host activity by representing their communications with a directed graph. Although GNNs are intended to extract structural graph properties, they risk overfitting, which leads to failure when they are applied to a previously unseen network. In this study, we test the notion that structural graph patterns might be used for efficient botnet detection, and we present SIR-GN, a structural iterative representation learning methodology for graph nodes. Our approach is built to work well with untested data, and our model is able to provide a vector representation for every node that captures its structural information. Finally, we demonstrate that, when the collection of node representation vectors is incorporated into a neural network classifier, our model outperforms the state-of-the-art GNN-based algorithms in the detection of bot nodes within unknown networks. Full article
30 pages, 2735 KiB  
Article
A Virtual Power Plant-Integrated Proactive Voltage Regulation Framework for Urban Distribution Networks: Enhanced Termite Life Cycle Optimization Algorithm and Dynamic Coordination
by Yonglin Li, Zhao Liu, Changtao Kan, Rongfei Qiao, Yue Yu and Changgang Li
Algorithms 2025, 18(5), 251; https://doi.org/10.3390/a18050251 - 25 Apr 2025
Abstract
Amid global decarbonization mandates, urban distribution networks (UDNs) face escalating voltage volatility due to proliferating distributed energy resources (DERs) and emerging loads (e.g., 5G base stations and data centers). While virtual power plants (VPPs) and network reconfiguration mitigate operational risks, extant methods inadequately model load flexibility and suffer from algorithmic stagnation in non-convex optimization. This study proposes a proactive voltage control framework addressing these gaps through three innovations. First, a dynamic cyber-physical load model quantifies 5G/data centers’ demand elasticity as schedulable VPP resources. Second, an Improved Termite Life Cycle Optimizer (ITLCO) integrates chaotic initialization and quantum tunneling to evade local optima, enhancing convergence in high-dimensional spaces. Third, a hierarchical control architecture coordinates the VPP reactive dispatch and topology adaptation via mixed-integer programming. The effectiveness and economic viability of the proposed strategy are validated through multi-scenario simulations of the modified IEEE 33-bus system (nominally 12.66 kV, though the approach is oriented toward a broader range of voltage scenarios). These advancements establish a scalable paradigm for UDNs to harness DERs and next-gen loads while maintaining grid stability under net-zero transitions. The methodology bridges theoretical gaps in flexibility modeling and metaheuristic optimization, offering utilities a computationally efficient tool for real-world implementation. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
20 pages, 908 KiB  
Article
Assigning Candidate Tutors to Modules: A Preference Adjustment Matching Algorithm (PAMA)
by Nikos Karousos, Despoina Pantazi, George Vorvilas and Vassilios S. Verykios
Algorithms 2025, 18(5), 250; https://doi.org/10.3390/a18050250 - 25 Apr 2025
Abstract
Matching problems arise in various settings where two or more entities need to be matched—such as job applicants to positions, students to colleges, organ donors to recipients, and advertisers to ad slots in web advertising platforms. This study introduces the preference adjustment matching algorithm (PAMA), a novel matching framework that pairs elements, which conceptually represent a bipartite graph structure, based on rankings and preferences. In particular, this algorithm is applied to tutor–module assignment in academic settings, and the methodology is built on four key assumptions: each module must receive its required number of candidates; candidates can be assigned to a module only once; eligible candidates, based on ranking and module capacity, must be assigned; and priority is given to mutual first-preference matches, with institutional policies guiding alternative strategies when needed. PAMA operates in iterative rounds, dynamically adjusting modules and tutors’ preferences while addressing capacity and eligibility constraints. The distinctive innovative element of PAMA is that it combines concepts of maximal and stable matching, pending status, and deadlock resolution into a single process for matching tutors to modules to meet the specific requirements of academic institutions and their constraints. This approach achieves balanced assignments by adhering to ranking order and considering preferences on both sides (tutors and institution). PAMA was applied to a real dataset provided by the Hellenic Open University (HOU), in which 3982 tutors competed for 1906 positions within 620 modules. Its performance was tested through various scenarios and proved capable of effectively handling both single-round and multi-round assignments. PAMA effectively handles complex cases, allowing policy-based resolution of deadlocks. While it may lose maximality in such instances, it converges to stability, offering a flexible solution for matching-related problems.
Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
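The core assignment loop described above — candidates applying in preference order while modules hold their best-ranked applicants up to capacity — follows the familiar deferred-acceptance pattern; a minimal sketch (PAMA's pending states, policy-based deadlock resolution, and multi-round logic are beyond this illustration):

```python
def assign(preferences, rankings, capacity):
    """Deferred-acceptance sketch of ranked tutor-module matching.
    `preferences[t]` lists modules in tutor t's preference order;
    `rankings[m]` ranks tutors for module m (position = priority);
    `capacity[m]` is module m's number of positions. PAMA itself layers
    pending states and deadlock resolution on top of a loop like this."""
    held = {m: [] for m in capacity}      # tutors currently held by module m
    nxt = {t: 0 for t in preferences}     # next preference each tutor will try
    free = [t for t in preferences]
    while free:
        t = free.pop()
        if nxt[t] >= len(preferences[t]):
            continue                      # tutor t has exhausted their list
        m = preferences[t][nxt[t]]
        nxt[t] += 1
        held[m].append(t)
        held[m].sort(key=rankings[m].index)       # best-ranked first
        if len(held[m]) > capacity[m]:
            free.append(held[m].pop())            # bump the worst-ranked
    return held

prefs = {"A": ["M1"], "B": ["M1", "M2"], "C": ["M2"]}
ranks = {"M1": ["A", "B", "C"], "M2": ["B", "C", "A"]}
result = assign(prefs, ranks, capacity={"M1": 1, "M2": 1})
# A takes M1 (top-ranked there); B is bumped to M2, displacing C.
```
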
13 pages, 804 KiB  
Article
A Method for Synthesizing Ultra-Large-Scale Clock Trees
by Ziheng Li, Benyuan Chen, Wanting Wang, Hui Lv, Qinghua Lv, Jie Chen, Yan Wang, Juan Li and Cheng Zhang
Algorithms 2025, 18(5), 249; https://doi.org/10.3390/a18050249 - 25 Apr 2025
Abstract
As integrated circuit technology continues to advance, clock tree synthesis has become increasingly significant in the design of ultra-large-scale integrated circuits. Traditional clock tree synthesis methods often face challenges such as insufficient computational resources and buffer fan-out limitations when dealing with ultra-large-scale clock trees. To address this issue, this paper proposes an improved clock tree synthesis algorithm called incomplete balanced KSR (IB-KSR). Building upon the KSR algorithm, this proposed algorithm efficiently reduces the consumption of computational resources and constrains the fan-out of each buffer by incorporating incomplete minimum spanning tree (IMST) technology and a clustering strategy grounded in Balanced Split. In experiments, the IB-KSR algorithm was compared with the GSR algorithm. The results indicated that IB-KSR reduced the global skew of the clock tree by 43.4% and decreased the average latency by 34.3%. Furthermore, during program execution, IB-KSR maintained low computational resource consumption. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
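The buffer fan-out constraint central to the abstract can be illustrated with a simple capacity-limited clustering of sink coordinates; this is a greedy stand-in for IB-KSR's Balanced-Split clustering, not the published algorithm:

```python
import math

def cluster_sinks(points, fanout):
    """Greedy fan-out-limited clustering of clock sinks: take an
    unclustered sink, group it with its nearest unclustered neighbours,
    and never exceed `fanout` sinks per driving buffer. A simple stand-in
    for IB-KSR's Balanced-Split clustering, not the published algorithm."""
    rest = list(points)
    clusters = []
    while rest:
        seed = rest.pop(0)
        rest.sort(key=lambda p: math.dist(seed, p))   # nearest first
        clusters.append([seed] + rest[:fanout - 1])
        rest = rest[fanout - 1:]
    return clusters

sinks = [(0, 0), (0, 1), (10, 0), (10, 1), (5, 5)]
groups = cluster_sinks(sinks, fanout=2)
```

Each resulting group would be driven by one buffer, and the group centers become the sinks of the next tree level.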
21 pages, 4532 KiB  
Article
Development of Optimal Size Range of Modules for Driving of Automatic Sliding Doors
by Ivo Malakov, Velizar Zaharinov and Hasan Hasansabri
Algorithms 2025, 18(5), 248; https://doi.org/10.3390/a18050248 - 25 Apr 2025
Abstract
The article is dedicated to the choice of an optimal size range of modules driving automatic sliding doors. The optimal size range is a compromise between the conflicting interests of manufacturers and users. The problem is particularly relevant, since the product is widely used in the construction sector, but there are no developments for scientifically sound determination of the elements of the range. Most often in practice, one oversized module is used for all doors, regardless of the conditions of the specific problem. This leads to an increase in the production costs and operating costs. Size range optimization will lead to increase in the competitiveness of the manufactured products and the efficiency of their application. To solve the problem, a developed approach is used, composed of several stages: determining the main parameter of the product; market demand study; selection of an optimality criterion—the total costs for production and operation; determining the functional dependence between the costs and the influencing factors; and building a mathematical model of the problem. Based on a known optimization method, recurrent dependencies for calculating the total costs have been derived. Utilizing developed algorithms and software application, the optimal size range is determined. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
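The recurrent dependencies mentioned above have the shape of a classic dynamic program over candidate sizes; a sketch with a deliberately simplified cost model (a fixed cost per size offered plus a linear oversize penalty — not the paper's cost function):

```python
def optimal_size_range(sizes, demand, fixed_cost, oversize):
    """Recurrent total-cost computation for choosing a size range.
    `sizes` are candidate module sizes in ascending order; `demand[k]`
    units require at least sizes[k]. Each demanded size is served by the
    next chosen size at or above it. The cost model (a fixed cost per
    size offered plus a linear oversize penalty) is illustrative only."""
    def serve(lo, j):   # all demands lo..j served by chosen size sizes[j]
        return sum(demand[k] * (sizes[j] - sizes[k]) * oversize
                   for k in range(lo, j + 1))
    best = [0] * len(sizes)   # best[j]: optimal cost covering 0..j with j chosen
    for j in range(len(sizes)):
        best[j] = fixed_cost + min(
            [best[i] + serve(i + 1, j) for i in range(j)] + [serve(0, j)])
    return best[-1]

# Low fixed cost: offering every size (3 x 5 = 15) beats paying oversize.
total = optimal_size_range([1, 2, 3], [10, 10, 10], fixed_cost=5, oversize=1)
```

Raising the fixed cost per offered size shifts the optimum toward fewer, larger modules, which is exactly the manufacturer-versus-user trade-off the abstract describes.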
19 pages, 7778 KiB  
Article
A Multi-Feature Fusion Algorithm for Fatigue Driving Detection Considering Individual Driver Differences
by Meng Zhou, Xiaoyi Zhou, Zhijian Li, Xinyue Liu and Chengming Chen
Algorithms 2025, 18(5), 247; https://doi.org/10.3390/a18050247 - 25 Apr 2025
Abstract
Fatigue driving is one of the crucial factors causing traffic accidents. Most existing fatigue driving detection algorithms overlook individual driver characteristics, potentially leading to misjudgments. This article presents a novel detection algorithm that utilizes facial multi-feature fusion, thoroughly considering the driver’s individual characteristics. To improve the accuracy of judging the driver’s facial expressions, a personalized threshold is proposed, based on normalizing the opening and closing of the driver’s eyes and mouth, instead of the traditional average threshold, since individual drivers have different eye and mouth sizes. Given the dynamic changes in fatigue level, a sliding window model is designed for further calculating the blinking duration ratio (BF), yawning frequency (YF), and nodding frequency (NF), and these evaluation indexes are used in the feature fusion model. The reliability of the algorithm is verified by actual test results, which show that the detection accuracy reaches 95.6% and demonstrates good potential in fatigue detection applications. In this way, facial multi-feature fusion combined with full consideration of the driver’s individual characteristics makes fatigue driving detection more accurate. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
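The personalized-threshold idea — classifying a frame as eye-closed relative to the driver's own baseline opening, then taking a sliding-window ratio — can be sketched directly (window length, ratio, and the PERCLOS-style output are illustrative choices, not the paper's calibrated values):

```python
from collections import deque

def closed_eye_ratio(eye_openings, baseline_open, window=30, ratio=0.5):
    """Sliding-window fatigue indicator with a personalized threshold:
    a frame counts as eye-closed when the measured opening falls below
    `ratio` of this driver's own baseline opening, rather than a
    population-average threshold. Window length and ratio are
    illustrative, not the paper's calibrated values."""
    threshold = ratio * baseline_open
    win = deque(maxlen=window)
    out = []
    for opening in eye_openings:
        win.append(opening < threshold)
        out.append(sum(win) / len(win))   # fraction of closed frames so far
    return out

r = closed_eye_ratio([10, 10, 2, 2], baseline_open=10, window=4)
```

The yawning and nodding frequencies (YF, NF) would be computed over the same sliding window before entering the fusion model.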
17 pages, 4352 KiB  
Article
Phase Plane Trajectory Planning for Double Pendulum Crane Anti-Sway Control
by Kai Zhang, Wangqing Niu and Kailun Zhang
Algorithms 2025, 18(5), 246; https://doi.org/10.3390/a18050246 - 24 Apr 2025
Abstract
In view of the double pendulum characteristics of cranes in actual production, simply equating them to single pendulum characteristics and ignoring the mass of the hook will lead to significant errors in the oscillation frequency. To tackle this issue, an input-shaping double pendulum anti-sway control method based on phase plane trajectory planning is proposed. This method generates the required acceleration signal by designing an input shaper and calculates the acceleration switching time and amplitude of the trolley according to the phase plane swing angle and the physical constraints of the system. Through this strategy, it is ensured that the speed of the trolley and the swing angle of the load are always kept within the constraint range so that the trolley can reach the target position accurately. The comparative analysis of numerical simulation and existing control methods shows that the proposed control method can significantly reduce the swing angle amplitude and enable the system to enter the swing angle stable state faster. Numerical simulation and physical experiments show the effectiveness of the control method. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
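For background on the input-shaping step, the standard two-impulse ZV shaper shows how impulse times and amplitudes follow from a swing mode's frequency and damping; the paper's shaper is designed from phase-plane constraints and the double-pendulum frequencies, so this is context rather than its exact method:

```python
import math

def zv_shaper(omega, zeta):
    """Standard two-impulse ZV input shaper for an oscillatory mode with
    natural frequency `omega` (rad/s) and damping ratio `zeta`: convolving
    the trolley acceleration command with these impulses cancels the
    residual swing at that frequency. Background for the paper's
    phase-plane design, not its exact shaper."""
    wd = omega * math.sqrt(1.0 - zeta ** 2)                  # damped frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta ** 2))
    return [(0.0, 1.0 / (1.0 + K)),                          # (time, amplitude)
            (math.pi / wd, K / (1.0 + K))]                   # half damped period later

impulses = zv_shaper(omega=2.0, zeta=0.0)
```

With a double pendulum, two such frequencies must be handled, which is why the simple single-mode shaper above produces significant errors when the hook mass is ignored.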

18 pages, 5368 KiB  
Article
DeCGAN: Speech Enhancement Algorithm for Air Traffic Control
by Haijun Liang, Yimin He, Hanwen Chang and Jianguo Kong
Algorithms 2025, 18(5), 245; https://doi.org/10.3390/a18050245 - 24 Apr 2025
Abstract
Air traffic control (ATC) communication is susceptible to speech noise interference, which undermines the quality of civil aviation speech. To resolve this problem, we propose a speech enhancement model, termed DeCGAN, based on the DeConformer generative adversarial network. The model's generator, the DeConformer module, combines a time-frequency channel attention (TFC-SA) module with a deformable convolution-based feedforward neural network (DeConv-FFN) to effectively capture both long-range dependencies and local features of speech signals. The outputs from the two branches, the mask decoder and the complex decoder, are amalgamated to produce an enhanced speech signal. An evaluation-metric discriminator then derives speech quality scores, and adversarial training is applied to generate higher-quality speech. Experiments comparing DeCGAN with other speech enhancement models on the ATC dataset demonstrate that the proposed model is highly competitive: it achieved a perceptual evaluation of speech quality (PESQ) score of 3.31 and a short-time objective intelligibility (STOI) value of 0.96. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
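The abstract states that the mask-decoder and complex-decoder outputs are amalgamated. One common way such branches are fused in spectral-domain speech enhancement, assumed here and not confirmed by the paper, is to apply the mask to the noisy magnitude, reuse the noisy phase, and add the complex refinement:

```python
import numpy as np

def combine_branches(noisy_spec, mask, complex_residual):
    """Hypothetical fusion of the two decoder branches: the mask branch
    scales the noisy magnitude; the complex branch adds a refinement in
    the real/imaginary domain. The exact formula is an assumption."""
    magnitude = np.abs(noisy_spec) * mask   # masked magnitude
    phase = np.angle(noisy_spec)            # reuse noisy phase
    masked = magnitude * np.exp(1j * phase)
    return masked + complex_residual
```

The enhanced spectrum would then be inverted (e.g., inverse STFT) to obtain the time-domain waveform that the discriminator scores.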

62 pages, 10751 KiB  
Review
Unmanned Aerial Vehicles (UAV) Networking Algorithms: Communication, Control, and AI-Based Approaches
by Mien L. Trinh, Dung T. Nguyen, Long Q. Dinh, Mui D. Nguyen, De Rosal Ignatius Moses Setiadi and Minh T. Nguyen
Algorithms 2025, 18(5), 244; https://doi.org/10.3390/a18050244 - 24 Apr 2025
Abstract
This paper focuses on algorithms and technologies for unmanned aerial vehicle (UAV) networking across different fields of application. Given the limitations of UAVs in both computation and communication, UAVs usually need algorithms for either low latency or energy efficiency. In addition, coverage problems should be considered to improve UAV deployment in many monitoring or sensing applications. Hence, this work first addresses common applications of UAV groups or swarms. Communication routing protocols are then reviewed, as they enable UAVs to support these applications. Furthermore, control algorithms are examined to ensure UAVs operate in optimal positions for specific purposes, and AI-based approaches are considered to enhance UAV performance. We provide either the latest work or evaluations of existing results that can suggest suitable solutions for specific practical applications. This work can be considered a comprehensive survey of both general and specific problems associated with UAVs in monitoring and sensing fields. Full article
(This article belongs to the Collection Feature Papers in Algorithms for Multidisciplinary Applications)

17 pages, 7340 KiB  
Article
BWO–ICEEMDAN–iTransformer: A Short-Term Load Forecasting Model for Power Systems with Parameter Optimization
by Danqi Zheng, Jiyun Qin, Zhen Liu, Qinglei Zhang, Jianguo Duan and Ying Zhou
Algorithms 2025, 18(5), 243; https://doi.org/10.3390/a18050243 - 24 Apr 2025
Abstract
Maintaining the equilibrium between electricity supply and demand remains a central concern in power systems. A demand response program can adjust power load demand on the demand side to promote this balance, and load forecasting facilitates its implementation. However, as electricity consumption patterns become more diverse, the resulting load data grow increasingly irregular, making precise forecasting more difficult. This paper therefore develops a specialized forecasting scheme. First, the parameters of improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) are optimized using beluga whale optimization (BWO). Then, the nonlinear power load data are decomposed into multiple subsequences using ICEEMDAN. Finally, each subsequence is independently predicted using the iTransformer model, and the overall forecast is derived by integrating these individual predictions. Load data from Singapore were selected for validation. The results showed that the BWO–ICEEMDAN–iTransformer model outperformed the comparison models, with an R2 of 0.9873, an RMSE of 66.2221, and an MAE of 48.0014. Full article
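The decompose-predict-aggregate pipeline in the abstract can be sketched generically. ICEEMDAN and iTransformer are stood in for by a moving-average split and a persistence forecast, purely to show the data flow; neither stand-in reflects the paper's actual models.

```python
import numpy as np

def toy_decompose(series, window=5):
    """Stand-in for ICEEMDAN: split the series into a smooth trend
    (moving average) and a residual. Real ICEEMDAN yields several
    intrinsic mode functions; two components keep the sketch short."""
    kernel = np.ones(window) / window
    trend = np.convolve(series, kernel, mode="same")
    return [trend, series - trend]

def naive_forecast(component, horizon):
    """Stand-in for the per-subsequence iTransformer: persistence forecast."""
    return np.full(horizon, component[-1])

def decompose_predict_aggregate(series, horizon=3):
    # Forecast each subsequence independently, then sum the forecasts.
    comps = toy_decompose(np.asarray(series, dtype=float))
    return sum(naive_forecast(c, horizon) for c in comps)
```

The key structural property, that the components sum back to the original series so the per-component forecasts can simply be added, holds for ICEEMDAN as well.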

13 pages, 1369 KiB  
Article
Algorithm-Based Real-Time Analysis of Training Phases in Competitive Canoeing: An Automated Approach for Performance Monitoring
by Sergio Amat, Sonia Busquier, Carlos D. Gómez-Carmona, Manuel Gómez-López and José Pino-Ortega
Algorithms 2025, 18(5), 242; https://doi.org/10.3390/a18050242 - 24 Apr 2025
Abstract
The increasing demands in high-performance sports have led to the integration of technological solutions for training optimization. This study aimed to develop and validate an algorithm-based system for analyzing three critical phases in canoe training: initial acceleration, steady-state cruising, and final sprint. Using inertial measurement units (WIMU PRO™) sampling at 10 Hz, we collected performance data from 12 young canoeists at the Mar Menor High-Performance Sports Center. The custom-developed algorithm processed velocity–time data through polynomial fitting and phase detection methods. Results showed distinctive patterns in the acceleration phase, with initial rapid acceleration (5 s to stabilization) deteriorating in subsequent trials (9–10 s). Athletes maintained consistent stabilized speeds (14.62–14.98 km/h) but required increasing distance to stabilize (13.49 to 31.70 m), with slope values decreasing from 2.58% to 0.74% across trials. Performance deterioration was evident in decreasing maximum speeds (18.58 to 17.30 km/h) and minimum speeds (11.17 to 10.17 km/h) across series. The algorithm successfully identified phase transitions and provided real-time feedback on key performance indicators. This approach enables automated detection of training phases and provides quantitative metrics for technique assessment, offering coaches and athletes an objective tool for performance optimization in canoeing. Our aim is to automate an analysis task currently performed manually, using an algorithm built from very basic mathematical tools that coaches can understand and that saves them time. Full article
(This article belongs to the Special Issue Emerging Trends in Distributed AI for Smart Environments)
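The polynomial-fitting and phase-detection step can be illustrated on a velocity–time trace: fit a low-order polynomial, differentiate it, and label samples by thresholding the fitted acceleration. The polynomial degree and threshold below are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def detect_phases(t, v, accel_thresh=0.3):
    """Segment a velocity-time trace by fitting a polynomial and
    thresholding its derivative (the fitted acceleration).
    Degree and threshold are illustrative."""
    coeffs = np.polyfit(t, v, deg=4)
    dv = np.polyval(np.polyder(coeffs), t)  # fitted acceleration
    return np.where(dv > accel_thresh, "accel",
                    np.where(dv < -accel_thresh, "decel", "steady"))
```

On real paddling data, the first contiguous "accel" run corresponds to the initial-acceleration phase, and the time and distance to the first "steady" sample give the stabilization metrics reported above.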

22 pages, 5204 KiB  
Article
Faulty Links’ Fast Recovery Method Based on Deep Reinforcement Learning
by Wanwei Huang, Wenqiang Gui, Yingying Li, Qingsong Lv, Jia Zhang and Xi He
Algorithms 2025, 18(5), 241; https://doi.org/10.3390/a18050241 - 24 Apr 2025
Abstract
Aiming to address the high recovery delay and link congestion issues in the communication networks of Wide-Area Measurement Systems (WAMSs), this paper introduces Software-Defined Networking (SDN) and proposes a deep reinforcement learning-based fast recovery method for faulty links (DDPG-LBBP). The DDPG-LBBP method takes delay and link utilization as the optimization objectives and uses a gated recurrent neural network to accelerate algorithm convergence and output the optimal link weights for load balancing. By designing maximally disjoint backup paths, the method ensures the independence of the primary and backup paths, effectively preventing secondary failures caused by path overlap. Experiments compare DDPG-LBBP with the (1+2ε)-BPCA, FFRLI, and LIR methods on the IEEE 30 and IEEE 57 benchmark power system communication network topologies. The results show that DDPG-LBBP outperforms the others in faulty-link recovery delay, packet loss rate, and recovery success rate. Specifically, compared to the best-performing baseline, (1+2ε)-BPCA, recovery delay is decreased by about 12.26% and recovery success rate is improved by about 6.91%, while packet loss rate is decreased by about 15.31% compared to the best baseline for that metric, FFRLI. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)

16 pages, 2075 KiB  
Article
Improved Trimming Ant Colony Optimization Algorithm for Mobile Robot Path Planning
by Junxia Ma, Qilin Liu, Zixu Yang and Bo Wang
Algorithms 2025, 18(5), 240; https://doi.org/10.3390/a18050240 - 23 Apr 2025
Abstract
Traditional ant colony algorithms for mobile robot path planning often suffer from slow convergence, susceptibility to local optima, and low search efficiency, limiting their applicability in dynamic and complex environments. To address these challenges, this paper proposes an improved trimming ant colony optimization (ITACO) algorithm. The method introduces a dynamic weighting factor into the state transition probability formula to balance global exploration and local exploitation, effectively avoiding local optima. Additionally, the traditional heuristic function is replaced with an artificial potential field attraction function that dynamically adjusts the potential field strength to enhance search efficiency. A path-length-dependent pheromone increment mechanism is also proposed to accelerate convergence, while a triangular pruning strategy removes redundant path nodes and shortens the optimal path. Simulation experiments show that ITACO shortens the path length by up to 62.86% compared to the classical ACO algorithm and by 6.68% compared to the most recent related results. These improvements make ITACO an efficient and reliable solution for mobile robot path planning in challenging scenarios. Full article
(This article belongs to the Special Issue Evolutionary and Swarm Computing for Emerging Applications)
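The triangular pruning strategy named in the abstract has a simple geometric core: for consecutive waypoints A, B, C, drop B whenever the straight chord from A to C is collision-free. A grid-based sketch follows; the sampled line-of-sight test and step count are illustrative simplifications of a proper ray-cast.

```python
def line_of_sight(a, b, obstacles):
    """Check that segment a-b crosses no blocked grid cell.
    Samples the segment densely (illustrative; a grid ray-cast
    such as Bresenham would be used in practice)."""
    steps = 20
    for i in range(steps + 1):
        t = i / steps
        x = a[0] + t * (b[0] - a[0])
        y = a[1] + t * (b[1] - a[1])
        if (round(x), round(y)) in obstacles:
            return False
    return True

def triangular_prune(path, obstacles):
    """Greedily replace each sub-path with the farthest visible waypoint,
    removing redundant intermediate nodes."""
    pruned = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not line_of_sight(path[i], path[j], obstacles):
            j -= 1
        pruned.append(path[j])
        i = j
    return pruned
```

Pruning runs after the ant colony search converges, so it shortens the best found path without touching the pheromone dynamics.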

21 pages, 1150 KiB  
Article
Laplacian Controllability and Observability of Multi-Agent Systems: Recent Advances in Tree Graphs
by Gianfranco Parlangeli
Algorithms 2025, 18(5), 239; https://doi.org/10.3390/a18050239 - 23 Apr 2025
Abstract
Laplacian controllability and observability of a consensus network is a widely studied topic in the areas of multi-agent systems, complex networks, and large-scale systems. In this paper, the problem is addressed when the communication among nodes is described by a starlike tree topology. After a brief description of the mathematical setting adopted in a wide range of multi-agent systems engineering applications, some novel results are derived based only on node positions within the network. The resulting methods are graphical, and therefore both effective and free from numerical errors, and a final algorithm is provided so that the analysis can be performed by machine computation. Several examples show the effectiveness of the proposed algorithm. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
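The numerical test that the paper's graphical conditions replace can be written down directly: build the graph Laplacian and apply the Kalman rank condition for a single leader node. This brute-force check is shown only to make the problem concrete; it is exactly the error-prone numerical computation the graphical approach avoids.

```python
import numpy as np

def laplacian(edges, n):
    """Graph Laplacian L = D - A of an undirected graph on n nodes."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

def is_controllable(L, leader):
    """Kalman rank test for single-leader Laplacian controllability:
    rank [b, Lb, ..., L^(n-1) b] == n, with b the leader's indicator."""
    n = L.shape[0]
    b = np.zeros((n, 1)); b[leader] = 1.0
    C = np.hstack([np.linalg.matrix_power(L, k) @ b for k in range(n)])
    return np.linalg.matrix_rank(C) == n
```

On a three-node path (a degenerate starlike tree), a leader at an end node renders the network controllable, while the centre node does not, because the two branches are symmetric with respect to it.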
