Search Results (22)

Search Parameters:
Keywords = terminal iterative learning

30 pages, 4764 KB  
Article
Training-Free and Environment-Robust Human Motion Segmentation with Commercial WiFi Device: An Image Perspective
by Xu Wang, Linghua Zhang and Feng Shu
Appl. Sci. 2026, 16(1), 373; https://doi.org/10.3390/app16010373 - 29 Dec 2025
Viewed by 209
Abstract
WiFi sensing relies on capturing channel state information (CSI) fluctuations induced by human activities. Accurate motion segmentation is crucial for applications ranging from intrusion detection to activity recognition. However, prevailing methods based on variance, correlation coefficients, or deep learning are often constrained by complex threshold-setting procedures and dependence on high-quality sample data. To address these limitations, this paper proposes a training-free and environment-independent motion segmentation system using commercial WiFi devices from an image-processing perspective. The system employs a novel quasi-envelope to characterize CSI fluctuations and an iterative segmentation algorithm based on an improved Otsu thresholding method. Furthermore, a dedicated motion detection algorithm, leveraging the grayscale distribution of variance images, provides a precise termination criterion for the iterative process. Real-world experiments demonstrate that our system achieves an E-FPR of 0.33% and an E-FNR of 0.20% in counting motion events, with average temporal errors of 0.26 s and 0.29 s in locating the start and end points of human activity, respectively, confirming its effectiveness and robustness. Full article
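
The iterative segmentation described above is built around Otsu-style thresholding of a variance image. As a rough illustration of the underlying idea only (not the authors' quasi-envelope construction or their improved Otsu variant), a minimal Otsu threshold over a 1-D CSI-variance sequence might look like this:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Classic Otsu threshold: maximize between-class variance of a histogram."""
    hist, edges = np.histogram(values, bins=bins)
    prob = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    best_t, best_sigma = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = prob[:k].sum(), prob[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (prob[:k] * centers[:k]).sum() / w0
        mu1 = (prob[k:] * centers[k:]).sum() / w1
        sigma_b = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if sigma_b > best_sigma:
            best_sigma, best_t = sigma_b, centers[k]
    return best_t

# Toy variance sequence: low variance (static) with a burst (motion).
rng = np.random.default_rng(0)
variance = np.concatenate([rng.uniform(0.0, 0.1, 200),
                           rng.uniform(0.5, 1.0, 100),
                           rng.uniform(0.0, 0.1, 200)])
t = otsu_threshold(variance)
motion_mask = variance > t          # candidate motion samples
print(f"threshold={t:.3f}, motion samples={motion_mask.sum()}")
```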

48 pages, 5403 KB  
Article
Enhanced Chimp Algorithm and Its Application in Optimizing Real-World Data and Engineering Design Problems
by Hussam N. Fakhouri, Riyad Alrousan, Hasan Rashaideh, Faten Hamad and Zaid Khrisat
Computation 2026, 14(1), 1; https://doi.org/10.3390/computation14010001 - 20 Dec 2025
Viewed by 310
Abstract
This work proposes an Enhanced Chimp Optimization Algorithm (EChOA) for solving continuous and constrained data science and engineering optimization problems. The EChOA integrates a self-adaptive DE/current-to-pbest/1 (with jDE-style parameter control) variation stage with the canonical four-leader ChOA guidance and augments the search with three lightweight modules: (i) Lévy flight refinement around the incumbent best, (ii) periodic elite opposition-based learning, and (iii) stagnation-aware partial restarts. The EChOA is compared with more than 35 optimizers on the CEC2022 single-objective suite (12 functions). The results show that the EChOA attains state-of-the-art results at both D=10 and D=20. At D=10, it ranks first on all functions (average rank 1.00; 12/12 wins) with the lowest mean objective and the smallest dispersion relative to the strongest competitor (OMA). At D=20, the EChOA retains the best overall rank and achieves top scores on most functions, indicating stable scalability with problem dimension. Pairwise Wilcoxon signed-rank tests (α=0.05) against the full competitor set corroborate statistical superiority on the majority of functions at both dimensions, aligning with the aggregate rank outcomes. Population size studies indicate that larger populations primarily enhance reliability and time to improvement while yielding similar terminal accuracy under a fixed iteration budget. Four constrained engineering case studies (including welded beam, helical spring, pressure vessel, and cantilever stepped beam) further confirm practical effectiveness, with consistently low cost/weight/volume and tight dispersion. Full article
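
The EChOA's variation stage combines DE/current-to-pbest/1 mutation with jDE-style self-adaptation of F and CR. A generic sketch of just those two ingredients (not the full EChOA: the four-leader guidance, Lévy refinement, opposition-based learning, and restarts are omitted) on a toy objective:

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    return float(np.sum(x ** 2))   # toy objective

dim, pop_size, p_best_frac = 10, 30, 0.2
lower, upper = -5.0, 5.0
pop = rng.uniform(lower, upper, (pop_size, dim))
fit = np.array([sphere(x) for x in pop])
F = np.full(pop_size, 0.5)       # per-individual scale factors (jDE)
CR = np.full(pop_size, 0.9)      # per-individual crossover rates (jDE)

for gen in range(200):
    order = np.argsort(fit)
    pbest_pool = order[:max(1, int(p_best_frac * pop_size))]
    for i in range(pop_size):
        # jDE-style self-adaptation: occasionally resample F and CR.
        Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
        CRi = rng.random() if rng.random() < 0.1 else CR[i]
        # DE/current-to-pbest/1 mutation toward a randomly chosen elite solution.
        pbest = pop[rng.choice(pbest_pool)]
        r1, r2 = rng.choice(pop_size, size=2, replace=False)
        mutant = pop[i] + Fi * (pbest - pop[i]) + Fi * (pop[r1] - pop[r2])
        # Binomial crossover and greedy selection.
        mask = rng.random(dim) < CRi
        mask[rng.integers(dim)] = True
        trial = np.clip(np.where(mask, mutant, pop[i]), lower, upper)
        f_trial = sphere(trial)
        if f_trial <= fit[i]:
            pop[i], fit[i] = trial, f_trial
            F[i], CR[i] = Fi, CRi   # keep parameters that produced an improvement

print("best objective:", fit.min())
```
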
26 pages, 1491 KB  
Article
Time and Memory Trade-Offs in Shortest-Path Algorithms Across Graph Topologies: A*, Bellman–Ford, Dijkstra, AI-Augmented A* and a Neural Baseline
by Nahier Aldhafferi
Computers 2025, 14(12), 545; https://doi.org/10.3390/computers14120545 - 10 Dec 2025
Viewed by 565
Abstract
This study presents a comparative evaluation of Dijkstra's algorithm, A*, Bellman–Ford, AI-Augmented A* and a neural AI-based model for shortest-path computation across diverse graph topologies, with a focus on time efficiency and memory consumption under standardized experimental conditions. We analyzed grids, random graphs, and scale-free graphs of sizes up to 10³ nodes, specifically examining 100- and 1000-node grids, 100- and 1000-node random graphs, and 100-node scale-free graphs. The algorithms were benchmarked through repeated runs per condition on a server-class system equipped with an Intel Xeon Gold 6248R processor, NVIDIA Tesla V100 GPU (32 GB), 256 GB RAM, and Ubuntu 20.04. A* consistently outperformed Dijkstra's algorithm when paired with an informative admissible heuristic, exhibiting faster runtimes by approximately 1.37× to 1.91× across various topologies. In comparison, Bellman–Ford was slower than Dijkstra's by approximately 1.50× to 1.92×, depending on graph type and size; however, it remained a robust option in scenarios involving negative edge weights or when early-termination conditions reduced practical iterations. The AI model demonstrated the slowest performance across conditions, incurring runtimes that were 2.60× to 3.23× higher than A* and 1.62× to 2.15× higher than Bellman–Ford, offering limited gains as a direct solver. These findings underscore topology-sensitive trade-offs: A* is preferred when a suitable heuristic is available; Dijkstra's serves as a strong baseline in the absence of heuristics; Bellman–Ford is appropriate for handling negative weights; and current AI approaches are not yet competitive for exact shortest paths but may hold promise as learned heuristics to augment A*. We provide environmental details and comparative results to support reproducibility and facilitate further investigation into hybrid learned-classical strategies. Full article
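
The runtime gap between A* and Dijkstra reported above comes from the admissible heuristic pruning the search frontier. A minimal grid example illustrating both searches with the same priority-queue loop (the Manhattan-distance heuristic and grid sizes below are illustrative choices, not the paper's setup):

```python
import heapq

def grid_neighbors(node, n):
    x, y = node
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx_, ny_ = x + dx, y + dy
        if 0 <= nx_ < n and 0 <= ny_ < n:
            yield (nx_, ny_), 1                      # unit edge weight

def shortest_path(n, start, goal, heuristic=None):
    """Dijkstra when heuristic is None, A* otherwise. Returns (cost, expansions)."""
    h = heuristic or (lambda _: 0)
    dist = {start: 0}
    frontier = [(h(start), start)]
    expanded = 0
    while frontier:
        f_u, u = heapq.heappop(frontier)
        if f_u > dist[u] + h(u):
            continue                                 # stale heap entry
        expanded += 1
        if u == goal:
            return dist[u], expanded
        for v, w in grid_neighbors(u, n):
            nd = dist[u] + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(frontier, (nd + h(v), v))
    return float("inf"), expanded

n, start, goal = 100, (0, 0), (99, 99)
manhattan = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
print("Dijkstra:", shortest_path(n, start, goal))             # many expansions
print("A*      :", shortest_path(n, start, goal, manhattan))  # same cost, fewer expansions
```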

19 pages, 3742 KB  
Article
Adaptive Label Refinement Network for Domain Generalization in Compound Fault Diagnosis
by Qiyan Du, Jiajia Yao, Jingyuan Yang, Fengmiao Tu and Suixian Yang
Sensors 2025, 25(22), 6939; https://doi.org/10.3390/s25226939 - 13 Nov 2025
Viewed by 500
Abstract
Domain generalization (DG) aims to develop models that perform robustly on unseen target domains, a critical but challenging objective for real-world fault diagnosis. The challenge is further complicated in compound fault diagnosis, where the rigidity of hard labels and the simplicity of label smoothing under-represent inter-class relations and compositional structures, degrading cross-domain robustness. While current domain generalization methods can alleviate these issues, they typically rely on multi-source domain data. However, considering the limitations of equipment operational conditions and data acquisition costs in industrial applications, only one or two independently distributed source datasets are typically available. In this work, an adaptive label refinement network (ALRN) was designed for learning with imperfect labels under source-scarce conditions. Compared to hard labels and label smoothing, ALRN learns richer, more robust soft labels that encode the semantic similarities between fault classes. The model first trains a convolutional neural network (CNN) to obtain initial class probabilities. It then iteratively refines the training labels by computing a weighted average of predictions within each class, using the sample-wise cross-entropy loss as an adaptive weighting factor. Furthermore, a label refinement stability coefficient based on the max-min Kullback–Leibler (KL) divergence ratio across classes is proposed to evaluate label quality and determine when to terminate the refinement iterations. With only one or two source domains for training, ALRN achieves accuracy gains exceeding 22% under unseen operating conditions compared with a conventional CNN baseline. These results validate that the proposed label refinement algorithm can effectively enhance the cross-domain diagnostic performance, providing a novel and practical solution for learning with imperfect supervision in cross-domain compound fault diagnosis. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
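
The core refinement step computes, for each class, a weighted average of the network's predictions with weights derived from the per-sample cross-entropy loss, and a KL-divergence ratio monitors when to stop. A minimal NumPy sketch of one refinement pass; the exponential weighting and the exact form of the stability coefficient are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def refine_labels(probs, soft_labels, hard_class):
    """One label-refinement pass.
    probs:       (N, C) softmax predictions from the current CNN
    soft_labels: (N, C) current soft training labels
    hard_class:  (N,)   original hard class index per sample
    """
    eps = 1e-12
    # Sample-wise cross-entropy between current soft labels and predictions.
    ce = -np.sum(soft_labels * np.log(probs + eps), axis=1)
    new_labels = soft_labels.copy()
    for c in np.unique(hard_class):
        idx = np.where(hard_class == c)[0]
        w = np.exp(-ce[idx])                   # assumed weighting: lower loss -> larger weight
        w = w / w.sum()
        new_labels[idx] = (w[:, None] * probs[idx]).sum(axis=0)   # weighted average of predictions
    return new_labels

def stability_coefficient(old, new, hard_class):
    """Assumed form: ratio of the largest to the smallest per-class mean KL(new || old)."""
    eps = 1e-12
    kl = np.sum(new * np.log((new + eps) / (old + eps)), axis=1)
    per_class = np.array([kl[hard_class == c].mean() for c in np.unique(hard_class)])
    return per_class.max() / (per_class.min() + eps)

# Toy usage: 6 samples, 3 classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=6)
labels = np.eye(3)[np.array([0, 0, 1, 1, 2, 2])]
refined = refine_labels(probs, labels, np.array([0, 0, 1, 1, 2, 2]))
print(stability_coefficient(labels, refined, np.array([0, 0, 1, 1, 2, 2])))
```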

18 pages, 4336 KB  
Article
Joint Optimization of Container Resource Defragmentation and Task Scheduling in Queueing Cloud Computing: A DRL-Based Approach
by Yan Guo, Lan Wei, Cunqun Fan, You Ma, Xiangang Zhao and Henghong He
Future Internet 2025, 17(11), 483; https://doi.org/10.3390/fi17110483 - 22 Oct 2025
Viewed by 614
Abstract
Container-based virtualization has become pivotal in cloud computing, yet resource fragmentation is inevitable due to the frequent deployment and termination of containers and the heterogeneous nature of IoT tasks. In queueing cloud systems, resource defragmentation and task scheduling are interdependent yet rarely co-optimized in existing research. This paper addresses this gap by investigating the joint optimization of resource defragmentation and task scheduling in a queueing cloud computing system. We first formulate the problem to minimize task completion time and maximize resource utilization, then transform it into an online decision problem. We propose a Deep Reinforcement Learning (DRL)-based two-layer iterative approach called DRL-RDG, which uses a Resource Defragmentation approach based on a Greedy strategy (RDG) to find the optimal container migration solution and a DRL algorithm to learn the optimal task-scheduling solution. Simulation results show that DRL-RDG achieves a low average task completion time and high resource utilization, demonstrating its effectiveness in queueing cloud environments. Full article
(This article belongs to the Special Issue Convergence of IoT, Edge and Cloud Systems)
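
The goal of the greedy defragmentation layer, freeing up schedulable capacity by migrating containers off fragmented nodes, can be pictured with a toy consolidation pass. This sketch is purely illustrative: the node/container model, the 30% donor threshold, and the tightest-fit rule are assumptions, and neither the paper's RDG strategy nor the DRL scheduler is reproduced here.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    capacity: float
    containers: dict = field(default_factory=dict)   # container_id -> resource demand

    @property
    def used(self):
        return sum(self.containers.values())

    @property
    def free(self):
        return self.capacity - self.used

def greedy_defragment(nodes, donor_util_threshold=0.3):
    """Drain lightly loaded nodes into the tightest-fitting busier nodes,
    so free capacity is concentrated instead of scattered as fragments."""
    migrations = []
    for donor in sorted(nodes, key=lambda n: n.used):
        if donor.used == 0 or donor.used / donor.capacity > donor_util_threshold:
            continue
        for cid, demand in sorted(donor.containers.items(), key=lambda kv: -kv[1]):
            targets = sorted((n for n in nodes if n is not donor and n.free >= demand),
                             key=lambda n: n.free)
            if targets:
                target = targets[0]                   # tightest fit that still accommodates it
                target.containers[cid] = donor.containers.pop(cid)
                migrations.append((cid, target))
    return migrations

nodes = [Node(16, {"a": 2.0}), Node(16, {"b": 10.0, "c": 3.0}), Node(16, {"d": 1.0})]
moves = greedy_defragment(nodes)
print(len(moves), "migrations; free capacity per node:", [n.free for n in nodes])
```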

20 pages, 3029 KB  
Article
The Parameter-Optimized Recursive Sliding Variational Mode Decomposition Algorithm and Its Application in Sensor Signal Processing
by Yunyi Liu, Wenjun He, Tao Pan, Shuxian Qin, Zhaokai Ruan and Xiangcheng Li
Sensors 2025, 25(6), 1944; https://doi.org/10.3390/s25061944 - 20 Mar 2025
Viewed by 3675
Abstract
In industrial polishing, the sensor on the polishing motor needs to extract accurate signals in real time. Due to the insufficient real-time performance of Variational Mode Decomposition (VMD) for signal extraction, some studies have proposed the Recursive Sliding Variational Mode Decomposition (RSVMD) algorithm to address this limitation. However, RSVMD can exhibit unstable performance in strong-interference scenarios. To suppress this phenomenon, a Parameter-Optimized Recursive Sliding Variational Mode Decomposition (PO-RSVMD) algorithm is proposed. The PO-RSVMD algorithm optimizes RSVMD in two ways: First, an iterative termination condition based on modal component error mutation judgment is introduced to prevent over-decomposition. Second, a rate learning factor is introduced to automatically adjust the initial center frequency of the current window and reduce errors. Simulation experiments with signals of different signal-to-noise ratios (SNRs) show that, as the SNR increases from 0 dB to 17 dB, the PO-RSVMD algorithm shortens the iteration time by at least 53% compared to VMD and RSVMD, reduces the number of iterations by at least 57%, and reduces the RMSE by 35% compared to the other two algorithms. Furthermore, when the PO-RSVMD and RSVMD algorithms are applied to Inertial Measurement Unit (IMU) signal extraction under the strong interference present after the polishing motor starts, the average iteration time and number of iterations of PO-RSVMD are significantly lower than those of RSVMD, demonstrating its capability for rapid signal extraction. Moreover, the average RMSE values of the two algorithms are very close, verifying the high real-time performance and stability of PO-RSVMD in practical applications. Full article
(This article belongs to the Section Industrial Sensors)
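
The first optimization above, terminating the decomposition when the modal-component error exhibits a sudden "mutation", can be expressed independently of the VMD machinery itself. The jump-ratio test below is an assumed form for illustration, not the paper's exact criterion:

```python
def should_terminate(errors, jump_ratio=5.0):
    """errors[k]: residual/reconstruction error after extracting k modes.
    Flag an 'error mutation' when the latest change in error is disproportionately
    large compared with the previous change, suggesting over-decomposition."""
    if len(errors) < 3:
        return False
    prev_delta = abs(errors[-2] - errors[-3])
    last_delta = abs(errors[-1] - errors[-2])
    return last_delta > jump_ratio * max(prev_delta, 1e-12)

# Example: steady improvement, then a sudden jump after the fourth mode.
print(should_terminate([1.00, 0.40, 0.35]))          # False
print(should_terminate([1.00, 0.40, 0.35, 0.90]))    # True
```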

13 pages, 4111 KB  
Article
A Novel Tool Wear Identification Method Based on a Semi-Supervised LSTM
by Xin He, Meipeng Zhong, Chengcheng He, Jinhao Wu, Haiyang Yang, Zhigao Zhao, Wei Yang, Cong Jing, Yanlin Li and Chen Gao
Lubricants 2025, 13(2), 72; https://doi.org/10.3390/lubricants13020072 - 7 Feb 2025
Cited by 4 | Viewed by 1416
Abstract
Machine learning models have been widely used in the field of cutting tool wear identification, achieving favorable results. However, in actual industrial scenarios, obtaining sufficient labeled samples is time consuming and costly, while unlabeled samples are abundant and easy to collect. This situation significantly affects the model's performance. To address this challenge, a novel semi-supervised method based on long short-term memory (LSTM) networks is proposed. The proposed method leverages both a small number of labeled samples and abundant unlabeled data to improve tool wear identification performance. It first trains an initial tool wear regression model with LSTM on the small labeled set and then uses manifold regularization to generate pseudo-labels for the unlabeled samples. These pseudo-labeled samples are combined with the original labeled samples to iteratively retrain the manifold-regularized LSTM (MR–LSTM) model and improve its performance. This process continues until a termination condition is met. The method considers the correlation between sample labels and feature structures, as well as the correlation between global and local sample labels. Experiments involving milling tool wear identification demonstrate that the proposed method significantly outperforms support vector regression (SVR) and recurrent neural network (RNN)-based methods when only a small number of labeled samples and abundant unlabeled samples are available. The average R² values of the proposed method's predictions reach above 0.95. The proposed method is a potential technique for low-cost tool wear identification, without the need to collect a large number of labeled samples. Full article
(This article belongs to the Special Issue Advanced Computational Studies in Frictional Contact)
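
The overall semi-supervised loop, fit on labeled data, pseudo-label the unlabeled pool, retrain, and repeat until a termination condition, can be sketched generically. Here a gradient-boosting regressor stands in for the LSTM and a simple agreement-based stop replaces the paper's manifold-regularization machinery; both are assumptions for illustration only:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def self_training_regression(X_lab, y_lab, X_unlab, rounds=5, tol=1e-3):
    """Iteratively augment the labeled set with pseudo-labels until they stop changing."""
    model = GradientBoostingRegressor(random_state=0).fit(X_lab, y_lab)
    pseudo = model.predict(X_unlab)
    for _ in range(rounds):
        X_aug = np.vstack([X_lab, X_unlab])
        y_aug = np.concatenate([y_lab, pseudo])
        model = GradientBoostingRegressor(random_state=0).fit(X_aug, y_aug)
        new_pseudo = model.predict(X_unlab)
        if np.mean(np.abs(new_pseudo - pseudo)) < tol:   # termination condition
            break
        pseudo = new_pseudo
    return model

# Toy wear-like data: few labeled samples, many unlabeled ones.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(300, 3))
y = 0.5 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0, 0.1, 300)
model = self_training_regression(X[:30], y[:30], X[30:])
```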

18 pages, 944 KB  
Article
Real-Time Data Collection and Trajectory Scheduling Using a DRL–Lagrangian Framework in Multiple UAVs Collaborative Communication Systems
by Shanshan Wang and Zhiyong Luo
Remote Sens. 2024, 16(23), 4378; https://doi.org/10.3390/rs16234378 - 23 Nov 2024
Cited by 3 | Viewed by 2326
Abstract
UAV-assisted communication facilitates efficient data collection from IoT nodes by exploiting UAVs' flexible deployment and wide coverage capabilities. In this paper, we consider a scenario in which UAVs equipped with high-precision sensors collect sensing data from ground terminals (GTs) in real time over a wide geographic area and transmit the collected data to a ground base station (BS). Our research aims to jointly optimize the trajectory scheduling and the allocation of collection time slots for multiple UAVs, to maximize the system's data collection rates and fairness while minimizing energy consumption within the task deadline. Due to UAVs' limited sensing distance and battery energy, ensuring timely data processing in target areas presents a challenge. To address this issue, we propose a novel constraint-optimization-based deep reinforcement learning–Lagrangian UAV real-time data collection management (CDRLL–RDCM) framework utilizing centralized training and distributed execution. In this framework, a CNN–GRU network extracts spatial and temporal features of the environmental information. We then introduce the PPO–Lagrangian algorithm to iteratively update the policy network and Lagrange multipliers at different time scales, enabling the learning of more effective collaborative policies for real-time UAV decision-making. Extensive simulations show that our proposed framework significantly improves the efficiency of multi-UAV collaboration and substantially reduces data staleness. Full article
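
The Lagrangian element of PPO–Lagrangian alternates policy updates with a slower dual update of the multipliers attached to the constraint costs. The multiplier update itself is plain projected dual ascent; a small sketch in which the constraint limits and costs are invented for illustration:

```python
import numpy as np

def update_multipliers(lams, avg_constraint_costs, limits, lr=0.01):
    """Projected dual ascent: raise a multiplier while its constraint is violated,
    relax it toward zero once the constraint is satisfied."""
    lams = lams + lr * (avg_constraint_costs - limits)
    return np.maximum(lams, 0.0)

# Example with two assumed constraints (per-episode energy, data staleness).
lams = np.zeros(2)
limits = np.array([50.0, 5.0])
for episode_costs in [np.array([70.0, 4.0]), np.array([60.0, 6.0]), np.array([45.0, 4.5])]:
    lams = update_multipliers(lams, episode_costs, limits)
    # The policy is then trained on reward minus lams @ constraint_costs.
print("multipliers:", lams)
```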

30 pages, 5421 KB  
Article
A Comprehensive Investigation on Catalytic Behavior of Anaerobic Jar Gassing Systems and Design of an Enhanced Cultivation System
by Fatih S. Sayin, Hasan Erdal, Nurver T. Ulger, Mehmet B. Aksu and Mehmet M. Guncu
Bioengineering 2024, 11(11), 1068; https://doi.org/10.3390/bioengineering11111068 - 25 Oct 2024
Cited by 1 | Viewed by 3466
Abstract
The rapid and reliable diagnosis of anaerobic bacteria constitutes one of the key procedures in clinical microbiology. Automatic jar gassing systems are commonly used laboratory instruments for this purpose. The most critical factors affecting the cultivation performance of these systems are the level of residual oxygen remaining in the anaerobic jar and the reaction rate determined by the Pd/Al₂O₃ catalyst. The main objective of the presented study is to design and manufacture an enhanced jar gassing system equipped with an extremum-seeking-based estimation algorithm that combines real-time data and a reaction model of the Pd/Al₂O₃ catalyst. The microkinetic behavior of the palladium catalyst was modeled through a learning-from-experiment methodology. The majority of the microkinetic model parameters were derived from material characterization analysis. A comparative validation test of the designed cultivation system was conducted against conventional gas pouches using six different bacterial strains. The results demonstrated high cell viability, with colony counts ranging from 1.26 × 10⁵ to 2.17 × 10⁵ CFU mL⁻¹. The catalyst facets favorable for water formation on Pd surfaces and the crystal structure of the Pd/Al₂O₃ pellets were identified by X-ray diffraction (XRD) analysis. The doping ratio of the noble metal (Pd) to the support material (Al₂O₃) was validated via energy-dispersive spectroscopy (EDS) measurements as 0.68% and 99.32%, respectively. The porous structure of the catalyst was also analyzed by scanning electron microscopy (SEM). During the reference clinical trial, the estimation algorithm was terminated after 878 iterations, having reached its predetermined termination value. The measured and modelled reaction rates were found to converge with a root-mean-squared error (RMSE) of less than 10⁻⁴, and the Arrhenius parameters of the ongoing catalytic reaction were obtained. Additionally, our research offers a comprehensive analysis of anaerobic jar gassing systems from an engineering perspective, providing novel insights that are absent from the existing literature. Full article
(This article belongs to the Section Biochemical Engineering)
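
Recovering Arrhenius parameters from measured reaction rates, as done at the end of the clinical trial, reduces to a linear fit of ln k against 1/T. A worked example with made-up temperatures and rate constants (not the paper's data):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical rate constants measured at several jar temperatures.
T = np.array([298.0, 308.0, 318.0, 328.0])           # K
k = np.array([1.2e-3, 2.9e-3, 6.5e-3, 1.4e-2])       # arbitrary rate units

# Arrhenius: k = A * exp(-Ea / (R T))  =>  ln k = ln A - (Ea / R) * (1 / T)
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R          # activation energy, J/mol
A = np.exp(intercept)    # pre-exponential factor
print(f"Ea ≈ {Ea / 1000:.1f} kJ/mol, A ≈ {A:.3g}")
```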

15 pages, 6534 KB  
Article
Groundwater Pollution Source and Aquifer Parameter Estimation Based on a Stacked Autoencoder Substitute
by Han Wang, Jinping Zhang, Hang Li, Guanghua Li, Jiayuan Guo and Wenxi Lu
Water 2024, 16(18), 2564; https://doi.org/10.3390/w16182564 - 10 Sep 2024
Cited by 2 | Viewed by 1210
Abstract
A concurrent heuristic search iterative process (CHSIP) is used for estimating groundwater pollution sources and aquifer parameters in this work. Frequent calls to the numerical simulation of groundwater pollution impose a huge computational load during the CHSIP. Therefore, a valid way to mitigate this is to build a substitute that emulates the numerical simulation at a low computational cost. However, there is a complicated nonlinear relationship between the inputs and outputs of the numerical simulation owing to the large number of variables. This leads to poor approximation accuracy of the substitute relative to the simulation when shallow learning methods are used. Therefore, we first built a stacked autoencoder substitute, using a deep learning method, to improve the approximation accuracy of the substitute with respect to the numerical simulation. In total, 400 training samples and 100 testing samples for the substitute were collected by employing the Latin hypercube sampling method and running the numerical simulator. The CHSIP was then employed to estimate the groundwater pollution sources and aquifer parameters, and the estimated outcome was obtained when the CHSIP terminated. Data analysis, including interval estimation and point estimation, was implemented on the MATLAB platform. A hypothetical case study was set up to verify our approach, which shows that the CHSIP is helpful for estimating groundwater pollution sources and aquifer parameters and that the stacked autoencoder method can effectively improve the approximation precision of the substitute for the simulator. Full article
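
Generating the 400 training and 100 testing samples for the substitute rests on Latin hypercube sampling over the unknown source and aquifer parameter ranges. A small sketch using SciPy's qmc module, with the parameter bounds invented for illustration:

```python
import numpy as np
from scipy.stats import qmc

# Assumed ranges for the unknowns (e.g., source release strengths and aquifer parameters).
lower = np.array([0.0, 0.0, 5.0, 1e-4])
upper = np.array([100.0, 100.0, 50.0, 1e-2])

sampler = qmc.LatinHypercube(d=len(lower), seed=0)
train_inputs = qmc.scale(sampler.random(n=400), lower, upper)   # surrogate training inputs
test_inputs = qmc.scale(sampler.random(n=100), lower, upper)    # held-out testing inputs

# Each row would then be run through the groundwater simulator to produce the
# outputs used to train and test the stacked-autoencoder substitute.
print(train_inputs.shape, test_inputs.shape)
```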

26 pages, 1413 KB  
Article
Active Learning for Biomedical Article Classification with Bag of Words and FastText Embeddings
by Paweł Cichosz
Appl. Sci. 2024, 14(17), 7945; https://doi.org/10.3390/app14177945 - 6 Sep 2024
Viewed by 2321
Abstract
In several applications of text classification, training document labels are provided by human evaluators, and therefore, gathering sufficient data for model creation is time consuming and costly. The labeling time and effort may be reduced by active learning, in which classification models are created based on relatively small training sets, which are obtained by collecting class labels provided in response to labeling requests or queries. This is an iterative process with a sequence of models being fitted, and each of them is used to select query articles to be added to the training set for the next one. Such a learning scenario may pose different challenges for machine learning algorithms and text representation methods used for text classification than ordinary passive learning, since they have to deal with very small, often imbalanced data, and the computational expense of both model creation and prediction has to remain low. This work examines how classification algorithms and text representation methods that have been found particularly useful by prior work handle these challenges. The random forest and support vector machines algorithms are coupled with the bag of words and FastText word embedding representations and applied to datasets consisting of scientific article abstracts from systematic literature review studies in the biomedical domain. Several strategies are used to select articles for active learning queries, including uncertainty sampling, diversity sampling, and strategies favoring the minority class. Confidence-based and stability-based early stopping criteria are used to generate active learning termination signals. The results confirm that active learning is a useful approach to creating text classification models with limited access to labeled data, making it possible to save at least half of the human effort needed to assign relevant or irrelevant class labels to training articles. Two of the four examined combinations of classification algorithms and text representation methods were the most successful: the SVM algorithm with the FastText representation and the random forest algorithm with the bag of words representation. Uncertainty sampling turned out to be the most useful query selection strategy, and confidence-based stopping was found more universal and easier to configure than stability-based stopping. Full article
(This article belongs to the Special Issue Data and Text Mining: New Approaches, Achievements and Applications)
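
The uncertainty-sampling strategy that performed best simply asks the evaluator to label the pool articles the current model is least sure about. A condensed sketch with scikit-learn, where the confidence-based stopping rule and the use of ground-truth labels as a stand-in oracle are simplifying assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def uncertainty_sampling(X, y, seed_idx, batch=10, rounds=20, stop_conf=0.9):
    """Active learning loop: repeatedly label the pool items the model is least sure about.
    Here y already holds the labels; in a real study the queried articles would be sent
    to a human evaluator instead. seed_idx should cover both classes."""
    labeled = list(seed_idx)
    clf = None
    for _ in range(rounds):
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X[labeled], y[labeled])
        pool = np.setdiff1d(np.arange(len(X)), labeled)
        if pool.size == 0:
            break
        confidence = clf.predict_proba(X[pool]).max(axis=1)
        if confidence.min() > stop_conf:            # crude confidence-based stopping signal
            break
        queries = pool[np.argsort(confidence)[:batch]]   # least confident articles first
        labeled.extend(queries.tolist())
    return clf, labeled
```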

21 pages, 5459 KB  
Article
Fault Localization in Multi-Terminal DC Distribution Networks Based on PSO Algorithm
by Mingyuan Wang and Yan Xu
Electronics 2024, 13(17), 3420; https://doi.org/10.3390/electronics13173420 - 28 Aug 2024
Cited by 3 | Viewed by 1411
Abstract
Flexible DC power grids are widely recognized as an important component of building smart grids. Compared with traditional AC power grids, flexible DC power grids have strong technical advantages in islanding power supplies, distributed power supplies, regional power supplies, and AC system interconnection. In multi-terminal flexible DC power grids containing renewable energy sources such as solar and wind power, due to the instability and intermittency of renewable energy, it is usually necessary to add energy storage units to pre-regulate the power of the multi-terminal flexible DC power grid in islanded operation. To address the large current impact and serious consequences that arise when a flexible DC distribution network fails, a combined fault-location method is proposed that couples an improved impedance method (with current-limiting reactors connected in series at both ends of the line to obtain a more accurate current differential value) with a particle swarm optimization algorithm. Initially, by establishing the enhanced impedance model, the differential variables under inter-electrode short-circuit and single-pole grounding fault conditions can be obtained. Tailor-made fitness functions are then designed for these two models to optimize parameter identification. Subsequently, the iterative parameters of the particle swarm optimization algorithm are fine-tuned, giving the particles dynamic social and self-learning behavior during the iterative process, which significantly improves the convergence speed and avoids local optima. Finally, various fault types in a six-terminal DC distribution network are simulated and analyzed in MATLAB, and the results show that this method has good accuracy and robustness. This research provides strong theoretical and methodological support for improving the safety and reliability of DC distribution systems. Full article
(This article belongs to the Special Issue Advanced Online Monitoring and Fault Diagnosis of Power Equipment)
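
The fine-tuned particle swarm iteration, inertia plus time-varying cognitive ("self-learning") and social coefficients, follows the standard update equations. A compact sketch in which a placeholder fitness function stands in for the fault-location residual, and the coefficient schedules are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(x):
    """Placeholder for the fault-location residual (e.g., over fault distance and resistance)."""
    return float(np.sum((x - 3.0) ** 2))

n_particles, dim, iters = 40, 2, 200
lo, hi = 0.0, 10.0
pos = rng.uniform(lo, hi, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(x) for x in pos])
gbest = pbest[pbest_val.argmin()].copy()

for t in range(iters):
    w = 0.9 - 0.5 * t / iters          # decreasing inertia
    c1 = 2.5 - 1.5 * t / iters         # self-learning (cognitive) term fades
    c2 = 0.5 + 1.5 * t / iters         # social term grows, speeding convergence
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(x) for x in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("estimated parameters:", gbest)
```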

22 pages, 586 KB  
Article
A New Alternating Suboptimal Dynamic Programming Algorithm with Applications for Feature Selection
by David Podgorelec, Borut Žalik, Domen Mongus and Dino Vlahek
Mathematics 2024, 12(13), 1987; https://doi.org/10.3390/math12131987 - 27 Jun 2024
Cited by 4 | Viewed by 2181
Abstract
Feature selection is predominantly used in machine learning tasks, such as classification, regression, and clustering. It selects a subset of features (relevant attributes of data points) from a larger set that contributes as optimally as possible to the informativeness of the model. There are exponentially many subsets of a given set, and thus, the exhaustive search approach is only practical for problems with at most a few dozen features. In the past, there have been attempts to reduce the search space using dynamic programming. However, models that consider similarity in pairs of features alongside the quality of individual features do not provide the required optimal substructure. As a result, algorithms, which we will call suboptimal dynamic programming algorithms, find a solution that may deviate significantly from the optimal one. In this paper, we propose an iterative dynamic programming algorithm, which inverts the order of feature processing in each iteration. Such an alternating approach allows for improving the optimization function by using the score from the previous iteration to estimate the contribution of unprocessed features. The iterative process is proven to converge and terminates when the solution does not change in three successive iterations or when the number of iterations reaches the threshold. Results in more than 95% of tests align with those of the exhaustive search approach, being competitive and often superior to the reference greedy approach. Validation was carried out by comparing the scores of output feature subsets and examining the accuracy of different classifiers learned on these features across nine real-world applications, considering different scenarios with various numbers of features and samples. In the context of feature selection, the proposed algorithm can be characterized as a robust filter method that can improve machine learning models regardless of dataset size. However, we expect that the idea of alternating suboptimal optimization will soon be generalized to tasks beyond feature selection. Full article
(This article belongs to the Special Issue Dynamic Programming)
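
The alternating idea, sweeping over the features in one order, then reversing the order next iteration while reusing the previous selection to score each feature, until the subset stops changing, can be illustrated with a simple quality-minus-redundancy score. This toy sweep only mirrors the alternating loop and stopping rule; it is not the paper's dynamic programming recursion:

```python
import numpy as np

def alternating_selection(quality, similarity, max_stable=3, max_iters=50):
    """quality[i]: individual feature score; similarity[i, j]: pairwise redundancy."""
    n = len(quality)
    selected = np.ones(n, dtype=bool)          # start from the full set
    history, order = [], np.arange(n)
    for _ in range(max_iters):
        for i in order:
            others = selected.copy()
            others[i] = False
            # Marginal contribution of feature i given the rest of the current subset.
            gain = quality[i] - similarity[i, others].sum()
            selected[i] = gain > 0
        history.append(selected.copy())
        if len(history) >= max_stable and all(
                np.array_equal(history[-1], h) for h in history[-max_stable:]):
            break                               # unchanged for three successive iterations
        order = order[::-1]                     # invert the processing order for the next sweep
    return np.where(selected)[0]

rng = np.random.default_rng(3)
q = rng.uniform(0.2, 1.0, 8)
S = rng.uniform(0.0, 0.3, (8, 8)); S = (S + S.T) / 2; np.fill_diagonal(S, 0)
print("selected features:", alternating_selection(q, S))
```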

33 pages, 5783 KB  
Article
A Data-Light and Trajectory-Based Machine Learning Approach for the Online Prediction of Flight Time of Arrival
by Zhe Zheng, Bo Zou, Wenbin Wei and Wen Tian
Aerospace 2023, 10(8), 675; https://doi.org/10.3390/aerospace10080675 - 28 Jul 2023
Cited by 7 | Viewed by 3431
Abstract
The ability to accurately predict flight time of arrival in real time during a flight is critical to the efficiency and reliability of aviation system operations. This paper proposes a data-light and trajectory-based machine learning approach for the online prediction of estimated time of arrival at terminal airspace boundary (ETA_TAB) and estimated landing time (ELDT), while a flight is airborne. Rather than requiring a large volume of data on aircraft aerodynamics, en-route weather, and traffic, this approach uses only flight trajectory information on latitude, longitude, and speed. The approach consists of four modules: (a) reconstructing the sequence of trajectory points from the raw trajectory that has been flown, and identifying its best-matched historical trajectory which bears the most similarity; (b) predicting the remaining trajectory, based on what has been flown and the best-matched historical trajectory; this is achieved by developing a long short-term memory (LSTM) network trajectory prediction model; (c) predicting the ground speed of the flight along its predicted trajectory, iteratively using the current position and previous speed information; to this end, a gradient boosting machine (GBM) speed prediction model is developed; and (d) predicting ETA_TAB using trajectory and speed prediction from (b) and (c), and using ETA_TAB to further predict ELDT. Since LSTM and GBM models can be trained offline, online computation efforts are kept at a minimum. We apply this approach to real-world flights in the US. Based on our findings, the proposed approach yields better prediction performance than multiple alternative methods. The proposed approach is easy to implement, fast to perform, and effective in prediction, thus presenting an appeal to potential users, especially those interested in flight ETA prediction in real time but having limited data access. Full article
(This article belongs to the Special Issue Advances in Air Traffic and Airspace Control and Management)
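
Step (d), turning a predicted remaining trajectory and predicted ground speeds into an ETA, is a time integration over great-circle segment lengths. A small sketch with haversine distances; the waypoints and speeds below are invented purely for illustration:

```python
import math

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    R = 6371.0
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def eta_seconds(waypoints, speeds_kmh):
    """Sum each segment's length divided by the predicted ground speed on that segment."""
    total = 0.0
    for (p, q), v in zip(zip(waypoints, waypoints[1:]), speeds_kmh):
        total += haversine_km(p, q) / max(v, 1e-6) * 3600.0
    return total

# Invented remaining trajectory (lat, lon) and per-segment predicted ground speeds.
track = [(37.62, -122.38), (37.80, -121.90), (38.00, -121.40), (38.21, -120.95)]
speeds = [750.0, 720.0, 430.0]   # km/h, slowing toward the terminal airspace boundary
print(f"predicted time to the terminal airspace boundary: {eta_seconds(track, speeds) / 60:.1f} min")
```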

16 pages, 1360 KB  
Article
Learning-Based Model Predictive Control for Autonomous Racing
by João Pinho, Gabriel Costa, Pedro U. Lima and Miguel Ayala Botto
World Electr. Veh. J. 2023, 14(7), 163; https://doi.org/10.3390/wevj14070163 - 21 Jun 2023
Cited by 9 | Viewed by 8756
Abstract
In this paper, we present the adaptation of the terminal component learning-based model predictive control (TC-LMPC) architecture for autonomous racing to the Formula Student Driverless (FSD) context. We test the TC-LMPC architecture, a reference-free controller that is able to learn from previous iterations by building an appropriate terminal safe set and terminal cost from collected trajectories and input sequences, in a vehicle simulator dedicated to the FSD competition. One major problem in autonomous racing is the difficulty in obtaining accurate highly nonlinear vehicle models that cover the entire performance envelope. This is more severe as the controller pushes for incrementally more aggressive behavior. To address this problem, we use offline and online measurements and machine learning (ML) techniques for the online adaptation of the vehicle model. We test two sparse Gaussian process regression (GPR) approximations for model learning. The novelty in the model learning segment is the use of a selection method for the initial training dataset that maximizes the information gain criterion. The TC-LMPC with model learning achieves a 5.9 s reduction (3%) in the total 10-lap FSD race time. Full article
(This article belongs to the Special Issue Advances in ADAS)
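
The terminal ingredients of such an LMPC scheme, a sampled safe set of previously visited states and a terminal cost equal to the recorded cost-to-go, are built directly from stored lap data. A schematic construction in which the data structures, the stage-cost definition, and the nearest-neighbor query are simplified assumptions rather than the paper's implementation:

```python
import numpy as np

def build_terminal_components(laps):
    """laps: list of (states, stage_costs) arrays from completed laps.
    Returns the sampled safe set and a cost-to-go value per stored state."""
    safe_set, terminal_cost = [], []
    for states, stage_costs in laps:
        ctg = np.cumsum(stage_costs[::-1])[::-1]   # cost-to-go along the lap
        safe_set.append(np.asarray(states))
        terminal_cost.append(ctg)
    return np.vstack(safe_set), np.concatenate(terminal_cost)

def terminal_value(x, safe_set, terminal_cost, k=5):
    """Approximate terminal cost at state x from its k nearest stored safe states."""
    d = np.linalg.norm(safe_set - x, axis=1)
    nearest = np.argsort(d)[:k]
    return terminal_cost[nearest].min()

# Toy example: two stored laps with 4-dimensional states and constant per-step lap-time cost.
rng = np.random.default_rng(4)
laps = [(rng.normal(size=(50, 4)), np.full(50, 0.1)),
        (rng.normal(size=(60, 4)), np.full(60, 0.1))]
S, J = build_terminal_components(laps)
print(terminal_value(np.zeros(4), S, J))
```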
