Search Results (4,016)

Search Parameters:
Keywords = LSTM neural networks

16 pages, 2291 KiB  
Article
State of Charge Estimation for Sodium-Ion Batteries Based on LSTM Network and Unscented Kalman Filter
by Xiangang Zuo, Xiaoheng Fu, Xu Han, Meng Sun and Yuqian Fan
Batteries 2025, 11(7), 274; https://doi.org/10.3390/batteries11070274 - 18 Jul 2025
Abstract
With the increasing application of sodium-ion batteries in energy storage systems, accurate state of charge (SOC) estimation plays a vital role in ensuring both available battery capacity and operational safety. Traditional Kalman-filter-based methods often suffer from limited model expressiveness or oversimplified physical assumptions, making it difficult to balance accuracy and robustness under complex operating conditions, which may lead to unreliable estimation results. To address these challenges, this paper proposes a hybrid framework that combines an unscented Kalman filter (UKF) with a long short-term memory (LSTM) neural network for SOC estimation. Under various driving conditions, the UKF—based on a second-order equivalent circuit model with online parameter identification—provides physically interpretable estimates, while LSTM effectively captures complex temporal dependencies. Experimental results under CLTC, NEDC, and WLTC cycles demonstrate that the proposed LSTM-UKF approach reduces the mean absolute error (MAE) by an average of 2% and the root mean square error (RMSE) by an average of 3% compared to standalone methods. The proposed framework exhibits excellent adaptability across different scenarios, offering a precise, stable, and robust solution for SOC estimation in sodium-ion batteries. Full article
(This article belongs to the Section Battery Modelling, Simulation, Management and Application)
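
The article itself does not ship code; as a hedged illustration of the learned half of such a hybrid, the PyTorch sketch below shows an LSTM that maps fixed-length windows of voltage/current/temperature measurements to an SOC value. The class name SocLSTM, the feature set, and all hyperparameters are assumptions for illustration, not the authors' implementation; the UKF built on the second-order equivalent circuit model is not shown.

```python
# Minimal sketch (not the authors' code): an LSTM regressor that maps
# fixed-length windows of battery measurements to a state-of-charge value.
# Window length, layer sizes, and the (V, I, T) feature set are assumptions.
import torch
import torch.nn as nn

class SocLSTM(nn.Module):
    def __init__(self, n_features: int = 3, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window, features) -> SOC in [0, 1] via a sigmoid head
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1, :])).squeeze(-1)

model = SocLSTM()
windows = torch.randn(8, 100, 3)   # 8 windows of 100 samples x (V, I, T)
soc_nn = model(windows)            # network SOC estimate per window
print(soc_nn.shape)                # torch.Size([8])
```

In the paper's framework, an estimate like soc_nn would be fused with the UKF's physically grounded estimate rather than used on its own.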

18 pages, 9419 KiB  
Article
STNet: Prediction of Underwater Sound Speed Profiles with an Advanced Semi-Transformer Neural Network
by Wei Huang, Junpeng Lu, Jiajun Lu, Yanan Wu, Hao Zhang and Tianhe Xu
J. Mar. Sci. Eng. 2025, 13(7), 1370; https://doi.org/10.3390/jmse13071370 - 18 Jul 2025
Abstract
The real-time acquisition of an accurate underwater sound speed profile (SSP) is crucial for tracking the propagation trajectory of underwater acoustic signals, and it therefore plays a key role in ocean communication and positioning. SSPs can be measured directly by instruments or inverted from sound field data. Although measurement techniques provide good accuracy, they are constrained by limited spatial coverage and require a substantial time investment. Inversion from real-time measurements of acoustic field data improves operational efficiency but sacrifices SSP estimation accuracy and suffers from limited spatial applicability due to its stringent requirements for ocean observation infrastructure. To achieve accurate long-term ocean SSP estimation independent of real-time underwater data measurements, we propose a semi-transformer neural network (STNet) specifically designed for modeling sound speed distribution patterns from the perspective of time-series prediction. The proposed network architecture incorporates an optimized self-attention mechanism to effectively capture long-range temporal dependencies within historical sound speed time-series data, facilitating accurate estimation of current SSPs or prediction of future SSPs. Through architectural optimization of the transformer framework and the integration of a time encoding mechanism, STNet also improves computational efficiency. For long-term forecasting (using the Pacific Ocean as a case study), STNet achieved an annual average RMSE of 0.5811 m/s, outperforming the best baseline model, H-LSTM, by 26%. In short-term forecasting for the South China Sea, STNet further reduced the RMSE to 0.1385 m/s, a 51% improvement over H-LSTM. Comparative experimental results revealed that STNet outperformed state-of-the-art models in predictive accuracy while maintaining good computational efficiency, demonstrating its potential for accurate long-term full-depth ocean SSP forecasting. Full article

29 pages, 6396 KiB  
Article
A Hybrid GAS-ATT-LSTM Architecture for Predicting Non-Stationary Financial Time Series
by Kevin Astudillo, Miguel Flores, Mateo Soliz, Guillermo Ferreira and José Varela-Aldás
Mathematics 2025, 13(14), 2300; https://doi.org/10.3390/math13142300 - 18 Jul 2025
Abstract
This study proposes a hybrid approach to analyze and forecast non-stationary financial time series by combining statistical models with deep neural networks. A model is introduced that integrates three key components: the Generalized Autoregressive Score (GAS) model, which captures volatility dynamics; an attention mechanism (ATT), which identifies the most relevant features within the sequence; and a Long Short-Term Memory (LSTM) neural network, which receives the outputs of the previous modules to generate price forecasts. This architecture is referred to as GAS-ATT-LSTM. Both unidirectional and bidirectional variants were evaluated using real financial data from the Nasdaq Composite Index, Invesco QQQ Trust, ProShares UltraPro QQQ, Bitcoin, and gold and silver futures. The proposed model’s performance was compared against five benchmark architectures: LSTM Bidirectional, GARCH-LSTM Bidirectional, ATT-LSTM, GAS-LSTM, and GAS-LSTM Bidirectional, under sliding windows of 3, 5, and 7 days. The results show that GAS-ATT-LSTM, particularly in its bidirectional form, consistently outperforms the benchmark models across most assets and forecasting horizons. It stands out for its adaptability to varying volatility levels and temporal structures, achieving significant improvements in both accuracy and stability. These findings confirm the effectiveness of the proposed hybrid model as a robust tool for forecasting complex financial time series. Full article
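
As a rough sketch of the GAS component alone (not the authors' code), the recursion below filters a score-driven conditional variance in its simplest Gaussian form, where the scaled score reduces to y_t^2 - f_t and the update coincides with a GARCH(1,1)-style recursion; the omega, alpha, and beta values are illustrative, not estimated from the paper's data.

```python
# Sketch of a score-driven (GAS-type) variance recursion, Gaussian case:
# f_{t+1} = omega + alpha * (y_t^2 - f_t) + beta * f_t.
import numpy as np

def gas_variance(returns: np.ndarray,
                 omega: float = 0.05, alpha: float = 0.10,
                 beta: float = 0.85) -> np.ndarray:
    """Return the filtered conditional variance series f_t."""
    f = np.empty_like(returns)
    f[0] = returns.var()                  # initialise at the sample variance
    for t in range(len(returns) - 1):
        score = returns[t] ** 2 - f[t]    # scaled score under N(0, f_t)
        f[t + 1] = omega + alpha * score + beta * f[t]
    return f

rng = np.random.default_rng(0)
r = rng.standard_normal(500) * 0.01       # synthetic daily returns
sigma2 = gas_variance(r)
print(sigma2[:5])
```

In the hybrid architecture, a filtered series like sigma2, together with prices, would feed the attention module and LSTM rather than being used directly as a forecast.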

11 pages, 1540 KiB  
Article
Extraction of Clinically Relevant Temporal Gait Parameters from IMU Sensors Mimicking the Use of Smartphones
by Aske G. Larsen, Line Ø. Sadolin, Trine R. Thomsen and Anderson S. Oliveira
Sensors 2025, 25(14), 4470; https://doi.org/10.3390/s25144470 - 18 Jul 2025
Abstract
As populations age and workforces decline, the need for accessible health assessment methods grows. The combination of accessible, affordable sensors such as inertial measurement units (IMUs) with advanced machine learning techniques now enables gait assessment beyond traditional laboratory settings. In this study, a total of 52 participants walked at three speeds while carrying a smartphone-sized IMU in natural positions (hand, trouser pocket, or jacket pocket). A previously trained Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM)-based machine learning model predicted gait events, which were then used to calculate stride time, stance time, swing time, and double support time. Stride time predictions were highly accurate (<5% error), while stance and swing times exhibited moderate variability and double support time showed the highest errors (>20%). Despite these variations, moderate-to-strong correlations between the predicted and experimental spatiotemporal gait parameters suggest the feasibility of IMU-based gait tracking in real-world settings. These associations preserved inter-subject patterns that are relevant for detecting gait disorders. Our study demonstrated the feasibility of extracting clinically relevant gait parameters using IMU data mimicking smartphone use, especially parameters with longer durations such as stride time. Robustness across sensor locations and walking speeds supports deep learning on single-IMU data as a viable tool for remote gait monitoring. Full article
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition (3rd Edition))
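
For readers unfamiliar with how detected gait events translate into the reported parameters, here is a minimal NumPy sketch under stated assumptions: given heel-strike and toe-off timestamps for one foot (hard-coded toy values here), stride, stance, and swing times are simple differences. Double support time, which the paper found most error-prone, additionally requires the contralateral foot's events and is omitted.

```python
# Sketch (assumptions, not the authors' pipeline): temporal gait parameters
# from per-foot event times predicted by a model.
import numpy as np

hs = np.array([0.00, 1.05, 2.12, 3.16])   # heel strikes, one foot (s)
to = np.array([0.68, 1.73, 2.80])         # toe-offs of the same foot (s)

stride_time = np.diff(hs)                  # HS(i) -> HS(i+1)
stance_time = to - hs[:-1]                 # HS(i) -> TO(i)
swing_time = hs[1:] - to                   # TO(i) -> HS(i+1)

print(stride_time, stance_time, swing_time)
```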

23 pages, 14080 KiB  
Article
Regional Ecological Environment Quality Prediction Based on Multi-Model Fusion
by Yiquan Song, Zhengwei Li and Baoquan Wei
Land 2025, 14(7), 1486; https://doi.org/10.3390/land14071486 - 17 Jul 2025
Abstract
Regional ecological environmental quality (EEQ) is a vital indicator for environmental management and supporting sustainable development. However, the absence of robust and accurate EEQ prediction models has hindered effective environmental strategies. This study proposes a novel approach to address this gap by integrating the ecological index (EI) model with several predictive models, including autoregressive integrated moving average (ARIMA), convolutional neural network (CNN), long short-term memory (LSTM), and cellular automata (CA), to forecast regional EEQ. Initially, the spatiotemporal evolution of the input data used to calculate the EI score was analyzed. Subsequently, tailored prediction models were developed for each dataset. These models were sequentially trained and validated, and their outputs were integrated into the EI model to enhance the accuracy and coherence of the final EEQ predictions. The novelty of this methodology lies not only in integrating existing predictive models but also in employing an innovative fusion technique that significantly improves prediction accuracy. Although data quality issues in the case study dataset led to higher prediction errors in certain regions, the overall results exhibit a high degree of accuracy. A comparison of long-term EI predictions with EI assessment results reveals that the R2 value for the EI score exceeds 0.96, and the kappa value surpasses 0.76 for the EI level, underscoring the robust performance of the integrated model in forecasting regional EEQ. This approach offers valuable insights into exploring regional EEQ trends and future challenges. Full article

20 pages, 7661 KiB  
Article
Incorporating a Deep Neural Network into Moving Horizon Estimation for Embedded Thermal Torque Derating of an Electric Machine
by Alexander Winkler, Pranav Shah, Katrin Baumgärtner, Vasu Sharma, David Gordon and Jakob Andert
Energies 2025, 18(14), 3813; https://doi.org/10.3390/en18143813 - 17 Jul 2025
Abstract
This study presents a novel state estimation approach integrating Deep Neural Networks (DNNs) into Moving Horizon Estimation (MHE). This is a shift from using traditional physics-based models within MHE towards data-driven techniques. Specifically, a Long Short-Term Memory (LSTM)-based DNN is trained using synthetic data derived from a high-fidelity thermal model of a Permanent Magnet Synchronous Machine (PMSM), applied within a thermal derating torque control strategy for battery electric vehicles. The trained DNN is directly embedded within an MHE formulation, forming a discrete-time nonlinear optimal control problem (OCP) solved via the acados optimization framework. Model-in-the-Loop simulations demonstrate accurate temperature estimation even under noisy sensor conditions and simulated sensor failures. Real-time implementation on embedded hardware confirms practical feasibility, achieving computational performance exceeding real-time requirements threefold. By integrating the learned LSTM-based dynamics directly into MHE, this work achieves state estimation accuracy, robustness, and adaptability while reducing modeling efforts and complexity. Overall, the results highlight the effectiveness of combining model-based and data-driven methods in safety-critical automotive control systems. Full article
(This article belongs to the Section F5: Artificial Intelligence and Smart Energy)

19 pages, 3923 KiB  
Article
Automated Aneurysm Boundary Detection and Volume Estimation Using Deep Learning
by Alireza Bagheri Rajeoni, Breanna Pederson, Susan M. Lessner and Homayoun Valafar
Diagnostics 2025, 15(14), 1804; https://doi.org/10.3390/diagnostics15141804 - 17 Jul 2025
Abstract
Background/Objective: Precise aneurysm volume measurement offers a transformative edge for risk assessment and treatment planning in clinical settings. Currently, clinical assessments rely heavily on manual review of medical imaging, a process that is time-consuming and prone to inter-observer variability. The widely accepted standard of care primarily focuses on measuring aneurysm diameter at its widest point, providing a limited perspective on aneurysm morphology and lacking efficient methods to measure aneurysm volumes. Yet, volume measurement can offer deeper insight into aneurysm progression and severity. In this study, we propose an automated approach that leverages the strengths of pre-trained neural networks and expert systems to delineate aneurysm boundaries and compute volumes on an unannotated dataset from 60 patients. The dataset includes slice-level start/end annotations for the aneurysm but no pixel-wise aorta segmentations. Method: Our method utilizes a pre-trained UNet to automatically locate the aorta, employs SAM2 to track the aorta through vascular irregularities such as aneurysms down to the iliac bifurcation, and finally uses a Long Short-Term Memory (LSTM) network or expert system to identify the beginning and end points of the aneurysm within the aorta. Results: Despite the absence of manual aorta segmentation, our approach achieves promising accuracy, predicting the aneurysm start point with an R2 score of 71%, the end point with an R2 score of 76%, and the volume with an R2 score of 92%. Conclusions: This technique has the potential to facilitate large-scale aneurysm analysis and improve clinical decision-making by reducing dependence on annotated datasets. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

18 pages, 11737 KiB  
Article
MoHiPr-TB: A Monthly Gridded Multi-Source Merged Precipitation Dataset for the Tarim Basin Based on Machine Learning
by Ping Chen, Junqiang Yao, Jing Chen, Mengying Yao, Liyun Ma, Weiyi Mao and Bo Sun
Remote Sens. 2025, 17(14), 2483; https://doi.org/10.3390/rs17142483 - 17 Jul 2025
Abstract
A reliable precipitation dataset with high spatial resolution is essential for climate research in the Tarim Basin. This study evaluated the performance of four models, namely a random forest (RF), a long short-term memory network (LSTM), a support vector machine (SVM), and a feedforward neural network (FNN). The FNN, which outperformed the other models, was used to integrate eight precipitation datasets spanning 1990 to 2022 across the Tarim Basin, resulting in a new monthly high-resolution (0.1°) precipitation dataset named MoHiPr-TB. This dataset was subsequently bias-corrected using the China Land Data Assimilation System version 2.0 (CLDAS2.0). Validation results indicate that the corrected MoHiPr-TB not only accurately reflects the spatial distribution of precipitation but also effectively simulates its intensity and its interannual and seasonal variations. Moreover, MoHiPr-TB is capable of detecting the precipitation–elevation relationship in the Pamir Plateau, where precipitation initially increases and then decreases with elevation, as well as the synchronous variation of precipitation and elevation in the Tianshan region. Collectively, this study delivers a high-accuracy precipitation dataset for the Tarim Basin, which is anticipated to have extensive applications in meteorological, hydrological, and ecological research. Full article
(This article belongs to the Section Earth Observation Data)
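
A hedged sketch of the merging idea (not the MoHiPr-TB pipeline): a small feedforward network maps the eight candidate precipitation estimates for a grid cell and month to one merged value. The layer sizes, the synthetic target, and the training loop below are placeholder assumptions.

```python
# Illustrative sketch: a feedforward "merger" over 8 source precipitation
# estimates per sample. The target here is a synthetic placeholder; the real
# pipeline would train against reference observations.
import torch
import torch.nn as nn

merger = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

x = torch.rand(1024, 8)                  # 8 dataset estimates per sample
target = x.mean(dim=1, keepdim=True)     # placeholder reference values
opt = torch.optim.Adam(merger.parameters(), lr=1e-3)
for _ in range(200):                     # toy training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(merger(x), target)
    loss.backward()
    opt.step()
print(float(loss))
```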

25 pages, 4363 KiB  
Article
Method for Predicting Transformer Top Oil Temperature Based on Multi-Model Combination
by Lin Yang, Minghe Wang, Liang Chen, Fan Zhang, Shen Ma, Yang Zhang and Sixu Yang
Electronics 2025, 14(14), 2855; https://doi.org/10.3390/electronics14142855 - 17 Jul 2025
Abstract
The top oil temperature of a transformer is a vital indicator of its operational condition, and its accurate prediction is essential for evaluating insulation performance and extending equipment lifespan. At present, oil temperature prediction is mainly single-feature prediction, which overlooks the influence of other features and degrades prediction accuracy; moreover, the training dataset is often made up of data from a single transformer, which leads to poor generalization. To tackle these challenges, this paper leverages large-scale data analysis and processing techniques and presents a transformer top oil temperature prediction model that combines multiple models. A Convolutional Neural Network (CNN) extracts spatial features from multiple input variables; a Long Short-Term Memory (LSTM) network captures dynamic patterns in the time series; and a Transformer encoder enhances feature interaction and global perception. The spatial characteristics extracted by the CNN and the temporal characteristics extracted by the LSTM are then integrated into a more comprehensive representation, and the resulting model is optimized using the Whale Optimization Algorithm to improve prediction accuracy. Experimental results indicate that the maximum RMSE and MAPE of this method on the summer and winter datasets were 0.5884 and 0.79%, respectively, demonstrating superior prediction accuracy. Compared with other models, the proposed model improved prediction performance by 13.74%, 36.66%, and 43.36%, respectively, indicating high generalization capability and accuracy. This provides theoretical support for condition monitoring and fault warning of power equipment. Full article
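
A rough architectural sketch of such a multi-model combination, with all names and sizes assumed rather than taken from the paper: a Conv1d layer extracts cross-variable features, an LSTM models the temporal dynamics, and a Transformer encoder layer adds global attention before a regression head. The Whale Optimization Algorithm tuning step is not shown.

```python
# Hedged sketch (assumptions throughout): CNN -> LSTM -> Transformer encoder
# -> regression head for next-step top oil temperature.
import torch
import torch.nn as nn

class OilTempNet(nn.Module):
    def __init__(self, n_vars: int = 6, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Conv1d(n_vars, hidden, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.enc = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=4, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, time, variables)
        z = torch.relu(self.cnn(x.transpose(1, 2))).transpose(1, 2)
        z, _ = self.lstm(z)          # temporal dynamics
        z = self.enc(z)              # global feature interaction
        return self.head(z[:, -1])   # predicted oil temperature

print(OilTempNet()(torch.randn(4, 48, 6)).shape)   # torch.Size([4, 1])
```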

18 pages, 533 KiB  
Article
Comparative Analysis of Deep Learning Models for Intrusion Detection in IoT Networks
by Abdullah Waqas, Sultan Daud Khan, Zaib Ullah, Mohib Ullah and Habib Ullah
Computers 2025, 14(7), 283; https://doi.org/10.3390/computers14070283 - 17 Jul 2025
Abstract
The Internet of Things (IoT) holds transformative potential in fields such as power grid optimization, defense networks, and healthcare. However, the constrained processing capacities and resource limitations of IoT networks make them especially susceptible to cyber threats. This study addresses the problem of detecting intrusions in IoT environments by evaluating the performance of deep learning (DL) models under different data and algorithmic conditions. We conducted a comparative analysis of three widely used DL models—Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM), and Bidirectional LSTM (biLSTM)—across four benchmark IoT intrusion detection datasets: BoTIoT, CiCIoT, ToNIoT, and WUSTL-IIoT-2021. Each model was assessed under balanced and imbalanced dataset configurations and evaluated using three loss functions (cross-entropy, focal loss, and dual focal loss). By analyzing model efficacy across these datasets, we highlight the generalizability and adaptability to varied data characteristics that are essential for real-world applications. The results demonstrate that the CNN trained using the cross-entropy loss function consistently outperforms the other models, particularly on balanced datasets. On the other hand, LSTM and biLSTM show strong potential in temporal modeling, but their performance is highly dependent on the characteristics of the dataset. By analyzing the performance of multiple DL models across diverse datasets, this research provides actionable insights for developing secure, interpretable IoT intrusion detection systems that meet real-world challenges. Full article
(This article belongs to the Special Issue Application of Deep Learning to Internet of Things Systems)
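
Of the three loss functions compared, focal loss is the least standard; a minimal PyTorch sketch is below. The gamma = 2.0 setting is an assumed default, and the dual focal loss variant is not shown.

```python
# Sketch of a multi-class focal loss: down-weights well-classified examples
# so training focuses on hard (often minority-class) samples.
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, target: torch.Tensor,
               gamma: float = 2.0) -> torch.Tensor:
    ce = F.cross_entropy(logits, target, reduction="none")  # -log p_t
    p_t = torch.exp(-ce)                                    # recover p_t
    return ((1.0 - p_t) ** gamma * ce).mean()

logits = torch.randn(16, 5)                 # 16 samples, 5 traffic classes
labels = torch.randint(0, 5, (16,))
print(float(focal_loss(logits, labels)))
```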

21 pages, 4238 KiB  
Article
Fault Prediction of Hydropower Station Based on CNN-LSTM-GAN with Biased Data
by Bei Liu, Xiao Wang, Zhaoxin Zhang, Zhenjie Zhao, Xiaoming Wang and Ting Liu
Energies 2025, 18(14), 3772; https://doi.org/10.3390/en18143772 - 16 Jul 2025
Viewed by 65
Abstract
Fault prediction of hydropower stations is crucial for the stable operation of generator set equipment, but traditional methods struggle with imbalanced and untrustworthy data. This paper proposes a fault detection method based on a convolutional neural network (CNN) and a long short-term memory (LSTM) network combined with a generative adversarial network (GAN). First, a reliability mechanism based on principal component analysis (PCA) is designed to resolve the data bias caused by multiple monitoring devices. The CNN-LSTM network is then used to predict time-series data, and the GAN expands the fault data samples to counter the unbalanced data distribution. Meanwhile, a multi-scale feature extraction network with time–frequency information is designed to improve fault detection accuracy. Finally, a dynamic multi-task training algorithm is proposed to ensure the convergence and training efficiency of the deep models. Experimental results show that, compared with RNN, GRU, SVM, and threshold detection algorithms, the proposed fault prediction method improves accuracy by 5.5%, 4.8%, 7.8%, and 9.3%, respectively, with at least a 160% improvement in the fault recall rate. Full article
(This article belongs to the Special Issue Optimal Schedule of Hydropower and New Energy Power Systems)

20 pages, 2632 KiB  
Article
Data-Driven Attack Detection Mechanism Against False Data Injection Attacks in DC Microgrids Using CNN-LSTM-Attention
by Chunxiu Li, Xinyu Wang, Xiaotao Chen, Aiming Han and Xingye Zhang
Symmetry 2025, 17(7), 1140; https://doi.org/10.3390/sym17071140 - 16 Jul 2025
Viewed by 50
Abstract
This study presents a novel spatio-temporal detection framework for identifying False Data Injection (FDI) attacks in DC microgrid systems from the perspective of cyber–physical symmetry. While modern DC microgrids benefit from increasingly sophisticated cyber–physical network integration, this interconnected architecture simultaneously introduces significant cybersecurity vulnerabilities. Notably, FDI attacks can effectively bypass conventional Chi-square detector-based protection mechanisms through malicious manipulation of communication-layer data. To address this critical security challenge, we propose a hybrid deep learning framework that synergistically combines Convolutional Neural Networks (CNNs) for robust spatial feature extraction from power system measurements, Long Short-Term Memory (LSTM) networks for capturing complex temporal dependencies, and an attention mechanism that dynamically weights the most discriminative features. The framework operates through a hierarchical feature extraction process: first-level spatial analysis identifies local measurement patterns; second-level temporal analysis detects sequential anomalies; and attention-based feature refinement focuses on the most attack-relevant signatures. Comprehensive simulation studies demonstrate the superior performance of our CNN-LSTM-Attention framework compared to conventional detection approaches (CNN-SVM and MLP), with significant improvements across all key metrics: accuracy, precision, F1-score, and recall improve by at least 7.17%, 6.59%, 2.72%, and 6.55%, respectively. Full article
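
A hedged sketch of the described hierarchy, with module names and sizes assumed for illustration: a CNN for local measurement patterns, an LSTM for sequential anomalies, and a learned softmax attention that pools the most attack-relevant time steps before classification.

```python
# Sketch (assumptions, not the authors' code): CNN -> LSTM -> attention
# pooling -> binary classifier for normal vs. FDI-attacked measurements.
import torch
import torch.nn as nn

class FdiDetector(nn.Module):
    def __init__(self, n_meas: int = 8, hidden: int = 32):
        super().__init__()
        self.cnn = nn.Conv1d(n_meas, hidden, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)   # per-step relevance score
        self.cls = nn.Linear(hidden, 2)    # normal vs. FDI attack

    def forward(self, x):
        # x: (batch, time, measurements)
        z = torch.relu(self.cnn(x.transpose(1, 2))).transpose(1, 2)
        z, _ = self.lstm(z)
        w = torch.softmax(self.attn(z), dim=1)  # attention over time steps
        pooled = (w * z).sum(dim=1)             # weighted temporal summary
        return self.cls(pooled)

print(FdiDetector()(torch.randn(4, 64, 8)).shape)  # torch.Size([4, 2])
```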

24 pages, 6089 KiB  
Article
An Optimized 1-D CNN-LSTM Approach for Fault Diagnosis of Rolling Bearings Considering Epistemic Uncertainty
by Onur Can Kalay
Machines 2025, 13(7), 612; https://doi.org/10.3390/machines13070612 - 16 Jul 2025
Viewed by 40
Abstract
Rolling bearings are indispensable but also the most fault-prone components of rotating machinery, typically used in fields such as industrial aircraft, production workshops, and manufacturing. They encounter diverse mechanical stresses, such as vibration and friction during operation, which may lead to wear and fatigue cracks. From this standpoint, the present study combined a 1-D convolutional neural network (1-D CNN) with a long short-term memory (LSTM) algorithm for classifying different ball-bearing health conditions. A physics-guided method that adopts fault characteristic frequencies was used to calculate an optimal input size (sample length). Moreover, grid search was utilized to optimize (1) the number of epochs, (2) the batch size, and (3) the dropout ratio, further enhancing the efficacy of the proposed 1-D CNN-LSTM network. This reduces the epistemic uncertainty that arises from not knowing the best possible hyper-parameter configuration. Ultimately, the effectiveness of the physics-guided optimized 1-D CNN-LSTM was tested by comparing its performance with other state-of-the-art models. The findings revealed that the average accuracies could be enhanced by up to 20.717% with the help of the proposed approach after testing it on two benchmark datasets. Full article
(This article belongs to the Section Machines Testing and Maintenance)
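
A grid search of this kind is straightforward to express in plain Python. In the sketch below, train_and_score is a placeholder standing in for training the 1-D CNN-LSTM and returning validation accuracy, and the grid values are assumptions rather than the paper's settings.

```python
# Sketch of an exhaustive hyper-parameter grid search over the three
# quantities named above: epochs, batch size, and dropout ratio.
from itertools import product

grid = {
    "epochs": [30, 50, 100],
    "batch_size": [32, 64, 128],
    "dropout": [0.2, 0.3, 0.5],
}

def train_and_score(epochs: int, batch_size: int, dropout: float) -> float:
    # Placeholder objective; in practice: train the network and return
    # validation accuracy for this configuration.
    return (1.0 - abs(dropout - 0.3)
            - abs(batch_size - 64) / 1000
            - abs(epochs - 50) / 1000)

best = max(
    (dict(zip(grid, combo)) for combo in product(*grid.values())),
    key=lambda cfg: train_and_score(**cfg),
)
print(best)   # the configuration with the highest validation score
```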

17 pages, 2117 KiB  
Article
On-Orbit Life Prediction and Analysis of Triple-Junction Gallium Arsenide Solar Arrays for MEO Satellites
by Huan Liu, Chenjie Kong, Yuan Shen, Baojun Lin, Xueliang Wang and Qiang Zhang
Aerospace 2025, 12(7), 633; https://doi.org/10.3390/aerospace12070633 - 16 Jul 2025
Viewed by 53
Abstract
This paper focuses on the triple-junction gallium arsenide solar array of a MEO (Medium Earth Orbit) satellite that has been in orbit for 7 years. Through a combination of theoretical and data-driven methods, it investigates anti-radiation design verification and life prediction. The study integrates the Long Short-Term Memory (LSTM) algorithm into the full life cycle management of MEO satellite solar arrays, providing a solution that combines theory and engineering for the design of high-reliability energy systems. Based on semiconductor physics, an output current calculation model is established; by combining it with radiation attenuation factors obtained from ground experiments, theoretical current values are derived for initial orbit insertion and end of life. To address the limitations of traditional physical models in capturing solar array performance degradation under complex radiation environments, the paper introduces an LSTM network to mine the high-density current telemetry data (roughly one point every 30 min) accumulated over 7 years in orbit. Comparing the prediction accuracy of the LSTM with traditional models such as the Recurrent Neural Network (RNN) and Feedforward Neural Network (FNN) verifies the significant advantage of the LSTM in capturing the long-term attenuation trend of solar arrays. Overall, the study constructs a closed-loop verification system of "theoretical modeling–data-driven intelligent prediction" and provides a solution for the long-life, high-reliability operation of the energy systems of MEO satellites. Full article
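
As a hedged illustration of the data preparation behind this kind of telemetry forecasting (not the authors' pipeline), the sketch below turns a long, slowly decaying current series into sliding (window, next value) pairs suitable for an LSTM; the series itself is synthetic and all sizes are assumptions.

```python
# Illustrative preprocessing: convert a telemetry series sampled every
# ~30 min into supervised (window -> next value) training pairs.
import numpy as np

def make_windows(series: np.ndarray, window: int = 48):
    """Return (X, y): sliding windows and the value following each window."""
    X = np.stack([series[i:i + window]
                  for i in range(len(series) - window)])
    y = series[window:]
    return X[..., None], y          # feature axis added for a recurrent model

t = np.arange(20000)                # synthetic stand-in for telemetry points
current = 10.0 - 1e-4 * t + 0.05 * np.sin(2 * np.pi * t / 48)  # slow decay
X, y = make_windows(current)
print(X.shape, y.shape)             # (19952, 48, 1) (19952,)
```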

20 pages, 690 KiB  
Article
Wearable Sensor-Based Human Activity Recognition: Performance and Interpretability of Dynamic Neural Networks
by Dalius Navakauskas and Martynas Dumpis
Sensors 2025, 25(14), 4420; https://doi.org/10.3390/s25144420 - 16 Jul 2025
Viewed by 124
Abstract
Human Activity Recognition (HAR) using wearable sensor data is increasingly important in healthcare, rehabilitation, and smart monitoring. This study systematically compared three dynamic neural network architectures—Finite Impulse Response Neural Network (FIRNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU)—to examine their suitability and specificity for HAR tasks. A controlled experimental setup was applied, training 16,500 models across different delay lengths and hidden neuron counts. The investigation focused on classification accuracy, computational cost, and model interpretability. LSTM achieved the highest classification accuracy (98.76%), followed by GRU (97.33%) and FIRNN (95.74%), with FIRNN offering the lowest computational complexity. To improve model transparency, Layer-wise Relevance Propagation (LRP) was applied to both input and hidden layers. The results showed that gyroscope Y-axis data was consistently the most informative, while accelerometer Y-axis data was the least informative. LRP analysis also revealed that GRU distributed relevance more broadly across hidden units, while FIRNN relied more on a small subset. These findings highlight trade-offs between performance, complexity, and interpretability and provide practical guidance for applying explainable neural networks to wearable sensor-based HAR. Full article
