Article

Effects of Window and Batch Size on Autoencoder-LSTM Models for Remaining Useful Life Prediction

1 Department of Semiconductor Engineering, Hoseo University, Asan 31499, Republic of Korea
2 MyMeta Co., Ltd., Seoul 08790, Republic of Korea
* Author to whom correspondence should be addressed.
Machines 2026, 14(2), 135; https://doi.org/10.3390/machines14020135
Submission received: 28 December 2025 / Revised: 20 January 2026 / Accepted: 22 January 2026 / Published: 23 January 2026

Abstract

Remaining useful life (RUL) prediction is central to predictive maintenance, but acquiring sufficient run-to-failure data remains challenging. To better exploit limited labeled data, this study investigates a pipeline that combines an unsupervised autoencoder (AE) with supervised LSTM regression on the NASA C-MAPSS dataset. Building on an AE-LSTM baseline, we analyze how window size and batch size affect accuracy and training efficiency. Using the FD001 and FD004 subsets with capped RUL training labels, we perform multi-seed experiments over a wide grid of window lengths and batch sizes. The AE is pre-trained on normalized sensor streams and reused as a feature extractor, while the LSTM head is trained with early stopping. Performance is assessed using RMSE, the C-MAPSS score, and training time, with 95% confidence intervals reported. Results show that fine-tuning the encoder with a batch size of 128 yields the best mean RMSE: 13.99 on FD001 and 28.67 on FD004. We identify stable optimal window ranges (40–70 for FD001; 60–80 for FD004) and find that batch sizes of 64–256 offer the best accuracy–efficiency trade-off. These optimal ranges are further validated using Particle Swarm Optimization (PSO). The findings offer practical recommendations for tuning AE-LSTM-based RUL prediction models and demonstrate that performance remains stable within specific hyperparameter ranges.
Keywords: remaining useful life; unsupervised learning; hyperparameter optimization; statistical validation; stability analysis
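For readers wanting a concrete starting point, below is a minimal NumPy sketch of two pieces the abstract refers to: sliding-window extraction with capped (piecewise-linear) RUL labels, and the asymmetric C-MAPSS score. The helper names and the cap value of 125 are illustrative assumptions (the abstract says RUL labels are capped for training but does not state the cap); the score constants 13 and 10 follow the standard C-MAPSS scoring function, which penalizes late predictions more heavily than early ones.

```python
import numpy as np

def cmapss_score(y_true, y_pred):
    """Standard asymmetric C-MAPSS score: d = predicted - true RUL.
    Late predictions (d > 0) are penalized more heavily than early ones."""
    d = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    return float(np.sum(np.where(d < 0,
                                 np.exp(-d / 13.0) - 1.0,
                                 np.exp(d / 10.0) - 1.0)))

def make_windows(sensors, window_size, rul_cap=125):
    """Slice one engine's (T, F) sensor matrix into overlapping windows of
    length `window_size`; label each window with the RUL at its last cycle,
    capped at `rul_cap` (hypothetical cap, common in the C-MAPSS literature)."""
    T = sensors.shape[0]
    X, y = [], []
    for end in range(window_size, T + 1):
        X.append(sensors[end - window_size:end])
        y.append(min(T - end, rul_cap))  # cycles remaining after window end
    return np.stack(X), np.asarray(y, dtype=float)

# Toy usage: one engine with 200 cycles and 14 sensor channels,
# window size 50 (inside the stable 40-70 range reported for FD001).
rng = np.random.default_rng(0)
X, y = make_windows(rng.normal(size=(200, 14)), window_size=50)
print(X.shape, y.shape)                  # (151, 50, 14) (151,)
print(cmapss_score([20, 30], [25, 28]))  # one late, one early prediction
```

The windows produced this way would feed the AE for unsupervised pre-training and then the LSTM regression head; this sketch covers only the data preparation and scoring, not the models themselves.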

Share and Cite

MDPI and ACS Style

Jeon, E.; Jin, D.; Kim, Y. Effects of Window and Batch Size on Autoencoder-LSTM Models for Remaining Useful Life Prediction. Machines 2026, 14, 135. https://doi.org/10.3390/machines14020135

AMA Style

Jeon E, Jin D, Kim Y. Effects of Window and Batch Size on Autoencoder-LSTM Models for Remaining Useful Life Prediction. Machines. 2026; 14(2):135. https://doi.org/10.3390/machines14020135

Chicago/Turabian Style

Jeon, Eugene, Donghwan Jin, and Yeonhee Kim. 2026. "Effects of Window and Batch Size on Autoencoder-LSTM Models for Remaining Useful Life Prediction." Machines 14, no. 2: 135. https://doi.org/10.3390/machines14020135

APA Style

Jeon, E., Jin, D., & Kim, Y. (2026). Effects of Window and Batch Size on Autoencoder-LSTM Models for Remaining Useful Life Prediction. Machines, 14(2), 135. https://doi.org/10.3390/machines14020135

