Interpretable Recurrent Variational State-Space Model for Fault Detection of Complex Systems Based on Multisensory Signals
Abstract
1. Introduction
- (1) A novel interpretable recurrent variational state-space model (IRVSSM) is proposed for fault diagnosis of complex systems. Recurrent neural networks (RNNs) capture long-term dependencies in the sensor data, while the state-space model (SSM) tracks the dynamics of complex sensor measurements. Nonlinear emission and transition mappings allow the model to adapt flexibly to diverse data distributions, and integrating a variational autoencoder (VAE) into the state-space framework improves generalization and helps alleviate overfitting (see the sketch after this list).
- (2) To extract the most relevant information from the sensor data, an automatic relevance determination (ARD) network is designed. By comparing the learned sensor weights with the actual fault information and analyzing the model's decisions in detail, the ARD network provides compelling explanations for engine fault classification.
- (3) Experiments were carried out on simulated and real liquid rocket engine test data. The results demonstrate that the proposed method outperforms the baseline models in fault diagnosis.
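To make contributions (1) and (2) more concrete, the sketch below shows one way a recurrent variational state-space layer with an ARD-style sensor gate could be assembled in PyTorch. The module names, layer sizes, softmax sensor gate, and standard-normal prior are illustrative assumptions for exposition, not the authors' exact architecture.

```python
# Minimal sketch of a recurrent variational state-space layer with an
# ARD-style sensor gate. Names, sizes, and the standard-normal prior are
# assumptions for illustration, not the paper's exact design.
import torch
import torch.nn as nn

class RecurrentVariationalSSM(nn.Module):
    def __init__(self, n_sensors: int, latent_dim: int = 16, hidden_dim: int = 64):
        super().__init__()
        # One learnable relevance weight per sensor channel (ARD-style gate)
        self.sensor_logits = nn.Parameter(torch.zeros(n_sensors))
        # GRU summarizes the (weighted) observation history for long-term dependencies
        self.rnn = nn.GRU(n_sensors, hidden_dim, batch_first=True)
        # Nonlinear transition: maps (z_{t-1}, h_t) to mean and log-variance of z_t
        self.transition = nn.Sequential(
            nn.Linear(latent_dim + hidden_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, 2 * latent_dim))
        # Nonlinear emission: reconstructs the sensor vector x_t from z_t
        self.emission = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, n_sensors))
        self.latent_dim = latent_dim

    def forward(self, x: torch.Tensor):
        # x: (batch, time, n_sensors)
        weights = torch.softmax(self.sensor_logits, dim=0)   # sensor relevance weights
        h, _ = self.rnn(x * weights)                          # encode weighted history
        z = x.new_zeros(x.size(0), self.latent_dim)
        recon, kl = [], x.new_zeros(())
        for t in range(x.size(1)):
            mu, logvar = self.transition(torch.cat([z, h[:, t]], dim=-1)).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
            recon.append(self.emission(z))
            # KL divergence to a standard-normal prior (simplified placeholder)
            kl = kl + 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1).mean()
        return torch.stack(recon, dim=1), kl / x.size(1), weights
```

In such a setup, the reconstruction error on held-out runs together with the learned `weights` could be used to flag anomalies and indicate which sensors drive a given decision.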
2. Related Works
2.1. State-Space Models
2.2. Deep State-Space Model
2.3. Variational Autoencoders
3. Proposed Model
3.1. Problem Formulation
3.2. Recurrent Variational State-Space Model for Liquid Rocket Engines
3.3. Inference Network
3.4. Automatic Relevance Determination Network
4. Experiments
4.1. Datasets
4.2. Compared Methods
4.3. Results and Analysis
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Dataset | Number of Sensors | Number of Samples / Sample Length | Subsequence Length |
|---|---|---|---|
| LRE-1 | 28 | 50 / 5500 | 25 |
| LRE-2 | 48 | 2 / 20,000 | 20 |
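As a small illustration of how records of the sizes listed above could be turned into model inputs, the snippet below splits one multichannel record into fixed-length subsequences. The non-overlapping windowing and the NumPy shapes are assumptions; the paper does not specify the stride here.

```python
import numpy as np

def to_subsequences(record: np.ndarray, subseq_len: int) -> np.ndarray:
    """Split a (time, sensors) record into non-overlapping windows of length subseq_len.
    Non-overlapping windowing is an assumption; a sliding stride is also possible."""
    usable = (record.shape[0] // subseq_len) * subseq_len
    return record[:usable].reshape(-1, subseq_len, record.shape[1])

# LRE-1 shapes from the table: a record of length 5500 with 28 sensors, window length 25
record = np.random.randn(5500, 28)
windows = to_subsequences(record, 25)   # shape: (220, 25, 28)
```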
Share and Cite
Ma, M.; Zhu, J. Interpretable Recurrent Variational State-Space Model for Fault Detection of Complex Systems Based on Multisensory Signals. Appl. Sci. 2024, 14, 3772. https://doi.org/10.3390/app14093772