A Multi-Model Fusion Framework for Aeroengine Remaining Useful Life Prediction
Abstract
1. Introduction
2. Problem Description
- (1) Conditional dependence on cumulative degradation: RUL is inherently a conditional expectation, relying on cumulative degradation information embedded in Z(t). Its accurate estimation thus depends on capturing temporal patterns in long-term monitoring data.
- (2) Inherent stochasticity: TF(t) is influenced by unpredictable factors such as thermal stress-induced degradation and sensor noise. Traditional deterministic models (e.g., [4]) often oversimplify this uncertainty, leading to significant prediction errors in late-stage degradation.
2.1. Data Characteristics and Challenges
- (1) Computational inefficiency: 24-dimensional inputs impose higher computational burdens compared to reduced, relevant subsets (e.g., 14 dimensions).
- (2) Obscured critical patterns: Static noise masks subtle degradation trends in key sensors, hindering the extraction of meaningful features.
2.2. RUL Labeling and Degradation Patterns
2.3. The Prediction Task: Non-Linearity and Temporal Dependencies
3. RUL Prediction Methods
3.1. CNNs for RUL Prediction
- (1) Constrained long-term dependency modeling: CNNs struggle to capture temporal relationships exceeding the kernel size, which are critical for tracking gradual degradation over hundreds of cycles.
- (2) Sensitivity to noise: Without robust regularization, CNNs overfit to irrelevant fluctuations in sensor data, obscuring subtle degradation trends.
- (3) Assumption of stationary patterns: CNNs perform poorly under varying operational conditions, where degradation dynamics shift non-linearly.
3.2. LSTM for RUL Prediction
- (1) Computational inefficiency: Sequential processing of high-frequency sensor data leads to prolonged training and inference times.
- (2) Poor adaptability to abrupt shifts: LSTM networks struggle to capture sudden degradation (e.g., component faults under extreme conditions) due to their focus on smooth temporal continuity.
- (3) Hyperparameter sensitivity: Balancing memory retention and forgetting requires meticulous tuning of gate thresholds and hidden layer dimensions.
3.3. TCN for RUL Prediction
3.4. A Hybrid CLSTM-TCN Model Architecture and Design Rationale
3.4.1. Hybrid CLSTM-TCN Model Design
- (1) 2D-CNN as the Initial Module
- (2) LSTM as the Intermediate Module
- (3) TCN as the Final Module
3.4.2. Rationale for Module Sequencing in CLSTM-TCN Architecture
- (1) CNN primacy in spatio-temporal processing: Placing the CNN first is critical for filtering spatial noise from raw data, providing clean temporal sequences for subsequent LSTM modeling. Inverting this order (e.g., LSTM→CNN) would force LSTM networks to process unfiltered noise, destabilizing gradient flow.
- (2) LSTM network after CNN for sequential modeling: The gated mechanisms (input, forget, and output gates) of LSTM networks are specialized to retain long-term historical information from CNN-processed features, making them ideal for modeling degradation trends. Placing LSTM networks after TCN (e.g., CNN → TCN → LSTM) would truncate critical long-range patterns via TCN's fixed dilation rates, undermining the strength of LSTM networks in extended dependency modeling.
- (3) TCN last for multi-scale refinement: TCN's dilated convolutions with exponentially increasing rates capture multi-scale temporal patterns missed by LSTM networks without increasing model depth. They recover high-frequency short-term details smoothed by LSTM networks, mitigate vanishing gradients via parallel computation, and excel at modeling extended sequences. Early TCN placement (e.g., TCN → CNN → LSTM) would prioritize temporal over spatial patterns, conflicting with data where sensor interactions drive degradation.
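The receptive-field claim in (3) can be made concrete: for a causal convolution stack with kernel size k and dilations d_1, ..., d_n, the receptive field is 1 + (k - 1) * sum(d_i). The NumPy sketch below uses kernel size 3 and dilations (1, 2, 4), matching the TCN blocks in the architecture table; the helper `causal_dilated_conv` is illustrative, not the paper's implementation.

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """Causal 1D convolution: output[t] depends only on x[t], x[t-d], x[t-2d], ...
    x is a (T,) sequence, w a (k,) kernel; left zero-padding keeps length T."""
    k, pad = len(w), (len(w) - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

# Receptive field of a causal stack with kernel size k and dilations d_i:
# RF = 1 + (k - 1) * sum(d_i). With k = 3 and dilations (1, 2, 4), RF = 15,
# i.e., each output step sees 15 past time steps without any increase in depth.
k, dilations = 3, (1, 2, 4)
rf = 1 + (k - 1) * sum(dilations)
```

Causality here is what prevents future-information leakage: an impulse in the input can only influence outputs at the same or later time steps.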
3.5. Detailed Modeling Process of CLSTM-TCN
- Step 1: Data Preprocessing
- (1) Feature Selection: The time series plots of the 24-dimensional raw monitoring features are visualized to identify sensor data with significant degradation trends. Sensors with clear degradation patterns are selected as input features, while irrelevant and redundant information is removed. Through this selection process, the 24-dimensional features are reduced to 14 dimensions, excluding those lacking obvious degradation trends or with high redundancy. The retained effective features serve as inputs for the model, ensuring that the subsequent modeling focuses on the informative signals closely linked to engine degradation.
- (2) Data Normalization: To eliminate scale differences in sensor monitoring data, the selected 14-dimensional monitoring feature parameters are normalized. This process converts the raw sensor data into values within the range of [0, 1], standardizing the input space to facilitate stable convergence of the neural network during training and prevent dominant features from overshadowing imperceptible degradation indicators.
- (3) Sliding Window for Sample Construction: To fully leverage the temporal information inherent in the sequential data, the sliding window technique is employed to segment the data set into multiple time series samples. A window length W and a step size T are defined, where each sample consists of W consecutive time steps. The step size T governs the overlap between adjacent samples, ensuring the model can effectively learn the evolutionary patterns of the equipment's operational states.
- (4) RUL Labeling: Monitored data are labeled using a piecewise linear function to improve the model's ability to learn degradation trends. A maximum threshold is defined for the RUL: when the actual RUL exceeds this threshold, it is capped at the maximum value to reduce the excessive variance in labels and enhance the model's stability, especially in predicting the later stages of engine life characterized by accelerating degradation.
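The feature selection in step (1) is performed by visual inspection of trend plots; as a programmatic proxy, near-constant channels (which carry no degradation signal) can be screened out automatically. The sensor names, toy data, and variance threshold below are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def select_informative(X, names, std_threshold=1e-6):
    """Drop near-constant channels: a programmatic stand-in for the visual
    screening of degradation trends described in step (1)."""
    keep = X.std(axis=0) > std_threshold
    return X[:, keep], [n for n, k in zip(names, keep) if k]

# Toy data: two trending sensors and one stuck (constant) sensor.
rng = np.random.default_rng(0)
X = np.column_stack([
    np.linspace(0, 1, 200) + 0.01 * rng.standard_normal(200),  # degrading
    np.linspace(5, 4, 200) + 0.01 * rng.standard_normal(200),  # degrading
    np.full(200, 14.62),                                       # constant
])
Xs, kept = select_informative(X, ["S1", "S2", "S3"])
```

On C-MAPSS FD001 this kind of screening discards the flat sensors, which is consistent with the 24-to-14 reduction described above.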
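The min-max normalization in step (2) can be sketched as follows. Fitting the statistics on training data only, then reusing them on the test set, is standard practice, though the paper does not spell out this detail; the numeric example is illustrative.

```python
import numpy as np

def minmax_fit(X_train):
    """Per-feature min-max statistics, computed on training data only so the
    test set is scaled with the same parameters."""
    lo, hi = X_train.min(axis=0), X_train.max(axis=0)
    span = np.where(hi - lo == 0, 1.0, hi - lo)  # guard constant columns
    return lo, span

def minmax_apply(X, lo, span):
    """Map each feature into [0, 1] (on the fitting data)."""
    return (X - lo) / span

# Illustrative raw sensor readings (3 cycles, 2 features).
X = np.array([[641.8, 1583.2], [642.5, 1590.4], [643.9, 1601.1]])
lo, span = minmax_fit(X)
Xn = minmax_apply(X, lo, span)
```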
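The sliding-window segmentation in step (3), with window length W and step size T, might look like this; W = 30 and T = 1 are illustrative values, not necessarily those used in the experiments.

```python
import numpy as np

def sliding_windows(X, y, W, T):
    """Segment a (cycles, features) series into overlapping samples of W
    consecutive steps; each sample is labeled with the RUL at its last step.
    The step size T controls the overlap between adjacent windows."""
    starts = range(0, len(X) - W + 1, T)
    samples = np.stack([X[s:s + W] for s in starts])
    labels = np.array([y[s + W - 1] for s in starts])
    return samples, labels

X = np.arange(100 * 14, dtype=float).reshape(100, 14)  # 100 cycles, 14 sensors
y = np.arange(99, -1, -1, dtype=float)                 # RUL counting down to 0
S, L = sliding_windows(X, y, W=30, T=1)
```

With 100 cycles, W = 30, and T = 1, this yields 71 overlapping samples of shape (30, 14).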
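The piecewise linear labeling in step (4) can be sketched as below. The cap of 125 cycles is a common choice for C-MAPSS and stands in for the paper's threshold, which is set in Section 4.3.

```python
import numpy as np

def piecewise_rul(n_cycles, cap=125):
    """Linear RUL counting down to 0 at failure, capped at `cap` during the
    early, healthy phase to reduce label variance (cap = 125 is an assumed,
    commonly used C-MAPSS value)."""
    rul = np.arange(n_cycles - 1, -1, -1, dtype=float)
    return np.minimum(rul, cap)

labels = piecewise_rul(200, cap=125)
```

For a 200-cycle trajectory, the label stays flat at 125 for the first 75 cycles and then decreases linearly to 0.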
- Step 2: CLSTM Module Processing
- Step 3: TCN Module Refinement
- Step 4: Output Layer Prediction
- Step 5: Model Training and Optimization
- Step 6: Model Evaluation
4. Experiment Results and Analysis
4.1. Experimental Data Set and Preprocessing
4.2. Evaluation Index
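Assuming the evaluation indices are the RMSE and the standard asymmetric C-MAPSS scoring function (consistent with the RMSE/Score columns reported in Sections 4.4 and 5.2), they can be computed as:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between predicted and actual RUL."""
    d = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

def cmapss_score(y_true, y_pred):
    """Standard C-MAPSS scoring function with d = predicted - actual RUL.
    Early predictions (d < 0) are penalized as exp(-d/13) - 1; late
    predictions (d >= 0) receive the harsher penalty exp(d/10) - 1."""
    d = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    return float(np.sum(np.where(d < 0, np.exp(-d / 13), np.exp(d / 10)) - 1))
```

The asymmetry reflects maintenance practice: predicting a failure too late is more dangerous than predicting it too early, so a prediction 10 cycles late scores worse than one 10 cycles early.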
4.3. RUL Label Setting
4.4. Experimental Results
5. Ablation Study
5.1. Ablation Study Design and Experimental Setup
5.2. Ablation Experiment Results and Analysis
6. Conclusions and Discussions
6.1. Conclusions
- (1) The proposed CLSTM-TCN framework achieves complementary advantages through hierarchical design: 2D-CNN effectively extracts short-term local features and inter-feature interactions from the input data, providing a fine-grained feature foundation for subsequent modeling; the LSTM network captures long-term temporal dependencies, preserving global degradation trends across extended cycles; and TCN refines multi-scale temporal patterns via dilated convolutions, overcoming the vanishing gradient issues of the LSTM network and enabling parallel computation for efficiency.
- (2) On the FD001 subset of C-MAPSS, the CLSTM-TCN model achieves an RMSE of 13.35 and a score of 219. Ablation studies further confirm that removing any module (2D-CNN, LSTM, or TCN) leads to significant performance degradation: excluding TCN (resulting in CNN-LSTM) increases the RMSE by 38.8% and the score by 131.5%; omitting the LSTM network (resulting in CNN-TCN) raises the RMSE by 44.5% and the score by 169.9%; and removing 2D-CNN (resulting in LSTM-TCN) causes the RMSE to increase by 44.6% and the score by 163.9%. Compared to hybrid baselines (CNN-LSTM, CNN-TCN, and LSTM-TCN), it reduces the RMSE by 27.94–30.88% and maintains stability across test cases (RMSE fluctuations < 15%), verifying its reliability in a complex operational environment.
- (3) CLSTM-TCN outperforms published methods (CNN [20], DBN [25], CNN-LSTM [20], and LSTM [26]) in the RMSE by 12.7–38.0% (38.0% vs. CNN [20], 12.7% vs. DBN [25], 22.5% vs. CNN-LSTM [20], and 20.9% vs. the LSTM network [26]) and in the score by 52.5–313.7%, demonstrating its superiority in modeling the non-linear, multi-phase degradation dynamics of aeroengines.
6.2. Discussions
6.2.1. Current Limitations
- (1) Data Dependency: The performance of the CLSTM-TCN model is primarily reliant on high-quality, large-scale data sets. In data-scarce scenarios (e.g., newly deployed engines or rare operating conditions), insufficient training samples lead to degraded prediction accuracy, limiting adaptability to novel environments.
- (2) Computational Complexity: Integrating 2D-CNN, LSTM, and TCN incurs high computational costs, leading to prolonged training and inference times. This hinders real-time deployment in time-sensitive aviation applications where rapid RUL updates are critical for maintenance decisions.
- (3) Hyperparameter Sensitivity: The model exhibits high sensitivity to key hyperparameters, including learning rate, convolution kernel size, LSTM hidden layer dimensions, and TCN dilation rates. Extensive manual tuning is required to achieve optimality, increasing the implementation complexity.
6.2.2. Future Research Directions
- (1) Enhancing Adaptability to Complex Operating Conditions: The current study is constrained to RUL prediction of turbofan engines under static, single operating conditions, with insufficient consideration of real-world complexities, specifically, multi-condition operations and dynamic shifts in operational parameters, which are prevalent in practical aviation scenarios. This narrow scope limits the model's adaptability and generalizability across the diverse operational landscapes encountered in engineering practice. To address this issue, future research will expand the scope to systematically explore life-cycle modeling under multi-condition and variable-condition regimes. Key efforts will include developing tailored transfer learning frameworks and domain adaptation methodologies to enable robust generalization across heterogeneous operational scenarios, thereby enhancing the model's practical utility in real-world aviation environments.
- (2) Integrating Uncertainty Analysis Mechanisms: In aviation, quantifying the uncertainty in model predictions is foundational to robust risk control and data-driven decision making. However, current research in this domain remains predominantly reliant on RMSE- and score-based metrics for statistical evaluation, which fail to comprehensively characterize uncertainty distributions or rigorously assess prediction credibility. To bridge this critical gap, future work will integrate Bayesian deep learning frameworks, confidence interval estimation, and Monte Carlo dropout sampling methods to explicitly quantify aleatoric and epistemic uncertainties. This enhancement will not only improve the interpretability and reliability of the predictive outputs but also deliver actionable insights that strengthen risk mitigation protocols and decision support systems in safety-critical aviation operations.
- (3) Advancing Multi-Model Fusion Strategies: The current framework adopts a static serial integration paradigm to combine model components; while this validates the value of modular collaboration, it lacks flexibility in adapting to dynamic data patterns. Future research will transcend this fixed structure by exploring adaptive fusion mechanisms, including dynamic weighting schemes, attention-driven feature aggregation, and multi-task learning architectures. These advanced strategies will be designed to more effectively leverage the intrinsic complementarity between heterogeneous models (e.g., CNNs for spatial patterns, LSTM networks for temporal sequences, and TCNs for multi-scale dependencies) and amplify their synergistic effects. By enabling context-aware integration of model outputs, such approaches have the potential to enhance prediction accuracy while preserving computational efficiency, thereby strengthening the model's performance in complex aviation scenarios.
- (4) Exploring Module Order Variations: The current module order (CNN→LSTM→TCN) aligns with the hierarchical spatio-temporal characteristics of the data set. Future work will conduct a rigorous ablation study to systematically evaluate alternative configurations. This comprehensive investigation will encompass the following three key dimensions:
- Quantifying performance trade-offs across all permutations of the modules, such as LSTM→CNN→TCN and TCN→LSTM→CNN, with a focus on metrics such as prediction accuracy, convergence stability, and error distribution patterns.
- Analyzing computational efficiency and scalability across different architectural orders to identify configurations optimized for real-world deployment.
- Investigating task-specific sensitivities by validating these module permutations on diverse data sets characterized by varying spatio-temporal dynamics, ranging from high-frequency industrial sensors to long-term sequential data, thereby assessing generalizability beyond aviation-specific scenarios.
- (5) Mitigating Data Dependency and Computational Burden: To address the model's reliance on large-scale data sets and high computational overhead, future work will focus on developing lightweight network architectures, incorporating techniques such as parameter pruning, knowledge distillation, and efficient convolution operators, to reduce computational costs without sacrificing predictive performance. Concurrently, few-shot and zero-shot learning frameworks will be integrated to enhance adaptability to data-scarce scenarios.
- (6) Optimizing Hyperparameter Tuning: To alleviate the complexity of implementation, future work will focus on designing adaptive hyperparameter optimization frameworks and automated tuning pipelines. These systems will dynamically adjust critical parameters, such as learning rates, convolution kernel sizes, and LSTM hidden layer dimensions, based on data set characteristics and task requirements. By streamlining the tuning process, this advancement will ensure consistent model performance across diverse data sets while enhancing accessibility for practical deployment, reducing the technical barrier for engineers and researchers in aviation applications.
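As one concrete illustration of direction (2), Monte Carlo dropout keeps dropout active at inference and treats the spread of repeated stochastic forward passes as an epistemic-uncertainty estimate. The sketch below substitutes a hypothetical toy model (`toy_forward`) for the CLSTM-TCN network; the weights and dropout rate are illustrative assumptions.

```python
import numpy as np

def mc_dropout_predict(forward, x, n_samples=100, seed=0):
    """Monte Carlo dropout sketch: run `forward` (a stochastic model call
    with dropout left on) repeatedly, and report the predictive mean and
    standard deviation as an epistemic-uncertainty estimate."""
    rng = np.random.default_rng(seed)
    preds = np.array([forward(x, rng) for _ in range(n_samples)])
    return preds.mean(), preds.std()

# Toy stand-in for a network with dropout: a linear map whose hidden units
# are randomly masked (p = 0.5) on every call, with inverted-dropout scaling.
w1, w2 = np.ones(8), np.full(8, 0.25)
def toy_forward(x, rng, p=0.5):
    h = np.maximum(x * w1, 0.0)          # hidden activations (ReLU)
    mask = rng.random(8) >= p            # fresh dropout mask per call
    return float((h * mask * w2).sum() / (1 - p))

mean, std = mc_dropout_predict(toy_forward, 2.0, n_samples=500)
```

The mean plays the role of the point RUL prediction, while the standard deviation could feed a confidence interval for maintenance decision making.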
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Khelif, R.; Chebel-Morello, B.; Malinowski, S.; Laajili, E.; Fnaiech, F.; Zerhouni, N. Direct remaining useful life estimation based on support vector regression. IEEE Trans. Ind. Electron. 2017, 64, 2276–2285.
- Zio, E. Prognostics and Health Management (PHM): Where are we and where do we (need to) go in theory and practice. Reliab. Eng. Syst. Saf. 2022, 218, 108119.
- Cai, Z.Y.; Wang, Z.Z.; Chen, Y.X.; Guo, J.S.; Xiang, H.C. Remaining useful lifetime prediction for equipment based on nonlinear implicit degradation modeling. J. Syst. Eng. Electron. 2020, 31, 198–209.
- Zhang, X.; Tang, S.; Liu, T.; Zhang, B. A new residual life prediction method for complex systems based on Wiener process and evidential reasoning. J. Control Sci. Eng. 2018, 2018, 9473526.
- Wang, C.; Yang, B.; Zhu, T.; Xiao, S.N.; Yang, G.; Fan, X. Research on crack propagation and residual life prediction method of surface-strengthened axle. J. Mech. Eng. 2023, 59, 43–53.
- Zhang, X.; Liu, Y. Research on residual life prediction of aeroengine based on knowledge distillation compression mixing model. Comput. Integr. Manuf. Syst. 2025, 31, 290–305.
- Zhang, H.; Long, L.; Dong, K. Detect and evaluate dependencies between aeroengine gas path system variables based on multi-scale horizontal visibility graph analysis. Phys. A Stat. Mech. Its Appl. 2019, 526, 120830.
- Peng, H.B.; Liu, M.M.; Wang, Y.G. Research on life prediction of aeroengine based on take-off exhaust temperature margin (EGTM). Sci. Technol. Eng. 2014, 14, 160–164.
- Gu, M.Y.; Ge, J.Q.; Li, Z.N. Improved similarity-based residual life prediction method based on grey Markov model. J. Braz. Soc. Mech. Sci. Eng. 2023, 45, 294–306.
- Song, Y.; Xia, T.; Zheng, Y.; Zhuo, P.; Pan, E. Prediction of remaining life of turbofan engine based on autoencoder-BLSTM. Comput. Integr. Manuf. Syst. 2019, 25, 1611–1619.
- Ren, L.; Cui, J.; Sun, Y.; Cheng, X. Multi-bearing remaining useful life collaborative prediction: A deep learning approach. J. Manuf. Syst. 2017, 43, 248–256.
- Babu, G.S.; Zhao, P.; Li, X.L. Deep convolutional neural network based regression approach for estimation of remaining useful life. In Proceedings of the International Conference on Database Systems for Advanced Applications, Dallas, TX, USA, 16–19 April 2016; pp. 214–228.
- Zhang, X.; Qin, Z.; Li, M.; Shi, J. Prediction of remaining life of aeroengine based on multi-feature fusion. J. Comput. Syst. Appl. 2023, 32, 95–103.
- Yu, P.; Wang, H.; Cao, J. Aero-engine residual life prediction based on time-series residual neural networks. J. Intell. Fuzzy Syst. Appl. Eng. Technol. 2023, 45, 2437–2448.
- He, Y.; Wen, C.; Xu, W. Residual life prediction of SA-CNN-BILSTM aero-engine based on a multichannel hybrid network. Appl. Sci. 2025, 15, 966.
- Huang, M.; Yang, L.; Jiang, G.; Hao, X.; Lu, H.; Luo, H.; Wang, P.; Li, J. ResCConv-xLSTM: An improved xLSTM model with spatiotemporal feature extraction capability for remaining useful life prediction of aero-engine. Results Eng. 2025, 26, 105513.
- Al-Dulaimi, A.; Zabihi, S.; Asif, A.; Mohammadi, A. A multimodal and hybrid deep neural network model for remaining useful life estimation. Comput. Ind. 2019, 108, 186–196.
- Lyu, D.; Hu, Y. Remaining useful life prediction of aeroengine based on principal component analysis and one-dimensional convolutional neural network. Trans. Nanjing Univ. Aeronaut. Astronaut. 2022, 38, 867–875. (In Chinese)
- Liu, Q.; Dai, Z.; Chen, P.; Lai, H.; Liang, Y.; Chen, M.; Xu, X.; Hou, M.; Wang, G. Remaining useful life prediction of rolling bearings based on TCN-LSTM. In Proceedings of the 14th International Conference on Quality, Reliability, Risk, Maintenance, and Safety Engineering (QR2MSE 2024), Harbin, China, 24–27 July 2024; pp. 1016–1020.
- Li, J.; Jia, Y.J.; Zhang, Z.X.; Li, R.R. Prediction of remaining life of aeroengine based on fusion neural network. Propuls. Technol. 2021, 42, 1725–1734.
- Ji, W.; Cheng, J.; Chen, Y. Remaining useful life prediction for mechanical equipment based on temporal convolutional network. In Proceedings of the 14th IEEE International Conference on Electronic Measurement and Instruments (ICEMI), Changsha, China, 1–3 November 2019; pp. 1192–1199.
- Yang, W.; Yao, Q.; Ye, K.; Xu, C.Z. Empirical mode decomposition and temporal convolutional networks for remaining useful life estimation. Int. J. Parallel Program. 2020, 48, 61–79.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Saxena, A.; Goebel, K.; Simon, D.; Eklund, N. Damage propagation modeling for aircraft engine run-to-failure simulation. In Proceedings of the 2008 International Conference on Prognostics and Health Management, Denver, CO, USA, 6–9 October 2008; pp. 1–9. Available online: https://ieeexplore.ieee.org/document/4711414/ (accessed on 16 June 2025).
- Zhang, C.; Lim, P.; Qin, A.K.; Tan, K.C. Multiobjective deep belief networks ensemble for remaining useful life estimation in prognostics. IEEE Trans. Neural Netw. Learn. Syst. 2016, 28, 2306–2318.
- Zheng, S.; Ristovski, K.; Farahat, A.; Gupta, C. Long short-term memory network for remaining useful life estimation. In Proceedings of the 2017 IEEE International Conference on Prognostics and Health Management, Dallas, TX, USA, 19–21 June 2017; pp. 88–95.
Module | Layer | Core Parameters | Functionality and Design Intent
---|---|---|---
CNN | Conv2d | in_channels = 1, out_channels = 64, kernel_size = (5, features), padding = (2, 0) | Convolutional layer extracting local features
CNN | BatchNorm2d | num_features = 64 | Improves training stability
CNN | ReLU | — | Non-linear activation function
CNN | MaxPool2d | kernel_size = (2, 1) | Pooling layer reducing data dimensionality
LSTM network | LSTM | input_size = 64, hidden_size = 128, num_layers = 2, dropout = 0.3 | Extracts temporal sequence features
TCN | Dropout | p = 0.5 | Prevents overfitting
TCN | Block 1 | in_channels = 128, out_channels = 128, kernel_size = 3, dilation = 1 | Causal convolution + ReLU + Dropout: extracts local temporal dependencies, prevents future-information leakage, and (across the block stack) models long-term temporal dependencies
TCN | Block 2 | in_channels = 128, out_channels = 128, kernel_size = 3, dilation = 2 |
TCN | Block 3 | in_channels = 128, out_channels = 128, kernel_size = 3, dilation = 4 |
Fully Connected | Linear | in_features = 128, out_features = 1 | Outputs the final predicted RUL
Data Set | Number of Training Samples | Number of Test Samples | Operating Conditions | Failure Modes | Fault Components |
---|---|---|---|---|---|
FD001 | 100 | 100 | 1 | 1 | HPC |
FD002 | 260 | 259 | 6 | 1 | HPC |
FD003 | 100 | 100 | 1 | 2 | HPC + Fan |
FD004 | 248 | 249 | 6 | 2 | HPC + Fan |
Serial Number | Representation | Description |
---|---|---|
Setting 1 | H | Flight altitude |
Setting 2 | Ma | Mach number |
Setting 3 | TRA | Throttle lever angle |
Sensor 1 | T2 | Total temperature at the fan inlet |
Sensor 2 | T24 | Total temperature at the LPC outlet |
Sensor 3 | T30 | Total temperature at the HPC outlet |
Sensor 4 | T50 | Total temperature at the LPT outlet |
Sensor 5 | P2 | Pressure at the fan inlet |
Sensor 6 | P15 | Total pressure in the bypass duct |
Sensor 7 | P30 | Total pressure at the HPC outlet |
Sensor 8 | Nf | Physical fan speed |
Sensor 9 | Nc | Physical core speed |
Sensor 10 | epr | Engine pressure ratio (P50/P2)
Sensor 11 | Ps30 | Static pressure at the HPC outlet |
Sensor 12 | phi | Ratio of fuel flow to Ps30 |
Sensor 13 | NRf | Corrected fan speed |
Sensor 14 | NRc | Corrected core speed |
Sensor 15 | BPR | Bypass ratio |
Sensor 16 | farB | Burner fuel-air ratio
Sensor 17 | htBleed | Bleed enthalpy |
Sensor 18 | Nf_dmd | Demanded fan speed |
Sensor 19 | PCNfR_dmd | Demanded corrected fan speed |
Sensor 20 | W31 | HPT coolant bleed |
Sensor 21 | W32 | LPT coolant bleed |
Models | RMSE | Score | Performance Gap vs. CLSTM-TCN |
---|---|---|---|
CLSTM-TCN | 13.35 | 219 | — |
CNN-LSTM | 18.53 | 507 | +38.8% (RMSE), +131.5% (Score) |
CNN-TCN | 19.29 | 591 | +44.5% (RMSE), +169.9% (Score) |
LSTM-TCN | 19.30 | 578 | +44.6% (RMSE), +163.9% (Score) |
Models | RMSE | Score | Performance Gap vs. CLSTM-TCN |
---|---|---|---|
CNN [20] | 18.42 | 906 | +38.0% (RMSE), +313.7% (Score) |
DBN [25] | 15.04 | 334 | +12.7% (RMSE), +52.5% (Score)
CNN-LSTM [20] | 16.36 | 443 | +22.5% (RMSE), +102.3% (Score) |
LSTM [26] | 16.14 | 338 | +20.9% (RMSE), +54.3% (Score) |
CLSTM-TCN | 13.35 | 219 | — |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Tan, B.; Zhang, Y.; Wei, X.; Wang, L.; Chang, Y.; Zhang, L.; Fan, Y.; Roza, C.G.R.L. A Multi-Model Fusion Framework for Aeroengine Remaining Useful Life Prediction. Eng 2025, 6, 210. https://doi.org/10.3390/eng6090210