Electronics
  • Article
  • Open Access

14 June 2024

A Compound Framework for Forecasting the Remaining Useful Life of PEMFC

1 College of Electrical Engineering and New Energy, China Three Gorges University, Yichang 443002, China
2 Hubei Provincial Key Laboratory for Operation and Control of Cascaded Hydropower Station, China Three Gorges University, Yichang 443002, China
3 Wuhan Second Ship Design and Research Institute, Wuhan 430064, China
* Authors to whom correspondence should be addressed.

Abstract

Proton exchange membrane fuel cells (PEMFC) are widely acknowledged as a prospective power source, but durability problems have constrained their development. Therefore, a compound prediction framework is proposed in this paper by integrating the locally weighted scatter plot smoothing method (LOESS), the uniform information coefficient (UIC), and an attention-based stacked generalization model (ASGM) with improved dung beetle optimization (IDBO). Firstly, LOESS is adopted to filter the original degraded sequences. Then, UIC is applied to obtain critical information by selecting relevant factors of the processed degraded sequences. Subsequently, the critical information is input into the base models of ASGM, including kernel ridge regression (KRR), the extreme learning machine (ELM), and the temporal convolutional network (TCN), to acquire corresponding prediction results. Finally, the prediction results are fused using the meta-model of ASGM, an attention-based LSTM, to obtain the future degradation trend (FDT) and the remaining useful life (RUL), in which the attention mechanism is introduced to deduce the weight coefficients of the base model prediction results in the LSTM. Meanwhile, IDBO, based on Levy flight, adaptive mutation, and polynomial mutation strategies, is proposed to search for the optimal parameters of the LSTM. The application of two different datasets and the comparison with five related models show that the proposed framework is suitable and effective for forecasting the FDT and RUL of PEMFC.

1. Introduction

1.1. Motivations

Proton exchange membrane fuel cells (PEMFC) are extensively acknowledged as one of the most promising energy technologies, with the advantages of strong anti-interference capability, fast start-up, zero pollutant emission, low operating temperature, good portability, and so on. After nearly two decades of continuous and rapid development, they have entered the initial commercialization phase and are widely adopted in stationary, portable, and transport applications [1,2]. However, the operating expenses and useful life of PEMFC still face challenges, which have become major constraints to their large-scale promotion and adoption [3]. While the degradation of proton exchange membrane fuel cells is unavoidable in real-life applications, accurate forecasting of the FDT can effectively assist users in taking timely maintenance measures, thus significantly extending the useful life of PEMFC [4,5,6].

1.2. Literature Review

Existing PEMFC prediction methods are classified into three categories: model-based [7,8], data-driven [9,10], and hybrid [11,12,13]. Among them, model-based methods mainly predict the RUL of PEMFC through parameter estimation by establishing a mathematical model according to the attenuation mechanism of PEMFC. For example, Zhou et al. [7] proposed an aging prediction model based on particle filters. Chen et al. [8] combined the unscented Kalman filter (UKF) with an empirical voltage attenuation model to predict the RUL. Xie et al. [14] used singular spectrum analysis to preprocess the measured data and then used a deep Gaussian process to achieve RUL prediction. Ao et al. [15] proposed an RUL prediction method based on a frequency-domain Kalman filter to process the aging data of PEMFC in groups. Wang et al. [16] established an aging model based on polarization curves, used the particle filter algorithm to estimate the aging parameters online, and used the rated voltage as a new health index. Although model-based methods can analyze the aging process in combination with the mechanism, the aging mechanism of PEMFC has not been thoroughly studied, and various aging models are usually unproven, so the prediction accuracy of model-driven methods is usually not guaranteed [4].
To predict degradation more accurately, researchers have proposed hybrid methods combining model-based and data-driven ones. Although hybrid methods play a positive role in improving accuracy, they inevitably raise complications. Considering that hybrid methods inherently depend on model-based ones, the limitations associated with model-based methods also manifest in hybrid ones [17]. On the contrary, data-driven methods do not need a full understanding of the internal degradation mechanism of fuel cells: they are trained on large amounts of experimental data, after which accurate predictions are obtained by deep learning algorithms [9,10]. Therefore, data-driven methods are employed in this study.
Data-driven methods primarily include the extreme learning machine (ELM) [11], the echo state network (ESN) [18,19], kernel ridge regression (KRR) [20], the long short-term memory neural network (LSTM) [21,22,23,24,25,26], the temporal convolutional network (TCN) [27,28], and so on. For example, Pan et al. [27] constructed a TCN-based prediction framework and joint degradation metrics to predict the RUL of PEMFC, confirming that the TCN can accurately predict the RUL of PEMFC. Deng et al. [17] proposed a highly accurate and efficient degradation prediction method for PEMFC by integrating an auto-encoder-based ELM with the fuzzy extension broad learning system, demonstrating superior competitiveness in accuracy, especially when applied to power time series with multiple input variables. Li et al. [29] decomposed voltage data into linear and nonlinear parts and applied the autoregressive integrated moving average (ARIMA) model and an attention-based gated recurrent unit (GRU) for prediction, respectively, which achieved good prediction results. However, the prediction performance of an individual model is limited, so integrated models, which blend multiple models, are gradually preferred by academics and employed in various fields; common integrated models include bagging, boosting, and stacking [30]. Compared to the other two integrated approaches, stacking is able to effectively merge the variation characteristics of heterogeneous models to achieve better prediction than a single model [31]. Specifically, the base models utilize their respective strengths to obtain forecast results, which are integrated by the meta-model to obtain the final composite forecast results [32]. The stacked generalization model is rarely applied in the field of fuel cells at present, and the meta-model in the stacked generalization model has a limited ability to focus on important information and cannot fully utilize the forecast results from the base models. For this reason, an attention-based stacked generalization model (ASGM) is proposed in which the base models not only contain ELM, with better nonlinear fitting ability, and KRR, with strong high-dimensional data processing ability, but also TCN, with long-time-sequence learning ability. Furthermore, an attention-based LSTM with strong generalization capability is employed as the meta-model layer. By introducing a meta-model with an attention mechanism, the framework can adaptively adjust the weights of the base models. This enhances the impact of high-precision base models and diminishes the influence of lower-precision base models, contributing to a better understanding of the base model outputs and improving prediction accuracy.
For the meta-model layer, the parameters of the LSTM directly impact the prediction results and can be calibrated manually, empirically, or with an optimizer. Manual and empirical methods mainly calibrate the parameters subjectively through human awareness or experience, which makes it difficult for the LSTM to achieve optimal prediction performance. In contrast, optimization algorithms can solve this problem by automatically searching for the optimal parameters, such as the grey wolf optimization algorithm [33], the sine cosine algorithm [34], and dung beetle optimization [35]. Wang et al. [25] proposed a stacked LSTM model to predict the degradation of PEMFC and achieved good predictions by using a differential evolutionary algorithm to optimize the hyperparameters. Ren et al. [36] employed a particle swarm optimization algorithm to select the parameters of an LSTM. In Ref. [37], the grey wolf optimizer algorithm was employed to optimize support vector regression. To further boost the performance of optimization algorithms, a variety of complementary strategies have been proposed, such as the Levy flight strategy, the polynomial variation strategy, and the adaptive variation strategy. For example, Liu et al. [38] enhanced the candidate selection process in ant colony optimization by integrating the Levy flight strategy, ensuring both rapid search speed and an expanded search space for improved performance. Motivated by previous studies, an improved dung beetle optimization (IDBO) algorithm is presented in this paper to calibrate the optimal parameters of the LSTM, which contributes to the improvement of the prediction accuracy. To promote the performance of dung beetle optimization (DBO), Levy flight [38] and a polynomial variation strategy [39] are introduced to enhance the global exploration ability, while an adaptive variation strategy [40] is embedded to avoid trapping in local optimal solutions and thus maintain solution diversity in later iterations.

1.3. Research Gaps and Contributions

In conjunction with the above discussion, a summary is presented of the challenges faced by current researchers. Firstly, stacking has the problem of the output information of the base models not being fully utilized. Secondly, existing studies often rely on empirical values or manual selection to determine model hyperparameters, which cannot guarantee optimal prediction performance. In addition, existing research often neglects to provide an extensive discussion of the precise choice of the best input variables for the model, which is a critical element influencing forecast precision. To address the above challenges, a compound forecasting framework incorporating LOESS, UIC, and ASGM with IDBO is presented to forecast the FDT and RUL of PEMFC. Within the proposed framework, LOESS is regarded as a preprocessing method to filter original degraded sequences. Then, UIC is applied to obtain critical information by selecting relevant factors of the processed degraded sequences. Subsequently, the critical information is input into base models of ASGM, including KRR, ELM, and TCN, for prediction to acquire corresponding forecast results, respectively. Finally, the forecast results are fused using the meta-model attention-based LSTM of ASGM to obtain FDT and RUL, in which the attention mechanism is introduced to deduce weight coefficients of the base model prediction results in the dense layer of LSTM, where the proposed IDBO is applied to search optimum parameters in LSTM. The main contributions of this paper are illustrated below:
(1)
In view of the problem that the meta-model of stacking has a limited ability to capture significant information output of the base models, ASGM integrating the single model ELM, KRR, TCN, and LSTM is established, and an attention mechanism module is embedded into stacking to improve the prediction effect further.
(2)
IDBO is employed to optimize the hyperparameters of LSTM to achieve higher forecasting accuracy, where IDBO is attained by embedding the Levy flight strategy, adaptive variation, and polynomial variation into DBO to promote the global and local detection ability.
(3)
UIC is utilized in the selection of input variables to capture critical information, which is able to decrease the training complexity and enhance the overall efficiency of the proposed model.
The rest of this article is structured as follows. Section 2 presents theoretical approaches. Section 3 details the proposed framework and its procedures. Data processing and experimental setup are described in Section 4. Section 5 demonstrates the experiment and contrastive results, while Section 6 presents the conclusion.

3. PEMFC FDT and RUL Forecasting Framework Based on LOESS, UIC, ASGM, and IDBO

In this section, a compound forecasting framework incorporating LOESS, UIC, and ASGM with IDBO is presented to forecast the FDT and RUL of PEMFC. The detailed procedures are depicted in Figure 4 and outlined below:
Figure 4. Flow chart of the proposed UIC-IDBO-ASGM prediction model.
Step 1: The historical degradation datasets are resampled at half-hour intervals, after which LOESS is deployed to filter the reconstructed data to remove noise and spikes.
Step 2: Optimal input variables are obtained by employing UIC to select the processed data.
Step 3: Perform dataset splits based on three-fold cross-validation with the best input and output variables.
Step 4: A split dataset is input into the base models of ASGM, including KRR, ELM, and TCN, for prediction to acquire corresponding prediction results.
Step 5: The prediction results are reconstructed by stacking and averaging to obtain the input data for the meta-model (Steps 3–5 are sketched in code after this step list).
Step 6: The reconstructed data are input into the meta-model attention-based LSTM of ASGM to obtain the FDT and RUL of PEMFC. Meanwhile, IDBO is applied to calibrate the parameters of LSTM.
Step 7: The RMSE, MAPE, and R2 between the predicted and true values are calculated to verify the generalization ability and prediction accuracy of the model.
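To make the data flow of Steps 3–5 concrete, the following minimal sketch builds out-of-fold meta-features with three-fold cross-validation and fuses them with a simple meta-learner. It is illustrative only: KernelRidge from scikit-learn stands in for KRR, while MLPRegressor, GradientBoostingRegressor, and Ridge are hypothetical placeholders for the ELM and TCN base models and the attention-based LSTM meta-model, which are not reproduced here.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def stacked_meta_features(base_models, X_train, y_train, X_test, n_splits=3):
    """Steps 3-5: out-of-fold predictions of the base models become the
    meta-model training inputs; their test-set predictions are averaged
    over the folds to form the meta-model test inputs."""
    kf = KFold(n_splits=n_splits, shuffle=False)
    meta_train = np.zeros((len(X_train), len(base_models)))
    meta_test = np.zeros((len(X_test), len(base_models)))
    for j, model in enumerate(base_models):
        fold_test_preds = []
        for train_idx, val_idx in kf.split(X_train):
            model.fit(X_train[train_idx], y_train[train_idx])
            meta_train[val_idx, j] = model.predict(X_train[val_idx])
            fold_test_preds.append(model.predict(X_test))
        meta_test[:, j] = np.mean(fold_test_preds, axis=0)   # Step 5: averaging
    return meta_train, meta_test

# Synthetic data in place of the UIC-selected FC1/FC2 inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
X_tr, X_te, y_tr, y_te = X[:100], X[100:], y[:100], y[100:]

bases = [KernelRidge(alpha=0.5, kernel="rbf"),
         MLPRegressor(hidden_layer_sizes=(50,), max_iter=1000, random_state=0),
         GradientBoostingRegressor(random_state=0)]
Z_tr, Z_te = stacked_meta_features(bases, X_tr, y_tr, X_te)
meta = Ridge().fit(Z_tr, y_tr)      # placeholder for the attention-based LSTM (Step 6)
y_pred = meta.predict(Z_te)
```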

4. Experiment

4.1. Data Preprocessing

The FCLAB released the PEMFC durability experimental dataset in the 2014 Data Challenge [18,50]. One dataset, FC1, was operated under static conditions, roughly at rated conditions, with a load current of 70 A. The other dataset, FC2, was operated under quasi-dynamic conditions with a load current set to 70 A but with a 10% triangular current ripple (5 kHz). Table 1 shows the relevant information on the degradation parameters monitored in the PEMFC stack durability experiment.
Table 1. Aging parameter information in FC1 and FC2.
Considering that the number of data points collected from the PEMFC stack degradation tests exceeds 100,000, data reduction and extraction of representative data are essential. The degradation dataset is reconstructed by regular sampling at half-hour intervals. Meanwhile, LOESS is applied to process the reconstructed data to remove noise and spikes, creating two new degradation datasets, FC1 and FC2 [25]. In particular, the stack voltage fluctuates or decreases significantly over time, while the other parameters do not change obviously. Thus, the stack voltage is adopted as an indicator of the health of the PEMFC in this study. The stack voltage degradation trends of FC1 and FC2 after resampling and smoothing are illustrated in Figure 5a,b, from which it can be seen that the processed dataset retains the primary trends of the original data and eliminates noise and spikes effectively.
Figure 5. Degradation data of stack voltage: (a) FC1 and (b) FC2.
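As an illustration of the preprocessing in Step 1, a minimal sketch of the half-hour resampling and LOESS filtering is given below. It assumes the raw measurements sit in a pandas DataFrame indexed by timestamp with a hypothetical stack-voltage column named 'Utot'; the smoothing span frac is an assumed value that would be tuned against the degradation trend in practice.

```python
import pandas as pd
from statsmodels.nonparametric.smoothers_lowess import lowess

def preprocess(df, voltage_col="Utot", frac=0.05):
    # Step 1a: regular resampling at half-hour intervals (mean of each window).
    resampled = df.resample("30min").mean().dropna()
    # Step 1b: LOESS filtering of the stack voltage to remove noise and spikes.
    hours = (resampled.index - resampled.index[0]).total_seconds() / 3600.0
    resampled[voltage_col] = lowess(resampled[voltage_col].to_numpy(), hours,
                                    frac=frac, return_sorted=False)
    return resampled
```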
It is noteworthy that the degradation data include 24 variables, among which the stack voltage is chosen as the output variable. Further analysis is necessary to determine whether all of the remaining variables contribute to the degradation of the stack voltage of PEMFC. To do this, UIC is adopted to select the input variables. Firstly, the UIC values between the remaining variables and the stack voltage are calculated, and the results are shown in Figure 6. Thus, U1~U5 are adopted as the best input variables for FC1 and FC2. The input dataset is composed of the optimal input and output variables, where the first 50% of the data are allocated for training and the remainder is designated for testing, as shown in Figure 7.
Figure 6. The calculation results of UIC values.
Figure 7. Illustration of training and testing datasets.
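The variable-selection step can be sketched as follows. The UIC itself is not reproduced here; mutual information from scikit-learn is used purely as a stand-in association measure so that the ranking-and-truncation logic (score every candidate variable against the stack voltage and keep the top five) is visible. The column names and DataFrame layout are assumptions.

```python
import pandas as pd
from sklearn.feature_selection import mutual_info_regression

def select_inputs(df, target_col, n_keep=5):
    # Score each candidate variable against the stack voltage and keep the
    # n_keep strongest ones (mutual information stands in for the UIC score).
    candidates = [c for c in df.columns if c != target_col]
    scores = mutual_info_regression(df[candidates].to_numpy(),
                                    df[target_col].to_numpy(), random_state=0)
    ranking = pd.Series(scores, index=candidates).sort_values(ascending=False)
    return ranking.index[:n_keep].tolist(), ranking
```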

4.2. Model Hyperparameter Setting

To promote the performance of the proposed model, the inherent parameters of ELM and KRR are optimized using a grid search. The search ranges for the number of neurons of ELM and the regularization parameter of KRR are set as [50, 200] and [0, 1], respectively. The internal parameters of LSTM and TCN, including the number of hidden layer nodes, the convolution kernel size, the batch size, the number of training epochs, etc., are determined through trial and error. The parameters of SGM, UIC-SGM, and UIC-ASGM are consistent with those of the single models. For UIC-IDBO-SGM and UIC-IDBO-ASGM, IDBO is applied to optimize the parameters of LSTM, where the maximum number of iterations and the population size in IDBO are set to 50 and 30, respectively. The optimization search ranges for the number of training epochs and the initial learning rate of LSTM are [300, 500] and [0, 1], respectively. In particular, Table 2 provides a comprehensive overview of the detailed parameter settings for the experiment.
Table 2. Parameter settings of all experimental models.
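As a concrete illustration of the grid search over the stated KRR range, the sketch below tunes the regularization parameter with three-fold cross-validation; the RBF kernel, the grid resolution, and the synthetic data are assumptions, and the same pattern would apply to the ELM neuron count.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

# Regularization parameter searched over (0, 1]; alpha = 0 is avoided
# to keep the kernel system well-conditioned.
param_grid = {"alpha": np.linspace(0.01, 1.0, 20)}
search = GridSearchCV(KernelRidge(kernel="rbf"), param_grid,
                      scoring="neg_root_mean_squared_error", cv=3)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))          # stand-in for the UIC-selected inputs
y_train = X_train.sum(axis=1) + 0.05 * rng.normal(size=100)
search.fit(X_train, y_train)
best_alpha = search.best_params_["alpha"]
```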

5. Results and Discussion

To assess the precision of forecast results, several common indicators, including root mean square error (RMSE), mean absolute percentage error (MAPE), and R square (R2), are introduced to measure the results of all experiments in a comprehensive manner [51]. The smaller the values of RMSE and MAPE, the smaller the error between actual and predicted values, and the nearer R2 approaches 1, the more excellent fit is achieved. The formulas for calculating the three metrics are as follows:
$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(Y_i - Y_i^*\right)^2},$$
$$MAPE = \frac{1}{N}\sum_{i=1}^{N} 100 \times \frac{\left|Y_i - Y_i^*\right|}{Y_i},$$
$$R^2\left(Y, Y^*\right) = 1 - \frac{\sum_{i=1}^{N}\left(Y_i - Y_i^*\right)^2}{\sum_{i=1}^{N}\left(Y_i - \bar{Y}\right)^2},$$
where Y and Y* denote the observed and forecasted stack voltage values, respectively. In addition, in the experiments on FC1 and FC2, the RUL is regarded as the time before reaching a certain level of voltage loss. Specifically, voltage losses of 3.0%, 3.5%, and 4.0% of the initial voltage (Vinit = 3.35 V) are considered as the failure thresholds (FTS) of FC1. Under the quasi-dynamic condition, the stack voltage of the fuel cell degrades more severely than under the static condition. Similar to the static operation, voltage losses of 4.5%, 5.0%, and 5.5% of the initial voltage (Vinit = 3.33 V) are considered as the FTS for FC2. Meanwhile, the final score accuracy (FAscore) of the whole forecast at the different FTS is applied to assess the model's prediction of the RUL [52]. When the FAscore approaches 1, the model is more accurate and has better prediction performance [52]. The score is calculated as follows:
(1)
Compute the predicted RUL (PRUL) and the observed RUL (ORUL) at the various FTS:
$$ORUL = T_{oFT} - T_{pred},$$
$$PRUL = T_{pFT} - T_{pred},$$
where Tpred denotes the start moment of prediction, ToFT indicates the moment when the original signal first arrives at FTS, and TpFT implies the moment when the predicted signal first arrives at FTS.
(2)
Compute the error (Er) between PRUL and ORUL:
$$E_r = \frac{ORUL - PRUL}{ORUL} \times 100\%.$$
(3)
Compute the accuracy score (Ascore) of RUL prediction:
$$A_{score} = \begin{cases} \exp\left(-\ln(0.5) \times 0.2\,E_r\right), & E_r \le 0 \\ \exp\left(\ln(0.5) \times 0.05\,E_r\right), & E_r > 0 \end{cases}$$
(4)
Average the Ascore under all FTS to obtain the FAscore:
$$FA_{score} = \frac{1}{Z}\sum_{z=1}^{Z} A_{score,z},$$
where Z denotes the number of defined FTS and is set to three in this study.
In addition, the percentage improvement index $PI_{index}$ (index = RMSE, MAPE, and R2) is also employed to represent the extent of improvement in the prediction results of the proposed model. The computational formula is as follows:
$$PI_{index} = \frac{\left|EIV_{model1} - EIV_{model2}\right|}{EIV_{model2}} \times 100\%,$$
where $EIV_{model1}$ and $EIV_{model2}$ represent the values of the evaluation indicators for the proposed model and the comparison model, respectively.
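A reference implementation of the above metrics and the RUL scoring, as understood from the reconstructed formulas, is sketched below. The threshold-crossing helper and the example loss fractions follow the FC1/FC2 setup described earlier, and the piecewise coefficients (0.2 for Er ≤ 0, 0.05 for Er > 0) follow the reconstruction given above.

```python
import numpy as np

def rmse(y, y_hat):
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    return np.sqrt(np.mean((y - y_hat) ** 2))

def mape(y, y_hat):
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    return np.mean(100.0 * np.abs(y - y_hat) / y)

def r2(y, y_hat):
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)

def first_crossing_time(t, v, threshold):
    """Time at which the voltage signal first drops to the failure threshold
    (assumes the threshold is actually crossed within the horizon)."""
    idx = int(np.argmax(np.asarray(v) <= threshold))
    return np.asarray(t)[idx]

def ascore(er_percent):
    # Piecewise accuracy score: stronger penalty (coefficient 0.2) for Er <= 0,
    # milder penalty (coefficient 0.05) for Er > 0; Ascore = 1 when Er = 0.
    if er_percent <= 0:
        return float(np.exp(-np.log(0.5) * 0.2 * er_percent))
    return float(np.exp(np.log(0.5) * 0.05 * er_percent))

def fascore(t, v_true, v_pred, t_pred_start, v_init, loss_fractions):
    """Average the accuracy score over the defined failure thresholds, e.g.
    loss_fractions = (0.030, 0.035, 0.040) for FC1 with v_init = 3.35 V."""
    scores = []
    for frac in loss_fractions:
        threshold = v_init * (1.0 - frac)
        orul = first_crossing_time(t, v_true, threshold) - t_pred_start
        prul = first_crossing_time(t, v_pred, threshold) - t_pred_start
        er = (orul - prul) / orul * 100.0
        scores.append(ascore(er))
    return float(np.mean(scores))
```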

5.1. Results

5.1.1. Future Degradation Trend Forecasting Results

The evaluation indicator results of the six models in predicting FDT for FC1 are provided in Table 3, as well as displayed in Figure 8 for visual representation. The following conclusions can be drawn according to Table 3 and Figure 8: (i) UIC-IDBO-ASGM achieves the optimal forecasting performance. The proposed model exhibits the best RMSE and MAPE, which are 0.00147 and 4.296. Moreover, R2 stands at 0.98969, which is nearest to 1. (ii) UIC-SGM shows superior forecasting performance compared to SGM, with smaller RMSE and MAPE and larger R2, revealing that UIC is able to optimize the input variables efficiently. (iii) Comparing the forecasting results of UIC-SGM, UIC-ASGM, UIC-IDBO-SGM, and UIC-IDBO-ASGM, it is evident that the fusion of IDBO and attention mechanism into SGM can improve the prediction performance, indicating that optimizing model hyperparameters using IDBO is effective, and the attention mechanism can enhance the learning capability of the meta-model.
Table 3. Evaluation results in FC1.
Figure 8. RMSE, MAPE, and R2 of the six models adopted to predict FDT for FC1.
Likewise, Table 4 and Figure 9 present the evaluation indicator results of the six models in predicting FDT for FC2. Similar conclusions can be obtained according to Table 4 and Figure 9: (i) UIC-IDBO-ASGM achieves the best results with the smallest RMSE and MAPE of 0.00212% and 0.04572%, respectively, and R2 of 0.99359. (ii) UIC effectively selects input variables, and UIC-SGM outperforms SGM in terms of RMSE, MAPE, and R2. (iii) Introducing the IDBO for hyperparameter optimization and incorporating attention mechanism to enhance the learning capability of the meta-model both improve forecasting performance. In addition, prediction curves and linearly fitted scatter plots are drawn to further show the superior performance of the proposed model intuitively for LSTM, SGM, UIC-SGM, UIC-ASGM, and UIC-IDBO-SGM, as shown in Figure 10 and Figure 11 for FC1 and FC2, respectively. In the magnified view, it is evident that the prediction curve of the proposed model closely aligns with the actual curve. It is notable that the proposed model exhibits the narrowest 95% prediction band in the linear fit scatter plot. The above conclusions illustrate that UIC-IDBO-ASGM can achieve better performance in predicting FDT.
Table 4. Evaluation results in FC2.
Figure 9. RMSE, MAPE, and R2 of the six models adopted to predict FDT for FC2.
Figure 10. FDT prediction results for FC1.
Figure 11. FDT prediction results for FC2.

5.1.2. Remaining Useful Life Forecasting Results

In the static operation task, the experimental data come from a 1050 h durability test on the PEMFC stack. For FC1, the actual values of the RUL at the various FTS are 63 h (3.0%), 227 h (3.5%), and 232 h (4.0%), respectively. Table 5 and Figure 12 present the FAscores for the six models. It is observed that the FAscores of the six models are 0.6708, 0.7837, 0.8912, 0.9293, 0.93192, and 0.9671, respectively, with the FAscore of UIC-IDBO-ASGM nearest to 1. When the FT is 3%, the RUL forecasted with the proposed model is 62 h, one hour ahead of reaching the failure threshold, which can facilitate maintenance of the PEMFC system. In summary, the prediction performance of the proposed model is satisfactory in the static operation task.
Table 5. The final scores in the FC1 evaluation.
Figure 12. RUL prediction results of the six models for FC1.
Likewise, the actual values of RUL at various FTS conditions are 246 h (4.5%), 264 h (5%), and 424 h (5.5%) in FC2, respectively. The FAscore for RUL forecast precision of the six models is presented in Table 6 and Figure 13, from which it can be noticed that the FAscore of RUL forecast precision for UIC-IDBO-ASGM is 0.9832, which is closest to 1. When FT is 5%, the RUL forecasted with the proposed model is 263 h, which is one hour ahead of reaching the failure threshold. The results demonstrate that the proposed model also exhibits excellent forecasting performance for FC2. On the whole, the proposed UIC-IDBO-ASGM attains the optimal prediction performance under the two conditions.
Table 6. The final scores in the FC2 evaluation.
Figure 13. RUL prediction results of the six models for FC2.

5.2. Discussion

5.2.1. Discussion on the Effectiveness of UIC

To ascertain the effectiveness of UIC, the performance improvement percentages between the contrastive models with and without UIC for predicting FC1 and FC2 are presented in Table 7, from which it can be seen that the performance of the models with UIC improves to different degrees. Taking the FC2 prediction as an example, UIC-SGM reduces RMSE by 33.49% and MAPE by 22.94% and improves R2 by 5.00% compared to SGM, which demonstrates that UIC can effectively eliminate irrelevant variables and select the best input variables, thus promoting the prediction performance.
Table 7. Performance improvement percentages.

5.2.2. Discussion on the Effectiveness of the Proposed IDBO

To validate the effectiveness of the proposed IDBO, nine benchmark functions are applied for analysis. Meanwhile, DBO, WOA, ALO, SCA, GWO, and MFO are employed for comparison. The nine benchmark functions illustrated in Table 8 include unimodal functions (F5, F6, and F7), multimodal functions (F10, F12, and F13), and fixed-dimension multi-modal functions (F14, F15, and F17) [53].
Table 8. Benchmark functions.
All algorithms are run 10 times on each benchmark function with population size and maximum iteration number set to 30 and 200, respectively [54]. In addition, two statistical metrics, average (Ave.) and standard deviation (Std.), are applied as evaluation indicators. Figure 14 illustrates the convergence curves, and Table 9 provides a performance comparison. Table 9 reveals that IDBO achieves superior performance and better convergence speed among all optimization algorithms. As shown in Figure 14, compared with other algorithms, IDBO not only performs well in discovering globally optimal solutions but also exhibits promising convergence within a relatively short time. For F17, although IDBO has similar search results compared to the other algorithms, IDBO converges faster. Moreover, the performance improvement of the prediction model with IDBO is demonstrated in Table 7. Taking FC2 as an example, compared to UIC-SGM, UIC-IDBO-SGM achieves a reduction of 30.14% for RMSE and a decrease of 28.92% for MAPE, while R2 is increased by 1.37%. In summary, the proposed IDBO demonstrates robust global search capability and can effectively optimize the parameters of the prediction model.
Figure 14. Convergence curves of IDBO, DBO, ALO, WOA, SCA, GWO, and MFO for different benchmark functions.
Table 9. Performance comparison of the algorithms.
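For reference, the Levy-flight perturbation embedded in IDBO can be sketched with Mantegna's algorithm, a common way of drawing heavy-tailed steps; the step-scaling factor, the hyperparameter vector, and the clipping bounds below are illustrative assumptions rather than the paper's exact update rule.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Draw a heavy-tailed Levy-flight step via Mantegna's algorithm."""
    rng = np.random.default_rng() if rng is None else rng
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size=dim)
    v = rng.normal(0.0, 1.0, size=dim)
    return u / np.abs(v) ** (1 / beta)

# Example: perturb a candidate LSTM hyperparameter vector [epochs, learning rate]
# and clip it back into the search ranges stated in Section 4.2.
rng = np.random.default_rng(1)
candidate = np.array([400.0, 0.01])
new_candidate = np.clip(candidate + 0.05 * levy_step(2, rng=rng) * candidate,
                        [300.0, 0.0], [500.0, 1.0])
```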

5.2.3. Discussion on the Effectiveness of the Proposed ASGM

The prediction results of the proposed model surpass those of comparative models, as illustrated in Table 3 and Table 4, from which it can be concluded that the proposed ASGM integrating attention mechanism into the meta-model of SGM attains promising forecasting performance. In addition, it is obvious from Table 7 that the performance of the prediction model blended with the attention mechanism is improved to some extent. In the case of FC1, the RMSE and MAPE of UIC-IDBO-ASGM are decreased by 37.9% and 37.1%, respectively, compared to UIC-IDBO-SGM, while R2 is raised by 1.718%. The experimental results demonstrate that integrating the attention mechanism can further enhance the learning ability of the meta-model, thus improving the prediction performance of the overall model.
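To illustrate the role of the attention module in the meta-model, the toy sketch below scores the three base-model predictions, turns the scores into softmax weight coefficients, and fuses the predictions as a weighted sum. The scoring parameters here are random stand-ins for learned weights; the paper's actual attention layer sits inside the LSTM-based meta-model and is trained end to end.

```python
import numpy as np

def attention_fuse(base_preds, w_score, b_score=0.0):
    """base_preds: (n_samples, n_base) predictions from the KRR/ELM/TCN base models."""
    scores = base_preds * w_score + b_score              # per-model attention scores
    exp_s = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha = exp_s / exp_s.sum(axis=1, keepdims=True)     # softmax weight coefficients
    fused = (alpha * base_preds).sum(axis=1)             # weighted fusion of the bases
    return fused, alpha

rng = np.random.default_rng(0)
mock_preds = rng.normal(3.3, 0.01, size=(5, 3))          # mock stack-voltage predictions
fused, weights = attention_fuse(mock_preds, w_score=rng.normal(size=3))
```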

6. Conclusions

To enhance the forecasting accuracy of the FDT and RUL for PEMFC, a compound framework incorporating LOESS, UIC, and ASGM with IDBO is proposed in this paper. Within the proposed framework, UIC is adopted to acquire critical information by selecting relevant factors of the degraded sequences, which are filtered by LOESS. Subsequently, the base models of ASGM are applied to acquire the corresponding prediction results by forecasting the critical information. Finally, the meta-model of ASGM, an attention-based LSTM in which the attention mechanism is introduced to deduce the weight coefficients of the prediction results in the dense layer of the LSTM, is employed to fuse the corresponding prediction results and obtain the FDT and RUL. Meanwhile, an IDBO based on Levy flight, adaptive mutation, and polynomial mutation strategies is utilized to optimize the parameters of the LSTM. Through the application of two different PEMFC datasets and a comparison with five related models, the following conclusions are obtained: (1) the RMSE and MAPE of UIC-IDBO-ASGM are the smallest and its R2 is the largest, demonstrating that UIC-IDBO-ASGM has better prediction performance; (2) the proposed IDBO surpasses the comparative optimization algorithms, such as DBO, WOA, ALO, SCA, GWO, and MFO, and can supply adequate support for ASGM; (3) UIC-based correlation analysis can effectively eliminate irrelevant variables and select the best input variables; (4) the introduction of a meta-model with an attention mechanism in ASGM can effectively fuse the prediction results of the base models to obtain better prediction results. Future research will focus on exploring long-term aging prediction and online RUL estimation under dynamic operating conditions.

Author Contributions

Methodology, software, experiments, and writing the original draft, C.W.; conceptualization and review of this manuscript, W.F.; supervision and project management, Y.S.; software, M.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Natural Science Foundation of Hubei Province of China (No. 2022CFB935) and the Open Fund of Hubei Key Laboratory for Operation and Control of Cascaded Hydropower Station (No. 2022KJX10).

Data Availability Statement

The datasets in this paper are public datasets, which are available at https://search-data.ubfc.fr/FR-18008901306731-2021-07-19_IEEE-PHM-Data-Challenge-2014.html (accessed on 13 April 2024).

Conflicts of Interest

The authors declare no competing interests.

Abbreviations

ASGM: Attention-based Stacked Generalization Model
ARIMA: Autoregressive Integrated Moving Average
Ave.: Average
DBO: Dung Beetle Optimization
ELM: Extreme Learning Machine
ESN: Echo State Network
FDT: Future Degradation Trend
FAscore: Final Score Accuracy
FTS: Failure Thresholds
GRU: Gated Recurrent Unit
IDBO: Improved Dung Beetle Optimization
KRR: Kernel Ridge Regression
LOESS: Locally Weighted Scatter Plot Smoothing
LSTM: Long Short-Term Memory Neural Network
MAPE: Mean Absolute Percentage Error
PEMFC: Proton Exchange Membrane Fuel Cell
UIC: Uniform Information Coefficient
RUL: Remaining Useful Life
R2: R Square
RMSE: Root Mean Square Error
SGM: Stacked Generalization Model
Std.: Standard Deviation
TCN: Temporal Convolutional Network

References

  1. Liu, Z.; Xu, S.; Zhao, H.; Wang, Y. Durability estimation and short-term voltage degradation forecasting of vehicle PEMFC system: Development and evaluation of machine learning models. Appl. Energy 2022, 326, 119975. [Google Scholar] [CrossRef]
  2. Li, W.; Zhang, Q.; Wang, C.; Yan, X.; Shen, S.; Xia, G. Experimental and numerical analysis of a three-dimensional flow field for PEMFCs. Appl. Energy 2017, 195, 278–288. [Google Scholar] [CrossRef]
  3. Zhou, D.; Gao, F.; Breaz, E.; Ravey, A.; Miraoui, A. Degradation prediction of PEM fuel cell using a moving window based hybrid prognostic approach. Energy 2017, 138, 1175–1186. [Google Scholar] [CrossRef]
  4. Liu, H.; Chen, J.; Hissel, D.; Lu, J.; Hou, M.; Shao, Z. Prognostics methods and degradation indexes of proton exchange membrane fuel cells: A review. Renew. Sustain. Energy Rev. 2020, 123, 109721–109743. [Google Scholar] [CrossRef]
  5. Zuo, J.; Lv, H.; Zhou, D.; Xue, Q.; Jin, L.; Zhou, W. Deep learning based prognostic framework towards proton exchange membrane fuel cell for automotive application. Appl. Energy 2021, 281, 115937–115950. [Google Scholar] [CrossRef]
  6. Wang, C.; Wang, Z.; Chu, S.; Ma, H.; Yang, N. A two-stage underfrequency load shedding strategy for microgrid groups considering risk avoidance. Appl. Energy 2024, 367, 123343. [Google Scholar] [CrossRef]
  7. Zhou, D.; Wu, Y.; Gao, F. Degradation prediction of PEM fuel cell stack based on multiphysical aging model with particle filter approach. IEEE Trans. Ind. Appl 2017, 53, 4041–4052. [Google Scholar] [CrossRef]
  8. Chen, K.; Laghrouche, S.; Djerdir, A. Fuel cell health prognosis using Unscented Kalman Filter: Postal fuel cell electric vehicles case study. Int. J. Hydrogen Energy 2019, 44, 1930–1939. [Google Scholar] [CrossRef]
  9. Ma, R.; Yang, T.; Breaz, E.; Li, Z.; Briois, P.; Gao, F. Data-driven proton exchange membrane fuel cell degradation predication through deep learning method. Appl. Energy 2018, 231, 102–115. [Google Scholar] [CrossRef]
  10. Benaggoune, K.; Yue, M.; Jemei, S.; Zerhouni, N. A data-driven method for multistep ahead prediction and long-term prognostics of proton exchange membrane fuel cell. Appl. Energy 2022, 313, 118835–118850. [Google Scholar] [CrossRef]
  11. Chen, K.; Laghrouche, S.; Djerdir, S. Degradation model of proton exchange membrane fuel cell based on a novel hybrid method. Appl. Energy 2019, 252, 113439–113448. [Google Scholar] [CrossRef]
  12. Pan, R.; Yang, D.; Wang, Y.; Chen, Z. Performance degradation prediction of proton exchange membrane fuel cell using a hybrid prognostic approach. Int. J. Hydrogen Energy 2020, 45, 30994–31008. [Google Scholar] [CrossRef]
  13. Liu, H.; Chen, J.; Hissel, D.; Su, H. Remaining useful life estimation for proton exchange membrane fuel cells using a hybrid method. Appl. Energy 2019, 237, 910–919. [Google Scholar] [CrossRef]
  14. Xie, Y.; Zou, J.; Peng, C. A novel PEM fuel cell remaining useful life prediction method based on singular spectrum analysis and deep Gaussian processes. Int. J. Hydrogen Energy 2020, 45, 30942–30956. [Google Scholar] [CrossRef]
  15. Ao, Y.; Laghrouche, S.; Depernet, D. Proton exchange membrane fuel cell prognosis based on frequency-domain Kalman filter. IEEE Trans. Transport. Electrific. 2021, 7, 2332–2343. [Google Scholar] [CrossRef]
  16. Wang, P.; Liu, H.; Hou, M. Estimating the Remaining Useful Life of Proton Exchange Membrane Fuel Cells under Variable Loading Conditions Online. Processes 2021, 9, 1459. [Google Scholar] [CrossRef]
  17. Deng, Z.; Chan, S.; Chen, Q.; Liu, H.; Zhang, L.; Zhou, K.; Fu, Z. Efficient degradation prediction of PEMFCs using ELM-AE based on fuzzy extension broad learning system. Appl. Energy 2023, 331, 120385. [Google Scholar] [CrossRef]
  18. Hua, Z.; Zheng, Z.; Péra, M.-C.; Gao, F. Remaining useful life prediction of PEMFC systems based on the multi-input echo state network. Appl. Energy 2020, 265, 114791. [Google Scholar] [CrossRef]
  19. Mezzi, R.; Yousfi-Steiner, N.; Péra, M.C.; Hissel, D.; Larger, L. An Echo State Network for fuel cell lifetime prediction under a dynamic micro-cogeneration load profile. Appl. Energy 2021, 283, 116297. [Google Scholar] [CrossRef]
  20. Durganjali, C.S.; Avinash, G.; Megha, K. Prediction of PV cell parameters at different temperatures via ML algorithms and comparative performance analysis in Multiphysics environment. Energ. Convers. Manag. 2023, 282, 116881. [Google Scholar] [CrossRef]
  21. Liao, W.; Fu, W.; Yang, K. Multi-scale residual neural network with enhanced gated recurrent unit for fault diagnosis of rolling bearing. Meas. Sci. Technol. 2024, 35, 056114. [Google Scholar] [CrossRef]
  22. Yang, J.; Wu, Y.; Liu, X. Proton Exchange Membrane Fuel Cell Power Prediction Based on Ridge Regression and Convolutional Neural Network Data-Driven Model. Sustainability 2023, 15, 11010. [Google Scholar] [CrossRef]
  23. Chen, K.; Laghrouche, S.; Djerdir, A. Aging prognosis model of proton exchange membrane fuel cell in different operating conditions. Int. J. Hydrogen Energy 2020, 45, 11761–11772. [Google Scholar] [CrossRef]
  24. Zheng, L.; Hou, Y.; Zhang, T.; Pan, X. Performance prediction of fuel cells using long short-term memory recurrent neural network. Int. J. Energy Res. 2021, 45, 9141–9161. [Google Scholar] [CrossRef]
  25. Liu, J.; Li, Q.; Chen, W.; Yan, Y.; Qiu, Y.; Cao, T. Remaining useful life prediction of PEMFC based on long short-term memory recurrent neural networks. Int. J. Hydrogen Energy 2019, 44, 5470–5480. [Google Scholar] [CrossRef]
  26. Ullah, N.; Ahmad, Z.; Siddique, M.F.; Im, K.; Shon, D.-K.; Yoon, T.-H.; Yoo, D.-S.; Kim, J.-M. An Intelligent Framework for Fault Diagnosis of Centrifugal Pump Leveraging Wavelet Coherence Analysis and Deep Learning. Sensors 2023, 23, 8850. [Google Scholar] [CrossRef] [PubMed]
  27. Pan, M.; Hu, P.; Gao, R. Multistep prediction of remaining useful life of proton exchange membrane fuel cell based on temporal convolutional network. Int. J. Green Energy 2023, 20, 408–422. [Google Scholar] [CrossRef]
  28. Siddique, M.F.; Ahmad, Z.; Ullah, N.; Kim, J. A Hybrid Deep Learning Approach: Integrating Short-Time Fourier Transform and Continuous Wavelet Transform for Improved Pipeline Leak Detection. Sensors 2023, 23, 8079. [Google Scholar] [CrossRef] [PubMed]
  29. Li, C.; Lin, W.; Wu, H.; Li, Y.; Zhu, W.; Xie, C.; Gooi, H.B.; Zhao, B.; Zhang, L. Performance degradation decomposition-ensemble prediction of PEMFC using CEEMDAN and dual data-driven model. Renew. Energy 2023, 215, 118913. [Google Scholar] [CrossRef]
  30. Yuan, Z.; Meng, L.; Gu, X.; Bai, Y.; Cui, H.; Jiang, C. Prediction of NOx emissions for coal-fired power plants with stacked generalization ensemble method. Fuel 2021, 289, 119748. [Google Scholar] [CrossRef]
  31. Fu, W.; Fu, Y.; Li, B. A compound framework incorporating improved outlier detection and correction, VMD, weight-based stacked generalization with enhanced DESMA for multi-step short-term wind speed forecasting. Appl. Energy 2023, 348, 121587. [Google Scholar] [CrossRef]
  32. Wolpert, D.H. Stacked generalization. Neural Netw. 1992, 5, 241–259. [Google Scholar] [CrossRef]
  33. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  34. Wang, J.; Yang, W.; Du, P.; Niu, T. A novel hybrid forecasting system of wind speed based on a newly developed multi-objective sine cosine algorithm. Energ. Conver. Manage. 2018, 163, 134–150. [Google Scholar] [CrossRef]
  35. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
  36. Ren, X.; Liu, S.; Yu, X. A method for state-of-charge estimation of lithium-ion batteries based on PSO-LSTM. Energy 2021, 234, 121236. [Google Scholar] [CrossRef]
  37. Chen, K.; Laghrouche, S.; Djerdir, A. Remaining useful life prediction for fuel cell based on support vector regression and grey wolf optimizer algorithm. IEEE Trans. Energy Convers. 2021, 37, 778–787. [Google Scholar] [CrossRef]
  38. Liu, Y.; Cao, B. A novel ant colony optimization algorithm with Levy flight. IEEE Access 2020, 8, 67205–67213. [Google Scholar] [CrossRef]
  39. Abed-alguni, B.H.; Paul, D. Island-based Cuckoo Search with elite opposition-based learning and multiple mutation methods for solving optimization problems. Soft Comput. 2022, 26, 3293–3312. [Google Scholar] [CrossRef]
  40. Dong, W.; Kang, L.; Zhang, W. Opposition-based particle swarm optimization with adaptive mutation strategy. Soft Computing 2017, 21, 5081–5090. [Google Scholar] [CrossRef]
  41. Venthuruthiyil, S.P.; Chunchu, M. Trajectory reconstruction using locally weighted regression: A new methodology to identify the optimum window size and polynomial order. Transp. A Transp. Sci. 2018, 14, 881–900. [Google Scholar] [CrossRef] [PubMed]
  42. Reshef, D.N.; Reshef, Y.A.; Finucane, H.K. Detecting novel associations in large data sets. Science 2011, 334, 1518–1524. [Google Scholar] [CrossRef] [PubMed]
  43. Mousavi, A.; Baraniuk, R.G. Uniform partitioning of data grid for association detection. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 1098–1107. [Google Scholar] [CrossRef] [PubMed]
  44. Tang, X.; Chen, F.; Xu, Q. Short-term load forecasting based on multi-dimensional deep extreme learning machine optimized by improved whale algorithm. Shandong Electr. Power 2023, 50, 1–7. (In Chinese) [Google Scholar]
  45. Naik, J.; Satapathy, P.; Dash, P.K. Short-term wind speed and wind power prediction using hybrid empirical mode decomposition and kernel ridge regression. Appl. Soft Comput. 2018, 70, 1167–1188. [Google Scholar] [CrossRef]
  46. Huang, Y.; Wen, B.; Liao, W. Image Enhancement Based on Dual-Branch Generative Adversarial Network Combining Spatial and Frequency Domain Information for Imbalanced Fault Diagnosis of Rolling Bearing. Symmetry 2024, 16, 512. [Google Scholar] [CrossRef]
  47. Fu, W.; Yang, K.; Wen, B.; Shan, Y. Rotating machinery fault diagnosis with limited multisensor fusion samples by fused attention-guided wasserstein GAN. Symmetry 2024, 16, 285. [Google Scholar] [CrossRef]
  48. Greff, K.; Srivastava, R.K.; Koutník, J. LSTM: A search space odyssey. IEEE Trans. Neural Netw. Learn. Syst. 2016, 28, 2222–2232. [Google Scholar] [CrossRef] [PubMed]
  49. Guo, M.; Xu, T.; Liu, J. Attention mechanisms in computer vision: A survey. Comput. Vis. Media 2022, 8, 331–368. [Google Scholar] [CrossRef]
  50. Jin, J.; Chen, Y.; Xie, C.; Zhu, W.; Wu, F. Remaining useful life prediction of PEMFC based on cycle reservoir with jump model. Int. J. Hydrogen Energy 2021, 46, 40001–40013. [Google Scholar] [CrossRef]
  51. Chen, X.; Wu, J.; Cai, J. Short-term load prediction based on BiLSTM optimized by hunter-prey optimization algorithm. Shandong Electr. Power 2024, 51, 64–71. (In Chinese) [Google Scholar]
  52. Hua, Z.; Zheng, Z.; Pahon, E. A review on lifetime prediction of proton exchange membrane fuel cells system. J. Power Sources 2022, 529, 231256. [Google Scholar] [CrossRef]
  53. Fu, W.; Zhang, K.; Wang, K. A hybrid approach for multi-step wind speed forecasting based on two-layer decomposition, improved hybrid DE-HHO optimization and KELM. Renew. Energy 2021, 164, 211–229. [Google Scholar] [CrossRef]
  54. Wang, K.; Fu, W.; Chen, T. A compound framework for wind speed forecasting based on comprehensive feature selection, quantile regression incorporated into convolutional simplified long short-term memory network and residual error correction. Energy Convers. Manag. 2020, 222, 113234. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
