Generalized Regression Neural Network Based Meta-Heuristic Algorithms for Parameter Identiﬁcation of Proton Exchange Membrane Fuel Cell

Abstract: An accurate parameter extraction of the proton exchange membrane fuel cell (PEMFC) is crucial for establishing a reliable cell model, which is also of great significance for subsequent research on the PEMFC. However, because the parameter identification of the PEMFC is a nonlinear optimization problem with multiple variables, peaks


Introduction
With the rapid development of technology and continuous economic growth, the demand for various fossil fuels and for electricity is increasing day by day. The problem is that the energy conversion efficiency of traditional fossil energy is relatively low, and its use causes serious environmental pollution, bringing the greenhouse effect, rising sea levels, acid rain, and other thorny environmental problems [1,2]. In addition, the massive development and utilization of traditional non-renewable energy will also cause a global energy crisis [3]. In this context, countries around the world have begun to vigorously develop clean energy and renewable energy. The proton exchange membrane fuel cell (PEMFC) is widely used because of its advantages of high energy density, high power generation efficiency, low-temperature start-up, and long working life [4,5].
With the widespread application of the PEMFC, precise modeling of cells is crucial for optimizing the control of cell systems and improving power generation efficiency. Currently, there are many models for the PEMFC, including three-dimensional steady-state models [6] and electrochemical steady-state models [7]. Among them, electrochemical models are widely used for parameter identification, and in recent years meta-heuristic algorithms (MhAs) have been applied to this task. Reference [16] identifies the parameters of the PEMFC model and then compares the results with those of other algorithms, proving that the algorithm has good fitting accuracy and can significantly improve the accuracy of the PEMFC model parameters. In reference [17], a PEMFC parameter identification method based on a Bayesian regularization neural network (BRNN) was proposed: the BRNN is used to de-noise the data and MhAs are used to identify the parameters, and the results are compared with other heuristic algorithms. The extraction results from the BRNN de-noised data are more accurate than those from the original data, and the obtained results are more stable with fewer outliers.
Overall, current research on PEMFC parameter identification mainly utilizes MhAs [18][19][20], and most of it focuses on algorithm improvement to increase the accuracy and speed of parameter extraction; only a few studies consider the impact of the data itself on the identification results. Therefore, this study proposes MhAs based on a generalized regression neural network (GRNN) for PEMFC parameter extraction: the GRNN is trained to predict and de-noise the data, fully considering the impact of insufficient measured data and noisy data on the final identification results, and parameter identification is conducted on the PEMFC under three operating conditions, namely high temperature and low pressure (HTLP), medium temperature and medium pressure (MTMP), and low temperature and high pressure (LTHP) [17]. The final results demonstrate that after data processing, the identification accuracy is higher and the performance is better. This study provides a new approach to the identification of PEMFC parameters, and its contributions and innovations can be summarized as follows: 1.
Established the PEMFC model and conducted parameter identification research on the model under three operating conditions; 2.
Considering the influence of insufficient data volume and noise data, a GRNN was used to de-noise and predict the measured V-I data, and the final results fully demonstrate its excellent robustness when applied to PEMFC parameter extraction under various operation conditions; 3.
Based on the data processed by a GRNN, six typical heuristic algorithms were compared for their effectiveness in PEMFC parameter identification. The results demonstrate that after data processing, accuracy can be greatly improved.
The structure of the remaining part is as follows: Section 2 covers the modeling of the PEMFC, mainly introducing the internal chemical mechanism of PEMFC power generation and its cell model, and then establishing an objective function for the model. Section 3 presents the application of GRNN-MhAs to PEMFC parameter identification, which involves using a GRNN for data de-noising and prediction and then using MhAs for parameter identification. Section 4 presents the parameter identification results obtained by the six algorithms under the three working conditions. Section 5 is the discussion. Section 6 provides the main conclusions of this research, as well as some prospects for future PEMFC parameter identification research.

PEMFC Modeling
Establishing the PEMFC model facilitates in-depth research on the parameter identification of a cell. This section mainly introduces the basic principles and the mathematical model of the PEMFC.

The Mechanism of the PEMFC
In principle, the PEMFC is equivalent to a reverse device for water electrolysis. A typical PEMFC is composed of an anode, a cathode, and a proton exchange membrane. The anode is the site of hydrogen fuel oxidation, the cathode is the site of oxidant reduction, and both electrodes contain catalysts to accelerate the electrochemical reactions [21][22][23].
Anode side: H₂ → 2H⁺ + 2e⁻
Cathode side: (1/2)O₂ + 2H⁺ + 2e⁻ → H₂O
Overall chemical reaction: H₂ + (1/2)O₂ → H₂O

Mathematical Model of the PEMFC
The model introduced in this section is only one kind of cell model, namely the 0-D model; note that many other multi-dimensional models exist. Considering the impact of the losses in the electrochemical reactions on the output characteristics of the PEMFC, the output voltage is as follows [25]:

V_cell = E_nernst − V_act − V_ohm − V_con

where V_act, V_ohm, and V_con represent the activation voltage loss (V), the ohmic voltage loss (V), and the concentration voltage loss (V), respectively; E_nernst is the thermodynamic electromotive force (V), which can be expressed as [26]:

E_nernst = ΔG/(2F) + ΔS/(2F)·(T_k − T_ref) + (R·T_k)/(2F)·[ln(P_H2) + (1/2)·ln(P_O2)]

where ΔG and ΔS represent the changes in Gibbs free energy and entropy, respectively, and the value of ΔG is 228,170 J/mol; F is the Faraday constant (96,485.3383 C/mol); R is the universal gas constant (8.314 J/(K·mol)); T_k and T_ref represent the actual temperature and the reference temperature, respectively; T_k is 353.15 K under HTLP operating conditions, 333.15 K under MTMP operating conditions, and 313.15 K under LTHP operating conditions; P_H2 and P_O2 denote the partial pressures of hydrogen (atm) and oxygen (atm), which can be expressed as [27]:

P_H2 = 0.5·RH_a·P_sat_H2O·{[(RH_a·P_sat_H2O/P_a)·exp(1.635·(i_cell/A)/T_k^1.334)]⁻¹ − 1}

P_O2 = RH_c·P_sat_H2O·{[(RH_c·P_sat_H2O/P_c)·exp(4.192·(i_cell/A)/T_k^1.334)]⁻¹ − 1}

where RH_a and RH_c are the relative humidities of the vapor at the anode and cathode, both equal to 1; P_a and P_c are the inlet pressures of the anode and cathode (atm), respectively, equal to 1 atm under HTLP operating conditions, 2 atm under MTMP operating conditions, and 3 atm under LTHP operating conditions; i_cell is the output current (A); A is the effective activation area, with a value of 50.6 cm²; P_sat_H2O is the saturation pressure of water vapor (atm), which is as follows:

log₁₀(P_sat_H2O) = 2.95×10⁻²·T_c − 9.18×10⁻⁵·T_c² + 1.44×10⁻⁷·T_c³ − 2.18, T_c = T_k − 273.15

In addition, the activation voltage loss V_act can be expressed as:

V_act = −[ε₁ + ε₂·T_k + ε₃·T_k·ln(C_O2) + ε₄·T_k·ln(i_cell)]

where ε₁, ε₂, ε₃, and ε₄ are semi-empirical coefficients, and C_O2 denotes the concentration of oxygen at the cathode catalyst surface (mol/cm³), which is shown below:

C_O2 = P_O2/(5.08×10⁶·exp(−498/T_k))

In addition, the ohmic voltage loss V_ohm is as follows [28]:

V_ohm = i_cell·(R_m + R_c)

where R_m and R_c are the equivalent resistance of the proton exchange membrane and the contact resistance to electron transfer (Ω), respectively; R_m can be expressed as:

R_m = ρ_m·l/A

where l is the thickness of the proton exchange membrane, with a value of 178 µm, and ρ_m is the membrane resistivity (Ω·cm), which can be expressed as:

ρ_m = 181.6·[1 + 0.03·J + 0.062·(T_k/303)²·J^2.5] / {(λ − 0.634 − 3·J)·exp[4.18·(T_k − 303)/T_k]}

where λ is the water content of the membrane.
In addition, the concentration voltage loss V_con can be expressed as:

V_con = −b·ln(1 − J/J_max)

where b is a parametric coefficient (V), J is the current density (A/cm²), and J_max is the maximum current density, with a value of 1.5 A/cm². Finally, it is clear from Equations (4)-(15) that seven unknown parameters of the PEMFC need to be identified, namely ε₁, ε₂, ε₃, ε₄, λ, R_c, and b.
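To make the model concrete, the sketch below evaluates the output voltage at one operating point. It assumes the widely used Amphlett-type correlations (including the numerically evaluated form of the Nernst equation), so coefficients and sign conventions may differ in detail from the authors' implementation; the parameter values in the usage example are illustrative, not the identified ones.

```python
import numpy as np

def cell_voltage(i_cell, T_k, P_H2, P_O2, params,
                 A=50.6,      # effective activation area (cm^2)
                 l_m=0.0178,  # membrane thickness: 178 um = 0.0178 cm
                 J_max=1.5):  # maximum current density (A/cm^2)
    """Output voltage (V) of the semi-empirical 0-D PEMFC model.

    params = (eps1, eps2, eps3, eps4, lam, R_c, b): the seven unknowns
    to be identified.
    """
    eps1, eps2, eps3, eps4, lam, R_c, b = params
    J = i_cell / A                                   # current density (A/cm^2)

    # Nernst electromotive force (numerically evaluated form)
    E_nernst = (1.229 - 0.85e-3 * (T_k - 298.15)
                + 4.3085e-5 * T_k * np.log(P_H2 * np.sqrt(P_O2)))

    # oxygen concentration at the cathode catalyst surface (mol/cm^3)
    C_O2 = P_O2 / (5.08e6 * np.exp(-498.0 / T_k))

    # activation loss; eps1..eps4 are the semi-empirical coefficients
    V_act = -(eps1 + eps2 * T_k + eps3 * T_k * np.log(C_O2)
              + eps4 * T_k * np.log(i_cell))

    # ohmic loss: membrane resistance R_m = rho_m * l / A plus contact R_c
    rho_m = (181.6 * (1 + 0.03 * J + 0.062 * (T_k / 303.0) ** 2 * J ** 2.5)
             / ((lam - 0.634 - 3 * J) * np.exp(4.18 * (T_k - 303.0) / T_k)))
    V_ohm = i_cell * (rho_m * l_m / A + R_c)

    # concentration loss
    V_con = -b * np.log(1.0 - J / J_max)

    return E_nernst - V_act - V_ohm - V_con
```

With typical literature-style values, e.g. `cell_voltage(10.0, 353.15, 1.0, 1.0, (-0.9514, 3.12e-3, 7.6e-5, -1.93e-4, 20.0, 1e-4, 0.0136))`, the result lies in the physically plausible single-cell range below the open-circuit voltage.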

Objective Function
This study utilizes the root mean square error (RMSE) to measure the accuracy of the extraction results. It effectively reflects the accuracy of the calculated value, that is, the degree of deviation between the calculated value and the actual value. Therefore, RMSE is defined as the objective function, as follows:

RMSE = √[(1/N)·Σᵢ₌₁ᴺ (V_act,i − V_est,i)²]

where N is the quantity of data, and V_act and V_est represent the measured voltage and the calculated voltage, respectively. Furthermore, the constraints of the key parameters are as follows:
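As a concrete reference, the objective function amounts to a few lines of code (a minimal sketch; the measured and estimated voltages would come from the test bench and the cell model, respectively):

```python
import numpy as np

def rmse(v_measured, v_estimated):
    """Root mean square error between measured and model-estimated voltages."""
    v_measured = np.asarray(v_measured, dtype=float)
    v_estimated = np.asarray(v_estimated, dtype=float)
    return float(np.sqrt(np.mean((v_measured - v_estimated) ** 2)))
```

An MhA then searches the seven-parameter space for the vector that minimizes this value over all V-I samples.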

Principle of GRNN
A GRNN is a special form of nonlinear regression feedforward neural network belonging to the family of radial basis function (RBF) networks. The GRNN is based on non-parametric regression and follows the principle of maximum probability to obtain the network output [29]. The GRNN model inherits the good nonlinear approximation capability of the RBF neural network. The GRNN converges quickly, requires little computation, and performs well even with few training samples. It has been widely applied in structural analysis, control decision-making, system identification, and other areas, especially curve fitting.
As shown in Figure 1, the GRNN model consists of four layers, namely the input layer, the pattern layer, the summation layer, and the output layer [29]. The network input is X = [x₁, x₂, ..., xₙ]ᵀ and its output is Y = [y₁, y₂, ..., y_k]ᵀ. The GRNN adopts the idea of nonlinear regression analysis. Let x, y be random variables, let X be the real observation value and g(x, y) be the joint probability density function; the regression of y on x is determined by the following Equation (18):

ŷ(X) = E[y|X] = ∫₋∞⁺∞ y·g(X, y) dy / ∫₋∞⁺∞ g(X, y) dy (18)

The function g(x, y) can be obtained by nonparametric estimation from the observation samples of x and y, as Equations (19) and (20) show:

ĝ(X, y) = [1/(n·(2π)^((m+1)/2)·σ^(m+1))]·Σᵢ₌₁ⁿ exp[−Dᵢ²/(2σ²)]·exp[−(y − Yᵢ)²/(2σ²)] (19)

Dᵢ² = (X − Xᵢ)ᵀ(X − Xᵢ) (20)

where σ is called the smoothing factor. Substituting Equations (19) and (20) into Equation (18), and because ∫₋∞⁺∞ x·e^(−x²) dx = 0, Equation (18) simplifies to:

Ŷ(X) = Σᵢ₌₁ⁿ Yᵢ·exp[−Dᵢ²/(2σ²)] / Σᵢ₌₁ⁿ exp[−Dᵢ²/(2σ²)] (21)

Obviously, in Equation (21) above, once the input training samples are determined, training the neural network is essentially a matter of determining the smoothing factor σ; the process only requires adjusting σ to change the transfer function.
Based on the above principles, the basic operation process of a GRNN is as follows [30]:
Step 1 Input Layer: the number of neurons is equal to the dimension of the input vector X = [x₁, x₂, ..., xₙ]ᵀ in the learning sample, and this layer directly transfers the input variables to the pattern layer.
Step 2 Pattern Layer: the number of neurons in the pattern layer is equal to the number of learning samples n, and the neurons correspond to the learning samples one by one. The squared Euclidean distance between the input vector X and the i-th learning sample Xᵢ is Dᵢ² = (X − Xᵢ)ᵀ(X − Xᵢ). In the pattern layer, the Gaussian function is chosen as the activation kernel, and the transfer function can be expressed as:

pᵢ = exp[−Dᵢ²/(2σ²)], i = 1, 2, ..., n

where σ is the smoothing parameter.
Step 3 Summation Layer: two types of neurons are used for summation in the summation layer. The first type corresponds to the dimension k of the output vector, with a total of k nodes; the connection weight between the i-th neuron in the pattern layer and the j-th numerator-summation neuron in the summation layer is the j-th element y_ij of the output sample Yᵢ, and the transfer function is:

S_Nj = Σᵢ₌₁ⁿ y_ij·pᵢ, j = 1, 2, ..., k

The second type has only one neuron S_D, which performs an arithmetic summation over all neurons in the pattern layer, with the transfer function:

S_D = Σᵢ₌₁ⁿ pᵢ

Step 4 Output Layer: each neuron divides the corresponding output of the summation layer by S_D, and the output of the j-th neuron, corresponding to the j-th element of the estimate Ŷ(X), is:

ŷⱼ = S_Nj / S_D, j = 1, 2, ..., k

In summary, in the training process of the GRNN, only the smoothing parameter σ needs to be adjusted to change the transfer function and obtain the regression estimates.
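The four steps above can be condensed into a short sketch of Equation (21). The function name and the toy data are illustrative:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.1):
    """GRNN estimate y_hat(X) = sum_i y_i * p_i / sum_i p_i, where the
    pattern-layer activation is p_i = exp(-D_i^2 / (2 * sigma^2))."""
    X_train = np.atleast_2d(X_train)
    y_train = np.asarray(y_train, dtype=float)
    out = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum((X_train - x) ** 2, axis=1)  # D_i^2, squared Euclidean distance
        p = np.exp(-d2 / (2.0 * sigma ** 2))     # pattern layer (Gaussian kernel)
        out.append(p @ y_train / p.sum())        # summation layer: S_N / S_D
    return np.array(out)
```

Exactly as the derivation indicates, the only quantity to tune is the smoothing factor σ; there are no weights to train by gradient descent.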

Parameter Extraction Process
The conventional process for parameter identification of the PEMFC based on a GRNN and MhAs is mainly divided into three stages: data collection, data preprocessing, and parameter extraction via optimization; the specific process is shown in Figure 2.
The concrete process can be expressed as follows: first, collect the actual cell voltage and current data; then, train the GRNN model for data prediction and noise reduction to obtain the predicted data and the de-noised data; finally, use the six heuristic algorithms to optimize and iterate on the processed PEMFC data to obtain the final parameter identification results. Note that this work uses RMSE to measure the error, and the steps are shown in Figure 3.
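The three stages can be sketched end to end. The snippet below is illustrative only: it uses a synthetic toy voltage curve instead of the real cell data and plain random search as a stand-in for the six MhAs, but the flow (measure, GRNN de-noise/predict from 25 to 145 points, then minimize RMSE) mirrors Figures 2 and 3:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: data collection (synthetic stand-in for the 25 measured V-I pairs) ---
def true_curve(i):                          # toy curve, NOT the full PEMFC model
    return 1.0 - 0.05 * np.log(i) - 0.002 * i

I_meas = np.linspace(1.0, 25.0, 25)
V_meas = true_curve(I_meas) + rng.normal(0.0, 0.01, I_meas.size)  # noisy voltages

# --- Stage 2: GRNN preprocessing (de-noise and densify 25 -> 145 points) ---
def grnn(x_train, y_train, x_query, sigma=1.0):
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w @ y_train / w.sum(axis=1)

I_pred = np.linspace(1.0, 25.0, 145)
V_pred = grnn(I_meas, V_meas, I_pred)       # smoothed, predicted V-I data

# --- Stage 3: parameter extraction by minimizing RMSE (random search stand-in) ---
def model(i, p):                            # two unknown coefficients (a, c)
    return 1.0 - p[0] * np.log(i) - p[1] * i

def rmse(p):
    return np.sqrt(np.mean((V_pred - model(I_pred, p)) ** 2))

best_p, best_e = None, np.inf
for _ in range(20000):
    cand = rng.uniform([0.0, 0.0], [0.1, 0.01])
    err = rmse(cand)
    if err < best_e:
        best_p, best_e = cand, err
```

In the paper, stage 3 is performed by MFO, PSO, BAS, GWO, MPA, and AEO rather than random search, and the model in the objective is the full seven-parameter PEMFC voltage equation.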

Case Studies
In this part, a GRNN and six typical MhAs were used to extract the parameters of the PEMFC model: moth-flame optimization (MFO) [31], particle swarm optimization (PSO) [32], beetle antennae search (BAS) [33], grey wolf optimization (GWO) [34], the marine predator algorithm (MPA) [35], and artificial ecosystem-based optimization (AEO). The operating conditions were then set up according to the actual working conditions of the cell, namely HTLP, MTMP, and LTHP. Due to noise and insufficient available data, a GRNN was used to preprocess, de-noise, and predict from the 25 pairs of current and voltage data extracted from the cell; 145 sets of data were then predicted and used for parameter identification under the multi-data condition. The parameters of the PEMFC used in this study are shown in Table 1. Remark 1. The cell data in this study come from experimental data provided by the cell manufacturer. A GRNN is used to process the V-I data of the PEMFC in this research because of the inevitable impact of noise in the measurement data. In addition, measured data may be lost, the V-I data of the PEMFC are difficult to measure during actual operation, and, due to cell aging and other phenomena, the measured data can differ significantly from the factory data, which has a significant impact on the final parameter identification results; processing the data also serves to verify the robustness of the GRNN applied to PEMFC parameter identification.

GRNN for V-I Data De-noising
Small fluctuations may affect experimental data; similarly, the PEMFC is inevitably affected by noise when used in different environments. Undoubtedly, irregular changes in multiple variables can lead to inaccurate parameter identification of the PEMFC.
Therefore, to minimize the effect of noise on the accuracy of the calculation results as much as possible, this paper adopts a GRNN for de-noising [31]. The results obtained by de-noising the original data under the three operating conditions using the GRNN are shown in Figures 4-6.

GRNN for V-I Data Prediction
The parameter identification of the PEMFC essentially relies on the original current and voltage data, and the accuracy of the final identified parameters largely depends on these data. However, actual data are difficult to obtain.
Therefore, this study uses the existing data to train the GRNN model and then performs data prediction, expanding the data volume and improving the accuracy of the identified parameters. The results obtained by predicting from the original data under the three operating conditions using the GRNN are shown in Figures 7-9.


Noised Data
Table A1 of Appendix A shows the statistics of the parameter extraction results obtained by the six algorithms from the noised and de-noised data under HTLP, where 'N' denotes results obtained from noised data and 'DN' denotes results obtained from de-noised data. From Table A1, it is obvious that after de-noising, the RMSE is lower than that obtained from the noised data. After de-noising, the RMSE of the PSO and BAS algorithms is on the order of 10⁻², while the RMSE of the other four algorithms is on the order of 10⁻³. The MPA algorithm exhibits the most significant decrease of 82.10%, whereas the BAS algorithm demonstrates a comparatively smaller reduction of 42.62%.
In addition, Figure 10 shows the RMSE convergence curves obtained by the six algorithms trained on the two datasets. The results based on the de-noised data have smaller errors than those based on the noised data: the RMSE obtained by each of the six algorithms on the de-noised data is lower than that obtained from the noised data.
In order to visualize the impact of the two different training datasets, the boxplot in Figure 11 illustrates the distribution of RMSE obtained by the MhAs. It can be seen from the figure that after de-noising, the RMSE of each algorithm in the boxplot decreased to a certain extent. Moreover, the upper and lower bounds of the boxplots of PSO and BAS changed significantly, shrinking toward the RMSE median. In addition, MPA, AEO, GWO, and MFO show superior performance compared with the other algorithms. This fully shows that GRNN data de-noising can improve the stability of MhAs in parameter identification.
Figure 12 presents the V-I characteristic curves under HTLP obtained by fitting the MPA algorithm to the GRNN de-noised data. The fitted curve almost coincides with the curve of the actual data, and the fitting accuracy reaches 99.39%, which demonstrates that the parameter identification effect is in line with expectations.

Insufficient Data
Table A2 of Appendix A shows the statistics of the parameter extraction results obtained by the six algorithms from the insufficient and predicted data under HTLP, where 'O' denotes the original data and 'P' denotes the predicted data. From Table A2, it can be observed that after data prediction, the RMSE achieved by five of the algorithms is lower than that obtained from the original data, the exception being the PSO algorithm. After data prediction, the RMSE of the PSO and BAS algorithms is on the order of 10⁻², while the RMSE of the other four algorithms is on the order of 10⁻⁴. The GWO algorithm exhibits the most significant decrease of 66.66%, whereas the MPA algorithm demonstrates a comparatively smaller reduction of 28.58%.

Figure 13 describes the RMSE convergence curves obtained by the six algorithms on the two datasets. Most algorithms achieve a lower RMSE on the predicted data; only the RMSE of PSO based on the predicted multi-data is larger than that based on the low-volume data. In addition, compared with the other algorithms, MPA, AEO, and MFO quickly reach a smaller RMSE and show great stability.

The boxplot in Figure 14 illustrates the distribution of RMSE obtained by the MhAs. Except for PSO, the RMSEs obtained from the predicted data are lower than those obtained from the original data; on the contrary, the RMSE of PSO increased. In addition, MPA, AEO, GWO, and MFO show superior performance compared with the other algorithms.

Noised Data
Table A3 of Appendix A shows the statistics of the parameter extraction results obtained by the six algorithms from the noised and de-noised data under MTMP. From Table A3, it can be seen that after de-noising, the RMSE obtained by all six algorithms is lower than that obtained from the noised data. After de-noising, the RMSE of the PSO and BAS algorithms is on the order of 10⁻², while the RMSEs of the other four algorithms are on the order of 10⁻³. The GWO algorithm exhibits the most significant decrease of 66.53%, whereas the BAS algorithm demonstrates a comparatively smaller reduction of 25.09%.

Figure 15 describes the RMSE convergence curves obtained by the six algorithms under the noised and de-noised data conditions. It can be observed that the RMSEs of the parameter identification results based on the de-noised data have decreased: the RMSE obtained by each of the six algorithms on the de-noised data is lower than that obtained from the noised data.

Figure 16 describes the boxplot of the RMSE distributions obtained by the six algorithms. Except for BAS, the RMSE obtained from the de-noised data has decreased; on the contrary, the upper and lower bounds of BAS increased. There are also a few outliers in the boxplots of MFO and PSO. In addition, MPA, AEO, and GWO show superior performance compared with the other algorithms. This fully shows that GRNN data de-noising can improve the stability of MhAs in parameter identification.

Figure 17 presents the V-I characteristic curves under MTMP obtained by fitting the GWO algorithm to the GRNN de-noised data. The fitted curve almost coincides with the curve of the actual data, and the fitting accuracy reaches 99.07%, which demonstrates that the parameter identification effect is in line with expectations.

Insufficient Data
Table A4 of Appendix A shows the statistics of the results of parameter extraction of insufficient and predicted data, respectively, by six algorithms under MTMP.From Table A4 of Appendix A, it is obvious that by data prediction, the RMSE obtained by the four algorithms is lower than that obtained from predicted data, except for the MFO and PSO algorithms.After data prediction, the RMSE of the PSO algorithm and the BAS algorithm has a magnitude of the minus second power of ten, while the RMSE of other algorithms has a magnitude exceeding the minus fourth power of ten.The MPA algorithm exhibits the most significant decrease of 63.40%, whereas the BAS algorithm demonstrates a comparatively smaller reduction of 13.26%.
Figure 18 describes the RMSE convergence curves obtained by six algorithms on two datasets, with most algorithms having lower RMSE based on predicted data, and only RMSE based on prediction data of BAS and PSO being larger than RMSE based on low data.In addition, compared with other algorithms, MPA and AEO can quickly acquire a smaller RMSE and have great stability.
Table A4 of Appendix A shows the statistics of the results of parameter extraction of insufficient and predicted data, respectively, by six algorithms under MTMP.From Table A4 of Appendix A, it is obvious that by data prediction, the RMSE obtained by the four algorithms is lower than that obtained from predicted data, except for the MFO and PSO algorithms.After data prediction, the RMSE of the PSO algorithm and the BAS algorithm has a magnitude of the minus second power of ten, while the RMSE of other algorithms has a magnitude exceeding the minus fourth power of ten.The MPA algorithm exhibits the most significant decrease of 63.40%, whereas the BAS algorithm demonstrates a comparatively smaller reduction of 13.26%.
Figure 19 describes the boxplot of the RMSE distributions obtained by the six algorithms. It can be observed that the RMSE obtained from the predicted data has decreased. In addition, MPA outperforms the other algorithms. This fully shows that GRNN data prediction can improve the stability of MhAs in parameter identification.
Noised Data
From Table A5 of Appendix A, it can be seen that after data de-noising, the RMSE obtained by five of the six algorithms under LTHP is lower than that obtained from the noised data, the exception being the BAS algorithm. In particular, the MFO algorithm exhibits the most significant decrease of 73.85%, whereas the PSO algorithm demonstrates a comparatively smaller reduction of 26.21%. After noise reduction, the RMSE of the PSO and BAS algorithms is on the order of 10^-2, while the RMSE of the other four algorithms is on the order of 10^-3.
Figure 20 describes the RMSE convergence curves obtained by the six algorithms under noised and de-noised data conditions. Most of the identification results based on de-noised data show a decreased RMSE, while the RMSE of BAS increased after noise reduction.
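A GRNN is, in essence, Nadaraya-Watson kernel regression: each training sample acts as a pattern unit, and predictions are Gaussian-kernel-weighted averages of the training targets. The sketch below (with an assumed smoothing factor `sigma`, not the paper's setting) shows the idea; evaluating it at the measured currents smooths noise, while evaluating it at unmeasured currents yields predicted V-I points for the insufficient-data case:

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.1):
    """GRNN (Nadaraya-Watson) prediction for 1-D inputs: the output at each
    query point is the Gaussian-weighted average of the training targets."""
    x_train = np.asarray(x_train, float).reshape(-1, 1)
    y_train = np.asarray(y_train, float)
    x_query = np.asarray(x_query, float).reshape(-1, 1)
    d2 = (x_query - x_train.T) ** 2            # squared distances, shape (Q, N)
    w = np.exp(-d2 / (2.0 * sigma ** 2))       # pattern-layer activations
    return (w @ y_train) / np.clip(w.sum(axis=1), 1e-12, None)
```

De-noising corresponds to `grnn_predict(I, V_noisy, I)`, and data prediction to `grnn_predict(I_few, V_few, I_dense)`; the only tunable quantity is the smoothing factor.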
The boxplot of the RMSE distributions obtained by the MhAs is presented in Figure 21. It can be observed that, except for the BAS, the RMSE obtained from the de-noised data has decreased; on the contrary, the upper and lower bounds of PSO and the upper bound of BAS have increased. In addition, MPA, AEO, and GWO outperform the other algorithms. This fully shows that GRNN data noise reduction can improve the stability of MhAs in parameter identification.
Figure 22 shows the V-I characteristic curves under low-temperature and high-pressure conditions obtained by fitting the MFO results on GRNN de-noised data. The fitted curve almost coincides with the actual data, and the fitting accuracy measured via RMSE is 98.70%, which demonstrates that the parameter identification effect is in line with expectations.
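The model into which the identified parameters are substituted to produce these V-I curves is the electrochemical steady-state PEMFC model. The sketch below uses the common Amphlett-type formulation; the semi-empirical coefficients `xi1`-`xi4`, membrane water content `lam`, contact resistance `r_c`, and concentration-loss coefficient `b` are the identified quantities, while the operating constants (temperature, pressures, cell count, membrane area and thickness, maximum current density) are placeholder values, not the paper's:

```python
import numpy as np

def pemfc_stack_voltage(i, params, T=343.15, p_h2=1.0, p_o2=1.0,
                        n_cells=24, area=27.0, l_mem=0.0127, j_max=0.86):
    """Amphlett-type steady-state PEMFC stack voltage sketch for current i (A).
    params = (xi1, xi2, xi3, xi4, lam, r_c, b): the identified quantities."""
    xi1, xi2, xi3, xi4, lam, r_c, b = params
    # Nernst (thermodynamic) potential
    e_nernst = 1.229 - 0.85e-3 * (T - 298.15) \
        + 4.3085e-5 * T * (np.log(p_h2) + 0.5 * np.log(p_o2))
    # Dissolved oxygen concentration at the cathode interface (mol/cm^3)
    c_o2 = p_o2 / (5.08e6 * np.exp(-498.0 / T))
    # Activation overvoltage (xi1 is negative by convention)
    v_act = -(xi1 + xi2 * T + xi3 * T * np.log(c_o2) + xi4 * T * np.log(i))
    # Ohmic overvoltage: membrane resistivity depends on water content lam
    j = i / area  # current density (A/cm^2)
    rho_m = 181.6 * (1 + 0.03 * j + 0.062 * (T / 303.0) ** 2 * j ** 2.5) / \
        ((lam - 0.634 - 3.0 * j) * np.exp(4.18 * (T - 303.0) / T))
    v_ohm = i * (rho_m * l_mem / area + r_c)
    # Concentration overvoltage
    v_con = -b * np.log(1.0 - j / j_max)
    return n_cells * (e_nernst - v_act - v_ohm - v_con)
```

Sweeping `i` over the measured current range with the identified `params` yields the fitted V-I curve that is compared against the experimental data in Figures 12, 17, and 22.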

Insufficient Data
Table A6 of Appendix A shows the statistics of the parameter extraction results from insufficient and predicted data, respectively, by the six algorithms under LTHP. From Table A6 of Appendix A, it is evident that after data prediction, the RMSE obtained by all six algorithms is lower than that obtained from the insufficient data. After data prediction, the RMSE of the PSO and BAS algorithms is on the order of 10^-3, while the RMSE of the other algorithms reaches the order of 10^-4. The MFO algorithm exhibits the most significant decrease of 92.69%, whereas the PSO algorithm demonstrates a comparatively smaller reduction of 43.01%.
Figure 23 describes the RMSE convergence curves obtained by the six algorithms on the two datasets, with all algorithms achieving a lower RMSE based on the predicted data.

The boxplot of the RMSE distributions obtained by the MhAs is presented in Figure 24. It can be observed that, except for the BAS and PSO, the RMSE of the other algorithms obtained from the predicted data has decreased; however, the lower bound of the PSO RMSE and the upper bound of the BAS RMSE have increased. In addition, MPA, AEO, and MFO outperform the other algorithms. This fully shows that GRNN data prediction can improve the stability of MhAs in parameter identification.

Discussions
Table 2 summarizes recent research related to PEMFC parameter identification. The statistical comparison shows that most studies have not simultaneously considered the impact of noisy data and insufficient data volume on the final parameter extraction accuracy. The present study fills this gap and provides useful guidance for research on PEMFC parameter extraction. However, this study also found that, owing to the random search behavior of heuristic algorithms, some algorithms produce anomalous numerical accuracy when extracting parameters. For example, under MTMP conditions, after data prediction and parameter extraction using the BAS algorithm, the identification accuracy showed anomalies, and after data de-noising the accuracy decreased by 13.26%. Overall, the method proposed in this research is suitable not only for PEMFC parameter identification but also for photovoltaic (PV) and solid oxide fuel cell (SOFC) parameter identification, and it has demonstrated very good performance in the field of parameter identification. All experimental results in this article are based on the data in Tables A7-A9, where Table A7 presents the V-I data of the PEMFC under HTLP, Table A8 the V-I data under MTMP, and Table A9 the V-I data under LTHP.
Additionally, this study did not take into account the impact of changes in temperature and other factors on the identification results during the actual operation of the PEMFC. A further limitation is that, although the overall accuracy improves after data processing, the improvement is modest; further research is needed in this direction. Finally, the research did not consider the specific impact and role of the identified parameters on the cell itself [36][37][38].

Conclusions and Prospect
This study proposes a parameter identification method for the PEMFC using GRNN and MhAs. The original cell V-I data are processed using GRNN, which includes data de-noising and data prediction. In addition, six typical heuristic algorithms were used to extract the parameters of the PEMFC under three operating conditions: HTLP, MTMP, and LTHP. The obtained results were then compared with those extracted from the original data, and they show that processing the data with GRNN can markedly enhance the precision of the final identification; specifically, after data prediction, the accuracy of the MFO algorithm improved by 92.69% under LTHP conditions. After data de-noising, the stability of the parameter identification results clearly improves. Finally, by substituting the identified parameters into the model, the fitting accuracy of the V-I data obtained under all three operating conditions was very high: 99.39% under HTLP, 99.07% under MTMP, and 98.70% under LTHP. All in all, processing the PEMFC data with GRNN and extracting the cell parameters with MhAs can greatly improve the efficiency, accuracy, and stability of the final PEMFC parameter identification results. This study provides a novel approach to the field of PEMFC parameter identification.
Finally, this study provides significant guidance for future research on PEMFC parameter extraction. However, future research in this area should pay more attention to the impact of data analysis on the final identification results. Consideration should also be given to the impact of the identified parameters on the internal mechanism of the cell itself. In general, further research should investigate the internal characteristics of the cell, such as its state of charge and state of health, through the identified parameters.

Figure 4. Data de-noise result under HTLP operating conditions.

Figure 5. Data de-noise result under MTMP operating conditions.

Figure 6. Data de-noise results under LTHP operating conditions.

Figure 7. Data prediction result under HTLP operating conditions.

Figure 8. Data prediction result under MTMP operating conditions.

Figure 9. Data prediction result under LTHP operating conditions.

4.2. PEMFC Parameter Extraction of HTLP
4.2.1. Noised Data
Table A1 of Appendix A shows the statistics of the parameter extraction results from noised and de-noised data, respectively, by the six algorithms under HTLP, where the symbol 'N' denotes the results obtained from noised data and 'DN' denotes the results obtained from de-noised data. From Table

Figure 10. Convergence curves of RMSEs obtained by MhAs on noise data and de-noised data under HTLP. (a) noise data and (b) de-noised data.

Figure 11. Boxplot of RMSEs obtained by MhAs on noise data and de-noised data under HTLP.

Figure 12. GRNN for V-I curve fitting based on de-noised data under HTLP of MPA.

Figure 14. Boxplot of RMSEs obtained by MhAs on original data and predicted data under HTLP.

4.3. PEMFC Parameter Extraction of MTMP
4.3.1. Noised Data
Table A3 of Appendix A shows the statistics of the parameter extraction results from noised and de-noised data, respectively, by the six algorithms under MTMP. From Table A3 of Appendix A, it can be seen that after data noise reduction, the RMSE obtained by the six algorithms is lower than that obtained from the noised data.

Figure 13. Convergence curves of RMSEs obtained by MhAs on original data and predicted data under HTLP. (a) original data and (b) predicted data.

Figure 15. Convergence curves of RMSEs obtained by MhAs on noise data and de-noised data under MTMP. (a) noise data and (b) de-noised data.

Figure 16. Boxplot of RMSEs obtained by MhAs on noise data and de-noised data under MTMP.

Figure 17. GRNN for V-I curve fitting based on de-noised data under MTMP of GWO.

Figure 18. Convergence curves of RMSEs obtained by MhAs on original data and predicted data under MTMP. (a) original data and (b) predicted data.

Figure 19. Boxplot of RMSEs obtained by MhAs on original data and predicted data under MTMP.

4.4. PEMFC Parameter Extraction of LTHP
4.4.1. Noised Data
Table A5 of Appendix A shows the statistics of the parameter extraction results from noised and de-noised data, respectively, by the six algorithms under LTHP. From Table

Figure 20. Convergence curves of RMSEs obtained by MhAs on noise data and de-noised data under LTHP. (a) noise data and (b) de-noised data.

Figure 22. GRNN for V-I curve fitting based on de-noised data under LTHP of MFO.

Figure 21. Boxplot of RMSEs obtained by MhAs on noise data and de-noised data under LTHP.

Figure 23. Convergence curves of RMSEs obtained by MhAs on original data and predicted data under LTHP. (a) original data and (b) predicted data.

Figure 24. Boxplot of RMSEs obtained by MhAs on original data and predicted data under LTHP.

Table 1. Model and algorithms parameter settings.

Table A2 of Appendix A shows the statistics of the parameter extraction results from insufficient and predicted data, respectively, by the six algorithms under HTLP, where the symbol 'O' denotes the source data and 'P' denotes the predicted data. From Table

Table 2. Summary of research on parameter identification of some PEMFCs in recent years.

Table A2. Parameter identification results based on original data and predicted data under HTLP using MhAs.

Table A3. Parameter identification results based on noise data and de-noised data under MTMP using MhAs.

Table A4. Parameter identification results based on original data and predicted data under MTMP using MhAs.

Table A5. Parameter identification results based on noise data and de-noised data under LTHP using MhAs.

Table A6. Parameter identification results based on original data and predicted data under LTHP using MhAs.