Improving the Reliability of Photovoltaic and Wind Power Storage Systems Using Least Squares Support Vector Machine Optimized by Improved Chicken Swarm Algorithm

Abstract: In photovoltaic and wind power storage systems, the reliability of the battery directly affects the overall reliability of the energy storage system. Failed batteries can seriously affect the stable operation of energy storage systems. This paper aims to improve the reliability of storage systems by accurately predicting battery life and identifying failing batteries in time. Current prediction models mainly use artificial neural networks, Gaussian process regression and hybrid models. Although these models can achieve high prediction accuracy, their computational cost is high due to model complexity. The least squares support vector machine (LSSVM) is a computationally efficient alternative. Hence, this study combines the improved chicken swarm optimization algorithm (ICSO) and LSSVM into a hybrid ICSO-LSSVM model for improving the reliability of photovoltaic and wind power storage systems. The contributions of this work are as follows. First, the optimal penalty parameter and kernel width are determined. Second, the chicken swarm optimization algorithm (CSO) is improved by introducing chaotic search behavior in the hens and an adaptive learning factor in the chicks; the performance of the ICSO algorithm is shown to be better than CSO on standard test problems. Third, the prediction accuracy of the three models is compared. For the NMC1 battery, the predicted relative error of ICSO-LSSVM is 0.94%; for the NMC2 battery, the relative error of ICSO-LSSVM is 1%. These findings show that the proposed model is suitable for predicting the failure of batteries in energy storage systems, which can improve preventive and predictive maintenance of such systems.


Introduction
Fossil energy will be exhausted eventually, thus requiring people to develop new energy sources in the near future [1]. Also, the development of new energy has become mainstream in the world's energy strategy [2]. Nowadays, great effort has been placed in new combustion technology [3][4][5][6][7].

To improve prediction accuracy, the ICSO algorithm optimizes the parameters of the LSSVM model. In particular, the hen particles are sorted, and the top 10% of hen particles with superior performance replace the rooster particles with poor fitness, which accelerates the search of the swarm and prevents the rooster particles from falling into local minima.
Furthermore, an adaptive learning factor is introduced into the chick position update, and a mutation operator is proposed. The adaptive learning factor makes the chicks move quickly toward the best particles, which speeds up the convergence of the algorithm; the mutation operator broadens the shrinking population search space and helps the chicken swarm jump out of previously found local optima. This enlarges the search space and reduces the risk of entrapment in local optima. By introducing chaotic search, the mutation operator, and the adaptive learning factor into the CSO algorithm, the overall convergence performance is improved. The lives of the NMC1 and NMC2 batteries are forecasted using this ICSO-LSSVM model, and the prediction error of the model is analyzed. Two groups of test data are used to test the ICSO-LSSVM model. The test results show that the proposed model can predict the life of lithium-ion batteries efficiently and reliably.
This paper is organized as follows. Section 2 presents the ICSO-LSSVM method. Section 3 introduces the charge and discharge test data of lithium-ion battery. The results are discussed in detail in Section 4. Section 5 then gives the findings and conclusions.

Least Squares Support Vector Machine (LSSVM) Principle
Because of its good generalization ability, the support vector machine (SVM) has been widely used for forecasting [38]. However, the training process of SVM is a constrained quadratic programming problem in which the number of constraints equals the sample size; when the sample is too large, training takes a long time. In contrast, only a linear system of equations needs to be solved in the training process of LSSVM, which improves the training efficiency of SVM to some extent. The principle of the LSSVM model is as follows. Let Q = {(u_j, v_j)}, j = 1, 2, . . . , n, be the training sample, where u_j (u_j ∈ R^m) is the input value and v_j (v_j ∈ R) is the output value. The kernel function maps the samples into a high-dimensional space, in which the regression function β, as indicated in Equation (1), is constructed [39].
Note: µ is the weight vector; δ is the penalty coefficient; and e_i is the prediction error. According to the principle of structural risk minimization, the constrained optimization problem is shown in Equation (4) [40], where o is the offset and Φ(·) represents the mapping function. Based on the theory of constrained optimization, the corresponding Lagrange form can be obtained by introducing the Lagrange multiplier g_i (g_i ≥ 0). Applying the Karush-Kuhn-Tucker conditions then yields the linear system shown in Equation (5), where [1; 1; . . . ; 1] is the all-ones vector; g is the vector of Lagrange multipliers; E is the unit matrix; and H is the kernel matrix.
According to the Mercer condition, the relationship between the mapping function Φ(·) and the kernel function H can be obtained using Equation (6); here the Gaussian kernel H(u_i, u_j) = exp(−‖u_i − u_j‖²/(2θ²)) is used, where θ is the Gaussian kernel width. Therefore, the regression function of LSSVM is represented by Equation (7).
In LSSVM, the penalty coefficient δ and the kernel width θ have a great influence on the generalization ability of the model. Choosing the right δ and θ can improve the prediction performance.

The Particle Swarm Optimization (PSO) Algorithm
Particle swarm optimization (PSO) is a classical swarm intelligence optimization algorithm. The optimization process of PSO algorithm is to imitate the foraging process of particles [41]. In the optimization process of the PSO algorithm, the current particle position can be updated by referring to its historical best position and global best position. The optimal position of the particle is obtained by multiple iterations [42,43].
Assume that in the dm-dimensional search space there are a total of R particles. In the l-th iteration, the position of the i-th particle in the j-th dimension is pos_{i,j}^l (i = 1, 2, . . . , R; j = 1, 2, . . . , dm), and its velocity is vel_{i,j}^l. At the (l + 1)-th iteration, the position and velocity of the particles are updated as follows [44,45].
vel_{i,j}^{l+1} = w·vel_{i,j}^l + q_1·p_1·(pbest_{i,j} − pos_{i,j}^l) + q_2·p_2·(gbest_j − pos_{i,j}^l); pos_{i,j}^{l+1} = pos_{i,j}^l + vel_{i,j}^{l+1}, where w is the weight coefficient; q_1 and q_2 are the learning coefficients; p_1 and p_2 are random numbers between 0 and 1; pbest is the optimal position found by the individual; and gbest is the global optimal position.
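The two update rules above are the standard PSO velocity and position updates; a minimal, self-contained sketch in Python (illustrative only, not the authors' implementation; the parameter values are assumptions) shows how they drive a swarm toward a minimum:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(f, dim, n_particles=20, iters=200, w=0.7, q1=1.5, q2=1.5,
                 lo=-100.0, hi=100.0):
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([f(x) for x in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        p1 = rng.random((n_particles, dim))
        p2 = rng.random((n_particles, dim))
        # velocity update, then position update
        vel = w * vel + q1 * p1 * (pbest - pos) + q2 * p2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([f(x) for x in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

# Sphere function (sum of squares); PSO should drive it close to 0
best, val = pso_minimize(lambda t: float((t ** 2).sum()), dim=5)
```

With w = 0.7 and q1 = q2 = 1.5 the swarm is in the usual convergent parameter regime; larger w favors exploration, smaller w favors exploitation.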

The Chicken Swarm Optimization Algorithm (CSO)
The CSO algorithm is a biologically inspired metaheuristic algorithm that imitates swarm foraging [34,35]. The chicken swarm includes roosters, hens, and chicks. The rooster is the leader of the flock and leads the population to find food; the hens follow the rooster to forage; and the chicks follow the hens to forage. The algorithm follows four rules: Rule 1: There are several groups of chickens. Each group contains one rooster, several hens, and a small number of chicks.
Rule 2: Particles are grouped based on the fitness value. Particles with good fitness are chosen as roosters in each subgroup. Every rooster has the widest range of foraging. The particles with poor fitness are chosen as chicks which have the smallest foraging range. Other individuals are chosen as hens. The foraging range of hens is less than that of roosters but larger than that of chicks. Hens randomly select the group of roosters.
Rule 3: The hierarchical order is updated only every several generations during the iterative process. Rule 4: The hens in each group forage around their rooster, and the chicks forage around the hens. Because the foraging abilities of roosters, hens, and chicks differ, different strategies are used to update their positions.
It is assumed that D is the dimension of the search space and M is the population size. Each individual is described by its position x_{i,j}^t (i = 1, 2, . . . , M; j = 1, 2, . . . , D) at iteration t [46,47]. The position update equation of the i-th rooster is shown in Equation (10), where randn(0, σ²) is a normal distribution with mean 0 and variance σ²; ε is an infinitesimal constant used to avoid division by zero; and f_k is the fitness value of any rooster other than the i-th rooster. The position update equation of the i-th hen is shown in Equation (12), where r1 is the index of the rooster that the i-th hen follows, r2 is the index of a randomly chosen rooster or hen (r1 ≠ r2), and rand is a uniformly distributed random number in [0, 1]. The position of the i-th chick is updated by Equation (13), where F (F ∈ [0, 2]) is a learning parameter and x_{m,j}^t is the position of the hen followed by the chick.
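The three role-specific updates can be sketched directly from the standard CSO formulas (Equations (10), (12) and (13)); the helper names below are ours, and the exponential step-size terms follow the original CSO paper, which is an assumption about the equations lost from this copy:

```python
import numpy as np

rng = np.random.default_rng(1)
EPS = np.finfo(float).tiny   # the infinitesimal epsilon used to avoid division by zero

def rooster_step(x_i, f_i, f_k):
    # Equation (10): the rooster explores around itself; the step variance shrinks
    # when another rooster k has better fitness (f_k < f_i).
    sigma2 = 1.0 if f_i <= f_k else np.exp((f_k - f_i) / (abs(f_i) + EPS))
    return x_i * (1.0 + rng.normal(0.0, np.sqrt(sigma2), x_i.shape))

def hen_step(x_i, f_i, x_r1, f_r1, x_r2, f_r2):
    # Equation (12): the hen follows its own rooster r1 and is also attracted
    # toward a randomly chosen individual r2 (r1 != r2).
    s1 = np.exp((f_i - f_r1) / (abs(f_i) + EPS))
    s2 = np.exp(f_r2 - f_i)
    return (x_i + s1 * rng.random(x_i.shape) * (x_r1 - x_i)
                + s2 * rng.random(x_i.shape) * (x_r2 - x_i))

def chick_step(x_i, x_m, F=1.0):
    # Equation (13): the chick follows its mother hen with learning parameter F in [0, 2]
    return x_i + F * (x_m - x_i)
```

With F = 1 the chick jumps exactly onto its mother hen; F < 1 trails behind her and F > 1 overshoots, which is the chick's only source of exploration in plain CSO.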

The Improved Chicken Swarm Optimization Algorithm (ICSO)
As the iterations go deeper, the search ability of the rooster particles deteriorates. In addition, the hens are the most numerous group, and the search results of the hen particles have a great influence on the algorithm. Therefore, chaotic search is introduced in the ICSO algorithm. The hens are sorted by their fitness values, and a chaotic search is applied to the top 10% of hen particles with the best fitness. After the search, these top 10% of hen particles replace the roosters with poor fitness, which accelerates the search of the swarm. Owing to the ergodicity and randomness of chaotic search, neighborhood points of many locally optimal solutions are generated in the iterative process [48].
In the ICSO algorithm, the chaotic search is constructed by using the Logistic equation [49]: C_{i+1} = µ·C_i·(1 − C_i), where C_i (C_i ∈ (0, 1)) is a chaotic factor and µ is the control parameter. When µ = 4, the system is completely in the chaotic state. In the CSO algorithm, the foraging ability of the rooster is the strongest, the hen is second, and the chick is the worst. The adaptive learning factor is introduced in the chick's position update to enhance the foraging ability of the chick. The relative change rate of the fitness value of the chicks is defined in Equation (15), where f_i(t) is the fitness value of the i-th chick and f_best is the fitness value of the best particle in the flock.
In the t-th generation, the adaptive learning factor of the i-th chick is updated by Equation (16), where k is the relative change rate. After this improvement, the position of the i-th chick is shown in Equation (17), where randn(0, σ²) is a normal distribution with mean 0 and variance σ², and s_i^t is the adaptive learning factor of the i-th chick in the t-th generation.
The relative change rate k of the chicks reflects the difference between the i-th chick and the optimum particle. The adaptive learning factor s_i^t reflects the degree of deviation between the position of the i-th chick and the position of the optimum particle in the flock. When the relative change rate k increases, the adaptive learning factor s_i^t increases; when k decreases, s_i^t decreases. In this way, the movement of the chick particles toward the best particles is accelerated. The mutation operator is introduced in the position update of the chicks to further prevent the chicks from falling into extreme values. The fitness variance of the chicks is as shown in Equation (18): σ² = (1/n_k) Σ_{i=1}^{n_k} ((f_i − f_avg)/f)², where n_k is the number of chicks; f_i is the fitness value of the i-th chick; f_avg is the average fitness value of the chicks; and f = max(1, max_i |f_i − f_avg|) is the normalizing factor. The fitness variance σ² of the chicks reflects the aggregation degree of their fitness values: the smaller σ² is, the higher the aggregation degree. When the chick particles fall into a local extreme value, the fitness variance σ² tends to 0. If the fitness variance σ² reaches the threshold, the chicks are mutated. The mutation operation broadens the narrowing search space of the chicks.
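The two ICSO ingredients just described, the Logistic chaotic map and the variance-based mutation trigger, can be sketched as follows. The normalizing factor f = max(1, max_i |f_i − f_avg|) is the common convention from the fitness-variance literature and is an assumption here, since Equation (18) is garbled in this copy of the paper:

```python
import numpy as np

def logistic_map(c0, steps, mu=4.0):
    # Equation for the chaotic factor: C_{i+1} = mu * C_i * (1 - C_i);
    # the map is fully chaotic when mu = 4.
    seq = [c0]
    for _ in range(steps - 1):
        seq.append(mu * seq[-1] * (1.0 - seq[-1]))
    return np.array(seq)

def chick_fitness_variance(f):
    # Fitness variance of the chicks (Equation (18), assumed normalization):
    # sigma^2 = (1/n_k) * sum_i ((f_i - f_avg) / f_norm)^2
    f = np.asarray(f, dtype=float)
    f_avg = f.mean()
    f_norm = max(1.0, float(np.abs(f - f_avg).max()))
    return float((((f - f_avg) / f_norm) ** 2).sum() / len(f))
```

In the ICSO loop, the chaotic sequence perturbs the best hens' positions, and the chicks are reinitialized (mutated) whenever `chick_fitness_variance` falls below the chosen threshold.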
There are several steps in the optimization process of ICSO algorithm.
(1) The parameters of the algorithm are defined, such as the number of roosters n r , hens n h , chicks n k , etc.
(2) The chicken flock X is initialized, and the chickens are grouped according to their fitness values.
(3) To start the iteration, we first decide whether the hierarchy of the chicken flock needs to be re-established; if so, the flock is regrouped. The rooster's position is then updated by Equation (10), and the position of the hen is updated by Equation (12). The fitness values of the hens are sorted, and a chaotic search is applied to the top 10% of hen particles. After the search, these top 10% of hen particles replace the rooster particles with poor fitness values. The chick's position is updated by Equation (17), the fitness values of the chicks are calculated, and the mutation operator determines whether to reset the positions of the chicks.
(4) Update individual fitness values and find global optimal locations.
(5) Determine whether to terminate the algorithm. If the termination condition is not met, return to step three to continue the iteration.
The ICSO algorithm flow chart is shown in Figure 1.
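Steps (1)-(5) can be sketched as one compact loop. The skeleton below is a deliberately simplified illustration (our own code, with stand-in role updates rather than Equations (10), (12) and (17) verbatim, and the chaotic-search step omitted for brevity); it shows the regrouping, role-based updates, variance-triggered chick mutation, and global-best tracking:

```python
import numpy as np

rng = np.random.default_rng(0)

def icso_minimize(f, dim, pop=20, n_roosters=4, n_chicks=2, iters=300,
                  reorder_every=10, var_threshold=1e-6, lo=-100.0, hi=100.0):
    X = rng.uniform(lo, hi, (pop, dim))          # step (1)/(2): init flock
    fit = np.array([f(x) for x in X])
    best, best_val = X[fit.argmin()].copy(), float(fit.min())
    for t in range(iters):
        if t % reorder_every == 0:               # step (3): regroup by fitness
            order = fit.argsort()
            X, fit = X[order], fit[order]
        roosters = slice(0, n_roosters)
        hens = slice(n_roosters, pop - n_chicks)
        chicks = slice(pop - n_chicks, pop)
        # roosters: self-centred multiplicative Gaussian search (simplified)
        X[roosters] *= 1.0 + rng.normal(0.0, 0.2, X[roosters].shape)
        # hens: move toward the best rooster (simplified)
        X[hens] += rng.random(X[hens].shape) * (X[0] - X[hens])
        # chicks: follow the best hen (simplified adaptive step)
        X[chicks] += rng.random(X[chicks].shape) * (X[n_roosters] - X[chicks])
        fit = np.array([f(x) for x in X])
        # mutation: reset chicks when their fitness variance collapses
        if fit[chicks].var() < var_threshold:
            X[chicks] = rng.uniform(lo, hi, X[chicks].shape)
            fit[chicks] = [f(x) for x in X[chicks]]
        if fit.min() < best_val:                 # step (4): track global best
            best_val = float(fit.min())
            best = X[fit.argmin()].copy()
    return best, best_val                        # step (5): stop after iters

best, val = icso_minimize(lambda t: float((t ** 2).sum()), dim=3)
```

Even with the simplified updates, the structure makes the ICSO additions visible: top hens would be chaos-searched and promoted at the regrouping step, and the chicks are the only particles subject to mutation.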

ICSO-LSSVM Model
The penalty parameter and the kernel width are two important parameters of the prediction model. The penalty parameter affects the generalization ability of the prediction model, and the kernel width affects the training and prediction speed of the model. The ICSO intelligent algorithm is used to find the parameters of the LSSVM model to improve its prediction performance, and the ICSO-LSSVM model is thus constructed.
During training, Equation (19) is used as the fitness function of the ICSO algorithm.
where p̂ and p are the predicted and true values, respectively.
The residual life prediction process of lithium-ion battery is as follows.

(1) The NMC1 and NMC2 battery capacity data are imported.
(2) The input and output sample sets are normalized.
(3) The parameters of chicken flocks are initialized, such as the number of roosters n r , hens n h and chicks n k .
(4) The ICSO algorithm is used to optimize the hyperparameters of the LSSVM model.
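Step (2) of this process is plain min-max scaling; a minimal sketch (hypothetical helper names, our own code) that also restores predictions back to capacity units after step (4):

```python
import numpy as np

def minmax_normalize(x):
    # Step (2): scale the sample set into [0, 1] before training
    lo, hi = float(x.min()), float(x.max())
    return (x - lo) / (hi - lo), (lo, hi)

def minmax_restore(y, bounds):
    # Map model outputs back to the original capacity scale (mAh)
    lo, hi = bounds
    return y * (hi - lo) + lo
```

Keeping the `(lo, hi)` bounds from the training set and reusing them on test data avoids leaking test-set statistics into the model.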

Charge and Discharge Test
Ternary material Li[NixCoyMn1−x−y]O2 is abbreviated as NMC. A lithium battery whose positive electrode is the NMC ternary material is called an NMC lithium battery. The cells NMC1 and NMC2, each with a capacity of 2000 mAh, are used as the experimental objects, and charge and discharge experiments are carried out on the NMC1 and NMC2 cells at a room temperature of 25 °C. The specific implementation measures are as follows. Constant-current, constant-voltage charging is performed on the NMC1 and NMC2 cells. The charging rate is 1.0 C, the charge cut-off voltage is 4.20 V, and the charge cut-off current is 0.02 A. The two batteries are then rested for about 30 min after charging.

Constant-current discharge is performed on the NMC1 and NMC2 cells. The discharge rate is 1.0 C, and the discharge cut-off voltage is 2.50 V. The two batteries are then rested for about 30 min after discharging.

The lithium-ion battery capacity decay curves are shown in Figure 3. As the number of cycles increases, the battery discharge capacity gradually decreases. The dotted line in Figure 3 indicates the failure threshold. When the battery capacity drops to the failure threshold, the battery fails. The failure threshold is 80% of the rated capacity in this paper, so the failure threshold of the NMC1 and NMC2 batteries is 1600 mAh.

The NMC1 dataset contains 760 capacity decay data points, and the NMC1 battery fails after 712 cycles; the NMC2 dataset contains 1060 capacity decay data points, and the NMC2 battery fails after 999 cycles. To make the model fit the data better during training, 60-70% of the sample data are selected as training samples, and the remaining 30-40% are used as test samples. The NMC1 battery capacity decay curve is shown in Figure 4, and the NMC2 battery capacity decay curve is shown in Figure 5.
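The 60-70% chronological split described above can be sketched as follows (an illustrative helper with assumed names, not the authors' code); for NMC1 a fraction of 500/760 reproduces the 500/260 split used later in the paper:

```python
import numpy as np

def split_capacity_series(capacity, train_frac=0.66):
    # Chronological split: the first ~60-70% of cycles train the model,
    # the remaining cycles are held out for testing.
    n_train = round(len(capacity) * train_frac)
    return capacity[:n_train], capacity[n_train:]
```

A chronological (rather than random) split matters here because battery life prediction is a time-series task: the model must extrapolate forward in cycle number.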

Analysis of Algorithm Convergence Performance

The parameters of the CSO and ICSO algorithms are set as follows: the population size is 20; the numbers of roosters, chicks, and hens are 4, 2, and 14, respectively; the number of hens with a mother-child relationship is 6; and the maximum number of cycles is 300. The parameters of the PSO algorithm are set as follows: the population size is 20; the maximum particle speed is 50; the minimum particle speed is −50; the maximum particle position is 500; the minimum particle position is −500; and the number of iterations is 300. The algorithms are tested on the same platform. The main parameters of the computer used in the simulation experiments are: an i5 processor with a CPU frequency of 2.5 GHz, 2 GB of memory, and a 500 GB hard disk.
Four test functions are used to test the algorithms. Their expressions, optimal values, and ranges are as follows. The first test function is D1 = Σ_{j=1}^{S} t_j², with optimal value 0 and range [−100, 100]. The second test function is D2 = Σ_{j=1}^{S} |t_j| + Π_{j=1}^{S} |t_j|, with optimal value 0 and range [−10, 10]. The third test function is D3, with optimal value 0 and range [−100, 100]. The fourth test function is D4 = Σ_{j=1}^{S} j·t_j⁴ + random[0, 1), with optimal value 0 and range [−100, 100]. The dimension of the functions is 20. Each function is tested 10 times with each of the three algorithms. The test results are as follows.

Table 1 shows the convergence accuracy and calculation cost of the three optimization algorithms. Analyzing the convergence results shows that neither PSO nor CSO converges to 0 for the D1, D2 and D4 functions. Compared with CSO and ICSO, the convergence result of PSO is poor, because the ability of PSO to jump out of local minima is weak. For D1 and D3, ICSO converges to 0; for D2 and D4, ICSO does not converge to 0, but its convergence value is significantly better than that of PSO and CSO. Regarding computing cost, the running time of the PSO iterations is the smallest: for the four test functions, the total time consumed by PSO is significantly shorter than that of CSO and ICSO, because the structure of the PSO algorithm is simple and its calculation cost is small. For the four functions, the total time consumed by PSO for 40 optimizations is 20.93 s; by CSO, 31.86 s; and by ICSO, 40.38 s.
The time consumed by running the ICSO program is nearly twice that of PSO and 26.74% more than that of CSO. ICSO runs longer than PSO and CSO because the chaotic search introduced into the ICSO algorithm increases its computation cost.
In order to analyze the convergence speed of the three algorithms, Figure 6 shows their convergence curves. The convergence effect of the three algorithms can be clearly observed from Figure 6. Compared with CSO, the convergence performance of ICSO is significantly improved. For the D1, D3, and D4 functions, ICSO converges to the optimal value 4, 28, and 9 generations earlier than CSO, respectively, and ICSO maintains a high convergence accuracy. A comprehensive comparison of the convergence data in Table 1 and the convergence curves in Figure 6 reveals that ICSO has better convergence ability.

Simulation Experiment on Life Prediction of Lithium-Ion Battery
Firstly, the ICSO-LSSVM model is trained. Then, the model is validated with the test samples. The ICSO-LSSVM model is tested on the NMC1 and NMC2 battery capacity data. For the NMC1 battery, the first 500 samples are used for training and the last 260 samples for testing; for the NMC2 battery, the first 700 samples are used for training and the last 360 samples for testing.
The goal of the optimization process is to find the minimum value of the fitness function f. The percentage relative error (RE) is used as the evaluation index; the relative error of the capacity prediction is RE = (predicted capacity − actual capacity)/actual capacity × 100%. The predictive effects of the three models are tested using the data of the NMC1 and NMC2 batteries, respectively. The first 500 sets of NMC1 battery capacity data are used as training samples, and the three models are trained on them. Then the remaining 260 groups of data are predicted by the PSO-LSSVM, CSO-LSSVM and ICSO-LSSVM models. The prediction curves are as follows.
From Figure 7, it can be seen that the degradation trend of the three prediction curves is similar to that of the actual value curves. The prediction curves of the three models reach the failure threshold at 220, 215 and 212 times, respectively. The higher degree of fitting of the red prediction curve indicates that the ICSO-LSSVM model has a stronger ability to interpret NMC1 battery data.
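The relative-error index used in this evaluation can be computed as follows (a hypothetical helper name; it assumes RE is the signed percentage deviation of the predicted capacity from the actual capacity, consistent with the definition above):

```python
import numpy as np

def relative_error_percent(predicted, actual):
    # RE (%) = (predicted - actual) / actual * 100
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return (predicted - actual) / actual * 100.0
```

For example, predicting 1984 mAh against an actual 2000 mAh gives an RE of about −0.8%, well inside the ±1% band reported for the CSO-LSSVM and ICSO-LSSVM models.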
The prediction errors of the three models are calculated according to the prediction results of Figure 7, and the prediction results are evaluated according to the relative error RE.
As shown in Figure 8, the maximum relative error of the PSO-LSSVM model exceeds 1%, while the prediction errors of the CSO-LSSVM and ICSO-LSSVM models are kept within 1%. As the number of predicted samples increases, the prediction errors of the three models do not show significant fluctuations, indicating that the three models have high stability for NMC1 battery prediction.
The first 700 sets of NMC2 battery capacity data are used as training samples. Then the 360 groups of test data are predicted by the PSO-LSSVM, CSO-LSSVM and ICSO-LSSVM models. The NMC2 battery life prediction curves for each model are shown in Figure 9.

Table 3 shows the RE intervals of the PSO-LSSVM, CSO-LSSVM and ICSO-LSSVM models, which are ±0.5%, ±0.8% and ±0.4%, respectively. The average RE of the ICSO-LSSVM model is 0.02% and 0.09% smaller than that of the PSO-LSSVM and CSO-LSSVM models, respectively. By comparison, the RE of ICSO-LSSVM is smaller than that of PSO-LSSVM and CSO-LSSVM.

Table 4 presents the remaining life prediction results. For the NMC1 battery, the |RE| of the ICSO-LSSVM model is 0.94%, which is lower than the 3.77% of the PSO-LSSVM model and the 1.41% of the CSO-LSSVM model. For the remaining life prediction of the NMC2 battery, the |RE| of the ICSO-LSSVM model is 1.00%, which is lower than the 1.33% of the PSO-LSSVM model and the 6.02% of the CSO-LSSVM model.
These test results show that the ICSO-LSSVM model is more accurate than the PSO-LSSVM and CSO-LSSVM models.

Conclusions
Wind power and PV power play an important role in decarbonizing electricity generation. Lithium-ion battery energy storage systems are an important part of such an energy generation system as they stabilize the inherent volatility of wind and sunlight. Deteriorating batteries can be replaced in time by predicting the battery life to enhance the reliability of energy-storage systems. This work developed an ICSO-LSSVM model for this purpose.
The novel features of this work are as follows. First, chaotic search, a mutation operator, and an adaptive learning factor are introduced into the CSO algorithm to improve its performance. Second, the results show that the precision and convergence speed of the ICSO algorithm are better than those of CSO and PSO. Third, the three models are tested using 260 groups of NMC1 battery capacity data and 360 groups of NMC2 battery capacity data. These results show that the improved ICSO-LSSVM model is suitable for predicting the life of lithium-ion batteries.
The main engineering contribution of this new model is that it can improve the safety and reliability of integrated PV and wind power systems by enhancing predictive maintenance capability for the storage units. This capability has important implications for grid decarbonization. Nonetheless, this study has limitations. This paper studies the residual life of a single battery, but in practical applications, the energy storage equipment is a battery pack with multiple components. The remaining life of battery packs should be predicted as a whole in future research.