A Multi-Strategy Adaptive Comprehensive Learning PSO Algorithm and Its Application

In this paper, a multi-strategy adaptive comprehensive learning particle swarm optimization algorithm is proposed by introducing comprehensive learning, multi-population parallelism, and parameter adaptation. In the proposed algorithm, a multi-population parallel strategy is designed to improve population diversity and accelerate convergence. Particle exchange and mutation between subpopulations are realized to ensure information sharing among the particles. Then, the global optimal value is added to the velocity update to design a new velocity update strategy that improves the local search ability. The comprehensive learning strategy is employed to construct learning samples, so as to effectively promote information exchange and avoid falling into local extrema. By linearly changing the learning factors, a new factor adjustment strategy is developed to enhance the global search ability, and a new adaptive inertia weight adjustment strategy based on an S-shaped decreasing function is developed to balance the search ability. Finally, some benchmark functions and the parameter optimization of photovoltaics are selected for evaluation. The proposed algorithm obtains the best performance on 6 out of 10 functions. The results show that the proposed algorithm greatly improves diversity, solution accuracy, and search ability compared with some variants of particle swarm optimization and other algorithms. It provides a more effective parameter combination for the complex engineering problem of photovoltaics, so as to improve the energy conversion efficiency.


Introduction
Many real-world problems can be transformed into optimization problems. These problems have complex characteristics, such as multiple constraints, high dimensionality, nonlinearity, and uncertainty, making them difficult to solve with traditional optimization methods [1,2]. Therefore, efficient new methods are sought to solve these complex problems. Swarm intelligence optimization algorithms are a new evolutionary computing technology: intelligent optimization algorithms with distributed behavior, inspired by the swarm behavior of insects, herds, birds, fish, etc. [3][4][5]. They have become the focus of more and more researchers. Closely related to artificial life, they include Harris hawk optimization (HHO), the slime mold algorithm (SMA), artificial bee colony (ABC), firefly optimization, cuckoo search, and the brainstorming optimization algorithm [6][7][8][9], applied to engineering scheduling, image processing, the traveling salesman problem, cluster analysis, and logistics location.
PSO is a swarm intelligence optimization technique developed by Kennedy and Eberhart [10]. Its main idea is to solve the optimization problem through individual cooperation and information sharing. PSO has a simple, highly parallel structure. Therefore, it has been used in multi-objective optimization, scheduling optimization, vehicle routing problems, etc. Although PSO shows good optimization performance, it converges slowly on complex optimization problems. Thus, a variety of improvement strategies for PSO have been presented. Nickabadi et al. [11] presented a new adaptive inertia weight strategy for PSO.

Sources | Results and Contribution to PSO
Nickabadi et al. [11] | Designed an adaptive inertia weight strategy for PSO
Zhan et al. [13] | Designed an orthogonal learning strategy for PSO
Xu [15] | Designed an adaptive tuning strategy for the PSO parameters
Wang et al. [16] | Developed a hybrid PSO
Cheng and Jin [19] | Developed a social learning PSO
Tanweer et al. [20] | Developed a self-regulating PSO
Moradi and Gholampour [22] | Designed a local search strategy for PSO
Gong et al. [23] | Developed a new hybridized PSO
Xue et al. [27] | Developed a self-adaptive PSO
Song et al. [28] | Developed a variable-size cooperative co-evolutionary PSO
Song et al. [29] | Developed a bare-bones PSO

The comprehensive learning PSO (CLPSO) algorithm is a variant of PSO, and has good application in multimodal problems. However, because the CLPSO algorithm uses only the current search velocity and the individual optimal values to update the search velocity, the velocity in the later iterations is very small, resulting in slow convergence and reduced computational efficiency. In order to improve the CLPSO algorithm, researchers have conducted some useful works. Liang et al. [30] presented a variant of PSO (CLPSO) using a new learning strategy. Maltra et al. [31] presented a hybrid cooperative CLPSO by cloning fitter particles. Mahadevan and Kannan [32] presented a learning strategy for PSO to develop a CLPSO that overcomes premature convergence. Ali and Khan [33] presented an attributed multi-objective CLPSO for solving well-known benchmark problems. Hu et al. [34] presented a CLPSO-based memetic algorithm. Zhong et al. [35] presented a discrete CLPSO with the acceptance criterion of simulated annealing (SA). Lin and Sun [36] presented a multi-leader CLPSO based on adaptive mutation. Zhang et al. [37] presented a local optima topology (LOT) structure with the CLPSO for solving various functions. Lin et al. [38] presented an adaptive mechanism to adjust the comprehensive learning probability of CLPSO.
Wang and Liu [39] presented a novel saturated control method for a quadrotor to achieve three-dimensional spatial trajectory tracking with heterogeneous CLPSO. Cao et al. [40] presented a CLPSO with local search. Chen et al. [41] presented a grey-wolf-enhanced CLPSO based on the elite-based dominance scheme. Wang et al. [42] presented a heterogeneous CLPSO with a mutation operator and dynamic multi-swarm. Zhang et al. [43] presented a novel CLPSO using the Bayesian iteration method. Zhou et al. [44] presented an adaptive hierarchical update CLPSO based on the strategies of weighted synthesis. Tao et al. [45] presented an enhanced CLPSO with dynamic multi-swarm.

These improved CLPSO algorithms use the individual optimal information of particles to guide the whole iterative process, have better diversity and search range, and can solve complex multimodal problems. However, because the global optimal value does not participate in the particle velocity and position updates, the particle velocity is too small in the later search, and the convergence speed is slow. At the same time, due to the lack of measures for avoiding local optima, once the optimal values of most particles fall into a local optimum, the search cannot find the global optimal value, and the performance is unstable. Therefore, to improve the optimization performance of CLPSO, a novel multi-strategy adaptive CLPSO (MSACLPSO) based on comprehensive learning, multi-population parallelism, and parameter adaptation is designed in this paper. MSACLPSO effectively promotes information exchange in different dimensions, ensures information sharing in the population, enhances convergence and stability, and balances the search ability compared with the related algorithms.
The main contributions and novelties of this paper are described as follows.
(1) A novel multi-strategy adaptive CLPSO (MSACLPSO) based on comprehensive learning, multi-population parallelism, and parameter adaptation is presented.
(2) A multi-population parallel strategy is designed to improve population diversity and accelerate convergence.
(3) A new velocity update strategy is designed by adding the global optimal value of the population to the velocity update.
(4) A new adaptive adjustment strategy for the learning factors is developed by linearly changing the learning factors.
(5) A parameter optimization method for photovoltaics is designed to demonstrate the practical application ability.

PSO
PSO is a population-based search algorithm that simulates the social behavior of bird flocking. In PSO, all individuals are referred to as particles, which are flown through the search space to follow the success of other individuals. The position of a particle changes according to its social and psychological tendencies, and the change of one particle is influenced by the knowledge or experience of its neighbors. As a model of this social behavior, the search tends to return to previously successful areas of the search space. The particle's velocity (v) and position (x) are changed according to the particle best value (pBest) and the global best value (gBest). The update formulas for velocity and position are as follows:

v_{ij}^{t+1} = w v_{ij}^{t} + c_1 r_1 (pBest_{ij}^{t} - x_{ij}^{t}) + c_2 r_2 (gBest_{j}^{t} - x_{ij}^{t})    (1)
x_{ij}^{t+1} = x_{ij}^{t} + v_{ij}^{t+1}    (2)

where v_{ij}^{t+1} is the velocity of the i-th particle in the j-th dimension at iteration t+1, x_{ij}^{t+1} is the corresponding position, and the position of the particle is determined by its velocity. w is an inertia weight factor, which reflects the motion habits of particles and represents the tendency of particles to maintain their previous velocity. c_1 is a self-cognition factor, which reflects the particle's memory of its own historical experience and represents the particle's tendency to approach its own best position. c_2 is a social cognition factor, which reflects collaboration and knowledge sharing among particles and represents the tendency of particles to approach the best position found by the population or neighborhood. r_1 and r_2 are random numbers in [0, 1]. Generally, the velocity is clamped to the range [-V_max, V_max] in order to control the excessive roaming of particles outside the search space. PSO terminates when the maximal number of iterations is reached or the best particle position can no longer be improved.
The PSO achieves better robustness and effectiveness in solving optimization problems.
The basic flow of the PSO is shown in Figure 1.
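The velocity and position updates of Equations (1) and (2) can be sketched in a few lines of vectorized code (a minimal illustration; the array shapes, parameter defaults, and clamping constant are assumptions for the example, not the paper's settings):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.9, c1=2.0, c2=2.0, v_max=1.0, rng=None):
    """One velocity/position update of basic PSO.

    x, v, pbest: arrays of shape (P, D); gbest: array of shape (D,).
    Velocities are clamped to [-v_max, v_max] to limit roaming.
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)  # random numbers in [0, 1)
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v_new = np.clip(v_new, -v_max, v_max)  # velocity clamping
    return x + v_new, v_new
```

In a full run, this step is repeated until the iteration budget is exhausted, with pbest and gbest refreshed from the objective values after every step.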

CLPSO
PSO can easily fall into local extrema, which leads to premature convergence. Thus, a new update strategy was presented to develop the CLPSO algorithm. In PSO, each particle learns from its own optimal value and the global optimal value. In the velocity update formula of CLPSO, the social part in which particles learn from the global optimal solution is not used. In addition, in the velocity update formula of the traditional PSO algorithm, each particle learns from all dimensions of its own optimal value, but its own optimal value is not optimal in all dimensions. Therefore, the CLPSO algorithm introduces a comprehensive learning strategy that constructs learning samples from the pBest of all particles to promote information exchange, improve population diversity, and avoid falling into local extrema. The comprehensive learning strategy uses the individual historical optimal solutions of all particles in the population to update the particle positions, effectively enhancing the exploration ability of PSO and achieving excellent optimization performance on multimodal optimization problems. The velocity and position updates of a particle are described as follows:

v_{ij}^{t+1} = w v_{ij}^{t} + c r_{ij}^{t} (pBest_{f_i(j),j}^{t} - x_{ij}^{t}),  x_{ij}^{t+1} = x_{ij}^{t} + v_{ij}^{t+1}    (3)

where i = 1, 2, ..., P and j = 1, 2, ..., D; P is the size of the population and D is the search space dimension; c is the learning factor; r_{ij}^{t} is a uniformly distributed random number on (0, 1); f_i(j) denotes the particle whose pBest particle i learns from in dimension j, so pBest_{f_i(j),j}^{t} can be the corresponding dimension of the optimal position of any particle.
The determination method of f_i(j) is described as follows: for each dimension of each particle, a random number is generated. If the random number is greater than the learning probability Pc_i, this dimension learns from the corresponding dimension of the particle's own individual optimal value; otherwise, two particles are randomly selected, and the dimension learns from the better of their optimal values. To maintain the population's diversity, CLPSO also sets an update interval (refreshing gap) m; that is, when the individual optimal value of particle i has not been updated for m iterations, its exemplar is regenerated.
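The exemplar construction described above can be sketched as follows (a hedged reading of the CLPSO rule; the function name and the exact tournament details, such as whether particle i itself may be drawn, are illustrative assumptions):

```python
import numpy as np

def build_exemplar(i, pbest, fitness, pc, rng=None):
    """Construct the comprehensive-learning exemplar for particle i.

    For each dimension j: with probability pc, learn from the better of two
    randomly chosen particles' pbest; otherwise keep particle i's own pbest.
    fitness[k] is the pbest fitness of particle k (lower is better).
    """
    rng = rng or np.random.default_rng()
    P, D = pbest.shape
    exemplar = pbest[i].copy()
    for j in range(D):
        if rng.random() < pc:  # learn from other particles in this dimension
            a, b = rng.choice(P, size=2, replace=False)
            k = a if fitness[a] <= fitness[b] else b  # tournament of two
            exemplar[j] = pbest[k, j]
    return exemplar
```

The refreshing gap m would then be handled outside this function: if particle i fails to improve for m iterations, its exemplar is rebuilt by calling this routine again.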

MSACLPSO
PSO is simple and practical and has few parameters, but it easily falls into local optima and has weak local search ability. CLPSO has low particle velocity in the later search, slow convergence, and unstable performance. To solve these problems, a multi-strategy adaptive CLPSO (MSACLPSO) algorithm is proposed by introducing a comprehensive learning strategy, a multi-population parallel strategy, a new velocity update strategy, and a parameter adaptation strategy. In MSACLPSO, the comprehensive learning strategy constructs learning samples from the pBest of all particles to promote information exchange, improve population diversity, and avoid falling into local extrema. To overcome the lack of local search ability in the later stage, the global optimal value of the population is used in the velocity update, and a new update strategy is proposed to enhance the local search ability. The multi-population parallel strategy divides the population into N subpopulations, which then evolve iteratively with particle exchange and mutation under appropriate conditions, enhancing population diversity, accelerating convergence, and ensuring information sharing between the particles. A linearly changing strategy for the learning factors adapts them to the different stages of the iterative evolution, enhancing the global search ability while improving the local search ability. Finally, an S-shaped decreasing function adaptively adjusts the inertia weight so that the population has high speed in the initial stage, the search speed is reduced in the middle stage so that the particles converge more easily to the global optimum, and a certain speed is maintained for the final convergence in the later stage.

Multi-Population Parallel Strategy
The idea of multi-population parallelism is based on the natural phenomenon of the evolution of the same species in different regions. It divides the population into multiple subpopulations, and each subpopulation searches for the optimal value in parallel to improve the search ability. The indirect exchange of optimal values and the dynamic recombination of the population enhance the population diversity and accelerate convergence. A multi-population parallel strategy is proposed here, with the following main ideas: the population is divided into N subpopulations in the process of evolution; each subpopulation evolves iteratively, and particle exchange and particle mutation are executed under appropriate conditions according to certain rules, so that information sharing between the particles of the population is ensured through the exchange of particles between subpopulations. Moreover, to enhance the local search ability of the CLPSO algorithm in the later stage, a new update strategy is applied after the first T0 generations are completed; that is, the global optimal value gBest of the population is added to the velocity update, as shown in Equation (4):

v_{ij}^{t+1} = w v_{ij}^{t} + c_1 r_{1ij}^{t} (pBest_{f_i(j),j}^{t} - x_{ij}^{t}) + c_2 r_{2ij}^{t} (gBest_{j}^{t} - x_{ij}^{t})    (4)

where c_1 and c_2 are learning factors; pBest_{f_i(j)}^{t} is the exemplar built from the individual optimal values pBest_1^t, pBest_2^t, ..., pBest_P^t of the particles in the subpopulation; gBest^t is the optimal value of the corresponding subpopulation; and r_{1ij}^{t} and r_{2ij}^{t} are uniformly distributed random numbers on (0, 1).
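The later-stage velocity update of Equation (4) can be sketched as below (an illustrative reconstruction: the exemplar is assumed to be precomputed by the comprehensive learning rule, and the function name is our own):

```python
import numpy as np

def msaclpso_velocity(x, v, exemplar, gbest, w, c1, c2, rng=None):
    """Later-stage velocity update in the spirit of Equation (4): the CLPSO
    exemplar term is kept, and a gBest term is added to strengthen the
    local search. exemplar has shape (P, D); gbest has shape (D,)."""
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)  # uniform on [0, 1), per-dimension
    r2 = rng.random(x.shape)
    return w * v + c1 * r1 * (exemplar - x) + c2 * r2 * (gbest - x)
```

During the first T0 generations, the gBest term would simply be dropped, recovering the plain CLPSO update of Equation (3).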

Adaptive Learning Factor Strategy
In PSO, the values of c_1 and c_2 are set in advance based on experience, which limits the self-learning ability. Therefore, a linearly changing strategy is developed for c_1 and c_2. In the early evolution stage, the self-cognition term is reduced and the social cognition term is increased to improve the global search ability; in the later evolution stage, particles are encouraged to converge towards the global optimum to guarantee the local search ability. The learning factors change linearly with the iteration count between c_min and c_max, the minimum and maximum values, respectively.
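Assuming c_1 decreases linearly from c_max to c_min while c_2 increases symmetrically over the run (the paper states a linear change between these bounds but the exact formula is not reproduced in this text, so this schedule is an assumption), the strategy can be sketched as:

```python
def learning_factors(t, T, c_min=0.5, c_max=2.5):
    """Linearly varying learning factors: c1 decreases from c_max to c_min,
    c2 increases from c_min to c_max, as iteration t runs from 0 to T."""
    frac = t / T
    c1 = c_max - (c_max - c_min) * frac
    c2 = c_min + (c_max - c_min) * frac
    return c1, c2
```

With the paper's settings c_min = 0.5 and c_max = 2.5, this gives (c1, c2) = (2.5, 0.5) at the start and (0.5, 2.5) at the end of the run.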

Adaptive Inertia Weight Strategy
In PSO, when the particles in the population tend to be the same, the last two terms in the particle velocity update formula, namely the social cognition part and the individual's own cognition part, gradually tend towards 0. If the inertia weight ω is less than 1, the particle speed gradually decreases, and the particles may even stop moving, which results in premature convergence. When the optimal fitness of the population has not changed (i.e., has stagnated) for a long time, the inertia weight ω should be adjusted adaptively according to the degree of premature convergence. If the same adaptive operation were adopted for the whole population, then once the population had converged to the global optimum, the probability of destroying excellent particles would increase as their inertia weight increased, degrading the performance of the PSO algorithm. To better balance the search ability, an S-shaped decreasing function is adopted to ensure that the population has high speed in the initial stage and that the search speed decreases in the middle stage, so that the particles can easily converge to the global optimal value and, finally, converge at a certain speed in the later stage. The S-shaped decreasing function for the inertia weight ω is parameterized by the maximum and minimum values ω_max = 0.9 and ω_min = 0.2, and by a control coefficient a = 13 that adjusts the speed of change.
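One common S-shaped (logistic) decreasing form with the stated parameters is sketched below; the exact expression used in the paper is not reproduced in this text, so this particular function is an assumption that matches the described behavior (near ω_max early, fastest decline mid-run, near ω_min late):

```python
import math

def inertia_weight(t, T, w_max=0.9, w_min=0.2, a=13):
    """S-shaped decreasing inertia weight (assumed logistic form).
    a controls the steepness of the mid-run transition."""
    return w_min + (w_max - w_min) / (1.0 + math.exp(a * (2.0 * t / T - 1.0)))
```

With a = 13 the curve stays within about 2e-6 of ω_max at t = 0 and of ω_min at t = T, so the population keeps a high speed early, slows sharply near the middle of the run, and retains a small constant speed for the final convergence.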

Model of MSACLPSO
The flow of MSACLPSO is shown in Figure 2. The steps of MSACLPSO are described as follows:
Step 1: Divide the population into N subpopulations, and initialize all parameters.
Step 2: Execute the CLPSO algorithm for each subpopulation. The objective function is used to determine the individual optimal value of each particle, the optimal value of each subpopulation, and the global optimal value of the population. To ensure high global search ability in the early stage, a threshold T0 is set, and in the first T0 iterations each subpopulation updates all particle states according to Equation (3). To enhance the local search ability of CLPSO in the later stage, after the T0 iterations are completed, each subpopulation updates all particle states according to Equation (4).
Step 3: If the optimal value of a subpopulation is not updated for R1 successive iterations, the subpopulation may have fallen into a local optimum. To escape it, a mutation strategy is used: each dimension of each particle in the subpopulation is mutated with probability P_m by a random perturbation based on randn, a uniformly distributed random number on (-1, 1).
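A stagnation-triggered mutation of this kind can be sketched as follows (a hypothetical perturbation form: the paper's exact mutation formula is not reproduced in this text, so the step size scaling by 10% of the search range is our own assumption; only the per-dimension probability P_m and the randn range come from the description above):

```python
import numpy as np

def mutate_subpop(positions, pm, x_min, x_max, rng=None):
    """Dimension-wise mutation applied when a subpopulation stagnates.

    Each dimension of each particle mutates with probability pm by a random
    step randn in (-1, 1), scaled here (assumption) by 10% of the range.
    Mutated positions are clipped back into the search bounds."""
    rng = rng or np.random.default_rng()
    out = positions.copy()
    mask = rng.random(out.shape) < pm              # which dimensions mutate
    randn = rng.uniform(-1.0, 1.0, size=out.shape)  # uniform on (-1, 1)
    out[mask] += (randn * 0.1 * (x_max - x_min))[mask]
    return np.clip(out, x_min, x_max)
```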
Step 4: After T0 iterations are executed, to enhance population diversity, particles are randomly exchanged between subpopulations every R iterations to recombine the subpopulations. The recombination is described as follows: each subpopulation randomly selects 50% of its particles, which are randomly exchanged with particles of the other subpopulations; then, according to the fitness values of all particles in all subpopulations, the 1/N of particles with the best fitness values in each subpopulation are selected to construct a new population. It is worth noting that an exchanged particle can be any particle in any other subpopulation.
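For the two-subpopulation case used in the experiments, the exchange part of this step can be sketched as below (a hedged reading of the recombination rule; the fitness-based reselection that follows the swap is omitted for brevity):

```python
import numpy as np

def exchange_half(pop_a, pop_b, rng=None):
    """Randomly swap 50% of the particles between two subpopulations.

    pop_a, pop_b: arrays of shape (P_a, D) and (P_b, D). Returns new
    arrays; the combined multiset of particles is preserved."""
    rng = rng or np.random.default_rng()
    k = min(len(pop_a), len(pop_b)) // 2           # number of swapped particles
    ia = rng.choice(len(pop_a), size=k, replace=False)
    ib = rng.choice(len(pop_b), size=k, replace=False)
    a, b = pop_a.copy(), pop_b.copy()
    a[ia], b[ib] = pop_b[ib].copy(), pop_a[ia].copy()  # cross-swap
    return a, b
```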
Step 5: Determine whether the end conditions are met. If they are met, the optimal result is output; otherwise, return to Step 2.

Test Functions
To verify the performance of MSACLPSO, 10 well-known benchmark functions were selected. The detailed description is shown in Table 1.

Experimental Environment and Parameter Setting
The experimental environment mainly included an Intel Core i5-4200H CPU, Windows 10, 16 GB of RAM, and MATLAB R2018b. The optimization performance of MSACLPSO was compared with other state-of-the-art algorithms, including the basic version of PSO (PSO) [46], self-organizing hierarchical PSO (HPSO) [47], fully-informed PSO (FIPS) [48], unified PSO (UPSO) [49], CLPSO [30], and static heterogeneous particle swarm optimization (sH-PSO) [50]. In MSACLPSO, the population is divided into two subpopulations, and four main parameters are adjusted to balance exploration and exploitation: population size, acceleration coefficients, iteration number, and dimensions. In our experiment, a large number of alternative values were tested, classical values from other literature were considered, and the parameter values were then modified experimentally until the most reasonable values were found. These selected parameter values attained the optimal solution, so that they could accurately and efficiently verify the effectiveness of MSACLPSO in solving optimization problems. The tuned parameters included the population size NP = 40, the number of subpopulations N = 2, c_min = 0.5 and c_max = 2.5, the dimension D = 30, run times T = 30, the maximum number of iterations G = 200, and function evaluations FEs = 300,000. The specific settings are shown in Table 2.

Experimental Results and Analysis
The population was divided into two subpopulations, and different numbers of individuals were tested. The error mean (Mean) and standard deviation (Std) were applied to evaluate the optimization performance of MSACLPSO. The experimental results with different numbers of individuals on the 30-dimensional problems are shown in Table 3, with the best results shown in bold.
As can be seen from Table 3, the subpopulation size P 1 = 10 and P 2 = 30 obtained the best optimization performance for the 10 test benchmark functions compared with other subpopulation sizes. However, for the functions F 5 , F 6 , and F 9 , MSACLPSO did not obtain satisfactory optimization performance. Therefore, the subpopulation size P 1 = 10 and P 2 = 30 was selected for performance evaluation of MSACLPSO.
MSACLPSO was compared with some variants of the PSO algorithm. The optimization performance was evaluated according to the mean and Std of the 20 obtained results. The experimental results of the different algorithms on the test functions with 30 dimensions are shown in Table 4, with the best results highlighted in bold. As shown in Table 4, all algorithms performed equally on test function F1, PSO obtained the best solution on test function F4, and FIPS obtained the best solution on test function F5; MSACLPSO performed well on test functions F1~F5. For the multimodal functions, MSACLPSO performed well on all functions, obtained the best performance on test functions F7, F8, and F10, and obtained the second-best performance on test functions F6 and F9. On the other hand, CLPSO and HCLPSO obtained the best solution on test function F6, and OLPSO and HCLPSO obtained the best solution on test function F9. Overall, MSACLPSO obtained the best performance on 6 out of 10 test functions. Therefore, MSACLPSO performs well and obtains the best optimization performance on multimodal problems. In our experiment, MSACLPSO used the strategies of comprehensive learning, multi-population parallelism, and parameter adaptation. Although the comprehensive learning and parameter adaptation strategies need more running time, the multi-population parallel strategy can reduce the running time. Therefore, the time complexity of MSACLPSO is similar to that of the other compared algorithms.
To test the statistical difference between MSACLPSO and the other variants of PSO algorithms, the non-parametric Wilcoxon signed-rank test was used to compare the results of MSACLPSO and the results of the other variants of PSO. The obtained results of MSACLPSO against other algorithms are shown in Table 5. As shown in Table 5, MSACLPSO performs better than the other variants of PSO algorithms through the number of (+/=/−) in the last row of the Wilcoxon signed-rank test results under α = 0.05.
To sum up, it can be seen that the optimized values of parameters for MSACLPSO are ω = 0.43, c = 2.1, c 1 = 1.8, c 2 = 2.1, and P c i = 0.5 for solving these complex optimization problems.

Case Analysis
Renewable energy has always been the focus of efforts to address the key issues of traditional energy consumption, which relies on nonrenewable sources. Solar energy is an up-and-coming resource, in which photovoltaics (PV) plays a vital role. However, a PV device is usually placed in an exposed environment, which leads to its degradation and seriously affects its efficiency. Therefore, MSACLPSO was employed to effectively and accurately optimize the PV parameters to establish an optimized PV model. The parameter values for MSACLPSO were the same as those given in Section 5.3.

Modeling for PV
Many PV models have been designed and applied to describe the I-V characteristics; the single-diode model (SDM) and double-diode model (DDM) are the most widely used [51]. The PV model is described in Table 6.
It is crucial to search for the optimal parameter values in order to minimize the error of the PV models. For both the SDM and the DDM, the error function is the difference between the measured current and the current calculated by the model at each measured point, and the root-mean-square error (RMSE) over all measured points is used to evaluate the PV model. As can be seen from Table 7, CPMPSO, MPPCEDE, and MSACLPSO obtained the best SRE, LRE, and MRE values, and MSACLPSO performed best on the Std of the RMSE. Therefore, the optimization performance of MSACLPSO was better than that of the compared algorithms for the SDM. As can be seen from Table 8, MSACLPSO obtained the best results for the SRE, LRE, and MRE, as well as the best Std of the RMSE. Therefore, MSACLPSO is the best algorithm for the DDM.
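The SDM objective can be sketched as below (a standard formulation from the PV parameter extraction literature, not copied from this paper: the measured current is used on the right-hand side of the implicit diode equation, and the cell temperature default of 306.15 K is an assumption; params = (Iph, Isd, Rs, Rsh, n) are the photocurrent, diode saturation current, series resistance, shunt resistance, and ideality factor):

```python
import numpy as np

K, Q = 1.380649e-23, 1.602176634e-19  # Boltzmann constant, electron charge

def sdm_current(V, I, params, T=306.15):
    """Calculated SDM current, using the measured (V, I) pair on the
    right-hand side of the implicit diode equation."""
    Iph, Isd, Rs, Rsh, n = params
    Vt = n * K * T / Q  # ideality factor times thermal voltage
    return Iph - Isd * (np.exp((V + I * Rs) / Vt) - 1.0) - (V + I * Rs) / Rsh

def rmse(V, I, params):
    """Root-mean-square error between measured and calculated currents;
    this is the quantity the optimizer minimizes."""
    return float(np.sqrt(np.mean((I - sdm_current(V, I, params)) ** 2)))
```

The DDM objective has the same shape with a second diode term added; in either case, the optimizer searches the parameter vector that minimizes rmse over the measured I-V curve.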
To sum up, the performance of MSACLPSO was demonstrated by optimizing the PV model parameters. All of the compared results, containing the optimized parameters along with the SRE, LRE, MRE, and Std values, show that MSACLPSO can obtain the optimal parameters. This provides a more effective parameter combination for the complex engineering problem of photovoltaics, so as to improve the energy conversion efficiency.

Conclusions
In this paper, a multi-strategy adaptive CLPSO with comprehensive learning, multi-population parallelism, and parameter adaptation was proposed. A multi-population parallel strategy was designed to improve population diversity and accelerate convergence. Then, a new velocity update strategy was designed, and a new adaptive adjustment strategy for the learning factors was developed. Additionally, a parameter optimization method for photovoltaics was designed to demonstrate the practical application ability. Ten benchmark functions were used to prove the effectiveness of MSACLPSO in comparison with different variants of PSO. MSACLPSO obtained the best performance on 6 out of 10 functions and performed particularly well on the multimodal problems. In addition, the actual SDM and DDM were selected for parameter optimization, and the experimental results confirmed the practical application ability of MSACLPSO in comparison with the other algorithms. MSACLPSO is thus an alternative optimization technique for solving complex and real-world engineering problems.
However, MSACLPSO is still insufficient for large-scale parameter optimization problems, suffering from high time complexity and easy stagnation, among other issues. In the future, these applications should be considered [65][66][67][68][69][70][71][72]. The algorithm should be studied in more depth, and the parameter adaptability of MSACLPSO at different stages and scales should be further explored in future works.

Acknowledgments: The authors are very thankful for the contribution of Deng.

Conflicts of Interest:
The authors declare no conflict of interest.