1. Introduction
Optimization problems arise frequently, and in increasingly complicated forms, in many research fields and in industrial engineering [1,2], such as object detection and tracking [3,4], automatic design of algorithms for visual attention [5,6], path planning optimization [7,8], and control of pollutant spreading on social networks [9]. In particular, these complicated optimization problems are usually nondifferentiable, discontinuous, nonconvex, nonlinear, or multimodal [10,11,12]. Confronted with such complicated optimization problems, the optimization effectiveness of traditional optimization methods, such as conjugate gradient methods [13,14], space-filling curve methods [15,16], quasi-Newton methods [17,18], line search methods [19,20,21], and trust-region methods [22,23], deteriorates rapidly. In extreme cases, they are even infeasible for solving these complex problems. As a consequence, effective optimization algorithms are in increasing demand to solve the complex optimization problems that keep emerging, so that the development of the related fields can be boosted.
In recent years, evolutionary algorithms (EAs), such as particle swarm optimization (PSO) [24,25] and differential evolution (DE) [26,27], have presented good optimization ability, especially in solving problems that traditional optimization methods cannot tackle, such as locating multiple global optima of an optimization problem [28,29,30,31] and simultaneously optimizing more than one objective [32,33]. Different from traditional mathematical optimization methods [34,35,36], which usually adopt only one feasible solution to iteratively search the solution space, EAs generally employ a population of candidate solutions that undergoes iterative evolution to seek the global optimum. In this manner, compared with traditional mathematical approaches [16,36,37], EAs enjoy many unique merits. (1) EAs place no requirements on the mathematical properties of the problem to be optimized [38,39] and can even deal with problems without mathematical models. By contrast, most traditional optimization methods [16,17,18], especially gradient-based approaches [17,18], impose critical requirements on the properties of optimization problems, such as continuity, differentiability, and convexity. Theoretically, EAs can be adopted to optimize any kind of problem. In the literature [40,41], however, EAs are mainly employed to tackle optimization problems that traditional optimization techniques cannot cope with. (2) EAs usually have strong global search ability since they maintain a population of individuals to explore the search space in different directions [38,41]. Therefore, falling into local regions can be avoided with high probability. Traditional mathematical optimization methods [42,43,44], by contrast, usually employ only one feasible candidate and thus search the solution space in only one direction to seek the global optimum. As a consequence, they are likely to fall into local areas, especially when dealing with optimization problems with many wide and flat local basins.
As a kind of EA, PSO [45,46] has been successfully employed to cope with various kinds of optimization problems since its first introduction in 1995 by Kennedy and Eberhart [47,48]. During the optimization, PSO maintains a population of candidate feasible solutions to search the solution space iteratively. Owing to its great advantages, such as independence of the mathematical properties of the problems to be optimized, fast convergence, and inherent parallelism [49], PSO has attracted extensive research attention. As a result, PSO has not only been widely applied to solve complex problems, such as multimodal optimization [28,29,30,31] and multiobjective optimization [32,33], but has also been commonly adopted to tackle real-world optimization problems, such as vehicle routing problems [45,50], neural networks [51,52], and task assignment [53,54].
In the literature [55,56,57,58,59], it is widely accepted that the learning strategy used to update the velocity of particles has a significant influence on helping PSO obtain good performance because it determines the way information diffuses within the swarm. As a result, researchers have designed a lot of novel learning schemes for PSO to promote its optimization performance [11,60], such as comprehensive learning strategies [56,61,62], orthogonal learning mechanisms [57,63,64], and hybrid algorithm learning methods [55,65,66]. Roughly, existing learning mechanisms for PSO can be classified into two categories: exemplar construction-based learning methods [55,56,57] and topology-based learning strategies [58,59,67].
Exemplar construction-based learning strategies aim to construct new learning exemplars, which may not correspond to positions actually visited by particles, for particles to learn from [55,56,57]. In most existing studies, the constructed exemplar is generated by dimension recombination based on the historically best positions of particles. In this way, it is expected that the constructed exemplars can guide the evolution of the swarm so that particles move to more promising areas. Along this line, the most representative approach is the comprehensive learning PSO (CLPSO) [56]. To improve the construction efficiency of exemplars, researchers have devised many other exemplar-construction methods, such as the orthogonal learning PSO (OLPSO) [57] and the genetic learning PSO (GLPSO) [55].
Different from exemplar construction-based learning strategies, topology-based learning strategies mainly adopt certain kinds of topologies to select guiding exemplars to update particles [58,59,67,68]. Different topology structures affect the way information is exchanged between particles and the speed of information circulation, thereby affecting the performance of PSO. Specifically, in most topology-based learning strategies, each particle cognitively learns from its own historically best position and socially learns from the historically best position among the neighbors connected by the associated topology. In the classical PSO [47,48], a global topology connecting all particles was utilized to select the best among the personal best positions of all particles as the learning exemplar for each particle. This global best position exerts an overly greedy attraction, such that when dealing with multimodal problems [10,48], the swarm often falls into local regions. To alleviate this dilemma, many different neighborhood topologies have been designed [58], such as the ring, four-cluster, pyramid, and square topologies. Some researchers have even proposed dynamic topologies [59,68] and combined different topologies [67] to select promising exemplars to direct the update of particles, so that the learning abilities of particles are further promoted.
Although PSO has advanced significantly, and many remarkable PSO variants [55,56,59,62,69,70,71] have shown great feasibility in coping with optimization problems, their optimization ability encounters great challenges when dealing with complicated problems with many interacting variables and numerous wide and flat local basins. Unfortunately, such complicated problems are ubiquitous in the era of big data and the Internet of Things (IoT) [72]. As a result, there is still an urgent and growing demand for effective and efficient PSOs to tackle the increasingly emerging complex optimization problems.
To further promote the optimization performance of PSO in dealing with complicated optimization problems, this paper designs a predominant cognitive learning particle swarm optimization (PCLPSO) algorithm, which utilizes a predominant cognitive learning strategy to construct guiding exemplars for particles. Specifically, the main components of PCLPSO are summarized as follows:
 (1)
A predominant cognitive learning strategy (PCL) is devised to construct guiding exemplars to update particles. Different from existing exemplar construction-based learning PSOs [55,56,57] that construct the guiding exemplars in an element-wise way, the proposed PCL constructs a promising exemplar to guide the update of each particle by letting its personal best position cognitively learn from a predominant one randomly selected from those better than the personal best position of the updated particle. On the one hand, since the personal best position of this particle learns from a better one, the constructed exemplar is expected to be more promising. As a result, the learning effectiveness of particles is expectedly promoted. On the other hand, due to the random selection in PCL, different particles generally preserve different guiding exemplars, and thus the learning diversity of particles is expectedly improved. In this way, the proposed PCLPSO can expectedly achieve a good balance between search diversity and search convergence to find satisfactory solutions.
 (2)
Dynamic parameter adjustment strategies are further designed to alleviate the predicament that PCLPSO is sensitive to involved parameters. With these dynamic strategies, different particles usually preserve different parameter settings, which is beneficial for further improving the learning diversity of particles.
To validate the optimization effectiveness and efficiency of PCLPSO, comprehensive experiments were carried out on the commonly adopted CEC 2017 benchmark function set [73] with three dimensionalities (namely, 30, 50, and 100) by comparing PCLPSO with seven representative and state-of-the-art PSO variants. In addition, deep investigations of PCLPSO were also conducted to determine what contributes to its promising performance.
The remainder of this paper is arranged as follows. Closely related works on PSOs are briefly reviewed in Section 2. Then, the developed PCLPSO is described in Section 3. In Section 4, comparative experiments are executed to verify the effectiveness of PCLPSO. Finally, Section 5 concludes this paper.
3. Proposed PCLPSO
To elevate the search effectiveness and the search diversity of particles in tackling complicated optimization problems, this paper devises a predominant cognitive learning particle swarm optimization (PCLPSO) algorithm, which utilizes the devised predominant cognitive learning strategy to construct promising learning exemplars for particles to learn from. Therefore, the proposed PCLPSO is an exemplar construction-based PSO variant.
3.1. Predominant Cognitive Learning Strategy
Among the three classical exemplar construction-based learning PSOs (namely CLPSO [56], OLPSO [57], and GLPSO [55]), the construction efficiency of promising guiding exemplars cannot be guaranteed in CLPSO on account of its random, dimension-by-dimension selection of pbests. Although OLPSO and GLPSO construct promising exemplars more efficiently than CLPSO, they usually consume a great number of fitness evaluations during the construction of promising exemplars. This reduces the number of fitness evaluations available for the swarm evolution and is consequently not beneficial for the algorithm to seek high-accuracy solutions.
To alleviate the above issues, this paper proposes a predominant cognitive learning strategy (PCL) to construct promising exemplars for particles. Specifically, given that the number of particles maintained in the swarm is NP, we first sort the personal best positions pbests of all particles from the best to the worst. Subsequently, for each particle (x_{i}, 1 $\le $ i $\le $ NP), we construct a promising guiding exemplar as follows:

$$e_{i}^{t} = pbest_{i}^{t} + F_{i} \times \left(pbest_{rb}^{t} - pbest_{i}^{t}\right) \quad (11)$$

where pbest_{i} is the personal best position found by the ith particle so far; gbest is the global best position found by the entire swarm so far; pbest_{rb} is a personal best position randomly selected from those which are better than pbest_{i}; e_{i} is the constructed exemplar for the ith particle; F_{i} is a control parameter within [0, 1], which can be seen as the learning step of pbest_{i} of the ith particle; and t denotes the generation index.
From Equation (11), we can see that the learning exemplar (e_{i}) for each particle (x_{i}) is constructed by letting its cognitive experience (namely pbest_{i}) learn from a predominant cognitive experience of other particles (namely a better personal best position pbest_{rb}), which is randomly selected from those pbests with better fitness than pbest_{i}. In this way, the constructed exemplar is expectedly close to promising areas. Note that if the personal best position (pbest_{i}) of a particle is exactly gbest, we do not construct a guiding exemplar for this particle since there is no better one to learn from. In this situation, we directly use pbest_{i} (namely gbest) to direct the update of this particle.
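As an illustration, the construction described above can be sketched as follows. This is a minimal sketch under our own assumptions: minimization, and Equation (11) taking the relocation form shown above; the function name and array layout are ours, not the paper's.

```python
import numpy as np

def construct_exemplar(pbests, fitness, i, F_i, rng):
    """Construct a guiding exemplar for particle i via predominant
    cognitive learning (minimization assumed)."""
    better = np.flatnonzero(fitness < fitness[i])  # indices of predominant pbests
    if better.size == 0:
        # pbest_i is gbest: no better position to learn from, use it directly
        return pbests[i].copy()
    rb = rng.choice(better)                        # randomly pick one predominant pbest
    # Let pbest_i take a step of size F_i toward pbest_rb (assumed Eq. (11) form)
    return pbests[i] + F_i * (pbests[rb] - pbests[i])
```

For instance, with two particles whose pbests are `[0, 0]` (better) and `[2, 2]`, a step of `F_i = 0.5` moves the worse pbest halfway toward the better one, while the best particle simply keeps its own pbest as exemplar.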
After the construction of the guiding exemplar for the ith particle, the particle is updated in the following way:

$$v_{i}^{t+1} = \omega \times v_{i}^{t} + c_{i} \times r \times \left(e_{i}^{t} - x_{i}^{t}\right) \quad (12)$$

$$x_{i}^{t+1} = x_{i}^{t} + v_{i}^{t+1} \quad (13)$$

where x_{i} and v_{i} denote the position and the velocity vectors of the ith particle, respectively; e_{i} is the constructed guiding exemplar for the ith particle; t denotes the generation index; $\omega $ denotes the inertia weight; c_{i} represents the acceleration coefficient for the ith particle; and r is randomly and uniformly sampled from [0, 1].
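The update step can be sketched as below, assuming the single-exemplar form implied by the symbol list (velocity pulled toward the constructed exemplar, then the position moved by the new velocity); names are ours.

```python
import numpy as np

def update_particle(x, v, e, omega, c, rng):
    """One PCLPSO update step (assumed Eqs. (12)-(13) form)."""
    r = rng.uniform(0.0, 1.0, size=x.shape)  # uniform random in [0, 1], per dimension
    v_new = omega * v + c * r * (e - x)      # Eq. (12): inertia + pull toward exemplar
    x_new = x + v_new                        # Eq. (13): move the particle
    return x_new, v_new
```

Note that, with `omega = 0` and `c = 0`, the particle simply stays put, which is a quick sanity check on the formula.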
In-depth observation of Equations (11) and (12) shows that the proposed PCL strategy preserves the following merits:
 (1)
The constructed guiding exemplar for each particle is hopefully more promising than its personal best position because the exemplar is generated by letting its personal best position cognitively learn from a randomly selected better one. Therefore, the learning effectiveness of particles is hopefully promoted, which helps particles locate optimal areas fast.
 (2)
Due to the random selection of the learning candidates in PCL, the constructed exemplars to guide the update of different particles are likely different. Hence, the learning diversity of particles is also expectedly promoted, which is helpful for the swarm to escape from local basins.
 (3)
In particular, we find that different particles have different numbers of learning candidates in PCL to construct exemplars, which results in pbests of different particles having different ranges to learn. Specifically, the better the pbest is, the fewer candidates this pbest has to learn from, and thus the narrower range this position moves in. Implicitly, it is found that particles with worse personal best positions prefer to explore the solution space, while particles with better personal best positions prefer to exploit the solution space.
 (4)
By means of the above merits, the devised PCL is expected to compromise exploration and exploitation well to search the solution space appropriately. Therefore, it is likely that the proposed PCLPSO could achieve good performance in coping with different kinds of optimization problems.
3.2. Dynamic Strategies for Control Parameters
From Equations (11) and (12), it is found that PCLPSO has three key parameters, namely inertia weight
$\omega $, acceleration coefficient
c, and control parameter
F. With respect to the inertia weight
$\omega $, we directly utilize the following linear decay strategy, which is commonly employed in the literature [
56,
62,
67,
69,
84]:
where
t is the current generation, while
T_{max} denotes the preset maximum number of generations.
Observing Equation (14), we can see that the inertia weight $\omega $ linearly decreases from 0.9 to 0.2 as the evolution progresses. Therefore, in the early stage, a large $\omega $ is maintained to keep the moving inertia of particles, which is beneficial for particles to search the solution space with high diversity. In the late stage, a small $\omega $ is maintained to decrease the influence of the inertia part. As a result, the swarm expectedly tends to exploit the found promising areas to obtain high-accuracy solutions.
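The linear decay described above follows directly from the stated endpoints (0.9 down to 0.2) and can be sketched as:

```python
def inertia_weight(t, t_max, w_start=0.9, w_end=0.2):
    """Linearly decay the inertia weight from w_start to w_end over the run (Eq. (14))."""
    return w_start - (w_start - w_end) * t / t_max
```

At the first generation this yields 0.9, at the midpoint 0.55, and at the final generation 0.2.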
With respect to the control parameter F and the acceleration coefficient c, we devise the following dynamic strategies.
3.2.1. Adaptive Strategy for F
In Equation (11), the control parameter F controls the learning step that the personal best position (pbest_{i}) of the updated particle takes when learning from the randomly selected predominant one (pbest_{rb}). Therefore, it has a great effect on the quality of the constructed exemplars. In particular, an overly large F_{i} leads to overly greedy learning, which results in the constructed exemplar being too close to the randomly selected pbest_{rb}. In this situation, the updated particle may approach promising areas too fast, which brings the risk of falling into local basins and premature convergence. By contrast, an overly small F_{i} results in insufficient learning, so that the constructed exemplar for each particle remains close to its pbest. In this situation, the learning effectiveness of particles is improved only very limitedly, which may slow down the convergence. Moreover, since the learning ranges of different particles are different, the settings of F_{i} should be different for different particles as well.
Bearing the above considerations in mind, we devise the following adaptive strategy for F:

$$F_{i} = Gaussian\left(\frac{rank\left(i\right)}{NP}, 0.1\right) \quad (15)$$

where rank(i) is the ranking of the personal best position (pbest_{i}) of the ith particle after all pbests are sorted from the best to the worst; F_{i} is the setting of the control parameter F for the ith particle; NP represents the number of particles in the swarm; and Gaussian($\frac{\mathit{rank}\left(i\right)}{\mathit{NP}}$, 0.1) samples a real random number from the Gaussian distribution with the mean value set as $\frac{\mathit{rank}\left(i\right)}{\mathit{NP}}$ and the standard deviation set as 0.1. It should be mentioned that a Gaussian distribution with a small standard deviation (0.1) is utilized because it has a narrow sampling range and thus generates values close to the mean. This offers slight diversity in the exemplar construction for each particle without damaging the construction efficiency.
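A minimal sketch of this sampling step, assuming ranks run from 1 (best) to NP (worst); clipping the samples into [0, 1] is our own assumption, motivated by the paper stating that F lies in [0, 1]:

```python
import numpy as np

def adaptive_F(ranks, NP, rng):
    """Sample F_i ~ Gaussian(rank(i)/NP, 0.1) for each particle (Eq. (15)),
    then clip into [0, 1] (our assumption, to keep F a valid learning step)."""
    F = rng.normal(np.asarray(ranks) / NP, 0.1)
    return np.clip(F, 0.0, 1.0)
```

Particles with good (small) ranks thus tend to draw small learning steps, and those with poor (large) ranks tend to draw steps near 1.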
From Equation (15), we can see that the control parameter F_{i} for each particle is randomly generated by the Gaussian distribution with the mean value set as the ratio of the rank of its personal best position to the population size NP, and the standard deviation set as a small value, namely 0.1. Such an adaptive strategy brings the following benefits to the proposed PCLPSO:
 (1)
Different particles have different settings of F_{i}. Since the ranks of the personal best positions of particles differ from each other, the mean value of the Gaussian distribution differs among particles, and thus the learning step F_{i} used during the exemplar construction differs among particles as well. This is beneficial for further improving the learning diversity of particles and thus helpful for assisting the swarm to get out of local basins.
 (2)
Particles with better pbests preserve small F_{i}, while those with worse pbests have large F_{i} during the exemplar construction. Specifically, better pbests usually have small ranks, and thus the mean values of the Gaussian distribution are small. Therefore, the learning step F_{i} for particles with better pbests is expectedly small. This just matches the expectation that the constructed exemplars for those particles with better pbests should not be too close to the randomly selected better positions. This is because the learning range of those better pbests is narrow due to the small number of learning candidates. By contrast, worse pbests usually have large ranks, leading to the mean value of the Gaussian distribution being large. Therefore, the learning step F_{i} is expectedly large during the construction of guiding exemplars for those particles with worse pbests. This also matches the expectation that particles with worse pbests should learn more from better ones to accelerate their moving to promising areas.
 (3)
As a whole, we can see that the devised adaptive scheme for F could implicitly help PCLPSO compromise the search diversity and the search effectiveness of particles well to search the solution space properly to obtain highaccuracy solutions.
Experiments conducted in Section 4.3 validate the usefulness of the designed adaptive strategy for F in helping PCLPSO achieve good performance.
3.2.2. Dynamic Strategy for the Acceleration Coefficient c
As for the acceleration coefficient c, instead of using the fixed values adopted in the literature [55,56,59,67,69], we develop the following dynamic strategy to generate different values for different particles:

$$c_{i} = Cauchy\left(1.6, 0.2\right) \quad (16)$$

where Cauchy(1.6, 0.2) generates a real number based on the Cauchy distribution with the position parameter set as 1.6 and the scale parameter set as 0.2, and c_{i} is the setting of the acceleration coefficient of the ith particle. It deserves attention that the Cauchy distribution is employed here instead of the Gaussian distribution because the Cauchy distribution has a long fat tail and thus can generate more diversified values than the Gaussian distribution. Moreover, we set the position parameter and the scale parameter of the Cauchy distribution as 1.6 and 0.2, respectively, because studies in the literature [55,56,59,67,69] have found that the acceleration coefficient c of PSOs is usually set in the range [1.0, 2.2], and with these parameter settings the Cauchy distribution generates diversified values mostly within this range.
With this dynamic strategy, different particles have different settings of c_{i} and the difference among the values of different particles is relatively large. This is beneficial for further improving the learning diversity of particles, which is very valuable in solving complicated optimization problems with many local basins.
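This sampling can be sketched as follows. Resampling draws that fall outside [1.0, 2.2] is our own safeguard against the Cauchy distribution's heavy tail, not a step stated in the paper:

```python
import numpy as np

def sample_c(rng, loc=1.6, scale=0.2, low=1.0, high=2.2):
    """Sample the acceleration coefficient c_i from Cauchy(1.6, 0.2) (Eq. (16)),
    resampling until the draw lies in the typical PSO range [1.0, 2.2]
    (truncation is our assumption)."""
    while True:
        c = loc + scale * rng.standard_cauchy()  # shift/scale a standard Cauchy draw
        if low <= c <= high:
            return c
```

The heavy tail means occasional draws land far from 1.6, which is exactly the source of the extra diversity compared with a Gaussian of similar scale.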
Experiments conducted in Section 4.3 demonstrate the usefulness of the designed dynamic strategy for c in assisting PCLPSO to obtain promising optimization performance.
3.3. Complete Procedure of PCLPSO
Integrating the above techniques, we develop the complete PCLPSO, whose overall procedure is presented in Algorithm 1. Specifically, the algorithm first randomly initializes NP particles and then evaluates their fitness (Line 1). After the initialization, the algorithm proceeds to the main optimization loop (Lines 2~18). Before the update of the swarm (Lines 5~17), the pbests of all particles are first sorted from the best to the worst (Line 3), and then the inertia weight $\omega $ is computed based on Equation (14) (Line 4). Then, for each particle, the control parameter F_{i} is first calculated according to Equation (15) (Line 6), and then a promising guiding exemplar is constructed (Lines 7~12). After the construction of the guiding exemplar, the acceleration coefficient c_{i} is sampled from the Cauchy distribution (Line 13) and the particle is updated (Line 14). Subsequently, the updated particle is re-evaluated (Line 15) and its pbest is updated accordingly (Line 16). The main loop (Lines 2~18) iterates until the termination condition is satisfied. Finally, when the algorithm terminates, the best solution among all pbests is returned as the final output (Line 19).
From Algorithm 1, we can see that, except for the time used for evaluating the fitness of particles, at each generation it takes O(NP×log_{2}NP) to sort all pbests, O(NP) to select random better pbests for all particles, and O(NP×D) to construct new guiding exemplars for all particles. Then, it takes O(NP×D) to update all particles and O(NP×D) to update the pbests. On the whole, the time complexity of PCLPSO is O(NP×D) per generation. With respect to the space complexity, the same as in the classical PSO, PCLPSO needs O(NP×D), O(NP×D), and O(NP×D) to store the velocity vectors, the position vectors, and the personal best position vectors of all particles, respectively.
In summary, it is concluded that PCLPSO remains as efficient as the classical PSO regarding the time complexity and the space occupation.
Algorithm 1: The Complete Procedure of PCLPSO 
Input: Population size NP, Total fitness evaluations FE_{max} 
1:  Randomly initialize NP particles and compute their fitness, and fes = NP; 
2:  While (fes ≤ FE_{max}) do 
3:  Sort pbests from the best to the worst; 
4:  Calculate inertia weight $\omega $ based on Equation (14); 
5:  For i = 1:NP do 
6:  Calculate the control parameter F_{i} according to Equation (15); 
7:  If pbest_{i} == gbest then 
8:  Use gbest as the learning exemplar e_{i}; 
9:  Else 
10:  Select a better pbest_{rb} randomly from those which are better than pbest_{i}; 
11:  Construct the learning exemplar e_{i} according to Equation (11); 
12:  End If 
13:  Obtain the acceleration coefficient c_{i} according to Equation (16); 
14:  Update particle x_{i} based on Equation (12) and Equation (13); 
15:  Compute the fitness of x_{i}: f(x_{i}), and fes++; 
16:  Update its pbest_{i}; 
17:  End For 
18:  End While 
19:  Find the global best solution gbest among all pbests; 
Output: f(gbest) and gbest 
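Putting Algorithm 1 together, a compact end-to-end sketch might look as follows. It assumes minimization, the equation forms discussed in Section 3.1, clipping of F_{i} and c_{i} into their stated ranges, and clamping positions to the search bounds; all of these are our reading, not a reference implementation.

```python
import numpy as np

def pclpso(f, dim, lb, ub, NP=40, fe_max=20000, seed=0):
    """Sketch of Algorithm 1 (minimization assumed)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (NP, dim))          # positions
    V = np.zeros((NP, dim))                     # velocities
    P = X.copy()                                # personal best positions (pbests)
    pf = np.array([f(x) for x in X])            # pbest fitness; Line 1
    fes, t_max = NP, fe_max // NP               # budget in fitness evaluations
    for t in range(t_max):
        order = np.argsort(pf)                  # Line 3: sort pbests best-to-worst
        ranks = np.empty(NP, dtype=int)
        ranks[order] = np.arange(1, NP + 1)
        w = 0.9 - 0.7 * t / t_max               # Line 4: Eq. (14)
        for i in range(NP):
            F_i = np.clip(rng.normal(ranks[i] / NP, 0.1), 0.0, 1.0)  # Line 6: Eq. (15)
            better = np.flatnonzero(pf < pf[i])
            if better.size == 0:
                e = P[i]                        # Lines 7-8: pbest_i is gbest
            else:
                rb = rng.choice(better)         # Line 10: random predominant pbest
                e = P[i] + F_i * (P[rb] - P[i]) # Line 11: Eq. (11), assumed form
            c_i = np.clip(1.6 + 0.2 * rng.standard_cauchy(), 1.0, 2.2)  # Line 13
            r = rng.uniform(0.0, 1.0, dim)
            V[i] = w * V[i] + c_i * r * (e - X[i])  # Line 14: Eq. (12), assumed form
            X[i] = np.clip(X[i] + V[i], lb, ub)     # Eq. (13), bound handling assumed
            fx = f(X[i]); fes += 1                  # Line 15
            if fx < pf[i]:                          # Line 16
                pf[i], P[i] = fx, X[i].copy()
            if fes >= fe_max:
                break
        if fes >= fe_max:
            break
    b = int(np.argmin(pf))                      # Line 19
    return P[b], pf[b]
```

For example, on a 5-D sphere function with a modest budget, this sketch steadily drives the best fitness down from its random initial value.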
4. Numerical Analysis
This section presents extensive experiments to comprehensively validate the optimization effectiveness of PCLPSO. To be specific, Section 4.1 first briefly introduces the used benchmark functions and the compared methods. Then, extensive comparisons between PCLPSO and the compared approaches are presented in Section 4.2. At last, to perform a deep analysis of the proposed algorithm, investigative experiments are executed to examine the influence of each component on the developed PCLPSO, so that readers have a better view of why the developed method achieves good performance.
4.1. Experimental Settings
In the experiments, we utilize the CEC 2017 benchmark problem set [73], which has been commonly used to validate evolutionary algorithms in the literature [75,85,86], to evaluate the optimization performance of the proposed PCLPSO. As shown in Table 1, this benchmark set contains 29 optimization problems, including 2 unimodal problems, 7 simple multimodal problems, 10 hybrid problems, and 10 composition problems. More detailed information can be found in [73]. It should be mentioned that, to present the optimization results more intuitively, for each particle we utilize the error, namely the difference between its function value and the true global optimum, as its fitness on each optimization problem.
Firstly, to comprehensively evaluate the optimization ability of the proposed PCLPSO, seven state-of-the-art and representative PSOs were selected for comparison. Specifically, the seven selected algorithms are TCSPSO [59], AWPSO [84], PSO-DLP [67], GLPSO [29], CLPSO [56], HCLPSO [62], and CLPSO-LS [69]. The former three algorithms are topology-based learning PSOs, while the latter four are exemplar construction-based learning PSOs.
Secondly, to comprehensively compare PCLPSO with the selected PSO variants, extensive comparison experiments were conducted on the CEC 2017 benchmark set with three dimensionality settings, namely 30D, 50D, and 100D. To make the comparisons fair, we set the total number of fitness evaluations (FE_{max}) as 10,000 × D for all algorithms.
Thirdly, for fairness, except for the swarm size, the key parameters of the seven selected algorithms were set as suggested in the related papers. With respect to the swarm size, since it is usually problem-dependent, we tuned its setting on the CEC 2017 benchmark set under the three dimension sizes for all algorithms. After the preliminary fine-tuning experiments, the settings of the swarm size, along with the settings of the other key parameters of all algorithms, are presented in Table 2.
Fourthly, to comprehensively measure the optimization performance of each algorithm, the median, the mean, and the standard deviation (Std) of the fitness of the global best solutions found at the end of each run over 30 independent runs were employed as the measurements. Furthermore, to tell whether there is a significant difference between the proposed PCLPSO and the compared methods, the Wilcoxon rank sum test at the significance level of α = 0.05 was conducted to compare PCLPSO with each of the compared methods on each problem. To investigate the overall optimization performance of all algorithms on each whole benchmark set, we carried out the Friedman test at the significance level of α = 0.05 to obtain the average rank of each algorithm. It should be mentioned that the above two tests were executed by directly using the corresponding APIs in Matlab.
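The two tests are also available outside Matlab; for instance, the following sketch applies them with SciPy to synthetic per-run results (the data and names here are illustrative only, not the paper's results):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical final-fitness samples over 30 independent runs on one problem
pclpso_runs = rng.normal(1.0, 0.1, 30)   # illustrative PCLPSO-like results
peer_runs = rng.normal(2.0, 0.1, 30)     # illustrative compared-PSO results
third_runs = rng.normal(1.5, 0.1, 30)    # a third algorithm, for the Friedman test

# Wilcoxon rank sum test at alpha = 0.05, applied per problem
stat, p = stats.ranksums(pclpso_runs, peer_runs)
wins = (p < 0.05) and (np.median(pclpso_runs) < np.median(peer_runs))

# Friedman test across three or more algorithms (run-wise measurement blocks)
chi2, p_f = stats.friedmanchisquare(pclpso_runs, peer_runs, third_runs)
```

Here a small `p` marks a per-problem "+"/"−" outcome depending on which median is better, while the Friedman statistic underlies the average ranks reported later.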
At last, it should be noted that all algorithms were coded under PyCharm CE and run on a server with an Intel(R) Core(TM) i7-10700T CPU @ 2.90 GHz and 8 GB RAM.
4.2. Comparison with StateoftheArt PSOs
Table 3, Table 4 and Table 5 exhibit the comparison results in terms of the global best fitness between PCLPSO and the seven selected PSO variants on the CEC 2017 benchmark set with the three dimension sizes (30D, 50D, and 100D), respectively. In the three tables, the bolded p-values denote that the proposed PCLPSO is significantly better than the compared algorithms on the associated problems. Besides, the symbol “+” above the p-values denotes that PCLPSO is significantly superior to the corresponding compared PSO variant on the related problem, the symbol “−” denotes that PCLPSO is significantly inferior to the corresponding compared PSO variant, and the symbol “=” denotes that PCLPSO obtains performance equivalent to that of the corresponding compared PSO variant. Furthermore, “+/=/−” in the three tables counts the numbers of “+”, “=”, and “−” over the whole benchmark set, respectively. Additionally, the average rank of each algorithm obtained from the Friedman test is also presented in the three tables. To clearly observe the statistical comparison results, Table 6 summarizes the statistical comparison results between PCLPSO and the seven compared peer PSO methods on the CEC 2017 benchmark set with the three different dimension sizes in terms of “+/=/−”.
Observing Table 3, we summarize the comparison results between PCLPSO and the seven compared PSOs on the 30D CEC 2017 functions as follows:
 (1)
With respect to the Friedman test results, PCLPSO obtains the smallest rank, namely 1.97, which is much smaller than those of the seven compared algorithms (at least 2.66). This substantiates that PCLPSO achieves the best overall performance on the 30D CEC 2017 benchmark set and shows significant overall dominance over the seven compared PSO variants.
 (2)
From the perspective of the Wilcoxon rank sum test results, except for HCLPSO and CLPSO, PCLPSO significantly outperforms the other five compared PSO algorithms on at least 23 problems and shows worse performance on at most three problems. Compared with HCLPSO and CLPSO, PCLPSO shows significant superiority to both on 17 problems and displays inferiority to them on at most nine problems. In particular, we find that PCLPSO shows significant superiority to PSO-DLP on all 29 problems and significantly outperforms AWPSO on 28 problems.
 (3)
In view of the comparison results on different kinds of optimization problems, on the two unimodal problems, PCLPSO dominates TCSPSO, AWPSO, and PSO-DLP on both problems, and achieves competitive performance with the other four algorithms (CLPSO-LS, GLPSO, HCLPSO, and CLPSO). On the seven simple multimodal problems, PCLPSO is significantly superior to PSO-DLP on all these problems and significantly dominates both AWPSO and CLPSO-LS on six problems. Competing with GLPSO, PCLPSO shows significant dominance on five problems and displays no failure to GLPSO. Compared with the other three algorithms, namely TCSPSO, HCLPSO, and CLPSO, PCLPSO achieves highly competitive performance. On the 10 hybrid problems, PCLPSO shows significant superiority to TCSPSO, AWPSO, CLPSO-LS, and PSO-DLP on all 10 problems. Compared with GLPSO, HCLPSO, and CLPSO, PCLPSO outperforms them significantly on nine, eight, and six problems, respectively. In terms of the 10 composition problems, PCLPSO significantly dominates both AWPSO and PSO-DLP on all these problems and outperforms GLPSO on nine problems. In comparison with TCSPSO, CLPSO-LS, and CLPSO, PCLPSO shows significant superiority to each of them on seven problems. Compared with HCLPSO, PCLPSO beats it on five problems and is defeated on only two problems.
Taking a look at
Table 4, we obtain the following findings from the comparison results between PCLPSO and the seven compared PSOs on the 50
D CEC 2017 functions:
 (1)
In regard to the Friedman test results, PCLPSO still gains the lowest rank (2.34) among all algorithms. Moreover, such a rank value is still much lower than those of the seven PSO algorithms (at least 2.97). This substantiates that PCLPSO still performs the best on the whole 50D CEC 2017 benchmark set, and its overall performance is significantly better than those of the seven compared PSO variants.
 (2)
According to the Wilcoxon rank sum test results, PCLPSO presents significant dominance over AWPSO, PSODLP, and GLPSO on 27, 29, and 24 problems, respectively. Compared with TCSPSO and CLPSOLS, PCLPSO obtains significantly better performance on 20 problems. Compared with HCLPSO, PCLPSO achieves significant superiority on 13 problems and shows inferiority on 12 problems. This demonstrates that PCLPSO obtains highly competitive performance with HCLPSO on the 50D CEC 2017 benchmark set.
 (3)
Regarding the comparison results on different kinds of benchmark problems, PCLPSO performs much better than AWPSO, PSODLP, and GLPSO on both unimodal problems, and performs competitively with TCSPSO, CLPSOLS, and CLPSO. On the seven simple multimodal problems, PCLPSO performs significantly better than TCSPSO, AWPSO, CLPSOLS, and PSODLP on at least five problems, and obtains highly competitive performance with HCLPSO and CLPSO. On the 10 hybrid problems, PCLPSO exhibits much better performance than the compared PSO variants on at least five problems, while it performs worse than them on at most three functions. In particular, in comparison with AWPSO, PSODLP, and GLPSO, PCLPSO displays significant superiority over them on 9, 10, and 8 problems, respectively. On the 10 composition problems, PCLPSO is better than TCSPSO, AWPSO, CLPSOLS, PSODLP, GLPSO, and CLPSO on at least seven problems, and performs similarly to HCLPSO.
Lastly, from
Table 5, we make the following observations from the comparison results between PCLPSO and the seven selected PSO methods on the 100
D CEC 2017 problems:
 (1)
In terms of the Friedman test results, PCLPSO still obtains the lowest rank value (2.14) among all algorithms. This demonstrates that PCLPSO consistently obtains the best overall performance on the whole 100D CEC 2017 benchmark set.
 (2)
Regarding the Wilcoxon rank sum test results presented in the second-to-last row, PCLPSO outperforms TCSPSO, AWPSO, CLPSOLS, PSODLP, and GLPSO on 20, 27, 20, 29, and 24 problems, respectively. Compared with HCLPSO and CLPSO, PCLPSO performs significantly better on 13 and 17 problems, respectively.
 (3)
Regarding the optimization performance on different kinds of problems, PCLPSO beats AWPSO, PSODLP, and GLPSO on both unimodal problems, and performs competitively with TCSPSO, CLPSOLS, and CLPSO. On the seven simple multimodal problems, PCLPSO significantly outperforms TCSPSO, AWPSO, CLPSOLS, PSODLP, and GLPSO on at least five problems, and performs worse than them on at most one problem. In comparison with HCLPSO and CLPSO, PCLPSO achieves very competitive performance. On the 10 hybrid problems, PCLPSO obtains significantly better performance than the seven PSO variants on at least five problems and worse performance on at most three problems. On the 10 composition problems, except for HCLPSO, PCLPSO dominates the other six compared methods on at least seven problems.
In summary, from
Table 6, we can see that PCLPSO consistently performs the best and exhibits significant superiority over the seven compared PSO methods on the CEC 2017 problem set with all three settings of dimensionality. This substantiates that PCLPSO is promising for dealing with optimization problems and has good scalability in solving various kinds of optimization problems. In particular, PCLPSO performs much better than the compared methods on complex problems, such as the hybrid and composition problems. This verifies that PCLPSO has a good ability to deal with complicated optimization problems.
The superiority of PCLPSO mainly profits from the devised predominant learning strategy, which constructs promising and effective guiding exemplars to update particles. In addition, the proposed dynamic parameter strategies also contribute to the good performance of PCLPSO by improving the swarm diversity. With the cohesive cooperation of these techniques, PCLPSO balances the search diversity and the search effectiveness of particles well while searching the solution space, and thus obtains satisfactory performance.
4.3. Deep Investigations on PCLPSO
This section presents experiments that investigate PCLPSO in depth to validate the usefulness of each component and determine what contributes to its good performance.
4.3.1. Effectiveness of the Predominant Cognitive Learning Strategy
First, we carried out experiments to validate the usefulness of the devised predominant cognitive learning strategy. To this end, we developed three additional versions of PCLPSO for comparison with the proposed PCLPSO. The first removes the predominant cognitive learning strategy and directly uses the personal best position of the updated particle to guide its update; we name this variant "PCLPSOWPCL". The second randomly picks a pbest from those of the other particles to generate the exemplar in Equation (11), instead of a randomly selected, strictly better one; we name this variant "PCLPSORand". The third uses gbest to construct the guiding exemplar in Equation (11); we name this variant "PCLPSOGbest".
After the above preparation, we executed experiments on the 50D CEC 2017 problem set to compare PCLPSO with the three variants.
Table 7 presents the comparison results among these four versions of PCLPSO, which are all the mean fitness values of the global best solutions found at the end of the algorithms over 30 independent runs.
From
Table 7, it is found that, with respect to both the Friedman test results and the number of problems on which the associated algorithm performs the best, PCLPSO obtains the best overall performance. In particular, PCLPSOWPCL achieves the worst performance. This demonstrates that the proposed PCL strategy is effective. Compared with PCLPSOGbest, the proposed PCLPSO and PCLPSORand perform much better. This is because in PCLPSOGbest there is only one predominant position, namely the
gbest, used to construct the guiding exemplar. Consequently, the diversity of the constructed exemplars is very limited, the learning diversity of particles is low, and the swarm diversity improves little, so the swarm easily falls into local regions. Compared with PCLPSORand, the proposed PCLPSO achieves the best results on more problems (14) and obtains a smaller rank (1.52). This demonstrates the superiority of using a predominant cognitive best position, rather than a random one, to construct the learning exemplar to update each particle.
On the whole, based on the above comparison experiments, the effectiveness of the PCL strategy is demonstrated, which could construct effective exemplars to direct the updating of particles.
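For concreteness, the following sketch illustrates our reading of the PCL exemplar construction compared above. Two details are assumptions rather than statements from this section: the exemplar form e_i = pbest_i + F·(pbest_k − pbest_i) for Equation (11), and the fallback that the particle holding the best pbest, which has no predominant pbest to learn from, keeps its own pbest as the exemplar.

```python
# Minimal sketch of PCL exemplar construction (assumed form of Equation (11)).
import numpy as np

def pcl_exemplar(pbests, fitness, i, F, rng):
    """Build a guiding exemplar for particle i from a randomly chosen
    predominant pbest (one with strictly better fitness; minimization)."""
    better = np.flatnonzero(fitness < fitness[i])
    if better.size == 0:               # assumed fallback: best particle
        return pbests[i].copy()        # keeps its own pbest as exemplar
    k = rng.choice(better)             # random predominant pbest
    return pbests[i] + F * (pbests[k] - pbests[i])

rng = np.random.default_rng(1)
pbests = rng.uniform(-5, 5, size=(20, 10))   # 20 particles, 10-D toy setup
fitness = np.sum(pbests ** 2, axis=1)        # sphere as a toy objective
e = pcl_exemplar(pbests, fitness, i=3, F=0.5, rng=rng)
```

Under this reading, PCLPSORand would draw `k` from all other indices regardless of fitness, and PCLPSOGbest would always use the index of gbest, which matches the limited exemplar diversity discussed above.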
4.3.2. Effectiveness of the Adaptive Strategy for F
Subsequently, we carried out experiments to validate the usefulness of the devised adaptive strategy (Equation (15)) for the learning step
F. To this end, we set
F with different fixed values ranging from 0.1 to 0.9.
Table 8 shows the comparison results in view of the mean fitness values of the global best solutions found at the end of the associated algorithm over 30 independent runs between PCLPSO with the adaptive
F and those with different fixed settings of
F on the 50
D CEC 2017 benchmark set. The bolded values in this table mean that the associated algorithms achieve the best performance on the corresponding problems. In addition, the average rank of each configuration of
F attained from the Friedman test is also presented in this table.
Taking a careful look at
Table 8, we attain the following observations:
 (1)
In view of the Friedman test results, PCLPSO with the adaptive F achieves the lowest rank, and its rank value is much lower than those of the others. This demonstrates that PCLPSO with the adaptive strategy obtains the best overall performance and that the adaptive strategy is greatly superior to the fixed settings.
 (2)
In-depth observations show that PCLPSO with the adaptive strategy performs the best on 10 problems, while those with fixed values obtain the best results on at most 3 problems. Moreover, the results obtained by the adaptive PCLPSO on the other 19 problems are very close to the best results obtained by PCLPSO with the associated optimal F. In particular, we find that the optimal F for PCLPSO differs across optimization problems.
In conclusion, the adaptive strategy for F not only helps PCLPSO achieve more promising performance but also relieves its sensitivity to the parameter F. The great effectiveness of the adaptive strategy mainly benefits from taking the difference among pbests into consideration when setting the learning step F. In this way, both the search effectiveness and the search diversity of particles could be improved to a large extent, and thus PCLPSO with this strategy achieves good performance.
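Equation (15) itself is not reproduced in this section, so the snippet below is purely a hypothetical illustration of the idea stated above: letting the difference among pbests set the learning step. Here the fitness gap between a particle's pbest and the selected predominant pbest, normalized by the swarm's current fitness range, maps F into the fixed-value range [0.1, 0.9] examined in Table 8; the specific mapping is our invention, not the paper's.

```python
# Hypothetical adaptive learning step: a larger pbest gap yields a larger F.
import numpy as np

def adaptive_F(fitness, i, k, f_min=0.1, f_max=0.9):
    """Map the normalized fitness gap between pbest_i and the selected
    predominant pbest_k into [f_min, f_max] (illustrative only)."""
    spread = fitness.max() - fitness.min()
    if spread == 0.0:                  # degenerate swarm: smallest step
        return f_min
    gap = abs(fitness[i] - fitness[k]) / spread   # in [0, 1]
    return f_min + (f_max - f_min) * gap

fitness = np.array([4.0, 1.0, 9.0, 2.5])
F = adaptive_F(fitness, i=2, k=1)      # worst pbest learning from the best
```

Any rule of this shape makes F problem- and particle-dependent, which is consistent with the observation above that the optimal fixed F differs across problems.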
4.3.3. Effectiveness of the Dynamic Strategy for c
At last, we executed experiments to validate the usefulness of the dynamic acceleration coefficient strategy (Equation (16)). To achieve this goal, we set
c with different fixed values ranging from 0.8 to 2.0 with a step size of 0.2.
Table 9 displays the comparison results in terms of the mean fitness values of the global best solutions found at the end of the associated algorithm over 30 independent runs between PCLPSO with the dynamic strategy for
c and those with different fixed
c on the 50
D CEC 2017 benchmark set.
From
Table 9, the following observations can be attained:
 (1)
In view of the Friedman test results, PCLPSO with the proposed dynamic strategy achieves the lowest rank (2.72), and such a rank is much smaller than those of PCLPSO with the fixed values (at least 3.52). This verifies that PCLPSO with the proposed dynamic strategy obtains the best overall performance and demonstrates the great superiority of the proposed dynamic strategy over the fixed settings.
 (2)
Further observation shows that PCLPSO with the dynamic strategy obtains the best optimization results on 16 problems, while those with fixed values obtain the best performance on at most 4 problems. Moreover, the results obtained by PCLPSO with the dynamic strategy on the other 13 problems are very similar to the best results obtained by PCLPSO with the associated optimal c.
Based on the above experiments, the effectiveness of the proposed dynamic strategy is demonstrated. Such a strategy helps PCLPSO achieve promising performance because it can generate diversified values of c, which is beneficial for further improving the search diversity of particles.
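Equation (16) is likewise not reproduced here, so the following is purely a hypothetical illustration of a dynamic strategy that "generates diversified values of c": each particle draws its own c from a Gaussian around a time-decaying mean, clipped to the fixed-value range [0.8, 2.0] examined in Table 9. The decay schedule and the Gaussian perturbation are our assumptions.

```python
# Hypothetical dynamic acceleration coefficient: per-particle c sampled
# around a mean that decays over the run, clipped to [0.8, 2.0].
import numpy as np

def dynamic_c(gen, max_gen, rng, lo=0.8, hi=2.0, sigma=0.1):
    """Return one diversified c value for the current generation."""
    mean = hi - (hi - lo) * gen / max_gen   # decays from 2.0 toward 0.8
    return float(np.clip(rng.normal(mean, sigma), lo, hi))

rng = np.random.default_rng(2)
cs = [dynamic_c(gen=100, max_gen=500, rng=rng) for _ in range(5)]
```

Sampling a distinct c per particle, rather than sharing one fixed value, is one concrete way to obtain the diversified coefficients credited above with improving the search diversity of particles.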
To summarize, the above experiments have comprehensively demonstrated the effectiveness of the proposed PCLPSO in solving optimization problems. In particular, PCLPSO performs much better than the compared peer methods in tackling complex problems, such as the multimodal, hybrid, and composition problems. The superiority of PCLPSO mainly profits from the proposed PCL strategy and the devised dynamic strategies for the learning step F and the acceleration coefficient c, whose effectiveness was also verified by the experiments.
5. Conclusions
This paper devised a predominant cognitive learning particle swarm optimization (PCLPSO) to tackle optimization problems. Instead of letting particles learn from their own cognitive experience and the social experience of the entire swarm, the proposed PCLPSO constructs an effective guiding exemplar for each particle via the devised predominant cognitive learning (PCL) strategy. Specifically, the guiding exemplar for each particle is constructed by letting its pbest learn from a predominant pbest randomly selected from those that are better than the pbest of the updated particle. In this way, the constructed exemplar for each particle is expectedly more promising than its pbest, and thus the search effectiveness of particles is expectedly improved. Moreover, due to the random selection of the predominant positions, the guiding exemplars constructed for different particles are likely different, and thus the search diversity of particles is expectedly promoted as well. To further promote the search diversity and reduce the sensitivity of PCLPSO to the related parameters, two dynamic strategies are designed for the learning step in the exemplar construction and the acceleration coefficient in the velocity update. The proposed PCL strategy and the devised dynamic strategies collaborate cohesively to help PCLPSO balance the search effectiveness and the search diversity of particles well while searching the solution space, thereby obtaining satisfactory performance.
Comparative experiments were carried out on the commonly adopted CEC 2017 benchmark set with three settings of dimensionality (30D, 50D, and 100D) to compare the proposed PCLPSO with seven representative and state-of-the-art PSOs. Experimental results substantiated the great effectiveness of the devised PCLPSO and demonstrated that PCLPSO has good scalability in solving different kinds of optimization problems. In particular, it was verified that the proposed PCLPSO has a good ability to tackle complex optimization problems, such as the multimodal, hybrid, and composition problems. To determine what contributes to the good performance of PCLPSO, deep investigations were also carried out. The experimental results demonstrated that the proposed techniques contribute substantially to the good performance of PCLPSO.
In the future, we aim to employ the proposed PCLPSO to tackle real-world optimization problems. Since PCLPSO is mainly designed for low-dimensional continuous optimization problems and is independent of the mathematical properties of optimization problems, we mainly intend to use PCLPSO to solve continuous optimization problems in academic research and real-world engineering.