Article

Stochastic Cognitive Dominance Leading Particle Swarm Optimization for Multimodal Problems

1 School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China
2 Department of Electrical and Electronic Engineering, Hanyang University, Ansan 15588, Korea
3 Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taichung 413310, Taiwan
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(5), 761; https://doi.org/10.3390/math10050761
Submission received: 30 January 2022 / Revised: 23 February 2022 / Accepted: 25 February 2022 / Published: 27 February 2022
(This article belongs to the Topic Soft Computing)

Abstract:
Optimization problems are becoming increasingly complicated in the era of big data and the Internet of Things, which significantly challenges the effectiveness and efficiency of existing optimization methods. To solve such problems effectively, this paper puts forward a stochastic cognitive dominance leading particle swarm optimization algorithm (SCDLPSO). Specifically, for each particle, two personal cognitive best positions are first randomly selected from those of all particles. Then, only when the cognitive best position of the particle is dominated by at least one of the two selected ones is the particle updated, by cognitively learning from the better personal positions; otherwise, the particle is not updated and directly enters the next generation. With this stochastic cognitive dominance leading mechanism, both the learning diversity and the learning efficiency of particles are expected to improve, so that the optimizer can explore and exploit the solution space properly. Finally, extensive experiments are conducted on a widely acknowledged benchmark problem set with different dimension sizes to evaluate the effectiveness of the proposed SCDLPSO. Experimental results demonstrate that the devised optimizer achieves highly competitive, or even much better, performance than several state-of-the-art PSO variants.

1. Introduction

Optimization problems widely exist in daily life and real-world engineering, such as resource allocation optimization [1], path planning optimization [2,3], and robot task allocation [4]. However, with the advent of the Internet of Things and big data, optimization problems are becoming increasingly complicated [5,6], with many undesirable properties, such as non-differentiability, discontinuity, non-convexity, non-linearity, and multimodality with many local optima [7]. In particular, these complex optimization problems tremendously challenge the effectiveness of traditional gradient-descent-based methods, or even render them infeasible [6,8,9]. Therefore, it is urgent to develop effective optimization algorithms for these complex problems, so as to boost the development of related areas.
Recently, heuristic algorithms, such as particle swarm optimization (PSO) [10,11] and differential evolution (DE) [12,13], have shown very promising performance in solving optimization problems. Unlike mathematical algorithms [14,15,16,17], which maintain only one solution to iteratively find the global optimum of an optimization problem, heuristic algorithms maintain a population of feasible solutions to iteratively search the solution space for the global optimum. Compared with traditional mathematical algorithms [14,15,16,17], heuristic algorithms possess several unique advantages. (1) Heuristic algorithms usually place no requirements on the mathematical properties of the optimization problem, and can even optimize problems without mathematical models [18]. In contrast, most mathematical algorithms, especially gradient-descent-based optimization methods, require that optimization problems be continuous, differentiable, and convex. Nevertheless, in the era of big data and the Internet of Things (IoT), optimization problems are often non-differentiable, discontinuous, non-convex, non-linear, and multimodal [6,19]. Confronted with such problems, heuristic algorithms such as PSO usually hold many advantages over mathematical algorithms; in particular, they possess unique merits in solving NP-hard problems [20,21]. (2) Heuristic algorithms usually possess strong global search ability owing to the population they maintain to search the solution space. Mathematical algorithms usually maintain only one solution to iteratively find the global optimum of an optimization problem.
In this case, they run a great risk of falling into local areas, especially when tackling multimodal problems with many wide and flat local regions, whereas heuristic algorithms can search the solution space in different directions by maintaining a population of individual solutions. (3) Heuristic algorithms usually possess inherent parallelism that can accelerate the iteration [18,22]. Specifically, during the optimization, the fitness evaluation of each individual solution, which is usually the most time-consuming part of heuristic algorithms, can be executed in parallel; moreover, dedicated parallel techniques can be designed and embedded into heuristic algorithms to further accelerate the search process. In contrast, most mathematical algorithms can only be executed sequentially, because the current iteration usually relies on the results of the preceding one.
In recent years, PSO has received extensive attention from researchers since it was proposed by Eberhart and Kennedy in 1995 [10,11]. Therefore, many remarkable PSO variants have emerged [23,24,25,26,27], and PSO has been widely applied to solve various optimization problems [6,28], such as multimodal optimization [29,30,31,32] and multi-objective optimization [27,33].
In the PSO literature, it is well recognized that the learning strategy of particles plays a key role in helping PSO achieve promising performance [8,34,35,36,37]. As a result, researchers have devoted significant effort to designing effective learning strategies for PSO, and many remarkable novel learning schemes have emerged, such as cooperative learning mechanisms [38,39], comprehensive learning strategies [8,40], and social learning methods [41,42]. In fact, the key to devising effective learning strategies is to select appropriate guiding exemplars for particles to update with. Broadly speaking, existing methods of selecting guiding exemplars can be divided into the following two types.
The most popular way to select guiding exemplars for particles is to employ different topologies, through which particles communicate with one another to find proper guiding exemplars [43,44]. In the earliest PSO [10,11], Eberhart and Kennedy utilized the global topology, with full connections among all particles, to choose the global best position in the whole swarm as one guiding exemplar to direct the update of particles. However, such a global topology leads to overly greedy attraction, and thus the swarm usually falls into local areas when dealing with multimodal problems. To alleviate this predicament, researchers have tried many other topologies, such as the ring topology [45], the star topology [46], the random topology [47,48], the wheel topology [46] and the dynamic topology [49]. In the early stage, researchers mainly applied these topologies to the personal best positions of particles to select proper exemplars. Since obsolete historical evolutionary information may also contain useful guidance for the learning of particles, some researchers have attempted to deploy topologies on both the personal best positions and the recorded historical best positions to find proper exemplars [35,50]. Nevertheless, these topology-based exemplar selection methods mainly find promising exemplars among historical positions already visited by particles.
To further improve the learning effectiveness of particles, researchers have sought new solutions in another direction, namely constructing guiding exemplars for particles. Unlike the former types of exemplar selection methods, this type of exemplar selection method mainly constructs new exemplars, which may not be visited by particles, to guide the evolution of the swarm. Consequently, many remarkable constructive learning strategies [8,51,52,53] have been devised for PSO to tackle complicated optimization problems. In this direction, the most representative method is the comprehensive learning PSO (CLPSO) [8], which constructs a guiding exemplar dimension by dimension for each particle based on the personal best positions of all particles. Inspired by this strategy, many other constructive approaches have been proposed, such as the orthogonal learning PSO (OLPSO) [51], which utilizes an orthogonal matrix to find suitable combinations of dimensions to construct the guiding exemplar for each particle, and the genetic learning PSO (GLPSO) [53], which adopts the operators in the genetic algorithm to construct the guiding exemplar for each particle.
Though most existing PSOs have shown promising performance on simple optimization problems, such as unimodal problems and simple multimodal problems [22,54], they encounter limitations when tackling complex optimization problems, such as multimodal problems with many interacting variables and numerous wide local regions, which are increasingly common in the era of big data and IoT. As a result, there is an increasing demand for effective PSO variants to solve emerging complicated optimization problems.
Inspired by the competition mechanism in human society, whereby groups of randomly assembled individuals spontaneously engage in costly group competition [55], this paper proposes a stochastic cognitive dominance leading particle swarm optimization algorithm (SCDLPSO) to improve the learning effectiveness and efficiency of particles when tackling complicated optimization problems. Instead of competition between individuals, this paper employs competition between the personal best positions of particles to select guiding exemplars for each particle to be updated. Specifically, for each particle to be updated, two different personal best positions are first randomly selected from those of the other particles. Then, the two selected personal best positions compete with the personal best position of the particle. Only when at least one of the randomly selected personal best positions is better than that of the particle is the particle updated, by cognitively learning from the two better personal best positions (either the two randomly selected ones, or the better randomly selected one together with its own); otherwise, it is not updated and directly enters the next generation. In this way, on the one hand, the swarm diversity can be largely promoted, owing to the random selection of the two competing personal best positions and the retention of some particles that preserve relatively good historical evolutionary information. On the other hand, the learning effectiveness of particles can also be largely improved, because each updated particle learns from two better personal best positions. As a result, the proposed SCDLPSO is expected to balance search diversification and intensification well, so as to explore and exploit the solution space appropriately.
To verify the effectiveness of the proposed SCDLPSO, comparative experiments are extensively conducted on the widely used CEC 2017 benchmark problem set [56] with different dimension sizes (namely 30, 50, and 100) by comparing SCDLPSO with seven state-of-the-art PSO methods.
The rest of this paper is organized as follows. Section 2 reviews the canonical PSO and the representative and latest PSO variants. Then, in Section 3, the devised SCDLPSO is elaborated in detail. This is followed in Section 4 by the verification of the effectiveness of SCDLPSO using extensive experiments. Finally, conclusions are given in Section 5.

2. Related Work

2.1. Canonical PSO

In the classical PSO [10,11], each particle is represented as two vectors, namely the position vector and the velocity vector. During the evolution, each particle memorizes its own personal best position found so far, while the whole swarm memorizes the global best position found so far by all particles. Then, each particle is updated by cognitively learning from its own search experience, namely the personal best position, and socially learning from the search experience of all particles, namely the global best position. Specifically, the velocity and the position of each particle are updated as follows:
v_i^(t+1) = w · v_i^t + c1 · r1 · (pbest_i^t − x_i^t) + c2 · r2 · (gbest^t − x_i^t)    (1)
x_i^(t+1) = x_i^t + v_i^(t+1)    (2)
where xi and vi are the position vector and the velocity vector of the ith particle, respectively; pbesti is the personal best position of the ith particle; gbest is the global best position found by all particles in the swarm; t denotes the generation index; w ∈ [0, 1] is the inertia weight; r1 and r2 are two real random numbers uniformly sampled within [0, 1]; c1 and c2 are two acceleration factors controlling the influence of the two guiding exemplars on the updated particle.
In the literature [8,11,57], a linearly decreased w defined as follows is usually adopted to alleviate the sensitivity of PSO to the inertia weight:
w = 0.9 − 0.5 × (t / Tmax)    (3)
where t is the number of generations used so far, while Tmax stands for the maximum number of generations.
From Equation (1), it can be seen that in the classical PSO, all particles learn from the same guiding exemplar, namely the global best position gbest of the whole swarm. As a result, particles in the swarm converge very quickly to promising areas. However, once gbest falls into a local area, it is hard for particles to escape from the local basin, and the algorithm suffers premature convergence. Therefore, the classical PSO is well suited to unimodal optimization problems, but ill-suited to multimodal ones [22,30,43].
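As a concrete illustration, the canonical gbest PSO of Equations (1)–(3) can be sketched as follows. This is a minimal sketch for minimization; the swarm size, generation budget, acceleration factors, and boundary clamping are illustrative implementation choices, not prescribed settings.

```python
import numpy as np

def canonical_pso(f, dim, bounds, swarm_size=30, max_gen=200, c1=2.0, c2=2.0, seed=0):
    """Canonical gbest PSO following Equations (1)-(3) (minimization)."""
    rng = np.random.default_rng(seed)
    low, high = bounds
    x = rng.uniform(low, high, (swarm_size, dim))   # positions
    v = np.zeros((swarm_size, dim))                 # velocities
    pbest = x.copy()                                # personal best positions
    pbest_f = np.array([f(p) for p in pbest])
    gbest = pbest[np.argmin(pbest_f)].copy()        # global best position

    for t in range(max_gen):
        w = 0.9 - 0.5 * t / max_gen                 # Equation (3)
        r1 = rng.random((swarm_size, dim))
        r2 = rng.random((swarm_size, dim))
        # Equation (1): cognitive pull toward pbest, social pull toward gbest
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        # Equation (2), with simple boundary clamping (an implementation choice)
        x = np.clip(x + v, low, high)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = fx[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(pbest_f.min())
```

On a simple unimodal function such as the sphere function, this sketch converges quickly, which is consistent with the observation above that gbest-guided PSO suits unimodal problems.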

2.2. Advancement of Learning Strategies for PSO

As shown in Equation (1), the learning strategy used to update the velocity of particles plays the most crucial role in helping PSO achieve promising performance [8,22,26,45,51,52,58]. Therefore, to further improve the optimization performance of PSO on various optimization problems, especially complicated ones such as multimodal problems, researchers have focused extensive attention on devising novel and effective learning strategies, and a wealth of remarkable PSO variants has emerged [22,25,27,34,41]. Broadly speaking, existing learning strategies for PSO can be divided into two main categories, namely topology-based learning strategies [43,48,59,60,61] and constructive learning strategies [8,35,51,52,53].
Topology-based learning strategies [23,43,45,49,62] mainly utilize different topologies, through which particles interact, to find proper guiding exemplars to direct the update of the associated particle. In fact, the learning strategy in the classical PSO [10,11] described above is also topology-based, where the topology is the full topology connecting all particles. However, such a topology leads to overly greedy attraction by the second guiding exemplar in Equation (1), which results in premature convergence of PSO on multimodal problems. To alleviate this predicament, researchers have developed many novel topologies to find less greedy exemplars and thereby improve the learning diversity of particles. To name a few representatives: in [45], a ring topology was utilized to organize particles into a ring, where each particle interacts with its left and right neighbors to select the locally best position (lbest) as the guiding exemplar replacing gbest in Equation (1). In addition, the star topology [46] and the wheel topology [46] were also utilized to organize particles and select the best position in the associated topology to replace gbest. In [63], cellular automata (CA) with the lattice and "smart-cell" structures were integrated into PSO to select the second guiding exemplars for particle updates. In [60], the ring topology, along with an elitist learning strategy, was incorporated into PSO to maintain a balance between exploration and exploitation when searching the solution space.
In the above studies, the adopted topologies are usually fixed during the whole optimization, which results in insufficient information exchange among particles. To further improve the learning effectiveness of particles, some researchers have attempted to adopt dynamic topologies to find promising guiding exemplars. For instance, in [61], a dynamic-neighborhood-based switching PSO (DNSPSO) was proposed, which combines a distance-based dynamic topology with a novel switching learning strategy that adaptively selects the acceleration coefficients according to the searching state of the swarm. In this way, the evolutionary information of the swarm can be fully used to update particles. In [59], Gong and Zhang proposed a small-world-network-based topology that lets each particle interact with its cohesive neighbors and, with some probability, communicate with distant particles via small-world randomization.
Beyond the above studies, which apply a single topology to the whole swarm, some researchers have also been devoted to hybridizing different topologies based on subpopulation techniques to further improve the learning diversity of particles. For example, in [64], a fitness-peak-clustering (FPC) based dynamic multi-swarm PSO with an enhanced learning strategy (FPCMSPSO) was designed. Specifically, this algorithm uses FPC to divide the swarm into several sub-swarms, and then evolves each sub-swarm independently based on the local topology. In [65], a PSO with double learning patterns (PSO-DLP) was developed. This PSO variant adopts two swarms, namely the master swarm and the slave swarm, and employs two different learning patterns to update particles in the two swarms, so that a trade-off between convergence speed and swarm diversity can be achieved. In particular, in the slave swarm, a local topology is used to update particles to explore the search space, while in the master swarm, a global topology is utilized to exploit the found promising areas. In [47], a memetic multi-topology particle swarm optimizer (MMTPSO) was devised utilizing two different topologies; during the evolution, this algorithm is biased toward using the best-performing topology to evolve the swarm.
Different from the topology-based learning strategies that select guiding exemplars from the historical best positions already found by particles, the constructive learning strategies mainly build new exemplars that may not appear during the evolution based on the historical best positions [8,35,51,53,66]. Specifically, these methods mainly aim to recombine dimensions of the historical best positions to try to generate promising guiding exemplars via some recombination techniques. In this direction, the most representative method is the comprehensive learning strategy [8], which constructs a new exemplar dimension by dimension for each particle based on the personal best positions of all particles. Since the advent of CLPSO, researchers have proposed many additional techniques to further improve its optimization ability. For instance, in [40], a local search method was incorporated into CLPSO, and an adaptive local search starting strategy was further put forward to adaptively trigger the local search by utilizing the quasi-entropy index. Instead of adopting fixed comprehensive learning (CL) probabilities in CLPSO, in [35], an adaptive CLPSO with cooperative archive (ACLPSO-CA) was developed by adaptively adjusting the CL probability along with a cooperative archive (CA). Specifically, this algorithm divides the CL probability into three levels and adjusts the CL probability level of each particle dynamically according to its performance during the evolution. In [66], a multi-leader CLPSO with adaptive mutation (ML-CLPSO-AM) was developed by incorporating a multi-leader (ML) strategy and an adaptive mutation (AM) strategy into CLPSO. In the ML strategy, a set of top-ranked particles form a pool of candidate leaders. During the update of each particle, a leader is randomly selected from the pool to guide the learning of this particle. In the AM strategy, the stagnated particles are adaptively mutated to restart their evolution. 
In [52], a heterogeneous CLPSO (HCLPSO) was devised by using the personal best positions of particles to generate guiding exemplars for the subpopulation responsible for exploration, while adopting the global best positions of the entire swarm to generate guiding exemplars for the subpopulation responsible for exploitation. In [50], a triple archives PSO (TAPSO) was designed by maintaining three archives. The first archive is used to store particles with better fitness, and the second archive is used to record the other particles. Then, similarly to CLPSO, this algorithm generates a new exemplar for each particle dimension by dimension by randomly choosing one particle from the first archive and one from the second archive based on the ordinary genetic operators. If the constructed exemplar has excellent performance, it will be saved in the third archive and then reused by inferior particles.
Although the above CLPSO variants have shown promising performance, their construction of new effective exemplars is inefficient because the recombination of dimensions is entirely random. To improve the effectiveness of the recombination, in [51], an orthogonal learning PSO (OLPSO) was devised based on orthogonal experimental design to discover useful information lying in the historical positions found by particles. Specifically, this algorithm utilizes an orthogonal matrix to find effective combinations of dimensions to construct a promising and efficient exemplar for each particle. However, this method is computationally expensive, because the orthogonal experimental design requires many fitness evaluations. To alleviate this issue, in [53], genetic operators, including crossover, mutation, and selection, were adopted to construct guiding exemplars based on the historical search information of particles. In this way, the generated exemplars are expected to be not only well diversified but also of high quality. In [45], a global GLPSO with a ring topology (GGL-PSOD) was devised, in which the ring topology is used to breed diversified exemplars from two directly connected neighbor particles, so that the exploration ability can be promoted.
Besides the learning strategies, the settings of the key parameters in Equation (1) also play a crucial role in helping PSO achieve good performance on optimization problems [19,57,67,68,69]. However, suitable parameter settings usually differ across optimization problems. To alleviate this dilemma, researchers have designed many adaptive parameter adjustment strategies for PSO [70]. For instance, in [71], an adaptive PSO was proposed that adaptively adjusts the inertia weight and the acceleration coefficients. Specifically, a real-time evolutionary state estimation method was designed to classify the evolutionary state of the swarm in each generation into four types, namely exploration, exploitation, convergence, and jumping out. The inertia weight and the acceleration coefficients are then dynamically adjusted based on the estimated state. Taking inspiration from the activation functions of neural networks, Liu et al. [57] proposed a sigmoid-function-based weighting strategy for PSO to update the acceleration coefficients by considering both the distance from the updated particle to gbest and that from the particle to its pbest. In [72], a self-adaptive parameter updating strategy based on success-history information was proposed to automatically adjust the learning parameters to appropriate values.
Although PSO has advanced considerably since its advent, as briefly reviewed above, its optimization ability is still challenged by complicated optimization problems with many interacting variables and wide, flat local areas. Therefore, how to improve the optimization performance of PSO remains an open issue and a hot topic in the evolutionary computation community. To this end, this paper proposes a stochastic cognitive dominance leading particle swarm optimization (SCDLPSO) to improve the optimization ability of PSO in solving complex optimization problems, which is elucidated in the next section.

3. Stochastic Cognitive Dominance Leading Particle Swarm Optimization

In human society, groups of randomly assembled individuals usually engage in costly group competition spontaneously [55]. Likewise, in the swarm of PSO, we can also randomly assemble particles and let them compete with each other based on their historical cognitive experience. Inspired by this, we propose a stochastic cognitive dominance leading particle swarm optimization algorithm (SCDLPSO) in this paper to improve the optimization ability of PSO in tackling optimization problems.

3.1. Stochastic Cognitive Dominance Leading Strategy

During the evolution, different particles usually preserve different historical experience, and these cognitive experiences carry valuable information to guide the evolution of the swarm. In the classical PSO [10,11], each particle exchanges its own experience with all other particles to find the best one to direct its learning. Such an exchange is overly greedy and offers low diversity, as all particles learn from the same best position, namely gbest. To improve the learning diversity and effectiveness of particles, we propose a stochastic cognitive dominance leading (SCDL) strategy for PSO by imitating the stochastic competition mechanism in human society [55].
Specifically, given that NP particles are maintained in the swarm, during the evolution, for each particle to be updated (denoted as xi, i ∈ [1, NP]), we first randomly select two different personal best positions (denoted by pbestpr1 and pbestpr2) from those of the remaining (NP−1) particles. Without loss of generality, we assume f(pbestpr1) ≤ f(pbestpr2). Then, the two selected best positions compete with the personal best position of the particle to be updated. Only when at least one of the two selected personal best positions dominates that of the particle is the particle updated; otherwise, it is not updated and directly enters the next generation.
In particular, under the assumption that f(pbestpr1) ≤ f(pbestpr2), the competition between the two selected best positions (pbestpr1 and pbestpr2) and the personal best position (pbesti) of the particle to be updated yields three possible cases: pbesti is dominated by both of the two selected positions, pbesti is dominated by only one of them, or pbesti dominates both of them. In the three cases, the velocity of the associated particle is updated accordingly, as follows:
Case 1: f(pbest_pr1) ≤ f(pbest_pr2) ≤ f(pbest_i):
v_i^(t+1) = w · v_i^t + β · (r1 · (pbest_pr1 − x_i^t) + r2 · (pbest_pr2 − x_i^t))    (4)
Case 2: f(pbest_pr1) ≤ f(pbest_i) < f(pbest_pr2):
v_i^(t+1) = w · v_i^t + β · (r1 · (pbest_pr1 − x_i^t) + r2 · (pbest_i − x_i^t))    (5)
Case 3: f(pbest_i) < f(pbest_pr1) ≤ f(pbest_pr2):
v_i^(t+1) = v_i^t    (6)
where vi is the velocity vector of the ith particle; pbesti is its personal best position; pbestpr1 and pbestpr2 are the two randomly selected personal best positions; w ∈ [0, 1] is the inertia weight; r1 and r2 are two real random numbers uniformly sampled within [0, 1]; β is the acceleration parameter controlling the influence of the two guiding exemplars on the updated particle.
In Case 1, the two randomly selected personal best positions pbestpr1 and pbestpr2 are both superior to that (pbesti) of the updated particle. In this situation, it is likely that the historical experience of the two associated particles is more valuable than that of the particle to be updated. Therefore, to accelerate the learning efficiency of this particle, we let it cognitively learn from these two random best positions (pbestpr1 and pbestpr2) instead of from its own historical experience. In this way, the updated particle is expected to approach promising areas quickly. It should be noted that such a fast approach to promising areas does not come at the expense of search diversity, because the two personal best positions are randomly selected from those of all particles.
In Case 2, only one of the two selected personal best positions (under the assumption f(pbestpr1) ≤ f(pbestpr2), it is pbestpr1) dominates the personal best position (pbesti) of the particle to be updated. To enhance the probability of finding more promising areas, we let this particle cognitively learn from the better of the two selected historical best positions (pbestpr1) and its own personal best position (pbesti). By this means, the updated particle also learns from good historical experience and is thus likely to approach promising areas quickly.
In Case 3, both of the two randomly selected personal best positions are inferior to that of the particle to be updated. In this situation, apart from its own valuable experience, no extra useful experience is available for this particle to learn from during this information exchange. Therefore, this particle is not updated and directly enters the next generation, as shown in Equation (6), where the velocity of the particle remains unchanged.
On the whole, the above three cases show that the proposed SCDL strategy can help PSO maintain high search diversity from two perspectives. (1) The random selection of the two personal best positions allows different particles to learn from different exemplars, so the learning diversity of particles can be largely improved, which benefits the promotion of swarm diversity. (2) In Case 3, some particles with promising experience survive; such an implicit retention mechanism affords good chances of improving the swarm diversity. Additionally, SCDL can also help PSO achieve fast convergence, because each updated particle in Cases 1 and 2 learns from valuable cognitive experience. Such learning from elite experience offers a high probability for updated particles to approach promising areas quickly. Consequently, the proposed SCDLPSO is expected to balance search intensification and diversification well, so as to explore and exploit the solution space appropriately and find high-quality solutions to optimization problems.
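The three-case update above can be sketched for a single particle as follows. This is a minimal sketch for minimization; the helper name scdl_velocity and the module-level random generator are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def scdl_velocity(i, x, v, pbest, pbest_f, w, beta):
    """One SCDL velocity update (Equations (4)-(6)); returns (new velocity, updated?)."""
    n, dim = x.shape
    # Randomly pick two *other* particles' personal bests (the random triad).
    a, b = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
    if pbest_f[b] < pbest_f[a]:          # ensure f(pbest_pr1) <= f(pbest_pr2)
        a, b = b, a
    r1, r2 = rng.random(dim), rng.random(dim)
    if pbest_f[b] <= pbest_f[i]:         # Case 1: both dominate pbest_i, Eq. (4)
        return w * v[i] + beta * (r1 * (pbest[a] - x[i]) + r2 * (pbest[b] - x[i])), True
    if pbest_f[a] <= pbest_f[i]:         # Case 2: only pbest_pr1 dominates, Eq. (5)
        return w * v[i] + beta * (r1 * (pbest[a] - x[i]) + r2 * (pbest[i] - x[i])), True
    return v[i], False                   # Case 3: no dominance, Eq. (6)
```

Note that the Case 3 branch returns the velocity unchanged and flags the particle as not updated, which realizes the retention mechanism discussed above.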

3.2. Difference between SCDL and Existing PSO Variants

In fact, the proposed SCDL strategy is a topology-based learning strategy, in which the topology is a random triad connecting each particle to be updated with two particles randomly selected from the rest of the swarm. Compared with existing studies adopting random topologies [47,48,59,73], the proposed SCDL is distinguished in the following two respects:
(1)
SCDL lets each particle learn from the two best personal best positions in the associated random triad topology. That is to say, the topology affects the selection of both guiding exemplars in the velocity update. However, most existing random topology-based learning strategies [47,48,59,73] let each particle learn from its own personal best position and the best one within its random topology. In other words, the random topology only influences the selection of the second guiding exemplar in the velocity update.
(2)
In Case 3 of the proposed SCDL, particles with promising historical experience are not updated and directly enter the next generation. With this retention mechanism, promising historical experience is preserved from being attracted to local areas, which helps the swarm escape from local optima. However, in most existing studies [47,48], all particles are updated, and thus no retention mechanism like Case 3 in SCDL exists.
Based on these two unique advantages, the proposed SCDL is expected to help PSO balance the swarm diversity and the convergence speed better to explore and exploit the solution space.

3.3. Overall Procedure

The overall procedure of the developed SCDLPSO is shown in Algorithm 1. Specifically, after NP particles are randomly initialized and evaluated (Line 1), the algorithm enters the main iteration of the evolution. During the update of the swarm (Lines 3–15), for each particle, two random personal best positions are first selected (Line 4), and then they compete with the personal best position of the particle to be updated. Depending on the competition case, the particle is updated accordingly (Lines 9–13). After the particle is updated, it is re-evaluated, and its personal best position is updated accordingly (Line 14). The main iteration proceeds until the maximum number of fitness evaluations is exhausted. At the end of the algorithm, the global best position found by the swarm is returned as the final output (Line 17).
Algorithm 1 The pseudocode of SCDLPSO
Input: swarm size NP, maximum number of fitness evaluations FEmax
1:  Initialize NP particles randomly and calculate their fitness, and set fes = NP;
2:  While (fes ≤ FEmax) do
3:    For i = 1:NP do
4:      Select two different exemplars randomly from the personal best positions of all particles: pbestpr1 and pbestpr2 (pr1 ≠ pr2);
5:      If (f(pbestpr2) < f(pbestpr1)) then
6:        Swap pr2 and pr1;
7:      End If
8:      Compute the inertia weight w according to Equation (3);
9:      If (f(pbestpr1) ≤ f(pbestpr2) ≤ f(pbesti)) then
10:       Update the particle according to Equation (4) and Equation (2);
11:     Else If (f(pbestpr1) ≤ f(pbesti) < f(pbestpr2)) then
12:       Update the particle according to Equation (5) and Equation (2);
13:     End If
14:     Calculate the fitness of the updated particle: f(xi), update its pbesti, and set fes += 1;
15:   End For
16: End While
17: Obtain the global best solution gbest and its fitness f(gbest);
Output: f(gbest) and gbest
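As an illustration, Algorithm 1 can be translated into a short runnable program. The update formulas below are generic stand-ins for Equations (2)–(5), which are not reproduced in this section: we assume a standard linearly decreasing inertia weight for Equation (3), a velocity update guided by pbest_pr1 and pbest_pr2 in Case 1, and one guided by pbest_pr1 and the particle's own pbest in Case 2; the function name, acceleration coefficients and other defaults are likewise illustrative.

```python
import random

def scdlpso(f, dim, bounds, swarm_size=50, max_fes=10000, c1=1.49445, c2=1.49445):
    """Minimal runnable sketch of Algorithm 1 for minimizing f (assumed forms
    of Equations (2)-(5); not the authors' reference implementation)."""
    lo, hi = bounds
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm_size)]
    vs = [[0.0] * dim for _ in range(swarm_size)]
    pbests = [x[:] for x in xs]
    pfits = [f(x) for x in xs]
    fes = swarm_size                              # Line 1
    while fes < max_fes:                          # Line 2
        # Assumed linearly decreasing inertia weight (stand-in for Eq. (3)).
        w = 0.9 - 0.5 * fes / max_fes
        for i in range(swarm_size):               # Line 3
            # Line 4: two distinct random exemplars, excluding particle i.
            r1, r2 = random.sample([j for j in range(swarm_size) if j != i], 2)
            if pfits[r2] < pfits[r1]:             # Lines 5-7: r1 is the better one
                r1, r2 = r2, r1
            if pfits[r2] <= pfits[i]:             # Case 1 (Lines 9-10)
                g1, g2 = pbests[r1], pbests[r2]
            elif pfits[r1] <= pfits[i]:           # Case 2 (Lines 11-12)
                g1, g2 = pbests[r1], pbests[i]
            else:                                 # Case 3: particle kept as-is
                continue
            for d in range(dim):                  # assumed forms of Eqs. (4)/(5) and (2)
                vs[i][d] = (w * vs[i][d]
                            + c1 * random.random() * (g1[d] - xs[i][d])
                            + c2 * random.random() * (g2[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            fit = f(xs[i])                        # Line 14: re-evaluate
            fes += 1
            if fit < pfits[i]:
                pfits[i], pbests[i] = fit, xs[i][:]
            if fes >= max_fes:
                break
    best = min(range(swarm_size), key=lambda i: pfits[i])
    return pbests[best], pfits[best]              # Line 17
```

A typical call, e.g. `scdlpso(lambda x: sum(t * t for t in x), 2, (-5.0, 5.0))`, returns the best position found and its fitness on the 2-D sphere function.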
From Algorithm 1, we can see that at each generation, apart from the fitness evaluations, it takes O(NP) to select random personal best positions for all particles and O(NP) to compare the selected personal best positions with those of the associated particles. It then takes O(NP ∗ D) to update all particles. On the whole, the time complexity of SCDLPSO per generation is O(NP ∗ D). As for the space complexity, the same as the classical PSO, SCDLPSO needs O(NP ∗ D) each to store the velocities, the positions and the personal best positions of all particles.
In conclusion, we can see that the proposed SCDLPSO remains as efficient as the classical PSO in terms of both the time complexity and the space occupation.

4. Experiments

In this section, extensive experiments are carried out to verify the effectiveness of the proposed SCDLPSO on the widely used CEC 2017 benchmark problem set [56]. In particular, this benchmark set contains 29 optimization problems of four types, namely unimodal problems (F1 and F3), simple multimodal problems (F4–F10), hybrid problems (F11–F20) and composition problems (F21–F30). The latter two types are much more difficult to optimize than the former two. For more detailed information about this benchmark set, please refer to [56].

4.1. Experimental Setup

First, to comprehensively verify the performance of SCDLPSO, we compare it with several state-of-the-art PSO algorithms. Specifically, the selected representative and state-of-the-art PSO variants are XPSO [74], TCSPSO [75], DNSPSO [61], AWPSO [57], CLPSO_LS [40], GLPSO [53], and CLPSO [8]. Among these compared algorithms, XPSO and DNSPSO are topology-based learning PSO variants, while CLPSO_LS, GLPSO and CLPSO are constructive learning-based PSO methods.
Second, to make comprehensive comparisons between the proposed SCDLPSO and the compared PSO variants, we evaluate their performance on the CEC 2017 benchmark problems with three different dimension sizes, namely 30-D, 50-D, and 100-D. For fair comparisons, the maximum number of fitness evaluations (FEmax) is set as 10,000 ∗ D for all algorithms.
Third, to make fair comparisons, we fine-tune the population size for all algorithms on the CEC 2017 benchmark set with different dimension sizes. After preliminary fine-tuning experiments, the parameter settings of all algorithms are as listed in Table 1.
Fourth, to comprehensively and fairly evaluate each algorithm, we run each algorithm 30 times independently and use the median, the mean and the standard deviation over the 30 independent runs to evaluate its optimization performance. In addition, to identify statistical significance, we conduct the Wilcoxon rank sum test at the significance level of α = 0.05. Moreover, to investigate the overall performance of each algorithm on the whole CEC 2017 benchmark set, the Friedman test is also performed at the significance level of α = 0.05 to obtain the average rank of each algorithm.
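The Friedman average ranks reported in the experiments can be computed as sketched below: rank the algorithms on each problem by mean error (ties receive the average of their rank positions), then average the ranks over all problems. The function and the sample input in the usage note are illustrative only, not the paper's data.

```python
def average_ranks(results):
    """Friedman-style average ranks. results[p][a] is the mean error of
    algorithm a on problem p (lower is better); returns one average rank
    per algorithm, where ties share the mean of their rank positions."""
    n_alg = len(results[0])
    totals = [0.0] * n_alg
    for errors in results:
        order = sorted(range(n_alg), key=lambda a: errors[a])
        ranks = [0.0] * n_alg
        i = 0
        while i < n_alg:
            # Find the group of algorithms tied with the one at position i.
            j = i
            while j + 1 < n_alg and errors[order[j + 1]] == errors[order[i]]:
                j += 1
            for k in range(i, j + 1):          # assign the averaged rank
                ranks[order[k]] = (i + j) / 2 + 1
            i = j + 1
        for a in range(n_alg):
            totals[a] += ranks[a]
    return [t / len(results) for t in totals]
```

For example, with two problems and three algorithms, `average_ranks([[1.0, 2.0, 3.0], [3.0, 1.0, 2.0]])` yields `[2.0, 1.5, 2.5]`, so the second algorithm would rank first overall.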

4.2. Parameter Sensitivity Analysis

In the proposed SCDLPSO, two parameters need to be fine-tuned, namely the swarm size NP and the control parameter β. Therefore, to investigate the sensitivity of SCDLPSO to these two parameters, we conduct experiments on the 50-D CEC 2017 benchmark set as a representative, varying NP from 50 to 200 and β from 0.1 to 1.0. Table 2 shows the comparison results for SCDLPSO with different settings of NP and β. In this table, the best results are highlighted in bold, and the average rank of each setting of β under the same setting of NP, presented in the last row of the table, is obtained by the Friedman test at the significance level of α = 0.05.
From Table 2, we can derive the following findings. (1) From the perspective of the Friedman test, when NP is fixed, the optimal setting of β is neither too large nor too small and lies within [0.3, 0.5]. Specifically, when NP is 100 or 200, the optimal β is 0.5; when NP is 50, the optimal β is 0.4; and when NP is 150, the optimal β is 0.3 or 0.4. (2) On closer observation, we find that no matter what the swarm size is, the performance of SCDLPSO first improves as β increases; after β reaches 0.5, the larger its setting is, the worse the performance of SCDLPSO becomes. (3) Taking all settings into comprehensive consideration, we find that the optimal β is not closely related to the swarm size NP.
To summarize, β is not closely related to the swarm size NP, and its optimal setting generally lies within [0.3, 0.5]. In this paper, we recommend setting β = 0.5 for SCDLPSO to solve optimization problems.

4.3. Comparison with State-of-the-Art PSO Variants

In this section, we conduct extensive comparison experiments on the CEC 2017 benchmark set with different dimension sizes to compare the proposed SCDLPSO with the seven state-of-the-art and representative PSO variants. Table 3, Table 4 and Table 5 show the detailed comparison results for the 30-D, 50-D and 100-D CEC 2017 benchmark problems, respectively. In these tables, the symbols "+", "−" and "=" above the p-values indicate that SCDLPSO is significantly superior, significantly inferior, or equivalent to the compared algorithm on the associated problem, respectively. In the second to last row of each table, "w/t/l" counts the numbers of problems where the devised SCDLPSO achieves significantly better, equivalent, or worse performance than the associated compared algorithm; these are the numbers of "+", "=" and "−", respectively. In the last row of each table, the average rank of each algorithm obtained by the Friedman test is presented. In addition, Table 6 summarizes the statistical comparison results between SCDLPSO and the seven state-of-the-art PSO variants on the CEC 2017 benchmark set with different dimensions in terms of "w/t/l".
As shown in Table 3, the comparison results on the 30-D CEC 2017 benchmark problems can be summarized as follows:
(1)
As shown in the last row of Table 3, the proposed SCDLPSO achieves the lowest rank in terms of the Friedman test, and its rank is much smaller than those of the other algorithms. This demonstrates that SCDLPSO achieves the best overall performance on the 30-D CEC 2017 benchmark set and its overall performance is much superior to the compared algorithms.
(2)
The second to last row of Table 3 shows that SCDLPSO performs much better than the seven compared algorithms from the perspective of the Wilcoxon rank sum test. Specifically, compared with TCSPSO, DNSPSO, AWPSO, CLPSO_LS, GLPSO, and CLPSO, SCDLPSO achieves significantly superior performance on at least 20 problems and shows inferiority on at most six problems. Compared with XPSO, the proposed SCDLPSO shows significant superiority on 17 problems and is worse on only four problems.
(3)
In terms of the comparison results on different types of optimization problems, on the two unimodal problems, SCDLPSO outperforms DNSPSO, AWPSO, and CLPSO, while it achieves competitive performance with XPSO, TCSPSO, CLPSO_LS, and GLPSO. On the seven simple multimodal problems, SCDLPSO presents significant superiority to the seven compared algorithms on at least five problems. As for the 10 hybrid problems, except for XPSO, SCDLPSO performs significantly better than the other compared PSO variants on at least seven problems. In comparison with XPSO, SCDLPSO shows great superiority on five problems and is defeated by XPSO on only one problem. In terms of the 10 composition problems, SCDLPSO is significantly better than AWPSO and TCSPSO on all of these problems. It achieves better performance than DNSPSO, CLPSO_LS, and GLPSO on eight, eight, and nine problems, respectively. In comparison with XPSO and CLPSO, SCDLPSO outperforms them on six and five problems respectively, and shows worse performance on only one and two problems, respectively.
Subsequently, from Table 4, we can draw the following conclusions in regard to the comparison results between SCDLPSO and the compared state-of-the-art PSO variants on the 50-D CEC 2017 benchmark problems:
(1)
According to the last row of Table 4, SCDLPSO still achieves the lowest rank among all algorithms. This demonstrates that SCDLPSO still obtains the best overall performance on the 50-D CEC 2017 problems.
(2)
From the perspective of the Wilcoxon rank sum test, as shown in the second to last row of Table 4, SCDLPSO achieves better performance than TCSPSO, DNSPSO, AWPSO, CLPSO_LS, and CLPSO on 20, 25, 28, 23 and 20 problems, respectively. In comparison with XPSO and GLPSO, SCDLPSO outperforms them on 17 and 19 problems, respectively.
(3)
As for the comparison results on different types of optimization problems, on the two unimodal problems, SCDLPSO is significantly superior to DNSPSO, AWPSO, and CLPSO on both problems, while it obtains very competitive performance with the other compared algorithms. As for the seven simple multimodal problems, SCDLPSO significantly outperforms the seven compared PSO variants on at least five problems. On the 10 hybrid problems, SCDLPSO achieves significantly better performance than DNSPSO and AWPSO on 9 and 10 problems, respectively. In competition with the other five compared PSO variants, SCDLPSO shows no inferiority to them on at least eight problems. Confronted with the 10 composition problems, SCDLPSO presents great superiority to the seven compared PSO variants on at least six problems, and displays inferiority to them on at most three problems.
At last, from Table 5, the following conclusions can be drawn from the comparison results between SCDLPSO and the compared state-of-the-art PSO variants on the 100-D CEC 2017 benchmark problems:
(1)
According to the averaged rank obtained from the Friedman test, SCDLPSO still ranks first among all algorithms. This verifies that SCDLPSO consistently achieves the best overall performance on the 100-D CEC 2017 benchmark set.
(2)
From the results of the Wilcoxon rank sum test, SCDLPSO shows significant dominance over the seven compared algorithms on at least 20 problems. In particular, compared with AWPSO, CLPSO_LS, and GLPSO, SCDLPSO significantly outperforms them on 26 problems and shows no inferiority on any of the 29 problems.
(3)
Regarding the comparison results for different types of optimization problems, on the two unimodal problems, SCDLPSO is significantly superior to AWPSO and GLPSO on both problems and achieves competitive performance with the other algorithms. On the seven simple multimodal problems, SCDLPSO outperforms all seven compared algorithms on at least six problems. When it comes to the 10 hybrid problems, SCDLPSO achieves significantly better performance than all compared PSO variants on at least six problems. In particular, SCDLPSO significantly beats TCSPSO, DNSPSO, AWPSO, and CLPSO_LS on at least nine problems, and shows no worse performance than them on all 10 problems. As for the 10 composition problems, SCDLPSO outperforms AWPSO and GLPSO on all of these problems, and wins the competition with TCSPSO and CLPSO_LS on nine of them. When compared with XPSO, DNSPSO, and CLPSO, SCDLPSO is superior on at least six problems and inferior on at most four problems.
Overall, as shown in Table 6, the proposed SCDLPSO consistently shows great superiority to the compared state-of-the-art and representative PSO variants on the CEC 2017 benchmark set with different dimension sizes. On the one hand, from a comprehensive perspective, the above comparative experiments validate that SCDLPSO scales well to optimization problems of different sizes. On the other hand, a deeper investigation of the comparison results for different types of optimization problems shows that SCDLPSO performs particularly well, relative to the compared algorithms, on complicated problems, such as the multimodal, hybrid and composition problems. This demonstrates the capability of SCDLPSO to solve complex optimization problems. This superiority mainly benefits from the proposed SCDL strategy, which affords particles high learning diversity and effectiveness. With this strategy, SCDLPSO balances search intensification and diversification well to explore and exploit the solution space and find high-quality solutions.

5. Conclusions

This paper has proposed a stochastic cognitive dominance leading particle swarm optimization (SCDLPSO) algorithm to tackle complicated optimization problems. Specifically, in this optimizer, a random triad topology is employed for each particle to communicate with two other particles randomly selected from the swarm. The historical cognitive experiences of the three particles then compete with each other. Only when at least one of the personal best positions of the two randomly selected particles dominates the personal best position of the particle to be updated is this particle updated, by learning from the two best cognitive experiences; otherwise, the particle is not updated and directly enters the next generation. In this way, high learning diversity is maintained thanks to the random interaction, and at the same time high learning effectiveness is preserved because each updated particle learns from the two best experiences. Therefore, the proposed SCDLPSO is expected to explore and exploit the solution space appropriately during the evolution.
Extensive comparative experiments were conducted on the CEC 2017 benchmark problem set with three different dimension sizes by comparing SCDLPSO with seven state-of-the-art and representative PSO variants. The experimental results demonstrate that SCDLPSO consistently achieves great superiority to the compared algorithms on most problems with the three dimension sizes. In particular, it was verified that SCDLPSO performs much better than the compared algorithms on complicated problems, such as multimodal problems, hybrid problems and composition problems. Therefore, SCDLPSO can be considered as a promising optimizer for optimization problems, especially the complicated ones.

Author Contributions

Q.Y.: Conceptualization, supervision, methodology, formal analysis, and writing—original draft preparation. L.H.: Implementation, formal analysis, and writing—original draft preparation. X.G.: Methodology, and writing—review and editing. D.X.: Writing—review and editing. Z.L.: Writing—review and editing, and funding acquisition. S.-W.J.: Writing—review and editing. J.Z.: Conceptualization and writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 62006124 and U20B2061, in part by the Natural Science Foundation of Jiangsu Province under Project BK20200811, in part by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China under Grant 20KJB520006, in part by the National Research Foundation of Korea (NRF-2021H1D3A2A01082705), and in part by the Startup Foundation for Introducing Talent of NUIST.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Lu, W.; Xu, X.; Huang, G.; Li, B.; Wu, Y.; Zhao, N.; Yu, F.R. Energy Efficiency Optimization in SWIPT Enabled WSNs for Smart Agriculture. IEEE Trans. Ind. Inform. 2021, 17, 4335–4344.
2. Zhou, J.; He, R.; Wang, Y.; Jiang, S.; Zhu, Z.; Hu, J.; Miao, J.; Luo, Q. Autonomous Driving Trajectory Optimization with Dual-Loop Iterative Anchoring Path Smoothing and Piecewise-Jerk Speed Optimization. IEEE Robot. Autom. Lett. 2021, 6, 439–446.
3. Zhang, L.; Zhang, Y.; Li, Y. Mobile Robot Path Planning Based on Improved Localized Particle Swarm Optimization. IEEE Sensors J. 2021, 21, 6962–6972.
4. Huang, L.; Ding, Y.; Zhou, M.; Jin, Y.; Hao, K. Multiple-Solution Optimization Strategy for Multirobot Task Allocation. IEEE Trans. Syst. Man Cybern. Syst. 2018, 50, 4283–4294.
5. Ghorpade, S.; Zennaro, M.; Chaudhari, B. Survey of Localization for Internet of Things Nodes: Approaches, Challenges and Open Issues. Futur. Internet 2021, 13, 210.
6. Zhan, Z.-H.; Shi, L.; Tan, K.C.; Zhang, J. A survey on evolutionary computation for complex continuous optimization. Artif. Intell. Rev. 2021, 55, 59–110.
7. Yang, Q.; Chen, W.-N.; Zhang, J. Probabilistic Multimodal Optimization. In Metaheuristics for Finding Multiple Solutions; Springer: Berlin/Heidelberg, Germany, 2021; pp. 191–228.
8. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295.
9. Yang, Q.; Li, Y.; Gao, X.-D.; Ma, Y.-Y.; Lu, Z.-Y.; Jeon, S.-W.; Zhang, J. An Adaptive Covariance Scaling Estimation of Distribution Algorithm. Mathematics 2021, 9, 3207.
10. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948.
11. Eberhart, R.; Kennedy, J. A New Optimizer Using Particle Swarm Theory. In Proceedings of the International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43.
12. Yang, Q.; Xie, H.-Y.; Chen, W.-N.; Zhang, J. Multiple parents guided differential evolution for large scale optimization. In Proceedings of the Congress on Evolutionary Computation, Vancouver, BC, Canada, 24–29 July 2016; pp. 3549–3556.
13. Yu, W.-J.; Ji, J.-Y.; Gong, Y.-J.; Yang, Q.; Zhang, J. A tri-objective differential evolution approach for multimodal optimization. Inf. Sci. 2018, 423, 1–23.
14. Zhigljavsky, A.; Žilinskas, A. Bi-objective Decisions and Partition-Based Methods in Bayesian Global Optimization. In Bayesian and High-Dimensional Global Optimization; Springer International Publishing: Cham, Switzerland, 2021; pp. 41–88.
15. Xue, Y.; Wang, Y.; Liang, J. A self-adaptive gradient descent search algorithm for fully-connected neural networks. Neurocomputing 2022, 478, 70–80.
16. Žilinskas, A.; Calvin, J. Bi-Objective Decision Making in Global Optimization Based on Statistical Models. J. Glob. Optim. 2019, 74, 599–609.
17. Pepelyshev, A.; Zhigljavsky, A.; Žilinskas, A. Performance of global random search algorithms for large dimensions. J. Glob. Optim. 2018, 71, 57–71.
18. Zelinka, I. A Survey on Evolutionary Algorithms Dynamics and its Complexity–Mutual Relations, Past, Present and Future. Swarm Evol. Comput. 2015, 25, 2–14.
19. Bonyadi, M.R. A Theoretical Guideline for Designing an Effective Adaptive Particle Swarm. IEEE Trans. Evol. Comput. 2020, 24, 57–68.
20. Mor, B.; Shabtay, D.; Yedidsion, L. Heuristic algorithms for solving a set of NP-hard single-machine scheduling problems with resource-dependent processing times. Comput. Ind. Eng. 2021, 153, 107024.
21. Anbuudayasankar, S.P.; Ganesh, K.; Mohapatra, S. Survey of Methodologies for TSP and VRP. In Models for Practical Routing Problems in Logistics: Design and Practices; Springer: Berlin/Heidelberg, Germany, 2014; pp. 11–42.
22. Tang, J.; Liu, G.; Pan, Q. A Review on Representative Swarm Intelligence Algorithms for Solving Optimization Problems: Applications and Trends. IEEE/CAA J. Autom. Sin. 2021, 8, 1627–1643.
23. Yang, Q.; Chen, W.-N.; Gu, T.; Jin, H.; Mao, W.; Zhang, J. An Adaptive Stochastic Dominant Learning Swarm Optimizer for High-Dimensional Optimization. IEEE Trans. Cybern. 2020, 1–17.
24. Ji, X.; Zhang, Y.; Gong, D.; Sun, X. Dual-Surrogate-Assisted Cooperative Particle Swarm Optimization for Expensive Multimodal Problems. IEEE Trans. Evol. Comput. 2021, 25, 794–808.
25. Yang, Q.; Chen, W.-N.; Da Deng, J.; Li, Y.; Gu, T.; Zhang, J. A Level-Based Learning Swarm Optimizer for Large-Scale Optimization. IEEE Trans. Evol. Comput. 2017, 22, 578–594.
26. Lan, R.; Zhu, Y.; Lu, H.; Liu, Z.; Luo, X. A Two-Phase Learning-Based Swarm Optimizer for Large-Scale Optimization. IEEE Trans. Cybern. 2020, 51, 6284–6293.
27. Qu, B.; Li, G.; Yan, L.; Liang, J.; Yue, C.; Yu, K.; Crisalle, O.D. A grid-guided particle swarm optimizer for multimodal multi-objective problems. Appl. Soft Comput. 2022, 117, 108381.
28. Wei, F.-F.; Chen, W.-N.; Yang, Q.; Deng, J.; Luo, X.-N.; Jin, H.; Zhang, J. A Classifier-Assisted Level-Based Learning Swarm Optimizer for Expensive Optimization. IEEE Trans. Evol. Comput. 2021, 25, 219–233.
29. Jang-Ho, S.; Chang-Hwan, I.; Chang-Geun, H.; Jae-Kwang, K.; Hyun-Kyo, J.; Cheol-Gyun, L. Multimodal Function Optimization Based on Particle Swarm Optimization. IEEE Trans. Magn. 2006, 42, 1095–1098.
30. Zou, J.; Deng, Q.; Zheng, J.; Yang, S. A close neighbor mobility method using particle swarm optimizer for solving multimodal optimization problems. Inf. Sci. 2020, 519, 332–347.
31. Yang, Q.; Chen, W.-N.; Yu, Z.; Gu, T.; Li, Y.; Zhang, H.; Zhang, J. Adaptive Multimodal Continuous Ant Colony Optimization. IEEE Trans. Evol. Comput. 2016, 21, 191–205.
32. Yang, Q.; Chen, W.-N.; Li, Y.; Chen, C.L.P.; Xu, X.-M.; Zhang, J. Multimodal Estimation of Distribution Algorithms. IEEE Trans. Cybern. 2017, 47, 636–650.
33. Tanabe, R.; Ishibuchi, H. A Review of Evolutionary Multimodal Multiobjective Optimization. IEEE Trans. Evol. Comput. 2020, 24, 193–200.
34. Molaei, S.; Moazen, H.; Najjar-Ghabel, S.; Farzinvash, L. Particle swarm optimization with an enhanced learning strategy and crossover operator. Knowl.-Based Syst. 2021, 215, 106768.
35. Lin, A.; Sun, W.; Yu, H.; Wu, G.; Tang, H. Adaptive comprehensive learning particle swarm optimization with cooperative archive. Appl. Soft Comput. 2019, 77, 533–546.
36. Yang, Q.; Chen, W.-N.; Gu, T.; Zhang, H.; Deng, J.D.; Li, Y.; Zhang, J. Segment-Based Predominant Learning Swarm Optimizer for Large-Scale Optimization. IEEE Trans. Cybern. 2016, 47, 2896–2910.
37. Yang, Q.; Chen, W.-N.; Gu, T.; Zhang, H.; Yuan, H.; Kwong, S.; Zhang, J. A Distributed Swarm Optimizer with Adaptive Communication for Large-Scale Optimization. IEEE Trans. Cybern. 2020, 50, 3393–3408.
38. Zhang, X.; Du, K.-J.; Zhan, Z.-H.; Kwong, S.; Gu, T.-L.; Zhang, J. Cooperative Coevolutionary Bare-Bones Particle Swarm Optimization with Function Independent Decomposition for Large-Scale Supply Chain Network Design with Uncertainties. IEEE Trans. Cybern. 2019, 50, 4454–4468.
39. Song, X.-F.; Zhang, Y.; Guo, Y.-N.; Sun, X.-Y.; Wang, Y.-L. Variable-Size Cooperative Coevolutionary Particle Swarm Optimization for Feature Selection on High-Dimensional Data. IEEE Trans. Evol. Comput. 2020, 24, 882–895.
40. Cao, Y.; Zhang, H.; Li, W.; Zhou, M.; Zhang, Y.; Chaovalitwongse, W.A. Comprehensive Learning Particle Swarm Optimization Algorithm with Local Search for Multimodal Functions. IEEE Trans. Evol. Comput. 2019, 23, 718–731.
41. Zhang, X.; Wang, X.; Kang, Q.; Cheng, J. Differential mutation and novel social learning particle swarm optimization algorithm. Inf. Sci. 2019, 480, 109–129.
42. Liang, X.; Li, W.; Liu, P.; Zhang, Y.; Agbo, A.A. Social Network-based Swarm Optimization algorithm. In Proceedings of the International Conference on Networking, Sensing and Control, Taipei, Taiwan, 9–11 April 2015; pp. 360–365.
43. Blackwell, T.; Kennedy, J. Impact of Communication Topology in Particle Swarm Optimization. IEEE Trans. Evol. Comput. 2019, 23, 689–702.
44. Kennedy, J.; Mendes, R. Population Structure and Particle Swarm Performance. In Proceedings of the IEEE Congress on Evolutionary Computation, Honolulu, HI, USA, 12–17 May 2002; pp. 1671–1676.
45. Lin, A.; Sun, W.; Yu, H.; Wu, G.; Tang, H. Global genetic learning particle swarm optimization with diversity enhancement by ring topology. Swarm Evol. Comput. 2019, 44, 571–583.
46. Kennedy, J. Small worlds and mega-minds: Effects of neighborhood topology on particle swarm performance. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; Volume 3, p. 1931.
47. Elsayed, S.M.; Sarker, R.A.; Essam, D.L. Memetic multi-topology particle swarm optimizer for constrained optimization. In Proceedings of the IEEE Congress on Evolutionary Computation, Brisbane, Australia, 10–15 June 2012; pp. 1–8.
48. Li, F.; Guo, J. Topology Optimization of Particle Swarm Optimization. In Proceedings of the Advances in Swarm Intelligence, Hefei, China, 17–20 October 2014; pp. 142–149.
49. Bonyadi, M.R.; Li, X.; Michalewicz, Z. A hybrid particle swarm with a time-adaptive topology for constrained optimization. Swarm Evol. Comput. 2014, 18, 22–37.
50. Xia, X.; Gui, L.; Yu, F.; Wu, H.; Wei, B.; Zhang, Y.-L.; Zhan, Z.-H. Triple Archives Particle Swarm Optimization. IEEE Trans. Cybern. 2019, 50, 4862–4875.
51. Zhan, Z.; Zhang, J.; Li, Y.; Shi, Y. Orthogonal Learning Particle Swarm Optimization. IEEE Trans. Evol. Comput. 2011, 15, 832–847.
52. Lynn, N.; Suganthan, P. Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation. Swarm Evol. Comput. 2015, 24, 11–24.
53. Gong, Y.-J.; Li, J.-J.; Zhou, Y.; Li, Y.; Chung, H.S.-H.; Shi, Y.-H.; Zhang, J. Genetic Learning Particle Swarm Optimization. IEEE Trans. Cybern. 2016, 46, 2277–2290.
54. Osaba, E.; Yang, X.-S. Applied Optimization and Swarm Intelligence: A Systematic Review and Prospect Opportunities. In Applied Optimization and Swarm Intelligence; Osaba, E., Yang, X.-S., Eds.; Springer: Singapore, 2021; pp. 1–23.
55. Puurtinen, M.; Heap, S.; Mappes, T. The joint emergence of group competition and within-group cooperation. Ethol. Sociobiol. 2015, 36, 211–217.
56. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-Parameter Optimization; Technical Report; National University of Defense Technology: Changsha, China; Kyungpook National University: Daegu, Korea; Nanyang Technological University: Singapore, 2017; pp. 1–16.
57. Liu, W.; Wang, Z.; Yuan, Y.; Zeng, N.; Hone, K.; Liu, X. A Novel Sigmoid-Function-Based Adaptive Weighted Particle Swarm Optimizer. IEEE Trans. Cybern. 2021, 51, 1085–1093.
58. Xie, H.-Y.; Yang, Q.; Hu, X.-M.; Chen, W.-N. Cross-generation Elites Guided Particle Swarm Optimization for large scale optimization. In Proceedings of the Symposium Series on Computational Intelligence, Athens, Greece, 6–9 December 2016; pp. 1–8.
59. Gong, Y.-J.; Zhang, J. Small-world Particle Swarm Optimization with Topology Adaptation. In Proceedings of the Annual Conference on Genetic and Evolutionary Computation, Amsterdam, The Netherlands, 6–10 July 2013; pp. 25–32.
60. Xu, G.; Zhao, X.; Wu, T.; Li, R.; Li, X. An Elitist Learning Particle Swarm Optimization with Scaling Mutation and Ring Topology. IEEE Access 2018, 6, 78453–78470.
61. Zeng, N.; Wang, Z.; Liu, W.; Zhang, H.; Hone, K.; Liu, X. A Dynamic Neighborhood-Based Switching Particle Swarm Optimization Algorithm. IEEE Trans. Cybern. 2020, 1–12.
62. Tanweer, M.; Suresh, S.; Sundararajan, N. Dynamic mentoring and self-regulation based particle swarm optimization algorithm for solving complex real-world optimization problems. Inf. Sci. 2016, 326, 1–24.
63. Shi, Y.; Liu, H.; Gao, L.; Zhang, G. Cellular particle swarm optimization. Inf. Sci. 2011, 181, 4460–4493.
64. Tao, X.; Guo, W.; Li, X.; He, Q.; Liu, R.; Zou, J. Fitness peak clustering based dynamic multi-swarm particle swarm optimization with enhanced learning strategy. Expert Syst. Appl. 2020, 116301.
65. Shen, Y.; Wei, L.; Zeng, C.; Chen, J. Particle Swarm Optimization with Double Learning Patterns. Comput. Intell. Neurosci. 2016, 2016, 1–19.
66. Lin, A.; Sun, W. Multi-Leader Comprehensive Learning Particle Swarm Optimization with Adaptive Mutation for Economic Load Dispatch Problems. Energies 2019, 12, 116.
67. Wang, Z.-J.; Zhan, Z.-H.; Kwong, S.; Jin, H.; Zhang, J. Adaptive Granularity Learning Distributed Particle Swarm Optimization for Large-Scale Optimization. IEEE Trans. Cybern. 2021, 51, 1175–1188.
68. Feng, Q.; Li, Q.; Wang, H.; Feng, Y.; Pan, Y. Two-Stage Adaptive Constrained Particle Swarm Optimization Based on Bi-Objective Method. IEEE Access 2020, 8, 150647–150664.
69. Wang, R.; Hao, K.; Chen, L.; Wang, T.; Jiang, C. A novel hybrid particle swarm optimization using adaptive strategy. Inf. Sci. 2021, 579, 231–250.
70. Song, G.-W.; Yang, Q.; Gao, X.-D.; Ma, Y.-Y.; Lu, Z.-Y.; Zhang, J. An Adaptive Level-Based Learning Swarm Optimizer for Large-Scale Optimization. In Proceedings of the International Conference on Systems, Man, and Cybernetics, Melbourne, Australia, 17–20 October 2021; pp. 152–159.
71. Zhan, Z.; Zhang, J.; Li, Y.; Chung, H.S. Adaptive Particle Swarm Optimization. IEEE Trans. Syst. Man Cybern. 2009, 39, 1362–1381.
  72. Tao, X.; Li, X.; Chen, W.; Liang, T.; Li, Y.; Guo, J.; Qi, L. Self-Adaptive two roles hybrid learning strategies-based particle swarm optimization. Inf. Sci. 2021, 578, 457–481. [Google Scholar] [CrossRef]
  73. Sun, W.; Lin, A.; Yu, H.; Liang, Q.; Wu, G. All-dimension neighborhood based particle swarm optimization with randomly selected neighbors. Inf. Sci. 2017, 405, 141–156. [Google Scholar] [CrossRef]
  74. Xia, X.; Gui, L.; He, G.; Wei, B.; Zhang, Y.; Yu, F.; Wu, H.; Zhan, Z.-H. An expanded particle swarm optimization based on multi-exemplar and forgetting ability. Inf. Sci. 2020, 508, 105–120. [Google Scholar] [CrossRef]
  75. Zhang, X.; Liu, H.; Zhang, T.; Wang, Q.; Wang, Y.; Tu, L. Terminal crossover and steering-based particle swarm optimization algorithm with disturbance. Appl. Soft Comput. 2019, 85, 105841. [Google Scholar] [CrossRef]
Table 1. Parameter settings of all comparative algorithms.
Algorithm | D | Parameter Settings
SCDLPSO | 30 | NP = 100, w = 0.9–0.4, β = 0.5
SCDLPSO | 50 | NP = 100
SCDLPSO | 100 | NP = 150
XPSO | 30 | NP = 100, η = 0.2, p = 0.5, Stagemax = 5
XPSO | 50 | NP = 150
XPSO | 100 | NP = 150
TCSPSO | 30 | NP = 50, w = 0.9–0.4, c1 = c2 = 2
TCSPSO | 50 | NP = 50
TCSPSO | 100 | NP = 50
DNSPSO | 30 | NP = 50, w = 0.9–0.4, k = 5, F = 0.5, CR = 0.9
DNSPSO | 50 | NP = 50
DNSPSO | 100 | NP = 50
AWPSO | 30 | NP = 40, w = 0.9–0.4
AWPSO | 50 | NP = 60
AWPSO | 100 | NP = 100
CLPSO_LS | 30 | NP = 40, c = 1.4945, w = 0.9–0.4, β = 1/3, θ = 0.94, Pc = 0.05–0.5
CLPSO_LS | 50 | NP = 50
CLPSO_LS | 100 | NP = 50
GLPSO | 30 | NP = 40, w = 0.7298, c1 = c2 = 1.49618, pm = 0.01, sg = 7
GLPSO | 50 | NP = 40
GLPSO | 100 | NP = 50
CLPSO | 30 | NP = 40, w = 0.9–0.2, c1 = c2 = 1.49445, Pc = 0.05–0.5
CLPSO | 50 | NP = 60
CLPSO | 100 | NP = 60
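For reference alongside the SCDLPSO parameters above (NP and the linearly decreasing inertia weight w = 0.9–0.4), the stochastic cognitive dominance leading update described in the abstract can be sketched as follows. This is a minimal illustration of the mechanism, not the authors' implementation; the function name `scdl_update`, the acceleration coefficient `c`, and the exact velocity rule are our own assumptions.

```python
import numpy as np

def scdl_update(positions, velocities, pbests, pbest_fits, w, c=1.49445, rng=None):
    """One generation of a stochastic cognitive dominance leading update
    (minimization assumed). For each particle, two other particles' personal
    best positions are drawn at random; the particle moves only if its own
    personal best is dominated by at least one of them, learning from the
    better of the two. The velocity formula here is illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    NP, D = positions.shape
    new_pos = positions.copy()
    new_vel = velocities.copy()
    for i in range(NP):
        # Randomly pick two personal best positions of other particles.
        r1, r2 = rng.choice([j for j in range(NP) if j != i], size=2, replace=False)
        # Update only if particle i's pbest is dominated by at least one of them.
        if pbest_fits[r1] < pbest_fits[i] or pbest_fits[r2] < pbest_fits[i]:
            better = r1 if pbest_fits[r1] < pbest_fits[r2] else r2
            new_vel[i] = (w * velocities[i]
                          + c * rng.random(D) * (pbests[better] - positions[i]))
            new_pos[i] = positions[i] + new_vel[i]
        # Otherwise particle i enters the next generation unchanged.
    return new_pos, new_vel
```

Note that a particle whose personal best already dominates both sampled positions is never perturbed, which is how the mechanism preserves promising search directions.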
Table 2. Comparison results for SCDLPSO with different settings of NP and β on the 50-D CEC 2017 problems.
F | NP = 50 (columns: β = 0.1, 0.2, …, 1.0) | NP = 100 (columns: β = 0.1, 0.2, …, 1.0)
F18.01 × 1042.21 × 1031.89 × 1035.02 × 1035.38 × 1038.37 × 1031.16 × 1041.36 × 1042.17 × 1042.24 × 1042.39 × 1037.77 × 1021.45 × 1032.32 × 1032.32 × 1036.33 × 1039.69 × 1031.40 × 1041.76 × 1042.69 × 104
F37.85 × 1042.94 × 1041.52 × 1048.64 × 1036.00 × 1036.07 × 1038.68 × 1031.81 × 1042.98 × 1045.09 × 1047.04 × 1043.65 × 1042.02 × 1041.46 × 1041.44 × 1041.77 × 1042.50 × 1042.89 × 1045.06 × 1048.59 × 104
F41.82 × 1021.19 × 1029.51 × 1011.23 × 1021.29 × 1021.55 × 1021.42 × 1021.59 × 1021.67 × 1021.83 × 1021.83 × 1021.38 × 1029.50 × 1011.02 × 1021.23 × 1021.34 × 1021.57 × 1021.73 × 1021.81 × 1021.97 × 102
F51.73 × 1027.94 × 1013.03 × 1012.11 × 1011.91 × 1011.99 × 1012.42 × 1011.70 × 1023.34 × 1023.62 × 1021.58 × 1026.59 × 1012.43 × 1011.22 × 1011.08 × 1011.08 × 1015.79 × 1013.15 × 1023.39 × 1023.63 × 102
F63.02 × 1014.17 × 1002.68 × 10−17.74 × 10−25.20 × 10−24.88 × 10−27.08 × 10−24.29 × 10−23.97 × 10−21.55 × 10−12.45 × 1012.80 × 1009.15 × 10−21.44 × 10−26.38 × 10−45.96 × 10−32.48 × 10−34.51 × 10−32.74 × 10−34.73 × 10−3
F72.77 × 1021.24 × 1028.19 × 1016.99 × 1016.75 × 1017.56 × 1011.61 × 1023.65 × 1023.83 × 1024.04 × 1022.18 × 1021.01 × 1027.33 × 1016.38 × 1016.22 × 1019.15 × 1013.32 × 1023.70 × 1023.84 × 1024.15 × 102
F81.81 × 1027.74 × 1012.97 × 1011.98 × 1011.80 × 1012.01 × 1012.31 × 1011.22 × 1023.36 × 1023.60 × 1021.53 × 1026.98 × 1012.36 × 1011.41 × 1011.01 × 1011.14 × 1014.48 × 1013.03 × 1023.34 × 1023.65 × 102
F94.48 × 1032.62 × 1022.40 × 1016.55 × 1001.13 × 1011.91 × 1012.36 × 1011.37 × 1011.55 × 1013.55 × 1012.80 × 1031.11 × 1027.89 × 1002.07 × 1001.57 × 1005.58 × 10−12.63 × 1003.91 × 1003.92 × 10−22.98 ×10−3
F105.78 × 1034.47 × 1034.25 × 1034.62 × 1037.65 × 1031.20 × 1041.26 × 1041.23 × 1041.20 × 1041.09 × 1045.88 × 1034.20 × 1034.04 × 1034.90 × 1039.87 × 1031.26 × 1041.29 × 1041.29 × 1041.22 × 1041.15 × 104
F112.46 × 1021.51 × 1021.56 × 1021.19 × 1028.87 × 1018.18 × 1016.36 × 1017.03 × 1011.07 × 1021.95 × 1022.13 × 1021.36 × 1021.29 × 1028.81 × 1016.62 × 1015.15 × 1013.88 × 1013.89 × 1011.52 × 1021.58 × 102
F122.26 × 1063.49 × 1052.15 × 1053.63 × 1052.35 × 1056.58 × 1074.53 × 1051.44 × 1066.68 × 1063.43 × 1061.76 × 1062.84 × 1051.99 × 1052.64 × 1052.77 × 1055.29 × 1058.67 × 1051.43 × 1062.55 × 1064.82 × 106
F131.09 × 1048.16 × 1033.16 × 1058.62 × 1031.18 × 1041.01 × 1062.47 × 1042.93 × 1041.46 × 1082.60 × 1078.27 × 1034.85 × 1034.00 × 1035.48 × 1035.90 × 1031.52 × 1042.12 × 1042.85 × 1043.04 × 1043.37 × 104
F144.15 × 1043.05 × 1042.00 × 1041.64 × 1043.75 × 1045.37 × 1047.84 × 1041.25 × 1051.57 × 1051.77 × 1053.73 × 1042.16 × 1041.74 × 1042.07 × 1043.49 × 1048.85 × 1049.64 × 1041.45 × 1052.44 × 1052.54 × 105
F156.47 × 1035.70 × 1035.31 × 1035.87 × 1036.65 × 1031.21 × 1041.91 × 1042.60 × 1042.96 × 1043.11 × 1046.80 × 1035.63 × 1035.96 × 1035.62 × 1036.48 × 1038.22 × 1031.96 × 1042.72 × 1043.04 × 1043.13 × 104
F161.44 × 1038.35 × 1025.62 × 1024.44 × 1024.76 × 1027.09 × 1027.08 × 1021.19 × 1032.04 × 1032.43 × 1031.23 × 1036.63 × 1024.99 × 1024.85 × 1024.29 × 1025.56 × 1028.06 × 1021.86 × 1032.37 × 1032.84 × 103
F171.35 × 1037.86 × 1024.89 × 1024.87 × 1024.29 × 1025.22 × 1026.97 × 1028.27 × 1021.48 × 1031.74 × 1031.28 × 1037.18 × 1024.95 × 1023.76 × 1023.70 × 1027.97 × 1029.33 × 1021.26 × 1031.57 × 1031.74 × 103
F182.51 × 1051.20 × 1057.21 × 1047.75 × 1041.16 × 1051.99 × 1053.47 × 1051.10 × 1062.01 × 1062.21 × 1061.45 × 1059.03 × 1046.69 × 1048.02 × 1041.30 × 1052.74 × 1058.92 × 1051.66 × 1063.43 × 1064.41 × 106
F191.48 × 1041.47 × 1041.81 × 1041.68 × 1041.82 × 1041.50 × 1041.96 × 1041.63 × 1041.26 × 1043.73 × 1031.48 × 1041.50 × 1041.42 × 1041.28 × 1041.43 × 1041.18 × 1041.20 × 1049.55 × 1034.58 × 1034.86 × 103
F207.22 × 1023.57 × 1022.50 × 1023.93 × 1024.80 × 1021.10 × 1031.35 × 1031.42 × 1031.45 × 1031.53 × 1036.39 × 1023.54 × 1022.37 × 1022.78 × 1027.04 × 1021.22 × 1031.35 × 1031.45 × 1031.49 × 1031.57 × 103
F213.61 × 1022.80 × 1022.41 × 1022.37 × 1022.31 × 1022.34 × 1022.34 × 1023.00 × 1025.36 × 1025.56 × 1023.16 × 1022.54 × 1022.29 × 1022.29 × 1022.19 × 1022.19 × 1022.23 × 1025.01 × 1025.35 × 1025.66 × 102
F224.74 × 1033.47 × 1033.57 × 1033.96 × 1035.50 × 1031.05 × 1041.24 × 1041.28 × 1041.31 × 1041.31 × 1044.94 × 1032.19 × 1033.46 × 1033.67 × 1036.52 × 1031.20 × 1041.27 × 1041.27 × 1041.30 × 1041.31 × 104
F237.41 × 1026.39 × 1025.95 × 1025.99 × 1026.16 × 1026.25 × 1026.26 × 1026.32 × 1026.73 × 1028.58 × 1026.30 × 1025.57 × 1025.30 × 1025.34 × 1025.27 × 1025.24 × 1025.29 × 1025.63 × 1027.92 × 1028.21 × 102
F247.55 × 1026.83 × 1026.76 × 1026.80 × 1026.93 × 1026.92 × 1027.11 × 1028.44 × 1029.41 × 1029.67 × 1026.68 × 1026.11 × 1025.97 × 1025.99 × 1025.98 × 1026.09 × 1026.26 × 1028.70 × 1028.82 × 1028.93 × 102
F256.10 × 1025.66 × 1025.61 × 1025.43 × 1024.97 × 1024.81 × 1024.87 × 1024.85 × 1024.84 × 1024.95 × 1025.95 × 1025.64 × 1025.45 × 1025.65 × 1025.08 × 1024.80 × 1024.80 × 1024.80 × 1024.80 × 1024.82 × 102
F264.77 × 1031.66 × 1031.35 × 1031.91 × 1032.26 × 1032.55 × 1032.90 × 1033.05 × 1033.62 × 1034.93 × 1033.70 × 1032.01 × 1031.15 × 1031.91 × 1031.89 × 1031.93 × 1032.21 × 1032.50 × 1034.53 × 1035.38 × 103
F279.41 × 1027.80 × 1027.27 × 1027.81 × 1027.46 × 1028.01 × 1028.58 × 1028.68 × 1028.82 × 1029.52 × 1028.55 × 1027.47 × 1027.15 × 1027.09 × 1026.79 × 1027.15 × 1027.57 × 1027.63 × 1027.92 × 1027.84 × 102
F285.73 × 1025.11 × 1025.01 × 1024.92 × 1024.89 × 1024.79 × 1027.88 × 1021.58 × 1034.26 × 1035.37 × 1035.52 × 1025.04 × 1025.06 × 1024.92 × 1024.74 × 1024.77 × 1024.73 × 1021.96 × 1033.46 × 1035.44 × 103
F291.96 × 1031.17 × 1037.35 × 1026.72 × 1026.62 × 1027.13 × 1027.31 × 1027.31 × 1021.27 × 1031.69 × 1031.81 × 1031.02 × 1036.16 × 1025.05 × 1025.16 × 1025.08 × 1025.42 × 1028.00 × 1021.47 × 1031.79 × 103
F301.31 × 1068.35 × 1059.29 × 1059.97 × 1059.75 × 1051.04 × 1061.21 × 1061.56 × 1061.90 × 1062.04 × 1061.18 × 1068.33 × 1058.12 × 1058.26 × 1058.25 × 1059.98 × 1051.28 × 1061.63 × 1061.56 × 1061.67 × 106
Rank | 7.41 | 4.45 | 3.55 | 3.14 | 3.21 | 4.48 | 5.59 | 6.52 | 7.69 | 8.97 | 7.24 | 5.00 | 3.31 | 3.28 | 2.93 | 4.17 | 5.38 | 7.03 | 7.86 | 8.79
F | NP = 150 (columns: β = 0.1, 0.2, …, 1.0) | NP = 200 (columns: β = 0.1, 0.2, …, 1.0)
F11.34 × 1031.54 × 1031.53 × 1031.99 × 1032.44 × 1033.06 × 1037.89 × 1031.24 × 1041.62 × 1042.48 × 1041.35 × 1039.97 × 1021.59 × 1031.89 × 1032.53 × 1035.30 × 1039.45 × 1031.24 × 1042.08 × 1042.58 × 104
F37.10 × 1043.90 × 1043.12 × 1042.62 × 1042.76 × 1043.32 × 1043.65 × 1045.18 × 1048.11 × 1041.14 × 1037.03 × 1044.69 × 1043.78 × 1043.60 × 1044.20 × 1044.30 × 1045.03 × 1046.90 × 1049.84 × 1041.38 × 103
F41.78 × 1021.11 × 1021.14 × 1021.11 × 1021.24 × 1021.47 × 1021.85 × 1021.86 × 1022.00 × 1022.05 × 1021.73 × 1021.49 × 1021.36 × 1021.08 × 1021.06 × 1021.70 × 1021.89 × 1021.94 × 1022.00 × 1022.11 × 102
F51.44 × 1026.00 × 1012.26 × 1011.08 × 1017.73 × 1007.36 × 1009.43 × 1013.15 × 1023.41 × 1023.66 × 1021.44 × 1026.01 × 1012.27 × 1019.19 × 1005.94 × 1005.31 × 1001.26 × 1023.22 × 1023.42 × 1023.72 × 102
F62.20 × 1012.39 × 1001.22 × 10−11.04 × 10−22.17 × 10−32.13 × 10−46.13 × 10−55.61 × 10−33.41 × 10−61.46 × 10−32.17 × 1012.45 × 1001.13 × 10−11.00 × 10−29.46 × 10−41.30 × 10−33.32 × 10−53.78 × 10−43.00 × 10−61.43 × 10−2
F72.05 × 1029.68 × 1017.03 × 1016.22 × 1016.86 × 1011.72 × 1023.56 × 1023.66 × 1023.88 × 1024.13 × 1022.07 × 1029.52 × 1017.01 × 1016.20 × 1017.91 × 1012.30 × 1023.50 × 1023.70 × 1023.87 × 1024.23 × 102
F81.43 × 1025.75 × 1012.19 × 1011.00 × 1018.22 × 1007.50 × 1008.07 × 1013.13 × 1023.38 × 1023.69 × 1021.36 × 1025.50 × 1012.14 × 1019.39 × 1005.44 × 1005.27 × 1007.43 × 1013.17 × 1023.39 × 1023.71 × 102
F92.32 × 1039.69 × 1017.67 × 1001.53 × 1001.58 × 1003.28 × 10−11.18 × 10−11.03 × 10−15.97 × 10−31.81 × 10−21.96 × 1031.24 × 1026.87 × 1002.01 × 1007.63 × 10−13.05 × 10−11.02 × 10−17.23 × 10−21.73×10−103.85 × 10−3
F105.76 × 1034.79 × 1034.77 × 1034.94 × 1031.10 × 1041.25 × 1041.25 × 1041.29 × 1041.21 × 1041.21 × 1046.06 × 1034.73 × 1034.28 × 1034.76 × 1031.12 × 1041.27 × 1041.28 × 1041.29 × 1041.29 × 1041.19 × 104
F111.86 × 1021.25 × 1021.23 × 1021.00 × 1026.69 × 1015.12 × 1014.23 × 1014.64 × 1011.43 × 1021.79 × 1021.74 × 1021.20 × 1021.15 × 1029.74 × 1017.41 × 1015.93 × 1014.14 × 1015.23 × 1011.52 × 1021.78 × 102
F121.33 × 1063.23 × 1052.47 × 1052.47 × 1053.06 × 1056.99 × 1051.40 × 1062.33 × 1063.90 × 1067.79 × 1061.70 × 1064.03 × 1052.56 × 1053.24 × 1054.74 × 1057.62 × 1051.44 × 1062.12 × 1063.75 × 1068.57 × 106
F137.45 × 1034.81 × 1033.51 × 1033.87 × 1037.54 × 1031.12 × 1041.63 × 1042.94 × 1043.22 × 1043.16 × 1047.92 × 1034.21 × 1033.44 × 1032.10 × 1033.24 × 1031.14 × 1041.59 × 1042.77 × 1043.06 × 1043.30 × 104
F143.24 × 1042.16 × 1041.96 × 1041.95 × 1043.79 × 1046.97 × 1041.10 × 1051.85 × 1052.21 × 1053.14 × 1052.69 × 1042.26 × 1042.31 × 1042.54 × 1043.01 × 1041.05 × 1051.44 × 1052.20 × 1052.38 × 1053.00 × 105
F156.25 × 1035.30 × 1034.91 × 1035.64 × 1034.94 × 1037.13 × 1031.32 × 1042.59 × 1043.10 × 1043.13 × 1046.34 × 1035.72 × 1035.92 × 1034.96 × 1035.17 × 1035.59 × 1031.42 × 1042.56 × 1043.10 × 1043.12 × 104
F161.19 × 1037.21 × 1025.07 × 1024.95 × 1024.19 × 1024.90 × 1021.32 × 1032.26 × 1032.63 × 1032.88 × 1031.09 × 1037.40 × 1025.24 × 1025.39 × 1025.39 × 1027.37 × 1021.45 × 1032.21 × 1032.65 × 1032.88 × 103
F171.19 × 1037.65 × 1025.64 × 1024.37 × 1023.94 × 1026.73 × 1021.13 × 1031.42 × 1031.52 × 1031.71 × 1031.27 × 1037.66 × 1024.54 × 1023.86 × 1023.22 × 1028.72 × 1021.12 × 1031.43 × 1031.56 × 1031.76 × 103
F181.59 × 1057.85 × 1045.56 × 1048.38 × 1041.18 × 1054.23 × 1051.27 × 1062.43 × 1063.90 × 1065.93 × 1062.21 × 1057.01 × 1046.44 × 1047.45 × 1041.83 × 1055.85 × 1051.27 × 1063.08 × 1065.30 × 1064.23 × 106
F191.51 × 1041.51 × 1041.42 × 1041.50 × 1041.54 × 1041.34 × 1047.49 × 1037.54 × 1034.91 × 1032.46 × 1031.50 × 1041.48 × 1041.42 × 1041.45 × 1041.29 × 1041.18 × 1041.03 × 1047.09 × 1035.41 × 1033.74 × 103
F206.32 × 1023.88 × 1022.29 × 1023.94 × 1029.29 × 1021.24 × 1031.31 × 1031.38 × 1031.51 × 1031.56 × 1036.62 × 1024.50 × 1022.75 × 1025.11 × 1029.92 × 1021.24 × 1031.35 × 1031.41 × 1031.53 × 1031.53 × 103
F212.95 × 1022.44 × 1022.23 × 1022.17 × 1022.13 × 1022.14 × 1022.36 × 1025.14 × 1025.38 × 1025.68 × 1022.94 × 1022.41 × 1022.20 × 1022.14 × 1022.14 × 1022.12 × 1022.98 × 1025.10 × 1025.43 × 1025.73 × 102
F223.14 × 1031.42 × 1033.18 × 1034.06 × 1037.58 × 1031.19 × 1041.29 × 1041.29 × 1041.31 × 1041.32 × 1042.59 × 1039.54 × 1023.28 × 1035.62 × 1039.13 × 1031.25 × 1041.28 × 1041.29 × 1041.32 × 1041.31 × 104
F236.06 × 1025.29 × 1025.11 × 1025.06 × 1025.00 × 1025.06 × 1025.01 × 1026.23 × 1027.89 × 1028.17 × 1025.77 × 1025.19 × 1025.05 × 1025.05 × 1024.96 × 1024.92 × 1025.16 × 1027.62 × 1027.85 × 1028.13 × 102
F246.20 × 1025.79 × 1025.78 × 1025.84 × 1025.83 × 1025.86 × 1027.15 × 1028.53 × 1028.65 × 1028.80 × 1026.02 × 1025.78 × 1025.65 × 1025.75 × 1025.72 × 1025.77 × 1027.64 × 1028.55 × 1028.65 × 1028.70 × 102
F255.97 × 1025.71 × 1025.66 × 1025.48 × 1024.96 × 1024.80 × 1024.80 × 1024.80 × 1024.80 × 1025.05 × 1025.93 × 1025.73 × 1025.66 × 1025.40 × 1025.12 × 1024.80 × 1024.80 × 1024.80 × 1024.81 × 1025.20 × 102
F263.05 × 1031.59 × 1031.29 × 1031.57 × 1031.67 × 1031.89 × 1032.02 × 1033.08 × 1034.92 × 1035.14 × 1033.27 × 1032.48 × 1031.36 × 1031.59 × 1031.70 × 1031.79 × 1031.93 × 1033.23 × 1034.87 × 1035.12 × 103
F278.33 × 1027.36 × 1026.80 × 1026.63 × 1026.76 × 1027.16 × 1027.22 × 1027.26 × 1027.36 × 1027.69 × 1028.45 × 1027.15 × 1026.68 × 1026.64 × 1026.61 × 1027.07 × 1027.17 × 1027.24 × 1027.61 × 1027.14 × 102
F285.41 × 1025.07 × 1024.97 × 1024.91 × 1024.73 × 1024.71 × 1024.75 × 1027.48 × 1023.35 × 1035.10 × 1035.47 × 1025.07 × 1025.06 × 1025.06 × 1024.85 × 1024.70 × 1024.77 × 1027.11 × 1023.78 × 1035.14 × 103
F291.77 × 1031.09 × 1036.71 × 1025.90 × 1024.36 × 1024.51 × 1024.74 × 1021.06 × 1031.44 × 1031.88 × 1031.74 × 1031.10 × 1036.93 × 1025.79 × 1024.37 × 1024.61 × 1024.93 × 1021.06 × 1031.68 × 1031.99 × 103
F301.14 × 1068.25 × 1057.99 × 1058.63 × 1058.17 × 1059.02 × 1051.39 × 1061.51 × 1061.54 × 1061.55 × 1061.21 × 1068.06 × 1058.14 × 1058.35 × 1058.25 × 1058.55 × 1051.21 × 1061.52 × 1061.56 × 1061.51 × 106
Rank | 6.52 | 4.72 | 3.38 | 3.38 | 3.45 | 4.21 | 5.48 | 7.14 | 7.86 | 8.86 | 6.66 | 4.72 | 3.41 | 3.34 | 3.28 | 4.07 | 5.66 | 7.03 | 8.10 | 8.72
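The Rank rows above give each (NP, β) configuration's rank averaged over all test functions (lower is better). Such averages can be computed as in the following sketch; the function name and variables are ours, and ties are not rank-averaged in this simplified version.

```python
import numpy as np

def average_ranks(errors):
    """errors: (n_functions, n_configs) array of mean errors, lower is better.
    Returns each configuration's rank averaged over all functions, as in the
    'Rank' rows of Table 2. Ties receive distinct (not averaged) ranks here."""
    n_funcs, _ = errors.shape
    ranks = np.empty_like(errors, dtype=float)
    for f in range(n_funcs):
        # argsort of argsort yields each config's 0-based rank on function f.
        ranks[f] = np.argsort(np.argsort(errors[f])) + 1.0
    return ranks.mean(axis=0)
```

For fully tie-aware ranking (averaged ranks for equal errors, as in a Friedman test), `scipy.stats.rankdata` could be substituted for the argsort-of-argsort step.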
Table 3. Comparison results between SCDLPSO and state-of-the-art PSO variants on the 30-D CEC 2017 benchmark functions.
F | Category | Quality | SCDLPSO | XPSO | TCSPSO | DNSPSO | AWPSO | CLPSO_LS | GLPSO | CLPSO
F1 (Unimodal Functions)
Median2.28 × 1032.90 × 1033.20 × 1031.80 × 1051.55 × 10101.42 × 1041.32 × 1031.79 × 104
Mean3.11 × 1034.27 × 1033.66 × 1032.35 × 1051.63 × 10101.67 × 1041.90 × 1031.82 × 104
Std3.15 × 1034.47 × 1034.08 × 1032.18 × 1055.52 × 1097.78 × 1032.23 × 1035.65 × 103
p-value-3.76 × 10−1 =5.65 × 10−1 =3.88 × 10−7 +7.28 × 10−23 +4.06 × 10−12 +9.86 × 10−2 =3.15 × 10−18 +
F3Median2.66 × 1025.86 × 10−29.94 × 1031.55 × 1055.25 × 1046.80 × 10−111.14 × 10−136.68 × 103
Mean4.82 × 1025.45 × 10−11.15 × 1041.56 × 1055.20 × 1044.28 × 1032.58 × 10−136.20 × 103
Std5.86 × 1021.57 × 1003.68 × 1032.67 × 1043.62 × 1041.61 × 1046.22 × 10−132.42 × 103
p-value-4.39 × 10−5 −9.52 × 10−23 +4.84 × 10−38 +2.29 × 10−10 +2.10 × 10−1 =4.28 × 10−5 −6.12 × 10−18 +
F1,3 | w/t/l | - | 0/1/1 | 1/1/0 | 2/0/0 | 2/0/0 | 1/1/0 | 0/1/1 | 2/0/0
F4 (Simple Multimodal Functions)
Median8.33 × 1011.22 × 1021.30 × 1022.56 × 1011.40 × 1038.90 × 1011.52 × 1026.17 × 104
Mean7.66 × 1011.17 × 1021.32 × 1022.54 × 1011.70 × 1038.90 × 1011.60 × 1025.87 × 104
Std1.11 × 1012.67 × 1014.84 × 1018.61 × 10−11.36 × 1034.06 × 10−16.32 × 1011.06 × 104
p-value-8.13 × 10−11 +1.25 × 10−7 +1.71 × 10−32 −2.38 × 10−8 +1.18 × 10−7 +2.68 × 10−9 +9.78 × 10−37 +
F5Median4.97 × 1004.18 × 1018.56 × 1011.96 × 1021.89 × 1022.17 × 1025.67 × 1019.58 × 101
Mean5.14 × 1004.34 × 1018.92 × 1011.98 × 1021.84 × 1022.18 × 1025.79 × 1019.57 × 101
Std1.84 × 1001.41 × 1012.54 × 1011.31 × 1013.22 × 1011.21 × 1011.37 × 1012.26 × 100
p-value-6.05 × 10−19 +3.81 × 10−25 +1.29 × 10−60 +7.43 × 10−37 +6.04 × 10−65 +2.42 × 10−28 +1.33 × 10−79 +
F6Median1.11 × 10−62.02 × 10−38.00 × 10−11.48 × 10−12.48 × 1014.32 × 10−16.21 × 10−38.08 × 101
Mean8.31 × 10−62.08 × 10−21.04 × 1001.48 × 10−12.72 × 1019.35 × 10−11.36 × 10−28.05 × 101
Std1.41 × 10−56.31 × 10−21.15 × 1004.10 × 10−21.04 × 1011.04 × 1001.60 × 10−21.17 × 101
p-value-3.84 × 10−2 +8.59 × 10−6 +3.75 × 10−27 +2.83 × 10−20 +1.07 × 10−5 +2.45 × 10−5 +3.68 × 10−42 +
F7Median3.48 × 1017.70 × 1011.45 × 1022.34 × 1023.24 × 1022.36 × 1021.06 × 1021.43 × 10−3
Mean3.94 × 1018.06 × 1011.42 × 1022.32 × 1023.09 × 1022.33 × 1021.07 × 1021.43 × 10−3
Std2.05 × 1011.80 × 1012.87 × 1011.32 × 1011.25 × 1021.89 × 1012.08 × 1014.04 × 10−4
p-value-1.82 × 10−11 +1.68 × 10−22 +2.15 × 10−45 +1.43 × 10−16 +2.80 × 10−42 +3.76 × 10−18 +8.33 × 10−15
F8Median3.98 × 1004.18 × 1019.55 × 1012.04 × 1021.81 × 1022.25 × 1026.47 × 1011.03 × 102
Mean4.58 × 1004.37 × 1019.33 × 1012.03 × 1021.76 × 1022.22 × 1026.42 × 1011.03 × 102
Std1.33 × 1001.60 × 1012.22 × 1011.25 × 1013.48 × 1011.15 × 1011.70 × 1019.06 × 100
p-value-6.64 × 10−20 +2.49 × 10−29 +1.18 × 10−62 +5.20 × 10−34 +5.47 × 10−67 +2.20 × 10−26 +5.39 × 10−53 +
F9Median1.14 × 10−131.72 × 1003.01 × 1021.61 × 1004.44 × 1031.90 × 1015.56 × 1019.06 × 101
Mean2.11 × 10−23.10 × 1003.85 × 1022.32 × 1004.40 × 1032.29 × 1016.39 × 1019.10 × 101
Std8.34 × 10−24.41 × 1003.36 × 1022.80 × 1001.72 × 1032.74 × 1013.28 × 1019.88 × 100
p-value-8.30 × 10−5 +6.95 × 10−8 +4.20 × 10−5 +5.54 × 10−20 +3.35 × 10−5 +5.12 × 10−15 +3.52 × 10−49 +
F10Median6.32 × 1032.70 × 1032.98 × 1035.38 × 1033.90 × 1036.42 × 1033.22 × 1039.22 × 102
Mean5.97 × 1032.61 × 1032.97 × 1035.24 × 1034.00 × 1036.26 × 1033.46 × 1039.44 × 102
Std1.35 × 1036.38 × 1024.22 × 1021.01 × 1035.96 × 1026.36 × 1028.35 × 1022.89 × 102
p-value-1.25 × 10−17 −1.66 × 10−16 −2.28 × 10−2 −1.44 × 10−9 −2.96 × 10−1 =8.04 × 10−12 −2.56 × 10−27 −
F4-10 | w/t/l | - | 6/0/1 | 6/0/1 | 5/0/2 | 6/0/1 | 6/1/0 | 6/0/1 | 5/0/2
F11 (Hybrid Functions)
Median1.28 × 1018.06 × 1011.16 × 1028.95 × 1011.34 × 1031.82 × 1021.01 × 1023.15 × 103
Mean3.41 × 1018.65 × 1011.18 × 1028.77 × 1013.57 × 1031.81 × 1028.71 × 1013.11 × 103
Std2.95 × 1014.45 × 1014.27 × 1018.80 × 1004.85 × 1034.03 × 1013.50 × 1013.20 × 102
p-value-3.37 × 10−6 +4.21 × 10−12 +3.33 × 10−13 +2.35 × 10−4 +9.97 × 10−23 +5.66 × 10−8 +3.96 × 10−50 +
F12Median2.68 × 1042.48 × 1041.86 × 1055.51 × 1071.04 × 1094.68 × 1051.08 × 1051.42 × 102
Mean2.72 × 1041.36 × 1055.12 × 1056.03 × 1071.36 × 1099.27 × 1051.85 × 1061.48 × 102
Std1.47 × 1044.38 × 1056.68 × 1052.65 × 1071.06 × 1098.77 × 1053.11 × 1062.23 × 101
p-value-1.15 × 10−1 =2.43 × 10−4 +1.08 × 10−17 +4.07 × 10−9 +8.19 × 10−7 +2.56 × 10−3 +4.27 × 10−14 −
F13Median8.46 × 1031.05 × 1048.24 × 1031.28 × 1065.08 × 1066.55 × 1031.78 × 1042.47 × 106
Mean1.80 × 1041.29 × 1043.28 × 1051.37 × 1065.11 × 1081.74 × 1041.10 × 1052.83 × 106
Std1.79 × 1041.22 × 1041.14 × 1065.04 × 1059.07 × 1082.18 × 1044.13 × 1051.19 × 106
p-value-3.66 × 10−2 −1.49 × 10−1 =7.39 × 10−21 +3.59 × 10−3 +9.11 × 10−1 =2.37 × 10−1 =2.38 × 10−18 +
F14Median1.92 × 1034.80 × 1033.49 × 1041.81 × 1028.16 × 1041.10 × 1051.02 × 1031.36 × 104
Mean3.19 × 1036.46 × 1035.20 × 1041.86 × 1025.84 × 1051.06 × 1059.06 × 1041.32 × 104
Std3.20 × 1035.50 × 1037.90 × 1042.27 × 1011.71 × 1065.32 × 1041.62 × 1054.83 × 103
p-value-3.55 × 10−2 +1.53 × 10−3 +4.49 × 10−6 −7.26 × 10−2 +6.23 × 10−15 +5.30 × 10−3 +4.12 × 10−13 +
F15Median6.60 × 1022.13 × 1031.08 × 1043.58 × 1041.55 × 1054.13 × 1041.92 × 1034.39 × 104
Mean1.70 × 1034.82 × 1031.33 × 1043.95 × 1043.95 × 1073.60 × 1046.38 × 1034.41 × 104
Std2.38 × 1036.33 × 1031.04 × 1042.21 × 1042.12 × 1088.63 × 1038.32 × 1033.33 × 104
p-value-6.02 × 10−2 =2.81 × 10−7 +7.51 × 10−13 +3.20 × 10−1 =2.29 × 10−28 +5.06 × 10−3 +5.60 × 10−9 +
F16Median2.21 × 1015.70 × 1028.65 × 1021.90 × 1031.38 × 1031.29 × 1037.00 × 1027.82 × 102
Mean9.21 × 1015.32 × 1028.54 × 1021.89 × 1031.44 × 1031.14 × 1037.01 × 1028.34 × 102
Std1.21 × 1022.31 × 1022.56 × 1021.70 × 1023.66 × 1024.10 × 1022.74 × 1023.73 × 102
p-value-1.04 × 10−14 +6.95 × 10−21 +1.43 × 10−47 +2.11 × 10−26 +3.91 × 10−19 +9.54 × 10−16 +1.62 × 10−14 +
F17Median5.11 × 1011.56 × 1023.18 × 1028.59 × 1026.29 × 1026.85 × 1021.82 × 1026.32 × 102
Mean5.80 × 1011.47 × 1022.96 × 1028.58 × 1026.84 × 1029.36 × 1022.35 × 1026.20 × 102
Std2.44 × 1019.61 × 1011.44 × 1029.65 × 1013.22 × 1026.39 × 1021.54 × 1021.42 × 102
p-value-6.92 × 10−7 +3.09 × 10−12 +7.78 × 10−46 +6.20 × 10−15 +6.52 × 10−10 +8.63 × 10−8 +8.03 × 10−29 +
F18Median8.54 × 1049.98 × 1041.41 × 1052.40 × 1056.70 × 1056.81 × 1051.53 × 1042.00 × 102
Mean1.16 × 1051.45 × 1052.78 × 1052.45 × 1054.11 × 1062.44 × 1066.82 × 1041.91 × 102
Std9.74 × 1041.15 × 1052.93 × 1059.96 × 1041.18 × 1074.12 × 1061.99 × 1057.60 × 101
p-value-3.26 × 10−1 =6.61 × 10−3 +5.90 × 10−6 +7.34 × 10−2 =3.64 × 10−3 +2.47 × 10−1 =2.69 × 10−8 −
F19 (Hybrid Functions)
Median1.72 × 1032.28 × 1037.66 × 1031.75 × 1031.80 × 1073.51 × 1046.16 × 1032.00 × 105
Mean3.58 × 1034.05 × 1031.49 × 1042.29 × 1037.38 × 1073.51 × 1041.25 × 1042.34 × 105
Std3.83 × 1034.55 × 1031.58 × 1041.43 × 1032.46 × 1081.94 × 1041.41 × 1041.14 × 105
p-value-3.98 × 10−1 =3.83 × 10−4 +9.22 × 10−2 =1.12 × 10−1 =6.65 × 10−12 +1.81 × 10−3 +1.32 × 10−15 +
F20Median4.23 × 1011.73 × 1023.84 × 1024.06 × 1025.19 × 1025.98 × 1022.77 × 1021.94 × 102
Mean6.38 × 1011.85 × 1023.70 × 1024.07 × 1025.38 × 1025.87 × 1022.72 × 1022.30 × 102
Std7.20 × 1017.16 × 1011.39 × 1021.05 × 1021.97 × 1021.55 × 1021.19 × 1021.71 × 102
p-value-1.55 × 10−8 +4.21 × 10−15 +5.22 × 10−21 +1.29 × 10−17 +1.39 × 10−23 +5.06 × 10−11 +9.95 × 10−6 +
F11-20 | w/t/l | - | 5/4/1 | 9/1/0 | 8/1/1 | 7/3/0 | 9/1/0 | 8/2/0 | 8/0/2
F21 (Composition Functions)
Median2.09 × 1022.38 × 1022.80 × 1023.98 × 1023.74 × 1024.00 × 1022.62 × 1022.02 × 102
Mean2.09 × 1022.41 × 1022.84 × 1023.97 × 1023.80 × 1024.02 × 1022.66 × 1022.09 × 102
Std2.82 × 1001.22 × 1012.25 × 1011.43 × 1014.55 × 1017.78 × 1002.17 × 1016.88 × 101
p-value-1.06 × 10−20 +3.60 × 10−25 +1.70 × 10−57 +6.97 × 10−28 +2.69 × 10−72 +2.78 × 10−20 +9.99 × 10−1 =
F22Median1.00 × 1021.00 × 1021.04 × 1026.64 × 1034.16 × 1036.99 × 1031.02 × 1022.89 × 102
Mean2.73 × 1022.95 × 1021.70 × 1036.52 × 1034.04 × 1036.95 × 1032.06 × 1022.78 × 102
Std9.33 × 1027.77 × 1021.75 × 1035.90 × 1021.03 × 1033.10 × 1025.56 × 1024.38 × 101
p-value-5.37 × 10−1 =2.63 × 10−4 +2.23 × 10−37 +4.31 × 10−21 +9.29 × 10−42 +7.40 × 10−1 =9.80 × 10−1 =
F23Median3.92 × 1023.97 × 1024.44 × 1025.68 × 1026.64 × 1025.61 × 1024.26 × 1023.05 × 102
Mean3.91 × 1023.96 × 1024.46 × 1025.72 × 1026.80 × 1025.57 × 1024.33 × 1025.22 × 102
Std9.71 × 1001.38 × 1012.85 × 1012.29 × 1019.92 × 1011.39 × 1012.73 × 1017.83 × 102
p-value-1.68 × 10−2 +4.26 × 10−14 +2.28 × 10−43 +1.92 × 10−22 +9.96 × 10−51 +1.29 × 10−10 +3.70 × 10−1 =
F24Median4.66 × 1024.63 × 1025.37 × 1026.74 × 1027.32 × 1026.26 × 1024.94 × 1024.51 × 102
Mean4.70 × 1024.68 × 1025.38 × 1026.97 × 1027.51 × 1026.20 × 1025.14 × 1024.51 × 102
Std1.60 × 1012.19 × 1015.08 × 1017.02 × 1018.86 × 1011.02 × 1015.44 × 1019.85 × 100
p-value-5.75 × 10−1 =5.54 × 10−9 +3.73 × 10−24 +5.97 × 10−24 +2.12 × 10−45 +9.84 × 10−5 +7.08 × 10−7 −
F25Median3.88 × 1023.91 × 1024.14 × 1023.78 × 1028.68 × 1023.88 × 1024.10 × 1025.60 × 102
Mean3.88 × 1023.95 × 1024.12 × 1023.78 × 1021.13 × 1033.88 × 1024.11 × 1025.62 × 102
Std5.00 × 10−11.00 × 1011.56 × 1011.12 × 1007.54 × 1024.27 × 10−12.15 × 1011.54 × 101
p-value-7.63 × 10−6 +6.91 × 10−12 +1.20 × 10−44 −1.64 × 10−6 +3.94 × 10−3 =2.73 × 10−7 +2.56 × 10−54 +
F26Median1.33 × 1033.00 × 1022.32 × 1033.28 × 1034.30 × 1033.16 × 1031.94 × 1033.90 × 102
Mean1.33 × 1037.57 × 1022.23 × 1033.27 × 1034.20 × 1033.14 × 1031.92 × 1033.90 × 102
Std1.15 × 1026.08 × 1026.97 × 1022.16 × 1029.58 × 1021.08 × 1024.77 × 1029.49 × 10−1
p-value-7.35 × 10−7 −5.77 × 10−9 +1.68 × 10−45 +5.88 × 10−23 +1.31 × 10−54 +3.08 × 10−8 +2.65 × 10−46 −
F27Median5.14 × 1025.36 × 1025.61 × 1025.00 × 1026.85 × 1025.18 × 1025.48 × 1021.99 × 103
Mean5.15 × 1025.36 × 1025.61 × 1025.00 × 1026.88 × 1025.22 × 1025.50 × 1021.88 × 103
Std9.77 × 1001.18 × 1011.95 × 1010.00 × 1008.94 × 1011.52 × 1011.28 × 1012.71 × 102
p-value-7.65 × 10−10 +3.60 × 10−16 +1.08 × 10−11 −9.24 × 10−15 +6.01 × 10−2 =6.66 × 10−17 +1.09 × 10−34 +
F28Median4.08 × 1024.03 × 1024.40 × 1025.00 × 1021.52 × 1033.50 × 1034.72 × 1025.13 × 102
Mean4.15 × 1023.78 × 1024.51 × 1025.00 × 1021.98 × 1033.07 × 1034.51 × 1025.13 × 102
Std3.61 × 1016.75 × 1015.21 × 1010.00 × 1001.25 × 1039.06 × 1027.00 × 1013.62 × 100
p-value-5.03 × 10−2 =3.32 × 10−3 +2.27 × 10−18 +7.59 × 10−9 +1.24 × 10−22 +1.74 × 10−2 +6.55 × 10−21 +
F29Median4.80 × 1025.53 × 1028.62 × 1021.58 × 1031.34 × 1039.33 × 1027.74 × 1024.90 × 102
Mean4.80 × 1025.73 × 1029.05 × 1021.60 × 1031.38 × 1031.11 × 1038.17 × 1025.00 × 102
Std2.24 × 1018.08 × 1011.86 × 1022.07 × 1024.13 × 1026.02 × 1022.37 × 1022.44 × 101
p-value-5.04 × 10−8 +1.21 × 10−17 +3.97 × 10−36 +5.93 × 10−17 +4.95 × 10−7 +2.52 × 10−10 +2.04 × 10−3 +
F30Median3.90 × 1038.08 × 1031.20 × 1045.00 × 1048.80 × 1061.43 × 1049.44 × 1036.47 × 102
Mean5.05 × 1039.04 × 1031.80 × 1046.79 × 1041.65 × 1071.38 × 1042.08 × 1046.46 × 102
Std2.81 × 1035.57 × 1031.77 × 1045.66 × 1042.11 × 1071.51 × 1032.96 × 1046.63 × 101
p-value-9.15 × 10−4 +2.58 × 10−4 +1.51 × 10−7 +8.77 × 10−5 +2.13 × 10−21 +5.93 × 10−3 +1.07 × 10−11 −
F21-30 | w/t/l | - | 6/3/1 | 10/0/0 | 8/0/2 | 10/0/0 | 8/2/0 | 9/1/0 | 5/3/2
Overall | w/t/l | - | 17/8/4 | 26/2/1 | 23/1/5 | 25/3/1 | 24/5/0 | 23/4/2 | 20/3/6
Rank | 1.93 | 2.66 | 4.83 | 5.38 | 7.28 | 5.86 | 3.69 | 4.38
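The w/t/l rows count the functions on which SCDLPSO performs significantly better (+), equivalently (=), or significantly worse (−) than each competitor according to the reported p-values. Assuming a two-sided test at significance level α = 0.05, the tally can be sketched as follows; the helper name `win_tie_loss` is ours.

```python
def win_tie_loss(p_values, mean_self, mean_other, alpha=0.05):
    """Tally w/t/l for one competitor across functions.
    p_values: per-function two-sided test p-values; mean_self/mean_other:
    per-function mean errors (lower is better). A difference counts only
    when p < alpha, matching the +/=/− marks in Tables 3 and 4."""
    w = t = l = 0
    for p, ms, mo in zip(p_values, mean_self, mean_other):
        if p >= alpha:
            t += 1          # no significant difference: tie
        elif ms < mo:
            w += 1          # significantly better: win
        else:
            l += 1          # significantly worse: loss
    return w, t, l
```

Summing the per-category tallies (F1,3; F4-10; F11-20; F21-30) reproduces the overall row, e.g. 0 + 6 + 5 + 6 = 17 wins against XPSO on the 30-D problems.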
Table 4. Comparison results between SCDLPSO and state-of-the-art PSO variants on the 50-D CEC 2017 benchmark functions.
F | Category | Quality | SCDLPSO | XPSO | TCSPSO | DNSPSO | AWPSO | CLPSO_LS | GLPSO | CLPSO
F1 (Unimodal Functions)
Median1.01 × 1031.84 × 1035.46 × 1033.32 × 1036.18 × 10104.59 × 1071.64 × 1032.15 × 104
Mean2.32 × 1034.71 × 1032.84 × 1065.78 × 1036.31 × 10101.29 × 1081.25 × 1042.45 × 104
Std2.71 × 1036.06 × 1031.52 × 1077.38 × 1031.08 × 10103.33 × 1084.75 × 1041.39 × 104
p-value-2.20 × 10−1 =3.21 × 10−1 =2.10 × 10−2 +4.04 × 10−38 +4.08 × 10−2 +2.53 × 10−1 =1.23 × 10−11 +
F3Median1.38 × 1044.36 × 1035.54 × 1043.70 × 1051.06 × 1055.19 × 10−105.68 × 10−133.02 × 104
Mean1.44 × 1044.62 × 1035.83 × 1043.70 × 1051.19 × 1052.37 × 1043.31 × 10−123.33 × 104
Std3.94 × 1031.56 × 1039.39 × 1035.40 × 1046.45 × 1046.07 × 1041.20 × 10−111.89 × 104
p-value-1.26 × 10−17 −4.84 × 10−31 +5.92 × 10−41 +3.50 × 10−12 +4.11 × 10−1 =2.70 × 10−27 −2.07 × 10−6 +
F1,3 | w/t/l | - | 0/1/1 | 1/1/0 | 2/0/0 | 2/0/0 | 1/1/0 | 1/0/1 | 2/0/0
F4 (Simple Multimodal Functions)
Median1.33 × 1022.45 × 1022.88 × 1024.57 × 1018.68 × 1032.38 × 1022.96 × 1021.73 × 105
Mean1.23 × 1022.34 × 1022.93 × 1025.50 × 1018.83 × 1032.40 × 1023.03 × 1021.75 × 105
Std5.20 × 1015.08 × 1019.09 × 1012.53 × 1013.63 × 1033.13 × 1016.31 × 1011.50 × 104
p-value-3.54 × 10−11 +3.48 × 10−12 +4.12 × 10−8 −1.08 × 10−18 +6.79 × 10−15 +4.07 × 10−17 +5.95 × 10−55 +
F5Median1.04 × 1018.16 × 1011.87 × 1024.10 × 1024.26 × 1024.40 × 1021.46 × 1022.03 × 102
Mean1.08 × 1018.53 × 1011.91 × 1024.13 × 1024.35 × 1024.40 × 1021.49 × 1022.01 × 102
Std3.61 × 1002.02 × 1013.82 × 1011.82 × 1016.20 × 1012.10 × 1012.97 × 1011.52 × 101
p-value-5.21 × 10−33 +4.60 × 10−33 +1.38 × 10−70 +6.58 × 10−42 +9.51 × 10−69 +1.09 × 10−32 +3.71 × 10−56 +
F6Median5.14 × 10−45.67 × 10−23.00 × 1009.77 × 10−24.25 × 1015.71 × 1001.75 × 10−22.17 × 102
Mean6.37 × 10−41.53 × 10−13.93 × 1001.03 × 10−14.29 × 1016.01 × 1002.06 × 10−22.16 × 102
Std5.24 × 10−42.87 × 10−13.68 × 1002.85 × 10−21.07 × 1011.22 × 1001.48 × 10−22.07 × 101
p-value-2.41 × 10−3 +3.51 × 10−7 +5.50 × 10−27 +2.56 × 10−29 +4.11 × 10−34 +9.84 × 10−10 +2.85 × 10−52 +
F7Median6.18 × 1011.60 × 1023.18 × 1024.72 × 1021.04 × 1035.15 × 1022.38 × 1029.44 × 10−5
Mean6.22 × 1011.64 × 1023.35 × 1024.71 × 1021.02 × 1035.15 × 1022.34 × 1021.04 × 10−4
Std2.31 × 1003.63 × 1016.19 × 1011.80 × 1013.13 × 1023.49 × 1013.79 × 1013.41 × 10−5
p-value-1.27 × 10−26 +1.44 × 10−31 +1.70 × 10−71 +1.85 × 10−23 +1.30 × 10−57 +4.20 × 10−32 +5.65 × 10−76 −
F8Median9.45 × 1008.66 × 1011.96 × 1024.13 × 1024.35 × 1024.44 × 1021.40 × 1022.39 × 102
Mean1.01 × 1018.93 × 1012.09 × 1024.07 × 1024.34 × 1024.45 × 1021.41 × 1022.36 × 102
Std3.34 × 1002.30 × 1016.12 × 1012.12 × 1016.72 × 1011.33 × 1013.04 × 1011.74 × 101
p-value-2.19 × 10−26 +8.08 × 10−25 +1.69 × 10−66 +5.43 × 10−40 +5.55 × 10−80 +6.67 × 10−31 +2.43 × 10−57 +
F9Median5.89 × 10−11.50 × 1012.84 × 1031.21 × 1011.24 × 1041.10 × 1034.76 × 1022.25 × 102
Mean1.57 × 1004.69 × 1013.34 × 1032.24 × 1011.44 × 1041.13 × 1037.01 × 1022.23 × 102
Std3.67 × 1007.80 × 1011.87 × 1032.85 × 1015.97 × 1033.00 × 1025.49 × 1021.79 × 101
p-value-5.42 × 10−3 +1.30 × 10−13 +2.50 × 10−4 +7.15 × 10−19 +6.34 × 10−28 +4.97 × 10−9 +5.22 × 10−56 +
F10Median1.20 × 1045.22 × 1035.56 × 1031.20 × 1047.82 × 1031.31 × 1045.26 × 1035.04 × 103
Mean9.87 × 1035.11 × 1035.56 × 1031.16 × 1047.96 × 1031.31 × 1045.73 × 1035.14 × 103
Std3.77 × 1038.60 × 1026.71 × 1021.44 × 1037.97 × 1024.34 × 1021.53 × 1031.10 × 103
p-value-7.01 × 10−9 −1.05 × 10−7 −2.24 × 10−2 +9.81 × 10−3 −2.41 × 10−5 +9.39 × 10−7 −2.09 × 10−8 −
F4-10 | w/t/l | - | 6/0/1 | 6/0/1 | 6/0/1 | 6/0/1 | 7/0/0 | 6/0/1 | 5/0/2
F11 (Hybrid Functions)
Median6.05 × 1011.49 × 1022.15 × 1022.05 × 1025.50 × 1032.74 × 1025.73 × 1027.35 × 103
Mean6.61 × 1011.54 × 1022.36 × 1022.06 × 1028.68 × 1031.24 × 1039.01 × 1027.29 × 103
Std1.85 × 1013.28 × 1019.86 × 1012.07 × 1017.82 × 1035.21 × 1039.97 × 1023.63 × 102
p-value-3.41 × 10−18 +8.47 × 10−13 +1.12 × 10−34 +1.76 × 10−7 +2.29 × 10−1 =3.26 × 10−5 +2.54 × 10−68 +
F12Median2.22 × 1053.96 × 1051.83 × 1062.87 × 1071.58 × 10102.56 × 1071.98 × 1062.07 × 102
Mean2.78 × 1059.37 × 1058.82 × 1063.46 × 1071.66 × 10102.70 × 1078.69 × 1062.15 × 102
Std1.61 × 1051.32 × 1062.40 × 1072.24 × 1078.14 × 1091.13 × 1071.46 × 1073.49 × 101
p-value-1.42 × 10−3 +6.01 × 10−2 =2.16 × 10−11 +9.48 × 10−16 +1.93 × 10−18 +3.05 × 10−3 +5.24 × 10−13 −
F13Median3.29 × 1032.14 × 1033.80 × 1032.26 × 1067.08 × 1093.78 × 1043.56 × 1032.79 × 107
Mean5.90 × 1034.53 × 1037.76 × 1032.84 × 1067.30 × 1091.03 × 1061.82 × 1053.05 × 107
Std6.31 × 1034.68 × 1039.15 × 1031.77 × 1065.13 × 1095.34 × 1069.49 × 1051.21 × 107
p-value-3.11 × 10−1 =3.71 × 10−1 =5.67 × 10−12 +2.35 × 10−10 +3.07 × 10−1 =3.23 × 10−1 =1.14 × 10−19 +
F14Median2.21 × 1042.99 × 1044.09 × 1047.80 × 1031.33 × 1063.32 × 1051.94 × 1043.01 × 104
Mean3.49 × 1043.58 × 1042.30 × 1058.34 × 1033.25 × 1063.92 × 1052.16 × 1053.26 × 104
Std2.94 × 1042.68 × 1045.29 × 1052.55 × 1036.94 × 1063.44 × 1055.80 × 1051.10 × 104
p-value-5.11 × 10−1 =5.25 × 10−2 =1.02 × 10−5 −1.57 × 10−2 +6.69 × 10−7 +9.80 × 10−2 =7.02 × 10−1 =
F15Median7.16 × 1032.71 × 1037.12 × 1034.18 × 1057.14 × 1073.16 × 1043.08 × 1034.58 × 105
Mean6.48 × 1034.02 × 1031.46 × 1044.47 × 1054.23 × 1082.35 × 10076.26 × 1035.01 × 105
Std4.35 × 1034.07 × 1032.54 × 1042.19 × 1058.11 × 1087.71 × 10071.22 × 1042.50 × 105
p-value-4.79 × 10−2 −9.31 × 10−2 =1.48 × 10−15 +6.78 × 10−3 +1.07 × 10−1 =9.29 × 10−1 =2.69 × 10−15 +
F16Median4.14 × 1029.22 × 1021.62 × 1033.74 × 1032.62 × 1033.20 × 1031.54 × 1031.72 × 103
Mean4.29 × 1029.45 × 1021.70 × 1033.78 × 1032.68 × 1033.21 × 1031.59 × 1031.81 × 103
Std2.14 × 1023.49 × 1024.35 × 1022.22 × 1026.31 × 1021.96 × 1024.61 × 1027.35 × 102
p-value-1.09 × 10−9 +1.80 × 10−20 +2.88 × 10−53 +1.15 × 10−25 +3.58 × 10−50 +9.35 × 10−18 +8.37 × 10−14 +
F17Median2.48 × 1028.63 × 1021.15 × 1032.34 × 1032.61 × 1032.06 × 1039.67 × 1021.34 × 103
Mean3.70 × 1028.40 × 1021.17 × 1032.34 × 1032.58 × 1032.20 × 1031.00 × 1031.33 × 103
Std2.98 × 1022.46 × 1022.97 × 1021.76 × 1024.68 × 1027.09 × 1022.66 × 1022.29 × 102
p-value-1.11 × 10−8 +1.59 × 10−14 +1.82 × 10−37 +2.93 × 10−29 +1.49 × 10−18 +8.43 × 10−12 +7.20 × 10−20 +
F18Median9.30 × 1041.67 × 1053.12 × 1063.39 × 1064.48 × 1065.68 × 1061.36 × 1061.05 × 103
Mean1.30 × 1053.41 × 1055.58 × 1063.69 × 1069.85 × 1068.35 × 1062.45 × 1061.03 × 103
Std8.55 × 1044.26 × 1055.71 × 1061.28 × 1061.88 × 1076.36 × 1062.70 × 1061.59 × 102
p-value-5.67 × 10−3 +3.38 × 10−6 +1.35 × 10−21 +7.13 × 10−3 +3.35 × 10−9 +2.08 × 10−5 +4.10 × 10−11 −
FCategoryQualitySCDLPSOXPSOTCSPSODNSPSOAWPSOCLPSO_LSGLPSOCLPSO
F19Hybrid
Functions
Median1.43 × 1041.06 × 1041.30 × 1043.06 × 1042.16 × 1072.52 × 1038.92 × 1031.33 × 106
Mean1.43 × 1041.19 × 1041.51 × 1043.56 × 1042.27 × 1082.51 × 1031.18 × 1041.37 × 106
Std7.16 × 1038.67 × 1031.38 × 1041.83 × 1044.59 × 1081.72 × 1019.93 × 1036.37 × 105
p-value-1.23 × 10−1 =7.70 × 10−1 =2.57 × 10−7 +9.88 × 10−3 +2.17 × 10−12 −2.81 × 10−1 =1.40 × 10−16 +
F20Median8.70 × 1024.75 × 1029.58 × 1021.56 × 1031.36 × 1031.72 × 1037.44 × 1023.53 × 102
Mean7.04 × 1024.82 × 1029.01 × 1021.56 × 1031.32 × 1031.70 × 1037.43 × 1025.48 × 102
Std5.18 × 1022.07 × 1022.96 × 1023.18 × 1023.37 × 1021.35 × 1022.66 × 1024.61 × 102
p-value-2.91 × 10−2 −7.94 × 10−2 =2.92 × 10−10 +1.66 × 10−6 +2.49 × 10−14 +7.17 × 10−1 =2.33 × 10−1 =
F11-20w/t/l-5/3/24/6/09/0/110/0/06/3/15/5/06/2/2
F21Composition
Functions
Median2.18 × 1022.81 × 1023.77 × 1025.96 × 1025.98 × 1026.34 × 1023.36 × 1026.29 × 102
Mean2.19 × 1022.82 × 1024.02 × 1025.98 × 1026.13 × 1026.36 × 1023.46 × 1026.08 × 102
Std4.32 × 1001.93 × 1016.26 × 1011.81 × 1016.90 × 1011.65 × 1014.16 × 1011.50 × 102
p-value-3.92 × 10−23 +1.48 × 10−22 +6.05 × 10−69 +1.63 × 10−37 +1.39 × 10−73 +2.85 × 10−23 +3.56 × 10−20 +
F22Median3.76 × 1035.54 × 1036.39 × 1031.30 × 1048.15 × 1031.33 × 1046.48 × 1034.48 × 102
Mean6.52 × 1034.61 × 1036.14 × 1031.29 × 1048.12 × 1031.32 × 1045.88 × 1034.46 × 102
Std4.14 × 1032.32 × 1031.52 × 1037.68 × 1021.12 × 1033.66 × 1023.61 × 1031.99 × 101
p-value-5.39 × 10−2 =6.51 × 10−1 =3.68 × 10−11 +4.85 × 10−2 +3.82 × 10−12 +5.33 × 10−1 =8.94 × 10−11 −
F23Median5.21 × 1025.21 × 1026.42 × 1028.94 × 1021.22 × 1038.58 × 1026.65 × 1027.71 × 103
Mean5.27 × 1025.19 × 1026.50 × 1029.04 × 1021.25 × 1038.56 × 1026.80 × 1027.19 × 103
Std3.04 × 1012.93 × 1015.85 × 1015.59 × 1011.82 × 1021.38 × 1018.50 × 1011.70 × 103
p-value-7.27 × 10−1 =3.26 × 10−14 +1.86 × 10−38 +9.39 × 10−29 +7.16 × 10−51 +7.53 × 10−13 +7.69 × 10−29 +
F24Median5.91 × 1016.03 × 1027.06 × 1021.14 × 1031.30 × 1039.06 × 1027.38 × 1026.69 × 102
Mean5.98 × 1026.28 × 1027.09 × 1021.15 × 1031.32 × 1039.06 × 1027.49 × 1026.66 × 102
Std3.21 × 1018.09 × 1017.38 × 1011.38 × 1021.26 × 1021.22 × 1019.38 × 1011.51 × 101
p-value-5.13 × 10−1 =4.81 × 10−10 +1.30 × 10−28 +6.07 × 10−37 +1.44 × 10−48 +2.54 × 10−11 +6.76 × 10−15 +
F25Median4.80 × 1025.95 × 1026.74 × 1024.31 × 1024.84 × 1035.58 × 1026.61 × 1028.25 × 102
Mean5.08 × 1025.93 × 1026.76 × 1024.33 × 1025.22 × 1035.59 × 1026.66 × 1028.20 × 102
Std3.86 × 1012.38 × 1016.58 × 1018.36 × 1002.83 × 1039.38 × 10007.00 × 1013.20 × 101
p-value-4.38 × 10−15 +3.72 × 10−17 +1.39 × 10−14 −1.51 × 10−12 +4.36 × 10−9 +3.17 × 10−15 +1.24 × 10−39 +
F26Median1.84 × 1031.73 × 1033.98 × 1037.50 × 1031.05 × 1045.62 × 1032.96 × 1035.37 × 102
Mean1.89 × 1031.28 × 1034.07 × 1037.72 × 1031.04 × 1045.60 × 1033.04 × 1035.37 × 102
Std1.65 × 1028.75 × 1021.02 × 1032.13 × 1031.97 × 1031.86 × 1026.50 × 1024.38 × 100
p-value-3.58 × 10−5 −2.43 × 10−16 +3.29 × 10−21 +3.59 × 10−31 +3.21 × 10−61 +6.05 × 10−13 +2.20 × 10−46 −
F27Median6.66 × 1026.99 × 1029.01 × 1025.00 × 1021.39 × 10037.18 × 1025.48 × 1023.65 × 103
Mean6.79 × 1027.17 × 1029.06 × 1025.00 × 1021.44 × 10037.32 × 1025.50 × 1023.58 × 103
Std6.07 × 1018.03 × 1019.24 × 1010.00 × 1002.58 × 1027.84 × 1011.28 × 1012.08 × 102
p-value-7.25 × 10−3 +7.07 × 10−16 +8.02 × 10−23 −2.86 × 10−22 +6.22 × 10−3 +3.93 × 10−16 −1.94 × 10−58 +
F28Median4.59 × 1025.39 × 1026.68 × 1025.00 × 1027.56 × 1035.44 × 1034.72 × 1026.54 × 102
Mean4.74 × 1025.43 × 1026.77 × 1025.00 × 1027.80 × 1035.34 × 1034.51 × 1026.53 × 102
Std2.69 × 1013.56 × 1016.99 × 1010.00 × 1001.47 × 1034.48 × 1027.00 × 1012.16 × 101
p-value-3.50 × 10−11 +4.61 × 10−21 +2.37 × 10−6 +2.41 × 10−34 +3.07 × 10−53 +1.02 × 10−1 =2.69 × 10−35 +
F29Median4.98 × 1028.54 × 1021.43 × 1033.26 × 1033.73 × 1032.10 × 1037.74 × 1021.69 × 103
Mean5.17 × 1028.60 × 1021.43 × 1033.25 × 1034.01 × 1032.24 × 1038.17 × 1021.78 × 103
Std1.27 × 1021.93 × 1022.49 × 1022.65 × 1021.07 × 1035.86 × 1022.37 × 1024.53 × 102
p-value-1.67 × 10−11 +7.15 × 10−25 +2.16 × 10−49 +9.40 × 10−25 +2.91 × 10−22 +1.26 × 10−7 +6.43 × 10−21 +
F30Median7.96 × 1051.82 × 1062.00 × 1062.39 × 1063.41 × 1081.46 × 1062.00 × 1061.04 × 103
Mean8.26 × 1051.86 × 1062.36 × 1062.51 × 1064.80 × 1081.16 × 1072.40 × 1061.03 × 103
Std1.11 × 1053.34 × 1058.05 × 1059.62 × 1057.04 × 1085.37 × 1071.44 × 1061.62 × 102
p-value-4.17 × 10−24 +1.69 × 10−14 +3.12 × 10−13 +5.36 × 10−4 +2.87 × 10−1 =2.25 × 10−7 + 6.00 × 10−44 −
F21-30w/t/l-6/3/19/1/08/0/210/0/09/1/07/2/17/0/3
w/t/l-17/7/520/8/125/0/428/0/123/5/119/7/320/2/7
rank2.142.454.725.077.416.173.524.52
Table 5. Comparison results between SCDLPSO and state-of-the-art PSO variants on the 100-D CEC 2017 benchmark functions.
| F | Quality | SCDLPSO | XPSO | TCSPSO | DNSPSO | AWPSO | CLPSO_LS | GLPSO | CLPSO |
|---|---|---|---|---|---|---|---|---|---|
| **Unimodal Functions** | | | | | | | | | |
| F1 | Median | 1.94×10^3 | 3.97×10^3 | 2.62×10^3 | 4.08×10^3 | 2.15×10^11 | 7.93×10^9 | 1.46×10^4 | 2.71×10^4 |
| | Mean | 5.40×10^3 | 7.78×10^3 | 6.22×10^3 | 7.17×10^3 | 2.19×10^11 | 8.00×10^9 | 3.49×10^4 | 3.69×10^8 |
| | Std | 4.91×10^3 | 8.48×10^3 | 7.24×10^3 | 8.26×10^3 | 3.66×10^10 | 1.30×10^9 | 5.38×10^4 | 4.72×10^8 |
| | p-value | - | 4.65×10^−2 + | 2.14×10^−1 = | 9.87×10^−2 = | 1.07×10^−38 + | 1.98×10^−39 + | 3.36×10^−3 + | 8.88×10^−5 + |
| F3 | Median | 1.74×10^5 | 7.15×10^4 | 2.54×10^5 | 1.04×10^6 | 4.94×10^5 | 2.72×10^−8 | 1.92×10^5 | 3.53×10^4 |
| | Mean | 1.72×10^5 | 7.11×10^4 | 2.53×10^5 | 1.03×10^6 | 5.19×10^5 | 2.51×10^9 | 1.98×10^5 | 4.60×10^4 |
| | Std | 2.09×10^4 | 1.09×10^4 | 3.05×10^4 | 1.18×10^5 | 1.67×10^5 | 1.34×10^10 | 2.70×10^4 | 2.70×10^4 |
| | p-value | - | 3.66×10^−32 − | 1.15×10^−16 + | 4.26×10^−43 + | 7.31×10^−16 + | 3.16×10^−1 = | 3.34×10^−4 + | 7.13×10^−28 − |
| F1, F3 | w/t/l | - | 1/0/1 | 1/1/0 | 1/1/0 | 2/0/0 | 1/1/0 | 2/0/0 | 1/0/1 |
| **Simple Multimodal Functions** | | | | | | | | | |
| F4 | Median | 2.18×10^2 | 4.89×10^2 | 6.28×10^2 | 2.02×10^2 | 5.03×10^4 | 7.20×10^2 | 1.62×10^3 | 5.67×10^5 |
| | Mean | 2.18×10^2 | 4.88×10^2 | 7.03×10^2 | 2.05×10^2 | 5.16×10^4 | 7.50×10^2 | 1.58×10^3 | 5.66×10^5 |
| | Std | 1.98×10^1 | 6.07×10^1 | 1.82×10^2 | 5.34×10^1 | 1.47×10^4 | 1.11×10^2 | 4.02×10^2 | 5.55×10^4 |
| | p-value | - | 1.65×10^−30 + | 1.21×10^−20 + | 2.21×10^−1 = | 2.41×10^−26 + | 4.60×10^−33 + | 1.37×10^−25 + | 1.09×10^−51 + |
| F5 | Median | 2.59×10^1 | 2.15×10^2 | 5.36×10^2 | 1.03×10^3 | 1.24×10^3 | 1.07×10^3 | 4.44×10^2 | 2.74×10^2 |
| | Mean | 2.59×10^1 | 2.22×10^2 | 5.55×10^2 | 1.03×10^3 | 1.23×10^3 | 1.07×10^3 | 4.51×10^2 | 2.82×10^2 |
| | Std | 3.52×10^1 | 4.44×10^1 | 1.09×10^2 | 4.69×10^1 | 1.57×10^2 | 2.54×10^1 | 8.62×10^1 | 2.03×10^1 |
| | p-value | - | 2.07×10^−30 + | 8.72×10^−34 + | 5.07×10^−70 + | 1.11×10^−44 + | 2.54×10^−86 + | 4.04×10^−34 + | 1.48×10^−56 + |
| F6 | Median | 3.73×10^−2 | 4.61×10^0 | 1.85×10^1 | 2.36×10^−1 | 7.13×10^1 | 2.92×10^1 | 2.42×10^0 | 7.74×10^2 |
| | Mean | 5.58×10^−2 | 5.06×10^0 | 1.78×10^1 | 2.78×10^−1 | 7.14×10^1 | 2.92×10^1 | 3.03×10^0 | 7.72×10^2 |
| | Std | 5.01×10^−2 | 3.54×10^0 | 6.21×10^0 | 2.20×10^−1 | 9.32×10^0 | 1.54×10^0 | 1.97×10^0 | 2.83×10^1 |
| | p-value | - | 1.43×10^−9 + | 3.54×10^−22 + | 1.86×10^−6 + | 1.16×10^−44 + | 3.41×10^−67 + | 3.67×10^−11 + | 2.52×10^−76 + |
| F7 | Median | 1.38×10^2 | 4.31×10^2 | 1.22×10^3 | 1.12×10^3 | 4.02×10^3 | 1.45×10^3 | 7.43×10^2 | 8.98×10^−7 |
| | Mean | 1.39×10^2 | 4.49×10^2 | 1.23×10^3 | 1.13×10^3 | 4.13×10^3 | 1.52×10^3 | 7.62×10^2 | 7.95×10^−3 |
| | Std | 5.87×10^0 | 8.30×10^1 | 1.99×10^2 | 3.75×10^1 | 7.71×10^2 | 1.72×10^2 | 1.19×10^2 | 2.08×10^−2 |
| | p-value | - | 7.75×10^−28 + | 1.26×10^−36 + | 3.60×10^−75 + | 2.91×10^−35 + | 7.31×10^−46 + | 1.79×10^−35 + | 1.07×10^−72 − |
| F8 | Median | 2.89×10^1 | 2.13×10^2 | 5.29×10^2 | 1.02×10^3 | 1.22×10^3 | 1.06×10^3 | 4.68×10^2 | 7.71×10^2 |
| | Mean | 2.93×10^1 | 2.13×10^2 | 5.54×10^2 | 1.03×10^3 | 1.25×10^3 | 1.06×10^3 | 5.10×10^2 | 7.69×10^2 |
| | Std | 4.21×10^0 | 3.84×10^1 | 8.00×10^1 | 3.98×10^1 | 1.43×10^2 | 2.61×10^1 | 1.53×10^2 | 5.38×10^1 |
| | p-value | - | 1.95×10^−32 + | 7.17×10^−41 + | 5.08×10^−74 + | 3.44×10^−47 + | 2.73×10^−85 + | 3.96×10^−24 + | 5.03×10^−59 + |
| F9 | Median | 1.89×10^1 | 4.19×10^2 | 1.38×10^4 | 1.16×10^3 | 4.41×10^4 | 1.32×10^4 | 1.28×10^4 | 7.89×10^2 |
| | Mean | 2.59×10^1 | 5.62×10^2 | 1.40×10^4 | 2.29×10^3 | 4.65×10^4 | 1.33×10^4 | 1.41×10^4 | 7.88×10^2 |
| | Std | 2.49×10^1 | 4.12×10^2 | 4.05×10^3 | 2.95×10^3 | 9.32×10^3 | 1.58×10^3 | 7.09×10^3 | 3.20×10^1 |
| | p-value | - | 4.91×10^−11 + | 4.57×10^−26 + | 1.13×10^−4 + | 2.03×10^−34 + | 5.89×10^−47 + | 2.21×10^−15 + | 5.66×10^−67 + |
| F10 | Median | 1.05×10^4 | 1.22×10^4 | 1.36×10^4 | 3.03×10^4 | 1.87×10^4 | 3.01×10^4 | 1.98×10^4 | 2.65×10^4 |
| | Mean | 1.55×10^4 | 1.24×10^4 | 1.34×10^4 | 3.02×10^4 | 1.86×10^4 | 3.01×10^4 | 2.07×10^4 | 2.58×10^4 |
| | Std | 8.91×10^3 | 1.36×10^3 | 1.08×10^3 | 8.44×10^2 | 2.19×10^3 | 4.22×10^2 | 3.69×10^3 | 3.20×10^3 |
| | p-value | - | 4.95×10^−2 − | 1.67×10^−1 = | 4.46×10^−12 + | 9.35×10^−2 = | 4.84×10^−12 + | 7.40×10^−3 + | 3.53×10^−7 + |
| F4–F10 | w/t/l | - | 6/0/1 | 6/0/1 | 6/1/0 | 6/1/0 | 7/0/0 | 7/0/0 | 6/0/1 |
| **Hybrid Functions** | | | | | | | | | |
| F11 | Median | 8.01×10^2 | 1.15×10^3 | 2.74×10^3 | 2.24×10^4 | 1.42×10^5 | 1.44×10^3 | 2.51×10^4 | 2.26×10^4 |
| | Mean | 8.31×10^2 | 1.18×10^3 | 3.42×10^3 | 2.46×10^4 | 1.56×10^5 | 3.89×10^3 | 2.38×10^4 | 2.26×10^4 |
| | Std | 1.52×10^2 | 2.31×10^2 | 1.92×10^3 | 9.75×10^3 | 6.76×10^4 | 5.95×10^3 | 8.81×10^3 | 5.20×10^2 |
| | p-value | - | 5.03×10^−9 + | 1.18×10^−9 + | 4.85×10^−19 + | 7.25×10^−18 + | 7.63×10^−3 + | 2.52×10^−20 + | 5.26×10^−86 + |
| F12 | Median | 5.91×10^5 | 1.18×10^7 | 5.19×10^7 | 1.51×10^7 | 6.93×10^10 | 6.02×10^8 | 2.31×10^8 | 2.94×10^3 |
| | Mean | 6.04×10^5 | 1.81×10^7 | 8.56×10^7 | 1.65×10^7 | 7.75×10^10 | 6.53×10^8 | 3.13×10^8 | 3.00×10^3 |
| | Std | 3.25×10^5 | 1.87×10^7 | 9.57×10^7 | 7.77×10^6 | 2.82×10^10 | 4.55×10^8 | 2.64×10^8 | 7.24×10^2 |
| | p-value | - | 6.65×10^−6 + | 1.22×10^−5 + | 7.71×10^−16 + | 2.51×10^−21 + | 1.82×10^−10 + | 3.40×10^−8 + | 2.77×10^−14 − |
| F13 | Median | 2.46×10^3 | 3.11×10^3 | 4.13×10^3 | 6.18×10^3 | 1.06×10^9 | 1.24×10^4 | 1.18×10^4 | 1.02×10^8 |
| | Mean | 4.14×10^3 | 4.42×10^3 | 6.81×10^3 | 1.11×10^4 | 9.85×10^9 | 1.28×10^8 | 2.56×10^7 | 1.01×10^8 |
| | Std | 4.23×10^3 | 3.81×10^3 | 5.71×10^3 | 1.41×10^4 | 4.78×10^9 | 2.63×10^8 | 1.36×10^8 | 2.98×10^7 |
| | p-value | - | 6.57×10^−1 = | 4.76×10^−2 + | 1.32×10^−2 + | 5.55×10^−16 + | 1.13×10^−2 + | 3.15×10^−1 = | 1.22×10^−25 + |
| F14 | Median | 7.92×10^4 | 2.26×10^5 | 9.20×10^5 | 2.10×10^6 | 2.37×10^7 | 3.82×10^6 | 5.15×10^5 | 5.23×10^4 |
| | Mean | 8.59×10^4 | 3.69×10^5 | 1.48×10^6 | 2.38×10^6 | 3.30×10^7 | 5.14×10^6 | 2.70×10^6 | 5.43×10^4 |
| | Std | 3.53×10^4 | 4.94×10^5 | 1.37×10^6 | 8.62×10^5 | 2.89×10^7 | 6.48×10^6 | 3.96×10^6 | 2.17×10^4 |
| | p-value | - | 3.25×10^−3 + | 9.95×10^−7 + | 1.18×10^−20 + | 8.20×10^−8 + | 9.52×10^−5 + | 7.51×10^−4 + | 8.07×10^−5 − |
| F15 | Median | 1.06×10^3 | 1.54×10^3 | 2.32×10^3 | 5.08×10^4 | 4.72×10^9 | 7.66×10^3 | 2.72×10^3 | 5.12×10^6 |
| | Mean | 2.64×10^3 | 2.78×10^3 | 4.88×10^3 | 7.18×10^4 | 5.09×10^9 | 2.96×10^7 | 1.18×10^4 | 5.21×10^6 |
| | Std | 4.67×10^3 | 3.21×10^3 | 5.47×10^3 | 6.60×10^4 | 2.66×10^9 | 1.11×10^8 | 5.98×10^5 | 1.52×10^6 |
| | p-value | - | 6.26×10^−1 = | 9.89×10^−2 = | 5.50×10^−7 + | 9.75×10^−15 + | 1.55×10^−1 = | 3.06×10^−1 = | 5.68×10^−26 + |
| F16 | Median | 1.26×10^3 | 2.90×10^3 | 3.71×10^3 | 8.82×10^3 | 8.86×10^3 | 8.44×10^3 | 4.64×10^3 | 6.39×10^3 |
| | Mean | 1.19×10^3 | 2.88×10^3 | 3.82×10^3 | 8.79×10^3 | 8.79×10^3 | 8.43×10^3 | 4.96×10^3 | 6.36×10^3 |
| | Std | 4.58×10^2 | 4.97×10^3 | 6.74×10^2 | 3.41×10^2 | 1.19×10^3 | 3.33×10^2 | 1.66×10^3 | 1.85×10^3 |
| | p-value | - | 1.89×10^−19 + | 1.09×10^−24 + | 2.53×10^−58 + | 1.46×10^−38 + | 2.54×10^−57 + | 5.24×10^−17 + | 4.67×10^−21 + |
| F17 | Median | 1.31×10^3 | 2.45×10^3 | 3.22×10^3 | 5.94×10^3 | 2.20×10^4 | 5.62×10^3 | 3.42×10^3 | 4.13×10^3 |
| | Mean | 1.34×10^3 | 2.43×10^3 | 3.07×10^3 | 5.97×10^3 | 1.41×10^5 | 6.07×10^3 | 3.35×10^3 | 4.09×10^3 |
| | Std | 5.07×10^2 | 4.69×10^2 | 5.17×10^2 | 2.44×10^2 | 3.96×10^5 | 1.62×10^3 | 1.01×10^3 | 3.01×10^2 |
| | p-value | - | 1.68×10^−11 + | 1.30×10^−18 + | 2.09×10^−46 + | 6.30×10^−2 = | 1.39×10^−21 + | 1.74×10^−13 + | 8.14×10^−33 + |
| F18 | Median | 2.80×10^5 | 3.87×10^5 | 2.76×10^6 | 2.92×10^7 | 4.18×10^7 | 1.88×10^7 | 6.31×10^5 | 3.37×10^3 |
| | Mean | 2.91×10^5 | 4.72×10^5 | 3.38×10^6 | 3.20×10^7 | 4.62×10^7 | 2.62×10^7 | 1.32×10^6 | 3.32×10^3 |
| | Std | 1.25×10^5 | 3.09×10^5 | 2.09×10^6 | 1.06×10^7 | 3.72×10^7 | 2.16×10^7 | 1.70×10^6 | 3.55×10^2 |
| | p-value | - | 7.87×10^−3 + | 7.90×10^−11 + | 4.09×10^−23 + | 1.21×10^−8 + | 2.32×10^−8 + | 1.93×10^−3 + | 2.68×10^−18 − |
| F19 | Median | 8.30×10^2 | 3.33×10^3 | 2.30×10^3 | 5.82×10^3 | 3.08×10^9 | 2.60×10^4 | 1.37×10^3 | 8.17×10^6 |
| | Mean | 2.07×10^3 | 4.55×10^3 | 4.86×10^3 | 8.89×10^3 | 3.87×10^9 | 8.66×10^7 | 9.63×10^5 | 8.39×10^6 |
| | Std | 2.56×10^3 | 5.32×10^3 | 6.41×10^3 | 8.15×10^3 | 3.13×10^9 | 2.13×10^8 | 5.08×10^6 | 2.39×10^6 |
| | p-value | - | 5.23×10^−2 = | 3.35×10^−2 + | 6.55×10^−5 + | 1.04×10^−8 + | 3.30×10^−2 + | 3.12×10^−1 = | 1.78×10^−26 + |
| F20 | Median | 1.18×10^3 | 2.16×10^3 | 2.88×10^3 | 5.70×10^3 | 3.86×10^3 | 4.78×10^3 | 3.46×10^3 | 3.42×10^3 |
| | Mean | 1.87×10^3 | 2.19×10^3 | 2.83×10^3 | 5.58×10^3 | 3.75×10^3 | 4.74×10^3 | 3.59×10^3 | 4.15×10^3 |
| | Std | 1.32×10^3 | 4.39×10^2 | 4.93×10^2 | 5.74×10^2 | 6.26×10^2 | 2.62×10^2 | 9.00×10^2 | 2.05×10^3 |
| | p-value | - | 4.84×10^−1 = | 1.96×10^−3 + | 1.72×10^−19 + | 2.00×10^−8 + | 7.71×10^−16 + | 1.26×10^−6 + | 1.24×10^−5 + |
| F11–F20 | w/t/l | - | 6/4/0 | 9/1/0 | 10/0/0 | 9/1/0 | 9/1/0 | 7/3/0 | 7/0/3 |
| **Composition Functions** | | | | | | | | | |
| F21 | Median | 2.88×10^2 | 4.54×10^2 | 7.91×10^2 | 1.22×10^3 | 1.62×10^3 | 1.30×10^3 | 7.46×10^2 | 2.21×10^3 |
| | Mean | 2.94×10^2 | 4.59×10^2 | 7.87×10^2 | 1.23×10^3 | 1.62×10^3 | 1.29×10^3 | 7.89×10^2 | 2.20×10^3 |
| | Std | 1.83×10^1 | 5.25×10^1 | 8.43×10^1 | 3.66×10^1 | 1.70×10^2 | 3.07×10^1 | 1.81×10^2 | 2.25×10^2 |
| | p-value | - | 6.13×10^−24 + | 1.27×10^−37 + | 9.30×10^−72 + | 5.17×10^−45 + | 7.16×10^−77 + | 3.28×10^−21 + | 4.79×10^−47 + |
| F22 | Median | 9.97×10^3 | 1.38×10^4 | 1.46×10^4 | 3.09×10^4 | 1.98×10^4 | 3.08×10^4 | 2.30×10^4 | 1.02×10^3 |
| | Mean | 1.27×10^4 | 1.27×10^4 | 1.46×10^4 | 3.07×10^4 | 1.99×10^4 | 3.07×10^4 | 2.35×10^4 | 1.02×10^3 |
| | Std | 6.78×10^3 | 4.40×10^3 | 1.35×10^3 | 8.81×10^2 | 2.37×10^3 | 3.97×10^2 | 4.24×10^3 | 2.84×10^1 |
| | p-value | - | 7.26×10^−1 = | 1.44×10^−1 = | 1.53×10^−20 + | 1.44×10^−6 + | 1.28×10^−20 + | 1.02×10^−9 + | 4.54×10^−13 − |
| F23 | Median | 7.60×10^2 | 8.16×10^2 | 1.04×10^3 | 1.70×10^3 | 2.58×10^3 | 1.51×10^3 | 1.39×10^3 | 2.33×10^4 |
| | Mean | 7.62×10^2 | 8.19×10^2 | 1.07×10^3 | 1.83×10^3 | 2.53×10^3 | 1.51×10^3 | 1.39×10^3 | 2.34×10^4 |
| | Std | 4.79×10^1 | 5.65×10^1 | 1.09×10^2 | 3.60×10^2 | 2.47×10^2 | 2.84×10^1 | 1.92×10^2 | 5.35×10^2 |
| | p-value | - | 6.24×10^−4 + | 2.35×10^−20 + | 8.10×10^−23 + | 1.46×10^−42 + | 1.22×10^−58 + | 2.38×10^−24 + | 3.44×10^−87 + |
| F24 | Median | 1.24×10^3 | 1.20×10^3 | 1.52×10^3 | 3.08×10^3 | 4.08×10^3 | 1.87×10^3 | 2.05×10^3 | 9.38×10^2 |
| | Mean | 1.24×10^3 | 1.24×10^3 | 1.55×10^3 | 3.14×10^3 | 4.12×10^3 | 1.88×10^3 | 2.03×10^3 | 9.36×10^2 |
| | Std | 7.89×10^1 | 1.18×10^2 | 1.46×10^2 | 6.53×10^2 | 4.76×10^2 | 7.17×10^1 | 1.32×10^2 | 2.01×10^1 |
| | p-value | - | 9.47×10^−1 = | 4.48×10^−14 + | 2.23×10^−22 + | 1.11×10^−38 + | 1.37×10^−38 + | 5.80×10^−35 + | 4.00×10^−28 − |
| F25 | Median | 8.22×10^2 | 1.08×10^3 | 1.29×10^3 | 7.34×10^2 | 2.32×10^4 | 2.82×10^3 | 1.79×10^3 | 1.51×10^3 |
| | Mean | 8.00×10^2 | 1.09×10^3 | 1.35×10^3 | 7.54×10^2 | 2.31×10^4 | 2.80×10^3 | 1.77×10^3 | 1.51×10^3 |
| | Std | 5.76×10^1 | 7.59×10^1 | 2.93×10^2 | 5.64×10^1 | 6.09×10^3 | 2.29×10^2 | 2.76×10^2 | 2.84×10^1 |
| | p-value | - | 1.71×10^−24 + | 3.47×10^−14 + | 3.03×10^−3 − | 2.13×10^−27 + | 3.78×10^−47 + | 4.16×10^−26 + | 8.76×10^−54 + |
| F26 | Median | 6.20×10^3 | 5.36×10^3 | 1.06×10^4 | 2.54×10^4 | 3.86×10^4 | 1.43×10^4 | 1.04×10^4 | 9.32×10^2 |
| | Mean | 6.16×10^3 | 4.18×10^3 | 1.14×10^4 | 2.62×10^4 | 3.74×10^4 | 1.45×10^4 | 1.10×10^4 | 9.37×10^2 |
| | Std | 4.81×10^2 | 2.45×10^3 | 2.40×10^3 | 7.04×10^3 | 4.48×10^3 | 6.46×10^2 | 2.29×10^3 | 3.87×10^1 |
| | p-value | - | 2.09×10^−5 − | 1.28×10^−16 + | 4.66×10^−22 + | 3.15×10^−42 + | 5.88×10^−52 + | 3.45×10^−16 + | 3.19×10^−53 − |
| F27 | Median | 7.49×10^2 | 8.71×10^2 | 1.12×10^3 | 5.00×10^2 | 3.20×10^3 | 7.61×10^2 | 1.23×10^3 | 1.12×10^4 |
| | Mean | 7.66×10^2 | 8.83×10^2 | 1.12×10^3 | 5.00×10^2 | 3.32×10^3 | 7.61×10^2 | 1.26×10^3 | 1.12×10^4 |
| | Std | 6.46×10^1 | 7.08×10^1 | 1.75×10^2 | 0.00×10^0 | 8.62×10^2 | 4.86×10^1 | 9.86×10^1 | 3.26×10^2 |
| | p-value | - | 2.15×10^−7 + | 1.23×10^−14 + | 5.36×10^−30 − | 7.09×10^−23 + | 7.67×10^−1 = | 3.18×10^−30 + | 6.83×10^−80 + |
| F28 | Median | 5.64×10^2 | 8.31×10^2 | 1.33×10^3 | 5.00×10^2 | 2.88×10^4 | 1.34×10^4 | 2.20×10^3 | 7.68×10^2 |
| | Mean | 5.72×10^2 | 8.32×10^2 | 1.37×10^3 | 5.00×10^2 | 2.81×10^4 | 1.34×10^4 | 2.29×10^3 | 7.68×10^2 |
| | Std | 3.12×10^1 | 4.26×10^1 | 3.31×10^2 | 0.00×10^0 | 4.47×10^3 | 2.57×10^2 | 5.64×10^2 | 2.39×10^1 |
| | p-value | - | 1.11×10^−31 + | 1.18×10^−18 + | 6.71×10^−18 − | 2.05×10^−39 + | 3.12×10^−91 + | 1.72×10^−23 + | 2.04×10^−34 + |
| F29 | Median | 1.77×10^3 | 3.02×10^3 | 3.90×10^3 | 6.93×10^3 | 1.08×10^4 | 6.16×10^3 | 4.38×10^3 | 1.28×10^4 |
| | Mean | 1.82×10^3 | 3.03×10^3 | 3.91×10^3 | 6.87×10^3 | 1.71×10^4 | 6.13×10^3 | 4.43×10^3 | 1.28×10^4 |
| | Std | 3.99×10^2 | 4.92×10^2 | 5.41×10^2 | 3.37×10^2 | 1.70×10^4 | 8.58×10^2 | 6.82×10^2 | 3.67×10^1 |
| | p-value | - | 2.42×10^−15 + | 6.71×10^−24 + | 2.20×10^−50 + | 1.00×10^−5 + | 2.60×10^−32 + | 3.49×10^−25 + | 2.66×10^−76 + |
| F30 | Median | 5.68×10^3 | 2.87×10^4 | 1.04×10^5 | 7.89×10^2 | 7.81×10^9 | 1.26×10^4 | 1.84×10^6 | 3.45×10^3 |
| | Mean | 7.04×10^3 | 3.55×10^4 | 1.46×10^5 | 7.99×10^2 | 8.40×10^9 | 1.57×10^8 | 3.54×10^6 | 3.39×10^3 |
| | Std | 3.52×10^3 | 2.37×10^4 | 1.34×10^5 | 2.13×10^2 | 4.21×10^9 | 3.12×10^8 | 3.84×10^6 | 2.84×10^2 |
| | p-value | - | 2.97×10^−7 + | 6.76×10^−7 + | 1.83×10^−13 − | 2.04×10^−15 + | 8.68×10^−3 + | 6.47×10^−6 + | 6.99×10^−7 − |
| F21–F30 | w/t/l | - | 7/2/1 | 9/1/0 | 6/0/4 | 10/0/0 | 9/1/0 | 10/0/0 | 6/0/4 |
| Whole set | w/t/l | - | 20/6/3 | 25/3/1 | 23/2/4 | 26/3/0 | 26/3/0 | 26/3/0 | 20/0/9 |
| | Rank | 1.69 | 2.66 | 3.97 | 4.86 | 7.45 | 5.93 | 4.97 | 4.48 |
Table 6. Statistical comparison results between SCDLPSO and the seven state-of-the-art PSO variants on the CEC 2017 benchmark set with different dimensions in terms of “w/t/l”.
| Category | D | XPSO | TCSPSO | DNSPSO | AWPSO | CLPSO_LS | GLPSO | CLPSO |
|---|---|---|---|---|---|---|---|---|
| Unimodal Functions | 30 | 0/1/1 | 1/1/0 | 2/0/0 | 2/0/0 | 1/1/0 | 0/1/1 | 2/0/0 |
| | 50 | 0/1/1 | 1/1/0 | 2/0/0 | 2/0/0 | 1/1/0 | 1/0/1 | 2/0/0 |
| | 100 | 1/0/1 | 1/1/0 | 1/1/0 | 2/0/0 | 1/1/0 | 2/0/0 | 1/0/1 |
| Simple Multimodal Functions | 30 | 6/0/1 | 6/0/1 | 5/0/2 | 6/0/1 | 6/1/0 | 6/0/1 | 5/0/2 |
| | 50 | 6/0/1 | 6/0/1 | 6/0/1 | 6/0/1 | 7/0/0 | 6/0/1 | 5/0/2 |
| | 100 | 6/0/1 | 6/0/1 | 6/1/0 | 6/1/0 | 7/0/0 | 7/0/0 | 6/0/1 |
| Hybrid Functions | 30 | 5/4/1 | 9/1/0 | 8/1/1 | 7/3/0 | 9/1/0 | 8/2/0 | 8/0/2 |
| | 50 | 5/3/2 | 4/6/0 | 9/0/1 | 10/0/0 | 6/3/1 | 5/5/0 | 6/2/2 |
| | 100 | 6/4/0 | 9/1/0 | 10/0/0 | 9/1/0 | 9/1/0 | 7/3/0 | 7/0/3 |
| Composition Functions | 30 | 6/3/1 | 10/0/0 | 8/0/2 | 10/0/0 | 8/2/0 | 9/1/0 | 5/3/2 |
| | 50 | 6/3/1 | 9/1/0 | 8/0/2 | 10/0/0 | 9/1/0 | 7/2/1 | 7/0/3 |
| | 100 | 7/2/1 | 9/1/0 | 6/0/4 | 10/0/0 | 9/1/0 | 10/0/0 | 6/0/4 |
| Whole Set | 30 | 17/8/4 | 26/2/1 | 23/1/5 | 25/3/1 | 24/5/0 | 23/4/2 | 20/3/6 |
| | 50 | 17/7/5 | 20/8/1 | 25/0/4 | 28/0/1 | 23/5/1 | 19/7/3 | 20/2/7 |
| | 100 | 20/6/3 | 25/3/1 | 23/2/4 | 26/3/0 | 26/3/0 | 26/3/0 | 20/0/9 |
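To clarify how the "w/t/l" summaries above relate to the per-function p-value rows: each competitor is marked "+" when it performs significantly worse than SCDLPSO, "−" when significantly better, and "=" when the difference is not significant, and the marks are then tallied into wins/ties/losses. The following is a minimal illustrative sketch, not the authors' code; the helper name `wtl` and the significance level of 0.05 (a common choice for the Wilcoxon rank-sum test used in such comparisons) are assumptions.

```python
# Hypothetical tally of win/tie/loss ("w/t/l") counts from per-function
# rank-sum-test p-values, in the spirit of Tables 4-6.
# A tie is declared when p >= ALPHA; otherwise the sign of the mean-error
# difference decides win vs. loss (minimization: lower mean is better).

ALPHA = 0.05  # assumed significance level

def wtl(results):
    """results: list of (p_value, competitor_mean, scdlpso_mean) per function."""
    w = t = l = 0
    for p, mean_other, mean_ours in results:
        if p >= ALPHA:
            t += 1              # no significant difference: tie ('=')
        elif mean_other > mean_ours:
            w += 1              # competitor significantly worse: win ('+')
        else:
            l += 1              # competitor significantly better: loss ('-')
    return f"{w}/{t}/{l}"

# Made-up numbers in the style of one column of Table 4:
sample = [(1.27e-26, 3.63e1, 2.31e0),   # '+' case
          (2.29e-1,  1.24e3, 6.61e1),   # '=' case
          (5.65e-76, 3.41e-5, 2.31e0)]  # '-' case
print(wtl(sample))  # -> 1/1/1
```

The same tally applied per function category (unimodal, simple multimodal, hybrid, composition) yields the per-category rows of Table 6.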
Yang, Q.; Hua, L.; Gao, X.; Xu, D.; Lu, Z.; Jeon, S.-W.; Zhang, J. Stochastic Cognitive Dominance Leading Particle Swarm Optimization for Multimodal Problems. Mathematics 2022, 10, 761. https://doi.org/10.3390/math10050761