Article

Heterogeneous Genetic Learning and Comprehensive Learning Strategy Particle Swarm Optimizer

1 School of Vehicle and Transportation Engineering, Taiyuan University of Science and Technology, Taiyuan 030024, China
2 State Key Laboratory of Precision Geodesy, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430077, China
3 Shanxi Intelligent Transportation Laboratory Co., Ltd., Taiyuan 030036, China
4 College of Mining Engineering, Taiyuan University of Technology, Taiyuan 030024, China
* Author to whom correspondence should be addressed.
Algorithms 2025, 18(12), 755; https://doi.org/10.3390/a18120755
Submission received: 1 November 2025 / Revised: 24 November 2025 / Accepted: 25 November 2025 / Published: 28 November 2025
(This article belongs to the Special Issue Evolutionary and Swarm Computing for Emerging Applications)

Abstract

In canonical PSO, maintaining an appropriate balance between exploration and exploitation is vital for optimizing particle behavior. However, this balance can be difficult to achieve in some complex scenarios. To tackle this issue, this study proposes an innovative PSO variant, named the heterogeneous genetic learning and comprehensive learning strategy PSO (HGCLPSO). HGCLPSO incorporates the genetic learning strategy (GLS) and the comprehensive learning strategy (CLS) to form heterogeneous sub-populations, effectively balancing exploration and exploitation capabilities. Furthermore, a potentially excellent gene activation (PEGA) mechanism is designed to update the archived position of gbest (Abest) by learning excellent genes from individual particles, further enhancing the exploitation ability of the GLS sub-population. A repulsive mechanism is incorporated into the CLS sub-population to prevent premature convergence and preserve diversity. Additionally, a local search operator based on the BFGS Quasi-Newton method is utilized to fine-tune the best solution during the later stages of evolution. To evaluate the performance of HGCLPSO, it is benchmarked against eight renowned PSO variants and six additional evolutionary algorithms using the CEC2014 and CEC2017 test suites as well as a real-world WSN coverage engineering problem. Experimental outcomes show that HGCLPSO obtains the best average rank on the majority of test problems, which verifies its robustness and competitiveness as an optimization tool for continuous optimization tasks.

1. Introduction

In the field of computational intelligence, a multitude of swarm intelligence optimization algorithms have been proposed or evolved, such as genetic algorithms (GA) [1], particle swarm optimization (PSO) [2], the snow ablation optimizer [3], the starfish optimization algorithm (SFOA) [4], and others. These methods have increasingly become the go-to solutions for complex problems that traditional optimization algorithms struggle to address. Among them, the PSO algorithm has attracted widespread interest in many fields, such as image encryption [5], multi-AGV path planning [6], UWB base station deployment optimization [7], satellite navigation [8,9], and resource allocation [10], due to its simplicity, few parameters, fast convergence, and strong optimization ability.
In the original PSO algorithm, each individual converges or concentrates on the personal historical best position (pbest) and the global best position (gbest) within the population, which can lead to a loss of diversity and premature convergence. As a result, its performance tends to degrade when faced with optimization problems that feature many local optima or are non-separable [11]. To address this, various PSO variants have been developed with the aim of striking the right balance between global exploration and local exploitation in different aspects of the problem, as well as enhancing convergence speed in an effort to find global optimum solutions for diverse challenges. These variants can generally be classified into four main types: firstly, population neighborhood topology and multi-swarm tactics; secondly, parameter control techniques; thirdly, novel learning strategies; and lastly, hybrid PSO methods that integrate with other algorithms. However, a conflict often arises between uni-modal and multi-modal problems: the former benefit from reduced population diversity, which aids exploitation, while the latter require greater population diversity to help the algorithm move away from local optima and facilitate global exploration. A concise summary of these four categories follows.
The first category of PSO variants introduces new neighborhood topological structures and multi-swarm strategies to manage exploration and exploitation through different information-sharing mechanisms among particles, thus mitigating premature convergence. In FIPS [12], each individual updates its position by considering the pbest of all its neighbors, meaning that every particle is influenced by all individuals in the population. Nasir [13] developed the DNLPSO algorithm, in which a particle selects an exemplar from the best positions of its neighbors, allowing its velocity to be influenced by both historical and neighbor-based information. Qu proposed the LIPS algorithm to address multi-modal problems, utilizing local best information from nearby particles (based on Euclidean distance) to guide the search for the global optimum [14]. Liang [15] introduced the DMS-PSO algorithm in 2006, which divides the population into smaller swarms with a continuously evolving neighborhood structure. To tackle large-scale optimization problems, the CCPSO2 [16] algorithm splits the decision variables into smaller multi-swarms.
The second category focuses on adaptive parameter control techniques. In HPSO-TVAC, individual velocity updates use time-varying acceleration coefficients c1 and c2. In the FST-PSO method [17], a novel fuzzy logic (FL) approach independently calculates the inertia weight, cognitive/social component, and velocity limits for each particle, eliminating the need for predefined parameter settings. APSO employs evolutionary state estimation to categorize the population’s state into exploration, exploitation, convergence, and escape phases, and adjusts the inertia weights and acceleration coefficients based on these states [18]. Additionally, APSO-VI [19] dynamically adjusts the inertia weight based on the average absolute velocity, using feedback control to align with a specified nonlinear ideal velocity.
The third category of PSO variants introduces a novel learning approach to enhance search efficiency. This approach, known as the generalized “learning PSO” paradigm, integrates learning operators or specific mechanisms into the traditional PSO algorithm, allowing particles to acquire learning abilities to improve the universality and robustness. The CLPSO algorithm can be a typical representative of this category [20], where each particle updates its decision variables by learning from multiple previous pbest positions. This strategy can increase the population search effectiveness. OLPSO employs orthogonal experimental design (OED) to merge the pbest and gbest positions, creating a learning exemplar for each particle [21]. This orthogonal approach helps guide the search more effectively, improving overall performance. GOPSO utilizes Cauchy mutation and generalized opposition-based learning techniques [22]. Additionally, SL-PSO [23] incorporates social learning strategies, where particles learn from the demonstrators (better-performing particles within the swarm).
GL-PSO [24] divides the optimization process into two stages. Genetic algorithm operators—crossover, mutation, and selection—are employed in the first stage to generate promising learning exemplars. The second stage updates the swarm's positions and velocities using the standard PSO algorithm. Another approach, Dimensional Learning PSO (DLPSO), introduced by Xu [25], enhances learning by allowing each particle's pbest to learn from the corresponding dimension of the gbest. This results in a learning exemplar that combines the best information from both the individual and global experiences. However, no single PSO algorithm with a single learning method has proven universally effective across all problem types. The Heterogeneous Comprehensive Learning PSO (HCLPSO) [26] addresses this by dividing the population into two sub-populations: one focused on exploration, updating velocities using the Comprehensive Learning Strategy (CLS), and the other focused on exploitation, using a global version of the CLS for velocity updates. TSLPSO combines dimensional learning strategies (DLS) and CLS to achieve a relative balance between exploration and exploitation capabilities [25]. HCLDMS-PSO [27] further divides the population into two subgroups: one using CLS with gbest for exploitation, and the other employing the DMS-PSO algorithm to enhance exploration. In Self-Learning PSO (SLPSO) [28], each individual can adaptively switch between four different learning strategies: learning from its own pbest, a randomly chosen nearby position, another particle's pbest, and the gbest position. Numerous other learning mechanisms are also being actively researched, expanding the potential of PSO [29,30,31,32].
The fourth category hybridizes PSO with other algorithms to enhance its performance on complex problems. Each method integrated into a hybrid PSO brings its own strengths. For example, Xin [33] explored several hybrid combinations of Differential Evolution (DE) and PSO. Kıran and Gündüz [34] developed a hybrid PSO and Artificial Bee Colony (ABC) algorithm. In these hybrid algorithms, information is exchanged between the particle swarm and other individuals to improve search capabilities. In addition, various specific search operators have been integrated into PSO, such as mutation operators [25], two differential mutations [35], chaos-based initialization with robust update strategies [36], and aging mechanisms [37]. These auxiliary techniques aim to increase population diversity and accelerate convergence.
Previous studies have highlighted issues with PSO, such as “oscillation” and the “two steps forward, one step back” phenomenon. These problems can be mitigated, and search efficiency enhanced, by using PSO variants with unique learning techniques [20,21,23,24,25,28]. However, for both uni-modal and multi-modal problems, it is difficult for a single learning strategy to achieve a balance between exploration and exploitation capability. In response, we propose a Heterogeneous Genetic Learning and Comprehensive Learning Strategy PSO (HGCLPSO), which combines the GLS and CLS to efficiently identify and preserve promising solutions. This approach is inspired by algorithms such as HCLPSO, HCLDMS-PSO, and TSLPSO. The whole population is divided into two sections to guide the particle search. The GLS sub-population uses a global topology to generate learning exemplars, optimizing exploitation, while the CLS sub-population learns from the pbest of multiple particles across different dimensions, enhancing exploration. However, the GLS sub-population may suffer from a lack of exploitation capacity due to misleading information or insufficient convergence if the ideal solution has not been found. To address this, we introduce a new Potentially Excellent Gene Activation (PEGA) mechanism, which improves the exploitation ability of the GLS sub-population by updating the archived position of gbest (Abest) with high-quality genes from individual particles. Additionally, a repulsive mechanism is incorporated into the CLS sub-population, causing particles to generate repulsive responses based on their distance from the global optimal solution (Abest), preventing premature convergence and maintaining diversity. Lastly, a local search operator utilizing the BFGS Quasi-Newton method is employed in the latter phases of evolution to refine the optimal solution.
The performance of HGCLPSO is initially measured using the 30D and 50D CEC2014 test suite problems, and it is compared to eight advanced PSO variants. Next, the 30D CEC2017 test suite is utilized to further assess HGCLPSO, comparing it against six non-PSO metaheuristics and five advanced PSO variants. The algorithm's effectiveness is then tested on a real-world coverage control problem in wireless sensor networks (WSNs). The experimental results demonstrate that HGCLPSO achieves the best average rank across most test problems, confirming it as a robust and competitive optimization tool for continuous optimization problems.

2. Related Work

2.1. Canonical PSO

In canonical PSO, each particle represents a potential solution with its own position and velocity. Initially, all particles are randomly assigned across the search space. Subsequently, each particle is guided toward pbest and gbest positions by the following equation:
$v_i^d(t+1) = \omega v_i^d(t) + c_1 r_1 [pbest_i^d(t) - x_i^d(t)] + c_2 r_2 [gbest^d(t) - x_i^d(t)]$ (1)
$x_i^d(t+1) = x_i^d(t) + v_i^d(t+1)$ (2)
where d = 1, 2, …, D indexes the dimensions of the optimization problem. The ith particle's position and velocity are $x_i^d(t)$ and $v_i^d(t)$ (i = 1, 2, …, N), respectively. $\omega$ is the inertia weight, $c_1$ and $c_2$ are the acceleration coefficients, and $r_1$ and $r_2$ are two random numbers drawn uniformly from [0, 1]. In addition, $pbest_i^d$ and $gbest^d$ are the pbest and gbest positions for the ith particle in the dth dimension. The particle velocity update consists of three distinct components. The first component, inertia, retains the particle's previous velocity, reflecting its “memory”. The second component, “cognition”, represents the particle's own experience. The third component, “socialization”, involves the exchange of information and collaboration between particles.
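As a minimal illustration, Equations (1) and (2) can be vectorized over the whole swarm. This is a NumPy sketch, not the paper's code; the parameter defaults below are common constriction-style values, not values taken from this article.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, omega=0.7298, c1=1.49618, c2=1.49618, rng=None):
    """One velocity/position update of canonical PSO (Equations (1) and (2)).

    x, v, pbest: arrays of shape (N, D); gbest: array of shape (D,).
    Returns the updated (x, v) pair.
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)   # fresh r1, r2 per particle and dimension
    r2 = rng.random(x.shape)
    v = omega * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```

In practice, a velocity clamp and boundary handling would follow this step; they are omitted here to keep the correspondence with Equations (1) and (2) direct.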

2.2. GL-PSO

The Genetic Learning PSO (GL-PSO) [24] employs a two-layer cascading structure to generate high-quality learning exemplars. In the first layer, GA operators—crossover, mutation, and selection [1]—are applied to produce promising learning examples. The second layer updates the particle velocity according to Equation (3):
$v_i^d(t+1) = \omega v_i^d(t) + c_1 r_1 [e_i^d(t) - x_i^d(t)]$ (3)
where $e_i^d(t)$ is constructed to replace the two exemplars $pbest_i^d$ and $gbest^d$ in guiding the evolution direction of particle i. The pseudo code for generating a promising exemplar for the ith particle is presented in Algorithm 1. In summary, each particle's exemplar is bred by GA operators from the historical data of the particles in each generation. As a result, the generated learning exemplars are not only of high quality but also exhibit good diversity. Numerous numerical experiments have shown that this approach enhances the search capability and efficiency of PSO.
Algorithm 1. Pseudo code for breeding a GLS exemplar $e_i^d(t)$
1: For i = 1 to N do
2:  /* Exemplar update: Crossover */
3:  For d = 1 to D do
4:   Randomly select a particle $k \in \{1, 2, \ldots, N\}$
5:   If $f(P_i) < f(P_k)$ then
6:    $e_i^d = r^d \cdot pbest_i^d + (1 - r^d) \cdot gbest^d$
7:   Else
8:    $e_i^d = pbest_k^d$
9:   End if
10:  End for
11:  /* Exemplar update: Mutation */
12:  For d = 1 to D do
13:   If rand(0, 1) < $p_m$ then
14:    $e_i^d = rand(lb^d, ub^d)$
15:   End if
16:  End for
17:  /* Exemplar update: Selection */
18:  Evaluate $f(e_i)$
19:  If $f(e_i) < f(E_i)$ then
20:   $E_i = e_i$
21:  End if
22:  If $f(E_i)$ has stagnated for $s_g$ generations then
23:   Select local optimal $E_j$ by 0.2N tournament selection
24:  End if
25: End for
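The crossover and mutation steps of Algorithm 1 can be sketched as follows. This is a NumPy illustration under our own conventions (minimization, fitness values passed as an array); the selection step against the stored exemplar $E_i$ and the stagnation-triggered tournament are left to the caller.

```python
import numpy as np

def breed_exemplar(i, pbest, fit_pbest, gbest, lb, ub, pm=0.01, rng=None):
    """Sketch of Algorithm 1's crossover and mutation for particle i.

    pbest: (N, D) personal bests; fit_pbest: (N,) their fitness (minimization);
    gbest: (D,) global best; lb, ub: (D,) search bounds.
    """
    rng = rng or np.random.default_rng()
    N, D = pbest.shape
    e = np.empty(D)
    for d in range(D):                    # crossover, dimension by dimension
        k = rng.integers(N)               # random mate for this dimension
        if fit_pbest[i] < fit_pbest[k]:   # particle i is fitter than k
            r = rng.random()
            e[d] = r * pbest[i, d] + (1 - r) * gbest[d]
        else:
            e[d] = pbest[k, d]            # inherit from the fitter particle k
    mutate = rng.random(D) < pm           # uniform mutation with rate pm
    e[mutate] = lb[mutate] + rng.random(mutate.sum()) * (ub[mutate] - lb[mutate])
    return e
```

The caller would evaluate $f(e_i)$ and keep the new exemplar only if it beats the old one, mirroring lines 18–21 of Algorithm 1.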

2.3. CLPSO

In CLPSO [20], all particles update their velocity by considering both their pbest position and the pbest of other particles across different dimensions according to Equation (4):
$v_i^d(t+1) = \omega v_i^d(t) + c_1 r_1 [pbest_{f_i(d)}^d(t) - x_i^d(t)]$ (4)
where $f_i = [f_i(1), f_i(2), \ldots, f_i(D)]$ determines which particle's pbest is to be followed in the dth dimension. Each individual has a distinct learning probability, denoted as Pc, which is computed using Equation (5):
$Pc_i = a + b \cdot \frac{\exp(10(i-1)/(N-1)) - 1}{\exp(10) - 1}$ (5)
where $a = 0.05$ and $b = 0.45$. A tournament selection method is employed to select the guiding particle; the detailed implementation can be found in the original literature. Accordingly, the learning exemplar $pbest_{f_i(d)}^d$ is a new position that steers the particle toward a new direction in the search space. The refresh gap m is defined as the number of generations a particle may go without improvement before its exemplar is reassigned, preventing function evaluations from being squandered in an unpromising direction. If a particle fails to improve for m consecutive generations, a fresh $pbest_{f_i(d)}^d$ is generated. In this manner, the flying direction of each particle is updated using the best historical data available across all particles. This guarantees that population diversity is preserved by the CLPSO algorithm and premature convergence is avoided. Tests show that while CLPSO is less effective at handling uni-modal problems, it has clear advantages for solving multi-modal challenges.
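The learning probabilities of Equation (5) can be computed in one vectorized line; the sketch below (our helper name, not from the paper) shows that Pc grows exponentially from a for the first particle to a + b for the last, so early-ranked particles mostly learn from their own pbest while late-ranked ones learn more often from others.

```python
import numpy as np

def clpso_pc(N, a=0.05, b=0.45):
    """Learning probabilities Pc_i from Equation (5), one per particle i = 1..N."""
    i = np.arange(1, N + 1)
    return a + b * (np.exp(10 * (i - 1) / (N - 1)) - 1) / (np.exp(10) - 1)
```

For a swarm of 40 particles, Pc ranges from 0.05 for the first particle to 0.50 for the last, increasing monotonically.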

3. The Proposed HGCLPSO Method

In GL-PSO, the learning exemplar is created using both the particle's pbest and gbest in a global topology, similar to the global version of PSO. While this mechanism enables a high convergence rate, it can also lead to a rapid loss of particle diversity, reducing the algorithm's exploration capability. Experimental results indicate that GL-PSO performs less effectively on complex multi-modal functions than on uni-modal and simpler multi-modal functions [24], highlighting the GLS's strong exploitation ability. In contrast, CLPSO adjusts individual velocity by considering the pbest positions of all other particles, which allows each dimension of a particle to draw knowledge from various sources. The CLS therefore enhances population diversity and strengthens global exploration, helping a particle escape a local optimum by learning from others and thus avoiding premature convergence. By combining the strengths of the GLS and CLS, it is possible to effectively balance exploration and exploitation capabilities.
(1)
GLS sub-population and CLS sub-population hybrid
We suggest employing the GLS and CLS to form heterogeneous sub-populations, which in turn boost the canonical PSO algorithm's search capacity within the search space. The first sub-population, of size N1, is allocated to the GLS and is responsible for exploitation in the search range; the second sub-population, with the remaining N2 (i.e., N − N1) particles, is assigned to the CLS for exploration. It is worth noting that the learning exemplar $pbest_{f_i(d)}^d$ constructed for CLS sub-population particles learns from different dimensions of individuals across the whole population, not just within the CLS sub-population.
(2)
Potentially excellent gene activation mechanism (PEGA)
In the GLS sub-population, if the gbest has not yet converged to the optimal solution, its particles may maintain good diversity but lack sufficient exploitation, which negatively impacts the balance of search capabilities in the proposed HGCLPSO algorithm. In canonical PSO, the gbest is updated or replaced by the pbest information in all dimensions at once, which can cause valuable information to be lost. Two opposing factors affect the gbest position: on the one hand, improvements in certain dimensions of gbest can enhance the fitness of the overall best individual; on the other hand, while the overall fitness may improve, some dimensions of the gbest position may worsen, leading to the “two steps forward, one step back” phenomenon.
To address or mitigate this issue, we introduce the Potentially Excellent Gene Activation (PEGA) mechanism, which monitors the pbest position to help accelerate the improvement of the gbest position, specifically the archived gbest position (Abest). The Abest position is initially the same as the gbest position in the first generation. As a particle’s pbest improves over time, a counter, success_count, is incremented. When success_count exceeds a threshold value S, the pbest position may contain useful information in certain dimensions, even if its overall fitness is low. In such cases, the gbest position should incorporate potentially valuable information from the corresponding dimensions of the pbest. However, identifying which dimensions or combinations of dimensions from the improved pbest are beneficial for the Abest position is challenging. To address this, we define a learning probability (PL) for each dimension of Abest, based on a Sigmoid function, to determine how much information should be transferred from the pbest particle, as follows:
$PL^d = \frac{1}{1 + \exp\left(-\frac{1}{5}\left(d - \frac{D}{2}\right)\right)}$ (6)
where $PL^d$, d = 1, 2, …, D, is distributed between 0.0 and 1.0 for the dth dimension of all particles. For 30D problems, the PL values for different dimensions are shown in Figure 1a. For a given dimension d, if a random number is less than $PL^d$, then $Abest^d$ is replaced by $pbest^d$, and the fitness of the new Abest position, $fit(Abest)$, is evaluated. If the new Abest position performs better than the previous one, it replaces the old one; otherwise, the update is skipped. Algorithm 2 outlines the process for updating the Abest position. Introducing the learning probability has two key advantages: first, it helps HGCLPSO save computational resources, and second, it reduces the likelihood of incorporating potentially irrelevant information from the pbest position.
Algorithm 2. Pseudo code for updating the Abest position
1: For i = 1 to N do
2:  If success_count(i) > S then
3:   For d = 1 to D do
4:    If rand < $PL^d$ then
5:     temp = Abest
6:     If $temp^d$ == $pbest_i^d$ then
7:      continue
8:     End if
9:     $temp^d = pbest_i^d$
10:     If fit(temp) < fit(Abest) then
11:      $Abest^d = temp^d$
12:     End if
13:    End if
14:   End for
15:  End if
16: End for
After generating the Abest position, the formula for breeding a GLS exemplar $e_i^d$ in Algorithm 1, line 6, is replaced as follows:
$e_i^d = r^d \cdot pbest_i^d + (1 - r^d) \cdot Abest^d$ (7)
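The dimension-wise trial-and-accept loop of Algorithm 2 can be sketched as below. This is a NumPy illustration under our own conventions (a minimization fitness callable, a single improved pbest at a time); it assumes the Sigmoid learning probability of Equation (6).

```python
import numpy as np

def pega_update(abest, pbest, fit, rng=None):
    """Sketch of the PEGA update (Algorithm 2) for one improved pbest.

    abest, pbest: (D,) positions; fit: fitness callable (minimization).
    Dimension d of Abest tries to inherit pbest[d] with probability PL^d
    from Equation (6); each change is kept only if the fitness improves.
    """
    rng = rng or np.random.default_rng()
    D = len(abest)
    d_idx = np.arange(1, D + 1)
    PL = 1.0 / (1.0 + np.exp(-(d_idx - D / 2) / 5.0))  # Equation (6)
    abest = abest.copy()
    f_abest = fit(abest)
    for d in range(D):
        if rng.random() < PL[d] and abest[d] != pbest[d]:
            trial = abest.copy()
            trial[d] = pbest[d]           # try this single "gene"
            f_trial = fit(trial)
            if f_trial < f_abest:         # keep only improving genes
                abest, f_abest = trial, f_trial
    return abest
```

Since worsening changes are discarded, the returned Abest is never worse than the input, at the cost of one extra fitness evaluation per attempted gene.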
(3)
Repulsive mechanism
Although the CLS sub-population can maintain high population diversity, as evolution progresses its particles gradually learn from the excellent genes of the GLS sub-population and lose diversity. Moreover, when solving complex problems with many local optima, the population may settle in a pseudo-optimal region and spread misleading information, and CLS sub-population particles are then easily attracted to the same region. To prevent premature convergence in the CLS sub-population and avoid clustering, we introduce a repulsive mechanism [38]. If a particle in the CLS sub-population comes too close to the current optimal position (Abest), its position and velocity are adjusted using Equations (8) and (9). Meanwhile, the GLS sub-population continues to operate unaffected by this mechanism.
$x_i^d = x_i^d \pm r^d (U^d - L^d) \exp(-dist_i), \quad dist_i = \sqrt{\sum_{d=1}^{D} (Abest^d - x_i^d)^2}$ (8)
$v_i^d = L^d + r^d (U^d - L^d)$ (9)
where $r^d$ is a random number uniformly distributed in [0, 1], and $U^d$ and $L^d$ represent the upper and lower limits of the search space, respectively. It is worth noting that the second factor in Equation (8) is an exponential decay function of the distance $dist_i$. According to the repulsive force curve shown in Figure 1b, a CLS sub-population particle close to Abest is repulsed with greater force to explore more distant regions of the search range, whereas a particle far from Abest is hardly affected. Since this repulsive mechanism strongly disrupts the evolution of the CLS sub-population, it is applied only every R generations, stimulating the CLS sub-population to retain high diversity.
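The distance-dependent kick of Equations (8) and (9) can be sketched for a single particle as follows. This is a NumPy illustration under our own conventions; the random sign stands in for the "±" of Equation (8).

```python
import numpy as np

def repulse(x, abest, lb, ub, rng=None):
    """Sketch of the repulsive update, Equations (8) and (9), for one particle.

    A particle close to Abest receives a large perturbation (the exp(-dist)
    factor is near 1), while a distant particle is barely moved; the velocity
    is reinitialized uniformly in the search range.
    """
    rng = rng or np.random.default_rng()
    dist = np.sqrt(np.sum((abest - x) ** 2))
    sign = rng.choice([-1.0, 1.0], size=x.shape)      # the "+/-" in Eq. (8)
    x_new = x + sign * rng.random(x.shape) * (ub - lb) * np.exp(-dist)
    v_new = lb + rng.random(x.shape) * (ub - lb)      # Eq. (9)
    return x_new, v_new
```

Because exp(-dist) decays so quickly, only particles within a few units of Abest receive a meaningful kick, which matches the repulsive force curve in Figure 1b.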
(4)
BFGS Quasi-Newton local search
Since exploitation is key to improving solution accuracy, we incorporate the widely used BFGS Quasi-Newton method [39,40] into HGCLPSO as a local search operator. Only Abest is chosen for the local search procedure in the later evolutionary stage of HGCLPSO. In all tests, the Quasi-Newton technique receives no analytic gradient information; we assign 0.05 * MaxFEs evaluations to refine Abest with the BFGS Quasi-Newton method. In this study, the BFGS Quasi-Newton local search operator is realized using the function “fminunc” in MATLAB R2021a.
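Outside MATLAB, the same refinement step can be approximated with SciPy's BFGS implementation. This is our stand-in for fminunc, not the paper's code; as in the paper, no analytic gradient is supplied, so BFGS falls back to finite-difference estimates, and the evaluation budget is imposed only loosely via the iteration limit.

```python
import numpy as np
from scipy.optimize import minimize

def bfgs_refine(fun, abest, max_iters):
    """Refine Abest with BFGS, mirroring the paper's use of MATLAB fminunc.

    fun: objective (minimization); abest: (D,) start point; max_iters is a
    rough proxy for the 0.05 * MaxFEs budget used in the paper.
    """
    res = minimize(fun, abest, method="BFGS", options={"maxiter": max_iters})
    # keep the refined point only if it actually improves on Abest
    return res.x if res.fun < fun(abest) else abest
```

On smooth landscapes this polishing step typically gains several orders of magnitude in solution accuracy at modest evaluation cost.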
Incorporating all the aforementioned features, the pseudo code for HGCLPSO is provided in Algorithm 3. As given in the Supplementary Materials, the MATLAB source code for HGCLPSO can be accessed at https://github.com/wangshengliang2018/HGCLPSO (accessed on 2 September 2022).
Algorithm 3. Pseudo code for the proposed HGCLPSO algorithm
1:Set population size parameter N, N1, N2; Set learning probability parameter PL;
2:Set GLS and CLS parameter;
3:Initialize particle position Xi and velocity Vi (1 ≤ i ≤ N);
4:Evaluate Xi and record the fitness fit(Xi), fes = N;
5:Initialize the pbesti = Xi and the pbest fitness value fit(pbesti) = fit(Xi);
6:Initialize the gbest = [pbesti |min(fit(pbesti)), 1 ≤ i ≤ N]; Abest = gbest;
7:Initialize the GLS exemplar by Algorithm 1 with Equation (7) for the first sub-population;
8:Initialize the CLS exemplar for the second sub-population;
9:While (iter <= max_iteration) and (fes <= 0.95*MaxFEs)
10:fit(last_pbest) = fit(pbest);
11:For i = 1:N1
12:  Update the GLS sub-population Xi and Vi according to Equations (3) and (2);
13:End for
14:If mod(iter, R) == 0 then
15:  For i = N1 + 1:N
16:  Update the CLS sub-population Xi and Vi according to Equations (8) and (9);
17:  End for
18:Else
19:  For i = N1 + 1:N
20:  Update the CLS sub-population Xi and Vi according to Equations (4) and (2);
21:  End for
22:End if
23:For i = 1:N
24:  Evaluate all Xi fitness;
25:  Update the pbesti and gbest;
26:  If fit(gbest) < fit(Abest) then
27:   Abest = gbest;
28:  End if
29:  If  fit(pbesti) < fit(last_pbesti) then
30:   success_count(i) = success_count(i) + 1;
31:  Else
32:   success_count(i) = 0;
33:  End if
34:  Update Abest position by Algorithm 2;
35:End for
36: Update the GLS exemplar by Algorithm 1 with Equation (7);
37: Update the CLS exemplar;
38:End While
39:Assign 0.05∗MaxFEs for Abest to carry out the BFGS Quasi-Newton local search operator.
Population diversity is a key indicator for assessing both exploitation and exploration abilities. We analyze the diversity of the GLS sub-population, the CLS sub-population, and the overall population throughout the evolution process. The formulas for calculating the diversity measure are given in Equations (10) and (11):
$Diversity = \frac{1}{N} \sum_{i=1}^{N} \sqrt{\sum_{d=1}^{D} \left( X_i^d(t) - \bar{X}^d \right)^2}$ (10)
$\bar{X}^d = \frac{1}{N} \sum_{i=1}^{N} X_i^d(t)$ (11)
where N represents the population size, and $\bar{X}^d$ refers to the average position in the dth dimension across the entire population. Small population diversity indicates that the particles are condensed near the population center and the population is exploiting a constrained area. High population diversity indicates that the particles are spread away from the center and a larger area is being explored.
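The diversity measure of Equations (10) and (11) is simply the mean Euclidean distance of the particles from the swarm centroid, which can be computed in two lines (a NumPy sketch with our helper name):

```python
import numpy as np

def diversity(X):
    """Population diversity of Equations (10) and (11).

    X: (N, D) array of particle positions; returns the mean Euclidean
    distance of the N particles from the swarm centroid.
    """
    centroid = X.mean(axis=0)                                 # Equation (11)
    return np.sqrt(((X - centroid) ** 2).sum(axis=1)).mean()  # Equation (10)
```

A fully collapsed swarm gives a diversity of exactly zero, while a widely scattered swarm gives a value on the order of the search range.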
Figure 2 displays the diversity curves for F1, F7, F9, F15, F19, and F27 of the CEC2014 test suite [41]. According to Figure 2, the CLS sub-population usually has higher diversity than the GLS sub-population, while the diversity of the whole population falls between the two. Since no gbest information serves as a central guiding direction, it is not surprising that the CLS sub-population maintains the highest diversity. Additionally, the repulsive mechanism clearly produces periodic rises in the diversity of the CLS sub-population. In the early stages of evolution, the CLS sub-population members are relatively distant from the gbest individual, resulting in a weak repulsive force. As evolution progresses, the CLS sub-population moves closer to the best individuals, generating a stronger repulsion that helps prevent premature convergence. The global topological structure and the PEGA update mechanism for the GLS exemplar yield the smallest diversity and the fastest convergence in the GLS sub-population, respectively. Due to the collaborative effort of the GLS and CLS sub-populations in balancing exploitation and exploration, the overall population diversity remains moderate. The role design of the GLS and CLS sub-populations fulfills our expectations, as demonstrated by the experimental comparison of diversity measures.

4. Experimental Results and Analysis

In Section 4.1, we modify the key parameters of the HGCLPSO algorithm using the CEC2014 test suite, focusing on uni-modal and simple multi-modal functions (F1~F16). Section 4.2 compares the performance of HGCLPSO with eight advanced PSO variants using the CEC2014 test suite to evaluate its robustness against shift and rotation functions. In Section 4.3, we compare HGCLPSO with other state-of-the-art evolutionary algorithms (EAs) on the CEC2017 test suite [42]. Section 4.4 applies HGCLPSO and other EAs to solve the coverage problem in WSNs. The CEC2014 test suite contains 30 test functions, and the problem definitions and code are available for download from the GitHub repository (https://github.com/P-N-Suganthan/CEC2014) (accessed on 2 September 2022). The CEC2017 test suite (https://github.com/P-N-Suganthan/CEC2017-BoundContrained) (accessed on 2 September 2022) consists of 29 test functions. In this suite, F2 is excluded due to its instability, especially in higher dimensions.
The evaluation criteria involve calculating the mean error and standard deviation for each algorithm, then ranking their performance. The final ranking for each algorithm is determined by averaging its individual ranks. In Section 4.2 and Section 4.3, we apply the widely used non-parametric Wilcoxon rank sum test [43] at a 0.05 significance level to statistically assess the significance of the differences between the results of the algorithms compared, strengthening the validity of the experimental findings. The symbols “+”, “=”, and “−” respectively indicate that the performance of the HGCLPSO algorithm is significantly better than, without significant difference, and worse than the compared algorithms. In addition, we also performed the Friedman test to determine whether the algorithm results were statistically significant, further supporting the experimental conclusions. All algorithms are implemented in MATLAB R2021a and executed on a laptop with an Intel Core i5 11400 CPU (2.60 GHz) and 8 GB RAM, running Microsoft Windows 10.

4.1. HGCLPSO Algorithm Parameter Tuning

Two parameters need to be tuned in the proposed HGCLPSO algorithm: the two sub-population sizes (N1 and N2) and the threshold parameter S in the Abest update process. Our preliminary experiments indicate that setting S to 3 leads to excessive use of computational resources, which negatively impacts the population's evolution. On the other hand, when S is set greater than 5, the update probability of Abest decreases significantly, hindering the GLS sub-population's exploitation ability. When S is set to 4, these issues are well balanced, so we do not discuss the tuning of this parameter further. We use the CEC2014 benchmark functions to test and compare the influence of the two sub-population sizes (N1 and N2). The number of runs, population size, dimensionality, and MaxFEs are set to 31, 40, 30, and 300,000, respectively. The final ranking is determined by calculating the average rank across the 16 benchmark functions.
When N1 equals zero, the proposed HGCLPSO degenerates into the canonical CLPSO algorithm augmented with the repulsive mechanism and the BFGS quasi-Newton local search; when N1 equals N, it becomes the GL-PSO algorithm augmented with the Abest update mechanism and the BFGS quasi-Newton local search. These two cases are therefore excluded from the parameter tuning. We vary N1 from 0.1*N to 0.9*N in steps of 0.1*N, giving nine configurations for the final ranking analysis. Table 1 presents the results of tuning the two sub-population sizes (N1 and N2). Based on the final rankings, the combination N1 = 0.8*N and N2 = 0.2*N yields the best average rank on F1–F16 compared to the other configurations. Thus, the sub-population sizes N1 = 0.8*N and N2 = 0.2*N, together with the threshold parameter S = 4, are adopted in all subsequent experiments. While this configuration may not be optimal for every minimization problem, it generally enables HGCLPSO to achieve satisfactory solutions across a wide range of functions.
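The average-rank statistic used for this tuning can be sketched as follows (an illustrative Python version; the mean-error table is invented for demonstration and is not the data of Table 1):

```python
# Rank candidate (N1, N2) configurations per benchmark function, then average
# the ranks over all functions to obtain the final ranking.
import numpy as np
from scipy.stats import rankdata

N = 40
configs = [f"N1={f:.1f}*N" for f in np.arange(0.1, 1.0, 0.1)]  # nine splits
n_funcs = 16                                                    # F1-F16

rng = np.random.default_rng(1)
mean_err = rng.random((n_funcs, len(configs)))  # rows: functions, cols: configs

# Rank 1 = lowest mean error on that function (ties would be averaged).
ranks = np.apply_along_axis(rankdata, 1, mean_err)
avg_rank = ranks.mean(axis=0)
best = configs[int(np.argmin(avg_rank))]
print("best configuration:", best)
```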

4.2. Experimental Results and Analysis of CEC2014 Test Suite

In this section, we compare HGCLPSO with eight other state-of-the-art PSO variants to validate its effectiveness on the CEC2014 test suite. These variants include CLPSO [20], OLPSO [21], SL-PSO [23], HCLPSO [26], GL-PSO [24], EPSO [44], TSLPSO [25], and HCLDMS-PSO [27]. Detailed parameter settings are provided in Table 2. CLPSO and GL-PSO are discussed in Section 2.2 and Section 2.3, respectively. OLPSO leverages an orthogonal learning strategy to generate learning exemplars for particle updates. SL-PSO integrates a dimension-dependent parameter management strategy with social learning for updating positions and velocities. HCLPSO employs a comprehensive learning strategy and divides the population into two subgroups to perform exploration and exploitation functions, respectively. EPSO combines five PSO strategies—such as inertia weight PSO and CLPSO-gbest—through a self-adaptive mechanism. TSLPSO uses DLS and CLS techniques to manage exploration and exploitation. Meanwhile, HCLDMS-PSO separates the population into CL and DMS subgroups, with the CL subgroup focused on exploitation using CLS and the DMS subgroup on exploration using DMS-PSO.

4.2.1. Results for 30 Dimensional Problems

The unified parameters for all 30 CEC2014 test problems are as follows: population size = 40, problem dimension = 30, number of runs = 31, and MaxFEs = 300,000. The experimental outcomes are listed in Table 3, with the best rank results highlighted in bold. Additionally, Table 4 presents the statistical results of the Wilcoxon rank sum test, while Table 5 displays the Friedman test rankings for all the algorithms when applied to 30-dimensional problems.
For F1–F3, HGCLPSO outperforms the others on F1 and F3, while GL-PSO provides the best solution on F2. Notably, HGCLPSO ranks third on F2, demonstrating that it inherits the strong exploitation capability of GL-PSO, which contributes to its excellent performance on uni-modal problems. For F4–F16, HGCLPSO ranks first on F4, F5, F8, and F12, and second on F10. SL-PSO performs best on the multi-modal functions F6, F9, and F11. HCLDMS-PSO leads on F13, F15, and F16, while TSLPSO excels on F7 and F14, and GL-PSO ranks highest on F10. For the hybrid and composition functions F17–F30, HGCLPSO delivers the best results on six of the 14 test functions: F17, F20, F21, F23, F25, and F30. CLPSO performs best on F18, F24, and F29. HCLDMS-PSO achieves the best solution on F19 and F26, SL-PSO excels on F27 and F28, and HCLPSO outperforms the others on F22. Overall, HGCLPSO demonstrates the strongest performance, mainly due to the exploration capability of its CLS sub-population. The final row of Table 3 summarizes the number of (Best/2nd Best/Worst) rankings for each algorithm. In total, HGCLPSO delivers the best performance on 12 of the 30 benchmark problems, achieving the highest overall ranking, while TSLPSO ranks second among the compared PSO algorithms.
Table 4 demonstrates that HGCLPSO outperforms the other eight PSO variants on most of the 30D CEC2014 test problems. Notably, it achieves significantly better solutions on F1, F3, F4, F17, F20, F21, F25, and F30, surpassing all other algorithms in these cases. Additionally, the Friedman test results in the last row of Table 5 reveal a p-value of 2.30461 × 10−13, indicating statistically significant differences between the nine algorithms. The average rank of HGCLPSO across all 30D CEC2014 benchmark functions is 3.62, confirming that it is the top-performing PSO variant in the comparison.
To analyze the convergence performance, convergence graphs based on 31 runs for the 30D problems are presented. Figure 3 and Figure 4 show the convergence curve of the nine PSO variants for 12 CEC2014 benchmark functions, i.e., F1, F3, F4, F8, F10, F12, F17, F20, F21, F23, F25, and F30. It is evident from the graphs that HGCLPSO demonstrates a faster convergence rate compared to the other eight PSO variants in the early stages of evolution, particularly on F1, F10, F17, F21, and F25. While algorithms like TSLPSO, SL-PSO, and GL-PSO also show rapid convergence on functions such as F3, F4, F10, F12, F20, F23, and F30, HGCLPSO leverages its local search capabilities in the later stages to achieve better overall solutions. In summary, the convergence analysis confirms that HGCLPSO not only converges quickly but also consistently delivers superior performance across most 30D benchmark problems.

4.2.2. Results for 50 Dimensional Problems

For the 50D CEC2014 test suite functions, the parameters are set as follows: population size = 40, number of runs = 51, and MaxFEs = 500,000. The experimental results are summarized in Table 6, Table 7 and Table 8. Since the convergence behavior is similar to that observed for the 30D problems, the corresponding convergence graphs are not included in this section.
For F1–F16, HGCLPSO achieves the best results on F1, F3, F4, F5, F8, F12, and F16, totaling seven of the 16 test functions. CLPSO performs best on the uni-modal function F2, with HGCLPSO ranking second. TSLPSO excels on F7, F14, and F15. HCLDMS-PSO delivers the best solutions on F9 and F13. SL-PSO performs best on F6 and F11, while GL-PSO excels on F10. For the hybrid and composition functions F17–F30, HGCLPSO shows excellent performance on F17, F20, F21, F23, F25, and F30, achieving the best results on six of the 14 test problems. HCLDMS-PSO leads on F22, F26, and F28. TSLPSO performs best on F19 and F24, CLPSO is the top performer on F18 and F29, and SL-PSO achieves the best result on F27. As indicated by the average ranks in Table 6, HGCLPSO achieves the optimal average rank of 3.03, while TSLPSO ranks second with an average of 3.07. This outcome mirrors the 30D results; the narrow margin is mainly attributable to HGCLPSO’s weaker performance on functions such as F6, F9, F11, F19, and F27.
Table 7 reveals that HGCLPSO significantly outperforms the eight other PSO variants on most of the 50D CEC2014 test functions. Additionally, Table 8 presents the Friedman test rankings for all nine algorithms applied to the 50D problems. The p-value of 3.4904 × 10−17, reported in the last row of Table 8, strongly suggests that there are substantial differences between the nine algorithms. Among them, HGCLPSO ranks the highest with an average rank of 3.37, indicating its superior performance.

4.2.3. Results for Computational Time

Table 9 presents the mean computation time of nine PSO algorithms. The computation time (in seconds) for each run was recorded for 30D and 50D CEC2014 problems. Among the compared algorithms, OLPSO requires the least computation time, followed by SL-PSO, which takes a slightly longer time. The computation times of the HGCLPSO and GL-PSO algorithms are nearly identical. The use of the CL strategy enhances the search performance of CLPSO, HCLPSO, and TSLPSO, but results in higher computational costs. EPSO and HCLDMS-PSO, which combine various PSO strategies to leverage their complementary strengths, also incur greater computational costs. Overall, HGCLPSO strikes a balance between competitive performance and computational efficiency, delivering excellent results without excessive time consumption.

4.3. Experimental Results and Analysis of CEC2017 Test Suite

In this section, the performance of HGCLPSO is further evaluated on the 30D CEC2017 test suite problems [42] with the same parameter settings: MaxFEs set to 300,000, a population size of 40, and 31 independent runs per algorithm. This comparison aims to show that HGCLPSO remains competitive when contrasted with several state-of-the-art EAs. In addition to the HCLDMS-PSO, TSLPSO, GL-PSO, HCLPSO, and CLPSO algorithms discussed in Section 4.2, the following advanced EAs are also included in the comparison:
  • moth flame optimization (MFO) [45]
  • whale optimization algorithm (WOA) [46]
  • Artificial Bee Colony Algorithm (ABC) [47]
  • butterfly optimization algorithm (BOA) [48]
  • bat algorithm (BA) [49]
  • grey wolf optimizer (GWO) [50]
The parameter settings for all PSO variants remain consistent with those used in the previous experiments, and the other EAs are based on the configurations from their original references. The mean errors and standard deviations are presented in Table 10, while the ranks of mean performance and the Wilcoxon rank sum test results are shown in Table 11.
From the results in Table 10 and Table 11, HGCLPSO performs best on 12 out of 29 test functions (F1, F3, F4, F12–F15, F18, F25, F27, F28, and F30). HCLDMS-PSO excels on F5, F7–F9, F16, F17, F20, F21, F23, F24, and F29. HCLPSO achieves the best results on F10 and F26, CLPSO provides the best solution on F19 and F22, TSLPSO performs best on F11, and ABC ranks first on F6. Based on the average ranks, HGCLPSO takes the top spot, followed by HCLDMS-PSO in second and HCLPSO in third. Table 11 clearly shows that HGCLPSO is superior to most of the other evolutionary algorithms: its performance is nearly on par with that of HCLDMS-PSO, TSLPSO, and HCLPSO, while it outperforms GL-PSO and CLPSO on most test functions. Notably, HGCLPSO also surpasses several well-known EAs, including ABC, MFO, WOA, BOA, GWO, and BA. Table 12 shows that HGCLPSO differs significantly from all the other PSO variants and the six other EAs on the CEC2017 test suite, with a Friedman test p-value of 3.98 × 10−49 and an average rank of 3.25.

4.4. HGCLPSO Performance on the WSNs Coverage Problem

HGCLPSO is further evaluated using the WSNs coverage control problem, a widely recognized real-world optimization challenge [51]. WSNs are integral to various applications such as target tracking, disaster warning, and wearable technology. These networks are typically large and dense, with significant overlap in their coverage areas. Random deployment of sensor nodes often fails to ensure complete coverage, leading to coverage gaps within the WSNs. Therefore, coverage control plays a crucial role in optimizing WSN architecture and minimizing energy consumption.
Suppose $N$ sensor nodes are deployed randomly in a two-dimensional target monitoring area, forming a node set $G = \{n_1, n_2, n_3, \ldots, n_N\}$, where $n_i$ denotes node $i$. The coordinates of node $n_i$ are $(x_i, y_i)$, and its sensing radius is $r_i$. If a point $P(x_p, y_p)$ lies within the sensing radius of node $n_i$, it is considered covered by $n_i$. The sensing rate $p(n_i, P)$ of pixel $P$ with respect to node $n_i$ is as follows:

$$p(n_i, P) = \begin{cases} 1, & d(n_i, P) \le r_i \\ 0, & d(n_i, P) > r_i \end{cases}$$

where the distance between pixel $P$ and node $n_i$ is $d(n_i, P) = \sqrt{(x_i - x_p)^2 + (y_i - y_p)^2}$. Because pixel $P$ may be covered by several nodes simultaneously, the joint probability that pixel $P$ is covered by the node set $G$ is:

$$p(G, P) = 1 - \prod_{i=1}^{N} \bigl(1 - p(n_i, P)\bigr)$$

Finally, the coverage rate of the target area is formulated as follows:

$$\text{Maximize } F(x) = \frac{\sum_{P \in\, j \times k} p(G, P)}{j \times k}$$
where j and k denote the length and width of the target area. Swarm intelligence is an effective approach for optimizing node deployment and enhancing network performance. This experiment uses a 30D WSNs coverage control problem with the number of sensor nodes N set to 15. The sensing radius r i is 15 m, and the target region measures 100 × 100 m2. All 15 EAs from Section 4.2 and Section 4.3 are compared. The population size, number of runs, and MaxFEs are set to 40, 30, and 300,000, respectively.
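The binary-disk coverage model above can be sketched as follows (an illustrative Python version; the paper's implementation is in MATLAB). Pixels are sampled at 1 m cell centres, and the node positions below are random placeholders rather than an optimized deployment:

```python
# A pixel is covered if it lies within the sensing radius of at least one node;
# the coverage rate F(x) is the fraction of covered pixels in the j x k area.
import numpy as np

def coverage_rate(nodes, r, width, height):
    """nodes: (N, 2) array of (x, y) positions; r: sensing radius in metres."""
    xs, ys = np.meshgrid(np.arange(width) + 0.5, np.arange(height) + 0.5)
    pixels = np.stack([xs.ravel(), ys.ravel()], axis=1)         # (j*k, 2)
    # d(n_i, P) for every node/pixel pair via broadcasting.
    d = np.linalg.norm(pixels[:, None, :] - nodes[None, :, :], axis=2)
    covered = (d <= r).any(axis=1)                              # p(G, P) in {0, 1}
    return covered.mean()

rng = np.random.default_rng(2)
nodes = rng.uniform(0, 100, size=(15, 2))   # 15 nodes in the 100 m x 100 m region
print(f"coverage rate: {coverage_rate(nodes, 15.0, 100, 100):.4f}")
```

In HGCLPSO's formulation, the 15 node coordinates are flattened into a 30-dimensional decision vector and the coverage rate is the fitness to be maximized.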
As shown in Table 13, HGCLPSO achieves the highest mean coverage rate of 0.9351. It outperforms EPSO and ABC, which rank second and third, respectively. Additionally, HGCLPSO surpasses six other EAs and seven PSO variants. The method demonstrates a notably fast convergence in the early stages, as evidenced by the coverage rate convergence curve in Figure 5. Overall, HGCLPSO is not only effective for a wide range of benchmark functions but also well-suited to address real-world engineering problems.

4.5. Discussion

The above experimental results demonstrate the effectiveness of the HGCLPSO algorithm in tackling a range of problems from the CEC2014 and CEC2017 test suites, as well as a real-world engineering challenge. It excels in terms of convergence accuracy, speed, and reliability. The learning approaches used by algorithms such as CLPSO, OLPSO, SL-PSO, and GL-PSO, which rely on a single learning strategy, are insufficient for addressing more complex problems. In contrast, HGCLPSO employs both GLS and CLS to construct exploitation and exploration sub-populations, respectively. Similarly to HCLPSO, TSLPSO, and HCLDMS-PSO, HGCLPSO uses different learning strategies to generate these sub-populations. A novel PEGA mechanism is introduced to enhance exploitation, and a repulsive mechanism is incorporated to prevent premature convergence of the CLS sub-population. Additionally, a local search operator is applied in the later stages of evolution to help HGCLPSO avoid local optima. Thanks to these strategies, HGCLPSO demonstrates strong optimization performance on most problems, achieving the best results on 12 and 13 functions out of the 30D and 50D CEC2014 test suite problems, respectively. However, the HGCLPSO algorithm does not perform as well on certain functions, such as F6, F7, F22, and F27, which affects its overall ranking. In the CEC2017 test suite, HGCLPSO shows sub-optimal performance on F7, F8, F9, F10, F21, and F24. This behavior can be explained by the “no free lunch” theorem [52], which states that no single algorithm can outperform all others across every type of problem.
The ranks from each function, along with two statistical tests, indicate that HGCLPSO, as well as three two-sub-population PSO variants utilizing the CL strategy (HCLPSO, TSLPSO, and HCLDMS-PSO), generally outperform the other PSO variants. By leveraging the enhanced exploration ability of the CL strategy, these PSO variants construct exploitation sub-populations through different mechanisms. This combination creates an effective paradigm to balance exploration and exploitation capacity, helping particles explore new areas, increase their chances of escaping local optima, and avoid premature convergence. Furthermore, in HGCLPSO, GL and CL exemplars guide particle searches, replacing the conventional pbest and gbest. This single-guided learning mechanism helps avoid the “oscillation” problem often seen with dual guidance in traditional PSOs. In GL-PSO, there is no guarantee that the learning exemplars will remain effective across all dimensions. If exemplars degrade in certain dimensions, particles learning from them may hinder the algorithm’s efficiency, leading to the “two steps forward, one step back” issue. To address this, the proposed PEGA mechanism helps preserve the valuable information in particles, ensuring that the exemplars do not degrade and mitigating this phenomenon.

5. Conclusions

In this paper, we introduce a novel PSO variant, HGCLPSO, which utilizes GLS and CLS to create two distinct sub-populations, effectively balancing exploration and exploitation—a long-standing core challenge in the field of swarm intelligence optimization. To enhance the exploitation capability of the GLS sub-population, we propose a Potentially Excellent Gene Activation (PEGA) mechanism. This mechanism updates the Abest position by learning from high-performing genes of individual particles, ensuring the discovery of better global solutions and filling the gap in targeted gene-driven optimization for PSO variants. Additionally, to prevent premature convergence in the CLS sub-population and preserve diversity, we incorporate a repulsive mechanism that increases the repulsive force when individuals are too close to the optimal position. Finally, in the later stages of evolution, a BFGS quasi-Newton local search operator is applied to the Abest to help avoid local optima, which strengthens the reliability of optimization results for complex problems.
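The late-stage refinement step can be illustrated with a minimal Python sketch (the paper's implementation is in MATLAB): the archived best position is handed to a BFGS quasi-Newton routine and kept only if the refinement improves it. The sphere objective and starting point below are placeholders, not part of the original algorithm:

```python
# Hedged sketch of late-stage local refinement with BFGS: fine-tune the best
# solution found by the swarm and accept the result only if it improves.
import numpy as np
from scipy.optimize import minimize

def sphere(x):
    # Placeholder objective standing in for a benchmark function.
    return float(np.sum(x ** 2))

abest = np.full(10, 0.3)               # stand-in for the archived best position
res = minimize(sphere, abest, method="BFGS")
refined = res.x if res.fun < sphere(abest) else abest  # keep only improvements
print("before:", sphere(abest), "after:", sphere(refined))
```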
We compare HGCLPSO with eight state-of-the-art PSO variants using the CEC2014 test suite, evaluating solution accuracy, convergence speed, statistical significance, and computational time. The results show that HGCLPSO achieves competitive accuracy while maintaining fast convergence. Additional experiments on the CEC2017 test suite and real-world engineering problems further demonstrate the superiority of HGCLPSO over several other PSO variants and EAs, highlighting its potential to provide more efficient and robust optimization tools for both academic research and industrial applications. The research significance of HGCLPSO lies in two key aspects: first, it enriches the theoretical framework of PSO variants by integrating multi-strategy collaboration and adaptive mechanism design, offering new insights for the development of swarm intelligence algorithms; second, it addresses critical limitations of existing algorithms in balancing exploration and exploitation, as well as maintaining diversity, which promotes the advancement of optimization technology for complex problems.
Future research will focus on developing new mechanisms or exploring alternative strategies to better balance exploitation and exploration capabilities, particularly for high-dimensional and multi-objective optimization problems. Additionally, in our ongoing studies, we plan to apply HGCLPSO to more complex real-world engineering challenges, such as renewable energy system optimization and intelligent manufacturing process scheduling, to expand its application scope and maximize its practical impact.

Supplementary Materials

The MATLAB source code for HGCLPSO is available at https://github.com/wangshengliang2018/HGCLPSO (accessed on 24 November 2025).

Author Contributions

S.W.: Conceptualization, Investigation, Methodology, Software, Writing—original draft, Formal analysis, Funding acquisition. Y.L. (Yuqing Li) and Y.L. (Yao Li): Investigation, Software, Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was jointly supported by the Open Foundation of the State Key Laboratory of Precision Geodesy (Grant No. SKLPG2025-3-5), The open project special fund of Intelligent Transportation Laboratory in Shanxi Province (Grant No. 2024-ITLOP-KD-03), Taiyuan University of Science and Technology Scientific Research Initial Funding (Grant No. 20242027 and 20232112), Scientific and Technological Innovation Programs of Higher Education Institutions in Shanxi (No. 2024L225), and Fundamental Research Program of Shanxi Province (Grant No. 202203021212284).

Data Availability Statement

The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

Author Shengliang Wang was employed by the company Shanxi Intelligent Transportation Laboratory Co., Ltd., Taiyuan, China. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
PSO: Particle Swarm Optimization
HGCLPSO: Heterogeneous genetic learning and comprehensive learning strategy PSO
GLS: Genetic learning strategy
CLS: Comprehensive learning strategy
PEGA: Potentially excellent gene activation
Abest: Archived position of gbest
BFGS: Broyden–Fletcher–Goldfarb–Shanno
CEC: Congress on Evolutionary Computation
WSNs: Wireless sensor networks
pbest: Personal history best position
gbest: Global best position
GL-PSO: Genetic learning PSO
CLPSO: Comprehensive learning PSO
PL: Learning probability
EAs: Evolutionary algorithms
OLPSO: Orthogonal learning PSO
SL-PSO: Social learning PSO
HCLPSO: Heterogeneous comprehensive learning PSO
EPSO: Ensemble particle swarm optimizer
TSLPSO: Two-swarm learning PSO
HCLDMS-PSO: Heterogeneous comprehensive learning and dynamic multi-swarm PSO
MFO: Moth flame optimization
WOA: Whale optimization algorithm
ABC: Artificial bee colony algorithm
BOA: Butterfly optimization algorithm
BA: Bat algorithm
GWO: Grey wolf optimizer

References

  1. Holland, J.H. Genetic algorithms and the optimal allocation of trials. SIAM J. Comput. 1973, 2, 88–105. [Google Scholar] [CrossRef]
  2. Eberhart, R.; Kennedy, J. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  3. Deng, L.Y.; Liu, S.Y. Snow ablation optimizer: A novel metaheuristic technique for numerical optimization and engineering design. Expert Syst. Appl. 2023, 225, 18. [Google Scholar] [CrossRef]
  4. Zhong, C.; Li, G.; Meng, Z.; Li, H.; Yildiz, A.R.; Mirjalili, S. Starfish optimization algorithm (SFOA): A bio-inspired metaheuristic algorithm for global optimization compared with 100 optimizers. Neural Comput. Appl. 2024, 37, 3641–3683. [Google Scholar] [CrossRef]
  5. Kocak, O.; Erkan, U.; Toktas, A.; Gao, S. PSO-based image encryption scheme using modular integrated logistic exponential map. Expert Syst. Appl. 2024, 237, 121452. [Google Scholar] [CrossRef]
  6. Lin, S.; Liu, A.; Wang, J.; Kong, X. An improved fault-tolerant cultural-PSO with probability for multi-AGV path planning. Expert Syst. Appl. 2024, 237, 121510. [Google Scholar] [CrossRef]
  7. Wang, S.; Gao, M.; Li, L.a.; Lv, D.; Li, Y. UWB Base Station Deployment Optimization Method Considering NLOS Effects Based on Levy Flight-Improved Particle Swarm Optimizer. Sensors 2025, 25, 1785. [Google Scholar] [CrossRef]
  8. Zhao, W.; Liu, G.; Wang, S.; Gao, M.; Lv, D. Real-Time Estimation of GPS-BDS Inter-System Biases: An Improved Particle Swarm Optimization Algorithm. Remote Sens. 2021, 13, 3214. [Google Scholar] [CrossRef]
  9. Lv, D.; Liu, G.; Ou, J.; Wang, S.; Gao, M. Prediction of GPS Satellite Clock Offset Based on an Improved Particle Swarm Algorithm Optimized BP Neural Network. Remote Sens. 2022, 14, 2407. [Google Scholar] [CrossRef]
  10. Gong, Y.-J.; Zhang, J.; Chung, H.S.-H.; Chen, W.-N.; Zhan, Z.-H.; Li, Y.; Shi, Y.-H. An efficient resource allocation scheme using particle swarm optimization. IEEE Trans. Evol. Comput. 2012, 16, 801–816. [Google Scholar] [CrossRef]
  11. Tanweer, M.R.; Suresh, S.; Sundararajan, N. Dynamic mentoring and self-regulation based particle swarm optimization algorithm for solving complex real-world optimization problems. Inform. Sci. 2016, 326, 1–24. [Google Scholar] [CrossRef]
  12. Mendes, R.; Kennedy, J.; Neves, J. The fully informed particle swarm: Simpler, maybe better. IEEE Trans. Evol. Comput. 2004, 8, 204–210. [Google Scholar] [CrossRef]
  13. Nasir, M.; Das, S.; Maity, D.; Sengupta, S.; Halder, U.; Suganthan, P.N. A dynamic neighborhood learning based particle swarm optimizer for global numerical optimization. Inform. Sci. 2012, 209, 16–36. [Google Scholar] [CrossRef]
  14. Qu, B.-Y.; Suganthan, P.N.; Das, S. A distance-based locally informed particle swarm model for multimodal optimization. IEEE Trans. Evol. Comput. 2013, 17, 387–402. [Google Scholar] [CrossRef]
  15. Liang, J.-J.; Suganthan, P.N. Dynamic multi-swarm particle swarm optimizer. In Proceedings of the IEEE Swarm Intelligence Symposium, Pasadena, CA, USA, 8–10 June 2005; pp. 124–129. [Google Scholar]
  16. Li, X.; Yao, X. Cooperatively coevolving particle swarms for large scale optimization. IEEE Trans. Evol. Comput. 2011, 16, 210–224. [Google Scholar] [CrossRef]
  17. Nobile, M.S.; Cazzaniga, P.; Besozzi, D.; Colombo, R.; Mauri, G.; Pasi, G. Fuzzy Self-Tuning PSO: A settings-free algorithm for global optimization. Swarm Evol. Comput. 2018, 39, 70–85. [Google Scholar] [CrossRef]
  18. Zhan, Z.-H.; Zhang, J.; Li, Y.; Chung, H.S.-H. Adaptive particle swarm optimization. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2009, 39, 1362–1381. [Google Scholar] [CrossRef]
  19. Xu, G. An adaptive parameter tuning of particle swarm optimization algorithm. Appl. Math. Comput. 2013, 219, 4560–4569. [Google Scholar] [CrossRef]
  20. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  21. Zhan, Z.H.; Zhang, J.; Yun, L.; Shi, Y.H. Orthogonal learning particle swarm optimization. IEEE Trans. Evol. Comput. 2011, 15, 832–847. [Google Scholar] [CrossRef]
  22. Wang, H.; Wu, Z.; Rahnamayan, S.; Liu, Y.; Ventresca, M. Enhancing particle swarm optimization using generalized opposition-based learning. Inform. Sci. 2011, 181, 4699–4714. [Google Scholar] [CrossRef]
  23. Cheng, R.; Jin, Y. A social learning particle swarm optimization algorithm for scalable optimization. Inform. Sci. 2015, 291, 43–60. [Google Scholar] [CrossRef]
  24. Gong, Y.-J.; Li, J.-J.; Zhou, Y.; Li, Y.; Chung, H.S.-H.; Shi, Y.-H.; Zhang, J. Genetic learning particle swarm optimization. IEEE Trans. Cybern. 2016, 46, 2277–2290. [Google Scholar] [CrossRef]
  25. Xu, G.P.; Cui, Q.L.; Shi, X.H.; Ge, H.W.; Zhan, Z.H.; Lee, H.P.; Liang, Y.C.; Tai, R.; Wu, C.G. Particle swarm optimization based on dimensional learning strategy. Swarm Evol. Comput. 2019, 45, 33–51. [Google Scholar] [CrossRef]
  26. Lynn, N.; Suganthan, P.N. Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation. Swarm Evol. Comput. 2015, 24, 11–24. [Google Scholar] [CrossRef]
  27. Wang, S.L.; Liu, G.Y.; Gao, M.; Cao, S.L.; Guo, A.Z.; Wang, J.C. Heterogeneous comprehensive learning and dynamic multi- swarm particle swarm optimizer with two mutation operators. Inform. Sci. 2020, 540, 175–201. [Google Scholar] [CrossRef]
  28. Li, C.; Yang, S.; Nguyen, T.T. A self-learning particle swarm optimizer for global optimization problems. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2012, 42, 627–646. [Google Scholar]
  29. Wang, Y.; Li, B.; Weise, T.; Wang, J.; Yuan, B.; Tian, Q. Self-adaptive learning based particle swarm optimization. Inform. Sci. 2011, 181, 4515–4538. [Google Scholar] [CrossRef]
  30. Li, C.; Yang, S. An adaptive learning particle swarm optimizer for function optimization. In Proceedings of the 2009 IEEE Congress on Evolutionary Computation, Trondheim, Norway, 18–21 May 2009; pp. 381–388. [Google Scholar]
  31. Zhang, X.; Wang, X.; Kang, Q.; Cheng, J. Differential mutation and novel social learning particle swarm optimization algorithm. Inform. Sci. 2019, 480, 109–129. [Google Scholar] [CrossRef]
  32. Chen, Y.; Li, L.; Peng, H.; Xiao, J.; Wu, Q. Dynamic multi-swarm differential learning particle swarm optimizer. Swarm Evol. Comput. 2018, 39, 209–221. [Google Scholar] [CrossRef]
  33. Xin, B.; Chen, J.; Zhang, J.; Fang, H.; Peng, Z.-H. Hybridizing differential evolution and particle swarm optimization to design powerful optimizers: A review and taxonomy. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2011, 42, 744–767. [Google Scholar] [CrossRef]
  34. KıRan, M.S.; GüNdüZ, M. A recombination-based hybridization of particle swarm optimization and artificial bee colony algorithm for continuous optimization problems. Appl. Soft Comput. 2013, 13, 2188–2203. [Google Scholar] [CrossRef]
  35. Chen, Y.; Li, L.; Peng, H.; Xiao, J.; Yang, Y.; Shi, Y. Particle swarm optimizer with two differential mutation. Appl. Soft Comput. 2017, 61, 314–330. [Google Scholar] [CrossRef]
  36. Tian, D.; Shi, Z. MPSO: Modified particle swarm optimization and its applications. Swarm Evol. Comput. 2018, 41, 49–68. [Google Scholar] [CrossRef]
  37. Chen, W.-N.; Zhang, J.; Lin, Y.; Chen, N.; Zhan, Z.-H.; Chung, H.S.-H.; Li, Y.; Shi, Y.-H. Particle swarm optimization with an aging leader and challengers. IEEE Trans. Evol. Comput. 2012, 17, 241–258. [Google Scholar] [CrossRef]
  38. Lu, J.; Zhang, J.; Sheng, J. Enhanced multi-swarm cooperative particle swarm optimizer. Swarm Evol. Comput. 2022, 69, 100989. [Google Scholar] [CrossRef]
  39. Li, S.; Tan, M.; Tsang, I.W.; Kwok, J.T.-Y. A hybrid PSO-BFGS strategy for global optimization of multimodal functions. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2011, 41, 1003–1014. [Google Scholar]
  40. Zhao, S.-Z.; Liang, J.J.; Suganthan, P.N.; Tasgetiren, M.F. Dynamic multi-swarm particle swarm optimizer with local search for large scale global optimization. In Proceedings of the IEEE Congress on Evolutionary Computation, Hong Kong, China, 1–6 June 2008; pp. 3845–3852. [Google Scholar]
  41. Liang, J.J.; Qu, B.Y.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2014 Special Session and Competition on Single Objective Real Parameter Numerical Optimization; Technical Report; Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2013. [Google Scholar]
  42. Awad, N.H.; Ali, M.Z.; Suganthan, P.N.; Liang, J.J.; Qu, B.Y. Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization; Technical Report; Nanyang Technological University: Singapore; Jordan University of Science and Technology: Ar-Ramtha, Jordan; Zhengzhou University: Zhengzhou, China, 2016. [Google Scholar]
  43. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  44. Lynn, N.; Suganthan, P.N. Ensemble particle swarm optimizer. Appl. Soft Comput. 2017, 55, 533–548. [Google Scholar] [CrossRef]
  45. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  46. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  47. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  48. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2018, 23, 715–734. [Google Scholar] [CrossRef]
  49. Yang, X.-S. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer: Berlin/Heidelberg, Germany, 2010; pp. 65–74. [Google Scholar]
  50. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  51. Wang, J.; Ju, C.; Gao, Y.; Sangaiah, A.K.; Kim, G.-j. A PSO based energy efficient coverage control algorithm for wireless sensor networks. Comput. Mater. Contin. 2018, 56, 433–446. [Google Scholar]
  52. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
Figure 1. PL curve defined in Equation (6) and the repulsive force as a function of dist_i in Equation (8): (a) PL curve for 30D problems; (b) repulsive force curve.
Figure 2. Diversity measure value comparisons of the GLS sub-population, the CLS sub-population, and the whole population on CEC2014 test suite problems: (a) F1; (b) F7; (c) F9; (d) F15; (e) F19; (f) F27.
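One common way to compute the kind of diversity curves shown in Figure 2 is the mean Euclidean distance of the particles to the swarm centroid; the exact measure used in the paper may differ, so the sketch below is only a generic illustration (the function name and inputs are assumptions).

```python
import math

def diversity(swarm):
    """Mean Euclidean distance of particles to the swarm centroid.

    `swarm` is a list of position vectors (lists of floats). This is a
    generic diversity measure; the paper's exact definition may differ.
    """
    n = len(swarm)
    dim = len(swarm[0])
    centroid = [sum(p[d] for p in swarm) / n for d in range(dim)]
    return sum(
        math.sqrt(sum((p[d] - centroid[d]) ** 2 for d in range(dim)))
        for p in swarm
    ) / n

# A swarm collapsed onto a single point has zero diversity.
print(diversity([[1.0, 2.0], [1.0, 2.0]]))  # 0.0
```

Tracking this quantity separately per sub-population, as in Figure 2, shows whether the GLS and CLS groups actually maintain different exploration behaviors.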
Figure 3. Convergence performance curves for HGCLPSO and the eight other PSOs on the 30D CEC2014 benchmark functions. (a) F1: Rotated High Conditioned Elliptic Function; (b) F3: Rotated Discus Function; (c) F4: Shifted and Rotated Rosenbrock’s Function; (d) F8: Shifted Rastrigin’s Function; (e) F10: Shifted Schwefel’s Function; (f) F12: Shifted and Rotated Katsuura Function.
Figure 4. Convergence performance curves for HGCLPSO and the eight other PSOs on 30D CEC2014 test suite problems. (a) F17: Hybrid Function 1 (N = 3); (b) F20: Hybrid Function 4 (N = 4); (c) F21: Hybrid Function 5 (N = 5); (d) F23: Composition Function 1 (N = 5); (e) F25: Composition Function 3 (N = 3); (f) F30: Composition Function 8 (N = 3).
Figure 5. Coverage rate convergence curves for the 30D WSN coverage problem.
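The WSN coverage objective behind Figure 5 is typically evaluated with a Boolean disc sensing model on a discretized area: a grid point counts as covered if it lies within the sensing radius of at least one node. The sketch below assumes that standard model; the radius and grid step are illustrative values, not the paper's settings.

```python
import math

def coverage_rate(sensors, width, height, r, step=1.0):
    """Fraction of grid points within distance r of at least one sensor.

    Boolean disc sensing model over a width x height area discretized
    with the given grid step. `sensors` is a list of (x, y) tuples.
    """
    covered = total = 0
    for i in range(int(width / step)):
        for j in range(int(height / step)):
            gx, gy = (i + 0.5) * step, (j + 0.5) * step  # cell centre
            total += 1
            if any(math.hypot(gx - sx, gy - sy) <= r for sx, sy in sensors):
                covered += 1
    return covered / total

# One sensor at the centre of a 10 x 10 area with a large radius covers everything.
print(coverage_rate([(5.0, 5.0)], 10, 10, r=8.0))  # 1.0
```

In the optimization setting, the decision vector encodes the sensor coordinates and the optimizer maximizes this coverage rate.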
Table 1. Calibration of two sub-population sizes (N1 and N2) in the HGCLPSO algorithm.
| Functions | Criteria | 0.1*N + 0.9*N | 0.2*N + 0.8*N | 0.3*N + 0.7*N | 0.4*N + 0.6*N | 0.5*N + 0.5*N | 0.6*N + 0.4*N | 0.7*N + 0.3*N | 0.8*N + 0.2*N | 0.9*N + 0.1*N |
| F1 | mean | 4.83 × 10−02 | 6.76 × 10−03 | 1.05 × 10−02 | 1.65 × 10−02 | 3.79 × 10−02 | 2.76 × 10−01 | 1.13 × 10−01 | 8.32 × 10−02 | 7.06 × 10−02 |
| | std | 1.06 × 10−01 | 1.06 × 10−02 | 2.88 × 10−02 | 3.31 × 10−02 | 1.17 × 10−01 | 4.58 × 10−01 | 2.85 × 10−01 | 1.89 × 10−01 | 1.83 × 10−01 |
| | rank | 5 | 1 | 2 | 3 | 4 | 9 | 8 | 7 | 6 |
| F2 | mean | 4.66 × 10+01 | 8.01 × 10+00 | 3.72 × 10+01 | 1.03 × 10+02 | 3.25 × 10+00 | 3.14 × 10+01 | 1.71 × 10+01 | 2.79 × 10−01 | 2.02 × 10+00 |
| | std | 8.34 × 10+01 | 1.88 × 10+01 | 8.14 × 10+01 | 1.81 × 10+02 | 8.89 × 10+00 | 5.93 × 10+01 | 3.61 × 10+01 | 4.38 × 10−01 | 2.82 × 10+00 |
| | rank | 8 | 4 | 7 | 9 | 3 | 6 | 5 | 1 | 2 |
| F3 | mean | 1.85 × 10+00 | 4.88 × 10−01 | 1.35 × 10+00 | 1.95 × 10+00 | 3.28 × 10+00 | 2.08 × 10−01 | 1.45 × 10+00 | 3.83 × 10−02 | 3.20 × 10−01 |
| | std | 3.92 × 10+00 | 1.48 × 10+00 | 3.23 × 10+00 | 3.59 × 10+00 | 4.57 × 10+00 | 6.22 × 10−01 | 3.34 × 10+00 | 7.23 × 10−02 | 9.07 × 10−01 |
| | rank | 7 | 4 | 5 | 8 | 9 | 2 | 6 | 1 | 3 |
| F4 | mean | 3.62 × 10−01 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | 3.62 × 10−01 | 7.25 × 10−01 | 2.50 × 10−09 | 3.50 × 10−09 | 1.32 × 10−09 |
| | std | 1.20 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | 1.20 × 10+00 | 1.61 × 10+00 | 8.28 × 10−09 | 8.64 × 10−09 | 4.38 × 10−09 |
| | rank | 6 | 1 | 1 | 1 | 5 | 7 | 3 | 4 | 2 |
| F5 | mean | 2.00 × 10+01 | 2.00 × 10+01 | 2.00 × 10+01 | 2.00 × 10+01 | 2.00 × 10+01 | 2.00 × 10+01 | 2.00 × 10+01 | 2.00 × 10+01 | 2.00 × 10+01 |
| | std | 3.32 × 10−06 | 3.93 × 10−05 | 1.52 × 10−04 | 1.10 × 10−05 | 3.32 × 10−05 | 4.89 × 10−05 | 1.52 × 10−05 | 6.25 × 10−05 | 2.66 × 10−05 |
| | rank | 9 | 3 | 1 | 8 | 5 | 4 | 7 | 2 | 6 |
| F6 | mean | 1.56 × 10+01 | 1.46 × 10+01 | 1.48 × 10+01 | 1.37 × 10+01 | 1.38 × 10+01 | 1.27 × 10+01 | 1.37 × 10+01 | 1.11 × 10+01 | 1.12 × 10+01 |
| | std | 2.54 × 10+00 | 2.33 × 10+00 | 2.21 × 10+00 | 4.55 × 10+00 | 2.98 × 10+00 | 4.61 × 10+00 | 4.67 × 10+00 | 4.05 × 10+00 | 3.40 × 10+00 |
| | rank | 9 | 7 | 8 | 4 | 6 | 3 | 5 | 1 | 2 |
| F7 | mean | 7.00 × 10+00 | 3.81 × 10+00 | 2.37 × 10−01 | 4.03 × 10−02 | 7.39 × 10−03 | 4.93 × 10−03 | 7.38 × 10−03 | 8.51 × 10−03 | 5.82 × 10−03 |
| | std | 1.08 × 10+01 | 8.64 × 10+00 | 7.58 × 10−01 | 1.14 × 10−01 | 6.87 × 10−03 | 6.61 × 10−03 | 8.60 × 10−03 | 1.04 × 10−02 | 9.02 × 10−03 |
| | rank | 9 | 8 | 7 | 6 | 4 | 1 | 3 | 5 | 2 |
| F8 | mean | 6.24 × 10+00 | 2.80 × 10+00 | 1.45 × 10+00 | 1.27 × 10+00 | 3.62 × 10−01 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| | std | 5.52 × 10+00 | 5.15 × 10+00 | 2.76 × 10+00 | 2.82 × 10+00 | 1.20 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 | 0.00 × 10+00 |
| | rank | 6 | 5 | 4 | 3 | 2 | 1 | 1 | 1 | 1 |
| F9 | mean | 4.66 × 10+01 | 5.07 × 10+01 | 6.38 × 10+01 | 5.46 × 10+01 | 6.01 × 10+01 | 5.68 × 10+01 | 5.69 × 10+01 | 6.30 × 10+01 | 5.10 × 10+01 |
| | std | 1.13 × 10+01 | 1.04 × 10+01 | 1.22 × 10+01 | 1.53 × 10+01 | 9.01 × 10+00 | 1.51 × 10+01 | 1.70 × 10+01 | 2.78 × 10+01 | 2.02 × 10+01 |
| | rank | 1 | 2 | 9 | 4 | 7 | 5 | 6 | 8 | 3 |
| F10 | mean | 1.45 × 10+02 | 2.24 × 10+02 | 1.97 × 10+02 | 1.79 × 10+02 | 2.24 × 10+02 | 1.91 × 10+02 | 9.86 × 10+01 | 1.38 × 10+02 | 5.32 × 10+01 |
| | std | 1.73 × 10+02 | 1.55 × 10+02 | 1.67 × 10+02 | 1.51 × 10+02 | 1.55 × 10+02 | 1.59 × 10+02 | 1.46 × 10+02 | 1.65 × 10+02 | 1.21 × 10+02 |
| | rank | 4 | 9 | 7 | 5 | 8 | 6 | 2 | 3 | 1 |
| F11 | mean | 2.62 × 10+03 | 2.23 × 10+03 | 2.66 × 10+03 | 2.48 × 10+03 | 2.43 × 10+03 | 2.52 × 10+03 | 2.46 × 10+03 | 2.33 × 10+03 | 2.99 × 10+03 |
| | std | 6.11 × 10+02 | 5.40 × 10+02 | 3.90 × 10+02 | 5.83 × 10+02 | 5.99 × 10+02 | 1.01 × 10+03 | 5.26 × 10+02 | 5.26 × 10+02 | 9.51 × 10+02 |
| | rank | 7 | 1 | 8 | 5 | 3 | 6 | 4 | 2 | 9 |
| F12 | mean | 2.42 × 10−01 | 2.33 × 10−01 | 1.65 × 10−01 | 1.59 × 10−01 | 1.40 × 10−01 | 1.39 × 10−01 | 1.62 × 10−01 | 1.52 × 10−01 | 2.11 × 10−01 |
| | std | 6.58 × 10−02 | 8.16 × 10−02 | 1.06 × 10−01 | 6.09 × 10−02 | 6.27 × 10−02 | 7.31 × 10−02 | 7.32 × 10−02 | 8.68 × 10−02 | 1.20 × 10−01 |
| | rank | 9 | 8 | 6 | 4 | 2 | 1 | 5 | 3 | 7 |
| F13 | mean | 2.96 × 10−01 | 3.04 × 10−01 | 3.21 × 10−01 | 3.16 × 10−01 | 3.19 × 10−01 | 3.60 × 10−01 | 3.41 × 10−01 | 3.23 × 10−01 | 3.42 × 10−01 |
| | std | 7.03 × 10−02 | 6.94 × 10−02 | 6.96 × 10−02 | 4.27 × 10−02 | 6.10 × 10−02 | 6.49 × 10−02 | 7.35 × 10−02 | 8.27 × 10−02 | 5.94 × 10−02 |
| | rank | 1 | 2 | 5 | 3 | 4 | 9 | 7 | 6 | 8 |
| F14 | mean | 2.32 × 10−01 | 2.38 × 10−01 | 2.17 × 10−01 | 2.61 × 10−01 | 2.44 × 10−01 | 2.53 × 10−01 | 2.80 × 10−01 | 2.79 × 10−01 | 2.87 × 10−01 |
| | std | 3.43 × 10−02 | 3.83 × 10−02 | 3.73 × 10−02 | 3.89 × 10−02 | 3.27 × 10−02 | 5.62 × 10−02 | 5.24 × 10−02 | 4.76 × 10−02 | 6.47 × 10−02 |
| | rank | 2 | 3 | 1 | 6 | 4 | 5 | 8 | 7 | 9 |
| F15 | mean | 1.83 × 10+01 | 1.54 × 10+01 | 2.92 × 10+01 | 2.71 × 10+01 | 1.07 × 10+01 | 9.74 × 10+00 | 1.63 × 10+01 | 1.61 × 10+01 | 3.89 × 10+01 |
| | std | 3.45 × 10+01 | 2.76 × 10+01 | 7.57 × 10+01 | 5.15 × 10+01 | 1.49 × 10+01 | 8.35 × 10+00 | 3.17 × 10+01 | 1.91 × 10+01 | 6.34 × 10+01 |
| | rank | 6 | 3 | 8 | 7 | 2 | 1 | 5 | 4 | 9 |
| F16 | mean | 1.02 × 10+01 | 1.05 × 10+01 | 1.01 × 10+01 | 1.04 × 10+01 | 1.03 × 10+01 | 1.04 × 10+01 | 1.02 × 10+01 | 1.02 × 10+01 | 1.03 × 10+01 |
| | std | 6.74 × 10−01 | 8.60 × 10−01 | 8.41 × 10−01 | 5.41 × 10−01 | 8.60 × 10−01 | 6.64 × 10−01 | 6.48 × 10−01 | 8.33 × 10−01 | 9.51 × 10−01 |
| | rank | 2 | 9 | 1 | 8 | 5 | 7 | 3 | 4 | 6 |
| | Average rank | 5.69 | 4.38 | 5.00 | 5.25 | 4.56 | 4.56 | 4.88 | 3.69 | 4.75 |
| | Final rank | 8 | 2 | 6 | 7 | 3 | 3 | 5 | 1 | 4 |
| | Best/2nd Best/Worst | 2/2/4 | 3/2/2 | 4/1/1 | 1/0/1 | 0/3/1 | 4/1/2 | 1/1/0 | 4/2/0 | 2/4/3 |
Table 2. Parameter settings of the nine PSO variants.
| Algorithm | Parameter Setting | Year | Ref. |
| CLPSO | w = 0.9~0.2, c = 1.49445, Pc = 0.05~0.5, m = 5 | 2006 | [20] |
| OLPSO | w = 0.9~0.4, c = 2.0, G = 5 | 2011 | [21] |
| SL-PSO | N = M + D/10, M = 100, β = 0.01, α = 0.5 | 2015 | [23] |
| HCLPSO | w = 0.99~0.29, c1 = 2.5~0.5, c2 = 0.5~2.5, K: 3~1.5 | 2015 | [26] |
| GL-PSO | w = 0.7298, c = 1.49618, pm = 0.01, sg = 7 | 2016 | [24] |
| EPSO | As given in the original paper | 2017 | [44] |
| TSLPSO | w1 = w2 = 0.9~0.4, c1 = c2 = 1.49445, c3 = 0.5~2.5 | 2019 | [25] |
| HCLDMS-PSO | w = 0.99~0.29, w2 = 0.99~0.29, c1 = 2.5~0.5, c2 = 0.5~2.5, pm = 0.1 | 2020 | [27] |
| HGCLPSO | w1 = 0.7298, w2 = 0.9~0.4, c1 = 1.49618, c2 = 3~1.5, PL, S = 4, R = 200 | -- | -- |
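Parameter ranges such as "w = 0.9~0.4" in Table 2 denote a linear decrease over the run. As an illustration of how such a schedule enters the canonical PSO velocity update (this is the standard textbook rule, not the full HGCLPSO update; the function name is illustrative):

```python
import random

def pso_step(pos, vel, pbest, gbest, t, t_max,
             w_max=0.9, w_min=0.4, c1=1.49445, c2=1.49445):
    """One canonical PSO update with a linearly decreasing inertia weight.

    w decreases from w_max to w_min over t_max iterations, matching the
    "0.9~0.4" notation in Table 2. All vectors are same-length lists.
    """
    w = w_max - (w_max - w_min) * t / t_max
    new_vel = [
        w * v + c1 * random.random() * (pb - x) + c2 * random.random() * (gb - x)
        for x, v, pb, gb in zip(pos, vel, pbest, gbest)
    ]
    new_pos = [x + v for x, v in zip(pos, new_vel)]
    return new_pos, new_vel

pos, vel = [0.0, 0.0], [0.1, -0.1]
pos, vel = pso_step(pos, vel, pbest=[1.0, 1.0], gbest=[2.0, 2.0], t=0, t_max=100)
```

Variants in the table differ mainly in which exemplar replaces `pbest`/`gbest` (comprehensive, orthogonal, or genetically bred exemplars) and in how the coefficients are scheduled.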
Table 3. Comparison of experimental results for nine PSO variants on 30D CEC2014 test functions.
| Functions | Criteria | HGCLPSO | HCLDMS-PSO | TSLPSO | EPSO | GL-PSO | HCLPSO | SL-PSO | OLPSO | CLPSO |
| F1 | mean | 9.49 × 10−02 | 2.77 × 10+06 | 3.74 × 10+06 | 3.10 × 10+06 | 1.09 × 10+07 | 3.83 × 10+06 | 3.83 × 10+05 | 1.58 × 10+07 | 9.24 × 10+06 |
| | std | 3.37 × 10−01 | 2.96 × 10+06 | 2.68 × 10+06 | 2.37 × 10+06 | 1.32 × 10+07 | 3.47 × 10+06 | 2.96 × 10+05 | 8.90 × 10+06 | 2.77 × 10+06 |
| | rank | 1 | 3 | 5 | 4 | 8 | 6 | 2 | 9 | 7 |
| F2 | mean | 6.80 × 10+00 | 3.67 × 10+02 | 2.05 × 10+01 | 9.62 × 10+01 | 2.56 × 10+00 | 1.05 × 10+01 | 1.32 × 10+04 | 4.80 × 10+05 | 3.00 × 10+00 |
| | std | 2.00 × 10+01 | 4.75 × 10+02 | 4.35 × 10+01 | 1.18 × 10+02 | 4.83 × 10+00 | 1.54 × 10+01 | 1.09 × 10+04 | 3.56 × 10+05 | 3.98 × 10+00 |
| | rank | 3 | 7 | 5 | 6 | 1 | 4 | 8 | 9 | 2 |
| F3 | mean | 8.63 × 10−01 | 4.13 × 10+02 | 8.47 × 10+01 | 9.26 × 10+01 | 3.03 × 10+02 | 2.14 × 10+02 | 7.78 × 10+03 | 6.05 × 10+03 | 4.03 × 10+02 |
| | std | 2.38 × 10+00 | 6.00 × 10+02 | 2.61 × 10+02 | 9.59 × 10+01 | 1.34 × 10+03 | 2.79 × 10+02 | 5.67 × 10+03 | 4.83 × 10+03 | 3.95 × 10+02 |
| | rank | 1 | 7 | 2 | 3 | 5 | 4 | 9 | 8 | 6 |
| F4 | mean | 5.14 × 10−01 | 1.44 × 10+02 | 7.75 × 10+01 | 1.11 × 10+02 | 1.51 × 10+02 | 1.23 × 10+02 | 2.55 × 10+01 | 1.68 × 10+02 | 7.40 × 10+01 |
| | std | 1.36 × 10+00 | 2.40 × 10+01 | 3.64 × 10+01 | 4.00 × 10+01 | 4.05 × 10+01 | 3.51 × 10+01 | 2.84 × 10+01 | 2.53 × 10+01 | 2.48 × 10+01 |
| | rank | 1 | 7 | 4 | 5 | 8 | 6 | 2 | 9 | 3 |
| F5 | mean | 2.00 × 10+01 | 2.02 × 10+01 | 2.01 × 10+01 | 2.07 × 10+01 | 2.00 × 10+01 | 2.03 × 10+01 | 2.09 × 10+01 | 2.07 × 10+01 | 2.04 × 10+01 |
| | std | 1.02 × 10−05 | 5.85 × 10−02 | 5.77 × 10−02 | 8.10 × 10−02 | 2.61 × 10−02 | 4.13 × 10−02 | 6.15 × 10−02 | 7.18 × 10−02 | 4.84 × 10−02 |
| | rank | 1 | 4 | 3 | 7 | 2 | 5 | 9 | 8 | 6 |
| F6 | mean | 9.59 × 10+00 | 2.96 × 10+00 | 6.88 × 10+00 | 6.12 × 10+00 | 8.39 × 10+00 | 7.78 × 10+00 | 7.93 × 10−01 | 8.22 × 10+00 | 1.27 × 10+01 |
| | std | 3.64 × 10+00 | 2.12 × 10+00 | 2.30 × 10+00 | 2.05 × 10+00 | 3.36 × 10+00 | 2.57 × 10+00 | 8.01 × 10−01 | 3.21 × 10+00 | 1.85 × 10+00 |
| | rank | 8 | 2 | 4 | 3 | 7 | 5 | 1 | 6 | 9 |
| F7 | mean | 2.96 × 10−03 | 1.60 × 10−03 | 3.08 × 10−05 | 4.77 × 10−03 | 3.50 × 10−03 | 1.20 × 10−03 | 3.18 × 10−04 | 6.57 × 10−01 | 8.51 × 10−05 |
| | std | 1.27 × 10−02 | 2.99 × 10−03 | 6.07 × 10−05 | 6.20 × 10−03 | 7.49 × 10−03 | 2.00 × 10−03 | 1.77 × 10−03 | 1.83 × 10−01 | 9.18 × 10−05 |
| | rank | 6 | 5 | 1 | 8 | 7 | 4 | 3 | 9 | 2 |
| F8 | mean | 9.90 × 10−14 | 2.25 × 10+01 | 2.35 × 10−13 | 3.43 × 10+00 | 2.49 × 10−13 | 2.27 × 10−12 | 1.74 × 10+01 | 2.92 × 10+00 | 1.17 × 10−13 |
| | std | 7.03 × 10−14 | 4.75 × 10+00 | 6.52 × 10−14 | 2.71 × 10+00 | 5.66 × 10−13 | 7.83 × 10−12 | 4.17 × 10+00 | 1.61 × 10+00 | 2.04 × 10−14 |
| | rank | 1 | 9 | 3 | 7 | 4 | 5 | 8 | 6 | 2 |
| F9 | mean | 5.15 × 10+01 | 2.67 × 10+01 | 4.79 × 10+01 | 5.25 × 10+01 | 5.30 × 10+01 | 4.50 × 10+01 | 2.42 × 10+01 | 7.50 × 10+01 | 5.19 × 10+01 |
| | std | 1.52 × 10+01 | 5.97 × 10+00 | 8.01 × 10+00 | 1.26 × 10+01 | 1.86 × 10+01 | 9.77 × 10+00 | 2.22 × 10+01 | 2.89 × 10+01 | 6.47 × 10+00 |
| | rank | 5 | 2 | 4 | 7 | 8 | 3 | 1 | 9 | 6 |
| F10 | mean | 8.11 × 10−01 | 7.86 × 10+02 | 6.49 × 10+00 | 1.40 × 10+02 | 5.16 × 10−01 | 4.97 × 10+00 | 3.21 × 10+02 | 1.46 × 10+02 | 1.44 × 10+00 |
| | std | 2.02 × 10+00 | 2.86 × 10+02 | 4.74 × 10+00 | 1.20 × 10+02 | 8.53 × 10−01 | 1.70 × 10+00 | 1.74 × 10+02 | 1.19 × 10+02 | 8.99 × 10−01 |
| | rank | 2 | 9 | 5 | 6 | 1 | 4 | 8 | 7 | 3 |
| F11 | mean | 2.42 × 10+03 | 2.09 × 10+03 | 1.94 × 10+03 | 2.63 × 10+03 | 2.63 × 10+03 | 2.04 × 10+03 | 8.41 × 10+02 | 3.23 × 10+03 | 2.19 × 10+03 |
| | std | 5.13 × 10+02 | 4.61 × 10+02 | 5.19 × 10+02 | 8.01 × 10+02 | 6.60 × 10+02 | 3.69 × 10+02 | 4.37 × 10+02 | 7.92 × 10+02 | 2.95 × 10+02 |
| | rank | 6 | 4 | 2 | 7 | 8 | 3 | 1 | 9 | 5 |
| F12 | mean | 1.22 × 10−01 | 3.24 × 10−01 | 1.74 × 10−01 | 8.79 × 10−01 | 1.25 × 10−01 | 3.23 × 10−01 | 2.16 × 10+00 | 1.20 × 10+00 | 3.25 × 10−01 |
| | std | 5.41 × 10−02 | 1.14 × 10−01 | 5.86 × 10−02 | 1.18 × 10−01 | 5.00 × 10−02 | 7.52 × 10−02 | 7.74 × 10−01 | 1.73 × 10−01 | 5.83 × 10−02 |
| | rank | 1 | 5 | 3 | 7 | 2 | 4 | 9 | 8 | 6 |
| F13 | mean | 2.55 × 10−01 | 1.72 × 10−01 | 2.22 × 10−01 | 3.35 × 10−01 | 3.98 × 10−01 | 2.26 × 10−01 | 1.82 × 10−01 | 3.88 × 10−01 | 2.97 × 10−01 |
| | std | 6.88 × 10−02 | 3.86 × 10−02 | 6.40 × 10−02 | 6.45 × 10−02 | 1.21 × 10−01 | 6.56 × 10−02 | 3.49 × 10−02 | 7.25 × 10−02 | 4.40 × 10−02 |
| | rank | 5 | 1 | 3 | 7 | 9 | 4 | 2 | 8 | 6 |
| F14 | mean | 2.39 × 10−01 | 2.41 × 10−01 | 2.23 × 10−01 | 2.72 × 10−01 | 3.42 × 10−01 | 2.28 × 10−01 | 4.02 × 10−01 | 2.83 × 10−01 | 2.35 × 10−01 |
| | std | 4.59 × 10−02 | 3.47 × 10−02 | 3.22 × 10−02 | 3.17 × 10−02 | 4.62 × 10−02 | 3.92 × 10−02 | 8.20 × 10−02 | 3.83 × 10−02 | 2.36 × 10−02 |
| | rank | 4 | 5 | 1 | 6 | 8 | 2 | 9 | 7 | 3 |
| F15 | mean | 4.83 × 10+00 | 4.16 × 10+00 | 4.62 × 10+00 | 1.07 × 10+01 | 5.01 × 10+00 | 5.11 × 10+00 | 5.56 × 10+00 | 1.80 × 10+01 | 7.39 × 10+00 |
| | std | 1.39 × 10+00 | 8.56 × 10−01 | 1.41 × 10+00 | 3.69 × 10+00 | 1.52 × 10+00 | 1.54 × 10+00 | 4.48 × 10+00 | 1.91 × 10+00 | 1.07 × 10+00 |
| | rank | 3 | 1 | 2 | 8 | 4 | 5 | 6 | 9 | 7 |
| F16 | mean | 1.02 × 10+01 | 9.06 × 10+00 | 9.90 × 10+00 | 1.11 × 10+01 | 9.97 × 10+00 | 9.87 × 10+00 | 1.20 × 10+01 | 1.18 × 10+01 | 1.01 × 10+01 |
| | std | 6.94 × 10−01 | 7.71 × 10−01 | 6.10 × 10−01 | 4.34 × 10−01 | 7.63 × 10−01 | 4.80 × 10−01 | 3.45 × 10−01 | 4.74 × 10−01 | 2.94 × 10−01 |
| | rank | 6 | 1 | 3 | 7 | 4 | 2 | 9 | 8 | 5 |
| F17 | mean | 1.61 × 10+03 | 3.55 × 10+05 | 3.60 × 10+05 | 4.36 × 10+05 | 4.82 × 10+05 | 3.08 × 10+05 | 1.12 × 10+05 | 1.05 × 10+06 | 7.43 × 10+05 |
| | std | 3.57 × 10+03 | 1.64 × 10+05 | 3.37 × 10+05 | 2.93 × 10+05 | 9.50 × 10+05 | 1.97 × 10+05 | 7.20 × 10+04 | 5.32 × 10+05 | 3.76 × 10+05 |
| | rank | 1 | 4 | 5 | 6 | 7 | 3 | 2 | 9 | 8 |
| F18 | mean | 1.31 × 10+02 | 3.03 × 10+03 | 3.21 × 10+02 | 1.14 × 10+03 | 5.01 × 10+03 | 3.73 × 10+02 | 1.30 × 10+03 | 8.92 × 10+04 | 1.05 × 10+02 |
| | std | 5.53 × 10+01 | 1.97 × 10+03 | 2.33 × 10+02 | 1.26 × 10+03 | 4.88 × 10+03 | 3.88 × 10+02 | 2.07 × 10+03 | 2.85 × 10+05 | 3.93 × 10+01 |
| | rank | 2 | 7 | 3 | 5 | 8 | 4 | 6 | 9 | 1 |
| F19 | mean | 6.77 × 10+00 | 5.64 × 10+00 | 6.24 × 10+00 | 6.36 × 10+00 | 1.74 × 10+01 | 5.69 × 10+00 | 6.85 × 10+00 | 9.79 × 10+00 | 8.36 × 10+00 |
| | std | 2.79 × 10+00 | 1.42 × 10+00 | 1.63 × 10+00 | 1.24 × 10+00 | 2.51 × 10+01 | 1.45 × 10+00 | 1.25 × 10+00 | 4.85 × 10+00 | 1.54 × 10+00 |
| | rank | 5 | 1 | 3 | 4 | 9 | 2 | 6 | 8 | 7 |
| F20 | mean | 2.09 × 10+02 | 4.91 × 10+02 | 6.37 × 10+02 | 5.18 × 10+02 | 7.00 × 10+03 | 4.12 × 10+02 | 1.77 × 10+04 | 9.81 × 10+03 | 2.28 × 10+03 |
| | std | 4.01 × 10+02 | 4.08 × 10+02 | 9.01 × 10+02 | 1.98 × 10+02 | 1.54 × 10+04 | 2.19 × 10+02 | 8.83 × 10+03 | 9.32 × 10+03 | 1.64 × 10+03 |
| | rank | 1 | 3 | 5 | 4 | 7 | 2 | 9 | 8 | 6 |
| F21 | mean | 6.35 × 10+02 | 4.22 × 10+04 | 3.31 × 10+04 | 5.49 × 10+04 | 8.99 × 10+04 | 5.40 × 10+04 | 5.73 × 10+04 | 2.84 × 10+05 | 6.22 × 10+04 |
| | std | 4.21 × 10+02 | 3.51 × 10+04 | 2.43 × 10+04 | 4.28 × 10+04 | 3.66 × 10+05 | 4.77 × 10+04 | 4.26 × 10+04 | 3.08 × 10+05 | 3.40 × 10+04 |
| | rank | 1 | 3 | 2 | 5 | 8 | 4 | 6 | 9 | 7 |
| F22 | mean | 1.58 × 10+02 | 1.43 × 10+02 | 1.30 × 10+02 | 1.77 × 10+02 | 4.25 × 10+02 | 1.06 × 10+02 | 1.67 × 10+02 | 2.88 × 10+02 | 1.50 × 10+02 |
| | std | 1.17 × 10+02 | 8.53 × 10+01 | 8.84 × 10+01 | 1.16 × 10+02 | 1.94 × 10+02 | 9.88 × 10+01 | 1.41 × 10+02 | 1.58 × 10+02 | 8.62 × 10+01 |
| | rank | 5 | 3 | 2 | 7 | 9 | 1 | 6 | 8 | 4 |
| F23 | mean | 3.14 × 10+02 | 3.16 × 10+02 | 3.15 × 10+02 | 3.15 × 10+02 | 3.16 × 10+02 | 3.16 × 10+02 | 3.15 × 10+02 | 3.17 × 10+02 | 3.15 × 10+02 |
| | std | 1.83 × 10−09 | 1.52 × 10−01 | 4.11 × 10−02 | 3.99 × 10−02 | 2.98 × 10−01 | 1.32 × 10−01 | 1.22 × 10−12 | 3.83 × 10−01 | 2.30 × 10−02 |
| | rank | 1 | 8 | 3 | 5 | 6 | 7 | 2 | 9 | 4 |
| F24 | mean | 2.26 × 10+02 | 2.26 × 10+02 | 2.25 × 10+02 | 2.26 × 10+02 | 2.32 × 10+02 | 2.26 × 10+02 | 2.30 × 10+02 | 2.30 × 10+02 | 2.22 × 10+02 |
| | std | 1.63 × 10+00 | 1.62 × 10+00 | 2.35 × 10+00 | 1.48 × 10+00 | 5.32 × 10+00 | 4.71 × 10+00 | 5.80 × 10+00 | 2.31 × 10+00 | 9.71 × 10+00 |
| | rank | 6 | 5 | 2 | 4 | 9 | 3 | 8 | 7 | 1 |
| F25 | mean | 2.01 × 10+02 | 2.06 × 10+02 | 2.07 × 10+02 | 2.07 × 10+02 | 2.11 × 10+02 | 2.07 × 10+02 | 2.06 × 10+02 | 2.10 × 10+02 | 2.09 × 10+02 |
| | std | 3.69 × 10+00 | 1.27 × 10+00 | 1.13 × 10+00 | 1.24 × 10+00 | 3.10 × 10+00 | 1.21 × 10+00 | 1.76 × 10+00 | 2.23 × 10+00 | 9.39 × 10−01 |
| | rank | 1 | 3 | 4 | 6 | 9 | 5 | 2 | 8 | 7 |
| F26 | mean | 1.00 × 10+02 | 1.00 × 10+02 | 1.00 × 10+02 | 1.00 × 10+02 | 1.07 × 10+02 | 1.00 × 10+02 | 1.10 × 10+02 | 1.04 × 10+02 | 1.00 × 10+02 |
| | std | 7.26 × 10−02 | 3.15 × 10−02 | 5.43 × 10−02 | 6.16 × 10−02 | 2.49 × 10+01 | 5.52 × 10−02 | 3.00 × 10+01 | 1.80 × 10+01 | 6.51 × 10−02 |
| | rank | 3 | 1 | 4 | 6 | 8 | 2 | 9 | 7 | 5 |
| F27 | mean | 4.48 × 10+02 | 3.80 × 10+02 | 4.06 × 10+02 | 4.04 × 10+02 | 5.16 × 10+02 | 4.07 × 10+02 | 3.56 × 10+02 | 4.96 × 10+02 | 4.13 × 10+02 |
| | std | 4.74 × 10+01 | 3.49 × 10+01 | 4.10 × 10+00 | 2.22 × 10+00 | 9.03 × 10+01 | 4.07 × 10+00 | 4.46 × 10+01 | 9.50 × 10+01 | 4.61 × 10+00 |
| | rank | 7 | 2 | 4 | 3 | 9 | 5 | 1 | 8 | 6 |
| F28 | mean | 9.23 × 10+02 | 9.18 × 10+02 | 9.23 × 10+02 | 9.28 × 10+02 | 1.02 × 10+03 | 9.50 × 10+02 | 9.05 × 10+02 | 1.20 × 10+03 | 1.01 × 10+03 |
| | std | 1.19 × 10+02 | 3.21 × 10+01 | 4.80 × 10+01 | 5.90 × 10+01 | 1.79 × 10+02 | 4.12 × 10+01 | 6.06 × 10+01 | 1.38 × 10+02 | 4.88 × 10+01 |
| | rank | 3 | 2 | 4 | 5 | 8 | 6 | 1 | 9 | 7 |
| F29 | mean | 1.28 × 10+03 | 1.33 × 10+03 | 1.11 × 10+03 | 1.36 × 10+03 | 1.50 × 10+03 | 1.07 × 10+03 | 1.80 × 10+03 | 2.97 × 10+06 | 9.22 × 10+02 |
| | std | 3.06 × 10+02 | 2.23 × 10+02 | 1.48 × 10+02 | 2.11 × 10+02 | 4.47 × 10+02 | 1.17 × 10+02 | 5.00 × 10+02 | 9.21 × 10+06 | 1.08 × 10+02 |
| | rank | 4 | 5 | 3 | 6 | 7 | 2 | 8 | 9 | 1 |
| F30 | mean | 1.30 × 10+03 | 4.40 × 10+03 | 2.72 × 10+03 | 3.15 × 10+03 | 1.31 × 10+04 | 3.24 × 10+03 | 3.26 × 10+03 | 7.21 × 10+03 | 3.71 × 10+03 |
| | std | 9.99 × 10+02 | 1.59 × 10+03 | 6.89 × 10+02 | 1.01 × 10+03 | 9.25 × 10+03 | 1.05 × 10+03 | 9.66 × 10+02 | 4.38 × 10+03 | 8.10 × 10+02 |
| | rank | 1 | 7 | 2 | 3 | 9 | 4 | 5 | 8 | 6 |
| | Average rank | 3.17 | 4.20 | 3.20 | 5.57 | 6.63 | 3.87 | 5.27 | 8.17 | 4.93 |
| | Final rank | 1 | 4 | 2 | 7 | 8 | 3 | 6 | 9 | 5 |
| | Best/2nd Best/Worst | 12/2/0 | 5/4/2 | 2/7/0 | 0/0/0 | 2/2/7 | 1/6/0 | 5/6/7 | 0/0/13 | 3/3/1 |
Table 4. Wilcoxon rank sum test results for 30D CEC2014 test suite problems.
Pairwise comparison, HGCLPSO versus: HCLDMS-PSO, TSLPSO, EPSO, GL-PSO, HCLPSO, SL-PSO, OLPSO, CLPSO (signs in each row below read left to right in that order).
F1++++++++
F2+++=++++
F3++++++++
F4++++++++
F5+++=++++
F6==+
F7======+=
F8+++=+++=
F9====+=
F10+++=++++
F11==+=
F12+++=++++
F13=++=++
F14==++=++=
F15==+==++
F16=+=++=
F17++++++++
F18+++++++=
F19======++
F20++++++++
F21++++++++
F22=++=+=
F23++++++++
F24===+=++
F25++++++++
F26====
F27++
F28===++++
F29===++
F30++++++++
+/=/− (vs. HCLDMS-PSO, TSLPSO, EPSO, GL-PSO, HCLPSO, SL-PSO, OLPSO, CLPSO): 15/8/7, 15/9/6, 20/8/2, 16/14/0, 16/7/7, 19/3/8, 28/2/0, 18/9/3
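The "+/=/−" entries above come from pairwise Wilcoxon rank-sum tests on the per-run errors of HGCLPSO against each competitor. The large-sample normal approximation of the statistic can be sketched as follows (a simplified version without tie correction of the variance; the function name is illustrative):

```python
import math

def rank_sum_z(a, b):
    """Normal-approximation Wilcoxon rank-sum z statistic for samples a, b.

    Pools the data, assigns average ranks to ties, sums the ranks of `a`,
    and standardizes against the null mean and variance. The variance is
    not tie-corrected, so this is only a large-sample sketch.
    """
    pooled = sorted(list(a) + list(b))
    avg_rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        avg_rank[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    w = sum(avg_rank[v] for v in a)
    n1, n2 = len(a), len(b)
    mean = n1 * (n1 + n2 + 1) / 2
    var = n1 * n2 * (n1 + n2 + 1) / 12
    return (w - mean) / math.sqrt(var)
```

A "+" is recorded when the test rejects the null hypothesis in HGCLPSO's favor at the chosen significance level, "−" when it favors the competitor, and "=" otherwise.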
Table 5. Ranks obtained through Friedman test for 30D CEC2014 test suite problems.
| Algorithm | HGCLPSO | HCLDMS-PSO | TSLPSO | EPSO | GL-PSO | HCLPSO | SL-PSO | OLPSO | CLPSO |
| Average rank | 3.62 | 4.56 | 3.89 | 5.44 | 5.39 | 4.44 | 4.81 | 7.68 | 5.17 |
| Final rank | 1 | 4 | 2 | 8 | 7 | 3 | 5 | 9 | 6 |
p-value: 2.30461 × 10−13
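The average ranks and p-value in Table 5 follow from the Friedman test: each algorithm is ranked on every problem, and the mean ranks R_j are compared via the statistic χ²_F = 12n/(k(k+1)) · (Σ R_j² − k(k+1)²/4). A self-contained sketch (the function name is illustrative; the reported p-value would come from the chi-square distribution with k − 1 degrees of freedom):

```python
def friedman_statistic(results):
    """Friedman chi-square statistic for n problems x k algorithms.

    `results` is a list of n rows, each with k error values (one per
    algorithm, lower is better). Ties receive average ranks.
    Returns (mean_ranks, chi2).
    """
    n, k = len(results), len(results[0])
    rank_sums = [0.0] * k
    for row in results:
        order = sorted(range(k), key=lambda j: row[j])
        r = 0
        while r < k:
            s = r
            while s < k and row[order[s]] == row[order[r]]:
                s += 1
            avg = (r + 1 + s) / 2  # average of ranks r+1 .. s
            for t in range(r, s):
                rank_sums[order[t]] += avg
            r = s
    mean_ranks = [rs / n for rs in rank_sums]
    chi2 = 12 * n / (k * (k + 1)) * (
        sum(r * r for r in mean_ranks) - k * (k + 1) ** 2 / 4
    )
    return mean_ranks, chi2

# Algorithm 0 always best, algorithm 2 always worst, over four problems.
print(friedman_statistic([[1, 2, 3]] * 4))  # ([1.0, 2.0, 3.0], 8.0)
```

A tiny p-value, as in Table 5, means the ranking differences across algorithms are very unlikely under the null hypothesis of equal performance.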
Table 6. Comparison of experimental results for nine PSO variants on 50D CEC2014 test functions.
| Functions | Criteria | HGCLPSO | HCLDMS-PSO | TSLPSO | EPSO | GL-PSO | HCLPSO | SL-PSO | OLPSO | CLPSO |
| F1 | mean | 2.76 × 10−01 | 5.97 × 10+06 | 5.17 × 10+06 | 1.01 × 10+07 | 1.14 × 10+07 | 8.06 × 10+06 | 7.75 × 10+05 | 1.13 × 10+08 | 1.86 × 10+07 |
| | std | 1.09 × 10+00 | 3.69 × 10+06 | 4.11 × 10+06 | 8.42 × 10+06 | 9.11 × 10+06 | 7.69 × 10+06 | 2.83 × 10+05 | 3.27 × 10+07 | 5.13 × 10+06 |
| | rank | 1 | 4 | 3 | 6 | 7 | 5 | 2 | 9 | 8 |
| F2 | mean | 3.72 × 10+01 | 1.35 × 10+03 | 7.57 × 10+02 | 1.54 × 10+04 | 6.30 × 10+03 | 3.19 × 10+02 | 9.10 × 10+03 | 7.29 × 10+08 | 2.14 × 10+01 |
| | std | 1.32 × 10+02 | 1.34 × 10+03 | 2.64 × 10+03 | 2.30 × 10+04 | 6.68 × 10+03 | 2.72 × 10+02 | 8.63 × 10+03 | 2.58 × 10+08 | 1.99 × 10+01 |
| | rank | 2 | 5 | 4 | 8 | 6 | 3 | 7 | 9 | 1 |
| F3 | mean | 1.11 × 10−01 | 2.29 × 10+03 | 3.34 × 10+02 | 1.54 × 10+03 | 2.23 × 10+03 | 2.16 × 10+03 | 1.92 × 10+04 | 1.00 × 10+05 | 3.47 × 10+03 |
| | std | 2.03 × 10−01 | 1.18 × 10+03 | 4.00 × 10+02 | 8.31 × 10+02 | 3.19 × 10+03 | 1.48 × 10+03 | 8.08 × 10+03 | 2.32 × 10+04 | 1.48 × 10+03 |
| | rank | 1 | 6 | 2 | 3 | 5 | 4 | 8 | 9 | 7 |
| F4 | mean | 2.35 × 10−01 | 1.35 × 10+02 | 1.02 × 10+02 | 1.35 × 10+02 | 1.89 × 10+02 | 1.31 × 10+02 | 9.24 × 10+01 | 3.79 × 10+02 | 1.26 × 10+02 |
| | std | 9.47 × 10−01 | 2.84 × 10+01 | 1.83 × 10+01 | 2.18 × 10+01 | 4.77 × 10+01 | 1.56 × 10+01 | 6.28 × 10+00 | 5.63 × 10+01 | 1.57 × 10+01 |
| | rank | 1 | 7 | 3 | 6 | 8 | 5 | 2 | 9 | 4 |
| F5 | mean | 2.00 × 10+01 | 2.04 × 10+01 | 2.01 × 10+01 | 2.08 × 10+01 | 2.00 × 10+01 | 2.04 × 10+01 | 2.11 × 10+01 | 2.12 × 10+01 | 2.05 × 10+01 |
| | std | 4.56 × 10−06 | 8.07 × 10−02 | 7.66 × 10−02 | 5.15 × 10−02 | 6.61 × 10−02 | 5.48 × 10−02 | 3.00 × 10−02 | 4.61 × 10−02 | 3.49 × 10−02 |
| | rank | 1 | 4 | 3 | 7 | 2 | 5 | 8 | 9 | 6 |
| F6 | mean | 2.34 × 10+01 | 1.20 × 10+01 | 1.71 × 10+01 | 1.92 × 10+01 | 2.29 × 10+01 | 2.36 × 10+01 | 1.57 × 10+00 | 3.40 × 10+01 | 3.01 × 10+01 |
| | std | 4.98 × 10+00 | 3.76 × 10+00 | 4.35 × 10+00 | 4.35 × 10+00 | 5.75 × 10+00 | 4.25 × 10+00 | 1.42 × 10+00 | 3.22 × 10+00 | 1.92 × 10+00 |
| | rank | 6 | 2 | 3 | 4 | 5 | 7 | 1 | 9 | 8 |
| F7 | mean | 4.52 × 10−03 | 5.05 × 10−03 | 4.96 × 10−04 | 3.76 × 10−03 | 5.84 × 10−03 | 5.87 × 10−03 | 1.21 × 10−03 | 7.27 × 10+00 | 3.20 × 10−03 |
| | std | 9.54 × 10−03 | 6.74 × 10−03 | 8.15 × 10−04 | 6.53 × 10−03 | 6.77 × 10−03 | 8.67 × 10−03 | 2.85 × 10−03 | 2.18 × 10+00 | 2.54 × 10−03 |
| | rank | 5 | 6 | 1 | 4 | 7 | 8 | 2 | 9 | 3 |
| F8 | mean | 2.03 × 10−13 | 5.36 × 10+01 | 3.81 × 10−13 | 1.16 × 10+01 | 3.39 × 10−13 | 1.95 × 10−02 | 3.56 × 10+01 | 9.06 × 10+01 | 2.27 × 10−13 |
| | std | 2.75 × 10−13 | 9.14 × 10+00 | 7.48 × 10−14 | 5.24 × 10+00 | 9.49 × 10−13 | 1.39 × 10−01 | 8.32 × 10+00 | 1.54 × 10+01 | 2.27 × 10−14 |
| | rank | 1 | 8 | 4 | 6 | 3 | 5 | 7 | 9 | 2 |
| F9 | mean | 1.15 × 10+02 | 5.71 × 10+01 | 1.05 × 10+02 | 1.15 × 10+02 | 1.06 × 10+02 | 1.13 × 10+02 | 1.78 × 10+02 | 3.76 × 10+02 | 1.21 × 10+02 |
| | std | 2.91 × 10+01 | 1.06 × 10+01 | 1.96 × 10+01 | 2.24 × 10+01 | 1.84 × 10+01 | 1.55 × 10+01 | 9.14 × 10+01 | 2.61 × 10+01 | 1.51 × 10+01 |
| | rank | 6 | 1 | 2 | 5 | 3 | 4 | 8 | 9 | 7 |
| F10 | mean | 6.36 × 10−01 | 1.96 × 10+03 | 2.53 × 10+01 | 2.50 × 10+02 | 3.90 × 10−01 | 5.44 × 10+00 | 1.10 × 10+03 | 2.45 × 10+03 | 2.98 × 10+00 |
| | std | 8.06 × 10−01 | 4.58 × 10+02 | 4.77 × 10+01 | 2.16 × 10+02 | 3.51 × 10−01 | 1.33 × 10+00 | 4.49 × 10+02 | 5.74 × 10+02 | 1.00 × 10+00 |
| | rank | 2 | 8 | 5 | 6 | 1 | 4 | 7 | 9 | 3 |
| F11 | mean | 4.93 × 10+03 | 4.80 × 10+03 | 4.81 × 10+03 | 4.99 × 10+03 | 4.96 × 10+03 | 4.17 × 10+03 | 2.04 × 10+03 | 1.18 × 10+04 | 4.62 × 10+03 |
| | std | 7.22 × 10+02 | 6.66 × 10+02 | 8.11 × 10+02 | 1.24 × 10+03 | 7.57 × 10+02 | 5.16 × 10+02 | 7.03 × 10+02 | 6.15 × 10+02 | 3.32 × 10+02 |
| | rank | 6 | 4 | 5 | 8 | 7 | 2 | 1 | 9 | 3 |
| F12 | mean | 1.64 × 10−01 | 4.65 × 10−01 | 2.00 × 10−01 | 8.90 × 10−01 | 1.72 × 10−01 | 3.23 × 10−01 | 3.26 × 10+00 | 3.25 × 10+00 | 3.74 × 10−01 |
| | std | 5.24 × 10−02 | 1.49 × 10−01 | 6.04 × 10−02 | 1.11 × 10−01 | 8.60 × 10−02 | 6.90 × 10−02 | 4.21 × 10−01 | 3.54 × 10−01 | 4.64 × 10−02 |
| | rank | 1 | 6 | 3 | 7 | 2 | 4 | 9 | 8 | 5 |
| F13 | mean | 3.52 × 10−01 | 2.28 × 10−01 | 3.32 × 10−01 | 4.82 × 10−01 | 4.56 × 10−01 | 3.44 × 10−01 | 2.92 × 10−01 | 8.54 × 10−01 | 3.74 × 10−01 |
| | std | 8.00 × 10−02 | 3.83 × 10−02 | 6.62 × 10−02 | 8.34 × 10−02 | 8.37 × 10−02 | 6.11 × 10−02 | 4.67 × 10−02 | 1.01 × 10−01 | 4.58 × 10−02 |
| | rank | 5 | 1 | 3 | 8 | 7 | 4 | 2 | 9 | 6 |
| F14 | mean | 3.25 × 10−01 | 3.26 × 10−01 | 2.84 × 10−01 | 3.32 × 10−01 | 5.16 × 10−01 | 3.11 × 10−01 | 4.05 × 10−01 | 1.95 × 10+00 | 3.16 × 10−01 |
| | std | 1.63 × 10−01 | 1.86 × 10−01 | 3.75 × 10−02 | 3.64 × 10−02 | 2.25 × 10−01 | 3.65 × 10−02 | 1.36 × 10−01 | 1.07 × 10+00 | 1.98 × 10−02 |
| | rank | 4 | 5 | 1 | 6 | 8 | 2 | 7 | 9 | 3 |
| F15 | mean | 1.46 × 10+01 | 9.62 × 10+00 | 8.99 × 10+00 | 2.08 × 10+01 | 1.41 × 10+01 | 1.20 × 10+01 | 2.84 × 10+01 | 1.14 × 10+02 | 1.78 × 10+01 |
| | std | 4.24 × 10+00 | 2.06 × 10+00 | 1.86 × 10+00 | 8.85 × 10+00 | 4.63 × 10+00 | 3.32 × 10+00 | 1.60 × 10+00 | 4.16 × 10+01 | 2.18 × 10+00 |
| | rank | 5 | 2 | 1 | 7 | 4 | 3 | 8 | 9 | 6 |
| F16 | mean | 1.81 × 10+01 | 1.85 × 10+01 | 1.84 × 10+01 | 2.01 × 10+01 | 1.82 × 10+01 | 1.83 × 10+01 | 2.19 × 10+01 | 2.22 × 10+01 | 1.87 × 10+01 |
| | std | 8.86 × 10−01 | 7.27 × 10−01 | 7.20 × 10−01 | 6.18 × 10−01 | 9.31 × 10−01 | 6.97 × 10−01 | 2.89 × 10−01 | 3.02 × 10−01 | 4.30 × 10−01 |
| | rank | 1 | 5 | 4 | 7 | 2 | 3 | 8 | 9 | 6 |
| F17 | mean | 1.88 × 10+03 | 5.51 × 10+05 | 1.43 × 10+06 | 9.88 × 10+05 | 2.28 × 10+06 | 7.43 × 10+05 | 9.63 × 10+04 | 7.19 × 10+06 | 3.00 × 10+06 |
| | std | 6.47 × 10+02 | 3.50 × 10+05 | 1.05 × 10+06 | 6.62 × 10+05 | 3.05 × 10+06 | 7.18 × 10+05 | 5.05 × 10+04 | 2.90 × 10+06 | 1.12 × 10+06 |
| | rank | 1 | 3 | 6 | 5 | 7 | 4 | 2 | 9 | 8 |
| F18 | mean | 5.10 × 10+02 | 5.57 × 10+02 | 2.50 × 10+02 | 4.13 × 10+02 | 1.05 × 10+03 | 1.56 × 10+02 | 1.56 × 10+03 | 1.75 × 10+06 | 1.46 × 10+02 |
| | std | 8.15 × 10+02 | 5.97 × 10+02 | 1.53 × 10+02 | 2.09 × 10+02 | 1.15 × 10+03 | 5.35 × 10+01 | 1.40 × 10+03 | 1.51 × 10+06 | 3.98 × 10+01 |
| | rank | 5 | 6 | 3 | 4 | 7 | 2 | 8 | 9 | 1 |
| F19 | mean | 4.74 × 10+01 | 4.87 × 10+01 | 1.76 × 10+01 | 2.50 × 10+01 | 5.51 × 10+01 | 3.56 × 10+01 | 1.92 × 10+01 | 7.57 × 10+01 | 1.90 × 10+01 |
| | std | 1.97 × 10+01 | 1.89 × 10+01 | 4.58 × 10+00 | 1.51 × 10+01 | 2.04 × 10+01 | 1.85 × 10+01 | 7.23 × 10+00 | 1.74 × 10+01 | 2.98 × 10+00 |
| | rank | 6 | 7 | 1 | 4 | 8 | 5 | 3 | 9 | 2 |
| F20 | mean | 1.81 × 10+02 | 9.34 × 10+02 | 1.08 × 10+03 | 1.32 × 10+03 | 7.53 × 10+03 | 1.74 × 10+03 | 2.50 × 10+04 | 4.31 × 10+04 | 1.01 × 10+04 |
| | std | 3.91 × 10+02 | 5.20 × 10+02 | 1.11 × 10+03 | 6.22 × 10+02 | 6.02 × 10+03 | 1.44 × 10+03 | 1.10 × 10+04 | 1.83 × 10+04 | 4.25 × 10+03 |
| | rank | 1 | 2 | 3 | 4 | 6 | 5 | 8 | 9 | 7 |
| F21 | mean | 1.36 × 10+03 | 3.54 × 10+05 | 6.35 × 10+05 | 5.07 × 10+05 | 7.43 × 10+05 | 4.41 × 10+05 | 1.15 × 10+05 | 4.15 × 10+06 | 1.68 × 10+06 |
| | std | 7.02 × 10+02 | 2.02 × 10+05 | 6.12 × 10+05 | 3.81 × 10+05 | 1.37 × 10+06 | 4.98 × 10+05 | 6.01 × 10+04 | 2.13 × 10+06 | 5.23 × 10+05 |
| | rank | 1 | 3 | 6 | 5 | 7 | 4 | 2 | 9 | 8 |
| F22 | mean | 6.07 × 10+02 | 3.07 × 10+02 | 5.44 × 10+02 | 6.65 × 10+02 | 1.04 × 10+03 | 6.22 × 10+02 | 3.09 × 10+02 | 1.06 × 10+03 | 6.02 × 10+02 |
| | std | 2.69 × 10+02 | 1.32 × 10+02 | 1.71 × 10+02 | 2.11 × 10+02 | 3.33 × 10+02 | 1.51 × 10+02 | 2.00 × 10+02 | 2.97 × 10+02 | 1.55 × 10+02 |
| | rank | 5 | 1 | 3 | 7 | 8 | 6 | 2 | 9 | 4 |
| F23 | mean | 3.37 × 10+02 | 3.47 × 10+02 | 3.44 × 10+02 | 3.45 × 10+02 | 3.46 × 10+02 | 3.45 × 10+02 | 3.44 × 10+02 | 3.60 × 10+02 | 3.44 × 10+02 |
| | std | 6.61 × 10−09 | 6.93 × 10−01 | 1.15 × 10−01 | 1.97 × 10−01 | 9.77 × 10−01 | 2.54 × 10−01 | 4.20 × 10−13 | 3.01 × 10+00 | 8.45 × 10−02 |
| | rank | 1 | 8 | 3 | 6 | 7 | 5 | 2 | 9 | 4 |
| F24 | mean | 2.67 × 10+02 | 2.67 × 10+02 | 2.63 × 10+02 | 2.63 × 10+02 | 2.71 × 10+02 | 2.66 × 10+02 | 2.74 × 10+02 | 3.01 × 10+02 | 2.63 × 10+02 |
| | std | 3.89 × 10+00 | 4.85 × 10+00 | 4.30 × 10+00 | 3.73 × 10+00 | 6.30 × 10+00 | 3.44 × 10+00 | 3.20 × 10+00 | 4.09 × 10+00 | 3.23 × 10+00 |
| | rank | 5 | 6 | 1 | 3 | 7 | 4 | 8 | 9 | 2 |
| F25 | mean | 2.00 × 10+02 | 2.15 × 10+02 | 2.15 × 10+02 | 2.17 × 10+02 | 2.24 × 10+02 | 2.17 × 10+02 | 2.11 × 10+02 | 2.34 × 10+02 | 2.17 × 10+02 |
| | std | 1.56 × 10−02 | 2.43 × 10+00 | 2.14 × 10+00 | 3.29 × 10+00 | 4.05 × 10+00 | 2.06 × 10+00 | 2.88 × 10+00 | 6.18 × 10+00 | 1.43 × 10+00 |
| | rank | 1 | 4 | 3 | 7 | 8 | 5 | 2 | 9 | 6 |
| F26 | mean | 1.00 × 10+02 | 1.00 × 10+02 | 1.00 × 10+02 | 1.00 × 10+02 | 1.40 × 10+02 | 1.00 × 10+02 | 1.55 × 10+02 | 1.45 × 10+02 | 1.00 × 10+02 |
| | std | 4.37 × 10−01 | 6.34 × 10−02 | 7.68 × 10−02 | 7.47 × 10−02 | 4.96 × 10+01 | 6.21 × 10−02 | 5.00 × 10+01 | 8.71 × 10+01 | 6.09 × 10−02 |
| | rank | 2 | 1 | 3 | 6 | 7 | 4 | 9 | 8 | 5 |
| F27 | mean | 7.31 × 10+02 | 5.18 × 10+02 | 6.66 × 10+02 | 7.26 × 10+02 | 9.22 × 10+02 | 7.88 × 10+02 | 4.48 × 10+02 | 1.14 × 10+03 | 6.78 × 10+02 |
| | std | 9.45 × 10+01 | 5.66 × 10+01 | 1.91 × 10+02 | 2.09 × 10+02 | 1.03 × 10+02 | 1.90 × 10+02 | 5.99 × 10+01 | 6.49 × 10+01 | 2.55 × 10+02 |
| | rank | 6 | 2 | 3 | 5 | 8 | 7 | 1 | 9 | 4 |
| F28 | mean | 1.53 × 10+03 | 1.34 × 10+03 | 1.44 × 10+03 | 1.52 × 10+03 | 1.89 × 10+03 | 1.55 × 10+03 | 1.35 × 10+03 | 2.48 × 10+03 | 1.81 × 10+03 |
| | std | 2.00 × 10+02 | 7.52 × 10+01 | 1.16 × 10+02 | 1.62 × 10+02 | 4.48 × 10+02 | 1.03 × 10+02 | 1.68 × 10+02 | 1.61 × 10+02 | 1.12 × 10+02 |
| | rank | 5 | 1 | 3 | 4 | 8 | 6 | 2 | 9 | 7 |
| F29 | mean | 1.81 × 10+03 | 2.56 × 10+03 | 1.87 × 10+03 | 2.47 × 10+03 | 7.66 × 10+06 | 1.72 × 10+03 | 2.45 × 10+03 | 1.81 × 10+07 | 1.43 × 10+03 |
| | std | 3.19 × 10+02 | 1.07 × 10+03 | 3.14 × 10+02 | 7.96 × 10+02 | 2.66 × 10+07 | 2.69 × 10+02 | 4.48 × 10+02 | 4.52 × 10+07 | 1.60 × 10+02 |
| | rank | 3 | 7 | 4 | 6 | 8 | 2 | 5 | 9 | 1 |
| F30 | mean | 4.74 × 10+03 | 1.66 × 10+04 | 1.35 × 10+04 | 2.10 × 10+04 | 3.93 × 10+04 | 1.83 × 10+04 | 1.19 × 10+04 | 5.00 × 10+04 | 1.48 × 10+04 |
| | std | 2.26 × 10+03 | 1.95 × 10+03 | 2.37 × 10+03 | 4.13 × 10+03 | 1.69 × 10+04 | 3.04 × 10+03 | 1.85 × 10+03 | 1.37 × 10+04 | 1.61 × 10+03 |
| | rank | 1 | 5 | 3 | 7 | 8 | 6 | 2 | 9 | 4 |
| | Average rank | 3.03 | 4.33 | 3.07 | 5.70 | 6.03 | 4.43 | 4.77 | 8.93 | 4.70 |
| | Final rank | 1 | 3 | 2 | 7 | 8 | 4 | 6 | 9 | 5 |
| | Best/2nd Best/Worst | 13/3/0 | 5/4/0 | 5/2/0 | 0/0/0 | 1/3/0 | 0/4/0 | 3/11/2 | 0/0/28 | 3/3/0 |
Table 7. Wilcoxon rank sum test results with a significance level of α = 0.05 for 50D CEC2014 test suite problems.
Pairwise comparison, HGCLPSO versus: HCLDMS-PSO, TSLPSO, EPSO, GL-PSO, HCLPSO, SL-PSO, OLPSO, CLPSO (signs in each row below read left to right in that order).
F1++++++++
F2++++++++
F3++++++++
F4++++++++
F5+++=++++
F6==++
F7====+=
F8+++=++++
F9====++=
F10+++=++++
F11====+
F12+++=++++
F13+++
F14==+=++=
F15+=+++
F16+=+==+++
F17++++++++
F18===++
F19==+
F20++++++++
F21++++++++
F22==+=+=
F23++++++++
F24===+=++=
F25++++++++
F26===+=++=
F27=+=+
F28=+=++
F29+=++=++
F30++++++++
+/=/− (vs. HCLDMS-PSO, TSLPSO, EPSO, GL-PSO, HCLPSO, SL-PSO, OLPSO, CLPSO): 16/7/7, 14/7/9, 18/10/2, 18/12/0, 14/11/5, 22/0/8, 30/0/0, 18/6/6
Table 9. Comparison of mean computation time (s) for CEC2014 test suite problems.
| Algorithm | HGCLPSO | HCLDMS-PSO | TSLPSO | EPSO | GL-PSO | HCLPSO | SL-PSO | OLPSO | CLPSO |
| 30D problems | 3.87 | 5.43 | 4.35 | 5.73 | 3.71 | 4.64 | 3.52 | 3.04 | 4.76 |
| 50D problems | 9.11 | 13.29 | 9.84 | 12.65 | 9.02 | 10.48 | 8.34 | 7.19 | 10.84 |
Table 10. The mean errors and standard deviation results obtained on 30D CEC2017 test suite problems.
Table 10. The mean errors and standard deviation results obtained on 30D CEC2017 test suite problems.
FunctionsF1F3F4F5F6F7
Metricsmeanstdmeanstdmeanstdmeanstdmeanstdmeanstd
HGCLPSO1.37 × 10+014.02 × 10+012.01 × 10−065.85 × 10−061.93 × 10+002.03 × 10+005.85 × 10+011.52 × 10+011.36 × 10−045.01 × 10−049.79 × 10+011.69 × 10+01
HCLDMS-PSO8.78 × 10+021.19 × 10+034.41 × 10+014.62 × 10+011.03 × 10+027.74 × 10+002.63 × 10+016.24 × 10+001.74 × 10−021.44 × 10−023.45 × 10+011.15 × 10+01
TSLPSO3.03 × 10+013.54 × 10+016.55 × 10+011.92 × 10+027.44 × 10+011.50 × 10+014.32 × 10+011.05 × 10+019.01 × 10−095.02 × 10−088.41 × 10+011.07 × 10+01
GLPSO3.75 × 10+034.92 × 10+034.33 × 10+002.37 × 10+011.52 × 10+012.68 × 10+016.47 × 10+012.06 × 10+011.27 × 10−032.48 × 10−039.35 × 10+011.54 × 10+01
HCLPSO1.33 × 10+022.90 × 10+023.73 × 10−045.99 × 10−044.82 × 10+012.94 × 10+015.02 × 10+019.75 × 10+003.41 × 10−139.74 × 10−148.44 × 10+019.21 × 10+00
CLPSO2.04 × 10+014.09 × 10+012.49 × 10+047.84 × 10+036.08 × 10+012.19 × 10+014.77 × 10+016.86 × 10+003.41 × 10−112.18 × 10−118.41 × 10+018.73 × 10+00
ABC2.63 × 10+023.25 × 10+021.27 × 10+052.62 × 10+043.91 × 10+013.03 × 10+019.69 × 10+011.41 × 10+011.83 × 10−144.25 × 10−141.11 × 10+021.14 × 10+01
MFO1.16 × 10+106.72 × 10+098.34 × 10+045.93 × 10+048.35 × 10+027.38 × 10+021.80 × 10+024.47 × 10+013.53 × 10+011.05 × 10+014.41 × 10+021.86 × 10+02
WOA2.31 × 10+061.75 × 10+061.63 × 10+056.60 × 10+041.43 × 10+024.02 × 10+012.91 × 10+025.50 × 10+017.02 × 10+011.16 × 10+015.22 × 10+027.94 × 10+01
BOA4.39 × 10+106.18 × 10+097.07 × 10+049.62 × 10+036.45 × 10+032.70 × 10+033.46 × 10+023.00 × 10+017.01 × 10+011.01 × 10+015.64 × 10+025.07 × 10+01
GWO1.43 × 10+091.19 × 10+093.66 × 10+041.10 × 10+041.72 × 10+027.70 × 10+011.05 × 10+024.12 × 10+016.62 × 10+003.75 × 10+001.45 × 10+024.67 × 10+01
BA8.76 × 10+101.72 × 10+102.72 × 10+059.36 × 10+042.84 × 10+048.27 × 10+035.58 × 10+026.76 × 10+011.24 × 10+021.51 × 10+011.65 × 10+032.51 × 10+02
FunctionsF8F9F10F11F12F13
Metricsmeanstdmeanstdmeanstdmeanstdmeanstdmeanstd
HGCLPSO5.96 × 10+011.14 × 10+019.99 × 10+011.46 × 10+022.96 × 10+035.04 × 10+025.00 × 10+012.10 × 10+018.73 × 10+023.72 × 10+023.06 × 10+023.72 × 10+02
HCLDMS-PSO2.89 × 10+017.10 × 10+007.56 × 10−021.26 × 10−012.22 × 10+035.36 × 10+024.81 × 10+013.17 × 10+011.10 × 10+051.70 × 10+053.28 × 10+034.00 × 10+03
TSLPSO5.08 × 10+018.79 × 10+002.35 × 10+012.73 × 10+012.33 × 10+035.55 × 10+024.35 × 10+012.23 × 10+011.37 × 10+052.31 × 10+058.47 × 10+025.08 × 10+02
GLPSO5.53 × 10+011.54 × 10+014.29 × 10+015.84 × 10+012.80 × 10+036.32 × 10+026.43 × 10+013.50 × 10+011.89 × 10+047.96 × 10+031.26 × 10+041.23 × 10+04
HCLPSO5.38 × 10+018.22 × 10+002.98 × 10+013.39 × 10+012.14 × 10+032.82 × 10+024.71 × 10+013.19 × 10+012.73 × 10+041.25 × 10+046.47 × 10+027.23 × 10+02
CLPSO5.34 × 10+016.91 × 10+007.66 × 10+013.60 × 10+012.14 × 10+032.84 × 10+028.17 × 10+012.10 × 10+014.64 × 10+052.66 × 10+053.33 × 10+022.57 × 10+02
ABC1.04 × 10+021.89 × 10+011.11 × 10+035.27 × 10+022.36 × 10+033.19 × 10+023.03 × 10+022.43 × 10+024.44 × 10+053.22 × 10+054.62 × 10+035.36 × 10+03
MFO2.00 × 10+023.83 × 10+015.88 × 10+031.94 × 10+034.32 × 10+037.91 × 10+022.85 × 10+033.80 × 10+031.86 × 10+083.40 × 10+088.34 × 10+073.03 × 10+08
WOA2.13 × 10+024.40 × 10+016.07 × 10+031.72 × 10+035.37 × 10+039.47 × 10+023.91 × 10+029.72 × 10+013.42 × 10+072.62 × 10+071.45 × 10+058.41 × 10+04
BOA3.01 × 10+021.99 × 10+017.68 × 10+039.56 × 10+027.66 × 10+032.35 × 10+024.00 × 10+031.35 × 10+036.03 × 10+092.19 × 10+092.52 × 10+092.32 × 10+09
GWO8.58 × 10+012.77 × 10+017.90 × 10+025.74 × 10+022.93 × 10+035.80 × 10+024.59 × 10+024.28 × 10+024.10 × 10+076.26 × 10+078.21 × 10+063.18 × 10+07
BA4.87 × 10+026.30 × 10+012.41 × 10+047.29 × 10+038.86 × 10+035.45 × 10+024.00 × 10+042.09 × 10+041.85 × 10+104.33 × 10+091.59 × 10+107.21 × 10+09
FunctionsF14F15F16F17F18F19
Metricsmeanstdmeanstdmeanstdmeanstdmeanstdmeanstd
HGCLPSO2.98 × 10+025.15 × 10+021.15 × 10+028.10 × 10+015.23 × 10+021.87 × 10+021.41 × 10+027.60 × 10+019.20 × 10+016.26 × 10+011.12 × 10+025.64 × 10+01
HCLDMS-PSO3.84 × 10+033.85 × 10+031.15 × 10+047.15 × 10+032.72 × 10+021.88 × 10+028.34 × 10+013.53 × 10+018.57 × 10+045.34 × 10+041.45 × 10+041.33 × 10+04
TSLPSO1.10 × 10+047.85 × 10+033.35 × 10+024.21 × 10+023.91 × 10+021.54 × 10+028.80 × 10+014.30 × 10+011.18 × 10+056.97 × 10+042.77 × 10+023.72 × 10+02
GLPSO7.15 × 10+036.92 × 10+035.58 × 10+035.05 × 10+039.79 × 10+022.76 × 10+022.95 × 10+021.60 × 10+021.10 × 10+056.31 × 10+047.31 × 10+037.40 × 10+03
HCLPSO7.94 × 10+036.90 × 10+033.23 × 10+023.74 × 10+025.78 × 10+021.59 × 10+021.62 × 10+029.48 × 10+011.07 × 10+057.10 × 10+042.05 × 10+023.74 × 10+02
CLPSO2.57 × 10+041.98 × 10+041.32 × 10+027.68 × 10+015.27 × 10+021.57 × 10+021.61 × 10+026.67 × 10+011.62 × 10+058.99 × 10+046.70 × 10+015.23 × 10+01
ABC1.12 × 10+058.61 × 10+048.96 × 10+026.87 × 10+026.94 × 10+022.05 × 10+023.06 × 10+021.00 × 10+022.07 × 10+051.11 × 10+051.67 × 10+032.15 × 10+03
MFO1.34 × 10+053.18 × 10+053.77 × 10+042.47 × 10+041.55 × 10+033.26 × 10+028.20 × 10+022.44 × 10+021.97 × 10+064.20 × 10+061.33 × 10+073.64 × 10+07
WOA8.01 × 10+059.48 × 10+058.12 × 10+047.55 × 10+041.87 × 10+033.64 × 10+027.56 × 10+022.76 × 10+022.08 × 10+062.15 × 10+062.89 × 10+062.11 × 10+06
BOA1.05 × 10+061.00 × 10+069.61 × 10+071.00 × 10+082.47 × 10+033.53 × 10+029.48 × 10+021.77 × 10+029.49 × 10+068.76 × 10+061.47 × 10+081.61 × 10+08
GWO2.80 × 10+054.07 × 10+055.25 × 10+059.48 × 10+057.70 × 10+022.49 × 10+022.83 × 10+021.74 × 10+021.16 × 10+061.62 × 10+065.61 × 10+051.48 × 10+06
BA2.96 × 10+072.83 × 10+072.83 × 10+091.84 × 10+096.57 × 10+032.13 × 10+031.23 × 10+041.96 × 10+043.10 × 10+082.57 × 10+084.40 × 10+092.49 × 10+09
| Algorithm | F20 mean | F20 std | F21 mean | F21 std | F22 mean | F22 std | F23 mean | F23 std | F24 mean | F24 std | F25 mean | F25 std |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| HGCLPSO | 2.04E+02 | 6.46E+01 | 2.67E+02 | 1.94E+01 | 4.80E+02 | 1.02E+03 | 4.06E+02 | 1.25E+01 | 5.00E+02 | 2.38E+01 | 3.81E+02 | 3.29E+00 |
| HCLDMS-PSO | 8.69E+01 | 5.03E+01 | 2.28E+02 | 6.67E+00 | 1.81E+02 | 4.51E+02 | 3.80E+02 | 7.09E+00 | 4.45E+02 | 7.18E+00 | 3.90E+02 | 2.73E+00 |
| TSLPSO | 1.22E+02 | 7.21E+01 | 2.55E+02 | 1.20E+01 | 3.61E+02 | 7.03E+02 | 3.95E+02 | 8.95E+00 | 4.58E+02 | 6.83E+01 | 3.88E+02 | 9.78E−01 |
| GLPSO | 4.32E+02 | 1.54E+02 | 2.65E+02 | 2.00E+01 | 8.08E+02 | 1.48E+03 | 4.22E+02 | 2.05E+01 | 4.94E+02 | 2.68E+01 | 3.90E+02 | 9.55E+00 |
| HCLPSO | 2.29E+02 | 8.90E+01 | 2.42E+02 | 4.69E+01 | 1.88E+02 | 4.90E+02 | 4.07E+02 | 1.08E+01 | 4.93E+02 | 2.14E+01 | 3.87E+02 | 1.35E+00 |
| CLPSO | 1.89E+02 | 6.10E+01 | 2.36E+02 | 4.57E+01 | 1.28E+02 | 4.63E+01 | 4.03E+02 | 9.74E+00 | 4.59E+02 | 1.11E+02 | 3.87E+02 | 8.79E−01 |
| ABC | 3.05E+02 | 1.01E+02 | 2.98E+02 | 5.05E+01 | 1.56E+03 | 1.45E+03 | 4.31E+02 | 3.06E+01 | 5.39E+02 | 1.95E+02 | 3.84E+02 | 1.20E+00 |
| MFO | 7.12E+02 | 2.27E+02 | 3.80E+02 | 4.41E+01 | 4.27E+03 | 1.38E+03 | 5.33E+02 | 3.10E+01 | 5.72E+02 | 3.41E+01 | 9.22E+02 | 5.70E+02 |
| WOA | 7.59E+02 | 1.86E+02 | 4.71E+02 | 5.97E+01 | 3.85E+03 | 2.43E+03 | 7.43E+02 | 9.54E+01 | 7.78E+02 | 8.07E+01 | 4.50E+02 | 3.01E+01 |
| BOA | 8.74E+02 | 1.30E+02 | 4.56E+02 | 8.94E+01 | 2.60E+03 | 1.06E+03 | 8.01E+02 | 8.90E+01 | 9.20E+02 | 1.19E+02 | 1.62E+03 | 4.15E+02 |
| GWO | 3.76E+02 | 1.36E+02 | 2.84E+02 | 1.59E+01 | 2.29E+03 | 1.48E+03 | 4.55E+02 | 5.14E+01 | 5.27E+02 | 4.96E+01 | 4.73E+02 | 2.73E+01 |
| BA | 1.57E+03 | 1.66E+02 | 7.56E+02 | 8.05E+01 | 9.11E+03 | 7.53E+02 | 1.62E+03 | 2.10E+02 | 1.88E+03 | 2.51E+02 | 7.13E+03 | 2.25E+03 |
| Algorithm | F26 mean | F26 std | F27 mean | F27 std | F28 mean | F28 std | F29 mean | F29 std | F30 mean | F30 std |
|---|---|---|---|---|---|---|---|---|---|---|
| HGCLPSO | 1.10E+03 | 7.38E+02 | 5.00E+02 | 2.24E+01 | 3.30E+02 | 5.81E+01 | 5.68E+02 | 1.02E+02 | 3.41E+03 | 7.29E+02 |
| HCLDMS-PSO | 1.26E+03 | 1.09E+02 | 5.18E+02 | 5.78E+00 | 4.25E+02 | 2.56E+01 | 4.95E+02 | 4.04E+01 | 8.80E+03 | 3.20E+03 |
| TSLPSO | 7.77E+02 | 6.20E+02 | 5.16E+02 | 6.19E+00 | 3.96E+02 | 4.10E+01 | 5.35E+02 | 7.00E+01 | 5.91E+03 | 2.00E+03 |
| GLPSO | 1.68E+03 | 6.19E+02 | 5.32E+02 | 9.73E+00 | 3.34E+02 | 5.59E+01 | 7.26E+02 | 1.81E+02 | 4.62E+03 | 1.56E+03 |
| HCLPSO | 5.24E+02 | 5.29E+02 | 5.16E+02 | 6.52E+00 | 3.48E+02 | 5.44E+01 | 5.83E+02 | 7.36E+01 | 5.17E+03 | 2.16E+03 |
| CLPSO | 6.63E+02 | 4.19E+02 | 5.13E+02 | 5.53E+00 | 4.28E+02 | 1.21E+01 | 5.65E+02 | 7.02E+01 | 6.64E+03 | 1.95E+03 |
| ABC | 7.82E+02 | 8.96E+02 | 5.15E+02 | 5.00E+00 | 3.94E+02 | 1.50E+01 | 6.57E+02 | 1.00E+02 | 5.57E+03 | 1.54E+03 |
| MFO | 3.26E+03 | 5.44E+02 | 5.58E+02 | 2.42E+01 | 1.28E+03 | 8.45E+02 | 1.31E+03 | 2.79E+02 | 6.40E+05 | 7.56E+05 |
| WOA | 4.92E+03 | 1.09E+03 | 6.74E+02 | 1.04E+02 | 4.91E+02 | 2.97E+01 | 1.82E+03 | 3.86E+02 | 9.68E+06 | 8.21E+06 |
| BOA | 5.76E+03 | 1.14E+03 | 7.47E+02 | 9.18E+01 | 2.68E+03 | 7.12E+02 | 2.31E+03 | 4.15E+02 | 2.28E+08 | 1.41E+08 |
| GWO | 2.03E+03 | 3.58E+02 | 5.39E+02 | 1.75E+01 | 6.02E+02 | 7.33E+01 | 8.96E+02 | 1.99E+02 | 5.59E+06 | 4.93E+06 |
| BA | 1.18E+04 | 2.23E+03 | 2.65E+03 | 4.72E+02 | 7.77E+03 | 2.25E+03 | 1.14E+04 | 8.02E+03 | 2.50E+09 | 1.23E+09 |
Table 11. Rank of mean performance and Wilcoxon rank sum test between HGCLPSO and other EAs on 30D CEC2017 test suite.
| Functions | HGCLPSO | HCLDMS-PSO | TSLPSO | GL-PSO | HCLPSO | CLPSO | ABC | MFO | WOA | BOA | GWO | BA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | 1 | 6(+) | 3(+) | 7(+) | 4(+) | 2(+) | 5(+) | 10(+) | 8(+) | 11(+) | 9(+) | 12(+) |
| F3 | 1 | 4(+) | 5(+) | 3(=) | 2(+) | 6(+) | 10(+) | 9(+) | 11(+) | 8(+) | 7(+) | 12(+) |
| F4 | 1 | 7(+) | 6(+) | 2(+) | 4(+) | 5(+) | 3(+) | 10(+) | 8(+) | 11(+) | 9(+) | 12(+) |
| F5 | 5 | 1(−) | 2(−) | 6(=) | 4(−) | 3(−) | 7(+) | 9(+) | 10(+) | 11(+) | 8(+) | 12(+) |
| F6 | 5 | 7(+) | 4(−) | 6(+) | 2(−) | 3(=) | 1(−) | 9(+) | 11(+) | 10(+) | 8(+) | 12(+) |
| F7 | 6 | 1(−) | 3(−) | 5(=) | 4(−) | 2(−) | 7(+) | 9(+) | 10(+) | 11(+) | 8(+) | 12(+) |
| F8 | 6 | 1(−) | 2(−) | 5(=) | 4(=) | 3(−) | 8(+) | 9(+) | 10(+) | 11(+) | 7(+) | 12(+) |
| F9 | 6 | 1(−) | 2(−) | 4(−) | 3(−) | 5(=) | 8(+) | 9(+) | 10(+) | 11(+) | 7(+) | 12(+) |
| F10 | 8 | 3(−) | 4(−) | 6(=) | 1(−) | 2(−) | 5(−) | 9(+) | 10(+) | 11(+) | 7(=) | 12(+) |
| F11 | 4 | 3(=) | 1(=) | 5(=) | 2(=) | 6(+) | 7(+) | 10(+) | 8(+) | 11(+) | 9(+) | 12(+) |
| F12 | 1 | 4(+) | 5(+) | 2(+) | 3(+) | 7(+) | 6(+) | 10(+) | 8(+) | 11(+) | 9(+) | 12(+) |
| F13 | 1 | 5(+) | 4(+) | 7(+) | 3(+) | 2(+) | 6(+) | 10(+) | 8(+) | 11(+) | 9(+) | 12(+) |
| F14 | 1 | 2(+) | 5(+) | 3(+) | 4(+) | 6(+) | 7(+) | 8(+) | 10(+) | 11(+) | 9(+) | 12(+) |
| F15 | 1 | 7(+) | 4(+) | 6(+) | 3(+) | 2(=) | 5(+) | 8(+) | 9(+) | 11(+) | 10(+) | 12(+) |
| F16 | 3 | 1(−) | 2(−) | 8(+) | 5(=) | 4(=) | 6(+) | 9(+) | 10(+) | 11(+) | 7(+) | 12(+) |
| F17 | 3 | 1(−) | 2(−) | 7(+) | 5(=) | 4(=) | 8(+) | 10(+) | 9(+) | 11(+) | 6(+) | 12(+) |
| F18 | 1 | 2(+) | 5(+) | 4(+) | 3(+) | 6(+) | 7(+) | 9(+) | 10(+) | 11(+) | 8(+) | 12(+) |
| F19 | 2 | 7(+) | 4(=) | 6(+) | 3(=) | 1(−) | 5(+) | 10(+) | 9(+) | 11(+) | 8(+) | 12(+) |
| F20 | 4 | 1(−) | 2(−) | 8(+) | 5(=) | 3(=) | 6(+) | 9(+) | 10(+) | 11(+) | 7(+) | 12(+) |
| F21 | 6 | 1(−) | 4(−) | 5(=) | 3(−) | 2(−) | 8(+) | 9(+) | 11(+) | 10(+) | 7(+) | 12(+) |
| F22 | 5 | 2(=) | 4(=) | 6(−) | 3(=) | 1(+) | 7(+) | 11(+) | 10(+) | 9(+) | 8(+) | 12(+) |
| F23 | 4 | 1(−) | 2(−) | 6(+) | 5(=) | 3(=) | 7(+) | 9(+) | 10(+) | 11(+) | 8(+) | 12(+) |
| F24 | 6 | 1(−) | 2(−) | 5(=) | 4(=) | 3(=) | 8(+) | 9(+) | 10(+) | 11(+) | 7(+) | 12(+) |
| F25 | 1 | 6(+) | 5(+) | 7(+) | 3(+) | 4(+) | 2(+) | 10(+) | 8(+) | 11(+) | 9(+) | 12(+) |
| F26 | 5 | 6(=) | 3(=) | 7(+) | 1(−) | 2(=) | 4(=) | 9(+) | 10(+) | 11(+) | 8(+) | 12(+) |
| F27 | 1 | 6(+) | 5(+) | 7(+) | 4(+) | 2(=) | 3(+) | 9(+) | 10(+) | 11(+) | 8(+) | 12(+) |
| F28 | 1 | 6(+) | 5(+) | 2(−) | 3(+) | 7(+) | 4(+) | 10(+) | 8(+) | 11(+) | 9(+) | 12(+) |
| F29 | 4 | 1(−) | 2(=) | 7(+) | 5(=) | 3(=) | 6(+) | 9(+) | 10(+) | 11(+) | 8(+) | 12(+) |
| F30 | 1 | 7(+) | 5(+) | 2(+) | 3(+) | 6(+) | 4(+) | 8(+) | 10(+) | 11(+) | 9(+) | 12(+) |
| Average rank | 3.24 | 3.48 | 3.52 | 5.31 | 3.38 | 3.62 | 5.86 | 9.28 | 9.52 | 10.76 | 8.03 | 12.00 |
| Final rank | 1 | 3 | 4 | 6 | 2 | 5 | 7 | 9 | 10 | 11 | 8 | 12 |
| +/=/− | — | 14/3/12 | 12/5/12 | 18/8/3 | 12/10/7 | 12/11/6 | 26/1/2 | 29/0/0 | 29/0/0 | 29/0/0 | 28/1/0 | 29/0/0 |
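The +/=/− entries above summarize pairwise Wilcoxon rank-sum tests between HGCLPSO and each competitor over the independent runs of every function. A minimal sketch of how such a tally can be produced with SciPy is given below; the run data are synthetic placeholders, and deciding the sign of a significant difference by comparing sample means is an assumption (medians are also commonly used):

```python
import numpy as np
from scipy.stats import ranksums

def wilcoxon_sign(errors_a, errors_b, alpha=0.05):
    """Compare two samples of final errors (minimization).

    Returns '+' if sample A is significantly better (smaller errors),
    '-' if significantly worse, and '=' if no significant difference
    at level `alpha`.
    """
    _, p = ranksums(errors_a, errors_b)
    if p >= alpha:
        return '='
    return '+' if np.mean(errors_a) < np.mean(errors_b) else '-'

# Synthetic example: algorithm A clearly outperforms B over 30 runs.
rng = np.random.default_rng(0)
runs_a = rng.normal(1.0, 0.1, size=30)  # hypothetical final errors of A
runs_b = rng.normal(2.0, 0.1, size=30)  # hypothetical final errors of B
print(wilcoxon_sign(runs_a, runs_b))    # prints '+'
```

Repeating this comparison per function and counting the three symbols per column yields the +/=/− row of the table.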
Table 12. Ranks obtained through Friedman test for 30D CEC2017 test suite problems.
| Algorithm | HGCLPSO | HCLDMS-PSO | TSLPSO | GL-PSO | HCLPSO | CLPSO | ABC | MFO | WOA | BOA | GWO | BA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Average rank | 3.25 | 3.71 | 3.64 | 4.87 | 3.66 | 4.27 | 5.69 | 9.04 | 9.57 | 10.59 | 7.74 | 11.99 |
| Final rank | 1 | 4 | 2 | 6 | 3 | 5 | 7 | 9 | 10 | 11 | 8 | 12 |

p-value: 3.98E−49
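The average ranks reported by the Friedman test are obtained by ranking the algorithms' mean errors within each benchmark function and averaging those per-function ranks per algorithm; the p-value then tests whether the algorithms' performances differ overall. A hedged sketch using SciPy on toy data (three hypothetical algorithms over four problems, not the paper's results):

```python
import numpy as np
from scipy.stats import rankdata, friedmanchisquare

# Rows = benchmark problems, columns = algorithms; entries are mean errors
# (toy numbers for illustration only).
mean_errors = np.array([
    [1.0, 2.0, 3.0],
    [0.5, 1.5, 2.5],
    [2.0, 1.0, 3.0],
    [1.0, 3.0, 2.0],
])

# Rank algorithms within each problem (rank 1 = smallest mean error),
# then average the ranks down each column.
per_problem_ranks = np.apply_along_axis(rankdata, 1, mean_errors)
average_rank = per_problem_ranks.mean(axis=0)
print(average_rank)  # average rank per algorithm (column)

# Friedman test on the same matrix (one argument per algorithm).
stat, p_value = friedmanchisquare(*mean_errors.T)
print(p_value)
```

The "Final rank" row is simply the ordering of these average ranks from smallest to largest.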
Table 13. Comparison of performance on 30D WSNs coverage problem.
| Algorithm | mean | std | rank |
|---|---|---|---|
| HGCLPSO | 0.9351 | 8.99E−04 | 1 |
| HCLDMS-PSO | 0.9322 | 1.25E−04 | 7 |
| TSLPSO | 0.9306 | 0.0030 | 9 |
| EPSO | 0.9334 | 0.0013 | 2 |
| GL-PSO | 0.9327 | 0.0021 | 4 |
| HCLPSO | 0.9327 | 0.0025 | 5 |
| SL-PSO | 0.9325 | 0.0021 | 6 |
| OLPSO | 0.9316 | 0.0011 | 8 |
| CLPSO | 0.9305 | 6.80E−04 | 10 |
| ABC | 0.9332 | 0.0021 | 3 |
| MFO | 0.9200 | 0.0093 | 12 |
| WOA | 0.9042 | 0.0093 | 13 |
| BOA | 0.7811 | 0.0053 | 14 |
| GWO | 0.9266 | 0.0048 | 11 |
| BA | 0.7446 | 0.0424 | 15 |
Table 8. Ranks obtained through Friedman test for 50D CEC2014 test suite problems.
| Algorithm | HGCLPSO | HCLDMS-PSO | TSLPSO | EPSO | GL-PSO | HCLPSO | SL-PSO | OLPSO | CLPSO |
|---|---|---|---|---|---|---|---|---|---|
| Average rank | 3.37 | 4.36 | 3.61 | 5.38 | 5.41 | 4.47 | 4.67 | 8.83 | 4.90 |
| Final rank | 1 | 3 | 2 | 7 | 8 | 4 | 5 | 9 | 6 |

p-value: 3.4904E−17

Share and Cite

Wang, S.; Li, Y.; Li, Y. Heterogeneous Genetic Learning and Comprehensive Learning Strategy Particle Swarm Optimizer. Algorithms 2025, 18, 755. https://doi.org/10.3390/a18120755
