Article

An Advanced Adaptive Group Learning Particle Swarm Optimization Algorithm

1 School of Management Science and Information Engineering, Jilin University of Finance and Economics, Changchun 130117, China
2 Jilin Province Key Laboratory of Fintech, Jilin University of Finance and Economics, Changchun 130117, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(5), 667; https://doi.org/10.3390/sym17050667
Submission received: 1 April 2025 / Revised: 23 April 2025 / Accepted: 25 April 2025 / Published: 27 April 2025

Abstract

The particle swarm optimization algorithm, known for its swift convergence and minimal parameters, is widely used to solve various complex optimization problems. Despite its merits, it is susceptible to premature convergence and suffers reduced search capability in later stages. To improve its optimization performance, this study designs an advanced adaptive Group Learning Particle Swarm Optimization (GLPSO) algorithm based on the concept of symmetry. First, a new coefficient is proposed to accelerate convergence, allowing the population to move quickly toward the optimal position. Second, a group learning mechanism is adopted in which particles learn from the best particle of each group during updates; this enhances the diversity of particle learning and improves the algorithm's global search capability. Third, a novel adaptive perturbation rule enables particles to apply perturbations in a manner suited to the current search stage, strengthening the population's ability to escape local optima. GLPSO is compared with several advanced algorithms on internationally recognized high-dimensional benchmark functions and is also applied to constrained engineering problems. The results indicate that GLPSO outperforms the other improved algorithms on most test problems and provides competitive, high-quality solutions for the engineering problems. Overall, GLPSO enhances optimization capability while maintaining robustness.

1. Introduction

With the rapid development of science and technology, meta-heuristic algorithms play an important role in solving problems in various fields, such as engineering technology [1,2,3], biomedicine [4,5], and information science [6,7,8,9,10]. Traditional optimization methods, such as the gradient descent method [11] and the Newton method [12], face issues like high computational complexity and the tendency to become trapped in local optima when dealing with complex, nonlinear, and multimodal optimization problems, which limits their scope of application. Therefore, researchers continuously explore new optimization algorithms in hopes of addressing these challenges more effectively.
Meta-heuristic algorithms, by abstractly simulating real-world processes and applying diverse search strategies within the search space to explore and update solutions, can address most optimization problems in these fields. For example, the Genetic Algorithm (GA) [13] solves optimization problems by simulating the biological evolution process. The Simulated Annealing algorithm (SA) [14] is based on the principle of the annealing process in physics and prevents the algorithm from falling into local optima by gradually reducing the temperature. Ant Colony Optimization (ACO) [15] simulates the pheromone renewal mechanism in the ant foraging process, enabling effective solutions for complex path planning problems. The Grey Wolf Optimizer (GWO) [16] explores the solution space by mimicking the hunting behavior of wolves in nature. The core idea of particle swarm optimization (PSO) [17] is based on the foraging behavior of birds in nature, finding the optimal solution through information sharing and cooperation among particles.
Different meta-heuristic algorithms are suitable for solving different problems due to their unique advantages. Since the particle swarm optimization (PSO) algorithm was proposed by Kennedy and Eberhart in 1995, it has quickly become a prominent research focus in the optimization field. This is attributed to its straightforward principles, ease of implementation, minimal parameter demands, and swift convergence rate [18]. Although the PSO algorithm has achieved remarkable results in many fields, it still has some shortcomings, such as premature convergence, declining search capability in later stages, sensitivity to parameters, and other issues [19,20,21]. To further improve the performance and applicability of the PSO algorithm, many researchers have proposed enhancements from various perspectives. These improvements include the introduction of new particle update strategies [22,23,24,25], dynamic adjustment of algorithm parameters [26], dynamic regulation of population size [27], and the integration of concepts from other optimization algorithms [28,29,30,31], aiming to improve the algorithm’s convergence speed, enhance its global search ability, and reduce the risk of getting trapped in local optima.
Liang et al. proposed the Comprehensive Learning Particle Swarm Optimizer (CLPSO) algorithm [19], which uses the best historical information from all particles in the group to update particle velocity. This algorithm updates particle information and considers different dimensions independently. The innovative handling of particle dimensions in CLPSO allows it to escape local optima effectively and perform well on high-dimensional problems; however, for general-dimensional problems, its advantages are less pronounced. In high-dimensional cases, the high sensitivity of CLPSO to dimension can lead to a significant increase in the algorithm’s practical complexity and computational load. Modified Particle Swarm Optimization (MPSO) [31] adopts an adaptive updating strategy and introduces both random and mainstream learning strategies, improving the global and individual best learning strategies in PSO and effectively enhancing algorithm performance. However, MPSO still suffers from slow convergence and a tendency to become trapped in local optima. The Elite Archives-Driven Particle Swarm Optimization (EAPSO) [25] establishes an elite archive and uses different update strategies for different particles, updating only a subset of particles. While this approach reduces the computational complexity of the algorithm, it also decreases the convergence speed.
In addition, there are many new perspectives for improving PSO. Sukanta et al. combined an improved Backtracking Search Algorithm (BSA) with PSO to propose a new variant (e-mPSOBSA) for solving global optimization tasks [29]. Li et al. introduced a Pyramid PSO (PPSO) with competitive and cooperative mechanisms to update particle information [32]. Meng et al. proposed a PSO-based single-objective numerical optimization variant (PSO-sono) [33]. EOPSO [34] proposed an elite–ordinary collaborative PSO method that divides particles into elite and ordinary members based on their fitness values.
To address the issues of increasing computational complexity, slow convergence speed, and the tendency to fall into local optima in PSO variants, this paper proposes an adaptive perturbation-based Group Learning Particle Swarm Optimization algorithm, called GLPSO. By preserving the original advantages of the PSO algorithm, it overcomes the shortcomings of slow convergence and susceptibility to local optima, thereby improving the overall performance of the algorithm. The algorithm demonstrates rapid convergence during the initial iteration stage, while the perturbation strategies employed at different stages enhance its global search performance and help avoid local optima. This leads to faster attainment of better optimal solutions and provides new insights for further improvements in optimization problems characterized by multiple local optima. The contributions of this paper are as follows:
  • This research utilizes the Latin hypercube sampling and reverse learning mechanisms for population initialization, ensuring that the initial population is evenly distributed across the search space. This approach yields a more advantageous initial population, enhancing the probability of the algorithm finding the global optimal solution during the search process, preventing premature convergence, and improving algorithm stability.
  • A new golden sine perturbation coefficient is proposed and applied to the linearly decreasing inertia weight, allowing the algorithm to adaptively explore the search space more effectively during different search phases and enhancing the convergence speed of the algorithm.
  • A staged group learning mechanism is proposed. In each iteration, the population is randomly divided into groups to establish a group learning approach, with the number of group members adapting to changes at different stages of the iteration. The velocity update formula is improved, enhancing the diversity of population learning and increasing the algorithm’s global search capability.
  • An adaptive Gaussian perturbation strategy is proposed to apply adaptive perturbations to different particle velocities in the group, facilitating particles to escape local optima and more effectively balancing the algorithm’s exploration and exploitation capabilities.
This paper is organized as follows: Section 2 reviews the classical PSO and introduces Latin hypercube sampling, opposition-based learning, and the double-sided mirror boundary handling method. Section 3 proposes the adaptive perturbation-based Group Learning Particle Swarm Optimization (GLPSO) algorithm. Section 4 presents the algorithm settings for the CEC-2017 test functions and analyzes the results. Section 5 provides a detailed performance analysis of the algorithm. In Section 6, GLPSO is applied to two engineering problems. Section 7 provides conclusions and outlines future research directions.

2. Basic Concepts

2.1. Classic PSO

PSO abstracts birds as particles by simulating their predatory behavior and utilizes information sharing among individuals in the group. This allows the entire group to search each region through a division of labor, continuously seeking the optimal solution while exploring the search space. The particle update diagram is illustrated in Figure 1.
In the search space, the position of the ith particle at the tth iteration is denoted as x_i^t, and its velocity as v_i^t. The yellow particle represents the particle currently being updated, the red particle represents its next position, and the green particles represent the other particles in the population during the tth iteration. Equations (1) and (2) give the update rules for the particle's velocity and position in the (t + 1)th generation in classical PSO.
v_i^{t+1} = ω^t · v_i^t + c_1 · r_1 · (pbest_i - x_i^t) + c_2 · r_2 · (gbest - x_i^t)   (1)
x_i^{t+1} = v_i^{t+1} + x_i^t   (2)
ω^t is the linearly decreasing inertia weight, while c_1 and c_2 are learning factors: c_1 represents the influence of individual experience on particle velocity updates, and c_2 represents the influence of group experience. r_1 and r_2 are random numbers in the range [0, 1]. pbest_i denotes the historical best position of particle i, and gbest denotes the current global best position. v_i^t and v_i^{t+1} represent the velocities of particle i at the tth and (t + 1)th iterations, respectively, and x_i^t and x_i^{t+1} represent the corresponding positions, which denote the solutions to the problem.
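As an illustration of how (1) and (2) translate into code, the following Python sketch performs one velocity and position update for the whole swarm; the function name and the learning-factor values c1 = c2 = 2.0 are illustrative choices, not settings taken from this paper.

import numpy as np

# Minimal sketch of the classic PSO update in (1) and (2); names and the
# learning factors are illustrative, not taken from the paper's code.
def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0):
    n, d = x.shape
    r1 = np.random.rand(n, d)   # r1, r2 ~ U[0, 1], drawn per particle and dimension
    r2 = np.random.rand(n, d)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (1)
    x_new = x + v_new                                               # Eq. (2)
    return x_new, v_new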

2.2. Latin Hypercube Sampling

Latin hypercube sampling (LHS) is a method for approximate random sampling from a multivariate parameter distribution. LHS evenly distributes sampling points across each dimension using stratified sampling techniques to ensure comprehensive coverage of samples within the entire variable space. In high-dimensional problems, LHS can significantly reduce the required number of samples while enhancing their representativeness and statistical efficiency. G. Li et al. utilized Latin hypercube sampling to select test points, ensuring that the test points covered the entire experimental space and improving the model's fitting accuracy [35]. Y. Wang et al. applied Latin hypercube sampling to enhance the AEO algorithm, making the initial population more evenly distributed across the overall variable space [36]. The basic steps of LHS are as follows:
  • Determine the number of sample points N;
  • Interval partitioning: Divide the cumulative distribution of the variable in the interval (0, 1) into N non-overlapping sub-intervals, ensuring that each interval contains an equal amount of probability mass;
  • Sample extraction: Randomly select one sample point independently from each partitioned interval to obtain N different sample points. This ensures that different sample points for the same variable do not fall within the same interval, avoiding overlap and ensuring the diversity and representativeness of the samples.
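The three steps above can be implemented compactly; the following sketch (with illustrative names) divides each dimension of the unit cube into N equal-probability strata, draws one point per stratum, and shuffles the strata independently per dimension.

import numpy as np

# Minimal Latin hypercube sampler following the steps listed above.
def latin_hypercube(n_samples, n_dims, rng=None):
    rng = np.random.default_rng(rng)
    # One random offset inside each of the N equal-probability sub-intervals, per dimension.
    samples = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
    # Independently permute each column so strata are paired randomly across dimensions.
    for j in range(n_dims):
        samples[:, j] = rng.permutation(samples[:, j])
    return samples  # values in (0, 1); each dimension has exactly one sample per stratum

design = latin_hypercube(30, 10)  # e.g., 30 sample points for a 10-dimensional problem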

2.3. Opposition-Based Learning

The basic idea of the opposition-based learning (OBL) mechanism is to consider both a solution and its symmetric solution as candidate solutions during the problem-solving process. The optimal solution is then selected from these two sets of candidates to achieve better performance. This mechanism not only expands the search space of the population but also helps avoid invalid searches, allowing the algorithm to converge more quickly to the global optimal solution and enhance search efficiency. Currently, the reverse learning mechanism is widely used to improve meta-heuristic algorithms. C. Li et al. improved the initialization strategy of the butterfly algorithm using the reverse learning strategy, significantly enhancing the algorithm’s convergence speed [37]. M.A.K. Almansuri et al. combined the opposition-based learning strategy with chaos theory and proposed an improved White Shark Optimizer (WSO) algorithm, which prevents premature convergence and enhances the optimization performance of the algorithm [38].
The basic process of the OBL is as follows:
Generate the reverse solution: Given a solution x = (x_1, x_2, …, x_n) in the search space, its reverse solution x'= (x'_1, x'_2, …, x'_n) is defined as in (3):
x'_i = x_min + x_max - x_i   (3)
where x_min and x_max represent the minimum and maximum values of the solution space. The fitness of the original solution and of the reverse solution are both evaluated (i.e., their objective function values are computed), and the solution with the better fitness value is retained in the population. Taking a minimization problem as an example, the selection is performed according to (4):
x_i = { x'_i,  if fitness'_i ≤ fitness_i;   x_i,  if fitness'_i > fitness_i }   (4)
where fitness'_i is the fitness value of the reverse solution generated by particle i through reverse learning.
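For clarity, the following sketch applies (3) and (4) to a whole population at once; the sphere objective and the function name are placeholders used only for illustration.

import numpy as np

# Opposition-based learning: build the reverse population via (3) and keep the
# better member of each original/reverse pair via (4) (minimization assumed).
def obl_select(x, x_min, x_max, fitness):
    x_rev = x_min + x_max - x                       # Eq. (3)
    f_orig = np.apply_along_axis(fitness, 1, x)
    f_rev = np.apply_along_axis(fitness, 1, x_rev)
    keep_rev = f_rev <= f_orig                      # Eq. (4)
    return np.where(keep_rev[:, None], x_rev, x)

sphere = lambda z: np.sum(z ** 2)                   # placeholder objective
pop = np.random.uniform(-100, 100, size=(30, 10))
pop = obl_select(pop, -100.0, 100.0, sphere)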

2.4. Double-Faced Mirror Theory Boundary Optimization

In general, variants of the particle swarm optimization algorithm handle boundary conditions by setting out-of-bounds particles equal to the boundary values. This approach can cause a large number of particles to accumulate at the boundaries, restricting their exploration of the entire search space. Wang et al. proposed a two-sided mirror reflection boundary handling method in 2020 [39]. The upper and lower boundaries are treated as two mirrors, and the out-of-bounds quantity is treated as a propagating beam whose magnitude represents light intensity. The beam is reflected off the mirrors multiple times and, due to medium loss, eventually diminishes, as illustrated in Figure 2.
The reflection boundary processing method of the double-sided mirror is as in (5),
y = { x_max - mod(y - x_max, x_max - x_min),  if y > x_max;   x_min + mod(x_min - y, x_max - x_min),  if y < x_min }   (5)
Through the double-mirror reflection processing, the final particles remain within the search space, effectively addressing the issue of particle accumulation at the boundaries and enhancing the particles’ exploration capabilities of the entire space. The GLPSO algorithm uses the double-mirror reflection boundary processing method to handle the velocity of out-of-bounds particles.
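A direct translation of (5) into code is given below; the function name is illustrative, and the rule is written for a generic vector y of positions or velocities with bounds [x_min, x_max].

import numpy as np

# Double-sided mirror reflection of out-of-bounds values back into [x_min, x_max], per (5).
def mirror_reflect(y, x_min, x_max):
    width = x_max - x_min
    y = np.asarray(y, dtype=float).copy()
    over, under = y > x_max, y < x_min
    y[over] = x_max - np.mod(y[over] - x_max, width)    # reflected off the upper mirror
    y[under] = x_min + np.mod(x_min - y[under], width)  # reflected off the lower mirror
    return y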

3. Group Learning Particle Swarm Optimization Algorithm

This paper proposes an adaptive perturbation Group Learning Particle Swarm Optimization (GLPSO) algorithm. The flowchart of GLPSO is shown in Figure 3.
The innovation of GLPSO lies in the following aspects: It uses Latin hypercube sampling combined with the reverse learning mechanism to initialize the population, enhancing the quality of the initial population while also improving the robustness of the algorithm. A new golden sinusoidal disturbance coefficient combined with linearly reduced inertia weight is proposed to enhance the algorithm’s global search ability in the early iteration stage, allowing the GLPSO algorithm to converge and explore the dominant region of the solution space in the later stages. Additionally, improvements are made to the random group learning of the population and the velocity updating formula, increasing the diversity of population learning and enabling particles to search the search space more comprehensively. The algorithm introduces an adaptive Gaussian perturbation strategy, allowing the velocities of the group particles to adapt to perturbations during different periods, facilitating the particles in escaping local optima for a more effective global search.

3.1. Population Initialization

Latin hypercube sampling uses stratification to ensure that the particle initialization positions are evenly distributed throughout the search space, with each subspace containing only one sample. This type of distribution enhances the representativeness of the population and improves the algorithm's global exploration capability. GLPSO sets the population size to n. A matrix lhsDesign with n rows is generated using Latin hypercube sampling, with values in [0, 1] representing relative positions uniformly distributed across the corresponding dimensions. The values in lhsDesign are then mapped to the real search space [x_min, x_max] using the scaling in (6) to obtain the Latin population x_i (i = 1, 2, …, n).
x_i = x_min + (x_max - x_min) · lhsDesign   (6)
After the Latin population is obtained through Latin hypercube sampling, the reverse population x'_i (i = 1, 2, …, n) is generated using (3). The greedy rule in (4) is then applied to select the solution with the better fitness as the position of each initial particle, forming the initial population.
Through Latin hypercube sampling, the particles are evenly distributed in the solution space. Subsequently, the reverse solution of the original particle solutions is generated using the reverse learning mechanism, which enhances the diversity of the population. The greedy rule is then applied to select the solution with the better fitness value as the initial solution for the population, thereby improving the quality and global distribution of the initial population. As a result, the algorithm evolves from a more favorable initial state, maintains a stable global search trend throughout the iterative process, and enhances the convergence speed of the algorithm.
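Putting Section 3.1 together, the following sketch builds the initial population from a Latin hypercube design, its reverse counterpart, and greedy selection; the sphere objective and all function names are illustrative assumptions.

import numpy as np

# Sketch of the GLPSO initialization: LHS design scaled by (6), reverse
# population by (3), and greedy selection by (4).
def glpso_init(n, d, x_min, x_max, fitness, rng=None):
    rng = np.random.default_rng(rng)
    lhs = (np.arange(n)[:, None] + rng.random((n, d))) / n   # stratified samples in (0, 1)
    for j in range(d):                                       # decorrelate the dimensions
        lhs[:, j] = rng.permutation(lhs[:, j])
    x = x_min + (x_max - x_min) * lhs                        # Eq. (6)
    x_rev = x_min + x_max - x                                # Eq. (3)
    f = np.apply_along_axis(fitness, 1, x)
    f_rev = np.apply_along_axis(fitness, 1, x_rev)
    return np.where((f_rev <= f)[:, None], x_rev, x)         # Eq. (4)

init_pop = glpso_init(30, 10, -100.0, 100.0, lambda z: np.sum(z ** 2))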

3.2. Stochastic Group Learning Mechanism

The original particle swarm optimization (PSO) algorithm relies on individual and global learning strategies to update particle positions, which can lead to issues such as becoming trapped in local optima and premature convergence.
In contrast, GLPSO introduces a novel random group learning strategy that enhances interaction among particles. This approach strengthens the algorithm’s global exploration capabilities, helping to mitigate the risk of premature convergence and improving the likelihood of finding more optimal solutions.
During each iteration, the population is randomly divided into groups based on the number of group members. When the number of remaining particles is insufficient to form a complete group, the remaining particles are assigned to one group. The fitness values of the particles within each group are compared, and the best particle in each group is recorded as groupbest_i^t. During the velocity update, particles learn from their respective local best particles.
This random group learning mechanism provides particles with more opportunities to learn from different high-quality particles, enabling them to explore new directions and thereby enhancing the algorithm’s search capability. For particles that exceed the boundaries after learning, the two-sided mirror reflection boundary processing method is applied to handle their boundary conditions [39].
The velocity update is influenced by the golden disturbance coefficient gc through the golden inertia coefficient gω^t, which indicates the effect of the particle's velocity from the previous iteration on its update; gω^t is given by (11) in Section 3.3.
In (7), t represents the current iteration count, pbest_i denotes the current particle's individual historical best position, gbest represents the current global best position, and groupbest_i^t denotes the best position found within the group to which the current particle belongs.
The parameters c_1, c_2, and c_3 are learning factors: c_1 indicates the influence of individual experience on the particle's velocity update, c_2 represents the influence of the global (swarm) experience, and c_3 reflects the influence of the group's experience on the velocity update.
v_i^{t+1} = ω^t · v_i^t + c_1 · r_1 · (pbest_i - x_i^t) + c_2 · r_2 · (gbest - x_i^t) + c_3 · r_3 · (groupbest_i^t - x_i^t)   (7)
x_i^{t+1} = v_i^{t+1} + x_i^t   (8)
r_1, r_2, and r_3 are three random numbers in the range [0, 1], representing the extent to which the individual, global, and group experiences affect the current particle's update, respectively.
The particle update mode is illustrated in Figure 4. Assuming a population size of n = 9, at the tth iteration the particles are randomly divided into 3 groups, and the optimal particle within each group is determined from the particles' fitness values. In this context, the purple particle denotes the optimal particle of the group to which the yellow particle belongs. The joint influence of pbest_i, gbest, and groupbest_i^t determines the updated position of the particle, x_i^{t+1}.
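The grouping and the extended velocity update in (7) and (8) can be sketched as follows; the group size q corresponds to q_1 or q_2 depending on the iteration stage (Section 3.4), while the function name and the learning-factor values are illustrative rather than taken from the paper.

import numpy as np

# One stochastic group learning step: random groups of size q, group-best
# lookup, and the three-term velocity update of (7)-(8).
def group_learning_step(x, v, fit, pbest, gbest, w, q, c1=1.5, c2=1.5, c3=1.5, rng=None):
    rng = np.random.default_rng(rng)
    n, d = x.shape
    order = rng.permutation(n)
    groupbest = np.empty_like(x)
    for start in range(0, n, q):
        members = order[start:start + q]            # leftover particles form the last group
        best = members[np.argmin(fit[members])]     # best particle of this group
        groupbest[members] = x[best]
    r1, r2, r3 = (rng.random((n, d)) for _ in range(3))
    v_new = (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
             + c3 * r3 * (groupbest - x))           # Eq. (7)
    return x + v_new, v_new                         # Eq. (8)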

3.3. Particle Renewal Based on Gold Disturbance Coefficient

The inertia weight ω^t, an essential parameter in PSO, provides particles with the adaptability to adjust to various environments, achieving a balance between exploration and exploitation. Typically, a linear inertia weight update method is used [31]. For instance, in (9), the inertia weight decreases linearly from ω_max to ω_min as the number of iterations increases [40].
This research introduces a golden sinusoidal disturbance coefficient (gc) in addition to the linearly varying inertia weight (ω^t) of the original PSO. The calculation of gc is shown in (10). The product of gc and ω^t (denoted gω^t), defined in (11), is used to control the particle velocity during iterations and thereby improve convergence speed. By combining the golden ratio coefficient with a sine function, the control strength of the golden coefficient over particle velocity decreases nonlinearly with the number of iterations.
ω^t = ω_max - (ω_max - ω_min) · t / T_max   (9)
gc = g_1 + g_2 · (1 - sin((π/2) · (t / T_max))) · τ^(-1)   (10)
gω^t = ω^t · gc   (11)
In the early iteration stages, the g ω t value is large, allowing particles to explore a broader solution space, enhancing the algorithm’s global search capability and reducing the risk of premature convergence. In later iterations, the g ω t value decreases, enabling particles to perform refined searches within potential optimal solution regions.
Compared with the linearly decreasing coefficient ω t , the g c coefficient exerts a stronger initial control over particle velocity. Influenced by the sine function, its control strength decays rapidly, allowing the particles to maintain a certain velocity for local exploitation in the later stages of iteration. This type of decay better aligns with the evolutionary patterns observed in natural systems, supporting the need for fast exploration early on and slower refinement later. The g c coefficient achieves a better balance between global exploration and local exploitation at different stages of the GLPSO algorithm, thereby enhancing its overall stability.
In this algorithm, the maximum iteration count is denoted by T_max, and t represents the current iteration. Parameters g_1 and g_2 control the lower and upper bounds of the gc coefficient, respectively. They constrain the change in particle velocity within a reasonable range, ensuring that the velocity neither becomes excessively large (which could destabilize the search) nor too small (which could reduce the particles' exploratory capability). Extensive experimentation showed that with g_1 = 0.1 the algorithm maintains a consistent level of global exploration throughout the entire process, and with g_2 = 1.6 its exploration ability is enhanced during the early stages and gradually reduced over the iterations to improve convergence. This facilitates a better balance between exploration and exploitation, under which the GLPSO algorithm achieves optimal performance. Additionally, τ is the golden ratio coefficient, approximately equal to 0.618.
Taking the iteration count T_max = 1000 as an example, the change in gω^t over the iterations is shown in Figure 5. In the early stages, its value decreases rapidly as the iteration count increases and then gradually levels off in the later stages. Compared with the traditional inertia coefficient ω^t, gω^t allows a broader exploration of the search space in the early stages, enabling the algorithm to converge quickly toward potentially optimal regions. In the later stages, the value of gω^t levels off more than the traditional inertia coefficient, allowing a more thorough search in the vicinity of the potentially optimal region.
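The schedule of (9)-(11) can be reproduced with a few lines of code; this sketch assumes the reconstructed form of (10) shown above, uses the reported g_1, g_2, and τ values, and takes ω_max = 0.9 and ω_min = 0.4 as commonly used inertia bounds that are not stated explicitly in the text.

import numpy as np

# Golden inertia coefficient gω^t of (9)-(11); w_max/w_min are assumed defaults.
def golden_inertia(t, t_max, w_max=0.9, w_min=0.4, g1=0.1, g2=1.6, tau=0.618):
    w_t = w_max - (w_max - w_min) * t / t_max                       # Eq. (9)
    g_c = g1 + g2 * (1.0 - np.sin(0.5 * np.pi * t / t_max)) / tau   # Eq. (10), as reconstructed
    return w_t * g_c                                                # Eq. (11)

# Trend discussed around Figure 5 (T_max = 1000): rapid early decay, flat tail.
gw_curve = [golden_inertia(t, 1000) for t in range(1001)]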

3.4. Adaptive Gaussian Perturbation Strategy

According to numerous experiments, perturbing particle velocity can effectively balance global and local search capabilities, preventing the algorithm from falling into local optima. GLPSO employs an adaptive Gaussian perturbation strategy to adjust particle velocity, which significantly enhances the algorithm’s convergence speed.
We apply selective Gaussian perturbation to each dimension of a particle’s velocity, with a perturbation probability p r . This means that each dimension of the velocity is perturbed with probability p r , which helps prevent particles from diverging or missing local optimal regions. It enhances the diversity and non-determinism of the search, making it more conducive to effective exploration and algorithm evolution. Multiple experiments have shown that when p r = 0.3 , the algorithm not only maintains optimization stability but also significantly improves its ability to escape local optima.
The standard deviation σ of the Gaussian perturbation is defined, and for each selected velocity dimension a Gaussian random value gaussian_vel is generated with a mean of 0 and a standard deviation of σ. The perturbed velocity vel is then updated as shown in (12).
vel = gaussian_vel · vel   (12)
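The selective perturbation of (12) can be written as below; p_r = 0.3 follows the text, whereas the σ value and the function name are illustrative assumptions.

import numpy as np

# Selective Gaussian perturbation of a velocity vector: each dimension is
# rescaled by a N(0, sigma) draw with probability p_r, per Eq. (12).
def gaussian_perturb(vel, pr=0.3, sigma=1.0, rng=None):
    rng = np.random.default_rng(rng)
    vel = np.asarray(vel, dtype=float).copy()
    mask = rng.random(vel.shape) < pr               # dimensions selected for perturbation
    gaussian_vel = rng.normal(0.0, sigma, size=vel.shape)
    vel[mask] = gaussian_vel[mask] * vel[mask]      # Eq. (12)
    return vel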
In different update stages of small particle groups, different particles require perturbation to escape local optima. Thus, the adaptive Gaussian perturbation strategy is divided into two stages: the early iteration stage and the later iteration stage. In the early stage, there are significant differences in the average fitness values of each particle group. Based on the average fitness values and standard deviations of the different groups, targeted perturbation strategies are applied to protect high-performing particles and perturb those with lower performance, thereby enhancing the algorithm’s optimization speed.
Phase 1: The early iteration stage covers the period from the first iteration to the mid-point of the iterations; q_1 denotes the number of particles in each group. The group's average fitness groupfit_m is calculated according to (13), where m is the group index, m = 1, 2, …, n/q_1.
When solving for the minimum of a problem, a lower fitness value indicates a better optimization result from the algorithm. If the group’s mean value is relatively high, the overall optimization performance of the particles in that group is poor, requiring perturbation. The average fitness values of each group are compared, and the group with the highest fitness value is selected as the worst-performing group. Selective Gaussian perturbation is applied to all particles in the worst-performing group by (14). Through the first phase of updating, the particle swarm evolves toward the dominant region.
groupfit_m = (fit_1 + fit_2 + ⋯ + fit_{q_1}) / q_1   (13)
worstfit = arg max { groupfit_1, groupfit_2, …, groupfit_m }   (14)
Phase 2: In the later iteration stage, particles are concentrated in the advantageous region and need to learn from more prominent particles. The number of particles in each group is set to q 2 .
In this stage, the algorithm shifts from the global exploration of the first phase to the local exploitation of promising positions. Set q 2 > q 1 to enable particles to strengthen the local exploitation of high-quality positions in this period. During group learning, particles benefit from learning from more dominant particles, allowing a more thorough exploration of the surrounding area.
The average fitness and standard deviation of the population are calculated using (15) and (16), while the average fitness value g r o u p f i t m and standard deviation g r o u p s t d m of each group are calculated according to (17) and (18), respectively. Different Gaussian perturbation strategies are selected for the particle groups based on a comparison of each group’s mean and standard deviation with those of the overall population.
meanfit = (Σ_{i=1}^{n} fit_i) / n   (15)
std = √( (1/n) · Σ_{i=1}^{n} (fit_i - meanfit)^2 )   (16)
meanfit represents the average fitness value of the population; std indicates the standard deviation of the population; n is the population size; and fit_i denotes the fitness value of particle i.
groupfit_m = (fit_1 + fit_2 + ⋯ + fit_{q_2}) / q_2   (17)
groupstd_m = √( (1/q_2) · Σ_{i=1}^{q_2} (fit_i - groupfit_m)^2 )   (18)
groupfit_m indicates the intra-group average fitness of group m, and groupstd_m represents the standard deviation of group m. To compare the degree of dispersion of each group of particles, the dispersion coefficient is used. Equations (19) and (20) calculate the dispersion degree among the population particles (V^σ) and among the particles in group m (V_m^σ), respectively.
V^σ = meanfit / std   (19)
V_m^σ = groupfit_m / groupstd_m   (20)
When the group's average value is less than the population average and the group dispersion coefficient is greater than or equal to that of the population, it suggests that the particles within the group are performing ineffectively, and the worst-performing particle in the group will be perturbed to enhance the update range of the particles.
case 1: V_m^σ ≥ V^σ → perturb the worst particle
case 2: groupfit_m ≤ meanfit and V_m^σ < V^σ → leave undisturbed
case 3: groupfit_m > meanfit and V_m^σ < V^σ → perturb the whole group of particles   (21)
If the group’s average value falls below the population average and the group’s dispersion coefficient is lower than that of the population, it indicates that the particles in the group are updating effectively, and no perturbation is required for the particles within the group.
When the average value of the group is greater than the population average and the group dispersion coefficient is less than the population dispersion coefficient, it indicates that the overall performance of the particles in that group is poor, and a full group perturbation will be applied to enhance the global search capability of the particles and escape from local optima, as shown in (21).
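The second-stage decision logic of (15)-(21) is summarized in the sketch below, using the dispersion coefficients as defined in (19) and (20); perturb stands for any per-particle velocity perturbation (for example, the Gaussian rule sketched above), and all names are illustrative.

import numpy as np

# Stage-two adaptive perturbation: compare each group with the population via
# the mean fitness and dispersion coefficient, then apply one of the three
# cases in (21). groups is a list of index arrays; perturb maps a velocity
# vector to its perturbed version.
def stage_two_perturb(v, fit, groups, perturb):
    mean_fit, std = fit.mean(), fit.std()                        # Eqs. (15)-(16)
    v_sigma = mean_fit / std                                     # Eq. (19)
    for members in groups:
        g_fit, g_std = fit[members].mean(), fit[members].std()   # Eqs. (17)-(18)
        vm_sigma = g_fit / g_std                                 # Eq. (20)
        if vm_sigma >= v_sigma:                                  # case 1: perturb the worst particle
            worst = members[np.argmax(fit[members])]
            v[worst] = perturb(v[worst])
        elif g_fit > mean_fit:                                   # case 3: perturb the whole group
            for i in members:
                v[i] = perturb(v[i])
        # case 2 (g_fit <= mean_fit and vm_sigma < v_sigma): leave undisturbed
    return v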
The overall algorithm flow is shown in Algorithm 1, where the working process of GLPSO is detailed. The input parameters include the maximum number of iterations T_max, population size n, and problem dimension D. The dominant initial population is obtained through Latin hypercube sampling combined with the reverse learning mechanism, and the historical optimal position of each particle, pbest_i, and the global optimal position of the population, gbest, are updated. After the population initialization is completed, the particles enter the iterative update phase, where different update strategies are selected to perturb the particle velocities at different stages of iteration.
Algorithm 1 GLPSO
1: Input: Population size n, maximum number of iterations T_max, inertia weight ω, position boundary [x_min, x_max], velocity boundary [v_min, v_max], q_1 = 3, q_2 = 10, p_r = 0.3
2: Output: Optimal solution
3: Initialize: x_i is calculated by Equations (3), (4), and (6)
4: Randomly initialize the velocity vector v_i
5: Initialize gbest and pbest
6: for t = 1 to T_max do
7:     if t ≤ T_max / 2 then
8:         Specify the group size q_1 and randomly divide the population into groups
9:         for i = 1 to n do
10:            Calculate groupfit_m by (13)
11:            Find the worst group by (14)
12:            Perform Gaussian perturbation on all particles in the worst group
13:            Calculate v_i^{t+1} by (7) and x_i^{t+1} by (8)
14:            Check the boundaries
15:            Update gbest; update pbest
16:        end for
17:    else
18:        Specify the group size q_2 and randomly divide the population into groups
19:        for i = 1 to n do
20:            Calculate the dispersion coefficient V^σ of the population and V_m^σ of each group according to (15)-(20)
21:            Perturb different groups using the three disturbance strategies in (21)
22:            Calculate v_i^{t+1} by (7) and x_i^{t+1} by (8)
23:            Check the boundaries
24:            Update gbest; update pbest
25:        end for
26:    end if
27: end for

4. Experimental Results and Discussions

4.1. Experiment Setup

In this experiment, tests were conducted on the CEC-2017 [41] test function set to evaluate the performance of GLPSO. The CEC-2017 test function set is one of the most comprehensive test suites for assessing and comparing the performance of evolutionary algorithms. These functions, shown in Table 1, are categorized into four classes: unimodal functions (f1–f3), simple multimodal functions (f4–f10), hybrid functions (f11–f20), and composition functions (f21–f30).
In this experiment, the proposed algorithm is compared with several classical meta-heuristic algorithms, including PSO [40], GA [13], and GWO [16]. To better assess the performance of the improved algorithm, GLPSO was also compared with the classical PSO variant CLPSO [19] and advanced PSO variants such as MPSO [31], CAPSO [42], and EAPSO [25]. Details of these algorithms are provided in Table 2. Extensive comparative experiments confirmed that GLPSO achieves its best performance with q_1 = 3, q_2 = 10, and p_r = 0.3.

4.2. Empirical Analysis and Discussion

The proposed algorithm and the comparison algorithms were independently run 30 times on the CEC-2017 test function set in different dimensions and evaluated using three metrics: the mean of the global best values over the 30 independent runs, the corresponding standard deviation, and the convergence curve. A smaller mean indicates better optimization performance, while a smaller standard deviation reflects stronger robustness of the algorithm. Additionally, the algorithms' performances are compared visually by ranking the global best mean of each algorithm. All evaluations were conducted under the same conditions, with the best results highlighted in bold.
As shown in Table 3, when D = 30, GLPSO performs well on two unimodal functions (f2–f3), but temporarily falls into local optima on f1. On the simple multimodal functions (f4–f10), its performance is excellent. These functions have multiple local optima, and the adaptive Gaussian perturbation strategy enables GLPSO to escape local optima and conduct a global search more effectively, resulting in superior performance on these problems. For the hybrid functions (f11–f20), GLPSO shows good performance, particularly on test functions f11, f16, f17, and f20. On functions f12, f18, and f19, the optimal mean achieved by GLPSO is slightly lower than that of other comparison algorithms, ranking second. However, GLPSO performs poorly on function f13. In the composition functions (f21–f30), GLPSO performs well on f23, f24, f26, f28, and f29. However, its performance on functions f21, f22, and f25 is comparatively weaker.
Figure 6 shows the convergence performance of GLPSO and the comparison algorithms on the CEC2017 test function set when D = 30, providing a more intuitive illustration of GLPSO’s convergence speed and robustness. On most multimodal and composite functions, GLPSO demonstrates strong convergence capabilities. For certain functions (f5, f8, f10, f14, f16, f20), GLPSO achieves secondary convergence during the mid-stage of the optimization process by utilizing dynamic grouping and perturbation strategies, ultimately leading to excellent optimization results. However, on a few functions (e.g., f1), GLPSO shows a disadvantage, mainly due to the nature of these functions focusing on testing the algorithm’s local exploitation ability.
GLPSO was compared with several meta-heuristic algorithms, classical PSO variants, and advanced PSO variants. The test results indicate that the proposed algorithm ranks first in the 30-dimensional CEC-2017 test functions, demonstrating its superior optimization performance, particularly in handling problems with multiple local optima. However, there are still some optimization problems where the algorithm falls into local optima, indicating that further improvements can be made.
As shown in Table 4, when D = 50, the performance of GLPSO decreases compared to its performance in the 30-dimensional setting. However, for the CEC-2017 function set tested, GLPSO still outperforms the other comparison algorithms.
In the unimodal functions (f1–f3), GLPSO performs well on f2 and f3, while on f1, its performance is slightly worse than that of CLPSO. For the simple multimodal functions (f4–f10), GLPSO continues to perform well, although its performance on f10 is slightly lower compared to the 30-dimensional case. In the hybrid functions (f11–f20), GLPSO’s performance decreases compared to its 30-dimensional results but remains strong overall. Notably, it shows excellent optimization on functions f11, f18, and f19, while performing poorly on f13, f14, and f15. In the composition functions (f21–f30), GLPSO performs well on f23, f24, f26, f28, and f29, with only a weak performance on f27.
Based on the optimization results on the CEC-2017 test function set, GLPSO demonstrates excellent optimization performance, showing stability and strong optimization capabilities in most optimization problems. The combination of the random grouping strategy and the adaptive Gaussian perturbation strategy enables rapid secondary convergence during the mid-iteration phase.
As shown in Figure 6, functions f5, f7, f8, f10, f14, f16, f18, and f20 exhibit significantly rapid convergence during the mid-iteration phase. For the more complex composition functions (f21–f30), setting a higher number of iterations allows the improved strategy, which combines the random grouping strategy and the adaptive Gaussian perturbation strategy, to demonstrate greater advantages. The adaptive Gaussian perturbation strategy enhances the algorithm’s global search capability while maintaining robustness, achieving a balance between global exploration and local exploitation.
The Latin hypercube method ensures that particles are evenly distributed throughout the search space during initialization, which is beneficial for exploring the search space. The combination of the Latin hypercube and reverse learning strategies is employed for initialization, allowing the GLPSO algorithm to obtain a more advantageous particle population. With D = 30 and the number of iterations set to T_max = 2, the CEC-2017 test functions were run independently 30 times, and GLPSO was compared with MPSO [31], CAPSO [42], and EAPSO [25] to evaluate the globally optimal particle obtained from initialization. The global optimal results of the 30 particle initializations are shown in Figure 7.
As demonstrated in Figure 7, the initialization strategy of GLPSO can achieve more advantageous particle positions more consistently across four different test functions. The combination of the Latin hypercube method and the reverse learning mechanism allows the GLPSO algorithm to obtain a stable and high-quality initialization population during the initialization phase.

5. Performance Analysis

5.1. Ablation Study Analysis

To evaluate the effectiveness of each improvement in GLPSO, an ablation study was conducted for qualitative analysis of the algorithm. Specifically, three variants were tested by removing individual components from GLPSO: the g c coefficient (GLPSO-gc), the group learning mechanism (GLPSO-gl), and the adaptive Gaussian perturbation strategy (GLPSO-gp). The population size was set to N = 30 , the problem dimension to D = 20 , and each algorithm, including the full GLPSO, was run on the CEC-2022 test function set [43] for 500 iterations, with 30 independent runs. The experimental results are shown in Figure 8.
The results indicate that each strategy within GLPSO is effective. These strategies interact and complement each other, balancing global exploration and local exploitation, ultimately forming a GLPSO algorithm that is both efficient and robust.

5.2. Complexity Analysis

To evaluate the computational efficiency and scalability of GLPSO, a time complexity analysis was performed for both the standard PSO algorithm and the improved algorithm. Let N be the number of particles, D be the problem dimension, T be the maximum number of iterations, C be the cost of each function evaluation, and G be the number of groups.
For standard PSO, the initialization time consists of generating the velocity and position for each particle and calculating the fitness values, which results in a complexity of O(ND + NC). In the iteration phase, the complexity of each generation includes the update of velocity and position, the computation of fitness values, and the update of the global best, giving O(ND + NC). Therefore, the overall time complexity of standard PSO is O(TN(D + C)).
GLPSO, which uses Latin hypercube sampling combined with opposition-based learning for initialization, has a complexity of O(ND·log_2 D + NC) during the initialization phase. In each iteration, the algorithm performs additional computations for the standard deviation, mean, grouping, and perturbation handling, so the time complexity of this phase is O(ND + NC). Thus, the overall time complexity of GLPSO is also O(TN(D + C)).
In summary, the time complexity expressions for GLPSO and standard PSO are the same, but GLPSO has a larger constant factor, making its computational complexity slightly higher than that of standard PSO.

5.3. Parameter Sensitivity Analysis

To evaluate the impact of the adaptive perturbation parameter p_r on GLPSO, comparative experiments were conducted on the CEC-2017 and CEC-2022 test function sets. Three different values of p_r were tested: 0.2, 0.3, and 0.4. With other parameters kept constant, 500 iterations were performed for each experiment, and each algorithm was run independently 30 times to analyze the effect of different p_r values on the algorithm's performance.
Figure 9 shows the convergence of GLPSO on the CEC-2022 test set under different perturbation rates. The experimental results indicate that when p_r is in the range of 0.2 to 0.4, the algorithm exhibits excellent convergence and strong robustness. On certain functions, a lower p_r value leads to insufficient population diversity, causing premature convergence, while a higher p_r value introduces too much randomness, affecting the convergence quality. When p_r = 0.3, GLPSO converges very quickly on certain functions (f2, f3, f5, f11).

6. Application to Engineering Problems

In previous experiments, GLPSO demonstrated excellent performance on unconstrained problems. To further evaluate its performance, the algorithm was compared with several advanced algorithms on two classic engineering problems from the 57 real-world constrained problem benchmark suite [44], with constraint handling implemented using a penalty function approach [45].
To ensure a fair comparison with other advanced algorithms, the parameter settings of GLPSO were kept consistent with those of the compared algorithms. The population size N was set to 100, and the maximum number of iterations was set to 15,000. The algorithm was independently run 30 times on the constrained problems, and the best result was recorded. The comparison results are shown in Table 5 and Table 6. It is worth noting that the solutions for comparison were directly extracted from the corresponding reference literature.

6.1. Tension/Compression Spring Design

This problem involves four constraints and three variables. As shown in Figure 10, these include the diameter of the wire (x_1), the mean diameter of the coil (x_2), and the number of active coils (x_3). The objective is to minimize the weight of a tension/compression spring.
The mathematical model is as follows (22):
Minimize   f(x) = x_1^2 · x_2 · (2 + x_3)
Subject to:
g_1(x) = 1 - (x_2^3 · x_3) / (71785 · x_1^4) ≤ 0
g_2(x) = (4 x_2^2 - x_1 x_2) / (12566 (x_2 x_1^3 - x_1^4)) + 1 / (5108 x_1^2) - 1 ≤ 0
g_3(x) = 1 - (140.45 x_1) / (x_2^2 x_3) ≤ 0
g_4(x) = (x_1 + x_2) / 1.5 - 1 ≤ 0   (22)
where 0.05 ≤ x_1 ≤ 2, 0.25 ≤ x_2 ≤ 1.30, and 2.00 ≤ x_3 ≤ 15.
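To illustrate the penalty-function handling mentioned above (the paper cites [45] for its constraint handling), the following sketch turns the spring design model in (22) into an unconstrained objective by adding a quadratic penalty for each violated constraint; the penalty form and weight are illustrative assumptions, not the paper's exact scheme.

import numpy as np

# Penalized objective for the tension/compression spring design problem (22).
def spring_penalized(x, weight=1e6):
    x1, x2, x3 = x
    f = x1 ** 2 * x2 * (2.0 + x3)
    g = np.array([
        1.0 - (x2 ** 3 * x3) / (71785.0 * x1 ** 4),
        (4.0 * x2 ** 2 - x1 * x2) / (12566.0 * (x2 * x1 ** 3 - x1 ** 4))
        + 1.0 / (5108.0 * x1 ** 2) - 1.0,
        1.0 - 140.45 * x1 / (x2 ** 2 * x3),
        (x1 + x2) / 1.5 - 1.0,
    ])
    # Quadratic penalty on the positive (violated) part of each constraint.
    return f + weight * np.sum(np.maximum(g, 0.0) ** 2)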
Table 5 presents the optimization results of GLPSO, PSO, and several advanced algorithms, including PCFMO [46], SCSO [47], GJO [48], BWO [49,50], and ICHIMP-SHO [51], on the tension/compression spring design problem. As shown in the table, GLPSO achieved an optimal objective value of 0.012671, demonstrating outstanding performance among the advanced algorithms.

6.2. Weight Minimization of a Speed Reducer

This problem involves seven variables: the face width (x_1), the module of teeth (x_2), the number of teeth in the pinion (x_3), the length of the first shaft between bearings (x_4), the length of the second shaft between bearings (x_5), the diameter of the first shaft (x_6), and the diameter of the second shaft (x_7). The design of a small aircraft engine gearbox is optimized subject to seven nonlinear constraints and four linear constraints. The mathematical model is given in (23):
Minimize   f(x) = 0.7854 x_2^2 x_1 (14.9334 x_3 - 43.0934 + 3.3333 x_3^2) + 0.7854 (x_5 x_7^2 + x_4 x_6^2) - 1.508 x_1 (x_7^2 + x_6^2) + 7.477 (x_7^3 + x_6^3)
Subject to:
g_1(x) = -x_1 x_2^2 x_3 + 27 ≤ 0
g_2(x) = -x_1 x_2^2 x_3^2 + 397.5 ≤ 0
g_3(x) = -(x_2 x_6^4 x_3) / x_4^3 + 1.93 ≤ 0
g_4(x) = -(x_2 x_7^4 x_3) / x_5^3 + 1.93 ≤ 0
g_5(x) = (10 / x_6^3) · √(16.91 × 10^6 + (745 x_4 / (x_2 x_3))^2) - 1100 ≤ 0
g_6(x) = (10 / x_7^3) · √(157.5 × 10^6 + (745 x_5 / (x_2 x_3))^2) - 850 ≤ 0
g_7(x) = x_2 x_3 - 40 ≤ 0
g_8(x) = -x_1 / x_2 + 5 ≤ 0
g_9(x) = x_1 / x_2 - 12 ≤ 0
g_10(x) = 1.5 x_6 - x_4 + 1.9 ≤ 0
g_11(x) = 1.1 x_7 - x_5 + 1.9 ≤ 0   (23)
where 2.6 ≤ x_1 ≤ 3.6, 0.7 ≤ x_2 ≤ 0.8, 17 ≤ x_3 ≤ 28, x_4 ≤ 8.3, 7.3 ≤ x_5, 2.9 ≤ x_6 ≤ 3.9, and 5 ≤ x_7 ≤ 5.5.
Table 6 presents the optimization results of GLPSO, PSO, and several advanced algorithms, including GSSA [52], HHSC [53], ASO [54], and AO [55], on the weight minimization of a speed reducer problem. As shown in the table, GLPSO achieved an optimal objective value of 3005.4617, demonstrating competitive performance among the compared advanced algorithms.
The experimental data indicates that GLPSO demonstrates competitive performance among the compared algorithms, with its performance ranking at a moderate level in this engineering case. Although it outperforms some algorithms, it does not consistently surpass all other algorithms in every case. This suggests that while GLPSO excels in certain types of problems, there is still room for improvement when addressing certain types of optimization problems.

7. Conclusions

The proposed GLPSO algorithm demonstrates strong optimization capabilities on various complex benchmark functions. However, it also has some limitations stemming from its structural design. GLPSO places greater emphasis on global exploration through group learning and Gaussian perturbation mechanisms. This exploration-biased strategy may lead to slower convergence or insufficient solution accuracy when dealing with problems that require stronger local exploitation capabilities. In the future, enhancements to the algorithm’s exploitation ability will be considered to further improve its robustness and generalizability across different types of optimization problems.

Author Contributions

Conceptualization and methodology, J.H.; experiment and writing—editing, Y.C.; writing—review and supervision, X.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported in part by funds from the Jilin Province Science and Technology Development Program (YDZJ202301ZYTS482) and the Jilin Province Social Science Fund Project (2024B38).

Data Availability Statement

The code is at https://github.com/cyy0615/GLPSO-algorithm, accessed on 29 March 2025.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Song, B.; Wang, Z.; Zou, L. On global smooth path planning for mobile robots using a novel multimodal delayed PSO algorithm. Cogn. Comput. 2017, 9, 5–17. [Google Scholar] [CrossRef]
  2. Liu, Y.; Cheng, Q.; Gan, Y.; Wang, Y.; Li, Z.; Zhao, J. Multi-objective optimization of energy consumption in crude oil pipeline transportation system operation based on exergy loss analysis. Neurocomputing 2019, 332, 100–110. [Google Scholar] [CrossRef]
  3. Ghorpade, S.N.; Zennaro, M.; Chaudhari, B.S.; Saeed, R.A.; Alhumyani, H.; Abdel-Khalek, S. A novel enhanced quantum PSO for optimal network configuration in heterogeneous industrial IoT. IEEE Access 2021, 9, 134022–134036. [Google Scholar] [CrossRef]
  4. Bharti, V.; Biswas, B.; Shukla, K.K. A novel multiobjective gdwcn-pso algorithm and its application to medical data security. ACM Trans. Internet Technol. (TOIT) 2021, 21, 1–28. [Google Scholar] [CrossRef]
  5. Awasthi, D.; Khare, P.; Srivastava, V.K. PBNHWA: NIfTI image watermarking with aid of PSO and BO in wavelet domain with its authentication for telemedicine applications. Multimed. Tools Appl. 2024, Online First. [Google Scholar] [CrossRef]
  6. Lin, Y.; Jiang, Y.S.; Gong, Y.J.; Zhan, Z.H.; Zhang, J. A discrete multiobjective particle swarm optimizer for automated assembly of parallel cognitive diagnosis tests. IEEE Trans. Cybern. 2018, 49, 2792–2805. [Google Scholar] [CrossRef] [PubMed]
  7. Liu, W.; Wang, Z.; Yuan, Y.; Zeng, N.; Hone, K.; Liu, X. A novel sigmoid-function-based adaptive weighted particle swarm optimizer. IEEE Trans. Cybern. 2019, 51, 1085–1093. [Google Scholar] [CrossRef]
  8. Wang, Z.J.; Zhan, Z.H.; Lin, Y.; Yu, W.J.; Wang, H.; Kwong, S.; Zhang, J. Automatic niching differential evolution with contour prediction approach for multimodal optimization problems. IEEE Trans. Evol. Comput. 2019, 24, 114–128. [Google Scholar] [CrossRef]
  9. Nagireddy, V.; Parwekar, P.; Mishra, T.K. Velocity adaptation based PSO for localization in wireless sensor networks. Evol. Intell. 2021, 14, 243–251. [Google Scholar] [CrossRef]
  10. Shang, C.; Gao, J.; Liu, H.; Liu, F. Short-term load forecasting based on PSO-KFCM daily load curve clustering and CNN-LSTM model. IEEE Access 2021, 9, 50344–50357. [Google Scholar] [CrossRef]
  11. Salihu, N.; Kumam, P.; Ibrahim, S.M.; Babando, H.A. A sufficient descent hybrid conjugate gradient method without line search consideration and application. Eng. Comput. 2024, 41, 1203–1232. [Google Scholar] [CrossRef]
  12. Shen, Z.; Yang, Y.; She, Q.; Wang, C.; Ma, J.; Lin, Z. Newton design: Designing CNNs with the family of Newton’s methods. Sci. China Inf. Sci. 2023, 66, 162101. [Google Scholar] [CrossRef]
  13. Mitchell, M. An Introduction to Genetic Algorithms; MIT Press: Cambridge, MA, USA, 1998. [Google Scholar]
  14. Mandic, D.P. A generalized normalized gradient descent algorithm. IEEE Signal Process. Lett. 2004, 11, 115–118. [Google Scholar] [CrossRef]
  15. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2007, 1, 28–39. [Google Scholar] [CrossRef]
  16. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  17. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November—1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  18. Li, W.; Liang, P.; Sun, B.; Sun, Y.; Huang, Y. Reinforcement learning-based particle swarm optimization with neighborhood differential mutation strategy. Swarm Evol. Comput. 2023, 78, 101274. [Google Scholar] [CrossRef]
  19. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [Google Scholar] [CrossRef]
  20. Mendes, R.; Kennedy, J.; Neves, J. The fully informed particle swarm: Simpler, maybe better. IEEE Trans. Evol. Comput. 2004, 8, 204–210. [Google Scholar] [CrossRef]
  21. Qu, B.Y.; Suganthan, P.N.; Das, S. A distance-based locally informed particle swarm model for multimodal optimization. IEEE Trans. Evol. Comput. 2012, 17, 387–402. [Google Scholar] [CrossRef]
  22. Cao, L.; Xu, L.; Goodman, E.D. A neighbor-based learning particle swarm optimizer with short-term and long-term memory for dynamic optimization problems. Inf. Sci. 2018, 453, 463–485. [Google Scholar] [CrossRef]
  23. Liu, H.R.; Cui, J.C.; Lu, Z.D.; Liu, D.Y.; Deng, Y.J. A hierarchical simple particle swarm optimization with mean dimensional information. Appl. Soft Comput. 2019, 76, 712–725. [Google Scholar] [CrossRef]
  24. Huang, Y.; Li, W.; Tian, F.; Meng, X. A fitness landscape ruggedness multiobjective differential evolution algorithm with a reinforcement learning strategy. Appl. Soft Comput. 2020, 96, 106693. [Google Scholar] [CrossRef]
  25. Zhang, Y. Elite archives-driven particle swarm optimization for large scale numerical optimization and its engineering applications. Swarm Evol. Comput. 2023, 76, 101212. [Google Scholar] [CrossRef]
  26. Wang, F.; Zhang, H.; Zhou, A. A particle swarm optimization algorithm for mixed-variable optimization problems. Swarm Evol. Comput. 2021, 60, 100808. [Google Scholar] [CrossRef]
  27. Shu, X.; Liu, Y.; Liu, J.; Yang, M.; Zhang, Q. Multi-objective particle swarm optimization with dynamic population size. J. Comput. Des. Eng. 2023, 10, 446–467. [Google Scholar] [CrossRef]
  28. Zeng, N.; Zhang, H.; Chen, Y.; Chen, B.; Liu, Y. Path planning for intelligent robot based on switching local evolutionary PSO algorithm. Assem. Autom. 2016, 36, 120–126. [Google Scholar] [CrossRef]
  29. Nama, S.; Saha, A.K.; Chakraborty, S.; Gandomi, A.H.; Abualigah, L. Boosting particle swarm optimization by backtracking search algorithm for optimization problems. Swarm Evol. Comput. 2023, 79, 101304. [Google Scholar] [CrossRef]
  30. Das, P.K.; Jena, P.K. Multi-robot path planning using improved particle swarm optimization algorithm through novel evolutionary operators. Appl. Soft Comput. 2020, 92, 106312. [Google Scholar] [CrossRef]
  31. Liu, H.; Zhang, X.W.; Tu, L.P. A modified particle swarm optimization using adaptive strategy. Expert Syst. Appl. 2020, 152, 113353. [Google Scholar] [CrossRef]
  32. Li, T.; Shi, J.; Deng, W.; Hu, Z. Pyramid particle swarm optimization with novel strategies of competition and cooperation. Appl. Soft Comput. 2022, 121, 108731. [Google Scholar] [CrossRef]
  33. Meng, Z.; Zhong, Y.; Mao, G.; Liang, Y. PSO-sono: A novel PSO variant for single-objective numerical optimization. Inf. Sci. 2022, 586, 176–191. [Google Scholar] [CrossRef]
  34. Zhao, S.; Wang, D. Elite-ordinary synergistic particle swarm optimization. Inf. Sci. 2022, 609, 1567–1587. [Google Scholar] [CrossRef]
  35. Li, G.; Li, R.; Hou, H.; Zhang, G.; Li, Z. A data-driven motor optimization method based on support vector regression—Multi-objective, multivariate, and with a limited sample size. Electronics 2024, 13, 2231. [Google Scholar] [CrossRef]
  36. Wang, Y.; Zhang, J.; Zhang, M.; Wang, D.; Yang, M. Enhanced artificial ecosystem-based optimization for global optimization and constrained engineering problems. Clust. Comput. 2024, 27, 10053–10092. [Google Scholar] [CrossRef]
  37. Li, C.; Zhu, Y. A hybrid butterfly and Newton–Raphson swarm intelligence algorithm based on opposition-based learning. Clust. Comput. 2024, 27, 14469–14514. [Google Scholar] [CrossRef]
  38. Almansuri, M.A.K.; Yusupov, Z.; Rahebi, J.; Ghadami, R. Parameter Estimation of PV Solar Cells and Modules Using Deep Learning-Based White Shark Optimizer Algorithm. Symmetry 2025, 17, 533. [Google Scholar] [CrossRef]
  39. Wang, W.X.; Li, K.S.; Tao, X.Z.; Gu, F.H. An improved MOEA/D algorithm with an adaptive evolutionary strategy. Inf. Sci. 2020, 539, 1–15. [Google Scholar] [CrossRef]
  40. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In 1998 IEEE International Conference on Evolutionary Computation Proceedings; IEEE World Congress on Computational Intelligence (Cat. No. 98TH8360); IEEE: New York, NY, USA, 1998; pp. 69–73. [Google Scholar] [CrossRef]
  41. Wu, G.; Mallipeddi, R.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2017 Competition on Constrained Real-Parameter Optimization; Technical Report; National University of Defense Technology: Changsha, China; Kyungpook National University: Daegu, Republic of Korea; Nanyang Technological University: Singapore, 2017. [Google Scholar]
  42. Duan, Y.; Chen, N.; Chang, L.; Ni, Y.; Kumar, S.S.; Zhang, P. CAPSO: Chaos adaptive particle swarm optimization algorithm. IEEE Access 2022, 10, 29393–29405. [Google Scholar] [CrossRef]
  43. Kumar, A.; Price, K.V.; Mohamed, A.W.; Hadi, A.A.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the 2022 Special Session and Competition on Single Objective Bound Constrained Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, 2021. [Google Scholar]
  44. Kumar, A.; Wu, G.; Ali, M.Z.; Mallipeddi, R.; Suganthan, P.N.; Das, S. A test-suite of non-convex constrained optimization problems from the real-world and some baseline results. Swarm Evol. Comput. 2020, 56, 100693. [Google Scholar] [CrossRef]
  45. Gandomi, A.H.; Yang, X.S.; Alavi, A.H. Mixed variable structural optimization using firefly algorithm. Comput. Struct. 2011, 89, 2325–2336. [Google Scholar] [CrossRef]
  46. Chu, S.C.; Xu, X.W.; Yang, S.Y.; Pan, J.S. Parallel fish migration optimization with compact technology based on memory principle for wireless sensor networks. Knowl.-Based Syst. 2022, 241, 108124. [Google Scholar] [CrossRef]
  47. Seyyedabbasi, A.; Kiani, F. Sand Cat swarm optimization: A nature-inspired algorithm to solve global optimization problems. Eng. Comput. 2023, 39, 2627–2651. [Google Scholar] [CrossRef]
  48. Chopra, N.; Ansari, M.M. Golden jackal optimization: A novel nature-inspired optimizer for engineering applications. Expert Syst. Appl. 2022, 198, 116924. [Google Scholar] [CrossRef]
  49. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215. [Google Scholar] [CrossRef]
  50. Zhao, S.; Zhang, T.; Ma, S.; Chen, M. Dandelion Optimizer: A nature-inspired metaheuristic algorithm for engineering applications. Eng. Appl. Artif. Intell. 2022, 114, 105075. [Google Scholar] [CrossRef]
  51. Kumari, C.L.; Kamboj, V.K.; Bath, S.K.; Tripathi, S.L.; Khatri, M.; Sehgal, S. A boosted chimp optimizer for numerical and engineering design optimization challenges. Eng. Comput. 2023, 39, 2463–2514. [Google Scholar] [CrossRef]
  52. Nautiyal, B.; Prakash, R.; Vimal, V.; Liang, G.; Chen, H. Improved salp swarm algorithm with mutation schemes for solving global optimization and engineering problems. Eng. Comput. 2022, 38, 3927–3949. [Google Scholar] [CrossRef]
  53. Abualigah, L.; Diabat, A.; Altalhi, M.; Elaziz, M.A. Improved gradual change-based Harris Hawks optimization for real-world engineering design problems. Eng. Comput. 2023, 39, 1843–1883. [Google Scholar] [CrossRef]
  54. Sun, P.; Liu, H.; Zhang, Y.; Tu, L.; Meng, Q. An intensify atom search optimization for engineering design problems. Appl. Math. Model. 2021, 89, 837–859. [Google Scholar] [CrossRef]
  55. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
Figure 1. Particle motion illustration of PSO.
Figure 2. Double-sided mirror reflection diagram.
Figure 3. Algorithm flow chart.
Figure 4. Particle motion illustration of GLPSO.
Figure 5. The relationship between coefficient and number of iterations.
Figure 6. Convergence curve (D = 30).
Figure 7. Initializations of the global optimal distribution.
Figure 8. Ablation experiment (CEC-2022).
Figure 9. Sensitivity analysis of the pr value (CEC-2022).
Figure 10. Diagram of the tension and compression spring design problem.
Table 1. CEC2017 benchmark functions (No. | Function | Fi* = Fi(x*)).

Unimodal Functions:
1 | Shifted and Rotated Bent Cigar Function | 100
2 | Shifted and Rotated Sum of Different Power Function* | 200
3 | Shifted and Rotated Zakharov Function | 300

Simple Multimodal Functions:
4 | Shifted and Rotated Rosenbrock's Function | 400
5 | Shifted and Rotated Rastrigin's Function | 500
6 | Shifted and Rotated Expanded Scaffer's F6 Function | 600
7 | Shifted and Rotated Lunacek Bi-Rastrigin Function | 700
8 | Shifted and Rotated Non-Continuous Rastrigin's Function | 800
9 | Shifted and Rotated Levy Function | 900
10 | Shifted and Rotated Schwefel's Function | 1000

Hybrid Functions:
11 | Hybrid Function 1 (N = 3) | 1100
12 | Hybrid Function 2 (N = 3) | 1200
13 | Hybrid Function 3 (N = 3) | 1300
14 | Hybrid Function 4 (N = 4) | 1400
15 | Hybrid Function 5 (N = 4) | 1500
16 | Hybrid Function 6 (N = 4) | 1600
17 | Hybrid Function 7 (N = 5) | 1700
18 | Hybrid Function 8 (N = 5) | 1800
19 | Hybrid Function 9 (N = 5) | 1900
20 | Hybrid Function 10 (N = 6) | 2000

Composition Functions:
21 | Composition Function 1 (N = 3) | 2100
22 | Composition Function 2 (N = 3) | 2200
23 | Composition Function 3 (N = 4) | 2300
24 | Composition Function 4 (N = 4) | 2400
25 | Composition Function 5 (N = 5) | 2500
26 | Composition Function 6 (N = 5) | 2600
27 | Composition Function 7 (N = 6) | 2700
28 | Composition Function 8 (N = 6) | 2800
29 | Composition Function 9 (N = 3) | 2900
30 | Composition Function 10 (N = 3) | 3000

Search range: [−100, 100]^D
Table 2. Some well-known variants of PSO and other evolutionary algorithms.

Algorithm | Year | Default parameter settings
PSO [40] | 1998 | D, Tmax = 1000, ωmax = 0.9, ωmin = 0.4, c1 = c2 = 2
GA [13] | 1998 | D, Tmax = 1000
CLPSO [19] | 2006 | D, Tmax = 1000, ωmax = 0.9, ωmin = 0.4, c1 = c2 = 1.49445, m = 7
GWO [16] | 2014 | D, Tmax = 1000
MPSO [30] | 2020 | D, Tmax = 1000, cω = 4 × r × (1 − r), ω decreasing from 0.9 to 0.4, c1 = c2 = 2
CAPSO [42] | 2022 | D, Tmax = 1000, ωmax = 0.9, ωmin = 0.4, cmax = 2.5, cmin = 0.5, ϵ = 0.2
EAPSO [25] | 2023 | D, Tmax = 1000
GLPSO | Presented | D, Tmax = 1000, ωmax = 0.9, ωmin = 0.4, q1 = 3, q2 = 10, pr = 0.3
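As a reproducibility aid, the settings in Table 2 can be gathered into a single configuration structure. The sketch below is only an illustration under assumed names (ALGORITHM_SETTINGS, Tmax, w_max and similar identifiers are not from the paper); algorithms listed with only Tmax simply use their own default parameters.

```python
# Hypothetical configuration mirroring Table 2; key names are illustrative assumptions.
ALGORITHM_SETTINGS = {
    "PSO":   {"Tmax": 1000, "w_max": 0.9, "w_min": 0.4, "c1": 2.0, "c2": 2.0},
    "GA":    {"Tmax": 1000},
    "CLPSO": {"Tmax": 1000, "w_max": 0.9, "w_min": 0.4, "c1": 1.49445, "c2": 1.49445, "m": 7},
    "GWO":   {"Tmax": 1000},
    # MPSO additionally draws c_w from a logistic map 4 * r * (1 - r) each iteration.
    "MPSO":  {"Tmax": 1000, "w_max": 0.9, "w_min": 0.4, "c1": 2.0, "c2": 2.0},
    "CAPSO": {"Tmax": 1000, "w_max": 0.9, "w_min": 0.4, "c_max": 2.5, "c_min": 0.5, "eps": 0.2},
    "EAPSO": {"Tmax": 1000},
    "GLPSO": {"Tmax": 1000, "w_max": 0.9, "w_min": 0.4, "q1": 3, "q2": 10, "pr": 0.3},
}

def settings_for(name: str, dimension: int) -> dict:
    """Return the parameter set for one algorithm, adding the problem dimension D."""
    cfg = dict(ALGORITHM_SETTINGS[name])
    cfg["D"] = dimension
    return cfg

print(settings_for("GLPSO", 30))
```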
Table 3. Comparisons of experimental results (30D). Columns, left to right: PSO | GA | GWO | MPSO | CLPSO | CAPSO | EAPSO | GLPSO. For each function (labelled on its std row, e.g., f1), the three rows report the mean, the standard deviation (std), and the rank of each algorithm; in the rank rows the eight single-digit ranks are listed consecutively in the same column order.
mean 7.95 × 10 3 4.84 × 10 10 7.83 × 10 9 2.63 × 10 9 3.87 × 10 7 6.47 × 10 9 2.00 × 10 3 8.10 × 10 4
f1std 1.05 × 10 4 4.36 × 10 9 6.12 × 10 9 1.21 × 10 9 1.51 × 10 7 3.83 × 10 9 1.73 × 10 3 1.57 × 10 4
rank28754613
mean 8.49 × 10 25 4.49 × 10 35 2.38 × 10 32 6.66 × 10 31 1.02 × 10 31 3.46 × 10 34 1.72 × 10 11 2.68 × 10 9
f2std 3.08 × 10 26 1.91 × 10 36 1.24 × 10 33 1.75 × 10 32 2.82 × 10 31 1.85 × 10 35 7.15 × 10 11 1.07 × 10 10
rank38654721
mean 5.82 × 10 4 5.74 × 10 4 5.24 × 10 4 7.64 × 10 4 1.09 × 10 5 3.39 × 10 4 4.56 × 10 4 9.89 × 10 3
f3std 1.84 × 10 4 7.75 × 10 3 8.81 × 10 3 1.40 × 10 4 1.61 × 10 4 2.09 × 10 4 2.24 × 10 4 4.15 × 10 3
rank65478231
mean 6.14 × 10 2 4.53 × 10 3 7.33 × 10 2 1.26 × 10 3 6.27 × 10 2 9.88 × 10 2 4.68 × 10 2 5.31 × 10 2
f4std 5.20 × 10 1 6.34 × 10 2 1.49 × 10 2 2.27 × 10 2 2.54 × 10 1 7.56 × 10 2 4.75 × 10 1 3.72 × 10 1
rank38574612
mean 5.93 × 10 2 7.41 × 10 2 6.05 × 10 2 7.43 × 10 2 6.52 × 10 2 6.38 × 10 2 6.08 × 10 2 5.43 × 10 2
f5std 3.44 × 10 1 3.12 × 10 1 1.94 × 10 1 1.64 × 10 1 1.42 × 10 1 3.63 × 10 1 2.77 × 10 1 1.05 × 10 1
rank27386541
mean 6.02 × 10 2 6.63 × 10 2 6.13 × 10 2 6.59 × 10 2 6.02 × 10 2 6.22 × 10 2 6.13 × 10 2 6.00 × 10 2
f6std 1.32 × 10 0 5.67 × 10 0 3.60 × 10 0 9.88 × 10 0 4.28 × 10 1 1.13 × 10 1 1.10 × 10 1 1.29 × 10 2
rank28573641
mean 8.40 × 10 2 1.28 × 10 3 9.33 × 10 2 1.14 × 10 3 8.89 × 10 2 8.20 × 10 2 8.64 × 10 2 7.76 × 10 2
f7std 4.07 × 10 1 5.09 × 10 1 6.45 × 10 1 4.69 × 10 1 1.56 × 10 1 2.35 × 10 1 4.09 × 10 1 2.89 × 10 1
rank38675241
mean 8.80 × 10 2 1.14 × 10 3 9.25 × 10 2 1.03 × 10 3 9.44 × 10 2 9.68 × 10 2 8.96 × 10 2 8.54 × 10 2
f8std 3.09 × 10 1 3.51 × 10 1 2.94 × 10 1 1.61 × 10 1 1.21 × 10 1 4.00 × 10 1 2.25 × 10 1 3.48 × 10 1
rank28475631
mean 1.60 × 10 3 8.04 × 10 3 3.82 × 10 3 2.75 × 10 3 3.75 × 10 3 7.27 × 10 3 3.14 × 10 3 9.20 × 10 2
f9std 6.32 × 10 2 1.35 × 10 3 1.24 × 10 3 4.47 × 10 2 1.28 × 10 3 4.26 × 10 3 2.04 × 10 3 2.04 × 10 1
rank28635741
mean 5.53 × 10 3 7.32 × 10 3 4.73 × 10 3 8.18 × 10 3 5.76 × 10 3 4.84 × 10 3 4.64 × 10 3 4.58 × 10 3
f10std 1.54 × 10 3 6.44 × 10 2 1.39 × 10 3 3.35 × 10 2 3.59 × 10 2 5.91 × 10 2 7.76 × 10 2 1.87 × 10 3
rank57386421
mean 1.24 × 10 3 7.27 × 10 3 4.50 × 10 3 1.93 × 10 3 3.63 × 10 3 3.03 × 10 3 1.28 × 10 3 1.16 × 10 3
f11std 6.08 × 10 1 1.43 × 10 3 2.58 × 10 3 2.20 × 10 2 7.13 × 10 2 8.09 × 10 3 9.01 × 10 1 2.68 × 10 1
rank28746531
mean 4.50 × 10 6 6.68 × 10 9 1.66 × 10 8 3.54 × 10 8 6.02 × 10 7 6.26 × 10 8 5.90 × 10 3 5.69 × 10 5
f12std 6.44 × 10 6 1.66 × 10 9 3.77 × 10 7 1.74 × 10 8 2.24 × 10 7 1.35 × 10 9 3.14 × 10 3 1.15 × 10 5
rank38564712
mean 3.54 × 10 3 8.34 × 10 8 1.87 × 10 7 2.37 × 10 7 4.76 × 10 4 1.36 × 10 4 4.64 × 10 3 1.07 × 10 4
f13std 2.21 × 10 3 3.21 × 10 8 2.84 × 10 7 2.28 × 10 7 3.02 × 10 4 1.59 × 10 4 2.61 × 10 3 2.57 × 10 3
rank18675423
mean 1.69 × 10 4 4.81 × 10 5 4.64 × 10 5 1.39 × 10 5 7.25 × 10 4 2.42 × 10 5 1.62 × 10 3 1.98 × 10 3
f14std 2.35 × 10 4 2.92 × 10 5 9.12 × 10 5 1.48 × 10 5 6.30 × 10 4 5.19 × 10 5 1.36 × 10 2 3.99 × 10 2
rank38754612
mean 5.32 × 10 3 5.93 × 10 7 1.68 × 10 6 1.77 × 10 6 1.82 × 10 4 5.15 × 10 3 1.80 × 10 3 5.24 × 10 3
f15std 4.56 × 10 3 2.75 × 10 7 7.90 × 10 6 2.17 × 10 6 1.40 × 10 4 5.81 × 10 3 1.82 × 10 2 2.63 × 10 3
rank48675213
mean 2.50 × 10 3 4.02 × 10 3 2.50 × 10 3 3.37 × 10 3 2.49 × 10 3 2.71 × 10 3 2.54 × 10 3 2.14 × 10 3
f16std 3.58 × 10 2 4.74 × 10 2 2.85 × 10 2 1.87 × 10 2 1.49 × 10 2 3.34 × 10 2 3.83 × 10 2 2.93 × 10 2
rank48372651
mean 2.13 × 10 3 2.83 × 10 3 2.05 × 10 3 2.52 × 10 3 2.09 × 10 3 2.20 × 10 3 2.25 × 10 3 1.85 × 10 3
f17std 2.26 × 10 2 2.04 × 10 2 1.29 × 10 2 2.07 × 10 2 9.49 × 10 1 1.98 × 10 2 2.65 × 10 2 6.74 × 10 1
rank48273561
mean 1.16 × 10 6 3.34 × 10 6 2.04 × 10 6 1.92 × 10 6 6.90 × 10 5 3.10 × 10 5 1.09 × 10 5 1.13 × 10 5
f18std 1.27 × 10 6 1.57 × 10 6 2.09 × 10 6 1.09 × 10 6 3.70 × 10 5 5.61 × 10 5 1.36 × 10 5 1.16 × 10 5
rank58764312
mean 1.22 × 10 4 3.68 × 10 6 1.44 × 10 6 7.45 × 10 5 3.21 × 10 4 5.15 × 10 5 1.05 × 10 4 4.09 × 10 3
f19std 6.58 × 10 3 3.60 × 10 6 3.02 × 10 6 2.47 × 10 6 2.90 × 10 4 1.85 × 10 6 8.69 × 10 3 1.97 × 10 3
rank38764521
mean 2.48 × 10 3 2.81 × 10 3 2.43 × 10 3 2.62 × 10 3 2.44 × 10 3 2.59 × 10 3 2.63 × 10 3 2.32 × 10 3
f20std 2.01 × 10 2 1.56 × 10 2 1.50 × 10 2 1.45 × 10 2 1.00 × 10 2 1.79 × 10 2 2.22 × 10 2 1.45 × 10 2
rank48263571
mean 2.21 × 10 3 2.24 × 10 3 2.20 × 10 3 2.25 × 10 3 2.24 × 10 3 2.22 × 10 3 2.20 × 10 3 2.21 × 10 3
f21std 2.45 × 10 0 9.12 × 10 1 3.97 × 10 0 7.99 × 10 1 6.08 × 10 1 4.84 × 10 0 2.96 × 10 5 8.80 × 10 1
rank37286514
mean 2.31 × 10 3 2.34 × 10 3 2.30 × 10 3 2.35 × 10 3 2.34 × 10 3 2.32 × 10 3 2.30 × 10 3 2.31 × 10 3
f22std 2.70 × 10 0 8.31 × 10 1 4.16 × 10 0 7.02 × 10 1 6.69 × 10 1 8.28 × 10 0 1.50 × 10 4 5.43 × 10 1
rank37286514
mean 2.94 × 10 3 5.01 × 10 3 2.87 × 10 3 3.20 × 10 3 2.98 × 10 3 3.73 × 10 3 2.88 × 10 3 2.85 × 10 3
f23std 6.16 × 10 1 2.53 × 10 2 1.10 × 10 2 3.41 × 10 1 2.03 × 10 1 5.42 × 10 2 2.43 × 10 1 2.27 × 10 1
rank68375412
mean 2.96 × 10 3 2.72 × 10 3 2.98 × 10 3 2.74 × 10 3 2.88 × 10 3 2.74 × 10 3 3.17 × 10 3 2.60 × 10 3
f24std 1.81 × 10 2 3.75 × 10 2 2.33 × 10 2 3.29 × 10 2 2.48 × 10 2 2.59 × 10 2 2.16 × 10 2 1.61 × 10 2
rank48675312
mean 2.99 × 10 3 2.93 × 10 3 3.25 × 10 3 3.55 × 10 3 3.14 × 10 3 3.04 × 10 3 2.98 × 10 3 3.00 × 10 3
f25std 5.72 × 10 1 2.85 × 10 1 1.31 × 10 2 1.46 × 10 2 3.52 × 10 1 2.13 × 10 2 6.40 × 10 1 3.78 × 10 1
rank31786524
mean 4.82 × 10 3 3.03 × 10 3 4.68 × 10 3 6.95 × 10 3 5.96 × 10 3 5.52 × 10 3 5.27 × 10 3 2.91 × 10 3
f26std 1.07 × 10 3 3.62 × 10 1 5.27 × 10 2 2.52 × 10 2 5.64 × 10 2 1.37 × 10 3 9.53 × 10 2 5.02 × 10 2
rank42387651
mean 4.11 × 10 3 6.11 × 10 3 3.20 × 10 3 5.06 × 10 3 3.88 × 10 3 3.97 × 10 3 3.72 × 10 3 3.67 × 10 3
f27std 1.86 × 10 2 2.44 × 10 2 2.82 × 10 4 4.21 × 10 2 8.57 × 10 1 1.62 × 10 2 1.36 × 10 2 1.01 × 10 2
rank68174532
mean 3.70 × 10 3 3.40 × 10 3 3.31 × 10 3 3.93 × 10 3 3.76 × 10 3 3.83 × 10 3 3.33 × 10 3 3.28 × 10 3
f28std 8.52 × 10 2 6.63 × 10 1 8.67 × 10 1 7.82 × 10 2 1.53 × 10 2 7.38 × 10 2 3.49 × 10 2 2.04 × 10 1
rank54286731
mean 3.69 × 10 3 4.15 × 10 3 3.38 × 10 3 4.29 × 10 3 3.76 × 10 3 3.77 × 10 3 3.88 × 10 3 3.42 × 10 3
f29std 2.03 × 10 2 1.81 × 10 2 2.31 × 10 2 3.16 × 10 2 1.41 × 10 2 2.14 × 10 2 2.38 × 10 2 1.35 × 10 2
rank37184562
mean 1.16 × 10 5 1.19 × 10 8 1.02 × 10 4 9.86 × 10 6 8.78 × 10 5 1.51 × 10 5 9.68 × 10 3 1.09 × 10 5
f30std 9.27 × 10 4 6.19 × 10 7 1.09 × 10 4 9.82 × 10 6 3.63 × 10 5 1.88 × 10 5 4.12 × 10 3 5.40 × 10 4
rank48276513
total rank 103 | 209 | 127 | 201 | 146 | 152 | 89 | 53
final rank 3 | 8 | 4 | 7 | 5 | 6 | 2 | 1
Table 4. Comparisons of experimental results (50D). Columns, left to right: PSO | GA | GWO | MPSO | CLPSO | CAPSO | EAPSO | GLPSO. As in Table 3, the three rows per function report the mean, the standard deviation (std), and the rank of each algorithm, with the eight single-digit ranks listed consecutively in column order.
mean 1.11 × 10 7 8.91 × 10 10 2.62 × 10 10 1.02 × 10 10 2.99 × 10 7 1.25 × 10 10 8.43 × 10 3 2.70 × 10 5
f1std 2.08 × 10 7 5.49 × 10 9 8.24 × 10 9 1.85 × 10 9 9.29 × 10 6 7.01 × 10 9 9.64 × 10 3 2.88 × 10 4
rank38754612
mean 3.67 × 10 49 4.75 × 10 68 2.49 × 10 61 6.79 × 10 61 1.18 × 10 30 7.77 × 10 63 2.74 × 10 32 4.23 × 10 27
f2std 1.66 × 10 50 2.38 × 10 69 1.20 × 10 62 3.49 × 10 62 3.04 × 10 30 4.25 × 10 64 1.35 × 10 33 2.03 × 10 28
rank48562731
mean 1.92 × 10 5 1.24 × 10 5 1.45 × 10 5 1.92 × 10 5 9.27 × 10 4 1.64 × 10 5 2.12 × 10 5 8.13 × 10 4
f3std 3.89 × 10 4 1.02 × 10 4 3.69 × 10 4 3.73 × 10 4 1.55 × 10 4 3.72 × 10 4 5.75 × 10 4 1.65 × 10 4
rank73462581
mean 7.38 × 10 2 1.73 × 10 4 3.09 × 10 3 2.28 × 10 3 6.09 × 10 2 1.38 × 10 3 5.16 × 10 2 5.69 × 10 2
f4std 8.97 × 10 1 1.79 × 10 3 1.37 × 10 3 6.46 × 10 2 1.80 × 10 1 8.89 × 10 2 3.66 × 10 1 4.78 × 10 1
rank48763512
mean 7.30 × 10 2 1.05 × 10 3 7.54 × 10 2 9.74 × 10 2 6.47 × 10 2 7.98 × 10 2 7.15 × 10 2 5.93 × 10 2
f5std 7.42 × 10 1 4.38 × 10 1 3.67 × 10 1 2.13 × 10 1 1.31 × 10 1 5.91 × 10 1 6.54 × 10 1 1.90 × 10 1
rank48572631
mean 6.11 × 10 2 6.68 × 10 2 6.22 × 10 2 6.71 × 10 2 6.02 × 10 2 6.41 × 10 2 6.24 × 10 2 6.00 × 10 2
f6std 3.48 × 10 0 4.92 × 10 0 4.31 × 10 0 6.82 × 10 0 3.42 × 10 1 1.08 × 10 1 7.04 × 10 0 1.94 × 10 1
rank37482651
mean 1.10 × 10 3 2.00 × 10 3 1.26 × 10 3 1.71 × 10 3 8.85 × 10 2 1.03 × 10 3 1.12 × 10 3 8.78 × 10 2
f7std 8.91 × 10 1 1.08 × 10 2 9.65 × 10 1 1.17 × 10 2 1.44 × 10 1 9.20 × 10 1 8.71 × 10 1 1.01 × 10 2
rank48672351
mean 1.00 × 10 3 1.48 × 10 3 1.11 × 10 3 1.26 × 10 3 9.40 × 10 2 1.17 × 10 3 1.03 × 10 3 8.93 × 10 2
f8std 7.42 × 10 1 3.37 × 10 1 3.81 × 10 1 2.22 × 10 1 1.31 × 10 1 7.50 × 10 1 4.91 × 10 1 2.11 × 10 1
rank38572641
mean 5.14 × 10 3 2.17 × 10 4 9.69 × 10 3 7.01 × 10 3 3.21 × 10 3 2.16 × 10 4 7.91 × 10 3 1.04 × 10 3
f9std 3.77 × 10 3 2.76 × 10 3 3.35 × 10 3 1.34 × 10 3 5.11 × 10 2 8.16 × 10 3 3.32 × 10 3 8.50 × 10 1
rank38642751
mean 1.21 × 10 4 1.31 × 10 4 9.07 × 10 3 1.48 × 10 4 5.60 × 10 3 7.18 × 10 3 7.49 × 10 3 8.62 × 10 3
f10std 2.60 × 10 3 9.59 × 10 2 2.87 × 10 3 5.25 × 10 2 3.41 × 10 2 6.72 × 10 2 8.65 × 10 2 3.64 × 10 3
rank67581234
mean 2.11 × 10 3 3.31 × 10 4 1.68 × 10 4 7.03 × 10 3 3.13 × 10 3 1.25 × 10 4 1.45 × 10 3 1.32 × 10 3
f11std 2.36 × 10 2 5.07 × 10 3 8.94 × 10 3 1.20 × 10 3 7.50 × 10 2 1.19 × 10 4 1.29 × 10 2 6.75 × 10 1
rank38754621
mean 5.63 × 10 6 1.61 × 10 10 2.84 × 10 9 1.45 × 10 9 5.04 × 10 7 1.88 × 10 9 1.23 × 10 4 7.23 × 10 5
f12std 1.45 × 10 7 2.85 × 10 9 2.77 × 10 9 6.11 × 10 8 1.77 × 10 7 3.83 × 10 9 8.50 × 10 3 3.15 × 10 5
rank38754612
mean 2.03 × 10 5 5.65 × 10 9 3.23 × 10 8 3.49 × 10 8 2.45 × 10 4 6.20 × 10 8 5.05 × 10 4 5.31 × 10 4
f13std 7.92 × 10 5 1.10 × 10 9 3.25 × 10 8 3.03 × 10 8 1.30 × 10 4 1.43 × 10 9 5.11 × 10 4 1.50 × 10 4
rank48561723
mean 1.72 × 10 3 2.27 × 10 6 1.26 × 10 6 1.55 × 10 6 3.17 × 10 4 1.06 × 10 6 1.68 × 10 3 2.90 × 10 3
f14std 1.60 × 10 2 1.05 × 10 6 1.09 × 10 6 2.11 × 10 6 2.41 × 10 4 3.23 × 10 6 8.57 × 10 1 7.51 × 10 2
rank28674513
mean 2.88 × 10 3 1.64 × 10 9 7.08 × 10 7 3.53 × 10 7 7.75 × 10 3 1.80 × 10 4 3.83 × 10 3 1.26 × 10 4
f15std 1.32 × 10 3 4.09 × 10 8 7.79 × 10 7 3.09 × 10 7 5.04 × 10 3 1.65 × 10 4 2.24 × 10 3 4.09 × 10 3
rank18763524
mean 3.46 × 10 3 5.88 × 10 3 3.54 × 10 3 5.59 × 10 3 2.40 × 10 3 3.42 × 10 3 3.27 × 10 3 2.94 × 10 3
f16std 7.76 × 10 2 5.49 × 10 2 3.95 × 10 2 3.92 × 10 2 2.00 × 10 2 3.80 × 10 2 5.09 × 10 2 6.77 × 10 2
rank58671432
mean 2.88 × 10 3 4.07 × 10 3 2.59 × 10 3 3.92 × 10 3 2.07 × 10 3 3.03 × 10 3 2.92 × 10 3 2.26 × 10 3
f17std 3.35 × 10 2 3.96 × 10 2 3.05 × 10 2 3.22 × 10 2 8.74 × 10 1 3.69 × 10 2 3.43 × 10 2 1.97 × 10 2
rank48371652
mean 4.55 × 10 6 2.32 × 10 7 4.99 × 10 6 8.54 × 10 6 6.64 × 10 5 2.44 × 10 6 7.80 × 10 5 6.61 × 10 5
f18std 2.00 × 10 6 8.33 × 10 6 5.60 × 10 6 4.86 × 10 6 4.30 × 10 5 1.75 × 10 6 4.93 × 10 5 3.28 × 10 5
rank58672431
mean 6.51 × 10 5 1.64 × 10 9 2.54 × 10 7 1.81 × 10 7 1.71 × 10 4 8.62 × 10 7 1.15 × 10 4 8.83 × 10 3
f19std 2.45 × 10 6 5.96 × 10 8 6.05 × 10 7 1.56 × 10 7 1.03 × 10 4 2.31 × 10 8 1.17 × 10 4 3.88 × 10 3
rank48653721
mean 3.54 × 10 3 3.65 × 10 3 3.13 × 10 3 3.76 × 10 3 2.41 × 10 3 3.16 × 10 3 3.19 × 10 3 2.87 × 10 3
f20std 3.64 × 10 2 2.31 × 10 2 5.41 × 10 2 1.62 × 10 2 8.75 × 10 1 3.50 × 10 2 3.52 × 10 2 5.01 × 10 2
rank67381452
mean 2.25 × 10 3 2.25 × 10 3 2.25 × 10 3 2.26 × 10 3 2.30 × 10 3 2.25 × 10 3 2.25 × 10 3 2.25 × 10 3
f21std 8.39 × 10 3 5.40 × 10 3 8.53 × 10 13 1.35 × 10 0 7.07 × 10 1 5.07 × 10 3 8.58 × 10 6 3.86 × 10 5
rank46178523
mean 2.35 × 10 3 2.35 × 10 3 2.35 × 10 3 2.36 × 10 3 3.19 × 10 3 2.35 × 10 3 2.35 × 10 3 2.35 × 10 3
f22std 1.96 × 10 3 5.66 × 10 3 1.41 × 10 12 1.24 × 10 0 1.22 × 10 3 4.64 × 10 3 1.16 × 10 7 8.03 × 10 6
rank46178523
mean 3.42 × 10 3 7.97 × 10 3 3.15 × 10 3 3.94 × 10 3 2.97 × 10 3 5.10 × 10 3 3.28 × 10 3 3.08 × 10 3
f23std 1.13 × 10 2 3.84 × 10 2 1.21 × 10 2 7.49 × 10 1 1.92 × 10 1 5.49 × 10 2 1.02 × 10 2 4.20 × 10 1
rank58361742
mean 3.73 × 10 3 2.69 × 10 3 3.53 × 10 3 4.39 × 10 3 3.09 × 10 3 3.41 × 10 3 3.72 × 10 3 2.61 × 10 3
f24std 1.75 × 10 2 9.08 × 10 0 5.04 × 10 2 1.33 × 10 2 2.04 × 10 2 7.71 × 10 2 4.00 × 10 2 6.95 × 10 1
rank72583461
mean 3.18 × 10 3 3.81 × 10 3 3.96 × 10 3 4.35 × 10 3 3.11 × 10 3 3.18 × 10 3 3.06 × 10 3 3.07 × 10 3
f25std 5.20 × 10 1 1.56 × 10 2 4.62 × 10 2 3.13 × 10 2 1.56 × 10 1 1.63 × 10 2 4.73 × 10 1 3.65 × 10 1
rank56783412
mean 7.61 × 10 3 3.31 × 10 3 5.74 × 10 3 1.10 × 10 4 5.99 × 10 3 6.60 × 10 3 7.10 × 10 3 2.83 × 10 3
f26std 1.20 × 10 3 7.14 × 10 1 2.76 × 10 3 3.21 × 10 2 2.58 × 10 2 2.82 × 10 3 2.52 × 10 3 2.18 × 10 0
rank72384561
mean 5.33 × 10 3 1.09 × 10 4 3.20 × 10 3 7.78 × 10 3 3.82 × 10 3 4.79 × 10 3 4.39 × 10 3 4.45 × 10 3
f27std 4.24 × 10 2 4.64 × 10 2 2.25 × 10 4 5.01 × 10 2 5.93 × 10 1 4.00 × 10 2 3.12 × 10 2 2.59 × 10 2
rank68172534
mean 3.93 × 10 3 4.45 × 10 3 3.30 × 10 3 6.34 × 10 3 3.79 × 10 3 4.23 × 10 3 3.33 × 10 3 3.43 × 10 3
f28std 3.22 × 10 2 1.41 × 10 2 3.48 × 10 4 4.15 × 10 2 1.84 × 10 2 9.34 × 10 2 3.40 × 10 1 4.78 × 10 1
rank57184623
mean 4.72 × 10 3 6.26 × 10 3 3.79 × 10 3 6.66 × 10 3 3.71 × 10 3 4.60 × 10 3 4.72 × 10 3 4.05 × 10 3
f29std 3.63 × 10 2 4.15 × 10 2 3.95 × 10 2 4.33 × 10 2 1.43 × 10 2 3.09 × 10 2 3.80 × 10 2 3.17 × 10 2
rank57281463
mean 1.21 × 10 6 6.54 × 10 7 1.68 × 10 6 1.65 × 10 8 7.53 × 10 5 2.95 × 10 6 6.00 × 10 4 5.64 × 10 5
f30std 1.31 × 10 6 9.11 × 10 6 2.68 × 10 6 8.56 × 10 7 3.87 × 10 5 7.71 × 10 6 6.19 × 10 4 1.83 × 10 5
rank47583612
total rank 130 | 211 | 139 | 202 | 83 | 158 | 97 | 60
final rank 4 | 8 | 5 | 7 | 2 | 6 | 3 | 1
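The rank, total rank, and final rank rows in Tables 3 and 4 follow a standard aggregation: on each function the eight algorithms are ranked by mean value, the per-function ranks are summed per algorithm, and those sums are ranked again to give the final ordering. A minimal sketch of this bookkeeping is shown below; the array layout and tie handling are assumptions for illustration, not the authors' code.

```python
import numpy as np

def rank_table(means: np.ndarray) -> tuple[np.ndarray, np.ndarray, np.ndarray]:
    """means: (n_functions, n_algorithms) array of mean results, smaller is better.

    Returns the per-function ranks, the total rank per algorithm, and the final
    rank (1 = best overall), mirroring the 'rank', 'total rank' and 'final rank'
    rows of Tables 3 and 4.
    """
    # argsort of argsort turns the values in each row into 1-based ranks
    per_function_ranks = means.argsort(axis=1).argsort(axis=1) + 1
    total = per_function_ranks.sum(axis=0)
    final = total.argsort().argsort() + 1
    return per_function_ranks, total, final

# Toy example with 2 functions and 3 algorithms (values are made up):
means = np.array([[3.0, 1.0, 2.0],
                  [20.0, 30.0, 10.0]])
ranks, total, final = rank_table(means)
print(ranks)   # [[3 1 2], [2 3 1]]
print(total)   # [5 4 3]
print(final)   # [3 2 1]  (ties, if any, are broken by column order here)
```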
Table 5. Optimization results of the tension/compression spring design problem.

Algorithm | x1 | x2 | x3 | Optimal weight
PSO | 0.052410681 | 0.374327572 | 10.32660253 | 0.012674616
PCFMO | 0.052405 | 0.37416 | 10.338 | 0.012867
SCSO | 0.0500 | 0.3175 | 14.0200 | 0.012717
GJO | 0.0515793 | 0.354055 | 11.4484 | 0.01266752
BWO | 0.0517 | 0.3568 | 11.3132 | 0.012703
DO | 0.051215 | 0.345416 | 11.983708 | 0.012669
ICHIMP-SHO | 0.051324 | 0.347614 | 11.86041 | 0.0126915
EAPSO | 0.0515219 | 0.3527102 | 11.527838 | 0.012666
GLPSO | 0.0514355 | 0.350633 | 11.6596 | 0.012671
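The weights reported in Table 5 can be spot-checked against the widely used formulation of the tension/compression spring problem, in which the weight is (x3 + 2) x2 x1^2 and four inequality constraints apply. The sketch below uses that textbook formulation, which is assumed here rather than taken from the paper.

```python
def spring_weight(x1: float, x2: float, x3: float) -> float:
    """Weight of the spring: x1 = wire diameter, x2 = mean coil diameter,
    x3 = number of active coils (standard formulation, assumed here)."""
    return (x3 + 2.0) * x2 * x1 ** 2

def spring_constraints(x1: float, x2: float, x3: float) -> list[float]:
    """Standard g_i(x) <= 0 constraints of the benchmark (assumed formulation)."""
    g1 = 1.0 - (x2 ** 3 * x3) / (71785.0 * x1 ** 4)
    g2 = (4.0 * x2 ** 2 - x1 * x2) / (12566.0 * (x2 * x1 ** 3 - x1 ** 4)) \
         + 1.0 / (5108.0 * x1 ** 2) - 1.0
    g3 = 1.0 - 140.45 * x1 / (x2 ** 2 * x3)
    g4 = (x1 + x2) / 1.5 - 1.0
    return [g1, g2, g3, g4]

# GLPSO solution from Table 5: the weight evaluates to about 0.01267; g1 and g2 are
# active, so tiny positive residuals can appear from the rounding of the tabulated x.
x = (0.0514355, 0.350633, 11.6596)
print(spring_weight(*x), spring_constraints(*x))
```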
Table 6. Optimization results of the weight minimization of a speed reducer.

Algorithm | x1 | x2 | x3 | x4 | x5 | x6 | x7 | Optimal weight
GSSA | 3.500957 | 0.70 | 17.00 | 7.331317 | 7.806692 | 3.351851 | 5.28669 | 2997.5658
ASO | 3.50000 | 0.70 | 17.0755 | 7.3000 | 8.1198 | 3.4366 | 5.2935 | 3051.31905
HHSC | 3.50379 | 0.70 | 17.00 | 7.3 | 7.7294014 | 3.356511 | 5.28669 | 2997.89844
AO | 3.5021 | 0.70 | 17.00 | 7.3099 | 7.7476 | 3.3641 | 5.2994 | 3007.7328
GLPSO | 3.50402 | 0.700099 | 17.012 | 7.37466 | 7.80098 | 3.36426 | 5.28804 | 3005.4617
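The speed-reducer weights in Table 6 can be recomputed in the same spirit from the objective function commonly used for this benchmark; the formula below is the standard version from the engineering-design literature and is assumed here, not quoted from the paper.

```python
def speed_reducer_weight(x1, x2, x3, x4, x5, x6, x7):
    """Commonly used objective of the speed-reducer benchmark (assumed formulation):
    x1 face width, x2 module of teeth, x3 number of teeth, x4/x5 shaft lengths,
    x6/x7 shaft diameters."""
    return (0.7854 * x1 * x2 ** 2 * (3.3333 * x3 ** 2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6 ** 2 + x7 ** 2)
            + 7.4777 * (x6 ** 3 + x7 ** 3)
            + 0.7854 * (x4 * x6 ** 2 + x5 * x7 ** 2))

# GLPSO solution from Table 6: evaluates to roughly 3005.6, close to the reported
# 3005.4617; the small gap comes from the rounding of the tabulated variables.
print(speed_reducer_weight(3.50402, 0.700099, 17.012, 7.37466, 7.80098, 3.36426, 5.28804))
```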
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
