Particle Swarm Optimization Algorithm Using Velocity Pausing and Adaptive Strategy

Abstract: Particle swarm optimization (PSO), a swarm intelligence-based optimization algorithm, has been widely applied to solve various real-world optimization problems. However, traditional PSO algorithms encounter issues such as premature convergence and an imbalance between global exploration and local exploitation when dealing with complex optimization tasks. To address these shortcomings, an enhanced PSO algorithm incorporating velocity pausing and adaptive strategies, termed VASPSO, is proposed. By leveraging the search characteristics of velocity pausing and a terminal replacement mechanism, the premature convergence inherent in standard PSO is mitigated. The algorithm further refines and controls the search space of the particle swarm through time-varying inertia coefficients, the concept of symmetric cooperative swarms, and adaptive strategies, balancing global search and local exploitation. The performance of VASPSO was validated on 29 standard functions from CEC2017, comparing it against five PSO variants and seven swarm intelligence algorithms. Experimental results demonstrate that VASPSO exhibits considerable competitiveness when compared with the 12 algorithms. The relevant code can be found on our project homepage.


Introduction
Swarm intelligence algorithms are computational methods inspired by collective behaviors observed in nature. They simulate interactions and cooperation among individuals within a group to solve complex problems. The development of swarm intelligence algorithms stems from recognizing the limitations of traditional, individual-based algorithms in tackling complex challenges. Conventional algorithms often rely on deterministic, single-agent methods, which may struggle to find global optima or encounter significant computational complexity for certain problems [1,2]. To address these challenges, researchers have introduced a new algorithmic paradigm: swarm intelligence algorithms. These algorithms collectively solve problems by simulating information exchange, cooperation, and competition among group members. Notable examples of swarm intelligence algorithms include genetic algorithms, ant colony algorithms, and bee swarm algorithms, which have demonstrated significant success across various fields.
Particle swarm optimization (PSO), initially proposed by Eberhart and Kennedy in 1995 [3], is inspired by the behavior of bird flocks and fish schools. In PSO, solutions to problems are represented by a swarm of particles, with each particle embodying a solution, and the swarm collectively representing a potential set of solutions within the solution space. Particles optimize their positions and velocities by continuously searching and iterating within the solution space, learning from and memorizing the best solutions found. PSO offers several advantages: it has a rapid convergence rate and robust global search capabilities; its algorithmic structure is simple, facilitating easy implementation and application; and it adapts well to the continuity and non-linearity of various problems. Consequently, PSO has been extensively applied in domains such as function optimization, combinatorial optimization [4], machine learning [5], and image processing, among others.
Since its introduction, PSO has evolved through years of research and development, leading to numerous improvements. Notable enhancements include the following:

1. Enhancements to the topological structure, which governs how individuals within the algorithm communicate with each other. Proposals include selecting the nearest particles as neighbors in each generation [19], recombining randomly divided subpopulations after evolving for a certain number of generations [20], and optimizing algorithms using ring topological structures.

2. Enhancements to swarm diversity. The HIDMS-PSO variant enhances the traditional fixed topology of the unit structure with new master- and slave-dominated topologies [21]; a random triplet topological structure allows each particle to communicate with two random particles in the group based on its best position [22]; and the particle swarm optimization algorithm with an extra gradient method (EPSO) [23] has been introduced to increase algorithmic diversity.

3. Improvements to learning strategies. These include the integration of four particle swarm search strategies into a strategy dynamics-based optimization method [24], a particle swarm optimization with a multi-dimensional learning strategy based on adaptive inertia weight [25], and an adaptive hierarchical update particle swarm optimization algorithm employing a multi-choice comprehensive learning strategy comprising weighted composite sub-strategies and average evolutionary sub-strategies. Enhanced learning strategies with crossover operators (PSOLC) [26], and a multi-objective particle swarm with alternate learning strategies (MOPSOALS) [27] that uses adaptive parameter updating together with exploratory and exploitative role learning strategies, have also been proposed to improve optimization performance. To address large-scale variable problems, a multi-strategy learning particle swarm optimization algorithm (MSL-PSO) [28] employing different learning strategies at various stages has been introduced, and to tackle deficiencies in solving complex optimization problems, a hybrid particle swarm optimization algorithm based on an adaptive strategy (ASPSO) [29] has been put forward.

4. Hybrid research, integrating other algorithms with PSO to better address complex optimization problems. Examples include combining genetic operators with PSO, such as selection operators [30], crossover operators [31,32], and mutation operators [33]; merging the simulated annealing algorithm to improve the global optimum update strategy in PSO [34]; and hybridizing with the grey wolf optimization algorithm [35], pattern search algorithm [36], crow search algorithm [37], particle filtering (PF) [38], and improved symbiotic organisms search (MSOS) [39] to fully leverage each algorithm's strengths.
Despite the maturity and successful application of these classic swarm intelligence algorithms in solving real-world optimization challenges, issues like premature convergence and suboptimal performance persist in complex application areas. Moreover, as societal demands evolve, real-world optimization problems increasingly exhibit high dimensionality, multiple constraints, and multimodality, posing new challenges to classical optimization algorithms such as swarm intelligence algorithms. Against this backdrop, enhancing swarm intelligence algorithms to better tackle increasingly complex optimization problems has emerged as a research focal point.
To augment the particle swarm algorithm's capacity to solve complex optimization problems and to more effectively balance exploration and exploitation, this paper introduces an improved particle swarm algorithm dubbed VASPSO. VASPSO contributes in five ways, as described below:

• Introducing a time-varying inertia coefficient to enhance the algorithm's convergence speed and global search capability;
• Adopting velocity pausing to mitigate premature convergence issues in particle swarm algorithms;
• Employing an adaptive strategy to dynamically fine-tune the balance between exploration and exploitation, thus boosting algorithm performance;
• Incorporating the concept of symmetric cooperative swarms, treating different particles with varied strategies to aid the algorithm in discovering optimal solutions;
• Implementing a terminal replacement mechanism to eliminate the worst-performing particles, thereby refining the accuracy of the final solution.

Moreover, to ascertain the viability of the proposed VASPSO algorithm, it was tested on 29 benchmark functions from CEC2017, comparing its performance with five PSO variants and seven other swarm intelligence algorithms. Experimental outcomes reveal VASPSO's superiority over the other algorithms on most benchmark functions, underscoring its success.
The remainder of this paper is structured as follows: Section 2 elucidates the standard particle swarm algorithm. The VASPSO algorithm is expounded in Section 3, with Section 4 showcasing the test functions and outcomes of experimental simulations. Conclusions and directions for future work are presented in Section 5.

Standard Particle Swarm Optimization
Particle swarm optimization (PSO) is a swarm intelligence-based optimization algorithm inspired by the foraging behavior of bird flocks [3]. The core principle of PSO is to optimize a search process by simulating the behavior of particles within a swarm. Each particle in the swarm represents a candidate solution, navigating the search space by updating its position and velocity based on both its own experience and the collective information from the swarm. Specifically, the position of a particle corresponds to a candidate solution, while its velocity determines the direction and magnitude of the search. Initially, particles are dispersed randomly throughout the search space with varying velocities. Their performance is assessed using a fitness function, which informs the updates to both the best position discovered by each particle (personal best) and the best overall position found by the swarm (global best). Throughout the algorithm, particles dynamically adjust their velocity and position, leveraging insights from their personal best achievements and the global best discoveries, in pursuit of optimal solutions. The initialization of particles in the particle swarm optimization algorithm is as follows:

X_i = X_min + rand(0, 1) · (X_max − X_min)    (1)
V_i = V_min + rand(0, 1) · (V_max − V_min)    (2)

The w in Formula (4) is called the inertia weight and is derived from Formula (3):

w = w_max − (w_max − w_min) · t / T    (3)

The formulas for updating the velocity and position of the particle are as follows:

V(t + 1) = w · V(t) + c1 · r1 · (Pbest − X(t)) + c2 · r2 · (Gbest − X(t))    (4)
X(t + 1) = X(t) + V(t + 1)    (5)

where c1 and c2 are called acceleration coefficients, r1 and r2 are uniform random numbers in [0, 1], V(t) represents the velocity of the particle at time t, X(t) represents the position of the particle at time t, Pbest represents the individual best position of the particle, and Gbest represents the global best position of the swarm.
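As a concrete sketch, the standard PSO loop described above can be written as follows. The parameter values here (swarm size, acceleration coefficients, inertia bounds) are common textbook defaults, not settings taken from this paper:

```python
import numpy as np

def standard_pso(f, lb, ub, dim, n_particles=30, T=100,
                 c1=2.0, c2=2.0, w_max=0.9, w_min=0.4, seed=0):
    """Minimal standard PSO for minimization of f over [lb, ub]^dim."""
    rng = np.random.default_rng(seed)
    # Initialization: particles scattered randomly in the search space
    X = lb + rng.random((n_particles, dim)) * (ub - lb)
    V = np.zeros((n_particles, dim))
    pbest = X.copy()
    pbest_f = np.array([f(x) for x in X])
    g = int(np.argmin(pbest_f))
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for t in range(T):
        # Linearly decreasing inertia weight, Formula (3)
        w = w_max - (w_max - w_min) * t / T
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity and position updates, Formulas (4)-(5)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = np.clip(X + V, lb, ub)
        fx = np.array([f(x) for x in X])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = X[improved], fx[improved]
        g = int(np.argmin(pbest_f))
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f
```

On a low-dimensional sphere function this loop converges close to the origin within a few dozen iterations, illustrating the rapid convergence the text attributes to PSO.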

Particle Swarm Optimization Algorithm Using Velocity Pausing and Adaptive Strategy
The proposed approach aims to mitigate premature convergence and enhance the optimization performance of particle swarm optimization (PSO) by integrating velocity pausing and an adaptive strategy. The five key enhancements of VASPSO are elaborated below.

Time-Varying Inertia Weight
The modification of the inertia weight is a critical aspect of the particle swarm optimization (PSO) algorithm that significantly enhances its efficacy. Initially, the introduction of velocity pausing aimed to augment the global search capabilities of PSO and to mitigate premature convergence to local optima. However, while this strategy can enhance exploration, it may also, over time, lead to a slower convergence rate and, in some instances, convergence challenges. To address these challenges, particularly in the later stages of execution, a time-decreasing weight function is often implemented to adjust the inertia weight. This function is designed to progressively reduce the inertia weight across iterations, maintaining robust global search capabilities while simultaneously facilitating quicker convergence towards the optimal solution. By dynamically modulating the inertia weight, PSO strikes an improved balance between exploration and exploitation, thereby optimizing both the convergence speed and the algorithm's overall global search performance.
The inertia weight is calculated by a time-decreasing function of the iteration counter, where T is the maximum number of iterations and b is a preset constant controlling the rate of decay.
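The paper's exact time-varying weight formula (with the constant b) is not reproduced above, so the following is only one plausible member of the family it describes: an exponentially decaying weight bounded between w_min and w_max, with b controlling how fast the decay happens.

```python
import math

def time_varying_w(t, T, b=5.0, w_max=0.9, w_min=0.4):
    """One plausible time-decreasing inertia weight (an assumed form,
    not the paper's exact formula): starts at w_max and decays
    exponentially towards w_min as t approaches T."""
    return w_min + (w_max - w_min) * math.exp(-b * t / T)
```

Larger b shifts the swarm from global exploration (high w) to local exploitation (low w) earlier in the run, which matches the motivation given in the text.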

Velocity Pausing
In 2023, Tareq M. Shami introduced the concept of velocity pausing into the particle swarm optimization (PSO) algorithm, marking a significant advancement in its performance [40]. This modification allows each particle the option to temporarily suspend velocity updates during iterations, instead selecting from three predefined velocities: slow, fast, and constant. This addition of a third velocity option diverges from traditional PSO mechanics and increases the algorithm's flexibility, promoting a more balanced approach between exploration and exploitation. Such enhancements address prevalent challenges in classical PSO algorithms, including premature convergence and the tendency to become trapped in local optima.
The careful selection of the pausing coefficient is critical.When the pausing coefficient approaches 1, the particle's velocity update closely mirrors that of the classical PSO.Conversely, when the coefficient is set very low, it forces the particle's velocity to remain constant, potentially leading to slower convergence rates and difficulties in discovering superior solutions.
The formula for velocity pausing is as follows: where a is the coefficient of velocity pausing, and c1 and c2 are the learning factors.
The particle position update formula is as follows:
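A minimal sketch of a velocity-pausing update is given below. The pausing rule used here, keeping the previous velocity whenever a uniform draw exceeds the pausing coefficient a, is an assumption consistent with the text: as a approaches 1 the update almost always follows classical PSO, and as a approaches 0 the velocity stays constant.

```python
import numpy as np

def vp_velocity(V, X, pbest, gbest, w, a=0.3, c1=2.0, c2=2.0, rng=None):
    """Velocity update with pausing (assumed rule, see lead-in).
    With probability a the classical update fires; otherwise the
    particle pauses and reuses its previous velocity."""
    rng = rng or np.random.default_rng()
    if rng.random() < a:
        r1 = rng.random(X.shape)
        r2 = rng.random(X.shape)
        # Classical PSO velocity update
        return w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    return V  # pause: velocity stays constant this iteration
```

Setting a = 1 recovers the standard PSO velocity update on every call, while a = 0 freezes the velocity, matching the two limiting behaviors described above.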

Adaptive Strategy
Traditional algorithms that employ a single-strategy approach for position updates often struggle to achieve a suitable balance between exploration and exploitation, particularly in complex scenarios characterized by multiple local optima or highly multimodal objective functions. In contrast, adaptive strategy position updates can significantly enhance the algorithm's search efficiency. This approach involves dynamically optimizing the search space that may yield the best solution by adaptively selecting the position update strategy during each iteration. Such improvements have proven to markedly increase search efficiency and have been widely adopted in enhancing both traditional and classical algorithms [29].
The core of the adaptive strategy for position updates resides in the iterative modification of the parameter p, which is adjusted based on changes in the fitness value.This responsive adjustment enables the algorithm to better navigate the solution landscape, effectively balancing the dual imperatives of exploring new possibilities and exploiting known good solutions.
The formula for parameter p is as follows:

p_i = fit(x_i) / ( (1/N) · Σ_{j=1}^{N} fit(x_j) )

where N is the total population size, fit(·) is the fitness of the corresponding particle, and p_i is the ratio of the current fitness of particle i to the average fitness of the population. The estimate is recomputed in each iteration.
The location update formula is as follows: when p_i is small, the fitness of particle i is better than the average level of the population. In this scenario, to augment the global exploration capability of particle i, the location update strategy X = X + V is employed. Conversely, when p_i is large, indicating that particle i is performing below the average level of the population, the location update strategy X = ωX + (1 − ω)V is implemented to boost its local exploitation capability.
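The adaptive rule above can be sketched compactly for a minimization problem. The threshold p_i < 1 (fitness below the population average, i.e. better than average) is taken directly from the definition of p_i; everything else follows the two update rules stated in the text:

```python
import numpy as np

def adaptive_update(X, V, fit, w):
    """Adaptive position update (minimization). p_i is the ratio of
    each particle's fitness to the population average; better-than-
    average particles (p_i < 1) take the exploratory step X + V,
    the rest take the damped step w*X + (1-w)*V."""
    p = fit / fit.mean()
    explore = p < 1.0
    X_new = np.where(explore[:, None], X + V, w * X + (1 - w) * V)
    return X_new, p
```

Because p is recomputed from the current fitness values, the split between exploratory and exploitative particles shifts automatically from iteration to iteration, which is the intended adaptive behavior.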

Symmetric Cooperative Swarms
Symmetry is a prominent characteristic in algorithms and their applications, playing a crucial role across various domains. In algorithms, symmetry often reveals patterns within data, aiding in simplifying complex problems and enhancing algorithm efficiency. One standout example is the divide-and-conquer approach. By leveraging this principle, the search space is decomposed for exploration, thereby reducing time complexity. In image processing, symmetry is frequently utilized to optimize rendering and image manipulation algorithms. For instance, in 3D graphics, objects typically exhibit symmetrical properties, allowing for reduced computational overhead in rendering and processing, thus improving graphics processing efficiency. In database optimization, exploiting data symmetry can reduce redundant storage and enhance query efficiency. Techniques such as normalization and denormalization are employed to eliminate duplicate and redundant information from data, while indexing and caching techniques accelerate data access and query operations.
In this paper, we also draw inspiration from the concept of symmetry. During each iteration, the entire particle swarm is divided based on the fitness of particles, segregating them into superior and inferior particle clusters. This symmetrical partitioning enables particles to undertake different responsibilities and tasks, facilitating better exploration of the solution space.
The elite group implements advanced strategies to finely balance exploration and exploitation. Initially, a velocity pausing strategy is applied, temporarily halting the velocity updates under specific conditions to allow prolonged investigation within local solution spaces, thereby enhancing the exploration of potentially superior solutions. Additionally, an adaptive strategy modifies the positional behavior based on the problem's characteristics and the group's current state, dynamically adjusting the balance between exploration and exploitation to suit the complexity and diversity of the problem.
Conversely, the non-elite group primarily updates positions by referencing the global best solution, fostering broader exploration of the solution space. This group's focus is on uncovering new potential solutions, thereby injecting diversity into the swarm. By orienting their position updates solely around the global best, the non-elite particles can explore more aggressively, expanding the search range for the algorithm.
Through this cooperative strategy, the particle swarm optimization algorithm achieves a more effective balance between exploration and exploitation. The elite group conducts targeted exploration and refined exploitation using sophisticated strategies, such as velocity pausing and adaptive positional updates. In contrast, the non-elite group pursues extensive exploration driven by the global best solution. This structured approach allows the algorithm to search more comprehensively for viable solutions, thereby enhancing both the performance and the efficacy of the algorithm.
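The fitness-based partition that drives this cooperation can be sketched as a simple helper. The 50/50 elite fraction used here is an illustrative assumption; the paper refers to the elite group size only as HN:

```python
import numpy as np

def split_swarm(fit, elite_frac=0.5):
    """Partition particle indices by fitness (minimization): the
    better half becomes the elite group, the rest the non-elite
    group. elite_frac is an assumed ratio, not the paper's setting."""
    order = np.argsort(fit)              # ascending: best particles first
    hn = max(1, int(len(fit) * elite_frac))
    return order[:hn], order[hn:]        # elite indices, non-elite indices
```

Each iteration, the elite indices would then receive the velocity-pausing and adaptive updates, while the non-elite indices update towards the global best only.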
A detailed flowchart of the subgroup collaborative process is shown in Figure 1: Subgroup Collaborative Process Flowchart.

Terminal Replacement Mechanism
The terminal replacement mechanism, also known as the 'last-place elimination mechanism,' is a widely used algorithmic method for selecting optimal solutions under conditions of limited resources or intense competition. Inspired by the biological principle of natural selection, exemplified by the concept of survival of the fittest, this mechanism is prevalent in various optimization and genetic algorithms, aiding in the efficient identification of superior solutions.
Within this mechanism, an initial set of candidate solutions is generated and subsequently ranked according to predefined evaluation criteria. Based on these rankings, candidates with the lowest scores are progressively eliminated, while those with higher scores are retained for additional evaluation rounds. This iterative process is repeated until the optimal solution is identified or a predetermined stopping condition is reached.
To better mimic the unpredictability of real-world scenarios and enhance the algorithm's robustness, the concept of an elimination factor has been incorporated into the particle swarm optimization (PSO) algorithm. This modification involves selectively replacing particles according to specific rules, with a predetermined probability that permits some particles to avoid elimination due to various factors. If a randomly generated number falls below the established elimination factor, a replacement operation is executed; otherwise, the particle is preserved for further evaluation. By integrating this elimination factor, the algorithm not only replicates the inherent uncertainties of real-life situations but also significantly enhances its adaptability and robustness.
Here, Gbad (Formula (11)) is the least fit particle among all the individual best particles, and Nbest (Formula (12)) is generated by the crossover of the global best particle with any two individual best particles.
The fitness values of the newly cross-generated particle Nbest and of Gbad are compared. If Nbest is better than Gbad, Gbad is replaced by Nbest; otherwise, Gbad remains unchanged.
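A sketch of the terminal replacement step is given below. The uniform crossover used to build Nbest is an assumption (the paper does not spell out the operator here), and the objective function is passed in rather than fixed:

```python
import numpy as np

def terminal_replacement(f, pbest, pbest_f, gbest, elim_factor=0.5, rng=None):
    """Terminal replacement sketch (minimization). Gbad is the least
    fit personal best; Nbest crosses Gbest with two random personal
    bests via an assumed uniform crossover. Gbad is replaced only if
    a random draw passes the elimination factor AND Nbest is fitter."""
    rng = rng or np.random.default_rng()
    bad = int(np.argmax(pbest_f))                    # Gbad: worst pbest
    i, j = rng.choice(len(pbest), size=2, replace=False)
    # Uniform crossover of two random personal bests, then with Gbest
    mix = np.where(rng.random(pbest.shape[1]) < 0.5, pbest[i], pbest[j])
    nbest = np.where(rng.random(pbest.shape[1]) < 0.5, gbest, mix)
    if rng.random() < elim_factor and f(nbest) < pbest_f[bad]:
        pbest = pbest.copy(); pbest_f = pbest_f.copy()
        pbest[bad], pbest_f[bad] = nbest, f(nbest)
    return pbest, pbest_f
```

With elim_factor = 0 the worst particle always survives, mimicking the "escape elimination" behavior described above; with elim_factor = 1 replacement is attempted every time Nbest is fitter.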
In summary, VASPSO is proposed as a fusion variant, and its pseudo-code is shown in Algorithm 1.
Algorithm 1 VASPSO Algorithm (fragment):
For the second non-elite group
13: Update position according to velocity pausing Formula (8)
Update Gbad according to Formula (11)
19: Update Nbest according to Formula (12)
20: Pbest(i, :) ← X(i, :)
31: fPbest(i) ← fX(i)

In this experiment, 29 benchmark functions from CEC2017 were selected to evaluate the performance of the proposed VASPSO algorithm (f2 was excluded due to its unstable behavior at higher dimensions and significant performance variations of the same algorithm implemented in Matlab). These benchmark functions are divided into four categories: f1-f3 are unimodal functions, f4-f10 are multimodal functions, f11-f20 are hybrid functions, and f21-f30 are composition functions [41]. These functions are listed in Table 1.
For a fair comparison, each algorithm is run 50 times independently, and the termination condition for all algorithms is the maximum number of iterations, set to 600.
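The protocol above reduces each algorithm to three per-function statistics over independent runs. A minimal harness for that protocol might look as follows, where `optimizer` is any function that maps a seed to the best objective value of one run (the name and signature are illustrative):

```python
import numpy as np

def evaluate(optimizer, runs=50, seed0=0):
    """Repeat an optimizer `runs` times with independent seeds and
    report the Mean/Std/Min metrics used in the comparison tables."""
    best = np.array([optimizer(seed=seed0 + r) for r in range(runs)])
    return best.mean(), best.std(), best.min()
```

Seeding each run independently keeps the 50 repetitions reproducible while still sampling different random trajectories.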

Comparison between VASPSO and 12 Algorithms
In this section, VASPSO is compared with five PSO variants and seven other swarm intelligence algorithms, with the results listed in Tables 2 through 7. Three quantitative metrics are used: the mean (Mean), standard deviation (Std), and minimum value (Min).
The convergence curves of the algorithms are shown in Figures 2 and 3. From Table 3, it is evident that VASPSO generally outperforms the other five PSO variants. In terms of mean values, VASPSO shows superior performance on functions f3, f10, f11, and f27 compared to the other variants. In terms of variance, VASPSO demonstrates greater stability on functions f3, f5, f8, f9, f11, f16, f21, f25, and f27 than the other variants. In terms of minimum values, VASPSO excels on functions f3, f4, f10, f11, f14, f18, f22, f25, f27, f28, and f29 compared to the other variants. Although VASPSO does not perform well in some cases, it still ranks as the best in performance when considering the average ranking across all functions. The convergence of the algorithms is shown in Figure 2.
From Table 4, it is clear that VASPSO overall surpasses the other seven swarm intelligence algorithms, with superior mean performance on functions f3, f11, f13, f14, f15, f19, and f28 compared to the other algorithms. In terms of variance, VASPSO shows greater stability on functions f11, f13, f14, and f15 than the other swarm intelligence algorithms. In terms of minimum values, VASPSO performs better on functions f1, f3, f4, f10, f11, f13, f14, f18, f22, f25, f26, f28, f29, and f30 compared to the other algorithms. Despite some poor performances on specific functions, VASPSO still emerges as the best in the overall ranking. The convergence of the algorithms is shown in Figure 3.

Analysis of Statistical Results
Statistical tests are typically required to analyze experimental results. The Friedman test, a non-parametric statistical method, is employed to assess whether significant differences exist in the median values of multiple paired samples across three or more related groups [53]. In the execution of the Friedman test, observations within each group are first ranked. Subsequently, the average ranks for each sample are calculated. The Friedman test statistic is then computed in a manner akin to that used in the analysis of variance, focusing on the sum of rank differences. This test is particularly suited for experimental designs involving multiple related samples, such as those evaluating the efficacy of different algorithms or treatment methods at various observation points. The progressive significance values from the Friedman test, as presented in Table 5, clearly demonstrate significant differences among the 12 algorithms analyzed.
The Wilcoxon signed-rank test is a non-parametric statistical method used to determine whether there are significant differences between the median values of two related groups of samples. It is applicable to paired data that do not follow a normal distribution [54]. When conducting the Wilcoxon signed-rank test, the first step is to calculate the difference between each pair of related samples, then take the absolute values of these differences and rank them in order of magnitude, assigning corresponding ranks. Next, the signs of the differences are set aside, retaining only the ranks. Finally, the presence of significant differences between the median values of the two groups is determined based on the sum of ranks. In the Wilcoxon signed-rank test, the primary statistical measures of interest are the rank sum and the test statistic Z-value. Hypothesis testing is conducted by comparing the Z-value with a critical value, or the p-value with the significance level. If the absolute Z-value exceeds the critical value, or the p-value is less than the significance level, the null hypothesis can be rejected, indicating a significant difference between the median values of the two sample groups. At a significance level of 0.05, the statistical results presented in Table 6 show that the VASPSO algorithm outperforms the other algorithms.
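For concreteness, the Friedman statistic underlying Table 5 can be computed from per-function ranks as follows. This is a minimal sketch for a results matrix where lower scores are better; ties are not handled (the argsort-based ranking assigns them arbitrary order):

```python
import numpy as np

def friedman_statistic(scores):
    """scores: (n_problems, k_algorithms) array, lower is better.
    Returns the Friedman chi-square statistic and per-algorithm mean
    ranks. Compare the statistic against a chi-square critical value
    with k - 1 degrees of freedom."""
    n, k = scores.shape
    # Rank algorithms within each problem (1 = best); ties not handled
    ranks = scores.argsort(axis=1).argsort(axis=1) + 1
    mean_ranks = ranks.mean(axis=0)
    chi2 = 12 * n / (k * (k + 1)) * np.sum((mean_ranks - (k + 1) / 2) ** 2)
    return chi2, mean_ranks
```

When one algorithm ranks first on every benchmark, the statistic reaches its maximum for the given n and k, signalling clearly significant differences; in practice one would follow this omnibus test with the pairwise Wilcoxon comparisons described above.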

Conclusions
This study aims to mitigate several prevalent issues associated with the particle swarm optimization algorithm, such as premature convergence and the imbalance between global exploration and local exploitation, through the introduction of an enhanced version known as VASPSO. In our approach, we incorporate velocity pausing and a terminal replacement mechanism, which are specifically designed to prevent premature convergence. Additionally, VASPSO utilizes time-varying inertia coefficients, the concept of symmetric cooperative swarms, and adaptive strategies for modulating the search step length, which collectively contribute to a more balanced optimization process.
To rigorously evaluate the effectiveness of VASPSO in addressing complex optimization challenges, we conducted a series of comparative experiments using the CEC2017 benchmark.The results from these experiments suggest that VASPSO not only improves upon the performance of many existing PSO variants but also shows promising capabilities when compared with other well-regarded swarm intelligence algorithms.
While VASPSO has shown promising results in experimental settings, several limitations remain:

1. The theoretical underpinnings and formal proofs supporting the VASPSO algorithm are still limited. Further theoretical analysis is crucial to fully understand the mechanisms and principles driving its performance;

2. The classification of particles within the VASPSO algorithm currently employs a rigid structure that might oversimplify the inherent complexities of diverse optimization problems. This method of division may not adequately capture the nuanced characteristics of different scenarios, potentially limiting the algorithm's adaptability and effectiveness;

3. Certain parameters within the algorithm may require adjustment to respond effectively to variations in a problem's characteristics. A static parameter setting might not always align optimally with the dynamic nature of different optimization challenges;

4. Despite achieving good results on many optimization problems, VASPSO performs relatively poorly on certain benchmark functions (such as f5 and f6).
Future work will unfold in the following aspects:

1. Applying the VASPSO algorithm to real-world problems offers a significant opportunity to test and enhance its utility. For instance, in the field of engineering optimization, VASPSO could be particularly beneficial for complex tasks such as robot path planning and image processing. Integrating VASPSO with these practical applications not only allows for a robust evaluation of the algorithm's performance but also showcases its potential to provide effective and efficient solutions;

2. Integrating VASPSO with other optimization algorithms could significantly enhance its performance and robustness. By combining the strengths of multiple algorithms, a more effective balance between global exploration and local exploitation can be achieved. This hybrid approach not only improves the overall search capability of VASPSO but also enhances its convergence performance;

3. Combining VASPSO with neural networks can create a formidable optimization framework that leverages the strengths of both methodologies. By integrating the robust global search capabilities of VASPSO with the adaptive learning abilities of neural networks, this hybrid approach can facilitate more efficient parameter optimization and model training. This synergy enhances the algorithm's effectiveness in navigating complex parameter spaces and accelerates the convergence towards optimal solutions.

Algorithm 1 VASPSO Algorithm (fragment, elite group):
1: Initialize parameters: population size N, maximum iterations T
2: for t = 1 to T do
3: Calculate the fitness of each particle
4: Sort all particles and divide them into two groups, elite and non-elite, based on fitness values
5: For the first elite group
6: for i = 1 to HN do
7: if random number < r1 then
8: Update velocity according to velocity pausing Formula (7)

The trajectory curves represent the search process of the particles in each dimension. Fluctuation of a curve indicates that the particles are conducting a global search, while its stability suggests that the particles have reached a global or local optimum.

Figure 2. Convergence curves of the PSO variants.

Table 1. The benchmark functions used in this paper.

Table 3. Comparisons of experimental results between some well-known variants of PSO.

Table 4. Comparisons of experimental results between seven swarm intelligence algorithms.

Figure 3. Convergence curves of the seven swarm intelligence algorithms.

Table 5. Comparison of algorithms based on the Friedman test.

Table 6. Comparison of algorithms based on the Wilcoxon signed-rank test.