Improved Dual-Center Particle Swarm Optimization Algorithm

This paper proposes an improved dual-center particle swarm optimization (IDCPSO) algorithm which effectively mitigates inherent defects of particle swarm optimization algorithms such as proneness to premature convergence and low optimization accuracy. Based on an in-depth analysis of the velocity update formula, the most innovative feature is the vectorial decomposition of each particle's velocity update formula into three different flight directions. Combining these three directions yields six different flight paths and eight intermediate positions. This method allows the particles to search for the optimal solution in a wider space, and the individual extreme values are greatly improved. In addition, to improve the global extreme value, the population virtual center and the optimal-individual virtual center are constructed from the optimal positions and current positions searched by the particles. On top of these strategies, an adaptive mutation factor that accrues the mutation coefficient with the number of iterations is added to help particles escape from local optima. Running the 12 typical test functions independently 50 times, the results show an average improvement of 97.9% in the minimum value and 97.7% in the average value. The IDCPSO algorithm proposed in this paper outperforms other improved particle swarm optimization algorithms in finding the optimum.


Introduction
Eberhart and Kennedy proposed the particle swarm optimization (PSO) algorithm [1] in 1995 based on simulating the foraging behavior of flocks of birds and fish. The particle swarm optimization technique has since become one of the most effective methods for solving complex function optimization problems. The algorithm relies on a population-based optimization mechanism, possesses excellent global search capability, can effectively deal with multi-peak optimization problems and has been widely used in the fields of science and engineering. Since it requires no a priori knowledge of objective function values, parameters, etc., it shows good adaptability and robustness in solving complex optimization problems.
However, PSO is easily trapped in local optimal solutions, which may lead to stagnation of particle evolution. In addition, as the number of iterations increases, the convergence speed of the algorithm slows down. To enhance the optimization performance of the PSO algorithm, many scholars have made a series of improvements. Some have improved the inertia weights. Zhang et al. [2] proposed an adaptive method which determines the inertia weight of each particle in each dimension according to the particle's performance and its distance from its optimal position. Taherkhani et al. [3] proposed a new adaptive inertia weight adjustment approach based on Bayesian techniques in PSO, used to set up a sound tradeoff between the exploration and exploitation characteristics. Xinliang et al. [4] proposed a random walk autonomous group particle swarm optimization (RW-AGPSO) algorithm which introduces Levy flight and a dynamic weight adjustment strategy to balance exploration and exploitation. Yansong et al. [5] proposed a hybrid dynamic particle swarm optimization (HDPSO) algorithm which adaptively adjusts the inertia weight and introduces the coefficient of variation. Other scholars have grouped populations for learning. Hongwei et al. [6] proposed a cooperative hierarchical particle swarm optimization framework and designed contingency leadership, interactive cognition and self-directed utilization operators. Gou et al. [7] divided the whole population into three subgroups, quantified individual differences by a per-particle competition coefficient, selected the evolution method according to this coefficient and the current fitness, and combined a restart strategy to regenerate the corresponding particles and enhance the diversity of the population. Lai et al.
[8] proposed a multi-population parallel PSO algorithm based on penetration, which can adaptively determine when, how many and from which subpopulation to which subpopulation particles travel. Xu et al. [9] proposed a two-swarm learning PSO (TSLPSO) algorithm based on different learning strategies: one sub-swarm constructs learning samples through DLS to guide the local search of particles, and the other constructs learning samples through comprehensive learning strategies to guide the global search. Chnoor [10] proposed combining the PSO algorithm with a new meta-heuristic that accomplishes the assignment of tasks by dividing the particles into a group leader and group members according to their behaviors. Some other scholars have designed adaptive strategies in PSO algorithms. For instance, Aziz et al. [11] proposed a hybrid update sequence adaptive switching PSO (Switch-PSO) algorithm whose update strategy adaptively switches between two traditional iterative strategies according to the performance of the best individual of the particle swarm. Jiang et al. [12] used different parameter values to adjust the global and local search abilities of the PSO algorithm. Tang et al. [13] proposed a PSO algorithm with an adaptive update strategy (SAPSO) which adaptively updates the three main control parameters of the particles. Shifei et al. [14] proposed a dynamic quantum particle swarm optimization (DQPSO) algorithm, designed a particle search ability factor and used it as feedback to dynamically adjust the contraction-expansion (CE) coefficient. Yawen et al. [15] proposed a dual adaptive strategy to overcome the problem that a single learning factor and inertia weight cannot adjust the optimization process well when optimizing complex functions. There are also improved particle swarm algorithms that incorporate strategies from other algorithms. Chen et al.
[16] proposed a particle swarm optimization algorithm with crossover operation (PSOCO) which constructs effective guidance exemplars for the swarm by crossover operations on the best positions of each particle's personal history. Tian et al. [17] proposed an improved particle swarm optimization with chaos-based initialization and a robust update mechanism; using logistic mapping and a sigmoid inertia weight, they designed a maximum focusing distance while performing wavelet mutation to enhance the diversity of the population. Ren et al. [18] introduced the simplex algorithm (SA) from fixed-point theory into PSO so that the optimization of the objective function is transformed into the problem of solving a fixed-point equation set. Fuqiang et al. [19] introduced circle mapping and a sine-cosine factor to better balance the global exploration and local exploitation abilities of the algorithm.
Although the PSO algorithm is a classic algorithm, experts and scholars have never stopped researching it, and in 2023-2024 many scholars published papers on improving the PSO algorithm. Bratislav et al. [20] proposed a modified particle swarm algorithm (MPSO) which does not consider the particle velocities but adopts a quasi-reverse learning (QRL) method to update the optimal positions of the individuals. Sulaiman et al. [21] proposed a modified hybrid (PSO-SAO) algorithm based on PSO and Scent Agent Optimization (SAO) which incorporates the trailing mode of the SAO algorithm into the PSO framework to efficiently regulate the velocity update of the original PSO; it continuously introduces agents to track molecules of higher concentration, thus guiding the PSO particles towards the position of optimal fitness. Shiva Kumar Kannan and Urmila Diwekar [22] introduced an innovative PSO algorithm which combines Sobol and Halton random number sampling, resulting in improved convergence efficiency of the particles. Feng et al. [23] proposed a particle swarm optimization based on an improved crowding distance (MOPSO-MCD) algorithm, designed with an improved crowding distance (MCD) calculation method which can more comprehensively evaluate the crowding relationship between individuals in the decision space and the objective space; in addition, an elite selection mechanism based on cosine similarity is combined with an offspring competition mechanism to further optimize the algorithm. Tian et al.
[24] proposed a diversity-guided PSO algorithm with a multi-level learning strategy (DPSO-MLS) which uses chaotic opposition-based learning (OBL) combined with high-level and low-level learning mechanisms to make particles cover the entire search space and maintain diversity. However, among the various improved PSO algorithms, there is still little literature offering an in-depth analysis of the bird's flight trajectory, or further research on the dual-center particle swarm optimization (DCPSO) algorithm (Table 1).

Table 1. The available literature on improved PSO algorithms.

Publication    Algorithm
[2]            Bayesian PSO (BPSO) algorithm
[5]            Hybrid dynamic PSO (HDPSO) algorithm
[9]            Two-swarm learning PSO (TSLPSO) algorithm
[11]           Switching PSO (Switch-PSO) algorithm
[16]           PSO algorithm with crossover operation (PSOCO)
[25]           Dual-center PSO (DCPSO) algorithm

To address the low optimization accuracy of the PSO algorithm and its tendency to fall into local optimal solutions, the contributions of this study are as follows:
1. The velocity update formula of the PSO algorithm is analyzed in depth, and its vector decomposition yields three different flight directions which are arranged and combined to obtain six mutually distinct flight routes and eight intermediate positions;
2. The optimal solutions and current solutions searched by the particles are used to construct a weighted population virtual center and a weighted optimal-individual virtual center. These two virtual centers are incorporated into a new way of updating the individual and population extremes so that they follow the population in searching for better positions;
3. Linearly decreasing inertia weights are used to further adjust the velocity update formula;
4. Whether a particle is caught in a local optimum is determined and, if so, the particle is made to jump out of the local optimum based on an adaptive mutation factor accrued from the number of iterations.
The rest of the paper is organized as follows. Section 2 introduces the basic concepts and formal definitions of the particle swarm optimization algorithm. Section 3 introduces the conventional dual-center particle swarm algorithm and the improvement strategies proposed in this paper. Section 4 shows that the proposed algorithm has strong optimum-finding ability by comparing it with the dual-center particle swarm optimization (DCPSO), particle swarm optimization (PSO), linear decreasing weight particle swarm optimization (LDWPSO) and adaptive particle swarm optimization (APSO) algorithms.

Particle Swarm Optimization Algorithm
Particle swarm optimization (PSO) [26,27] treats each individual in the population as a particle in space, ignoring mass and volume, and lets these particles fly through the search space at a certain velocity. The flight velocity is dynamically adjusted according to the positions reached by the individual and by the population.
In 1998, Yuhui Shi and Russell Eberhart introduced the inertia weight [28] into the basic particle swarm optimization algorithm and proposed dynamically adjusting it to balance global convergence and convergence speed. M particles fly in a D-dimensional search space, and the position of each particle is a potential feasible solution; f denotes the objective function value; t denotes the current iteration number; t_max denotes the maximum number of iterations; X_i^(t) = (x_{i,1}^(t), x_{i,2}^(t), …, x_{i,D}^(t)) denotes the position of the i-th particle in the t-th iteration; and V_i^(t) = (v_{i,1}^(t), v_{i,2}^(t), …, v_{i,D}^(t)) denotes the velocity of the i-th particle in the t-th iteration. The velocity and position update equations are as follows:

v_{i,j}^(t+1) = w v_{i,j}^(t) + c_1 r_1 (p_{i,j} − x_{i,j}^(t)) + c_2 r_2 (p_{g,j} − x_{i,j}^(t))   (1)

x_{i,j}^(t+1) = x_{i,j}^(t) + v_{i,j}^(t+1)   (2)

where c_1 and c_2 are the learning factors; r_1 and r_2 are mutually independent pseudo-random numbers uniformly distributed on [0, 1]; p_{i,j} is the best position currently found by the i-th particle; and p_{g,j} is the best position currently found by the population. To accelerate the convergence of the algorithm, the PSO algorithm adopts a linearly decreasing weight w that decreases from the maximum value w_max to the minimum value w_min, as shown in Equation (3):

w = w_max − (w_max − w_min) · t / t_max   (3)
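Equations (1)-(3) can be sketched as a minimal, runnable PSO loop; the parameter defaults, bounds and function name below are illustrative assumptions, not the settings used in the paper's experiments.

```python
import numpy as np

def pso(f, dim, n_particles=20, t_max=200, w_max=0.9, w_min=0.4,
        c1=2.0, c2=2.0, lb=-5.0, ub=5.0, seed=0):
    """Minimal PSO with a linearly decreasing inertia weight, Eqs. (1)-(3)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # personal best positions
    pbest_f = np.array([f(p) for p in x])
    g_idx = int(np.argmin(pbest_f))
    gbest, gbest_f = pbest[g_idx].copy(), float(pbest_f[g_idx])
    for t in range(t_max):
        w = w_max - (w_max - w_min) * t / t_max               # Eq. (3)
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (1)
        x = np.clip(x + v, lb, ub)                            # Eq. (2)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f                   # update personal bests
        pbest[improved] = x[improved]
        pbest_f[improved] = fx[improved]
        if fx.min() < gbest_f:                    # update global best
            gbest, gbest_f = x[int(np.argmin(fx))].copy(), float(fx.min())
    return gbest, gbest_f

# example: minimize the 5-dimensional sphere function
best, best_f = pso(lambda z: float(np.sum(z * z)), dim=5)
```

Drawing fresh r_1, r_2 per particle and per dimension, as here, is the usual reading of Equation (1).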

Dual-Center Particle Swarm Optimization Algorithm
In this paper, we adopt the velocity decomposition method from Reference [25] to increase the number of positions searched by each particle. The velocity update of the modified particle swarm is decomposed into three parts: as in Figure 1, when particle i flies from its position at time t to its position at time (t + 1), vector decomposition reveals the particle's flight route. The method changes the original straight flight path into the zigzag motion path preferred by birds, which means the search space can be covered to a greater extent, and the individual extreme value and the population extreme value can be improved to a certain extent.

Diversified Design of Particle Motion Routes
In the conventional DCPSO algorithm, the update of particle positions follows the ordering of the terms in the particle swarm velocity update formula. In detail, the particles fly according to Equation (1): particle i first travels along the direction of its velocity in the t-th iteration, flying a distance of w v_i^(t), and arrives at A; it then travels along the direction of pbest − x_i^(t), flying a distance of c_1 r_1 (pbest − x_i^(t)), and arrives at B; finally, it travels along the direction of gbest − x_i^(t), flying a distance of c_2 r_2 (gbest − x_i^(t)), and arrives at its position at time (t + 1). This decomposition has an inherent limitation. The particle swarm simulates the behavior of a flock of birds searching for food, and the flock should have a high degree of freedom on its foraging path. Therefore, the particles need not strictly fly first along the direction parallel to v_i^(t), then along the direction parallel to pbest − x_i^(t) and finally along the direction parallel to gbest − x_i^(t); rather, they may choose an arbitrary sequence of flight among the three different flight directions. In this paper, the three directions of particle movement are arranged and combined, yielding six different particle movement routes, as shown in Table 2. The flight route of particles in the conventional DCPSO algorithm (see Figure 1) is therefore only one of many possible routes, and the particles have five other distinct movement routes. Figure 2 shows the six different routes of the i-th particle in flight at time t. In this way, during the flight of a particle per unit of time, the number of intermediate positions grows from only two (A, B) to, after removing repeated points, eight (A, B, C, D, E, F, G) intermediate positions, which greatly increases the coverage of the search space and helps avoid premature convergence of the particle swarm.
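The route enumeration above can be sketched directly: fly the three components of Equation (1) in every possible order and collect the waypoints. The function name, parameter values and the pre-multiplied `c1r1`/`c2r2` shorthand are illustrative assumptions. Since vector addition commutes, all six routes share the same endpoint; only the intermediate waypoints differ.

```python
import numpy as np
from itertools import permutations

def enumerate_routes(x, v, pbest, gbest, w=0.7, c1r1=0.9, c2r2=1.1):
    """Fly the three velocity components of Eq. (1) in every possible order
    (3! = 6 routes) and collect each route's waypoints. `c1r1` and `c2r2`
    stand for the already-drawn products c1*r1 and c2*r2."""
    steps = [w * v,                   # inertia component
             c1r1 * (pbest - x),     # cognitive component
             c2r2 * (gbest - x)]     # social component
    routes, intermediates = [], set()
    for order in permutations(range(3)):
        pos, waypoints = np.asarray(x, dtype=float), []
        for k in order:
            pos = pos + steps[k]
            waypoints.append(pos.copy())
        routes.append(waypoints)
        for p in waypoints[:-1]:      # the common endpoint is not intermediate
            intermediates.add(tuple(np.round(p, 10)))
    return routes, intermediates

# toy 2-D particle
x = np.array([0.0, 0.0]); v = np.array([1.0, 0.5])
pbest = np.array([2.0, 1.0]); gbest = np.array([3.0, -1.0])
routes, mids = enumerate_routes(x, v, pbest, gbest)
```

For generic step vectors this yields six distinct strictly-intermediate waypoints (three first-step points shared pairwise between routes, plus three two-step sums), which together with the start and the common arrival point gives the eight positions the text refers to.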

Center Particle Design Improvement
However, the use of average values to construct the generalized center particle (GCP) and the special center particle (SCP) in Reference [25] may make them susceptible to extreme values; moreover, when the population size is small and the two center particles do not fly, the search speed of the particle swarm may be reduced. Therefore, in this paper, the weight of the i-th particle in the center construction is calculated by Equation (6), and the positions of the individual center and the population center are then adaptively controlled by Equations (4) and (5). Figure 3 shows a combined diagram of the particle position changes and the evolution of the optimal solution, using the trigonometric function f(x) = x sin x cos 2x − 2x sin 3x + 3x sin 4x as the test function. The hollow circles record the position each particle flies to in every iteration, and the solid circles mark the positions that the constructed center points fly to. As can be seen from Figure 3, since the center particle designed with average values in Reference [25] has no search ability and is susceptible to extreme values, it is dispersed rather evenly over the solution interval and has more difficulty reaching the global optimal position. In contrast, the center particle of the proposed algorithm is not only closer to the current optimal position but also has search ability and can easily find the optimal position in the later stage of the search; the rate of convergence is also improved.
In Equations (4)-(6), GCP^(t) denotes the population virtual center constructed at the t-th iteration; SCP^(t) denotes the optimal-individual virtual center constructed at the t-th iteration; X_i^(t) denotes the position of the i-th particle in the t-th iteration; p_i denotes the current optimal position of the i-th particle; and R_i^(t) denotes the weight of the i-th particle in the virtual center construction.
When the objective function value f is smaller, R_i^(t) is larger, so the constructed population center is closer to the better positions; similarly, the optimal-individual center is closer to the better individual optimal positions. The two centers constructed in this paper are virtual centers: when f(GCP^(t)) < f(X_i^(t)), GCP^(t) replaces the individual optimal position, and, when f(SCP^(t)) < f(p_i), SCP^(t) replaces the individual optimal position.
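A minimal sketch of fitness-weighted virtual centers follows. Equations (4)-(6) are not reproduced in this excerpt, so the exact weight formula below is an assumption: each particle's weight R_i is the normalized inverse of its shifted objective value, so a smaller f gives a larger weight, matching the property stated above.

```python
import numpy as np

def virtual_centers(f, X, P, eps=1e-12):
    """Hedged sketch of the weighted virtual centers of Eqs. (4)-(6).
    Weight formula is a stand-in (assumption): R_i is the normalized
    inverse of the shifted objective value, so smaller f -> larger R_i."""
    X, P = np.asarray(X, float), np.asarray(P, float)
    def weights(vals):
        inv = 1.0 / (vals - vals.min() + eps)   # smaller f -> larger weight
        return inv / inv.sum()                  # normalize so weights sum to 1
    fx = np.array([f(x) for x in X])
    fp = np.array([f(p) for p in P])
    gcp = weights(fx) @ X   # population virtual center from current positions
    scp = weights(fp) @ P   # optimal-individual virtual center from pbests
    return gcp, scp

# toy check against a plain average on the sphere function
f = lambda z: float(np.sum(z * z))
X = np.array([[0.1, 0.1], [5.0, 5.0], [10.0, 10.0]])
gcp, scp = virtual_centers(f, X, X)
```

Compared with the plain mean used in Reference [25], such a weighted center is pulled toward the fitter particles and is therefore less distorted by extreme values.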

Mutation Strategy
In the modified PSO algorithm, particles easily fall into a local optimum trap and cannot jump out. To address this problem, in recent years many scholars have introduced the mutation strategy of genetic algorithms [29,30] into PSO algorithms. Wei et al. [31] used a local update strategy of neighborhood difference mutation (NDM) to increase the diversity of the algorithm; Duan et al. [32] proposed an exact mutation strategy associated with two clustering coefficients which distinguishes the degree of mutation required by the particle population at different times and coordinates exploration and exploitation; Quanbin et al. [33] proposed a mutation strategy that makes use of the depth information of the suboptimal solutions chosen to be discarded.
In this paper, we introduce the mutation strategy of the genetic algorithm into the PSO algorithm. In a genetic algorithm, mutation can produce gene templates not found in the original population; in the PSO algorithm, it greatly enriches the diversity of the population, which helps individuals escape from local optimal solutions. First, each particle is assigned a small initial mutation probability pm = 0.0005. Then Equation (7) is used to determine whether the particle is caught in a local optimum: if Q is greater than 0.9 five consecutive times, the particle is judged to possibly be caught in a local optimum. At this point, the mutation coefficient begins to accrue in increments of 0.001. When a random number generated in the interval (0, 1) is less than pm, the particle is mutated according to Equation (8). The pseudo-code for the mutation strategy in this paper is shown in Algorithm 1.
In Equation (8), ξ is the proportion coefficient, which Reference [5] specifies in the range (0.2, 0.8); in this paper, we therefore take a random value between 0.2 and 0.8. x_{i,j} is the value of the j-th dimension of the particle that needs to be mutated, and pBest_{k,j} is the value of the j-th dimension of the optimal position of a randomly selected k-th individual.
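The bookkeeping described above (initial pm = 0.0005, five consecutive Q > 0.9, increments of 0.001) can be sketched as follows. Equations (7) and (8) are not reproduced in this excerpt, so the `Q` value is taken as an external input and the blend in `mutate` is one plausible reading of Equation (8), labeled as an assumption.

```python
import random

class MutationController:
    """Bookkeeping for the adaptive mutation strategy: pm starts at 0.0005;
    once the stagnation indicator Q of Eq. (7) exceeds 0.9 five consecutive
    times, pm is accrued in increments of 0.001; a uniform draw below pm
    then triggers the mutation of Eq. (8)."""
    def __init__(self, pm0=0.0005, step=0.001, q_thresh=0.9, runs=5):
        self.pm, self.step = pm0, step
        self.q_thresh, self.runs = q_thresh, runs
        self.streak = 0                 # consecutive Q > 0.9 observations
    def observe(self, q):
        self.streak = self.streak + 1 if q > self.q_thresh else 0
        if self.streak >= self.runs:    # judged stuck: accrue the coefficient
            self.pm += self.step
    def should_mutate(self, rng=random):
        return rng.random() < self.pm   # uniform draw on (0, 1) vs. pm

def mutate(x_ij, pbest_kj, rng=random):
    """Assumed form of Eq. (8): blend the stalled coordinate with a randomly
    chosen individual's best position, with xi drawn from (0.2, 0.8)."""
    xi = rng.uniform(0.2, 0.8)
    return xi * x_ij + (1.0 - xi) * pbest_kj
```

Any Q below the threshold resets the streak, so only sustained stagnation inflates pm.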

Optimization Process Steps
The flowchart of the IDCPSO algorithm proposed in this paper is shown in Figure 4.
Begin:
Step 1. Initialize the particle swarm. Given the population size M, randomly generate the position X_i and velocity V_i of each particle.
Step 2. Evaluate the fitness of each particle according to the test function and find pbest_i and gbest.
Step 3. Compare the fitness of pbest_i from Step 2 with the fitness of the eight intermediate positions to update the individual extreme values.
Step 4. Determine the positions of GCP and SCP according to Equations (4) and (5) and compare the fitness of gbest with the fitness of GCP and SCP to update the global extreme value.
Step 5. Determine whether a particle needs to be mutated according to Equation (7); if so, carry out the mutation operation according to Equation (8).
Step 6. If the maximum number of iterations is reached, terminate the algorithm; otherwise, go to Step 2.
End.
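The multi-route part of Steps 2-3 can be sketched for a single particle as follows: fly the three components of Equation (1) in all six orders, evaluate every waypoint and keep the best point seen as the candidate personal best. The helper name and parameter values are illustrative assumptions, and the virtual-center and mutation steps (Steps 4-5) are omitted for brevity.

```python
import numpy as np
from itertools import permutations

def idcpso_particle_step(f, x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5,
                         rng=None):
    """One hedged IDCPSO-style update for a single particle: evaluate all
    waypoints of the six flight orders and return the new position, new
    velocity and the best point seen as the candidate personal best."""
    rng = np.random.default_rng() if rng is None else rng
    a = w * v                             # inertia component
    b = c1 * rng.random() * (pbest - x)   # cognitive component
    c = c2 * rng.random() * (gbest - x)   # social component
    best_p, best_f = np.asarray(pbest, float), f(pbest)
    for order in permutations((a, b, c)):
        pos = np.asarray(x, float)
        for step in order:
            pos = pos + step
            if f(pos) < best_f:           # a waypoint beats the old pbest
                best_p, best_f = pos.copy(), f(pos)
    v_new = a + b + c                     # endpoint is order-independent
    x_new = x + v_new
    return x_new, v_new, best_p

f = lambda z: float(np.sum(np.asarray(z) ** 2))
rng = np.random.default_rng(1)
x_new, v_new, best_p = idcpso_particle_step(
    f, np.array([3.0, 3.0]), np.array([0.0, 0.0]),
    np.array([1.0, 1.0]), np.array([0.5, -0.5]), rng=rng)
```

Because every waypoint is compared against the incumbent pbest, the returned candidate can never be worse than the old personal best, which is exactly the guarantee Step 3 relies on.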

Analysis of Time Complexity
Let the total number of particles be M, the particle dimension be D and the maximum number of iterations be I. The time complexity of the standard particle swarm optimization algorithm is O(MDI). First, the extra time added by the IDCPSO algorithm within one iteration is calculated. When calculating the six different flight routes for each particle in Section 3.2, eight intermediate positions need to be evaluated, adding time O(8MD). In Section 3.3, calculating the weight of each particle in the construction of the virtual centers takes time O(2M), and constructing the GCP and SCP takes O(2MD). In Section 3.4, accumulating the mutation coefficients and judging whether particles need to be mutated takes time O(2), and, when the number of particles needing mutation is e, the added mutation time is O(2eD). Therefore, the time complexity of IDCPSO over I iterations is O(I(2 + 10MD + 2eD + 2M)), which, after omitting lower-order terms, is O(MDI). The order of the time complexity does not change, indicating that, compared with the PSO algorithm, the improved algorithm in this paper increases the running time, but not significantly.

Simulation Experiment
In this paper, the 12 typical functions shown in Table 3 are used to test the performance of the improved dual-center particle swarm optimization (IDCPSO) algorithm. The comparison algorithms are the dual-center particle swarm optimization (DCPSO) algorithm [25], particle swarm optimization (PSO) algorithm [34], linear decreasing weight particle swarm optimization (LDWPSO) algorithm [35] and adaptive particle swarm optimization (APSO) algorithm [36]. The objective is to minimize each test function within a specified range; the algorithms were implemented in MATLAB R2023a and run on a computer with a 2.60 GHz CPU and 4 GB of RAM under a 64-bit operating system. To enhance comparability, the population size is M = 20; the dimension of the solution space is D = 40; and the maximum number of iterations is t_max = 1000 (except t_max = 3000 for f_2 and t_max = 10,000 for f_11). The other parameter settings of each algorithm are shown in Table 4. To reduce the impact of the randomness of the algorithms on the test results, the five algorithms were each run independently 50 times on the 12 classical test functions; the results, including the minimum, mean and variance, are shown in Table 5.

Simulation Results
Analyzing the data in Table 5, it can be seen that the proposed algorithm is optimal in calculating the minimum value, followed by the DCPSO algorithm, and the accuracy of these two algorithms in finding the optimum is improved by at least two orders of magnitude compared with the other algorithms. The PSO algorithm performs worst among the five algorithms. In terms of variance, except on function f_2, where the improvement over the DCPSO algorithm is not obvious, the proposed algorithm shows clear improvements on the other test functions, most noticeably on function f_11. It is worth mentioning that the IDCPSO algorithm is able to obtain the optimal solution quickly and accurately on function f_5. In general, the proposed algorithm is not only far better than the other algorithms in single-peak function optimization but also performs well in multi-peak function [37] optimization. To reflect the superiority of the proposed algorithm more intuitively, Figure 5 gives the optimal-solution evolution diagrams of the different functions under the above algorithms, from which it can be seen that the proposed algorithm is best in both convergence speed and minimization, showing a better optimization effect.

Conclusions
In this work, we proposed an improved dual-center particle swarm optimization (IDCPSO) algorithm by adding five additional particle motion routes and other optimization strategies to the DCPSO algorithm. The algorithm analyzes in depth the flight trajectories obtained by vector decomposition of the particle velocity update formula, constructs the center particles reasonably and, finally, introduces the mutation factor, which accelerates the convergence of the algorithm, greatly improves the quality of the global extremes and strengthens the quality of the average solution of the population. Comparison tests of the proposed algorithm against four other algorithms showed that the IDCPSO algorithm has a better optimization effect.
We have not yet applied the algorithm to practical engineering optimization problems such as the traveling salesman problem, the knapsack problem and the job-shop scheduling problem; testing the IDCPSO algorithm on them is a direction for future work. For example, to study the Vehicle Routing Problem with Time Windows (VRPTW), one could design the particle position as a three-part encoding containing the sequence of customers, the path segmentation information and the number of paths, which could then be solved using the IDCPSO algorithm.

Figure 1. Vector decomposition diagram of the velocity update formula.


Figure 2. Six different update routes of particles.


Table 2. The update routes of particles.

Table 3. Description of test functions.