Quantum-Behaved Particle Swarm Optimization with Weighted Mean Personal Best Position and Adaptive Local Attractor

Abstract: Motivated by concepts in quantum mechanics and particle swarm optimization (PSO), quantum-behaved particle swarm optimization (QPSO) was proposed as a variant of PSO with better global search ability. In this paper, a QPSO with weighted mean personal best position and adaptive local attractor (ALA-QPSO) is proposed to simultaneously enhance the search performance of QPSO and acquire good global optimization ability. In ALA-QPSO, the weighted mean personal best position is obtained by distinguishing the differing effects of particles with different fitness, and the adaptive local attractor is calculated using the sum of squares of deviations of the particles' fitness values as the coefficient of the linear combination of the particle best known position and the entire swarm's best known position. The proposed ALA-QPSO algorithm is tested on twelve benchmark functions and compared with the basic Artificial Bee Colony algorithm and four other QPSO variants. Experimental results show that ALA-QPSO outperforms the compared methods on all of the benchmark functions in terms of global search capability and convergence rate.


Introduction
Particle swarm optimization (PSO) is a stochastic population-based algorithm proposed by Kennedy and Eberhart [1], which is motivated by the intelligent collective behavior of some animals, such as flocks of birds or schools of fish. In PSO, each particle is regarded as a potential solution. All particles have fitness values and velocities, and they fly in a multidimensional search space by learning from historical information, which comprises their memories of the personal best positions and knowledge of the global best position of the group during the search process. PSO can be easily implemented, is computationally inexpensive, and has few parameters to adjust. Owing to these advantages, PSO has rapidly developed in recent years, with applications to real-world optimization problems [2,3]. However, PSO is easily trapped in local optima, and premature convergence appears when it is applied to complex multimodal problems [4]. Many attempts have been made to improve the performance of PSO [5]. Inspired by quantum mechanics and the trajectory analysis of PSO [6], Sun et al. [7,8] proposed a variant of PSO named quantum-behaved PSO (QPSO). Unlike PSO, QPSO needs no velocity vectors for the particles and has fewer parameters to adjust, making it easier to implement. Since QPSO was proposed, it has attracted much attention; different variants of QPSO have been proposed to enhance its performance from different aspects and have been successfully applied to a wide range of continuous optimization problems [9-14]. In general, most current QPSO variants can be classified into three categories [15]: improvements based on operators from other evolutionary algorithms, hybrid search methods, and cooperative methods.
Though those strategies have improved the performance of QPSO in terms of convergence speed and global optimality, it is rather difficult to improve the global search capability and accelerate the convergence rate simultaneously. In classic QPSO, both the mean personal best position and the local attractor have a great influence on the performance of the algorithm. On one hand, the former is simply the average of the personal best positions of all particles, which ignores the differing effects of particles with different fitness on guiding the search for globally optimal solutions; thus, it is not conducive to improving the global search ability of QPSO. On the other hand, the local attractor for a particle is obtained as the weighted sum of its personal and global best positions, and few improvements have concentrated on the local attractor in QPSO. A novel quantum-behaved particle swarm optimization with a Gaussian distributed local attractor point (GAQPSO) was proposed by Sun et al. [16]. In GAQPSO, the local attractor is subject to a Gaussian distribution whose mean value is the original local attractor defined in classic QPSO. An enhanced QPSO based on a novel way of computing the local attractor (EQPSO) was introduced by Jia et al. [17]. In EQPSO, the local attractor is the weighted sum of the particle's personal and global best positions, with the weight designed as a function of the current and total numbers of iterations. This calculation method cannot monitor the population diversity in real time; therefore, it is not conducive to improving the global search ability of QPSO either. In general, diversity [18] is an important aspect of population-based optimization methods, since it influences their performance and is closely linked to the exploration-exploitation tradeoff. High diversity facilitates exploration, which is usually required during the initial iterations of the optimization algorithm. Low diversity is indicative of exploitation of a small area of the search space, which is desired during the latter part of the optimization process. However, monitoring the diversity of the QPSO population to construct local attractors that guide the particles, thereby improving the algorithm's ability to find the global optimum and accelerating its convergence, has rarely been reported. In this paper, in order to balance the global and local search abilities, we propose a set of weighted coefficients that distinguish the fitness of particles to calculate the mean personal best position, together with a novel way of computing the local attractor; on this basis, a new quantum-behaved particle swarm optimization with weighted mean personal best position and adaptive local attractor is designed for numerical optimization. Experimental results show that the proposed method is effective.
The remainder of the paper is structured as follows. A brief introduction to PSO, two versions of QPSO, and the population diversity of PSO is presented in Section 2. The proposed QPSO with weighted mean personal best position and adaptive local attractor (ALA-QPSO) is given in Section 3. Section 4 provides the experimental results on benchmark functions. Some concluding remarks are given in Section 5.

Particle Swarm Optimization
Particle swarm optimization (PSO) is a stochastic population-based algorithm proposed by Kennedy and Eberhart [1], which is motivated by the intelligent collective behavior of some animals, such as flocks of birds or schools of fish. The candidate solutions in PSO are called particles. The movements of the particles are guided by their own best known positions, called pbest, and the entire swarm's best known position, called gbest. The position and velocity of the i-th particle are updated according to Equations (1) and (2):

v^{t+1}_{id} = ω·v^{t}_{id} + c_1·r_1·(pbest^{t}_{id} − x^{t}_{id}) + c_2·r_2·(gbest^{t}_{d} − x^{t}_{id}), (1)

x^{t+1}_{id} = x^{t}_{id} + v^{t+1}_{id}. (2)
In (1) and (2), x^{t}_{id} and v^{t}_{id} are the d-th components of the position and velocity vectors of particle i in the D-dimensional space at iteration t, respectively. c_1 and c_2 are positive constants which control the influence of pbest_i and gbest in the search process, respectively. r_1 and r_2 are random values between 0 and 1. The inertia weight ω, which was proposed by Shi and Eberhart [19], plays a significant role in balancing the algorithm's global and local search abilities. The fitness value of each particle's position is determined by a fitness function. PSO is usually executed by repeatedly computing (1) and (2) until a specified number of iterations has been reached or the velocity updates are close to zero. The pseudo code of the PSO algorithm can be found in the description by Tian et al. [4].
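As a concrete illustration, one iteration of the update rules (1) and (2) can be sketched as follows. This is a minimal sketch; the function name and default parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=None):
    """One PSO iteration. x, v, pbest have shape (S, D); gbest has shape (D,)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    r1 = rng.random(x.shape)  # fresh random numbers per particle and dimension
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (1)
    x_new = x + v_new                                              # Eq. (2)
    return x_new, v_new
```

For particles starting at the origin with zero velocity, the new position equals the new velocity, which points toward the best known positions.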

Quantum-Behaved Particle Swarm Optimization
Trajectory analysis by Clerc and Kennedy [6] demonstrated that convergence of PSO may be achieved if each particle converges to its local attractor p^{t}_{i} = (p^{t}_{i1}, ..., p^{t}_{iD}), defined as

p^{t}_{id} = φ·pbest^{t}_{id} + (1 − φ)·gbest^{t}_{d}, for 1 ≤ d ≤ D, (3)

where t is the iteration number, φ is a random number uniformly distributed on (0, 1), that is, φ ∼ U(0, 1), pbest_i represents the best historical position found by particle i, and gbest is the current global best position found by the swarm.
Inspired by quantum mechanics and the trajectory analysis of PSO, two versions of quantum-behaved PSO (QPSO), called QPSO with a delta potential well model (QDPSO) [7] and a revised QPSO (RQPSO) [8], were proposed one after another by Sun et al. in 2004. In QDPSO, the position of particle i at iteration t is updated according to Equation (5) by employing the Monte Carlo method:

x^{t+1}_{id} = p^{t}_{id} ± α·|x^{t}_{id} − p^{t}_{id}|·ln(1/u), (5)

where the + sign is taken if rand ≥ 0.5 and the − sign otherwise.
Herein, both u and rand are random numbers uniformly distributed on (0, 1), and α is a positive real number named the contraction-expansion coefficient, which can be set as α = 0.5 + 0.5(T − t)/T to balance the global and local search abilities of QDPSO, where t and T are the current and maximum iteration numbers, respectively. In RQPSO, a global point, denoted as mbest = (mbest_1, ..., mbest_d, ..., mbest_D) and called the mean personal best position, is introduced to enhance the global search ability of QPSO. The global point corresponding to the t-th iteration is computed by Equation (6):

mbest^{t}_{d} = (1/S)·Σ_{i=1}^{S} pbest^{t}_{id}. (6)
Herein, parameter t is the current iteration number and S is the number of particles. The position of particle i at iteration t in RQPSO is updated as shown in Equation (7):

x^{t+1}_{id} = p^{t}_{id} ± α·|mbest^{t}_{d} − x^{t}_{id}|·ln(1/u). (7)
Herein, α and u have the same meanings as the corresponding parameters in (5), and mbest^{t}_{d} is the mean personal best position of the population for the d-th dimension at the t-th iteration.
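Putting (3), (6), and (7) together, one RQPSO position update can be sketched as follows. This is a minimal sketch under the standard formulation; the function name is illustrative.

```python
import numpy as np

def rqpso_step(x, pbest, gbest, alpha, rng=None):
    """One RQPSO iteration for all particles. x, pbest: (S, D); gbest: (D,)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    S, D = x.shape
    phi = rng.random((S, D))
    p = phi * pbest + (1.0 - phi) * gbest        # local attractor, Eq. (3)
    mbest = pbest.mean(axis=0)                   # mean personal best, Eq. (6)
    u = rng.random((S, D))
    sign = np.where(rng.random((S, D)) < 0.5, -1.0, 1.0)
    # Eq. (7): the step length is scaled by the distance to mbest
    return p + sign * alpha * np.abs(mbest - x) * np.log(1.0 / u)
```

When the whole swarm has collapsed onto a single point, mbest coincides with every position, so the update leaves the particles at the attractor.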

Population Diversity
The population diversity of PSO is useful for measuring and dynamically adjusting the algorithm's ability to explore the best path [18]. Lu et al. [20] proposed a method to measure the population diversity using (8):

σ²(t) = Σ_{i=1}^{S} ((f^{(t)}_{i} − f^{(t)}_{avg}) / F)². (8)
In (8), σ²(t) is the sum of squares of deviations of the particles' fitness values, S stands for the swarm size, f^{(t)}_{i} is the fitness of the i-th particle at the t-th iteration, f^{(t)}_{avg} is the average fitness of the swarm at the t-th iteration, and F is the normalized calibration factor used to confine σ²(t). Lu et al. [20] defined F as (9):

F = max{1, max_{1≤i≤S} |f^{(t)}_{i} − f^{(t)}_{avg}|}. (9)
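A minimal sketch of the diversity measure, assuming the normalization F = max(1, max_i |f_i − f_avg|), which confines σ²(t) to the interval [0, S]:

```python
import numpy as np

def fitness_variance(f):
    """Normalized sum of squared fitness deviations, Eqs. (8)-(9)."""
    f = np.asarray(f, dtype=float)
    dev = f - f.mean()                      # deviations from the average fitness
    F = max(1.0, float(np.abs(dev).max()))  # calibration factor, Eq. (9)
    return float(np.sum((dev / F) ** 2))    # Eq. (8)
```

Identical fitness values give 0 (strongest aggregation), while absolute deviations of exactly 1 give S.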
Quantum-Behaved Particle Swarm Optimization with Weighted Mean Personal Best Position and Adaptive Local Attractor (ALA-QPSO)

Weighted Mean Personal Best Position
In RQPSO, the mean personal best position of the population (mbest) is tracked by particle i during its search. An equal weight coefficient is used to construct the linear combination of the particles' best positions, which does not distinguish the differing effects of particles with different fitness on guiding particle i toward globally optimal solutions. Some useful information hidden in the particles' personal best positions may thus be lost, which is not conducive to improving the global search ability of the QPSO algorithm. For a minimization problem, the elite are the particles that correspond to the minimum objective function values; the smaller the objective function value is, the better the particle's fitness. In societal settings, the elite contribute more towards qualitative improvements. This analogy is drawn into the mean personal best position calculation by assigning larger weights to particles with better fitness and smaller weights to comparatively poorer performing particles. In this paper, a weighted mean personal best position, calculated by Equations (10) and (11) according to the feedback of the particles' fitness values, is used to guide particle i toward globally optimal solutions for a minimization problem, where the weight r_i(t) of particle i is given by (10) and the weighted mean personal best position is

mpbest^{t}_{d} = Σ_{i=1}^{S} r_i(t)·pbest^{t}_{id}. (11)
In (10) and (11), t is the current iteration number, f^{obj}_{i}(t) is the objective function value corresponding to the i-th particle at the t-th iteration, S is the number of particles, and r_i(t) is the coefficient of the best position of the i-th particle at the t-th iteration, used to construct the weighted mean personal best position.
From (10), it can be deduced that the r_i(t) sum to 1 and each lies between 0 and 1 at iteration t. When the sum of the objective function values of all particles is 0, the coefficient r_i(t) is 1/S. Otherwise, the smaller f^{obj}_{i}(t) is, the larger r_i(t) is. That is to say, when constructing a weighted mean personal best position to guide particles toward optimal solutions of a minimization problem, the better a particle's fitness (i.e., the smaller its objective value), the more important its best position is. Thus, the differing effects of particles with different fitness are distinguished.
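The weighting scheme can be sketched as follows. Note that the weights used here are an illustrative choice satisfying the properties stated above (they sum to 1, lie in [0, 1], reduce to 1/S when all objective values are zero, and grow as the objective value shrinks); they are not necessarily the paper's Equation (10).

```python
import numpy as np

def weighted_mpbest(pbest, f_obj):
    """Weighted mean personal best position, Eq. (11).

    The weights below are an illustrative stand-in for Eq. (10):
    r_i = (1 - f_i / sum_j f_j) / (S - 1), with equal weights when
    the objective values sum to zero."""
    f = np.asarray(f_obj, dtype=float)
    S = len(f)
    total = f.sum()
    if total == 0.0:
        r = np.full(S, 1.0 / S)            # degenerate case: equal weights
    else:
        r = (1.0 - f / total) / (S - 1.0)  # smaller objective -> larger weight
    return r @ pbest                       # Eq. (11): sum_i r_i * pbest_i
```

With two particles whose objective values are 1 and 3, the better particle receives weight 0.75 and the worse one 0.25, so the weighted mean is pulled toward the better personal best.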

Adaptive Local Attractor
The trajectory analysis in [6] shows that each particle in PSO converges to its local attractor. From Equation (3) or (4), the local attractor combines the particle's best known position (pbest_i) and the entire swarm's best known position (gbest). It is therefore very useful to find an efficient way to combine the good information hidden in these two best known positions.
Generally, in population-based optimization methods, it is desirable during the early stages of the optimization to encourage the individuals to wander through the entire search space without gathering around local optima. On the other hand, during the latter stages, it is very important to enhance convergence toward the global optimum, so as to find the optimal solution efficiently. Moreover, diversity is an important aspect of population-based optimization methods, since it influences their performance and is closely linked to the exploration-exploitation tradeoff. High diversity facilitates exploration, which is usually required during the initial iterations of the optimization algorithm. Low diversity is indicative of exploitation of a small area of the search space, which is desired during the latter part of the optimization process. Furthermore, the experience of each particle has more influence on its next position update at the beginning of the iterations, whereas the experience of the other particles has more influence at the later stage of the iterations.
Equation (8) shows that σ²(t) lies in the interval between 0 and S. When all particles are located at the same position, σ²(t) is zero, which stands for the strongest swarm aggregation. Conversely, σ²(t) is S when all absolute differences between the current fitness of the particles and their average fitness equal one. Thus, the sum of squares of deviations of the particles' fitness values generally shows a decreasing trend as the number of iterations increases.
Considering those concerns, in this paper we propose a novel way of computing the local attractor to achieve the above scheme, as shown in Equation (12). The objective of this development is to enhance the global search in the early part of the optimization and to encourage the particles to converge toward the global optimum at the end of the search. The position of particle i at iteration t is then updated as shown in Equation (13) in the proposed QPSO with weighted mean personal best position and adaptive local attractor (ALA-QPSO).
In (12), σ²(t) is the sum of squares of deviations of the particles' fitness values at iteration t, S is the swarm size, and φ is a random number uniformly distributed on (0, 1). Equation (13) takes the form

x^{t+1}_{id} = Ala^{t}_{id} ± α·|mpbest^{t}_{d} − x^{t}_{id}|·ln(1/u), (13)

where α and u have the same meanings as the corresponding parameters in (5), mpbest^{t}_{d} is the weighted mean personal best position for the d-th dimension at iteration t, calculated by Equation (11), and Ala^{t}_{id} is the adaptive local attractor of particle i for the d-th dimension at iteration t, calculated by Equation (12).

Quantum-Behaved Particle Swarm Optimization with Adaptive Local Attractor (ALA-QPSO)
Applying the weighted mean personal best position in (11) and the adaptive local attractor in (12) to QPSO, the proposed QPSO with weighted mean personal best position and adaptive local attractor (ALA-QPSO) is described in Algorithm 1 below.
Algorithm 1. ALA-QPSO.
1. Initialize the parameters, including the swarm size S and the maximum iteration count T.
2. Initialize the particles in the population with random position vectors.
3. Calculate pbest for each particle and gbest for the swarm.
4. Calculate the sum of squares of deviations of the particles' fitness values, σ²(t), using (8).
5. Update the weighted mean personal best position mpbest^t using (11).
6. Update the adaptive local attractor for each particle using (12).
7. Update the position of each particle using (13).
8. If the terminating condition is not met, go to step 3.
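Algorithm 1 can be sketched end to end as below. Since Equations (10) and (12) are not reproduced in this text, the weight formula and the attractor blend are illustrative stand-ins chosen to match the qualitative behavior described above (pbest dominates while the diversity σ²(t) is high; gbest dominates as it decays); all names and parameter values are assumptions, not the paper's.

```python
import numpy as np

def sphere(x):
    """Benchmark objective f(x) = sum(x^2), minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def ala_qpso(obj, D=10, S=20, T=500, seed=1):
    """Sketch of the ALA-QPSO loop (Algorithm 1).

    The weights r_i and the attractor blend below are illustrative
    stand-ins for Eqs. (10) and (12)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (S, D))               # step 2: random positions
    f = np.array([obj(xi) for xi in x])
    pbest, pf = x.copy(), f.copy()                   # step 3: pbest and gbest
    g = int(np.argmin(pf))
    gbest, gf = pbest[g].copy(), float(pf[g])
    for t in range(T):
        alpha = 1.0 - 0.5 * t / T                    # alpha: 1.0 -> 0.5 linearly
        dev = f - f.mean()                           # step 4: diversity, Eq. (8)
        F = max(1.0, float(np.abs(dev).max()))
        sigma2 = float(np.sum((dev / F) ** 2))
        tot = pf.sum()                               # step 5: mpbest, Eq. (11)
        r = np.full(S, 1.0 / S) if tot == 0 else (1.0 - pf / tot) / (S - 1.0)
        mpbest = r @ pbest
        c = (sigma2 / S) * rng.random((S, D))        # step 6: assumed Eq. (12)
        ala = c * pbest + (1.0 - c) * gbest
        u = rng.uniform(1e-12, 1.0, (S, D))          # step 7: update, Eq. (13)
        sign = np.where(rng.random((S, D)) < 0.5, -1.0, 1.0)
        x = ala + sign * alpha * np.abs(mpbest - x) * np.log(1.0 / u)
        f = np.array([obj(xi) for xi in x])
        better = f < pf                              # refresh pbest and gbest
        pbest[better], pf[better] = x[better], f[better]
        g = int(np.argmin(pf))
        if pf[g] < gf:
            gbest, gf = pbest[g].copy(), float(pf[g])
    return gbest, gf
```

Because gf is only ever overwritten by a smaller value, the returned objective value is never worse than the best initial sample.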

Benchmark Functions
In this section, the twelve classical benchmark functions used to verify the performance of the ALA-QPSO algorithm are listed in Table 1. These benchmark functions are widely adopted in numerical optimization studies [21]. In this paper, the twelve benchmark functions are divided into three groups. The first group includes five unimodal functions (f_1-f_5). The second group includes four complex multimodal functions (f_6-f_9), where f_7 has a large number of local optima, making its global optimum difficult to find, f_8 has many minor local optima, and f_9 includes linkage among variables, so its theoretical optimum is difficult to reach. The third group includes three rotated multimodal functions (f_10-f_12). In Table 1, D denotes the dimension of the solution space; the global optimal solution (x*) and the global optimal value (f(x*)) of the twelve classical benchmark functions are given in columns 5 and 6, respectively.

Experimental Settings
The benchmark functions defined in the previous subsection are used to test the performance of ALA-QPSO. Five algorithms are used for comparison: artificial bee colony (ABC) [22], revised QPSO (RQPSO) [8], QPSO with Gaussian distributed local attractor (GAQPSO) [16], an enhanced QPSO based on a novel way of computing the local attractor (EQPSO) [17], and an improved QPSO with weighted mean best position (WQPSO) [23].
For all compared algorithms, the population size is set to 20 [22,24]. For ABC, the number of iterations after which an unimproved food source is abandoned (trial limit) is set to 100 [25]. For all QPSO variants, the contraction-expansion coefficient (α) decreases linearly from 1.0 to 0.5 during the search process, and the local attractors are calculated by Equation (4), except for ALA-QPSO.
Two groups of experiments are conducted. First, setting the convergence criterion as reaching the maximum number of iterations, the mean (Mean) and standard deviation (SD) of the final fitness value over multiple runs are compared to assess the differences in solution accuracy and stability among the compared algorithms. Second, setting the convergence criterion as the objective function value reaching the required precision without exceeding the maximum number of iterations, the success rate (SR), average convergence iteration number (AIN), and average computational time (Time) over multiple runs are compared to assess the differences in convergence speed. To obtain an unbiased CPU time comparison, all experiments are conducted using MATLAB R2018a on a personal computer with an Intel Core i7-7500U 2.7 GHz CPU and 16 GB of memory.

Comparison of the Solution Accuracy and Stability
In the experiments, the first comparison of the tested functions in Table 1 is conducted on 30, 50, and 100 dimensions, with the maximum iteration number (T) set to 10,000, 10,000, and 2000, respectively. All compared algorithms terminate when the maximum iteration number is reached. The results are shown in Tables 2-4 in terms of the mean and standard deviation of the solutions obtained in 30 independent runs by each algorithm. Note that the mean of the solutions indicates the solution quality of an algorithm, and the standard deviation represents its stability.

Table 2 shows the results on the 30-dimension problems. According to the results, ALA-QPSO finds better average objective function values than the other algorithms on f_6 and f_8, and both ALA-QPSO and EQPSO find the global optimum on the remaining functions. On the other hand, the standard deviations of the ALA-QPSO algorithm are smaller than those of the ABC, GAQPSO, RQPSO, and WQPSO algorithms on all tested functions. Although the standard deviation of ALA-QPSO is larger than that of EQPSO on f_6 and f_8, ALA-QPSO provides smaller average objective function values than EQPSO on these two functions.
Table 3 shows the results on the 50-dimension problems. From Table 3, it can be seen that both ALA-QPSO and EQPSO find the global optimum on f_1-f_3, f_5, f_7, and f_9-f_12, and achieve equal average objective function values on f_8. ALA-QPSO obtains better average objective function values than the other algorithms on the remaining functions. The standard deviation of ALA-QPSO is less than or equal to that of EQPSO on all the test functions. ALA-QPSO and EQPSO achieve higher accuracy and stronger stability than the other algorithms on all test functions.
Table 4 shows the results on the 100-dimension problems. From Table 4, one can see that both ALA-QPSO and EQPSO reach the true optimal values for six of the twelve benchmark functions (f_5, f_7, and f_9-f_12). ALA-QPSO finds better average objective function values than the other algorithms on f_1-f_4 and f_6. Overall, the standard deviations of ALA-QPSO and EQPSO are smaller than those of the other algorithms.
To present an overall comparison of performance between ALA-QPSO and the other algorithms, Table 5 shows the detailed results of the non-parametric Friedman test [26,27]. The last column of this table gives the corresponding p-values, which indicate significant differences between the compared algorithms at the 0.05 significance level. From Table 5, it can be seen that there is a significant difference in accuracy and stability among the compared algorithms. Since the Friedman test assigns the lowest rank to the best performing algorithm, it can be concluded that the ALA-QPSO algorithm has the highest precision and the strongest stability among the compared algorithms. Moreover, Figures 1 and 2 present the convergence curves for the twelve selected functions. Note that the logarithm of the mean of the best objective value at the t-th iteration, denoted log(y(t)), is used as the y-coordinate, and the iteration number as the x-coordinate. Since the logarithm is negative infinity (not shown in the figures) when the obtained best objective value is zero, some curves stop after a certain number of function evaluations. From Figures 1 and 2, it can be seen that both ALA-QPSO and EQPSO find the global optimum of f_1-f_5, f_7, and f_9-f_12 for the 30-dimension problems, with ALA-QPSO requiring fewer iterations than EQPSO, and that ALA-QPSO finds smaller average objective function values than the other algorithms on f_6 and f_8 for the 30-dimension problems. The convergence curves of ALA-QPSO for all the functions drop off sharply at a certain point during the early stage of optimization.

Comparison of the Convergence Speed and Reliability
Each algorithm stops when the maximum number of iterations is reached or the objective function value is less than or equal to its target accuracy threshold. The accuracy thresholds for f_6 and f_8 are set to 2.8 × 10^1 and 5.0 × 10^−15, respectively, and for the others to 1.0 × 10^−50; the maximum iteration number is set to 10,000. Each algorithm was executed 30 times. Three comparison indexes are calculated for the twelve functions with 30 dimensions and reported in Tables 6-8: the success rate (SR), which is the percentage of runs whose final population best solution meets the accuracy threshold; the average convergence iteration number (AIN), which is the average number of iterations at which the algorithm reaches the termination condition over multiple runs; and the average computational time (Time), which is the average running time over multiple runs until the algorithm meets the termination condition established for this comparison. Note that some of the compared algorithms may converge to the threshold for the tested functions while others may not. For runs in which the objective function meets the accuracy threshold, the minimum number of iterations required to meet it is used to calculate the AIN; for runs in which it does not, the maximum number of iterations is used. Table 6 presents the success rates of the six compared algorithms on the tested functions with 30 dimensions. Both ALA-QPSO and EQPSO obtain a 100% success rate on all tested functions. The success rate of these two algorithms is higher than those of the other algorithms for ten of the twelve benchmark functions. The results indicate that ALA-QPSO and EQPSO show the best reliability among all the compared algorithms.
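The SR and AIN definitions above can be computed from per-run records as in the following sketch (plain Python; the function and argument names are illustrative):

```python
def success_rate_and_ain(final_values, conv_iters, threshold, max_iter):
    """Success rate (SR, percent) and average convergence iteration number (AIN).

    final_values[k]: best objective value of run k at termination.
    conv_iters[k]:   iteration at which run k first met the threshold;
                     pass max_iter for runs that never met it, as in the text."""
    runs = len(final_values)
    successes = sum(1 for v in final_values if v <= threshold)
    sr = 100.0 * successes / runs
    # unsuccessful runs contribute the maximum iteration number to the AIN
    ain = sum(min(it, max_iter) for it in conv_iters) / runs
    return sr, ain
```

For example, with two runs where one converges at iteration 100 and the other never converges within 10,000 iterations, SR is 50% and AIN is 5050.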
Table 7 presents the average convergence iteration numbers of all the compared algorithms on the tested functions with 30 dimensions. The average convergence iteration number of ALA-QPSO is the minimum among the six compared algorithms for all tested functions. The results show that ALA-QPSO needs the fewest convergence iterations to reach the accuracy threshold, and that the optimization speed of ALA-QPSO is faster than those of the other compared algorithms for all the tested functions. Table 8 presents the computational time of the six compared algorithms. It can be observed that the computational time of ALA-QPSO is the minimum among the compared algorithms. Since both ALA-QPSO and EQPSO converge to the threshold for all tested functions, the computational time demonstrates the superiority of the proposed algorithm to some extent. To present an overall comparison of convergence speed and reliability between ALA-QPSO and the other algorithms, Table 9 shows the detailed results of the Friedman test. From Table 9, it can be seen that there is a significant difference in convergence speed and reliability among the compared algorithms. Since the Friedman test assigns the lowest rank to the best performing algorithm, it can be concluded that the ALA-QPSO algorithm has the fastest convergence speed and the best reliability among all compared algorithms.

Conclusions
In this paper, a variant of QPSO, namely ALA-QPSO, is proposed to simultaneously enhance the search performance of QPSO and achieve good global optimization ability. First, a weight parameter is introduced to distinguish the differing effects of particles with different fitness, and it is used to obtain the weighted mean personal best position of the population. Second, a linear combination of the particle best known position and the entire swarm's best known position is designed to form the adaptive local attractor, using the sum of squares of deviations of the particles' fitness values as the linear combination coefficient. The objective of this development is to enhance the global search in the early part of the optimization and to encourage the particles to converge toward the global optimum at the end of the search. Finally, the proposed ALA-QPSO algorithm was tested on twelve benchmark functions and compared with the basic artificial bee colony algorithm and four other QPSO variants. Experimental results show that ALA-QPSO performs better than the compared methods on all of the benchmark functions in terms of global search capability and convergence rate. We conclude that distinguishing the fitness of particles to calculate the weighted mean personal best position, and monitoring the diversity of the QPSO population to construct an adaptive local attractor that guides the particles, is feasible. Although the proposed ALA-QPSO exhibited superior performance in the experiments reported in the previous subsections, it is applicable only to unconstrained problems in a continuous search space. More modifications are needed to further extend its applicability to a more general class of optimization problems, including discrete, multi-objective, constrained, and dynamic optimization problems. Our future work will focus on using other strategies to construct more effective local attractors for the QPSO algorithm and on applying ALA-QPSO to real-world optimization problems.

Figure 1. Convergence graphs of the six algorithms on the 30-D benchmark functions f_1-f_6. y(t) represents the mean of the best objective value at the t-th iteration.

Figure 2 .
Figure 2. Convergence graphs of the six algorithms on the 30-D benchmark functions f_7-f_12. y(t) represents the mean of the best objective value at the t-th iteration.

Table 1. High-dimensional classical benchmark functions.

Table 5. Average ranking of the algorithms for the test functions, as obtained by the Friedman test.

Table 9. Average ranking of the competitor algorithms for the twelve benchmark functions, as obtained by the Friedman test (S = 20; D = 30; T = 10,000).