An Enhanced Quantum-Behaved Particle Swarm Optimization Based on a Novel Computing Way of Local Attractor

Quantum-behaved particle swarm optimization (QPSO) is a global optimization method that combines particle swarm optimization (PSO) with concepts from quantum mechanics. It performs well in terms of search ability, convergence speed, solution accuracy and robustness. However, the traditional QPSO still cannot guarantee finding the global optimum with probability 1 when the number of iterations is limited. To improve the global searching performance of QPSO in this case, a novel way of computing the local attractor is proposed; the resulting algorithm is denoted EQPSO. With this computation, the particles are guaranteed to be diversiform at the early stage of the iterations and to have good local searching ability at the later stage. We also analyze this way of computing the local attractor mathematically. The results on test functions are compared between EQPSO and other optimization techniques (including six PSO variants and seven other optimization algorithms), and the results found by EQPSO are better than those of the other considered methods.


Introduction
QPSO (quantum-behaved particle swarm optimization) is a novel optimization method proposed on the basis of the PSO (particle swarm optimization) algorithm and quantum mechanics. The QPSO algorithm greatly improves on PSO in terms of search ability, convergence speed, solution accuracy and robustness, and it also overcomes the shortcoming that the standard PSO (SPSO) cannot guarantee global convergence [1,2]. For this reason, QPSO has been widely used in bio-medicine, antenna design, combinatorial optimization, signal processing, neural networks and other fields [3-11].
The disadvantage of the QPSO algorithm is that it cannot find the global optimum with probability 1 when the number of iterations is limited. In practical applications the total number of iterations is a finite number and cannot be set to infinity. Therefore, we propose a novel way to compute the local attractor of QPSO to improve its global searching ability when the number of iterations is limited. In the rest of this paper, we first introduce the basic theory of SPSO and QPSO; then the proposed way of computing the local attractor is presented and analyzed. Finally, the enhanced QPSO is evaluated on test functions, and its results are compared with those of six other PSO algorithms and seven other optimization algorithms.

Standard Particle Swarm Optimization
PSO is an evolutionary computation algorithm based on swarm intelligence theory. The algorithm originates from a simulation of bird predation behavior, and its emphasis lies in the cooperation and competition between individuals. Because of its computational speed and ease of implementation, PSO algorithms have been successfully applied in system identification, neural network training, fuzzy system control and other application fields.
In PSO [20], each candidate solution is called a particle, and the particles are treated as having no mass or volume. Each particle remembers its own best position, which is called the local optimum, and the best position found by all the particles is called the global optimum. Suppose a population of M particles moves in a D-dimensional space with a certain velocity; PSO then updates the velocity and position of particle i in dimension d according to Equations (1) and (2).
$$V_{id}(t+1) = V_{id}(t) + c_1 r_1 (P_{id} - X_{id}(t)) + c_2 r_2 (P_{gd} - X_{id}(t)) \quad (1)$$
$$X_{id}(t+1) = X_{id}(t) + V_{id}(t+1) \quad (2)$$
where c1 and c2 are the learning factors, and r1 and r2 are random numbers uniformly distributed between 0 and 1. To improve the optimization ability, Shi [21,22] put forward a PSO algorithm with an inertia weight ω that decreases linearly from 1 to 0.1; Equation (1) then becomes Equation (3).
$$V_{id}(t+1) = \omega V_{id}(t) + c_1 r_1 (P_{id} - X_{id}(t)) + c_2 r_2 (P_{gd} - X_{id}(t)) \quad (3)$$
The algorithm expressed by Equations (2) and (3) is commonly referred to as the standard PSO (SPSO). Compared to PSO, SPSO searches a larger space in the initial period and obtains more precise results in the final period. It is worth noting that although PSO has a relatively simple structure and runs very fast, the algorithm cannot guarantee global convergence.
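To make the update concrete, the following is a minimal Python sketch of one SPSO iteration following Equations (2) and (3). The learning-factor values and all names are illustrative assumptions rather than settings taken from the paper.

```python
import numpy as np

# Minimal sketch of one SPSO iteration, Equations (2) and (3).
# c1 = c2 = 2.0 is a common choice, assumed here since the paper
# does not state the values it used.

def spso_step(X, V, P, g, w, c1=2.0, c2=2.0):
    """X: positions (M x D); V: velocities; P: personal bests; g: global best; w: inertia weight."""
    M, D = X.shape
    r1 = np.random.rand(M, D)   # r1, r2 ~ U(0, 1), drawn fresh each iteration
    r2 = np.random.rand(M, D)
    V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)   # Equation (3)
    X = X + V                                            # Equation (2)
    return X, V
```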

Quantum-Behaved Particle Swarm Optimization
QPSO is a newer PSO algorithm, inspired by combining quantum mechanics with the PSO framework. It is superior to the traditional PSO algorithm not only in search ability but also in accuracy. In this model, which is based on a delta potential well, particles can appear at any point of the search space with a certain probability, and the QPSO algorithm can overcome the defect of the SPSO algorithm that it cannot guarantee global convergence with probability 1.
In the quantum space, a particle's velocity and position cannot be determined at the same time. Therefore, the state of a particle must be depicted by a wave function ψ(X, t). The physical meaning of ψ(X, t) is that the probability density of the particle appearing at a certain position is given by |ψ(X, t)|^2, from which the probability distribution function can be obtained.
Given the probability distribution function, the particle's position can be updated through a Monte Carlo stochastic simulation method [23] according to Equation (4):
$$X_{id}(t+1) = p_{id} \pm \frac{L}{2} \ln\left(\frac{1}{u}\right), \quad u \sim U(0, 1) \quad (4)$$
where p_id is the local attractor, computed by Equation (5):
$$p_{id} = \varphi P_{id} + (1 - \varphi) P_{gd}, \quad \varphi \sim U(0, 1) \quad (5)$$
Here P_id is the best position found by particle i, P_gd is the best location found by all the particles, and the parameter L can be evaluated by Equation (6):
$$L = 2\alpha \left| mbest_d - X_{id}(t) \right| \quad (6)$$
where mbest is the average optimal position of all the particles [24], computed by Equation (7):
$$mbest = \frac{1}{M} \sum_{i=1}^{M} P_i \quad (7)$$
Therefore, the position update becomes Equation (8):
$$X_{id}(t+1) = p_{id} \pm \alpha \left| mbest_d - X_{id}(t) \right| \ln\left(\frac{1}{u}\right) \quad (8)$$
where α is a parameter of the QPSO algorithm called the contraction-expansion coefficient. In this paper, α = 0.5 + 0.5 × (Lc − Cc)/Lc, where Lc is the total number of iterations and Cc is the current number of iterations.

Novel Computing Way of Local Attractor
It has been proved that QPSO can find the global optimum when the number of total iterations tends towards infinity [25]. In practice, however, the total number of iterations is a finite number, such as one thousand or one million, and cannot be set to infinity. The global searching ability of QPSO is therefore limited when the number of iterations is limited. To give QPSO better global searching performance in this case, we must guarantee that the particles are diversiform at the early stage of the iterations and have good local searching ability at the later stage. A novel way of computing the local attractor is proposed to achieve this scheme, and it is shown as Equation (9).
The coefficients of Pid and Pgd in Equation (9) differ from those in Equation (5). In this way, the experience of each particle (Pid) has more influence on the particles when they update their next position at the beginning of the iterations, and the experience of the other particles (Pgd) has more influence when they update their next position at the later stage of the iterations. For simplicity, we denote the coefficients of Pid and Pgd in Equation (9) by β1 and β2, respectively; their ranges are shown in Figure 2.
The probability of β1 > β2 under the two different ways of computing the local attractor is shown in Figure 3, where methods 1 and 2 correspond to Equations (5) and (9), respectively. It can be seen that, in the proposed QPSO, Pid has more influence on the particles when they update their next position at the beginning of the iterations, and Pgd has more influence at the later stage of the iterations. In contrast, in the traditional QPSO the influence of Pid and Pgd on the particles is random, with probability 0.5 throughout the iterations. For simplicity, we refer to the novel QPSO proposed in this paper as the enhanced QPSO (EQPSO), and to the traditional QPSO as the standard QPSO (SQPSO).
Finally, the EQPSO algorithm can be described by the following procedure:
Step 1: Initialize the particles X_id and let P_id = X_id;
Step 2: Update P_id according to the fitness function, and update P_gd as the best among all particles' best positions;
Step 3: Compute mbest according to Equation (7);
Step 4: Compute p_id according to Equation (9);
Step 5: Compute the new position X_id according to Equation (8);
Step 6: Repeat Steps 2-5 until the algorithm satisfies the end condition.
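The following Python sketch illustrates this procedure. Since it is meant only as an outline, the local attractor is computed with the standard form of Equation (5); EQPSO would replace the marked line with Equation (9). The sphere objective and all names are illustrative assumptions, not the paper's own code.

```python
import numpy as np

# Sketch of the QPSO loop of Steps 1-6. Minimization is assumed.

def qpso(f, D=30, M=20, Lc=1000, lo=-100.0, hi=100.0):
    X = np.random.uniform(lo, hi, (M, D))        # Step 1: initialize particles
    P = X.copy()                                 # personal bests, P_id = X_id
    fP = np.array([f(x) for x in P])
    for Cc in range(1, Lc + 1):
        g = P[np.argmin(fP)]                     # Step 2: global best P_gd
        mbest = P.mean(axis=0)                   # Step 3: Equation (7)
        alpha = 0.5 + 0.5 * (Lc - Cc) / Lc       # contraction-expansion coefficient
        phi = np.random.rand(M, D)
        p = phi * P + (1.0 - phi) * g            # Step 4: Equation (5); EQPSO uses Equation (9) here
        u = np.random.uniform(1e-12, 1.0, (M, D))          # u ~ U(0,1), bounded away from 0
        sign = np.where(np.random.rand(M, D) < 0.5, 1.0, -1.0)
        X = p + sign * alpha * np.abs(mbest - X) * np.log(1.0 / u)   # Step 5: Equation (8)
        fX = np.array([f(x) for x in X])
        better = fX < fP                         # Step 2 (next round): update personal bests
        P[better], fP[better] = X[better], fX[better]
    return P[np.argmin(fP)], fP.min()            # Step 6 ends after Lc iterations

# Usage: minimize the Sphere Model of Table 1.
best_x, best_f = qpso(lambda x: np.sum(x ** 2))
```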

Comparison of Different PSO Algorithms
In this section, we compare the results on test functions found by EQPSO with those of six other PSO methods. Sphere Model, Generalized Rastrigin, Griewank, Ackley, Alpine, Schwefel's Problem and Generalized Rosenbrock were used as the test functions; detailed information about these seven functions is given in Table 1. Sphere Model is a nonlinear, symmetric, unimodal function whose dimensions are separable. Most algorithms can easily find the global optimum of Sphere Model, so it is used to test the optimization precision of EQPSO. Generalized Rastrigin, Griewank, Ackley, Alpine, Schwefel's Problem and Generalized Rosenbrock are complex functions with many local minima, so they are employed to test the global searching ability of EQPSO in this paper. Two of these functions are written out in code below.
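As a reference, here are the standard forms of two of the Table 1 benchmarks, written as a short sketch; the exact search ranges used in the paper are those listed in Table 1 and are not repeated here.

```python
import numpy as np

def sphere(x):
    """Sphere Model: unimodal and separable; global minimum 0 at x = 0."""
    return np.sum(x ** 2)

def rastrigin(x):
    """Generalized Rastrigin: highly multimodal; global minimum 0 at x = 0."""
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)
```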
PSO uses Equation (1) to update the particles' velocity, while SPSO uses Equation (3). SQPSO employs Equation (5) to compute the local attractor, while EQPSO employs Equation (9). PSO, SPSO, SQPSO and EQPSO all initialize their particle swarms with uniformly distributed random numbers, and the other settings of these four methods are as introduced in Section 2. The settings of M1, M2 and M3 were the same as in [6,7,11]. The dimension of all seven test functions was 30, and each program was repeated 10 times. All programs were run in MATLAB R2009a on a computer with a Windows 7 operating system and an Intel(R) Core(TM) i5-3470 CPU (4 cores) @ 3.2 GHz. The minimum value found by each optimization algorithm over the 10 runs and its mean value were used to evaluate performance. All results are shown in Tables 2-8. As can be seen from Tables 2-8, the best results are obtained by EQPSO, which shows that the global searching ability of EQPSO is greatly improved by the novel way of computing the local attractor. EQPSO finds the global minimum of Sphere Model, Generalized Rastrigin, Griewank, Alpine and Generalized Rosenbrock, and although none of the algorithms finds the global optimum of Ackley and Schwefel's Problem, the results found by EQPSO are better than those of the other six algorithms. We can also see that EQPSO with either 20 or 40 particles obtains the best result, which illustrates that EQPSO can achieve a good result even with a small population.
The convergence speeds of the different optimization algorithms when optimizing the Sphere Model, Generalized Rastrigin, Griewank and Ackley functions are shown in Figures 4-7 (the swarm size was 80 and the number of iterations was 3000). It can be seen that the convergence speed of EQPSO is faster than that of the other considered PSO methods. EQPSO and the other considered PSO methods were also used to optimize problems with constraints; three constrained functions from paper [26] (g07, g09 and g10, shown in Table 9) were used as the test functions.

The mechanism proposed in paper [26] was adopted to help the considered PSO methods solve the constrained functions. This mechanism is used to select the leaders, and it is based both on the feasibility of a solution and on the fitness value of a particle. In this mechanism, when two feasible particles are compared, the particle with the better fitness value wins. If one particle is infeasible and the other is feasible, the feasible particle wins. If both particles are infeasible, the particle with the lower constraint violation wins. The idea is to choose as a leader the particle that, even when infeasible, lies closer to the feasible region (a sketch of this rule is given after this paragraph). More detailed information about this mechanism can be found in paper [18]. When the seven considered methods were used to optimize these three functions, the size of the particle swarm was 80 and the maximum number of iterations was 3000. The results are shown in Table 10, from which we can see that the results found by EQPSO are better than those of the other PSO methods for functions g07, g09 and g10.
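A minimal sketch of this comparison rule follows, assuming constraints of the form g_i(x) ≤ 0 and comparing infeasible particles by their total constraint violation, as the cited mechanism implies; the fitness here is higher-is-better and all names are illustrative.

```python
import numpy as np

def violation(g_values):
    """Total constraint violation for constraints g_i(x) <= 0."""
    return np.sum(np.maximum(g_values, 0.0))

def better_leader(fit_a, g_a, fit_b, g_b):
    """Return True if particle A beats particle B as a leader."""
    feas_a = violation(g_a) == 0.0
    feas_b = violation(g_b) == 0.0
    if feas_a and feas_b:
        return fit_a > fit_b                     # both feasible: better fitness wins
    if feas_a != feas_b:
        return feas_a                            # feasible beats infeasible
    return violation(g_a) < violation(g_b)       # both infeasible: closer to feasible region wins
```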

Comparison of Different Optimization Algorithms
In this section, we compare EQPSO with CMA-ES, GA, KH, an enhanced KH from paper [21] (denoted KH-E), MBO, HS and APOA. All of these methods were used to optimize the test functions of Table 1. For GA, KH, KH-E, MBO, HS, APOA and EQPSO, the population size was 20 and the maximum number of iterations was 1000. The other parameters of each algorithm were as follows:
CMA-ES: SIGMA determines the initial coordinate-wise standard deviations for the search, and we set SIGMA to one third of the initial search region.
GA: the selection function was stochastic uniform, the crossover function was intermediate, and the mutation function was Gaussian; the crossover fraction was 0.8.
KH-E: the settings of KH-E were the same as in paper [21].
MBO: the BAR value was equal to the percentage of population for MBO, and we randomly divided the whole population into population1 and population2.
HS: the harmony memory considering rate was 0.95 and the pitch adjustment rate was 0.3.
APOA: the value of the phototropism operator was 0.1.
All programs were run ten times, and the best and mean results were used to evaluate the eight methods; the results are shown in Table 11. As can be seen from Table 11, for function f7 the results found by CMA-ES are better than those of EQPSO; for all other functions, the results of EQPSO are better than those of the other methods.

Statistical Test
Finally, the sign test, a well-known statistical test, was adopted to verify the significance of the results found by EQPSO. In this method, the overall performance of the algorithms is based on the number of cases in which one algorithm is the overall winner. A detailed introduction to the sign test can be found in paper [27]. Table 12 shows the critical number of wins needed to achieve significance at both the α = 0.05 and α = 0.1 levels; an algorithm is significantly better than another if it performs better in at least the number of cases given in the corresponding row of Table 12. The minimum values of 10 test functions (all functions of Tables 1 and 9) found by PSO, SPSO, SQPSO, M1, M2, M3 and EQPSO (swarm size 80, maximum number of iterations 3000) were analyzed by the sign test, and the results are shown in Table 13. The minimum values of the 7 test functions of Table 1 found by CMA-ES, GA, KH, KH-E, MBO, APOA, HS and EQPSO were also analyzed by the sign test, and the results are shown in Table 14.
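For illustration, critical numbers of wins like those in Table 12 can be derived from the binomial null distribution of the plain sign test: under the null hypothesis, each of two algorithms wins a case with probability 0.5, so the critical value is the smallest k with P(Binomial(n, 0.5) ≥ k) ≤ α. The sketch below shows this computation; it is an assumption about how such a table is built, since ties can be handled differently in practice.

```python
from scipy.stats import binom

def critical_wins(n_cases, alpha):
    """Smallest number of wins k out of n_cases with P(X >= k) <= alpha under Binomial(n, 0.5)."""
    for k in range(n_cases + 1):
        # sf(k - 1) = P(X >= k) for the binomial null distribution
        if binom.sf(k - 1, n_cases, 0.5) <= alpha:
            return k
    return None

# Usage: e.g. 10 test functions at the alpha = 0.05 level.
print(critical_wins(10, 0.05))   # -> 9 wins needed
```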

Conclusions
The QPSO algorithm shows good performance in terms of search ability, convergence speed, solution accuracy and robustness. However, it cannot be guaranteed that the traditional QPSO will find the global optimum when the number of iterations is limited.
We propose a novel way of computing the local attractor of QPSO to improve its global searching ability in this case. With the proposed computation, we can guarantee that the particles are diversiform at the early stage of the iterations, and ensure good local searching ability at the later stage. The results on the test functions confirm that the proposed method has better global searching ability.
In this paper, we improve the optimization performance of EQPSO by controlling the iterative process; the initial distribution of the particles also influences the optimization ability of EQPSO. In future studies we will focus on finding an effective way to initialize the particles of EQPSO, and we believe its optimization performance can be further improved through this research.

Figure 3. Probability of β1 > β2 with two different ways of computing the local attractor.
Note for the result tables: there are two numbers in each cell; the first number is the mean of the 10 minimum values, and the second number is the minimum value found by each optimization method during the 10 runs.

Table 7. Optimized results of the Schwefel's Problem function found by different PSO algorithms.

Figure 4. Convergence speed of different PSO algorithms when used to optimize the Sphere Model function.

Figure 5. Convergence speed of different PSO algorithms when used to optimize the Generalized Rastrigin function.
Figure 6. Convergence speed of different PSO algorithms when used to optimize the Griewank function.
Figure 7. Convergence speed of different PSO algorithms when used to optimize the Ackley function.

Table 2. Optimized results of the Sphere Model function found by different PSO algorithms.

Table 3. Optimized results of the Generalized Rastrigin function found by different PSO algorithms.

Table 4. Optimized results of the Griewank function found by different PSO algorithms.

Table 5. Optimized results of the Ackley function found by different PSO algorithms.

Table 6. Optimized results of the Alpine function found by different PSO algorithms.

Table 8. Optimized results of the Generalized Rosenbrock function found by different PSO algorithms.

Table 9. Test functions with constraints.

Table 10. Optimized results of g07, g09 and g10 found by different PSO algorithms.

Table 11. Optimization results of different optimization methods.

Table 12. Critical values for the sign test.

Table 13. Results of the sign test performed on EQPSO (different PSO algorithms).

Table 14. Results of the sign test performed on EQPSO (different optimization algorithms). As we can see, at the α = 0.05 level EQPSO shows a significant improvement over the other six PSO methods, GA, KH, KH-E, MBO, APOA and HS. For CMA-ES, EQPSO shows a significant improvement at the α = 0.1 level.