DMO-QPSO: A multi-objective quantum-behaved particle swarm optimization algorithm based on decomposition with diversity control

Abstract: The decomposition-based multi-objective evolutionary algorithm (MOEA/D) has shown remarkable effectiveness in solving multi-objective problems (MOPs). In this paper, we integrate the quantum-behaved particle swarm optimization (QPSO) algorithm with the MOEA/D framework so that QPSO can solve MOPs effectively while its advantages are fully exploited. We also employ a diversity-controlling mechanism to avoid premature convergence, especially at the later stage of the search process, and thus further improve the performance of the proposed algorithm. In addition, we use a set of nondominated solutions to generate the global best for guiding the other particles in the swarm. Experiments are conducted to compare the proposed algorithm, DMO-QPSO, with four multi-objective particle swarm optimization algorithms and one multi-objective evolutionary algorithm on 15 test functions, including both bi-objective and tri-objective problems. The results show that DMO-QPSO outperforms the other five algorithms on most of these test problems. Moreover, we further study the impact of two different decomposition approaches, i.e., the penalty-based boundary intersection (PBI) and Tchebycheff (TCH) approaches, as well as the polynomial mutation operator, on the algorithmic performance of DMO-QPSO.


Introduction
The particle swarm optimization (PSO) algorithm, originally proposed by Kennedy and Eberhart in 1995, is a population-based metaheuristic that imitates the social behavior of bird flocking [1]. In PSO, each particle is treated as a potential solution, and all particles follow their own experiences and the current optimal particle to fly through the solution space. As it requires fewer parameters to adjust and can be easily implemented, PSO has been rapidly developed for solving real-world optimization problems, including circuit design [2], job scheduling [3], data mining [4], path planning [5,6] and protein-ligand docking [7]. In 1999, Moore and Chapman extended PSO to solve multi-objective problems (MOPs) for the first time [8]. Since then, great interest has been aroused among researchers from different communities in tackling MOPs with PSO. For example, Coello and Lechuga [9] introduced a proposal for multi-objective PSO, denoted MOPSO, which determines particles' flight directions by using the concept of Pareto dominance and adopts a global repository to store previously found nondominated solutions. Later, in 2004, Coello et al. [10] presented an enhanced version of MOPSO which employs a mutation operator and a constraint-handling mechanism to improve the algorithmic performance of the original MOPSO. Raquel and Prospero [11] proposed the MOPSO-CD algorithm, which selects the global best and updates the external archive of nondominated solutions by calculating the crowding distance of particles. In 2008, Peng and Zhang developed a new MOPSO algorithm adopting a decomposition approach, called MOPSO/D [12]. It is based on the MOEA/D framework [13], which converts an MOP into a number of single-objective optimization sub-problems and then solves all these sub-problems simultaneously. In MOPSO/D, a particle's global best is defined by the solutions located within a certain neighborhood. Moubayed et al. [14] proposed a smart MOPSO based on decomposition (SDMOPSO) that realizes information exchange between neighboring particles with fewer objective function evaluations and stores the leaders of the whole particle swarm in a crowding archive. dMOPSO, proposed by Martinez and Coello [15], selects the global best from a set of solutions according to the decomposition approach and thus updates each particle's position. Moubayed et al. [16] developed D2MOPSO, which hybridizes dominance and decomposition and introduces an archiving technique based on crowding distance. Several other MOPSOs proposed in recent years have also proved effective in solving complex MOPs, such as MPSO/D [17], MMOPSO [18], AgMOPSO [19], CMOPSO [20] and CMaPSO [21].
The quantum-behaved PSO (QPSO), proposed by Sun et al. [22], is a variant of PSO inspired by quantum mechanics and the trajectory analysis of PSO. The trajectory analysis clarified that each particle in PSO is in a bound state; specifically, each particle oscillates around and converges to its local attractor [23]. In QPSO, the particle is assumed to have quantum behavior and, further, to be attracted by a quantum delta potential well centered on its local attractor. Additionally, the concept of the mean best position was defined and employed in this algorithm to update particles' positions. In terms of the update equation, which is different from that of PSO, QPSO has no velocity vector for each particle to update, and it requires fewer parameters to adjust [24]. Due to these advantages, we incorporate QPSO into the original MOEA/D framework for the purpose of obtaining a more effective algorithm for solving MOPs than other decomposition-based MOPSOs that use the canonical PSO.
QPSO and other PSO variants generally have a fast convergence speed owing to the rich information exchange among particles. This is why such algorithms are more efficient at solving optimization problems than other population-based random search algorithms. Fast convergence, however, implies a rapid diversity decline, which is desirable for finding satisfying solutions quickly during the early stage of the search process. A rapid diversity decline during the later stage, by contrast, results in the aggregation of particles around the global best position and, in turn, the stagnation of the whole particle swarm (i.e., premature convergence).
Diversity maintenance is also essential when extending PSO to solve MOPs. During the past decade, researchers have done much work on developing novel techniques to maintain diversity in their MOPSOs. For example, Qiu et al. [25] introduced a novel global best selection method, based on proportional distribution and the K-means algorithm, to make particles converge to the Pareto front quickly while maintaining diversity. Cheng et al. [26] presented the IMOPSO-PS algorithm, in which a preference strategy is applied for optimal distributed generation (DG) integration into the distribution system. This algorithm uses a circular nondominated selection of particles from one iteration to the next and performs mutation on particles to enhance the swarm diversity during the search process.
In this paper, we propose a multi-objective quantum-behaved particle swarm optimization algorithm based on decomposition, named DMO-QPSO, which integrates QPSO with the original MOEA/D framework and uses a strategy of diversity control. As in the literature [12,13], a neighborhood relationship is defined according to the distances between the weight vectors of different sub-problems, and each sub-problem is solved utilizing information only from its neighboring sub-problems. However, as the number of iterations increases, the current best solutions to the neighbors of a sub-problem may get close to each other. This may result in a diversity loss in the population produced in the next iteration, particularly at the later stage of the search process. Therefore, DMO-QPSO does not adopt the neighborhood relationship described in the framework of MOEA/D and MOPSO/D. Instead, we introduce a two-phased diversity-controlling mechanism that makes particles alternate between attraction and explosion states according to the swarm diversity: particles move through the search space in the attraction phase unless the swarm diversity declines to a threshold value, which triggers the explosion phase. Additionally, unlike MOPSO/D, in which the global best is updated according to a decomposition approach, the proposed DMO-QPSO uses a vector set to store a pre-defined number of nondominated solutions and then randomly picks one as the current global best, so that every solution in this set has a chance to guide the movement of the whole particle swarm. The penalty-based boundary intersection (PBI) approach [27] is used in the algorithm owing to its advantage over other decomposition methods, including the weighted sum (WS) and the Tchebycheff (TCH) approaches [13].
The rest of this paper is organized as follows. Some preliminaries on MOPs, PSO, QPSO and the MOEA/D framework are given in Section 2. Section 3 describes the procedure of the proposed DMO-QPSO algorithm in detail. Section 4 presents the experimental results and analysis. Some further discussion of DMO-QPSO is provided in Section 5. Finally, the paper is concluded in the last section.

Preliminaries
In this section, we first state the definition of MOPs and then describe the basic principles of the canonical PSO and QPSO. After that, some of the most commonly used decomposition methods and the original MOEA/D framework are presented.

Multi-Objective Optimization
A multi-objective optimization problem (MOP) can be stated as follows:

minimize F(x) = (f_1(x), f_2(x), . . ., f_m(x))^T, subject to x ∈ Ω, (1)

where x is the decision variable vector, Ω is the decision (variable) space, and m is the number of real-valued objective functions. F : Ω → R^m is the objective function vector, where R^m is the objective space. The objectives in an MOP are mutually conflicting, so no single solution can minimize all the objectives at the same time: improvement of one objective may lead to deterioration of another. In this situation, the Pareto optimal solutions become the best trade-offs among the different objectives. Therefore, most multi-objective optimization algorithms are designed to find a finite number of Pareto optimal solutions that approximate the Pareto front (PF) and serve as good representatives of the whole PF [28-31]. In order to better understand the concept of Pareto optimality [32], some definitions are provided as follows.
Definition 1. Let x = (x_1, . . ., x_m), y = (y_1, . . ., y_m) ∈ R^m. Then x dominates y, denoted x ≺ y, if and only if x_i ≤ y_i for all i = 1, . . ., m and x ≠ y.

Definition 2. Let x* ∈ Ω. If no solution x exists in Ω such that F(x) dominates F(x*), then x* is a Pareto optimal solution to the MOP in Equation (1), and F(x*) is a Pareto optimal (objective) vector. The set of all Pareto optimal solutions is called the Pareto set (PS), denoted by PS = {x ∈ Ω | x is a Pareto optimal solution}. The set of all Pareto optimal (objective) vectors is called the Pareto front (PF), denoted by PF = {F(x) | x ∈ PS}.
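For concreteness, Definition 1 can be sketched as a short Python check (an illustrative helper of ours, not code from the paper):

```python
def dominates(x, y):
    """Return True if objective vector x Pareto-dominates y (Definition 1):
    x_i <= y_i for every objective i, and the vectors are not identical."""
    return (all(a <= b for a, b in zip(x, y))
            and any(a < b for a, b in zip(x, y)))

print(dominates([1.0, 2.0], [1.0, 3.0]))  # True: no worse anywhere, better on f2
print(dominates([1.0, 2.0], [2.0, 1.0]))  # False: the two vectors are incomparable
```

Incomparable vectors, as in the second call, are exactly what makes a whole front of trade-off solutions necessary.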

Particle Swarm Optimization
In the canonical PSO algorithm with N particles, each particle i (i = 1, . . ., N) has a position vector X_i = (X_{i,1}, . . ., X_{i,D}) and a velocity vector V_i = (V_{i,1}, . . ., V_{i,D}), where D is the dimension of the search space. During each iteration t, particle i is updated according to its personal best (pbest) position P_i = (P_{i,1}, . . ., P_{i,D}) and the global best (gbest) position P_g = (P_{g,1}, . . ., P_{g,D}) found by the whole swarm:

V_{i,j}(t+1) = w V_{i,j}(t) + c_1 r_1 (P_{i,j}(t) − X_{i,j}(t)) + c_2 r_2 (P_{g,j}(t) − X_{i,j}(t)),
X_{i,j}(t+1) = X_{i,j}(t) + V_{i,j}(t+1),

for j = 1, . . ., D, where w is the inertia weight, c_1 and c_2 are acceleration coefficients, and r_1, r_2 are random numbers uniformly distributed on (0, 1).
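As the canonical update is standard, a minimal Python sketch of one PSO step for a single particle may help (the function name and the default values of w, c1, c2 are common choices from the PSO literature, not parameters taken from this paper):

```python
import random

def pso_step(X, V, P, Pg, w=0.7298, c1=1.4962, c2=1.4962):
    """One canonical PSO update for a single particle.
    X, V: current position and velocity; P: personal best; Pg: global best."""
    newX, newV = [], []
    for j in range(len(X)):
        r1, r2 = random.random(), random.random()
        # velocity update: inertia + cognitive pull + social pull
        v = w * V[j] + c1 * r1 * (P[j] - X[j]) + c2 * r2 * (Pg[j] - X[j])
        newV.append(v)
        newX.append(X[j] + v)  # position update
    return newX, newV
```

In a full algorithm this step runs for every particle each iteration, after which pbest and gbest are refreshed against the new positions.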

Quantum-Behaved Particle Swarm Optimization
In the quantum-behaved PSO (QPSO) algorithm with N particles, each particle i (i = 1, . . ., N) has a position vector X_i = (X_{i,1}, . . ., X_{i,D}) and a personal best (pbest) position P_i = (P_{i,1}, . . ., P_{i,D}), where D is the dimension of the search space. During each iteration t, particle i is updated as follows:

X_{i,j}(t+1) = q_{i,j}(t) ± α |C_j(t) − X_{i,j}(t)| ln(1/u_{i,j}),

for j = 1, 2, . . ., D, where u_{i,j} is a random number uniformly distributed on (0, 1), and q_i = (q_{i,1}, . . ., q_{i,D}) is the local attractor of particle i, calculated by

q_{i,j}(t) = ϕ P_{i,j}(t) + (1 − ϕ) P_{g,j}(t),

where ϕ is a random number uniformly distributed on (0, 1) and P_g = (P_{g,1}, . . ., P_{g,D}) is the global best (gbest) position found by the particles during the search process. The contraction-expansion coefficient α is designed to control the convergence speed of the QPSO algorithm. C = (C_1, . . ., C_D) is the mean of the personal best positions of all the particles, namely the mbest position, calculated as

C_j(t) = (1/N) ∑_{i=1}^{N} P_{i,j}(t).
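A minimal sketch of this position update (the function name is ours; the equations follow the standard QPSO formulation of Sun et al., with the ± sign chosen at random per dimension):

```python
import math
import random

def qpso_step(X, pbests, i, Pg, alpha):
    """One QPSO position update for particle i.
    pbests: list of all N personal best positions; Pg: global best;
    alpha: contraction-expansion coefficient."""
    D, N = len(X), len(pbests)
    # mbest position C: per-dimension mean of all personal bests
    C = [sum(p[j] for p in pbests) / N for j in range(D)]
    newX = []
    for j in range(D):
        phi = random.random()
        u = 1.0 - random.random()  # u in (0, 1], keeps ln(1/u) finite
        # local attractor between pbest and gbest
        q = phi * pbests[i][j] + (1.0 - phi) * Pg[j]
        sign = 1.0 if random.random() < 0.5 else -1.0
        newX.append(q + sign * alpha * abs(C[j] - X[j]) * math.log(1.0 / u))
    return newX
```

Note that no velocity vector appears anywhere, which is the structural difference from the canonical PSO update.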

The Decomposition Approaches
In state-of-the-art decomposition-based multi-objective optimization algorithms, the most commonly used decomposition approaches are the Tchebycheff (TCH), the weighted sum (WS), and the penalty-based boundary intersection (PBI) approaches [13,28]. These methods decompose an MOP into a finite group of single-objective optimization sub-problems, so that a certain algorithm can solve these sub-problems effectively and efficiently. Let λ^i = (λ^i_1, . . ., λ^i_m)^T be the weight vector for the ith sub-problem (i = 1, . . ., N), satisfying ∑_{j=1}^{m} λ^i_j = 1 and λ^i_j > 0 for all j = 1, 2, . . ., m, and let z* = (z*_1, . . ., z*_m)^T be a reference point. Below are the definitions of the TCH and PBI approaches, which will be used later in this paper.

• Tchebycheff (TCH) approach: sub-problem i is defined as

minimize g^tch(x | λ^i, z*) = max_{1≤j≤m} { λ^i_j |f_j(x) − z*_j| }.

• Penalty-based boundary intersection (PBI) approach: sub-problem i is defined as

minimize g^pbi(x | λ^i, z*) = d_1 + θ d_2,

where d_1 = ‖(F(x) − z*)^T λ^i‖ / ‖λ^i‖, d_2 = ‖F(x) − z* − d_1 (λ^i / ‖λ^i‖)‖, ‖·‖ denotes the L_2-norm, and θ > 0 is a penalty parameter.
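The two scalarizing functions above can be sketched in Python as follows (the function names `tch` and `pbi` are ours, not from the paper; the default θ matches the value 5.0 used in the experiments of Section 4.2):

```python
import math

def tch(F, lam, z):
    """Tchebycheff scalarization: max_j lam_j * |f_j(x) - z*_j|.
    F: objective vector F(x); lam: weight vector; z: reference point z*."""
    return max(l * abs(f - zj) for f, l, zj in zip(F, lam, z))

def pbi(F, lam, z, theta=5.0):
    """Penalty-based boundary intersection: d1 + theta * d2, where d1 is the
    projection of F(x) - z* onto the weight direction and d2 the distance
    from F(x) - z* to that direction."""
    norm = math.sqrt(sum(l * l for l in lam))
    d1 = abs(sum((f - zj) * l for f, l, zj in zip(F, lam, z))) / norm
    d2 = math.sqrt(sum((f - zj - d1 * l / norm) ** 2
                       for f, l, zj in zip(F, lam, z)))
    return d1 + theta * d2
```

When F(x) − z* lies exactly on the weight direction, d2 vanishes and PBI reduces to the pure distance d1 along that direction.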

MOEA/D
MOEA/D divides an MOP into N single-objective optimization sub-problems and attempts to optimize all of them simultaneously rather than directly solving the MOP. These sub-problems are linked together by their neighborhoods. The neighborhood of sub-problem i is defined as the set of sub-problems whose weight vectors are the T closest to its weight vector λ^i; thus, the neighborhood size of sub-problem i is T.
The MOEA/D algorithm maintains a population of N solutions x^1, . . ., x^N ∈ Ω, where x^i (i = 1, . . ., N) is a feasible solution to sub-problem i. FV_i is the F-value (i.e., the fitness value) of x^i, that is, FV_i = F(x^i). z = (z_1, . . ., z_m)^T is a reference point, where z_j (j = 1, . . ., m) is the minimal value of objective f_j found so far. EP is an external population used to store the nondominated solutions found during the search process. The main framework of MOEA/D is described in Algorithm 1.

Algorithm 1: The framework of MOEA/D
1: Initialize the external population EP = ∅;
2: Generate N uniformly distributed weight vectors λ^1, . . ., λ^N;
3: For each i = 1, . . ., N, successively select the T weight vectors closest to λ^i and store their indexes in B(i) = {i_1, . . ., i_T};
4: Generate an initial population randomly;
5: Evaluate FV_i, i = 1, . . ., N;
6: Initialize z = (z_1, . . ., z_m)^T;
7: while the termination criterion is not fulfilled do
8:   for i = 1, . . ., N do
9:     Select two indexes k, l randomly from B(i);
10:    Use the genetic operators to produce a new solution y from x^k and x^l;
11:    Repair y;
12:    for j = 1, . . ., m do
13:      if z_j > f_j(y) then
14:        z_j = f_j(y);
15:      end if
16:    end for
17:    for each j ∈ B(i) do
18:      if g(y | λ^j, z) ≤ g(x^j | λ^j, z) then
19:        x^j = y; FV_j = F(y);
20:      end if
21:    end for
22:  end for
23:  Update EP;
24: end while

The Proposed DMO-QPSO
In this section, we present DMO-QPSO, an improved multi-objective quantum-behaved particle swarm optimization algorithm based on decomposition. It integrates the QPSO algorithm with the MOEA/D framework and adopts a mechanism to control the swarm diversity during the search process, so as to avoid premature convergence and escape local optimal areas with a higher probability.
At the beginning of DMO-QPSO, we define a set of well-distributed weight vectors and then use a certain approach to decompose the original MOP into a group of single-objective sub-problems. More precisely, let λ^1, λ^2, . . ., λ^N be the weight vectors; the PBI approach is employed in this paper owing to its advantage over other decomposition approaches.
In DMO-QPSO, the swarm P with N particles is randomly initialized. Each particle i has a position vector X_i and a personal best position P_i, with P_i initially set equal to X_i. The mean best position of all particles can then be easily obtained according to Equation (6). The global best position P_g is produced in a natural way according to the Pareto dominance relationship among different personal best positions. More specifically, we first define a vector set GS whose size, denoted nGS, is pre-set. Then, the fast nondominated sorting approach [33] is applied to sort the set of all the personal best positions together with GS. The lower the nondomination rank of a solution, the better it is. Therefore, we only select the solutions in the lower nondomination ranks and store them in GS. All of the solutions in GS are regarded as candidates for the global best employed in the next iteration for updating particles' positions.
It should be noted that the neighborhood in the original MOEA/D framework is formed according to the distances between the weight vectors of different sub-problems. That is to say, the neighborhood of a sub-problem includes all sub-problems with the closest weight vectors. Hence, on the basis of this definition, solutions to neighboring sub-problems would be close in the decision space. This may enable the algorithm to converge faster at the early stage but brings the risk of diversity loss and premature convergence at the later stage. For this reason, DMO-QPSO does not adopt the neighborhood relationship stated in the original MOEA/D framework.
Furthermore, we measure the swarm diversity during the search process and make the swarm alternate between two phases, i.e., attraction and explosion, according to its diversity. At each iteration, the diversity of the particle swarm is calculated as

diversity(P) = (1 / (N · |A|)) ∑_{i=1}^{N} √( ∑_{j=1}^{D} (X_{i,j} − X̄_j)^2 ),

where D is the dimensionality of the problem, |A| is the length of the longest diagonal in the search space, P is the particle swarm, |P| = N is the population size, X_{i,j} is the jth component of particle i, and X̄_j is the jth component of the average point. According to the literature [34,35], a particle converges when the contraction-expansion coefficient α is less than 1.778 and diverges otherwise. Therefore, we set a threshold, denoted dlow, on the swarm diversity.
When the diversity drops below dlow (i.e., in the explosion phase), α is reset to a constant α_0 larger than 1.778 to make particles diverge and thus increase the swarm diversity. Otherwise, α decreases linearly within the predefined interval [a, b] (i.e., in the attraction phase). Like MOEA/D, DMO-QPSO uses an external population EP to store the nondominated solutions found during the search process. In each iteration, we check the Pareto dominance relationship between the newly generated solutions and the solutions in EP: solutions in EP dominated by a newly generated solution are removed from EP, and a newly generated solution is added to EP if no solution in EP dominates it. The main process of the DMO-QPSO algorithm is presented in Algorithm 2.

Experimental Studies
This section presents the experiments conducted to investigate the performance of the proposed DMO-QPSO algorithm. First, we introduce the set of MOPs used as benchmark functions. Next, the parameter settings for the different algorithms and two performance metrics are described in detail. Finally, the comparison experiments and results analysis are presented. More precisely, we compared DMO-QPSO with two recently proposed multi-objective PSOs (i.e., MMOPSO and CMOPSO) and three other multi-objective optimization algorithms, namely, MOPSO, MOPSO/D and MOEA/D-DE [36]. The PBI approach is used in the four decomposition-based algorithms (i.e., DMO-QPSO, MOPSO/D, MOEA/D-DE and MMOPSO).

Test Functions
We selected 15 test functions whose PFs have different characteristics, including concavity, convexity, multi-frontality and disconnection. Twelve of these test functions are bi-objective (F1, F2, F3, F4, F5, F7, F8 and F9 from the F test set [36], and UF4, UF5, UF6 and UF7 from the UF test set [37]) and the rest are tri-objective (F6 from the F test set, and UF9 and UF10 from the UF test set). As shown in references [36,37], the F and UF test sets are two sets of test instances for facilitating the study of the ability of MOEAs to solve problems with complicated PS shapes. We used 30 decision variables for the UF test set, for problems F1 to F5, and for F9; problems F6 to F8 were tested using 10 decision variables.

Parameter Setting
The setting of the weight vectors λ^1, λ^2, . . ., λ^N is decided by an integer H [36]. More precisely, each individual weight in λ^1, λ^2, . . ., λ^N takes a value from {0/H, 1/H, . . ., H/H}. Therefore, the population size N is given by N = C(H + m − 1, m − 1), where m is the number of objectives. H was set to 299 for the bi-objective test functions and 33 for the tri-objective ones; consequently, the population size N was 300 for the bi-objective test functions and 595 for the tri-objective ones. The maximal number of iterations MaxIter was 500, and each algorithm was run 30 independent times on each test function. The size of the external population, nEP, was set to 100. Besides, the penalty factor θ in PBI was 5.0.
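The lattice construction and the resulting population sizes can be checked with a short sketch (the generator below is our illustration, not the authors' code; each tuple of integers is divided by H to obtain an actual weight vector):

```python
def weight_vectors(H, m):
    """Yield all m-tuples of nonnegative integers summing to H; dividing each
    component by H gives a weight vector on the simplex lattice."""
    if m == 1:
        yield (H,)
        return
    for h in range(H + 1):
        for rest in weight_vectors(H - h, m - 1):
            yield (h,) + rest

# actual weight vectors for the tri-objective case (H = 33, m = 3)
lambdas = [tuple(h / 33 for h in w) for w in weight_vectors(33, 3)]
```

The counts reproduce the combinatorial formula N = C(H + m − 1, m − 1): C(300, 1) = 300 for H = 299, m = 2, and C(35, 2) = 595 for H = 33, m = 3.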
For DMO-QPSO, the size of GS was 10, and the contraction-expansion coefficient α in the attraction phase varied linearly from 1.0 to 0.5.The value of the lower bound of diversity dlow was set to be 0.05.When the diversity drops below dlow, we set the parameter α = α 0 = 2.0.
The details are listed in Table 1.
Table 1. Parameter settings for the different algorithms.

Performance Metrics
In our experiments, the following performance metrics were used.

• The inverted generational distance (IGD) [36]: it estimates the distance between the elements in a set of nondominated vectors and those in the Pareto optimal set, and can be stated as

IGD(P*, P) = ( ∑_{v∈P*} d(v, P) ) / |P*|,

where P* is a set of points evenly distributed in the objective space along the PF, and P is an approximation to the PF. d(v, P) is the minimal Euclidean distance between v and the points in P, and |P*| is the size of P*. P must be as close as possible to the PF of a given test problem and cannot miss any part of the whole PF in order to obtain a low value of IGD(P*, P). The IGD can reflect both the diversity and the convergence of the obtained solutions with respect to the real PF.
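The IGD computation is straightforward to sketch (an illustrative helper of ours, not the authors' code):

```python
import math

def igd(P_star, P):
    """IGD(P*, P): average Euclidean distance from each reference point v in
    P* to its nearest point in the approximation P."""
    def d(v, S):
        return min(math.dist(v, p) for p in S)
    return sum(d(v, P) for v in P_star) / len(P_star)
```

Because every reference point contributes, an approximation that misses a region of the PF is penalized even if its points are individually well converged.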

• Coverage (C-metric) [13]: it can be stated as

C(A, B) = |{u ∈ B | ∃ v ∈ A : v dominates u}| / |B|,

where A and B are two approximations to the real PF of an MOP, and C(A, B) is the proportion of the solutions in B that are dominated by at least one solution in A.
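Similarly, the C-metric can be sketched as follows (with its own dominance check so the snippet is self-contained; names are ours):

```python
def dominates(u, v):
    """Pareto dominance on objective vectors: u no worse everywhere,
    strictly better somewhere."""
    return (all(a <= b for a, b in zip(u, v))
            and any(a < b for a, b in zip(u, v)))

def c_metric(A, B):
    """C(A, B): fraction of solutions in B dominated by at least one in A."""
    dominated = sum(1 for b in B if any(dominates(a, b) for a in A))
    return dominated / len(B)
```

Note that the C-metric is not symmetric: C(A, B) and C(B, A) must both be reported to compare two fronts, which is why Tables 5-9 present it pairwise.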

Results and Discussion
Tables 2 and 3 present the average, minimum and standard deviation (SD) of the IGD values of 30 final populations on different test functions produced by MOPSO, MOPSO/D, MOEA/D-DE, MMOPSO, CMOPSO and DMO-QPSO. It is clear that the proposed DMO-QPSO performed better than the other five algorithms on most of the test problems: it yielded the best mean IGD values on all the problems except F7, F8, UF4, UF6 and UF10. According to Tables 2 and 3, both the mean and minimal IGD values on F7, F8, UF4 and UF10 obtained by DMO-QPSO are worse than those obtained by MOEA/D-DE. The performance of DMO-QPSO is still acceptable on UF6: its mean IGD value is slightly worse than that of MMOPSO, but DMO-QPSO obtained the lowest minimal IGD value among all of these algorithms. Besides, it is obvious that MOPSO/D performed the worst on almost all of the test problems except F6 and UF4.
In addition, the statistics of the Wilcoxon rank sum tests in Tables 2 and 3 also indicate that DMO-QPSO outperformed the other five algorithms. MMOPSO was the second best on the F problems and the third best on the UF problems, while CMOPSO was the second worst on both the F and UF problems. Table 4 gives the total ranks of these algorithms on the F and UF problems, with their final ranks in the last column. As shown in this table, MOEA/D-DE is the second-best algorithm, followed by MMOPSO; in contrast, MOPSO/D is the worst algorithm, followed by CMOPSO. The average C-metric values, shown in Tables 5-9, confirm the results above: the final solutions obtained by DMO-QPSO are better than those obtained by MOPSO, MOPSO/D, MOEA/D-DE, MMOPSO and CMOPSO on most of the test functions. The trial runs with the lowest IGD values on the 15 test functions produced by MOPSO, MOPSO/D, MOEA/D-DE, MMOPSO, CMOPSO and DMO-QPSO were selected, respectively, and plotted in Figures 1 and 2.
Figures 1 and 2 show the evolution of the IGD values of the different algorithms versus the number of iterations on the F and UF problems, and the results in these figures are consistent with those in Tables 2 and 3. For the F problems, DMO-QPSO performed the best except on F6, F7 and F8. As shown in Figure 1, MOEA/D-DE was the best on F7 and F8; its IGD values drop quickly at the early stage and then converge to values close to 0.08 and 0.20, respectively. MMOPSO obtained the second minimal IGD value on F8, followed by DMO-QPSO, and its IGD value declines even faster than that of MOEA/D-DE during the first 200 iteration steps. On F6, the IGD value of DMO-QPSO fluctuates for about 400 iteration steps during the search process and then reaches a value (0.1634) just slightly larger than that of MMOPSO (0.1632), which may be related to the variation of the swarm diversity. MOPSO/D had the worst performance on all of the F problems except F6. For the UF problems, DMO-QPSO again performed the best except on UF4 and UF10. MOEA/D-DE was the best on UF4; its IGD value decreases rapidly to just below 0.1, while that of DMO-QPSO remains above 0.1. By contrast, MMOPSO had the worst performance on UF4, with an IGD value fluctuating significantly between 0.5 and 0.9. On UF10, MOEA/D-DE and MOPSO had similar performance: their IGD values decrease rapidly during the first 80 iteration steps and then gradually converge to values around 1.0. DMO-QPSO was the third best, followed by MMOPSO. MOPSO/D had the worst performance on all of the UF problems except UF4. Additionally, from Figures 1 and 2 we can observe that, within the same number of iteration steps, the IGD value of DMO-QPSO declines slowly at the early stage on several test problems compared to MOEA/D-DE but reaches a much smaller value at the later stage. Given more iterations, DMO-QPSO may obtain much better results than the other compared algorithms.
In summary, DMO-QPSO performs better than the other tested algorithms on most of the test functions, and it is promising for solving MOPs with complicated PS shapes.

Further Discussions
In this section, we further study the impact of different decomposition approaches (i.e., TCH and PBI) and of the polynomial mutation on DMO-QPSO. Comparison experiments were conducted for this purpose.

The Impact of Different Decomposition Approaches
As described in Section 2.4, TCH and PBI are two commonly used approaches in decomposition-based multi-objective optimization algorithms. In terms of solution uniformness, the TCH approach may perform worse than the PBI approach, especially for problems having more than two objectives. Therefore, we tested MOPSO/D, MOEA/D-DE and DMO-QPSO using TCH or PBI as the decomposition method, respectively. The parameter settings for each algorithm are the same as those presented in Section 4.2, and the results are reported in Tables 10 and 11. According to Tables 10 and 11, the algorithms MOPSO/D and DMO-QPSO performed better with PBI than with TCH, particularly when solving tri-objective problems. For DMO-QPSO, using PBI could, to some extent, help the algorithm acquire better Pareto optimal solutions to approximate the entire PF. However, applying PBI to MOEA/D-DE does not show significant improvement over MOEA/D-DE using TCH, which may be related to the characteristics of the DE operators employed in MOEA/D-DE.

The Impact of Polynomial Mutation
Polynomial mutation is adopted in MOPSO/D, MOEA/D-DE, MMOPSO and CMOPSO for producing new solutions as well as maintaining the population diversity, as stated in the literature [12,18,20,36]. In order to investigate the impact of polynomial mutation on DMO-QPSO, we tested four DMO-QPSO variants, i.e., DMO-QPSO, DMO-QPSO-TCH, DMO-QPSO-pm and DMO-QPSO-TCH-pm, on different problems. DMO-QPSO and DMO-QPSO-TCH use PBI and TCH as the decomposition method, respectively; DMO-QPSO-pm and DMO-QPSO-TCH-pm are variants that apply the polynomial mutation operator on the basis of DMO-QPSO and DMO-QPSO-TCH, respectively. The parameter settings for each algorithm are the same as those presented in Section 4.2. Tables 12 and 13 present the average, minimum and standard deviation (SD) of the IGD values of 30 final populations on different test functions produced by the different variants of DMO-QPSO. As we can see from Tables 12 and 13, the variants using PBI (DMO-QPSO and DMO-QPSO-pm) outperformed the variants using TCH (DMO-QPSO-TCH and DMO-QPSO-TCH-pm) on most of the test problems, which also confirms the conclusion presented in Section 5.1. Besides, DMO-QPSO-pm shows an advantage over DMO-QPSO on several test problems. It should be pointed out that adopting the PBI approach and the polynomial mutation at the same time can effectively improve the algorithmic performance of DMO-QPSO.

Conclusions
This paper has proposed a multi-objective quantum-behaved particle swarm optimization algorithm based on decomposition, named DMO-QPSO, which integrates the QPSO algorithm with the original MOEA/D framework and adopts a strategy to control the swarm diversity. Without using the neighborhood relationship defined in the MOEA/D framework, we employed a two-phased diversity-controlling mechanism to avoid premature convergence and make the algorithm escape sub-optimal areas with a higher probability. In addition, we used a set of nondominated solutions to produce the global best for updating the particles' positions. Comparison experiments were carried out among six algorithms, namely, MOPSO, MOPSO/D, MOEA/D-DE, MMOPSO, CMOPSO and DMO-QPSO, on 15 test functions with complicated PS shapes. The experimental results show that the proposed DMO-QPSO algorithm has an advantage over the other five algorithms on most of the test problems. It converges more slowly than MOEA/D-DE on some test problems at the early stage, but achieves a better balance between exploration and exploitation and finally obtains better solutions to an MOP. In addition, we further investigated the impact of different decomposition approaches, i.e., the TCH and PBI approaches, as well as of the polynomial mutation on DMO-QPSO. It was shown that using PBI and the polynomial mutation can enhance the algorithmic performance of DMO-QPSO, particularly when tri-objective problems are being solved.
In the future, we will focus on studying new methods for generating a set of weight vectors that are as uniformly distributed as possible, modifying the mechanism of diversity control in the DMO-QPSO algorithm for dealing with more complicated test problems, and improving the quality of the solutions obtained.In addition, we will extend the proposed algorithm to problems having more than three objectives.

Figure 1. Evolution of the IGD values of MOPSO, MOPSO/D, MOEA/D-DE, MMOPSO, CMOPSO and DMO-QPSO versus the number of iterations on F problems.
Tables 10 and 11 present the average, minimum and standard deviation (SD) of the IGD values of 30 final populations on different test functions produced by each algorithm. In these tables, MOPSO/D-TCH, MOEA/D-DE-TCH and DMO-QPSO-TCH stand for the variants of MOPSO/D, MOEA/D-DE and DMO-QPSO using the TCH approach, respectively.

Table 2. The mean, minimum and standard deviation of IGD values on F problems, where the best value for each test case is highlighted in bold. +, −, ≈ denote that the performance of the corresponding algorithm is significantly better than, worse than, and similar to that of DMO-QPSO, respectively, by the Wilcoxon rank sum test with α = 0.05.

Table 3. The mean, minimum and standard deviation of IGD values on UF problems, where the best value for each test case is highlighted in bold.

Table 4. The final rank of the different algorithms on F and UF problems.

Table 10. The mean, minimum and standard deviation of IGD values for the different algorithms with TCH or PBI on F problems.

Table 11. The mean, minimum and standard deviation of IGD values for the different algorithms with TCH or PBI on UF problems.

Table 12. The mean, minimum and standard deviation of IGD values for the four DMO-QPSO variants on F problems. 'pm' stands for the polynomial mutation, 'DMO-QPSO' means DMO-QPSO using PBI, and 'DMO-QPSO-pm' means DMO-QPSO using both PBI and the polynomial mutation.

Table 13. The mean, minimum and standard deviation of IGD values for the four DMO-QPSO variants on UF problems. 'pm' stands for the polynomial mutation, 'DMO-QPSO' means DMO-QPSO using PBI, and 'DMO-QPSO-pm' means DMO-QPSO using both PBI and the polynomial mutation.