Chaotic Multi-Objective Particle Swarm Optimization Algorithm Incorporating Clone Immunity

Abstract: It is generally known that the balance between convergence and diversity is a key issue in solving multi-objective optimization problems. Thus, a chaotic multi-objective particle swarm optimization approach incorporating clone immunity (CICMOPSO) is proposed in this paper. First, the points of the non-dominated solution set are mapped to a parallel-cell coordinate system. Then, the status of the particles is evaluated by the Pareto entropy and the difference entropy, and the algorithm parameters are adjusted using this feedback information. Because the local-search ability of the swarm weakens in the late stage of the algorithm, logistic mapping and a neighboring immune operator are used to maintain and update the external archive. Experimental results show that the convergence and diversity of the algorithm are improved.


Introduction
Most problems [1][2][3][4][5][6] in engineering and science are multi-objective optimization problems, and their objectives usually conflict with each other. The focus of academics and engineers is on finding optimal solutions to these problems. For multi-objective optimization problems, however, improving one objective often simultaneously degrades the others, so people usually make trade-offs and compromises. A very effective way to solve such problems is to use an intelligent algorithm. In recent years, intelligent algorithms have received increasing attention in many areas [7][8][9][10] and have achieved good results. However, when solving multi-objective optimization problems with intelligent algorithms, three goals need to be satisfied: (1) particles must be as close to the Pareto optimal front as possible; (2) there must be a maximal number of particles on the Pareto optimal front; and (3) particles must be distributed as evenly as possible along the Pareto optimal front.
The particle swarm optimization (PSO) algorithm [11,12] is one of the most important and most studied paradigms in computational swarm intelligence. It was put forward by Kennedy and Eberhart in 1995. Its simple form, few parameters, and rapid convergence have contributed to its rapid development and made it a research hotspot over the last 20 years. For multi-objective PSO algorithms, there are six research directions [13]: (1) Aggregating approaches, which combine all the objectives of the problem into a single one; in other words, the multi-objective problem is transformed into a single-objective problem. This is not a new idea, since aggregating functions can be derived from the well-known Kuhn-Tucker conditions for non-dominated solutions. (2) Lexicographic ordering, in which the objectives are ranked in order of importance. The optimal solution is obtained by minimizing the objective functions separately, starting with the most important one and proceeding in order of importance. Lexicographic ordering tends to be useful only when few objective functions are involved, and it may be sensitive to the ordering of the objectives. (3) Subpopulation approaches, which use several subpopulations as single-objective optimizers. To balance the objectives, the subpopulations exchange information with each other. However, this information exchange is not easy to control, and it is difficult to ensure the concurrent evolution of each objective. (4) Pareto-based approaches, which use leader-selection techniques based on Pareto dominance. This is the mainstream method, exemplified by multi-objective particle swarm optimization (MOPSO) [14]. It is useful for multi-objective optimization problems with few objective functions, but the selection pressure provided by Pareto dominance weakens as the number of objectives increases. (5) Combined approaches, which combine the PSO algorithm with other algorithms, such as the genetic algorithm (GA) [15], the cultural algorithm [16], and the differential evolution (DE) algorithm [17]. (6) Other approaches [18,19] that do not fall into the above five types, such as designing a threshold for the particle-updating strategy [20].
The remainder of this paper is organized as follows. In Section 2, we describe multi-objective optimization and the PSO algorithm. Thereafter, in Section 3, we explain a computational method to improve the multi-objective PSO algorithm. Section 4 outlines the strategy and flow of the proposed algorithm. Test problems, performance measures, and results are provided in Section 5, and conclusions are presented in Section 6.

Multi-Objective Optimization Problem
In general, a multi-objective optimization problem with n decision variables and M objective functions can be described as [21]:

min F(x) = (f_1(x), f_2(x), ..., f_M(x)) ∈ Y,
s.t. g_i(x) ≤ 0, i = 1, 2, ..., p,
     h_j(x) = 0, j = 1, 2, ..., q,

where F(x) is the objective function vector, Y is the objective function space, g_i(x) is the i-th inequality constraint, and h_j(x) is the j-th equality constraint.
The following definitions are useful for the conceptual framework of a multi-objective optimization problem. Definition 1. (Pareto dominance) A solution x ∈ D dominates a solution x′ ∈ D if f_m(x) ≤ f_m(x′) for all m = 1, 2, ..., M and f_m(x) < f_m(x′) for at least one m; we denote this dominance as x ≺ x′. Definition 2. (Pareto optimality) A solution x* ∈ D is a Pareto optimal solution if there is no other x ∈ D that satisfies f(x) ≺ f(x*). Definition 3. (Pareto optimal set) The Pareto optimal set is the set of all Pareto optimal solutions. Definition 4. (Pareto optimal front) The Pareto optimal front consists of the values of the objective functions at the solutions in the Pareto optimal set.
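As an illustration of Definition 1, the dominance test for minimization can be sketched in Python (the function name is ours):

```python
from typing import Sequence

def dominates(f_a: Sequence[float], f_b: Sequence[float]) -> bool:
    """Return True if objective vector f_a Pareto-dominates f_b under
    minimization: no worse in every objective, strictly better in at least one."""
    return (all(a <= b for a, b in zip(f_a, f_b))
            and any(a < b for a, b in zip(f_a, f_b)))
```

Note that dominance is a partial order: two vectors such as (2, 1) and (1, 2) are mutually non-dominated, which is why a Pareto front rather than a single optimum arises.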

Particle Swarm Optimization Algorithm
Let n be the dimension of the search space, x_i = (x_i1, x_i2, ..., x_in) be the current position of the i-th particle in the swarm, p_i,best = (p_i1,best, p_i2,best, ..., p_in,best) be the best position visited by the i-th particle so far, and g_best = (g_1,best, g_2,best, ..., g_n,best) be the best position that any particle in the entire swarm has visited. The velocity of the i-th particle is denoted as v_i = (v_i1, v_i2, ..., v_in). The position x_ij^t and velocity v_ij^t of each particle are updated according to the following:

v_ij^(t+1) = w v_ij^t + c_1 r_1 (p_ij,best − x_ij^t) + c_2 r_2 (g_j,best − x_ij^t),   (1)
x_ij^(t+1) = x_ij^t + v_ij^(t+1),   (2)

where c_1, c_2 are the learning factors and r_1, r_2 are random numbers in [0, 1]. w is the inertia weight, which is defined as follows [22]:

w = w_max − (w_max − w_min) t / T_max,   (3)

where w_max and w_min are the maximum and minimum of the inertia weight w, respectively, t is the current iteration, and T_max is the maximum number of iterations.
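A minimal sketch of the standard update, Equations (1)-(3), follows; the default values c_1 = c_2 = 2.0 are common choices, not values specified by this paper:

```python
import random

def pso_step(x, v, p_best, g_best, w, c1=2.0, c2=2.0):
    """One velocity/position update for a single particle, Equations (1)-(2).
    x, v, p_best, g_best are lists of equal length n."""
    new_x, new_v = [], []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()
        vj = w * v[j] + c1 * r1 * (p_best[j] - x[j]) + c2 * r2 * (g_best[j] - x[j])
        new_v.append(vj)
        new_x.append(x[j] + vj)
    return new_x, new_v

def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight, Equation (3)."""
    return w_max - (w_max - w_min) * t / t_max
```

When a particle sits exactly at both its personal and global best, the attraction terms vanish and only the inertia term remains, which is why a nonzero w is needed to keep exploration alive.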

Parallel Cell Coordinate System
The parallel cell coordinate system (PCCS) [23] maps the objective vectors of the non-dominated solutions in the external archive to a two-dimensional plane using parallel coordinates and then rounds these coordinate values to integers:

L_k,m = ⌈ K (f_k,m − f_m^min) / (f_m^max − f_m^min) ⌉,   (5)

where ⌈x⌉ is the ceiling function; k = 1, 2, ..., K, where K is the external archive size in the current iteration; m = 1, 2, ..., M, where M is the number of objective functions of the optimization problem; f_k,m is the m-th objective value of the k-th archive member; and f_m^min and f_m^max are the minimum and maximum of the m-th objective over the archive. Consider Table 1 as an example, in which eight particles are included for three objectives. According to formula (5), we can calculate L_k,m and draw Figure 1. In this paper, we use the Pareto entropy to estimate the distribution uniformity of the Pareto front, which is calculated with the following formula:

Entropy(t) = − Σ_{k=1}^{K} Σ_{m=1}^{M} (Cell_k,m(t) / KM) log(Cell_k,m(t) / KM),   (6)

where, after the approximate Pareto front is mapped to the PCCS, Cell_k,m(t) is the number of cell coordinate components falling into the cell in the k-th row and m-th column.
The Pareto entropy describes the distribution of the particles in the current iteration. However, in order to judge the change between the current iteration and the previous one, a difference entropy ΔEntropy is proposed [23]:

ΔEntropy(t) = Entropy(t) − Entropy(t − 1).   (7)

Through the difference entropy, information about the external archive can be obtained, and the population can be adjusted dynamically using this information.
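The PCCS mapping and Pareto entropy, Equations (5)-(7), can be sketched as follows (the handling of a degenerate objective column, where all values coincide, is our assumption):

```python
import math

def pccs_coordinates(F, K):
    """Map K objective vectors F (list of K lists of length M) to integer
    parallel-cell coordinates L[k][m] per Equation (5)."""
    M = len(F[0])
    mins = [min(f[m] for f in F) for m in range(M)]
    maxs = [max(f[m] for f in F) for m in range(M)]
    L = []
    for f in F:
        row = []
        for m in range(M):
            span = maxs[m] - mins[m]
            # Degenerate column (all values equal): place everything in cell 1.
            cell = 1 if span == 0 else max(1, math.ceil(K * (f[m] - mins[m]) / span))
            row.append(cell)
        L.append(row)
    return L

def pareto_entropy(L, K):
    """Equation (6): entropy of the cell-occupancy distribution Cell_{k,m}."""
    M = len(L[0])
    counts = {}
    for row in L:
        for m, cell in enumerate(row):
            counts[(cell, m)] = counts.get((cell, m), 0) + 1
    total = K * M
    # Empty cells contribute 0 (the 0 log 0 = 0 convention), so only occupied
    # cells need to be summed.
    return -sum((c / total) * math.log(c / total) for c in counts.values())
```

The difference entropy of Equation (7) is then simply `pareto_entropy(L_now, K_now) - pareto_entropy(L_prev, K_prev)`. In the most uniform case, each of the KM components occupies its own cell and the entropy reaches log(KM).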

Difference Entropy Discussion
Previous work has proved that the maximum Pareto entropy is log KM (the most uniform distribution) and the minimum Pareto entropy is log M (the most nonuniform distribution) [23]. Here, we discuss a few special situations for the difference entropy: (1) In the t-th iteration, the corresponding coordinate components in the PCCS of all objective vectors on the Pareto front are the most nonuniform, and after maintaining the external archive, the corresponding coordinate components of all objective vectors are the most uniform.
Proof. Assume that, in the t-th iteration, the distribution of parallel cell coordinates is Cell_k,m(t) and the external archive size is K_1, and that, in the (t + 1)-th iteration, the distribution is Cell_k,m(t + 1) and the external archive size is K_2. In the most nonuniform case, all components of the m-th column collect in a single cell of some row c, so that Cell_c,m(t) = K_1 and Cell_{k≠c},m(t) = 0. Since lim_{x→0} x log(x) = 0 (which follows from L'Hôpital's rule), we stipulate that 0 log(0) = 0. In the (t + 1)-th iteration, the most uniform case gives Cell_k,m(t + 1) = 1 for every occupied cell. We obtain the difference entropy from formula (7):

ΔEntropy = Entropy(t + 1) − Entropy(t) = log(K_2 M) − log M = log K_2.

(2) In the t-th iteration, the corresponding coordinate components in the PCCS of all objective vectors on the Pareto front are the most nonuniform, and after maintaining the external archive, the corresponding coordinate components of all objective vectors remain the most nonuniform.

Proof. From Case (1), we know Entropy(t) = Entropy(t + 1) = log M, so ΔEntropy = log M − log M = 0.
(3) In the t th iteration, the corresponding coordinate components in the PCCS of all objective vectors on the Pareto front are the most uniform and after maintaining the external archive, the corresponding coordinate components of all objective vectors remain the most uniform.
Proof. From Cases (1) and (2), we know that Cell_k,m(t) = 1 and Cell_k,m(t + 1) = 1. So:

ΔEntropy = log(K_2 M) − log(K_1 M) = log(K_2 / K_1).

(4) In the t-th iteration, the corresponding coordinate components in the PCCS of all objective vectors on the Pareto front are the most uniform, but after maintaining the external archive, the corresponding coordinate components of all objective vectors are the most nonuniform.

Proof. From Case (1), we know Entropy(t) = log(K_1 M) and Entropy(t + 1) = log M, so ΔEntropy = log M − log(K_1 M) = −log K_1.
In general, there are only four cases for maintaining the external archive: nonuniform to more nonuniform, nonuniform to relatively uniform, uniform to relatively nonuniform, and uniform to more uniform. In every case the difference entropy has a maximum and a minimum; that is to say, the difference entropy is bounded.

Sketch of State Inspection
The distribution of particles in the external archive is directly reflected by the Pareto entropy, and the status of the external archive can be indirectly inferred from the difference entropy. This status may be a convergence state, a diversity state, or a stagnation state; the parameters of the algorithm can then be changed through the feedback information of these particles. The primary question is therefore how to test the state of the particles. We directly use the threshold values from the literature [21], where H is the number of elements in the external archive, K is the maximum capacity of the external archive, and M is the number of objective functions. The determining conditions of the environment are given accordingly. Obviously, the determining conditions are sensitive to changes in H. Whenever H increases or decreases, it is seen as a convergence state, because when H changes, at least one non-dominated solution enters or leaves the external archive. When the external archive changes, the PCCS may also change, because an increase or decrease in H may change the maximum or minimum of one dimension of the objective vector.

External Archive Update
In a multi-objective particle swarm optimization algorithm, the maintenance of the external archive plays a key role. First, the calculation method of the individual density is introduced [23]. When the particles are mapped to the PCCS, the individual density of P_i (i = 1, 2, ..., K, where K is the external archive size) is obtained according to the following formula:

Density(P_i) = Σ_{j≠i} 1 / PCD(P_i, P_j)^2,

where PCD(P_i, P_j) is the parallel cell distance between P_i and P_j (j = 1, 2, ..., K, j ≠ i), which can be calculated by the following formula:

PCD(P_i, P_j) = Σ_{m=1}^{M} |L_i,m − L_j,m|,

with PCD(P_i, P_j) set to 0.5 when L_i,m = L_j,m for all m. The specific update steps are given in Algorithm 1 [23]. Step 5: Place the particles in ascending order of individual density, and take the first K particles to compose A_new.
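The density computation above, reconstructed from [23], can be sketched as follows; the 0.5 convention for coincident coordinates keeps the density finite (and large) for duplicated particles:

```python
def pcd(L_i, L_j):
    """Parallel cell distance between two integer coordinate rows;
    returns 0.5 when the coordinates coincide in every objective."""
    d = sum(abs(a - b) for a, b in zip(L_i, L_j))
    return 0.5 if d == 0 else d

def individual_density(L):
    """Density(P_i) = sum over j != i of 1 / PCD(P_i, P_j)^2,
    for all K rows of the cell-coordinate matrix L."""
    K = len(L)
    return [sum(1.0 / pcd(L[i], L[j]) ** 2 for j in range(K) if j != i)
            for i in range(K)]
```

A high density thus flags a particle sitting in a crowded region of the PCCS, which is exactly what the archive-truncation step in Algorithm 1 prunes first.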

Update Strategy of the Global Best Position and Personal Best Position
Particles are selected by the lattice dominance strength [23]; the specific steps are given in Algorithm 2. Step 4: Sort the particles in descending order of lattice dominance strength and in ascending order of individual density.
Step 5: Take the first a particles by the individual density and the first b particles by the lattice dominant strength to store in C.
Step 6: Randomly select one particle from C as the global best position. For the update strategy of the personal best position, we adopted the method of [24].

Parameter Selection Strategy
The PSO algorithm has a simple form and can be easily implemented, but it is always drawn toward the global and personal best positions and tends to settle into a local best position after several iterations. In order to overcome this premature-convergence phenomenon, an adaptive inertia weight based on the feedback information from the status evaluation is used in this paper [25]. The weight is adjusted by a step Step_w, whose formula is as follows:

Step_w = (w_max − w_min) / T_max,

where w_max = 0.9, w_min = 0.4, and T_max is the maximum number of iterations. The initial value of w is 0.9.
For c_1 and c_2, nonlinear functions of the inertia weight w are used [23]. Thus, the learning factors c_1 and c_2 also adjust dynamically as the inertia weight is adjusted.
According to Figure 2, the inertia weight decreases approximately linearly, and the learning factors change along with it.
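A minimal sketch of feedback-driven inertia-weight adjustment follows. The mapping from state to adjustment direction (shrink on convergence, grow on stagnation, hold on diversity) and the fixed step size are our assumptions, since the paper's exact rules are not reproduced here:

```python
def adjust_inertia_weight(w, state, t_max, w_max=0.9, w_min=0.4):
    """Adjust w by one step according to the evaluated swarm state:
    shrink it in a convergence state to refine locally, grow it in a
    stagnation state to escape, and clamp to [w_min, w_max].
    Step_w = (w_max - w_min) / t_max is an assumed fixed step size."""
    step = (w_max - w_min) / t_max
    if state == "convergence":
        w -= step
    elif state == "stagnation":
        w += step
    # In a "diversity" state, w is left unchanged.
    return min(w_max, max(w_min, w))
```

Over T_max iterations of pure convergence feedback this reproduces the linear decrease from 0.9 to 0.4 noted for Figure 2, while other feedback sequences bend the curve away from the linear schedule.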

Clone Immune Strategy
With each iteration of the algorithm, the diversity of the population decreases. In this paper, clone, recombination, and mutation operations are used to solve this problem.
(1) Clone: For the external archive A, the population is cloned according to the crowding distance [26]: the larger the crowding distance, the more clones a particle receives. The specific steps are given in Algorithm 3. Algorithm 3: Clone algorithm.
Input: (i) External archive A and number of objective functions M.
(ii) Maximum size of the clone population NC. Output: Clone population A′.
Step 1: Calculate the crowding degree of the active population: padis.
Step 2: Calculate the number of clones of each particle with the following formula:

q_i = ⌈ NC · padis_i / Σ_j padis_j ⌉.

Step 3: If the total number of clone particles exceeds NC, keep only the first NC particles in A′ and return A′.
The cloning operation, which can be performed several times, thus gives sparse areas more clone particles.
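A minimal sketch of Algorithm 3 follows, assuming the crowding-proportional clone count shown above (the ceiling-of-proportion formula is a reconstruction consistent with "the larger the crowding distance, the larger the clone population"):

```python
import math

def clone_population(archive, crowding, nc):
    """Proportional cloning: copy each particle q_i = ceil(nc * padis_i / sum(padis))
    times, so particles in sparse regions (large crowding distance) get more
    clones; truncate the result to at most nc particles."""
    total = sum(crowding)
    clones = []
    for particle, padis in zip(archive, crowding):
        q = math.ceil(nc * padis / total)
        clones.extend([list(particle)] * q)  # independent copies
    return clones[:nc]
```

Because the ceiling rounds every count up, the pre-truncation total can exceed nc, which is exactly the case Step 3 handles.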

(2) Recombination: The specific steps are given in Algorithm 4. Algorithm 4: Recombination algorithm. Input: (i) Clone population A′ and external archive A.
(ii) Lower bound of particles x_min, upper bound of particles x_max, size of the clone population N′, and parameters ε and r_1. Output: Recombination population A″.
Step 1: For each particle x^(i) ∈ A′, randomly select a particle x^(t) ∈ A.
(3) Mutation: The main way to produce new particles is through mutation, which can enhance the diversity of the particles [26]. The specific steps are given in Algorithm 5.
(ii) Mutation parameter p_m and parameters eta_m and r_1.
(iii) Lower bound of particles x_min, upper bound of particles x_max, and size of the clone population N′. Output: Mutation population A‴.
Step 3: Generate a random number rand ∈ [0, 1]. The external archive is then updated by A‴ and Algorithm 1. The clone operation lets particles inherit good information, the recombination operation lets good genes be inherited better, and the mutation operation increases the diversity of the particles; together, these operations make the algorithm more effective.
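Since Algorithm 5 names the parameters p_m and eta_m, a mutation of the polynomial type [26] is a plausible reading; a sketch under that assumption:

```python
import random

def polynomial_mutation(x, x_min, x_max, p_m, eta_m):
    """Polynomial mutation sketch: each coordinate mutates with probability
    p_m; the distribution index eta_m controls how close the offspring stays
    to the parent (larger eta_m -> smaller perturbations)."""
    y = list(x)
    for j in range(len(y)):
        if random.random() < p_m:
            u = random.random()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
            # Scale by the variable range and clip to the bounds.
            y[j] = min(x_max[j], max(x_min[j], y[j] + delta * (x_max[j] - x_min[j])))
    return y
```

With p_m = 0 the particle is returned unchanged; with p_m = 1 every coordinate is perturbed but always stays within [x_min, x_max].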

Chaotic Strategy
In order to enhance the diversity and local-search ability of the algorithm in the late iterative stages, a local chaotic search is applied to the external archive; the specific steps are given in Algorithm 6.
(ii) Maximum number of chaotic search iterations M′. Output: Chaotic population A_c.
Step 1: Randomly generate a chaotic initial point y_0 = rand, and set the chaotic search counter m = 1.
Step 2: According to the logistic map in chaotic systems, generate the next term of the chaotic series: y_(j+1) = 4 y_j (1 − y_j).
Step 3: Renew the positions based on the following equation: x_j^(i) = x_j^(i) + rand · (−1)^j · y_j.
Step 4: If the termination criterion is satisfied, output A_c; otherwise, let m = m + 1 and return to Step 2.
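Algorithm 6 can be sketched as follows; the exact perturbation form in Step 3 is garbled in the source, so the rand · (−1)^j · y_j scaling below should be treated as an assumption:

```python
import random

def chaotic_local_search(x, x_min, x_max, m_max):
    """Logistic-map local search sketch: perturb a solution with a chaotic
    series y_{j+1} = 4 y_j (1 - y_j), alternating the perturbation sign,
    and clip each coordinate to its bounds."""
    y = random.random()          # chaotic initial point y_0
    candidate = list(x)
    for j in range(1, m_max + 1):
        y = 4.0 * y * (1.0 - y)  # logistic map; fully chaotic regime at r = 4
        for d in range(len(candidate)):
            step = random.random() * ((-1) ** j) * y
            candidate[d] = min(x_max[d], max(x_min[d], candidate[d] + step))
    return candidate
```

The logistic series is dense in (0, 1) and never repeats for almost all seeds, which is what lets a short search visit many distinct neighborhoods of an archive member.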

Chaotic Multi-Objective Particle Swarm Optimization Algorithm Incorporating Clone Immunity (CICMOPSO)
Step 1: Initialize a population of particles X_N, giving each particle a random position vector x_i and velocity vector v_i. Further initialize the maximum size of the external archive K, the maximum size of the active population NA, the maximum size of the clone population NC, the lower bound of particles x_min, the upper bound of particles x_max, the maximum number of generations T_max, and the current generation number T = 0.
Step 2: Calculate the fitness of all particles in X N (T).
Step 3: Initialize and update the external archive A.
Step 4: Calculate the parallel cell coordinates of particles with Equation (5).
Step 5: Calculate the Pareto entropy and difference entropy of the particles with Equations (6) and (7).
Step 6: Evaluate the states of particles by the judgement condition.

Table: benchmark problem definitions (name, objective functions, dimension D, and variable bounds).

Performance Indicators
(1) Convergence index [26]: The generational distance (GD) indicates the average distance between the obtained Pareto front and the true Pareto front. It is defined as

GD = ( Σ_{j=1}^{k} d_j^2 )^(1/2) / k,

where d_j is the distance from the j-th obtained Pareto solution to the nearest point on the true Pareto front. (2) Spacing index [26]: Spacing (S) specifies the spread of the obtained Pareto front. It is defined as

S = ( (1/(k − 1)) Σ_p ( D̄ − D_p )^2 )^(1/2),

where D_p is the distance between two consecutive solutions in the obtained Pareto front and D̄ is the average of all D_p.
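The two indicators can be sketched as follows. Several variants of both metrics exist in the literature; here we assume Euclidean distances and, for S, use nearest-neighbor distances rather than consecutive-solution differences, which coincide for a sorted bi-objective front:

```python
import math

def generational_distance(pf, true_pf):
    """GD: root of the summed squared distances from each obtained point to
    its nearest true-front point, divided by the number of points k."""
    k = len(pf)
    sq = 0.0
    for p in pf:
        d = min(math.dist(p, q) for q in true_pf)
        sq += d * d
    return math.sqrt(sq) / k

def spacing(pf):
    """S: sample standard deviation of the nearest-neighbor distances on the
    obtained front; 0 for a perfectly evenly spread front."""
    k = len(pf)
    d = [min(math.dist(p, q) for j, q in enumerate(pf) if j != i)
         for i, p in enumerate(pf)]
    mean = sum(d) / k
    return math.sqrt(sum((mean - di) ** 2 for di in d) / (k - 1))
```

GD = 0 means every obtained point lies on the true front (good convergence), while S = 0 means the points are perfectly evenly spaced (good diversity); the two goals are measured independently.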

Numerical Results
The convergence results are shown in Table 4. The CICMOPSO algorithm had better results than the NSGA-II and SPEA2 algorithms for all benchmark functions. For ZDT4, CICMOPSO was worse than NICPSO, but for the other functions, CICMOPSO had the better performance. This finding indicates that the Pareto fronts obtained by CICMOPSO are closer to the true Pareto fronts and that CICMOPSO can effectively improve convergence. From Table 5, we can see that NICPSO and CICMOPSO produced better results for the Spacing index (S) than NSGA-II and SPEA2. For ZDT1 and ZDT2, CICMOPSO, which has the minimum mean and variance, performed better than NICPSO, but for the other functions, NICPSO performed better. It is well known that multi-objective optimization problems are more difficult to solve when the number of objectives is three or more. The simulation results in Table 6 show that CICMOPSO can solve most of these many-objective test problems.

Conclusions
A chaotic multi-objective particle swarm optimization algorithm incorporating clone immunity (CICMOPSO) was proposed in this paper for solving multi-objective problems. CICMOPSO uses the clone immunity strategy to maintain the external archive and prevent the algorithm from falling into a local optimum. Additionally, the Pareto entropy is used to dynamically adjust the algorithm parameters. The experimental results showed that CICMOPSO outperforms the compared algorithms on most of the test problems with respect to the two metrics. For problems with two objective functions, CICMOPSO performed well, but for problems with more objectives, the results are not yet satisfactory. We will improve CICMOPSO in the near future to make it suitable for solving more problems.

Abbreviations: DE, differential evolution; PCCS, parallel cell coordinate system; CICMOPSO, chaotic multi-objective particle swarm optimization incorporating clone immunity.

Figure 1. Particle distribution in the parallel cell coordinate system.

Algorithm 1: Improved external archive updating algorithm.
Input: (i) External archive A. (ii) Maximum size of the external archive K. (iii) New solution P obtained by the algorithm.
Output: Updated external archive A_new.
Step 1: Determine B = A ∪ P and calculate the objective vector values of B.
Step 2: Find the non-dominated solution set B′ of B.
Step 3: If |B′| ≤ K, then A_new = B′; otherwise, go to Step 4.
Step 4: Calculate the individual density of all particles in B′.
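A minimal sketch of this archive update follows, operating on objective vectors under minimization; the density function is passed in so any density definition (such as the PCCS-based one of Section 3) can be plugged in:

```python
def dominates(f_a, f_b):
    """Pareto dominance for minimization: no worse everywhere, better somewhere."""
    return (all(a <= b for a, b in zip(f_a, f_b))
            and any(a < b for a, b in zip(f_a, f_b)))

def update_archive(archive_F, new_F, K, density_fn):
    """Merge new objective vectors into the archive (Step 1), keep only the
    non-dominated ones (Step 2), and if more than K remain, keep the K with
    the lowest individual density (Steps 3-5).
    density_fn maps a list of objective vectors to one density value each."""
    B = archive_F + new_F
    nondom = [f for i, f in enumerate(B)
              if not any(dominates(g, f) for j, g in enumerate(B) if j != i)]
    if len(nondom) <= K:
        return nondom
    dens = density_fn(nondom)
    order = sorted(range(len(nondom)), key=lambda i: dens[i])
    return [nondom[i] for i in order[:K]]
```

Truncating by ascending density discards the most crowded members first, which preserves the spread of the retained front.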

Algorithm 2: Update strategy of the global best position.
Input: (i) External archive A and the number of objective functions M. (ii) Status of the external archive.
Output: Global best position.
Step 1: Calculate the lattice coordinate vector L_k,m of the particles in the external archive.
Step 2: Calculate the lattice dominance strength and the individual density Density(i).
Step 3: Determine a and b by the status of the external archive.

Figure 2. The inertia weight and learning factors as a function of iteration number for ZDT1. (a) The curves of the adaptive inertia weight and the linearly decreasing inertia weight; (b) the curves of the adaptive learning factors c_1 and c_2.
Here, D_p is the absolute difference between two consecutive solutions in the obtained Pareto front PF, and D̄ is the average of all D_p.
k is the number of Pareto solutions, n is the number of objective functions, PF_j denotes the j-th obtained Pareto solution, and PF_j^t denotes the nearest point on the true Pareto front to PF_j.