An Orthogonal Multi-Swarm Cooperative PSO Algorithm with a Particle Trajectory Knowledge Base

A novel orthogonal multi-swarm cooperative particle swarm optimization (PSO) algorithm with a particle trajectory knowledge base is presented in this paper. Different from traditional PSO algorithms and other PSO variants, the proposed algorithm not only introduces an orthogonal initialization mechanism and a particle trajectory knowledge base for multi-dimensional optimization problems, but also conceives a new adaptive cooperation mechanism to accomplish the information interaction among swarms and particles. Experiments are conducted on a set of benchmark functions, and the results show better performance compared with the traditional PSO algorithm in terms of convergence, computational efficiency and avoidance of premature convergence.


Introduction
The Particle Swarm Optimization (PSO) algorithm is a stochastic, population-based evolutionary optimization technique and was conceived in 1995 by Kennedy and Eberhart [1,2]. A member of the wide category of swarm intelligence methods, the PSO algorithm is modeled on the social behaviors observed in insects, animal herds, bird flocks, and fish schools, where these swarms search for food in a collaborative manner. Compared with the genetic algorithm (GA), the PSO algorithm can be easily implemented and is computationally inexpensive. Each particle in the PSO algorithm represents a potential solution and searches a specific search space with a velocity that is dynamically adjusted according to a set of velocity and position updating rules reflecting the particle's own and its companions' historical behaviors. The PSO algorithm has been studied and applied to solving complex mathematical problems and real engineering optimization problems [3-6]. Nevertheless, similar to most stochastic algorithms, the performance of the PSO algorithm degrades significantly when it is applied to multi-dimensional or tightly coupled problems. To improve the performance of the traditional PSO algorithm and overcome its disadvantages, different variants have been presented and studied, such as the hybrid PSO algorithm [7,8], the multi-swarm PSO algorithm [9,10] and the Quantum-behaved PSO algorithm [11].
There are some well-studied PSO performance issues, such as particle dynamics analysis [12,13], swarm size [14] and swarm topology [15]. Apart from these issues, some studies focus on improving the performance of PSO by combining it with other methods [6,16]. There are also attempts to use multiple swarms to optimize different components of the solution vector cooperatively; the cooperative particle swarm optimization (CPSO) algorithm presented by Van den Bergh and Engelbrecht [17] is a typical representative. Unlike the traditional PSO, CPSO takes the aggregation behaviors among populations into consideration, while the traditional PSO only considers the interaction among individual particles. Moreover, Liang and Suganthan [18] provide a similar dynamic multi-swarm particle swarm optimization algorithm, where the whole population is divided into many small swarms. New regrouping and information exchange mechanisms are introduced into this model. Experiments show better performance compared with the traditional PSO algorithm and other variants.
In this paper, we present an extended PSO variant based on the CPSO model and the multi-swarm PSO model mentioned above, called the multi-swarm orthogonal cooperative particle swarm optimization algorithm with a knowledge base (MCPSO-K). In this model, the search efficiency is improved by introducing an orthogonal selection method in the population initialization phase and at specific iterative nodes. Moreover, different from existing PSO variants, the synchrony of flocking behavior is treated as a function of the inter-population distance and the quality of each population, so that populations maintain an optimum distance among themselves and their neighbors. The traditional interaction mechanism is replaced by a new adaptive cooperation mechanism. In addition, the MCPSO-K model is instantiated and tested on some benchmark functions, and the properties of the MCPSO-K algorithm are analyzed. The experiments reveal the advantages of the new MCPSO-K algorithm in terms of convergence, computational efficiency and avoidance of premature convergence compared with the traditional PSO algorithm.
This paper is organized as follows: Section 2 gives a brief introduction to recent research on PSO and other PSO variants (especially CPSO and multi-swarm PSO, which are the ideological basis of our MCPSO-K). Section 3 describes the basic model of MCPSO-K in detail and makes some further discussions on the model. Section 4 applies the MCPSO-K model to some test functions and compares the results with the traditional PSO algorithm. Conclusions are given in Section 5.

PSO and Other PSO Variants
Compared with other evolutionary algorithms, the PSO algorithm requires only primitive mathematical operators and is computationally inexpensive in terms of both speed and memory requirements. The basic problem-solving process of PSO is as follows. Without loss of generality, an unconstrained multi-dimensional optimization problem can be formulated as a D-dimensional minimization problem:

min f(x), x = (x_1, x_2, ..., x_D) ∈ R^D, (1)

In order to solve this problem, the PSO algorithm first initializes a swarm consisting of m particles. The traditional PSO algorithm and most PSO variants adopt a randomized initialization mechanism, which is the reason for their unstable convergence process. Each particle in the swarm is expressed by a vector x_i = (x_i1, x_i2, ..., x_iD). This vector is a point in the D-dimensional search space and denotes the position of the ith particle; it also represents a potential solution of the problem expressed by Equation (1).
Then, the PSO uses Equation (2) to update the velocity v_ij^(k+1) of each particle and Equation (3) to update the position x_ij^(k+1) of each particle in the iterative procedure. The velocity and position updating formulas are as follows:

v_ij^(k+1) = ω·v_ij^k + c1·r1·(p_ij^k − x_ij^k) + c2·r2·(p_nj^k − x_ij^k), (2)

x_ij^(k+1) = x_ij^k + v_ij^(k+1), (3)

where the vector v_i = (v_i1, v_i2, ..., v_iD) is the velocity of the ith particle and k is an indicator of the population iterative process. ω is an inertia weight which not only increases the probability that the algorithm converges, but also controls the whole searching process; it can be linear or even dynamically determined by a fuzzy system [19]. c1 and c2 are two positive constants, and r1 and r2 are two random numbers in the range [0, 1] that may be different for each dimension and each particle. p_ij^k represents the best previous position yielding the best fitness value for the ith particle, and p_nj^k is the best position discovered by the whole population. The relationship between these tuning parameters and the performance of the PSO algorithm is discussed in References [20-22].
By these definitions, the particles will be attracted towards the location of the best fitness achieved thus far by the particle itself and the location of the best fitness achieved thus far across the whole population.After a specific number of iterations, p k ij and p k nj will tend towards stability, where the best position can be regarded as the optimal solution of the problem.The flow chart of the traditional PSO is given in Figure 1.
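For concreteness, the update rules of Equations (2) and (3) can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the parameter values (ω = 0.7, c1 = c2 = 1.5), the search bounds, and the sphere test function are illustrative choices.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    """Minimize f over [lo, hi]^dim with the classic global-best
    updates of Equations (2) and (3)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))   # random initialization
    v = np.zeros((n_particles, dim))              # initial velocities
    pbest = x.copy()                              # personal bests p_i
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()            # global best p_n
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Equation (2): inertia + cognitive + social terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        # Equation (3): move, clipped to the search bounds
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

best_x, best_f = pso(lambda p: float(np.sum(p ** 2)), dim=5)
```

On a smooth unimodal function such as the sphere, this global-best scheme converges quickly, which is exactly the behavior the surrounding text describes: particles are pulled towards their own best positions and the population's best position until both stabilize.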
As mentioned above, the PSO algorithm began as a simulation of a simplified social milieu. Based on these characteristics, further research was carried out to uncover the underlying rules that enable large numbers of particles to search synchronously. This research suggested that the social sharing of information among particles offers an evolutionary advantage.

In order to improve the global convergence of the PSO algorithm and overcome the premature convergence problem, researchers have explored various forms of this social information interaction. The algorithm variant based on multi-population concurrent evolution is one of the most efficient. Liang et al. present a comprehensive learning particle swarm optimizer (CLPSO), which uses a novel learning strategy whereby all other particles' historical best information is used to update a particle's velocity [23]. This strategy preserves the diversity of the swarm to discourage premature convergence. Branke et al. use a number of smaller populations to track the most promising peaks over time, while a larger parent population continuously searches for new peaks [24]. This strategy was proven to be suitable for dynamic optimization problems. Li and Yang also present a multi-swarm algorithm based on fast particle swarm optimization for dynamic optimization problems [25].
All these studies show that the cooperative mode can significantly affect optimization performance and increase the abilities of "exploration" and "exploitation", where exploration means the ability of a search algorithm to explore different areas of the search space in order to have a high probability of finding a good optimum, while exploitation means the ability to concentrate the search around a promising region in order to refine a candidate solution. Most of the variants based on the PSO algorithm focus on changing or improving the "exploration" or "exploitation" ability of the algorithm while maintaining the basic interaction mechanism derived from the traditional PSO algorithm. Nevertheless, most tuning parameters in the traditional PSO and other variants are largely determined by prior knowledge, while the determination of some parameters has a significant influence on the performance of the algorithm. Some studies show that the PSO and other variants cannot perform well for lack of prior knowledge. In other words, the PSO and most other variants do not have an adaptive solving ability for an unknown problem.

Adaptive Cooperative Mechanism of Multi-Swarm Particles
Instead of changing the cooperative mode of the PSO algorithm, some variants focus on improving the performance of PSO by using different neighborhood structures [26,27]. Kennedy claimed that a PSO with a large neighborhood would perform better on simple problems, while a PSO with small neighborhoods might perform better on complex problems [28]. Kennedy and Mendes also discussed the effects of different neighborhood topological structures on the traditional PSO algorithm [29]. Hu and Eberhart proposed a dynamically adjusted neighborhood for applying PSO to multi-objective optimization problems [30]. These PSO-based variants introduce a dynamically adjusted neighborhood mechanism for each particle, where the closest particles are selected to be its new neighborhood. Different from the traditional PSO, the neighbors of a particle are all weighted and influence this particle. When these sub-groups are regarded as small swarms, these variants can be seen as multi-swarm PSOs.
Apparently, the definition of the cooperation mechanism among swarms and the regrouping method are the core contents of these multi-swarm PSOs. Different from the variants above, our multi-swarm orthogonal cooperative PSO algorithm with a particle trajectory knowledge base (MCPSO-K) adopts an orthogonal initialization mechanism, which ensures the uniform distribution of swarms in the initial phase and at some specific iterative nodes in the optimization process. In addition, the social sharing of information in the optimization process, which is the core content of PSO, is promoted from the local level (particle level) to the global level (swarm level) in MCPSO-K. Figure 2 shows a brief expression of the social sharing of information among swarms and particles in MCPSO-K. In Figure 2, we use three swarms (a, b, and c) with five particles in each swarm to show the social information sharing process among swarms and particles. These three swarms are distributed uniformly in the search space using the orthogonal initialization mechanism. Each swarm has a virtual center indicating the weighted average of the positions and fitness values of the particles in this swarm. Social information interaction at the global level (swarm level) is based on this virtual center. Equations (4)-(7) give the detailed quantitative method for the associated variables mentioned in Figure 2.
where F_ab is a quantitative indicator of the social information interaction strength between swarm a and swarm b; k_1 is a constant indicating the strength of information interaction; d_1 takes only two values, declaring whether the two swarms are positively or negatively correlated; D_swarms is a parameter predetermined according to the character of the problem; x_i^a represents the position of the ith particle of swarm a; x_*^a and x_*^b are the virtual center positions of swarms a and b, respectively; the numerators f(x_*^a) and f(x_*^b) in the fraction of Equation (4) represent the fitness values of the virtual centers of swarms a and b, respectively; and the denominator of Equation (4), ||x_*^a − x_*^b||, is an indicator of the distance between the two swarms.

Different from most variants of PSO, where particles have no mass, each particle in a swarm in MCPSO-K has a mass property defined by its fitness value. Because the behaviors of swarms are determined by particles, the quantitative indicator F_ab defined above ultimately acts on each particle. Along with this global-level (swarm-level) information interaction, the local-level (particle-level) information interaction is defined as follows: where x_i^a has a similar implication to x_ij^(k+1) in Equation (3) and the other parameters can be deduced from the definitions in Equations (4)-(7). In order to make a clear statement of the behavior of particles, Sub-graph 1 in Figure 2 shows a partial enlarged view of swarm a in MCPSO-K and Sub-graph 2 in Figure 2 shows a partial enlarged view of a particle in swarm a. From Sub-graphs 1 and 2, we can see that each particle has two sources of influence. On the one hand, each particle is influenced by the other particles belonging to the same swarm. On the other hand, all particles make up a swarm and interact with other swarms at the swarm level.
Similar to Equations (2) and (3), each particle updates its position and velocity information by updating formulas. Nevertheless, just as Sub-graph 3 shows, the velocity of a particle in MCPSO-K is updated by the following definitions,
where the term a_i^t is defined by the following equations.
Most parameters in Equations (10)-(13) are similar to the definitions above. Two new parameters, ε and δ, are weight coefficients that indicate the strength of global-level interactions and particle-level interactions. In this way, the cooperation mechanism of MCPSO-K is accomplished by Equations (4)-(13). Different from the traditional interaction mechanism, the cooperation mechanism of MCPSO-K has the following characteristics.
(1) All particles in MCPSO-K participate in the cooperative mechanism and influence each other, while the influences mentioned above are not of equal strength. On the premise of not increasing the number of particles, this cooperative mechanism can largely maintain the population diversity, which includes diversity of position information and diversity of velocity information. In addition, as a result of having a mass property, each particle in the MCPSO-K model can adaptively adjust its search feature according to its companions' fitness values and the distance between two particles.
(2) The MCPSO-K introduced in this paper uses the concept of a virtual center of a swarm to promote the sharing of information from the particle level to the swarm level. In addition, particles and swarms can maintain a reasonable distance by the definition of the correlation coefficient in Equations (5) and (9). A reasonable distance can be obtained by a correlation analysis in advance. In this way, MCPSO-K can largely improve its solving ability for unknown problems.
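The swarm-level bookkeeping described above can be illustrated in Python. Because Equations (4)-(7) are only characterized verbally in this section, this is a sketch under stated assumptions: the 1/(1 + f) fitness weighting in the virtual center and the exact gravity-like form of F_ab (product of virtual-center fitness values over inter-center distance) are inferred from the textual description, not taken from the paper's equations.

```python
import numpy as np

def virtual_center(positions, fitness):
    """Fitness-weighted average position of one swarm (the 'virtual
    center' used for swarm-level information sharing). Assumes a
    minimization problem with nonnegative fitness, so smaller
    fitness gets a larger weight; the 1/(1 + f) weighting is an
    illustrative choice."""
    w = 1.0 / (1.0 + np.asarray(fitness, dtype=float))
    w /= w.sum()
    return (w[:, None] * np.asarray(positions, dtype=float)).sum(axis=0)

def interaction_indicator(xa, xb, fa, fb, k1=1.0, d1=1.0):
    """Gravity-like interaction strength F_ab between swarms a and b,
    sketched from the verbal definition of Equation (4): the numerator
    is the product of the virtual-center fitness values f(x_*^a) and
    f(x_*^b), the denominator is the inter-center distance, and
    d1 in {+1, -1} marks positive or negative correlation."""
    dist = np.linalg.norm(np.asarray(xa) - np.asarray(xb)) + 1e-12
    return k1 * d1 * fa * fb / dist
```

Because every particle carries a mass property derived from its fitness, an indicator computed this way at the swarm level can then be distributed down to the individual particles, which is what Equations (8)-(13) accomplish in the paper.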

Orthogonal Initialization Mechanism
During the optimization procedure of PSO, the initial value of each particle is proven to be very important for reducing the computation time and obtaining an optimum solution [19]. Determining reasonable values for each particle and optimizing the distribution of particles in the search space have a great influence on the performance of PSO, especially in a multi-swarm cooperative PSO algorithm. The traditional PSO algorithm and most PSO variants often adopt a randomized initialization mechanism, which always causes an unstable convergence process. To solve this problem, the concept of orthogonal test design (OTD) is introduced into our MCPSO-K model in the swarm initialization phase and at some specific iterative nodes.

It is well known that orthogonal test design (OTD) [31] is an efficient statistical design method. Necessary technical information about a problem can be acquired with the minimum number of tests using OTD. A designed test uses OTD to determine the minimum number of combinations of factor levels while these combinations are distributed over the whole space of all possible combinations. The factors are the chosen variables or parameters which affect the chosen response variables, and each setting of a factor is regarded as a level of this factor. Nevertheless, different from the traditional OTD, where specific values are set as levels of factors, the value interval of a particle in each swarm in the MCPSO-K model is regarded as a factor level participating in the initialization of particles in each swarm. In this way, this mechanism can largely maintain the population diversity in each generation by obtaining representative particles without increasing the number of particles, avoiding the premature convergence caused by an uneven distribution of particles. Figure 3 shows the basic framework of the orthogonal initialization mechanism adopted in this paper.

In Figure 3, black and white circles are used to indicate particles in different generations. We can see that OTD plays a role in the initialization phase and at some specific iterative nodes (Generation n, Generation 2n, etc.) during the optimization process of MCPSO-K. Every OTD operation except the first generation is based on the values of particles obtained after some iterative operations and includes two sub-operations: reshaping the search space and orthogonal initialization.
(1) Sub-operation 1: Reshaping the search space. In the traditional PSO algorithm, most particles move towards the best position, which always causes a particle aggregation phenomenon and ultimately harms the population diversity. In addition, to maintain a reasonable distance between particles, we also plan the local search space through this sub-operation. This operation is based on the value interval of the particles (x_ij^min, x_ij^max) obtained after some iterative operations, where x_ij^min and x_ij^max indicate the minimum and maximum value of the ith particle in the jth dimension, respectively. To largely avoid the particle aggregation phenomenon, an expansion factor γ is introduced into this operation. The new search space is determined by both the expansion factor and the value interval of the particles. Thus, the value interval of each dimension of the target problem can be defined as [γ·x_ij^min, (1/γ)·x_ij^max], (0 < γ < 1).

(2) Sub-operation 2: Orthogonal initialization. There are many orthogonal selection techniques, including the Full Factorial technique, the Latin Hypercube technique, the Optimal Latin Hypercube technique and so on. In this paper, we adopt the Optimal Latin Hypercube (OLH) technique to accomplish this orthogonal initialization. The number of levels for each factor in OLH equals the number of points, with random combinations. OLH allows more points and more combinations to be studied for each factor. Engineers have total freedom in selecting the number of designs to run as long as it is greater than the number of factors.
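The two sub-operations can be sketched as follows. The interval reshaping follows the [γ·x_min, x_max/γ] expression above (which assumes positive bounds), while a plain Latin Hypercube sample is used here as an illustrative stand-in for the Optimal Latin Hypercube technique the paper adopts.

```python
import numpy as np

def reshape_interval(x_min, x_max, gamma=0.8):
    """Sub-operation 1: expand the observed value interval
    [x_min, x_max] to [gamma * x_min, x_max / gamma] with
    0 < gamma < 1, assuming positive bounds as in the paper's
    expression."""
    return gamma * x_min, x_max / gamma

def latin_hypercube(n_points, dim, lo, hi, rng=None):
    """Sub-operation 2: a plain Latin Hypercube sample (one point per
    stratum in every dimension), standing in for the Optimal Latin
    Hypercube, so that particles cover the reshaped space evenly."""
    rng = rng or np.random.default_rng(0)
    # one uniform draw per stratum [i/n, (i+1)/n) in each dimension
    u = (rng.random((n_points, dim)) + np.arange(n_points)[:, None]) / n_points
    for j in range(dim):
        # decouple the dimensions by shuffling each column's strata
        u[:, j] = u[rng.permutation(n_points), j]
    return lo + u * (hi - lo)
```

Because every one-dimensional stratum receives exactly one particle, the sample stays spread out even for small swarms, which is the property the orthogonal initialization mechanism relies on to avoid the uneven distributions that cause premature convergence.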
In order to decrease the computational cost and plan the sub search spaces reasonably, the initial search space of each swarm in MCPSO-K is different. Figures 4-6 show three different partition methods for the ith particle domain.
Figure 4 shows the first partition method for the particle domain, where the ith component of the position vector should be valued in the interval [f_ij^min, f_ij^max]. Some other partition methods are shown in Figures 5 and 6.
It is worth noting that the impact of the size of the overlap region δ_i in Figure 4 and the relation between f_ij^max and f_i(j+1)^min on the performance of MCPSO-K are also good topics for future research.
These tuning parameters can be determined by performing a sensitivity analysis on the target problem.
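Such a sensitivity analysis can be as simple as a one-at-a-time sweep. The sketch below varies only the expansion factor γ; the random-search "optimizer", the bounds, and the shifted sphere objective are all invented stand-ins for MCPSO-K and the target problem, chosen purely to keep the example self-contained.

```python
import numpy as np

def run_optimizer(gamma, f, dim=2, evals=200, seed=0):
    """Stand-in optimizer: random search inside an interval reshaped
    by the expansion factor gamma, [gamma * 1.0, 2.0 / gamma] per
    dimension (illustrative bounds, not from the paper)."""
    rng = np.random.default_rng(seed)
    lo, hi = gamma * 1.0, 2.0 / gamma
    pts = rng.uniform(lo, hi, (evals, dim))
    return min(f(p) for p in pts)

def sensitivity(f, gammas):
    """One-at-a-time sweep: rerun the optimizer for each gamma and
    record the best objective value reached."""
    return {g: run_optimizer(g, f) for g in gammas}

sphere = lambda p: float(np.sum((p - 1.5) ** 2))
results = sensitivity(sphere, [0.5, 0.7, 0.9])
best_gamma = min(results, key=results.get)
```

Comparing the recorded objective values across the swept settings indicates how strongly the parameter affects the outcome on that problem, which is all the text above requires of the analysis.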

Particle Trajectory Knowledge Base
The traditional PSO algorithms and other variants all focus on the best positions achieved so far by the particle itself and the position of the best fitness achieved so far across the whole population. During the iterative procedure, the trajectories of particles can cross each other or come into close proximity in the search space, which means that the fitness function is computed repeatedly. This is unacceptable for a complex problem, especially when solving real engineering problems where the fitness value changes continuously and nearby positions do not cause a significant difference in fitness value. Thus, recording the trajectory information is a good way to avoid repeated computation and is introduced into the MCPSO-K model.
As shown in Figure 7, where we use black roundness to indicate particles.The flow chart of the accomplishment of the particle trajectory knowledge base can be divided into three phases.First, an empty particle trajectory knowledge base indicated by an empty matrix is created by the information of the initialization of particles.Second, the knowledge base will record the information of the particles' position and fitness value.Third, before next iterative computation, particles that have similar position information with records in the knowledge base will be assigned the fitness value in the database directly.Note that the second and third phases are carried out simultaneously and this particle trajectory knowledge base is always expressed by a multiple dimensional matrix, where the dimension of this matrix depends on the character of a specific problem.It is worth noting that the impact of size of overlap region in Figure 4 and the relation of and on performance of MCPSO-K are also a good research topic for future research.
The determination of these tuning parameters can be determined by performing sensitivity analysis on the target problem.

Particle Trajectory Knowledge Base
The traditional PSO algorithms and other variants all focus on the best position achieved so far by the particle itself and the position of the best fitness achieved so far across the whole population. During the iterative procedure, the trajectories of particles can cross each other or come into close proximity in the search space, which means that the fitness function will be computed repeatedly. This is unacceptable for a complex problem, especially in real engineering problems where the fitness value changes continuously and nearby positions do not cause a significant difference in fitness. Thus, recording trajectory information is a good way to avoid repeated computation, and this concept is introduced into the MCPSO-K model.
As shown in Figure 7, where black circles indicate particles, the accomplishment of the particle trajectory knowledge base can be divided into three phases. First, an empty particle trajectory knowledge base, represented by an empty matrix, is created from the initialization information of the particles. Second, the knowledge base records the position and fitness value of each particle. Third, before the next iterative computation, particles whose position information is similar to records in the knowledge base are assigned the corresponding fitness value from the database directly. Note that the second and third phases are carried out simultaneously, and the particle trajectory knowledge base is always expressed as a multi-dimensional matrix whose dimension depends on the character of the specific problem. Whether to execute the assignment operation depends on the similarity between particles and records. Thus, the similarity evaluation between particles and the records in the database is a key step of this method. The similarity evaluation functions, based on the error between the position of the current particle and the records in the database, are defined as follows.
F_similarity = { 1, f_similarity ≤ e_similarity; 0, f_similarity > e_similarity } (14)

where F_similarity in Equation (14) takes only two values, indicating whether the similarity hypothesis holds, based on the current similarity level f_similarity and a predetermined similarity level e_similarity; x_ij^k, defined similarly to Equation (2), and x_i denote the corresponding positions of previous particles recorded in the trajectory database; and ω_i indicates the sensitivity of the corresponding position with respect to the fitness value.
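The indicator in Equation (14) and the knowledge-base lookup it gates can be sketched as follows. This is a minimal illustration, not the paper's implementation: the quadratic, ω-weighted error form of f_similarity is an assumption (the paper only states that f_similarity is based on the position error between the current particle and a record), and the list-based knowledge base stands in for the multi-dimensional matrix described above.

```python
def similarity_indicator(x, record, weights, e_similarity):
    """F_similarity from Equation (14): 1 if the similarity hypothesis
    holds (f_similarity <= e_similarity), 0 otherwise.

    Assumed form: f_similarity is the w_i-weighted squared error between
    the current position x and a recorded position.
    """
    f_similarity = sum(w * (xi - ri) ** 2
                       for w, xi, ri in zip(weights, x, record))
    return 1 if f_similarity <= e_similarity else 0


def lookup_fitness(x, knowledge_base, weights, e_similarity):
    """Return the stored fitness of the first similar record, or None if
    no record is close enough and the fitness must be computed anew."""
    for record, fitness in knowledge_base:
        if similarity_indicator(x, record, weights, e_similarity):
            return fitness
    return None
```

For example, with `kb = [([0.0, 0.0], 1.5)]` and unit weights, a particle at `[0.01, 0.0]` is assigned the stored fitness 1.5 directly, while a particle at `[1.0, 1.0]` falls back to a fresh fitness evaluation.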
In the concrete implementation, this trajectory knowledge base can be expressed as a multi-dimensional matrix. It is worth noting that, in our study, implementing a sensitivity analysis based on a finite number of computations before creating the particle trajectory knowledge base contributes to the determination of the matrix dimension. In general terms, we expand the spacing between adjacent values for insensitive parameters and reduce the spacing between adjacent values for sensitive parameters.
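One way to realize this non-uniform discretization is to index the matrix through per-dimension grids whose cell counts come from the sensitivity analysis: sensitive parameters get many cells (fine spacing), insensitive ones few (coarse spacing). The helpers below are an illustrative sketch under that assumption, not the paper's code.

```python
def build_grid(lo, hi, n_cells):
    """Split [lo, hi] into n_cells equal cells; choose a larger n_cells
    for sensitive parameters and a smaller one for insensitive ones."""
    step = (hi - lo) / n_cells
    return [lo + i * step for i in range(n_cells + 1)]


def position_key(x, grids):
    """Map a continuous position to a tuple of cell indices, i.e. the
    index of its entry in the multi-dimensional trajectory matrix."""
    key = []
    for xi, grid in zip(x, grids):
        lo, hi, n = grid[0], grid[-1], len(grid) - 1
        idx = int((xi - lo) / (hi - lo) * n)   # locate the cell...
        key.append(min(max(idx, 0), n - 1))    # ...and clamp to the range
    return tuple(key)
```

With `grids = [build_grid(-100, 100, 20), build_grid(-100, 100, 4)]` (one sensitive and one insensitive dimension), the position `[0.0, 0.0]` maps to the matrix cell `(10, 2)`.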
The flow chart of the whole optimization process based on our MCPSO-K model is summarized in Figure 8, in which both the position information and the velocity information are updated through information interaction at the swarm level and the particle level.


Further Discussion about the MCPSO-K Model
Firstly, in order to dynamically adjust the ability of exploration and exploitation during the search procedure, it is beneficial to maintain frequent information interaction among the swarms dispersed in the search space at the beginning of the optimization. As the optimization process progresses, it is more beneficial to decrease the frequency of information interaction at the swarm level and increase the frequency of information interaction at the particle level, so that the swarms can effectively locate an optimum in a local space. In fact, the CPSO-S_K and CPSO-H_K algorithms presented by van den Bergh and Engelbrecht also adopt some form of shared memory to build the context vector, and it is hypothesized that this vector does not have to be updated during every cycle for the algorithm to work well. Thus, the information interaction mechanism at the swarm level in our MCPSO-K should be dynamically adjusted, while the information interaction mechanism at the particle level participates throughout the whole optimization process.
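The thinning of swarm-level exchange can be illustrated with a simple schedule. The linear-growth rule below is an assumption for illustration only (the paper does not fix a concrete schedule): segment j lasts proportionally to j, so the h swarm-level exchange nodes move further apart as the search progresses.

```python
def exchange_generations(max_gen, h):
    """Generations at which swarm-level information exchange occurs.

    Assumed schedule: the j-th of the h segments spans a share
    j / (1 + 2 + ... + h) of max_gen, so the gaps between exchange
    nodes grow linearly and swarm-level interaction becomes rarer
    as the optimization progresses."""
    total = h * (h + 1) // 2
    nodes, g = [], 0
    for j in range(1, h + 1):
        g += j * max_gen // total
        nodes.append(g)
    return nodes
```

For `max_gen = 100` and `h = 4` this yields the nodes `[10, 30, 60, 100]`: four exchanges, with gaps of 10, 20, 30 and 40 generations between them.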
Secondly, no matter what method is used, increasing the number of swarms and particles inevitably aggravates the computational burden. Determining a reasonable number of particles in each swarm is another way to reduce the computational cost without influencing the performance of the MCPSO-K algorithm.
Thirdly, each particle in PSO represents a complete vector that can be used as a potential solution, and each update operation in the iterative procedure is also performed on a full-dimensional vector. This allows for the possibility that some components of the vector have moved closer to the best solution while others have actually moved away from it. In the MCPSO-K model, we discuss a new updating mechanism in which each swarm frequently updates only some components of the vector and updates the other components through information interaction at the swarm level.
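A stripped-down sketch of this component-wise mechanism is shown below, assuming each swarm owns a block of dimensions and shares the rest through a context vector, in the spirit of the cooperative split in CPSO-S_K. The actual MCPSO-K update rules are Equations (4)-(13); this sketch deliberately abstracts them away as small random perturbations.

```python
import random

def cooperative_step(swarms, context, objective):
    """One cooperative step: each swarm perturbs only the dimensions it
    owns, reads the remaining dimensions from the shared context vector,
    and writes back to the context only when it finds an improvement."""
    context = list(context)
    for owned_dims, particles in swarms.items():
        for particle in particles:
            candidate = list(context)
            for d in owned_dims:                   # update own block only
                candidate[d] = particle[d] + random.uniform(-0.1, 0.1)
            if objective(candidate) < objective(context):
                context = candidate                # swarm-level share-back
    return context
```

Because a candidate is accepted only when it improves the objective, the returned context vector is never worse than the one passed in, even though each swarm touches only its own components.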
The discussions above are taken into account in the implementation process of MCPSO-K in Table 1 and are discussed briefly on the basis of the experiments and analysis in the subsequent sections. To reduce the number of uncertainty factors in MCPSO-K, the information exchange frequency at the swarm level is used as the stopping criterion for many operations in the optimization process.
Table 1. The pseudocode of the implementation process of MCPSO-K.

The Implementation Process of MCPSO-K
n: the number of swarms
p: each swarm's population size
h: information exchange frequency at the swarm level and specific iterative node for orthogonal initialization
Max_gen: max number of generations for the stopping criteria
e_similarity: evaluation index for similarity between particles
ε_similarity: adaptive index for the similarity evaluation index
g = [g_1, g_2, ..., g_m]: the number of partitions in each dimension

Implement a sensitivity analysis on the m-dimensional optimization problem
Determine the value of g = [g_1, g_2, ..., g_m]
Determine an orthogonal combination of parameter value intervals based on the sensitivity analysis
Create n swarms, each with p particles
Initialize n × p particles and create a trajectory database based on the fitness values of the particles
For j = 1 to h
    For i = 1 to j × Max_gen/h
        For k = 1 to n
            Update the position of each particle in swarm k and calculate the fitness value
            Calculate the similarity index for each particle
            If the similarity index calculated above ≤ h × ε_similarity × e_similarity / j
                Update the fitness value of the particle from the trajectory database
            End
            Update the trajectory database
        End
        Update the position of each particle by the cooperative mechanism defined by Equations (4)-(13)
        Execute the orthogonal initialization operation
        Update the best fitness of the n swarms
    End
End
Output the best fitness value

It must be said that, to streamline the process, h indicates the information exchange frequency at the swarm level and can be regarded as a specific iterative node for the orthogonal initialization.
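The loop structure of Table 1 can be sketched as a skeleton, with the per-swarm update, the cooperative mechanism of Equations (4)-(13) and the orthogonal re-initialization abstracted into a single callback. Note that, read literally, the bounds in Table 1 mean segment j runs j × Max_gen/h generations, so the skeleton executes Max_gen × (h + 1)/2 generations in total; this sketch follows that literal reading.

```python
def mcpso_k_skeleton(n_swarms, max_gen, h, step):
    """Loop skeleton of the MCPSO-K pseudocode in Table 1.

    `step(phase, j, swarm_or_none)` is a placeholder for the real work:
    phase "update" is the per-swarm position/fitness/similarity update,
    phase "cooperate" covers the cooperative mechanism of Equations
    (4)-(13), the orthogonal initialization and the best-fitness update.
    """
    generations = 0
    for j in range(1, h + 1):                  # h swarm-level segments
        for _ in range(j * max_gen // h):      # generations in segment j
            for swarm in range(n_swarms):
                step("update", j, swarm)
            step("cooperate", j, None)
            generations += 1
    return generations
```

For `n_swarms = 3`, `max_gen = 100` and `h = 4`, the skeleton runs 25 + 50 + 75 + 100 = 250 generations, with 3 per-swarm updates and 1 cooperative phase per generation.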

Test Functions
We employ different types of benchmark functions, expressed as Equations (16)-(20), to test and compare the performance of the proposed MCPSO-K algorithm. Table 2 lists the parameters used for these experiments, where the "iteration" and "threshold" columns list two normal stopping criteria for the iteration and their base values.
Rosenbrock function (unimodal)
Quadric function (unimodal)
Ackley function (multimodal)
Generalized Rastrigin function (multimodal)
Generalized Griewank function (multimodal)

In addition, we assume that the ranges of all independent variables are set to the interval [−100, 100]. All other relevant parameters of these functions are shown in Table 2.
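For reference, standard textbook forms of these five benchmarks are sketched below in plain Python. The exact constants of the paper's Equations (16)-(20) are not reproduced here, so treat these as the commonly used definitions (e.g. the Ackley constants 20, 0.2 and 2π) rather than the paper's exact formulas; "Quadric" is taken to be the Schwefel 1.2 function, which is an assumption based on common usage.

```python
import math

def rosenbrock(x):          # unimodal, narrow curved valley
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def quadric(x):             # unimodal (Schwefel 1.2), non-separable
    return sum(sum(x[:i + 1]) ** 2 for i in range(len(x)))

def ackley(x):              # multimodal, nearly flat outer region
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2.0 * math.pi * v) for v in x) / n
    return -20.0 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20.0 + math.e

def rastrigin(x):           # multimodal, regular grid of local minima
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def griewank(x):            # multimodal, product term couples dimensions
    s = sum(v * v for v in x) / 4000.0
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1.0
```

All five attain their global minimum value 0: Rosenbrock at (1, ..., 1) and the others at the origin.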

Test on The Number of Computation on Similar Positions
The purpose of introducing a particle trajectory knowledge base is to reduce the computational cost of similar positions of different particles during the whole iterative procedure. The number of computations on similar positions of the functions defined above is tested to prove the necessity of the trajectory knowledge base. Table 3 lists the average numbers of similar computations and identical computations during the iterative procedure for each test function over 10 runs. We evaluate similar and identical computations of the functions using Equations (14) and (15) and collect data for five similarity levels using the traditional PSO algorithm. Because the convergence generations differ, the numbers of inefficient computations are collected until convergence. The denominator in each column of Table 3 is the number of efficient search calculations. In Table 3, the evaluation "criteria" can also be described as a variable range ∆x of each parameter, where "criteria ≤ 5" also means the variable range ∆x ≤ √25/20. Such a small fluctuation of the parameter values will not have a significant effect on the fitness value of the particles.

Reliability and Accuracy of Algorithms
The reliability of an algorithm is an important performance evaluation index, and the sensitivity to the quality of the initialization is a significant aspect of this reliability index. The fitness value ranges of each test function in 10 representative runs after specific iterations are presented in Figures 9-18. Figures 9-13 show the fitness value ranges using the traditional PSO algorithm, and Figures 14-18 show the fitness value ranges using the MCPSO-K algorithm. Table 4 lists the fluctuation of the historical convergence data of the test functions in 10 runs using PSO. Table 5 lists the best final convergence value of each test function in 10 representative runs out of 50 runs using PSO. Table 6 lists the fitness values of the test functions using MCPSO-K at the same iteration. Table 7 lists the best final convergence values with MCPSO-K. It should be noted that, to make a reasonable comparison of the traditional PSO and MCPSO-K, the numbers of swarms and particles in each swarm are equal. Thus, the number of iterations can be regarded as a good evaluation index for the performance of the algorithms. From these data, the multi-swarm cooperative mechanism obtains better search accuracy and search stability. In addition, the cooperative approach introduced here performs better and better as the dimensionality of the problem increases (Figure 19).


Conclusions
In this paper, we present an improved multi-swarm cooperative PSO algorithm and compare its performance with the traditional PSO algorithm. The test function results indicate that the MCPSO-K presented in this paper performs better than the traditional PSO in the aspects of convergence, computational efficiency and avoiding premature convergence.
The main differences among MCPSO-K, PSO and other variants can be summarized as follows: (1) A new information interaction mechanism is conceived, which is similar to the gravitational action mechanism in astrophysics. In this way, the algorithm can update the velocity of each particle dynamically; in other words, there is an adaptive mechanism that controls the search speed based on the fitness values and the distances of the swarms. (2) MCPSO-K adopts an orthogonal initialization method to guarantee a uniform distribution of particles in the search space, which avoids local minima and premature convergence. (3) To greatly decrease the computational cost, a matrix recording the particle trajectory information is proposed and used during the iteration. By defining a reasonable error range, a group of particles whose variable combinations are similar to those of particles in the database can be assigned the values in the matrix directly. Thus, the computational cost of repetitive and useless searches is decreased. The test results show that the introduction of the particle trajectory database can decrease the computational cost significantly without influencing the final convergence value.

Figure 1. Flowchart of the traditional particle swarm optimization algorithm.


Figure 2. Social sharing of information among swarms and particles in the multi-swarm orthogonal cooperative PSO algorithm with a particle trajectory knowledge base.


Figure 3. The framework of the orthogonal initialization mechanism.


Figure 4. The first partition method for the particle domain.

Figure 5. The second partition method for the particle domain.


Figure 6. The third partition method for the particle domain.


Symmetry 2017, 9, 15

Figure 9. The fitness value of the Rosenbrock function during 10 runs using PSO.


Figure 10. The fitness value of the Quadric function during 10 runs using PSO.

Figure 11. The fitness value of the Ackley function during 10 runs using PSO.


Figure 12. The fitness value of the generalized Rastrigin function during 10 runs using PSO.


Figure 13. The fitness value of the generalized Griewank function during 10 runs using PSO.

Figure 14. The fitness value of the Rosenbrock function using MCPSO-K.

Figure 16. The fitness value of the Ackley function using MCPSO-K.

Figure 17. The fitness value of the generalized Rastrigin function using MCPSO-K.


Figure 18. The fitness value of the generalized Griewank function using MCPSO-K.

Figure 19. The comparison of algorithm convergence of PSO and MCPSO-K.


Table 2. Parameters used for the experiments.

Table 3. The number of inefficient computations.

Table 4. The fluctuation of the historical convergence data during 10 runs using PSO.

Table 5. The best final convergence value using PSO.


Table 7. The best final convergence value using MCPSO-K.
