Article

A Novel Coupling Algorithm Based on Glowworm Swarm Optimization and Bacterial Foraging Algorithm for Solving Multi-Objective Optimization Problems

1 Complex System and Computational Intelligent Laboratory, Taiyuan University of Science and Technology, Taiyuan 030024, China
2 Jiaxing Vocational Technical College, Jiaxing 314001, China
* Author to whom correspondence should be addressed.
Algorithms 2019, 12(3), 61; https://doi.org/10.3390/a12030061
Submission received: 28 December 2018 / Revised: 30 January 2019 / Accepted: 5 March 2019 / Published: 11 March 2019

Abstract:
In the real world, multi-objective optimization problems (MOPs) and dynamic optimization problems can be seen everywhere. During the last decade, among various swarm intelligence algorithms for multi-objective optimization, glowworm swarm optimization (GSO) and the bacterial foraging algorithm (BFO) have attracted increasing attention from scholars. Although many scholars have proposed improvement strategies for GSO and BFO to keep a good balance between convergence and diversity, many problems remain to be solved carefully. In this paper, a new coupling algorithm based on GSO and BFO (MGSOBFO) is proposed for solving dynamic multi-objective optimization problems (dMOP). MGSOBFO achieves a good balance between exploration and exploitation by dividing the population into two parts: Part I is in charge of exploitation by GSO, and Part II is in charge of exploration by BFO. At the same time, simulated binary crossover (SBX) and polynomial mutation are introduced into MGSOBFO to enhance the convergence and diversity ability of the algorithm. In order to show the excellent performance of the algorithm, we experimentally compare MGSOBFO with four algorithms on benchmark functions. The results suggest that the coupling algorithm has good performance and outperforms the other algorithms in dealing with dMOP.

1. Introduction

With the development of society, real-world optimization problems arising in industry and science have become increasingly common [1,2,3]. Usually, these optimization problems are not independent single objectives, but rather involve a set of objective functions. Optimization problems with a set of objective functions are known as multi-objective optimization problems (MOPs). In general, an MOP requires a set of optimal tradeoff solutions in the case of two or more conflicting objectives. Typical examples include scheduling problems with limited available resources, vehicle routing in traffic networks, etc.
Generally speaking, swarm intelligence optimization algorithms (SIOAs) are mostly inspired by the behaviors of biological swarm systems (e.g., bird flocking, foraging, and courtship). There are several popular SIOAs, such as the genetic algorithm (GA) [4], the differential evolution algorithm (DE) [5], particle swarm optimization (PSO) [6,7], ant colony optimization (ACO) [8], the artificial bee colony (ABC) [9,10], the bat algorithm (BA) [11,12], the bacterial foraging optimization algorithm (BFOA) [13], cuckoo search (CS) [14,15,16], and glowworm swarm optimization (GSO) [17,18]. In the past decades, these SIOAs have been widely applied to various optimization problems [19,20]. When real-life projects or systems become large, very complex optimization problems emerge, such as large-scale optimization problems and multi-objective optimization problems (MOPs). However, most of these algorithms were originally designed to solve comparatively simple practical problems and are not directly suitable for such complex problems, so the performance of most SIOAs faces great challenges. Therefore, strong and effective SIOAs are required.
Up to now, many swarm intelligence optimization algorithms have been proposed to solve multi-objective optimization problems. For example, Deb et al. proposed NSGA [21]. The algorithm ranks the population hierarchically according to the dominance and non-dominance relations between individuals. However, its performance suffers from high computational complexity, the lack of an elitism strategy, and a heavy reliance on a sharing parameter. In 2000, Deb proposed a non-dominated sorting based multi-objective evolutionary algorithm (MOEA), called the non-dominated sorting genetic algorithm II (NSGA-II) [22], to address these issues. Zhang et al. proposed MOEA/D [23], which decomposes an MOP into multiple scalar sub-problems and then optimizes the sub-problems simultaneously. Horn et al. proposed NPGA [24], which integrated the concept of Pareto dominance into the selection operation of the GA and applied niching to the entire population. Zitzler et al. proposed SPEA2 [25], which combines fitness assignment, archive truncation, and density-based selection strategies. Gong et al. [26] used the strength Pareto genetic algorithm with immunity as a tool to solve multi-objective optimization problems in the maintenance of aircraft equipment and proposed the PNIA algorithm.
In this paper, we focus on improving SIOAs for solving MOPs. As can be seen from the above review, most swarm intelligence algorithms have their own advantages and disadvantages. According to the no free lunch theorem [27], it is difficult for one algorithm to solve all kinds of optimization problems. Recently, an ensemble strategy was proposed to benefit from both the availability of diverse approaches and the need to tune the associated parameters. Research has shown the general applicability of the ensemble strategy in solving diverse problems by using different population-based optimization algorithms [28]. Moreover, algorithms can be coupled under different rules, such as parallel, serial, and nested schemes. At present, many coupling algorithms have been studied, and this approach has become a new research hotspot. Therefore, a new idea is to couple two or more algorithmic strategies so that the resulting algorithm inherits the advantages of the different algorithms and overcomes the disadvantages of a single algorithm. In this paper, a coupling algorithm based on GSO and BFO is designed to deal with MOPs [29,30]. As is well known, a good balance between exploration and exploitation is important for an optimization algorithm. MGSOBFO achieves such a balance by dividing the population into two parts: Part I is in charge of exploitation by GSO, and Part II is in charge of exploration by BFO. At the same time, simulated binary crossover (SBX) [22] and polynomial mutation [22] are introduced into MGSOBFO to enhance the convergence and diversity ability of the algorithm. These methods not only improve the convergence ability of the algorithm, but also extend the coverage of the population to avoid being trapped in local optima.
The rest of the article is organized as follows. In Section 2, we give a brief introduction to multi-objective optimization problems and the standard GSO and BFO algorithms. In Section 3, we introduce the details of the proposed approach. Section 4 presents the comparison results and experimental analyses of the MGSOBFO algorithm. Finally, Section 5 gives some conclusions and directions for future work.

2. Basic Concepts

2.1. The Multi-Objective Optimization Problems

In general, a multi-objective optimization problem can be defined through a vector function f that maps a tuple of n decision variables to a tuple of m objectives. Formally:
$$\min\ y = f(x) = (f_1(x), f_2(x), \ldots, f_m(x)), \quad \text{subject to } x = (x_1, x_2, \ldots, x_n) \in R^n$$
where $x$ is called the decision vector, $f_m(x)$ is the m-th sub-objective function, and $R^n$ is the parameter (decision variable) space.
As the objectives in multi-objective optimization problems conflict with each other, no single solution can be best with respect to all objectives. So, the best tradeoffs among the objectives can be defined in terms of Pareto optimality. A solution vector $x^a = (x_1^a, x_2^a, \ldots, x_n^a)^T$ is said to dominate another vector $x^b = (x_1^b, x_2^b, \ldots, x_n^b)^T$ if and only if $f_i(x^a) \le f_i(x^b)$ for all $i \in \{1, 2, \ldots, m\}$ and $f_j(x^a) < f_j(x^b)$ for at least one $j \in \{1, 2, \ldots, m\}$. This dominance relationship is denoted $x^a \succ x^b$ [31].
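For concreteness, this dominance test can be expressed as a small helper function. The following is a minimal Python/NumPy sketch of our own (the function name `dominates` and the minimization convention are illustrative assumptions, not part of the original paper):

```python
import numpy as np

def dominates(f_a, f_b):
    """Return True if objective vector f_a Pareto-dominates f_b (minimization).

    f_a dominates f_b when it is no worse in every objective and strictly
    better in at least one objective.
    """
    f_a, f_b = np.asarray(f_a, dtype=float), np.asarray(f_b, dtype=float)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

# Example: (1, 2) dominates (2, 2); (1, 3) and (2, 2) are mutually non-dominated.
assert dominates([1, 2], [2, 2])
assert not dominates([1, 3], [2, 2]) and not dominates([2, 2], [1, 3])
```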

2.2. Standard Glowworm Swarm Optimization Algorithm (GSO)

Glowworm swarm optimization (GSO) [17,18] is a novel swarm intelligence search algorithm. The idea of the algorithm is to simulate the social behaviors of fireflies in nature, which use luciferin to communicate with each other. The standard GSO algorithm consists of four stages, namely initialization, luciferin update, position update, and perception-range update. The four stages are described in detail below.
(1)
Initialization
In the initialization stage, fireflies are randomly distributed in the feasible decision region. In addition, the initial luciferin value and the sensing radius are the same for each firefly.
(2)
Updating luciferin
The luciferin of a firefly is directly related to its location in the search space: the higher the evaluation value of a position, the higher the fitness of the individual and, consequently, the larger its luciferin value. The luciferin update equation is as follows.
$$l_i(t+1) = (1-\rho)\, l_i(t) + \gamma\, J(x_i(t+1))$$
where $\rho$ denotes the luciferin volatility parameter of the firefly; $l_i(t)$ denotes the luciferin value of firefly $i$ in the t-th iteration; $\gamma$ denotes the luciferin update rate; and $J(x_i(t+1))$ denotes the evaluation value of firefly $i$ at position $x_i(t+1)$ in the (t+1)-th iteration.
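As a concrete illustration, the luciferin update above is a single vectorized operation. The sketch below is our own, with assumed parameter values for ρ and γ and a placeholder fitness array J:

```python
import numpy as np

rho, gamma = 0.4, 0.6        # assumed volatility and luciferin-update rates
luciferin = np.zeros(50)     # l_i(t) for a swarm of 50 fireflies
J = np.random.rand(50)       # evaluation values J(x_i(t+1)) at the new positions

# l_i(t+1) = (1 - rho) * l_i(t) + gamma * J(x_i(t+1))
luciferin = (1.0 - rho) * luciferin + gamma * J
```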
(3)
Roulette selection
In each iteration, every firefly looks for fireflies whose luciferin value is larger than its own within its sensing range. The direction of the position update is then determined by the roulette-wheel method, and the selection probability of a neighboring firefly is also determined by its luciferin value. The GSO algorithm selects the individuals that meet the following two conditions to form a neighborhood group.
  • Glowworm $j$ needs to be within the perception radius of glowworm $i$;
  • The luciferin of glowworm $j$ is brighter than that of glowworm $i$.
The specific equation for the selection probability of the neighboring firefly is as follows.
$$p_{ij}(t) = \frac{l_j(t) - l_i(t)}{\sum_{k \in N_i(t)} \big( l_k(t) - l_i(t) \big)}$$
$$N_i(t) = \big\{\, j : d_{i,j}(t) < r_d^i(t);\ l_i(t) < l_j(t) \,\big\}$$
where $j \in N_i(t)$, and $N_i(t)$ represents the neighborhood set of firefly $i$ in the t-th iteration; $r_d^i(t)$ represents the decision radius of firefly $i$ in the t-th iteration; $d_{i,j}(t)$ represents the spatial distance between fireflies $i$ and $j$ in the t-th iteration; and $p_{ij}(t)$ represents the probability that firefly $i$ moves towards firefly $j$ in the t-th iteration.
When the neighboring firefly $j$ of firefly $i$ has been selected, firefly $i$ updates its position according to the following equation.
$$x_i(t+1) = x_i(t) + s \left( \frac{x_j(t) - x_i(t)}{\left\| x_j(t) - x_i(t) \right\|} \right)$$
where $s$ represents the moving step size of the firefly, and $\left\| x_j(t) - x_i(t) \right\|$ represents the Euclidean distance between firefly $i$ and firefly $j$.
(4)
Neighborhood range update rule
After the position of the firefly is updated, its perception range is dynamically adjusted. The size of the perception radius is determined by the number of fireflies within it. The update equation for the perception range is as follows.
$$r_d^i(t+1) = \min\Big\{ r_s,\ \max\big\{ 0,\ r_d^i(t) + \beta \big( n_t - |N_i(t)| \big) \big\} \Big\}$$
where $r_s$ represents the maximum perception radius; $n_t$ represents the threshold for the size of the firefly's neighborhood set; and $\beta$ is the parameter that adjusts the firefly's dynamic perception range.
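Putting the neighborhood, roulette-selection, movement, and range-update rules together, one GSO iteration might look like the following sketch (our own illustration; the step size s, β, r_s, and n_t are assumed constants, and `luciferin` is the array updated above):

```python
import numpy as np

def gso_step(pos, luciferin, radius, s=0.03, beta=0.08, r_s=5.0, n_t=5):
    """One position and perception-range update for every firefly."""
    new_pos = pos.copy()
    for i in range(len(pos)):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        # Neighborhood set: closer than r_d^i and brighter than firefly i
        nbrs = np.where((dist < radius[i]) & (luciferin > luciferin[i]))[0]
        if len(nbrs) > 0:
            # Roulette-wheel probability proportional to the luciferin gap
            p = luciferin[nbrs] - luciferin[i]
            j = np.random.choice(nbrs, p=p / p.sum())
            # Move a step of length s towards the chosen neighbour
            direction = pos[j] - pos[i]
            new_pos[i] = pos[i] + s * direction / np.linalg.norm(direction)
        # Shrink or grow the perception range towards n_t neighbours
        radius[i] = min(r_s, max(0.0, radius[i] + beta * (n_t - len(nbrs))))
    return new_pos, radius
```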

2.3. Standard Bacterial Foraging Algorithm (BFO)

Passino proposed the BFO algorithm [13] to solve corresponding optimization problems in 2001. Compared with well-known evolutionary algorithms such as DE, the genetic algorithm, and PSO, BFO shows excellent performance. The bacterial foraging optimization algorithm is inspired by the foraging strategies of E. coli bacterium cells. The basic principle of the algorithm is to regard each Escherichia coli bacterium as a solution. BFO completes the search for the optimal solution through four bacterial foraging behaviors: chemotaxis, swarming, reproduction, and elimination and dispersal. The four parts are as follows:
The Chemotaxis
Chemotaxis is achieved by two operations: swimming and tumbling. When a bacterium meets a favorable environment, it continues swimming in the same direction; when it meets an unfavorable environment, it tumbles. The movement can be defined as follows:
$$x^i(j+1, k, l) = x^i(j, k, l) + C(i)\, \frac{\Delta(i)}{\sqrt{\Delta^{T}(i)\,\Delta(i)}}$$
where $x^i(j, k, l)$ represents the position of bacterium $i$ at the j-th chemotactic, k-th reproduction, and l-th elimination–dispersal step; $C(i)$ is the chemotaxis step size; and $\Delta(i)/\sqrt{\Delta^{T}(i)\Delta(i)}$ is a random unit direction of movement.
Assuming that the objective function value of bacterium $i$ at $x^i(j+1, k, l)$ is $f(x^i(j+1, k, l))$, bacterium $i$ will continue to move in the same direction while $f(x^i(j+1, k, l)) < f(x^i(j, k, l))$, until the objective function value no longer decreases or the maximum number of swim steps is reached. In a sense, the chemotaxis operation is a complex movement process interweaving tumbling and swimming, in which tumbling determines the direction of optimization and swimming determines the degree of search along that direction.
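A minimal sketch of this tumble-and-swim logic follows (our own Python illustration for a single-objective cost f; the step size C and swim length N_s are assumed values):

```python
import numpy as np

def chemotaxis_step(x, f, C=0.1, N_s=4):
    """Tumble once, then keep swimming while the objective keeps improving."""
    delta = np.random.uniform(-1.0, 1.0, size=x.shape)
    direction = delta / np.sqrt(delta @ delta)   # random unit direction
    best_val = f(x)
    for _ in range(N_s):
        candidate = x + C * direction            # one chemotactic move
        if f(candidate) < best_val:              # favorable environment: keep swimming
            x, best_val = candidate, f(candidate)
        else:                                    # unfavorable: stop (tumble next time)
            break
    return x, best_val

# Example on the sphere function
x_new, val = chemotaxis_step(np.array([1.0, -2.0]), lambda v: float(np.sum(v * v)))
```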
The Swarming
In the BFO algorithm, individuals do not forage independently. They release two kinds of signals in the process of foraging: an attractant signal and a repellent signal. The attractant signal is mainly used to attract other bacteria to come close, while the repellent signal is used to keep other bacteria at a distance. The swarming effect can be expressed by Equations (8) and (9):
$$f_{cc}\big(x^i, P(j,k,l)\big) = \sum_{r=1}^{S} f_{cc}\big(x^i, x^r(j,k,l)\big) = \sum_{r=1}^{S} \Big[ -d_{attractant} \exp\Big( -w_{attractant} \sum_{p=1}^{n} \big(x_p^i - x_p^r\big)^2 \Big) \Big] + \sum_{r=1}^{S} \Big[ h_{repellent} \exp\Big( -w_{repellent} \sum_{p=1}^{n} \big(x_p^i - x_p^r\big)^2 \Big) \Big]$$
$$J(i, j+1, k, l) = J(i, j, k, l) + f_{cc}$$
where $f_{cc}(x^i, P(j,k,l))$ represents an objective-function term that varies with the population distribution; $d_{attractant}$ and $w_{attractant}$ represent the release quantity and diffusion rate of the attractant signal, respectively; and $h_{repellent}$ and $w_{repellent}$ represent the release quantity and diffusion rate of the repellent signal, respectively.
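The cell-to-cell term can be sketched as follows (our own illustration; the attractant/repellent constants are typical values from the BFO literature, not taken from this paper):

```python
import numpy as np

def swarming_term(x_i, population, d_attr=0.1, w_attr=0.2, h_rep=0.1, w_rep=10.0):
    """Attraction/repulsion value f_cc of bacterium x_i with respect to the swarm."""
    sq_dist = np.sum((population - x_i) ** 2, axis=1)   # sum_p (x_p^i - x_p^r)^2 per bacterium
    attract = -d_attr * np.exp(-w_attr * sq_dist)
    repel = h_rep * np.exp(-w_rep * sq_dist)
    return float(np.sum(attract) + np.sum(repel))
```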
The Reproduction
E. coli will gradually grow as nutrients continue to be absorbed. Under appropriate conditions, each E. coli bacterium splits asexually into two bacteria, whereas bacteria with poor nutrition are eliminated. In the reproduction step, $J_{health}^i$ is used to represent the energy value of the i-th bacterium, which reflects its foraging ability. The bacteria are then sorted according to their health values: the half with the better health values is used for reproduction, and the other half is eliminated. The new offspring have exactly the same foraging ability as the original bacteria. The value of $J_{health}^i$ is calculated by:
$$J_{health}^i = \sum_{j=1}^{N_c} f\big(x^i(j, k, l)\big)$$
where $J_{health}^i$ represents the energy value of the i-th bacterium; $N_c$ indicates the number of chemotactic steps; and $f(x^i(j, k, l))$ is the fitness value of the i-th bacterium after the j-th chemotaxis, k-th reproduction, and l-th elimination–dispersal operations.
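A sketch of this health-based reproduction step (our own illustration; `fitness_history` is assumed to hold one fitness value per bacterium and per chemotactic step):

```python
import numpy as np

def reproduce(population, fitness_history):
    """Keep the healthier half of the bacteria and let each survivor split in two."""
    # J_health^i: fitness accumulated over the chemotactic lifetime of bacterium i
    health = fitness_history.sum(axis=1)
    order = np.argsort(health)                 # smaller accumulated cost = healthier
    survivors = population[order[: len(population) // 2]]
    return np.vstack([survivors, survivors.copy()])   # each survivor splits into two
```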
The Elimination and Dispersal Operation
After reproduction, each bacterium executes the elimination and dispersal operation with a certain probability. The basic principle of this operation is similar to the mutation operation in the genetic algorithm: it allows the search to continue in unexplored areas and prevents the population from falling into local minima. The dispersal operation can be defined as follows:
$$x = \begin{cases} x_{new}, & \text{if } q < p_{ed} \\ x, & \text{otherwise} \end{cases}$$
where $x_{new}$ denotes a new position obtained through re-initialization, $q\ (0 < q < 1)$ is a uniformly distributed random number, and $p_{ed}$ is the elimination–dispersal probability.
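The elimination–dispersal rule therefore amounts to re-initializing each bacterium with probability p_ed, for example (our own sketch; the search bounds are assumed):

```python
import numpy as np

def eliminate_disperse(population, lower, upper, p_ed=0.25):
    """Re-initialize each bacterium with probability p_ed, otherwise keep it."""
    new_pop = population.copy()
    for i in range(len(population)):
        if np.random.rand() < p_ed:            # q < p_ed: disperse to a random position
            new_pop[i] = np.random.uniform(lower, upper, size=population.shape[1])
    return new_pop
```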

3. The MGSOBFO Algorithm

Originally, the GSO and BFO algorithms were proposed to solve single-objective optimization problems rather than multi-objective optimization problems (MOPs). Therefore, it is meaningful to design corresponding strategies so that these two algorithms can be used to solve multi-objective optimization problems. In this paper, we propose a new coupling algorithm based on GSO and BFO (MGSOBFO). Next, we introduce each step of the transition from the single-objective algorithms to the multi-objective algorithm.

3.1. Fast Non-Dominated Sorting Approach and Crowding Distance

Before introducing the multi-objective glowworm–bacterial foraging algorithm, we first introduce two basic concepts: fast non-dominated sorting and the crowding distance [22].
(1)
Fast Non-dominated Sorting Approach
First, two values are calculated for each solution p: ① the domination count $n_p$, i.e., the number of solutions which dominate the solution p, and ② $S_p$, the set of solutions that the solution p dominates. The pseudocode of the fast non-dominated sorting procedure is listed in Algorithm 1:
Algorithm 1: Fast non-dominated sort approach
for each p ∈ P
  S_p = ∅, n_p = 0
  for each q ∈ P
    if p ≻ q              // if p dominates q
      then S_p = S_p ∪ {q}
    else if q ≻ p          // if q dominates p
      then n_p = n_p + 1
  end
  if n_p = 0              // p belongs to the first front
    then p_rank = 1
      F_1 = F_1 ∪ {p}
  end
end
i = 1                     // initialize the front counter
while F_i ≠ ∅
  Q = ∅                   // Q stores the members of the next front
  for each p ∈ F_i
    for each q ∈ S_p
      n_q = n_q − 1
      if n_q = 0          // q belongs to the next front
        then q_rank = i + 1
          Q = Q ∪ {q}
  i = i + 1
  F_i = Q
end
(2)
Crowding-distance calculation approach
The crowding-distance sorting procedure is shown in Figure 1a. The crowding-distance computation requires sorting the population according to each objective function value in ascending order of magnitude. All population members are assigned a distance metric, so that two solutions can be compared in terms of their proximity to other solutions. The boundary solutions are assigned an infinite distance value. As illustrated in Figure 1b, the crowding distance of the i-th solution in its front is based on the side lengths of the cuboid formed by its two neighbors. The crowded-comparison operator guides the selection process towards a uniformly spread-out Pareto-optimal front. The crowding distance of each individual can be computed by Equation (12).
$$d_i = \sum_{k=1}^{m} \big| f_k(i-1) - f_k(i+1) \big|$$
In most situations, the last front is accepted only partially; in that case, the solutions with larger crowding distances are picked.
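A compact NumPy version of this crowding-distance computation, following Equation (12) literally (our own sketch; note that the NSGA-II implementation in [22] additionally normalizes each objective by its range):

```python
import numpy as np

def crowding_distance(F):
    """Crowding distance of Equation (12) for a front F of shape (n solutions, m objectives)."""
    n, m = F.shape
    d = np.zeros(n)
    for k in range(m):
        order = np.argsort(F[:, k])             # sort the front along objective k
        d[order[0]] = d[order[-1]] = np.inf     # boundary solutions get infinite distance
        # |f_k(i-1) - f_k(i+1)| accumulated over all objectives
        d[order[1:-1]] += np.abs(F[order[2:], k] - F[order[:-2], k])
    return d
```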

3.2. The Self-Adaptive for Chemotaxis

In the single-objective bacterial foraging algorithm, the better individual is chosen whenever a bacterium moves, based on a single fitness value. In a multi-objective algorithm, however, the relative quality of two individuals cannot be concluded by comparing only one adaptive value as in the single-objective case. Therefore, we define a new Pareto-dominance-based relation to compare two individuals.
In MGSO-BFO, let $x_i$ and $x_j$ be any two individuals in the population. The dominance relationship between $x_i$ and $x_j$ is defined as follows:
(a)
if $x_i \succ x_j$, then $x_i$ is better than $x_j$;
(b)
if $x_j \succ x_i$, then $x_j$ is better than $x_i$;
(c)
if there is no dominance relationship between $x_i$ and $x_j$, the different fitness values are normalized. The process is as follows:
First, the proportion W of each individual in the objective values is calculated:
$$W_i = \frac{f_i}{f_i + f_j}$$
$$W_j = \frac{f_j}{f_i + f_j}$$
Finally, the weighted sum is given as follows:
$$F = \sum_{k=1}^{M} \delta_k \, \big| W_j - W_i \big|$$
where $\delta_k$ ($0 < \delta_k < 1$ and $\sum_{k=1}^{M} \delta_k = 1$) represents the weight coefficient of each objective, and $M$ represents the number of objective functions.
In the chemotaxis operation, each candidate position of an individual is compared using the Pareto dominance relationship described above. However, the fixed step size of the original algorithm cannot meet the convergence requirements, so we give a new definition of the step size. The calculation formula is shown as follows:
$$C_D = \frac{S_D}{j + k + l}$$
where $C_D$ represents the current step size in the D-th dimension and $S_D$ represents the initial step size in the D-th dimension; $j$, $k$, and $l$ are the counters of the chemotaxis, reproduction, and elimination–dispersal loops, respectively.
It can be seen from the above formula that $C_D$ is large at the beginning, which is conducive to global search; as the algorithm iterates, $C_D$ decreases, which is conducive to local search in the later stages.
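The comparison rule of Equations (13)–(15) and the adaptive step of Equation (16) might be implemented as follows (our own sketch; `dominates` is the helper from Section 2.1, the weights δ are assumed uniform, and the loop counters are assumed to start at 1 so that the denominator is positive):

```python
import numpy as np

def compare(f_i, f_j, delta=None):
    """Pareto dominance first; otherwise the weighted gap F of Equation (15).

    Returns +1 if x_i is better, -1 if x_j is better, and the scalar F when
    the two individuals are mutually non-dominated.
    """
    if dominates(f_i, f_j):                       # case (a)
        return 1
    if dominates(f_j, f_i):                       # case (b)
        return -1
    f_i, f_j = np.asarray(f_i, float), np.asarray(f_j, float)
    w_i = f_i / (f_i + f_j)                       # Equation (13)
    w_j = f_j / (f_i + f_j)                       # Equation (14)
    if delta is None:                             # assumed: uniform weights summing to one
        delta = np.full(len(f_i), 1.0 / len(f_i))
    return float(np.sum(delta * np.abs(w_j - w_i)))   # Equation (15), case (c)

def adaptive_step(S_D, j, k, l):
    """Self-adaptive chemotaxis step of Equation (16): large early, small later."""
    return S_D / (j + k + l)
```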

3.3. The Replication Operations Based on Crossover

In the standard BFO algorithm, the replication operation sorts individuals according to their fitness values and then replaces the worse half with the better half. In the multi-objective case, however, this operation greatly decreases the diversity of the population and harms its distribution. In this section, in order to maintain population diversity, we introduce the better individuals from GSO and perform crossover operations between the two populations. The simulated binary crossover is shown below:
$$X_{1j}'(t) = 0.5\big[(1 + \gamma_j) X_{1j}(t) + (1 - \gamma_j) X_{2j}(t)\big]$$
$$X_{2j}'(t) = 0.5\big[(1 - \gamma_j) X_{1j}(t) + (1 + \gamma_j) X_{2j}(t)\big]$$
where $\gamma_j = \begin{cases} (2 u_j)^{\frac{1}{\eta + 1}}, & \text{if } u_j \le 0.5 \\ \left( \frac{1}{2(1 - u_j)} \right)^{\frac{1}{\eta + 1}}, & \text{otherwise} \end{cases}$, with $u_j \sim U(0, 1)$ and $\eta = 1$.
The improved replication operation is shown in Figure 2.
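A sketch of the simulated binary crossover above, applied gene-by-gene to two real-valued parents (our own illustration; η is the distribution index, set to 1 as in the text):

```python
import numpy as np

def sbx_crossover(x1, x2, eta=1.0):
    """Simulated binary crossover producing two children from parents x1 and x2."""
    u = np.random.rand(len(x1))
    gamma = np.where(u <= 0.5,
                     (2.0 * u) ** (1.0 / (eta + 1.0)),
                     (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
    c1 = 0.5 * ((1.0 + gamma) * x1 + (1.0 - gamma) * x2)
    c2 = 0.5 * ((1.0 - gamma) * x1 + (1.0 + gamma) * x2)
    return c1, c2
```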

3.4. The Elimination and Dispersal Operations Based on Mutation

Generally speaking, a single-objective optimization algorithm only needs to consider the speed and accuracy of convergence, whereas a multi-objective optimization algorithm must consider both the convergence of the algorithm and the diversity of the population. In the standard elimination and dispersal operation, individuals that meet certain conditions are simply re-generated at random. Although this method can improve the diversity of the population to a certain extent, it does not make use of the convergence achieved in the later stages of a multi-objective optimization run. In order to improve the convergence of the algorithm, polynomial mutation is introduced in this paper. The polynomial mutation process is shown as follows:
$$X_{1j}'(t) = X_{1j}(t) + \beta_j$$
where $\beta_j = \begin{cases} (2 u_j)^{\frac{1}{\eta + 1}} - 1, & u_j < 0.5 \\ 1 - \big( 2(1 - u_j) \big)^{\frac{1}{\eta + 1}}, & \text{otherwise} \end{cases}$, with $u_j \sim U(0, 1)$, and $\eta$ is the distribution exponent.
Through this dispersal operation, the algorithm no longer disperses individuals completely at random, but disperses them on the basis of their current positions, which is conducive to searching for better solutions.
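Correspondingly, the polynomial mutation above can be sketched as follows (our own illustration; the per-gene mutation probability and η are assumed values, and the perturbation is added directly as in the equation, without scaling by the variable bounds):

```python
import numpy as np

def polynomial_mutation(x, eta_m=20.0, p_m=None):
    """Polynomial mutation applied gene-by-gene: x'_j = x_j + beta_j."""
    x = np.asarray(x, dtype=float).copy()
    if p_m is None:
        p_m = 1.0 / len(x)                       # mutate one gene per vector on average
    for j in range(len(x)):
        if np.random.rand() < p_m:
            u = np.random.rand()
            if u < 0.5:
                beta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
            else:
                beta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
            x[j] += beta
    return x
```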

3.5. The Flow Chart of MGSO-BFO

The overall procedure of MGSO-BFO is shown in Algorithm 2.
Algorithm 2: The MGSO-BFO
   Step 1: Create a random population N of size S, and initialize the required parameters;
   Step 2: Randomly generate the initial population
   Step 3: Elimination and dispersal loop: let j = 0, j = j + 1; N_ed is the number of elimination–dispersal steps;
   Step 4: Replication loop: let k = 0, k = k + 1; N_re is the number of replication steps;
   Step 5: Chemotaxis loop: let L = 0, L = L + 1; N_c is the number of chemotactic steps.
   Take the chemotactic step for the ith bacterium as follows:
     I. Calculate the fitness function θ^i for all objective functions.
     II. Let J = θ^i (save this value, since a better fitness may be found during the run).
     III. Tumble: generate a random vector Δ(i) and normalize it to Δ(i)/√(Δ^T(i)Δ(i)).
     IV. Move the ith bacterium in that direction with a self-adaptive step (see Equation (16)):
              θ^i(j+1, k, l) = θ^i(j, k, l) + C(i) · Δ(i)/√(Δ^T(i)Δ(i))
     V. Compute θ^i for all objective functions.
     VI. Swim:
      Let m = 0 (m is the swim-length counter)
     While m < N_s
       (1) Let m = m + 1
       (2) If θ^i ≻ J (θ^i dominates J), let J = θ^i
       (3) Use θ^i(j+1, k, l) = θ^i(j+1, k, l) + C(i) · Δ(i)/√(Δ^T(i)Δ(i)) to compute the new θ^i.
     VII. If i < S, go to process the next bacterium.
   Step 6: If L < N_c, go to Step 5.
   Step 7: The replication operations:
   I. Perform non-dominated sorting of the BFO and GSO populations, and identify the different fronts F_1, F_2, …
   II. Prepare a composite population by combining the BFO population (S/2) with the GSO population (S/2).
   III. Perform the simulated binary crossover operations.
   IV. Select a new population of size S from the composite population.
   V. If k < N_re, go to Step 4.
   Step 8: Elimination and dispersal operations:
   I. For each bacterium i, if rand < p_ed, apply the mutation process described in Section 3.4.
   II. If j < N_ed, go to Step 3; otherwise go to Step 9.
   Step 9: End.

4. Experiments and Discussion

4.1. Test Set and Performance Measures

In the experiments, we use two benchmark sets, ZDT [32] and SCH [33], to test the performance of MGSO-BFO. The ZDT test set consists of six test instances, and we use five of them in the experiments. For more details about the test problems, please refer to Table 1.
As is well known, convergence and diversity are two important indices for multi-objective optimization algorithms, and they cannot be measured adequately with a single performance metric as in single-objective optimization. Many performance metrics have been proposed. In this section, we employ GD [34], SP [34], and IGD [35] to evaluate MGSO-BFO.
The definitions of GD, SP, and IGD are as follows:
$$GD = \frac{1}{n} \sqrt{\sum_{i=1}^{n} d_i^2}$$
$$SP = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} \big( \bar{d} - d_i \big)^2}$$
$$IGD(P, Q) = \frac{\sum_{v \in P} d(v, Q)}{|P|}$$
where $n$ is the number of obtained Pareto-optimal solutions, $d_i$ is the minimum distance from the i-th solution to the Pareto front, and $\bar{d}$ is the mean of the $d_i$; $d(v, Q)$ is the minimum Euclidean distance between $v$ and all the points in $Q$, $P$ is the true Pareto front, and $Q$ is the solution set obtained by the algorithm.
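Given an obtained solution set and a sampled true Pareto front, the three metrics can be computed directly, for example (our own NumPy sketch; Euclidean distances are assumed throughout):

```python
import numpy as np

def gd(front, true_front):
    """Generational distance: closeness of the obtained front to the true front."""
    d = np.min(np.linalg.norm(front[:, None, :] - true_front[None, :, :], axis=2), axis=1)
    return np.sqrt(np.sum(d ** 2)) / len(front)

def sp(front):
    """Schott's spacing: uniformity of the obtained front."""
    d = np.array([np.min(np.delete(np.linalg.norm(front - front[i], axis=1), i))
                  for i in range(len(front))])
    return np.sqrt(np.sum((d.mean() - d) ** 2) / (len(front) - 1))

def igd(front, true_front):
    """Inverted generational distance: coverage of the true front by the obtained set."""
    d = np.min(np.linalg.norm(true_front[:, None, :] - front[None, :, :], axis=2), axis=1)
    return np.sum(d) / len(true_front)
```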
In the experiments, 30 independent runs are carried out on a machine with an Intel Core i5-2400 3.10 GHz CPU, 6 GB memory, and the Windows 7 operating system, using MATLAB 7.9. The stopping criterion is a fixed number of iterations (set to 100), and the population size is n = 50 for all algorithms. The size of the external population is set to 100.

4.2. Comparison with State-of-the-Art Algorithm

In this section, we compared the coupling algorithm with the state-of-the-art evolutionary algorithms NSGA-II [22], SPEA2 [25], PNIA [26], MOEA/D [23]. The parameter values of these algorithms are listed in Table 2. For more details about these algorithms, please refer to the literature [25,26].
In our research work, we compare MGSO-BFO with NSGA-II, SPEA2, PNIA, and MOEA/D; the typical simulation results are shown in Table A1. As can be seen from Table A1, in terms of convergence, the algorithm proposed in this paper is superior to the other classical algorithms on SCH, ZDT1, ZDT2, ZDT3, and ZDT4, especially on ZDT1, ZDT2, ZDT3, and ZDT4. For ZDT6, however, the convergence of MGSO-BFO is not as good as that of the other algorithms. In terms of diversity, the MGSO-BFO algorithm shows good diversity on SCH, ZDT2, ZDT4, and ZDT6. However, the diversity of MGSO-BFO on ZDT1 and ZDT3 is inferior to that of PNIA. In order to further demonstrate the effectiveness of the proposed algorithm, the IGD indices of the algorithms were tested. The experimental results are shown in Table A2. They also show that the proposed algorithm is superior to the other algorithms on SCH, ZDT1, ZDT2, ZDT4, and ZDT6. For ZDT3, the results are of the same order of magnitude and show performance comparable to the other algorithms. From the above discussion, we can conclude that the MGSO-BFO algorithm shows good performance in terms of both convergence and diversity.
Figure 3 shows the dynamic performance of MGSO-BFO, NSGA-II, SPEA2, PNIA, and MOEA/D. The figure demonstrates the abilities of these algorithms to converge to the true Pareto front and to find diverse solutions along the front. For the SCH, ZDT1, ZDT2, and ZDT3 test functions, NSGA-II, SPEA2, PNIA, MOEA/D, and MGSO-BFO all show strong convergence and distribution, indicating similar performance among the algorithms. It can be seen from the performance plots for ZDT4 and ZDT6 that our algorithm converges well to the true front; on ZDT4 in particular, the NSGA-II and PNIA algorithms may become trapped in local optima and fail to converge to the true front.

5. Conclusions

In this paper, we have proposed a novel coupling algorithm named MGSO-BFO. The proposed algorithm divides the population into two parts to achieve a good balance between exploration and exploitation: Part I is in charge of exploitation by GSO and Part II is in charge of exploration by BFO. Moreover, we introduced simulated binary crossover (SBX) and polynomial mutation into MGSO-BFO to enhance the convergence and diversity ability of the algorithm. To demonstrate the effectiveness of the proposed algorithm, we experimentally compared MGSO-BFO with NSGA-II, SPEA2, PNIA, and MOEA/D on the benchmark functions. The study shows that the non-dominated solutions obtained by MGSO-BFO are better than those obtained by NSGA-II, SPEA2, PNIA, and MOEA/D in terms of both convergence and diversity. However, we did not consider the expense of computational time in the experiments. Future research will include further modifications and an analysis of their impact on the convergence of MGSO-BFO. The fitness-calculation-based selection process can also be improved to reduce the computational complexity of MGSO-BFO.

Author Contributions

Writing—original draft preparation, Y.W.; writing—review and editing, Z.C.; visualization, W.L.

Funding

This work is supported by the National Natural Science Foundation of China under Grant No. 61806138, No. U1636220 and No. 61663028, Natural Science Foundation of Shanxi Province under Grant No. 201801D121127, PhD Research Startup Foundation of Taiyuan University of Science and Technology under Grant No. 20182002, Zhejiang Provincial Natural Science Foundation of China under Grant No. Y18F030036.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Comparison results of GD and SP values of four different algorithms.

Problems | Metrics | MOEA/D | SPEA2 | PNIA | NSGA-II | MGSO-BFO
SCH | Mean (GD) | 4.83 × 10−2 | 4.41 × 10−2 | 4.81 × 10−2 | 6.17 × 10−2 | 4.75 × 10−2
    | std (GD)  | 4.51 × 10−2 | 1.14 × 10−2 | 1.93 × 10−3 | 2.96 × 10−3 | 3.41 × 10−2
    | Mean (SP) | 1.71 × 10−2 | 7.01 × 10−3 | 4.96 × 10−3 | 1.75 × 10−2 | 1.63 × 10−2
    | std (SP)  | 1.64 × 10−2 | 1.61 × 10−3 | 3.66 × 10−3 | 1.02 × 10−2 | 2.14 × 10−2
ZDT1 | mean (GD) | 3.67 × 10−8 | 2.80 × 10−5 | 6.56 × 10−4 | 2.58 × 10−4 | 3.97 × 10−15
     | std (GD)  | 1.81 × 10−6 | 1.34 × 10−5 | 1.64 × 10−4 | 1.17 × 10−4 | 1.17 × 10−14
     | mean (SP) | 3.22 × 10−3 | 1.94 × 10−3 | 1.04 × 10−3 | 5.09 × 10−3 | 4.40 × 10−3
     | std (SP)  | 4.21 × 10−3 | 3.38 × 10−4 | 1.06 × 10−3 | 3.92 × 10−3 | 7.30 × 10−3
ZDT2 | mean (GD) | 1.78 × 10−6 | 1.27 × 10−5 | 8.76 × 10−4 | 8.42 × 10−5 | 2.31 × 10−14
     | std (GD)  | 2.61 × 10−4 | 1.21 × 10−5 | 1.17 × 10−3 | 1.32 × 10−4 | 4.10 × 10−14
     | mean (SP) | 8.25 × 10−4 | 2.13 × 10−11 | 2.36 × 10−3 | 2.81 × 10−3 | 8.35 × 10−4
     | std (SP)  | 1.30 × 10−3 | 8.83 × 10−4 | 3.17 × 10−3 | 4.36 × 10−3 | 1.30 × 10−3
ZDT3 | mean (GD) | 3.20 × 10−8 | 4.43 × 10−6 | 1.88 × 10−4 | 1.40 × 10−4 | 1.71 × 10−14
     | std (GD)  | 2.04 × 10−6 | 1.71 × 10−6 | 1.01 × 10−4 | 9.98 × 10−5 | 2.90 × 10−14
     | mean (SP) | 1.65 × 10−4 | 1.89 × 10−3 | 4.08 × 10−4 | 5.56 × 10−3 | 1.00 × 10−3
     | std (SP)  | 4.34 × 10−3 | 5.99 × 10−4 | 5.47 × 10−4 | 3.81 × 10−3 | 1.70 × 10−3
ZDT4 | mean (GD) | 4.23 × 10−3 | 6.95 × 10−2 | 1.25 × 10−2 | 2.68 × 10−1 | 1.18 × 10−4
     | std (GD)  | 2.21 × 10−2 | 4.19 × 10−2 | 1.03 × 10−2 | 1.30 × 10−1 | 2.39 × 10−2
     | mean (SP) | 2.10 × 10−2 | 4.37 × 10−3 | 1.46 × 10−3 | 5.09 × 10−3 | 1.70 × 10−3
     | std (SP)  | 3.12 × 10−3 | 4.64 × 10−3 | 3.20 × 10−3 | 1.11 × 10−2 | 1.00 × 10−3
ZDT6 | mean (GD) | 6.22 × 10−3 | 1.33 × 10−3 | 1.78 × 10−3 | 5.49 × 10−3 | 7.5 × 10−3
     | std (GD)  | 2.78 × 10−3 | 1.84 × 10−4 | 2.31 × 10−4 | 1.27 × 10−3 | 2.43 × 10−3
     | mean (SP) | 4.20 × 10−3 | 1.45 × 10−3 | 1.35 × 10−3 | 4.43 × 10−3 | 4.89 × 10−4
     | std (SP)  | 2.44 × 10−4 | 5.74 × 10−4 | 9.87 × 10−4 | 2.05 × 10−3 | 2.30 × 10−3

Appendix B

Table A2. Comparison results of IGD values of four different algorithms.

Problems | Metrics | MOEA/D | SPEA2 | PNIA | NSGA-II | MGSO-BFO
SCH | mean (IGD) | 1.6237 | 2.7448 | 1.5049 | 2.7305 | 1.4388
    | Std (IGD)  | 7.2145 | 12.2751 | 8.2428 | 12.2112 | 7.8809
ZDT1 | mean (IGD) | 0.4378 | 0.6175 | 0.5368 | 0.5861 | 0.4331
     | Std (IGD)  | 2.7503 | 2.7615 | 2.9403 | 2.6210 | 2.3720
ZDT2 | mean (IGD) | 0.6734 | 0.7002 | 0.5396 | 0.7706 | 0.4595
     | Std (IGD)  | 2.9145 | 3.1314 | 2.9515 | 3.4461 | 2.3168
ZDT3 | mean (IGD) | 0.2165 | 0.2384 | 0.5388 | 0.2445 | 0.3068
     | Std (IGD)  | 1.1362 | 1.0661 | 2.9512 | 1.0981 | 2.7706
ZDT4 | mean (IGD) | 1.1436 | 2.1738 | 1.2983 | 3.4946 | 0.4898
     | Std (IGD)  | 4.2264 | 9.7216 | 7.1113 | 15.6284 | 2.6827
ZDT6 | mean (IGD) | 0.4360 | 0.6646 | 0.5026 | 0.6357 | 0.4014
     | Std (IGD)  | 2.1065 | 2.9720 | 2.7527 | 2.8431 | 2.1985

Appendix C

Table A3. The computation time of four different algorithms.

Problems | MOEA/D | SPEA2 | PNIA | NSGA-II | MGSO-BFO
SCH  | 1.14 × 10² | 2.74 × 10² | 1.50 × 10² | 2.73 × 10² | 1.13 × 10²
ZDT1 | 2.23 × 10² | 3.56 × 10² | 3.74 × 10² | 2.40 × 10² | 1.12 × 10²
ZDT2 | 2.74 × 10² | 2.24 × 10² | 2.02 × 10² | 2.14 × 10² | 1.44 × 10¹
ZDT3 | 2.45 × 10² | 2.52 × 10² | 3.88 × 10² | 2.38 × 10² | 2.55 × 10²
ZDT4 | 3.02 × 10³ | 3.45 × 10³ | 2.96 × 10² | 3.10 × 10³ | 2.68 × 10²
ZDT6 | 2.74 × 10² | 2.97 × 10² | 2.75 × 10³ | 2.82 × 10³ | 2.19 × 10²

References

  1. Heller, L.; Sack, A. Unexpected failure of a Greedy choice Algorithm Proposed by Hoffman. Int. J. Math. Comput. Sci. 2017, 12, 117–126. [Google Scholar]
  2. Pisut, P.; Voratas, K. A two-level particle swarm optimization algorithm for open-shop scheduling problem. Int. J. Comput. Sci. Math. 2016, 7, 575–585. [Google Scholar]
  3. Zhu, H.; He, Y.; Wang, X.; Tsang, E.C. Discrete differential evolutions for the discounted {0-1} knapsack problem. Int. J. Bio-Inspir. Comput. 2017, 10, 219–238. [Google Scholar] [CrossRef]
  4. Fourman, M.P. Compaction of Symbolic Layout using Genetic Algorithms. In Proceedings of the First International Conference on Genetic Algorithms and Their Applications, Pittsburgh, PA, USA, 24–26 July 1985; pp. 141–153. [Google Scholar]
  5. Das, D.; Suganthan, P.N. Differential Evolution: A Survey of the State-of-the-Art. IEEE Trans. Evolut. Comput. 2011, 15, 4–31. [Google Scholar] [CrossRef]
  6. Figueiredo, E.M.N.; Carvalho, D.F.; BastosFilho, C.J.A. Many Objective Particle Swarm Optimization. Inf. Sci. 2016, 374, 115–134. [Google Scholar] [CrossRef]
  7. Cortés, P.; Muñuzuri, J.; Onieva, L.; Guadix, J. A discrete particle swarm optimisation algorithm to operate distributed energy generation networks efficiently. Int. J. Bio-Inspir. Comput. 2018, 12, 226–235. [Google Scholar] [CrossRef]
  8. Ning, J.; Zhang, Q.; Zhang, C.; Zhang, B. A best-path-updating information-guided ant colony optimization algorithm. Inf. Sci. 2018, 433–434, 142–162. [Google Scholar] [CrossRef]
  9. Wang, H.; Wu, Z.; Rahnamayan, S.; Sun, H.; Liu, Y.; Pan, J.S. Multi-strategy ensemble artificial bee colony algorithm. Inf. Sci. 2014, 279, 587–603. [Google Scholar] [CrossRef]
  10. Cui, Z.; Zhang, J.; Wang, Y.; Cao, Y.; Cai, X.; Zhang, W.; Chen, J. A pigeon-inspired optimization algorithm for many-objective optimization problems. Sci. China Inf. Sci. 2019. [Google Scholar] [CrossRef]
  11. Cai, X.; Gao, X.; Xue, Y. Improved bat algorithm with optimal forage strategy and random disturbance strategy. Int. J. Bio-Inspir. Comput. 2016, 8, 205–214. [Google Scholar] [CrossRef]
  12. Cui, Z.; Li, F.; Zhang, W. Bat algorithm with principal component analysis. Int. J. Mach. Learn. Cybern. 2018. [Google Scholar] [CrossRef]
  13. Yang, C.; Ji, J.; Liu, J.; Yin, B. Bacterial foraging optimization using novel chemotaxis and conjugation strategies. Inf. Sci. 2016, 363, 72–95. [Google Scholar] [CrossRef]
  14. Zhang, M.; Wang, H.; Cui, Z.; Chen, J. Hybrid multi-objective cuckoo search with dynamical local search. Memet. Comput. 2018, 10, 199–208. [Google Scholar] [CrossRef]
  15. Cui, Z.; Sun, B.; Wang, G.; Xue, Y.; Chen, J. A novel oriented cuckoo search algorithm to improve DV-Hop performance for cyber-physical systems. J. Parallel Distrib. Comput. 2017, 103, 42–52. [Google Scholar] [CrossRef]
  16. Abdel-Baset, M.; Zhou, Y.; Ismail, M. An improved cuckoo search algorithm for integer programming problems. Int. J. Comput. Sci. Math. 2018, 9, 66–81. [Google Scholar] [CrossRef]
  17. Zhou, J.; Dong, S. Hybrid glowworm swarm optimization for task scheduling in the cloud environment. Eng. Optim. 2018, 50, 949–964. [Google Scholar] [CrossRef]
  18. Yu, G.; Feng, Y. Improving firefly algorithm using hybrid strategies. Int. J. Comput. Sci. Math. 2018, 9, 163–170. [Google Scholar] [CrossRef]
  19. Cui, Z.; Cao, Y.; Cai, X.; Cai, J.; Chen, J. Optimal LEACH protocol with modified bat algorithm for big data sensing systems in Internet of Things. J. Parallel Distrib. Comput. 2017. [Google Scholar] [CrossRef]
  20. Cai, X.; Wang, H.; Cui, Z.; Cai, J.; Xue, Y.; Wang, L. Bat algorithm with triangle-flipping strategy for numerical optimization. Int. J. Mach. Learn. Cybern. 2018, 9, 199–215. [Google Scholar] [CrossRef]
  21. Srinivas, N.; Deb, K. Muiltiobjective Optimization Using Nondominated Sorting in Genetic Algorithms. Evolut. Comput. 1994, 2, 221–248. [Google Scholar] [CrossRef]
  22. Deb, K.; Agrawal, S.; Pratap, A.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evolut. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  23. Zhang, Q.; Li, H. MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evolut. Comput. 2007, 11, 712–731. [Google Scholar] [CrossRef]
  24. Horn, J.; Nafpliotis, N.; Goldberg, D.E. A niched Pareto genetic algorithm for multiobjective optimization. In Proceedings of the IEEE Conference on Evolutionary Computation IEEE World Congress on Computational Intelligence, Honolulu, HI, USA, 12–17 May 2002. [Google Scholar]
  25. Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the Strength Pareto Evolutionary Algorithm for Multiobjective Optimization. In Evolutionary Methods for Design, Optimization and Control with Applications to Industrial Problems, Proceedings of the EUROGEN 2001, Athens, Greece, 19–21 September 2001; International Center for Numerical Methods in Engineering: Barcelona, Spain, 2002. [Google Scholar]
  26. Yuan, J.; Gang, X.; Zhen, Z.; Chen, B. The Pareto optimal control of inverter based on multi-objective immune algorithm. In Proceedings of the International Conference on Power Electronics & ECCE Asia, Seoul, Korea, 1–5 June 2015. [Google Scholar]
  27. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evolut. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  28. Qu, B.Y.; Suganthan, P.N. Constrained Multi-Objective Optimization Algorithm with Ensemble of Constraint Handling Methods. Eng. Optim. 2011, 43, 403. [Google Scholar] [CrossRef]
  29. Jin, Y.; Branke, J. Evolutionary optimization in uncertain environments-a survey. IEEE Trans. Evolut. Comput. 2005, 9, 303–317. [Google Scholar] [CrossRef]
  30. Yu, X.; Tang, K.; Chen, T.; Yao, X. Empirical analysis of evolutionary algorithms with immigrants schemes for dynamic optimization. Memet. Comput. 2009, 1, 3–24. [Google Scholar] [CrossRef]
  31. Zhang, M.; Zhu, Z.; Cui, Z.; Cai, X. NSGA-II with local perturbation. In Proceedings of the Control & Decision Conference, Chongqing, China, 28–30 May 2017. [Google Scholar]
  32. Zitzler, E.; Deb, K.; Thiele, L. Comparison of Multiobjective Evolutionary Algorithms: Empirical Results. Evolut. Comput. 2000, 8, 173. [Google Scholar] [CrossRef] [PubMed]
  33. Schaffer, J.D. Multiple objective optimization with vector evaluated genetic algorithms. In Proceedings of the First International Conference on Genetic Algorithms and Their Applications; L. Erlbaum Associates Inc.: Mahwah, NJ, USA, 1985; pp. 93–100. [Google Scholar]
  34. Schott, J.R. Fault tolerant design using single and multicriteria genetic algorithm optimization. Cell. Immunol. 1995, 37, 1–13. [Google Scholar]
  35. Mohammadi, A.; Omidvar, M.N.; Li, X. A new performance metric for user-preference based on multi-objective evolutionary algorithms. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation (CEC), Cancun, Mexico, 20–23 June 2013; pp. 2825–2832. [Google Scholar]
Figure 1. (a) Crowding distance procedure; (b) Crowding distance calculation.
Figure 2. Flow chart of the improved replication operation.
Figure 3. The results of dynamic performance comparison.
Table 1. Six test functions for the multi-objective optimization algorithms.

Function | Dimension | Range | Objective Function | Optimal Solution | Feature
SCH | 1 | [−10³, 10³] | f1(x) = x²; f2(x) = (x − 2)² | x ∈ [0, 2] | convex
ZDT1 | 30 | [0, 1] | f1(x) = x1; f2(x) = g(x)[1 − √(x1/g(x))]; g(x) = 1 + 9Σ_{i=2}^{n} xi/(n − 1) | x1 ∈ [0, 1], xi = 0, i = 2, …, n | convex
ZDT2 | 30 | [0, 1] | f1(x) = x1; f2(x) = g(x)[1 − (x1/g(x))²]; g(x) = 1 + 9Σ_{i=2}^{n} xi/(n − 1) | x1 ∈ [0, 1], xi = 0, i = 2, …, n | non-convex
ZDT3 | 30 | [0, 1] | f1(x) = x1; f2(x) = g(x)[1 − √(x1/g(x)) − (x1/g(x))sin(10πx1)]; g(x) = 1 + 9Σ_{i=2}^{n} xi/(n − 1) | x1 ∈ [0, 1], xi = 0, i = 2, …, n | convex, disconnected
ZDT4 | 10 | x1 ∈ [0, 1], xi ∈ [−5, 5], i = 2, …, n | f1(x) = x1; f2(x) = g(x)[1 − √(x1/g(x))]; g(x) = 1 + 10(n − 1) + Σ_{i=2}^{n}[xi² − 10cos(4πxi)] | x1 ∈ [0, 1], xi = 0, i = 2, …, n | non-convex
ZDT6 | 10 | [0, 1] | f1(x) = 1 − exp(−4x1)sin⁶(6πx1); f2(x) = g(x)[1 − (f1(x)/g(x))²]; g(x) = 1 + 9[Σ_{i=2}^{n} xi/(n − 1)]^0.25 | x1 ∈ [0, 1], xi = 0, i = 2, …, n | non-convex, non-uniformly spaced
Table 2. Parameter settings of 4 different algorithms.

Algorithm | Parameter (D stands for dimension)
NSGA-II [22] | p_c = 1, p_m = 1/D
SPEA2 [25] | p_c = 1, p_m = 1/D
PNIA [26] | p_c = 1, p_m = 1/D
MOEA/D [23] | T = 20, δ = 0.9, n_r = 2
MGSO-BFO | p_ed = 0.25, N_ed = 4, N_c = 20, N_s = 3
