Improving Monarch Butterfly Optimization Algorithm with Self-Adaptive Population

Abstract: Inspired by the migration behavior of monarch butterflies in nature, Wang et al. proposed a novel, promising, intelligent swarm-based algorithm, monarch butterfly optimization (MBO), for tackling global optimization problems. In the basic MBO algorithm, the numbers of butterflies in land 1 (subpopulation 1) and land 2 (subpopulation 2) are calculated according to the parameter p, which is unchanged during the entire optimization process. In the present work, a self-adaptive strategy is introduced to dynamically adjust the numbers of butterflies in land 1 and land 2. Accordingly, the sizes of subpopulations 1 and 2 change dynamically, in a linear way, as the algorithm evolves. After introducing this self-adaptive strategy, an improved MBO algorithm, called monarch butterfly optimization with self-adaptive population (SPMBO), is put forward. In SPMBO, only newly generated individuals that are better than before can be accepted as new individuals for the next generation in the migration operation. Finally, the proposed SPMBO algorithm is benchmarked on thirteen standard test functions with dimensions of 30 and 60. The experimental results indicate that the search ability of the proposed SPMBO approach significantly outperforms that of the basic MBO algorithm on most test functions. This also implies that the self-adaptive strategy is an effective way to improve the performance of the basic MBO algorithm.


Introduction
To optimize is to maximize or minimize given functions in a certain domain. In real life, human beings are driven to maximize profit or minimize cost. In mathematics and computer science, these real-world problems can be mathematically modeled, and then further tackled by various optimization techniques. In general, these optimization techniques can loosely be divided into two categories: traditional optimization methods and modern intelligent optimization algorithms. For each run, traditional optimization methods will generate the same results under the same initial conditions, while modern intelligent optimization algorithms can generate different results even when the same conditions are provided. Since current problems are becoming more and more complicated, traditional optimization methods do not solve them efficiently. Therefore, more and more researchers have turned to modern intelligent optimization algorithms [1], which mainly include evolutionary computation [2], swarm intelligence, extreme learning machines [3], and artificial neural networks [4]. Among the different kinds of intelligent algorithms, swarm intelligence (SI) algorithms [5][6][7][8] are one of the most representative paradigms.

Related Work
Since the monarch butterfly optimization algorithm [88] was proposed, many scholars have worked on the MBO algorithm. In this section, some of the most representative work regarding MBO and other metaheuristic algorithms is summarized and reviewed.
Kaedi [89] proposed a population-based metaheuristic algorithm, namely the fractal-based algorithm, for tackling continuous optimization problems. In this algorithm, the density of high-quality and promising points in an area is considered as a heuristic that estimates the degree of promise for finding the optimal solution in that area. The promising areas of the state space are then iteratively detected and partitioned into self-similar, fractal-shaped subspaces, each of which is searched more precisely and more extensively. Comparisons with other metaheuristic algorithms demonstrated that this algorithm could find high-quality solutions within an appropriate time.
Shams et al. [90] proposed a novel optimization algorithm, ideal gas optimization (IGO), inspired by the first law of thermodynamics and kinetic theory. In IGO, the searcher agents are a collection of molecules with pressure and temperature, and the interaction between gas systems and molecules is used to search the problem space for the solution. The IGO algorithm was evaluated on an array of benchmarks, and comparison of the results with PSO and the genetic algorithm (GA) showed an advantage of the IGO approach.
Precup et al. [91] suggested a synergy of fuzzy logic and nature-inspired algorithms in the context of the nature-inspired optimal tuning of the input membership functions of a class of Takagi-Sugeno-Kang (TSK) fuzzy models dedicated to anti-lock braking systems (ABSs). The TSK fuzzy model structure and initial TSK fuzzy models were obtained by the modal equivalence principle, in terms of placing local state-space models in the domain of TSK fuzzy models. The optimization problems were defined to minimize objective functions expressed as the average of the squared modeling errors over the time horizon. Two representative nature-inspired algorithms, simulated annealing (SA) and PSO, were implemented to solve the optimization problems and obtain optimal TSK fuzzy models.
Baruah et al. [92] proposed a new online evolving clustering approach for streaming data. Unlike other approaches, which consider either the data density or the distance from existing cluster centers, this approach uses both cluster weight and distance before generating new clusters. To capture the dynamics of the data stream, the cluster weight is defined in both data and time space in such a way that it decays exponentially with time. A distinction is made between core and noncore clusters to effectively identify the real outliers. Experimental results with the developed models showed that the proposed approach obtains results at par with or better than existing approaches and significantly reduces the computational overhead, which makes it suitable for real-time applications.
Yi et al. [93] proposed a novel quantum-inspired MBO methodology, called QMBO, by incorporating quantum computation into the basic MBO algorithm. In QMBO, a certain number of the worst butterflies are updated by quantum operators. The path planning navigation problem for unmanned combat air vehicles (UCAVs) was modeled, and its optimal path was obtained by the proposed QMBO algorithm. Furthermore, B-Spline curves were utilized to refine the obtained path, making it more suitable for UCAVs. The UCAV path obtained by QMBO was studied and analyzed in comparison with the basic MBO. Experimental results showed that QMBO can find a much shorter path than MBO.
Ghetas et al. [94] incorporated the harmony search (HS) algorithm into the basic MBO algorithm, and proposed a variant of MBO, called MBHS, to deal with standard benchmark problems. In MBHS, the HS algorithm is considered as a mutation operator to improve the butterfly adjusting operator, with the aim of accelerating the convergence rate of MBO.
Feng et al. [95] presented a novel binary MBO (BMBO) method used to address the 0-1 knapsack problem (0-1 KP). In BMBO, each butterfly individual is represented as a two-tuple string. Several individual allocation techniques are used to improve BMBO's performance. In order to keep the number of infeasible solutions to a minimum, a novel repair operator was applied. A comparative study of BMBO with other optimization techniques showed the superiority of the former in solving the 0-1 KP.
Wang et al. [96,97] put forward another variant of the MBO method, GCMBO. In GCMBO, two modification strategies, a self-adaptive crossover (SAC) operator and a greedy strategy, were utilized to improve its search ability.
Feng et al. [98] combined chaos theory with the basic MBO algorithm, and proposed a novel chaotic MBO (CMBO) algorithm that enhances search effectiveness significantly. In CMBO, in order to tune the two main operators, the best chaotic map is selected from 12 maps. Meanwhile, some of the worst individuals are improved by a Gaussian mutation operator to avoid premature convergence.
Ghanem and Jantan [99] combined ABC with elements from MBO to propose a new hybrid metaheuristic algorithm named hybrid ABC/MBO (HAM). The combined method uses an updated butterfly adjusting operator, considered to be a mutation operator, with the aim of sharing information with the employed bees in ABC.
Wang et al. [100] proposed a discrete version of MBO (DMBO) that was applied successfully to tackle the Chinese TSP (CTSP). They also studied and analyzed the parameter butterfly adjusting rate (BAR), and the chosen BAR was used to find the best solution for the CTSP.
Feng et al. [101] proposed a multi-strategy MBO (MMBO) technique for the discounted 0-1 knapsack problem (DKP). In MMBO, two modifications, neighborhood mutation and Gaussian perturbation, are utilized to retain the diversity of the population. An array of experimental results showed that neighborhood mutation and Gaussian perturbation were quite capable of providing significant improvement in the exploration and exploitation of the MMBO approach, respectively. Accordingly, two variants of MMBO were proposed: NCMBO and GMMBO.
Feng et al. [102] combined MBO with seven kinds of DE mutation strategies, exploiting the intrinsic mechanism of the search process of MBO and the character of the differential mutation operator. They presented a novel DEMBO based on MBO and an improved DE mutation strategy, in which the migration operator is replaced by a differential mutation operator with the aim of improving global optimization ability. The overall performance of DEMBO was fully assessed on thirty typical discounted 0-1 knapsack problem instances. The experimental results demonstrated that DEMBO could enhance the search ability without increasing the time complexity; meanwhile, the approximation ratios obtained by DEMBO on all the DKP instances were close to 1.0.
Wang et al. [103] proposed a new population initialization strategy in order to improve MBO's performance. First, the whole search space is equally divided into NP (population size) parts in each dimension. Subsequently, two random distributions (the t and F distributions) are used to mutate the equally divided population. Accordingly, five variants of MBO with the new initialization strategy were proposed.
Feng et al. [104] presented OMBO, a generalized opposition-based learning (OBL) [105,106] MBO with Gaussian perturbation. The authors applied the OBL strategy to a portion of the individuals in the late stage of evolution, and applied Gaussian perturbation to the individuals with poor fitness in each generation. OBL guarantees a higher convergence speed for OMBO, and Gaussian perturbation reduces the possibility of falling into a local optimum. For the sake of testing and verifying the effectiveness of OMBO, three categories of 15 large-scale 0-1 KP cases from 800 to 2000 dimensions were used. The experimental results indicated that OMBO could find high-quality solutions.
Chen et al. [107] proposed a new variant of MBO by introducing a greedy strategy to solve dynamic vehicle routing problems (DVRPs). In contrast to the basic MBO algorithm, the proposed algorithm accepts only butterfly individuals that have better fitness than before the migration and butterfly adjusting operators were applied. Also, a later perturbation procedure was introduced to strike a trade-off between global and local search.
Meng et al. [108] proposed an improved MBO (IMBO) for the sake of enhancing the optimization ability of MBO. In IMBO, the authors divided the two subpopulations in a dynamic and random fashion at each generation, instead of using the fixed division applied in the original MBO approach. Also, the butterfly individuals were updated in two different ways for the sake of maintaining the diversity of the population.
Faris et al. [109] modified the position updating strategy used in the basic MBO algorithm by utilizing both the previous solutions and the butterfly individual with the best fitness at the time. For the sake of fully exploring the search behavior of the improved MBO (IMBO), it was benchmarked on 23 functions. Furthermore, the IMBO was applied to train neural networks, and the IMBO-based trainer was verified on 15 machine learning datasets from the UCI repository. Experimental results showed that the IMBO algorithm could significantly enhance the learning ability of neural networks.
Ehteram et al. [110] used the MBO algorithm to address the utilization of a multi-reservoir system for the sake of improving the production of hydroelectric energy. They studied three periods of dry (1963-1964), wet (1951-1952), and normal (1985-1986) conditions in a four-reservoir system. The experiments indicated that MBO can generate more energy compared to particle swarm optimization (PSO) and the genetic algorithm (GA).
Although many scholars have made in-depth studies of the MBO algorithm from different aspects, in the basic MBO algorithm the number of monarch butterflies in land 1 and land 2 remains unchanged. In this paper, a self-adaptive strategy is introduced to update the subpopulation sizes during the optimization process. A detailed description of the proposed algorithm is given in the following sections.

Migration Operator
The numbers of butterflies located in land 1 and land 2 are calculated as ceil(p × NP) (NP1, subpopulation 1, SP1) and NP − NP1 (NP2, subpopulation 2, SP2), respectively. Here, ceil(x) rounds x up to the nearest integer not less than x. For butterfly i in SP1, when r ≤ p, the element x_{i,k}^{t+1} is generated by the following equation [88]:

x_{i,k}^{t+1} = x_{r1,k}^{t}, (1)

where x_{i,k}^{t+1} is the kth element of x_i at generation t + 1, and x_{r1,k}^{t} is the kth element of x_{r1}. Butterfly r1 is chosen from SP1 in a random fashion. In Equation (1), r is given in the following form:

r = rand × peri, (2)

where rand is a uniform random number in [0, 1] and peri is the migration period [88]. In comparison, when r > p, x_{i,k}^{t+1} is given by

x_{i,k}^{t+1} = x_{r2,k}^{t}, (3)

where x_{r2,k}^{t} is the kth element of x_{r2}, and butterfly r2 is chosen from SP2 in a random fashion.
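As a concrete illustration, the element-wise migration logic of Equations (1)-(3) can be sketched in Python. The list-based representation, function name, and default random source are our own choices for this sketch, not code from the original paper.

```python
import random

def migration_operator(sp1, sp2, p, peri, rng=random):
    """Sketch of the basic MBO migration operator (Equations (1)-(3)).

    sp1, sp2: lists of butterflies (each a list of D floats) in land 1
    and land 2. Returns the trial individuals for subpopulation 1.
    """
    d = len(sp1[0])
    trial = []
    for _ in range(len(sp1)):
        child = []
        for k in range(d):
            r = rng.random() * peri      # Equation (2): r = rand * peri
            if r <= p:
                donor = rng.choice(sp1)  # butterfly r1 chosen from SP1
            else:
                donor = rng.choice(sp2)  # butterfly r2 chosen from SP2
            child.append(donor[k])       # Equations (1) / (3)
        trial.append(child)
    return trial
```

Each element of a trial individual is thus copied from a random butterfly in either SP1 or SP2, with the split governed by p and peri.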

Butterfly Adjusting Operator
For butterfly j, if rand is not more than p, the kth element is updated as [88]:

x_{j,k}^{t+1} = x_{best,k}^{t}, (4)

where x_{j,k}^{t+1} is the kth element of x_j, and x_{best,k}^{t} is the kth element of the best individual x_{best}. On the other hand, when rand is bigger than p, it is updated as

x_{j,k}^{t+1} = x_{r3,k}^{t}, (5)

where x_{r3,k}^{t} is the kth element of x_{r3}. Here, r3 ∈ {1, 2, ..., NP2} is chosen from SP2 in a random fashion.
In this case, when rand is bigger than BAR (the butterfly adjusting rate), the element is further updated as [88]:

x_{j,k}^{t+1} = x_{j,k}^{t+1} + α × (dx_k − 0.5), (6)

where dx is the walk step of butterfly j, which can be obtained by a Lévy flight, and α = S_max/t² is the weighting factor, with S_max the max walk step.
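The adjusting logic of Equations (4)-(6) can be sketched as follows. The Lévy-flight step is approximated here with a crude tangent-based draw, and the helper names are our own; treat this as an illustrative sketch under those assumptions, not the paper's exact implementation.

```python
import math
import random

def levy_step(d, rng=random):
    # Crude heavy-tailed stand-in for the Levy-flight walk step used in
    # MBO; the exact distribution is not reproduced here.
    return [math.tan(math.pi * rng.random()) for _ in range(d)]

def butterfly_adjusting(x_best, sp2, p, bar, s_max, t, rng=random):
    """Sketch of the butterfly adjusting operator (Equations (4)-(6)),
    producing one new butterfly for subpopulation 2."""
    d = len(x_best)
    alpha = s_max / (t ** 2)          # weighting factor, shrinks over time
    dx = levy_step(d, rng)            # walk step dx of butterfly j
    child = []
    for k in range(d):
        if rng.random() <= p:
            val = x_best[k]           # Equation (4): copy from best individual
        else:
            r3 = rng.choice(sp2)      # butterfly r3 chosen from SP2 at random
            val = r3[k]               # Equation (5)
            if rng.random() > bar:    # Equation (6): extra Levy perturbation
                val += alpha * (dx[k] - 0.5)
        child.append(val)
    return child
```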

SPMBO Algorithm
Even though the MBO algorithm was proposed only three years ago, it has received more and more attention from scholars and engineers [88]. They have put forward many techniques to improve the search ability of the basic MBO algorithm, and MBO has been used to successfully solve all kinds of real-world problems. However, as mentioned previously, MBO uses a fixed number of butterflies in land 1 and land 2, and all the new butterfly individuals generated by the migration operator are accepted. In this paper, a new variant of the MBO algorithm is proposed by introducing self-adaptive and greedy strategies. A detailed description of the SPMBO algorithm is given below.

Self-Adaptive Strategy
As mentioned in Section 3.1, the numbers of butterflies in land 1 and land 2 are ceil(p × NP) (NP1, subpopulation 1) and NP − NP1 (NP2, subpopulation 2), respectively. In the basic MBO algorithm, they are fixed during the entire optimization process. Here, the parameter p is instead adjusted dynamically by a self-adaptive strategy, updated as follows:

p = a + b × t, (7)

where t is the current generation, and a and b are constants given by

a = p_min − (p_max − p_min)/(t_m − 1), (8)

b = (p_max − p_min)/(t_m − 1), (9)

where t_m is the maximum generation, and p_min and p_max are the lower and upper bounds of the parameter p, respectively. Clearly, p_min and p_max are in the range [0, 1].
For the basic MBO algorithm, when p = 0, all the butterflies are updated by the butterfly adjusting operator, while when p = 1, all the butterflies are updated by the migration operator. To avoid these two special cases while still covering a wide range of the parameter p, p_min and p_max are assigned to 0.1 and 0.9, respectively, in the following experiments. From Equation (7), we can see that the parameter p changes linearly from the lower bound p_min to the upper bound p_max.
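The linear schedule of Equations (7)-(9) can be written in a few lines of Python; the function name is our own. With t_m = 50, p_min = 0.1, and p_max = 0.9 (the settings used later in Section 5.1), this reproduces a = 41/490 and b = 8/490.

```python
def p_schedule(t, t_max, p_min=0.1, p_max=0.9):
    """Self-adaptive linear schedule for p (Equations (7)-(9)):
    p(t) = a + b * t rises from p_min at t = 1 to p_max at t = t_max."""
    b = (p_max - p_min) / (t_max - 1)  # Equation (9): slope
    a = p_min - b                      # Equation (8): intercept
    return a + b * t                   # Equation (7)
```

For instance, p_schedule(1, 50) returns 0.1 and p_schedule(50, 50) returns 0.9, up to floating-point rounding.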

Greedy Strategy
In this subsection, we take a closer look at the migration operator. In the basic MBO algorithm, all newly generated butterfly individuals are accepted as the new butterfly individuals for the next generation. If a newly generated butterfly individual is worse than its predecessor, this update will lead to population degradation and slow the convergence speed. More seriously, if this happens in the later stages of the search, the population will oscillate.
In this paper, a greedy strategy is therefore introduced into the basic MBO algorithm: only newly generated butterfly individuals with better fitness are accepted and passed to the next generation. This selection scheme guarantees that the population is never worse than before, and the algorithm evolves in the proper direction. After introducing the greedy strategy, for minimization problems, the new butterfly individual is given by

x_{i,new}^{t+1} = x_i^{t+1}, if f(x_i^{t+1}) < f(x_i^t); x_i^t, otherwise, (10)

where x_{i,new}^{t+1} is the butterfly that will be passed to the next generation, and f(x_i^{t+1}) and f(x_i^t) are the fitness values of butterflies x_i^{t+1} and x_i^t, respectively. After introducing this greedy strategy into the migration operator, the framework of the updated migration operator can be constructed, as shown in Algorithm 1.
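The greedy acceptance rule of Equation (10) amounts to a one-line helper; the function name is our own, illustrative choice.

```python
def greedy_select(x_old, x_new, f):
    """Greedy acceptance (Equation (10)) for minimization: keep the newly
    generated butterfly only if it improves the fitness; otherwise retain
    the old individual, so the population never degrades."""
    return x_new if f(x_new) < f(x_old) else x_old
```

For example, with the sphere function f(x) = sum of x_k squared, an improved candidate replaces its parent while a worse one is discarded.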
Algorithm 1 Updated Migration Operator.
1: for i = 1 to NP1 do // for all butterflies in SP1
2:   for k = 1 to D do // D is the length of a butterfly individual
3:     Randomly generate a number rand;
4:     r = rand × peri;
5:     if r ≤ p then
6:       Randomly select a butterfly in subpopulation 1 (say r1);
7:       Generate the kth element according to Equation (1);
8:     else
9:       Randomly select a butterfly in subpopulation 2 (say r2);
10:      Generate the kth element according to Equation (3);
11:    end if
12:  end for
13:  Generate x_{i,new}^{t+1} according to the greedy strategy, as shown in Equation (10);
14: end for

After incorporating the self-adaptive strategy and the greedy strategy into the basic MBO algorithm, the SPMBO approach is complete; a description of the approach is given in Algorithm 2.
Algorithm 2 SPMBO Algorithm.
1: Initialization. Set the generation counter t = 1; set the maximum generation t_m, NP1, NP2, BAR, peri, and the lower (p_min) and upper (p_max) bounds of the parameter p; randomly generate the initial population.
2: Calculate the fitness of each butterfly according to the objective function.
3: while t < t_m do
4:   Sort the butterfly population by fitness.
5:   Update the parameter p according to Equation (7).
6:   Divide the butterfly individuals into two subpopulations.
7:   Perform the updated migration operator, as shown in Algorithm 1.
8:   Perform the butterfly adjusting operator as in the basic MBO algorithm.
9:   Evaluate the newly generated butterfly individuals.
10:  t = t + 1.
11: end while

In order to verify the performance of the proposed SPMBO algorithm, thirteen benchmark problems are solved by the proposed approach. The thirteen benchmark problems are minimization functions, so the MBO and SPMBO algorithms strive to find the smallest possible function values. A more detailed description of these experiments can be found in Section 5.

Simulation Results
In this section, the proposed SPMBO algorithm is fully verified from various aspects on thirteen 30-D and 60-D standard benchmark problems, as shown in Table 1. A more detailed description of the benchmark problems can be found at http://www.sfu.ca/~ssurjano/optimization.html. In order to ensure a fair comparison, all implementations are carried out under the same conditions [49,111].

MBO vs. SPMBO
In this subsection, we compare the proposed SPMBO algorithm with the basic MBO approach. For MBO and SPMBO, the corresponding parameters are set as follows: max step S_max = 1.0, butterfly adjusting rate BAR = 5/12, maximum generation t_m = 50, migration period peri = 1.2, migration ratio p = 5/12, upper bound p_max = 0.9, lower bound p_min = 0.1, and population size NP = 50. Therefore, for the basic MBO, the numbers of butterflies in land 1 and land 2, i.e., NP1 and NP2, are 21 and 29, respectively. According to Equations (7)-(9), we find a = 41/490 and b = 8/490, so the parameter p is given by

p = (41 + 8t)/490, (11)

where t is the current generation, an integer between 1 and 50 in the present work. Based on the above analysis, the trends of the parameter p, the number of butterflies in land 1 (NP1), and the number of butterflies in land 2 (NP2) for both the basic MBO algorithm and the proposed SPMBO algorithm are illustrated in Figure 1.
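Under the settings above, the subpopulation-size trends traced in Figure 1 can be reproduced with a short sketch; the helper names are our own, not code from the paper.

```python
import math

def subpop_sizes(t, t_max=50, np_total=50, p_min=0.1, p_max=0.9):
    """Subpopulation sizes under the self-adaptive schedule:
    NP1 = ceil(p * NP) and NP2 = NP - NP1, with p from Equation (11)."""
    b = (p_max - p_min) / (t_max - 1)   # = 8/490 for these settings
    p = (p_min - b) + b * t             # = (41 + 8t)/490
    np1 = math.ceil(p * np_total)
    return np1, np_total - np1

# For the basic MBO, p is fixed at 5/12, giving NP1 = 21 and NP2 = 29:
np1_fixed = math.ceil(5 / 12 * 50)
```

In SPMBO, land 1 starts small and grows linearly at the expense of land 2 as p rises from 0.1 to 0.9, whereas the basic MBO keeps the 21/29 split throughout.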
In essence, all intelligent algorithms are stochastic; therefore, in order to remove the influence of randomness, thirty independent runs are performed. In the following experiments, the optimal solution for each test problem is highlighted in bold font.

D = 30
In this subsection, the dimension of the thirteen benchmarks is set to 30. MBO and SPMBO have the same number of function evaluations (FEs) at each generation; therefore, the maximum generation (t_m) is considered as the stop condition, which is 50 as mentioned above. For the thirty runs, the best, mean, and worst function values and standard deviation (SD) values obtained by MBO and SPMBO are recorded in Table 2. From Table 2, it can be observed that, in terms of the mean and worst function values, SPMBO has an absolute advantage over the MBO algorithm on the thirteen benchmarks. By studying the SD values, we can see that the final function values obtained by SPMBO are located in a smaller range than those of the basic MBO algorithm. This indicates that SPMBO is an effective intelligent algorithm that performs in a more stable way. Unfortunately, for the best function values, SPMBO only narrowly defeats the basic MBO algorithm (7 vs. 6).
The superiority of SPMBO on functions F01-F13 is also shown in Figure 2, which clearly reveals that SPMBO performs better than MBO throughout the entire optimization process.

D = 60
In this subsection, the dimension of the thirteen benchmark problems is set to 60. As before, the maximum generation (t_m), i.e., the number of function evaluations (FEs), is considered as the stop condition, which is again set to 50. After thirty runs, the best, mean, and worst function values and SD values obtained by MBO and SPMBO are recorded in Table 3.
From Table 3, it can be observed that, for the mean and worst function values, SPMBO has an absolute advantage over the MBO algorithm on the thirteen benchmarks, although MBO does perform better than SPMBO on two of the test functions. On ten benchmark problems, SPMBO has smaller SD values than MBO. This indicates that SPMBO performs more stably than MBO in most cases. However, for the best function values, SPMBO is narrowly defeated by the basic MBO algorithm (6 vs. 7). Future research on SPMBO should focus on how to improve the best function values.
The superiority of SPMBO on functions F01-F13 can also be seen in Figure 3, which clearly reveals that SPMBO performs better than MBO during the entire optimization process.

PSO vs. SPMBO
In order to further show the superiority of the proposed SPMBO algorithm, we compare SPMBO with other metaheuristic algorithms. Here, PSO is taken as an example. The parameters of SPMBO are the same as in Section 5.1. For PSO, the parameters are chosen as follows: the inertia constant = 0.3, the cognitive constant = 1, and the social constant for swarm interaction = 1. As mentioned in Section 5.1, thirty independent runs are performed with the aim of obtaining representative results. The optimal solution for each test problem is highlighted in bold font.

D = 30
In this subsection, the dimension of the thirteen benchmarks is set to 30. PSO and SPMBO have the same number of function evaluations (FEs) at each generation; therefore, the maximum generation (t_m) is considered as the stop condition, which is 50 as mentioned above. After thirty runs, the best, mean, and worst function values and SD values obtained by PSO and SPMBO are recorded in Table 4.
From Table 4, it can be observed that, for the best and mean function values, SPMBO has an absolute advantage over the PSO algorithm. By studying the worst values, we can see that the final function values obtained by SPMBO are a little better than those of PSO. Unfortunately, for the SD values, SPMBO is defeated by PSO.

D = 60
In this subsection, the dimension of the thirteen benchmark problems is set to 60. As before, the maximum generation (t_m), i.e., the number of function evaluations (FEs), is considered as the stop condition and is set to 50. After thirty runs, the best, mean, and worst function values and SD values obtained by PSO and SPMBO are recorded in Table 5.
From Table 5, it can be observed that, for the best and mean function values, SPMBO has an absolute advantage over the PSO algorithm on all thirteen benchmarks. For the worst values, SPMBO and PSO have similar performance. Unfortunately, for the SD values, SPMBO is defeated by PSO. From the results in Tables 4 and 5, we can draw a brief conclusion: SPMBO performs better than PSO in most cases, though the SD of the SPMBO algorithm must be further improved.

Conclusions
Inspired by the migration behavior of monarch butterflies, one of the most promising swarm intelligence algorithms, monarch butterfly optimization (MBO), was proposed by Wang et al. in 2015. In the basic MBO algorithm, the number of butterflies in land 1 (NP1) and land 2 (NP2) is fixed, calculated according to the parameter p at the beginning of the search. MBO includes two main operators: a migration operator and a butterfly adjusting operator. In the migration operator, all the generated butterfly individuals are accepted and passed to the next generation, which in some cases is an ineffective way to find the best function values. In this paper, we introduced two techniques to overcome these drawbacks: a self-adaptive strategy and a greedy strategy. The parameter p is linearly adjusted in a dynamic way; therefore, at each stage of the search, the numbers of butterflies in land 1 (NP1) and land 2 (NP2) are determined by the current value of p. Additionally, only newly generated butterfly individuals with better fitness are accepted and passed to the next generation; this greedy strategy accelerates the convergence speed. The proposed SPMBO algorithm was tested on thirteen 30-D and 60-D test functions. The experimental results indicate that the search ability of the proposed SPMBO approach significantly outperforms that of the basic MBO algorithm on most test functions.
Despite the various advantages of the SPMBO approach, the following points should be addressed in our future research. First, the parameter p is changed during the entire optimization process; in fact, if the algorithm is performing well, there is no need to adjust p, so developing a method to adjust the parameter p in a more intelligent way is worthy of further study. Second, for the updated migration operator, only better butterfly individuals are accepted and passed to the next generation; however, butterfly individuals with worse fitness may include better elements that could help the search, so the migration operator should perhaps accept a few butterfly individuals with worse fitness. Finally, only thirteen benchmarks were used to test the proposed SPMBO approach. In the future, more benchmark problems, and especially real-world applications such as image processing, video coding, and wireless sensor networks, should be used to further verify SPMBO.


Figure 1. Trends of the parameter p, the number of butterflies in land 1 (NP1), and the number of butterflies in land 2 (NP2), for the tested monarch butterfly optimization (MBO) and self-adaptive population MBO (SPMBO) algorithms. (a) p; (b) NP1; (c) NP2.

Table 2. Best, mean, and worst function values and SD values obtained by the MBO and SPMBO algorithms with dimension D = 30.

Table 3. Best, mean, and worst function values and SD values obtained by the MBO and SPMBO algorithms with dimension D = 60.

Table 4. Best, mean, and worst function values and SD values obtained by the PSO and SPMBO algorithms with dimension D = 30.

Table 5. Best, mean, and worst function values and SD values obtained by the PSO and SPMBO algorithms with dimension D = 60.