A Chaotic Hybrid Butterfly Optimization Algorithm with Particle Swarm Optimization for High-Dimensional Optimization Problems

Abstract: To address the low accuracy and slow convergence of the butterfly optimization algorithm (BOA), and following the trend in the optimization field of hybridizing two or more algorithms to obtain a superior solution, a novel hybrid algorithm named HPSOBOA is proposed, in which three methods are introduced to improve the basic BOA. The population is initialized using a one-dimensional cubic map, a nonlinear parameter control strategy is applied, and the particle swarm optimization (PSO) algorithm is hybridized with BOA to improve the basic BOA for global optimization. Two experiments (covering 26 well-known benchmark functions) were conducted to verify the effectiveness of the proposed algorithm. The comparison results show that the hybrid HPSOBOA converges quickly and has better stability on high-dimensional numerical optimization problems than PSO, BOA, and other well-known swarm optimization algorithms.


Introduction
The butterfly optimization algorithm (BOA) was proposed by Arora and Singh in 2018 [1]. The method and concept of the algorithm were first presented [2] at the 2015 International Conference on Signal Processing, Computing and Control (2015 ISPCC). After the algorithm was proposed, the authors performed many further studies on BOA. Arora and Singh [3] proposed an improved butterfly optimization algorithm with ten chaotic maps for solving three engineering optimization problems. Arora and Singh [4] proposed a new hybrid optimization algorithm which combines the standard BOA with the Artificial Bee Colony (ABC) algorithm. Arora and Singh [5] used the BOA to solve the node localization problem in wireless sensor networks and compared the results with the particle swarm optimization (PSO) algorithm and the firefly algorithm (FA). Arora et al. [6] proposed a modified butterfly optimization algorithm for solving mechanical design optimization problems. Singh and Anand [7] proposed a novel adaptive butterfly optimization algorithm, which adaptively changes the sensory modality of the basic BOA. Sharma and Saha [8] proposed a novel hybrid algorithm (m-MBOA) that enhances the exploitation ability of BOA with the help of the mutualism phase of symbiotic organisms search (SOS). Yuan et al. [9] proposed an improved butterfly optimization algorithm, employed for optimizing system performance analyzed in terms of annual cost, exergy and energy efficiencies, and pollutant emission reduction. Li et al. [10] proposed an improved BOA for engineering design problems using the cross-entropy method. A hybrid intelligent

The Basic Butterfly Optimization Algorithm (BOA)
The nature-inspired meta-heuristic BOA [1,2] simulates the foraging and mating behavior of butterflies. One of the main characteristics that distinguishes BOA from other meta-heuristics is that each butterfly has its own unique scent. The fragrance can be formulated as follows:

f_i = c I^a, (1)

where f_i is the perceived magnitude of fragrance, c represents the sensory modality, I is the stimulus intensity, and a represents the power exponent based on the degree of fragrance absorption. Theoretically, the sensory modality coefficient c can take any value in [0, ∞); in practice, its value is determined by the particularity of the optimization problem during the iterative process of BOA. The sensory modality c in the optimal search phase of the algorithm is updated as follows:

c^(t+1) = c^t + 0.025 / (c^t · T_max), (2)

where T_max is the maximum number of iterations of the algorithm, and the initial value of the parameter c is set to 0.01. In addition, there are two key steps in the algorithm: the global search phase and the local search phase. The mathematical model of the butterflies' global search movement can be formulated as follows:

x_i^(t+1) = x_i^t + (r^2 × g_best − x_i^t) × f_i, (3)

where x_i^t denotes the solution vector x_i of the ith butterfly at iteration t, and r is a random number in [0, 1]. Here, g_best is the current best solution found among all the solutions in the current stage, and f_i represents the fragrance of the ith butterfly. The local search phase can be formulated as follows:

x_i^(t+1) = x_i^t + (r^2 × x_j^t − x_k^t) × f_i, (4)

where x_j^t and x_k^t are the jth and kth butterflies chosen randomly from the solution space. If x_j^t and x_k^t belong to the same iteration, Equation (4) becomes a local random walk; if not, this kind of random movement diversifies the solution.
Both global and local searches for food and a mating partner by the butterfly in nature can occur. Therefore, a switch probability p is set to convert the normal global search and the intensive local search. In each iteration, the BOA randomly generates a number in [0,1], which is compared with switch probability p to decide whether to conduct a global search or local search.
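For illustration, one iteration of the basic BOA described above can be sketched in Python. This is an illustrative sketch, not the authors' reference implementation; the Sphere objective, population size, and the use of the absolute fitness value as the stimulus intensity I are assumptions.

```python
import random

def boa_step(positions, fitness, c=0.01, a=0.1, p=0.8):
    """One iteration of basic BOA (sketch). `positions` is a list of
    solution vectors; `fitness` maps a vector to its objective value
    (minimization). The stimulus intensity I is taken as |fitness|."""
    frag = [c * abs(fitness(x)) ** a for x in positions]   # Equation (1)
    g_best = min(positions, key=fitness)                   # current best solution
    new_positions = []
    for i, x in enumerate(positions):
        r = random.random()
        if random.random() < p:          # global search phase, Equation (3)
            step = [(r * r * gb - xi) * frag[i] for xi, gb in zip(x, g_best)]
        else:                            # local search phase, Equation (4)
            xj = random.choice(positions)
            xk = random.choice(positions)
            step = [(r * r * xj_d - xk_d) * frag[i]
                    for xj_d, xk_d in zip(xj, xk)]
        new_positions.append([xi + s for xi, s in zip(x, step)])
    return new_positions

# usage: minimize the 2-D Sphere function for a few iterations
random.seed(1)
sphere = lambda x: sum(v * v for v in x)
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
for _ in range(50):
    pop = boa_step(pop, sphere)
best = min(sphere(x) for x in pop)
```

The switch probability p decides, for each butterfly, whether the global or the local move is taken, exactly as described in the text.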

The Basic Particle Swarm Optimization (PSO) Model
The PSO algorithm [18] is inspired by a swarm of birds moving in search of food in a multidimensional search space. Position and velocity are the key characteristics of PSO and are used to find the optimal value.
Each individual is called a particle, and each particle is first initialized with a random position and velocity within the search space. The velocity and position updates are as follows:

v_i^(t+1) = w·v_i^t + c_1·rand_1 × (p_best − x_i^t) + c_2·rand_2 × (g_best − x_i^t), (5)

x_i^(t+1) = x_i^t + v_i^(t+1), (6)

where v_i^t and v_i^(t+1) represent the velocity of the ith particle at iteration t and t + 1, respectively. Usually, c_1 = c_2 = 2, and rand_1 and rand_2 are random numbers in (0, 1). The inertia weight w can be calculated as:

w = w_max − (w_max − w_min) · t / T_max, (7)

where w_max = 0.9, w_min = 0.2, and T_max represents the maximum number of iterations.
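The PSO update of Equations (5)-(7) can be sketched as follows. This is an illustrative Python sketch under the parameter values stated in the text (c_1 = c_2 = 2, w_max = 0.9, w_min = 0.2); the Sphere objective and swarm size are assumptions.

```python
import random

def pso_step(xs, vs, pbests, gbest, fit, t, t_max,
             c1=2.0, c2=2.0, w_max=0.9, w_min=0.2):
    """One PSO iteration: linearly decreasing inertia weight (Eq. (7)),
    then velocity (Eq. (5)) and position (Eq. (6)) updates."""
    w = w_max - (w_max - w_min) * t / t_max              # Equation (7)
    for i in range(len(xs)):
        r1, r2 = random.random(), random.random()
        vs[i] = [w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)  # Eq. (5)
                 for v, pb, gb, x in zip(vs[i], pbests[i], gbest, xs[i])]
        xs[i] = [x + v for x, v in zip(xs[i], vs[i])]             # Eq. (6)
        if fit(xs[i]) < fit(pbests[i]):                  # personal best update
            pbests[i] = list(xs[i])
    return min(pbests, key=fit)                          # global best update

# usage on the 2-D Sphere function
random.seed(0)
sphere = lambda x: sum(v * v for v in x)
xs = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(15)]
vs = [[0.0, 0.0] for _ in range(15)]
pbests = [list(x) for x in xs]
gbest = min(pbests, key=sphere)
for t in range(100):
    gbest = pso_step(xs, vs, pbests, gbest, sphere, t, 100)
```

Because g_best is taken as the minimum over the personal bests, its fitness never increases across iterations.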

The Proposed Algorithm
In this section, a novel hybrid algorithm is proposed, and the initialization of BOA by a cubic one-dimensional map is introduced, and a nonlinear parameter control strategy is also performed. In addition, the PSO algorithm is hybridized with BOA in order to improve the basic BOA for global optimization.

Cubic Map
Chaos is a relatively common phenomenon in nonlinear systems. The basic cubic map [50] can be written as follows:

x_(n+1) = α x_n^3 − β x_n,

where α and β represent the chaos factors; when β ∈ (2.3, 3), the cubic map is chaotic. When α = 1, the cubic map evolves in the interval (−2, 2), and the sequence lies in (−1, 1) with α = 4. The cubic map can also be written in the normalized form:

z_(n+1) = ρ z_n (1 − z_n^2), (8)

where ρ is the control parameter. In Equation (8), the sequence of the cubic map lies in (0, 1), and when ρ = 2.595, the chaotic variable z_n has better ergodicity. A graphical presentation of the cubic map for 1000 iterations is given in Figure 1.
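The normalized cubic map of Equation (8) can be generated with a few lines of Python. This is a sketch of the initialization sequence; the initial value z(0) = 0.315 follows the choice made for the proposed algorithm below.

```python
def cubic_map(z0=0.315, rho=2.595, n=1000):
    """Generate n values of the normalized cubic map,
    z_{n+1} = rho * z_n * (1 - z_n**2).  With rho = 2.595 and
    z0 in (0, 1), the sequence stays inside (0, 1) because the map's
    maximum on that interval is rho * 2 / (3 * sqrt(3)) < 1."""
    zs = []
    z = z0
    for _ in range(n):
        z = rho * z * (1.0 - z * z)
        zs.append(z)
    return zs

seq = cubic_map()   # the 1000 iterates plotted in Figure 1
```

The resulting sequence spreads the initial butterfly population over (0, 1), as described for Figure 1.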
In Figure 1, it can be seen that the chaotic map distributes the population of butterflies over random values in the interval (0, 1) during the search phase. We propose the cubic map to initialize the positions of the algorithm, and in order to ensure that the initialized interval is (0, 1), the initial value z(0) of the cubic map is set to 0.315 in the proposed algorithm.

Nonlinear Parameter Control Strategy
From Equations (1), (3), and (4), we can see that the power exponent a plays an important role in BOA's ability to find the optimum. When a = 1, no scent is absorbed, that is, the scent emitted by a specific butterfly is fully perceived by the other butterflies, which means that the search range will be narrowed and the local exploration ability of the algorithm will be improved. When a = 0, the fragrance emitted by any butterfly cannot be perceived by the other butterflies, so the group will expand the search range, that is, improve the global exploration ability of the algorithm. However, a = 0.1 in the basic BOA, and taking a as a fixed value cannot effectively balance the global and local search capabilities. Therefore, we propose a nonlinear parameter control strategy:

a(t) = a_first + (a_final − a_first) · sin(π t / (μ T_max)), (9)

where a_first and a_final represent the initial value and final value of the parameter a, μ is a tuning parameter, and T_max represents the maximum number of iterations. In this paper, μ = 2, T_max = 500, a_first = 0.1, and a_final = 0.3. It can be seen from Figure 2a that for the intensity indicator coefficient a, the nonlinear control strategy based on the sine function proposed in this paper has a larger slope in the early stage, which speeds up the algorithm's global search; the mid-term slope decreases, which eases the transition into local search; and the late slope is gentle, allowing the algorithm to search for the optimal solution. Therefore, it can effectively balance the global search and local search capabilities of the algorithm.
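The schedule for a can be sketched as follows, assuming the sine-based form a(t) = a_first + (a_final − a_first)·sin(πt/(μT_max)) described above with the stated parameter values.

```python
import math

def power_exponent(t, t_max=500, a_first=0.1, a_final=0.3, mu=2.0):
    """Nonlinear control of the power exponent a: a sine schedule that
    rises steeply early on (fast global search) and flattens near
    t = t_max (fine-grained local search) when mu = 2."""
    return a_first + (a_final - a_first) * math.sin(math.pi * t / (mu * t_max))

# a increases monotonically from 0.1 at t = 0 to 0.3 at t = 500
schedule = [power_exponent(t) for t in range(501)]
```

With μ = 2 the sine argument sweeps [0, π/2], so the slope is largest at the start and smallest at the end, matching the behavior shown in Figure 2a.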

From Figure 2b, it can be seen that the convergence curve of the improved BOA with the nonlinear parameter control strategy is better than that of the basic BOA on the Schwefel 1.2 function. The curve has many turning points, indicating that the improved algorithm has the ability to jump out of local optima. The results for different values of the main controlling parameter μ of parameter a are shown in Figure 2c. As the value of μ increases, the effect of the improvement strategy gradually worsens: the convergence curve of the improved BOA with μ = 2 is the best of the seven convergence curves, and when μ ≥ 4, the convergence curve is worse than that of the original BOA.

Hybrid BOA with PSO
In this section, a novel hybrid PSOBOA is proposed, which is a combination of the separate PSO and BOA. The major difference between PSO and BOA is how new individuals are generated. The drawback of the PSO algorithm is that it covers only a limited region of the search space when solving high-dimensional optimization problems.
In order to combine the advantages of the two algorithms, we merge the functionality of both rather than running one algorithm after the other. In other words, the hybrid is heterogeneous because of the way the results of the two algorithms are combined into the final result. The hybrid velocity update is proposed as follows:

V_i^(t+1) = w·V_i^t + C_1·r_1 × (p_best − X_i^t) + C_2·r_2 × (g_best − X_i^t), (10)

where C_1 = C_2 = 0.5, w can also be calculated by Equation (7), and r_1 and r_2 are random numbers in (0, 1).
The global search phase and local search phase of the basic BOA are given by Equations (3) and (4). In the hybrid PSOBOA, however, the global search phase is reformulated as Equation (11) and the local search phase as Equation (12), where X_j^t and X_k^t are the jth and kth butterflies chosen randomly from the solution space, respectively. The pseudo-code of the hybrid PSOBOA is shown in Algorithm 1.

Algorithm 1. Pseudo-code of hybrid PSOBOA
1. Generate the initial population of butterflies X_i (i = 1, 2, . . . , n) randomly
2. Initialize the parameters r_1, r_2, C_1, and C_2
3. Define the sensory modality c, power exponent a, and switch probability p
4. Calculate the fitness value of each butterfly
5. While t <= the max iterations
6.   For each search agent
7.     Update the fragrance of the current search agent by Equation (1)
8.   End for
9.   Find the best fitness value best_f
10.  For each search agent
11.    If r < p then
12.      Move towards the best position by the global search equation
13.    Else
14.      Move randomly by the local search equation
15.    End if
16.  End for
17. End while

In order to combine the advantages of the three improvement strategies proposed in this paper, a novel hybrid HPSOBOA is proposed in this section, which is a combination of the cubic map for the initial population, the nonlinear parameter control strategy of the power exponent a, the PSO algorithm, and BOA.
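The loop of Algorithm 1 can be sketched in Python. Since the hybrid update Equations (11) and (12) are not reproduced in the text, this sketch simply adds a PSO velocity term (with C_1 = C_2 = 0.5, as stated for Equation (10)) to the BOA global and local moves; it is a hypothetical reading for illustration, not the authors' reference implementation, and the Sphere objective and parameter values are assumptions.

```python
import random

def psoboa(fit, dim=2, n=20, t_max=200, p=0.8, c=0.01, a=0.1,
           c1=0.5, c2=0.5, w_max=0.9, w_min=0.2):
    """Illustrative PSOBOA-style loop following Algorithm 1:
    fragrance update, switch-probability branch, plus a PSO velocity
    term added to every move (hypothetical hybrid form)."""
    random.seed(2)
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [list(x) for x in xs]
    gbest = min(xs, key=fit)
    for t in range(t_max):
        w = w_max - (w_max - w_min) * t / t_max          # Equation (7)
        frag = [c * abs(fit(x)) ** a for x in xs]        # Equation (1)
        for i in range(n):
            r = random.random()
            r1, r2 = random.random(), random.random()
            # PSO velocity term in the style of Equation (10)
            vs[i] = [w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
                     for v, pb, gb, x in zip(vs[i], pbest[i], gbest, xs[i])]
            if random.random() < p:      # global search branch
                move = [(r * r * gb - x) * frag[i]
                        for x, gb in zip(xs[i], gbest)]
            else:                        # local search branch
                xj, xk = random.choice(xs), random.choice(xs)
                move = [(r * r * u - v_) * frag[i]
                        for u, v_ in zip(xj, xk)]
            xs[i] = [x + m + v for x, m, v in zip(xs[i], move, vs[i])]
            if fit(xs[i]) < fit(pbest[i]):
                pbest[i] = list(xs[i])
        gbest = min([gbest] + pbest, key=fit)            # keep the best so far
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = psoboa(sphere)
```

Keeping g_best as the running minimum makes the best fitness monotonically nonincreasing across iterations.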
The pseudo-code of novel HPSOBOA is shown in Algorithm 2.
Algorithm 2. Pseudo-code of novel HPSOBOA
1. Generate the initial population of butterflies X_i (i = 1, 2, . . . , n) using the cubic map
2. Initialize the parameters r_1, r_2, C_1, and C_2 and the switch probability p
3. Define the sensory modality c and the initial value of the power exponent a
4. Calculate the fitness value of each butterfly
5. While t <= the max iterations
6.   For each search agent
7.     Update the fragrance of the current search agent by Equation (1)
8.   End for
9.   Find the best fitness value best_f
10.  For each search agent
11.    If r < p then
12.      Move towards the best position by the global search equation
13.    Else
14.      Move randomly by the local search equation
15.    End if
16.  End for
17.  Calculate the new fitness value of each butterfly
18.  If f_new < best_f then
19.    Update the position of best_f using Equation (12)
20.  End if
21.  Update the value of the power exponent a using Equation (9)
22. End while

Experiments and Comparison Results
In this section, we choose 26 high-dimensional test functions from the CEC benchmark set; the name, range, type, and theoretical optimal value of each test function are shown in Table 1. Two experiments are then performed with ten algorithms, including the improved BOA, the novel BOAs proposed in this paper, and other swarm or natural-science-based algorithms. In experiment 1, performance was compared across six algorithms on six benchmark functions in dimensions 100 and 300, respectively. In experiment 2, performance was compared across ten algorithms on the 26 high-dimensional test functions with Dim = 30. Finally, statistical methods were applied, and boxplots of the fitness values over 30 runs on the 26 test functions were also compared. (In Table 1, U denotes a unimodal function and M a multimodal function.)

Numerical Optimization Functions and Experiments
The experiments were all carried out on the same platform: MATLAB 2018a running on Windows 10 (64-bit) with an Intel(R) Core(TM) i5-10210U CPU at 2.11 GHz and 16.0 GB of RAM.

The 26 Test Functions
The properties of the unimodal and multimodal benchmark functions for numerical optimization, all of which are high-dimensional test functions, are listed in Table 1, where Dim indicates the dimension of the function and Range is the boundary of the function's search space. These functions are used to test the performance of the algorithms. In order to analyze the effectiveness of the improvement strategies proposed in this paper, a comparison of BOA [1], CBOA, PSOBOA, HPSOBOA, LBOA [5], and IBOA [9] was designed for six high-dimensional functions from Table 1 with Dim = 100 and Dim = 300 as experiment 1; among these are three unimodal and three multimodal problems. CBOA combines the basic BOA with the cubic map and the nonlinear control strategy of the power exponent a. The hybrid PSOBOA just combines the basic BOA with the PSO algorithm, while the novel HPSOBOA combines the three improvement strategies of Section 4. In addition, two improved BOAs are also compared in this experiment: LBOA [5], proposed by Arora and Singh in 2017 to solve the node localization problem in wireless sensor networks, and IBOA [9], proposed by Yuan et al. in 2019 to optimize system performance analyzed in terms of annual cost, exergy and energy efficiencies, and pollutant emission reduction.

Experiment 2: Comparison with Other Swarm Algorithms
In order to show that the novel hybrid algorithm is superior to other swarm algorithms, experiment 2 was designed for the 26 benchmark functions with Dim = 30. Ten algorithms take part in this experiment: besides the six algorithms of experiment 1, we chose four further optimization algorithms, PSO [18], GWO [27], SCA [22], and MPA [32], which were proposed in different years and follow different principles. The PSO and GWO algorithms simulate the behavior of animals in nature. The SCA is a mathematics-based algorithm, which moves towards the best solution using a model built on sine and cosine functions. The MPA is based on the widespread foraging strategy of ocean predators, namely Lévy and Brownian movements, along with the optimal encounter rate policy in the biological interaction between predator and prey.

Performance Measures
In order to analyze the performance of the algorithms, three criteria are considered for the different swarm algorithms: the Mean (Avg), the Standard deviation (Std), and the Success Rate (SR). The Mean is defined as:

Avg = (1/m) Σ_{i=1}^{m} F_i,

where m is the number of optimization test runs and F_i is the best fitness value of the ith run. The Standard deviation (Std) is defined as follows:

Std = sqrt( (1/(m − 1)) Σ_{i=1}^{m} (F_i − Avg)^2 ).

The Success Rate (SR) is defined as follows:

SR = (m_su / m_all) × 100%,

where m_all is the total number of optimization test runs, and m_su is the number of runs in which the algorithm successfully reached the specified value, where ε < 10^−15 is taken as the specified value.
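The three criteria can be computed as below. This is a sketch; the sample (m − 1 denominator) form of the standard deviation is an assumption, and the example run values are hypothetical.

```python
import math

def performance(runs, eps=1e-15):
    """Compute Mean (Avg), sample Standard deviation (Std), and Success
    Rate (SR, in percent) over m optimization runs, where `runs` holds
    the best fitness value F_i of each run."""
    m = len(runs)
    avg = sum(runs) / m
    std = math.sqrt(sum((f - avg) ** 2 for f in runs) / (m - 1))
    sr = 100.0 * sum(1 for f in runs if f < eps) / m   # runs below eps
    return avg, std, sr

# hypothetical best-fitness values from m = 5 runs
avg, std, sr = performance([0.0, 2e-16, 1e-3, 5e-16, 0.0])
```

Here four of the five runs fall below ε = 10^−15, so the success rate is 80%.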

Comparison of the Parameter Settings of Ten Algorithms
In the experiments, ten comparison algorithms were selected, namely, BOA [1], CBOA, PSOBOA, HPSOBOA, LBOA [5], IBOA [9], PSO [18], GWO [27], SCA [22], and MPA [32]. The parameter settings of the ten algorithms are shown in Table 2. In addition, the population size of each algorithm is set to 30, and the maximum number of iterations is set to 500. Each algorithm is run 30 times, and the Mean (Avg), Standard deviation (Std), Success Rate (SR), and Friedman rank [51] of the results are reported for both experiments.

Results of Experiment 1
For experiment 1, in order to analyze the robustness of the hybrid algorithm with its three improved control strategies against other swarm intelligence algorithms, the convergence curves for the six benchmark functions with Dim = 100 are plotted in Figure 3.
The convergence curves in Figure 3 verify that the proposed HPSOBOA converges faster than the other algorithms. The results show that the improved algorithm based on the three improvement strategies of this paper can effectively improve the convergence behavior of the basic BOA when Dim = 100. From Figures 3 and 4a-f, it can be seen that the proposed HPSOBOA converges better than the original BOA on all of these functions except the Schwefel 1.2 function when Dim = 300.

In order to analyze the robustness of the hybrid HPSOBOA with its three improved control strategies against the other five algorithms, the dimension of the six optimization problems was also set to 300; the corresponding convergence curves are shown in Figure 4. Figure 5 shows the box plots of the optimization results of the six high-dimensional problems for the six algorithms, from which it can be seen that the optimization results of the hybrid HPSOBOA are better than those of the other algorithms.
In addition, statistical tests are essential to check for significant improvements of the proposed novel algorithms over the others. The Friedman rank test [51] was applied to the mean solutions to compare the algorithms improved by the different control strategies. The Avg-rank and overall rank are shown in Table 3. By Friedman rank, HPSOBOA outperforms all the comparison algorithms on the six numerical optimization problems (Schwefel 1.2, Sumsquare, Zakharov, Rastrigin, Ackley, and Alpine), and the order of the six algorithms with Dim = 100 is HPSOBOA > PSOBOA > IBOA > LBOA > CBOA > BOA, whereas with Dim = 300 the order is HPSOBOA > PSOBOA > IBOA > CBOA > LBOA > BOA.
From this analysis we can see that, although HPSOBOA ranks better than the others, IBOA, which uses chaos theory to improve the control parameters, also performed well. Thus, different one-dimensional chaotic maps can also perform well in improving the basic BOA.

Results of Experiment 2
In experiment 2, the performance of the proposed algorithm was compared with the other optimization algorithms on the 26 test functions with Dim = 30. The statistical results include the Mean (Avg), the Standard deviation (Std), the Success Rate (SR), the Friedman rank test [51], and the Wilcoxon rank-sum test [52], since statistical significance testing is an important method for analyzing an improved algorithm; these comparison results are presented in Tables 4-8.
The significance level alpha is set to 0.05 for the Wilcoxon rank-sum (WRS) test and the Friedman rank test, with two hypotheses, the null and the alternative. The null hypothesis is that there is no significant difference between the proposed algorithm and the others. The null hypothesis is accepted if the p-value is greater than alpha; otherwise, the alternative is accepted. The p-values and the Friedman ranks show that the supremacy of the proposed algorithm is statistically significant. Note that the last row of Tables 4-6 gives the rank of each algorithm together with its number of best solutions.
The comparison results of Table 4 show that HPSOBOA yields the best results on the 26 test functions with Dim = 30 except for F_6, F_7, F_10, F_12, F_13, F_14, F_17, F_20, F_21, F_23, and F_25. For functions F_6, F_7, F_10, F_12, and F_23, the hybrid HPSOBOA obtains an optimal fitness value that is close to, but slightly worse than, that of other algorithms. For F_13, F_14, F_17, F_20, F_21, and F_25, the best solutions are found by other algorithms such as GWO, PSO, MPA, and IBOA; MPA obtains the best solution twice, and IBOA, which improves the control parameters with a logistic map, also obtains the best solution twice. Combining the comparison results of Tables 5 and 6, IBOA is best in the SR rank under the specified value ε < 10^−15, and the order of the ten algorithms is IBOA > HPSOBOA > MPA > GWO > CBOA = LBOA > PSOBOA > PSO = SCA > BOA. The orders of HPSOBOA and IBOA differ only on function F_17, where the SR of HPSOBOA is 93.33% while that of IBOA is 100% for reaching the global optimum under the specified value ε < 10^−15 accepted in this paper. Therefore, the performance of the proposed algorithm still needs improvement in future work.
In addition, the comparison results of the Friedman rank test are shown in Table 6; from the Avg-rank, the final order of the rank means over the ten swarm algorithms is HPSOBOA > MPA > IBOA > PSOBOA > CBOA > GWO > LBOA > PSO > BOA > SCA. The WRS test values for the 26 high-dimensional test functions of HPSOBOA vs. the others are given in Tables 7 and 8, respectively, where N/A means not applicable in Table 7. It can be seen from these tables that there is a significant difference between the proposed hybrid HPSOBOA and the other algorithms on the 26 test functions with Dim = 30. In Table 8, H = 1 indicates rejection of the null hypothesis at the 5% significance level, while H = 0 indicates a failure to reject the null hypothesis at that level. In addition, Table 9 shows the comparison results of the t-test for the 26 benchmark functions of the proposed HPSOBOA against the other algorithms. Figure 6 shows the box plots of the optimization results of the 26 high-dimensional problems for the ten algorithms. It is clear from Figure 6 that the averages of the fitness function are not normally distributed, each algorithm being run 30 times on the 26 test functions. The values of SCA are relatively poor among the ten algorithms.
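The Avg-rank column of the Friedman comparison can be computed as below. This is a sketch of the ranking step only (lower mean fitness receives a better, i.e., smaller, rank, and ties share the average rank); the input numbers are hypothetical, not values from Tables 4-6.

```python
def friedman_avg_ranks(means):
    """Average Friedman ranks from a table of mean results, where
    means[f][a] is the mean best fitness of algorithm a on function f
    (minimization). Returns one average rank per algorithm."""
    n_alg = len(means[0])
    totals = [0.0] * n_alg
    for row in means:
        order = sorted(range(n_alg), key=lambda a: row[a])
        ranks = [0.0] * n_alg
        i = 0
        while i < n_alg:                 # assign ranks, averaging over ties
            j = i
            while j + 1 < n_alg and row[order[j + 1]] == row[order[i]]:
                j += 1
            shared = (i + j) / 2.0 + 1.0     # average of tied positions
            for k in range(i, j + 1):
                ranks[order[k]] = shared
            i = j + 1
        for a in range(n_alg):
            totals[a] += ranks[a]
    return [t / len(means) for t in totals]

# three algorithms on three functions (hypothetical mean results)
ranks = friedman_avg_ranks([[1e-9, 2e-3, 2e-3],
                            [1e-8, 5e-2, 1e-1],
                            [3e-7, 4e-4, 9e-1]])
```

The final ordering of the algorithms is then obtained by sorting the average ranks in ascending order.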

Conclusions and Future Work
In this paper, we proposed three improvement strategies, and they are as follows: (1) the initialization of BOA by cubic map; (2) a nonlinear parameter control strategy for the power exponent a; (3) hybrid PSO algorithm with BOA. These strategies all aim to improve the ability for global optimization of the basic BOA.

In order to analyze the effectiveness of the improvement strategies, the novel hybrid algorithm was compared with other swarm algorithms in two experiments dealing with 26 high-dimensional optimization problems. A cubic map was employed for the initial population of HPSOBOA, and the experimental results show that its initial fitness values are superior to those of BOA and the other algorithms. The experimental results also suggest that other one-dimensional chaotic maps may perform well in improving the basic BOA, and that the MPA, proposed in 2020, is likely to be applied in more fields.
In future work, the performance of the proposed algorithm needs further improvement, including adjusting its control parameters to optimize algorithm performance. Two-dimensional and three-dimensional chaotic systems could, in theory, also improve the BOA or other swarm intelligence algorithms. The improved algorithm can also be applied to real-world problems, such as engineering problems, wireless sensor network (WSN) deployment, proportional-integral-derivative (PID) control, and the analysis of regional economic activity [53].