Article

Hybrid Multi-Strategy Improved Butterfly Optimization Algorithm

by Panpan Cao and Qingjiu Huang
1 School of Information and Electronic Engineering, Zhejiang Gongshang University, Hangzhou 310018, China
2 Control System Laboratory, Graduate School of Engineering, Kogakuin University, Tokyo 163-8677, Japan
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(24), 11547; https://doi.org/10.3390/app142411547
Submission received: 5 November 2024 / Revised: 27 November 2024 / Accepted: 6 December 2024 / Published: 11 December 2024

Abstract
To address the issues of poor population diversity, low accuracy, and susceptibility to local optima in the Butterfly Optimization Algorithm (BOA), an Improved Butterfly Optimization Algorithm with multiple strategies (IBOA) is proposed. The algorithm employs SPM chaotic mapping and reverse learning to initialize the population, enhancing its diversity; it uses Lévy flight and sine cosine search strategies to update individual positions during the global and local search phases, respectively, expanding the search scope and preventing the algorithm from falling into local optima; and finally, it introduces a simulated annealing mechanism that accepts worse solutions with a certain probability, enriching the diversity of solutions during optimization. Simulation results comparing IBOA with Particle Swarm Optimization, BOA, and three other improved BOA variants on ten benchmark functions demonstrate that IBOA improves both convergence speed and search accuracy.

1. Introduction

In recent years, with the development of natural computing and intelligent algorithms, metaheuristic algorithms have emerged. From the perspective of their formation principles, metaheuristic algorithms can be divided into three categories: those based on evolutionary mechanisms, such as the Imperialist Competitive Algorithm [1] and Biogeography-Based Optimization [2]; those based on physical principles, such as the Simulated Annealing algorithm and Surface Space Optimization; and those based on swarm intelligence, such as Particle Swarm Optimization and the Grey Wolf Optimization Algorithm. Among these, metaheuristic algorithms based on swarm intelligence have attracted the most attention.
Arora and Singh, by observing the foraging and migration behaviors of butterflies in nature, proposed the Butterfly Optimization Algorithm (BOA) [3]. It has been widely used in path planning [4,5], fault detection [6], image processing [7], and other fields [8,9]. Although BOA has shown strong performance in various application scenarios, no single algorithm can be applied to all optimization problems. BOA also has drawbacks, such as the lack of a disturbance mechanism within the population, susceptibility to local optima, and the winner-takes-all competition between new individuals and the best individual, which forfeits opportunities to explore other regions. Since the algorithm was proposed, scholars have therefore put forward different strategies to address these shortcomings. The common improvement directions can be roughly divided into six categories: heterogeneous integration, adaptive parameters, noise perturbation, information exchange, guided selection, and update mechanisms [10]. In addition, some improvements borrow the concept of feedback control loops from circuit analysis, so that butterflies are influenced by positive or negative feedback from other butterflies [11].
In this paper, an improved butterfly optimization algorithm (IBOA) that integrates several strategies is proposed. First, SPM chaotic mapping and reverse learning are used to initialize the population, and the fittest individuals are selected to form the new population. Second, a Lévy flight strategy is introduced in the global search stage, allowing individual butterflies to make long-distance jumps. Third, in the local search stage, the perturbation characteristics of the sine cosine algorithm are used to expand the search range. Finally, a simulated annealing mechanism is introduced into BOA, exploiting its property of accepting worse solutions with a certain probability to improve the quality of the global solution. The improved BOA overcomes the limitations of the traditional BOA by enhancing its search ability and formulating more efficient search strategies, thereby achieving faster convergence and higher optimization accuracy.
The structure of this article is as follows. Section 2 provides a brief introduction to the standard BOA. Section 3 describes the IBOA in detail. Section 4 presents the comparison algorithms, test functions, experimental results, and statistical analysis. Section 5 concludes the paper.

2. Butterfly Optimization Algorithm

The Butterfly Optimization Algorithm (BOA) is a nature-inspired metaheuristic algorithm based on the foraging behavior of butterflies. This algorithm simulates the process by which butterflies locate food sources by sensing fragrance. Butterflies perceive the fragrance released by other butterflies and move according to the intensity of the fragrance, where each butterfly represents a potential solution, and the intensity of the fragrance they release is related to their fitness (the quality of the solution).
The fragrance intensity is calculated as follows:
$f = c \times I^{\alpha}$ (1)
In the formula, c is the sensory coefficient, I is the stimulus intensity, which is related to the objective function, and α is the exponent. In Reference [3], c is taken as 0.01, and α is taken as 0.1.
Before the BOA runs, the positions of the population individuals are randomly generated and the fragrance concentration of each butterfly is calculated. During the iteration process, both global and local searches may occur. To this end, a switch probability P is set to determine which is performed: in each iteration, a random number r between 0 and 1 is generated and compared with P.
If r ≤ P, then a global search is conducted, and the butterfly individuals will move towards the current global best position. The formula for this movement is as follows:
$x_i^{t+1} = x_i^t + f_i \times \left(r^2 \times g^{*} - x_i^t\right)$ (2)
If r > P, then a local search is performed. The formula for this movement is as follows:
$x_i^{t+1} = x_i^t + f_i \times \left(r^2 \times x_k^t - x_j^t\right)$ (3)
The final update formula for the BOA is as follows:
$x_i^{t+1} = \begin{cases} x_i^t + f_i \times \left(r^2 \times g^{*} - x_i^t\right), & r \le P \\ x_i^t + f_i \times \left(r^2 \times x_k^t - x_j^t\right), & r > P \end{cases}$ (4)
where P is typically set to 0.8, $g^{*}$ represents the global best solution, $x_i^t$ represents the solution vector of the i-th butterfly at the t-th iteration, $x_k^t$ and $x_j^t$ represent the solution vectors of the k-th and j-th butterflies, respectively, randomly selected from the solution space, and $f_i$ represents the fragrance strength of the i-th butterfly.
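To make the update rule concrete, the following Python sketch implements one iteration of Equations (1)–(4). This is an illustrative reading rather than the authors' code: the mapping from objective value to stimulus intensity I is an assumption (the paper only states that I is related to the objective function), and boundary handling is omitted.

```python
import numpy as np

def boa_step(X, fitness_fn, g_best, c=0.01, alpha=0.1, P=0.8):
    """One iteration of the standard BOA update, Equations (1)-(4).

    X          : (N, D) array of butterfly positions
    fitness_fn : objective function (minimized), position -> scalar
    g_best     : current global best position
    """
    N, _ = X.shape
    I = np.array([fitness_fn(x) for x in X])   # stimulus intensity (assumed = objective value)
    f = c * np.abs(I) ** alpha                 # fragrance, Eq. (1)
    X_new = X.copy()
    for i in range(N):
        r = np.random.rand()
        if r <= P:                             # global search, Eq. (2)
            X_new[i] = X[i] + f[i] * (r**2 * g_best - X[i])
        else:                                  # local search, Eq. (3)
            j, k = np.random.choice(N, 2, replace=False)
            X_new[i] = X[i] + f[i] * (r**2 * X[k] - X[j])
    return X_new
```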

3. Improvement of the BOA

3.1. Optimization of Initial Population Positions

The traditional initialization of BOA randomly generates the positions of butterflies, which does not guarantee that the generated population covers the search space thoroughly or diversely. This shortcoming is in turn reflected in the global search efficiency and solution accuracy of the Butterfly Optimization Algorithm.
Introducing a chaotic function for population initialization can reduce the probability of getting stuck in local optima and strengthen global search and optimization. The Sine-Plane Map (SPM) is a classic chaotic map known for its ergodicity and randomness, and its performance is superior to that of the Logistic and Henon chaotic maps [12]. Therefore, this paper uses SPM chaotic mapping, with its good ergodicity and uniformity, to initialize the new population. The calculation formula is as follows:
$x_{i+1} = \begin{cases} \operatorname{mod}\left(\dfrac{x_i}{\eta} + \mu \sin(\pi x_i) + r,\, 1\right), & 0 \le x_i < \eta \\ \operatorname{mod}\left(\dfrac{x_i/\eta}{0.5 - \eta} + \mu \sin(\pi x_i) + r,\, 1\right), & \eta \le x_i < 0.5 \\ \operatorname{mod}\left(\dfrac{(1 - x_i)/\eta}{0.5 - \eta} + \mu \sin(\pi (1 - x_i)) + r,\, 1\right), & 0.5 \le x_i < 1 - \eta \\ \operatorname{mod}\left(\dfrac{1 - x_i}{\eta} + \mu \sin(\pi (1 - x_i)) + r,\, 1\right), & 1 - \eta \le x_i < 1 \end{cases}$ (5)
In the formula, $x_i$ represents the chaotic variable of the i-th individual in the population, $\eta \in (0, 0.5)$, $\mu \in (0, 1)$, and r is a random number between 0 and 1.
When using the SPM chaotic mapping, a set of initial chaotic sequences $x_{N \times dim} = \{x_{ij},\ i = 1, 2, \ldots, N,\ j = 1, 2, \ldots, dim\}$ is randomly generated within the search interval (lb, ub). By mapping the chaotic vectors into the algorithm's solution space, the chaotic butterfly population $Q_1 = \{X_i^{SPM},\ i = 1, 2, \ldots, N\}$ is obtained, where dim represents the dimension and N is the population size.
The new butterfly individual is generated by the following formula:
$X_{i,j}^{SPM} = X_{lb}^{j} + \left(X_{ub}^{j} - X_{lb}^{j}\right) \times x_{ij}$ (6)
$X_i^{SPM} = \left\{X_{i,j}^{SPM},\ j = 1, 2, \ldots, dim\right\}$ (7)
where $X_{ub}^{j}$ and $X_{lb}^{j}$ represent the upper and lower bounds of the j-th dimension for the i-th butterfly individual in the new population, respectively.
Reverse learning (opposition-based learning) is a concept in machine learning first proposed in 2005 [13] that has since been widely used to enhance the performance of classical algorithms. It primarily involves generating an inverse solution, comparing it with the current solution, and selecting the better one. Jiao et al. [14] applied reverse learning to the Particle Swarm Optimization algorithm, which not only increased the diversity of particles but also prevented them from falling into local extrema, and it has been shown that this approach can improve the quality of solutions and reduce time complexity. However, the inverse solution generated by standard reverse learning lies at a fixed distance from the current solution and lacks randomness [15]. Therefore, to optimize the quality of the initial population as far as possible, this paper introduces a random reverse learning strategy after generating initial solutions with chaotic mapping. From the current solution, an inverse solution is produced as follows:
$x_i^{obl} = rand \times (lb + ub) - x_i$ (8)
In the formula, $x_i^{obl}$ represents the reverse solution generated by the reverse learning strategy, $x_i$ represents the feasible solution obtained by mapping the chaotic sequence produced by the Sine-Plane Map (SPM) into the algorithm's solution space, and lb and ub represent the lower and upper bounds of x, respectively.
Applying Equation (8) dimension-wise, the reverse butterfly population Q2 is generated as follows:
$X_{i,j}^{obl} = rand \times \left(X_{ub}^{j} + X_{lb}^{j}\right) - X_{i,j}^{SPM}$ (9)
$X_i^{obl} = \left\{X_{i,j}^{obl},\ j = 1, 2, \ldots, dim\right\}$ (10)
The chaotic butterfly population Q1 generated by the SPM chaotic mapping competes with the reverse population Q2: the fitness values of all butterflies in Q1 and Q2 are calculated and sorted from smallest to largest, and the top N individuals are taken as the initial population X_new of the butterfly optimization algorithm, where N is the population size.
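The initialization stage can be summarized with the following Python sketch, assuming a minimization problem with scalar bounds. The concrete values of η and μ are assumptions (the paper only gives the ranges η ∈ (0, 0.5) and μ ∈ (0, 1)), and the sign placements in the SPM branches follow the reconstruction of Equation (5) above.

```python
import numpy as np

def spm_map(x, eta=0.4, mu=0.3):
    """One step of the SPM chaotic map, Eq. (5); eta in (0, 0.5), mu in (0, 1)."""
    r = np.random.rand()
    if x < eta:
        return (x / eta + mu * np.sin(np.pi * x) + r) % 1
    elif x < 0.5:
        return ((x / eta) / (0.5 - eta) + mu * np.sin(np.pi * x) + r) % 1
    elif x < 1 - eta:
        return (((1 - x) / eta) / (0.5 - eta) + mu * np.sin(np.pi * (1 - x)) + r) % 1
    else:
        return ((1 - x) / eta + mu * np.sin(np.pi * (1 - x)) + r) % 1

def init_population(fitness_fn, N, dim, lb, ub):
    """SPM chaotic initialization plus random reverse learning.

    Builds Q1 via Eqs. (5)-(7) and Q2 via Eqs. (8)-(10), then keeps the
    N fittest individuals from the union of Q1 and Q2 (minimization).
    """
    chaos = np.empty((N, dim))
    x = np.random.rand()
    for i in range(N):
        for j in range(dim):
            x = spm_map(x)
            chaos[i, j] = x
    Q1 = lb + (ub - lb) * chaos                      # Eqs. (6)-(7)
    Q2 = np.random.rand(N, dim) * (lb + ub) - Q1     # Eqs. (9)-(10)
    Q2 = np.clip(Q2, lb, ub)                         # keep reverse solutions feasible
    pool = np.vstack([Q1, Q2])
    fit = np.array([fitness_fn(p) for p in pool])
    return pool[np.argsort(fit)[:N]]                 # top N by fitness
```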

3.2. Lévy Flight Strategy

The Lévy flight strategy is a random walk strategy based on the Lévy distribution, used in solving optimization problems. The Lévy distribution is heavy-tailed, which allows it to model random processes with long-range correlations and occasional large jumps.
The purpose of introducing the Lévy flight strategy is to use the Lévy distribution to generate random step sizes, allowing butterfly individuals to make larger displacements in a short time during the search phase, thereby improving search efficiency.
The expression for levy(s), which represents the path of a random search, is as follows:
$levy(s) = \dfrac{\lambda \beta \Gamma(\lambda) \sin(\pi \lambda / 2)}{\pi} \times \dfrac{1}{s^{1+\lambda}}$ (11)
where β = 1, λ is the exponent parameter, Γ denotes the gamma function, and s represents the step size of the Lévy flight, given by
$s = \dfrac{u}{|v|^{1/\beta}}$ (12)
where u and v both follow normal distributions: $u \sim N(0, \sigma^2)$, $v \sim N(0, 1)$, with $\sigma = \left[\dfrac{\Gamma(1+\beta)\sin(\pi \lambda / 2)}{\beta\, \Gamma\!\left(\frac{1+\beta}{2}\right) 2^{(\beta-1)/2}}\right]^{1/\beta}$.
In this paper, the Lévy flight strategy is introduced during the global search phase of the Butterfly Algorithm, which allows the butterfly population to take larger leap steps, thereby achieving the goal of expanding the search range. After introducing the Lévy flight, the position update formula for butterflies during the global search phase of the algorithm is as follows:
$x_i^{t+1} = x_i^t + f_i \times \left(r^2 \times g^{*} - x_i^t\right) \times levy(s)$ (13)
In the formula, $x_i^{t+1}$ represents the solution vector of the i-th butterfly at iteration t + 1, $g^{*}$ represents the global best solution, and r represents a random number between 0 and 1.
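A common way to realize Equations (11)–(13) in code is Mantegna's algorithm, sketched below in Python. The stability exponent of 1.5 is a conventional default and an assumption here (the paper sets β = 1 and treats λ as the exponent parameter).

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5):
    """Draw a Levy-distributed step via Mantegna's algorithm, Eqs. (11)-(12)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (beta * gamma((1 + beta) / 2) * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, dim)   # u ~ N(0, sigma^2)
    v = np.random.normal(0.0, 1.0, dim)     # v ~ N(0, 1)
    return u / np.abs(v) ** (1 / beta)      # step s, Eq. (12)

# Global search update with Levy flight, Eq. (13):
#   X[i] = X[i] + f[i] * (r**2 * g_best - X[i]) * levy_step(dim)
```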

3.3. Sine Cosine Search Strategy

An effective local search mechanism can enhance the search efficiency and optimization accuracy of BOA. Inspired by the Sine Cosine Algorithm (SCA) [16], which uses the oscillatory characteristics of sine and cosine functions to explore unknown areas of the solution space, we guide the local search phase of BOA with the sine and cosine terms from the SCA [17]. To provide more movement directions for the randomly searching butterflies, a random number q in (0, 1) is introduced: when q < 0.5, the butterfly position in the local search phase is updated using Equation (14); otherwise, it is updated using Equation (15).
$x_i^{t+1} = x_i^t \times r_1 \sin(r_2) + f_i \times \left(r^2 \times x_k^t - x_j^t\right)$ (14)
$x_i^{t+1} = x_i^t \times r_1 \cos(r_2) + f_i \times \left(r^2 \times x_k^t - x_j^t\right)$ (15)
where $r_1 = 2\left(1 - \frac{t}{T}\right)$, t is the current iteration number, T is the maximum number of iterations, and $r_2$ is a random number in (0, 2π).
As the number of iterations increases, $r_1$ decreases rapidly. In the Sine Cosine Algorithm, when $r_1 < 1$, candidate positions can fall outside the region between the current solution and the target, which helps the algorithm explore more of the search space and escape local optima.
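The following Python sketch shows the branching local search of Equations (14) and (15). How the sine/cosine term combines with $x_i^t$ follows the reconstruction above and should be read as one plausible interpretation rather than the authors' exact implementation.

```python
import numpy as np

def sc_local_search(X, i, f, t, T):
    """Sine cosine local search, Eqs. (14)-(15).

    X : (N, D) positions; f : fragrance values; t, T : current / max iteration.
    """
    N = X.shape[0]
    r = np.random.rand()
    r1 = 2 * (1 - t / T)                          # shrinks toward 0 over the run
    r2 = 2 * np.pi * np.random.rand()             # random angle in (0, 2*pi)
    j, k = np.random.choice(N, 2, replace=False)  # two random butterflies
    q = np.random.rand()
    osc = np.sin(r2) if q < 0.5 else np.cos(r2)   # Eq. (14) vs. Eq. (15)
    return X[i] * r1 * osc + f[i] * (r**2 * X[k] - X[j])
```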

3.4. Simulated Annealing Algorithm

The Butterfly Optimization Algorithm relies primarily on the attraction of fragrance between butterflies and lacks a disturbance mechanism, making it prone to falling into local optima and drawing other individuals toward them, so the algorithm may fail to reach the theoretical best value. The Simulated Annealing (SA) algorithm [18] accepts solutions worse than the current one with a certain probability, so that every new solution has a chance of being accepted during the search for the optimum. This mechanism of accepting worse solutions increases the diversity of the search and reduces the risk of getting stuck in local optima [19]. If a new solution is worse than the current one, its acceptance probability is determined by the Metropolis criterion. The Metropolis criterion formula is as follows:
$p = \exp\left(-\dfrac{\theta}{T}\right)$ (16)
In the equation, θ represents the fitness difference between the new solution and the current solution, and T is the temperature, which is decreased according to a cooling schedule during the search.
After each update of the butterfly positions, the best individual is selected, and the fitness of this new individual is compared with the current best value. If the new individual is better than the current best, it replaces the current best individual. Otherwise, the Metropolis criterion is used to determine whether to accept the new individual.
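In code, the Metropolis test of Equation (16) reduces to a few lines; the geometric cooling schedule mentioned in the comment is an assumption, as the paper does not specify one.

```python
import numpy as np

def metropolis_accept(f_new, f_cur, temperature):
    """Accept a candidate per Eq. (16); minimization assumed.

    Better solutions are always accepted; worse ones are accepted with
    probability exp(-theta / T), where theta is the fitness gap.
    """
    if f_new <= f_cur:
        return True
    theta = f_new - f_cur
    return np.random.rand() < np.exp(-theta / temperature)

# A geometric cooling schedule such as T_{k+1} = 0.95 * T_k is a typical
# choice; the paper only states that T decreases during the search.
```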

3.5. IBOA Steps

This paper introduces the four improvement strategies described above into the BOA to obtain the IBOA. The specific steps of the algorithm are as follows:
Step 1: Parameter Initialization. The population size of the algorithm is set to N, and the maximum number of iterations is T_MAX.
Step 2: The butterfly population generates two new populations, Q1 and Q2, using formulas (7) and (10). The current fitness values of each individual butterfly in the two populations are calculated based on the fitness function, and a comparison is made. The top N individuals are selected as the new butterfly population.
Step 3: Calculate the fitness of the current individuals and save the current optimal value f_min and the corresponding best solution. Determine whether each individual performs a global or a local search based on the switch probability P. The update formula for the global search of butterfly positions is Equation (13), and the update formula for the local search is determined by the selection probability q, using Equation (14) or (15).
Step 4: Calculate the fitness of the positions after the update for each butterfly. If the fitness is better than the current optimal value, update the optimal position; otherwise, decide whether to accept the new individual according to Formula (16).
Step 5: End of the current iteration. Return to Step 3 to continue the next iteration until the maximum number of iterations T_MAX is reached.
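Putting Steps 1–5 together, a minimal IBOA driver might look like the following Python sketch, which reuses the helper functions sketched in the previous subsections (init_population, levy_step, sc_local_search, metropolis_accept). The initial temperature and cooling factor are illustrative assumptions.

```python
import numpy as np

def iboa(fitness_fn, N, dim, lb, ub, T_MAX, P=0.8, temp0=100.0, cool=0.95):
    """Minimal IBOA loop following Steps 1-5 (minimization assumed)."""
    X = init_population(fitness_fn, N, dim, lb, ub)        # Steps 1-2
    fit = np.array([fitness_fn(x) for x in X])
    best, f_best = X[fit.argmin()].copy(), fit.min()
    temp = temp0
    for t in range(T_MAX):                                 # Steps 3-5
        f = 0.01 * np.abs(fit) ** 0.1                      # fragrance, Eq. (1)
        for i in range(N):
            r = np.random.rand()
            if r <= P:                                     # global search, Eq. (13)
                cand = X[i] + f[i] * (r**2 * best - X[i]) * levy_step(dim)
            else:                                          # local search, Eqs. (14)/(15)
                cand = sc_local_search(X, i, f, t, T_MAX)
            cand = np.clip(cand, lb, ub)
            f_cand = fitness_fn(cand)
            if metropolis_accept(f_cand, fit[i], temp):    # Step 4
                X[i], fit[i] = cand, f_cand
        if fit.min() < f_best:                             # track global best
            best, f_best = X[fit.argmin()].copy(), fit.min()
        temp *= cool                                       # cool the SA temperature
    return best, f_best
```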

3.6. Time Complexity

In the Improved Butterfly Optimization Algorithm (IBOA) proposed in this paper, with a population size of N, dimension D, and a maximum of T_MAX iterations, the time complexity of generating the population through the Sine-Plane Map (SPM) and reverse learning is O(2N); the complexity of updating the position of each butterfly is O(N × D); and the complexity of the simulated annealing acceptance test is O(N). The total time complexity is therefore O((2N + N × D + N) × T_MAX) = O(N × D × T_MAX), which is of the same order as that of the original BOA.

4. Simulations and Analyses

To verify the effectiveness of the improved algorithm proposed in this paper, ten classic benchmark functions are selected [20]. F1 to F6 are unimodal functions, which have only one strict extremum within the considered interval and are mainly used to measure convergence behavior. F7 to F10 are multimodal functions, which have multiple local or global optima within the interval and are mainly used to test the algorithm's ability to escape local optima and search globally. Table 1 lists the specific information about these ten functions.
The experimental environment is a 12th Gen Intel(R) Core(TM) i7-1260P processor running the Windows 11 operating system, and the development tool is MATLAB R2020b.

4.1. Effectiveness Analysis of the Improvement Strategies

To verify the effectiveness of the improvement strategies, six algorithms were compared: the basic BOA; IBOA-1 (improved population initialization); IBOA-2 (sine cosine search strategy); IBOA-3 (Lévy flight strategy); IBOA-4 (simulated annealing mechanism); and the full IBOA. Each algorithm was run 50 times on the 10 test functions, and the differences in the optimal values, means, and standard deviations of the results were used to evaluate the optimization scheme. The detailed experimental results are given in Table 2.
On all test functions, both the single-strategy variants and the full IBOA outperform the basic BOA, verifying that each of the four improvement strategies is effective. The optimization accuracy of the full IBOA is consistently better than that of the single-strategy variants.

4.2. Analysis of Function Optimization Results

To verify the feasibility of the improved algorithm proposed in this paper, the traditional Butterfly Optimization Algorithm (BOA), the traditional Particle Swarm Optimization algorithm (PSO), the Butterfly Optimization Algorithm Combining Sine Cosine and Iterative Chaotic Map with Infinite Collapses (SIBOA) [21], the Piecewise Weight and Mutation Opposition-Based Learning Butterfly Optimization Algorithm (PWMBOA) [22], and the Butterfly Optimization Algorithm Based on Convergence Factor and Salp Swarm (CFSSBOA) [23] were compared on the same benchmark functions.
These comparators were chosen because SIBOA adopts adaptive parameters and an added disturbance, PWMBOA increases population information exchange, and CFSSBOA hybridizes BOA with another algorithm; these represent the classic directions for improving BOA. Comparison against these three algorithms therefore gives a sound assessment of the performance and potential of the new algorithm.
To ensure a fair comparison, the basic parameters of all algorithms in the experiment are set identically, with the population size set to 100 and the maximum number of iterations set to 500.
Each benchmark function is solved using BOA, PSO, SIBOA, PWMBOA, CFSSBOA, and the proposed improved Butterfly Optimization Algorithm (IBOA). Data from fifty runs are collected for each algorithm, and the best solution, worst solution, average value, and standard deviation are extracted. Smaller best and worst solutions indicate higher optimization precision, while smaller average values and standard deviations indicate better stability in the optimization process. The test results are listed in Table 3.
The optimal and worst values reflect the optimization quality of an algorithm. The data in Table 3 show that for functions F1, F2, F3, F4, F8, and F10, the improved IBOA achieves the theoretical optimal value. For functions where IBOA does not reach the theoretical optimum, such as F5, F6, and F7, IBOA is still the best-performing of the compared algorithms. For F9, the optimal value of IBOA equals that of SIBOA, PWMBOA, and CFSSBOA and is eleven orders of magnitude better than that of BOA, which indicates that the introduction of multiple strategies can indeed enhance the performance of the algorithm.
From the perspective of average values and standard deviations, the standard deviation reflects the stability and reliability of an algorithm, while the average value reflects the convergence speed. According to the data in Table 3, when solving the F6 and F7 functions, IBOA has the smallest average value and a relatively small standard deviation. For the F1, F2, F3, F4, F8, F9, and F10 functions, the standard deviation of IBOA is 0, which indicates that the high-precision results obtained by IBOA on the benchmark functions are not accidental events.
This analysis suggests that IBOA not only converges quickly, as evidenced by the small average values, but also demonstrates a high level of stability and reliability, as indicated by the low standard deviations. The fact that the standard deviation is zero for several functions implies that IBOA consistently finds the optimal solution or a solution very close to it across multiple runs, which is a strong indicator of the algorithm’s robustness in these cases.
For comparison with Reference [21], under the conditions of 50 dimensions and a maximum of 500 iterations, all the test functions used in [21] are included among the test functions of this paper. The results for F1, F3, F8, and F10 are identical, with both algorithms achieving the theoretical optimal values, while the results for F2, F4, and F6 are better than those in [21], indicating that the optimization accuracy of the proposed algorithm is higher. Compared with [22], the seven test functions used there are also included in this paper; on F1, F3, F5, and F7 the theoretical optimal value is reached. On F2, although IBOA does not reach the theoretical optimum, its accuracy still reaches 3.1931 × 10−320, and on F4, IBOA's optimization accuracy and standard deviation are better than those in [22]. Compared with [23], IBOA achieves better average values and standard deviations on F6 and F9 than those reported there. This clearly illustrates the superiority of the algorithm presented in this paper.
The total optimization time for 50 runs of each algorithm on the benchmark functions is shown in Table 4. Overall, the running time of IBOA is higher than that of BOA, but given the gains from the improved strategies, the increase remains within an acceptable range.
In summary, when optimizing the ten benchmark functions, the performance of the IBOA algorithm proposed in this paper has been significantly enhanced. Specifically, compared to the traditional BOA, IBOA has achieved higher optimization accuracy, and the convergence speed of the algorithm has also been improved, demonstrating a stronger optimization capability. When compared to the other three improved BOA algorithms, IBOA has also made significant progress in terms of optimization precision and stability.

4.3. Convergence Analysis

Figure 1 displays the average convergence curves for the benchmark functions F1 to F10, with all functions having a dimension of 30 and each function run independently 50 times to analyze the convergence of each improvement strategy. To better observe the convergence behavior, the vertical axis uses a base-10 logarithmic scale; when the optimization value of an algorithm reaches 0, its curve coincides with the horizontal axis.
From Figure 1, it can be observed that, with the exception of F7, IBOA not only achieves the optimal values but also shows a faster decline in its convergence curve than the other strategy-improved algorithms as the iterations proceed. In addition, IBOA demonstrates excellent exploratory ability, continuing to approach the optimal values when other algorithms stagnate, as is evident from the iteration curves of F1, F2, F3, F4, F8, F9, and F10. IBOA's ability to escape local optima is also remarkable: on F5, although IBOA falls into a local optimum, its optimization ability and convergence speed are the best among the compared algorithms, and on F7 the optimization accuracy of IBOA and CFSSBOA is close, with both outperforming the other algorithms.
In summary, all the experimental results demonstrate that the IBOA algorithm, which integrates four improvement strategies, has superior optimization capability and precision compared to the traditional BOA. Furthermore, when compared to several other improved versions of the BOA, the IBOA shows significant enhancement.

4.4. Analysis of Wilcoxon Rank Sum Test Results

Table 3 lists the optimal value, worst value, average value, and standard deviation of each algorithm on the ten benchmark functions, but these statistics alone do not clearly establish whether the differences between algorithms are significant. Therefore, the Wilcoxon rank-sum test is applied to the average results at the 5% significance level, with p denoting the test result. Table 5 lists the p-values of IBOA against BOA, PSO, SIBOA, PWMBOA, and CFSSBOA; "NaN" indicates that the performance of the two algorithms is identical, so no ranking is possible. In the summary row, "+", "=", and "−" indicate the number of functions on which IBOA is significantly better than, statistically equivalent to, or worse than the compared algorithm, thereby determining whether IBOA is significantly improved on the benchmark functions.
Based on Table 5, it is evident that IBOA has a definite advantage over the original algorithms BOA and PSO. For the improved algorithms SIBOA, PWMBOA, and CFSSBOA, the majority of the Wilcoxon rank-sum test p-values are below 5%, indicating a significant difference in performance between IBOA and these algorithms. This suggests that IBOA has superior optimization capabilities.

5. Conclusions

This paper presents an improved Butterfly Optimization Algorithm (IBOA) that integrates multiple strategies, incorporating Lévy flight and sine cosine operators into the global and local search stages of the BOA, respectively. This effectively enhances the algorithm’s convergence speed and optimization accuracy. The introduction of the simulated annealing algorithm to accept worse solutions helps to increase the diversity of solutions and expands the exploration space during the search process. The testing results from ten benchmark functions demonstrate the superiority and robustness of the IBOA algorithm.
The method proposed in this paper improves the performance of the algorithm mainly from the perspective of position updating, and for high-dimensional functions the optimization effect can still be unstable. In future research, machine learning techniques could therefore be combined with the improved algorithm to further enhance its overall performance.
Constrained by practical conditions, this paper validates the proposed scheme through simulation only. Future work may involve physical experimental validation and the application of IBOA to a robotic arm, thereby extending this research.

Author Contributions

Conceptualization, Q.H.; methodology, P.C.; writing—original draft preparation, P.C.; writing—review and editing, P.C. and Q.H.; supervision, Q.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Atashpaz-Gargari, E.; Lucas, C. Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 4661–4667. [Google Scholar]
  2. Simon, D. Biogeography-Based Optimization. IEEE Trans. Evol. Comput. Publ. IEEE Neural Netw. Counc. 2008, 12, 702–713. [Google Scholar] [CrossRef]
  3. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  4. Zhai, R.; Xiao, P.; Shu, D.; Sun, Y.; Jiang, M. Application of Improved Butterfly Optimization Algorithm in Mobile Robot Path Planning. Electronics 2023, 12, 3424. [Google Scholar] [CrossRef]
  5. Mazaheri, H.; Goli, S.; Nourollah, A. Path planning in three-dimensional space based on butterfly optimization algorithm. Sci. Rep. 2024, 14, 2332. [Google Scholar] [CrossRef]
  6. Jin, Z.; Sun, Y. Bearing Fault Diagnosis Based on VMD Fuzzy Entropy and Improved Deep Belief Networks. J. Vib. Eng. Technol. 2022, 11, 577–587. [Google Scholar] [CrossRef]
  7. Javidan, S.M.; Banakar, A.; Vakilian, K.A.; Ampatzidis, Y.; Rahnama, K. Diagnosing the spores of tomato fungal diseases using microscopic image processing and machine learning. Multimed. Tools Appl. 2024, 83, 67283–67301. [Google Scholar] [CrossRef]
  8. Makhadmeh, S.N.; Al-Betar, M.A.; Abasi, A.K.; Awadallah, M.A.; Doush, I.A.; Alyasseri, Z.A.A.; Alomari, O.A. Recent Advances in Butterfly Optimization Algorithm, Its Versions and Applications. Arch. Comput. Methods Eng. 2022, 30, 21–22. [Google Scholar] [CrossRef]
  9. Zhang, M.; Wang, D.; Yang, J. Hybrid-Flash Butterfly Optimization Algorithm with Logistic Mapping for Solving the Engineering Constrained Optimization Problems. Entropy 2022, 24, 525. [Google Scholar] [CrossRef]
  10. He, K.; Zhang, Y.; Wang, Y.K.; Zhou, R.H.; Zhang, H.Z. EABOA: Enhanced adaptive butterfly optimization algorithm for numerical optimization and engineering design problems. Alex. Eng. J. 2024, 87, 543–573. [Google Scholar] [CrossRef]
  11. Ding, Y.; Xia, Q.; Zhang, R.; Li, S. Review of literature survey of butterfly optimization algorithm. Sci. Technol. Eng. 2023, 23, 2705–2716. (In Chinese) [Google Scholar]
  12. Lu, Z.; Zhang, J. An Enhanced Dung Beetle Optimization Algorithm for Global Optimization. Curr. J. Appl. Sci. Technol. 2023, 42, 9–22. [Google Scholar] [CrossRef]
  13. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC’06), Vienna, Austria, 28–30 November 2005; pp. 695–701. [Google Scholar]
  14. Jiao, C.; Yu, K.; Zhou, Q. An Opposition-Based Learning Adaptive Chaotic Particle Swarm Optimization Algorithm. J. Bionic Eng. 2024, 21, 3076–3097. [Google Scholar] [CrossRef]
  15. Wu, F.; Li, S.; Zhang, J.; Lv, D.; Wu, X.; Li, M. An Improved Weighted Differential Evolution Algorithm Based on the Chaotic Mapping and Dynamic Reverse Learning Strategy. J. Phys. Conf. Ser. 2022, 2400, 012054. [Google Scholar] [CrossRef]
  16. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  17. Hussein, A.; Ahmed, F.; Salah, K. An effective hybrid approach based on arithmetic optimization algorithm and sine cosine algorithm for integrating battery energy storage system into distribution networks. J. Energy Storage 2022, 49, 104154. [Google Scholar]
  18. Xiao, S.; Peng, P.; Zheng, P.; Wu, Z. A Hybrid Adaptive Simulated Annealing and Tempering Algorithm for Solving the Half-Open Multi-Depot Vehicle Routing Problem. Mathematics 2024, 12, 947. [Google Scholar] [CrossRef]
  19. Ran, L.; Di, B. Electric vehicle charging scheduling based on improved bald eagle algorithm. J. Phys. Conf. Ser. 2023, 2656, 012024. [Google Scholar]
  20. Yao, X.; Liu, Y. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar] [CrossRef]
  21. Wang, Y.; Zhang, D. Butterfly Optimization Algorithm Combining Sine Cosine and Iterative Chaotic Map with Infinite Collapses. Pattern Recognit. Artif. Intell. 2020, 33, 660–669. (In Chinese) [Google Scholar] [CrossRef]
  22. Li, S.; He, Q.; Du, N. Piecewise Weight and Mutation Opposition-Based Learning Butterfly Optimization Algorithm. Comput. Eng. Appl. 2021, 57, 92–101. (In Chinese) [Google Scholar] [CrossRef]
  23. Zheng, H.Q.; Peng, S.Y.; Zhou, Y.Q. Butterfly optimization algorithm based on convergence factor and salp swarm. Microelectron. Comput. 2021, 38, 28–34. (In Chinese) [Google Scholar] [CrossRef]
Figure 1. Convergence curves of the algorithms on the functions F1–F10.
Table 1. Benchmark functions.

Number | Function | Variable Range | Global Optimal Value
F1 | Sphere | [−100, 100] | 0
F2 | Schwefel 2.22 | [−10, 10] | 0
F3 | Schwefel 1.2 | [−100, 100] | 0
F4 | Schwefel 2.21 | [−100, 100] | 0
F5 | Step | [−100, 100] | 0
F6 | Quartic | [−1.28, 1.28] | 0
F7 | Schwefel 2.26 | [−500, 500] | −418.9829 × D
F8 | Rastrigin | [−5.12, 5.12] | 0
F9 | Ackley | [−32, 32] | 0
F10 | Griewank | [−600, 600] | 0
Table 2. Comparison of optimization results of different improvement strategies.

Function | Algorithm | Optimal Value | Average Value | Standard Deviation
F1 | IBOA | 0 | 0 | 0
F1 | BOA | 1.4878 × 10−7 | 2.4402 × 10−7 | 5.5853 × 10−8
F1 | IBOA-1 | 9.1267 × 10−8 | 2.1207 × 10−7 | 4.747 × 10−8
F1 | IBOA-2 | 2.3502 × 10−87 | 4.7501 × 10−66 | 2.2556 × 10−65
F1 | IBOA-3 | 5.0304 × 10−284 | 1.1878 × 10−249 | 0
F1 | IBOA-4 | 1.9111 × 10−11 | 3.1856 × 10−9 | 2.3931 × 10−9
F2 | IBOA | 0 | 0 | 0
F2 | BOA | 3.8163 × 10−14 | 9.7811 × 10−12 | 2.9634 × 10−11
F2 | IBOA-1 | 4.0197 × 10−15 | 1.6523 × 10−11 | 2.3754 × 10−11
F2 | IBOA-2 | 2.1354 × 10−205 | 5.7688 × 10−173 | 0
F2 | IBOA-3 | 5.6229 × 10−181 | 1.6621 × 10−120 | 1.1753 × 10−119
F2 | IBOA-4 | 6.3567 × 10−7 | 4.049 × 10−6 | 1.126 × 10−6
F3 | IBOA | 0 | 0 | 0
F3 | BOA | 1.1995 × 10−7 | 2.1382 × 10−7 | 4.4449 × 10−8
F3 | IBOA-1 | 1.0164 × 10−7 | 2.2642 × 10−7 | 4.7077 × 10−8
F3 | IBOA-2 | 2.4818 × 10−87 | 3.0135 × 10−67 | 1.7174 × 10−66
F3 | IBOA-3 | 1.365 × 10−284 | 9.8693 × 10−250 | 0
F3 | IBOA-4 | 6.476 × 10−9 | 5.825 × 10−8 | 2.787 × 10−8
F4 | IBOA | 0 | 0 | 0
F4 | BOA | 3.5748 × 10−5 | 6.9172 × 10−5 | 1.2262 × 10−5
F4 | IBOA-1 | 4.7552 × 10−5 | 6.8879 × 10−5 | 1.1012 × 10−5
F4 | IBOA-2 | 1.4145 × 10−212 | 1.8653 × 10−192 | 0
F4 | IBOA-3 | 2.2966 × 10−309 | 5.6237 × 10−270 | 0
F4 | IBOA-4 | 1.7483 × 10−7 | 8.0464 × 10−7 | 3.095 × 10−7
F5 | IBOA | 9.7965 × 10−10 | 4.3216 × 10−5 | 9.5949 × 10−5
F5 | BOA | 3.4381 | 4.3713 | 0.44399
F5 | IBOA-1 | 3.3771 | 0.016502 | 0.0048471
F5 | IBOA-2 | 1.5912 | 2.7275 | 0.63549
F5 | IBOA-3 | 2.1359 | 4.0601 | 0.77525
F5 | IBOA-4 | 9.1907 × 10−7 | 1.2461 × 10−3 | 2.5234 × 10−3
F6 | IBOA | 3.6308 × 10−8 | 2.0451 × 10−5 | 1.8865 × 10−5
F6 | BOA | 0.00041255 | 0.0016045 | 0.00064071
F6 | IBOA-1 | 0.00038958 | 0.0014505 | 0.0005137
F6 | IBOA-2 | 5.0144 × 10−8 | 3.0865 × 10−5 | 2.9218 × 10−5
F6 | IBOA-3 | 4.023 × 10−7 | 3.1476 × 10−5 | 3.1526 × 10−5
F6 | IBOA-4 | 4.0126 × 10−6 | 8.0777 × 10−5 | 7.089 × 10−5
F7 | IBOA | −12,569.4593 | −12,547.1233 | 27.4743
F7 | BOA | −5110.4061 | −3928.5789 | 317.0166
F7 | IBOA-1 | −5510.3021 | −4488.0117 | 421.6456
F7 | IBOA-2 | −7036.16 | −4764.2174 | 594.7845
F7 | IBOA-3 | −5728.157 | −3759.8554 | 451.7682
F7 | IBOA-4 | −12,569.4835 | −12,569.2966 | 0.37961
F8 | IBOA | 0 | 0 | 0
F8 | BOA | 0 | 6.0982 × 10−12 | 2.2353 × 10−11
F8 | IBOA-1 | 0 | 3.6709 × 10−12 | 8.1933 × 10−12
F8 | IBOA-2 | 0 | 0 | 0
F8 | IBOA-3 | 0 | 0 | 0
F8 | IBOA-4 | 7.2021 × 10−11 | 1.5048 × 10−9 | 1.0752 × 10−9
F9 | IBOA | 4.4409 × 10−16 | 4.4409 × 10−16 | 0
F9 | BOA | 2.9769 × 10−5 | 4.9166 × 10−5 | 7.7772 × 10−6
F9 | IBOA-1 | 3.3185 × 10−6 | 4.6771 × 10−6 | 6.6874 × 10−7
F9 | IBOA-2 | 4.4409 × 10−16 | 4.4409 × 10−16 | 0
F9 | IBOA-3 | 4.4409 × 10−16 | 4.4409 × 10−16 | 0
F9 | IBOA-4 | 6.6504 × 10−8 | 8.2654 × 10−7 | 3.6639 × 10−7
F10 | IBOA | 0 | 0 | 0
F10 | BOA | 2.1432 × 10−8 | 7.5599 × 10−8 | 4.2717 × 10−8
F10 | IBOA-1 | 1.7504 × 10−8 | 7.0881 × 10−8 | 3.1355 × 10−8
F10 | IBOA-2 | 0 | 0 | 0
F10 | IBOA-3 | 0 | 0 | 0
F10 | IBOA-4 | 1.1449 × 10−10 | 2.5761 × 10−9 | 1.6938 × 10−9
Table 3. Algorithm optimization results.

Function | Algorithm | Optimal Value | Worst Value | Average Value | Standard Deviation
F1 | IBOA | 0 | 0 | 0 | 0
F1 | BOA | 1.4878 × 10−7 | 4.4612 × 10−7 | 2.4402 × 10−7 | 5.5853 × 10−8
F1 | PSO | 0.0070901 | 0.040186 | 0.018441 | 0.0065241
F1 | SIBOA | 0 | 0 | 0 | 0
F1 | PWMBOA | 0 | 0 | 0 | 0
F1 | CFSSBOA | 9.6278 × 10−148 | 7.0809 × 10−90 | 1.504 × 10−91 | 1.0016 × 10−90
F2 | IBOA | 0 | 0 | 0 | 0
F2 | BOA | 3.0092 × 10−15 | 1.0444 × 10−10 | 5.515 × 10−12 | 1.5167 × 10−11
F2 | PSO | 0.30662 | 1.3457 | 0.53501 | 0.18852
F2 | SIBOA | 7.2297 × 10−170 | 9.0327 × 10−157 | 4.7102 × 10−158 | 1.7459 × 10−157
F2 | PWMBOA | 0 | 1.3399 × 10−5 | 2.6798 × 10−7 | 1.8949 × 10−6
F2 | CFSSBOA | 1.4647 × 10−22 | 3.5006 × 10−9 | 7.037 × 10−11 | 4.95 × 10−10
F3 | IBOA | 0 | 0 | 0 | 0
F3 | BOA | 1.1995 × 10−7 | 3.8247 × 10−7 | 2.1382 × 10−7 | 4.4449 × 10−8
F3 | PSO | 0.60127 | 9.9668 | 1.9745 | 1.4246
F3 | SIBOA | 0 | 0 | 0 | 0
F3 | PWMBOA | 0 | 0 | 0 | 0
F3 | CFSSBOA | 1.5661 × 10−131 | 1.8499 × 10−64 | 3.7058 × 10−66 | 2.6161 × 10−65
F4 | IBOA | 0 | 0 | 0 | 0
F4 | BOA | 3.5748 × 10−5 | 9.2498 × 10−5 | 6.9172 × 10−5 | 1.2262 × 10−5
F4 | PSO | 0.11773 | 0.99664 | 0.27003 | 0.14619
F4 | SIBOA | 4.9814 × 10−178 | 4.7077 × 10−173 | 2.4259 × 10−174 | 0
F4 | PWMBOA | 0 | 0 | 0 | 0
F4 | CFSSBOA | 3.1176 × 10−76 | 9.7379 × 10−57 | 1.977 × 10−58 | 1.3768 × 10−57
F5 | IBOA | 9.7965 × 10−10 | 0.00046189 | 4.3216 × 10−5 | 9.5949 × 10−5
F5 | BOA | 3.4381 | 5.2922 | 4.3713 | 0.44399
F5 | PSO | 0.0092025 | 0.031153 | 0.016502 | 0.0048471
F5 | SIBOA | 0.59594 | 1.3643 | 0.95436 | 0.15895
F5 | PWMBOA | 2.1359 | 5.745 | 4.0601 | 0.77525
F5 | CFSSBOA | 2.8042 × 10−5 | 0.0045236 | 0.0011309 | 0.00096006
F6 | IBOA | 3.6308 × 10−8 | 7.4089 × 10−5 | 2.0451 × 10−5 | 1.8865 × 10−5
F6 | BOA | 0.00041255 | 0.0038854 | 0.0016045 | 0.00064071
F6 | PSO | 0.016347 | 0.086173 | 0.04925 | 0.016063
F6 | SIBOA | 1.2574 × 10−6 | 0.00016457 | 3.204 × 10−5 | 3.4709 × 10−5
F6 | PWMBOA | 7.9534 × 10−7 | 7.3818 × 10−5 | 2.1097 × 10−5 | 1.9239 × 10−5
F6 | CFSSBOA | 2.2182 × 10−6 | 0.00038427 | 0.00011991 | 9.2574 × 10−5
F7 | IBOA | −12,569.4593 | −12,448.9448 | −12,547.1233 | 27.4743
F7 | BOA | −5110.4061 | −3576.605 | −3928.5789 | 317.0166
F7 | PSO | −4115.5193 | −2551.9592 | −3242.5415 | 346.589
F7 | SIBOA | −12,285.712 | −7986.9681 | −9830.8295 | 898.1468
F7 | PWMBOA | −12,569.4865 | −9780.4795 | −11,605.3028 | 864.0931
F7 | CFSSBOA | −12,569.4835 | −12,567.3264 | −12,569.2966 | 0.37961
F8 | IBOA | 0 | 0 | 0 | 0
F8 | BOA | 0 | 1.5558 × 10−10 | 6.0982 × 10−12 | 2.2353 × 10−11
F8 | PSO | 7.2816 | 60.3612 | 24.1695 | 10.5269
F8 | SIBOA | 0 | 0 | 0 | 0
F8 | PWMBOA | 0 | 0 | 0 | 0
F8 | CFSSBOA | 0 | 0 | 0 | 0
F9 | IBOA | 4.4409 × 10−16 | 4.4409 × 10−16 | 4.4409 × 10−16 | 0
F9 | BOA | 2.9769 × 10−5 | 6.9054 × 10−5 | 4.9166 × 10−5 | 7.7772 × 10−6
F9 | PSO | 0.083137 | 5.4548 | 2.8906 | 0.78444
F9 | SIBOA | 4.4409 × 10−16 | 4.4409 × 10−16 | 4.4409 × 10−16 | 0
F9 | PWMBOA | 4.4409 × 10−16 | 4.4409 × 10−16 | 4.4409 × 10−16 | 0
F9 | CFSSBOA | 4.4409 × 10−16 | 4.4409 × 10−16 | 4.4409 × 10−16 | 0
F10 | IBOA | 0 | 0 | 0 | 0
F10 | BOA | 2.1432 × 10−8 | 2.5276 × 10−7 | 7.5599 × 10−8 | 4.2717 × 10−8
F10 | PSO | 132.403 | 210.8344 | 180.0418 | 13.9491
F10 | SIBOA | 0 | 0 | 0 | 0
F10 | PWMBOA | 0 | 0 | 0 | 0
F10 | CFSSBOA | 0 | 0 | 0 | 0
Table 4. Total running time of each algorithm over 50 runs on the test functions.

Function | IBOA | BOA | PSO | SIBOA | PWMBOA | CFSSBOA
F1 | 15.294 | 12.26 | 2.3235 | 12.53 | 18.911 | 12.6485
F2 | 15.5355 | 12.8045 | 2.27845 | 12.521 | 19.1 | 13.047
F3 | 42.7375 | 39.4865 | 15.7745 | 39.4415 | 59.995 | 39.7375
F4 | 15.0395 | 12.23 | 2.3733 | 12.3705 | 18.652 | 12.5055
F5 | 14.389 | 11.5475 | 2.2595 | 11.8115 | 18.2135 | 12.0475
F6 | 31.4935 | 28.432 | 10.3595 | 28.5615 | 43.485 | 28.8715
F7 | 18.9825 | 16.9365 | 4.4275 | 16.802 | 24.422 | 15.6975
F8 | 16.0525 | 15.5525 | 3.5334 | 13.3875 | 22.4415 | 16.1945
F9 | 15.7025 | 14.211 | 3.77885 | 13.352 | 22.0775 | 15.5895
F10 | 17.8185 | 16.015 | 5.6445 | 15.121 | 25.3055 | 17.717
Table 5. Wilcoxon rank-sum test p-values of IBOA versus BOA, PSO, SIBOA, PWMBOA, and CFSSBOA.

Function | BOA | PSO | SIBOA | PWMBOA | CFSSBOA
F1 | 3.3111 × 10−20 | 3.3111 × 10−20 | NaN | NaN | 3.3111 × 10−20
F2 | 1.3493 × 10−16 | 1.3493 × 10−16 | 0.6466 | 4.4481 × 10−18 | 1.3493 × 10−16
F3 | 3.3111 × 10−20 | 3.3111 × 10−20 | 3.3111 × 10−20 | 3.3111 × 10−20 | 3.3111 × 10−20
F4 | 3.3111 × 10−20 | 3.3111 × 10−20 | 3.3111 × 10−20 | NaN | 3.3111 × 10−20
F5 | 7.0661 × 10−18 | 5.4572 × 10−4 | 7.0661 × 10−18 | 7.0661 × 10−18 | 1.2019 × 10−16
F6 | 7.0661 × 10−18 | 7.0661 × 10−18 | 0.0041 | 0.2837 | 3.0946 × 10−13
F7 | 7.0661 × 10−18 | 7.0661 × 10−18 | 7.0661 × 10−18 | 6.2162 × 10−6 | 2.6537 × 10−13
F8 | 2.2575 × 10−16 | 3.3111 × 10−20 | NaN | NaN | NaN
F9 | 3.3111 × 10−20 | 3.3111 × 10−20 | NaN | NaN | NaN
F10 | 3.3111 × 10−20 | 3.3111 × 10−20 | NaN | NaN | NaN
+/=/− | 10/0/0 | 10/0/0 | 5/4/1 | 4/5/1 | 7/3/0
