Article

A New Hybrid BA_ABC Algorithm for Global Optimization Problems

by Gülnur Yildizdan 1,* and Ömer Kaan Baykan 2
1 Kulu Vocational School, Selcuk University, Kulu, 42770 Konya, Turkey
2 Department of Computer Engineering, Faculty of Engineering and Natural Sciences, Konya Technical University, 42250 Konya, Turkey
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(10), 1749; https://doi.org/10.3390/math8101749
Submission received: 1 September 2020 / Revised: 20 September 2020 / Accepted: 21 September 2020 / Published: 12 October 2020
(This article belongs to the Special Issue Advances of Metaheuristic Computation)

Abstract:
Bat Algorithm (BA) and Artificial Bee Colony Algorithm (ABC) are frequently used in solving global optimization problems. Many new algorithms in the literature are obtained by modifying these algorithms for both constrained and unconstrained optimization problems or by using them in a hybrid manner with different algorithms. Although successful algorithms have been proposed, BA's performance decline on complex and large-scale problems remains an ongoing issue. The inadequate global search capability of the BA, resulting from its algorithm structure, is the major cause of this problem. In this study, firstly, an inertia weight was added to the velocity formula to improve the search capability of the BA. Then, a new algorithm was proposed that operates in a hybrid manner with the ABC algorithm, whose diversity and global search capability are stronger than those of the BA. The performance of the proposed algorithm (BA_ABC) was examined in four different test groups, including classic benchmark functions, CEC2005 small-scale test functions, CEC2010 large-scale test functions, and classical engineering design problems. The BA_ABC results were compared with different algorithms in the literature and with current versions of the BA for each test group. The results were interpreted with the help of statistical tests. Furthermore, the contribution of the BA and ABC algorithms, which constitute the hybrid algorithm, to the solutions was examined. The proposed algorithm was found to produce successful and acceptable results.

1. Introduction

Meta-heuristic algorithms are often used in the solution of optimization problems. These algorithms use natural phenomena to achieve a specific purpose. Meta-heuristic algorithms have convergence properties and can find solutions close to the exact solution. Furthermore, these algorithms are frequently preferred for the solution of optimization problems due to their simple structures and their ease of understanding and implementation [1]. Exploration and exploitation are two important components of meta-heuristic algorithms. Exploration refers to the ability to explore various unknown regions of the solution space to find the global best value, while exploitation refers to the ability to use knowledge from previous best results to find better results. To achieve good performance in an optimization problem, the balance of these two components must be well adjusted [2,3].
The bat algorithm [4] was proposed by Xin-She Yang in 2010. The loudness (A) and pulse emission rate (r) parameters used in the bat algorithm significantly affect the exploration and exploitation abilities of the algorithm. As the iterations progress, the loudness decreases and the pulse emission rate increases. As a result, the exploitation ability is prominent in the first iterations, the exploration ability becomes prominent in the later iterations, and the probability that new solutions found in the later steps of the algorithm are included in the population decreases. Many studies have been conducted to balance BA's exploration and exploitation abilities and to overcome these structural problems. Cai et al. [5] added optimal forage and random disturbance strategies to the algorithm to determine the search direction of the bats and improve their global search capability. Meng et al. [6] added habitat selection and the Doppler effect to the bat algorithm, so that the algorithm imitates bat behavior more closely. Cai et al. [7] developed the algorithm using a triangle-flipping strategy in the velocity update formula, which affects the global search capability of the bat algorithm. Zhu et al. [8] proposed a quantum-behaved bat algorithm in which the position of each bat is determined by the optimal solution initially available and by the mean best position, which can increase the convergence rate of the algorithm in the following iterations. Ghanem and Jantan [9] included a special mutation operator that increases the diversity of the BA, thereby overcoming the problem of getting stuck in local minima, which the bat algorithm frequently experiences. Shan and Cheng [10] proposed an advanced bat algorithm based on a covariance adaptive evolution process to diversify the search direction and the population distribution of the BA. Nawi et al. [11] proposed a new algorithm that determines the step size in BA using Gaussian-distributed random steps to escape the sub-optimal solutions encountered in BA and to overcome the problems experienced in solving large-scale problems. Chakri et al. [12] added a directional echolocation strategy to the algorithm to overcome the premature convergence caused by the low exploration ability of BA. Al-Betar and Awadallah [13] added the island model strategy to the algorithm to increase BA's capability to control diversity. Gan et al. [14] developed the algorithm with iterative local search and stochastic inertia weight strategies to improve BA's global search capability and reduce the risk of getting stuck at a local minimum. Topal and Altun [15] suggested the dynamic virtual bats algorithm, which contains explorer bats that explore the search area and exploiter bats that are very likely to locate the desired target and search locally, with the roles of the bats varying depending on their location. Wang et al. [16] proposed a novel bat algorithm that includes multiple strategies for the velocity and position update formulas to overcome BA's weakness in solving complex problems.
Hybrid algorithms have also been proposed in the literature to increase the performance of BA. Liu et al. [17] made three modifications to the algorithm to improve the performance of BA and developed a hybrid algorithm with the extremal optimization algorithm. Wang and Guo [18] proposed a hybrid algorithm created by adding the pitch adjustment operation of the harmony search algorithm as a mutation operator to the bat update process, in order to speed up the convergence of the BA. Fister et al. [19] added Differential Evolution strategies to the BA to improve the best available solution, which directs the found solutions towards better regions of the search space. Imane and Nadjet [20] proposed a hybrid algorithm in which BA's new solution selection phase is transformed into a tabu search to detect overlapping communities in social networks. Cincy and Jeba [21] proposed a new algorithm that hybridizes the BA and ABC algorithms and improves convergence speed and optimization accuracy. Chaudhary and Banati [22] proposed the Swarm Bat Algorithm with Improved Search (SBAIS), which uses BA and the shuffled complex evolution (SCE) algorithm together to improve the exploration ability of BA. Rauf et al. [23] proposed an algorithm that combines adaptive inertia weight and Sugeno-function fuzzy search concepts to overcome BA's premature convergence problem. Yıldızdan and Baykan [24] proposed a new algorithm that hybridizes the differential evolution algorithm with an improved BA to overcome the structural problems of BA and increase its exploration ability. Pan et al. [25] proposed a hybrid algorithm by developing a communication strategy between BA and ABC. In their hybrid algorithm, the population is first divided into two subpopulations, and then BA is run in parallel on one subpopulation and ABC on the other. At the end of each iteration, the worst individuals in the BA are replaced with the best individuals in the ABC, and the worst individuals in the ABC are replaced by the best individuals in the BA. In the BA_ABC algorithm we propose, dividing the population into two subpopulations and running the BA and ABC algorithms in parallel on the subpopulations are similar to that article. Unlike that article, in the BA_ABC algorithm, the performances of BA and ABC are evaluated whenever a certain number of iterations is completed, and a certain number of the worst individuals of the less successful algorithm are replaced by the best individuals of the more successful algorithm. In addition, when one algorithm reaches a certain ratio in the exchange of information, the remaining iterations are continued with this algorithm over the entire population; that is, the parallel run is terminated.
Many studies have been conducted on the BA, and its performance has been significantly improved. However, the performance decrease caused by increased problem complexity and size is still an ongoing problem. In this study, a hybrid algorithm was proposed to overcome the problems described above: the ABC algorithm works in parallel with the BA algorithm and can exchange information with it. The ABC algorithm was preferred because its global search capability is stronger than that of the BA and it has a mechanism for escaping local minima. The performance of the proposed hybrid algorithm was tested on both small- and large-scale problems. Its performance on real-world problems was also examined through tests on selected engineering design problems. The results obtained from the proposed hybrid algorithm for each function set were compared with the BA and ABC algorithms, with recently proposed algorithms in the literature, or with recent BA versions. Since not all algorithms selected from the literature contained results for all of the function sets we worked with, the algorithms compared for each set of functions varied.
The rest of this study is organized as follows. The second section, which includes materials and methods, explains the BA algorithm, the ABC algorithm, and the structure of the proposed hybrid BA_ABC algorithm, respectively. In the third section, experimental studies and their results are presented. In the fourth section, the contributions of the BA and ABC algorithms to the hybrid system are examined. In the fifth section, the complexities of the BA, ABC, and BA_ABC algorithms are compared. In the sixth section, the results are discussed. Finally, in the seventh section, the conclusion and future work are explained.

2. Materials and Methods

2.1. Notations and Nomenclatures

In this section, all notations and nomenclatures used for the bat algorithm, artificial bee colony algorithm, and the proposed hybrid algorithm are given. Table 1 and Table 2 show the notations and nomenclatures, respectively.

2.2. Bat Algorithm

The Bat Algorithm (BA) is an algorithm proposed by Xin-She Yang [4] in 2010, based on the detection of direction and distance (echolocation behavior) of an object/prey, using sound reverberation. Bats can detect their prey/obstacles through echolocation.
According to Yang, the bat algorithm is based on the following rules [26].
  • Each bat uses echolocation to measure how far the prey is.
  • Bats fly with speed vi to location xi at a fixed frequency range [fmin,fmax], emitting signals at various wavelengths (λ) and loudnesses (A) to detect their prey.
  • When bats calculate the distance to their prey, they can adjust the pulse emission rate (r) along with the wavelength of the signal they send.
  • Although the loudness can vary in many ways, it is assumed that it decreases from a large initial value A0 to a fixed minimum value Amin.
$$f_i = f_{min} + (f_{max} - f_{min})\,\beta \quad (1)$$
$$v_i^t = v_i^{t-1} + (x_i^t - x^*)\,f_i \quad (2)$$
$$x_i^{t+1} = x_i^t + v_i^t \quad (3)$$
In the algorithm, the frequency, speed, and position values for the bat are calculated according to Equations (1)–(3). Frequency determines the step size in the algorithm. β refers to a random value in the range [0,1], and x* refers to the global best value.
$$x_i^{t+1} = x_{old} + \varepsilon\,\bar{A}^t \quad (4)$$
In the local search part of the algorithm, a solution (xold) is selected among the best available solutions, and a new solution is created for each bat by a local random step around it, according to Equation (4). In this equation, ε ∊ [−1,1] is a random number that expresses the magnitude and direction of the step, and $\bar{A}^t$ denotes the average of the current loudness values of all bats.
$$A_i^{t+1} = \alpha\,A_i^t \quad (5)$$
$$r_i^t = r_i^0\,[1 - \exp(-\gamma t)] \quad (6)$$
As the bats approach their prey, the loudness decreases and the pulse emission rate increases (Equations (5) and (6)). The values α and γ are constants (α = γ = 0.9).
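For illustration, a minimal NumPy sketch of one iteration of these update rules (Equations (1)–(6)) is given below. It is not the authors' implementation: it assumes a minimization problem and, for simplicity, uses the global best x_best in place of "a solution selected among the best solutions" in Equation (4).

```python
import numpy as np

def bat_step(x, v, A, r, x_best, f_obj, t, fmin=0.0, fmax=1.0,
             alpha=0.9, gamma=0.9, r0=0.5, rng=np.random.default_rng()):
    """One BA iteration over a population x of shape (N, D); minimal sketch."""
    N, D = x.shape
    for i in range(N):
        beta = rng.random()
        f_i = fmin + (fmax - fmin) * beta              # Eq. (1): frequency
        v[i] = v[i] + (x[i] - x_best) * f_i            # Eq. (2): velocity
        x_new = x[i] + v[i]                            # Eq. (3): position
        if rng.random() > r[i]:                        # local search branch
            x_new = x_best + rng.uniform(-1, 1, D) * A.mean()   # Eq. (4)
        # accept the new solution with probability related to loudness
        if rng.random() < A[i] and f_obj(x_new) < f_obj(x[i]):
            x[i] = x_new
            A[i] = alpha * A[i]                        # Eq. (5): loudness
            r[i] = r0 * (1 - np.exp(-gamma * t))       # Eq. (6): pulse rate
    return x, v, A, r
```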

2.3. Artificial Bee Colony Algorithm

The Artificial Bee Colony (ABC) algorithm is a swarm-intelligence-based optimization algorithm proposed by Karaboga [27] in 2005, inspired by the foraging behavior of honeybees. Bees in this algorithm are named scout, employed, and onlooker bees according to their duties. In the algorithm, the number of employed bees is generally equal to the total number of food sources. The employed bee of a source becomes a scout bee when the nectar of that source runs out. In the ABC algorithm, the locations of food sources represent possible solutions to the optimization problem, and the amount of nectar represents the quality of the solution [28]. The steps of the ABC algorithm can be listed as follows.

2.3.1. Beginning

At this step, scout bees begin to search for food sources randomly. The initial food sources are produced according to Equation (7) between the lower (lbj) and upper limit (ubj) values of each parameter.
$$x_{i,j} = lb_j + rand(0,1) \times (ub_j - lb_j) \quad (7)$$
In Equation (7), i = 1, 2, …, FN and j = 1, 2, …, D, where D is the number of parameters and FN is the number of food sources. A counter is also defined for each source to indicate whether its nectar has been exhausted; this control parameter, called the "limit", is an important control parameter of the algorithm [29].

2.3.2. The Employed Bee Phase

The employed bee searches for a new food source around the current food source. It evaluates the food source found. If the amount of nectar in the new food source is better, it saves the new source and deletes the old food source. The new food source is calculated according to Equation (8).
$$v_{i,j} = x_{i,j} + \varphi \times (x_{i,j} - x_{k,j}) \quad (8)$$
In the equation, k ∊ {1, 2, …, FN} is a randomly selected food source different from i, and φ is a random number in the range [−1,1]. After the quality (objective function value) of the new source is calculated, its fitness value is assigned.
$$fitness_i = \begin{cases} 1/(1+f_i), & f_i \ge 0 \\ 1+|f_i|, & f_i < 0 \end{cases} \quad (9)$$
In Equation (9), fi represents the value of the objective function, that is, the quality of the source solution. When choosing between the existing food source and the new food source, the greedy selection method is applied depending on the fitness value. If food source vi is better quality than existing source xi, vi is used as the new source and the limit counter is reset. Otherwise, the process continues with source xi, and the limit counter is increased by one.

2.3.3. The Onlooker Bee Phase

Onlooker bees gather information about the sources from the employed bees and choose a source with a probability proportional to its fitness value. For each source, a probability value is calculated according to Equation (10).
$$p_i = fitness_i \Big/ \sum_{k=1}^{FN} fitness_k \quad (10)$$
In Equation (10), pi is the probability of source i, and fitnessi is the fitness value of source i. If a random value produced in the range [0,1] is less than the probability value calculated for a source, the onlooker bee produces a new source around it according to Equation (8). The existing source and the newly produced source are then evaluated according to the greedy selection method.

2.3.4. The Scout Bee Phase

If a food source has been exhausted, that is, if the solution has not been improved within a trial number (limit) determined for this source, the employed bee of the source becomes a scout bee, and a new food source is generated randomly according to Equation (7) instead of source xi.
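For illustration, a minimal NumPy sketch of one ABC cycle (Equations (7)–(10)) is given below. It is not the authors' implementation: it assumes a minimization problem and uses standard roulette-wheel selection in the onlooker phase.

```python
import numpy as np

def abc_cycle(foods, trials, f_obj, lb, ub, limit, rng=np.random.default_rng()):
    """One cycle of basic ABC over a food matrix of shape (FN, D); minimal sketch."""
    FN, D = foods.shape

    def fitness(f):                       # Eq. (9)
        return 1.0 / (1.0 + f) if f >= 0 else 1.0 + abs(f)

    def try_neighbour(i):                 # Eq. (8) + greedy selection
        k = rng.choice([s for s in range(FN) if s != i])
        j = rng.integers(D)               # change only one random dimension
        v = foods[i].copy()
        v[j] = foods[i, j] + rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        if fitness(f_obj(v)) > fitness(f_obj(foods[i])):
            foods[i], trials[i] = v, 0
        else:
            trials[i] += 1

    for i in range(FN):                   # employed bee phase
        try_neighbour(i)

    fits = np.array([fitness(f_obj(x)) for x in foods])
    probs = fits / fits.sum()             # Eq. (10)
    for _ in range(FN):                   # onlooker bee phase
        try_neighbour(rng.choice(FN, p=probs))

    worst = np.argmax(trials)             # scout bee phase
    if trials[worst] > limit:
        foods[worst] = lb + rng.random(D) * (ub - lb)   # Eq. (7)
        trials[worst] = 0
    return foods, trials
```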

2.4. The Proposed Hybrid Algorithm (BA_ABC)

In the bat algorithm, the r and A parameters are decisive in establishing the balance between exploitation and exploration. As the iterations progress, the pulse emission rate increases and the loudness decreases due to the nature of the algorithm. The expression that performs the local search in the algorithm (Equation (4)) is executed on the condition that the parameter r is lower than a randomly generated number in the range [0,1]. Therefore, the increase in r causes the algorithm to perform the local search mainly in the initial iterations and the global search in the later iterations. New and better solutions are included in the population on the condition that the parameter A is greater than a randomly generated number in the range [0,1]. As the iterations progress, the decrease in A significantly lowers the possibility of including new solutions in the population towards the end of the run. In this study, a new hybrid algorithm is proposed to overcome these structural problems of the BA and to improve its global search capability.
In this study, the inertia weight coefficient, which has previously been used in BA variants in the literature [24,30,31,32], was added to the velocity formula to improve the search capability of the bat algorithm.
$$v_i^t = w \times v_i^{t-1} + (x_i^t - x^*) \times f_i \quad (11)$$
Accordingly, as seen in Equation (11), the inertia weight coefficient (w) was added to the velocity formula and decreased chaotically [33] within the range [0.4, 0.9].
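The exact chaotic decrease scheme is defined in reference [33] and is not reproduced here; the sketch below only illustrates the idea, assuming a logistic map that modulates a weight decreasing linearly from 0.9 to 0.4. The weight w is recomputed each iteration from the previous chaotic value z.

```python
def chaotic_inertia_weight(t, max_i, z, w_min=0.4, w_max=0.9):
    """Illustrative chaotically decreasing inertia weight for Eq. (11);
    a logistic map is assumed purely for demonstration."""
    z = 4.0 * z * (1.0 - z)                                # logistic chaotic map
    w = (w_max - w_min) * (max_i - t) / max_i * z + w_min  # decreases towards w_min
    return w, z
```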
As explained in Section 2.3, the ABC algorithm generates new solutions by changing only one randomly selected dimension during the solution-searching process. This approach provides a more detailed search around the current solution and prevents rapid convergence towards it. With this structure, the ABC algorithm is stronger than the BA in terms of diversity and global search capability. In this study, a hybrid algorithm (BA_ABC), in which BA and ABC operate in parallel, is proposed to benefit from these capabilities of the ABC. In the hybrid algorithm, the population is divided into two halves. New solutions are produced by applying the BA algorithm to the individuals in the first half of the population and the ABC algorithm to the individuals in the second half. The best solution (x*) is updated whenever the algorithms find better solutions. Every time a certain number of iterations (sc) is completed, the performances of the algorithms are examined; that is, the counters (ba_ni and bee_ni), which are incremented whenever BA or ABC finds a solution better than the current best solution, are checked. Whichever algorithm has produced more new solutions during this period (i.e., whichever counter has reached the larger value), a number of individuals (ac) with the best fitness values in that algorithm's half of the population replace the same number of individuals with the worst fitness values in the other algorithm's half. Thus, an exchange of information between the algorithms is provided; a sketch of this step is given below. After each information exchange, the counter of the successful algorithm (ba_sn or bee_sn), i.e., the one that produced more new solutions, is incremented. When either of these counters reaches the maximum number of changes (mnc), the corresponding algorithm is considered more successful for that problem, and the remaining iterations are continued by running this algorithm on the entire population.
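A minimal Python sketch of this periodic information-exchange step follows; the array names and the minimization convention are assumptions, and the sketch is not the authors' code.

```python
import numpy as np

def exchange_information(bat_pop, bat_fit, bee_pop, bee_fit, ba_ni, bee_ni, ac):
    """Every sc iterations: copy the ac best individuals of the more productive
    half over the ac worst individuals of the other half (minimization assumed).
    Returns 'ba' or 'abc' to indicate which algorithm won this exchange."""
    if ba_ni >= bee_ni:
        src_pop, src_fit = bat_pop, bat_fit      # BA produced more new solutions
        dst_pop, dst_fit = bee_pop, bee_fit
        winner = "ba"
    else:
        src_pop, src_fit = bee_pop, bee_fit      # ABC produced more new solutions
        dst_pop, dst_fit = bat_pop, bat_fit
        winner = "abc"
    best = np.argsort(src_fit)[:ac]              # ac best donors
    worst = np.argsort(dst_fit)[-ac:]            # ac worst receivers
    dst_pop[worst] = src_pop[best]
    dst_fit[worst] = src_fit[best]
    return winner
```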
Thanks to the proposed hybrid structure, the BA algorithm gains abilities such as an improved global search capability, increased diversity, and a detailed search around the optimal solution carried out by whichever algorithm is more successful towards the end of the iterations. The pseudo-code of the proposed algorithm is given in Algorithm 1. Some parameters used in the pseudo-code of BA_ABC can be explained as follows.
x*: It is the individual with the best fitness value in the population.
Bat_new_individual (ba_ni) and bee_new_individual (bee_ni): These are the counters that keep records of how many new solutions are created by BA and ABC, respectively.
Success control (sc): It is a predetermined number of iterations. Every time sc iterations are completed, the performance of the algorithms is examined; that is, the number of new solutions produced by BA and ABC is checked, and the direction of the information exchange is determined accordingly. In the study, the sc value was chosen equal to the limit value, which is an important parameter in the ABC algorithm.
Bat_success_number (ba_sn) and bee_success_number (bee_sn): These are the counters that keep track of how many times BA and ABC, respectively, have been successful in the exchange of information.
Amount of change (ac): It is the number of individuals with the best fitness values in the population of the more successful algorithm that replace the same number of individuals with the worst fitness values in the population of the other algorithm every time sc iterations are completed (in this study, this number was set to 10% of the number of individuals in the population).
Maximum number of change (mnc): It is the number that determines how many times the information exchange will be made between populations. This number is calculated according to Equation (12) in the study.
$$mnc = \left( \frac{max\_iteration}{sc} \right) \times 0.6 \quad (12)$$
The 0.6 multiplier in this equation was determined as a result of tests. In these tests, for multiplier values of 0.1, 0.2, …, 1.0, the averages of five general cycles were computed on the CEC2010 test functions and compared; according to these results, the multiplier was set to 0.6. Accordingly, if an algorithm has been successful in 60% of the total number of changes, the remaining iterations are continued with this algorithm on the entire population (see the illustrative calculation below).
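For illustration only (these particular values are assumed, not taken from the paper): with a maximum of 2500 iterations and sc = 100, Equation (12) gives

$$mnc = \left(\frac{2500}{100}\right) \times 0.6 = 15,$$

so the parallel run would end as soon as either algorithm wins 15 information exchanges.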
Algorithm 1: The pseudo-code of BA_ABC
Input parameters: Population size (N), general number of cycles (gc), maximum iteration (max_i), dimension (D)
Output: Result of x*
1. Determine population size (N), general number of cycles (gc), maximum iteration (max_i), dimension (D)
2. Set target function f(x).
3. Construct the initial population of D-dimensional N individuals. x=(x1,x2,…,xN)D
4. Define the parameters of BA and ABC algorithms and bring them to the initial state.
5. Find the best value of the population x*
6. Set the sc value.
7. Determine the mnc value according to Equation (12).
8. G = 1
9.  While (G ≤ gc)
10.   ba_ni = 0, bee_ni = 0, ba_sn = 0, bee_sn = 0
11.   For t = 1: max_i
12.    If (ba_sn < mnc and bee_sn < mnc)
13.     For i = 1:N/2
14.      Generate a new solution xnew according to BA.
15.      If (f(xnew) < f(x*))
16.       Accept the new solution.
17.       Update x*
18.       ba_ni = ba_ni + 1;
19.      End if
20.     End For
21.     For i = (N/2 + 1):N
22.      Generate a new solution xnew according to ABC.
23.      If (f(xnew) < f(x*))
24.       Accept the new solution.
25.       Update x*
26.       bee_ni = bee_ni + 1;
27.      End if
28.     End For
29.     If (mod (t, sc) = = 0)
30.      If (ba_ni ≥ bee_ni)
31. Copy the ac individuals with the best fitness values from the BA population over the same number of individuals with the worst fitness values in the ABC population.
32.       ba_sn = ba_sn + 1;
33.      Else
34. Copy the ac individuals with the best fitness values from the ABC population over the same number of individuals with the worst fitness values in the BA population.
35.      bee_sn = bee_sn + 1;
36.      End if
37.     bee_ni = 0; ba_ni = 0;
38.     End if
39.    Else If (ba_sn ≥ mnc)
40.     Find the remaining number of iterations for BA (t_BA)
41.     For t1 = 1:t_BA
42.      For i = 1: N
43.       Generate a new solution xnew according to BA.
44.        If (f(xnew) < f(x*))
45.        Accept the new solution.
46.        Update x*
47.       End if
48.      End For
49.     End For
50.     Break;
51.    Else
52.     Find the remaining number of iterations for ABC (t_ABC)
53.     For t2 = 1:t_ABC
54.      For i = 1:N
55.       Generate a new solution xnew according to ABC.
56.       If (f(xnew) < f(x*))
57.       Accept the new solution.
58.       Update x*
59.      End if
60.     End For
61.    End For
62.    Break;
63.   End if
64.  End For
65. G = G + 1
66. End While
67. Display the results
The computational complexity of BA_ABC is calculated as follows. Since the algorithm is a hybrid, the complexities of the BA and ABC components must be considered separately. Since no extra loops are added to the BA and ABC algorithms used in BA_ABC, they have the same computational complexity as the standard algorithms. For a problem P, let the computational complexity of its fitness evaluation function be O(P). Accordingly, the worst-case complexity of the standard BA is O(P × N), where N is the population size [7]. Similarly, the computational complexity of the standard ABC is also O(P × N). Two scenarios should be considered when calculating the complexity of the proposed hybrid algorithm. The first is the parallel run of BA and ABC on different halves of the population until the total number of iterations (t) is completed; in this case, the computational complexity of BA_ABC is t × (O(P × N/2) + O(2 × P × N/2)). The second is that the algorithms work in parallel at the beginning, then one of them becomes more successful and the remaining iterations continue with that algorithm; in this case, the complexity of BA_ABC is t1 × O(P × N/2) + t2 × O(2 × P × N/2), where t1 and t2 are the numbers of iterations of BA and ABC, respectively, and t = t1 + t2.

3. Experimental Studies

In this section, the performance of the proposed algorithm on different test sets was examined, and the results obtained were interpreted. The best mean values in the tables were highlighted in bold. In the first part, the performance of BA_ABC on 10 selected benchmark test functions was examined for different dimensions. In the second part, the performance of BA_ABC on the CEC2005 small-scale test functions [34] was evaluated, and the obtained results were compared to recent BA versions. In the third part, the performance of BA_ABC on the CEC2010 large-scale test functions [35] was examined, and the obtained results were compared with the results of different algorithms proposed in recent years in the literature. In the fourth part, the performance of BA_ABC on classical engineering design problems was tested and compared with different algorithms selected from the literature. The results were statistically interpreted with the Wilcoxon signed-rank and Friedman tests [36,37], which are non-parametric tests. The Wilcoxon signed-rank test evaluates the differences between paired results and determines whether there is a significant difference between them. The Friedman test determines the differences between two or more algorithms and ranks the algorithms according to their mean rank values [38].
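As an illustration of how such tests can be applied to reported mean errors, the following SciPy sketch runs a pairwise Wilcoxon signed-rank test and a Friedman test with mean ranks; the error values are placeholders, not results from the paper.

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare, rankdata

# Hypothetical mean errors of three algorithms on five functions
# (placeholder numbers only, not results from the paper).
ba_abc = np.array([1.2e-8, 3.4e-2, 5.6e1, 2.1e-5, 7.8e-1])
ba     = np.array([4.5e-3, 9.1e-1, 8.2e2, 6.3e-2, 9.9e0])
abc    = np.array([2.2e-6, 1.8e-1, 3.3e2, 4.4e-4, 1.5e0])

# Pairwise Wilcoxon signed-rank test at the 0.05 significance level
stat, p = wilcoxon(ba_abc, ba)
print("BA_ABC vs BA:", "significant" if p < 0.05 else "not significant", f"(p = {p:.4f})")

# Friedman test over all three algorithms, then mean ranks (lower rank = better)
stat, p = friedmanchisquare(ba_abc, ba, abc)
data = np.c_[ba_abc, ba, abc]
mean_ranks = np.apply_along_axis(rankdata, 1, data).mean(axis=0)
print(f"Friedman p = {p:.4f}; mean ranks (BA_ABC, BA, ABC) = {mean_ranks}")
```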
The common parameter values used in all test sets for BA, ABC, and BA_ABC are as follows. The number of individuals (N) varies depending on dimension but is generally chosen between 10 and 100. fmin = 0, fmax = 1, r = 0.5, A = 0.9. The number of general cycles, dimensions, and the number of function evaluations (FEs) were chosen according to the rules of the function set (if any) or according to the most frequently used values in the literature. In addition, these parameters were the same for all of the compared algorithms for each function set.
In this study, the ABC limit value was chosen as 100 for small-scale problems and 2000 for large-scale problems. Since the ABC algorithm produces a new solution by changing only one dimension, using a limit value as small as that of the small-scale problems in large-scale problems prevents a detailed search around the current solution and often causes the search to restart from a random point of the search space without sufficient exploitation. Therefore, it is useful to increase the limit value depending on the dimension.

3.1. Performance of BA_ABC on Classic Benchmark Test Functions

The increase in the dimension of the problem is an important factor affecting the performance of metaheuristic algorithms. The performance of an algorithm that produces successful solutions for small scale problems usually decreases with the increase in dimension. In this section, the effect of dimension increase on the performance of the proposed algorithm is examined. The performance of the algorithm proposed was tested for different dimensions (10, 30, 50, 100, and 1000) on 10 classic benchmark functions that are frequently used in the literature. The maximum number of evaluations (FEs) was determined as 10,000 × D, and the number of general cycles as 25. The properties of the selected benchmark functions are given in Table 3.
The results obtained from BA_ABC were compared with the standard BA and standard ABC algorithms for all dimensions. The comparison results are given in Table 4, which shows the mean value, standard deviation, and statistical test results obtained for each dimension. In the Test (T) column, whether there was a significant difference between BA_ABC and the BA and ABC algorithms was examined by applying the Wilcoxon signed-rank test at the α < 0.05 significance level. The results of this test are shown with the symbols +, =, and −, which mean that one algorithm is significantly better than, equal to, or worse than another algorithm, respectively. The numbers in parentheses indicate the total number of functions in which BA_ABC produced better, worse, and equal solutions compared to the other algorithms, respectively. The table also includes the mean rank values that express the Friedman-test-based ordering of the results obtained from the BA, ABC, and BA_ABC algorithms for each dimension. When the Wilcoxon signed-rank test results were analyzed for all dimensions, a significant difference was found between BA_ABC and the other algorithms in 89 of the total comparisons. The other algorithms were more successful than BA_ABC in 13 of the remaining comparisons, and no significant difference was found between the compared algorithms in the remaining 18. According to the Friedman test mean rank values given for each dimension, the BA_ABC algorithm ranked first in all dimensions, the ABC algorithm second, and the BA algorithm third. In general, the errors obtained from the algorithms were observed to increase with dimension; nevertheless, the best results were obtained from the BA_ABC algorithm in all dimensions.
Similar studies in the literature on the classical functions used in this section are generally conducted for small dimensions. Therefore, the BA_ABC algorithm was compared with algorithms selected from the literature for D = 30. The mean and standard deviation values of the results are given in Table 5. The results obtained for 30 dimensions were compared with the method proposed in the study of Fister Jr. et al. (HSABA) [19], with the results of BA (Bat Algorithm), FA (Firefly Algorithm), DE (Differential Evolution), and ABC (Artificial Bee Colony) reported in the same study, and with those of the recently proposed Modified Bat Algorithm hybridized with the Differential Evolution algorithm (MBADE) [24]. When Table 5 was examined, it was found that the proposed method produced the best mean values in six functions. The Wilcoxon test revealed that there was no significant difference between this method and the hybrid BA versions HSABA and MBADE; however, there was a significant difference between this method and the other well-known algorithms. According to the results of the Friedman test, BA_ABC ranked first with a mean rank value of 1.8, and the MBADE algorithm ranked second with a mean rank value of 2.3.

3.2. Performance of BA_ABC on CEC2005 Test Functions

In this section, the performance of BA_ABC on the CEC2005 test functions [34] is examined. The CEC2005 set consists of 25 functions with different characteristics, which were modified and made more complex by operations such as translation, rotation, or hybridization. The CEC2005 functions and their features are given in Table 6. Under the CEC2005 rules, the number of general cycles was determined as 25, and the maximum number of evaluations was chosen as 10,000 × D. After each cycle, the function error f(x) − f(x*) was recorded; the errors were then sorted from best to worst, the 1st (best), 7th, 13th (median), 19th, and 25th (worst) values were taken, and the mean and standard deviation were calculated using all function error values.
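The following short NumPy sketch illustrates this recording protocol (25 sorted run errors, the 1st/7th/13th/19th/25th values, and the mean and standard deviation); the error values are randomly generated placeholders, not results from the paper.

```python
import numpy as np

# Hypothetical function errors f(x) - f(x*) from 25 independent runs
# (placeholder values only).
errors = np.sort(np.abs(np.random.default_rng(0).normal(1e-3, 5e-4, 25)))

report = {
    "best (1st)":    errors[0],
    "7th":           errors[6],
    "median (13th)": errors[12],
    "19th":          errors[18],
    "worst (25th)":  errors[24],
    "mean":          errors.mean(),
    "std":           errors.std(),
}
for key, value in report.items():
    print(f"{key:>14}: {value:.3e}")
```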
The results of BA, ABC, and BA_ABC on the CEC2005 test functions for D = 10 are given in Table 7. According to Table 7, BA_ABC produced the best mean values in 22 of the 25 functions and BA in the remaining three. BA_ABC also produced better mean values than ABC in all functions. When BA and ABC were compared, BA produced better mean values than ABC in 22 functions. According to the test row of the table, BA_ABC performed better than BA in 22 functions, with a significant difference between them; BA_ABC performed equally to BA in one of the remaining functions and worse than BA in two. According to the test results between BA_ABC and ABC, BA_ABC performed better in all functions, with a significant difference between them. According to the results of the Friedman test given at the bottom of Table 7, the BA_ABC algorithm ranked first with a mean rank value of 1.12, the BA algorithm second with a mean rank value of 1.96, and the ABC algorithm third with a mean rank value of 2.92.
BA_ABC was also compared with other BA versions in the literature. Accordingly, BA_ABC was compared with the recently proposed SBAIS [22], MBADE [24], the Novel Bat Algorithm with habitat selection and Doppler effect in echoes (NBA) [6], the Island Bat Algorithm (iBA) [13], the Bat Algorithm based on Iterative Local Search and Stochastic Inertia Weight (ILSSIWBA) [14], and the Global-best Bat-inspired Algorithm (GBA), Tournament Bat-inspired Algorithm (TBA), Proportional Bat-inspired Algorithm (PBA), Linear rank Bat-inspired Algorithm (LBA), Exponential rank Bat-inspired Algorithm (EBA), and Random Bat-inspired Algorithm (RBA) versions proposed in the same study [26]. The mean values of these algorithms are given in Table 8. Accordingly, BA_ABC produced the best mean values in five functions. According to the Wilcoxon test results, BA_ABC performed equally to the MBADE, ILSSIWBA, TBA, and EBA algorithms, with no significant difference between them; it performed worse than the SBAIS algorithm and better than the remaining algorithms, with a significant difference between them. According to the Friedman test results, the SBAIS algorithm ranked first with a mean rank value of 2.78, and the MBADE algorithm ranked second with a mean rank value of 4.16. The BA_ABC algorithm ranked fourth with a mean rank value of 5.22.

3.3. Performance of BA_ABC on CEC2010 Large-Scale Test Functions

In this section, the performance of BA_ABC on large-scale test functions was evaluated. For this purpose, CEC2010 benchmark functions [35] were used. The properties of the functions were given in Table 9. According to CEC2010 rules, the maximum number of evaluations (FEs) was taken as 3.00 × 106 and the number of general cycles as 25. Comparative results of BA_ABC with BA and ABC were given in Table 10 for 3.00 × 106 FEs.
According to Table 10, BA_ABC produced the best mean value in 12 functions, BA in one function, and ABC in the remaining seven. When the test column of Table 10 was examined, the BA_ABC algorithm was better than the BA algorithm in a total of 19 functions, with a significant difference between them; no significant difference was found between the BA and BA_ABC algorithms in the remaining function (F5). When the test results of BA_ABC and ABC were examined, BA_ABC produced significantly worse results in six functions, an equal result in one, and better results in 13. According to the results of the Friedman test given at the bottom of Table 10, the BA_ABC algorithm ranked first with a mean rank value of 1.40, the ABC algorithm second with a mean rank value of 1.85, and the BA algorithm third with a mean rank value of 2.75.
For the CEC2010 test functions, the BA_ABC algorithm was compared with recently proposed algorithms such as the Adaptive Hybrid Differential Evolution with circular sliding window (AHDE) [39], Quantum-behaved Particle Swarm Optimization with Random Selection (RSQPSO) [40], Improved Sine Cosine Algorithm (ISCA) [41], Micro Differential Evolution with local Directional Search (μDSDE) [42], and Adaptive Enhanced Unidimensional Search (aEUS) [43]. The mean values of the algorithms are given in Table 11. According to the results, BA_ABC produced the best mean values in five functions, the AHDE algorithm in one, the RSQPSO algorithm in four, the ISCA algorithm in five, and the aEUS algorithm in five; the μDSDE algorithm did not produce the best mean value in any function. When the Wilcoxon test results in Table 11 were examined, it was concluded that there was no significant difference between the BA_ABC and aEUS algorithms, whereas BA_ABC was better than the other algorithms, with a significant difference between them. When the Friedman test results were examined, BA_ABC ranked first among the algorithms with a mean rank value of 2.65, and the aEUS algorithm ranked second with a mean rank value of 2.83.
Figure 1 shows the convergence graphs of the BA, ABC, and BA_ABC algorithms for six functions randomly selected from each of CEC2005 and CEC2010. When the graphs of the CEC2005 functions were examined, the BA algorithm converged faster for function F7, while the BA_ABC algorithm converged faster and produced better solutions for the other functions. When the graphs of the CEC2010 functions were examined, the ABC algorithm produced the best solution for functions F3 and F20; the convergence rate and characteristics of the ABC and BA_ABC algorithms were similar for function F3. BA produced the best solution for function F5 and converged faster than the others, but it was not able to produce new solutions towards the end of the iterations. In the remaining functions, BA_ABC converged faster and produced better solutions.

3.4. Performance of BA_ABC in Classical Engineering Design Problems

Engineering design can be defined as the process of meeting the requirements needed to create a product. Today, meta-heuristic algorithms emerge as an alternative to the traditional optimization methods used in this process. The proposed method was applied to three engineering optimization problems frequently used in the literature, namely the pressure vessel design, tension/compression spring design, and gear train design problems, and its performance was examined.

3.4.1. Pressure Vessel Design Problem

The pressure vessel design problem is a classic engineering design problem that aims to minimize the welding, manufacturing, and material costs of a pressure vessel [44,45]. It is a problem with four decision variables (thickness of the shell Ts, thickness of the head Th, inner radius R, and length of the cylindrical section of the vessel L) and four constraints. The schematic representation of the problem is given in Figure 2.
The mathematical model of the problem can be summarized as in Equation (13).
$$
\begin{aligned}
&X = (T_s, T_h, R, L) = (x_1, x_2, x_3, x_4)\\
&\text{Minimize } f(X) = 0.6224\,x_1 x_3 x_4 + 1.7781\,x_2 x_3^2 + 3.1661\,x_1^2 x_4 + 19.84\,x_1^2 x_3\\
&\text{Subject to } g_1(X) = -x_1 + 0.0193\,x_3 \le 0\\
&\qquad\qquad\; g_2(X) = -x_2 + 0.0095\,x_3 \le 0\\
&\qquad\qquad\; g_3(X) = -\pi x_3^2 x_4 - \tfrac{4}{3}\pi x_3^3 + 1{,}296{,}000 \le 0\\
&\qquad\qquad\; g_4(X) = x_4 - 240 \le 0\\
&\qquad\qquad\; 1 \times 0.0625 \le x_1, x_2 \le 99 \times 0.0625, \quad 10 \le x_3, x_4 \le 200
\end{aligned}
\quad (13)
$$
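For illustration, a minimal Python sketch of Equation (13) with a static penalty for the constraints is given below. The penalty factor rho and the sample candidate design are assumptions for demonstration; the paper does not state which constraint-handling method was used.

```python
import numpy as np

def pressure_vessel_cost(x):
    """Objective of Equation (13); x = (Ts, Th, R, L)."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def pressure_vessel_penalized(x, rho=1e6):
    """Static-penalty version of Equation (13); rho is an assumed penalty factor."""
    x1, x2, x3, x4 = x
    g = [
        -x1 + 0.0193 * x3,                                              # g1
        -x2 + 0.0095 * x3,                                              # g2
        -np.pi * x3**2 * x4 - (4.0 / 3.0) * np.pi * x3**3 + 1296000.0,  # g3
        x4 - 240.0,                                                     # g4
    ]
    violation = sum(max(0.0, gi) ** 2 for gi in g)
    return pressure_vessel_cost(x) + rho * violation

# Example: evaluate a hypothetical feasible candidate design
print(pressure_vessel_penalized(np.array([1.0, 0.5, 50.0, 100.0])))
```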
The result obtained from the proposed method is given in Table 12, with the general cycle number taken as 30 and the FEs value taken as 30,000. The table shows the values of the decision variables and the constraint values for the best fitness value. The statistical information obtained after 30 general cycles is also shown in the table. Table 13 shows the comparison between ten algorithms selected from the literature and the BA_ABC algorithm. The BA_ABC algorithm was shown to produce an acceptable result similar to those in the literature.

3.4.2. Tension/Compression Spring Design Problem

This design problem was studied by Belegundu and Arora [56]; it is an optimization problem whose main purpose is to minimize the weight of a spring, with three decision variables: wire diameter (d), mean coil diameter (D), and the number of active coils (N). The schematic representation of the problem is given in Figure 3.
The mathematical model of the problem can be summarized as in Equation (14).
$$
\begin{aligned}
&X = (d, D, N) = (x_1, x_2, x_3)\\
&\text{Minimize } f(X) = (x_3 + 2)\,x_2 x_1^2\\
&\text{Subject to } g_1(X) = 1 - \frac{x_2^3 x_3}{71{,}785\,x_1^4} \le 0\\
&\qquad\qquad\; g_2(X) = \frac{4x_2^2 - x_1 x_2}{12{,}566\,(x_2 x_1^3 - x_1^4)} + \frac{1}{5108\,x_1^2} - 1 \le 0\\
&\qquad\qquad\; g_3(X) = 1 - \frac{140.45\,x_1}{x_2^2 x_3} \le 0\\
&\qquad\qquad\; g_4(X) = \frac{x_1 + x_2}{1.5} - 1 \le 0\\
&\qquad\qquad\; 0.05 \le x_1 \le 2, \quad 0.25 \le x_2 \le 1.3, \quad 2 \le x_3 \le 15
\end{aligned}
\quad (14)
$$
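The same static-penalty pattern sketched for the pressure vessel problem applies here. The minimal Python functions below encode the objective and constraints of Equation (14); the constants follow the commonly published formulation of this benchmark, and the constraint-handling scheme is again an assumption.

```python
def spring_weight(x):
    """Objective of Equation (14); x = (d, D, N)."""
    x1, x2, x3 = x
    return (x3 + 2.0) * x2 * x1**2

def spring_constraints(x):
    """Constraint values g1..g4 of Equation (14); the design is feasible if all <= 0."""
    x1, x2, x3 = x
    return [
        1.0 - (x2**3 * x3) / (71785.0 * x1**4),
        (4.0 * x2**2 - x1 * x2) / (12566.0 * (x2 * x1**3 - x1**4))
        + 1.0 / (5108.0 * x1**2) - 1.0,
        1.0 - 140.45 * x1 / (x2**2 * x3),
        (x1 + x2) / 1.5 - 1.0,
    ]
```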
The result obtained from the proposed method is given in Table 14, with the general cycle number taken as 30 and FES value taken as 1000. In Table 15, the comparison results of BA_ABC with the literature are given. The BA_ABC algorithm has been shown to produce an acceptable result similar to the literature.

3.4.3. Gear Train Design Problem

The gear train design problem is an optimization problem proposed by Sandgren [44], whose aim is to minimize the cost of the gear ratio of the gear train. The problem has only boundary constraints; there are no equality or inequality constraints. It has four decision variables: Ta, Tb, Td, and Tf. The schematic representation of the problem is given in Figure 4.
The mathematical model of the problem can be summarized as in Equation (15).
$$
\begin{aligned}
&X = (T_a, T_b, T_d, T_f) = (x_1, x_2, x_3, x_4)\\
&\text{Minimize } f(X) = \left(\frac{1}{6.931} - \frac{x_2 x_3}{x_1 x_4}\right)^2\\
&\quad 12 \le x_1, x_2, x_3, x_4 \le 60
\end{aligned}
\quad (15)
$$
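Because Equation (15) has only bound constraints, its objective can be evaluated directly. The sketch below is illustrative only: treating the tooth counts as integers is a common convention for this benchmark, and the sample tooth numbers are simply a candidate within the bounds, not a result reported in the paper.

```python
import numpy as np

def gear_train_error(x):
    """Objective of Equation (15); x = (Ta, Tb, Td, Tf)."""
    x1, x2, x3, x4 = np.round(x)        # tooth counts treated as integers
    return (1.0 / 6.931 - (x2 * x3) / (x1 * x4)) ** 2

# Example with a hypothetical set of tooth numbers within [12, 60]
print(gear_train_error([49, 16, 19, 43]))
```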
The result obtained from the proposed method is given in Table 16, with the general cycle number taken as 30 and FES value taken as 1000. In Table 17, the comparison results of BA_ABC with the literature are given. When the results were examined, it was seen that the best solution was obtained with the BA_ABC algorithm.

4. Investigation of Contribution of BA and ABC Algorithms to the Solution of BA_ABC Algorithm

The contribution of the algorithms that make up the hybrid system to the solution it produces may vary depending on the problem or function studied. Sometimes one algorithm may be more successful at producing new solutions, or the algorithms may perform similarly. Determining the algorithms' contributions can be a guide for future hybrid studies on these functions. For this purpose, in this section, the contributions of the BA and ABC algorithms, which run in parallel within the BA_ABC algorithm, to the solution of each function are examined. The contributions of the algorithms to the solution are given in Table 18 for the CEC2005 functions and in Table 19 for the CEC2010 functions. In these tables, the BA column indicates the number of times the BA produced more new solutions, i.e., was more successful than the ABC algorithm, during the checks of the algorithms' success status. Similarly, the ABC column shows the number of times ABC produced more new solutions and was more successful. These columns also provide information about the direction of the information exchange. For example, the BA and ABC columns for function F1 in Table 19 show that information was exchanged 15 times from the BA population to the ABC population and nine times from the ABC population to the BA population. The BA+ and BA− columns respectively indicate the number of new solutions produced by BA that improved the current best solution and the number of new solutions that failed to improve it. Likewise, the ABC+ and ABC− columns show the numbers of successful and unsuccessful solutions produced by ABC, respectively. The C+ and C− columns indicate the numbers of successful and unsuccessful solutions produced in the process that continued with the more successful algorithm after the parallel operation of the algorithms had ended. In general, for the relevant algorithm, the "+" symbol indicates the number of successful solutions that improved the current best solution, while the "−" symbol indicates the number of solutions that could not improve the current best solution.
In the function groups studied, mnc was determined as 15. Accordingly, if any of the BA or ABC columns took the value of 15, it meant that the algorithm was more successful in the information exchange process, and the remaining iterations would continue with this algorithm. Therefore, it can be concluded that the remaining iterations in F1 in Table 19 were continued with the BA algorithm, while those in F2 function were continued with the ABC algorithm.
When the tables are examined, it can be seen that the contribution of the algorithms to the BA_ABC solution changes depending on the function. According to Table 18, it can be said that the contribution of the ABC algorithm to the result is higher in the F25 function, and the contribution of the algorithms to the solution is similar in the F12 function. In the remaining 23 functions, it was determined that the information exchange was mostly from BA to ABC, and the iterations remaining after the information exchange process were continued with the BA algorithm. In other words, the contribution of the BA algorithm to BA_ABC was higher in 92% of the functions, while the ABC algorithm’s contribution to BA_ABC was higher in 4% of the functions. In the remaining 4%, their contribution to BA_ABC was similar. Therefore, it can be said that the BA algorithm is more successful in this set of functions, and it provided a better contribution to the solution.
When Table 19 is examined, it is seen that the information exchange took place from BA to ABC in a total of 14 functions (F1, F4, F6, F7, F8, F9, F12, F14, F15, F16, F17, F18, F19, F20) and the remaining iterations after the information exchange process was continued with the BA algorithm. It can be said that BA contributed more to the solution in these functions. In three of the remaining functions (F2, F10, F11), ABC contributed more to the solution, and the information exchange took place from ABC to BA. The remaining iterations after this process were continued with ABC. In F3, F5, and F13 functions, the contribution of the algorithms to the solution of the BA_ABC algorithm was similar. In other words, the contribution of the BA algorithm to BA_ABC was higher in 70% of the functions, while the ABC algorithm’s contribution to BA_ABC was higher in 15% of the functions. In the remaining 15%, their contribution to BA_ABC was similar.
According to the results given in Table 18 and Table 19, it was determined that the contribution of BA algorithm to the solutions of the hybrid BA_ABC algorithm was higher.

5. Algorithm Complexity

In this section, the complexity of BA, ABC, and BA_ABC algorithms were calculated according to the rules defined for the CEC2020 [57] test set, and the obtained results were compared. Algorithm complexity is calculated as follows.
T0 denotes the computing time for the problem given in Equation (16).
x = 0.55;
for i = 1:1000000
    x = x + x; x = x/2; x = x*x; x = sqrt(x); x = log(x);
    x = exp(x); x = x/(x + 2);
end
T1 is the computing time of Function 1 of the test set alone, for a particular dimension (D) and 200,000 function evaluations. The total computing time of the proposed algorithm for 200,000 evaluations of the same D-dimensional function is T2. T2 is measured five times, and the average of these T2 values is used (T2 = mean(T2)). Finally, the algorithm complexity is expressed as (T2 − T1)/T0 and evaluated with respect to linear growth. Additionally, the algorithm complexities were calculated for dimensions 5, 10, and 15 to determine the effect of the increase in dimension.
The algorithms were coded in Matlab R2016a, and the algorithm complexity calculation was done by running the algorithms on a PC with an Intel CPU (1.50 GHz) and 4 GB RAM. The complexities of BA, ABC, and BA_ABC are shown in Table 20.
According to Table 20, the complexities of all algorithms rise with increasing dimension. The complexity of the proposed BA_ABC algorithm was higher than that of the BA and ABC algorithms in all dimensions. Furthermore, the lowest complexity values for all dimensions were obtained with the ABC algorithm.

6. Results and Discussion

In this study, to examine the performance of the proposed BA_ABC algorithm in different dimensions and on different test sets, the algorithm was tested on classical benchmark functions, CEC2005 small-scale test problems, CEC2010 large-scale test problems, and classical engineering design problems. Table 4 shows the results of the BA, ABC, and BA_ABC algorithms on the classical benchmark test functions, and Table 5 shows the comparative results of BA_ABC with other algorithms (BA, ABC, FA, DE, HSABA, and MBADE). When Table 4 was examined, it was determined that the rise in dimension increased the amount of error; however, BA_ABC performed better than the BA and ABC algorithms in most functions of different dimensions. Statistical tests also confirmed that BA_ABC was more successful. According to the statistical test results in Table 5, BA_ABC showed similar performance to the hybrid algorithms (HSABA and MBADE); there was a significant difference between BA_ABC and the remaining algorithms, and BA_ABC performed better than them. According to the Friedman test results, the BA_ABC algorithm ranked first with the smallest mean rank value. Table 7 shows the results of the BA, ABC, and BA_ABC algorithms on the CEC2005 small-scale test functions. When Table 7 was examined, it was seen that BA_ABC performed better than the BA and ABC algorithms in most of the functions and ranked first according to the Friedman results. The comparison of the BA_ABC algorithm with the recently proposed bat algorithms (SBAIS, MBADE, NBA, iBA, ILSSIWBA, GBA, TBA, PBA, LBA, EBA, and RBA) is given in Table 8. In the table, the BA_ABC algorithm was found to perform worse than one of the compared bat versions, similarly to four, and better than the remaining six; BA_ABC ranked fifth among the algorithms. Table 10 shows the results of the BA, ABC, and BA_ABC algorithms on the CEC2010 large-scale test functions, and the comparison of BA_ABC with the AHDE, RSQPSO, ISCA, μDSDE, and aEUS algorithms is given in Table 11. When Table 10 was examined, it was found that BA_ABC performed better than the BA and ABC algorithms in most functions and ranked first according to the Friedman results. In Table 11, the BA_ABC algorithm performed similarly to one of the compared algorithms and better than the remaining four; BA_ABC ranked first among the algorithms. Finally, the algorithm's performance on classical engineering design problems was examined. The results of the three engineering design problems are given in Tables 12–17. When the results were examined, it was seen that the algorithm produced acceptable and successful results for these problems. Overall, the BA_ABC algorithm produced very successful results for all test sets and dimensions, and its success was verified by the statistical test results. In addition, the contributions of the BA and ABC algorithms to the hybrid algorithm were examined in the fourth section; the results obtained for CEC2005 and CEC2010 are given in Table 18 and Table 19, and it was determined that the BA algorithm contributed more to the hybrid system in most of the functions. Finally, in the fifth section, the complexities of the BA, ABC, and BA_ABC algorithms were examined, and the complexity of the BA_ABC algorithm was found to be higher than that of the standard algorithms.
However, the increased dimension-related performance loss is still an ongoing problem for BA_ABC. According to the results of Table 4, it can be said that BA_ABC is relatively less affected compared to BA and ABC algorithms. The structural difficulty of functions (shift, rotation, etc.) is another reason for the loss in BA_ABC performance. Despite this, the algorithm is seen to find more successful results than BA and ABC algorithms in most of the functions. Consequently, BA_ABC is a successful hybrid algorithm, and the reason for its success can be said to be the reduction of convergence speed to the current best solution using inertia weight and the increase of diversity and global search capability thanks to the hybrid system created with the ABC algorithm.
Techniques other than metaheuristic algorithms can also be used to improve the performance of BA_ABC. For example, using machine learning techniques together with metaheuristic algorithms might be a good option. Fine-tuning of the parameters of metaheuristic algorithms affects algorithm performance substantially, and the most suitable parameter values may vary depending on the structural features and dimensions of the problem. Usually, researchers choose the parameters used for similar problems or try to find the most suitable parameters by performing tests with these values. Parameter selection for metaheuristic algorithms can instead be performed using machine learning strategies (such as fuzzy logic, Bayesian networks, neural networks, support vector machines, etc.). Population management and diversity are other factors affecting performance in metaheuristic algorithms. Machine learning techniques such as the Apriori algorithm, clustering techniques, etc., can be used for extracting information from previously visited solutions, discovering more promising search areas, and increasing population variety [58]. Furthermore, metaheuristic algorithms can be associated with the concept of big data, which has been studied frequently in recent years. Using metaheuristic algorithms in this field makes it possible to produce fast responses in real-time applications where data are regularly updated, to work with different data types at the same time, to cope with uncertainties, and to evaluate the information obtained through the use of the objective function [59]. As a result, it may be appropriate to use machine learning options to increase the population diversity of the proposed BA_ABC and to set its parameters. Furthermore, examining the performance of the proposed BA_ABC algorithm on big data problems can contribute to the literature and may yield successful results.

7. Conclusions and Future Work

In this study, a new hybrid algorithm was proposed to improve BA's global search capability and increase its performance on different test sets. In this algorithm, called BA_ABC, the population was divided into two halves, with the BA algorithm run on one half and the ABC algorithm on the other. Every time a certain number of iterations was completed, the performance of the algorithms was evaluated, and information exchange was ensured by replacing some of the individuals with the worst fitness values in the population of the unsuccessful algorithm with the individuals with the best fitness values from the successful algorithm. When the maximum number of exchanges was reached, the remaining iterations were continued with the successful algorithm. Thanks to the proposed BA_ABC algorithm, the performance losses caused by the structural problems of the BA algorithm were reduced, and its global search capability was improved.
The BA_ABC algorithm was first tested on 10 classic benchmark functions in dimensions 10, 30, 50, 100, and 1000. Despite the increasing dimension, the proposed algorithm was found to be more successful than the BA and ABC algorithms, and it also performed better than the algorithms selected from the literature. Secondly, the CEC2005 small-scale test functions were used to determine how BA_ABC performs compared to recent BA variants; the algorithm again outperformed the BA and ABC algorithms and produced acceptable results compared to the BA variants. Thirdly, the performance of BA_ABC on large-scale problems was tested on the CEC2010 large-scale test functions, where the proposed algorithm performed better than BA, ABC, and recent algorithms from the literature. Finally, the BA_ABC algorithm was tested on three frequently used classical engineering design problems, on which it produced acceptable results similar to those reported in the literature. In addition, the contribution of the BA and ABC components of the hybrid algorithm to the solutions was examined on the CEC2005 and CEC2010 functions; the BA component was observed to contribute more to the BA_ABC solutions on most of the functions. Regarding algorithm complexity, the complexity of BA_ABC was found to be higher than that of the standard algorithms. Overall, when all the results were examined, the proposed algorithm produced successful and acceptable results on the different test groups. As future work, the components of the hybrid system used in BA_ABC can be replaced with different algorithms and tested on the CEC benchmark suites of recent years. Furthermore, machine learning techniques can be added to increase the performance of the algorithm, or its performance can be examined on big data problems as a different field of study.

Author Contributions

Formal analysis, G.Y. and Ö.K.B.; methodology, G.Y. and Ö.K.B.; software, G.Y.; writing—original draft, G.Y.; writing—review and editing, Ö.K.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Karaboga, D. Yapay Zekâ Optimizasyonu Algoritmaları; Atlas Yayın Dağıtım: İstanbul, Turkey, 2004. [Google Scholar]
  2. Gao, W.-F.; Liu, S.-Y. A modified artificial bee colony algorithm. Comput. Oper. Res. 2012, 39, 687–697. [Google Scholar] [CrossRef]
  3. Zhu, G.; Kwong, S. Gbest-guided artificial bee colony algorithm for numerical function optimization. Appl. Math. Comput. 2010, 217, 3166–3173. [Google Scholar] [CrossRef]
  4. Yang, X.; Allan, R.J. Web-based Virtual Research Environments. Nat. Artif. Reason. 2009, 284, 65–80. [Google Scholar] [CrossRef]
  5. Cai, X.; Gao, X.Z.; Xue, Y. Improved bat algorithm with optimal forage strategy and random disturbance strategy. Int. J. Bio-Inspired Comput. 2016, 8, 205. [Google Scholar] [CrossRef]
  6. Meng, X.-B.; Gao, X.; Liu, Y.; Zhang, H. A novel bat algorithm with habitat selection and Doppler effect in echoes for optimization. Expert Syst. Appl. 2015, 42, 6350–6364. [Google Scholar] [CrossRef]
  7. Cai, X.; Wang, H.; Cui, Z.; Cai, J.; Xue, Y.; Wang, L. Bat algorithm with triangle-flipping strategy for numerical optimization. Int. J. Mach. Learn. Cybern. 2017, 9, 199–215. [Google Scholar] [CrossRef]
  8. Zhu, B.; Zhu, W.; Liu, Z.; Duan, Q.; Cao, L. A Novel Quantum-Behaved Bat Algorithm with Mean Best Position Directed for Numerical Optimization. Comput. Intell. Neurosci. 2016, 2016, 1–17. [Google Scholar] [CrossRef] [Green Version]
  9. Ghanem, W.A.H.M.; Jantan, A. An enhanced Bat algorithm with mutation operator for numerical optimization problems. Neural Comput. Appl. 2017, 31, 617–651. [Google Scholar] [CrossRef]
  10. Shan, X.; Cheng, H. Modified bat algorithm based on covariance adaptive evolution for global optimization problems. Soft Comput. 2017, 22, 5215–5230. [Google Scholar] [CrossRef]
  11. Nawi, N.M.; Rehman, M.Z.; Khan, A.; Chiroma, H.; Herawan, T. A Modified Bat Algorithm Based on Gaussian Distribution for Solving Optimization Problem. J. Comput. Theor. Nanosci. 2016, 13, 706–714. [Google Scholar] [CrossRef]
  12. Chakri, A.; Khelif, R.; Benouaret, M.; Yang, X.-S. New directional bat algorithm for continuous optimization problems. Expert Syst. Appl. 2017, 69, 159–175. [Google Scholar] [CrossRef] [Green Version]
  13. Al-Betar, M.A.; Awadallah, M.A. Island bat algorithm for optimization. Expert Syst. Appl. 2018, 107, 126–145. [Google Scholar] [CrossRef]
  14. Gan, C.; Cao, W.; Wu, M.; Chen, X. A new bat algorithm based on iterative local search and stochastic inertia weight. Expert Syst. Appl. 2018, 104, 202–212. [Google Scholar] [CrossRef]
  15. Topal, A.O.; Altun, O. A novel meta-heuristic algorithm: Dynamic Virtual Bats Algorithm. Inf. Sci. 2016, 354, 222–235. [Google Scholar] [CrossRef]
  16. Wang, Y.; Wang, P.; Zhang, J.; Cui, Z.; Cai, X.; Zhang, W.; Chen, J. A Novel Bat Algorithm with Multiple Strategies Coupling for Numerical Optimization. Mathematics 2019, 7, 135. [Google Scholar] [CrossRef] [Green Version]
  17. Liu, Q.; Wu, L.; Xiao, W.; Wang, F.; Zhang, L. A novel hybrid bat algorithm for solving continuous optimization problems. Appl. Soft Comput. 2018, 73, 67–82. [Google Scholar] [CrossRef]
  18. Wang, G.-G.; Guo, L. A Novel Hybrid Bat Algorithm with Harmony Search for Global Numerical Optimization. J. Appl. Math. 2013, 2013, 1–21. [Google Scholar] [CrossRef]
  19. Fister, I.; Fong, S.; Brest, J.; Fister, I. A Novel Hybrid Self-Adaptive Bat Algorithm. Sci. World J. 2014, 2014, 709738. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Imane, M.; Nadjet, K. Hybrid Bat algorithm for overlapping community detection. IFAC-PapersOnLine 2016, 49, 1454–1459. [Google Scholar] [CrossRef]
  21. Cincy, W.; Jeba, J. Performance Analysis of Novel Hybrid A-BAT Algorithm in Crowdsourcing Environment. Int. J. Appl. Eng. Res. 2017, 12, 14964–14969. [Google Scholar]
  22. Chaudhary, R.; Banati, H. Swarm bat algorithm with improved search (SBAIS). Soft Comput. 2018, 23, 11461–11491. [Google Scholar] [CrossRef]
  23. Rauf, H.T.; Malik, S.; Shoaib, U.; Irfan, M.N.; Lali, M.I. Adaptive inertia weight Bat algorithm with Sugeno-Function fuzzy search. Appl. Soft Comput. 2020, 90, 106159. [Google Scholar] [CrossRef]
  24. Yildizdan, G.; Baykan, Ö.K. A novel modified bat algorithm hybridizing by differential evolution algorithm. Expert Syst. Appl. 2020, 141, 112949. [Google Scholar] [CrossRef]
  25. Nguyen, T.-T.; Pan, J.-S.; Dao, T.-K.; Kuo, M.-Y.; Horng, M.-F. Hybrid Bat Algorithm with Artificial Bee Colony. In Advances in Intelligent Systems and Computing; Springer: Berlin, Germany, 2014; Volume II, pp. 45–55. [Google Scholar]
  26. Al-Betar, M.A.; Awadallah, M.A.; Faris, H.; Yang, X.-S.; Khader, A.T.; AlOmari, O.A. Bat-inspired algorithms with natural selection mechanisms for global optimization. Neurocomputing 2018, 273, 448–465. [Google Scholar] [CrossRef]
  27. Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Technical Report-TR06; Computer Engineering Department, Engineering Faculty, Erciyes University: Kayseri, Turkey, 2005. [Google Scholar]
  28. Karaboga, D.; Basturk, B. On the performance of artificial bee colony (ABC) algorithm. Appl. Soft Comput. 2008, 8, 687–697. [Google Scholar] [CrossRef]
  29. Karaboga, D.; Akay, B. A comparative study of Artificial Bee Colony algorithm. Appl. Math. Comput. 2009, 214, 108–132. [Google Scholar] [CrossRef]
  30. Yılmaz, S.; Küçüksille, E.U. Improved Bat Algorithm (IBA) on Continuous Optimization Problems. Lect. Notes Softw. Eng. 2013, 1, 279–283. [Google Scholar] [CrossRef] [Green Version]
  31. Yılmaz, S.; Küçüksille, E.U. A new modification approach on bat algorithm for solving optimization problems. Appl. Soft Comput. 2015, 28, 259–275. [Google Scholar] [CrossRef]
  32. Arora, U.; Lodhi, E.A.; Saxena, T. PID Parameter Tuning Using Modified BAT Algorithm. J. Autom. Control. Eng. 2016, 4, 347–352. [Google Scholar] [CrossRef]
  33. Feng, Y.; Teng, G.-F.; Wang, A.-X.; Yao, Y.-M. Chaotic Inertia Weight in Particle Swarm Optimization. In Proceedings of the Second International Conference on Innovative Computing, Information and Control (ICICIC 2007), Kumamoto, Japan, 5–7 September 2007; p. 475. [Google Scholar]
  34. Suganthan, P.N.; Hansen, N.; Liang, J.J.; Deb, K.; Chen, Y.-P.; Auger, A.; Tiwari, S. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL Rep. 2005, 2005005, 2005. [Google Scholar]
  35. Nanyang Technological University. Available online: https://www3.ntu.edu.sg/home/epnsugan/index_files/CEC10-LSO/CEC10.htm (accessed on 28 September 2020).
  36. Sheskin, D.J. Handbook of Parametric and Nonparametric Statistical Procedures; CRC Press: Boca Raton, FL, USA, 2020. [Google Scholar]
  37. García, S.; Molina, D.; Lozano, M.; Herrera, F. A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behaviour: A case study on the CEC’2005 Special Session on Real Parameter Optimization. J. Heuristics 2008, 15, 617–644. [Google Scholar] [CrossRef]
  38. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  39. Ge, H.; Sun, L.; Yang, X. Adaptive hybrid differential evolution with circular sliding window for large scale optimization. In Proceedings of the 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), Changsha, China, 13–15 August 2016; pp. 87–94. [Google Scholar]
  40. Fang, W.; Zhang, L.; Zhou, J.; Wu, X.; Sun, J. A novel quantum-behaved particle swarm optimization with random selection for large scale optimization. 2017 IEEE Congr. Evol. Comput. (CEC) 2017, 2746–2751. [Google Scholar] [CrossRef]
  41. Long, W.; Wu, T.; Liang, X.; Xu, S. Solving high-dimensional global optimization problems using an improved sine cosine algorithm. Expert Syst. Appl. 2019, 123, 108–126. [Google Scholar] [CrossRef]
  42. Yıldız, Y.E.; Topal, A.O. Large scale continuous global optimization based on micro differential evolution with local directional search. Inf. Sci. 2019, 477, 533–544. [Google Scholar] [CrossRef]
  43. Gardeux, V.; Omran, M.G.H.; Chelouah, R.; Siarry, P.; Glover, F. Adaptive pattern search for large-scale optimization. Appl. Intell. 2017, 35, 1095–1330. [Google Scholar] [CrossRef]
  44. Sandgren, E. Nonlinear Integer and Discrete Programming in Mechanical Design Optimization. J. Mech. Des. 1990, 112, 223–229. [Google Scholar] [CrossRef]
  45. Kannan, B.K.; Kramer, S.N. An Augmented Lagrange Multiplier Based Method for Mixed Integer Discrete Continuous Optimization and Its Applications to Mechanical Design. J. Mech. Des. 1994, 116, 405–411. [Google Scholar] [CrossRef]
  46. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  47. Ben Guedria, N. Improved accelerated PSO algorithm for mechanical engineering optimization problems. Appl. Soft Comput. 2016, 40, 455–467. [Google Scholar] [CrossRef]
  48. Kohli, M.; Arora, S. Chaotic grey wolf optimization algorithm for constrained optimization problems. J. Comput. Des. Eng. 2017, 5, 458–472. [Google Scholar] [CrossRef]
  49. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm—A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110, 151–166. [Google Scholar] [CrossRef]
  50. Sadollah, A.; Bahreininejad, A.; Eskandar, H.; Hamdi, M. Mine blast algorithm: A new population based algorithm for solving constrained engineering optimization problems. Appl. Soft Comput. 2013, 13, 2592–2612. [Google Scholar] [CrossRef]
  51. Garg, H. A hybrid PSO-GA algorithm for constrained optimization problems. Appl. Math. Comput. 2016, 274, 292–305. [Google Scholar] [CrossRef]
  52. Garg, H. Solving structural engineering design optimization problems using an artificial bee colony algorithm. J. Ind. Manag. Optim. 2014, 10, 777–794. [Google Scholar] [CrossRef]
  53. Gandomi, A.H.; Yang, X.-S.; Alavi, A.H.; Talatahari, S. Bat algorithm for constrained optimization tasks. Neural Comput. Appl. 2012, 22, 1239–1255. [Google Scholar] [CrossRef]
  54. Zhang, Z.; Ding, S.; Jia, W. A hybrid optimization algorithm based on cuckoo search and differential evolution for solving constrained engineering problems. Eng. Appl. Artif. Intell. 2019, 85, 254–268. [Google Scholar] [CrossRef]
  55. Zhang, Y.; Jin, Z.; Chen, Y. Hybrid teaching–learning-based optimization and neural network algorithm for engineering design optimization problems. Knowl.-Based Syst. 2020, 187, 104836. [Google Scholar] [CrossRef]
  56. Belegundu, A.D.; Arora, J.S. A study of mathematical programming methods for structural optimization. Part II: Numerical results. Int. J. Numer. Methods Eng. 1985, 21, 1601–1623. [Google Scholar] [CrossRef]
  57. Nanyang Technological University. Available online: https://www.ntu.edu.sg/home/epnsugan/index_files/CEC2020/CEC2020-2.htm (accessed on 28 September 2020).
  58. Calvet, L.; De Armas, J.; Masip, D.; Juan, A.A. Learnheuristics: Hybridizing metaheuristics with machine learning for optimization with dynamic inputs. Open Math. 2017, 15, 261–280. [Google Scholar] [CrossRef]
  59. Dhaenens, C.; Jourdan, L. Metaheuristics for Big Data; Wiley Online Library: Hoboken, NJ, USA, 2016. [Google Scholar]
Figure 1. Convergence Graphics: (A) CEC2005 test functions; (B) CEC2010 test functions.
Figure 2. Structure of the pressure vessel design problem [46].
Figure 3. The structure of the tension/compression spring design problem [46].
Figure 4. The structure of the gear train design problem [46].
Table 1. Notations of Bat Algorithm (BA), Artificial Bee Colony (ABC), and BA_ABC.
BA Notations
N: the number of individuals | v_i^(t−1): the velocity of the ith bat in the previous iteration
t: the iteration number | x_i^t: the current location of the ith bat
v_i: the speed of the ith bat | x_i^(t+1): the new location of the ith bat
x_i: the location of the ith bat | x_old: a solution among the best solutions
A: loudness | x*: global best value
r: pulse emission rate | ε: a random value in the range [−1, 1]
A_0: the max value of loudness | α: constant
A_min: the min value of loudness | γ: constant
f_i: the frequency of the ith bat | A_i^t: the current loudness of the ith bat
f_min: the min value of frequency | A_i^(t+1): the new loudness of the ith bat
f_max: the max value of frequency | Ā^t: the average of the loudness values of all bats
β: a random value in the range [0, 1] | r_i^0: the initial value of the pulse emission rate of the ith bat
v_i^t: the current velocity of the ith bat | r_i^t: the current pulse emission rate of the ith bat
ABC Notations
FN: the number of food sources | v_i,j: the jth parameter value of the new food source
D: the number of parameters (dimension) | k: a random value in the range [1, FN]
ub_j: the upper limit value of the jth parameter | x_k,i: a randomly selected food source
lb_j: the lower limit value of the jth parameter | ϕ: a random value in the range [−1, 1]
x_i: the ith food source | f_i: the value of the objective function
v_i: a new food source located around the ith food source | p_i: the probability value of the ith food source
x_i,j: the jth parameter value of the ith food source | fitness_i: the fitness value of the ith food source
BA_ABC Notations
w: the inertia weight coefficient | max_iteration: the maximum number of iterations
x_new: a new location located around the bat | f(x_new): the fitness value of the new location
f(x*): the fitness value of the global best
Table 2. Nomenclatures of BA, ABC, and BA_ABC.
BA Nomenclatures
echolocation: the biological sonar used by some mammals such as bats, dolphins, and whales
sonar: a system that detects the location and condition of an object with sound waves
prey: the optimal solution the bats want to find
loudness: the intensity of the bat's sound when the bat is approaching prey
pulse emission rate: the emission rate of the sound pulses produced by bats during echolocation
global best value: the individual with the best fitness value in the population
step: the amount of progress in the solution space
location: a possible solution of the problem
population: the set of possible solutions of the problem
ABC Nomenclatures
food source: a possible solution of the problem
the amount of nectar: the quality of the solution
scout bee: a bee looking for a random food source in the environment
employed bee: a bee responsible for a food source and carrying information about that food source to the hive
onlooker bee: a bee waiting in the hive, selecting a food source depending on its nectar quality, and searching around it
fitness value: the value of the objective function
greedy selection: keeping whichever of the current and candidate solutions is closer to the desired result
BA_ABC Nomenclatures
inertia weight: the coefficient determining the contribution of the previous velocity
benchmark functions: functions used to test the performance of an optimization algorithm
Table 3. Classic benchmark functions.
Function | Name | Definition | Domain / Characteristic | f*
F1 | Griewangk's function | $f(x)=\frac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right)+1$ | [−600, 600] / M | 0
F2 | Rastrigin's function | $f(x)=10n+\sum_{i=1}^{n}\left(x_i^2-10\cos(2\pi x_i)\right)$ | [−15, 15] / M | 0
F3 | Rosenbrock's function | $f(x)=\sum_{i=1}^{n-1}\left(100\left(x_{i+1}-x_i^2\right)^2+\left(x_i-1\right)^2\right)$ | [−15, 15] / M | 0
F4 | Ackley's function | $f(x)=\sum_{i=1}^{n-1}\left(20+e-20\exp\left(-0.2\sqrt{0.5\left(x_{i+1}^2+x_i^2\right)}\right)-\exp\left(0.5\left(\cos(2\pi x_{i+1})+\cos(2\pi x_i)\right)\right)\right)$ | [−32.768, 32.768] / M | 0
F5 | Schwefel's function | $f(x)=418.9829\,n-\sum_{i=1}^{n}x_i\sin\left(\sqrt{|x_i|}\right)$ | [−500, 500] / M | 0
F6 | Sphere function | $f(x)=\sum_{i=1}^{n}x_i^2$ | [−600, 600] / U | 0
F7 | Easom's function | $f(x)=-(-1)^n\left(\prod_{i=1}^{n}\cos^2(x_i)\right)\exp\left[-\sum_{i=1}^{n}(x_i-\pi)^2\right]$ | [−2π, 2π] / U | −1
F8 | Michalewicz's function | $f(x)=-\sum_{i=1}^{n}\sin(x_i)\left[\sin\left(i x_i^2/\pi\right)\right]^{20}$ | [0, π] / M | *
F9 | Xin-She Yang's function | $f(x)=\left(\sum_{i=1}^{n}|x_i|\right)\exp\left[-\sum_{i=1}^{n}\sin\left(x_i^2\right)\right]$ | [−2π, 2π] / M | 0
F10 | Zakharov's function | $f(x)=\sum_{i=1}^{n}x_i^2+\left(\frac{1}{2}\sum_{i=1}^{n}i x_i\right)^2+\left(\frac{1}{2}\sum_{i=1}^{n}i x_i\right)^4$ | [−5, 10] / U | 0
f*: optimal solution. *: depends on the dimension (f* = −1.8013 for 2D). U: unimodal function. M: multimodal function.
Table 4. Results of the BA, ABC, and BA_ABC algorithms on the classical benchmark functions.
D = 10D = 30D = 50
MeanStdT (+, −, =)MeanStdT (+, −, =)MeanStdT (+, −, =)
F1BA2.92 × 1011.25 × 101+(25,0,0)1.92 × 10−14.29 × 10−1+(25,0,0)5.14 × 1036.20 × 103+(25,0,0)
ABC4.70 × 10−35.19 × 10−3−(22,3,0)2.83 × 10−141.33 × 10−13+(24,0,1)5.34 × 10−142.58 × 10−13+(25,0,0)
BA_ABC8.80 × 10−38.16 × 10−3 1.07 × 10−161.49 × 10−16 2.26 × 10−162.25 × 10−16
F2BA5.46 × 1011.89 × 101+(25,0,0)2.00 × 1024.67 × 101+(25,0,0)3.75 × 1027.69 × 101+(25,0,0)
ABC00=(0,0,25)5.68 × 10−144.02 × 10−14=(11,3,11)1.57 × 10−122.60 × 10−12+(21,0,4)
BA_ABC00 3.87 × 10−143.92 × 10−14 8.87 × 10−145.46 × 10−14
F3BA1.12 × 1013.34 × 101+(24,1,0)3.25 × 1012.75 × 101+(25,0,0)4.23 × 1013.30 × 100+(25,0,0)
ABC1.58 × 10−11.73 × 10−1=(13,12,0)5.92 × 10−26.45 × 10−2+(23,2,0)3.96 × 10−26.14 × 10−2+(23,2,0)
BA_ABC3.37 × 10−25.13 × 10−1 5.38 × 10−31.92 × 10−2 5.29 × 10−31.24 × 10−2
F4BA1.42 × 1021.05 × 100+(25,0,0)4.74 × 1027.58 × 100+(25,0,0)8.19 × 1028.44 × 100+(25,0,0)
ABC1.40 × 1024.65 × 10−4+(25,0,0)4.50 × 1022.37 × 10−2+(23,2,0)7.61 × 1025.32 × 10−2+(24,1,0)
BA_ABC1.40 × 1021.12 × 10−4 4.50 × 1021.84 × 10−2 7.60 × 1026.65 × 10−2
F5BA1.72 × 1034.68 × 102+(25,0,0)5.36 × 1038.24 × 102+(25,0,0)9.38 × 1039.33 × 102+(25,0,0)
ABC1.27 × 10−43.96 × 10−13=(0,0,25)3.82 × 10−42.18 × 10−9=(1,0,24)1.32 × 10−14.58 × 10−1+(25,0,0)
BA_ABC1.27 × 10−42.52 × 10−13 3.82 × 10−41.23 × 10−12 6.36 × 10−41.10 × 10−11
F6BA7.83 × 1037.49 × 103+(25,0,0)6.77 × 1022.58 × 103+(25,0,0)3.73 × 10−47.18 × 10−5+(25,0,0)
ABC8.3 × 10−172.49 × 10−17+(25,0,0)5.73 × 10−168.52 × 10−17+(25,0,0)1.19 × 10−151.72 × 10−16+(25,0,0)
BA_ABC2.86 × 10−171.73 × 10−17 2.07 × 10−167.49 × 10−17 4.09 × 10−161.26 × 10−16
F7BA−9.92 × 10−14.77 × 10−3+(25,0,0)−5.72 × 10−14.92 × 10−1−(11,14,0)−7.27 × 10−31.27 × 10−2−(7,18,0)
ABC−7.16 × 10−31.29 × 10−2+(25,0,0)−6.07 × 10−1083.04 × 10−107+(25,0,0)−2.54 × 10−2550+(25,0,0)
BA_ABC−11.05 × 10−4 −2.14 × 10−14.01 × 10−1 −4.15 × 10−22.00 × 10−1
F8BA−5.89 × 1004.04 × 10−1+(25,0,0)−1.04 × 1018.39 × 10−1+(25,0,0)−1.42 × 1019.63 × 10−1+(25,0,0)
ABC−9.66 × 1001.56 × 10−7+(5,0,20)−2.96 × 1018.66 × 10−3=(19,6,0)−4.95 × 1012.05 × 10−2+(23,2,0)
BA_ABC−9.66 × 1002.51 × 10−13 −2.96 × 1011.92 × 10−2 −4.96 × 1011.48 × 10−2
F9BA2.33 × 10−34.94 × 10−4+(25,0,0)2.06 × 10−118.60 × 10−12+(25,0,0)7.34 × 10−204.18 × 10−20=(9,16,0)
ABC5.66 × 10−41.65 × 10−16=(0,0,25)3.51 × 10−126.71 × 10−16+(24,1,0)1.96 × 10−172.20 × 10−18+(25,0,0)
BA_ABC5.66 × 10−41.07 × 10−16 3.51 × 10−122.40 × 10−16 1.15 × 10−198.52 × 10−20
F10BA2.56 × 10−31.93 × 10−3+(25,0,0)2.94 × 10−26.67 × 10−3+(25,0,0)1.07 × 10−11.68 × 10−2+(25,0,0)
ABC3.90 × 1013.65 × 101+(25,0,0)2.49 × 1033.29 × 102+(25,0,0)5.09 × 1032.87 × 102+(25,0,0)
BA_ABC1.41 × 10−48.86 × 10−5 9.12 × 10−43.29 × 10−4 2.74 × 10−36.03 × 10−4
Mean Rank: D = 10: BA 2.80, ABC 1.85, BA_ABC 1.35 | D = 30: BA 2.70, ABC 2.00, BA_ABC 1.30 | D = 50: BA 2.60, ABC 2.30, BA_ABC 1.10
D = 100D = 1000
MeanStdT ( +, − , =)MeanStdT (+, − , =)
F1BA1.84 × 10−17.05 × 10−1+(25,0,0)1.91 × 10−31.93 × 10−3+(25,0,0)
ABC6.83 × 10−156.19 × 10−15+(25,0,0)1.38 × 10−147.46 × 10−16+(23,2,0)
BA_ABC7.42 × 10−166.59 × 10−16 4.26 × 10−158.36 × 10−15
F2BA9.83 × 1021.23 × 102+(25,0,0)1.18 × 1043.13 × 103+(25,0,0)
ABC4.66 × 10−51.77 × 10−4+(25,0,0)2.84 × 1011.47 × 100+(25,0,0)
BA_ABC3.91 × 10−132.77 × 10−13 5.73 × 1001.53 × 100
F3BA9.42 × 1011.44 × 101+(20,5,0)1.05 × 1033.58 × 101+(25,0,0)
ABC2.88 × 10−22.65 × 10−2−(1,24,0)1.21 × 1024.53 × 101=(14,11,0)
BA_ABC4.30 × 1016.46 × 101 1.50 × 1021.08 × 102
F4BA1.60 × 1036.94 × 101+(25,0,0)1.55 × 1047.95 × 10−2−(0,25,0)
ABC1.54 × 1031.87 × 10−1+(25,0,0)1.55 × 1041.26 × 10−1+(25,0,0)
BA_ABC1.54 × 1032.40 × 10−1 1.55 × 1041.75 × 10−1
F5BA1.90 × 1041.43 × 103+(25,0,0)2.00 × 1054.34 × 103+(25,0,0)
ABC1.87 × 1029.32 × 101−(6,19,0)1.30 × 1045.45 × 102+(25,0,0)
BA_ABC4.12 × 1022.30 × 102 7.37 × 1031.64 × 103
F6BA5.26 × 1011.81 × 102+(25,0,0)1.31 × 1042.40 × 104+(25,0,0)
ABC3.60 × 10−155.56 × 10−16+(25,0,0)3.80 × 10−143.10 × 10−15=(19,6,0)
BA_ABC1.23 × 10−154.49 × 10−16 6.76 × 10−149.05 × 10−14
F7BA00=(0,0,25)00=(0,0,25)
ABC00=(0,0,25)00=(0,0,25)
BA_ABC00 00
F8BA−2.25 × 1011.22 × 100+(25,0,0)−1.49 × 1024.27 × 100+(25,0,0)
ABC−9.91 × 1015.52 × 10−2+(24,1,0)−9.68 × 1025.81 × 10−1+(25,0,0)
BA_ABC−9.93 × 1016.64 × 10−2 −9.80 × 1024.60 × 10−1
F9BA3.89 × 10−413.40 × 10−41=(11,14,0)00=(0,0,25)
ABC1.56 × 10−172.04 × 10−18+(25,0,0)3.02 × 10−737.60 × 10−73+(25,0,0)
BA_ABC3.50 × 10−412.93 × 10−41 00
F10BA1.14 × 10−13.62 × 10−2+(25,0,0)1.30 × 1048.67 × 102+(25,0,0)
ABC1.19 × 1044.77 × 102+(25,0,0)1.59 × 1051.12 × 103+(25,0,0)
BA_ABC1.14 × 10−21.72 × 10−3 1.43 × 1031.29 × 102
Mean Rank BAABCBA_ABCBAABCBA_ABC
2.701.951.352.552.001.45
Table 5. Comparison with other evolutionary algorithms (D = 30).
Function BA_ABCBAFADEABCHSABAMBADE
F1Mean1.07 × 10−161.16 × 1006.65 × 10−11.05 × 1001.09 × 1007.71 × 10−26.39 × 10−3
Std1.49 × 10−161.15 × 1006.40 × 10−12.22 × 10−21.23 × 10−12.85 × 10−21.12 × 10−2
F2Mean3.87 × 10−149.28 × 1022.44 × 1022.28 × 1027.33 × 1014.63 × 1018.23 × 100
Std3.92 × 10−148.90 × 1022.35 × 1021.33 × 1012.24 × 1012.99 × 1014.10 × 100
F3Mean5.38 × 10−32.84 × 1061.12 × 1024.57 × 1025.18 × 1021.02 × 1021.14 × 101
Std1.92 × 10−22.95 × 1061.01 × 1022.27 × 1024.72 × 1021.41 × 1011.25 × 101
F4Mean4.50 × 1022.00 × 1012.11 × 1011.77 × 1007.17 × 1009.44 × 1004.56 × 102
Std1.84 × 10−22.00 × 1012.11 × 1013.17 × 10−11.03 × 1006.62 × 1001.16 × 100
F5Mean3.82 × 10−49.45 × 1036.78 × 1037.57 × 1032.64 × 1032.70 × 1021.17 × 102
Std1.23 × 10−129.52 × 1036.75 × 1034.40 × 1023.30 × 1023.06 × 1011.55 × 102
F6Mean2.07 × 10−165.87 × 10−25.19 × 1001.77 × 1021.63 × 1022.63 × 10−22.65 × 10−20
Std7.49 × 10−176.53 × 10−55.14 × 1007.12 × 1011.96 × 1021.29 × 10−132.91 × 10−20
F7Mean−2.14 × 10−10−3.81 × 10−30−2.76 × 10−175−1.76 × 10−1360−1
Std4.01 × 10−10−3.73 × 10−3008.79 × 10−13603.20 × 10−17
F8Mean−2.96 × 101−8.62 × 100−5.15 × 100−1.07 × 101−2.30 × 101−1.30 × 101−2.56 × 101
Std1.92 × 10−2−8.39 × 100−5.35 × 1006.70 × 10−16.98 × 10−1−1.36 × 1014.91 × 10−1
F9Mean3.51 × 10−121.57 × 10−111.70 × 10−42.46 × 10−111.10 × 10−116.06 × 10−126.81 × 10−12
Std2.40 × 10−161.03 × 10−114.72 × 10−51.20 × 10−121.91 × 10−123.85 × 10−124.42 × 10−13
F10Mean9.12 × 10−42.76 × 1021.32 × 1043.78 × 1012.53 × 1022.72 × 1011.46 × 10−9
Std3.29 × 10−42.82 × 1021.32 × 1048.74 × 1003.15 × 1011.37 × 1003.13 × 10−9
Wilcoxon Test
+ 999997
- 111113
= 000000
P 0.037(+)0.047(+)0.047(+)0.047(+)0.074(≈)0.059(≈)
Friedman Test
Mean Rank 1.805.905.604.804.303.302.30
Table 6. CEC2005 benchmark functions.
Function | Name | Bound | f_min
Unimodal functions
F1 | Shifted Sphere Function | [−100, 100] | −450
F2 | Shifted Schwefel's Problem 1.2 | [−100, 100] | −450
F3 | Shifted Rotated High Conditioned Elliptic Function | [−100, 100] | −450
F4 | Shifted Schwefel's Problem 1.2 with Noise in Fitness | [−100, 100] | −450
F5 | Schwefel's Problem 2.6 with Global Optimum on Bounds | [−100, 100] | −310
Multimodal functions
F6 | Shifted Rosenbrock's Function | [−100, 100] | 390
F7 | Shifted Rotated Griewank's Function without Bounds | [0, 600] | −180
F8 | Shifted Rotated Ackley's Function with Global Optimum on Bounds | [−32, 32] | −140
F9 | Shifted Rastrigin's Function | [−5, 5] | −330
F10 | Shifted Rotated Rastrigin's Function | [−5, 5] | −330
F11 | Shifted Rotated Weierstrass Function | [−0.5, 0.5] | 90
F12 | Schwefel's Problem 2.13 | [−π, π] | −460
Expanded functions
F13 | Expanded Extended Griewank's plus Rosenbrock's Function (F8F2) | [−3, 1] | −130
F14 | Expanded Rotated Extended Scaffer's F6 | [−100, 100] | −300
Hybrid composition functions
F15 | Hybrid Composition Function 1 | [−5, 5] | 120
F16 | Rotated Hybrid Composition Function 1 | [−5, 5] | 120
F17 | Rotated Hybrid Composition Function 1 with Noise in Fitness | [−5, 5] | 120
F18 | Rotated Hybrid Composition Function 2 | [−5, 5] | 10
F19 | Rotated Hybrid Composition Function 2 with a Narrow Basin for the Global Optimum | [−5, 5] | 10
F20 | Rotated Hybrid Composition Function 2 with the Global Optimum on the Bounds | [−5, 5] | 10
F21 | Rotated Hybrid Composition Function 3 | [−5, 5] | 360
F22 | Rotated Hybrid Composition Function 3 with High Condition Number Matrix | [−5, 5] | 360
F23 | Non-Continuous Rotated Hybrid Composition Function 3 | [−5, 5] | 360
F24 | Rotated Hybrid Composition Function 4 | [−5, 5] | 260
F25 | Rotated Hybrid Composition Function 4 without Bounds | [2, 5] | 260
Table 7. CEC2005 test results of BA, ABC, and BA_ABC (D = 10).
Function 1st (Best)7th13th (Median)19th25th (Worst)MeanStdTest (+, −, =)
F1BA1.5 × 10−32.89 × 10−34.01 × 10−35.47 × 10−36.74 × 10−34.17 × 10−31.53 × 10−3+(25, 0, 0)
ABC2.5 × 1037.49 × 1039.43 × 1031.07 × 1041.16 × 1048.92 × 1032.37 × 103+(25, 0, 0)
BA_ABC0000000
F2BA2.44 × 10−49.29 × 10−41.16 × 10−31.71 × 10−32.71 × 10−31.34 × 10−36.83 × 10−4+(24, 1, 0)
ABC5.26 × 1038.72 × 1031.06 × 1041.33 × 1041.84 × 1041.09 × 1043.32 × 103+(25, 0, 0)
BA_ABC6.02 × 10−79.92 × 10−62.80 × 10−57.01 × 10−53.35 × 10−47.77 × 10−51.10 × 10−4
F3BA9.63 × 1029.16 × 1032.48 × 1047.85 × 1041.80 × 1054.28 × 1044.45 × 104+(23, 2, 0)
ABC9.37 × 1063.62 × 1075.72 × 1078.00 × 1071.35 × 1086.17 × 1073.39 × 107+(25, 0, 0)
BA_ABC1.73 × 1028.71 × 1022.11 × 1034.14 × 1039.90 × 1032.95 × 1032.62 × 103
F4BA2.81 × 1037.86 × 1031.05 × 1041.59 × 1042.25 × 1041.14 × 1045.16 × 103+(25, 0, 0)
ABC6.09 × 1031.00 × 1041.14 × 1041.17 × 1041.66 × 1041.10 × 1042.49 × 103+(25, 0, 0)
BA_ABC4.89 × 10−47.95 × 10−42.03 × 10−34.30 × 10−34.15 × 10−24.72 × 10−38.66 × 10−3
F5BA6.62 × 1015.41 × 1029.78 × 1021.71 × 1035.35 × 1031.20 × 1031.10 × 103+(25, 0, 0)
ABC7.66 × 1031.16 × 1041.32 × 1041.39 × 1041.52 × 1041.28 × 1041.76 × 103+(25, 0, 0)
BA_ABC8.59 × 10−81.81 × 10−53.11 × 10−41.16 × 10−21.99 × 1001.47 × 10−14.29 × 10−1
F6BA5.49 × 10−14.25 × 1005.73 × 1009.65 × 1003.01 × 1024.77 × 1018.98 × 101+(22, 3, 0)
ABC1.28 × 1087.02 × 1089.11 × 1081.17 × 1093.56 × 1091.15 × 1098.46 × 108+(25, 0, 0)
BA_ABC7.31 × 10−82.76 × 10−27.72 × 10−12.92 × 1001.68 × 1012.29 × 1003.98 × 100
F7BA8.92 × 1021.11 × 1031.26 × 1031.38 × 1031.78 × 1031.26 × 1032.21 × 102≈(13, 12, 0)
ABC1.93 × 1032.30 × 1032.47 × 1032.60 × 1032.88 × 1032.45 × 1032.06 × 102+(25, 0, 0)
BA_ABC1.27 × 1031.27 × 1031.27 × 1031.27 × 1031.27 × 1031.27 × 1034.57 × 10−13
F8BA2.01 × 1012.03 × 1012.03 × 1012.04 × 1012.04 × 1012.03 × 1019.19 × 10−2+(21, 4, 0)
ABC2.05 × 1012.07 × 1012.08 × 1012.08 × 1012.09 × 1012.07 × 1019.22 × 10−2+(25, 0, 0)
BA_ABC2.00 × 1012.01 × 1012.02 × 1012.03 × 1012.04 × 1012.02 × 1011.18 × 10−1
F9BA1.25 × 1012.45 × 1013.34 × 1014.16 × 1015.53 × 1013.32 × 1011.08 × 101+(25, 0, 0)
ABC5.21 × 1018.04 × 1018.52 × 1019.62 × 1011.05 × 1028.57 × 1011.32 × 101+(25, 0, 0)
BA_ABC0000000
F10BA2.04 × 1012.87 × 1013.41 × 1014.35 × 1015.57 × 1013.58 × 1019.62 × 100+(21, 4, 0)
ABC9.76 × 1011.08 × 1021.18 × 1021.25 × 1021.44 × 1021.18 × 1021.30 × 101+(25, 0, 0)
BA_ABC7.96 × 1001.39 × 1011.69 × 1012.38 × 1014.58 × 1011.99 × 1019.27 × 100
F11BA7.99 × 1008.62 × 1009.01 × 1009.47 × 1001.00 × 1019.10 × 1005.46 × 10−1+(25, 0, 0)
ABC9.99 × 1001.08 × 1011.15 × 1011.18 × 1011.26 × 1011.14 × 1017.32 × 10−1+(25, 0, 0)
BA_ABC2.33 × 1003.50 × 1004.74 × 1005.37 × 1006.06 × 1004.50 × 1001.10 × 100
F12BA3.87 × 1025.50 × 1028.12 × 1021.09 × 1032.49 × 1039.60 × 1025.47 × 102+(23, 2, 0)
ABC4.10 × 1046.65 × 1047.80 × 1049.00 × 1041.13 × 1057.80 × 1042.08 × 104+(25, 0, 0)
BA_ABC1.52 × 1018.42 × 1011.66 × 1023.16 × 1026.07 × 1022.17 × 1021.64 × 102
F13BA1.64 × 1002.82 × 1013.09 × 1013.54 × 1013.89 × 1013.12 × 1015.65 × 10−1+(25, 0, 0)
ABC8.03 × 1001.37 × 1011.68 × 1012.11 × 1013.47 × 1011.74 × 1015.90 × 100+(25, 0, 0)
BA_ABC2.84 × 10−141.42 × 10−11.93 × 10−12.39 × 10−14.15 × 10−12.03 × 10−11.08 × 10−1
F14BA3.54 × 1004.00 × 1004.25 × 1004.40 × 1004.54 × 1004.20 × 1002.66 × 10−1−(0, 25, 0)
ABC4.81 × 1004.83 × 1004.84 × 1004.85 × 1004.86 × 1004.84 × 1001.31 × 10−2+(24, 1, 0)
BA_ABC4.77 × 1004.80 × 1004.81 × 1004.81 × 1004.84 × 1004.80 × 1001.71 × 10−2
F15BA1.77 × 1023.01 × 1024.59 × 1025.53 × 1026.74 × 1024.43 × 1021.39 × 102+(25, 0, 0)
ABC4.93 × 1027.37 × 1027.65 × 1027.99 × 1028.43 × 1027.58 × 1026.91 × 101+(25, 0, 0)
BA_ABC0000000
F16BA1.43 × 1021.55 × 1021.70 × 1021.98 × 1026.60 × 1022.17 × 1021.28 × 102+(24, 1, 0)
ABC3.70 × 1023.90 × 1024.20 × 1024.55 × 1025.46 × 1024.31 × 1025.02 × 102+(25, 0, 0)
BA_ABC1.00 × 1021.17 × 1021.27 × 1021.35 × 1021.75 × 1021.29 × 1021.76 × 101
F17BA1.44 × 1021.73 × 1022.02 × 1022.16 × 1025.11 × 1022.20 × 1029.09 × 101+ (24, 1, 0)
ABC2.87 × 1023.86 × 1024.11 × 1024.45 × 1025.40 × 1024.22 × 1025.45 × 101+ (25, 0, 0)
BA_ABC1.22 × 1021.43 × 1021.58 × 1021.65 × 1021.73 × 1021.53 × 1021.54 × 101
F18BA3.41 × 1028.24 × 1029.31 × 1021.01 × 1031.05 × 1039.11 × 1021.48 × 102+ (24, 1, 0)
ABC1.12 × 1031.16 × 1031.19 × 1031.24 × 1031.34 × 1031.20 × 1035.21 × 101+ (25, 0, 0)
BA_ABC3.00 × 1023.57 × 1024.44 × 1025.00 × 1029.82 × 1025.00 × 1021.97 × 102
F19BA3.24 × 1028.22 × 1029.59 × 1029.86 × 1021.03 × 1038.88 × 1021.51 × 102+ (22, 3, 0)
ABC1.08 × 1031.17 × 1031.20 × 1031.23 × 1031.26 × 1031.19 × 1034.50 × 101+ (25, 0, 0)
BA_ABC3.56 × 1024.53 × 1025.00 × 1028.00 × 1029.32 × 1026.15 × 1021.96 × 102
F20BA3.03 × 1028.01 × 1029.67 × 1029.97 × 1021.09 × 1038.36 × 1022.59 × 102+ (20, 5, 0)
ABC1.02 × 1031.19 × 1031.23 × 1031.26 × 1031.28 × 1031.21 × 1035.94 × 101+ (25, 0, 0)
BA_ABC3.00 × 1023.56 × 1025.00 × 1028.00 × 1029.19 × 1025.28 × 1021.99 × 102
F21BA3.25 × 1029.22 × 1021.08 × 1031.16 × 1031.25 × 1039.50 × 1023.11 × 102+ (23, 2, 0)
ABC1.30 × 1031.37 × 1031.39 × 1031.42 × 1031.45 × 1031.39 × 1034.29 × 101+ (25, 0, 0)
BA_ABC2.00 × 1024.10 × 1024.11 × 1025.00 × 1029.00 × 1024.45 × 1021.57 × 102
F22BA7.67 × 1027.77 × 1028.74 × 1029.08 × 1029.37 × 1028.55 × 1026.24 × 101+ (25, 0, 0)
ABC9.85 × 1021.07 × 1031.12 × 1031.14 × 1031.30 × 1031.11 × 1037.10 × 101+ (25, 0, 0)
BA_ABC3.00 × 1027.58 × 1027.65 × 1027.71 × 1028.00 × 1027.29 × 1021.29 × 102
F23BA5.59 × 1027.21 × 1021.04 × 1031.22 × 1031.25 × 1039.67 × 1022.77 × 102+ (25, 0, 0)
ABC1.28 × 1031.36 × 1031.39 × 1031.41 × 1031.47 × 1031.39 × 1034.28 × 101+ (25, 0, 0)
BA_ABC4.25 × 1025.48 × 1025.48 × 1025.48 × 1025.59 × 1025.24 × 1025.04 × 101
F24BA2.05 × 1023.96 × 1024.09 × 1027.22 × 1021.24 × 1035.79 × 1023.10 × 102+ (25, 0, 0)
ABC1.09 × 1031.28 × 1031.32 × 1031.36 × 1031.38 × 1031.30 × 1037.27 × 101+ (25, 0, 0)
BA_ABC2.00 × 1022.00 × 1022.00 × 1022.00 × 1022.00 × 1022.00 × 1021.53 × 10−12
F25BA2.06 × 1023.95 × 1024.15 × 1025.63 × 1021.15 × 1035.34 × 1022.34 × 102− (3, 22, 0)
ABC1.34 × 1031.41 × 1031.42 × 1031.44 × 1031.48 × 1031.42 × 1032.80 × 101+ (25, 0, 0)
BA_ABC6.01 × 1026.23 × 1028.20 × 1028.24 × 1028.31 × 1027.41 × 1021.03 × 102
BAABCBA_ABC
Mean Rank1.962.921.12
Table 8. Comparison results of BA_ABC and other algorithms (D = 10).
FunctionAlgorithms
BA_ABCSBAISMBADENBAiBAILSSIWBAGBATBAPBALBAEBARBA
F102.43 × 10−50000000008.09 × 103
F27.77 × 10−51.90 × 10−704.38 × 10−700000007.88 × 103
F32.95 × 1031.06 × 1021.33 × 1033.46 × 1032.48 × 1011.67 × 1032.41 × 1021.43 × 1022.74 × 1035.11 × 1022.21 × 1021.66 × 107
F44.72 × 10−32.57 × 10202.54 × 1032.99 × 1031.92 × 10−43.81 × 1021.54 × 1029.79 × 1035.88 × 1031.77 × 1031.16 × 104
F51.47 × 10−15.25 × 10008.81 × 10−21.47 × 10−102.92 × 1016.26 × 1001.22 × 1022.56 × 1015.74 × 1003.60 × 103
F62.29 × 1009.94 × 10−107.97 × 10−11.69 × 10−35.64 × 1004.96 × 10−22.91 × 10−24.36 × 1022.20 × 10−21.44 × 1004.48 × 108
F71.27 × 1036.73 × 1004.85 × 1029.32 × 1004.59 × 1011.26 × 1037.04 × 1015.36 × 1018.74 × 1018.98 × 1015.42 × 1014.26 × 102
F82.02 × 1012.15 × 1002.03 × 1012.01 × 1012.00 × 1012.01 × 1012.01 × 1012.01 × 1012.02 × 1012.02 × 1012.02 × 1012.02 × 101
F902.17 × 10003.82 × 1012.48 × 1012.09 × 1013.94 × 1013.59 × 1017.35 × 1014.79 × 1015.96 × 1017.87 × 101
F101.99 × 1018.17 × 10−11.05 × 1016.64 × 1014.08 × 1011.97 × 1015.53 × 1014.58 × 1019.17 × 1015.22 × 1017.85 × 1019.88 × 101
F114.50 × 1001.21 × 1002.61 × 1007.07 × 1003.12 × 1005.54 × 1001.17 × 1001.30 × 1007.94 × 1001.44 × 1001.46 × 1008.66 × 100
F122.17 × 1021.75 × 1023.15 × 1023.69 × 1032.20 × 10−27.29 × 1017.54 × 1014.25 × 1012.82 × 1014.18 × 1017.40 × 1011.02 × 105
F132.03 × 10−11.52 × 1003.42 × 10−12.61 × 1002.56 × 1001.03 × 1001.68 × 1001.34 × 1001.43 × 1001.03 × 1001.99 × 1004.65 × 100
F144.80 × 1002.18 × 1004.62 × 1003.34 × 1003.92 × 1003.88 × 1003.97 × 1003.94 × 1004.03 × 1003.99 × 1004.24 × 1004.13 × 100
F1502.14 × 1022.36 × 1024.99 × 1022.83 × 1021.25 × 1026.52 × 1026.23 × 1026.62 × 1026.66 × 1027.31 × 1027.02 × 102
F161.29 × 1027.37 × 1011.14 × 1022.43 × 1021.93 × 1021.39 × 1024.58 × 1024.99 × 1024.80 × 1024.61 × 1024.38 × 1024.67 × 102
F171.53 × 1021.07 × 1021.14 × 1022.89 × 1021.81 × 1021.39 × 1022.03 × 1022.31 × 1024.73 × 1023.39 × 1023.76 × 1025.61 × 102
F185.00 × 1026.49 × 1023.88 × 1029.81 × 1029.53 × 1027.48 × 1021.15 × 1031.04 × 1031.08 × 1031.11 × 1031.07 × 1031.16 × 103
F196.15 × 1024.10 × 1023.97 × 1021.02 × 1038.97 × 1027.89 × 1029.77 × 1028.94 × 1021.00 × 1031.03 × 1031.09 × 1031.09 × 103
F205.28 × 1023.03 × 1025.14 × 1021.07 × 1038.05 × 1026.97 × 1028.80 × 1029.95 × 1021.01 × 1031.03 × 1031.08 × 1031.14 × 103
F214.45 × 1024.28 × 1025.00 × 1021.09 × 1031.23 × 1033.20 × 1021.29 × 1031.23 × 1031.29 × 1031.27 × 1031.29 × 1031.41 × 103
F227.29 × 1024.22 × 1027.29 × 1029.25 × 1028.54 × 1027.84 × 1028.26 × 1028.55 × 1029.10 × 1028.24 × 1029.82 × 1029.66 × 102
F235.24 × 1024.48 × 1025.60 × 1021.20 × 1031.23 × 1036.93 × 1021.42 × 1031.40 × 1031.39 × 1031.40 × 1031.42 × 1031.47 × 103
F242.00 × 1022.00 × 1022.00 × 1021.04 × 1039.49 × 1022.00 × 1021.34 × 1037.43 × 1029.76 × 1029.95 × 1027.60 × 1021.43 × 103
F257.41 × 1023.11 × 1024.65 × 1021.07 × 1039.46 × 1021.64 × 1038.14 × 1025.72 × 1029.82 × 1029.36 × 1025.43 × 1021.12 × 103
Wilcoxon Test
+ 76181512161518161522
- 17156811895782
= 14122112221
p value 0.026(-)0.085(≈)0.001(+)0.042(+)0.248(≈)0.037(+)0.092(≈)0.004(+)0.026(+)0.052(≈)1.15 × 10−4(+)
Friedman Test
Mean Rank5.222.784.167.445.244.786.925.748.667.588.0611.42
Table 9. CEC2010 benchmark functions.
Function | Name | Modality | Range
Separable functions
F1 | Shifted Elliptic Function | Unimodal | [−100, 100]
F2 | Shifted Rastrigin's Function | Multimodal | [−5, 5]
F3 | Shifted Ackley's Function | Multimodal | [−32, 32]
Single-group m-nonseparable functions
F4 | Single-group Shifted and m-rotated Elliptic Function | Unimodal | [−100, 100]
F5 | Single-group Shifted and m-rotated Rastrigin's Function | Multimodal | [−5, 5]
F6 | Single-group Shifted and m-rotated Ackley's Function | Multimodal | [−32, 32]
F7 | Single-group Shifted m-dimensional Schwefel's Function 1.2 | Unimodal | [−100, 100]
F8 | Single-group Shifted m-dimensional Rosenbrock's Function | Multimodal | [−100, 100]
D/2m-group m-nonseparable functions
F9 | D/2m-group Shifted and m-rotated Elliptic Function | Unimodal | [−100, 100]
F10 | D/2m-group Shifted and m-rotated Rastrigin's Function | Multimodal | [−5, 5]
F11 | D/2m-group Shifted and m-rotated Ackley's Function | Multimodal | [−32, 32]
F12 | D/2m-group Shifted m-dimensional Schwefel's Problem 1.2 | Unimodal | [−100, 100]
F13 | D/2m-group Shifted m-dimensional Rosenbrock's Function | Multimodal | [−100, 100]
D/m-group m-nonseparable functions
F14 | D/m-group Shifted and m-rotated Elliptic Function | Unimodal | [−100, 100]
F15 | D/m-group Shifted and m-rotated Rastrigin's Function | Multimodal | [−5, 5]
F16 | D/m-group Shifted and m-rotated Ackley's Function | Multimodal | [−32, 32]
F17 | D/m-group Shifted m-dimensional Schwefel's Problem 1.2 | Unimodal | [−100, 100]
F18 | D/m-group Shifted m-dimensional Rosenbrock's Function | Multimodal | [−100, 100]
Nonseparable functions
F19 | Shifted Schwefel's Problem 1.2 | Unimodal | [−100, 100]
F20 | Shifted Rosenbrock's Function | Multimodal | [−100, 100]
Table 10. CEC2010 test results of BA, ABC, and BA_ABC (D = 1000).
F1F2F3
BAABCBA_ABCBAABCBA_ABCBAABCBA_ABC
Best7.89 × 1092.34 × 10−121.64 × 10−111.35 × 1043.12 × 1013.89 × 1011.98 × 1015.14 × 10−71.42 × 10−6
Median2.74 × 10108.80 × 10−121.72 × 10−101.44 × 1043.97 × 1014.85 × 1012.04 × 1019.57 × 10−73.34 × 10−6
Worst3.23 × 10103.29 × 10−116.05 × 10−82.46 × 1044.72 × 1015.30 × 1012.14 × 1011.89 × 10−66.43 × 10−6
Mean2.63 × 10101.10 × 10−114.81 × 10−91.73 × 1043.95 × 1014.74 × 1012.06 × 1011.00 × 10−63.54 × 10−6
Std4.93 × 1097.82 × 10−121.46 × 10−84.47 × 1033.88 × 1003.58 × 1007.83 × 10−13.69 × 10−71.21 × 10−6
Test(+, −, =)+(25, 0, 0)−(0, 25, 0) +(25, 0, 0)−(4, 21, 0) +(25, 0, 0)−(0, 25, 0)
F4F5F6
BAABCBA_ABCBAABCBA_ABCBAABCBA_ABC
Best2.91 × 10113.58 × 10137.80 × 1092.83 × 1084.25 × 1082.78 × 1081.94 × 1071.97 × 1071.88 × 107
Median5.14 × 10114.57 × 10131.82 × 10103.76 × 1085.90 × 1083.71 × 1081.97 × 1072.00 × 1071.94 × 107
Worst8.87 × 10115.86 × 10134.07 × 10104.90 × 1087.23 × 1085.43 × 1082.08 × 1072.00 × 1071.98 × 107
Mean5.18 × 10114.63 × 10132.15 × 10103.74 × 1085.81 × 1083.81 × 1082.01 × 1071.99 × 1071.93 × 107
Std1.07 × 10116.46 × 10129.19 × 1096.04 × 1076.74 × 1077.12 × 1075.56 × 1058.69 × 1042.53 × 105
Test(+, −, =)+(25, 0, 0)+(25, 0, 0) ≈(14,11,0)+(25, 0, 0) +(24, 1, 0)+(25, 0, 0)
F7F8F9
BAABCBA_ABCBAABCBA_ABCBAABCBA_ABC
Best5.73 × 1062.97 × 10106.09 × 1045.79 × 1063.75 × 1055.93 × 1048.82 × 1095.83 × 1082.63 × 106
Median6.05 × 1064.02 × 10101.64 × 1056.30 × 1063.33 × 1063.06 × 1052.99 × 10106.58 × 1083.52 × 106
Worst5.41 × 1084.86 × 10102.97 × 1052.21 × 1082.03 × 1077.24 × 1074.13 × 10107.06 × 1087.11 × 106
Mean4.11 × 1073.97 × 10101.74 × 1053.73 × 1076.34 × 1061.06 × 1072.96 × 10106.57 × 1083.78 × 106
Std1.25 × 1085.64 × 1095.32 × 1047.21 × 1075.49 × 1061.93 × 1076.21 × 1093.33 × 1071.05 × 106
Test(+, −, =)+(25, 0, 0)+(25, 0, 0) +(19, 6, 0)≈(16, 9, 0) +(25, 0, 0)+(25, 0, 0)
F10F11F12
BAABCBA_ABCBAABCBA_ABCBAABCBA_ABC
Best1.35 × 1046.79 × 1034.21 × 1032.15 × 1022.01 × 1021.95 × 1022.04 × 1066.12 × 1052.19 × 10−1
Median 1.47 × 1047.27 × 1034.78 × 1032.34 × 1022.01 × 1021.96 × 1023.53 × 1066.62 × 1052.65 × 10−1
Worst2.45 × 1047.54 × 1037.48 × 1032.35 × 1022.02 × 1022.00 × 1024.01 × 1066.91 × 1053.16 × 10−1
Mean1.76 × 1047.23 × 1034.95 × 1032.27 × 1022.01 × 1021.96 × 1023.46 × 1066.65 × 1052.74 × 10−1
Std4.61 × 1031.92 × 1027.45 × 1028.74 × 1001.81 × 10−11.80 × 1003.68 × 1051.66 × 1042.37 × 10−2
Test(+, −, =)+(25, 0, 0)+(25, 0, 0) +(25, 0, 0)+(25, 0, 0) +(25, 0, 0)+(25, 0, 0)
F13F14F15
BAABCBA_ABCBAABCBA_ABCBAABCBA_ABC
Best4.8 × 1093.90 × 1024.27 × 1029.23 × 1091.32 × 1097.48 × 1069.42 × 1031.39 × 1048.74 × 103
Median 1.02 × 10115.11 × 1028.36 × 1023.07 × 10101.44 × 1099.64 × 1061.49 × 1041.46 × 1049.38 × 103
Worst1.33 × 10111.16 × 1032.97 × 1033.94 × 10101.54 × 1091.18 × 1072.49 × 1041.49 × 1041.06 × 104
Mean1.00 × 10115.48 × 1021.11 × 1033.10 × 10101.44 × 1099.59 × 1061.90 × 1041.45 × 1049.45 × 103
Std2.53 × 10101.75 × 1026.47 × 1025.93 × 1096.88 × 1071.07 × 1065.35 × 1032.70 × 1024.76 × 102
Test(+, −, =)+(25, 0, 0)−(2, 23, 0) +(25, 0, 0)+(25, 0, 0) +(24, 1, 0)+(25, 0, 0)
F16F17F18
BAABCBA_ABCBAABCBA_ABCBAABCBA_ABC
Best3.90 × 1024.03 × 1023.90 × 1022.69 × 1061.23 × 1061.15 × 1008.15 × 10101.36 × 1038.36 × 102
Median 3.96 × 1024.03 × 1023.91 × 1024.00 × 1061.30 × 1061.34 × 1008.79 × 10115.28 × 1031.05 × 104
Worst4.29 × 1024.04 × 1023.99 × 1025.44 × 1061.39 × 1061.63 × 1001.01 × 10121.54 × 1043.41 × 104
Mean4.09 × 1024.03 × 1023.92 × 1024.08 × 1061.33 × 1061.33 × 1008.58 × 10116.31 × 1031.38 × 104
Std1.61 × 1012.64 × 10−12.20 × 1005.57 × 1054.04 × 1041.27 × 10−11.77 × 10114.12 × 1031.07 × 104
Test(+, −, =)+(23, 2, 0)+(25, 0, 0) (25, 0, 0)+(25, 0, 0) +(25, 0, 0)−(7, 18, 0)
F19F20Mean Rank
BAABCBA_ABCBAABCBA_ABCBAABCBA_ABC
Best4.06 × 1066.87 × 1063.51 × 1041.50 × 10119.44 × 1002.46 × 1022.751.851.40
Median 5.70 × 1068.03 × 1064.71 × 1041.04 × 10122.47 × 1016.60 × 102
Worst7.51 × 1068.38 × 1066.60 × 1041.22 × 10127.11 × 1011.18 × 103
Mean5.86 × 1067.94 × 1064.72 × 1041.01 × 10122.80 × 1016.77 × 102
Std7.57 × 1053.73 × 1058.31 × 1031.98 × 10111.45 × 1012.26 × 102
Test(+, −, =)+(25, 0, 0)+(25, 0, 0) +(25, 0, 0)−(0, 25, 0)
Table 11. Comparison results of BA_ABC and other algorithms (D = 1000).
FunctionAlgorithms
BA_ABCAHDERSQPSOISCAμDSDEaEUS
F14.81 × 10−91.81 × 10−208.66 × 10−321.55 × 10112.05 × 1048.32 × 10−24
F24.74 × 1013.57 × 10−61.18 × 1011.64 × 1001.47 × 1020
F33.54 × 10−63.73 × 10−91.27 × 10−143.10 × 10−12.45 × 1001.90 × 10−12
F42.15 × 10102.33 × 10127.09 × 10111.37 × 10123.16 × 10112.84 × 1011
F53.81 × 1083.88 × 1085.39 × 1083.27 × 1081.11 × 1087.18 × 107
F61.93 × 1071.97 × 1071.94 × 1071.29 × 1051.50 × 1071.99 × 107
F71.74 × 1052.01 × 1067.50 × 1003.71 × 10105.53 × 1052.73 × 103
F81.06 × 1079.09 × 1058.78 × 1071.59 × 10134.08 × 1071.29 × 108
F93.78 × 1067.06 × 1071.75 × 1071.66 × 10117.16 × 1087.57 × 106
F104.95 × 1035.84 × 1035.33 × 1032.96 × 1034.63 × 1037.15 × 103
F111.96 × 1022.01 × 1021.98 × 1021.89 × 1021.98 × 1021.99 × 102
F122.74 × 10−13.99 × 1041.19 × 1021.33 × 1062.59 × 1053.28 × 10−1
F131.11 × 1031.64 × 1031.06 × 1036.30 × 10101.10 × 1031.09 × 103
F149.59 × 1061.71 × 1085.58 × 1072.29 × 1081.85 × 1091.71 × 107
F159.45 × 1031.09 × 1041.34 × 1041.64 × 1038.79 × 1031.42 × 104
F163.92 × 1023.98 × 1029.37 × 1023.18 × 1023.90 × 1023.98 × 102
F171.33 × 1002.00 × 1052.31 × 1033.27 × 1061.03 × 1063.98 × 100
F181.38 × 1044.77 × 1032.28 × 1031.41 × 1043.01 × 1042.97 × 103
F194.72 × 1045.84 × 1053.08 × 1065.90 × 1064.56 × 1069.44 × 103
F206.77 × 1021.21 × 1031.13 × 1031.59 × 10115.29 × 1034.51 × 102
Wilcoxon Test
+ 1514131411
56769
= 00000
p 0.009(+)0.012(+)0.025(+)0.038(+)0.455(≈)
Friedman Test
Mean Rank2.654.033.234.254.032.83
Table 12. Optimal solutions of pressure vessel design problem by BA_ABC.
x1x2x3x4f(x)
Decision variables0.77818780.384658640.32058199.98675885.3715
g1g2g3g4
Constraint values−6.0599 × 10−7−2.6680 × 10−7−0.4149−40.0133
BestMedianWorstMeanStd.
5885.37156008.13446248.70546017.595791.5592
Table 13. Comparison of BA_ABC with other algorithms for pressure vessel design optimization problem.
AlgorithmsWorstMeanBestStd
EBA [31]6370.776173.676059.71142.33
IAPSO [47]6090.536068.756059.7114.01
CGWO [48]6188.115783.585034.18254.50
WCA [49]6590.216198.615885.33213.04
MBA [50]6392.506200.645889.32160.34
PSO_GA [51]5885.485885.385885.330.05
ABC [52]5895.125887.555885.402.74
BA [53]6318.956179.136059.71137.22
CSDE [54]1.52 × 10226261.416059.711.44 × 10−8
TLNNA [55]6114.955935.425885.3366.28
BA_ABC6248.706017.595885.3791.55
Table 14. Optimal solutions of tension/compression spring design problem by BA_ABC.
x1x2x3f(x)
Decision variables0.0540070.4177478.395650.012667
g1g2g3g4
Constraint values−0.002253−0.115968−4.177092−0.685497
BestMedianWorstMeanStd.
0.0126670.0126880.012733520.012687551.46 × 10−5
Table 15. Comparison of BA_ABC with other algorithms for the tension/compression spring design optimization problem (NA = Not Available).
AlgorithmsWorstMeanBestStd
EBA [31]NANA0.01267NA
IAPSO [47]0.017820.013670.012661.57 × 10−3
CGWO [48]0.012170.012170.011951.04 × 10−5
WCA [49]0.012950.012740.012668.06 × 10−5
MBA [50]0.012900.012710.012666.30 × 10−5
ABC [52]0.012710.012660.012669.42 × 10−6
BA [53]0.016890.013500.012671.42 × 10−3
CSDE [54]0.012660.012690.012663.00 × 10−5
TLNNA [55]0.012830.012680.012663.24 × 10−5
BA_ABC0.012730.012680.012661.46 × 10−5
Table 16. Optimal solutions of gear train design problem by BA_ABC.
x1x2x3x4f(x)
Decision variables12.0019.752351.715331.76702.0732 × 10−14
BestMedianWorstMeanStd.
2.07 × 10−145.14 × 10−117.64 × 10−97.42 × 10−101.92 × 10−9
Table 17. Comparison of BA_ABC with other algorithms for the gear train design optimization problem.
AlgorithmsWorstMeanBestStd
IAPSO [47]1.82 × 10−8 5.49 × 10−92.70 × 10−126.36 × 10−9
CGWO [48]2.71 × 10−10 7.09 × 10−112.83 × 10−131.02 × 10−10
MBA [50]2.06 × 10−82.47 × 10−92.70 × 10−123.94 × 10−9
BA5.11 × 10−27.89 × 10−31.31 × 10−81.38 × 10−2
ABC1.02 × 10−91.54 × 10−101.47 × 10−132.50 × 10−10
BA_ABC7.64 × 10−97.42 × 10−102.07 × 10−141.92 × 10−9
Table 18. Contribution of BA and ABC to BA_ABC solutions in CEC2005.
FunctionBAABCBA+BA-ABC+ABC-C+C-
F1152281319,79724244,978729824,872
F215061319,3378739,81336839,782
F315053019,420639,89465739,493
F415127121,0091442,5467436,086
F515137520,9059042,47012736,033
F615544026,16028452,91623619,964
F715587225,72818353,01762119,579
F815621627,18212656,2669616,114
F9155371522,88532152,879427815,922
F1015759928,66139558,12584911,371
F1115629327,63710855,7524616,164
F12151023533,09818566,48200
F1315933931,58136263,4787441,66
F1415135820,92213142,42937935,781
F15159341828,50246363,3775903650
F1615616327,76715955,7017416,136
F171595331,8671163,82924238
F1815467024,60016250,37863323,557
F1915632127,60923555,62516516,045
F20151122520,05513442,426195234,208
F2115734628,91432758,1937512,145
F2215517326,42820352,99635719,843
F2315411425,1564050,5009624,094
F2415875129,83929560,88510537177
F256154827,8822355,837016,210
Table 19. Contribution of BA and ABC to BA_ABC solutions in CEC2010.
FunctionBAABCBA+BA-ABC+ABC-C+C-
F11595155954,84547101,915,290707119,293
F2115497639,50318,1931,261,80770931,072,907
F312132329997,67189191,991,08100
F41523595676,40521251,357,87526,946933,054
F514113111996,88962431,993,75700
F61523739676,26117621,358,2382755957,245
F71524414684,47416691,349,4434699955,301
F81513737636,26211521,278,84967881,073,212
F91506718593,2827051,199,295152,8421,047,158
F10115767639,23311,5031,268,49758731,074,127
F119152091957,90811,0751,908,9261842118,158
F121508181591,8193561,199,64444,1171,155,883
F1313125294994,70691391,990,86100
F141507295592,7055011,199,499175,9311,024,069
F151553532796,46840441,595,9561946598,054
F161562989837,01140051,675,9951925478,075
F171507078592,9233851,199,614108,3361,091,664
F181597395952,60561251,913,8751472118,528
F191506052593,949341,199,965171,2821,028,718
F201537752712,24824291,437,5714593835,407
Table 20. The complexity of the BA, ABC, and BA_ABC algorithms.
CEC2020 (Function 1), T0 = 0.2901
Algorithm | D | T1 | T2 | (T2 − T1)/T0
BA | 5 | 1.5922 | 13.0710 | 39.5684
BA | 10 | 1.6110 | 13.1026 | 39.6125
BA | 15 | 1.9029 | 13.6737 | 40.5749
ABC | 5 | 1.5922 | 11.1987 | 33.1144
ABC | 10 | 1.6110 | 11.9764 | 35.7304
ABC | 15 | 1.9029 | 12.4667 | 36.4143
BA_ABC | 5 | 1.5922 | 13.0957 | 39.6535
BA_ABC | 10 | 1.6110 | 13.3799 | 40.5684
BA_ABC | 15 | 1.9029 | 14.2354 | 42.5112
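As a rough illustration of how the (T2 − T1)/T0 values in Table 20 can be obtained, the sketch below follows the usual CEC-style complexity protocol: T0 times a fixed baseline loop, T1 times a fixed budget of plain function evaluations, and T2 would be the run time of the full algorithm for the same evaluation budget. The baseline loop, the evaluation budget, and the helper names are assumptions for illustration, not the authors' exact measurement code.

```python
import time
import numpy as np

def measure_t0(n_ops=1_000_000):
    """T0: time for a fixed baseline arithmetic loop (placeholder loop,
    not the exact routine prescribed by the CEC protocol)."""
    start = time.perf_counter()
    x = 0.55
    for _ in range(n_ops):
        x = (x * 1.0000001 + 0.1) / 1.1   # simple floating-point work
    return time.perf_counter() - start

def measure_t1(func, dim, n_evals=200_000):
    """T1: time to evaluate the benchmark function n_evals times."""
    points = np.random.uniform(-100.0, 100.0, size=(n_evals, dim))
    start = time.perf_counter()
    for p in points:
        func(p)
    return time.perf_counter() - start

# T2 would be the wall-clock time of a full BA_ABC run using the same
# number of function evaluations; the reported complexity is then
# (t2 - t1) / t0, comparable to the values listed in Table 20.
```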
