1. Introduction
Nature has long inspired human discoveries, and with advances in technology, researchers have developed new optimization algorithms based on natural phenomena. These stochastic algorithms, also known as meta-heuristics, have gained popularity since they can be easily formulated and modeled thanks to their structures that mimic evolutionary processes, animal or plant behavior, or physical events. Another reason for the widespread adoption of meta-heuristic algorithms is their ability to avoid local optima entrapment and premature convergence, issues that often affect traditional methods [1,2,3,4].
Typically, meta-heuristics commence by generating an initial population of potential solutions through random means. Once this initial population is formed, they evaluate the fitness of individuals using a fitness function. Subsequently, they embark on their nature-inspired search procedures. This iterative process continues until a termination criterion is met: either the optimum solution is discovered or the maximum number of generations is reached.
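As an illustration of this generic template, the sketch below (Python; the fitness function, bounds, and the Gaussian perturbation used as a stand-in for the nature-inspired search step are all illustrative assumptions, not part of any algorithm in this paper) shows the skeleton shared by the algorithms discussed here.

```python
import numpy as np

def metaheuristic_skeleton(fitness, lb, ub, pop_size=100, max_gens=100):
    """Generic meta-heuristic template: random initialization, fitness
    evaluation, and an iterative search step until termination."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    rng = np.random.default_rng()
    # Random initial population within the search bounds
    pop = rng.uniform(lb, ub, size=(pop_size, lb.size))
    fit = np.apply_along_axis(fitness, 1, pop)
    for _ in range(max_gens):
        # Placeholder for the algorithm-specific search operators
        # (selection/recombination/mutation, swarm moves, etc.)
        candidates = np.clip(pop + rng.normal(0, 0.1, pop.shape), lb, ub)
        cand_fit = np.apply_along_axis(fitness, 1, candidates)
        improved = cand_fit < fit
        pop[improved], fit[improved] = candidates[improved], cand_fit[improved]
    best = np.argmin(fit)
    return pop[best], fit[best]
```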
One of the most widely used classes of meta-heuristic algorithms in the literature is Evolutionary Algorithms (EAs). EAs generally include the following intelligent strategies: selection, recombination, and mutation. Some of the best-known EAs are Genetic Algorithms (GAs) [5], Differential Evolution (DE) [6], Memetic Algorithms (MAs) [7], and Biogeography-Based Optimizer (BBO) [8].
The fact that living creatures in nature generally form flocks has led researchers to develop Swarm-based Algorithms (SBAs). Among these algorithms, perhaps the most well-known is Particle Swarm Optimization (PSO), which simulates the social behavior of bird flocks and fish swarms [9]. Each particle (potential solution) in PSO also possesses a velocity to explore the search space. Other renowned SBAs include Ant Colony Optimization (ACO) [10], Cuckoo Search (CS) [11], Artificial Bee Colony (ABC) [12], Firefly Algorithm (FFA) [13], and Grey Wolf Optimizer (GWO) [14].
Furthermore, fundamental phenomena from sciences such as physics, chemistry, and astronomy have inspired researchers to develop new meta-heuristic algorithms. Some common examples include Simulated Annealing (SA) [15], Big-Bang Big-Crunch (BBBC) [16], Gravitational Search (GS) [17], Black Hole (BH) [18], and the Momentum Search Algorithm (MSA) [19].
Apart from the meta-heuristic algorithms mentioned above, there are also algorithms inspired by human behavior, such as Tabu Search (TS) [20], Harmony Search (HS) [21], Imperialist Competitive Algorithm (ICA) [22], Teaching Learning Based Optimization (TLBO) [23], and Fireworks Algorithm (FWA) [24].
In meta-heuristic algorithms, the idea that the population will improve by selecting the best individuals is prevalent. However, biasing the search only toward good individuals can cause the algorithm to get stuck in local optima and fail to reach the global optimum, especially on challenging test functions. One way to avoid this situation is to adapt bipolar behavior to the algorithm. Studies in this field have shown that adopting bipolar behavior increases the performance of an algorithm [25,26]. Thus, the idea of testing bipolar behavior on a new algorithm was born, and the idea of adding bipolar behavior to the Roosters Algorithm (RA) [27], which is the basis of the algorithms introduced in this paper, was formed.
However, subsequent tests on RA showed that, under such challenging conditions, RA cannot deliver the expected results; therefore, it was determined that the algorithm required further improvement. The main purpose of this paper is to improve RA and to present to the literature a new meta-heuristic algorithm that can provide much more successful results than both RA and the meta-heuristic algorithms commonly used in optimization problems.
While selecting the meta-heuristic studies used in this paper, the algorithms most commonly used in the literature were taken into account. Accordingly, a comprehensive study on meta-heuristic algorithms shows that PSO and GA are the most cited publications (cited approximately 75,000 and 70,000 times by 2022, respectively) [28]. Additionally, that study shows that DE, CS, and GWO are the other most cited algorithms. Therefore, to test the performance of the proposed algorithm, BIRA was evaluated by comparing it with five of the most cited meta-heuristic algorithms: Standard GA (SGA), DE, PSO, CS, and GWO.
The contribution of the paper to the literature is as follows:
Besides improving the previously proposed algorithm, RA, new swarm-based meta-heuristic algorithms called IRA, BRA, and BIRA are introduced to the literature.
The evaluation involved 20 well-known benchmarks, 11 CEC2014 test functions, 17 CEC2018 test functions (which are also CEC2020 test functions), and three real-world engineering problems.
The significance of the results was assessed using Friedman and Wilcoxon Signed Rank statistical tests.
BIRA’s Matlab code will be made publicly available so that the accuracy of the results described in this paper can be verified and researchers can use this new algorithm in their studies.
The paper is organized as follows:
Section 2 describes how the new versions of RA were developed and introduces the proposed algorithms, IRA, BRA, and BIRA. Section 3 presents the meta-heuristic algorithms that were employed in this study. Section 4 describes the utilized test functions, statistical tests, and engineering problems, and reports the obtained results. Finally, the paper is concluded in Section 5.
2. The New Versions of Roosters Algorithm
There are two key features that distinguish the new proposed algorithms from RA: bipolar behavior and dance technique.
The majority of meta-heuristic algorithms widely used in the literature generally allow only the best individuals to be selected from the population. However, this causes individuals in a population to become uniform in subsequent generations, meaning that genetic diversity is lost. Unlike other meta-heuristic algorithms, algorithms with bipolar behavior have the advantage of increasing genetic diversity by allowing the worst individuals to mate at a certain rate, thus overcoming problems such as premature convergence to local optima.
In addition, the dance technique designed to improve the RA algorithm enables the algorithm to investigate the search space more successfully and assists it in finding decent points in the space.
2.1. The Inspired Algorithm (RA)
Female organisms can engage in mating with multiple males [29]. In the case of these polygamous species, a male’s reproductive success is influenced by the quality of his sperm, a concept rooted in the “sperm competition” theory [29]. Chickens fall into this category of creatures and have the capacity to mate with multiple roosters, particularly those that are considered attractive.
Similar to humans, the courtship process between roosters and chickens commences with flirtatious behaviors [27]. Typically, a rooster offers food to a chicken and showcases his dancing prowess to capture her interest. If the chicken finds him appealing, mating may occur. Nevertheless, there are instances where a rooster attempts to mate with a chicken without her consent.
On the other hand, the age of the rooster plays a significant role in the mating dynamics [27]. For instance, chickens tend to avoid mating with older roosters due to their limited ability to fertilize eggs effectively.
The stronger rooster can mate with more chickens. However, since a chicken can mate with more than one rooster, the sperm obtained from different roosters in the chicken’s pouch will compete. At this stage, the velocity of the sperm cells, estimated by the VSL value, plays a key role in determining which one will win this competition [29]: the sperm of the rooster with the higher VSL is more likely to outcompete his rivals in the race to fertilize the egg. Additionally, if a chicken is under stress or dealing with a health issue, it directly impacts the quality of the eggs she produces [27], potentially causing disruptions in the egg’s DNA structure.
Inspired by the mating behavior of roosters and chickens, the RA algorithm was introduced [27]. RA could only give successful results with small populations; when the population size was increased, it fell behind its counterparts and even got stuck at local values. These disadvantages led to the idea that the algorithm should be improved, and by adding bipolar behavior to the improved algorithm, a new swarm-based meta-heuristic algorithm called BIRA emerged.
2.2. Improved Roosters Algorithm (IRA)
Although the RA algorithm is based on the mating behavior of roosters and chickens, it does not include some behavioral models. For instance, the principle of the rooster influencing the chicken by dancing is not modeled in the RA algorithm. Instead, the algorithm compares the fitness value of the chicken with the fitness value of the rooster who wants to mate. As a result of this comparison, if the value of the rooster is better than the chicken’s, it is assumed that the rooster impressed the chicken.
The idea that adding a more advanced mechanism instead of the above-mentioned method could positively affect the performance of RA led to the modeling of the Improved Roosters Algorithm (IRA). In IRA, in order to impress the chicken, a new dance model based on the Arithmetic Spiral (AS) [30] has been developed for a rooster. The equation of AS in polar coordinates is

$$r = a + b\theta,$$

where r is the length of the radius, θ is the amount of the rotation (angular position of the radius), and “a” and “b” are real numbers. Thanks to the “a” parameter, the spiral is rotated by moving it outwards from the center point, while the distance between the loops is controlled through the “b” parameter. In this way, successive turns of the spiral occur at fixed intervals (the distance between successive turns of the spiral equals 2πb if θ is measured in radians). This is why the spiral is called arithmetic.
The location of a point obtained using AS in the Cartesian plane is as follows:

$$x(t) = v t \cos(\omega t), \qquad y(t) = v t \sin(\omega t),$$

where v is the velocity at time t and ω is the angular velocity, with an arbitrary starting point.
In IRA, the dance method based on AS, defined by the above equations (Equations (1)–(3)), has been developed. Thanks to this method, the algorithm places the chicken at the center point and allows the rooster trying to impress her to dance around her according to AS, as shown in Figure 1. The values a and b, which adjust the rotation of the spiral and the distance between its arms, are determined automatically by Matlab functions to create a two-turn spiral depending on the distance between the chicken and the rooster. In addition, for the full implementation of AS, the values v and ω are set to the position of the rooster and the angle change during the two-turn cycle, respectively. While dancing, if the rooster finds a better position than the chicken’s, this indicates that she has been impressed by him.
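As an illustration of how such a dance step might be realized, the following sketch samples candidate positions on a two-turn Archimedean spiral centered at the chicken and keeps the best one visited. It is an illustrative Python sketch only: the paper’s implementation is in Matlab, and the spiral parameters, the number of sampled steps, the two-dimensional search space, and the minimization convention are assumptions made here.

```python
import numpy as np

def spiral_dance(rooster, chicken, fitness, turns=2, steps=50):
    """Rooster dances around the chicken on an Archimedean spiral
    (r = a + b*theta) and keeps the best position it visits.
    Assumes a 2-D search space and a minimization problem."""
    rooster, chicken = np.asarray(rooster, float), np.asarray(chicken, float)
    dist = np.linalg.norm(rooster - chicken)
    a, b = 0.0, dist / (2 * np.pi * turns)   # assumed: two turns that reach the rooster
    best_pos, best_fit = rooster.copy(), fitness(rooster)
    for theta in np.linspace(0.0, 2 * np.pi * turns, steps):
        r = a + b * theta
        cand = chicken + r * np.array([np.cos(theta), np.sin(theta)])
        f = fitness(cand)
        if f < best_fit:                     # a better position than the chicken's
            best_pos, best_fit = cand, f
    return best_pos, best_fit
```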
2.3. Bipolar Roosters Algorithm (BRA)
The bipolar behavioral system is a method that allows the worst individual, as well as the best individual, in the population to be selected at higher rates than other methods. Our previous studies, mentioned in Section 1, have shown that adapting this system to an algorithm can increase its performance and prevent it from getting stuck in local minima. This led to the idea of adding bipolar behavior to RA by applying the adaptation process mentioned above, which resulted in the formation of the Bipolar Roosters Algorithm (BRA).
In fact, the BRA algorithm behaves almost exactly like RA. The only difference is that the selection process in BRA exhibits bipolar behavior: depending on the individual’s current mood (which is bipolar), the individual mates with either the best mate candidate or the worst mate candidate.
The current mood of the individual is decided through an experimentally determined value, which we call the bipolarity value [25]. In this study, based on the tests we performed in our previous studies, the bipolarity value was accepted as 0.25.
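A minimal sketch of this bipolar selection step is given below (Python; the candidate representation, the random-number draw, and the minimization convention are illustrative assumptions): the worst candidate is chosen whenever a uniform draw falls below the bipolarity value of 0.25, and the best candidate otherwise.

```python
import numpy as np

def bipolar_select(candidates, fitness_values, bipolarity=0.25, rng=None):
    """Return the worst mate candidate with probability `bipolarity`,
    otherwise the best one (minimization assumed)."""
    rng = rng or np.random.default_rng()
    order = np.argsort(fitness_values)          # best (lowest fitness) first
    if rng.random() < bipolarity:               # bipolar mood: pick the worst
        return candidates[order[-1]]
    return candidates[order[0]]                 # normal mood: pick the best
```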
2.4. Bipolar Improved Roosters Algorithm (BIRA)
Although the dancing mechanism in IRA was successful in scanning the search space, it could not provide satisfactory results for some challenging test functions (as will be shown in Section 4) due to getting stuck in local minima. The bipolar behavior of BRA helped to increase the performance of the algorithm and, therefore, much better results were obtained compared to IRA. However, BRA could not explore the search space as thoroughly as IRA, which limited its performance, and again the desired results were not obtained.
By taking into account the results acquired from the first two versions, the above-mentioned decent aspects of IRA and BRA were used and the hybrid version of these two algorithms, Bipolar Improved Roosters Algorithm (BIRA), was created (see Algorithm 1).
First of all, the algorithm specifies the input values: the bipolarity value and the number of roosters that want to mate with the same chicken, called the “number of cages”. As all other meta-heuristic algorithms do, the BIRA algorithm starts its process by creating an initial population. Individuals in the population are determined to be chickens or roosters based on their fitness values; at this stage, the top 50% of individuals in the population are considered chickens. Then, BIRA randomly selects a certain number of roosters based on the cage size. Afterwards, each chicken in the population chooses a mate among the roosters in the cage. During this process, depending on the chicken’s current mood, the chicken can choose either the best candidate or the worst candidate as its mate.
Furthermore, the algorithm also permits a chicken to mate with more than one rooster. In that case, in order to decide which rooster is going to fertilize the egg, a competition between the sperms of the roosters occurs based on their VSL values, which are actually fitness values. As a result of this rivalry, the sperm with the higher VSL value gets the chance to fertilize the egg. Finally, the algorithm identifies the gender of the offspring in the same way that gender is decided at the beginning of the algorithm.
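Since the VSL values are simply the roosters’ fitness values, the winner of the sperm competition can be determined by a single comparison; a minimal sketch (Python; interpreting the best fitness value, i.e., the lowest value under minimization, as the highest VSL is an assumption made here):

```python
def sperm_competition(roosters, fitness):
    """Among the roosters that mated with the same chicken, return the one
    with the best fitness value (interpreted here as the highest VSL).
    Minimization is assumed, so 'best' means the lowest fitness value."""
    return min(roosters, key=fitness)
```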
The results that can be obtained by the algorithm are restricted by the given limit values. However, since the results are real-valued, they can go beyond the limit values in some tests, which can cause the algorithm to produce invalid results. For this reason, a repair stage was added to the end of the algorithm: whenever an individual goes outside a limit value, the corresponding variable of this individual is set to that limit value.
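This repair step amounts to a simple clipping operation; a minimal sketch consistent with that description (Python/NumPy, assuming vectors of lower and upper limits):

```python
import numpy as np

def repair(individual, lower, upper):
    """Clip every variable that leaves the search bounds back to the
    corresponding limit value."""
    return np.clip(individual, lower, upper)
```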
Algorithm 1 Bipolar Improved Roosters Algorithm
Identify bipolarity value and number of cages (n)
Create the initial population
Initialize chickens and roosters
while the termination criterion is not met do
    for each chicken in the population do
        Randomly identify roosters in the cage
        Determine the best and the worst roosters in the cage
        The best rooster dances around the chicken
        if the best rooster impresses the chicken && the chicken's mood is not bipolar then
            The mate is the best one
        else
            The mate is the worst one
        end if
        if the chicken has more than one male then
            Calculate VSL values of all sperms
            Allow sperm competitions
            The winning sperm fertilizes the egg
        else
            The sperm of the single male fertilizes the egg
        end if
        return the offspring
    end for
    Identify males and females in the offspring
end while
3. Utilized Meta-Heuristic Algorithms
3.1. Standard Genetic Algorithm
Holland introduced Genetic Algorithms (GAs), inspired by Darwin’s theory of evolution, in order to establish a novel optimization method [1]. The success of GAs in navigating search spaces [5] has inspired numerous researchers, solidifying GAs as frequently employed algorithms in optimization problems.
The GA initiates its stochastic procedure by randomly generating the initial population, as shown in Algorithm 2. Subsequently, the evaluation process commences through a fitness function, computing the fitness value for each individual in the population. Following this, the bio-inspired search proceeds with the selection, crossover (also known as recombination), and mutation steps.
Selection methods typically favor the best individuals in the population (those having the highest fitness value) by allowing only them to survive to the next generation and to mate.
After a selection method chooses adequate individuals for mating, the crossover step commences. In this step, selected individuals are combined to yield offspring superior to their parents [5]. Participation in the crossover step is determined by the crossover probability value (Pc).
The mutation step aims to preserve genetic diversity and prevent convergence to local minima [5]. Similar to the crossover step, the decision for an individual to undergo mutation is dictated by the mutation probability value (Pm). By altering the genetic structures of individuals, the mutation step ensures that individuals become different from their previous states.
In the last stage of the algorithm, a new generation is obtained by mixing the individuals in the population with the newly generated individuals in a certain ratio. This process repeats, with the algorithm returning to the selection step until either the optimal solution is found or the maximum number of generations is reached.
The ST method is one of the most common selection methods in GAs due to its advantages [31]. GAs utilizing ST as the selection method are commonly referred to as Standard Genetic Algorithms (SGAs) in the literature [32].
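A compact sketch of one SGA generation is given below (Python; the real-coded blend crossover, the Gaussian mutation, the probability values, and the simple tournament used as a stand-in for the ST operator are all illustrative assumptions, since the paper only specifies that ST selection and probability-driven crossover and mutation are applied).

```python
import numpy as np

def sga_generation(pop, fitness, pc=0.8, pm=0.05, tour=2, rng=None):
    """One generation of a real-coded GA: tournament-style selection,
    probabilistic crossover and mutation, then offspring assembly."""
    rng = rng or np.random.default_rng()
    n, dim = pop.shape
    fit = np.apply_along_axis(fitness, 1, pop)

    def select():
        idx = rng.choice(n, size=tour, replace=False)
        return pop[idx[np.argmin(fit[idx])]]         # best of the tournament

    offspring = []
    for _ in range(n):
        parent = select()
        if rng.random() < pc:                         # crossover step
            partner = select()
            alpha = rng.random()
            child = alpha * parent + (1 - alpha) * partner
        else:                                         # saved for the next generation
            child = parent.copy()
        if rng.random() < pm:                         # mutation step
            child = child + rng.normal(0, 0.1, dim)
        offspring.append(child)
    return np.array(offspring)
```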
Algorithm 2 Standard Genetic Algorithm
Population size = N
Randomly create the initial population
Evaluate the fitness value of individuals in the population
while the termination criterion is not met do
    % Selection and Crossover
    for k = 1 to N do
        Choose an individual i via ST
        if rand < Pc then
            i attends Crossover
            Randomly choose a partner for individual i via ST
            Mate them
        else
            i is saved for the next generation
        end if
    end for
    % Mutation
    for k = 1 to N do
        Randomly choose individual j
        if rand < Pm then
            j attends Mutation
            Change the genetic structure of j
        else
            j passes the Mutation step
        end if
    end for
    Combine the new individuals and the current population members
    Evaluate the fitness value of each individual
end while
3.2. Differential Evolution
Another widely recognized EA employed in optimization problems is the DE algorithm [6], which is a population-based stochastic algorithm.
Similar to GAs, DE begins its procedure by creating the first (initial) population. Unlike GAs, DE advances to the mutation step instead of proceeding with the selection step. In this step, it generates a vector, known as the donor vector, for each individual in the population. The donor vector is composed using three randomly chosen individuals from the population. In the mutation step, the scaling factor F, a crucial parameter controlling differential variation [33], plays a significant role. It is essential that F falls within the range [0, 2] [6].
Following the creation of the donor vector, the recombination (crossover) step commences. This step determines whether the individual or its donor vector is sent to the selection step, resulting in the creation of a trial vector. The crossover rate value CR guides this decision-making process; it is recommended that CR take a value in the range [0, 1] [6].
Finally, the selection process is carried out by comparing the fitness value of each individual with the trial vector. Whichever of these two compared values is better, the individual continues to the next generation with that value (see Algorithm 3).
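A hedged sketch of one DE generation in the DE/rand/1/bin form described above is shown below (Python; the F and CR values are placeholders rather than the settings in Table 4, and boundary handling is omitted for brevity).

```python
import numpy as np

def de_generation(pop, fitness, F=0.5, CR=0.9, rng=None):
    """One DE/rand/1/bin generation: build a donor vector from three random
    individuals, apply binomial crossover into a trial vector, then greedy selection."""
    rng = rng or np.random.default_rng()
    n, dim = pop.shape
    fit = np.apply_along_axis(fitness, 1, pop)
    new_pop = pop.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        donor = pop[r1] + F * (pop[r2] - pop[r3])        # mutation (donor vector)
        j_rand = rng.integers(dim)
        mask = rng.random(dim) < CR
        mask[j_rand] = True                              # at least one gene from the donor
        trial = np.where(mask, donor, pop[i])            # crossover (trial vector)
        if fitness(trial) <= fit[i]:                     # selection
            new_pop[i] = trial
    return new_pop
```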
Algorithm 3 Differential Evolution
Population size = N; Specify CR (crossover rate) and F (scaling factor) values
Randomly create the initial population
Evaluate the fitness value of individuals in the population
while the termination criterion is not met do
    for each individual xi in the population do
        Randomly choose three individuals xr1, xr2, xr3, where r1 ≠ r2 ≠ r3 ≠ i
        Generate a random integer jrand in [1, D], where D is the dimension
        % Trial Vector
        for j = 1 to D do
            if rand < CR or j = jrand then
                uij = xr1,j + F (xr2,j − xr3,j)
            else
                uij = xij
            end if
        end for
        Replace xi with ui if ui is better than xi
    end for
    Evaluate the fitness value of individuals in the new population
end while
3.3. Particle Swarm Optimization
Originally, PSO was not conceptualized for optimization; rather, its inception aimed to emulate the collective behavior of bird flocks and fish swarms in nature [9]. Upon examining how PSO behaved, it became clear that the algorithm was actually performing optimization.
PSO begins its process, shown in Algorithm 4, by creating an initial population (swarm) in which each member (particle) has a velocity to explore the search space. Two significant positions, the personal best position (pbest) and the global best position (gbest), also directly affect the movement of particles; pbest, gbest, or both are updated whenever a particle finds a better position than before.
The velocity vector for the ith individual can be computed as in Equation (4):

$$v_i(t+1) = w\,v_i(t) + c_1 r_1\,(pbest_i - x_i(t)) + c_2 r_2\,(gbest - x_i(t)),$$

where c1 and c2 are acceleration coefficients, w is the inertia weight, r1 and r2 are randomly selected real values from the range [0, 1], and x_i(t) is the position of the ith individual at time t.
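A minimal sketch of the velocity and position update in Equation (4) is given below (Python; the inertia weight and acceleration coefficients are placeholders rather than the exact values listed in Table 4).

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=None):
    """Update one particle's velocity and position according to the standard
    PSO rule: inertia + cognitive (pbest) pull + social (gbest) pull."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```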
Algorithm 4 Particle Swarm Optimization
Number of particles in the swarm = N
Assign random values to each particle in the swarm
Evaluate the fitness value of particles in the swarm
Specify c1 and c2 values
while the termination criterion is not met do
    for each particle i in the swarm do
        Determine the velocity of particle i
        Determine the position of particle i
        Evaluate the fitness value of particle i, f(xi)
        % For a minimization problem
        if f(xi) < f(pbesti) then
            Assign xi as pbesti
            if f(xi) < f(gbest) then
                Assign xi as gbest
            end if
        end if
    end for
    Determine the velocity of particle i
    Determine the position of particle i
end while
3.4. Cuckoo Search
CS is a swarm-based optimization algorithm that was inspired by the behavior of cuckoo birds [11]. In the CS algorithm, potential solutions are associated with cuckoo eggs. Cuckoos typically deposit their fertilized eggs in the nests of other cuckoos, hoping that their offspring will be raised by surrogate parents. There are instances when cuckoos realize that the eggs in their nests do not belong to them, leading to either the removal of foreign eggs from the nests or the complete abandonment of the nests.
Lévy flights are stochastic walks where both direction and step lengths are determined by the Lévy distribution. These flights are observed in various animals and insects and are characterized by sequences of straight flights followed by sudden 90-degree turns. Compared to standard random walks, Lévy flights are more efficient for exploring large-scale search areas. This increased efficiency is mainly attributed to the fact that the variances of Lévy flights increase much more rapidly than those of normal random walks [11].
The algorithm, shown in Algorithm 5, is governed by three fundamental rules:
Each cuckoo lays one egg at a time and places it in a randomly selected nest.
Nests with high-quality eggs are preserved and passed on to the next generations.
There is a fixed number of available host nests, and the likelihood of a cuckoo’s egg being discovered by the host bird is determined by a probability parameter, pa, within the range of [0, 1].
In such cases, the host bird has the option to either discard the egg or abandon the nest entirely and construct a brand-new one.
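One common way to generate the Lévy-flight steps mentioned above is Mantegna’s algorithm; a hedged sketch is given below (Python; β = 1.5 is a typical choice and the step-size factor α is an assumption here, not a value taken from this paper).

```python
import numpy as np
from scipy.special import gamma

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Lévy-distributed step via Mantegna's algorithm."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * np.sin(np.pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_move(nest, best_nest, alpha=0.01, rng=None):
    """Propose a new solution around `nest`, biased toward the best nest."""
    rng = rng or np.random.default_rng()
    step = levy_step(nest.size, rng=rng)
    return nest + alpha * step * (nest - best_nest) * rng.normal(size=nest.size)
```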
Algorithm 5 Cuckoo Search
Objective function f(x), x = (x1, ..., xd)
Generate an initial population of n host nests xi, where i = 1, 2, ..., n
while t < MaxGeneration or stopping criterion do
    Randomly get a cuckoo by Lévy flights
    Evaluate the fitness value of the cuckoo, Fi
    Randomly select a nest (say, j) from the host nests
    % For a minimization problem
    if Fi < Fj then
        Replace j by the new solution
    end if
    A fraction (pa) of the worst nests are abandoned and new ones are built
    Keep the best solutions
    Rank the solutions and find the current best
end while
3.5. Grey Wolf Optimizer
GWO drew its inspiration from the hunting tactics observed in grey wolf packs and the intricate dynamics of their hierarchical structure [14]. Within this hierarchy, four distinct wolf roles emerge: alpha, beta, omega, and delta.
Alpha wolves assume leadership within the pack, tasked with pivotal decisions such as hunting strategies, choice of resting grounds, and determining the optimal time to rouse the pack. Betas play a supportive role, aiding alphas in decision-making processes. Deltas, however, encompass a diverse array of functions including scouting (alerting the pack to potential threats), sentineling (guarding the pack), elders (seasoned wolves, formerly alpha or beta), hunters (actively pursuing prey or procuring sustenance), and caretakers (attending to the needs of the infirm or injured). Omegas, often designated as scapegoats, are obliged to defer to the directives of all other ranks. It is upon this intricate social framework that GWO finds its conceptual basis.
GWO initiates its procedures by generating an initial population (referred to as a swarm) through randomization. Following this, it sets in motion its core parameters, A and C (see Algorithm 6). A comprises a vector of randomly selected values falling in the range of [−1, 1], encouraging search agents to diverge from their target. On the other hand, C is also a vector, populated with random values within the range of [0, 2], thereby imbuing GWO with an element of stochastic behavior throughout its runtime [14].
Subsequent to this initialization phase, the positions of grey wolves are established, thereby designating them as alpha, beta, or delta based on their proximity to the prey. This determination of positions extends to omega wolves as well, with their placements being contingent upon the positioning of the most proficient search agents.
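A hedged sketch of how a wolf’s new position can be computed from the three best search agents, following the standard GWO update, is shown below (Python; `a` denotes the coefficient that is commonly decreased linearly from 2 to 0 over the iterations, which is an assumption here rather than a detail stated in this text).

```python
import numpy as np

def gwo_position_update(x, alpha, beta, delta, a, rng=None):
    """Move wolf `x` toward the average of the positions suggested by the
    alpha, beta, and delta wolves (standard GWO position update)."""
    rng = rng or np.random.default_rng()
    proposals = []
    for leader in (alpha, beta, delta):
        A = 2 * a * rng.random(x.shape) - a      # exploration/exploitation coefficient
        C = 2 * rng.random(x.shape)              # stochastic emphasis coefficient
        D = np.abs(C * leader - x)               # distance to the leader
        proposals.append(leader - A * D)
    return np.mean(proposals, axis=0)
```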
Algorithm 6 Grey Wolf Optimizer (GWO)
Population size of grey wolves = N
Randomly create the initial population
Initialize A and C
Randomly initialize the current positions of wolves
Evaluate the fitness value of wolves in the population
The wolf with the best fitness value is assigned as Alpha
The wolf with the second best fitness value is assigned as Beta
The wolf with the third best fitness value is assigned as Delta
while the termination criterion is not met do
    for each wolf i in the population do
        Determine the position of wolf i via the positions of the best search agents
    end for
    Update the current positions of wolves
    Evaluate the fitness value of wolves in the population
    Update the wolves: Alpha, Beta, and Delta
end while
4. Tests, Results, and Discussions
4.1. Preliminary Information About Tests
Researchers working in the field of optimization initially used benchmarks to analyze the algorithms they modeled. With the advancement of technology, these test functions became insufficient; therefore, researchers started to create new test functions by shifting, rotating, expanding, and hybridizing the well-known benchmarks.
In this paper, twenty well-known benchmarks from the field of metaheuristics [34,35] (shown in Table 1), eleven CEC 2014 test functions [36] (see Table 2), and seventeen CEC 2018 test functions [37] (displayed in Table 3) were employed to examine the performance of the algorithms used in this article. The dimensions of the test functions in Table 1, Table 2, and Table 3 were chosen as 2, 10, and 30, respectively. Since the search space of all CEC functions (CEC2014 and CEC2018) is [−100, 100], there is no need to add this information to the tables separately.
The functions used in the CEC 2020 test suite were obtained by selecting some of the CEC 2014 and CEC 2018 functions [38]. Since all functions in the CEC 2020 test suite are included in Table 2 or Table 3 in this paper, it was not necessary to classify the CEC 2020 functions in a distinct table.
In order to display the significance of the results, Friedman and Wilcoxon Signed Rank tests were employed. The Friedman test, originally introduced by Friedman, is a non-parametric statistical test [39]. It is commonly used to assess variations in the performance of multiple algorithms. In the Friedman test, the test cases are organized in rows, while the outcomes of the compared algorithms are recorded in columns.
On the other hand, the Wilcoxon signed rank test is another non-parametric statistical test used to identify distinctions between two sets of data, which could represent samples or algorithms [40]. In the standard procedure, this test begins by calculating the differences between the outcomes of two algorithms, both of which consist of N observations (where N denotes the number of tests). These differences form a vector, which is then ranked from 1 to N, with the smallest value receiving a rank of 1. Subsequently, the test computes two values: T+ (the sum of ranks of the positive differences) and T− (the sum of ranks of the negative differences). Based on these values, the test statistic, denoted as T, is determined as the minimum of T+ and T−. This T value is then used to compute the significance probability value (p) associated with the test.
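For readers who wish to reproduce this kind of analysis, both tests are available in common statistics packages; a hedged usage example is given below (Python/SciPy with made-up result arrays, whereas this paper obtained its results with IBM SPSS and Matlab’s signrank function).

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Illustrative per-function results for three algorithms (made-up data)
alg_a = np.array([0.01, 3.2, 0.0, 1.5, 2.7])
alg_b = np.array([0.02, 4.1, 0.1, 1.9, 3.0])
alg_c = np.array([0.05, 5.0, 0.3, 2.2, 3.3])

chi2, p_friedman = friedmanchisquare(alg_a, alg_b, alg_c)   # overall difference
stat, p_wilcoxon = wilcoxon(alg_a, alg_b)                   # pairwise comparison
print(f"Friedman: chi2={chi2:.3f}, p={p_friedman:.4f}")
print(f"Wilcoxon A vs B: T={stat:.3f}, p={p_wilcoxon:.4f}")
```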
4.2. Results of the Test Functions
Table 4 shows the parameter settings of the meta-heuristic algorithms utilized. These settings were chosen based on the standard parameter settings of SGA, DE, PSO, and GWO given in [41], [33], [9], and [42], respectively. Moreover, the number of iterations and the population size (number of nests for CS) were set to 100. The code of RA, BIRA, SGA, and PSO was implemented in Matlab 2019a, while the code of GWO, DE, and CS was taken from [42], [43], and [44], respectively.
In order to minimize the effects of randomization, the tests were repeated over 30 runs with 30 different random seeds. The median and standard deviation values of the results obtained from these 30 runs are shared in the result tables. The median was used instead of the mean because a highly deviant outcome within the 30 runs could seriously affect the average. The evaluation was made taking into account the best performance of the compared algorithms; in each table, the values in bold indicate the best value found for that test function.
4.2.1. Comparing Three Versions of RA
In this section, the BIRA algorithm is compared with the IRA and BRA algorithms to show that BIRA is superior to the others.
Table 5 shows the median values of the results obtained from tests with 30 random seeds for each function. When the results given in Table 5 are examined, it is seen that BIRA gives much more successful results than IRA and BRA. Considering the obtained results, we aimed to show the contribution of the BIRA algorithm to the literature by comparing it with the commonly used meta-heuristic algorithms in terms of performance.
4.2.2. Comparing BIRA with the Metaheuristics
In Table 6, the results of the compared algorithms for 20 test functions commonly used in the literature are shared. If these results are examined, it can be observed that RA offers successful results in only six test functions (F4, F8, F9, F10, F13, and F17), while BIRA, the improved version of RA, presents its best performance in 13 test functions (F1, F2, F4, F6, F7, F8, F9, F10, F11, F14, F16, F19, and F20). In addition, when the performance of BIRA is compared with that of the other algorithms, it can be observed that BIRA gives superior results compared to all other algorithms except PSO.
Table 7 shows the results of the algorithms on the CEC2014 test functions. Although SGA, GWO, and RA display much better performance on these test functions, they could not match the performance of BIRA, which offered the best results in all tested functions. On the other hand, PSO could not repeat its success from the first test (shown in Table 6) and fell far behind the SGA, GWO, RA, and BIRA algorithms.
According to Table 8, which presents the results of the CEC2018 test functions, the BIRA algorithm manages to find the global values in 15 of the 17 test functions used (all except F32 and F40) and produces superior results compared with all of the compared algorithms. Even though it provides the best result for F40, it could not reach the global value of 1000; the same problem can be seen for the F32 function. Although BIRA’s inclusion of bad individuals in the selection process prevents entrapment in local optima, it can also cause deviation from the best solution in some cases, as with these functions. Furthermore, PSO performed better in this test than in the previous one; however, it still falls short of BIRA.
Figure 2 demonstrates the convergence curves of the compared algorithms on the benchmarks. Based on the figure, BIRA is able to find the best result for F1, F2, F6, F8, F11, F15, F17, and F21 in the first iteration; hence, BIRA’s convergence curve may not be visible for these test functions. When Figure 2 is examined, it can be seen that GWO and PSO give the closest performance to BIRA. Although CS and DE generally improve the best results they find during each iteration, they cannot approach the performance of BIRA. Although SGA and RA provide successful results in some test functions such as F10, they generally get stuck in local optima and cannot reach the global values.
While Figure 3 displays the trajectories of the algorithms on the CEC2014 test functions, Figure 4 shows the convergence curves on the tested CEC2018 functions. In Figure 3, all algorithms were able to find the best result in the first iteration for functions F22 and F28. Additionally, BIRA was able to discover the global value within the first 15 iterations for all functions except F27 and F30. Considering the performances of the other algorithms as indicated in Figure 3, it is seen that DE and CS are stuck at local minima at the end of the first iterations and do not show any improvement. On the other hand, it can be observed that RA, SGA, PSO, and GWO give more successful results and continue to search for the best value throughout the iterations.
Besides, BIRA shows superior success in all of the test functions shown in Figure 4 and manages to reach the global value in its initial iterations. Based on the performance of the other algorithms, it is obvious that PSO offers the closest performance to BIRA. While DE and CS generally remain stuck in local optima, SGA, RA, and GWO are successful in searching the space but fail to find the global value.
4.3. Statistical Results
IBM SPSS Statistics 22 was used to obtain the Friedman test results, while the signrank function of Matlab 2019a was utilized to obtain the Wilcoxon Signed Rank test results.
The Friedman Test commences its procedure by assigning ranks to each row based on the values within the respective columns. Subsequently, it calculates the total rank values for each column. To evaluate this test, the χ2 statistic (also known as Chi-square) value and the k − 1 degrees of freedom (df) must be known, where k is the number of compared algorithms. The appropriate Chi-square values corresponding to the degrees of freedom can be found in [45]. For instance, in our case, with df = 6 and a significance level of 0.05, the expected χ2 value is 12.592. If the computed χ2 value exceeds the expected one, it leads to the rejection of the hypothesis that states “There is no difference between compared algorithms”.
It becomes evident which algorithm excels by referencing Table 9: the algorithm with the lowest rank value shows superior performance in comparison to the others. Table 9 clearly shows that BIRA offers a better performance than the other tested algorithms. Moreover, not only do the probability values fall below 0.05, but the computed χ2 values in Table 10 also surpass the expected value of 12.592. Consequently, the hypothesis must be rejected, signifying that “There is a difference between compared algorithms”. Nevertheless, this test does not specifically identify which algorithms differ from the others. Therefore, the Wilcoxon Signed Rank Test is employed to elucidate these differences.
The Wilcoxon Signed Rank Test is performed in pairs: one algorithm is systematically selected from the first column of the table and then compared with all subsequent algorithms. Additionally, the p values indicate the degree of similarity between the compared algorithms. A value of ‘0’ implies no similarity between the outcomes of the compared algorithms, while ‘1’ indicates that they are identical.
In Table 11, it can be seen that the p values resulting from the Wilcoxon Signed Rank Test are generally very close to 0. This implies that no algorithm is similar to another in terms of performance across all test functions. However, the performances of BIRA and PSO are closer than the others, since their pairwise value is 0.269221.
4.4. Real-World Optimization Problems
To analyze the performance of the algorithms, three well-known real-world engineering problems were utilized: pressure vessel design, gear train design, and tension/compression spring design.
The parameter settings given in Table 4 were applied while testing algorithm performance in the real-life scenarios. Each algorithm was run 30 times using 30 different random seeds, and the best results obtained from these 30 repetitions are shared.
The pressure vessel design and tension/compression spring design problems used in this section contain constraints, which the algorithms must take into account when producing results. However, as can be expected, some candidate solutions in the tests performed may not comply with these constraints. For this reason, at each iteration, individuals in the population that violate the constraints are sentenced to death, the new generation is recreated from variants of the individuals that did not violate the rules, and the process continues.
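A hedged sketch of this “death penalty” constraint-handling scheme is given below (Python; the way replacements are generated from the feasible survivors, via small Gaussian perturbations, is an assumption made here, since the text only states that the new generation is recreated from variants of non-violating individuals).

```python
import numpy as np

def enforce_feasibility(pop, constraints, rng=None, noise=0.01):
    """Discard individuals that violate any constraint (g(x) <= 0 assumed)
    and refill the population with perturbed copies of the feasible survivors."""
    rng = rng or np.random.default_rng()
    feasible = np.array([all(g(ind) <= 0 for g in constraints) for ind in pop])
    survivors = pop[feasible]
    if survivors.size == 0:
        return pop                      # nothing feasible yet; keep searching
    n_missing = len(pop) - len(survivors)
    parents = survivors[rng.integers(len(survivors), size=n_missing)]
    variants = parents + rng.normal(0, noise, parents.shape)
    return np.vstack([survivors, variants])
```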
4.4.1. Pressure Vessel Design Problem
The pressure vessel, a containment unit for various gas or liquid pressures of different sizes, is susceptible to critical faults during manufacturing or operation, which could lead to severe damage or injury. Hence, it is crucial to design the pressure vessel with the correct structure [46]. The design of the vessel, shown in Figure 5, necessitates the adjustment of several parameters (materials): shell thickness (Ts), spherical head thickness (Th), radius of the cylindrical shell (R), and shell length (L).
The aim of the problem is to minimize the total cost of the materials, given in Equation (5), without violating any of the constraints given in Equations (6)–(11), where Ts and Th are integer multiples of 0.0625 in, and R and L are continuous values whose intervals start at 40 in and 20 in, respectively.
In Table 12, the best results for which the algorithms did not violate the constraints (given in Equations (6)–(11)) are shared.
In the pressure vessel design problem, PSO could not show the performance it showed in the test functions and fell behind GWO and BIRA. According to Table 12, BIRA presented the most successful result compared to all tested algorithms by reaching a value of 7140.59.
4.4.2. Gear Train Design Problem
In the gear train design problem, the objective is to approach a gear ratio value of 1/6.931, or 0.1442793, as closely as possible [47]. As shown in Figure 6, a set of four gears constitutes the train. The tuning process involves determining the numbers of teeth on these gears.
If nA, nB, nD, and nF represent the numbers of teeth on gears A, B, D, and F, respectively, the goal is to minimize Equation (12), where nA, nB, nD, and nF must be in the interval of [12, 60] and the gear ratio is given in Equation (13).
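Under the widely used formulation of this problem (an assumption here, since Equations (12) and (13) are not reproduced in this text), the objective is the squared deviation of the achieved gear ratio from 1/6.931; a minimal sketch:

```python
def gear_train_cost(n_a, n_b, n_d, n_f):
    """Squared error between the achieved gear ratio (n_b*n_d)/(n_a*n_f)
    and the target ratio 1/6.931 (assumed standard formulation)."""
    target = 1.0 / 6.931
    ratio = (n_b * n_d) / (n_a * n_f)
    return (target - ratio) ** 2

# Example: a solution commonly reported in the literature
print(gear_train_cost(49, 16, 19, 43))   # approximately 2.7e-12
```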
If the performances of the algorithms given in Table 13 are examined, it is obvious that the performances of the PSO, CS, and BIRA algorithms are better than those of the other algorithms. Though the gear ratio values of all three algorithms, obtained from Equation (13), are the same in the first six digits of the decimal part, it can be observed that PSO is more successful than BIRA and CS (albeit by a small margin).
4.4.3. Tension/Compression Spring Design Problem
The tension/compression spring design problem, shown in Figure 7, was first presented by Belegundu [48]. The main purpose of the problem is to minimize the cost over three parameters: the wire diameter d, the mean coil diameter D, and the number of active coils P. In addition to minimizing Equation (14), the design must not violate any of the constraints in Equations (15)–(18), where d, D, and P must lie within their respective design bounds.
When the performances of the algorithms in Table 14 are observed, it can be seen that SGA and DE are stuck at the same local value. Although PSO and GWO provide decent performances, they are behind BIRA.
5. Conclusions
This study introduces novel swarm-based meta-heuristic algorithms, IRA, BRA, and BIRA, which are modeled on the mating behavior of chickens and roosters. First of all, these three new versions were compared among themselves in terms of performance and it was observed that BIRA offered more successful results.
To evaluate the contribution of BIRA’s performance to the literature, it was compared with well-known meta-heuristic algorithms: SGA, DE, PSO, CS, and GWO. Additionally, RA was included in this assessment to highlight BIRA’s superior performance compared to its predecessor. This comparison involved the use of 20 widely recognized benchmark optimization functions, 11 CEC2014 test functions, and 17 CEC2018 test functions (which are also in the CEC2020 test suite). Furthermore, Friedman and Wilcoxon Signed Rank statistical tests were conducted to establish the significance of the results. Moreover, three real-world engineering problems, pressure vessel design, gear train design, and tension/compression spring design, were also utilized to test the performance of the algorithms.
When all the tests are examined, it is obvious that BIRA offers a much superior performance compared to RA. The BIRA algorithm analyzes the search space better with the dance technique, while its bipolar behavior helps prevent it from getting stuck in local minima.
BIRA showed successful results not only compared to RA but also compared to all tested algorithms, albeit by a small margin over PSO. Moreover, BIRA not only produced new values for studies in the field of real-world engineering problems but also provided reasonable results. Thanks to the superior performance offered by BIRA, it has been shown that bipolar behavior has a positive effect on meta-heuristic algorithms.
Even though bipolar behavior is beneficial for reaching different points of the search space, it is obvious that the number of the worst individuals will decrease as the population evolves throughout the iterations. For this reason, one of the most important disadvantages of the algorithm is that a mutation phase, which would allow the discovery of different points of the search space, is not included.
Hence, a mutation phase will be added to the algorithm as a future work, and then, its performance will be analyzed in various optimization challenges. Moreover, based on the idea that bipolar behavior can improve the performance of meta-heuristic algorithms, bipolar behavior will be added to several meta-heuristics and the performance of these algorithms will be examined.
This study contributes to the literature by providing a new perspective on algorithm design, demonstrating that bipolar behavior and dance-inspired movements can enhance optimization performance. These findings highlight the potential of biologically inspired techniques in advancing meta-heuristic algorithms and solving complex optimization problems more effectively.