Article

Success History-Based Position Adaptation in Fuzzy-Controlled Ensemble of Biology-Inspired Algorithms †

Department of Higher Mathematics, Reshetnev Siberian State University of Science and Technology, 660037 Krasnoyarsk, Russia
*
Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in the proceedings of the 8th International Workshop on Mathematical Models and their Applications (IWMMA 2019) (Krasnoyarsk, Russian Federation, 18–21 November 2019).
Algorithms 2020, 13(4), 89; https://doi.org/10.3390/a13040089
Submission received: 19 March 2020 / Revised: 29 March 2020 / Accepted: 1 April 2020 / Published: 9 April 2020
(This article belongs to the Special Issue Mathematical Models and Their Applications)

Abstract

In this study, a new modification of the meta-heuristic approach called Co-Operation of Biology-Related Algorithms (COBRA) is proposed. Originally, the COBRA approach was based on a fuzzy logic controller and used for solving real-parameter optimization problems. Its basic idea consists of the cooperative work of six well-known biology-inspired algorithms, referred to as components. However, it was established that the search efficiency of COBRA depends on its ability to keep the balance between exploitation and exploration when solving optimization problems. The new modification of the COBRA approach is based on a different method for generating potential solutions. This method keeps a historical memory of successful positions found by individuals in order to lead them in different directions and thereby improve the exploitation and exploration capabilities. The proposed technique was applied to the COBRA components and to its basic steps. The newly proposed meta-heuristic, as well as other modifications of the COBRA approach and its components, was evaluated on three sets of various benchmark problems. The experimental results obtained by all algorithms with the same computational effort are presented and compared. The proposed modification outperformed the other algorithms used in the comparison, demonstrating its usefulness and workability.

1. Introduction

Many real-world problems can be formulated as optimization problems, which are characterized by different properties such as, for example, many local optima, non-separability and asymmetry. These problems arise in various scientific fields, such as engineering and related areas. For solving such problems, researchers have presented different methods over recent years, among them heuristic optimization methods and their modifications [1,2]. Random search-based and nature-inspired algorithms are faster and more efficient than traditional methods (Newton's method, the bisection method, the Hooke–Jeeves method, etc.) when solving, for example, high-dimensional complex multi-modal optimization problems [3]. However, they also have difficulties in keeping the balance between exploration (the procedure of finding completely new areas of the search space) and exploitation (the procedure of searching the regions of the search space close to previously visited points) when solving these problems [4,5,6].
Biology-inspired (population-based) algorithms such as Particle Swarm Optimization (PSO) [7], Ant Colony Optimization (ACO) [8], the Artificial Bee Colony (ABC) [9], the Whale Optimization Algorithm (WOA) [10], the Grey Wolf Optimizer (GWO) [11], the Artificial Algae Algorithm (AAA) [12], Moth-Flame Optimization (MFO) [13] and others are among the most popular and frequently used heuristic optimization methods. These algorithms imitate the behavior (individual or social) of a group of animals or some of their features. Thus, the process of optimization consists of generating a set of random solutions (called individuals) and leading them towards the optimal solution of a given problem.
Biology-inspired algorithms have found a variety of applications to real-world problems from different areas due to their high efficiency, for example [14,15,16]. These algorithms gained popularity among researchers because they can be used for solving various optimization problems regardless of the objective function's features. Nevertheless, according to the No Free Lunch (NFL) theorem, there is no universal method for solving all optimization problems [17]. Therefore, researchers propose new optimization algorithms to increase the efficiency of currently existing algorithms and to solve a wider range of optimization problems.
One way to modify currently existing techniques consists of developing collective meta-heuristics, which use the advantages of several techniques at the same time and, therefore, are more efficient [18,19,20]. In [21] the meta-heuristic approach COBRA (Co-Operation of Biology-Related Algorithms) based on parallel functioning of several populations was proposed for solving unconstrained real-valued optimization problems. Its main idea can be described as the simultaneous work of several biology-inspired algorithms with similar schemes, which compete and cooperate with each other.
The original version of COBRA consisted of six popular biology-inspired component algorithms, namely the Particle Swarm Optimization Algorithm (PSO) [7], the Cuckoo Search Algorithm (CSA) [22], the Firefly Algorithm (FFA) [23], the Bat Algorithm (BA) [24], the Fish School Search Algorithm (FSS) [25] and, finally, the Wolf Pack Search Algorithm (WPS) [26]. However, various other heuristics (for example, the ones mentioned above) can be used as component algorithms for COBRA; moreover, previously conducted experiments demonstrated that even the already chosen bio-inspired algorithms can be combined differently [27].
Later, fuzzy logic-based controllers [28] were proposed for the automated selection, from a predefined set, of the biology-inspired algorithms to be included in the ensemble, as well as of the number of individuals in each population [29]. The idea of using fuzzy controllers for the parameter adaptation of heuristics had previously been explored by researchers, for instance in [30,31], and their usefulness was established. The fuzzy-controlled COBRA modification was named COBRA-f, and its efficiency was demonstrated in [32].
The COBRA-f approach, like the original COBRA algorithm, was developed for continuous optimization [29], but despite its effectiveness compared to the mentioned biology-inspired algorithms (in other words, its components), the COBRA-f meta-heuristic still needs to address the problem of balancing exploitation and exploration [33]. As was noted before, a variety of ideas has been proposed for finding the exploration–exploitation balance in population-based biology-related algorithms, including methods of parameter adaptation [34,35,36], island models [37,38], population size control [39,40] and many others. One of the most valuable ideas, proposed for the Differential Evolution (DE) algorithm [41] in the study [42], is to use an external archive of potentially good solutions, which has a limited size and is updated during the optimization process. This idea is similar to the one used in multi-objective optimizers such as SPEA or SPEA2 [43], where an external archive of non-dominated solutions is maintained.
The main idea of the archive is to save promising solutions that may carry valuable information about the search space and its potentially good areas, thereby capturing the algorithm's successful search history [43]. This information could be used in any biology-related optimization heuristic, for instance [44,45]. In this study, the idea of a success history-based archive of potentially good solutions is implemented for the COBRA-f algorithm.
This paper is an extended version of our paper published in the proceedings of the 8th International Workshop on Mathematical Models and their Applications (IWMMA 2019) (Krasnoyarsk, Russian Federation, 18–21 November 2019) [46]. The algorithm introduced in [46] was tested on two additional sets of benchmark functions. Moreover, population size changes were observed while solving various benchmark problems with 10 and 30 variables. It should also be noted that in this study several modifications are discussed and the number of compared algorithms was increased.
Therefore, in this paper, the original COBRA meta-heuristic and its version COBRA-f are presented first, followed by a description of the newly proposed method for the fuzzy-controlled COBRA. The next section presents and discusses the experimental results obtained by the original COBRA algorithm, the fuzzy-controlled COBRA-f and the proposed approach, as well as the results obtained by COBRA's components with and without the external archive. The conclusions are given in the last section.

2. Co-Operation of Biology-Related Algorithms (COBRA)

Five biology-inspired optimization methods, to be more specific, Particle Swarm Optimization (PSO) [7], Wolf Pack Search (WPS) [26], the Firefly Algorithm (FFA) [23], the Cuckoo Search Algorithm (CSA) [22] and the Bat Algorithm (BA) [24], were used as the basis for the meta-heuristic approach called Co-Operation of Biology-Related Algorithms, or COBRA [21]. These algorithms are referred to as the "component algorithms" of the COBRA approach. It should be noted that the number of component algorithms can be changed (increased or decreased), and this affects the workability of the COBRA meta-heuristic, as was shown in [27].
All of the mentioned population-based algorithms have their advantages and disadvantages. The possibility of using all of them (namely, their advantages) simultaneously while solving a given optimization problem was the reason for developing a new, potentially better cooperative approach. Moreover, experimental results show that it is hard to determine which algorithm should be used for a given problem; using a cooperative meta-heuristic removes the necessity of choosing one of the mentioned biology-inspired algorithms [21].
The optimization process of the COBRA approach starts with generating one population for each biology-inspired component algorithm, i.e., with generating five populations (or six, after the Fish School Search (FSS) algorithm [25] was added to the collective). After that, all populations are executed in parallel, or in other words simultaneously, cooperating with each other.
All listed component algorithms are population-based heuristics, and thus, for each of them the population size or number of individuals (potential solutions) should be chosen beforehand, and this number does not change during the optimization process. However, the COBRA approach is a self-tuning meta-heuristic. Thus, first the minimum and maximum numbers of individuals throughout all populations are defined, and then the initial sizes of populations. Then the population size for each component algorithm changes automatically during the optimization process.
The number of individuals in the population of each component depends on the fitness values of these individuals; namely, the population size can increase or decrease during the optimization process. If the overall population fitness value has not improved during a given number of iterations, then the size of each population is increased; vice versa, if the overall population fitness value improved constantly during a given number of iterations, then the size of each population is decreased. Moreover, a population can grow by accepting individuals removed from other populations if its average fitness value is better than the average fitness value of all other populations. Thus, the "winner algorithm" at every step is the algorithm whose population has the best average fitness value.
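The competition rule described above can be sketched in Python. This is a minimal illustration under stated assumptions: minimization, and a fixed growth/shrink step and minimum population size (`step` and `min_size` are illustrative values, not the paper's settings, which are adapted automatically):

```python
def adjust_sizes(best_history, avg_fitness, sizes, step=5, min_size=10):
    """Sketch of COBRA's competition step (step and min_size are assumptions).

    best_history: overall best fitness recorded over the adaptation period
    (minimization); avg_fitness[i]: average fitness of the i-th population.
    """
    improved = all(best_history[t + 1] < best_history[t]
                   for t in range(len(best_history) - 1))
    if not improved:
        # stagnation: every population receives additional individuals
        return [s + step for s in sizes]
    # steady progress: shrink populations and hand the removed individuals
    # to the "winner", i.e. the population with the best average fitness
    winner = min(range(len(sizes)), key=lambda i: avg_fitness[i])
    new_sizes, removed = [], 0
    for s in sizes:
        cut = max(0, min(step, s - min_size))
        new_sizes.append(s - cut)
        removed += cut
    new_sizes[winner] += removed
    return new_sizes
```

For example, with a constantly improving history, all populations shrink by `step` and the winner absorbs the freed individuals; with a stagnating history, every population grows.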
The original COBRA algorithm additionally has a migration operator, which allows "communication" between the populations in the ensemble. To be more specific, "communication" is organized in the following way: the populations exchange individuals such that a part of the worst individuals of each population is replaced by the best individuals of the other populations. Thus, the group performance of all algorithms can be improved.
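The migration ("communication") operator can be sketched as follows; the function name and the number of exchanged individuals are illustrative assumptions, and minimization is assumed:

```python
def migrate(populations, fitness, n_exchange=3):
    """Sketch of COBRA's migration operator (n_exchange is an assumption).

    The n_exchange worst individuals of each population are replaced by
    copies of the best individuals pooled from the other populations.
    """
    for i, pop in enumerate(populations):
        # pool the individuals of all other populations and keep the best ones
        donors = sorted(
            (ind for j, p in enumerate(populations) if j != i for ind in p),
            key=fitness)[:n_exchange]
        pop.sort(key=fitness)                            # ascending: worst at the end
        pop[-n_exchange:] = [list(d) for d in donors]    # copy, do not alias
    return populations
```

Note that in this sequential sketch, populations processed later see the already updated earlier populations; a batch variant would snapshot all donors first.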
The performance of the COBRA meta-heuristic approach was evaluated on a set of various benchmark and real-world problems and the experiments showed that COBRA works successfully and is reliable on different optimization problems [21]. Moreover, the simulations showed that COBRA outperforms its component algorithms when the dimension grows or when complicated problems are solved, and therefore, it should be used instead of them [21].

3. Fuzzy-Controlled COBRA

As was mentioned in the previous section, the original COBRA approach has six similar biology-inspired component algorithms, which mimic the collective behavior of their corresponding animal groups, thereby allowing the global optima of real-valued functions to be found. Performance analysis showed that all of them are sufficiently effective for solving optimization problems, and their workability has been established [21,47].
However, there are various other algorithms that can be used as components for COBRA; moreover, previously conducted experiments demonstrated that even the biology-inspired algorithms already chosen can be combined in different ways. For example, in [27] five different combinations of the population-based heuristics for the COBRA algorithm were presented, and their efficiency was examined on test problems from the CEC 2013 competition [48]. It was established that three of them show the best results on the test functions depending on the number of variables [27].
This selection problem was solved by controllers based on fuzzy logic [28]. The fuzzy controller implements a more flexible parameter-tuning algorithm compared to the original approach used in COBRA [29]. The fuzzy controller operates using special fuzzification, inference and defuzzification schemes [28], which allow real-valued outputs to be generated. In the mentioned study [29], the component algorithms are rated with success values, which are used as the fuzzy controller's inputs, while the population size changes serve as its outputs.
The controller based on fuzzy logic used in this study had 7 inputs, namely the 6 success rates of the component algorithms and the success rate of the whole population, and 6 outputs, namely the number of individuals to add to or remove from each component algorithm. The success of each component was determined as the best fitness value achieved by that component; this choice was made in accordance with the research presented in [29]. The seventh input variable was determined as the ratio of the number of steps during which the best-found fitness value (found by all algorithms together) was improved to the adaptation period, which was a parameter.
To obtain the output values, the Mamdani fuzzy inference procedure was used, and the rules had the following form:
R_q: IF x_1 is A_q1 AND … AND x_n is A_qn THEN y_1 is B_q1 AND … AND y_k is B_qk,
where R_q is the q-th fuzzy rule, x = (x_1, …, x_n) is the set of input values in n-dimensional space (n = 7 in this case), y = (y_1, …, y_k) is the set of outputs (k = 6), A_qi is the fuzzy set for the i-th input variable, and B_qj is the fuzzy set for the j-th output variable. The rule base consisted of 21 fuzzy rules and was structured as follows: three rules were devoted to each case in which one of the components achieved better fitness values than the others (as there are six components, this gave 18 rules); the last 3 rules used the total success rate of all components (the seventh input variable) to determine whether solutions should be added to or removed from all components, thus regulating the amount of available computational resources [29]. Part of the described rule base is presented in Table 1.
The input variables were set to be in [0, 1], and fixed triangular fuzzy terms were used. In addition to the three classical terms A_1, A_2 and A_3 and the "Don't Care" (DC) condition, a fourth fuzzy term A_4, "larger than 0" (the opposite of A_1), was added. The A_4 term and the DC condition are needed to simplify the rules and decrease their number [29].
The output variables were also set using 3 triangular fuzzy terms. These terms were symmetrical, and their positions were determined by 2 pairs of values, which encoded the right and left positions of the central term (equivalently, the middle positions of the left and right terms) and the minimal and maximal values for the side terms. These values were specially optimized with the PSO heuristic [7], and the following parameters were found: [12; 2; 0; 19], according to [29]. The defuzzification was performed using the center-of-mass approach for the shape obtained after the fuzzy inference.
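The fuzzification–inference–defuzzification pipeline can be illustrated with a toy Mamdani controller. The rule encoding, output grid and terms below are simplified assumptions for illustration only; they do not reproduce the paper's 21-rule base or its PSO-optimized output terms:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani(inputs, rules, y_grid):
    """Toy Mamdani inference with center-of-mass defuzzification.

    Each rule is (antecedents, output_term): antecedents maps an input index
    to a triangular term (a, b, c) over [0, 1]; output_term is a triangle
    over the output grid y_grid.
    """
    agg = np.zeros_like(y_grid)
    for antecedents, (oa, ob, oc) in rules:
        # firing strength: minimum over the rule's antecedent memberships
        w = min(tri(inputs[i], *term) for i, term in antecedents.items())
        # clip the output term at the firing strength, aggregate by maximum
        agg = np.maximum(agg, np.minimum(w, tri(y_grid, oa, ob, oc)))
    if agg.sum() == 0.0:
        return 0.0                                     # no rule fired
    return float((y_grid * agg).sum() / agg.sum())     # center of mass
```

For instance, a single fully fired rule whose output term is centered at 0 defuzzifies to approximately 0 on a symmetric grid.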
The "communication", or in other words the migration operator, was left unchanged. The fuzzy-controlled COBRA performance was evaluated on a set of benchmark optimization problems with 10 and 30 variables from [48]. The experimental results showed that the COBRA-f algorithm can find the best solutions for many benchmark problems. Moreover, the COBRA-f meta-heuristic was compared to its components, as well as to the original COBRA. The simulations and the comparison showed that the COBRA-f algorithm is superior to the previously proposed biology-related component algorithms, especially as the dimension grows [29].

4. Proposed Approach

In this study, a new modification consisting of the success history-based position adaptation of potential solutions (SHPA) is introduced. The main idea is to improve the search diversity of the biology-inspired component algorithms of the fuzzy-controlled COBRA meta-heuristic approach and, consequently, COBRA's efficiency. The key concept of the proposed technique is described below.
First, one population for each component algorithm is generated; namely, a set of potential solutions, called individuals and represented as real-valued vectors of length D, is randomly generated, where D is the number of dimensions of the given optimization problem. It should be noted that at this step the population size for each component is chosen beforehand; it will later be changed automatically by the fuzzy controller. Additionally, for each population (component algorithm) an external archive of best-found positions is created. At the beginning the external archive is empty; its size can then grow up to a maximum value, which is chosen by the end-user and stays the same during the run of the component algorithm.
The best position found by a given individual or in other words the local best-found position in the search space for each individual in each population is saved. Initially each individual’s current coordinates are used as its local best. If later a better position is found, then it will be used as the local best and the previous one will be stored in the mentioned external archive.
The pseudo-code in Algorithm 1 describes the process of updating the external archives of the component algorithms for a minimization problem.
Algorithm 1 The process of updating the external archive for component algorithms

f is the objective function
for i in 1…6 do
    N_i is the size of the i-th population
    A_i is the external archive of the i-th population
    |A_i| is the maximum size of the i-th external archive
    k_i is the current number of individuals stored in the archive A_i (k_i ≤ |A_i|)
    A_ij, j = 1…k_i, is the j-th individual stored in the archive A_i
    P_ij, j = 1…N_i, is the j-th individual in the i-th population
    local_ij is the local best position of P_ij
end for
for i in 1…6 do
    for j in 1…N_i do
        if f(P_ij) < f(local_ij) then
            if k_i + 1 ≤ |A_i| then
                A_i(k_i + 1) = local_ij
                k_i = k_i + 1
            else
                randomly choose an integer r from 1 to |A_i|
                if f(local_ij) < f(A_ir) then
                    A_ir = local_ij
                end if
            end if
            local_ij = P_ij
        end if
    end for
end for
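Algorithm 1 can be rendered directly in Python; the function and variable names below are ours, not the paper's, and the objective f is assumed to be a minimization target:

```python
import random

def update_archives(f, populations, local_bests, archives, max_size=50):
    """Runnable rendering of Algorithm 1 (identifiers are illustrative).

    populations[i][j]: the j-th individual of the i-th component;
    local_bests[i][j]: its best position so far; archives[i]: the external
    archive of the i-th component, bounded by max_size.
    """
    for i, pop in enumerate(populations):
        for j, ind in enumerate(pop):
            if f(ind) < f(local_bests[i][j]):         # individual improved
                if len(archives[i]) < max_size:       # room left: append
                    archives[i].append(local_bests[i][j])
                else:                                 # full: maybe replace a worse entry
                    r = random.randrange(max_size)
                    if f(local_bests[i][j]) < f(archives[i][r]):
                        archives[i][r] = local_bests[i][j]
                local_bests[i][j] = ind               # record the new local best
    return archives, local_bests
```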
As was already mentioned, all component algorithms are executed in parallel after the six populations (one for each of them) and the external archives have been generated. Thus, when individuals change their positions in the search space according to the formulas of the considered component algorithm, they can, with some probability p_a, use the potential solutions stored in the i-th external archive, i = 1, …, 6.
It should be noted that the value of the probability p a depends on the considered biology-inspired component algorithm. More specifically, previously conducted research showed that only three components of the COBRA approach, namely the Firefly Algorithm, the Cuckoo Search Algorithm and the Bat Algorithm, demonstrate statistically better results by using an archive for the individual’s position adaptation [49]. Thus, only these three algorithms use archives during their execution.
First, let us consider the Bat Algorithm [24]. Each i-th individual from the population in the Bat Algorithm is represented by its coordinates x i = ( x i 1 , ... , x i D ) and velocity v i = ( v i 1 , ... , v i D ) , where D is the number of dimensions of the search space. The following formulas are used for updating velocities and locations/solutions in the BA approach:
v_i(t + 1) = v_i(t) + (x_i(t) − x*) · f_i,
x_i(t + 1) = x_i(t) + v_i(t + 1),
where t and (t + 1) indicate the current and the next iterations, x* is the best solution found so far by the whole population, and f_i is the frequency of the pulses emitted by the i-th individual [24]. Thus, with probability p_a, a randomly chosen individual from an external archive (if it is not empty) is used instead of x*. It should be noted that the external archive itself is also selected randomly (it is not necessarily the archive created for the BA population). This is done with the expectation that individuals will move in multiple directions and, therefore, will be able to find better solutions.
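A sketch of this modified BA move, under stated assumptions: plain-Python vectors, a helper name of our own, and p_a = 0.15 as reported for the BA archive in the experimental setup:

```python
import random

def bat_update(x, v, f_i, x_best, archives, p_a=0.15):
    """Sketch of the BA velocity/position update with archive attraction.

    With probability p_a, the attractor x* is replaced by a random solution
    taken from a randomly chosen non-empty external archive.
    """
    attractor = x_best
    if random.random() < p_a:
        non_empty = [a for a in archives if a]
        if non_empty:
            attractor = random.choice(random.choice(non_empty))
    # v_i(t+1) = v_i(t) + (x_i(t) - x*) * f_i;  x_i(t+1) = x_i(t) + v_i(t+1)
    v_new = [vd + (xd - ad) * f_i for vd, xd, ad in zip(v, x, attractor)]
    x_new = [xd + vd for xd, vd in zip(x, v_new)]
    return x_new, v_new
```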
For the other two biology-inspired component algorithms, FFA and CSA, the external archives were used in a similar way: with a given probability p_a, the current point of attraction x* was replaced by a solution stored in a randomly chosen archive. To be more specific, in the CSA approach, individuals are sorted according to the objective function [22]; then a part of the worst ones is removed from the population, and new individuals are generated to replace them using the external archives with the given probability p_a. In the FFA approach, a firefly (individual) moves towards another firefly if the latter has a better objective function value [23]; thus, when the proposed technique is used, a firefly can also be moved towards individuals from the external archives.
There are two basic steps after the simultaneous execution of all component algorithms: the fuzzy controller makes a decision about the population sizes of the components (this step is called competition), and migration, or in other words the exchange of individuals between populations, takes place (co-operation). To be more specific, the size of each population can decrease, by removing some individuals, down to the minimal value chosen by the end-user, or increase (the overall maximum size of all populations together is also established by the end-user beforehand). When a population size is increased, i.e., new individuals are added, these individuals can be generated using the scheme in Algorithm 2.
Algorithm 2 Generating new individuals

p_add_i is the probability of using the normal distribution N(a, σ) with mean a and standard deviation σ for the i-th population
|Ac_i| is the current archive size of the i-th population
algbest_i is the currently best-found position of the i-th population
Generate a random number rand from the interval [0, 1]
if rand ≤ p_add_i and |Ac_i| > 0 then
    Generate a random integer r from [1, |Ac_i|]
    a = 0.5 · (Ac_ir + algbest_i)
    σ = |Ac_ir − algbest_i|
    Generate the new individual ind_new = N(a, σ)
else
    Generate the new individual ind_new around algbest_i
end if
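Algorithm 2 translates into a few lines of Python; the Gaussian fallback spread around algbest_i is an assumption (the pseudo-code only says "around algbest_i"), and the identifiers are ours:

```python
import random

def generate_individual(archive, alg_best, p_add=0.25):
    """Runnable rendering of Algorithm 2 (fallback spread is an assumption).

    With probability p_add (and a non-empty archive), a new individual is
    sampled coordinate-wise from N(a, sigma), where a is the midpoint of a
    random archived solution and the best-found position, and sigma is their
    coordinate-wise distance; otherwise it is generated around alg_best.
    """
    if random.random() <= p_add and archive:
        arch = random.choice(archive)
        return [random.gauss(0.5 * (ar + bd), abs(ar - bd))
                for ar, bd in zip(arch, alg_best)]
    # fallback: a small Gaussian perturbation around the best-found position
    return [random.gauss(bd, 0.1) for bd in alg_best]
```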
As was already noted, all populations communicate with each other by exchanging individuals. However, in this modification of the fuzzy-controlled COBRA, a part of the worst individuals of each population is replaced by new individuals generated by a scheme similar to the one described above (using the normal distribution), but instead of algbest_i the current best position found by all populations is used, and the external archive is again chosen randomly.
Thus, the proposed success history-based position adaptation method of the potential solutions depends on the probability p a (there are three values for this probability, or more specifically, one value for each component algorithm that uses its archive during the execution), the maximum archive size | A i | and probabilities p a d d i (one for each component algorithm).

5. Results and Discussions

5.1. Numerical Benchmarks

To check the efficiency of the proposed algorithm, the modified fuzzy-controlled COBRA was tested on three different sets of test problems: 23 classical problems [50], nine standard benchmark problems with 10 and 30 variables [50], and 16 problems taken from the CEC 2014 competition [51]. These functions have been widely used in the literature, for example in [49] or [52].
These sets are referred to as SET-1, SET-2 and SET-3, respectively. The functions are based on a set of classical benchmarks such as the Ackley, Rastrigin, Katsuura, Griewank, Weierstrass, Sphere, HappyCat, Schwefel, HGBat and Rosenbrock functions. They span a diverse set of features such as noise in the fitness function, non-separability, multimodality, ill-conditioning and rotation, among others. The functions in the stated sets of test problems are separated into three groups: unimodal, high-dimensional multi-modal and low-dimensional multi-modal benchmark functions.

5.2. Compared Algorithms and Parametric Setup

The performance of the suggested modification of the COBRA algorithm (hereinafter called COBRA-SHA) was compared with that of state-of-the-art algorithms such as PSO [7], WPS [26], FFA [23], CSA [22], BA [24] and FSS [25]. These algorithms have several parameters that should be initialized before running. The optimal control parameters usually depend on the problem and are unknown without prior knowledge. Therefore, the initial values of the necessary parameters for all algorithms were taken from the original papers dedicated to them, as proposed by their authors.
Furthermore, the proposed approach was compared with modifications of the FFA, CSA and BA algorithms, which also use the external archives, as it was established previously that their usage improves the workability of the listed heuristics [48]. Let us denote them as FFA-a, CSA-a and BA-a, respectively.
To show the advantage of the proposed modification more clearly, it was also compared with the fuzzy-controlled COBRA-f [29] and also with a similar modification of the COBRA meta-heuristic, in which unlike COBRA-SHA, each component algorithm can use only its own external archive (this modification was named COBRA-fas) [53]. Parameters of the fuzzy controllers for the COBRA-fas and COBRA-SHA approaches were found by PSO in the same way as for the COBRA-f algorithm [10], namely the following parameters were obtained: [ 3 ; 2 ; 0 ; 10 ] , [ 3 ; 2 ; 5 ; 10 ] and [ 12 ; 2 ; 0 ; 19 ] respectively. Thus, the fuzzy sets for the outputs of the obtained controllers can be represented by Figure 1.
For all mentioned biology-inspired component algorithms, the initial population size was equal to 100 on each of the 23 benchmark functions from SET-1, while the maximum number of iterations was equal to 1000. Thus, to check the efficiency of the proposed algorithm COBRA-SHA, the maximum number of function evaluations was set to 100,000. The same number of function evaluations was used for the fuzzy-controlled COBRA-f and the modification COBRA-fas. All algorithms included in the comparison were run 30 times on the benchmark problems from SET-1.
While solving the optimization problems from SET-2, the maximum generation number was 5000 and the population size for each component algorithm, as well as for the FFA-a, CSA-a and BA-a modifications, was equal to 100. Therefore, the maximum number of function evaluations for the COBRA-f, COBRA-fas and COBRA-SHA algorithms was set to 500,000. It should be noted that the number of program runs of all algorithms for the benchmark problems from SET-2 was the same as for the problems from SET-1.
Finally, the 16 test functions taken from the CEC 2014 Special Session on Real-Parameter Optimization [51] were solved 51 times by all mentioned heuristics. All these functions are minimization problems; they are all also shifted and scaled. The same search range was defined for all of them: [−100, 100]^D, where D = 30 is the number of dimensions. For all algorithms included in the comparison, the maximum number of function evaluations was equal to 300,000. The population size for the component algorithms and their modifications was set to 100.
During the experiments, the maximum archive size A s for each component of the COBRA-fas and COBRA-SHA meta-heuristics as well as for the FFA-a, CSA-a and BA-a algorithms was equal to 50. In addition, previously conducted experiments showed that the probability of using the external archive should have the following values for FFA-a, CSA-a and BA-a: 0.75 , 0.6 and 0.15 respectively [49]. The same probabilities were used for the respective components of the COBRA-fas and COBRA-SHA approaches. For the rest of their component algorithms, the probability of using the external archive was set to 0 (the archive was not used specifically during the execution of a given component algorithm but was updated if conditions applied). Finally, the probability p a d d i for the i-th ( i = 1 , ... , 6 ) component algorithm of the COBRA-SHA meta-heuristic was set to 0.25 .
For the collective meta-heuristic COBRA-f and its modifications mentioned in this study, while solving problems from SET-1, SET-2 and SET-3 the minimum population size for each component was set to 0, but if the total sum of population sizes was equal to 0 then all population sizes increased to 10. Additionally, the maximum total sum of population sizes was set to 300.

5.3. Numerical Analysis on Benchmark Functions

5.3.1. Numerical Results for SET-1

Each of the 23 problems was solved by all the stated algorithms, and experimental results such as the mean value (mean), standard deviation (SD), median value (med) and worst value (worst) of the best-so-far solution in the last iteration are reported. The obtained results are presented in Table 2. The outcomes, namely the mean and standard deviation values, are averaged over the number of program runs, which was equal to 30, and the best results are shown in bold type in Table 2.
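The reported statistics can be computed directly from the per-run best-so-far values; a minimal sketch (for minimization, the worst run is the one with the largest value):

```python
from statistics import mean, stdev, median

def summarize(best_values):
    """mean, SD, med and worst of the best-so-far value over independent
    runs of one algorithm on one problem (minimization assumed)."""
    return {"mean": mean(best_values), "SD": stdev(best_values),
            "med": median(best_values), "worst": max(best_values)}
```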
From Table 2 it can be observed that the proposed COBRA-SHA approach outperformed the other compared state-of-the-art approaches and their modifications, as well as COBRA-f and the similar modification COBRA-fas, on the first two unimodal functions (f1 and f2) in terms of the mean, standard deviation, median and worst values of the results. Regarding function f3, COBRA-SHA was outperformed only by the COBRA-fas modification in terms of the median value, while it was the best among the compared algorithms according to the other statistical results.
The fuzzy-controlled COBRA outperformed the other algorithms on function f4. Regarding the fifth unimodal function, while the CSA modification with the external archive demonstrated the best results in terms of the mean, standard deviation and worst values, the median value obtained by the proposed COBRA-SHA approach was better. Several algorithms, including COBRA-f, COBRA-fas and COBRA-SHA, were able to find the optimum value for function f6 during each program run. Finally, regarding function f7, COBRA-fas and CSA-a outperformed the other algorithms.
For the multi-modal functions f8–f13 with many local minima, the final results are more important because these functions reflect an algorithm's ability to escape from poor local optima and reach the near-global optimum. For functions f9, f10 and f11, COBRA-SHA was successful in finding the global minimum, as were the fuzzy-controlled COBRA and the similar modification COBRA-fas. For function f8, CSA with the external archive (CSA-a) outperformed the other algorithms included in the comparison. Regarding f12, the proposed COBRA-SHA approach was the best in terms of the median value, while CSA-a outperformed all the compared algorithms according to the other statistical results. Moreover, for function f13 the proposed modification COBRA-SHA produced better results than the others.
Functions f14–f23 have only a few local minima and are also low-dimensional. For functions f14, f16, f17, f18, f21, f22 and f23, COBRA-SHA was successful in finding the global minimum. Regarding f14 and f16, PSO, COBRA-f, COBRA-fas and COBRA-SHA produced the same results. For function f17, PSO, FSS, COBRA-f, COBRA-fas and COBRA-SHA also gave the same values. Regarding f18, COBRA-f, COBRA-fas and COBRA-SHA produced the same mean, standard deviation, median and worst values. Finally, for function f23 the two similar modifications proposed in this study, namely COBRA-fas and COBRA-SHA, demonstrated the same results.
From Table 2, it can also be observed that the COBRA-SHA approach performs better than the other algorithms on the multi-modal low-dimensional benchmarks. For example, regarding function f15, the COBRA-SHA approach outperformed the other algorithms included in the comparison in terms of the mean, median and worst values. However, for function f19, COBRA-f and COBRA-fas were able to find the optimum value during each program run, and they outperformed COBRA-SHA. Finally, regarding function f20, the best mean and median values were found by the proposed modifications COBRA-fas and COBRA-SHA.
Additionally, the results of the comparison between COBRA-SHA and the other mentioned algorithms according to the Mann-Whitney statistical test with significance level p = 0.01 are presented in Table 3. The following notations are used in Table 3: "+" means that COBRA-SHA was better than a given algorithm, "−" means that the proposed algorithm was statistically worse, and "=" means that there was no significant difference between their results.
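A pairwise comparison of this kind can be sketched with a normal-approximation Mann-Whitney U test (tie correction omitted for brevity); the function name and the use of medians to decide the sign of a significant difference are illustrative assumptions:

```python
import math
from statistics import median

def mann_whitney_sign(a, b, alpha=0.01):
    """'+' if sample a is significantly better (smaller, minimization)
    than b, '-' if significantly worse, '=' otherwise."""
    n1, n2 = len(a), len(b)
    combined = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    ranksum_a, i = 0.0, 0
    while i < n1 + n2:
        j = i
        while j < n1 + n2 and combined[j][0] == combined[i][0]:
            j += 1
        midrank = (i + j + 1) / 2.0  # average of 1-based ranks i+1..j (ties share a midrank)
        ranksum_a += midrank * sum(1 for k in range(i, j) if combined[k][1] == 0)
        i = j
    u = ranksum_a - n1 * (n1 + 1) / 2.0
    z = (u - n1 * n2 / 2.0) / math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))  # two-sided p-value
    if p >= alpha:
        return "="
    return "+" if median(a) < median(b) else "-"
```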
The results of the Mann-Whitney statistical test are visualized in Figure 2. The values on the graph represent the total score, i.e., the numbers of improvements, deteriorations and non-significant differences between COBRA-SHA and the other approaches.
In addition, all the mentioned algorithms were compared with the proposed modification COBRA-SHA according to the Friedman statistical test. The obtained results are demonstrated in Figure 3. The following notations were used in Figure 3: COBRA-f was denoted as “COBRA”, COBRA-fas was denoted as “C-FAS” and for COBRA-SHA the notation “C ARC” was used. The Friedman ranking was performed for every test function separately and used the results of all runs for ranking.
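The per-function Friedman ranking can be sketched as follows: within each run the algorithms' results are ranked (midranks for ties), and the ranks are averaged over all runs; the function name and data layout are assumptions:

```python
def friedman_ranks(results):
    """results[i][j]: best value of algorithm j in run i (minimization).
    Returns the average rank of each algorithm over all runs."""
    n_runs, n_algs = len(results), len(results[0])
    totals = [0.0] * n_algs
    for run in results:
        order = sorted(range(n_algs), key=lambda j: run[j])
        k = 0
        while k < n_algs:
            m = k
            while m < n_algs and run[order[m]] == run[order[k]]:
                m += 1
            midrank = (k + m + 1) / 2.0  # tied values share the average rank
            for t in range(k, m):
                totals[order[t]] += midrank
            k = m
    return [t / n_runs for t in totals]
```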
Thus, it was established that the results obtained by the proposed approach are statistically better, according to the Friedman and Mann-Whitney tests, than the results obtained by the stated biology-inspired algorithms (PSO, WPS, FSS, FFA, CSA and BA) and their modifications with the external archive (FFA-a, CSA-a and BA-a). At the same time, it can be seen that the results achieved by FFA-a, CSA-a and BA-a are statistically better than those found by their original versions. Moreover, COBRA-SHA statistically outperformed the fuzzy-controlled COBRA-f. However, there is almost no difference between the results obtained by COBRA-SHA and the similar modification COBRA-fas on the functions from SET-1.

5.3.2. Numerical Results for SET-2

To show the advantage of the proposed modification COBRA-SHA more clearly, it was compared with the same algorithms mentioned previously by using the benchmark functions from SET-2. The functions used in SET-2 are Sphere, Rosenbrock, Quadric, Schwefel, Griewank, Weierstrass, Quartic, Rastrigin and Ackley, which are frequently used benchmark functions for testing the performance of various optimization algorithms. These functions are continuous, differentiable and scalable; the set includes both separable and non-separable as well as unimodal and multi-modal problems.
The experimental results obtained for 10- and 30-dimensional functions by the listed biology-inspired algorithms and their modifications are shown in Table 4 and Table 5. From these tables, it can be observed that the COBRA-SHA approach performs better than the other algorithms included in the comparison.
For example, regarding function f1, the COBRA-SHA approach outperformed the other algorithms included in the comparison in terms of the mean, best and worst values when D = 10. However, for the same function with D = 30, COBRA-f found the best value over the 51 program runs, while COBRA-SHA was still better than the others in terms of the mean and worst values. Similarly, for function f3 the best value was found by COBRA-f, while COBRA-SHA achieved better mean and worst values both with D = 10 and with D = 30.
Regarding functions f5 and f8 with 10 and 30 variables, COBRA-f, COBRA-fas and COBRA-SHA were able to find the optimum solutions during each program run. It should be noted that for function f4 with 10 variables, the modifications CSA-a, COBRA-fas and COBRA-SHA also achieved the optimum value during each program run, while the modification BA-a and the original algorithm COBRA-f found the optimum several times. On the other hand, for the same function with 30 variables, COBRA-SHA outperformed the other algorithms included in the comparison. Additionally, for the last function f9, both with D = 10 and with D = 30, COBRA-f, COBRA-fas, COBRA-SHA, BA and its modification BA-a demonstrated the same good results.
As for the second function f2 (D = 10 and D = 30), CSA-a outperformed the other algorithms in terms of the mean and worst values, but the best value was found by the COBRA-fas approach. Regarding function f6 with 10 variables, the PSO algorithm demonstrated the best results, while for the same benchmark problem with 30 variables COBRA-fas outperformed every algorithm included in the comparison. Finally, for function f7, BA and BA-a gave better results with D = 10, and COBRA-SHA did so with D = 30.
Additionally, in Table 6 the results of the comparison between COBRA-SHA and the other mentioned algorithms according to the Mann-Whitney statistical test with significance level p = 0.01 are presented. The same notations as in Table 3 are used in Table 6. The results of the Mann-Whitney statistical test are presented in Figure 4 and Figure 5.
In addition, all the stated algorithms were compared with the proposed modification COBRA-SHA according to the Friedman statistical test. The obtained results are demonstrated in Figure 6 and Figure 7. The following notations were used in Figure 6 and Figure 7: COBRA-fas was denoted as “C-FAS” and for COBRA-SHA the notation “C-SHA” was used.
It was again established that the results obtained by the proposed approach are statistically better, according to the Friedman and Mann-Whitney tests, than the results obtained by the mentioned biology-inspired algorithms (PSO, WPS, FSS, FFA, CSA and BA) and their modifications with the external archive (FFA-a, CSA-a and BA-a). Moreover, COBRA-SHA statistically outperformed the fuzzy-controlled COBRA-f in 11 out of 18 cases. However, as for SET-1, there is almost no difference between the results obtained by COBRA-SHA and the similar modification COBRA-fas on the functions from SET-2.

5.3.3. Numerical Results for SET-3

The next step was to test and compare the stated biology-inspired algorithms and their modifications by using the benchmark functions from SET-3. The 16 functions with D = 30 used in SET-3 were taken from the CEC 2014 competition [51]. All these functions are minimization problems with a shifted and rotated global optimum, which is randomly distributed in [−80, 80]^D. The search range for all problems was [−100, 100]^D. The statistical results of the different algorithms, in terms of the mean, standard deviation and best solution for the functions from CEC 2014, are listed in Table 7. The best results are shown in bold.
From Table 7, it can be observed that the COBRA-SHA approach in most cases performs better than the other algorithms included in the comparison in terms of the mean value. To be more specific, this is the case for the first three unimodal functions f1, f2 and f3. Moreover, for function f2, COBRA-SHA outperformed the other algorithms by all criteria. However, for f1 and f3 the best results (out of 51 program runs) were found by the COBRA-fas approach.
Regarding the multi-modal functions f4, f8, f9, f13 and f16, COBRA-SHA was able to outperform all the biology-inspired algorithms included in the comparison in terms of the mean and best values. The COBRA-SHA modification also performed better than the other algorithms for the remaining multi-modal functions (namely f6, f7, f10, f11, f12, f14 and f15), but not for the fifth benchmark problem; nevertheless, it was able to find the best value for f5. The fuzzy-controlled COBRA-f found the best solution for function f14 and gave the best mean value for function f5. As with the COBRA-SHA modification, COBRA-fas gave the best values for functions f7 and f10. For functions f6 and f11, the best values were found by the WPS algorithm, while for function f15 the best value was found by the FFA algorithm. Finally, the modification CSA-a was able to achieve the best value for function f12.
The results of the comparison between COBRA-SHA and the other mentioned algorithms according to the Mann-Whitney statistical test with significance level p = 0.01 are presented in Table 8 (the same notations are used). The results of the Mann-Whitney statistical test are presented in Figure 8. Then all the stated algorithms were compared with the proposed modification COBRA-SHA according to the Friedman statistical test. The obtained results are demonstrated in Figure 9 (the same notations as in Figure 6 and Figure 7 are used).
Thus, it was established that the results obtained by the proposed approach are statistically better, according to the Friedman and Mann-Whitney tests, than the results obtained by the mentioned biology-inspired algorithms (PSO, WPS, FSS, FFA, CSA and BA) and their modifications with the external archive (FFA-a, CSA-a and BA-a). Moreover, COBRA-SHA statistically outperformed the fuzzy-controlled COBRA-f. Furthermore, the experimental results for the benchmark problems from SET-3 showed that the COBRA-SHA approach is more useful for solving complex multi-modal optimization problems than the similar modification COBRA-fas. Therefore, the workability and usefulness of the proposed COBRA-SHA algorithm were demonstrated.

5.3.4. Population Sizes Change

Additionally, in this study, population size changes were observed while solving benchmark problems from SET-2 and SET-3 with 10 and 30 variables. Figure 10 shows the change of the COBRA-f, COBRA-fas and COBRA-SHA component population sizes during the optimization process on three functions from SET-2 with 10 variables, namely Schwefel’s function (the first column), Weierstrass’s function (the second column) and Ackley’s function (the third column), with the best-found fuzzy-controller parameters.
The figures in the first row demonstrate the behavior of the original fuzzy-controlled COBRA-f tuning procedure, the figures in the second row show the COBRA-fas modification, and the figures in the third row show the proposed COBRA-SHA approach. The behavior of these three tuning methods is quite different. The standard COBRA-f tends to give all resources to one component (as can be seen for Weierstrass's and Ackley's functions). However, for Schwefel's function, which is a complex optimization problem with many local minima, there was competition for resources between the PSO and BA approaches, while the FFA component still had the largest population size.
The COBRA-fas modification demonstrated similar behavior for Schwefel's function (CSA had the largest amount of resources, while FSS and BA competed for "second place"). However, for Ackley's function all the components had population sizes within the range [22, 23] during the optimization process. It should be noted that the same solutions were found by the COBRA-f and COBRA-fas approaches, but while COBRA-f needed 300 individuals across all populations to find a solution, the COBRA-fas modification used only 133.
Finally, the proposed algorithm COBRA-SHA increased all population sizes simultaneously (but differently): each population contained at least 20 individuals. Nevertheless, the largest amounts of resources were usually given to two components: for example, in the case of Schwefel’s function the winners were the FFA and BA algorithms, but for Weierstrass’s function, they were the WPS and FSS algorithms. It should be noted that while solving Ackley’s problem by COBRA-SHA all components had 49–51 individuals in their populations.
Next, Figure 11 shows the change of the COBRA-f, COBRA-fas and COBRA-SHA component population sizes during the optimization process on three functions from SET-2 with 30 variables, namely the Sphere function (the first column), the Quartic function (the second column) and Ackley’s function (the third column) with the best-found fuzzy-controller parameters. The algorithms demonstrate the same behavior as in the previous case (benchmark problems from SET-2 but with 10 variables).
However, let us consider the optimization process for the Quartic problem with the COBRA-fas approach. At first, the BA algorithm appeared to be the best choice for the fuzzy controller. Thus, it increased BA's population to 40 individuals, while the other populations had minimal sizes. After that, the population sizes did not change, and only after more than 250,000 calculations was the FFA approach able to improve the optimization process, with its population size starting to increase gradually. Therefore, in the end FFA had the largest amount of resources.
Finally, Figure 12 shows the change in the COBRA-f, COBRA-fas and COBRA-SHA component population sizes during the optimization process on three functions from SET-3, namely Rotated Discus Function (the first column), Shifted and Rotated HGBat Function (the second column) and Shifted and Rotated Expanded Scaffer’s F6 Function (the third column) with the best-found fuzzy-controller parameters. The first problem is unimodal, and the others are multi-modal.
The standard COBRA-f tuning method usually produces multiple oscillations, with the winning component changing over time. For the first problem, the COBRA-fas method could not decide which component was the best during the first 150,000 calculations, but later the population of the WPS algorithm started to increase gradually and the population of the FSS algorithm became three times larger than it was initially. For the second stated problem, the PSO component appeared to be the most successful at the beginning of the optimization process. However, its population size did not change after 20,000 calculations. On the other hand, the population size of the FSS component increased after 50,000 calculations, and it had the largest amount of resources by the end of the optimization process. For the last problem, as with the Discus function, COBRA-fas could not determine the winning component algorithm at first, but then it increased the population of the FFA approach and simultaneously reduced the sizes of all the other populations down to zero.
As for the COBRA-SHA modification, it did not reduce population sizes down to zero, thus providing a more diverse set of potential solutions. Regarding the Discus problem, even though the FFA component gave better results than the other biology-inspired algorithms at first, the WPS component started to outperform it quite early. Therefore, by the end of the optimization process the WPS component had the largest amount of resources and the FFA approach the second largest, while the populations of the other components had at least 20 individuals. A similar situation can be observed for the HGBat Function, with the FSS and PSO components as winners. As for the last benchmark problem, during the first 70,000 calculations the population of the FFA algorithm increased to more than 100 individuals, while at the same time the population sizes of the other algorithms were close to 20 and did not change. Nevertheless, the population size of the WPS component then started to increase significantly, still delivering goal function improvements. By the end of the optimization process, the WPS component algorithm had the largest number of individuals, yet that number was close to the number of individuals in FFA's population.
Thus, the demonstrated cases represent different scenarios in which the resource tuning is helpful, as it is able to change the algorithm's structure in accordance with the current requirements.

6. Conclusions

In this paper, a new modification of the COBRA meta-heuristic, namely the COBRA-SHA meta-heuristic, is proposed for solving real-valued optimization problems. To be more specific, the new modification is based on an alternative way of generating potential solutions. The stated technique uses a historical memory of successful positions found by individuals to guide them in different directions and thus to improve their exploration and exploitation abilities. The proposed method was applied to the components of the COBRA approach and to its basic procedures. The COBRA-SHA algorithm was tested using three sets of benchmark functions. The experimental results show that the performance of the proposed algorithm is superior to that of the other biology-inspired algorithms in exploiting the optimum, and that it also has advantages in exploration.
Still, in this study several of the simplest variants of the biology-inspired component algorithms were used in the proposed approach. Thus, further work should focus on implementing their newer versions in the collective of the COBRA-SHA algorithm, as well as on comparisons with them. Moreover, several parameters introduced for this modification were chosen empirically; therefore, the performance of the COBRA-SHA approach should be tested with different parameter adaptation schemes. Finally, the proposed modification should be applied to other types of optimization problems (constrained, large-scale, multi-objective and so on).

Author Contributions

Conceptualization, S.A.; Methodology, S.A.; Software, S.A., V.S.; Validation, S.A., V.S., D.E.; Formal analysis, O.S.; Investigation, S.A., V.S., D.E.; Resources, S.A.; Data curation, S.A.; Writing—original draft preparation, S.A.; Writing—review and editing, V.S.; Visualization, V.S.; Supervision, O.S.; Project administration, S.A., O.S.; Funding acquisition, O.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by internal grant No. 299 for young researchers at Reshetnev Siberian State University of Science and Technology in 2020.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Eberhart, R.C. Computational Intelligence: Concepts to Implementations; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 2007. [Google Scholar]
  2. Winker, P.; Gilli, M. Applications of optimization heuristics to estimation and modelling problems. Comput. Stat. Data Anal. 2004, 47, 211–223. [Google Scholar] [CrossRef]
  3. Nazari-Heris, M.; Mohammadi-Ivatloo, B.; Gharehpetian, G.B. A comprehensive review of heuristic optimization algorithms for optimal combined heat and power dispatch from economic and environmental perspectives. Renew. Sustain. Energy Rev. 2018, 81, 2128–2143. [Google Scholar] [CrossRef]
  4. Crepinsek, M.; Mernik, M.; Liu, S.H. Analysis of Exploration and Exploitation in Evolutionary Algorithms by Ancestry Trees. Int. J. Innov. Comput. Appl. 2011, 3, 11–19. [Google Scholar] [CrossRef]
  5. Žilinskas, A.; Gimbutienė, G. A hybrid of Bayesian approach based global search with clustering aided local refinement. Commun. Nonlinear Sci. Numer. Simul. 2019, 78, 104857. [Google Scholar] [CrossRef]
  6. Pepelyshev, A.; Zhigljavsky, A.; Žilinskas, A. Performance of global random search algorithms for large dimensions. J. Glob. Optim. 2018, 71, 57–71. [Google Scholar] [CrossRef] [Green Version]
  7. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar] [CrossRef]
  8. Dorigo, M.; Birattari, M.; Stutzle, T. Ant Colony Optimization. Comp. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  9. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  10. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  11. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  12. Uymaz, S.A.; Tezel, G.; Yel, E. Artificial Algae Algorithm (AAA) for Nonlinear Global Optimization. Appl. Soft Comput. 2015, 31, 153–171. [Google Scholar] [CrossRef]
  13. Mirjalili, S. Moth-flame Optimization Algorithm. Know.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  14. Wang, J. A New Cat Swarm Optimization with Adaptive Parameter Control. In Genetic and Evolutionary Computing; Sun, H., Yang, C.Y., Lin, C.W., Pan, J.S., Snasel, V., Abraham, A., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 69–78. [Google Scholar]
  15. Abbasi-ghalehtaki, R.; Khotanlou, H.; Esmaeilpour, M. Fuzzy Evolutionary Cellular Learning Automata model for text summarization. Swarm Evol. Comput. 2016, 30. [Google Scholar] [CrossRef]
  16. Lim, Z.Y.; Ponnambalam, S.G.; Izui, K. Nature Inspired Algorithms to Optimize Robot Workcell Layouts. Appl. Soft Comput. 2016, 49, 570–589. [Google Scholar] [CrossRef]
  17. Wolpert, D.H.; Macready, W.G. No Free Lunch Theorems for Optimization. Trans. Evol. Comp 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  18. Parsopoulos, K.E. Parallel Cooperative Micro-particle Swarm Optimization: A Master-slave Model. Appl. Soft Comput. 2012, 12, 3552–3579. [Google Scholar] [CrossRef]
  19. Van den Bergh, F.; Engelbrecht, A.P. Training product unit networks using cooperative particle swarm optimisers. In Proceedings of the IJCNN’01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222), Washington, DC, USA, 15–19 July 2001; Volume 1, pp. 126–131. [Google Scholar] [CrossRef]
  20. Mohammed, E.A.; Mohamed, K. Cooperative Particle Swarm Optimizers: A Powerful and Promising Approach. In Stigmergic Optimization; Springer: Berlin/Heidelberg, Germany, 2006; pp. 239–259. [Google Scholar] [CrossRef]
  21. Akhmedova, S.; Semenkin, E. Co-Operation of Biology Related Algorithms. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 2207–2214. [Google Scholar] [CrossRef]
  22. Yang, X.; Deb, S. Cuckoo search via levy flights. In Proceedings of the World Congress on Nature and Biologically Inspired Computing, Coimbatore, India, 9–11 December 2009; pp. 210–214. [Google Scholar] [CrossRef]
  23. Yang, X. Firefly algorithms for multimodal optimization. In Proceedings of the 5th Symposium on Stochastic Algorithms, Foundations and Applications, Sapporo, Japan, 26–28 October 2009; pp. 169–178. [Google Scholar] [CrossRef] [Green Version]
  24. Yang, X. A new metaheuristic bat-inspired algorithm. Nat. Inspired Coop. Strateg. Optim. Stud. Comput. Intell. 2010, 284, 65–74. [Google Scholar] [CrossRef] [Green Version]
  25. Filho, C.J.A.B.; De Lima Neto, F.B.; Lins, A.J.C.C.; Nascimento, A.I.S.; Lima, M.P. Fish School Search. In Nature-Inspired Algorithms for Optimisation; Chiong, R., Ed.; Springer: Berlin/Heidelberg, Germany, 2009; pp. 261–277. [Google Scholar] [CrossRef]
  26. Yang, C.; Tu, X.; Chen, J. Algorithm of marriage in honey bees optimization based on the wolf pack search. In Proceedings of the International Conference on Intelligent Pervasive Computing, Jeju City, Korea, 11–13 October 2007; pp. 462–467. [Google Scholar] [CrossRef]
  27. Akhmedova, S.; Semenkin, E. Investigation into the efficiency of different bionic algorithm combinations for a COBRA meta-heuristic. IOP Conf. Ser. Mater. Sci. Eng. 2017, 173, 012001. [Google Scholar] [CrossRef] [Green Version]
  28. Lee, C.C. Fuzzy logic in control systems: Fuzzy logic controller. I. IEEE Trans. Syst. Man Cybern. 1990, 20, 404–418. [Google Scholar] [CrossRef] [Green Version]
  29. Akhmedova, S.; Semenkin, E.; Stanovov, V.; Vishnevskaya, S. Fuzzy Logic Controller Design for Tuning the Cooperation of Biology-Inspired Algorithms. In Advances in Swarm Intelligence; Tan, Y., Takagi, H., Shi, Y., Niu, B., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 269–276. [Google Scholar]
  30. Shi, Y.; Eberhart, R.C. Fuzzy adaptive particle swarm optimization. In Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No.01TH8546), Seoul, Korea, 27–30 May 2001; Volume 1, pp. 101–106. [Google Scholar] [CrossRef]
  31. Zhang, W.; Liu, Y. Fuzzy logic controlled particle swarm for reactive power optimization considering voltage stability. In Proceedings of the 2005 International Power Engineering Conference, Singapore, 29 November–2 December 2005; pp. 1–555. [Google Scholar] [CrossRef]
  32. Akhmedova, S.; Semenkin, E.; Stanovov, V. Semi-supervised SVM with Fuzzy Controlled Cooperation of Biology Related Algorithms. In Proceedings of the 14th International Conference on Informatics in Control, Automation and Robotics, ICINCO 2017, Madrid, Spain, 26–28 July 2017; pp. 64–71. [Google Scholar] [CrossRef] [Green Version]
  33. Črepinšek, M.; Liu, S.H.; Mernik, M. Exploration and Exploitation in Evolutionary Algorithms: A Survey. ACM Comput. Surv. 2013, 45, 35:1–35:33. [Google Scholar] [CrossRef]
  34. Wang, J.; Zhou, B.; Zhou, S. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation. Intell. Neurosci. 2016, 2016, 2959370. [Google Scholar] [CrossRef] [Green Version]
  35. Tian, Y.; Gao, W.; Yan, S. An Improved Inertia Weight Firefly Optimization Algorithm and Application. In Proceedings of the 2012 International Conference on Control Engineering and Communication Technology, Liaoning, China, 7–9 December 2012; pp. 64–68. [Google Scholar] [CrossRef]
  36. Gao, Y.; An, X.; Liu, J. A Particle Swarm Optimization Algorithm with Logarithm Decreasing Inertia Weight and Chaos Mutation. In Proceedings of the 2008 International Conference on Computational Intelligence and Security, Suzhou, China, 13–17 December 2008; Volume 1, pp. 61–65. [Google Scholar] [CrossRef]
  37. Abadlia, H.; Smairi, N.; Ghedira, K. Particle Swarm Optimization Based on Dynamic Island Model. In Proceedings of the 2017 IEEE 29th International Conference on Tools with Artificial Intelligence (ICTAI), Boston, MA, USA, 6–8 November 2017; pp. 709–716. [Google Scholar] [CrossRef]
  38. Kushida, J.; Hara, A.; Takahama, T.; Kido, A. Island-based differential evolution with varying subpopulation size. In Proceedings of the 2013 IEEE 6th International Workshop on Computational Intelligence and Applications (IWCIA), Hiroshima, Japan, 13 July 2013; pp. 119–124. [Google Scholar] [CrossRef]
39. Lacerda, M.; Neto, H.; Ludermir, T.; Kuchen, H.; Lima Neto, F. Population Size Control for Efficiency and Efficacy Optimization in Population Based Metaheuristics. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8.
40. Alander, J.T. On optimal population size of genetic algorithms. In Proceedings of CompEuro 1992: Computer Systems and Software Engineering, The Hague, The Netherlands, 4–8 May 1992; pp. 65–70.
41. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
42. Zhang, J.; Sanderson, A.C. JADE: Adaptive Differential Evolution with Optional External Archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958.
43. Zitzler, E.; Thiele, L. Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Trans. Evol. Comput. 1999, 3, 257–271.
44. Wang, H.; Wu, Z.; Zhou, X.; Rahnamayan, S. Accelerating artificial bee colony algorithm by using an external archive. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 517–521.
45. Xue, B.; Qin, A.K.; Zhang, M. An archive based particle swarm optimisation for feature selection in classification. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 3119–3126.
46. Akhmedova, S.; Stanovov, V.; Erokhin, D.; Semenkina, O. Ensemble of the Nature-Inspired Algorithms with Success-History Based Position Adaptation; IOP Publishing: Bristol, UK, 2020; Volume 734, p. 012089.
47. Singh, S.; Arora, S. A Conceptual Comparison of Firefly Algorithm, Bat Algorithm and Cuckoo Search. In Proceedings of the 2013 International Conference on Control, Computing, Communication and Materials (ICCCCM), Allahabad, India, 3–4 August 2013; pp. 1–4.
48. Liang, J.; Qu, B.Y.; Suganthan, P.; Hernández-Díaz, A. Problem Definitions and Evaluation Criteria for the CEC 2013 Special Session on Real-Parameter Optimization; Technical Report 201212; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China, 2013.
49. Akhmedova, S.; Stanovov, V.; Erokhin, D.; Semenkin, E. Position adaptation of candidate solutions based on their success history in nature-inspired algorithms. Int. J. Inf. Technol. Secur. 2019, 11, 21–32.
50. Nenavath, H.; Jatoth, R.; Das, S. A synergy of the sine-cosine algorithm and particle swarm optimizer for improved global optimization and object tracking. Swarm Evol. Comput. 2018.
51. Liang, J.; Qu, B.Y.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2014 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization; Technical Report; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2013.
52. Jamil, M.; Yang, X.S. A Literature Survey of Benchmark Functions for Global Optimization Problems. Int. J. Math. Model. Numer. Optim. 2013, 4.
53. Akhmedova, S.; Stanovov, V.; Erokhin, D.; Semenkin, E. Success History Based Position Adaptation in Co-Operation of Biology Related Algorithms. In Proceedings of the Tenth International Conference on Swarm Intelligence, Chiang Mai, Thailand, 26–30 July 2019.
Figure 1. Fuzzy terms for all 6 outputs.
Figure 2. Results of the Mann–Whitney statistical test with p = 0.01, comparison of COBRA-SHA with other approaches (SET-1).
Figure 3. Results of the Friedman statistical test for SET-1.
Figure 4. Results of the Mann–Whitney statistical test with p = 0.01, comparison of COBRA-SHA with other approaches (SET-2, D = 10).
Figure 5. Results of the Mann–Whitney statistical test with p = 0.01, comparison of COBRA-SHA with other approaches (SET-2, D = 30).
Figure 6. Results of the Friedman statistical test for SET-2 (D = 10).
Figure 7. Results of the Friedman statistical test for SET-2 (D = 30).
Figure 8. Results of the Mann–Whitney statistical test with p = 0.01 for SET-3, comparison of COBRA-SHA with other approaches.
Figure 9. Results of the Friedman statistical test for SET-3.
Figure 10. Population size changes for SET-2 with 10 variables.
Figure 11. Population size changes for SET-2 with 30 variables.
Figure 12. Population size changes for SET-3.
Table 1. Part of the rule base.

Rule 1:  IF x1 is A3 AND x2, …, x6 are A4 AND x7 is DC THEN y1 is B3 AND y2, …, y6 are B1
Rule 2:  IF x1 is A2 AND x2, …, x6 are A4 AND x7 is DC THEN y1 is B2 AND y2, …, y6 are B2
Rule 3:  IF x1 is A1 AND x2, …, x6 are A4 AND x7 is DC THEN y1 is B1 AND y2, …, y6 are B3
…
Rule 19: IF x1, …, x6 are DC AND x7 is A1 THEN y1 is B1
Rule 20: IF x1, …, x6 are DC AND x7 is A2 THEN y1 is B2
Rule 21: IF x1, …, x6 are DC AND x7 is A3 THEN y1 is B3
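Rules of this shape fire in the usual Mamdani manner: each antecedent term contributes a membership degree, "DC" (don't care) always fires, rule activations are combined by min, and the fired consequents are defuzzified. The following is only a minimal sketch of that mechanism, not the controller from the paper: the triangular membership functions, the output singletons B1–B3, and the rule encoding are all illustrative assumptions.

```python
# Minimal Mamdani-style sketch of how rules such as 19-21 in Table 1 fire.
# Membership functions, term shapes, and output values are illustrative
# assumptions, NOT the exact fuzzy controller used in the paper.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical input terms A1..A3 on [0, 1]; "DC" (don't care) always fires.
TERMS = {
    "A1": lambda x: tri(x, -0.5, 0.0, 0.5),
    "A2": lambda x: tri(x, 0.0, 0.5, 1.0),
    "A3": lambda x: tri(x, 0.5, 1.0, 1.5),
    "DC": lambda x: 1.0,
}

# Hypothetical output singletons for B1..B3 (e.g., decrease/keep/increase).
OUTPUTS = {"B1": -1.0, "B2": 0.0, "B3": 1.0}

def fire(rules, x):
    """Weighted-average defuzzification; min acts as the AND operator.
    Each rule is (antecedent, consequent), where the antecedent is a list
    of (input index, term name) pairs; omitted inputs are don't-care."""
    num = den = 0.0
    for antecedent, consequent in rules:
        w = min(TERMS[t](x[i]) for i, t in antecedent)  # rule activation
        num += w * OUTPUTS[consequent]
        den += w
    return num / den if den else 0.0

# Rules 19-21 from Table 1: only x7 (index 6) matters for y1.
rules_y1 = [
    ([(6, "A1")], "B1"),
    ([(6, "A2")], "B2"),
    ([(6, "A3")], "B3"),
]

y1 = fire(rules_y1, [0.5] * 6 + [1.0])  # x7 = 1.0 lies in A3, so y1 -> B3
```

With x7 = 1.0 only A3 fires fully, so the defuzzified output equals the B3 singleton; intermediate x7 values blend neighbouring consequents.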
Table 2. Minimization results of 23 benchmark functions from SET-1 for compared algorithms.

f  |  PSO  |  WPS  |  FSS  |  CSA  |  FFA  |  BA  |  FFA-a  |  CSA-a  |  BA-a  |  COBRA-f  |  COBRA-fas  |  COBRA-SHA
1mean1.59E−065.79E−060.0007575.18E−060.003130.0003880.0013145.71E−077.78E−053.00E−109.75E−338.34E−104
sd2.43E−064.19E−067.54E−056.99E−063.73E−050.0003320.0003671.85E−076.51E−059.00E−105.16E−324.49E−103
med1.36E−075.19E−060.0007191.49E−060.0031230.0004730.0011725.39E−076.60E−057.08E−233.50E−960
worst5.62E−061.92E−050.0009071.76E−050.0033310.0008510.0025261.08E−060.0002213.00E−092.88E−312.50E−102
2mean0.1866670.2274030.0281720.0011340.4038810.0351590.2810520.0031970.0315175.06E−053.37E−329.86E−38
sd0.0763030.0127880.0127150.0006821.034620.0227330.4619510.0006030.0240530.0001111.81E−315.31E−37
med0.20.2259120.0261710.0010280.0506730.03101022450.0029040.0294925.30E−172.10E−662.09E−130
worst0.30.2454050.0545230.0023395.532590.0834242.307180.0041410.0736880.00031.01E−302.96E−36
3mean0.0025860.2380680.0285310.0792770.0953340.0164870.0238810.0030360.0042843.15E−082.00E−245.79E−45
sd0.0047290.0514570.0046590.0954240.1457680.0205470.0436870.0012820.0037491.70E−071.07E−233.12E−44
med0.0013770.2241950.0296810.0239660.0234250.0066780.0106450.0028620.0030247.74E−316.79E−1168.30E−87
worst0.0155630.3176260.0333840.2237890.6074880.0730990.2380690.0088040.0147039.45E−075.98E−231.74E−43
4mean0.7879910.2254980.0335730.0401280.2325270.0021590.0880040.0032790.0026971.77E−182.94E−142.16E−16
sd0.0725340.0376010.0107260.0209810.0974670.0013560.1272330.0005190.0014751.73E−181.32E−131.16E−15
med0.7635440.1970980.0341170.0522920.2002870.0019250.0492110.0029010.0029511.40E−187.62E−393.76E−25
worst0.9043160.2808720.0493040.0621390.6672880.0046930.649080.0040110.0046916.37E−187.32E−136.47E−15
5mean24.393526.76830.96880.04477732.98730.56267829.88960.0018520.540810.6324810.7090680.447074
sd1.172260.2607720.3973440.0119046.951870.388682.732870.0004020.3343561.596470.6513271.21442
med25.314126.741731.11820.04943729.7310.20646528.68060.0016170.5470220.0761410.3969986.09E−06
worst25.432827.109231.13020.05100858.51460.98109739.76380.0028820.9755697.067452.791675.06674
6mean4.37E−07000.000840.06940800.0357590.0008560000
sd3.80E−07000.0003060.00102900.0118960.0002460000
med5.39E−07000.0007320.06930600.0323950.0010380000
worst1.21E−06000.0012170.07156700.0622170.0010380000
7mean0.0227980.0110720.0340650.0013860.1198580.0003630.0234820.00030.0002830.0009710.0001640.000183
sd0.0096440.0063790.0061150.0007260.0244290.0009160.0098037.16E−050.0004550.0024370.0001840.000123
med0.0180510.0139060.03543150.0018510.111580.0001720.020190.0003539.16E−050.0001480.0001050.000172
worst0.0410910.021560.0456580.0022480.1732040.0051270.0607960.0004650.0016170.0095130.0009440.000562
8mean−3365.88−3715.84−1953.42−3833.45−2004.41−4113.93−2233.67−4189.83−4095.29−4080.25−4129.2−4187.24
sd348.7080.365664349.539102.13434.4298241.696218.2540340.29297.391325.69613.9255
med−3597.64−3715.98−1924.47−3833−2004.41−4189.83−2291.01−4189.83−4189.83−4189.83−4189.83−4189.83
worst−2999.38−3714.49−1596.36−3594.6−1969.98−3107.78−1300.58−4189.83−2709.11−2973.94−2375.27−4112.25
9mean25.14970.44348525.95960.00344331.10390.01407621.60721.42E−050.012228000
sd14.5690.1502929.924430.0001827.90540.0143052.16562.47E−060.013029000
med23.38260.47666923.39990.00342926.3810.00896921.74691.46E−050.006788000
worst44.98720.61206940.53180.00372444.87810.04684226.21291.46E−050.039491000
10mean2.335240.2865030.0227240.0072192.19393−4.44E−161.879080.001914−4.44E−16−4.44E−16−4.44E−16−4.44E−16
sd5.221050.1070990.0062660.0339850.5660600.6752310.0001360000
med0.0002380.2332520.0228540.0007112.0135−4.44E−161.646450.001889−4.44E−16−4.44E−16−4.44E−16−4.44E−16
worst14.00990.4984790.0329240.1901524.68214−4.44E−164.789620.002645−4.44E−16−4.44E−16−4.44E−16−4.44E−16
11mean0.0223670.6678550.029140.0053821.37616.23E−060.3307170.0044047.38E−066.68E−1200
sd0.0120690.069760.0075090.000530.10584.95E−060.2496470.0004216.54E−063.60E−1100
med0.0172440.6643880.031150.0055281.337415.67E−060.2462320.0044195.33E−06000
worst0.0416320.7622230.0406750.006091.686131.65E−051.441680.0052442.27E−052.00E−1000
12mean0.8328040.0020540.0302860.0071051.238210.2853611.112341.62E−050.1656090.0004710.000154.74E−05
sd2.41120.0008810.0101610.0259640.4747510.3038340.5381478.53E−060.1366020.0017610.0006070.000253
med0.0326220.0019470.0272992.22E−051.091730.157080.9144241.48E−050.1443592.45E−053.37E−058.97E−16
worst8.06640.0036080.0549650.1042463.434081.17813.499613.61E−050.5430770.0095780.0034190.001412
13mean0.0697190.0389750.0309592.46E−050.8364760.2921570.4776421.47E−050.3994840.0021640.0011141.15E−09
sd0.1347670.0177120.0138198.74E−050.1205990.2041970.2393677.57E−060.2060860.005970.0045353.02E−09
med0.0001750.0481610.0339393.92E−060.8010980.3980780.4032631.03E−050.1443590.0004423.14E−162.08E−18
worst0.5217170.0657030.0529930.0004921.203270.7996611.400092.66E−050.7981640.0309620.023751.04E−08
14mean0.9980.9981.56420.9981.99470.9981.015660.9980.9980.9980.9980.998
sd07.08E−120.8926331.57E−160.9798121.75E−160.060819.90E−164.74E−16000
med0.9980.9981.06050.9981.9920.9980.9980.9980.9980.9980.9980.998
worst0.9980.9983.96860.9986.90340.9981.241360.9980.9980.9980.9980.998
15mean0.0006410.0059240.0005240.0003660.0019270.0039830.0008640.0003240.0039810.0005660.000310.000307
sd0.0002160.0087072.17E−198.05E−050.0001190.0023920.0009776.55E−050.0027940.0002691.13E−051.12E−07
med0.0007830.0006730.0005240.0003650.0019750.0029920.0005580.0003080.0026150.0003890.0003070.000307
worst0.0007830.0203630.0005240.0007830.0022420.0084160.0057580.0006530.0086630.0009410.0003690.000308
16mean−1.0316−1.0316−1.0316−1.0316−1.0281−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316
sd01.97E−076.66E−162.69E−050.001495.51E−163.97E−065.20E−082.88E−07000
med−1.0316−1.0316−1.0316−1.0316−1.0268−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316
worst−1.0316−1.0316−1.0316−1.0315−1.0268−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316−1.0316
17mean0.397890.397890.397890.3986040.398090.397890.397890.397890.397890.397890.397890.39789
sd01.34E−0800.0015962.64E−082.01E−152.12E−055.42E−153.19E−16000
med0.397890.397890.397890.3979180.398090.397890.397890.397890.397890.397890.397890.39789
worst0.397890.397890.397890.4032150.398090.397890.3980060.398010.397890.397890.397890.39789
18mean3.00E+003.000013.01453.0813.027983.00E+003.000113.017323.00E+003.00E+003.00E+003.00E+00
sd2.22E−151.41E−068.88E−160.0324410.0816172.86E−140.0004160.0036921.42E−14000
med3.00E+003.00E+003.01453.05793.01283.00E+003.00E+003.01843.00E+003.00E+003.00E+003.00E+00
worst3.00E+003.000013.01453.14643.46753.00E+003.002173.01843.00E+003.00E+003.00E+003.00E+00
19mean−3.8628−3.8628−3.8617−3.8627−3.8312−3.2190−3.8627−3.8472−3.8628−3.8628−3.8628−3.8628
sd3.11E−152.76E−070.0005588.61E−050.0225150.6381750.000430.0160261.46E−06003.44E−06
med−3.8628−3.8628−3.8619−3.8627−3.8328−3.7754−3.8628−3.8614−3.8628−3.8628−3.8628−3.8628
worst−3.8628−3.8628−3.8609−3.8623−3.7831−2.2391−3.8604−3.8257−3.8628−3.8628−3.8628−3.8628
20mean−3.2429−3.3223−3.3137−3.2012−3.2179−3.3192−3.2908−3.2523−3.3223−3.3221−3.3223−3.3223
sd0.0561945.58E−050.0022820.0015060.3714320.0164190.1172590.0786328.15E−050.0010240.0001260.000142
med−3.2032−3.3222−3.3121−3.2017−3.3223−3.3222−3.3224−3.3121−3.3223−3.3223−3.3224−3.3224
worst−3.2032−3.3222−3.3121−3.1964−1.3854−3.2307−2.684−3.1471−3.322−3.3166−3.3217−3.3216
21mean−4.4259−10.1532−10.147−10.1531−9.67012−10.153−10.1231−10.1532−9.74075−10.153−10.153−10.1532
sd3.15961.64E−058.88E−154.68E−051.214320.0009130.1384992.23E−051.5563300.0012170
med−2.6829−10.1532−10.147−10.1531−10.1387−10.1532−10.1532−10.1532−10.1532−10.153−10.1532−10.1532
worst−2.6829−10.1532−10.147−10.153−4.07347−10.1481−9.38354−10.1531−2.56105−10.153−10.1464−10.1532
22mean−5.2731−10.4029−10.3966−10.4027−9.89302−9.46839−10.3104−10.4029−9.85423−10.4028−10.4029−10.4029
sd0.9527336.63E−060.0027020.0001780.1005722.110490.4060342.20E−051.597680.0002725.62E−050
med−5.0877−10.4029−10.3964−10.4028−9.9117−10.4029−10.4029−10.4029−10.4029−10.4029−10.4029−10.4029
worst−5.0877−10.4029−10.3937−10.4021−9.35142−3.55911−8.14783−10.4028−5.02718−10.4014−10.4026−10.4029
23mean−4.0308−10.5364−10.531−10.5362−9.37719−10.2025−10.4969−10.5364−10.5309−10.5227−10.5364−10.5364
sd2.91281.07E−050.0020130.0001541.67991.208320.166595.05E−050.0292620.07343800
med−2.8066−10.5364−10.53−10.5362−10.0336−10.5364−10.5364−10.5364−10.5363−10.5364−10.5364−10.5364
worst−2.4217−10.5364−10.53−10.5359−4.41148−4.61133−9.61849−10.5362−10.3733−10.1272−10.5364−10.5364
Table 3. Results of the Mann–Whitney statistical test with p = 0.01 for SET-1, comparison of COBRA-SHA with other approaches.

        PSO  WPS  FSS  CSA  FFA  BA   FFA-a  CSA-a  BA-a  COBRA-f  COBRA-fas
+       18   15   20   22   23   14   18     16     12    10       4
=       5    8    3    0    0    9    5      7      11    13       18
−       0    0    0    1    0    0    0      0      0     0        1
Total   18   15   20   22   23   14   18     16     12    10       3
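The +/=/− rows above count, per benchmark function, whether COBRA-SHA was statistically better, indistinguishable, or worse than each rival. One such pairwise decision can be sketched with the rank-sum (Mann–Whitney U) statistic under the normal approximation; the samples below and the z-threshold for p = 0.01 are illustrative, not the paper's data.

```python
# Illustrative pairwise Mann-Whitney comparison of the kind used to build
# the +/=/- rows: two samples of best-found values on one benchmark are
# compared at significance level p = 0.01 (two-sided z-threshold ~2.576).

def mann_whitney_u(a, b):
    """U statistic: pairs where a's value is below b's (ties count 1/2)."""
    u = 0.0
    for x in a:
        for y in b:
            if x < y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

def compare(a, b, z_crit=2.576):
    """Return '+' if a is significantly lower (better, for minimization),
    '-' if significantly higher, '=' otherwise. Normal approximation,
    no tie correction -- a sketch, not a full statistical routine."""
    n1, n2 = len(a), len(b)
    u = mann_whitney_u(a, b)
    mu = n1 * n2 / 2.0
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12.0) ** 0.5
    z = (u - mu) / sigma
    if z > z_crit:
        return "+"   # a's values rank systematically lower: a wins
    if z < -z_crit:
        return "-"
    return "="

# Toy runs (invented, not the paper's data): 30 results per algorithm.
sha = [1e-6 * i for i in range(1, 31)]
rival = [1e-2 * i for i in range(1, 31)]
print(compare(sha, rival))  # clearly separated samples -> "+"
```

Running such a comparison for every benchmark in a set and tallying the three outcomes yields exactly one column of the table above.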
Table 4. Experimental results for 10-dimensional functions from SET-2.

f  |  PSO  |  WPS  |  FSS  |  CSA  |  FFA  |  BA  |  FFA-a  |  CSA-a  |  BA-a  |  COBRA-f  |  COBRA-fas  |  COBRA-SHA
1best1.63E−248.63E−080.0001616.56E−080.0005996.67E−100.0002734.90E−084.75E−13000
worst3.51E−243.69E−060.0004031.70E−070.0026394.44E−060.0017231.50E−073.99E−062.81E−156.47E−161.05E−223
mean8.60E−248.42E−070.000261.07E−070.0009951.10E−060.0006781.05E−079.46E−071.88E−162.16E−173.58E−225
sd1.06E−239.00E−079.64E−053.02E−080.000581.15E−060.0004052.64E−081.11E−067.02E−161.16E−160
2best0.0495140.2138447.401860.00117211.40980.08905110.74380.0001260.0889810.0440437.54E−241.25E−11
worst0.173050.3305758.463110.01007520.06670.08995416.02950.0015010.0899810.1894791.79020.015126
mean0.0822670.2579687.991090.00516815.61370.089313.4520.0006790.0892170.0993370.0639240.001354
sd0.0411290.0433180.3938170.002633.0230.0002871.962090.0003730.0002730.0311520.3208570.003361
3best1.45E−171.70E−150.0004847.54E−050.0275440.0010730.0077733.83E−050.00023106.29E−1700
worst6.19E−163.77E−050.0013540.000290.0923370.0091790.0515930.0002460.0073691.39E−114.25E−135.67E−104
mean1.65E−162.66E−060.0009590.0001930.0587740.0052570.0254860.0001630.0031617.51E−131.54E−141.89E−105
sd1.72E−169.38E−060.0003045.54E−050.0266810.001910.0156225.10E−050.00192.89E−127.64E−141.02E−104
4best118.438236.891870.76236.881970.43916.0291773.07900000
worst950.541830.592658.191191.972807.619507.5692882.4990474.631899.2300
mean526.392584.912217.53586.122436.108333.1462387.664063.22665.3600
sd227.166152.965254.724316.432292.505136.605313.1440132.54340.7200
5best0.0172410.0136630.0040565.71E−050.1148191.99E−090.0200864.81E−052.76E−09000
worst0.0762420.0747250.0086990.0001680.318234.46E−080.0385370.0001512.72E−08000
mean0.0525420.0435170.0066240.0001090.2099341.04E−080.0292659.81E−058.10E−09000
sd0.0177330.0138050.0013112.35E−050.0588161.19E−080.004843.10E−056.80E−09000
6best−7.15E−09−9.76E−100.0012330.0007450.0520970.0718870.0385340.0007390.016961−7.06E−09−7.15E−09−1.59E−09
worst−7.15E−096.46E−130.0022080.0012410.078050.2813160.0624140.0011210.2909651.06E−14−7.36E−10−3.69E−10
mean−7.15E−09−2.27E−100.0015150.0010670.0638430.1461930.0508230.0009840.13815−1.21E−09−3.28E−09−1.26E−09
sd03.1E−100.0002010.0001050.0066250.0451550.0065929.61E−050.0590342.28E−092.07E−094.01E−10
7best3.59E−059.19E−050.0009434.12E−050.0108962.98E−110.0011422.35E−056.21E−129.63E−111.05E−093.60E−07
worst0.000440.0024630.003620.0005450.0790710.0001040.0110060.0002950.0001220.0013660.0008890.000625
mean0.0001720.0008110.0022050.0002510.0258082.53E−050.005010.0001124.31E−050.0001870.0001063.87E−05
sd0.0001050.0005990.0006450.0001210.0160322.81E−050.003135.91E−053.97E−050.0002770.000180.000117
8best01.59E−050.0798870.0003981.165381.22E−050.9136144.70E−066.98e−06000
worst5.969750.0004824.112650.0008973.770510.0042593.106528.42E−060.002881000
mean1.492440.0001541.464150.0006882.678990.0011661.925216.88E−060.000892000
sd1.424560.0001401.066610.0001010.8821950.0010230.6794041.05E−060.000828000
9best6.33E−100.0038010.0233370.0004090.308134−4.44E−160.3369580.000284−4.44E−16−4.44E−16−4.44E−16−4.44E−16
worst4.37E−090.0096050.0531860.0006010.622152−4.44E−160.6621980.000501−4.44E−16−4.44E−16−4.44E−16−4.44E−16
mean2.30E−090.0066370.0379140.0005610.551782−4.44E−160.4732580.000432−4.44E−16−4.44E−16−4.44E−16−4.44E−16
sd1.08E−090.0019270.008964.21E−050.07840400.0931975.53E−050000
Table 5. Experimental results for 30-dimensional functions from SET-2.

f  |  PSO  |  WPS  |  FSS  |  CSA  |  FFA  |  BA  |  FFA-a  |  CSA-a  |  BA-a  |  COBRA-f  |  COBRA-fas  |  COBRA-SHA
1best7.48E−096.36E−060.0006787.94E−070.0074673.26E−070.0035073.28E−071.53E−0703.50E−2450
worst4.42E−078.57E−050.0023391.61E−060.0124577.42E−060.007991.85E−065.68E−061.39E−099.89E−172.12E−153
mean8.05E−082.57E−050.0012511.21E−060.0098753.15E−060.0067199.97E−072.32E−065.22E−115.10E−187.08E−155
sd1.17E−071.98E−050.0005452.23E−070.0018312.10E−060.0016423.29E−071.60E−062.50E−101.99E−173.81E−154
2best4.0231123.326528.72360.03861924.22720.11348323.34480.0023020.1856280.2859185.70E−243.50E−13
worst95.485426.444331.97580.06725939.65490.9976132.67990.0043640.9890791.490148.562076.13598
mean27.13325.849430.340.05550433.76130.35717327.16040.0032690.3663280.7358020.3309070.453617
sd23.79750.6075710.9611720.0075425.932070.3096723.49770.0006270.307630.2439951.540651.44316
3best5.07E−062.71E−120.0600120.0010280.3225040.0001340.1759580.000939.35E−0505.95E−2870
worst0.0001350.0018320.2370430.0027910.5503350.8509510.3712750.0022750.6790431.44E−073.09E−061.39E−83
mean2.85E−050.0001370.1145840.0016760.4133670.3028920.2483860.0013260.21679.40E−091.03E−074.64E−85
sd2.63E−050.0004540.0499790.0004050.0619470.2687160.0634390.0003360.2048562.96E−085.56E−072.50E−84
4best1430.16711.097256.252065.454302.271065.993839.31172.041667.29479.79973.990.69
worst4009.782493.498048.993772.66232.183575.74976.62230.443808.238765.018428.672164.89
mean2880.631722.747718.992931.714970.562355.6164306.471616.22483.3643636.123448.09992.24
sd665.536346.134251.117396.32505.547693.659324.354281.819484.672009.841529.11752.587
5best2.01E−050.021250.0319640.0045790.9853178.23E−070.1622010.0030884.06E−07000
worst0.0716290.2013010.0543360.0065154.013425.35E−050.2789350.0050831.83E−05000
mean0.0159250.0686280.0448080.0055491.949257.87E−060.2175940.0040984.55E−06000
sd0.017540.0358890.0052580.0004440.6290161.08E−050.0305190.0004823.51E−06000
6best−1.43E−08−2.11E−090.0091110.0039850.1598041.645880.1370220.00377641.40487−1.52E−08−2.14E−08−4.76E−09
worst4.000058.02E−110.0126090.0046520.2153853.775690.1773110.0044443.08984−1.42E−14−2.07E−08−3.46E−09
mean0.393583−3.62E−100.0107570.004410.1925162.743610.1582960.0041922.51587−1.77E−09−2.14E−08−4.72E−09
sd0.8873195.25E−100.0008620.000160.0140270.4303950.0104870.0001690.3946774.42E−091.75E−102.36E−10
7best0.0006490.0017280.0181160.0030750.0421882.70E−060.0220630.0002327.90E−051.50E−067.50E−076.40E−08
worst0.0025630.0434890.0528110.0086070.3294530.0012550.1225120.0007970.0013080.002490.0011020.000455
mean0.0013120.0085380.0302780.0053950.1098270.0006150.0513770.0005320.0005220.0006340.0002174.33e−05
sd0.0004420.0077380.008890.0012070.0653030.0003570.0247710.0001050.0003120.0007410.0002990.000103
8best5.306450.0049857.808480.00096519.18486.28E−0513.41423.00E−054.60E−05000
worst45.15190.03966416.55510.00117744.80640.01924328.18593.77E−050.040512000
mean22.13460.0115812.16110.00108231.74360.00669620.6673.40E−050.005412000
sd8.625440.0069532.03925.77E−057.300510.0059764.83781.97E−060.008099000
9best0.0003710.0223250.0047680.0006161.20685−4.44E−161.219790.000601−4.44E−16−4.44E−16−4.44E−16−4.44E−16
worst0.0029490.0688540.0352690.0006461.6977−4.44E−161.62120.000644−4.44E−16−4.44E−16−4.44E−16−4.44E−16
mean0.0014440.0362040.0255150.0006341.4192−4.44E−161.378730.000631−4.44E−16−4.44E−16−4.44E−16−4.44E−16
sd0.0006980.011330.0076935.91E−060.12777200.0942047.96E−060000
Table 6. Results of the Mann–Whitney statistical test with p = 0.01 for SET-2, comparison of COBRA-SHA with other approaches.

D        PSO  WPS  FSS  CSA  FFA  BA   FFA-a  CSA-a  BA-a  COBRA-f  COBRA-fas
10   +   8    9    9    9    9    7    9      8      7     5        2
     =   0    0    0    0    0    2    0      1      2     4        5
     −   1    0    0    0    0    0    0      0      0     0        2
30   +   8    9    9    8    9    8    9      8      8     6        4
     =   0    0    0    1    0    1    0      1      1     3        2
     −   1    0    0    0    0    0    0      0      0     0        3
Total    14   18   18   17   18   15   18     16     15    11       1
Table 7. Minimization results of 16 benchmark functions from SET-3 for compared algorithms.

f  |  PSO  |  WPS  |  FSS  |  CSA  |  FFA  |  BA  |  FFA-a  |  CSA-a  |  BA-a  |  COBRA-f  |  COBRA-fas  |  COBRA-SHA
1mean8.39E+073.98E+061.51E+091.19E+081.21E+099.61E+074.35E+089.68E+074.13E+073.29E+062.90E+068.04E+05
sd8.02E+074.58E+064.12E+083.48E+072.33E+089.70E+072.02E+086.10E+062.15E+071.19E+062.96E+064.52E+05
best3.85E+063.37E+057.10E+085.95E+077.26E+082.84E+073.02E+088.89E+071.07E+071.17E+061.60E+052.89E+05
2mean7.21E+071924421.38E+091.17E+081.10E+096.02E+074.68E+085.71E+074.03E+0793605.641.972190.998929
sd8.41E+0790259.23.43E+083.26E+072.05E+086.46E+072.08E+081.29E+072.95E+071.43E+055.099842.037952
best2.21E+06701246.55E+084.28E+076.67E+082.84E+073.14E+084.70E+072.62E+077012.770.0004362.19E-05
3mean12214.13026.728982.223591.593556.533275.333076.882458.542741.962336.282412.751085.76
sd103.242128.77933.59822.8482530.092423.511832.3736.15312507.812143.462682.251553.823
best131.23141.635681.4981934.53734.757431.92252.7532402.45200.13376.7570.17752117.579
4mean593.719101.177460.351689.687113.594489.22997.9647416.944388.562142.13983.822878.8988
sd466.56733.5444404.367143.18428.6232208.08223.035105.033173.89248.012332.082742.76286
best113.05322.289181.996355.98490.372227.322382.7883356.9542.7559720.60714.442910.086259
5mean20.685320.2331220.836821.232820.992620.943920.232720.992520.758620.037920.273220.155
sd0.2388670.3158970.0537970.0657820.0544570.1354020.2681220.0693870.0800940.1127780.4020760.281863
best20.083120.035520.621221.060120.809520.695320.021420.892820.654220.000120.001419.9997
6mean18.828318.869845.158748.012241.220443.077640.947238.847838.523918.264715.185513.5046
sd3.421984.002681.606561.656271.552772.362561.701322.007841.760744.13082.969691.99827
best10.78128.1627641.88744.14536.807838.009435.68337.506736.525210.68818.383678.86692
7mean51.88660.51851558.69230.75326983.42121.1289458.98040.5922041.089890.4115810.1043030.003761
sd25.08870.30839124.12620.10567642.04670.18925830.60960.0840480.1530350.1164030.587170.005962
best0.8267910.04583724.00780.5053527.686580.5446824.821370.5247590.7336260.20717401.99E-06
8mean62.38896.431767.253848.418275.220149.364163.728846.625749.49668.2202614.97157.70399
sd24.417749.305617.42483.3122221.8423.5625628.3174.477132.7919317.02682.493955.69825
best22.925239.798735.018839.192239.508839.921131.984333.911545.13261.000278.148990.025269
9mean127.545608.887505.51498.126370.515207.649308.54489.142112.552123.304143.37493.82118
sd31.791444.591834.533739.957925.841239.620924.323944.3328.28338.349855.196823.69127
best71.7975512.787439.895401.902302.235107.96259.269354.22461.285555.10569.08248.7532
10mean2174.142177.712092.543066.583368.451606.312251.612782.381563.79789.262773.36440.916
sd530.2211979.64704.005362.634948.91667.3303566.22352.35942.8626829.9961030.89308.284
best1184.98420.536744.6112060.351867.11422.061181.371810.781461.7416.89560.3831257.76201
11mean3283.82661.237360.364232.334048.153430.393375.723916.733173.483063.512640.342144.361
sd676.481503.675350.278377.418382.531541.859314.884361.479405.867537.292528.1465329.8605
best2158.171393.716469.713514.683146.972454.842474.33227.442724.361843.911488.291408.18
12mean1.389760.6911411.928910.5265623.644010.541542.309590.4300850.4172870.2871380.662620.244787
sd0.6210950.1382720.3348280.115410.5538650.1335140.4770870.4098380.0826420.1128530.523440.184052
best0.2518040.3420531.217560.2561582.224350.3035771.360580.0849310.3168760.1162260.1061710.111626
13mean0.9481960.5722780.8932160.5677080.7912440.8865690.7462480.5155270.902040.5087580.5068730.386105
sd0.560970.0397520.3512530.0407770.0695770.0754310.0680780.0975570.0574020.1126160.1161070.103661
best0.5071780.4805670.6137090.4867250.7033180.7134840.5984440.3089690.785020.2904960.2775320.19549
14mean1.102030.8099760.9394550.5490191.812421.617331.182730.5507611.627750.2899250.5125430.272963
sd0.7610131.340690.4041220.0764611.130610.1904280.7698680.0846410.1704420.0937590.2966110.04739
best0.5838660.2584880.5169750.3414370.5952861.086810.5004110.3684131.175830.1353450.1974020.190261
15mean16.488111.154913.134118.375738.192720.87628.031618.110621.551414.851610.220310.0955
sd33.31293.264550.1753242.8822587.33532.7968162.20563.037032.398594.828083.904244.73805
best3.737796.2238412.865111.70770.67784814.23421.3046210.050316.1327.929924.057264.13327
16mean11.861110.6817113.215613.583613.462313.508312.970713.455613.103111.542512.37519.99059
sd0.4397750.7662010.2287760.2023260.2725790.326760.1913610.2087460.3637150.4541960.9055791.13569
best10.85198.3241412.300613.071812.860312.780612.621913.065212.74510.631210.41917.30814
Table 8. Results of the Mann–Whitney statistical test with p = 0.01 for SET-3, comparison of COBRA-SHA with other approaches.

        PSO  WPS  FSS  CSA  FFA  BA   FFA-a  CSA-a  BA-a  COBRA-f  COBRA-fas
+       15   15   16   16   15   16   15     15     16    12       11
=       1    1    0    0    1    0    1      1      0     4        5
−       0    0    0    0    0    0    0      0      0     0        0
Total   15   15   16   16   15   16   15     15     16    12       11
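The Friedman-test figures (Figures 3, 6, 7 and 9) compare all algorithms jointly rather than pairwise: on each benchmark the algorithms are ranked by their results (1 = best, ties share the mean rank), and the ranks are averaged over all functions. A minimal sketch of that mean-rank computation follows; the result matrix is invented for illustration.

```python
# Sketch of the mean-rank computation behind a Friedman-style comparison:
# rank algorithms per benchmark function (1 = best, lower error is better),
# then average the ranks over all functions. Data below are invented.

def mean_ranks(results):
    """results[f][a] = error of algorithm a on function f.
    Returns the average rank per algorithm; tied values share their
    mean rank, as is standard for the Friedman test."""
    n_algs = len(results[0])
    totals = [0.0] * n_algs
    for row in results:
        order = sorted(range(n_algs), key=lambda a: row[a])
        i = 0
        while i < n_algs:
            # extend j over the block of algorithms tied with position i
            j = i
            while j + 1 < n_algs and row[order[j + 1]] == row[order[i]]:
                j += 1
            shared = (i + j) / 2.0 + 1.0   # mean rank of the tied block
            for k in range(i, j + 1):
                totals[order[k]] += shared
            i = j + 1
    n_funcs = len(results)
    return [t / n_funcs for t in totals]

# 3 hypothetical algorithms on 4 hypothetical functions:
table = [
    [0.1, 0.3, 0.2],
    [0.0, 0.0, 0.5],   # first two algorithms tie
    [1.0, 2.0, 3.0],
    [0.2, 0.1, 0.1],   # last two algorithms tie
]
print(mean_ranks(table))  # -> [1.625, 2.0, 2.375]
```

The Friedman statistic itself is then computed from these average ranks; a lower mean rank in the figures indicates a stronger algorithm across the whole set.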

Akhmedova, S.; Stanovov, V.; Erokhin, D.; Semenkina, O. Success History-Based Position Adaptation in Fuzzy-Controlled Ensemble of Biology-Inspired Algorithms. Algorithms 2020, 13, 89. https://doi.org/10.3390/a13040089