Article

A Developed Artificial Bee Colony Algorithm Based on Cloud Model

Ye Jin, Yuehong Sun and Hongjiao Ma
1 School of Mathematical Sciences, Nanjing Normal University, Nanjing 210023, China
2 Jiangsu Key Laboratory for NSLSCS, Nanjing Normal University, Nanjing 210023, China
* Author to whom correspondence should be addressed.
Mathematics 2018, 6(4), 61; https://doi.org/10.3390/math6040061
Submission received: 11 March 2018 / Revised: 7 April 2018 / Accepted: 10 April 2018 / Published: 18 April 2018
(This article belongs to the Special Issue Evolutionary Computation)

Abstract

The Artificial Bee Colony (ABC) algorithm is a bionic intelligent optimization method. The cloud model is an uncertainty conversion model between a qualitative concept T̃, expressed in natural language, and its quantitative representation; it integrates probability theory and fuzzy mathematics. A developed ABC algorithm based on the cloud model is proposed to enhance the accuracy of the basic ABC algorithm and to avoid getting trapped in local optima, by introducing a new selection mechanism, replacing the onlooker bees' search formula and changing the scout bees' update formula. Experiments on CEC15 show that the new algorithm has a faster convergence speed and higher accuracy than the basic ABC and some cloud-model-based ABC variants.

1. Introduction

Metaheuristics [1,2,3] have developed rapidly in recent years, and many scholars have contributed to this area. The Artificial Bee Colony (ABC) algorithm [4] is a swarm intelligence algorithm in this family. It is an optimization algorithm that mimics the foraging behavior of honey bees. It provides a population-based search procedure in which individuals, called food positions, are modified by the artificial bees as the iterations proceed. The bees' aim is to discover the food sources with the highest nectar amounts.
From 2007 to 2009, Karaboga et al. [5,6,7] presented a comparative study on optimizing a large set of numerical test functions. They compared the performance of the ABC algorithm with the genetic algorithm (GA), particle swarm optimization (PSO), differential evolution (DE) and evolution strategies (ES). The simulation results showed that ABC can be efficiently employed to solve engineering problems with high dimensions, and that it can produce very good results at a low computational cost using only three control parameters (population size, maximum number of fitness evaluations, limit). Akay et al. [8] studied parameter tuning of the ABC algorithm and investigated the effect of the control parameters. Afterwards, two modified versions of the ABC algorithm were proposed by Akay et al. [9] and Zhang et al. [10] for efficiently solving real-parameter numerical optimization problems. Aderhold et al. [11] studied the influence of population size on the optimization behavior of ABC and proposed two variants that use a new position update for the artificial bees.
However, ABC was unsuccessful on some complex unimodal and multimodal functions [7], so modified artificial bee colony algorithms were put forward to improve the performance of the basic ABC. Zhu et al. [12] proposed an improved ABC algorithm called gbest-guided ABC (GABC), which incorporates the information of the global best solution into the solution search equation to guide the search for candidate solutions; experimental results on six benchmark functions showed that GABC outperforms the basic ABC. Wu et al. [13] described an improved ABC algorithm that enhances the global search ability of the basic ABC. Guo et al. [14] presented a novel search strategy; the improved algorithm, called global ABC, has great advantages in convergence and solution quality. In 2013, Yu et al. [15] proposed a modified artificial bee colony algorithm in which the global best is introduced into the update equation of the employed and onlooker bees. Simulation results on peak-to-average power ratio reduction in orthogonal frequency division multiplexing signals and on multi-level image segmentation showed that the new algorithm performs better than the basic ABC algorithm at the same computational complexity. Rajasekhar et al. [16] proposed a simple and effective variant of the ABC algorithm based on an improved self-adaptive mechanism built on Rechenberg's 1/5th success rule to enhance the exploitation capability of the basic ABC. In 2017, Yaghoobi and Esmaeili [17] proposed an improved artificial bee colony algorithm for global numerical optimization that works in three aspects: initializing the population based on chaos theory, using multiple searches in the employed and onlooker bee phases, and controlling the frequency of perturbation by a modification rate.
Multi-objective evolutionary algorithms (MOEAs) have gained wide attention for solving various optimization problems in science and engineering. In 2011, Zou et al. [18] presented a novel algorithm based on ABC to deal with multi-objective optimization problems. The concept of Pareto dominance was used to determine the flight direction of a bee, and the nondominated solution vectors found during the search were maintained in an external archive. The proposed approach was highly competitive and a viable alternative for solving multi-objective optimization problems.
The performance of Pareto-dominance-based MOEAs degrades when the number of objective functions is greater than three. In 2017, Amarjeet and Chhabra [19] proposed a Fuzzy-Pareto dominance driven Artificial Bee Colony (FP-ABC) to solve many-objective software module clustering problems (MaSMCPs) effectively and efficiently. The contributions of the article were as follows: the selection process of candidate solutions was improved by fuzzy-Pareto dominance, and two external archives were integrated into the ABC algorithm to balance convergence and diversity. A comparative study validated the supremacy of the proposed approach over existing many-objective optimization algorithms.
A decomposition-based ABC algorithm [20] was also proposed to handle many-objective optimization problems (MaOPs). In the proposed algorithm, an MaOP is converted into a number of subproblems, which are optimized simultaneously by a modified ABC algorithm. With the help of a set of weight vectors, the decomposition-based algorithm maintains good diversity among solutions, and the ABC algorithm is highly effective at solving scalar optimization problems with fast convergence; therefore, the new algorithm balances convergence and diversity well. Moreover, the subproblems are handled unequally, and computational resources are dynamically allocated through specially designed onlooker and scout bees, which contributes to the performance improvements of the algorithm. The proposed algorithm can approximate a set of well-converged and properly distributed nondominated solutions for MaOPs with high solution quality and rapid running speed.
The basic ABC algorithm is often combined with other algorithms and techniques. In 2016, an additional update equation [21] applicable to all ABC-based optimization algorithms was developed to speed up convergence, utilizing Bollinger bands [22], a technical analysis tool for predicting maximum or minimum future stock prices. Wang et al. [23] proposed a hybridization of the krill herd algorithm [24] and ABC (KHABC) in 2017. A neighbor food source for onlooker bees in ABC was obtained from the global optimal solutions found by the KHABC algorithm, and during the information exchange process the globally best solutions were shared by the krill and the bees. The effectiveness of the proposed methodology was tested on continuous and discrete optimization problems.
In this paper, another technique, the cloud model [25], is embedded into the ABC algorithm. The cloud model is an uncertainty conversion model between a qualitative concept and its quantitative expression. In 1999, an uncertainty reasoning mechanism of the cloud model was presented, and cloud model theory was expanded after that. In addition, Li successfully applied the cloud model to the inverted pendulum control problem [26]. Some scholars have combined the cloud model with ABC because the cloud model has the characteristics of stable tendency and randomness [27,28,29].
We propose a new algorithm that inherits the excellent exploration ability of the basic ABC algorithm together with the stable tendency and randomness of the cloud model, by modifying the selection mechanism and search formula of the onlookers and the update formula of the scout bees. The innovations of the new algorithm are:
  • The population stays more diverse throughout the search process thanks to a different selection mechanism for onlookers, in which worse individuals have a larger selection probability than in the basic ABC;
  • The local search ability of the algorithm is improved by applying the normal cloud generator as the onlookers' search formula, which controls their search within a suitable range;
  • Historical optimal solutions can be used by the Y-conditional cloud generator as the scout bees' update formula, ensuring that the algorithm not only jumps out of local optima but also avoids a blind random search.
The remainder of the paper is structured as follows. Section 2 describes the basic ABC algorithm. Section 3 presents the details and framework of the developed ABC algorithm based on the cloud model. Section 4 gives the experimental results on CEC15 comparing the proposed DCABC with the basic ABC and other cloud-model-based ABC variants. Section 5 summarizes the current work, and the acknowledgements are given at the end.

2. The Basic ABC Algorithm

There are three kinds of bees in the ABC algorithm, namely employed bees, onlooker bees and scout bees. The total population size is N_s; the number of employed bees is N_e and the number of onlookers is N_u (generally N_e = N_u = N_s/2). In the initialization phase, food sources in the population are randomly generated and assigned to employed bees as
X_i^j = X_{min}^j + \mathrm{rand}(0,1)\,(X_{max}^j - X_{min}^j), \qquad (1)
where j ∈ {1, 2, …, D}, X_max and X_min are the upper and lower bounds of the solution vectors, and D is the dimension of the decision variables.
Each employed bee X_i generates a new food source V_i in the neighborhood of its present position:
V_i^j = X_i^j + \phi_{ij}\,(X_i^j - X_k^j), \qquad (2)
where j ∈ {1, 2, …, D} and k ∈ {1, 2, …, N_e} with k ≠ i are randomly generated indexes, and φ_ij is a random number in [−1, 1]. At the same time, V_i must be kept inside the definition domain. V_i is compared to X_i, and the employed bee exploits the better food source through the greedy selection mechanism in terms of the fitness value fit_i in Equation (3):
fit_i = \begin{cases} \dfrac{1}{1+f_i}, & f_i \ge 0 \\ 1 + \lvert f_i \rvert, & f_i < 0 \end{cases} \qquad (3)
where f_i is the objective value of solution X_i or V_i. Equation (3) calculates fitness values for a minimization problem; for maximization problems, the objective function can be used directly as the fitness function.
An onlooker bee evaluates the fitness values of all employed bees and uses the roulette wheel method to select a food source X_i, which is then updated in the same way as in the employed bee phase, according to the probability value P calculated by the following expression:
P = 0.9 \cdot \frac{fit(X_i)}{\max_{m=1,\dots,N_e} fit(X_m)} + 0.1, \qquad (4)
If a food source X_i cannot be improved within a predetermined number (limit) of trials, it is abandoned, and the corresponding employed bee becomes a scout whose new food source is randomly produced by Equation (1). The algorithm terminates after a predefined maximum number of cycles, denoted Max_Cycles. The pseudo code of the ABC algorithm is shown in Algorithm 1.
Algorithm 1: The basic ABC algorithm.
Initialization phase
   Initialize the food sources using Equation (1).
   Evaluate the fitness value of the food sources using Equation (3), set the current generation t = 0 .
While t ≤ Max_Cycles do
   Employed bees phase
    Send employed bees to produce new solutions via Equation (2).
    Apply greedy selection to evaluate the new solutions.
    Calculate the probability using Equation (4).
   Onlooker bees phase
    Send onlooker bees to produce new solutions via Equation (2).
    Apply greedy selection to evaluate the new solutions.
   Scout bee phase
    Send one scout bee produced by Equation (1) into the search area for discovering a new food source.
    Memorize the best solution found so far.
    t = t + 1 .
end while
Return the best solution.
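To make the procedure concrete, the following is a minimal NumPy sketch of one possible implementation of the basic ABC cycle (Equations (1)-(4) and Algorithm 1). The sphere objective, the parameter values and all function names are illustrative assumptions, not the CEC15 setup used later in the paper; the onlooker phase uses the acceptance-style selection common in ABC implementations, since the probabilities in Equation (4) are not normalized.

```python
import numpy as np

def fitness(f):
    # Equation (3): fitness of an objective value f (minimization).
    return 1.0 / (1.0 + f) if f >= 0 else 1.0 + abs(f)

def basic_abc(obj, dim=10, n_e=20, limit=200, max_cycles=500, lb=-100.0, ub=100.0):
    rng = np.random.default_rng(0)
    X = lb + rng.random((n_e, dim)) * (ub - lb)        # Equation (1)
    f = np.array([obj(x) for x in X])
    trials = np.zeros(n_e, dtype=int)

    def neighbor(i):
        # Equation (2): perturb one random dimension using a random partner k != i.
        k = rng.choice([m for m in range(n_e) if m != i])
        j = rng.integers(dim)
        v = X[i].copy()
        v[j] = np.clip(v[j] + rng.uniform(-1.0, 1.0) * (X[i, j] - X[k, j]), lb, ub)
        return v

    def greedy(i, v):
        # Keep the better of X_i and candidate v, judged by Equation (3).
        fv = obj(v)
        if fitness(fv) > fitness(f[i]):
            X[i], f[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(max_cycles):
        for i in range(n_e):                           # employed bee phase
            greedy(i, neighbor(i))
        fit = np.array([fitness(fi) for fi in f])
        P = 0.9 * fit / fit.max() + 0.1                # Equation (4)
        for _ in range(n_e):                           # onlooker bee phase
            i = rng.integers(n_e)
            if rng.random() <= P[i]:
                greedy(i, neighbor(i))
        worst = int(trials.argmax())                   # scout bee phase
        if trials[worst] > limit:
            X[worst] = lb + rng.random(dim) * (ub - lb)   # re-init via Equation (1)
            f[worst] = obj(X[worst])
            trials[worst] = 0
    best = int(f.argmin())
    return X[best], f[best]

best_x, best_f = basic_abc(lambda x: float(np.sum(x ** 2)))  # sphere function
print(best_f)
```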

3. A Developed Artificial Bee Colony Algorithm Based on Cloud Model (DCABC)

The ABC algorithm is a relatively recent but well-developed swarm intelligence optimization algorithm. Compared to GA and PSO, ABC has higher robustness [6]. Many scholars have sought to improve the performance of the ABC algorithm. Zhang [27] put forward an algorithm named PABC with a new selection scheme based on the cloud model: for an individual with better fitness, the selection probability was likely to be relatively high, and vice versa. Lin et al. [29] proposed an improved ABC algorithm based on the cloud model (cmABC) to address the slow convergence and easy stagnation in local optima of the basic ABC, by calculating food sources through the normal cloud operator and reducing the radius of the local search space. In cmABC, the authors also introduced a new selection strategy that gave inferior individuals more chances to be selected, in order to maintain diversity; in addition, the best solution found over time was used to explore new positions. A number of experiments on composition functions showed that the proposed algorithm improved both convergence speed and solution quality. In this section, we propose a developed ABC algorithm named DCABC, which is based on the cloud model with a new choice mechanism for onlookers and new search strategies for onlooker bees and scouts.

3.1. Cloud Model

Professor Li presented an uncertainty conversion model, called the cloud model, between a qualitative concept T̃ [30] expressed in natural language and its quantitative expression, on the basis of traditional fuzzy set theory and probability statistics. He developed and improved a complete cloud theory [31], which consists of the cloud model, virtual clouds, cloud operations, cloud transforms, uncertain reasoning and so on.
Suppose U is a quantitative domain of discourse represented by precise values (one-dimensional, two-dimensional or multi-dimensional), and T̃ is a qualitative concept in U. Let X be an arbitrary element of U and a random realization of the qualitative concept T̃. The degree of certainty of X with respect to T̃, expressed as μ(X), is a random number with a stable tendency. The distribution of X on the domain of discourse U is called a cloud model, or simply a 'cloud'. Each pair (X, μ(X)) is called a cloud droplet, and the cloud model can be formulated as follows:
X \in U, \qquad \mu(X) \in [0, 1] \qquad (5)
The normal cloud is a scattered-point cloud model based on the normal or half-normal distribution. The normal cloud model uses a set of independent parameters that work together to express the digital characteristics of a qualitative concept and reflect its uncertainty. Based on the normal distribution function and the membership function, this group of parameters is represented by three digital characteristics: expectation Ex, entropy En, and hyper entropy He.
Expectation Ex is the point that best represents the qualitative concept in the domain of discourse. It can be considered the center of gravity of all cloud drops, giving the coordinates that best represent the qualitative concept on the number field. Entropy En stands for the measurable granularity of the qualitative concept; it also reflects the uncertainty and fuzziness of the concept, that is, the range of values that the qualitative concept can accept in the domain of discourse. Hyper entropy He is the measure of the entropy's uncertainty, namely the entropy of En. It reflects the randomness of the samples representing the qualitative concept values and reveals the relationship between fuzziness and randomness. Hyper entropy also reflects the degree of aggregation of the cloud droplets.
Given the three digital characteristics Ex, En and He, the forward cloud generator in Equations (6)-(8) can produce N cloud droplets of the normal cloud model (Algorithm 2), which are two-dimensional points (x_i, μ_i), i ∈ {1, 2, …, N}.
En_i = N(En, He^2), \qquad (6)
x_i = N(Ex, (En_i)^2), \qquad (7)
\mu_i = \exp\left\{ -\frac{(x_i - Ex)^2}{2 (En_i)^2} \right\}, \qquad (8)
Algorithm 2: Forward cloud generator algorithm.
Input: Ex, En, He and N.
Output: quantitative value x_i of the ith cloud droplet and its degree of certainty μ_i.
Forward cloud generator
  Generate a normal random number En_i with expectation En and standard deviation He by Equation (6);
  Generate a normal random number x_i with expectation Ex and standard deviation En_i by Equation (7).
Drop (x_i, μ_i)
  Calculate μ_i by Equation (8);
  a cloud droplet (x_i, μ_i) is obtained.
Repeat
  Repeat the above steps until N cloud droplets have been generated (Figure 1).
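As a quick illustration, the following is a small NumPy transcription of Algorithm 2 under the definitions above; the parameter values in the usage line are arbitrary examples.

```python
import numpy as np

def forward_cloud(Ex, En, He, N, rng=None):
    """Generate N droplets (x_i, mu_i) of a normal cloud (Equations (6)-(8))."""
    rng = rng or np.random.default_rng()
    En_i = rng.normal(En, He, size=N)              # Equation (6): En_i ~ N(En, He^2)
    x = rng.normal(Ex, np.abs(En_i))               # Equation (7): x_i ~ N(Ex, En_i^2)
    mu = np.exp(-(x - Ex) ** 2 / (2 * En_i ** 2))  # Equation (8): degree of certainty
    return x, mu

# Example: 1000 droplets of the concept (Ex, En, He) = (0, 1, 0.1).
x, mu = forward_cloud(0.0, 1.0, 0.1, 1000)
```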
Given the three digital characteristics (Ex, En, He) and a specific degree of certainty μ, the corresponding generator is called a Y-conditional cloud generator, based on the uncertainty reasoning of the cloud model. In other words, every cloud droplet (x_i, μ) has the same degree of certainty of belonging to the concept T̃. The formula of the Y-conditional cloud generator (Algorithm 3) is:
x_i = Ex \pm \sqrt{-2 \ln(\mu)}\, En_i, \qquad (9)
Algorithm 3: Y-conditional cloud generator algorithm.
Input: Ex, En, He, N and μ.
Output: quantitative value x_i of the ith cloud droplet and its degree of certainty μ.
Y-conditional cloud generator
  Generate a normal random number En_i with expectation En and standard deviation He by Equation (6);
  Calculate x_i from Ex, En_i and μ by Equation (9).
Drop (x_i, μ)
  A Y-conditional cloud droplet (x_i, μ) is obtained.
Repeat
  Repeat the above steps until N cloud droplets are obtained (Figure 2).
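Analogously, a sketch of Algorithm 3: for a fixed certainty degree μ, each droplet is placed at x_i = Ex ± sqrt(−2 ln μ)·En_i (Equation (9)), with the ± branch chosen at random. Values are again illustrative.

```python
import numpy as np

def y_conditional_cloud(Ex, En, He, mu, N, rng=None):
    """Generate N droplets (x_i, mu) sharing the certainty degree mu (Equation (9))."""
    rng = rng or np.random.default_rng()
    En_i = rng.normal(En, He, size=N)        # Equation (6)
    sign = rng.choice([-1.0, 1.0], size=N)   # random choice of the +/- branch
    x = Ex + sign * np.sqrt(-2.0 * np.log(mu)) * En_i
    return x

x = y_conditional_cloud(0.0, 1.0, 0.1, mu=0.3, N=1000)
```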

3.2. New Choice Mechanism for Onlookers

3.2.1. New Choice Mechanism

In the basic ABC algorithm, onlooker bees choose good-quality nectar sources by the roulette wheel selection scheme; that is, the larger a nectar source's fitness value, the higher the probability that it will be chosen by onlookers. The selection mechanism consists of three parts: calculating the selection probability of each solution in the population according to its fitness value; selecting the candidate solution using the roulette wheel method; and starting the local search of onlooker bees around the candidate solution. However, this selection scheme is so greedy that it easily leads to a rapid decrease of population diversity and to falling into local optima. We therefore seek a more reasonable selection scheme.
Zhang et al. [27] improved the selection strategy based on the cloud model with the three digital characteristics Ex, En and He in Equation (10):
Ex = \max_{i=1,\dots,N_e} fit_i, \qquad En = \frac{Ex - fit_i}{12}, \qquad He = \frac{En}{3} \qquad (10)
The possibility that the current individual is the best one can be regarded as its selection probability and can be produced by the forward cloud generator. Looked at differently, the worst individual also contains useful information after several iterations, so we ensure that the worst individual has a larger selection probability: Equation (11) pays more attention to the inferior individuals. The detailed forward cloud generator operations can be described as follows:
Ex = \min_{i=1,\dots,N_e} fit_i, \qquad En = \frac{fit_i - Ex}{12}, \qquad He = \frac{En}{3} \qquad (11)
The selection probability of the corresponding individual is adjusted as follows:
P = \exp\left\{ -\frac{(x - Ex)^2}{2 (En')^2} \right\}, \qquad (12)
where En' = N(En, He^2), x = N(Ex, (En')^2), and N denotes a normal random number generator.
Individuals whose fitness is closer to Ex (the inferior individuals) thus obtain a higher selection probability.
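Below is a small NumPy sketch of how Equations (11) and (12) could be realized; the function name is ours, fit is the vector of fitness values of the employed bees, and the tiny epsilon in the denominator is our own guard (not part of the paper) so that the worst individual, whose En is exactly zero, receives probability 1.

```python
import numpy as np

def new_choice_probability(fit, rng=None):
    """Selection probabilities favoring inferior individuals (Equations (11)-(12))."""
    rng = rng or np.random.default_rng()
    Ex = fit.min()                    # Equation (11): Ex = min_i fit_i
    En = (fit - Ex) / 12.0            #                En = (fit_i - Ex) / 12
    He = En / 3.0                     #                He = En / 3
    En1 = rng.normal(En, He)          # En' ~ N(En, He^2), elementwise
    x = rng.normal(Ex, np.abs(En1))   # x ~ N(Ex, En'^2)
    # Equation (12); epsilon avoids 0/0 for the worst individual (P = 1 there).
    return np.exp(-(x - Ex) ** 2 / (2.0 * En1 ** 2 + 1e-300))
```

Individuals far from Ex (the better ones) fall into the tail of the cloud and receive small probabilities, which is exactly the bias toward inferior individuals described above.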

3.2.2. Efficiency Analysis

In our proposed algorithm DCABC, Equation (4) is used as the probability selection formula for onlookers when a random number rand between 0 and 1 is less than or equal to 0.5; otherwise the selection probability is set by the new choice mechanism in Equation (12). The goal of handling the selection probability in two cases is to keep the algorithm from plunging into local optima.
To test the effectiveness of the new selection mechanism, the modified and the basic ABC are run independently on CEC15 [32] with dimensions (D) 10, 30 and 50, respectively. We set the initial population size N_s = 40. The number of employed bees equals the number of onlookers, i.e., N_e = N_u = N_s/2. The value of 'limit' equals N_e · D [33]. Every experiment is repeated 30 times. The maximum number of function evaluations (MaxFES) is set to D × 10,000 for all functions [34]. The simulation results are recorded in Table 1. It can easily be observed that the ABC with the new choice mechanism is superior to the basic ABC on most functions, which implies that the new choice mechanism improves the performance of the basic ABC.

3.3. The New Search Strategy of Onlooker Bees

Lin et al. [29] proposed an improved ABC algorithm based on the cloud model (cmABC). By calculating a candidate food source through the normal cloud operator and reducing the radius of the local search, the cmABC algorithm was shown to improve convergence speed, exploitation capability and solution quality in experiments on composition functions. In cmABC, the three digital characteristics of the cloud model (Ex, En, He) are given as:
Ex = X_i^j, \qquad En = e_x, \qquad He = \frac{En}{10} \qquad (13)
where X_i is the current food source position, j ∈ {1, 2, …, D}, and e_x is a variable. The forward cloud generator produces a normal random number V_i^j, which corresponds to the jth dimension of the new food source position. The detailed operations are described as:
En' = N(En, He^2), \qquad V_i^j = N(Ex, (En')^2) \qquad (14)
The greater the value of the entropy En, the wider the distribution of the cloud droplets, and vice versa. As the search iterations accumulate, the population moves closer and closer to the optimal solution. A nonlinear decrease strategy that self-adaptively adjusts the value of e_x was therefore used in cmABC, for the sake of improving the precision of the solution and controlling the bees' search range:
e_x = -(E_{max} - E_{min})\,(t/T_{max})^2 + E_{max}, \qquad (15)
where t ∈ {1, 2, 3, …, T_max} is the current iteration number and T_max is the maximum number of cycles. The parameters E_max and E_min were set to 5 and 10^{-4}, respectively. In order not to introduce too many parameters, in this paper the three digital characteristics of the cloud model (Ex, En, He) are given as
Ex = X_i^j, \qquad En = \frac{2}{3}\,\lvert X_i^j - X_k^j \rvert, \qquad He = \frac{En}{10} \qquad (16)
where j ∈ {1, 2, …, D} and k ∈ {1, 2, …, N_e} with k ≠ i are randomly generated indexes. This amendment is based on the stable tendency and randomness of the normal cloud model. The entropy En is selected by the '3σ' principle of the normal cloud model, which controls the onlooker bees so that they search within a suitable range.
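The resulting onlooker move (Equations (16) and (14)) can be sketched as follows; the helper name is our own, and bound handling and the subsequent greedy comparison are omitted for brevity.

```python
import numpy as np

def onlooker_cloud_search(X, i, rng=None):
    """Candidate food source for onlooker i via the normal cloud operator
    (Equations (16) and (14)); X is the (n, D) array of food sources."""
    rng = rng or np.random.default_rng()
    n, D = X.shape
    k = rng.choice([m for m in range(n) if m != i])   # random partner k != i
    j = rng.integers(D)                               # random dimension
    Ex = X[i, j]                                      # Equation (16)
    En = (2.0 / 3.0) * abs(X[i, j] - X[k, j])
    He = En / 10.0
    En1 = rng.normal(En, He)                          # Equation (14): En' ~ N(En, He^2)
    v = X[i].copy()
    v[j] = rng.normal(Ex, abs(En1))                   #                V_i^j ~ N(Ex, En'^2)
    return v
```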

3.4. Search Strategy of Scouts Combined with Y Conditional Cloud Generator

Employed and onlooker bees look for a better food source around their neighborhoods in each cycle of the search. If the fitness value of a food source is not improved within a predetermined number of trials equal to the value of 'limit', that food source is abandoned and the employed bee associated with it becomes a scout bee. In the basic ABC, the scout randomly finds a new food source to replace the abandoned one by Equation (1), which slows the convergence of the basic ABC because it does not take advantage of the historical optimal solution information. In this section, we make the scout bee search for a candidate position around the historical optimal value fit_best (corresponding to Global_min) by the Y-conditional cloud operator. The search strategy of scouts combined with the Y-conditional cloud generator is described in Algorithm 4. The purpose of setting μ ∈ (0, 0.5) in Step 4 is to guarantee population diversity: cloud droplets with smaller membership degrees lie farther from the center Ex, that is, the new food source lies farther from the historical optimum Global_min. Since the historical optimum information is used to generate the scout, the aimless searching of scout bees in the basic ABC algorithm is avoided to a certain degree.
Algorithm 4: Search strategy of scouts combined with Y-conditional cloud generator.
Step 1  Set the expectation Ex as GlobalParams, the position parameters of Global_min.
Step 2  Entropy En = (X_max − X_min)/N_e.
Step 3  Hyper entropy He = En/c_2, where c_2 = 10.
Step 4  Randomly generate membership degrees μ_j ∈ (0, 0.5), where j ∈ {1, 2, …, D}.
Step 5  Obtain the new food source X_i according to Equation (9).
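A sketch of Algorithm 4 in the same style; global_params stands for the position of the historical optimum Global_min, and the small lower bound on μ is our own guard against log(0).

```python
import numpy as np

def scout_cloud_position(global_params, x_min, x_max, n_e, rng=None):
    """New scout position around the historical optimum (Algorithm 4, Equation (9))."""
    rng = rng or np.random.default_rng()
    D = global_params.size
    Ex = global_params                        # Step 1: center on the best-so-far position
    En = (x_max - x_min) / n_e                # Step 2
    He = En / 10.0                            # Step 3: c2 = 10
    mu = rng.uniform(1e-12, 0.5, size=D)      # Step 4: mu_j in (0, 0.5)
    En1 = rng.normal(En, He, size=D)          # Equation (6)
    sign = rng.choice([-1.0, 1.0], size=D)
    x = Ex + sign * np.sqrt(-2.0 * np.log(mu)) * En1   # Step 5 / Equation (9)
    return np.clip(x, x_min, x_max)           # keep the new source inside the domain

x_new = scout_cloud_position(np.zeros(10), -100.0, 100.0, n_e=20)
```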

3.5. DCABC Algorithm

The pseudo code of the DCABC algorithm proposed for solving unconstrained optimization problems is given in Algorithm 5. MaxFES represents the maximum number of function evaluations, and FES represents the current number of function evaluations.
Algorithm 5: Pseudo code of DCABC algorithm.
Initialization phase
   Initialize the population of solutions X_i^j, i = 1, 2, …, N_e, j = 1, 2, …, D, using Equation (1).
   Evaluate the fitness of the population by Equation (3), set the current F E S = N e .
While FES ≤ MaxFES do
   Employed bees phase
    Send employed bees to produce new solutions via Equation (2).
    Apply greedy selection to evaluate the new solutions.
    If rand is less than or equal to 0.5, calculate the selection probability using Equation (4);
    otherwise, calculate the probability using Equations (11) and (12).
   Onlooker bees phase
    Send onlooker bees to produce new solutions via Equations (14) and (16).
    Apply greedy selection to evaluate the new solutions.
   Scout bee phase
    Send one scout bee generated by Algorithm 4 into the search area for discovering a new food source.
    Memorize the best solution found so far.
end while
Return the best solution.

4. Experimental Study of DCABC

4.1. Evaluation Functions

To compare the proposed DCABC with the basic ABC and other ABC variants, namely GABC [12], cmABC [29] and PABC [27], experiments on the benchmark functions of CEC15 [32] with 10, 30 and 50 decision variables are carried out on the same machine, with an Intel 3.20 GHz CPU and 8 GB memory, running Windows 7 and MATLAB 9.0 (R2016a). The functions in CEC15 have different optimal values f(x*).

4.2. Parameters Settings

For all compared algorithms, including DCABC, the initial population size is 40, with an equal split between employed bees and onlookers; limit equals N_e · D [33]; and the dimension is set to 10, 30 and 50 in turn. In Equation (2) of GABC [12], C = 1.5. In cmABC [29], E_max = 5 and E_min = 10^{-4}. MaxFES is D × 10,000, which is used as the termination criterion for all five algorithms. Every experiment is repeated 30 times, each run starting from a random population with a different random seed, and the mean result (Mean) and standard deviation (Std) of each algorithm are recorded in the form f(x) − f(x*) in Table 2, Table 3 and Table 4. Rank records the performance rank of the five algorithms on each benchmark function according to their mean results, with Rank 1 marking the best result. The overall rank of each algorithm is determined by its mean rank value over the 15 benchmark problems. The number of (Best/2nd Best/Worst) results is also counted for each algorithm.

4.3. Experiments Analysis

The DCABC algorithm is better than the four other compared algorithms on dimension 10. It can be seen from Table 2 that DCABC has the best performance on 10 of the 15 test problems. DCABC is worse than ABC on only one function (f_9) and worse than PABC on only two (f_3 and f_9). It is worth noting that both GABC and cmABC surpass DCABC only on functions f_3, f_5 and f_14, and that GABC and cmABC generate the best results only on functions f_4 and f_9, respectively.
From Table 3, DCABC ranks first on 11 of the 15 functions with dimension 30. In fact, DCABC is superior to ABC and GABC on all functions except f_15, where all algorithms tie. In contrast, DCABC is inferior to cmABC and PABC on functions f_3 and f_9, and cmABC shows the best performance on functions f_3, f_5, f_9 and f_14.
In Table 4, with dimension 50, DCABC outperforms all compared algorithms on functions f_1, f_3, f_4, f_5, f_6, f_8, f_9, f_10, f_12 and f_15. DCABC cannot beat ABC, GABC, cmABC or PABC on f_7 and f_11. PABC shows the best performance on f_2 and f_14, cmABC is superior to all other algorithms on f_3 and f_7, GABC is competitive on function f_13, and ABC has the best results on functions f_11, f_12 and f_14. It is worth noting that the overall performance of DCABC is still the best.
The unimodal function f_1, the hybrid function f_8 and the composition function f_10 are chosen to exhibit the convergence behavior of all compared algorithms. Figures 3-8 show the convergence graphs of the five algorithms; the horizontal axis is the number of function evaluations (FES), and the vertical axis is the function value over one independent run. In all the figures, DCABC is represented by the black line with circles; it descends most steeply and reaches the smallest error values among the five algorithms. The convergence speed of DCABC is also clearly superior to that of the other four algorithms.
On the whole, compared with the basic ABC and the three modified ABC algorithms, DCABC shows the best performance on most of the functions; that is to say, the new algorithm is more stable, and the solutions it obtains have higher precision than those of the other algorithms.

5. Conclusions

In the present study, a developed artificial bee colony algorithm based on the cloud model, namely DCABC, is proposed for continuous optimization. Through a new selection mechanism, worse individuals in DCABC have a larger probability of being selected than in the basic ABC. DCABC also improves local search ability by applying the normal cloud generator as the onlooker bees' search formula, which controls the search of the onlookers within a suitable range. Moreover, historical optimal solutions are used by the Y-conditional cloud generator when updating the scout bee, which helps the algorithm jump out of local optima. The effectiveness of the proposed method is tested on CEC15. The results clearly show the superiority of DCABC over ABC, GABC, cmABC and PABC.
However, quite a few issues merit further investigation, such as the diversity of DCABC. In addition, we hope to assess the performance of DCABC by Null Hypothesis Significance Testing (NHST) [35,36] in future work. We have only tested the new algorithm on classical benchmark functions and have not yet used it to solve practical problems, such as fault diagnosis [37], path planning [38], knapsack problems [39,40,41], multi-objective optimization [42], gesture segmentation [43] and the unit commitment problem [44]. There is increasing interest in improving the performance of DCABC, which will be our future research direction.

Acknowledgments

This research is partly supported by the Humanity and Social Science Youth Foundation of the Ministry of Education of China (Grant No. 12YJCZH179), the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (Grant No. 16KJA110001), the National Natural Science Foundation of China (Grant No. 11371197) and the Foundation of the Jiangsu Key Laboratory for NSLSCS (Grant No. 201601).

Author Contributions

These authors contributed equally to this paper.

Conflicts of Interest

No conflict of interest exists in the submission of this article, and the manuscript has been approved by all authors for publication.

References

  1. Sörensen, K.; Sevaux, M.; Glover, F. A History of Metaheuristics. In Handbook of Heuristics; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  2. Sörensen, K. Metaheuristics-the metaphor exposed. Int. Trans. Oper. Res. 2015, 22, 3–18. [Google Scholar] [CrossRef]
  3. Črepinšek, M.; Liu, S.; Mernik, M. Exploration and exploitation in evolutionary algorithms: A survey. ACM Comput. Surv. 2013, 45, 1–33. [Google Scholar] [CrossRef]
  4. Karaboga, D. An Idea Based on Honey bee Swarm for Numerical Optimization; Technical Report-tr06; Engineering Faculty, Computer Engineering Department, Erciyes University: Kayseri, Turkey, 2005. [Google Scholar]
  5. Karaboga, D.; Basturk, B. Artificial Bee Colony (ABC) optimization algorithm for solving constrained optimization problems. Found. Fuzzy Log. Soft Comput. 2007, 4529, 789–798. [Google Scholar]
  6. Karaboga, D.; Basturk, B. On the performance of artificial bee colony (ABC) algorithm. Appl. Soft Comput. 2008, 8, 687–697. [Google Scholar] [CrossRef]
  7. Karaboga, D.; Akay, B. A comparative study of artificial bee colony algorithm. Appl. Math. Comput. 2009, 214, 108–132. [Google Scholar] [CrossRef]
  8. Akay, B.; Karaboga, D. Parameter tuning for the artificial bee colony algorithm. Comput. Collect. Intell. 2009, 5796, 608–619. [Google Scholar]
  9. Akay, B.; Karaboga, D. A modified artificial bee colony algorithm for real-parameter optimization. Swarm Intell. Appl. 2012, 192, 120–142. [Google Scholar] [CrossRef]
  10. Zhang, D.; Guan, X.; Tang, Y.; Tang, Y. Modified artificial bee colony algorithms for numerical optimization. In Proceedings of the 2011 3rd International Workshop on Intelligent Systems and Applications (ISA), Wuhan, China, 28–29 May 2011; pp. 1–4. [Google Scholar]
  11. Aderhold, A.; Diwold, K.; Scheidler, A.; Middendorf, M. Artificial bee colony optimization: A new selection scheme and its performance. Nat. Inspired Coop. Strateg. Optim. (NICSO 2010) 2010, 284, 283–294. [Google Scholar]
  12. Zhu, G.; Kwong, S. Gbest-guided artificial bee colony algorithm for numerical function optimization. Appl. Math. Comput. 2010, 217, 3166–3173. [Google Scholar] [CrossRef]
  13. Wu, X.; Hao, D.; Xu, C. An improved method of artificial bee colony algorithm. Appl. Mech. Mater. 2012, 101–102, 315–319. [Google Scholar] [CrossRef]
  14. Guo, P.; Cheng, W.; Liang, J. Global artificial bee colony search algorithm for numerical function optimization. In Proceedings of the 2011 Seventh International Conference on Natural Computation (ICNC), Shanghai, China, 26–28 July 2011; Volume 3, pp. 1280–1283. [Google Scholar]
  15. Yu, X.; Zhu, Z. A modified artificial bee colony algorithm with its applications in signal processing. Int. J. Comput. Appl. Technol. 2013, 47, 297–303. [Google Scholar] [CrossRef]
  16. Rajasekhar, A.; Pant, M. An improved self-adaptive artificial bee colony algorithm for global optimisation. Int. J. Swarm Intell. 2014, 1, 115–132. [Google Scholar] [CrossRef]
  17. Yaghoobi, T.; Esmaeili, E. An improved artificial bee colony algorithm for global numerical optimisation. Int. J. Bio-Inspired Comput. 2017, 9, 251–258. [Google Scholar] [CrossRef]
  18. Zou, W.; Zhu, Y.; Chen, H.; Zhang, B. Solving multiobjective optimization problems using artificial bee colony algorithm. Discret. Dyn. Nat. Soc. 2011, 2, 1–37. [Google Scholar] [CrossRef]
  19. Amarjeet; Chhabra, J.K. FP-ABC: Fuzzy Pareto-Dominance Driven Artificial Bee Colony Algorithm for Many-Objective Software Module Clustering. Comput. Lang. Syst. Struct. 2018, 51, 1–21. [Google Scholar]
  20. Xiang, Y.; Zhou, Y.; Tang, L.; Chen, Z. A Decomposition-Based Many-Objective Artificial Bee Colony Algorithm. IEEE Trans. Cybern. 2017, 99, 1–14. [Google Scholar] [CrossRef]
  21. Koçer, B. Bollinger bands approach on boosting ABC algorithm and its variants. Appl. Soft Comput. 2016, 49, 292–312. [Google Scholar] [CrossRef]
  22. Bollinger Bands—Trademark Details. 2011. Available online: Justia.com (accessed on 1 April 2018).
  23. Wang, H.; Yi, J. An improved optimization method based on krill herd and artificial bee colony with information exchange. Memet. Comput. 2017, 2, 1–22. [Google Scholar] [CrossRef]
  24. Gandomi, A.H.; Alavi, A.H. Krill herd: A new bio-inspired optimization algorithm. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 4831–4845. [Google Scholar] [CrossRef]
  25. Li, D.; Liu, C.; Du, Y.; Han, X. Artificial Intelligence with Uncertainty. J. Softw. 2004, 15, 1583–1594. [Google Scholar]
  26. Chen, H.; Li, D.; Shen, D.; Zhang, F. A clouds model applied to controlling inverted pendulum. J. Comput. Res. Dev. 1999, 36, 1180–1187. [Google Scholar]
  27. Zhang, C.; Pang, Y. Sequential blind signal extraction adopting an artificial bee colony algorithm. J. Inf. Comput. Sci. 2012, 9, 5551–5559. [Google Scholar]
  28. He, D.; Jia, R. Cloud model-based Artificial Bee Colony algorithm’s application in the logistics location problem. In Proceedings of the International Conference on Information Management, Innovation Management and Industrial Engineering, Sanya, China, 20–21 October 2012. [Google Scholar]
  29. Lin, X.; Ye, D. Artificial Bee Colony algorithm based on cloud mutation. J. Comput. Appl. 2012, 32, 2538–2541. [Google Scholar] [CrossRef]
  30. Li, D.; Meng, H.; Shi, X. Membership clouds and membership clouds generators. Comput. Res. Dev. 1995, 42, 32–41. [Google Scholar]
  31. Di, K.; Li, D.; Li, D. Cloud theory and its applications in spatial data mining and knowledge discovery. J. Image Graph. 1999, 4, 930–935. [Google Scholar]
  32. Chen, Q.; Liu, B.; Zhang, Q.; Liang, J.; Suganthan, P.N.; Qu, B. Problem Definition and Evaluation Criteria for CEC 2015 Special Session and Competition on Bound Constrained Single-Objective Computationally Expensive Numerical Optimization; Technical Report; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2014. [Google Scholar]
  33. Veček, N.; Liu, S.; Črepinšek, M.; Mernik, M. On the Importance of the Artificial Bee Colony Control Parameter Limit. Inf. Technol. Control 2017, 46, 566–604. [Google Scholar] [CrossRef]
  34. Mernik, M.; Liu, S.; Karaboga, D. On clarifying misconceptions when comparing variants of the Artificial Bee Colony Algorithm by offering a new implementation. Inf. Sci. 2015, 291, 115–127. [Google Scholar] [CrossRef]
  35. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  36. Veček, N.; Mernik, M.; Črepinšek, M. A chess rating system for evolutionary algorithms: A new method for the comparison and ranking of evolutionary algorithms. Inf. Sci. 2014, 277, 656–679. [Google Scholar] [CrossRef]
  37. Yi, J.; Wang, J.; Wang, G. Improved probabilistic neural networks with self-adaptive strategies for transformer fault diagnosis problem. Adv. Mech. Eng. 2016, 8, 1–13. [Google Scholar] [CrossRef]
  38. Wang, G.; Chu, H.; Mirjalili, S. Three-dimensional path planning for UCAV using an improved bat algorithm. Aerosp. Sci. Technol. 2016, 49, 231–238. [Google Scholar] [CrossRef]
  39. Feng, Y.; Wang, G.; Deb, S.; Lu, M.; Zhao, X. Solving 0-1 knapsack problem by a novel binary monarch butterfly optimization. Neural Comput. Appl. 2017, 28, 1619–1634. [Google Scholar] [CrossRef]
  40. Feng, Y.; Wang, G.; Li, W.; Li, N. Multi-strategy monarch butterfly optimization algorithm for discounted 0-1 knapsack problem. Neural Comput. Appl. 2017. [Google Scholar] [CrossRef]
  41. Feng, Y.; Wang, G.; Dong, J.; Wang, L. Opposition-based learning monarch butterfly optimization with Gaussian perturbation for large-scale 0-1 knapsack problem. Comput. Electr. Eng. 2017. [Google Scholar] [CrossRef]
  42. Rizk-Allah, R.M.; El-Sehiemy, R.A.; Deb, S.; Wang, G. A novel fruit fly framework for multi-objective shape design of tubular linear synchronous motor. J. Supercomput. 2017, 73, 1235–1256. [Google Scholar] [CrossRef]
  43. Liu, K.; Gong, D.; Meng, F.; Chen, H.; Wang, G. Gesture segmentation based on a two-phase estimation of distribution algorithm. Inf. Sci. 2017, 394–395, 88–105. [Google Scholar] [CrossRef]
  44. Srikanth, K.; Panwar, L.K.; Panigrahi, B.K.; Herrera-Viedma, E.; Sangaiah, A.K.; Wang, G. Meta-heuristic framework: Quantum inspired binary grey wolf optimizer for unit commitment problem. Comput. Electr. Eng. 2017. [Google Scholar] [CrossRef]
Figure 1. Forward cloud generator.
Figure 2. Y-conditional cloud generator.
Figure 3. Convergence curves of the five algorithms for f_1 with D = 30 (the optimal value of f_1 is 100).
Figure 4. Convergence curves of the five algorithms for f_1 with D = 50 (the optimal value of f_1 is 100).
Figure 5. Convergence curves of the five algorithms for f_8 with D = 30 (the optimal value of f_8 is 800).
Figure 6. Convergence curves of the five algorithms for f_8 with D = 50 (the optimal value of f_8 is 800).
Figure 7. Convergence curves of the five algorithms for f_10 with D = 30 (the optimal value of f_10 is 1000).
Figure 8. Convergence curves of the five algorithms for f_10 with D = 50 (the optimal value of f_10 is 1000).
Table 1. Experimental results between ABC with the new choice mechanism (NCMABC) and the basic ABC.

Function | Criteria | ABC (10D) | NCMABC (10D) | ABC (30D) | NCMABC (30D) | ABC (50D) | NCMABC (50D)
f_1 | Mean | 1.36e+06 | 1.11e+06 | 3.73e+06 | 3.48e+06 | 1.20e+07 | 1.05e+07
    | Std  | 1.12e+06 | 6.92e+05 | 1.46e+06 | 1.42e+06 | 3.69e+06 | 3.49e+06
    | Rank | 2 | 1 | 2 | 1 | 2 | 1
f_2 | Mean | 8.57e+02 | 7.23e+02 | 7.26e+02 | 5.84e+02 | 1.43e+03 | 1.31e+03
    | Std  | 7.52e+02 | 8.35e+02 | 6.06e+02 | 6.77e+02 | 1.20e+03 | 1.07e+03
    | Rank | 2 | 1 | 2 | 1 | 2 | 1
f_3 | Mean | 2.02e+01 | 1.95e+01 | 2.01e+01 | 2.02e+01 | 2.02e+01 | 2.02e+01
    | Std  | 3.94e-02 | 3.12e+00 | 4.10e-02 | 4.14e-02 | 4.52e-02 | 4.13e-02
    | Rank | 2 | 1 | 1 | 2 | 2 | 1
f_4 | Mean | 1.26e+01 | 1.18e+01 | 9.77e+01 | 9.56e+01 | 2.34e+02 | 2.32e+02
    | Std  | 4.18e+01 | 3.67e+00 | 1.80e+01 | 1.78e+01 | 2.93e+01 | 2.89e+01
    | Rank | 2 | 1 | 2 | 1 | 2 | 1
f_5 | Mean | 4.31e+02 | 3.80e+02 | 2.42e+03 | 2.37e+03 | 4.24e+03 | 4.19e+03
    | Std  | 1.49e+02 | 1.51e+02 | 2.86e+02 | 2.71e+02 | 4.77e+02 | 3.75e+02
    | Rank | 2 | 1 | 2 | 1 | 2 | 1
f_6 | Mean | 5.03e+03 | 4.47e+03 | 1.38e+06 | 1.24e+06 | 2.20e+06 | 1.95e+06
    | Std  | 4.41e+03 | 3.10e+03 | 7.15e+02 | 6.35e+02 | 8.30e+02 | 7.53e+05
    | Rank | 2 | 1 | 2 | 1 | 2 | 1
f_7 | Mean | 8.72e+01 | 7.90e-01 | 9.39e+00 | 9.18e+00 | 1.85e+01 | 1.59e+01
    | Std  | 2.58e+01 | 2.87e-01 | 1.25e+00 | 1.13e+00 | 8.52e+00 | 2.02e+00
    | Rank | 2 | 1 | 2 | 1 | 2 | 1
f_8 | Mean | 1.32e+04 | 1.81e+04 | 4.11e+05 | 3.90e+05 | 2.89e+06 | 2.15e+06
    | Std  | 2.14e+04 | 3.02e+04 | 3.06e+05 | 1.96e+05 | 9.87e+05 | 8.05e+05
    | Rank | 1 | 2 | 2 | 1 | 2 | 1
f_9 | Mean | 9.48e+01 | 9.32e+01 | 1.21e+02 | 1.19e+02 | 1.62e+02 | 1.33e+02
    | Std  | 2.20e+01 | 2.35e+01 | 4.44e+01 | 4.41e+01 | 1.13e+02 | 7.43e+01
    | Rank | 2 | 1 | 2 | 1 | 2 | 1
f_10 | Mean | 4.37e+03 | 4.60e+03 | 6.88e+05 | 5.07e+05 | 8.07e+05 | 9.57e+05
     | Std  | 4.24e+03 | 7.02e+01 | 3.57e+05 | 2.90e+05 | 4.68e+05 | 4.39e+05
     | Rank | 1 | 2 | 2 | 1 | 1 | 2
f_11 | Mean | 3.01e+02 | 2.82e+02 | 3.22e+02 | 3.21e+02 | 3.58e+02 | 3.62e+02
     | Std  | 4.70e-01 | 4.01e-13 | 7.47e+00 | 7.40e+00 | 1.69e+02 | 1.65e+02
     | Rank | 2 | 1 | 2 | 1 | 1 | 2
f_12 | Mean | 1.04e+02 | 1.04e+02 | 1.07e+02 | 1.07e+02 | 1.10e+02 | 1.10e+02
     | Std  | 7.72e-01 | 8.03e-01 | 8.07e-01 | 6.08e-01 | 7.41e-01 | 7.99e-01
     | Rank | 1 | 1 | 1 | 1 | 1 | 1
f_13 | Mean | 3.13e+01 | 3.14e+01 | 1.02e+02 | 1.04e+02 | 1.89e+02 | 1.89e+02
     | Std  | 2.04e+00 | 2.21e+00 | 4.35e+00 | 3.58e+00 | 4.91e+00 | 4.25e+02
     | Rank | 1 | 2 | 1 | 2 | 1 | 1
f_14 | Mean | 1.86e+03 | 1.81e+03 | 3.06e+04 | 3.06e+04 | 5.00e+04 | 4.95e+04
     | Std  | 1.37e+03 | 1.31e+03 | 4.51e+03 | 5.57e+03 | 1.77e+03 | 1.42e+01
     | Rank | 2 | 1 | 1 | 1 | 2 | 1
f_15 | Mean | 1.00e+02 | 1.00e+02 | 1.00e+02 | 1.00e+02 | 1.02e+02 | 1.01e+02
     | Std  | 2.27e-11 | 5.54e-12 | 6.56e-02 | 6.99e-03 | 1.23e+00 | 1.14e+00
     | Rank | 1 | 1 | 1 | 1 | 2 | 1
Mean rank    | | 1.67 | 1.20 | 1.67 | 1.13 | 1.73 | 1.13
Overall rank | | 2 | 1 | 2 | 1 | 2 | 1
Table 2. Experimental results of ABC and the other ABC variants (10D).

Function | Criteria | ABC | GABC | cmABC | PABC | DCABC
f_1 | Mean | 1.37e+06 | 5.88e+05 | 1.39e+06 | 1.43e+06 | 8.06e+02
    | Std  | 1.12e+06 | 4.73e+05 | 8.96e+05 | 9.72e+05 | 2.31e+03
    | Rank | 3 | 2 | 4 | 5 | 1
f_2 | Mean | 8.57e+02 | 2.32e+03 | 4.23e+02 | 8.51e+02 | 2.88e+02
    | Std  | 7.52e+02 | 2.45e+03 | 3.52e+02 | 5.30e+02 | 1.20e+03
    | Rank | 4 | 5 | 2 | 3 | 1
f_3 | Mean | 2.01e+01 | 1.88e+01 | 1.87e+01 | 1.98e+01 | 2.00e+01
    | Std  | 3.90e-02 | 5.07e+00 | 4.77e+00 | 1.81e+00 | 7.88e-03
    | Rank | 5 | 2 | 1 | 3 | 4
f_4 | Mean | 1.26e+01 | 5.04e+00 | 9.30e+00 | 1.23e+01 | 7.89e+00
    | Std  | 4.18e+00 | 1.42e+00 | 3.28e+00 | 3.94e+00 | 2.99e+00
    | Rank | 5 | 1 | 3 | 4 | 2
f_5 | Mean | 4.31e+02 | 2.31e+02 | 1.82e+02 | 3.66e+02 | 2.61e+02
    | Std  | 1.49e+02 | 1.16e+02 | 9.83e+01 | 1.24e+02 | 1.51e+02
    | Rank | 5 | 2 | 1 | 4 | 3
f_6 | Mean | 5.03e+03 | 3.16e+03 | 3.20e+03 | 4.13e+03 | 1.68e+02
    | Std  | 4.40e+03 | 2.36e+03 | 2.96e+03 | 4.95e+03 | 1.97e+02
    | Rank | 5 | 2 | 3 | 4 | 1
f_7 | Mean | 8.72e-01 | 4.02e-01 | 5.07e-01 | 8.65e-01 | 3.99e-01
    | Std  | 2.58e-01 | 2.37e-01 | 3.29e-01 | 2.68e-01 | 4.24e-01
    | Rank | 4 | 2 | 3 | 5 | 1
f_8 | Mean | 1.32e+04 | 5.04e+03 | 7.38e+03 | 1.26e+04 | 4.25e+02
    | Std  | 2.14e+04 | 4.33e+03 | 6.80e+03 | 2.43e+04 | 1.20e+03
    | Rank | 5 | 2 | 3 | 4 | 1
f_9 | Mean | 9.48e+01 | 1.00e+02 | 6.48e+01 | 9.59e+01 | 1.00e+02
    | Std  | 2.20e+01 | 5.66e-02 | 4.79e+01 | 1.76e+01 | 4.12e-02
    | Rank | 2 | 4 | 1 | 3 | 4
f_10 | Mean | 4.37e+03 | 1.84e+03 | 3.28e+03 | 4.69e+03 | 4.41e+02
     | Std  | 4.24e+03 | 9.70e+02 | 5.22e+03 | 4.73e+03 | 1.88e+02
     | Rank | 4 | 2 | 3 | 5 | 1
f_11 | Mean | 3.00e+02 | 2.45e+02 | 2.49e+02 | 2.90e+02 | 1.92e+02
     | Std  | 4.69e-01 | 1.14e+02 | 1.07e+02 | 4.81e+01 | 1.44e+02
     | Rank | 5 | 2 | 3 | 4 | 1
f_12 | Mean | 1.04e+02 | 1.03e+02 | 1.04e+02 | 1.04e+02 | 1.03e+02
     | Std  | 7.72e-01 | 4.95e-01 | 6.33e-01 | 6.29e-01 | 8.95e-01
     | Rank | 2 | 1 | 2 | 2 | 1
f_13 | Mean | 3.13e+01 | 2.89e+01 | 2.95e+01 | 3.11e+01 | 2.77e+01
     | Std  | 2.04e+00 | 2.70e+00 | 2.30e+00 | 2.01e+00 | 2.74e+00
     | Rank | 5 | 2 | 3 | 4 | 1
f_14 | Mean | 1.86e+03 | 4.76e+02 | 4.47e+02 | 1.41e+03 | 1.14e+03
     | Std  | 1.38e+03 | 8.49e+02 | 9.03e+02 | 1.32e+03 | 1.39e+03
     | Rank | 5 | 2 | 1 | 4 | 3
f_15 | Mean | 1.00e+02 | 1.00e+02 | 1.00e+02 | 1.00e+02 | 1.00e+02
     | Std  | 2.28e-11 | 7.75e-06 | 1.25e-09 | 1.59e-11 | 5.42e-04
     | Rank | 1 | 1 | 1 | 1 | 1
Mean rank            | | 4.00 | 2.13 | 2.26 | 3.67 | 1.73
Overall rank         | | 4 | 2 | 3 | 4 | 1
Best/2nd Best/Worst  | | 1/1/8 | 3/10/1 | 5/2/0 | 1/1/3 | 10/1/0
Table 3. Experimental results of ABC and the other ABC variants (30D).

Function | Criteria | ABC | GABC | cmABC | PABC | DCABC
f_1 | Mean | 3.73e+06 | 3.11e+06 | 1.39e+06 | 1.43e+06 | 8.06e+02
    | Std  | 1.46e+06 | 2.05e+06 | 8.96e+05 | 9.72e+05 | 2.31e+03
    | Rank | 5 | 4 | 2 | 3 | 1
f_2 | Mean | 7.26e+02 | 2.85e+03 | 4.23e+02 | 8.51e+02 | 2.88e+02
    | Std  | 6.06e+02 | 3.77e+03 | 3.52e+02 | 5.30e+02 | 1.20e+03
    | Rank | 3 | 5 | 2 | 4 | 1
f_3 | Mean | 2.01e+01 | 2.02e+01 | 1.87e+01 | 1.98e+01 | 2.00e+01
    | Std  | 4.10e-02 | 8.32e-02 | 4.77e+00 | 1.81e+00 | 7.88e-03
    | Rank | 4 | 5 | 1 | 2 | 3
f_4 | Mean | 9.77e+01 | 5.50e+01 | 9.31e+00 | 1.23e+01 | 7.89e+00
    | Std  | 1.80e+01 | 9.75e+00 | 3.28e+00 | 3.94e+00 | 2.99e+00
    | Rank | 5 | 4 | 2 | 3 | 1
f_5 | Mean | 2.42e+03 | 1.92e+03 | 1.82e+02 | 3.66e+02 | 2.61e+02
    | Std  | 2.86e+02 | 2.96e+02 | 9.83e+01 | 1.24e+02 | 1.51e+02
    | Rank | 5 | 4 | 1 | 3 | 2
f_6 | Mean | 1.38e+06 | 1.45e+06 | 3.20e+03 | 4.13e+03 | 1.68e+02
    | Std  | 7.15e+05 | 7.51e+05 | 2.96e+03 | 4.95e+03 | 1.97e+02
    | Rank | 4 | 5 | 2 | 3 | 1
f_7 | Mean | 9.39e+00 | 7.04e+00 | 5.07e-01 | 8.65e-01 | 3.99e-01
    | Std  | 1.25e+00 | 1.81e+00 | 3.29e-01 | 2.68e-01 | 4.24e-01
    | Rank | 5 | 4 | 2 | 3 | 1
f_8 | Mean | 4.11e+05 | 3.31e+05 | 7.38e+03 | 1.26e+04 | 4.25e+02
    | Std  | 3.06e+05 | 1.91e+05 | 6.80e+03 | 2.43e+04 | 1.20e+03
    | Rank | 5 | 4 | 2 | 3 | 1
f_9 | Mean | 1.21e+02 | 1.05e+02 | 6.48e+01 | 9.59e+01 | 1.00e+02
    | Std  | 4.44e+01 | 4.89e-01 | 4.79e+01 | 1.76e+01 | 4.12e-02
    | Rank | 5 | 4 | 1 | 2 | 3
f_10 | Mean | 6.88e+05 | 6.84e+05 | 3.28e+03 | 4.69e+03 | 4.41e+02
     | Std  | 3.57e+05 | 5.28e+05 | 5.22e+03 | 4.73e+03 | 1.88e+02
     | Rank | 5 | 4 | 2 | 3 | 1
f_11 | Mean | 3.22e+02 | 3.49e+02 | 2.49e+02 | 2.90e+02 | 1.92e+02
     | Std  | 7.47e+00 | 1.11e+02 | 1.07e+02 | 4.81e+01 | 1.44e+02
     | Rank | 4 | 5 | 2 | 3 | 1
f_12 | Mean | 1.07e+02 | 1.07e+02 | 1.04e+02 | 1.04e+02 | 1.03e+02
     | Std  | 8.07e-01 | 5.78e-01 | 6.33e-01 | 6.29e-01 | 8.95e-01
     | Rank | 3 | 3 | 2 | 2 | 1
f_13 | Mean | 1.03e+02 | 9.91e+01 | 2.95e+01 | 3.11e+01 | 2.77e+01
     | Std  | 4.35e+00 | 2.65e+00 | 2.30e+00 | 2.01e+00 | 2.74e+00
     | Rank | 5 | 4 | 2 | 3 | 1
f_14 | Mean | 3.06e+04 | 3.14e+04 | 4.47e+02 | 1.41e+03 | 1.14e+03
     | Std  | 4.51e+02 | 6.51e+02 | 9.03e+02 | 1.32e+03 | 1.39e+03
     | Rank | 4 | 5 | 1 | 3 | 2
f_15 | Mean | 1.00e+02 | 1.00e+02 | 1.00e+02 | 1.00e+02 | 1.00e+02
     | Std  | 6.56e+02 | 7.75e-06 | 1.25e+00 | 1.59e-11 | 5.42e-04
     | Rank | 1 | 1 | 1 | 1 | 1
Mean rank            | | 4.20 | 4.07 | 1.67 | 2.73 | 1.40
Overall rank         | | 5 | 4 | 2 | 3 | 1
Best/2nd Best/Worst  | | 1/0/8 | 1/0/5 | 5/10/0 | 1/3/0 | 11/2/0
Table 4. Experimental results of ABC and the other ABC variants (50D).

Function | Criteria | ABC | GABC | cmABC | PABC | DCABC
f_1 | Mean | 1.20e+07 | 1.05e+07 | 1.00e+07 | 1.28e+07 | 3.92e+02
    | Std  | 3.69e+06 | 4.49e+00 | 2.63e+06 | 4.65e+06 | 1.36e+03
    | Rank | 4 | 3 | 2 | 5 | 1
f_2 | Mean | 1.43e+03 | 7.88e+03 | 1.52e+03 | 1.37e+03 | 3.74e+03
    | Std  | 1.20e+03 | 7.18e+03 | 1.04e+03 | 1.32e+03 | 7.84e+03
    | Rank | 2 | 5 | 3 | 1 | 4
f_3 | Mean | 2.02e+01 | 2.02e+01 | 2.00e+01 | 2.01e+01 | 2.00e+01
    | Std  | 4.12e-02 | 6.70e-02 | 6.64e-03 | 3.41e-02 | 1.56e-02
    | Rank | 3 | 3 | 1 | 2 | 1
f_4 | Mean | 2.34e+02 | 2.30e+02 | 2.71e+02 | 2.19e+02 | 1.53e+02
    | Std  | 2.93e+01 | 2.90e+01 | 3.35e+01 | 3.04e+01 | 2.42e+01
    | Rank | 4 | 3 | 5 | 2 | 1
f_5 | Mean | 4.24e+03 | 4.76e+03 | 4.14e+03 | 4.10e+03 | 3.93e+03
    | Std  | 4.77e+02 | 4.03e+02 | 4.09e+02 | 3.34e+02 | 4.31e+02
    | Rank | 5 | 2 | 4 | 3 | 1
f_6 | Mean | 2.20e+06 | 2.45e+06 | 2.11e+06 | 2.30e+06 | 2.25e+03
    | Std  | 7.53e+05 | 1.26e+06 | 6.63e+05 | 8.94e+05 | 1.10e+03
    | Rank | 3 | 5 | 2 | 4 | 1
f_7 | Mean | 1.85e+01 | 1.91e+01 | 1.57e+01 | 1.72e+01 | 3.04e+01
    | Std  | 8.52e+00 | 9.91e+00 | 1.48e+00 | 6.53e+00 | 1.56e+01
    | Rank | 3 | 4 | 1 | 2 | 5
f_8 | Mean | 2.15e+06 | 2.97e+06 | 2.35e+06 | 3.25e+06 | 3.48e+03
    | Std  | 8.87e+05 | 1.65e+06 | 6.22e+05 | 1.02e+06 | 1.03e+04
    | Rank | 2 | 4 | 3 | 5 | 1
f_9 | Mean | 1.62e+02 | 1.08e+02 | 1.08e+02 | 1.52e+02 | 1.07e+02
    | Std  | 1.13e+02 | 5.21e-01 | 8.44e-01 | 9.20e+01 | 5.74e-01
    | Rank | 4 | 2 | 2 | 3 | 1
f_10 | Mean | 8.07e+05 | 1.10e+06 | 9.27e+05 | 9.86e+05 | 3.59e+03
     | Std  | 4.68e+05 | 6.62e+05 | 3.07e+05 | 4.57e+05 | 7.45e+02
     | Rank | 2 | 5 | 3 | 4 | 1
f_11 | Mean | 3.58e+02 | 6.68e+02 | 3.99e+02 | 4.27e+02 | 8.16e+02
     | Std  | 1.69e+02 | 4.05e+02 | 2.72e+02 | 2.89e+02 | 4.05e+02
     | Rank | 1 | 4 | 2 | 3 | 5
f_12 | Mean | 1.10e+02 | 1.19e+02 | 1.10e+02 | 1.10e+02 | 1.10e+02
     | Std  | 7.41e-01 | 4.97e-01 | 8.35e-01 | 6.39e-01 | 9.15e-01
     | Rank | 1 | 2 | 1 | 1 | 1
f_13 | Mean | 1.89e+02 | 1.85e+02 | 1.93e+02 | 1.88e+02 | 1.87e+02
     | Std  | 4.91e+00 | 4.43e+00 | 5.34e+00 | 6.20e+00 | 5.88e+00
     | Rank | 4 | 1 | 5 | 3 | 2
f_14 | Mean | 4.99e+04 | 5.51e+04 | 5.02e+04 | 4.99e+04 | 5.41e+04
     | Std  | 1.77e+03 | 6.10e+03 | 2.45e+03 | 1.76e+03 | 4.57e+03
     | Rank | 1 | 4 | 2 | 1 | 3
f_15 | Mean | 1.02e+02 | 1.00e+02 | 1.00e+02 | 1.00e+02 | 1.00e+02
     | Std  | 1.23e+00 | 2.00e-07 | 3.03e-01 | 1.59e-11 | 3.89e-04
     | Rank | 2 | 1 | 1 | 1 | 1
Mean rank            | | 2.73 | 3.20 | 2.47 | 2.67 | 1.93
Overall rank         | | 4 | 5 | 2 | 3 | 1
Best/2nd Best/Worst  | | 3/4/1 | 2/3/3 | 4/5/2 | 4/3/2 | 10/1/2
