Article

A Novel Hybrid Algorithm Based on Grey Wolf Optimizer and Fireworks Algorithm

Zhihang Yue, Sen Zhang and Wendong Xiao

1 School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
2 Key Laboratory of Knowledge Automation for Industrial Processes of Ministry of Education, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(7), 2147; https://doi.org/10.3390/s20072147
Submission received: 17 February 2020 / Revised: 28 March 2020 / Accepted: 5 April 2020 / Published: 10 April 2020

Abstract

The grey wolf optimizer (GWO) is a meta-heuristic algorithm inspired by the social hierarchy of grey wolves (Canis lupus). The fireworks algorithm (FWA) is a nature-inspired optimization method that mimics the explosion process of fireworks. Both have a strong search capability; however, in some cases GWO converges to a local optimum, and FWA converges slowly. In this paper, a new hybrid algorithm (named FWGWO) is proposed that fuses the advantages of these two algorithms to reach the global optimum effectively. The proposed algorithm combines the exploration ability of FWA with the exploitation ability of GWO by setting a balance coefficient. To test the competence of the proposed hybrid FWGWO, 16 well-known benchmark functions with a wide range of dimensions and varied complexities are used in this paper. The results of FWGWO are compared with those of nine other algorithms, including the standard FWA, the native GWO, the enhanced grey wolf optimizer (EGWO), and the augmented grey wolf optimizer (AGWO). The experimental results show that FWGWO effectively improves the global search capability and convergence speed of GWO and FWA.

1. Introduction

Finding an optimal solution in a high-dimensional complex space is a common issue in many engineering fields [1]. When solving such problems, deterministic algorithms often struggle to find the optimal value, and their computational cost and running time can be prohibitive [2]. Meta-heuristic optimization algorithms, owing to their simplicity and strong search ability, have been widely used to solve complex problems. In recent years, scholars around the world have studied these algorithms extensively, and many nature-inspired meta-heuristic algorithms have been proposed, such as Particle Swarm Optimization (PSO) [3], Ant Colony Optimization (ACO) [4], the Artificial Bee Colony algorithm (ABC) [5], the Whale Optimization Algorithm (WOA) [6], the Bird Swarm Algorithm (BSA) [7], the Grey Wolf Optimizer (GWO) [8], the Fireworks Algorithm (FWA) [9], Biogeography-Based Optimization (BBO) [10], and Moth Flame Optimization (MFO) [11].
Inspired by the leadership hierarchy and hunting mechanism of grey wolves, the Grey Wolf Optimizer, a meta-heuristic optimization algorithm, was proposed by Mirjalili in 2014 [8]. The GWO algorithm mimics the hunting mechanism to search for the optimum. In [8], several benchmark functions are used to evaluate the performance of GWO; the experimental results show that GWO is feasible and superior to PSO, the Gravitational Search Algorithm (GSA) [12], and Differential Evolution (DE) [13] in solution accuracy and convergence speed. Owing to its few adjustable parameters and fast convergence, the GWO algorithm has been applied to a range of engineering problems, such as the speed control of DC motors [14], parameter estimation in surface waves [15], and load frequency control of multi-microgrid systems [16]. However, when facing problems with multiple optimal solutions, such as multidimensional feature selection, GWO converges to local optima and fails to find the global optimal solution. Inspired by observing fireworks explosions, Tan and Zhu proposed the Fireworks Algorithm (FWA) in 2010 [9]. FWA mimics the explosion of fireworks, which produce sparks and illuminate the surrounding area; the global optimal solution is sought by controlling the amplitude of each explosion and the number of sparks it generates. Several benchmark functions were employed to test the competence of FWA, and the experimental results [9] show that FWA achieves better solution accuracy and faster convergence than SPSO (standard particle swarm optimization) [17] and CPSO (clonal particle swarm optimization) [18].
When searching for the optimal value in a high-dimensional space, a single meta-heuristic algorithm often suffers from low accuracy, poor generalization, and a weak ability to avoid local optima. A hybrid algorithm exploits the differences between two optimization algorithms, combining their advantages and compensating for their shortcomings, to improve the overall performance on complex optimization problems [19]. The solutions found by the genetic algorithm (GA) [20] are often not accurate enough, and the evolution strategy (ES) tends to fall into local minima; a hybrid algorithm in [21] therefore combined GA and ES to make up for these shortcomings, and it was applied to electromagnetic optimization problems with satisfactory results. In [22], Mafarja established two hybrid models for designing feature selection techniques based on WOA: in the first, the simulated annealing (SA) algorithm is embedded in WOA; in the second, SA and WOA are used separately, with SA exploiting the search space found by WOA. In [23], Alomoush proposed a hybrid algorithm based on the Grey Wolf Optimizer (GWO) and Harmony Search (HS), in which GWO updates the pitch adjustment rate and bandwidth of HS to improve the global optimization capability. In [24], Sankalap Arora combined the grey wolf optimizer with the crow search algorithm; tested as a feature selection technique on 21 data sets, the hybrid showed many advantages in solving complex optimization problems. In [25], biogeography-based optimization (BBO) and differential evolution (DE) are fused by sharing population information; in BBO/DE, the BBO migration operators change as the iteration count increases, and the results indicate that the hybrid is more effective than traditional BBO and DE. In [26], a hybrid algorithm combining GWO with particle swarm optimization (PSO) is used as a load balancing technique in the cloud computing environment, improving convergence speed and simplicity compared with other algorithms. In [27], Zhu proposed a hybrid algorithm based on GWO and DE; tested on 23 benchmark functions and a non-deterministic polynomial hard problem, it showed good exploration performance. A global optimization algorithm combining BBO with FWA is proposed in [28], where the BBO migration operators are inserted into FWA to enhance information exchange within the population and improve the global optimization capability. In [29], WOA-SCA, composed of the whale optimization algorithm (WOA) and the sine cosine algorithm (SCA), is proposed; SCA and WOA handle exploration and exploitation, respectively, and the hybrid is used to place DGs (distributed generators) and DSTATCOMs (distribution static compensators) to enhance the voltage profile by minimizing the total power losses in the system.
As mentioned above, GWO is strong in exploitation; however, in some cases it converges prematurely and gets trapped in a local optimum. FWA has a high exploration capability but converges slowly compared with recent algorithms. This paper proposes a new hybrid algorithm that combines the exploration ability of FWA with the exploitation ability of GWO to improve the convergence characteristics.
The major work of this paper is summarized as follows:
(1)
A novel hybrid algorithm based on GWO and FWA is proposed.
(2)
The proposed algorithm is tested on 16 benchmark functions with a wide range of dimensions and varied complexities.
(3)
The performance of the proposed approach is compared with the standard GWO, FWA, Moth Flame Optimization (MFO), the Crow Search Algorithm (CSA) [30], improved Particle Swarm Optimization (IPSO) [31], Biogeography-Based Optimization (BBO), Particle Swarm Optimization (PSO), Enhanced GWO (EGWO) [32], and Augmented GWO (AGWO) [33].
The rest of the paper is arranged as follows: Section 2 introduces the GWO and FWA algorithms used in this paper. In Section 3, the hybrid FWGWO algorithm is proposed. Section 4 presents the experimental results and a comparison of the algorithms on the test functions. Finally, conclusions are given in Section 5.

2. Algorithms

2.1. Grey Wolf Optimizer

The Grey Wolf Optimizer (GWO) is a meta-heuristic algorithm proposed by Mirjalili et al. [8] in 2014. The GWO algorithm mimics the hunting mechanism and leadership hierarchy of wolves to search for the optimum. Grey wolves are social animals, living in packs of 5 to 12 on average, with a strict hierarchy of four levels, called alpha ( α ), beta ( β ), delta ( δ ), and omega ( ω ). Alpha is the first level and is responsible for decisions such as hunting, choosing a place to sleep, and the waking time. The second level, the beta wolves, are the alpha's candidates and help the alpha make decisions or engage in other activities. The third level is delta; the delta wolves are under the command of the first two levels and are mainly responsible for reconnaissance, sentry, guard, and other tasks. The last level of a pack is omega. The omega wolves submit to the wolves of the first three levels and maintain the integrity of the hierarchical structure. The mathematical model of the GWO algorithm's hierarchy and hunting behavior is as follows:

2.1.1. Hierarchical Structure

The GWO algorithm is a mathematical model based on the social hierarchy of wolves. The fittest solution found so far is considered the alpha ( α ); the second and third best solutions are the beta ( β ) and delta ( δ ), respectively, and the remaining solutions are omegas ( ω ). In the GWO algorithm, alpha, beta, and delta collectively guide the omegas in searching the solution space.

2.1.2. Encircling Prey

The grey wolf encircles the prey when hunting. The mathematical equations for this behavior are shown in Equations (1) and (2):
$$\vec{D} = \left| \vec{C} \cdot \vec{X}_p(t) - \vec{X}(t) \right| \qquad (1)$$

$$\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D} \qquad (2)$$
where $\vec{D}$ is the distance from the wolf to the prey, $\vec{X}$ and $\vec{X}_p$ are the position vectors of the grey wolf and the prey, respectively, and $t$ indicates the current iteration. $\vec{A}$ and $\vec{C}$ are coefficient vectors, calculated as follows:
$$\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a} \qquad (3)$$

$$\vec{C} = 2 \cdot \vec{r}_2 \qquad (4)$$
where $\vec{a}$ decreases linearly from 2 to 0 over the iterations, and $\vec{r}_1$ and $\vec{r}_2$ are random vectors in [0, 1].

2.1.3. Hunting

Grey wolves can easily encircle the prey once its location is known, and the hunt is usually led by the alpha. In a complex search space, however, the location of the prey (the optimum) is unknown at the outset. GWO therefore assumes that the three best solutions so far, alpha, beta, and delta, hold the most information about the location of the prey, and the other wolves update their positions based on these three positions, as shown in Figure 1.
The mathematical equations of this phase are as follows:
$$\vec{D}_\alpha = |\vec{C}_1 \cdot \vec{X}_\alpha - \vec{X}|, \quad \vec{D}_\beta = |\vec{C}_2 \cdot \vec{X}_\beta - \vec{X}|, \quad \vec{D}_\delta = |\vec{C}_3 \cdot \vec{X}_\delta - \vec{X}| \qquad (5)$$

$$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha, \quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta, \quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta \qquad (6)$$

$$\vec{X}(t+1) = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3} \qquad (7)$$
where $\vec{X}_\alpha$, $\vec{X}_\beta$, and $\vec{X}_\delta$ are the position vectors of alpha, beta, and delta; $\vec{D}_\alpha$, $\vec{D}_\beta$, and $\vec{D}_\delta$ represent the distances from the search agent to alpha, beta, and delta, respectively; $\vec{C}_1$, $\vec{C}_2$, and $\vec{C}_3$ are coefficient vectors; and $\vec{X}_1$, $\vec{X}_2$, and $\vec{X}_3$ are the steps taken toward alpha, beta, and delta. $\vec{X}$ and $\vec{X}(t+1)$ denote the position of the search agent before and after the update.
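For concreteness, the position update of Equations (3)-(7) can be sketched in a few lines of Python. This is a minimal illustration of the update rule, not the authors' implementation; the function name and the use of NumPy are our own assumptions.

```python
import numpy as np

def gwo_position_update(X, X_alpha, X_beta, X_delta, a, rng):
    # Move one search agent by Equations (3)-(7). All position arguments are
    # d-dimensional NumPy arrays; a is the scalar from Equation (8);
    # rng is a numpy random Generator.
    steps = []
    for leader in (X_alpha, X_beta, X_delta):
        A = 2.0 * a * rng.random(X.shape) - a   # Equation (3)
        C = 2.0 * rng.random(X.shape)           # Equation (4)
        D = np.abs(C * leader - X)              # Equation (5)
        steps.append(leader - A * D)            # Equation (6)
    return (steps[0] + steps[1] + steps[2]) / 3.0   # Equation (7)
```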

2.1.4. Search for Prey (Exploration) and Attacking Prey (Exploitation)

When the prey stops moving, the grey wolves complete the hunt by attacking. To mimic this process of approaching the prey, the GWO algorithm decreases $a$ linearly from 2 to 0, as shown in Equation (8):
$$a = 2 - 2t / Max\_iter \qquad (8)$$
According to Equation (3), $A$ is a random value in the range $[-2a, 2a]$. GWO uses $A$ to force wolves to move closer to or farther away from the prey: if $|A| < 1$, the wolf is forced to attack the prey; if $|A| > 1$, the wolf is forced to diverge from the prey (a local minimum) in search of a fitter one. $C$ is a random value in the range [0, 2], which helps GWO avoid being trapped in local optima. The pseudo code of the GWO algorithm is presented in Algorithm 1.
Algorithm 1 Pseudo code of GWO
1. Initialize the wolf population X_i (i = 1, 2, …, n)
2. Initialize a, A, and C
3. Calculate the fitness of each search agent
4. X_α = the best search agent
5. X_β = the second best search agent
6. X_δ = the third best search agent
7. while (t < max number of iterations)
8.   for each search agent
9.     Update the position of the current search agent by Equation (7)
10.  end for
11.  Update a, A, and C
12.  Calculate the fitness of all search agents
13.  Update X_α, X_β, and X_δ
14.  t = t + 1
15. end while
16. return X_α
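Algorithm 1 translates almost line for line into the sketch below, which reuses gwo_position_update from above. It is a minimal sketch under our own assumed defaults (population 20, 500 iterations, bounds [-100, 100]), not the paper's code; a full implementation would also retain the best-so-far alpha rather than recomputing it from the final population.

```python
import numpy as np

def gwo(f, dim, n_agents=20, max_iter=500, lb=-100.0, ub=100.0, seed=0):
    # Minimal GWO loop following Algorithm 1 (minimization).
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_agents, dim))   # initialize the wolf population
    for t in range(max_iter):
        fit = np.array([f(x) for x in X])
        i1, i2, i3 = np.argsort(fit)[:3]       # alpha, beta, delta of this pack
        alpha, beta, delta = X[i1].copy(), X[i2].copy(), X[i3].copy()
        a = 2.0 - 2.0 * t / max_iter           # Equation (8)
        for i in range(n_agents):
            X[i] = gwo_position_update(X[i], alpha, beta, delta, a, rng)
        X = np.clip(X, lb, ub)                 # keep agents inside the bounds
    fit = np.array([f(x) for x in X])
    best = int(np.argmin(fit))
    return X[best], fit[best]
```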

2.2. Fireworks Algorithm

Inspired by the way fireworks burst into sparks in the night sky and illuminate the surrounding area, Tan and Zhu proposed the Fireworks Algorithm (FWA) in 2010 [9]. In FWA, a firework is considered a feasible solution in the search space of the optimization problem, and the sparking of a firework is treated as a search of that firework's neighborhood. The specific steps of the fireworks algorithm are as follows:
(1)
N fireworks are randomly generated in the search space, and each firework represents a feasible solution.
(2)
Evaluate the quality of these fireworks. On the assumption that the optimal location is likely to be close to fireworks with better fitness, such fireworks receive a smaller search amplitude and more explosion sparks with which to search their surroundings; conversely, fireworks with poor fitness receive fewer explosion sparks and a larger search amplitude. The number of explosion sparks and the explosion amplitude of each firework are calculated as shown in Equations (9) and (10):
$$S_i = m \cdot \frac{y_{\max} - f(x_i) + \xi}{\sum_{i=1}^{N} (y_{\max} - f(x_i)) + \xi} \qquad (9)$$

$$A_i = \hat{A} \cdot \frac{f(x_i) - y_{\min} + \xi}{\sum_{i=1}^{N} (f(x_i) - y_{\min}) + \xi} \qquad (10)$$
where $S_i$ is the number of explosion sparks and $A_i$ is the amplitude of the explosion. $y_{\min} = \min(f(x_i)), i = 1, 2, \ldots, N$ is the minimum fitness among the $N$ fireworks, and $y_{\max} = \max(f(x_i)), i = 1, 2, \ldots, N$ is the maximum. $m$ and $\hat{A}$ are parameters controlling the total number of sparks and the maximum explosion amplitude, respectively. $\xi$ is the smallest representable constant of the machine, used to avoid division by zero. To prevent fireworks with good fitness from producing far more explosion sparks than those with poor fitness, Equation (11) bounds the number of sparks generated:
$$\hat{S}_i = \begin{cases} round(a \cdot m) & \text{if } S_i < am \\ round(b \cdot m) & \text{if } S_i > bm, \; a < b < 1 \\ round(S_i) & \text{otherwise} \end{cases} \qquad (11)$$
where $a$ and $b$ are constant parameters and $round(\cdot)$ denotes the rounding function.
(3)
To guarantee the diversity of the fireworks, FWA includes a second method of generating sparks, the Gaussian explosion. For a randomly selected firework, a number of dimensions are randomly selected and updated as shown in Equation (12) to obtain the position of a Gaussian spark in dimension k:
$$\hat{x}_i^k = x_i^k \times e \qquad (12)$$
where $e \sim N(1, 1)$ is a Gaussian random value with mean 1 and standard deviation 1.
The generated explosion sparks and Gaussian sparks may exceed the boundaries of the search space. Equation (13) maps the sparks beyond the boundary at dimension k to a new position:
$$\tilde{x}_k^j = x_k^{\min} + |\tilde{x}_k^j| \,\%\, (x_k^{\max} - x_k^{\min}) \qquad (13)$$
where $x_k^{\max}$ and $x_k^{\min}$ are the upper and lower bounds of the search space in dimension k, and % denotes the modulo operation.
(4)
$N$ locations must be selected from the explosion sparks, the Gaussian sparks, and the current fireworks to form the next generation of fireworks. In FWA, the location with the best fitness is always kept for the next iteration; the remaining $N - 1$ locations are then chosen with probabilities determined by their distances to the other locations. The distance measure and the selection probability of $x_i$ are defined as follows:
$$R(x_i) = \sum_{j \in K} d(x_i, x_j) = \sum_{j \in K} \lVert x_i - x_j \rVert \qquad (14)$$

$$p(x_i) = \frac{R(x_i)}{\sum_{j \in K} R(x_j)} \qquad (15)$$
Under this selection strategy, if there are many other locations around $x_i$, its selection probability is reduced, which preserves the diversity of the next generation. A compact Python sketch of steps (2)-(4) is given below.
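The following helpers are our own Python reading of Equations (9)-(15), not the reference FWA implementation. The default parameter values (m = 50, Â = 40, a = 0.04, b = 0.8) follow Table 1; how many dimensions the Gaussian explosion perturbs is an assumption, since the paper only says "a number of dimensions are randomly selected".

```python
import numpy as np

def spark_counts_and_amplitudes(fitness, m=50, A_hat=40.0, a=0.04, b=0.8):
    # Equations (9)-(11) for a minimization problem.
    eps = np.finfo(float).eps                        # plays the role of xi
    y_min, y_max = fitness.min(), fitness.max()
    S = m * (y_max - fitness + eps) / (np.sum(y_max - fitness) + eps)
    A = A_hat * (fitness - y_min + eps) / (np.sum(fitness - y_min) + eps)
    S = np.round(np.clip(S, a * m, b * m)).astype(int)   # Equation (11)
    return S, A

def gaussian_spark(x, rng):
    # Equation (12): scale a random subset of dimensions by e ~ N(1, 1).
    spark = x.copy()
    dims = rng.random(x.size) < 0.5                  # subset choice is assumed
    spark[dims] = spark[dims] * rng.normal(1.0, 1.0)
    return spark

def map_to_bounds(x, lb, ub):
    # Equation (13): wrap out-of-range coordinates back into [lb, ub].
    x = x.copy()
    out = (x < lb) | (x > ub)
    x[out] = lb + np.abs(x[out]) % (ub - lb)
    return x

def select_next_generation(candidates, fitness, n, rng):
    # Keep the elite, then draw n - 1 locations by Equations (14)-(15).
    # (For brevity the elite may be drawn again; a full implementation
    # would exclude it from the random draw.)
    best = int(np.argmin(fitness))
    dists = np.linalg.norm(candidates[:, None, :] - candidates[None, :, :], axis=2)
    R = dists.sum(axis=1)                            # Equation (14)
    prob = R / R.sum()                               # Equation (15)
    rest = rng.choice(len(candidates), size=n - 1, p=prob, replace=False)
    return candidates[np.concatenate(([best], rest))]
```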
The execution flow of the FWA algorithm is shown in Algorithm 2.
Algorithm 2 Pseudo code of FWA
1. Randomly select n locations for fireworks
2. while stop criteria = false do
3.   Set off n fireworks at the n locations
4.   for each firework x_i do
5.     Calculate the number of sparks the firework yields by Equation (9)
6.     Obtain the locations of each spark of the firework x_i
7.   end for
8.   for k = 1 : number of Gaussian sparks do
9.     Randomly select a firework x_j
10.    Generate a Gaussian spark for the firework
11.  end for
12.  Select the best location and keep it for the next explosion generation
13.  Randomly select n − 1 locations from the two types of sparks and the current
14.  fireworks according to the probability given in Equation (15)
15. end while

3. Hybrid FWGWO Algorithm

3.1. Establishment of FWGWO

As mentioned in Section 2, GWO is strong at exploitation but weak at avoiding premature convergence and local optima, while FWA has a strong exploration capability but is weaker in exploitation. In this section, the hybrid FWGWO algorithm is proposed to combine the exploitation capability of GWO with the exploration capability of FWA and thereby obtain a better global optimization capability. FWGWO alternates between the FWA algorithm for exploration of the search space and the GWO algorithm for exploitation, without changing the general operation of either algorithm. To balance exploration against exploitation, an adaptive equilibrium coefficient is proposed in this paper. An update of the position $X_\alpha$ means that the best fitness has changed; when the current position moves closer to the optimal solution, the coefficient $p$ is updated to change the search strategy, as shown in Equation (16):
$$p = 0.9 \times \left( 1 - \cos\left( \frac{\pi}{2} \cdot \frac{t}{Max\_iter} \right) \right) \qquad (16)$$
where $p$ represents the adaptive balance coefficient, $t$ is the current number of iterations, and $Max\_iter$ indicates the maximum number of iterations.
A random value $r$ in [0, 1] is drawn and compared with the adaptive balance coefficient $p$: if $r > p$, the next iteration is executed using the FWA algorithm; otherwise, the GWO algorithm is used. The curve of $p$ is shown in Figure 2. The value of $p$ is small and increases slowly in the early optimization stage, which ensures that FWGWO explores a large search space through multiple calls of FWA and avoids being trapped in local minima. In the later optimization phases, as $p$ increases rapidly, the algorithm exploits small regions to search for the optimum efficiently. So that GWO is not the only algorithm executed in the final stage, $p$ grows only within [0, 0.9]. In some cases, after one FWA iteration, FWGWO would proceed directly to another FWA iteration without any exploitation in between, jumping out of the current region before it has been searched and thereby missing the global optimal solution. To avoid this, FWGWO exploits the current region with at least $T$ iterations of the GWO algorithm before the next FWA iteration; in this paper, $T$ is set to 10. A variable $k$ counts how many GWO iterations have occurred since the last FWA iteration: $k$ is initialized at the start of FWGWO and incremented by 1 after each GWO iteration. FWA executes an iteration only if $k \geq T$ and $r > p$, and $k$ is reset to 0 at the end of that iteration.
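A small sketch of this switching rule (our own illustration; the function names are assumptions):

```python
import math

def balance_coefficient(t, max_iter):
    # Equation (16): p rises slowly from 0 early on, then quickly toward 0.9.
    return 0.9 * (1.0 - math.cos(math.pi / 2.0 * t / max_iter))

def choose_phase(k, p, rng, T=10):
    # The r > p rule, guarded so that FWA fires only after at least T
    # consecutive GWO iterations (T = 10 in this paper).
    # rng is a numpy random Generator.
    if k >= T and rng.random() > p:
        return "FWA"
    return "GWO"
```

Early in a run p is close to 0, so once k reaches T the draw r > p almost always succeeds and FWA explores; near the end p approaches 0.9 and GWO dominates.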
The pseudo code of the hybrid FWGWO algorithm is shown in Algorithm 3.
Algorithm 3 Pseudo code of FWGWO
1. Initialize the wolf population X_i (i = 1, 2, …, n)
2. Initialize a, A, k, t, and C
3. Calculate the fitness of each search agent
4. X_α = the best search agent
5. X_β = the second best search agent
6. X_δ = the third best search agent
7. while (t < max number of iterations)
8.   for each search agent
9.     Update the position of the current search agent by Equation (7)
10.  end for
11.  Update a, A, and C
12.  Calculate the fitness of all search agents
13.  Update X_α, X_β, and X_δ
14.  if X_α changed then
15.    Update the adaptive balance coefficient p by Equation (16)
16.  end if
17.  if k >= T and rand() > p then
18.    for each search agent x_i do
19.      Generate explosion sparks
20.    end for
21.    Generate m Gaussian sparks for the search agents
22.    Select new search agents by Equation (15)
23.    t = t + 1
24.    k = 0
25.  end if
26.  k = k + 1, t = t + 1
27. end while
28. return X_α
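Stitching the earlier sketches together gives an end-to-end illustration of Algorithm 3. Again, this is a sketch under stated assumptions rather than the authors' code: the uniform explosion offsets within the amplitude A_i and the tracking of the best-so-far solution are our own choices.

```python
import numpy as np

def fwgwo(f, dim, n_agents=20, max_iter=500, lb=-100.0, ub=100.0, T=10, seed=0):
    # Hybrid loop following Algorithm 3, reusing gwo_position_update,
    # balance_coefficient, spark_counts_and_amplitudes, gaussian_spark,
    # map_to_bounds, and select_next_generation from the sketches above.
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_agents, dim))
    fit = np.array([f(x) for x in X])
    alpha = X[np.argmin(fit)].copy()
    p, k, t = 0.0, 0, 0
    while t < max_iter:
        i1, i2, i3 = np.argsort(fit)[:3]
        leaders = (X[i1].copy(), X[i2].copy(), X[i3].copy())
        a = 2.0 - 2.0 * t / max_iter
        for i in range(n_agents):                  # GWO exploitation step
            X[i] = gwo_position_update(X[i], *leaders, a, rng)
        X = np.clip(X, lb, ub)
        fit = np.array([f(x) for x in X])
        if fit.min() < f(alpha):                   # X_alpha changed: update p
            alpha = X[np.argmin(fit)].copy()
            p = balance_coefficient(t, max_iter)
        if k >= T and rng.random() > p:            # FWA exploration step
            S, A = spark_counts_and_amplitudes(fit)
            sparks = [map_to_bounds(X[i] + rng.uniform(-A[i], A[i], dim), lb, ub)
                      for i in range(n_agents) for _ in range(S[i])]
            sparks += [map_to_bounds(gaussian_spark(X[rng.integers(n_agents)], rng),
                                     lb, ub)
                       for _ in range(5)]          # 5 Gaussian sparks, per Table 1
            cand = np.vstack([X] + sparks)
            cfit = np.array([f(x) for x in cand])
            X = select_next_generation(cand, cfit, n_agents, rng)
            fit = np.array([f(x) for x in X])
            if fit.min() < f(alpha):
                alpha = X[np.argmin(fit)].copy()
            t += 1
            k = 0
        k += 1
        t += 1
    return alpha, f(alpha)
```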

3.2. Time Complexity Analysis of FWGWO

The time complexity of FWGWO is determined mainly by the population size $n$, the dimension of the solution space $d$, and the maximum number of iterations $t$: O(FWGWO) = O(population initialization) + O(fitness calculation for the entire population) + O(population position updates). The time complexity of the population initialization is $O(n \times d)$, and the time complexity of calculating the fitness of the entire population is $O(t \times n \times d)$. The position-updating part consists of two terms, O(position update with GWO) + O(position update with FWA), giving $O(10/11 \times t \times n \times d + 1/11 \times t \times 3 \times n \times d) \approx O(t \times n \times d)$. Therefore, the time complexity of the FWGWO algorithm is $O(n \times d + t \times n \times d + t \times n \times d) = O((2t + 1) \times n \times d)$. For comparison, the time complexity of GWO is $O((2t + 1) \times n \times d)$ and that of FWA is $O((4t + 1) \times n \times d)$; the time complexity of FWGWO is thus the same as that of GWO and smaller than that of FWA.

4. Experimental Section and Results

In this section, 16 benchmark functions with different characteristics are used to evaluate the proposed hybrid FWGWO algorithm. The experimental results are compared with those of nine other algorithms to verify the superiority of FWGWO.

4.1. Compared Algorithms

A total of nine algorithms, namely IPSO, PSO, BBO, CSA, MFO, FWA, GWO, AGWO, and EGWO, were selected for comparison with the proposed FWGWO algorithm. These include both classical algorithms and algorithms proposed in recent years. Their parameter settings, shown in Table 1, follow the original papers. For all algorithms in this experiment, the population size is 20, the dimension is 100, and the maximum number of iterations is 500. The experiment was also carried out with other parameter settings, with the population size set to 10, 20, 30, and 50 and the dimension set to 20, 50, 100, and 500; the results were similar, so considering the length of this paper, the typical parameters above are taken as the example.

4.2. Benchmark Functions

Sixteen benchmark functions, listed in Table A1 (see Appendix A), are used to evaluate the optimization performance of FWGWO. These benchmark functions fall into two categories: the unimodal functions f1–f8, which have a single global optimum and are used to test exploitation capability, and the multimodal functions f9–f16, which have multiple local optima and are used to assess the ability of FWGWO to find global optima.
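As a concrete example of the two categories, here are the unimodal sphere function (f1) and the multimodal Rastrigin function (f9) from Table A1, written as Python callables that the sketches above can minimize directly (the function names are ours):

```python
import numpy as np

def f1_sphere(x):
    # f1, unimodal: sum of squares, global minimum 0 at the origin.
    return float(np.sum(x ** 2))

def f9_rastrigin(x):
    # f9, multimodal: many local optima, global minimum 0 at the origin.
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))
```

For instance, fwgwo(f9_rastrigin, dim=100, lb=-5.12, ub=5.12) would run the hybrid sketch on the 100-dimensional Rastrigin function with the range from Table A1.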

4.3. Performance Metrics

Three performance indicators, the mean fitness, the standard deviation of the fitness, and the best fitness, are used in this paper to evaluate the results of the experiments. The first two are defined as follows:
$$MeanFitness = \frac{1}{M} \sum_{i=1}^{M} G_i \qquad (17)$$

$$std = \sqrt{ \frac{ \sum_{i=1}^{M} (G_i - MeanFitness)^2 }{M} } \qquad (18)$$
where $M$ is the total number of independently repeated experiments ($M = 30$ in this paper) and $G_i$ is the function fitness obtained in the $i$-th experiment.
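A direct transcription of Equations (17) and (18) (our own helper, with an assumed name):

```python
import numpy as np

def summarize_runs(G):
    # Mean, standard deviation, and best fitness over M repeated runs.
    # Note that Equation (18) divides by M, not the sample form M - 1.
    G = np.asarray(G, dtype=float)
    mean_fitness = G.mean()                           # Equation (17)
    std = np.sqrt(np.mean((G - mean_fitness) ** 2))   # Equation (18)
    return mean_fitness, std, G.min()
```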
In addition, a nonparametric statistical test, Wilcoxon's rank-sum test [34], is used to show that the proposed FWGWO algorithm provides a significant improvement over the other algorithms. The test is carried out on the results of FWGWO and each other algorithm for every benchmark function at the 5% significance level. All experiments were run in MATLAB R2017a on a computer with an Intel i5-5200U 2.2 GHz processor and 8 GB of memory.
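The paper does not state which implementation of the test it used; SciPy's ranksums is one standard choice and reproduces the p/h convention of Table 4:

```python
from scipy.stats import ranksums

def significance(fwgwo_runs, other_runs, alpha=0.05):
    # Two-sided Wilcoxon rank-sum test on two sets of 30 repeated results.
    # h = 1 rejects the null hypothesis (significant difference), h = 0 does not.
    _, p = ranksums(fwgwo_runs, other_runs)
    return p, int(p < alpha)
```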

4.4. Comparison and Analysis of Simulation Results

Thirty independently repeated experiments were run for each of the 10 algorithms mentioned above on every benchmark function. Table 2 lists the best, mean, and standard deviation of the fitness obtained by each algorithm on each benchmark function. Compared with the other algorithms, the mean fitness obtained by FWGWO after 500 iterations is better and closer to the global optimum on most of the benchmark functions. In terms of the best fitness obtained over the 30 repetitions, FWGWO outperforms the other algorithms on all 16 functions; moreover, the best fitness obtained by FWGWO is 0 on functions f9, f11, and f16, showing that FWGWO found the global optimum of these functions. These cases verify the superiority of FWGWO over the other algorithms.
Furthermore, the standard deviations over the 30 independent experiments are also smaller. As can be seen in Table 2, the results of FWGWO on the unimodal functions f1–f5, in both mean and standard deviation, are superior to those of the other algorithms. As mentioned above, the unimodal functions test the exploitation ability of an algorithm; these results show that, compared with FWA, GWO, and the other algorithms, FWGWO finds a better solution within a limited number of iterations and has better exploitation ability. Table 2 also shows that FWGWO obtains better results on seven of the eight multimodal functions f9–f16. The multimodal functions test exploration ability, so the experimental results indicate that FWGWO has better global optimization ability than the other algorithms used in this paper, including the original GWO, the standard FWA, and the enhanced GWO.
Figure 3 and Figure 4 show the convergence of FWGWO and the other algorithms on the 16 benchmark functions; the curves are the average fitness over the 30 repetitions. As shown in Figure 3, in the first few iterations FWGWO converges slightly more slowly than the other algorithms, because it explores extensively to locate regions worth exploiting. In the subsequent iterations, FWGWO converges quickly within such a region and reaches a better solution than the other algorithms after the 500 iterations. Figure 4 compares the convergence of FWGWO with the other algorithms on the multimodal functions, which, as mentioned above, test an algorithm's ability to avoid local optima.
In Figure 4, unlike the other algorithms, which become trapped in local optima, FWGWO continues to find better fitness values on most benchmark functions and keeps approaching the global optimum. This demonstrates that FWGWO has a better global search ability than the compared algorithms. Taken together, the hybrid FWGWO algorithm is superior to the other algorithms in both solution accuracy and convergence, and it can be concluded that FWGWO performs better overall on most optimization problems.
The mean running time over the 30 repetitions is shown in Table 3. The results show that the running time of FWGWO is slightly longer than that of GWO and much shorter than that of FWA. As discussed in Section 3, FWA has a larger time complexity and GWO a smaller one; because the FWA location-update strategy is invoked in FWGWO only a few times, FWGWO runs somewhat longer than GWO. Considering the improvement in convergence performance, this small sacrifice in running time is acceptable.
Wilcoxon's nonparametric statistical test is conducted at the 5% significance level to determine whether FWGWO provides a significant improvement over the other algorithms. The rank-sum test is applied to the results of the different algorithms on each benchmark function, yielding p and h values as significance indicators. If the p value is less than 0.05, the null hypothesis is rejected, the h value is 1, and the two tested algorithms are considered significantly different; conversely, when the p value is greater than 0.05, the h value is 0 and the two algorithms are considered not significantly different. In this paper, the rank-sum test is applied to the results of the 30 repeated experiments of FWGWO and each other algorithm on the 16 benchmark functions. The test results are shown in Table 4. In most cases the h values are 1; the exceptions are the comparisons between AGWO and FWGWO on f8, f11, f15, and f16, where the p values exceed 0.05 and the h values are 0, meaning that the optimization efficiency of FWGWO and AGWO is similar on these functions. Overall, the results show that in most cases the performance of FWGWO is significantly better than that of the other algorithms.

5. Conclusions

In this paper, a hybrid algorithm, FWGWO, based on the Grey Wolf Optimizer (GWO) and the Fireworks Algorithm (FWA), is proposed for optimization in multidimensional complex spaces. FWGWO combines the good exploitation ability of GWO with the strong exploration ability of FWA. To balance exploitation against exploration, an adaptive balance coefficient controls the probability of performing exploitation or exploration; by varying this coefficient, FWGWO avoids local optima as much as possible while retaining a fast convergence speed. To verify its performance, FWGWO was tested 30 times against the IPSO, PSO, BBO, CSA, MFO, FWA, GWO, AGWO, and EGWO algorithms on 16 benchmark functions. The comparison shows that FWGWO has better global optimization ability and faster convergence than the other algorithms. In addition, the Wilcoxon rank-sum test applied to the optimization results shows that FWGWO provides a significant improvement over the other algorithms.
This paper presents a new method for combining two algorithms. In the future, this method could be applied to other enhanced algorithms, and FWGWO could be used to solve single- and multi-objective optimization problems such as feature selection.

Author Contributions

Data curation, Z.Y.; Formal analysis, Z.Y.; Funding acquisition, S.Z. and W.X.; Methodology, Z.Y.; Resources, Z.Y.; Software, Z.Y.; Supervision, S.Z.; Validation, Z.Y.; Writing – original draft, Z.Y.; Writing – review & editing, Z.Y. and S.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China under Grants No. 61673056 and 61673055, the Beijing Natural Science Foundation under Grant No. 4182039, and the National Key Research and Development Program of China under Grant 2017YFB1401203.

Acknowledgments

The authors would like to thank the anonymous reviewers for their insightful suggestions, which helped to improve the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Benchmark functions.

| Function | Range | $f_{\min}$ |
|---|---|---|
| $F_1 = \sum_{i=1}^{D} x_i^2$ | $[-100, 100]$ | 0 |
| $F_2 = \sum_{i=1}^{D} \big( \sum_{j=1}^{i} x_j \big)^2$ | $[-100, 100]$ | 0 |
| $F_3 = \max\{ \lvert x_i \rvert, 1 \le i \le D \}$ | $[-100, 100]$ | 0 |
| $F_4 = \sum_{i=1}^{D-1} [ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 ]$ | $[-30, 30]$ | 0 |
| $F_5 = \sum_{i=1}^{D} ( \lfloor x_i + 0.5 \rfloor )^2$ | $[-100, 100]$ | 0 |
| $F_6 = 10^6 \cdot x_1^2 + \sum_{i=2}^{D} x_i^6$ | $[-1, 1]$ | 0 |
| $F_7 = \sum_{i=2}^{D} (10^6)^{(i-1)/(n-1)} \cdot x_i^2$ | $[-100, 100]$ | 0 |
| $F_8 = \sum_{i=2}^{D} i x_i^2$ | $[-10, 10]$ | 0 |
| $F_9 = \sum_{i=1}^{D} [ x_i^2 - 10 \cos(2 \pi x_i) + 10 ]$ | $[-5.12, 5.12]$ | 0 |
| $F_{10} = -20 \exp\big( -0.2 \sqrt{ \frac{1}{D} \sum_{i=1}^{D} x_i^2 } \big) - \exp\big( \frac{1}{D} \sum_{i=1}^{D} \cos(2 \pi x_i) \big) + 20 + e$ | $[-32, 32]$ | 0 |
| $F_{11} = \frac{1}{4000} \sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\big( \frac{x_i}{\sqrt{i}} \big) + 1$ | $[-60, 60]$ | 0 |
| $F_{12} = \frac{\pi}{n} \big\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{D-1} (y_i - 1)^2 [ 1 + 10 \sin^2(\pi y_{i+1}) ] + (y_n - 1)^2 \big\} + \sum_{i=1}^{D} u(x_i, 5, 100, 4)$ | $[-50, 50]$ | 0 |
| $F_{13} = 0.1 \big\{ \sin^2(3 \pi x_1) + \sum_{i=1}^{D-1} (x_i - 1)^2 [ 1 + 10 \sin^2(3 \pi x_{i+1}) ] + (x_n - 1)^2 \big\} + \sum_{i=1}^{D} u(x_i, 5, 100, 4)$ | $[-50, 50]$ | 0 |
| $F_{14} = \sum_{i=1}^{D} \lvert x_i \sin(x_i) + 0.1 x_i \rvert$ | $[-10, 10]$ | 0 |
| $F_{15} = 0.5 + \frac{ \sin^2\big( \sqrt{ \sum_{i=1}^{D} x_i^2 } \big) - 0.5 }{ \big( 1 + 0.001 \sum_{i=1}^{D} x_i^2 \big)^2 }$ | $[-100, 100]$ | 0 |
| $F_{16} = \sum_{i=1}^{D-1} [ x_i^2 + 2 x_{i+1}^2 - 0.3 \cos(3 \pi x_i) - 0.4 \cos(4 \pi x_{i+1}) + 0.7 ]$ | $[-15, 15]$ | 0 |

References

  1. Şenel, F.A.; Gökçe, F.; Yüksel, A.S.; Yiğit, T. A novel hybrid PSO–GWO algorithm for optimization problems. Eng. Comput. 2018, 35, 1359–1373. [Google Scholar] [CrossRef]
  2. Brownlee, J. Clever Algorithms: Nature-Inspired Programming Recipes; Lulu Press: Morrisville, NC, USA, 2011. [Google Scholar]
  3. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization. Swarm Intell. 2007, 1, 33–57. [Google Scholar] [CrossRef]
  4. Colorni, A.; Dorigo, M.; Maniezzo, V. Distributed optimization by ant colonies. In Proceedings of the European Conference on Artificial Life, Paris, France, 11–13 December 1991. [Google Scholar]
  5. Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Technical Report-TR06; Erciyes University: Kayseri, Turkey, 2005. [Google Scholar]
  6. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  7. Meng, X.; Gao, X.Z.; Lu, L.; Liu, Y.; Zhang, H. A new bio-inspired optimization algorithm: Bird swarm algorithm. J. Exp. Theor. Artif. Intell. 2016, 28, 673–687. [Google Scholar] [CrossRef]
  8. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  9. Tan, Y.; Zhu, Y. Fireworks algorithm for optimization. Comput. Vis. 2010, 6145, 355–364. [Google Scholar]
  10. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef] [Green Version]
  11. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  12. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  13. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  14. Agarwal, J.; Parmar, G.; Gupta, R.; Sikander, A. Analysis of grey wolf optimizer based fractional order PID controller in speed control of DC motor. Microsyst. Technol. 2018, 24, 4997–5006. [Google Scholar] [CrossRef]
  15. Song, X.; Tang, L.; Zhao, S.; Zhang, X.; Li, L.; Huang, J.; Cai, W. Grey wolf optimizer for parameter estimation in surface waves. Soil Dyn. Earthq. Eng. 2015, 75, 147–157. [Google Scholar] [CrossRef]
  16. Yammani, C.; Maheswarapu, S. Load frequency control of multi-microgrid system considering renewable energy sources using grey wolf optimization. Smart Sci. 2019, 7, 198–217. [Google Scholar]
  17. Bratton, D.; Kennedy, J. Defining a standard for particle swarm optimization. In Proceedings of the IEEE Swarm Intelligence Symposium, Honolulu, HI, USA, 1–5 April 2007; pp. 120–127. [Google Scholar]
  18. Tan, Y.; Xiao, Z.M. Clonal particle swarm optimization and its applications. In Proceedings of the IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 2303–2309. [Google Scholar]
  19. Mirjalili, S.; Wang, G.-G.; Coelho, L.D.S. Binary optimization using hybrid particle swarm optimization and gravitational search algorithm. Neural Comput. Appl. 2014, 25, 1423–1435. [Google Scholar] [CrossRef]
  20. Goldberg, D.E. Genetic Algorithms in Search Optimization and Machine Learning; Addison-Wesley: Boston, MA, USA, 1989. [Google Scholar]
  21. Choi, K.; Jang, D.-H.; Kang, S.-I.; Lee, J.-H.; Chung, T.-K.; Kim, H.-S.; Jung, T.-K. Hybrid algorithm combing genetic algorithm with evolution strategy for antenna design. IEEE Trans. Magn. 2015, 52, 1–4. [Google Scholar] [CrossRef]
  22. Mafarja, M.; Mirjalili, S. Hybrid whale optimization algorithm with simulated annealing for feature selection. Neurocomputing 2017, 260, 302–312. [Google Scholar] [CrossRef]
  23. Alomoush, A.A.; Alsewari, A.A.; Alamri, H.S.; Alamri, H.S.; Zamli, K.Z. Hybrid Harmony search algorithm with grey wolf optimizer and modified opposition-based learning. IEEE Access 2019, 7, 68764–68785. [Google Scholar] [CrossRef]
  24. Arora, S.; Singh, H.; Sharma, M.; Sharma, S.; Anand, P. A new hybrid algorithm based on grey wolf optimization and crow search algorithm for unconstrained function optimization and feature selection. IEEE Access 2019, 7, 26343–26361. [Google Scholar] [CrossRef]
  25. Kumar, S.; Pant, M.; Dixit, A.; Bansal, R. BBO-DE: Hybrid algorithm based on BBO and DE. In Proceedings of the International Conference on Computing, Communication and Automation (ICCCA), Greater Noida, India, 5–6 May 2017; pp. 379–383. [Google Scholar]
  26. Gohil, B.N.; Patel, D.R. A hybrid GWO-PSO algorithm for load balancing in cloud computing environment. In Proceedings of the Second International Conference on Green Computing and Internet of Things (ICGCIoT), Bangalore, India, 16–18 August 2018; pp. 185–191. [Google Scholar]
  27. Zhu, A.; Xu, C.; Li, Z.; Wu, J.; Liu, Z. Hybridizing grey wolf optimization with differential evolution for global optimization and test scheduling for 3D stacked SoC. J. Syst. Eng. Electron. 2015, 26, 317–328. [Google Scholar] [CrossRef]
  28. Zhang, B.; Zhang, M.-X.; Zheng, Y. A hybrid biogeography-based optimization and fireworks algorithm. In Proceedings of the IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 3200–3206. [Google Scholar]
  29. Selim, A.; Kamel, S.; Jurado, F. Voltage profile improvement in active distribution networks using hybrid WOA-SCA optimization algorithm. In Proceedings of the Twentieth International Middle East Power Systems Conference (MEPCON), Cairo, Egypt, 18–20 December 2018; pp. 1064–1068. [Google Scholar]
  30. Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct. 2016, 169, 1–12. [Google Scholar]
  31. Jiang, Y.; Hu, T.; Huang, C.; Wu, X. An improved particle swarm optimization algorithm. Appl. Math. Comput. 2007, 193, 231–239. [Google Scholar] [CrossRef]
  32. Sharma, S.; Salgotra, R.; Singh, U. An enhanced grey wolf optimizer for numerical optimization. In Proceedings of the International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS), Coimbatore, India, 17–18 March 2017; pp. 1–6. [Google Scholar]
  33. Hamour, H.; Kamel, S.; Nasrat, L.; Yu, J. Distribution network reconfiguration using augmented grey wolf optimization algorithm for power loss minimization. In Proceedings of the International Conference on Innovative Trends in Computer Engineering (ITCE), Aswan, Egypt, 2–4 February 2019; pp. 450–454. [Google Scholar]
  34. Derrac, J.; García, S.; Molina, D.; Herrera, F.; Cabrera, D.M. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
Figure 1. Position updating in GWO.
Figure 2. Adaptive balance coefficient.
Figure 3. Convergence curves of the unimodal functions.
Figure 4. Convergence curves of the multimodal functions.
Table 1. Parameter settings.

| Algorithm | Parameter | Value(s) |
|---|---|---|
| GWO | a | linearly decreased from 2 to 0 |
| FWA | Total number of sparks | 50 |
| FWA | Maximum explosion amplitude | 40 |
| FWA | Number of Gaussian sparks | 5 |
| FWA | a | 0.04 |
| FWA | b | 0.8 |
| IPSO | Inertia w (wMin, wMax) | [0.4, 0.9] |
| IPSO | Acceleration constants (c1, c2) | [2, 2] |
| PSO | Inertia w (wMin, wMax) | [0.6, 0.9] |
| PSO | Acceleration constants (c1, c2) | [2, 2] |
| AGWO | a | a = 2 - cos(rand()) × t / Max_iter |
| EGWO | a | a = rand() |
| BBO | Immigration probability | [0, 1] |
| BBO | Mutation probability | 0.005 |
| BBO | Habitat modification probability | 1.0 |
| BBO | Step size | 1.0 |
| BBO | Migration rate | 1.0 |
| BBO | Maximum immigration | 1.0 |
| CSA | Flight length | 2 |
| CSA | Awareness probability | 0.1 |
| MFO | a | a = -1 + Iteration × (-1 / Max_iteration) |
Table 2. Results of the benchmark functions.

| Function | Metric | GWO | FWA | IPSO | PSO | AGWO | EGWO | BBO | CSA | MFO | FWGWO |
|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | Mean | 1.99e-10 | 3.73e+02 | 1.57e+04 | 2.21e+02 | 3.93e-17 | 1.45e-13 | 2.87e+02 | 1.06e+03 | 7.04e+04 | 5.16e-18 |
| F1 | Std | 1.19e-10 | 5.49e+03 | 5.18e+03 | 3.01e+01 | 7.20e-17 | 6.23e-14 | 2.47e+01 | 1.83e+02 | 1.35e+04 | 1.93e-16 |
| F1 | Best | 2.70e-11 | 4.84 | 7.40e+03 | 1.50e+02 | 4.61e-18 | 6.15e-16 | 2.54e+02 | 7.09e+02 | 4.69e+04 | 6.68e-24 |
| F2 | Mean | 1.53e+03 | 6.14e+04 | 1.44e+05 | 2.71e+04 | 1.79e+04 | 3.34e+04 | 5.74e+04 | 8.49e+03 | 2.70e+05 | 1.74 |
| F2 | Std | 1.35e+03 | 7.11e+04 | 3.11e+04 | 6.72e+03 | 1.81e+04 | 1.41e+04 | 8.78e+03 | 1.52e+03 | 5.48e+04 | 6.01e+01 |
| F2 | Best | 1.20e+02 | 1.32e+01 | 8.33e+04 | 1.55e+04 | 2.54e+02 | 6.75e+03 | 4.38e+04 | 5.54e+03 | 1.64e+05 | 4.97e-07 |
| F3 | Mean | 3.03 | 3.36e+01 | 6.00e+01 | 1.54e+01 | 4.99e+01 | 7.28e+01 | 2.99e+01 | 1.57e+01 | 9.41e+01 | 1.02e-03 |
| F3 | Std | 2.17 | 1.44e+01 | 4.32 | 1.66 | 3.13e+01 | 7.82 | 3.01 | 1.40 | 1.72 | 3.06e-03 |
| F3 | Best | 2.24e-01 | 8.24e-01 | 4.84e+01 | 1.26e+01 | 6.56e-01 | 5.61e+01 | 2.35e+01 | 1.32e+01 | 9.13e+01 | 5.61e-07 |
| F4 | Mean | 9.81e+01 | 9.38e+03 | 9.12e+06 | 2.95e+05 | 9.82e+01 | 9.83e+01 | 7.86e+03 | 4.01e+04 | 2.04e+08 | 9.00e+01 |
| F4 | Std | 5.22e-01 | 2.50e+06 | 3.99e+06 | 6.54e+04 | 5.50e-01 | 6.68e-01 | 1.39e+03 | 1.80e+04 | 8.37e+07 | 1.74e+01 |
| F4 | Best | 9.65e+01 | 1.00e+02 | 4.79e+06 | 1.87e+05 | 9.71e+01 | 9.63e+01 | 5.16e+03 | 2.26e+04 | 6.98e+07 | 1.66e+01 |
| F5 | Mean | 1.16e+01 | 2.61e+03 | 1.59e+04 | 2.13e+02 | 1.50e+01 | 1.56e+01 | 2.86e+02 | 1.03e+03 | 7.46e+04 | 3.02e-02 |
| F5 | Std | 9.88e-01 | 2.14e+03 | 4.71e+03 | 3.34e+01 | 6.26e-01 | 9.66e-01 | 2.88e+01 | 1.07e+02 | 1.39e+04 | 2.42e-02 |
| F5 | Best | 9.54 | 2.44e+01 | 8.20e+03 | 1.56e+02 | 1.40e+01 | 1.40e+01 | 2.26e+02 | 7.95e+02 | 5.23e+04 | 1.27e-02 |
| F6 | Mean | 1.69e-29 | 1.73 | 3.58 | 3.72e+01 | 2.42e-39 | 3.44e-23 | 1.58e-04 | 2.79e-02 | 1.80e+01 | 2.09e-35 |
| F6 | Std | 2.50e-29 | 4.11e-01 | 2.52 | 1.11e+01 | 1.03e-35 | 6.04e-22 | 2.03e-03 | 1.65e-02 | 5.18 | 1.21e-34 |
| F6 | Best | 4.86e-34 | 8.30e-08 | 1.05e-01 | 1.18e+01 | 5.32e-46 | 1.38e-35 | 6.48e-07 | 8.35e-03 | 2.00 | 1.80e-46 |
| F7 | Mean | 2.47e-07 | 2.39e+07 | 4.13e+08 | 8.83e+06 | 7.99e-14 | 1.13e-10 | 1.14e+07 | 2.89e+07 | 1.46e+09 | 1.06e-13 |
| F7 | Std | 1.62e-07 | 8.81e+07 | 2.66e+08 | 2.57e+06 | 8.94e-14 | 1.44e-10 | 2.50e+06 | 9.00e+06 | 8.26e+08 | 2.12e-12 |
| F7 | Best | 4.80e-08 | 1.31e+01 | 6.05e+07 | 4.00e+06 | 3.74e-15 | 1.15e-12 | 7.24e+06 | 1.75e+07 | 1.16e+08 | 1.00e-18 |
| F8 | Mean | 5.38e-11 | 3.72e+02 | 7.33e+03 | 1.02e+04 | 1.80e-17 | 2.54e-14 | 1.68e+02 | 4.68e+02 | 3.30e+04 | 4.94e-16 |
| F8 | Std | 4.61e-11 | 7.11e+02 | 2.62e+03 | 1.30e+03 | 2.04e-17 | 1.31e-14 | 2.58e+01 | 7.19e+01 | 7.03e+03 | 5.01e-16 |
| F8 | Best | 1.19e-11 | 4.81e-02 | 3.38e+03 | 7.58e+03 | 5.21e-19 | 2.86e-16 | 1.35e+02 | 3.14e+02 | 1.40e+04 | 1.42e-21 |
| F9 | Mean | 1.09e+01 | 2.66e+02 | 5.88e+02 | 1.24e+03 | 2.54e-01 | 8.72e+02 | 3.97e+02 | 3.72e+02 | 8.99e+02 | 4.43e-02 |
| F9 | Std | 8.83 | 1.04e+02 | 6.85e+01 | 9.61e+01 | 6.46e-05 | 1.80e+02 | 5.40e+01 | 4.50e+01 | 7.18e+01 | 8.31e-02 |
| F9 | Best | 5.96e-07 | 3.06e-02 | 4.79e+02 | 1.05e+03 | 0.00 | 4.45e+02 | 3.17e+02 | 3.04e+02 | 6.92e+02 | 0.00 |
| F10 | Mean | 1.27e-06 | 4.98 | 1.58e+01 | 6.96 | 1.06e-09 | 2.37e-08 | 3.71 | 6.80 | 1.99e+01 | 4.21e-10 |
| F10 | Std | 3.85e-07 | 2.93 | 9.91e-01 | 3.72e-01 | 7.18e-10 | 2.69e-08 | 1.18e-01 | 5.39e-01 | 1.52e-01 | 5.61e-10 |
| F10 | Best | 6.16e-07 | 1.05e-02 | 1.43e+01 | 6.06 | 1.97e-10 | 1.55e-09 | 3.49 | 6.07 | 1.95e+01 | 2.93e-14 |
| F11 | Mean | 4.78e-03 | 3.71e-02 | 2.52 | 1.03 | 1.20e-03 | 1.00e-02 | 8.33e-01 | 1.09 | 7.73 | 1.11e-17 |
| F11 | Std | 1.09e-02 | 4.83e-01 | 3.68e-01 | 2.44e-02 | 9.13e-03 | 1.22e-02 | 3.47e-02 | 1.60e-02 | 1.52 | 1.20e-16 |
| F11 | Best | 1.52e-13 | 3.47e-02 | 1.70 | 9.72e-01 | 0.00 | 0.00 | 7.60e-01 | 1.07 | 4.44 | 0.00 |
| F12 | Mean | 3.83e-01 | 3.34 | 3.57e+06 | 1.87e+01 | 5.37e-01 | 1.38e+01 | 7.95 | 1.10e+01 | 4.31e+08 | 2.13e-03 |
| F12 | Std | 6.78e-02 | 5.95e+05 | 1.56e+06 | 4.44 | 4.11e-02 | 8.57 | 2.37 | 3.03 | 1.65e+08 | 6.89e-03 |
| F12 | Best | 2.41e-01 | 1.20 | 1.56e+05 | 7.79 | 4.06e-01 | 6.15e-01 | 2.33 | 6.06 | 1.34e+08 | 6.35e-04 |
| F13 | Mean | 7.37 | 1.21e+01 | 1.69e+07 | 8.76e+02 | 8.32 | 4.50e+01 | 1.68e+01 | 1.67e+02 | 8.61e+08 | 1.29e-01 |
| F13 | Std | 4.10e-01 | 7.62e+06 | 1.19e+07 | 8.25e+02 | 3.16e-01 | 3.71e+01 | 2.36 | 3.12e+01 | 3.28e+08 | 7.60e-02 |
| F13 | Best | 6.36 | 1.01e+01 | 1.87e+06 | 1.97e+02 | 7.59 | 9.89 | 1.26e+01 | 8.56e+01 | 2.15e+08 | 3.67e-02 |
| F14 | Mean | 4.50e-03 | 1.64 | 5.10e+01 | 1.05e+02 | 7.70e-05 | 1.25e+02 | 2.19e+01 | 2.46e+01 | 8.11e+01 | 2.46e-06 |
| F14 | Std | 2.87e-03 | 9.69 | 1.17e+01 | 1.26e+01 | 8.55e-04 | 2.23e+01 | 3.88 | 8.16 | 1.41e+01 | 6.63e-06 |
| F14 | Best | 7.96e-07 | 6.83e-03 | 3.26e+01 | 7.72e+01 | 5.32e-12 | 6.85e+01 | 1.28e+01 | 1.50e+01 | 5.54e+01 | 1.06e-14 |
| F15 | Mean | 1.41e-02 | 4.92e-01 | 5.00e-01 | 3.95e-01 | 4.32e-03 | 1.30e-02 | 4.99e-01 | 4.88e-01 | 5.00e-01 | 3.13e-03 |
| F15 | Std | 2.33e-03 | 1.46e-01 | 4.43e-05 | 1.71e-02 | 1.45e-03 | 2.74e-03 | 6.33e-04 | 2.78e-03 | 2.05e-06 | 1.18e-03 |
| F15 | Best | 9.29e-03 | 1.91e-02 | 5.00e-01 | 3.56e-01 | 3.13e-03 | 6.22e-03 | 4.97e-01 | 4.82e-01 | 5.00e-01 | 2.58e-06 |
| F16 | Mean | 1.38e-10 | 8.50 | 1.07e+03 | 7.13e+02 | 7.40e-18 | 2.48e-13 | 7.18e+01 | 1.58e+02 | 4.92e+03 | 1.87e-15 |
| F16 | Std | 7.46e-11 | 7.48e+01 | 3.89e+02 | 9.40e+01 | 7.97e-17 | 1.04e+01 | 3.26 | 9.91 | 1.30e+03 | 2.09e-15 |
| F16 | Best | 2.63e-11 | 4.28e-01 | 5.92e+02 | 4.96e+02 | 0.00 | 0.00 | 6.25e+01 | 1.23e+02 | 2.89e+03 | 0.00 |
Table 3. Running time (seconds) of different algorithms.

| Function | GWO | FWA | IPSO | PSO | AGWO | EGWO | BBO | CSA | MFO | FWGWO |
|---|---|---|---|---|---|---|---|---|---|---|
| F1 | 0.65625 | 6.1875 | 0.828125 | 0.5625 | 0.59375 | 0.53125 | 10.65625 | 1.296875 | 0.953125 | 1.390625 |
| F2 | 2.0625 | 11.5 | 1.8125 | 1.828125 | 2.09375 | 1.734375 | 13.89063 | 4.890625 | 2.34375 | 2.46875 |
| F3 | 0.375 | 6.75 | 0.234375 | 0.15625 | 0.25 | 0.171875 | 9.46875 | 0.890625 | 0.453125 | 0.8125 |
| F4 | 0.453125 | 5.5625 | 0.25 | 0.1875 | 0.328125 | 0.203125 | 8.671875 | 0.890625 | 0.453125 | 0.671875 |
| F5 | 0.40625 | 6.21875 | 0.21875 | 0.15625 | 0.25 | 0.140625 | 10.57813 | 0.75 | 0.421875 | 0.671875 |
| F6 | 1.03125 | 8.53125 | 0.8125 | 0.796875 | 0.84375 | 0.71875 | 9.09375 | 2.640625 | 1.0625 | 2.5 |
| F7 | 0.609375 | 6.640625 | 0.4375 | 0.375 | 0.515625 | 0.375 | 9.671875 | 1.296875 | 0.625 | 0.90625 |
| F8 | 0.359375 | 5.375 | 0.21875 | 0.15625 | 0.234375 | 0.140625 | 9.4375 | 0.71875 | 0.453125 | 0.671875 |
| F9 | 0.421875 | 6.328125 | 0.3125 | 0.25 | 0.265625 | 0.1875 | 8.28125 | 1.046875 | 0.453125 | 0.734375 |
| F10 | 0.421875 | 7.171875 | 0.3125 | 0.25 | 0.28125 | 0.1875 | 8.3125 | 0.96875 | 0.421875 | 0.703125 |
| F11 | 0.453125 | 6.265625 | 0.328125 | 0.25 | 0.34375 | 0.1875 | 8.609375 | 0.90625 | 0.484375 | 1.484375 |
| F12 | 0.953125 | 7.5625 | 0.828125 | 0.71875 | 1 | 0.65625 | 9.09375 | 2.3125 | 1.046875 | 1.28125 |
| F13 | 1.015625 | 8.046875 | 0.796875 | 0.734375 | 0.890625 | 0.703125 | 9.078125 | 2.359375 | 1.046875 | 1.34375 |
| F14 | 0.515625 | 5.875 | 0.234375 | 0.21875 | 0.21875 | 0.15625 | 8.5625 | 0.953125 | 0.4375 | 0.625 |
| F15 | 0.53125 | 7.25 | 0.265625 | 0.1875 | 0.328125 | 0.15625 | 8.125 | 0.921875 | 0.421875 | 0.65625 |
| F16 | 0.453125 | 6.109375 | 0.296875 | 0.234375 | 0.28125 | 0.1875 | 8.3125 | 1.015625 | 0.484375 | 0.78125 |
Table 4. Wilcoxon's rank test of FWGWO and other algorithms on 16 benchmark functions.

| Function | Value | GWO | FWA | IPSO | PSO | AGWO | EGWO | BBO | CSA | MFO |
|---|---|---|---|---|---|---|---|---|---|---|
| F1 | p-value | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 1.82e-07 | 7.86e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 |
| F1 | h-value | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F2 | p-value | 7.86e-12 | 1.65e-11 | 6.51e-12 | 6.51e-12 | 7.15e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 |
| F2 | h-value | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F3 | p-value | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 |
| F3 | h-value | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F4 | p-value | 2.05e-10 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 1.72e-10 | 6.28e-10 | 6.51e-12 | 6.51e-12 | 6.51e-12 |
| F4 | h-value | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F5 | p-value | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 |
| F5 | h-value | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F6 | p-value | 7.15e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.29e-03 | 1.14e-11 | 6.51e-12 | 6.51e-12 | 6.51e-12 |
| F6 | h-value | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F7 | p-value | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 2.05e-02 | 7.08e-11 | 6.51e-12 | 6.51e-12 | 6.51e-12 |
| F7 | h-value | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F8 | p-value | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 8.45e-02 | 4.13e-11 | 6.51e-12 | 6.51e-12 | 6.51e-12 |
| F8 | h-value | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 |
| F9 | p-value | 3.14e-12 | 1.73e-12 | 1.16e-12 | 1.16e-12 | 1.19e-03 | 1.16e-12 | 1.16e-12 | 1.16e-12 | 1.16e-12 |
| F9 | h-value | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F10 | p-value | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 1.27e-07 | 7.86e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 |
| F10 | h-value | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F11 | p-value | 5.54e-13 | 5.54e-13 | 5.54e-13 | 5.54e-13 | 3.62e-01 | 6.72e-07 | 5.54e-13 | 5.54e-13 | 5.54e-13 |
| F11 | h-value | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 |
| F12 | p-value | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 |
| F12 | h-value | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F13 | p-value | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 |
| F13 | h-value | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F14 | p-value | 9.47e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 2.36e-03 | 6.51e-12 | 6.51e-12 | 6.51e-12 | 6.51e-12 |
| F14 | h-value | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| F15 | p-value | 2.05e-10 | 6.51e-12 | 1.11e-10 | 6.51e-12 | 1.12e-01 | 4.81e-08 | 1.01e-10 | 8.47e-11 | 1.21e-10 |
| F15 | h-value | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 |
| F16 | p-value | 1.71e-12 | 1.71e-12 | 1.71e-12 | 1.71e-12 | 6.75e-02 | 1.99e-10 | 1.71e-12 | 1.71e-12 | 1.71e-12 |
| F16 | h-value | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 |
