Article

Opposition-Based Adaptive Fireworks Algorithm

Department of Computer Information Engineering, Guangdong Technical College of Water Resource and Electrical Engineering, Guangzhou 510635, China
Algorithms 2016, 9(3), 43; https://doi.org/10.3390/a9030043
Submission received: 2 April 2016 / Revised: 13 June 2016 / Accepted: 4 July 2016 / Published: 8 July 2016
(This article belongs to the Special Issue Metaheuristic Algorithms in Optimization and Applications)

Abstract
The fireworks algorithm (FWA) is a recent swarm intelligence algorithm inspired by the observation of fireworks explosions. The adaptive fireworks algorithm (AFWA) adds an adaptive explosion amplitude to improve the performance of the enhanced fireworks algorithm (EFWA). The purpose of this paper is to add opposition-based learning (OBL) to AFWA with the goal of further boosting performance and achieving global optimization. The resulting opposition-based adaptive fireworks algorithm (OAFWA) was tested on twelve benchmark functions. The results show that OAFWA significantly outperformed EFWA and AFWA in terms of solution accuracy. Additionally, OAFWA was compared with a bat algorithm (BA), differential evolution (DE), self-adapting control parameters in differential evolution (jDE), a firefly algorithm (FA), and the standard particle swarm optimization 2011 (SPSO2011) algorithm. The results indicate that OAFWA ranks highest of the six algorithms for both solution accuracy and runtime cost.


1. Introduction

In the past twenty years, several swarm intelligence algorithms inspired by natural phenomena or social behavior have been proposed to solve various real-world and complex global optimization problems. Observation of the behavior of ants searching for food led to the ant colony optimization (ACO) [1] algorithm, proposed in 1992. Particle swarm optimization (PSO) [2], announced in 1995, simulates the behavior of a flock of birds flying to their destination. PSO can be employed in the economic statistical design of $\bar{X}$ control charts, a class of mixed discrete-continuous nonlinear problems [3], and in solving multidimensional knapsack problems [4,5]. Mimicking the natural adaptation of biological species, differential evolution (DE) [6] was published in 1997. Inspired by the flashing behavior of fireflies, the firefly algorithm (FA) [7] was presented in 2009, and the bat algorithm (BA) [8], based on the echolocation of microbats, was proposed in 2010.
The fireworks algorithm (FWA) [9] is a novel swarm intelligence algorithm introduced in 2010 that mimics the fireworks explosion process. In FWA, a candidate solution is represented by the location of a firework. When a firework explodes, explosion sparks and Gaussian sparks are produced around it. Each firework's explosion amplitude and number of explosion sparks are calculated from its fitness relative to the other fireworks, so that each firework searches its own local space. Fireworks and sparks are then selected for the next generation based on fitness and diversity. By repeating this process, FWA progressively focuses the search on smaller, more promising areas.
Various types of real-world optimization problems have been solved by applying FWA, such as factorization of a non-negative matrix [10], the design of digital filters [11], parameter optimization for the detection of spam [12], reconfiguration of networks [13], mass minimization of trusses [14], parameter estimation of chaotic systems [15], and scheduling of multi-satellite control resources [16].
However, the FWA approach has disadvantages. Although the original algorithm works well on functions whose optimum is located at the origin of the search space, it becomes much harder for it to locate the correct solution when the optimum lies farther from the origin; the quality of the results deteriorates severely as the distance between the function optimum and the origin increases. Additionally, the computational cost per iteration is high for FWA compared to other optimization algorithms. For these reasons, the enhanced fireworks algorithm (EFWA) [17] was introduced.
The explosion amplitude is a significant variable that affects the performance of both FWA and EFWA. In EFWA, the amplitude of the best firework is close to zero, so a minimum amplitude check is employed. This minimum amplitude is calculated from the maximum number of evaluations, which leads to a non-adaptive local search around the best firework. The adaptive fireworks algorithm (AFWA) therefore introduced an adaptive amplitude [18] to improve the performance of EFWA. In AFWA, the adaptive amplitude is calculated from the distance between a selected spark and the best spark.
AFWA improved on EFWA for 25 of the 28 CEC13 benchmark functions [18], but our in-depth experiments indicate that, on the benchmark functions used here, the solution accuracy of AFWA is lower than that of EFWA. To improve the performance of AFWA, opposition-based learning was added to accelerate convergence and increase solution accuracy.
Opposition-based learning (OBL) was first proposed in 2005 [19]. OBL simultaneously considers a solution and its opposite solution and keeps the fitter of the two as a candidate solution, which accelerates convergence and improves solution accuracy. It has been used to enhance various optimization algorithms, such as differential evolution [20,21], particle swarm optimization [22], the ant colony system [23], the firefly algorithm [24], the artificial bee colony [25], and the shuffled frog leaping algorithm [26]. Inspired by these studies, OBL is added to AFWA in this paper to boost its performance.
The remainder of this paper is organized as follows. Both AFWA and OAFWA are described in Section 2. Section 3 lists the twelve benchmark functions and the implementation details. Section 4 covers the simulations that were conducted, and Section 5 presents our conclusions.

2. Opposition-Based Adaptive Fireworks Algorithm

2.1. Adaptive Fireworks Algorithm

Suppose that $N$ denotes the number of fireworks, $d$ the number of dimensions, and $x_i$ the $i$-th firework in AFWA. The explosion amplitude $A_i$ and the number of explosion sparks $S_i$ are defined according to the following expressions:

$$A_i = \hat{A} \cdot \frac{f(x_i) - y_{\min} + \epsilon}{\sum_{i=1}^{N} \left( f(x_i) - y_{\min} \right) + \epsilon} \qquad (1)$$

$$S_i = M_e \cdot \frac{y_{\max} - f(x_i) + \epsilon}{\sum_{i=1}^{N} \left( y_{\max} - f(x_i) \right) + \epsilon} \qquad (2)$$

where $y_{\max} = \max_i f(x_i)$, $y_{\min} = \min_i f(x_i)$, $\hat{A}$ and $M_e$ are two constants, $\epsilon$ denotes the machine epsilon, and $i = 1, 2, \ldots, N$.
In addition, the number of sparks $S_i$ is bounded by:

$$S_i = \begin{cases} S_{\min} & \text{if } S_i < S_{\min} \\ S_{\max} & \text{if } S_i > S_{\max} \\ S_i & \text{otherwise} \end{cases} \qquad (3)$$

where $S_{\min}$ and $S_{\max}$ are the lower and upper bounds of $S_i$.
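As a concrete illustration, the following Python sketch computes Equations (1)–(3) for a list of fitness values (the function and variable names are ours, not from the paper's Matlab implementation; rounding $S_i$ to an integer is our assumption, since the paper does not state how fractional counts are handled):

import numpy as np

def amplitudes_and_spark_counts(fitness, A_hat=100.0, Me=20, S_min=8, S_max=13):
    # Equations (1)-(3): per-firework explosion amplitude and spark count
    f = np.asarray(fitness, dtype=float)
    eps = np.finfo(float).eps                        # machine epsilon
    y_min, y_max = f.min(), f.max()
    # Equation (1): better (smaller) fitness gives a smaller amplitude
    A = A_hat * (f - y_min + eps) / ((f - y_min).sum() + eps)
    # Equation (2): better fitness gives more sparks
    S = Me * (y_max - f + eps) / ((y_max - f).sum() + eps)
    # Equation (3): clamp the counts; rounding to an integer is an assumption
    S = np.clip(np.round(S), S_min, S_max).astype(int)
    return A, S

A, S = amplitudes_and_spark_counts([3.2, 0.7, 1.5, 9.8])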
Based on the above $A_i$ and $S_i$, Algorithm 1 generates the explosion sparks for $x_i$ as follows:
Algorithm 1 Generating Explosion Sparks
1: for j = 1 to S_i do
2:   for each dimension k = 1, 2, …, d do
3:     obtain r1 from U(0, 1)
4:     if r1 < 0.5 then
5:       obtain r from U(−1, 1)
6:       s_i^j(k) ← x_i(k) + r · A_i
7:       if s_i^j(k) < LB or s_i^j(k) > UB then
8:         obtain r again from U(0, 1)
9:         s_i^j(k) ← LB + r · (UB − LB)
10:      end if
11:    end if
12:  end for
13: end for
14: return s_i^j
where $x \in [LB, UB]$ and U(a, b) denotes the uniform distribution between a and b.
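A minimal Python sketch of Algorithm 1, assuming a common scalar bound pair (LB, UB) for all dimensions (the names are illustrative):

import numpy as np

rng = np.random.default_rng()

def explosion_sparks(x_i, A_i, S_i, LB, UB):
    # Algorithm 1: generate S_i explosion sparks around firework x_i
    sparks = []
    for _ in range(S_i):
        s = np.array(x_i, dtype=float)
        for k in range(len(s)):
            if rng.uniform() < 0.5:                    # mutate roughly half the dimensions
                s[k] += rng.uniform(-1.0, 1.0) * A_i
                if s[k] < LB or s[k] > UB:             # uniform re-mapping when out of bounds
                    s[k] = LB + rng.uniform() * (UB - LB)
        sparks.append(s)
    return sparks

sparks = explosion_sparks([0.5, -2.0, 1.0], A_i=0.8, S_i=10, LB=-10.0, UB=10.0)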
After the explosion sparks are generated, Algorithm 2 generates the Gaussian sparks as follows:
Algorithm 2 Generating Gaussian Sparks
1: for j = 1 to NG do
2:   randomly choose i from 1, 2, …, m
3:   obtain r from N(0, 1)
4:   for each dimension k = 1, 2, …, d do
5:     G_j(k) ← x_i(k) + r · (x*(k) − x_i(k))
6:     if G_j(k) < LB or G_j(k) > UB then
7:       obtain r from U(0, 1)
8:       G_j(k) ← LB + r · (UB − LB)
9:     end if
10:  end for
11: end for
12: return G_j
where NG is the number of Gaussian sparks, m stands for the number of fireworks, x* denotes the best firework, and N(0, 1) denotes the normal distribution with mean 0 and standard deviation 1.
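A matching sketch of Algorithm 2, reusing the rng and array conventions from the sketch above (fireworks are assumed to be stored as NumPy arrays):

def gaussian_sparks(fireworks, x_best, NG, LB, UB):
    # Algorithm 2: NG Gaussian sparks drawn along the line towards the best firework
    sparks = []
    for _ in range(NG):
        x_i = fireworks[rng.integers(len(fireworks))]  # randomly chosen firework
        r = rng.normal(0.0, 1.0)                       # one N(0, 1) draw per spark
        g = x_i + r * (x_best - x_i)
        for k in range(len(g)):                        # re-map out-of-bounds dimensions
            if g[k] < LB or g[k] > UB:
                g[k] = LB + rng.uniform() * (UB - LB)
        sparks.append(g)
    return sparks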
For the best firework (the retained best spark among the above explosion sparks and Gaussian sparks), the adaptive explosion amplitude $A^*$ in generation $g + 1$ is defined as follows [18]:

$$A^*(g+1) = \begin{cases} UB - LB, & g = 0 \ \text{or} \ f(s_i) < f(x) \\ 0.5 \cdot \left( \lambda \cdot \lVert s_i - s^* \rVert + A^*(g) \right), & \text{otherwise} \end{cases} \qquad (4)$$

where $s_1, \ldots, s_n$ denote all sparks generated in generation $g$, $s^*$ denotes the best spark, and $x$ stands for the firework in generation $g$. The parameter $\lambda$ is empirically suggested to be fixed at 1.3.
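A small sketch of the amplitude update in Equation (4); the Euclidean norm is assumed for ‖·‖, as the paper does not state which norm is used:

import numpy as np

def adaptive_amplitude(A_prev, s_i, s_best, f_s_i, f_x, g, LB, UB, lam=1.3):
    # Equation (4): smoothed adaptive amplitude for the best firework
    if g == 0 or f_s_i < f_x:              # first generation, or the spark improved on x
        return UB - LB
    dist = np.linalg.norm(np.asarray(s_i) - np.asarray(s_best))  # assumed Euclidean
    return 0.5 * (lam * dist + A_prev)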
Algorithm 3 demonstrates the complete version of the AFWA.
Algorithm 3 Pseudo-Code of AFWA
1: randomly choose m fireworks
2: assess their fitness
3: repeat
4: obtain Ai (except for A*) based on Equation (1)
5: obtain Si based on Equations (2) and (3)
6: produce explosion sparks based on Algorithm 1
7: produce Gaussian sparks based on Algorithm 2
8: assess all sparks’ fitness
9: obtain A* based on Equation (4)
10: retain the best spark as a firework
11: randomly select other m − 1 fireworks
12: until termination condition is satisfied
13: return the best fitness and the corresponding firework location

2.2. Opposition-Based Learning

Definition 1.
Assume $P = (x_1, x_2, \ldots, x_n)$ is a solution in n-dimensional space, where $x_1, x_2, \ldots, x_n \in \mathbb{R}$ and $x_i \in [a_i, b_i]$, $i \in \{1, 2, \ldots, n\}$. The opposite solution $OP = (\breve{x}_1, \breve{x}_2, \ldots, \breve{x}_n)$ is defined as follows [19]:

$$\breve{x}_i = a_i + b_i - x_i \qquad (5)$$
According to probability theory, the opposite solution is fitter than the original solution 50% of the time. Therefore, by considering both a solution and its opposite, OBL has the potential to accelerate convergence and improve solution accuracy.
Definition 2.
The quasi-opposite solution $QOP = (\breve{x}_1^q, \breve{x}_2^q, \ldots, \breve{x}_n^q)$ is defined as follows [21]:

$$\breve{x}_i^q = \mathrm{rand}\left( \frac{a_i + b_i}{2},\ \breve{x}_i \right) \qquad (6)$$

It has been proved that the quasi-opposite solution QOP is more likely than the opposite solution OP to be close to the optimal solution.
Figure 1 illustrates the quasi-opposite solution QOP in the one-dimensional case.
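A short Python sketch of Equations (5) and (6) (illustrative names; the rand(·,·) in Equation (6) is realized as a uniform draw between the interval centre and the opposite point):

import numpy as np

rng = np.random.default_rng()

def opposite(x, a, b):
    # Equation (5): component-wise opposite solution
    return np.asarray(a) + np.asarray(b) - np.asarray(x, dtype=float)

def quasi_opposite(x, a, b):
    # Equation (6): uniform point between the centre (a+b)/2 and the opposite point
    centre = (np.asarray(a) + np.asarray(b)) / 2.0
    opp = opposite(x, a, b)
    return rng.uniform(np.minimum(centre, opp), np.maximum(centre, opp))

qop = quasi_opposite([1.0, -4.0, 7.5], a=-10.0, b=10.0)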

2.3. Opposition-Based Adaptive Fireworks Algorithm

OBL is added to AFWA in two stages: opposition-based population initialization and opposition-based generation jumping [20].

2.3.1. Opposition-Based Population Initialization

In the initialization stage, both a random solution and a quasi-opposite solution QOP are considered in order to obtain fitter starting candidate solutions.
Algorithm 4 is performed for opposition-based population initialization as follows:
Algorithm 4 Opposition-Based Population Initialization
1: randomly initialize fireworks pop with a size of m
2: calculate the quasi-opposite fireworks Qpop based on Equation (6)
3: assess the fitness of the 2 × m fireworks
4: return the m fittest individuals from {pop ∪ Qpop} as the initial fireworks
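A sketch of Algorithm 4, reusing quasi_opposite and rng from the sketch in Section 2.2 and assuming minimization (the objective f and the common bounds are illustrative):

def obl_initialization(m, d, LB, UB, f):
    # Algorithm 4: keep the m fittest of pop and its quasi-opposite Qpop
    pop = rng.uniform(LB, UB, size=(m, d))                     # random fireworks
    qpop = np.array([quasi_opposite(x, LB, UB) for x in pop])  # Equation (6)
    both = np.vstack([pop, qpop])                              # 2 x m candidates
    fitness = np.array([f(x) for x in both])
    return both[np.argsort(fitness)[:m]]                       # fittest m survive

init = obl_initialization(m=4, d=40, LB=-10.0, UB=10.0, f=lambda x: (x**2).sum())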

2.3.2. Opposition-Based Generation Jumping

In the second stage, the current population of AFWA jumps to new candidate solutions according to a jumping rate Jr.
Algorithm 5 Opposition-Based Generation Jumping
1: if rand(0, 1) < Jr then
2:   dynamically calculate the boundaries of the current m fireworks
3:   calculate the quasi-opposite fireworks Qpop based on Equation (6)
4:   assess the fitness of the 2 × m fireworks
5: end if
6: return the m fittest individuals from {pop ∪ Qpop} as the current fireworks
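A sketch of Algorithm 5 under the same assumptions; the boundaries are recomputed from the current population in each jump, as the pseudo-code specifies:

def obl_generation_jumping(pop, fitness, f, Jr=0.4):
    # Algorithm 5: with probability Jr, compete the fireworks against their quasi-opposites
    if rng.uniform() >= Jr:
        return pop, fitness                                    # no jump this generation
    a, b = pop.min(axis=0), pop.max(axis=0)                    # dynamic per-dimension bounds
    qpop = np.array([quasi_opposite(x, a, b) for x in pop])
    both = np.vstack([pop, qpop])
    bfit = np.concatenate([fitness, np.array([f(x) for x in qpop])])
    keep = np.argsort(bfit)[:len(pop)]                         # fittest m survive
    return both[keep], bfit[keep]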

2.3.3. Opposition-Based Adaptive Fireworks Algorithm

Algorithm 6 demonstrates the complete version of the OAFWA.
Algorithm 6 Pseudo-Code of OAFWA
1: opposition-based population initialization based on Algorithm 4
2: repeat
3: obtain Ai (except for A*) based on Equation (1)
4: obtain Si based on Equations (2) and (3)
5: produce explosion sparks based on Algorithm 1
6: produce Gaussian sparks based on Algorithm 2
7: assess all sparks’ fitness
8: obtain A* based on Equation (4)
9: retain the best spark as a firework
10: randomly select other m – 1 fireworks
11: opposition-based generation jumping based on Algorithm 5
12: until termination condition is satisfied
13: return the best fitness and the corresponding firework location

3. Benchmark Functions and Implementation

3.1. Benchmark Functions

To assess the performance of OAFWA, twelve standard benchmark functions [27] are employed. Six are uni-modal and six are multi-modal, and each has a global minimum of zero.
Table 1 presents a list of uni-modal (F1~F6) and multi-modal (F7~F12) functions and their features.

3.2. Success Criterion

We use the success rate $S_r$ to compare the performance of the FWA-based algorithms EFWA, AFWA, and OAFWA.
$S_r$ is defined as follows:

$$S_r = 100 \cdot \frac{N_{\mathrm{successful}}}{N_{\mathrm{all}}} \qquad (7)$$

where $N_{\mathrm{successful}}$ is the number of successful trials and $N_{\mathrm{all}}$ is the total number of trials. A trial is counted as successful when it locates a solution close enough to the global optimum, as defined by:

$$\sum_{i=1}^{D} \left( X_i^{gb} - X_i^* \right) \leq (UB - LB) \times 10^{-4} \qquad (8)$$

where $D$ denotes the dimension of the test function, $X_i^{gb}$ denotes the $i$-th dimension of the best solution found by the algorithm, and $X_i^*$ denotes the $i$-th dimension of the global optimum.
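A sketch of the success test in Equations (7) and (8), assuming the coordinate differences in Equation (8) are taken in absolute value:

import numpy as np

def is_successful(x_best, x_star, LB, UB, tol=1e-4):
    # Equation (8): summed coordinate error within tol * (UB - LB) of the optimum
    err = np.abs(np.asarray(x_best) - np.asarray(x_star)).sum()
    return err <= (UB - LB) * tol

def success_rate(successes):
    # Equation (7): percentage of successful trials (a list of booleans)
    return 100.0 * sum(successes) / len(successes)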

3.3. Initialization

We tested the benchmark functions over 100 independent runs of each FWA-based algorithm. To fully evaluate the performance of OAFWA, statistical measures were used, including the worst, best, median, and mean objective values; standard deviations were also determined.
For the FWA-based algorithms, m = 4, $\hat{A}$ = 100, Me = 20, $S_{\min}$ = 8, $S_{\max}$ = 13, NG = 2 (a population size equivalent to 40), and Jr = 0.4. The number of function evaluations ranged from 65,000 to 200,000, depending on the function.
Finally, we used Matlab 7.0 (The MathWorks Inc., Natick, MA, USA) on a notebook PC with a 2.3 GHz CPU (Intel Core i3-2350, Intel Corporation, Santa Clara, CA, USA), 4 GB RAM, and Windows 7 (64-bit, Microsoft Corporation, Redmond, WA, USA).

4. Simulation Studies and Discussions

4.1. Comparison with FWA-Based Algorithms

To assess the performance of OAFWA, it is first compared with AFWA alone, and then with both FWA-based algorithms, EFWA and AFWA.

4.1.1. Comparison with AFWA

To compare the performances of OAFWA and AFWA, both algorithms were tested on twelve benchmark functions.
Table 2 shows the statistical results for OAFWA. Table 3 shows the statistical results for AFWA. Table 4 shows the comparison of OAFWA and AFWA for solution accuracy from Table 2 and Table 3.
The statistical results from Table 2, Table 3 and Table 4 indicate that, compared to AFWA, the accuracy of the best solution and the mean solution for OAFWA improved by averages of 288 and 53 orders of magnitude, respectively. Thus, OAFWA improves on AFWA and achieves significantly more accurate solutions.
To evaluate whether the OAFWA results were significantly different from those of AFWA, the OAFWA mean results during iteration for each benchmark function were compared with those of AFWA. The Wilcoxon signed-rank test, a safe and robust non-parametric test for pairwise statistical comparisons [28], was applied at the 5% level to detect significant differences between these pairwise samples for each benchmark function.
The signrank function in Matlab 7.0 was used to run the Wilcoxon signed-rank test, as shown in Table 5.
Here P is the p-value under test and H is the result of the hypothesis test. A value of H = 1 indicates the rejection of the null hypothesis at the 5% significance level.
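For readers without Matlab, the same paired test can be run with SciPy; the samples below are placeholders, not the paper's data:

from scipy.stats import wilcoxon

oafwa = [1.2e-30, 3.4e-31, 9.8e-31, 2.2e-30, 5.1e-31, 8.7e-31]  # placeholder means
afwa = [64.1, 59.4, 71.3, 62.0, 68.8, 66.5]                     # placeholder means

stat, p = wilcoxon(oafwa, afwa)   # paired, non-parametric signed-rank test
H = int(p < 0.05)                 # H = 1 rejects the null at the 5% level
print(f"P = {p:.3g}, H = {H}")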
Table 5 indicates that OAFWA showed a large improvement over AFWA.

4.1.2. Comparison with FWA-Based Algorithms

To compare the performance of OAFWA, AFWA, and EFWA, EFWA was also tested on the twelve benchmark functions, and success rates and mean error ranks were obtained for the three FWA-based algorithms.
Table 6 shows the statistical results for EFWA.
Table 7 compares three FWA-based algorithms.
The results from Table 6 and Table 7 indicate that the mean error of AFWA is larger than that of EFWA, and the Sr of AFWA is lower than that of EFWA; thus, AFWA is not better than EFWA on these functions. The Sr of OAFWA is the highest, and OAFWA ranks first among the three FWA-based algorithms. Thus, OAFWA greatly improves on AFWA.
Regarding runtimes, Table 2 and Table 3 indicate that the time cost of OAFWA differs little from that of AFWA, while Table 2 and Table 6 indicate that the time cost of OAFWA is significantly lower than that of EFWA.
Figure 2 shows searching curves for EFWA, AFWA, and OAFWA.
Figure 2 shows that OAFWA converges quickly to the global optimum for all twelve functions. In contrast, EFWA does not always locate the global optimum, for example on F9 and F11, and AFWA is the worst of the three algorithms, failing on a majority of the functions. Thus, OAFWA is the best of the three in terms of solution accuracy.

4.2. Comparison with Other Swarm Intelligence Algorithms

Additionally, OAFWA was compared with alternative swarm intelligence algorithms, including BA, DE, self-adapting control parameters in differential evolution (jDE) [29], FA, and SPSO2011 [30]. The number of evaluations (iterations) and the population size are the same for each algorithm on a given function. Two population sizes, 20 and 40, are tested. The parameters are shown in Table 8.
Tuning the parameters of the above algorithms is a challenging issue; the parameters are taken from the literature: [31] for BA, [29] for DE and jDE, and [30] for SPSO2011. For FA, reducing randomness improves convergence, so $\alpha_{t+1} = (1 - \delta)\alpha_t$ [32] is employed to gradually decrease $\alpha$, where $\delta$ is a small constant related to the iteration count.
For OAFWA, m = 2, $\hat{A}$ = 100, Me = 20, $S_{\min}$ = 3, $S_{\max}$ = 6, NG = 1 (a population size equivalent to 20), and Jr = 0.4.
Table 9 and Table 10 present the mean errors for the six algorithms when population sizes are 20 and 40, respectively.
Table 9 and Table 10 indicate that certain algorithms perform well for some functions, but less well for others. Overall, OAFWA performance was shown to have more stability than the other algorithms.
Figure 3 and Figure 4 present the runtime cost of the six algorithms for twelve functions when population sizes are 20 and 40, respectively.
Figure 3 shows that, except for F7, F8, F10, and F12, DE has the highest runtime cost of the six algorithms when popSize = 20. Figure 4 shows that SPSO2011 has the highest runtime cost when popSize = 40. In both cases, OAFWA has the lowest time cost.
Table 11 and Table 12 present the ranks of the six algorithms for twelve benchmark functions when population sizes are 20 and 40, respectively.
Table 11 and Table 12 indicate that the OAFWA ranks the best (1.42 and 1.00) out of the six algorithms.

5. Conclusions

OAFWA was developed by applying OBL to AFWA and was investigated on twelve benchmark functions. The results indicate that OBL gives OAFWA a large performance boost over AFWA: the accuracy of both the best and mean solutions improved significantly.
The experiments clearly indicate that OAFWA performs significantly better than EFWA and AFWA in terms of solution accuracy. Additionally, OAFWA was compared with BA, DE, jDE, FA, and SPSO2011. Overall, the research demonstrates that OAFWA performed best for both solution accuracy and runtime cost.

Acknowledgments

The author is thankful to the anonymous reviewers for their valuable comments to improve the technical content and the presentation of the paper.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Dorigo, M. Learning and Natural Algorithms. Ph.D. Thesis, Politecnico di Milano, Milan, Italy, 1992.
2. Kennedy, J.; Eberhart, R.C. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ, USA, 27 November–1 December 1995; pp. 1942–1948.
3. Chih, M.C.; Yeh, L.L.; Li, F.C. Particle swarm optimization for the economic and economic statistical designs of the $\bar{X}$ control chart. Appl. Soft Comput. 2011, 11, 5053–5067.
4. Chih, M.C.; Lin, C.J.; Chern, M.S.; Ou, T.Y. Particle swarm optimization with time-varying acceleration coefficients for the multidimensional knapsack problem. Appl. Math. Model. 2014, 38, 1338–1350.
5. Chih, M.C. Self-adaptive check and repair operator-based particle swarm optimization for the multidimensional knapsack problem. Appl. Soft Comput. 2015, 26, 378–389.
6. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
7. Yang, X.S. Firefly algorithms for multimodal optimization. In Proceedings of the 5th International Conference on Stochastic Algorithms: Foundations and Applications, Sapporo, Japan, 26–28 October 2009; Volume 5792, pp. 169–178.
8. Yang, X.S. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); González, J.R., Ed.; Springer: Berlin, Germany, 2010; pp. 65–74.
9. Tan, Y.; Zhu, Y.C. Fireworks algorithm for optimization. In Advances in Swarm Intelligence; Springer: Berlin, Germany, 2010; pp. 355–364.
10. Janecek, A.; Tan, Y. Using population based algorithms for initializing nonnegative matrix factorization. In Advances in Swarm Intelligence; Springer: Berlin, Germany, 2011; pp. 307–316.
11. Gao, H.Y.; Diao, M. Cultural firework algorithm and its application for digital filters design. Int. J. Model. Identif. Control 2011, 14, 324–331.
12. Wen, R.; Mi, G.Y.; Tan, Y. Parameter optimization of local-concentration model for spam detection by using fireworks algorithm. In Proceedings of the 4th International Conference on Swarm Intelligence, Harbin, China, 12–15 June 2013; pp. 439–450.
13. Imran, A.M.; Kowsalya, M.; Kothari, D.P. A novel integration technique for optimal network reconfiguration and distributed generation placement in power distribution networks. Int. J. Electr. Power 2014, 63, 461–472.
14. Pholdee, N.; Bureerat, S. Comparative performance of meta-heuristic algorithms for mass minimisation of trusses with dynamic constraints. Adv. Eng. Softw. 2014, 75, 1–13.
15. Li, H.; Bai, P.; Xue, J.; Zhu, J.; Zhang, H. Parameter estimation of chaotic systems using fireworks algorithm. In Advances in Swarm Intelligence; Springer: Berlin, Germany, 2015; pp. 457–467.
16. Liu, Z.B.; Feng, Z.R.; Ke, L.J. Fireworks algorithm for the multi-satellite control resource scheduling problem. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation, Sendai, Japan, 25–28 May 2015; pp. 1280–1286.
17. Zheng, S.Q.; Janecek, A.; Tan, Y. Enhanced fireworks algorithm. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 2069–2077.
18. Li, J.Z.; Zheng, S.Q.; Tan, Y. Adaptive fireworks algorithm. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation, Beijing, China, 6–11 July 2014; pp. 3214–3221.
19. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the 2005 International Conference on Computational Intelligence for Modelling, Control and Automation, Vienna, Austria, 28–30 November 2005; pp. 695–701.
20. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M. Opposition-based differential evolution algorithms. In Proceedings of the 2006 IEEE Congress on Evolutionary Computation, Vancouver, BC, Canada, 16–21 July 2006; pp. 7363–7370.
21. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M. Quasi-oppositional differential evolution. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 2229–2236.
22. Wang, H.; Liu, Y.; Zeng, S.Y.; Li, H.; Li, C.H. Opposition-based particle swarm algorithm with Cauchy mutation. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 4750–4756.
23. Malisia, A.R.; Tizhoosh, H.R. Applying opposition-based ideas to the ant colony system. In Proceedings of the 2007 IEEE Swarm Intelligence Symposium, Honolulu, HI, USA, 1–5 April 2007; pp. 182–189.
24. Yu, S.H.; Zhu, S.L.; Ma, Y. Enhancing firefly algorithm using generalized opposition-based learning. Computing 2015, 97, 741–754.
25. Zhao, J.; Lv, L.; Sun, H. Artificial bee colony using opposition-based learning. In Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2015; Volume 329, pp. 3–10.
26. Ahandani, M.A.; Alavi-Rad, H. Opposition-based learning in shuffled frog leaping: An application for parameter identification. Inf. Sci. 2015, 291, 19–42.
27. Mitic, M.; Miljkovic, Z. Chaotic fruit fly optimization algorithm. Knowl. Based Syst. 2015, 89, 446–458.
28. Derrac, J.; Garcia, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18.
29. Brest, J.; Greiner, S.; Boskovic, B.; Mernik, M.; Zumer, V. Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems. IEEE Trans. Evolut. Comput. 2006, 10, 646–657.
30. Zambrano-Bigiarini, M.; Clerc, M.; Rojas, R. Standard particle swarm optimisation 2011 at CEC-2013: A baseline for future PSO improvements. In Proceedings of the 2013 IEEE Congress on Evolutionary Computation (CEC), Cancun, Mexico, 20–23 June 2013; pp. 2337–2344.
31. Yılmaz, S.; Küçüksille, E.U. A new modification approach on bat algorithm for solving optimization problems. Appl. Soft Comput. 2015, 28, 259–275.
32. Yang, X.S. Nature-Inspired Metaheuristic Algorithms, 2nd ed.; Luniver Press: Frome, UK, 2010; pp. 81–96.
Figure 1. The quasi-opposite solution QOP.
Figure 2. The EFWA, AFWA, and OAFWA searching curves.
Figure 3. Runtime cost of the six algorithms for twelve functions when popSize = 20.
Figure 4. Runtime cost of the six algorithms for twelve functions when popSize = 40.
Table 1. Twelve benchmark functions.

Function | Dimension | Range
$F_1(x) = \sum_{i=1}^{n} x_i^2$ | 40 | [−10, 10]
$F_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 40 | [−10, 10]
$F_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | 40 | [−10, 10]
$F_4(x) = \sum_{i=1}^{n} i x_i^2$ | 40 | [−30, 30]
$F_5(x) = \max \{ |x_i|,\ 1 \le i \le n \}$ | 40 | [−100, 100]
$F_6(x) = 10^6 x_1^2 + \sum_{i=2}^{n} x_i^2$ | 40 | [−100, 100]
$F_7(x) = \sum_{i=1}^{n} \left( x_i^2 - 10 \cos(2\pi x_i) + 10 \right)$ | 40 | [−1, 1]
$F_8(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$ | 40 | [−100, 100]
$F_9(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i) \right) + 20 + e$ | 40 | [−10, 10]
$F_{10}(x) = \sum_{i=1}^{n/4} \left( (x_{4i-3} + 10 x_{4i-2})^2 + 5 (x_{4i-1} - x_{4i})^2 + (x_{4i-2} - 2 x_{4i-1})^4 + 10 (x_{4i-3} - x_{4i})^4 \right)$ | 40 | [−10, 10]
$F_{11}(x) = \sum_{i=1}^{n} | x_i \sin(x_i) + 0.1 x_i |$ | 40 | [−10, 10]
$F_{12}(x) = 1 - \cos\left( 2\pi \sqrt{\sum_{i=1}^{n} x_i^2} \right) + 0.1 \sqrt{\sum_{i=1}^{n} x_i^2}$ | 40 | [−10, 10]
Table 2. Statistical results for OAFWA.

Func. | Best | Mean | Median | Worst | Std. Dev. | Time (s)
F1 | <1 × 10^−315 | 2.46 × 10^−34 | 1.91 × 10^−34 | 1.15 × 10^−33 | 2.14 × 10^−34 | 3.23
F2 | <1 × 10^−315 | 8.21 × 10^−17 | 3.26 × 10^−23 | 5.31 × 10^−16 | 1.22 × 10^−16 | 5.91
F3 | <1 × 10^−315 | 3.56 × 10^−38 | 1.95 × 10^−38 | 2.74 × 10^−37 | 5.32 × 10^−38 | 30.56
F4 | <1 × 10^−315 | 3.28 × 10^−38 | 2.52 × 10^−42 | 1.94 × 10^−37 | 4.66 × 10^−38 | 7.63
F5 | <1 × 10^−315 | 7.39 × 10^−17 | 8.07 × 10^−17 | 1.75 × 10^−16 | 4.24 × 10^−17 | 5.88
F6 | <1 × 10^−315 | 7.17 × 10^−46 | 8.21 × 10^−47 | 3.18 × 10^−44 | 3.24 × 10^−45 | 5.54
F7 | <1 × 10^−315 | 1.03 × 10^−12 | <1 × 10^−315 | 1.02 × 10^−10 | 1.02 × 10^−11 | 4.62
F8 | <1 × 10^−315 | <1 × 10^−315 | <1 × 10^−315 | <1 × 10^−315 | <1 × 10^−315 | 8.73
F9 | 8.88 × 10^−16 | 9.59 × 10^−16 | 8.88 × 10^−16 | 4.44 × 10^−15 | 5.00 × 10^−16 | 9.61
F10 | <1 × 10^−315 | 2.00 × 10^−40 | <1 × 10^−315 | 2.15 × 10^−39 | 4.11 × 10^−40 | 23.63
F11 | <1 × 10^−315 | 5.27 × 10^−17 | 5.32 × 10^−17 | 1.06 × 10^−16 | 2.71 × 10^−17 | 5.84
F12 | <1 × 10^−315 | 1.32 × 10^−33 | 1.15 × 10^−33 | 5.11 × 10^−33 | 1.00 × 10^−33 | 7.40
Table 3. Statistical results for AFWA.

Func. | Best | Mean | Median | Worst | Std. Dev. | Time (s)
F1 | 1.36 × 10^−7 | 64.4 | 63.4 | 1.57 × 10^2 | 36.6 | 3.23
F2 | 1.25 × 10^−3 | 1.07 | 2.43 × 10^−1 | 38.9 | 3.95 | 5.59
F3 | 64.5 | 2.45 × 10^2 | 2.49 × 10^2 | 4.11 × 10^2 | 88.2 | 30.20
F4 | 4.42 × 10^3 | 1.48 × 10^4 | 1.42 × 10^4 | 2.54 × 10^4 | 4.84 × 10^3 | 7.50
F5 | 31.0 | 43.7 | 44.2 | 52.2 | 4.86 | 5.69
F6 | 1.06 × 10^4 | 2.19 × 10^4 | 2.19 × 10^4 | 3.18 × 10^4 | 3.69 × 10^3 | 5.38
F7 | 1.51 × 10^−5 | 7.66 | 5.32 | 31.8 | 7.37 | 4.54
F8 | 2.20 × 10^−10 | 3.88 × 10^−1 | 9.86 × 10^−3 | 4.68 | 7.42 × 10^−1 | 8.28
F9 | 5.51 × 10^−7 | 2.11 | 2.17 | 3.57 | 6.35 × 10^−1 | 9.39
F10 | 7.40 × 10^2 | 6.75 × 10^3 | 6.28 × 10^3 | 2.29 × 10^4 | 4.43 × 10^3 | 23.35
F11 | 2.11 × 10^−1 | 2.39 | 1.98 | 6.69 | 1.56 | 5.64
F12 | 2.00 × 10^−1 | 5.58 × 10^−1 | 5.00 × 10^−1 | 1.10 | 1.85 × 10^−1 | 7.17
Table 4. Comparison of OAFWA and AFWA for solution accuracy.

Func. | Best (AFWA) | Best (OAFWA) | Accuracy Improved | Mean (AFWA) | Mean (OAFWA) | Accuracy Improved
F1 | 1.36 × 10^−7 | <1 × 10^−315 | −308 | 64.4 | 2.46 × 10^−34 | −35
F2 | 1.25 × 10^−3 | <1 × 10^−315 | −312 | 1.07 | 8.21 × 10^−17 | −17
F3 | 64.5 | <1 × 10^−315 | −316 | 2.45 × 10^2 | 3.56 × 10^−38 | −40
F4 | 4.42 × 10^3 | <1 × 10^−315 | −318 | 1.48 × 10^4 | 3.28 × 10^−38 | −42
F5 | 31.0 | <1 × 10^−315 | −316 | 43.7 | 7.39 × 10^−17 | −18
F6 | 1.06 × 10^4 | <1 × 10^−315 | −319 | 2.19 × 10^4 | 7.17 × 10^−46 | −50
F7 | 1.51 × 10^−5 | <1 × 10^−315 | −310 | 7.66 | 1.03 × 10^−12 | −12
F8 | 2.20 × 10^−10 | <1 × 10^−315 | −305 | 3.88 × 10^−1 | <1 × 10^−315 | −314
F9 | 5.51 × 10^−7 | 8.88 × 10^−16 | −9 | 2.11 | 9.59 × 10^−16 | −16
F10 | 7.40 × 10^2 | <1 × 10^−315 | −317 | 6.75 × 10^3 | 2.00 × 10^−40 | −43
F11 | 2.11 × 10^−1 | <1 × 10^−315 | −314 | 2.39 | 5.27 × 10^−17 | −17
F12 | 2.00 × 10^−1 | <1 × 10^−315 | −314 | 5.58 × 10^−1 | 1.32 × 10^−33 | −32
Average | – | – | −288 | – | – | −53
Table 5. Wilcoxon signed-rank test results for OAFWA and AFWA.

 | F1 | F2 | F3 | F4 | F5 | F6
H | 1 | 1 | 1 | 1 | 1 | 1
P | 4.14 × 10^−108 | 3.33 × 10^−165 | <1 × 10^−315 | 2.01 × 10^−230 | 1.63 × 10^−181 | 1.63 × 10^−181

 | F7 | F8 | F9 | F10 | F11 | F12
H | 1 | 1 | 1 | 1 | 1 | 1
P | 1.39 × 10^−132 | 8.13 × 10^−198 | 1.01 × 10^−246 | <1 × 10^−315 | 3.33 × 10^−165 | 2.01 × 10^−230
Table 6. Statistical results for EFWA.

Func. | Best | Mean | Median | Worst | Std. Dev. | Time (s)
F1 | 1.66 × 10^−3 | 2.63 × 10^−3 | 2.63 × 10^−3 | 3.44 × 10^−3 | 3.83 × 10^−4 | 4.34
F2 | 2.07 × 10^−1 | 5.67 × 10^−1 | 2.75 × 10^−1 | 2.84 | 5.89 × 10^−1 | 8.88
F3 | 7.52 × 10^−3 | 1.44 × 10^−2 | 1.40 × 10^−2 | 2.77 × 10^−2 | 3.67 × 10^−3 | 42.54
F4 | 3.31 × 10^−1 | 5.22 × 10^−1 | 4.92 × 10^−1 | 8.91 × 10^−1 | 1.23 × 10^−1 | 13.55
F5 | 1.62 × 10^−1 | 7.01 × 10^−1 | 2.18 × 10^−1 | 9.26 | 1.47 | 9.22
F6 | 2.01 × 10^−1 | 3.28 × 10^−1 | 3.12 × 10^−1 | 4.75 × 10^−1 | 6.67 × 10^−2 | 9.11
F7 | 3.22 × 10^−3 | 5.04 × 10^−3 | 5.07 × 10^−3 | 6.87 × 10^−3 | 6.73 × 10^−4 | 6.46
F8 | 6.01 × 10^−3 | 1.63 × 10^−2 | 1.48 × 10^−2 | 4.91 × 10^−2 | 8.80 × 10^−3 | 12.90
F9 | 2.39 | 5.05 | 5.08 | 8.51 | 1.20 | 16.82
F10 | 1.68 × 10^−1 | 2.69 × 10^−1 | 2.64 × 10^−1 | 4.57 × 10^−1 | 5.86 × 10^−2 | 37.31
F11 | 9.73 × 10^−1 | 4.14 | 3.80 | 13.6 | 2.16 | 8.98
F12 | 2.00 × 10^−1 | 3.15 × 10^−1 | 3.00 × 10^−1 | 4.00 × 10^−1 | 5.39 × 10^−2 | 13.46
Table 7. Success rates and mean error ranks for three FWA-based algorithms.

Func. | Mean Error (EFWA) | Sr (EFWA) | Rank (EFWA) | Mean Error (AFWA) | Sr (AFWA) | Rank (AFWA) | Mean Error (OAFWA) | Sr (OAFWA) | Rank (OAFWA)
F1 | 2.63 × 10^−3 | 2 | 2 | 64.4 | 1 | 3 | 2.46 × 10^−34 | 100 | 1
F2 | 5.67 × 10^−1 | 2 | 2 | 1.07 | 31 | 3 | 8.21 × 10^−17 | 100 | 1
F3 | 1.44 × 10^−2 | 0 | 2 | 2.45 × 10^2 | 6 | 3 | 3.56 × 10^−38 | 100 | 1
F4 | 5.22 × 10^−1 | 0 | 2 | 1.48 × 10^4 | 0 | 3 | 3.28 × 10^−38 | 100 | 1
F5 | 7.01 × 10^−1 | 0 | 2 | 43.7 | 0 | 3 | 7.39 × 10^−17 | 100 | 1
F6 | 3.28 × 10^−1 | 0 | 2 | 2.19 × 10^4 | 0 | 3 | 7.17 × 10^−46 | 100 | 1
F7 | 5.04 × 10^−3 | 100 | 2 | 7.66 | 0 | 3 | 1.03 × 10^−12 | 100 | 1
F8 | 1.63 × 10^−2 | 0 | 2 | 3.88 × 10^−1 | 0 | 3 | <1 × 10^−315 | 100 | 1
F9 | 5.05 | 0 | 3 | 2.11 | 0 | 2 | 9.59 × 10^−16 | 100 | 1
F10 | 2.69 × 10^−1 | 0 | 2 | 6.75 × 10^3 | 0 | 3 | 2.00 × 10^−40 | 100 | 1
F11 | 4.14 | 0 | 3 | 2.39 | 0 | 2 | 5.27 × 10^−17 | 100 | 1
F12 | 3.15 × 10^−1 | 0 | 2 | 5.58 × 10^−1 | 0 | 3 | 1.32 × 10^−33 | 100 | 1
Average | – | 8.7 | 2.17 | – | 3.2 | 2.83 | – | 100 | 1
Table 8. Parameters of the algorithms.

Algorithm | Parameters
BA | A = 0.95, r = 0.8, fmin = 0, f = 1.0
DE | F = 0.5, CR = 0.9
jDE | τ1 = τ2 = 0.1, F ∈ [0.1, 1.0], CR ∈ [0, 1]
FA | α = 0.9, β = 0.25, γ = 1.0
SPSO2011 | w = 0.7213, c1 = 1.1931, c2 = 1.1931
Table 9. Mean errors for the six algorithms when popSize = 20.

Func. | BA | DE | jDE | FA | SPSO2011 | OAFWA | Iteration
F1 | 3.27 × 10^−3 | 11.7 | 3.67 × 10^−35 | 6.45 × 10^−7 | 8.15 × 10^−23 | 6.75 × 10^−23 | 65,000
F2 | 30.4 | 3.37 | 3.49 × 10^−35 | 4.37 × 10^−2 | 6.27 | 2.02 × 10^−16 | 100,000
F3 | 2.10 × 10^−2 | 7.70 | 5.42 × 10^−4 | 34.8 | 3.70 × 10^−6 | 1.19 × 10^−38 | 190,000
F4 | 15.3 | 2.27 × 10^3 | 5.48 × 10^−76 | 1.19 × 10^−3 | 8.57 × 10^−4 | 2.26 × 10^−38 | 140,000
F5 | 32.3 | 40.5 | 27.1 | 2.28 × 10^−2 | 12.8 | 2.22 × 10^−17 | 110,000
F6 | 1.23 × 10^5 | 3.80 × 10^3 | 2.54 × 10^−58 | 7.37 × 10^−5 | 9.85 × 10^3 | 3.56 × 10^−42 | 110,000
F7 | 18.9 | 34.3 | 12.8 | 19.8 | 48.5 | <1 × 10^−315 | 80,000
F8 | 1.22 | 1.07 | 9.30 × 10^−3 | 1.78 × 10^−3 | 8.13 × 10^−3 | 2.41 × 10^−16 | 120,000
F9 | 6.64 | 4.48 | 5.29 × 10^−2 | 3.82 × 10^−4 | 2.71 | 3.40 × 10^−12 | 150,000
F10 | 4.82 × 10^−1 | 5.21 × 10^3 | 2.06 × 10^−45 | 1.69 × 10^−2 | 3.23 × 10^−3 | 1.17 × 10^−40 | 200,000
F11 | 4.85 | 6.58 × 10^−1 | 5.66 × 10^−16 | 8.21 × 10^−1 | 3.45 | 3.03 × 10^−17 | 100,000
F12 | 16.7 | 2.41 | 1.92 × 10^−1 | 2.58 × 10^−1 | 1.92 × 10^−1 | 6.38 × 10^−34 | 140,000
Table 10. Mean errors for the six algorithms when popSize = 40.

Func. | BA | DE | jDE | FA | SPSO2011 | OAFWA | Iteration
F1 | 3.22 × 10^−3 | 7.77 × 10^−10 | 3.27 × 10^−17 | 2.02 × 10^−5 | 4.16 × 10^−31 | 2.46 × 10^−34 | 65,000
F2 | 7.15 | 1.28 × 10^−15 | 1.60 × 10^−15 | 1.51 × 10^−1 | 3.06 | 8.21 × 10^−17 | 100,000
F3 | 2.03 × 10^−2 | 3.83 × 10^−5 | 7.91 × 10^−2 | 40.2 | 3.25 × 10^−8 | 3.56 × 10^−38 | 190,000
F4 | 1.23 × 10^−1 | 5.12 × 10^−22 | 1.38 × 10^−37 | 1.29 × 10^−1 | 5.77 × 10^−6 | 3.28 × 10^−38 | 140,000
F5 | 25.1 | 23.9 | 7.27 | 9.04 × 10^−2 | 5.99 | 7.39 × 10^−17 | 110,000
F6 | 8.05 × 10^4 | 2.21 × 10^−6 | 2.07 × 10^−28 | 2.26 × 10^−3 | 3.56 × 10^3 | 7.17 × 10^−46 | 110,000
F7 | 19.2 | 26.9 | 8.08 | 18.4 | 49.1 | 1.03 × 10^−12 | 80,000
F8 | 5.25 × 10^−1 | 8.39 × 10^−3 | 2.46 × 10^−4 | 1.52 × 10^−3 | 5.22 × 10^−3 | <1 × 10^−315 | 120,000
F9 | 5.53 | 5.39 × 10^−1 | 7.99 × 10^−15 | 3.16 × 10^−2 | 1.54 | 9.59 × 10^−16 | 150,000
F10 | 4.62 × 10^−1 | 2.85 × 10^−6 | 1.20 × 10^−30 | 8.71 × 10^−2 | 1.41 × 10^−3 | 2.00 × 10^−40 | 200,000
F11 | 3.78 | 2.56 × 10^−15 | 1.01 × 10^−3 | 9.77 × 10^−1 | 2.17 × 10^−1 | 5.27 × 10^−17 | 100,000
F12 | 11.1 | 1.59 × 10^−1 | 9.99 × 10^−2 | 2.28 × 10^−1 | 1.11 × 10^−1 | 1.32 × 10^−33 | 140,000
Table 11. Ranks for the six algorithms when popSize = 20.

Algorithm | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10 | F11 | F12 | Rank
BA | 5 | 6 | 4 | 5 | 5 | 6 | 3 | 6 | 6 | 5 | 6 | 5 | 5.17
DE | 6 | 4 | 5 | 6 | 6 | 4 | 5 | 5 | 5 | 6 | 3 | 4 | 4.92
jDE | 1 | 1 | 3 | 1 | 4 | 1 | 2 | 4 | 3 | 1 | 2 | 2 | 2.08
FA | 4 | 3 | 6 | 4 | 2 | 3 | 4 | 2 | 2 | 4 | 4 | 3 | 3.41
SPSO2011 | 3 | 5 | 2 | 3 | 3 | 5 | 6 | 3 | 4 | 3 | 5 | 2 | 3.67
OAFWA | 2 | 2 | 1 | 2 | 1 | 2 | 1 | 1 | 1 | 2 | 1 | 1 | 1.42
Table 12. Ranks for the six algorithms when popSize = 40.

Algorithm | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 | F9 | F10 | F11 | F12 | Rank
BA | 6 | 6 | 4 | 5 | 5 | 6 | 4 | 6 | 6 | 6 | 6 | 6 | 5.50
DE | 4 | 2 | 3 | 3 | 6 | 3 | 5 | 5 | 4 | 3 | 2 | 4 | 3.67
jDE | 3 | 3 | 5 | 2 | 4 | 2 | 2 | 2 | 2 | 2 | 3 | 2 | 2.67
FA | 5 | 4 | 6 | 6 | 2 | 4 | 3 | 3 | 3 | 5 | 5 | 5 | 4.25
SPSO2011 | 2 | 5 | 2 | 4 | 3 | 5 | 6 | 4 | 5 | 4 | 4 | 3 | 3.91
OAFWA | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1.00
