In this section, we describe the basic expressions, upper and lower bounds and theoretical optimal values of the test functions. We then select eight comparison algorithms and set their parameters. The performance of the resulting nine algorithms is compared through experimental data on 13 variable-dimension and 10 fixed-dimension benchmark functions, with the 13 variable-dimension functions tested in dimensions 10, 30 and 100, respectively, so that the convergence precision of the algorithms can be compared from different aspects. Finally, the convergence behavior of the algorithms is tested and compared.
4.1. Test Function and Experimental Parameter Setting
In the experiment, 23 benchmark functions were selected according to [30] and [31]; the first 13 are variable-dimension functions and the last 10 are fixed-dimension functions. This complete suite of 23 unimodal and multi-modal functions is used to evaluate the performance of the algorithms. The definitions, function images, upper and lower bounds, dimension settings and minima of the benchmark functions F1–F23 are shown in Table 1 and Table 2. F1–F7 are unimodal optimization problems with a single extreme point in the given search area; these functions are used to study the convergence rate and optimization accuracy of the algorithms. F8–F23 are multi-modal optimization functions with multiple local extremum points in the given search area; they are used to evaluate the ability to jump out of local optima and seek the global optimum. In addition, the dimensions d = 10, d = 30 and d = 100 are set for the first 13 functions in this paper, so as to test the different algorithms.
Meanwhile, eight comparison algorithms were selected, namely, the original moth-flame optimization algorithm (MFO), the sine cosine algorithm (SCA), the bat algorithm (BA), the spotted hyena optimizer (SHO), particle swarm optimization (PSO), the whale optimization algorithm (WOA), the grey wolf optimizer (GWO) and the salp swarm algorithm (SSA). The parameter settings of these comparison algorithms are shown in
Table 3. In addition to the parameter settings in
Table 3, the population size of each algorithm is set to 30, and the maximum number of iterations is set to 1000. The experiments were conducted on a 64-bit Windows 7 operating system with MATLAB 2014a; the processor was an Intel(R) Core(TM) i5-5200U CPU @ 2.20 GHz with 4.00 GB of RAM.
Table 1 shows the expressions, images and upper and lower bounds of the 13 benchmark functions. There, L represents the lower bound, U the upper bound, D the dimension of the function and fmin the theoretical optimal value.
Table 2 shows the ten fixed-dimension test functions.
4.3. Comparison with Other Algorithms
In order to verify the effectiveness of the IMFO algorithm, eight other algorithms are used to test the mathematical benchmark functions in this section. The results of running the different algorithms on the functions are shown in Table 4, where N/A means that the algorithm is not suitable for solving this function. Table 4 compares the means and standard deviations of the nine algorithms when the dimension is 10. It should be noted that the best optimal solution obtained is highlighted in bold font.
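The Ave and Std entries in such a table are computed over independent runs of each algorithm. A hedged sketch of that protocol follows; random search stands in for the real optimizers, whose implementations are not reproduced here, and the 30-run count mirrors the protocol described later in this section.

```python
import random
import statistics

def random_search(f, dim, lb, ub, iters, rng):
    """Placeholder optimizer: returns the best objective value found."""
    best = float("inf")
    for _ in range(iters):
        x = [rng.uniform(lb, ub) for _ in range(dim)]
        best = min(best, f(x))
    return best

def sphere(x):
    return sum(xi * xi for xi in x)

rng = random.Random(0)
# 30 independent runs of one algorithm on one benchmark function.
results = [random_search(sphere, 10, -100.0, 100.0, 1000, rng) for _ in range(30)]
ave = statistics.mean(results)   # the "Ave" column entry
std = statistics.stdev(results)  # the "Std" column entry
```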
As shown in Table 4, the improved moth-flame optimization algorithm outperforms the other algorithms in terms of both the average (Ave) and standard deviation (Std) on the benchmark functions F1, F3, F4, F7, F9, F10, F11, F15, F16, F17, F18 and F19. In addition, the improved IMFO algorithm obtains the optimal standard deviation (Std) on the benchmark functions F5, F21, F22 and F23, and the best average on F8. Particle swarm optimization (PSO) ranks second in performance on F6, F12 and F13. The spotted hyena optimizer (SHO) obtains the best value on the function F2 and the minimum average on the function F16. The salp swarm algorithm (SSA) obtains the best value on the function F14. BA obtains the minimum average on the function F5, and GWO obtains the minimum averages on the functions F17, F21, F22 and F23. SCA has the best stability in solving the function F8.
The results indicate the superiority of the proposed algorithm. Analysis of the averages and standard deviations reveals that the proposed IMFO algorithm shows competitive performance in comparison with the compared algorithms. The comparison results for the selected unimodal functions (F1–F7) are shown in
Table 4. Note that, except for F2, F5 and F6, IMFO outperforms the eight compared algorithms on all the other unimodal test functions; in particular, a huge improvement is achieved on the benchmark functions F1 and F3. These results verify that IMFO has excellent optimization accuracy on unimodal functions. For the multi-modal functions (F8–F23), IMFO also performs better than the other algorithms, finding superior average results on the test functions F8, F9, F10, F11, F15, F16, F17, F18, F19 and F20. These results mean that the improved IMFO algorithm has a good ability to jump out of local optima and seek the global optimum.
It is obvious that IMFO solves these 23 benchmark functions better than the other algorithms. At the same time, it can be seen from Table 4 that the standard deviation of the IMFO algorithm on 16 test functions is the minimum among the nine compared algorithms, indicating that the improved moth-flame optimization algorithm has good robustness.
Table 5 compares the experimental data of the improved algorithm with those of the other algorithms on the first 13 functions when d = 30.
It can be seen that, compared with the other algorithms, the IMFO algorithm obtains the minimum values on the functions F3, F4, F9, F10, F11 and F12. The WOA algorithm performs best in solving the function F1, and SCA is the most stable in solving the function F8. When d = 30, SHO can still obtain the minimum value on the function F2, and the GWO algorithm performs best on the function F7. The PSO algorithm obtains the minimum value on F13 and the minimum average on F6. Finally, the BA algorithm obtains the minimum average on F5, and it is the most robust and stable on F6. Therefore, when d = 30, the results of the nine algorithms on the 13 test functions show that IMFO remains the best overall.
Table 6 compares the experimental data of the improved algorithm with those of the other algorithms on the first 13 functions when d = 100.
Compared with the other algorithms, the IMFO algorithm obtains the minimum values on the functions F3, F4, F7, F9, F10 and F11 by a significant margin. At the same time, it obtains the minimum variance on the functions F6, F8 and F13, which indicates that the IMFO algorithm has good stability and strong robustness in solving high-dimensional problems. Compared with the d = 10 and d = 30 cases, the WOA algorithm now shows obvious advantages: it performs best on the functions F1, F2 and F5 and obtains the minimum averages on the functions F8, F12 and F13. The BA algorithm obtains the minimum average on F6 and the best stability on F12. However, as the dimension increases, MFO, SCA, BA, SHO, PSO, GWO and SSA produce poor results; the improved moth-flame optimization algorithm and the whale optimization algorithm produce better results, and the improved moth-flame optimization algorithm is still the best on the whole.
4.4. Convergence Test
A convergence test draws the convergence curves obtained when running different test functions with different algorithms, through which the convergence speed and accuracy of the algorithms can be compared. To further study the performance and effect of the improved algorithm, this section tests the convergence of the nine algorithms on the 13 functions F1–F13 under the conditions that the dimension d is 10, the maximum number of iterations is 1000 and the population size is 30, and analyzes the convergence speed and calculation accuracy of each algorithm.
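A convergence curve is simply the best objective value found so far, recorded at every iteration. The following sketch shows how such curves are collected (random search again stands in for the actual optimizers, which are not reproduced here):

```python
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def run_with_curve(f, dim, lb, ub, iters, seed):
    """Return the best-so-far value at each iteration (the convergence curve)."""
    rng = random.Random(seed)
    best = float("inf")
    curve = []
    for _ in range(iters):
        x = [rng.uniform(lb, ub) for _ in range(dim)]
        best = min(best, f(x))
        curve.append(best)
    return curve

curve = run_with_curve(sphere, 10, -100.0, 100.0, 1000, seed=1)
# The curve is nonincreasing by construction; plotting it (typically on a
# logarithmic scale) against the iteration count yields figures like Figure 5.
```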
To further illustrate the advantages of the improved algorithm, the convergence behavior is shown in Figure 5. The convergence curves in Figure 5 verify that the proposed IMFO converges faster than the other algorithms. The results show that the improvements based on Lévy flight and dimension-by-dimension evaluation effectively improve the convergence of the original algorithm. Twelve functions are selected to demonstrate the convergence of the algorithms; panels (a) to (l) show that the improved moth-flame optimization algorithm converges better than the original moth-flame optimization algorithm on these functions.
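The Lévy flight component mentioned above is commonly implemented with Mantegna's algorithm; the following is a generic sketch of that standard method, not necessarily the paper's exact update rule. Occasional very large steps give the moths long jumps that help them escape local optima.

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Lévy-distributed step via Mantegna's algorithm:
    step = u / |v|^(1/beta), with u ~ N(0, sigma^2) and v ~ N(0, 1).
    beta in (1, 2] controls how heavy the tail is."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0.0, sigma)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

rng = random.Random(42)
# Most steps are small, but the heavy tail occasionally produces large jumps.
steps = [levy_step(1.5, rng) for _ in range(10000)]
```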
Compared with the other algorithms, the improved moth-flame optimization algorithm has a faster convergence speed and higher calculation accuracy on the 10 functions F1 (a), F2 (b), F3 (c), F4 (d), F5 (e), F7 (f), F9 (g), F10 (h), F11 (i) and F15 (j). On the two functions F16 (k) and F18 (l), there is little difference in the convergence trends of the algorithms. As we can see, the improved moth-flame optimization algorithm converges the most quickly and with the highest precision on the functions F1 (a), F3 (c), F4 (d) and F11 (i). On the function F2 (b), GWO, IMFO and WOA perform better than the other algorithms: GWO converges fastest at the beginning, but its later convergence is slow, while IMFO and WOA converge faster than GWO and reach smaller values; however, WOA is slightly better than IMFO in solving the function F2 (b). On the functions F1 (a) and F4 (d), although the GWO algorithm converges quickly in the early stage, the IMFO algorithm converges the fastest in the later period. Furthermore, the value of the function F2 (b) solved by the IMFO algorithm is the smallest among the compared algorithms. On the functions F3 (c) and F11 (i), the IMFO algorithm has an absolute advantage over the other algorithms. On the function F7 (f), the SCA solution is slightly better than that of IMFO. The GWO algorithm obtains almost the same result as the IMFO algorithm on the function F9 (g). The IMFO algorithm ranks third on the function F5 (e). The SHO algorithm performs best on the function F10 (h), converging faster than the IMFO algorithm, although the two converge to the same value. On the function F15 (j), the WOA, GWO and IMFO algorithms perform almost identically. Overall, the IMFO algorithm has a faster convergence speed and higher convergence precision than the other algorithms.
4.5. Statistical Analysis
Derrac et al. [32] proposed that statistical tests should be carried out to evaluate the performance of improved evolutionary algorithms. In other words, comparing algorithms based on averages, standard deviations and convergence analysis alone is not enough; statistical tests are needed to verify that a proposed improved algorithm offers significant improvements and advantages over existing algorithms. In this section, the Wilcoxon rank sum test and the Friedman test are used to verify that the proposed IMFO algorithm has significant improvements and advantages over the comparison algorithms.
In order to determine whether each result of IMFO is statistically significantly different from the best results of the other algorithms, the Wilcoxon rank sum test was used at the 5% significance level. The dimension d of the 13 functions F1–F13 is 10, the maximum number of iterations is 1000 and the population size is 30. The p-values of the Wilcoxon rank sum test on the benchmark functions were calculated from the results of 30 independent runs of each algorithm.
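The rank sum test above can be sketched in pure Python using the standard large-sample normal approximation (the same approximation common statistics packages use for samples of this size); the illustrative data below is random and merely stands in for two algorithms' 30-run results.

```python
import math
import random

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank sum test via the normal approximation,
    reasonable for 30 runs per algorithm. Returns (z, p)."""
    n1, n2 = len(x), len(y)
    combined = sorted(x + y)
    # Assign each value its average rank (equal values share a rank).
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks.setdefault(combined[i], (i + 1 + j) / 2.0)  # mean of ranks i+1..j
        i = j
    w = sum(ranks[v] for v in x)           # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0          # mean of W under the null hypothesis
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
    return z, p

rng = random.Random(0)
a = [rng.gauss(0.0, 1.0) for _ in range(30)]  # e.g. 30 errors of one algorithm
b = [rng.gauss(3.0, 1.0) for _ in range(30)]  # a clearly worse algorithm
z, p = rank_sum_test(a, b)                    # p < 0.05: significant difference
```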
In Table 7, the p-values calculated in the Wilcoxon rank sum tests for all the benchmark functions and comparison algorithms are given. For example, if the best algorithm is IMFO, paired comparisons are made between IMFO and MFO, IMFO and SCA, and so on. Because the best algorithm cannot be compared with itself, the best algorithm on each function is marked N/A, meaning "not applicable": no rank sum statistic can be computed for that pairing. According to Derrac et al., p < 0.05 can be considered a strong indicator for rejecting the null hypothesis. According to the results in Table 7, the p-value of IMFO is less than 0.05 in almost all cases. It was greater than 0.05 only for F5, F8, F14 and F19 for IMFO vs. MFO; F14 for IMFO vs. SCA; F18 for IMFO vs. BA; F14 and F20 for IMFO vs. PSO; F8 and F9 for IMFO vs. WOA; and F5 and F15 for IMFO vs. GWO. This shows that the superiority of the algorithm is statistically significant; in other words, the IMFO algorithm has significantly higher convergence precision than the other algorithms.
Furthermore, to make the statistical test results more convincing, we performed the Friedman test on the average values obtained by the nine algorithms on the 23 benchmark functions in Section 4.3. The significance level was set to 0.05: when the p-value is less than 0.05, the algorithms can be considered to have statistically significant differences; when the p-value is greater than the significance level of 0.05, there is no statistically significant difference between the algorithms.
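The Friedman test ranks the algorithms on each function and tests whether the average ranks differ significantly. A minimal sketch follows; the table of averages here is synthetic illustrative data (the real 23 x 9 data sits in the paper's tables), and for nine algorithms the chi-square statistic has k - 1 = 8 degrees of freedom, with a 0.05 critical value of about 15.51.

```python
import random

def friedman_statistic(table):
    """Friedman test on an n_functions x k_algorithms table of average
    results (smaller is better). Returns (avg_ranks, chi_square)."""
    n, k = len(table), len(table[0])
    rank_sums = [0.0] * k
    for row in table:
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank       # ties not handled in this sketch
    avg_ranks = [s / n for s in rank_sums]
    chi2 = 12.0 * n / (k * (k + 1)) * sum(r * r for r in avg_ranks) - 3.0 * n * (k + 1)
    return avg_ranks, chi2

# Synthetic data: 23 functions, 9 algorithms, column j always j-th best.
rng = random.Random(7)
table = [[j + rng.random() for j in range(9)] for _ in range(23)]
avg_ranks, chi2 = friedman_statistic(table)
# chi2 above the critical value (~15.51 at df = 8, alpha = 0.05) indicates
# significant differences among the algorithms.
```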
According to the calculation results in Table 8, when the dimension of the first 13 functions is set to 10, the rank means of the nine algorithms are 2.71 (IMFO), 5.45 (MFO), 5.52 (SCA), 6.52 (BA), 7.86 (SHO), 3.38 (PSO), 4.24 (WOA), 4.19 (GWO) and 5.12 (SSA). The priority order is IMFO, PSO, GWO, WOA, SSA, MFO, SCA, BA and SHO. Here, the p-value is less than 0.05; this indicates that there are significant differences among the nine algorithms.
Similarly, when the dimension of the first 13 functions is set to 30, the rank means of the nine algorithms are 2.92 (IMFO), 8.25 (MFO), 6.83 (SCA), 4.92 (BA), 5.67 (SHO), 4.17 (PSO), 3.50 (WOA), 3.17 (GWO) and 5.58 (SSA). The priority order is IMFO, GWO, WOA, PSO, BA, SSA, SHO, SCA and MFO. Here, the p-value is less than 0.05; this indicates that there are significant differences among the nine algorithms.
Finally, when the dimension of the first 13 functions is set to 100, the rank means of the nine algorithms are 2.63 (IMFO), 7.17 (MFO), 6.83 (SCA), 4.00 (BA), 5.83 (SHO), 2.79 (PSO), 3.08 (WOA), 6.25 (GWO) and 6.42 (SSA). The priority order is IMFO, PSO, WOA, BA, SHO, GWO, SSA, SCA and MFO. Here, the p-value is less than 0.05; this indicates that there are significant differences among the nine algorithms.
In this section, eight comparison algorithms were selected and run 30 times each, and the resulting means and variances on the 23 test functions were compared with those of the improved moth-flame optimization algorithm. According to the experimental data, the IMFO algorithm has good stability in solving the function values. In addition, 12 of the 23 test functions were randomly selected for convergence analysis in this paper; as can be seen from the convergence trends in Figure 5, the IMFO algorithm has a better convergence speed and relatively high convergence accuracy on the whole. Finally, both the Wilcoxon rank sum test and the Friedman test indicate that the improved IMFO algorithm performs well. In Section 5, the improved moth-flame optimization algorithm is applied to four engineering problems, namely, the pressure vessel, compression/tension spring, welded beam and truss design problems, to illustrate the application of the IMFO algorithm in practice.