An Improved Moth-Flame Optimization Algorithm for Engineering Problems

In this paper, an improved moth-flame optimization algorithm (IMFO) is presented to solve engineering problems. Two effective strategies, Lévy flight and dimension-by-dimension evaluation, are introduced into the moth-flame optimization algorithm (MFO) to strengthen its global exploration ability and to balance global and local search effectively. The Lévy flight search strategy is used as a regulator of the moth-position update mechanism in the global search, maintaining good population diversity and expanding the algorithm's global search capability, while the dimension-by-dimension evaluation mechanism improves the quality of the solution and balances the global search and local exploitation capabilities. To substantiate the efficacy of the enhanced algorithm, it is tested on a set of 23 benchmark functions and then used to solve four classical engineering design problems, with promising results. On the test functions, the experimental results and analysis show that the proposed method outperforms other well-known nature-inspired algorithms in convergence speed and accuracy. Additionally, the results on the engineering problems demonstrate the merits of this algorithm in solving challenging problems with constrained and unknown search spaces.


Introduction
The moth-flame optimization algorithm (MFO) [1] was proposed by Mirjalili in 2015 and is one of the recent algorithms that has gained extensive attention. This intelligent algorithm is based on the behavior of moths navigating around flames in nature; its biological principle is the moth's night-flight mechanism, in which the moth updates its position by spiraling around the flame. The MFO algorithm has the advantages of few parameters, ease of understanding and implementation, and fast convergence. Nevertheless, the literature shows that there is still room for improvement, and in the past two years many scholars have tried to improve the algorithm in different aspects. The algorithm is also widely used in physics, medicine, economics and other fields, and many scholars have reported applications of the algorithm to practical problems in different domains.
The moth-flame optimization algorithm is widely used in the field of physics; the related research is as follows. Hitarth Buch [2] proposed an improved adaptive moth-flame optimization algorithm, which was effectively used to solve the optimal power flow problem. Avishek Das [3] used the


Basic Model of Moth-Flame Optimization Algorithm
In the MFO algorithm, each moth is assumed to be a candidate solution to the problem, and the variables to be solved are the moth's position in space. By changing their position vectors, moths can fly in one, two, three or even higher dimensions. Since the MFO algorithm is essentially a swarm intelligence optimization algorithm, the moth population can be represented by the following matrix: where n represents the number of moths and d represents the number of control variables to be solved (the dimension of the optimization problem). For these moths, it is also assumed that there is a corresponding vector of fitness values, represented as follows: In the MFO algorithm, each moth updates its own position only with the unique flame corresponding to it, which helps the algorithm avoid falling into local optima and greatly enhances its global search ability. Therefore, the positions of the flames and moths in the search space are variable matrices of the same dimensions.
For these flames, it is also assumed that there exists a corresponding column vector of fitness values, represented as follows: During the iteration process, the update strategies of the variables in the two matrices differ. Moths are the search individuals that move within the search space, while each flame is the best position that the moths have achieved so far during the iterative optimization. Each moth circles a corresponding flame, and when a better solution is found, it becomes the location of that flame in the next generation. With this mechanism, the algorithm is able to find the global optimal solution.
In order to model the flight behavior of a moth toward a flame mathematically, the position-update mechanism of each moth relative to a flame can be expressed by the following equation: where Mi represents the ith moth, Fj represents the jth flame and S represents the spiral function. This function satisfies the following conditions: (1) the initial point of the spiral is the moth's initial position in space; (2) the end point of the spiral is the position of the current flame; (3) the fluctuation range of the spiral does not exceed the search space.
According to the above conditions, the spiral function of the moth flight path is defined as follows:

S(Mi, Fj) = Di · e^(bt) · cos(2πt) + Fj (6)

t = (a − 1) · rand + 1 (7)

a = −1 + l · (−1 / Tmax) (8)

where Di is the linear distance between the ith moth and the jth flame, b is a constant defining the shape of the logarithmic spiral, rand is a random number in [0, 1] and t is the path coefficient. The magnitude of t is given by Formula (7), where a is given by Formula (8) and decreases linearly from −1 to −2 with the iteration number l, so that t is a random number in [a, 1]. The expression for Di is as follows:

Di = |Fj − Mi| (9)

Formula (6) simulates the path of the moth's spiral flight. From this equation, it can be seen that the next position of a moth is determined by the flame it circles in the current generation. The path coefficient t represents how close the moth's next position is to the flame: t = −1 gives the closest position to the flame, and t = 1 the farthest. The spiral equation shows that moths can fly around the flame rather than just in the space between moth and flame, thus balancing the algorithm's global search and local exploitation capabilities.
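As a concrete illustration, the spiral update of Equations (6)-(9) can be sketched in Python. This is a minimal sketch, not the authors' implementation; the function name, the vectorized form and the random generator are our assumptions:

```python
import numpy as np

def spiral_update(moth, flame, b=1.0, a=-1.0, rng=np.random.default_rng()):
    """One spiral-flight position update, following Equations (6)-(9).

    moth, flame: 1-D position vectors M_i and F_j.
    b: logarithmic-spiral shape constant.
    a: lies in [-2, -1] and decreases linearly over the iterations (Equation (8)).
    """
    d = np.abs(flame - moth)                        # D_i = |F_j - M_i|  (Equation (9))
    t = (a - 1.0) * rng.random(moth.shape) + 1.0    # path coefficient t in [a, 1] (Equation (7))
    return d * np.exp(b * t) * np.cos(2.0 * np.pi * t) + flame  # Equation (6)
```

Note that when the moth already sits on the flame, Di is zero and the spiral term vanishes, so the update returns the flame position itself.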
When this model is adopted, it has the following characteristics: (1) by randomly selecting the parameter t, a moth can converge to any neighborhood of its flame; (2) the smaller the value of t, the closer the moth is to the flame; (3) as the moth gets closer to the flame, it updates its position around the flame more and more rapidly.
The above flame position update mechanism ensures the local exploitation ability of the moth around the flame. To improve the chances of finding a better solution, the best solutions found in the current generation are used as the flame locations for the next generation of moths. Therefore, the flame position matrix F always contains the best solutions found so far, and in the optimization process each moth updates its position according to the matrix F. The path coefficient t in the MFO algorithm is a random number in [r, 1], where r decreases linearly from −1 to −2 over the optimization iterations, so the moth approaches its corresponding flame more and more precisely as the iterations progress. After each iteration, the flames are reordered based on their fitness values; in the next generation, each moth updates its position according to the flame corresponding to it in the updated sequence. The first moth always updates its position relative to the flame with the best fitness value, and the last moth relative to the flame with the worst fitness value in the list.
If each of the n moths updated its position based on n different locations in the search space, the local exploitation capability of the algorithm would be reduced. In order to solve this problem, an adaptive mechanism is proposed for the number of flames, so that the number of flames decreases adaptively over the iterations, thus balancing the algorithm's global search and local exploitation capabilities in the search space. The formula is as follows:

flame_no = round(N − l · (N − 1) / T) (10)

where l is the current iteration number, N is the initial maximum number of flames and T is the maximum number of iterations. As the number of flames decreases, each moth whose flame has been removed from the sequence updates its position according to the flame with the worst current fitness value.
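The adaptive flame count can be sketched directly from this formula (a minimal Python sketch; the function name is ours):

```python
def flame_number(l, N, T):
    """Adaptive flame count: decreases roughly linearly from N to 1
    over T iterations, where l is the current iteration number."""
    return round(N - l * (N - 1) / T)
```

For example, with N = 30 flames and T = 1000 iterations, the count starts at 30 and reaches 1 at the final iteration, so that late in the search all moths exploit around the few best flames.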

An Improved Moth-Flame Optimization Algorithm (IMFO)
In order to improve the global exploration ability of the original moth-flame optimization algorithm, adding the Lévy flight mechanism to the global exploration flight path of the moth can effectively expand the search space of the moth and improve the global search ability of the moth. While improving the global search ability of the algorithm, the local development ability of the algorithm is also improved by means of the update strategy based on greedy reservation. By combining the two strategies, the global search and local development capabilities of the algorithm can be effectively balanced, and the original algorithm can be improved.

Lévy Flight
Lévy flight [27] is a Markov process proposed by the famous French mathematician Paul Pierre Lévy. A random walk is a mathematical statistical model consisting of a series of random steps, used to represent irregular patterns of change. Lévy flight is a typical random walk mechanism representing a class of non-Gaussian stochastic processes related to the Lévy stable distribution; its stationary increments obey the Lévy stable distribution. Lévy flight is characterized by many small steps interrupted by occasional large ones, so that a moving entity does not repeatedly search the same region. Although its direction of motion is random, its step lengths follow a heavy-tailed power-law distribution. Combining the moth-flame optimization algorithm with the Lévy flight strategy can expand the search range of the algorithm, increase the diversity of the population and make it easier for the algorithm to jump out of local optima.
For example, bats with Lévy flight behavior [27] are more likely to find food. Lévy flight has been successfully applied in the optimization field, with satisfactory results. In the global update of the moth positions, the Lévy flight mechanism is added to expand the search scope of the algorithm, making it harder for the algorithm to fall into local optima. The improved position-update Formula (11) combines the spiral flight of Equation (6) with a Lévy flight step. Here, t is the current iteration number, Mi is the ith moth, Fj is the jth flame, and Di is the distance between the ith moth and the jth flame. When the moth updates its position along the spiral flight, the added Lévy flight mechanism expands the search range of the moth and prevents it from falling into local optima. The Lévy step is computed as follows [28]:

Lévy = r1 · δ / |r2|^(1/ϕ) (12)

where r1 and r2 are random numbers in [0, 1], ϕ is a constant equal to 1.5, and δ is given by:

δ = { Γ(1 + ϕ) · sin(πϕ/2) / [ Γ((1 + ϕ)/2) · ϕ · 2^((ϕ−1)/2) ] }^(1/ϕ) (13)

where Γ(x + 1) = x!. To illustrate the mechanism, three Lévy flight paths of 200 consecutive steps each, drawn on the plane with a random number generator, are shown in Figure 2a-c; the parameters are ϕ = 1.5, 200 steps and 10 dimensions. The three paths differ considerably, which demonstrates that Lévy flight is a random walk mechanism in which occasional long jumps follow runs of short steps. In Lévy flights, exploratory local searches over short distances are interleaved with occasional longer walks, ensuring that the system does not become trapped in local optima and maximizing search efficiency in uncertain environments.
The strategy ensures that the improved moth-flame optimization algorithm avoids falling into local optimality.
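The Lévy step above can be sketched in Python. This is an illustrative sketch of Mantegna's method, which Equations (12) and (13) follow; note that although the text describes r1 and r2 as uniform in [0, 1], Mantegna's standard construction draws them from Gaussian distributions, which is the assumption made here:

```python
import math
import numpy as np

def levy_step(dim, phi=1.5, rng=np.random.default_rng()):
    """Mantegna-style Lévy step vector of length `dim`, stability index phi = 1.5.

    The scale factor delta follows Equation (13); the heavy-tailed step
    follows Equation (12) with Gaussian draws (an assumption, see lead-in).
    """
    num = math.gamma(1 + phi) * math.sin(math.pi * phi / 2)
    den = math.gamma((1 + phi) / 2) * phi * 2 ** ((phi - 1) / 2)
    delta = (num / den) ** (1 / phi)      # Equation (13)
    r1 = rng.standard_normal(dim) * delta  # numerator draw
    r2 = rng.standard_normal(dim)          # denominator draw
    return r1 / np.abs(r2) ** (1 / phi)    # Equation (12): heavy-tailed step
```

Plotting the cumulative sum of such steps reproduces the characteristic Lévy paths of Figure 2: long runs of short moves punctuated by occasional large jumps.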

Dimension-By-Dimension Evaluation Strategy
In the standard MFO algorithm, a full-dimension update evaluation strategy is adopted; that is, after all dimensions have been updated, the new solution is evaluated according to its objective function value. As noted in [29], this method to some extent obscures the information carried by individual evolving dimensions, wastes evaluations and slows the convergence of the solution. Moreover, such an updating mode causes mutual interference between dimensions, which degrades the convergence speed and optimization accuracy of the algorithm. Suppose the objective function is f(x) = x1² + x2² + x3², with global optimal solution xopt = (0, 0, 0) and optimal value f(x) = 0. Suppose the ith solution in the kth generation is xk,i = (0.5, 0, 0.5), with objective function value f(xk,i) = 0.5. During the iteration, assume that according to Formula (5) xk,i is updated as a whole to (0, −1, 0), giving the objective function value f(xk+1,i) = 1. According to the evaluation mechanism, f(xk+1,i) = 1 > 0.5; the updated solution is worse than the original, so the algorithm retains the original solution and discards the update. However, the first dimension of the updated solution improved from 0.5 to 0 and the third dimension improved from 0.5 to 0; because the second dimension degraded from 0 to −1, the full-dimension evaluation strategy discards the whole updated solution. This evaluation mechanism therefore wastes evaluations and slows the convergence rate.
The improved dimension-by-dimension update strategy can prevent the above problems. The improved moth-flame optimization algorithm, based on greedy retention with a dimension-by-dimension evaluation strategy, considers the information update of each dimension. The idea of this strategy is as follows: the value of one dimension is updated and combined with the values of the other dimensions to form a new solution; the new solution is then evaluated according to the objective function. If the quality of the current solution is improved, the update of this dimension is retained; otherwise, the updated value of the current dimension is discarded, the previous dimension information is kept, and the next dimension is updated. This greedy retention procedure continues until every dimension has been updated. For example, suppose that in the iteration process the value of the first dimension is updated to 0, giving the solution (0, 0, 0.5) with objective function value f(xk+1,i) = 0.25 < 0.5. The algorithm then retains the update of the first dimension and proceeds to the next dimension. If instead the updated value of the first dimension were 1, the updated solution would be (1, 0, 0.5), with objective function value f(xk+1,i) = 1.25 > 0.5, and the dimension-by-dimension evaluation mechanism would discard the update of this dimension and carry out the update of the next dimension. This updating mechanism prevents wasted evaluations and improves the convergence speed of the algorithm.
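The strategy can be sketched in Python together with the worked example above (function and helper names are ours; this is an illustrative sketch, not the authors' code):

```python
import numpy as np

def greedy_dimwise_update(x, candidate, f):
    """Dimension-by-dimension evaluation with greedy retention.

    x: current solution; candidate: fully updated trial solution;
    f: objective function to minimize. Each dimension of `candidate`
    is accepted only if it improves f; otherwise the old value is kept.
    """
    best = x.copy()
    best_val = f(best)
    for j in range(len(x)):
        trial = best.copy()
        trial[j] = candidate[j]
        val = f(trial)
        if val < best_val:        # greedy retention of the improving dimension
            best, best_val = trial, val
    return best, best_val

# The worked example from the text: f(x) = x1^2 + x2^2 + x3^2,
# current solution (0.5, 0, 0.5), whole-solution trial update (0, -1, 0).
f = lambda x: float(np.sum(x ** 2))
x = np.array([0.5, 0.0, 0.5])
cand = np.array([0.0, -1.0, 0.0])
best, val = greedy_dimwise_update(x, cand, f)
# dimensions 1 and 3 improve and are kept; dimension 2 (0 -> -1) is rejected
```

Whereas the full-dimension evaluation would discard the trial entirely (f = 1 > 0.5), the dimension-wise pass keeps the two improving coordinates and reaches (0, 0, 0) with f = 0.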
In order to demonstrate the effectiveness of dimension-by-dimension evaluation, the F2 test function shown in Figure 3a is randomly selected in this paper. The formula of this function is

F2(x) = Σi |xi| + Πi |xi|

The function is defined on [−10, 10], and its theoretical optimal value is 0. It is a continuous unimodal function, mostly used to examine the optimization accuracy of an algorithm. With the dimension of F2 set to 100, curve date1 in Figure 3b shows the convergence trend of the standard moth-flame optimization algorithm on F2, and curve date2 shows the convergence trend after adding dimension-by-dimension evaluation to the standard algorithm. It can be seen that dimension-by-dimension evaluation makes the function converge faster.
The dimension-by-dimension update strategy based on greedy reservation pays attention to the evolutionary dimensions of the solution. Eliminating the influence of degenerate dimensions on the solution saves the evaluations wasted by whole-solution random updating, effectively suppresses interference between dimensions, improves the convergence rate of the algorithm and strengthens its local exploitation ability.
The pseudo-code of the improved algorithm updates a according to Equation (8) and then, for each moth i (i = 1 : size(Moth_pos, 1)) and each dimension j (j = 1 : size(Moth_pos, 2)), if i ≤ flame_no, updates Di according to Equation (9) before applying the spiral and Lévy position update. The flowchart of the improved algorithm is given in Figure 4. It can be seen that the improvements are as follows: first, the Lévy flight mechanism is added to the global search, which expands the global search capability of the algorithm and prevents it from falling into local optima. Secondly, adding the dimension-by-dimension evaluation mechanism effectively balances global exploration and local exploitation, reduces the number of evaluations and improves the running efficiency of the algorithm. On the 23 test functions, the accuracy of the algorithm is improved and convergence is faster.
In solving engineering problems, better solutions can be obtained than with other algorithms.
The flowchart clearly shows the procedure of the improved moth-flame algorithm and its improvements over the original moth-flame algorithm. The Lévy flight and dimension-by-dimension evaluation mechanisms balance the global search capability and local exploitation ability of the algorithm, so that it achieves better results in solving engineering problems.
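To show how the pieces fit together, the main loop of the improved algorithm can be sketched roughly in Python. This is an illustrative sketch under our own assumptions (the Lévy scaling factor 0.01, the boundary clipping, the handling of surplus moths and all names are ours), not the authors' reference implementation:

```python
import math
import numpy as np

def imfo(f, dim, lb, ub, n=30, T=200, b=1.0, phi=1.5, seed=0):
    """Rough sketch of the improved MFO loop: Lévy-perturbed spiral flight
    plus dimension-by-dimension greedy evaluation. Illustrative only."""
    rng = np.random.default_rng(seed)
    moths = rng.uniform(lb, ub, (n, dim))
    fit = np.apply_along_axis(f, 1, moths)
    # Mantegna-style Lévy scale factor (Equation (13))
    delta = (math.gamma(1 + phi) * math.sin(math.pi * phi / 2)
             / (math.gamma((1 + phi) / 2) * phi * 2 ** ((phi - 1) / 2))) ** (1 / phi)
    for l in range(1, T + 1):
        order = np.argsort(fit)
        flames = moths[order].copy()                 # flames sorted by fitness
        flame_no = max(1, round(n - l * (n - 1) / T))  # adaptive flame count (Equation (10))
        a = -1.0 - l / T                             # a decreases from -1 toward -2
        for i in range(n):
            j = min(i, flame_no - 1)   # surplus moths use the worst retained flame
            d = np.abs(flames[j] - moths[i])
            t = (a - 1.0) * rng.random(dim) + 1.0
            levy = (rng.standard_normal(dim) * delta
                    / np.abs(rng.standard_normal(dim)) ** (1 / phi))
            cand = np.clip(d * np.exp(b * t) * np.cos(2 * np.pi * t)
                           + flames[j] + 0.01 * levy, lb, ub)
            # dimension-by-dimension greedy evaluation of the candidate
            for k in range(dim):
                trial = moths[i].copy()
                trial[k] = cand[k]
                v = f(trial)
                if v < fit[i]:
                    moths[i], fit[i] = trial, v
    best = int(np.argmin(fit))
    return moths[best], fit[best]
```

Because each dimension update is accepted only when it improves the objective, every moth's fitness is monotonically non-increasing, which is the source of the faster convergence reported above.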

Experimental Studies and Comparisons
In this section, we describe the basic expressions, upper and lower bounds and theoretical optimal values of the test functions. Eight comparison algorithms were then selected and their parameters set. The performance of the nine algorithms (IMFO and the eight comparison algorithms) on 13 variable-dimension functions and 10 fixed-dimension functions was compared through experimental data; the 13 variable-dimension functions were tested in dimensions 10, 30 and 100, respectively. The convergence accuracy of the algorithms is compared from different aspects, and finally the convergence of the algorithms is tested and compared.

Test Function and Experimental Parameter Setting
In the experiment, 23 benchmark functions were selected according to [30] and [31]; the first 13 are variable-dimension functions and the last 10 are fixed-dimension functions. This complete set of 23 benchmark functions (unimodal and multi-modal) is used to evaluate the performance of the algorithms. The definitions, function images, upper and lower bounds, dimension settings and minima of the benchmark functions are shown in Tables 1 and 2. Functions F1 to F7 are unimodal optimization problems with a single extreme point in the given search area; they are used to study the convergence rate and optimization accuracy of the algorithms. F8-F23 are multi-modal optimization functions with multiple local extrema in the given search area; they are used to evaluate the ability to escape local optima and find the global optimum. In addition, the dimensions d = 10, d = 30 and d = 100 are set for the first 13 functions in this paper to test the different algorithms.
Meanwhile, eight comparison algorithms were selected: the original moth-flame optimization algorithm (MFO), sine cosine algorithm (SCA), bat algorithm (BA), spotted hyena optimizer (SHO), particle swarm optimization (PSO), whale optimization algorithm (WOA), grey wolf optimizer (GWO) and salp swarm algorithm (SSA). The parameter settings of these comparison algorithms are shown in Table 3. In addition to the parameters in Table 3, the population size of each algorithm was set to 30 and the number of iterations to 1000. The experiments were conducted on a 64-bit Windows 7 operating system with MATLAB 2014a; the processor was an Intel(R) Core(TM) i5-5200U CPU @ 2.20 GHz with 4.00 GB of RAM. Table 1 shows the expressions, images and upper and lower bounds of the 13 benchmark functions, where L represents the lower bound, U the upper bound, D the dimension and fmin the theoretical optimal value. Table 2 shows the ten fixed-dimension test functions. Table 3 shows the parameter settings of the different algorithms used in the experiments.

Comparison with Other Algorithms
In order to verify the effectiveness of the IMFO algorithm, eight different algorithms are used to test the mathematical benchmark functions in this section. The results of the different algorithms on the functions are shown in Table 4, where N/A means that the algorithm is not suitable for solving that function. Table 4 compares the means and standard deviations of the nine algorithms when the dimension is 10; the best optimal solution obtained is highlighted in bold font. As the results in Table 4 show, the improved moth-flame optimization algorithm is better than the other algorithms in both the average (Ave) and standard deviation (Std) values for the benchmark functions F1, F3, F4, F7, F9, F10, F11, F15, F16, F17, F18 and F19. In addition, the improved IMFO algorithm obtains the optimal standard deviations (Std) for the benchmark functions F5, F21, F22 and F23, and the best average for F8. Particle swarm optimization (PSO) ranks second in performance on F6, F12 and F13. The spotted hyena optimizer (SHO) obtains the best value on function F2 and the minimum average on function F16. The salp swarm algorithm (SSA) obtains the best value on function F14. BA obtains the minimum average on function F5, GWO obtains the minimum averages on functions F17, F21, F22 and F23, and SCA has the best stability in solving function F8.
The results indicate the superiority of the proposed algorithm. Analysis of the averages and standard deviations reveals that the proposed IMFO algorithm is competitive with the compared algorithms. The comparison results for the selected unimodal functions (F1-F7) are shown in Table 4. Note that, except for F2, F5 and F6, IMFO is better than the eight compared algorithms on all the other test functions; in particular, a large improvement is achieved on the benchmark functions F1 and F3. These results verify that IMFO has excellent optimization accuracy on functions with a single global minimum. For the multi-modal functions (F8-F23), IMFO also performs better than the other algorithms, finding superior average results for the test functions F8, F9, F10, F11, F15, F16, F17, F18, F19 and F20. These results indicate that the improved IMFO algorithm has a good ability to escape local optima and find global optima.
It is obvious that IMFO solves these 23 benchmark functions better than the other algorithms. At the same time, it can be seen from Table 4 that the variance of the IMFO algorithm on 16 test functions is the minimum among the nine compared algorithms, indicating that the improved moth-flame optimization algorithm has good robustness. Table 5 compares the experimental data of the improved algorithm and the other algorithms on the first 13 functions when d = 30.
It can be seen that, compared with the other algorithms, the IMFO algorithm obtains the minimum values for the functions F3, F4, F9, F10, F11 and F12. The WOA algorithm is the best in solving function F1, and SCA is the most stable in solving the function F8. When d = 30, SHO can still obtain the minimum value for function F2, and the GWO algorithm performs best on the function F7. The PSO algorithm obtains the minimum value when solving F13 and the minimum average value when solving F6. Finally, the BA algorithm obtains the minimum average value when solving F5 and has the best robustness and the most stable result when solving F6. Therefore, when d = 30, it can be seen from the results of the nine algorithms on the 13 test functions that the performance of IMFO is still the best overall. Table 6 compares the experimental data of the improved algorithm and the other algorithms on the first 13 functions when d = 100.
Compared with the other algorithms, the IMFO algorithm obtains the minimum values for the functions F3, F4, F7, F9, F10 and F11 with a significant margin. At the same time, when solving the functions F6, F8 and F13, it obtains the minimum variance, which indicates that the IMFO algorithm has good stability and strong robustness in solving high-dimensional problems. Compared with the d = 10 and d = 30 cases, the WOA algorithm now shows obvious advantages: it has the best performance in solving the functions F1, F2 and F5, and it obtains the minimum average when solving the functions F8, F12 and F13. The BA algorithm obtains the minimum average value when solving F6 and the best stability when solving F12. However, as the dimension increases, MFO, SCA, BA, SHO, PSO, GWO and SSA produce poor results. The improved moth-flame optimization algorithm and the whale optimization algorithm produce better results, but the improved moth-flame optimization algorithm is still the best on the whole.

Convergence Test
A convergence test consists of drawing convergence curves for different algorithms running different test functions, through which their convergence speed and accuracy can be compared. In order to further study the performance of the improved algorithm, this section tests the convergence of the nine algorithms on the 13 functions F1 to F13 with dimension d = 10, a maximum of 1000 iterations and a population size of 30, and analyzes the convergence speed and calculation accuracy of each algorithm.
To further illustrate the advantages of the improved algorithm, the convergence behavior is shown in Figure 5. The convergence curves verify that the proposed IMFO converges faster than the other algorithms, and they show that the improvements based on Lévy flight and dimension-by-dimension evaluation effectively improve the convergence trend of the original algorithm. Twelve functions are selected to demonstrate the convergence test. It can be seen from panels (a) to (l) that the improved moth-flame optimization algorithm converges better on these functions than the original moth-flame optimization algorithm. Compared with the other algorithms, the improved moth-flame optimization algorithm has a faster convergence speed and higher calculation accuracy on the 10 functions F1 (a), F2 (b), F3 (c), F4 (d), F5 (e), F7 (f), F9 (g), F10 (h), F11 (i) and F15 (j). On the two functions F16 (k) and F18 (l), there is not much difference in the convergence trends of the algorithms. The improved moth-flame optimization algorithm converges the most quickly and with the highest precision on the functions F1 (a), F3 (c), F4 (d) and F11 (i). On the function F2 (b), GWO, IMFO and WOA perform better than the other algorithms: GWO has the best convergence speed at the beginning, but its later convergence is slow, while IMFO and WOA converge faster than GWO and reach smaller values, with WOA slightly better than IMFO. On the functions F1 (a) and F4 (d), although the GWO algorithm converges quickly in the early stage, the IMFO algorithm converges the fastest in the later period, and the values it obtains on these two functions are the smallest among all the algorithms. On the functions F3 (c) and F11 (i), the IMFO algorithm has an absolute advantage over the other algorithms.
On the function F7 (f), the solution with SCA is slightly better than that with IMFO. The GWO algorithm achieves almost the same result as the IMFO algorithm when solving the function F9 (g). The IMFO algorithm ranks third in the result for the function F5 (e). The SHO algorithm has the best performance on the function F10 (h), with a better convergence speed than the IMFO algorithm, although both reach the same final value. On the function F15 (j), the WOA, GWO and IMFO algorithms perform almost identically. Generally, the IMFO algorithm has a faster convergence speed and higher convergence precision than the other algorithms.

Statistical Analysis
Derrac et al. [32] proposed that statistical tests should be carried out to evaluate the performance of improved evolutionary algorithms. In other words, comparing algorithms based on averages, standard deviations and convergence analysis alone is not enough; statistical tests are needed to verify that the proposed improved algorithm shows significant improvements and advantages over other existing algorithms. In this section, the Wilcoxon rank-sum test and Friedman test are used to verify that the proposed IMFO algorithm has significant improvements and advantages over the other comparison algorithms.
In order to determine whether each result of IMFO is statistically significantly different from the best results of the other algorithms, the Wilcoxon rank-sum test was used at the 5% significance level. The dimension d of the 13 functions F1 to F13 is 10, the maximum number of iterations is 1000, and the population size is 30. The p-values for the Wilcoxon rank-sum test on the benchmark functions were calculated from the results of 30 independent runs of each algorithm.
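The rank-sum computation behind this test can be sketched in a few lines. Below is a minimal stdlib sketch using the normal approximation (adequate for samples of 30 runs as used here), with midranks for ties; the function name is illustrative, and in practice a statistics library such as `scipy.stats.ranksums` would typically be used:

```python
import math
from itertools import chain

def rank_sum_p_value(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.

    Ties are handled with midranks; no tie correction is applied.
    """
    combined = sorted(chain(a, b))
    # Assign midranks: tied values share the average of their ranks.
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2  # average of ranks i+1..j
        i = j
    n1, n2 = len(a), len(b)
    w = sum(ranks[x] for x in a)        # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2         # mean of W under the null hypothesis
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
```

Two samples drawn from clearly different distributions yield a p-value far below 0.05, while comparing a sample against itself yields a p-value of 1.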
In Table 7, the p-values calculated in the Wilcoxon rank-sum tests between the best algorithm and each of the other algorithms are given for all the benchmark functions. For example, if the best algorithm is IMFO, paired comparisons are made between IMFO and MFO, IMFO and SCA, and so on. Because the best algorithm cannot be compared with itself, it is marked N/A ("not applicable") for each function. According to Derrac et al., p < 0.05 can be considered a strong indicator for rejecting the null hypothesis. According to the results in Table 7, the p-value of IMFO is generally less than 0.05. The p-value was greater than 0.05 only for F5, F8, F14 and F19 for IMFO vs. MFO; F14 for IMFO vs. SCA; F18 for IMFO vs. BA; F14 and F20 for IMFO vs. PSO; F8 and F9 for IMFO vs. WOA; and F5 and F15 for IMFO vs. GWO. This shows that the superiority of the algorithm is statistically significant; in other words, the IMFO algorithm has higher convergence precision than the other algorithms. Furthermore, to make the statistical test results more convincing, we performed the Friedman test on the average values calculated by the nine algorithms on the 23 benchmark functions in Section 4.3. The significance level was set to 0.05: when the p-value is less than 0.05, the algorithms can be considered statistically significantly different; when it is greater than 0.05, no statistically significant difference can be claimed.
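The Friedman test used here starts from the average rank of each algorithm across the benchmark functions. The following is a minimal sketch of the rank computation and the Friedman chi-square statistic (the function name and input layout are illustrative; the p-value, which requires the chi-square distribution, is omitted):

```python
def friedman_statistic(results):
    """Friedman chi-square statistic and average ranks.

    `results[i][j]` is the mean error of algorithm j on benchmark i
    (lower is better). Returns (chi2, average_ranks).
    """
    n = len(results)        # number of benchmark functions
    k = len(results[0])     # number of algorithms
    avg_ranks = [0.0] * k
    for row in results:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j < k and row[order[j]] == row[order[i]]:
                j += 1
            mid = (i + 1 + j) / 2       # midrank for tied values
            for t in range(i, j):
                ranks[order[t]] = mid
            i = j
        for j2 in range(k):
            avg_ranks[j2] += ranks[j2] / n
    chi2 = (12 * n / (k * (k + 1))) * sum(
        (r - (k + 1) / 2) ** 2 for r in avg_ranks)
    return chi2, avg_ranks
```

For instance, with three algorithms whose ranking is identical on every function, the average ranks are 1, 2 and 3 and the statistic reaches its maximum for that problem size.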
According to the calculation results in Table 8, the Friedman test likewise indicates that the IMFO algorithm performs best among the nine compared algorithms. In summary, in the fourth chapter, eight comparison algorithms were selected to conduct 30 experiments, and the means and variances on the 23 test functions were computed and compared with those of the improved moth-flame optimization algorithm. According to the experimental data, the IMFO algorithm has good stability in solving the function values. In addition, 12 of the 23 test functions were randomly selected for convergence analysis in this paper. As can be seen from the convergence trend graphs in Figure 5, the IMFO algorithm has a better convergence speed and relatively high convergence accuracy on the whole. Finally, both the Wilcoxon rank-sum test and the Friedman test indicate that the improved IMFO algorithm has good performance. In Chapter 5, the improved moth-flame optimization algorithm is applied to solve four engineering problems, namely, the pressure vessel, compression/tension spring, welded beam and three-bar truss design problems, to illustrate the application of the IMFO algorithm in practice.

IMFO for Engineering Problems
This subsection analyzes the performance and efficiency of IMFO by solving four constrained real engineering problems, namely, the pressure vessel problem, tension/compression spring problem, welded beam problem and three-bar truss problem. These problems involve several inequality constraints, which IMFO must handle during the optimization process. The number of solutions in all the experiments is 30, and the maximum number of iterations is 1000.

Pressure Vessel Design Problem
The purpose of the pressure vessel problem is to minimize the total cost of the cylindrical pressure vessel [33]. There are four design variables and four constraints based on the thickness of the shell (Ts), the thickness of the head (Th), the internal radius (R) and the length of the cylindrical section without considering the head (L). Figure 6 shows the pressure vessel and the parameters involved in the design, and the problem can be expressed in terms of these four variables and their ranges. In this section, the pressure vessel problem is solved by the IMFO algorithm, and the results are compared to those from MFO [1], OMFO [22], ES [31], CSDE [33], CPSO [34], LFD [35], WOA [36], MOSCA [37], RDWOA [38], CCMWOA [39], LWOA [40], GA [41], BFGSOLMFO [42] and the IMFO of [43]. Table 9 shows that IMFO is able to obtain the best solution for this problem; the proposed IMFO algorithm outperforms all the other algorithms, including MOSCA [37] and the IMFO variant of [43].
As can be seen from Table 9, when the four parameters Ts, Th, R and L are set as 0.7781948, 0.3846621, 40.32097 and 199.9812, respectively, the minimum value of IMFO is 5885.3778. Compared with the other methods, IMFO achieves better results in the pressure vessel design problem. Therefore, this shows that the algorithm is feasible and effective in solving engineering problems with constraints.
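The equations of the pressure vessel model are not reproduced in the extracted text above. The following sketch assumes the standard formulation from the literature (function names are illustrative) and checks the IMFO solution reported in Table 9 against it:

```python
import math

def pressure_vessel_cost(ts, th, r, l):
    """Total cost of the cylindrical pressure vessel (standard formulation)."""
    return (0.6224 * ts * r * l
            + 1.7781 * th * r ** 2
            + 3.1661 * ts ** 2 * l
            + 19.84 * ts ** 2 * r)

def pressure_vessel_constraints(ts, th, r, l):
    """Standard constraints; g_i <= 0 means feasible."""
    return [
        -ts + 0.0193 * r,                 # minimum shell thickness
        -th + 0.00954 * r,                # minimum head thickness
        -math.pi * r ** 2 * l
            - (4 / 3) * math.pi * r ** 3
            + 1296000,                    # minimum enclosed volume
        l - 240,                          # length limit
    ]
```

Evaluating the reported solution (0.7781948, 0.3846621, 40.32097, 199.9812) under this model yields a cost of about 5885.38 with all constraints satisfied, consistent with Table 9.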

Compression/Tension Spring Design Problem
The goal of this design problem is to minimize the weight of the compression/tension spring [27]. Figure 7 shows the spring and its parameters. In this problem, there are three variables and four constraints. The variables are the wire diameter (d), the mean coil diameter (D) and the number of active coils (N). The mathematical model for this problem is:

In this section, the compression/tension spring design problem is solved using the IMFO algorithm, and the results are compared to those from MFO [1], LAFBA [27], ES [31], CPSO [34], LFD [35], WOA [36], CCMWOA [39], LWOA [40], GA [41] and RO [44]. Table 10 shows the decision variables and constraint values of the best solutions obtained by the different methods. The results show that the IMFO algorithm can outperform all the other algorithms.
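Since the objective function and constraints were lost in the extraction above, here is a sketch assuming the standard compression/tension spring model from the literature (function names are illustrative), checked against the IMFO solution from Table 10:

```python
def spring_weight(d, D, N):
    """Weight of the compression/tension spring: (N + 2) * D * d^2."""
    return (N + 2) * D * d ** 2

def spring_constraints(d, D, N):
    """Standard constraints; g_i <= 0 means feasible."""
    return [
        1 - D ** 3 * N / (71785 * d ** 4),            # minimum deflection
        (4 * D ** 2 - d * D) / (12566 * (D ** 3 * d - d ** 4))
            + 1 / (5108 * d ** 2) - 1,                # shear stress
        1 - 140.45 * d / (D ** 2 * N),                # surge frequency
        (D + d) / 1.5 - 1,                            # outer diameter limit
    ]
```

Evaluating the reported solution (0.05159, 0.354337, 11.4301) under this model gives a weight of about 0.012666 with all constraints satisfied, consistent with Table 10.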

As can be seen from Table 10, when the three parameters d, D and N are set as 0.05159, 0.354337 and 11.4301, respectively, the minimum value of IMFO is 0.012666. In this comparison, IMFO outperforms all the other algorithms and equals CCMWOA [39] to the reported precision.

Table 10. Comparison of the best solutions for the compression/tension spring design problem.

Algorithm       d          D          N            Optimal cost
LAFBA [27]      0.051663   0.356074   11.333400    0.0126720
ES [31]         0.051989   0.363965   10.890522    0.0126810
CPSO [34]       0.051728   0.357644   11.244543    0.0126747
LFD [35]        0.0517     0.3575     11.2442      0.0127
WOA [36]        0.051207   0.345215   12.0043032   0.0126763
CCMWOA [39]     0.051843   0.360444   11.07410     0.0126660
LWOA [40]       0.051124   0.342922   12.16119     0.0126920
GA [41]         0.051480   0.351661   11.632201    0.0127048
RO [44]         0.051370   0.349096   11.762790    0.0126788
IMFO            0.05159    0.354337   11.4301      0.012666

Welded Beam Design Problem
The objective of this model is to determine the minimum manufacturing cost [36]. Figure 8 shows the welded beam and the parameters involved in the design. In this case, four variables affect the manufacturing cost: the height of the reinforcement (t), the thickness of the weld (h), the thickness of the reinforcement (b) and the length of the reinforcement (l). The optimal design must satisfy constraints on the shear stress (τ), the bending stress in the beam (θ), the deflection of the beam (δ) and the buckling load (Pc). The mathematical model of the problem can be described as:

Three-Bar Truss Design Problem
The design of a three-bar truss is a structural optimization problem in the field of civil engineering [41]. Figure 9 shows the three-bar truss and the parameters involved in the design. The objective is to minimize the volume of the three-bar truss subject to stress constraints on each truss member. The three-bar truss design problem has a difficult constrained search space, so it is used to evaluate the optimization power of the proposed algorithm. The formulation of this problem is as follows, with variable ranges 0 ≤ x1, x2 ≤ 1 and constants l = 100 cm, P = 2 kN/cm² and σ = 2 kN/cm²:
This problem has been solved by many researchers as a benchmark optimization problem in the literature. The three-bar truss design problem is solved using the IMFO algorithm, and the results are compared to those from SFO [45], m-SCA [46], WOA [36], Tsai [47] and MFO [1]. The results show that IMFO outperforms all the other algorithms except the method of Tsai [47].

Table 12 shows that IMFO is more competitive than the other algorithms, except Tsai [47], in terms of accuracy and function evaluation cost. The best objective function value for the three-bar truss design problem solved with the IMFO algorithm is 263.8959, with the two parameters x1 = 0.78899 and x2 = 0.40736. From this chapter, we can see that the improved moth-flame optimization algorithm is very effective in solving the above four engineering problems; in the solution of the pressure vessel problem in particular, the improvement is very obvious. Therefore, the improved algorithm has good optimization performance for solving constrained problems.
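As a check of these figures, the following sketch assumes the standard three-bar truss formulation from the literature (function names are illustrative) and evaluates the reported IMFO solution:

```python
import math

def truss_volume(x1, x2, l=100.0):
    """Volume of the three-bar truss: (2*sqrt(2)*x1 + x2) * l."""
    return (2 * math.sqrt(2) * x1 + x2) * l

def truss_constraints(x1, x2, P=2.0, sigma=2.0):
    """Stress constraints on the truss members; g_i <= 0 means feasible."""
    r2 = math.sqrt(2)
    denom = r2 * x1 ** 2 + 2 * x1 * x2
    return [
        (r2 * x1 + x2) / denom * P - sigma,   # stress in member 1
        x2 / denom * P - sigma,               # stress in member 2
        1 / (r2 * x2 + x1) * P - sigma,       # stress in member 3
    ]
```

Evaluating the reported solution (0.78899, 0.40736) under this model gives a volume of about 263.896 with all stress constraints satisfied, consistent with Table 12.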

Conclusions and Future Work
In this article, two strategies are used: Lévy flight and dimension-by-dimension evaluation. Lévy flight is added into the position-update mechanism of the moths during the global search. The Lévy flight mechanism plays the role of a random walk; applied to the global search, it helps the moths update their positions more effectively, expands their search range and improves the global search ability of the algorithm. In the overall operation of the algorithm, the dimension-by-dimension evaluation mechanism improves efficiency and balances the global search and local development. It evaluates the solution of each dimension obtained by the algorithm: if the updated value of a dimension yields a better solution than the previous one, it is retained and the next dimension is updated; otherwise, the update of that dimension is abandoned and the update moves on to the next dimension. By adding this mechanism, the evaluations of the solution are used effectively, avoiding unnecessary waste and improving the convergence speed of the algorithm. The improved algorithm obtains good results on the 23 benchmark functions; on the whole, it has a fast convergence speed and high convergence precision. The Wilcoxon rank-sum test and Friedman test on the 23 benchmark functions confirm that the proposed algorithm performs better than the other compared algorithms in terms of both the statistical results and the convergence rate.
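The two strategies summarized above can be sketched as follows. The Lévy step uses Mantegna's algorithm, a common choice for generating Lévy-distributed steps (the paper's exact step generator and scaling may differ), and the dimension-by-dimension evaluation is the greedy per-dimension acceptance rule just described; both function names are illustrative:

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy-distributed step via Mantegna's algorithm."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta
                  * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def dimension_by_dimension(solution, fitness, candidate):
    """Greedy per-dimension update: keep a dimension's new value only if it
    improves the fitness; otherwise revert and move to the next dimension."""
    best = list(solution)
    best_f = fitness(best)
    for d in range(len(best)):
        old = best[d]
        best[d] = candidate[d]
        f = fitness(best)
        if f < best_f:
            best_f = f          # accept the improved dimension
        else:
            best[d] = old       # revert this dimension's update
    return best, best_f
```

For example, on the sphere function, updating [1, 1, 1] toward the candidate [0, 2, 0] accepts the first and third dimensions but reverts the second, illustrating how the mechanism filters out harmful per-dimension moves.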
At the same time, the improved algorithm can obtain better results in solving four engineering problems, especially in the design of pressure vessels.
There are several suggestions as future directions for the improved moth-flame algorithm: (1) enhancing the theoretical research of the IMFO algorithm; (2) improving the mobility mechanism [48] based on the position of the current optimal moth to enable the moth to move with global orientation and expand the sharing of information between moths to improve the overall evolutionary optimization performance of the IMFO algorithm; (3) combining the improved moth-flame optimization algorithm with other algorithms such as the differential evolution (DE) algorithm [49] to further improve the performance of the IMFO algorithm; (4) investigating how to extend the IMFO algorithm to handle other constrained optimizations, multi-objective optimizations, and combinatorial optimization problems. The application of IMFO is desirable for solving more complex real-world problems.