Article

Multi-Strategy Enhanced Harris Hawks Optimization for Global Optimization and Deep Learning-Based Channel Estimation Problems

School of Information Engineering, Tianjin University of Commerce, Tianjin 300134, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(2), 390; https://doi.org/10.3390/math11020390
Submission received: 13 December 2022 / Revised: 2 January 2023 / Accepted: 10 January 2023 / Published: 11 January 2023
(This article belongs to the Section Mathematics and Computer Science)

Abstract

Harris Hawks Optimization (HHO) simulates the cooperative hunting behavior of Harris hawks and has the advantages of few control parameters, simple principles, and excellent exploitation ability. However, HHO also suffers from slow convergence and a tendency to fall into local optima. To address these shortcomings, this paper proposes a Multi-strategy Enhanced Harris Hawks Optimization (MEHHO). Firstly, the map-compass operator and a Cauchy mutation strategy are used to increase population diversity and improve the algorithm's ability to escape local optima. Secondly, a spiral motion strategy is introduced into the exploration phase to enhance search efficiency. Finally, the convergence speed and accuracy of the algorithm are improved by greedy selection, which fully retains dominant individuals. The global search capability of the proposed MEHHO is verified on 28 benchmark test functions, and the parameters of a deep learning network used for channel estimation are then optimized with the MEHHO to verify its practicability. Experimental results show that the proposed MEHHO is advantageous both for solving global optimization problems and for improving the accuracy of deep learning-based channel estimation.

1. Introduction

A meta-heuristic algorithm is a nature-inspired algorithm that draws on evolutionary rules, physical laws, and biological social behavior, and builds a mathematical model by combining stochastic search with local search. Inspired by different mechanisms, a variety of algorithms have been proposed. Classical algorithms such as the Genetic Algorithm (GA) [1] and the Differential Evolution (DE) algorithm [2] are typical algorithms based on biological evolutionary rules. The Simulated Annealing (SA) algorithm [3], Henry Gas Solubility Optimization (HGSO) [4], and Atom Search Optimization (ASO) [5] are algorithms based on physical laws. Particle Swarm Optimization (PSO) [6], Ant Colony Optimization (ACO) [7], the Artificial Bee Colony (ABC) algorithm [8], the Whale Optimization Algorithm (WOA) [9], and the Salp Swarm Algorithm (SSA) [10] are algorithms based on biological social behavior, also known as swarm intelligence algorithms. According to the No Free Lunch theorem, no single algorithm is suitable for solving all optimization problems. Therefore, researchers have proposed many novel meta-heuristic algorithms and improved classical algorithms to solve increasingly complex practical problems. New meta-heuristic algorithms include the Butterfly Optimization Algorithm (BOA) [11], Marine Predators Algorithm (MPA) [12], Mayfly Algorithm (MA) [13], Aquila Optimizer (AO) [14], Arithmetic Optimization Algorithm (AOA) [15], Sand Cat Swarm Optimization (SCSO) [16], Weighted Mean of Vectors (INFO) [17], Runge Kutta optimizer (RUN) [18], and so on. With its good adaptability, independent and efficient exploration mechanism, freedom from gradient information, and simple implementation, the meta-heuristic algorithm has been widely used in different fields and provides powerful tools for solving everyday optimization problems, such as fault diagnosis [19,20], feature selection [21,22], microchannel radiator design [23], recognition watermarking [24], medical detection [25,26,27], path planning [28,29], chart pattern recognition [30], and the Internet of Things [31,32,33].
The Harris Hawks Optimization (HHO) is a meta-heuristic algorithm proposed by A. Heidari et al. [34] based on the hunting behavior of Harris hawks. HHO introduced the concepts of escape energy, population center, Lévy flight, etc., and shows superior solving ability on benchmark functions and constrained engineering problems. In addition, HHO has the advantages of few control parameters and easy programming and implementation. Therefore, HHO has been widely used in many fields and has achieved good results. For example, HHO has been successfully applied to model parameter optimization in the photovoltaic field. A. Ramadan et al. [35] used an improved HHO to estimate the required parameters of different photovoltaic models and constructed those models with high precision. M. Naeijian et al. [36] used an improved HHO to find the optimal parameters of single-diode, double-diode, and triple-diode models. H. Chen et al. [37] proposed a diversification-enriched HHO to efficiently identify the parameters of photovoltaic cells. HHO has also been widely used to solve the multi-level image segmentation threshold optimization problem in the image field. For example, H. Jia et al. [38] improved HHO by introducing dynamic control parameters and mutation operators, and applied it to the optimal segmentation of multi-level satellite images. A. Wunnava et al. [39] proposed a differential evolution adaptive HHO for two-dimensional Masi entropy multi-level image threshold segmentation. E. R. Esparza et al. [40] used the minimum cross entropy as a fitness function to propose an HHO-based optimal threshold method for multi-level segmentation, which they tested on medical images. In addition to the above fields, HHO has been successfully applied in wireless sensor networks (WSN) to solve optimal parameter problems. For example, M. Srinivas et al. [41] proposed an energy-saving optimization method based on improved HHO to extend the lifetime of WSNs. S. J. Bhat et al. [42] chose the area as the fitness function to reduce the search area and used their HHO-AM to solve for the optimal parameters, improving the positioning accuracy of WSNs.
Similar to other meta-heuristic algorithms, HHO also has certain disadvantages, such as slow convergence due to low search efficiency in the exploration stage, and a lack of population diversity that leads to falling into local optima. Because of these shortcomings, scholars have put forward several improvements. To address the limited exploration behavior in the exploration stage, A. Kaveh et al. [43] improved the exploration performance of the algorithm and accelerated its convergence by hybridizing it with the imperialist competitive algorithm, exploiting the latter's excellent space exploration ability. A. Dehkordi et al. [44] used nine chaotic maps with different mathematical equations to enhance population diversity and improve the exploration behavior of the algorithm. S. Gupta et al. [45] balanced the exploration and exploitation stages of the algorithm by introducing nonlinear energy parameters to update the energy factor. To improve population diversity, Q. Fan et al. [46] proposed a quasi-reflected HHO, which introduces a quasi-reflection learning mechanism to increase the diversity of the population and thus improve the convergence accuracy of the algorithm. To improve the quality of candidate solutions during the global search, C. Liu [47] designed an improved algorithm that updates positions according to the best individuals in the population during global exploration, rather than searching aimlessly. To address the problem of falling into local optima, L. Abualigah et al. [48] proposed two new search methods, which use the sine function and cosine function, respectively, to improve the convergence speed and the ability of the original algorithm to escape local optima.
In view of the shortcomings of HHO in the exploration stage, such as slow convergence and stagnation at sub-optimal solutions, this paper makes the following contributions: (1) Spiral motion is introduced into the exploration stage to better simulate the hunting behavior of Harris hawks and improve exploration efficiency. (2) The map-compass operator and Cauchy mutation are used to enhance population diversity, fully search the area near the optimal solution, and strengthen the algorithm's ability to escape local optima, alleviating stagnation at sub-optimal solutions. (3) Greedy selection retains better individuals, reducing unnecessary consumption in the search process and further accelerating early convergence.
The rest of the paper is organized as follows. Section 2 introduces the basic concept, principles, and implementation process of HHO. Section 3 shows the details of the proposed improvement strategies based on the different shortcomings. Section 4 validates the performance of the proposed Multi-strategy Enhanced Harris Hawks Optimization (MEHHO) through several experiments, and presents the results and discussion. Section 5 performs parameter optimization of the deep learning network used for channel estimation using the MEHHO. Section 6 summarizes the whole paper and puts forward the prospect of future research work.

2. Harris Hawks Optimization

In this section, the original HHO is described; it is modeled on the hunting behavior of Harris hawks. HHO simulates their cooperative hunting with multiple mechanisms in two stages, the exploration stage and the exploitation stage. The parameter $E$ denotes the escape energy of the prey, and HHO transitions from the exploration phase to the exploitation phase according to $E$, as shown in Equation (1):
$$E = 2E_0\left(1 - \frac{t}{T}\right) \tag{1}$$
where $E_0$ is the initial energy state of the prey, which varies randomly within $(-1, 1)$ at each iteration and is computed as $E_0 = 2\,\mathrm{rand} - 1$, where $\mathrm{rand}$ is a random number in $(0, 1)$; $T$ and $t$ are the maximum and the current number of iterations, respectively.
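As a concrete illustration, Equation (1) can be sketched in a few lines of Python; the function name and the use of NumPy's random generator are choices of this sketch, not part of the original formulation:

```python
import numpy as np

def escape_energy(t: int, T: int, rng: np.random.Generator) -> float:
    """Escape energy E of Equation (1)."""
    E0 = 2.0 * rng.random() - 1.0      # initial energy E0 = 2*rand - 1, in (-1, 1)
    return 2.0 * E0 * (1.0 - t / T)    # |E| decays linearly over the iterations
```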

2.1. Exploration Phase

When $|E| \ge 1$, hawks are in the exploration phase. They may perch randomly in tall trees in search of prey, or they may stalk and monitor prey with their companions. A position update is chosen between the following two strategies with equal probability:
$$X_{t+1}^{i,j} = \begin{cases} X_t^{rand,j} - r_1 \left| X_t^{rand,j} - 2 r_2 X_t^{i,j} \right|, & q \ge 0.5 \\ \left( X_t^{prey,j} - X_t^{av,j} \right) - r_3 \left( LB_j + r_4 \left( UB_j - LB_j \right) \right), & q < 0.5 \end{cases} \tag{2}$$
where $X_t^{i,j}$ and $X_{t+1}^{i,j}$ are the current position of the $i$th hawk in the $j$th dimension and its new position in the next iteration, respectively; $i \in [1, N_{pop}]$ and $j \in [1, Dim]$, where $N_{pop}$ is the total number of hawks and $Dim$ is the dimension of the problem; $X_t^{rand,j}$ and $X_t^{prey,j}$ are the position of a randomly selected hawk and the position of the prey in the $j$th dimension, respectively; $r_1$, $r_2$, $r_3$, $r_4$, and $q$ are five different random numbers in the range $(0, 1)$; $LB_j$ and $UB_j$ are the lower and upper bounds of the search space in the $j$th dimension; and $X_t^{av,j}$ is the current average position of the hawks in the $j$th dimension:
$$X_t^{av,j} = \frac{1}{N_{pop}} \sum_{i=1}^{N_{pop}} X_t^{i,j} \tag{3}$$
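A minimal sketch of the exploration update in Equations (2) and (3) might look as follows; for simplicity the random numbers are drawn once per update rather than per dimension, which is an assumption of this sketch:

```python
import numpy as np

def explore(X, i, X_prey, lb, ub, rng):
    """One candidate position for hawk i from Equations (2)-(3)."""
    n_pop = X.shape[0]
    X_rand = X[rng.integers(n_pop)]    # randomly selected hawk
    X_av = X.mean(axis=0)              # average position, Equation (3)
    r1, r2, r3, r4, q = rng.random(5)
    if q >= 0.5:                       # perch relative to a random hawk
        return X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
    # otherwise perch relative to the prey and the population average
    return (X_prey - X_av) - r3 * (lb + r4 * (ub - lb))
```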

2.2. Exploitation Phase

When $|E| < 1$, the algorithm enters the exploitation stage. During this phase, the hawks raid the prey stalked and observed during the exploration phase. However, in nature the prey also tries to escape from the hunt, so the hawks adopt different pursuit modes for the various escape behaviors of the prey. HHO uses four strategies to simulate this chasing and hunting behavior, each described below; the escape energy $E$ and the escape probability $r$ determine which strategy is adopted.

2.2.1. Soft Besiege

In the case of $|E| \ge 0.5$ and $r \ge 0.5$, the prey has enough energy to escape with random jumps, yet the hawks have surrounded it. The hawks therefore use a soft besiege to exhaust the prey's strength before striking. The mathematical models are shown in Equations (4)-(6):
$$X_{t+1}^{i,j} = \Delta X_t^{j} - E \left| Jump \cdot X_t^{prey,j} - X_t^{i,j} \right| \tag{4}$$
$$\Delta X_t^{j} = X_t^{prey,j} - X_t^{i,j} \tag{5}$$
$$Jump = 2\left(1 - r_5\right) \tag{6}$$
where $\Delta X_t^{j}$ is the distance between the current position of the prey and that of the $i$th hawk in the $j$th dimension; $r_5$ is a random number in $(0, 1)$; and $Jump$ represents the random jump intensity, which varies randomly within $(0, 2)$ at each iteration.
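A sketch of the soft besiege of Equations (4)-(6), vectorized over dimensions (the helper name is an assumption of this sketch):

```python
import numpy as np

def soft_besiege(X_i, X_prey, E, rng):
    """Soft besiege update of Equations (4)-(6)."""
    jump = 2.0 * (1.0 - rng.random())               # Jump = 2(1 - r5), in (0, 2)
    delta = X_prey - X_i                            # Equation (5)
    return delta - E * np.abs(jump * X_prey - X_i)  # Equation (4)
```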

2.2.2. Hard Besiege

In the case of $|E| < 0.5$ and $r \ge 0.5$, the prey does not have enough energy to escape and the hawks have surrounded it; the hawks then choose a hard besiege and raid rapidly. This behavior is modeled in Equation (7):
$$X_{t+1}^{i,j} = X_t^{prey,j} - E \left| \Delta X_t^{j} \right| \tag{7}$$

2.2.3. Soft Besiege with Progressive Rapid Dives

In the case of $|E| \ge 0.5$ and $r < 0.5$, the prey has enough energy to escape the siege with zigzag movements, and the hawks have not yet completely encircled it. The hawks therefore continue to expend the prey's energy and gradually establish a complete encirclement. This strategy is described by Equations (8)-(11):
$$X_{t+1}^{i,j} = \begin{cases} Y_{t+1}^{i,j}, & \text{if } f\left(Y_{t+1}^{i,j}\right) < f\left(X_t^{i,j}\right) \\ Z_{t+1}^{i,j}, & \text{if } f\left(Z_{t+1}^{i,j}\right) < f\left(X_t^{i,j}\right) \end{cases} \tag{8}$$
$$Y_{t+1}^{i,j} = X_t^{prey,j} - E \left| Jump \cdot X_t^{prey,j} - X_t^{i,j} \right| \tag{9}$$
$$Z_{t+1}^{i,j} = Y_{t+1}^{i,j} + S_j \times L_j \tag{10}$$
$$L_j \sim LF\left(u_j, \nu_j, \beta_j\right) \tag{11}$$
where $S_j$ is a random number and $LF$ is the Lévy flight function:
$$LF\left(u_j, \nu_j, \beta_j\right) = 0.01 \times \frac{u_j \times \sigma}{\left| \nu_j \right|^{1/\beta_j}} \tag{12}$$
$$\sigma = \left( \frac{\Gamma\left(1 + \beta\right) \times \sin\left(\pi \beta / 2\right)}{\Gamma\left(\frac{1 + \beta}{2}\right) \times \beta \times 2^{\frac{\beta - 1}{2}}} \right)^{1/\beta} \tag{13}$$
where $u \sim N(0, \sigma^2)$ and $\nu \sim N(0, 1)$ are normally distributed random numbers, and $\beta = 1.5$ by default.
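The Lévy step of Equations (12) and (13) corresponds to Mantegna's algorithm; a sketch with NumPy, assuming the default $\beta = 1.5$:

```python
import math
import numpy as np

def levy_flight(dim, rng, beta=1.5):
    """Levy flight step LF of Equations (12)-(13)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)            # u ~ N(0, sigma^2)
    v = rng.normal(0.0, 1.0, dim)              # v ~ N(0, 1)
    return 0.01 * u / np.abs(v) ** (1 / beta)  # Equation (12)
```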

2.2.4. Hard Besiege with Progressive Rapid Dives

In the case of $|E| < 0.5$ and $r < 0.5$, the prey does not have enough energy to escape, but the hawks have not completely surrounded it, so they adopt this strategy to accelerate and shorten the average distance between themselves and the prey, forming a hard encirclement before the raid. This strategy is modeled by Equations (14)-(16):
$$X_{t+1}^{i,j} = \begin{cases} Y_{t+1}^{i,j}, & \text{if } f\left(Y_{t+1}^{i,j}\right) < f\left(X_t^{i,j}\right) \\ Z_{t+1}^{i,j}, & \text{if } f\left(Z_{t+1}^{i,j}\right) < f\left(X_t^{i,j}\right) \end{cases} \tag{14}$$
$$Y_{t+1}^{i,j} = X_t^{prey,j} - E \left| Jump \cdot X_t^{prey,j} - X_t^{av,j} \right| \tag{15}$$
$$Z_{t+1}^{i,j} = Y_{t+1}^{i,j} + S_j \times L_j \tag{16}$$

3. Multi-Strategy Enhanced Harris Hawks Optimization

In view of the shortcomings of the original HHO, the following improvement strategies are proposed. First, the map-compass operator and a Cauchy mutation strategy are introduced to achieve guidance by the global optimum and a sufficient search of the neighborhood of the current optimal solution, increasing population diversity and improving the algorithm's ability to escape local optima. Second, a spiral motion strategy is adopted in the exploration stage to improve exploration efficiency and accelerate convergence. Finally, greedy selection is used to fully retain dominant individuals.

3.1. Improved Strategy Based on Map-Compass Operator and Cauchy Mutation

One of the main disadvantages of HHO is a lack of diversity when solving complex optimization problems, which leads to premature convergence. Therefore, the map-compass operator is introduced to perturb the optimal individual before the algorithm enters the exploration stage, and Cauchy mutation is integrated to help the algorithm escape local optima.
This paper draws on the idea of the map-compass operator in the Pigeon-Inspired Optimization algorithm [49] to perturb the global extreme value so that it can lead all hawks to fly to a new location. This strategy increases the diversity of the hawk population and improves the probability of finding a better solution. Therefore, before the exploration stage, the position update formula in Equation (17) is applied:
$$X_{t+1}^{i,j} = X_t^{prey,j} \times e^{-\tau \times t} + \left( X_t^{prey,j} - X_t^{i,j} \right) \tag{17}$$
where $\tau$ is the map-compass factor, with values in $(0, 1)$.
At the same time, Cauchy mutation is used so that HHO searches the neighborhood of the current optimal solution more fully and in a more diversified way, further improving the algorithm's ability to escape local optima. In probability theory, the Cauchy distribution is a common continuous distribution, and its probability density function is shown in Equation (18):
$$f(x) = \frac{1}{\pi} \cdot \frac{\gamma}{\gamma^2 + \left(x - \eta\right)^2}, \quad \gamma > 0, \; -\infty < x < +\infty \tag{18}$$
where $\eta$ is a real number in $(-\infty, +\infty)$. A special case is the standard Cauchy distribution, obtained when $\gamma = 1$ and $\eta = 0$; its probability density function is shown in Equation (19):
$$f(x) = \frac{1}{\pi} \cdot \frac{1}{1 + x^2} \tag{19}$$
At present, the mutation operators commonly used in meta-heuristic algorithms are the Gaussian mutation and the Cauchy mutation [50,51]. The Gaussian and Cauchy distributions are compared in Figure 1. As shown in Figure 1, the Cauchy distribution has a relatively small peak near the origin; after mutation, HHO spends more effort searching for the global optimum and less time searching the neighboring local interval. MEHHO's ability to locate the global optimum is thereby significantly improved, and the global exploration and local exploitation abilities of the algorithm are better balanced. In addition, compared with the Gaussian distribution, the tails of the Cauchy distribution are flatter and wider, approaching the horizontal axis more slowly. From the perspective of probability, the Cauchy distribution has a wider range and permits larger perturbations, which makes it better suited to improving the global exploration ability of the algorithm. Therefore, Cauchy mutation is selected in this paper to generate more diverse individuals to search the main search space, so that HHO can quickly escape local optima.
The position update formula fusing the map-compass operator and the Cauchy mutation is shown in Equation (20):
$$X_{t+1}^{i,j} = X_t^{prey,j} \times e^{-\tau \times t} + Cauchy\left(0, 1\right) \times \left( X_t^{prey,j} - X_t^{i,j} \right) \tag{20}$$
where $Cauchy(0, 1)$ denotes the standard Cauchy distribution.
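Equation (20) can be sketched as below; the decaying exponent $e^{-\tau t}$ follows the reconstruction of Equations (17) and (20) above, and the helper name is an assumption of this sketch:

```python
import numpy as np

def map_compass_cauchy(X, X_prey, t, tau, rng):
    """Perturbation of Equation (20): map-compass decay plus Cauchy mutation."""
    cauchy = rng.standard_cauchy(X.shape)    # Cauchy(0, 1) samples, one per entry
    return X_prey * np.exp(-tau * t) + cauchy * (X_prey - X)
```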

3.2. Position Update Mechanism Based on Spiral Motion and Greedy Strategy

Slow convergence is another drawback of HHO. In the exploration phase, individuals rely on other members of the population to update their positions, which is partly influenced by the initial population distribution. In the mathematical model for $q \ge 0.5$, the combination of random numbers and difference position vectors does not adequately improve the exploration efficiency of the algorithm. To remedy the slow convergence of HHO, a spiral motion strategy is introduced in the exploration stage. Spiral motion is a rotation around a fixed point at constant angular velocity that gradually moves away from the point, as shown in Figure 2. In many algorithms, the spiral strategy has been verified to be effective in improving search ability [52,53,54,55], and spiral movement is more consistent with the hunting behavior of hawks in nature. The spiral motion mainly modifies the model for $q \ge 0.5$; after its introduction, Equation (2) becomes Equation (21):
$$U_{t+1}^{i,j} = \begin{cases} X_t^{rand,j} - e^{hf} \cos\left(2 \pi f\right) \left| X_t^{rand,j} - 2 r_2 X_t^{i,j} \right|, & q \ge 0.5 \\ \left( X_t^{prey,j} - X_t^{av,j} \right) - r_3 \left( LB_j + r_4 \left( UB_j - LB_j \right) \right), & q < 0.5 \end{cases} \tag{21}$$
where $h$ is a constant limiting the shape of the logarithmic spiral; $f$ is a random number in the interval $[-1, 1]$; and $U_{t+1}^{i,j}$ is the new position of the $i$th hawk in the $j$th dimension in the next iteration.
Although the spiral motion strategy can improve the exploration efficiency of the algorithm, it cannot guarantee that the new position is better. To avoid missing the optimal solution, a greedy selection strategy is introduced in the exploration stage: the fitness values before and after the update are compared to decide whether to accept the new position, so that better individuals are retained. The process is described as follows:
$$X_{t+1}^{i,j} = \begin{cases} U_{t+1}^{i,j}, & \text{if } f\left(U_{t+1}^{i,j}\right) \le f\left(X_t^{i,j}\right) \\ X_t^{i,j}, & \text{if } f\left(U_{t+1}^{i,j}\right) > f\left(X_t^{i,j}\right) \end{cases} \tag{22}$$
where $U_{t+1}^{i,j}$ is given by Equation (21).
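A sketch combining the spiral exploration of Equation (21) with the greedy selection of Equation (22); the spiral random number is named `s` here to avoid clashing with the fitness function `f`, and scalar random draws per update are an assumption of this sketch:

```python
import numpy as np

def spiral_explore_greedy(X, i, f, X_prey, lb, ub, h, rng):
    """Spiral exploration (Equation (21)) with greedy acceptance (Equation (22))."""
    X_rand = X[rng.integers(X.shape[0])]
    r2, r3, r4, q = rng.random(4)
    s = rng.uniform(-1.0, 1.0)                 # spiral parameter f in [-1, 1]
    if q >= 0.5:
        U = X_rand - np.exp(h * s) * np.cos(2 * np.pi * s) * np.abs(
            X_rand - 2.0 * r2 * X[i])
    else:
        U = (X_prey - X.mean(axis=0)) - r3 * (lb + r4 * (ub - lb))
    return U if f(U) <= f(X[i]) else X[i]      # keep the better position
```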
The pseudo-code of MEHHO proposed in this paper is shown in Algorithm 1.
Algorithm 1. Pseudo-code of the proposed MEHHO
Inputs: The population size $N_{pop}$ and the maximum number of iterations $T$.
Outputs: The location of the prey.
1: Initialize the random population $X_i$ $(i = 1, 2, \ldots, N_{pop})$ in the provided search space.
2: While $t < T$ do
3:  Calculate the fitness value of each hawk.
4:  Select the best individual position as the prey position.
5:  Update the locations using Equation (20), which incorporates the map-compass operator and the Cauchy mutation; recalculate the individual fitness values and update $X_{prey}$.
6:  for (each hawk) do
7:   Update the initial energy $E_0$ and the jump strength $Jump$.
8:   Update $E$ using Equation (1).
9:   if $|E| \ge 1$ then
10:    Update the location of the member using Equation (22).
11:   if $|E| < 1$ then
12:    if $|E| \ge 0.5$ and $r \ge 0.5$ then
13:     Update the location of the member using Equation (4).
14:    else if $|E| < 0.5$ and $r \ge 0.5$ then
15:     Update the location of the member using Equation (7).
16:    else if $|E| \ge 0.5$ and $r < 0.5$ then
17:     Update the location of the member using Equation (8).
18:    else if $|E| < 0.5$ and $r < 0.5$ then
19:     Update the location of the member using Equation (14).
20: Return $X_{prey}$.
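Putting the pieces together, a compressed sketch of Algorithm 1 follows; it assumes the helper functions sketched in the preceding sections, folds the fitness re-evaluation of step 5 into the next iteration, and omits the progressive rapid-dive branches (Equations (8) and (14)) for brevity. The values of `tau`, `h`, and `seed` are assumptions:

```python
import numpy as np

def mehho(f, lb, ub, n_pop=30, T=500, tau=0.5, h=1.0, seed=0):
    """Skeleton of the MEHHO loop of Algorithm 1 (simplified sketch)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_pop, len(lb)))
    for t in range(T):
        fit = np.array([f(x) for x in X])
        X_prey = X[fit.argmin()].copy()                  # best individual = prey
        X = map_compass_cauchy(X, X_prey, t, tau, rng)   # Equation (20)
        X = np.clip(X, lb, ub)
        for i in range(n_pop):
            E = escape_energy(t, T, rng)                 # Equation (1)
            r = rng.random()
            if abs(E) >= 1:                              # exploration, Eqs (21)-(22)
                X[i] = spiral_explore_greedy(X, i, f, X_prey, lb, ub, h, rng)
            elif abs(E) >= 0.5 and r >= 0.5:             # soft besiege, Eq (4)
                X[i] = soft_besiege(X[i], X_prey, E, rng)
            elif r >= 0.5:                               # hard besiege, Eq (7)
                X[i] = X_prey - E * np.abs(X_prey - X[i])
            # progressive rapid-dive branches (Eqs (8), (14)) omitted here
        X = np.clip(X, lb, ub)
    fit = np.array([f(x) for x in X])
    return X[fit.argmin()]                               # final prey position
```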

4. Experiment and Discussion: Global Optimization

To verify the performance of the MEHHO on global optimization problems, three experiments were designed: (1) MEHHO was compared with six meta-heuristic algorithms; (2) MEHHO and the original HHO were compared in different dimensions; (3) MEHHO was compared with three improved HHOs. In the experiments, 28 test functions (shown in Table 1, Table 2 and Table 3) were selected for comparative testing, including 10 unimodal benchmark functions (F1–F10), which have only one global optimum and are generally used to test the exploitation ability of an algorithm. Ten multimodal benchmark functions (F11–F20) and eight fixed-dimension multimodal benchmark functions (F21–F28) were used to test the exploration ability of the algorithm and its ability to escape local optima.

4.1. Comparison with Other Meta-Heuristic Algorithms

To verify the effectiveness and significance of MEHHO, six meta-heuristic algorithms are selected for comparison in this section: Harris Hawks Optimization (HHO, 2019) [34], the Whale Optimization Algorithm (WOA, 2016) [9], the Marine Predators Algorithm (MPA, 2020) [12], the Grey Wolf Optimizer (GWO, 2014) [56], Particle Swarm Optimization (PSO, 1995) [6], and the Butterfly Optimization Algorithm (BOA, 2019) [11]. To ensure objective results, the same parameters are set for every algorithm: the population size is 30, the maximum number of iterations is 500, and the dimension of test functions F1–F20 is 30. Each test function is run 30 times independently, and the mean value, standard deviation, and computing time (in seconds) for each algorithm to reach its optimal value are calculated. The experimental results are shown in Table 4, Table 5 and Table 6, where bold fonts indicate better optimization results. To show the optimization performance of the MEHHO directly, the convergence curves of the seven algorithms on some of the benchmark test functions (F1–F9, F11–F19, F26–F28) are plotted in Figure 3, Figure 4 and Figure 5.
As can be seen from Table 4, for the unimodal functions, the proposed MEHHO has the strongest optimization ability and is superior to the other six meta-heuristic algorithms. On the unimodal benchmark functions F1–F4 and F8–F10, the optimal solution approaches the theoretical optimum. For functions F5 and F7, although the optimization accuracy of the other meta-heuristic algorithms is generally poor and the MEHHO cannot find the theoretical optimum, its solution accuracy is the highest. For function F6, the MEHHO is second only to MPA and superior to the other comparison algorithms. The standard deviations on F1–F4 and F8–F10 all reach 0, indicating that the MEHHO is highly robust and more stable than HHO on such unimodal problems. The computation time also supports the effectiveness of the MEHHO. On most test functions the computation time of PSO is short, because the statistical time in the tables is the time for each algorithm to reach its own optimal value, and PSO tends to fall into local optima. On most unimodal functions, the MEHHO requires less computation time than HHO while maintaining the highest solution accuracy.
As can be seen from Table 5, on the multimodal benchmark functions (F11–F20), for function F11 the proposed MEHHO, HHO, and MPA all reach the global optimum. For functions F12, F13, and F17, the performance of the improved algorithm is similar to that of the original HHO, and all closely approach the theoretical optimum. For functions F14 and F15, the MEHHO has the highest search accuracy. For functions F16 and F18–F20, the MEHHO is superior to all the comparison algorithms, and its optimal solution is very close to the theoretical optimum. The lower standard deviation on most problems indicates that the MEHHO is robust. In summary, compared with the other six algorithms, the proposed MEHHO shows better optimization performance and stronger stability on the test functions. For function F20, the MEHHO trades some execution time to get closer to the theoretical optimum. On the other multimodal test functions, the MEHHO takes less time than HHO while maintaining the highest accuracy, and for functions F11 and F13 it takes less execution time than MPA at the same precision.
Fixed-dimension multimodal functions test an algorithm's ability to balance exploration and exploitation during the search. In fixed-dimension optimization problems the optimal solution is not unique, but because the dimension is fixed, sufficient exploration and exploitation are required to solve them. The results in Table 6 show that the MEHHO reaches the theoretical optimum on functions F21–F23. On F26–F28, the MEHHO is closer to the theoretical optimum than HHO and is second only to MPA, indicating that the proposed MEHHO achieves a more stable balance between exploration and exploitation. For functions F24 and F25, the MEHHO cannot reach the theoretical optimum, and its standard deviation is not dominant on most of the fixed-dimension multimodal functions; how to improve the search accuracy and robustness of the algorithm on such functions will be studied further. The MEHHO takes less computation time than HHO, but its computing time relative to the other algorithms requires further work.
In the convergence diagrams in Figure 3, Figure 4 and Figure 5, the horizontal axis represents the number of iterations and the vertical axis the best fitness value. As the figures show, the convergence curve of the MEHHO decreases faster than those of the six comparison algorithms and reaches high accuracy sooner. In the convergence curves of F1–F10 in Figure 3, the convergence speed and accuracy of the improved MEHHO are greatly improved except on functions F5 and F6, and there are inflection points in the convergence process, indicating a better ability to escape local extrema than the comparison algorithms. Figure 4 shows that for functions F12, F13, and F17, HHO and MEHHO have similar optimization accuracy and are better than the other algorithms; however, the convergence curves show that the MEHHO converges faster than the original algorithm. Besides the relatively small accuracy improvement on functions F14 and F15, the convergence performance of the proposed MEHHO on functions F16, F18, and F19 is significantly improved. As can be seen from Figure 5, there is little performance gap among the algorithms on the fixed-dimension multimodal functions, but the MEHHO still achieves better results: on F26 and F27 it converges faster than the original HHO and comes closer to the global optimum of the objective function. In conclusion, the MEHHO is superior in both convergence speed and precision.

4.2. Comparison and Significance Verification with Original Harris Hawks Optimization in Different Dimensions

To further verify the optimization ability of the proposed MEHHO in different dimensions (50/100/500), the performance of the proposed MEHHO and the original HHO is tested at each dimension. The population size of both algorithms is set to 30 and the maximum number of iterations to 500, and the mean, standard deviation, maximum, and minimum of each test function over 30 independent runs are calculated. The results are shown in Table 7, Table 8 and Table 9, where bold fonts indicate better optimization results.
The results in Table 7, Table 8 and Table 9 show that, for the unimodal benchmark functions F1–F4 and F8–F10, the proposed MEHHO comes very close to the theoretical optimum within 500 iterations in 50, 100, and 500 dimensions, with a standard deviation of 0, which indicates strong robustness. Although the MEHHO does not reach the theoretical optimum on functions F5–F7, its search accuracy is better than the original HHO in all dimensions. For the multimodal benchmark functions F11, F12, F13, and F17, both the proposed MEHHO and the original HHO closely approach the theoretical optimum. On functions F16, F18, and F19, the proposed MEHHO is superior to the original HHO and comes closer to the global optimum. Although the MEHHO outperforms the original algorithm on F14, F15, and F20, it cannot reach the theoretical optimum. Overall, compared with the original HHO, the MEHHO has excellent optimization ability and provides better solutions for the objective functions, and the dimensional test results verify that the proposed MEHHO is more competitive in higher dimensions.
Due to randomness in the runs of the algorithm, good mean performance alone does not accurately characterize the result of each run. Therefore, to judge the significance of the improved algorithm's results, the Wilcoxon signed-rank test is performed in this section at a significance level of 0.05. Table 10 and Table 11 list the test results of the MEHHO versus the original HHO in different dimensions of the unimodal and multimodal benchmark functions, and Table 12 lists the results for the fixed-dimension multimodal benchmark functions. When the p-value is less than 0.05, the compared algorithms are considered significantly different; otherwise, their performance is considered similar. The +/−/= in the conclusions indicate that the MEHHO is better than, inferior to, or equal to the original algorithm, respectively. According to the statistics in Table 10, Table 11 and Table 12, on the 28 test functions the MEHHO differs significantly from traditional HHO, indicating better optimization performance and a stable improvement in optimization ability.
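A comparison of this kind can be reproduced with SciPy's implementation of the signed-rank test; the per-run fitness arrays below are purely hypothetical placeholders:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
mehho_runs = rng.normal(1e-8, 1e-9, 30)   # hypothetical best fitness of 30 MEHHO runs
hho_runs = rng.normal(1e-6, 1e-7, 30)     # hypothetical best fitness of 30 HHO runs

stat, p = wilcoxon(mehho_runs, hho_runs)  # paired two-sided signed-rank test
if p < 0.05:
    better = "MEHHO" if mehho_runs.mean() < hho_runs.mean() else "HHO"
    print(f"p = {p:.3e}: significant difference, {better} is better")
else:
    print(f"p = {p:.3e}: no significant difference")
```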

4.3. Comparison with Other Improved Harris Hawks Optimization

To further verify the effectiveness and significance of the MEHHO, three improved HHO algorithms are selected in this section for comparison. They are HHO combined with Particle Swarm Optimization (hHHO-PSO, 2021) [57], HHO combined with Grey Wolf Optimizer (hHHO-GWO, 2021) [58], and HHO combined with Sine-Cosine Algorithm (hHHO-SCA, 2020) [59]. To ensure the objectivity of the results, the same parameters are set for each algorithm: the population size is all 30, the maximum number of iterations is 500, and the dimension of the unimodal functions and multimodal functions is 30. After each test function is run 30 times independently, its mean value and standard deviation are calculated. The data of the three improved algorithms are all from the original articles. Because some selected test functions are different, only the comparison results of common functions are listed here, as shown in Table 13, where the bold fonts indicate better optimization results.
As can be seen from Table 13, for functions F1–F4, the MEHHO proposed in this paper is significantly better than the other three improved HHOs: it converges to the theoretical optimum of 0 within the first 500 iterations with a standard deviation of 0, indicating excellent optimization accuracy and robustness. For the hard-to-converge functions F5–F7, F14, and F15, the MEHHO achieves the best convergence accuracy even though it cannot find the theoretical optimum, while the other three improved algorithms generally show poor optimization accuracy. For functions F21–F23, all four algorithms reach the theoretical optimum. For functions F26–F28, the MEHHO is closer to the theoretical optimum than the other three improved algorithms. In conclusion, compared with these three improved algorithms, the proposed MEHHO has the best convergence accuracy, indicating that it effectively improves the performance of HHO.

5. Application in Channel Estimation

Channel estimation and signal detection are key tasks in wireless communication. In the real world, a signal transmitted at high speed is affected by Doppler shift, phase noise, and fading, and the received signal is contaminated. The purpose of channel estimation and signal detection is to recover the transmitted symbols while preserving as much of the signal's key information as possible. With the development of computing science, deep learning has been applied to channel estimation and signal detection [60]. Deep learning enables end-to-end learning and can replace traditional channel estimation and equalization. Compared with traditional methods, the deep learning approach retains more key signal information and is more robust when fewer training pilots are used and the cyclic prefix is omitted. However, deep learning also has disadvantages; for example, key initial parameters are usually obtained through repeated manual experiments, which is inefficient and cannot guarantee a global optimum. Therefore, this paper uses the MEHHO to address the inefficiency of network initialization parameter setting in deep learning-based channel estimation, further verifying the effectiveness and practicability of the MEHHO.

5.1. Channel Estimation and Signal Detection Model

In this paper, the proposed MEHHO is used to address the inefficiency of manual parameter setting in deep learning, since performance depends heavily on those settings. The optimized deep learning model is used to improve the channel estimation and signal detection capabilities of an Orthogonal Frequency-Division Multiplexing (OFDM) wireless communication system. In this section we chose to optimize a Long Short-Term Memory (LSTM) network, because its long-term memory makes it good at processing long sequences, which suits the transmitted symbol sequences in communication. Since LSTM-based channel estimation takes a long time to train and many parameters must be adjusted during learning, researchers have developed offline training and online deployment techniques, as illustrated in references [61,62,63]: data generated by channel model simulation are used to train the LSTM network offline, and the trained model is then deployed directly online to recover the transmitted data. This section uses offline training and online deployment to design the LSTM-based channel estimation and signal detection experiments. Firstly, the OFDM system and LSTM network model are established. Secondly, the key initialization parameters of the LSTM network, namely the initial learning rate, the number of training epochs, and the batch size, are optimized in the offline training stage. Finally, the optimized LSTM network is used for OFDM channel estimation and signal detection during online deployment to obtain the minimum Symbol Error Rate (SER). In this experiment, the sum of the error rates on the training samples and the validation samples is selected as the fitness function, as described in Equation (23):
Minimize
$$SER\left(I, N, M\right) = \left(1 - \frac{\hat{W}}{W}\right) + \left(1 - \frac{\hat{V}}{V}\right) \tag{23}$$
Subject to
$$I \in \left[I_{\min}, I_{\max}\right], \quad N \in \left[N_{\min}, N_{\max}\right], \quad M \in \left[M_{\min}, M_{\max}\right]$$
where $\hat{W}$ is the number of correctly predicted training samples; $W$ is the total number of training samples; $\hat{V}$ is the number of correctly predicted validation samples; $V$ is the total number of validation samples; $I$ is the initial learning rate; $N$ is the number of training epochs; and $M$ is the batch size.
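A hedged sketch of the fitness function in Equation (23); `train_and_validate` is a hypothetical routine standing in for one offline LSTM training run:

```python
def ser_fitness(params, train_and_validate):
    """Fitness of Equation (23): summed training and validation error rates.

    `train_and_validate(I, N, M)` is assumed to train the LSTM with learning
    rate I, N training epochs, and batch size M, and to return the counts
    (W_hat, W, V_hat, V) defined below Equation (23).
    """
    I, N, M = params
    W_hat, W, V_hat, V = train_and_validate(I, int(round(N)), int(round(M)))
    return (1.0 - W_hat / W) + (1.0 - V_hat / V)
```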
The process of implementing the MEHHO-LSTM for channel estimation and detection is shown in Figure 6. The specific steps are as follows:
  • Step 1: Establish the mathematical model of the OFDM system, and generate the training, validation, and test sets required by the LSTM model under the 3GPP TR38.901 channel model;
  • Step 2: Establish the LSTM channel estimation and signal detection model;
  • Step 3: Initialize the MEHHO, taking the initial learning rate $I$, the number of training epochs $N$, and the batch size $M$ of the LSTM model as the optimization variables, each corresponding to one dimension of the MEHHO search space, and establish the MEHHO-LSTM model;
  • Step 4: Calculate the fitness value of each individual according to Equation (23), and update the individual positions according to the fitness values;
  • Step 5: Determine whether the maximum number of iterations has been reached. If so, output the position of the optimal solution, namely the best parameters of the LSTM; otherwise, return to Step 4;
  • Step 6: Substitute the optimal parameters into the LSTM network model for OFDM channel estimation and signal detection (a sketch of steps 3-6 follows this list).
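Steps 3-6 amount to handing the fitness function to the MEHHO driver; a purely illustrative wiring of the sketches above, with the bounds taken from Section 5.2, might look as follows:

```python
import numpy as np

# Bounds from Section 5.2: learning rate I, training epochs N, batch size M.
lb = np.array([0.005, 80.0, 800.0])
ub = np.array([0.02, 100.0, 1000.0])

# `train_and_validate` is the hypothetical LSTM training routine assumed above;
# `mehho` is the driver sketched after Algorithm 1.
best_I, best_N, best_M = mehho(
    lambda p: ser_fitness(p, train_and_validate), lb, ub)
```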

5.2. Experimental Parameter Setting

In this experiment, quadrature phase-shift keying (QPSK) modulation is adopted for the OFDM system, which has 64 subcarriers, 8 pilots, and a cyclic prefix of length 16. The channel model is 3GPP TR38.901 with 20 paths. The ratio of the training set to the validation set is 8:2. The LSTM model consists of five layers: the input layer, LSTM layer, fully connected layer, softmax layer, and classification layer. The input layer contains 256 neurons, the LSTM layer 16 neurons, and the fully connected layer 4 neurons. The Adam algorithm is used to train the internal parameters. The initialization parameters of the meta-heuristic algorithms used for testing are set identically, and the fitness function is the SER. The value ranges of the initial learning rate, number of training epochs, and batch size are $[0.005, 0.02]$, $[80, 100]$, and $[800, 1000]$, respectively. Two LSTM networks are selected for comparison. The first, LSTM1, has its parameters set to non-empirical values and achieves about 78% accuracy on the training set and 53% on the validation set. The second, LSTM2, has its parameters set to the empirical optimum and achieves 100% accuracy on both the training and validation sets. The parameters are set as follows: LSTM1 uses an initial learning rate of 0.005, 5 training epochs, and a batch size of 500; LSTM2 uses an initial learning rate of 0.02, 100 training epochs, and a batch size of 1000.

5.3. Results and Discussion

Firstly, the MEHHO-LSTM network is compared with the LSTM1 and LSTM2 networks in channel estimation and signal detection performance. As can be seen from Figure 7, the LSTM1 network performs worst and cannot effectively reduce the SER, while the LSTM2 network is far superior to LSTM1, indicating that the choice of key initialization parameters greatly affects the network's estimation and detection performance. When the signal-to-noise ratio (SNR) is greater than 10 dB, the MEHHO-LSTM method outperforms LSTM2; at an SER of $10^{-2}$, the performance improves by 2–3 dB. The results show that the proposed algorithm can effectively optimize the key initial parameters of the LSTM network, improving the accuracy of channel estimation and signal detection and overcoming the drawbacks of manually selecting the initial parameters of the LSTM network.
Figure 8 compares the performance in OFDM channel estimation and signal detection of LSTM network models optimized by the proposed MEHHO, the original Harris Hawks Optimization (HHO, 2019) [34], the Whale Optimization Algorithm (WOA, 2016) [9], the Grey Wolf Optimizer (GWO, 2014) [56], and the Sine Cosine Algorithm (SCA, 2016) [64]. As can be seen from Figure 8, the SER of each method decreases as the SNR increases. When the SNR exceeds 10 dB, the SER of the MEHHO-LSTM model drops markedly faster, and it outperforms the other optimized network models. This proves that the initial parameters found by the proposed MEHHO are better and help the LSTM model learn the characteristics of the wireless channel more effectively.
Figure 9 compares the SER performance of the proposed MEHHO-LSTM algorithm, the least squares (LS) algorithm, and the minimum mean-square error (MMSE) algorithm as the SNR varies. It can be seen from the figure that the MEHHO-LSTM scheme performs best, while the SER of the traditional LS and MMSE methods declines very slowly. When the SNR of the proposed model exceeds 10 dB, its SER declines significantly faster; at an SER of $10^{-1}$, the performance of the proposed algorithm is improved by 6–7 dB compared with the traditional LS and MMSE methods. This is because the traditional methods rely heavily on the number of pilots, whereas the proposed MEHHO-LSTM is robust to the number of pilots used for channel estimation. The MEHHO-LSTM network model can therefore significantly reduce the SER of the OFDM transmitted signal compared with traditional methods, and has good channel estimation and signal detection performance.

6. Conclusions and Prospect

This paper presents an improved version of Harris Hawks Optimization, the Multi-strategy Enhanced Harris Hawks Optimization. Firstly, the map-compass operator and the Cauchy mutation are introduced to enhance population diversity and improve the algorithm's ability to escape local optima. Secondly, the spiral motion strategy is used to improve the exploration stage, and dominant individuals are retained by greedy selection to improve the convergence speed and accuracy of the algorithm. The performance of the proposed MEHHO is compared with the original HHO, other meta-heuristic algorithms, and other improved HHOs on 28 benchmark test functions, and the significance of the results is evaluated with the Wilcoxon signed-rank test. The experimental results show that the multi-strategy design better simulates the hunting behavior of Harris hawks, effectively improves the convergence speed and accuracy of the algorithm, and mitigates its tendency to fall into local optima. When solving unimodal, multimodal, low-dimensional, and high-dimensional functions, the MEHHO obtains better optimization results, indicating better accuracy, reliability, and universality in solving global optimization problems. Finally, the applicability of the MEHHO is further verified on the deep learning-based channel estimation and signal detection problem in wireless communication, showing that the algorithm can solve engineering application problems and adequately handle parameter-selection optimization in engineering, providing efficient and reliable solutions.
The MEHHO has considerable development potential, and several directions remain to be studied. First, although the strategies proposed in this paper effectively improve the optimization performance of HHO, the computation time on individual test functions is long, and performance on some of the fixed-dimension multimodal test functions still has room for improvement; whether new strategies can further address these shortcomings should be considered. Second, whether the improved strategies can achieve good results in multi-objective optimization also requires further research. Finally, which other engineering optimization problems in the communication field the MEHHO can solve requires further consideration and experimentation.

Author Contributions

Conceptualization, Y.S. and Q.H.; methodology, Y.S. and Q.H.; software, Y.S. and Q.H.; validation, Q.H. and T.L.; formal analysis, Y.S., Q.H., and T.L.; investigation, Y.C. and Y.L.; resources, T.L. and Y.C.; data curation, Q.H. and Y.L.; writing—original draft preparation, Y.S. and Q.H.; writing—review and editing, Y.S. and T.L.; visualization, Y.S. and T.L.; supervision, Q.H. and T.L.; project administration, Y.S. and T.L.; funding acquisition, Y.S. and T.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Arufe, L.; González, M.A.; Oddi, A.; Rasconi, R.; Varela, R. Quantum circuit compilation by genetic algorithm for quantum approximate optimization algorithm applied to maxcut problem. Swarm Evol. Comput. 2022, 69, 101030.
  2. Wang, M.; Ma, Y.; Wang, P. Parameter and strategy adaptive differential evolution algorithm based on accompanying evolution. Inf. Sci. 2022, 607, 1136–1157.
  3. Alnowibet, K.A.; Mahdi, S.; El-Alem, M.; Abdelawwad, M.; Mohamed, A.W. Guided Hybrid Modified Simulated Annealing Algorithm for Solving Constrained Global Optimization Problems. Mathematics 2022, 10, 1312.
  4. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Futur. Gener. Comp. Syst. 2019, 101, 646–667.
  5. Zhao, W.; Wang, L.; Zhang, Z. A novel atom search optimization for dispersion coefficient estimation in groundwater. Futur. Gener. Comp. Syst. 2019, 91, 601–610.
  6. Kassoul, K.; Zufferey, N.; Cheikhrouhou, N.; Belhaouari, S.B. Exponential particle swarm optimization for global optimization. IEEE Access 2022, 10, 78320–78344.
  7. Li, S.; Wei, Y.; Liu, X.; Zhu, H.; Yu, Z. A New Fast Ant Colony Optimization Algorithm: The Saltatory Evolution Ant Colony Optimization Algorithm. Mathematics 2022, 10, 925.
  8. Kaya, E. A New Neural Network Training Algorithm Based on Artificial Bee Colony Algorithm for Nonlinear System Identification. Mathematics 2022, 10, 3487.
  9. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
  10. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191.
  11. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734.
  12. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377.
  13. Zervoudakis, K.; Tsafarakis, S. A mayfly optimization algorithm. Comput. Ind. Eng. 2020, 145, 106559.
  14. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250.
  15. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Meth. Appl. Mech. Eng. 2021, 376, 113609.
  16. Seyyedabbasi, A.; Kiani, F. Sand Cat swarm optimization: A nature-inspired algorithm to solve global optimization problems. Eng. Comput. 2022, 38, 1–25.
  17. Ahmadianfar, I.; Heidari, A.A.; Noshadian, S.; Chen, H.; Gandomi, A.H. INFO: An efficient optimization algorithm based on weighted mean of vectors. Expert Syst. Appl. 2022, 195, 116516.
  18. Abd El-Sattar, H.; Kamel, S.; Hassan, M.H.; Jurado, F. Optimal sizing of an off-grid hybrid photovoltaic/biomass gasifier/battery system using a quantum model of Runge Kutta algorithm. Energy Conv. Manag. 2022, 258, 115539.
  19. Shao, K.; Fu, W.; Tan, J.; Wang, K. Coordinated approach fusing time-shift multiscale dispersion entropy and vibrational Harris hawks optimization-based SVM for fault diagnosis of rolling bearing. Measurement 2021, 173, 108580.
  20. Zhang, A.; Yu, D.; Zhang, Z. TLSCA-SVM Fault Diagnosis Optimization Method Based on Transfer Learning. Processes 2022, 10, 362.
  21. Dokeroglu, T.; Deniz, A.; Kiziloz, H.E. A Comprehensive Survey on Recent Metaheuristics for Feature Selection. Neurocomputing 2022, 494, 269.
  22. Ewees, A.A.; Ismail, F.H.; Ghoniem, R.M.; Gaheen, M.A. Enhanced Marine Predators Algorithm for Solving Global Optimization and Feature Selection Problems. Mathematics 2022, 10, 4154.
  23. Abbasi, A.; Firouzi, B.; Sendur, P. On the application of Harris hawks optimization (HHO) algorithm to the design of microchannel heat sinks. Eng. Comput. 2021, 37, 1409–1428.
  24. Chacko, A.; Chacko, S. Deep learning-based robust medical image watermarking exploiting DCT and Harris hawks optimization. Int. J. Intell. Syst. 2022, 37, 4810–4844.
  25. Badashah, S.J.; Basha, S.S.; Ahamed, S.R.; Subba Rao, S.P.V. Fractional-Harris hawks optimization-based generative adversarial network for osteosarcoma detection using Renyi entropy-hybrid fusion. Int. J. Intell. Syst. 2021, 36, 6007–6031.
  26. Bandyopadhyay, R.; Kundu, R.; Oliva, D.; Sarkar, R. Segmentation of brain MRI using an altruistic Harris Hawks' Optimization algorithm. Knowl.-Based Syst. 2021, 232, 107468.
  27. Kaur, N.; Kaur, L.; Cheema, S.S. An enhanced version of Harris Hawks optimization by dimension learning-based hunting for breast cancer detection. Sci. Rep. 2021, 11, 21933.
  28. Alweshah, M.; Almiani, M.; Almansour, N.; Al Khalaileh, S.; Aldabbas, H.; Alomoush, W.; Alshareef, A. Vehicle routing problems based on Harris Hawks optimization. J. Big Data 2022, 9, 42.
  29. Zhang, R.; Li, S.; Ding, Y.; Qin, X.; Xia, Q. UAV Path Planning Algorithm Based on Improved Harris Hawks Optimization. Sensors 2022, 22, 5232.
  30. Golilarz, N.A.; Addeh, A.; Gao, H.; Ali, L.; Roshandeh, A.M.; Munir, H.M.; Khan, R.U. A new automatic method for control chart patterns recognition based on ConvNet and harris hawks meta heuristic optimization algorithm. IEEE Access 2019, 7, 149398–149405.
  31. Abd Elaziz, M.; Abualigah, L.; Ibrahim, R.A.; Attiya, I. IoT workflow scheduling using intelligent arithmetic optimization algorithm in fog computing. Comput. Intell. Neurosci. 2021, 2021, 9114113.
  32. Seyfollahi, A.; Ghaffari, A. Reliable data dissemination for the Internet of Things using Harris hawks optimization. Peer Peer Netw. Appl. 2020, 13, 1886–1902.
  33. Saravanan, G.; Ibrahim, A.M.; Kumar, D.S.; Vanitha, U.; Chandrika, V.S. Iot based speed control of bldc motor with Harris hawks optimization controller. Int. J. Grid Distrib. Comput. 2020, 13, 1902–1915.
  34. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Futur. Gener. Comp. Syst. 2019, 97, 849–872.
  35. Ramadan, A.; Kamel, S.; Korashy, A.; Almalaq, A.; Domínguez-García, J.L. An enhanced Harris Hawk optimization algorithm for parameter estimation of single, double and triple diode photovoltaic models. Soft Comput. 2022, 26, 7233–7257.
  36. Naeijian, M.; Rahimnejad, A.; Ebrahimi, S.M.; Pourmousa, N.; Gadsden, S.A. Parameter estimation of PV solar cells and modules using Whippy Harris Hawks Optimization Algorithm. Energy Rep. 2021, 7, 4047–4063.
  37. Chen, H.; Jiao, S.; Wang, M.; Heidari, A.A.; Zhao, X. Parameters identification of photovoltaic cells and modules using diversification-enriched Harris hawks optimization with chaotic drifts. J. Clean Prod. 2020, 244, 118778.
  38. Jia, H.; Lang, C.; Oliva, D.; Song, W.; Peng, X. Dynamic harris hawks optimization with mutation mechanism for satellite image segmentation. Remote Sens. 2019, 11, 1421.
  39. Wunnava, A.; Naik, M.K.; Panda, R.; Jena, B.; Abraham, A. A differential evolutionary adaptive Harris hawks optimization for two dimensional practical Masi entropy-based multilevel image thresholding. J. King Saud Univ.-Comput. Inf. Sci. 2020, 34, 3011–3024.
  40. Rodríguez-Esparza, E.; Zanella-Calzada, L.A.; Oliva, D.; Heidari, A.A.; Zaldivar, D.; Pérez-Cisneros, M.; Foong, L.K. An efficient Harris hawks-inspired image segmentation method. Expert Syst. Appl. 2020, 155, 113428.
  41. Srinivas, M.; Amgoth, T. EE-hHHSS: Energy-efficient wireless sensor network with mobile sink strategy using hybrid Harris hawk-salp swarm optimization algorithm. Int. J. Commun. Syst. 2020, 33, 4569.
  42. Bhat, S.J.; Venkata, S.K. An optimization based localization with area minimization for heterogeneous wireless sensor networks in anisotropic fields. Comput. Netw. 2020, 179, 107371.
  43. Kaveh, A.; Rahmani, P.; Eslamlou, A.D. An efficient hybrid approach based on Harris Hawks optimization and imperialist competitive algorithm for structural optimization. Eng. Comput. 2022, 38, 1555–1583.
  44. Dehkordi, A.A.; Sadiq, A.S.; Mirjalili, S.; Ghafoor, K.Z. Nonlinear-based chaotic harris hawks optimizer: Algorithm and internet of vehicles application. Appl. Soft. Comput. 2021, 109, 107574.
  45. Gupta, S.; Deep, K.; Heidari, A.A.; Moayedi, H.; Wang, M. Opposition-based learning Harris hawks optimization with advanced transition rules: Principles and analysis. Expert Syst. Appl. 2020, 158, 113510.
  46. Fan, Q.; Chen, Z.; Xia, Z. A novel quasi-reflected Harris hawks optimization algorithm for global optimization problems. Soft Comput. 2020, 24, 14825–14843.
  47. Liu, C. An improved Harris hawks optimizer for job-shop scheduling problem. J. Supercomput. 2021, 77, 14090–14129.
  48. Abualigah, L.; Diabat, A.; Altalhi, M.; Elaziz, M.A. Improved gradual change-based Harris Hawks optimization for real-world engineering design problems. Eng. Comput. 2022, 38, 1–41.
  49. Duan, H.; Qiao, P. Pigeon-inspired optimization: A new swarm intelligence optimizer for air robot path planning. Int. J. Intell. Comput. Cybern. 2014, 7, 24–37.
  50. Luo, J.; Chen, H.; Heidari, A.A.; Xu, Y.; Zhang, Q.; Li, C. Multi-strategy boosted mutative whale-inspired optimization approaches. Appl. Math. Model. 2019, 73, 109–123.
  51. Ou, X.; Wu, M.; Pu, Y.; Tu, B.; Zhang, G.; Xu, Z. Cuckoo search algorithm with fuzzy logic and Gauss–Cauchy for minimizing localization error of WSN. Appl. Soft Comput. 2022, 125, 109211.
  52. Jia, H.; Peng, X.; Lang, C. Remora optimization algorithm. Expert Syst. Appl. 2021, 185, 115665.
  53. Hayyolalam, V.; Kazem, A.A.P. Black widow optimization algorithm: A novel meta-heuristic approach for solving engineering optimization problems. Eng. Appl. Artif. Intell. 2020, 87, 103249.
  54. Mohammadi-Balani, A.; Nayeri, M.D.; Azar, A.; Taghizadeh-Yazdi, M. Golden eagle optimizer: A nature-inspired metaheuristic algorithm. Comput. Ind. Eng. 2021, 152, 107050.
  55. Zhao, W.; Zhang, Z.; Wang, L. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell. 2020, 87, 103300.
  56. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  57. Sarma, R.; Bhargava, C.; Jain, S.; Kamboj, V.K. Application of ameliorated Harris Hawks optimizer for designing of low-power signed floating-point MAC architecture. Neural Comput. Appl. 2021, 33, 8893–8922.
  58. Nandi, A.; Kamboj, V.K. A Canis lupus inspired upgraded Harris hawks optimizer for nonlinear, constrained, continuous, and discrete engineering design problem. Int. J. Numer. Methods Eng. 2021, 122, 1051–1088.
  59. Kamboj, V.K.; Nandi, A.; Bhadoria, A.; Sehgal, S. An intensify Harris Hawks optimizer for numerical and engineering optimization problems. Appl. Soft. Comput. 2020, 89, 106018.
  60. Ye, H.; Li, G.Y.; Juang, B.H. Power of deep learning for channel estimation and signal detection in OFDM systems. IEEE Wirel. Commun. Lett. 2017, 7, 114–117.
  61. Essai Ali, M.H. Deep learning-based pilot-assisted channel state estimator for OFDM systems. IET Commun. 2021, 15, 257–264.
  62. Ali, M.H.E.; Taha, I.B. Channel state information estimation for 5G wireless communication systems: Recurrent neural networks approach. PeerJ Comput. Sci. 2021, 7, e682.
  63. Mai, Z.; Chen, Y.; Zhao, H.; Du, L.; Hao, C. A UAV Air-to-Ground Channel Estimation Algorithm Based on Deep Learning. Wirel. Pers. Commun. 2022, 124, 2247–2260.
  64. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133.
Figure 1. Comparison between Cauchy distribution and Gaussian distribution.
Figure 2. Schematic diagram of spiral motion.
Figure 3. Convergence curves of unimodal functions (F1–F10).
Figure 4. Convergence curves of partial multimodal functions (F12–F19).
Figure 5. Convergence curves of partial fixed-dimension multimodal functions (F26, F27).
Figure 6. MEHHO-LSTM model channel estimation and signal detection process.
Figure 7. Performance comparison of LSTM1, LSTM2, and MEHHO-LSTM methods.
Figure 8. Comparison of the performance of the network model optimized by five algorithms.
Figure 9. Performance comparison between the traditional LS and MMSE methods and the MEHHO-LSTM method.
Table 1. Unimodal benchmark functions.
Function | Dim | Range | $F_{min}$
$F_1(x) = \sum_{i=1}^{Dim} x_i^2$ | 30, 50, 100, 500 | [−100, 100] | 0
$F_2(x) = \sum_{i=1}^{Dim} |x_i| + \prod_{i=1}^{Dim} |x_i|$ | 30, 50, 100, 500 | [−10, 10] | 0
$F_3(x) = \sum_{i=1}^{Dim} \big( \sum_{j=1}^{i} x_j \big)^2$ | 30, 50, 100, 500 | [−100, 100] | 0
$F_4(x) = \max_i \{ |x_i|, \; 1 \le i \le Dim \}$ | 30, 50, 100, 500 | [−100, 100] | 0
$F_5(x) = \sum_{i=1}^{Dim-1} \big[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \big]$ | 30, 50, 100, 500 | [−30, 30] | 0
$F_6(x) = \sum_{i=1}^{Dim} (\lfloor x_i + 0.5 \rfloor)^2$ | 30, 50, 100, 500 | [−100, 100] | 0
$F_7(x) = \sum_{i=1}^{Dim} i x_i^4 + \mathrm{rand}[0, 1)$ | 30, 50, 100, 500 | [−1.28, 1.28] | 0
$F_8(x) = x_1^2 + 10^6 \sum_{i=2}^{Dim} x_i^2$ | 30, 50, 100, 500 | [−100, 100] | 0
$F_9(x) = 10^6 x_1^2 + \sum_{i=2}^{Dim} x_i^2$ | 30, 50, 100, 500 | [−1, 1] | 0
$F_{10}(x) = \sum_{i=1}^{Dim} (10^6)^{(i-1)/(Dim-1)} x_i^2$ | 30, 50, 100, 500 | [−100, 100] | 0
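To make the suite concrete, the following is a minimal NumPy sketch of three representative unimodal benchmarks from Table 1 (F1, F5, and F7); the function names and the sample evaluation point are illustrative additions, not part of the original experimental code.

```python
import numpy as np

def f1_sphere(x):
    # F1: sum of squared components
    return np.sum(x ** 2)

def f5_rosenbrock(x):
    # F5: sum_{i=1}^{Dim-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2]
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

def f7_noisy_quartic(x):
    # F7: sum_i i * x_i^4 plus uniform noise on [0, 1)
    i = np.arange(1, x.size + 1)
    return np.sum(i * x ** 4) + np.random.rand()

# Evaluate at a random point inside F1's search range for Dim = 30
x = np.random.uniform(-100, 100, size=30)
print(f1_sphere(x), f1_sphere(np.zeros(30)))  # the second call returns the optimum 0
```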
Table 2. Multimodal benchmark functions.
Function | Dim | Range | $F_{min}$
$F_{11}(x) = \sum_{i=1}^{Dim} \big[ x_i^2 - 10 \cos(2\pi x_i) + 10 \big]$ | 30, 50, 100, 500 | [−5.12, 5.12] | 0
$F_{12}(x) = -20 \exp\big( -0.2 \sqrt{\tfrac{1}{Dim} \sum_{i=1}^{Dim} x_i^2} \big) - \exp\big( \tfrac{1}{Dim} \sum_{i=1}^{Dim} \cos(2\pi x_i) \big) + 20 + e$ | 30, 50, 100, 500 | [−32, 32] | 0
$F_{13}(x) = \tfrac{1}{4000} \sum_{i=1}^{Dim} x_i^2 - \prod_{i=1}^{Dim} \cos\big( \tfrac{x_i}{\sqrt{i}} \big) + 1$ | 30, 50, 100, 500 | [−600, 600] | 0
$F_{14}(x) = \tfrac{\pi}{Dim} \big\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{Dim-1} (y_i - 1)^2 [1 + 10 \sin^2(\pi y_{i+1})] + (y_{Dim} - 1)^2 \big\} + \sum_{i=1}^{Dim} u(x_i, 10, 100, 4)$ | 30, 50, 100, 500 | [−50, 50] | 0
$F_{15}(x) = 0.1 \big\{ \sin^2(3\pi x_1) + \sum_{i=1}^{Dim} (x_i - 1)^2 [1 + \sin^2(3\pi x_i + 1)] + (x_{Dim} - 1)^2 [1 + \sin^2(2\pi x_{Dim})] \big\} + \sum_{i=1}^{Dim} u(x_i, 5, 100, 4)$ | 30, 50, 100, 500 | [−50, 50] | 0
$F_{16}(x) = \sum_{i=1}^{Dim} |x_i \sin(x_i) + 0.1 x_i|$ | 30, 50, 100, 500 | [−10, 10] | 0
$F_{17}(x) = 0.5 + \dfrac{\sin^2\big( \sqrt{\sum_{i=1}^{Dim} x_i^2} \big) - 0.5}{\big[ 1 + 0.001 \big( \sum_{i=1}^{Dim} x_i^2 \big) \big]^2}$ | 30, 50, 100, 500 | [−100, 100] | 0
$F_{18}(x) = 1 - \cos\big( 2\pi \sqrt{\sum_{i=1}^{Dim} x_i^2} \big) + 0.1 \sqrt{\sum_{i=1}^{Dim} x_i^2}$ | 30, 50, 100, 500 | [−100, 100] | 0
$F_{19}(x) = \sum_{i=1}^{Dim} x_i^6 \big( 2 + \sin\tfrac{1}{x_i} \big)$ | 30, 50, 100, 500 | [−1, 1] | 0
$F_{20}(x) = \big( \sum_{i=1}^{Dim} |x_i| \big) \exp\big( -\sum_{i=1}^{Dim} \sin(x_i^2) \big)$ | 30, 50, 100, 500 | [−100, 100] | 0
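The multimodal functions can be coded the same way; as a sketch under the definitions above, F11 (Rastrigin), F12 (Ackley), and F13 (Griewank) are:

```python
import numpy as np

def f11_rastrigin(x):
    # F11: many regularly spaced local minima, global minimum 0 at the origin
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def f12_ackley(x):
    # F12: nearly flat outer region with a single deep central basin
    d = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)

def f13_griewank(x):
    # F13: the product term couples all dimensions
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

for f in (f11_rastrigin, f12_ackley, f13_griewank):
    print(f.__name__, f(np.zeros(30)))  # each evaluates to (numerically) 0 at the optimum
```

Checking that each function returns approximately 0 at the origin is a quick sanity test before running any optimizer on it.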
Table 3. Fixed-dimension multimodal benchmark functions.
Function | Dim | Range | $F_{min}$
$F_{21}(x) = 4 x_1^2 - 2.1 x_1^4 + \tfrac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4$ | 2 | [−5, 5] | −1.0316
$F_{22}(x) = \big( x_2 - \tfrac{5.1}{4\pi^2} x_1^2 + \tfrac{5}{\pi} x_1 - 6 \big)^2 + 10 \big( 1 - \tfrac{1}{8\pi} \big) \cos x_1 + 10$ | 2 | [−5, 5] | 0.398
$F_{23}(x) = \big[ 1 + (x_1 + x_2 + 1)^2 (19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2) \big] \times \big[ 30 + (2 x_1 - 3 x_2)^2 (18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2) \big]$ | 2 | [−2, 2] | 3
$F_{24}(x) = -\sum_{i=1}^{4} c_i \exp\big( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \big)$ | 3 | [1, 3] | −3.86
$F_{25}(x) = -\sum_{i=1}^{4} c_i \exp\big( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \big)$ | 6 | [0, 1] | −3.32
$F_{26}(x) = -\sum_{i=1}^{5} \big[ (x - a_i)(x - a_i)^T + c_i \big]^{-1}$ | 4 | [0, 10] | −10.1532
$F_{27}(x) = -\sum_{i=1}^{7} \big[ (x - a_i)(x - a_i)^T + c_i \big]^{-1}$ | 4 | [0, 10] | −10.4028
$F_{28}(x) = -\sum_{i=1}^{10} \big[ (x - a_i)(x - a_i)^T + c_i \big]^{-1}$ | 4 | [0, 10] | −10.5363
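The Shekel family F26–F28 differs only in the number of summed terms m. The sketch below uses the commonly published Shekel parameters; the table above does not reproduce the a_i and c_i values, so these constants are an assumption here, not taken from the paper.

```python
import numpy as np

# Commonly published Shekel parameters (assumed; not listed in Table 3)
A = np.array([[4.0, 4.0, 4.0, 4.0], [1.0, 1.0, 1.0, 1.0], [8.0, 8.0, 8.0, 8.0],
              [6.0, 6.0, 6.0, 6.0], [3.0, 7.0, 3.0, 7.0], [2.0, 9.0, 2.0, 9.0],
              [5.0, 5.0, 3.0, 3.0], [8.0, 1.0, 8.0, 1.0], [6.0, 2.0, 6.0, 2.0],
              [7.0, 3.6, 7.0, 3.6]])
C = np.array([0.1, 0.2, 0.2, 0.4, 0.4, 0.6, 0.3, 0.7, 0.5, 0.5])

def shekel(x, m):
    # F26, F27, F28 correspond to m = 5, 7, 10, respectively
    x = np.asarray(x, dtype=float)
    return -np.sum(1.0 / (np.sum((x - A[:m]) ** 2, axis=1) + C[:m]))

print(shekel([4.0, 4.0, 4.0, 4.0], m=10))  # approx. -10.536, matching F28's listed minimum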
Table 4. Comparison of unimodal functions test results.
Function | Metric | MEHHO | HHO | WOA | MPA | GWO | PSO | BOA
F1 | Mean | 0 | 1.36 × 10^−96 | 8.44 × 10^−72 | 5.98 × 10^−23 | 2.94 × 10^−27 | 2.69 × 10^3 | 1.27 × 10^−11
F1 | Std | 0 | 4.95 × 10^−96 | 4.60 × 10^−71 | 7.77 × 10^−23 | 1.21 × 10^−26 | 1.47 × 10^3 | 9.19 × 10^−13
F1 | Time | 0.038 | 0.080 | 0.041 | 0.102 | 0.069 | 0.007 | 0.054
F2 | Mean | 0 | 2.67 × 10^−51 | 1.68 × 10^−50 | 2.85 × 10^−13 | 1.12 × 10^−16 | 3.20 × 10^1 | 3.96 × 10^−9
F2 | Std | 0 | 9.90 × 10^−51 | 9.05 × 10^−50 | 2.75 × 10^−13 | 7.44 × 10^−17 | 1.33 × 10^1 | 1.84 × 10^−9
F2 | Time | 0.041 | 0.088 | 0.062 | 0.084 | 0.072 | 0.011 | 0.070
F3 | Mean | 0 | 1.88 × 10^−72 | 4.22 × 10^4 | 2.41 × 10^−4 | 7.18 × 10^−6 | 8.15 × 10^3 | 1.25 × 10^−11
F3 | Std | 0 | 1.03 × 10^−71 | 1.28 × 10^4 | 5.09 × 10^−4 | 1.43 × 10^−5 | 3.74 × 10^3 | 9.15 × 10^−13
F3 | Time | 0.121 | 0.394 | 0.012 | 0.240 | 0.221 | 0.014 | 0.330
F4 | Mean | 0 | 4.21 × 10^−49 | 5.14 × 10^1 | 3.79 × 10^−9 | 8.78 × 10^−7 | 2.51 × 10^1 | 6.14 × 10^−9
F4 | Std | 0 | 2.23 × 10^−48 | 2.50 × 10^1 | 2.56 × 10^−9 | 6.16 × 10^−7 | 4.83 | 3.95 × 10^−10
F4 | Time | 0.037 | 0.100 | 0.006 | 0.082 | 0.064 | 0.007 | 0.093
F5 | Mean | 6.90 × 10^−3 | 1.34 × 10^−2 | 2.80 × 10^1 | 2.52 × 10^1 | 2.71 × 10^1 | 4.29 × 10^5 | 2.90 × 10^1
F5 | Std | 7.80 × 10^−3 | 1.86 × 10^−2 | 4.66 × 10^−1 | 3.35 × 10^−1 | 7.35 × 10^−1 | 4.72 × 10^5 | 2.49 × 10^−2
F5 | Time | 0.014 | 0.029 | 0.009 | 0.021 | 0.017 | 0.004 | 0.016
F6 | Mean | 5.28 × 10^−5 | 1.19 × 10^−4 | 3.87 × 10^−1 | 8.93 × 10^−8 | 7.79 × 10^−1 | 2.13 × 10^3 | 6.17
F6 | Std | 1.42 × 10^−4 | 1.20 × 10^−4 | 2.57 × 10^−1 | 2.65 × 10^−7 | 4.09 × 10^−1 | 9.04 × 10^2 | 5.76 × 10^−1
F6 | Time | 0.053 | 0.063 | 0.015 | 0.361 | 0.029 | 0.012 | 0.026
F7 | Mean | 5.45 × 10^−5 | 1.40 × 10^−4 | 3.90 × 10^−3 | 1.50 × 10^−3 | 2.10 × 10^−3 | 1.33 | 1.00 × 10^−3
F7 | Std | 5.54 × 10^−5 | 1.49 × 10^−4 | 5.50 × 10^−3 | 5.97 × 10^−4 | 9.10 × 10^−4 | 5.76 × 10^−1 | 4.54 × 10^−4
F7 | Time | 0.033 | 0.035 | 0.053 | 0.086 | 0.024 | 0.011 | 0.060
F8 | Mean | 0 | 2.60 × 10^−89 | 4.00 × 10^−65 | 3.74 × 10^−17 | 5.91 × 10^−22 | 1.96 × 10^9 | 1.55 × 10^−11
F8 | Std | 0 | 1.40 × 10^−88 | 2.19 × 10^−64 | 4.11 × 10^−17 | 5.52 × 10^−22 | 8.19 × 10^8 | 1.15 × 10^−12
F8 | Time | 0.069 | 0.134 | 0.083 | 0.251 | 0.137 | 0.016 | 0.094
F9 | Mean | 0 | 4.21 × 10^−97 | 2.29 × 10^−74 | 3.67 × 10^−26 | 1.82 × 10^−31 | 3.99 | 7.87 × 10^−12
F9 | Std | 0 | 1.61 × 10^−96 | 1.17 × 10^−73 | 4.46 × 10^−26 | 2.75 × 10^−31 | 1.21 | 1.09 × 10^−12
F9 | Time | 0.057 | 0.157 | 0.072 | 0.259 | 0.140 | 0.022 | 0.104
F10 | Mean | 0 | 3.68 × 10^−94 | 7.58 × 10^−71 | 1.98 × 10^−19 | 5.62 × 10^−24 | 3.27 × 10^7 | 1.40 × 10^−11
F10 | Std | 0 | 1.06 × 10^−93 | 2.87 × 10^−70 | 2.37 × 10^−19 | 1.34 × 10^−23 | 1.84 × 10^7 | 1.20 × 10^−12
F10 | Time | 0.100 | 0.260 | 0.162 | 0.263 | 0.190 | 0.017 | 0.203
Table 5. Comparison of multimodal functions test results.
Function | Metric | MEHHO | HHO | WOA | MPA | GWO | PSO | BOA
F11 | Mean | 0 | 0 | 1.89 × 10^−15 | 0 | 2.01 | 1.44 × 10^2 | 2.79 × 10^−13
F11 | Std | 0 | 0 | 1.04 × 10^−14 | 0 | 3.58 | 3.01 × 10^1 | 6.73 × 10^−13
F11 | Time | 0.048 | 0.103 | 0.066 | 0.146 | 0.057 | 0.019 | 0.115
F12 | Mean | 8.88 × 10^−16 | 8.88 × 10^−16 | 5.15 × 10^−15 | 1.25 × 10^−12 | 9.68 × 10^−14 | 1.10 × 10^1 | 6.04 × 10^−9
F12 | Std | 0 | 0 | 1.59 × 10^−15 | 5.26 × 10^−13 | 1.53 × 10^−14 | 1.40 | 2.28 × 10^−10
F12 | Time | 0.033 | 0.084 | 0.064 | 0.128 | 0.115 | 0.007 | 0.140
F13 | Mean | 0 | 0 | 0 | 0 | 4.20 × 10^−3 | 2.08 × 10^1 | 8.08 × 10^−12
F13 | Std | 0 | 0 | 0 | 0 | 8.90 × 10^−3 | 9.54 | 3.42 × 10^−12
F13 | Time | 0.020 | 0.076 | 0.052 | 0.104 | 0.035 | 0.008 | 0.134
F14 | Mean | 2.99 × 10^−6 | 8.14 × 10^−6 | 2.75 × 10^−2 | 6.53 × 10^−6 | 5.12 × 10^−2 | 7.74 × 10^2 | 7.27 × 10^−1
F14 | Std | 5.33 × 10^−6 | 9.83 × 10^−6 | 4.88 × 10^−2 | 2.7 × 10^−5 | 3.03 × 10^−2 | 1.69 × 10^3 | 1.75 × 10^−1
F14 | Time | 0.044 | 0.135 | 0.017 | 0.417 | 0.021 | 0.237 | 0.024
F15 | Mean | 4.6 × 10^−5 | 9.1 × 10^−5 | 6.03 × 10^−1 | 5.60 × 10^−3 | 6.05 × 10^−1 | 1.45 × 10^5 | 2.98
F15 | Std | 6.89 × 10^−5 | 1.16 × 10^−4 | 2.85 × 10^−1 | 1.40 × 10^−2 | 2.53 × 10^−1 | 2.26 × 10^5 | 3.83 × 10^−2
F15 | Time | 0.075 | 0.093 | 0.042 | 0.725 | 0.055 | 0.090 | 0.038
F16 | Mean | 0 | 3.47 × 10^−51 | 1.04 × 10^−31 | 1.09 × 10^−13 | 4.27 × 10^−4 | 1.62 × 10^1 | 4.07 × 10^−9
F16 | Std | 0 | 1.7 × 10^−50 | 5.72 × 10^−31 | 1.33 × 10^−13 | 4.23 × 10^−4 | 4.25 | 1.13 × 10^−9
F16 | Time | 0.039 | 0.096 | 0.041 | 0.102 | 0.031 | 0.005 | 0.124
F17 | Mean | 0 | 0 | 3.90 × 10^−3 | 4.90 × 10^−3 | 5.80 × 10^−3 | 5.00 × 10^−1 | 5.00 × 10^−3
F17 | Std | 0 | 0 | 2.00 × 10^−3 | 2.52 × 10^−16 | 3.60 × 10^−3 | 1.46 × 10^−5 | 2.46 × 10^−4
F17 | Time | 0.046 | 0.089 | 0.030 | 0.118 | 0.086 | 0.013 | 0.077
F18 | Mean | 0 | 3.43 × 10^−48 | 1.57 × 10^−1 | 1.40 × 10^−1 | 1.90 × 10^−1 | 6.55 | 1.86 × 10^−1
F18 | Std | 0 | 1.87 × 10^−47 | 8.97 × 10^−2 | 4.98 × 10^−2 | 4.81 × 10^−2 | 1.37 | 3.29 × 10^−2
F18 | Time | 0.067 | 0.133 | 0.014 | 0.066 | 0.014 | 0.012 | 0.025
F19 | Mean | 0 | 1.1 × 10^−280 | 5.8 × 10^−115 | 7.49 × 10^−64 | 9.18 × 10^−66 | 5.30 × 10^−4 | 5.32 × 10^−12
F19 | Std | 0 | 0 | 3.2 × 10^−114 | 1.91 × 10^−63 | 2.91 × 10^−65 | 4.70 × 10^−4 | 2.81 × 10^−12
F19 | Time | 0.040 | 0.232 | 0.133 | 0.163 | 0.184 | 0.038 | 0.111
F20 | Mean | 0 | 3.51 × 10^−12 | 4.41 × 10^−12 | 6.47 × 10^−12 | 3.65 × 10^−8 | 2.88 × 10^−7 | 2.36 × 10^−7
F20 | Std | 0 | 5.98 × 10^−15 | 1.47 × 10^−12 | 3.16 × 10^−12 | 9.40 × 10^−8 | 4.27 × 10^−7 | 2.43 × 10^−7
F20 | Time | 0.116 | 0.043 | 0.013 | 0.070 | 0.021 | 0.029 | 0.071
Table 6. Comparison of fixed-dimension multimodal functions test results.
Function | Metric | MEHHO | HHO | WOA | MPA | GWO | PSO | BOA
F21 | Mean | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −3.56
F21 | Std | 4.60 × 10^−7 | 2.47 × 10^−9 | 2.87 × 10^−9 | 4.34 × 10^−16 | 8.77 × 10^−9 | 7.14 × 10^−5 | 8.35 × 10^−2
F21 | Time | 0.041 | 0.048 | 0.014 | 0.042 | 0.011 | 0.015 | 0.037
F22 | Mean | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 7.97 × 10^−1
F22 | Std | 2.99 × 10^−5 | 4.95 × 10^−6 | 6.86 × 10^−6 | 0 | 2.30 × 10^−7 | 8.10 × 10^−6 | 1.01
F22 | Time | 0.015 | 0.019 | 0.008 | 0.020 | 0.006 | 0.008 | 0.006
F23 | Mean | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 5.82
F23 | Std | 3.29 × 10^−5 | 4.91 × 10^−7 | 1.40 × 10^−4 | 1.66 × 10^−15 | 4.25 × 10^−5 | 1.08 × 10^−4 | 5.33
F23 | Time | 0.017 | 0.027 | 0.009 | 0.033 | 0.009 | 0.011 | 0.008
F24 | Mean | −3.00 × 10^−1 | −3.00 × 10^−1 | −3.00 × 10^−1 | −3.00 × 10^−1 | −3.00 × 10^−1 | −3.86 | 6.14 × 10^−9
F24 | Std | 2.26 × 10^−16 | 2.26 × 10^−16 | 2.26 × 10^−16 | 2.26 × 10^−16 | 2.26 × 10^−16 | 6.34 × 10^−5 | 3.24 × 10^−4
F24 | Time | 0.013 | 0.016 | 0.007 | 0.018 | 0.005 | 0.008 | 0.011
F25 | Mean | −3.07 | −3.08 | −3.21 | −3.32 | −3.27 | −3.31 | 2.90 × 10^1
F25 | Std | 1.55 × 10^−1 | 1.36 × 10^−1 | 1.03 × 10^−1 | 5.20 × 10^−12 | 7.22 × 10^−2 | 3.16 × 10^−2 | 4.50 × 10^−1
F25 | Time | 0.024 | 0.043 | 0.009 | 0.032 | 0.010 | 0.010 | 0.009
F26 | Mean | −1.00 × 10^1 | −5.05 | −7.89 | −1.02 × 10^1 | −9.65 | −6.85 | −4.26
F26 | Std | 1.73 × 10^−1 | 5.64 × 10^−3 | 2.53 | 2.43 × 10^−11 | 1.54 | 2.82 | 3.26 × 10^−1
F26 | Time | 0.026 | 0.028 | 0.009 | 0.040 | 0.034 | 0.011 | 0.110
F27 | Mean | −1.03 × 10^1 | −5.57 | −7.90 | −1.04 × 10^1 | −1.02 × 10^1 | −7.56 | −3.85
F27 | Std | 1.70 × 10^−1 | 1.49 | 3.16 | 2.43 × 10^−11 | 9.70 × 10^−1 | 3.16 | 4.23 × 10^−1
F27 | Time | 0.025 | 0.028 | 0.011 | 0.046 | 0.036 | 0.012 | 0.088
F28 | Mean | −1.05 × 10^1 | −5.61 | −7.63 | −1.05 × 10^1 | −1.05 × 10^1 | −7.57 | −3.78
F28 | Std | 1.12 × 10^−1 | 1.49 | 3.25 | 2.93 × 10^−11 | 4.51 × 10^−4 | 3.56 | 6.20 × 10^−1
F28 | Time | 0.029 | 0.031 | 0.009 | 0.038 | 0.030 | 0.020 | 0.082
Table 7. Results comparison of variable-dimensional functions (50 dimensions).
Function | Optimizer | Mean | Std | Best | Worst
F1 | HHO | 5.38 × 10^−95 | 2.28 × 10^−94 | 2.8 × 10^−112 | 1.24 × 10^−93
F1 | MEHHO | 0 | 0 | 0 | 0
F2 | HHO | 9.2 × 10^−49 | 4.87 × 10^−48 | 3.32 × 10^−61 | 2.67 × 10^−47
F2 | MEHHO | 0 | 0 | 0 | 0
F3 | HHO | 5.07 × 10^−63 | 2.21 × 10^−62 | 2.35 × 10^−92 | 1.17 × 10^−61
F3 | MEHHO | 0 | 0 | 0 | 0
F4 | HHO | 1.43 × 10^−48 | 5.9 × 10^−48 | 9.29 × 10^−59 | 3.2 × 10^−47
F4 | MEHHO | 0 | 0 | 0 | 0
F5 | HHO | 1.89 × 10^−2 | 2.95 × 10^−2 | 5.48 × 10^−5 | 1.22 × 10^−1
F5 | MEHHO | 1.80 × 10^−2 | 2.10 × 10^−2 | 2.89 × 10^−6 | 9.28 × 10^−2
F6 | HHO | 2.85 × 10^−4 | 4.02 × 10^−4 | 6.20 × 10^−8 | 1.70 × 10^−3
F6 | MEHHO | 1.32 × 10^−4 | 2.12 × 10^−4 | 1.69 × 10^−8 | 8.55 × 10^−4
F7 | HHO | 1.62 × 10^−4 | 1.99 × 10^−4 | 1.10 × 10^−5 | 1.10 × 10^−3
F7 | MEHHO | 5.54 × 10^−5 | 5.47 × 10^−5 | 2.32 × 10^−6 | 2.61 × 10^−4
F8 | HHO | 2.61 × 10^−4 | 1.51 × 10^−86 | 1.54 × 10^−105 | 8.27 × 10^−86
F8 | MEHHO | 0 | 0 | 0 | 0
F9 | HHO | 7.00 × 10^−98 | 3.41 × 10^−97 | 3.36 × 10^−114 | 1.87 × 10^−96
F9 | MEHHO | 0 | 0 | 0 | 0
F10 | HHO | 9.4 × 10^−88 | 5.15 × 10^−87 | 1.9 × 10^−110 | 2.82 × 10^−86
F10 | MEHHO | 0 | 0 | 0 | 0
F11 | HHO | 0 | 0 | 0 | 0
F11 | MEHHO | 0 | 0 | 0 | 0
F12 | HHO | 8.88 × 10^−16 | 0 | 8.88 × 10^−16 | 8.88 × 10^−16
F12 | MEHHO | 8.88 × 10^−16 | 0 | 8.88 × 10^−16 | 8.88 × 10^−16
F13 | HHO | 0 | 0 | 0 | 0
F13 | MEHHO | 0 | 0 | 0 | 0
F14 | HHO | 5.75 × 10^−6 | 8.57 × 10^−6 | 5.02 × 10^−8 | 3.42 × 10^−5
F14 | MEHHO | 1.80 × 10^−6 | 2.31 × 10^−6 | 8.82 × 10^−11 | 8.65 × 10^−6
F15 | HHO | 1.06 × 10^−4 | 1.32 × 10^−4 | 2.71 × 10^−7 | 4.74 × 10^−4
F15 | MEHHO | 6.03 × 10^−5 | 7.24 × 10^−5 | 1.51 × 10^−9 | 2.39 × 10^−4
F16 | HHO | 6.23 × 10^−50 | 3.24 × 10^−49 | 3 × 10^−62 | 1.78 × 10^−48
F16 | MEHHO | 0 | 0 | 0 | 0
F17 | HHO | 0 | 0 | 0 | 0
F17 | MEHHO | 0 | 0 | 0 | 0
F18 | HHO | 3.28 × 10^−48 | 1.67 × 10^−47 | 5.39 × 10^−55 | 9.14 × 10^−47
F18 | MEHHO | 0 | 0 | 0 | 0
F19 | HHO | 1 × 10^−293 | 0 | 0 | 3 × 10^−292
F19 | MEHHO | 0 | 0 | 0 | 0
F20 | HHO | 1.21 × 10^−20 | 9.22 × 10^−24 | 1.21 × 10^−20 | 1.21 × 10^−20
F20 | MEHHO | 0 | 0 | 0 | 0
Table 8. Results comparison of variable-dimensional functions (100 dimensions).
Function | Optimizer | Mean | Std | Best | Worst
F1 | HHO | 5.65 × 10^−95 | 3.07 × 10^−94 | 2.1 × 10^−110 | 1.68 × 10^−93
F1 | MEHHO | 0 | 0 | 0 | 0
F2 | HHO | 2.45 × 10^−50 | 1.11 × 10^−49 | 7.88 × 10^−60 | 6.09 × 10^−49
F2 | MEHHO | 0 | 0 | 0 | 0
F3 | HHO | 2.62 × 10^−59 | 9.96 × 10^−59 | 2.79 × 10^−92 | 4.11 × 10^−58
F3 | MEHHO | 0 | 0 | 0 | 0
F4 | HHO | 7.75 × 10^−48 | 4.16 × 10^−47 | 2.27 × 10^−55 | 2.28 × 10^−46
F4 | MEHHO | 0 | 0 | 0 | 0
F5 | HHO | 4.21 × 10^−2 | 7.05 × 10^−2 | 1.65 × 10^−4 | 3.50 × 10^−1
F5 | MEHHO | 2.37 × 10^−2 | 2.94 × 10^−2 | 6.65 × 10^−5 | 1.00 × 10^−1
F6 | HHO | 8.52 × 10^−4 | 1.30 × 10^−3 | 1.66 × 10^−8 | 4.70 × 10^−3
F6 | MEHHO | 1.82 × 10^−4 | 2.52 × 10^−4 | 5.89 × 10^−7 | 1.20 × 10^−3
F7 | HHO | 2.04 × 10^−4 | 2.16 × 10^−4 | 5.36 × 10^−7 | 8.43 × 10^−4
F7 | MEHHO | 5.88 × 10^−5 | 6.43 × 10^−5 | 3.56 × 10^−7 | 2.22 × 10^−4
F8 | HHO | 3.63 × 10^−87 | 1.98 × 10^−86 | 2.96 × 10^−106 | 1.08 × 10^−85
F8 | MEHHO | 0 | 0 | 0 | 0
F9 | HHO | 8.06 × 10^−96 | 4.17 × 10^−95 | 4.03 × 10^−113 | 2.29 × 10^−94
F9 | MEHHO | 0 | 0 | 0 | 0
F10 | HHO | 3.12 × 10^−90 | 1.71 × 10^−89 | 1.9 × 10^−109 | 9.35 × 10^−89
F10 | MEHHO | 0 | 0 | 0 | 0
F11 | HHO | 0 | 0 | 0 | 0
F11 | MEHHO | 0 | 0 | 0 | 0
F12 | HHO | 8.88 × 10^−16 | 0 | 8.88 × 10^−16 | 8.88 × 10^−16
F12 | MEHHO | 8.88 × 10^−16 | 0 | 8.88 × 10^−16 | 8.88 × 10^−16
F13 | HHO | 0 | 0 | 0 | 0
F13 | MEHHO | 0 | 0 | 0 | 0
F14 | HHO | 3.13 × 10^−6 | 4.40 × 10^−6 | 7.29 × 10^−9 | 1.59 × 10^−5
F14 | MEHHO | 1.57 × 10^−6 | 2.31 × 10^−6 | 2.58 × 10^−11 | 1.11 × 10^−5
F15 | HHO | 2.79 × 10^−4 | 5.00 × 10^−4 | 6.34 × 10^−7 | 2.10 × 10^−3
F15 | MEHHO | 5.83 × 10^−5 | 7.05 × 10^−5 | 1.14 × 10^−9 | 2.87 × 10^−4
F16 | HHO | 2.55 × 10^−6 | 1.4 × 10^−5 | 9.63 × 10^−60 | 7.65 × 10^−5
F16 | MEHHO | 0 | 0 | 0 | 0
F17 | HHO | 0 | 0 | 0 | 0
F17 | MEHHO | 0 | 0 | 0 | 0
F18 | HHO | 1.5 × 10^−49 | 6.76 × 10^−49 | 1.32 × 10^−55 | 3.68 × 10^−48
F18 | MEHHO | 0 | 0 | 0 | 0
F19 | HHO | 2.8 × 10^−298 | 0 | 0 | 8.4 × 10^−297
F19 | MEHHO | 0 | 0 | 0 | 0
F20 | HHO | 4.68 × 10^−42 | 6.21 × 10^−44 | 4.66 × 10^−42 | 5.00 × 10^−42
F20 | MEHHO | 4.66 × 10^−43 | 1.42 × 10^−42 | 0 | 4.66 × 10^−42
Table 9. Results comparison of variable-dimensional functions (500 dimensions).
Function | Optimizer | Mean | Std | Best | Worst
F1 | HHO | 1.97 × 10^−96 | 6.88 × 10^−96 | 5.9 × 10^−114 | 3.63 × 10^−95
F1 | MEHHO | 0 | 0 | 0 | 0
F2 | HHO | 1.59 × 10^−49 | 3.83 × 10^−49 | 8.98 × 10^−59 | 1.69 × 10^−48
F2 | MEHHO | 0 | 0 | 0 | 0
F3 | HHO | 1.09 × 10^−37 | 5.96 × 10^−37 | 1.26 × 10^−74 | 3.27 × 10^−36
F3 | MEHHO | 0 | 0 | 0 | 0
F4 | HHO | 2.56 × 10^−47 | 8.68 × 10^−47 | 4.67 × 10^−56 | 4.04 × 10^−46
F4 | MEHHO | 0 | 0 | 0 | 0
F5 | HHO | 2.31 × 10^−1 | 3.76 × 10^−1 | 2.00 × 10^−3 | 1.80
F5 | MEHHO | 1.53 × 10^−1 | 1.73 × 10^−1 | 1.35 × 10^−4 | 6.49 × 10^−1
F6 | HHO | 3.60 × 10^−3 | 6.00 × 10^−3 | 1.22 × 10^−5 | 2.29 × 10^−2
F6 | MEHHO | 1.40 × 10^−3 | 2.10 × 10^−3 | 3.00 × 10^−7 | 7.60 × 10^−3
F7 | HHO | 2.08 × 10^−4 | 2.96 × 10^−4 | 1.59 × 10^−5 | 1.30 × 10^−3
F7 | MEHHO | 5.58 × 10^−5 | 5.40 × 10^−5 | 6.12 × 10^−7 | 2.33 × 10^−4
F8 | HHO | 1.04 × 10^−89 | 2.66 × 10^−89 | 1.05 × 10^−104 | 1.03 × 10^−88
F8 | MEHHO | 0 | 0 | 0 | 0
F9 | HHO | 5.01 × 10^−99 | 2.06 × 10^−98 | 3.54 × 10^−110 | 1.13 × 10^−97
F9 | MEHHO | 0 | 0 | 0 | 0
F10 | HHO | 1.62 × 10^−88 | 8.84 × 10^−88 | 4.1 × 10^−116 | 4.84 × 10^−87
F10 | MEHHO | 0 | 0 | 0 | 0
F11 | HHO | 0 | 0 | 0 | 0
F11 | MEHHO | 0 | 0 | 0 | 0
F12 | HHO | 8.88 × 10^−16 | 0 | 8.88 × 10^−16 | 8.88 × 10^−16
F12 | MEHHO | 8.88 × 10^−16 | 0 | 8.88 × 10^−16 | 8.88 × 10^−16
F13 | HHO | 0 | 0 | 0 | 0
F13 | MEHHO | 0 | 0 | 0 | 0
F14 | HHO | 2.13 × 10^−6 | 3.37 × 10^−6 | 4.33 × 10^−8 | 1.24 × 10^−5
F14 | MEHHO | 7.06 × 10^−7 | 1.71 × 10^−6 | 6.05 × 10^−10 | 9.20 × 10^−6
F15 | HHO | 6.12 × 10^−4 | 7.75 × 10^−4 | 3.00 × 10^−6 | 2.90 × 10^−3
F15 | MEHHO | 2.64 × 10^−4 | 3.69 × 10^−4 | 7.17 × 10^−7 | 1.40 × 10^−3
F16 | HHO | 3.51 × 10^−5 | 1.92 × 10^−4 | 3.64 × 10^−57 | 1.10 × 10^−3
F16 | MEHHO | 0 | 0 | 0 | 0
F17 | HHO | 0 | 0 | 0 | 0
F17 | MEHHO | 0 | 0 | 0 | 0
F18 | HHO | 9.02 × 10^−48 | 4.45 × 10^−47 | 7.89 × 10^−57 | 2.44 × 10^−46
F18 | MEHHO | 0 | 0 | 0 | 0
F19 | HHO | 1.3 × 10^−288 | 0 | 0 | 3.8 × 10^−287
F19 | MEHHO | 0 | 0 | 0 | 0
F20 | HHO | 4.51 × 10^−215 | 0 | 4.46 × 10^−215 | 4.72 × 10^−215
F20 | MEHHO | 4.47 × 10^−215 | 0 | 4.46 × 10^−215 | 4.50 × 10^−215
Table 10. Statistical results of Wilcoxon signed-rank test on unimodal functions.
Function | Value | Dim=30 | Dim=50 | Dim=100 | Dim=500
F1 | p-value | 2.00 × 10^−6 | 2.00 × 10^−6 | 2.00 × 10^−6 | 2.00 × 10^−6
F1 | conclusion | + | + | + | +
F2 | p-value | 2.00 × 10^−6 | 2.00 × 10^−6 | 2.00 × 10^−6 | 2.00 × 10^−6
F2 | conclusion | + | + | + | +
F3 | p-value | 2.00 × 10^−6 | 2.00 × 10^−6 | 2.00 × 10^−6 | 2.00 × 10^−6
F3 | conclusion | + | + | + | +
F4 | p-value | 2.00 × 10^−6 | 2.00 × 10^−6 | 2.00 × 10^−6 | 2.00 × 10^−6
F4 | conclusion | + | + | + | +
F5 | p-value | 0.405 | 0.453 | 0.066 | 0.781
F5 | conclusion | − | − | − | −
F6 | p-value | 0.004 | 0.01 | 0.014 | 0.465
F6 | conclusion | + | + | + | −
F7 | p-value | 0.005 | 0.004 | 0.004 | 1.60 × 10^−4
F7 | conclusion | + | + | + | +
F8 | p-value | 2.00 × 10^−6 | 2.00 × 10^−6 | 2.00 × 10^−6 | 2.00 × 10^−6
F8 | conclusion | + | + | + | +
F9 | p-value | 2.00 × 10^−6 | 2.00 × 10^−6 | 2.00 × 10^−6 | 2.00 × 10^−6
F9 | conclusion | + | + | + | +
F10 | p-value | 2.00 × 10^−6 | 2.00 × 10^−6 | 2.00 × 10^−6 | 2.00 × 10^−6
F10 | conclusion | + | + | + | +
Table 11. Statistical results of Wilcoxon signed-rank test on multimodal functions.
Function | Value | Dim=30 | Dim=50 | Dim=100 | Dim=500
F11 | p-value | NA | NA | NA | NA
F11 | conclusion | = | = | = | =
F12 | p-value | NA | NA | NA | NA
F12 | conclusion | = | = | = | =
F13 | p-value | NA | NA | NA | NA
F13 | conclusion | = | = | = | =
F14 | p-value | 0.002 | 5.30 × 10^−5 | 0.329 | 0.141
F14 | conclusion | + | + | − | −
F15 | p-value | 0.766 | 1.20 × 10^−5 | 0.237 | 0.006
F15 | conclusion | − | + | − | +
F16 | p-value | 2.00 × 10^−6 | 2.00 × 10^−6 | 2.00 × 10^−6 | 2.00 × 10^−6
F16 | conclusion | + | + | + | +
F17 | p-value | NA | NA | NA | NA
F17 | conclusion | = | = | = | =
F18 | p-value | 2.00 × 10^−6 | 2.00 × 10^−6 | 2.00 × 10^−6 | 2.00 × 10^−6
F18 | conclusion | + | + | + | +
F19 | p-value | 8.70 × 10^−5 | 1.32 × 10^−4 | 1.32 × 10^−4 | 2.92 × 10^−4
F19 | conclusion | + | + | + | +
F20 | p-value | 2.00 × 10^−6 | 2.00 × 10^−6 | 2.00 × 10^−6 | 4.90 × 10^−5
F20 | conclusion | + | + | + | +
Table 12. Statistical results of Wilcoxon signed-rank test on fixed-dimension multimodal functions.
Function | F21 | F22 | F23 | F24 | F25 | F26 | F27 | F28
p-value | 2.00 × 10^−6 | 0.001 | 5.00 × 10^−6 | NA | 0.079 | 2.00 × 10^−6 | 2.00 × 10^−6 | 2.00 × 10^−6
conclusion | + | + | + | = | − | + | + | +
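The entries in Tables 10–12 are paired Wilcoxon signed-rank comparisons between MEHHO and a competitor over repeated independent runs ("+" significantly better, "−" not significantly better, "=" indistinguishable; NA appears where, presumably, all paired differences are zero). A minimal SciPy sketch of one such comparison, using placeholder run data rather than the paper's actual results, is:

```python
import numpy as np
from scipy.stats import wilcoxon

# Placeholder per-run best fitness values for one benchmark function;
# in the experiments these would come from repeated independent runs.
rng = np.random.default_rng(0)
mehho_runs = rng.random(30) * 1e-6
hho_runs = rng.random(30) * 1e-4

stat, p = wilcoxon(mehho_runs, hho_runs)  # paired, two-sided by default
print(f"p-value = {p:.2e}; significant at the usual 0.05 level: {p < 0.05}")
```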
Table 13. Comparison of optimization results of four improved algorithms on common functions.
Function | Metric | MEHHO | hHHO-PSO | hHHO-GWO | hHHO-SCA
F1 | Mean | 0 | 7.62 × 10^−98 | 4.61 × 10^−96 | 1.86 × 10^−91
F1 | Std | 0 | 3.11 × 10^−97 | 2.50 × 10^−95 | 9.49 × 10^−91
F2 | Mean | 0 | 2.49 × 10^−51 | 1.55 × 10^−48 | 2.46 × 10^−51
F2 | Std | 0 | 1.20 × 10^−50 | 8.42 × 10^−48 | 1.11 × 10^−50
F3 | Mean | 0 | 7.38 × 10^−75 | 1.53 × 10^−68 | 8.88 × 10^−72
F3 | Std | 0 | 4.02 × 10^−74 | 8.36 × 10^−68 | 4.86 × 10^−71
F4 | Mean | 0 | 1.23 × 10^−47 | 3.30 × 10^−49 | 8.02 × 10^−49
F4 | Std | 0 | 6.73 × 10^−47 | 1.21 × 10^−48 | 2.83 × 10^−48
F5 | Mean | 6.90 × 10^−3 | 7.32 × 10^−3 | 1.70 × 10^−2 | 1.43 × 10^−2
F5 | Std | 7.80 × 10^−3 | 8.53 × 10^−3 | 2.16 × 10^−2 | 2.02 × 10^−2
F6 | Mean | 5.28 × 10^−5 | 1.44 × 10^−4 | 1.64 × 10^−4 | 2.24 × 10^−4
F6 | Std | 1.42 × 10^−4 | 2.5 × 10^−4 | 3.08 × 10^−4 | 3.38 × 10^−4
F7 | Mean | 5.45 × 10^−5 | 1.77 × 10^−4 | 1.49 × 10^−4 | 1.22 × 10^−4
F7 | Std | 5.54 × 10^−5 | 1.74 × 10^−4 | 1.15 × 10^−4 | 1.10 × 10^−4
F11 | Mean | 0 | 0 | 0 | 0
F11 | Std | 0 | 0 | 0 | 0
F12 | Mean | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16 | 8.88 × 10^−16
F12 | Std | 0 | 0 | 0 | 0
F13 | Mean | 0 | 0 | 0 | 0
F13 | Std | 0 | 0 | 0 | 0
F14 | Mean | 2.99 × 10^−6 | 1.13 × 10^−5 | 1.13 × 10^−5 | 1.13 × 10^−5
F14 | Std | 5.33 × 10^−6 | 1.50 × 10^−5 | 1.5 × 10^−5 | 1.5 × 10^−5
F15 | Mean | 4.6 × 10^−5 | 1.13 × 10^−4 | 1.13 × 10^−4 | 1.13 × 10^−4
F15 | Std | 6.89 × 10^−5 | 1.66 × 10^−4 | 1.66 × 10^−4 | 1.66 × 10^−4
F21 | Mean | −1.03 | −1.03 | −1.03 | −1.03
F21 | Std | 4.60 × 10^−7 | 4.80 × 10^−9 | 3.97 × 10^−10 | 1.80 × 10^−9
F22 | Mean | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1 | 3.98 × 10^−1
F22 | Std | 2.99 × 10^−5 | 3.49 × 10^−5 | 1.86 × 10^−5 | 2.15 × 10^−5
F23 | Mean | 3.00 | 3.00 | 3.00 | 3.00
F23 | Std | 3.29 × 10^−5 | 5.28 × 10^−7 | 2.65 × 10^−7 | 9.98 × 10^−7
F24 | Mean | −3.00 × 10^−1 | −3.86 | −3.86 | −3.86
F24 | Std | 2.26 × 10^−16 | 3.41 × 10^−3 | 4.30 × 10^−3 | 3.00 × 10^−3
F25 | Mean | −3.07 | −3.10 | −3.11 | −3.09
F25 | Std | 1.55 × 10^−1 | 1.03 × 10^−1 | 1.21 × 10^−1 | 1.09 × 10^−1
F26 | Mean | −1.00 × 10^1 | −5.05 | −5.22 | −5.21
F26 | Std | 1.73 × 10^−1 | 6.60 × 10^−3 | 8.97 × 10^−1 | 8.95 × 10^−1
F27 | Mean | −1.03 × 10^1 | −5.08 | −5.14 | −5.25
F27 | Std | 1.70 × 10^−1 | 4.12 × 10^−3 | 1.10 | 9.26 × 10^−1
F28 | Mean | −1.05 × 10^1 | −5.12 | −5.30 | −5.28
F28 | Std | 1.12 × 10^−1 | 5.77 × 10^−3 | 9.55 × 10^−1 | 8.65 × 10^−1