Article

Multi-Strategy Improved Harris Hawk Optimization Algorithm and Its Application in Path Planning

by Chaoli Tang, Wenyan Li *, Tao Han, Lu Yu and Tao Cui
School of Electrical & Information Engineering, Anhui University of Science and Technology, Huainan 232001, China
*
Author to whom correspondence should be addressed.
Biomimetics 2024, 9(9), 552; https://doi.org/10.3390/biomimetics9090552
Submission received: 8 August 2024 / Revised: 5 September 2024 / Accepted: 6 September 2024 / Published: 12 September 2024

Abstract

Path planning is a key problem in the autonomous navigation of mobile robots and a research hotspot in the field of robotics. Harris Hawk Optimization (HHO) faces challenges such as low solution accuracy and a slow convergence speed, and it easily falls into local optima in path planning applications. For this reason, this paper proposes a Multi-strategy Improved Harris Hawk Optimization (MIHHO) algorithm. First, a double adaptive weight strategy is used to enhance the search capability of the algorithm and significantly improve the convergence accuracy and speed of path planning; second, the Dimension Learning-based Hunting (DLH) search strategy is introduced to effectively balance exploration and exploitation while maintaining the diversity of the population; and then, a position update strategy based on the Dung Beetle Optimizer algorithm is proposed to reduce the possibility of the algorithm falling into local optimal solutions during path planning. The experimental results on the test functions show that the MIHHO algorithm ranks first in terms of performance, with significant improvements in optimization ability, convergence speed, and stability. Finally, MIHHO is applied to robot path planning, and the test results show that in four environments with different complexities and scales, the average path lengths obtained by MIHHO are improved by 1.99%, 14.45%, 4.52%, and 9.19% compared to HHO, respectively. These results indicate that MIHHO has significant performance advantages in path planning tasks and helps to improve the path planning efficiency and accuracy of mobile robots.

1. Introduction

With the increasing intelligence of modern industry, mobile robots have attracted more and more attention and have been widely used in industry, medical treatment, agriculture, the service industry, and other fields. As a key technology for the autonomous navigation of mobile robots, path planning is used to plan a collision-free path from the starting point to the endpoint for the robot in a complex environment [1]. However, existing path planning algorithms still face many challenges, especially in terms of solution accuracy, convergence speed, and avoiding local optima.
Path planning algorithms can be divided into traditional algorithms and swarm intelligence algorithms. Traditional algorithms include the Dijkstra algorithm [2], A-Star algorithm [3], and Artificial Potential Field (APF) [4]. The A-Star algorithm can find a fast and accurate shortest path solution through heuristic functions, but it has a large dependence on heuristic functions and a high memory consumption. The Dijkstra algorithm is simple to implement, but it is less efficient when dealing with maps containing a large number of nodes. The Artificial Potential Field method is simple and computationally efficient, making it suitable for real-time path planning, but it easily falls into local minima and has difficulty coping with complex environments.
Swarm intelligence algorithms are self-organizing, adaptive stochastic optimization algorithms with bionic behavior, formed by observing the living habits, foraging behavior, and social characteristics of biological populations [5]. Compared with traditional algorithms, swarm intelligence algorithms do not require gradient information, and they are flexible and easy to implement [6]. They are versatile, have high optimization efficiency, and perform excellently in solving complex problems. By simulating the collaboration and competition mechanisms of biological groups, swarm intelligence algorithms show great flexibility and adaptability, making them powerful complements to traditional algorithms. Due to these advantages, swarm intelligence algorithms have broad application prospects in various fields.
Because of their robustness, flexibility, and high search accuracy, swarm intelligence algorithms have been widely used in path planning. Common swarm intelligence algorithms for path planning include the Whale Optimization Algorithm (WOA), Grey Wolf Optimizer (GWO), Ant Colony Optimization (ACO), Sparrow Search Algorithm (SSA), etc. Tian et al. [7] proposed a hybrid Firefly–Whale Optimization Algorithm (FWOA) combining multiple swarms and dyadic learning strategies by simulating whale foraging behavior and firefly blinking characteristics, which can quickly find the optimum in complex mobile robot working environments and significantly improves accuracy and efficiency when solving the path planning problem. Yu et al. [8] proposed a hybrid of the Grey Wolf Optimizer and Differential Evolution algorithms to solve the UAV path planning problem; the improved algorithm yields shorter and smoother paths. Wu et al. [9] proposed a Modified Adaptive Ant Colony Optimization algorithm (MAACO), which effectively reduces the path length and the number of turns and improves the convergence speed. He et al. [10] proposed an improved chaotic Sparrow Search Algorithm to overcome the slow convergence and tendency to fall into local optima in UAV path planning for 3D complex environments. However, these algorithms still suffer from insufficient solution accuracy and slow convergence and can easily fall into local optima in path planning, which limits their application in more complex scenarios.
The Harris Hawks Optimization (HHO) algorithm is a swarm intelligence optimization algorithm proposed by Heidari et al. in 2019 [11]. This algorithm simulates the hunting behavior of Harris hawks to perform the optimization search. Compared with other intelligent algorithms, it has the advantages of few parameters, simple principles and operation, and strong versatility. At present, the HHO algorithm has been widely used in scheduling problems [12], image processing [13], power system configuration [14,15], neural network training [16], and path planning [17,18]. However, the HHO algorithm also suffers from poor solution accuracy, a slow convergence speed, and a tendency to fall into local optima. To solve these problems, researchers have proposed a variety of improvement strategies. Qu et al. [19] proposed an HHO algorithm with an information exchange function: by exploring a shared area, information exchange and sharing between Harris hawks are realized, and the convergence accuracy and efficiency are improved. Jiao et al. [20] combined orthogonal learning and reverse learning to effectively and accurately estimate the parameters of solar cells and photovoltaic modules. Zou et al. [21] improved the exploitation phase of the HHO algorithm by using differential perturbation and a greedy strategy to enhance population diversity; in addition, a new conversion strategy was used to make the algorithm switch flexibly between exploration and exploitation, thereby enhancing its global search ability. Li et al. [22] proposed an exploration strategy based on a logarithmic spiral and reverse learning to improve the exploration ability of HHO. Hussain et al. [23] integrated the sine cosine algorithm to remedy the ineffective search behavior of HHO and avoided search stagnation by dynamically adjusting the candidate solutions.
HHO is also widely used in path planning. Li et al. [24] proposed a path planning algorithm based on a bionic neural network and an improved HHO with a periodic energy decline adjustment mechanism, which solves the problem of multi-UAV path planning in three-dimensional space. Belge et al. [17] used a hybrid HHO-GWO meta-heuristic optimization algorithm for path planning and tracking while avoiding obstacles. Huang et al. [25] introduced an improved sine-trend search strategy and a nonlinear jump intensity into the HHO algorithm; the improved algorithm shows good feasibility and effectiveness in trajectory planning. Aiming to solve the problem of multi-robot path planning, Nasr [26] proposed a fusion algorithm combining K-means clustering and HHO, which effectively improves the convergence speed and reduces the computational cost of the algorithm. Dehkordi et al. [27] introduced chaotic mapping into the HHO algorithm to ensure a uniform distribution of the initial population; in the path optimization problem of the Internet of Vehicles, the improved algorithm has been shown to have significantly improved convergence speed and accuracy.
To address these problems faced by the HHO algorithm in path planning, this paper proposes a Multi-strategy Improved Harris Hawk Optimization (MIHHO), which aims to improve the algorithm’s solution accuracy, convergence speed, and ability to avoid local optima. The main contributions of this paper include the following:
  • Double adaptive weight strategy: In MIHHO, the convergence accuracy and speed of path planning are effectively improved by enhancing the global search capability in the exploration phase and strengthening the local search capability in the development phase.
  • The Dimension Learning-Based Hunting (DLH) search strategy: By introducing the dimensional learning strategy, MIHHO can effectively balance global and local search capabilities and maintain the diversity of the population, thus enhancing the algorithm’s optimization ability.
  • Position update strategy based on Dung Beetle Optimizer (DBO) algorithm: A position update strategy based on the DBO algorithm is proposed, which reduces the possibility of the algorithm falling into a local optimal solution in the path planning process.
  • Performance test and analysis: The MIHHO algorithm significantly outperforms the other algorithms in terms of optimization-seeking ability, convergence speed, and stability through comparison experiments with 12 classical functions and the CEC2022 test function.
  • Path planning experiments: The performance of the MIHHO algorithm was comprehensively evaluated through global path planning simulation experiments in simple and complex environments using the optimal value, mean, and standard deviation of the path length and the number of iterations. The experimental results show that the improved strategy exhibits significant advantages in terms of the solution accuracy, convergence speed, and avoidance of local optima.
These contributions show that MIHHO has significant performance advantages in path planning tasks, providing shorter and more efficient path planning results than existing algorithms. Its effectiveness and applicability in different environments are demonstrated.

2. The Path Planning Optimization Problem

2.1. Environmental Modeling

The raster (grid) method is the most commonly used environment modeling method in path planning; it is simple and effective and can significantly reduce the complexity of modeling the environment. Therefore, this paper adopts the raster method to construct the working environment model of the mobile robot. In the raster environment, a free area is represented by a value of 0, and the corresponding grid cell is filled with white; an obstacle area is represented by a value of 1, and the corresponding grid cell is filled with black. If an obstacle does not fully occupy a grid cell, it is inflated to fill the whole cell. For the robot's path planning problem, the search cannot cross the rectangular boundaries of the grid. In addition, it is limited by obstacles, i.e., the robot's trajectory cannot pass through a grid region where obstacles are present. The transformation relationship between the grid serial number and the corresponding coordinates is shown in (1) [9]:
\begin{cases} x_n = \operatorname{mod}(n, R_x) - 0.5 \\ y_n = R_y + 0.5 - \operatorname{ceil}(n / R_y) \end{cases}   (1)
where (x_n, y_n) denotes the position coordinates of the nth grid cell, x_n is its horizontal coordinate, y_n is its vertical coordinate, R_x denotes the total number of rows of the environment model, R_y denotes the total number of columns of the environment model, n is the serial number of the grid cell, and ceil(·) and mod(·) are the round-up (ceiling) function and the remainder function, respectively.
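As an illustration, a minimal Python sketch of this serial-number-to-coordinate conversion is given below. The function name grid_to_xy and the handling of the case mod(n, R_x) = 0 (wrapping onto the last column) are our assumptions, since the paper does not spell out that boundary case.

```python
import math

def grid_to_xy(n, R_x, R_y):
    """Convert a 1-based grid serial number n to cell-centre coordinates (x_n, y_n), following Equation (1)."""
    x = n % R_x - 0.5                      # mod(n, R_x) - 0.5
    if x < 0:                              # assumption: wrap the remainder-zero case onto the last column
        x += R_x
    y = R_y + 0.5 - math.ceil(n / R_y)     # rows counted downwards from the top of the map
    return x, y

# Example: the 5th cell of a 20 x 20 raster map
print(grid_to_xy(5, 20, 20))               # -> (4.5, 19.5)
```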

2.2. Constraints and Fitness Functions

After setting up the relevant environmental data, a swarm intelligence algorithm is used to find an ideal path in the map that satisfies all requirements. In addition, a fitness function that incorporates the constraints is created; solutions that satisfy this function are retained, and those that do not are eliminated.
In MIHHO algorithm-based path planning, the position coordinates of the Harris hawk population are set to be updated at each iteration, representing a moving route for the robot. The MIHHO algorithm is used to find the optimal path from the start point to the goal point that meets the constraints from the 2D raster map.
  1. Map boundary and obstacle constraints:
The robot's moving path must be confined within the boundary of the raster map, and the constraints ub and lb are the boundaries of the search space for path planning. Any node (x_i, y_i) on the robot's moving path must satisfy the following boundary conditions:
\begin{cases} lb_x \le x_i \le ub_x \\ lb_y \le y_i \le ub_y \end{cases}, \quad \forall i,   (2)
where lb_x and ub_x are the lower and upper limits of the horizontal boundary, and lb_y and ub_y are the lower and upper limits of the vertical boundary.
At the same time, the robot's movement path cannot pass through the obstacle region, and any node (x_i, y_i) on the path must avoid the obstacle region O_j, that is,
(x_i, y_i) \notin O_j, \quad \forall i, j,   (3)
  2. Path continuity conditions:
The robot's movement path in the accessible area should avoid overlapping segments and detours. Assuming that the position coordinate of the robot at moment t is (x_t, y_t), the position coordinate of the robot at the next moment, (x_{t+1}, y_{t+1}), should satisfy
x_{t+1} > x_t \quad \text{or} \quad y_{t+1} > y_t,   (4)
  3. Path shortest condition:
To realize mobile robot path planning, the robot must find the shortest path from the start point to the goal point while satisfying the boundary constraints and path continuity conditions. The path length is an important indicator of path quality, and the goal of optimization is to minimize the total length of the path. The length of a path can be calculated from the Euclidean distances between all neighboring nodes on the path [28]. Specifically, the total length fit of a path can be expressed in the following form:
fit = \sum_{i=1}^{m} \sqrt{(x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2},   (5)
where (x_i, y_i) and (x_{i+1}, y_{i+1}) are the coordinates of two neighboring nodes on the path, and m is the total number of nodes on the path. The smaller the value of the objective function fit, the shorter the path, and the quality of a path is evaluated accordingly. To achieve path optimization, the planning objective is to minimize the path length, that is,
\min \ fit.   (6)
Eventually, by minimizing the path length, the optimal path that satisfies all the constraints can be found.
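To make the objective and constraints concrete, a small Python sketch is given below; it assumes way-points are stored as (x, y) tuples and obstacles as a set of occupied cell coordinates, which are our illustrative choices rather than a representation fixed by the paper.

```python
import math

def path_length(path):
    """Total Euclidean length of a path given as a list of (x, y) way-points (Equation (5))."""
    return sum(math.dist(path[i], path[i + 1]) for i in range(len(path) - 1))

def is_feasible(path, lb, ub, obstacles):
    """Check the boundary constraint (Equation (2)) and the obstacle constraint (Equation (3))."""
    for (x, y) in path:
        if not (lb[0] <= x <= ub[0] and lb[1] <= y <= ub[1]):
            return False                    # outside the map boundary
        if (x, y) in obstacles:
            return False                    # passes through an obstacle cell
    return True

# Toy example on a small map
route = [(0.5, 0.5), (1.5, 1.5), (2.5, 2.5)]
print(path_length(route))                                   # ~2.83
print(is_feasible(route, (0, 0), (20, 20), {(3.5, 3.5)}))   # True
```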

3. Harris Hawk Optimization Algorithm

The Harris Hawk Optimization (HHO) algorithm is a heuristic optimization algorithm based on the behavioral strategy of Harris hawks. The algorithm performs global optimization by imitating the hawks' group hunting behavior and surprise-pounce strategy. The hunting process is divided into the exploration stage, the transition from exploration to exploitation, and the exploitation stage [11].

3.1. Exploration Stage

As shown in [11], in the exploration stage, individuals of the Harris hawk population randomly perch at various locations, track and detect prey with their keen eyes, and search for prey globally according to Formula (7).
X(t+1) = \begin{cases} X_{rand}(t) - r_1 |X_{rand}(t) - 2 r_2 X(t)|, & q \ge 0.5 \\ [X_{rabbit}(t) - X_m(t)] - r_3 [lb + r_4 (ub - lb)], & q < 0.5 \end{cases}   (7)
where X(t) and X(t+1) are the position vectors of the individual at the current and next iteration, respectively; t is the number of iterations; X_{rand}(t) is a randomly selected individual position; X_{rabbit}(t) is the location of the rabbit (prey); r_1, r_2, r_3, r_4, and q are random numbers generated in the interval (0, 1); q is used to randomly select the strategy to be adopted; ub and lb are the upper and lower bounds of the search space; and |X_{rand}(t) - 2 r_2 X(t)| denotes a vector whose absolute value is taken element by element in the multidimensional space:
|X_{rand}(t) - 2 r_2 X(t)| = (|X_{rand,1}(t) - 2 r_2 X_1(t)|, |X_{rand,2}(t) - 2 r_2 X_2(t)|, \ldots, |X_{rand,n}(t) - 2 r_2 X_n(t)|)   (8)
X_m(t) is the average position of the individuals, expressed as
X_m(t) = \frac{1}{M} \sum_{k=1}^{M} X_k(t),   (9)
where X_k(t) is the position of the k-th individual in the population and M is the population size.
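A minimal Python sketch of this exploration-stage update is shown below; the function name hho_explore and the use of a NumPy random generator are our assumptions made for illustration.

```python
import numpy as np

def hho_explore(X, i, X_rabbit, lb, ub, rng):
    """One exploration-stage update of hawk i following Equation (7).

    X is the (M, D) population matrix and X_rabbit the best (prey) position found so far.
    """
    q, r1, r2, r3, r4 = rng.random(5)
    if q >= 0.5:
        # perch with reference to a randomly selected hawk
        X_rand = X[rng.integers(len(X))]
        return X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
    # perch with reference to the prey and the population mean (Equation (9))
    X_m = X.mean(axis=0)
    return (X_rabbit - X_m) - r3 * (lb + r4 * (ub - lb))

rng = np.random.default_rng(0)
pop = rng.uniform(-10, 10, size=(5, 2))
print(hho_explore(pop, 0, pop[0], np.full(2, -10.0), np.full(2, 10.0), rng))
```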

3.2. Transition from Exploration to Exploitation

The HHO algorithm switches between exploration and the different exploitation behaviors based on the escape energy of the prey. The escape energy is defined as follows:
E = 2 E_0 \left(1 - \frac{t}{T}\right),   (10)
where E_0 is the initial energy of the prey, a random number in the interval (−1, 1) that is updated automatically at each iteration; t is the current number of iterations; and T is the maximum number of iterations. When |E| \ge 1, the algorithm enters the exploration stage, and when |E| < 1, it enters the exploitation stage.
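The following short Python snippet illustrates how the escape energy drives the phase switch; the function name escape_energy is our own and is used only for illustration.

```python
import numpy as np

def escape_energy(t, T, rng):
    """Prey escape energy E of Equation (10); E0 is redrawn from (-1, 1) at every iteration."""
    E0 = rng.uniform(-1.0, 1.0)
    return 2.0 * E0 * (1.0 - t / T)

rng = np.random.default_rng(1)
for t in (0, 250, 499):
    E = escape_energy(t, 500, rng)
    phase = "exploration" if abs(E) >= 1 else "exploitation"
    print(t, round(E, 3), phase)
```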

3.3. Exploitation Stage

Based on the prey's escape behavior and the Harris hawks' roundup strategy, combined with the prey's escape energy E, the HHO algorithm uses four strategies to simulate the hawks' attack behavior during the exploitation stage. The probability of the prey escaping is denoted by r. When r \ge 0.5, the prey fails to escape; when r < 0.5, the probability of a successful escape is high.
(1) When 0.5 \le |E| < 1 and r \ge 0.5, the position update is the soft besiege:
X(t+1) = \Delta X(t) - E |J X_{rabbit}(t) - X(t)|,   (11)
where \Delta X(t) = X_{rabbit}(t) - X(t) represents the difference between the position of the prey and the current position of the individual, and J is a random number in the interval (0, 2).
(2) When |E| < 0.5 and r \ge 0.5, the position update is the hard besiege:
X(t+1) = X_{rabbit}(t) - E |\Delta X(t)|,   (12)
(3) When 0.5 \le |E| < 1 and r < 0.5, the position update is the soft besiege with progressive rapid dives:
X(t+1) = \begin{cases} Y, & f(Y) < f(X(t)) \\ Z, & f(Z) < f(X(t)) \end{cases}   (13)
Y = X_{rabbit}(t) - E |J X_{rabbit}(t) - X(t)|,   (14)
Z = Y + S \times LF,   (15)
where f(\cdot) is the fitness function, D is the dimension of the problem, S is a 1 × D random vector, and LF is the Levy flight function, whose formula is
LF = 0.01 \times \frac{\mu \times \delta}{|\nu|^{1/\beta}},   (16)
\delta = \left( \frac{\Gamma(1+\beta) \times \sin(\pi \beta / 2)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\frac{\beta-1}{2}}} \right)^{1/\beta},   (17)
where \mu and \nu are random values in (0, 1), \beta is a constant set to 1.5, and \Gamma is the standard gamma function with the following expression:
\Gamma(x) = \int_0^{+\infty} e^{-t} t^{x-1} \, dt,   (18)
(4) When |E| < 0.5 and r < 0.5, the position update is the hard besiege with progressive rapid dives:
X(t+1) = \begin{cases} Y, & f(Y) < f(X(t)) \\ Z, & f(Z) < f(X(t)) \end{cases}   (19)
Y = X_{rabbit}(t) - E |J X_{rabbit}(t) - X_m(t)|,   (20)
Z = Y + S \times LF,   (21)
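For illustration, a short Python sketch of the Levy flight step and of the soft besiege with progressive rapid dives is given below. Drawing μ and ν as uniform values in (0, 1) follows the description above, keeping the old position when neither dive improves the fitness is our assumption, and the function names are ours.

```python
import numpy as np
from math import gamma, sin, pi

def levy_flight(dim, beta=1.5, rng=None):
    """Levy flight step LF of Equations (16)-(17)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = ((gamma(1 + beta) * sin(pi * beta / 2)) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    mu = rng.random(dim)                  # mu, nu drawn in (0, 1) as described in the text
    nu = rng.random(dim)
    return 0.01 * mu * sigma / np.abs(nu) ** (1 / beta)

def soft_besiege_with_dives(x, x_rabbit, E, J, f, rng):
    """Soft besiege with progressive rapid dives (Equations (13)-(15))."""
    Y = x_rabbit - E * np.abs(J * x_rabbit - x)
    Z = Y + rng.random(len(x)) * levy_flight(len(x), rng=rng)   # S is a random 1 x D vector
    if f(Y) < f(x):
        return Y
    if f(Z) < f(x):
        return Z
    return x                              # assumption: keep the current position otherwise

rng = np.random.default_rng(2)
sphere = lambda v: float(np.sum(v ** 2))
x = rng.uniform(-5, 5, 3)
print(soft_besiege_with_dives(x, np.zeros(3), E=0.6, J=1.2, f=sphere, rng=rng))
```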

4. Proposed Algorithm

Because of structural limitations, HHO has insufficient search ability, its search process easily falls into local optima, and its convergence accuracy is low. Therefore, the HHO algorithm is improved by combining the double adaptive weight strategy, the Dimension Learning-Based Hunting (DLH) search strategy, and a position update strategy based on the Dung Beetle Optimizer algorithm to improve its convergence speed and solution quality.

4.1. Double Adaptive Weights Strategy

The inertia weight factor plays an important role in HHO and is closely related to the search ability of the algorithm. In the process of generating the Harris hawks' positions, the double adaptive weights ω_1 and ω_2 are used to update the positions. The weight values change adaptively as the iterations proceed, which improves the exploration and exploitation abilities of HHO and thus its convergence accuracy and speed. The specific formulas of ω_1 and ω_2 are as follows:
\omega_1 = \left(1 - \frac{t}{T}\right)^{1 - \tan(\pi(rand - 0.5))/T},   (22)
\omega_2 = \left(2 - \frac{2t}{T}\right)^{1 - \tan(\pi(rand - 0.5))/T},   (23)
The iterative process is divided into two stages. In the exploration stage, Equation (7) is updated as follows:
X(t+1) = \begin{cases} \omega_1 X_{rand}(t) - r_1 |X_{rand}(t) - 2 r_2 X(t)|, & q \ge 0.5 \\ \omega_1 [X_{rabbit}(t) - X_m(t)] - r_3 [lb + r_4 (ub - lb)], & q < 0.5 \end{cases}   (24)
In the exploitation stage, Equations (11), (12), (14) and (20) are updated as follows:
X(t+1) = \omega_2 \Delta X(t) - E |J X_{rabbit}(t) - X(t)|,   (25)
X(t+1) = \omega_2 X_{rabbit}(t) - E |\Delta X(t)|,   (26)
Y = \omega_2 X_{rabbit}(t) - E |J X_{rabbit}(t) - X(t)|,   (27)
Y = \omega_2 X_{rabbit}(t) - E |J X_{rabbit}(t) - X_m(t)|,   (28)
After generating the new position, it is checked whether it falls within the upper and lower bounds of the population search space. If it is not satisfied, it is adjusted according to Equation (29) so that it falls within the feasible solution space.
X(t+1) = \max(\min(X(t+1), ub), lb),   (29)
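The following Python snippet sketches the double adaptive weights and the boundary adjustment of Equation (29). The exact placement of the tan(·) perturbation in Equations (22) and (23) was reconstructed from the extracted text, so the exponent form used here should be read as our assumption; the values in the example are illustrative.

```python
import numpy as np

def double_adaptive_weights(t, T, rng=None):
    """Double adaptive weights of Equations (22)-(23): w1 decays from about 1, w2 from about 2."""
    rng = np.random.default_rng() if rng is None else rng
    expo = 1.0 - np.tan(np.pi * (rng.random() - 0.5)) / T   # small random perturbation of the exponent
    w1 = (1.0 - t / T) ** expo
    w2 = (2.0 - 2.0 * t / T) ** expo
    return w1, w2

def clamp(X_new, lb, ub):
    """Keep a newly generated position inside the search bounds (Equation (29))."""
    return np.maximum(np.minimum(X_new, ub), lb)

print(double_adaptive_weights(100, 500, np.random.default_rng(0)))
print(clamp(np.array([12.0, -15.0]), np.full(2, -10.0), np.full(2, 10.0)))
```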

4.2. Dimension Learning-Based Hunting (DLH) Search Strategy

Population diversity plays a key role in the convergence speed and accuracy of the algorithm. The population diversity of HHO gradually decreases over the iterations, so that, when solving high-dimensional complex problems, the algorithm falls into local optima and the global optimal solution cannot be found. Therefore, to ensure that the population maintains rich diversity during the iterations, the Dimension Learning-Based Hunting (DLH) search strategy is adopted. DLH constructs, in a different way, a neighborhood for each Harris hawk with which information can be shared. This dimensional learning method improves the hawks' ability to balance exploration and exploitation and maintains diversity [29].
As shown in [29], in the DLH search strategy, a new position for each Harris hawk X_i(t) is calculated using Equation (32), in which the individual learns from its different neighbors and from randomly selected Harris hawks in the population. In addition to the position X_i(t+1) produced by the exploration and exploitation phases, the DLH search strategy generates a second candidate, X_i^d(t+1), for hawk X_i(t). To this end, the Euclidean distance between the current position X_i(t) and its candidate position X_i(t+1) is calculated to obtain the neighborhood radius R_i(t):
R_i(t) = \| X_i(t) - X_i(t+1) \|,   (30)
Then, the neighborhood N_i(t) of X_i(t) is constructed with respect to R_i(t) by Equation (31):
N_i(t) = \{ X_j(t) \mid D_i(X_i(t), X_j(t)) \le R_i(t),\ X_j(t) \in \mathrm{Pop} \},   (31)
The constructed neighborhood of X_i(t) is then used for multi-neighborhood learning via Equation (32), in which the d-th dimension of the candidate X_i^d(t+1) is calculated from the d-th dimension of a random neighbor X_{n,d}(t) selected from N_i(t) and the d-th dimension of a random Harris hawk X_{r,d}(t) selected from the population, as follows:
X_i^d(t+1) = X_{i,d}(t) + rand \times (X_{n,d}(t) - X_{r,d}(t)),   (32)
Therefore, in the selection and update phase, the optimal candidate is selected by comparing the fitness values of the two candidate solutions through Equation (33):
X_i^1(t+1) = \begin{cases} X_i(t+1), & f(X_i(t+1)) < f(X_i^d(t+1)) \\ X_i^d(t+1), & f(X_i(t+1)) \ge f(X_i^d(t+1)) \end{cases}   (33)
The hawk's position is then updated with the selected candidate if its fitness value is better than that of the current position; otherwise, the current position remains unchanged.
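A compact Python sketch of the DLH step is given below; the fallback to the whole population when the neighborhood is empty, the per-dimension sampling scheme, and the function names are our assumptions, made to keep the example self-contained.

```python
import numpy as np

def dlh_candidate(X, x_hho_i, i, rng):
    """Build the DLH candidate for hawk i (Equations (30)-(32)).

    X is the current population and x_hho_i the position produced for hawk i
    by the HHO exploration/exploitation step.
    """
    R = np.linalg.norm(X[i] - x_hho_i)                 # Eq. (30): neighbourhood radius
    dists = np.linalg.norm(X - X[i], axis=1)
    neigh = np.flatnonzero(dists <= R)                 # Eq. (31): neighbours within R
    if neigh.size == 0:
        neigh = np.arange(len(X))                      # assumption: fall back to the whole population
    dim = X.shape[1]
    n_idx = rng.choice(neigh, size=dim)                # one random neighbour per dimension
    r_idx = rng.integers(len(X), size=dim)             # one random hawk per dimension
    d = np.arange(dim)
    return X[i] + rng.random(dim) * (X[n_idx, d] - X[r_idx, d])   # Eq. (32)

def dlh_select(x_hho, x_dlh, f):
    """Keep the better of the HHO candidate and the DLH candidate (Equation (33))."""
    return x_hho if f(x_hho) < f(x_dlh) else x_dlh

rng = np.random.default_rng(3)
pop = rng.uniform(-5, 5, (6, 4))
cand_hho = pop[0] + rng.normal(0, 0.5, 4)              # stand-in for the HHO-generated candidate
sphere = lambda v: float(np.sum(v ** 2))
print(dlh_select(cand_hho, dlh_candidate(pop, cand_hho, 0, rng), sphere))
```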

4.3. Position Update Strategy Based on Dung Beetle Optimizer Algorithm

The Dung Beetle Optimizer (DBO) algorithm is a novel swarm intelligence optimization algorithm proposed by Xue et al. in 2022 [30] that simulates the rolling, dancing, foraging, stealing, and breeding behaviors of dung beetles. The algorithm takes both global exploration and local exploitation into account, so it converges quickly with high accuracy and can effectively solve complex optimization problems. Following the DBO idea that the ball-rolling dung beetle navigates by sunlight and rolls its dung ball along a straight line, this position update is introduced into HHO. It enables individuals in the population to choose a more suitable search direction and avoid falling into local minima prematurely.
As shown in [30], the position update of the ball-rolling dung beetle is as follows:
X_i^{DBO}(t+1) = X_i(t) + \alpha \times k \times X_i(t-1) + b \times \Delta x, \quad \Delta x = |X_i(t) - X^w|,   (34)
where X_i(t) denotes the position of the i-th dung beetle at the t-th iteration; k \in (0, 0.2] denotes the deflection coefficient; b is a fixed value in the interval (0, 1); and \alpha is the position deviation of the dung beetle caused by natural factors, taking a fixed value of −1 or 1, where 1 means no deviation and −1 means deviation from the original direction. X^w is the global worst position, and \Delta x is the variation in light intensity: the larger its value, the weaker the light source, which drives the dung beetle away from that position. In this way, the whole search space is explored as thoroughly as possible during the optimization, and the possibility of falling into a local optimum is reduced.
Then, the fitness values of the two candidate positions are compared to select the better one:
X_i(t+1) = \begin{cases} X_i^{DBO}(t+1), & f(X_i^{DBO}(t+1)) < f(X_i^1(t+1)) \\ X_i^1(t+1), & f(X_i^{DBO}(t+1)) \ge f(X_i^1(t+1)) \end{cases}   (35)
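The snippet below sketches this ball-rolling update and the subsequent selection in Python; the coefficient values k = 0.1 and b = 0.3 and the random choice of α are illustrative defaults, not values taken from the paper.

```python
import numpy as np

def dbo_rolling_update(x_t, x_prev, x_worst, rng, k=0.1, b=0.3):
    """Ball-rolling position update borrowed from DBO (Equation (34))."""
    alpha = 1.0 if rng.random() < 0.5 else -1.0        # deviation indicator: 1 or -1
    delta_x = np.abs(x_t - x_worst)                    # light-intensity variation
    return x_t + alpha * k * x_prev + b * delta_x

def dbo_select(x_dbo, x_cur, f):
    """Keep the better of the DBO candidate and the current candidate (Equation (35))."""
    return x_dbo if f(x_dbo) < f(x_cur) else x_cur

rng = np.random.default_rng(4)
x_t, x_prev, x_worst = rng.uniform(-5, 5, (3, 2))
sphere = lambda v: float(np.sum(v ** 2))
print(dbo_select(dbo_rolling_update(x_t, x_prev, x_worst, rng), x_t, sphere))
```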

4.4. MIHHO Algorithm Flow

In summary, this paper improves the HHO algorithm, introduces three strategies, and proposes the MIHHO. The algorithm flow is as follows, and the flow chart is shown in Figure 1:
  • Step 1: The basic parameters of the Harris hawk population are initialized, including the population size N, the maximum number of iterations T, and the starting positions of all individuals in the solution space.
  • Step 2: The iteration starts, the fitness value of all individuals in the population is obtained, and the best individual is obtained.
  • Step 3: The double adaptive weight strategy (Equations (22) and (23)) is used to dynamically adjust the position of the Harris hawk to improve the search efficiency.
  • Step 4: The Harris hawk population updates its position through different strategies. In the exploration phase, two strategies are used to detect prey based on the q-value (Equation (24)). In the exploitation phase, prey are attacked using different strategies based on their escape energy and escape probability, including soft besiege (Equation (25)), hard besiege (Equation (26)), soft besiege with progressive rapid dives (Equations (13), (15) and (27)), and hard besiege with progressive rapid dives (Equations (19), (21) and (28)).
  • Step 5: Multi-neighborhood learning is performed according to the DLH search strategy (Equations (30)–(33)).
  • Step 6: The fitness value of all individuals in the population is calculated, and the best and worst individuals are obtained.
  • Step 7: According to the DBO’s solar ray navigation strategy (Equations (34) and (35)), the positions of individuals caught in localized extremes are updated.
  • Step 8: The optimal individual position and fitness value so far are calculated.
  • Step 9: The updated new population is returned, and it is determined whether the algorithm terminates. If there is no termination, the second step is followed to continue the execution.
  • Step 10: When the termination condition is met, the final optimal individual of the population and its fitness value are output.
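To make the flow concrete, a compact Python sketch of the MIHHO main loop is given below. It reproduces only the overall control structure: the exploitation phase is reduced to the soft and hard besiege, the DLH and DBO steps are simplified, and all coefficient values are illustrative, so this should be read as a structural sketch rather than the authors' implementation.

```python
import numpy as np

def mihho(f, lb, ub, dim, N=30, T=500, seed=0):
    """Structural sketch of the MIHHO loop (Steps 1-10); simplified and illustrative only."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (N, dim))                     # Step 1: initialise positions
    X_prev = X.copy()
    fit = np.apply_along_axis(f, 1, X)                    # Step 2: evaluate the population
    best = X[fit.argmin()].copy()

    for t in range(T):
        expo = 1.0 - np.tan(np.pi * (rng.random() - 0.5)) / T
        w1, w2 = (1 - t / T) ** expo, (2 - 2 * t / T) ** expo   # Step 3: adaptive weights (Eqs. 22-23)
        worst = X[fit.argmax()].copy()
        X_new = np.empty_like(X)
        for i in range(N):
            E = 2.0 * rng.uniform(-1, 1) * (1 - t / T)    # escape energy (Eq. 10)
            if abs(E) >= 1:                               # Step 4: exploration (Eq. 24)
                if rng.random() >= 0.5:
                    Xr = X[rng.integers(N)]
                    cand = w1 * Xr - rng.random() * np.abs(Xr - 2 * rng.random() * X[i])
                else:
                    cand = w1 * (best - X.mean(0)) - rng.random() * (lb + rng.random() * (ub - lb))
            else:                                         # exploitation, reduced to soft/hard besiege
                J = 2.0 * (1.0 - rng.random())
                if abs(E) >= 0.5:
                    cand = w2 * (best - X[i]) - E * np.abs(J * best - X[i])      # Eq. 25
                else:
                    cand = w2 * best - E * np.abs(best - X[i])                   # Eq. 26
            cand = np.clip(cand, lb, ub)                  # Eq. 29
            # Step 5: simplified DLH learning between two random hawks
            dlh = np.clip(X[i] + rng.random(dim) * (X[rng.integers(N)] - X[rng.integers(N)]), lb, ub)
            cand = cand if f(cand) < f(dlh) else dlh
            # Step 7: simplified DBO rolling update if the hawk did not improve
            if f(cand) >= fit[i]:
                roll = np.clip(X[i] + 0.1 * X_prev[i] + 0.3 * np.abs(X[i] - worst), lb, ub)
                cand = roll if f(roll) < f(cand) else cand
            X_new[i] = cand
        X_prev, X = X, X_new
        fit = np.apply_along_axis(f, 1, X)                # Steps 6, 8: re-evaluate and track the best
        if fit.min() < f(best):
            best = X[fit.argmin()].copy()
    return best, f(best)                                  # Step 10: final optimum

best, val = mihho(lambda x: float(np.sum(x ** 2)), -100.0, 100.0, dim=10, N=20, T=200)
print(val)
```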

4.5. Time Complexity Analysis

The time complexity of a swarm algorithm usually depends on three processes: population initialization, the calculation of the fitness function, and the population position update. It is related to the population size N, the search space dimension D, and the maximum number of iterations T. For HHO, the total computational complexity is O(N) + O(T × N) + O(T × N × D), that is, O(N × (T + T × D + 1)). For the MIHHO proposed in this paper, although the double adaptive weight strategy, the DLH search strategy, and the DBO position update strategy are introduced, these strategies are carried out within the original iterative framework and do not change the order of the computational complexity. Therefore, the total computational complexity of MIHHO is still O(N × (T + T × D + 1)).
The analysis shows that the computational complexity of HHO is consistent with that of MIHHO, and the improved strategy does not increase the additional time complexity.

5. MIHHO Performance Test and Analysis

5.1. Solving the Classical Functions

The experimental simulation platform was a Windows 10 computer with an Intel(R) Core(TM) i5-10210U CPU and 12 GB of RAM, and the software used was MATLAB R2021b. To verify the performance of the MIHHO algorithm, the Grey Wolf Optimizer (GWO) [31], the Sparrow Search Algorithm (SSA) [32], the Whale Optimization Algorithm (WOA) [33], Particle Swarm Optimization (PSO) [34], the Gradient-Based Optimizer (GBO) [35], Sand Cat Swarm Optimization (SCSO) [36], the Dung Beetle Optimizer (DBO) [30], and Harris Hawk Optimization (HHO) [11] were selected for comparison with the MIHHO algorithm proposed in this paper. To ensure the fairness and effectiveness of the experiment, the population size and the number of iterations of all algorithms were set to 30 and 500, respectively, and the other internal parameters of each algorithm were consistent with the settings in the original literature.
To test the effectiveness of the MIHHO improvements, experiments were performed on classical benchmark functions [37]. Twelve classical functions with different characteristics were used for testing, and Table 1 lists their details. Among them, F1–F4 are unimodal test functions used to test the exploitation ability of the algorithms, and F5–F12 are 30-dimensional and fixed-dimension multimodal test functions used to test the exploration ability of the algorithms and their ability to avoid local optima.
To eliminate the influence of randomness on the test results, each algorithm was run independently 30 times on each test function, and the mean, standard deviation, and optimal value were used as the performance evaluation indices. The smaller the mean value, the higher the convergence accuracy and optimization ability of the algorithm; the smaller the standard deviation, the smaller the fluctuation and the higher the stability. The comparison algorithms were tested on the 12 classical functions given in Table 1, and the results are shown in Table 2, where "mean" denotes the average rank over all functions in terms of solution accuracy and "rank" denotes the final ranking based on this mean. It can be seen that the MIHHO algorithm shows excellent optimization ability and ranks first. On F1, F2, F3, F6, and F8, MIHHO reaches the optimal value, and on F9, F10, and F12, although it does not reach the theoretical optimum, it is the closest to it, which shows its strong optimization ability. In addition, MIHHO performs stably in terms of the mean and standard deviation, indicating that its results remain consistent over many experiments, especially on F1, F2, F3, F6, and F8. On function F5, although MIHHO reaches the same optimal value as DBO, HHO, and WOA, its standard deviation is tens of orders of magnitude lower than those of the other algorithms.
Figure 2 shows the average iteration curves of the different algorithms after 30 runs. Based on the analysis of the convergence curves, MIHHO performs well on most of the tested functions. First, in functions F1, F2, F3, F6, F7, F8, and F12, MIHHO demonstrates the fastest convergence speed, and its convergence process is nearly linear and fast approaching the theoretical optimum, which indicates that the algorithm has a strong ability to jump out of the local optimum. In addition, the curves of MIHHO are usually lower than those of other algorithms, which indicates that MIHHO achieves better optimization results with the same number of iterations.
The Wilcoxon rank sum test [38] is a nonparametric statistical test that can handle more complex data distributions. The significance level is set to 5%: when the p-value is less than 0.05, the two algorithms are considered to be significantly different, and when p is greater than 0.05, the difference between the two algorithms is not considered obvious; "NAN" indicates that the two compared algorithms obtain similar optimization results, so a significant difference cannot be verified. In this paper, the Wilcoxon rank sum test is performed on the best results of 30 independent runs of MIHHO and the eight comparison algorithms to demonstrate the significance of the differences between the algorithms. According to the magnitude of the p-value, the symbols "+/=/−" are used to indicate that the comparison algorithm is significantly different from MIHHO, that a significant difference could not be verified, and that the difference is not significant, respectively. From the results given in Table 3, it can be seen that the p-values of the Wilcoxon rank sum test for MIHHO are overwhelmingly less than 5%, except on F6, F7, and F8, where the algorithms' optimization performance is almost identical; this indicates that MIHHO is significantly different from the other algorithms.
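As an illustration of this statistical comparison, the following Python snippet applies the Wilcoxon rank sum test to two sets of 30 best-fitness values; it assumes SciPy is available, and the sample data are synthetic placeholders rather than results from the paper.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Hypothetical best-fitness samples from 30 independent runs of two algorithms
mihho_runs = rng.normal(1e-8, 1e-9, 30)
hho_runs = rng.normal(1e-3, 1e-4, 30)

stat, p = ranksums(mihho_runs, hho_runs)
verdict = "significant difference" if p < 0.05 else "no significant difference"
print(f"p = {p:.3e} -> {verdict}")
```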
A box plot is often used to visualize the distribution of data [39], and in this paper, the stability of the algorithms is verified through box plots. Figure 3 demonstrates the distribution of the optimal fitness values of the comparison algorithms after 30 experiments. From Figure 3, it can be seen that for functions F1, F2, F3, F6, F7, F8, F9, and F10, MIHHO has the lowest box height, and compared with the other algorithms, the data of MIHHO are more concentrated and there are no anomalies, while the other algorithms have anomalies. This indicates that MIHHO shows higher stability in the optimization search process with less data fluctuation, which is significantly better than the other algorithms.
In summary, compared with the rest of the algorithms, MIHHO exhibits a higher convergence speed and better exploration and exploitation ability, and it can effectively avoid falling into local optima, proving that the improvement strategy proposed in this paper improves the comprehensive performance of the algorithm.

5.2. Solving the CEC2022 Test Functions

To further verify the algorithm’s optimization performance and robustness on complex test functions, the CEC2022 test function is selected for the optimization comparison analysis of six algorithms [40]. Table 4 lists detailed information about functions F13–F24 (UF is a Unimodal Function, BF is a Basic Function, HF is a Hybrid Function, and CF is a Composition Function).
Each algorithm was run independently 30 times with a population size of 30, an optimization problem dimension of 10, and 1000 iterations, and the other internal parameters of each algorithm were kept consistent with their settings in the original literature; the optimization results are shown in Table 5. As can be seen from Table 5, the MIHHO algorithm achieves the second-best performance on the CEC2022 test functions, after GBO. However, for the unimodal function F13, MIHHO has the lowest mean and standard deviation, whereas HHO and WOA perform poorly. On the basic functions, MIHHO is very close to the theoretical optimum even though it does not converge to it every time, and it shows a large improvement over HHO. On the hybrid functions F18 and F20, MIHHO's average value is better than those of the other algorithms, showing excellent overall performance. On the composition functions, the convergence accuracy of MIHHO is still ahead of the other algorithms, and its optimization is more stable. In summary, MIHHO has a clear advantage over the other algorithms in terms of optimization performance and robustness and is capable of solving complex optimization problems, showing that the improvement strategies effectively overcome the defects of HHO.
Figure 4 shows the average iteration curves of different algorithms after 30 runs. As can be seen from Figure 4, on the unimodal function F13, MIHHO achieves a better optimization result with the same number of iterations. On the basic functions F16 and F17, MIHHO does not have a clear advantage but still converges to the optimal value at a speed not inferior to the other algorithms. For the hybrid functions, although MIHHO does not converge to the optimal point, MIHHO’s convergence accuracy is stronger than that of the other algorithms, and the speed of jumping out of the local optimum is also faster. For composition functions, MIHHO not only converges to the optimum quickly, but also has higher accuracy than other algorithms, indicating that MIHHO has a strong ability to jump out of the local optimum.
Table 6 shows the Wilcoxon rank sum test results of MIHHO against the other algorithms on the CEC2022 test functions. As can be seen in Table 6, MIHHO scores "12/0/0" in comparison with GWO and SCSO, i.e., on all 12 functions MIHHO's performance is significantly different from these two algorithms. Table 6 also shows that the p-values for most of the 12 tested functions are less than 0.05, indicating that MIHHO is significantly different from the other algorithms.
As shown in Figure 5, MIHHO’s boxplots on the CEC2022 test functions demonstrate its excellent performance stability and significant advantages. Specifically, on functions F13, F14, and F15, MIHHO’s boxplots show a high degree of stability, indicating that its performance fluctuates very little for these functions. For functions F20, F21, and F24, MIHHO can find even smaller values, further validating its strong ability to optimize these functions. Overall, MIHHO’s performance on the CEC2022 test function set demonstrates its excellent performance stability when dealing with complex optimization problems and its clear advantages on specific function types.
In summary, MIHHO performs well in the CEC2022 test, showing a faster convergence speed and stronger exploration and exploitation ability, and it can effectively avoid falling into local optima. The experimental results prove that the improvement strategy proposed in this paper significantly improves the comprehensive performance of the algorithm.

5.3. Comparison with HHO Variants Improved by Other Authors

The experiments above compared MIHHO with strong algorithms from recent years and obtained better results. In this section, using the classical functions from Section 5.1, MIHHO is compared with HHO variants improved by other authors, namely Nonlinear-based Chaotic Harris Hawks Optimization (NCHHO) [27], Leader Harris Hawk Optimization (LHHO) [41], and Long-Term Memory Harris Hawk Optimization (LMHHO) [42]; the experimental results are shown in Table 7 and Figure 6.
In the convergence curves, MIHHO is clearly better than the other three improved algorithms; whether in terms of the convergence speed in the early and middle stages or the convergence accuracy at the end, MIHHO ranks first. In terms of specific data, except for F6, F7, and F8, where the algorithms obtain the same optimization results, the statistical values on all other functions significantly outperform those of the other improved algorithms, indicating that the improvement strategies used by MIHHO are superior. For the unimodal functions, the chaotic and nonlinear control parameters used by NCHHO speed up convergence to some extent, but the global optimum is not found. From Table 7 and Figure 6, it is clear that MIHHO is much better at dealing with complex problems, and the results also reaffirm MIHHO's ability to balance exploration and exploitation as well as its convergence speed and accuracy.

5.4. Ablation Experiment

To analyze the impact of different strategies on the performance of the algorithm, three strategies of MIHHO are compared through experiments in this section. Table 8 shows the specific details of the strategies involved in the comparison. In Table 8, “0” indicates that the strategy is not introduced into the algorithm, while “1” indicates that the strategy is introduced. Some of the test functions are selected for testing, while other parameters are the same as before, and the results of the optimization search are shown in Table 9.
As can be seen in Figure 7 and Table 9, the three improved strategies improve the search accuracy and convergence speed of HHO to different degrees. On the unimodal functions, the DLH search strategy performs best. On functions F6 and F8, the original HHO algorithm can already reach the global optimum, but the algorithm converges faster after the improved strategies are added. On function F13, although strategy 1 and strategy 3 alone do not significantly improve the search accuracy of the HHO algorithm, their combination (HHO13) converges to an approximately optimal theoretical value. In summary, all three improvement strategies have a positive influence on the original algorithm, and their combination maximizes the performance improvement, proving the effectiveness of the improvement strategies.

6. The Application of MIHHO in Path Planning

In this study, several experiments were designed with the aim of demonstrating the advantages of MIHHO in terms of the solution accuracy, convergence speed, and avoidance of becoming stuck in local optimization. The experiments cover both simple and complex environments and are compared with a variety of algorithms to fully evaluate the performance of MIHHO.

6.1. Steps in the Path Planning Experiment

The position coordinates of the Harris hawk population are updated at each iteration to represent a moving route for the robot. The MIHHO algorithm is used to find the optimal path from the start point to the goal point that meets the constraints of the 2D raster map. The experimental steps are as follows:
  • Step 1: Set the initialization parameters. Set the robot start position S , goal position G , population number N , and iteration number T , and set the map boundary conditions and obstacle positions.
  • Step 2: Initialize the population. Randomly generate N paths from the starting point to the endpoint and detect whether they are within the map boundary.
  • Step 3: Calculate the degree of adaptation. Evaluate the quality of each path using the objective function (Equation (5)) to determine the best path in the initialization.
  • Step 4: Update the weights according to the double adaptive weight formula (Equations (22) and (23)) to improve the probability of finding the optimal path.
  • Step 5: In the HHO for path optimization, two strategies are used in the exploration phase to detect the prey through the q-value (Equation (24)). In the exploitation phase, based on the prey’s escape energy and escape probability, different strategies are selected for attacking, including soft besiege (Equation (25)), hard besiege (Equation (26)), soft besiege with progressive rapid dives (Equations (13), (15) and (27)), and hard besiege with progressive rapid dives (Equations (19), (21) and (28)).
  • Step 6: Perform multi-neighborhood learning according to the DLH search strategy (Equations (30)–(33)). By searching in different dimensions, the algorithm can find several different paths, increasing the likelihood of finding a better path.
  • Step 7: Calculate the fitness values of all individuals of the population and obtain the optimal and worst paths.
  • Step 8: Apply the position update strategy based on the DBO algorithm (Equations (34) and (35)) to update the positions of individuals caught in local extremes, helping them jump out of the local optimum and continue searching for a better solution.
  • Step 9: Calculate the optimal path and fitness value so far.
  • Step 10: Return the updated new population and determine whether the algorithm termination condition is reached; if not, then jump to the second step to continue execution.
  • Step 11: When the termination condition is reached, output the final optimal path of the population and its fitness value.

6.2. Analysis of Path Planning Experimental Results

The environments are categorized into two types, namely simple and complex environments, according to the proportion of obstacles in the environment [43]. Simple and complex environments include two sizes, 20 × 20 and 40 × 40, respectively. Compared with complex environments, simple environments have fewer obstacles with regular shapes and sparse distribution, and these obstacles are easy to recognize and bypass and do not cause serious problems in path planning and task execution. Complex environments have a higher proportion of obstacles with irregular shapes and dense distribution, which pose a greater challenge to path planning algorithms. With this categorization method, the performance of different algorithms in various environments can be effectively evaluated.
To verify the performance of MIHHO in path planning, GWO, SSA, WOA, PSO, GBO, SCSO, DBO, and HHO were chosen as the comparison algorithms to conduct path planning experiments with MIHHO in the same raster map. To ensure the fairness and validity of the experiments, the population size and iteration number of all algorithms were set to 50 and 100, and the other internal parameters of each algorithm were kept consistent with their settings in the original literature.

6.2.1. Experimental Simulation in a Simple Environment

The individual algorithms were tested in a simple environment on maps of two sizes, 20 × 20 and 40 × 40. The percentage of obstacles was set to 20% and these obstacles were regularly distributed and regular in shape. The test site was relatively empty, and this setup ensured that the obstacles were easy to bypass, which made path planning and task execution smoother, and thus allowed for a more effective evaluation of the basic performance of each algorithm in a simple environment.
Figure 8 shows the path planning simulation results of each algorithm for different sizes (20 × 20 and 40 × 40) in the simple environment, while Figure 9 shows the path planning convergence curves of each algorithm in this environment. It can be seen that the MIHHO algorithm can plan smoother paths in a simple environment compared to the other algorithms, which indicates that the MIHHO algorithm is more efficient for path planning. The PSO algorithm, on the other hand, produces more inflection points and detours in this environment, indicating that it may be more likely to fall into a local optimum during path planning. In Figure 9, the MIHHO algorithm converges faster and finds shorter paths in fewer iterations. In contrast, other algorithms such as GWO and PSO converge relatively slowly and show large fluctuations during the iterations.
Considering the influence of chance on the experimental results, several simulations were carried out: each algorithm was run independently 10 times on the two maps, and the optimal value, mean, and standard deviation of both the path length and the number of iterations were recorded to comprehensively evaluate the optimization results of the algorithms on the path planning problem. The performance of the algorithms is compared in Table 10.
As can be seen in Table 10, the MIHHO algorithm successfully converges to the optimal solution after 18 iterations in a 20 × 20 raster map with an average path length of 28.35375. The path length of MIHHO is reduced by 2.23% compared to GWO, 3.29% compared to SSA, 10.93% compared to WOA, 12.08% compared to PSO, 4.66% compared to GBO, 14.11% compared to SCSO, 1.98% compared to DBO, and 1.99% compared to HHO. These results show that MIHHO can plan shorter paths in simple environments, demonstrating excellent path optimization capabilities. In addition, MIHHO is more stable in path planning with less fluctuation in path length and can converge to the optimal solution faster with significantly improved computational efficiency.
In the 40 × 40 raster map, although the SCSO algorithm has the lowest average number of iterations, it has a longer path length and is not as well optimized as MIHHO. MIHHO has an average path length of 61.05562, which is 17.59% less than GWO, 14% less than SSA, 18.92% less than WOA, 45.86% less than PSO, 13.04% less than GBO, 19.63% less than SCSO, 9.22% less than DBO, and 14.45% less than HHO. These results show that MIHHO can still maintain a short path length when dealing with larger scale path planning problems, showing its stability in large-scale path planning.
Therefore, compared to the other eight algorithms, the MIHHO algorithm has a shorter average path length and a lower average number of iterations and presents some advantages in the overall search performance and convergence speed.

6.2.2. Experimental Simulation in Complex Environments

In the complex environment, each algorithm was tested on maps of two sizes, 20 × 20 and 40 × 40. Obstacles occupy 40% of the map, with random distribution and different shapes, resulting in highly complex and crowded maps. This setup increases the difficulty of detouring and makes path planning and task execution more challenging, allowing for a more comprehensive evaluation of the performance of each algorithm in complex environments. Figure 10 and Figure 11 show the simulation results of path planning.
The MIHHO algorithm shows significant advantages in complex environments with 40% obstacles. Its path planning results show smoother and more direct paths, indicating that MIHHO has superior obstacle avoidance and path optimization capabilities in complex environments. On the other hand, PSO and GWO are less efficient in path planning when dealing with complex environments and are more likely to be interfered with by obstacles. In addition, from the convergence curve, the MIHHO algorithm can quickly converge and find a near-optimal path within a short number of iterations with less fluctuation, showing good stability and robustness. In contrast, other algorithms such as PSO and GWO converge more slowly and are easily affected by the trap of local optimization. Overall, MIHHO is more suitable for path planning tasks in complex environments.
Each algorithm was run independently on both maps 10 times, and the algorithm performance comparison is shown in Table 11. The MIHHO algorithm finds shorter optimal paths with an average path length of 29.70135 in a small-scale 20 × 20 environment. In comparison, the path length is reduced by 3.58% compared to GWO, 3.2% compared to SSA, 14.22% compared to WOA, 17.36% compared to PSO, 13.65% compared to GBO, 13.36% compared to SCSO, 5.52% compared to DBO, and 4.52% compared to HHO. Although SCSO has the lowest average number of iterations, its planned path is relatively long.
When the size of the raster map is 40 × 40, the average path length of MIHHO is 65.5203, which is significantly better than the other algorithms, indicating that MIHHO still maintains excellent path optimization in large-scale complex environments. The path length of MIHHO is reduced by 4.13% compared with GWO, 9.97% compared with SSA, 11.75% compared with WOA, 20.5% compared with PSO, 10.64% compared with GBO, 11.82% compared with SCSO, 8.38% compared with DBO, and 9.19% compared with HHO. Although MIHHO’s average number of iterations is 36.2, which is slightly higher than some of the algorithms, its significantly shorter paths indicate that it can achieve better results in relatively fewer iterations, fully demonstrating superior performance in complex problems. Despite the increased size and complexity of the map environment, the path planning ability of the MIHHO algorithm is not affected by the computational complexity and is still able to find shorter paths.
In maps with different specifications and complexities, the MIHHO algorithm presents certain advantages over the original HHO algorithm in the optimal path length, average path length, standard deviation of the path length, and average number of iteration metrics. It has better environmental applicability in different complex environments, and this advantage becomes more prominent as the size of the environmental map increases.
From these results, it can be seen that MIHHO is significantly better than the Harris Hawk Optimization algorithm as well as other common path planning algorithms in terms of the solution accuracy, convergence speed, and avoidance of falling into local optima through the introduction of multi-strategy improvement. Especially in complex environments, MIHHO shows stronger adaptability and efficiency, which verifies its practical application value and potential in path planning.

7. Conclusions

Aiming to solve the problems of poor optimization and low search stability in mobile robot path planning, this paper proposes a Multi-strategy Improved Harris Hawk Optimization (MIHHO) algorithm. Firstly, the double adaptive weight strategy is used to improve the exploration and exploitation abilities of HHO and thereby the convergence accuracy and speed of the algorithm in path planning. Secondly, the Dimension Learning-Based Hunting (DLH) search strategy is used to enhance the algorithm's ability to balance exploration and exploitation and to maintain diversity. Then, a position update strategy based on the DBO algorithm is proposed to avoid falling into local optimal solutions during path planning. Through simulation experiments with 12 classical functions and the CEC2022 test functions, the proposed MIHHO is verified to have strong global and local search capabilities. Finally, path planning experiments are conducted in map environments of different sizes and complexities. The experimental results show that the proposed algorithm outperforms the other eight well-known algorithms in terms of path length optimality and stability. MIHHO demonstrates a faster convergence speed and higher stability, possesses good applicability and optimization ability, and shows its potential in practical applications. This validates the effectiveness of the algorithm in global path planning problems for mobile robots.
This study proposes a Multi-strategy Improved Harris Hawk Optimization, mainly for the global path planning of robots, modeled by a two-dimensional raster. The experimental results show that the algorithm performs well in this area and has high practical value. However, there are still two major shortcomings: (1) This study focuses on the simulation of 2D motion scenes and does not consider the motion of UAVs in 3D space, so future research should be conducted in 3D environments to expand the applicability of the algorithm. (2) The experiments in this paper are mainly based on simulation, and future research needs to be carried out in real environments to enhance the practical application of the algorithm.

Author Contributions

Conceptualization, C.T.; methodology, W.L.; software, T.H.; validation, L.Y.; formal analysis, W.L.; investigation, T.H. and L.Y.; resources, T.H. and T.C.; data curation, T.C.; writing—original draft preparation, W.L.; writing—review and editing, C.T.; visualization, W.L.; project administration, C.T. and W.L.; funding acquisition, C.T. and W.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Graduate Innovation Foundation of Anhui University of Science and Technology (Nos. 2023cx2087, 2023cx2092, and 2023cx2080) and the Graduate Student Innovation and Entrepreneurship Practice Project of Anhui Province of China (Nos. 2023cxcysj081, 2023cxcysj089, and 2023cxcysj090).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank the editors and anonymous reviewers for their constructive comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Deng, X.; Li, R.; Zhao, L.; Wang, K.; Gui, X. Multi-obstacle path planning and optimization for mobile robot. Expert Syst. Appl. 2021, 183, 115445.
2. Alshammrei, S.; Boubaker, S.; Kolsi, L. Improved Dijkstra algorithm for mobile robot path planning and obstacle avoidance. Comput. Mater. Contin. 2022, 72, 5939–5954.
3. Li, X.; Yu, S.; Gao, X.Z.; Yan, Y.; Zhao, Y. Path planning and obstacle avoidance control of UUV based on an enhanced A* algorithm and MPC in dynamic environment. Ocean Eng. 2024, 302, 117584.
4. Gu, X.; Liu, L.; Wang, L.; Yu, Z.; Guo, Y. Energy-optimal adaptive artificial potential field method for path planning of free-flying space robots. J. Frankl. Inst. 2024, 361, 978–993.
5. Kar, A.K. Bio inspired computing–a review of algorithms and scope of applications. Expert Syst. Appl. 2016, 59, 20–32.
6. Abbassi, R.; Abbassi, A.; Heidari, A.A.; Mirjalili, S. An efficient salp swarm-inspired algorithm for parameters identification of photovoltaic cell models. Energy Convers. Manag. 2019, 179, 362–372.
7. Tian, T.; Liang, Z.; Wei, Y.; Luo, Q.; Zhou, Y. Hybrid Whale Optimization with a Firefly Algorithm for Function Optimization and Mobile Robot Path Planning. Biomimetics 2024, 9, 39.
8. Yu, X.; Jiang, N.; Wang, X.; Li, M. A hybrid algorithm based on grey wolf optimizer and differential evolution for UAV path planning. Expert Syst. Appl. 2023, 215, 119327.
9. Wu, L.; Huang, X.; Cui, J.; Liu, C.; Xiao, W. Modified adaptive ant colony optimization algorithm and its application for solving path planning of mobile robot. Expert Syst. Appl. 2023, 215, 119410.
10. Yong, H.; Mingran, W. An improved chaos sparrow search algorithm for UAV path planning. Sci. Rep. 2024, 14, 366.
11. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872.
12. Liu, C. An improved Harris hawks optimizer for job-shop scheduling problem. J. Supercomput. 2021, 77, 14090–14129.
13. Jia, H.; Lang, C.; Oliva, D.; Song, W.; Peng, X. Dynamic Harris hawks optimization with mutation mechanism for satellite image segmentation. Remote Sens. 2019, 11, 1421.
14. Yousri, D.; Allam, D.; Eteiba, M.B. Optimal photovoltaic array reconfiguration for alleviating the partial shading influence based on a modified Harris hawks optimizer. Energy Convers. Manag. 2020, 206, 112470.
15. Yousri, D.; Rezk, H.; Fathy, A. Identifying the parameters of different configurations of photovoltaic models based on recent artificial ecosystem-based optimization approach. Int. J. Energy Res. 2020, 44, 11302–11322.
16. Golafshani, E.M.; Arashpour, M.; Behnood, A. Predicting the compressive strength of green concretes using Harris hawks optimization-based data-driven methods. Constr. Build. Mater. 2022, 318, 125944.
17. Belge, E.; Altan, A.; Hacıoğlu, R. Metaheuristic optimization-based path planning and tracking of quadcopter for payload hold-release mission. Electronics 2022, 11, 1208.
18. Cai, C.; Jia, C.; Nie, Y.; Zhang, J.; Li, L. A path planning method using modified Harris hawks optimization algorithm for mobile robots. PeerJ Comput. Sci. 2023, 9, e1473.
19. Qu, C.; He, W.; Peng, X.; Peng, X. Harris hawks optimization with information exchange. Appl. Math. Model. 2020, 84, 52–75.
20. Jiao, S.; Chong, G.; Huang, C.; Hu, H.; Wang, M.; Heidari, A.A.; Chen, H.; Zhao, X. Orthogonally adapted Harris hawks optimization for parameter estimation of photovoltaic models. Energy 2020, 203, 117804.
21. Zou, L.; Zhou, S.; Li, X. An efficient improved greedy Harris Hawks optimizer and its application to feature selection. Entropy 2022, 24, 1065.
22. Li, C.; Li, J.; Chen, H.; Jin, M.; Ren, H. Enhanced Harris hawks optimization with multi-strategy for global optimization tasks. Expert Syst. Appl. 2021, 185, 115499.
23. Hussain, K.; Neggaz, N.; Zhu, W.; Houssein, E.H. An efficient hybrid sine-cosine Harris hawks optimization for low and high-dimensional feature selection. Expert Syst. Appl. 2021, 176, 114778.
24. Li, S.; Zhang, R.; Ding, Y.; Qin, X.; Han, Y.; Zhang, H. Multi-UAV Path Planning Algorithm Based on BINN-HHO. Sensors 2022, 22, 9786.
25. Huang, L.; Fu, Q.; Tong, N. An improved Harris hawks optimization algorithm and its application in grid map path planning. Biomimetics 2023, 8, 428.
26. Nasr, A.A. A new cloud autonomous system as a service for multi-mobile robots. Neural Comput. Appl. 2022, 34, 21223–21235.
27. Dehkordi, A.A.; Sadiq, A.S.; Mirjalili, S.; Ghafoor, K.Z. Nonlinear-based chaotic Harris hawks optimizer: Algorithm and internet of vehicles application. Appl. Soft Comput. 2021, 109, 107574.
28. Lou, T.S.; Yue, Z.P.; Jiao, Y.Z.; He, Z.D. A hybrid strategy-based GJO algorithm for robot path planning. Expert Syst. Appl. 2024, 238, 121975.
29. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An improved grey wolf optimizer for solving engineering problems. Expert Syst. Appl. 2021, 166, 113917.
30. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336.
31. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
32. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34.
33. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
34. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA; Volume 4.
35. Ahmadianfar, I.; Bozorg-Haddad, O.; Chu, X. Gradient-based optimizer: A new metaheuristic optimization algorithm. Inf. Sci. 2020, 540, 131–159.
36. Seyyedabbasi, A.; Kiani, F. Sand Cat swarm optimization: A nature-inspired algorithm to solve global optimization problems. Eng. Comput. 2023, 39, 2627–2651.
37. Jiang, S.; Yue, Y.; Chen, C.; Chen, Y.; Cao, L. A multi-objective optimization problem solving method based on improved golden jackal optimization algorithm and its application. Biomimetics 2024, 9, 270.
38. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18.
39. Pfannkuch, M. Comparing box plot distributions: A teacher's reasoning. Stat. Educ. Res. J. 2006, 5, 27–45.
40. Abdel-Basset, M.; Mohamed, R.; Sallam, K.M.; Chakrabortty, R.K. Light spectrum optimizer: A novel physics-inspired metaheuristic optimization algorithm. Mathematics 2022, 10, 3466.
41. Naik, M.K.; Panda, R.; Wunnava, A.; Jena, B.; Abraham, A. A leader Harris hawks optimization for 2-D Masi entropy-based multilevel image thresholding. Multimed. Tools Appl. 2021, 80, 35543–35583.
42. Hussain, K.; Zhu, W.; Salleh, M.N.M. Long-term memory Harris' hawk optimization for high dimensional and optimal power flow problems. IEEE Access 2019, 7, 147596–147616.
43. Miao, C.; Chen, G.; Yan, C.; Wu, Y. Path planning optimization of indoor mobile robot based on adaptive ant colony algorithm. Comput. Ind. Eng. 2021, 156, 107230.
Figure 1. MIHHO algorithm flow chart.
Figure 2. Classical functions' convergence curves: (a–l) convergence curves for F1–F12.
Figure 3. Boxplots of the MIHHO algorithm and the other comparative algorithms: (a–l) F1–F12 test functions.
Figure 4. CEC2022 functions' convergence curves: (a–l) convergence curves for F13–F24.
Figure 5. Boxplots of the MIHHO algorithm compared with other algorithms: (a–l) F13–F24 test functions.
Figure 6. Function convergence curves: (a–l) convergence curves for F1–F12.
Figure 7. Convergence curves of ablation experiments: (a–i) convergence curves for F1, F2, F3, F6, F8, F13, F14, F18, and F21.
Figure 8. The simulation results of path planning in simple environments: (a) 20 × 20 grid map; (b) 40 × 40 grid map.
Figure 9. Convergence curves for path planning in simple environments: (a) 20 × 20 grid map; (b) 40 × 40 grid map.
Figure 10. The simulation results of path planning in complex environments: (a) 20 × 20 grid map; (b) 40 × 40 grid map.
Figure 11. Convergence curves for path planning in complex environments: (a) 20 × 20 grid map; (b) 40 × 40 grid map.
Table 1. Classical benchmark functions.
Type | Function | Dim | Global Optimum
Unimodal | $F_1(x)=\sum_{i=1}^{d} x_i^2$ | 30 | 0
Unimodal | $F_2(x)=\sum_{i=1}^{d}\left(\sum_{j=1}^{i} x_j\right)^2$ | 30 | 0
Unimodal | $F_3(x)=\max_i\{|x_i|,\ 1\le i\le d\}$ | 30 | 0
Unimodal | $F_4(x)=\sum_{i=1}^{d} i\,x_i^4 + \mathrm{rand}[0,1)$ | 30 | 0
Multimodal | $F_5(x)=\sum_{i=1}^{d} -x_i \sin\!\left(\sqrt{|x_i|}\right)$ | 30 | −418.9829 × Dim
Multimodal | $F_6(x)=\sum_{i=1}^{d}\left[x_i^2-10\cos(2\pi x_i)+10\right]$ | 30 | 0
Multimodal | $F_7(x)=-20\exp\!\left(-0.2\sqrt{\tfrac{1}{d}\sum_{i=1}^{d} x_i^2}\right)-\exp\!\left(\tfrac{1}{d}\sum_{i=1}^{d}\cos(2\pi x_i)\right)+20+e$ | 30 | 0
Multimodal | $F_8(x)=\tfrac{1}{4000}\sum_{i=1}^{d} x_i^2-\prod_{i=1}^{d}\cos\!\left(\tfrac{x_i}{\sqrt{i}}\right)+1$ | 30 | 0
Fixed-dimension multimodal | $F_9(x)=\left[0.002+\sum_{j=1}^{25}\left(j+\sum_{i=1}^{2}(x_i-a_{ij})^6\right)^{-1}\right]^{-1}$ | 2 | 1
Fixed-dimension multimodal | $F_{10}(x)=-\sum_{i=1}^{5}\left[(x-a_i)(x-a_i)^{T}+c_i\right]^{-1}$ | 4 | −10.1532
Fixed-dimension multimodal | $F_{11}(x)=-\sum_{i=1}^{7}\left[(x-a_i)(x-a_i)^{T}+c_i\right]^{-1}$ | 4 | −10.4028
Fixed-dimension multimodal | $F_{12}(x)=-\sum_{i=1}^{10}\left[(x-a_i)(x-a_i)^{T}+c_i\right]^{-1}$ | 4 | −10.5363
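To make a few of the classical benchmarks in Table 1 concrete, the sketch below implements F1 (sphere), F6 (Rastrigin), F7 (Ackley), and F8 (Griewank) in NumPy. This is illustrative code written from the standard definitions, not code taken from the paper.

```python
# Illustrative NumPy implementations of four classical benchmarks from Table 1
# (F1 sphere, F6 Rastrigin, F7 Ackley, F8 Griewank).
import numpy as np

def f1_sphere(x):
    return np.sum(x ** 2)

def f6_rastrigin(x):
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def f7_ackley(x):
    d = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)

def f8_griewank(x):
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1

if __name__ == "__main__":
    x0 = np.zeros(30)   # all four functions attain their global optimum of 0 at the origin
    for f in (f1_sphere, f6_rastrigin, f7_ackley, f8_griewank):
        print(f.__name__, f(x0))
```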
Table 2. Comparison of optimization results of different intelligent algorithms.
Functions | Index | GWO | SSA | WOA | PSO | GBO | SCSO | DBO | HHO | MIHHO
F1Best3.9079 × 10−3007.9517 × 10−870.000636063.1733 × 10−1283.0221 × 10−1251.6001 × 10−1673.0379 × 10−1130
Mean8.8865 × 10−282.641 × 10−541.6 × 10−730.0173061.8033 × 10−1162.257 × 10−1121.4317 × 10−1011.2927 × 10−910
Std1.0723 × 10−271.4466 × 10−537.1291 × 10−730.0347348.7403 × 10−1161.1837 × 10−1117.8418 × 10−1017.0796 × 10−910
F2Best1.2306 × 10−81.4022 × 10−9221,136.148836.28011.9012 × 10−1086.0921 × 10−1121.8658 × 10−1601.3875 × 10−980
Mean2.8545 × 10−55.1396 × 10−2743,441.70472252.62957.9774 × 10−932.5836 × 10−973.9476 × 10−782.3464 × 10−710
Std9.7623 × 10−52.6469 × 10−2614,283.77211888.08263.8353 × 10−921.4119 × 10−962.1622 × 10−771.2415 × 10−700
F3Best5.0592 × 10−81.2812 × 10−1190.000201824.49288.7515 × 10−616.9719 × 10−574.3432 × 10−781.0308 × 10−550
Mean8.3153 × 10−74.3847 × 10−2954.76446.88819.5192 × 10−542.4293 × 10−508.0166 × 10−434.9189 × 10−480
Std5.3132 × 10−72.0033 × 10−2827.79131.6523.7547 × 10−531.1926 × 10−494.3909 × 10−422.4996 × 10−470
F4Best0.000768230.000128830.000157670.0235351.5371 × 10−52.8588 × 10−60.000178089.2355 × 10−77.6663 × 10−7
Mean0.00234350.00151270.00226410.0501770.000753660.000161840.00104930.000178234.1231 × 10−5
Std0.0012360.0012530.00239190.0206270.000481080.000308570.000789510.000184196.5041 × 10−5
F5Best−7188.2016−9445.9825−12,567.2869−10,062.4719−11,331.6109−8739.4336−12,569.4866−12,569.4866−12,569.4866
Mean−6047.4534−8616.5616−9923.8706−8457.1427−9141.0372−6779.157−12,561.848−12,553.3976−12,569.4241
Std753.6649510.18631719.7314717.75751007.4905782.004639.574357.17890.11163
F6Best5.6843 × 10−140033.853600000
Mean2.46405.156760.684700000
Std3.9927028.244420.04800000
F7Best6.839 × 10−148.8818 × 10−168.8818 × 10−160.0121088.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−16
Mean9.7759 × 10−148.8818 × 10−164.3225 × 10−150.789598.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−16
Std1.7303 × 10−1402.5523 × 10−150.777800000
F8Best0000.001921400000
Mean0.005476603.700 × 10−180.033218000.0005747400
Std0.009715302.027 × 10−170.028653000.00314800
F9Best0.9980.9980.9980.9980.9980.9980.9980.9980.998
Mean5.01384.44392.93190.9981.03113.55141.45581.62490.998
Std4.20314.97672.96537.1417 × 10−170.181483.40541.82841.25573.4985 × 10−11
F10Best−10.1531−10.1532−10.1526−10.1532−10.1532−10.1532−10.1532−10.0896−10.1529
Mean−9.3149−7.9441−8.6934−6.5652−8.4539−5.7364−7.7898−5.1382−10.1479
Std2.21632.56942.45653.51412.44431.7622.56621.03590.0046385
F11Best−10.4024−10.4029−10.4012−10.4029−10.4029−10.4029−10.4029−10.2685−10.4026
Mean−10.4011−8.6312−7.1244−8.5207−8.4086−6.1973−8.2877−5.594−10.3967
Std0.00117862.54852.94163.21952.67682.69592.63051.55570.0061323
F12Best−10.5362−10.5364−10.5349−10.5364−10.5364−10.5364−10.5364−5.1285−10.5363
Mean−10.2639−8.7338−6.2514−8.2255−7.7551−6.9006−8.0298−5.122−10.5312
Std1.48122.59293.18033.40832.85942.88792.72450.00966130.0054868
Estimation | Mean | 5.08 | 4.33 | 5.42 | 5.83 | 3 | 4.17 | 3.75 | 4.5 | 1.08
Estimation | Rank | 7 | 5 | 8 | 9 | 2 | 4 | 3 | 6 | 1
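The Mean and Rank rows at the foot of Tables 2 and 5 summarize each algorithm's average rank over all test functions (rank 1 is the best mean result on a function). A minimal sketch of how such mean ranks can be computed is shown below; the per-function values used here are hypothetical, not the paper's data.

```python
# A minimal sketch of the Mean/Rank summary rows: rank the algorithms on each function
# by their mean result (1 = best for minimization), then average the ranks.
# The numbers below are illustrative, not the paper's data.
import numpy as np
from scipy.stats import rankdata

algorithms = ["GWO", "HHO", "MIHHO"]
mean_results = np.array([          # mean result per function (rows) and algorithm (columns)
    [8.9e-28, 1.3e-91, 0.0],       # hypothetical F1 means
    [2.9e-05, 2.3e-71, 0.0],       # hypothetical F2 means
    [2.5e+00, 0.0,     0.0],       # hypothetical F6 means (ties share the average rank)
])

per_function_ranks = np.vstack([rankdata(row) for row in mean_results])
mean_ranks = per_function_ranks.mean(axis=0)

for name, r in zip(algorithms, mean_ranks):
    print(f"{name}: mean rank = {r:.2f}")
print("final ranking:", [algorithms[i] for i in mean_ranks.argsort()])
```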
Table 3. Results of Wilcoxon rank sum test for classical functions.
Functions | GWO | SSA | WOA | PSO | GBO | SCSO | DBO | HHO
F11.2118 × 10−124.5736 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−12
F21.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−12
F31.2118 × 10−124.5736 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−12
F43.0199 × 10−118.891 × 10−103.4742 × 10−103.0199 × 10−114.1997 × 10−100.000556118.1014 × 10−100.00076973
F53.0199 × 10−113.0199 × 10−113.0199 × 10−113.0199 × 10−113.0199 × 10−113.0199 × 10−113.0199 × 10−111.8682 × 10−5
F64.5164 × 10−12NaNNaN1.2118 × 10−12NaNNaN0.33371NaN
F71.1359 × 10−12NaN1.1585 × 10−81.2118 × 10−12NaNNaNNaNNaN
F60.0055843NaN0.16081.2118 × 10−12NaNNaNNaNNaN
F84.9752 × 10−110.0756071.7769 × 10−106.3188 × 10−125.1812 × 10−122.1959 × 10−78.6748 × 10−62.0338 × 10−9
F97.6588 × 10−50.00721691.3853 × 10−60.00784030.00755068.4848 × 10−91.7273 × 10−53.0199 × 10−11
F104.9426 × 10−50.0255363.5201 × 10−70.00154530.0002679.5139 × 10−67.6152 × 10−53.0199 × 10−11
F110.000225395.3435 × 10−51.85 × 10−80.00744610.00755666.765 × 10−50.00390863.0199 × 10−11
F121.2118 × 10−124.5736 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−12
+/=/−12/0/08/3/110/1/112/0/09/3/09/3/09/2/19/3/0
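The p-values reported in Tables 3 and 6 come from pairwise Wilcoxon rank-sum tests between MIHHO and each competitor over the independent runs, with p < 0.05 typically read as a significant difference and "NaN" arising when both samples are identical. A minimal SciPy sketch with synthetic run data is shown below; it illustrates the test, not the paper's exact experimental pipeline.

```python
# A minimal sketch of the pairwise Wilcoxon rank-sum comparison behind Tables 3 and 6:
# compare MIHHO's results against another algorithm over 30 independent runs.
# The run data below is synthetic, for illustration only.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)
mihho_runs = rng.normal(loc=0.0, scale=1e-6, size=30)   # 30 best-fitness values (hypothetical)
hho_runs = rng.normal(loc=1e-3, scale=5e-4, size=30)

stat, p_value = ranksums(mihho_runs, hho_runs)
print(f"p-value = {p_value:.3e}")
if p_value < 0.05:
    print("significant difference between the two algorithms (counted in '+' or '-')")
else:
    print("no significant difference (counted in '=')")
```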
Table 4. CEC2022 test function details.
Type | Function | Description | Global Optimum
UF | F13 (CEC1) | Shifted and Full Rotated Zakharov Function | 300
BF | F14 (CEC2) | Shifted and Full Rotated Rosenbrock's Function | 400
BF | F15 (CEC3) | Shifted and Full Rotated Expanded Schaffer's f6 | 600
BF | F16 (CEC4) | Shifted and Full Rotated Non-Continuous Rastrigin's Function | 800
BF | F17 (CEC5) | Shifted and Full Rotated Levy Function | 900
HF | F18 (CEC6) | Hybrid Function 1 (N = 3) | 1800
HF | F19 (CEC7) | Hybrid Function 2 (N = 6) | 2000
HF | F20 (CEC8) | Hybrid Function 3 (N = 5) | 2200
CF | F21 (CEC9) | Composition Function 1 (N = 5) | 2300
CF | F22 (CEC10) | Composition Function 2 (N = 4) | 2400
CF | F23 (CEC11) | Composition Function 3 (N = 5) | 2600
CF | F24 (CEC12) | Composition Function 4 (N = 6) | 2700
Table 5. Comparison of CEC2022 test function optimization results.
Function | Index | GWO | SSA | WOA | PSO | GBO | SCSO | DBO | HHO | MIHHO
F13Best339.735330012,561.1581300300418.1413300.8018358.61300
Mean2576.2897303.060230,798.4928300.0367300.01833289.22942138.47911086.4727300.0177
Std2333.080310.832714,457.37810.153510.227642151.61081964.0605493.68980.0039427
F14Best404.1306400.0093404.4626400.3851400.1968402.1813404.9318400.2839400.0019
Mean435.4307412.865467.277436.4721408.6839431.3265439.2044463.4548408.0651
Std26.41323.315888.294366.922824.975628.675256.2362.545512.6514
F15Best600.0776600.0004620.7523600600.0028603.4417601.9854622.115600.1547
Mean601.8192604.5389639.8656600.4853600.86617.4314613.2101640.0181602.1263
Std4.51038.626713.00161.09751.28911.66098.58911.43290.97759
F16Best806.6137806.9647812.4975803.9798808.9546807.6149807.9637815.1933809.9519
Mean815.9944828.9864839.6426815.8086822.6187828.1278832.8943826.6859833.3066
Std7.73959.96817.02248.147210.59878.589913.62516.59798.9145
F17Best900.25936.34891075.8395900900906.5558906.93261204.7792943.4538
Mean943.32981388.52251596.4899900.4999915.30151148.22691023.8721462.76341319.221
Std86.2365177.0459480.26660.735926.8651183.5889140.3544141.8046184.9728
F18Best2676.9661909.90822207.26381890.76441831.83262157.5981996.58852365.73321874.7405
Mean6728.51094198.7615769.34545313.37553430.73685323.34025593.43637545.21413183.9722
Std2217.47851545.68122891.12872227.04541923.25121990.61862590.6036485.68161758.3396
F19Best2021.94152005.59982046.6252000.62432005.56492022.21962022.05052040.21142001.328
Mean2031.7412033.30582082.54412019.06712023.2982043.13072045.03212090.9012020.5375
Std7.120826.124227.36718.04158.433817.186130.745534.63066.2211
F20Best2207.67282220.11552226.40342200.80932206.22512221.63392221.68662223.79192204.9731
Mean2241.25382233.77832237.34022254.31182224.29912227.91222232.98962237.83732220.9942
Std40.296936.430411.001655.31874.30373.865820.90839.03433.1009
F21Best2529.33912529.28442551.01132529.28442529.28442529.29892529.28442543.09092529.2863
Mean2579.71572548.89382624.53042535.71982529.28442573.35612563.4942621.0892534.1854
Std37.235750.793850.868227.0164.3879 × 10−1330.753752.060238.984626.8255
F22Best2500.31612500.21532500.71072500.36652500.33192500.37692500.59352500.59522500.3736
Mean2597.15972642.12482770.77682583.46662558.21172565.62772555.90372697.6732533.1447
Std136.8643209.9238424.504379.501662.824566.566568.7745303.636674.2088
F23Best2603.88626002737.3267260026002606.16426002611.16042600
Mean3005.9052816.38053039.20472783.51972826.792899.87342771.61672866.4422727.1559
Std168.7909135.5907338.7632131.4767129.7348178.8386111.1563155.1139127.0772
F24Best2863.50192862.56742870.27112862.44212861.40492863.74792863.66122868.89182859.6596
Mean2873.09442880.96252915.36182874.38542871.50422873.54912882.38112944.48712872.7128
Std13.185631.548546.143516.81320.017611.289224.501656.688712.0736
Estimation | Mean | 4.92 | 4.92 | 8.33 | 3.5 | 2.33 | 5.33 | 5.25 | 7.5 | 2.5
Estimation | Rank | 4 | 4 | 9 | 3 | 1 | 7 | 6 | 8 | 2
Table 6. Results of Wilcoxon rank sum test for CEC2022 test functions.
Function | GWO | SSA | WOA | PSO | GBO | SCSO | DBO | HHO
F11.2118 × 10−124.5736 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−12
F21.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−12
F31.2118 × 10−124.5736 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−12
F43.0199 × 10−118.891 × 10−103.4742 × 10−103.0199 × 10−114.1997 × 10−100.000556118.1014 × 10−100.00076973
F53.0199 × 10−113.0199 × 10−113.0199 × 10−113.0199 × 10−113.0199 × 10−113.0199 × 10−113.0199 × 10−111.8682 × 10−5
F64.5164 × 10−12NaNNaN1.2118 × 10−12NaNNaN0.33371NaN
F71.1359 × 10−12NaN1.1585 × 10−81.2118 × 10−12NaNNaNNaNNaN
F60.0055843NaN0.16081.2118 × 10−12NaNNaNNaNNaN
F84.9752 × 10−110.0756071.7769 × 10−106.3188 × 10−125.1812 × 10−122.1959 × 10−78.6748 × 10−62.0338 × 10−9
F97.6588 × 10−50.00721691.385 × 10−60.00784030.00755068.4848 × 10−91.7273 × 10−53.0199 × 10−11
F104.9426 × 10−50.0255363.5201 × 10−70.00154530.0002679.5139 × 10−67.6152 × 10−53.0199 × 10−11
F110.000225395.3435 × 10−51.85 × 10−80.00744610.00755666.765 × 10−50.00390863.0199 × 10−11
F121.2118 × 10−124.5736 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−121.2118 × 10−12
+/=/−12/0/08/3/110/1/112/0/09/3/09/3/09/2/19/3/0
Table 7. Comparison of different improved HHO search results.
Function | Index | NCHHO | LHHO | LMHHO | MIHHO
F1Best4.3835 × 10−2398.3093 × 10−16600
Mean2.1917 × 10−2402.8468 × 10−14500
Std09.8053 × 10−14500
F2Best2.1941 × 10−2496.6287 × 10−13500
Mean1.8669 × 10−2141.2616 × 10−10000
Std06.8972 × 10−10000
F3Best8.0046 × 10−1311.6516 × 10−821.849 × 10−2490
Mean5.6251 × 10−1198.8838 × 10−712.316 × 10−2360
Std2.3441 × 10−1184.7576 × 10−7000
F4Best1.0575 × 10−79.3114 × 10−64.037 × 10−71.0246 × 10−6
Mean0.00012369.2186 × 10−59.3295 × 10−51.8966 × 10−5
Std9.911 × 10−55.3666 × 10−59.2981 × 10−51.6225 × 10−5
F5Best−12,569.4817−12,569.4865−12,569.487−12,569.4866
Mean−12,501.4776−12,569.4717−12,569.214−12,569.4124
Std177.64650.211661.04920.12898
F6Best0000
Mean0000
Std0000
F7Best8.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−16
Mean8.8818 × 10−168.8818 × 10−168.8818 × 10−168.8818 × 10−16
Std0000
F8Best0000
Mean0000
Std0000
F9Best0.9980.9980.9980.998
Mean1.79161.03110.9980.998
Std1.49770.181481.44 × 10−101.33 × 10−11
F10Best−10.1025−10.1532−5.0548−10.1532
Mean−8.2697−9.1315−4.9995−10.1473
Std2.02362.0730.117590.0082563
F11Best−10.3122−10.4026−5.0876−10.4027
Mean−7.8291−9.3316−5.0278−10.3978
Std2.4532.15830.101790.0043626
F12Best−10.4924−10.5364−5.1278−10.5363
Mean−7.2542−9.6282−5.0867−10.5309
Std2.4962.04680.0860130.0053248
Table 8. Different strategies for HHO.
Variant | Double Adaptive Weight Strategy | DLH Search Strategy | Position Update Strategy Based on DBO Algorithm
HHO | 0 | 0 | 0
HHO1 | 1 | 0 | 0
HHO2 | 0 | 1 | 0
HHO3 | 0 | 0 | 1
HHO12 | 1 | 1 | 0
HHO13 | 1 | 0 | 1
HHO23 | 0 | 1 | 1
Table 9. Comparison of optimization results of ablation experiments.
Function | Index | HHO | HHO1 | HHO2 | HHO3 | HHO12 | HHO13 | HHO23
F1Best3.3253 × 10−1128.9003 × 10−17108.0823 × 10−22702.7309 × 10−2710
Mean1.5801 × 10−965.7737 × 10−1354.9407 × 10−324002.112 × 10−2530
Std8.343 × 10−961.0541 × 10−13503.8473 × 10−207000
F2Best1.2599 × 10−962.0311 × 10−1482.0311 × 10−1484.0299 × 10−23904.1191 × 10−2390
Mean6.6397 × 10−661.1736 × 10−1222.1428 × 10−123001.5794 × 10−2031.9179 × 10−282
Std3.6367 × 10−652.1428 × 10−1231.1736 × 10−1226.169 × 10−169000
F3Best2.2842 × 10−573.2572 × 10−852.9095 × 10−1651.1407 × 10−1054.5555 × 10−2221.2775 × 10−1352.9634 × 10−176
Mean7.8555 × 10−501.015 × 10−729.6934 × 10−1469.1514 × 10−981.5838 × 10−2027.7083 × 10−1233.2638 × 10−156
Std2.9179 × 10−491.89 × 10−735.2825 × 10−1451.6848 × 10−9804.2008 × 10−1221.7562 × 10−155
F6Best0000000
Mean0000000
Std0000000
F8Best0000000
Mean0000000
Std0000000
F13Best499.5464451.3108300.0151405.3196300.0091300.073300.0109
Mean1054.6712723.4408304.8567876.3008302.4262301.6767300.0219
Std450.29296.526913.9405323.133870.51432.00590.007624
F14Best426.1614412.2925404.3665405.4668400.2645400.092400.0791
Mean482.2627465.128473.499441.9414451.3182414.1834424.0303
Std46.449467.345674.575256.613455.790720.609534.091
F18Best2397.42812011.18042056.7622123.19121972.67311990.57641906.2921
Mean7016.20474514.92813871.40133963.72923275.58633392.98893280.4198
Std6090.24784452.21422301.36513065.63973052.89422258.08252056.7581
F21Best2585.94312529.28512540.44392552.00272529.36822529.28442533.6741
Mean2696.43082593.11232668.25992635.47492598.812589.13422619.374
Std40.947449.252431.757743.634551.259837.264146.626
Table 10. Comparative experimental results of path planning in simple environments.
Map Size | Algorithm | Path Length (Best / Mean / Std) | Iterations (Best / Mean / Std)
20 × 20GWO28.130228.999750.7309803132347.222.55758064
SSA28.174929.317581.320170692225.526.56752403
WOA29.081431.834721.95309774311310
PSO28.130232.251322.5544255964167.318.49954954
GBO28.130229.738361.201097738486916.91153453
SCSO30.360333.011061.84759723111419.9276469
DBO28.130228.925370.6779790361948.922.323132
HHO28.130228.929590.8855225411045.631.52142129
MIHHO27.827928.353750.340219678917.69.570788891
40 × 40GWO60.25964.48774.2990234952668.622.52998003
SSA64.970271.001632.457679449740.723.73955911
WOA70.407975.301362.098619263516.711.83262909
PSO73.1966112.7808320.9327498694.45.738757124
GBO72.557174.090691.725264679641.622.30196802
SCSO73.580775.972821.05744597127.43.306559138
DBO61.375767.257993.7520980251558.323.39539366
HHO69.277771.368531.445605752722.116.78921612
MIHHO58.223361.055621.5561865771332.218.20134305
Table 11. Comparative experimental results of path planning in complex environments.
Map Size | Algorithm | Path Length (Best / Mean / Std) | Iterations (Best / Mean / Std)
20 × 20GWO28.996930.802911.1879166851018.15.78215646
SSA28.856930.683881.5347955372142.522.62864458
WOA32.258734.625251.307826451111.37.616502551
PSO30.932535.9412.579778856111.918.02128371
GBO32.080534.394961.656553955147.129.69268522
SCSO31.96434.28091.702263173110.515.27707069
DBO30.028531.437741.423925406621.116.07240561
HHO30.168531.108081.2084400511440.230.17283547
MIHHO28.856929.701350.793536696415.58.2360994
40 × 40GWO67.099670.996563.519147981453.729.63125227
SSA75.071175.598320.185246225728.817.32563932
WOA76.242677.12130.497834272363.771236166
PSO80.691985.610783.460530845119.424.42312201
GBO74.892976.166240.7844182691245.222.14497686
SCSO75.656977.184060.56317590513.42.319003617
DBO67.277774.286812.734354672341.129.58396713
HHO70.513874.949021.612339986626.528.43413442
MIHHO65.520368.062171.0300741892836.26.908931418
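A minimal sketch of the path-length metric reported in Tables 10 and 11 is given below, assuming the reported length is the summed Euclidean distance between consecutive waypoints of the planned path on the grid map (the usual convention for grid-map path planning). The waypoint coordinates used here are illustrative, not taken from the experiments.

```python
# A minimal sketch of the path-length metric: sum the Euclidean distances between
# consecutive waypoints of a path through the grid map. Waypoints are illustrative.
import numpy as np

def path_length(waypoints):
    """Total Euclidean length of a piecewise-linear path given as an (n, 2) sequence."""
    pts = np.asarray(waypoints, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

if __name__ == "__main__":
    # a short hypothetical path on a 20 x 20 grid: start, two turning points, goal
    path = [(0, 0), (5, 5), (12, 9), (19, 19)]
    print(f"path length = {path_length(path):.4f}")
```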
