Article

UAV Path Planning: A Dual-Population Cooperative Honey Badger Algorithm for Staged Fusion of Multiple Differential Evolutionary Strategies

School of Mechanical Engineering, Sichuan University Jinjiang College, Meishan 620860, China
*
Author to whom correspondence should be addressed.
Biomimetics 2025, 10(3), 168; https://doi.org/10.3390/biomimetics10030168
Submission received: 10 February 2025 / Revised: 9 March 2025 / Accepted: 9 March 2025 / Published: 10 March 2025

Abstract

To address the challenges of low optimization efficiency and premature convergence in existing algorithms for unmanned aerial vehicle (UAV) 3D path planning under complex operational constraints, this study proposes an enhanced honey badger algorithm (LRMHBA). First, a three-dimensional terrain model incorporating threat sources and UAV constraints is constructed to reflect the actual operational environment. Second, LRMHBA improves global search efficiency by optimizing the initial population distribution through the integration of Latin hypercube sampling and an elite population strategy. Subsequently, a stochastic perturbation mechanism is introduced to facilitate the escape from local optima. Furthermore, to adapt to the evolving exploration requirements during the optimization process, LRMHBA employs a differential mutation strategy tailored to populations with different fitness values, utilizing elite individuals from the initialization stage to guide the mutation process. This design forms a two-population cooperative mechanism that enhances the balance between exploration and exploitation, thereby improving convergence accuracy. Experimental evaluations on the CEC2017 benchmark suite demonstrate the superiority of LRMHBA over 11 comparison algorithms. In the UAV 3D path planning task, LRMHBA consistently generated the shortest average path across three obstacle simulation scenarios of varying complexity, achieving the highest rank in the Friedman test.

1. Introduction

With the widespread application of unmanned aerial vehicles (UAVs) in complex terrain exploration, disaster response, military reconnaissance, and other fields, three-dimensional path planning has emerged as a core challenge in autonomous navigation systems. The objective is to determine an optimal flight path that minimizes cost while ensuring the UAV navigates through three-dimensional space without violating operational constraints or colliding with obstacles [1,2].
To address such a complex constrained optimization problem, scholars have proposed a range of algorithms and improvements. Traditional path planning methods include the A-star algorithm [3], Artificial Potential Fields (APF) [4], and RRT* [5]. These methods appeared early, are well developed, and are relatively simple to implement, but they can suffer from slow convergence when planning paths in complex environments. In recent years, swarm intelligence algorithms have become an important approach to the path planning problem; these meta-heuristics simulate the information sharing and mutual learning among biological individuals in nature and offer stronger self-learning, self-adaptation, and self-organization capabilities [6,7,8,9]. Several high-performing algorithms are widely used in UAV path planning, including Particle Swarm Optimization (PSO) [10], the Artificial Bee Colony algorithm (ABC) [11], the Whale Optimization Algorithm (WOA) [12], Harris Hawks Optimization (HHO) [13], the Sparrow Search Algorithm (SSA) [14], the Dung Beetle Optimizer (DBO) [15], and the Crested Porcupine Optimizer (CPO) [16]. Li et al. [17] introduced the FP-GPSO algorithm, which uses Fermat points for grouped particle swarm optimization to address the path-planning challenge of composite unmanned aerial vehicles; experimental findings demonstrated the effectiveness of FP-GPSO in optimizing UAV path planning. Han et al. [18] enhanced the conventional artificial bee colony algorithm with a multi-strategy evolutionary database, leveraging a brain-inspired evolutionary learning framework to improve its cognitive abilities; simulation results show that the algorithm produces UAV trajectories with better fuel economy and higher safety. Dai et al. [19] proposed a novel whale optimization algorithm (NWOA) that improves convergence speed and avoids local optima through adaptive techniques and virtual obstacles, and introduces improved potential field factors to enhance the dynamic obstacle avoidance ability of mobile robots. Tang et al. [20] improved the Harris Hawks algorithm with several strategies, such as dimensional learning-based hunting (DLH) search, and applied it to robot path planning; the results demonstrated the algorithm’s clear advantage in path planning performance. Cheng et al. [21] developed an improved SSA for UAV path planning based on uniform experimental design theory and obtained good experimental results. Chen et al. [22] added exponentially decreasing inertia weights and adaptive Cauchy mutation strategies to the dung beetle optimizer to solve UAV path planning problems in complex 3D environments. Liu et al. [23] proposed a periodic retreat strategy and a visual-auditory synergistic defense mechanism to improve the Crested Porcupine Optimizer (ICPO) for better obstacle avoidance and reduced energy consumption of UAVs.
The Honey Badger Algorithm (HBA) [24] is a nature-inspired optimization algorithm proposed by Hashim et al. in 2021. It is designed to identify optimal solutions by mimicking the intelligent foraging behavior of the honey badger. HBA is characterized by its strong optimization capability, robustness, and simplicity of implementation. Numerous researchers have explored and applied HBA across various fields, yielding promising experimental results, as summarized in Table 1.
Currently, research on the application of the Honey Badger Algorithm (HBA) in UAV path planning remains limited. Hu et al. [33] proposed an improved variant of HBA, named SaCHBA_PDN, which incorporates a Bernoulli shift map, segment optimal decreasing neighborhoods, and adaptive level crossing for path planning. While the algorithm demonstrates strong performance in two-dimensional path planning, it does not address three-dimensional path planning. Therefore, this study explores the application of HBA to tackle the NP-hard problem of UAV 3D path planning [34].
According to the “No Free Lunch (NFL)” theorem [35], no single algorithm can effectively solve all optimization problems. To enhance the applicability of the Honey Badger Algorithm (HBA) for UAV path planning and address its limitations, such as slow convergence speed, an imbalance between exploration and exploitation, and susceptibility to local optima [36,37,38], this study proposes an improved variant, named LRMHBA.
First, Latin hypercube sampling combined with an elite strategy [39] is employed during initialization to ensure a more uniform population distribution and enhance global search efficiency. Second, a stochastic perturbation strategy inspired by the Whale Optimization Algorithm is integrated to improve the algorithm’s ability to escape local optima. Finally, a staged dual-population co-evolutionary strategy incorporating multiple differential evolution variants is introduced. The population is divided into two subpopulations based on their adaptive capabilities. Considering the dynamic trade-off between exploration and exploitation during optimization, different differential evolution variants are assigned to the two subpopulations at various evolutionary stages. Additionally, the elite population obtained during initialization is leveraged to guide the differential mutation process, improving the balance between exploration and exploitation and enhancing the algorithm’s global optimization capability.
The following are this paper’s primary contributions:
  • Effective fusion of the randomized perturbation strategy in the whale algorithm and the honey badger algorithm.
  • Proposing a staged two-population coevolutionary strategy that incorporates multiple differential variation approaches.
  • Proposing an improved HBA algorithm (LRMHBA) that combines Latin hypercubic sampling with an elite strategy, a randomized perturbation strategy, and a staged two-population co-evolutionary strategy that fuses multiple differential variability approaches.
  • Comparative performance tests were conducted on the LRMHBA algorithm against various competing algorithms, including highly referenced algorithms and their variants, recently developed high-performance algorithms, the champion algorithm, as well as the original HBA and its variant, using the CEC2017 test suite, with evaluations covering both low-dimensional (30-dimensional) and high-dimensional (100-dimensional) function optimization. Statistical analyses, including the Wilcoxon rank-sum test and Friedman test, along with ablation and exploration-exploitation experiments, were performed to validate the advancements of LRMHBA.
  • The UAV flight cost is defined, and three UAV 3D simulation scenarios ranging from simple to complex are established. The path planning performance of the LRMHBA algorithm in each scenario is compared and analyzed against the other competing algorithms, and the outcomes likewise demonstrate the superiority of LRMHBA on the UAV path planning problem.
The paper is organized as follows: Section 2 describes the UAV path planning modeling approach. Section 3 describes the principles of the HBA algorithm. Section 4 describes the specific improvements to the HBA algorithm. Section 5 tests and analyzes the LRMHBA algorithm on the CEC2017 test suite. Section 6 presents the UAV path planning simulation experiments and their analysis. Section 7 concludes the paper.

2. UAV Path Planning Modeling

2.1. Environmental Modeling

Environmental modeling refers to the process of converting physical environmental information into digital models that can be processed by computer algorithms, serving as the prerequisite and foundation for UAV path planning.

2.1.1. Base Terrain Model

We adopted a commonly used functional simulation method to generate realistic terrain patterns [40], as expressed in Equation (1):
$z_1(x, y) = k \cdot \left[ \sin(y + a) + b \cdot \sin x + c \cdot \cos\left( d \cdot \sqrt{x^2 + y^2} \right) + e \cdot \cos y + f \cdot \sin\left( g \cdot \sqrt{x^2 + y^2} \right) \right]$  (1)
where $z_1$ is the height value at coordinate $(x, y)$, and $k, a, b, c, d, e, f, g$ are scaling coefficients controlling the digital terrain characteristics.
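For illustration, a minimal Python sketch of Equation (1) is given below; the coefficient values are placeholders chosen for demonstration, since the paper does not list the constants used in its scenarios.

```python
import numpy as np

def base_terrain(x, y, k=0.5, a=1.0, b=0.2, c=0.3, d=0.6, e=0.4, f=0.1, g=0.5):
    """Evaluate the base terrain height z1(x, y) of Equation (1).

    The coefficient values are illustrative placeholders, not the constants
    used in the paper's scenarios.
    """
    r = np.sqrt(x**2 + y**2)
    return k * (np.sin(y + a) + b * np.sin(x)
                + c * np.cos(d * r) + e * np.cos(y)
                + f * np.sin(g * r))

# Example: sample the terrain on a 100 x 100 km grid
xs, ys = np.meshgrid(np.linspace(0, 100, 200), np.linspace(0, 100, 200))
z1 = base_terrain(xs, ys)
```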

2.1.2. Mountain Model

Natural mountainous terrain poses the most significant threat to UAV navigation. We employ an exponential superposition function to characterize such features [41], as defined in Equation (2):
$z_2(x, y) = \sum_{i} h_i \cdot \exp\left[ -\dfrac{(x - x_i)^2}{a_i^2} - \dfrac{(y - y_i)^2}{b_i^2} \right]$  (2)
where $z_2$ is the height value at the point $(x, y)$; $h_i$ is the peak altitude of the $i$th mountain; $(x_i, y_i)$ are the centroid coordinates of the $i$th mountain; $a_i$ and $b_i$ are the slope parameters of the $i$th mountain along the x and y directions.
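A corresponding sketch of Equation (2) superposes the exponential peaks; the peak parameters below are illustrative and are not the scenario values of Table 8.

```python
import numpy as np

def mountain_terrain(x, y, peaks):
    """Superpose exponential mountain peaks following Equation (2).

    `peaks` is a list of (h_i, x_i, y_i, a_i, b_i) tuples with illustrative
    values; the actual scenario parameters are given in Table 8.
    """
    z2 = np.zeros_like(x, dtype=float)
    for h, xc, yc, a, b in peaks:
        z2 += h * np.exp(-((x - xc) / a) ** 2 - ((y - yc) / b) ** 2)
    return z2

demo_peaks = [(2.0, 30.0, 40.0, 8.0, 10.0), (1.5, 70.0, 60.0, 12.0, 9.0)]
xs, ys = np.meshgrid(np.linspace(0, 100, 200), np.linspace(0, 100, 200))
z2 = mountain_terrain(xs, ys, demo_peaks)
```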

2.2. Operational Constraints

The proposed path planning model simulates UAV operations in both simple and complex hazardous environments, incorporating multiple constraint costs [42,43] to enhance the practical relevance and applicability of the study.

2.2.1. Flight Distance Cost

Limited by fuel capacity and consumption rate, UAVs operate under strict flight range constraints. The distance cost function balancing flight efficiency and obstacle avoidance is formulated as follows:
$C_L = \sum_{i=1}^{n-1} \sqrt{ (x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2 + (z_{i+1} - z_i)^2 }$  (3)
where $(x_i, y_i, z_i)$ denotes the 3D coordinates of the $i$th waypoint and $n$ is the total number of waypoints.
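A compact Python sketch of Equation (3), assuming the path is given as an (n, 3) array of waypoints:

```python
import numpy as np

def distance_cost(waypoints):
    """Total Euclidean path length C_L of Equation (3).

    `waypoints` is an (n, 3) array of (x, y, z) coordinates.
    """
    diffs = np.diff(np.asarray(waypoints, dtype=float), axis=0)
    return float(np.sum(np.linalg.norm(diffs, axis=1)))
```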

2.2.2. Flight Altitude Cost

Maintaining appropriate flight altitude enhances fuel efficiency and operational safety. To ensure stealth performance, UAVs require stable low-altitude flight patterns. The altitude deviation cost is defined in Equation (4) with its mean reference in Equation (5):
$C_H = \sum_{i=1}^{n} \left( z_i - \bar{z} \right)^2$  (4)
$\bar{z} = \dfrac{1}{n} \sum_{i=1}^{n} z_i$  (5)
where $z_i$ is the altitude at the $i$th waypoint and $\bar{z}$ is the mean flight altitude.
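Equations (4) and (5) can be evaluated in the same way:

```python
import numpy as np

def altitude_cost(waypoints):
    """Altitude-deviation cost C_H of Equations (4) and (5)."""
    z = np.asarray(waypoints, dtype=float)[:, 2]
    return float(np.sum((z - z.mean()) ** 2))
```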

2.2.3. Turning Maneuver Cost

Sharp turns jeopardize UAV stability and controllability, so the turning angle should not exceed the preset maximum turning angle $\Phi$. The cost function for flight turning is presented as follows:
$C_C = \sum_{i=1}^{n-2} \left( \cos\Phi - \cos\theta_i \right)$  (6)
$\cos\theta_i = \dfrac{\mathbf{b}_i^{T} \cdot \mathbf{b}_{i+1}}{\left\| \mathbf{b}_i \right\| \cdot \left\| \mathbf{b}_{i+1} \right\|}$  (7)
where $\theta_i$ denotes the turning angle between adjacent path segments; $\Phi$ is the maximum allowable turning angle; $\mathbf{b}_i$ is the direction vector of the $i$th path segment.
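The sketch below follows the reconstruction of Equation (6) used above; the maximum turning angle value is an assumed placeholder rather than a setting reported in the paper.

```python
import numpy as np

def turning_cost(waypoints, phi_max=np.pi / 4):
    """Turning maneuver cost following Equations (6) and (7).

    The summed term cos(phi_max) - cos(theta_i) mirrors the reconstructed
    Equation (6); phi_max is an assumed maximum turning angle.
    """
    pts = np.asarray(waypoints, dtype=float)
    segs = np.diff(pts, axis=0)                      # direction vectors b_i
    cost = 0.0
    for b1, b2 in zip(segs[:-1], segs[1:]):
        cos_theta = np.dot(b1, b2) / (np.linalg.norm(b1) * np.linalg.norm(b2))
        cost += np.cos(phi_max) - cos_theta
    return cost
```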

2.2.4. Terrain Clearance Constraint

To prevent terrain collisions during mission execution, planned trajectories must maintain altitude superiority over terrain with designated safety margins. The terrain clearance cost is formulated as follows:
$C_M = \sum_{i=1}^{n} f(d_i)$  (8)
$f(d_i) = \begin{cases} \left( 1 - \dfrac{d_i}{2SD} \right)^2, & \text{if } d_i < 2SD \\ 0, & \text{otherwise} \end{cases}$  (9)
$d_i = z_i - h_i$  (10)
where $z_i$ is the flight altitude at the $i$th waypoint, $h_i$ denotes the terrain elevation beneath the $i$th waypoint, and $SD$ denotes the mandatory safety clearance (0.2 km in this study).
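A sketch of Equations (8)-(10), assuming a user-supplied callable that returns the terrain elevation beneath a waypoint:

```python
import numpy as np

def clearance_cost(waypoints, terrain_height, sd=0.2):
    """Terrain clearance cost of Equations (8)-(10).

    `terrain_height(x, y)` returns the terrain elevation under a waypoint;
    sd is the mandatory safety clearance (0.2 km in this study).
    """
    cost = 0.0
    for x, y, z in np.asarray(waypoints, dtype=float):
        d = z - terrain_height(x, y)        # clearance above the terrain
        if d < 2 * sd:
            cost += (1.0 - d / (2 * sd)) ** 2
    return cost
```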

2.2.5. Obstacle Threat Cost

UAVs must avoid collisions with obstacles and radar threat zones modeled as vertical cylinders where flight operations are strictly prohibited at all altitudes. The threat cost function enforces safety buffers around threat sources, as defined in Equations (11)–(13):
$C_T = \sum_{i=1}^{n} \sum_{k=1}^{m} f(d_{ik})$  (11)
$f(d_{ik}) = \begin{cases} \left( 1 - \dfrac{d_{ik}}{2 R_k \cdot SF} \right)^2, & \text{if } d_{ik} < 2 R_k \cdot SF \\ 0, & \text{otherwise} \end{cases}$  (12)
$d_{ik} = \sqrt{ (x_i - x_k)^2 + (y_i - y_k)^2 }$  (13)
where $(x_k, y_k)$ denotes the centroid coordinates of the $k$th threat zone; $R_k$ denotes the radius of the $k$th cylindrical threat zone; $SF$ is the safety factor for threat zones (1.2 in this study); and $m$ is the number of threat zones.
As illustrated in Figure 1, $O_k$ denotes the centroid of the $k$th threat source. The penalty activates when $d_{ik} < 2 R_k \cdot SF$, with the threat cost increasing quadratically as the distance decreases.
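A sketch of Equations (11)-(13) for a list of cylindrical threat zones:

```python
import numpy as np

def threat_cost(waypoints, threats, sf=1.2):
    """Cylindrical threat cost of Equations (11)-(13).

    `threats` is a list of (x_k, y_k, R_k) tuples; sf is the safety factor
    (1.2 in this study).
    """
    cost = 0.0
    for x, y, _z in np.asarray(waypoints, dtype=float):
        for xk, yk, rk in threats:
            d = np.hypot(x - xk, y - yk)    # horizontal distance to centroid
            if d < 2 * rk * sf:
                cost += (1.0 - d / (2 * rk * sf)) ** 2
    return cost
```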
If the proposed path is feasible, the overall cost function is the weighted sum of the individual cost terms; otherwise, the total cost function is assigned an excessively high penalty value, set to 10,000 in this paper:
$C_{total} = \begin{cases} \mu_L C_L + \mu_H C_H + \mu_C C_C + \mu_M C_M + \mu_T C_T, & \text{if path feasible} \\ 10{,}000, & \text{otherwise} \end{cases}$  (14)
where $\mu_L, \mu_H, \mu_C, \mu_M, \mu_T$ are weighting coefficients with $\sum \mu = 1$.
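Combining the pieces, the sketch below evaluates Equation (14) by reusing the cost helpers sketched in the previous subsections; the weight split is illustrative, since the paper only requires the weights to sum to 1, and the feasibility check is left as a user-supplied predicate.

```python
def total_cost(waypoints, terrain_height, threats,
               weights=(0.4, 0.15, 0.15, 0.15, 0.15),
               penalty=10_000.0, feasible=None):
    """Weighted total cost of Equation (14).

    `weights` = (mu_L, mu_H, mu_C, mu_M, mu_T) must sum to 1; the split used
    here is illustrative. `feasible` is an optional predicate on the path
    (e.g. a collision/threat violation check); infeasible paths receive the
    fixed penalty. Reuses distance_cost, altitude_cost, turning_cost,
    clearance_cost, and threat_cost sketched above.
    """
    if feasible is not None and not feasible(waypoints):
        return penalty
    mu_l, mu_h, mu_c, mu_m, mu_t = weights
    return (mu_l * distance_cost(waypoints)
            + mu_h * altitude_cost(waypoints)
            + mu_c * turning_cost(waypoints)
            + mu_m * clearance_cost(waypoints, terrain_height)
            + mu_t * threat_cost(waypoints, threats))
```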

3. Honey Badger Algorithm (HBA)

This section outlines the principle of the Honey Badger Algorithm (HBA), where the parameters $r_1$ to $r_7$ appearing in the formulas are random numbers uniformly distributed in [0, 1].

3.1. Population Initialization

The population is randomly initialized within predefined boundaries using:
$X_i = lb_i + r_1 \cdot (ub_i - lb_i)$  (15)
where $X_i$ is the position vector of the $i$th individual, and $lb$ and $ub$ denote the lower and upper bounds of the search space.

3.2. Excavation Phase

Honey badgers perform cardioid-shaped movements during prey excavation, mathematically modeled as:
$X_{new} = X_{prey} + Dir \cdot \beta \cdot I \cdot X_{prey} + Dir \cdot r_2 \cdot \alpha \cdot d_i \cdot \left| \cos(2 \pi r_3) \times \left[ 1 - \cos(2 \pi r_4) \right] \right|$  (16)
where $X_{new}$ denotes the updated position of the individual honey badger; $X_{prey}$ denotes the global best position; $\beta$ is the food acquisition capability factor, $\beta \geq 1$; $I$ is the prey smell intensity (Equations (17)–(19)); $\alpha$ is the density factor governing the exploration-exploitation transition (Equation (20)); $Dir$ is the search direction operator (Equation (21)).
$I_i = r_5 \cdot \dfrac{S}{4 \pi d_i^2}$  (17)
$S = (X_i - X_{i+1})^2$  (18)
$d_i = X_{prey} - X_i$  (19)
where $S$ is the source intensity concentration and $d_i$ denotes the distance between the prey and the $i$th honey badger.
$\alpha = C \cdot \exp\left( -\dfrac{t}{t_{max}} \right)$  (20)
where $t_{max}$ is the maximum number of iterations and $C$ is a constant $\geq 1$ (typically $C = 2$).
$Dir = \begin{cases} 1, & \text{if } r_6 \leq 0.5 \\ -1, & \text{otherwise} \end{cases}$  (21)

3.3. Honey Harvesting Phase

Simulating honey badgers following honeyguide birds to beehives:
$X_{new} = X_{prey} + Dir \cdot r_7 \cdot \alpha \cdot d_i$  (22)
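The two update rules can be condensed into a single per-individual step, sketched below in Python; the 0.5 probability used to switch between the digging and honey phases, and the defaults β = 6 and C = 2, follow the original HBA paper [24] rather than values stated here.

```python
import numpy as np

def hba_update(x_i, x_next, x_prey, t, t_max, beta=6.0, c=2.0, rng=None):
    """One HBA position update (Equations (16)-(22)) for a single individual.

    x_i, x_next and x_prey are position vectors (x_next is the neighbouring
    individual used in the source-intensity term).
    """
    rng = np.random.default_rng(rng)
    r2, r3, r4, r5, r6, r7 = rng.random(6)
    alpha = c * np.exp(-t / t_max)                        # density factor, Eq. (20)
    d_i = x_prey - x_i                                    # distance to prey, Eq. (19)
    s = (x_i - x_next) ** 2                               # source intensity, Eq. (18)
    intensity = r5 * s / (4 * np.pi * d_i ** 2 + 1e-12)   # smell intensity, Eq. (17)
    direction = 1.0 if r6 <= 0.5 else -1.0                # search direction, Eq. (21)
    if rng.random() < 0.5:                                # digging phase, Eq. (16)
        return (x_prey + direction * beta * intensity * x_prey
                + direction * r2 * alpha * d_i
                * np.abs(np.cos(2 * np.pi * r3) * (1 - np.cos(2 * np.pi * r4))))
    return x_prey + direction * r7 * alpha * d_i          # honey phase, Eq. (22)
```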

4. LRMHBA Algorithm

4.1. Hybrid LHS Initialization and Elite Guidance

Conventional randomized initialization in the honey badger algorithm may result in suboptimal population distribution and diminished diversity, leading to increased stochastic uncertainty during optimization. To address this limitation, we propose a hybrid initialization strategy integrating Latin Hypercube Sampling (LHS), which enhances spatial uniformity and improves global exploration capabilities.
Latin Hypercube Sampling (LHS) [44,45], proposed by McKay et al. in 1979, is a stratified sampling technique for multivariate parameter distributions. Compared with random sampling, LHS demonstrates superior space-filling properties through its uniform stratification mechanism, particularly when handling limited sample sizes. This method ensures comprehensive coverage of the entire parameter space while capturing tail distribution characteristics.
To address the uneven initialization issue, we implement a hybrid initialization strategy, as shown in Figure 2:
  • Generate N samples using LHS for uniform spatial coverage.
  • Create another N sample through random sampling.
  • Select the top N individuals by fitness ranking from the combined pool.
  • Extract the elite 20% individuals to guide the proposed dual-population framework.
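A possible implementation of this initialization procedure is sketched below; it uses SciPy's LatinHypercube sampler as one convenient LHS implementation and assumes a minimization objective.

```python
import numpy as np
from scipy.stats import qmc

def hybrid_init(obj, n, dim, lb, ub, elite_frac=0.2, rng=None):
    """Hybrid LHS + random initialization with elite extraction (Section 4.1).

    Generates N LHS samples and N random samples, keeps the best N by
    fitness, and returns the population plus its top `elite_frac` members.
    """
    rng = np.random.default_rng(rng)
    lhs = qmc.LatinHypercube(d=dim, seed=rng)
    pool = np.vstack([
        lb + lhs.random(n) * (ub - lb),            # LHS samples
        lb + rng.random((n, dim)) * (ub - lb),     # random samples
    ])
    fitness = np.apply_along_axis(obj, 1, pool)
    order = np.argsort(fitness)                    # minimization assumed
    population = pool[order[:n]]
    elites = population[: max(1, int(elite_frac * n))]
    return population, elites
```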
Figure 2. Schematic of hybrid LHS initialization and elite guidance.

4.2. Stochastic Perturbation Strategy

Premature Convergence Analysis

In the original HBA method, position updates are primarily influenced by the global best individual $X_{prey}$, which can lead to premature convergence as the population tends to cluster around local optima. To enhance global search capability, we introduce a stochastic perturbation mechanism inspired by the Whale Optimization Algorithm (WOA), regulated by the adaptive coefficient $A$. When $|A| \geq 1$, the search strategy incorporates stochastic perturbation; otherwise, the honey badger's position is updated based on $X_{prey}$. The coefficient $A$ is calculated as follows:
$A = 2 m \cdot r_8 - m$  (23)
$m = 2 - \dfrac{2 \, FES}{MaxFES}$  (24)
where $r_8$ is a random number, $r_8 \in [0, 1]$; $m$ decreases linearly from 2 to 0; $FES$ is the current number of function evaluations; $MaxFES$ is the maximum number of function evaluations.
When $|A| \geq 1$, the modified position update equations and the redefined distance metric $d_i$ are expressed as:
$X_{new} = X_{rand} + Dir \cdot \beta \cdot I \cdot X_{rand} + Dir \cdot r_2 \cdot \alpha \cdot d_i \cdot \left| \cos(2 \pi r_3) \times \left[ 1 - \cos(2 \pi r_4) \right] \right|$  (25)
$X_{new} = X_{rand} + Dir \cdot r_7 \cdot \alpha \cdot d_i$  (26)
$d_i = X_{rand} - X_i$  (27)
where $X_{rand}$ is a randomly selected individual from the current population. The coefficient $A$ decreases linearly throughout the iteration process. In the initial stage, a large $|A|$ activates the random perturbation strategy to prevent premature population aggregation and enhance coverage of the search space. As $A$ decreases during the iterations, the algorithm gradually transitions to the precise exploitation of promising regions.
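The coefficient and the resulting branch can be sketched as follows:

```python
import numpy as np

def perturbation_coefficient(fes, max_fes, rng=None):
    """Adaptive coefficient A of Equations (23) and (24).

    |A| >= 1 selects the stochastic perturbation update around a random
    individual (Equations (25)-(27)); otherwise the standard update around
    X_prey is used.
    """
    rng = np.random.default_rng(rng)
    m = 2.0 - 2.0 * fes / max_fes        # m decreases linearly from 2 to 0
    return 2.0 * m * rng.random() - m    # A is uniform in [-m, m]

# Branching logic used inside the main loop (x_rand, x_prey assumed given):
# x_ref = x_rand if abs(A) >= 1 else x_prey
```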

4.3. The Staged Dual-Population Co-Evolutionary Strategy Integrating Multiple Differential Evolution Variants

4.3.1. Motivation and Framework

To improve the convergence efficiency and accuracy of the HBA algorithm, we propose a staged dual-population co-evolutionary strategy integrating multiple differential evolution variants, which is executed whenever the global best value has not improved for 150 consecutive function evaluations. This approach leverages the complementary characteristics of diverse differential evolution variants to balance exploration and exploitation dynamically.

4.3.2. DE Mutation Operators

The DE Mutation Operator is one of the core operators of the Differential Evolution (DE) algorithm [46], which serves to generate new candidate solutions (variant individuals) to guide the population to search towards a more optimal solution space. The following canonical DE mutation strategies are adopted due to their proven efficacy [47,48,49,50,51,52]:
DE/rand/1:  $V_{i,G} = X_{r9,G} + F \cdot (X_{r10,G} - X_{r11,G})$  (28)
DE/rand/2:  $V_{i,G} = X_{r9,G} + F \cdot (X_{r10,G} - X_{r11,G}) + F \cdot (X_{r12,G} - X_{r13,G})$  (29)
DE/best/1:  $V_{i,G} = X_{best,G} + F \cdot (X_{r9,G} - X_{r10,G})$  (30)
DE/best/2:  $V_{i,G} = X_{best,G} + F \cdot (X_{r9,G} - X_{r10,G}) + F \cdot (X_{r11,G} - X_{r12,G})$  (31)
DE/current-to-best/1:  $V_{i,G} = X_{i,G} + F \cdot (X_{best,G} - X_{i,G}) + F \cdot (X_{r9,G} - X_{r10,G})$  (32)
where $X_{rk,G}$ denotes a randomly selected individual from generation $G$; $X_{best,G}$ is the global best individual; $X_{i,G}$ is the $i$th individual of generation $G$; $V_{i,G}$ is the mutated individual of generation $G$; and $F$ is the scaling factor.
Additionally, Layeb, A. et al. [53] (2024) proposed two more differential mutation strategies, which have been validated by extensive experiments:
DE/mean-current/2:  $V_{i,G} = X_{c1,G} + F \cdot (X_{c1,G} - X_{i,G}) + F \cdot (X_{c2,G} - X_{i,G})$  (33)
DE/mean-current-best/2:  $V_{i,G} = X_{best,G} + F \cdot (X_{c1,G} - X_{i,G}) + F \cdot (X_{c2,G} - X_{i,G})$  (34)
where
$X_{c1,G} = \dfrac{X_{r9,G} + X_{r10,G}}{2}$  (35)
$X_{c2,G} = \dfrac{X_{r9,G} + X_{best,G}}{2}$  (36)
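For reference, the operators of Equations (28)-(36) can be written compactly as below; F = 0.5 is a common default scaling factor, not a value reported in this paper.

```python
import numpy as np

def de_mutations(pop, i, best, f=0.5, rng=None):
    """Canonical and mean-current DE mutations (Equations (28)-(36)).

    Returns a dict of mutant vectors for individual i of population `pop`
    (a 2-D array); `best` is the global best position vector.
    """
    rng = np.random.default_rng(rng)
    r = rng.choice([k for k in range(len(pop)) if k != i], size=5, replace=False)
    x = pop[i]
    xr = [pop[k] for k in r]
    xc1 = (xr[0] + xr[1]) / 2                      # Eq. (35)
    xc2 = (xr[0] + best) / 2                       # Eq. (36)
    return {
        "rand/1": xr[0] + f * (xr[1] - xr[2]),
        "rand/2": xr[0] + f * (xr[1] - xr[2]) + f * (xr[3] - xr[4]),
        "best/1": best + f * (xr[0] - xr[1]),
        "best/2": best + f * (xr[0] - xr[1]) + f * (xr[2] - xr[3]),
        "current-to-best/1": x + f * (best - x) + f * (xr[0] - xr[1]),
        "mean-current/2": xc1 + f * (xc1 - x) + f * (xc2 - x),
        "mean-current-best/2": best + f * (xc1 - x) + f * (xc2 - x),
    }
```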

4.3.3. Dual-Population Mutation Method with Elite Individuals

A single mutation strategy cannot address the diverse evolutionary demands across population members: high-fitness individuals (clustered near the current optimum) require intensified exploitation, while low-fitness individuals (distributed in peripheral regions) demand enhanced exploration [54].
To resolve this dichotomy, we implement fitness-based bipartite grouping:
  • Group A (Top 50% fitness): Focused on precision exploitation.
  • Group B (Bottom 50% fitness): Dedicated to spatial exploration.
At the same time, considering that the population’s search behavior and needs evolve throughout the iteration process, we have designed the following configuration of differential evolution strategies to better address the balance between global exploration and local exploitation. This configuration allows the algorithm to adapt its strategy dynamically throughout the optimization process, thereby finding the optimal balance between global and local optima and ultimately improving the optimization performance.
1. Phase I (initial 2/3 of iterations): exploration emphasis.
  • Group A: DE/mean-current/2.
  • Group B: DE/rand/1.
Group A maintains exploitation potential, while Group B breaks out of local optima through randomized search.
2. Phase II (final 1/3 of iterations): exploitation emphasis.
  • Group A: DE/current-to-best/2.
  • Group B: DE/mean-current/2.
Group A focuses on fine-grained search in the optimal neighborhood, while Group B maintains moderate exploration to avoid premature convergence.
Throughout the process, the elite individuals selected during initialization guide the mutation process in DE/mean-current/2. Consequently, Formula (36) is updated to:
$X_{c2,G} = \dfrac{X_{r9,G} + X_{elite}}{2}$
where $X_{elite}$ represents an elite individual obtained during initialization.
After mutation and crossover, the individuals are updated using a greedy strategy:
$X_{i,G+1} = \begin{cases} u_{i,G}, & \text{if } f(u_{i,G}) \leq f(X_{i,G}) \\ X_{i,G}, & \text{otherwise} \end{cases}$
where $u_{i,G}$ is the trial vector produced by crossover and $f(\cdot)$ denotes the fitness evaluation.
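A simplified sketch of one staged dual-population step, following the group and phase assignments described in this subsection; the scaling factor F, the crossover rate CR, and the random-index handling are assumed defaults rather than the authors' exact settings.

```python
import numpy as np

def staged_dual_population_step(pop, fitness, obj, elite, best, fes, max_fes,
                                f=0.5, cr=0.9, rng=None):
    """One staged dual-population DE step (Section 4.3.3), a simplified sketch.

    The population is split by fitness into Group A (top 50%) and Group B
    (bottom 50%); the mutation assigned to each group switches after 2/3 of
    the evaluation budget, and the elite individual replaces X_best when
    forming X_c2 in DE/mean-current/2.
    """
    rng = np.random.default_rng(rng)
    n, dim = pop.shape
    order = np.argsort(fitness)                 # ascending: best individuals first
    group_a = set(order[: n // 2].tolist())
    early = fes <= 2 * max_fes / 3

    def mean_current2(i):                       # DE/mean-current/2 with elite guidance
        r9, r10 = rng.choice(n, size=2, replace=False)
        xc1 = (pop[r9] + pop[r10]) / 2
        xc2 = (pop[r9] + elite) / 2             # elite replaces X_best in X_c2
        return xc1 + f * (xc1 - pop[i]) + f * (xc2 - pop[i])

    def rand1(i):                               # DE/rand/1
        r9, r10, r11 = rng.choice(n, size=3, replace=False)
        return pop[r9] + f * (pop[r10] - pop[r11])

    def current_to_best(i):                     # DE/current-to-best mutation
        r9, r10 = rng.choice(n, size=2, replace=False)
        return pop[i] + f * (best - pop[i]) + f * (pop[r9] - pop[r10])

    for i in range(n):
        if i in group_a:
            v = mean_current2(i) if early else current_to_best(i)
        else:
            v = rand1(i) if early else mean_current2(i)
        mask = rng.random(dim) < cr             # binomial crossover
        mask[rng.integers(dim)] = True          # keep at least one mutant gene
        u = np.where(mask, v, pop[i])
        fu = obj(u)
        if fu <= fitness[i]:                    # greedy selection
            pop[i], fitness[i] = u, fu
    return pop, fitness
```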
In summary, the three improvement strategies work together to form an integrated optimization mechanism through phased coordination. Latin hypercube sampling ensures a geometrically homogeneous initialization of the population, with the top 20% of elite individuals guiding subsequent dual-population operations as potential global optima. During iterations, the stochastic perturbation strategy dynamically prevents premature convergence by adaptively adjusting the coefficient. Simultaneously, the staged dual-population co-evolutionary strategy integrates population characteristics, differential evolution mutation operator characteristics, and the evolving demands of the iterative process, gradually enhancing local exploitation while preserving exploration capabilities. This phased collaboration enables LRMHBA to maintain a balanced exploration-exploitation dynamic throughout the entire optimization process.

4.4. Pseudocode and Flowchart of LRMHBA

Figure 3 displays the flowchart of LRMHBA, which combines the three tactics discussed in this section. Algorithm 1 displays the pseudocode.
Algorithm 1 Pseudocode of LRMHBA
1: Initialize population X using Latin hypercube sampling and elite strategy
2: while FES ≤ MaxFES do
3:   Calculate α, I, m, A using Equations (17), (20), (23) and (24).
4:   for each individual i do
5:     if |A| ≥ 1 then
6:       Update the position using a random individual via Equations (25) and (26).
7:     else
8:       Update the position using X_prey via Equations (16) and (22).
9:     end if
10:    Update if a better solution is found.
11:   end for
12:   if FES > 150 and no improvement in the last 150 evaluations then
13:     if FES ≤ 2 × MaxFES/3 then
14:       Set population ratios: 0% mean-current/2, 50% current-to-best/2, 50% rand/1
15:     else
16:       Set population ratios: 50% mean-current/2, 50% current-to-best/2, 0% rand/1
17:     end if
18:     Sort the population by fitness and divide it into groups
19:     for each group do
20:       if Group 1 (mean-current/2) then
21:         Apply mutation using Equation (33).
22:       else if Group 2 (current-to-best/2) then
23:         Apply mutation using Equation (32).
24:       else if Group 3 (rand/1) then
25:         Apply mutation using Equation (28).
26:       end if
27:       Apply binomial crossover with the crossover probability.
28:       Update if a better solution is found.
29:     end for
30:   end if
31:   Update X_prey and Food_Score if a better solution is found.
32: end while
33: Return Food_Score, X_prey.

4.5. Time Complexity Analysis

Let $N$ denote the population size, $D$ the problem dimension, and $MaxFES$ the maximum number of function evaluations. The original Honey Badger Algorithm (HBA) exhibits a time complexity of $O(MaxFES \cdot N \cdot D)$. Relative to HBA, the extra operations of the enhanced LRMHBA algorithm and their respective complexities are:
  • Latin hypercube sampling during initialization: $O(N \cdot D)$
  • Elite strategy sorting in the initialization phase: $O(N \cdot \log N)$
  • Stochastic perturbation strategy: $O(1)$
  • Staged dual-population mutation method: $O(N \cdot \log N / 150)$ amortized per function evaluation
Among these, the time complexity of the Latin hypercube sampling and stochastic perturbation strategies is negligible, while the dual-population mutation method is triggered at most once every 150 function evaluations, so its computation time is also approximately negligible relative to the main loop as the problem size increases. The total time complexity of LRMHBA is therefore $O\left(MaxFES \cdot (N \cdot D + N \cdot \log N / 150)\right)$, which is approximately equal to $O(MaxFES \cdot N \cdot D)$. Hence, the LRMHBA algorithm adds essentially no extra computational cost compared with the baseline HBA algorithm.

5. Algorithm Performance Testing and Analysis

This section evaluates the effectiveness of the LRMHBA algorithm using the CEC2017 benchmark suite. The CEC2017 test suite contains 29 functions of various types. It provides a comprehensive evaluation of the LRMHBA algorithm’s performance across different problem types. The dimensions and search ranges of the test functions within the benchmark are detailed in Appendix A.
To analyze the LRMHBA algorithm’s performance in terms of convergence accuracy, convergence speed, and exploration-exploitation capabilities, the following four categories of algorithms were selected for comparison:
  • Classical high-citation algorithms and their variants: Particle swarm optimization (PSO), Differential evolution (DE), Whale optimization algorithm (WOA), AOA (Arithmetic optimization algorithm, 2021) [55], GQPSO (Gaussian quantum-behaved particle swarm optimization, 2010) [56];
  • Recently proposed algorithms and their variants (within the past two years): PO (Parrot Optimizer, 2024) [57] and QHDBO (2024) [58], a quantum-computing and multi-strategy fusion variant of the Dung Beetle Optimizer (DBO);
  • Champion algorithm: LSHADE [59];
  • HBA algorithm and its variant: HBA, SaCHBA_PDN.
The parameters of every algorithm were chosen based on the values suggested by the corresponding sources, as detailed in Table 2. The experimental analysis included performance testing on functions of different dimensions, ablation studies, and exploration-exploitation capability experiments.
To ensure a fair comparison, the maximum number of function evaluations (FES) was set to 100,000 and the population size for all algorithms was set to 100. To reduce experimental randomness and enhance the reliability of the results, each algorithm was run independently 30 times. The optimal value, mean value, and standard deviation on the test functions were calculated for both low-dimensional (30-dimensional) and high-dimensional (100-dimensional) settings. The experiments were conducted in Matlab R2021b.

5.1. Results Analysis on CEC2017

Table 3 and Table 4 show the experimental results for dimensions 30 and 100, respectively. The best outcomes are indicated in bold for each performance metric (optimal value, mean value, and standard deviation). As shown in Table 3, when the dimension is 30, out of a total of 87 metrics (29 functions × 3 metrics), the LRMHBA algorithm outperformed other competing algorithms in 35 metrics, ranking first. Specifically, for functions CEC09, CEC11, and CEC12, LRMHBA achieved the best results across all three metrics. LSHADE ranked second with 19 best metrics, followed by HBA and SaCHBA_PDN, each with 10 best metrics.
From Table 4, it can be observed that the advantage of LRMHBA becomes even more pronounced when the dimension increases to 100. It ranked first in 44 metrics and achieved better average values than other competing algorithms for 24 functions. These results demonstrate that LRMHBA has a highly competitive global optimization capability and excellent stability.
The final row for each function in Table 3 and Table 4 presents the Friedman test rankings of the algorithms for solving the respective problem. Additionally, the overall mean rank and ranking of the algorithms are provided at the end of the tables. For the 30-dimensional problems, LRMHBA ranked first in 18 functions, whereas for the 100-dimensional problems, it ranked first in 24 functions. Among all the algorithms, LRMHBA achieved the best average rankings, with a rank of 1.58 for 30-dimensional problems and 1.27 for 100-dimensional problems. These results demonstrate that LRMHBA outperforms all other algorithms on the CEC2017 benchmark, with its advantage being particularly pronounced in high-dimensional problems. The summary statistics of the average ranks from the Friedman test for all algorithms at both 30 and 100 dimensions are shown in Figure 4.
Figure 5 and Figure 6 display the convergence curves for every algorithm that was tested. Due to space limitations, only the convergence curves for 20 functions in the 30-dimensional case are presented. From the figures, it can be observed that LRMHBA achieved better average fitness values on most benchmark functions. Compared to the most competitive algorithms, LSHADE, HBA, and SaCHBA_PDN, LRMHBA generally converged faster than LSHADE and HBA. Although LRMHBA’s early-stage convergence speed was slower than SaCHBA_PDN, the latter was more prone to getting trapped in local optima. In contrast, LRMHBA demonstrated the ability to gradually converge to better values as the number of function evaluations increased, highlighting its superior global optimization capability.
Figure 7 shows the boxplots of all algorithms on CEC2017 in 30 dimensions. Although LRMHBA produces a few outliers on some of the test functions, its boxes are very short, and its median and mean rank first on most of the test functions compared with the other algorithms, indicating that the LRMHBA algorithm has very good stability.
The results of the Wilcoxon rank-sum test for the 100-dimensional problems are shown in Table 5. A p-value of less than 0.05 indicates a statistically significant difference between the two algorithms. The symbols “+”, “=”, and “−” indicate that LRMHBA performs better than, equal to, or worse than the comparison algorithm, respectively. The results show that, except for two cases where the p-value exceeded 0.05 in the comparison with LSHADE, all other p-values were below 0.05, indicating that the performance of LRMHBA differs significantly from that of the other methods.

5.2. Ablation Study

To validate the effectiveness of each improvement strategy in the LRMHBA algorithm, three modified versions of the HBA algorithm were constructed by integrating each of the proposed strategies separately. The descriptions of the three versions are as follows:
  • LRMHBA1: HBA combined with Latin hypercube sampling and elite strategy.
  • LRMHBA2: HBA combined with a random disturbance strategy.
  • LRMHBA3: HBA combined with a staged dual-population co-evolutionary strategy integrating multiple differential evolution variants.
The CEC2017 benchmark suite was used to test HBA, LRMHBA1, LRMHBA2, LRMHBA3, and LRMHBA in the 30-dimensional case. Thirty separate runs of each algorithm were conducted. Table 6 displays the outcomes of the experiment, where the best values among the five algorithms are highlighted in bold. Additionally, the mean ranks and rankings obtained from the Friedman test are provided at the bottom of the table.
From the results, it can be observed that LRMHBA1, LRMHBA2, and LRMHBA3 each achieved the best values on some functions compared to the other algorithms. However, the LRMHBA algorithm, which integrates all three improvement strategies, demonstrated the best overall performance, obtaining the best values on 44 of the evaluation metrics and ranking first in the Friedman test's mean rank.
The experimental results indicate that each strategy contributed to the improvements of the algorithm. Latin hypercube sampling and elite strategy ensured a more uniform population distribution and effectively guided the search process, enhancing search efficiency. The stochastic perturbation strategy prevented premature convergence of the population during the early stages of iteration. Dual-population mutation strategy, which combines multiple differential evolution approaches, adaptively selected different mutation strategies based on the evolutionary process and population fitness. This approach effectively balanced the exploration and exploitation capabilities of the algorithm, thereby improving its global optimization ability.

5.3. Exploration and Exploitation Experiment

Exploration refers to the algorithm’s attempt to access new regions in the search space to discover potentially better solutions, while exploitation focuses resources on thoroughly searching a known promising region to find its optimal solution. Excessive exploration may lead to wasted computational resources, whereas excessive exploitation can result in premature convergence to a suboptimal solution, preventing the algorithm from escaping local optima. Consequently, a successful heuristic algorithm needs to properly balance exploration and exploitation.
In this section, a dimensional diversity measurement method [60,61] is employed to better evaluate the ability of LRMHBA to balance exploration and exploitation. The corresponding formula is as follows:
$Div = \dfrac{1}{D} \sum_{j=1}^{D} \dfrac{1}{N} \sum_{i=1}^{N} \left| \mathrm{median}(x^j) - x_i^j \right|$
$Exploration\% = \dfrac{Div}{Div_{max}} \times 100$
$Exploitation\% = \dfrac{\left| Div - Div_{max} \right|}{Div_{max}} \times 100$
where $Div$ denotes the diversity value of the population; $Div_{max}$ is the maximum diversity value observed during the iterations; $N$ is the population size; $D$ is the dimensionality of the variables; $x_i^j$ is the position of the $i$th individual in the $j$th dimension; and $\mathrm{median}(x^j)$ is the median of the $j$th variable across all individuals in the population.
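The measurement can be computed per iteration as sketched below, with the maximum diversity tracked as a running maximum over the iterations:

```python
import numpy as np

def exploration_exploitation(pop, div_max):
    """Dimension-wise diversity and exploration/exploitation percentages.

    `pop` is an (N, D) population array; `div_max` is the largest diversity
    value observed so far across iterations.
    """
    div = np.mean(np.abs(np.median(pop, axis=0) - pop))   # averaged over N and D
    return div, 100 * div / div_max, 100 * abs(div - div_max) / div_max
```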
For comparative trials between LRMHBA and HBA, we chose one unimodal function, six simple multimodal functions, seven hybrid functions, and six composition functions from the CEC2017 test suite due to space constraints. The number of iterations was set to 2000, and each algorithm was executed 30 times separately. Figure 8 displays the test findings.
From Figure 8, it is evident that LRMHBA exhibits strong exploration capability during the early iterations, enabling it to identify potential optimal solutions across various regions of the solution space. As the iterations progress, LRMHBA gradually transitions from exploration to exploitation, refining high-quality solutions and ultimately converging near the global optimum.
Additionally, compared to the HBA algorithm, LRMHBA demonstrates faster convergence, with the population quickly approaching the optimal solution. This indicates that the Latin hypercube sampling method improves population uniformity, enhancing the optimization efficiency of the LRMHBA algorithm. Furthermore, the stochastic perturbation strategy and the dual-population strategy integrating multiple differential mutation methods effectively balance exploration and exploitation, significantly improving the global optimization capability of the algorithm.

6. UAV Path Planning Simulation Experiments

In this section, we apply the proposed LRMHBA algorithm to UAV path planning to further analyze its performance.

6.1. Experimental Setup

We constructed three map scenarios of size 100 × 100 × 3 km with increasing complexity. The algorithms selected for comparison, which have shown good performance on this problem, include Particle Swarm Optimization (PSO), the Sparrow Search Algorithm (SSA), Harris Hawks Optimization (HHO), the Dung Beetle Optimizer (DBO), and the Crested Porcupine Optimizer (CPO), as well as the original HBA algorithm and its improved version, SaCHBA_PDN, for 3D UAV path planning. The population size for each algorithm is set to 100, with a maximum of 500 iterations. The parameters of the other algorithms are configured according to their respective references, as shown in Table 7. The starting point of the map is set at (5, 5, 0.3) and the target point at (90, 90, 0.8), with 8 intermediate waypoints. The mountain model and threat area parameters are provided in Table 8. To reduce the effect of algorithmic randomness, each algorithm is executed 30 times independently. The evaluation metrics include the optimal value, average value, variance, and rankings from the Friedman test.

6.2. Analysis of Experimental Results

Figure 9, Figure 10 and Figure 11 display the optimal path planning maps for the three scenarios, respectively, and Figure 12 shows the average cost convergence curves. It is evident that every algorithm was able to identify feasible optimal routes. From Figure 9, we can see that the LRMHBA algorithm found the shortest and smoothest optimal path in the simplest Scenario 1. Although it did not find the shortest path in Scenarios 2 and 3, Figure 12 shows that it achieved the smallest average path planning cost in all three scenarios, indicating that LRMHBA demonstrates excellent stability. Furthermore, in both Scenario 1 and the most complex Scenario 3, LRMHBA outperformed the original HBA algorithm in terms of convergence speed, finding feasible paths within very few iterations. In Scenario 2, while the HBA algorithm showed slightly better convergence speed, it became trapped in a local optimum.
As evidenced in Table 9, which summarizes the planning outcomes across the three scenarios (with bold values indicating optimal metrics), LRMHBA demonstrates superior performance: it achieves the shortest optimal path in Scenario 1, delivers the best average flight cost across all scenarios, and outperforms the original HBA algorithm in most metrics. These findings align with the visualizations of optimal trajectory planning and the convergence curves. Furthermore, the Sparrow Search Algorithm (SSA) demonstrates the highest stability by consistently identifying feasible paths, while HHO, PSO, and DBO are less robust: HHO yields excessively high mean values and variances across all scenarios, and both PSO and DBO perform poorly in the complex Scenarios 2 and 3 (with extremely high means and variances), indicating that these algorithms frequently fail to find feasible paths in challenging environments. Additionally, LRMHBA consistently ranked first in the Friedman test, confirming its excellent adaptability in generating efficient and smooth flight trajectories within both simple and complex three-dimensional obstacle-filled environments.

7. Conclusions

This study presents an enhanced Honey Badger Algorithm (LRMHBA) for three-dimensional UAV path planning. The proposed algorithm integrates Latin hypercube sampling with elite preservation mechanisms during population initialization, effectively mitigating low search efficiency resulting from uneven population distribution in conventional random initialization. To strengthen global optimization capacity, a stochastic perturbation mechanism derived from whale optimization is incorporated. Furthermore, an elite-guided dual-population cooperative evolution framework is developed by adaptively combining multiple differential mutation strategies, which dynamically balances global exploration and local exploitation requirements throughout the evolutionary process while ensuring optimization stability.
Comprehensive evaluations on the CEC2017 benchmark suite demonstrate LRMHBA’s superior convergence speed and solution accuracy across low- and high-dimensional optimization tasks, achieving the highest ranking in Friedman’s comprehensive evaluation among comparative powerful algorithms. Statistical validation through Wilcoxon rank-sum tests, ablation analysis, and exploration-exploitation metrics confirms three key aspects: (1) Significant performance differentiation from other algorithms, (2) Demonstrated efficacy of the proposed enhancement strategies, and (3) Better equilibrium between exploration and exploitation capabilities.
Three-dimensional path planning simulations under varying complexity scenarios—encompassing steep terrains and obstacle-threatened airspaces—reveal LRMHBA’s consistent generation of minimal-cost flight trajectories. The algorithm outperforms its counterparts in Friedman rankings, with particular dominance in simplified environments.
Future research directions include dynamic real-time path replanning mechanisms and cooperative multi-UAV optimization frameworks to enhance practical engineering applications. The algorithm’s architectural versatility suggests potential extensions to industrial robotics trajectory optimization, autonomous vehicle navigation, flexible manufacturing scheduling, and computer vision segmentation tasks, demonstrating promising applicability across these domains.

Author Contributions

Conceptualization, X.T. and C.J.; methodology, X.T.; software, X.T., and C.J.; validation, X.T., and Z.H.; data curation, Z.H.; writing—original draft preparation, X.T.; writing—review and editing, X.T. and C.J.; All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science and Technology Programme Projects in Meishan city of China (2024KJZD162).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are included in the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. CEC2017 Test Functions.
No. | Function | $F_i^* = F_i(X^*)$
Unimodal Functions
1 | Shifted and Rotated Bent Cigar Function | 100
3 | Shifted and Rotated Zakharov Function | 200
Simple Multimodal Functions
4 | Shifted and Rotated Rosenbrock’s Function | 300
5 | Shifted and Rotated Rastrigin’s Function | 400
6 | Shifted and Rotated Expanded Scaffer’s F6 Function | 500
7 | Shifted and Rotated Lunacek Bi_Rastrigin Function | 600
8 | Shifted and Rotated Non-Continuous Rastrigin’s Function | 700
9 | Shifted and Rotated Levy Function | 800
10 | Shifted and Rotated Schwefel’s Function | 900
Hybrid Functions
11 | Hybrid Function 1 (N = 3) | 1000
12 | Hybrid Function 2 (N = 3) | 1100
13 | Hybrid Function 3 (N = 3) | 1200
14 | Hybrid Function 4 (N = 4) | 1300
15 | Hybrid Function 5 (N = 4) | 1400
16 | Hybrid Function 6 (N = 4) | 1500
17 | Hybrid Function 6 (N = 5) | 1600
18 | Hybrid Function 6 (N = 5) | 1700
19 | Hybrid Function 6 (N = 5) | 1800
20 | Hybrid Function 6 (N = 6) | 1900
Composition Functions
21 | Composition Function 1 (N = 3) | 2000
22 | Composition Function 2 (N = 3) | 2100
23 | Composition Function 3 (N = 4) | 2200
24 | Composition Function 4 (N = 4) | 2300
25 | Composition Function 5 (N = 5) | 2400
26 | Composition Function 6 (N = 5) | 2500
27 | Composition Function 7 (N = 6) | 2600
28 | Composition Function 8 (N = 6) | 2700
29 | Composition Function 9 (N = 3) | 2800
30 | Composition Function 10 (N = 3) | 2900
Search range: $[-100, 100]^D$

References

  1. Alejandro, P.; Daniel, R.; Alejandro, P.; Enrique, F. A review of artificial intelligence applied to path planning in UAV swarms. Neural Comput. Appl. 2022, 34, 153–170. [Google Scholar] [CrossRef]
  2. Jones, M.; Soufiene, D.; Kristopher, W. Path-planning for unmanned aerial vehicles with environment complexity considerations: A survey. ACM Comput. Surv. 2023, 55, 1–39. [Google Scholar] [CrossRef]
  3. Hart, P.; Nils, J.; Bertram, R. A formal basis for the heuristic determination of minimum cost paths. IEEE Trans. Syst. Sci. Cybern. 1968, 4, 100–107. [Google Scholar] [CrossRef]
  4. Khatib, O. Real-time obstacle avoidance for manipulators and mobile robots. Int. J. Robot. Res. 1986, 5, 90–98. [Google Scholar] [CrossRef]
  5. Nasir, J.; Islam, F.; Ayaz, Y. Adaptive Rapidly-Exploring-Random-Tree-Star (RRT*)-Smart: Algorithm Characteristics and Behavior Analysis in Complex Environments. Asia-Pac. J. Inf. Technol. Multimed 2013, 2, 39–51. [Google Scholar] [CrossRef]
  6. Desale, S.; Rasool, A.; Andhale, S.; Rane, P. Heuristic and meta-heuristic algorithms and their relevance to the real world: A survey. Int. J. Comput. Eng. Res. Trends 2015, 2, 296–304. [Google Scholar]
  7. Sadeghian, Z.; Akbari, E.; Nematzadeh, H.; Motameni, H. A review of feature selection methods based on meta-heuristic algorithms. J. Exp. Theor. Artif. Intell. 2025, 37, 1–51. [Google Scholar] [CrossRef]
  8. Hashim, F.A.; Mostafa, R.R.; Hussien, A.G.; Mirjalili, S.; Sallam, K.M. Fick’s Law Algorithm: A physical law-based algorithm for numerical optimization. Knowl.-Based Syst. 2023, 260, 110146. [Google Scholar] [CrossRef]
  9. Hu, G.; Cheng, M.; Houssein, E.H.; Hussien, A.G.; Abualigah, L. SDO: A novel sled dog-inspired optimizer for solving engineering problems. Adv. Eng. Inform. 2024, 62, 102783. [Google Scholar] [CrossRef]
  10. Kennedy, J.; Russell, E. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995. [Google Scholar] [CrossRef]
  11. Karaboga, D.; Bahriye, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  12. Mirjalili, S.; Andrew, L. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  13. Heidari, A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  14. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  15. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
  16. Abdel, B.; Mohamed, R.; Mohamed, A. Crested Porcupine Optimizer: A new nature-inspired metaheuristic. Knowl.-Based Syst. 2024, 284, 111257. [Google Scholar] [CrossRef]
  17. Li, Y.; Zhang, L.; Cai, B.; Liang, Y. Unified path planning for composite UAVs via Fermat point-based grouping particle swarm optimization. Aerosp. Sci. Technol. 2024, 148, 109088. [Google Scholar] [CrossRef]
  18. Han, Z.; Chen, M.; Shao, S.; Wu, Q. Improved artificial bee colony algorithm-based path planning of unmanned autonomous helicopter using mul-ti-strategy evolutionary learning. Aerosp. Sci. Technol. 2022, 122, 107374. [Google Scholar] [CrossRef]
  19. Dai, Y.; Yu, J.; Zhang, C.; Zhan, B.; Zheng, X. A novel whale optimization algorithm of path planning strategy for mobile robots. Appl. Intell. 2023, 53, 10843–10857. [Google Scholar] [CrossRef]
  20. Tang, C.; Li, W.; Han, T.; Yu, L.; Cui, T. Multi-Strategy Improved Harris Hawk Optimization Algorithm and Its Application in Path Planning. Biomimetics 2024, 9, 552. [Google Scholar] [CrossRef]
  21. Cheng, L.; Ling, G.; Liu, F.; Ge, M. Application of uniform experimental design theory to multi-strategy improved sparrow search algorithm for UAV path planning. Expert Syst. Appl. 2024, 255, 124849. [Google Scholar] [CrossRef]
  22. Chen, Q.; Wang, Y.; Sun, Y. An improved dung beetle optimizer for UAV 3D path planning. J. Supercomput. 2024, 80, 26537–26567. [Google Scholar] [CrossRef]
  23. Liu, S.; Jin, Z.; Lin, H.; Lu, H. An improve crested porcupine algorithm for UAV delivery path planning in challenging environments. Sci. Rep. 2024, 14, 20445. [Google Scholar] [CrossRef] [PubMed]
  24. Hashim, F.; Houssein, E.; Hussain, K.; Mabrouk, M.; Al-Atabany, W. Honey Badger Algorithm: New metaheuristic algorithm for solving optimization problems. Math. Comput. Simul. 2022, 192, 84–110. [Google Scholar] [CrossRef]
  25. Abasi, A.K.; Aloqaily, M.; Guizani, M. Optimization of cnn using modified honey badger algorithm for sleep apnea detection. Expert Syst. Appl. 2023, 229, 120484. [Google Scholar] [CrossRef]
  26. Nassef, A.; Houssein, E.; Helmy, B.; Rezk, H. Modified honey badger algorithm based global MPPT for triple-junction solar photovoltaic system under partial shading condition and global optimization. Energy 2022, 254, 124363. [Google Scholar] [CrossRef]
  27. Dao, T.; Nguyen, T.; Nguyen, V. An improved honey badger algorithm for coverage optimization in wireless sensor network. J. Internet Technol. 2023, 24, 363–377. [Google Scholar]
  28. Houssein, E.; Emam, M.; Singh, N.; Samee, N.; Alabdulhafith, M.; Çelik, E. An improved honey badger algorithm for global optimization and multilevel thresholding segmentation: Real case with brain tumor images. Clust. Comput. 2024, 27, 14315–14364. [Google Scholar] [CrossRef]
  29. Jain, D.; Weiping, D.; Ketan, K. Training fuzzy deep neural network with honey badger algorithm for intrusion detection in cloud environment. Int. J. Mach. Learn. Cybern. 2023, 14, 2221–2237. [Google Scholar] [CrossRef]
  30. Xu, Y.; Zhong, R.; Cao, Y.; Zhang, C.; Yu, J. Symbiotic mechanism-based honey badger algorithm for continuous optimization. Clust. Comput. 2025, 28, 133. [Google Scholar] [CrossRef]
  31. Düzenlí, T.; Funda, K.; Salih, B.; Aydemir, S.B. Improved honey badger algorithms for parameter extraction in photovoltaic models. Optik 2022, 268, 169731. [Google Scholar] [CrossRef]
  32. Bansal, B.; Sahoo, A. Enhanced honey badger algorithm for multi-view subspace clustering based on consensus representation. Soft Comput. 2024, 28, 13307–13329. [Google Scholar] [CrossRef]
  33. Hu, G.; Zhong, J.; Guo, W. SaCHBA_PDN: Modified honey badger algorithm with multi-strategy for UAV path planning. Expert Syst. Appl. 2023, 223, 119941. [Google Scholar] [CrossRef]
  34. Zhang, X.; Duan, H. An improved constrained differential evolution algorithm for unmanned aerial vehicle global route planning. Appl. Soft Comput. 2015, 26, 270–284. [Google Scholar] [CrossRef]
  35. Service, T. A No Free Lunch theorem for multi-objective optimization. Inf. Process. Lett. 2010, 110, 917–923. [Google Scholar] [CrossRef]
  36. Abasi, A.K.; Aloqaily, M.; Guizani, M. Bare-bones based honey badger algorithm of CNN for Sleep Apnea detection. Clust. Comput. 2024, 27, 6145–6165. [Google Scholar] [CrossRef]
  37. Fathy, A.; Rezk, H.; Ferahtia, S.; Ghoniem, R.; Alkanhel, R. An efficient honey badger algorithm for scheduling the microgrid energy management. Energy Rep. 2023, 9, 2058–2074. [Google Scholar] [CrossRef]
  38. Han, E.; Noradin, G. Model identification of proton-exchange membrane fuel cells based on a hybrid convolutional neural network and extreme learning machine optimized by improved honey badger algorithm. Sustain. Energy Technol. Assess. 2022, 52, 102005. [Google Scholar] [CrossRef]
  39. Tan, Y.; Liu, S.; Zhang, L.; Song, J.; Ren, Y. The Application of an Improved LESS Dung Beetle Optimization in the Intelligent Topological Reconfiguration of ShipPower Systems. J. Mar. Sci. Eng. 2024, 12, 1843. [Google Scholar] [CrossRef]
  40. Wang, K.; Si, P.; Chen, L.; Li, Z. 3D Path Planning of Unmanned Aerial Vehicle Based on Enhanced Sand Cat Swarm Optimization Algorithm. Acta Armamentarii 2023, 44, 3382–3393. [Google Scholar]
  41. Ning, Y.; Zheng, B.; Long, Z.; Luo, J. Complex 3D Path Planning for UAVs Based on CMPSO Algorithm. Electron. Opt. Control 2024, 31, 35–42. [Google Scholar]
  42. Luo, J.; Tian, Y.; Wang, Z. Research on Unmanned Aerial Vehicle Path Planning. Drones 2024, 8, 51. [Google Scholar] [CrossRef]
  43. Zhou, X.; Tang, Z.; Wang, N.; Yang, C.; Huang, T. A novel state transition algorithm with adaptive fuzzy penalty for multi-constraint UAV path planning. Expert Syst. Appl. 2024, 248, 123481. [Google Scholar] [CrossRef]
  44. Michael, D. Latin hypercube sampling as a tool in uncertainty analysis of computer models. In Proceedings of the 24th Conference on Winter Simulation, Arlington, VA, USA, 13–16 December 1992. [Google Scholar]
  45. Stein, M. Large sample properties of simulations using Latin hypercube sampling. Technometrics 1987, 29, 143–151. [Google Scholar] [CrossRef]
  46. Storn, R. On the usage of differential evolution for function optimization. In Proceedings of the North American Fuzzy Information Processing, Berkeley, CA, USA, 19–22 June 1996. [Google Scholar] [CrossRef]
  47. Storn, R.; Price, K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  48. Gämperle, R.; Sibylle, D.; Petros, K. A parameter study for differential evolution. Adv. Intell. Syst. Fuzzy Syst. Evol. Comput. 2002, 10, 293–298. [Google Scholar]
  49. Yu, W.; Shen, M.; Chen, W.; Zhan, Z.; Gong, Y.; Lin, Y.; Zhang, J. Differential evolution with two-level parameter adaptation. IEEE Trans. Cybern. 2013, 44, 1080–1099. [Google Scholar] [CrossRef]
  50. Li, Y.; Wang, S.; Yang, B. An improved differential evolution algorithm with dual mutation strategies collaboration. Expert Syst. Appl. 2020, 153, 113451. [Google Scholar] [CrossRef]
  51. Zuo, M.; Guo, C. DE/current−to−better/1: A new mutation operator to keep population diversity. Intell. Syst. Appl. 2022, 14, 200063. [Google Scholar] [CrossRef]
  52. Rauf, H.; Gao, J.; Almadhor, A.; Haider, A.; Zhang, Y.; Al-Turjman, F. Multi population-based chaotic differential evolution for multi-modal and multi-objective optimization problems. Appl. Soft Comput. 2023, 132, 109909. [Google Scholar] [CrossRef]
  53. Layeb, A. Differential evolution algorithms with novel mutations, adaptive parameters, and Weibull flight operator. Soft Comput. 2024, 28, 7039–7091. [Google Scholar] [CrossRef]
  54. Price, K. Multi population-based chaotic differential evolution for multi-modal and multi-objective optimization problems. In Proceedings of the North American Fuzzy Information Processing, Berkeley, CA, USA, 19–22 June 1996. [Google Scholar] [CrossRef]
  55. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  56. dos Santos Coelho, L. Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems. Expert Syst. Appl. 2010, 37, 1676–1683. [Google Scholar] [CrossRef]
  57. Lian, J.; Hui, G.; Ma, L.; Zhu, T.; Wu, X.; Heidari, A.A.; Chen, Y.; Chen, H. Parrot optimizer: Algorithm and applications to medical problems. Comput. Biol. Med. 2024, 172, 108064. [Google Scholar] [CrossRef] [PubMed]
  58. Zhu, F.; Li, G.; Tang, H.; Li, Y.; Lv, X.; Wang, X. Dung beetle optimization algorithm based on quantum computing and multi-strategy fusion for solving engineering problems. Expert Syst. Appl. 2024, 236, 121219. [Google Scholar] [CrossRef]
  59. Tanabe, R.; Fukunaga, A.S. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014. [Google Scholar] [CrossRef]
60. Cheng, S.; Shi, Y.; Qin, Q.; Zhang, Q.; Bai, R. Population diversity maintenance in brain storm optimization algorithm. J. Artif. Intell. Soft Comput. Res. 2014, 4, 83–97. [Google Scholar] [CrossRef]
  61. Morales-Castañeda, B.; Zaldivar, D.; Cuevas, E.; Fausto, F.; Rodríguez, A. A better balance in metaheuristic algorithms: Does it exist? Swarm Evol. Comput. 2020, 54, 100671. [Google Scholar] [CrossRef]
Figure 1. Schematic of the threat cost model.
Figure 3. Flowchart of LRMHBA.
Figure 4. Statistical results of Friedman test rankings in 30 and 100 dimensions.
Figure 5. CEC2017 functions iteration curves (dim = 30).
Figure 6. CEC2017 functions iteration curves (dim = 100).
Figure 7. Boxplots of all the algorithms on CEC2017 (dim = 30).
Figure 8. Exploration and exploitation curves of LRMHBA and HBA.
Figure 9. Main and top views of optimal path planning for Scene 1.
Figure 10. Main and top views of optimal path planning for Scene 2.
Figure 11. Main and top views of optimal path planning for Scene 3.
Figure 12. Iterative curves of average flight cost for 3 scenarios.
Table 1. Latest variants and applications of the HBA algorithm.
Methods | Applications | Authors
Combined quasi-location learning, arbitrarily weighted agents, and adaptive mutation methods. | Selected optimal hyperparameter values for a convolutional neural network (CNN) applied to sleep apnea diagnosis. | Abasi et al. [25]
Proposed an efficient local search method, called dimensional learning hunting (DLH). | Identified the peak of the global maximum output power of PV cells. | Nassef et al. [26]
Combined HBA with elite backward learning and multidirectional strategies. | Wireless sensor network coverage problem. | Dao et al. [27]
Implemented an Enhanced Solution Quality (ESQ) approach. | Biomedical image segmentation. | Houssein et al. [28]
Developed a new fuzzy deep neural network (FDNN) combined with HBA. | Cloud computing privacy-protection intrusion detection. | Jain et al. [29]
Proposed a symbiosis-based HBA (SHBA) built on the cooperative symbiosis mechanism between honey badgers and honeycreepers. | Engineering problems. | Xu et al. [30]
Hybridization of contrastive learning with the honey badger algorithm. | Optimization of solar system model parameter values. | Düzenlí et al. [31]
Designed a sparse jNMF method framework guided by the Enhanced Honey Badger Algorithm (EHBA). | Integrated clustering problem. | Bansal et al. [32]
Table 2. Algorithm parameter settings for CEC2017 benchmark testing.
Algorithms | Parameters | Setting Value
HBA | β (the ability of a honey badger to get food) | 6
HBA | C | 2
PSO | Cognitive and social factors | C1 = 1, C2 = 1
DE | Crossover rate | CR = 0.5
DE | Scaling factor | F = 0.5
WOA | Fluctuation range | Linear decrease from 2 to 0
AOA | Control parameter | μ = 0.499
AOA | Sensitive parameter | α = 5
DBO | Disruption factor | k = 0.1
DBO | Luminous efficacy | b = 0.3
DBO | Sensitivity parameter | S = 0.5
GQPSO | Inertia weight | Linear decrease from 1 to 0.5
GQPSO | Cognitive and social factors | C1 = 1.5, C2 = 1.5
QHDBO | Disruption factor | k = 0.1
QHDBO | Luminous efficacy | b = 0.3
QHDBO | Sensitivity parameter | S = 0.5
LSHADE | Crossover rate | CR = 0.5
LSHADE | Scaling factor | F = 0.5
SaCHBA_PDN | Neighborhood parameter | δ = 0.01
LRMHBA | Scaling factors | F1 = 0.25, F2 = ±0.5
LRMHBA | Crossover rate | CR = 0.7
LRMHBA | β | 6
LRMHBA | C | 2
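To make the role of the LRMHBA entries in Table 2 concrete, the following minimal Python sketch shows how a generic elite-guided differential mutation and binomial crossover use a scaling factor and a crossover rate of the kind listed above (F1 = 0.25, F2 = ±0.5, CR = 0.7; only the positive F2 value is used here for simplicity). This is an illustration of the parameter roles under stated assumptions, not the paper's exact staged fusion strategy, and the function and variable names (de_mutate_crossover, elite, etc.) are ours.

```python
import numpy as np

def de_mutate_crossover(x, elite, r1, r2, F1=0.25, F2=0.5, CR=0.7, rng=None):
    """Generic elite-guided DE step: mutation followed by binomial crossover.

    x, r1, r2 : current individual and two distinct randomly chosen individuals
    elite     : an elite individual used to guide the mutation
    F1, F2    : scaling factors (Table 2 lists F1 = 0.25, F2 = +/-0.5 for LRMHBA)
    CR        : crossover rate (0.7 in Table 2)
    """
    rng = rng or np.random.default_rng()
    # Elite-guided mutation: move toward the elite, perturbed by a random difference vector.
    v = x + F1 * (elite - x) + F2 * (r1 - r2)
    # Binomial crossover: inherit each dimension from the mutant with probability CR,
    # forcing at least one dimension to come from the mutant vector.
    mask = rng.random(x.size) < CR
    mask[rng.integers(x.size)] = True
    return np.where(mask, v, x)

# Example usage on a 3-dimensional toy problem.
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(4, 3))  # four candidate solutions
trial = de_mutate_crossover(pop[0], pop[1], pop[2], pop[3], rng=rng)
print(trial)
```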
Table 3. Test results of LRMHBA with other algorithms on CEC2017 (dim = 30).
Function | Index | HBA | PSO | DE | WOA | AOA | PO | DBO | GQPSO | QHDBO | LSHADE | SaCHBA_PDN | LRMHBA
CEC01Best1.338 × 10+21.450 × 10+101.524 × 10+96.486 × 10+68.346 × 10+106.412 × 10+61.057 × 10+21.853 × 10+104.951 × 10+21.017 × 10+21.039 × 10+31.000 × 10+2
Mean5.527 × 10+32.331 × 10+101.939 × 10+92.233 × 10+71.142 × 10+112.762 × 10+88.235 × 10+62.288 × 10+103.387 × 10+96.204 × 10+66.094 × 10+53.652 × 10+9
Std5.349 × 10+34.676 × 10+91.954 × 10+81.900 × 10+71.372 × 10+103.256 × 10+81.813 × 10+71.635 × 10+93.536 × 10+92.818 × 10+71.801 × 10+62.000 × 10+10
Rank211961275108341
CEC03Best8.876 × 10+29.232 × 10+41.096 × 10+57.498 × 10+41.849 × 10+55.820 × 10+32.776 × 10+45.180 × 10+43.618 × 10+46.933 × 10+33.000 × 10+23.000 × 10+2
Mean3.116 × 10+31.384 × 10+51.687 × 10+52.196 × 10+52.536 × 10+81.470 × 10+45.913 × 10+46.176 × 10+41.809 × 10+59.158 × 10+43.017 × 10+27.527 × 10+2
Std1.773 × 10+32.807 × 10+42.485 × 10+47.259 × 10+41.199 × 10+95.493 × 10+31.606 × 10+43.754 × 10+31.103 × 10+58.858 × 10+43.516 × 10+08.211 × 10+2
Rank381011124569712
CEC04Best4.600 × 10+21.171 × 10+36.183 × 10+24.808 × 10+21.267 × 10+44.849 × 10+24.769 × 10+22.829 × 10+34.968 × 10+24.251 × 10+24.043 × 10+24.681 × 10+2
Mean4.894 × 10+22.252 × 10+36.975 × 10+25.945 × 10+24.428 × 10+45.443 × 10+25.507 × 10+24.025 × 10+39.799 × 10+24.286 × 10+25.154 × 10+24.923 × 10+2
Std1.833 × 10+18.389 × 10+23.770 × 10+15.832 × 10+11.042 × 10+43.439 × 10+17.827 × 10+14.283 × 10+25.387 × 10+26.277 × 10+05.107 × 10+11.530 × 10+1
Rank210971265118143
CEC05Best5.468 × 10+27.946 × 10+26.984 × 10+26.640 × 10+21.035 × 10+36.367 × 10+26.169 × 10+27.828 × 10+26.103 × 10+25.249 × 10+26.135 × 10+25.298 × 10+2
Mean6.026 × 10+28.279 × 10+27.280 × 10+27.949 × 10+21.154 × 10+37.340 × 10+27.023 × 10+28.075 × 10+26.969 × 10+26.002 × 10+26.567 × 10+25.646 × 10+2
Std2.090 × 10+11.946 × 10+11.121 × 10+16.350 × 10+14.378 × 10+14.108 × 10+15.524 × 10+11.277 × 10+15.881 × 10+15.491 × 10+12.701 × 10+11.841 × 10+1
Rank311791286105241
CEC06Best6.004 × 10+26.465 × 10+26.135 × 10+26.456 × 10+27.128 × 10+26.369 × 10+26.092 × 10+26.570 × 10+26.154 × 10+26.000 × 10+26.139 × 10+26.000 × 10+2
Mean6.051 × 10+26.623 × 10+26.185 × 10+26.710 × 10+27.351 × 10+26.557 × 10+26.287 × 10+26.631 × 10+26.351 × 10+26.002 × 10+26.315 × 10+26.042 × 10+2
Std4.823 × 10+07.407 × 10+01.804 × 10+01.295 × 10+19.505 × 10+08.484 × 10+08.472 × 10+02.505 × 10+02.201 × 10+17.372 × 10−17.130 × 1002.317 × 10+1
Rank394111285106172
CEC07Best8.016 × 10+21.507 × 10+31.011 × 10+31.022 × 10+32.891 × 10+39.921 × 10+28.211 × 10+21.130 × 10+37.996 × 10+27.614 × 10+28.541 × 10+27.569 × 10+2
Mean8.637 × 10+21.781 × 10+31.084 × 10+31.230 × 10+33.254 × 10+31.146 × 10+39.310 × 10+21.155 × 10+38.807 × 10+28.447 × 10+29.773 × 10+27.849 × 10+2
Std4.823 × 10+07.407 × 10+01.804 × 10+01.295 × 10+19.505 × 10+08.484 × 10+08.472 × 10+02.505 × 10+02.201 × 10+17.372 × 10−17.130 × 10+02.317 × 10+1
Rank311710128594261
CEC08Best8.497 × 10+21.097 × 10+31.003 × 10+39.079 × 10+21.301 × 10+39.283 × 10+29.194 × 10+21.037 × 10+38.948 × 10+28.289 × 10+28.766 × 10+28.259 × 10+2
Mean8.952 × 10+21.137 × 10+31.036 × 10+31.009 × 10+31.384 × 10+39.754 × 10+21.003 × 10+31.056 × 10+39.682 × 10+28.931 × 10+29.267 × 10+28.641 × 10+2
Std1.728 × 10+12.386 × 10+11.398 × 10+15.327 × 10+14.284 × 10+12.950 × 10+14.809 × 10+11.097 × 10+14.448 × 10+16.405 × 10+12.958 × 10+12.101 × 10+1
Rank211971268105341
CEC09Best1.321 × 10+37.533 × 10+36.338 × 10+35.233 × 10+31.936 × 10+42.744 × 10+31.501 × 10+35.443 × 10+31.680 × 10+39.000 × 10+21.713 × 10+39.001 × 10+2
Mean1.926 × 10+31.121 × 10+48.499 × 10+38.871 × 10+33.443 × 10+45.213 × 10+35.083 × 10+36.333 × 10+34.394 × 10+39.826 × 10+23.040 × 10+39.135 × 10+2
Std5.932 × 10+22.369 × 10+39.801 × 10+23.239 × 10+35.488 × 10+31.076 × 10+32.136 × 10+34.551 × 10+21.989 × 10+32.910 × 10+29.291 × 10+23.453 × 10+1
Rank311109127685241
CEC10Best3.510 × 10+37.510 × 10+35.703 × 10+35.186 × 10+39.690 × 10+33.960 × 10+34.037 × 10+37.336 × 10+35.674 × 10+34.053 × 10+34.315 × 10+33.634 × 10+3
Mean4.973 × 10+38.093 × 10+36.232 × 10+36.530 × 10+31.060 × 10+45.771 × 10+35.190 × 10+37.948 × 10+36.834 × 10+35.701 × 10+35.459 × 10+34.944 × 10+3
Std1.028 × 10+33.292 × 10+22.571 × 10+28.146 × 10+24.135 × 10+27.682 × 10+25.156 × 10+22.495 × 10+26.093 × 10+21.084 × 10+37.670 × 10+27.891 × 10+2
Rank211781263109541
CEC11Best1.135 × 10+32.200 × 10+31.933 × 10+31.405 × 10+31.315 × 10+41.247 × 10+31.252 × 10+32.618 × 10+31.297 × 10+31.119 × 10+31.185 × 10+31.113 × 10+3
Mean1.221 × 10+35.106 × 10+33.990 × 10+31.823 × 10+34.718 × 10+41.387 × 10+31.494 × 10+33.296 × 10+33.020 × 10+31.275 × 10+31.281 × 10+31.146 × 10+3
Std5.590 × 10+12.024 × 10+31.279 × 10+34.804 × 10+22.858 × 10+48.294 × 10+11.176 × 10+22.900 × 10+25.266 × 10+36.030 × 10+25.972 × 10+12.992 × 10+1
Rank311108125697241
CEC12Best1.592 × 10+47.208 × 10+84.120 × 10+71.760 × 10+71.363 × 10+104.470 × 10+64.568 × 10+53.117 × 10+99.469 × 10+41.103 × 10+52.885 × 10+44.490 × 10+3
Mean7.742 × 10+41.373 × 10+98.865 × 10+71.647 × 10+82.275 × 10+106.751 × 10+72.423 × 10+74.126 × 10+95.908 × 10+87.797 × 10+68.686 × 10+52.879 × 10+4
Std5.058 × 10+43.633 × 10+82.276 × 10+71.480 × 10+85.438 × 10+96.930 × 10+75.387 × 10+75.231 × 10+88.997 × 10+81.550 × 10+71.194 × 10+61.827 × 10+4
Rank210891265117431
CEC13Best3.514 × 10+31.251 × 10+84.526 × 10+65.675 × 10+48.252 × 10+98.910 × 10+31.929 × 10+48.200 × 10+89.344 × 10+42.978 × 10+31.135 × 10+41.332 × 10+3
Mean3.605 × 10+44.400 × 10+81.906 × 10+75.387 × 10+52.346 × 10+101.099 × 10+52.589 × 10+61.758 × 10+93.936 × 10+71.649 × 10+52.576 × 10+56.184 × 10+8
Std2.727 × 10+42.874 × 10+88.011 × 10+69.394 × 10+58.292 × 10+97.376 × 10+48.975 × 10+64.847 × 10+81.926 × 10+83.787 × 10+58.169 × 10+53.387 × 10+9
Rank210971256118341
CEC14Best1.827 × 10+35.271 × 10+47.695 × 10+44.459 × 10+35.772 × 10+64.454 × 10+32.475 × 10+31.644 × 10+52.050 × 10+31.430 × 10+31.627 × 10+31.529 × 10+3
Mean7.510 × 10+33.906 × 10+54.580 × 10+51.956 × 10+64.274 × 10+76.359 × 10+41.120 × 10+59.828 × 10+56.722 × 10+62.346 × 10+41.747 × 10+33.260 × 10+3
Std8.299 × 10+32.459 × 10+52.379 × 10+51.731 × 10+62.947 × 10+74.375 × 10+42.986 × 10+53.588 × 10+51.762 × 10+76.919 × 10+41.104 × 10+22.250 × 10+3
Rank489101265117213
CEC15Best1.756 × 10+31.216 × 10+71.870 × 10+52.030 × 10+41.854 × 10+91.512 × 10+45.238 × 10+32.545 × 10+64.536 × 10+31.656 × 10+32.630 × 10+31.552 × 10+3
Mean1.509 × 10+48.185 × 10+72.507 × 10+61.764 × 10+55.448 × 10+97.030 × 10+47.602 × 10+41.161 × 10+73.015 × 10+78.211 × 10+41.387 × 10+41.083 × 10+8
Std1.401 × 10+45.172 × 10+71.403 × 10+62.323 × 10+52.303 × 10+95.388 × 10+48.097 × 10+46.087 × 10+61.648 × 10+82.095 × 10+51.248 × 10+45.933 × 10+8
Rank211981276105431
CEC16Best1.967 × 10+32.894 × 10+32.758 × 10+32.985 × 10+36.039 × 10+32.541 × 10+32.397 × 10+33.699 × 10+32.549 × 10+32.116 × 10+32.107 × 10+31.745 × 10+3
Mean2.545 × 10+33.705 × 10+33.015 × 10+33.753 × 10+38.254 × 10+33.200 × 10+32.992 × 10+34.070 × 10+33.478 × 10+32.782 × 10+32.749 × 10+32.394 × 10+3
Std2.814 × 10+23.663 × 10+21.557 × 10+24.069 × 10+21.447 × 10+33.791 × 10+23.697 × 10+21.822 × 10+25.077 × 10+24.859 × 10+23.384 × 10+22.963 × 10+2
Rank210691275118431
CEC17Best1.773 × 10+32.445 × 10+32.006 × 10+31.859 × 10+34.420 × 10+32.024 × 10+31.917 × 10+32.396 × 10+32.379 × 10+31.775 × 10+32.011 × 10+31.748 × 10+3
Mean2.111 × 10+32.800 × 10+32.256 × 10+32.615 × 10+31.542 × 10+42.468 × 10+32.390 × 10+32.716 × 10+34.346 × 10+32.021 × 10+32.360 × 10+31.992 × 10+3
Std2.067 × 10+21.893 × 10+21.206 × 10+23.292 × 10+21.574 × 10+42.095 × 10+22.477 × 10+21.344 × 10+24.889 × 10+31.896 × 10+22.277 × 10+21.365 × 10+2
Rank310481276911251
CEC18Best3.781 × 10+41.043 × 10+66.850 × 10+51.258 × 10+51.072 × 10+85.909 × 10+45.143 × 10+42.027 × 10+64.925 × 10+42.729 × 10+31.453 × 10+41.169 × 10+4
Mean1.607 × 10+57.478 × 10+62.193 × 10+64.392 × 10+65.464 × 10+89.443 × 10+51.848 × 10+65.085 × 10+62.416 × 10+76.476 × 10+54.699 × 10+41.085 × 10+7
Std9.438 × 10+44.572 × 10+67.608 × 10+55.557 × 10+63.452 × 10+87.713 × 10+54.402 × 10+61.571 × 10+67.360 × 10+76.789 × 10+52.867 × 10+45.891 × 10+7
Rank311981274106512
CEC19Best2.027 × 10+32.395 × 10+74.237 × 10+53.315 × 10+51.356 × 10+94.861 × 10+32.219 × 10+33.061 × 10+73.177 × 10+31.912 × 10+32.092 × 10+31.916 × 10+3
Mean9.046 × 10+31.531 × 10+81.886 × 10+64.444 × 10+66.274 × 10+91.143 × 10+63.107 × 10+65.912 × 10+74.851 × 10+79.768 × 10+31.695 × 10+49.710 × 10+3
Std1.143 × 10+46.952 × 10+71.010 × 10+64.071 × 10+62.884 × 10+98.138 × 10+51.492 × 10+71.993 × 10+77.002 × 10+71.688 × 10+41.607 × 10+41.209 × 10+4
Rank211791265108143
CEC20Best2.166 × 10+32.512 × 10+32.229 × 10+32.268 × 10+33.293 × 10+32.268 × 10+32.315 × 10+32.486 × 10+32.209 × 10+32.040 × 10+32.385 × 10+32.034 × 10+3
Mean2.406 × 10+32.781 × 10+32.484 × 10+32.708 × 10+33.737 × 10+32.561 × 10+32.683 × 10+32.610 × 10+33.171 × 10+32.278 × 10+32.661 × 10+32.425 × 10+3
Std1.944 × 10+21.354 × 10+21.198 × 10+22.352 × 10+22.031 × 10+21.666 × 10+21.890 × 10+25.726 × 10+14.238 × 10+22.131 × 10+21.726 × 10+22.705 × 10+2
Rank210481259611173
CEC21Best2.345 × 10+32.571 × 10+32.462 × 10+32.485 × 10+32.798 × 10+32.414 × 10+32.415 × 10+32.546 × 10+32.481 × 10+32.328 × 10+32.393 × 10+32.331 × 10+3
Mean2.382 × 10+32.612 × 10+32.520 × 10+32.593 × 10+32.907 × 10+32.506 × 10+32.495 × 10+32.586 × 10+32.663 × 10+32.411 × 10+32.452 × 10+32.376 × 10+3
Std2.473 × 10+12.137 × 10+11.868 × 10+15.435 × 10+15.768 × 10+14.985 × 10+13.936 × 10+11.499 × 10+19.272 × 10+16.729 × 10+13.647 × 10+11.127 × 10+2
Rank210791265811341
CEC22Best2.300 × 10+34.134 × 10+33.936 × 10+32.418 × 10+39.698 × 10+32.355 × 10+32.308 × 10+34.322 × 10+31.404 × 10+42.300 × 10+32.300 × 10+32.300 × 10+3
Mean4.142 × 10+37.331 × 10+35.891 × 10+38.334 × 10+31.178 × 10+43.701 × 10+35.025 × 10+34.801 × 10+31.404 × 10+46.053 × 10+34.028 × 10+34.236 × 10+3
Std2.581 × 10+32.264 × 10+31.156 × 10+31.582 × 10+36.589 × 10+21.963 × 10+32.079 × 10+31.910 × 10+27.400 × 10−122.830 × 10+32.313 × 10+32.206 × 10+3
Rank297101146512831
CEC23Best2.700 × 10+32.855 × 10+32.795 × 10+32.961 × 10+33.320 × 10+32.802 × 10+32.761 × 10+33.141 × 10+32.969 × 10+32.672 × 10+32.752 × 10+32.661 × 10+3
Mean2.755 × 10+33.019 × 10+32.843 × 10+33.144 × 10+33.854 × 10+32.987 × 10+32.869 × 10+33.181 × 10+33.348 × 10+32.737 × 10+32.889 × 10+32.706 × 10+3
Std2.428 × 10+17.029 × 10+11.291 × 10+11.059 × 10+22.065 × 10+28.730 × 10+15.319 × 10+11.959 × 10+12.544 × 10+25.876 × 10+18.346 × 10+11.829 × 10+1
Rank384912751011261
CEC24Best2.867 × 10+33.059 × 10+33.028 × 10+33.112 × 10+33.719 × 10+32.999 × 10+32.937 × 10+33.315 × 10+34.689 × 10+32.853 × 10+32.945 × 10+32.854 × 10+3
Mean2.945 × 10+33.127 × 10+33.055 × 10+33.300 × 10+34.223 × 10+33.114 × 10+33.031 × 10+33.427 × 10+34.689 × 10+32.943 × 10+33.098 × 10+32.884 × 10+3
Std6.015 × 10+14.252 × 10+11.365 × 10+11.278 × 10+22.508 × 10+27.195 × 10+15.458 × 10+13.559 × 10+19.250 × 10−135.920 × 10+11.074 × 10+22.135 × 10+1
Rank385911741012261
CEC25Best2.884 × 10+33.955 × 10+33.068 × 10+32.895 × 10+31.190 × 10+42.903 × 10+32.888 × 10+33.221 × 10+32.884 × 10+32.878 × 10+32.884 × 10+32.883 × 10+3
Mean2.891 × 10+34.516 × 10+33.184 × 10+32.968 × 10+31.622 × 10+42.964 × 10+32.941 × 10+33.321 × 10+32.968 × 10+32.880 × 10+32.909 × 10+33.270 × 10+3
Std1.270 × 10+14.659 × 10+25.414 × 10+13.602 × 10+12.442 × 10+33.968 × 10+14.287 × 10+13.945 × 10+19.264 × 10+13.626 × 10+02.385 × 10+12.096 × 10+3
Rank311971286105142
CEC26Best2.800 × 10+36.034 × 10+35.406 × 10+33.570 × 10+31.238 × 10+43.267 × 10+33.147 × 10+36.451 × 10+32.412 × 10+43.490 × 10+34.933 × 10+32.900 × 10+3
Mean4.670 × 10+37.175 × 10+35.730 × 10+37.754 × 10+31.526 × 10+46.501 × 10+36.149 × 10+37.665 × 10+32.412 × 10+44.254 × 10+35.691 × 10+34.577 × 10+3
Std7.419 × 10+25.713 × 10+21.491 × 10+21.588 × 10+31.535 × 10+31.605 × 10+38.762 × 10+26.018 × 10+21.110 × 10−116.852 × 10+25.225 × 10+22.241 × 10+3
Rank385911761012241
CEC27Best3.186 × 10+33.234 × 10+33.221 × 10+33.200 × 10+34.455 × 10+33.243 × 10+33.210 × 10+33.563 × 10+33.220 × 10+33.200 × 10+33.224 × 10+33.201 × 10+3
Mean3.289 × 10+33.327 × 10+33.231 × 10+33.200 × 10+35.304 × 10+33.326 × 10+33.257 × 10+33.637 × 10+33.385 × 10+33.200 × 10+33.313 × 10+33.343 × 10+3
Std7.288 × 10+15.794 × 10+14.675 × 10+01.870 × 10−45.052 × 10+26.417 × 10+13.763 × 10+13.376 × 10+11.542 × 10+22.595 × 10−46.824 × 10+14.433 × 10+2
Rank693212851110174
CEC28Best3.108 × 10+33.899 × 10+33.477 × 10+33.296 × 10+38.198 × 10+33.292 × 10+33.259 × 10+34.467 × 10+33.230 × 10+33.300 × 10+33.192 × 10+33.163 × 10+3
Mean3.215 × 10+34.752 × 10+33.707 × 10+33.299 × 10+31.196 × 10+43.363 × 10+33.365 × 10+34.684 × 10+34.067 × 10+33.300 × 10+33.219 × 10+33.210 × 10+3
Std3.204 × 10+16.589 × 10+21.144 × 10+21.109 × 10+01.963 × 10+33.915 × 10+11.320 × 10+28.757 × 10+18.724 × 10+22.982 × 10−42.089 × 10+12.617 × 10+1
Rank210851276119431
CEC29Best3.383 × 10+034.081 × 10+33.669 × 10+33.910 × 10+37.304 × 10+34.149 × 10+33.544 × 10+34.324 × 10+34.168 × 10+33.145 × 10+33.618 × 10+33.291 × 10+3
Mean4.029 × 10+034.597 × 10+33.978 × 10+34.686 × 10+32.117 × 10+44.606 × 10+34.106 × 10+34.803 × 10+35.665 × 10+33.613 × 10+34.242 × 10+33.664 × 10+3
Std3.881 × 10+023.115 × 10+21.391 × 10+24.986 × 10+21.387 × 10+42.055 × 10+22.676 × 10+21.642 × 10+22.420 × 10+32.993 × 10+22.649 × 10+21.863 × 10+2
Rank493712851011162
CEC30Best6.126 × 10+32.321 × 10+72.587 × 10+59.263 × 10+34.416 × 10+85.747 × 10+59.088 × 10+31.522 × 10+81.185 × 10+43.212 × 10+38.460 × 10+35.386 × 10+3
Mean5.553 × 10+56.233 × 10+71.013 × 10+61.001 × 10+73.003 × 10+99.539 × 10+61.222 × 10+62.894 × 10+85.547 × 10+51.396 × 10+47.144 × 10+48.781 × 10+3
Std2.808 × 10+62.660 × 10+74.585 × 10+51.097 × 10+71.539 × 10+96.865 × 10+62.038 × 10+66.587 × 10+77.628 × 10+52.534 × 10+47.037 × 10+43.112 × 10+3
Rank310781296115142
Mean Rank | 2.72 | 9.90 | 7.07 | 8.17 | 11.90 | 6.62 | 5.48 | 9.59 | 8.10 | 2.72 | 4.14 | 1.58
Final Ranking | 2 | 11 | 7 | 9 | 12 | 6 | 5 | 10 | 8 | 2 | 4 | 1
Table 4. Test results of LRMHBA with other algorithms on CEC2017 (dim = 100).
Function | Index | HBA | PSO | DE | WOA | AOA | PO | DBO | GQPSO | QHDBO | LSHADE | SaCHBA_PDN | LRMHBA
CEC01Best1.822 × 10+72.767 × 10+111.773 × 10+112.415 × 10+95.206 × 10+112.056 × 10+103.831 × 10+91.623 × 10+117.078 × 10+095.225 × 10+32.535 × 10+71.166 × 10+5
Mean1.558 × 10+93.326 × 10+112.199 × 10+113.875 × 10+96.059 × 10+113.809 × 10+101.673 × 10+101.657 × 10+113.036 × 10+102.716 × 10+93.066 × 10+81.247 × 10+6
Std1.853 × 10+93.193 × 10+101.241 × 10+101.026 × 10+93.294 × 10+109.365 × 10+92.158 × 10+101.802 × 10+91.202 × 10+108.337 × 10+95.054 × 10+81.138 × 10+6
Rank411105128697231
CEC03Best2.197 × 10+57.104 × 10+57.156 × 10+54.868 × 10+58.023 × 10+51.715 × 10+53.466 × 10+52.649 × 10+53.761 × 10+52.962 × 10+56.301 × 10+41.655 × 10+5
Mean2.722 × 10+59.684 × 10+58.882 × 10+58.701 × 10+58.249 × 10+102.044 × 10+56.375 × 10+52.872 × 10+51.184 × 10+68.650 × 10+58.665 × 10+42.059 × 10+5
Std2.239 × 10+41.329 × 10+56.350 × 10+41.978 × 10+52.661 × 10+111.588 × 10+42.324 × 10+59.710 × 10+33.119 × 10+53.466 × 10+51.451 × 10+42.187 × 10+4
Rank410871226511913
CEC04Best7.874 × 10+23.907 × 10+43.310 × 10+41.534 × 10+031.894 × 10+52.208 × 10+31.281 × 10+33.136 × 10+42.113 × 10+34.976 × 10+27.942 × 10+27.001 × 10+2
Mean9.253 × 10+26.342 × 10+43.802 × 10+42.410 × 10+032.693 × 10+53.864 × 10+33.021 × 10+33.438 × 10+46.928 × 10+32.372 × 10+39.884 × 10+27.725 × 10+2
Std9.496 × 10+11.704 × 10+42.411 × 10+35.914 × 10+023.821 × 10+48.715 × 10+22.804 × 10+31.233 × 10+34.368 × 10+38.155 × 10+31.547 × 10+24.891 × 10+1
Rank311105127698241
CEC05Best9.526 × 10+22.046 × 10+32.101 × 10+31.443 × 10+032.830 × 10+31.459 × 10+31.249 × 10+31.801 × 10+31.180 × 10+36.224 × 10+21.147 × 10+37.390 × 10+2
Mean1.109 × 10+32.345 × 10+32.163 × 10+31.662 × 10+033.027 × 10+31.554 × 10+31.666 × 10+31.835 × 10+31.328 × 10+31.243 × 10+31.288 × 10+38.854 × 10+2
Std8.968 × 10+11.271 × 10+23.802 × 10+11.815 × 10+021.081 × 10+26.241 × 10+11.425 × 10+21.675 × 10+16.349 × 10+13.053 × 10+27.608 × 10+16.979 × 10+1
Rank211107126895341
CEC06Best6.258 × 10+27.132 × 10+26.775 × 10+26.784 × 10+027.473 × 10+26.719 × 10+26.477 × 10+26.856 × 10+26.481 × 10+26.000 × 10+26.549 × 10+26.010 × 10+2
Mean6.375 × 10+27.247 × 10+26.858 × 10+26.927 × 10+027.609 × 10+26.812 × 10+26.727 × 10+26.906 × 10+26.546 × 10+26.149 × 10+26.634 × 10+26.031 × 10+2
Std7.000 × 10+06.883 × 10+03.096 × 10+09.670 × 10+006.521 × 10+05.067 × 10+09.215 × 10+02.162 × 10+04.094 × 10+02.964 × 10+14.470 × 10+01.402 × 10+0
Rank311891276104251
CEC07Best1.637 × 10+36.805 × 10+38.288 × 10+33.130 × 10+31.200 × 10+42.964 × 10+31.844 × 10+33.093 × 10+31.728 × 10+31.134 × 10+32.311 × 10+31.063 × 10+3
Mean2.041 × 10+38.369 × 10+39.110 × 10+33.419 × 10+31.315 × 10+43.403 × 10+32.499 × 10+33.154 × 10+32.279 × 10+31.693 × 10+32.698 × 10+31.168 × 10+3
Std1.869 × 10+24.707 × 10+23.408 × 10+21.562 × 10+25.127 × 10+21.653 × 10+25.879 × 10+24.717 × 10+12.671 × 10+26.036 × 10+21.724 × 10+27.186 × 10+1
Rank310119128574261
CEC08Best1.206 × 10+32.388 × 10+32.365 × 10+31.858 × 10+33.098 × 10+31.850 × 10+31.658 × 10+32.154 × 10+31.499 × 10+31.008 × 10+31.480 × 10+31.045 × 10+3
Mean1.387 × 10+32.645 × 10+32.452 × 10+32.106 × 10+33.464 × 10+31.989 × 10+31.989 × 10+32.199 × 10+31.669 × 10+31.616 × 10+31.618 × 10+31.126 × 10+3
Std7.768 × 10+11.426 × 10+24.199 × 10+11.519 × 10+21.399 × 10+28.957 × 10+11.540 × 10+21.801 × 10+19.639 × 10+13.752 × 10+28.558 × 10+15.226 × 10+1
Rank211108126795341
CEC09Best1.408 × 10+41.047 × 10+51.084 × 10+53.627 × 10+41.723 × 10+53.235 × 10+42.093 × 10+45.267 × 10+41.975 × 10+49.125 × 10+22.039 × 10+43.051 × 10+3
Mean2.272 × 10+41.328 × 10+51.343 × 10+55.837 × 10+42.185 × 10+54.047 × 10+44.903 × 10+45.619 × 10+44.337 × 10+42.369 × 10+42.637 × 10+48.251 × 10+3
Std3.392 × 10+31.316 × 10+49.405 × 10+31.641 × 10+41.638 × 10+44.906 × 10+31.926 × 10+42.227 × 10+32.442 × 10+43.010 × 10+43.725 × 10+33.267 × 10+3
Rank210118126795341
CEC10Best1.280 × 10+43.161 × 10+42.841 × 10+42.081 × 10+43.419 × 10+41.825 × 10+41.268 × 10+42.876 × 10+43.846 × 10+41.841 × 10+41.597 × 10+41.232 × 10+4
Mean1.712 × 10+43.243 × 10+42.931 × 10+42.585 × 10+43.600 × 10+42.295 × 10+41.794 × 10+43.010 × 10+43.846 × 10+42.795 × 10+42.135 × 10+41.615 × 10+4
Std3.252 × 10+34.542 × 10+23.760 × 10+22.660 × 10+37.878 × 10+22.206 × 10+31.671 × 10+34.993 × 10+22.220 × 10−115.288 × 10+33.299 × 10+32.590 × 10+3
Rank210861153912741
CEC11Best4.326 × 10+31.619 × 10+51.309 × 10+53.089 × 10+44.148 × 10+51.446 × 10+42.884 × 10+48.732 × 10+47.063 × 10+47.525 × 10+32.882 × 10+33.058 × 10+3
Mean6.248 × 10+32.469 × 10+51.795 × 10+58.670 × 10+42.633 × 10+72.475 × 10+49.970 × 10+49.384 × 10+42.097 × 10+51.087 × 10+54.575 × 10+34.352 × 10+3
Std1.986 × 10+36.108 × 10+42.083 × 10+45.175 × 10+47.664 × 10+76.078 × 10+33.971 × 10+43.974 × 10+31.019 × 10+58.240 × 10+41.101 × 10+31.314 × 10+3
Rank311105124869721
CEC12Best1.006 × 10+75.639 × 10+103.400 × 10+106.826 × 10+82.606 × 10+111.145 × 10+94.683 × 10+88.315 × 10+102.958 × 10+94.379 × 10+72.034 × 10+75.574 × 10+6
Mean3.478 × 10+78.357 × 10+104.208 × 10+103.074 × 10+93.382 × 10+113.716 × 10+91.473 × 10+99.473 × 10+101.659 × 10+105.508 × 10+99.615 × 10+71.394 × 10+7
Std1.678 × 10+71.551 × 10+103.761 × 10+91.817 × 10+93.713 × 10+101.338 × 10+97.689 × 10+84.349 × 10+91.012 × 10+101.772 × 10+101.032 × 10+86.024 × 10+6
Rank210961275118431
CEC13Best1.513 × 10+47.198 × 10+91.036 × 10+92.575 × 10+67.090 × 10+104.193 × 10+61.568 × 10+51.615 × 10+106.455 × 10+61.868 × 10+34.061 × 10+42.857 × 10+3
Mean2.957 × 10+41.380 × 10+101.462 × 10+97.742 × 10+68.886 × 10+101.659 × 10+87.644 × 10+71.885 × 10+103.121 × 10+96.916 × 10+61.320 × 10+58.822 × 10+3
Std9.707 × 10+33.383 × 10+91.698 × 10+86.456 × 10+69.725 × 10+92.077 × 10+89.377 × 10+71.352 × 10+93.420 × 10+93.370 × 10+71.112 × 10+56.487 × 10+3
Rank210851276119341
CEC14Best6.923 × 10+41.579 × 10+71.954 × 10+71.320 × 10+61.796 × 10+81.706 × 10+62.248 × 10+61.015 × 10+72.695 × 10+68.126 × 10+47.349 × 10+47.685 × 10+4
Mean5.361 × 10+55.457 × 10+74.019 × 10+77.028 × 10+66.765 × 10+84.499 × 10+67.934 × 10+61.592 × 10+71.274 × 10+71.030 × 10+72.054 × 10+53.550 × 10+5
Std2.368 × 10+52.300 × 10+71.035 × 10+75.017 × 10+62.714 × 10+81.817 × 10+65.020 × 10+62.269 × 10+68.366 × 10+68.426 × 10+61.406 × 10+51.801 × 10+5
Rank311105124698712
CEC15Best3.560 × 10+32.437 × 10+91.078 × 10+82.397 × 10+53.599 × 10+107.263 × 10+44.477 × 10+45.942 × 10+95.583 × 10+41.809 × 10+31.640 × 10+41.904 × 10+3
Mean1.015 × 10+45.356 × 10+91.854 × 10+89.925 × 10+54.253 × 10+101.500 × 10+78.312 × 10+67.069 × 10+92.065 × 10+91.549 × 10+79.354 × 10+44.952 × 10+3
Std7.725 × 10+31.723 × 10+94.443 × 10+77.510 × 10+55.405 × 10+092.088 × 10+72.723 × 10+75.516 × 10+82.679 × 10+95.771 × 10+71.261 × 10+55.687 × 10+3
Rank210861275119341
CEC16Best4.498 × 10+31.184 × 10+41.090 × 10+41.076 × 10+42.570 × 10+47.612 × 10+36.624 × 10+31.302 × 10+47.366 × 10+37.077 × 10+35.275 × 10+34.121 × 10+3
Mean5.761 × 10+31.330 × 10+41.183 × 10+41.397 × 10+43.656 × 10+41.034 × 10+48.173 × 10+31.401 × 10+49.376 × 10+39.750 × 10+37.314 × 10+35.522 × 10+3
Std5.936 × 10+28.210 × 10+24.327 × 10+21.728 × 10+35.237 × 10+31.126 × 10+31.027 × 10+33.683 × 10+21.294 × 10+31.144 × 10+31.519 × 10+37.987 × 10+2
Rank298101274115631
CEC17Best3.919 × 10+31.312 × 10+48.541 × 10+36.004 × 10+31.210 × 10+75.488 × 10+35.643 × 10+31.494 × 10+46.877 × 10+36.041 × 10+35.280 × 10+33.459 × 10+3
Mean5.021 × 10+35.133 × 10+49.477 × 10+37.875 × 10+36.376 × 10+77.579 × 10+37.900 × 10+32.767 × 10+49.458 × 10+48.250 × 10+36.410 × 10+34.822 × 10+3
Std5.535 × 10+28.347 × 10+44.699 × 10+29.812 × 10+23.607 × 10+71.691 × 10+39.708 × 10+25.655 × 10+34.140 × 10+53.391 × 10+36.239 × 10+26.304 × 10+2
Rank210961247118531
CEC18Best3.734 × 10+54.288 × 10+73.362 × 10+71.219 × 10+65.789 × 10+81.780 × 10+63.247 × 10+61.311 × 10+75.792 × 10+69.811 × 10+52.679 × 10+54.071 × 10+5
Mean1.168 × 10+69.027 × 10+75.991 × 10+77.124 × 10+61.322 × 10+93.909 × 10+61.124 × 10+72.250 × 10+73.641 × 10+72.350 × 10+75.194 × 10+58.226 × 10+5
Std5.423 × 10+53.369 × 10+71.412 × 10+74.328 × 10+64.436 × 10+81.356 × 10+67.402 × 10+63.874 × 10+62.992 × 10+74.006 × 10+71.937 × 10+53.368 × 10+5
Rank311105124689712
CEC19Best2.362 × 10+31.525 × 10+91.706 × 10+83.862 × 10+62.891 × 10+107.189 × 10+65.677 × 10+44.994 × 10+93.456 × 10+62.020 × 10+32.718 × 10+42.097 × 10+3
Mean9.591 × 10+34.438 × 10+93.217 × 10+83.166 × 10+74.353 × 10+105.270 × 10+72.407 × 10+76.136 × 10+96.314 × 10+86.601 × 10+51.385 × 10+55.417 × 10+3
Std9.660 × 10+31.617 × 10+96.736 × 10+72.212 × 10+76.647 × 10+91.164 × 10+82.388 × 10+75.959 × 10+81.027 × 10+91.812 × 10+61.410 × 10+54.125 × 10+3
Rank210961275118341
CEC20Best4.117 × 10+37.100 × 10+35.899 × 10+35.499 × 10+38.324 × 10+35.067 × 10+34.992 × 10+36.363 × 10+37.052 × 10+35.639 × 10+34.752 × 10+33.947 × 10+3
Mean5.187 × 10+37.888 × 10+36.786 × 10+36.691 × 10+39.660 × 10+36.076 × 10+36.062 × 10+36.785 × 10+38.275 × 10+36.631 × 10+35.760 × 10+34.939 × 10+3
Std5.829 × 10+22.833 × 10+23.409 × 10+26.119 × 10+23.967 × 10+25.381 × 10+26.682 × 10+22.173 × 10+25.791 × 10+29.393 × 10+24.640 × 10+25.212 × 10+2
Rank210971245811631
CEC21Best2.695 × 10+34.050 × 10+33.874 × 10+33.765 × 10+35.034 × 10+33.390 × 10+33.371 × 10+33.803 × 10+33.555 × 10+32.638 × 10+33.166 × 10+32.510 × 10+3
Mean2.843 × 10+34.281 × 10+34.022 × 10+34.314 × 10+35.345 × 10+33.690 × 10+33.572 × 10+33.860 × 10+34.177 × 10+33.081 × 10+33.392 × 10+32.606 × 10+3
Std8.851 × 10+11.453 × 10+25.202 × 10+12.531 × 10+21.687 × 10+21.736 × 10+21.121 × 10+23.111 × 10+13.462 × 10+23.466 × 10+21.350 × 10+28.632 × 10+1
Rank211810126579341
CEC22Best1.566 × 10+43.278 × 10+43.015 × 10+42.311 × 10+43.756 × 10+42.226 × 10+41.787 × 10+43.156 × 10+42.488 × 10+41.939 × 10+41.804 × 10+41.687 × 10+4
Mean2.155 × 10+43.425 × 10+43.116 × 10+42.974 × 10+43.879 × 10+42.577 × 10+42.089 × 10+43.267 × 10+42.816 × 10+42.894 × 10+42.404 × 10+41.973 × 10+4
Std3.087 × 10+36.017 × 10+24.286 × 10+23.140 × 10+35.745 × 10+21.833 × 10+31.862 × 10+34.966 × 10+21.942 × 10+34.764 × 10+32.718 × 10+32.085 × 10+3
Rank311981252106741
CEC23Best3.258 × 10+34.605 × 10+33.927 × 10+34.754 × 10+37.376 × 10+34.276 × 10+33.892 × 10+35.625 × 10+38.724 × 10+33.019 × 10+33.833 × 10+33.016 × 10+3
Mean3.441 × 10+35.011 × 10+33.993 × 10+35.337 × 10+38.159 × 10+34.568 × 10+34.219 × 10+35.792 × 10+38.724 × 10+33.494 × 10+34.468 × 10+33.161 × 10+3
Std1.064 × 10+22.425 × 10+22.209 × 10+13.041 × 10+24.690 × 10+21.744 × 10+21.514 × 10+27.152 × 10+13.700 × 10−125.017 × 10+23.579 × 10+26.781 × 10+1
Rank384911751012261
CEC24Best3.664 × 10+35.194 × 10+34.461 × 10+35.694 × 10+31.160 × 10+44.938 × 10+34.526 × 10+37.587 × 10+31.423 × 10+43.461 × 10+34.580 × 10+33.539 × 10+3
Mean4.124 × 10+35.979 × 10+34.588 × 10+36.951 × 10+31.376 × 10+45.811 × 10+35.027 × 10+37.950 × 10+31.423 × 10+44.054 × 10+35.707 × 10+33.637 × 10+3
Std4.932 × 10+24.214 × 10+23.501 × 10+16.592 × 10+29.340 × 10+23.934 × 10+22.571 × 10+21.489 × 10+27.400 × 10−124.475 × 10+27.148 × 10+25.968 × 10+1
Rank384911751012261
CEC25Best3.409 × 10+33.893 × 10+44.672 × 10+44.294 × 10+38.876 × 10+45.121 × 10+33.540 × 10+31.421 × 10+43.589 × 10+33.270 × 10+33.478 × 10+33.251 × 10+3
Mean3.612 × 10+35.584 × 10+45.561 × 10+44.678 × 10+31.256 × 10+55.888 × 10+38.259 × 10+31.489 × 10+44.368 × 10+36.534 × 10+33.630 × 10+33.449 × 10+3
Std8.397 × 10+17.960 × 10+34.866 × 10+33.824 × 10+21.773 × 10+44.766 × 10+24.612 × 10+32.262 × 10+25.622 × 10+21.073 × 10+49.013 × 10+16.705 × 10+1
Rank211106128795341
CEC26Best1.078 × 10+42.577 × 10+41.954 × 10+42.784 × 10+47.036 × 10+42.575 × 10+42.006 × 10+43.117 × 10+48.882 × 10+48.331 × 10+31.659 × 10+43.647 × 10+3
Mean1.293 × 10+43.006 × 10+42.025 × 10+43.515 × 10+48.711 × 10+43.106 × 10+42.399 × 10+43.191 × 10+48.882 × 10+41.402 × 10+42.171 × 10+49.067 × 10+3
Std1.750 × 10+32.251 × 10+33.653 × 10+23.922 × 10+38.875 × 10+32.892 × 10+32.365 × 10+34.159 × 10+20.000 × 10+06.779 × 10+33.224 × 10+31.843 × 10+3
Rank274101186912351
CEC27Best3.561 × 10+34.416 × 10+33.980 × 10+33.200 × 10+31.427 × 10+44.042 × 10+33.579 × 10+37.453 × 10+33.840 × 10+33.200 × 10+33.760 × 10+33.439 × 10+3
Mean4.087 × 10+35.181 × 10+34.188 × 10+33.200 × 10+31.667 × 10+44.596 × 10+34.015 × 10+37.922 × 10+34.419 × 10+33.200 × 10+34.418 × 10+33.684 × 10+3
Std5.506 × 10+25.009 × 10+27.984 × 10+12.477 × 10−41.417 × 10+34.180 × 10+22.380 × 10+22.039 × 10+23.514 × 10+24.285 × 10−44.331 × 10+22.093 × 10+2
Rank510611294118273
CEC28Best3.538 × 10+32.091 × 10+41.617 × 10+43.300 × 10+35.453 × 10+44.484 × 10+35.157 × 10+31.489 × 10+44.015 × 10+33.300 × 10+33.488 × 10+33.487 × 10+3
Mean3.736 × 10+32.866 × 10+41.647 × 10+43.300 × 10+36.936 × 10+46.167 × 10+31.719 × 10+41.550 × 10+46.229 × 10+33.300 × 10+33.636 × 10+33.587 × 10+3
Std1.198 × 10+26.186 × 10+31.270 × 10+22.516 × 10−47.119 × 10+37.805 × 10+25.521 × 10+32.975 × 10+22.017 × 10+35.236 × 10−46.696 × 10+14.447 × 10+1
Rank511921271086143
CEC29Best5.485 × 10+31.727 × 10+41.116 × 10+47.946 × 10+32.209 × 10+61.022 × 10+48.291 × 10+32.341 × 10+47.287 × 10+35.980 × 10+37.787 × 10+35.005 × 10+3
Mean6.779 × 10+33.155 × 10+41.229 × 10+41.448 × 10+41.418 × 10+71.393 × 10+41.013 × 10+42.894 × 10+41.970 × 10+49.354 × 10+31.089 × 10+46.361 × 10+3
Std6.611 × 10+28.819 × 10+35.543 × 10+22.800 × 10+39.130 × 10+61.752 × 10+39.695 × 10+23.487 × 10+31.561 × 10+42.806 × 10+31.386 × 10+35.860 × 10+2
Rank211681274109351
CEC30Best6.433 × 10+43.418 × 10+91.153 × 10+84.595 × 10+74.448 × 10+101.801 × 10+82.918 × 10+61.308 × 10+106.334 × 10+73.420 × 10+35.587 × 10+51.033 × 10+4
Mean3.317 × 10+57.402 × 10+91.671 × 10+84.141 × 10+86.986 × 10+104.720 × 10+85.730 × 10+71.624 × 10+102.351 × 10+99.830 × 10+63.166 × 10+62.273 × 10+4
Std4.408 × 10+51.750 × 10+93.250 × 10+73.968 × 10+81.159 × 10+101.889 × 10+88.123 × 10+71.298 × 10+91.550 × 10+94.552 × 10+74.420 × 10+69.291 × 10+3
Rank310671285119241
Mean Rank | 2.69 | 10.17 | 8.34 | 6.7 | 11.86 | 6.28 | 5.66 | 9.24 | 8.03 | 3.86 | 3.86 | 1.27
Final Ranking | 2 | 11 | 9 | 7 | 12 | 6 | 5 | 10 | 8 | 3 | 3 | 1
Table 5. Wilcoxon rank-sum test (dim = 100).
Function | HBA | PSO | DE | WOA | AOA | PO | DBO | GQPSO | QHDBO | LSHADE | SaCHBA_PDN
CEC019.063 × 10−83.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−118.120 × 10−43.019 × 10−11
+++++++++++
CEC033.965 × 10−83.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−111.784 × 10−43.019 × 10−11
+++++++++++
CEC042.879 × 10−63.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−112.006 × 10−43.019 × 10−11
+++++++++++
CEC054.998 × 10−93.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−112.068 × 10−23.019 × 10−11
+++++++++++
CEC064.998 × 10−93.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.006 × 10−43.019 × 10−11
+++++++++++
CEC075.186 × 10−73.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−111.114 × 10−33.019 × 10−11
+++++++++++
CEC081.492 × 10−63.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−118.197 × 10−73.019 × 10−11
+++++++++++
CEC093.094 × 10−63.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−114.982 × 10−43.019 × 10−11
+++++++++++
CEC105.462 × 10−93.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−115.874 × 10−43.019 × 10−11
+++++++++++
CEC114.801 × 10−73.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−114.856 × 10−33.019 × 10−11
+++++++++++
CEC123.646 × 10−83.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.835 × 10−63.019 × 10−11
+++++++++++
CEC136.518 × 10−93.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.006 × 10−43.019 × 10−11
+++++++++++
CEC147.043 × 10−73.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−114.975 × 10−112.531 × 10−43.019 × 10−11
+++++++++++
CEC155.600 × 10−73.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−112.709 × 10−23.019 × 10−11
+++++++++++
CEC164.311 × 10−83.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−111.996 × 10−53.019 × 10−11
+++++++++++
CEC172.195 × 10−83.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−114.060 × 10−23.019 × 10−11
+++++++++++
CEC181.102 × 10−83.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−118.564 × 10−43.019 × 10−11
+++++++++++
CEC191.850 × 10−83.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−112.278 × 10−53.019 × 10−11
+++++++++++
CEC204.444 × 10−73.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−112.891 × 10−33.019 × 10−11
+++++++++++
CEC219.063 × 10−83.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−116.913 × 10−43.019 × 10−11
+++++++++++
CEC225.092 × 10−83.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−111.777 × 10−108.292 × 10−63.019 × 10−11
+++++++++++
CEC233.646 × 10−083.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−111.058 × 10−33.019 × 10−11
+++++++++++
CEC243.081 × 10−083.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−118.771 × 10−23.019 × 10−11
+++++++++-+
CEC251.206 × 10−103.002 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−115.091 × 10−63.019 × 10−11
+++++++++++
CEC262.390 × 10−083.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−112.510 × 10−23.019 × 10−11
+++++++++++
CEC273.646 × 10−083.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−115.943 × 10−23.019 × 10−11
+++++++++-+
CEC282.602 × 10−083.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−114.353 × 10−53.019 × 10−11
+++++++++++
CEC296.010 × 10−083.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−114.226 × 10−33.019 × 10−11
+++++++++++
CEC306.528 × 10−083.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−113.019 × 10−112.002 × 10−63.019 × 10−11
+++++++++++
+/=/- | 29/0/0 | 29/0/0 | 29/0/0 | 29/0/0 | 29/0/0 | 29/0/0 | 29/0/0 | 29/0/0 | 29/0/0 | 27/0/2 | 29/0/0
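The p-values in Table 5 come from pairwise Wilcoxon rank-sum tests between LRMHBA and each comparison algorithm on every benchmark function, judged against the usual 0.05 significance level. The snippet below is a minimal sketch of how such a comparison can be computed with SciPy; the run counts, array contents, and variable names are placeholders chosen for illustration and are not taken from the paper's experimental code.

```python
from scipy.stats import ranksums
import numpy as np

# Hypothetical final objective values from 30 independent runs on one benchmark function.
rng = np.random.default_rng(1)
lrmhba_runs = rng.normal(100.0, 1.0, size=30)      # stand-in results for LRMHBA
competitor_runs = rng.normal(105.0, 2.0, size=30)  # stand-in results for one competitor

# Two-sided Wilcoxon rank-sum test between the two result samples.
stat, p_value = ranksums(lrmhba_runs, competitor_runs)
alpha = 0.05  # significance level commonly used with this test
significant = p_value < alpha
print(f"p = {p_value:.3e}, significant difference: {significant}")
```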
Table 6. Results of ablation experiment.
Function | Index | HBA | LRMHBA1 | LRMHBA2 | LRMHBA3 | LRMHBA
CEC01Best1.012 × 10+21.013 × 10+21.025 × 10+21.190 × 10+21.150 × 10+2
Mean6.561 × 10+35.258 × 10+33.687 × 10+35.393 × 10+33.302 × 10+3
Std7.134 × 10+35.978 × 10+33.596 × 10+34.856 × 10+33.827 × 10+3
Rank53241
CEC03Best1.248 × 10+37.671 × 10+21.552 × 10+33.036 × 10+23.048 × 10+2
Mean3.231 × 10+33.584 × 10+34.083 × 10+34.298 × 10+29.780 × 10+2
Std1.327 × 10+33.115 × 10+31.529 × 10+31.707 × 10+21.959 × 10+3
Rank43512
CEC04Best4.049 × 10+24.138 × 10+24.601 × 10+24.010 × 10+24.001 × 10+2
Mean4.818 × 10+24.804 × 10+24.912 × 10+24.855 × 10+24.824 × 10+2
Std2.386 × 10+12.517 × 10+11.758 × 10+12.070 × 10+12.875 × 10+1
Rank13542
CEC05Best5.497 × 10+25.647 × 10+25.348 × 10+25.348 × 10+25.229 × 10+2
Mean6.041 × 10+26.234 × 10+25.698 × 10+25.662 × 10+25.551 × 10+2
Std2.948 × 10+13.347 × 10+12.041 × 10+11.651 × 10+11.852 × 10+1
Rank45321
CEC06Best6.004 × 10+26.006 × 10+26.001 × 10+26.000 × 10+26.000 × 10+2
Mean6.043 × 10+26.065 × 10+26.003 × 10+26.000 × 10+26.080 × 10+2
Std4.665 × 10+05.925 × 10+02.303 × 10−13.328 × 10−33.035 × 10+1
Rank45321
CEC07Best7.944 × 10+27.905 × 10+27.622 × 10+27.641 × 10+27.575 × 10+2
Mean8.575 × 10+28.722 × 10+27.957 × 10+28.021 × 10+27.810 × 10+2
Std3.717 × 10+14.464 × 10+11.798 × 10+12.602 × 10+11.454 × 10+1
Rank45231
CEC08Best8.557 × 10+28.458 × 10+28.408 × 10+28.368 × 10+28.388 × 10+2
Mean8.972 × 10+28.993 × 10+28.650 × 10+28.646 × 10+28.588 × 10+2
Std2.153 × 10+11.991 × 10+11.197 × 10+11.759 × 10+11.462 × 10+1
Rank45321
CEC09Best1.070 × 10+31.033 × 10+39.039 × 10+29.003 × 10+29.000 × 10+2
Mean2.049 × 10+32.426 × 10+39.460 × 10+29.943 × 10+29.043 × 10+2
Std6.578 × 10+28.011 × 10+24.096 × 10+12.297 × 10+27.905 × 10+0
Rank45321
CEC10Best3.491 × 10+33.689 × 10+34.443 × 10+33.174 × 10+33.569 × 10+3
Mean4.821 × 10+35.268 × 10+35.879 × 10+34.615 × 10+35.079 × 10+3
Std9.057 × 10+21.204 × 10+31.053 × 10+37.697 × 10+28.708 × 10+2
Rank24513
CEC11Best1.145 × 10+31.130 × 10+31.114 × 10+31.119 × 10+31.114 × 10+3
Mean1.226 × 10+31.215 × 10+31.178 × 10+31.147 × 10+31.139 × 10+3
Std4.197 × 10+14.861 × 10+14.143 × 10+12.734 × 10+12.272 × 10+1
Rank54321
CEC12Best1.090 × 10+48.781 × 10+32.404 × 10+45.149 × 10+38.858 × 10+3
Mean7.432 × 10+48.874 × 10+49.861 × 10+43.602 × 10+46.223 × 10+8
Std6.303 × 10+49.746 × 10+48.322 × 10+42.397 × 10+43.409 × 10+9
Rank43512
CEC13Best5.638 × 10+32.675 × 10+31.678 × 10+31.467 × 10+31.343 × 10+3
Mean3.419 × 10+43.242 × 10+41.820 × 10+41.794 × 10+42.665 × 10+4
Std3.530 × 10+42.202 × 10+41.915 × 10+41.842 × 10+42.312 × 10+4
Rank45213
CEC14Best1.977 × 10+31.925 × 10+31.834 × 10+31.695 × 10+31.548 × 10+3
Mean9.720 × 10+38.360 × 10+31.374 × 10+43.874 × 10+33.695 × 10+3
Std1.157 × 10+46.583 × 10+31.068 × 10+42.741 × 10+32.118 × 10+3
Rank34521
CEC15Best2.215 × 10+31.881 × 10+31.715 × 10+31.657 × 10+31.524 × 10+3
Mean1.489 × 10+41.353 × 10+49.104 × 10+31.002 × 10+41.418 × 10+8
Std1.970 × 10+41.456 × 10+49.948 × 10+31.095 × 10+47.764 × 10+8
Rank54321
CEC16Best1.910 × 10+31.977 × 10+31.883 × 10+31.871 × 10+31.746 × 10+3
Mean2.607 × 10+32.619 × 10+32.449 × 10+32.620 × 10+32.882 × 10+3
Std2.702 × 10+23.939 × 10+22.850 × 10+23.874 × 10+22.691 × 10+3
Rank34152
CEC17Best1.915 × 10+31.800 × 10+31.741 × 10+31.748 × 10+31.737 × 10+3
Mean2.103 × 10+32.158 × 10+31.991 × 10+32.090 × 10+32.043 × 10+3
Std1.466 × 10+22.355 × 10+21.712 × 10+22.031 × 10+21.914 × 10+2
Rank45132
CEC18Best2.251 × 10+43.658 × 10+44.196 × 10+47.205 × 10+31.199 × 10+4
Mean1.818 × 10+52.476 × 10+52.789 × 10+58.728 × 10+48.309 × 10+4
Std1.752 × 10+52.327 × 10+52.479 × 10+57.751 × 10+45.484 × 10+4
Rank34512
CEC19Best2.089 × 10+32.029 × 10+31.933 × 10+31.933 × 10+31.924 × 10+3
Mean1.143 × 10+41.034 × 10+41.095 × 10+41.121 × 10+49.254 × 10+3
Std1.408 × 10+41.475 × 10+41.398 × 10+41.132 × 10+41.102 × 10+4
Rank42153
CEC20Best2.125 × 10+32.205 × 10+32.140 × 10+32.057 × 10+32.045 × 10+3
Mean2.464 × 10+32.503 × 10+32.491 × 10+32.406 × 10+32.479 × 10+3
Std2.135 × 10+21.729 × 10+22.067 × 10+22.249 × 10+22.277 × 10+2
Rank25314
CEC21Best2.345 × 10+32.200 × 10+32.327 × 10+32.332 × 10+32.328 × 10+3
Mean2.399 × 10+32.388 × 10+32.357 × 10+32.359 × 10+32.351 × 10+3
Std3.286 × 10+14.583 × 10+11.912 × 10+11.513 × 10+11.364 × 10+1
Rank54231
CEC22Best2.300 × 10+32.300 × 10+32.300 × 10+32.300 × 10+32.300 × 10+3
Mean3.404 × 10+33.662 × 10+33.992 × 10+33.616 × 10+34.077 × 10+3
Std2.057 × 10+32.193 × 10+32.482 × 10+32.093 × 10+32.276 × 10+3
Rank34512
CEC23Best2.701 × 10+32.706 × 10+32.691 × 10+32.400 × 10+32.679 × 10+3
Mean2.760 × 10+32.776 × 10+32.720 × 10+32.718 × 10+32.710 × 10+3
Std3.013 × 10+15.079 × 10+11.871 × 10+16.572 × 10+11.696 × 10+1
Rank45231
CEC24Best2.893 × 10+32.870 × 10+32.848 × 10+32.853 × 10+32.859 × 10+3
Mean2.960 × 10+33.009 × 10+32.875 × 10+32.923 × 10+32.884 × 10+3
Std8.248 × 10+11.994 × 10+21.511 × 10+19.380 × 10+11.750 × 10+1
Rank45132
CEC25Best2.884 × 10+32.884 × 10+32.884 × 10+32.884 × 10+32.883 × 10+3
Mean2.894 × 10+32.891 × 10+32.888 × 10+32.887 × 10+33.114 × 10+3
Std1.386 × 10+11.241 × 10+11.034 × 10+14.884 × 10+01.244 × 10+3
Rank54123
CEC26Best2.800 × 10+32.800 × 10+32.900 × 10+32.900 × 10+32.900 × 10+3
Mean4.662 × 10+34.275 × 10+34.200 × 10+34.450 × 10+34.174 × 10+3
Std6.194 × 10+29.792 × 10+23.730 × 10+24.136 × 10+22.791 × 10+2
Rank54132
CEC27Best3.210 × 10+33.217 × 10+33.200 × 10+33.204 × 10+33.210 × 10+3
Mean3.309 × 10+33.403 × 10+33.250 × 10+33.269 × 10+33.355 × 10+3
Std1.130 × 10+23.264 × 10+23.441 × 10+15.323 × 10+16.116 × 10+2
Rank45231
CEC28Best3.100 × 10+33.162 × 10+33.123 × 10+33.102 × 10+33.142 × 10+3
Mean3.367 × 10+33.224 × 10+33.316 × 10+33.200 × 10+33.215 × 10+3
Std8.128 × 10+23.479 × 10+15.677 × 10+23.901 × 10+13.406 × 10+1
Rank54312
CEC29Best3.405 × 10+33.513 × 10+33.439 × 10+33.359 × 10+33.361 × 10+3
Mean4.128 × 10+34.169 × 10+33.917 × 10+33.927 × 10+33.790 × 10+3
Std4.674 × 10+26.253 × 10+22.859 × 10+24.243 × 10+22.776 × 10+2
Rank54231
CEC30Best6.976 × 10+36.156 × 10+35.882 × 10+35.577 × 10+35.074 × 10+3
Mean3.282 × 10+52.679 × 10+52.896 × 10+41.207 × 10+41.042 × 10+4
Std1.449 × 10+61.213 × 10+67.614 × 10+45.092 × 10+34.039 × 10+3
Rank54321
Mean Rank | 3.93 | 4.17 | 2.83 | 2.34 | 1.72
Final Ranking | 4 | 5 | 3 | 2 | 1
Table 7. Parameters setting.
Algorithms | Parameters | Setting Value
SSA | Leader position update probability | c3 = 0.5
HHO | Sensitive parameter | α = 5
HHO | β | 1.5
CPO | Number of cycles | T = 2
CPO | Convergence rate | α = 0.2
CPO | Trade-off factor | Tf = 0.8
Table 8. Mountain model parameters and threat model parameters.
Scene | Mountain Center (xi, yi) | Mountain Slope (ai, bi) | Mountain Height hi | Threat Center (xk, yk)
1 | (27, 26); (19, 58); (55, 59); (60, 33); (46, 78); (79, 55) | (9, 9); (8, 8); (8, 8); (9, 9); (8, 8); (8, 8) | 1.7; 2; 1.7; 1.6; 1.8; 1.7 | (45, 41); (75, 71)
2 | (21, 23); (19, 41); (39, 38); (48, 54); (43, 21); (46, 78); (77, 49); (74, 79); (71, 24) | (5, 5); (6, 6); (6, 6); (6, 6); (7, 7); (7, 7); (7, 7); (6, 6); (7, 7) | 1.6; 1.8; 2; 1.7; 1.5; 1.8; 1.5; 1.4; 1.6 | (58, 30); (38, 59); (63, 65)
3 | (19, 21); (19, 41); (30, 85); (42, 39); (60, 28); (52, 52); (59, 14); (57, 68); (45, 81); (80, 19); (81, 66); (15, 75) | (5, 5); (6, 6); (5, 5); (5, 5); (6, 6); (5, 5); (6, 6); (6, 6); (5, 5); (7, 7); (6, 6); (6, 6) | 1.6; 1.8; 1.7; 2; 1.6; 1.7; 1.5; 1.7; 1.8; 1.5; 1.4; 1.6 | (60, 30); (45, 75); (20, 40); (80, 70)
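Table 8 lists, for each scene, the peak centers (xi, yi), slope parameters (ai, bi), and heights hi of the simulated mountains, together with the threat-source centers. A common way to turn such parameters into a 3D benchmark terrain is a superposition of exponential peaks; the sketch below uses that widely adopted formulation with the Scene 1 values purely as an illustration. The exact terrain equation used in this paper is given in its environment-modeling section and may differ in scaling, so this block should be read as an assumed, representative construction rather than the authors' implementation.

```python
import numpy as np

# Scene 1 parameters from Table 8: peak centers, slope parameters, and heights.
centers = [(27, 26), (19, 58), (55, 59), (60, 33), (46, 78), (79, 55)]
slopes  = [(9, 9), (8, 8), (8, 8), (9, 9), (8, 8), (8, 8)]
heights = [1.7, 2.0, 1.7, 1.6, 1.8, 1.7]

def terrain_height(x, y):
    """Assumed terrain model: sum of exponential peaks.

    z(x, y) = sum_i h_i * exp(-((x - x_i) / a_i)**2 - ((y - y_i) / b_i)**2)
    """
    z = np.zeros_like(np.asarray(x, dtype=float))
    for (xi, yi), (ai, bi), hi in zip(centers, slopes, heights):
        z += hi * np.exp(-((x - xi) / ai) ** 2 - ((y - yi) / bi) ** 2)
    return z

# Evaluate the surface on a 100 x 100 grid covering the planning area.
xx, yy = np.meshgrid(np.linspace(0, 100, 100), np.linspace(0, 100, 100))
zz = terrain_height(xx, yy)
print(zz.shape, zz.max())
```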
Table 9. Path planning results.
Scene | Index | PSO | SSA | HHO | DBO | CPO | HBA | SaCHBA_PDN | LRMHBA
1 | Best | 44.470 | 47.065 | 48.700 | 44.665 | 45.539 | 42.052 | 42.209 | 41.890
1 | Mean | 49.251 | 48.614 | 2372.037 | 52.713 | 47.734 | 49.933 | 46.382 | 43.630
1 | Std | 2.072 | 0.980 | 4280.114 | 6.871 | 1.252 | 6.007 | 1.862 | 2.069
1 | Friedman | 6 | 4 | 8 | 7 | 3 | 5 | 2 | 1
2 | Best | 42.954 | 45.692 | 48.199 | 44.742 | 48.211 | 42.542 | 44.494 | 42.644
2 | Mean | 377.820 | 47.287 | 5026.148 | 714.320 | 53.129 | 47.192 | 47.855 | 45.446
2 | Std | 1817.344 | 1.263 | 5058.883 | 2524.131 | 2.277 | 3.568 | 3.016 | 2.327
2 | Friedman | 2 | 4 | 8 | 6 | 7 | 3 | 5 | 1
3 | Best | 41.679 | 50.702 | 51.880 | 42.985 | 54.234 | 45.592 | 39.230 | 45.982
3 | Mean | 721.730 | 52.307 | 7016.976 | 4695.648 | 65.788 | 52.980 | 56.384 | 48.253
3 | Std | 2522.122 | 1.213 | 4634.542 | 5046.594 | 7.544 | 6.130 | 6.852 | 1.587
3 | Friedman | 5 | 2 | 8 | 6 | 7 | 3 | 4 | 1
Mean Rank | 4.33 | 3.33 | 8 | 6.33 | 5.67 | 3.67 | 3.67 | 1
Ranking | 5 | 2 | 7 | 6 | 5 | 3 | 3 | 1
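The "Mean Rank" row in Table 9 is the average of each algorithm's per-scene Friedman ranks; the short sketch below reproduces that arithmetic directly from the three Friedman rows of the table (the variable names are ours, the numbers are Table 9's).

```python
# Per-scene Friedman ranks from Table 9 (order: PSO, SSA, HHO, DBO, CPO, HBA, SaCHBA_PDN, LRMHBA).
algorithms = ["PSO", "SSA", "HHO", "DBO", "CPO", "HBA", "SaCHBA_PDN", "LRMHBA"]
friedman_ranks = {
    "Scene 1": [6, 4, 8, 7, 3, 5, 2, 1],
    "Scene 2": [2, 4, 8, 6, 7, 3, 5, 1],
    "Scene 3": [5, 2, 8, 6, 7, 3, 4, 1],
}

# Mean rank over the three scenarios, e.g. PSO: (6 + 2 + 5) / 3 = 4.33.
mean_ranks = [
    sum(scene[i] for scene in friedman_ranks.values()) / len(friedman_ranks)
    for i in range(len(algorithms))
]
for name, rank in zip(algorithms, mean_ranks):
    print(f"{name:>11s}: {rank:.2f}")
# LRMHBA averages 1.00, the best overall rank, matching the final rows of Table 9.
```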