Article

Solving UAV 3D Path Planning Based on the Improved Lemur Optimizer Algorithm

Air Traffic Management Institute, Civil Aviation Flight University of China, Deyang 618307, China
* Authors to whom correspondence should be addressed.
Biomimetics 2024, 9(11), 654; https://doi.org/10.3390/biomimetics9110654
Submission received: 14 September 2024 / Revised: 17 October 2024 / Accepted: 21 October 2024 / Published: 25 October 2024

Abstract

This paper proposes an Improved Lemur Optimization algorithm (ILO), which combines the advantages of the Spider Wasp Optimization algorithm, Simulated Annealing algorithm, and Lemur Optimization algorithm. Through the use of an adaptive nonlinear decrement model, adaptive learning factors, and updated jump rates, the algorithm enhances its global exploration and local exploitation capabilities. A Gaussian function model is used to simulate the mountain environment, and a mathematical model for UAV flight is established based on constraints and objective functions. The fitness function is employed to determine the minimum cost for avoiding obstacles in a designated airspace, and cubic spline interpolation is used to smooth the flight path. The Improved Lemur Optimization algorithm was tested using the CEC2017 benchmark set, assessing its search capability, convergence speed, and accuracy. The simulation results show that ILO generates high-quality, smooth paths with fewer iterations, overcoming the issues of premature convergence and insufficient local search ability in traditional genetic algorithms. It adapts to complex terrain, providing an efficient and reliable solution.

1. Introduction

In recent years, unmanned aerial vehicles (UAVs) have demonstrated significant potential across various fields, including agriculture, forestry, high-altitude operations, and wildfire monitoring, owing to their remarkable flexibility and maneuverability [1,2,3,4]. Path planning, as a core technology, plays a pivotal role in determining the effectiveness of UAVs in these applications. However, traditional heuristic methods for path planning in complex three-dimensional environments often exhibit slow performance and lack precision, making it challenging to generate rapid and accurate collision-free paths. Consequently, intelligent optimization-based path planning algorithms have garnered increasing attention for their ability to address these challenges. Numerous intelligent optimization algorithms have been applied to UAV path planning. For instance, the Simulated Annealing algorithm [5] utilizes a simulated annealing probability to determine whether to accept a solution, thereby avoiding entrapment in local optima. However, it is inherently unsuitable for high-dimensional, complex problems. With identical parameter settings, the solutions it generates can vary with each run, complicating the consistent guarantee of finding the global optimum. Next, we consider physics-based algorithms. The Kepler Optimization algorithm (KOA) [6] has yielded promising results in two specific applications; however, its performance is hindered by delayed convergence, which is a significant drawback. Conversely, the electromagnetism-like mechanism (EM) [7] exhibits several limitations, including a tendency to become trapped in local optima, low search efficiency, sensitivity to parameter changes, restricted local search capabilities, and high computational complexity. These issues are further exacerbated in complex, high-dimensional problems, where the EM algorithm may demonstrate less stability than other methods, often necessitating enhancements or hybridization with alternative approaches. The primary weaknesses of the Multi-Verse Optimizer (MVO) [8] include its propensity to become trapped in local optima, inefficient search capabilities, and limited convergence accuracy. Similarly, the Sine Cosine algorithm (SCA) [9] struggles with periodicity in its search process, making it vulnerable to local optima and reducing its ability to adapt dynamically during the search. In general, physics-based algorithms often face high-dimensional constraints in UAV path planning, leading to their susceptibility to local optima. As noted in the literature [10], the Particle Swarm Optimization (PSO) [11] algorithm, when combined with higher-order Bezier curves, produces smoother UAV paths that align better with the UAV kinematic model. However, PSO's inherent tendency to prematurely converge to local optima remains unresolved. Among population-based algorithms, several noteworthy methods have emerged in recent years, including Spider Wasp Optimization (SWO) [12], the Crayfish Optimization algorithm (COA) [13], Grey Wolf Optimization (GWO) [14], Glowworm Swarm Optimization [15], and the Whale Optimization algorithm (WOA) [16]. Although SWO has some advantages in UAV path planning, such as efficient global search capability and strong convergence performance, it simulates relatively complex biological behaviors, leading to high computational complexity.
When dealing with high-dimensional path planning problems, it may consume a large amount of computational resources, which is unfavorable for resource-constrained embedded UAV systems. Despite its overall performance, the Crayfish Algorithm lacks strong exploration capabilities, often leading to local optima in the later optimization stages. Spider Wasp Optimization exhibits strong exploration, exploitation, local optima avoidance, and convergence speed across multiple benchmark tests; however, its sensitivity to parameter settings limits broader applications. In GWO, search efficiency and exploitation heavily rely on parameter α, and improper tuning of this parameter can diminish search performance. Additionally, GWO's reliance on the alpha wolf's leadership often reduces population diversity, leading to premature convergence and weaker global search capabilities. The Glowworm Swarm Optimization primarily depends on local interactions between individuals, and as the population approaches the global optimum, these interactions weaken, resulting in poor local search performance and difficulty in finding more precise solutions. Similarly, while the Whale Optimization Algorithm demonstrates decent global search abilities, it struggles during the early stages due to a lack of variance among individuals, hindering exploration. Moreover, WOA's "encircling" behavior frequently leads to premature convergence on local optima, particularly in multi-modal optimization problems, where individuals tend to cluster, limiting the exploration of other regions within the solution space. A more recent optimization method, the Gold Rush Optimizer (GRO) [17], has garnered attention; however, its application and validation remain limited. Inspired by the behavior of gold seekers, the algorithm's robustness and efficacy across various complex problems have yet to be fully proven. Simulations of the GRO algorithm applied to UAV path planning revealed that, like other swarm intelligence algorithms, when individual behaviors become overly homogeneous, premature convergence to local optima restricts effective global exploration. To address the challenges of UAV path planning in multi-peak, complex terrains, this paper adopts the Lemur Optimization algorithm (LO) [18]. The LO's efficiency was validated using the CEC 2011 benchmark set [19], and its competitiveness in structural engineering problems, such as Transmission Network Expansion Planning (TNEP) and the optimal control of dual-function catalyst mixing, has been demonstrated effectively. With increasing interest in complex 3D UAV path planning, the LO algorithm was applied to this domain. However, simulation results indicated areas for improvement, specifically in search efficiency, local optimization, iteration speed, and total iteration count. In response, this study presents an Improved Lemur Optimization algorithm (ILO), which integrates elements of the Spider Wasp Optimization algorithm, Simulated Annealing, and LO algorithms. The ILO enhances path optimization by adjusting the jump rate and incorporates an adaptive nonlinear decrement model for improved global exploration in early stages and enhanced local exploitation in later stages. Additionally, an adaptive learning factor is employed to dynamically adjust step size and search direction, boosting exploration and accelerating convergence.
Testing against the CEC 2017 benchmark set demonstrated that the ILO significantly outperforms the original LO algorithm, delivering greater robustness, improved iteration stability, and fewer required iterations.

2. Lemurs Optimizer (LO)

The Lemurs Optimizer (LO) is a metaheuristic optimization algorithm designed to solve global optimization problems. The LO algorithm mimics the behavior of lemurs as they search for food or resources in their environment to enhance search efficiency and optimization performance. It updates the positions of individuals in each generation, using jumping rates and the nearest optimal solution to balance global exploration with local exploitation.
The Lemur Optimizer (LO) algorithm follows several key steps to solve global optimization problems, as illustrated in Figure 1. First, a set of solutions (lemurs) is randomly initialized within the given boundaries. Each lemur’s fitness is evaluated using the objective function. The algorithm then enters a loop for a specified number of iterations. In each iteration, a “jumping rate” is calculated, which gradually decreases over time to balance exploration and exploitation. Based on this jumping rate, each lemur either updates its position by moving toward a neighboring solution or by approaching the best solution found so far. This movement process is influenced by random factors to maintain diversity in the search. Each new solution is evaluated, and if it shows improvement, the lemur‘s position is updated. Throughout the iterations, the algorithm tracks the best solution, updating and displaying results accordingly. The goal is to efficiently explore the search space and converge to the global optimal solution by simulating the natural behavior of lemurs. The specific algorithm structure can be found in Algorithm 1.
The input parameters for the Lemurs Optimizer (LO) algorithm include the following key components: the population size or the number of lemurs, the maximum number of iterations to control the optimization process, the lower and upper bounds for each decision variable that define the search space, the problem’s dimensionality, and the objective function used to evaluate the fitness of each solution. The output of the algorithm includes the best fitness value found during the optimization process, the position vector of the best solution, and a record of the best fitness value at each iteration, providing insight into the algorithm’s convergence behavior.
The goal of the algorithm is to find the global optimal solution to a given problem by simulating the behavior of lemurs. Specifically, the objective function evaluates the position of each individual in every generation, and the algorithm optimizes the objective function value by adjusting these positions. The aim is to find a solution that results in either a minimum or maximum objective value, thereby solving complex global optimization problems effectively.
The set of lemurs is represented in a matrix since the LO algorithm is a population-based algorithm. To do this, the following procedures are carried out. Assuming that we have the population defined as the following matrix:
T = x 1 1 x 1 d x s 1 x s d
In this context, T represents the set of lemurs as a population matrix of size s × d, where d is the number of decision variables and s is the number of candidate solutions.
Typically, the decision variable j in solution i is randomly generated as follows:
x_i^j = xb^j + rand() \times (ub^j - xb^j), \quad i \in (1, 2, \ldots, n),\; j \in (1, 2, \ldots, d) \qquad (2)
In Equation (2), the function rand() generates a uniform random number in [0, 1], and the lower and upper bounds of the variable j (j ∈ (1, 2, …, d)) are represented by [xb^j, ub^j].
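For illustration, Equation (2) can be realized as the following minimal Python/NumPy sketch; the function name, population size, and bound values shown here are illustrative and not taken from our implementation.

```python
import numpy as np

def init_population(n_solutions, dim, lb, ub, rng=None):
    """Randomly initialize the lemur population inside [lb, ub] (Equation (2)).

    lb and ub may be scalars or arrays of length `dim`.
    """
    rng = np.random.default_rng(rng)
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (dim,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (dim,))
    # x_i^j = lb_j + rand() * (ub_j - lb_j)
    return lb + rng.random((n_solutions, dim)) * (ub - lb)

# Example: 30 lemurs in a 50-dimensional space bounded by [-100, 100]
pop = init_population(30, 50, -100.0, 100.0, rng=0)
```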
Use Formula (4) to calculate the free risk rate (FRR) and update the globally optimal lemur. For each lemur (identified by index i), perform the following steps:
The lemur that has a lower fitness value tends to adjust its decision variables based on the lemur that has a higher fitness value. Lemurs are sorted by their fitness values in each iteration, with one chosen as the global best lemur (denoted as gbl) and one chosen as the best nearest lemur for each lemur (denoted as bnl). In summary, the concept of bnl involves population sorting and selection: In each iteration, the algorithm first sorts the population based on fitness values, evaluating and ranking each lemur from best to worst. Then, the nearest solution is selected: For each individual i, the nearest solution is identified based on its position in the sorted list. This nearest solution, found through the sorted indexes, is closer to the current individual in the search space. Next, the position of the current individual is updated: The individual adjusts its position by calculating the difference between its current solution and the nearest solution, with a random perturbation added to generate a new position. This can be seen as the lemur learning from a better nearby companion to improve its solution. Finally, the update mechanism: If the new solution is better than the current one, the individual's position is updated; otherwise, the original position is retained.
For each decision variable in lemur i (identified by index j), assign a random number from [0, 1] to rand and update the variable according to Formula (3):
x_i^j = \begin{cases} x_i^j + \mathrm{abs}(x_i^j - x_{bnl}^j) \times (rand - 0.5) \times 2, & rand < FRR \\ x_i^j + \mathrm{abs}(x_i^j - x_{gbl}^j) \times (rand - 0.5) \times 2, & rand \geq FRR \end{cases} \qquad (3)
If rand < FRR, decision variable j is updated based on the best nearest lemur (bnl); otherwise, it is updated based on the global best lemur (gbl). Finally, the globally optimal lemur is returned. In Formula (3), x_i^j represents the current value of decision variable j for lemur i, x_{bnl}^j represents the value of the j-th dimension for the best nearest lemur, and x_{gbl}^j represents the corresponding value of the global best lemur.
The free risk rate (FRR) [20] indicates the risk rate of all lemurs in the troop, and rand represents a random number in [0, 1]. The FRR is the main control coefficient of the LO algorithm, and its formula is given in Equation (4):
FRR = \mathrm{HighRiskRate} - \mathrm{CurrIter} \times \frac{\mathrm{HighRiskRate} - \mathrm{LowRiskRate}}{\mathrm{MaxIter}} \qquad (4)
where LowRiskRate and HighRiskRate represent constant pre-defined values, MaxIter is the maximum number of iterations, and CurrIter denotes the current iteration. The formula describes how the FRR gradually decreases from a high risk to a low risk. It follows a linear decay process, starting from HighRiskRate and gradually reducing to LowRiskRate as the number of iterations increases.
The term CurrIter × ((HighRiskRate − LowRiskRate) / MaxIter) represents the amount subtracted at the current iteration, ensuring that by the time the maximum iteration count is reached, the FRR value will have decreased to LowRiskRate.
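A small numerical sketch of Equation (4) is given below; the high and low risk rate values (0.9 and 0.1) are illustrative defaults rather than values prescribed by the algorithm.

```python
def free_risk_rate(curr_iter, max_iter, high_risk_rate=0.9, low_risk_rate=0.1):
    """Linearly decaying free risk rate of Equation (4)."""
    return high_risk_rate - curr_iter * (high_risk_rate - low_risk_rate) / max_iter

# With HighRiskRate = 0.9, LowRiskRate = 0.1 and MaxIter = 500:
# iteration 0 -> 0.9, iteration 250 -> 0.5, iteration 500 -> 0.1
```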
Algorithm 1. The LO algorithm’s pseudocode
1: Set up the LO parameters (Number of iterations, Number of dimensions (Dim), Number of solutions, Lower Bound (LB), Upper Bound (UB), Low Risk Rate, High Risk Rate, Max iter).
2: Generate the Lemurs population.
3: While the current iteration does not equal the number of iterations do
4:  Evaluate the objective function for all Lemurs.
5:  Calculate the free risk rate (FRR) using Equation (4).
6:  Update the Global Best Lemur (gbl).
7:  For each lemur indexed by i do
8:   Update the Best Nearest Lemur (bnl).
9:   For each decision variable in Lemur i indexed by j do
10:    Set rand to a random number in [0, 1].
11:    If rand < FRR (the jumping rate) then
12:     Use Equation (3), case one, to update decision variable j.
13:    Else
14:     Use Equation (3), case two, to update decision variable j.
15:    end if
16:   end for
17:  end for
18: end while
19: Return the Global Best Lemur.
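The core position-update sweep of Algorithm 1 (Equation (3)) can be sketched as follows. Treating the solution one rank better in the sorted population as the best nearest lemur is one interpretation of the sorting-based selection described above, and the greedy accept/reject step is omitted for brevity.

```python
import numpy as np

def lo_update(pop, fitness, frr, rng):
    """One LO position-update sweep following Equation (3) (minimization assumed)."""
    order = np.argsort(fitness)              # indices sorted from best to worst
    gbl = pop[order[0]]                      # global best lemur
    new_pop = pop.copy()
    for rank, i in enumerate(order):
        # best nearest lemur: the neighbour one rank better (the best lemur keeps itself)
        bnl = pop[order[max(rank - 1, 0)]]
        for j in range(pop.shape[1]):
            r = rng.random()
            ref = bnl if r < frr else gbl
            new_pop[i, j] = pop[i, j] + abs(pop[i, j] - ref[j]) * (r - 0.5) * 2.0
    return new_pop
```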

3. Improved Lemur Optimizer (ILO)

3.1. Introduction of Updated Jump Rates

The first improvement modifies the jumping rate within the FRR so that it decreases as the number of iterations increases. The formula is as follows:
JR = \mathrm{initial\ jumping\ rate} \times \exp\!\left( \frac{iter}{\mathrm{Max\ iter}} \times \log\frac{\mathrm{jumping\ rate\ min}}{\mathrm{jumping\ rate\ max}} \right) \qquad (5)
where initial jumping rate is the starting value of JR, iter is the current iteration, and Max iter is the maximum number of iterations. The exponent (iter / Max iter) × log(jumping rate min / jumping rate max) controls the rate of decay; because jumping rate min < jumping rate max, the logarithm is negative, so JR gradually decreases from its initial value toward the minimum jumping rate. The purpose of this design is to allow larger jumps that explore the search space at the beginning of the optimization process, while gradually decreasing the jump rate in later stages for finer search and local refinement.
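A minimal sketch of the decay in Equation (5) is shown below; the jumping-rate constants are illustrative.

```python
import numpy as np

def jumping_rate(it, max_iter, jr_init=0.9, jr_min=0.1, jr_max=0.9):
    """Exponentially decaying jumping rate of Equation (5)."""
    return jr_init * np.exp((it / max_iter) * np.log(jr_min / jr_max))

# log(jr_min / jr_max) < 0, so JR falls from jr_init towards jr_init * (jr_min / jr_max).
# For example, with jr_init = jr_max = 0.9 and jr_min = 0.1: JR(0) = 0.9, JR(max_iter) = 0.1.
```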

3.2. Fusion Spider Wasp Optimizer

The adjustment of the jumping rate incorporates the behavior weight for hunting and nesting (TR).
Definition of TR: the behavior weight TR is a quadratically increasing function, whose value grows gradually as the number of iterations increases. Its mathematical expression is:
TR = \left( \frac{\mathrm{Current\ Iteration\ Number}}{\mathrm{Maximum\ Iteration\ Number}} \right)^2 \qquad (6)
Here, TR ranges between 0 and 1.
This behavior weight is a quadratic increasing function, gradually strengthening a certain behavior as the number of iterations increases, adapting to the need for different strategies during the convergence process.
A random decision is made on whether to use hunting and nesting behavior or mating behavior to update the population position using Formula (7).
Hunting and Nesting Behavior: Individuals explore and exploit the search space based on randomly generated parameters, using Levy flights and crossover strategies to update positions.
Mating Behavior: New solutions are generated through mating, and based on fitness, a decision is made whether to accept the new position.
Additionally, the population size is dynamically adjusted, decreasing as the number of iterations increases to enhance the algorithm’s convergence speed. This adjustment is applied to update the lemur population, where the reduction in size corresponds to a decrease in the number of lemurs. By progressively reducing the population throughout the iterations, the algorithm achieves a better balance between global exploration and local exploitation, ultimately improving overall optimization performance.
SW_i^{t+1} = \mathrm{Crossover}\!\left( SW_i^{t},\, SW_m^{t},\, CR \right) \qquad (7)
where Crossover denotes the uniform crossover operator applied between the solutions SW_i^t and SW_m^t, and CR is the crossover rate. SW_i^t and SW_m^t represent the two carriers of male and female spider wasps. The Levy flight mechanism [21] simulates large-step leaps in the search process, as given in Equation (8):
L = \frac{u}{\lvert v \rvert^{1/\beta}} \qquad (8)
where u and v are normally distributed random variables and β is a Levy distribution parameter.
By combining LO and SWO, this hybrid optimization approach possesses both global search capabilities (hunting and nesting behavior from SWO) and local exploitation abilities (jump adjustments based on the best solution and nearest solution from LO).
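The two SWO ingredients used here, the Levy step of Equation (8) and the uniform crossover of Equation (7), can be sketched as follows. The Levy step is drawn with Mantegna's method, a common way to realize Equation (8); the parameter β = 1.5 and the function names are illustrative.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=None):
    """Levy-distributed step (Equation (8)) drawn with Mantegna's method."""
    rng = np.random.default_rng(rng)
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)   # numerator: normal with Mantegna scale
    v = rng.normal(0.0, 1.0, dim)       # denominator: standard normal
    return u / np.abs(v) ** (1 / beta)

def uniform_crossover(parent_a, parent_b, cr, rng=None):
    """Uniform crossover of Equation (7): each gene is taken from parent_b with probability cr."""
    rng = np.random.default_rng(rng)
    mask = rng.random(parent_a.shape) < cr
    return np.where(mask, parent_b, parent_a)
```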

3.3. Adaptive Nonlinear Decreasing Model

3.3.1. Adaptive Jumping Rate

Dynamically adjust the jump rate based on the adaptive nonlinear decrement model [22], allowing it to decrease more rapidly in the later stages. The exponentially decreasing model from Equation (5) is optimized to derive Equation (9), which reduces the jumping rate and improves the local search in the later stages of the iteration.
JR = \mathrm{initial\ jumping\ rate} \times \exp\!\left( \frac{iter^2}{\mathrm{Max\ iter}^2} \times \log\frac{\mathrm{jumping\ rate\ min}}{\mathrm{jumping\ rate\ max}} \right) \qquad (9)

3.3.2. Adaptive CR

For the SWO part of the algorithm, the crossover rate CR [23] uses an adaptive nonlinear decreasing model, where the crossover probability decreases as the number of iterations increases, reducing the exploratory behavior of the population. The adaptive crossover probability mathematical formula is:
CR(t) = CR(0) \times \left( 1 - \frac{iter}{\mathrm{Max\ iter}} \right)^2 \qquad (10)
where iter is the current iteration number, Max iter is the maximum iteration number, and CR(0) is the initial crossover rate.
The trend depicted in Figure 2 is that the crossover probability gradually decreases with the number of iterations, from an initially higher value (approximately 0.2) to a lower value (close to 0). This pattern of change allows the algorithm to explore the solution space by maintaining a greater degree of randomness in the early phases of the algorithm while decreasing randomness in the later phases to allow for more refined development of better solutions.

3.3.3. Adaptive TR

The setting and adjustment of the adaptive behavior weights help optimize the algorithm’s performance throughout the search process, ensuring that it efficiently finds the optimal solution and adjusts the search strategy appropriately at different stages. The function is formulated as:
TR(t) = 0.3 + 0.7 \times \left( \frac{t}{\mathrm{Max\ iter}} \right)^2 \qquad (11)
where t is the number of contemporary iterations and Max iter is the maximum number of iterations. Figure 3 shows a graph of the effect of the adaptive behavior weights TR [24] in iterations, for 100 iterations. The graph illustrates that as the number of iterations increases, the algorithm gradually reduces stochastic exploration during hunting and nesting behavior, and shifts to a more focused local exploitation in order to improve the quality of the solution and eventually converge to the optimal solution.
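The three adaptive schedules of Equations (9)-(11) can be summarized in a short sketch; the initial crossover rate of 0.2 matches the approximate starting value shown in Figure 2, while the remaining constants are illustrative.

```python
import numpy as np

def adaptive_jr(it, max_iter, jr_init=0.9, jr_min=0.1, jr_max=0.9):
    """Adaptive jumping rate of Equation (9): the quadratic exponent speeds up late-stage decay."""
    return jr_init * np.exp((it ** 2 / max_iter ** 2) * np.log(jr_min / jr_max))

def adaptive_cr(it, max_iter, cr0=0.2):
    """Adaptive crossover rate of Equation (10), decreasing from cr0 towards 0."""
    return cr0 * (1.0 - it / max_iter) ** 2

def adaptive_tr(it, max_iter):
    """Adaptive behavior weight of Equation (11), growing from 0.3 towards 1.0."""
    return 0.3 + 0.7 * (it / max_iter) ** 2
```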

3.4. Combining Improved Simulated Annealing Algorithms

Combined with the improved Simulated Annealing algorithm, the position update is first performed for each individual i. During the position update process, the position update formula for each individual is based on the adaptive jump rate and crossover behavior, which can be expressed as:
x_i(t+1) = \begin{cases} x_i(t) + \lvert x_i(t) - x_{near}(t) \rvert \times (2\,rand - 1), & rand < \mathrm{jumping\ rate} \\ x_i(t) + \lvert x_i(t) - x_{best}(t) \rvert \times (2\,rand - 1), & rand \geq \mathrm{jumping\ rate} \end{cases} \qquad (12)
In this context, x_i(t) represents the position vector of the i-th individual at the t-th iteration, x_near(t) represents the position vector of the individual closest to individual i, and x_best(t) represents the best solution in the current population. rand is a random number in [0, 1]. For each new solution x_i(t+1), if its objective function value satisfies f(x_i(t+1)) < f(x_i(t)), the new position is accepted. Otherwise, the probability of accepting the new position follows the simulated annealing acceptance criterion, given by the following probability formula:
P = \exp\!\left( -\frac{f(x_i(t+1)) - f(x_i(t))}{T} \right) \qquad (13)
In this context, T represents the current temperature. The acceptance criterion is as follows: if P is greater than a random number rand in [0, 1], the new solution is accepted; otherwise, the old solution is retained. Finally, the temperature is updated as T(t+1) = T_0 α^t, where T_0 is the initial temperature and α is the cooling factor, typically 0 < α < 1. Figure 4 shows the temperature decay curve after the temperature update. In this curve, the temperature represents the algorithm's probability of accepting inferior solutions. A higher temperature means the algorithm is more likely to accept worse solutions to avoid becoming stuck in local optima. As the temperature decreases, the algorithm becomes increasingly "greedy", focusing more on local exploitation and reducing the acceptance of worse solutions.
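A minimal sketch of the acceptance rule in Equation (13) and of the cooling schedule is given below; the initial temperature and cooling factor are illustrative values.

```python
import numpy as np

def sa_accept(f_new, f_old, temperature, rng):
    """Metropolis-style acceptance of Equation (13) for a minimization problem."""
    if f_new < f_old:
        return True                        # better solutions are always accepted
    p = np.exp(-(f_new - f_old) / temperature)
    return rng.random() < p                # worse solutions accepted with probability p

def temperature(t, t0=100.0, alpha=0.95):
    """Exponential cooling schedule T(t+1) = T0 * alpha^t."""
    return t0 * alpha ** t
```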

3.5. Adding Adaptive Learning Factors

Define the learning factor parameters:
Set the initial learning factor, minimum and maximum learning factors, and their decay rate as shown in Algorithm 2.
Dynamic adjustment of learning factors [25]:
The value of the learning factor is dynamically adjusted according to the number of iterations to influence the search direction and step size update.
Applying learning factors to algorithms:
Using learning factors to adjust the search direction and step size to enhance search capability and accelerate convergence. The detailed ILO algorithm process is shown in Figure 5.
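The text specifies an initial learning factor, minimum and maximum values, and a decay rate, but not the exact update rule; the sketch below therefore assumes a simple exponential decay clamped to the given bounds, purely for illustration.

```python
def learning_factor(it, lf_init=1.0, lf_min=0.2, lf_max=1.0, decay=0.98):
    """Illustrative adaptive learning factor: exponential decay clamped to [lf_min, lf_max].

    The exact schedule is not given in the text, so this update rule is an assumption.
    """
    lf = lf_init * decay ** it
    return min(max(lf, lf_min), lf_max)

# The factor can then scale the step size, e.g. step = learning_factor(t) * raw_step
```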
Algorithm 2. The ILO algorithm’s pseudocode
1: Set up the ILO parameters (Number of iterations, Number of dimensions (Dim), Number of solutions, initial jumping rate, jumping rate min, jumping rate max, TR, Cr, T0, α, initial learning factor, min learning factor, max learning factor, learning factor decay).
2: Initialize population positions.
3: Evaluate initial population fitness.
4: Enter the main loop (iteration t from 1 to Max iter).
5:  Adjust dynamic parameters.
6:  Sort the population and find the current best solution.
7:  For each agent i:
8:   Select the nearest optimal solution or the global best solution to update the position.
9:   Update each decision variable j's position based on the jumping rate:
10:    If random value rand < jumping rate:
11:     Update the position based on the nearest best lemur.
12:    Else:
13:     Update the position based on the global best lemur.
14:   Compute the new objective function value and fitness.
15:   If the new position is better:
16:    Update the agent's position and fitness.
17:    If the new solution is better than the global best, update the global best solution.
18:   Otherwise:
19:    Apply simulated annealing and accept the worse solution with a certain probability.
20:  If rand < TR (hunting/nesting behavior):
21:   For each agent i:
22:    Adjust the position using a Levy flight.
23:    Compute the new objective function value.
24:    If the new position is better, update the agent's position and the global best solution.
25:    Otherwise, apply simulated annealing to accept the worse solution with a certain probability.
26:  Else:
27:   For each agent i:
28:    Update the position based on the crossover probability Cr.
29:    Compute the new objective function value.
30:    If the new position is better, update the agent's position and the global best solution.
31:    Otherwise, apply simulated annealing to accept the worse solution with a certain probability.
32:  Dynamically adjust the population size.
33:  Record the best solution of the current iteration.
34: End

4. Simulation Test and Result Analysis of Improved Lemur Optimizer

To validate the performance of the Improved Lemur Optimization algorithm (ILO), it was compared with five other algorithms: Particle Swarm Optimization (PSO), Lemurs Optimizer (LO), Spider Wasp Optimizer (SWO), Kepler Optimization algorithm (KOA), and Gold Rush Optimizer (GRO). The following describes the test functions, the comparison algorithms, the parameter configurations, and the analysis of the experimental results. The experiment was conducted on a Thunderobot laptop from Qingdao, China, equipped with a 12th Gen Intel(R) Core i5-12450H processor with a base frequency of 2.50 GHz and 256 GB of memory, and MATLAB software version R2023b installed.

4.1. Comparison of Test Function Results

In this study, we assessed the performance of the ILO algorithm using 30 test functions from the CEC2017 benchmark suite [26], comparing it against five benchmark algorithms: GRO, LO, PSO, SWO, and KOA (refer to Table 1 for details). The parameter settings for these algorithms are provided in Table 2. To ensure experimental fairness, all algorithms were initialized with a uniform population size of 30 and a maximum iteration limit of 500, with evaluations conducted for both 50- and 100-dimensional problems. Performance metrics, including the mean, maximum, minimum, standard deviation, and rank-sum test results, were derived from 100 independent runs per function. After 500 iterations, the ILO algorithm demonstrated its superior global search efficiency, consistently achieving the best results across all evaluation criteria, including mean, maximum, minimum, standard deviation, and rank-sum tests. In Table 1, the last line, "Search Range: [−100, 100]^D", indicates that the search range of all test functions is the D-dimensional space [−100, 100]^D. The numbers in the last column represent the benchmark optimal values Fi* for the CEC'17 test functions. These values indicate the function's value, Fi(x*), at the global optimal solution x*. For example, for the first function (Shifted and Rotated Bent Cigar Function), the optimal function value Fi(x*) is 100 when the global optimal solution is found. Similarly, for other functions, the numbers represent the best possible value of the function at its global optimum. These benchmark values are used to evaluate the performance of optimization algorithms, with the goal being to find solutions that are as close as possible to these optimal values.

4.2. Results and Analysis of the CEC2017 Benchmark Functions

CEC2017 comprises a set of 30 single-objective benchmark functions, encompassing a diverse range of characteristics: F1 and F2 are unimodal functions, F3 to F9 are basic multimodal functions, F10 to F19 are hybrid functions, and F20 to F30 are composite functions. As illustrated in Table A1 and Table A2 in Appendix A and Appendix B, the results of the ILO algorithm after being executed with dimensions of 50 and 100 reveal the following key performance advantages of the ILO algorithm:
In the 50-dimensional case, although PSO slightly outperforms ILO on F10 and GRO shows a minor advantage on F17 and F20, the ILO algorithm clearly outperforms the other algorithms on the remaining functions. In the 100-dimensional scenario, while SWO marginally surpasses ILO on F20, ILO exhibits substantial dominance across all other functions.
In both 50- and 100-dimensional cases, the ILO algorithm consistently performs exceptionally well on unimodal functions F1 and F2, which feature a single global optimum with no local traps. This simplifies the search process by eliminating the need to navigate through complex local optima. Even with higher dimensionality, the ILO algorithm demonstrates remarkable adaptability, striking an effective balance between exploration and exploitation. This balance ensures efficient resource utilization, rapid convergence to the optimal solution, and minimal redundant exploration.
On the multimodal functions F3 to F9, the ILO algorithm exhibits strong global exploration capabilities in both 50- and 100-dimensional cases. It effectively avoids getting trapped in local optima, progressively directing its focus toward the optimal solution. As dimensionality increases, the ILO algorithm maintains its scalability and robustness, consistently identifying global or near-optimal solutions in complex landscapes while preserving population diversity. This enables thorough exploration and a well-balanced transition from early-stage exploration to later-stage exploitation.
The ILO algorithm performs exceptionally well across most hybrid and composite functions (F10 to F30) in both 50- and 100-dimensional cases, especially in terms of maximum, minimum, mean, and standard deviation, highlighting its stability and robustness in managing high-dimensional, complex problems. Although PSO, GRO, and SWO show slight advantages on specific functions (F17, F20), indicating their effectiveness in structured problems, the ILO algorithm consistently demonstrates superior global optimization capabilities. Its scalability and ability to handle increasing dimensional complexity are particularly evident in high-dimensional scenarios.
A p-value below 0.05, as outlined in [28], signifies a statistically significant difference, confirming that one algorithm outperforms the other. Conversely, a p-value above 0.05 indicates no significant difference, with observed variations likely attributable to randomness. In this study, p-value analysis reinforces the ILO algorithm’s superiority, as demonstrated in Table A1 and Table A2 in Appendix A and Appendix B, where most test functions yield p-values below 0.05. This validates ILO’s exceptional performance in global search capability, convergence speed, and robustness, particularly in high-dimensional problems. The differences between the ILO algorithm and other algorithms are particularly noticeable in unimodal functions F1–F2 and multimodal functions F3–F9. This highlights the ILO algorithm’s superior performance in handling both simple and complex optimization problems, further underscoring its effectiveness in global search and rapid convergence.

4.3. Comparison of Convergence Curves and Box Plots for the 50-Dimensional CEC2017 Benchmark Functions

From Figure A1, Figure A2, Figure A3, Figure A4 and Figure A5 in Appendix C, it can be seen that F1 and F2 are unimodal functions, while F3 to F9 are simple multimodal functions. The convergence curves of the ILO test functions indicate that the convergence speed of the ILO method is generally faster than that of the PSO, GRO, LO, KOA, and SWO algorithms. This also demonstrates that the ILO algorithm has strong competitiveness in solving both simple and complex problems. From Figure 6, in the case of the F10 function, although the initial speed is slower, ILO shows strong optimization ability in the middle and later stages, and the final solution is superior to algorithms like PSO and SWO; it is also able to continuously find better solutions even in the later stages. Functions F20 to F30 are high-dimensional composite functions used to evaluate the global exploration capability of the algorithms. Because these functions contain numerous local optima, they are challenging to optimize. From Figure 7, in the F20 function, the ILO algorithm converges quickly in the early stages, and the convergence trend stabilizes in the middle and later stages, avoiding premature convergence. The quality of the final solution is significantly better than that of PSO, SWO, and KOA. Its ability to continuously optimize the solution over a long iteration process demonstrates its strong robustness. From the convergence of functions F11 to F30, it is clear that the ILO algorithm outperforms PSO, GRO, LO, KOA, and SWO in terms of convergence speed and accuracy in 50-dimensional problems. This shows that when optimizing high-dimensional functions, the other algorithms tend to fall into local optima, significantly reducing their convergence speed.
In Appendix D, the box plots from Figure A6, Figure A7, Figure A8, Figure A9 and Figure A10 represent a simplified version of the curves shown in Figure A1, Figure A2, Figure A3, Figure A4 and Figure A5. The numbers on the left side of the plots indicate the optimal fitness values. In the box plots, the upper dashed line represents the maximum value, and the lower dashed line represents the minimum value. The box itself represents the area where most of the fitness values are concentrated. Therefore, by observing the box plots, one can intuitively determine the optimal performance.
From the box plots in Figure A6, Figure A7, Figure A8, Figure A9 and Figure A10 [29], it can be seen that the ILO algorithm has a clear advantage over the PSO, GRO, LO, KOA, and SWO algorithms in terms of mean, maximum, and minimum values on almost all functions.

4.4. Comparison of Convergence Curves and Box Plots for the 100-Dimensional CEC2017 Benchmark Functions

From Figure A11, Figure A12, Figure A13, Figure A14 and Figure A15 in Appendix E, it can be observed that in cases ranging from 50 to 100 dimensions, the performance advantage of the ILO algorithm is significantly enhanced. Whether dealing with simple problems or complex composite problems, it achieves the optimal iterative results more stably and earlier than other algorithms. In Figure 8 for Function F20, the ILO algorithm demonstrates strong global search capabilities in the early stages and continues to improve the solution quality without stagnation in later stages. The final convergence result surpasses other algorithms like KOA, PSO, and SWO.
By comparing the convergence curves of other functions, it is evident that increasing the dimensions from 50 to 100 greatly expands the search space, making the solution space more complex. Many algorithms tend to experience performance degradation or instability in high-dimensional scenarios. However, the ILO algorithm consistently performs well in high dimensions, showcasing its excellent scalability and robustness. As the dimensionality increases, the difficulty of optimization problems usually rises significantly—a phenomenon known as the “curse of dimensionality.” The outstanding performance of the ILO algorithm indicates that it adapts well to this curse. By utilizing improved Simulated Annealing algorithms and Levy flight strategies, it maintains adaptability to high-dimensional problems and excels in addressing complex optimization challenges.
In Appendix F, the box plots in Figure A16, Figure A17, Figure A18, Figure A19 and Figure A20 represent a simplified version of the curve charts from Figure A11, Figure A12, Figure A13, Figure A14 and Figure A15. They serve as a concise representation of the data from the curve plots. The numbers on the left side of the box plots indicate the optimal fitness values. In each box plot, the highest point of the dashed line denotes the maximum value, while the lowest point of the dashed line indicates the minimum value. The box itself represents the area where most of the fitness values are concentrated. Thus, by examining the box plots, one can intuitively identify the optimal performance.
From the box plots in Figure A16, Figure A17, Figure A18, Figure A19 and Figure A20, it can be seen that the ILO algorithm demonstrates the greatest advantage in terms of mean, maximum, and minimum values compared to the PSO, GRO, LO, KOA, and SWO algorithms in the 100-dimensional case. This indicates that the ILO algorithm performs well across different dimensions, avoiding premature convergence and continuously improving solution quality. It is capable of handling complex high-dimensional problems, while other algorithms may perform poorly in high-dimensional cases. Additionally, the ILO algorithm produces more consistent results across multiple runs, avoiding extreme outcomes.
In summary, the ILO algorithm outperforms the other algorithms in both 50-dimensional and 100-dimensional scenarios. The reason for this is that the ILO algorithm initially uses an updated jump rate process to allow for larger jumps to explore the search space. In the later stages, the jump rate gradually decreases to enable finer search and local optimization. The algorithm also incorporates the Spider Wasp Optimization algorithm, which leverages three behaviors to dynamically reduce the population size during iterations, thereby improving convergence speed. The Levy flight mechanism facilitates large jumps in the search process. The adaptive nonlinear decreasing model further optimizes the jump rate and reduces the crossover probability in the integrated Spider Wasp Optimization component, improving optimization. Additionally, simulated annealing is applied when processing each solution: if the current solution has a poor objective value, the simulated annealing probability determines whether to accept the solution, helping to avoid local optima. The inclusion of an adaptive learning factor strengthens the algorithm's iteration and search capabilities.

5. Comparison of UAV Path Planning Applications

5.1. Environmental Modeling

In this study, an accurate 3D environment model is constructed for UAV trajectory planning simulation. The flight area is set as a 100 × 100 × 250 rectangular coordinate space, and the Gaussian function model [30,31] is used to simulate obstacles such as mountain peaks, which can accurately reproduce the terrain’s ups and downs and also flexibly adapt to different geographic environments. The height distribution of each peak can be expressed as a Gaussian function:
h_i(x, y) = H_i \exp\!\left( -\left( \frac{(x - x_i)^2}{2\sigma_{xi}^2} + \frac{(y - y_i)^2}{2\sigma_{yi}^2} \right) \right) \qquad (14)
In Equation (14), h_i(x, y) represents the height of the i-th peak at position (x, y), H_i is the height of the i-th peak, and (x_i, y_i) is the center position of the i-th peak. σ_xi and σ_yi are the standard deviations of the i-th peak in the x and y directions, respectively, determining the extent of the peak's spread. The total terrain height is the sum of the heights of all peaks:
z(x, y) = \sum_{i=1}^{N} h_i(x, y) = \sum_{i=1}^{N} H_i \exp\!\left( -\left( \frac{(x - x_i)^2}{2\sigma_{xi}^2} + \frac{(y - y_i)^2}{2\sigma_{yi}^2} \right) \right) \qquad (15)
In Equation (15), N is the total number of peaks, and z(x, y) represents the terrain height at the horizontal coordinates (x, y). To generate these peaks, we randomly determine the center position, height, and range of each peak:
1. (x_i, y_i) is the center position of the i-th peak within the map's boundaries.
2. H_i is the height of the i-th peak.
3. σ_xi and σ_yi control the slope by adjusting the rate of change of the peak along the x and y axes.
The specific Gaussian peak modeling is shown in Figure 9.
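A minimal sketch of the terrain construction in Equations (14) and (15) is given below; the number of peaks, height range, and spread range are illustrative values rather than the exact settings of Table 3.

```python
import numpy as np

def gaussian_terrain(n_peaks=6, size=100.0, h_range=(80.0, 200.0),
                     sigma_range=(5.0, 15.0), grid=100, rng=None):
    """Superimposed Gaussian peaks of Equations (14)-(15) on a square map (illustrative parameters)."""
    rng = np.random.default_rng(rng)
    xs = np.linspace(0.0, size, grid)
    X, Y = np.meshgrid(xs, xs)
    Z = np.zeros_like(X)
    for _ in range(n_peaks):
        cx, cy = rng.uniform(0.0, size, 2)        # peak centre (x_i, y_i)
        H = rng.uniform(*h_range)                  # peak height H_i
        sx, sy = rng.uniform(*sigma_range, 2)      # spreads sigma_xi, sigma_yi
        Z += H * np.exp(-(((X - cx) ** 2) / (2 * sx ** 2)
                          + ((Y - cy) ** 2) / (2 * sy ** 2)))
    return X, Y, Z
```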

5.2. Flight Path and Smoothing

A Cubic Spline Fitting algorithm is used to generate paths and plot them on a 3D surface. First, global variables are initialized, including the start point, end point, and surface coordinates. Next, the start point, end point, and control points are combined into a sequence to generate a smooth path using cubic spline interpolation, and the original path points are indexed and interpolated. The ‘spline’ function in MATLAB is used to generate the interpolated curves. Finally, ‘surf(X, Y, Z)’ is applied to plot the surface map, ‘shading flat’ removes the grid lines, and ‘colormap’ sets the color mapping. A smooth flight path curve is then drawn.

5.2.1. Cubic Spline Interpolation Path Smoothing Generation Algorithm

In the Improved Lemur Optimization algorithm, each path consists of a starting point, an endpoint, and path points. By finding the optimal positions of the path points, a smooth cubic spline curve is generated by interpolating between adjacent path points.
The flight path consists of n segments defined by the control points f(x_0), f(x_1), …, f(x_n), where the domain satisfies x_0 < x_1 < x_2 < ⋯ < x_n. The cubic spline interpolation represents the function between two adjacent path points, and this function, along with its first and second derivatives, is continuous within the interval. The n segmented cubic polynomials are represented as:
f_n(x) = a_n (x - x_n)^3 + b_n (x - x_n)^2 + c_n (x - x_n) + d_n \qquad (16)
The n segmented cubic polynomials require solving for 4n unknown parameters: a_n, b_n, c_n, d_n. Based on the continuity of the derivatives and the interpolation conditions [32], 4n − 2 equations can be obtained:
f_n(x_n) = f(x_n), \quad f_n(x_{n+1}) = f(x_{n+1}), \quad f_n'(x_{n+1}) = f_{n+1}'(x_{n+1}), \quad f_n''(x_{n+1}) = f_{n+1}''(x_{n+1}) \qquad (17)
The remaining two conditions are determined by the starting point x_0 and the target point x_n. After the UAV completes the path planning, a continuous and smooth cubic spline curve will be generated. The optimization effect of cubic spline interpolation is shown in Figure 10.
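Section 5.2 uses MATLAB's spline function; for illustration, an equivalent sketch with SciPy's CubicSpline, parameterized by the waypoint index as described above, is given below.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def smooth_path(waypoints, samples=200):
    """Interpolate a 3D waypoint sequence (start, control points, end) with cubic splines.

    `waypoints` is an (m, 3) array; the curve is parameterized by the waypoint index,
    mirroring the index-based interpolation described in the text.
    """
    waypoints = np.asarray(waypoints, dtype=float)
    t = np.arange(len(waypoints))                 # parameter: waypoint index
    spline = CubicSpline(t, waypoints, axis=0)    # one spline per coordinate
    tt = np.linspace(t[0], t[-1], samples)
    return spline(tt)                             # (samples, 3) smooth path
```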
Based on Figure 11, the difference between the paths with and without the cubic spline interpolation in the LO algorithm is quite noticeable:
Without cubic spline interpolation (left): The path appears more jagged and less smooth, taking abrupt changes in direction as it navigates the surface. This indicates a direct approach without any smoothing, leading to a less efficient or optimized path.
With cubic spline interpolation (right): The path is much smoother, showing a more gradual and continuous curve. This smoothing effect allows for a more natural and efficient transition along the surface, indicating an improvement in the trajectory planning due to the interpolation.

5.2.2. Restrictive Condition

In order for the UAV to fly in the specified airspace, in each iteration, the position update of the UAV needs to satisfy two conditions:
(1) To ensure that the flight path stays within the specified airspace, the path points must satisfy the following boundary constraints:
\begin{cases} 0 \leq x_i \leq x_{\max} \\ 0 \leq y_i \leq y_{\max} \\ 0 \leq z_i \leq z_{\max} \end{cases}, \quad i = 1, 2, \ldots, n \qquad (18)
(2) The fitness function [33] gives the minimum cost value of a flight in the specified airspace that avoids obstacles; it is derived from the objective function and has the following expression:
fitness = \min\left( V_c + T_c + E_c \right) \qquad (19)

5.2.3. Objective Function

The objective function of a UAV flight path is composed of three key elements: total flight distance, obstacle avoidance cost, and the constraint of staying within a specified boundary. These three elements together determine the optimal flight strategy of the UAV. Specifically, the objective function [34,35] considers the following aspects:
fitness = V_c + T_c + E_c \qquad (20)
In Formula (20), V_c represents the total flight cost of the UAV, T_c represents the cost of the UAV flying around obstacles, and E_c represents the cost of the UAV flying within the specified boundaries.
The flight cost V_c primarily considers the total flight distance of the UAV from the starting point to the endpoint, which is the sum of the lengths of the individual segments L_i. If the total flight path consists of n segments, the total flight cost can be expressed as:
V_c = \sum_{i=1}^{n-1} L_i, \qquad L_i = \sqrt{ (x_{i+1} - x_i)^2 + (y_{i+1} - y_i)^2 + (z_{i+1} - z_i)^2 } \qquad (21)
The terrain cost T_c is used to ensure that the UAV's flight path avoids obstacles by controlling the value of T_c. When the altitude z_i is higher than the terrain obstacle height Z(x_i, y_i), T_{ci} = 0; when the altitude z_i is lower than the obstacle height Z(x_i, y_i), T_{ci} = ∞. Summing these terms ensures that the UAV's flight route avoids obstacles such as peaks. The expression is as follows:
T_c = \sum_{i=1}^{n} T_{ci}, \qquad T_{ci} = \begin{cases} 0, & z_i > Z(x_i, y_i) \\ \infty, & \text{otherwise} \end{cases} \qquad (22)
The boundary cost E_c ensures that the UAV remains within the specified airspace, controlled by adjusting the value of E_c. When the UAV is within the airspace, E_{ci} = 0; when it is outside the airspace, E_{ci} = ∞. Summing these terms ensures that the UAV stays within the designated airspace. The expression is as follows:
E_c = \sum_{i=1}^{n} E_{ci}, \qquad E_{ci} = \begin{cases} 0, & x_i \in [0, x_{\max}],\; y_i \in [0, y_{\max}],\; z_i \in [0, z_{\max}] \\ \infty, & \text{otherwise} \end{cases} \qquad (23)
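Putting Equations (20)-(23) together, the fitness evaluation can be sketched as follows; the airspace bounds follow the 100 × 100 × 250 space of Section 5.1, while the helper names are illustrative.

```python
import numpy as np

def path_cost(path, terrain_height, x_max=100.0, y_max=100.0, z_max=250.0):
    """Fitness of Equations (20)-(23): flight distance plus infinite penalty terms.

    `path` is an (n, 3) array of waypoints; `terrain_height(x, y)` returns Z(x, y).
    """
    path = np.asarray(path, dtype=float)
    # V_c: total Euclidean length of the path segments (Equation (21))
    v_c = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
    # T_c: infinite penalty if any waypoint dips below the terrain (Equation (22))
    ground = np.array([terrain_height(x, y) for x, y, _ in path])
    t_c = 0.0 if np.all(path[:, 2] > ground) else np.inf
    # E_c: infinite penalty if any waypoint leaves the allowed airspace (Equation (23))
    in_box = (np.all((path[:, 0] >= 0) & (path[:, 0] <= x_max))
              and np.all((path[:, 1] >= 0) & (path[:, 1] <= y_max))
              and np.all((path[:, 2] >= 0) & (path[:, 2] <= z_max)))
    e_c = 0.0 if in_box else np.inf
    return v_c + t_c + e_c
```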

6. Analysis of Simulation Results of ILO Algorithm and Other Intelligent Algorithms

To verify the effectiveness of the ILO algorithm in simulating a 3D UAV path over mountainous terrain, the relevant environmental parameters are set as shown in Table 3. The PSO, GRO, LO, KOA, SWO, and ILO algorithms are used to compare simulated paths on the map. To eliminate the effect of randomness, each algorithm is run independently 100 times on the map, and its average convergence speed and average fitness value are compared. The simulation results are shown in Table 4.
As seen in Table 4, the ILO (Improved Lemur Optimization) algorithm performs better in 3D UAV path planning, consistently finding the optimal path quickly and efficiently, regardless of the changing terrain.
In Figure 12, Figure 13 and Figure 14, the PSO algorithm’s path shows some smoothness but appears somewhat volatile, especially near the endpoint, where it may become unstable. The path tends to get stuck in local optima in certain regions, highlighting the local difficulties PSO might encounter in complex search spaces. The LO algorithm’s path is more winding, with noticeable turning points in some areas. Although it finds a reasonably good endpoint, its performance is slightly worse than the improved ILO, showing some local oscillations. In contrast, ILO’s path is relatively smooth, directly connecting the start and endpoint, which indicates that the ILO algorithm is more robust during the search process and better at avoiding local optima. ILO demonstrates more efficient convergence and superior global search capability when solving the path planning problem.
The SWO algorithm performs moderately, with no significant oscillations and a certain degree of smoothness, but its overall performance lags behind ILO. KOA also shows moderate performance, similar to SWO. Although it can find the target endpoint, its path is not as smooth as ILO or PSO. The GRO shows relatively good performance, but compared to ILO, its path experiences minor fluctuations and oscillations in the middle, indicating that it might become stuck in local optima in certain regions, causing the path to deviate slightly.
Compared to other algorithms, ILO converges faster, with the fitness value reaching a lower level. Around the 30th iteration, the fitness value stabilizes, indicating that the algorithm finds a good solution in a relatively short time.

7. Concluding Remarks

This paper introduces the Improved Lemur Optimization algorithm (ILO), which demonstrates exceptional performance in UAV path planning, primarily due to its multi-tiered optimization framework. In the initial phases, the ILO algorithm modifies the jump rate to enable broader exploration of the search space, progressively reducing this rate in later stages to facilitate more precise local optimization. Furthermore, the algorithm incorporates elements from the Spider Wasp Optimization Algorithm, utilizing three distinct behaviors to dynamically decrease population size, thereby accelerating convergence. The integration of the Levy flight mechanism allows for significant leaps in the search process, substantially enhancing global exploration capabilities. An adaptive nonlinear decreasing model is employed to further refine the jump rate and reduce crossover probability, thereby improving the efficiency of local searches. When combined with simulated annealing, the algorithm effectively circumvents entrapment in local optima, while the inclusion of adaptive learning factors bolsters its iterative and search capabilities. Ultimately, the ILO algorithm achieves rapid and precise path planning optimization through a synergistic blend of these multiple strategies. While the ILO approach results in superior UAV route planning quality, it does increase computational complexity. Future research could focus on enhancing the algorithm's performance and efficiency while preserving the streamlined nature of its mechanisms. The ILO method holds significant potential for future applications, including collaborative UAV path planning and dynamic collision avoidance.

Author Contributions

Writing—review and editing, H.L.; Writing—original draft, W.H.; Investigation, K.G.; Software, J.D.; Resources, L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Fundamental Research Funds for the Central Universities (No-PHD2023-035).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Comparison of the six algorithms in 50 dimensions.
Dim = 50
Function | Metric | GRO | LO | PSO | SWO | KOA | ILO
F1 | Max | 6.95E+09 | 3.08E+08 | 1.61E+10 | 9.78E+08 | 8.25E+07 | 6.32E+04
F1 | Mean | 3.37E+09 | 2.121E+08 | 4.41E+09 | 4.70E+08 | 2.21E+07 | 1.65E+04
F1 | Min | 1.52E+09 | 1.02E+08 | 2.86E+07 | 2.28E+08 | 2.20E+06 | 1.92E+03
F1 | p-value | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1
F1 | Std | 1.36E+09 | 6.25E+07 | 3.80E+09 | 1.95E+08 | 1.82E+07 | 1.46E+04
F2 | Max | 3.58E+54 | 5.45E+64 | 4.19E+55 | 2.68E+48 | 1.39E+45 | 3.22E+40
F2 | Mean | 1.40E+53 | 2.59E+63 | 1.54E+54 | 1.02E+47 | 9.03E+43 | 1.14E+39
F2 | Min | 4.97E+44 | 8.31E+48 | 4.29E+37 | 1.95E+39 | 1.62E+32 | 3.08E+30
F2 | p-value | 3.02E−11 | 3.02E−11 | 2.61E−10 | 4.50E−11 | 3.32E−06 | 1
F2 | Std | 6.56E+53 | 1.05E+64 | 7.65E+54 | 4.89E+47 | 3.18E+44 | 5.88E+39
F3 | Max | 2.10E+05 | 4.53E+05 | 4.63E+05 | 1.78E+05 | 2.86E+05 | 1.60E+05
F3 | Mean | 1.64E+05 | 3.21E+05 | 2.19E+05 | 1.17E+05 | 2.02E+05 | 1.03E+05
F3 | Min | 1.15E+05 | 2.40E+05 | 1.23E+05 | 8.02E+04 | 1.29E+05 | 5.18E+04
F3 | p-value | 3.96E−08 | 3.02E−11 | 8.10E−10 | 7.98E−02 | 8.99E−11 | 1
F3 | Std | 2.74E+04 | 5.31E+04 | 7.64E+04 | 2.14E+04 | 3.65E+04 | 3.32E+04
F4 | Max | 1.66E+03 | 8.44E+02 | 2.80E+04 | 8.47E+02 | 7.71E+02 | 6.60E+02
F4 | Mean | 1.05E+03 | 7.20E+02 | 9.16E+02 | 7.46E+02 | 6.70E+02 | 5.74E+02
F4 | Min | 8.41E+02 | 5.98E+02 | 5.88E+02 | 6.96E+02 | 5.52E+02 | 4.55E+02
F4 | p-value | 3.02E−11 | 1.46E−10 | 3.82E−10 | 3.02E−11 | 2.20E−07 | 1
F4 | Std | 1.73E+02 | 5.70E+01 | 4.74E+02 | 4.13E+01 | 6.30E+01 | 4.60E+01
F5 | Max | 8.45E+02 | 9.77E+02 | 9.27E+02 | 8.74E+02 | 9.91E+02 | 6.52E+02
F5 | Mean | 7.94E+02 | 8.97E+02 | 7.96E+02 | 7.52E+02 | 9.34E+02 | 5.91E+02
F5 | Min | 7.37E+02 | 7.98E+02 | 6.65E+02 | 6.90E+02 | 8.66E+02 | 5.57E+02
F5 | p-value | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1
F5 | Std | 2.73E+01 | 4.76E+01 | 6.40E+01 | 4.57E+01 | 2.98E+01 | 2.72E+01
F6 | Max | 6.29E+02 | 6.16E+02 | 6.53E+02 | 6.21E+02 | 6.26E+02 | 6.06E+02
F6 | Mean | 6.20E+02 | 6.09E+02 | 6.35E+02 | 6.16E+02 | 6.12E+02 | 6.04E+02
F6 | Min | 6.14E+02 | 6.05E+02 | 6.14E+02 | 6.11E+02 | 6.06E+02 | 6.01E+02
F6 | p-value | 3.02E−11 | 6.07E−11 | 3.02E−11 | 3.02E−11 | 4.08E−11 | 1
F6 | Std | 4.11E+00 | 2.58E+00 | 8.29E+00 | 2.78E+00 | 4.77E+00 | 1.17E+00
F7 | Max | 1.23E+03 | 1.34E+03 | 1.51E+03 | 1.29E+03 | 1.41E+03 | 1.01E+03
F7 | Mean | 1.10E+03 | 1.22E+03 | 1.19E+03 | 1.14E+03 | 1.27E+03 | 8.96E+02
F7 | Min | 9.52E+02 | 1.13E+03 | 1.00E+03 | 9.73E+02 | 1.16E+03 | 8.38E+02
F7 | p-value | 5.49E−11 | 3.02E−11 | 3.34E−11 | 3.69E−11 | 3.02E−11 | 1
F7 | Std | 6.11E+01 | 5.23E+01 | 1.10E+02 | 7.63E+01 | 4.85E+01 | 4.53E+01
F8 | Max | 1.21E+03 | 1.28E+03 | 1.18E+03 | 1.10E+03 | 1.32E+03 | 9.66E+02
F8 | Mean | 1.11E+03 | 1.18E+03 | 1.07E+03 | 1.04E+03 | 1.24E+03 | 8.98E+02
F8 | Min | 1.03E+03 | 1.08E+03 | 9.44E+02 | 9.72E+02 | 1.19E+03 | 8.50E+02
F8 | p-value | 3.02E−11 | 3.02E−11 | 3.34E−11 | 3.02E−11 | 3.02E−11 | 1
F8 | Std | 3.79E+01 | 4.82E+01 | 5.94E+01 | 4.07E+01 | 3.00E+01 | 2.63E+01
F9 | Max | 6.93E+03 | 1.47E+04 | 3.58E+04 | 1.93E+04 | 1.39E+04 | 3.73E+03
F9 | Mean | 4.75E+03 | 6.71E+03 | 1.87E+04 | 9.60E+03 | 6.86E+03 | 1.99E+03
F9 | Min | 2.66E+03 | 3.12E+03 | 4.85E+03 | 3.64E+03 | 2.80E+03 | 1.15E+03
F9 | p-value | 1.33E−10 | 3.69E−11 | 3.02E−11 | 3.34E−11 | 8.99E−11 | 1
F9 | Std | 1.23E+03 | 2.56E+03 | 7.37E+03 | 3.90E+03 | 2.73E+03 | 6.77E+02
F10 | Max | 1.23E+04 | 1.54E+04 | 9.65E+03 | 1.28E+04 | 1.48E+04 | 1.07E+04
F10 | Mean | 1.09E+04 | 1.41E+04 | 7.76E+03 | 9.24E+03 | 1.41E+04 | 8.64E+03
F10 | Min | 9.20E+03 | 1.24E+04 | 5.64E+03 | 6.85E+03 | 1.31E+04 | 7.02E+03
F10 | p-value | 1.17E−09 | 3.02E−11 | 3.18E−03 | 7.48E−02 | 3.02E−11 | 1
F10 | Std | 8.34E+02 | 7.24E+02 | 9.76E+02 | 1.36E+03 | 4.32E+02 | 9.89E+02
F11 | Max | 4.04E+03 | 1.13E+04 | 3.20E+03 | 2.27E+03 | 1.93E+03 | 1.58E+03
F11 | Mean | 3.09E+03 | 5.24E+03 | 1.71E+03 | 1.78E+03 | 1.67E+03 | 1.39E+03
F11 | Min | 2.03E+03 | 2.08E+03 | 1.38E+03 | 1.48E+03 | 1.44E+03 | 1.26E+03
F11 | p-value | 3.02E−11 | 3.02E−11 | 6.12E−10 | 9.92E−11 | 1.33E−10 | 1
F11 | Std | 5.05E+02 | 2.34E+03 | 3.11E+02 | 2.27E+02 | 1.19E+02 | 7.39E+01
F12 | Max | 2.82E+08 | 1.997E+08 | 4.205E+09 | 7.78E+07 | 6.86E+07 | 9.47E+06
F12 | Mean | 1.32E+08 | 8.32E+07 | 1.23E+09 | 2.66E+07 | 1.17E+07 | 3.88E+06
F12 | Min | 3.67E+07 | 2.09E+07 | 9.76E+06 | 5.26E+06 | 2.74E+06 | 1.57E+06
F12 | p-value | 3.02E−11 | 3.02E−11 | 3.02E−11 | 6.07E−11 | 6.53E−08 | 1
F12 | Std | 5.57E+07 | 4.50E+07 | 1.18E+09 | 1.37E+07 | 1.18E+07 | 1.91E+06
F13 | Max | 3.81E+06 | 2.99E+04 | 6.11E+09 | 5.58E+04 | 2.81E+04 | 2.00E+04
F13 | Mean | 1.14E+06 | 1.10E+04 | 4.56E+08 | 2.31E+04 | 7.82E+03 | 6.02E+03
F13 | Min | 3.44E+04 | 2.50E+03 | 7.72E+03 | 8.00E+03 | 3.40E+03 | 1.67E+03
F13 | p-value | 3.02E−11 | 1.95E−03 | 1.46E−10 | 1.17E−09 | 1.27E−02 | 1
F13 | Std | 1.07E+06 | 7.89E+03 | 1.17E+09 | 1.09E+04 | 4.96E+03 | 4.92E+03
F14 | Max | 8.46E+05 | 5.26E+06 | 5.60E+06 | 1.62E+06 | 1.71E+05 | 2.24E+05
F14 | Mean | 3.11E+05 | 1.36E+06 | 1.02E+06 | 2.55E+05 | 8.18E+04 | 5.51E+04
F14 | Min | 7.45E+04 | 1.26E+05 | 1.33E+05 | 1.80E+04 | 2.26E+04 | 2.83E+03
F14 | p-value | 1.17E−09 | 4.50E−11 | 4.50E−11 | 2.88E−06 | 1.08E−02 | 1
F14 | Std | 1.79E+05 | 1.24E+06 | 1.07E+06 | 3.00E+05 | 4.00E+04 | 5.22E+04
F15 | Max | 7.25E+05 | 2.23E+04 | 1.12E+05 | 2.19E+04 | 1.07E+05 | 1.93E+04
F15 | Mean | 9.23E+04 | 1.13E+04 | 2.26E+04 | 9.91E+03 | 1.27E+04 | 6.08E+03
F15 | Min | 8.98E+03 | 1.84E+03 | 2.46E+03 | 2.83E+03 | 2.31E+03 | 2.02E+03
F15 | p-value | 3.82E−10 | 9.47E−03 | 1.61E−06 | 1.30E−03 | 3.85E−03 | 1
F15 | Std | 1.55E+05 | 6.73E+03 | 2.19E+04 | 5.66E+03 | 1.85E+04 | 4.24E+03
F16 | Max | 4.41E+03 | 5.28E+03 | 4.70E+03 | 4.29E+03 | 5.46E+03 | 4.22E+03
F16 | Mean | 3.36E+03 | 4.45E+03 | 3.44E+03 | 3.29E+03 | 4.86E+03 | 3.24E+03
F16 | Min | 2.43E+03 | 3.31E+03 | 2.60E+03 | 2.40E+03 | 4.03E+03 | 2.10E+03
F16 | p-value | 4.29E−01 | 8.10E−10 | 2.40E−01 | 8.42E−01 | 3.34E−11 | 1
F16 | Std | 4.18E+02 | 4.63E+02 | 4.74E+02 | 4.55E+02 | 3.43E+02 | 4.83E+02
F17 | Max | 3.55E+03 | 4.14E+03 | 3.72E+03 | 3.46E+03 | 4.13E+03 | 3.86E+03
F17 | Mean | 2.94E+03 | 3.63E+03 | 3.26E+03 | 2.98E+03 | 3.72E+03 | 3.14E+03
F17 | Min | 2.28E+03 | 3.27E+03 | 2.59E+03 | 2.53E+03 | 2.99E+03 | 2.38E+03
F17 | p-value | 1.56E−02 | 1.16E−07 | 1.71E−01 | 1.12E−02 | 1.85E−08 | 1
F17 | Std | 3.03E+02 | 2.76E+02 | 3.21E+02 | 2.31E+02 | 2.67E+02 | 3.04E+02
F18 | Max | 1.02E+07 | 7.17E+07 | 1.88E+07 | 8.13E+06 | 5.94E+06 | 2.20E+06
F18 | Mean | 3.96E+06 | 1.45E+07 | 5.20E+06 | 1.44E+06 | 1.51E+06 | 6.83E+05
F18 | Min | 7.45E+05 | 1.77E+06 | 4.23E+05 | 1.61E+05 | 3.20E+05 | 5.64E+04
F18 | p-value | 1.07E−09 | 4.08E−11 | 5.46E−09 | 2.15E−02 | 2.68E−04 | 1
F18 | Std | 2.67E+06 | 1.39E+07 | 4.18E+06 | 1.57E+06 | 1.25E+06 | 5.42E+05
F19 | Max | 8.01E+05 | 4.43E+04 | 1.89E+06 | 4.01E+04 | 4.11E+04 | 4.03E+04
F19 | Mean | 9.15E+04 | 1.81E+04 | 1.22E+05 | 1.76E+04 | 2.04E+04 | 1.73E+04
F19 | Min | 1.45E+04 | 2.00E+03 | 2.09E+03 | 5.40E+03 | 2.79E+03 | 2.05E+03
F19 | p-value | 2.00E−06 | 9.71E−01 | 1.26E−01 | 9.23E−01 | 2.28E−01 | 1
F19 | Std | 1.54E+05 | 1.28E+04 | 3.51E+05 | 1.02E+04 | 1.02E+04 | 9.92E+03
F20 | Max | 3.41E+03 | 4.11E+03 | 3.65E+03 | 3.52E+03 | 4.33E+03 | 3.84E+03
F20 | Mean | 2.97E+03 | 3.63E+03 | 3.10E+03 | 2.99E+03 | 4.01E+03 | 3.19E+03
F20 | Min | 2.51E+03 | 2.98E+03 | 2.44E+03 | 2.41E+03 | 3.42E+03 | 2.69E+03
F20 | p-value | 6.97E−03 | 5.09E−06 | 3.40E−01 | 1.44E−02 | 1.09E−10 | 1
F20 | Std | 2.46E+02 | 2.89E+02 | 3.06E+02 | 2.77E+02 | 1.95E+02 | 3.17E+02
F21 | Max | 2.64E+03 | 2.77E+03 | 2.76E+03 | 2.63E+03 | 2.76E+03 | 2.46E+03
F21 | Mean | 2.57E+03 | 2.69E+03 | 2.60E+03 | 2.55E+03 | 2.71E+03 | 2.40E+03
F21 | Min | 2.51E+03 | 2.61E+03 | 2.51E+03 | 2.48E+03 | 2.67E+03 | 2.36E+03
F21 | p-value | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1
F21 | Std | 3.23E+01 | 3.61E+01 | 6.04E+01 | 4.23E+01 | 2.84E+01 | 2.46E+01
F22 | Max | 1.37E+04 | 1.66E+04 | 1.18E+04 | 1.38E+04 | 1.70E+04 | 1.16E+04
F22 | Mean | 1.12E+04 | 1.54E+04 | 9.64E+03 | 1.01E+04 | 1.56E+04 | 9.53E+03
F22 | Min | 2.89E+03 | 1.40E+04 | 2.77E+03 | 2.68E+03 | 8.85E+03 | 8.13E+03
F22 | p-value | 1.25E−05 | 3.02E−11 | 2.06E−01 | 4.23E−03 | 2.37E−10 | 1
F22 | Std | 3.22E+03 | 6.46E+02 | 1.66E+03 | 2.64E+03 | 1.35E+03 | 9.73E+02
F23 | Max | 3.10E+03 | 3.19E+03 | 3.72E+03 | 3.23E+03 | 3.25E+03 | 2.92E+03
F23 | Mean | 3.03E+03 | 3.10E+03 | 3.37E+03 | 3.05E+03 | 3.17E+03 | 2.85E+03
F23 | Min | 2.97E+03 | 3.01E+03 | 3.06E+03 | 2.96E+03 | 3.10E+03 | 2.77E+03
F23 | p-value | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1
F23 | Std | 3.09E+01 | 4.56E+01 | 1.76E+02 | 5.68E+01 | 3.10E+01 | 3.28E+01
F24 | Max | 3.33E+03 | 3.36E+03 | 3.85E+03 | 3.33E+03 | 3.44E+03 | 3.15E+03
F24 | Mean | 3.21E+03 | 3.29E+03 | 3.54E+03 | 3.22E+03 | 3.33E+03 | 3.01E+03
F24 | Min | 3.14E+03 | 3.21E+03 | 3.31E+03 | 3.11E+03 | 3.28E+03 | 2.95E+03
F24 | p-value | 4.08E−11 | 3.02E−11 | 3.02E−11 | 4.50E−11 | 3.02E−11 | 1
F24 | Std | 3.90E+01 | 3.58E+01 | 1.38E+02 | 5.42E+01 | 4.04E+01 | 4.09E+01
F25 | Max | 4.48E+03 | 3.41E+03 | 3.41E+03 | 3.39E+03 | 3.26E+03 | 3.12E+03
F25 | Mean | 3.50E+03 | 3.22E+03 | 3.17E+03 | 3.27E+03 | 3.16E+03 | 3.07E+03
F25 | Min | 3.25E+03 | 3.12E+03 | 3.03E+03 | 3.12E+03 | 3.04E+03 | 3.03E+03
F25 | p-value | 3.02E−11 | 3.02E−11 | 1.01E−08 | 3.02E−11 | 2.03E−09 | 1
F25 | Std | 2.23E+02 | 6.41E+01 | 8.28E+01 | 7.65E+01 | 4.66E+01 | 2.29E+01
F26 | Max | 8.07E+03 | 8.50E+03 | 9.30E+03 | 8.04E+03 | 9.81E+03 | 5.99E+03
F26 | Mean | 6.88E+03 | 7.40E+03 | 7.03E+03 | 7.33E+03 | 8.29E+03 | 5.09E+03
F26 | Min | 5.74E+03 | 6.31E+03 | 3.57E+03 | 6.27E+03 | 7.52E+03 | 4.55E+03
F26 | p-value | 3.69E−11 | 3.02E−11 | 1.43E−05 | 3.02E−11 | 3.02E−11 | 1
F26 | Std | 4.92E+02 | 4.67E+02 | 1.65E+03 | 5.00E+02 | 4.63E+02 | 3.54E+02
F27 | Max | 3.99E+03 | 3.79E+03 | 4.10E+03 | 3.99E+03 | 3.73E+03 | 3.57E+03
F27 | Mean | 3.81E+03 | 3.49E+03 | 3.70E+03 | 3.71E+03 | 3.50E+03 | 3.40E+03
F27 | Min | 3.63E+03 | 3.37E+03 | 3.39E+03 | 3.50E+03 | 3.33E+03 | 3.28E+03
F27 | p-value | 3.02E−11 | 5.09E−06 | 1.20E−08 | 4.98E−11 | 2.00E−05 | 1
F27 | Std | 8.95E+01 | 7.81E+01 | 2.00E+02 | 1.23E+02 | 9.32E+01 | 6.33E+01
F28 | Max | 4.32E+03 | 4.52E+03 | 7.06E+03 | 4.75E+03 | 3.72E+03 | 3.45E+03
F28 | Mean | 3.99E+03 | 3.77E+03 | 4.03E+03 | 3.85E+03 | 3.52E+03 | 3.36E+03
F28 | Min | 3.65E+03 | 3.45E+03 | 3.38E+03 | 3.53E+03 | 3.38E+03 | 3.29E+03
F28 | p-value | 3.02E−11 | 3.02E−11 | 1.33E−10 | 3.02E−11 | 1.46E−10 | 1
F28 | Std | 1.62E+02 | 2.88E+02 | 8.93E+02 | 2.48E+02 | 7.53E+01 | 3.53E+01
F29 | Max | 5.35E+03 | 5.67E+03 | 5.90E+03 | 4.99E+03 | 5.68E+03 | 5.54E+03
F29 | Mean | 4.58E+03 | 4.77E+03 | 4.69E+03 | 4.53E+03 | 5.03E+03 | 4.44E+03
F29 | Min | 4.03E+03 | 4.06E+03 | 3.86E+03 | 3.99E+03 | 4.04E+03 | 3.50E+03
F29 | p-value | 1.30E−01 | 1.60E−03 | 2.71E−02 | 3.18E−01 | 8.29E−06 | 1
F29 | Std | 3.32E+02 | 3.70E+02 | 4.40E+02 | 2.82E+02 | 4.45E+02 | 4.04E+02
F30 | Max | 6.07E+07 | 2.96E+06 | 2.16E+07 | 1.18E+07 | 9.10E+06 | 2.36E+06
F30 | Mean | 2.39E+07 | 1.48E+06 | 4.45E+06 | 6.84E+06 | 4.21E+06 | 1.19E+06
F30 | Min | 8.69E+06 | 8.82E+05 | 9.03E+05 | 3.15E+06 | 1.49E+06 | 7.53E+05
F30 | p-value | 3.02E−11 | 1.11E−03 | 2.38E−07 | 3.02E−11 | 2.15E−10 | 1
F30 | Std | 1.39E+07 | 4.46E+05 | 4.80E+06 | 2.06E+06 | 2.15E+06 | 4.10E+05

Appendix B

Table A2. Comparison of the six algorithms in 100 dimensions.
Dim = 100
Function | Metric | GRO | LO | PSO | SWO | KOA | ILO
F1 | Max | 7.10E+10 | 2.62E+10 | 5.54E+10 | 2.81E+10 | 8.21E+09 | 3.26E+08
| Mean | 5.33E+10 | 1.72E+10 | 1.89E+10 | 1.92E+10 | 4.12E+09 | 9.10E+07
| Min | 3.67E+10 | 1.25E+10 | 5.88E+09 | 1.14E+10 | 1.83E+09 | 2.40E+07
| p-value | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1
| Std | 8.28E+09 | 3.06E+09 | 1.13E+10 | 4.59E+09 | 1.48E+09 | 6.82E+07
F2 | Max | 4.97E+134 | 3.31E+154 | 1.97E+151 | 4.75E+128 | 1.47E+126 | 4.76E+116
| Mean | 2.14E+133 | 1.10E+153 | 6.57E+149 | 3.34E+127 | 4.89E+124 | 1.59E+115
| Min | 2.48E+118 | 2.57E+130 | 1.06E+105 | 2.70E+112 | 3.19E+106 | 4.88E+93
| p-value | 3.02E−11 | 3.02E−11 | 8.10E−10 | 3.69E−11 | 3.50E−09 | 1
| Std | 9.21E+133 | Infinity | 3.60E+150 | 1.00E+128 | 2.67E+125 | 8.69E+115
F3 | Max | 6.03E+05 | 9.52E+05 | 8.15E+05 | 3.86E+05 | 7.48E+05 | 4.24E+05
| Mean | 4.49E+05 | 8.25E+05 | 6.29E+05 | 3.25E+05 | 5.68E+05 | 3.05E+05
| Min | 3.05E+05 | 6.17E+05 | 3.98E+05 | 2.57E+05 | 4.16E+05 | 2.32E+05
| p-value | 2.67E−09 | 3.02E−11 | 4.98E−11 | 1.22E−02 | 3.69E−11 | 1
| Std | 6.84E+04 | 7.93E+04 | 9.60E+04 | 2.79E+04 | 8.58E+04 | 5.85E+04
F4 | Max | 7.23E+03 | 4.51E+03 | 7.58E+03 | 3.81E+03 | 2.15E+03 | 1.06E+03
| Mean | 5.80E+03 | 2.76E+03 | 2.71E+03 | 2.73E+03 | 1.57E+03 | 9.27E+02
| Min | 4.61E+03 | 1.84E+03 | 1.29E+03 | 1.97E+03 | 1.28E+03 | 8.02E+02
| p-value | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1
| Std | 7.03E+02 | 5.49E+02 | 1.46E+03 | 4.68E+02 | 1.86E+02 | 6.70E+01
F5 | Max | 1.59E+03 | 1.71E+03 | 1.72E+03 | 1.51E+03 | 1.82E+03 | 9.17E+02
| Mean | 1.40E+03 | 1.59E+03 | 1.41E+03 | 1.33E+03 | 1.66E+03 | 8.37E+02
| Min | 1.26E+03 | 1.46E+03 | 1.17E+03 | 1.14E+03 | 1.56E+03 | 7.26E+02
| p-value | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1
| Std | 7.30E+01 | 7.41E+01 | 1.24E+02 | 9.09E+01 | 6.67E+01 | 5.05E+01
F6 | Max | 6.65E+02 | 6.47E+02 | 6.81E+02 | 6.55E+02 | 6.63E+02 | 6.29E+02
| Mean | 6.47E+02 | 6.37E+02 | 6.62E+02 | 6.43E+02 | 6.44E+02 | 6.20E+02
| Min | 6.36E+02 | 6.26E+02 | 6.36E+02 | 6.33E+02 | 6.31E+02 | 6.11E+02
| p-value | 3.02E−11 | 4.08E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1
| Std | 5.64E+00 | 4.96E+00 | 9.89E+00 | 6.26E+00 | 7.43E+00 | 4.39E+00
F7 | Max | 2.58E+03 | 2.99E+03 | 2.61E+03 | 2.70E+03 | 2.87E+03 | 1.93E+03
| Mean | 2.24E+03 | 2.59E+03 | 2.24E+03 | 2.39E+03 | 2.49E+03 | 1.54E+03
| Min | 1.94E+03 | 2.24E+03 | 1.99E+03 | 2.01E+03 | 2.19E+03 | 1.32E+03
| p-value | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1
| Std | 1.45E+02 | 1.73E+02 | 1.50E+02 | 1.73E+02 | 1.70E+02 | 1.30E+02
F8 | Max | 1.88E+03 | 2.07E+03 | 2.11E+03 | 1.89E+03 | 2.11E+03 | 1.29E+03
| Mean | 1.74E+03 | 1.88E+03 | 1.77E+03 | 1.67E+03 | 1.95E+03 | 1.16E+03
| Min | 1.62E+03 | 1.69E+03 | 1.58E+03 | 1.46E+03 | 1.83E+03 | 1.05E+03
| p-value | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1
| Std | 7.11E+01 | 6.67E+01 | 1.40E+02 | 1.15E+02 | 6.24E+01 | 5.94E+01
F9 | Max | 4.28E+04 | 7.08E+04 | 9.86E+04 | 7.32E+04 | 6.66E+04 | 2.10E+04
| Mean | 3.12E+04 | 4.60E+04 | 6.97E+04 | 4.29E+04 | 4.13E+04 | 1.14E+04
| Min | 1.92E+04 | 3.44E+04 | 3.67E+04 | 2.76E+04 | 2.56E+04 | 6.64E+03
| p-value | 4.08E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1
| Std | 6.51E+03 | 7.79E+03 | 1.55E+04 | 1.15E+04 | 9.65E+03 | 3.57E+03
F10 | Max | 2.86E+04 | 3.34E+04 | 2.70E+04 | 3.00E+04 | 3.25E+04 | 2.39E+04
| Mean | 2.60E+04 | 3.15E+04 | 2.05E+04 | 2.40E+04 | 3.14E+04 | 1.79E+04
| Min | 2.31E+04 | 2.94E+04 | 1.71E+04 | 2.04E+04 | 3.01E+04 | 1.55E+04
| p-value | 3.69E−11 | 3.02E−11 | 9.51E−06 | 2.15E−10 | 3.02E−11 | 1
| Std | 1.49E+03 | 9.59E+02 | 2.26E+03 | 2.62E+03 | 5.09E+02 | 1.86E+03
F11 | Max | 1.44E+05 | 3.33E+05 | 1.20E+05 | 8.13E+04 | 1.30E+05 | 8.25E+04
| Mean | 1.03E+05 | 2.22E+05 | 6.96E+04 | 5.74E+04 | 9.35E+04 | 2.36E+04
| Min | 6.48E+04 | 1.38E+05 | 3.29E+04 | 2.10E+04 | 6.16E+04 | 1.16E+04
| p-value | 4.98E−11 | 3.02E−11 | 5.57E−10 | 2.44E−09 | 8.15E−11 | 1
| Std | 1.92E+04 | 5.04E+04 | 2.04E+04 | 1.21E+04 | 1.98E+04 | 1.43E+04
F12 | Max | 7.23E+09 | 3.39E+09 | 2.44E+10 | 2.40E+09 | 7.37E+08 | 1.37E+08
| Mean | 4.77E+09 | 2.12E+09 | 8.48E+09 | 1.07E+09 | 3.81E+08 | 5.08E+07
| Min | 1.89E+09 | 9.26E+08 | 1.33E+09 | 6.84E+08 | 1.81E+08 | 1.42E+07
| p-value | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1
| Std | 1.48E+09 | 6.10E+08 | 6.55E+09 | 3.55E+08 | 1.31E+08 | 2.63E+07
F13 | Max | 3.01E+08 | 1.69E+05 | 6.27E+09 | 3.49E+06 | 1.76E+06 | 2.97E+04
| Mean | 7.36E+07 | 8.99E+04 | 1.10E+09 | 1.08E+06 | 9.15E+04 | 8.24E+03
| Min | 1.76E+07 | 4.53E+04 | 9.77E+04 | 2.70E+05 | 1.98E+04 | 3.08E+03
| p-value | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 9.92E−11 | 1
| Std | 6.13E+07 | 3.26E+04 | 1.54E+09 | 7.45E+05 | 3.15E+05 | 6.30E+03
F14 | Max | 1.40E+07 | 6.73E+07 | 2.55E+07 | 4.66E+06 | 8.31E+06 | 3.30E+06
| Mean | 6.79E+06 | 2.51E+07 | 6.51E+06 | 2.43E+06 | 3.35E+06 | 1.32E+06
| Min | 1.65E+06 | 8.63E+06 | 1.03E+06 | 8.69E+05 | 1.41E+06 | 2.39E+05
| p-value | 1.33E−10 | 3.02E−11 | 5.46E−09 | 1.64E−05 | 2.19E−08 | 1
| Std | 3.21E+06 | 1.27E+07 | 5.27E+06 | 9.92E+05 | 1.66E+06 | 7.44E+05
F15 | Max | 5.52E+06 | 4.41E+05 | 1.38E+09 | 5.11E+04 | 1.72E+04 | 2.31E+04
| Mean | 1.82E+06 | 2.40E+04 | 1.85E+08 | 2.14E+04 | 6.21E+03 | 5.83E+03
| Min | 1.70E+05 | 4.03E+03 | 9.86E+03 | 9.28E+03 | 3.72E+03 | 2.26E+03
| p-value | 3.02E−11 | 1.86E−03 | 7.39E−11 | 1.55E−09 | 3.03E−02 | 1
| Std | 1.42E+06 | 7.92E+04 | 4.25E+08 | 9.76E+03 | 2.92E+03 | 4.86E+03
F16 | Max | 9.34E+03 | 1.20E+04 | 7.22E+03 | 7.98E+03 | 1.11E+04 | 7.57E+03
| Mean | 7.93E+03 | 1.04E+04 | 6.27E+03 | 7.00E+03 | 1.00E+04 | 5.58E+03
| Min | 6.59E+03 | 8.89E+03 | 5.07E+03 | 5.30E+03 | 8.06E+03 | 4.34E+03
| p-value | 7.39E−11 | 3.02E−11 | 2.68E−04 | 4.69E−08 | 3.02E−11 | 1
| Std | 6.86E+02 | 7.24E+02 | 6.05E+02 | 7.11E+02 | 7.15E+02 | 7.30E+02
F17 | Max | 6.77E+03 | 8.23E+03 | 2.62E+04 | 6.55E+03 | 8.01E+03 | 6.54E+03
| Mean | 5.49E+03 | 7.39E+03 | 6.73E+03 | 5.39E+03 | 7.21E+03 | 5.19E+03
| Min | 4.74E+03 | 6.35E+03 | 4.30E+03 | 4.47E+03 | 6.18E+03 | 3.63E+03
| p-value | 1.08E−02 | 3.34E−11 | 3.37E−05 | 1.30E−01 | 4.08E−11 | 1
| Std | 4.06E+02 | 4.66E+02 | 3.85E+03 | 4.73E+02 | 4.48E+02 | 5.88E+02
F18 | Max | 1.95E+07 | 9.43E+07 | 2.11E+07 | 9.63E+06 | 1.90E+07 | 4.54E+06
| Mean | 9.00E+06 | 4.41E+07 | 8.02E+06 | 3.50E+06 | 7.01E+06 | 2.28E+06
| Min | 1.96E+06 | 1.41E+07 | 2.75E+06 | 1.32E+06 | 1.79E+06 | 2.63E+05
| p-value | 1.96E−10 | 3.02E−11 | 1.33E−10 | 9.88E−03 | 1.43E−08 | 1
| Std | 3.73E+06 | 2.17E+07 | 4.07E+06 | 1.88E+06 | 3.82E+06 | 1.10E+06
F19 | Max | 1.60E+07 | 1.89E+04 | 5.20E+09 | 2.37E+05 | 1.53E+04 | 1.29E+04
| Mean | 2.89E+06 | 1.02E+04 | 3.28E+08 | 6.99E+04 | 5.79E+03 | 4.96E+03
| Min | 4.88E+05 | 3.07E+03 | 1.43E+04 | 7.45E+03 | 2.69E+03 | 2.17E+03
| p-value | 3.02E−11 | 1.02E−05 | 3.02E−11 | 4.50E−11 | 7.48E−02 | 1
| Std | 3.05E+06 | 4.95E+03 | 9.52E+08 | 5.77E+04 | 2.97E+03 | 3.08E+03
F20 | Max | 6.39E+03 | 8.28E+03 | 6.37E+03 | 6.94E+03 | 8.20E+03 | 6.68E+03
| Mean | 5.70E+03 | 7.30E+03 | 5.45E+03 | 5.32E+03 | 7.72E+03 | 5.62E+03
| Min | 4.35E+03 | 6.11E+03 | 4.29E+03 | 3.87E+03 | 6.85E+03 | 3.40E+03
| p-value | 6.31E−01 | 2.37E−10 | 2.12E−01 | 3.64E−02 | 3.02E−11 | 1
| Std | 4.44E+02 | 5.54E+02 | 6.02E+02 | 6.16E+02 | 3.07E+02 | 7.02E+02
F21 | Max | 3.36E+03 | 3.71E+03 | 4.11E+03 | 3.35E+03 | 3.56E+03 | 2.82E+03
| Mean | 3.19E+03 | 3.45E+03 | 3.50E+03 | 3.15E+03 | 3.43E+03 | 2.67E+03
| Min | 3.07E+03 | 3.31E+03 | 3.21E+03 | 3.02E+03 | 3.32E+03 | 2.57E+03
| p-value | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1
| Std | 6.64E+01 | 8.73E+01 | 1.94E+02 | 7.59E+01 | 6.36E+01 | 6.41E+01
F22 | Max | 3.04E+04 | 3.57E+04 | 3.45E+04 | 3.22E+04 | 3.46E+04 | 2.52E+04
| Mean | 2.88E+04 | 3.36E+04 | 2.49E+04 | 2.57E+04 | 3.39E+04 | 2.05E+04
| Min | 2.73E+04 | 3.15E+04 | 1.98E+04 | 2.10E+04 | 3.04E+04 | 1.64E+04
| p-value | 3.02E−11 | 3.02E−11 | 2.57E−07 | 2.23E−09 | 3.02E−11 | 1
| Std | 7.91E+02 | 1.03E+03 | 3.21E+03 | 2.54E+03 | 8.53E+02 | 2.12E+03
F23 | Max | 4.09E+03 | 3.92E+03 | 5.09E+03 | 3.94E+03 | 4.05E+03 | 3.40E+03
| Mean | 3.86E+03 | 3.76E+03 | 4.74E+03 | 3.78E+03 | 3.92E+03 | 3.24E+03
| Min | 3.74E+03 | 3.63E+03 | 4.19E+03 | 3.57E+03 | 3.72E+03 | 3.11E+03
| p-value | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1
| Std | 7.61E+01 | 7.42E+01 | 2.86E+02 | 9.61E+01 | 7.94E+01 | 7.71E+01
F24 | Max | 4.81E+03 | 4.52E+03 | 8.21E+03 | 4.73E+03 | 4.70E+03 | 4.14E+03
| Mean | 4.64E+03 | 4.31E+03 | 5.83E+03 | 4.51E+03 | 4.54E+03 | 3.80E+03
| Min | 4.36E+03 | 4.17E+03 | 4.65E+03 | 4.29E+03 | 4.39E+03 | 3.64E+03
| p-value | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1
| Std | 1.14E+02 | 7.48E+01 | 6.72E+02 | 1.19E+02 | 7.63E+01 | 1.13E+02
F25 | Max | 9.07E+03 | 8.57E+03 | 6.61E+03 | 6.47E+03 | 4.78E+03 | 3.80E+03
| Mean | 7.21E+03 | 6.32E+03 | 4.61E+03 | 5.43E+03 | 4.28E+03 | 3.63E+03
| Min | 5.49E+03 | 5.12E+03 | 3.97E+03 | 4.67E+03 | 3.98E+03 | 3.42E+03
| p-value | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1
| Std | 8.15E+02 | 7.75E+02 | 6.29E+02 | 4.52E+02 | 1.97E+02 | 8.46E+01
F26 | Max | 2.48E+04 | 1.85E+04 | 3.06E+04 | 2.15E+04 | 2.21E+04 | 1.28E+04
| Mean | 2.03E+04 | 1.68E+04 | 2.06E+04 | 1.93E+04 | 1.90E+04 | 1.06E+04
| Min | 1.50E+04 | 1.52E+04 | 1.57E+04 | 1.69E+04 | 1.68E+04 | 9.25E+03
| p-value | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1
| Std | 2.09E+03 | 8.31E+02 | 3.40E+03 | 1.35E+03 | 1.37E+03 | 8.56E+02
F27 | Max | 4.99E+03 | 4.15E+03 | 4.76E+03 | 5.00E+03 | 4.13E+03 | 3.87E+03
| Mean | 4.68E+03 | 3.92E+03 | 4.04E+03 | 4.33E+03 | 3.90E+03 | 3.61E+03
| Min | 4.35E+03 | 3.71E+03 | 3.73E+03 | 3.96E+03 | 3.74E+03 | 3.46E+03
| p-value | 3.02E−11 | 1.78E−10 | 2.37E−10 | 3.02E−11 | 5.57E−10 | 1
| Std | 1.78E+02 | 1.02E+02 | 2.48E+02 | 2.08E+02 | 1.15E+02 | 9.25E+01
F28 | Max | 1.15E+04 | 1.38E+04 | 1.01E+04 | 9.64E+03 | 6.70E+03 | 4.44E+03
| Mean | 9.33E+03 | 1.04E+04 | 6.19E+03 | 7.28E+03 | 5.13E+03 | 3.93E+03
| Min | 7.63E+03 | 6.25E+03 | 3.90E+03 | 4.96E+03 | 4.19E+03 | 3.62E+03
| p-value | 3.02E−11 | 3.02E−11 | 8.10E−10 | 3.02E−11 | 1.21E−10 | 1
| Std | 1.08E+03 | 1.91E+03 | 1.88E+03 | 8.69E+02 | 5.50E+02 | 2.14E+02
F29 | Max | 9.51E+03 | 1.06E+04 | 9.02E+03 | 1.06E+04 | 1.10E+04 | 8.38E+03
| Mean | 8.81E+03 | 9.44E+03 | 8.03E+03 | 8.73E+03 | 9.95E+03 | 7.16E+03
| Min | 7.23E+03 | 7.62E+03 | 6.43E+03 | 7.65E+03 | 8.36E+03 | 5.48E+03
| p-value | 1.78E−10 | 5.49E−11 | 5.86E−06 | 1.33E−10 | 3.34E−11 | 1
| Std | 5.11E+02 | 6.52E+02 | 6.56E+02 | 6.82E+02 | 7.03E+02 | 5.72E+02
F30 | Max | 1.04E+08 | 1.83E+07 | 3.46E+09 | 2.73E+07 | 5.67E+06 | 1.25E+05
| Mean | 4.52E+07 | 4.12E+06 | 1.02E+09 | 1.07E+07 | 1.83E+06 | 6.02E+04
| Min | 1.75E+07 | 8.33E+05 | 4.72E+06 | 3.51E+06 | 5.53E+05 | 2.10E+04
| p-value | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 3.02E−11 | 1
| Std | 2.30E+07 | 3.66E+06 | 9.97E+08 | 6.24E+06 | 1.22E+06 | 2.12E+04
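The p-value columns in Tables A1 and A2 are consistent with a two-sided Wilcoxon rank-sum (Mann-Whitney) test [28] applied to the per-run results of ILO versus each competitor; the recurring value 3.02E−11 is the smallest attainable two-sided p-value when two samples of 30 runs do not overlap. A minimal sketch of such a comparison, assuming 30 independent runs per algorithm (the result vectors below are placeholders, not the paper's data):

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Placeholder final-fitness values over 30 independent runs (illustrative only).
ilo_runs = rng.uniform(1.0e3, 2.0e3, size=30)   # assumed ILO results on one benchmark function
gro_runs = rng.uniform(1.0e4, 2.0e4, size=30)   # assumed competitor results on the same function

stat, p = ranksums(gro_runs, ilo_runs)          # two-sided Wilcoxon rank-sum test
print(f"p-value = {p:.2E}")                     # 3.02E-11 here, since the two 30-run samples do not overlap
```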

Appendix C

Figure A1. F1–F6 (50 dim).
Figure A2. F7–F12 (50 dim).
Figure A3. F13–F18 (50 dim).
Figure A4. F19–F24 (50 dim).
Figure A5. F25–F30 (50 dim).

Appendix D

Figure A6. Box plots for F1–F6 (50 dim).
Figure A7. Box plots for F7–F12 (50 dim).
Figure A8. Box plots for F13–F18 (50 dim).
Figure A9. Box plots for F19–F24 (50 dim).
Figure A10. Box plots for F25–F30 (50 dim).

Appendix E

Figure A11. F1–F6 (100 dim).
Figure A12. F7–F12 (100 dim).
Figure A13. F13–F18 (100 dim).
Figure A14. F19–F24 (100 dim).
Figure A15. F25–F30 (100 dim).

Appendix F

Figure A16. Box plots for F1–F6 (100 dim).
Figure A17. Box plots for F7–F12 (100 dim).
Figure A18. Box plots for F13–F18 (100 dim).
Figure A19. Box plots for F19–F24 (100 dim).
Figure A20. Box plots for F25–F30 (100 dim).

References

  1. Tanaka, T.S.T.; Wang, S.; Jørgensen, J.R.; Gentili, M.; Vidal, A.Z.; Mortensen, A.K.; Acharya, B.S.; Beck, B.D.; Gislum, R. Review of Crop Phenotyping in Field Plot Experiments Using UAV-Mounted Sensors and Algorithms. Drones 2024, 8, 212. [Google Scholar] [CrossRef]
  2. Asadzadeh, S.; de Oliveira, W.J.; de Souza Filho, C.R. UAV-based remote sensing for the petroleum industry and environmental monitoring: State-of-the-art and perspectives. J. Pet. Sci. Eng. 2022, 208, 109633. [Google Scholar] [CrossRef]
  3. Mohd Noor, N.; Abdullah, A.; Hashim, M. Remote sensing UAV/drones and its applications for urban areas: A review. IOP Conf. Ser. Earth Environ. Sci. 2018, 169, 012003. [Google Scholar] [CrossRef]
  4. Erdelj, M.; Natalizio, E. UAV-assisted disaster management: Applications and open issues. In Proceedings of the 2016 International Conference on Computing, Networking and Communications (ICNC), Kauai, HI, USA, 15–18 February 2016; pp. 1–5. [Google Scholar]
  5. Deng, S.Q.; Guo, Z.J.; Li, F. Adaptive simulated annealing particle swarm optimisation based on the Metropolis criterion. Softw. Guide 2022, 21, 85–91. [Google Scholar]
  6. Abdel-Basset, M.; Reda, M.; Shaimaa, A.; Azeem, A.; Jameel, M.; Abouhawwash, M. Kepler optimization algorithm: A new metaheuristic algorithm inspired by Kepler’s laws of planetary motion. Knowl. Based Syst. 2023, 268, 110454. [Google Scholar] [CrossRef]
  7. Birbil, S.I.; Fang, S.C. An electromagnetism-like mechanism for global optimization. J. Glob. Optim. 2003, 25, 263–282. [Google Scholar] [CrossRef]
  8. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2016, 27, 495–513. [Google Scholar] [CrossRef]
  9. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  10. Wang, H.Q.; Song, G.Z.; Ge, C. UAV 3D Path Planning Based on Improved Dung Beetle Algorithm. Electron. Opt. Control. 2024. Available online: https://link.cnki.net/urlid/41.1227.TN.20240708.1532.008 (accessed on 16 October 2024).
  11. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  12. Abdel-Basset, M.; Mohamed, R.; Jameel, M.; Abouhawwash, M. Spider wasp optimizer: A novel meta-heuristic optimization algorithm. Artif. Intell. Rev. 2023, 56, 11675–11738. [Google Scholar] [CrossRef]
  13. Jia, H.; Rao, H.; Wen, C.; Mirjalili, S. Crayfish optimization algorithm. Artif. Intell. Rev. 2023, 56 (Suppl. S2), 1919–1979. [Google Scholar] [CrossRef]
  14. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  15. Krishnanand, K.; Ghose, D. Glowworm swarm optimization for simultaneous capture of multiple local optima of multimodal functions. Swarm Intell. 2009, 3, 87–124. [Google Scholar] [CrossRef]
  16. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  17. Zolf, K. Gold rush optimizer: A new population-based metaheuristic algorithm. Oper. Res. Decis. 2023, 33, 113–150. [Google Scholar] [CrossRef]
  18. Abasi, A.K.; Makhadmeh, S.N.; Al-Betar, M.A.; Alomari, O.A.; Awadallah, M.A.; Alyasseri, Z.A.A.; Doush, I.A.; Elnagar, A.; Alkhammash, E.H.; Hadjouni, M. Lemurs optimizer: A new metaheuristic algorithm for global optimization. Appl. Sci. 2022, 12, 10057. [Google Scholar] [CrossRef]
  19. Das, S.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary Algorithms on Real World Optimization Problems; Technical Report; Jadavpur University, Nanyang Technological University: Kolkata, India, 2010; pp. 341–359. [Google Scholar]
  20. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  21. Cuina, C.; Songlu, F.; Liping, M. Harmonic Search Algorithm Based on Positive Cosine Optimisation Operators and Levy Flight Mechanisms. J. Data Acquis. Process. 2023, 38, 690–703. [Google Scholar]
  22. Xing, N.; Di, H.T.; Yin, W.J.; Han, Y.J.; Zhou, Y. Path Planning for Intelligent Bodies Based on Adaptive Multistate Ant Colony Optimisation. J. Beijing Univ. Aeronaut. Astronaut. 2023, 4, 1–12. [Google Scholar]
  23. Zeng, A.J.; Liu, Y.J.; Meng, X.L.; Wen, H.J.; Shao, Y.J. Parameter optimisation of a genetic algorithm for post-war weaponry workshop scheduling. Fire Control Command Control 2020, 45, 153–159. [Google Scholar]
  24. Küpper, S. Behavioural Analysis of Systems with Weights and Conditions. Ph.D. Thesis, Universität Duisburg-Essen, Duisburg, Germany, 2017; pp. 1–295. [Google Scholar]
  25. Zhang, W.C.; Du, Y.Z.; Chen, Z. Robust adaptive learning with Siamese network architecture for visual tracking. Vis. Comput. 2021, 37, 881–894. [Google Scholar] [CrossRef]
  26. Salgotra, R.; Singh, U.; Saha, S.; Gandomi, A.H. Improving cuckoo search: Incorporating changes for CEC 2017 and CEC 2020 benchmark problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–7. [Google Scholar]
  27. Wang, X.; Zhang, Y.; Zheng, C.; Feng, S.; Yu, H.; Hu, B.; Xie, Z. An Adaptive Spiral Strategy Dung Beetle Optimization Algorithm: Research and Applications. Biomimetics 2024, 9, 519. [Google Scholar] [CrossRef] [PubMed]
  28. Happ, M.; Bathke, A.C.; Brunner, E. Optimal sample size planning for the Wilcoxon-Mann-Whitney test. Stat. Med. 2019, 38, 363–375. [Google Scholar] [CrossRef]
  29. Sun, Y.; Genton, M.G. Functional boxplots. J. Comput. Graph. Stat. 2011, 20, 316–334. [Google Scholar] [CrossRef]
  30. Li, Z.; Wang, F.; Wang, R.J. A particle swarm optimisation algorithm incorporating the grey wolf algorithm. Comput. Meas. Control 2021, 29, 217–222. [Google Scholar]
  31. Zhu, Z.F.; Hu, J.L.; Wen, J.J. Evaluation of Quantitative Accuracy of Obstacle Status for Substation UAV Inspection. Comput. Simul. 2022, 39, 387–391. [Google Scholar]
  32. Yu, Y.; Zhou, J.W.; Feng, Y.B.; Tan, Y. Research on unmanned vehicle trajectory optimisation method based on cubic B-spline curve. J. Shenyang Univ. Sci. Technol. 2019, 38, 71–75. [Google Scholar]
  33. Wang, X.L.; Huang, C.; Yu, Y.H.; Chen, F.H.; Hu, D.; Lu, Q.; Cui, X.Y. Unmanned Aerial Vehicle Path Planning Based on Adaptive Value Superiority Particle Swarm Algorithm. Pract. Electron. 2022, 30, 16–19. [Google Scholar]
  34. Hu, G.K.; Zhou, J.H.; Li, Y.Z.; Li, W.H. UAV 3D Path Planning Based on IPSO-GA Algorithm. Mod. Electron. Tech. 2023, 46, 115–120. [Google Scholar]
  35. Yuan, J.H.; Li, S. Three-dimensional path planning and obstacle avoidance methods for UAVs. Inf. Control 2021, 50, 95–101. [Google Scholar]
Figure 1. Lemur Optimizer flow chart.
Figure 2. Adaptive cross-probability iteration effect curve.
Figure 3. Adaptive behavior weights iteration effect curve.
Figure 4. Temperature decay curve.
Figure 5. Improved Lemur Optimizer flow chart.
Figure 6. F10 (50 dim).
Figure 7. F20 (50 dim).
Figure 8. F20 (100 dim).
Figure 9. Three-dimensional model of Gaussian mountains.
Figure 10. Effect of cubic spline interpolation optimization.
Figure 11. Comparison of paths generated by the LO algorithm without (left) and with (right) cubic spline interpolation path smoothing.
Figure 12. Routes planned by the six algorithms: PSO, LO, ILO, SWO, KOA, and GRO.
Figure 13. Convergence curves of the six algorithms: PSO, LO, ILO, SWO, KOA, and GRO.
Figure 14. Histogram of the path lengths of the six algorithms: PSO, LO, ILO, SWO, KOA, and GRO.
Table 1. Summary of the CEC’17 test functions [27].
No. | Function | Fi* = Fi(x*)
Unimodal Functions
1 | Shifted and Rotated Bent Cigar Function | 100
2 | Shifted and Rotated Sum of Different Power Function | 200
3 | Shifted and Rotated Zakharov Function | 300
Simple Multimodal Functions
4 | Shifted and Rotated Rosenbrock’s Function | 400
5 | Shifted and Rotated Rastrigin’s Function | 500
6 | Shifted and Rotated Expanded Scaffer’s F6 Function | 600
7 | Shifted and Rotated Lunacek Bi_Rastrigin Function | 700
8 | Shifted and Rotated Non-Continuous Rastrigin’s Function | 800
9 | Shifted and Rotated Levy Function | 900
10 | Shifted and Rotated Schwefel’s Function | 1000
Hybrid Functions
11 | Hybrid Function 1 (N = 3) | 1100
12 | Hybrid Function 2 (N = 3) | 1200
13 | Hybrid Function 3 (N = 3) | 1300
14 | Hybrid Function 4 (N = 4) | 1400
15 | Hybrid Function 5 (N = 4) | 1500
16 | Hybrid Function 6 (N = 4) | 1600
17 | Hybrid Function 7 (N = 5) | 1700
18 | Hybrid Function 8 (N = 5) | 1800
19 | Hybrid Function 9 (N = 5) | 1900
20 | Hybrid Function 10 (N = 6) | 2000
Composition Functions
21 | Composition Function 1 (N = 3) | 2100
22 | Composition Function 2 (N = 3) | 2200
23 | Composition Function 3 (N = 4) | 2300
24 | Composition Function 4 (N = 4) | 2400
25 | Composition Function 5 (N = 5) | 2500
26 | Composition Function 6 (N = 5) | 2600
27 | Composition Function 7 (N = 6) | 2700
28 | Composition Function 8 (N = 6) | 2800
29 | Composition Function 9 (N = 3) | 2900
30 | Composition Function 10 (N = 3) | 3000
Search range: [−100, 100]^D
Table 2. Algorithm parameters.
Algorithm | Population Size | Number of Iterations | Parameters
GRO | 30 | 500 | Iter = 1, σ0 = 2
PSO | 30 | 500 | w = 0.8, c1 = 1.5, c2 = 1.5
SWO | 30 | 500 | TR = 0.3, Cr = 0.2, Nmin = 20, t = 0
KOA | 30 | 500 | T0 = 3, M0 = 0.1, λ = 15
LO | 30 | 500 | Iter = 0, jumping rate min = 0.1, jumping rate max = 0.5
ILO | 30 | 500 | Initial jumping rate = 0.5, jumping rate min = 0.1, jumping rate max = 0.5, Nmin = 20, TR = 0.3, Cr = 0.2, T0 = 100, α = 0.95
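For reference, the ILO settings in Table 2 can be gathered into a single configuration object before a run; a minimal sketch (the key names and comments are illustrative interpretations, not taken from the authors' code):

```python
# ILO configuration collected from Table 2 (key names are illustrative).
ILO_PARAMS = {
    "population_size": 30,
    "iterations": 500,
    "initial_jumping_rate": 0.5,
    "jumping_rate_min": 0.1,
    "jumping_rate_max": 0.5,
    "N_min": 20,
    "TR": 0.3,
    "Cr": 0.2,
    "T0": 100,      # initial simulated-annealing temperature
    "alpha": 0.95,  # temperature decay factor
}
```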
Table 3. Environmental parameters.
Parameters | Notation | Parameter Value
Map | Execution space | 100 × 100 × 250
Starting point | Start | [10, 10, 10]
Target point | Goal | [80, 90, 80]
Number of peaks | N | 8
Population size | SearchAgents_no | 30
Number of iterations | Iter | 100
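As context for Table 3 and Figure 9, the mountain environment is modelled with Gaussian peaks; below is a minimal sketch of such a terrain over the 100 × 100 × 250 execution space with eight peaks. The peak centres, heights, and spreads are illustrative assumptions, not the paper's exact values.

```python
import numpy as np

def gaussian_terrain(size=100, n_peaks=8, z_max=250, seed=0):
    """Superimpose n_peaks Gaussian hills on a size x size grid (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    x, y = np.meshgrid(np.arange(size), np.arange(size))
    z = np.zeros((size, size))
    for _ in range(n_peaks):
        cx, cy = rng.uniform(10, 90, size=2)   # assumed peak centre
        height = rng.uniform(80, 200)          # assumed peak height
        sx, sy = rng.uniform(5, 15, size=2)    # assumed spread along x and y
        z += height * np.exp(-(((x - cx) ** 2) / (2 * sx ** 2) + ((y - cy) ** 2) / (2 * sy ** 2)))
    return np.clip(z, 0, z_max)                # keep the terrain inside the 250-unit height limit

terrain = gaussian_terrain()
```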
Table 4. Comparison of average fitness value and convergence speed.
Scales | Algorithms | Average Number of Convergence Iterations | Mean Fitness Value | Percentage of ILO Converged Iterations/% | Percentage of ILO Adaptation Values/%
Map | PSO | 56 | 171.2 | 98.3 | 74.4
| LO | 86 | 136.2 | 64.0 | 93.5
| ILO | 55 | 127.4 | 100 | 100
| SWO | 97 | 129.1 | 56.7 | 98.7
| KOA | 70 | 149.0 | 78.6 | 85.5
| GRO | 94 | 129.1 | 58.5 | 98.7
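The two percentage columns in Table 4 appear to express ILO's converged-iteration count (55) and mean fitness (127.4) relative to each competitor's values; a minimal sketch of that calculation follows. This interpretation is inferred from the tabulated numbers, which it reproduces up to rounding.

```python
# Reproducing the ratio columns of Table 4 (interpretation inferred from the data).
ilo_iterations, ilo_fitness = 55, 127.4
competitors = {"PSO": (56, 171.2), "LO": (86, 136.2), "SWO": (97, 129.1),
               "KOA": (70, 149.0), "GRO": (94, 129.1)}
for name, (iters, fitness) in competitors.items():
    iter_pct = 100.0 * ilo_iterations / iters   # ILO converged-iteration percentage
    fit_pct = 100.0 * ilo_fitness / fitness     # ILO adaptation (fitness) percentage
    print(f"{name}: {iter_pct:.1f}% of iterations, {fit_pct:.1f}% of fitness")
```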
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
