Article

Enhanced Nutcracker Optimization Algorithm with Hyperbolic Sine–Cosine Improvement for UAV Path Planning

School of Information Engineering, Tianjin University of Commerce, Tianjin 300134, China
* Author to whom correspondence should be addressed.
Biomimetics 2024, 9(12), 757; https://doi.org/10.3390/biomimetics9120757
Submission received: 14 November 2024 / Revised: 4 December 2024 / Accepted: 10 December 2024 / Published: 12 December 2024

Abstract

Three-dimensional (3D) path planning is a crucial technology for ensuring the efficient and safe flight of UAVs in complex environments. Traditional path planning algorithms often struggle to navigate complex obstacle environments and to quickly identify the optimal path. To address these challenges, this paper introduces a Nutcracker Optimizer integrated with Hyperbolic Sine–Cosine (ISCHNOA). First, the exploitation process of the sinh cosh optimizer is incorporated into the foraging strategy to enhance the efficiency of the nutcracker in locating high-quality food sources within the search area. Secondly, a nonlinear function is designed to improve the algorithm's convergence speed. Finally, a sinh cosh optimizer that incorporates historical positions and dynamic factors is introduced to strengthen the influence of the optimal position on the search process, thereby improving the accuracy with which the nutcracker retrieves stored food. The performance of the ISCHNOA algorithm is tested using 14 classical benchmark test functions as well as the CEC2014 and CEC2020 suites and applied to UAV path planning models. The experimental results demonstrate that the ISCHNOA algorithm outperforms the compared algorithms across the three test suites, with the total cost of the planned UAV paths being lower.

1. Introduction

Drone technology is rapidly maturing, exhibiting high flexibility, efficiency, and autonomy. Its range of applications is expanding, allowing drones to effectively replace humans in executing high-risk tasks. Among them, searching for trapped people in disaster areas is one of the important application areas of drones. Search and rescue missions can be extremely challenging for rescue teams due to terrain constraints. However, drones are better suited for these tasks because they can navigate around ground obstacles and conduct searches significantly faster than humans. Effective and efficient path planning not only reduces the cost of search and rescue operations but also significantly enhances their overall efficiency. The UAV path planning problem can be defined as an optimization challenge, with the primary objective of finding the most cost-effective and efficient route between the starting and ending points while meeting all constraints [1]. In contrast to manual control of the UAV flight path by the operator [2], employing metaheuristic algorithms for UAV flight path planning can yield significantly higher efficiency and accuracy [3]. However, in areas with dense obstacles, drones often face challenges in finding the optimal path and may even risk collisions. Therefore, in recent years, many researchers have extensively explored the UAV path planning problem, aiming to identify flight routes that minimize costs, enhance safety, and reduce travel time [4].
UAV search and rescue is fundamentally a three-dimensional path planning problem, and the main algorithms for optimizing UAV paths include traditional optimization methods and metaheuristic algorithms. Traditional optimization algorithms are techniques utilized to determine the optimal solution for a given problem, including Dijkstra’s algorithm [5], integer programming [6], and linear programming [7]. Dijkstra’s algorithm can accurately find the shortest path; however, it requires traversing all nodes in the graph and cannot handle negative weights. Consequently, it often requires enhancement through supplementary algorithms or improvements to boost its performance. Since traditional Dijkstra’s algorithm struggles to handle problems with complex constraints, Subaselvi Sundarraj et al. [8] proposed an improved particle swarm optimization (PSO) algorithm that integrates weight control with Dijkstra’s algorithm. Deng et al. [9] enhanced Dijkstra’s algorithm by applying the integral theorem to improve its performance in handling uncertain edge weights. However, this improvement yields favorable results only in specific scenarios, and the algorithm’s computational efficiency remains inadequate in more complex environments. Linear programming methods, such as the simplex and interior point algorithms, can guarantee the existence and uniqueness of a solution. However, these methods often struggle with uncertain problems, and the optimization results may not always be globally optimal when dealing with complex or highly nonlinear issues. Therefore, Jiang et al. [10] combined linear programming, fuzzy clustering, and pigeon optimization algorithms to enhance the performance of linear programming in addressing nonlinear problems. The integrated approach significantly improves both optimization effectiveness and computational efficiency. However, it still remains susceptible to falling into local optima. Cheng et al. 
[11] proposed a mixed logical linear programming algorithm to enhance both the computational efficiency and the solution quality of linear programming. In addition, Kvitko et al. [12] integrated chaos theory with neural network modeling, offering a novel approach to the path planning problem. Moysis et al. [13] introduced an innovative 3D path planning method that leverages chaotic mapping to generate pseudo-random bit sequences, enhancing both the randomness and efficiency of path design. Although traditional algorithms have made significant progress, they typically require the objective function to be continuously differentiable and are prone to getting stuck in local optima [14]. These limitations have prompted researchers to prefer metaheuristic algorithms.
Metaheuristic optimization algorithms are inspired by natural and physical phenomena. Generally, these algorithms can be classified into four categories [15]: group intelligence-based algorithms, physics-based algorithms, human-based algorithms, and evolution-based algorithms. Metaheuristic algorithms are more stochastic than traditional optimization algorithms and can adjust their parameters based on the specific problem. This inherent stochasticity allows them to effectively tackle complex nonlinear issues [16] and has made them popular among researchers for their ability to yield satisfactory solutions. Examples include optimizing real-valued parameters and constrained engineering problems [17], energy management [18], flow shop scheduling [19], brain tumor classification [20], biomedical feature selection [21], combinatorial optimization [22], supply chain management [23], wireless sensor network localization [24], and clustering [25]. Group intelligence algorithms are derived from simulating the social behaviors of swarming organisms, with particle swarm optimization (PSO) [26] being the most popular among researchers; this algorithm simulates the foraging behavior of birds and is characterized by a simple structure and few parameters [27]. Jiang et al. [28] divided the flock into several sub-flocks, with each sub-flock performing PSO-based updates and iterations simultaneously. At the later stages of iteration, each sub-flock shares its optimal position, significantly enhancing both the exploration and exploitation capabilities of PSO. However, this improvement strategy does not address the convergence speed of PSO. Sun et al. [29] divided the bird flocks into main flocks and sub-flocks, with the main flocks tasked with conducting a random optimization search in the search space to maintain diversity, while the sub-flocks focused on exploring regions near the local optimum.
The synergy between these two populations significantly enhances the convergence speed of the algorithm; however, the algorithm remains prone to falling into local optima. Shao et al. [30] designed an adaptive linear variation of acceleration coefficients and maximum velocities to introduce random mutations for low-quality particles, thereby improving solution quality and reducing the risk of falling into local optima. However, the convergence accuracy of this algorithm remains lower compared to more recent swarm intelligence algorithms. Meanwhile, to enhance the convergence accuracy of the algorithm, Jiang et al. [31] incorporated an information feedback model into the Flamingo Search Algorithm (FSA). However, the stability of this modified algorithm remains relatively weak. Therefore, [32] combined PSO, which incorporates three inertia weight coefficients, with the Symbiotic Organism Search (SOS) algorithm to improve both the convergence accuracy and stability of the algorithm. Evolution-based optimization algorithms identify near-optimal solutions by simulating the natural concept of survival of the fittest. Common approaches include the genetic algorithm (GA) [33] and differential evolution (DE) [34]. The genetic algorithm transforms the optimization problem into a process of chromosome crossover and recombination in biological evolution. However, it suffers from poor population diversity and is less suited for solving continuous problems. Krishna et al. [35] combined a genetic algorithm with the K-means algorithm and demonstrated the convergence of this hybrid approach. However, achieving accurate convergence remains a significant challenge. Physics-based optimization algorithms simulate physical phenomena, such as black-hole optimization (BH). Pashaei et al. [36] demonstrate that the binary black hole algorithm offers a notable improvement in computational complexity; however, it still depends on the choice of initial parameters. 
Human-based algorithms, such as human behavior-based optimization (HBBO), simulate the two phases of human exploration and exploitation. Liu et al. [37] proposed a particle swarm algorithm that incorporates human behavior, significantly enhancing both the convergence speed and accuracy of the algorithm. However, despite these improvements, the algorithm still struggles to guarantee the discovery of a globally optimal solution. From the above discussion, it is evident that most metaheuristic algorithms are prone to issues such as getting stuck in local optima, lacking sufficient population diversity, and exhibiting slow convergence.
Compared with other metaheuristic algorithms, the nutcracker optimizer (NOA) [38] is characterized by its simple structure, significant randomness, and a low tendency to get trapped in local optima. The NOA algorithm, developed by Abdel-Basset et al., is a population-based optimization method. Due to its outstanding optimization capabilities, it has captured the attention of researchers since its inception and has been applied in multiple areas, including solar PV model parameter extraction [39], multi-objective optimization [40], fresh produce distribution [41], detection of news truthfulness and reliability [42], power system scheduling [43], and medical image classification [44]. Despite its successful application across various fields, the nutcracker optimizer still suffers from slow convergence and limited convergence accuracy. The sinh cosh optimizer (SCHO) [45] exhibits excellent exploitation capability while maintaining a strong balance between the exploration and exploitation processes. This study integrates the SCHO algorithm into the nutcracker optimizer, tackling the limitations of the NOA algorithm such as inefficient searching and difficulty in fully leveraging reference point information. The contributions of this paper are specified below:
  • To tackle the issue of randomized foraging strategies and wide search ranges that result in inefficient search, we integrate the SCHO algorithm to improve the search efficiency of the NOA algorithm by enhancing the foraging strategy to search for high-quality food.
  • To tackle the issue of rapid population diversity loss and slow convergence in storage strategies, we employ a nonlinear function to steer the search direction of the current optimal individual while simultaneously preserving population diversity in the later iterations to prevent convergence to a local optimum.
  • Recognizing that nutcrackers do not fully leverage the information regarding reference point locations, this paper implements an improved SCHO algorithm to incorporate this reference point information. This enhancement bolsters the collaborative search capabilities of the nutcracker populations.
  • The performance of the improved algorithm is assessed using 14 classical benchmark test functions, as well as the CEC2014 and CEC2020 test suites. Additionally, the optimized algorithm is applied to real map models with progressively increasing densities, thereby validating its effectiveness.
ISCHNOA demonstrates competitive performance not only against the recently proposed NOA algorithm but also against other high-performing algorithms. The remainder of this paper is structured as follows: Section 2 describes the UAV path planning problem. Section 3 presents the NOA algorithm. Section 4 introduces the ISCHNOA algorithm. Section 5 assesses the performance of the ISCHNOA algorithm and applies it to 3D maps of real-world scenarios. Section 6 summarizes the research.

2. Description of the Issue

The UAV path planning problem involves finding an optimal path from the start position to the goal position under a set of constraints. The optimization process, on the other hand, further refines this path to identify a near-optimal solution. Constraints can be represented by cost functions, which include path cost, threat cost, altitude cost, and smoothing cost [46]. These factors are considered collectively to ensure the smooth operation of UAVs in complex environments.

2.1. Path Cost

The path cost reflects the distance between the starting point and the destination. In this study, the UAV path is composed of multiple waypoints, with the coordinates of each waypoint denoted as $P_{i,j} = (x_{i,j}, y_{i,j}, z_{i,j})$. The distance between two nodes is calculated using the Euclidean distance, denoted as $\left\| \overrightarrow{P_{i,j} P_{i,j+1}} \right\|$. The total path cost $F_1$ is mathematically modeled as shown in Equation (1).
$$F_1 = \sum_{j=1}^{n-1} \left\| \overrightarrow{P_{i,j} P_{i,j+1}} \right\| \tag{1}$$
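As an illustration (not the authors' implementation, which was written in MATLAB), Equation (1) can be sketched in Python, assuming waypoints are given as (x, y, z) tuples:

```python
import math

def path_cost(waypoints):
    """Total Euclidean length of a waypoint sequence (Equation (1))."""
    return sum(
        math.dist(waypoints[j], waypoints[j + 1])
        for j in range(len(waypoints) - 1)
    )

# Two axis-aligned segments of lengths 3 and 4.
print(path_cost([(0, 0, 0), (3, 0, 0), (3, 4, 0)]))  # -> 7.0
```

`math.dist` (Python 3.8+) computes the Euclidean distance between two equal-length points.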

2.2. Threat Cost

Threat costs refer to the potential risks of collision during drone flight. During flight, there is a risk of drones colliding with obstacles, so effective obstacle avoidance is a crucial factor in assessing the threat cost. The magnitude of the threat cost is closely related to the distance between the UAV and obstacles, allowing the entire flight area to be categorized into safe, threat, and collision zones. Figure 1 illustrates the connections between these three regions. When the flight path is within the safety region, its cost is zero. However, if the flight path enters the collision region, it becomes impossible to determine an optimal path. The threat cost $F_2$ is represented by Equation (2).
$$F_2(x_i) = \sum_{j=1}^{n-1} \sum_{k=1}^{K} T_k\!\left(\overrightarrow{P_{i,j} P_{i,j+1}}\right), \quad T_k = \begin{cases} \infty, & \text{if } d_k \le D + R_k \\ (S + D + R_k) - d_k, & \text{if } D + R_k < d_k \le S + D + R_k \\ 0, & \text{if } d_k > S + D + R_k \end{cases} \tag{2}$$
The obstacles examined in this paper are regular cylinders. The number of obstacles is denoted by K, the center of the k-th obstacle by $C_k$, its radius by $R_k$, and the diameter of the drone by D; $d_k$ is the distance from the path segment to the obstacle center, and S is the width of the threat zone surrounding the collision zone.
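The piecewise threat term of Equation (2) can be sketched as follows; the helper name and taking the precomputed distance `d_k` as a scalar argument are illustrative assumptions:

```python
import math

def threat_term(d_k, D, R_k, S):
    """Piecewise threat cost T_k of Equation (2) for one obstacle.

    d_k: distance from the path segment to the obstacle center,
    D: drone diameter, R_k: obstacle radius, S: threat-zone width.
    """
    if d_k <= D + R_k:          # collision zone: path is infeasible
        return math.inf
    if d_k <= S + D + R_k:      # threat zone: cost grows as d_k shrinks
        return (S + D + R_k) - d_k
    return 0.0                  # safe zone
```

Summing `threat_term` over all segments and obstacles yields $F_2$.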

2.3. Altitude Cost

Altitude costs refer to the additional expenses incurred during flight as a result of changes in altitude. Figure 2 illustrates that drone flight altitude must be maintained within a specified minimum and maximum range, beyond which penalties are incurred. When drones fly at high altitudes, they become more susceptible to external environmental factors, which increases the risk of flight accidents. Conversely, flying at excessively low altitudes poses a danger of collision with the ground. The altitude cost $F_3$ is represented by Equation (4).
$$H_{ij} = \begin{cases} \left| h_{ij} - \dfrac{h_{\max} + h_{\min}}{2} \right|, & \text{if } h_{\min} \le h_{ij} \le h_{\max} \\ \infty, & \text{otherwise} \end{cases} \tag{3}$$
$$F_3(x_i) = \sum_{j=1}^{n} H_{ij} \tag{4}$$
where the maximum and minimum flight altitudes are denoted as $h_{\max}$ and $h_{\min}$, respectively, and $h_{ij}$ indicates the UAV's height above the ground.
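A minimal sketch of the altitude term $H_{ij}$ in Equations (3) and (4); the function name is an illustrative assumption:

```python
def altitude_term(h, h_min, h_max):
    """H_ij of Equation (3): deviation from the mid-altitude inside the
    allowed band, infinite penalty outside it."""
    if h_min <= h <= h_max:
        return abs(h - (h_max + h_min) / 2)
    return float("inf")

# The band midpoint (50 here) carries zero cost; band edges cost the most.
print(altitude_term(50, 20, 80))  # -> 0.0
```

$F_3$ is then the sum of `altitude_term` over all waypoints of a path.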

2.4. Smoothing Cost

The smoothing cost is introduced in the path optimization process to minimize abrupt changes in curvature and steering angle of the flight path. Let three consecutive waypoints be $P_{i,j}$, $P_{i,j+1}$, and $P_{i,j+2}$, with their projections onto the $xOy$ plane denoted as $P'_{i,j}$, $P'_{i,j+1}$, and $P'_{i,j+2}$, respectively. The projected vector between two consecutive points is calculated as shown in Equation (5).
$$\overrightarrow{P'_{i,j} P'_{i,j+1}} = \mathbf{k} \times \left( \overrightarrow{P_{i,j} P_{i,j+1}} \times \mathbf{k} \right) \tag{5}$$
$\mathbf{k}$ is the unit vector along the z-axis, while the steering and climb angles are denoted by $\Phi$ and $\psi$, respectively. Figure 3 illustrates the steering angle and climb angle, defined in Equations (6) and (7), respectively. The smoothing cost $F_4$ is mathematically modeled as shown in Equation (8).
$$\Phi_{ij} = \arctan\left( \frac{\left\| \overrightarrow{P'_{i,j} P'_{i,j+1}} \times \overrightarrow{P'_{i,j+1} P'_{i,j+2}} \right\|}{\overrightarrow{P'_{i,j} P'_{i,j+1}} \cdot \overrightarrow{P'_{i,j+1} P'_{i,j+2}}} \right) \tag{6}$$
$$\psi_{ij} = \arctan\left( \frac{z_{i,j+1} - z_{i,j}}{\left\| \overrightarrow{P'_{i,j} P'_{i,j+1}} \right\|} \right) \tag{7}$$
$$F_4(x_i) = \omega_1 \sum_{j=1}^{n-2} \Phi_{ij} + \omega_2 \sum_{j=1}^{n-1} \left| \psi_{ij} - \psi_{i,j-1} \right| \tag{8}$$
where $\omega_1$ and $\omega_2$ are the penalty coefficients for the steering and climb angles, respectively.
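Equations (6) and (7) can be sketched as follows, assuming waypoints as (x, y, z) tuples and using `atan2` on the $xOy$ projections (the function names are illustrative):

```python
import math

def turning_angle(p0, p1, p2):
    """Steering angle Phi (Equation (6)) between consecutive segments
    projected onto the xOy plane."""
    v1 = (p1[0] - p0[0], p1[1] - p0[1])
    v2 = (p2[0] - p1[0], p2[1] - p1[1])
    cross = v1[0] * v2[1] - v1[1] * v2[0]   # |v1 x v2| in 2D (signed)
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.atan2(abs(cross), dot)

def climb_angle(p0, p1):
    """Climb angle psi (Equation (7)) of one segment."""
    horiz = math.hypot(p1[0] - p0[0], p1[1] - p0[1])
    return math.atan2(p1[2] - p0[2], horiz)
```

Using `atan2` instead of a bare `arctan` quotient avoids division by zero for right-angle turns.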

2.5. Overall Cost Function

The total cost in UAV path planning is a crucial factor in ensuring mission success and safety. Incorporating these costs into the planning process can effectively reduce collision risk, optimize flight altitude, enhance energy efficiency, and shorten mission completion times, thereby better addressing the needs of various applications. The total cost function F is defined in Equation (9).
$$F = \sum_{k=1}^{4} \alpha_k F_k(x_i) \tag{9}$$
where $\alpha_1$–$\alpha_4$ represent the weighting coefficients for the path, threat, altitude, and smoothing costs, respectively.
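Equation (9) is a plain weighted sum; a minimal sketch (the ordering of the cost list as $F_1$–$F_4$ is an assumption for illustration):

```python
def total_cost(costs, weights):
    """Weighted sum F of Equation (9).

    costs:   [F1, F2, F3, F4] for one candidate path
    weights: [alpha1, alpha2, alpha3, alpha4]
    """
    assert len(costs) == len(weights) == 4
    return sum(a * f for a, f in zip(weights, costs))
```

This scalarized F is the fitness function that the optimizer minimizes in Section 5.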

3. Overview of Nutcracker Optimizer

The NOA algorithm, a metaheuristic optimization algorithm, is inspired by the behaviors of nutcrackers throughout different seasons. In summer and fall, nutcrackers forage for high-quality food in open areas and store it away. In spring and winter, they retrieve their stored food by using information about reference points in their environment.

3.1. Population Initialization

Assuming that there are N individuals in the nutcracker population, the initialization formula is as in Equation (10).
$$X_{i,j}^{t} = (ub - ul) \cdot rand + ul, \quad i = 1, 2, \ldots, N, \; j = 1, 2, \ldots, D \tag{10}$$
where t represents the current iteration, and $ub$ and $ul$ are the upper and lower bounds of the search region, respectively. N represents the number of nutcracker individuals in the population, D represents the problem dimension, and $rand$ is a random vector in the interval [0, 1].
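Equation (10) can be sketched as follows (a plain-Python illustration with scalar bounds shared across dimensions; the seed parameter is added only for reproducibility):

```python
import random

def init_population(N, D, ul, ub, seed=0):
    """Equation (10): uniform initialization of N nutcrackers in [ul, ub]^D."""
    rng = random.Random(seed)
    return [[(ub - ul) * rng.random() + ul for _ in range(D)] for _ in range(N)]
```

In the general case $ul$ and $ub$ can be per-dimension vectors; the scalar form keeps the sketch short.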

3.2. Foraging Strategy

In their foraging strategy, nutcrackers search for high-quality seeds within their foraging area. When they find high-quality food, they carry it back to their storage area. However, if they encounter lower-quality seeds, nutcrackers will reselect their locations and continue searching for better options. The overall foraging strategy is presented in Equation (11).
$$X_{i,j}^{t+1} = \begin{cases} X_{i,j}^{t}, & \text{if } \eta_1 < \eta_2 \\ X_{m,j}^{t} + \gamma \cdot (X_{A,j}^{t} - X_{B,j}^{t}) + \mu \cdot (r_1^2 \cdot ub_j - ul_j), & \text{if } \eta_1 \ge \eta_2 \text{ and } t \le T/2 \\ X_{C,j}^{t} + \mu \cdot (X_{A,j}^{t} - X_{B,j}^{t}) + \mu \cdot (r_2 < \delta) \cdot (r_1^2 \cdot ub_j - ul_j), & \text{otherwise} \end{cases} \tag{11}$$
where $X_{i}^{t+1}$ represents the position of the i-th nutcracker after updating in the current generation, $X_{m,j}^{t}$ denotes the mean position of the population in the j-th dimension, and $X_{A,j}^{t}$ and $X_{B,j}^{t}$ are two distinct nutcrackers randomly selected from the population. Additionally, $ub_j$ and $ul_j$ define the upper and lower bounds of the j-th dimension; $r_1$–$r_4$, $\eta_1$, and $\eta_6$–$\eta_{11}$ are random numbers in the range [0, 1]; $\gamma$ and $\eta_3$ are random numbers generated by Lévy flight; $\eta_2$ follows a normal distribution; and $\mu$ is defined as shown in Equation (12).
$$\mu = \begin{cases} \eta_1, & \text{if } r_2 < r_3 \\ \eta_2, & \text{if } r_3 < r_4 \\ \eta_3, & \text{if } r_2 < r_4 \end{cases} \tag{12}$$

3.3. Storage Strategy

In the food storage strategy, the nutcracker will transport high-quality seeds found during the exploration phase to the storage area. The overall storage strategy is presented in Equation (13).
$$X_i^{t+1} = \begin{cases} X_i^{t} + \mu \cdot (X_{best}^{t} - X_i^{t}) \cdot \lambda + r_2 \cdot (X_A^{t} - X_B^{t}), & \text{if } \eta_1 < \eta_2 \\ X_{best}^{t} + \mu \cdot (X_A^{t} - X_B^{t}), & \text{if } \eta_1 < \eta_5 \\ X_{best}^{t} \cdot l, & \text{otherwise} \end{cases} \tag{13}$$
where $X_{best}^{t}$ represents the historically optimal position of the nutcracker, $\lambda$ is a random number generated using the Lévy flight strategy, and $l$ is a linearly decreasing function defined over the interval [0, 1], which helps the algorithm avoid converging to a local optimum. Meanwhile, Equation (14) maintains a balance between the foraging and storage strategies of the NOA algorithm.
$$X_i^{t+1} = \begin{cases} \text{Equation (11)}, & \text{if } \phi > Pa_1 \\ \text{Equation (13)}, & \text{otherwise} \end{cases} \tag{14}$$
where $\phi$ is a random number in the interval [0, 1] and $Pa_1$ is a factor that decreases linearly from 1 to 0.
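The switching rule of Equation (14) can be sketched as follows; the linear schedule for $Pa_1$ is written out explicitly, and the function and return labels are illustrative:

```python
import random

def choose_phase(t, T, rng=random):
    """Equation (14): pick the foraging update (Eq. (11)) or the storage
    update (Eq. (13)). Pa1 decreases linearly from 1 to 0, so (per the
    source's rule phi > Pa1) the foraging branch fires more often as the
    iterations progress."""
    pa1 = 1 - t / T
    return "foraging" if rng.random() > pa1 else "storage"
```

A seeded `random.Random` instance can be passed as `rng` for reproducible runs.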

3.4. Caching Strategy

During the winter months, the nutcracker transitions from storage mode to cache mode. With its exceptional spatial memory, the nutcracker can remember the locations of reference points marked during the storage phase and efficiently locate stored seeds using these markers. An individual nutcracker may mark many reference point locations, but in this paper only two reference points are considered, referred to as $RP$. $RP$ can be represented by the matrix in Equation (15).
$$RPs = \begin{bmatrix} RP_{1,1}^{t} & RP_{1,2}^{t} \\ \vdots & \vdots \\ RP_{i,1}^{t} & RP_{i,2}^{t} \\ \vdots & \vdots \\ RP_{N,1}^{t} & RP_{N,2}^{t} \end{bmatrix} \tag{15}$$
where $RP_{i,1}^{t}$ and $RP_{i,2}^{t}$ are the first and second reference points of the i-th nutcracker in generation t, respectively. The position initialization equations for the two reference points are shown in Equations (16) and (17).
$$RP_{i,1}^{t} = \begin{cases} X_i^{t} + \alpha \cdot \cos\theta \cdot (X_A^{t} - X_B^{t}) + \alpha \cdot RP, & \text{if } \theta = \pi/2 \\ X_i^{t} + \alpha \cdot \cos\theta \cdot (X_A^{t} - X_B^{t}), & \text{otherwise} \end{cases} \tag{16}$$
$$RP_{i,2}^{t} = \begin{cases} X_i^{t} + \left( \alpha \cdot \cos\theta \cdot \left( (ub - ul) \cdot \eta_5 + ul \right) + \alpha \cdot RP \right) \cdot U, & \text{if } \theta = \pi/2 \\ X_i^{t} + \alpha \cdot \cos\theta \cdot \left( (ub - ul) \cdot \eta_5 + ul \right) \cdot U, & \text{otherwise} \end{cases} \tag{17}$$
where $RP_{i,1}^{t}$ represents row i, column 1 of the reference point matrix and $RP_{i,2}^{t}$ represents row i, column 2. $U$ is a binary vector, and $\alpha$ is a factor that decreases from 1 to 0 over the iterations; they are defined in Equations (18) and (19), respectively.
$$U = \begin{cases} 1, & \text{if } r_3 < P_{rp} \\ 0, & \text{otherwise} \end{cases} \tag{18}$$
$$\alpha = \begin{cases} \left( 1 - \dfrac{t}{T} \right)^{\frac{2t}{T}}, & \text{if } r_2 > r_3 \\ \left( \dfrac{t}{T} \right)^{\frac{2}{t}}, & \text{otherwise} \end{cases} \tag{19}$$
where t and T represent the current iteration number and the maximum iteration number, respectively, and α is proposed to ensure the convergence speed of the NOA algorithm. As the iteration proceeds, the nutcracker explores regions that may be near the stored food and searches using reference point locations that suit its needs. The nutcracker’s new position is updated based on the relationship between the reference point and the fitness value of its current location, which is mathematically modeled as shown in Equation (20).
$$X_i^{t+1,1} = \begin{cases} X_i^{t}, & \text{if } f(X_i^{t}) < f(RP_{i,1}^{t}) \\ RP_{i,1}^{t}, & \text{otherwise} \end{cases} \tag{20}$$
In this case, Equation (20) guides the individual nutcracker to search the area around the reference point $RP$. If the nutcracker is unable to find quality food at the first reference point, it will then search around the second reference point.

3.5. Recovery Strategy

Nutcrackers encounter two situations when searching for their tagged cache locations. The first scenario is that the nutcracker remembers exactly where the markers are located. In this case, there are two possibilities: either the stored food is still there, or the food is no longer there due to weather changes or depredation by other organisms, which is mathematically modeled as shown in Equation (21). $X_{i,j}^{t+1,new1}$ denotes the position found based on the first reference point.
$$X_{i,j}^{t+1,new1} = \begin{cases} X_{i,j}^{t}, & \text{if } \eta_6 < \eta_7 \\ X_{i,j}^{t} + r_5 \cdot (X_{best,j}^{t} - X_{i,j}^{t}) + r_6 \cdot (RP_{i,1}^{t} - X_{C,j}^{t}), & \text{otherwise} \end{cases} \tag{21}$$
where $X_{C,j}^{t}$ is a randomly selected individual nutcracker from the population. The second case occurs when the nutcrackers cannot remember the position of the first reference point, prompting them to search around the second reference point, which is mathematically modeled as shown in Equation (22).
$$X_i^{t+1,2} = \begin{cases} X_i^{t}, & \text{if } f(X_i^{t}) < f(RP_{i,2}^{t}) \\ RP_{i,2}^{t}, & \text{otherwise} \end{cases} \tag{22}$$
Equation (22) compares the degree of superiority between the reference point and the current position of the nutcracker, selecting the more favorable of these positions. This selection forms the basis for Equation (23), allowing the search to be centered around the chosen superior location. In Equation (23), $X_{i,j}^{t+1,new2}$ denotes the position found based on the second reference point.
$$X_{i,j}^{t+1,new2} = \begin{cases} X_{i,j}^{t}, & \text{if } \eta_8 < \eta_9 \\ X_{i,j}^{t} + r_5 \cdot (X_{best,j}^{t} - X_{i,j}^{t}) + r_6 \cdot (RP_{i,2}^{t} - X_{C,j}^{t}), & \text{otherwise} \end{cases} \tag{23}$$
The first case of Equation (23) performs a local search around historical individuals, while the second case incorporates the current optimal position, the second reference point, and randomly selected individuals from the population to improve global search capability. The overall recovery strategy is presented in Equation (24). The location found during the recovery phase is denoted by $X_i^{t+1}(r)$.
$$X_i^{t+1}(r) = \begin{cases} X_{i,j}^{t+1,new1}, & \text{if } \eta_{10} < \eta_{11} \\ X_{i,j}^{t+1,new2}, & \text{otherwise} \end{cases} \tag{24}$$
To maintain the balance of the NOA algorithm between the selection of reference points, Equation (25) is proposed. $X_i^{t+1}(R)$ is the optimal location chosen from the first cache point, the second cache point, and the current individual.
$$X_i^{t+1}(R) = \begin{cases} X_i^{t+1,1}, & \text{if } f(\text{Equation (20)}) < f(\text{Equation (22)}) \\ X_i^{t+1,2}, & \text{otherwise} \end{cases} \tag{25}$$
Equation (26) is designed to maintain a dynamic balance between the caching strategy and the recovery strategy.
$$X_i^{t+1} = \begin{cases} X_i^{t+1}(r), & \text{if } \gamma_2 > Pa_2 \\ X_i^{t+1}(R), & \text{otherwise} \end{cases} \tag{26}$$
where $Pa_2$ is fixed at 0.2 and $\gamma_2$ is a random number in the interval [0, 1]. The NOA algorithm compares the new individual with the historical individual in each iteration. If the position of the new individual is closer to the optimal solution, it replaces the historical individual, which is mathematically modeled as shown in Equation (27).
$$X_i^{t+1} = \begin{cases} X_i^{t+1}, & \text{if } f(X_i^{t+1}) < f(X_i^{t}) \\ X_i^{t}, & \text{otherwise} \end{cases} \tag{27}$$
where $f(X_i^{t+1})$ denotes the updated fitness value of the i-th nutcracker.

4. Nutcracker Optimizer Integrated with Hyperbolic Sine–Cosine

The NOA algorithm has several drawbacks, including low convergence accuracy, slow convergence speed, and an inability to fully utilize reference point information. These issues may prevent the NOA algorithm from achieving optimal solutions when addressing complex problems. Therefore, enhancing the performance of NOA algorithms has become a top priority. To address the shortcomings of the NOA, this study introduces a sinh cosh optimizer based on the NOA and proposes a nutcracker optimization algorithm that incorporates hyperbolic sine–cosine.

4.1. Hyperbolic Sine–Cosine Optimization Algorithm for Optimal Forage Strategy

In the foraging strategy, individual nutcrackers search randomly throughout the iteration cycle. This strong randomness helps the algorithm avoid getting stuck in local optima; however, it can also result in slower convergence and may even affect the accuracy of the optimization search. For this reason, this paper introduces the SCHO algorithm into the foraging strategy of the NOA algorithm to facilitate searching in the vicinity of the population's historical optimal positions. In this process, a weight scaling factor $W_1$ is introduced. In the initial stages of the iteration, this factor reduces the influence of the historical optimal position on the search process, enabling the algorithm to concentrate on extensive stochastic searches, thereby exploring a larger solution space and avoiding local optima. The role of the weight scaling factor gradually evolves: in the later stages of the algorithm, the focus shifts to searching around the locations of historically optimal individuals, enhancing the efficiency of the nutcracker population in locating high-quality food sources. This dynamic adjustment mechanism enhances the algorithm's adaptability and significantly improves the effectiveness of the foraging strategy, enabling it to flexibly respond to search requirements at different stages; it is mathematically modeled in Equations (28) and (29).
$$X_{i,j}^{t+1} = X_{i,j}^{t} + r_7 \times \frac{\sinh r_8}{\cosh r_8} \times \left| W_1 \times X_{best,j}^{t} - X_{i,j}^{t} \right| \tag{28}$$
$$W_1 = r_9 \times \left( 2 \times \frac{t}{T} + n \right) \tag{29}$$
where n is a sensitivity parameter with a fixed value of 0.5.
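A sketch of the improved foraging step, under the reconstructed form of $W_1$ in Equation (29); $\sinh r / \cosh r$ of the same argument is written as `tanh`, and the function name is illustrative:

```python
import math
import random

def scho_forage_step(x, x_best, t, T, n=0.5, rng=random):
    """Sketch of Equations (28)-(29): move a solution toward the historical
    best with a tanh-shaped step whose weight W1 grows over the iterations
    (n is the fixed sensitivity parameter)."""
    r7, r8, r9 = rng.random(), rng.random(), rng.random()
    w1 = r9 * (2 * t / T + n)                        # Equation (29), as reconstructed
    return [
        xi + r7 * math.tanh(r8) * abs(w1 * bi - xi)  # sinh/cosh = tanh
        for xi, bi in zip(x, x_best)
    ]
```

Because the step term is nonnegative, each coordinate is nudged by an amount proportional to its distance from the weighted best position.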

4.2. Nonlinear Function

In the storage strategy, the third update equation in Equation (13) utilizes a linearly decreasing function to guide the population's search around the historically optimal positions. Although the linear decreasing function effectively mitigates the rapid loss of diversity in nutcracker populations, it tends to exhibit a relatively slow convergence rate, which may hinder its ability to achieve the desired solution within a limited number of iterations. To tackle this issue, this paper designs a nonlinear function and incorporates it into the storage strategy of the NOA algorithm. The nonlinear function has a faster rate of change and offers greater flexibility in adjusting the population's search strategy than the linear function $l$. The combination of nonlinear and linear functions, guided by historically optimal individuals, significantly accelerates the food storage process of the nutcrackers. Meanwhile, the dynamic adjustment of the parameter $\beta$ diversifies the population's search direction and reduces the risk of the population becoming stuck in a specific direction for an extended period. This process also improves the convergence speed of the NOA algorithm and is mathematically represented in Equations (30) and (31).
$$\beta = \begin{cases} 1, & \text{if } rand_1 < rand_2 \\ -1, & \text{otherwise} \end{cases} \tag{30}$$
$$\omega = \beta \times r_{14} \times e^{-k \cdot \frac{t}{T}} \tag{31}$$
where k is the factor controlling the rate of change of the nonlinear function; a value of k = 250 is used in this paper. The mathematical model for the combined use of the nonlinear and linear decreasing functions in the storage strategy of the NOA algorithm is presented in Equation (32).
$$X_i^{t+1} = \begin{cases} X_i^{t} + \mu \cdot (X_{best}^{t} - X_i^{t}) \cdot \lambda + r_2 \cdot (X_A^{t} - X_B^{t}), & \text{if } \eta_1 < \eta_2 \\ X_{best}^{t} + \mu \cdot (X_{A1}^{t} - X_{B1}^{t}), & \text{if } \eta_1 < \eta_5 \\ \omega \cdot X_{best}^{t} \cdot l, & \text{otherwise} \end{cases} \tag{32}$$
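The nonlinear factor of Equations (30) and (31) can be sketched as follows; the negative sign of the exponent is an assumption made so that the factor decays as t grows (with k = 250, a positive exponent would overflow):

```python
import math
import random

def omega(t, T, k=250, rng=random):
    """Equations (30)-(31): sign-flipping, rapidly decaying nonlinear factor.

    beta in {1, -1} diversifies the search direction; the exponential term
    (exponent sign reconstructed) shrinks the factor very quickly with t.
    """
    beta = 1 if rng.random() < rng.random() else -1     # Equation (30)
    return beta * rng.random() * math.exp(-k * t / T)   # Equation (31)
```

With k = 250 the factor is essentially zero well before the final iterations, so the third branch of Equation (32) collapses toward a pure local refinement late in the run.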

4.3. Hyperbolic Sine–Cosine Optimization Algorithm for Optimal Recovery Strategy

During the food recovery phase, the nutcracker population searches around reference points, random individuals, and the current optimal individual. Because the nutcracker cannot accurately remember the locations of reference points and random individuals, the NOA algorithm may not fully leverage the location information of the reference point, which can impact the overall recovery strategy. This situation can result in nutcrackers searching aimlessly within the area, potentially preventing them from locating stored food. In contrast, the exploration process of the SCHO algorithm targets a specific region for searching, allowing it to effectively integrate the information from reference points, random individuals, and the current optimal individual's position. This integrated information is then used as the primary reference point for further searches. However, the SCHO algorithm's iterative approach, which focuses on updating around the current optimal individual, can reduce population diversity and expose the algorithm to local optima. Therefore, to enhance the algorithm's performance, an improved update formula for SCHO is proposed to ensure that the nutcracker population can fully utilize the integrated reference point information, as shown in Equation (33).
\[
X_{i,j}^{t+1} = \begin{cases}
X_{i,j}^t + r_{10} \times W_2 \times X_{best,j}^t, & r_{11} > 0.5 \\
X_{i,j}^t - r_{10} \times W_2 \times X_{best,j}^t, & r_{11} < 0.5
\end{cases}
\tag{33}
\]
\[
W_2 = r_{12} \times \left( a_1 \times \cosh(r_{13}) + u \times \sinh(r_{13}) - 1 \right)
\tag{34}
\]
\[
a_1 = 3 \times \left( -1.3 \times \frac{t}{T} + m \right)
\tag{35}
\]
In Equations (33)–(35), W_2 serves as a weighting factor that scales how strongly the historically optimal individual is exploited in the search for the ideal solution, and m and u are fixed at 0.45 and 0.388, respectively. In summary, the pseudo-code of ISCHNOA is given in Algorithm 1.
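As an illustration of how Equations (33)–(35) combine, the following Python sketch updates a single dimension; the decreasing sign in a1 and the uniform [0, 1] ranges of r10–r13 are assumptions, and all names are illustrative.

```python
import math
import random

M, U = 0.45, 0.388  # constants fixed in the text

def w2_weight(t, T):
    """Weighting factor W2 of Eq. (34); a1 follows Eq. (35) and is
    assumed to decrease linearly over the run (an SCHO-style schedule)."""
    r12, r13 = random.random(), random.random()
    a1 = 3.0 * (-1.3 * t / T + M)  # Eq. (35), decreasing sign assumed
    return r12 * (a1 * math.cosh(r13) + U * math.sinh(r13) - 1.0)

def recovery_update(x, x_best, t, T):
    """Scalar (per-dimension) form of Eq. (33): step toward or away
    from the best individual depending on r11."""
    r10, r11 = random.random(), random.random()
    step = r10 * w2_weight(t, T) * x_best
    return x + step if r11 > 0.5 else x - step
```

Because W2 can turn negative once a1 has decreased far enough, the update naturally shifts from amplifying to damping the pull of the best individual over the course of the run.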
Algorithm 1 The ISCHNOA
Input: population size N, the lower limits of the variables lb, the upper limits of the variables ub, the current iteration number t = 0, and the maximum number of iterations T
Output: the best solution
1. Initialize N nutcrackers using Equation (10)
2. Identify the optimal individual based on fitness values.
3.    While (t < T)
4.       Generate random numbers σ1 and σ2 between 0 and 1.
5.       If σ1 ≥ σ2
6.          Generate a random number ϕ between 0 and 1.
7.          for i = 1:N
8.             for j = 1:d
9.                if ϕ > Pa1
10.                  Update X_i^{t+1} using Equations (11), (27) and (28).
11.               else
12.                  Update X_i^{t+1} using Equations (27) and (32).
13.               end if
14.            end for
15.            t = t + 1
16.         end for
17.    Else
18.       Generate the reference point matrix using Equations (15)–(17).
19.       Generate a random number φ between 0 and 1.
20.       for i = 1:N
21.          if φ > Pa2
22.             Update X_i^{t+1} using Equations (24), (27) and (33).
23.          else
24.             Update X_i^{t+1} using Equations (25), (27) and (33).
25.          end if
26.          t = t + 1
27.       end for
28.    end if
29. end while
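The control flow of Algorithm 1 can be sketched in Python as follows; the equation-specific position updates are deliberately replaced with generic Gaussian perturbations (marked as placeholders), so this reproduces only the two-branch structure of the pseudo-code, not the actual update rules.

```python
import random

def ischnoa(fitness, dim, lb, ub, n=25, T=500, pa1=0.2, pa2=0.4):
    """Structural sketch of Algorithm 1 (ISCHNOA). The equation-specific
    updates (Eqs. 11, 24-28, 32, 33) are abstracted into a generic
    Gaussian perturbation; only the branching logic follows the text."""
    pop = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n)]
    best = min(pop, key=fitness)
    t = 0
    while t < T:
        s1, s2 = random.random(), random.random()
        for i in range(n):
            if s1 >= s2:  # foraging-and-storage branch
                phi = random.random()
                base = pop[i] if phi > pa1 else best   # placeholder for Eqs. (11)/(27)/(28) vs. (27)/(32)
            else:         # cache-search-and-recovery branch
                varphi = random.random()
                base = best if varphi > pa2 else pop[i]  # placeholder for Eqs. (24)/(25) with (27)/(33)
            cand = [min(ub, max(lb, x + random.gauss(0.0, 0.1))) for x in base]
            if fitness(cand) < fitness(pop[i]):  # greedy replacement
                pop[i] = cand
            t += 1
        cur = min(pop, key=fitness)
        if fitness(cur) < fitness(best):
            best = cur
    return best, fitness(best)
```

Calling `ischnoa(lambda x: sum(v * v for v in x), dim=2, lb=-5.0, ub=5.0)` drives the sphere function toward the origin, which is enough to exercise the branch structure.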

5. Simulation Experiments and Result Analysis

All simulations were performed in MATLAB 2018. The experimental platform was a computer with an Intel Core i7-9750H 2.60 GHz CPU, a GTX 1660 Ti GPU, and the Windows 10 64-bit operating system.

5.1. Introduction to Test Functions and Parameter Settings

To verify the effectiveness of the ISCHNOA algorithm, this paper first employs classical benchmark functions (Table 1) to evaluate the performance of the improved NOA in solving complex problems. Classical benchmark test functions can be divided into unimodal, multimodal, and composite categories. A unimodal test function contains a single optimum and is primarily used to evaluate the local search capability and convergence speed of an optimization algorithm. A multimodal test function, in contrast, features multiple extrema and is primarily used to assess an algorithm's exploration capability, i.e., its ability to locate the globally optimal solution in a search space that contains both a global optimum and several local optima. Composite benchmark functions are used to assess an algorithm's ability to handle various types of problems and to test its capacity to maintain a dynamic balance between global and local search. Subsequently, this paper employs the CEC2014 (Table 2) and CEC2020 (Table 3) test suites to further validate the performance of the ISCHNOA algorithm. The functions in these two suites are more complex than the classical benchmark functions, imposing higher demands on the solution algorithms. To further validate the effectiveness of the improvements, this paper compares the ISCHNOA algorithm with several recently published optimization algorithms, including the Honey Badger Algorithm [47] (HBA), the Equilibrium Optimizer [48] (EO), the Sinh Cosh Optimizer (SCHO), and the Nutcracker Optimizer (NOA). It is also compared with highly cited algorithms such as the Whale Optimization Algorithm [49] (WOA) and the Grey Wolf Optimizer [50] (GWO). All of these algorithms are designed for mathematical optimization problems, and their parameters are set according to the authors' recommendations in the respective literature.
In this paper, we set the population size N to 25, the number of iterations for the classical benchmark test functions to 500, and the number of evaluations for the CEC2014 and CEC2020 test suites to 50,000. P a 1 , P a 2 , and δ are the three main control parameters of the NOA algorithm, which are fixed at 0.2, 0.4, and 0.05 in the ISCHNOA algorithm to ensure a fair comparison of the algorithms.

5.2. Classical Benchmark Function

In this paper, the ISCHNOA algorithm is evaluated for its exploration capabilities and its effectiveness in escaping local optima through tests on 14 classical benchmark functions. Based on Figure 4 and Figure 5, it can be concluded that while the ISCHNOA algorithm may not achieve the same level of convergence accuracy as some of the comparison algorithms for certain test functions, it exhibits the fastest convergence speed among them. This indicates that the ISCHNOA algorithm offers a significant advantage in quickly identifying near-optimal solutions, making it particularly well-suited for optimization problems that demand rapid responses. Overall, the improved algorithm demonstrates strong performance and considerable potential.
As illustrated in Figure 4 and Figure 5, the ISCHNOA algorithm outperforms the comparison algorithms on all unimodal functions except F6, where its accuracy is slightly lower, and it converges roughly twice as quickly as the other algorithms. On the multimodal functions F9∼F13 and the fixed-dimension multimodal functions F14 and F15, the ISCHNOA algorithm is less prone to being trapped by local optima than the benchmark algorithms while maintaining the fastest convergence speed. This further underscores the effectiveness of the proposed improvement strategy in enhancing both exploration capability and convergence speed. The results indicate that the ISCHNOA algorithm possesses strong local search capability, can escape from local optima in a timely manner, and converges rapidly.

5.3. Comparison of Algorithms in the CEC2014 Suite

The ISCHNOA algorithm is analyzed in depth using the CEC2014 test suite to verify the effectiveness of the proposed improvement strategy. The CEC2014 suite of functions is more varied and allows for a detailed exploration of the algorithms’ performance in several aspects, including their exploitation capabilities, exploration abilities, and their capacity to escape from local optima. The dimension selected for the CEC2014 test suite in this paper is 10.
Based on the mean, variance, and ranking listed in Table 4, the ISCHNOA algorithm performs best overall. It ranks first on all functions except F32, F33, F41, and F46, where its performance is slightly below that of the NOA algorithm. On the CEC2014 test suite, the NOA algorithm ranks second overall, while HBA ranks last. Benchmark information for the thirty test functions in the CEC2014 suite is provided in Table 2.

5.4. Comparison of Algorithms in the CEC2020 Suite

The more representative CEC2020 test suite was chosen to validate the performance of the improved algorithm, with the dimension set to 10. The suite contains ten functions ranging from simple to complex: a unimodal function (F54), basic multimodal functions (F55∼F57), hybrid functions (F58∼F60), and composition functions (F61∼F63). Table 5 presents the mean, standard deviation, and algorithm rankings over 30 independent runs. The ISCHNOA algorithm achieves the best results on all of the ten tested functions except F58. Additionally, only two functions show slightly higher standard deviations than the NOA algorithm; this does not affect the overall ranking of the ISCHNOA algorithm among the compared algorithms.

5.5. UAV Path Planning in Complex Environment

In UAV path planning, convergence refers to the algorithm's ability to find an optimal or near-optimal path within a given time frame or number of iterations and to settle into a steady state. Both convergence speed and convergence accuracy are crucial: convergence speed determines whether a viable path can be found within an acceptable timeframe during emergencies, while convergence accuracy influences the cost required for the UAV to reach its target and even affects flight safety. In this paper, the ISCHNOA algorithm is applied to the UAV path optimization problem in complex environments to assess its convergence speed and accuracy. Comparative experiments analyze the performance of the ISCHNOA algorithm across various scenarios and evaluate its feasibility for practical applications. The results show that the ISCHNOA algorithm can quickly identify near-optimal paths while reducing flight costs and enhancing the operational efficiency of UAVs in complex environments, all while ensuring safety, providing robust support for the use of drones in real-world operations.

5.5.1. Setting the Scene

In this paper, a digital elevation model of Christmas Island, Australia, was created using LiDAR sensors [51]. Eight obstacles, each representing a different level of difficulty, were established in two distinct areas for UAV path planning tests. By comparing ISCHNOA with NOA, WOA, GWO, HBA, EO, and SCHO applied to simulated maps, the performance of each algorithm for path planning in complex environments is evaluated [52]. Table 6 details the key parameters of the UAV path planning experiments to ensure that the simulations are realistic and reliable, providing a solid reference for practical applications. The comparative results further validate the advantages of the ISCHNOA algorithm in handling complex environments. The information about the obstacles and the cost obtained by each algorithm is displayed in Table 7.

5.5.2. Analysis of the UAV Path Planning Results

Figure 6 illustrates the top view of UAV path planning using the ISCHNOA algorithm across eight distinct obstacle classes. It effectively showcases the planned paths of the UAV and highlights their safety under varying obstacle densities. Comparing different levels of obstacles reveals noticeable changes in path selection. Notably, all algorithms are able to identify a relatively safe path, irrespective of the obstacle density. However, the optimal paths identified by these algorithms across the eight different obstacle class maps vary significantly due to the differences in their respective performance. The dark blue dots and gray circles indicate the collision areas. Since UAVs have physical dimensions, the area between the gray boundary and the white boundary is defined as the threat area, while the region beyond the white boundary is considered safe. This distinction aids in better assessing the effectiveness of path planning and its safety.
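The three regions described above can be expressed as a simple membership test; the planar distance check, the cylindrical obstacle model, and all names below are illustrative assumptions rather than the paper's exact threat model.

```python
import math

def classify_waypoint(p, center, r_obs, r_uav):
    """Classify a 2D waypoint against one cylindrical obstacle:
    inside the obstacle boundary -> collision; within the extra
    margin accounting for the UAV's own size -> threat; beyond
    that margin -> safe."""
    d = math.hypot(p[0] - center[0], p[1] - center[1])
    if d <= r_obs:
        return "collision"
    if d <= r_obs + r_uav:
        return "threat"
    return "safe"
```

A path is then acceptable only if every waypoint (and, in practice, every sampled point between waypoints) is classified as safe.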

5.5.3. Total Cost of the Drone Path

As can be seen from the planned routes, the optimal paths identified by the algorithms vary significantly across scenarios. Since the UAV path cost must account for multiple factors, the fitness value curve is used to represent the total cost of the path. Figure 7 illustrates the convergence of the optimal fitness values for the seven compared algorithms throughout the iteration process. The algorithms identify similar optimal paths with comparable total costs when obstacles are sparse, but the advantages of the ISCHNOA algorithm become increasingly evident in complex scenarios with high obstacle density. Compared with the NOA algorithm, the ISCHNOA algorithm more effectively avoids becoming trapped in local optima, further validating the improvement strategy proposed in this paper.

5.5.4. Sensitivity Analysis

Meanwhile, the adaptability of the ISCHNOA algorithm is further evaluated by adjusting the parameter k within a specified range. As shown in Table 8, k varies in the interval [150, 350]. In both the simple and obstacle-dense scenarios, changes in k typically result in only a small fluctuation in the cost.

6. Conclusions and Outlook

This paper presents an improved nutcracker optimization algorithm that integrates the hyperbolic sine–cosine optimizer (ISCHNOA) to enhance its effectiveness in tackling complex optimization problems and UAV path planning. Integrating the SCHO algorithm into the NOA algorithm enhances its exploration capability, enabling the UAV to find a flight path with lower cost and higher safety. Additionally, a nonlinear function is introduced to further accelerate the algorithm's convergence, allowing the UAV to identify a suitable flight path in less time. The ISCHNOA algorithm's ability to explore and to escape from locally optimal solutions was assessed using classical test functions as well as the CEC2014 and CEC2020 test suites. Meanwhile, this paper presents a UAV path planning model that incorporates distance cost, threat source cost, altitude cost, and path smoothing cost, and the ISCHNOA algorithm is applied to identify suitable routes across various complex scenarios. Experimental results indicate that the proposed ISCHNOA algorithm performs strongly on a variety of numerical optimization problems. The algorithm typically achieves near-optimal solutions in low-dimensional cases; however, its performance is less remarkable on high-dimensional problems, which presents an opportunity for further research. Future work will concentrate on improving the ISCHNOA algorithm's performance on high-dimensional problems so that it can better address challenges in complex scenarios, and on applying the algorithm to a range of UAV tasks in urban environments.

Author Contributions

Conceptualization, S.J.; methodology, S.C.; software, S.C.; validation, S.C.; formal analysis, S.C.; investigation, Y.Z.; resources, Y.Z.; data curation, H.S.; writing—original draft preparation, S.C.; writing—review and editing, S.J.; visualization, H.S.; supervision, Y.L.; project administration, S.J.; funding acquisition, S.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data are available from the corresponding author upon request.

Acknowledgments

The authors would like to thank all the reviewers for their constructive comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Dewangan, R.K.; Shukla, A.; Godfrey, W.W. Three-dimensional path planning using grey wolf optimizer for UAVs. Appl. Intell. 2019, 49, 2201–2217. [Google Scholar] [CrossRef]
  2. Jiang, S.; Lu, Y.; Song, H.; Lu, Z.; Zhang, Y. A Hybrid News Recommendation Approach Based on Title–Content Matching. Mathematics 2024, 12, 2125. [Google Scholar] [CrossRef]
  3. Zhao, Y.; Zheng, Z.; Liu, Y. Survey on computational-intelligence-based UAV path planning. Knowl.-Based Syst. 2018, 158, 54–64. [Google Scholar] [CrossRef]
  4. Cui, J.; Wu, L.; Huang, X.; Xu, D.; Liu, C.; Xiao, W. Multi-strategy adaptable ant colony optimization algorithm and its application in robot path planning. Knowl.-Based Syst. 2024, 288, 111459. [Google Scholar] [CrossRef]
  5. Xu, M.; Liu, Y.; Huang, Q.; Zhang, Y.; Luan, G. An improved Dijkstra's shortest path algorithm for sparse network. Appl. Math. Comput. 2007, 185, 247–254. [Google Scholar] [CrossRef]
  6. Yue, R.; Xiao, J.; Wang, S.; Joseph, S.L. Three-dimensional path planning of a climbing robot using mixed integer linear programming. Adv. Robot. 2010, 24, 2087–2118. [Google Scholar] [CrossRef]
  7. Yuan, Y.-x. A scaled central path for linear programming. J. Comput. Math. 2001, 19, 35–40. [Google Scholar]
  8. Sundarraj, S.; Reddy, R.V.K.; Basam, M.B.; Lokesh, G.H.; Flammini, F.; Natarajan, R. Route planning for an autonomous robotic vehicle employing a weight-controlled particle swarm-optimized Dijkstra algorithm. IEEE Access 2023, 11, 92433–92442. [Google Scholar] [CrossRef]
  9. Deng, Y.; Chen, Y.; Zhang, Y.; Mahadevan, S. Fuzzy Dijkstra algorithm for shortest path problem under uncertain environment. Appl. Soft Comput. 2012, 12, 1231–1237. [Google Scholar] [CrossRef]
  10. Jiang, Y.; Bai, T.; Wang, D.; Wang, Y. Coverage path planning of UAV based on linear programming—Fuzzy C-means with pigeon-inspired optimization. Drones 2024, 8, 50. [Google Scholar] [CrossRef]
  11. Fang, C.; Williams, B.C. A conflict-directed approach to chance-constrained mixed logical linear programming. Artif. Intell. 2023, 323, 103972. [Google Scholar] [CrossRef]
  12. Kvitko, D.; Rybin, V.; Bayazitov, O.; Karimov, A.; Karimov, T.; Butusov, D. Chaotic Path-Planning Algorithm Based on Courbage–Nekorkin Artificial Neuron Model. Mathematics 2024, 12, 892. [Google Scholar] [CrossRef]
  13. Moysis, L.; Rajagopal, K.; Tutueva, A.V.; Volos, C.; Teka, B.; Butusov, D.N. Chaotic Path Planning for 3D Area Coverage Using a Pseudo-Random Bit Generator from a 1D Chaotic Map. Mathematics 2021, 9, 1821. [Google Scholar] [CrossRef]
  14. Fu, Y.; Liu, D.; Chen, J.; He, L. Secretary bird optimization algorithm: A new metaheuristic for solving global optimization problems. Artif. Intell. Rev. 2024, 57, 123. [Google Scholar] [CrossRef]
  15. Li, K.; Huang, H.; Fu, S.; Ma, C.; Fan, Q.; Zhu, Y. A multi-strategy enhanced northern goshawk optimization algorithm for global optimization and engineering design problems. Comput. Methods Appl. Mech. Eng. 2023, 415, 116199. [Google Scholar] [CrossRef]
  16. Fu, S.; Li, K.; Huang, H.; Ma, C.; Fan, Q.; Zhu, Y. Red-billed blue magpie optimizer: A novel metaheuristic algorithm for 2D/3D UAV path planning and engineering design problems. Artif. Intell. Rev. 2024, 57, 134. [Google Scholar] [CrossRef]
  17. Han, X.; Dong, Y.; Yue, L.; Xu, Q.; Xie, G.; Xu, X. State-transition simulated annealing algorithm for constrained and unconstrained multi-objective optimization problems. Appl. Intell. 2021, 51, 775–787. [Google Scholar] [CrossRef]
  18. Rodriguez, M.; Arcos-Aviles, D.; Martinez, W. Fuzzy logic-based energy management for isolated microgrid using meta-heuristic optimization algorithms. Appl. Energy 2023, 335, 120771. [Google Scholar] [CrossRef]
  19. Zhao, F.; Jiang, T.; Wang, L. A reinforcement learning driven cooperative metaheuristic algorithm for energy-efficient distributed no-wait flow-shop scheduling with sequence-dependent setup time. IEEE Trans. Ind. Inform. 2022, 19, 8427–8440. [Google Scholar] [CrossRef]
  20. Li, C.; Zhang, F.; Du, Y.; Li, H. Classification of brain tumor types through MRIs using parallel CNNs and firefly optimization. Sci. Rep. 2024, 14, 15057. [Google Scholar]
  21. Pashaei, E.; Pashaei, E. An efficient binary chimp optimization algorithm for feature selection in biomedical data classification. Neural Comput. Appl. 2022, 34, 6427–6451. [Google Scholar] [CrossRef]
  22. Hamzadayı, A.; Baykasoğlu, A.; Akpınar, S. Solving combinatorial optimization problems with single seekers society algorithm. Knowl.-Based Syst. 2020, 201, 106036. [Google Scholar] [CrossRef]
  23. Fathollahi-Fard, A.M.; Dulebenets, M.A.; Hajiaghaei-Keshteli, M.; TavakkoliMoghaddam, R.; Safaeian, M.; Mirzahosseinian, H. Two hybrid meta-heuristic algorithms for a dual-channel closed-loop supply chain network design problem in the tire industry under uncertainty. Adv. Eng. Inform. 2021, 50, 101418. [Google Scholar] [CrossRef]
  24. Molina, G.; Alba, E. Location discovery in wireless sensor networks using metaheuristics. Appl. Soft Comput. 2011, 11, 1223–1240. [Google Scholar] [CrossRef]
  25. Jiang, S.; Wang, M.; Guo, J.; Wang, M. K-means clustering algorithm based on improved flower pollination algorithm. J. Electron. Imaging 2023, 32, 032003. [Google Scholar]
  26. Marini, F.; Walczak, B. Particle swarm optimization (pso). A tutorial. Chemom. Intell. Lab. Syst. 2015, 149, 153–165. [Google Scholar] [CrossRef]
  27. Wang, J.; Jiang, S.; Ding, J. Online learning resource recommendation method based on multi-similarity metric optimization under the COVID-19 epidemic. Comput. Commun. 2023, 206, 152–159. [Google Scholar] [CrossRef] [PubMed]
  28. Jiang, Y.; Hu, T.; Huang, C.; Wu, X. An improved particle swarm optimization algorithm. Appl. Math. Comput. 2007, 193, 231–239. [Google Scholar] [CrossRef]
  29. Sun, S.; Li, J. A two-swarm cooperative particle swarms optimization. Swarm Evol. Comput. 2014, 15, 1–18. [Google Scholar] [CrossRef]
  30. Shao, S.; Peng, Y.; He, C.; Du, Y. Efficient path planning for UAV formation via comprehensively improved particle swarm optimization. ISA Trans. 2020, 97, 415–430. [Google Scholar] [CrossRef]
  31. Jiang, S.; Shang, J.; Guo, J.; Zhang, Y. Multi-strategy improved flamingo search algorithm for global optimization. Appl. Sci. 2023, 13, 5612. [Google Scholar] [CrossRef]
  32. He, W.; Qi, X.; Liu, L. A novel hybrid particle swarm optimization for multi-UAV cooperate path planning. Appl. Intell. 2021, 51, 7350–7364. [Google Scholar] [CrossRef]
  33. Song, H.; Wang, J.; Song, L.; Zhang, H.; Bei, J.; Ni, J.; Ye, B. Improvement and application of hybrid real-coded genetic algorithm. Appl. Intell. 2022, 52, 17410–17448. [Google Scholar] [CrossRef]
  34. Das, S.; Mullick, S.S.; Suganthan, P.N. Recent advances in differential evolution—An updated survey. Swarm Evol. Comput. 2016, 27, 1–30. [Google Scholar] [CrossRef]
  35. Krishna, K.; Murty, M.N. Genetic k-means algorithm. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 1999, 29, 433–439. [Google Scholar] [CrossRef] [PubMed]
  36. Pashaei, E.; Aydin, N. Binary black hole algorithm for feature selection and classification on biological data. Appl. Soft Comput. 2017, 56, 94–106. [Google Scholar] [CrossRef]
  37. Liu, H.; Zhang, X.; Liang, H.; Tu, L. Stability analysis of the human behaviorbased particle swarm optimization without stagnation assumption. Expert Syst. Appl. 2020, 159, 113638. [Google Scholar] [CrossRef]
  38. Abdel-Basset, M.; Mohamed, R.; Jameel, M.; Abouhawwash, M. Nutcracker optimizer: A novel nature-inspired metaheuristic algorithm for global optimization and engineering design problems. Knowl.-Based Syst. 2023, 262, 110248. [Google Scholar] [CrossRef]
  39. Duan, Z.; Yu, H.; Zhang, Q.; Tian, L. Parameter extraction of solar photovoltaic model based on nutcracker optimization algorithm. Appl. Sci. 2023, 13, 6710. [Google Scholar] [CrossRef]
  40. Evangeline, S.I.; Darwin, S.; Anandkumar, P.P.; Sreenivasan, V. Investigating the performance of a surrogate-assisted nutcracker optimization algorithm on multiobjective optimization problems. Expert Syst. Appl. 2024, 245, 123044. [Google Scholar] [CrossRef]
  41. Wu, D.; Yan, R.; Jin, H.; Cai, F. An adaptive nutcracker optimization approach for distribution of fresh agricultural products with dynamic demands. Agriculture 2023, 13, 1430. [Google Scholar] [CrossRef]
  42. Dahou, A.; Ewees, A.A.; Hashim, F.A.; Al-qaness, M.A.; Orabi, D.A.; Soliman, E.M.; Tag-eldin, E.M.; Aseeri, A.O.; Abd Elaziz, M. Optimizing fake news detection for Arabic context: A multitask learning approach with transformers and an enhanced nutcracker optimization algorithm. Knowl.-Based Syst. 2023, 280, 111023. [Google Scholar] [CrossRef]
  43. Kumar, V.; Rao, R.N.; Singh, A.; Shekher, V.; Paul, K.; Sinha, P.; Alghamdi, T.A.; Abdelaziz, A.Y. A novel nature-inspired nutcracker optimizer algorithm for congestion control in power system transmission lines. Energy Explor. Exploit. 2024, 42, 2056–2091. [Google Scholar] [CrossRef]
  44. Mohamed, A.F.; Saba, A.; Hassan, M.K.; Youssef, H.M.; Dahou, A.; Elsheikh, A.H.; El-Bary, A.A.; Abd Elaziz, M.; Ibrahim, R.A. Boosted nutcracker optimizer and chaos game optimization with cross vision transformer for medical image classification. Egypt. Inform. J. 2024, 26, 100457. [Google Scholar] [CrossRef]
  45. Bai, J.; Li, Y.; Zheng, M.; Khatir, S.; Benaissa, B.; Abualigah, L.; Wahab, M.A. A sinh cosh optimizer. Knowl.-Based Syst. 2023, 282, 111081. [Google Scholar] [CrossRef]
  46. Meng, Q.; Qu, Q.; Chen, K.; Yi, T. Multi-UAV path planning based on cooperative co-evolutionary algorithms with adaptive decision variable selection. Drones 2024, 8, 435. [Google Scholar] [CrossRef]
  47. Hashim, F.A.; Houssein, E.H.; Hussain, K.; Mabrouk, M.S.; Al-Atabany, W. Honey badger algorithm: New metaheuristic algorithm for solving optimization problems. Math. Comput. Simul. 2022, 192, 84–110. [Google Scholar] [CrossRef]
  48. Faramarzi, A.; Heidarinejad, M.; Stephens, B.; Mirjalili, S. Equilibrium optimizer: A novel optimization algorithm. Knowl.-Based Syst. 2020, 191, 105190. [Google Scholar] [CrossRef]
  49. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  50. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  51. Geoscience Australia. Digital Elevation Model (DEM) of Australia Derived from LiDAR 5 Metre Grid; Commonwealth of Australia (Geoscience Australia): Canberra, Australia, 2015.
  52. Lyu, L.; Yang, F. MMPA: A modified marine predator algorithm for 3D UAV path planning in complex environments with multiple threats. Expert Syst. Appl. 2024, 257, 124955. [Google Scholar] [CrossRef]
Figure 1. Threat cost visualization.
Figure 2. Flight altitude visualization.
Figure 3. Turning and climbing angle calculation.
Figure 4. Convergence curves of each algorithm for the test function.
Figure 5. Convergence curves of each algorithm for the test function (continued).
Figure 6. Top view of each algorithm’s result across eight different obstacle classes.
Figure 7. Convergence plots of each algorithm for the eight obstacle class cases.
Table 1. Description of some standard test functions.

Function | Dimension | Range | Global Optimum
$f_1(x) = \sum_{i=1}^{n} x_i^2$ | 30 | [−100, 100] | 0
$f_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 30 | [−10, 10] | 0
$f_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | 30 | [−10, 10] | 0
$f_4(x) = \max_i \{ |x_i|,\ 1 \le i \le n \}$ | 30 | [−10, 10] | 0
$f_5(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | 30 | [−10, 10] | 0
$f_6(x) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2$ | 30 | [−10, 10] | 0
$f_7(x) = \sum_{i=1}^{n} i x_i^4 + random[0, 1)$ | 30 | [−10, 10] | 0
$f_8(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]$ | 30 | [−5.12, 5.12] | 0
$f_9(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i) \right) + 20 + e$ | 30 | [−32, 32] | 0
$f_{10}(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$ | 30 | [−600, 600] | 0
$f_{11}(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, with $y_i = 1 + \frac{1}{4}(x_i + 1)$ and $u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k (-x_i - a)^m, & x_i < -a \end{cases}$ | 30 | [−50, 50] | 0
$f_{12}(x) = 0.1 \left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 \left[ 1 + \sin^2(3\pi x_{i+1}) \right] + (x_n - 1)^2 \left[ 1 + \sin^2(2\pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ | 30 | [−50, 50] | 0
$f_{13}(x) = \left( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right)^{-1}$ | 2 | [−65, 65] | 1
$f_{14}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$ | 4 | [−5, 5] | 0.0003075
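Two entries of Table 1, the unimodal sphere function f1 and the multimodal Rastrigin function f8, can be coded directly from their definitions; a minimal Python sketch:

```python
import math

def f1_sphere(x):
    """f1 in Table 1: sum of squares, global minimum 0 at the origin."""
    return sum(v * v for v in x)

def f8_rastrigin(x):
    """f8 in Table 1: Rastrigin function, global minimum 0 at the origin."""
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)
```

Both attain their global optimum of 0 at the origin; Rastrigin's cosine term adds the dense grid of local minima that makes it a standard exploration test.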
Table 2. Characteristics of CEC-2014 test functions.

Class | ID | Function | Global Optimum
Unimodal functions | F24 | Rotated High Conditioned Elliptic Function | 100
 | F25 | Rotated Bent Cigar Function | 200
 | F26 | Rotated Discus Function | 300
Simple multimodal functions | F27 | Shifted and Rotated Rosenbrock Function | 400
 | F28 | Shifted and Rotated Ackley Function | 500
 | F29 | Shifted and Rotated Weierstrass Function | 600
 | F30 | Shifted and Rotated Griewank Function | 700
 | F31 | Shifted Rastrigin Function | 800
 | F32 | Shifted and Rotated Rastrigin Function | 900
 | F33 | Shifted Schwefel Function | 1000
 | F34 | Shifted and Rotated Schwefel Function | 1100
 | F35 | Shifted and Rotated Katsuura Function | 1200
 | F36 | Shifted and Rotated HappyCat Function | 1300
 | F37 | Shifted and Rotated HGBat Function | 1400
 | F38 | Shifted and Rotated Expanded Griewank plus Rosenbrock Function | 1500
 | F39 | Shifted and Rotated Expanded Scaffer Function | 1600
Hybrid functions | F40 | Hybrid Function 1 | 1700
 | F41 | Hybrid Function 2 | 1800
 | F42 | Hybrid Function 3 | 1900
 | F43 | Hybrid Function 4 | 2000
 | F44 | Hybrid Function 5 | 2100
 | F45 | Hybrid Function 6 | 2200
Composition functions | F46 | Composition Function 1 | 2300
 | F47 | Composition Function 2 | 2400
 | F48 | Composition Function 3 | 2500
 | F49 | Composition Function 4 | 2600
 | F50 | Composition Function 5 | 2700
 | F51 | Composition Function 6 | 2800
 | F52 | Composition Function 7 | 2900
 | F53 | Composition Function 8 | 3000
Table 3. Characteristics of CEC-2020 test functions.

Class | ID | Function | Global Optimum
Unimodal function | F54 | Shifted and Rotated Bent Cigar Function | 100
Basic functions | F55 | Shifted and Rotated Schwefel's Function | 1100
 | F56 | Shifted and Rotated Lunacek bi-Rastrigin Function | 700
 | F57 | Expanded Rosenbrock's plus Griewangk's Function | 1900
Hybrid functions | F58 | Hybrid Function 1 (N = 3) | 1700
 | F59 | Hybrid Function 2 (N = 4) | 1600
 | F60 | Hybrid Function 3 (N = 5) | 2100
Composition functions | F61 | Composition Function 1 (N = 3) | 2200
 | F62 | Composition Function 2 (N = 4) | 2400
 | F63 | Composition Function 3 (N = 5) | 2500
Table 4. Mean, variance, and ranking achieved by each algorithm on the CEC2014 suite.

| F | Index | ISCHNOA | NOA | SCHO | WOA | GWO | HBA | EO |
|---|---|---|---|---|---|---|---|---|
| **Unimodal** | | | | | | | | |
| F24 | Ave | 100 | 100 | 7.85 × 10^5 | 4.95 × 10^6 | 6.05 × 10^6 | 1.12 × 10^7 | 5.24 × 10^4 |
| | std | 5.33 × 10^−4 | 1.37 × 10^−4 | 9.68 × 10^6 | 4.65 × 10^6 | 5.62 × 10^6 | 5.76 × 10^6 | 6.65 × 10^4 |
| | Rank | 2 | 1 | 4 | 5 | 6 | 7 | 3 |
| F25 | Ave | 200.00 | 200 | 2.84 × 10^6 | 8.41 × 10^5 | 1.73 × 10^7 | 1.28 × 10^7 | 1.43 × 10^3 |
| | std | 3.6 × 10^−6 | 3.07 × 10^−9 | 4.85 × 10^6 | 6.03 × 10^5 | 6.59 × 10^7 | 5.40 × 10^7 | 1.44 × 10^3 |
| | Rank | 2 | 1 | 5 | 4 | 7 | 6 | 3 |
| F26 | Ave | 300 | 300 | 5.95 × 10^3 | 4.92 × 10^4 | 5.87 × 10^3 | 1.06 × 10^4 | 4.39 × 10^2 |
| | std | 1.19 × 10^−9 | 2.7 × 10^−11 | 3.21 × 10^3 | 2.91 × 10^4 | 3.95 × 10^3 | 4.16 × 10^3 | 2.03 × 10^2 |
| | Rank | 2 | 1 | 5 | 7 | 4 | 6 | 3 |
| **Multimodal** | | | | | | | | |
| F27 | Ave | 411.00 | 418.69 | 429.52 | 4.41 × 10^2 | 4.30 × 10^2 | 5.70 × 10^2 | 4.28 × 10^2 |
| | std | 16.2939 | 17.0336 | 28.305 | 2.78 × 10^1 | 2.04 × 10^1 | 5.05 × 10^1 | 2.41 × 10^1 |
| | Rank | 1 | 2 | 4 | 6 | 5 | 7 | 3 |
| F28 | Ave | 5.18 × 10^2 | 5.20 × 10^2 | 5.20 × 10^2 | 5.20 × 10^2 | 5.20 × 10^2 | 5.21 × 10^2 | 5.20 × 10^2 |
| | std | 6.3445 | 2.17 × 10^−2 | 1.09 × 10^−1 | 1.14 × 10^−1 | 1.04 × 10^−1 | 1.75 × 10^−1 | 6.74 × 10^−2 |
| | Rank | 1 | 2 | 3 | 5 | 4 | 6 | 4 |
| F29 | Ave | 600 | 600 | 6.02 × 10^2 | 6.08 × 10^2 | 6.02 × 10^2 | 6.23 × 10^2 | 6.02 × 10^2 |
| | std | 5.33 × 10^−4 | 8.08 × 10^−4 | 1.49 | 1.54 | 1.52 | 4.72 × 10^−2 | 1.52 |
| | Rank | 1 | 2 | 3 | 5 | 4 | 6 | 4 |
| F30 | Ave | 700 | 700 | 7.02 × 10^2 | 7.01 × 10^2 | 7.02 × 10^2 | 7.01 × 10^2 | 700 |
| | std | 2.55 × 10^−2 | 2.13 × 10^−2 | 1.22 | 4.52 × 10^−1 | 2.78 | 2.84 × 10^−1 | 3.32 × 10^−2 |
| | Rank | 3 | 2 | 6 | 5 | 7 | 4 | 1 |
| F31 | Ave | 800 | 800 | 8.05 × 10^2 | 8.38 × 10^2 | 8.08 × 10^2 | 8.92 × 10^2 | 8.08 × 10^2 |
| | std | 1.5 × 10^−11 | 1.0 × 10^−12 | 1.10 | 1.43 × 10^1 | 3.23 | 2.38 × 10^1 | 2.89 |
| | Rank | 2 | 1 | 3 | 6 | 5 | 7 | 4 |
| F32 | Ave | 906.00 | 903.28 | 9.18 × 10^2 | 9.46 × 10^2 | 9.15 × 10^2 | 9.84 × 10^2 | 9.11 × 10^2 |
| | std | 3.56 | 1.10 | 1.01 × 10^1 | 1.48 × 10^1 | 7.15 | 2.99 × 10^1 | 5.11 |
| | Rank | 2 | 1 | 5 | 6 | 4 | 7 | 3 |
| F33 | Ave | 1.00 × 10^3 | 1.00 × 10^3 | 1.12 × 10^3 | 1.52 × 10^3 | 1.31 × 10^3 | 2.81 × 10^3 | 1.18 × 10^3 |
| | std | 1.604 | 2.2748 | 1.58 × 10^1 | 2.49 × 10^2 | 1.89 × 10^2 | 1.25 × 10^2 | 1.15 × 10^2 |
| | Rank | 1 | 2 | 3 | 6 | 5 | 7 | 4 |
| F34 | Ave | 1332.00 | 1523.04 | 1.56 × 10^3 | 2.91 × 10^3 | 1.84 × 10^3 | 1.99 × 10^3 | 1.70 × 10^3 |
| | std | 182.78 | 190.36 | 1.51 × 10^2 | 5.11 × 10^2 | 3.83 × 10^2 | 4.29 × 10^2 | 3.34 × 10^2 |
| | Rank | 1 | 2 | 3 | 7 | 5 | 6 | 4 |
| F35 | Ave | 1200.00 | 1200.204 | 1200.08 | 1200.79 | 1200.81 | 1200.57 | 1200.33 |
| | std | 7.58 × 10^−2 | 3.5 × 10^−2 | 1.81 × 10^−2 | 2.71 × 10^−1 | 6.53 × 10^−1 | 3.64 × 10^−1 | 2.11 × 10^−1 |
| | Rank | 1 | 3 | 2 | 6 | 7 | 5 | 4 |
| F36 | Ave | 1300.00 | 1300.109 | 1300.05 | 1300.34 | 1300.19 | 1300.20 | 1300.06 |
| | std | 2.84 × 10^−2 | 3.4 × 10^−2 | 1.29 × 10^−3 | 1.29 × 10^−1 | 5.53 × 10^−2 | 9.69 × 10^−2 | 4.05 × 10^−3 |
| | Rank | 1 | 4 | 2 | 7 | 5 | 6 | 3 |
| F37 | Ave | 1400.00 | 1400.15 | 1400.07 | 1400.30 | 1400.32 | 1400.25 | 1400.17 |
| | std | 3.19 × 10^−2 | 5.4 × 10^−2 | 9.91 × 10^−2 | 2.21 × 10^−1 | 2.24 × 10^−1 | 2.04 × 10^−1 | 7.64 × 10^−2 |
| | Rank | 1 | 3 | 2 | 6 | 7 | 5 | 4 |
| F38 | Ave | 1500.00 | 1500.75 | 1501.75 | 1506.98 | 1501.55 | 1500.57 | 1501.12 |
| | std | 0.29 | 0.21 | 1.13 | 2.32 | 7.48 × 10^−1 | 5.89 × 10^−1 | 5.99 × 10^−1 |
| | Rank | 1 | 3 | 6 | 7 | 5 | 2 | 4 |
| F39 | Ave | 1602.00 | 1602.22 | 1603.32 | 1603.39 | 1602.61 | 1600.98 | 1602.23 |
| | std | 0.31 | 0.19 | 5.09 × 10^−1 | 3.12 × 10^−1 | 7.46 × 10^−1 | 3.51 × 10^−1 | 4.67 × 10^−1 |
| | Rank | 2 | 3 | 7 | 6 | 5 | 1 | 4 |
| **Hybrid** | | | | | | | | |
| F40 | Ave | 1715.63 | 1717.13 | 1.81 × 10^4 | 2.03 × 10^5 | 4.08 × 10^4 | 9.16 × 10^3 | 4.36 × 10^3 |
| | std | 9.54 | 5.55 | 4.86 × 10^3 | 3.86 × 10^5 | 1.05 × 10^5 | 1.04 × 10^4 | 2.26 × 10^3 |
| | Rank | 1 | 2 | 5 | 7 | 6 | 4 | 3 |
| F41 | Ave | 1801.00 | 1800.89 | 7.95 × 10^3 | 1.19 × 10^4 | 1.23 × 10^4 | 9.75 × 10^3 | 8.74 × 10^3 |
| | std | 0.47 | 0.60 | 6.59 × 10^3 | 1.18 × 10^4 | 7.67 × 10^3 | 7.50 × 10^3 | 5.63 × 10^3 |
| | Rank | 2 | 1 | 3 | 6 | 7 | 5 | 4 |
| F42 | Ave | 1900.00 | 1900.31 | 1901.37 | 1905.94 | 1902.75 | 1903.55 | 1901.52 |
| | std | 0.32 | 0.14 | 9.12 × 10^−1 | 1.37 | 1.12 | 1.68 | 8.04 × 10^−1 |
| | Rank | 1 | 2 | 3 | 7 | 5 | 6 | 4 |
| F43 | Ave | 2000.00 | 2000.70 | 4.11 × 10^3 | 8.11 × 10^3 | 7.27 × 10^3 | 3.41 × 10^3 | 2.13 × 10^3 |
| | std | 0.45 | 0.61 | 2.76 × 10^2 | 4.26 × 10^3 | 4.46 × 10^3 | 1.87 × 10^3 | 6.32 × 10^1 |
| | Rank | 1 | 2 | 5 | 7 | 6 | 4 | 3 |
| F44 | Ave | 2100.00 | 2100.70 | 3.63 × 10^3 | 8.87 × 10^4 | 9.63 × 10^3 | 2.33 × 10^3 | 2.40 × 10^3 |
| | std | 0.37 | 0.26 | 3.26 × 10^2 | 4.26 × 10^3 | 4.37 × 10^3 | 4.51 × 10^3 | 1.96 × 10^2 |
| | Rank | 1 | 2 | 5 | 7 | 6 | 3 | 4 |
| F45 | Ave | 2200.00 | 2200.27 | 2.27 × 10^3 | 2.29 × 10^3 | 2.30 × 10^3 | 2.21 × 10^3 | 2.24 × 10^3 |
| | std | 0.37 | 0.18 | 6.29 × 10^1 | 7.29 × 10^1 | 5.77 × 10^1 | 6.89 × 10^1 | 3.89 × 10^1 |
| | Rank | 1 | 2 | 5 | 6 | 7 | 3 | 4 |
| **Composition** | | | | | | | | |
| F46 | Ave | 2500 | 2500 | 2.53 × 10^3 | 2.62 × 10^3 | 2.63 × 10^3 | 2.51 × 10^3 | 2.63 × 10^3 |
| | std | 0 | 0 | 2.61 × 10^1 | 4.06 × 10^1 | 4.02 | 2.26 × 10^1 | 2.9 × 10^−10 |
| | Rank | 1 | 2 | 4 | 5 | 7 | 3 | 6 |
| F47 | Ave | 2.55 × 10^3 | 2.51 × 10^3 | 2.52 × 10^3 | 2.58 × 10^3 | 2.54 × 10^3 | 2.56 × 10^3 | 2.55 × 10^3 |
| | std | 43.14 | 5.38 | 26.1 | 31.4 | 34.6 | 35.3 | 35.0 |
| | Rank | 5 | 1 | 2 | 7 | 3 | 6 | 4 |
| F48 | Ave | 2622.00 | 2624.41 | 2640.07 | 2.69 × 10^3 | 2.70 × 10^3 | 2.68 × 10^3 | 2.70 × 10^3 |
| | std | 14.03 | 27.30 | 10.9 | 9.32 | 12.9 | 27.0 | 14.4 |
| | Rank | 1 | 2 | 3 | 4 | 6 | 5 | 7 |
| F49 | Ave | 2700.00 | 2700.10 | 2700.25 | 2700.38 | 2700.14 | 2700.02 | 2700.07 |
| | std | 2.08 × 10^−2 | 2.00 × 10^−2 | 3.31 × 10^−1 | 1.72 × 10^−1 | 1.82 × 10^−1 | 1.17 × 10^−1 | 3.37 × 10^−2 |
| | Rank | 1 | 4 | 6 | 7 | 5 | 2 | 3 |
| F50 | Ave | 2781.00 | 2820.83 | 2.80 × 10^3 | 3.10 × 10^3 | 2.98 × 10^3 | 2.95 × 10^3 | 3.02 × 10^3 |
| | std | 1.01 × 10^2 | 1.02 × 10^2 | 7.21 × 10^2 | 1.13 × 10^2 | 1.46 × 10^2 | 9.06 × 10^1 | 1.12 × 10^2 |
| | Rank | 1 | 3 | 2 | 7 | 4 | 5 | 6 |
| F51 | Ave | 3000 | 3000 | 3000 | 3.38 × 10^3 | 3.27 × 10^3 | 3.19 × 10^3 | 3.22 × 10^3 |
| | std | 0 | 0 | 0 | 1.48 × 10^2 | 7.62 × 10^1 | 1.26 × 10^2 | 5.14 × 10^1 |
| | Rank | 1 | 1 | 1 | 5 | 4 | 2 | 3 |
| F52 | Ave | 3100.00 | 3101.27 | 3100.00 | 1.77 × 10^5 | 3.77 × 10^5 | 3.45 × 10^5 | 2.41 × 10^5 |
| | std | 0 | 34.75 | 1.9 × 10^−2 | 5.26 × 10^5 | 8.60 × 10^5 | 1.28 × 10^5 | 6.18 × 10^5 |
| | Rank | 1 | 3 | 2 | 4 | 7 | 6 | 5 |
| F53 | Ave | 3200.00 | 3451.85 | 3200.00 | 5.18 × 10^3 | 4.45 × 10^3 | 3.47 × 10^3 | 3.74 × 10^3 |
| | std | 0 | 60.78 | 61.82 | 1.19 × 10^3 | 7.85 × 10^2 | 6.75 × 10^2 | 2.94 × 10^2 |
| | Rank | 1 | 3 | 2 | 7 | 6 | 4 | 5 |
| **Ave Rank** | | 1.43 | 2.10 | 3.77 | 6.03 | 4.66 | 4.76 | 3.83 |
Table 5. Mean, variance, and ranking achieved by each algorithm on the CEC2020 suite.

| F | Index | ISCHNOA | NOA | SCHO | WOA | GWO | HBA | EO |
|---|---|---|---|---|---|---|---|---|
| F54 | Ave | 100 | 100 | 4.80 × 10^8 | 3.05 × 10^6 | 4.89 × 10^7 | 1.88 × 10^2 | 3.19 × 10^3 |
| | std | 7.33 × 10^−5 | 2.17 × 10^−9 | 3.72 × 10^8 | 7.30 × 10^6 | 1.40 × 10^8 | 1.55 × 10^3 | 2.86 × 10^3 |
| | Rank | 2 | 1 | 7 | 5 | 6 | 3 | 4 |
| F55 | Ave | 1219.40 | 1.23 × 10^3 | 1.59 × 10^3 | 2.12 × 10^3 | 1.54 × 10^3 | 1.47 × 10^3 | 1.52 × 10^3 |
| | std | 82.82 | 9.56 × 10^1 | 2.47 × 10^2 | 3.39 × 10^2 | 1.94 × 10^2 | 1.87 × 10^2 | 2.11 × 10^2 |
| | Rank | 1 | 2 | 6 | 7 | 5 | 3 | 4 |
| F56 | Ave | 7.16 × 10^2 | 7.16 × 10^2 | 7.51 × 10^2 | 7.78 × 10^2 | 7.30 × 10^2 | 7.25 × 10^2 | 7.21 × 10^2 |
| | std | 2.31 | 2.84 | 1.8 × 10^1 | 2.11 × 10^1 | 8.20 | 1.46 × 10^1 | 6.72 |
| | Rank | 1 | 2 | 6 | 7 | 5 | 4 | 3 |
| F57 | Ave | 1900.93 | 1900.00 | 2.46 × 10^3 | 1900.05 | 1900.37 | 1900.00 | 1900.00 |
| | std | 0.21 | 0 | 2.66 × 10^3 | 1.99 × 10^−1 | 5.44 × 10^−1 | 0 | 0 |
| | Rank | 4 | 1 | 5 | 2 | 3 | 1 | 1 |
| F58 | Ave | 1717.56 | 1716.81 | 5.98 × 10^4 | 3.40 × 10^5 | 5.29 × 10^4 | 2.87 × 10^3 | 3.90 × 10^3 |
| | std | 11.45 | 10.85 | 1.43 × 10^5 | 6.45 × 10^5 | 1.22 × 10^5 | 8.09 × 10^3 | 1.83 × 10^3 |
| | Rank | 2 | 1 | 6 | 7 | 5 | 3 | 4 |
| F59 | Ave | 1600.71 | 1604.53 | 1.78 × 10^3 | 1.86 × 10^3 | 1.75 × 10^3 | 1.81 × 10^3 | 1.69 × 10^3 |
| | std | 2.69 × 10^1 | 2.15 × 10^1 | 1.03 × 10^2 | 1.04 × 10^2 | 1.10 × 10^2 | 1.87 × 10^2 | 8.49 × 10^2 |
| | Rank | 1 | 2 | 5 | 7 | 4 | 6 | 3 |
| F60 | Ave | 2100 | 2100 | 5.88 × 10^3 | 5.52 × 10^4 | 1.03 × 10^4 | 3.00 × 10^3 | 2.34 × 10^3 |
| | std | 0.338 | 3.06 | 4.03 × 10^3 | 5.44 × 10^4 | 4.62 × 10^3 | 7.36 × 10^2 | 2.35 × 10^2 |
| | Rank | 1 | 2 | 5 | 7 | 6 | 4 | 3 |
| F61 | Ave | 2266.42 | 2.27 × 10^3 | 2.42 × 10^3 | 2.32 × 10^3 | 2.36 × 10^3 | 2.30 × 10^3 | 2.31 × 10^3 |
| | std | 4.59 × 10^1 | 4.48 × 10^1 | 2.45 × 10^2 | 2.17 × 10^1 | 1.61 × 10^2 | 1.60 × 10^1 | 2.05 × 10^1 |
| | Rank | 1 | 2 | 7 | 5 | 6 | 3 | 4 |
| F62 | Ave | 2574.29 | 2.59 × 10^3 | 2.69 × 10^3 | 2.76 × 10^3 | 2.74 × 10^3 | 2.74 × 10^3 | 2.74 × 10^3 |
| | std | 1.09 × 10^2 | 1.14 × 10^2 | 1.32 × 10^1 | 6.95 × 10^1 | 1.14 × 10^1 | 6.94 × 10^1 | 2.84 × 10^1 |
| | Rank | 1 | 2 | 3 | 7 | 4 | 6 | 5 |
| F63 | Ave | 2908 | 2.92 × 10^3 | 2.93 × 10^3 | 2.95 × 10^3 | 2.94 × 10^3 | 2.92 × 10^3 | 2.93 × 10^3 |
| | std | 19.44 | 2.24 × 10^1 | 2.85 × 10^1 | 6.73 × 10^1 | 1.70 × 10^1 | 2.30 × 10^1 | 2.56 × 10^1 |
| | Rank | 1 | 2 | 5 | 7 | 6 | 3 | 4 |
| **Ave Rank** | | 1.5 | 1.7 | 4.5 | 6.1 | 5.0 | 3.6 | 3.5 |
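The "Ave Rank" row is simply the mean of each algorithm's per-function ranks, with lower values indicating better overall performance. A minimal sketch, with the CEC2020 ranks transcribed from the rows above:

```python
# Per-function ranks for the CEC2020 suite (F54-F63), one list per algorithm,
# transcribed from the Rank rows of Table 5.
ranks = {
    "ISCHNOA": [2, 1, 1, 4, 2, 1, 1, 1, 1, 1],
    "NOA":     [1, 2, 2, 1, 1, 2, 2, 2, 2, 2],
    "SCHO":    [7, 6, 6, 5, 6, 5, 5, 7, 3, 5],
    "WOA":     [5, 7, 7, 2, 7, 7, 7, 5, 7, 7],
    "GWO":     [6, 5, 5, 3, 5, 4, 6, 6, 4, 6],
    "HBA":     [3, 3, 4, 1, 3, 6, 4, 3, 6, 3],
    "EO":      [4, 4, 3, 1, 4, 3, 3, 4, 5, 4],
}

# Average rank per algorithm: the mean of its ranks across all ten functions.
ave_rank = {alg: sum(r) / len(r) for alg, r in ranks.items()}
```

Sorting the algorithms by `ave_rank` then gives the overall ordering, with ISCHNOA first at 1.5 and NOA second at 1.7.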
Table 6. Model parameters.

| Parameter | Symbol | Value |
|---|---|---|
| Population size | N | 30 |
| Number of iterations | T | 1000 |
| Weighting of costs | α | 10, 100, 10, 50 |
| Starting point coordinates | | (200, 100, 150), (50, 50, 100) |
| Target point coordinates | | (800, 800, 150), (350, 360, 100) |
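The weight vector α = (10, 100, 10, 50) in Table 6 implies that path fitness is a weighted sum of four cost terms; the exact terms are defined in the paper's model section (path length, threat, altitude, and smoothness costs are the usual choices in UAV path-planning models). The snippet below is only an illustrative sketch: `total_cost`, its placeholder cost arguments, and the example path are hypothetical, not the paper's implementation.

```python
import math

# Illustrative sketch (not the paper's exact model): aggregate four cost terms
# with the weights alpha = (10, 100, 10, 50) from Table 6.
ALPHA = (10, 100, 10, 50)

def path_length(path):
    """Sum of Euclidean distances between consecutive 3D waypoints."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def total_cost(path, threat_cost, altitude_cost, smooth_cost):
    """Weighted-sum aggregation: alpha_1*length + alpha_2*threat + ..."""
    terms = (path_length(path), threat_cost, altitude_cost, smooth_cost)
    return sum(w * t for w, t in zip(ALPHA, terms))

# Example: a two-segment path from the first start point in Table 6 to the
# first target point, with the other (placeholder) cost terms set to zero.
path = [(200, 100, 150), (500, 450, 150), (800, 800, 150)]
cost = total_cost(path, threat_cost=0.0, altitude_cost=0.0, smooth_cost=0.0)
```

With this weighting, the threat term (weight 100) dominates the fitness, which matches the intuition that obstacle avoidance should outweigh small savings in path length.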
Table 7. Model initialization and average cost.

| Scenario | Obstacles ((X, Y, Z), R) | ISCHNOA | NOA | SCHO | WOA | GWO | HBA | EO |
|---|---|---|---|---|---|---|---|---|
| 1 | ((400,500,200),50), ((500,350,200),50), ((600,200,200),40) | 9310 | 9556 | 9265 | 9321 | 9258 | 10,800 | 9256 |
| 2 | ((325,350,200),50), ((400,550,200),50), ((500,550,200),40), ((600,550,200),40) | 10,090 | 10,390 | 10,610 | 10,433 | 10,435 | 11,090 | 10,460 |
| 3 | ((325,350,200),50), ((400,550,200),50), ((500,325,200),40), ((550,650,200),40), ((625,500,200),40) | 9362 | 11,113 | 10,580 | 11,320 | 10,440 | 9822 | 10,430 |
| 4 | ((350,200,200),40), ((400,500,200),50), ((500,350,200),50), ((600,200,100),40), ((650,750,150),50), ((700,550,200),40) | 10,460 | 10,840 | 11,140 | 11,750 | 11,010 | 11,710 | 10,980 |
| 5 | ((150,250,100),15), ((250,125,200),15) | 4420 | 4410 | 4406 | 4379 | 4397 | 4383 | 4379 |
| 6 | ((125,200,100),20), ((175,275,100),20), ((200,125,200),15), ((250,200,100),15) | 5155 | 6273 | 5997 | 5416 | 5465 | 5837 | 5391 |
| 7 | ((100,200,100),20), ((150,350,100),20), ((175,250,100),15), ((200,125,100),15), ((275,200,100),15), ((300,300,100),20) | 6288 | 7240 | 7758 | 6482 | 6397 | 8221 | 6478 |
| 8 | ((100,200,100),20), ((150,350,100),20), ((175,250,100),15), ((200,125,100),15), ((250,200,100),20), ((250,300,100),15) | 7507 | 9327 | 8353 | 8506 | 9178 | 10,224 | 8032 |
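Table 7 specifies each obstacle as ((X, Y, Z), R). Assuming these entries describe cylindrical threats of radius R and height Z centred at (X, Y) — a common convention in UAV path-planning benchmarks; the paper's model section defines the exact geometry — a waypoint-feasibility check can be sketched as:

```python
import math

# Sketch of a feasibility check, assuming each ((X, Y, Z), R) entry in Table 7
# is a cylindrical obstacle of radius R and height Z centred at (X, Y).
def in_obstacle(point, obstacle):
    """True if the waypoint lies inside the cylinder (below its top and
    within radius R of its vertical axis)."""
    (cx, cy, cz), r = obstacle
    px, py, pz = point
    return pz <= cz and math.hypot(px - cx, py - cy) <= r

def path_is_feasible(path, obstacles):
    """A path is feasible when no waypoint lies inside any obstacle."""
    return all(not in_obstacle(p, ob) for p in path for ob in obstacles)

# Scenario 1 obstacles from Table 7:
scenario1 = [((400, 500, 200), 50), ((500, 350, 200), 50), ((600, 200, 200), 40)]
```

In practice the check would be applied to densely sampled points along each path segment, not just the waypoints, and an infeasible point would add a large penalty to the threat-cost term rather than rejecting the path outright.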
Table 8. Sensitivity analysis of the cost of taking the value of K in ISCHNOA for eight scenarios.

| Scenario | Cost (k = 150) | Cost (k = 200) | Cost (k = 250) | Cost (k = 300) | Cost (k = 350) |
|---|---|---|---|---|---|
| 1 | 9341 | 9295 | 9310 | 9301 | 9321 |
| 2 | 10,069 | 10,041 | 10,090 | 10,295 | 10,263 |
| 3 | 9572 | 9397 | 9362 | 9410 | 9378 |
| 4 | 10,359 | 10,265 | 10,460 | 10,540 | 10,332 |
| 5 | 4410 | 4400 | 4420 | 4416 | 4413 |
| 6 | 5186 | 5162 | 5155 | 5175 | 5130 |
| 7 | 6528 | 6597 | 6288 | 6562 | 6683 |
| 8 | 7524 | 7418 | 7507 | 7561 | 7461 |
Jiang, S.; Cui, S.; Song, H.; Lu, Y.; Zhang, Y. Enhanced Nutcracker Optimization Algorithm with Hyperbolic Sine–Cosine Improvement for UAV Path Planning. Biomimetics 2024, 9, 757. https://doi.org/10.3390/biomimetics9120757
