Article

Three-Dimensional Path Planning for UAV Based on Multi-Strategy Dream Optimization Algorithm

1
School of Information Science and Technology, Shijiazhuang Tiedao University, Shijiazhuang 050043, China
2
Hebei Key Laboratory of Electromagnetic Environmental Effects and Information Processing, Shijiazhuang 050043, China
3
Department of Information Management, Hebei General Hospital, Shijiazhuang 050051, China
4
Department of Mechanical and Electrical Engineering, Shijiazhuang Information Engineering Vocational College, Shijiazhuang 052161, China
5
Hebei Provincial Engineering Research Center for Vertical Take-Off and Landing Fixed-Wing Intelligent Unmanned Aerial Vehicle Technology and Applications, Shijiazhuang 052161, China
*
Author to whom correspondence should be addressed.
Biomimetics 2025, 10(8), 551; https://doi.org/10.3390/biomimetics10080551
Submission received: 2 July 2025 / Revised: 5 August 2025 / Accepted: 6 August 2025 / Published: 21 August 2025
(This article belongs to the Section Biological Optimisation and Management)

Abstract

A multi-strategy dream optimization algorithm (MSDOA) is proposed to address the challenges of inadequate search capability, slow convergence, and susceptibility to local optima in intelligent optimization algorithms applied to UAV three-dimensional path planning, aiming to enhance the global search efficiency and accuracy of UAV path planning algorithms in 3D environments. First, the algorithm utilizes Bernoulli chaotic mapping for population initialization to widen individual search ranges and enhance population diversity. Subsequently, an adaptive perturbation mechanism is incorporated during the exploration phase along with a lens imaging reverse learning strategy to update the population, thereby improving the exploration ability and accelerating convergence while mitigating premature convergence. Lastly, an Adaptive Individual-level Mixed Strategy (AIMS) is developed to conduct a more flexible search process and enhance the algorithm’s global search capability. The performance of the algorithm is evaluated through simulation experiments using the CEC2017 benchmark test functions. The results indicate that the proposed algorithm achieves superior optimization accuracy, faster convergence speed, and enhanced robustness compared to other swarm intelligence algorithms. Specifically, MSDOA ranks first on 28 out of 29 benchmark functions in the CEC2017 test suite, demonstrating its outstanding global search capability and convergence performance. Furthermore, UAV path planning simulation experiments conducted across multiple scenario models show that MSDOA exhibits stronger adaptability to complex three-dimensional environments. In the most challenging scenario, compared to the standard DOA, MSDOA reduces the best cost function fitness by 9% and decreases the average cost function fitness by 12%, thereby generating more efficient, smoother, and higher-quality flight paths.

1. Introduction

In recent years, unmanned aerial vehicles (UAVs) have been widely applied as the underlying technology has matured, owing to advantages such as small size, light weight, strong adaptability, high concealment, and low risk [1,2]. Their application scenarios continue to expand across multiple fields, from military reconnaissance and disaster monitoring to agricultural spraying and logistics transportation [3,4,5,6]. The autonomous flight capability and exceptional mission execution efficiency of UAVs make them a key tool in numerous industries [7,8,9]. However, as the application scope widens, the challenges faced by UAVs are growing, especially path planning in complex environments [10,11]. Therefore, achieving efficient, safe, and accurate flight path planning in complex environments [12] has become one of the core issues in UAV research [13].
UAV path planning algorithms can be divided into two main types: traditional algorithms and intelligent optimization algorithms. The first category comprises traditional algorithms, including the A* algorithm [14], Dijkstra algorithm [15], artificial potential field method [16], rapidly-exploring random tree (RRT) [17], etc. The second category comprises meta-heuristic algorithms [18]. Unlike traditional algorithms that rely on specific problem models, meta-heuristic algorithms employ heuristic rules and strategies to search globally for the optimal or near-optimal solution to a given problem [19]. Consequently, research on UAV path planning has shifted from traditional algorithms to meta-heuristics [20]. Meta-heuristic algorithms solve complex problems by imitating optimization processes observed in nature or society [21]. In UAV path planning, meta-heuristics can effectively deal with uncertainty, complex environments, and dynamically changing conditions [22]. Numerous meta-heuristic algorithms, including particle swarm optimization (PSO) [23,24], Ant Colony Optimization (ACO) [25,26], Grey Wolf Optimization (GWO) [27,28], the Sparrow Search Algorithm (SSA) [29,30], and Harris hawks optimization (HHO) [31,32], have been successfully applied to UAV path planning. While these algorithms have demonstrated robust performance across a range of optimization problems, their application to UAV path planning still faces challenges, such as limited convergence accuracy and susceptibility to local optima.
Researchers have proposed various improved schemes to address the challenges of meta-heuristic algorithms in UAV path planning. Xiao et al. [33] employed a Logistic chaotic map initialization and the Nutcracker Optimization Algorithm to enhance the quality of the initial population. Wang et al. [34] combined Tent chaotic mapping and a Gaussian mutation strategy to address the slow convergence and tendency toward local optima of the traditional BKA algorithm in high-dimensional data and complex function optimization. Zhou et al. [35] introduced a nonlinear control mechanism to optimize the convergence factor of the GWO algorithm, improving the adaptability and robustness of the algorithm. Hu et al. [36] proposed a co-evolutionary multi-group particle swarm optimization (CMPSO), which improves global optimization ability by introducing two different group learning mechanisms and an activity-level-based grouping mechanism to avoid convergence to local optima. Zhang et al. [37] introduced the Cauchy mutation strategy and adaptive weights into the search process and combined them with the Sine–cosine Algorithm (SCA) to improve global search ability and convergence efficiency. Xu et al. [38] combined the whale optimization algorithm with the dung beetle optimization algorithm to improve local search ability.
The dream optimization algorithm (DOA) proposed by Gao et al. [39] in 2024 is a novel intelligent optimization algorithm inspired by the characteristics of human dreaming. The algorithm incorporates a basic memory strategy, a forgetting and replenishing strategy that balances exploration and exploitation, and a dream sharing strategy to enhance the ability to escape local optima. The optimization process is divided into two phases: exploration and exploitation. DOA exhibits advantages such as fast convergence speed, strong stability, and high optimization precision, making it suitable for complex engineering problems. However, despite its rapid convergence, the algorithm is prone to becoming trapped in local optima, and its original mechanism tends to rapidly lose population diversity, leading to a poor ability to escape local optima. This paper proposes a multi-strategy dream optimization algorithm (MSDOA), with the following key contributions:
  • A population initialization method using the Bernoulli chaotic map is employed to initialize the population, enhancing the diversity of the initial population, promoting a more even distribution across the entire search space, and expanding the coverage range.
  • The proposed adaptive hybrid perturbation mechanism dynamically adjusts disturbance parameters by combining Cauchy variation and Lévy flight strategies during the forgetting and supplementing phases of the dream process. This approach enhances the ability to explore the solution space while preserving high local search accuracy, thereby accelerating convergence.
  • To evade local optima, a lens-imaging learning strategy is employed during the exploration phase. This approach simulates the symmetric mapping of individuals in the search space to produce “mirror image” solutions, thereby improving the ability to escape local traps.
  • This study presents a new global perturbation mechanism, the Adaptive Individual-level Mixed Strategy (AIMS), aimed at improving global optimization performance. AIMS combines two individual-level perturbation strategies: a global perturbation that utilizes boundary information to expand the search space and a local perturbation that leverages differences among individuals to enhance precision.
The remainder of this paper is organized as follows: Section 2 presents the problem formulation of UAV path planning. Section 3 introduces the standard dream optimization algorithm. Section 4 outlines the technical details of the proposed MSDOA. Section 5 provides comparative experimental results and analysis between MSDOA and other state-of-the-art intelligent optimization algorithms. Section 6 summarizes the main conclusions of this work.

2. Problem Description of UAV Path Planning

UAV path planning is formulated by constructing a comprehensive cost function comprising flight path length, threat, altitude, and smoothness costs, combined with weighted constraints. The components of the cost function are calculated as follows:

2.1. Flight Path Length Cost

The flight path length cost of the UAV reflects the distance from the starting point to the destination. The coordinates of each waypoint are represented as L_ij = (x_ij, y_ij, z_ij), and the Euclidean distance between two waypoints is calculated as ‖L_ij L_{i,j+1}‖. A top view of the UAV flight path is shown in Figure 1. The flight path length cost F_distance is mathematically modeled as shown in Equation (1).
F_{distance} = \sum_{j=1}^{n-1} \left\| L_{ij} L_{i,j+1} \right\|
where L_ij and L_{i,j+1} represent the j-th and (j+1)-th path points of the i-th flight path.
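As a concrete illustration, the path-length cost of Equation (1) can be computed in a few lines. This is a sketch, not the authors' code; the (n, 3) waypoint array layout is an assumption.

```python
import numpy as np

def path_length_cost(waypoints):
    """F_distance of Equation (1): sum of Euclidean distances between
    consecutive waypoints. `waypoints` is an (n, 3) array of [x, y, z]."""
    diffs = np.diff(waypoints, axis=0)            # segment vectors L_j -> L_{j+1}
    return float(np.linalg.norm(diffs, axis=1).sum())

# Example: a 3-4-5 triangle leg followed by a vertical climb of 12
print(path_length_cost(np.array([[0, 0, 0], [3, 4, 0], [3, 4, 12]])))  # 17.0
```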

2.2. Threat Cost

For the UAV to reach the designated target safely, the primary requirement is that it does not collide with obstacles. In this paper, each threat is assumed to be a cylinder, and the entire flight area is divided into a safety area, a threat area, and a collision area. Figure 2 is a threat prediction diagram that illustrates the relationship between these three areas. The threat cost F_threat is expressed by Equation (2).
F_{threat} = \sum_{j=1}^{n-1} \sum_{k=1}^{K} T_k\left( L_{ij} L_{i,j+1} \right)
where T_k(L_ij L_{i,j+1}) represents the threat cost of path segment L_ij L_{i,j+1} with respect to the k-th threat, as defined in Equation (3).
T_k\left( L_{ij} L_{i,j+1} \right) =
\begin{cases}
\infty, & d_k \le D + R_k \\
(S + D + R_k) - d_k, & D + R_k < d_k \le S + D + R_k \\
0, & d_k > S + D + R_k
\end{cases}
where K is the number of obstacles and k is the obstacle index; C_k denotes the center of the k-th obstacle and R_k its radius; D is the diameter of the UAV; d_k is the distance between the current UAV position and the center of the k-th obstacle; and S is the critical safety distance.
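A minimal sketch of the per-point threat evaluation of Equation (3), assuming cylindrical obstacles checked in the horizontal plane; the sampling of points along each path segment is left out, and the helper name and signature are illustrative assumptions.

```python
import numpy as np

def threat_cost_at(p, centers, radii, D, S):
    """Equation (3) evaluated at a sample point p = (x, y).

    `centers`/`radii` describe cylindrical obstacles; D is the UAV diameter
    and S the critical safety distance. Collision zone -> infinite cost,
    threat zone -> linear penalty, safe zone -> zero.
    """
    cost = 0.0
    for (cx, cy), Rk in zip(centers, radii):
        dk = np.hypot(p[0] - cx, p[1] - cy)   # horizontal distance to obstacle axis
        if dk <= D + Rk:
            return float("inf")               # collision area
        if dk <= S + D + Rk:
            cost += (S + D + Rk) - dk         # threat area: linear penalty
    return cost
```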

2.3. Flight Altitude Cost

When UAVs fly at high altitudes, they are more susceptible to external environmental influences, which increases the risk of flight accidents. Conversely, flying at excessively low altitudes poses the danger of ground collisions. Therefore, the altitude cost F_altitude for flight trajectories constrained between the maximum and minimum altitudes is expressed by Equation (4). Figure 3 illustrates the altitude constraint concept, showing the allowable flight corridor between h_min and h_max, as well as the terrain-relative height h_ij.
F_{altitude} = \sum_{j=1}^{n} H_{ij}
where Hij represents the height cost of the current path point, calculated as shown in Equation (5).
H_{ij} =
\begin{cases}
\left| h_{ij} - \dfrac{h_{max} + h_{min}}{2} \right|, & h_{min} \le h_{ij} \le h_{max} \\
\infty, & \text{otherwise}
\end{cases}
where hmax and hmin represent the maximum and minimum flight altitudes of the UAV, respectively; hij denotes the altitude above ground level.
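The altitude cost of Equations (4) and (5) can be sketched as follows; the absolute deviation from the corridor midline and the infinite out-of-corridor penalty follow the piecewise definition above (helper name is an assumption).

```python
def altitude_cost(heights, h_min, h_max):
    """Equations (4)-(5): deviation from the corridor midline while the UAV
    stays inside [h_min, h_max]; an infinite cost flags a corridor violation."""
    mid = (h_max + h_min) / 2.0
    total = 0.0
    for h in heights:
        if not (h_min <= h <= h_max):
            return float("inf")     # outside the allowed flight corridor
        total += abs(h - mid)       # distance from the midline
    return total
```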

2.4. Smoothing Cost

Smoothness of the flight path during UAV operations is crucial and is primarily impacted by the yaw and pitch angles. To enhance path smoothness, a smoothing cost is incorporated into the path optimization procedure. This cost aims to minimize the curvature of the flight trajectory and mitigate abrupt variations in the yaw angle.
In Figure 4, the yaw angle ϕij indicates the deviation between vectors L i j L i , j + 1 and L i , j + 1 L i , j + 2 , representing two consecutive path segments, while the pitch angle θij signifies the angle between vector L i , j + 1 L i , j + 2 and its projection on the horizontal plane. Assuming k is the unit vector along the z-axis, the vector between two consecutive points is determined according to Equation (6). The projection vector, along with the yaw angle ϕij and pitch angle θij, are defined in Equations (7) and (8).
L'_{ij} L'_{i,j+1} = \mathbf{k} \times \left( L_{ij} L_{i,j+1} \times \mathbf{k} \right)
\phi_{ij} = \arccos \frac{ L'_{ij}L'_{i,j+1} \cdot L'_{i,j+1}L'_{i,j+2} }{ \left\| L'_{ij}L'_{i,j+1} \right\| \times \left\| L'_{i,j+1}L'_{i,j+2} \right\| }
\theta_{ij} = \arctan \frac{ z_{i,j+1} - z_{ij} }{ \left\| L'_{ij}L'_{i,j+1} \right\| }
Therefore, the smoothing cost function is expressed in Equation (9).
F_{smooth} = a_1 \sum_{j=1}^{n-2} \phi_{ij} + a_2 \sum_{j=2}^{n-1} \left| \theta_{ij} - \theta_{i,j-1} \right|
where a1 and a2 represent the penalty coefficients for yaw angle and pitch angle, respectively.
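The smoothing cost can be sketched directly from Equations (6)–(9): project each segment onto the horizontal plane, accumulate the yaw angles between consecutive projections, and penalize changes in pitch. This is an illustrative implementation, not the authors' code.

```python
import numpy as np

def smoothing_cost(waypoints, a1=1.0, a2=1.0):
    """Equation (9): sum of yaw angles between horizontally projected
    consecutive segments plus absolute changes in pitch angle."""
    pts = np.asarray(waypoints, dtype=float)
    seg = np.diff(pts, axis=0)               # consecutive segment vectors, Eq. (6)
    proj = seg.copy()
    proj[:, 2] = 0.0                         # projection onto the horizontal plane
    yaw = 0.0
    for a, b in zip(proj[:-1], proj[1:]):    # yaw angle phi_ij, Eq. (7)
        c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        yaw += np.arccos(np.clip(c, -1.0, 1.0))
    pitch = np.arctan2(seg[:, 2], np.linalg.norm(proj, axis=1))  # theta_ij, Eq. (8)
    return a1 * yaw + a2 * float(np.abs(np.diff(pitch)).sum())
```

A straight horizontal path costs zero; a single right-angle turn contributes a yaw penalty of π/2.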

2.5. Total Cost Function

The total cost function of traversing a path includes the flight path length cost, threat cost, altitude cost, and smoothing cost. The mathematical model is applicable to general path planning problems. In this study, the optimal flight path of the UAV is determined by minimizing the cost function defined in Equation (10).
F_{total} = \omega_1 F_{distance} + \omega_2 F_{threat} + \omega_3 F_{altitude} + \omega_4 F_{smooth}
where w1, w2, w3, and w4 are the weighting coefficients for the flight path length cost, threat cost, altitude cost, and smoothing cost, respectively.
Based on the principles of safety and operational efficiency, the weighting coefficients w1, w2, w3, and w4 are defined as follows:
The highest weight, w3 = 10, is assigned to the altitude cost. Considering the practical need for high-altitude flight in real-world missions, this indicates that altitude safety is given the highest priority in the path planning process.
w1 = 5 reflects that the distance cost also carries significant weight, emphasizing that while ensuring safety, it is still important to control the total flight path length to improve operational efficiency.
The lower weight of w2 = 2 suggests that most threats can be effectively avoided through altitude selection, allowing altitude and distance constraints to primarily guide the path planning process.
Finally, w4 = 1 is assigned to the smoothing cost: although smoothness contributes to the stability of the flight control system, mission safety and execution efficiency are prioritized over path smoothness in this configuration.
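Putting the pieces together, the weighted total of Equation (10) with the weights stated above is a one-line combination (sketch; the function name is an assumption):

```python
def total_cost(f_dist, f_threat, f_alt, f_smooth, w=(5.0, 2.0, 10.0, 1.0)):
    """Equation (10) with the weights chosen in the text:
    w1 = 5 (distance), w2 = 2 (threat), w3 = 10 (altitude), w4 = 1 (smoothness)."""
    w1, w2, w3, w4 = w
    return w1 * f_dist + w2 * f_threat + w3 * f_alt + w4 * f_smooth
```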

3. Standard Dream Optimization Algorithm

This section describes the fundamental principles and procedure of the standard DOA, explaining its mathematical model and analyzing its mechanisms.

3.1. Initialization

Population initialization is a crucial step in the dream optimization algorithm. The initial population is generated within the search space to start the optimization process, as given in Equation (11).
X_i = X_l + rand \times (X_u - X_l), \quad i = 1, 2, \ldots, N
where N represents the number of individuals in the population, Xi denotes the i-th individual in the population, Xl and Xu indicate the lower and upper boundaries of the search space, and rand is a D-dimensional vector where each dimension is a random number between 0 and 1.
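Equation (11) amounts to uniform sampling of the search box; a minimal sketch (the `seed` parameter is an addition for reproducibility):

```python
import numpy as np

def init_population(N, dim, xl, xu, seed=None):
    """Equation (11): uniform random initialization; each row is one individual,
    and each entry lies between the corresponding bounds xl and xu."""
    rng = np.random.default_rng(seed)
    return xl + rng.random((N, dim)) * (xu - xl)
```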

3.2. Exploration Phase

The exploration phase of the algorithm begins by partitioning the population into five distinct groups based on variations in memory capacity. These differences in memory capability are reflected in the parameter kq, which represents the number of forgotten dimensions for each group. Prior to the “dreaming” event, a memory strategy is applied, where all individuals in a group observe the best-performing individual from previous iterations. Subsequently, the forgetting and replenishment strategy, as well as the dream-sharing strategy, are executed. Accounting for the disparities in memory capacity among individuals, each individual randomly selects certain information dimensions to forget, referred to as “forgotten dimensions.” During subsequent updates, individuals adjust their positions solely along these marked forgotten dimensions.

3.2.1. Memory Strategies

Dreams are inherently connected to existing memories. Prior to dreaming, the individuals of group q recall the position of the best individual within the group and adjust their own positions accordingly. The update formula is shown in Equation (12).
X_i^{t+1} = X_{best_q}^{t}
where X i t + 1 represents the position of the i-th individual at time t + 1, X b e s t q t denotes the position of the best individual in the q-th group at time t, and q = 1, 2, 3, 4, 5 indicates the group number.

3.2.2. Forgetting and Supplementation Strategy

The forgetting and supplementation strategy exploits disparities in memory to stochastically discard and replenish specific dimensions, thereby augmenting both global and local search capacities. The position update equation is presented in Equation (13).
X_{i,j}^{t+1} = X_{best_q,j}^{t} + \left( X_{l,j} + rand \times (X_{u,j} - X_{l,j}) \right) \times \frac{1}{2} \left( \cos\left( \pi \times \frac{t + T_{max} - T_d}{T_{max}} \right) + 1 \right)
where X i , j t + 1 denotes the position of the i-th individual in the j-th dimension at iteration t + 1, X b e s t q , j t represents the position of the best individual in the q-th group in the j-th dimension at iteration t, X u , j and X l , j denote the upper and lower bounds of the search space in the j-th dimension, t is the current iteration count, T max is the maximum number of iterations, and T d is the maximum iteration count during the exploration phase.
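A sketch of one forgotten-dimension update of Equation (13); the function name and the one-dimension-at-a-time interface are assumptions for illustration.

```python
import numpy as np

def forget_supplement(best_q, j, xl, xu, t, T_max, T_d, seed=None):
    """Equation (13): regenerate forgotten dimension j around the group best,
    scaled by the cosine decay term; other dimensions are left unchanged."""
    rng = np.random.default_rng(seed)
    decay = 0.5 * (np.cos(np.pi * (t + T_max - T_d) / T_max) + 1.0)
    new = best_q.copy()
    new[j] = best_q[j] + (xl[j] + rng.random() * (xu[j] - xl[j])) * decay
    return new
```

Note the schedule: at t = T_d the cosine term reaches −1 and the decay vanishes, so updates late in the exploration phase stay close to the group best.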

3.2.3. Dream-Sharing Strategies

The dream-sharing strategy in DOA also follows the memory strategy, allowing individuals to randomly acquire positional information from others in the forgetting dimension. The update formula is represented by Equation (14).
X_{i,j}^{t+1} =
\begin{cases}
X_{m,j}^{t+1}, & m < i \\
X_{m,j}^{t}, & i < m \le N
\end{cases}, \quad j = K_1, K_2, \ldots, K_{k_q}
where m ≠ i is an individual index randomly selected from 1 to N.

3.3. Exploitation Phase

The exploitation phase no longer involves further grouping. Instead, the optimal solution from the exploration phase is selected, and the position of each individual within the forgetting dimensions is subsequently updated, following an approach similar to Equations (12) and (13). Equation (15) shows that in dimensions other than K1, K2, …, Kk, individuals retain the position of the global best solution from previous iterations, effectively preserving this information during the dreaming phase. In contrast, Equation (16) illustrates that in dimensions K1, K2, …, Kk, individuals discard this information and instead regenerate new positions through self-organization. The updates are given by Equations (15) and (16).

3.3.1. Memory Strategies

X_i^{t+1} = X_{best}^{t}
where X i t + 1 denotes the ith individual at iteration t + 1; X b e s t t represents the best individual of the whole population at iteration t.

3.3.2. Forgetting and Supplementation Strategy

X_{i,j}^{t+1} = X_{best,j}^{t} + \left( X_{l,j} + rand \times (X_{u,j} - X_{l,j}) \right) \times \frac{1}{2} \left( \cos\left( \frac{\pi t}{T_{max}} \right) + 1 \right)
where X i , j t + 1 represents the position of the i-th individual in the j-th dimension at iteration t + 1; X b e s t , j t represents the position of the best individual of the entire population in the j-th dimension at iteration t.

4. Multi-Strategy Dream Optimization Algorithm

This section establishes the mathematical model of the multi-strategy dream optimization algorithm based on the analysis of the basic version presented in the previous section. It provides a detailed description of the improved mechanisms and presents the corresponding pseudo-code and flowchart.

4.1. Improved Algorithm Based on Bernoulli Chaotic Map

The initial population of the dream optimization algorithm is randomly generated, often resulting in uneven distribution and reduced population diversity. This, in turn, can negatively impact the algorithm’s convergence speed. To address this issue, a chaos mapping mechanism is employed to enhance population diversity and improve algorithmic efficiency. The nonlinear and ergodic properties of chaotic maps enable the generation of more complex and effective search results. The Logistic, Tent, and Bernoulli chaotic maps are commonly used for population initialization. Compared to the other maps, the Bernoulli chaotic map exhibits a wider search range and can generate initial solutions more uniformly. Bernoulli chaotic map [40] initialization produces a more uniform sequence than random initialization, facilitating faster searches for optimal solutions and mitigating the risk of becoming trapped in local minima.
The present work employs the Bernoulli chaotic map to initialize the population, a strategy that yields a more uniform population distribution and rapidly generates initial path points exhibiting strong randomness. The obtained values are then projected into the chaotic variable space using the Bernoulli mapping relationship. The specific expression of this mapping is presented in Equation (17).
x(t+1) =
\begin{cases}
\dfrac{x(t)}{1 - \alpha}, & 0 \le x(t) \le 1 - \alpha \\
\dfrac{x(t) - (1 - \alpha)}{\alpha}, & 1 - \alpha < x(t) \le 1
\end{cases}
where x(t) is the chaotic variable at time t and α is the control parameter of the map.
The generated chaotic values are then mapped into the initial population of the algorithm through linear transformation, with the mapping formula given in Equation (18).
X = X_l + x \times (X_u - X_l)
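Equations (17) and (18) can be sketched as follows; the values of α and the seed x0 are illustrative assumptions (the map stays in [0, 1] for any α in (0, 1)):

```python
import numpy as np

def bernoulli_sequence(n, alpha=0.4, x0=0.3):
    """Iterate the Bernoulli chaotic map of Equation (17)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = x / (1 - alpha) if x <= 1 - alpha else (x - (1 - alpha)) / alpha
        xs[i] = x
    return xs

def chaotic_init(N, dim, xl, xu, alpha=0.4):
    """Equation (18): linearly map the chaotic sequence into the search space."""
    seq = bernoulli_sequence(N * dim, alpha).reshape(N, dim)
    return xl + seq * (xu - xl)
```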

4.2. Adaptive Hybrid Perturbation Strategy

During the forgetting and supplementation phase, this paper introduces an adaptive hybrid perturbation mechanism to enhance coverage of the solution space and further improve search efficiency. The mechanism integrates three perturbation methods:
  • A basic uniform random perturbation to enhance population diversity;
  • A Cauchy mutation [41] factor Cy to leverage its heavy-tailed distribution and improve local escape capabilities, Cy as shown in Equation (19);
C_y = \frac{1}{2} + \frac{1}{2} \times \tan\left( 0.5\pi \times (rand - 0.5) \right)
  • The incorporation of a Lévy flight-based perturbation RL [42], which enables long-distance jumps and improves global exploration. The mathematical expression for RL is given in Equation (20), as follows:
R_L = l \times levy(\lambda)
where l represents the step length control parameter. levy(λ) is a path that obeys the Lévy distribution, which represents the introduced Lévy flight strategy.
The selection of these perturbation strategies is governed by an adaptive probability control mechanism, which dynamically adjusts the selection probabilities based on the current iteration progress. This adaptive scheduling enables a smooth transition from broad exploration in the early stages to refined exploitation in the later phases of optimization. μ represents the perturbation term generated according to the adaptive strategy.
Under this mechanism, the update rule for the i-th individual and the j-th variable at iteration t is defined in Equation (21).
X_{i,j}^{t+1} = X_{best_q,j}^{t} + \left( X_{l,j} + \mu \times (X_{u,j} - X_{l,j}) \right) \times \frac{1}{2} \left( \cos\left( \pi \times \frac{t + T_{max} - T_d}{T_{max}} \right) + 1 \right)
where μ update formula in the equation is as shown in Equation (22):
\mu =
\begin{cases}
rand, & h_1 < P_{rand}^{i} \\
C_y, & h_1 < P_{rand}^{i} + P_{cauchy}^{i} \\
R_L, & \text{otherwise}
\end{cases}
where h1 is a uniformly distributed random number in [0, 1], and P^i_rand, P^i_cauchy, and 1 − P^i_rand − P^i_cauchy are the dynamically adjusted selection probabilities of the three perturbation types at iteration i.
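The selection of the perturbation term μ in Equation (22) can be sketched as below. The concrete probability schedule (uniform probability fading linearly, a fixed Cauchy share, the remainder Lévy), the Lévy exponent β, and the step scale l are illustrative assumptions; the paper only states that the probabilities adapt with iteration progress.

```python
import math
import numpy as np

def perturbation_mu(t, T_max, seed=None, beta=1.5, l=0.05):
    """Draw mu per Equation (22): uniform noise, Cauchy mutation (Eq. 19),
    or a Levy flight step (Eq. 20), chosen by adaptive probabilities."""
    rng = np.random.default_rng(seed)
    p_rand = 0.6 * (1.0 - t / T_max)   # assumed schedule: favor uniform noise early
    p_cauchy = 0.3                     # assumed fixed Cauchy share
    h1 = rng.random()
    if h1 < p_rand:
        return rng.random()                                        # uniform term
    if h1 < p_rand + p_cauchy:                                     # Cauchy, Eq. (19)
        return 0.5 + 0.5 * math.tan(0.5 * math.pi * (rng.random() - 0.5))
    # Levy flight step of Eq. (20), generated via Mantegna's algorithm
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = rng.normal(0.0, sigma), rng.normal(0.0, 1.0)
    return l * u / abs(v) ** (1 / beta)
```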

4.3. Lens Imaging Learning Strategy for Population Update

The original dream optimization algorithm exhibits strong exploitation capability but is prone to falling into local optima. To address this issue, this paper introduces a lens imaging reverse learning strategy into the population update process. Compared with traditional opposition-based learning, lens imaging reverse learning incorporates a scaling factor k, as illustrated in Figure 5. A reverse solution of the current solution is generated through the inverse operation, its fitness is compared with that of the original solution, and the better of the two is retained, thereby enhancing the algorithm’s probability of escaping local optima. The lens imaging reverse learning strategy is described in Equation (23), and the scaling coefficient k is given in Equation (24).
X_{i,j}^{*}(t) = \frac{X_u + X_l}{2} + \frac{X_u + X_l}{2k} - \frac{X_{i,j}(t)}{k}
k = \left( 1 + \left( t/T \right)^{0.5} \right)^{10}
where X_{i,j}^{*} is the generated reverse solution and X_{i,j}(t) is the current solution.
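Equations (23) and (24) combine into a tiny helper (a sketch; the function name is an assumption):

```python
def lens_reverse(x, xl, xu, t, T):
    """Equations (23)-(24): lens-imaging reverse solution with scaling factor k."""
    k = (1.0 + (t / T) ** 0.5) ** 10    # k grows from 1 to 2**10 over the run
    mid = (xu + xl) / 2.0
    return mid + mid / k - x / k
```

Early in the run (k ≈ 1) this reduces to classic opposition-based learning, X* = (Xu + Xl) − X; as k grows, the reverse solution contracts toward the midpoint of the search interval, giving finer late-stage perturbations.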

4.4. Adaptive Individual-Level Mixed Strategy

The proposed Adaptive Individual-level Mixed Strategy (AIMS) addresses the limitations of the forgetting and supplementation strategy and the dream-sharing strategy in the DOA. These mechanisms rely heavily on local perturbations, which can lead to convergence at local optima and restrict the algorithm’s global search capability. AIMS integrates two distinct perturbation mechanisms at the individual level to enhance global optimization performance. The first employs a global perturbation based on boundary information and chaotic sequences, aiming to expand the search scope. The second utilizes local perturbations based on differential information among individuals, targeting improved search precision. A greedy selection mechanism retains the individual with superior fitness from the perturbation candidates, ensuring the quality of the perturbation operations. Compared with fixed perturbation methods, AIMS effectively disrupts the local convergence of individuals and reconstructs perturbation paths, enhancing the algorithm’s ability to escape local optima and improving the efficiency and diversity of its exploration of the solution space. The key to AIMS lies in applying appropriate perturbations to the population so that it is dispersed more widely in the solution space, avoiding concentration in particular regions and increasing the algorithm’s global search capability. The mathematical formulation of this strategy is presented in Equation (25).
x_{i,j}^{t+1} =
\begin{cases}
x_{i,j}^{t} + \left( 1 - \dfrac{t}{T_{max}} \right)^{\frac{2t}{T_{max}}} \times \left( lb + rand \times (ub - lb) \times H \right), & rand \le \tau \\
x_{i,j}^{t} + \left[ rand \times (1 - rand) + rand \right] \times \left( x_{w,j}^{t} - x_{v,j}^{t} \right), & rand > \tau
\end{cases}
where H is a random binary value (0 or 1), τ is the probability of selecting the global perturbation, and x_{w,j}^t and x_{v,j}^t are two distinct individuals randomly selected from the population.
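One AIMS step in the spirit of Equation (25), with the greedy selection described above, can be sketched as follows. The value of τ, the decay exponent, and the bound clipping are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np

def aims_update(x, fitness, xl, xu, t, T_max, pop, tau=0.5, seed=None):
    """Apply one mixed perturbation (Equation (25)) to individual x and keep
    the better of the candidate and the original (greedy selection)."""
    rng = np.random.default_rng(seed)
    if rng.random() <= tau:
        # Global perturbation built from boundary information
        H = rng.integers(0, 2)                           # random 0/1 switch
        decay = (1.0 - t / T_max) ** (2.0 * t / T_max)   # assumed decay schedule
        cand = x + decay * (xl + rng.random(x.size) * (xu - xl) * H)
    else:
        # Local perturbation from differences between two random individuals
        w, v = rng.choice(len(pop), size=2, replace=False)
        r = rng.random()
        cand = x + (r * (1.0 - r) + r) * (pop[w] - pop[v])
    cand = np.clip(cand, xl, xu)                         # boundary handling (added)
    return cand if fitness(cand) < fitness(x) else x     # greedy: keep the better
```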

4.5. The MSDOA Algorithm Flow

Combining the mathematical models of the strategies described above, Figure 6 presents the overall flowchart of MSDOA. The execution of MSDOA can be summarized in the following seven steps:
Step 1: Initialize the parameters: population size N, maximum number of iterations Tmax, lower variable limits Xl, upper variable limits Xu, problem dimension Dim, and the demarcation iteration Td between the exploration and exploitation phases.
Step 2: By means of Equation (17), incorporate the Bernoulli chaotic map to initialize the population.
Step 3: If the current iteration number t < Td, enter the exploration phase; otherwise, enter the exploitation phase.
Step 4: During the exploration phase, the group memory strategy is implemented to identify the optimal individual within each group. Two random numbers, u1 and u2, are then drawn. If u1 < u2, the adaptive perturbation strategy is incorporated into the dream forgetting and supplementation update; otherwise, the dream-sharing strategy directs the individual’s search. Following this, the sub-populations are consolidated, and the lens imaging reverse learning strategy of Equation (23) is employed to enhance the search capability of the population.
Step 5: In the exploitation phase, the memory strategy is adopted directly, and then the dream forgetting and supplementation strategy is implemented to improve local search ability.
Step 6: At the end of each iteration, Adaptive Individual-level Mixed Strategy updates the population positions according to Equation (25), further avoiding local optima and improving the algorithm’s global search capability.
Step 7: Verify whether the current iteration number equals the maximum Tmax. If so, conclude the process and display the global optimal solution; otherwise, advance to step 3 and proceed with the subsequent iteration.
The detailed procedural steps of MSDOA are further illustrated in the corresponding pseudo-code, as shown in Algorithm 1.
Algorithm 1: Pseudo-code of MSDOA
Input: Initialize parameters N, Tmax, Xl, Xu, Dim, Td.
Output: The global best solution Xgbest and f(Xgbest)
1: Initialize the population P = {X1, X2, …, XN} with the Bernoulli chaotic map using Equations (17) and (18)
2: while t < Tmax do
3:     while t < Td do
4:         Update the best solution Xgbest and the minimum fitness Fitnessmin
5:         for q = 1 to 5 do
6:             Update the best solution Xbestq of group q and Fitnessminq
7:             Update Xi,j using Equation (12)
8:             for i = ((q − 1)/5 × N) + 1 to q/5 × N do
9:                 if u1 < u2 then
10:                    Update Xi,j using Equation (21); check the bounds of Xi,j
11:                else
12:                    Update Xi,j using Equation (14); check the bounds of Xi,j
13:                end if
14:                Calculate X*i,j using Equation (23)
15:                if f(Xi,j) > f(X*i,j) then Xi,j = X*i,j end if
16:            end for
17:        end for
18:        t = t + 1
19:    end while
20:    while t ≥ Td and t < Tmax do
21:        Update Xi using Equation (15)
22:        for i = 1 to N do
23:            Update Xi,j using Equation (16); check the bounds of Xi,j
24:        end for
25:        t = t + 1
26:    end while
27:    Update the population with the Adaptive Individual-level Mixed Strategy using Equation (25)
28: end while

5. Simulation and Results Analysis

The present study evaluated the performance of the MSDOA algorithm against a diverse set of widely adopted optimization algorithms, including the classical and well-established particle swarm optimization (PSO) [43], Grey wolf optimizer (GWO) [44], Harris hawks optimization algorithm (HHO) [45], as well as the recently published and highly competitive Crested Porcupine Optimizer (CPO) [46], BKA [47], and Sand Cat swarm optimization (SCSO) [48]. The assessment was conducted using standard benchmark test functions and path planning simulation experiments, examining key metrics such as global search capability, convergence rate, and stability. This comprehensive comparative analysis aimed to elucidate the advantages of the MSDOA approach in addressing complex optimization challenges.

5.1. Comparison of Algorithms in the CEC2017 Test Set

To further evaluate the efficacy of the MSDOA algorithm on high-dimensional and large-scale test functions, benchmark functions from the CEC2017 test suite are used for simulation testing. This test suite encompasses a variety of function types, such as single-peak, multi-peak, mixed, and compound functions, exhibiting high complexity. By utilizing this diverse set, the adaptability and optimization efficiency of the algorithm can be assessed across problems with varying characteristics. To ensure the robustness and accuracy of our analysis, the maximum number of iterations during the exploration phase was set to Td = 0.9 × Tmax. Standardized testing protocols and datasets were employed, with each test function executed independently 30 times over 500 iterations to minimize the impact of randomness on the results.
The optimization performance of the various algorithms was assessed through three key metrics: the best fitness value (Best), the mean error (Mean), and the standard error (Std). As shown in Table 1, the MSDOA algorithm ranked first on twenty-eight of the twenty-nine benchmark test functions and third on the F14 benchmark, demonstrating that the incorporation of a multi-strategy optimization approach into the basic dream optimization algorithm significantly enhanced the optimization accuracy and search speed. Notably, the MSDOA algorithm exhibited significantly superior mean error and standard error values compared to the other optimization algorithms, highlighting its exceptional stability, robustness, and overall performance across diverse test cases.
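The three metrics can be reproduced with a small aggregation routine. The sketch below is an assumption about the aggregation procedure, not the authors' code: `summarize_runs` is a hypothetical helper that takes the best fitness from each of the 30 independent runs and reports Best, Mean, and Std (here, the sample standard deviation).

```python
import numpy as np

def summarize_runs(best_fitness_per_run):
    """Best / Mean / Std summary for one benchmark function.

    Each entry of `best_fitness_per_run` is the best fitness found in one
    of the 30 independent runs (500 iterations each)."""
    v = np.asarray(best_fitness_per_run, dtype=float)
    return {"Best": float(v.min()),
            "Mean": float(v.mean()),
            "Std": float(v.std(ddof=1))}   # sample standard deviation
```

Applying this per function and per algorithm yields one Best/Mean/Std triple per table cell group, as in Table 1.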
Figure 7 presents the convergence curves of the various optimization algorithms across different test functions, with the MSDOA algorithm depicted in red. The MSDOA algorithm demonstrates a rapid decrease in fitness, achieving substantially lower levels at the maximum number of iterations and outperforming the other optimization algorithms. This indicates that MSDOA not only swiftly converges to an optimal solution but also surpasses the other algorithms in optimization efficiency and accuracy. Specifically, MSDOA shows extremely fast convergence and a very low final error on the unimodal functions F1, F3, and F6, indicating good local search accuracy. On the multimodal functions F5, F7, F9, and F10, it effectively escapes local optima and maintains a continuous optimization trend. For the complex-structured mixed functions F12, F13, and F20, MSDOA adaptively adjusts its strategy across different function regions to achieve global convergence. Even on the most challenging composite functions F23, F26, and F30, it maintains a stable decline and eventually reaches an excellent solution level, demonstrating strong robustness and generalization ability.
The comprehensive analysis presented in the rank analysis of Figure 8a and the radar chart of Figure 8b conclusively demonstrates the superior performance and remarkable stability of the MSDOA algorithm across the 29 test functions. With an outstanding average ranking of 1.07, MSDOA clearly surpasses all other algorithms, emphasizing its exceptional optimization capabilities. Moreover, the uniform distribution of the radar chart and the centralized vertices further reinforce the algorithm’s excellent stability, highlighting its optimal optimization performance in the CEC2017 benchmark test.
These results clearly demonstrate that MSDOA exhibits strong adaptability, high search efficiency, and robust global optimization capability on the CEC2017 test set, making it well-suited for a wide range of complex real-world optimization problems.

5.2. Performance Test and Analysis of UAV Track Planning Under Different Algorithms

5.2.1. Performance Analysis Under Different Obstacles

Three scenarios were developed to evaluate the effectiveness of the algorithm in simulating UAV paths in three-dimensional mountainous terrain. Table 2 provides a detailed overview of the scenarios. In each scenario, UAV paths were planned with six trajectory points, navigating through different obstacle densities: three obstacles in scenario 1, six in scenario 2, and nine in scenario 3. These scenarios were designed to assess the performance of the MSDOA algorithm in environments of increasing complexity, ranging from simple to highly complex terrain. The minimum flying altitude was set at 100 m, while the maximum altitude was capped at 300 m. Both the maximum turning angle and the maximum climbing angle were restricted to 45° to ensure optimal maneuverability and flight safety.
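The flight constraints above (altitude between 100 m and 300 m, turning and climbing angles capped at 45°) can be expressed as a simple feasibility check. The function below is an illustrative sketch under those stated limits, not the paper's cost model; `path_feasible` and its angle conventions (horizontal turning angle between consecutive segments, climb angle from the horizontal) are assumptions.

```python
import math

MIN_ALT, MAX_ALT = 100.0, 300.0   # minimum / maximum flying altitude (m)
MAX_TURN = 45.0                   # maximum turning angle (degrees)
MAX_CLIMB = 45.0                  # maximum climbing angle (degrees)

def path_feasible(waypoints):
    """Check altitude, turning-angle, and climb-angle constraints for a
    polyline path given as a list of (x, y, z) waypoints."""
    # altitude constraint at every waypoint
    if any(not (MIN_ALT <= z <= MAX_ALT) for _, _, z in waypoints):
        return False
    # turning angle between consecutive horizontal segments
    for a, b, c in zip(waypoints, waypoints[1:], waypoints[2:]):
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - b[0], c[1] - b[1])
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 and n2:
            cosang = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
            cosang = max(-1.0, min(1.0, cosang))   # guard rounding
            if math.degrees(math.acos(cosang)) > MAX_TURN:
                return False
    # climb angle of every segment relative to the horizontal
    for a, b in zip(waypoints, waypoints[1:]):
        horiz = math.hypot(b[0] - a[0], b[1] - a[1])
        if math.degrees(math.atan2(abs(b[2] - a[2]), horiz)) > MAX_CLIMB:
            return False
    return True
```

A candidate set of trajectory points that fails this check would be penalized or rejected during the search.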
As the number of obstacles increases, the algorithm’s search difficulty also rises. Figure 9, Figure 10 and Figure 11 illustrate the flight path’s side view, top view, and convergence curve for each scenario, respectively. Table 3 presents the cost function fitness values for each optimization algorithm across the scenarios. With heightened environmental complexity, the paths generated by each algorithm become more convoluted, underscoring their differences in adaptability. In Figure 9a, within a simple environment, all algorithms successfully avoid the obstacles; although PSO and CPO produce more winding paths, the optimal solutions across algorithms show little variance. Notably, as depicted in Figure 9c, MSDOA converges more rapidly at the onset of the iterations, demonstrating superior optimization efficiency and maintaining robust planning capability in simpler environments.
Figure 10 illustrates the planning outcomes in complex environments, showcasing the diverse paths taken by the various algorithms and their fluctuations in adaptability. The results suggest that most algorithms struggle with accuracy and robustness in high-complexity settings. The DOA algorithm stands out by achieving an optimal value of 6410, while the MSDOA algorithm surpasses all others with an optimal value of 5941; remarkably, MSDOA reaches this value in only around 150 iterations, highlighting its exceptional search efficiency. Figure 11 depicts the path planning results of the different optimization algorithms in highly complex scenarios. As the environmental complexity increases, the performance gaps between the algorithms become more pronounced. Particularly noteworthy are the paths generated by the BKA and PSO algorithms, which exceed the constraints of the map. Table 3 provides a comparative analysis showing that MSDOA reduces the cost function fitness by 7.81% compared to DOA, and by 13.3%, 10.7%, 11.6%, 12.1%, 15.8%, and 8.48% compared to PSO, HHO, GWO, CPO, BKA, and SCSO, respectively. These results are achieved while ensuring rapid and stable convergence across iterations. Additionally, the stability of MSDOA, as indicated by its mean and standard deviation, underscores its robustness and adaptability in handling highly complex tasks.
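The reported reductions follow directly from the best fitness values in Table 3 for scenario 3, computed as (baseline − MSDOA) / baseline × 100; for example, the DOA figure reproduces as roughly 7.81%:

```python
def percent_reduction(baseline, improved):
    """Relative reduction of the improved algorithm's best cost with
    respect to a baseline algorithm, in percent."""
    return (baseline - improved) / baseline * 100.0

# Best fitness values in scenario 3 from Table 3 (number of waypoints = 6)
msdoa, doa, pso = 6.37e3, 6.91e3, 7.35e3
r_doa = percent_reduction(doa, msdoa)   # ≈ 7.81
r_pso = percent_reduction(pso, msdoa)   # ≈ 13.33
```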

5.2.2. Performance Analysis with Different Numbers of Waypoints

The number of waypoints has a significant impact on the computational efficiency and trajectory search performance of the algorithm. Table 4 shows the cost function fitness values for each optimization algorithm in each scenario when the number of waypoints is increased from 6 to 12. The results indicate that the fitness values obtained by all algorithms rise, but the MSDOA algorithm demonstrates stronger adaptability and reliability. As illustrated in Figure 12, Figure 13 and Figure 14, the flight paths generated by the algorithms become more tortuous as the number of waypoints increases. In Scenario 1, the MSDOA-planned flight path is smoother than the paths generated by the other algorithms. Furthermore, comparing the UAV routes planned by the different algorithms in Figure 13 and Figure 14 with those in Figure 10 and Figure 11 shows that the paths generated by some inferior algorithms exceed the set constraint range and exhibit more turns and detours. In contrast, the MSDOA algorithm can still avoid obstacles and find the shortest, smoothest path, demonstrating its superior path planning capability. In addition, in the most complex scenario, MSDOA achieved a 9% reduction in optimal fitness and a 12% reduction in average fitness compared to the standard DOA.
The box plot in Figure 15 illustrates that the MSDOA algorithm exhibits minimal cost fluctuations, indicative of its robust and scalable performance across complex environments. Figure 15a–c corresponds to scenarios with 6 waypoints, while Figure 15d–f represents scenarios with 12 waypoints. In both cases, the median cost of MSDOA is notably lower than that of the comparative algorithms, including PSO, HHO, and BKA. Additionally, the interquartile range of MSDOA remains consistently smaller, demonstrating its stability. Despite increasing scenario complexity or the number of waypoints, MSDOA exhibits significantly improved global search capabilities in complex optimization environments, attains superior optimization accuracy, and effectively mitigates the risk of converging to local optima. These findings confirm its efficacy in tackling UAV path planning challenges.
In conclusion, the MSDOA algorithm demonstrates robust convergence in addressing the three-dimensional path planning problem for drones in complex environments. Initially, it leverages a Bernoulli chaotic map to generate an improved initial solution. During exploration, adaptive perturbation and the Lens Imaging Learning Strategy facilitate rapid convergence to the optimal solution. Ultimately, the Adaptive Individual-level Mixed Strategy significantly enhances solution accuracy at final convergence, maintaining strong performance across diverse scenarios.
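The Bernoulli chaotic map mentioned above is commonly implemented as a piecewise-linear map. Below is a sketch of population initialization under that common formulation; the control parameter λ = 0.4 is a conventional choice assumed here, not a value stated in this excerpt.

```python
import numpy as np

def bernoulli_init(N, dim, lb, ub, lam=0.4, seed=0):
    """Population initialization via the Bernoulli chaotic map:
        x_{k+1} = x_k / (1 - lam)          if x_k <= 1 - lam,
                  (x_k - (1 - lam)) / lam  otherwise.
    Chaotic values in (0, 1) are then mapped affinely into [lb, ub]."""
    rng = np.random.default_rng(seed)
    x = rng.random(dim)                  # random seed values in (0, 1)
    pop = np.empty((N, dim))
    for i in range(N):
        x = np.where(x <= 1 - lam, x / (1 - lam), (x - (1 - lam)) / lam)
        pop[i] = lb + x * (ub - lb)      # scale chaotic values to the bounds
    return pop
```

Compared with purely uniform sampling, the chaotic sequence spreads consecutive individuals more ergodically over the search space, which is the diversity effect the initialization strategy targets.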

6. Conclusions

In this study, the multi-strategy dream optimization algorithm (MSDOA) demonstrates robust performance in solving complex 3D path planning problems for UAVs. The algorithm employs a Bernoulli chaotic map to initialize the population, widening individual search ranges and enhancing population diversity. Furthermore, it incorporates an adaptive disturbance mechanism and a lens imaging reverse learning strategy during the exploration phase to update the population, thereby improving exploration ability and accelerating convergence while mitigating premature convergence. Additionally, an Adaptive Individual-level Mixed Strategy (AIMS) is developed to conduct a more flexible search process and enhance the algorithm’s global search capability. To validate the effectiveness of these improvements, comparative experiments were conducted on the CEC2017 benchmark functions. The results indicate that the MSDOA algorithm outperforms mainstream swarm intelligence algorithms in convergence speed and optimal value search ability, particularly when addressing multi-peak, nonlinear, and high-dimensional optimization problems. Moreover, the MSDOA algorithm is applied to UAV 3D path planning problems under various scene models, and its performance is compared with that of other algorithms. The simulation results demonstrate that the MSDOA algorithm achieves more efficient and higher-quality path planning in complex 3D environments, resulting in smoother and safer UAV paths.

Author Contributions

Conceptualization, X.Y.; software, W.G. and S.Z.; validation, X.Y. and S.Z.; formal analysis, Z.F.; investigation, L.L.; resources, L.L. and X.W.; data curation, P.L.; writing—original draft preparation, T.J. and W.G.; writing—review and editing, S.Z. and X.W.; visualization, Z.F. and W.G.; supervision, X.Y. and T.J.; project administration, S.Z. and W.G.; funding acquisition, Z.F. and X.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Open Project of the Shijiazhuang Key Laboratory of Intelligent Research on VTOL Fixed-Wing UAVs and the Science Research Project of the Hebei Education Department, under grant numbers KF2024-1 and QN2023137.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Available upon request.

Acknowledgments

The authors would like to thank the anonymous reviewers and external experts for their valuable feedback and suggestions, which greatly helped improve the quality of this work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Mohsan, S.A.; Khan, M.A.; Noor, F.; Ullah, I.; Alsharif, M.H. Towards the unmanned aerial vehicles (UAVs): A comprehensive review. Drones 2022, 6, 147. [Google Scholar] [CrossRef]
  2. Javed, S.; Hassan, A.; Ahmad, R.; Ahmed, W.; Ahmed, R.; Saadat, A.; Guizani, M. State-of-the-Art and Future Research Challenges in UAV Swarms. IEEE Internet Things J. 2024, 11, 19023–19045. [Google Scholar] [CrossRef]
  3. Zhang, X.; Zheng, J.; Su, T.; Ding, M.; Liu, H. An effective dynamic constrained two-archive evolutionary algorithm for cooperative search-track mission planning by UAV swarms in air intelligent transportation. IEEE Trans. Intell. Transp. Syst. 2023, 25, 944–958. [Google Scholar] [CrossRef]
  4. Zhang, X.; Fu, Q.Y.; Li, J.; Zhao, W.J. Cooperative Search-Track Mission Planning for Multi-UAV Based on a Distributed Approach in Uncertain Environment. In Proceedings of 2021 5th Chinese Conference on Swarm Intelligence and Cooperative Control; Springer Nature: Singapore, 2022; pp. 526–535. [Google Scholar]
  5. Qu, C.; Boubin, J.; Gafurov, D.; Zhou, J.; Aloysius, N.; Nguyen, H.; Calyam, P. UAV Swarms in Smart Agriculture: Experiences and Opportunities. In Proceedings of the 2022 IEEE 18th International Conference on e-Science (e-Science), Salt Lake City, UT, USA, 11–14 October 2022; pp. 148–158. [Google Scholar]
  6. Lu, J.; Liu, Y.; Jiang, C.; Wu, W. Truck-drone joint delivery network for rural area: Optimization and implications. Transp. Policy 2025, 163, 273–284. [Google Scholar] [CrossRef]
  7. Baniasadi, P.; Foumani, M.; Smith-Miles, K.; Ejov, V. A transformation technique for the clustered generalized traveling salesman problem with applications to logistics. Eur. J. Oper. Res. 2020, 285, 444–457. [Google Scholar] [CrossRef]
  8. Katkuri, A.V.R.; Madan, H.; Khatri, N.; Abdul-Qawy, A.S.H.; Patnaik, K.S. Autonomous UAV navigation using deep learning-based computer vision frameworks: A systematic literature review. Array 2024, 23, 100361. [Google Scholar] [CrossRef]
  9. Meng, W.; Zhang, X.; Zhou, L.; Guo, H.; Hu, X. Advances in UAV Path Planning: A Comprehensive Review of Methods, Challenges, and Future Directions. Drones 2025, 9, 376. [Google Scholar] [CrossRef]
  10. Zhang, D.; Xuan, Z.; Zhang, Y.; Yao, J.; Li, X.; Li, X. Path planning of unmanned aerial vehicle in complex environments based on state-detection twin delayed deep deterministic policy gradient. Machines 2023, 11, 108. [Google Scholar] [CrossRef]
  11. Gómez Arnaldo, C.; Zamarreño Suárez, M.; Pérez Moreno, F.; Delgado-Aguilera Jurado, R. Path Planning for Unmanned Aerial Vehicles in Complex Environments. Drones 2024, 8, 288. [Google Scholar] [CrossRef]
  12. Liu, J.; Luo, W.; Zhang, G.; Li, R. Unmanned Aerial Vehicle Path Planning in Complex Dynamic Environments Based on Deep Reinforcement Learning. Machines 2025, 13, 162. [Google Scholar] [CrossRef]
  13. He, Y.; Hou, T.; Wang, M. A new method for unmanned aerial vehicle path planning in complex environments. Sci. Rep. 2024, 14, 9257. [Google Scholar] [CrossRef]
  14. Wang, J.; Li, Y.; Li, R.; Chen, H.; Chu, K. Trajectory planning for UAV navigation in dynamic environments with matrix alignment Dijkstra. Soft Comput. 2022, 26, 12599–12610. [Google Scholar] [CrossRef]
  15. Fan, J.; Chen, X.; Liang, X. UAV trajectory planning based on bi-directional APF-RRT* algorithm with goal-biased. Expert. Syst. Appl. 2023, 213, 119137. [Google Scholar] [CrossRef]
  16. Zhu, X.; Gao, Y.; Li, Y.; Li, B. Fast Dynamic P-RRT*-Based UAV Path Planning and Trajectory Tracking Control Under Dense Obstacles. Actuators 2025, 14, 211. [Google Scholar] [CrossRef]
  17. Hooshyar, M.; Huang, Y.M. Meta-heuristic algorithms in UAV path planning optimization: A systematic review (2018–2022). Drones 2023, 7, 687. [Google Scholar] [CrossRef]
  18. Debnath, D.; Vanegas, F.; Sandino, J.; Hawary, A.F.; Gonzalez, F. A Review of UAV Path-Planning Algorithms and Obstacle Avoidance Methods for Remote Sensing Applications. Remote Sens. 2024, 16, 4019. [Google Scholar] [CrossRef]
  19. Ma, Y.; Zhang, Z.; Yao, M.; Fan, G. A Self-Adaptive Improved Slime Mold Algorithm for Multi-UAV Path Planning. Drones 2025, 9, 219. [Google Scholar] [CrossRef]
  20. Vinod Chandra, S.S.; Anand, H.S. Nature inspired meta heuristic algorithms for optimization problems. Computing 2022, 104, 251–269. [Google Scholar]
  21. Jiang, Y.; Xu, X.-X.; Zheng, M.-Y.; Zhan, Z.-H. Evolutionary Computation for Unmanned Aerial Vehicle Path Planning: A Survey. Artif. Intell. Rev. 2024, 57, 267. [Google Scholar] [CrossRef]
  22. Ait Saadi, A.; Soukane, A.; Meraihi, Y.; Benmessaoud Gabis, A.; Mirjalili, S.; Ramdane-Cherif, A. UAV path planning using optimization approaches: A survey. Arch. Comput. Methods Eng. 2022, 29, 4233–4284. [Google Scholar] [CrossRef]
  23. Sonny, A.; Yeduri, S.R.; Cenkeramaddi, L.R. Autonomous UAV path planning using modified PSO for UAV-assisted wireless networks. IEEE Access 2023, 11, 70353–70367. [Google Scholar] [CrossRef]
  24. Meng, Q.; Chen, K.; Qu, Q. Ppswarm: Multi-uav path planning based on hybrid pso in complex scenarios. Drones 2024, 8, 192. [Google Scholar] [CrossRef]
  25. Zhang, Y.; Yu, H. Application of Hybrid Swarming Algorithm on a UAV Regional Logistics Distribution. Biomimetics 2023, 8, 96. [Google Scholar] [CrossRef]
  26. Bui, D.N.; Duong, T.N.; Phung, M.D. Ant colony optimization for cooperative inspection path planning using multiple unmanned aerial vehicles. In Proceedings of the 2024 IEEE/SICE International Symposium on System Integration (SII), Ha Long, Vietnam, 8–11 January 2024; pp. 675–680. [Google Scholar]
  27. Teng, Z.; Dong, Q.; Zhang, Z.; Huang, S.; Zhang, W.; Wang, J.; Chen, X. An Improved Grey Wolf Optimizer Inspired by Advanced Cooperative Predation for UAV Shortest Path Planning. arXiv 2025, arXiv:2506.03663. [Google Scholar] [CrossRef]
  28. Zhang, C.; Liu, Y.; Hu, C. Path planning with time windows for multiple UAVs based on gray wolf algorithm. Biomimetics 2022, 7, 225. [Google Scholar] [CrossRef] [PubMed]
  29. Zhang, J.; Zhu, X.; Li, J. Intelligent path planning with an improved sparrow search algorithm for workshop UAV inspection. Sensors 2024, 24, 1104. [Google Scholar] [CrossRef]
  30. Liu, G.; Shu, C.; Liang, Z.; Peng, B.; Cheng, L. A modified sparrow search algorithm with application in 3D route planning for UAV. Sensors 2021, 21, 1224. [Google Scholar] [CrossRef] [PubMed]
  31. You, G.; Hu, Y.; Lian, C.; Yang, Z. Mixed-strategy Harris Hawk optimization algorithm for UAV path planning and engineering applications. Appl. Sci. 2024, 14, 10581. [Google Scholar] [CrossRef]
  32. Shi, H.; Lu, F.; Wu, L.; Yang, G. Optimal trajectories of multi-UAVs with approaching formation for target tracking using improved Harris Hawks optimizer. Appl. Intell. 2022, 52, 14313–14335. [Google Scholar] [CrossRef]
  33. Chang, X.; Yang, H.; Zhang, B. Multi-Unmanned aerial vehicle path planning based on improved nutcracker optimization algorithm. Drones 2025, 9, 116. [Google Scholar] [CrossRef]
  34. Wang, S.; Xu, B.; Zheng, Y.; Yue, Y.; Xiong, M. Path Optimization Strategy for Unmanned Aerial Vehicles Based on Improved Black Winged Kite Optimization Algorithm. Biomimetics 2025, 10, 310. [Google Scholar] [CrossRef]
  35. Zhou, X.; Shi, G.; Zhang, J. Improved grey wolf algorithm: A method for uav path planning. Drones 2024, 8, 675. [Google Scholar] [CrossRef]
  36. Hu, G.; Cheng, M.; Houssein, E.H.; Jia, H. CMPSO: A novel co-evolutionary multigroup particle swarm optimization for multi-mission UAVs path planning. Adv. Eng. Inform. 2025, 63, 102923. [Google Scholar] [CrossRef]
  37. Zhang, R.; Li, S.; Ding, Y.; Qin, X.; Xia, Q. UAV path planning algorithm based on improved Harris Hawks optimization. Sensors 2022, 22, 5232. [Google Scholar] [CrossRef]
  38. Xu, T.; Chen, C. DBO-AWOA: An Adaptive Whale Optimization Algorithm for Global Optimization and UAV 3D Path Planning. Sensors 2025, 25, 2336. [Google Scholar] [CrossRef] [PubMed]
  39. Lang, Y.; Gao, Y. Dream Optimization Algorithm (DOA): A novel metaheuristic optimization algorithm inspired by human dreams and its applications to real-world engineering problems. Comput. Methods Appl. Mech. Eng. 2025, 436, 117718. [Google Scholar] [CrossRef]
  40. Du, X.; Zhou, Y. A Novel Hybrid Differential Evolutionary Algorithm for Solving Multi-objective Distributed Permutation Flow-Shop Scheduling Problem. Int. J. Comput. Intell. Syst. 2025, 18, 1–22. [Google Scholar] [CrossRef]
  41. Shan, W.; He, X.; Liu, H.; Heidari, A.A.; Wang, M.; Cai, Z.; Chen, H. Cauchy mutation boosted Harris hawk algorithm: Optimal performance design and engineering applications. J. Comput. Des. Eng. 2023, 10, 503–526. [Google Scholar] [CrossRef]
  42. Yu, X.; Duan, Y.; Cai, Z. Sub-population improved grey wolf optimizer with Gaussian mutation and Lévy flight for parameters identification of photovoltaic models. Expert. Syst. Appl. 2023, 232, 120827. [Google Scholar] [CrossRef]
  43. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; IEEE: Piscataway, NJ, USA, 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  44. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  45. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  46. Abdel-Basset, M.; Mohamed, R.; Abouhawwash, M. Crested Porcupine Optimizer: A new nature-inspired metaheuristic. Knowl.-Based Syst. 2024, 284, 111257. [Google Scholar] [CrossRef]
  47. Wang, J.; Wang, W.C.; Hu, X.X.; Qiu, L.; Zang, H.F. Black-winged kite algorithm: A nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif. Intell. Rev. 2024, 57, 98. [Google Scholar] [CrossRef]
  48. Seyyedabbasi, A.; Kiani, F. Sand Cat swarm optimization: A nature-inspired algorithm to solve global optimization problems. Eng. Comput. 2023, 39, 2627–2651. [Google Scholar] [CrossRef]
Figure 1. Top-down view of flight path.
Figure 2. Threat prediction map.
Figure 3. Flight height constraint diagram.
Figure 4. Flight angle diagram.
Figure 5. Principle diagram of lens imaging inverse learning.
Figure 6. Flowchart of MSDOA.
Figure 7. Convergence curve of test function algorithm.
Figure 8. Ranking charts of optimization results on the CEC2017 benchmark. (a) The average rank chart. (b) The radar chart.
Figure 9. Comparison of path planning results in scenario 1 (number of waypoints = 6).
Figure 10. Comparison of path planning results in scenario 2 (number of waypoints = 6).
Figure 11. Comparison of path planning results in scenario 3 (number of waypoints = 6).
Figure 12. Comparison of path planning results in scenario 1 (number of waypoints = 12).
Figure 13. Comparison of path planning results in scenario 2 (number of waypoints = 12).
Figure 14. Comparison of path planning results in scenario 3 (number of waypoints = 12).
Figure 15. Boxplot comparison of path planning costs.
Table 1. Comparison of optimization results of test functions.
FunctionIndexMSDOADOAPSOHHOGWOCPOBKASCSO
F1Best2.31 × 1033.54 × 1059.53 × 1091.16 × 1085.88 × 1072.44 × 1081.39 × 1094.81 × 109
Mean1.83 × 1041.03 × 1061.98 × 10104.44 × 1082.63 × 1092.94 × 1096.71 × 1099.52 × 109
Std1.61 × 1081.56 × 10114.43 × 10199.14 × 10162.62 × 10185.87 × 10181.05 × 10191.32 × 1019
F3Best2.14 × 1035.14 × 1042.49 × 1044.71 × 1044.03 × 1041.21 × 1051.45 × 1041.15 × 105
Mean8.71 × 1031.15 × 1055.54 × 1045.79 × 1046.32 × 1042.21 × 1052.91 × 1041.81 × 105
Std1.78 × 1078.78 × 1082.98 × 1085.22 × 107l.17 × 1084.39 × 1097.24 × 1078.51 × 108
F4Best4.03 × 1024.72 × 1021.50 × 1035.84 × 1025.39 × 1025.59 × 1025.31 × 1028.8 × 102
Mean4.38 × 1025.01 × 1024.55 × 1037.27 × 1026.63 × 1026.56 × 1029.64 × 102l.65 × 103
Std4.67 × 1022.14 × 1033.54 × 1061.28 × 1041.14 × 1047.45 × 1031.28 × 1055.49 × 105
F5Best5.31 × 1025.55 × 1027.15 × 1026.97 × 1025.88 × 1026.67 × 1026.96 × 1026.97 × 102
Mean5.60 × 1025.81 × 1027.89 × 1027.7 × 1026.27 × 1027.57 × 1027.83 × 1027.98 × 102
Std2.37 × 1022.63 × 1022.78 × 1037.91 × 1021.82 × 1032.54 × 1032.77 × 1053.02 × 102
F6Best6 × 1026.32 × 1026.5 × 1026.61 × 1026.06 × 1026.47 × 1026.47 × 1026.14 × 102
Mean6 × 1026.43 × 1026.72 × 1026.66 × 1026.13 × 1026.67 × 1026.61 × 1026.3 × 102
Std4.54 × 10−34.54 × 1011.13 × 1026.18 × 1013.08 × 1011.62 × 1025.82 × 1011.19 × 102
F7Best7.66 × 1027.91 × 1021.07 × 1031.17 × 1038.24 × 1021.04 × 1031.07 × 1031.16 × 103
Mean7.99 × 1028.27 × 1021.24 × 1031.32 × 1039.08 × 1021.17 × 1031.24 × 1031.34 × 103
Std2.83 × 1023.03 × 1027.56 × 1033.35 × 1033.1 × 1038.59 × 1032.72 × 1042 × 104
F8Best8.38 × 1028.84 × 1029.31 × 1029.36 × 1028.62 × 1029.26 × 1029.42 × 1029.67 × 102
Mean8.65 × 1029.12 × 1021.03 × 1039.85 × 1029.15 × 1029.96 × 1021.01 × 1031.08 × 102
Std2.39 × 1022.73 × 1022.96 × 1036.45 × 1021.27 × 1031.65 × 1021.16 × 1021.82 × 102
FunctionIndexMSDOADOAPSOHHOGWOCPOBKASCSO
F9Best9.17 × 1021.12 × 1033.89 × 1037.14 × 1031.34 × 1034.27 × 1033.67 × 1034.11 × 103
Meanl.18 × 103l.81 × 1037.85 × 1038.8 × 1032.58 × 1037.51 × 1035.53 × 1031.06 × 104
Std1.56 × 1052.13 × 1055.75 × 1067.89 × 1051.48 × 1065.05 × 1055.56 × 105l.6 × 107
F10Best2.52 × 1032.9 × 1037.69 × 1034.74 × 1033.41 × 1034.41 × 1034.45 × 1037.15 × 103
Mean3.24 × 1033.71 × 1039.17 × 1036.32 × 1034.68 × 1036.48 × 1035.47 × 1038.61 × 103
Std8.93 × 1041.51 × 1054.74 × 1052.96 × 1058.1 × 1055.51 × 1053.19 × 1054.29 × 105
F11Best1.11 × 1031.17 × 1031.37 × 1031.31 × 1031.31 × 1031.43 × 1031.29 × 1032.87 × 103
Mean1.16 × 1032.53 × 1033.84 × 1031.58 × 1032.59 × 1031.72 × 1031.66 × 1031.02 × 104
Std1.09 × 1034.16 × 1059.68 × 1061.86 × 1041.09 × 1061.46 × 1052.66 × 1051.87 × 107
F12Best1.41 × 1052.91 × 1055.77 × 1084.47 × 1069.77 × 1056.81 × 1062.29 × 1063.77 × 107
Mean2.02 × 1062.9 × 1063.13 × 1097.64 × 107l.2 × 1084.56 × 1078.05 × 1072.16 × 108
Stdl.06 × 10122.76 × 10125.25 × 10185.36 × 10152.52 × 10166.57 × 10141.95 × 10163.91 × 1016
F13Best2.16 × 1039.32 × 1031.33 × 1053.98 × 1055.15 × 1041.36 × 1056.72 × 1042.28 × 104
Meanl.45 × 1042.46 × 1044.14 × 1084.2 × 106l.74 × 1071.21 × 1075.33 × 1052.05 × 107
Stdl.4 × 1082.25 × 1089.83 × 10173.06 × 10142.95 × 10152.57 × 10152.84 × 10112.02 × 1015
F14Best4.61 × 1032.61 × 1043.09 × 1032.82 × 1045.71 × 1034.63 × 1041.94 × 1033.2 × 104
Mean1.64 × 1053.64 × 1053.89 × 1051.28 × 1065.11 × 1058.93 × 1052.83 × 1049.98 × 105
Std2.61 × 10101.52 × 10111.12 × 10121.36 × 10123.92 × 10118.61 × 10117.23 × 1081.92 × 1012
F15Best1.55 × 1032.29 × 1036.42 × 1032.93 × 1041.48 × 1041.18 × 1041.71 × 1041.82 × 104
Mean5.14 × 1036.87 × 1032 × 104l.24 × 1051.01 × 1062.07 × 1055.57 × 1043.6 × 105
Std1.14 × 1073.62 × 1079.35 × 1077.39 × 1092.35 × 10122.44 × 10111.29 × 1091.24 × 1012
F16Best1.75 × 1031.96 × 1033.25 × 1032.85 × 1032.21 × 1032.59 × 1032.49 × 1032.72 × 103
Mean2.36 × 1032.41 × 1034.23 × 1033.64 × 1032.62 × 1033.43 × 1033.16 × 1033.44 × 103
Std4.22 × 1045.46 × 1047.6 × 1053.29 × 1058.06 × 1041.35 × 1052.17 × 1051.81 × 105
F17Best1.64 × 1031.76 × 1032.31 × 1032.15 × 1031.82 × 1032.03 × 1031.81 × 1031.98 × 103
Mean1.96 × 1032.04 × 1033.23 × 1032.75 × 1032.06 × 1032.56 × 1032.51 × 1032.59 × 103
Std1.92 × 1042.02 × 1046.62 × 1051.02 × 1053.29 × 1046.91 × 1047.81 × 1048.08 × 104
F18Best2.97 × 1041.75 × 1051.01 × 1052.45 × 1056.88 × 1048.95 × 1044.23 × 1047.75 × 105
Mean4.38 × 1057.22 × 1055.17 × 1065.08 × 1062.44 × 1063.11 × 1062.31 × 1061.24 × 107
Std1.03 × 10112.95 × 10118.93 × 10134.23 × 10137.73 × 10121.15 × 10135.89 × 10103.06 × 1014
F19Best1.72 × 1032.09 × 1034.73 × 1031.43 × 1051.65 × 1042.69 × 1046.11 × 1044.92 × 104
Mean4.84 × 1036.86 × 1032.49 × 1051.47 × 1062.4 × 1069.88 × 1054.67 × 1053.8 × 106
Std8.25 × 1061.6 × 1071.11 × 10111.21 × 10121.77 × 10131.22 × 10121.94 × 10135.24 × 1013
F20Best1.93 × 1032.17 × 1032.45 × 1032.33 × 1032.23 × 1032.43 × 1032.18 × 1032.41 × 103
Mean2.12 × 1032.37 × 1033.19 × 1032.84 × 1032.5 × 1032.81 × 1032.62 × 1032.96 × 103
| Function | Index | MSDOA | DOA | PSO | HHO | GWO | CPO | BKA | SCSO |
|---|---|---|---|---|---|---|---|---|---|
| | Std | 1.84 × 10^4 | 2.08 × 10^4 | 7.84 × 10^4 | 5.37 × 10^4 | 3.52 × 10^4 | 3.98 × 10^4 | 4.57 × 10^4 | 2.63 × 10^4 |
| F21 | Best | 2.27 × 10^3 | 2.35 × 10^3 | 2.49 × 10^3 | 2.52 × 10^3 | 2.35 × 10^3 | 2.46 × 10^3 | 2.51 × 10^3 | 2.45 × 10^3 |
| | Mean | 2.34 × 10^3 | 2.6 × 10^3 | 2.69 × 10^3 | 2.6 × 10^3 | 2.4 × 10^3 | 2.54 × 10^3 | 2.58 × 10^3 | 2.56 × 10^3 |
| | Std | 2.25 × 10^2 | 2.52 × 10^2 | 3.57 × 10^3 | 1.93 × 10^3 | 1.68 × 10^3 | 2.36 × 10^3 | 2.35 × 10^3 | 1.69 × 10^3 |
| F22 | Best | 2.31 × 10^3 | 2.45 × 10^3 | 6.25 × 10^3 | 3.07 × 10^3 | 2.51 × 10^3 | 2.46 × 10^3 | 3.01 × 10^3 | 3.23 × 10^3 |
| | Mean | 3.65 × 10^3 | 4.82 × 10^3 | 9.49 × 10^3 | 6.88 × 10^3 | 6.19 × 10^3 | 7.41 × 10^3 | 6.52 × 10^3 | 7.85 × 10^3 |
| | Std | 1.55 × 10^6 | 2.55 × 10^6 | 2.59 × 10^6 | 2.49 × 10^6 | 6.9 × 10^6 | 3.82 × 10^6 | 1.15 × 10^6 | 6.06 × 10^6 |
| F23 | Best | 2.43 × 10^3 | 2.69 × 10^3 | 3.26 × 10^3 | 3.11 × 10^3 | 2.78 × 10^3 | 2.84 × 10^3 | 2.96 × 10^3 | 2.83 × 10^3 |
| | Mean | 2.53 × 10^3 | 2.72 × 10^3 | 3.76 × 10^3 | 3.31 × 10^3 | 2.8 × 10^3 | 2.96 × 10^3 | 3.16 × 10^3 | 2.9 × 10^3 |
| | Std | 2.43 × 10^3 | 2.98 × 10^3 | 9.42 × 10^3 | 1.43 × 10^4 | 4.65 × 10^3 | 6.26 × 10^4 | 2.14 × 10^3 | 1.87 × 10^3 |
| F24 | Best | 2.85 × 10^3 | 2.92 × 10^3 | 3.51 × 10^3 | 3.26 × 10^3 | 2.96 × 10^3 | 2.97 × 10^3 | 3.02 × 10^3 | 3.02 × 10^3 |
| | Mean | 2.93 × 10^3 | 2.98 × 10^3 | 3.86 × 10^3 | 3.51 × 10^3 | 3.06 × 10^3 | 3.11 × 10^3 | 3.31 × 10^3 | 3.08 × 10^3 |
| | Std | 1.01 × 10^3 | 1.53 × 10^3 | 6.35 × 10^4 | 2.75 × 10^4 | 5.17 × 10^3 | 5.93 × 10^3 | 1.38 × 10^4 | 1.66 × 10^3 |
| F25 | Best | 2.67 × 10^3 | 2.99 × 10^3 | 3.13 × 10^3 | 2.92 × 10^3 | 2.95 × 10^3 | 2.94 × 10^3 | 2.95 × 10^3 | 3.1 × 10^3 |
| | Mean | 2.88 × 10^3 | 3.05 × 10^3 | 3.5 × 10^3 | 3.02 × 10^3 | 3.04 × 10^3 | 3.01 × 10^3 | 3.08 × 10^3 | 3.65 × 10^3 |
| | Std | 6.1 × 10^1 | 1.21 × 10^3 | 8.6 × 10^4 | 1.27 × 10^3 | 5.73 × 10^3 | 3.22 × 10^3 | 5.86 × 10^4 | 1.82 × 10^5 |
| F26 | Best | 2.8 × 10^3 | 3.43 × 10^3 | 6.68 × 10^3 | 3.13 × 10^3 | 3.42 × 10^3 | 3.35 × 10^3 | 3.65 × 10^3 | 5.87 × 10^3 |
| | Mean | 3.28 × 10^3 | 3.85 × 10^3 | 9.47 × 10^3 | 7.79 × 10^3 | 5.02 × 10^3 | 6.68 × 10^3 | 7.63 × 10^3 | 6.47 × 10^3 |
| | Std | 2.71 × 10^5 | 6.06 × 10^5 | 1.06 × 10^6 | 2.61 × 10^6 | 3.09 × 10^5 | 1.29 × 10^6 | 2.85 × 10^6 | 1.93 × 10^5 |
| F27 | Best | 3.09 × 10^3 | 3.28 × 10^3 | 3.81 × 10^3 | 3.33 × 10^3 | 3.23 × 10^3 | 3.25 × 10^3 | 3.25 × 10^3 | 3.22 × 10^3 |
| | Mean | 3.21 × 10^3 | 3.32 × 10^3 | 4.73 × 10^3 | 3.59 × 10^3 | 3.27 × 10^3 | 3.36 × 10^3 | 3.44 × 10^3 | 3.26 × 10^3 |
| | Std | 4.72 × 10^-8 | 8.96 × 10^1 | 2.84 × 10^5 | 4.01 × 10^4 | 9.43 × 10^2 | 7.28 × 10^3 | 3.14 × 10^4 | 5.65 × 10^2 |
| F28 | Best | 3.18 × 10^3 | 3.34 × 10^3 | 3.89 × 10^3 | 3.31 × 10^3 | 3.34 × 10^3 | 6.32 × 10^3 | 5.88 × 10^3 | 3.63 × 10^3 |
| | Mean | 3.29 × 10^3 | 3.42 × 10^3 | 4.74 × 10^3 | 3.49 × 10^3 | 3.51 × 10^3 | 7.67 × 10^3 | 6.53 × 10^3 | 4.61 × 10^3 |
| | Std | 1.14 × 10^1 | 2.44 × 10^2 | 2.43 × 10^5 | 9.62 × 10^3 | 2.75 × 10^4 | 3.99 × 10^5 | 8.87 × 10^4 | 5.49 × 10^5 |
| F29 | Best | 3.08 × 10^3 | 3.44 × 10^3 | 4.77 × 10^3 | 4.08 × 10^3 | 3.54 × 10^4 | 4.08 × 10^3 | 3.85 × 10^3 | 3.82 × 10^3 |
| | Mean | 3.46 × 10^3 | 3.62 × 10^3 | 6.05 × 10^3 | 5.04 × 10^3 | 3.99 × 10^3 | 4.96 × 10^3 | 4.75 × 10^3 | 4.53 × 10^3 |
| | Std | 1.57 × 10^4 | 2.16 × 10^4 | 7.67 × 10^5 | 2.92 × 10^5 | 3.13 × 10^4 | 2.23 × 10^5 | 2.77 × 10^5 | 7.6 × 10^6 |
| F30 | Best | 3.23 × 10^3 | 1.05 × 10^4 | 3.43 × 10^6 | 1.23 × 10^6 | 1.98 × 10^6 | 5.35 × 10^5 | 7.06 × 10^5 | 2.23 × 10^4 |
| | Mean | 4.69 × 10^3 | 4.13 × 10^4 | 1.43 × 10^8 | 1.46 × 10^7 | 1.49 × 10^7 | 4.74 × 10^6 | 3.81 × 10^6 | 1.05 × 10^6 |
| | Std | 1.2 × 10^6 | 1.66 × 10^9 | 4.24 × 10^6 | 1.08 × 10^15 | 1.23 × 10^14 | 1.47 × 10^13 | 6.26 × 10^12 | 1.85 × 10^12 |
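The abstract's claim that MSDOA ranks first on 28 of 29 CEC2017 functions follows from comparing per-function statistics like those tabulated above. As a minimal sketch (not the authors' exact evaluation protocol), the snippet below ranks the eight algorithms by their mean fitness on function F25, with the values transcribed from the table; repeating this per function and counting first-place finishes reproduces the ranking procedure.

```python
from statistics import mean  # imported for clarity; only sorted() is essential here

# Mean fitness per algorithm on CEC2017 function F25 (from the table above).
# Lower is better for minimization benchmarks.
means = {
    "MSDOA": 2.88e3, "DOA": 3.05e3, "PSO": 3.5e3, "HHO": 3.02e3,
    "GWO": 3.04e3, "CPO": 3.01e3, "BKA": 3.08e3, "SCSO": 3.65e3,
}

# Rank algorithms from lowest (best) to highest mean fitness.
ranked = sorted(means, key=means.get)
print(ranked[0])  # the first-ranked algorithm on this function
```

On F25 this yields MSDOA in first place, consistent with the table.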
Table 2. Scenario information.

| Scenario Number | Threat Center | Threat Radius | Threat Height |
|---|---|---|---|
| 1 | (300, 300) | 50 | 230 |
| | (700, 300) | 50 | 230 |
| | (500, 600) | 60 | 250 |
| 2 | (300, 280) | 50 | 220 |
| | (700, 280) | 45 | 230 |
| | (300, 520) | 50 | 240 |
| | (700, 520) | 45 | 250 |
| | (500, 400) | 60 | 260 |
| | (500, 580) | 50 | 240 |
| 3 | (320, 280) | 40 | 130 |
| | (480, 300) | 45 | 135 |
| | (620, 260) | 40 | 125 |
| | (350, 420) | 50 | 140 |
| | (520, 480) | 60 | 300 |
| | (660, 420) | 50 | 140 |
| | (370, 600) | 45 | 200 |
| | (500, 640) | 45 | 135 |
| | (620, 590) | 40 | 200 |
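Each entry in Table 2 defines a threat zone by a ground-plane center, a radius, and a height, which suggests a vertical-cylinder obstacle model. The sketch below is an illustrative feasibility check under that assumption (the function name and the exact containment rule are ours, not taken from the paper): a waypoint is infeasible if it falls inside any threat cylinder.

```python
import math

def in_threat(point, center, radius, height):
    """Return True if a 3-D waypoint lies inside a cylindrical threat zone.

    Assumed model from Table 2: a vertical cylinder with ground-plane
    center (cx, cy), the given radius, extending from z = 0 up to height.
    """
    x, y, z = point
    cx, cy = center
    horizontal_dist = math.hypot(x - cx, y - cy)
    return horizontal_dist <= radius and 0 <= z <= height

# Scenario 1, first threat: center (300, 300), radius 50, height 230.
print(in_threat((320, 310, 100), (300, 300), 50, 230))  # within radius and height
print(in_threat((400, 300, 100), (300, 300), 50, 230))  # outside the radius
print(in_threat((300, 300, 300), (300, 300), 50, 230))  # above the cylinder
```

A candidate path would be penalized in the cost function whenever any of its waypoints (or segment samples) triggers this check for any threat in the scenario.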
Table 3. Complex scenario simulation experiment data (number of waypoints = 6).

| Scenario | Index | MSDOA | DOA | PSO | HHO | GWO | CPO | BKA | SCSO |
|---|---|---|---|---|---|---|---|---|---|
| Scenario 1 | Best | 5.82 × 10^3 | 6.29 × 10^3 | 5.93 × 10^3 | 5.91 × 10^3 | 6.11 × 10^3 | 5.98 × 10^3 | 6.03 × 10^3 | 5.98 × 10^3 |
| | Mean | 5.91 × 10^3 | 7.01 × 10^3 | 7.47 × 10^3 | 7.26 × 10^3 | 6.89 × 10^3 | 7.48 × 10^3 | 7.24 × 10^4 | 6.21 × 10^3 |
| | Std | 4.4 × 10^1 | 3.72 × 10^2 | 1.06 × 10^3 | 1.53 × 10^2 | 4.76 × 10^2 | 8.31 × 10^2 | 7.91 × 10^2 | 1.2 × 10^2 |
| Scenario 2 | Best | 5.94 × 10^3 | 6.41 × 10^3 | 6.74 × 10^3 | 6.68 × 10^3 | 6.55 × 10^3 | 6.39 × 10^3 | 6.47 × 10^3 | 6.19 × 10^3 |
| | Mean | 5.99 × 10^3 | 7.21 × 10^3 | 7.87 × 10^3 | 7.78 × 10^4 | 7.01 × 10^3 | 7.87 × 10^3 | 7.37 × 10^3 | 6.53 × 10^4 |
| | Std | 1.02 × 10^2 | 6.12 × 10^2 | 1.84 × 10^3 | 1.76 × 10^3 | 7.12 × 10^2 | 8.75 × 10^2 | 8.87 × 10^2 | 4.53 × 10^2 |
| Scenario 3 | Best | 6.37 × 10^3 | 6.91 × 10^3 | 7.35 × 10^3 | 7.13 × 10^3 | 7.21 × 10^3 | 7.25 × 10^3 | 7.51 × 10^3 | 6.96 × 10^3 |
| | Mean | 6.52 × 10^3 | 7.65 × 10^3 | 8.96 × 10^3 | 9.35 × 10^3 | 8.39 × 10^3 | 8.41 × 10^3 | 8.37 × 10^4 | 7.77 × 10^3 |
| | Std | 1.63 × 10^2 | 4.28 × 10^2 | 2.14 × 10^3 | 1.84 × 10^3 | 1.04 × 10^3 | 1.09 × 10^2 | 9.53 × 10^3 | 5.11 × 10^2 |
Table 4. Complex scenario simulation experiment data (number of waypoints = 12).

| Scenario | Index | MSDOA | DOA | PSO | HHO | GWO | CPO | BKA | SCSO |
|---|---|---|---|---|---|---|---|---|---|
| Scenario 1 | Best | 5.95 × 10^3 | 6.58 × 10^3 | 6.78 × 10^3 | 6.48 × 10^3 | 6.64 × 10^3 | 6.99 × 10^3 | 6.83 × 10^3 | 6.17 × 10^3 |
| | Mean | 6.12 × 10^3 | 7.76 × 10^3 | 8.38 × 10^3 | 8.44 × 10^3 | 7.43 × 10^3 | 8.2 × 10^3 | 8.35 × 10^4 | 6.76 × 10^3 |
| | Std | 1.37 × 10^2 | 5.95 × 10^2 | 1.46 × 10^3 | 1.41 × 10^2 | 5.82 × 10^2 | 8.21 × 10^2 | 9.14 × 10^2 | 3.57 × 10^2 |
| Scenario 2 | Best | 5.91 × 10^3 | 6.83 × 10^3 | 6.81 × 10^3 | 6.57 × 10^3 | 6.96 × 10^3 | 6.87 × 10^3 | 7.02 × 10^3 | 6.38 × 10^3 |
| | Mean | 6.11 × 10^3 | 7.89 × 10^3 | 8.15 × 10^4 | 9.06 × 10^3 | 7.51 × 10^3 | 8.48 × 10^3 | 8.56 × 10^3 | 7.04 × 10^4 |
| | Std | 1.59 × 10^2 | 5.38 × 10^2 | 2.84 × 10^3 | 2.21 × 10^3 | 9.85 × 10^2 | 1.18 × 10 | 9.12 × 10^2 | 6.59 × 10^3 |
| Scenario 3 | Best | 6.39 × 10^3 | 6.97 × 10^3 | 8.25 × 10^3 | 7.48 × 10^3 | 7.51 × 10^3 | 7.85 × 10^3 | 8.25 × 10^3 | 6.81 × 10^3 |
| | Mean | 6.68 × 10^3 | 8.05 × 10^3 | 1.06 × 10^3 | 1.05 × 10^4 | 1.09 × 10^3 | 9.22 × 10^3 | 9.59 × 10^4 | 8.34 × 10^3 |
| | Std | 2.25 × 10^2 | 6.06 × 10^2 | 3.14 × 10^3 | 3.66 × 10^3 | 4.04 × 10^3 | 1.33 × 10^2 | 1.23 × 10^2 | 8.37 × 10^2 |
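Tables 3 and 4 summarize each algorithm's cost-function fitness over repeated independent runs with the Best, Mean, and Std indices. The sketch below shows how those three indices are obtained from a set of run results; the fitness values and the run count are hypothetical placeholders, not figures from the paper.

```python
from statistics import mean, pstdev

# Hypothetical final cost-function fitness from five independent runs
# of one algorithm in one scenario (illustrative values only).
fitness = [6.52e3, 6.58e3, 6.41e3, 6.77e3, 6.49e3]

best = min(fitness)       # "Best": lowest cost found across runs
avg = mean(fitness)       # "Mean": average cost across runs
std = pstdev(fitness)     # "Std": dispersion of the cost across runs

print(best, round(avg, 1), round(std, 1))
```

A lower Mean with a small Std, as MSDOA shows in every scenario above, indicates that the algorithm both finds cheaper paths on average and does so consistently across runs.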
Yang, X.; Zhao, S.; Gao, W.; Li, P.; Feng, Z.; Li, L.; Jia, T.; Wang, X. Three-Dimensional Path Planning for UAV Based on Multi-Strategy Dream Optimization Algorithm. Biomimetics 2025, 10, 551. https://doi.org/10.3390/biomimetics10080551