Article

A Novel Improved Dung Beetle Optimization Algorithm for Collaborative 3D Path Planning of UAVs

School of Mechanical Engineering, Dalian Jiaotong University, Dalian 116028, China
*
Author to whom correspondence should be addressed.
Biomimetics 2025, 10(7), 420; https://doi.org/10.3390/biomimetics10070420
Submission received: 23 May 2025 / Revised: 22 June 2025 / Accepted: 27 June 2025 / Published: 29 June 2025

Abstract

In this study, we propose a novel improved Dung Beetle Optimizer called Environment-aware Chaotic Force-field Dung Beetle Optimizer (ECFDBO). To address DBO’s existing tendency toward premature convergence and insufficient precision in high-dimensional, complex search spaces, ECFDBO integrates three key improvements: a chaotic perturbation-based nonlinear contraction strategy, an intelligent boundary-handling mechanism, and a dynamic attraction–repulsion force-field mutation. These improvements reinforce both the algorithm’s global exploration capability and its local exploitation accuracy. We conducted 30 independent runs of ECFDBO on the CEC2017 benchmark suite. Compared with seven classical and novel metaheuristic algorithms, ECFDBO achieved statistically significant improvements in multiple performance metrics. Moreover, by varying problem dimensionality, we demonstrated its robust global optimization capability for increasingly challenging tasks. We further conducted the Wilcoxon and Friedman tests to assess the significance of performance differences of the algorithms and to establish an overall ranking. Finally, ECFDBO was applied to a 3D path planning simulation in UAVs for safe path planning in complex environments. Against both the Dung Beetle Optimizer and a multi-strategy DBO (GODBO) algorithm, ECFDBO met the global optimality requirements for cooperative UAV planning and showed strong potential for high-dimensional global optimization applications.

Graphical Abstract

1. Introduction

Unmanned aerial vehicles (UAVs) have experienced rapid development and have been widely deployed in various domains since their inception. UAVs have been applied in such specialized areas as soil information collection and irrigation decision support in agriculture [1], deterring birds, connecting nodes in communication networks, monitoring the marine environment [2], and urban planning [3]. Three-dimensional (3D) path planning is a critical component of autonomous UAV missions, as it directly determines whether a UAV can navigate complex environments safely and efficiently to reach its destination [4]. An optimized flight path must not only avoid terrain obstacles but also minimize travel distance while adhering to the dynamic constraints of the aircraft. These requirements make path planning particularly challenging in 3D spaces with complex terrain and numerous static obstacles [5]. It is also essential to consider flight altitude, camera angle, overlap rate, and obstacle avoidance to enable comprehensive 3D observation and accurate modeling, especially in areas with varying topography or dense urban structures [6,7,8,9].
Among the classical global path planning algorithms, grid-based search methods (e.g., A*) are capable of finding optimal paths in static environments, but they incur large memory overhead and are sensitive to the environment scale, making them unsuitable for planning in vast 3D spaces [10,11,12,13,14,15,16]. Artificial potential field methods are highly regarded for their simplicity and computational speed and have been used for UAV obstacle avoidance; however, pure potential field methods do not guarantee global optimality and are prone to falling into local minima in complex terrains, leading the path into ‘deadlocks’ [17,18]. Another class of sampling-based expansion algorithms, namely Rapidly exploring Random Trees (RRT) and its improved variant RRT*, can effectively expand paths in continuous space, but RRT struggles to strike a balance between path optimality and obstacle safety and may still produce excessively long or non-smooth paths [19]. To address these problems, researchers have proposed various improvements to RRT; for example, Guo et al. introduced a fuzzy control strategy to optimize the sampling process and developed the FC-RRT* algorithm to improve search efficiency and safety in complex 3D environments [20]. In conclusion, traditional algorithms often face such problems as high computational cost and a tendency to fall into suboptimal solutions in challenging terrains and scenarios with many obstacles.
In recent years, intelligent optimization algorithms have been increasingly applied in UAV path planning owing to their strong global search capabilities [21]. Genetic algorithms (GA), which are naturally adept at multi-objective optimization, have been employed in planning UAV paths under multiple tasks or constraints. For example, Pehlivanoglu et al. devised an improved GA for UAV coverage missions [22]. Particle Swarm Optimization (PSO) algorithms [23,24] have gained widespread attention because of their fast convergence and small number of parameters. Researchers have proposed numerous variants tailored to UAV path-planning needs, including phase-angle-encoded PSO [25,26,27], quantum-behaved PSO [28], and discrete PSO [29,30]. Differential evolution (DE) algorithms excel in global search capability, and some scholars have proposed hybrid DE algorithms in combination with strategies such as grey wolf optimization to plan 3D paths in complex mountainous terrain [31]. Additionally, swarm intelligence methods like Ant Colony Optimization (ACO) [32] and Artificial Bee Colony (ABC) [33] have been applied to UAV obstacle-avoidance path planning. However, single-strategy meta-heuristics often suffer from getting trapped in local optima or lacking sufficient convergence precision. Original PSO, for instance, is prone to converge prematurely, while ACO’s communication-based modeling incurs substantial computational overhead, limiting its real-time application [34]. In addition, researchers have explored various types of plants, animals, and insects in nature and invented various meta-heuristic algorithms. Inspired by the foraging behavior of coatis, Dehghani M et al. proposed the Coati Optimization Algorithm (COA) [35]. Arora S et al. modeled the food search and mating behavior of butterflies and proposed and tested the Butterfly Optimization Algorithm (BOA) [36] based on the foraging strategy of butterflies. Zhong et al. proposed the Beluga Whale Optimization (BWO) algorithm, inspired by the collective behavior of beluga whales [37]. Duan and Qiao introduced the Pigeon-Inspired Optimization (PIO) algorithm, based on the navigational behavior of pigeon flocks. PIO has been successfully applied to flight-robot path planning [38]. Recently, the Nutcracker Optimization Algorithm (NOA) was proposed, simulating the spatial memory and random foraging behavior of nutcracker birds in their seed search, caching, and retrieval process [39]. Liu and Cai et al. proposed a hybrid multi-strategy artificial rabbit optimization (HARO) [40]; by simulating spherical and cylindrical obstacle models, they achieved efficient and stable UAV path planning in complex environments. Therefore, these advancements highlight an emerging research focus: refining algorithms to meet the specific demands of UAV path planning by striking a more effective balance between global exploration capability and local optimization precision.
In response to the above optimization requirements, the Dung Beetle Optimizer (DBO) offers a novel perspective for UAV path planning. First proposed by Xue and Shen in 2022 [41], DBO draws inspiration from the behavioral patterns of dung beetles, incorporating five typical behavioral mechanisms—rolling, dancing, foraging, stealing, and reproducing. When applied to three engineering optimization examples, DBO also achieved superior results compared to benchmark algorithms, further confirming its potential for practical applications. Owing to these advantages, DBO has attracted increasing attention in the field of path planning and is considered a promising approach for solving complex optimization tasks such as 3D UAV path planning.
Despite the strong performance of the original DBO, subsequent research has revealed certain limitations when the algorithm is applied to high-dimensional and complex search spaces. Responding to these shortcomings, scholars have proposed several enhanced variants of the Dung Beetle Optimizer. For single-UAV path planning, Shen et al. introduced the Multi-strategy Dung Beetle Optimizer (MDBO) [42]. In 2024, Tang et al. proposed RCDBO, an enhanced DBO variant for robotic path planning [43]. Another notable variant is GODBO, introduced by Wang et al. [44], which enhances exploration through opposition-based learning and multiple strategies centered on the current best solution. Results indicated clear improvements in both convergence precision and speed. Similarly, SSTDBO, proposed by Hu et al. [45], was designed to overcome the original DBO’s deficiencies in population diversity, global detection ability, and convergence precision.
For multi-UAV cooperative planning, Zhang et al. proposed the Multi-strategy Improved Dung Beetle Optimizer (MIDBO) for UAV task allocation [46]. In another study, Shen Q. et al. extended DBO to the multi-objective optimization domain by introducing the Directed Evolution Non-dominated Sorting Dung Beetle Optimizer (DENSDBO-ASR) [47]. This algorithm was applied to cooperative multi-UAV path planning and achieved excellent results in both convergence precision and solution set diversity. Collectively, these enhancements demonstrate that integrating chaos theory, evolutionary operators, and distribution-based mechanisms can substantially boost the performance of DBO, making it more suitable for high-dimensional and complex path-planning tasks.
Many researchers have proposed DBO improvement algorithms, such as MDBO, SSTDBO, and GODBO. However, some problems remain unsolved in high-dimensional, complex search spaces and in actual UAV 3D path planning tasks. For example, SSTDBO can maintain population diversity in the early stages of the search, but its fast-converging strategies make it prone to falling into local optima, and in scenarios with more than 50 dimensions its premature convergence becomes especially obvious. Second, MDBO is prone to premature population convergence in high-dimensional, complex environments, making it difficult to escape local optima. Furthermore, GODBO enhances exploration capability by applying multi-strategy learning to the current optimal solution; however, it struggles to escape local optima in multi-constraint UAV tasks and exhibits high computational overhead, which affects its real-time performance and practicality.
In summary, the Dung Beetle Optimizer offers both advantages and limitations in UAV path planning. On the one hand, DBO integrates diverse biological behavior mechanisms, which gives it strong global exploration and local exploitation capabilities. In particular, strategy-enhanced variants of DBO have shown improved adaptability to complex terrain, enabling safer and more efficient path planning in large-scale 3D environments [48]. On the other hand, the original DBO is still prone to premature convergence and becoming trapped in local optima. In environments with complex obstacle distributions, this can lead to insufficient search diversity and difficulty escaping suboptimal regions [49]. Furthermore, as UAV path planning is inherently a high-dimensional optimization problem, the computational cost of DBO increases with problem scale, making it essential to balance convergence speed and algorithm efficiency in practical applications. To address these challenges, we propose a novel improved DBO variant for 3D UAV path planning in environments with complex terrain and static obstacles. The algorithm integrates three core strategies: a chaotic perturbation mixed nonlinear contraction mechanism, which significantly inhibits premature convergence; an intelligent boundary-handling mechanism that realizes an “energy wall” rebound at the obstacle boundary in a gradient-guided manner to improve the algorithm’s ability to adapt to a dynamic environment; and an attraction–repulsion force-field mutation strategy that adapts over iterations. These strategies effectively balance global search and local exploitation, resulting in better convergence accuracy and computational efficiency for high-dimensional complex problems and multi-constraint UAV path planning. This study makes the following specific contributions:
  • We propose an Environment-aware Chaotic Force-field Dung Beetle Optimizer (ECFDBO), which augments the original Dung Beetle Optimizer (DBO) with three novel improvement strategies: chaotic perturbation mixed nonlinear contraction, environment-aware boundary handling, and dynamic attraction–repulsion field mutation. These strategies ensure stable performance in high-dimensional search spaces.
  • We subjected ECFDBO to multi-perspective, multi-run experiments on the CEC2017 suite (Dim = 30, 50, and 100) to assess its robustness and effectiveness. Wilcoxon and Friedman tests demonstrate that ECFDBO’s performance differences from seven state-of-the-art metaheuristics are statistically significant.
  • We formulate the multi-UAV coordination task as a multi-constrained optimization problem with several key constraints based on the application of UAV tasks in remote sensing. Additionally, we apply the cooperative path-planning model to four distinct environments to validate its simulated effectiveness.
The rest of the paper is structured as follows. Section 2 reviews the preliminary knowledge of the original DBO algorithm. Section 3 details the three novel improvement mechanisms introduced in ECFDBO. Section 4 analyzes the computational complexity of the proposed algorithm. Section 5 presents benchmark results on the CEC2017 suite and accompanying statistical tests. Section 6 describes the UAV simulation experiments and discusses the results. Section 7 concludes the paper and outlines directions for future work.

2. Preliminary Knowledge

The DBO algorithm is derived from the survival behavior of dung beetles in their natural environment. The algorithm achieves broad global exploration by simulating a beetle rolling a dung ball and redirecting its movement through a dancing behavior. The beetles adopt this behavior when encountering obstacles, thus avoiding local traps. Female dung beetles establish a localized exploration area by hiding dung balls and laying eggs. Following this initial foraging behavior, the young dung beetles mimic the adults and perform a fine search in a small area. Finally, competitive behavior occurs between individuals, accelerating the approach to the optimal solution. This phenomenon is illustrated in Figure 1.

2.1. Population Initialization

The Dung Beetle Optimizer initializes its population in a randomized manner, where each dung beetle’s position corresponds to a potential solution to the optimization problem. For a $d$-dimensional optimization problem, the position of the $i$-th dung beetle is denoted as $x_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,d})$, and a population of $N$ dung beetles can be denoted as $X = (x_1, x_2, \ldots, x_N)^T$, as shown in Equation (1).
$$X = \begin{bmatrix} x_{1,1} & \cdots & x_{1,d} \\ \vdots & \ddots & \vdots \\ x_{N,1} & \cdots & x_{N,d} \end{bmatrix} \tag{1}$$
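As a minimal illustration of Equation (1), the random initialization can be sketched in Python; the function name, bounds, and population size below are illustrative and not taken from the paper:

```python
import numpy as np

def initialize_population(n_pop, dim, lb, ub, rng=None):
    """Uniformly sample n_pop candidate positions inside [lb, ub]^dim (Equation (1))."""
    rng = np.random.default_rng() if rng is None else rng
    return lb + (ub - lb) * rng.random((n_pop, dim))

# Example: 30 beetles in a 10-dimensional search space bounded by [-100, 100]
X = initialize_population(30, 10, -100.0, 100.0)
```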

2.2. Rolling Stage

2.2.1. Obstacle-Free Mode

In the absence of obstacles, a rolling dung beetle simulates its natural counterpart by carrying a dung ball multiple times its size on its back and moving in a straight line in flat or slightly undulating terrain with the help of sun or moonlight. This phase focuses on “covering a wider search space with large steps.” The ball’s position is updated according to Equation (2).
$$x_i(t+1) = x_i(t) + \phi \times k \times x_i(t-1) + b \times \Delta x \tag{2}$$
where $\phi \times k \times x_i(t-1)$ simulates the inertia of the dung beetle continuing in its original direction, $\phi = \pm 1$ is randomly selected to reflect small deflections in the rolling direction due to natural disturbances such as wind and terrain, and $\Delta x = \left|x_i(t) - X^w\right|$, where $X^w$ is the position of the current worst individual, which encourages the dung beetle to move away from the inferior region by increasing its distance to the worst solution; $x_i(t)$ maintains the continuity of the algorithm in the current optimal neighborhood.

2.2.2. Obstacle Mode

When a dung beetle encounters an obstacle during rolling and cannot continue in its original direction, it performs a “dancing” behavior—rotating on its dung ball and pausing momentarily to recalibrate its path. The position update rule corresponding to this dancing behavior is defined in Equation (3).
$$x_i(t+1) = x_i(t) + \tan(\theta)\left|x_i(t) - x_i(t-1)\right| \tag{3}$$
It has been demonstrated that the magnitude of the movement in the new direction is proportional to the difference between the dung beetle’s positions before and after. Furthermore, $\tan(\theta)$ with $\theta \in [0, \pi]$ (the position is not updated when $\theta = 0$, $\pi/2$, or $\pi$) produces step sizes ranging from near-zero deflection to very large steering, which introduces diversity and helps the algorithm jump out of traps and avoid falling into local optima.
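A minimal Python sketch of the rolling and dancing updates of Equations (2) and (3) is given below; the deflection coefficient k and the constant b are assumed values, since the text does not fix them numerically:

```python
import numpy as np

rng = np.random.default_rng()

def rolling_update(x_t, x_prev, x_worst, k=0.1, b=0.3):
    """Obstacle-free rolling, Equation (2); k and b are assumed constants."""
    phi = rng.choice([-1.0, 1.0])            # random deflection sign
    delta_x = np.abs(x_t - x_worst)          # distance to the worst individual
    return x_t + phi * k * x_prev + b * delta_x

def dancing_update(x_t, x_prev):
    """Obstacle (dancing) mode, Equation (3); no update at theta = 0, pi/2, pi."""
    theta = rng.uniform(0.0, np.pi)
    if np.isclose(theta, 0.0) or np.isclose(theta, np.pi / 2) or np.isclose(theta, np.pi):
        return x_t
    return x_t + np.tan(theta) * np.abs(x_t - x_prev)
```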

2.3. Reproduction Behavior

Female dung beetles will set aside a safe spawning area for their young within a certain distance from the main path of the colony—close to the food source but avoiding detection by predators. The algorithm dynamically defines the upper and lower boundaries of the spawning area centered on the current local optimal position $X^*$, as shown in Equation (4).
$$Lb^* = \max\left(X^* \times (1 - R),\ Lb\right), \qquad Ub^* = \min\left(X^* \times (1 + R),\ Ub\right) \tag{4}$$
where $R = 1 - t/T_{\max}$; $Lb$ and $Ub$ denote the lower and upper bounds of the optimization problem. As the iteration advances ($R$ decreases gradually), the spawning area shrinks from a wide region to a narrow one, which is conducive to fine mining near the optimal solution. A single spawn location is generated per iteration, as shown in Equation (5).
$$B_i(t+1) = X^* + b_1 \times \left(B_i(t) - Lb^*\right) + b_2 \times \left(B_i(t) - Ub^*\right) \tag{5}$$
where $B_i(t)$ is the position information of the $i$-th egg ball at the $t$-th iteration, and $b_1$ and $b_2$ are two independent random vectors of size $1 \times d$. Egg balls are strictly confined to the spawning area.

2.4. Foraging Behavior

Some of the dung beetles leave the spawning area to forage on the ground, representing the algorithm’s diverse, smaller-step fine search around the high-quality area. A foraging area centered on the global optimum $X^b$ is also dynamically defined, as shown in Equation (6).
$$Lb^b = \max\left(X^b \times (1 - R),\ Lb\right), \qquad Ub^b = \min\left(X^b \times (1 + R),\ Ub\right) \tag{6}$$
where $Lb^b$ and $Ub^b$ are the lower and upper bounds of the optimal foraging area, respectively. The position of the small dung beetle is updated via normal perturbation and linear interpolation, as shown in Equation (7).
$$x_i(t+1) = x_i(t) + C_1 \times \left(x_i(t) - Lb^b\right) + C_2 \times \left(x_i(t) - Ub^b\right) \tag{7}$$
where $C_1$ is a random number following a normal distribution and $C_2$ is a random vector in the range $(0, 1)$.

2.5. Stealing Behavior

In high-density areas, some dung beetles attempt to “steal” dung balls from others to quickly acquire resources. In the algorithm, this behavior corresponds to a jump-based position update that drives individuals toward convergence on the global best solution. The position update rule for this stealing behavior is defined in Equation (8).
$$x_i(t+1) = X^b + S \times g \times \left(\left|x_i(t) - X^*\right| + \left|x_i(t) - X^b\right|\right) \tag{8}$$
where $g$ is a random vector of size $1 \times d$ following a normal distribution and $S$ is a constant controlling the jump amplitude. The pseudocode is given in Algorithm 1.
The Pseudo-Code of DBO:
Algorithm 1: DBO (Dung Beetle Optimizer)
Input: population size: N; problem dimension: d; search boundary: [lb, ub]; maximum number of iterations: Tmax; fitness function: f(x)
Output: Optimal position: Xbest
1:      Initialize Xi ∼ Uniform(lb, ub) for i = 1…N ←Calculated using Equation (1)
2:      Evaluate fi = f(Xi), set Xbest = argmin fi, Xworst = argmax fi
3:      for t = 1 to Tmax do
4:              p = ⌊0.2·N⌋
5:              // Rolling stage
6:              for i = 1 to p do
7:                    if rand() < 0.9 then
8:                            Xi ←Calculated using Equation (2)
9:                    else
10:                          Xi ←Calculated using Equation (3)
11:                  end if
12:            end for
13:            // Reproduction stage
14:            R = 1 − t/Tmax
15:            [Lb*, Ub*] ←Calculated using Equation (4)
16:            for i = p + 1 to p + m do
17:                  Xi ←Calculated using Equation (5)
18:            end for
19:            // Foraging stage
20:            for i = p + m+1 to p + m+q do
21:                  Xi ← Calculated using Equation (7)
22:            end for
23:            // Stealing stage
24:            for i = p + m+q + 1 to N do
25:                  Xi ← Calculated using Equation (8)
26:            end for
27:            // Boundary check
28:            for i = 1 to N do
29:                  Xi = clip(Xi, lb, ub)
30:            end for
31:            Evaluate all fi, update Xbest, Xworst
32:    end for
33:    return Xbest

3. Proposed Algorithm

According to the No Free Lunch theorem [50], different algorithms perform variably on different problems, so there is a continual need for novel optimization strategies in academic research.

3.1. Chaotic Perturbation Mixed Nonlinear Contraction Mechanisms

In the ecological behavior of dung beetles, real individuals exhibit remarkable dynamic spatial adaptability. When a population cooperates around a core resource (the dung ball), its range of activity follows a spiral contraction pattern, gradually shrinking over time. This spatial focusing strategy creates a negatively correlated feedback mechanism with an individual’s energy reserves, causing their radius of motion to decrease at a nonlinear rate, showing a preferential guarding of vested resources and conservative local search behavior.
Inspired by this, a chaotic perturbation–based nonlinear contraction mechanism is designed to enhance the foraging behavior in the algorithm. The specific update rule is provided in Equation (9).
$$X_i^{t+1} = X_{\text{best}}^{t} + \Phi(r_1)\left(X_{\text{best}}^{t} - X_i^{t}\right) + \Psi(r_2)\left(X_{\text{cbest}}^{t} - X_{i-1}^{t}\right) \tag{9}$$
$$\Psi(r_2) = \tanh(\gamma r_2)\, e^{-\lambda r_2^2} \tag{10}$$
$$\Phi(r_1) = \left(1 - \cos(2\pi r_1)\right)\left(1 + \log(1 + r_1)\right) \tag{11}$$
where $\Psi(r_2)$ is the nonlinear shrinkage factor, $\Phi(r_1)$ is the chaotic perturbation factor, $\gamma$ is the shrinkage strength parameter, $\lambda$ is the attenuation factor, $X_{\text{cbest}}^{t}$ denotes the best solution of the current iteration, and $X_{\text{best}}^{t}$ is the globally optimal solution. The term $\tanh(\gamma r_2)$ maps the random number $r_2 \in [0, 1]$ to the smooth bounded range $[0, \tanh(\gamma)]$ to avoid oscillations caused by large step sizes; $\gamma$ controls the steepness of the curve, and larger values make the factor more sensitive to small changes in $r_2$. The exponential decay term $e^{-\lambda r_2^2}$ suppresses the step size for large values of $r_2$, creating a “small step size at high frequency, large step size at low frequency” search pattern.
$\Phi(r_1)$ generates a non-monotonic and asymmetric perturbation pattern within the interval $r_1 \in (0, 1)$, producing differentiated disturbance intensities at $r_1 = 0.25$ and $r_1 = 0.75$. Compared with other chaotic mappings, this approach has lower computational cost and does not require iterative sequence generation. The use of two independent random variables (one governing the perturbation strength and the other controlling the search direction) forms a two-dimensional probabilistic space, which adds a broader range of search patterns than would be possible with a single random number. This design has proven effective in overcoming local optima in constrained search corridors.
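The update of Equations (9)-(11) can be sketched in Python as follows; the values of γ and λ are illustrative, and the last term follows the reconstruction above in using the previous individual's position:

```python
import numpy as np

rng = np.random.default_rng()

def chaotic_contraction_update(x_i, x_prev_neighbor, x_best, x_cbest, gamma=2.0, lam=3.0):
    """Chaotic-perturbation / nonlinear-contraction update of Equations (9)-(11).

    gamma (shrinkage strength) and lam (attenuation factor) are illustrative values.
    """
    r1, r2 = rng.random(), rng.random()
    phi = (1.0 - np.cos(2.0 * np.pi * r1)) * (1.0 + np.log(1.0 + r1))   # chaotic perturbation factor, Eq. (11)
    psi = np.tanh(gamma * r2) * np.exp(-lam * r2 ** 2)                   # nonlinear shrinkage factor, Eq. (10)
    return x_best + phi * (x_best - x_i) + psi * (x_cbest - x_prev_neighbor)  # Eq. (9)
```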

3.2. Environment-Aware Boundary-Handling Strategy

In the original DBO algorithm, when an individual violates the search boundary, simple truncation (directly setting the transboundary component to a boundary value) is often used to ensure feasibility. There are two main problems with this approach. First, information loss: directly “trimming” individuals may destroy their original directional and positional information, especially in the critical region, so valuable search information may be ignored. Second, excessive randomness: resetting to the boundary without directional guidance may destroy convergence.
Inspired by this, we designed an environment-aware boundary-handling strategy that treats the boundary as an “energy wall” and intelligently rebounds particles along the gradient field upon collision, using a three-phase process: collision detection (identifying the transgressed dimension), gradient sensing (calculating the gradient of the objective function at the boundary), and intelligent rebounding (adjusting the position along the direction of gradient descent). Specifically, as in Equation (12):
For an individual $X_i$ that violates the boundary in dimension $d$:
$$X_{i,d}^{\text{new}} = \begin{cases} ub_d - \eta\left(X_{i,d} - ub_d\right)\operatorname{sign}\left(\nabla_d f\right), & \text{if } X_{i,d} > ub_d \\ lb_d + \eta\left(lb_d - X_{i,d}\right)\operatorname{sign}\left(\nabla_d f\right), & \text{if } X_{i,d} < lb_d \end{cases} \tag{12}$$
The central difference method is used to approximate the gradient while avoiding redundant computation:
$$\nabla_d f \approx \frac{f\left(X + \epsilon e_d\right) - f\left(X - \epsilon e_d\right)}{2\epsilon} \tag{13}$$
where $\eta$ is the rebound factor, $\nabla_d f$ is the gradient in dimension $d$, $e_d$ is the unit vector along dimension $d$, and $\epsilon$ is a small perturbation.
This environment-aware boundary-handling strategy significantly enhances the performance of the optimization algorithm on constrained problems through a gradient-guided intelligent rebound mechanism. On the one hand, it preserves the directionality of the search: particles that cross the boundary rebound along the gradient direction of the objective function, which avoids the information loss caused by traditional random resets and keeps the search directed toward favorable regions. On the other hand, the adaptive step-size adjustment enables a fine search near the boundary, which enhances exploitation of the boundary region, effectively avoids premature convergence, and maintains population diversity.
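A minimal sketch of the three-phase boundary handling of Equations (12) and (13) is shown below; the rebound factor η and the finite-difference step ε are illustrative, the gradient is evaluated at the clipped (feasible) point, and the final clip is an added safeguard not specified in the text:

```python
import numpy as np

def smart_reflect(x, f, lb, ub, eta=0.5, eps=1e-6):
    """Environment-aware boundary handling (Equations (12)-(13))."""
    x_new = x.copy()
    for d in np.flatnonzero((x > ub) | (x < lb)):            # collision detection
        e_d = np.zeros_like(x)
        e_d[d] = 1.0
        x_feasible = np.clip(x, lb, ub)                       # evaluate the gradient at a feasible point
        grad_d = (f(x_feasible + eps * e_d) - f(x_feasible - eps * e_d)) / (2.0 * eps)  # Eq. (13)
        if x[d] > ub[d]:                                      # intelligent rebound, Eq. (12)
            x_new[d] = ub[d] - eta * (x[d] - ub[d]) * np.sign(grad_d)
        else:
            x_new[d] = lb[d] + eta * (lb[d] - x[d]) * np.sign(grad_d)
    return np.clip(x_new, lb, ub)                             # safeguard: keep the result feasible
```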

3.3. Dynamic Attraction–Repulsion Force-Field Mutation Strategy

The original DBO algorithm relies only on attraction to the global optimal solution, which leads to rapid population aggregation and loss of diversity, thereby increasing the risk of premature convergence. Moreover, all dimensions are updated synchronously, which cannot handle nonlinear correlations between variables, and the fixed or linearly decaying step-size mechanism struggles to adapt to the demands of multi-stage optimization.
Based on this, a dynamic attraction–repulsion force-field strategy is designed: the suboptimal solutions with the top 10% fitness are selected as repulsion sources, so that the algorithm can perform a fine local search when it is close to the global optimum, while the repulsive force prevents premature aggregation and maintains population diversity. The repulsive force strength increases with iterations, as specified in Equation (14):
$$F_i = w_g \frac{X_{\text{best}} - X_i}{\left\|X_{\text{best}} - X_i\right\|^3} - w_r \sum_{j \in K} \frac{X_j - X_i}{\left\|X_j - X_i\right\|^3} \tag{14}$$
Intelligent mutation rules:
$$X_i^{\text{new}} = \begin{cases} X_i + F_i\,\eta_{\text{guided}}, & \text{if } \operatorname{rand} < p_m \\ X_i + N\!\left(0, \sigma_t^2\right), & \text{otherwise} \end{cases} \tag{15}$$
where $w_g = w_0\left(1 - \frac{t}{T}\right)$ is the time-varying attractive weight, $w_r = \frac{t}{T} w_0$ is the time-varying repulsive weight, $K$ is the set of suboptimal solutions in the top 10% of fitness, $\eta_{\text{guided}} = 1 - e^{-\|F_i\|}$ is the nonlinear step size, and $\sigma_t = \sigma_0\left(1 - \frac{t}{T}\right)$ is the adaptive perturbation factor. In the weak-field region ($\|F_i\| \to 0$) the strategy performs a fine search with small step sizes, while in the strong-field region ($\|F_i\|$ large, $\eta_{\text{guided}} \to 1$) it moves quickly with large step sizes. For multi-peak functions, the repulsive field effectively prevents premature convergence, and for single-peak functions, the attractive field accelerates the convergence rate. The pseudocode is given in Algorithm 2.
The Pseudo-Code of ECFDBO:
Algorithm 2: ECFDBO (Environment-aware Chaotic Force-field Dung Beetle Optimizer)
Input: population size: N; problem dimension: d; search boundary: [lb, ub]; maximum number of iterations: Tmax; fitness function: f(x)
Output: Optimal position: Xbest
1:      Initialize {Xi}₁ⁿ ∼ Uniform (lb, ub)
2:      Evaluate fi, set Xbest
3:      for t = 1…Tmax do
4:                  // Construct the set of suboptimal solutions Q
5:                  Sort {Xi} by fitness, let Q ← the best K individuals, where K = max(3, ⌊0.1·N⌋)
6:                  for each i = 1…N do
7:                        Compute
8:                                  Fi ← Fg − Fr← Calculated using Equation (14)
9:                              if rand () < pm then
10:                                  η ←Calculated using Equation (15)
11:                                  Xi ←Attraction–Repulsion Mutation
12:                             else
13:                                    // Chaos Steps Update
14:                                    Xi ← Calculated using Equation (9)
15:                             end if
16:                             Xi ← SmartReflect(Xi, lb, ub, ε) ← Calculated using Equation (12)
17:                 end for
18:                 {Xi} ← DBO_Stage({Xi})      // Rolling, Reproduction, Foraging, Stealing
19:                 Evaluate all fi, update Xbest
20:    end for
21:    return Xbest
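As a complement to Algorithm 2, the force-field mutation of Equations (14) and (15) can be sketched in Python as follows; w0, σ0, pm, and the small numerical guards are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng()

def force_field_mutation(X, fitness, x_best, t, T, w0=1.0, sigma0=0.1, p_m=0.3):
    """Dynamic attraction-repulsion force-field mutation (Equations (14)-(15)).

    X is an (N, dim) population, fitness holds objective values (minimization).
    """
    n, dim = X.shape
    w_g = w0 * (1.0 - t / T)                        # time-varying attractive weight
    w_r = w0 * (t / T)                              # time-varying repulsive weight
    k = max(3, int(0.1 * n))
    K = np.argsort(fitness)[:k]                     # top-10% suboptimal solutions act as repulsion sources
    X_new = X.copy()
    for i in range(n):
        attract = (x_best - X[i]) / (np.linalg.norm(x_best - X[i]) ** 3 + 1e-12)
        repel = sum((X[j] - X[i]) / (np.linalg.norm(X[j] - X[i]) ** 3 + 1e-12)
                    for j in K if j != i)
        F_i = w_g * attract - w_r * repel           # Eq. (14)
        if rng.random() < p_m:
            eta = 1.0 - np.exp(-np.linalg.norm(F_i))    # nonlinear guided step size
            X_new[i] = X[i] + F_i * eta                 # guided branch of Eq. (15)
        else:
            sigma_t = sigma0 * (1.0 - t / T)
            X_new[i] = X[i] + rng.normal(0.0, max(sigma_t, 1e-12), dim)  # Gaussian branch
    return X_new
```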

4. Complexity Analysis

Time complexity is an important index of the computational efficiency of an algorithm. Assume that the population size is $N$, the problem dimension is $D$, and the number of iterations is $T_{\max}$. The complexity of the original DBO algorithm is determined mainly by the population initialization complexity $O(ND)$ and the per-iteration complexity $O(ND)$; the time complexity of the DBO algorithm is shown in Equation (16).
$$O_{\text{DBO}} = O(ND) + T_{\max} \cdot O(ND) \approx O(T_{\max} N D) \tag{16}$$
ECFDBO adds three key strategies to the original DBO algorithm. The total complexity of the dynamic attraction–repulsion mutation strategy is $O(NKD) = O(N^2 D)$, where $K \approx 0.1N$ is the number of suboptimal solutions; the complexity of the individual behavior update is $O(ND)$; and the complexity of the environment-aware boundary-handling strategy is $O(ND \cdot C_f)$, where $C_f$ is the time complexity of a single fitness evaluation. For each updated individual, it is necessary to check whether each of the $D$ variables is out of bounds and correct it, but the proportion of out-of-bounds individuals is usually limited, so the actual complexity is far below this upper bound and can be approximated as $O(ND)$. The ECFDBO time complexity is therefore calculated as shown in Equation (17).
$$O_{\text{ECFDBO}} = O(T_{\max} \times N \times K \times D) \approx O(T_{\max} \times N^2 \times D) \tag{17}$$
From this analysis, it is evident that the complexity of ECFDBO grows quadratically with the population size, compared with the linear growth of the original algorithm, primarily because of the attraction–repulsion field. Although the time complexity increases, this increase is acceptable under modern computing conditions. The introduced strategies reduce the probability of the algorithm falling into local optima and improve the overall convergence performance, so trading the additional time cost for a significant improvement in performance is reasonable and worthwhile in most practical optimization scenarios.

5. CEC2017 Test

The CEC series of test suites, recommended by the IEEE CEC conference, has become a standard evaluation platform for swarm intelligence and evolutionary algorithms, ensuring fair and reproducible comparisons among different algorithms. To fully evaluate the performance advantages of ECFDBO, it is compared with several classical and novel metaheuristic algorithms on the widely recognized CEC2017 test suite. All experiments in this work were performed on Windows 10 64-bit with an Intel(R) Core(TM) i5-12400 processor (2.50 GHz), 16 GB RAM, and MATLAB R2023a.

5.1. CEC2017 Introduction

The CEC2017 test suite aims to provide a fair and systematic performance evaluation of newly proposed optimization algorithms. The suite contains 29 standardized single-objective real-parameter optimization functions (the F2 function has been officially removed due to instability): two single-peak functions (F1 and F3) to test the global convergence capability of the algorithm; seven multi-peak functions (F4–F10) to test its global exploration capability; ten hybrid functions (F11–F20), which combine different basis functions, to evaluate overall performance; and ten composition functions (F21–F30) to test robustness and adaptability on highly complex and nonlinear problems. Both simple convex functions and high-dimensional complex combinations are covered, ensuring that algorithm performance is assessed reliably across diverse search spaces. Shift and rotation transformations are applied to all basis functions, and the search domain is uniformly set to $[-100, 100]^d$ to force the algorithm to find the optimal solution over a large range.

5.2. Comparison with Mainstream Optimization Algorithms

To test the convergence capability, global exploitation capability, and robustness of the new improved algorithm, seven comparison algorithms are analyzed on the CEC2017 test suite: five novel metaheuristic algorithms, namely BWO [37], COA [35], PIO [38], BOA [36], and NOA [39], as well as the original DBO [41] algorithm and a novel DBO variant, GODBO [44]. Parameter settings are shown in Table 1. The tests use a population size of 30, 500 iterations, and 30, 50, and 100 dimensions, and the results are evaluated on five performance indicators: mean, standard deviation, best, median, and worst. The optimal result for each function is shown in bold.
As shown in Table 2 and Figure A1, on the 30-dimensional CEC2017 benchmark the ECFDBO algorithm achieved the best average performance on most of the test functions. Among the 29 test functions, ECFDBO performed best on 18 functions, while GODBO performed best on the remaining 11. It is worth noting that, with the exception of GODBO, none of the other compared algorithms outperforms ECFDBO on any function; for example, on the typical complex benchmark function F1, the average value of ECFDBO is only 63,942.51, while the second-best algorithm, DBO, reaches 2.64 × 10^8 and the remaining algorithms are on the order of 10^10, an extremely wide gap that shows the significant advantage of ECFDBO in convergence precision. On simpler benchmarks such as F5, although GODBO slightly outperforms ECFDBO (means of 657.68 vs. 800.48), ECFDBO still clearly surpasses all other algorithms. In terms of standard deviation, ECFDBO achieved the minimum value on 15 functions, followed by BWO with 7. On the complex hybrid function F12, ECFDBO’s standard deviation is 1,863,468, roughly two orders of magnitude (10^2) smaller than that of the second-best algorithm, DBO, at 1.19 × 10^8. On the F29 function, ECFDBO’s value of 271.7949 differs from that of first-placed GODBO by less than 50. For the best, median, and worst values, ECFDBO achieves the top result on 15, 19, and 17 functions, respectively, ranking first among all algorithms. Overall, ECFDBO shows clear dominance on the 30-dimensional problems: six of the seven compared algorithms are outperformed by ECFDBO on almost all functions, and only GODBO can keep up with ECFDBO on some functions.
As shown in Table 3 and Figure A2, the overall dominance of ECFDBO extends further in the 50-dimensional benchmark. The statistics for the 29 test functions show that ECFDBO achieves the best average performance on 19 functions, while the remaining 10 functions are still dominated by GODBO. None of the other algorithms outperforms ECFDBO on any function, reflecting that ECFDBO remains a solid leader in higher dimensions. For example, on the F9 function, ECFDBO has a mean of 2.63 × 10^4, while the second-best GODBO is about 1.87 × 10^4 higher, and the values of the remaining algorithms are generally more than an order of magnitude larger. This indicates that as the dimensionality increases, algorithms such as BWO and COA deteriorate in performance, with their average errors and rankings lagging significantly behind ECFDBO; in contrast, ECFDBO and GODBO are still able to maintain good adaptation to complex functions. In terms of standard deviation, ECFDBO achieves the lowest fluctuation on 14 functions, and the second-best BWO is the most stable on 5 functions; on the complex single-peak function F1, the standard deviation of ECFDBO is about 2.36 × 10^6, while that of GODBO exceeds 7.03 × 10^8, a difference of more than two orders of magnitude. For the best, median, and worst values, ECFDBO also ranks first with 18, 17, and 18 functions, while GODBO is second with 10, 9, and 10. In the 50-dimensional environment, the ECFDBO algorithm still shows an overall leading performance advantage: its average convergence precision remains the best and is even slightly better than in 30 dimensions, and in the extreme best/worst cases ECFDBO wins on the majority of functions, showing good robustness.
As shown in Table 4 and Figure A3, in the 100-dimensional benchmarks the performance of the algorithms is differentiated even more clearly: ECFDBO remains the best on most functions, achieving the best average result on 20 functions; GODBO is still the main contender, with a slight edge on 8 functions; and, unexpectedly, the COA algorithm performs best on F3, with an average of 3.51 × 10^5 against 5.56 × 10^5 for ECFDBO, suggesting that ECFDBO may fall into a local optimum on this particular function. Nevertheless, ECFDBO still performs best on most functions. For example, on the F7 function, which has a complex multi-peak structure, ECFDBO averages 3.29 × 10^3, significantly outperforming all algorithms except GODBO (2.75 × 10^3); even on F3, where COA dominates, ECFDBO ranks 4th without significant performance degradation. In terms of standard deviation, best, median, and worst values, ECFDBO again wins on 14, 18, 17, and 18 functions, respectively. In terms of standard deviation in particular, the second-best algorithm BWO is optimal only 5 times, GODBO leads on only one function (F29), and DBO fails completely, leaving a substantial performance gap with ECFDBO. Overall, ECFDBO shows even more stable and efficient optimization in 100 dimensions than in 30 and 50 dimensions.
Combining the experimental results in 30, 50, and 100 dimensions, ECFDBO consistently leads in average performance, stability, and extreme cases, demonstrating good scalability and robustness as dimensionality increases, and it delivers better overall solution quality and more stable results on the CEC2017 test set. As the dimensionality of the problem increases, the difference in solution accuracy between the algorithms widens significantly. ECFDBO achieves the lowest mean and standard deviation for most of the test functions in all dimensions, indicating that its converged solutions are closer to the global optimum with less variation. As the dimension increases from 30 to 100, the number of functions on which ECFDBO attains the best mean grows steadily from 18 to 19 to 20. This shows that ECFDBO adapts well to dimensional extension and maintains strong solving power on high-dimensional problems. In contrast, other algorithms such as BOA, BWO, and PIO show a significant decrease in solution accuracy as dimensionality increases. This reflects that these algorithms are prone to local optima or unstable convergence on high-dimensional complex problems, while ECFDBO effectively alleviates the performance degradation caused by increasing dimensionality through its improved strategies.

5.3. Wilcoxon and Friedman Statistical Tests

In order to provide a more in-depth and systematic statistical analysis of the experimental results, the Wilcoxon and Friedman tests were introduced in this study to quantify the significance of the performance differences between the ECFDBO algorithm and the seven comparison algorithms. Specifically, the Wilcoxon test performs a test on the paired observations of ECFDBO and each comparison algorithm in the dataset, where the difference in performance between the two algorithms is considered statistically significant if the p-value obtained from the test is <0.05 (shown in bold) when the significance level is α = 0.05 ; otherwise, the difference is not statistically significant. The Friedman test aggregates the rankings of all algorithms across the functions in the CEC2017 benchmark suite and computes their average ranks. This process evaluates whether the overall differences among multiple algorithms across multiple test functions are statistically significant, thereby providing a global, nonparametric statistical basis for algorithm comparisons.
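For illustration, such tests can be run, for example, with SciPy on a matrix of per-function results; the data below are synthetic placeholders rather than the paper's results, and the authors' experiments were carried out in MATLAB:

```python
import numpy as np
from scipy.stats import wilcoxon, friedmanchisquare

# Hypothetical matrix of mean errors: rows = benchmark functions, columns = algorithms.
# Column 0 stands for ECFDBO; the remaining columns are the comparison algorithms.
results = np.random.default_rng(0).random((29, 8))

# Pairwise Wilcoxon signed-rank tests of ECFDBO against each comparison algorithm.
for j in range(1, results.shape[1]):
    stat, p = wilcoxon(results[:, 0], results[:, j])
    print(f"ECFDBO vs algorithm {j}: p = {p:.4f} -> {'significant' if p < 0.05 else 'not significant'}")

# Friedman test over all algorithms, followed by average ranks (lower is better).
stat, p = friedmanchisquare(*[results[:, j] for j in range(results.shape[1])])
ranks = np.argsort(np.argsort(results, axis=1), axis=1) + 1
print(f"Friedman p = {p:.4f}, average ranks = {ranks.mean(axis=0)}")
```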
Based on Table 5, Table 6 and Table 7, the Wilcoxon test shows that ECFDBO has a significant advantage over most of the algorithms. Overall, in the function-by-function comparisons in 30 dimensions, the differences between ECFDBO and the seven algorithms BWO, COA, PIO, BOA, NOA, GODBO, and DBO reach a significant level (p < 0.05) on the vast majority of functions, confirming that the results of ECFDBO are significantly better than those of these algorithms. BWO, COA, and NOA show significant differences on every function in the one-by-one comparison with ECFDBO. PIO fails to show a significant difference on only two functions (F23 and F24), BOA on one function (F22), and GODBO on four functions (F17, F18, F20, and F24), while for the original DBO the difference is not significant on ten functions. When the dimensionality reaches 50, the advantage of ECFDBO becomes even more pronounced, including over the original DBO and GODBO, which were closest to ECFDBO in 30 dimensions; the lead of ECFDBO over both is further extended in the 50-dimensional benchmark. The remaining algorithms show even more significant differences from ECFDBO; for example, the p-values for BWO and COA are close to 0, and four algorithms show significant differences from ECFDBO on all 29 functions, proving that ECFDBO has a clear lead in higher dimensions. In 100 dimensions, ECFDBO again significantly outperforms all seven compared algorithms, with only 11 function-versus-algorithm comparisons in total yielding p-values > 0.05. It is worth noting that although GODBO achieved the best results on individual functions in the previous tests, the overall gap between it and ECFDBO remains very clear: ECFDBO achieves 20 wins against GODBO in 100 dimensions. It can be seen that, as dimensionality increases, ECFDBO outperforms all comparators in the statistical sense.
As shown in Table 8, the Friedman test shows that the ECFDBO algorithm has a lower average rank than the other algorithms in all dimensions; notably, GODBO and the original DBO are ranked second and third in all dimensions. It can therefore be seen that the superiority of ECFDBO is not due to randomness but has reliable statistical support. Almost no algorithm matches or exceeds ECFDBO’s overall performance in any dimension. This means that the performance improvement of ECFDBO over every other algorithm is statistically significant, with the advantages over BWO, COA, and NOA being particularly prominent, and a reliable lead over second-tier algorithms such as GODBO and DBO. Thus, it can be quantitatively confirmed that ECFDBO significantly outperforms the seven compared metaheuristic algorithms on the CEC2017 test set as a whole, demonstrating robust optimization capabilities in all dimensions. This statistical advantage, together with the excellent performance of ECFDBO on each benchmark function, fully demonstrates its effectiveness and competitiveness in solving complex optimization problems.

5.4. Contribution of the Improvement Strategies

ECFDBO introduces three key improvement strategies into the DBO algorithm to enhance its performance.
Chaotic Perturbation Mixed Nonlinear Contraction Mechanisms: ECFDBO introduces chaotic sequence perturbation into the iterative process and adopts a nonlinearly decreasing convergence factor to regulate the search step size, thereby dynamically balancing global exploration and local exploitation. Owing to their ergodicity and randomness, chaotic sequences effectively increase population diversity and improve the global search capability of the algorithm. In the early stage, the chaotic perturbation helps ECFDBO explore the solution space more extensively and avoid falling into local optima prematurely; as the iteration proceeds, the perturbation amplitude gradually shrinks according to the nonlinear function, which helps the algorithm perform a fine search near the optimal solution in the later stage and thus improves convergence precision. This adaptive perturbation mechanism compensates for the lack of perturbation in the original algorithm: without population perturbation, the algorithm easily falls into local optima and misses the opportunity to search other regions. Therefore, chaotic perturbation combined with nonlinear contraction strengthens the global exploration capability and late-stage convergence stability of ECFDBO in complex multi-peak environments, dramatically reducing solution deviation and improving the robustness of the results.
Environment-Aware Boundary-Handling Strategy: For the problem of candidate solutions crossing the boundary, ECFDBO adopts an environment-aware boundary-handling mechanism. When an individual crosses the boundary of the defined area, the algorithm does not simply cut it back to the boundary or reset the random value but adjusts it according to the environmental information at the time the individual crosses the boundary so as to maintain the feasibility of the solution without losing diversity. Proper boundary handling is critical to the performance of population intelligence algorithms, and ECFDBO’s boundary handling takes into account the fitness environment of the individual’s region, intelligently pulling out-of-bounds solutions back into potentially favorable regions of the search space rather than blindly discarding or zeroing them. This ‘environment-aware’ approach improves the efficiency of the algorithm’s search near the boundary, avoiding the traditional hard boundary handling that can cause individuals to oscillate or stagnate at the boundary.
Dynamic Attraction–Repulsion Force-Field Mutation Strategy: ECFDBO introduces a mutation operator that simulates the effects of attractive and repulsive forces to speed up convergence while maintaining population diversity. The strategy applies an ‘attractive force’ toward high-quality solutions to pull surrounding individuals closer, using the global best information to guide the search and improve the speed of local convergence; at the same time, it applies a ‘repulsive force’ to over-aggregated individuals to push them away from their current positions, encouraging the population to spread to unexplored areas and maintain global search capability. Through this attractive–repulsive mutation, the population of ECFDBO can consistently jump out of local traps while approaching the global optimum. As a result, the algorithm rarely stagnates in the local extremum traps of complex functions, and the search for the global optimum is accelerated.
Regarding the relationship between the UAV 3D path-planning task and the CEC2017 test experiments: in terms of problem characteristics, ECFDBO significantly outperforms the comparison algorithms in the 30D/50D/100D tests of CEC2017, demonstrating remarkable high-dimensional optimization capability. Given that UAV path planning involves optimizing dozens of variables simultaneously in 3D space, such as path length, threat avoidance, and altitude constraints, the high-dimensional optimization capability of ECFDBO can effectively address such problems. The excellent performance of ECFDBO on the multi-peak functions (F4–F10) and hybrid functions (F11–F20) of CEC2017 enables it to help UAVs find a globally safe path amid the “traps” of local optima formed by multiple obstacles. The intelligent boundary-handling mechanism of ECFDBO, as evidenced by its behavior within the bounded search domains of CEC2017, ensures that UAVs strictly comply with physical constraints such as the flight altitude range and minimum path length. Regarding the problem goals, the advantages of ECFDBO in convergence speed and solution quality on CEC2017 guarantee that UAVs can quickly find the optimal solution during path planning. The comprehensive advantages of ECFDBO on the composition functions (F21–F30) can assist UAVs in generating optimal paths under multiple conflicting constraints.

6. Collaborative 3D Path Planning Simulation

Path planning in complex spatial environments is a critical step for UAV navigation in remote sensing missions. It ensures that the UAV safely and efficiently travels from one location to another while satisfying all mission-related constraints and requirements [51]. Based on these considerations, the model developed in this study is constructed as follows.

6.1. Problem Statement

6.1.1. Flight Path Distance

To make the operation of UAVs more relevant to actual needs, path planning should set its goals and constraints according to the application scenario. Since we focus on remote sensing and surface monitoring, we assume that there are $N$ UAVs in total and that the path of each UAV $i$ consists of a series of discrete waypoints, $P_i = \{p_{i,1}, p_{i,2}, \ldots, p_{i,m_i}\}$, where $p_{i,k} = (x_{i,k}, y_{i,k}, z_{i,k})$. The total flight path length of UAV $i$ is given by Equation (18):
$$L_i = \sum_{k=1}^{m_i - 1} \left\| p_{i,k+1} - p_{i,k} \right\| \tag{18}$$
Let $L_{\max}$ be the sum of the theoretical shortest path lengths between the origins and destinations of all UAVs; the path cost from origin to destination is then expressed as Equation (19):
$$f_o = p_1 \frac{\sum_{i=1}^{N} L_i}{L_{\max}} \tag{19}$$
where $p_1$ is an adjustment factor. Minimizing $f_o$ minimizes the total range of the planned paths, thus increasing flight efficiency.
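A minimal sketch of the range cost of Equations (18) and (19), assuming each path is stored as a numpy array of 3D waypoints and that p1 is an illustrative weight:

```python
import numpy as np

def path_length(waypoints):
    """Total Euclidean length of one UAV path (Equation (18)); waypoints is an (m, 3) array."""
    return np.linalg.norm(np.diff(waypoints, axis=0), axis=1).sum()

def distance_cost(paths, L_max, p1=1.0):
    """Normalized total-range cost f_o of Equation (19)."""
    return p1 * sum(path_length(P) for P in paths) / L_max
```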

6.1.2. Security and Threat Constraints

In complex environments, UAVs must avoid static obstacles or airborne threats to ensure flight safety; the safety constraints are defined using cylindrical and spherical zones. Assume there are $K$ obstacles in the environment, each represented by a cylindrical zone whose horizontal projection is a circle with center $C_k$ and radius $R_k$ ($k = 1, 2, \ldots, K$). The safety radius of the UAV itself is denoted as $D$, and the safety distance to be maintained outside the collision zone between the UAV and the obstacle is defined as $S$. Let $d_k$ be the closest distance to the obstacle center when the flight segment $P_{i,k}P_{i,k+1}$ passes through the projection area of the obstacle (see Figure 2). Based on the distance of the UAV relative to the obstacle, the threat cost function $T_k$ of the flight segment relative to obstacle $k$ can be defined as in Equation (20):
$$T_k\left(P_{i,k}P_{i,k+1}\right) = \begin{cases} 0, & \text{if } d_k > S + D + R_k \\ \dfrac{(S + D + R_k) - d_k}{S}, & \text{if } D + R_k < d_k \le S + D + R_k \\ 1, & \text{if } d_k \le D + R_k \end{cases} \tag{20}$$
where:
(1) If $d_k \le D + R_k$, the UAV has entered the collision zone of the obstacle, and the cost is taken as $T_k = 1$ to indicate a collision;
(2) If $d_k > S + D + R_k$, the UAV is far enough away from the obstacle, and $T_k = 0$;
(3) If $D + R_k < d_k \le S + D + R_k$, the UAV has not collided but has entered the dangerous zone around the obstacle, and a penalty is applied to the path. In this case the cost decreases linearly as the distance increases and is set to $T_k\left(P_{i,k}P_{i,k+1}\right) = \frac{(S + D + R_k) - d_k}{S}$.
In summary, the total threat cost of the entire flight path with respect to all obstacles is obtained by accumulating the threat value of each flight segment under each obstacle, as in Equation (21):
$$f_{no} = \sum_{j=1}^{n-1} \sum_{k=1}^{K} T_k\left(P_{i,j}P_{i,j+1}\right) \tag{21}$$
For UAV $i$ at waypoint $p_{i,k}$, if the waypoint is detected inside the $j$-th threat sphere, the distance $d_{i,k}^{j}$ is calculated. For spherical region 1, given a constant $p_{31}$, the cost contribution can be defined as $c_{i,k}^{j} = \frac{p_{31}}{\left(d_{i,k}^{j}\right)^4}$. For spherical region 2, assuming it has a radius of $R_{a,j}$, the following expression is used: $c_{i,k}^{j} = \frac{\left(R_{a,j}\right)^4}{\left(R_{a,j}\right)^4 + \left(d_{i,k}^{j}\right)^4}$. The cumulative traversal cost of each UAV over all threat regions is $f_{t,i} = \sum_{k=1}^{m_i} \sum_{j \in T_i(k)} c_{i,k}^{j}$, where $T_i(k)$ denotes the set of threat regions detected by UAV $i$ at waypoint $k$. The total threat traversal cost is then given by Equation (22):
$$f_t = \sum_{i=1}^{N} f_{t,i} \tag{22}$$
With the threat-cost model described above, the cost will increase if the UAV’s path is too close to an obstacle, so the path planning will be directed away from the obstacle, and safety constraints will be met.
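The cylindrical-obstacle threat model of Equations (20) and (21) can be sketched as follows; the point-to-segment projection used to obtain the closest distance d_k is one possible implementation detail not spelled out in the text:

```python
import numpy as np

def segment_obstacle_threat(p1, p2, center_xy, R, D, S):
    """Threat cost of one flight segment w.r.t. one cylindrical obstacle (Equation (20))."""
    a = np.asarray(p1[:2], dtype=float)          # work in the horizontal projection
    b = np.asarray(p2[:2], dtype=float)
    c = np.asarray(center_xy, dtype=float)
    ab = b - a
    t = 0.0 if np.allclose(ab, 0) else np.clip(np.dot(c - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    d_k = np.linalg.norm(a + t * ab - c)         # closest distance of the segment to the center
    if d_k > S + D + R:
        return 0.0                               # safely outside the danger zone
    if d_k <= D + R:
        return 1.0                               # inside the collision zone
    return ((S + D + R) - d_k) / S               # linear penalty inside the danger band

def path_threat_cost(waypoints, obstacles, D, S):
    """Accumulated threat cost of one UAV path over all obstacles (Equation (21))."""
    return sum(segment_obstacle_threat(waypoints[j], waypoints[j + 1], C_k, R_k, D, S)
               for j in range(len(waypoints) - 1) for (C_k, R_k) in obstacles)
```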

6.1.3. Cost of Flight Altitude

In practice, in complex environments, the flight altitude of a UAV usually needs to be limited to a certain range based on the accuracy requirements of sensors or remote sensing equipment such as detection radar. The mission requirements (e.g., clarity of aerial imagery, sensor accuracy) specify a minimum altitude $H_{i,\min}$ and a maximum altitude $H_{i,\max}$ as the lower and upper limits of the safe flight altitude for UAV $i$, with the safe median value $H_{i,\text{mid}} = \frac{H_{i,\min} + H_{i,\max}}{2}$, as illustrated in Figure 3.
The altitude constraint is therefore included in the cost function of the path planning. For the altitude $z_{i,k}$ of UAV $i$ at each of its waypoints $p_{i,k}$, the penalty function $h_i(k)$ is defined as in Equation (23):
$$h_i(k) = \begin{cases} \dfrac{\left| z_{i,k} - H_{i,\text{mid}} \right|}{H_{i,\text{mid}}}, & \text{if } H_{i,\min} \le z_{i,k} \le H_{i,\max} \\ 1, & \text{otherwise} \end{cases} \tag{23}$$
That is, if the waypoint altitude is within the permissible range $[H_{i,\min}, H_{i,\max}]$, the cost is taken as the deviation from the median of the range, which encourages the UAV to fly in the middle of the permissible altitude band. Summing the altitude costs over all waypoints of all paths gives the altitude cost model of Equation (24):
$$f_h = \sum_{i=1}^{N} \sum_{k=1}^{m_i} h_i(k) \tag{24}$$
Modelling in this way ensures that the planned paths strictly adhere to the upper and lower altitude limits specified by the mission, with appropriate penalties for approaching the altitude limits, thus improving the mission fitness of the paths.
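A minimal sketch of the altitude cost of Equations (23) and (24), assuming the per-UAV altitude limits are given as sequences indexed by UAV:

```python
import numpy as np

def altitude_cost(paths, H_min, H_max):
    """Flight-altitude cost f_h of Equations (23)-(24); paths[i] is an (m_i, 3) array."""
    total = 0.0
    for i, P in enumerate(paths):
        H_mid = 0.5 * (H_min[i] + H_max[i])
        z = P[:, 2]
        inside = (z >= H_min[i]) & (z <= H_max[i])
        penalty = np.where(inside, np.abs(z - H_mid) / H_mid, 1.0)  # Equation (23)
        total += penalty.sum()
    return total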

6.1.4. Path Smoothness

To ensure a smooth UAV path, UAVs need to avoid sharp turns and steep climb/dive maneuvers as much as possible. To this end, two metrics, the horizontal steering angle and the climb angle, are introduced to quantify path smoothness, and their effects are incorporated into the cost function, as illustrated in Figure 4.
Firstly, the horizontal steering angle Δ ϕ i ( k ) is defined as the degree of turning of the path in the horizontal plane: for the three consecutive waypoints P i , k , P i , k + 1 and P i , k + 2 in the path, their projection points on the horizontal plane X O Y are P i , k , P i , k + 1 and P i , k + 2 , respectively, then the horizontal steering angle Δ ϕ i ( k ) can be defined as the angle between the projected segments P i , k P i , k + 1 and P i , k + 1 P i , k + 2 , i.e., the magnitude of the horizontal steering angle, which is calculated using Equation (25):
\Delta\phi_i(k) = \arctan\left( \frac{\left| \overline{P'_{i,k}P'_{i,k+1}} \times \overline{P'_{i,k+1}P'_{i,k+2}} \right|}{\overline{P'_{i,k}P'_{i,k+1}} \cdot \overline{P'_{i,k+1}P'_{i,k+2}}} \right) \quad (25)
Secondly, the climb angle Δθ_i(k) describes the inclination of the path in the vertical direction and is defined as the angle of climb or descent of segment P_{i,k}P_{i,k+1} with respect to the horizontal plane. It is calculated using Equation (26):
\Delta\theta_i(k) = \arctan\left( \frac{\Delta z_k}{\left| \overline{P'_{i,k}P'_{i,k+1}} \right|} \right) \quad (26)
where Δz_k = z_{i,k+1} − z_{i,k} is the height difference between adjacent waypoints and |P'_{i,k}P'_{i,k+1}| is the projected length of the segment in the horizontal plane. On the basis of these two angles, the change in steering angle and climb angle between successive waypoints is evaluated for each UAV_i, and the indicator functions and the angular constraint cost are defined as in Equations (27)–(29):
a_i(k) = \begin{cases} 1, & \text{if } \Delta\phi_i(k) > \phi_{\max} \text{ or } \Delta\theta_i(k) > \theta_{\max} \\ 0, & \text{otherwise} \end{cases} \quad (27)
A_i = \begin{cases} 1, & \text{if } \exists\, k \text{ such that } a_i(k) = 1 \\ 0, & \text{otherwise} \end{cases} \quad (28)
f_a = \sum_{i=1}^{N} A_i \quad (29)
where Δφ_i(k) and Δθ_i(k) are the horizontal steering angle and the climb angle of segment k, respectively. Equation (28) gives the per-UAV angular violation indicator, and Equation (29) sums it over all UAVs to form the angular constraint cost. Together, these two parts of the cost ensure that the generated path is smooth enough in 3D space to satisfy the UAV's maneuverability constraints.
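The smoothness check of Equations (25)–(28) can be implemented per path as in the sketch below; the angle limits and the function name are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def smoothness_violation(path, phi_max_deg=60.0, theta_max_deg=45.0):
    """A_i of Equation (28): 1 if any horizontal steering angle (Equation (25))
    or climb angle (Equation (26)) exceeds its limit, 0 otherwise.
    path : (m_i, 3) array of waypoints P_{i,k}."""
    phi_max, theta_max = np.radians(phi_max_deg), np.radians(theta_max_deg)
    seg = np.diff(path[:, :2], axis=0)                      # projected segments on XOY
    horiz = np.linalg.norm(seg, axis=1)
    theta = np.arctan2(np.abs(np.diff(path[:, 2])), horiz)  # climb angles
    a, b = seg[:-1], seg[1:]
    cross = np.abs(a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0])
    dot = np.einsum('ij,ij->i', a, b)
    phi = np.arctan2(cross, dot)                            # horizontal steering angles
    return int(np.any(phi > phi_max) or np.any(theta > theta_max))

# f_a of Equation (29) is then the sum of the indicator over all UAV paths:
# f_a = sum(smoothness_violation(p) for p in uav_paths)
```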

6.1.5. Time Synchronization Constraint

For UAV_i, the total flight duration is t_i = L_i / v_i, where L_i is its total path length and v_i is the average speed of the UAV. The time synchronization cost is then defined by Equation (30):
f_m = \sum_{i=1}^{N} p_4 \left| t_i - t_c \right| \quad (30)
where p_4 is the time synchronization weighting factor and t_c is the target cooperative arrival time.
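Equation (30) reduces to a one-line computation; the list-based interface in the following sketch is an illustrative choice.

```python
def time_sync_cost(path_lengths, speeds, t_c, p4=1.0):
    """f_m of Equation (30): penalize deviation of each UAV's flight time
    t_i = L_i / v_i from the target cooperative time t_c."""
    return sum(p4 * abs(L / v - t_c) for L, v in zip(path_lengths, speeds))
```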

6.1.6. Space Collision Costs

As shown in Figure 5, collision detection is performed between different UAVs: the number of times the distance between UAV_i and UAV_j is detected to be less than the safe distance d_{safe} at the same time instant (or within the same time period) is denoted C_{ij}, and p_5 is the spatial coordination (collision) weight factor. The spatial coordination cost model can then be written as Equation (31):
f_c = p_5 \sum_{1 \le i < j \le N} C_{ij} \quad (31)
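A possible implementation of the pairwise check behind Equation (31) is sketched below, assuming all UAVs use the same number of waypoints so that equal indices k correspond to the same time step; the function name and default weight are illustrative.

```python
import numpy as np

def collision_cost(paths, d_safe, p5=1.0):
    """f_c of Equation (31): for every UAV pair (i, j), count the waypoints at
    which their mutual distance falls below d_safe (this count is C_ij)."""
    total = 0
    for i in range(len(paths)):
        for j in range(i + 1, len(paths)):
            d = np.linalg.norm(paths[i] - paths[j], axis=1)  # distance at each step k
            total += int(np.sum(d < d_safe))
    return p5 * total
```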

6.1.7. Minimum Path Segment Interval Constraint

For each flight segment l_{i,k} = \left\| p_{i,k+1} - p_{i,k} \right\| of UAV_i, let L_{min} be a predetermined minimum allowable path segment length. Whenever l_{i,k} < L_{min}, a penalty is incurred; the indicator function and the cost function are defined as Equations (32) and (33):
t_i(k) = \begin{cases} 1, & \text{if } l_{i,k} < L_{\min} \\ 0, & \text{otherwise} \end{cases} \quad (32)
f_{tr} = \sum_{i=1}^{N} \sum_{k=1}^{m_i - 1} t_i(k) \quad (33)
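The segment-length term of Equations (32) and (33) amounts to counting short segments per path, as in this small sketch (function name illustrative):

```python
import numpy as np

def short_segment_count(path, l_min):
    """Per-path contribution to f_tr (Equations (32)-(33)): the number of
    segments l_{i,k} = ||p_{i,k+1} - p_{i,k}|| shorter than L_min."""
    seg_len = np.linalg.norm(np.diff(path, axis=0), axis=1)
    return int(np.sum(seg_len < l_min))
```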
Taking into account the path distance, obstacle threat, safe altitude range, path smoothness, time synchronization, collision avoidance, and minimum segment length, a comprehensive cost function (Equation (34)) for path planning can be established:
F = w_1 f_o + w_2 f_h + w_3 f_t + w_4 f_m + w_5 f_c + w_6 f_a + w_7 f_{tr} + w_8 f_{no} \quad (34)
where w_1, w_2, ..., w_8 are the weighting parameters of the individual cost components and can be set according to the mission requirements and the scale differences of the terms. The UAV cooperative 3D path planning problem is thus transformed into an optimization problem with the comprehensive cost function F as its objective; a minimal sketch of how the components are combined is given after the constraints below.
Restrictions:
Waypoints p_{i,1} and p_{i,m_i} are fixed to the given start and end points;
Every waypoint p_{i,k} must satisfy p_{i,k} ∈ Ω, where Ω is the region in which the UAV is allowed to fly.
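As mentioned above, the eight terms are combined into the single objective F of Equation (34). A minimal sketch of this weighted composition follows; the dictionary keys and the example values are purely illustrative and not tied to any specific experiment.

```python
def total_cost(components, weights):
    """F of Equation (34): weighted sum of the eight cost terms."""
    return sum(weights[name] * components[name] for name in components)

# Illustrative call; in practice each component comes from functions such as the
# sketches above, and the weights w_1 ... w_8 are chosen per mission.
F = total_cost(
    components={'f_o': 12.3, 'f_h': 0.8, 'f_t': 4.1, 'f_m': 0.5,
                'f_c': 0.0, 'f_a': 1.0, 'f_tr': 0.0, 'f_no': 2.2},
    weights={'f_o': 0.05, 'f_h': 0.05, 'f_t': 2.0, 'f_m': 0.7,
             'f_c': 0.7, 'f_a': 0.6, 'f_tr': 0.9, 'f_no': 0.9},
)
```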

6.2. Simulation Experiment

In this experiment, two terrain maps and two threat layouts are combined to form four application scenarios for simulation and validation. The UAV formation consists of three UAVs, and ECFDBO is compared with GODBO and the original DBO, which performed well in the benchmark tests. The experimental settings are a population size of N = 30, 200 iterations, and fitness-function weights of 0.05, 0.05, 2, 0.7, 0.7, 0.6, 0.9, and 0.9. In this experimental phase the aim is to plan a safe path, so safety-related costs such as the threat constraints, collisions, and time synchronization are given high weights to ensure the safety and coordination of the task; in practical applications the weights can be tuned to the specific mission. The path points are set to 15 × 3. The layout of the experimental scenarios is shown in Figure 6, and the configuration is summarized in the sketch below.
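The following hypothetical Python sketch collects these settings in one place; the class name and the reading of "15 × 3" as 15 optimized waypoints per UAV in three coordinates are assumptions made only for illustration.

```python
from dataclasses import dataclass

@dataclass
class ExperimentConfig:
    n_uavs: int = 3
    waypoints_per_uav: int = 15          # "path points ... 15 x 3" (assumed reading)
    population: int = 30                 # N = 30
    max_iterations: int = 200            # iteration budget reported in the text
    # Cost-term weights as listed in the text (assumed to correspond to w1..w8 in order).
    weights: tuple = (0.05, 0.05, 2, 0.7, 0.7, 0.6, 0.9, 0.9)

    @property
    def dimension(self) -> int:
        # Each optimized waypoint contributes (x, y, z) per UAV.
        return self.n_uavs * self.waypoints_per_uav * 3

print(ExperimentConfig().dimension)      # 135 decision variables per candidate
```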
We applied the eight algorithms to the four scenarios in Figure 6 for experimental validation; each scenario was run independently 10 times, and the MEAN and BEST metrics of each algorithm were recorded. From Table 9, in Scenario 1 ECFDBO has an average score of 1.5688, second only to COA (1.4832) and essentially on par with BWO (1.5763), while its best score (0.7681) is far better than those of all algorithms except GODBO (0.3057). This shows that in simple environments ECFDBO not only maintains a stable global search level but also explores the optimal solution space more thoroughly. As the complexity of Scenario 2 increases, the average fitness of most algorithms rises; ECFDBO's average fitness of 1.5242 remains low, and its best fitness of 0.1660 is the lowest among all algorithms. In Scenario 3, ECFDBO's best value of 0.8294 ranks first. Scenario 4 further increases the search difficulty, and the mean and best values of most algorithms are significantly higher than in Scenario 3; ECFDBO's mean of 1.4131 and best of 0.8181 remain robust, whereas GODBO (mean = 2.9202, best = 2.1725) and the other algorithms fluctuate more. The best values are shown in bold in the table.
The fitness iteration curves are shown in Figure 7. The terrain in Scenario 1 is relatively simple, with scarce and scattered obstacles, so each algorithm can easily find a safe and feasible path. In the convergence curves (Figure 7a), ECFDBO and GODBO converge to the lowest track cost during the early iterations and stabilize near the optimum after about 40–70 generations; the final path cost of the original DBO is slightly higher than that of the former two. This shows that in a simple environment each algorithm can find a good solution. Regarding the path planning results, the three-dimensional paths (Figure 8a) and the top view (Figure 9a) show that all three algorithms produce broadly similar paths and effectively avoid obstacles. ECFDBO has fewer inflection points and good smoothness; DBO's planned path, while safe and feasible, is slightly stiff at some turns and has more track corners than ECFDBO's. This is because the global search pressure on each algorithm in simple scenarios is small; DBO can find feasible paths, but its local refinement ability is slightly inferior, so its path smoothness is somewhat poorer. The height profile (Figure 10a) shows that ECFDBO provides the best altitude control. Overall, ECFDBO finds the optimal path in Scenario 1, which verifies the algorithm's effectiveness in simple environments.
Scenario 2 increases the number of safety constraints compared with Scenario 1, thus increasing the difficulty of path planning. In terms of convergence (Figure 7b), ECFDBO converges rapidly to the minimum cost in about 60 generations, showing excellent global optimization speed. DBO and GODBO converge in about 20–40 generations, but their minimum costs are slightly higher than that of ECFDBO, indicating that the search accuracy of ECFDBO is significantly better than that of the other algorithms in this scenario. The path comparisons (Figure 8b and Figure 9b) reveal that all three algorithms exhibit varying degrees of constraint violation. However, ECFDBO demonstrates the best overall planning trend, effectively bypassing obstacles while remaining closer to the terrain. The top view shows that ECFDBO's path makes a reasonable "S"-shaped smooth turn when crossing obstacle gaps, with continuous inflection points and uniform curvature. GODBO's planned path is suboptimal, exhibiting slight zigzagging in the same region and significantly violating the altitude constraints, indicating insufficient step-size control during local searches. DBO generates a significantly longer detour, characterized by sharp bends near multiple obstacles and a less smooth and compact path, which indicates that the original DBO tends to produce oscillations near obstacles as environmental complexity increases. In contrast, ECFDBO effectively avoids over-corrected paths thanks to its chaotic perturbation and the refined control of its nonlinear contraction strategy. In Figure 10b, ECFDBO still maintains good altitude control. Therefore, ECFDBO not only converges faster in Scenario 2 but also plans a lower-cost and smoother path.
As the terrain in Scenario 3 becomes more complex, the requirements for global exploration and fine obstacle avoidance are higher. The convergence curve (Figure 7c) shows that ECFDBO still maintains the fastest convergence rate and approaches the optimal solution at about 100 generations. The fitness curves of GODBO and DBO remain significantly higher than that of ECFDBO in the whole iteration process, indicating that they fall into local optimization. It is obvious that ECFDBO has a more stable optimization ability and global convergence effect in a complex terrain environment. From the path shape (Figure 8c), the three algorithms successfully plan the path through the complex terrain, but the differences are significant: ECFDBO’s track avoidance strategy is more intelligent, and it starts to adjust the altitude and direction before entering the narrow channel to achieve a high-quality solution; GODBO and DBO take a more conservative approach; their paths are close to the edge of the terrain, resulting in a ‘sliding along the wall’ phenomenon. The top view (Figure 9c) clearly shows that ECFDBO’s path maintains a more uniform buffer distance from obstacle boundaries, while GODBO’s path even exhibits dangerous behavior. ECFDBO adopts intelligent boundary handling, and when particles approach the boundary, they are guided by a gradient to actively stay away from the “danger wall,” avoiding the “collision–retreat–re-collision” circuitous phenomenon of traditional DBO. In addition, the ECFDBO track turns more smoothly, with only necessary direction changes and no obvious back-and-forth twists; in contrast, the path generated by DBO contains several small loops, indicating that its particles were trapped in local search areas and missed better paths.
Scenario 4 adds safety constraints to the complex scenario, including multiple adjacent obstacles. A comparison of algorithm convergence (Figure 7d) shows that although the problem in this scenario is complex, ECFDBO can still converge to the vicinity of global optimization within about 120 generations and obtain the lowest path cost; GODBO converges to the final solution at a slightly higher cost; the DBO convergence curve stays at a higher cost value for a long time, showing an obvious premature convergence tendency and only obtaining suboptimal solutions. From a three-dimensional perspective (Figure 8d), the path planning results of each algorithm can be visually compared. ECFDBO successfully opens up a nearly direct path while avoiding obstacle clusters: when approaching complex obstacle clusters, the path does not plunge into crowded areas but chooses a relatively open detour corridor in advance and then bypasses obstacle clusters with smooth curves. This “round-before-through” strategy makes the ECFDBO path compact and efficient both in the top plane (Figure 9d) and in the height profile (Figure 10d): the curved path closely follows the globally optimal ideal path, with little excess detour and height fluctuation. In contrast, GODBO’s path has many sharp turns in this complex environment: from the top view, its path has made sharp turns close to right angles in order to avoid some obstacles, which not only increases the range but also causes the path not to be smooth; more seriously, it can be found in Figure 8d that GODBO’s planned path even has a “knotting” phenomenon. The path quality of the original DBO is the worst: DBO falls into local search in front of the obstacle group due to a lack of an effective disturbance-jumping mechanism, and finally, the planned track takes a large circle to reach the target, and the path cost is much higher than ECFDBO. ECFDBO stands out precisely because of its environment-aware boundary strategy: when particles approach an obstacle boundary, the algorithm does not simply truncate their coordinates but uses gradient information to bounce them away from the obstacle by a certain distance. This mechanism reduces the detour of repeated correction after collision with obstacles so that the ECFDBO track can keep smooth progress in complex areas. To sum up, ECFDBO achieves global optimal planning with the shortest path length, the lowest total cost, and the smoothest and safest path in the most complex Scenario 4, which fully proves the value of the improved strategy proposed in this paper in practical complex path planning.
In the above experiments in 3D environments of varying difficulty, the ECFDBO algorithm converges to lower path cost values in all scenarios. Especially in complex environments, its advantages over the original DBO and GODBO are more obvious: its global exploration capability ensures that the algorithm does not fall into suboptimal solutions in localized regions around obstacles, while the intelligent boundary handling and chaotic perturbation mechanisms improve path smoothness and reduce ineffective detours. The final results show that ECFDBO can more reliably plan safe and efficient 3D UAV paths, meeting the path optimality and smoothness requirements of practical tasks.

7. Conclusions

The Environment-aware Chaotic Force-field Dung Beetle Optimizer (ECFDBO) proposed in this paper significantly enhances the global optimization capability and convergence accuracy of the original DBO algorithm by employing three innovative strategies. Furthermore, ECFDBO is applied to plan remote-sensing mission paths in different complex environment scenarios. The results show that ECFDBO can find the globally optimal path in a complex environment with a faster convergence speed. These studies indicate that ECFDBO has broad prospects in the field of high-dimensional global optimization and can provide more efficient and reliable solutions for the autonomous navigation of unmanned aerial vehicles in complex environments. Machine learning approaches such as reinforcement learning are now also widely used for bionic robot path planning. Lin et al. proposed a fixed-horizon constrained reinforcement learning (RL) framework [52] to ensure vehicle safety and strike a balance between accomplishing goals and providing comfort, as well as a sample-efficient teacher-advice mechanism with Gaussian process (TAG) [53] to address the low sample efficiency and exploration difficulties of reinforcement learning. These methods may be combined with UAV path planning in future work.

Author Contributions

Conceptualization, X.Z. and R.L.; methodology, R.L. and X.Z.; investigation, X.Z. and R.L.; writing—original draft preparation, R.L.; writing—review and editing, X.Z. and R.L.; supervision, S.L. and X.Z.; funding acquisition, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the research grant from the National Key Research and Development Program of China (2024YFB3312204).

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

This appendix presents the classical convergence curves for CEC2017 (dim = 30, 50, 100), providing visual evidence for the benchmark function tests in Section 5, including typical unimodal functions F1/F3, simple multimodal functions F5/F8/F9/F10, hybrid functions F15/F16/F17/F20, and composition functions F21/F23/F24/F27/F29/F30.
Figure A1. Convergence trends of classical functions in CEC 2017 (dim = 30).
Figure A2. Convergence trends of classical functions in CEC 2017 (dim = 50).
Figure A3. Convergence trends of classical functions in CEC 2017 (dim = 100).

References

1. Ndlovu, H.S.; Odindi, J.; Sibanda, M.; Mutanga, O. A systematic review on the application of UAV-based thermal remote sensing for assessing and monitoring crop water status in crop farming systems. Int. J. Remote Sens. 2024, 45, 4923–4960.
2. Yang, Z.; Yu, X.; Dedman, S.; Rosso, M.; Zhu, J.; Yang, J.; Xia, Y.; Tian, Y.; Zhang, G.; Wang, J. UAV remote sensing applications in marine monitoring: Knowledge visualization and review. Sci. Total Environ. 2022, 838, 155939.
3. Guan, S.; Zhu, Z.; Wang, G. A review on UAV-based remote sensing technologies for construction and civil applications. Drones 2022, 6, 117.
4. Debnath, D.; Hawary, A.F.; Ramdan, M.I.; Alvarez, F.V.; Gonzalez, F. QuickNav: An effective collision avoidance and path-planning algorithm for UAS. Drones 2023, 7, 678.
5. Meng, Q.; Qu, Q.; Chen, K.; Yi, T. Multi-UAV Path Planning Based on Cooperative Co-Evolutionary Algorithms with Adaptive Decision Variable Selection. Drones 2024, 8, 435.
6. Maboudi, M.; Homaei, M.R.; Song, S.; Malihi, S.; Saadatseresht, M.; Gerke, M. A review on viewpoints and path planning for UAV-based 3-D reconstruction. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 5026–5048.
7. Zhang, Z.; Zhu, L. A review on unmanned aerial vehicle remote sensing: Platforms, sensors, data processing methods, and applications. Drones 2023, 7, 398.
8. Nex, F.; Remondino, F. UAV for 3D mapping applications: A review. Appl. Geomat. 2014, 6, 1–15.
9. Mazaheri, H.; Goli, S.; Nourollah, A. A Survey of 3D Space Path-Planning Methods and Algorithms. ACM Comput. Surv. 2024, 57, 1–32.
10. Dechter, R.; Pearl, J. Generalized best-first search strategies and the optimality of A*. J. ACM 1985, 32, 505–536.
11. Hart, P.E.; Nilsson, N.J.; Raphael, B. A formal basis for the heuristic determination of minimum cost paths. IEEE Trans. Syst. Sci. Cybern. 1968, 4, 100–107.
12. Mandloi, D.; Arya, R.; Verma, A.K. Unmanned aerial vehicle path planning based on A* algorithm and its variants in 3D environment. Int. J. Syst. Assur. Eng. Manag. 2021, 12, 990–1000.
13. Bai, X.; Jiang, H.; Cui, J.; Lu, K.; Chen, P.; Zhang, M. UAV path planning based on improved A* and DWA algorithms. Int. J. Aerosp. Eng. 2021, 2021, 4511252.
14. Chen, J.; Li, M.; Yuan, Z.; Gu, Q. An improved A* algorithm for UAV path planning problems. In Proceedings of the IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chongqing, China, 12–14 June 2020; Volume 1, pp. 958–962.
15. Sun, S.; Wang, H.; Xu, Y.; Wang, T.; Liu, R.; Chen, W. A Fusion Approach for UAV Onboard Flight Path Management and Decision Making Based on the Combination of Enhanced A* Algorithm and Quadratic Programming. Drones 2024, 8, 254.
16. Huang, C. A Novel Three-Dimensional Path Planning Method for Fixed-Wing UAV Using Improved Particle Swarm Optimization Algorithm. Int. J. Aerosp. Eng. 2021, 2021, 7667173.
17. Heidari, H.; Saska, M. Collision-free path planning of multi-rotor UAVs in a wind condition based on modified potential field. Mech. Mach. Theory 2021, 156, 104140.
18. Di, B.; Zhou, R.; Duan, H. Potential field based receding horizon motion planning for centrality-aware multiple UAV cooperative surveillance. Aerosp. Sci. Technol. 2015, 46, 386–397.
19. Noreen, I.; Khan, A.; Habib, Z. Optimal path planning using RRT* based approaches: A survey and future directions. Int. J. Adv. Comput. Sci. Appl. 2016, 7, 97–107.
20. Guo, Y.; Liu, X.; Yang, Y.; Zhang, W. FC-RRT*: An improved path planning algorithm for UAV in 3D complex environment. ISPRS Int. J. Geo. Inf. 2022, 11, 112.
21. Phung, M.D.; Ha, Q.P. Safety-enhanced UAV path planning with spherical vector-based particle swarm optimization. Appl. Soft Comput. 2021, 107, 107376.
22. Pehlivanoglu, Y.V.; Pehlivanoglu, P. An enhanced genetic algorithm for path planning of autonomous UAV in target coverage problems. Appl. Soft Comput. 2021, 112, 107796.
23. Roberge, V.; Tarbouchi, M.; Labonté, G. Comparison of parallel genetic algorithm and particle swarm optimization for real-time UAV path planning. IEEE Trans. Ind. Inform. 2012, 9, 132–141.
24. Kennedy, J. Handbook of Nature-Inspired and Innovative Computing: Integrating Classical Models with Emerging Technologies; Springer: Boston, MA, USA, 2006; pp. 187–219.
25. Fu, Y.; Ding, M.; Zhou, C. Phase angle-encoded and quantum-behaved particle swarm optimization applied to three-dimensional route planning for UAV. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2011, 42, 511–526.
26. Wei-Min, Z.; Shao-Jun, L.; Feng, Q. θ-PSO: A new strategy of particle swarm optimization. J. Zhejiang Univ. Sci. A 2008, 9, 786–790.
27. Hoang, V.T.; Phung, M.D.; Dinh, T.H.; Ha, Q.P. Angle-encoded swarm optimization for UAV formation path planning. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 5239–5244.
28. Sun, J.; Fang, W.; Wu, X.; Palade, V.; Xu, W. Quantum-behaved particle swarm optimization: Analysis of individual particle behavior and parameter selection. Evol. Comput. 2012, 20, 349–393.
29. Phung, M.D.; Quach, C.H.; Dinh, T.H.; Ha, Q. Enhanced discrete particle swarm optimization path planning for UAV vision-based surface inspection. Autom. Constr. 2017, 81, 25–33.
30. Onwubolu, G.C.; Babu, B.V.; Clerc, M. Discrete particle swarm optimization, illustrated by the traveling salesman problem. New Optim. Tech. Eng. 2004, 47, 219–239.
31. Yu, X.; Jiang, N.; Wang, X.; Li, M. A hybrid algorithm based on grey wolf optimizer and differential evolution for UAV path planning. Expert Syst. Appl. 2023, 215, 119327.
32. Yu, X.; Chen, W.N.; Gu, T.; Yuan, H.; Zhang, H.; Zhang, J. ACO-A*: Ant colony optimization plus A* for 3-D traveling in environments with dense obstacles. IEEE Trans. Evol. Comput. 2018, 23, 617–631.
33. Xu, C.; Duan, H.; Liu, F. Chaotic artificial bee colony approach to Uninhabited Combat Air Vehicle (UCAV) path planning. Aerosp. Sci. Technol. 2010, 14, 535–541.
34. Debnath, D.; Vanegas, F.; Sandino, J.; Hawary, A.F.; Gonzalez, F. A Review of UAV Path-Planning Algorithms and Obstacle Avoidance Methods for Remote Sensing Applications. Remote Sens. 2024, 16, 4019.
35. Dehghani, M.; Montazeri, Z.; Trojovská, E.; Trojovský, P. Coati Optimization Algorithm: A new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl. Based Syst. 2023, 259, 110011.
36. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Soft Comput. 2019, 23, 715–734.
37. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl. Based Syst. 2022, 251, 109215.
38. Duan, H.; Qiao, P. Pigeon-inspired optimization: A new swarm intelligence optimizer for air robot path planning. Int. J. Intell. Comput. Cybern. 2014, 7, 24–37.
39. Abdel-Basset, M.; Mohamed, R.; Jameel, M.; Abouhawwash, M. Nutcracker optimizer: A novel nature-inspired metaheuristic algorithm for global optimization and engineering design problems. Knowl. Based Syst. 2023, 262, 110248.
40. Liu, B.; Cai, Y.; Li, D.; Lin, K.; Xu, G. A Hybrid ARO Algorithm and Key Point Retention Strategy Trajectory Optimization for UAV Path Planning. Drones 2024, 8, 644.
41. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336.
42. Shen, Q.; Zhang, D.; Xie, M.; He, Q. Multi-strategy enhanced dung beetle optimizer and its application in three-dimensional UAV path planning. Symmetry 2023, 15, 1432.
43. Tang, X.; He, Z.; Jia, C. Multi-strategy cooperative enhancement dung beetle optimizer and its application in obstacle avoidance navigation. Sci. Rep. 2024, 14, 28041.
44. Zilong, W.; Peng, S. A multi-strategy dung beetle optimization algorithm for optimizing constrained engineering problems. IEEE Access 2023, 11, 98805–98817.
45. Hu, W.; Zhang, Q.; Ye, S. An enhanced dung beetle optimizer with multiple strategies for robot path planning. Sci. Rep. 2025, 15, 4655.
46. Zhang, R.; Chen, X.; Li, M. Multi-UAV cooperative task assignment based on multi-strategy improved DBO. Clust. Comput. 2025, 28, 195.
47. Shen, Q.; Zhang, D.; He, Q.; Ban, Y.; Zuo, F. A novel multi-objective dung beetle optimizer for Multi-UAV cooperative path planning. Heliyon 2024, 10, e37286.
48. Chen, Q.; Wang, Y.; Sun, Y. An improved dung beetle optimizer for UAV 3D path planning. J. Supercomput. 2024, 80, 26537–26567.
49. Lv, F.; Jian, Y.; Yuan, K.; Lu, Y. Unmanned Aerial Vehicle Path Planning Method Based on Improved Dung Beetle Optimization Algorithm. Symmetry 2025, 17, 367.
50. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82.
51. Ait Saadi, A.; Soukane, A.; Meraihi, Y.; Benmessaoud Gabis, A.; Mirjalili, S.; Ramdane-Cherif, A. UAV path planning using optimization approaches: A survey. Arch. Comput. Methods Eng. 2022, 29, 4233–4284.
52. Lin, K.; Li, Y.; Chen, S.; Li, D.; Wu, X. Motion planner with fixed-horizon constrained reinforcement learning for complex autonomous driving scenarios. IEEE Trans. Intell. Veh. 2023, 9, 1577–1588.
53. Lin, K.; Li, D.; Li, Y.; Chen, S.; Liu, Q.; Gao, J.; Jin, Y.; Gong, L. TAG: Teacher-advice mechanism with Gaussian process for reinforcement learning. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 12419–12433.
Figure 1. Dung Beetle Optimizer.
Figure 2. Cylinder threat.
Figure 3. Cost of flight altitude.
Figure 4. Path smoothness.
Figure 5. Safe distance for drone coordination.
Figure 6. Layout of the four simulated scenarios.
Figure 7. Comparison of fitness curves.
Figure 8. 3D display of path.
Figure 9. Vertical display of path.
Figure 10. Height profile display of path.
Table 1. Parameter settings.
Algorithm: Parameters
ECFDBO: P_p = 0.2, η_0 = 0.02, w_0 = 0.5, p_m = 0.07
BWO: W_f = 0.1, κ = (1 − 0.5 × t/iter) × rand, α = 3/2, K_d = 0.05
COA: no internal hyperparameters
PIO: N_c1 = round(0.7 × Maxiter), N_c2 = Maxiter − N_c1
BOA: p = 0.8, a = 0.1, c = 0.01
NOA: α = 0.05, P_a2 = 0.2, P_rb = 0.2
GODBO: P_p = 0.2, λ = 1.25
DBO: P_p = 0.2
Table 2. F1–F30 Benchmark Function Test Results (dim = 30). For each function, the rows report the mean, std, best, worst, and median values.
ECFDBO | BWO | COA | PIO | BOA | NOA | GODBO | DBO
F1mean63,942.515.3 × 10106.01 × 10102.39 × 10105.61 × 10106.94 × 101070,930,0592.64 × 108
std25,681.633.89 × 1097.05 × 1094 × 1098.45 × 1097.01 × 10991,756,8101.52 × 108
best26,987.474.5 × 10104.35 × 10101.71 × 10103.7 × 10105.09 × 101098,021.648,135,863
worst63,480.115.37 × 10106.04 × 10102.31 × 10105.63 × 10106.86 × 101049,093,7792.57 × 108
median136,598.95.86 × 10107.33 × 10103.7 × 10107.02 × 10108.4 × 10104.57 × 1085.65 × 108
F3mean9633.59181,544.4584,950.1693,315.582,745.1167,120.284,780.3699,466.21
std3718.3146400.385554.42311,092.817651.11718,639.59399.64844,057.83
best4364.10765,363.8368,694.5465,001.0264,619.64128,257.561,888.8748,642.91
worst9190.55682,800.5886,511.1594,228.9183,836.86167,579.187,999.288,573.77
median20,079.2389,699.1892,190.43140,987.294,910.4201,649.398,912.04276,108.9
F4mean507.365712,623.1915,825.342722.20120,935.818,256.86580.2727661.3345
std36.807751638.8892792.877942.64093621.4183024.03878.66037116.997
best413.70777410.9677205.8441384.75213,918.2511,069.96473.5652492.5788
worst499.46512,837.316,777.872499.82820,571.3918,053.4567.4128640.5954
median593.544814,671.8519,562.265098.44526,488.4223,726.71842.6524913.3457
F5mean800.4774927.4693913.1074861.2371912.0203994.2789657.6842747.3314
std66.0973420.7524230.0424735.850426.8791130.7631851.8198846.88862
best689.4502866.6334859.8462815.4365861.6815891.0154582.831670.1164
worst787.3878928.829913.4338856.991912.54621002.681644.9905749.2904
median941.7984959.2968979.8217937.9318957.82571033.27796.3756823.7392
F6mean656.6684692.5882692.5231664.002689.7923700.5578632.6684651.2531
std10.413093.7965846.0533899.6076026.464416.2835177.12591914.1612
best638.408683.9351680.0325647.6615672.0864687.1371620.9715625.9021
worst656.0106692.5493692.5136662.3847690.1579700.4147630.7543651.611
median682.1257700.5901706.2709684.4867698.8628709.9328649.3964689.0801
F7mean1153.351399.7371426.2641480.2951399.6082509.026956.43921033.291
std95.0121931.4609656.8181469.9839439.21844146.928559.3196896.73882
best984.19911324.881192.9111319.4961303.0752124.928854.8254861.6912
worst1155.9051405.5921431.8471488.5561397.562510.022949.45351023.429
median1363.5041455.3511492.5071591.5721475.4212819.7981099.9981237.708
F8mean1025.4921150.5191144.5031150.4451140.7761240.074956.07721023.584
std55.4836717.4931626.3024531.9682122.1005331.09259.1953953.63345
best932.35061115.9861067.3191051.6531078.1641152.245876.1469933.177
worst1017.9251151.8331147.9541147.0371146.5341241.165938.44191024.154
median1136.2661196.1251181.3281205.5581168.8741295.5811083.5071126.276
F9mean9736.61211,225.6111,006.8511,206.3111,285.1619,537.193953.8676765.347
std2383.2041012.3241454.6682377.3261176.2022469.751464.3711543.686
best5041.0988552.4268263.2247715.9428045.32113,774.641759.7524228.991
worst9855.97511,336.2811,214.1310,825.4111,383.5119,132.043849.6416651.009
median15,743.0213,329.9513,500.8518,884.4613,336.923,146.737918.7969490.701
F10mean5713.7458901.3878736.0628948.6049224.0949078.2856807.6466231.936
std664.7464357.743476.0732447.7387346.0579310.51751617.939944.1829
best4282.2838159.7147632.8977856.5558399.9397874.6333640.7444264.357
worst5621.718961.4518780.9518953.6119296.5569103.1816538.6875975.079
median7289.919538.5999513.0059625.6359659.2499465.2189083.3628844.381
F11mean1292.3368200.1449024.4364832.3919120.10912,833.551401.8271742.403
std69.07541516.371678.051121.9112668.2682871142.0839575.7978
best1164.3074821.0516320.4242628.5125682.7185379.7991234.6031346.82
worst1285.5298306.7358685.5134614.7788221.46813,228.651375.0951613.581
median1441.46511,521.0712,781.117008.44416,082.0617,555.231911.364505.651
F12mean2,519,8371.16 × 10101.3 × 10102.05 × 1091.33 × 10101 × 10101.05 × 10875,765,766
std1,863,4682.18 × 1093.2 × 1096.39 × 1083.25 × 1092.26 × 1092.56 × 1081.19 × 108
best192,565.78.3 × 1097.21 × 1098.47 × 1086.78 × 1095.96 × 1092,916,0362,025,324
worst2,047,4601.17 × 10101.28 × 10102.04 × 1091.31 × 10101.01 × 101020,653,60627,026,788
median8,109,6151.6 × 10101.96 × 10104.15 × 1092.14 × 10101.41 × 10101.09 × 1096.04 × 108
F13mean25,844.736.55 × 1098.69 × 1096.77 × 1081.3 × 10105.64 × 1091.15 × 10811,920,817
std20,870.872.1 × 1094.36 × 1092.49 × 1086.3 × 1091.5 × 1096.03 × 10820,468,936
best3899.6992.83 × 1092.84 × 1091.85 × 1083.88 × 1092.1 × 10941,537.4446,900.35
worst18,980.976.51 × 1097.8 × 1096.19 × 1081.22 × 10105.79 × 109224,277.81,574,382
median72,569.171.05 × 10102.02 × 10101.22 × 1092.83 × 10109.46 × 1093.31 × 10971,958,593
F14mean32,334.974,257,8543,960,435815,745.24,297,7233,025,255283,592.4209,998.4
std26,512.962,620,0073,167,737583,094.94,097,8391,467,041382,286.9185,649
best2309.498592,077.2557,71390,371.21144,606.2667,300.710,666.7810,345.24
worst25,730.53,906,6923,948,395681,304.72,568,4982,860,144136,500.5166,456.7
median89,428.4414,500,69014,945,4092,395,54219,757,2777,075,4321,673,009881,024.9
F15mean13,198.823.28 × 1088.58 × 1081.52 × 1085.68 × 1088.21 × 10865,525.93285,280.7
std12,461.621.9 × 1085.73 × 10891,543,8445.76 × 1084.38 × 10853,997.251,102,114
best1908.74913,466,52445,179,49334,061,39137,393,2461.09 × 1082920.757399.019
worst9093.73.34 × 1087.62 × 1081.13 × 1083.63 × 1088.15 × 10858,370.1159,707.92
median43,755.358.69 × 1082.91 × 1094.03 × 1082.37 × 1091.71 × 109177,445.26,109,350
F16mean2879.7695819.1786364.7594098.7027467.6785202.7023115.2833288.972
std344.4561410.5533906.3894348.05641430.561352.1852420.6608435.3645
best2175.8634673.7855161.7593114.324054.9694353.8572333.6742172.459
worst2899.1255900.7826260.5494115.1187507.3355263.2613055.1363317.381
median3567.7856616.5848789.344960.3239887.6665749.9034083.4444184.224
F17mean2516.4914475.7145249.2512971.06310974.593581.0372582.4412679.382
std262.9417763.23363076.794154.614411,398.86228.1887289.5168272.8822
best2054.3223156.6853006.4962702.3833899.0543086.81904.7852203.731
worst2571.8114347.1323900.7412999.477324.9743562.2452597.0152706.697
median3159.8626487.52717,638.143241.07259,337.94019.6633276.5923195.838
F18mean715,402.354,249,83259,452,99211,870,74249,325,99140,892,7181,948,3174,033,524
std817,243.327,073,42653,818,5397,421,46244,586,03319,512,9223,992,7455,215,681
best41,453.476,640,9961,889,1492,794,6866,903,8985,674,77735,531.3978,313.29
worst486,623.953,683,49749,593,12910,764,93431,256,16139,061,988860,592.81,839,512
median3,439,5571.12 × 1082.11 × 10830,407,7451.86 × 1081.06 × 10820,309,92123,672,496
F19mean21,262.954.85 × 1087.77 × 1082 × 1086.64 × 1081.17 × 1094,097,3325,784,654
std18,941.242.03 × 1085.22 × 10896,177,9254.79 × 1084.63 × 10811,501,86318,081,030
best2583.75892,132,65459,577,48440,201,93389,876,1912.88 × 1082193.5692638.476
worst16,050.654.87 × 1087.96 × 1082.14 × 1086.08 × 1081.22 × 109116,303.4404,450.8
median56,775.439.58 × 1082.71 × 1094.2 × 1082.24 × 1091.97 × 10950,617,20298,073,607
F20mean2656.3643033.383094.9543036.893118.83100.2582697.7812742.864
std239.1146130.2307168.8114119.10192.77931110.8192226.7765260.9707
best2272.1892771.0092554.9832787.1742897.1342857.4762300.2892235.967
worst2647.2913043.5923139.1873045.2433119.753110.2192628.0472707.018
median3128.0323296.2133339.5093352.2853288.1633338.993137.8413249.261
F21mean2580.1262723.7012747.4252615.5622751.3712754.2392465.952561.324
std73.8895148.4436245.4719127.5713948.3675423.9824945.8466748.4529
best2447.7082517.9472648.1552564.3862623.9242706.6182377.8782471.764
worst2572.0322732.732752.2982617.7242755.7232760.2372468.8112558.892
median2731.7552792.622855.6072663.1352825.2022790.2362560.6562655.951
F22mean6787.7688871.0499576.7025759.6467151.2639518.5394205.6826215.286
std1677.095484.1969764.60352399.4391026.057723.92332613.7662311.638
best2303.1927887.2586769.963515.9354850.2287868.1232311.72364.278
worst7116.8318882.6289874.2764697.1817126.8089654.0742427.1487212.762
median8901.3779708.59310,688.610,792.89364.77110,622.989568.5869886.574
F23mean3014.1553333.1743651.9412990.4323575.0733402.1292915.6513014.023
std97.7217750.0228148.059329.88314187.204760.65244112.072186.11765
best2787.7883226.2593351.3572935.9553025.9963276.9152762.1882864.823
worst2996.7123337.9573632.1022989.9313596.073403.8722871.6063007.588
median3250.4813430.5453998.3193056.9973864.4953505.6233139.6313177.673
F24mean3161.3783613.3393787.6493158.174161.5343645.1353113.0823182.059
std102.517779.48947185.565642.43335238.494475.6168590.5652975.59769
best2965.5893496.2823365.743082.7723590.0923434.832979.8353044.666
worst3173.1653596.3643808.8723159.0294152.1743653.6063097.1283162.987
median3327.0243762.8244200.6713240.0744590.2743824.5273310.43336.49
F25mean2896.5484480.7345323.2744724.2655704.6868804.5582923.4582999.593
std15.91772199.7949350.8185380.5895641.2731863.468530.3988477.06749
best2883.7283962.2034410.2934017.3564641.957384.9032885.0762894.905
worst2889.6724477.0855326.1154702.9035546.1778584.7132924.3952983.4
median2950.2924847.755965.0915495.4957193.24110803.073023.5883210.79
F26mean7455.36910,646.1511,735.056895.65511,882.1211,251.126034.2327158.672
std1340.177561.0557861.7151959.6498788.1217822.3199997.3035785.1776
best2816.7899133.7569854.4655176.72810,765.619540.233676.8335295.298
worst7586.72410,689.6811,804.837168.29511,911.4411,502.115888.1337151.111
median9757.08711,751.2213,408.358646.07713,352.9412,632.548731.3878632.776
F27mean3290.3024034.9964495.7083404.7364395.6594132.7213349.1823347.978
std45.08133151.9417395.554755.45793363.7561122.920179.8654270.67476
best3213.5453652.4493830.9853293.7713752.5563903.3593261.0873249.935
worst3272.5564059.0764402.4443403.0384458.5464138.3763323.0773336.538
median3392.3064294.6965635.0993520.2935352.9414402.5143634.8853534.849
F28mean3252.3876458.3297659.1264566.7598163.2177789.1373352.8663604.37
std42.03031325.0748701.5746459.2043500.8554776.348275.75684687.9095
best3200.3295574.2116126.0614073.5117047.8346335.1973265.2053299.473
worst3250.1486521.0587650.0864413.0268244.7527842.0113324.3363392.65
median3373.5177105.4518955.8315831.2319021.7439020.7723558.8776272.067
F29mean4429.6176895.768345.1375101.26313,156.876395.724214.6154549.805
std271.7949711.90641580.409304.27426171.057428.0731229.5287392.7875
best3947.6535733.5825557.7954427.1066837.4215507.7693817.1213800.398
worst4455.0786874.3038501.485147.4511,409.636436.5634261.454548.292
median4972.6818645.28812397.945690.93934917.827228.3354635.8575311.23
F30mean48,500.771.16 × 1091.78 × 1091.23 × 1081.22 × 1096.8 × 1081,780,7977,762,810
std41,110.943.57 × 1081.19 × 10950,993,4365.91 × 1082.22 × 1082,843,25222,587,572
best6351.3415.08 × 1081.04 × 10835,740,7343.27 × 1082.09 × 10830,084.0213,525.09
worst31,118.471.15 × 1091.35 × 1091.15 × 1081.1 × 1096.98 × 108570,931.41,081,845
median151,815.91.96 × 1095.45 × 1092.4 × 1082.48 × 1091.1 × 10910,298,1841.23 × 108
Table 3. F1–F30 Benchmark Function Test Results (dim = 50). For each function, the rows report the mean, std, best, worst, and median values.
ECFDBO | BWO | COA | PIO | BOA | NOA | GODBO | DBO
F1mean4,780,2581.06 × 10111.09 × 10119.65 × 10101.09 × 10111.71 × 10111.18 × 1099.19 × 109
std2,359,2955.01 × 1099.86 × 1091.39 × 10107.88 × 1091.07 × 10107.03 × 1081.69 × 1010
best1,512,9349.37 × 10108.91 × 10106.82 × 10109.01 × 10101.5 × 10114.25 × 1084.64 × 108
worst3,928,3631.06 × 10111.1 × 10119.83 × 10101.1 × 10111.73 × 10111 × 1093.77 × 109
median10,522,6021.15 × 10111.25 × 10111.22 × 10111.22 × 10111.94 × 10113.35 × 1097.07 × 1010
F3mean87,516.16249,766.4197,257.9264,701.7316,388.5329,198.4322,936.4248,865.5
std32,122.3836,977.6821,021.1729,715.68149,694.833,740.9558,986.3354,829.01
best36,759.7189,932.7162,315.7179,046.2170,203248,425.4193,637.4145,758.5
worst73,523.75251,556.4195,704.8266,654.4272,373.2331,103.4314,708247,962.2
median162,429.7351,854.8238,114.7311,972.1865,964.3382,355.9513,354.7393,175.2
F4mean609.842634,119.1339,293.5715,732.341,410.9251,193.12912.58621194.075
std60.867623216.9015100.6635187.0623473.4528197.951159.1647299.3094
best489.118322,722.1530,096.177784.6931,673.137,072.26654.7685663.9026
worst617.030434,459.9239,746.0914,700.9940,911.5153,136.07878.3931115.647
median800.192439,085.9548,427.431,976.0349,882.5263,973.421351.2251895.822
F5mean1023.0861196.7921194.1381243.6791176.3891447.111831.1338995.0373
std111.822318.201132.0534128.341826.2011433.4608781.3543799.21457
best837.51481151.5111134.7581167.6871104.9981368.81700.8344821.8647
worst1044.5481198.4891197.4121241.5881184.5531455.214817.10061005.631
median1273.8561221.9351243.2231313.4451218.4041507.0441044.841153.54
F6mean673.1587703.0555702.3727691.9153705.7748720.2816647.1692670.9763
std6.7416264.4709174.7700889.7329275.4958814.5796957.0424829.649086
best659.9427689.8744687.6308673.5214686.885708.4121634.3791643.4003
worst673.5805703.072704.0603693.1564705.456720.9183647.4455670.9801
median688.3811710.4015707.9847711.0534715.5399727.7279663.3866687.7654
F7mean1677.5121989.6022070.3072113.6292008.2234750.6851335.7241401.825
std129.80243.6618841.5732877.6307952.39592171.1507181.3475134.9587
best1415.0561864.4351960.9931884.3821841.494404.8231075.4771139.746
worst1692.1211994.1922074.9032120.8442001.7874751.851307.0071407.649
median1973.9522069.7462130.1372199.2362110.8075066.7571893.6421700.882
F8mean1351.271512.3731495.8531563.1821512.0891749.0391169.1131279.896
std85.0668517.7712927.1142736.5918924.96442.2352124.1553117.3652
best1132.0231467.4541448.8891487.3891436.2191623.5991024.4381068.045
worst1332.6281511.1771496.7991568.8331512.1491756.3791111.1421312.018
median1524.1141545.2171545.2191624.4621555.271810.8911443.0491485.358
F9mean26,315.1939,713.0139,487.9243,652.7440,148.1264,453.5118,684.1227,981.46
std5710.8652801.2812733.5216709.9243165.2926313.145930.0567751.885
best14,12030,240.1934,025.1329,168.8430,667.4844,678.9510,217.2114,544.16
worst26,145.7140,060.1739,455.9443,719.7540,167.5465,672.7318,116.5727,353.29
median39,054.8244,701.0644,175.1454,292.8845,526.8476,863.9333,371.2143,059.4
F10mean9036.24415,023.6815,408.315,529.4915,601.8115,607.1612,851.3311,442.55
std1028.188463.9161452.2438447.2961565.7118364.03582916.4892387.653
best6960.83413,673.8114,072.4914,574.5914,099.7814,972.367474.5488069.785
worst9123.11915,059.9915,410.7115,589.2815,670.6215,660.514,533.6911,116.4
median10,696.6815,644.2916,156.4616,218.3516,541.2716,122.5415,705.315,258.18
F11mean1456.01422,646.1825,955.216,038.5124,384.5637,752.183113.9235406.151
std101.13211911.8712606.7494284.9272959.7446875.879685.75454643.407
best1307.77318,668.8919,768.289414.71115,229.519,129.471899.8942176.385
worst1438.39922,946.7826,342.515,599.3825,237.1938,325.343226.6073474.322
median1667.79525,295.7430,296.2626,549.5828,484.6249,140.394584.31519,830.63
F12mean21,208,2236.3 × 10108.9 × 10101.41 × 10108.17 × 10106.76 × 10103.98 × 1089.58 × 108
std11,394,6221.05 × 10101.46 × 10102.63 × 1091.76 × 10109.25 × 1094.05 × 1088.38 × 108
best7,817,7853.66 × 10105.51 × 10108.37 × 1095.32 × 10104.61 × 101031,344,19631,725,817
worst19,125,8716.25 × 10109.14 × 10101.39 × 10108.61 × 10106.79 × 10102.88 × 1086.88 × 108
median48,400,2148.33 × 10101.13 × 10112.09 × 10101.13 × 10118.4 × 10101.75 × 1093.61 × 109
F13mean65,586.634.12 × 10104.74 × 10104.37 × 1094.13 × 10102.85 × 10101.93 × 1081.61 × 108
std44,267.628.42 × 1091.62 × 10101.31 × 1091.72 × 10104.54 × 1095.22 × 1082.28 × 108
best13,035.422.71 × 10101.55 × 10101.89 × 1091.66 × 10101.92 × 1010114,020.32,961,075
worst51,812.644.15 × 10104.68 × 10104.28 × 1093.65 × 10102.87 × 101022,893,07865,134,665
median144,709.65.91 × 10108.47 × 10106.93 × 1097.87 × 10103.76 × 10102.57 × 1099.92 × 108
F14mean327,094.470,972,8781.21 × 1084,716,7151.57 × 10831,301,0093,210,8354,034,978
std206,097.426,730,7851.07 × 1082,287,2601.16 × 10816,751,1254,229,7514,957,667
best62,115.7510,693,6425,875,5081,233,77416,131,2509,128,897304,722.2148,710.7
worst311,704.971,552,20179,985,9854,621,5921.2 × 10827,603,2291,954,6261,928,327
median859,561.41.44 × 1084.11 × 10812,215,3404.41 × 10885,198,63223,139,88917,643,989
F15mean20,076.146.39 × 1098.42 × 1091.85 × 1099.56 × 1097.81 × 1091,194,62899,311,999
std12,505.081.61 × 1093.71 × 1097.37 × 1082.88 × 1092.2 × 1094,993,2853.24 × 108
best3435.9153.34 × 1092.86 × 1093.12 × 1083.54 × 1092.44 × 10916,137.7538,951.08
worst19,200.356.16 × 1098.49 × 1091.85 × 1099.96 × 1098.04 × 10995,439.28222,537.5
median55,648.579.84 × 1092.03 × 10103.18 × 1091.53 × 10101.1 × 101027,128,8571.76 × 109
F16mean4149.218913.88310,551.976367.5111,540.078841.3684350.9224647.712
std449.7375713.05571734.323517.75361392.449475.2364620.8001733.0185
best2885.2427250.6747519.8455428.6658080.717966.5373034.953140.566
worst4161.4548945.43910,219.526349.43611,493.538814.2524308.8454797.013
median4885.02410,160.0414,156.687510.28414,834.549721.6095773.1026039.062
F17mean3925.4597852.05612,419.15903.53317,758.6116,857.183714.4674287.897
std400.71811469.2639511.617524.519511,094.17660.292376.9633575.6267
best3266.5285184.3024962.2284893.8746891.4056954.5432801.8982809.991
worst3885.2327429.90710,080.375944.59815,095.3817,259.13752.5414376.222
median4527.25311,919.5754,864.47012.8347,128.5343,756.894383.1835328.389
F18mean3,741,3471.58 × 1082.06 × 10856,514,6571.8 × 1081.7 × 1088,681,29510,596,914
std2,325,90763,623,3261.04 × 10825,410,81592,081,40053,349,2519,155,9399,242,248
best723,232.652,320,25377,846,39719,530,34957,970,64875,065,695724,814.81,571,896
worst3,132,2721.59 × 1081.82 × 10851,753,0011.41 × 1081.72 × 1085,021,8947,209,046
median10,168,4013.36 × 1085.33 × 1081.11 × 1084.29 × 1082.91 × 10842,010,77939,489,195
F19mean22,177.843.65 × 1094.7 × 1096.87 × 1084.44 × 1093.52 × 1093,340,4648,448,320
std15,848.028.07 × 1081.74 × 1092.54 × 1081.8 × 1097.66 × 1085,014,68210,142,526
best2419.4921.99 × 1091 × 1092.68 × 1081.72 × 1091.14 × 10912,709.89258,850.9
worst23,607.73.62 × 1094.73 × 1096.82 × 1083.92 × 1093.53 × 1091,037,7334,918,391
median44,662.615.31 × 1097.28 × 1091.18 × 1098.35 × 1094.8 × 10919,438,06139,488,873
F20mean3644.9414157.5164297.974367.1424365.7914512.713788.8563724.264
std405.6563182.891195.1588236.7496218.3726156.6324432.4924269.4832
best2731.1733642.4673861.9113678.1933747.0124093.0512791.9653237.719
worst3730.8174153.2034339.084388.0354407.1634520.6213857.833697.314
median4448.9224476.9174654.8144731.9534701.5964894.4674418.5134227.031
F21mean2854.4383211.0123266.6112971.8143230.1923246.8532689.2162836.455
std104.424940.4947991.5201448.4748561.1281534.9707110.539577.75509
best2624.2273095.6473044.1842876.9813144.7843169.2992492.8462695.025
worst2870.0383215.3193261.0332972.4783221.7543253.1852665.4872831.948
median3047.4143267.5753445.2663083.4293357.73314.3832909.1212953.604
F22mean10,998.816,980.0617,249.416,679.9417,358.8417,391.2713,652.7312,998.59
std840.0132408.6059515.12642123.499786.7584417.06352907.1682632.685
best9175.08816,194.0816,400.357408.29313,847.8516,508.119287.2738554.412
worst10,922.9117,105.0617,387.5217,186.6617,511.7517,358.6212,992.1912,206.8
median12,981.2117,617.4118,030.3318,131.4718,188.6518,118.7117,712.5317,233.74
F23mean3581.6434128.5824620.6013510.334719.5514298.043307.9363534.21
std206.231291.56528189.044768.58512192.4512124.7457129.4303133.5617
best3210.1393939.2554212.7743411.7914076.1434022.3293161.8933350.576
worst3583.1594138.6964653.1073497.2764730.4264302.7963268.2643537.664
median3952.5774266.7044987.4753681.9775042.4474499.23650.2813956.54
F24mean3731.6154570.2914977.0453600.4925383.014623.2963516.1783755.547
std182.5002145.2558319.122458.59646363.3184151.54256.9348134.312
best3515.3224265.0154471.5063480.2674864.8394229.2673166.453428.635
worst3694.5584581.4834947.5263600.1115382.9514625.6493453.593798.043
median4218.9384826.4765813.1293707.1216419.9064884.1364227.3543985.79
F25mean3110.89814,313.6215,991.5914,126.0216,062.9433,891.033248.2444482.356
std35.02985642.67811098.7872319.1721040.2412965.02778.154222045.493
best3039.40412,625.613,102.368969.72713,429.0724,792.333123.9833140.849
worst3115.63114,328.0715,887.5414,173.4816,125.5834,648.343243.3653573.945
median3192.15915,495.9517,971.1818,126.4318,101.8638,039.973434.7210,113.06
F26mean12,568.0516,850.617,622.6417,458.2117,780.6721,232.977701.15910,970.81
std1580.866378.2373535.1292077.685615.38351315.0652403.6141118.15
best9295.75616,187.6316,391.9711,932.3415,771.8918,804.744126.5878835.321
worst12,811.0216,858.8217,703.917,938.1917,869.6921,437.787611.06711,041.12
median17,119.8517,673.4518,813.6419,701.7818,878.7223,754.612,024.913,645.62
F27mean3808.6756063.4927074.1024337.9616850.2846255.2824020.1083984.485
std235.8524455.9033910.8321196.7503807.757321.4446209.8758213.1498
best3484.3475243.0285646.5054030.3895131.2245697.8833657.323623.358
worst3811.766093.3326923.0874318.2296767.4586244.3694020.0473939.278
median4270.0546871.2268560.4034815.0718550.2046939.0984531.8014417.38
F28mean3365.32512,358.2214,036.879989.37314,426.5515,690.863964.5286578.806
std37.49875722.20681214.6511047.5971115.6151440.8921197.6222123.016
best3298.1559369.05112,230.387433.32812,290.512,654.153371.9883527.566
worst3369.19512,457.7213,653.719811.27314,322.9115,661.533764.2486543.57
median3456.38913,161.6916,948.6612,382.3517,099.8618,663.6310,185.5510,123.58
F29mean5497.25829,545.38156,637.68103.622309,734.525,633.535625.9946369.865
std590.198117,812.2192,179.3635.5026341,5979502.369632.0883889.2101
best4439.05510,118.9925,201.376565.72528,361.9513,564.984492.3464770.456
worst5484.53625,437.2979,776.498036.524144,237.523,549.285650.5876220.656
median7052.81675,688.88999,331.19552.5351,209,93655,501.016680.6219653.24
F30mean2,795,5295.14 × 1098.39 × 1091.2 × 1097.06 × 1095.73 × 10929,677,05748,080,697
std1,351,7221.11 × 1093.87 × 1093.56 × 1082.87 × 1091.39 × 10929,781,94141,573,944
best903,013.92.54 × 1092.27 × 1096.3 × 1081.76 × 1092.99 × 1094,194,7333,665,377
worst2,509,6185.03 × 1097.53 × 1091.12 × 1097.05 × 1095.69 × 10915,930,41226,709,720
median6,397,6977.53 × 1091.68 × 10102.14 × 1091.23 × 10107.88 × 1091.02 × 1081.49 × 108
Table 4. F1–F30 Benchmark Function Test Results (dim = 100). For each function, the rows report the mean, std, best, worst, and median values.
ECFDBO | BWO | COA | PIO | BOA | NOA | GODBO | DBO
F1mean7.45 × 1082.59 × 10112.73 × 10112.69 × 10112.61 × 10114.84 × 10113.7 × 10108.06 × 1010
std3.24 × 1085.57 × 1099.06 × 1091.3 × 10101.46 × 10101.66 × 10101.34 × 10106.3 × 1010
best2.28 × 1082.46 × 10112.54 × 10112.33 × 10112.27 × 10114.41 × 10111.82 × 10102.93 × 1010
worst6.31 × 1082.6 × 10112.75 × 10112.7 × 10112.62 × 10114.84 × 10113.61 × 10104.57 × 1010
median1.61 × 1092.68 × 10112.88 × 10112.91 × 10112.84 × 10115.17 × 10116.51 × 10102.5 × 1011
F3mean556,312.7377,040351,483475,644.8511,431.5783,361.5414,625.3595,729.4
std155,549.450,453.2116,245.01176,121174,033.487,093.05106,216.4250,553.3
best284,156.4332,419.6310,093.1366,642.5359,050.2608,466.1362,163.9322,179.9
worst574,389.5362,168.8356,070.1366,718435,797778,483.9391,103.9511,466.6
median964,242.4560,682.7374,778.9883,061.5947,707.4975,420951,696.21,369,806
F4mean1103.319100,125.8109,569.470,926.48113,669.9174,227.73727.13719,936.42
std109.62697489.94711,451.9216,469.7510,194.1314,779.121350.38620,626.2
best948.558385,618.2387,562.8639,553.1890,745.05147,507.91956.4743387.313
worst1076.578101,194.6108,947.372,477.39115,869.6174,697.23401.5118784.596
median1481.919110,236.8126,735.7100,351.8130,039.5198,696.79386.49971,595.01
F5mean1744.4062120.3832130.7642219.4192091.9492741.8011684.5841692.849
std294.656622.9901635.4466655.0738550.979666.98955226.6237205.015
best1373.3082049.4022060.3072109.291936.352556.871284.5591344.777
worst1714.6382123.8772133.9042223.2982106.8042756.2771692.641613.662
median2269.2372159.8012191.7592307.572145.8582837.6712112.3962048.001
F6mean678.0402712.5224713.1258717.9441712.8578740.7731665.6168683.7687
std8.1099822.3304293.6826795.0896173.3134863.7808029.10765613.90364
best666.344707.813701.4139707.4029706.6868729.5406650.24657.0897
worst677.4142712.1859713.4107720.0742713.603741.2842664.9418685.7911
median698.8027717.3717.6363725.4998718.7438747.9626699.5684707.3842
F7mean3293.1373884.2554032.224143.6783944.47811,069.352753.513015.819
std200.92661.6437652.51027104.389869.1295479.8582202.3837256.7895
best2614.1243761.5383921.1823800.13804.8449824.9992370.6762389.143
worst3309.6883879.634026.9394147.4193950.49811,177.422698.883022.455
median3736.9153984.3994133.6474285.5944083.8811,755.913180.3373526.901
F8mean2222.1042599.9352598.422663.8252578.033117.1851903.0552153.67
std202.204534.7215356.7543448.3320138.1737173.77207200.1381254.3752
best1717.6552526.2532448.6142550.7782508.1562964.2771645.1441722.929
worst2221.6242602.3392615.5132667.0692582.0483115.3491838.5542236.892
median2620.5012680.0562705.4582736.032661.6893234.992372.9732498.944
F9mean60,455.3580,183.8481,018.7499,296.1486,041.04165,765.377,050.3673,587.85
std186554820.3384885.7666903.583395.67410,464.628062.06712,965.47
best27,080.3265,379.8969,477.4378,988.280,435.73142,076.658,765.5735,460.35
worst60,669.3580,472.1580,475.79102,981.485,766.82165,432.277,127.3375,350.98
median103,171.687,361.8191,129.78107,49292,480.88185,227.989,292.594,984.11
F10mean18,753.7432,411.8932,823.0433,026.7133,190.8433,492.9931,613.6428,054.6
std1731.832578.5304855.6955632.246635.5477566.00722337.0554967.889
best15,986.8731,194.7230,940.5331,454.931,740.4532,155.7322,861.3618,960.93
worst18,962.7532,577.9232,940.2633,120.233,221.7533,575.5132,405.6429,006.16
median23,201.8733,521.2434,038.4834,043.1834,294.134,400.9533,363.6133,813.22
F11mean30,562.76375,307.3251,568.9234,786.9435,013.7361,867.1278,041229,312.4
std6570.7879,545.4366,685.1940,111.28200,693.658,812.1576,660.3754,000.11
best19,355.85253,923.3143,315.6125,476.8165,383.3250,249.2134,985.8151,359.1
worst29,760.43368,621.2225,813.5239,133.6361,261.8364,866.1276,670.5220,557.2
median43,382516,893.2437,470.1319,686.3991,459463,055.7478,738.9335,987.1
F12mean3.35 × 1081.92 × 10112.03 × 10118.9 × 10101.91 × 10112.31 × 10113.21 × 1097.15 × 109
std1.47 × 1081.23 × 10101.81 × 10101.77 × 10102.33 × 10102.12 × 10101.07 × 1092.48 × 109
best1.23 × 1081.6 × 10111.65 × 10115.96 × 10101.39 × 10111.73 × 10111.71 × 1091.37 × 109
worst3.26 × 1081.95 × 10112.04 × 10119.01 × 10101.94 × 10112.37 × 10112.84 × 1096.64 × 109
median6.28 × 1082.07 × 10112.39 × 10111.33 × 10112.36 × 10112.62 × 10115.34 × 1091.52 × 1010
F13mean699,0344.38 × 10104.9 × 10101.56 × 10104.62 × 10105.55 × 10101.01 × 1083.83 × 108
std1,404,6443.91 × 1095.34 × 1093.05 × 1095.96 × 1094.91 × 1091.46 × 1082.85 × 108
best38,284.63.67 × 10103.65 × 10106.95 × 1093.2 × 10104.24 × 1010195,26879,403,252
worst89,752.624.46 × 10104.86 × 10101.53 × 10104.71 × 10105.63 × 101053,102,3002.98 × 108
median4,738,6885.04 × 10106 × 10102.16 × 10105.63 × 10106.33 × 10106.02 × 1081.29 × 109
F14mean3,357,85085,212,4691.02 × 10868,947,1801.51 × 1081.76 × 1085,119,03419,596,326
std2,371,16929,998,08034,349,98122,965,47591,411,20643,582,9253,686,51514,389,273
best942,121.931,318,90244,596,73315,755,74145,787,78888,132,3761,248,8443,347,050
worst2,366,67481,087,97389,294,63670,365,9011.21 × 1081.76 × 1084,347,84316,060,509
median10,420,1911.57 × 1081.82 × 1081.41 × 1083.93 × 1082.66 × 10818,256,52468,172,667
F15mean78,930.72.34 × 10102.51 × 10105.57 × 1092.41 × 10102.23 × 101013,072,25861,298,556
std219,072.83.02 × 1094.33 × 1091.12 × 1095.12 × 1095.04 × 10921,948,86179,951,870
best12,611.071.36 × 10101.51 × 10103.66 × 1091.27 × 10101.39 × 101052,749.1406,195.4
worst26,017.692.42 × 10102.57 × 10105.56 × 1092.41 × 10102.24 × 10103,704,24938,353,874
median1,220,0232.84 × 10103.21 × 10108.21 × 1093.38 × 10102.96 × 101077,559,8933.09 × 108
F16mean7420.48822,623.5825,147.8814,599.0826,320.2324,164.127951.0928898.95
std1072.3461437.1683156.294843.76332172.9111644.5611032.61284.48
best5570.59919,049.6619,358.8512,878.8919,940.4519,856.095912.5986313.505
worst7379.48722,827.8525,509.0314,565.1626,451.6824,062.327705.2678900.273
median9546.87925,581.1330,852.4916,755.6630,537.5327,797.6110,722.211,774.85
F17mean6750.2735,120,65811,568,26619,556.0818,571,3734,215,6148014.5459561.856
std607.1553,202,45410,391,7137302.89412,013,1872,973,7141475.5251472.375
best5389.3571,038,674719,282.411,665.35431,620.6253,709.65894.5886670.773
worst6734.3644,444,3749,134,41017,682.3419,581,0693,070,1167672.8099609.199
median7765.74411,556,60438,037,31239,841.9357,540,75210,549,13713,547.6912,806.76
F18mean7,110,3322.01 × 1083.2 × 1081.23 × 1082.6 × 1083.31 × 10813,751,66231,038,738
std2,779,52853,722,1891.74 × 10833,787,6321.06 × 1081.03 × 1088,773,56321,182,188
best2,437,01687,289,24538,115,40461,624,38267,995,5531.5 × 1082,827,9175,159,165
worst6,988,3791.97 × 1082.93 × 1081.18 × 1082.57 × 1083.23 × 10811,671,40729,795,728
median12,595,1113.26 × 1087.19 × 1081.98 × 1085.18 × 1085.3 × 10834,595,90088,488,020
F19mean161,629.12.24 × 10102.72 × 10105.25 × 1092.62 × 10102.45 × 101023,478,42782,678,768
std150,792.22.39 × 1094.2 × 1091.01 × 1093.76 × 1092.58 × 10918,582,03671,810,474
best17,306.41.72 × 10102.03 × 10103.31 × 1091.84 × 10101.95 × 1010235,256.21,656,899
worst129,3122.26 × 10102.76 × 10105.24 × 1092.62 × 10102.42 × 101019,932,31660,757,956
median770,506.82.66 × 10103.52 × 10106.7 × 1093.3 × 10103.02 × 101072,024,1843.6 × 108
F20mean6336.5727815.5547936.2658023.3338061.2418370.7037436.647428.492
std730.5444237.0489322.0247365.498302.5378229.1073532.3598657.3067
best4856.5397222.0147237.6837322.8117465.8927798.5545956.3236130.059
worst6494.8787841.197978.258028.8768092.7698389.8377606.7067453.894
median7347.2818218.6188376.7358735.58550.4188715.8948196.1528526.579
F21mean4091.6234753.5325054.3944138.9024835.8054827.723545.2874007.131
std202.3991117.0087215.807120.4908219.0194101.6755164.3183179.7956
best3706.6054460.9524449.1783924.6824400.5454541.9133209.0563581.077
worst4139.5994766.4475106.2114141.2524823.0884850.1113529.6983982.011
median4547.7384954.5465383.4774358.6115245.6054975.3273936.3634318.997
F22mean21,835.8634,804.5935,405.4235,512.3535,802.4935,768.7930,883.3929,201.22
std1578.262499.9623454.6997673.7102563.8343648.31025023.1914840.573
best18,345.0933,725.2134,380.0933,911.8334,512.6234,051.6821,219.4921,274.95
worst22,189.2534,807.935,414.2535,587.1835,893.7935,799.7433,186.5928,547.36
median24,528.1135,643.7336,446.6536,867.8636,636.7636,958.2735,643.2235,948.67
F23mean4522.3776128.4856706.2524726.0736666.5656624.2334458.0824862.778
std227.661206.8003311.8774144.3862298.6917208.8306334.0317183.4844
best4069.0155697.2215925.0374526.846167.8036166.973866.8584401.4
worst4541.9626160.6896759.4134682.8346672.5526635.9414401.4694880.059
median4985.7776464.2557374.6915104.4537176.8017003.2715053.4565160.213
F24mean5857.8269342.53710,841.565860.02111,614.2610,528.285506.5016235.145
std378.8069367.4904824.4215249.1441349.797673.2516544.6357411.0967
best5095.2728696.9128963.085409.1828609.489354.274438.2425502.835
worst5875.3819381.77410,730.265886.52411,526.3410,473.025462.2466121.079
median6719.06810,217.7612,762.66386.00113,823.6511,594.646438.3217131.038
F25mean3769.48927,856.3929,869.9829,312.0230,582.8185,464.155939.27510,182.26
std111.13211252.4482092.4533198.3631622.117145.521722.18187108.135
best3560.14425,277.2626,005.6921,133.6827,172.6466,896.084862.7694884.121
worst3761.92827,753.129,784.6429,816.0430,471.8287,164.455813.6347050.382
median4054.04930,542.9933,982.0435,071.933,326.4399,108.897745.36730,475.37
F26mean31,009.5250,936.9253,737.9843,732.1357,798.1463,818.1121,038.1426,878.31
std4110.5811176.9662254.5968990.1922389.7124546.233950.963906.133
best21,423.1448,574.3448,509.4130,908.4554,023.0756,471.5513,467.2719877.39
worst31,427.3950,935.7654,026.6444,323.9558,188.6563,397.9321,286.1626,315.97
median37,732.0553,129.9657,818.8758,175.3761,513.2874,691.4431,846.0633,541.61
F27mean4049.22512,738.0914,612.96633.67915,46612,099.814588.7844665.642
std237.415887.32291847.36462.8381183.631821.7461437.8651558.4765
best3622.84111,001.379956.1045859.0812,395.4410027.233820.7023734.622
worst4011.68512,881.1814,617.136603.60515,331.3112,476.574517.5364706.624
median4536.89714,634.9218,357.87681.48917,298.8113,150.65615.7086164.552
F28mean3928.90227,579.230,966.0132,360.4837,253.3353,180.926247.65618,748.06
std175.0026932.47871115.2622089.1061964.4363257.2891052.1546524.976
best3719.93525,065.0128,118.0925,031.7533,693.3144,544.134336.4866136.359
worst3912.31527,677.5131,323.8533,469.6537,559.3953,123.786163.07720,628.16
median4428.84129,593.0432,652.5634,467.540,517.0258,373.929215.41330,822.45
F29mean8480.565452,072.2705,440.136,007.73920,606.3856,0339526.91911,454.74
std1080.918180,448.5485,272.920,755.47470,669.8403,997.3898.08781669.909
best5932.0293,201.6272,902.4820,394.78258,530.2123,197.87963.6428446.091
worst8524.741435,448.8582,095.430,908.54882,911.5898,0839369.62911,355.29
median11,096.79718,608.22,136,675129,997.72,325,1862,057,81811,796.0815,180.8
F30mean3,917,7083.94 × 10104.42 × 10107.01 × 1094.13 × 10103.86 × 10101.43 × 1082.89 × 108
std1,861,6715.48 × 1096.62 × 1091.1 × 1095.24 × 1094.95 × 1091.37 × 1081.37 × 108
best1,659,8752.29 × 10102.86 × 10104.93 × 1092.49 × 10102.63 × 101011,050,31159,217,772
worst3,252,8834.16 × 10104.43 × 10107.01 × 1094.18 × 10103.97 × 101091,981,1432.85 × 108
median9,756,6744.64 × 10105.49 × 10109.38 × 1095.05 × 10104.56 × 10105.76 × 1086.86 × 108
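All entries in the benchmark tables above are aggregated from 30 independent runs per algorithm and test function. The following minimal sketch illustrates how such run statistics (mean, standard deviation, best, worst, median) can be computed; the function and variable names are illustrative only and are not taken from the authors' implementation.

```python
import numpy as np

def summarize_runs(final_fitness_values):
    """Summarize the final objective values of repeated independent runs (minimization)."""
    runs = np.asarray(final_fitness_values, dtype=float)
    return {
        "mean": float(runs.mean()),
        "std": float(runs.std(ddof=1)),   # sample std; the paper may use the population form
        "best": float(runs.min()),        # smaller is better for CEC2017 functions
        "worst": float(runs.max()),
        "median": float(np.median(runs)),
    }

# Example (illustrative): stats = summarize_runs([run_ecfdbo(seed=s) for s in range(30)])
```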
Table 5. Wilcoxon test (dim = 30).
Function | BWO vs. ECFDBO | COA vs. ECFDBO | PIO vs. ECFDBO | BOA vs. ECFDBO | NOA vs. ECFDBO | GODBO vs. ECFDBO | DBO vs. ECFDBO
F1 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 4.08 × 10−11 | 3.02 × 10−11
F3 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11
F4 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 1.25 × 10−5 | 1.01 × 10−8
F5 | 4.2 × 10−10 | 5.97 × 10−9 | 0.00062 | 5.97 × 10−9 | 4.08 × 10−11 | 1.55 × 10−9 | 0.001953
F6 | 3.02 × 10−11 | 3.34 × 10−11 | 0.007617 | 4.08 × 10−11 | 3.02 × 10−11 | 2.37 × 10−10 | 0.082357
F7 | 4.98 × 10−11 | 8.99 × 10−11 | 3.69 × 10−11 | 5.49 × 10−11 | 3.02 × 10−11 | 6.12 × 10−10 | 1.64 × 10−5
F8 | 7.39 × 10−11 | 2.87 × 10−10 | 3.16 × 10−10 | 2.61 × 10−10 | 3.02 × 10−11 | 5.27 × 10−5 | 0.958731
F9 | 0.000399 | 0.010763 | 0.039167 | 0.000526 | 3.69 × 10−11 | 1.96 × 10−10 | 1.11 × 10−6
F10 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 0.005828 | 0.021506
F11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 0.000318 | 2.37 × 10−10
F12 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 5.57 × 10−10 | 7.38 × 10−10
F13 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 2.61 × 10−10 | 7.39 × 10−11
F14 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 7.69 × 10−8 | 1.47 × 10−7
F15 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 4.42 × 10−6 | 1.31 × 10−8
F16 | 3.02 × 10−11 | 3.02 × 10−11 | 7.39 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 0.030317 | 0.000168
F17 | 3.34 × 10−11 | 4.08 × 10−11 | 4.18 × 10−9 | 3.02 × 10−11 | 3.69 × 10−11 | 0.332855 | 0.05012
F18 | 3.02 × 10−11 | 3.69 × 10−11 | 3.69 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 0.185767 | 0.000149
F19 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 0.00238 | 9.06 × 10−8
F20 | 6.53 × 10−8 | 9.26 × 10−9 | 3.35 × 10−8 | 2.23 × 10−9 | 3.5 × 10−9 | 0.53951 | 0.166866
F21 | 3.2 × 10−9 | 3.82 × 10−10 | 0.016285 | 4.2 × 10−10 | 6.07 × 10−11 | 3.65 × 10−8 | 0.3871
F22 | 1.69 × 10−9 | 3.82 × 10−10 | 0.003501 | 0.78446 | 1.61 × 10−10 | 0.006377 | 0.982307
F23 | 3.69 × 10−11 | 3.02 × 10−11 | 0.464273 | 1.78 × 10−10 | 3.02 × 10−11 | 0.001058 | 0.888303
F24 | 3.02 × 10−11 | 3.02 × 10−11 | 0.641424 | 3.02 × 10−11 | 3.02 × 10−11 | 0.070127 | 0.501144
F25 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.59 × 10−5 | 2.87 × 10−10
F26 | 4.08 × 10−11 | 3.02 × 10−11 | 0.035137 | 3.02 × 10−11 | 4.08 × 10−11 | 8.29 × 10−6 | 0.08771
F27 | 3.02 × 10−11 | 3.02 × 10−11 | 3.5 × 10−9 | 3.02 × 10−11 | 3.02 × 10−11 | 0.000268 | 0.000812
F28 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 1.1 × 10−8 | 2.37 × 10−10
F29 | 3.02 × 10−11 | 3.02 × 10−11 | 1.69 × 10−9 | 3.02 × 10−11 | 3.02 × 10−11 | 0.003848 | 0.251881
F30 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 8.48 × 10−9 | 5.19 × 10−7
Table 6. Wilcoxon test (dim = 50).
Function | BWO vs. ECFDBO | COA vs. ECFDBO | PIO vs. ECFDBO | BOA vs. ECFDBO | NOA vs. ECFDBO | GODBO vs. ECFDBO | DBO vs. ECFDBO
F1 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11
F3 | 3.02 × 10−11 | 3.34 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.69 × 10−11
F4 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 8.99 × 10−11 | 4.5 × 10−11
F5 | 8.1 × 10−10 | 3.82 × 10−9 | 3.82 × 10−10 | 8.48 × 10−9 | 3.02 × 10−11 | 1.31 × 10−8 | 0.297272
F6 | 3.02 × 10−11 | 3.34 × 10−11 | 6.52 × 10−9 | 3.34 × 10−11 | 3.02 × 10−11 | 3.69 × 10−11 | 0.347828
F7 | 7.39 × 10−11 | 3.34 × 10−11 | 3.34 × 10−11 | 6.07 × 10−11 | 3.02 × 10−11 | 7.12 × 10−9 | 2.02 × 10−8
F8 | 9.76 × 10−10 | 1.31 × 10−8 | 5.49 × 10−11 | 9.76 × 10−10 | 3.02 × 10−11 | 4.42 × 10−6 | 0.059428
F9 | 1.61 × 10−10 | 1.33 × 10−10 | 3.82 × 10−10 | 1.21 × 10−10 | 3.02 × 10−11 | 1.34 × 10−5 | 0.379036
F10 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 2.28 × 10−5 | 7.2 × 10−5
F11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11
F12 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 5.49 × 10−11 | 5.49 × 10−11
F13 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 8.99 × 10−11 | 3.02 × 10−11
F14 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.82 × 10−10 | 2.57 × 10−7
F15 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 1.55 × 10−9 | 6.07 × 10−11
F16 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 0.1809 | 0.004637
F17 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 0.085 | 0.002891
F18 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 0.015014 | 0.000284
F19 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 5.46 × 10−9 | 3.02 × 10−11
F20 | 1.87 × 10−7 | 1.1 × 10−8 | 5 × 10−9 | 4.57 × 10−9 | 1.78 × 10−10 | 0.122353 | 0.589451
F21 | 3.02 × 10−11 | 3.34 × 10−11 | 2.68 × 10−6 | 3.02 × 10−11 | 3.02 × 10−11 | 1.03 × 10−6 | 0.347828
F22 | 3.02 × 10−11 | 3.02 × 10−11 | 1.86 × 10−9 | 3.02 × 10−11 | 3.02 × 10−11 | 0.000526 | 0.002755
F23 | 3.69 × 10−11 | 3.02 × 10−11 | 0.149449 | 3.02 × 10−11 | 3.02 × 10−11 | 2 × 10−6 | 0.283778
F24 | 3.02 × 10−11 | 3.02 × 10−11 | 0.000587 | 3.02 × 10−11 | 3.02 × 10−11 | 0.000399 | 0.149449
F25 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 4.2 × 10−10 | 4.5 × 10−11
F26 | 2.87 × 10−10 | 4.98 × 10−11 | 9.26 × 10−9 | 4.08 × 10−11 | 3.02 × 10−11 | 2.03 × 10−9 | 4.08 × 10−5
F27 | 3.02 × 10−11 | 3.02 × 10−11 | 1.69 × 10−9 | 3.02 × 10−11 | 3.02 × 10−11 | 0.00077 | 0.004226
F28 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 2.15 × 10−10 | 3.02 × 10−11
F29 | 3.02 × 10−11 | 3.02 × 10−11 | 4.08 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 0.290472 | 7.74 × 10−6
F30 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 1.46 × 10−10 | 2.15 × 10−10
Table 7. Wilcoxon test (dim = 100).
Function | BWO vs. ECFDBO | COA vs. ECFDBO | PIO vs. ECFDBO | BOA vs. ECFDBO | NOA vs. ECFDBO | GODBO vs. ECFDBO | DBO vs. ECFDBO
F1 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11
F3 | 7.22 × 10−6 | 6.53 × 10−7 | 0.046756 | 0.162375 | 4.69 × 10−8 | 0.000141 | 0.994102
F4 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11
F5 | 6.28 × 10−6 | 4.12 × 10−6 | 6.01 × 10−8 | 1.09 × 10−5 | 3.02 × 10−11 | 0.549327 | 0.761828
F6 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.26 × 10−7 | 0.082357
F7 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 6.72 × 10−10 | 2.6 × 10−5
F8 | 1.41 × 10−9 | 1.55 × 10−9 | 8.15 × 10−11 | 5.46 × 10−9 | 3.02 × 10−11 | 2.15 × 10−6 | 0.520145
F9 | 3.26 × 10−7 | 2.2 × 10−7 | 2.61 × 10−10 | 8.48 × 10−9 | 3.02 × 10−11 | 2.6 × 10−5 | 0.001857
F10 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.34 × 10−11 | 4.62 × 10−10
F11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11
F12 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11
F13 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 2.37 × 10−10 | 3.02 × 10−11
F14 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 0.029205 | 1.29 × 10−9
F15 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.47 × 10−10 | 3.34 × 10−11
F16 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 0.051877 | 6.77 × 10−5
F17 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 2.43 × 10−5 | 2.61 × 10−10
F18 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 0.000587 | 1.6 × 10−7
F19 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 4.5 × 10−11 | 3.02 × 10−11
F20 | 4.98 × 10−11 | 4.5 × 10−11 | 3.34 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 7.69 × 10−8 | 1.11 × 10−6
F21 | 3.34 × 10−11 | 3.34 × 10−11 | 0.371077 | 4.08 × 10−11 | 3.34 × 10−11 | 1.33 × 10−10 | 0.111987
F22 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 4.57 × 10−9 | 2.92 × 10−9
F23 | 3.02 × 10−11 | 3.02 × 10−11 | 0.000301 | 3.02 × 10−11 | 3.02 × 10−11 | 0.411911 | 6.05 × 10−7
F24 | 3.02 × 10−11 | 3.02 × 10−11 | 0.982307 | 3.02 × 10−11 | 3.02 × 10−11 | 0.011228 | 0.000952
F25 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11
F26 | 3.02 × 10−11 | 3.02 × 10−11 | 9.06 × 10−8 | 3.02 × 10−11 | 3.02 × 10−11 | 3.2 × 10−9 | 0.000356
F27 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 8.2 × 10−7 | 3.32 × 10−6
F28 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.34 × 10−11 | 3.02 × 10−11
F29 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 0.000178 | 3.82 × 10−9
F30 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11 | 3.02 × 10−11
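The p-values in Tables 5–7 come from pairwise Wilcoxon rank-sum comparisons of the 30 run results of each competitor against those of ECFDBO on every function. The sketch below illustrates one way to carry out such a comparison with SciPy; the function name, the two-sided alternative, and the α = 0.05 threshold are illustrative assumptions rather than an exact reproduction of the test setup used here.

```python
from scipy.stats import ranksums

def wilcoxon_vs_ecfdbo(competitor_runs, ecfdbo_runs, alpha=0.05):
    """Two-sided rank-sum comparison of two samples of 30 final fitness values."""
    statistic, p_value = ranksums(competitor_runs, ecfdbo_runs)
    return p_value, p_value < alpha  # True -> the performance difference is significant
```

With 30 runs per side, the smallest p-value such a rank-sum test can report is on the order of 3 × 10−11, which is why values near 3.02 × 10−11 recur throughout Tables 5–7.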
Table 8. Friedman test in different dimensions.
Test Functions and Dimensions | Algorithm and the Friedman Test
Algorithm | ECFDBO | BWO | COA | PIO | BOA | NOA | GODBO | DBO
CEC2017-30D, Friedman | 2.1241 | 5.2759 | 6.2828 | 4.1517 | 6.3724 | 6.3034 | 2.3655 | 3.1241
CEC2017-30D, Rankings | 1 | 5 | 6 | 4 | 8 | 7 | 2 | 3
CEC2017-50D, Friedman | 2.0276 | 4.8690 | 6.0690 | 4.4621 | 6.4345 | 6.4897 | 2.5379 | 3.1103
CEC2017-50D, Rankings | 1 | 5 | 6 | 4 | 7 | 8 | 2 | 3
CEC2017-100D, Friedman | 1.9931 | 4.6414 | 5.8483 | 4.6483 | 6.2207 | 6.8138 | 2.4621 | 3.3724
CEC2017-100D, Rankings | 1 | 4 | 6 | 5 | 7 | 8 | 2 | 3
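Table 8 reports the average Friedman rank of each algorithm across the 29 benchmark functions at each dimensionality; a lower average rank indicates better overall performance, and ECFDBO ranks first in all three cases. A minimal sketch of the ranking step follows; it assumes ranks are assigned from the per-function mean fitness values (minimization), which may differ in detail from the exact ranking protocol used in the study.

```python
import numpy as np
from scipy.stats import rankdata

def friedman_average_ranks(mean_fitness_matrix):
    """Average rank per algorithm from a (n_functions, n_algorithms) matrix (minimization)."""
    matrix = np.asarray(mean_fitness_matrix, dtype=float)
    ranks = np.vstack([rankdata(row) for row in matrix])  # rank 1 = best on each function
    return ranks.mean(axis=0)                             # one average rank per algorithm
```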
Table 9. Comparison of fitness values of various algorithms.
Scenario | Metric | DBO | BWO | COA | PIO | BOA | NOA | GODBO | ECFDBO
Scenario 1 | mean | 2.2316 | 1.5763 | 1.4832 | 2.0817 | 2.6013 | 2.7279 | 1.2346 | 1.5688
Scenario 1 | best | 1.4468 | 1.5651 | 1.4044 | 1.7332 | 1.9031 | 2.6730 | 0.3057 | 0.7681
Scenario 2 | mean | 2.4209 | 1.5498 | 3.7070 | 3.0094 | 12.9084 | 9.0949 | 2.4308 | 1.5242
Scenario 2 | best | 2.0802 | 1.4630 | 2.1446 | 1.8516 | 9.2956 | 6.6315 | 1.7541 | 0.1660
Scenario 3 | mean | 1.7941 | 1.7472 | 1.8963 | 1.3048 | 2.9077 | 2.4024 | 1.7808 | 1.3504
Scenario 3 | best | 1.4477 | 1.707 | 1.7199 | 0.9515 | 2.0745 | 1.8298 | 1.1031 | 0.8294
Scenario 4 | mean | 2.3057 | 1.7660 | 2.5658 | 3.4131 | 7.1357 | 7.5692 | 2.9202 | 1.4131
Scenario 4 | best | 2.0574 | 1.7369 | 2.2404 | 2.5915 | 4.5966 | 5.4114 | 2.1725 | 0.8181
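The fitness values in Table 9 are the path costs returned by each algorithm in the four UAV simulation scenarios. As a purely illustrative sketch of such a cost, the snippet below scores a candidate waypoint sequence by total flight distance plus a penalty for entering obstacle safety zones; the cost terms, spherical obstacle model, and parameter values are assumptions for illustration and not the exact formulation defined earlier in the article.

```python
import numpy as np

def path_fitness(waypoints, obstacles, safe_radius=5.0, penalty=1e3):
    """Illustrative cost: path length plus penalties for violating obstacle safety zones."""
    pts = np.asarray(waypoints, dtype=float)               # shape (n_points, 3)
    length = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
    violation = 0.0
    for center, radius in obstacles:                       # obstacles as (center_xyz, radius)
        d = np.linalg.norm(pts - np.asarray(center, dtype=float), axis=1)
        violation += np.clip(radius + safe_radius - d, 0.0, None).sum()
    return length + penalty * violation                    # smaller is better
```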