Article

Hybrid Swarm Intelligence and Human-Inspired Optimization for Urban Drone Path Planning

by Yidao Ji 1, Qiqi Liu 2, Cheng Zhou 1, Zhiji Han 3 and Wei Wu 4,*

1 School of Mechanical Engineering, University of Science and Technology Beijing, Beijing 100083, China
2 Institute of Unmanned Systems, Beihang University, Beijing 100191, China
3 College of Engineering, Ocean University of China, Qingdao 266404, China
4 State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(3), 180; https://doi.org/10.3390/biomimetics10030180
Submission received: 8 February 2025 / Revised: 11 March 2025 / Accepted: 12 March 2025 / Published: 14 March 2025
(This article belongs to the Special Issue Nature-Inspired Metaheuristic Optimization Algorithms 2025)

Abstract: Urban drone applications require efficient path planning to ensure safe and optimal navigation through complex environments. Drawing inspiration from the collective intelligence of animal groups and electoral processes in human societies, this study integrates hierarchical structures and group interaction behaviors into the standard Particle Swarm Optimization algorithm. Specifically, competitive and supportive behaviors are mathematically modeled to enhance particle learning strategies and improve global search capabilities in the mid-optimization phase. To mitigate the risk of convergence to local optima in later stages, a mutation mechanism is introduced to enhance population diversity and overall accuracy. To address the challenges of urban drone path planning, this paper proposes an innovative method that combines a path segmentation and prioritized update algorithm with a cubic B-spline curve algorithm. This method enhances both path optimality and smoothness, ensuring safe and efficient navigation in complex urban settings. Comparative simulations demonstrate the effectiveness of the proposed approach, yielding smoother trajectories and improved real-time performance. Additionally, the method significantly reduces energy consumption and operation time. Overall, this research advances drone path planning technology and broadens its applicability in diverse urban environments.

1. Introduction

Over the past decade, drones have become an increasingly frequent presence in news reports and everyday activities, and public perception of them has shifted from novelty to normalcy. In civil applications in particular, drones are commonly seen in activities ranging from celebratory performances to security monitoring, and from power line inspections to package delivery. In other words, the effective use of drones not only significantly facilitates daily life but has also become one of the key pillars supporting the low-altitude economy [1,2,3,4]. Consequently, the capability to command drones to perform tasks with stability, efficiency, and economy has become a key technological objective for both academia and industry. Among the core technologies is path planning for drones in three-dimensional space [5,6,7], which addresses the challenge of generating a reasonable path that avoids obstacles and reaches a predetermined destination in complex, dynamic environments. This is a practical yet challenging task that has attracted numerous researchers to explore and optimize drone paths using optimization algorithms.
Zhou et al. tackled drone trajectory replanning by employing a gradient-based algorithm to optimize newly generated paths. They also proposed a path-guided optimization approach that combines efficient sampling-based topological path searching with parallel trajectory optimization to address local minima and path optimality [8]. Melo et al. investigated dynamic full-coverage path planning for drones by combining linear and heuristic optimization algorithms; they further estimated energy costs at different flight speeds and utilized fog-edge computing to reduce computational overhead [9]. Liu et al. developed a graph-based point-to-point path-planning algorithm that first generates two routes using the elliptic tangent graph method, then applies four heuristic rules to select an efficient, collision-free path [10]. Peng et al. addressed the planning of multiple waypoints along a path, formulating an objective function focused on energy-efficient offloading and safe path planning [11]. For unknown environments, Venkatasivarambabu et al. proposed a Dynamic Window approach that integrates Dynamic Programming and Probabilistic Route Mapping techniques to enhance drone navigation and localization [12]. Han et al. focused on optimizing the mapping process: they developed a modeling method based on grids subdivided by geographical coordinates and combined A* and backtracking path-planning algorithms to reduce the computational complexity of indoor drone path planning while improving planning reliability [13]. Chen et al. proposed a bilevel optimizer for rapid drone trajectory planning under constraints of variable time allocation and safety sets; it solves a low-level convex quadratic program and updates the high-level spatial and temporal waypoints [14]. Souto et al. explored optimization strategies using popular artificial intelligence techniques, employing simple Q-learning, ε-greedy methods, and a state–action–reward–state–action framework to optimize paths while considering urban terrain, weather, and drone energy consumption [15].
It is worth noting that optimization algorithms based on mathematical and computational theories offer quick convergence and low computational complexity when solving drone path-planning problems. However, for more complex path optimization problems with stronger robustness demands, bio-inspired optimization algorithms have gained greater attention. For example, Shivgan and Dong formulated the drone path-planning problem as a traveling salesman problem and used genetic algorithms to find routes that minimize energy consumption [16]. Yuan et al. also applied genetic algorithms to generate and optimize full-coverage scanning paths for drones; they used the Good Point Set Algorithm to generate initial populations and designed heuristic crossover operators and random interval inverse mutation operators to avoid local optima, ultimately achieving better flight efficiency [17]. Mesquita and Gaspar optimized drone patrol paths using Particle Swarm Optimization (PSO) to better monitor and deter birds, focusing on maximizing the random generation of paths and waypoints [18]. Huang, inspired by the dynamic divide-and-conquer strategy and the A* algorithm, improved the PSO algorithm by dividing complex planning problems into small-scale subproblems with fewer waypoints; the uniformity of particle expansion was evaluated, ultimately generating paths for drones in three-dimensional space [19]. Phung and Ha proposed an improved PSO method based on spherical vectors, describing paths between waypoints with vectors and considering constraints such as path length, threats, turning angles, and flight altitude to generate safety-enhanced flight paths [20]. Ying et al. introduced a Bayesian model on top of the genetic algorithm, accounting for the rules and general habits of seafarers, and showed that the resulting sea collision-avoidance routes are more effective than those of the pure genetic algorithm [21]. Cao et al. introduced a path replanning algorithm based on threat assessment using a dynamic Bayesian network, ensuring that unmanned underwater vehicles can adjust their paths to avoid danger when facing uncertain events [22].
Similar to PSO, Ant Colony Optimization (ACO) is another widely utilized optimization algorithm. Inspired by the foraging behavior of ants, ACO mimics these natural phenomena to solve optimization problems. Wan et al. modeled the three-dimensional path planning problem as a multi-objective, multi-constraint optimization problem. By improving modeling and search capabilities, the proposed ACO algorithm maintained both global and local search abilities and achieved a balanced and diverse Pareto solution set [23]. In [24], ACO was used to address path generation in complex environments, where specific workloads, capacity constraints, and mobility speeds were considered for each node. Comparative studies demonstrated the efficiency and feasibility of the improved ACO algorithm over conventional methods. The social structures and behaviors of certain species often inspire new optimization algorithm designs. For instance, Zhang et al. focused on the Grey Wolf Optimization Algorithm (GWOA), based on hierarchical predatory behavior. They introduced dynamic adjustment strategies for nonlinear convergence and weight coefficients, verifying the algorithm’s effectiveness in planning drone paths in complex environments [25]. Yu et al. combined the Differential Evolution algorithm with GWOA to enhance the exploration capability of path-planning algorithms. They modified GWO’s search strategies and adjusted DE’s mutation strategy based on ranking concepts [26]. Similarly, Jiang et al. proposed a dual-layer planning strategy that utilized a collision-avoidance speed controller based on a partially observable Markov decision process and improved GWOA using an enhanced communication mechanism and ε-level comparison for path planning [27].
Pan et al. improved the golden eagle optimizer by integrating personal example learning and mirror reflection learning strategies into the algorithm framework, effectively enhancing the efficiency of drone routes for power inspections [28]. Zhang et al. focused on Harris Hawks Optimization, introducing Cauchy mutation strategies, adaptive weights, and the Sine–Cosine Algorithm to improve algorithm performance [29]. Shen et al. adopted strategies such as the beta distribution, the Levy distribution, and two different crossover operators to enhance the dung beetle optimizer [30]. Interestingly, the behavioral patterns of slime molds also serve as references for optimization algorithms: Abdel-Basset et al. used Pareto optimality to balance multiple objective functions in drone path planning, thereby improving the performance of a hybridized slime mold algorithm [31]. Relationships between behaviors can also suggest optimization pathways. For example, Zu et al. drew inspiration from the hunter–prey optimization algorithm to enhance the rapid path-planning capabilities of drones, designing a chaotic mapping model and a golden sine strategy to improve algorithm updates and convergence speed [32]. Zhang et al. improved the search and rescue optimization algorithm, which is easy to apply but converges slowly in drone path planning; their main improvements were a heuristic crossover strategy and a real-time path adjustment strategy [33].
Drawing inspiration from biological behaviors and strategies enhances drone path-planning algorithms, improving their ability to handle multi-objective and multi-constraint scenarios while mimicking biological intelligence. The main contributions of this study are summarized as follows:
  • Inspired by the collective intelligence of animal groups and electoral processes in human societies, this study introduces hierarchical structures and group interaction behaviors into the standard PSO algorithm. Specifically, competitive and supportive behaviors are mathematically modeled, significantly enhancing the learning strategies of particles and improving the algorithm's global search capability during its mid-term optimization stage.
  • To prevent the algorithm from falling into local optima during the later stages of optimization, a mutation mechanism is introduced. This enhancement further improves the diversity of the population, thereby increasing the overall accuracy of the improved PSO algorithm.
  • To address the challenges in drone path planning, this paper proposes an innovative method that integrates a path segmentation and prioritized update algorithm with a cubic B-spline curve algorithm. These methods effectively improve the optimality and smoothness of the generated paths, ensuring safe navigation for drones in complex urban environments. Additionally, the proposed approach outperforms other swarm optimization algorithms in terms of path length.
The remainder of this paper is organized as follows: Section 2 introduces the standard PSO algorithm; details of our algorithm enhancements are presented in Section 3; simulation and analysis of the improved algorithm are discussed in Section 4; the application of the improved algorithm in drone urban path planning is presented in Section 5; and Section 6 summarizes the research and future directions of this paper.

2. The Standard Particle Swarm Optimization (PSO) Algorithm

PSO is a swarm intelligence optimization algorithm inspired by the foraging behavior of bird flocks. Its core idea is to model the food locations searched by the flock as potential solutions to a problem, to abstract each bird as a particle with neither mass nor volume, and to treat the continuous, interaction-driven movement of the particles as a search for the optimal solution in a multi-dimensional solution space. Owing to its simplicity, efficiency, and ease of implementation, PSO has gained widespread attention for solving various optimization problems [34].
In the standard PSO algorithm, each particle represents a potential solution to the optimization task. During each iteration, each particle adjusts its position based not only on the best position it has discovered itself (pbest) but also on the best position found by the entire swarm (gbest). Through this mechanism, the algorithm continuously explores the solution space until the optimal solution is found. The basic workflow is as follows. First, set the algorithm's parameters. Second, randomly initialize the position and velocity of each particle in the swarm. Third, calculate each particle's fitness value, which measures the quality of the solution it has found, and update pbest and gbest whenever the new fitness value is better. Then, update each particle's velocity and position using the update equations involving pbest and gbest. Finally, repeat the calculation and update steps until a predefined stopping criterion is met, such as reaching the maximum number of iterations or achieving a solution quality within a specified threshold.
In the above algorithmic process, the particle update phase is the most critical step. Mathematically, it can be described as follows. For a $D$-dimensional optimization problem, each particle has two numerical characteristics: a velocity vector $V_i = [v_{i1}, v_{i2}, \ldots, v_{iD}]$ and a position vector $X_i = [x_{i1}, x_{i2}, \ldots, x_{iD}]$, where $V_i$ and $X_i$ belong to the $i$-th particle, and $v_{ij}$ and $x_{ij}$ denote the velocity and position of the $j$-th dimension of the $i$-th particle, respectively. During the search process in the solution space, each particle updates its velocity and position according to the following equations:

$$
\begin{aligned}
v_{ij}^{t+1} &= \omega v_{ij}^{t} + c_1 r_1 \left(pbest_{ij}^{t} - x_{ij}^{t}\right) + c_2 r_2 \left(gbest_{j}^{t} - x_{ij}^{t}\right) \\
x_{ij}^{t+1} &= x_{ij}^{t} + v_{ij}^{t+1}
\end{aligned}
$$

where $t$ denotes the current iteration number; $v_{ij}^{t}$ and $x_{ij}^{t}$ denote the velocity and position of the $j$-th dimension of the $i$-th particle at the $t$-th iteration; $\omega$ denotes the inertia weight of the particle; $c_1$ and $c_2$ denote the cognitive and social learning factors, both positive real numbers; $r_1$ and $r_2$ are random numbers uniformly distributed in $[0, 1]$; $pbest_{ij}^{t}$ denotes the $j$-th component of the pbest of the $i$-th particle at the $t$-th iteration; and $gbest_{j}^{t}$ denotes the $j$-th component of the gbest at the $t$-th iteration.
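As a concrete reference, the workflow and update equations above can be sketched in a minimal standard PSO loop. This is an illustrative sketch only: the sphere-function usage and the parameter values ($\omega = 0.7$, $c_1 = c_2 = 1.5$) are common defaults, not values taken from this paper.

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0):
    """Minimal standard PSO minimizing f over [lo, hi]^dim."""
    # randomly initialize positions and zero velocities
    X = [[random.uniform(lo, hi) for _ in range(dim)]
         for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_fit = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]

    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + cognitive + social terms
                V[i][j] = (w * V[i][j]
                           + c1 * r1 * (pbest[i][j] - X[i][j])
                           + c2 * r2 * (gbest[j] - X[i][j]))
                X[i][j] += V[i][j]              # position update
            fit = f(X[i])
            if fit < pbest_fit[i]:              # personal best improved
                pbest[i], pbest_fit[i] = X[i][:], fit
                if fit < gbest_fit:             # global best improved
                    gbest, gbest_fit = X[i][:], fit
    return gbest, gbest_fit
```

For a minimization objective `f`, `pso(f, dim)` returns the best position found and its fitness; on a smooth test function such as the sphere function, the loop converges toward the origin.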

3. Improved Particle Swarm Optimization Algorithm

3.1. Electoral Process

3.1.1. Introduction of Electoral Process

According to numerous studies, the topology of the population in a swarm intelligence algorithm, i.e., the connection structure among all individuals, determines how information is exchanged within the swarm, which indirectly affects performance metrics such as convergence speed and accuracy. Likewise, the learning strategy of individuals, i.e., how each individual updates its position or velocity based on the information it receives, determines the balance between exploration (finding new solutions) and exploitation (improving current solutions) in the search space, which directly affects performance. From this perspective, the standard PSO algorithm has the following shortcomings. First, each particle relies too heavily on the current global best, which may cause premature convergence to the current global best, i.e., a local optimum in the solution space. Second, because every particle interacts with all others through a fully connected topology, the diversity of solutions (particles) can shrink rapidly during optimization, i.e., the population homogenizes quickly, which degrades convergence accuracy. Third, the topology is static and cannot adapt to the state of the optimization, so the search ability of the algorithm may decline as the run progresses. Finally, regarding the learning strategy, the standard PSO algorithm updates all velocities and positions with a single fixed formula, which may fail to balance exploration and exploitation effectively and causes the search efficiency to decrease gradually.
To address these limitations of the standard PSO algorithm and expand its applications in fields such as path planning, this study proposes an improved PSO algorithm that introduces an electoral process. The mechanism draws inspiration from human electoral processes and their social interaction behaviors, including periodically electing, leading, competing, and supporting. By refining the algorithm's topological structure and learning strategies, these behaviors are simplified and incorporated into the algorithmic framework, with the aim of enhancing the algorithm's search capability and accuracy. The specific improvements are detailed below.

3.1.2. Hierarchical Structure

To emulate the social interaction behaviors observed in human electoral processes, this study first introduces a hierarchical structure that classifies the particle population of the Improved Particle Swarm Optimization (IPSO) algorithm. Based on their fitness values, the particles are divided into four tiers: leader, candidates, voters, and followers, as illustrated in Figure 1. The leader is the particle with the lowest fitness value in the population (for a minimization problem), shown as the red figure in Figure 1. The candidates are the N particles whose fitness values are second only to the leader's, shown as the orange figures. The remaining particles are voters and followers, shown as the black and blue figures, respectively, and are distinguished according to their support rates.
The leader is the particle with the lowest fitness value in the population and represents the global optimal solution found so far. The leader provides a clear search target for the entire population, guiding all particles closer to the optimal solution, which is the key to algorithm convergence, similar to the current global optimal experience (gbest) in the standard PSO algorithm.
The candidates are the particles whose fitness values are second only to the leader's, representing multiple local optima. They are the best remaining individuals in the population, the targets of the voters' support, and potential successors to the leader. Retaining candidates preserves the suboptimal solutions found so far, maintaining population diversity while increasing the exploitation of those solutions and thereby improving search efficiency.
The voters form the largest subgroup in the population and are the most important one in the IPSO algorithm. Guided by the leader and candidates, they explore the search space extensively for possible solutions. In addition, these particles adjust their support for the candidates in the tier above according to changes in their own fitness values, and then adjust their search direction according to whom they support and by how much, thereby influencing the search direction of the entire population. The diversity of the voters and their variable search behaviors are key to the IPSO algorithm avoiding premature convergence and maintaining search breadth.
The followers are a subgroup of the voters whose support rates for the candidates exceed a predefined threshold. They have absolute loyalty to the candidates they follow and no longer support other candidates. The presence of followers reinforces the focused search for good candidates (exploitation).
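The four-tier classification described above can be sketched as follows. This is an illustrative reconstruction: the function name, the support-rate representation, and the follower threshold of 80 (mentioned later in Section 3.1.4) are assumptions, not code from the paper.

```python
def classify_tiers(fitness, n_candidates, support, threshold=80.0):
    """Split particle indices into the four IPSO tiers (minimization).

    fitness      : fitness value per particle
    n_candidates : number of candidate particles N
    support      : support[i] = support rate of particle i for its candidate
    """
    # rank particles by fitness, best (lowest) first
    ranked = sorted(range(len(fitness)), key=lambda i: fitness[i])
    leader = ranked[0]                       # single best particle
    candidates = ranked[1:1 + n_candidates]  # N next-best particles
    rest = ranked[1 + n_candidates:]
    # particles whose support rate reaches the threshold become followers
    followers = [i for i in rest if support.get(i, 0.0) >= threshold]
    voters = [i for i in rest if support.get(i, 0.0) < threshold]
    return leader, candidates, voters, followers
```

The split of the remaining particles into voters and followers depends only on the support-rate table, so the same particle can move between the two tiers as its support rates change.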

3.1.3. Interaction Behaviors in Electoral Process

Human electoral processes comprise many complex stages, such as periodic elections, open campaigns, and voting, each involving a variety of interaction behaviors. To make the electoral process tractable, this study simplifies and summarizes these behaviors into four: periodically electing, leading, competing, and supporting. These behaviors are shown in Figure 1.
The behavior of periodically electing. This behavior re-elects the leader and candidates at a fixed period. It ensures that the leader and candidates always represent the global optimum and the suboptimal solutions found so far, so that the search direction of the entire population is updated in time. The arrows Reelect, Elect1, and Elect2 in Figure 1 represent this behavior. Specifically, the Elect1 process first sorts the voters and followers by fitness and selects the top N particles to compare against the current candidates; any voter whose fitness is better than that of a candidate is promoted to candidate. At the same time, the support rates of the replaced candidates' followers are reset to the initial support rate, i.e., those followers revert to voters. The Elect2 process then selects the best particle among the updated candidates as the new leader, guiding the entire population toward the optimal solution. The Reelect process periodically triggers Elect1 and Elect2 to update the leader and candidates. This dynamic electoral process not only gives the population an up-to-date global search direction but also enhances population diversity, helping the algorithm escape local optima and improving its ability to find the global optimum.
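One election round (Elect1 followed by Elect2) can be sketched as below. This is a sketch under stated assumptions: the support-rate table layout and the initial rate of 50 are illustrative placeholders, not values from the paper.

```python
def elect(fitness, candidates, support, init_rate=50.0):
    """One election round for a minimization problem (illustrative sketch).

    fitness    : list of fitness values, one per particle index
    candidates : current candidate indices
    support    : dict voter_index -> {candidate_index: support_rate}
    Returns (new_leader, new_candidates).
    """
    n = len(candidates)
    # Elect1: the n best particles overall become the new candidates
    ranked = sorted(range(len(fitness)), key=lambda i: fitness[i])
    new_candidates = ranked[:n]
    # followers of any replaced candidate revert to voters:
    # their support rates are reset to the initial value
    replaced = set(candidates) - set(new_candidates)
    for rates in support.values():
        for m in rates:
            if m in replaced:
                rates[m] = init_rate
    # Elect2: the best of the updated candidates becomes the leader
    new_leader = min(new_candidates, key=lambda i: fitness[i])
    return new_leader, new_candidates
```

Calling `elect` on a schedule (the Reelect period) keeps the leader and candidate set synchronized with the best solutions found so far.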
The behaviors of leading and competing. In the electoral-process PSO proposed in this paper, these are the key interaction behaviors. Leading is reflected in how the voters follow the candidates according to their fitness values, and in how all particles follow the leader; it is represented by the three Lead arrows in Figure 1. Specifically, the orange Lead arrow indicates that in each iteration the voters follow the candidates according to the support rates, i.e., the candidates lead the voters, while the two red Lead arrows indicate that the leader leads the candidates, voters, and followers. Leading provides a clear search direction for the population. Competing is embodied in the candidates, each of which decides whether to introduce a competition term with a certain probability called the competition rate; it is represented by the Compete arrow in Figure 1. The competition term is designed as the average position of the other candidates, similar to a center of mass, so that each candidate moves away from the others; this is key to simulating the individual differences and dynamic changes of social competition. Competing pushes candidates away from the average position of the other candidates, increasing the likelihood of exploring new areas and thereby enhancing population diversity. Through this combination of leading and competing, the algorithm can effectively simulate a social hierarchy and competition mechanism, enhancing its search ability and adaptability.
The behavior of supporting. This behavior is one of the cores of dynamic population interaction and the key basis for the dynamic conversion between voters and followers. Supporting means that voters and followers adjust their search direction based on their degree of support for the candidates, represented by the Support arrow in Figure 1. This degree of support is quantified by the support rate, and the voters and followers adjust their support rates for the candidates according to the changes in their own fitness values after each iteration. For voters, this behavior makes the population respond more flexibly to dynamic changes: better candidates attract more voters, facilitating a concentrated search and accelerating convergence. At the same time, it increases population diversity, since different voters may support different candidates, splitting the search into multiple directions and reducing the risk of falling into local optima. For followers, supporting strengthens the focused search around good candidates, allowing the algorithm to probe identified promising areas more deeply and improving search efficiency.

3.1.4. Competition Rate and Support Rate

According to the description of the interaction behaviors above, competing is a crucial behavior. To simulate it realistically, this paper introduces the competition rate, which controls whether the competition term is introduced when updating the candidates. Specifically, in each iteration, each candidate introduces the competition term with a probability determined by the competition rate. This adds randomness to the behavior of competing, making it better aligned with real election processes. The competition rate and the indicator for introducing the competition term are computed as follows:
$$
\begin{aligned}
R_c^{t} &= R_{c,\min} + \left(R_{c,\max} - R_{c,\min}\right)\frac{cnt_{u,\max} - cnt_u^{t}}{cnt_{u,\max}} \\
f_c^{t} &= \begin{cases} 1, & \mathrm{rand} \le R_c^{t} \\ 0, & \text{otherwise} \end{cases}
\end{aligned}
$$

where $R_c^t$ denotes the magnitude of the competition rate at the $t$-th iteration; $R_{c,\min}$ and $R_{c,\max}$ denote the minimum and maximum competition rates; $cnt_u^t$ denotes the number of iterations for which the leader has remained unchanged as of the $t$-th iteration, and $cnt_{u,\max}$ denotes the maximum number of iterations for which the leader may remain unchanged; rand is a random number uniformly distributed in $[0, 1]$; and $f_c^t$ indicates whether the competition term is introduced at the $t$-th iteration.
It can be seen from the above formula that the competition rate proposed in this paper is dynamically adjusted based on the operation of the algorithm. Specifically, the competition rate is inversely proportional to the number of iterations during which the leader remains unchanged. When the leader has not changed for a long time, the competition rate increases to promote the exploration behavior of the population. When the leader is frequently updated, the competition rate decreases to accelerate the convergence speed. Through this dynamically adjusted competition rate, it is possible to balance the exploration and exploitation behaviors of the population, thereby improving the search efficiency of the algorithm and the quality of the solutions. At the same time, the introduction of the competition rate allows the algorithm to adaptively adjust its search strategy to cope with complex optimization problems and dynamically changing search environments.
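The competition-rate computation can be sketched as follows. The bound values $R_{c,\min} = 0.1$ and $R_{c,\max} = 0.9$ are assumed placeholders, not parameters reported by the paper, and the formula follows the reconstruction given above.

```python
import random

def competition_rate(cnt_u, cnt_u_max, rc_min=0.1, rc_max=0.9):
    """Competition rate R_c: linear interpolation between rc_min and rc_max
    driven by how long the leader has remained unchanged (cnt_u out of
    cnt_u_max)."""
    return rc_min + (rc_max - rc_min) * (cnt_u_max - cnt_u) / cnt_u_max

def competition_flag(cnt_u, cnt_u_max, rc_min=0.1, rc_max=0.9):
    """Indicator f_c: 1 with probability R_c, else 0."""
    rc = competition_rate(cnt_u, cnt_u_max, rc_min, rc_max)
    return 1 if random.random() <= rc else 0
```

Each candidate would call `competition_flag` once per iteration to decide whether its velocity update includes the competition term.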
On the other hand, a key element of the behavior of supporting is the degree of support that the voters and followers give to the candidates. To quantify it, this paper introduces the support rate, which ranges from 0 to 100. Once the support rate is introduced, the behavior of supporting reduces to the dynamic change of the voters' and followers' support rates. Specifically, after each iteration, each voter updates its support rate for its assigned candidate based on the change in its own fitness value: if the fitness improves, the support rate for the current candidate is increased; otherwise, it is decreased. When a voter's support rate for a certain candidate exceeds a predefined threshold (e.g., 80), that voter becomes a follower of the candidate, its support rate is fixed at 100, and its support rates for the other candidates drop to 0 and are no longer updated. The update of the support rate can be described by the following formula:
$$
R_{s,i,m}^{t+1}=\begin{cases} R_{s,i,m}^{t}+R_{s,fixed}, & fit_i^{t+1}\le fit_i^{t}\\ R_{s,i,m}^{t}-R_{s,fixed}, & \text{else} \end{cases}
\qquad
R_{s,i,m}^{t+1}=\max\big(\min\big(R_{s,i,m}^{t+1},100\big),0\big)
$$

where $R_{s,i,m}^{t}$ denotes the support rate of the $i$th voter for the randomly assigned $m$th candidate during the $t$th iteration, ranging from 0 to 100. $R_{s,fixed}$ denotes the change in the support rate per update, and $\max(a,b)$ and $\min(a,b)$ return the larger and smaller of $a$ and $b$, respectively.
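As a minimal sketch, the support-rate update of Equation (3) can be written as follows (assuming a minimization problem, so a fitness decrease counts as an improvement):

```python
def update_support_rate(rate, fit_prev, fit_new, step):
    """Support-rate update of Eq. (3): reward the assigned candidate when
    the voter's fitness improved (minimization), penalize it otherwise,
    then clamp the rate to [0, 100]."""
    rate = rate + step if fit_new <= fit_prev else rate - step
    return max(min(rate, 100.0), 0.0)
```

A voter whose clamped rate reaches the threshold (e.g., 80) would then be promoted to a follower, as described above.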

3.1.5. Learning Strategy

To introduce the electoral process into the standard PSO algorithm, thereby simulating the human interaction behaviors in the election processes, this paper makes targeted improvements to the learning strategies of particles at different levels based on the original standard PSO algorithm, as detailed below.
The learning strategy of the leader. As the optimal individual in the population, the leader’s position represents the current global best solution and is therefore usually not updated during each iteration.
The learning strategy of the candidates. As the group that implements the competing behavior, the candidates use a learning strategy that considers not only their own positions and the leader’s position but also a competition term that models individual differences in social competition. The velocity and position update formulas for the candidates are as follows:
$$
e_{c,i}^{t}=\frac{1}{N-1}\sum_{m=1,\,m\ne i}^{N} x_m^{t}
$$
$$
v_i^{t+1}=\omega v_i^{t}+c_1 r_1\big(pbest_i^{t}-x_i^{t}\big)+c_2 r_2\big(x_{leader}^{t}-x_i^{t}\big)-f_c^{t}\,c_3 r_3\big(e_{c,i}^{t}-x_i^{t}\big)
$$
$$
x_i^{t+1}=x_i^{t}+v_i^{t+1}
$$

where $t$ denotes the current iteration number. $e_{c,i}^{t}$ denotes the competition centroid of the candidates other than the $i$th candidate during the $t$th iteration. $N$ denotes the number of candidates. $x_i^{t}$ and $v_i^{t}$ denote the position and velocity of the $i$th candidate during the $t$th iteration, respectively. $\omega$, $c_1$, $c_2$, and $c_3$ denote the inertia factor, individual cognitive learning factor, social cognitive learning factor, and competition factor, respectively, all of which are positive real numbers. $r_1$, $r_2$, and $r_3$ are random numbers uniformly distributed in the range $[0,1]$. $pbest_i^{t}$ denotes the individual best position vector of the $i$th candidate during the $t$th iteration. $x_{leader}^{t}$ denotes the position vector of the leader during the $t$th iteration. $f_c^{t}$ denotes the coefficient of the competition term during the $t$th iteration.
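A sketch of the candidate update of Equation (4) is given below; the repulsive sign on the competition term (driving a candidate away from the centroid of its rivals) is our reading of the formula:

```python
import numpy as np

def candidate_update(i, X, V, pbest, x_leader, w, c1, c2, c3, fc, rng):
    """Candidate velocity/position update (Eq. (4)): standard PSO terms
    plus a competition term that repels candidate i from the centroid
    of the other candidates."""
    e_c = np.delete(X, i, axis=0).mean(axis=0)   # competition centroid
    r1, r2, r3 = rng.random(3)
    v_new = (w * V[i]
             + c1 * r1 * (pbest[i] - X[i])
             + c2 * r2 * (x_leader - X[i])
             - fc * c3 * r3 * (e_c - X[i]))
    return X[i] + v_new, v_new
```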
The learning strategy of the voters. Since the voters are the key group for implementing the behavior of support in the algorithm, the learning strategy of the voters is extended from the standard PSO algorithm by adding a support term, so that the voters not only follow the leader but also perform personalized searches based on their own support rates for the candidates. In summary, the following are the velocity and position update formulas for the voters:
$$
v_i^{t+1}=\omega v_i^{t}+c_1 r_1\big(pbest_i^{t}-x_i^{t}\big)+c_2 r_2\big(x_{leader}^{t}-x_i^{t}\big)+0.01\,R_{s,i,m}^{t}\,c_3 r_3\big(x_{cand_m}^{t}-x_i^{t}\big)
$$
$$
x_i^{t+1}=x_i^{t}+v_i^{t+1}
$$

where $x_i^{t}$ and $v_i^{t}$ denote the position and velocity of the $i$th voter during the $t$th iteration, respectively. $m$ is the index of the candidate randomly assigned to the voter during this iteration. $R_{s,i,m}^{t}$ denotes the support rate of the $i$th voter for the $m$th candidate during the $t$th iteration, ranging from 0 to 100. $c_3$ denotes the support factor. $r_3$ is a random number uniformly distributed in the range $[0,1]$. $x_{cand_m}^{t}$ denotes the position of the randomly assigned $m$th candidate.
The learning strategy of the followers. The followers are voters whose support rate for a certain candidate has exceeded the predefined threshold, making them a special group in the algorithm for implementing the supporting behavior. Their learning strategy is similar to that of the voters, but since their support rate is fixed, it is no longer updated. The velocity and position update formulas for the followers are as follows,
$$
v_i^{t+1}=\begin{cases}
\omega v_i^{t}+c_1 r_1\big(pbest_i^{t}-x_i^{t}\big)+c_2 r_2\big(x_{leader}^{t}-x_i^{t}\big)+c_3 r_3\big(x_{cand_m}^{t}-x_i^{t}\big), & m=k\\
\omega v_i^{t}+c_1 r_1\big(pbest_i^{t}-x_i^{t}\big)+c_2 r_2\big(x_{leader}^{t}-x_i^{t}\big), & m\ne k
\end{cases}
$$
$$
x_i^{t+1}=x_i^{t}+v_i^{t+1}
$$

where $x_i^{t}$ and $v_i^{t}$ denote the position and velocity of the $i$th follower during the $t$th iteration, respectively. $k$ denotes the index of the candidate that the current follower is following, so $m=k$ indicates that the randomly assigned candidate is the one being followed, with a support rate of 100. Otherwise, the support rate is 0.
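Under the same reading, the voter update of Equation (5) and the follower update of Equation (6) can be sketched together. Note that the voter's attraction to its assigned candidate is scaled by 0.01 times its support rate, so a follower (rate fixed at 100) recovers the plain $c_3$ term:

```python
import numpy as np

def voter_update(x, v, pbest, x_leader, x_cand, support, w, c1, c2, c3, rng):
    """Voter update (Eq. (5)): PSO terms plus a support term weighted by
    0.01 * support rate, with support in [0, 100]."""
    r1, r2, r3 = rng.random(3)
    v_new = (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (x_leader - x)
             + 0.01 * support * c3 * r3 * (x_cand - x))
    return x + v_new, v_new

def follower_update(x, v, pbest, x_leader, x_cand, m, k, w, c1, c2, c3, rng):
    """Follower update (Eq. (6)): the candidate term applies only when the
    randomly assigned candidate m is the followed candidate k."""
    r1, r2, r3 = rng.random(3)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (x_leader - x)
    if m == k:
        v_new = v_new + c3 * r3 * (x_cand - x)
    return x + v_new, v_new
```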

3.1.6. Pseudocode of the Electoral Process

The pseudocode of the electoral process is shown in Algorithm 1.
Algorithm 1: The electoral process
1:For each voter or follower
2:        Update the voter or follower according to Equation (5) or Equation (6).
3:        Calculate the individual fitness with the custom cost function and update the individual pbest.
4:        Update the support rate according to Equation (3).
5:        If the support rate of the voter exceeds the predefined threshold, then change this voter to a follower.
6:End
7:Calculate the competition rate according to Equation (2).
8:For each candidate
9:        If random value <= competition rate
10:                Update the candidate with competition according to Equation (4).
11:        Else
12:                Update the candidate without competition according to Equation (4).
13:        End
14:        Calculate the individual fitness with the custom cost function and update the individual pbest.
15:End
16:Reselect the leader and candidates.
17:If any candidate changes, then change its followers to voters.

3.2. Mutation Mechanism

3.2.1. Introduction of Mutation Mechanism

The PSO algorithm with the introduced election mechanism not only effectively preserves suboptimal solutions but also significantly improves the diversity of the population, enabling a broader search for solutions in the early and middle stages of optimization. However, upon further consideration, the voters gradually converge as the optimization process progresses, and since candidates are elected from voter particles, this may inevitably cause the optimal and suboptimal solutions in the population to gradually converge in the later stages of optimization, potentially leading to being trapped in a local optimum.
Therefore, this paper further introduces a mutation mechanism based on the previous improvements, inspired by the genetic mutation process in biological evolution. This improvement aims to enhance population diversity and further improve algorithm accuracy by embedding the mutation mechanism in the update processes of the voters and followers, directly mutating the positions of particles.

3.2.2. Mutation Rate

Although the introduction of the mutation mechanism significantly enhances population diversity and thereby improves algorithm accuracy, it also increases the proportion of exploration during the optimization process, resulting in a slower convergence speed. Therefore, when applying the mutation mechanism, it is essential to consider how to balance the accuracy and speed of the algorithm, which essentially involves designing the degree of mutation in the population during the optimization process.
To quantitatively describe and design the degree of mutation, this paper introduces the concept of a mutation rate, which represents the probability of particles undergoing mutation during the optimization process. Considering that the demand for solution exploration differs between the early and later stages of the optimization process, a method for dynamically adjusting the mutation rate is designed. Specifically, the mutation rate decreases as the number of iterations increases: the smaller the iteration number, the larger the mutation rate, and vice versa. The specific formula for dynamic adjustment of the mutation rate is
$$
R_m^{t}=R_{m,\min}+\big(R_{m,\max}-R_{m,\min}\big)\,\frac{(iterations-i)^{N_m}}{iterations^{N_m}}
$$

where $R_m^{t}$ denotes the mutation rate at the $t$th iteration. $R_{m,\min}$ and $R_{m,\max}$ denote the minimum and maximum mutation rates during the optimization process, respectively. $i$ and $iterations$ denote the current iteration number and the maximum number of iterations. $N_m$ is a positive integer representing the exponent, typically set between 3 and 5.
According to the above formula, the proposed mutation rate decays with the $N_m$th power of the fraction of iterations remaining. This means that in the early stages of the optimization process, particles are more likely to undergo mutation, increasing the algorithm’s exploratory capability. In the later stages, the mutation rate gradually decreases, and the algorithm transitions to fine-tuning and exploiting existing solutions, improving the convergence speed. This dynamic adjustment of the mutation rate accelerates convergence while maintaining the algorithm’s exploratory capability, achieving a balance between algorithm accuracy and speed.
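The dynamic mutation rate of Equation (7) is straightforward to implement:

```python
def mutation_rate(i, iterations, r_min, r_max, n_m=4):
    """Mutation rate of Eq. (7): decays polynomially (exponent n_m) from
    r_max at the first iteration toward r_min at the last."""
    return r_min + (r_max - r_min) * ((iterations - i) / iterations) ** n_m
```

With $N_m$ between 3 and 5, the rate drops quickly after the early exploratory phase, matching the intended shift from exploration to exploitation.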

3.2.3. Dynamic Mutation Strategy

For the mutation strategy, the simplest single mutation strategy is considered first. However, this paper argues that a single mutation method is not conducive to solving complex and variable optimization problems. Therefore, a dynamic mutation strategy is proposed. This strategy selects different mutation methods according to specific rules during each mutation in the optimization process, thereby enhancing the algorithm’s adaptability.
The first mutation strategy: Random Mutation. This mutation strategy is triggered with a relatively small probability. Its core idea is to completely randomize the particle’s position, akin to a “reinitialization” in the search space. In this study, it was found that this strategy is suitable for situations where the gaps between local optima are relatively large, helping particles escape the current region and explore new search spaces. However, it is more likely to affect the convergence speed. Specifically, after mutation, the particle’s position is set to a random location between the boundary’s minimum and maximum values, mathematically expressed as follows:
$$
x_i^{t+1}=b_{\min}+rand\cdot\big(b_{\max}-b_{\min}\big)
$$

where $b_{\min}$ and $b_{\max}$ denote the boundary’s minimum and maximum values, while $rand$ is a random number uniformly distributed in the range $[0,1]$.
The second mutation strategy: Cauchy Mutation. This mutation strategy is adopted with a relatively high probability and is characterized by introducing the Cauchy distribution to increase the population diversity. In this study, it was found that this strategy is suitable for situations where the gaps between local optima are small, facilitating more detailed searches within local regions because of the limited jump range. Specifically, the position of the particle after mutation is generated by perturbing the current position using Cauchy mutation. The mathematical expression is as follows:
$$
x_i^{t+1}=x_i^{t}\big[1+\gamma\tan\big(\pi\,(rand-0.5)\big)\big]
$$

where $\gamma$ denotes the scaling parameter of the Cauchy distribution, controlling the mutation strength, while $rand$ is a random number uniformly distributed in the range $[0,1]$.
Dynamic Selection of Mutation Strategies. In summary, the two mutation strategies mentioned above each have their respective advantages and disadvantages. Therefore, this paper adopts the following selection strategy for real-time decision-making:
$$
\text{Mutation Strategy}=\begin{cases}\text{Random Mutation}, & rand\le R_{m,fixed}\\ \text{Cauchy Mutation}, & \text{else}\end{cases}
$$

where $rand$ is a random number uniformly distributed in the range $[0,1]$, while $R_{m,fixed}$ is a predefined constant, generally less than 0.5. Under this selection strategy, a fixed probability decides between the two mutation methods. Without significantly affecting computational complexity, this approach ensures a high probability of Cauchy mutation to enhance local search capability and a low probability of random mutation to increase population diversity, while avoiding the limitations of relying solely on a single mutation strategy.
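The dynamic mutation step can be sketched as below. Clamping the Cauchy-mutated position back into the search bounds is our addition, since the paper does not state how out-of-bounds positions are handled:

```python
import math
import random

def mutate(x, b_min, b_max, gamma=0.1, r_fixed=0.2, rng=random):
    """Dynamic mutation strategy: random mutation (Eq. (8)) with small
    probability r_fixed, otherwise Cauchy mutation (Eq. (9)); the final
    clamp to the bounds is an assumption."""
    if rng.random() <= r_fixed:
        x_new = b_min + rng.random() * (b_max - b_min)   # reinitialize
    else:
        x_new = x * (1.0 + gamma * math.tan(math.pi * (rng.random() - 0.5)))
    return min(max(x_new, b_min), b_max)
```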

3.2.4. Pseudocode of Mutation Mechanism

The pseudocode of the mutation mechanism is shown in Algorithm 2.
Algorithm 2: The mutation mechanism
1:Calculate the mutation rate according to Equation (7).
2:For each voter or follower
3:        If random value <= mutation rate
4:                If random value <= fixed rate
5:                        Update the voter or follower with random mutation according to Equation (8).
6:                 Else
7:                        Update the voter or follower with Cauchy mutation according to Equation (9).
8:                 End
9:         End
10:End

3.3. Pseudocode and Flowchart of the IPSO Algorithm

In summary, IPSO is proposed in this paper to enhance the global search capability in the early and middle stages, while increasing population diversity in the middle and later stages. This leads to improved algorithm accuracy, thereby enhancing optimization performance. The pseudocode of the IPSO algorithm is shown in Algorithm 3, and the flowchart of this algorithm is shown in Figure 2.
Algorithm 3: The IPSO algorithm
1:Set various parameters of the IPSO.
2:Initialize the positions and velocities of population randomly, and calculate the fitness value.
3:Enter main loop until the end condition is triggered.
4:        Calculate the mutation rate according to Equation (7).
5:        For each voter or follower
6:                If random value <= mutation rate
7:                        If random value <= fixed rate
8:                                Update the voter or follower with random mutation according to Equation (8).
9:                         Else
10:                                Update the voter or follower with Cauchy mutation according to Equation (9).
11:                         End
12:                 Else
13:                        Update the voter or follower without mutation according to Equation (5) or Equation (6).
14:                 End
15:                 Calculate the individual fitness with the custom cost function and update the individual pbest.
16:                 Update the support rate according to Equation (3).
17:                 If the support rate of the voter exceeds the predefined threshold, then change this voter to a follower.
18:         End
19:         Calculate the competition rate according to Equation (2).
20:         For each candidate
21:                If random value <= competition rate
22:                        Update the candidate with competition according to Equation (4).
23:                 Else
24:                        Update the candidate without competition according to Equation (4).
25:                 End
26:                 Calculate the individual fitness with the custom cost function and update the individual pbest.
27:        End
28:        Reselect the leader and candidates.
29:        If any candidate changes, then change its followers to voters.
30:        i = i + 1.
31:Exit main loop to end the optimization process.

4. Simulation Test and Result Analysis

To verify the specific performance of IPSO, this paper compares the algorithm with Particle Swarm Optimization (PSO), the Grey Wolf Optimizer (GWO), and the genetic algorithm (GA). The experimental environment is configured as follows: The operating system is Windows 11 (64-bit), the processor is an 11th Gen Intel(R) Core(TM) i7-11800H with a base frequency of 2.30 GHz, the memory is 16 GB, and the simulation software is MATLAB R2024b.

4.1. Comparison Algorithms and Parameter Settings

In this study, to fairly and comprehensively evaluate the performance of IPSO, we used the CEC2005 benchmark suite [35], which includes 25 test functions. These consist of five Unimodal Functions (F1–F5), seven Basic Multimodal Functions (F6–F12), two Expanded Functions (F13–F14), and eleven Composition Functions (F15–F25).
At the same time, this paper compares IPSO with three other classic swarm intelligence algorithms: PSO, GWO, and GA. To ensure the reliability of the test results, the population size for all algorithms was set to 55, and the number of iterations was set to 1000. The specific parameter settings for each algorithm are shown in Table 1.

4.2. Optimization Results and Analysis of CEC2005 Benchmark Functions

Using the aforementioned experimental environment and algorithm parameter settings, each algorithm was run independently 30 times on the 25 test functions. The results are shown in Table 2, which include metrics such as the average value, standard deviation, and best value. To more fairly evaluate the algorithm’s performance, this paper primarily uses the average of the experimental results to measure the optimization accuracy of the algorithms, and finally summarizes the experimental results of the 25 test functions to statistically analyze the optimization adaptability of the algorithms for different test functions.
When comparing the average values of the experimental results, it is evident that the IPSO algorithm provides significantly better solutions on most of the test functions, except for a few Unimodal Functions, demonstrating its strong global search capability. Additionally, the solutions obtained by the IPSO algorithm on certain Unimodal Functions are only slightly inferior to those of the standard PSO algorithm, showing its outstanding local search ability. Furthermore, comparing the standard deviations of the experimental results clearly shows that the IPSO algorithm has higher stability compared to the other algorithms, highlighting its superior adaptability.
In conclusion, IPSO retains the excellent local search capability of the original algorithm (PSO) while significantly enhancing its global search ability, resulting in an improved optimization accuracy and a noticeable increase in the algorithm’s adaptability to different optimization problems.

4.3. Convergence Curve Analysis

To visually illustrate the optimization accuracy and convergence speed of the algorithm, this paper analyzes the results using convergence curves. Figure 3 shows the convergence curves obtained from the 15th run of the experiment. The red, blue, black, and green curves represent the convergence of the IPSO, PSO, GWO, and GA algorithms, respectively. The x-axis of each curve represents the logarithm of the number of iterations, and the y-axis represents the fitness value of the optimal solution during the optimization process.
From Figure 3, it can be seen that IPSO generally outperforms the PSO, GWO, and GA algorithms in optimization accuracy. For some Multimodal Functions and Composition Functions where the other algorithms perform poorly, IPSO continues to improve the current global optimal solution in the later stages of the optimization process, although its convergence speed is slightly slower. While the other algorithms fall into local optima, IPSO still achieves better results, demonstrating its advantage in solving complex optimization problems.
In summary, within the specified number of iterations, IPSO shows a significant advantage in optimization accuracy.

5. Application of IPSO in Urban Drone Path Planning

5.1. Environmental Modeling

For the 3D urban path-planning problem, the modeling of the urban environment is one of the key aspects. In this study, a three-dimensional space of 100 m × 100 m × 100 m is selected as the flight area for the drone, with obstacles of different shapes simulating real-world environments such as city buildings. To facilitate the modeling, cuboids and cylinders are used to build the obstacles. The positions of the cuboids are determined by their upper and lower vertices; the cylinders are determined by their center coordinates, height, and radius. The specific obstacle location information is shown in Table 3. All simulation experiments are conducted in this environment.

5.2. Cost Function

The cost function is used to evaluate the quality of the planned path. The smaller the value of the cost function, the better the quality of the obtained path. Similar to general path-planning problems, the cost function for urban drone path planning consists of a series of different cost components, such as path length cost, height cost, collision cost, and path smoothness cost. For simplicity, this study considers only the path length, height, and collision costs.

5.2.1. Path Length Cost

Path length is the most direct method to evaluate the quality of a planned path. Since the drone has limited endurance, the shorter the planned path, the more advantageous it is for the drone’s mission. In this study, the path is composed of a predefined start point, predefined end point, and N p path points to be optimized along the path. The coordinates of the j t h path point on the i t h path are denoted as X i j = ( x i j , y i j , z i j ) . Therefore, the path length cost is defined as the sum of the Euclidean distances between adjacent path points from the start to the end, as expressed by the following equation,
$$
cost_{i,length}=\sum_{j=0}^{N_p}\sqrt{\big(x_{i(j+1)}-x_{ij}\big)^2+\big(y_{i(j+1)}-y_{ij}\big)^2+\big(z_{i(j+1)}-z_{ij}\big)^2}
$$

where $(x_{i0},y_{i0},z_{i0})$ denotes the coordinates of the path start point, and $(x_{i(N_p+1)},y_{i(N_p+1)},z_{i(N_p+1)})$ denotes the coordinates of the path end point.
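The length cost reduces to a sum of segment lengths:

```python
import math

def path_length_cost(points):
    """Path length cost: sum of Euclidean distances between consecutive
    path points, including the start and end points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))
```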

5.2.2. Height Cost

Choosing the correct flight height is also crucial for drone path planning. In drone path planning, especially in urban path planning, the setting of flight height is generally mandatory, as both excessively low and high flight heights may lead to accidents. Therefore, the cost function for flight height is expressed as follows,
$$
isoverheight(X_{ij})=\begin{cases}\infty, & unsafe\\ 0, & safe\end{cases}
\qquad
cost_{i,height}=\sum_{j=1}^{N_p} isoverheight(X_{ij})
$$

where $isoverheight(\cdot)$ is a custom function used to check whether the height of a path point is within the prescribed range. If the height exceeds the limit, the cost is set to infinity; if the height is within the normal range, the cost is set to zero.

5.2.3. Collision Cost

Being collision-free is the most important criterion in path planning. In urban drone path planning, the safety of the drone’s flight is the premise of the planning. Therefore, once a collision occurs, the cost function should render the path invalid, meaning the path’s cost should be infinitely large. Thus, the cost function for collision is expressed as follows:
$$
iscollision\big(X_{ij},X_{i(j+1)}\big)=\begin{cases}\infty, & unsafe\\ 0, & safe\end{cases}
\qquad
cost_{i,collision}=\sum_{j=0}^{N_p} iscollision\big(X_{ij},X_{i(j+1)}\big)
$$

where $iscollision(\cdot)$ is a custom function used to determine whether the path segment between two adjacent points collides with obstacles. If a collision occurs, the cost is set to infinity; if no collision occurs, the cost is set to zero. In this study, a collision is assumed to occur if a path point overlaps with an obstacle location in the inflated 3D grid map.

5.2.4. Total Cost Function

The total cost function is defined as the combination of the three individual cost functions mentioned above:
$$
cost_i=cost_{i,length}+cost_{i,height}+cost_{i,collision}
$$
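Putting the three terms together, a sketch of the total cost could look like the following; the z-range bounds and the collision predicate are illustrative stand-ins for the paper's isoverheight and iscollision checks on the inflated grid map:

```python
import math

def total_cost(points, z_min, z_max, is_collision):
    """Total path cost: Euclidean length plus infinite penalties for
    height violations (intermediate points) and segment collisions.
    The z-range and the collision predicate are assumed interfaces."""
    length = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    height = sum(0.0 if z_min <= p[2] <= z_max else math.inf
                 for p in points[1:-1])
    collision = sum(math.inf if is_collision(a, b) else 0.0
                    for a, b in zip(points, points[1:]))
    return length + height + collision
```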

5.3. Path Segmentation Optimal Update Algorithm

In drone path-planning tasks, planning efficiency is often crucial: a faster path-planning process can greatly improve task execution efficiency and thus the overall benefits. However, it was found in this research that, during path planning with IPSO, the convergence speed slows down significantly in the later stages of the optimization process, reducing the planning speed. This happens because the path oscillates around the optimal path solution, and the more path points there are, the more intense the oscillation becomes, greatly reducing the convergence speed of path planning. The cause lies in the fact that each path is composed of multiple path points, all of which are updated simultaneously during a particle update. Badly positioned path points can then drag well-positioned path points along with them, moving originally optimal path points to suboptimal positions through factors such as inertia. Repeated over many iterations, this produces the oscillation observed in the experiments.
To reduce or even avoid the aforementioned oscillation phenomenon, this paper proposes a path segmentation optimal update algorithm. The specific idea is as follows: In the later stages of the optimization process, during each iteration, the path containing N path points is decomposed into N updates. Each update only updates one path point, which is randomly selected from the un-updated path points. After each update, the fitness value of the path is calculated using Equation (17), and the optimal path from these N updates is selected as the new path for this iteration. This algorithm refines the original path update process by updating path points step by step instead of updating the entire path at once. This approach reduces or even eliminates the oscillation phenomenon, significantly improving the efficiency and convergence speed of path planning.
$$
fit_i^{t+1}=fit_i^{t}-fit_{ij}^{t}+fit_{ij}^{t+1}
$$

where $fit_i^{t+1}$ and $fit_i^{t}$ denote the fitness values of the $i$th path after and before the update, respectively, and $fit_{ij}^{t+1}$ and $fit_{ij}^{t}$ denote the fitness contributions of path point $j$, after and before the update, relative to its previous and next path points.
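One plausible reading of the segmented update is sketched below; `propose` stands in for the IPSO single-point particle update (a hypothetical interface), and the best path encountered over the point-by-point updates is kept:

```python
import random

def segmented_update(path, propose, fitness, rng=random):
    """Path segmentation optimal update (sketch): update the intermediate
    path points one at a time in random order, re-evaluating the path
    fitness after each single-point update and keeping the best variant."""
    best_path, best_fit = list(path), fitness(path)
    current = list(path)
    order = list(range(1, len(path) - 1))   # intermediate points only
    rng.shuffle(order)
    for j in order:
        current = list(current)
        current[j] = propose(current[j])
        f = fitness(current)
        if f < best_fit:
            best_path, best_fit = current, f
    return best_path, best_fit
```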

5.4. Path Smoothing

The improved PSO algorithm proposed in this study defines each path as consisting of a starting point, an end point, and several intermediate path points. The connections between these points are represented as straight line segments, which inevitably result in numerous sharp turns. When drones follow such paths during flight, excessive turns can significantly increase energy consumption. Therefore, it is essential to optimize the sharp turns on the path, a process referred to as path smoothing.
After comparing a series of path-smoothing algorithms, this study selected the cubic B-spline algorithm for its advantages in local control, flexibility, and computational efficiency. The smoothed path is expressed as follows:
$$
C(t)=\sum_{i=0}^{n} P_i\,B_{i,3}(t)
$$

where $C(t)$ denotes the path expression, $n$ is the number of control points, $P_i$ denotes the $i$th control point, and $B_{i,3}(t)$ denotes the $i$th cubic B-spline basis function, which is recursively defined as follows:
$$
B_{i,0}(z)=\begin{cases}1, & z_i\le z< z_{i+1}\\ 0, & \text{else}\end{cases}
\qquad
B_{i,k}(z)=\frac{z-z_i}{z_{i+k}-z_i}\,B_{i,k-1}(z)+\frac{z_{i+k+1}-z}{z_{i+k+1}-z_{i+1}}\,B_{i+1,k-1}(z)
$$

where $B_{i,k}(z)$ denotes the $i$th B-spline basis function of order $k$, and $z_i$ denotes the $i$th knot, which defines the segmentation of the curve.
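A direct Cox-de Boor implementation of the basis, together with the curve evaluation of $C(t)$, is sketched below (the knot vector and control points are the caller's choice; the 0/0 convention is handled by skipping zero denominators):

```python
def bspline_basis(i, k, z, knots):
    """Cox-de Boor recursion for the B-spline basis function B_{i,k}(z)."""
    if k == 0:
        return 1.0 if knots[i] <= z < knots[i + 1] else 0.0
    out = 0.0
    d1 = knots[i + k] - knots[i]
    if d1 > 0:
        out += (z - knots[i]) / d1 * bspline_basis(i, k - 1, z, knots)
    d2 = knots[i + k + 1] - knots[i + 1]
    if d2 > 0:
        out += (knots[i + k + 1] - z) / d2 * bspline_basis(i + 1, k - 1, z, knots)
    return out

def smooth_point(t, control_points, knots):
    """Evaluate the smoothed path C(t) = sum_i P_i * B_{i,3}(t) in 3D."""
    return tuple(sum(p[d] * bspline_basis(i, 3, t, knots)
                     for i, p in enumerate(control_points))
                 for d in range(3))
```

With a clamped knot vector such as [0, 0, 0, 0, 1, 1, 1, 1] and four control points, the cubic basis reduces to the Bernstein polynomials and the curve interpolates the first and last control points.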

5.5. Urban Drone Path Planning Simulation

In this study, path-planning simulation experiments were conducted within the urban environment model described in Section 5.1. Different scenarios and different start and end points were considered to test the optimization ability of the algorithm. As in the earlier benchmark experiments, the IPSO algorithm was compared with three other swarm intelligence optimization algorithms, namely PSO, GWO, and GA, in these path-planning simulations. Each algorithm was independently executed 20 times in the environment for comparison and analysis. To ensure the reliability of the results, the population size for all algorithms was set to 55, with 1000 iterations, and the other parameters remained consistent with those in Table 1. Specifically, each path consisted of six intermediate path points. The simulation results are shown in Figure 4 and Figure 5; in (a) and (c), the start point is (10, 10, 10) and the end point is (90, 90, 70), while in (b), the start point is (10, 70, 10) and the end point is (80, 35, 80).
In Figure 4, it can be visually observed that, in the situations of different scenarios and different start and end points, IPSO outperforms the other three algorithms in planning superior paths, demonstrating its excellent optimization capability and robust adaptability. Figure 5 provides more detailed insights, showing that the paths planned by the IPSO algorithm are consistently shorter than those generated by the other three algorithms. This reduction in path length leads to shorter flight times and significantly enhances task execution efficiency, indicating a superior path-planning performance.
Table 4 presents the results of path planning, including metrics such as the average value, standard deviation, best value, and worst value.
When comparing the average values of the experimental results, it is evident that the cost of the path solutions obtained by the IPSO algorithm is significantly lower than that of the other algorithms, indicating that its path solutions are markedly superior. This demonstrates that the improved algorithm effectively enhances the search capability and algorithm accuracy. Furthermore, when analyzing the standard deviation of the experimental results, it can be found that IPSO consistently delivers stable and superior optimization results for path-planning problems under different starting points, highlighting its strong adaptability. In summary, the IPSO algorithm exhibits an outstanding performance in addressing urban drone path-planning problems.
Furthermore, to address safety-critical challenges posed by environmental uncertainties or sensor noise in drone path planning, this research proposes an IPSO-based dynamic replanning strategy. The urban environment is modeled with latent obstacle estimation, and it is assumed that the drone will hover or execute a controlled landing during path recalculation to ensure operational continuity. As illustrated in Figure 6, the experimental stages are as follows. In Stage 1, the cylindrical obstacles represent the original grid map modeled before takeoff, while the red path denotes the path planned based on the initial map. When the drone follows the red path and reaches a certain distance, new cylindrical obstacles (shown in Stage 2) are dynamically introduced to simulate newly detected obstacles during flight. At this point, the IPSO algorithm reinitializes path planning from the drone’s current position (retaining the original end point), generating the blue path based on the updated grid map. Additional obstacles (shown in Stages 3 and 4) are successively introduced during flight, triggering further replanning to generate the black and green paths, respectively. Experimental results demonstrate that the IPSO algorithm with the dynamic replanning strategy can effectively avoid sudden obstacles not present in the original grid map. This provides an effective solution for handling environmental uncertainties or sensor noise in IPSO-based path planning.
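As a rough sketch of the replanning loop in Figure 6 (the `plan` and `sense` interfaces are hypothetical stand-ins for an IPSO planning call and onboard obstacle detection, and the drone is assumed to hover while replanning):

```python
def fly_with_replanning(start, goal, plan, sense, max_replans=10):
    """Dynamic replanning loop (sketch): follow the planned path, and when
    newly detected obstacles appear, replan from the current position to
    the unchanged goal on the updated obstacle map."""
    obstacles = set()
    pos = start
    paths = []
    for _ in range(max_replans):
        path = plan(pos, goal, obstacles)
        paths.append(path)
        for pos in path:                 # follow the current path
            new = sense(pos)
            if new - obstacles:          # previously unseen obstacles
                obstacles |= new
                break                    # hover here and replan
        else:
            return pos, paths            # goal reached without replanning
    return pos, paths
```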

6. Conclusions

This paper addresses the urban drone path-planning problem by proposing an IPSO algorithm. By incorporating swarm intelligence and the competitive and collaborative behaviors observed in human electoral processes, the global search capability and optimization accuracy of IPSO are significantly enhanced. The utilization of hierarchical structures, competitive mechanisms, and dynamic mutation strategies effectively overcomes the standard PSO’s tendency to fall into local optima when dealing with complex optimization problems. Experimental results on the CEC2005 benchmark test functions demonstrate that the IPSO algorithm outperforms the standard PSO, GWO, and GA in optimizing multiple test functions. Particularly in the optimization of Multimodal and Composition Functions, IPSO shows a stronger ability to continuously optimize the global optimum, indicating a superior performance in solving complex problems.
For the practical application of urban UAV path planning, this paper further introduces a path segmentation and prioritized update algorithm and a cubic B-spline curve algorithm built on IPSO. The proposed method significantly reduces path length and energy consumption and improves the smoothness and planning efficiency of drone paths while enhancing flight safety in complex urban environments. Simulation experiments verify that the proposed path-planning algorithm performs well across test scenarios with different starting and ending points, generating shorter paths and flight times with higher stability and adaptability. However, the proposed IPSO algorithm still has limited computational efficiency, which restricts its applicability in scenarios with strict real-time requirements; in urban drone path planning, IPSO is therefore generally used only for global path planning and, where real-time performance is required, can be combined with a local path-planning algorithm. Future research will further optimize the algorithm's dynamic adjustment mechanisms to improve convergence speed and adaptability, incorporate additional mechanisms from biological intelligence, and introduce heuristic strategies to improve the operational efficiency of IPSO and further expand its applications.
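As a concrete illustration of the smoothing step mentioned above, a uniform cubic B-spline can be sampled over consecutive waypoint quadruples. This is a minimal sketch of the general technique, not the paper's exact formulation (endpoint and knot handling are omitted):

```python
# Uniform cubic B-spline smoothing of a 3D waypoint path (pure Python).
# Illustrative only; all function names are introduced here for the sketch.

def cubic_bspline_point(p0, p1, p2, p3, t):
    """Point on a uniform cubic B-spline segment for t in [0, 1]."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0
    b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0
    b3 = t ** 3 / 6.0
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def smooth_path(waypoints, samples_per_segment=10):
    """Sample a smooth curve using the waypoints as the control polygon."""
    curve = []
    for i in range(len(waypoints) - 3):          # sliding window of 4 points
        seg = waypoints[i:i + 4]
        for k in range(samples_per_segment):
            curve.append(cubic_bspline_point(*seg, k / samples_per_segment))
    return curve
```

Because the spline only approximates the control polygon, at t = 0 a segment evaluates to (p0 + 4·p1 + p2)/6 rather than passing through p1 exactly; this is what produces the characteristic corner-rounding of B-spline smoothed paths.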

Author Contributions

Conceptualization, Y.J. and W.W.; Methodology, Y.J. and Q.L.; Software, C.Z.; Investigation, Z.H.; Resources, Q.L.; Data Curation, C.Z.; Writing—Original Draft, Q.L. and Y.J.; Writing—Review and Editing, Y.J.; Supervision, Y.J.; Project Administration, Z.H.; Funding Acquisition, W.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Major Program of the National Natural Science Foundation of China (Grant Number: T2293720/3724), the National Natural Science Foundation of China (Grant Number: 62173326), the Fundamental Research Funds for the Central Universities (Grant Number: FRF-TP-22-037A1), and the CAS Project for Young Scientists in Basic Research (Grant Number: YSBR-041).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Telli, K.; Kraa, O.; Himeur, Y.; Ouamane, A.; Boumehraz, M.; Atalla, S.; Mansoor, W. A comprehensive review of recent research trends on unmanned aerial vehicles (UAVs). Systems 2023, 11, 400.
2. Elmeseiry, N.; Alshaer, N.; Ismail, T. A detailed survey and future directions of unmanned aerial vehicles (UAVs) with potential applications. Aerospace 2021, 8, 363.
3. Shakhatreh, H.; Sawalmeh, A.H.; Al-Fuqaha, A.; Dou, Z.; Almaita, E.; Khalil, I.; Othman, N.S.; Kereishah, A.; Guizani, M. Unmanned aerial vehicles (UAVs): A survey on civil applications and key research challenges. IEEE Access 2019, 7, 48572–48634.
4. Gharib, M.R.; Moavenian, M. Full dynamics and control of a quadrotor using quantitative feedback theory. Int. J. Numer. Model. Electron. Netw. Devices Fields 2016, 29, 501–519.
5. Hooshyar, M.; Huang, Y.M. Meta-heuristic algorithms in UAV path planning optimization: A systematic review (2018–2022). Drones 2023, 7, 687.
6. Aggarwal, S.; Kumar, N. Path planning techniques for unmanned aerial vehicles: A review, solutions, and challenges. Comput. Commun. 2020, 149, 270–299.
7. Maboudi, M.; Homaei, M.; Song, S.; Malihi, S.; Saadatseresht, M.; Gerke, M. A review on viewpoints and path planning for UAV-based 3-D reconstruction. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 5026–5048.
8. Zhou, B.; Gao, F.; Pan, J.; Shen, S. Robust real-time UAV replanning using guided gradient-based optimization and topological paths. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 1208–1214.
9. Melo, A.G.; Pinto, M.F.; Marcato, A.L.; Honório, L.M.; Coelho, F.O. Dynamic optimization and heuristics based online coverage path planning in 3D environment for UAVs. Sensors 2021, 21, 1108.
10. Liu, H.; Li, X.; Fan, M.; Wu, G.; Pedrycz, W.; Suganthan, P.N. An autonomous path planning method for unmanned aerial vehicle based on a tangent intersection and target guidance strategy. IEEE Trans. Intell. Transp. Syst. 2020, 23, 3061–3073.
11. Peng, C.; Huang, X.; Wu, Y.; Kang, J. Constrained multi-objective optimization for UAV-enabled mobile edge computing: Offloading optimization and path planning. IEEE Wirel. Commun. Lett. 2022, 11, 861–865.
12. Venkatasivarambabu, P.; Agrawal, R. Enhancing UAV navigation with dynamic programming and hybrid probabilistic route mapping: An improved dynamic window approach. Int. J. Inf. Technol. 2024, 16, 1023–1032.
13. Han, B.; Qu, T.; Tong, X.; Jiang, J.; Zlatanova, S.; Wang, H.; Cheng, C. Grid-optimized UAV indoor path planning algorithms in a complex environment. Int. J. Appl. Earth Obs. Geoinf. 2022, 111, 102857.
14. Chen, Q.; Cheng, S.; Hovakimyan, N. Simultaneous spatial and temporal assignment for fast UAV trajectory optimization using bilevel optimization. IEEE Robot. Autom. Lett. 2023, 8, 3860–3867.
15. Souto, A.; Alfaia, R.; Cardoso, E.; Araújo, J.; Francês, C. UAV path planning optimization strategy: Considerations of urban morphology, microclimate, and energy efficiency using Q-learning algorithm. Drones 2023, 7, 123.
16. Shivgan, R.; Dong, Z. Energy-efficient drone coverage path planning using genetic algorithm. In Proceedings of the 2020 IEEE 21st International Conference on High Performance Switching and Routing (HPSR), Newark, NJ, USA, 11–14 May 2020; pp. 1–6.
17. Yuan, J.; Liu, Z.; Lian, Y.; Chen, L.; An, Q.; Wang, L.; Ma, B. Global optimization of UAV area coverage path planning based on good point set and genetic algorithm. Aerospace 2022, 9, 86.
18. Mesquita, R.; Gaspar, P.D. A novel path planning optimization algorithm based on particle swarm optimization for UAVs for bird monitoring and repelling. Processes 2021, 10, 62.
19. Huang, C. A novel three-dimensional path planning method for fixed-wing UAV using improved particle swarm optimization algorithm. Int. J. Aerosp. Eng. 2021, 2021, 7667173.
20. Phung, M.D.; Ha, Q.P. Safety-enhanced UAV path planning with spherical vector-based particle swarm optimization. Appl. Soft Comput. 2021, 107, 107376.
21. Ying, S.; Shi, C.; Yang, S. Ship route designing for collision avoidance based on Bayesian genetic algorithm. In Proceedings of the IEEE International Conference on Control and Automation, Guangzhou, China, 30 May–1 June 2007; pp. 1807–1811.
22. Cao, X.; Ren, L.; Wang, X.; Sun, C. Path re-planning method of unmanned underwater vehicles based on dynamic Bayesian threat assessment. Ocean Eng. 2025, 315, 119819.
23. Wan, Y.; Zhong, Y.; Ma, A.; Zhang, L. An accurate UAV 3-D path planning method for disaster emergency response based on an improved multiobjective swarm intelligence algorithm. IEEE Trans. Cybern. 2022, 53, 2658–2671.
24. Athira, K.A.; Yalavarthi, R.; Saisandeep, T.; Harshith, K.S.S.; Sha, A. ACO-DTSP algorithm: Optimizing UAV swarm routes with workload constraints. Procedia Comput. Sci. 2024, 235, 163–172.
25. Zhang, W.; Zhang, S.; Wu, F.; Wang, Y. Path planning of UAV based on improved adaptive grey wolf optimization algorithm. IEEE Access 2021, 9, 89400–89411.
26. Yu, X.; Jiang, N.; Wang, X.; Li, M. A hybrid algorithm based on grey wolf optimizer and differential evolution for UAV path planning. Expert Syst. Appl. 2023, 215, 119327.
27. Jiang, W.; Lyu, Y.; Li, Y.; Guo, Y.; Zhang, W. UAV path planning and collision avoidance in 3D environments based on POMPD and improved grey wolf optimizer. Aerosp. Sci. Technol. 2022, 121, 107314.
28. Pan, J.S.; Lv, J.X.; Yan, L.J.; Weng, S.W.; Chu, S.C.; Xue, J.K. Golden eagle optimizer with double learning strategies for 3D path planning of UAV in power inspection. Math. Comput. Simul. 2022, 193, 509–532.
29. Zhang, R.; Li, S.; Ding, Y.; Qin, X.; Xia, Q. UAV path planning algorithm based on improved Harris Hawks optimization. Sensors 2022, 22, 5232.
30. Shen, Q.; Zhang, D.; Xie, M.; He, Q. Multi-strategy enhanced dung beetle optimizer and its application in three-dimensional UAV path planning. Symmetry 2023, 15, 1432.
31. Abdel-Basset, M.; Mohamed, R.; Sallam, K.M.; Hezam, I.M.; Munasinghe, K.; Jamalipour, A. A multiobjective optimization algorithm for safety and optimality of 3-D route planning in UAV. IEEE Trans. Aerosp. Electron. Syst. 2024, 60, 3067–3080.
32. Zu, L.; Wang, Z.; Liu, C.; Ge, S.S. Research on UAV path planning method based on improved HPO algorithm in multi-task environment. IEEE Sens. J. 2023, 23, 19881–19893.
33. Zhang, C.; Zhou, W.; Qin, W.; Tang, W. A novel UAV path planning approach: Heuristic crossing search and rescue optimization algorithm. Expert Syst. Appl. 2023, 215, 119243.
34. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of MHS'95, the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43.
35. Suganthan, P.N.; Hansen, N.; Liang, J.J.; Deb, K.; Chen, Y.-P.; Auger, A.; Tiwari, S. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL Rep. 2005, 2005005.
Figure 1. A graphic description of hierarchy and behaviors in PSO.
Figure 2. The flowchart of the IPSO algorithm.
Figure 3. Convergence curves of test functions.
Figure 4. Urban drone path planning.
Figure 5. Convergence curve of urban drone path planning.
Figure 6. IPSO-based dynamic replanning strategy.
Table 1. Algorithm parameter settings.

Algorithm | Main Parameters
IPSO | N = 5, w = 0.7, c1 = 1.5, c2 = 1.5, c3 = 1.5;
       Rc,max = 50, Rc,min = 10, cntu,max = iterations/8, Rs,fixed = 10;
       Rm,max = 30, Rm,min = 10, Nm = 5, Rm,fixed = 0.4
PSO  | w = 0.9, c1 = 1.5, c2 = 1.5
GWO  | —
GA   | Rmutation = 0.1, Rcross = 0.5
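For illustration, the Table 1 coefficients could enter a PSO-style velocity update as follows. Note that the role assigned to c3 here (learning from a hierarchy leader) is an assumption made for the sketch; this is not the paper's exact update rule, and `update_velocity` is a hypothetical name.

```python
import random

# Assumed three-term PSO-style velocity update: c1 pulls toward the
# particle's personal best, c2 toward the global best, and c3 toward an
# additional hierarchy leader, using the IPSO values from Table 1.
W, C1, C2, C3 = 0.7, 1.5, 1.5, 1.5

def update_velocity(v, x, pbest, gbest, leader, rng):
    """Per-dimension velocity update with independent random weights."""
    return [W * vi
            + C1 * rng.random() * (pb - xi)   # cognitive term
            + C2 * rng.random() * (gb - xi)   # social term
            + C3 * rng.random() * (ld - xi)   # assumed leader term
            for vi, xi, pb, gb, ld in zip(v, x, pbest, gbest, leader)]
```

A quick sanity check: when the particle already sits at all three attractors, the three attraction terms vanish and the velocity simply decays by the inertia weight w.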
Table 2. Optimization results of CEC2005 benchmark functions.

Test Function | Indicator | IPSO | PSO | GWO | GA
F1(x) | Avg | −450 | −450 | −449.999999 | −425.952248
F1(x) | Std | 2.111112 × 10^−14 | 0 | 1.820577 × 10^−6 | 25.555973
F1(x) | Best | −450 | −450 | −450 | −449.534698
F1(x) | Ranking | 2 | 1 | 3 | 4
F2(x) | Avg | −450 | −450 | −449.999994 | −405.596209
F2(x) | Std | 1.165899 × 10^−13 | 0 | 8.731787 × 10^−6 | 52.953349
F2(x) | Best | −450 | −450 | −450 | −449.447446
F2(x) | Ranking | 2 | 1 | 3 | 4
F3(x) | Avg | −450 | −291.718656 | 1659.367770 | 43277.750098
F3(x) | Std | 4.095846 × 10^−7 | 866.942621 | 2376.386817 | 3528.763719
F3(x) | Best | −450 | −450 | −449.961901 | −401.367885
F3(x) | Ranking | 1 | 2 | 3 | 4
F4(x) | Avg | −450 | −450 | −449.999994 | −416.461782
F4(x) | Std | 1.411290 × 10^−12 | 0 | 7.065243 × 10^−6 | 42.570312
F4(x) | Best | −450 | −450 | −450 | −446.094106
F4(x) | Ranking | 2 | 1 | 3 | 4
F5(x) | Avg | −310 | −310 | −310 | −310
F5(x) | Std | 0 | 0 | 0 | 0
F5(x) | Best | −310 | −310 | −310 | −310
F5(x) | Ranking | 1 | 1 | 1 | 1
F6(x) | Avg | 390 | 400.026165 | 454.317210 | 416.378099
F6(x) | Std | 1.125253 × 10^−11 | 38.692063 | 40.952152 | 39.244906
F6(x) | Best | 390 | 390 | 390 | 390.002167
F6(x) | Ranking | 1 | 2 | 4 | 3
F7(x) | Avg | −178.307870 | −140.506148 | −140.836896 | −137.325516
F7(x) | Std | 0.947948 | 0.453619 | 0.032987 | 7.992522
F7(x) | Best | −179.288269 | −140.721283 | −140.847181 | −140.173508
F7(x) | Ranking | 1 | 3 | 2 | 4
F8(x) | Avg | −140 | −123.999641 | −120.664410 | −119.760441
F8(x) | Std | 0 | 8.136944 | 3.639764 | 0.259870
F8(x) | Best | −140 | −140 | −139.935693 | −120.536663
F8(x) | Ranking | 1 | 2 | 3 | 4
F9(x) | Avg | −330 | −329.568851 | −329.801007 | −314.726714
F9(x) | Std | 0 | 0.501466 | 0.404787 | 6.0152092
F9(x) | Best | −330 | −330 | −330 | −325.883823
F9(x) | Ranking | 1 | 3 | 2 | 4
F10(x) | Avg | −330 | −329.436189 | −329.834087 | −307.451342
F10(x) | Std | 1.055555 × 10^−14 | 0.565455 | 0.743327 | 12.665757
F10(x) | Best | −330 | −330 | −330 | −325.769611
F10(x) | Ranking | 1 | 3 | 2 | 4
F11(x) | Avg | 90.000020 | 90.008766 | 90.014066 | 90.123873
F11(x) | Std | 3.075193 × 10^−5 | 0.047917 | 0.051661 | 0.081992
F11(x) | Best | 90 | 90 | 90.000906 | 90.027348
F11(x) | Ranking | 1 | 2 | 3 | 4
F12(x) | Avg | −460 | −460 | −459.652625 | −392.819433
F12(x) | Std | 5.850990 × 10^−11 | 0 | 0.443965 | 56.504576
F12(x) | Best | −460 | −460 | −459.999986 | −459.999489
F12(x) | Ranking | 2 | 1 | 3 | 4
F13(x) | Avg | −129.999342 | −129.970727 | −129.987677 | −125.716794
F13(x) | Std | 0.003602 | 0.038282 | 0.015810 | 11.192063
F13(x) | Best | −130 | −130 | −130 | −129.937801
F13(x) | Ranking | 1 | 3 | 2 | 4
F14(x) | Avg | −299.992227 | −299.979382 | −299.984732 | −299.030191
F14(x) | Std | 0.009682 | 0.010767 | 0.007020 | 0.057294
F14(x) | Best | −300 | −300 | −300 | −299.207456
F14(x) | Ranking | 1 | 3 | 2 | 4
F15(x) | Avg | 120 | 173.438024 | 171.423962 | 133.593857
F15(x) | Std | 4.678729 × 10^−13 | 52.155256 | 51.094450 | 44.552562
F15(x) | Best | 120 | 120 | 120 | 120
F15(x) | Ranking | 1 | 4 | 3 | 2
F16(x) | Avg | 120 | 155.102276 | 172.327423 | 165.117313
F16(x) | Std | 1.371207 × 10^−14 | 42.705134 | 102.110015 | 42.324586
F16(x) | Best | 120 | 120 | 120 | 120
F16(x) | Ranking | 1 | 2 | 4 | 3
F17(x) | Avg | 120 | 143.661025 | 152.204302 | 160.281951
F17(x) | Std | 1.730435 × 10^−14 | 32.299819 | 48.072136 | 37.710854
F17(x) | Best | 120 | 120 | 120 | 120
F17(x) | Ranking | 1 | 2 | 3 | 4
F18(x) | Avg | 10 | 300 | 246.810454 | 266.812006
F18(x) | Std | 9.886142 × 10^−7 | 118.467222 | 82.393323 | 127.507629
F18(x) | Best | 10 | 10 | 110.066944 | 10.258672
F18(x) | Ranking | 1 | 4 | 2 | 3
F19(x) | Avg | 170.083801 | 340 | 269.280453 | 300.223773
F19(x) | Std | 56.232486 | 102.216808 | 71.880495 | 70.592002
F19(x) | Best | 10.018245 | 210 | 112.226819 | 113.127308
F19(x) | Ranking | 1 | 4 | 2 | 3
F20(x) | Avg | 20 | 273.333333 | 254.623740 | 210.357130
F20(x) | Std | 30.512857 | 129.942516 | 89.219680 | 159.311326
F20(x) | Best | 10 | 10 | 111.721298 | 10.003517
F20(x) | Ranking | 1 | 4 | 3 | 2
F21(x) | Avg | 366.666666 | 719.622594 | 661.015170 | 613.797966
F21(x) | Std | 36.514837 | 168.549209 | 204.535187 | 88.241399
F21(x) | Best | 360 | 360 | 360.000001 | 560.000024
F21(x) | Ranking | 1 | 4 | 3 | 2
F22(x) | Avg | 446.666686 | 659.881575 | 648.032910 | 628.196021
F22(x) | Std | 100.801368 | 146.385566 | 60.846282 | 87.267175
F22(x) | Best | 360 | 360 | 495.632347 | 560.000140
F22(x) | Ranking | 1 | 4 | 3 | 2
F23(x) | Avg | 385.389456 | 829.452956 | 683.332953 | 712.911694
F23(x) | Std | 77.470487 | 182.859350 | 339.324722 | 152.364963
F23(x) | Best | 360 | 360 | 360 | 360.001968
F23(x) | Ranking | 1 | 4 | 2 | 3
F24(x) | Avg | 450 | 469.150180 | 456.260876 | 460.002078
F24(x) | Std | 30.512857 | 38.524190 | 18.565121 | 0.004279
F24(x) | Best | 360 | 460 | 358.508267 | 460.000003
F24(x) | Ranking | 1 | 4 | 2 | 3
F25(x) | Avg | 437.625668 | 732.660497 | 785.231299 | 749.432197
F25(x) | Std | 43.573251 | 166.982200 | 207.169366 | 96.469473
F25(x) | Best | 360 | 360 | 360.054591 | 419.794733
F25(x) | Ranking | 1 | 2 | 4 | 3
Average Ranking | — | 1.16 | 2.64 | 2.68 | 3.28
Final Ranking | — | 1 | 2 | 3 | 4
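As a consistency check, the "Average Ranking" row of Table 2 can be reproduced from the per-function Ranking rows; the snippet below transcribes those ranks (F1–F25, in order) and averages them.

```python
# Reproduce Table 2's average-ranking row from the per-function ranks.
# The lists below are transcribed from the table's Ranking rows (F1..F25).
ranks = {
    "IPSO": [2, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "PSO":  [1, 1, 2, 1, 1, 2, 3, 2, 3, 3, 2, 1, 3, 3, 4, 2, 2, 4, 4, 4, 4, 4, 4, 4, 2],
    "GWO":  [3, 3, 3, 3, 1, 4, 2, 3, 2, 2, 3, 3, 2, 2, 3, 4, 3, 2, 2, 3, 3, 3, 2, 2, 4],
    "GA":   [4, 4, 4, 4, 1, 3, 4, 4, 4, 4, 4, 4, 4, 4, 2, 3, 4, 3, 3, 2, 2, 2, 3, 3, 3],
}
avg_rank = {alg: sum(r) / len(r) for alg, r in ranks.items()}
# avg_rank["IPSO"] is 1.16, matching the table's Average Ranking row.
```

The averages come out to 1.16, 2.64, 2.68, and 3.28 for IPSO, PSO, GWO, and GA, respectively, agreeing with the table and with the final ranking order.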
Table 3. The obstacle location information.

Scenario 1
Architecture | Lower Vertex | Upper Vertex
Building 1 | (15, 0, 0) | (40, 20, 30)
Building 2 | (30, 42, 0) | (60, 65, 60)
Building 3 | (70, 70, 0) | (100, 100, 30)
Building 4 | (0, 86, 0) | (50, 100, 40)
Building 5 | (0, 86, 40) | (50, 100, 100)
Building 5 | (0, 70, 40) | (20, 86, 100)
Building 6 | (60, 0, 0) | (100, 30, 60)
Building 6 | (80, 0, 60) | (100, 30, 100)

Scenario 2
Architecture | (Center X, Center Y, Height, Radius)
Building 1 | (25, 30, 50, 5)
Building 2 | (75, 85, 60, 5)
Building 3 | (60, 20, 90, 5)
Building 4 | (80, 50, 70, 5)
Building 5 | (50, 50, 80, 5)
Building 6 | (20, 60, 80, 5)
Building 7 | (50, 80, 40, 5)
Building 8 | (90, 10, 60, 5)
Building 9 | (20, 90, 60, 5)
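The two obstacle encodings in Table 3 lend themselves to simple point-membership tests, which is the basic primitive a path planner needs for collision checking. The following sketch is illustrative only (the paper's grid-map collision model may differ) and uses Building 1 from each scenario as an example.

```python
# Point-in-obstacle tests for the two encodings in Table 3:
# axis-aligned boxes given by lower/upper vertices (Scenario 1) and
# vertical cylinders given by (cx, cy, height, radius) (Scenario 2).

def in_box(p, lower, upper):
    """True if 3D point p lies inside the axis-aligned box."""
    return all(lo <= c <= hi for c, lo, hi in zip(p, lower, upper))

def in_cylinder(p, cx, cy, height, radius):
    """True if 3D point p lies inside the vertical cylinder."""
    x, y, z = p
    return 0 <= z <= height and (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2

# Scenario 1, Building 1 and Scenario 2, Building 1 from Table 3:
assert in_box((20, 10, 15), (15, 0, 0), (40, 20, 30))       # inside the box
assert not in_box((50, 10, 15), (15, 0, 0), (40, 20, 30))   # past x-extent
assert in_cylinder((26, 31, 25), 25, 30, 50, 5)             # inside cylinder
assert not in_cylinder((26, 31, 60), 25, 30, 50, 5)         # above the top
```

A waypoint (or a sampled point along a path segment) collides with the map exactly when any of these per-obstacle tests returns true.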
Table 4. Path planning results under two different starting points.

Planning | Indicator | IPSO | PSO | GWO | GA
Planning 1 | Avg | 144.296760 | 166.284557 | 157.449830 | 151.865283
Planning 1 | Std | 2.033130 | 22.186115 | 12.906774 | 8.138659
Planning 1 | Best | 141.383928 | 147.386272 | 153.955209 | 141.692737
Planning 1 | Worst | 148.170932 | 214.948553 | 204.104464 | 167.703344
Planning 1 | Ranking | 1 | 4 | 3 | 2
Planning 2 | Avg | 113.776126 | 135.392295 | 114.903634 | 122.945473
Planning 2 | Std | 1.002453 | 16.943708 | 0.706795 | 9.395027
Planning 2 | Best | 111.782006 | 114.349097 | 114.346174 | 115.905400
Planning 2 | Worst | 115.096053 | 164.877072 | 117.397439 | 146.093836
Planning 2 | Ranking | 1 | 4 | 2 | 3
Planning 3 | Avg | 137.030056 | 153.006240 | 139.566073 | 173.011788
Planning 3 | Std | 17.597429 | 21.340279 | 16.574379 | 17.939675
Planning 3 | Best | 119.174606 | 125.422954 | 121.780195 | 131.703866
Planning 3 | Worst | 171.895310 | 186.264765 | 170.414441 | 206.094407
Planning 3 | Ranking | 1 | 3 | 2 | 4
Average Ranking | — | 1 | 3.67 | 2.33 | 3
Final Ranking | — | 1 | 4 | 2 | 3
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Ji, Y.; Liu, Q.; Zhou, C.; Han, Z.; Wu, W. Hybrid Swarm Intelligence and Human-Inspired Optimization for Urban Drone Path Planning. Biomimetics 2025, 10, 180. https://doi.org/10.3390/biomimetics10030180

