Article

Multi-UAV Dynamic Target Search Based on Multi-Potential-Field Fusion Reward Shaping MAPPO

1 School of Mechatronics Engineering, Beijing Institute of Technology, Beijing 100081, China
2 Institute of Advanced Interdisciplinary Technology, Shenzhen MSU-BIT University, Shenzhen 518116, China
3 Yangtze River Delta Research Institute of BIT, Jiaxing 314000, China
* Author to whom correspondence should be addressed.
Drones 2025, 9(11), 770; https://doi.org/10.3390/drones9110770
Submission received: 29 August 2025 / Revised: 5 November 2025 / Accepted: 6 November 2025 / Published: 7 November 2025

Highlights

What are the main findings?
  • Proposes MPRS-MAPPO, an adaptive reward shaping method integrating three potential fields, enhancing multi-UAV coordination and learning efficiency in dynamic target search.
  • Achieves a 7.87–29.76% improvement in target detection rate and an 11.58% increase in training return compared to baseline methods.
What are the implications of the main findings?
  • Offers an effective MARL framework for cooperative search under sparse rewards and dynamic conditions.
  • The design enhances efficiency and stability, serving as a reference for other multi-agent systems.

Abstract

In the cooperative search for dynamic targets by multiple UAVs, target uncertainty and system complexity pose significant challenges to cooperative decision-making. Multi-agent reinforcement learning (MARL) technology can be used for cooperative policy optimization, but it suffers from convergence difficulties and low policy quality in reward-sparse environments such as dynamic target search. To address this issue, this paper proposes a Multi-Potential-Field Fusion Reward Shaping MAPPO (MPRS-MAPPO) algorithm. First, three potential field functions are constructed for reward shaping: probability edge potential field, maximum probability potential field, and coverage probability sum potential field. Subsequently, an adaptive fusion weight mechanism is proposed to adjust fusion weights based on the correlation between potential field values and advantage values. Furthermore, a warm-up phase is introduced to improve training stability. Extensive experiments, including multi-scale and physical tests, demonstrate that MPRS-MAPPO significantly improves convergence speed, detection rate, and stability compared with MAPPO, MASAC, QMIX, and Scanline. Detection rates increased by 7.87–29.76%, and training uncertainty decreased by 7.43–56.36%, validating the algorithm’s robustness, scalability, and real-world applicability.

1. Introduction

Unmanned aerial vehicles (UAVs), owing to their cost-effectiveness, flexible deployment, and adaptability to complex environments, have been extensively employed in tasks such as reconnaissance, mapping, patrolling, and target search, becoming indispensable components of intelligent perception systems [1,2]. In multi-UAV systems, these advantages are further amplified: cooperative operations not only enhance the efficiency and robustness of complex task execution but also expand mission coverage and system scalability [3]. In particular, for dynamic target search missions in unknown environments, multi-UAV systems have been widely applied in disaster rescue, agricultural inspection, and counter-terrorism security scenarios [4,5].
Nevertheless, multi-UAV systems still encounter considerable challenges in dynamic target search tasks. On one hand, their cooperative mechanisms are inherently complex, requiring rational task allocation to avoid path conflicts, resource waste, and redundant search efforts. On the other hand, targets generally possess only limited prior location information at the initial stage, and their positions and movements change over time, which makes cooperative decision-making even more challenging.
Existing studies on multi-UAV dynamic target search can be broadly categorized into four methodological classes: planning-based approaches, optimization-based approaches, heuristic methods, and reinforcement learning. Planning-based methods generate high-coverage search paths through area partitioning [6,7] and trajectory design [8,9,10,11], which are suitable for static or partially known environments but lack responsiveness under dynamic target scenarios. Optimization-based approaches rely on multi-objective function models to balance metrics such as path length [12,13], search time [14,15,16], and coverage rate [17]; although theoretically optimal, their computational complexity escalates rapidly with task scale, limiting real-time applicability. Heuristic methods, including particle swarm optimization [18,19], ant colony algorithms [20], and multi-population cooperative coevolution [21], exhibit strong global search capability and algorithmic flexibility [22,23], but often lack effective feedback control, making them prone to local optima and slow convergence.
Among the aforementioned approaches, reinforcement learning (RL) [24], particularly multi-agent reinforcement learning (MARL) [25], has emerged as a promising paradigm for addressing dynamic target search problems. Its key advantage lies in its independence from precise modeling, instead enabling adaptive policy learning through continuous interaction with the environment, thereby ensuring strong generalization and robustness. In particular, multi-agent algorithms based on proximal policy optimization, such as MAPPO [26], have demonstrated remarkable performance in handling partial observability, high-dimensional state spaces, and cooperative decision-making, and have been widely applied to multi-UAV cooperative search tasks [27,28,29]. For instance, Refs. [30,31] leveraged MARL to optimize search path allocation and real-time response mechanisms, significantly improving target-tracking accuracy and system-level coordination. In scenarios with unknown target quantities or incomplete information, Ref. [32] integrated map construction with policy learning, enabling UAVs to iteratively update environmental cognition and dynamically adapt strategies during search. Furthermore, researchers have proposed various enhancements in algorithmic structures and optimization processes to further improve training stability and efficiency [33,34,35,36]. More recent studies have introduced MASAC [37] to enhance learning stability and generalization in UAV swarm decision-making under incomplete information and HAPPO [38] to improve coordination and optimization efficiency in heterogeneous multi-agent environments.
Despite the great potential of RL in this domain, its training process remains constrained by the sparse-reward problem [39]. Before the discovery of targets, agents often fail to obtain effective reward signals, leading to inefficient policy optimization. To address this challenge, researchers have introduced reward shaping (RS) techniques [40], which leverage prior knowledge to design auxiliary rewards that improve training efficiency and policy quality. Among them, potential-based reward shaping (PBRS) [41] has been widely adopted due to its provable policy invariance. Various studies have attempted to design diverse potential functions to enhance learning across different tasks. For example, energy-aware shaping functions have been used to improve UAV emergency communication efficiency [42]; position-constrained functions have been applied to optimize multi-agent assembly tasks [43]; linearly weighted multi-potential fusion has been employed to enhance single-agent adaptability [44]; and dynamic adjustment of shaping reward magnitudes has been proposed to improve stage-wise training adaptability [45]. However, in multi-agent scenarios—particularly in multi-UAV dynamic target search—systematic studies on how to effectively integrate multiple potential functions for reward shaping are still lacking.
To address these challenges, we propose a Multi-Potential-Field Fusion Reward Shaping MAPPO (MPRS-MAPPO). Building upon sparse primary rewards, this method designs three semantically meaningful potential functions that serve as shaping signals. Moreover, an adaptive fusion weight mechanism is introduced to adaptively adjust the weights of different potential functions according to their relationship with advantage values, thereby mitigating potential interference. Table 1 compares the proposed approach with existing methods across four dimensions: multi-UAV applicability, dynamic target handling, reward shaping, and potential-field fusion. It can be observed that this work is the first to integrate all four key characteristics into a unified algorithmic framework.
In summary, the main contributions of this work are as follows:
  • We designed three semantically distinct potential field functions: Probability Edge Potential Field, Maximum Probability Potential Field, and Coverage Probability Sum Potential Field, which provide shaping signals from the perspectives of local prior information, global optimal prediction, and swarm-level coordination.
  • We developed an Adaptive Fusion Weight Mechanism that adaptively adjusts the weights of potential functions based on their correlation with advantage values, reducing interference among multiple potentials and enabling stable and efficient training convergence.
  • We proposed the MPRS-MAPPO algorithmic framework, which introduces a warm-up phase followed by a multi-potential field fusion reward shaping mechanism to address the sparse-reward challenge, thereby improving the learning efficiency and cooperation of agents in dynamic target search tasks.
Finally, extensive experiments were conducted on a custom-built multi-UAV simulation platform to validate the superiority of the proposed method in terms of training efficiency, policy coordination, and search performance.
The remainder of this paper is organized as follows. Section 2 formulates the mathematical model of the multi-UAV dynamic target search problem. Section 3 presents the proposed MPRS-MAPPO algorithm in detail. Section 4 describes the experimental design and evaluation results. Section 5 concludes the paper and outlines future research directions.

2. System Modeling

This paper investigates the cooperative search for dynamic ground targets by multiple fixed-wing UAVs. As shown in Figure 1, the task area is a bounded ground region. Each UAV is equipped with a sensor with a limited field of view, whose range is represented by a yellow cone. Ground targets and their trajectories are indicated by green icons; their motion is non-deterministic, and they may maneuver out of the task area. Although each UAV is given the target's prior initial position, the target's actual position changes continuously because of its maneuverability. Consequently, cooperative search by multiple UAVs is required to increase the target detection probability. The mission objective is to maximize the number of detected targets within the specified search duration.
Based on the task scenario, this section defines the environmental, target motion, UAV motion, and sensor models that provide the basis for the subsequent method research.

2.1. Environment Model

The search environment is modeled as a two-dimensional grid map $M$, where the probability of a target existing at position $g = (x, y)$ changes dynamically at each time step $t$. This dynamic likelihood is represented by the target probability distribution map $p(g, t)$, which satisfies:
$p(g, t) \in [0, 1], \quad \forall g \in M.$
The sum of the target's existence probabilities over all positions in the area is defined as the overall existence probability of the target:
$p_{\mathrm{remain}}(t) = \sum_{g \in M} p(g, t).$
When $p_{\mathrm{remain}}(t)$ equals 1, the target is still within the search area; when it is less than 1, the target may have escaped from the area.
The UAV moves within the area, and at each time step its onboard sensor senses a sub-area. Each sensing outputs $z \in \{0, 1\}$, where $z = 1$ indicates that a target is detected and $z = 0$ that it is not. When the UAV position $q_{\mathrm{uav}}(t)$ and the target position $q_{\mathrm{tgt}}(t)$ coincide, the probability of sensing the target is the detection probability $p_D$; when they do not coincide, the probability of incorrectly reporting a target is the false alarm probability $p_{FA}$:
$p_D = P(z = 1 \mid q_{\mathrm{uav}}(t) = q_{\mathrm{tgt}}(t)), \qquad p_{FA} = P(z = 1 \mid q_{\mathrm{uav}}(t) \neq q_{\mathrm{tgt}}(t)).$
According to Bayesian theory and the observation at the current time step, the posterior probability of each sub-area in the target probability distribution map can be updated. Since target motion changes the overall existence probability $p_{\mathrm{remain}}(t)$, this factor must be taken into account when computing the posterior. Based on the distribution at time $t$, the probability at time $t + 1$ is updated as:
$p(g, t+1) = \begin{cases} p_{\mathrm{remain}}(t)\, \dfrac{p_D\, p(g,t)}{p_D\, p(g,t) + p_{FA}\,(1 - p(g,t))}, & z = 1 \\[2ex] p_{\mathrm{remain}}(t)\, \dfrac{(1 - p_D)\, p(g,t)}{(1 - p_D)\, p(g,t) + (1 - p_{FA})\,(1 - p(g,t))}, & z = 0. \end{cases}$
If multiple targets are present in the area, each target $j$ maintains its own probability map $p_j(g, t)$, which represents the probability distribution of that target's position within the area.
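As an illustration, the following Python sketch shows how the Bayesian update above could be implemented for a single observed grid cell. The function name, array layout, and the renormalization step that preserves $p_{\mathrm{remain}}(t)$ are our own simplifications, not the authors' code.

```python
import numpy as np

def bayes_update(p_map, z, cell, p_d=0.95, p_fa=0.05):
    """Update a single target's probability map after one observation.

    p_map : 2-D array of prior probabilities p(g, t) over the grid.
    z     : sensor output at the observed cell (1 = detection, 0 = no detection).
    cell  : (row, col) index of the grid cell covered by the sensor footprint.
    """
    p_remain = p_map.sum()      # overall probability that the target is still in the area
    prior = p_map[cell]
    if z == 1:
        post = p_d * prior / (p_d * prior + p_fa * (1.0 - prior))
    else:
        post = (1.0 - p_d) * prior / ((1.0 - p_d) * prior + (1.0 - p_fa) * (1.0 - prior))
    new_map = p_map.copy()
    new_map[cell] = post
    # Renormalize so the map still sums to p_remain(t); this global rescaling is a
    # simplification of the paper's per-cell scaling by p_remain(t).
    new_map *= p_remain / new_map.sum()
    return new_map
```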

2.2. Target and UAV Motion Models

2.2.1. Target Motion Model

In this paper, multiple dynamic targets are considered, and the total number is N T . For each target, only its initial position is known, while its subsequent positions evolve randomly over time. The behavior of an individual target can be modeled as a Markov process, and its probability distribution evolves accordingly. The specific formula for a single target is:
$p(g, t+1) = \sum_{g' \in N(g)} T(g' \to g)\, p(g', t),$
where $g$ and $g'$ denote target positions, $T(g' \to g)$ is the state transition probability from position $g'$ to $g$, and $N(g)$ is the set of neighboring grid cells from which $g$ can be reached in a single time step. In the absence of prior information on the target's kinematics, the transition probability $T(g' \to g)$ is assumed to follow a uniform distribution. As the complexity of the target motion model increases, the position probability distribution becomes more dispersed and irregular, making it more difficult to find targets within a limited time; this also affects the convergence and stability of the cooperative search strategy. The choice of motion model should therefore balance the characteristics of targets in the specific scenario against the computational cost of probabilistic inference. This paper assumes the target tends to move away from its initial position, so its probability distribution spreads outwards over time, forming an annular high-probability region.
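The prediction step of the target probability map can be sketched as follows. The uniform spread over an 8-connected neighborhood (including staying in place) is an illustrative simplification of the transition model described above; the paper's actual model biases motion away from the initial position.

```python
import numpy as np

def diffuse_probability(p_map):
    """One prediction step under a uniform transition model: each cell spreads
    its probability mass evenly over itself and its 8-connected neighbours.
    Mass pushed outside the map is lost, so the total probability may drop
    below 1 (i.e., the target may have escaped the area)."""
    rows, cols = p_map.shape
    new_map = np.zeros_like(p_map)
    offsets = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
    for r in range(rows):
        for c in range(cols):
            if p_map[r, c] == 0.0:
                continue
            share = p_map[r, c] / len(offsets)   # uniform T(g -> g')
            for dr, dc in offsets:
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    new_map[nr, nc] += share
    return new_map
```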

2.2.2. UAV Motion Model

This paper considers a swarm of $N_a$ homogeneous fixed-wing UAVs, represented as:
$U = \{uav_1, uav_2, \ldots, uav_{N_a}\}.$
Each UAV can autonomously adjust its heading and speed, and UAVs fly at different altitudes to avoid collisions. For convenience of modeling and simulation, each UAV is simplified to a particle model with heading constraints in the two-dimensional plane, and its state is represented by the three-dimensional vector $(x, y, \psi)$, where $(x, y)$ is the UAV's position in the plane and $\psi$ is its yaw angle. The continuous-time motion model of the UAV is:
$\dot{x} = v\cos\psi, \quad \dot{y} = v\sin\psi, \quad \dot{\psi} = \omega.$
Discretizing this model with sampling time $\Delta t$ yields the following discrete-time model:
$\begin{bmatrix} x_{t+1} \\ y_{t+1} \\ \psi_{t+1} \end{bmatrix} = \begin{bmatrix} x_t \\ y_t \\ \psi_t \end{bmatrix} + \Delta t \begin{bmatrix} \cos\psi_t & 0 \\ \sin\psi_t & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} v_t \\ \omega_t \end{bmatrix}.$
To reflect the kinematic constraints of fixed-wing UAVs, the yaw change within each time step is restricted to $\{0°, \pm 45°, \pm 90°\}$. The UAV's speed is normalized so that in one time step it moves one grid unit for axial movements and $\sqrt{2}$ grid units for diagonal movements.
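A minimal Python sketch of the discretized kinematics above, assuming grid-aligned headings and the normalized step lengths just described; the function and constant names are illustrative, not the paper's implementation.

```python
import math

# Yaw changes allowed per step (Section 2.2.2): 0, +/-45, +/-90 degrees.
YAW_CHANGES = [0.0, math.pi / 4, -math.pi / 4, math.pi / 2, -math.pi / 2]

def uav_step(x, y, psi, action):
    """Advance the (x, y, psi) particle model by one time step.

    The speed normalization means the UAV always moves to an adjacent grid
    cell along its new heading: 1 unit axially, sqrt(2) units diagonally.
    """
    psi_new = (psi + YAW_CHANGES[action]) % (2 * math.pi)
    # Headings are multiples of 45 deg, so rounding keeps the move on the grid.
    dx = round(math.cos(psi_new))
    dy = round(math.sin(psi_new))
    return x + dx, y + dy, psi_new
```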

2.3. Sensor Model

To search for ground targets, the UAV is equipped with sensors with detection capabilities. Considering the errors in the actual sensors, the detection probability model and the false alarm probability model are established for the sensors, and the target recognition and judgment method based on the sensor model is designed.
For the detection probability model $p_D$, let the UAV position be $q_i^{uav}(t) = (x_i(t), y_i(t))$ and the position of the point to be detected be $q_j^{tgt}(t) = (x_j(t), y_j(t))$. The detection probability decays exponentially with distance:
$p_D(q_j^{tgt}, q_i^{uav}) = p_{\max}\, e^{-d(q_j^{tgt},\, q_i^{uav}) / \lambda},$
where $d(q_j^{tgt}, q_i^{uav}) = \sqrt{(x_j - x_i)^2 + (y_j - y_i)^2}$ is the Euclidean distance between the target and the sensor, $p_{\max}$ is the maximum detection probability when the target is at the center of the sensor's field of view, and $\lambda$ is the attenuation coefficient; the smaller $\lambda$, the faster the detection probability decreases.
As shown in Figure 2, the detection probability decreases as the distance from the airborne sensor increases. In the simulation, a minimum effective detection probability is set to define the sensor's effective detection range.
The false alarm probability $p_{FA}$ is the probability that the sensor incorrectly reports a target at a position where none is present.
From the above analysis, there are two hypotheses regarding whether a target exists at a given position $g$: $H_0$, indicating no target at the position, and $H_1$, indicating a target at the position. The sensor output is a Bernoulli variable $z \in \{0, 1\}$, where $z = 1$ indicates that a target is detected.
According to Bayes' theorem, given a prior probability of target presence $p(g, t)$, the posterior probability after receiving an observation $z$ can be computed from the detection probability $p_D$, the false alarm probability $p_{FA}$, and the prior, as follows:
$P(H_1 \mid z = 1) = \dfrac{p_D\, p(g,t)}{p_D\, p(g,t) + p_{FA}\,(1 - p(g,t))}.$
When the sensor outputs $z = 1$, the presence of a target is judged by comparing the posterior probability against a threshold $\eta_{\mathrm{exist}}$:
$z_{\mathrm{judge}} = \begin{cases} 1, & P(H_1 \mid z = 1) \geq \eta_{\mathrm{exist}} \\ 0, & P(H_1 \mid z = 1) < \eta_{\mathrm{exist}}, \end{cases}$
where $\eta_{\mathrm{exist}}$ is the chosen posterior probability threshold. When $z_{\mathrm{judge}} = 1$, a target is judged to be present at the position; when $z_{\mathrm{judge}} = 0$, no target is judged to be present.
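A small sketch combining the exponential detection-probability model with the posterior-threshold judgment above. The defaults for $\lambda$ and $\eta_{\mathrm{exist}}$ are illustrative assumptions; $p_{\max} = 0.95$ and $p_{FA} = 0.05$ mirror the values used later in the experiments.

```python
import math

def detection_probability(d, p_max=0.95, lam=1.0):
    """Exponential decay of detection probability with UAV-target distance d."""
    return p_max * math.exp(-d / lam)

def judge_target(prior, p_d, p_fa=0.05, eta_exist=0.9):
    """Decide whether a z = 1 observation is accepted as a real detection
    by thresholding the Bayesian posterior P(H1 | z = 1)."""
    posterior = p_d * prior / (p_d * prior + p_fa * (1.0 - prior))
    return 1 if posterior >= eta_exist else 0
```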

3. Proposed Methodology

The problem of multi-UAV cooperative search for dynamic targets must account for target mobility and a finite task time, during which targets may escape the search area. The objective for the UAV cluster is to maximize the number of targets detected before they escape. Accordingly, the task objective function in this paper is formulated with two components: maximizing the number of targets detected within the mission timeframe and minimizing the sum of the initial detection times for all targets. The objective function is expressed as follows:
$\max \; \sum_{j=1}^{N_T} I(t_j^{\mathrm{detect}} \leq T_{\max}) - \sum_{j=1}^{N_T} t_j^{\mathrm{detect}},$
where $I(\cdot)$ is the indicator function, which equals 1 when the target is detected within the time limit and 0 otherwise; $t_j^{\mathrm{detect}}$ is the time at which the $j$-th target is detected; and $T_{\max}$ is the maximum mission duration.
Throughout the mission, targets move randomly, their positions evolving over time, with the possibility of exiting the search area. To model this dynamic behavior, the target position constraint is defined as:
$q_j^{tgt}(t+1) \neq q_j^{tgt}(t), \quad \forall j.$
The mission duration of each UAV is constrained by $T_{\max}$:
$t_i^{\mathrm{mission}} \leq T_{\max}, \quad \forall i.$
Each UAV is subject to its own kinematic constraints; in particular, the rate of change of its heading angle is limited:
$|\psi_i(t+1) - \psi_i(t)| \leq \Delta\psi_{\max}.$
Each UAV must also remain within the mission map:
$q_i^{uav}(t) \in M_{\mathrm{mission}}, \quad \forall i, t.$
In summary, the mathematical model of the multi-UAV cooperative search problem for dynamic targets can be formally expressed as follows. This model captures the key aspects of the task, including the need to maximize the total number of targets detected within the limited mission time, as well as to minimize the cumulative initial detection times for all targets. Additionally, it incorporates several practical constraints that each UAV must satisfy during operation. These include kinematic constraints such as the maximum allowable change in heading angle, spatial constraints ensuring that UAV positions remain within the mission area, temporal constraints limiting the mission duration, and dynamic constraints reflecting the random movement of targets over time. By integrating the objective function with these constraints, the multi-UAV cooperative search problem is fully formulated as:
$\begin{aligned} \max \quad & \sum_{j=1}^{N_T} I(t_j^{\mathrm{detect}} \leq T_{\max}) - \sum_{j=1}^{N_T} t_j^{\mathrm{detect}} \\ \mathrm{s.t.} \quad & |\psi_i(t+1) - \psi_i(t)| \leq \Delta\psi_{\max} \\ & q_j^{tgt}(t+1) \neq q_j^{tgt}(t), \quad \forall j \\ & q_i^{uav}(t) \in M_{\mathrm{mission}} \\ & t_i^{\mathrm{mission}} \leq T_{\max}, \quad \forall i. \end{aligned}$
This optimization problem is characterized by multiple objectives, numerous constraints, and stochastic target motion, making it intractable for traditional methods. To address this challenge, this paper proposes the MPRS-MAPPO algorithm for solving based on the decentralized partially observable Markov decision process (Dec-POMDP) framework and the multi-agent proximal policy optimization (MAPPO) algorithm.

3.1. Dec-POMDP Formulation

According to the characteristics of the task, the process of multi-UAV cooperative search for dynamic targets is modeled as a Dec-POMDP [30]. In this model, the global state s t cannot be perceived by a single UAV. Each UAV takes action a t i based on its local observation information o t i . Since the target motion is unknown and each UAV can only obtain partial information, the system is partially observable. The corresponding Dec-POMDP can be described by the following tuple:
$\langle N, S, A, T, R, \Omega, O, \gamma \rangle,$
where
(1) $N = \{1, \ldots, N_a\}$ is the set of $N_a$ agents.
(2) $S$ is the global state space of the environment, and $s \in S$ denotes the current state.
(3) $A = A_1 \times A_2 \times \cdots \times A_{N_a}$ is the joint action space, where $A_i$ is the action space of the $i$-th agent.
(4) $T(s' \mid s, a) = P(s' \mid s, a) \in [0, 1]$ is the state transition probability function of the environment, giving the probability of transitioning to state $s'$ from state $s$ under the joint action $a = (a_1, a_2, \ldots, a_{N_a})$.
(5) $R(s, a) = (R_1(s, a), R_2(s, a), \ldots, R_{N_a}(s, a))$ is the joint reward function, which outputs each agent's reward given the current state $s$ and the joint action $a$ of the $N_a$ agents, where $R_i(s, a)$ is the reward obtained by the $i$-th agent.
(6) $\Omega = \Omega_1 \times \Omega_2 \times \cdots \times \Omega_{N_a}$ is the joint observation space of the $N_a$ agents, where $o^i \in \Omega_i$ is the observation of the $i$-th agent.
(7) $O(s, i)$ is the local observation function, and $o^i = O(s, i)$ is the local observation obtained by the $i$-th agent in state $s$.
(8) $\gamma$ is the discount factor of the Markov decision process.
At time $t$, each agent $i$ obtains its local observation $o_t^i = O(s_t, i)$ from its observation function, where the global state $s_t$ is the current environment state that no single agent can fully perceive. Based on $o_t^i$, each agent generates an action $a_t^i = \pi_i(o_t^i)$ according to its policy $\pi_i$; the actions of all agents form the joint action $a_t = (a_t^1, a_t^2, \ldots, a_t^{N_a})$. The environment transitions to the new state $s_{t+1}$ according to the current state $s_t$, the joint action $a_t$, and the transition function $T$, and outputs each agent's reward according to the joint reward function $R$. The goal of the agents is to maximize the cumulative discounted reward $\mathbb{E}_\pi \big[ \sum_{t=0}^{\infty} \gamma^t R_i(s_t, a_t) \big]$ by optimizing the joint policy $\pi = (\pi_1, \pi_2, \ldots, \pi_{N_a})$. The optimized policy endows the UAVs with the ability to cooperatively search for dynamic targets.

3.2. State Space and Action Space

Based on the Dec-POMDP framework, the $N_a$ UAVs are modeled as a multi-agent system that cooperatively searches for $N_T$ potential targets. The global state $s_t \in S$ contains the positions and headings of all agents, as well as the probability distribution information of all targets:
$s_t = \left\{ \{ q_i^{uav}(t), \psi_i(t) \}_{i=1}^{N_a}, \ \{ G_j, P_j \}_{j=1}^{N_T} \right\},$
where $q_i^{uav}(t) = (x_i(t), y_i(t))$ denotes the grid position of agent $i$ and $\psi_i(t)$ is its heading. $G_j = \{g_{j,1}, g_{j,2}, \ldots, g_{j,M_j}\}$ is the set of grid positions where the existence probability of target $j$ is non-zero, and $P_j = \{p(g_{j,1}), p(g_{j,2}), \ldots, p(g_{j,M_j})\}$ is the set of corresponding probability values.
Due to the partial observability of the environment, agent $i$'s local observation $o_t^i \in \Omega_i$ comprises only its own kinematic state and the local target probability map it maintains:
$o_t^i = \left\{ q_i^{uav}(t), \psi_i(t), \{ G_j^i, P_j^i \}_{j=1}^{N_T} \right\},$
where $G_j^i$ and $P_j^i$ denote agent $i$'s estimates of the probability-region positions and probability values of target $j$, respectively. In this implementation, the global state is constructed by aggregating all agents' local observations.
The discrete action space A i for each agent i is defined as:
$A_i = \{a_1, a_2, a_3, a_4, a_5\},$
where the five discrete actions are:
  • $a_1$: maintain the current heading and move forward one grid step;
  • $a_2$: turn left by $\pi/4$ and move forward one grid step;
  • $a_3$: turn left by $\pi/2$ and move forward one grid step;
  • $a_4$: turn right by $\pi/4$ and move forward one grid step;
  • $a_5$: turn right by $\pi/2$ and move forward one grid step.
This kinematic model is illustrated in Figure 3.
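For illustration, this discrete action set can be encoded as a simple lookup from action index to heading change applied before the forward step; the indices and the constant's name are our own convention (a hypothetical helper, not part of the paper's implementation).

```python
import math

# Discrete action set A_i of Section 3.2: heading change applied before the
# one-grid-step forward move. Index assignment is illustrative.
ACTION_TO_YAW_CHANGE = {
    0: 0.0,              # a1: keep current heading
    1: +math.pi / 4,     # a2: turn left 45 deg
    2: +math.pi / 2,     # a3: turn left 90 deg
    3: -math.pi / 4,     # a4: turn right 45 deg
    4: -math.pi / 2,     # a5: turn right 90 deg
}
```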

3.3. Fusion Reward Shaping

Potential-based reward shaping is a technique that transforms prior knowledge into reward signals to alleviate the sparse reward problem and enhance exploration efficiency in reinforcement learning. This method achieves such shaping by defining a potential field, which provides additional intermediate reward signals on top of the sparse base rewards. Its mathematical form is:
$R'(s, a, s') = R(s, a, s') + \gamma \phi(s') - \phi(s),$
where $\phi(\cdot)$ is the potential function, $\gamma$ is the discount factor, $R(s, a, s')$ is the original reward, and $R'(s, a, s')$ is the shaped reward. This formulation theoretically guarantees that the optimal policy is unchanged by the shaping. For reinforcement learning, designing a reasonable potential function is therefore crucial to effective reward shaping.
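A one-line sketch of potential-based shaping, assuming a hypothetical potential function and an illustrative discount factor of 0.99.

```python
def shaped_reward(r_base, phi_s, phi_s_next, gamma=0.99):
    """Potential-based reward shaping: R' = R + gamma * phi(s') - phi(s).

    Because the added term is a difference of potentials, the optimal policy
    is preserved (the PBRS policy-invariance result)."""
    return r_base + gamma * phi_s_next - phi_s
```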
Building on the above theory, in the dynamic target search scenario, the total reward is defined as:
$R_{\mathrm{total}} = R_{\mathrm{base}} + R_{\mathrm{shape}},$
where $R_{\mathrm{base}}$ is the base reward, $R_{\mathrm{shape}}$ is the shaping reward, and $R_{\mathrm{total}}$ is the final reward after shaping.
Specifically, building on the base environmental reward, this paper introduces three types of potential field functions: a Probability Edge Potential Field ( ϕ e d g e ), a Maximum Probability Potential Field ( ϕ p m a x ), and a Coverage Probability Sum Potential Field ( ϕ p c o v ). We also introduce an Adaptive Fusion Weight Mechanism to dynamically adjust their fusion weights. This mechanism allows the system to automatically identify and prioritize the most effective potential functions during training, thereby enhancing both learning efficiency and final policy quality.
The shaping reward is defined as:
$R_{\mathrm{shape}} = \sum_{f \in \{\mathrm{edge},\, p\mathrm{max},\, p\mathrm{cov}\}} \omega_f \left[ \gamma \phi_f(s_{t+1}) - \phi_f(s_t) \right],$
where $\omega_f$ denotes the adaptive weight of potential function $f$.
In the following, we will elaborate on the base reward, the potential field functions, and the adaptive fusion weight mechanism in detail.

3.3.1. Base Reward

The base reward $R_{\mathrm{base}}^t$ is composed of two components:
$R_{\mathrm{base}}^t = R_{\mathrm{find}}^t + R_{\mathrm{prior}}^t,$
where the first component is a discovery reward $R_{\mathrm{find}}^t$, which incentivizes agents to find new targets. It provides a reward of +50 upon the discovery of a new target at timestep $t$ and 0 otherwise:
$R_{\mathrm{find}}^t = \begin{cases} 50, & \text{if a new target is found at } t \\ 0, & \text{otherwise.} \end{cases}$
The second component is a prior probability reward $R_{\mathrm{prior}}^t$, designed to encourage the exploration of high-probability areas. For visiting a grid cell $q_{\mathrm{uav}}(t)$, the agent receives a reward equal to 20 times the cell's prior probability $p_{\mathrm{prior}}(g)$:
$R_{\mathrm{prior}}^t = 20\, p_{\mathrm{prior}}(q_{\mathrm{uav}}(t)).$
The base reward $R_{\mathrm{base}}^t$ encourages the agent to explore high-probability areas to find more targets, but the reward signal remains relatively sparse due to the limited probability distribution.
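A minimal sketch of the base reward computation; the function signature is illustrative.

```python
def base_reward(found_new_target, prior_prob_of_cell):
    """Sparse base reward of Section 3.3.1: +50 for discovering a new target,
    plus 20 times the prior probability of the visited grid cell."""
    r_find = 50.0 if found_new_target else 0.0
    r_prior = 20.0 * prior_prob_of_cell
    return r_find + r_prior
```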

3.3.2. Multiple Potential Fields

To realize the multi-potential-field fusion reward shaping mechanism, this paper proposes three complementary potential field functions from both local and global perspectives: the Probability Edge Potential Field $\phi_{\mathrm{edge}}$, the Maximum Probability Potential Field $\phi_{p\mathrm{max}}$, and the Coverage Probability Sum Potential Field $\phi_{p\mathrm{cov}}$:
(1) Probability Edge Potential Field $\phi_{\mathrm{edge}}$
This potential field is defined from the perspective of local prior information. It is designed to guide agents toward the edge regions of the target probability distribution. In this way, agents obtain higher potential energy when approaching the probability boundary, thereby encouraging exploration around uncertain areas and improving search efficiency, as shown in Figure 4.
The potential field $\phi_{\mathrm{edge}}$ is defined as follows. The position of the $i$-th agent is:
$q_i^{uav} = (x_i, y_i), \quad i \in \{1, \ldots, N_a\}.$
The number of targets is $N_T$, and the probability-region positions of the $j$-th target are:
$G_j = \{g_{j,1}, g_{j,2}, \ldots, g_{j,M_j}\}, \quad j \in \{1, \ldots, N_T\},$
where $M_j$ is the number of possible positions of target $j$, and the set of probability values corresponding to these positions is:
$P_j = \{p(g_{j,1}), p(g_{j,2}), \ldots, p(g_{j,M_j})\}.$
The minimum Euclidean distance from the $i$-th agent to the edge of the target probability area is defined as:
$d_i^{\mathrm{edge}} = \min_{1 \leq j \leq N_T} \min_{1 \leq k \leq M_j} \left\| q_i^{uav} - g_{j,k} \right\|.$
The probability edge potential energy of the $i$-th agent is then:
$\phi_i^{\mathrm{edge}} = \frac{C_0}{d_i^{\mathrm{edge}} + 1},$
where $C_0$ is a scale factor that controls the range of the potential field values.
The global probability edge potential energy is obtained by averaging the potential energies of all agents and passing the result through a Sigmoid-like fuzzy function:
$\phi_{\mathrm{edge}}(s) = \sigma\!\left( \frac{1}{N_a} \sum_{i=1}^{N_a} \phi_i^{\mathrm{edge}} \right),$
where
$\sigma(x) = \frac{1}{1 + e^{-a_\sigma (x - c_\sigma)}}$
is a Sigmoid-like fuzzy function, with parameter $a_\sigma$ controlling the steepness of the curve and $c_\sigma$ determining its center point. According to this definition, higher $\phi_{\mathrm{edge}}$ values correspond to positions near the edges of the probability region, whereas positions farther from the edges exhibit lower potential values.
(2) Maximum Probability Potential Field $\phi_{p\mathrm{max}}$
This potential field is constructed from the perspective of global optimal prediction.
It is designed to guide agents toward the local maxima of the probability distribution. Positions closer to the maximum probability values yield higher potential energy, directing agents to the most likely target locations and thus accelerating target detection, as shown in Figure 5.
The potential field $\phi_{p\mathrm{max}}$ is defined as follows. The position of maximum probability for target $j$ is:
$g_j^{\mathrm{max}} = g_{j, k_j^*}, \quad k_j^* = \arg\max_{1 \leq k \leq M_j} p(g_{j,k}).$
The minimum distance from the $i$-th agent to the maximum-probability points of all targets is:
$d_i^{p\mathrm{max}} = \min_{1 \leq j \leq N_T} \left\| q_i^{uav} - g_j^{\mathrm{max}} \right\|.$
The local maximum probability potential energy of the $i$-th agent is:
$\phi_i^{p\mathrm{max}} = \frac{C_0}{d_i^{p\mathrm{max}} + 1}.$
Finally, the local maximum probability potential energies of all agents are averaged to obtain a collective measure, which is then processed by the Sigmoid-like fuzzy mapping to produce the global potential energy:
$\phi_{p\mathrm{max}}(s) = \sigma\!\left( \frac{1}{N_a} \sum_{i=1}^{N_a} \phi_i^{p\mathrm{max}} \right).$
According to this definition, positions closer to the maximum-probability point of each target have higher $\phi_{p\mathrm{max}}$ values, while positions farther away exhibit lower values.
(3) Coverage Probability Sum Potential Field $\phi_{p\mathrm{cov}}$
This potential field is proposed from the perspective of swarm-level coordination. It is designed to guide agents toward regions with the highest global coverage probability. Positions with larger cumulative probabilities correspond to higher potential energy, as shown in Figure 6.
The potential field $\phi_{p\mathrm{cov}}$ is defined as follows. First, the sum of the existence probabilities of all targets at each grid position is computed; this distribution is defined as the global coverage probability distribution:
$C(g) = \sum_{j=1}^{N_T} \sum_{k=1}^{M_j} p(g_{j,k})\, I(g = g_{j,k}),$
where $I(\cdot)$ is the indicator function.
From this distribution, the grid position with the maximum coverage probability is identified:
$g_{\mathrm{cov\_max}} = \arg\max_{g \in M} C(g).$
The distance from agent $i$ to this maximum-coverage position is:
$d_i^{p\mathrm{cov}} = \left\| q_i^{uav} - g_{\mathrm{cov\_max}} \right\|.$
The local coverage probability sum potential energy of agent $i$ is:
$\phi_i^{p\mathrm{cov}} = \frac{C_0}{d_i^{p\mathrm{cov}} + 1}.$
The global coverage probability sum potential field, based on the same fuzzy mapping, is:
$\phi_{p\mathrm{cov}}(s) = \sigma\!\left( \frac{1}{N_a} \sum_{i=1}^{N_a} \phi_i^{p\mathrm{cov}} \right).$
According to this definition, positions closer to points with higher cumulative coverage probability have larger $\phi_{p\mathrm{cov}}$ values.
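The three potentials can be sketched together as follows. This is a simplified illustration: the edge distance is approximated by the distance to the nearest non-zero-probability cell, the coverage maximum assumes per-target cells do not overlap, and the constants ($C_0$ and the sigmoid parameters) are placeholder values rather than the paper's settings.

```python
import numpy as np

def sigmoid(x, a=1.0, c=5.0):
    """Sigmoid-like fuzzy mapping used to squash the averaged potential."""
    return 1.0 / (1.0 + np.exp(-a * (x - c)))

def potential_fields(uav_positions, target_cells, target_probs, c0=10.0):
    """Compute the three global potentials of Section 3.3.2.

    uav_positions : (N_a, 2) array-like of agent grid positions.
    target_cells  : list of (M_j, 2) arrays of non-zero-probability cells per target.
    target_probs  : list of (M_j,) arrays of the corresponding probabilities.
    Returns (phi_edge, phi_pmax, phi_pcov). Distances are Euclidean and each
    per-agent potential is c0 / (d + 1), as in the paper.
    """
    all_cells = np.vstack(target_cells)
    all_probs = np.concatenate(target_probs)
    # per-target maximum-probability cells
    max_cells = np.array([cells[np.argmax(probs)]
                          for cells, probs in zip(target_cells, target_probs)])
    # grid cell with the largest (coverage) probability; assumes disjoint cells
    cov_max_cell = all_cells[np.argmax(all_probs)]

    phi_edge_i, phi_pmax_i, phi_pcov_i = [], [], []
    for q in np.asarray(uav_positions, dtype=float):
        d_edge = np.min(np.linalg.norm(all_cells - q, axis=1))
        d_pmax = np.min(np.linalg.norm(max_cells - q, axis=1))
        d_pcov = np.linalg.norm(cov_max_cell - q)
        phi_edge_i.append(c0 / (d_edge + 1.0))
        phi_pmax_i.append(c0 / (d_pmax + 1.0))
        phi_pcov_i.append(c0 / (d_pcov + 1.0))

    return (sigmoid(np.mean(phi_edge_i)),
            sigmoid(np.mean(phi_pmax_i)),
            sigmoid(np.mean(phi_pcov_i)))
```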

3.3.3. Adaptive Fusion Weight Mechanism

To adaptively fuse the above potential fields, this paper designs an adaptive fusion weight mechanism based on the correlation between advantage values and potential field values. In this way, multiple potential fields are combined according to their contribution to the advantage, forming a weighted shaping reward. The shaping reward is calculated as:
$R_{\mathrm{shape}}^t = \sum_{f=1}^{N_f} \omega_f \left[ \gamma \phi_f(s_{t+1}) - \phi_f(s_t) \right],$
where $N_f$ is the number of potential fields and $\phi_f$ is the $f$-th potential field. In this mechanism, the fusion weights of the potential field functions are modeled as random variables that follow a Dirichlet distribution and satisfy the following constraints [44]:
$\omega \in \left\{ \omega \in \mathbb{R}^{N_f} : \sum_{f=1}^{N_f} \omega_f = 1, \ \omega_f \geq 0 \right\},$
where $N_f$ is the number of potential field functions. Using Bayesian updating, the parameter vector of the Dirichlet distribution is dynamically adjusted to achieve adaptive optimization of the weight distribution over the potential field functions. The parameter vector is:
$\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_{N_f}).$
The update is based on the Pearson correlation coefficient between each potential field's values and the advantage values, so that potential functions with a strong positive correlation to the advantage receive higher weights and thus play a greater role in the reward shaping process. This mechanism is specifically designed for policy-optimization-based reinforcement learning methods.
During a warm-up phase, agents are trained using only the base reward, R b a s e t . This promotes the initial convergence of the value network, which in turn improves the credibility of the advantage function estimates.
Following the warm-up, the weight adjustment phase begins. In this phase, agents interact with the environment to collect trajectory data during each episode. For each timestep t in a trajectory, the following data tuple is recorded:
$\tau = \left\{ \left( s_t, a_t, A(s_t, a_t), \{ \phi_f(s_t) \}_{f=1}^{N_f} \right) \right\}_{t=1}^{L},$
where $s_t$ is the global state, which in this implementation is constructed by aggregating all agents' local observations; $a_t$ is the joint action; $A(s_t, a_t)$ is the advantage value; $\phi_f(s_t)$ is the value of the $f$-th potential field function in state $s_t$; and $L$ is the trajectory length.
For each potential field function $\phi_f$, the advantage-potential data pairs $\{(A_t, \phi_t^f)\}_{t=1}^{L}$ are extracted and the following steps are performed.
The mean potential energy is calculated as:
$\mu_\phi^f = \frac{1}{L} \sum_{t=1}^{L} \phi_t^f.$
The mean advantage is calculated as:
$\mu_A = \frac{1}{L} \sum_{t=1}^{L} A_t.$
The Pearson correlation coefficient between the potential energy values and the advantage values is calculated as:
$\rho_f = \frac{\sum_{t=1}^{L} (\phi_t^f - \mu_\phi^f)(A_t - \mu_A)}{\sqrt{\sum_{t=1}^{L} (\phi_t^f - \mu_\phi^f)^2}\, \sqrt{\sum_{t=1}^{L} (A_t - \mu_A)^2}}.$
The correlation coefficients are then truncated and normalized:
$\rho_f \leftarrow \max(\rho_f, 0), \qquad \rho_f \leftarrow \rho_f \Big/ \sum_{f'=1}^{N_f} \rho_{f'}.$
The Dirichlet distribution parameters are updated using the normalized coefficients:
$\alpha_f^{\mathrm{new}} = \alpha_f^{\mathrm{old}} + \eta\, \rho_f,$
where $\eta$ is the learning rate for updating the Dirichlet parameters.
After updating the parameters, the mean of the Dirichlet distribution is used as the new weight of each potential field function:
$\omega_f = \alpha_f \Big/ \sum_{f'=1}^{N_f} \alpha_{f'}.$
In summary, the proposed mechanism enables adaptive adjustment of fusion weights, ensuring a balanced contribution of different potential fields to the overall shaping reward. The Adaptive Fusion Weight Mechanism is outlined in Algorithm 1.
Algorithm 1. Adaptive Fusion Weight Mechanism.
1: Input: number of potential fields $N_f$, learning rate $\eta$, trajectory length $L$
2: Initialize: Dirichlet parameters $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_{N_f})$ uniformly
3: Warm-up Phase:
   Agents are trained using only base reward R base t  to stabilize value estimation
4: Weight Adjustment Phase:
5: for each episode do
6:    Collect trajectory $\tau = \{(s_t, a_t, A_t, \{\phi_t^f\}_{f=1}^{N_f})\}_{t=1}^{L}$
7:    for each potential field $f = 1$ to $N_f$ do
8:      Compute mean potential energy: $\mu_\phi^f = \frac{1}{L}\sum_{t=1}^{L} \phi_t^f$
9:      Compute mean advantage: $\mu_A = \frac{1}{L}\sum_{t=1}^{L} A_t$
10:     Compute Pearson correlation coefficient:
              $\rho_f = \dfrac{\sum_{t=1}^{L} (\phi_t^f - \mu_\phi^f)(A_t - \mu_A)}{\sqrt{\sum_{t=1}^{L} (\phi_t^f - \mu_\phi^f)^2}\, \sqrt{\sum_{t=1}^{L} (A_t - \mu_A)^2}}$
11:     Truncate: $\rho_f \leftarrow \max(\rho_f, 0)$
12:  end for
13:  Normalize correlations: $\rho_f \leftarrow \rho_f / \sum_{f'=1}^{N_f} \rho_{f'}$
14:  for each potential field $f = 1$ to $N_f$ do
15:     Update Dirichlet parameter: $\alpha_f \leftarrow \alpha_f + \eta\, \rho_f$
16:  end for
17:  for each potential field $f = 1$ to $N_f$ do
18:     Compute new weight: $\omega_f = \alpha_f / \sum_{f'=1}^{N_f} \alpha_{f'}$
19:  end for
20: end for
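A compact Python sketch of one weight-adjustment step of Algorithm 1, assuming NumPy arrays for the collected advantages and potential values; the function name, argument layout, and learning-rate default are illustrative.

```python
import numpy as np

def update_fusion_weights(alpha, advantages, potentials, eta=0.1):
    """One weight-adjustment step of Algorithm 1.

    alpha      : (N_f,) current Dirichlet parameters.
    advantages : (L,) advantage values A_t collected over one trajectory.
    potentials : (L, N_f) potential values phi_f(s_t) for each field.
    Returns (new_alpha, new_weights).
    """
    advantages = np.asarray(advantages, dtype=float)
    potentials = np.asarray(potentials, dtype=float)
    n_f = potentials.shape[1]

    # Pearson correlation between each potential field and the advantage
    rho = np.zeros(n_f)
    a_centered = advantages - advantages.mean()
    for f in range(n_f):
        p_centered = potentials[:, f] - potentials[:, f].mean()
        denom = np.sqrt((p_centered ** 2).sum() * (a_centered ** 2).sum())
        rho[f] = (p_centered * a_centered).sum() / denom if denom > 0 else 0.0

    # truncate negative correlations and normalize
    rho = np.maximum(rho, 0.0)
    rho = rho / rho.sum() if rho.sum() > 0 else np.full(n_f, 1.0 / n_f)

    new_alpha = np.asarray(alpha, dtype=float) + eta * rho   # Dirichlet parameter update
    weights = new_alpha / new_alpha.sum()                    # Dirichlet mean as fusion weights
    return new_alpha, weights
```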

3.4. MPRS-MAPPO Algorithm

Based on the above potential fields and fusion mechanism, we propose the MPRS-MAPPO algorithm. The algorithm introduces a multi-potential field fusion reward shaping mechanism on the basis of the standard MAPPO framework [26] and adopts a centralized training and decentralized execution (CTDE) architecture. During training, each agent collects experience tuples consisting of states, actions, and shaped rewards, which are stored in a shared replay buffer. During centralized training, the global state is used by the value network and for reward shaping to stabilize learning. During decentralized execution, each UAV makes decisions based only on its local observation. The fusion weights of the potential fields are updated adaptively based on their correlation with advantage values, ensuring that more informative potentials have a stronger influence on learning. This allows agents to efficiently explore the environment while maintaining coordinated behavior.
Its core innovation lies in using the multi-potential field fusion module to correct the original environment reward in real time and guide the agent to learn the cooperative policy. Meanwhile, the proposed mechanism is fully compatible with the existing MAPPO algorithm based on policy optimization. The framework of the algorithm is shown in Figure 7.
The key steps of the centralized training of the algorithm are as follows:
(1)
Action And Environment Sampling
The MPRS-MAPPO framework inherits the policy optimization approach of MAPPO, where each agent $i$ has an independent policy network $\pi_{\theta_i}$ and a shared value network $v_{\omega_i}$. At each timestep $t$, multiple parallel environments provide a set of local observations $\{o_t^i\}_{i=1}^{N_a}$. Each agent then samples an action $a_t^i$ from its policy network, forming the action set $\{a_t^i\}_{i=1}^{N_a}$. After executing this joint action, the environment returns the next observations and the base reward $R_{\mathrm{base}}^t$.
(2)
Reward Shaping
The multi-potential field fusion shaping module shapes the original reward to obtain the shaping reward R s h a p e t , where the fusion weights are computed using Algorithm 1:
$R_{\mathrm{shape}}^t = \omega_{\mathrm{edge}} \left[ \gamma \phi_{\mathrm{edge}}(s_{t+1}) - \phi_{\mathrm{edge}}(s_t) \right] + \omega_{p\mathrm{max}} \left[ \gamma \phi_{p\mathrm{max}}(s_{t+1}) - \phi_{p\mathrm{max}}(s_t) \right] + \omega_{p\mathrm{cov}} \left[ \gamma \phi_{p\mathrm{cov}}(s_{t+1}) - \phi_{p\mathrm{cov}}(s_t) \right].$
The final reward $R_{\mathrm{total}}^t$ is the sum of the base reward $R_{\mathrm{base}}^t$ and the shaping reward $R_{\mathrm{shape}}^t$. The experience tuple $(s_t, a_t, R_{\mathrm{total}}^t, s_{t+1})$ is stored in the experience replay buffer. Here, the potential field functions $\phi_{\mathrm{edge}}$, $\phi_{p\mathrm{max}}$, and $\phi_{p\mathrm{cov}}$ are all computed from the global state to ensure that shaping reflects the cooperative search context.
In the reward shaping stage, the global state which aggregates all agents’ local observations is used to compute the potential field values. This allows the shaping module to consider the overall spatial distribution of UAVs and targets, thereby maintaining consistent cooperative guidance.
(3)
Value Network Update
The shared value network is updated by minimizing the following loss function, where the global state s t is used as input to estimate the overall expected return of the multi-UAV system:
$L(\omega_i) = \mathbb{E}_{s_t}\!\left[ \left( v_{\omega_i}(s_t) - \hat{R}_i^t \right)^2 \right],$
where $\hat{R}_i^t = \sum_{t' \geq t} \gamma^{t'-t} R_{\mathrm{total}}^{t'}$ is the cumulative discounted return.
During centralized training, the value network receives the global state s t as input, allowing it to estimate the overall expected return of the multi-agent system. This enables each agent’s policy to be optimized with respect to the joint environment dynamics.
(4)
Policy Network Update
Each agent’s policy network π θ i uses only its local observation o t i to select actions, ensuring decentralized execution. However, during training, policy optimization is performed using advantage estimates derived from the global value function, thereby integrating centralized information into decentralized learning.
The objective function of policy update is:
$L(\theta_i) = \mathbb{E}_{(s_t, a_t)}\!\left[ \min\!\left( r_t(\theta_i)\, A_{\theta_i}(o_t^i, a_t^i),\ \mathrm{clip}\!\left(r_t(\theta_i), 1 - \varepsilon, 1 + \varepsilon\right) A_{\theta_i}(o_t^i, a_t^i) \right) \right],$
where $r_t(\theta_i) = \dfrac{\pi_{\theta_i}(a_t^i \mid o_t^i)}{\pi_{\theta_i^{\mathrm{old}}}(a_t^i \mid o_t^i)}$ is the importance sampling ratio, $\mathrm{clip}(\cdot)$ limits the policy update range, and $A_{\theta_i}$ is the generalized advantage estimation (GAE), calculated as follows:
$A_{\theta_i} = \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta_{t+l}^i, \qquad \delta_t^i = R_{\mathrm{total}}^t + \gamma\, v_{\omega_i}(s_{t+1}) - v_{\omega_i}(s_t),$
where $\gamma$ is the reinforcement learning discount factor, $\lambda$ is the GAE hyperparameter, and $\delta_t^i$ is the temporal difference (TD) error of agent $i$ at time $t$.
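For reference, the GAE recursion can be computed as in the sketch below; the discount and $\lambda$ defaults are placeholders rather than the paper's hyperparameters.

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Generalized advantage estimation over one trajectory.

    rewards : (T,) shaped total rewards R_total^t.
    values  : (T + 1,) critic values v(s_t), including the bootstrap value.
    """
    T = len(rewards)
    adv = np.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]   # TD error
        gae = delta + gamma * lam * gae
        adv[t] = gae
    return adv
```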
In the distributed execution phase of the algorithm, each agent relies on its own local observation o t i and the converged policy network to make independent decisions, realizing complete decentralized control. As shown in the following:
$a_t^i = \arg\max_{a} \pi_{\theta_i}(a \mid o_t^i).$
The key to this architecture is the design of three complementary potential functions ( ϕ e d g e , ϕ p m a x , ϕ p c o v ), each with a distinct focus. These are integrated via the Adaptive Fusion Weight Mechanism, which balances their respective contributions during training. The adaptive mechanism dynamically adjusts the fusion weights according to their correlation with the advantage values, ensuring that more informative potential fields have a stronger influence at each training stage. By assigning greater weight to functions that are more beneficial for learning, this method improves both training efficiency and the quality of the final converged policy. The complete MPRS-MAPPO algorithm is outlined in Algorithm 2.
Algorithm 2. MPRS-MAPPO.
1: Notations:
    θ : policy network parameters (actor)
    ω : value network parameters (critic)
    α : Dirichlet distribution parameters (from Algorithm 1)
    D : experience buffer
    W : warm-up stage length, T : episode horizon, K : update iterations
    η θ , η ω , η α : learning rates
    ϕ f : potential field functions { ϕ edge , ϕ pmax , ϕ pcov }
    ω f : adaptive fusion weight of field f
    R base , R shape , R total : base, shaping, and total rewards
2: Initialize: $\theta$, $\omega$, $\alpha$ (uniform), $D = \emptyset$, $episode = 0$
3: while policy not converged do
4:     e p i s o d e     e p i s o d e   +   1
5:    for t = 1 to   T   do
6:    Each agent $i$: sample $a_t^i \sim \pi_{\theta_i}(\cdot \mid o_t^i)$
7:    $(s_{t+1}, R_{\mathrm{base}}) \leftarrow env.step(a_t)$
8:    if $episode \leq W$ then                       Warm-up
9:       $R_{\mathrm{total}} = R_{\mathrm{base}}$
10:     else                      Fusion Reward Shaping
11:       $R_{\mathrm{shape}} = \sum_{f \in \{\mathrm{edge},\, p\mathrm{max},\, p\mathrm{cov}\}} \omega_f \left[ \gamma \phi_f(s_{t+1}) - \phi_f(s_t) \right]$
12:       $R_{\mathrm{total}} = R_{\mathrm{base}} + R_{\mathrm{shape}}$
13:    end if
14:     $D.add(s_t, a_t, R_{\mathrm{total}}, s_{t+1})$
15:  end for
16:  if e p i s o d e   >   W   then                 Adaptive Fusion Weights
17:     Call Algorithm 1 to update α f   and compute   ω f
18:  end if
19:  for $k = 1$ to $K$ do                  Policy and Value Updates
20:      $\hat{R}_t = \sum_{l=0} \gamma^l R_{\mathrm{total}}^{t+l}$
21:      $\omega \leftarrow \omega - \eta_\omega \nabla_\omega\, \mathbb{E}\big[ (v_\omega(s_t) - \hat{R}_t)^2 \big]$
22:      $A_t^{GAE} = \sum_{l=0} (\gamma\lambda)^l \delta_{t+l}$
23:      $\theta_i \leftarrow \theta_i + \eta_\theta \nabla_{\theta_i}\, \mathbb{E}\big[ \min\big( r_t A_t^{GAE},\ \mathrm{clip}(r_t, 1-\varepsilon, 1+\varepsilon)\, A_t^{GAE} \big) \big]$
24:  end for
25: end while

4. Results and Discussion

To evaluate the performance of the proposed MPRS-MAPPO algorithm in multi-UAV cooperative search tasks, we designed representative scenarios in which multiple UAVs search for multiple dynamic targets. Systematic experiments were conducted using baseline algorithms and multiple evaluation metrics, and ablation studies were further performed to analyze the performance contributions of individual components. The experiments include both simulation tests and physical flight experiments. The experimental setup and results are presented as follows.

4.1. Experimental Environment and Parameter Settings

The experiments were conducted on a high-performance workstation (Intel i7-13700KF, 32 GB RAM, NVIDIA RTX 3060Ti), where a multi-UAV cooperative search simulation environment was built using PyTorch (version 2.3.0) and OpenAI Gym (version 0.20.0).
As shown in Figure 8, the schematic of the simulation environment is organized into three layers: the top layer displays the current positions and trajectories of the UAVs, the middle layer presents the probabilistic estimates of undiscovered target locations, and the bottom layer shows the positions of detected targets. This simulation environment integrates UAV agents, dynamic targets, and probability distributions derived from sensor observations.
The simulation area was defined as a 15 km × 15 km discrete grid (step size: 1 km). In the representative 4v6 scenario, four UAVs were initialized at random positions to search for six dynamic targets. Each UAV was modeled as a fixed-wing aircraft with a speed of 60 m/s, capable of performing forward, left-turn, and right-turn maneuvers. Collision avoidance was achieved by altitude separation (0.9–1.1 km). The onboard sensor had a coverage of 1 km × 1 km, a detection probability of 0.95, and a false alarm probability of 0.05. Targets moved randomly at approximately 20 m/s and could potentially leave the area.
The main experimental parameters for the UAV search task are summarized in Table 2. The environment size, UAV/target numbers, and their speeds define task dynamics, while detection and false alarm probabilities reflect sensing reliability. For training, episode length, learning rates, discount factor, GAE parameter, PPO clipping, and warm-up rollouts are specified.

4.2. Algorithm Effectiveness Analysis

To verify the effectiveness of the proposed algorithm, detailed analyses were first conducted under the standard experimental configuration of 4 UAVs and 6 targets (4v6). Experiments were conducted with 10 different random seeds. The evaluation metrics include the convergence trends of return and target detection rate, the dynamic variations in the three potential field weights, and the UAV search trajectories after training.
During training, the global return and target detection rate of MPRS-MAPPO both exhibited stable convergence, as shown in Figure 9. Specifically, the return increased from 232.47 to 451.43, while the target detection rate rose from 41.52% to 78.89%, demonstrating the algorithm’s strong optimization capability and convergence properties.
As illustrated in Figure 10, during the warm-up phase, the weights of the three potential field functions were evenly initialized at 33.33%. After the warm-up, the weight of the Maximum Probability Potential Field ϕ p m a x rapidly increased to 48.36%, then decreased and stabilized at 42.14%, showing a “rise-then-fall” trend. In contrast, the weights of the Probability Edge Potential Field ϕ e d g e and the Coverage Probability Sum Potential Field ϕ p c o v initially dropped to 23.86% and 27.61%, respectively, and later gradually increased, stabilizing at 27.38% and 29.47%. This dynamic adaptation validates the algorithm’s capability for multi-potential field fusion.
Figure 11 illustrates the execution process of the UAVs. Red aircraft represent the agents, and green dots denote the true target positions (visible only initially or upon discovery).
At the beginning, targets were randomly distributed. By step 8, the UAVs had detected three targets and gradually converged toward high-probability areas; by step 11, two additional targets were detected; and by step 26, all targets were successfully located. The trajectories demonstrate that the UAVs progressively shifted from concentrated search to distributed coverage, thereby improving overall efficiency.
For clarity, the discovery time of each target is annotated at its upper right corner, while the movement directions of the UAVs are indicated along their respective trajectories.
In terms of real-time performance, the average decision-making time per step was approximately 0.015 s, indicating that the proposed method can meet real-time requirements in online multi-UAV cooperative search scenarios.
To further evaluate the robustness and scalability of the proposed algorithm, comparative experiments were conducted under different task scales, including 2 UAVs vs. 3 targets (2v3), 4 UAVs vs. 6 targets (4v6), and 6 UAVs vs. 9 targets (6v9). The 6v9 scenario involved a more complex case where several targets were initialized in close proximity. The convergence curves of global return under different task scales are shown in Figure 12. The results indicate that the algorithm achieved effective convergence across all configurations, with slightly slower convergence as task complexity increased. Since the number of targets differs among scenarios, the return curves are distributed at different height levels accordingly.

4.3. Comparison with Existing Methods

To comprehensively evaluate the overall performance of MPRS-MAPPO, four representative methods were selected as baselines: three multi-agent reinforcement learning algorithms, MAPPO, MASAC and QMIX, and a heuristic coverage-based search strategy, Scanline. The evaluation metrics included global return curve trends, converged return values, target detection rate, and the rolling standard deviation of the return curves.
MAPPO [26] is a multi-agent policy optimization algorithm that performs centralized training with decentralized execution and uses a clipped surrogate objective to stabilize policy updates. It effectively balances policy improvement and variance control, making it a widely adopted baseline in cooperative MARL tasks.
MASAC [37] is an off-policy, entropy-regularized algorithm that enhances the exploration and stability of multi-agent systems, particularly in environments with incomplete information.
QMIX [25] is a value-based multi-agent reinforcement learning algorithm that decomposes the joint action-value function into individual agent contributions while ensuring monotonicity.
Scanline [10] is a heuristic coverage-based search method that directs UAVs along pre-defined sweeping paths to maximize area coverage and target detection. Although simple and easy to implement, it lacks learning ability and adaptability in dynamic or partially observable environments.
As shown in Figure 13, MPRS-MAPPO outperformed the other algorithms in terms of both convergence speed and stability. While MAPPO and QMIX improved policy performance to some extent, MAPPO suffered from slower convergence, and QMIX exhibited significant fluctuations. MASAC converged faster than MAPPO and reached a similar steady return, with variance larger than MAPPO but smaller than QMIX. As a heuristic method, Scanline oscillated around its mean return value without demonstrating learning or optimization capabilities.
To further analyze behavioral characteristics, the representative search trajectories were visualized, as shown in Figure 14.
Only representative trajectory examples are shown here for clarity; the MASAC trajectories are not displayed due to their similarity in overall pattern to MAPPO. MPRS-MAPPO successfully detected all six targets within only 26 steps, demonstrating a “first concentration, then dispersion” cooperative pattern: initially focusing on high-probability areas, followed by dynamic task allocation to cover wider areas. This reflects advantages in shorter paths and balanced workload distribution. In contrast, MAPPO detected five targets within 36 steps but suffered from uneven workload allocation and missed detections in the later stage. QMIX required 45 steps to detect only four targets, with trajectories biased toward one side and limited utilization of probabilistic information. Scanline detected only three targets after 64 steps due to its static coverage strategy, resulting in the lowest efficiency.
The quantitative comparison is summarized in Table 3. MPRS-MAPPO achieved significantly higher converged return values and detection rates than the baseline methods, while also maintaining the lowest return standard deviation, indicating superior search efficiency and more stable training performance. In this paper, “training uncertainty” refers to the variability of the cumulative returns during training, which is quantitatively measured by the standard deviation of the returns shown in Table 3.
In summary, MPRS-MAPPO demonstrated clear advantages in multi-UAV cooperative search tasks. It outperformed the baseline methods in terms of convergence speed, target detection rate, and training stability (Return Std), thereby verifying the effectiveness of the adaptive fusion weight mechanism in complex cooperative scenarios. Compared to MAPPO, MASAC, QMIX, and Scanline, MPRS-MAPPO improved target detection rates by 7.87%, 12.06%, 17.35%, and 29.76%, respectively, while reducing training uncertainty by 7.43%, 47.13%, 53.36%, and 56.29%. Future work will explore integrating MHT [46] or PHD [47] tracking to enhance target continuity.

4.4. Ablation Studies

To comprehensively evaluate the MPRS-MAPPO algorithm, two groups of ablation experiments were conducted: one to assess the contribution of each algorithmic component and the other to analyze the cooperative behavior among agents. The first group examines the impact of reward shaping, multi-potential-field fusion, and the warm-up stage on learning performance. The second group explores how partial loss of cooperation affects system efficiency by fixing some UAVs to random policies.
To validate the effectiveness of each component in MPRS-MAPPO, we designed ablation experiments focusing on four aspects:
  • The effectiveness of reward shaping;
  • Whether multi-potential fields outperform single-potential fields;
  • Whether dynamic fusion is superior to fixed fusion;
  • The role of the warm-up stage.
The evaluation metrics included convergence speed, the converged return value, and the stability of the return curves. All methods were implemented on the MAPPO framework under the same environment as the comparative experiments. The ablation study methods are summarized in Table 4.
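For readers reimplementing the ablations, the variants in Table 4 can be expressed as a simple configuration map, as sketched below. The field names are our own labels, and the warm-up setting of the fixed-fusion and single-field variants is an assumption, since Table 4 only distinguishes warm-up for MPRS-MAPPO and NoWarmup.

```python
# Configuration view of the ablation variants in Table 4.
# Entry format: (potential fields used, fusion mode, warm-up enabled).
PF_EDGE, PF_PMAX, PF_PCOV = "phi_edge", "phi_pmax", "phi_pcov"

ABLATION_VARIANTS = {
    "MPRS-MAPPO":     ((PF_EDGE, PF_PMAX, PF_PCOV), "adaptive", True),
    "NoWarmup":       ((PF_EDGE, PF_PMAX, PF_PCOV), "adaptive", False),
    "Multi-PF-Fixed": ((PF_EDGE, PF_PMAX, PF_PCOV), "fixed",    True),   # warm-up assumed
    "Single-PF1":     ((PF_EDGE,),                  None,       True),   # warm-up assumed
    "Single-PF2":     ((PF_PMAX,),                  None,       True),   # warm-up assumed
    "Single-PF3":     ((PF_PCOV,),                  None,       True),   # warm-up assumed
    "Baseline":       ((),                          None,       False),  # sparse reward only
}

def shaping_enabled(name: str) -> bool:
    """A variant uses reward shaping iff it employs at least one potential field."""
    fields, _, _ = ABLATION_VARIANTS[name]
    return len(fields) > 0
```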
The training return curves of different methods are shown in Figure 15. Overall, MPRS-MAPPO consistently achieved the highest returns with the fastest convergence speed, demonstrating that multi-potential-field fusion and the warm-up stage significantly facilitate policy learning. NoWarmup and Multi-PF-Fixed ranked second and third, respectively, indicating that the warm-up stage further improves training performance and that dynamic fusion is superior to fixed fusion. The single-potential-field methods and the baseline showed much lower returns.
Statistical results of the ablation study are summarized in Table 5.
During convergence, MPRS-MAPPO achieved the highest mean return (an 11.58% improvement over the baseline) and the lowest return standard deviation (a 7.43% reduction), demonstrating superior performance and stability. By contrast, the single-potential-field methods provided only limited improvements and, in some cases, even introduced instability.
In summary, multi-potential-field fusion with fixed weights contributed a 5.54% improvement in return; replacing the fixed weights with the adaptive fusion weight mechanism raised the improvement to 9.69%; and adding the warm-up stage increased the overall improvement to 11.58% (Table 5). These results demonstrate the effectiveness and complementarity of reward shaping, multi-potential-field fusion, and the warm-up stage in complex cooperative search tasks.
To further evaluate the impact of inter-agent cooperation, additional experiments were conducted in which a subset of the UAVs was fixed to a random policy, while the remaining UAVs continued to execute the learned cooperative policy.
The results are summarized in Table 6. As the number of random (non-cooperative) UAVs increased, the overall performance gradually declined. The global return decreased from 451.43 (4 cooperative UAVs) to 303.47 (only 1 cooperative UAV), while the detection rate dropped from 78.89% to 54.09%. Meanwhile, the average episode length increased, reflecting slower mission completion and lower search efficiency due to weakened cooperation.
These results clearly indicate that multi-UAV cooperation plays a critical role in achieving efficient target search and high detection performance.
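The protocol behind Table 6 amounts to an evaluation loop in which the first k UAVs act under a uniform random policy while the remaining UAVs execute the trained policy. The sketch below illustrates this under assumed interfaces: env.reset, env.step, policy.act, and the info dictionary key are placeholders of our own, not the actual APIs used in the experiments.

```python
import numpy as np

def evaluate_mixed_team(env, policy, n_uavs=4, n_random=1,
                        episodes=50, n_actions=8, seed=0):
    """Evaluate a team in which the first n_random UAVs act randomly."""
    rng = np.random.default_rng(seed)
    returns, detection_rates, lengths = [], [], []
    for _ in range(episodes):
        obs = env.reset()                 # list of per-UAV observations (assumed)
        done, ep_return, steps, info = False, 0.0, 0, {}
        while not done:
            actions = []
            for i in range(n_uavs):
                if i < n_random:          # non-cooperative UAV: random action
                    actions.append(int(rng.integers(n_actions)))
                else:                     # cooperative UAV: trained policy
                    actions.append(policy.act(obs[i], agent_id=i))
            obs, reward, done, info = env.step(actions)   # shared team reward (assumed)
            ep_return += reward
            steps += 1
        returns.append(ep_return)
        detection_rates.append(info.get("detection_rate", 0.0))
        lengths.append(steps)
    return np.mean(returns), np.mean(detection_rates), np.mean(lengths)
```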

4.5. Physical Experiments

To validate the effectiveness of the proposed algorithm on a real UAV platform, a physical dynamic target search experiment was conducted under controlled field conditions. A quadrotor UAV, equipped with a visible-light sensor for visual target detection, searched for a dynamic target (a ground vehicle) whose trajectory was unknown; only the target's initial position was known to the UAV before takeoff. After the experiment started, the vehicle began to move randomly, and the UAV autonomously navigated toward the high-probability region to search for the target using the proposed MPRS-MAPPO algorithm. The main experimental parameters are listed in Table 7.
The experimental field setup is shown in Figure 16, where the UAV and dynamic target were deployed in an open area. At the start of the experiment, the target vehicle began to move randomly within the designated region, while the UAV executed the proposed search policy.
The results of the physical test are illustrated in Figure 17. From left to right, the purple time axis represents the progression of the experiment.
The experimental states at each moment are shown in the dashed boxes aligned with the timeline, including the Field View (real-world scene), Sensor View (onboard camera image), and Ground Station Interface (real-time monitoring).
Search Start (t1 = 37.23 s): The UAV takes off and begins its search. The target is still far away; thus, no vehicle appears in the Field View or Sensor View. The Ground Station Interface shows the UAV and target at their initial positions, separated by a large distance.
Mid Process (t2 = 39.81 s): Both the UAV and target have moved, and the UAV is approaching the high-probability region. However, the target has not yet entered the sensor’s field of view, and no detection is achieved at this stage.
Search Success (t3 = 41.62 s): The UAV maneuvers above the target, and the vehicle becomes visible in the Sensor View. The search is completed successfully, and the task is terminated.
The experimental results confirm that the proposed MPRS-MAPPO algorithm can operate on a real UAV platform and autonomously locate dynamic targets in an outdoor environment. The UAV successfully completed the search mission with accurate navigation and real-time target detection, demonstrating the algorithm’s robustness, real-world feasibility, and potential for practical deployment.

5. Conclusions

This paper addresses the challenges of sparse rewards, unstable training, and inefficient convergence of cooperative strategies in multi-UAV cooperative search for dynamic targets. We propose MPRS-MAPPO, a multi-agent reinforcement learning algorithm that incorporates a multi-potential-field fusion reward shaping mechanism tailored to dynamic target search. The method integrates three complementary potential field functions: the Probability Edge Potential Field, the Maximum Probability Potential Field, and the Coverage Probability Sum Potential Field. An adaptive fusion weight mechanism dynamically adjusts their fusion weights, and a warm-up stage is introduced to mitigate early-stage misguidance. These designs effectively improve both policy learning efficiency and multi-agent cooperation capability.
Compared with MAPPO, MASAC, QMIX, and the heuristic Scanline method, MPRS-MAPPO achieved notable improvements in convergence speed, global return, and detection rate (by 7.87–29.76%), while reducing training uncertainty by 7.43–56.29%. Ablation studies confirmed that reward shaping, multi-potential-field fusion, and the warm-up stage jointly enhance learning efficiency. The cooperation ablation showed clear performance degradation when some UAVs followed random policies, highlighting the importance of collaboration. Multi-scale (2v3, 4v6, 6v9) and physical flight experiments further verified the algorithm's scalability, robustness, and real-world applicability.
Future work will focus on extending the framework to more complex and realistic environments to further improve system robustness and practical applicability.

Author Contributions

Conceptualization, X.H. and Y.W.; methodology, X.H. and Z.W.; software, X.H.; validation, C.X.; formal analysis, Y.W.; investigation, X.H.; resources, Z.W.; data curation, Y.G.; writing—original draft preparation, X.H.; writing—review and editing, Y.G.; visualization, C.X.; supervision, Y.W.; project administration, Z.W.; funding acquisition, Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bisio, I.; Garibotto, C.; Haleem, H.; Lavagetto, F.; Sciarrone, A. RF/WiFi-Based UAV Surveillance Systems: A Systematic Literature Review. Internet Things 2024, 26, 101201.
  2. Ekechi, C.C.; Elfouly, T.; Alouani, A.; Khattab, T. A Survey on UAV Control with Multi-Agent Reinforcement Learning. Drones 2025, 9, 484.
  3. Javed, S.; Hassan, A.; Ahmad, R.; Ahmed, W.; Ahmed, R.; Saadat, A.; Guizani, M. State-of-the-Art and Future Research Challenges in UAV Swarms. IEEE Internet Things J. 2024, 11, 19023–19045.
  4. Dinaryanto, O.; Hermawan, D.; Agustian, H.; Astuti, Y. The Technology Development and Application of Unmanned Aerial Vehicle (UAV) in the Agriculture Field in Indonesia. In Proceedings of the 2024 International Conference of Adisutjipto on Aerospace Electrical Engineering and Informatics (ICAAEEI), Yogyakarta, Indonesia, 11–12 December 2024; IEEE: Piscataway, NJ, USA; pp. 1–6.
  5. Telli, K.; Kraa, O.; Himeur, Y.; Ouamane, A.; Boumehraz, M.; Atalla, S.; Mansoor, W. A Comprehensive Review of Recent Research Trends on Unmanned Aerial Vehicles (UAVs). Systems 2023, 11, 400.
  6. Wilson, T.; Williams, S.B. Adaptive Path Planning for Depth-constrained Bathymetric Mapping with an Autonomous Surface Vessel. J. Field Robot. 2018, 35, 345–358.
  7. Ma, Y.; Zhao, Y.; Li, Z.; Yan, X.; Bi, H.; Królczyk, G. A New Coverage Path Planning Algorithm for Unmanned Surface Mapping Vehicle Based on A-Star Based Searching. Appl. Ocean Res. 2022, 123, 103163.
  8. Lu, J.; Zeng, B.; Tang, J.; Lam, T.L.; Wen, J. TMSTC*: A Path Planning Algorithm for Minimizing Turns in Multi-Robot Coverage. IEEE Robot. Autom. Lett. 2023, 8, 5275–5282.
  9. Xie, J.; Garcia Carrillo, L.R.; Jin, L. Path Planning for UAV to Cover Multiple Separated Convex Polygonal Regions. IEEE Access 2020, 8, 51770–51785.
  10. Giang, T.T.C.; Lam, D.T.; Binh, H.T.T.; Ly, D.T.H.; Huy, D.Q. BWave Framework for Coverage Path Planning in Complex Environment with Energy Constraint. Expert Syst. Appl. 2024, 248, 123277.
  11. Xia, Y.; Chen, C.; Liu, Y.; Shi, J.; Liu, Z. Two-Layer Path Planning for Multi-Area Coverage by a Cooperative Ground Vehicle and Drone System. Expert Syst. Appl. 2023, 217, 119604.
  12. Saha, S.; Vasegaard, A.E.; Nielsen, I.; Hapka, A.; Budzisz, H. UAVs Path Planning under a Bi-Objective Optimization Framework for Smart Cities. Electronics 2021, 10, 1193.
  13. Maskooki, A.; Kallio, M. A Bi-Criteria Moving-Target Travelling Salesman Problem under Uncertainty. Eur. J. Oper. Res. 2023, 309, 271–285.
  14. Lejeune, M.; Royset, J.O.; Ma, W. Multi-agent Search for a Moving and Camouflaging Target. Nav. Res. Logist. 2024, 71, 532–552.
  15. Cho, S.-W.; Park, J.-H.; Park, H.-J.; Kim, S. Multi-UAV Coverage Path Planning Based on Hexagonal Grid Decomposition in Maritime Search and Rescue. Mathematics 2021, 10, 83.
  16. Kazemdehbashi, S.; Liu, Y. An Algorithm with Exact Bounds for Coverage Path Planning in UAV-Based Search and Rescue under Windy Conditions. Comput. Oper. Res. 2025, 173, 106822.
  17. Yousuf, B.; Lendek, Z.; Buşoniu, L. Exploration-Based Search for an Unknown Number of Targets Using a UAV. IFAC-PapersOnLine 2022, 55, 93–98.
  18. Li, Y.; Chen, W.; Fu, B.; Wu, Z.; Hao, L.; Yang, G. Research on Dynamic Target Search for Multi-UAV Based on Cooperative Coevolution Motion-Encoded Particle Swarm Optimization. Appl. Sci. 2024, 14, 1326.
  19. Ma, T.; Wang, Y.; Li, X. Convex Combination Multiple Populations Competitive Swarm Optimization for Moving Target Search Using UAVs. Inf. Sci. 2023, 641, 119104.
  20. Yue, W.; Xi, Y.; Guan, X. A New Searching Approach Using Improved Multi-Ant Colony Scheme for Multi-UAVs in Unknown Environments. IEEE Access 2019, 7, 161094–161102.
  21. Wu, Y.; Nie, M.; Ma, X.; Guo, Y.; Liu, X. Co-Evolutionary Algorithm-Based Multi-Unmanned Aerial Vehicle Cooperative Path Planning. Drones 2023, 7, 606.
  22. Enhancing Biologically Inspired Swarm Behavior: Metaheuristics to Foster the Optimization of UAVs Coordination in Target Search. Comput. Oper. Res. 2019, 110, 34–47.
  23. Niu, Y.; Yan, X.; Wang, Y.; Niu, Y. An Improved Sand Cat Swarm Optimization for Moving Target Search by UAV. Expert Syst. Appl. 2024, 238, 122189.
  24. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 1998.
  25. Rashid, T.; Samvelyan, M.; Witt, C.S.D.; Farquhar, G.; Foerster, J.; Whiteson, S. QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018.
  26. Yu, C.; Velu, A.; Vinitsky, E.; Gao, J.; Wang, Y.; Bayen, A.; Wu, Y. The Surprising Effectiveness of PPO in Cooperative, Multi-Agent Games. In Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS 2022), New Orleans, LA, USA, 28 November–9 December 2022.
  27. Bany Salameh, H.; Hussienat, A.; Alhafnawi, M.; Al-Ajlouni, A. Autonomous UAV-Based Surveillance System for Multi-Target Detection Using Reinforcement Learning. Clust. Comput. 2024, 27, 9381–9394.
  28. Liu, Y.; Li, X.; Wang, J.; Wei, F.; Yang, J. Reinforcement-Learning-Based Multi-UAV Cooperative Search for Moving Targets in 3D Scenarios. Drones 2024, 8, 378.
  29. Zhao, H.; Cui, D.; Hao, M.; Xu, X.; Liu, Z. Cooperative Search Strategy of Multi-UAVs Based on Reinforcement Learning. In Proceedings of the 2022 International Conference on Autonomous Unmanned Systems (ICAUS 2022), Xi’an, China, 23–25 September 2022; Fu, W., Gu, M., Niu, Y., Eds.; Lecture Notes in Electrical Engineering; Springer Nature: Singapore, 2023; Volume 1010, pp. 3407–3415; ISBN 978-981-99-0478-5.
  30. Su, K.; Qian, F. Multi-UAV Cooperative Searching and Tracking for Moving Targets Based on Multi-Agent Reinforcement Learning. Appl. Sci. 2023, 13, 11905.
  31. Wei, D.; Zhang, L.; Liu, Q.; Chen, H.; Huang, J. UAV Swarm Cooperative Dynamic Target Search: A MAPPO-Based Discrete Optimal Control Method. Drones 2024, 8, 214.
  32. Yan, P.; Jia, T.; Bai, C. Searching and Tracking an Unknown Number of Targets: A Learning-Based Method Enhanced with Maps Merging. Sensors 2021, 21, 1076.
  33. Guo, H.; Liu, Z.; Shi, R.; Yau, W.-Y.; Rus, D. Cross-Entropy Regularized Policy Gradient for Multirobot Nonadversarial Moving Target Search. IEEE Trans. Robot. 2023, 39, 2569–2584.
  34. Guo, H.; Peng, Q.; Cao, Z.; Jin, Y. DRL-Searcher: A Unified Approach to Multirobot Efficient Search for a Moving Target. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 3215–3228.
  35. Boulares, M.; Fehri, A.; Jemni, M. UAV Path Planning Algorithm Based on Deep Q-Learning to Search for a Floating Lost Target in the Ocean. Robot. Auton. Syst. 2024, 179, 104730.
  36. Wang, G.; Wei, F.; Jiang, Y.; Zhao, M.; Wang, K.; Qi, H. A Multi-AUV Maritime Target Search Method for Moving and Invisible Objects Based on Multi-Agent Deep Reinforcement Learning. Sensors 2022, 22, 8562.
  37. Wang, E.; Liu, F.; Hong, C.; Guo, J.; Zhao, L.; Xue, J.; He, N. MADRL-Based UAV Swarm Non-Cooperative Game under Incomplete Information. Chin. J. Aeronaut. 2024, 37, 293–306.
  38. Wen, T.; Wang, X.; Chen, Q. A HAPPO Based Task Offloading Strategy in Heterogeneous Air-Ground Collaborative MEC Networks. In Proceedings of the 2025 IEEE/CIC International Conference on Communications in China (ICCC), Shanghai, China, 10–13 August 2025; IEEE: Piscataway, NJ, USA; pp. 1–6.
  39. Park, G.; Jung, W.; Han, S.; Choi, S.; Sung, Y. Adaptive Multi-Model Fusion Learning for Sparse-Reward Reinforcement Learning. Neurocomputing 2025, 633, 129748.
  40. Ng, A.Y.; Harada, D.; Russell, S. Policy Invariance under Reward Transformations: Theory and Application to Reward Shaping. In Proceedings of the Sixteenth International Conference on Machine Learning (ICML 1999), Bled, Slovenia, 27–30 June 1999.
  41. Bal, M.İ.; Aydın, H.; İyigün, C.; Polat, F. Potential-Based Reward Shaping Using State–Space Segmentation for Efficiency in Reinforcement Learning. Future Gener. Comput. Syst. 2024, 157, 469–484.
  42. Ye, C.; Zhu, W.; Guo, S.; Bai, J. DQN-Based Shaped Reward Function Mold for UAV Emergency Communication. Appl. Sci. 2024, 14, 10496.
  43. Huang, B.; Jin, Y. Reward Shaping in Multiagent Reinforcement Learning for Self-Organizing Systems in Assembly Tasks. Adv. Eng. Inf. 2022, 54, 101800.
  44. Gimelfarb, M.; Sanner, S.; Lee, C.-G. Reinforcement Learning with Multiple Experts: A Bayesian Model Combination Approach. In Proceedings of the 32nd Annual Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, QC, Canada, 2–8 December 2018.
  45. Fu, Z.-Y.; Zhan, D.-C.; Li, X.-C.; Lu, Y.-X. Automatic Successive Reinforcement Learning with Multiple Auxiliary Rewards. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, Macao, China, 10–16 August 2019; International Joint Conferences on Artificial Intelligence Organization: Palo Alto, CA, USA, 2019; pp. 2336–2342.
  46. Chong, C.-Y.; Mori, S.; Reid, D.B. Forty Years of Multiple Hypothesis Tracking—A Review of Key Developments. In Proceedings of the 2018 21st International Conference on Information Fusion (FUSION), Cambridge, UK, 10–13 July 2018; IEEE: Piscataway, NJ, USA; pp. 452–459.
  47. Zeng, Y.; Wang, J.; Wei, S.; Zhang, C.; Zhou, X.; Lin, Y. Gaussian Mixture Probability Hypothesis Density Filter for Heterogeneous Multi-Sensor Registration. Mathematics 2024, 12, 886.
Figure 1. Schematic diagram of dynamic target search scene.
Figure 2. Schematic diagram of sensor detection probability model. Warmer colors indicate higher probabilities.
Figure 3. Schematic diagram of drone operation model.
Figure 4. Schematic diagram of Probability Edge Potential Field ϕ_edge. Warmer colors indicate higher potential values.
Figure 5. Schematic diagram of Maximum Probability Potential Field ϕ_pmax. Warmer colors indicate higher potential values.
Figure 6. Schematic diagram of Coverage Probability Sum Potential Field ϕ_pcov. Warmer colors indicate higher potential values.
Figure 7. Framework of the MPRS-MAPPO algorithm.
Figure 8. Simulation environment setup. Different dashed colors represent the trajectories of different UAVs.
Figure 9. Training curves of MPRS-MAPPO. (a) Convergence of return curves. (b) Variation in target detection rate.
Figure 10. Variation in the weights of the three potential fields during training.
Figure 11. Dynamic target probability distributions and UAV search trajectories. Different dashed colors represent the trajectories of different UAVs.
Figure 12. Convergence curves of MPRS-MAPPO under different task scales (2v3, 4v6, 6v9).
Figure 13. Comparison of return curves between MPRS-MAPPO and existing methods.
Figure 14. Comparison of search trajectories between MPRS-MAPPO and existing methods. Different dashed colors represent the trajectories of different UAVs.
Figure 15. Comparison of convergence curves for different ablation methods.
Figure 16. Field setup of the physical experiment.
Figure 17. Experimental process and results.
Table 1. Comparison of related studies.

Study | Multi-UAV | Dynamic Target | Reward Shaping | Fusion Shaping
Planning-based [6,7,8,9,10,11] | full | low | none | none
Optimization-based [12,13,14,15,16,17] | full | middle | none | none
Heuristic-based [18,19,20,21,22,23] | full | middle | none | none
RL-based [27,28,29,30,31,32,33,34,35,36] | full | full | low | none
Reward shaping [39,40,41,42,43,44,45] | low | none | full | low
This work | full | full | full | full

Full: fully supported; middle: moderately supported; low: weakly supported; none: not supported.
Table 2. Experimental parameter settings for UAV search task.

Parameter | Value
Area size | 15 km × 15 km
Number of UAVs/Targets | 2, 4, 6 / 3, 6, 9
UAV speed | 60 m/s
Target speed | 20 m/s
Detection probability | 0.95
False alarm probability | 0.05
Episode length | 200
Actor/Critic learning rate | 1 × 10^−5 / 2 × 10^−5
Discount factor γ | 0.99
GAE λ | 0.95
PPO clipping parameter | 0.2
Warm-up rollouts | 40
Table 3. Performance comparison with existing methods.

Method | Global Return | Detection Rate (%) | Return Std
MPRS-MAPPO | 451.43 | 78.89 | 9.59
MAPPO | 406.85 | 71.02 | 10.36
MASAC | 395.07 | 66.83 | 18.14
QMIX | 334.90 | 61.54 | 20.56
Scanline | 240.91 | 49.13 | 21.94
Table 4. Description of ablation study methods.

Method Name | Description
MPRS-MAPPO | Multi-potential fields + dynamic fusion + warm-up stage (proposed method)
NoWarmup | Multi-potential fields + dynamic fusion, without a warm-up stage
Multi-PF-Fixed | Multi-potential-field fusion with fixed weights
Single-PF1 | Only Probability Edge Potential Field (ϕ_edge)
Single-PF2 | Only Maximum Probability Potential Field (ϕ_pmax)
Single-PF3 | Only Coverage Probability Sum Potential Field (ϕ_pcov)
Baseline | Sparse reward without reward shaping
Table 5. Statistical results of ablation study.

Method | Global Return | ΔReturn (%) | Return Std | ΔReturn Std (%)
MPRS-MAPPO | 451.43 | 11.58 | 9.59 | −7.43
NoWarmup | 444.31 | 9.69 | 9.86 | −4.82
Multi-PF-Fixed | 427.87 | 5.54 | 9.96 | −3.86
Single-PF1 | 416.82 | 2.60 | 10.99 | 6.08
Single-PF2 | 413.20 | 1.74 | 10.34 | −0.19
Single-PF3 | 422.49 | 3.98 | 10.87 | 4.92
Baseline | 406.85 | -- | 10.36 | --
Table 6. Cooperation ablation results under different ratios of cooperative and random UAVs.

Scenario | Cooperative UAVs | Random UAVs | Global Return | Detection Rate (%) | Episode Length
1 | 4 | 0 | 451.43 | 78.89 | 40
2 | 3 | 1 | 398.22 | 69.33 | 44
3 | 2 | 2 | 342.22 | 59.49 | 46
4 | 1 | 3 | 303.47 | 54.09 | 47
Table 7. Main parameters of the physical experiment.

Parameter | Value
Area size | 100 m × 100 m
Sensor type | Visible-light
UAV number | 1
Target number | 1
UAV speed | 15 m/s
Target car speed | 5 m/s
UAV flight height | 5 m
