Abstract
Single UAVs have limited capabilities for complex missions, so suitable solutions are needed to improve the mission success rate, as well as the UAVs’ survivability. A cooperative multi-UAV formation offers great advantages in this regard; however, for large and complex systems, traditional control methods become ineffective in unstable and changing environments. To address the poor self-adaptability and the high demand for environmental state information of traditional control methods for a multi-UAV cluster, this paper proposes a consistent round-up strategy based on PPO path optimization to track targets. In this strategy, the leader is trained using PPO for obstacle avoidance and target tracking, while the followers establish a communication network with the leader to obtain environmental information. On this basis, a tracking control law is designed, based on the consistency protocol and the Apollonian circle, to realize the round-up of the target and obstacle avoidance. The experimental results show that the proposed strategy can achieve the round-up of the target UAV and guide the pursuing multi-UAV group to avoid obstacles even when the target is not detected initially. In multiple simulated scenarios, the success rates of the pursuit multi-UAV cluster for rounding up the target remain above 80%.
1. Introduction
Currently, with the development of science and technology, UAVs have been widely used in military, industrial, agricultural, and other fields. However, when faced with the requirements of target search, target pursuit, and target round-up, a single UAV often suffers from various problems, including an insufficient detection range and a weak ability to adapt to the environment. Thus, it is increasingly important to study multi-UAV techniques for automatic collaborative tasks. At present, many countries are involved in research on multi-UAV formation, including specific military programs. As early as 2008, the University of Pennsylvania verified the indoor formation flight and obstacle avoidance of 16–20 small quadcopters. In recent years, several low-cost multi-UAV formation projects, such as the U.S. Defense Advanced Research Projects Agency’s (DARPA) Pixie Project and Attack UAV Swarm Tactics Project, have been launched [1]. The ability to continuously traverse and monitor specific target areas is the main advantage of a multi-UAV cluster. Due to limitations in operational accuracy and capability, a single UAV generally has difficulty performing tasks independently, and a multi-UAV cluster can effectively compensate for this deficiency. However, as a complex cluster system, a multi-UAV cluster must exhibit the ability to self-organize and adapt. In previous research on multi-agent control methods, agents were generally required to make decisions based on environmental information and individual information within the cluster. Overreliance on this information leads to poor environmental adaptability; therefore, enabling a cluster to accomplish tasks with less information and improving the efficiency of information utilization are current research hotspots.
The control of a UAV cluster mainly relies on cooperative control techniques for multi-agent systems [2,3,4]. In 1986, Reynolds first proposed the Boids model, based on observations of bird flocks [5]. It assumed that each individual can perceive the information of its neighborhood within a certain range and make decisions based on the three basic principles of aggregation, separation, and alignment [6]. On this basis, Vicsek established a planar discrete-time model in 1995 [7] to simulate the emergence of consistent particle behavior. These classic models laid the foundation for traditional cluster control methods. To date, the formation control methods based on the principle of consistency have mainly included the leader–follower method [8,9] and the behavior-based control method [10]. The idea behind the leader–follower method is to select a leader in the cluster, with the remaining agents acting as followers. The leader holds the flight path and the task’s target. Based on a distributed control method, the states of the followers gradually become consistent with the leader, and the cluster ultimately maintains stable flight. The behavior-based control method follows the idea of swarm intelligence, according to the desired behavior pattern of the UAVs. Individual behavior rules and local control schemes are designed for each UAV, and a “behavior library” is obtained and stored in the formation controller. When the control system of a UAV receives an instruction, it selects and executes the corresponding behavior from the “behavior library”. Building on these general consistency-based control methods, Lopez-Nicolas improved the method for a multi-agent system with visual constraints [11]. Further, Song handled the circle formation problem with a limited interaction range [12], and Hu considered the coordinated control of spacecraft formations with external interference and limited communication resources [13].
Consistency-based cluster control methods can achieve the collaboration of a multi-agent system, but they lack autonomy and adaptability when facing dynamic environments and complex tasks. Therefore, from a data-driven perspective, reinforcement learning (RL) methods, which have strong decision-making capabilities [14,15], have also been widely studied in this field. Bansal [16] explored the generation of complex behaviors for multiple agents through a self-play mechanism. To deal with problems involving discrete and continuous states in a multi-agent system, Han Hu [17] proposed the DDPG-D3QN algorithm. Jincheng [18] investigated the role of baselines in stochastic policy gradients to better apply policy optimization methods in real-world situations. For offline RL problems, Zifeng Zhuang [19] found that the inherent conservativeness of policy-based algorithms needed to be overcome and proposed behavior proximal policy optimization (BPPO), which does not require any additional constraints or regularization compared to PPO. Zongwei Liu [20] proposed an actor-director-critic algorithm, which adds the role of a director to the conventional actor-critic algorithm, improving the performance of the agents. To address low learning speed and poor generalization in decision making, Bo Li [21] proposed PMADDPG, an improved version of the multi-agent deep deterministic policy gradient (MADDPG) algorithm. Siyue Hu [22] proposed a noise-MAPPO algorithm, whose success rate was over 90% in all StarCraft Challenge scenarios. Since single-agent RL suffers from overestimation bias of the value function, which prevents multi-agent reinforcement learning methods from learning policies effectively, Johannes Ackermann [23] proposed an approach that reduces this bias by using double centralized critics. Additionally, the self-attention mechanism [24] was introduced on this basis, with remarkable results. In order to improve the learning speed in complex environments, Sriram Subramanian [25] utilized the L2Q learning framework and extended it from single-agent to multi-agent settings. To improve the flow of autonomous vehicles in road networks, Anum [26] proposed a method based on multi-agent RL and an autonomous path selection algorithm.
The research in the above literature has achieved certain results in formation control and obstacle avoidance for multi-UAV systems. However, in these conventional algorithms, the leader generally follows a previously prescribed flight path. This means that if the target to be tracked is not within the detectable range of the leader, the multi-UAV cluster cannot construct an effective decision-making mechanism, leading to tracking failures. To deal with this problem, this paper designs a consistent round-up strategy based on PPO path optimization for the leader–follower tracking problem. This strategy builds on the consistent formation control method for a leader–follower multi-UAV cluster and aims to achieve target round-up and obstacle avoidance. PPO balances exploration and exploitation while maintaining the simplicity and computational efficiency of the algorithm’s implementation. PPO avoids the training instability caused by excessive updates by limiting the step size of the policy updates, allowing it to perform in a balanced and robust way across a range of tasks. It is assumed that each member of the multi-UAV cluster has a detectable range for spotting the nearby target and obstacles, and that each obstacle has an impact range within which a moving UAV will collide. The basic principle of the proposed strategy is to force the multi-UAV cluster to approach and round up the target, based on consistent formation control, when any member locates it and there are no obstacles nearby, while, in other conditions, optimizing the leader’s policy with PPO to determine the best flight path and making the followers cooperate with the leader. To verify the performance of the proposed strategy in different environments, four scenarios are considered in the numerical experiments, including environments with a fixed target; a moving target; a fixed target and a fixed obstacle; as well as a fixed target and a moving obstacle. The results show that the strategy exhibits excellent performance in tracking the target and successfully avoiding obstacles. In summary, the main contributions of this paper can be summarized as follows:
- (1)
- Designing a flight formation based on the Apollonian circle for tracking the target, and executing the collaborative flight of multi-UAV, based on consistent formation control, achieving the round-up for the target in situations where the target is within the detectable range and in which none of the pursuit UAVs enter the impact range of any obstacle.
- (2)
- Optimizing the acting policy of the leader based on the PPO algorithm for finding the best flight path to track the target and avoid obstacles, achieving the round-up for the target with the help of consistent formation control in situations where the target is out of the detectable range and any of the pursuit UAVs enter the impact range of any obstacle.
- (3)
- Validating and analyzing the performance of the proposed algorithm in regards to target round-up and obstacle avoidance in environments with a fixed target, a moving target, a fixed target and a fixed obstacle, as well as a fixed target and a moving obstacle.
The rest of this paper is organized as follows: Section 2 introduces the necessary preliminaries; Section 3 illustrates the design principles, implementation process, and extensibility of the proposed strategy; Section 4 details the numerical experiments, in which the results of the proposed round-up strategy applied in different environments are compared and analyzed; and Section 5 presents the conclusions.
2. Background
2.1. Leader–Follower Model of Multi-UAV Based on Algebraic Graph Theory
In modeling a multi-UAV cluster with a leader–follower structure, the leader needs to send information, including its position and attitude, to all followers at certain time points [27]. Therefore, in order to achieve information sharing among the cluster individuals, it is necessary to establish a communication topology network. Graph theory, an important branch of operations research, is widely applied to model communication relationships among cluster individuals. Therefore, the communication topology network between individual UAVs can be represented by a graph, as shown in Figure 1a,b. In a directed graph, information flows in one direction along each edge; in an undirected graph, information can be exchanged in both directions. This paper applies a directed graph to model a leader–follower multi-UAV cluster, and the corresponding topology network is shown in Figure 2. The information exchange among members of the leader–follower multi-UAV cluster is assumed to be one-way.
Figure 1.
Schematic diagram of directed and undirected graphs.
Figure 2.
The multi-UAV cluster and its corresponding topology network.
For the involved multi-UAV cluster, let $G=(V,E,A)$ denote the directed graph representing the topological structure of the cluster. In the graph, $V=\{v_1,v_2,\dots,v_n\}$ is the set of vertices, $E\subseteq V\times V$ is the set of edges, where an edge $(v_i,v_j)$ represents a communication path from node $v_i$ to node $v_j$, and $A$ represents the weighted adjacency matrix of the graph $G$. For arbitrary nodes $v_i$ and $v_j$, the graph is strongly connected if both edges $(v_i,v_j)$ and $(v_j,v_i)$ exist; if only one edge exists between nodes $v_i$ and $v_j$, the graph is only connected in the undirected (weak) sense.
The adjacency matrix $A=[a_{ij}]\in\mathbb{R}^{n\times n}$ represents the adjacency relationship between the vertices in the graph, where $a_{ij}$ is a non-negative real number that represents the weight of the edge between node $v_i$ and node $v_j$. For the directed graph, if an edge exists between node $v_i$ and node $v_j$, $a_{ij}$ is recorded as 1; otherwise, it is 0. Moreover, for the undirected graph, it satisfies $a_{ij}=a_{ji}$. Therefore, the expression of $a_{ij}$ is as follows:
$$
a_{ij}=\begin{cases}1, & (v_i,v_j)\in E\\ 0, & \text{otherwise}\end{cases}
$$
In addition to the adjacency matrix $A$, the Laplacian matrix $L(G)$ is defined to describe the characteristics of the graph:
$$
L(G)=D(G)-A(G)
$$
where $D(G)=\mathrm{diag}(d_1,d_2,\dots,d_n)$ is a diagonal degree matrix, and the element $d_i$ can be calculated according to the following equation:
$$
d_i=\sum_{j=1}^{n}a_{ij}
$$
Therefore, for a directed graph, the matrix $L(G)$ is in general asymmetric, and each of its rows sums to zero.
Lemma 1
[28]. Denote $G$ as the graph which contains $n$ nodes, and let $L(G)$ be the Laplacian matrix associated with the graph $G$. The Laplacian matrix has a zero eigenvalue with algebraic multiplicity 1, and the rest of the eigenvalues have positive real parts. Denote these eigenvalues as $\lambda_1,\lambda_2,\dots,\lambda_n$; then they satisfy:
$$
0=\lambda_1<\operatorname{Re}(\lambda_2)\le\cdots\le\operatorname{Re}(\lambda_n)
$$
where $\operatorname{Re}(\lambda_i)$ represents the real part of eigenvalue $\lambda_i$. When there is information exchange in the communication topology network, the real parts of the eigenvalues of the Laplacian matrix corresponding to the graph will be equal to or greater than 0.
In the applied multi-UAV leader–follower model, the individual UAVs are considered as vertices of the graph, and the presence or absence of edges represents whether an information exchange exists between the UAVs. For a particular UAV, if neighbors appear within its communication range, the corresponding edges will be generated in the graph. In this paper, two communication topology networks are used: one containing the members of the pursuit multi-UAV cluster, and the other containing the members of the cluster and the target.
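To make the notation above concrete, the following short sketch (in Python/NumPy, consistent with the PyTorch-based training used later) builds an adjacency matrix and the corresponding Laplacian for a small, hypothetical leader–follower topology; the edge set is illustrative and not the exact network of Figure 2.

```python
import numpy as np

def laplacian(adjacency: np.ndarray) -> np.ndarray:
    """Return L(G) = D(G) - A(G), where D(G) is the diagonal degree matrix."""
    degree = np.diag(adjacency.sum(axis=1))
    return degree - adjacency

# Hypothetical directed leader-follower topology with one leader (node 0) and
# three followers; a_ij = 1 means node i receives information from node j.
A = np.array([
    [0, 0, 0, 0],   # leader receives nothing
    [1, 0, 0, 0],   # follower 1 listens to the leader
    [1, 1, 0, 0],   # follower 2 listens to the leader and follower 1
    [0, 1, 1, 0],   # follower 3 listens to followers 1 and 2
], dtype=float)

L = laplacian(A)
print(L)                     # every row of L sums to zero
print(np.linalg.eigvals(L))  # one zero eigenvalue, the rest with positive real parts
```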
2.2. Description of the Proximal Policy Optimization Algorithm
A multi-UAV cluster can round up the target when this target is in the detectable range of any pursuit UAV. However, in unfavorable conditions, the multi-UAV cluster may be unable to detect the target at the initial moment. This causes the adjacency matrix of the graph to become a zero matrix, thus leading to the failure of the task. To solve this problem, we introduce the PPO algorithm for the leader to guide the followers to find the target when the cluster cannot detect it initially, thus realizing the goal of avoiding obstacles and rounding up the target.
2.2.1. Policy Gradient
The policy gradient (PG) algorithm is a policy-based reinforcement learning algorithm. It represents the policy as a continuous, parameterized function and optimizes it directly with respect to the expected reward; continuous function optimization is then used to find the optimal policy, and the optimization objective is to maximize this function [29,30].
The objective function can be denoted as $J(\theta)$, and a parameterized policy $\pi_{\theta}$ is obtained by neural network training to obtain the highest reward, so it is necessary to find the set of parameters $\theta$ that maximizes $J(\theta)$. This is usually achieved by gradient ascent on $J(\theta)$, so the process of updating the parameters can be represented as follows:
$$
\theta \leftarrow \theta + \alpha\,\nabla_{\theta}J(\theta)
$$
where $\alpha$ is the learning rate. The policy then generates trajectories that maximize the expected return.
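As a minimal illustration of the gradient-ascent update above, the sketch below performs one REINFORCE-style policy-gradient step on a toy Gaussian policy; the linear policy, fixed standard deviation, and tensor names are assumptions made only for this example.

```python
import torch

# Toy Gaussian policy pi_theta(a|s) with a learnable linear mean and fixed std.
theta = torch.zeros(2, requires_grad=True)   # policy parameters
alpha = 1e-2                                 # learning rate

def update(states, actions, returns):
    """One gradient-ascent step on J(theta) ~ E[log pi_theta(a|s) * G]."""
    mean = states @ theta                            # assumed linear state-to-action mean
    log_prob = torch.distributions.Normal(mean, 1.0).log_prob(actions)
    objective = (log_prob * returns).mean()          # Monte-Carlo estimate of J(theta)
    objective.backward()
    with torch.no_grad():
        theta += alpha * theta.grad                  # theta <- theta + alpha * grad J(theta)
        theta.grad.zero_()

# Example: one update on a small random batch of (state, action, return) samples.
update(torch.randn(32, 2), torch.randn(32), torch.randn(32))
```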
2.2.2. Proximal Policy Optimization
PPO is a policy gradient (PG) type of reinforcement learning algorithm, in which the core idea is to adjust the probability of the sampled actions, thus optimizing the policy toward higher returns. Generally, the objective function for a PG algorithm can be written as:
$$
L^{PG}(\theta)=\hat{\mathbb{E}}_t\!\left[\log \pi_{\theta}(a_t\mid s_t)\,\hat{A}_t\right]
$$
where $\pi_{\theta}$ represents the policy, $\hat{A}_t$ represents the estimate of the advantage function at time step $t$, and $\hat{\mathbb{E}}_t[\cdot]$ represents the empirical average over a finite batch of samples. The policy parameter $\theta$ can be optimized using a stochastic gradient method. On the basis of this on-policy algorithm, two different policies can be employed to improve the sampling efficiency and transform the algorithm into an off-policy form. Thus, in PPO, $\pi_{\theta'}$ denotes the policy which interacts with the environment, and $\pi_{\theta}$ denotes the policy that is updated in an inner loop. Further, the learning goal can be revised into:
$$
L^{KL}(\theta)=\hat{\mathbb{E}}_t\!\left[\frac{\pi_{\theta}(a_t\mid s_t)}{\pi_{\theta'}(a_t\mid s_t)}\hat{A}_t-\beta\,\mathrm{KL}\!\left[\pi_{\theta'}(\cdot\mid s_t),\pi_{\theta}(\cdot\mid s_t)\right]\right]
$$
where $\mathrm{KL}[\cdot,\cdot]$ is the Kullback–Leibler divergence, which is used to limit the distribution difference between $\pi_{\theta'}$ and $\pi_{\theta}$, and $\beta$ is a positive penalty factor that dynamically adjusts the weight of the KL term.
In addition, another version of PPO, called PPO2, limits the update step by means of truncation (clipping). In PPO2, the objective function can be written as:
$$
L^{CLIP}(\theta)=\hat{\mathbb{E}}_t\!\left[\min\!\left(r_t(\theta)\hat{A}_t,\ \mathrm{clip}\!\left(r_t(\theta),1-\varepsilon,1+\varepsilon\right)\hat{A}_t\right)\right],\qquad r_t(\theta)=\frac{\pi_{\theta}(a_t\mid s_t)}{\pi_{\theta'}(a_t\mid s_t)}
$$
where $\varepsilon$ denotes the truncation hyperparameter, which is generally set around 0.2, and $\mathrm{clip}(\cdot)$ denotes the truncation function, which limits the probability ratio $r_t(\theta)$ to ensure convergence.
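For reference, the clipped surrogate objective can be implemented in a few lines; the sketch below is a generic PyTorch rendering of the standard PPO2 loss, not the authors' exact code.

```python
import torch

def ppo_clip_loss(log_prob_new, log_prob_old, advantages, eps=0.2):
    """Negative clipped surrogate L^CLIP(theta), suitable for minimisation.

    log_prob_new : log pi_theta(a_t|s_t) under the policy being updated
    log_prob_old : log pi_theta'(a_t|s_t) under the sampling policy
    advantages   : advantage estimates A_t
    eps          : clipping hyperparameter, typically around 0.2
    """
    ratio = torch.exp(log_prob_new - log_prob_old.detach())        # r_t(theta)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```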
3. A Consistent Round-Up Strategy Based on PPO Path Optimization
The traditional formation control method for a multi-UAV cluster generally requires the leader to follow a previously prescribed flight path, which may lead to the failure of target tracking and obstacle avoidance in a complex environment. Specifically, when based on the consistency protocol only, the round-up mission will fail if the pursuit multi-UAV cluster cannot detect the target, or if the round-up route is blocked by an obstacle. To deal with this problem, this paper designs a consistent round-up algorithm based on PPO path optimization for the leader–follower [31] tracking problem, as shown in Figure 3.
Figure 3.
Diagram of the proposed consistent round-up strategy, based on PPO path optimization.
The proposed strategy assumes that one leader and several followers exist in the multi-UAV cluster. From Figure 3, it can be noted that the proposed round-up strategy consists of two main parts: the PPO algorithm and the consistency protocol. The leader is trained and controlled by the PPO algorithm, playing the role of tracking the target and avoiding the obstacles in the environment when the target is out of the cluster’s detectable range; in this case, the PPO-based reinforcement learning algorithm plans the optimal flight path of the leader. Once the optimal flight path is obtained, the followers are expected to follow the leader through the consistency protocol. When the target can be detected by the cluster, the consistency protocol controls the cluster to round up the target, based on a formation derived from the Apollonian circle. The strategy combines these two parts to guide the cluster to safely round up the target.
3.1. Discrete-Time Consistency Protocol
The purpose of the cluster is to round up a single target while avoiding obstacles. The area in which the target can be detected is defined as the “detectable area”. If none of the individuals in the pursuit cluster can detect the target, the leader needs to plan a flight path to approach the detectable area of the target. Once the leader enters the detectable area, the round-up path can be planned based on the Apollonian circles, which requires cooperative flight according to the consistency protocol.
For a two-dimensional discrete-time system, considering that there are $n$ UAVs in the cluster, the dynamics of each individual can be described by the following model:
$$
\begin{cases}
x_i(k+1)=x_i(k)+T\,v_i(k)\\
v_i(k+1)=v_i(k)+T\,u_i(k)
\end{cases}
$$
where $x_i(k)$, $v_i(k)$, and $u_i(k)$ represent the position, velocity, and control input of the $i$th member at step $k$, respectively. Moreover, $T$ denotes the sampling period ($T>0$). For any $i,j\in\{1,2,\dots,n\}$, if the system, starting from any initial state, satisfies the following conditions:
$$
\lim_{k\to\infty}\|x_i(k)-x_j(k)\|=0,\qquad \lim_{k\to\infty}\|v_i(k)-v_j(k)\|=0
$$
then the discrete system is capable of achieving asymptotic consistency.
Assume that each member can only communicate with the adjacent neighbors in its communication region; the set of adjacent neighbors of the $i$th member at moment $k$ can then be expressed as:
$$
N_i(k)=\left\{\,j\mid (v_i,v_j)\in E,\ \|x_j(k)-x_i(k)\|\le R,\ j\neq i\,\right\}
$$
where $E$ indicates the edge set of the communication topology network of the pursuit cluster, $\|\cdot\|$ denotes the Euclidean distance, and $R$ is the maximum communication radius between the $i$th and $j$th members. It is noted that the cluster should eventually reach a consistent state when performing the tracking task, which means that the distance between individual members should be maintained as required. Therefore, based on the definition of asymptotic consistency of the discrete system, the following constraint must be satisfied for the cluster:
$$
\lim_{k\to\infty}\|x_i(k)-x_j(k)\|=d,\qquad \lim_{k\to\infty}\|v_i(k)-v_j(k)\|=0
$$
where $d$ is the required distance between neighboring UAVs in a consistent steady state.
Thus, based on the consistency protocol, the control law for the $i$th member of the multi-UAV cluster can be composed of three components, as follows [32]:
$$
u_i(k)=u_i^{\alpha}(k)+u_i^{\beta}(k)+u_i^{\gamma}(k)
$$
where $u_i^{\alpha}(k)$ controls the safety distance among the cluster members; $u_i^{\beta}(k)$ controls the consistent speed of the cluster members; and $u_i^{\gamma}(k)$ controls the pursuit UAV so that it achieves the same speed as the target and maintains the round-up distance based on the Apollonian circle. The specific definitions of $u_i^{\alpha}(k)$, $u_i^{\beta}(k)$, and $u_i^{\gamma}(k)$ are given in Equations (17)–(19).
In Equations (17)–(19), the coefficients are the control parameters, $N_i(k)$ indicates the set of neighbors in the communication adjacency network, and $a_{ij}$ denotes the elements of the adjacency matrix. When other members appear within the detection range of the $i$th member, a communication path between the member and the neighbor is quickly established; in this case, the corresponding element of the adjacency matrix is $a_{ij}=1$; otherwise, $a_{ij}=0$. Additionally, a second neighbor set is defined over the communication adjacency network formed by the cluster and the target; if a member has discovered the target, the corresponding adjacency element is set to 1; otherwise, it is 0. The symbols $x_i$ and $v_i$ indicate the position and velocity coordinates of the $i$th member in the inertial coordinate system, the subscript referring to the target denotes the target UAV, $d$ is the safe distance between the pursuing UAVs, and $d_c$ denotes the capture distance required in the round-up task.
From the descriptions of Equations (17)–(19), it is concluded that $u_i^{\alpha}(k)$ induces separation of the members in the cluster so that the minimum distance between members can be maintained; $u_i^{\beta}(k)$ aligns the speeds of the pursuit UAVs so that they maintain a consistent speed; and $u_i^{\gamma}(k)$ aligns the pursuit UAVs with the speed and relative distance of the target, realizing the round-up of the target.
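Since the displayed forms of Equations (17)–(19) are not reproduced here, the following sketch shows one common way to realize the three components (separation, velocity alignment, and target tracking) described above; the gains c1–c3 and the specific expressions are assumptions made for illustration, not the paper's exact control law.

```python
import numpy as np

def control_law(i, x, v, A, x_t, v_t, a_t, d, dc, c1=1.0, c2=1.0, c3=1.0):
    """Consensus-style control input u_i = u_alpha + u_beta + u_gamma (illustrative).

    x, v     : (n, 2) positions / velocities of the pursuit UAVs
    A        : (n, n) adjacency matrix of the pursuit cluster
    x_t, v_t : target position / velocity
    a_t      : (n,) indicator, a_t[i] = 1 if UAV i currently detects the target
    d        : required inter-UAV distance, dc : capture distance
    """
    u_alpha = np.zeros(2)   # keep the safety distance d to neighbours
    u_beta = np.zeros(2)    # align velocity with neighbours
    for j in range(len(x)):
        if j == i or A[i, j] == 0:
            continue
        rel = x[j] - x[i]
        dist = np.linalg.norm(rel) + 1e-9
        u_alpha += c1 * (dist - d) * rel / dist
        u_beta += c2 * (v[j] - v[i])

    u_gamma = np.zeros(2)   # track the target and keep the capture distance dc
    if a_t[i] == 1:
        rel = x_t - x[i]
        dist = np.linalg.norm(rel) + 1e-9
        u_gamma = c3 * ((dist - dc) * rel / dist + (v_t - v[i]))

    return u_alpha + u_beta + u_gamma
```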
3.2. Target Round-Up Based on the Apollonian Circle
When the leader enters the detectable area of the target, the cluster needs to surround the target and round it up. To achieve this goal, it is necessary to design a round-up route based on the Apollonian circle [33]. Rounding up the target with multiple UAVs ensures, to the greatest extent, that the target cannot escape after being tracked. In order to simplify the formation design process, it is assumed that the speed values of the UAVs do not change during the task.
The diagram of an Apollonian circle is drawn in Figure 4. Suppose that point $P$ is the position of a pursuit UAV with velocity $v_P$, and point $D$ is the position of the target with velocity $v_D$; then the speed ratio $k$ is expressed as follows:
$$
k=\frac{v_D}{v_P}
$$
Figure 4.
Diagram of an Apollonian circle.
The circle shown in Figure 4 is the so-called Apollonian circle, where $O$ is the center and $r$ is the radius. The position of the center $O$ and the radius $r$ can be expressed as [34]:
$$
O=\frac{x_D-k^{2}x_P}{1-k^{2}},\qquad r=\frac{k\,L_{PD}}{\left|1-k^{2}\right|}
$$
where $x_P$ and $x_D$ are the position vectors of points $P$ and $D$, and $L_{PD}$ represents the distance between point $D$ and point $P$.
From Figure 4, it is seen that $C$ is an arbitrary point located on the Apollonian circle. Define the angle of the tangent line between the target and the Apollonian circle as $\theta$. In the case where the ratio of the target velocity to the pursuing UAV’s velocity is $k$, the pursuit UAV will not be able to successfully pursue the target when this tangent angle is less than a critical value determined by $k$. Conversely, when the angle is greater than this critical value, the UAV can always find a heading that enables it to catch the target.
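The Apollonian-circle geometry can be computed directly from the positions and the speed ratio; the sketch below follows the center and radius expressions given above, with variable names chosen only for illustration.

```python
import numpy as np

def apollonius_circle(p, d, k):
    """Centre O and radius r of the Apollonian circle for pursuer P and target D.

    k = v_D / v_P is the target-to-pursuer speed ratio (k != 1). The circle is
    the locus of points C with |CD| = k * |CP|, i.e. the points that the target
    and the pursuer can reach at the same time.
    """
    p, d = np.asarray(p, float), np.asarray(d, float)
    centre = (d - k ** 2 * p) / (1.0 - k ** 2)
    radius = abs(k) * np.linalg.norm(d - p) / abs(1.0 - k ** 2)
    return centre, radius

# Example: pursuer at the origin, target 10 m away, pursuer twice as fast (k = 0.5).
O, r = apollonius_circle([0.0, 0.0], [10.0, 0.0], 0.5)
print(O, r)   # centre ~ [13.33, 0.0], radius ~ 6.67
```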
Therefore, when multiple pursuit UAVs are employed, the cluster can form several Apollonian circles to surround the target, thus rounding up the target and preventing its escape. To achieve this round-up goal, it is always desirable that the target remains within the Apollonian circles formed by all of the pursuing UAVs, as shown in Figure 5.
Figure 5.
Diagram of the round-up formation of the pursuit cluster.
Use $D$ to represent the target to be rounded up and $O_i$ to represent the center of the Apollonian circle formed by the $i$th pursuit UAV and the target. The details of the formed Apollonian circle can be obtained from Equations (20)–(23). In order to round up the target, it is necessary to design a desired position for each pursuit UAV. In this way, when the pursuit UAV can detect the target, it will continuously fly towards its desired position, thus completing the round-up of the target. The final formation of the round-up condition is shown in Figure 6.
Figure 6.
The final formation of the round-up condition.
In Figure 6, $A_{n-1,n}$ represents the tangent point formed by the $(n-1)$th and $n$th Apollonian circles. Denote the angle formed at the target by the centers of any two adjacent Apollonian circles as $\alpha$, and denote the angle formed at the target by the center of any Apollonian circle and the corresponding tangent point as $\beta$; then, by the symmetry of the formation, $\alpha=2\beta$. Combining these geometric properties with the definition of an Apollonian circle, we can obtain the following relationships:
Based on the above designed formation, it can be seen that if the position of the leader is known, then the designed positions of the followers can be known as:
where $d_c$ is the capture distance required in the round-up task. To ensure that the formed Apollonian circles can closely surround the target, the minimum distance between neighboring pursuit UAVs can be set as follows:
Thus, the target is expected to be rounded up by the pursuit cluster, and this round-up strategy is used as a basic strategy for the multi-UAV cluster when the target can be detected, and there are no obstacles nearby.
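As a sketch of the formation design described above: with n pursuit UAVs, the desired round-up positions can be placed evenly on a circle of radius d_c around the target, anchored at the leader's bearing. The even angular spacing is an illustrative simplification of the Apollonian-circle construction, not the paper's exact position assignment.

```python
import numpy as np

def desired_positions(target, leader, n, dc):
    """Desired round-up positions for n pursuit UAVs on a circle of radius dc.

    Illustrative assumption: the desired positions are spaced evenly around the
    target, starting from the leader's current bearing, so that the Apollonian
    circles of neighbouring pursuers touch and enclose the target.
    """
    target, leader = np.asarray(target, float), np.asarray(leader, float)
    dx, dy = leader - target
    base_angle = np.arctan2(dy, dx)                        # leader's bearing from target
    angles = base_angle + 2.0 * np.pi * np.arange(n) / n   # adjacent positions 2*pi/n apart
    return target + dc * np.stack([np.cos(angles), np.sin(angles)], axis=1)

# Example: six pursuers, capture distance 2 m, target at the origin.
print(desired_positions([0.0, 0.0], [5.0, 0.0], n=6, dc=2.0))
```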
3.3. Target Tracking and Obstacle Avoidance Based on the Proposed Strategy
The whole process is shown in Figure 7, where we use a circle to express the obstacle and a star to represent the target. It is seen that for situations in which the leader is able to reach the detectable area of the target, the target can be rounded up based on the round-up route and consistency protocol provided in Section 3.1 and Section 3.2. However, when facing a complex environment in which the leader is unable to directly reach the detectable area of the target, or where certain obstacles exist, it is necessary to plan an optimal flight path for the leader. The optimization process is conducted based on the PPO algorithm. By using such a reinforcement learning method, the leader can be guided to reach the detectable area in an optimized way, thereby further completing the encirclement of the target by other followers.
Figure 7.
The process of the proposed round-up strategy.
The PPO algorithm consists of two networks: the actor network and the critic network. With an input containing the states of the leader, the target, and the obstacle, the actor network generates the corresponding policy and action, and the critic network generates the state value function. The overall structure of the actor and critic networks is shown in Figure 8.
Figure 8.
Diagram of PPO network.
In Figure 8, the input layer has six input nodes, $[x_l, y_l, x_t, y_t, x_o, y_o]$, where $[x_l, y_l]$ represents the position of the leader itself, $[x_t, y_t]$ represents the relative position of the target, and $[x_o, y_o]$ represents the relative position of the obstacle. ReLU functions are selected as the activation functions, and there are two fully connected hidden layers, each comprising 256 cells. The output layer of the actor network possesses two nodes, corresponding to the change in the horizontal coordinate and the change in the vertical coordinate of the leader. The output layer of the critic network has one node, corresponding to the state value function.
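A minimal PyTorch sketch of the layout described above (six inputs, two fully connected hidden layers of 256 ReLU units, a two-node actor head, and a one-node critic head) is given below; the input symbol names and any unlisted details of the paper's implementation are assumptions.

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Actor and critic with the input layout described in Figure 8.

    Input : [x_l, y_l, x_t, y_t, x_o, y_o] -- leader position, relative target
            position, relative obstacle position (symbol names assumed).
    Actor : 2 outputs (change in horizontal / vertical coordinate of the leader).
    Critic: 1 output (state value).
    """

    def __init__(self, state_dim: int = 6, hidden: int = 256):
        super().__init__()
        self.actor = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )
        self.critic = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state: torch.Tensor):
        return self.actor(state), self.critic(state)

# Example forward pass with a batch of four states.
net = ActorCritic()
action_delta, state_value = net(torch.randn(4, 6))
```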
The PPO update process of the network is shown in Figure 9. The first step is to initialize the conditions of the target, leader, and obstacle, whose positions and speeds are randomly generated within a certain range. Then, the relative state is calculated and fed into the PPO algorithm. Based on the policy network, the leader’s action is output and executed in the environment. After the interaction, the next state and the reward can be obtained. By repeating the above steps, a trajectory is obtained and stored in the memory.
Figure 9.
Diagram of algorithm network updates.
Based on the trajectories in the memory, it is possible to compute the empirical return, which serves as the target state value $\hat{V}_t$. To ensure that the output of the critic is close to $\hat{V}_t$, the loss function for the critic network can be expressed as follows:
$$
L_{critic}=\hat{\mathbb{E}}_t\!\left[\left(V_{\phi}(s_t)-\hat{V}_t\right)^{2}\right]
$$
where $V_{\phi}(s_t)$ is the output of the critic network. As for the actor network, the loss function is shown in Equation (9) in Section 2.2. Through gradient descent, the network parameters of the actor and critic networks can be updated so that the leader approaches the target and better avoids the obstacles.
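Putting the pieces together, one PPO update on a sampled batch could look like the following sketch, which combines the clipped actor loss of Section 2.2 with the critic's mean-squared loss above; the optimizer handling, fixed action standard deviation, and loss weighting are illustrative assumptions rather than the paper's exact settings.

```python
import torch

def ppo_update(net, optimizer, states, logp_old, actions, returns, advantages,
               eps=0.2, value_coef=0.5):
    """One PPO update on a sampled batch (illustrative sketch).

    net        : an actor-critic network returning (action_mean, state_value)
    returns    : empirical discounted returns used as critic targets
    advantages : advantage estimates, e.g. returns minus old value predictions
    """
    action_mean, values = net(states)
    dist = torch.distributions.Normal(action_mean, 1.0)        # fixed std for brevity
    logp_new = dist.log_prob(actions).sum(dim=-1)

    ratio = torch.exp(logp_new - logp_old.detach())            # r_t(theta)
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    actor_loss = -torch.min(ratio * advantages, clipped * advantages).mean()
    critic_loss = (values.squeeze(-1) - returns).pow(2).mean() # MSE to the return target

    loss = actor_loss + value_coef * critic_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```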
4. Experimental Results
This section presents the experimental results for the proposed consistent round-up strategy. First, the experimental environment setting is introduced. Then, the performance of the consistency protocol and of the PPO path optimization in the proposed strategy is analyzed. Additionally, the generalization ability of the proposed strategy is verified.
4.1. Experimental Environment Setting
The experiments are performed in a two-dimensional airspace environment. The training hyperparameters, environmental parameters, and function parameters are shown in Table 1, Table 2 and Table 3, respectively. Among them, the training of the leader is based on the PyTorch framework, and the visualization of the cluster and the target is carried out on the MATLAB platform.
Table 1.
Setting of training hyperparameters.
Table 2.
Setting of environmental parameters.
Table 3.
Function parameters.
Environment. This paper employs a two-dimensional environment for multi-UAV clusters based on continuous observation information and a discrete action space, which can be divided into the target round-up environment and the leader’s training environment. The former includes the target, the obstacle, and the multi-UAV cluster with a trained leader, while the latter includes the leader, the target, and the obstacle. Among these, the role of the multi-UAV cluster is to round up the target. The role of the leader is to lead the cluster to avoid the obstacle and track the target beyond the detectable range, and the target escapes when it locates any pursuit UAV. If the leader hits an obstacle, it receives a reward of −100, and if the leader completes the obstacle avoidance and the leading task, it earns a reward of +100. The reward design and the escape strategy of the target are detailed in Appendix A and Appendix B.
Leader Training Environment. This environment is divided into four specific cases, including the one with no obstacle, but a fixed target; the one with no obstacle, but a moving target; the one with a fixed target and a fixed obstacle; and the one with a fixed target and a moving obstacle. The leader aims to earn more rewards by avoiding collisions with the obstacle and reaching the detectable area of the target as soon as possible. Based on the PPO algorithm, the path of the leader will be optimized.
Target Round-Up Environment. This environment is divided into two cases. The first case is the one in which there is no obstacle in the environment and the cluster can detect the target at the initial moment; the cluster will round up the target based on the consistency protocol proposed in Section 3.1. The second case is the one in which an obstacle exists in the environment, and the cluster cannot detect the target at the initial moment; the trained leader will guide the followers for obstacle avoidance and target tracking. When the cluster reaches the detectable area of the target, it will then round up the target, based on the consistency protocol.
To evaluate the effectiveness of the strategy, the success rate of the round-up task is defined and expressed as
where $d_c$ is the capture distance, $n$ is the number of UAVs in the multi-UAV cluster, and $d_i$ is the distance between the $i$th pursuit UAV and the target.
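The exact expression of the success rate is not reproduced above; the sketch below shows one plausible definition that is consistent with the quantities listed (capture distance, cluster size, and final distances) and with success rates between 0 and 100%. It is an assumed reconstruction for illustration, not the paper's formula.

```python
import numpy as np

def success_rate(distances, dc):
    """Assumed definition: average of min(dc / d_i, 1) over the n pursuit UAVs, in %.

    distances : final distances d_i between each pursuit UAV and the target
    dc        : required capture distance
    Note: this is an illustrative reconstruction, not the paper's exact formula.
    """
    distances = np.asarray(distances, float)
    return 100.0 * np.mean(np.minimum(dc / distances, 1.0))

# Example: six pursuers ending slightly outside a 2 m capture distance.
print(success_rate([2.1, 2.3, 2.2, 2.4, 2.2, 2.3], dc=2.0))   # ~89%
```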
4.2. Experiment Using the Round-Up Strategy Based on Consistency Protocol
In this scenario, the multi-UAV cluster adopts the round-up strategy based on the consistency protocol. It is supposed that the target can be detected by the cluster and that there is no obstacle nearby. The pursuit cluster includes one leader and five followers, and there is one target that needs to be captured. The length of the time step is set to 0.05 s, and the number of simulated steps is set to 80. The target is set to follow a prescribed route before it finds any pursuit UAV. The round-up formation of the cluster is based on the Apollonian circles designed in Section 3.2, and the consistency protocol is adopted during the mission. The traces of the pursuit cluster and the target are shown in Figure 10, where the black star represents the target, the triangle represents the leader, and the others are followers.
Figure 10.
The traces of the pursuit cluster and the target using the method of consistency protocol: no obstacle and a fixed detectable target.
After 80 steps of the flight, the distance between each pursuit UAV and the target is shown in Table 4.
Table 4.
The final distances and the success rate with the method of consistency protocol: no obstacle and a fixed detectable target.
In Table 4, the first entry represents the distance between the leader and the target, and the remaining entries represent the distances between each follower and the target. In this case, the success rate equals 90.10%, which indicates that the multi-UAV cluster achieves a 90.10% success rate in the round-up task for the target.
It is seen that the cluster performs well based on the consistency protocol when the target can be detected initially. However, if the cluster cannot locate the target at the initial moment, the mission is likely to fail. If the detectable range of the cluster is reduced to 1 m, the target falls outside the cluster’s detectable range. The relevant traces of the cluster and the target are shown in Figure 11.
Figure 11.
The traces of the pursuit cluster and the target with the method of consistency protocol: no obstacle and a fixed undetectable target.
It is seen that, in this condition, the adjacency matrix of the communication topology between the cluster and the target is a zero matrix; thus, the cluster cannot complete the round-up mission.
4.3. Experiment of Consistent Round-Up Strategy Based on PPO Path Optimization
Considering the condition in which the target is not within the detectable range of the pursuit cluster, or obstacles exist nearby, the leader should be trained to choose an optimized flight path. The followers are expected to be guided by the leader and round up the target.
The scenario considered here is the one with no obstacle, but a fixed target. The initial position of the leader is set as m, and the initial position of the target is set as m. Based on the PPO algorithm, the reward curve, along with learning episodes, is shown in Figure 12. The curve will also be smoothed by the moving average method. From the figure, it can be seen that the reward of the leader has been improved, which indicates that the leader can reach the detectable area of the target.
Figure 12.
The reward curve of the leader in PPO training: no obstacle and an undetectable fixed target.
The display effect of the trained leader is shown in Figure 13. In the figure, the red and black circles represent the leader and the target, respectively, and the radius of the circle shows the relevant physical body. From Figure 13a–c, it is noted that after 400 training episodes, the leader can approach the target by the shortest path.
Figure 13.
The diagram of the leader after PPO training: no obstacle and an undetectable fixed target.
Since the followers should cooperate with the trained leader, the cluster is guided under the optimized flight path of the leader, rounding up the target based on the consistency protocol when the target is detectable. Figure 14 shows the traces of the cluster and the target of this scenario.
Figure 14.
The traces of the pursuit cluster and the target with a PPO-trained leader: no obstacle, but an undetectable fixed target.
From Figure 14a, it is seen that the followers follow the path of the leader to approach the target at the early stage of the mission. And when the cluster reaches the detectable range of the target, they round up the target together, as shown in Figure 14b,c. The distances between the pursuit UAVs and target after 80 steps of flight, as well as the success rate, are shown in Table 5.
Table 5.
The final distances and the success rate with a PPO-trained leader: no obstacle, but an undetectable fixed target.
4.4. Generalization Experiment Using the Proposed Round-Up Strategy
To verify that the proposed strategy can be applied, not only to the scenario in Section 4.3, but also to additional situations, this experiment simulates and analyzes the performance of the proposed consistent round-up strategy in other scenarios, including the one with no obstacle, but a moving target; the one with a fixed obstacle and a fixed target; and the one with a moving obstacle and a fixed target.
4.4.1. Performance under the Scenario with No Obstacle, but a Moving Target
In this scenario, the initial position of the leader is set as m, and the initial position of the target is set as m. Additionally, more time steps are required to show the effect of tracking a moving target. Based on the PPO algorithm, the leader can be trained. The reward curve, along with the learning episodes, is shown in Figure 15. From the figure, it can be seen that after 600 training episodes, the reward curve converges, and the leader can reach the area where it can detect the target.
Figure 15.
The reward curve of the leader in PPO training: no obstacle, but a moving target.
It can be seen from the curve that, at convergence, the reward is about 100 points lower than that shown in Figure 12. The reason is that, when tracking a moving rather than a fixed target, the leader is penalized more easily, because it cannot always guarantee that the relative distance decreases continuously. The process of the trained leader tracking the target is displayed in Figure 16.
Figure 16.
The diagram of the leader after PPO training: no obstacle and an undetectable moving target.
Similar to the condition shown in Section 4.3, when the target cannot be initially detected, the followers in the cluster must follow the path of the leader, which can be trained based on PPO. And based on the consistency protocol, the cluster will round up the target once the leader can locate it. The relevant traces are shown in Figure 17.
Figure 17.
The traces of the pursuit cluster and the target with a PPO-trained leader: no obstacle, but a moving target.
From Figure 17, it can be seen that the cluster can effectively track and round up a moving target. To provide more details regarding this round-up mission, the final distances between the pursuit UAVs and the target, as well as the success rate, are shown in Table 6.
Table 6.
The final distances and the success rate with a PPO-trained leader: no obstacle, but a moving target.
Table 6 shows that in the scenario with no obstacle, but a moving target, the success rate is 84.35%, which is slightly higher than the 82.52% obtained in the scenario with a fixed target. However, it also indicates that the maximum distance and the minimum distance have more deviations from the required capture distance, which means that the flight is not as stable as the one involving a fixed target.
4.4.2. Performance under the Scenario with a Fixed Target and a Fixed Obstacle
In this experiment, the scenario includes a fixed target and a fixed obstacle, and the goal of the cluster is to round up the target while avoiding the obstacle. The initial position of the leader is set as m, the initial position of the target is set as m, and the initial position of the obstacle is set as m.
Based on the PPO algorithm, after 500 episodes of training, the reward curve of the leader converges, which means that the leader can reach the detectable area of the target. The corresponding reward curve is shown in Figure 18.
From Figure 18, it is observed that the leader can approach the target and avoid the obstacle after being trained, and the converged reward is about 800 points, which is higher than that in Figure 12 and Figure 15 because of the extra bonus earned through obstacle avoidance. The result after training is shown in Figure 19, where the red circle is the leader, the black circle is the target, and the yellow circle is the obstacle. It can be seen that, after training, the leader can avoid the impact range of the obstacle while reaching the detectable range of the target.
Figure 18.
The reward curve of the leader in PPO training: a fixed target and a fixed obstacle.
Figure 19.
The diagram of the leader after PPO training: a fixed target and a fixed obstacle.
The traces of the leader, followers, and the target are shown in Figure 20, where the gray range represents the impact range of the obstacle.
Figure 20.
The traces of the pursuit cluster and the target with a PPO-trained leader: a fixed target and a fixed obstacle.
From the above figure, it is seen that the leader can lead the followers to reach the detectable range of the fixed target and simultaneously avoid the fixed obstacle. After the cluster moves to a position closer to the target, the pursuit UAVs cooperate and create a round-up formation for the target, thus completing the mission. The final distances between the pursuit UAVs and the target, as well as the success rate, are shown in Table 7.
Table 7.
The final distances and the success rate with a PPO-trained leader: a fixed target and a fixed obstacle.
Table 7 shows that the success rate is 82.02%, which is a bit lower than that in the scenario with no obstacle and a fixed target. This is because of the obstacle, which imposes difficulty on the round-up mission.
4.4.3. Performance under the Scenario with a Fixed Target and a Moving Obstacle
In this experiment, the scenario includes a fixed target which must be tracked, as well as a moving obstacle. The goal of the cluster is to round up the fixed target while avoiding the moving obstacle. The initial position of the leader is set as m, the initial position of the target is set as m, and the initial position of the obstacle is set as m. The velocity direction vector of the obstacle is , and the magnitude of the velocity of the obstacle is 1 m/s.
Obviously, the leader in this case also needs to be trained based on the PPO, and the relevant reward curve is shown in Figure 21.
Figure 21.
The reward curve of the leader in PPO training: a fixed target and a moving obstacle.
From Figure 21, it can be seen that the reward curve converges after 600 training episodes, and the leader is expected to reach the detectable area of the target while simultaneously avoiding the obstacle. A diagram of the leader, the target, and the moving obstacle after training can be seen in Figure 22.
Figure 22.
The display of the leader after PPO training: a fixed target and a moving obstacle.
After the introduction of the trained leader, the followers should follow the path of the leader to track the target and avoid the obstacle. Once the cluster can detect the target, the target will be rounded up, based on the consistency protocol. The corresponding traces are shown in Figure 23, where the gray range represents the impact range of the obstacle.
Figure 23.
The traces of the pursuit cluster and the target with a PPO-trained leader: a fixed target and a moving obstacle.
From Figure 23a, it can be seen that the leader leads the cluster to avoid the obstacle by turning at a specific angle. Additionally, as the obstacle and the cluster move, the cluster finds a suitable path to approach the target and avoid the obstacle, as shown in Figure 23b. Finally, with the help of the consistency protocol, the cluster rounds up the target, thus completing the mission, as illustrated in Figure 23c. The final distances between the pursuit UAVs and the target, as well as the success rate, are shown in Table 8.
Table 8.
The final distances and the success rate with a PPO-trained leader: a fixed target and a moving obstacle.
Table 8 shows that the success rate is 83.35%, which is a bit higher than that in the scenario with a fixed obstacle. This is because the speeds of the pursuit UAVs are faster than that of the obstacle, and the moving trend of the obstacle also affects the result. However, due to the presence of the obstacle, the success rate here is still lower than that in the scenario with no obstacle.
5. Conclusions
To deal with potential failure when rounding up a target with a multi-UAV cluster, this paper proposes a consistent round-up strategy based on PPO path optimization. The involved multi-UAV cluster adopts the leader–follower structure. When the target is out of the detectable range or an obstacle exists nearby, the leader is trained using the PPO algorithm to guide the followers to approach the target and to avoid the obstacles. Once the cluster can detect the target, the pursuit cluster can round up the target through a designed formation based on the Apollonian circle and the consistency protocol. In the experiments, the success rates of the pursuit multi-UAV group for rounding up the target remain above 80% in the four testing scenarios. Additionally, we found that the motion and the presence of the obstacle affect the performance of the pursuit cluster in different ways.
Author Contributions
Conceptualization, X.W., Z.Y., X.B. and D.R.; methodology, X.W., Z.Y., M.J. and H.L.; software, Z.Y., X.W. and D.R.; validation, X.W., Z.Y., H.L. and X.B.; formal analysis, Z.Y., M.J. and H.L.; investigation, Z.Y., X.W. and D.R.; resources, X.W., Z.Y. and H.L.; data curation, X.W., Z.Y. and D.R.; writing—original draft preparation, X.W., Z.Y., X.B. and D.R.; writing—review and editing, X.W., Z.Y., H.L. and M.J.; visualization, Z.Y., X.B. and H.L.; supervision, X.W., M.J. and D.R. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The data is unavailable.
Conflicts of Interest
The authors declare that they have no known competing financial interests or personal relations that could have appeared to influence the work reported in this paper.
Appendix A. Reward Design for the Leader in the PPO Algorithm
When training the leader in the cluster using PPO, the training objectives basically consist of the following: (a) enabling the leader to approach the detectable range of the target; (b) avoiding the obstacles; and (c) making the flight path to the target as short as possible. Therefore, the reward function in PPO can be designed as:
$$
r=r_{pos}+r_{dir}
$$
where $r_{pos}$ is the reward obtained by the leader at each position state, and $r_{dir}$ is the reward associated with its moving direction.
Since we hope that the leader can safely approach the target using the shortest path, the designed $r_{pos}$ contains two components, shown as:
$$
r_{pos}=r_{t}+r_{o}
$$
where $r_{t}$ reflects the time for the leader to move to the required area, and $r_{o}$ guides the leader to avoid the obstacles. The reward $r_{t}$ is designed based on a positive constant, the distance $d_{lt}$ of the leader from the target, and the radius $R_{d}$ of the detectable range. Moreover, $r_{o}$ is designed based on another positive constant, the relative distance $d_{lo}$ of the leader from the obstacle, and the maximum radius $R_{o}$ of the obstacle’s impact range. From the expression of $r_{o}$, it can be seen that the closer the leader is to the obstacle, the greater the penalty obtained by the leader.
In order to bring the leader to the required area while avoiding the obstacle more quickly, the design of $r_{dir}$ should consider the best direction for the leader at each time step. Figure A1 shows the diagram of the locations of the leader, the obstacle, and the required area.
Figure A1.
Diagram of the locations of the leader, obstacle, and the required area.
In Figure A1, it can be seen that the ideal moving direction for the leader, denoted as $\mathbf{e}^{*}$, can be determined from the relative locations of the leader, the obstacle, and the required area. If the actual moving direction of the leader is $\mathbf{e}$, the deviation angle $\varphi$ between the two directions can be obtained as follows:
$$
\varphi=\arccos\!\left(\frac{\mathbf{e}\cdot\mathbf{e}^{*}}{\|\mathbf{e}\|\,\|\mathbf{e}^{*}\|}\right)
$$
Thus, $r_{dir}$ is designed as a decreasing function of $\varphi$, which indicates that a smaller value of $\varphi$ corresponds to a higher reward $r_{dir}$.
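Because the displayed reward expressions are not reproduced above, the sketch below implements one plausible shape for r_pos (approach term plus obstacle penalty) and r_dir (deviation-angle term) that matches the verbal description; the constants k1–k3 and the functional forms are assumptions, not the paper's exact reward.

```python
import numpy as np

def leader_reward(pos, heading, target, obstacle, R_d, R_o,
                  k1=1.0, k2=1.0, k3=1.0):
    """Illustrative reward r = r_pos + r_dir for the leader (assumed forms).

    pos, heading     : leader position and unit heading vector
    target, obstacle : target and obstacle positions
    R_d              : radius of the target's detectable range
    R_o              : maximum radius of the obstacle's impact range
    k1, k2, k3       : positive constants (assumed names)
    """
    pos = np.asarray(pos, float)
    d_lt = np.linalg.norm(np.asarray(target, float) - pos)
    d_lo = np.linalg.norm(np.asarray(obstacle, float) - pos)

    r_t = -k1 * max(d_lt - R_d, 0.0)        # push the leader toward the detectable area
    r_o = -k2 * max(R_o - d_lo, 0.0)        # larger penalty the closer the obstacle
    r_pos = r_t + r_o

    ideal = (np.asarray(target, float) - pos) / (d_lt + 1e-9)   # ideal direction e*
    phi = np.arccos(np.clip(np.dot(heading, ideal), -1.0, 1.0)) # deviation angle
    r_dir = -k3 * phi                       # smaller deviation, higher reward

    # Terminal bonuses (+100 on task completion, -100 on collision, Section 4.1)
    # would be added by the environment on top of this shaping reward.
    return r_pos + r_dir
```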
Appendix B. Target Escape Strategy
The target to be captured is set to have two moving strategies and a certain ability to detect the pursuit UAVs. When it judges that the situation is safe, it moves along its designed flight path; however, when it recognizes any pursuit UAV, it escapes. The escape route is determined by the target’s position, the position at the moment the target finds itself being tracked by a pursuit UAV, and the target’s velocity. The escape velocity is updated according to the number of pursuit UAVs identified by the target, the states of these pursuit UAVs, and the maximum flight speed of the target.
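The displayed escape equations are likewise not reproduced above; the sketch below assumes a simple rule in which the target flees at its maximum speed in the direction opposite to the resultant bearing of the pursuit UAVs it has identified. Both the direction rule and the function names are assumptions for illustration.

```python
import numpy as np

def escape_velocity(target_pos, pursuer_positions, v_max):
    """Assumed escape rule: flee from the detected pursuers at maximum speed.

    target_pos        : current target position
    pursuer_positions : positions of the pursuit UAVs identified by the target
    v_max             : maximum flight speed of the target
    """
    target_pos = np.asarray(target_pos, float)
    pursuers = np.asarray(pursuer_positions, float)
    away = np.sum(target_pos - pursuers, axis=0)   # resultant direction away from pursuers
    norm = np.linalg.norm(away)
    if norm < 1e-9:                                # pursuers symmetric around the target
        return np.zeros(2)
    return v_max * away / norm

def escape_position(alert_pos, velocity, dt):
    """Escape route: integrate the position from the moment the target is alerted."""
    return np.asarray(alert_pos, float) + velocity * dt
```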
References
- Xu, Z.; Yang, Y.; Shi, B. Joint Optimization of Trajectory and Frequency in Energy Constrained Multi-UAV Assisted MEC System. In Proceedings of the International Conference on Service-Oriented Computing; Troya, J., Medjahed, B., Piattini, M., Yao, L., Fernández, P., Ruiz-Cortés, A., Eds.; Springer Nature: Cham, Switzerland, 2022; pp. 422–429. [Google Scholar]
- Kada, B.; Khalid, M.; Shaikh, M.S. Distributed cooperative control of autonomous multi-agent UAV systems using smooth control. J. Syst. Eng. Electron. 2020, 31, 1297–1307. [Google Scholar] [CrossRef]
- Wang, X. Prospects for the Future Development of China’s Space Transportation System. Space Sci. Technol. 2021, 2021, 9769856. [Google Scholar] [CrossRef]
- Zhang, F.; Shi, Q.; Cheng, G. Multi-agent Collaborative Participation of Agricultural Machinery Service, High Quality Operation and Agricultural Production Efficiency: A Case Study of Conservation Tillage Technology. 2023. Available online: https://www.researchsquare.com/article/rs-2424721/v1 (accessed on 19 October 2023).
- Reynolds, C.W. Flocks, herds and schools: A distributed behavioral model. ACM SIGGRAPH Comput. Graph. 1987, 21, 25–34. [Google Scholar] [CrossRef]
- Zomaya, A.Y. (Ed.) Handbook of Nature-Inspired and Innovative Computing: Integrating Classical Models with Emerging Technologies; Springer Science + Business Media: New York, NY, USA, 2006. [Google Scholar]
- Vicsek, T.; Czirók, A.; Ben-Jacob, E.; Cohen, I.; Shochet, O. Novel Type of Phase Transition in a System of Self-Driven Particles. Phys. Rev. Lett. 1995, 75, 1226–1229. [Google Scholar] [CrossRef]
- Zhu, X.; Lai, J.; Chen, S. Cooperative Location Method for Leader-Follower UAV Formation Based on Follower UAV’s Moving Vector. Sensors 2022, 22, 7125. [Google Scholar] [CrossRef]
- Santana, L.V.; Brandao, A.S.; Sarcinelli-Filho, M. On the Design of Outdoor Leader-Follower UAV-Formation Controllers from a Practical Point of View. IEEE Access 2021, 9, 107493–107501. [Google Scholar] [CrossRef]
- Shen, Y.; Wei, C. Multi-UAV flocking control with individual properties inspired by bird behavior. Aerosp. Sci. Technol. 2022, 130, 107882. [Google Scholar] [CrossRef]
- Lopez-Nicolas, G.; Aranda, M.; Mezouar, Y. Adaptive Multirobot Formation Planning to Enclose and Track a Target with Motion and Visibility Constraints. IEEE Trans. Robot. 2019, 36, 142–156. [Google Scholar] [CrossRef]
- Song, C.; Liu, L.; Xu, S. Circle Formation Control of Mobile Agents with Limited Interaction Range. IEEE Trans. Autom. Control. 2018, 64, 2115–2121. [Google Scholar] [CrossRef]
- Hu, Q.; Shi, Y.; Wang, C. Event-Based Formation Coordinated Control for Multiple Spacecraft Under Communication Constraints. IEEE Trans. Syst. Man Cybern. Syst. 2019, 51, 3168–3179. [Google Scholar] [CrossRef]
- Nguyen, T.T.; Nguyen, N.D.; Nahavandi, S. Deep Reinforcement Learning for Multiagent Systems: A Review of Challenges, Solutions, and Applications. IEEE Trans. Cybern. 2020, 50, 3826–3839. [Google Scholar] [CrossRef]
- Jiang, Z.; Cao, X.; Huang, X.; Li, H.; Ceccarelli, M. Progress and Development Trend of Space Intelligent Robot Technology. Space Sci. Technol. 2022, 2022, 9832053. [Google Scholar] [CrossRef]
- Bansal, T.; Pachocki, J.; Sidor, S.; Sutskever, I.; Mordatch, I. Emergent Complexity via Multi-Agent Competition. arXiv 2018, arXiv:1710.03748. [Google Scholar]
- Hu, H.; Wu, D.; Zhou, F.; Zhu, X.; Hu, R.Q.; Zhu, H. Intelligent Resource Allocation for Edge-Cloud Collaborative Networks: A Hybrid DDPG-D3QN Approach. IEEE Trans. Veh. Technol. 2023, 72, 10696–10709. [Google Scholar] [CrossRef]
- Mei, J.; Chung, W.; Thomas, V.; Dai, B.; Szepesvari, C.; Schuurmans, D. The Role of Baselines in Policy Gradient Optimization. Adv. Neural Inf. Process. Syst. 2022, 35, 17818–17830. [Google Scholar]
- Zhuang, Z.; Lei, K.; Liu, J.; Wang, D.; Guo, Y. Behavior Proximal Policy Optimization. arXiv 2023, arXiv:2302.11312. [Google Scholar]
- Liu, Z.; Song, Y.; Zhang, Y. Actor-Director-Critic: A Novel Deep Reinforcement Learning Framework. arXiv 2023, arXiv:2301.03887. [Google Scholar]
- Li, B.; Liang, S.; Gan, Z.; Chen, D.; Gao, P. Research on multi-UAV task decision-making based on improved MADDPG algorithm and transfer learning. IJBIC 2021, 18, 82. [Google Scholar] [CrossRef]
- Hu, J.; Hu, S.; Liao, S. Policy Regularization via Noisy Advantage Values for Cooperative Multi-agent Actor-Critic methods. arXiv 2023, arXiv:2106.14334. [Google Scholar]
- Ackermann, J.; Gabler, V.; Osa, T.; Sugiyama, M. Reducing Overestimation Bias in Multi-Agent Domains Using Double Centralized Critics. arXiv 2023, arXiv:1910.01465. [Google Scholar]
- Liu, K.; Zhao, Y.; Wang, G.; Peng, B. Self-attention-based multi-agent continuous control method in cooperative environments. Inf. Sci. 2021, 585, 454–470. [Google Scholar] [CrossRef]
- Chen, Q.; Wang, Y.; Jin, Y.; Wang, T.; Nie, X.; Yan, T. A Survey of an Intelligent Multi-Agent Formation Control. Appl. Sci. 2023, 13, 5934. [Google Scholar] [CrossRef]
- Mushtaq, A.; Haq, I.U.; Sarwar, M.A.; Khan, A.; Khalil, W.; Mughal, M.A. Multi-Agent Reinforcement Learning for Traffic Flow Management of Autonomous Vehicles. Sensors 2023, 23, 2373. [Google Scholar] [CrossRef]
- Wang, B.; Zhou, K.; Qu, J. Research on Multi-robot Local Path Planning Based on Improved Artificial Potential Field Method. In Advances in Intelligent Systems and Computing; Krömer, P., Zhang, H., Liang, Y., Pan, J.-S., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 684–690. [Google Scholar] [CrossRef]
- Kumar, A.; Ojha, A. Experimental Evaluation of Certain Pursuit and Evasion Schemes for Wheeled Mobile Robots. Int. J. Autom. Comput. 2018, 16, 491–510. [Google Scholar] [CrossRef]
- Li, S.E. Direct RL with Policy Gradient. In Reinforcement Learning for Sequential Decision and Optimal Control; Springer Nature: Singapore, 2023; pp. 187–229. [Google Scholar] [CrossRef]
- Wang, X.; Ma, Z.; Mao, L.; Sun, K.; Huang, X.; Fan, C.; Li, J. Accelerating Fuzzy Actor–Critic Learning via Suboptimal Knowledge for a Multi-Agent Tracking Problem. Electronics 2023, 12, 1852. [Google Scholar] [CrossRef]
- Lebedev, I.; Lebedeva, V. Analysis of «Leader—Followers» Algorithms in Problem of Trajectory Planning for a Group of Multi-rotor UAVs. In Software Engineering Application in Informatics; Silhavy, R., Silhavy, P., Prokopova, Z., Eds.; Springer International Publishing: Cham, Switzerland, 2021; Volume 232, pp. 870–884. [Google Scholar]
- Zhao, H.; Peng, L.; Zhu, F. Research on Formation Algorithm Based on Second-order Delay Multi-Agent System. In Proceedings of the ICRCA 2019: 2019 The 4th International Conference on Robotics, Control and Automation, Shenzhen, China, 19–21 July 2019; pp. 168–172. [Google Scholar]
- Dorothy, M.; Maity, D.; Shishika, D.; Von Moll, A. One Apollonius Circle is Enough for Many Pursuit-Evasion Games. arXiv 2022, arXiv:2111.09205. [Google Scholar]
- Ramana, M.V.; Kothari, M. Pursuit-Evasion Games of High Speed Evader. J. Intell. Robot. Syst. 2016, 85, 293–306. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).