Mission Chain Driven Unmanned Aerial Vehicle Swarms Cooperation for the Search and Rescue of Outdoor Injured Human Targets

A novel cooperative strategy for distributed unmanned aerial vehicle (UAV) swarms with different functions, namely the mission chain-driven UAV swarms cooperation method, is proposed to enable the fast search and timely rescue of injured human targets in a wide-area outdoor environment. First, a UAV-camera unit detects suspected human targets using improved deep learning technology, and the target location information is shared over a self-organizing network. Then, a dedicated bio-radar-UAV unit is dispatched to recheck survivors through a respiratory characteristic detection algorithm. Finally, driven by the location and vital-sign status of the injured, a nearby emergency-UAV unit performs the corresponding medical emergency mission, such as dropping emergency supplies. Experimental results show that this strategy can autonomously and effectively identify human targets in the outdoor environment, and that the target detection, target sensing, and medical emergency mission chain is completed successfully in the cooperative working mode, which is meaningful for future search and rescue missions for outdoor injured human targets.


Introduction
After natural disasters, wars, and other public safety events, complex environments pose severe challenges for the search for the wounded. A wide distress area makes the search inefficient, so the best rescue window for the wounded is missed. Moreover, if rescuers can obtain the location and life status of the injured in a timely manner, the rescue effect improves considerably. At present, wounded-search equipment mainly comprises individual (single-soldier) search equipment and wounded-search unmanned aerial vehicles. Common single-soldier search equipment includes chest bands, wristbands, and handheld search devices, and the main vital signs monitored are breathing, heart rate, and blood oxygen [1]. This kind of equipment has two deficiencies: first, it must be distributed in advance and can hinder the wearer's movement; second, when it is damaged by impact, fire, etc., the accuracy of the collected life information of the injured is reduced. To overcome the limitations of wearable technology, researchers can effectively improve the search efficiency for the outdoor injured by using UAVs with onboard edge computing, which improves the response speed, although the onboard computing power is lower than that of ground station equipment.
The construction of shared networks makes collaboration between UAVs with different functions possible. In addition, the real-time search for injured outdoor human targets in the jungle requires a high-performance object-detection algorithm. A large number of object-detection algorithms based on deep convolutional neural networks (CNNs) have been proposed to extract target features and improve detection accuracy. Such algorithms fall into two categories: regression-based single-stage algorithms, which directly use a CNN to extract features and predict object class and location, and two-stage algorithms, which first generate preselected boxes (region proposals) that may contain the objects to be inspected and then use a CNN to classify each box. Two-stage algorithms include R-CNN and Faster R-CNN [14]; typical single-stage detectors include single-shot detection (SSD) [15] and You Only Look Once (YOLO) [16]. Although single-stage networks are less accurate than two-stage networks, they outperform them in processing speed. Thanks to this, single-stage networks deployed on Nvidia edge devices can be optimized through TensorRT to meet real-time and accuracy requirements [17], which greatly improves inference speed, for example in vehicle recognition. For this reason, TensorRT-optimized YOLO is suitable and thus adopted in our search task for outdoor injured human targets.
Subsequently, after the UAV cluster obtains the target detection results of the injured outdoor personnel, it is necessary to judge the survival status of the target in time to provide detailed data support for the rescue plan. To realize this, acquiring vital signs of the target is a convincing and direct way and serves as a trigger for the emergency UAV to throw corresponding medical emergency supplies.
In this paper, a novel mission chain-driven unmanned aerial vehicle swarms cooperation method is proposed to improve the performance of UAV swarms during the search-rescue task for injured outdoor human targets and solve the problems of low endurance and low efficiency of individual UAVs. It combines long-range radio (LoRa) self-organizing network, machine vision, bio-radar, and medical emergency together. Thus, the human targets would be screened twice via different modal sensors, providing effective rescue guidance for ground search and rescue personnel.

System Design
The UAV swarm collaboration system, shown in Figure 1, consists of the LoRa self-organizing network, human target detection, and remote sensing of breath signals. In one mission execution, the system first uses the LoRa self-organizing network to establish information sharing between UAVs. Second, the onboard edge device (Nvidia Jetson Nano) runs an object-detection algorithm to process the images captured by the camera in real time. Finally, the UAV equipped with a bio-radar further perceives the breath signals of the remotely sensed target, and UAVs carrying emergency relief supplies deliver medical supplies to the targets. A schematic diagram of the collaboration between UAVs with different functions is shown in Figure 2.


UAV Swarms
For UAV swarms that collaborate on missions outdoors, excellent manoeuvrability, flexibility, and hover stability are essential. For better communication and control, we chose a quadcopter UAV as the delivery platform, which cooperates with the Jetson Nano to fly autonomously and combines external modules, such as communication modules and cameras, into a multifunctional intelligent unmanned aerial vehicle system. The 12.4-megapixel camera can provide an effective target detection image at a height of 70 metres. This program-based control design greatly improves the intelligent control performance of the UAV cluster and reduces manpower expenditure in the search and rescue mission. Figure 3 shows the appearance of these UAV swarms.


Bio-Radar Module and First Aid Kit
The bio-radar sensor (model: JC122-3.3UA6) was custom-developed. When a person breathes, the minute displacement of the thoracic cavity reflects the microwave emitted by the radar sensor and generates an echo signal. According to the Doppler effect [18,19], there is a phase difference between the original radar beams and the echo beams. The relationship between the phase variation and the chest displacement can be expressed as:

∆θ(t) = 4π∆x(t)/λ

where ∆x(t) is the chest displacement, λ is the wavelength of the bio-radar, and ∆θ(t) is the phase change introduced by the breathing activity of the human subject.
After analogue amplification and filtering, the respirational waveform can be obtained. Observing this waveform can be used to judge the physiological condition of a person. The bioradar operated with a wavelength of 1.25 cm and provided continuous linear waves with a maximum transmission power of 1 mW. Taking advantage of the wide sensing range of the bio-radar (horizontal angle: ±60°, vertical angle: ±16°), the respiration signal of the tested target is detected by throwing multiple bio-radars at different angles and distances.
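As an illustration of how the phase-displacement relationship above can be used, the following minimal Python sketch (not the authors' implementation; the sampling rate, breathing band, and signal shape are assumed values) converts an unwrapped radar phase signal to chest displacement and estimates the breathing rate from its spectrum:

```python
import numpy as np

def respiration_rate(phase, fs, wavelength_m=0.0125):
    """Estimate breathing rate (breaths/min) from an unwrapped Doppler
    phase signal sampled at fs Hz."""
    # Invert the Doppler relation: dx(t) = lambda * dtheta(t) / (4*pi)
    disp = wavelength_m * phase / (4 * np.pi)
    disp = disp - disp.mean()  # remove DC before spectral analysis
    spec = np.abs(np.fft.rfft(disp))
    freqs = np.fft.rfftfreq(len(disp), 1 / fs)
    # Pick the strongest component in a typical resting breathing band
    band = (freqs >= 0.1) & (freqs <= 0.5)
    f_breath = freqs[band][np.argmax(spec[band])]
    return 60.0 * f_breath
```

With a synthetic 0.25 Hz phase oscillation, the estimate comes out near 15 breaths per minute, which is in the normal resting range.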
The appearance of the outdoor first aid kit is shown in Figure 4. The first aid kit is equipped with commonly used medicines and equipment, including scissors, fixing belts, tourniquets, cotton wool, various dressings, haemostatic drugs (powder), quick-acting rescue pills, hypertension drugs, nitroglycerine, Star intestine medicine, malaria medicine, cold medicine, cough medicine, etc.

Control and Information Processing Center
Jetson Nano is a powerful embedded device produced by Nvidia. It contains a 128-core Maxwell-architecture graphics processing unit (GPU) and achieves a good balance of power consumption, volume, and price. The official test frame rate of Tiny YOLO running on the Jetson Nano after TensorRT acceleration is 38 FPS. Combining small size with sufficient computing power, the Jetson Nano can meet the needs of onboard suspected-target detection.
The model of the LoRa chip we used for networking is ATK-LoRa-SX1278. Based on spread spectrum technology, the LoRa chip can perform ultralong-distance wireless transmission and has the characteristics of low power consumption and many networking nodes. Jetson Nano will configure the LoRa channel, baud rate, and other parameters to build a communication network between UAVs.
This shared network could further achieve the function of the secondary screening of suspected targets through UAV-based radar. Based on the open-source Python-dronekit control library, we have developed programs for autonomous flight of UAVs based on shared point locations. The UAV that performs the detection of life forms will throw the perception module equipped with bio-radar near the target to obtain the target's respiration signal.
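The paper does not specify the packet format used on the LoRa network; a hypothetical compact payload for sharing a detected target's position between UAVs might look like the following sketch (field names and sizes are our assumptions):

```python
import struct

# Hypothetical fixed-size payload: 1-byte UAV id, two float64
# coordinates (degrees), and a uint16 altitude in metres.
FMT = "<BddH"

def encode_target(uav_id, lat, lon, alt_m):
    """Pack a detected target's position into a compact byte string."""
    return struct.pack(FMT, uav_id, lat, lon, alt_m)

def decode_target(payload):
    """Unpack a payload produced by encode_target."""
    uav_id, lat, lon, alt_m = struct.unpack(FMT, payload)
    return {"uav_id": uav_id, "lat": lat, "lon": lon, "alt_m": alt_m}
```

Keeping the payload at 19 bytes matters on LoRa, where airtime grows quickly with packet size at low spreading-factor data rates.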


Mission Chain Driven UAV Swarms Cooperation Algorithm
Based on the above platform, for the real-time search and rescue of injured human targets in a wide-area environment, we propose a cooperative process driven by the mission chain and based on functional differences. The collaboration process of the UAV swarm involves three functional types of UAVs: detect, sense, and supply. Schematic diagrams of the three are shown in Figure 5.
(1) UAV-camera-based suspected human target detection. After the UAVs receive the take-off command, they automatically form a formation, fly to the mission point, and perform the search task according to the "zigzag" pattern.
(2) UAV-radar-based human target reconfirmation. We wrote a Python script to obtain the location of the UAV when the target is detected and share the location information of the injured with the sensing UAV through the self-organizing network. The sensing UAV autonomously flies near the injured person and throws a sensing module to further obtain the breath signal of the target.
(3) Medical emergencies through the emergency UAV. Finally, after target survival is determined, the UAVs that deliver emergency rescue supplies provide the necessary support to keep the wounded alive.
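The "zigzag" coverage pattern in step (1) can be sketched as a boustrophedon waypoint generator. This is an illustrative reconstruction, not the authors' flight code; coordinates are local metres and the sweep spacing is a free parameter:

```python
def zigzag_waypoints(x0, y0, width, height, spacing):
    """Generate boustrophedon ("zigzag") waypoints covering a rectangle.

    Sweeps parallel lines `spacing` metres apart, alternating direction,
    starting at the corner (x0, y0).
    """
    waypoints = []
    y = y0
    leftward = False
    while y <= y0 + height:
        if leftward:
            waypoints += [(x0 + width, y), (x0, y)]
        else:
            waypoints += [(x0, y), (x0 + width, y)]
        leftward = not leftward  # reverse direction each sweep
        y += spacing
    return waypoints
```

In practice the spacing would be chosen from the camera footprint at the search altitude (here, roughly the ground coverage at 70 m), so adjacent sweeps overlap slightly.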

UAV-Camera-Based Suspected Human Target Detection
(1) Tiny YOLOv4
To meet the real-time and accuracy requirements of target recognition and detection algorithms on airborne edge devices, we evaluated the recognition and detection performance of the YOLO series of algorithms on low-contrast targets in the grass, including YOLOv3, YOLOv4, and YOLOv4-tiny. YOLOv4-tiny is a simplified, lightweight version of YOLOv4. The overall network has a total of 38 layers and uses three residual units; the activation function is LeakyReLU, the classification and regression of the target use two feature layers, and a feature pyramid network (FPN) is used when merging the effective feature layers. The feature structure of YOLOv4-tiny is shown in Figure 6.
(2) K-means clustering methods
In object recognition and detection, the network model learns the target category from multiple features and must also learn the position and size of the target in the image. Therefore, algorithms such as the Faster region-based convolutional neural network (Faster R-CNN), SSD, and YOLO preset a set of reference boxes of different sizes and aspect ratios on the image before detection. These cover almost all positions of the picture so that the width and height of the targets in the datasets can be matched and the target box calculated faster and better. This reference box is called the anchor in Faster R-CNN and the prior bounding box in SSD. Starting with YOLOv2, the YOLO series also uses an anchor mechanism. The difference is that in Faster R-CNN, anchors are set manually, so for different datasets one must preset appropriate anchors based on the target size, whereas YOLO uses the k-means clustering algorithm to cluster the bounding boxes of the training set; finding appropriate anchor sizes this way solves the problem very well.
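A minimal sketch of this anchor clustering, using 1 − IoU between box sizes as the distance metric, is shown below. The deterministic area-spread initialization is our simplification for reproducibility; common YOLO implementations seed the clusters randomly:

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (w, h) pairs, with boxes and anchors aligned at a corner."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0])
             * np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    areas = boxes[:, 0] * boxes[:, 1]
    anchor_areas = anchors[:, 0] * anchors[:, 1]
    return inter / (areas[:, None] + anchor_areas[None, :] - inter)

def kmeans_anchors(boxes, k, iters=50):
    """Cluster training-set box sizes into k anchors using 1 - IoU distance."""
    boxes = np.asarray(boxes, dtype=float)
    # Deterministic init: spread the k seeds across the box-area range
    order = np.argsort(boxes[:, 0] * boxes[:, 1])
    anchors = boxes[order[np.linspace(0, len(boxes) - 1, k).astype(int)]].copy()
    for _ in range(iters):
        # Assign each box to the anchor it overlaps most (max IoU)
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = boxes[assign == j].mean(axis=0)
    return anchors
```

For a dataset dominated by small, low-contrast human targets seen from 70 m, clustering pulls the anchors toward those small sizes, which is exactly the accuracy gain reported for the clustered model below.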

(3) TensorRT Acceleration
In actual deep learning model deployment, inference with the original network framework is relatively inefficient, so Nvidia developed the TensorRT inference library for its own GPUs. TensorRT is a C++ inference framework that runs on Nvidia's various GPU hardware platforms. We can train the model in PyTorch, TensorFlow, or another framework, convert it to the TensorRT format, and then run it with the TensorRT inference engine, thereby improving the speed of the model on Nvidia GPUs.

UAV-Radar-Based Human Target Reconfirmation
After receiving the location information of the suspected target, the UAV equipped with bio-radar flies near the target, collects the breath signal of the suspected target by throwing the sensing device, determines whether the target is still alive, and transmits this information to the next stage of the mission chain. Two important problems must be solved in sensing the life information of the wounded: acquiring the information of the wounded and transmitting it over a long distance. Therefore, the system adopts a modular design composed of three modules: a life sensing device, an air communication device, and a ground station processing terminal, as shown in Figure 7.
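One simple way to turn the collected breath signal into an alive/not-alive decision is to test whether the spectral energy concentrates in the typical breathing band. The sketch below is illustrative only; the band limits and the 0.5 energy-ratio threshold are assumed tuning values, not parameters from the paper:

```python
import numpy as np

def breathing_detected(signal, fs, band=(0.1, 0.5), threshold=0.5):
    """Decide whether a respiration-like component dominates the signal.

    Compares spectral energy inside the breathing band against total
    energy (DC excluded); `threshold` is an assumed tuning parameter.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = spec[1:].sum()
    if total == 0:
        return False
    return bool(spec[in_band].sum() / total > threshold)
```

A clean periodic chest movement passes this test, while broadband noise (wind, platform vibration) spreads its energy across the spectrum and fails it.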

The air communication device is carried on the UAV. It is responsible for connecting the life sensing device and the ground station processing terminal, using LoRa and radio to transmit data. First, the life sensing device autonomously raises an alarm after detecting a life signal. It simultaneously transmits the life information of the wounded to the airborne communication terminal, and then the wireless transmitter relays the information to the ground workstation processing terminal. A physical diagram of the air communication device is shown in Figure 9.


Experiments and Discussion
To verify the actual performance of the UAV swarm, the experiment was divided into four parts:
(1) Multi-UAV cooperation: the communication distance of the ad hoc network and the task coordination based on functional differences were tested.
(2) Human target detection: the YOLOv4-tiny algorithm was tested for the accuracy and speed of object recognition and detection.
(3) Human target reconfirmation: the accuracy of the sensing device in acquiring the respiration signal of human targets was tested and analysed.
(4) Medical emergencies through the emergency UAV.

Multi-UAV Cooperation
First, we tested the communication distance of the LoRa ad hoc network: the maximum communication distance of a single node was measured in a relatively open outdoor environment by varying the distance between the LoRa transmitting and receiving ends. Signal strength and packet loss were tested at 1000 m, 1500 m, and 1600 m. Communication was stable at 1000 m without data loss, the packet loss rate was 5% at 1500 m, and it exceeded 50% at 1600 m.
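Packet loss rates like those above can be computed from the sequence numbers of the packets that arrive; a trivial helper (illustrative, not the authors' test harness):

```python
def packet_loss_rate(received_seq, sent_count):
    """Fraction of packets lost, given the unique sequence numbers
    received and the number of packets originally sent."""
    return 1.0 - len(set(received_seq)) / sent_count
```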
Second, the communication distance between the air communication device and the ground station was tested in the same way, and the communication quality between them was stable at 1500 m. By combining two communication modules, the communication range is effectively expanded, and the communication quality is stable. In the subsequent testing process, the communication protocol will be improved to enhance the anti-jamming performance of the system wireless communication in different environments.
Multi-UAV cooperation includes two parts: four-UAV formations and three-UAV cooperation. We conducted a formation test of four UAVs outdoors to verify whether multiple UAVs could respond to ground workstation commands in real-time. As shown in Figure 10a, after receiving the take-off command from the ground workstation, the four UAVs fly to the vicinity of the target area according to the "one-line" formation and the search mission. The test results show that the formation effect is ideal, the action feedback is accurate and timely, and the planned mission area can be covered.
Then, we carried out a simulated search experiment for outdoor injured people in the playground and designed a working group of two search UAVs and one sensing UAV. The two UAVs responsible for the search sent the simulated position of the wounded to the sensing UAV during the coverage search mission. After receiving the position, the sensing UAV quickly flew to the point and threw the respiration-signal sensing module. As shown in Figure 10b, the cooperation of three UAVs was initially realized, and the success rate of information transmission and reception among them (such as the target's position information) reached 65%. We analysed the cause and found that external interference affected the stability of the communication. In the future, we will improve the stability of the communication protocol and continue to improve this accuracy.

Experimental Design and Configuration
This experiment mainly simulates the identification and detection of the wounded in an outdoor low-contrast environment. We built a training platform for the deep learning model and a platform for UAV target identification and detection; the former trains on the data with greater computing power, thus providing good training and testing baselines for the airborne system of the latter.
The main experiment process includes making datasets, model training, and algorithm deployment. First, 2130 images of targets at different heights and positions were collected, and the wounded targets in these images were labelled. The data were divided into a training set and a test set at a ratio of 7:3. The algorithm is trained by a neural network, and the precision and recall rate of the test set are calculated. The parameters of the deep learning platform we use for training data are mainly shown in Table 1.

Experimental Design and Configuration
This experiment mainly simulates the identification and detection of the wounded in an outdoor low-contrast environment. We built a training platform for the deep learning model and a platform for UAV target identification and detection. The former trains the model with greater computing power, thus providing well-trained and validated weights for the airborne system of the latter.
The main experimental process includes dataset construction, model training, and algorithm deployment. First, 2130 images of targets at different heights and positions were collected, and the wounded targets in these images were labelled. The data were divided into a training set and a test set at a ratio of 7:3. The algorithm was trained by a neural network, and the precision and recall of the test set were calculated. The main parameters of the deep learning platform used for training are shown in Table 1.
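The 7:3 train/test split described above can be sketched as follows; the image file names are placeholders standing in for the 2130 labelled images, and the seeded shuffle is our assumption for reproducibility.

```python
import random

# Minimal sketch of the 7:3 train/test split; file names are placeholders
# for the 2130 labelled images described in the text.
def split_dataset(samples, train_ratio=0.7, seed=42):
    """Shuffle reproducibly, then split into train and test subsets."""
    items = list(samples)
    random.Random(seed).shuffle(items)   # fixed seed keeps the split repeatable
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]

images = [f"img_{i:04d}.jpg" for i in range(2130)]
train, test = split_dataset(images)
print(len(train), len(test))  # 1491 639
```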

Results and Analysis
Based on the above dataset and experimental configuration, the dataset was preprocessed by operations such as labelling and normalization. An initial learning rate of 0.001 and a decay parameter of 0.0005 gave the best training effect. Using mean average precision (mAP50, IoU = 0.5) and frames-per-second (FPS) detection speed as evaluation indicators, we compared the detection effects of the original Yolov4-tiny, the Yolov4-tiny with clustered anchor boxes, and the clustered Yolov4-tiny accelerated with TensorRT.
As shown in Table 2, compared to the original Yolov4-tiny, the detection accuracy of Yolov4-tiny after clustering the anchor boxes with the K-means algorithm increased by 23.69%, while the detection speed remained unchanged. The clustered Yolov4-tiny uses anchor-box sizes clustered in advance when predicting targets, which improves the detection effect. Compared with the clustered Yolov4-tiny, the TensorRT-accelerated version decreased detection accuracy by 3.86% but increased detection speed by 171%; compared with the original Yolov4-tiny, the detection speed was improved while maintaining higher detection accuracy. The results show that the algorithm has a good detection effect on low-contrast targets at flight heights below 70 m. As shown in Figure 11, it can accurately detect the wounded in trenches and under weed coverage, and it can meet the needs of UAV target detection.
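The anchor-box clustering step can be sketched with the 1 − IoU distance commonly used for YOLO anchors; the synthetic box sizes below stand in for the labelled bounding boxes, and the exact clustering settings the authors used are not given in the text.

```python
import random

def iou_wh(box, centroid):
    """IoU of two boxes given as (w, h), assuming aligned top-left corners."""
    inter = min(box[0], centroid[0]) * min(box[1], centroid[1])
    union = box[0] * box[1] + centroid[0] * centroid[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """Cluster (w, h) pairs using the 1 - IoU distance for YOLO-style anchors."""
    rng = random.Random(seed)
    centroids = rng.sample(boxes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for b in boxes:
            # assign each box to the centroid with the largest IoU
            # (i.e., the smallest 1 - IoU distance)
            j = max(range(k), key=lambda i: iou_wh(b, centroids[i]))
            clusters[j].append(b)
        # recompute each centroid as the mean (w, h) of its cluster
        new = [
            (sum(b[0] for b in c) / len(c), sum(b[1] for b in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
        if new == centroids:
            break
        centroids = new
    return sorted(centroids)

# synthetic (w, h) pairs standing in for the labelled bounding boxes
boxes = [(10 + i % 5, 20 + i % 7) for i in range(100)]
print(kmeans_anchors(boxes, k=3))
```

The resulting centroids replace the default anchor sizes at prediction time, which is what drives the accuracy gain reported in Table 2.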


Human Target Reconfirmation
After a suspected human target has been detected by the UAV camera, the next necessary mission is to further confirm whether it is a surviving human body and to sense the corresponding physiological state.

Therefore, once a suspected target is detected in the video, the sensing UAV is triggered and the bio-radar module is precisely dropped near the human target. Since the electromagnetic waves emitted by the bio-radar can detect the regular chest-wall movement caused by human respiration, respiratory characteristic analysis of the radar echoes can determine whether the suspected target is a human body. In addition, previous research by our group showed that after human injury (blood loss, temperature loss), human respiratory radar echoes exhibit morphologically specific characteristics, which can be used to assess the physiological state of survivors. The survivor's physiological state, whether good, poor, or critical (dying), directly represents the urgency with which the survivor needs timely treatment.
To test the effectiveness of detecting human respiration signals with the dropped bio-radar, we tested the detection distance and angle of the bio-radar separately. Ideally, it is assumed that the radar can be accurately dropped 0.5 m in front of the human chest. The effective detection distance of the respiration signal was tested by increasing the distance in successive 0.5 m steps. Then, with the human target as the centre, the detection effect at different view angles to the human chest wall (0, 45, 90, 135, and 180 degrees) at 0.5 m was tested, with 0 degrees defined as the direction facing the chest. Finally, the detection effect at different angles combined with different distances was tested.
As the respiratory detection result of a typical example (0 degrees, 2 m) in Figure 12 shows, a strong and regular respiratory signal was clearly acquired, and only weak, chaotic noise remained after the human target left, demonstrating that the respiration feature acquired by the bio-radar can convincingly confirm that the target is human. Moreover, the statistical results at different detection positions (angle and distance) show that human respiration can be detected from almost all view angles within the effective distance. Even the detection failure at 90 degrees can be solved by dropping two radar modules or by using our proposed omnidirectional bio-radar integrated module [20].
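The regularity of the respiration signal can be quantified by locating the dominant spectral peak in the typical breathing band (roughly 0.1-0.5 Hz). The sketch below is our illustration, not the authors' processing chain, and computes the DFT directly for clarity; the simulated 0.3 Hz sinusoid stands in for a real radar displacement signal.

```python
import math

# Sketch: estimate respiratory rate from a radar displacement signal by
# finding the dominant spectral peak in the breathing band (0.1-0.5 Hz).
def respiration_rate_bpm(signal, fs, f_lo=0.1, f_hi=0.5):
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]   # remove the DC component
    best_f, best_p = 0.0, 0.0
    for k in range(1, n // 2):
        f = k * fs / n                      # frequency of DFT bin k in Hz
        if f_lo <= f <= f_hi:
            re = sum(centered[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(centered[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            p = re * re + im * im           # spectral power at bin k
            if p > best_p:
                best_f, best_p = f, p
    return best_f * 60                      # convert Hz to breaths per minute

# simulated 0.3 Hz (18 breaths/min) chest-wall motion sampled at 20 Hz for 30 s
fs = 20.0
sig = [math.sin(2 * math.pi * 0.3 * t / fs) for t in range(600)]
print(round(respiration_rate_bpm(sig, fs)))  # about 18
```

A pure noise echo, by contrast, yields no dominant peak in this band, which is one way the "weak and chaotic noise" case can be distinguished from a live target.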
Moreover, our previous studies have verified that the injured human body goes through several typical physiological stages, including the normal, transitioning, and agonal stages. Fortunately, the bio-radar can effectively detect and judge these stages based on signal features combined with machine learning methods [21]. Consequently, it can help determine the most appropriate rescue strategy for the emergency UAV swarm and ultimately help save more lives.
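The staging decision can be illustrated with a simple rule-based sketch. This is only a placeholder: the actual staging in [21] relies on machine learning over radar signal features, and the features and thresholds below are hypothetical assumptions showing the overall decision flow.

```python
# Illustrative sketch only: the real staging method uses machine learning
# over radar signal features [21]; these thresholds are hypothetical.
def classify_stage(rate_bpm, amplitude_ratio):
    """Map simple respiratory features to a coarse physiological stage.

    rate_bpm        -- estimated breaths per minute
    amplitude_ratio -- breathing amplitude relative to a normal baseline
    """
    if rate_bpm == 0 or amplitude_ratio < 0.05:
        return "agonal"        # breathing nearly absent: highest rescue priority
    if rate_bpm < 8 or amplitude_ratio < 0.4:
        return "transitioning" # weakened breathing: urgent attention needed
    return "normal"            # regular breathing: lower priority

print(classify_stage(16, 1.0))   # normal
print(classify_stage(6, 0.5))    # transitioning
print(classify_stage(0, 0.0))    # agonal
```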

Medical Emergencies through the Emergency UAV
The degree of urgency and the priority of medical treatment depend mainly on the degree of injury and the physiological state. Therefore, the medical emergency UAVs perform appropriate treatment missions according to the physiological state evaluated from the detected respiratory radar signal. As shown in Figure 13, this type of UAV is equipped with a throwing switch and an outdoor first-aid kit for autonomously dropping medical supplies. Bandages and haemostatic drugs can provide the injured with timely treatment for trauma. In addition, quick-acting rescue pills, cold medicine, and nitroglycerin can relieve some sudden acute symptoms and effectively buy more rescue time for those in distress.
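Ordering rescue missions by urgency can be sketched as a priority queue keyed on the assessed physiological stage. The severity ranking, target records, and tie-breaking by discovery time are our illustrative assumptions, not the authors' scheduling method.

```python
import heapq

# Hypothetical prioritisation sketch: serve the most critical survivors first.
SEVERITY = {"agonal": 0, "transitioning": 1, "normal": 2}  # lower = more urgent

def rescue_order(targets):
    """Return target IDs ordered by physiological urgency, then discovery time."""
    heap = [(SEVERITY[t["stage"]], t["found_at"], t["id"]) for t in targets]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

targets = [
    {"id": "T1", "stage": "normal", "found_at": 10},
    {"id": "T2", "stage": "agonal", "found_at": 25},
    {"id": "T3", "stage": "transitioning", "found_at": 5},
]
print(rescue_order(targets))  # ['T2', 'T3', 'T1']
```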
Drones 2022, 6, 138
Figure 13. Emergency UAV for emergency supply.

Conclusions
Targeting the challenging mission chain of searching for, sensing, and providing emergency treatment to injured human targets in a wide-area outdoor jungle environment, a novel cooperative strategy for distributed UAV swarms is proposed. In this way, the multi-UAV network works collaboratively to perform a quick search, accurate sensing, and timely medical treatment. This provides a strong basis for planning rescue programs, reduces the difficulty of searching for the wounded, and effectively improves the efficiency of search and rescue. The combination of casualty search with technologies such as drones, machine vision, and bio-radar will transform traditional search and rescue methods. Admittedly, the collaborative search and rescue of the outdoor injured based on distributed drone swarms still needs continuous improvement, for example in the accuracy of physiological state evaluation. Nevertheless, this work promises to lay a foundation for the intelligent search and rescue of the injured in the new era.