In the previous sections, an animal tracking approach and a face detection technique based on aerial images have been presented. However, for these techniques to be useful in the fight against poachers, autonomous control of the UAV is mandatory. This section presents a vision-based control approach that closes the control loop by using the result of an image processing algorithm as the input of the controller. Our work includes vision-based control to follow suspicious vehicles (potential poachers) and to accomplish an autonomous landing to recover the UAV at the end of a surveillance mission and recharge its batteries. In this way, we present a potential full surveillance mission in which the UAV takes off from a specific place and then follows GPS waypoints (tasks that are already integrated in most commercially available UAVs) to patrol a specific area of a natural park. If an animal or a group of animals is detected during patrolling, it is tracked to gather information about its status and to detect a potential poacher attack. If people are found, the system has to detect their faces and store these data for the security authorities (these two image processing algorithms were presented previously in this paper). Any potential vehicle should also be tracked; in this case, the UAV is able to follow the suspicious vehicle along a specific trajectory relative to it, and during the vehicle-following task, the GPS position is shared with the security authorities. Finally, the UAV has to return to the closest (moving or static) base station and accomplish an autonomous landing to recharge its batteries and/or to be prepared for the next mission. In this section, we present the control approach used to follow vehicles and to land autonomously on both static and moving bases.
7.1. Vision-Based Fuzzy Control System Approach
The presented control approach for UAVs is designed to use an onboard downward-looking camera and an inertial measurement unit (IMU). The information extracted from these two sensors is used to estimate the pose of the UAV in order to control it to follow vehicles and to land on top of ground vehicles or moving targets. The computer vision algorithm estimates the 3D pose based on homographies. The homography is computed with respect to a known target, an augmented reality (AR) code, which is detected with a ROS implementation of the ArUco library [77]. The result of this algorithm is the pose estimate of the multi-copter with respect to the AR code. Multi-copters, like all rotary-wing platforms, differ from fixed-wing aircraft in that their motion is coupled to the tilt of the thrust vector: they cannot move longitudinally (forward/backward) or laterally (left/right) without tilting. Because the camera is rigidly attached to the frame of the UAV, this tilt significantly affects the estimates calculated by the vision algorithm. The rotation of the UAV is obtained from the gyroscope of the IMU, and the subtraction of the roll and pitch rotations of the UAV from the image-based estimate is called de-rotation [78,79]. The relevant formulas are presented in Equation (17),
where $t_x$ and $t_y$ are the translation estimates along the $x$ and $y$ axes of the UAV, respectively, $\phi$ and $\theta$ are the roll and pitch rotations of the UAV, respectively, and $t_{x_d}$ and $t_{y_d}$ are the resulting de-rotated translations along the $x$ and $y$ axes of the UAV, respectively.
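To make the perception step concrete, the following sketch chains marker detection, pose estimation and de-rotation. It is a minimal illustration, not the onboard implementation: it uses OpenCV's classic cv2.aruco API (opencv-contrib, pre-4.7) in place of the ROS ArUco wrapper, placeholder camera intrinsics and marker size, and one common first-order de-rotation formulation, since the exact form of Equation (17) is not reproduced in this text.

```python
# Minimal perception sketch: ArUco detection, pose estimation and de-rotation.
# Assumptions: classic cv2.aruco API (opencv-contrib, pre-4.7), placeholder
# intrinsics and marker size, and a common small-angle de-rotation formula
# whose signs depend on the chosen frame conventions.
import math
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],      # placeholder camera matrix
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])
DIST = np.zeros(5)                      # placeholder distortion coefficients
MARKER_SIZE_M = 0.30                    # placeholder marker side length (m)
ARUCO_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_ARUCO_ORIGINAL)

def estimate_pose(frame_bgr):
    """Detect the AR code and return its translation (t_x, t_y, t_z) in metres."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, ARUCO_DICT)
    if ids is None:
        return None                     # target not in the field of view
    rvecs, tvecs = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_SIZE_M, K, DIST)[:2]
    return tvecs[0].ravel()

def derotate(t_x, t_y, t_z, roll, pitch):
    """Remove the apparent translation induced by the tilt of the airframe
    (roll and pitch in radians, taken from the IMU gyroscope)."""
    t_x_d = t_x - t_z * math.tan(pitch)
    t_y_d = t_y + t_z * math.tan(roll)
    return t_x_d, t_y_d
```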
The de-rotated values of the pose estimation of the UAV are given as input to the control system. The control approach presented in this work consists of four controllers working in parallel, commanding the longitudinal, lateral and vertical velocities, as well as the orientation of the UAV. These four controllers are implemented as fuzzy-logic PID-like controllers. The main reasons for using this technique in the control loop are the way it manages the uncertainty derived from the noisy data received from the vision-based detection algorithms and the IMU, and the way it handles the high complexity of this type of non-linear robotic platform. Furthermore, the use of linguistic values by the fuzzy logic controllers simplifies the tuning process of the control system. The four fuzzy controllers were implemented using in-house software called MOFS [80]. An initial configuration of the controllers was done based on heuristic information and was then tuned using the Virtual Robot Experimentation Platform (V-REP) [81] and self-developed ROS modules. Detailed information about the tuning process can be found in [82]. In the present work, we use the same controller definition for the longitudinal, lateral and vertical velocity controllers, which is shown in Figure 16. Their inputs are given in meters, and their output is given in meters per second. The orientation (heading) velocity controller takes its inputs in degrees, and its output is given in degrees per second. The control system has two different working states: in the first state, the UAV follows the vehicle of a poacher at a predefined height; the second state is used to recover the UAV for the next mission by landing it autonomously on top of a security entity's vehicle.
Figure 16.
Final design of the variables of the fuzzy controller after the manual tuning process in the virtual environment (V-REP).
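To illustrate how such a PID-like fuzzy controller can be assembled from linguistic variables, the sketch below builds a simplified two-input version (error and derivative of the error) with scikit-fuzzy. This is not the MOFS implementation used in the paper: the universes, triangular membership functions and rule table are assumptions standing in for the definitions of Figure 16.

```python
# Simplified two-input fuzzy velocity controller (error, derivative of error).
# The membership functions and rules are assumed, not those of Figure 16.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Universes: position error (m), derivative of the error (m/s), command (m/s).
error  = ctrl.Antecedent(np.arange(-2.0, 2.01, 0.01), 'error')
derror = ctrl.Antecedent(np.arange(-2.0, 2.01, 0.01), 'derror')
vel    = ctrl.Consequent(np.arange(-1.0, 1.01, 0.01), 'vel')

# Three linguistic terms per variable with triangular membership functions.
for var, span in ((error, 2.0), (derror, 2.0), (vel, 1.0)):
    var['neg']  = fuzz.trimf(var.universe, [-span, -span, 0.0])
    var['zero'] = fuzz.trimf(var.universe, [-span, 0.0, span])
    var['pos']  = fuzz.trimf(var.universe, [0.0, span, span])

# Classic PD-like rule table: drive towards the target, brake when the error
# is already shrinking.
table = {('neg', 'neg'): 'neg', ('neg', 'zero'): 'neg', ('neg', 'pos'): 'zero',
         ('zero', 'neg'): 'neg', ('zero', 'zero'): 'zero', ('zero', 'pos'): 'pos',
         ('pos', 'neg'): 'zero', ('pos', 'zero'): 'pos', ('pos', 'pos'): 'pos'}
rules = [ctrl.Rule(error[e] & derror[de], vel[out]) for (e, de), out in table.items()]

controller = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
controller.input['error'] = 0.8     # target is 0.8 m to one side
controller.input['derror'] = -0.2   # and the error is already decreasing
controller.compute()
print(controller.output['vel'])     # lateral velocity command (m/s)
```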
In this work, an additional tuning phase has been included, which was carried out in experiments with a real quadrotor tracking the target. In contrast to the previous work [82], a weight value was assigned to each of the three inputs of each controller, as well as to the output of each controller. These weights were tuned with the real quadrotor in real experiments. Table 1 shows the final weight values for all of the controllers after the tuning process; a simplified sketch of how these weights can enter a controller is given after the table.
Table 1.
Tuned weight values for the four controllers.
| Controller Weight | Lateral | Longitudinal | Vertical | Heading |
|---|---|---|---|---|
| Error | 0.3 | 0.3 | 1.0 | 1.0 |
| Derivative of the error | 0.5 | 0.5 | 1.0 | 1.0 |
| Integral of the error | 0.1 | 0.1 | 1.0 | 1.0 |
| Output | 0.4 | 0.4 | 0.4 | 0.16 |
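One plausible reading of Table 1 is that each weight scales the corresponding crisp input before the fuzzy inference and the crisp command afterwards; the sketch below illustrates that structure with the tuned values. This interpretation of the weights is an assumption, not a description of MOFS, and the fuzzy inference itself is replaced by a trivial stand-in so that the snippet runs.

```python
# How the tuned weights of Table 1 might enter each PID-like controller
# (assumed structure: inputs scaled before inference, output scaled after).

WEIGHTS = {  # (error, derivative, integral, output) from Table 1
    'lateral':      (0.3, 0.5, 0.1, 0.4),
    'longitudinal': (0.3, 0.5, 0.1, 0.4),
    'vertical':     (1.0, 1.0, 1.0, 0.4),
    'heading':      (1.0, 1.0, 1.0, 0.16),
}

class WeightedFuzzyPID:
    """PID-like structure: the error, its derivative and its integral are
    weighted, passed through a fuzzy inference and the command is scaled."""
    def __init__(self, channel, fuzzy_inference, dt=0.05):
        self.w_e, self.w_de, self.w_ie, self.w_out = WEIGHTS[channel]
        self.infer, self.dt = fuzzy_inference, dt
        self.prev_e, self.int_e = 0.0, 0.0

    def update(self, error):
        de = (error - self.prev_e) / self.dt      # derivative of the error
        self.int_e += error * self.dt             # integral of the error
        self.prev_e = error
        cmd = self.infer(self.w_e * error, self.w_de * de, self.w_ie * self.int_e)
        return self.w_out * cmd

# Example with a trivial stand-in inference (sum of the weighted terms):
lateral = WeightedFuzzyPID('lateral', fuzzy_inference=lambda e, de, ie: e + de + ie)
print(lateral.update(0.8))   # lateral velocity command (m/s)
```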
7.2. Experiments
The experiments were carried out in the laboratory of the Automation & Robotics Research Group at the University of Luxembourg, in a flight arena with a height of 5 m. The UAV used in the experiments is an AR.Drone v2.0 [83]. This platform is not equipped with an onboard computer able to process the images; therefore, the image processing and the control are computed remotely on a ground station. The delay of the WiFi communication further increases the complexity of the non-linear vision-based control of the UAV, and a WiFi router is used to reduce the maximum variation of the image rate. A Youbot mobile robot from KUKA [84] was used as the target vehicle to follow, as well as for the autonomous landing on a moving target. The top of this robot was equipped with a landing platform covered with an ArUco code so that it could be detected by the vision algorithm, as shown in Figure 17. This ground platform was driven randomly via ROS from a remote computer. The omnidirectional wheels of this ground robot allow its position to be modified in all directions, a type of movement that cannot be performed by normal vehicles. This freedom of movement and the height limitation increase the complexity of the following and landing tasks.
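The random motion of the ground platform can be reproduced with a small ROS node along the lines of the sketch below; the /cmd_vel topic name and the speed ranges are placeholders rather than the actual Youbot configuration used in the experiments.

```python
#!/usr/bin/env python
# Sketch of a random driver for the omnidirectional ground platform.
# Assumptions: rospy with a geometry_msgs/Twist velocity interface; the real
# Youbot base topic and speed limits may differ from the placeholders below.
import random
import rospy
from geometry_msgs.msg import Twist

def random_drive():
    rospy.init_node('random_youbot_driver')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rate = rospy.Rate(2)                         # new random command twice per second
    while not rospy.is_shutdown():
        cmd = Twist()
        # Omnidirectional base: independent longitudinal, lateral and yaw speeds.
        cmd.linear.x = random.uniform(-0.3, 0.3)
        cmd.linear.y = random.uniform(-0.3, 0.3)
        cmd.angular.z = random.uniform(-0.5, 0.5)
        pub.publish(cmd)
        rate.sleep()

if __name__ == '__main__':
    try:
        random_drive()
    except rospy.ROSInterruptException:
        pass
```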
Figure 17.
Youbot platform with the ArUco target.
Two different kinds of experiments were performed. In the first experiment, the UAV had to follow the moving target at a fixed, predefined altitude. In the second experiment, the UAV had to land on the moving ground platform.
Table 2 shows the results of seven different experiments. The results are expressed as the root mean squared error (RMSE) of the evolution of the error of each controller. Depending on the controller, the RMSE is given in meters (lateral, longitudinal and vertical velocity controllers) or in degrees (heading velocity controller). Two different target-platform speeds were used: one for following Tests #1 and #4 and for landing Test #3, and a different one for all of the other tests.
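For reference, the RMSE values reported in Table 2 are computed over the logged error signal of each controller, i.e., for $N$ error samples $e_k$:

```latex
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{k=1}^{N} e_k^{2}}
```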
Table 2.
Root mean square error for the lateral, longitudinal, vertical and heading velocity controllers for the autonomous following and landing on a moving target.
| Experiment | Lateral (RMSE, m) | Longitudinal (RMSE, m) | Vertical (RMSE, m) | Heading (RMSE, degrees) | Time (s) |
|---|---|---|---|---|---|
| Following #1 | 0.1702 | 0.1449 | 0.1254 | 10.3930 | 300 |
| Following #2 | 0.0974 | 0.1071 | 0.1077 | 8.6512 | 146 |
| Following #3 | 0.1301 | 0.1073 | 0.1248 | 5.2134 | 135 |
| Following #4 | 0.1564 | 0.1101 | 0.0989 | 12.3173 | 144 |
| Landing #1 | 0.1023 | 0.0.096 | 1.1634 | 4.5843 | 12 |
| Landing #2 | 0.0751 | 0.0494 | 1.1776 | 3.5163 | 11 |
| Landing #3 | 0.0969 | 0.0765 | 0.9145 | 4.6865 | 31 |
The most relevant experiments are discussed below. Figure 18 shows the behavior of the system in the first target-following experiment. Although this was the experiment with the longest duration, the lateral and longitudinal RMSE values remained below 20 cm (Table 2). An error in the estimation of the orientation of the target can be seen between the 50th and the 80th second of the test. This error, produced by the computer vision algorithm, did not affect the estimates used by the lateral, longitudinal and vertical controllers.
Figure 18.
Evolution of the error of the lateral, longitudinal, vertical and heading controllers on the first moving target-following experiment.
Figure 19 shows the evolution of the error of all of the controllers in the second target-following experiment listed in Table 2. In this case, several orientation changes were applied to the target in order to evaluate the behavior of the heading controller in detail. This controller reacts quickly, as can be seen in the two large changes during the first 50 s of the experiment. In this part of the test, the error reaches up to 35°, but the controller reduces it within a few seconds. During the other tests, more changes were applied to the orientation of the target platform, with similar performance of the heading controller. In this experiment, the RMSE of the heading controller was 8.6512°.
Figure 20 shows the behavior of the controllers during the second autonomous landing experiment. In this test, the UAV gradually reduced its altitude from the fixed following altitude down to 1 m in 8 s, with an almost zero error for the lateral and longitudinal controllers. Note that the control system was set to reduce the vertical error only down to 1 m; at that point, a predefined landing command was sent to the UAV, which gradually reduces the speed of the motors.
Figure 21 shows the behavior of the control system during the third autonomous landing experiment. In this case, one can observe how the vertical controller pauses a couple of times, for a few seconds, between the 25th and the 30th second of the test. This happens because the vertical controller only sends commands when the errors of the heading, lateral and longitudinal controllers are smaller than predefined values, as sketched below. This predefined behavior stabilizes the landing process and reduces the risk of losing the target during the descent.
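A minimal sketch of this supervision logic is given below; the threshold values are assumptions, since the paper only states that predefined limits are used.

```python
# Sketch of the landing supervision described above (assumed thresholds).
# The vertical channel is only active while the other three axes are well
# aligned, and the built-in landing command takes over at 1 m of height.
ALIGN_THRESH_M   = 0.15   # assumed lateral/longitudinal alignment limit (m)
ALIGN_THRESH_DEG = 10.0   # assumed heading alignment limit (degrees)
HANDOVER_ALT_M   = 1.0    # height at which the predefined land command is sent

def vertical_action(err_lat, err_lon, err_head_deg, height, descent_cmd):
    """Return the vertical velocity command, 0.0 to pause the descent,
    or 'LAND' to trigger the platform's predefined landing manoeuvre."""
    aligned = (abs(err_lat) < ALIGN_THRESH_M
               and abs(err_lon) < ALIGN_THRESH_M
               and abs(err_head_deg) < ALIGN_THRESH_DEG)
    if not aligned:
        return 0.0            # hold altitude and keep tracking the target
    if height <= HANDOVER_ALT_M:
        return 'LAND'
    return descent_cmd        # e.g., the output of the vertical fuzzy controller

print(vertical_action(0.05, 0.08, 3.0, 1.6, descent_cmd=-0.3))
```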
Figure 19.
Evolution of the error of the lateral, longitudinal, vertical and heading controllers on the second moving target-following experiment.
Figure 20.
Evolution of the error of the lateral, longitudinal, vertical and heading controllers on the second autonomous landing on a moving target experiment.
Figure 21.
Evolution of the error of the lateral, longitudinal, vertical and heading controllers on the third autonomous landing on a moving target experiment.
The videos related to some of the experiments presented in this section are available online [69].