UWB and IMU-Based UAV’s Assistance System for Autonomous Landing on a Platform

This work presents a novel landing assistance system (LAS) capable of locating a drone for a safe landing after its inspection mission. The location of the drone is achieved by fusing ultra-wideband (UWB), inertial measurement unit (IMU) and magnetometer data. Unlike other typical landing assistance systems, the UWB fixed sensors are placed around a 2 × 2 m landing platform and two tags are attached to the drone. Since this type of set-up is suboptimal for UWB location systems, a new positioning algorithm is proposed to achieve correct performance. First, an extended Kalman filter (EKF) algorithm is used to calculate the position of each tag, and then both positions are combined for a more accurate and robust localisation. As a result, the obtained positioning errors can be reduced by 50% compared to a typical UWB-based landing assistance system. Moreover, due to its small space requirement, the proposed landing assistance system can be deployed easily almost anywhere.


Introduction
The inspection of infrastructures is a necessary task for their correct performance and durability, especially in the case of the energy, petrochemical, construction or transport sectors. However, sometimes dangerous zones with difficult accessibility must be reached by a human worker (or a group of workers), increasing the risks of the work. For this reason, there is a growing interest in the use of drones or unmanned aerial vehicles (UAVs) for infrastructure inspection [1][2][3][4][5][6]. One of the main advantages of UAVs is their high adaptability to any infrastructure, as they can be used to inspect power transmission lines [1][2][3], surfaces in bridges and roads [4], wind turbines [5] or rail viaduct bearings [6], among others. As a consequence, infrastructure inspection already accounts for 45% of the total UAV market [7].
Nevertheless, the use of drones for inspection tasks also has its drawbacks, as investment must be made in the vehicle and in training staff to pilot the UAV. Moreover, since drones must be operated by a person, this solution is still prone to human error, so the possibility of using autonomous drones should be considered.
The landing manoeuvre is probably one of the riskiest situations of a flight. In the case of an autonomous drone, knowing the real-time location of the vehicle with respect to the landing area is crucial for a successful operation. A positioning error of a few metres could cause significant damage to the drone. A high positioning rate is also important, since adverse conditions such as windy weather could cause sudden velocity changes that could not be detected on time.
In the aeronautic sector it is common to use the global navigation satellite system (GNSS) for an autonomous landing [8]. Nevertheless, this technology is not always available. [...] Moreover, the experiments of [42] were performed in a controlled environment, where the authors could easily control all the movements of the drone. In a real environment, wind could cause sudden velocity changes in a UAV. As a consequence, the obtained performance could decrease further because of the limited positioning rate of UWB systems. In fact, the low positioning rate of UWB-based UAV positioning systems poses a significant limitation on their positioning accuracy.
There are different methods to improve the positioning accuracy of UWB-based UAV positioning systems. For example, in [43] a particle filter algorithm is proposed for an enhanced performance of UWB for the localisation of drones. However, approaches that fuse data from different sensors are more popular. It is very common to fuse UWB data with inertial measurement units, as suggested by [44][45][46]. A third sensor can also be added to the UWB/IMU approach, such as a light scanner in [47], a frequency modulated continuous wave (FMCW) radar in [48] or a real-time kinematic global positioning system (RTK-GPS) in [49]. Another popular approach is to add visual data to the UWB-based real-time locating system (RTLS), as in [50][51][52][53]. Laser imaging detection and ranging (LIDAR) sensors have also been used to improve the UWB accuracy for UAV location in [54], where a drone had to fly close to a wall. Despite the improved performance of the RTLSs proposed in the mentioned works, only one of them uses a simple infrastructure [53], where four UWB anchors are placed around a 1.5 × 1 m pad with a system of visual fiducial tags. The UWB data are fused with the visual and inertial data, resulting in a safe landing. However, it is not known how this system would perform in a dark environment, since the pad must remain in the field of view of the camera.
This paper proposes a novel LAS for autonomous drones that combines data from UWB, IMUs and magnetometers to estimate the position of the drone when approaching or moving away from the landing platform. In this LAS, as in the case of [42], UWB anchors are placed around a small landing platform and two tags are placed on the drone. However, in our case, both tags also have IMUs and magnetometers. The proposed drone positioning algorithm takes advantage of the UWB positioning accuracy and of the higher sampling rate of the IMUs and provides accurate estimates of the position of the drone, even when the drone experiences high accelerations. This positioning algorithm is executed in the single board computer (SBC) of the drone and works in two steps. In the first step, for each tag, the proposed drone positioning algorithm fuses the information of the IMU and magnetometer with UWB data to estimate its position. In the second step, the positioning estimates of each tag are combined to provide a more accurate estimate of the position of the centre of the drone. Unlike other solutions in the state of the art, our proposal neither needs a complex infrastructure deployment, nor does it depend on lighting conditions or availability of GNSS. Additionally, our proposed system presents high accuracy even with sudden changes in drone velocity, as it achieves a higher positioning rate than traditional UWB-based positioning systems. Finally, the proposed combination of tags' positions further improves the accuracy of our system. Higher robustness is gained because the possible errors of one tag are compensated by the other.
The rest of the article is organised as follows: Section 2 describes how a LAS works when only UWB data are used, Section 3 describes the proposed LAS and its main contributions to the state of the art, Section 4 explains the performed experiments and analysis, and the obtained results are presented in Section 5. Finally, conclusions and future research lines are given in Section 6.

State of the Art of UWB-Based Systems
When UWB technology is used as RTLS, two main elements are necessary: anchors and tags. Anchors are fixed sensors at known locations, while tags are the moving sensors to be located. Each tag communicates with the anchors in order to calculate the distance to all of them. With the measured distances and the known locations of anchors, the positions of the tags can be calculated.
For the real-time location of a UAV, a single tag is usually placed on the vehicle and an anchor infrastructure is deployed around the flying space. Ideally, the anchors should have a separation of tens of metres so that the position calculation is optimal. This type of infrastructure is typical in the literature, as proposed for example by [37][38][39]. Nevertheless, optimal anchor infrastructures cannot always be deployed, so [42] suggests a deployment where four anchors are placed on a 2 × 2 m square on the floor.
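As a toy illustration of how ranges to known anchors yield a position, the sphere equations can be linearised and solved by least squares. This is a hypothetical minimal sketch in Python with NumPy, not the algorithm of the cited works:

```python
import numpy as np

def multilaterate(anchors, ranges):
    """Estimate a tag position from ranges to fixed anchors by linearising
    the sphere equations ||p - a_l||^2 = r_l^2 around the first anchor.
    anchors: (N, 3) known anchor coordinates; ranges: (N,) measured distances.
    Illustrative least-squares solver, not the paper's exact algorithm."""
    anchors = np.asarray(anchors, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    # Subtracting the first sphere equation from the others removes the
    # quadratic term in the unknown position p, leaving a linear system.
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```

Note that with coplanar anchors the vertical coordinate is poorly observable, which is one reason the set-up of [42] is suboptimal for UWB location systems.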
With an anchor infrastructure and a tag on the drone, the estimated distances can be used in different algorithms to calculate the position of the vehicle. One of the most typical methods to calculate the position of a tag from ranging measurements to known anchors is the extended Kalman filter (EKF). An algorithm based on the EKF is used by [41,55], which is shown in Figure 1. This algorithm needs a motion model $f$ and an observation model $h$ to be defined:

$$x_i = f(x_{i-1}, u_i) + \omega_i, \qquad y_i = h(x_i) + \nu_i, \tag{1}$$

where $x_i$ is the state vector of the $i$-th estimation. It contains the position of the tag to be estimated and its first and second derivatives; $u_i$ is the optional input vector, set to zero; and $f$ is the function representing the motion model of the system, which relates the previous state $x_{i-1}$ with the current state $x_i$. The observation vector is represented by $y_i$, which contains the measured distances between the tag and each anchor. These distances can be calculated from the state estimate $x_i$ with the function of the observation model $h$. Note that the $\tilde{x}$ and $\hat{x}$ notation represents the a priori and a posteriori state estimates, respectively. The process is characterised by the stochastic random variables $\omega_i$ and $\nu_i$, which represent the process and observation noise, respectively. They are assumed to be independent, white and normally distributed with covariance matrices $Q_i$ and $R_i$, respectively.
The above-mentioned a priori estimate of the state is calculated with the linearised version of the motion model $f$:

$$\tilde{x}_i = F\,\hat{x}_{i-1}, \qquad F = \begin{pmatrix} 1 & \Delta & \Delta^2/2 \\ 0 & 1 & \Delta \\ 0 & 0 & 1 \end{pmatrix} \otimes I_3, \qquad \tilde{C}_i = F\,\hat{C}_{i-1}\,F^{T} + Q_i,$$

where $\otimes$ represents the Kronecker product of the matrices, $\Delta$ the time difference between two consecutive time steps and $C$ the error covariance matrix of the state estimate.
Using the predicted estimate of the state vector, the predicted observation vector $\tilde{y}_i$ can be calculated by means of the observation model $h$. For each anchor $l$, the distance between the predicted position $(\tilde{x}_i, \tilde{y}_i, \tilde{z}_i)^T$ and the fixed sensor position $(X_l, Y_l, Z_l)^T$ is calculated as

$$\tilde{y}_{i,l} = \sqrt{(\tilde{x}_i - X_l)^2 + (\tilde{y}_i - Y_l)^2 + (\tilde{z}_i - Z_l)^2}. \tag{8}$$

Finally, the predicted state $\tilde{x}_i$ is corrected to obtain $\hat{x}_i$ by comparing the predicted observation vector $\tilde{y}_i$ with the measured ranging values $y^r_i$:

$$\hat{x}_i = \tilde{x}_i + K_i\left(y^r_i - \tilde{y}_i\right), \qquad K_i = \tilde{C}_i H_i^{T}\left(H_i \tilde{C}_i H_i^{T} + R_i\right)^{-1},$$

where $H_i$ is the Jacobian matrix of the observation model $h$.
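The predict/correct cycle described above can be sketched as follows. The state ordering (position, velocity, acceleration blocks), the covariance values and the anchor layout are illustrative assumptions, not the exact configuration of [41,55]:

```python
import numpy as np

def ekf_step(x, C, ranges, anchors, dt, Q, R):
    """One predict+correct cycle of a UWB-only EKF with a
    constant-acceleration motion model. State x = (p, v, a), shape (9,).
    ranges: (N,) measured distances; anchors: (N, 3) known positions."""
    # Prediction: linearised constant-acceleration transition, built with
    # a Kronecker product as in the text (block ordering p, v, a).
    F = np.kron(np.array([[1.0, dt, dt**2 / 2],
                          [0.0, 1.0, dt],
                          [0.0, 0.0, 1.0]]), np.eye(3))
    x_pred = F @ x
    C_pred = F @ C @ F.T + Q
    # Predicted observation: distance from the predicted position to each
    # anchor, i.e. the observation model h evaluated at the a priori state.
    diff = x_pred[:3] - anchors                 # (N, 3)
    y_pred = np.linalg.norm(diff, axis=1)       # (N,)
    # Jacobian H of h: unit direction vectors w.r.t. the position block.
    H = np.zeros((len(anchors), 9))
    H[:, :3] = diff / y_pred[:, None]
    # Correction with the measured ranges and the Kalman gain.
    K = C_pred @ H.T @ np.linalg.inv(H @ C_pred @ H.T + R)
    x_new = x_pred + K @ (ranges - y_pred)
    C_new = (np.eye(9) - K @ H) @ C_pred
    return x_new, C_new
```

Iterating this step with consistent range measurements drives the position estimate towards the true tag location.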
Despite the high accuracy of UWB technology and the suitability of the EKF for the correct performance of this type of locating system, some drawbacks remain for UAV localisation. The data rate of UWB systems is limited and may be incapable of detecting sudden changes in the drone path caused by sudden wind changes. In order to deal with these conditions, it is better to add data from an IMU, which can give an accurate estimate of the vehicle's acceleration, track all the trajectory changes and improve the rate of the position estimates.

Proposed LAS
In this section, the proposed novel LAS and its main differences from the typical UWB-based systems described in Section 2 are explained. Note that the proposed LAS is designed for a drone that inspects critical infrastructures such as offshore wind turbines or a tank of a petrochemical plant. After the inspection mission, the drone needs to land on a small platform to charge its batteries. This platform is on a boat or in a confined space, so there is not enough space or time to deploy a typical UWB infrastructure. In our proposal, a small anchor infrastructure with easy deployment is used, which also allows us to make the UWB anchors part of the landing platform and use the same power supply for the anchors and the battery charger. Thus, the resulting LAS needs no complex additional infrastructure. Moreover, our LAS is not affected by the changing lighting conditions caused by day or night time, rain or fog that traditionally affect computer-vision systems. As our system is based on UWB technology, during the landing our LAS will present lower positioning errors than GNSS, whose errors can be around 2 m [29].
In this work, data from UWB, IMU and magnetometer are proposed to be combined to estimate the position of the drone. Figure 2 depicts the system architecture. Similar to [42], eight anchors are placed around the 2 × 2 m landing platform and two tags are installed on both sides of the drone. Taking advantage of the availability of these sensors, a new positioning algorithm is proposed. This algorithm first fuses the UWB, IMU and magnetometer data from each tag to obtain two independent position estimates and then combines them to calculate the position of the centre of the drone. From the resulting data, only the horizontal coordinates of the drone are used, since the vehicle is capable of accurately estimating its altitude with other sensors, e.g., an altimeter. Figure 3 shows the placement of the tags on the drone. They are installed on both sides of the drone with a separation of 0.36 m. In this work, the LAS provides the position of the point in the middle of the line formed by the two tags; we will refer to this point as the centre of the drone. In the same picture, the SBC of the drone can be seen, which receives the data from both tags and runs the necessary positioning algorithm.
The tags employ the DW1000 chip of Decawave as UWB transceiver. This transceiver follows the IEEE 802.15.4a standard and is configured with the parameters presented in Table 1. Apart from the UWB transceiver, the tags also contain the LSM6DSOTR IMU [56] and the LIS2MDLTR magnetometer [57]. Both sensors are developed by STMicroelectronics and their data can be fused with the MotionFX library of STMicroelectronics [58] in order to subtract gravity from the acceleration data and obtain the orientation of the tag. The chosen configuration parameters of MotionFX are shown in Table 2.

Figure 4 shows the flow chart of the proposed algorithm. As described in Section 2, the tags communicate with the anchors in order to calculate the distances between the sensors, $r_{i,l,j}$, using the two way ranging (TWR) method. The subscripts $i$, $l$ and $j$ of the ranging estimates refer to the time step, the identifier of the anchor and the identifier of the tag, respectively. The obtained data are sent to the SBC of the drone (see Figure 3), which runs the necessary algorithms for a correct position estimation.
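As a rough illustration of how TWR turns timestamps into a distance, a single-sided variant can be computed as below. This is a simplified sketch; DW1000-based systems commonly use double-sided TWR to mitigate clock drift between the devices:

```python
C_AIR = 299702547.0  # approximate speed of light in air, m/s

def ss_twr_distance(t_round, t_reply):
    """Single-sided two way ranging: the tag measures the round-trip time
    t_round of its poll/response exchange with an anchor, the anchor reports
    its processing delay t_reply, and half the difference is the time of
    flight. Times in seconds; returns the distance in metres."""
    tof = (t_round - t_reply) / 2.0
    return C_AIR * tof
```

In practice the tiny time of flight (tens of nanoseconds for a few metres) is why UWB transceivers need picosecond-grade timestamping.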
Unlike the state of the art, our LAS filters the ranging estimates with a parameter $r_{max}$, which represents the maximum allowed ranging estimate. Since the objective of the proposed LAS is to help the drone during the autonomous landing and not during the rest of the flight, any ranging estimate above $r_{max}$ is discarded.
Moreover, our proposed LAS adds the data of two IMUs and magnetometers to the algorithm, one for each tag. At time $t_{i-1}$ and tag $j$, the measured specific force $y^{a}_{i-1,j}$, angular velocity $y^{\omega}_{i-1,j}$ and magnetic field $y^{m}_{i-1,j}$ are used by the MotionFX library to calculate the acceleration $a^{(b_j)}_{i-1,j}$ and the orientation of the tag. The superscripts $(b_j)$ and $(w)$ refer to the body frame of tag $j$ and the world frame, respectively. The used frames are shown in Figure 5.
Figure 5. Body, world and inertial frames.
Each tag contains an independent body frame, $(b_1)$ and $(b_2)$, fixed to the sensors. All measurements of the IMUs and magnetometers, as well as the resulting acceleration $a^{(b_j)}_{i-1}$, are referred to their body frames. The quaternion $q^{(w)}_{(b_j)}$ transforms any vector referred to the body frame $(b_j)$ to the world frame $(w)$, whose $x$, $y$ and $z$ axes point east, north and up, respectively. However, the inertial frame $(in)$, which is defined by the landing platform, does not have to be aligned with the world frame, so another quaternion $q^{(in)}_{(w)}$ must be defined to transform any vector referred to the world frame $(w)$ to the inertial frame $(in)$. If the landing platform is on the horizontal plane, $q^{(in)}_{(w)}$ is defined as a quaternion that rotates any vector by an angle $\varphi$ around the $z$ axis. Both quaternions can be combined to calculate the quaternion

$$q^{(in)}_{(b_j)} = q^{(in)}_{(w)} \otimes q^{(w)}_{(b_j)},$$

with $\otimes$ being the quaternion multiplication operator.

The calculated ranging data, acceleration and orientation of the tags are fused in two parallel EKF algorithms. This way, two position estimates $\hat{p}_{i,1}$ and $\hat{p}_{i,2}$ are calculated and finally combined to obtain the position of the centre of the drone $\hat{p}_{i,D}$. If for some reason one of the tags does not see the anchors for a time $t_{reinit}$, that tag stops giving position estimates. In this case, the position of the centre of the drone can still be calculated with the position estimate of the other tag, its orientation and the relative position of the centre of the drone with respect to the remaining tag. Once the UWB signal is available again in the tag, its EKF is reinitialised: all the parameters of the EKF are set to their initial values. As the EKF algorithm needs some time to converge to the real solution, during a period of $t_{converge}$ the position estimates of the newly recovered tag are not used in the combination algorithm. The EKF algorithm is further explained in Section 3.1 and the combination algorithm in Section 3.2.
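The frame chain described above can be illustrated with a few quaternion helpers. This is an illustrative sketch (Hamilton convention, [w, x, y, z] ordering), not the paper's implementation:

```python
import numpy as np

def q_mult(q1, q2):
    """Hamilton product of quaternions [w, x, y, z]; composing rotations
    so that q1 ⊗ q2 applies q2 first, then q1."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def yaw_quaternion(phi):
    """Quaternion rotating by angle phi about the z axis, as used for the
    world-to-inertial quaternion when the platform is horizontal."""
    return np.array([np.cos(phi / 2), 0.0, 0.0, np.sin(phi / 2)])

def rotate(q, v):
    """Rotate vector v by unit quaternion q via q ⊗ [0, v] ⊗ q*."""
    qv = np.concatenate(([0.0], v))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return q_mult(q_mult(q, qv), q_conj)[1:]
```

With these helpers, the body-to-inertial quaternion is simply `q_mult(q_in_w, q_w_b)`, mirroring the composition in the text.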

EKF with Fusion of Sensors
The first part of the proposed positioning algorithm consists of an EKF that takes advantage of the availability of the IMU and magnetometer data. The flow chart that summarises this part is shown in Figure 6.

Figure 6. Flow chart of the EKF that fuses the UWB data with an inertial measurement unit (IMU) and magnetometer.
After a reinitialisation, the first position of the tag is estimated by means of a recursive least squares (RLS) algorithm [59] using the first received UWB data. After this first position estimate, every time a new acceleration estimate is received, the prediction step is performed. Since the IMU and UWB rates are different, while no new UWB measurements are received, the EKF algorithm keeps working with the predicted state estimate. When new UWB data are received, the correction step is performed. The advantage of this method is that the resulting positioning rate of the proposed LAS is 25 Hz, much faster than the UWB ranging rate. However, if no UWB measurements are obtained for a long period of time, the position estimate can drift and be lost. For this reason, if not enough UWB ranging estimates are obtained during an adjustable time interval $t_{reinit}$, the proposed LAS stops giving position estimates. Once the UWB signal is recovered, the algorithm is reinitialised.
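The rate handling described above, prediction at every IMU sample and correction only when UWB data arrive, can be illustrated with a deliberately simplified 1-D toy. The event format, the fixed correction gain and the scalar state are hypothetical and stand in for the EKF steps:

```python
def fuse_streams(imu_events, uwb_events):
    """Toy 1-D illustration of multi-rate fusion: the state is advanced at
    every IMU sample (prediction) and only nudged when a UWB measurement is
    available (correction), so estimates come out at the IMU rate.
    imu_events: list of (t, velocity); uwb_events: list of (t, position).
    Returns a list of (t, estimated position), one per IMU sample."""
    events = sorted([(t, "imu", v) for t, v in imu_events] +
                    [(t, "uwb", p) for t, p in uwb_events])
    x, t_prev, out = 0.0, events[0][0], []
    for t, kind, data in events:
        if kind == "imu":
            x += data * (t - t_prev)   # prediction at the fast IMU rate
            t_prev = t
            out.append((t, x))         # estimate emitted at the IMU rate
        else:
            x += 0.5 * (data - x)      # correction at the slower UWB rate
    return out
```

The key point mirrored here is that the output rate is set by the IMU stream, while the sparse UWB stream only bounds the drift.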
This algorithm is run twice in parallel, once for each tag. For simplicity, the subscript $j$ used to refer to the tag is omitted in this subsection.
Unlike the previous algorithm of Section 2, the state vector only contains position and velocity data:

$$x_i = \begin{pmatrix} p_i \\ v_i \end{pmatrix},$$

with $p_i$ being the position of the tag at time step $i$ and $v_i$ its velocity. The acceleration data are introduced in the motion model as one of the input parameters. The inputs are the acceleration referred to the body frame, $a^{(b)}$, and a unit quaternion $q^{(in)}_{(b)}$ that rotates any vector from the body frame $(b)$ to the inertial frame $(in)$.
The motion model $f$ that transforms the previous state $\hat{x}_{i-1}$ into the current one is

$$p_i = p_{i-1} + \Delta\, v_{i-1} + \frac{\Delta^2}{2}\, R^{(in)}_{(b)}\left(a^{(b)} + e^{(a)}\right), \qquad v_i = v_{i-1} + \Delta\, R^{(in)}_{(b)}\left(a^{(b)} + e^{(a)}\right),$$

where $\Delta$ represents the time between two consecutive steps and $R^{(in)}_{(b)}$ the rotation matrix obtained from the unit quaternion $q^{(in)}_{(b)}$, as explained in [60], with the here-defined function q2R. For any unit quaternion $q = (q_w\; q_x\; q_y\; q_z)^T$, its corresponding rotation matrix $R_q$ is calculated as

$$R_q = \begin{pmatrix}
q_w^2 + q_x^2 - q_y^2 - q_z^2 & 2(q_x q_y - q_w q_z) & 2(q_x q_z + q_w q_y) \\
2(q_x q_y + q_w q_z) & q_w^2 - q_x^2 + q_y^2 - q_z^2 & 2(q_y q_z - q_w q_x) \\
2(q_x q_z - q_w q_y) & 2(q_y q_z + q_w q_x) & q_w^2 - q_x^2 - q_y^2 + q_z^2
\end{pmatrix}.$$

The noise parameters of the motion model $f$ are represented by $e^{(a)}$ for the acceleration data and $e^{(\varphi)}$ for the orientation data. The latter is represented as an orientation deviation in the body coordinate frame and is converted to a unit quaternion $q^{(\varphi)}$ with the function

$$f_q\!\left(e^{(\varphi)}\right) = \begin{pmatrix} \cos\!\left(\|e^{(\varphi)}\|_2 / 2\right) \\[4pt] \dfrac{e^{(\varphi)}}{\|e^{(\varphi)}\|_2}\, \sin\!\left(\|e^{(\varphi)}\|_2 / 2\right) \end{pmatrix},$$

with $\|e^{(\varphi)}\|_2$ being the Euclidean norm of the vector $e^{(\varphi)}$.
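The two functions defined above can be written out directly. This is a sketch of the stated formulas, with a guard for a near-zero deviation vector added as an assumption:

```python
import numpy as np

def q2R(q):
    """Rotation matrix of a unit quaternion q = [qw, qx, qy, qz], as used
    to map body-frame accelerations to the inertial frame."""
    qw, qx, qy, qz = q
    return np.array([
        [qw*qw + qx*qx - qy*qy - qz*qz, 2*(qx*qy - qw*qz), 2*(qx*qz + qw*qy)],
        [2*(qx*qy + qw*qz), qw*qw - qx*qx + qy*qy - qz*qz, 2*(qy*qz - qw*qx)],
        [2*(qx*qz - qw*qy), 2*(qy*qz + qw*qx), qw*qw - qx*qx - qy*qy + qz*qz],
    ])

def f_q(e_phi):
    """Convert a small orientation-deviation vector e_phi (body frame) to a
    unit quaternion, as in the noise model of the motion model f."""
    e_phi = np.asarray(e_phi, dtype=float)
    n = np.linalg.norm(e_phi)
    if n < 1e-12:                      # guard: zero deviation -> identity
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate(([np.cos(n / 2)], np.sin(n / 2) * e_phi / n))
```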
Both noise parameters are determined empirically and have zero mean and covariance $Q_{IMU}$, which is needed in the prediction step of the EKF to make an a priori estimation of the state error covariance matrix $\tilde{P}_i$:

$$\tilde{P}_i = F_{i-1}\,\hat{P}_{i-1}\,F_{i-1}^{T} + G_{i-1}\,Q_{IMU}\,G_{i-1}^{T},$$

where the Jacobian matrices $F_{i-1}$ and $G_{i-1}$ of the motion model $f$ are calculated with respect to the state vector $x$ and the noise vector $e$, respectively. The calculation of useful derivatives for quaternions and rotation matrices is explained in [61].
After the prediction step, the a priori estimate must be corrected with the UWB ranging data:

$$\hat{x}_i = \tilde{x}_i + K_i\left(y^r_i - \tilde{y}_i\right),$$

where $y^r_i$ is the vector of measured ranging values, $\tilde{y}_i$ is the predicted observation vector calculated with (8) and $K_i$ represents the Kalman gain matrix, calculated as

$$K_i = \tilde{P}_i H_i^{T}\left(H_i \tilde{P}_i H_i^{T} + R_i\right)^{-1},$$

where $H_i$ is the Jacobian matrix of the observation model and $R_i$ the measurement covariance matrix. The Jacobian matrix of the observation model is calculated with (11). Finally, the predicted state error covariance matrix $\tilde{P}_i$ must be corrected with

$$\hat{P}_i = \left(I - K_i H_i\right)\tilde{P}_i.$$

Combination of Tags
In the last part of the proposed positioning algorithm, the two independent position estimates $\hat{p}_{i,1}$ and $\hat{p}_{i,2}$ are combined to calculate the position of the centre of the drone $\hat{p}_{i,D}$. If the estimates of both tags are available at time step $i$, the average position is calculated. If at a certain moment only one of the tags gives a positioning estimate, the position of the centre of the drone can be calculated with the known orientation $q^{(in)}_{(b_j),i}$ and the coordinates $d_j$ of the centre of the drone with respect to the body frame of the remaining tag $(b_j)$. The algorithm is summarised in (29):

$$\hat{p}_{i,D} = \begin{cases} \left(\hat{p}_{i,1} + \hat{p}_{i,2}\right)/2, & \text{both tags available,} \\ \hat{p}_{i,j} + R^{(in)}_{(b_j)}\, d_j, & \text{only tag } j \text{ available.} \end{cases} \tag{29}$$
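The combination rule can be sketched as below. The argument names and the handling of a missing tag are illustrative assumptions:

```python
import numpy as np

def drone_centre(p1=None, p2=None, R1=None, d1=None, R2=None, d2=None):
    """Position of the drone centre from the tag estimates. With both tags
    available, the average is taken; with one tag, its estimate is shifted
    by the centre offset d_j rotated into the inertial frame by the
    rotation matrix R_j obtained from the tag's orientation quaternion."""
    if p1 is not None and p2 is not None:
        return (np.asarray(p1) + np.asarray(p2)) / 2.0   # both tags seen
    if p1 is not None:
        return np.asarray(p1) + np.asarray(R1) @ np.asarray(d1)
    return np.asarray(p2) + np.asarray(R2) @ np.asarray(d2)
```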

Methodology
For the correct assessment of the proposed LAS, some experiments were performed by flying the drone in a controlled indoor environment close to the landing area. Additionally, more experiments were conducted in a real outdoor environment. In both cases, the parameters r max , t reinit and t converge described in Section 3 were set to 20 m, 2 s and 3 s, respectively. In the next subsections, the experimental set-ups as well as the employed evaluation methods are described.

Indoor Experiments
All the indoor tests were run in the Industry 4.0 Laboratory of Ceit-BRTA, which contains an Optitrack motion capture system that allowed us to track the drone with millimetre-level accuracy. Due to the high accuracy of the motion capture system, its measurements were used as ground truth. A picture of the testing zone can be seen in Figure 7a. The developed LAS was deployed inside the observation area of the Optitrack system, as shown in Figure 7b, and the positions of the anchors are given in Table 3. For safety reasons, some fences were placed around the measurement zone.

Once the set-up was prepared, 9 different flights were conducted inside the tracking area. All of them consisted of a take-off, movements close to the landing platform and a landing. The paths followed by the centre of the drone during the flights can be seen in Figure 8. The flights can be separated into two groups: those with a mean horizontal acceleration under 1 m/s² and those with a mean horizontal acceleration over 1 m/s², as shown in Table 4. The measured accelerations correspond to the centre of the drone, so the acceleration on each tag may be slightly different. By dividing the flights into two groups, the effect of acceleration on the positioning accuracy can be evaluated.

Outdoor Experiments
The drone was also flown in a real outdoor environment with the proposed LAS. A picture of the test zone is shown in Figure 9 with the prepared set-up. The anchors were placed in the same positions as described in Table 3. These experiments were useful to test the LAS at longer distances than in the indoor environment. It is especially interesting to test the ability of the LAS to find the drone once it reaches the visible range of the system, when the landing is about to occur.
The chosen place contains a concrete platform of 2 × 2 m to land the drone and deploy the LAS. There is also a wind turbine, which simulates the infrastructure that the drone should inspect. Two different flights were performed, both of which consisted of a take-off, a linear movement to the wind turbine reaching a height of 14 m, return and landing, as shown in Figure 10. Because of the unavailability of a highly accurate outdoor locating system such as Optitrack, the performance of the system in this environment was evaluated qualitatively by comparing it to the GNSS position estimates.

Calculation of Errors
In order to evaluate the performance of the proposed system, the positioning error in the horizontal plane XY was calculated as

$$\epsilon_i = \sqrt{\left(x_i - \hat{x}_i\right)^2 + \left(y_i - \hat{y}_i\right)^2},$$

where $\epsilon_i$ represents the error of position estimate $i$, $x_i$ and $y_i$ the real 2D position coordinates and $\hat{x}_i$ and $\hat{y}_i$ the estimated 2D position. Once all the positioning errors were calculated, the system was evaluated with the mean error $\mu_\epsilon$, standard deviation $\sigma_\epsilon$ and root mean square error RMSE [62]. Additionally, the error below which 80% of the samples lie, the probability of obtaining an error under 1 m and the maximum error $\epsilon_{max}$ were calculated.
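These metrics can be computed from paired estimate/ground-truth samples as in this sketch (NumPy; the function name and dictionary keys are illustrative):

```python
import numpy as np

def error_metrics(est, truth):
    """Horizontal-plane error metrics used in the evaluation: mean, standard
    deviation, RMSE, 80th-percentile error, fraction of errors under 1 m and
    maximum error. est, truth: (N, 2) arrays of XY positions in metres."""
    e = np.linalg.norm(np.asarray(est) - np.asarray(truth), axis=1)
    return {
        "mean": e.mean(),
        "std": e.std(),
        "rmse": np.sqrt(np.mean(e ** 2)),
        "p80": np.percentile(e, 80),     # error below which 80% of samples lie
        "under_1m": np.mean(e < 1.0),    # fraction of errors under 1 m
        "max": e.max(),
    }
```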

Results
In this section the obtained experimental results are presented and discussed. As explained in the previous section, two types of measurements were performed. The first group was in an indoor controlled environment with the objective of evaluating the feasibility of the proposed LAS. The performance of the system was evaluated with only UWB data and later with the fusion of the inertial data, so that the advantages of data fusion could be seen. The second group of experiments was performed in a realistic environment, and the performance of the proposed LAS was qualitatively evaluated.

Accuracy with Four UWB Anchors
For a correct comparison with the systems of the state of art, the performance of our LAS was evaluated using only UWB data. First, a similar set-up to that proposed by [42] was considered; i.e., only the ranging estimates of the four anchors on the corners were used to position the tags.
In Table 5, the accuracy data are given for both tags. For each flight, the table shows the mean positioning error, its standard deviation, the root mean square error, the error below which 80% of the samples lie, the percentage of errors below 1 m and the measured maximum error.
The obtained results confirm that a small anchor infrastructure of 2 × 2 m can accurately locate a drone when it flies close to the landing platform. Considering all flights, the RMSE value was 0.377 m for Tag T1 and 0.442 m for Tag T2. However, there was a considerable difference between those flights with low horizontal acceleration (Flights 1 to 4) and those with high acceleration (Flights 5 to 9). This is confirmed with the cumulative distribution function plots shown in Figure 11a for Flights 1 to 4 and Figure 11b for Flights 5 to 9. When the acceleration values were low, as can be seen in Figure 11a, the obtained results were similar to those of [42]. In this case, the authors of [42] measured a mean horizontal acceleration of 0.67 m/s 2 and a maximum of 2.35 m/s 2 . However, our results demonstrate that when the drone suffered a higher acceleration (Flights 5 to 9) the accuracy of the UWB-based LAS was reduced, so a traditional system using only UWB data could have problems under adverse conditions.

Accuracy with Eight UWB Anchors
For a better performance of the LAS, our proposal adds some redundancy by using the estimates of eight anchors instead of four. The benefit of having anchor redundancy is that it is possible to calculate new positions even if an anchor fails to see the tags. If only four anchors are used, the lack of a single anchor-tag distance measurement is enough to skip a new position sample. With eight anchors, however, new positions can be calculated even if measurements from up to four anchors are missing. We tested the effect of this redundancy on the positioning accuracy, and Table 6 shows the obtained data for the LAS using only UWB data with eight anchors.
The added redundancy reduced the mean error, RMSE and especially the maximum error. However, just adding more anchors could not solve the problems in the flights of higher accelerations. These flights need a high sampling rate sensor such as an IMU, as we propose in our LAS.

Accuracy with Fusion of Data
With the data of eight UWB anchors, an IMU and a magnetometer, our proposed LAS uses the EKF algorithm presented in Section 3.1 to fuse all this information. In Table 7 the obtained results of this algorithm are shown when it is used to estimate the position of the two tags of the drone. Compared to the obtained results with only UWB data of eight anchors, the data fusion improved the accuracy of the system, especially in the second group of flights, where the mean horizontal acceleration was over 1 m/s 2 . For example, significant changes can be noticed in Flight 5 with the data fusion algorithm: the position of Tag T1 had an RMSE of 0.194 m and Tag T2 had an RMSE of 0.249 m. Without the proposed fusion algorithm, these values were 0.401 m and 0.503 m, respectively, so our proposed positioning algorithm halved the RMSE values in this case. Furthermore, the maximum error in this flight also had a reduction of around 50% in both tags. In general, our proposal significantly reduced the values of the mean error, standard deviation and maximum error in all flights with high accelerations.
Moreover, the fusion of data was also beneficial for those flights with low accelerations as almost all error metrics of every flight improved. Considering all data, with our proposed fusion algorithm, great accuracy can be obtained to locate a drone close to its landing platform.

Accuracy with a Combination of Tags
Finally, our proposed LAS combines both tags of the drone for a more accurate path. After fusing the UWB data from each tag with their IMUs, both position estimates are fused to calculate the position of the centre of the drone. In Table 8 the accuracy of the proposed system is shown. Compared to the individual results of Table 7, the accuracy was further improved. The mean error and the RMSE were reduced in almost all cases. Moreover, there was a general reduction of the standard deviation of the error, which means a reduction of outliers. When only one tag was used to estimate positions, it could sometimes have an unfavourable orientation with respect to the anchors. In these cases, the position estimates would suffer from high errors. With two tags, it is less likely that both produce a bad estimate at the same time. Therefore, the biggest errors of one tag were compensated with the help of the other. Thus, significant improvements can also be seen in the percentage of samples with an error under 1 m and in the maximum error.

Summary of Results
As a summary of the improved results, Table 9 shows the key metrics obtained with a set-up similar to [42] and with our proposed LAS. The first row is the result of considering all the measured errors of both tags and all flights when positioning with only UWB data from four anchors. We can observe that the proposed LAS improved all performance metrics. The reduction in RMSE is remarkable, as it fell from 0.410 m to 0.208 m; i.e., our novel LAS can reduce the obtained errors by about 50% compared to a typical system. Thanks to a higher accuracy and a more frequent data rate, the task of autonomous landing becomes much safer with our proposal.

Results in a Real Environment
In addition to the indoor measurements, the proposed LAS was also tested in an outdoor realistic environment. In this way, it can be assessed how the LAS finds the drone after the inspection mission and how it tracks the vehicle until the landing manoeuvre. Figure 12 represents the trajectories estimated by the proposed LAS compared to the GNSS. The position estimates of our proposal are shown as red points, while the GNSS trajectories are represented as blue lines. However, these GNSS data could not be used as ground truth, since their errors were similar to or greater than those of the UWB system. As an example, note that in both flights the GNSS incorrectly estimated that the drone landed outside the platform, while the proposed system was able to correctly estimate the landing place. Due to dilution of precision, the accuracy of a UWB positioning system such as the one proposed in [42] is degraded at large distances. However, thanks to the information of the IMUs and magnetometers, the results in Figure 12 show that, at large distances, the drone position estimates of our proposed system are similar to those of GNSS in outdoor environments. This accuracy is enough to help the drone approach the platform. Furthermore, near the platform the accuracy of our system improves significantly, as can be seen in Table 8, where the drone flew as far as 4.5 m from the platform. Thus, when the drone starts its landing operation, the accuracy of the system is good enough to help it land on the platform.

Table 10 presents a comparison of the proposed system with others in the literature. This table shows the characteristics of a UWB-based LAS, such as the one proposed in [42], while the accuracy indicated is the one presented in Section 5.1.1. We can observe that this system achieved the worst accuracy of the compared LAS systems. The authors of [19] used vision in their LAS and presented a high accuracy in indoor environments and at short ranges.
However, this vision-based LAS was tested at a much lower horizontal velocity than our proposed LAS. Our proposal combines UWB technology with IMUs and magnetometers and thus achieves a high positioning rate, which is crucial for autonomous landing. Moreover, our proposed LAS is not affected by lighting conditions, as is the case with vision-based systems. We have shown that it can find the landing platform from at least 20 m away and is robust to high horizontal velocities and accelerations.

Conclusions and Future Research
This paper presents a novel landing assistance system capable of locating a UAV for a safe landing after its inspection mission. The proposed LAS is composed of eight UWB anchors placed around the landing platform of the drone and two UWB tags on the vehicle. Both tags also contain an IMU and a magnetometer, which enables the fusion of the drone's real-time acceleration data with the UWB measurements. Unlike other solutions proposed in the literature, our LAS neither needs a large infrastructure deployment, nor does it depend on lighting conditions or the availability of GNSS.
In a recent study, a similar deployment was proposed for a UWB-based RTLS of an autonomous drone. In contrast to that study, our research tested several flights with different horizontal accelerations, so that the effect of sudden changes in the drone's movement, such as those caused by windy weather, could be studied. It has been concluded that higher accelerations can cause problems in UWB-based RTLSs, as their positioning rate can be too low for correct tracking of the drone's movements.
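The effect of a low positioning rate under high acceleration can be illustrated with a back-of-the-envelope calculation: between two consecutive fixes, a drone accelerating at a m/s² drifts roughly ½·a·Δt² from the last reported position. The following sketch is purely illustrative (the acceleration and update-rate values are assumptions, not measurements from this work):

```python
def tracking_gap(accel: float, dt: float) -> float:
    """Approximate worst-case position drift (m) accumulated between two
    consecutive position fixes, for a constant acceleration `accel`
    (m/s^2) and an update period `dt` (s): 0.5 * a * dt^2."""
    return 0.5 * accel * dt ** 2

# Illustrative comparison: a 10 Hz UWB-only update vs. a 100 Hz
# IMU-aided update, under a sudden 3 m/s^2 horizontal acceleration.
print(tracking_gap(3.0, 0.1))   # 10 Hz  -> 0.015 m per interval
print(tracking_gap(3.0, 0.01))  # 100 Hz -> 0.00015 m per interval
```

The quadratic dependence on the update period is the reason a higher positioning rate, as provided by the IMU, pays off most precisely in the flights with sudden accelerations.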
Our proposed LAS is more accurate than UWB-only systems when the drone undergoes high accelerations, thanks to the fusion of UWB data with additional sensors, namely IMUs and magnetometers. Our proposed algorithm takes advantage of the high sampling rate of the IMUs to estimate the position of the drone at a higher rate. Thus, it achieves better tracking of the drone in flights with high velocity and/or acceleration. Moreover, the proposed combination of the tags' positions further improves the accuracy of our LAS: higher robustness is gained because possible errors from one of the tags are compensated by the other. As a result, with our novel LAS, an RMSE value of 0.208 m was obtained, compared to an RMSE value of 0.410 m for a traditional UWB-based LAS. Thanks to the higher accuracy and sampling rate of our proposal, the decision-making of an autonomous vehicle becomes safer.
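The tag-combination step can be sketched as an inverse-variance weighted average of the two per-tag estimates (a minimal illustration, assuming each tag's EKF reports a 3-D position and a scalar variance; the function name and values are hypothetical, and the tags' mounting offsets on the airframe are abstracted away):

```python
from typing import Sequence, Tuple

def fuse_tag_positions(
    p1: Sequence[float], var1: float,
    p2: Sequence[float], var2: float,
) -> Tuple[float, ...]:
    """Fuse two 3-D position estimates by inverse-variance weighting.

    A tag whose filter reports a larger variance contributes less to the
    fused position, so a transient error on one tag is partially
    compensated by the other.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    return tuple((w1 * a + w2 * b) / (w1 + w2) for a, b in zip(p1, p2))

# Equal confidence on both tags -> the fused estimate is the midpoint.
print(fuse_tag_positions((0.0, 0.0, 1.0), 0.04, (0.2, 0.2, 1.0), 0.04))
```

When one tag's variance grows (for example, a non-line-of-sight ranging burst), its weight shrinks automatically, which is the compensation effect described above.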
Additionally, measurements in a relevant outdoor environment have shown that our system is able to position the drone when it is flying close to the landing platform and to track it accurately until the end of the flight. When the drone is flying far away from the landing platform, our system presents an accuracy similar to that of GNSS; however, when the drone is near the landing platform, our LAS achieved better accuracy than GNSS. Furthermore, compared with vision-based systems in the literature, our LAS is not sensitive to lighting conditions. This will allow it to be used with a drone that inspects the inside of critical infrastructures, such as offshore wind turbines or tanks in petrochemical plants.
In conclusion, this paper has presented an accurate landing assistance system for autonomous drones that combines UWB with IMU and magnetometer data. The system can still be improved to achieve greater flexibility; for example, the case of a moving landing platform has not been considered, so future research could point in this direction.