Multi-Ray Modeling of Ultrasonic Sensors and Application for Micro-UAV Localization in Indoor Environments

Due to its payload, size and computational limits, localizing a micro air vehicle (MAV) using only its onboard sensors in an indoor environment is a challenging problem in practice. This paper introduces an indoor localization approach that relies on only the inertial measurement unit (IMU) and four ultrasonic sensors. Specifically, a novel multi-ray ultrasonic sensor model is proposed to provide a rapid and accurate approximation of the complex beam pattern of the ultrasonic sensors. A fast algorithm for calculating the Jacobian matrix of the measurement function is presented, and an extended Kalman filter (EKF) is then used to fuse the information from the ultrasonic sensors and the IMU. A test based on a MaxSonar MB1222 sensor demonstrates the accuracy of the model, and a simulation and an experiment based on the Thales II MAV platform are conducted. The results indicate good localization performance and robustness against measurement noise.


Introduction
Micro air vehicles (MAVs) are a type of drone approximately the size of a person's hand. This property makes them easy to pack and allows them to be flown indoors. One of the fundamental problems of autonomous indoor flight is localization, a problem made more severe by the strict restrictions on the size and weight of MAVs. Thus, how to utilize low-cost and lightweight sensor resources to locate MAVs in complex and ever-changing indoor environments remains an open and challenging issue.
Many technologies have been developed to achieve indoor localization, such as localization based on ranging sensors [1][2][3], Bluetooth [4], inertial measurement units (IMUs), cameras, ultra-wideband (UWB) [5], wireless local area networks (WLAN) [6], ZigBee [7] and radio frequency sensors [8]. In this paper, these approaches are divided into two types according to whether the main localization sensors are placed on the unmanned aerial vehicle (UAV): onboard-sensor-based approaches and offboard-sensor-based approaches. The offboard-sensor-based approaches, such as Cricket developed by MIT, require equipment, such as beacons or motion-capture cameras, to be prearranged in the UAV's flight environment; thus, such approaches achieve good positioning accuracy in known environments.
The onboard-sensor-based approaches, which do not require the assistance of external devices, can be applied to unknown environments. In [9], the data from the IMU and lidar are used as inputs to an odometry algorithm, and the position of the UAV and the map are estimated simultaneously. In [10], a landmark-based method is introduced, in which some simply shaped objects, such as walls, corners and edges, are chosen as landmarks. Additionally, 16 ultrasonic sensors are mounted around

The Micro-UAV Platform
The Thales II indoor MAV platform, shown in Figure 1, is the second generation of the Thales series created by our group [19]. The MAV has the advantages of small size and light weight, and it can fly for about 4 min on a 400 mAh battery. The weight of the Thales II platform is approximately 75 g, which consists of the airframe (15 g), the battery (12 g), 4 motors and propellers (24 g) and 4 MB1222 sonar range finders (24 g), and its diagonal length is 135 mm (motor to motor). The system architecture of the Thales II MAV platform is shown in Figure 2. The lower part of the architecture shows the main hardware components, which are a modified version of the open-source Pixhawk hardware [20]. The powerful ARM STM32F427 performs the computation, and the ESP8285 WiFi module communicates with the mobile controller. Four 820 hollow-cup motors drive the 55 mm propellers. The angular velocity and acceleration are measured by an MPU6000 IMU, and the heading angle is provided by an LSM303 magnetic sensor; both sensors have a sampling period of 8 ms. Considering the size and load limitations, some widely used precise distance measurement approaches, such as laser range finders and depth cameras, cannot be applied to this MAV platform. In the Thales II platform, four MB1222 I2CXL-MaxSonar-EZ2 range finders are installed on the bottom of the MAV. They are installed perpendicular to each other, as shown in Figure 3. Thus, the ranges in four directions can be provided in a single measurement.
The features of the MB1222 I2CXL-MaxSonar-EZ2 range finder include centimeter resolution, an excellent compromise between sensitivity and side object rejection, short to long distance detection, range information from 20 cm to 765 cm, up to a 40 Hz read rate, and an I2C interface [21]. Thus, this sensor is one of the best choices for the localization task. The other features of the MAV platform are shown in Table 1.
The operating system running on the flight control board is the open-source software PX4, which makes it easy to develop customized tasks and to store all the data collected during flight. The main functions of the proposed localization algorithm are shown in the upper part of Figure 2.

Modeling of the Ultrasonic Sensors
Ultrasonic sensors measure distance based on the time of flight and return a range. However, this range is not the straight-line distance to an obstacle; rather, it is the distance to the point with the strongest reflection. This point could be anywhere along the perimeter of the sensor's beam pattern [17,22], which makes the modeling of ultrasonic sensors a complex issue, particularly for online computation. Figure 4 shows the detection area of the MaxSonar MB1222 sonar sensor; it was obtained by placing a plastic plate at predefined grid points in front of the ultrasonic sensor and recording the measurements.
As shown in Figure 4, the 2D beam pattern of the MB1222 sensor can be approximated as an irregular polygon. To reduce the computational load of the polygon model, a multi-ray model is proposed, in which the beam pattern is approximated by a ray group that starts from the origin, as shown in Figure 5. The ultrasonic 2D multi-ray model S can then be formulated as a ray group

S = { s_0 s_j | j = 1, 2, ..., n }, (1)

where s_0 represents the sonar sensor's position and s_j is the end point of the j-th ray. Thus, for a known obstacle O, the model output l is obtained through a two-step calculation. First, the set R of all the intersections of O and S is calculated as

R = O ∩ S, (2)

and then l is given as

l = min_{r ∈ R} ||r − s_0||, with l = l_max if R = ∅. (3)

Equation (3) follows the principle that the ultrasonic sensor provides the nearest measurement of all detections, and the predefined value l_max is returned if there is no intersection between S and O.
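The two-step calculation above can be sketched as follows. This is an illustrative Python implementation, not the onboard code; the segment-intersection routine and the l_max value are our own assumptions.

```python
import math

L_MAX = 7.65  # assumed no-detection value (m), matching the MB1222 maximum range


def seg_intersect(p1, p2, q1, q2):
    """Return the intersection point of segments p1-p2 and q1-q2, or None."""
    (x1, y1), (x2, y2) = p1, p2
    (x3, y3), (x4, y4) = q1, q2
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if abs(d) < 1e-12:  # parallel or degenerate segments
        return None
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / d
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None


def multiray_output(s0, ray_ends, obstacle_segs):
    """Model output l: the nearest intersection of the ray group with the
    obstacle segments (Equations (2) and (3)), or L_MAX if there is none."""
    best = L_MAX
    for sj in ray_ends:
        for q1, q2 in obstacle_segs:
            r = seg_intersect(s0, sj, q1, q2)
            if r is not None:
                best = min(best, math.hypot(r[0] - s0[0], r[1] - s0[1]))
    return best
```

For example, a sensor at the origin with two rays and a wall segment crossing the first ray at 0.5 m returns 0.5; with no intersection at all, the model returns L_MAX.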
Based on the beam pattern of the MaxSonar MB1222 sonar sensor, the multi-ray model was constructed as shown in Figure 6. Nine rays were used to approximate the detection zone of the MB1222. Note that the far ends of the rays were selected slightly beyond the edge to obtain better coverage of the detection zone. To test the agreement between the multi-ray model and the actual sensor measurements, a comparative test was performed between the proposed model and the MB1222 sensor, as shown in Figure 7. The sensor was placed on the edge of a semicircle with radius r, pointing to the center of the circle, and the angle ψ was then increased in five-degree steps. The actual measurements l_t are shown in Table 2. The corresponding outputs of the multi-ray model l_m are presented in Table 3. The modeling errors l_e are presented in Table 4.
As shown in Table 2:
(1) The measurement had a constant offset of approximately 3 cm to 4 cm, even at ψ = 0, i.e., when the sensor was perpendicular to the wall.
(2) The maximum detection angle varied with the distance to the wall. The farther the sensor was from the wall, the narrower the detection angle. The half-side detection angle was close to 0 when the distance exceeded 5.9 m, and it reached approximately 35 degrees when r was less than 1 m.
For comparison, the 3 cm offset was subtracted from the output of the model, and the model error was defined as l_e = l_t − l_m − 3 cm, as shown in Tables 3 and 4. In most cases, the model error was less than 1 cm, and the maximum model error was 2 cm. Considering that the minimum resolution of the sensor is 1 cm, the proposed model fits the actual sensor well for indoor localization.

Table 2. Actual measurements l_t (cm) versus angle ψ (degrees) and radius r (cm). The mark "-" means that the sensor returned its maximum result, i.e., the reflection intensity did not reach the threshold of the sensor.

  r \ ψ    0    5   10   15   20   25   30   35   40
   30     27   27   27   26   26   25   24   24    -
   60     57   57   56   55   54   53   50   49    -
   90     86   86   85   84   82   80   78   78    -
  120    116  115  114  113  110  109  106  106    -
  150    146  145  144  142  140  139  136    -    -
  250    247  245  244  245  243    -    -    -    -
  350    346  345  344    -    -    -    -    -    -
  450    447  446  444    -    -    -    -    -    -
Note that obvious angular constraint characteristics were observed in the measurements of the ultrasonic sensors; however, we did not introduce the angular constraint into the proposed model, in order to reduce the calculation load. Because this constraint involves calculating the angles between all line segments of S and O, it could lead to a significant increase in the calculation load. Instead, an alternative approach, the jump filter, is used to solve this problem; it is presented in Section 5.

Modeling of the MAV System
To describe the motion of the MAV, the map coordinate system O_m–x_m y_m z_m and the body coordinate system O_b–x_b y_b z_b were introduced. The map coordinate system O_m–x_m y_m z_m is fixed to the earth, and its origin is located at the starting corner m_1 of the map M. The body coordinate system O_b–x_b y_b z_b is fixed to the MAV (in a strapdown configuration), as shown in Figure 8.
The attitude of the MAV is described by the rotation matrix R_b^m(φ, θ, ψ) from the body frame to the map frame, where [φ, θ, ψ] are the roll, pitch and yaw angles, respectively. The accelerations in the body frame can then be transferred to the map frame by

a^m(k) = R_b^m(φ, θ, ψ) a^b(k) + G,

where G = [0, 0, g]^T is the gravity vector in the map frame. Therefore, the discrete-time state-space model of the MAV is given by

v(k+1) = v(k) + t_imu a^m(k),
p(k+1) = p(k) + t_imu v(k),

where t_imu represents the sampling period of the IMU and v(k) = [v_x(k), v_y(k)] and p(k) = [p_x(k), p_y(k)] are the velocity vector and position vector in the map frame at step k, respectively. The output of the MAV system is the measurement of the multiple sonar sensors, which is defined as

l(k) = h(p(k), ψ(k), S, M),

where l = [l_1, l_2, l_3, l_4] is the measurement vector of the sonar sensors and h(·) is a nonlinear function of p(k), ψ(k), the sonar model S and the map of the working area M. To obtain the measurements of the sonar sensors, the sonar model S must be represented in the map coordinate system. Since S is a set of line segments, this transformation can be achieved by representing the endpoints of the line segments as

s_0 = p + d_0 [cos(ψ + ψ_{s_0}), sin(ψ + ψ_{s_0})]^T,
s_j = s_0 + d_j [cos(ψ + ψ_{s_0} + ψ_{s_j}), sin(ψ + ψ_{s_0} + ψ_{s_j})]^T, (11)

where p and ψ denote the UAV's position and heading angle in the map coordinate system, respectively, ψ_{s_0} is the heading angle of the sonar in the body coordinate system, and d_0 is the distance between the origins of the body frame and the sonar frame. Additionally, d_j and ψ_{s_j} are the length and the angle of the j-th ray in the sonar coordinate system, respectively. The ultrasonic sensor's measurement l is then given by Equations (3) and (11).
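The planar endpoint transformation can be sketched as follows. This is an illustrative Python version using the symbols above; the function and variable names are our own.

```python
import math


def rays_to_map(p, psi, psi_s0, d0, rays):
    """Express one sonar's ray group in the map frame (planar case).

    p:      (p_x, p_y) MAV position in the map frame
    psi:    MAV heading angle (rad)
    psi_s0: sonar heading angle in the body frame (rad)
    d0:     distance between the body origin and the sonar origin
    rays:   list of (d_j, psi_sj) ray lengths and angles in the sonar frame
    """
    # Sonar origin: body origin shifted by d0 along the sonar's mounting direction.
    s0 = (p[0] + d0 * math.cos(psi + psi_s0),
          p[1] + d0 * math.sin(psi + psi_s0))
    # Each ray end point: rotate the ray by the accumulated heading angles.
    ends = [(s0[0] + dj * math.cos(psi + psi_s0 + psi_sj),
             s0[1] + dj * math.sin(psi + psi_s0 + psi_sj))
            for dj, psi_sj in rays]
    return s0, ends
```

For example, with the MAV at (1, 1), zero heading, a forward-mounted sonar offset by 0.1 m and a single 1 m ray along the sonar axis, the sonar origin maps to (1.1, 1) and the ray end to (2.1, 1).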
In particular, among all the intersections, the one that minimizes Equation (3) is defined as the "active intersection" r_a, and the terms "active ray" s_a and "active wall" M_a are introduced to denote the ray and the wall corresponding to the active intersection.

As shown in Equation (3), the measurement function of the system is nonlinear and discontinuous; thus, using the EKF rather than the traditional Kalman filter is a feasible way to estimate the location of the MAV. The key issue is to compute the Jacobian matrix of Equation (3). The gradient matrix of the function h with respect to x at step k is given by

H(k) = ∂h/∂x |_{x = x̂(k|k−1)}.

Based on the multi-ray model, the Jacobian matrix can be calculated by geometric methods. At time k, suppose that the relationship between the sonar model and the map is as shown in Figure 9, and assume that the active ray S_a and the active wall M_a remain unchanged. The nonzero entries of the Jacobian matrix can then be given as

∂l_i/∂p_x = sin ψ^a_{M_i} / sin(ψ^a_{S_i} − ψ^a_{M_i}),
∂l_i/∂p_y = −cos ψ^a_{M_i} / sin(ψ^a_{S_i} − ψ^a_{M_i}),

where ψ^a_{S_i} and ψ^a_{M_i} represent the yaw angles of the "active ray" and the "active wall" of the i-th ultrasonic sensor in the map frame. In addition, ∂l_i/∂p_x and ∂l_i/∂p_y are set to zero if there is no obstacle in the detection range of the i-th ultrasonic sensor. The MAV's position can then be obtained through a standard EKF procedure:

K(k) = P(k|k−1) H(k)^T [H(k) P(k|k−1) H(k)^T + R]^{−1},
x̂(k|k) = x̂(k|k−1) + K(k)[l(k) − l̂(k)],
P(k|k) = [I − K(k) H(k)] P(k|k−1). (16)

Note that Equation (3) is a piecewise continuous function, and its output may jump under some conditions, such as when S_a changes, M_a changes, or S_a and M_a change simultaneously. In addition, as mentioned in Section 3, if the angle between S_a and M_a exceeds the detection angle constraint, a significant deviation between l(k) and l̂(k) may also result. Similar deviations can occur when the sensor occasionally malfunctions. Considering that the above cases lead to a significant change in the term l(k) − l̂(k), a jump filter is introduced: the measurement l_i(k) is used in the update only if

|l_i(k) − l̂_i(k)| ≤ ε,

where ε is a predesigned threshold. Therefore, if the measurement l_i(k) is significantly different from its prediction l̂_i(k), i.e., |l_i(k) − l̂_i(k)| > ε, the corresponding measurement is filtered out of the estimation. The flow chart of the indoor localization algorithm is shown in Figure 10.
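The EKF measurement update combined with the jump filter can be sketched as below. This is an illustrative NumPy version, not the PX4 implementation; the threshold value EPS is an assumption (it matches the 0.3 m used in the simulation section).

```python
import numpy as np

EPS = 0.3  # assumed jump-filter threshold (m)


def ekf_update_with_jump_filter(x_pred, P_pred, l_meas, l_pred, H, R):
    """Standard EKF measurement update that discards the sonar channels whose
    innovation |l_i - l_hat_i| exceeds the jump-filter threshold."""
    keep = np.abs(l_meas - l_pred) <= EPS        # channels passing the filter
    if not np.any(keep):
        return x_pred, P_pred                    # all measurements rejected
    Hk = H[keep]                                 # reduced Jacobian
    Rk = R[np.ix_(keep, keep)]                   # reduced noise covariance
    innov = (l_meas - l_pred)[keep]
    S = Hk @ P_pred @ Hk.T + Rk                  # innovation covariance
    K = P_pred @ Hk.T @ np.linalg.inv(S)         # Kalman gain
    x = x_pred + K @ innov
    P = (np.eye(len(x_pred)) - K @ Hk) @ P_pred
    return x, P
```

A channel with a 5 m innovation is simply dropped, so an occasional sensor malfunction perturbs neither the state nor the covariance.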

Experiment
We thoroughly evaluate the proposed positioning algorithm using both a simulation and an actual implementation.

Simulation Result
The localization algorithm developed in this paper was first tested through a simulation. To perform the simulation, an a priori polygonal map is given as shown in Figure 11, and the sampled data of the accelerometer and the magnetic heading sensor are generated as

a^b(k) = ā^b(k) + N(0, V_a),
ψ(k) = ψ̄(k) + N(0, V_ψ),

where ā^b and ψ̄ are the true acceleration and the true heading angle of the MAV, and N(0, V_a) and N(0, V_ψ) are the corresponding zero-mean Gaussian noises with variances V_a and V_ψ.
For a MAV in this map, since the position, the heading angle, the map and the ultrasonic model are known, the theoretical ultrasonic measurement l̄ is known. A Gaussian noise with variance V_l is also added to it:

l(k) = l̄(k) + N(0, V_l).

The other parameters used in the simulation are presented in Table 5. The simulation results are shown in Figures 11-15. The actual trajectory of the MAV is shown by the solid line in Figure 11. The MAV first flew straight to the northeast, then straight north, and finally executed a turning maneuver. The true values of the IMU shown in Figure 12 illustrate that the MAV experienced many acceleration and deceleration events during the flight, and its heading angle also changed significantly with time. The localization results based on the integral of the IMU sensors and on the proposed EKF approach are shown in Figure 11. The IMU position error increases over time due to the drift of the accelerometer, and the localization accuracy is poor. In contrast, the estimated locations of the EKF approach are very close to the actual trajectory. A quantitative error comparison is presented in Figure 13. The localization error of the proposed method is less than 0.25 m, while the IMU localization error accumulates over time and finally approaches 2.8 m.
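The noise injection described above can be sketched as follows. This is illustrative Python; the variance values are placeholders, not the ones from Table 5.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed simulation noise variances (illustrative placeholders).
V_a, V_psi, V_l = 0.05, 0.01, 0.001


def noisy_imu(a_true, psi_true):
    """Sampled accelerometer/heading data: truth plus zero-mean Gaussian noise."""
    a = a_true + rng.normal(0.0, np.sqrt(V_a), size=np.shape(a_true))
    psi = psi_true + rng.normal(0.0, np.sqrt(V_psi))
    return a, psi


def noisy_sonar(l_true):
    """Simulated sonar reading: theoretical measurement plus Gaussian noise."""
    return l_true + rng.normal(0.0, np.sqrt(V_l), size=np.shape(l_true))
```

Feeding the true trajectory through these functions at the IMU and sonar sampling rates produces the simulated sensor streams used by the filter.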
The measurements and multi-ray model estimations of the four sonar sensors are presented in Figure 14. The ultrasonic measurements exhibit multiple jumps over time; meanwhile, the jumps in the model estimations were not synchronized with the measurements due to localization errors, and some differences even reached four meters, such as l_3 at 4.64 s. The activation of the jump filter is shown in Figure 15. In this case, errors of more than 0.3 m are filtered out; the threshold is selected based on the maximum possible cumulative error of the IMU during one sampling period of the sonar sensor. As shown in Figure 13, the difference between the estimations and measurements does not significantly affect the localization because of the correction of the jump filter. The statistical analysis of the localization error of the EKF approach is shown in Figures 16 and 17. Figure 16 shows the distribution of the Euclidean norm of the EKF localization errors. The mean EKF localization error was 0.062 m, and its variance was 0.003 m². The red line denotes a smoothing-function fit of the error. The main components of the data are concentrated between 0 and 0.1 m, which is very close to a Rayleigh distribution. A small amount of data was distributed between 0.1 and 0.22 m, due to the cumulative error caused by the asynchrony between the measurements and estimations. Figure 17 shows the distribution of the localization error vector; most of the data were less than the mean error, while a few data points were close to 0.25 m.

Experimental Results
The proposed algorithm was implemented as an application of the PX4 autopilot software. It acquired data from the IMU sensors every 8 ms and from the four sonar sensors every 160 ms, and it reported the position of the MAV to the other applications. The Thales II MAV platform ran the upgraded PX4 autopilot software.
In Figure 18, the red Gaussian curves describe the distributions of the acceleration values along the x_b and y_b axes of the Thales II MAV. The mean bias errors on the x_b and y_b axes were 0.053 m/s² and 0.27 m/s², respectively, and the variances were 0.17 (m/s²)² and 0.21 (m/s²)², respectively. This shows that the IMU sensors were not very accurate and may lead to significant cumulative errors over time.
An L-shaped experimental site was constructed using foam boards, as shown in Figure 19. Because a more accurate localization system was not available, a preset path was used to validate the proposed approach. The test process was to first set a preset trajectory, then move the MAV as close as possible to the preset trajectory, and finally compare the positioning result with the preset trajectory. Note that the second step was performed manually; thus, it may lead to deviations between the MAV's actual position and the preset trajectory. As shown in Figure 20, the dotted line denotes the preset path; it starts from the point (0.5, 0.55) and passes through two turns to reach the end point (2.25, 4.75). A ±10 cm error band is also shown by two dashed lines parallel to the preset path, each 10 cm away from it. As shown in the figure, most of the localization outputs were within the error band, which indicates that the localization error does not exceed 20 cm. Considering the accuracy of human execution, the proposed approach solves the indoor localization problem well. Figure 21 presents the measurements of the four MB1222 ultrasonic sensors. Note that the measurement data are stored once the localization application starts to run; thus, the recording time does not start from 0. As shown in Figure 21, the measurements may contain several jumps in value when the ultrasonic reflected beam changes from one wall to another. For example, the measurement of sonar no. 4, which points to the right side, jumped from 0.57 m to 7.65 m at approximately 36 s; this indicates that the MAV had just passed the first corner. In practice, the items in a room may change, which may adversely affect the localization algorithm. To test the adaptability of the algorithm to this situation, an unmodeled obstacle was placed in the test site. The obstacle was a box approximately 0.7 m long and 0.5 m wide.
The test results are shown in Figures 22 and 23. The proposed algorithm worked well with the unmodeled obstacle: the localization results were not significantly affected and stayed within the error band.

Conclusions
In this paper, a novel beaconless indoor localization approach that relies on onboard ultrasonic sensors and IMU sensors is presented.
A multi-ray model for ultrasonic sensors is proposed. It approximates the beam pattern accurately while maintaining a low computational complexity, which makes it suitable for a lightweight MAV. A multi-ray modeling process is then provided based on the beam pattern of the MaxSonar MB1222 ultrasonic sensor. The comparative test validates that the proposed model agrees well with the actual sensor for indoor localization.
Based on the multi-ray model, an EKF-based indoor localization method is presented. The measurements of the sonar sensors and IMU sensors are fused to achieve higher-precision positioning. A jump filter is introduced to suppress abnormal, significant differences between the estimations and measurements.
Simulations validate the proposed methods, showing a localization accuracy of approximately 20 cm. Afterwards, the proposed approach was applied to the Thales II MAV, a small, lightweight platform. The results illustrate that the computational complexity is low enough to run on the STM32 platform and that the positioning accuracy is likewise better than 20 cm. An experimental test with an unmodeled obstacle shows the good robustness of the proposed method: the localization results were not significantly affected and stayed within the error band.
Future work will improve the algorithm for more complex indoor environments, such as offices with much electric and electronic equipment, which may cause large interference with the magnetic compass measurements.
Author Contributions: F.X. conducted the ultrasonic sensor modeling. Y.L. and S.X. contributed the localization method. Z.J. contributed the simulation and application. Y.L. and Z.J. wrote the paper. S.X. and F.X. revised the paper.