1. Introduction
The ability of an unmanned ground vehicle (UGV) to localize and navigate autonomously in an indoor environment is essential. It enables the vehicle to explore an unknown environment without pre-installed infrastructure and can also benefit visually-impaired persons and first responders. Meanwhile, indoor localization and navigation are challenging due to the absence of Global Positioning System (GPS) signals, which are widely used in outdoor navigation. Generally, a UGV can estimate its position and orientation by integrating the information from inertial sensors, such as the gyroscope and accelerometer, a process known as dead reckoning. However, the Inertial Navigation System (INS) suffers from error accumulation. It is therefore necessary to fuse the INS with other complementary sensors to enhance the dead reckoning accuracy in GPS-denied environments. A common sensor available in most indoor UGVs is light detection and ranging (LiDAR). LiDAR emits a sequence of laser beams at known bearings. When a laser pulse hits the surface of an object within the scanning range, it is reflected back. LiDAR estimates the distance from the laser scanner to objects in the environment by recording the time difference between the emitted and reflected pulses. The high sampling rate and reliable measurements of LiDAR can be used to derive the motion of the vehicle or to interpret the contour of the environment.
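The time-of-flight ranging principle just described amounts to a one-line computation. A minimal sketch (the speed of light in air is approximated by its vacuum value):

```python
def tof_distance(t_emitted: float, t_received: float,
                 c: float = 299_792_458.0) -> float:
    """Range from LiDAR time-of-flight: the pulse travels to the
    target and back, so the one-way distance is c * dt / 2."""
    return c * (t_received - t_emitted) / 2.0

# An echo arriving ~133.4 ns after emission corresponds to roughly 20 m.
```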
Most localization and navigation approaches using LiDAR can be classified into two categories based on the availability of a geometric map [
1]. Given the map, the position and orientation of the vehicle can be estimated by matching a scan or features extracted from the scan with the
a priori map. The main limitation of this map-based localization method is the system’s inability to adapt to spatial layout changes [
2]. When the map is unavailable, the vehicle performs localization by matching scans to estimate relative position and orientation changes, thereby tracking its pose over time. If a map is also generated concurrently, the process is called simultaneous localization and mapping (SLAM) [
3].
The most common features used in indoor LiDAR-based odometry and SLAM are line features. Since line features exist universally in indoor environments, they are extracted and tracked to perform LiDAR-based indoor localization. The relative position and orientation (pose) changes estimated from LiDAR are fused with the INS in a filtering algorithm such as the Kalman filter (KF) or the particle filter (PF). However, there are significant challenges involved in this mechanism. Firstly, although line features are dominant in most indoor environments, there are areas where well-structured line features do not exist. Secondly, it is necessary to determine how many lines there are in the environment and which point belongs to which line; tracking the same line between consecutive scans is equally important. Thirdly, the LiDAR range and bearing measurements are contaminated with errors due to instrument error, the reflectivity of the scanned surface, environmental conditions such as temperature and atmospheric effects,
etc. [
4]. When calculating the line parameters, these error factors need to be dealt with. Finally, when integrating the INS with LiDAR through a filter, the measurement covariance matrix, which indicates how much we can trust the measurements, should be taken into consideration. These issues have not been extensively addressed in the literature, especially the measurement covariance estimation issue. Therefore, this paper introduces an adaptive covariance estimation method that can be used in any LiDAR-aided integrated navigation system. The method is applied in an extended Kalman filter (EKF) design that fuses LiDAR, MEMS-grade inertial sensors and the vehicle’s odometer observations.
The rest of this paper is organized as follows:
Section 2 introduces some of the related works, and
Section 3 describes the line extraction methods, including line segmentation, line parameter calculation and line merging, and then presents the line feature matching-based pose change estimation and its adaptive covariance estimation method. Then, in
Section 4, the integrated system and the filter design are explained in detail.
Section 5 presents and discusses the experiment results, while the conclusion is given in
Section 6.
2. Related Work
LiDAR is usually mounted on a carrier, such as an aerial vehicle, a land vehicle or a pedestrian, to implement pose estimation, mobile mapping and navigation in both indoor and outdoor environments, either unaided or combined with other sensors. One technique commonly adopted in LiDAR-based motion estimation is scan matching. Scan matching methods can be broadly classified into three categories: feature-based scan matching, point-based scan matching and mathematical property-based scan matching [
5]. Feature-based scan matching extracts distinctive geometrical patterns, such as line segments [
6,
7,
8,
9], corners and jump edges [
10,
lane markers [
12] and curvature functions [
13], from the LiDAR measurements. Feature-based scan matching is efficient, but it depends on the structure of the environment and has limited ability to cope with unstructured or cluttered environments. Point-based scan matching matches scan points directly without extracting any features. Compared with the feature-based method, it is suitable for more general environments. In addition, since it uses the raw sensor measurements for estimation, it is more accurate, but the computational load increases as well. Specifically, the iterative closest point (ICP) algorithm [
14,
15] and its variants [
16] are the most popular methods for the point-based scan matching problem, due to their simplicity and effectiveness. The core step is to find corresponding point pairs that represent the same physical place in the environment. This process is difficult due to interference and noise, and iteration is required to refine the final results. Mathematical property-based scan matching can be based on a histogram [
17], Hough transformation [
18], normal distribution transform [
19] or cross-correlation [
20].
Despite the limitations of the feature-based scan matching method, indoor localization and navigation can benefit from it, because most indoor environments are well structured and line features can be easily extracted and used. The line feature extraction process estimates the line parameters from a set of scan points and generally includes the following steps:
- (1)
The first step is preprocessing, which is an optional procedure on the LiDAR measurements. In [
10,
21], a median filter is used to filter out some of the outliers and noise, while a sixth-order polynomial fit is used in [
22,
23] between the measured distance and the true distance to compensate for the systematic error.
- (2)
The following step is to segment the scan points into groups, each containing the points belonging to the same line. Generally, there are three segmentation (breakpoint detection) approaches: one based on the Euclidean distance between two consecutive scan points, one based on the KF and one based on fuzzy clustering [
23,
24,
25].
- (3)
Then, the line parameters are computed. The most commonly used line fitting techniques are the Hough transform (HT) [
26] and least squares. HT extracts line features by transforming image pixels into the parameter space and detecting peaks in that space. However, HT has drawbacks, such as the occurrence of multiple peaks due to the discretization of the image and parameter spaces, as indicated in [
27].
- (4)
Finally, collinear lines are merged into a single line feature, making the process more efficient.
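Steps (2) and (3) above can be sketched as a minimal pipeline. The function names, the gap threshold and the synthetic two-wall scan below are illustrative, not from the paper; segmentation uses the Euclidean-distance breakpoint criterion and the fit is a total-least-squares line in Hessian normal form:

```python
import numpy as np

def segment_scan(points, gap=0.3):
    """Step (2): split an ordered scan (an (N, 2) array of x, y points)
    wherever the Euclidean distance between consecutive points exceeds `gap`."""
    dists = np.linalg.norm(np.diff(points, axis=0), axis=1)
    breaks = np.where(dists > gap)[0] + 1
    return np.split(points, breaks)

def fit_line(points):
    """Step (3): total-least-squares fit of a line in Hessian normal form
    (alpha, r), i.e. x*cos(alpha) + y*sin(alpha) = r."""
    centroid = points.mean(axis=0)
    # The line normal is the direction of least variance of the centred points.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    alpha = np.arctan2(normal[1], normal[0])
    r = normal @ centroid
    if r < 0:  # keep r non-negative by flipping the normal direction
        alpha += np.pi
        r = -r
    return alpha, r

# Two walls separated by a gap: one along y = 1, one along x = 3.
wall1 = np.column_stack([np.linspace(0.0, 2.0, 20), np.full(20, 1.0)])
wall2 = np.column_stack([np.full(20, 3.0), np.linspace(1.5, 3.5, 20)])
segments = segment_scan(np.vstack([wall1, wall2]))
```

Running `fit_line` on each segment and then merging near-collinear results would complete steps (3) and (4).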
A wide range of commonly used line feature extraction methods are compared in [
28]. These methods differ in their segmentation step, while total least squares is used to calculate the line parameters and their associated covariance. However, they iterate the segmentation and line fitting procedures over all of the scan points and are therefore computationally expensive. In addition, errors in the LiDAR measurements are usually neglected. Some earlier papers discussed and dealt with the systematic and random errors in the raw LiDAR measurements. In particular, the noises in both range and angle measurements are assumed to be zero-mean Gaussian random variables [
9] and are modeled as the difference between the true value and the noisy measurements. Then, each scan point’s influence in the overall line fitting is weighted according to its uncertainty. Similarly, in [
11], two sources of error, measurement process noise and quantization error, are modeled. The uncertainty of extracted line features, which depends on the line fitting method and on the range error models, is studied in [
29]. By associating and matching the same line features in different scans, the vehicle’s relative motion between the two positions where the scans are taken can be estimated.
Much of the work on LiDAR measurement and feature uncertainty does not extend the analysis to integration with other sensors, e.g., the work in [
8,
30]. In [
8], the modified incremental split-and-merge algorithm is proposed for line extraction. In addition, the standard deviation of the perpendicular distance from the scan points to the extracted line feature is calculated to evaluate the quality of the extracted lines and is used in the calculation of the covariance of the LiDAR observations (perpendicular distance changes in this case). Meanwhile, INS outputs are used to aid LiDAR feature prediction and matching, tilt compensation and the computation of the laser-based navigation solution when not enough features are extracted.
Different from existing methods, in this paper, an efficient LiDAR-based pose change estimation method supported by a novel adaptive measurement covariance estimation technique is introduced. Furthermore, an EKF design that fuses LiDAR, inertial sensors and the odometer is given to provide a robust integrated navigation system for UGVs. The main contributions of this paper are listed below:
- (1)
The covariance of LiDAR-derived relative pose change is estimated. This is crucial when the LiDAR-derived relative pose changes are to be fused with information from other sensors. The proposed adaptive covariance estimation method can be applied in any LiDAR-based integrated navigation system.
- (2)
The LiDAR intensity measurements are used to weight the influence of each scanned point on the line parameter estimation.
- (3)
The influences of the geometric layout of the environment (especially the long corridor) and line feature extraction error are addressed.
4. The Integrated Solution and Filter Design
In this section, the integrated solution is described. The main motion model of the proposed system is an INS supported by odometer measurements. Inertial sensors provide angular velocity and acceleration, while the odometer provides linear velocity. These measurements are independent of the operating environment, but suffer from error accumulation. On the other hand, pose tracking using LiDAR line feature-based scan matching has long-term consistent accuracy, but is susceptible to the environment’s structure. To achieve better performance than any sensor can provide unaided, the three sensors are integrated through an EKF. One innovation of our filter design is that only one vertical gyroscope is used, without an accelerometer, under the assumption that land vehicles, especially UGVs, mostly move in horizontal planes [
32,
33]. The benefits are lower cost and less mechanization complexity. More importantly, velocity is derived from the odometer instead of by integrating the accelerometer outputs; hence, the error in velocity is reduced. The block diagram of the multi-sensor integrated navigation system is shown in
Figure 5.
More specifically, a single-axis gyroscope vertically aligned with the vehicle body frame is used to detect the rotation rate of the body frame with respect to the inertial frame, from which the azimuth can be derived. The odometer outputs velocity along the vehicle’s forward direction. By integrating the single-axis gyroscope and the odometer, the vehicle motion change from dead reckoning can be derived. Meanwhile, the pose change and its covariance are derived from LiDAR based on the proposed algorithm. Due to the nonlinearity of the system, the EKF is adopted to fuse the information from the different sensors. Then, the gyroscope bias is accurately estimated, and the navigation solutions are corrected.
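The single-gyro/odometer dead reckoning described above can be sketched as follows. The function and axis conventions are mine, not the paper's; the paper's exact mechanization is given in Section 4.1:

```python
import math

def dead_reckon(pose, w_z, b_z, v_f, dt):
    """One INS/odometry dead-reckoning step: integrate the bias-corrected
    vertical gyroscope rate (w_z - b_z) into the azimuth, then project the
    odometer forward speed v_f onto the horizontal plane.
    pose = (x, y, azimuth); the axis convention here is illustrative."""
    x, y, a = pose
    a += (w_z - b_z) * dt          # azimuth from the single vertical gyro
    x += v_f * math.cos(a) * dt    # forward speed projected onto x
    y += v_f * math.sin(a) * dt    # forward speed projected onto y
    return (x, y, a)

# Drive straight ahead at 1 m/s for 2 s with a perfectly calibrated gyro.
pose = (0.0, 0.0, 0.0)
for _ in range(20):
    pose = dead_reckon(pose, w_z=0.0, b_z=0.0, v_f=1.0, dt=0.1)
```

An uncorrected bias `b_z` would make the azimuth, and hence the position, drift without bound, which is exactly the error accumulation the LiDAR corrections are meant to suppress.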
Figure 5.
The block diagram of the multi-sensor integrated navigation system.
4.1. System Model
The error state vector for the filter design is defined as:
where the variables are defined as follows: δΔx is the error in the relative displacement between scans along the x-axis of the vehicle body frame; δΔy is the error in the relative displacement between scans along the y-axis of the vehicle body frame; δvf is the error in the forward speed derived from the odometer; δvx is the error in the forward velocity projected along the x-axis of the vehicle body frame; δvy is the error in the forward velocity projected along the y-axis of the vehicle body frame; δΔA is the error in the azimuth change between scans; δaod is the error in the acceleration derived from the odometer measurements; δbz is the error in the gyroscope bias.
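Collecting these definitions in the order listed, the error state vector can be written as:

```latex
\delta\mathbf{x} =
\begin{bmatrix}
\delta\Delta x & \delta\Delta y & \delta v_f & \delta v_x & \delta v_y &
\delta\Delta A & \delta a_{od} & \delta b_z
\end{bmatrix}^{T}
```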
The system model is derived from INS/odometry mechanization. Specifically, the azimuth change between two scans can be calculated using gyroscope measurements. It is given as follows:
where ΔAI is the azimuth change obtained from the INS, wz is the gyroscope measurement, bz is the gyroscope bias and T is the time interval between two scans. Based on the azimuth change, the forward velocity can be projected onto the vehicle body frame:
Therefore, the relative position change between two scans can be denoted as:
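From the surrounding definitions, these dead-reckoning relations can be summarized as follows (a hedged reconstruction; the choice of the body y-axis as the forward direction is an assumption consistent with the motion described in Section 5):

```latex
\Delta A_I = \int_{t}^{t+T}\!\left(w_z - b_z\right)\,d\tau \approx \left(w_z - b_z\right) T,
\qquad
v_x = v_f \sin\Delta A_I,\quad v_y = v_f \cos\Delta A_I,
\qquad
\Delta x \approx v_x\,T,\quad \Delta y \approx v_y\,T
```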
By applying a Taylor expansion to the INS/odometry dynamic system given in Equations (22)–(26) and considering only the first-order term, the linearized dynamic system error model is given as below [
34,
35]:
Here, the random errors in the acceleration derived from the odometer and in the gyroscope measurements are both modeled as first-order Gauss–Markov processes. γod and βz are the reciprocals of the correlation time constants of the random processes, while σod and σz are the standard deviations associated with the odometer and gyroscope measurements, respectively.
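For reference, a first-order Gauss–Markov process with reciprocal correlation time βz and standard deviation σz, as used here for the gyroscope bias error, obeys the standard form:

```latex
\delta\dot{b}_z(t) = -\beta_z\,\delta b_z(t) + \sqrt{2\beta_z}\,\sigma_z\,w(t)
```

where w(t) is unit-variance white noise; δaod follows the same form with γod and σod.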
4.2. Measurement Model
For the measurement model, the pose changes estimated from the line feature scan matching are used as filter observations, and the observation covariance matrix is also adaptively calculated by the proposed technique. The advantages of this filter design are two-fold: (1) the line geometry influence is taken into account when estimating the observation covariance; thus, it can precisely reflect the error level in the observations; (2) the tuning of the observation covariance matrix is avoided.
The difference between the vehicle pose changes obtained from the INS/odometry and from LiDAR is taken as the observation vector in the measurement model. The observation vector z can be given as:
where the subscripts L and I denote measurements from LiDAR and INS, respectively. Therefore, the design matrix for the measurement model can be represented as:
The observation covariance matrix R is assumed to be diagonal, neglecting the correlation between the relative displacement and the azimuth change. It is given as:
Here, CΔx,Δy and CΔA are calculated using Equations (17) and (21), respectively.
After establishing the system and measurement models, the EKF equations implement the prediction and correction steps, which are performed in the body frame and then transformed into the navigation frame to provide corrected navigation outputs.
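A minimal sketch of the correction step with the adaptively estimated observation covariance (the matrix names follow the usual EKF conventions, not the paper's notation):

```python
import numpy as np

def ekf_update(x, P, z, H, R):
    """Standard EKF measurement update. R is the adaptively estimated
    observation covariance: when the line geometry is poor, R is large
    and the filter automatically discounts the LiDAR observation."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)             # corrected error state
    P = (np.eye(len(x)) - K @ H) @ P    # corrected covariance
    return x, P
```

With a near-zero R the update trusts the observation almost completely; with a very large R, as in the singular long-corridor case discussed in Section 5, the observation is effectively ignored, which is why no manual tuning of R is needed.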
5. Experiment Results and Analysis
To evaluate the proposed algorithm, real-world experiments were performed in an indoor office building with a Husky A200 UGV from Clearpath Robotics Inc. (Waterloo, Canada), shown in
Figure 6. This UGV is wirelessly controlled, and the platform is equipped with a CHR-UM6 MEMS-grade inertial measurement unit (IMU) [
36], a SICK laser scanner LMS111 (SICK, Waldkirch, Germany) [
37] and a quadrature encoder. The specifications of the laser scanner are shown in
Table 1.
The experiment is conducted on the second floor of the building, with the UGV moving in the corridor during the whole process. The spatial layout of the floor is approximately a 70 m × 40 m rectangle, shown in
Figure 7.
As can be seen from the floor plan, the corridor length can sometimes exceed the maximum scanning range of the LiDAR, which is 20 m in our experiments. This may result in the singularity problem. As discussed earlier, a poor geometric layout can cause the singularity issue, and the estimated covariance will be large. Meanwhile, thanks to the proper weighting scheme for the line features in the covariance estimation process, a large line extraction error will not jeopardize the estimation accuracy of the relative position change and its associated covariance. To illustrate the influences of the geometric layout and the line quality on the relative position change and covariance estimation, two cases are discussed. The LiDAR scans for these two situations are shown in
Figure 8 and
Figure 9 with the extracted lines marked. The green square represents the beginning of a line, and the red diamond marks its end. In both figures, the graph on the left shows the previous scan, while the graph on the right shows the scan ten epochs later. The ten-epoch interval guarantees that distinguishable changes can accumulate and be estimated.
Table 1.
SICK LMS111 specifications.
| Parameter | Value |
|---|---|
| Statistical Error (mm) | ±12 |
| Angular Resolution (°) | 0.5 |
| Maximum Measurement Range (m) | 20 |
| Scanning Range (°) | 270 |
| Scanning Frequency (Hz) | 50 |
Figure 7.
LiDAR-aided and INS/Odometry trajectories.
Figure 8 illustrates the case in which the UGV is located in the long corridor. Here, only two parallel walls are detected, and two parallel line features are matched between the two scans. This geometric layout causes a singularity in the relative position change estimation. However, since the scan points on the extracted lines are well distributed, the line quality is good.
Figure 8.
UGV moves in the long corridor. (a) The first scan. (b) The second scan.
Figure 9 shows another example, in which the UGV arrives at a corner. Four lines are extracted in the first scan, while five lines are extracted in the second scan. In the line matching step, the first three lines in both scans are matched successfully, while the fourth line in the first scan is matched with the fifth line in the second scan. For the four pairs of matched lines, the line extraction errors of the first two pairs are small, the third pair’s is larger, and the last pair’s is the most significant. As indicated by Equation (16), the larger the line extraction error, the lower the weight given to the corresponding line pair in the line feature scan matching-based pose change and covariance estimation. Consequently, the mismatched fourth line pair has the least weight in the position change estimation.
Figure 9.
The UGV arrives at a corner. (a) The first scan. (b) The second scan.
The relative displacement and covariance estimation results for the above two examples are shown in
Table 2. In both situations, the UGV moves mainly along the vertical (y) direction of the body frame. The maximum speed of the vehicle is 1 m/s, and the ten-epoch time interval is 0.2 s, so the true displacement between the matched scans cannot exceed about 0.2 m. Therefore, the results of Example 2 are reasonable, while the results of Example 1 are clearly erroneous. This leads to the conclusion that the geometric layout of the extracted lines has a significant impact on the estimation of the displacement. In contrast, since each line feature is weighted by its quality in the displacement estimation, the line quality has only a minor effect on the results. Importantly, the covariance of the displacement estimated by the proposed algorithm precisely reflects the error level in the position change. When the displacement is propagated to the filter, the observation can be evaluated according to its covariance and accurate error state estimation can be achieved.
Table 2.
Relative displacement and covariance estimation results for the two examples.
| Example | Δx (m) | Δy (m) | Cov (Δx) | Cov (Δy) |
|---|---|---|---|---|
| Example 1 | 0.3801 | 30.1185 | 0.0041 | 20.6805 |
| Example 2 | 0.0602 | 0.2057 | 0.0001025 | 0.0001956 |
Figure 10 shows a portion of the relative position change in the vehicle’s forward direction from LiDAR and INS, along with the estimated covariance. It can be clearly seen that when the LiDAR-derived position change is accurate, the estimated covariance is very small; conversely, when the LiDAR-derived position change is significantly degraded, the estimated covariance is large. This demonstrates the consistency and accuracy of the proposed covariance estimation method.
Figure 10.
Relative position change in the vehicle forward direction from LiDAR and INS and the estimated covariance.
The trajectories obtained from the unaided INS/odometry dead reckoning system (after initial gyroscope bias compensation) and from the LiDAR-aided integrated system are plotted on the floor plan. In addition, the filtered trajectory obtained without weighting the scanned points in the line parameter estimation and without weighting the matched line pairs in the pose change and covariance estimation is also shown in
Figure 7. It is important to note that, even without the weighting scheme, the observation covariance still adapts instead of being fixed to a constant value. However, due to the inaccurate pose change and covariance estimation, the error in the unweighted navigation solutions accumulates gradually. Compared with the unaided and unweighted solutions, the filtered navigation solutions show a significant improvement in accuracy. This means that, with the corrections from LiDAR and the observation covariance matrix from the proposed algorithm, the gyroscope bias is precisely estimated (illustrated in
Figure 11), which directly reflects the improved accuracy of the integrated solutions (the red curve in
Figure 7).
Figure 11.
Gyroscope bias estimation results.