Article

Feature-Based Laser Scan Matching and Its Application for Indoor Mapping

1 Beijing Advanced Innovation Center for Imaging Technology, Capital Normal University, Beijing 100048, China
2 School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
3 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Sensors 2016, 16(8), 1265; https://doi.org/10.3390/s16081265
Submission received: 12 May 2016 / Revised: 2 July 2016 / Accepted: 5 August 2016 / Published: 10 August 2016
(This article belongs to the Section Remote Sensors)

Abstract
Scan matching, an approach to recovering the relative position and orientation of two laser scans, is a very important technique for indoor positioning and indoor modeling. The iterative closest point (ICP) algorithm and its variants are the most well-known techniques for this problem. However, ICP algorithms rely heavily on the initial guess of the relative transformation, which limits their use in practical applications. In this paper, an initial-free 2D laser scan matching method based on point and line features is proposed. We carefully design a framework for the detection of point and line feature correspondences. First, distinct feature points are detected based on an extended 1D SIFT, and line features are extracted via a modified Split-and-Merge algorithm. In this stage, we also give an effective strategy for discarding unreliable features. The point and line features are then described by a distance histogram; the pairs achieving the best matching scores are accepted as potential correct correspondences. A histogram clustering technique is adopted to filter outliers and provide an accurate initial value of the rigid transformation. We also propose a new relative pose estimation method that is robust to outliers. We use the lq-norm (0 < q < 1) metric in this approach, in contrast to classic optimization methods whose cost function is based on the l2-norm of residuals. Extensive experiments on real data demonstrate that the proposed method is almost as accurate as ICPs and is initial free. We also show that our scan matching method can be integrated into a simultaneous localization and mapping (SLAM) system for indoor mapping.

Graphical Abstract

1. Introduction

The mapping of indoor environments and underground spaces has become increasingly important in recent years. However, the unavailability of GPS signals inside buildings and underground spaces makes indoor modeling and mapping a challenging task. Recently, numerous works have appeared to address this issue. The most popular solution may be the simultaneous localization and mapping (SLAM) technique, which is widely applied in robotics. Compared with vision-based SLAM systems, laser-based ones can provide more accurate indoor maps and models.
The core of laser-based SLAM, scan matching, is a technique to recover the relative position and orientation of two laser scans. It estimates a rigid transformation that projects one laser scan so that it aligns with the other. The ICP algorithm [1,2] and its variants (to name a few, [3,4]) are used pervasively in laser scan matching. However, the performance of ICPs is unstable: when a good initial guess of the transformation is unavailable, ICPs may not converge to a correct solution. Thus, ICP-based SLAM systems usually fuse data from additional, expensive sensors, such as inertial measurement units (IMUs) and odometers, to guarantee robustness. The expensive hardware required reduces the practicality of these methods.
In vision-based SLAM systems, visual features (such as SIFT [5], SURF [6], ORB [7], and BRISK [8]) are adopted for relative pose recovery. Keyframe image matching typically consists of four major stages: keypoint detection, keypoint description, keypoint matching, and transformation estimation. In the first stage, salient and stable interest points are extracted. Each keypoint is then described based on its photometric neighborhood, such as local gradients. The third stage calculates the distances between the descriptor vectors to recognize reliable correspondences (inliers). Finally, the transformation matrix is computed based on RANSAC [9]. This pipeline makes no assumption regarding the initial guess of the transformation.
Currently, 3D indoor mapping is drawing increasing attention. It can directly produce dense point clouds and does not rely on a planar-floor prior. However, 2D indoor mapping is still very important. 2D indoor maps can be used for emergencies, evacuation and localization. For example, Google indoor maps are produced by a 2D indoor mapping system called Cartographer. To obtain 3D indoor models, two additional 2D lasers can be integrated. Moreover, 2D lasers are much cheaper than 3D ones, and the processing of 2D laser scans is much more efficient than that of 3D scans.
In this paper, we carefully design a feature-based framework for 2D laser scan matching. Like keyframe image matching, there are also four basic stages in our framework. First, distinct feature points are detected based on an extended 1D SIFT, and line features are extracted via a modified Split-and-Merge [10,11] algorithm. The point and line features are then described by a distance histogram; the pairs that achieve the best matching scores are accepted as potential correct correspondences. Finally, we estimate the relative positions and orientations of two laser scans based on a robust cost function. Extensive experiments on real data demonstrate that the proposed method is almost as accurate as ICPs and is initial free. We also show that our scan matching method can be integrated into a SLAM system for indoor mapping and indoor modeling. The contributions of our work are summarized as follows:
(1)
We propose a new initial-free 2D laser scan matching method by combining point and line features, which is a very effective technique for indoor mapping and modeling.
(2)
We carefully design a framework for the detection of point and line feature correspondences from laser scan pairs. We also give an effective strategy to discard unreliable features. Thus, our detected feature correspondences are distinct, reliable, and invariant to rotation changes.
(3)
We propose a new relative pose estimation method that is robust to outliers. We use the lq-norm (0 < q < 1) metric in this approach, in contrast to classic optimization methods whose cost function is based on the l2-norm of residuals. Unlike the conventional RANSAC-based [9] strategy, there is no gross error detection stage in our pose estimation algorithm. In addition, our pose estimation algorithm is more robust to noise than the RANSAC-based one.
(4)
We make an honest attempt to present our work at a level of detail that allows readers to re-implement the method.

2. Related Work

Many works on 2D laser scan matching have been presented, so we do not give a comprehensive survey here. We review only the most relevant and recent works, including the well-known ICP family, feature-based methods, and other techniques.
The idea of the ICP algorithm [1,2] is intuitive and simple: it alternates between finding closest-point correspondences under the current transformation and fitting the transformation to those correspondences until convergence. The classical ICP algorithm minimizes a point-to-point distance function. Lu and Milios [12] proposed two ICP variants for scan registration. The first alternates between translation estimation and rotation computation: given a fixed rotation, the translation is optimized by least squares; the translation is then fixed, and the rotation is searched by the golden-section method [13]. The second is an iterative dual correspondence (IDC) method that adopts two types of correspondences, i.e., Euclidean distance and range angular distance. Minguez et al. [14] presented a new distance metric for ICP, called MbICP, which considers the configuration space of the sensor and combines the rotation and translation errors of the sensor. As shown in their paper, MbICP improves the correspondence building stage and the convergence rate of the classical ICP. Censi [4] describes a point-to-line metric-based ICP (PLICP). In contrast to other point-to-line methods [2], PLICP develops a closed-form solution for the planar case, i.e., 2D laser scans. PLICP also improves the convergence of classical ICP from linear to quadratic. ICPs can achieve very high accuracy in scan matching. However, as mentioned earlier, their strong dependence on a good initial guess reduces their usefulness in practical applications.
Diosi and Kleeman [15,16] developed a point-to-point matching method called polar scan matching (PSM). It directly matches the range measurements of two laser scans in their native polar coordinate system. PSM uses a matching-bearing rule to associate the range measurements of the target scan with the reference scan and minimizes a weighted range-residual cost function. This method is much faster than ICPs because it avoids searching for correspondences. As noted in [17], PSM can also converge to an optimal solution from a larger range of initial guesses.
Montesano et al. [18] give a probabilistic formulation of scan registration called probabilistic scan matching (Probabilistic SM). Like classical ICP, it also consists of two stages, i.e., correspondence location and transformation estimation. In the first stage, the uncertainty in both relative transformation and range measurement is modeled based on Gaussian distributions. In addition, Probabilistic SM allows these correspondences to be found by probability integration over all possible associations between the range measurements of the two scans. The probabilistic modeling of uncertainty is more suitable for real sensor data compared with pure geometrical approaches.
Another probabilistic-based method, called correlative scan matching (CSM) [19], uses cross-correlation to register two laser scans, which achieves real-time performance with little loss of accuracy. CSM maximizes the probability of having observed range measurements. That is, it searches for an optimal rigid transformation that overlaps the two laser scans as much as possible. To avoid a local maximum, CSM searches over the entire parameter space of plausible rigid-body transformations. To detect the plausible region, some additional information should be provided, such as that from visual odometry, wheel odometry, or commanded motion.
These techniques also rely on a good initial guess or prior information. To eliminate this dependence, feature-based scan matching methods have appeared. Ramos et al. [20] propose a novel feature-based approach based on conditional random fields (CRF). In their CRF-Matching, a joint estimation over all observations in a laser scan is performed, and shape information is considered to reject outliers. The CRF model can integrate multiple features, such as shape features and appearance features. The maximum a posteriori estimate of the rigid transformation is computed by loopy belief propagation. The main limitations are the high computational complexity and the unreliability for partially overlapping laser scan pairs. Tipaldi and Arras [21,22] presented a method called FLIRT. In their work, they studied three types of laser feature detectors (range-based, normal-based, and curvature-based detectors) and two feature descriptors (the linear local shape context descriptor and the β-Grids descriptor). Based on a comprehensive analysis of these detectors and descriptors, they combine the best detector with the best descriptor to form FLIRT. They show that FLIRT can be adopted in laser SLAM systems and that impressive results can be achieved. However, the descriptor in FLIRT is very slow, which prevents it from being widely used in real SLAM applications because the number of scans in a SLAM process is usually huge. In addition, there is no outlier removal stage in their work.
Motivated by these techniques, we propose a real-time scan matching approach based on point and line features. Our method is almost as accurate as ICPs and is initial free. It can be easily adapted in a real indoor SLAM system, as shown in our experiments.

3. Scan Matching

We first define some notation for this task. The subscript k, k ∈ Z+, is used to indicate the frame of laser scans. Sk, Pk and Lk represent the laser scan measurements, point features, and line segment features of frame k, respectively. Let fp(k,i)(x(k,i), y(k,i)) be the i-th feature point in Pk and fl(k,i) be the i-th feature line in Lk. Thus, our scan matching problem can be described as:
Problem: Given two consecutive 2D laser scans Sk-1 and Sk, recover the relative rigid transformation (relative position tk-1 and orientation rk-1) Tk-1 = {rk-1, tk-1} from these two laser scans based on point and line feature correspondences.

3.1. Feature Detection

Keypoint detection: The scale-invariant feature transform (SIFT) [5] is widely adopted in computer vision applications because of its robustness to image scale, rotation, illumination and viewpoint changes. In this stage, we propose an extended 1D SIFT keypoint detector suited to 1D range data (Figure 1a). We extract the keypoints in scale space based only on the raw laser range information.
The SIFT keypoint detector is based on the scale-space theory [23], which is a multiscale signal representation technique. The scale space uses a set of smoothed signals Sig(x, σi) to represent the original signal Sig(x), where σi is the size of the smoothing kernel, whose role is to suppress the fine scale structures. In our task, Sig(x) is the range information Sk(x) of the k-th frame laser scan. The formulation of the scale-space construction is as follows:
Sk(x, σi) = K(x, σi) ∗ Sk(x)    (1)
where K(x, σi) is a scale-space kernel. In the original SIFT, the Gaussian function is adopted because it is the only possible choice for continuous signals, as evidenced by Lindeberg [23]. However, laser range scans are 1D discrete signals. A discrete kernel, maintaining the properties of the Gaussian function, is chosen:
K(x, σi) = e^(−σi) Ix(σi)    (2)
where Ix stands for the modified Bessel function of the first kind of integer order x.
In SIFT, scale selection is performed to make the detected feature points invariant to scale changes. However, the range data of different scans have the same scale, which means that the detected laser feature points are inherently invariant to scale. Thus, instead of extracting local extrema among three scale layers, we regard the local extrema of the Laplacian of the range signal at each scale layer Sk(x, σi) as feature keypoints. The Laplacian is equivalent to the second derivative of Sk(x, σi) for one-dimensional signals. The keypoints Pk are the local peaks of Equation (3):
∇²Sk(x, σi) = Sk(x + 1, σi) + Sk(x − 1, σi) − 2Sk(x, σi)    (3)
For stability, we want to reject unreliable feature points whose local surfaces are roughly parallel to the laser beams (Figure 2a) or the points on the edges of occluded areas (Figure 2b). Point A is an unreliable feature point with low position accuracy because its local normal direction is almost perpendicular to the laser beam. Point C is also rejected because it may be occluded in the next laser scan. For example, if the laser of Figure 2b moves to the right side of the current position, point C will no longer exist. In detail, we calculate the local surface direction of each feature point and its distance to the neighboring points. If the difference between the local surface direction and laser beam is less than 10 degrees or one of the distances to the neighboring points is longer than 1 m, the feature point will be discarded.
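To make the detection stage concrete, the scale-space smoothing with the discrete Bessel kernel and the 1D Laplacian extrema search of Equation (3) can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the scale set, the kernel half-width, and the zero-padded boundary handling are our own assumptions, and the unreliable-point rejection described above is omitted.

```python
import numpy as np
from scipy.special import ive  # ive(n, t) = exp(-t) * I_n(t): the discrete Gaussian kernel


def detect_keypoints(ranges, sigmas=(1.0, 2.0, 4.0), half_width=8):
    """Detect 1D SIFT-like keypoints in a laser range signal.

    Returns sorted indices of local extrema of the discrete Laplacian
    over all scale layers. `sigmas` and `half_width` are illustrative
    choices, not the paper's values.
    """
    ranges = np.asarray(ranges, dtype=float)
    keypoints = set()
    for sigma in sigmas:
        # Discrete Gaussian kernel K(x, sigma) = e^{-sigma} I_x(sigma),
        # truncated to [-half_width, half_width] and renormalized.
        offsets = np.arange(-half_width, half_width + 1)
        kernel = ive(np.abs(offsets), sigma)
        kernel /= kernel.sum()
        smoothed = np.convolve(ranges, kernel, mode='same')  # zero-padded ends
        # 1D Laplacian, Equation (3): S(x+1) + S(x-1) - 2 S(x)
        lap = np.zeros_like(smoothed)
        lap[1:-1] = smoothed[2:] + smoothed[:-2] - 2.0 * smoothed[1:-1]
        # keep local maxima and minima of the Laplacian
        for i in range(1, len(lap) - 1):
            if (lap[i] > lap[i - 1] and lap[i] > lap[i + 1]) or \
               (lap[i] < lap[i - 1] and lap[i] < lap[i + 1]):
                keypoints.add(i)
    return sorted(keypoints)
```

A real implementation would additionally apply the rejection rules above (near-parallel local surfaces, occlusion edges) to the returned indices.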
Keyline detection: Line segments are a very important type of feature for indoor environments. The advantages of using line features are twofold: first, line features are more distinct for indoor scenes and are less sensitive to the noise in the laser range measurements; second, the orientation information can be easily extracted from 2D line correspondences, which can help us reject feature point outliers and provide a good initial guess for our transformation estimation stage. These advantages will be demonstrated in the following sections.
We detect keylines for each laser scan based on a slightly modified Split-and-Merge algorithm [10,11] (Figure 1b). First, consecutive laser points in laser scan Sk are segmented into clusters, so that points in the same cluster tend to belong to the same object. The distance between two consecutive laser points is chosen as the segmentation criterion: two points are classified into the same cluster if their distance is smaller than a threshold. Clusters with too few points are regarded as isolated points and discarded. Then, for each cluster, we fit a line segment and detect the point with the maximum distance to the line. If the maximum distance is smaller than a given threshold, we advance to the next cluster. Otherwise, we split the cluster into two subclusters at the detected point. This process is recursive. When all possible lines are extracted, we calculate their slopes. Most 2D range finders are time-of-flight lidars, so adjacent lines can be easily identified. If two adjacent lines are almost parallel, we compute the distance between their middle points along the normal direction. The two adjacent lines are merged into one single line segment as long as the corresponding middle point distance is very small. Finally, line segments with small length or few points are removed as unreliable keylines. The details are summarized in Algorithm 1.
Algorithm 1. Line Segment Extraction
1  input: a laser scan Sk
2  output: line segments Lk
3  begin
4    segment the laser scan Sk and remove small segments to form clusters Ck;
5    for each cluster ci ∈ Ck do
6      fit a line fl to ci, compute the length dl of fl;
7      remove the line if its length is small;
8      detect the point p with maximum distance dmax to fl;
9      if dmax < τd (τd = 0.1) then
10       add the line segment fl to Lk;
11     else
12       split ci into two subclusters sci1, sci2 at p, then perform Algorithm 1 for each subcluster;
13     end
14   end
15   compute the slope sfl(k,i) of each line segment in Lk, find adjacent line pairs in Lk;
16   for each pair (fl(k,i), fl(k,j)) do
17     if |sfl(k,i) − sfl(k,j)| < τs (τs = 3) then
18       calculate their middle point distance in the normal direction, dm;
19       if dm < τdm (τdm = 0.03) then
20         merge (fl(k,i), fl(k,j)) into a single line, update Lk;
21       end
22     end
23   end
24   return the line segment set Lk;
25 end
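As a minimal illustration, the recursive split phase of Algorithm 1 (lines 5 to 14) can be sketched as follows. The distance threshold τd = 0.1 m is taken from Algorithm 1; the minimum cluster size is our own illustrative parameter, and the merge phase (lines 15 to 23) is omitted for brevity.

```python
import numpy as np


def split_cluster(points, tau_d=0.1, min_pts=5):
    """Recursively split one ordered point cluster into line segments.

    points: (N, 2) array of ordered laser points in one cluster.
    tau_d: maximum point-to-line distance (0.1 m, as in Algorithm 1).
    min_pts: assumed minimum segment size (illustrative).
    Returns a list of (start_idx, end_idx) index pairs, one per segment.
    """
    segments = []

    def recurse(lo, hi):
        if hi - lo + 1 < min_pts:
            return
        p0, p1 = points[lo], points[hi]
        d = p1 - p0
        norm = np.hypot(d[0], d[1])
        if norm < 1e-9:
            return
        # perpendicular distance of each point to the chord p0-p1
        rel = points[lo:hi + 1] - p0
        dists = np.abs(d[0] * rel[:, 1] - d[1] * rel[:, 0]) / norm
        i_max = int(np.argmax(dists))
        if dists[i_max] < tau_d:
            segments.append((lo, hi))       # cluster is one line segment
        else:
            recurse(lo, lo + i_max)          # split at the farthest point
            recurse(lo + i_max, hi)

    recurse(0, len(points) - 1)
    return segments
```

Fitting a proper least-squares line to each accepted index range (rather than using the chord) would follow in the merge phase.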

3.2. Feature Description

In this section, we use a distance histogram to describe point and line features. The distance histogram is different from the 2D gradient histogram because distance is invariant to rotation. Thus, our distance histogram descriptor is inherently invariant to rotation changes.
Keypoint description: For each feature point fp(k,i) ∈ Pk, we search its neighborhood. If the distance between a searched point p(k,j) ∈ Sk and the feature point fp(k,i) is less than the search radius r, the searched point p(k,j) is considered to be a neighbor of fp(k,i). The distances of these neighbors are then formed into an 8-bin histogram. The descriptor of feature point fp(k,i) is the normalized distance histogram.
Keyline description: For each feature line fl(k,i) ∈ Lk, we find three points on the line (see Figure 3). These three points divide the feature line fl(k,i) into four segments of equal length. Each point is then described by an 8-bin normalized distance histogram, just like a feature point. The descriptor of feature line fl(k,i) is a 24-bin normalized distance histogram that simply concatenates the three point descriptors.
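A minimal sketch of the point descriptor, assuming a search radius r of 1 m (an illustrative value, not restated from the paper):

```python
import numpy as np


def point_descriptor(scan_points, feature_idx, radius=1.0, n_bins=8):
    """Normalized 8-bin distance histogram for one feature point.

    scan_points: (N, 2) array of all laser points in the scan.
    feature_idx: index of the feature point within scan_points.
    radius: neighborhood search radius r (1.0 m assumed here).
    """
    fp = scan_points[feature_idx]
    dists = np.linalg.norm(scan_points - fp, axis=1)
    # neighbors: points strictly inside the radius, excluding the point itself
    neigh = dists[(dists > 0) & (dists < radius)]
    hist, _ = np.histogram(neigh, bins=n_bins, range=(0.0, radius))
    total = hist.sum()
    return hist / total if total > 0 else hist.astype(float)
```

The keyline descriptor would simply concatenate three such histograms, one per division point, into a 24-bin vector.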

3.3. Feature Matching

Keyline matching: For two consecutive laser scans Sk−1 and Sk, we detect their feature line segments Lk−1 and Lk. For each feature line fl(k−1,i) ∈ Lk−1, our goal is to search for its correspondence in Lk. We first compute the matching score between fl(k−1,i) and all lines in Lk:
score = ‖dl(fl(k−1,i)) − dl(fl(k,j))‖  s.t.  fl(k,j) ∈ Lk    (4)
where dl(·) stands for the line descriptor; score is the matching score; and ‖·‖ represents the Euclidean distance. The feature line fl(k,j) ∈ Lk that achieves the best matching score is accepted as the potential correspondence of fl(k−1,i). We then check the lengths of fl(k−1,i) and fl(k,j). They are discarded as outliers if their length difference is very large, because correctly corresponding lines in laser scans usually have similar lengths. Figure 4a shows a sample result. As seen, most feature lines are correctly matched. However, there are still some outliers, such as line correspondence 9. Because the relationship between laser scans Sk−1 and Sk is a rigid transformation, the angles between the correct 2D line correspondences are almost the same. Thus, we first cluster the angles of the line correspondences via the histogram technique. The average angle of the largest cluster is regarded as the correct relative rotation between Sk−1 and Sk, formatted as matrix r̄k−1. The correspondences with angles that are inconsistent with r̄k−1 are rejected. We then rotate the line segments in Sk and calculate the middle point distance between the remaining matches. The correspondences with middle point distances that differ too much from the others are also treated as outliers. The others are accepted as inliers (Figure 4b).
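The histogram clustering of line-pair angle differences can be sketched as follows. The 5° bin width is our own illustrative resolution; the paper does not state one here.

```python
import numpy as np


def rotation_from_line_matches(angles_prev, angles_curr, bin_width_deg=5.0):
    """Estimate the relative rotation by histogram-clustering the angle
    differences of matched line pairs, and flag matches inconsistent
    with the dominant cluster.

    angles_prev, angles_curr: line direction angles (rad) of matched pairs.
    Returns (rotation_rad, inlier_mask).
    """
    diffs = np.asarray(angles_curr, float) - np.asarray(angles_prev, float)
    diffs = (diffs + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
    w = np.deg2rad(bin_width_deg)
    bins = np.arange(-np.pi, np.pi + w, w)
    hist, edges = np.histogram(diffs, bins=bins)
    k = int(np.argmax(hist))                         # largest cluster
    in_bin = (diffs >= edges[k]) & (diffs < edges[k + 1])
    rot = float(diffs[in_bin].mean())                # average angle of cluster
    inliers = np.abs(diffs - rot) < w                # consistent with r_bar
    return rot, inliers
```

The surviving matches would then be filtered further by the middle-point distance check described above.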
Keypoint matching: Suppose that we have detected the feature points Pk−1 and Pk of Sk−1 and Sk, respectively. As above, for each feature point fp(k−1,i) ∈ Pk−1, we compute the matching score between fp(k−1,i) and all points in Pk:
score = ‖dp(fp(k−1,i)) − dp(fp(k,j))‖  s.t.  fp(k,j) ∈ Pk    (5)
where dp(·) stands for the feature point descriptor. The feature point fp(k,j) ∈ Pk that achieves the best matching score is accepted as the potential correspondence of fp(k−1,i). As seen in Figure 5a, the points connected by a red line are potential correspondences. There are many false matches in the naive matching results. As we have obtained the rotation r̄k−1 between laser scans Sk−1 and Sk in the last section, we can eliminate the rotation change between these potential correspondences. Thus, there are only translation differences between the transformed matching points. In fact, the correct point correspondences should have almost the same translations. We cluster the translations in the x and y coordinates. The average translation t̄k−1 of the largest cluster in x and y is considered as the true translation between Sk−1 and Sk. The correspondences that are inconsistent with the translation t̄k−1 are then rejected as outliers. The cleaned matching points are shown in Figure 5b.
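The translation clustering step can be sketched in the same spirit as the angle clustering. The 0.1 m bin width is an assumed resolution, and a simple integer-binning scheme stands in for the histogram technique.

```python
import numpy as np


def translation_from_point_matches(pts_prev, pts_curr, rot, bin_width=0.1):
    """After removing the known rotation, cluster the per-match translations
    in x and y; the largest cluster's mean is taken as the true translation.

    pts_prev, pts_curr: (N, 2) matched feature points; rot: rotation (rad).
    bin_width (m) is an illustrative clustering resolution.
    Returns (t_mean, inlier_mask).
    """
    c, s = np.cos(rot), np.sin(rot)
    R = np.array([[c, -s], [s, c]])
    # candidate translation of each match: t_i = p_prev - R p_curr
    trans = pts_prev - pts_curr @ R.T
    inliers = np.ones(len(trans), dtype=bool)
    for axis in (0, 1):                      # cluster x and y independently
        vals = trans[:, axis]
        idx = ((vals - vals.min()) / bin_width).astype(int)
        counts = np.bincount(idx)
        inliers &= (idx == np.argmax(counts))
    return trans[inliers].mean(axis=0), inliers
```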

3.4. Transformation Estimation

Let C(k−1,k) = {(fp(k−1,i) ∈ Pk−1, fp(k,i) ∈ Pk)} be the set of feature point correspondence pairs; the relationship between a pair (fp(k−1,i), fp(k,i)) can be modeled by a rigid transformation:
fp(k−1,i) = rk−1 · fp(k,i) + tk−1    (6)
The number of degrees of freedom of this equation is three, i.e., one for rotation and two for translation. Two correspondence pairs are enough to solve it. Classical approaches usually minimize the following nonlinear least-squares cost function via the Gauss-Newton [24] method:
arg min_{rk−1, tk−1} Σ_{i=1}^{n} ‖pi − p̂i‖₂²    (7)
where pi = fp(k−1,i) represents the observation, and p̂i = rk−1 · fp(k,i) + tk−1 is the calculated value. pi and p̂i are introduced only for compactness of notation. n is the number of pairs in C(k−1,k).
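The paper solves the nonlinear least-squares cost above via Gauss-Newton [24]. As an illustrative alternative, the same l2 cost also admits the well-known SVD-based closed form (the Kabsch/Procrustes solution), sketched here; this is not the authors' solver.

```python
import numpy as np


def fit_rigid_2d(p_prev, p_curr):
    """Least-squares 2D rigid fit: find R, t minimizing
    sum_i || p_prev_i - (R p_curr_i + t) ||^2, via the SVD closed form.

    p_prev, p_curr: (N, 2) arrays of corresponding points (N >= 2).
    Returns (R, t) with R a proper 2x2 rotation matrix.
    """
    mu_p, mu_q = p_prev.mean(axis=0), p_curr.mean(axis=0)
    # cross-covariance of the centered point sets
    H = (p_curr - mu_q).T @ (p_prev - mu_p)
    U, _, Vt = np.linalg.svd(H)
    # guard against a reflection solution
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = mu_p - R @ mu_q
    return R, t
```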
However, these methods are sensitive to outliers. They usually fail to obtain the correct solution when the observations (feature correspondences) are corrupted by outliers. Current solutions usually combine minimal solvers with RANSAC-based [9] schemes to reject outliers in a preprocessing stage. Unfortunately, minimal solvers with RANSAC-based schemes are sensitive to noise. In addition, RANSAC-based methods are very slow.
Outliers cannot be completely avoided in the correspondences C(k−1,k), so a good solver should have the ability to automatically segment the residual vector v = [v1, v2, …, vn] into an inlier set I (vi | |vi| ≈ 0) and an outlier set O (vi | |vi| ≫ 0). The classical least-squares cost is based on the fundamental assumption that the observations follow a normal distribution and are free of outliers; when outliers are present, the results will be skewed. Recently, a sparsity-inducing norm, the lq-norm [25,26,27,28], has shown great potential and can achieve better performance than the l1-norm [29] and l0-norm [30]. Motivated by this, we reformulate the cost function based on the lq-norm metric:
arg min_{rk−1, tk−1} Σ_{i=1}^{n} ‖pi − p̂i‖_q^q    (8)
where ‖·‖q denotes the lq-norm (0 < q < 1).
To optimize this problem, it is formed as an lq-norm penalized least-squares (lqLS) problem, which has been well studied by Marjanovic and Solo [28]. By introducing auxiliary variables M = [ m 1 , m 2 , ... , m n ] into Equation (8), the cost function is rewritten as:
arg min_{rk−1, tk−1, M} Σ_{i=1}^{n} ‖mi‖_q^q  subject to  εi = pi − p̂i − mi = 0    (9)
Using the augmented Lagrangian function, we can reformulate this constrained optimization function into an unconstrained one:
Lρ(rk−1, tk−1, M, Λ) = Σ_{i=1}^{n} ( ‖mi‖_q^q + λiᵀεi + (ρ/2)‖εi‖₂² ) = Σ_{i=1}^{n} ( ‖mi‖_q^q + (ρ/2)‖λi/ρ + εi‖₂² − (1/(2ρ))‖λi‖₂² )    (10)
where Λ = [ λ 1 , λ 2 , ... , λ n ] are Lagrange multipliers or dual variables and ρ > 0 is a penalty parameter.
To simplify this problem, the alternating direction method of multipliers (ADMM) is employed to decompose the function into three subproblems:
prob 1: M^(m+1) := arg min_M Lρ(rk−1^m, tk−1^m, M, Λ^m) = arg min_M Σ_{i=1}^{n} ( ‖mi‖_q^q + (ρ/2)‖δi − mi‖₂² )    (11)
prob 2: (rk−1^(m+1), tk−1^(m+1)) := arg min_{rk−1, tk−1} Lρ(rk−1, tk−1, M^(m+1), Λ^m) = arg min_{rk−1, tk−1} Σ_{i=1}^{n} (ρ/2)‖γi − p̂i‖₂²    (12)
prob 3: λi^(m+1) := λi^m + ρ εi,  i = 1, 2, …, n    (13)
where the superscript m denotes the iteration counter, and δi = λi/ρ + pi − p̂i and γi = λi/ρ + pi − mi are introduced only for compactness of notation. In prob 1, M is the only variable to be estimated, whereas the others are fixed. In prob 2, the rigid transformation Tk−1 = {rk−1, tk−1} is the only variable. The ADMM alternates among these three steps until convergence. Details about augmented Lagrangian methods and ADMM can be found in the literature [31].
prob 1 is an lq-norm penalized least-squares (lqLS) problem. We adopt Marjanovic and Solo's lq cyclic descent (lqCD) algorithm [28] to solve it; more details about the lqLS problem can be found in [28]. prob 2 is a classical least-squares problem, which can be easily solved by DLT. Our lq-norm solver converges to the correct solution within only a few iterations because a very good initial guess has already been obtained in the feature matching stage.
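The ADMM loop (prob 1 to prob 3) can be sketched as follows. This is an illustrative simplification, not the authors' solver: a coarse grid search over the scalar prox of |m|^q stands in for the lqCD algorithm of [28], prob 2 is solved with a standard SVD-based rigid fit rather than DLT, and ρ, q, the iteration count, and the grid resolution are our own assumed settings. The initial R0, t0 are assumed to come from the histogram-clustering stage.

```python
import numpy as np


def _rigid_fit(targets, p_curr):
    """Closed-form 2D least-squares rigid fit (used for prob 2)."""
    mu_t, mu_q = targets.mean(axis=0), p_curr.mean(axis=0)
    H = (p_curr - mu_q).T @ (targets - mu_t)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    return R, mu_t - R @ mu_q


def lq_prox(delta, q=0.5, rho=10.0, n_grid=400):
    """Approximate scalar prox: argmin_m |m|^q + (rho/2)(delta - m)^2.
    A coarse grid search over [0, delta] stands in for the lqCD solver."""
    cand = np.linspace(0.0, delta, n_grid)
    cost = np.abs(cand) ** q + 0.5 * rho * (delta - cand) ** 2
    return float(cand[np.argmin(cost)])


def robust_pose(p_prev, p_curr, R0, t0, q=0.5, rho=10.0, iters=40):
    """ADMM sketch of the lq-norm pose estimation (prob 1 - prob 3)."""
    R, t = R0.copy(), np.asarray(t0, float).copy()
    n = len(p_prev)
    M = np.zeros((n, 2))      # auxiliary variables m_i
    Lam = np.zeros((n, 2))    # dual variables lambda_i
    for _ in range(iters):
        p_hat = p_curr @ R.T + t
        # prob 1: componentwise lq shrinkage on delta_i = lambda_i/rho + p_i - p_hat_i
        delta = Lam / rho + p_prev - p_hat
        M = np.array([[lq_prox(d, q, rho) for d in row] for row in delta])
        # prob 2: rigid fit to gamma_i = lambda_i/rho + p_i - m_i
        gamma = Lam / rho + p_prev - M
        R, t = _rigid_fit(gamma, p_curr)
        # prob 3: dual update with eps_i = p_i - p_hat_i - m_i
        p_hat = p_curr @ R.T + t
        Lam = Lam + rho * (p_prev - p_hat - M)
    return R, t
```

Inlier residuals are driven to m_i = 0 (full weight in the fit), while large outlier residuals are absorbed into m_i, which is what makes the cost robust.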

4. Results

4.1. SLAM System and Datasets

We design a hardware system for indoor mapping and modeling tasks. As shown in Figure 6, this SLAM system consists of three laser scanners, a panoramic camera, and two odometers. The 2D Sick laser scanner has a 360° field of view, a 0.5° point measurement resolution, and a 10 Hz scanning rate. The horizontal laser is used for localization (scan matching), and the other two oblique lasers are used for mapping 3D point clouds. The two odometers can provide a good initial guess for ICP scan matching.
We collect two datasets using this hardware system. The first is a market with an area of 8000 m², through which the cart was pushed at walking pace. The length of the full trajectory is 1137 m, and we sample this dataset into 2200 key laser scans. The second is an underground parking garage with an area of 3200 m²; the length of the trajectory is 357 m, sampled into 1414 key laser scans. In both datasets, the moving speed of the cart is about 1 m/s.
Moreover, we also use two SLAM benchmarking datasets for evaluation [32]. The first was collected at the Intel Research Lab, and the other at the MIT CSAIL Building. Ground truth is provided for both datasets.

4.2. Comparison with State-of-the-Art Methods

For our datasets, we randomly pick 4 laser scan pairs from each dataset and register these scan pairs using PSM, PLICP, ICP without an initial guess (initial guess set to zero, denoted by ICP1), the proposed method, and ICP with a good initial guess (initial guess obtained from the two odometers, denoted by ICP2). The results of ICP2 are regarded as the ground truth. For the benchmarking datasets, we randomly pick 50 laser scan pairs and compare our method with PSM, PLICP and ICP1. ICP1 uses a point-to-point metric with the maximum number of iterations set to 100. The source code of PSM and PLICP is obtained from the authors' websites. The source code of the ICP used in this paper can be found in [33].
Figure 7, Figure 8, Figure 9 and Figure 10 give the visual matching results. As seen, our method achieves similar results to ICP2 or the ground truth. In all plots of our results, ICP2 or the ground truth, the transformed laser scan of Sk (denoted by blue +) covers the first laser scan Sk-1 (denoted by red +). This means that both our method and ICP2 can correctly align these laser scan pairs. PLICP and ICP1 have similar performances; PSM performs better than PLICP and ICP1. However, PSM, PLICP and ICP1 will fail in some cases. These methods are very sensitive to the initial guess, whereas our method is initial free.
Table 1 and Table 2 show the quantitative evaluation results on our datasets. In the tables, tx and ty represent the relative translation along the x-axis and y-axis between a laser scan pair, respectively; rθ represents the relative rotation angle between a scan pair. In this evaluation, we regard the results of ICP2 as the approximate ground truth rigid transformation, and PSM, PLICP, ICP1 and our method are compared against ICP2. From these tables, we can draw similar conclusions as from the visual comparisons. Our estimated transformations are almost the same as those of ICP2: the maximum angle difference between our method and ICP2 is less than 0.01 rad (0.573°), and in most cases the translation differences are smaller than 0.05 m. The maximum translation difference is 0.09 m, for scan pair 3 in dataset 1. We enlarge the visual results of scan pair 3 (see Figure 11) and find that our result is even better than that of ICP2. The success rates of PSM, PLICP and ICP1 are 37.5%, 12.5% and 12.5%, respectively. Comparing ICP1 with ICP2, only the second scan pair is correctly matched; its translation differences are 0.07 m and 0.04 m, which are less accurate than our method. Even for the third scan pair in dataset 2, the translation differences are 0.13 m and 0.13 m, and the rotation difference is up to 0.05 rad (2.865°). ICP1 succeeds only for scan pair 2 of dataset 1, which can be expected because the rotation between this pair is small. Thus, the proposed method is much more reliable when an initial guess is unavailable.
Table 3 gives the quantitative evaluation results on the benchmarking datasets. In this table, ex, ey, and eθ denote the mean absolute errors of the translations and the rotation, computed as follows:
$$
e_x = \frac{1}{n}\sum_{i=1}^{n}\left|t_x^{i} - t_{x,\mathrm{gt}}^{i}\right|,\qquad
e_y = \frac{1}{n}\sum_{i=1}^{n}\left|t_y^{i} - t_{y,\mathrm{gt}}^{i}\right|,\qquad
e_\theta = \frac{1}{n}\sum_{i=1}^{n}\left|t_\theta^{i} - t_{\theta,\mathrm{gt}}^{i}\right|
$$
where txgt and tygt are the ground-truth translations, tθgt is the ground-truth rotation, and n is the total number of scan pairs (n = 50 in this experiment). If both translation errors of a scan pair are smaller than 0.1 m and its rotation error is smaller than 0.03 rad, the estimated transformation matrix is regarded as correct. The success rate is the ratio of correctly matched scan pairs to the total number of scan pairs. The table shows that the translation error of our method is less than 2 cm and that its rotation error is approximately 0.01 rad. These results are much better than those of PSM, PLICP, and ICP1, and the success rate of our method is 100%: it exceeds theirs by 18, 34, and 42 percentage points on the Intel dataset and by 20, 50, and 56 percentage points on the MIT dataset, respectively. PSM, PLICP, and ICP1 are not robust and stable because they need an initial guess.
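Under the definitions above, the error metrics and the success rate can be computed as follows (a sketch; `evaluate` and its array layout are our own, with the thresholds taken from the text):

```python
import numpy as np

def evaluate(est, gt, t_tol=0.1, r_tol=0.03):
    """est, gt: (n, 3) arrays of (tx, ty, t_theta), one row per scan pair.
    Returns the mean absolute errors (ex, ey, e_theta) and the success rate."""
    err = np.abs(np.asarray(est, float) - np.asarray(gt, float))  # per-pair |t - t_gt|
    ex, ey, e_theta = err.mean(axis=0)          # the equation above
    # A pair counts as correct when both translation errors are below
    # t_tol (0.1 m) and the rotation error is below r_tol (0.03 rad).
    ok = (err[:, 0] < t_tol) & (err[:, 1] < t_tol) & (err[:, 2] < r_tol)
    return ex, ey, e_theta, ok.mean()
```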

4.3. Running Time

Scan matching usually serves as the front end of a SLAM system, in which time-consuming methods are unacceptable. We select four scan pairs for evaluation, whose scans contain 180, 360, 720, and 1080 points. All experiments are implemented in Microsoft Visual Studio 2010 and run on a laptop PC with an Intel Core i5-3210M 2.5 GHz CPU and 8 GB of RAM. Note that we rewrote the ICP code in C++ for comparison. The results are given in Table 4.
PSM is the fastest, achieving 100 FPS when each scan contains 720 points. Our method is much faster than PLICP and ICP1 because the numbers of keypoints and keylines are very small; it achieves almost 30 FPS and 20 FPS for 360 and 720 points, respectively. This is sufficient for 2D indoor scan matching because a 2D laser scan contains fewer than 1080 points in most cases.

4.4. Application for SLAM

For each dataset, we first run the proposed scan matching algorithm on all consecutive key laser scan pairs, obtaining their relative positions and orientations Tk-1 = {rk-1, tk-1}. We choose the first key laser scan as the base on which to build a global map; the coordinate system of this global map is the same as that of the first key scan. Let Qk-1 denote the points of the incremental global map accumulated up to the (k-1)-th laser scan. For a new laser scan Sk, we match Sk with Qk-1 via ICP, using the transformation Tk-1 = {rk-1, tk-1} estimated by our method as the initial guess. This process refines Tk-1 and yields more accurate relative positions and orientations, and the global map is updated by fusing in the points of Sk. This map matching technique has an important advantage: it is a global matching method, which largely reduces the drift caused by pairwise scan matching. After all laser scans are processed, a 2D indoor map and a trajectory are produced. Finally, this trajectory (positions and orientations) can be further optimized by global optimization approaches such as G2O [34] or by filtering techniques. In this experiment, we use the trajectory refined by the global map as the initial guess for particle filter SLAM (Gmapping [35]) and obtain a more accurate indoor map. We can also generate 3D point clouds of the indoor scenes based on the other two laser scanners and the optimized trajectory.
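The map matching loop described above can be sketched as follows. Both `match_scan_pair` (our feature-based matcher) and `icp_refine` (ICP against the accumulated map) are placeholders injected by the caller; the paper does not specify these interfaces, so the names and the (tx, ty, theta) pose layout are our own:

```python
import numpy as np

def compose(T_a, T_b):
    """Chain two 2D rigid transforms T = (tx, ty, theta): apply T_b, then T_a."""
    tx, ty, th = T_a
    c, s = np.cos(th), np.sin(th)
    bx, by, bth = T_b
    return (tx + c * bx - s * by, ty + s * bx + c * by, th + bth)

def build_map(scans, match_scan_pair, icp_refine):
    pose = (0.0, 0.0, 0.0)        # the first key scan defines the map frame
    global_map = [scans[0]]       # points accumulated so far (Q_1)
    trajectory = [pose]
    for prev, cur in zip(scans, scans[1:]):
        T_rel = match_scan_pair(prev, cur)         # initial guess from our matcher
        guess = compose(pose, T_rel)
        pose = icp_refine(global_map, cur, guess)  # refine against the global map
        global_map.append(cur)    # fuse the new scan into the map
        trajectory.append(pose)
    return trajectory, global_map
```

With an identity refinement step, the trajectory simply accumulates the pairwise estimates; in practice, the drift of that accumulation is exactly what the ICP-against-map refinement reduces.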
Figure 12 and Figure 13 show the indoor mapping results of dataset 1 and dataset 2, respectively. A smooth trajectory is achieved by combining our scan matching approach with the map matching technique. Because the cart moves at a near-constant speed, abrupt changes in the trajectory would indicate false matches; no such changes appear in the results. The unrefined trajectory has a shape similar to the globally refined one, and its local accuracy is close to that of the global trajectory. The proposed method drifts over time, which no scan matching method can avoid. Panels (b) of Figure 12 and Figure 13 show that the 2D point clouds have very high local accuracy: their thickness is less than 0.1 m in most cases. The point clouds represent the outlines of indoor walls, whose thickness is almost zero in the ideal case; however, the range noise of the Sick laser scanner is 0.02 m, and localization error cannot be avoided. Thus, a trajectory that can construct a map with 0.1 m local accuracy is sufficient to provide a good initial guess for global refinement methods such as Gmapping. Panels (c) of Figure 12 and Figure 13 verify this conclusion: the thickness of the point clouds is less than 0.08 m in all cases, there is no "ghosting" in the maps, and the walls are straight. The constructed 2D maps are consistent with the real scenes. Figure 14 gives the final indoor maps of the Intel and MIT datasets, and Figure 15 shows the 3D point clouds of dataset 1 reconstructed by the other two oblique lasers.

4.5. Limitations

Some problems in our algorithm remain to be resolved. First, our method is unsuitable for outdoor scenes, which are more complicated than indoor environments: they contain many moving objects and few line segments, both of which affect our method. This is an inherent drawback of our scan matching. Second, our scan matching method has only three degrees of freedom, so it will fail if the floor of the indoor scene is not planar. Thus, to build a map of a multi-floor indoor environment, we must perform the process for each floor and then register all floors manually; this is a common drawback of 2D scan matching methods. In addition, our method relies heavily on the feature detection stage: it needs at least one correct line correspondence and two correct point correspondences. If a laser scan pair yields no line correspondences, we discard one scan and obtain another to build a new pair (the process of key laser scan selection); alternatively, we can use only point correspondences for scan matching. If there are few 1D SIFT feature points, we can add other types of feature points, such as normal-based feature points, curve-based feature points, and line intersection points.

5. Conclusions

In this paper, we propose a 2D laser scan matching method based on point and line features that does not require an initial guess for the transformation. We carefully design a framework for the detection of point and line feature correspondences: feature points are detected based on an extended 1D SIFT, and line features are extracted via a modified Split-and-Merge algorithm. Line segments are a very important type of feature for indoor environments, and using them has two advantages. First, line features are more distinctive in indoor scenes and are less sensitive to noise in the laser range measurements; second, orientation information can be easily extracted from 2D line correspondences, which helps us reject feature point outliers and provides a good initial guess for the transformation estimation stage. The point and line features are then described by a distance histogram, and the histogram cluster technique filters outliers and provides an accurate initial value of the rigid transformation. We also propose a new relative pose estimation method that is robust to outliers. Unlike the conventional RANSAC-based strategy, our pose estimation algorithm has no gross error detection stage, and it is more robust to noise than the RANSAC-based one. Extensive experiments on real data demonstrate that the proposed method is almost as accurate as ICP variants while being initial free. We also show that our scan matching method can be integrated into a simultaneous localization and mapping (SLAM) system for indoor mapping. The main limitation of our method is its three degrees of freedom; our future work will extend the proposed method to six degrees of freedom.

Acknowledgments

The authors would like to express their gratitude to the editors and the reviewers for their constructive and helpful comments, which substantially improved this paper. This work is supported by the National Natural Science Foundation of China (Nos. 41271452 and 41371434).

Author Contributions

Jiayuan Li wrote the main program and most of the paper; Ruofei Zhong designed the experiments; Qingwu Hu conceived the study; Mingyao Ai gave important advice on the writing and polished the language.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
  2. Chen, Y.; Medioni, G. Object modelling by registration of multiple range images. Image Vis. Comput. 1992, 10, 145–155.
  3. Magnusson, M.; Lilienthal, A.; Duckett, T. Scan registration for autonomous mining vehicles using 3D-NDT. J. Field Robot. 2007, 24, 803–827.
  4. Censi, A. An ICP Variant Using a Point-to-Line Metric. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008.
  5. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  6. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359.
  7. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An Efficient Alternative to SIFT or SURF. In Proceedings of the 2011 IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011.
  8. Leutenegger, S.; Chli, M.; Siegwart, R.Y. BRISK: Binary Robust Invariant Scalable Keypoints. In Proceedings of the 2011 IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011.
  9. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
  10. Einsele, T. Real-Time Self-Localization in Unknown Indoor Environment Using a Panorama Laser Range Finder. In Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robots and Systems, Grenoble, France, 11 September 1997.
  11. Nguyen, V.; Martinelli, A.; Tomatis, N.; Siegwart, R. A Comparison of Line Extraction Algorithms Using 2D Laser Rangefinder for Indoor Mobile Robotics. In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005.
  12. Lu, F.; Milios, E.E. Robot Pose Estimation in Unknown Environments by Matching 2D Range Scans. In Proceedings of the 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 21–23 June 1994.
  13. Press, W.H.; Teukolsky, S.A.; Vetterling, W.T.; Flannery, B.P. Numerical Recipes in C; Cambridge University Press: Cambridge, UK, 1996.
  14. Minguez, J.; Lamiraux, F.; Montesano, L. Metric-Based Scan Matching Algorithms for Mobile Robot Displacement Estimation. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005.
  15. Diosi, A.; Kleeman, L. Laser Scan Matching in Polar Coordinates with Application to SLAM. In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005.
  16. Diosi, A.; Kleeman, L. Fast laser scan matching using polar coordinates. Int. J. Robot. Res. 2007, 26, 1125–1153.
  17. Hong, S.; Ko, H.; Kim, J. VICP: Velocity Updating Iterative Closest Point Algorithm. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA), Anchorage, AK, USA, 3–8 May 2010.
  18. Montesano, L.; Minguez, J.; Montano, L. Probabilistic Scan Matching for Motion Estimation in Unstructured Environments. In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005.
  19. Olson, E.B. Real-Time Correlative Scan Matching. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009.
  20. Ramos, F.T.; Fox, D.; Durrant-Whyte, H.F. CRF-Matching: Conditional Random Fields for Feature-Based Scan Matching. In Proceedings of Robotics: Science and Systems, Atlanta, GA, USA, 27–30 June 2007.
  21. Tipaldi, G.D.; Arras, K.O. FLIRT: Interest Regions for 2D Range Data. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA), Anchorage, AK, USA, 3–8 May 2010.
  22. Tipaldi, G.D.; Braun, M.; Arras, K.O. FLIRT: Interest Regions for 2D Range Data with Applications to Robot Navigation. In Proceedings of Experimental Robotics, Marrakech and Essaouira, Morocco, 15–18 June 2014.
  23. Lindeberg, T. Scale-space theory: A basic tool for analyzing structures at different scales. J. Appl. Stat. 1994, 21, 225–270.
  24. Lowe, D.G. Fitting parameterized three-dimensional models to images. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 441–450.
  25. Marjanovic, G.; Solo, V. On lq optimization and matrix completion. IEEE Trans. Signal Process. 2012, 60, 5714–5724.
  26. Marjanovic, G.; Solo, V. lq Matrix Completion. In Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, 25–30 March 2012.
  27. Marjanovic, G.; Hero, A.O. On lq Estimation of Sparse Inverse Covariance. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014.
  28. Marjanovic, G.; Solo, V. Sparsity penalized linear regression with cyclic descent. IEEE Trans. Signal Process. 2014, 62, 1464–1475.
  29. Mazumder, R.; Hastie, T.; Tibshirani, R. Spectral regularization algorithms for learning large incomplete matrices. J. Mach. Learn. Res. 2010, 11, 2287–2322.
  30. Mohimani, G.H.; Babaie-Zadeh, M.; Jutten, C. Fast sparse representation based on smoothed ℓ0 norm. In Independent Component Analysis and Signal Separation; Springer: Berlin, Germany, 2007; pp. 389–396.
  31. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
  32. SLAM Benchmarking Dataset. Available online: http://kaspar.informatik.uni-freiburg.de/~slamEvaluation/datasets.php (accessed on 17 June 2016).
  33. ICP. Available online: http://www.mathworks.com/matlabcentral/fileexchange/27804-iterative-closest-point (accessed on 14 January 2016).
  34. Kümmerle, R.; Grisetti, G.; Strasdat, H.; Konolige, K.; Burgard, W. g2o: A General Framework for Graph Optimization. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011.
  35. Grisetti, G.; Stachniss, C.; Burgard, W. Improved techniques for grid mapping with Rao-Blackwellized particle filters. IEEE Trans. Robot. 2007, 23, 34–46.
Figure 1. Point and line features detected from a laser scan. (a) feature point detection; (b) feature line detection.
Figure 2. The black solid lines are the local surface patches of feature points; the red dotted lines are the laser beams; the green dotted lines represent the occluded region. (a) The direction of point A’s local surface patch is almost parallel to the laser beam, so the location accuracy of point A is very limited; (b) There is a gap between point C and point D, so point C may be occluded in the next scan. Point A and point C are both rejected as unreliable feature points in our approach.
Figure 3. Illustration of constructing our line descriptor.
Figure 4. Feature line matching. (a) Line correspondences with outliers; (b) line matching result after outlier removal.
Figure 5. Feature point matching. (a) Point correspondences with outliers; (b) point matching result after outlier removal.
Figure 6. Our SLAM system. It consists of three laser scanners, a panoramic camera, and two odometers.
Figure 7. Some scan matching results of dataset 1. The first row is the results of PSM; the second row is the results of PLICP; the third row is the results of ICP without an initial guess (ICP1); the fourth row is our scan matching results; the last row is the results of ICP with a good initial guess (ICP2). In all of these plots, the points denoted by red +, black +, and blue + represent the first laser scan Sk-1, the second laser scan Sk, and the transformed laser scan of Sk by using the estimated transformation matrix, respectively.
Figure 8. Some scan matching results of dataset 2. The first row is the results of PSM; the second row is the results of PLICP; the third row is the results of ICP without an initial guess (ICP1); the fourth row is our scan matching results; the last row is the results of ICP with a good initial guess (ICP2). In all of these plots, the points denoted by red +, black +, and blue + represent the first laser scan Sk-1, the second laser scan Sk, and the transformed laser scan of Sk by using the estimated transformation matrix, respectively.
Figure 9. Some scan matching results of the Intel dataset. The first row is the results of PSM; the second row is the results of PLICP; the third row is the results of ICP without an initial guess (ICP1); the fourth row is our scan matching results; the last row is the ground truth. In all these plots, the points denoted by red +, black +, and blue + represent the first laser scan Sk-1, the second laser scan Sk, and the transformed laser scan of Sk by using the estimated transformation matrix, respectively.
Figure 10. Some scan matching results of the MIT dataset. The first row is the results of PSM; the second row is the results of PLICP; the third row is the results of ICP without an initial guess (ICP1); the fourth row is our scan matching results; the last row is the ground truth. In all these plots, the points denoted by red +, black +, and blue + represent the first laser scan Sk-1, the second laser scan Sk, and the transformed laser scan of Sk by using the estimated transformation matrix, respectively.
Figure 11. The enlarged result of scan pair 3 of dataset 1. (a) The green box is the enlarged region; (b) the enlarged results of our method; (c) the enlarged results of ICP2.
Figure 12. (a) Trajectory comparison: the red line is the trajectory of our method before Gmapping refinement, and the white line is the result after refinement; (b) 2D map built from the trajectory without refinement; (c) 2D map built from the trajectory with Gmapping refinement.
Figure 13. (a) Trajectory comparison: the red line is the trajectory of our method before Gmapping refinement, and the white line is the result after refinement; (b) 2D map built from the trajectory without refinement; (c) 2D map built from the trajectory with Gmapping refinement.
Figure 14. The final results of Intel and MIT datasets. (Left) Intel; (right) MIT.
Figure 15. The 3D point clouds of dataset 1. (a) 3D point clouds; (b) Point clouds after cutting off the roof.
Table 1. Quantitative evaluation on dataset 1. Columns are grouped by scan pair; each group lists tx/m, ty/m, and rθ/rad.

| Methods | Pair 1 tx/m | ty/m | rθ/rad | Pair 2 tx/m | ty/m | rθ/rad | Pair 3 tx/m | ty/m | rθ/rad | Pair 4 tx/m | ty/m | rθ/rad |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PSM | 1.22 | 0.55 | 0.84 | 0.51 | 0.09 | 0.11 | −1 | −0.62 | 1 | −0.29 | 0.57 | 0.33 |
| PLICP | 1.03 | 0.61 | 0.13 | 0.16 | 0 | 0.07 | 0.56 | −0.08 | 0.08 | 0.01 | −0.25 | 0 |
| ICP1 | 6.94 | −2.76 | 0.82 | 1.46 | 0.13 | 0.07 | 7.63 | 3.76 | −0.3 | 1.84 | −0.27 | −0.05 |
| Ours | 1.21 | 0.58 | 0.83 | 1.55 | 0.05 | 0.07 | 1.9 | 0.64 | 0.79 | 2.08 | 0.31 | 0.3 |
| ICP2 | 1.24 | 0.55 | 0.84 | 1.53 | 0.09 | 0.07 | 1.85 | 0.55 | 0.8 | 2.07 | 0.31 | 0.3 |
| d(PSM–ICP2) | 0 | 0 | 0 | −1.06 | 0 | 0.04 | −2.87 | −1.28 | 0.2 | −2.41 | 0.24 | 0.02 |
| d(PLICP–ICP2) | −0.21 | 0.06 | −0.71 | −1.37 | −0.09 | 0 | −1.29 | −0.63 | −0.72 | −2.06 | −0.56 | −0.3 |
| d(ICP1–ICP2) | 5.7 | −3.31 | −0.02 | −0.07 | 0.04 | 0 | 5.78 | 3.21 | −1.1 | −0.23 | −0.58 | −0.35 |
| d(ours–ICP2) | −0.03 | 0.03 | −0.01 | 0.02 | −0.04 | 0 | 0.05 | 0.09 | −0.01 | 0.01 | 0 | 0 |
Table 2. Quantitative evaluation on dataset 2. Columns are grouped by scan pair; each group lists tx/m, ty/m, and rθ/rad.

| Methods | Pair 1 tx/m | ty/m | rθ/rad | Pair 2 tx/m | ty/m | rθ/rad | Pair 3 tx/m | ty/m | rθ/rad | Pair 4 tx/m | ty/m | rθ/rad |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PSM | 0.68 | 0.1 | 0.45 | 1.34 | 0.55 | 0.59 | −0.56 | −0.78 | 0.31 | −1.36 | −1.91 | 0.46 |
| PLICP | 0.5 | 1.46 | 0.04 | −0.62 | −0.05 | 0.02 | 1.85 | 0.22 | 0.21 | 0.42 | −0.62 | 0.54 |
| ICP1 | 1.19 | 0.12 | 0.43 | 2.45 | 1.07 | 0.52 | 1.98 | 0.34 | 0.17 | 1.87 | 0.51 | 0.4 |
| Ours | 0.65 | 0.07 | 0.45 | 1.34 | 0.54 | 0.58 | 1.85 | 0.25 | 0.22 | 1.94 | 0.55 | 0.54 |
| ICP2 | 0.64 | 0.08 | 0.44 | 1.37 | 0.55 | 0.58 | 1.85 | 0.21 | 0.22 | 1.96 | 0.55 | 0.54 |
| d(PSM–ICP2) | 0.04 | 0.02 | 0.01 | −0.03 | 0 | 0.01 | −2.41 | −0.99 | 0.09 | −3.32 | −2.46 | −0.08 |
| d(PLICP–ICP2) | −0.14 | 1.38 | −0.4 | −1.99 | −0.6 | −0.56 | 0 | 0.01 | −0.01 | −1.54 | −1.17 | 0 |
| d(ICP1–ICP2) | 0.55 | 0.04 | −0.01 | 1.08 | 0.52 | −0.06 | 0.13 | 0.13 | −0.05 | −0.09 | −0.04 | −0.14 |
| d(ours–ICP2) | 0.01 | −0.01 | 0.01 | −0.03 | −0.01 | 0 | 0 | 0.04 | 0 | −0.02 | 0 | 0 |
Table 3. Quantitative evaluation on the Intel and MIT datasets.

| Methods | ex/m (Intel) | ey/m (Intel) | eθ/rad (Intel) | Success Rate (Intel) | ex/m (MIT) | ey/m (MIT) | eθ/rad (MIT) | Success Rate (MIT) |
|---|---|---|---|---|---|---|---|---|
| PSM | 0.06 | 0.22 | 0.08 | 82% | 0.28 | 0.09 | 0.13 | 80% |
| PLICP | 0.17 | 0.22 | 0.39 | 66% | 0.53 | 0.29 | 0.26 | 50% |
| ICP1 | 0.44 | 0.36 | 0.41 | 58% | 0.58 | 0.27 | 0.25 | 44% |
| Ours | 0.02 | 0.01 | 0.01 | 100% | 0.01 | 0.02 | 0.01 | 100% |
Table 4. Computation time versus the number of points per scan.

| Methods | 180 points (ms) | 360 points (ms) | 720 points (ms) | 1080 points (ms) |
|---|---|---|---|---|
| PSM | 2 | 5 | 10 | 14 |
| PLICP | 32 | 96 | 252 | 364 |
| ICP1 | 37 | 117 | 286 | 372 |
| Ours | 15 | 34 | 57 | 92 |
