Fast and Automatic Registration of Terrestrial Point Clouds Using 2D Line Features

Point cloud registration, the first step in the use of point cloud data, has attracted increasing attention. In order to obtain the entire point cloud of a scene, the registration of point clouds from multiple views is necessary. In this paper, we propose an automatic method for the coarse registration of point clouds. The 2D lines are first extracted from the two point clouds being matched. Then, the line correspondences are established and the 2D transformation is calculated. Finally, a method is developed to calculate the displacement along the z-axis. With the 2D transformation and the displacement, the 3D transformation can be easily obtained, and the two point clouds are aligned together. The experimental results demonstrate that our method achieves high-precision registration and is computationally very efficient. In the experimental results obtained by our method, the largest rotation error is 0.5219°, and the largest horizontal and vertical errors are 0.2319 m and 0.0119 m, respectively. The largest total computation time is only 713.4647 s.


Introduction
Terrestrial laser scanning technology has been extensively used in many practical applications, such as the reconstruction of scenes [1,2], deformation monitoring [3][4][5][6], cultural heritage management [7,8], and the measurement of exploitative volumes over open-pit mines [9]. Due to the limited field of view of a laser scanner, the point cloud of one view is not enough to describe the complete scene. Therefore, a process to register the point clouds of all the views into a common coordinate system is necessary. This process includes two steps: coarse and fine registration. For scenes that contain man-made objects (e.g., buildings), there are many line and plane features, which are useful for point cloud registration. In this paper, our goal is to perform the coarse registration based on 2D line features. The fine registration is usually carried out by the iterative closest point (ICP) algorithm [10] or its variants [11,12]. A large number of methods have been developed for the coarse registration of point clouds. These methods calculate the transformation parameters by extracting different geometric features, including points, lines, planes and specific objects.
The point-based methods are the most common. Numerous methods first extract keypoints from the two point clouds to be matched and calculate local shape descriptors for these keypoints. The correspondences between keypoints from different point clouds are then established by comparing descriptor similarity. In [13], the fast point feature histogram (FPFH) descriptor was proposed, and a sample consensus method was developed to select the likely correct correspondences. In [14], the authors designed a new descriptor called the quintuple local coordinate image (QLCI). For scenes with man-made objects, there are plenty of line and plane features, so extracting line and plane features to perform the registration of point clouds is a good choice. However, most existing line/plane-based methods extract 3D lines and/or 3D planes. This means that these methods process the point cloud data in 3D space, and as a result, the extraction of 3D lines and/or 3D planes is rather time-consuming. In this paper, we first propose a method to extract 2D line features. A method is then developed to find the correct line correspondences and calculate the 2D transformation from them. Owing to this dimensionality reduction, the proposed registration method processes the point clouds in 2D space, which can largely reduce the processing time [34]. In our method, only two pairs of line correspondences are enough to register the point clouds. In addition, we treat the 2D lines as infinite lines rather than line segments and do not use the lengths of the lines to facilitate the search for candidate line correspondences. Hence, even when the corresponding lines are only partly scanned, they can still be used to calculate the 2D transformation. The terrestrial laser scanner is usually leveled, so the scanned point clouds are vertically aligned, and there is no rotation around the x and y axes.
Therefore, with the 2D transformation, we can easily obtain the 3D transformation. The main contributions of our work are summarized as follows: (1) A 2D line feature extraction method is proposed; because the point clouds are processed in 2D space, the method is computationally efficient. (2) A method is formulated to search for the line correspondences and calculate the 2D transformation. (3) A method is developed to calculate the displacement along the z-axis. (4) Finally, a registration method based on the 2D line features is presented. Owing to the use of 2D line features, the registration method has relatively good time efficiency.
The remainder of this paper is organized as follows. In Section 2, the 2D line extraction method is given in detail. The coarse registration method based on the 2D line features is presented in detail in Section 3. Section 4 shows the experimental results and analysis. The conclusions are discussed in Section 5.

2D Line Extraction
In our registration method, the 2D line extraction is the first task. To obtain the 2D line features, the points lying on the 2D lines need to be extracted first. The extraction process is shown in Figure 1. The original point cloud is first projected onto the horizontal plane. The points on a facade are projected onto a line and therefore have high point density in 2D space. The density of projected points (DoPP) method [35] was proposed to extract the points on 2D lines. In this method, the 2D point cloud is divided into regular mesh grids, and the number of points in a grid is treated as the density value of that grid. In order to extract only the points on the 2D lines, the grid size must be set to a very small value, which means that the number of grids is very large and the method is very time-consuming. Instead, we count the number of neighboring points around each point and regard it as the density value of that point. The neighborhood radius is set to a small value (0.3pr in this paper, where pr denotes the point cloud resolution). The points with a density higher than 5 are extracted from the 2D point cloud; the extraction result is shown in Figure 1c. This method is significantly faster than the DoPP method. The extracted points are denoted as point cloud P_Line. As we can see, P_Line is highly dense, so we simplify it to further reduce the number of points and thus the computation cost. In order not to lose any regions, the point cloud is simplified according to spatial distance, with the sampling interval set to pr so that it has the same resolution as the original point cloud. The simplified 2D point cloud is denoted as P_Line-s. Then, the 2D line features are extracted from P_Line-s by a region growing method.
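The density-based point extraction described above can be sketched as follows. This is an illustrative Python sketch, not the authors' Matlab implementation; the function name and the use of a k-d tree for neighbor counting are our own assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def extract_facade_points(points_3d, pr, radius_factor=0.3, min_density=5):
    """Project a 3D point cloud onto the horizontal plane and keep only
    the projected points whose 2D neighborhood density exceeds a threshold.
    pr is the point cloud resolution; the radius and density values follow
    the paper (0.3*pr and 5)."""
    pts2d = points_3d[:, :2]                 # drop z: project onto the xy plane
    tree = cKDTree(pts2d)
    # Count neighbors within 0.3*pr of each projected point (the query
    # point itself is returned by query_ball_point, so subtract 1).
    counts = np.array([len(tree.query_ball_point(p, radius_factor * pr)) - 1
                       for p in pts2d])
    return pts2d[counts > min_density]
```

Points on a vertical facade stack up at the same projected location, so their 2D density is high even with the small 0.3pr radius, while ground and clutter points remain sparse and are discarded.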
For 3D plane extraction, the seed region is often grown according to the angle between the normal vectors of the current seed point and its neighboring point [36]. However, we find that this angle criterion is not suitable for 2D line extraction, because points that are merely close to a line (e.g., the points in the red circle in Figure 1c) would be segmented to the line, which degrades the accuracy of line fitting. In [37], the distance from the neighboring point to the plane is used to grow the seed region for 3D plane extraction. This distance indicator is suitable for 2D line extraction and is therefore used in our method; here, the distance denotes the distance from the neighboring point to the line. We extend the point-based region growing method in [37] to extract the 2D line features and make some improvements. First, we calculate the fitting residual of the local neighborhood of each point by performing singular value decomposition on the neighborhood. The fitting residual is defined as the ratio between the small singular value and the big one. The point p_min with the minimum fitting residual is selected, and its local neighborhood N(p_min) is treated as the initial seed region SG. The points in the neighborhood are also removed from the point cloud. The seed region SG is then iteratively extended by adding its neighbors. If the distance from a neighbor p_k to the line fitted by SG ∪ p_k is smaller than a threshold δ (0.5pr in this paper), the neighbor is added into the seed region and removed from the point cloud. The line is also fitted by singular value decomposition. In this iterative process, we use the newly added points AP rather than the whole seed region to search for the neighbors N(AP), in order to increase the computational efficiency; the initial added point set AP is set to the local neighborhood N(p_min). The process stops when no point can be added into the seed region.
The obtained line l_k at the last iteration is preserved in the line set L. Then, the next point with the minimum fitting residual is selected. In [37,38], the algorithms end when the point cloud P is empty. By contrast, we end the algorithm when the minimum fitting residual of the currently selected point is bigger than a threshold ε (0.01 in this paper). This is because the points with large fitting residuals are cluttered points that do not lie on lines. The 2D line extraction is shown in detail in Algorithm 1 (see Appendix A). Note that in this algorithm, the input point cloud P is the simplified 2D point cloud.
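As a concrete illustration of the procedure above, the following Python sketch implements a simplified variant of the region growing. The neighborhood size k, the candidate search radius, and the helper names are our assumptions; the paper's Algorithm 1 should be consulted for the exact logic.

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_line(pts):
    """Fit a 2D line by SVD: returns centroid, unit direction, and the
    fitting residual (ratio of the small singular value to the big one)."""
    c = pts.mean(axis=0)
    _, s, vt = np.linalg.svd(pts - c)
    return c, vt[0], (s[1] / s[0] if s[0] > 0 else 0.0)

def point_line_dist(p, c, d):
    v = p - c
    return abs(v[0] * d[1] - v[1] * d[0])   # perpendicular distance to the line

def extract_2d_lines(points, pr, k=8, eps=0.01):
    """Region growing on a simplified 2D point cloud."""
    delta = 0.5 * pr                        # distance threshold from the paper
    tree = cKDTree(points)
    _, knn = tree.query(points, k=k)
    residuals = np.array([fit_line(points[idx])[2] for idx in knn])
    available = np.ones(len(points), dtype=bool)
    lines = []
    for seed in np.argsort(residuals):      # smallest fitting residual first
        if residuals[seed] > eps:           # remaining points are clutter: stop
            break
        if not available[seed]:
            continue
        region = [i for i in knn[seed] if available[i]]  # initial seed region SG
        if len(region) < 2:
            available[seed] = False
            continue
        for i in region:
            available[i] = False
        newly_added = list(region)          # AP: only new points are searched
        while newly_added:
            c, d, _ = fit_line(points[region])
            cand = set()
            for i in newly_added:
                cand.update(j for j in tree.query_ball_point(points[i], 2 * pr)
                            if available[j])
            newly_added = [j for j in cand if point_line_dist(points[j], c, d) < delta]
            for j in newly_added:
                available[j] = False
            region += newly_added
        lines.append(fit_line(points[region])[:2])   # store centroid + direction
    return lines
```

Because candidates are gathered only around the newly added points, each growth step touches a small frontier rather than the whole seed region, matching the efficiency argument in the text.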

Point Cloud Registration with 2D Line Features
After the 2D line extraction, we need to identify the line correspondences. Then the 2D transformation is calculated according to the line correspondences. Afterward, we need to calculate the displacement along the z-axis. Finally, the 3D transformation is obtained. The outline of our registration method is shown in Figure 2.

2D Transformation Calculation
Assume that we have two point clouds: a source point cloud P_s and a target point cloud P_t; the source point cloud will be aligned to the target point cloud. The two line sets extracted from the two point clouds are denoted as L_s = {l_s^1, l_s^2, ..., l_s^n1} and L_t = {l_t^1, l_t^2, ..., l_t^n2}. Because the computed normals of the lines are ambiguous in sign, we add the opposite lines into L_t; that is, the line set becomes L_t = {l_t^1, ..., l_t^n2, −l_t^1, ..., −l_t^n2}. Then we need to find the corresponding lines between L_s and L_t. Random sample consensus (RANSAC) [39] is the most frequently used method; however, its number of iterations is very difficult to determine. Considering that the number of extracted 2D lines is generally small, we choose to traverse all the candidate line correspondences. Of course, if the number of extracted 2D lines is large, a RANSAC-like algorithm can also be applied. The cosine of the angle between the normals of two lines is used to search for candidate line correspondences. To avoid repeatedly calculating the cosine values during the traversal, we compute the cosine values of all line pairs in the two line sets beforehand. Any two lines in L_s can form a pair c_s = {l_s^i, l_s^j}. If the absolute cosine value of c_s satisfies

abs(cos<l_s^i, l_s^j>) < cos(10°), (1)

the pair c_s and its cosine value are preserved. The aim is to eliminate pairs formed by two parallel lines, because two parallel lines cannot be used to compute the 2D transformation. The same operation is applied to L_t. The preserved pairs and their cosine values for the line set L_s are denoted as C_s = {c_s^1, c_s^2, ..., c_s^m1} and COS_s = {cos_s^1, cos_s^2, ..., cos_s^m1}. Similarly, the preserved pairs and their cosine values for the line set L_t are denoted as C_t = {c_t^1, c_t^2, ..., c_t^m2} and COS_t = {cos_t^1, cos_t^2, ..., cos_t^m2}. We traverse all the pairs in C_s and C_t to identify the correct line correspondences.
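The pair construction and parallel-pair filtering can be sketched as follows. This is an illustrative Python sketch; the function name is our own.

```python
import numpy as np
from itertools import combinations

def build_line_pairs(normals, angle_deg=10.0):
    """Form all pairs of lines whose unit normals satisfy
    |cos<n_i, n_j>| < cos(10 deg), i.e. discard near-parallel pairs,
    and cache the cosine value of every preserved pair."""
    thresh = np.cos(np.radians(angle_deg))
    pairs, cosines = [], []
    for i, j in combinations(range(len(normals)), 2):
        c = float(np.dot(normals[i], normals[j]))
        if abs(c) < thresh:                 # near-parallel pairs are eliminated
            pairs.append((i, j))            # (j, i) is deliberately not added
            cosines.append(c)
    return pairs, cosines
```

Storing only one ordering of each pair halves the number of pairs, which keeps the later exhaustive traversal cheap.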
For each pair c_s^e in C_s, its corresponding pair is searched for in C_t. If the difference between their cosine values satisfies abs(cos_s^e − cos_t^f) < λ, the pair c_t^f is considered a possible match of c_s^e. The threshold λ is experimentally set to 0.2pr. The two pairs c_s^e = {l_s^i, l_s^j} and c_t^f = {l_t^p, l_t^q} are then used to calculate the 2D transformation.
There are two matching configurations between c_s^e and c_t^f: the line correspondences can be (l_s^i, l_t^p) and (l_s^j, l_t^q), but they can also be (l_s^i, l_t^q) and (l_s^j, l_t^p). Therefore, we calculate two 2D transformations and preserve the one that generates the larger overlap between L_s and L_t. The overlap is defined as

overlap = (the number of correct line correspondences) / min(n1, n2). (2)

Note that when we generate the pair set C_s, {l_s^i, l_s^j} is a pair, but {l_s^j, l_s^i} does not exist in the pair set, because including both would increase the number of pairs and thus the computation time of the traversal; likewise for C_t, {l_t^p, l_t^q} is a pair, but {l_t^q, l_t^p} does not exist. Taking the correspondences (l_s^i, l_t^p) and (l_s^j, l_t^q) as an example, let each 2D line be represented by a unit normal n and an offset d such that n·x = d for every point x on the line. The rotation r maps the source normals onto the target normals, so the rotation angle θ is determined from n_s^i and n_t^p, and the 2D rotation matrix is r = [cos θ, −sin θ; sin θ, cos θ]. For a point x on a source line, n_t·(r x) = n_s·x = d_s, so the 2D translation vector t is obtained by solving the two linear equations n_t^p·t = d_t^p − d_s^i and n_t^q·t = d_t^q − d_s^j. After all the pairs in C_s and C_t are traversed and the 2D transformations are calculated, the 2D transformation with the biggest overlap between L_s and L_t is output. The calculation of the 2D transformation is illustrated in detail in Algorithm 2 (see Appendix B).
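Representing each 2D line by a unit normal n and an offset d with n·x = d (an assumed parameterization; the paper's exact formulation is in Algorithm 2), the 2D transformation from one matching configuration can be sketched as:

```python
import numpy as np

def transform_from_two_lines(src, tgt):
    """src and tgt each hold two corresponding lines (n, d) with unit
    normal n and n . x = d. Returns (r, t) with x_t = r @ x_s + t."""
    (ns1, ds1), (ns2, ds2) = src
    (nt1, dt1), (nt2, dt2) = tgt
    # Rotation angle that takes the source normal onto the target normal.
    cross = ns1[0] * nt1[1] - ns1[1] * nt1[0]
    theta = np.arctan2(cross, float(np.dot(ns1, nt1)))
    r = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    # Each correspondence yields one linear equation n_t . t = d_t - d_s,
    # so two non-parallel correspondences determine t uniquely.
    A = np.array([nt1, nt2])
    b = np.array([dt1 - ds1, dt2 - ds2])
    t = np.linalg.solve(A, b)
    return r, t
```

The linear system is solvable precisely because parallel pairs were filtered out earlier: two parallel normals would make the matrix A singular.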

The Calculation of the Displacement along the Z-axis
With the 2D transformation, the 3D transformation can be obtained as X_t = R X_s + T, (9) where R = [r, 0_{2×1}; 0_{1×2}, 1] is the 3D rotation matrix and T = [t; t_z] is the 3D translation vector. Hence, our next task is to calculate the displacement t_z along the z-axis. Assume that the two simplified 2D point clouds from P_s and P_t are P_s^{Line-s} and P_t^{Line-s}. P_s^{Line-s} is first transformed using the 2D transformation. For a point p_s^{Line-s} of the transformed point cloud, the cylindrical neighborhoods around this location in the two point clouds are extracted, as shown in Figure 3b. Thus, we can use the minimum z-values in the two cylindrical neighborhoods to calculate the displacement

t_z = z_t^min − z_s^min, (11)

where z_t^min is the minimum z-value of the cylindrical neighborhood in the target point cloud, and z_s^min is the minimum z-value of the cylindrical neighborhood N(p_s^{Line-s}) in the transformed source point cloud. As long as the lower parts of the two cylindrical neighborhoods are not missing, the displacement can be correctly computed. In most cases, this can be guaranteed when the point p_s^{Line-s} lies in the overlap area. In order to obtain a reliable displacement, we randomly select 50 points from the overlap area and calculate 50 displacements. Then, a clustering method is performed on these displacements: displacements of similar size are clustered together, while false displacements are unlikely to cluster, so each of them tends to form its own cluster. The cluster with the maximum number of displacements is used to obtain the final displacement, which is the mean value of the displacements in this maximal cluster. Eventually, the 3D transformation is obtained by Equation (9). Because the lower parts of the two cylindrical neighborhoods are used to calculate the displacement, our method is not suitable for aerial point clouds, for which the lower parts are always missing.
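The clustering of the candidate displacements can be sketched with a simple one-dimensional tolerance-based grouping. The paper does not specify the clustering algorithm, so the tolerance `tol` and the grouping rule below are our assumptions.

```python
import numpy as np

def estimate_tz(displacements, tol=0.05):
    """Group sorted candidate z-displacements: consecutive values within
    `tol` of each other fall into the same cluster. Return the mean of
    the largest cluster as the final displacement."""
    d = np.sort(np.asarray(displacements, dtype=float))
    clusters, current = [], [d[0]]
    for v in d[1:]:
        if v - current[-1] <= tol:
            current.append(v)
        else:
            clusters.append(current)
            current = [v]
    clusters.append(current)
    best = max(clusters, key=len)    # false displacements form tiny clusters
    return float(np.mean(best))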


Experiments and Results
In this section, the experiments are performed on an indoor dataset and an outdoor dataset. In order to save time, we first perform the point cloud simplification. A point-based method and a plane-based method are implemented for comparison. In the point-based method, the intrinsic shape signature (ISS) detector [40] is used to extract the keypoints, and the QLCI descriptors [14] are calculated on the keypoints. The nearest neighbor similarity ratio (NNSR) [41] is applied to establish the point-to-point correspondences. The 1-point RANSAC algorithm [41], which is an improved version of RANSAC, is applied to find the correct correspondences and calculate the 3D transformation. For the plane-based method, a pipeline similar to our method is applied for a fair comparison. The 3D planes are extracted by the region growing method in [38]; then the cosine values of the normals of three planes are calculated to find the candidate plane correspondences, and the triplets with parallel planes are eliminated. All the triplets are traversed to identify the correct plane correspondences and compute the 3D transformation. All the experiments are implemented in Matlab on a PC with an Intel Core i7-7500U 2.7 GHz CPU and 20 GB of RAM.

Evaluation Criterion of Registration Accuracy
Three metrics are used to evaluate the registration accuracy. The rotation error is measured by

e_rot = arccos((trace(R_true^T R) − 1) / 2),

where R is the calculated rotation matrix and R_true is the true rotation matrix. Considering that the displacement along the z-axis is computed separately in our method, we divide the translation error into a horizontal error and a vertical error, which are defined as

e_h = ||t − t_true||, e_v = |t_z − t_z-true|,

where t is the calculated 2D translation vector, t_true is the true 2D translation vector, t_z is the calculated displacement along the z-axis, and t_z-true is the true displacement along the z-axis. Here, the true values R_true, t_true and t_z-true are obtained by the fine registration, which is performed with the ICP algorithm [10].
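These three metrics can be written compactly as follows. This is a Python sketch using standard definitions (the rotation error as the angle of the residual rotation, and the translation errors as Euclidean/absolute differences); the paper's exact formulas are assumed to match these conventions.

```python
import numpy as np

def rotation_error_deg(R, R_true):
    """Angle (in degrees) of the residual rotation R_true^T R."""
    c = (np.trace(R_true.T @ R) - 1.0) / 2.0
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

def horizontal_error(t, t_true):
    """Euclidean distance between calculated and true 2D translations."""
    return float(np.linalg.norm(np.asarray(t) - np.asarray(t_true)))

def vertical_error(tz, tz_true):
    """Absolute difference between calculated and true z-displacements."""
    return abs(tz - tz_true)
```

The `np.clip` guards against trace values that drift marginally outside [-1, 1] due to floating-point error, which would otherwise make `arccos` return NaN.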

Registration Results on the Indoor Dataset
The indoor dataset [42] is acquired by a FARO Focus 3D X330 HDR scanner. The publishers scanned five scenes: Apartment, Bedroom, Boardroom, Lobby and Loft. For each scene, multiple point clouds were obtained from different views. We choose two point clouds of Apartment and two point clouds of Boardroom to perform the experiments. The registration results obtained by our method are shown in Figures 4 and 5. For better visualization, we remove the points on the ceiling after the two point clouds are registered and exhibit the point clouds without them.
As we can see from Figures 4 and 5, the two point clouds of both Apartment and Boardroom are automatically aligned. There is no significant inconsistency between the two registered point clouds. The simplified 2D point clouds are quite well aligned together. From the indoor furniture (e.g., sofa and chair in Figure 4d, and the council board and meeting chair in Figure 5), we can further see that the registration results are rather good.
The registration accuracy of the three methods is shown in Table 1. In order to save space, Apartment is abbreviated as Ap, and Boardroom as Bo. We can see that all three methods provide a sufficiently good initial pose for the fine registration. Our method obtains the best registration accuracy, while the point-based method performs the worst, because it is sensitive to noise and point density variation. The 3D plane and 2D line are higher-level geometric features, so the plane-based and 2D line-based methods achieve significantly more accurate registration results. The plane-based method shows inferior registration accuracy compared to our method because the region growing method [38] is not robust to outliers, meaning that the plane fitting accuracy is low. In contrast, the distance indicator applied in our 2D line extraction method removes outliers automatically. For the vertical error, our method also obtains the smallest value, which indicates that the proposed method for calculating the displacement along the z-axis is feasible. The computation time of the three methods is listed in Table 2. In the Apartment scene, the numbers of points in the source and target point clouds are 124,426 and 185,436, respectively. In the Boardroom scene, the numbers of points in the two point clouds are 101,276 and 138,880, respectively.
We also give the numbers of features of the two point clouds being matched for better comparison; the feature in the three methods denotes the point feature (i.e., keypoint), the plane feature and the 2D line feature, respectively. Because the accuracy of the coarse registration has a great influence on the iteration number of the fine registration, we also give the computation time of the fine registration. For the plane-based method, we also set a threshold on the minimum fitting residual to stop the feature extraction. The actual numbers of extracted plane features are 65 vs. 77 for Apartment and 87 vs. 108 for Boardroom; in this experiment, we only use the large planes to calculate the 3D transformation. Nevertheless, we can see from Table 2 that the computation time for feature extraction in the plane-based method is much higher than that of our method. The plane-based method extracts 3D planes and processes the point clouds in 3D space; as a consequence, the computation cost is very high. By contrast, in our method, only the points with high density are extracted and then further simplified to reduce the number of points, and the 2D lines are finally extracted in 2D space. Thus, our feature extraction is computationally very efficient. The time cost of the coarse registration in the plane-based method is also significantly higher than that of our method, even though the plane-based method has fewer features and our method has an additional step to calculate the displacement along the z-axis. This is because the number of formed triplets in the plane-based method is much larger than the number of formed pairs in our method. The point-based method is also more time-consuming than our method, because the number of keypoints is large, and processing thousands of keypoints requires a great deal of computation.
The 1-point RANSAC algorithm uses the local reference frames of the keypoints to calculate the rotation, so only one point correspondence is enough to calculate the 3D transformation. This largely reduces the required number of RANSAC iterations. In this experiment, the iteration number is set to 200, but the point-based method still spends more time on the coarse registration than our method.

Registration Results on the Outdoor Dataset
The outdoor dataset [43] is acquired by a Leica C10 laser scanner. The publishers provide the point clouds of the City and Castle scenes; we choose two point clouds from each scene to perform the experiments. The experimental results obtained by our method are shown in Figures 6 and 7. We also amplify the local point clouds in order to see the registration results more clearly. For the outdoor scenes, our method is still feasible. The simplified 2D point clouds are aligned quite well. We can also see from the registration results of the 3D point clouds (Figures 6c and 7c) and the locally amplified point clouds (Figures 6d and 7d) that the point clouds of both scenes are well registered.
The registration accuracy tested on the outdoor dataset is listed in Table 3. Our method obtains the highest accuracy, followed by the plane-based and point-based methods. Owing to its sensitivity to noise and point density variation, the point-based method achieves the poorest accuracy. In the plane-based method, the applied plane extraction method cannot resist outliers, so it also exhibits a poorer registration accuracy than ours. The vertical error of our method is likewise much smaller than that of the other two methods, which means that our method can recover the displacement along the z-axis precisely.

The time efficiency tested on the outdoor dataset is listed in Table 4. In the City scene, the source and target point clouds contain 193,219 and 177,535 points, respectively; in the Castle scene, they contain 455,367 and 426,357 points. The numbers of planes extracted by the plane-based method are 160 vs. 172 for City and 158 vs. 157 for Castle, of which only the large planes are used for registration. Our method achieves the best computational efficiency. The plane-based method is the most time-consuming because it processes the point clouds in 3D space, and the point-based method is also computationally expensive owing to the large number of keypoints. In particular, for the Castle scene the number of extracted keypoints is very large, because the ISS detector prefers points with abundant geometric information (i.e., non-planar points) as keypoints, and the scene contains many such points (e.g., the points of the trees).
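The rotation, horizontal, and vertical errors reported in Tables 1 and 3 can be computed by comparing the estimated rigid transform with the ground truth. The sketch below shows one common way to do so (angle of the residual rotation, plus xy-plane and z-axis translation errors); the exact formulas used in this paper may differ, so treat this as an assumed evaluation scheme.

```python
import numpy as np

def registration_errors(T_est, T_gt):
    """Rotation error (degrees) and horizontal/vertical translation
    errors between an estimated and a ground-truth 4x4 transform.
    Illustrative metrics; the paper's exact definitions may differ."""
    R_est, t_est = T_est[:3, :3], T_est[:3, 3]
    R_gt, t_gt = T_gt[:3, :3], T_gt[:3, 3]
    # Angle of the residual rotation R_gt^T R_est.
    dR = R_gt.T @ R_est
    cos_theta = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_theta))
    dt = t_est - t_gt
    horiz_err = np.hypot(dt[0], dt[1])   # error in the xy-plane
    vert_err = abs(dt[2])                # error along the z-axis
    return rot_err_deg, horiz_err, vert_err
```

Separating the translation error into horizontal and vertical components matches how the z-axis displacement is estimated independently of the 2D transformation.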

Conclusions
In this paper, we presented a registration method based on 2D line features. First, a 2D line extraction algorithm was proposed. Then, the simplified 2D point clouds were registered by establishing correspondences between the 2D lines extracted from the two point clouds being matched, and the 2D transformation was calculated. Finally, a method was formulated to compute the displacement along the z-axis, and the 3D transformation was obtained by integrating the 2D transformation with this displacement.
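The final step, integrating the 2D transformation with the z-axis displacement into a 3D transformation, can be sketched as follows. The sketch assumes, as in our setting, that both scans are levelled so that the z-axis is the common vertical direction; `theta`, `tx`, and `ty` are the parameters of the 2D transformation and `tz` is the computed displacement.

```python
import numpy as np

def compose_3d_transform(theta, tx, ty, tz):
    """Lift a 2D rigid transform (rotation theta about the z-axis,
    translation (tx, ty)) plus a z-axis displacement tz into a
    4x4 homogeneous 3D transform."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([
        [c,  -s,  0.0, tx],
        [s,   c,  0.0, ty],
        [0.0, 0.0, 1.0, tz],
        [0.0, 0.0, 0.0, 1.0],
    ])
```

Because the rotation acts only in the xy-plane, the z coordinate of every point is simply shifted by `tz`, which is why the displacement along the z-axis can be estimated separately from the 2D registration.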
Experiments were performed on an indoor dataset and an outdoor dataset to validate the performance of our method, with a point-based method and a plane-based method used for comparison. The experimental results show that our method obtains a relatively good registration result. In terms of registration accuracy, our method is the best on both datasets: the rotation error is less than 0.55°, the horizontal error is less than 0.25 m, and the vertical error is less than 0.015 m. A more significant advantage is the computational efficiency of our method; even on the point clouds of the Castle scene, where the total computation time is the largest, it is less than 12 min.
Author Contributions: W.T. designed the method and wrote this paper. X.H. performed the experiments and analyzed the experiment results. Z.C. and P.T. checked and revised this paper. All authors have read and agreed to the published version of the manuscript.