# Automatic Registration of Terrestrial Laser Scanning Point Clouds using Panoramic Reflectance Images


## Abstract


## 1. Introduction

## 2. Pair-wise Registration

#### 2.1. Registration Process

#### 2.2. Generation of Reflectance Images

#### 2.3. Pixel-to-Pixel Correspondence

- Scale-space extrema detection: employing a difference-of-Gaussian function to identify potential points of interest that are invariant to scale and orientation,
- Key-point localisation: fitting an analytical model (mostly in the form of a parabola) at each candidate location to determine the location and scale,
- Orientation assignment: assigning one or more orientations to each key-point location based on local image gradient directions, and
- Key-point descriptor: measuring the local image gradients at the selected scale in the region around each key-point.
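The first of these steps can be sketched in a few lines. The following is a simplified illustration (not the authors' implementation), using SciPy filters and illustrative sigma and threshold values, of detecting difference-of-Gaussian scale-space extrema:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_extrema(img, sigmas=(1.0, 1.6, 2.56, 4.1, 6.55), thresh=0.01):
    """Simplified scale-space extrema detection (the first SIFT stage).

    Blurs the image at several scales, forms difference-of-Gaussian (DoG)
    layers, and keeps pixels that are the maximum or minimum of their
    3x3x3 neighbourhood in (scale, row, col). The sigma values and the
    contrast threshold are illustrative choices, not SIFT's exact ones.
    """
    blurred = [gaussian_filter(np.asarray(img, float), s) for s in sigmas]
    dogs = np.stack([b2 - b1 for b1, b2 in zip(blurred, blurred[1:])])
    is_max = (dogs == maximum_filter(dogs, size=3)) & (dogs > thresh)
    is_min = (dogs == minimum_filter(dogs, size=3)) & (dogs < -thresh)
    # only interior DoG layers have neighbours both above and below in scale
    scale, row, col = np.where((is_max | is_min)[1:-1])
    return list(zip(scale + 1, row, col))
```

A full SIFT pipeline would additionally refine each candidate with the analytical fit, assign orientations and build the 128-dimensional descriptor; library implementations such as OpenCV's `SIFT_create` cover all four steps.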

#### 2.4. Point-to-Point Correspondence

#### 2.4.1 Outlier Detection

A rigid-body transformation preserves Euclidean distances. Hence, if point pairs (A, A′) and (B, B′) are both correct correspondences, the distance between A and B in one scan must equal the distance between A′ and B′ in the other, i.e., $S_{AB} = S_{A'B'}$ (Figure 3). However, it is practically impossible for $S_{AB}$ to exactly equal $S_{A'B'}$ due to the location error of the 3D points. Therefore, we need to estimate the tolerable error for distance invariance. In the work of Barnea and Filin [13], this value is empirically acquired. However, as the distance invariance error varies with the two corresponding pairs, the tolerable value should be self-adaptive. The distance difference between 3D points A, B and their corresponding points A′, B′ is:

$$\Delta S = S_{AB} - S_{A'B'} = \sqrt{(X_A - X_B)^2 + (Y_A - Y_B)^2 + (Z_A - Z_B)^2} - \sqrt{(X_{A'} - X_{B'})^2 + (Y_{A'} - Y_{B'})^2 + (Z_{A'} - Z_{B'})^2}$$

where $(X_A, Y_A, Z_A)$, $(X_B, Y_B, Z_B)$, $(X_{A'}, Y_{A'}, Z_{A'})$ and $(X_{B'}, Y_{B'}, Z_{B'})$ are respectively the 3D coordinates of points A, B, A′ and B′.
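This distance difference is computed directly from the two point pairs; a minimal sketch (illustrative, not the authors' code):

```python
import numpy as np

def distance_difference(a, b, a2, b2):
    """Delta-S = S_AB - S_A'B' for 3D points. A rigid-body transformation
    preserves distances, so correct correspondences give a value near zero."""
    s_ab = np.linalg.norm(np.subtract(a, b))
    s_a2b2 = np.linalg.norm(np.subtract(a2, b2))
    return s_ab - s_a2b2
```

For a pair related by any rotation plus translation, the result is zero up to floating-point error; a mismatched point inflates it.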

The tolerable error $\sigma_{\Delta S}$ can be estimated using the error propagation law in light of the location errors of the two corresponding point pairs. Because the location error of each point is independent, the error estimate can be obtained by:

$$\sigma_{\Delta S}^{2} = \sum_{i \in \{A, B, A', B'\}} J_i \, D_{(XYZ)i} \, J_i^{T}$$

where $D_{(XYZ)i}$ is the variance matrix of point $i$, $i = A, B, A'$ and $B'$, and $J_i = \partial \Delta S / \partial (X_i, Y_i, Z_i)$ is the corresponding Jacobian.

We regard the range error $\sigma_R$, the horizontal angular error $\sigma_\theta$, the vertical angular error $\sigma_\varphi$ and the angular resolution as invariant and independent for every point. Therefore, the variance matrix $D_{(XYZ)i}$ of point $i$ is computed in terms of $\sigma_R$, $\sigma_\theta$, $\sigma_\varphi$ and the angular resolution, according to the error propagation law.
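Under a common spherical parameterisation $X = R\cos\varphi\cos\theta$, $Y = R\cos\varphi\sin\theta$, $Z = R\sin\varphi$ (an assumed model for illustration; the authors' exact sensor model may differ), the propagation $D_{(XYZ)} = J\,\mathrm{diag}(\sigma_R^2, \sigma_\theta^2, \sigma_\varphi^2)\,J^{T}$ can be sketched as:

```python
import numpy as np

def point_variance(r, theta, phi, sigma_r, sigma_theta, sigma_phi):
    """Propagate range and angular errors to a Cartesian 3x3 variance matrix.

    Assumes X = R cos(phi) cos(theta), Y = R cos(phi) sin(theta),
    Z = R sin(phi); J is the Jacobian of (X, Y, Z) w.r.t. (R, theta, phi).
    """
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(phi), np.sin(phi)
    J = np.array([
        [cp * ct, -r * cp * st, -r * sp * ct],
        [cp * st,  r * cp * ct, -r * sp * st],
        [sp,       0.0,          r * cp],
    ])
    return J @ np.diag([sigma_r**2, sigma_theta**2, sigma_phi**2]) @ J.T
```

At $\theta = \varphi = 0$ the matrix reduces to $\mathrm{diag}(\sigma_R^2, R^2\sigma_\theta^2, R^2\sigma_\varphi^2)$, which makes explicit that the Cartesian effect of the angular errors grows linearly with range.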

$3\sigma_{\Delta S}$ is chosen as a threshold to determine correct correspondence, so Equation 2 can be written as $|\Delta S| \le 3\sigma_{\Delta S}$. Because $\sigma_{\Delta S}$ varies with each of the two corresponding pairs, the chosen threshold is self-adaptive instead of constant. If the above condition is satisfied, those two point pairs are said to be corresponding. Otherwise, there is an outlier among the point pairs. According to Equation 6, however, we cannot determine which point pair is an outlier, or whether both of them are.
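The self-adaptive check can be put together as follows. This sketch (illustrative only) takes the four per-point variance matrices $D_{(XYZ)i}$ as inputs and uses the analytic gradient of $\Delta S$:

```python
import numpy as np

def accept_pair(a, b, a2, b2, covs, k=3.0):
    """Self-adaptive distance-invariance test for two correspondences.

    covs is a list of 3x3 variance matrices for points A, B, A', B'.
    With independent point errors, the error propagation law gives
        var(Delta-S) = sum_i g_i^T D_i g_i,
    where g_i is the gradient of Delta-S w.r.t. point i. The pair of
    correspondences is accepted when |Delta-S| <= k * sigma_DeltaS.
    """
    a, b, a2, b2 = (np.asarray(p, float) for p in (a, b, a2, b2))
    d1, d2 = a - b, a2 - b2
    s1, s2 = np.linalg.norm(d1), np.linalg.norm(d2)
    ds = s1 - s2
    # analytic gradients of Delta-S w.r.t. A, B, A', B'
    grads = [d1 / s1, -d1 / s1, -d2 / s2, d2 / s2]
    var = sum(g @ D @ g for g, D in zip(grads, covs))
    return abs(ds) <= k * np.sqrt(var)
```

With millimetre-level point noise, a distance mismatch of a metre is rejected while a consistent pair passes, which is exactly the behaviour the adaptive threshold is meant to provide.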

#### 2.4.2. Computation of Transformation Parameters
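A standard closed-form least-squares solution for the rotation and translation parameters (RTPs) from matched 3D point pairs is the SVD-based method in the style of Arun et al.; the sketch below assumes that approach and is not necessarily the authors' exact algorithm:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform: find R, t minimising ||R P_i + t - Q_i||.

    P, Q: (N, 3) arrays of matched 3D points. Uses the SVD of the
    cross-covariance matrix of the centred point sets; the determinant
    correction guards against reflections.
    """
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

Given noise-free correspondences the true transform is recovered exactly; with noisy correspondences it is the least-squares optimum, which is why it is a natural building block for the iterative corresponding process.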

#### 2.4.3. Correspondence Prediction

## 3. Global Registration

Pair-wise registration of the six scans yields five sets of RTPs, $(R_{2\text{-}1}, T_{2\text{-}1})$, $(R_{3\text{-}2}, T_{3\text{-}2})$, …, $(R_{6\text{-}5}, T_{6\text{-}5})$, corresponding to the motion of each slave scan into the master reference frame. The last-to-first closure constraint can be expressed as follows:

$$\begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix} = R_1 \left( R_2 \left( \cdots \left( R_{m-1} \begin{bmatrix} X_m \\ Y_m \\ Z_m \end{bmatrix} + T_{m-1} \right) \cdots \right) + T_2 \right) + T_1 \qquad (7)$$

where $(X_1, Y_1, Z_1)$ and $(X_m, Y_m, Z_m)$ are the correspondence coordinates in the first and last scans, respectively; $T_i = (T_{i,X}, T_{i,Y}, T_{i,Z})^{T}$ represents the translation from scan $i{+}1$ to scan $i$; and $R_i$ is the rotation from scan $i{+}1$ to scan $i$.

According to Equation 7, the corresponding points $(X_m, Y_m, Z_m)$ in the last scan $m$ are ideally transformed into their corresponding points $(X_1, Y_1, Z_1)$ in the first scan using $m-1$ rigid transformations. The discord of this transformation process in practical terms is referred to as the closure error. In the case of Equation 7, however, it comprises not only transformation error but also correspondence error, which is not expected to decrease during the bundle adjustment process. As a result, the closure constraint expressed by Equation 7 is not error-free even in the ideal case.
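The chained mapping of Equation 7 translates directly into code. This sketch (with hypothetical transform values in the test) applies the relative transforms $X_i = R_i X_{i+1} + T_i$ from the last scan back to the first:

```python
import numpy as np

def chain_to_first(x_m, transforms):
    """Map a point from scan m into scan 1 through m-1 rigid transforms.

    transforms = [(R_1, T_1), ..., (R_{m-1}, T_{m-1})], where (R_i, T_i)
    maps scan i+1 coordinates into scan i. (R_{m-1}, T_{m-1}) is applied
    first and (R_1, T_1) last, matching the nesting of Equation 7.
    """
    x = np.asarray(x_m, float)
    for R, T in reversed(transforms):
        x = np.asarray(R) @ x + np.asarray(T)
    return x
```

The closure error is then the distance between this chained prediction and the measured correspondence in the first scan.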

To construct an error-free closure constraint, one more set of RTPs, $(R_{1\text{-}6}, T_{1\text{-}6})$, is computed. As shown in Figure 4, $(R_{1\text{-}6}, T_{1\text{-}6})$ enables the first self-closure scan S1, which can be expressed as follows:

$$\begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix} = R_1 \left( R_2 \left( \cdots \left( R_m \begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix} + T_m \right) \cdots \right) + T_2 \right) + T_1$$

Compared with Equation 7, $(X_m, Y_m, Z_m)$ is replaced by $(X_1, Y_1, Z_1)$ and accordingly one more transformation $(R_m, T_m)$ is added, which means that after $m$ rigid transformations $(X_1, Y_1, Z_1)$ maps back to itself in the ideal case. The closure constraint is therefore error-free. In practical terms, the closure error remaining after a bundle adjustment consists merely of parameter errors and can be minimised in the least-squares sense. In addition, the self-closure form can be further extended to a circular one, as the transformation ring $((R_{2\text{-}1}, T_{2\text{-}1}), \ldots, (R_{1\text{-}6}, T_{1\text{-}6}))$ clearly allows each of the six scans to form a self-closure model (see Figure 4). The self-closure error of scan $i$ can be visualised as the disagreement between the solid rectangle Si and the dotted one Si′ ($i = 1, \ldots, 6$). Once a circular self-closure constraint is imposed, the bundle adjustment allows the self-closure errors of all scans to simultaneously reach the least-squares minimum. As this adjustment is based on a limited number of corresponding points, introducing a circular self-closure constraint instead of a single one is not likely to considerably increase computation time. Moreover, multi-view registration holding true to this constraint will safely converge to a global minimum.
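The self-closure property can be demonstrated numerically: build consistent relative transforms around a ring of scan poses, chain them all, and verify that a point maps back onto itself. The poses below are synthetic examples, not the paper's data:

```python
import numpy as np

def rz(a):
    """Rotation about the z-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def self_closure_error(x, ring):
    """Chain a full ring of m relative transforms (R_i, T_i), where
    (R_i, T_i) maps scan i+1 (cyclically) into scan i, and return the
    distance by which the point x fails to map back onto itself."""
    y = np.asarray(x, float)
    for R, T in reversed(ring):
        y = R @ y + np.asarray(T)
    return float(np.linalg.norm(y - x))

# synthetic absolute poses of three scans: world point = A_i @ local + b_i
poses = [(rz(2 * np.pi * i / 3), np.array([i, -i, 0.5 * i])) for i in range(3)]
ring = []
for i in range(3):
    A_i, b_i = poses[i]
    A_j, b_j = poses[(i + 1) % 3]  # next scan, cyclically
    # relative transform taking scan i+1 coordinates into scan i
    R_rel = A_i.T @ A_j
    T_rel = A_i.T @ (b_j - b_i)
    ring.append((R_rel, T_rel))
```

With exact relative transforms the self-closure error is zero to machine precision; with estimated RTPs it is exactly the residual that the circular bundle adjustment minimises for every scan simultaneously.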

## 4. Results and Discussion

#### 4.1. Pair-Wise Registration Results

We computed the tolerable error $\sigma_{\Delta S}$ for the distance invariance verification of each pair of correspondences, in terms of the angular resolution and the angular and range accuracies listed in Table 1. In order to provide good initial RTPs for the succeeding iterative process, we chose $\sigma_{\Delta S}$, rather than $3\sigma_{\Delta S}$, as the threshold to remove outliers. As a result, only 110 correct correspondences were preserved, instead of the 655 correspondences shown in Figure 6. Figure 7 partly illustrates the correct correspondences. We used an iterative corresponding process to match a larger number of points with a reasonable distribution, thereby including as many new matches as possible. During this process, the threshold was relaxed to $3\sigma_{\Delta S}$ to accept more correct correspondences, because the matching of most point pairs becomes more accurate as the number of iterations increases.

Here, $n_i$ is the total number of TLS points in scan $i$ of Dataset 1; *iter* is the number of total iterations; *rms* is the root-mean-square error of the registration computed by least-squares adjustment; and *max*, *min* and *avg* are respectively the maximum, minimum and average distances between the 3D corresponding point pairs selected as check point pairs after registration in a common coordinate frame.

#### 4.2. Global Registration Results

## 5. Summary and Outlook

## Acknowledgments

## References and Notes

1. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. **1992**, 14(2), 239–256.
2. Godin, G.; Boulanger, P. Range image registration through viewpoint invariant computation of curvature. International Archives of Photogrammetry and Remote Sensing, Zurich, Switzerland, 1995; 30, pp. 170–175.
3. Sharp, G.C.; Lee, S.W.; Wehe, D.K. ICP registration using invariant features. IEEE Trans. Pattern Anal. Mach. Intell. **2002**, 24(1), 90–102.
4. Sequeira, V.; Ng, K.; Wolfart, E.; Goncalves, J.G.M.; Hogg, D. Automated reconstruction of 3D models from real environments. ISPRS J. Photogramm. Remote Sens. **1999**, 54(1), 1–22.
5. Bae, K.-H.; Lichti, D.D. Automated registration of unorganised point clouds from terrestrial laser scanners. International Archives of Photogrammetry and Remote Sensing, Istanbul, Turkey, 12–23 July 2004; 35, pp. 222–227.
6. Rabbani, T.; Dijkman, S.; van den Heuvel, F.; Vosselman, G. An integrated approach for modelling and global registration of point clouds. ISPRS J. Photogramm. Remote Sens. **2007**, 61(6), 355–370.
7. Dold, C.; Brenner, C. Registration of terrestrial laser scanning data using planar patches and image data. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Dresden, Germany, 25–27 September 2006; 36, pp. 78–83.
8. Wendt, A. On the automation of the registration of point clouds using the metropolis algorithm. International Archives of Photogrammetry and Remote Sensing, Istanbul, Turkey, 12–23 July 2004; 35, pp. 106–111.
9. Seo, J.K.; Sharp, G.C.; Lee, S.W. Range data registration using photometric features. Proceedings of the International Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; 2, pp. 1140–1145.
10. Al-Manasir, K.; Fraser, C.S. Registration of terrestrial laser scanner data using imagery. Photogramm. Record **2006**, 21(115), 255–268.
11. Barnea, S.; Filin, S. Registration of terrestrial laser scans via image based features. International Archives of Photogrammetry and Remote Sensing, Espoo, Finland, 12–14 September 2007; 36, pp. 32–37.
12. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with application to image analysis and automated cartography. Comm. ACM **1981**, 24(6), 381–395.
13. Barnea, S.; Filin, S. Keypoint based autonomous registration of terrestrial laser point-clouds. ISPRS J. Photogramm. Remote Sens. **2008**, 63, 19–35.
14. Kang, Z.; Zlatanova, S. Automatic registration of terrestrial scan data based on matching corresponding points from reflectivity images. Proceedings of the 2007 Urban Remote Sensing Joint Event, Paris, France, 11–13 April 2007; pp. 1–7.
15. Kang, Z.; Zlatanova, S.; Gorte, B. Automatic registration of terrestrial scanning data based on registered imagery. Proceedings of the 2007 FIG Working Week, Hong Kong SAR, China, 13–17 May 2007.
16. Moravec, H.P. Towards automatic visual obstacle avoidance. Proceedings of the 5th International Joint Conference on Artificial Intelligence, Cambridge, MA, USA, August 1977; p. 584.
17. Chen, Y.; Medioni, G.G. Object modeling by registration of multiple range images. Image Vis. Comput. **1992**, 10(3), 145–155.
18. Masuda, T.; Yokoya, N. A robust method for registration and segmentation of multiple range images. Comput. Vis. Image Underst. **1995**, 61(3), 295–307.
19. Pulli, K. Multiview registration for large data sets. Proceedings of the International Conference on 3D Digital Imaging and Modeling, Ottawa, ON, Canada, 4–8 October 1999; pp. 160–168.
20. Bergevin, R.; Soucy, M.; Gagnon, H.; Laurendeau, D. Towards a general multi-view registration technique. IEEE Trans. Pattern Anal. Mach. Intell. **1996**, 18(5), 540–547.
21. Benjemaa, R.; Schmitt, F. Fast global registration of 3D sampled surfaces using a multi-z-buffer technique. Image Vis. Comput. **1999**, 17(2), 113–123.
22. Sharp, G.C.; Lee, S.W.; Wehe, D.K. Multiview registration of 3D scenes by minimizing error between coordinate frames. IEEE Trans. Pattern Anal. Mach. Intell. **2004**, 26(8), 1037–1049.
23. Hu, S.; Zha, H.; Zhang, A. Registration of multiple laser scans based on 3D contour features. Proceedings of the 10th International Conference on Information Visualisation, London, UK, 5–7 July 2006; pp. 725–730.
24. Stoddart, A.J.; Hilton, A. Registration of multiple point sets. Proceedings of the 13th International Conference on Pattern Recognition, Vienna, Austria, 25–29 August 1996; 2, pp. 40–44.
25. Neugebauer, P.J. Reconstruction of real-world objects via simultaneous registration and robust combination of multiple range images. Int. J. Shape Model. **1997**, 3(1–2), 71–90.
26. Eggert, D.W.; Fitzgibbon, A.W.; Fisher, R.B. Simultaneous registration of multiple range views for use in reverse engineering of CAD models. Comput. Vis. Image Underst. **1998**, 69(3), 253–272.
27. Williams, J.; Bennamoun, M. A multiple view 3D registration algorithm with statistical error modeling. IEICE Trans. Inf. Syst. **2000**, 83(8), 1662–1670.
28. Kang, Z.; Zlatanova, S. A new point matching algorithm for panoramic reflectance images. Proceedings of the 5th International Symposium on Multispectral Image Processing and Pattern Recognition (MIPPR07), Wuhan, China, 15–17 November 2007; p. 67882F-1-10.
29. Forkuo, E.; King, B. Automatic fusion of photogrammetric imagery and laser scanner point clouds. International Archives of Photogrammetry and Remote Sensing, Istanbul, Turkey, 12–23 July 2004; 35, pp. 921–926.
30. Aguilera, D.G.; Gonzalvez, P.R.; Lahoz, J.G. Automatic co-registration of terrestrial laser scanner and digital camera for the generation of hybrid models. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Espoo, Finland, 12–14 September 2007; 36, pp. 162–168.
31. Li, M. High-precision relative orientation using feature-based matching techniques. ISPRS J. Photogramm. Remote Sens. **1990**, 44, 311–324.
32. Mikolajczyk, K.; Schmid, C. Scale and affine invariant interest point detectors. Int. J. Comput. Vis. **2004**, 60(1), 63–86.
33. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. **2004**, 60(2), 91–110.
34. Bleyer, M.; Gelautz, M. A layered stereo matching algorithm using image segmentation and global visibility constraints. ISPRS J. Photogramm. Remote Sens. **2005**, 59, 128–150.
35. Bentley, J.L. Multidimensional binary search trees used for associative searching. Comm. ACM **1975**, 18(9), 509–517.
36. Boehler, W.; Bogas Vicent, M.; Marbs, A. Investigating laser scanner accuracy. Proceedings of the CIPA XIXth International Symposium, Antalya, Turkey, 30 September–4 October 2003; pp. 696–702.
37. Lichti, D.D. Error modelling, calibration and analysis of an AM-CW terrestrial laser scanner system. ISPRS J. Photogramm. Remote Sens. **2007**, 61, 307–324.
38. Mikhail, E.M.; Bethel, J.S.; McGlone, J.C. Introduction to Modern Photogrammetry; John Wiley: New York, NY, USA, 2001.
39. Simon, D. Fast and Accurate Shape-Based Registration; Technical Report CMU-RI-TR-96-45; Robotics Institute, Carnegie Mellon University, 1996.
40. Kretschmer, U.; Abmayr, T.; Thies, M.; Frohlich, C. Traffic construction analysis by use of terrestrial laser scanning. Proceedings of the ISPRS Working Group VIII/2 workshop "Laser Scanners for Forest and Landscape Assessment", Freiburg, Germany, 3–6 October 2004; 36, pp. 232–236.
41. Haala, N.; Reulke, R.; Thies, M.; Aschoff, T. Combination of terrestrial laser scanning with high resolution panoramic images for investigations in forest applications and tree species recognition. International Archives of Photogrammetry and Remote Sensing, Dresden, Germany, 19–22 February 2004.
42. Amiri Parian, J.; Gruen, A. Integrated laser scanner and intensity image calibration and accuracy assessment. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Enschede, The Netherlands, 12–14 September 2005; 36, pp. 18–23.

**Figure 3.** Four corresponding TLS points from two overlapping scans. The distance between the two points in Scan (a) should equal the distance between the two corresponding points in Scan (b), i.e., $S_{AB} = S_{A'B'}$. G and G′ are the gravity points of points A, B, C, D and A′, B′, C′, D′, respectively. The gravity points are used to verify those legitimate correspondences that would otherwise be disqualified because they are surrounded by wrong matches.

**Figure 4.** Circular self-closure constraint. The solid rectangle Si denotes the real scan, and the dotted version Si′ is computed from Si using six sets of RTPs (i = 1,…,6). Under the proposed adjustment, Si is fixed, and the six sets of RTPs are adjusted to minimise the sum of the squares of the disagreements between Si and Si′.

**Figure 5.** Floor plans of the test data sets. (a) Dataset 1; the scanned scene is the inside of an office. (b) Dataset 2; the scanned scene is the area between two buildings. (c) Dataset 3; the scanned scene is the area in front of the Aula Conference Centre of TU Delft. (d) Dataset 4; the scanned scene is an area surrounded by two buildings. The FOVs of the test datasets are 360° in all cases.

**Figure 6.** Corresponding points on two 360° reflectance images, identified by nearest-neighbour search on the Euclidean distance from the descriptor vector of each key point in the fixed image to those in the moving image. The numbers in the image are the indices of corresponding point pairs.

**Figure 8.** Corresponding points on the two reflectance images acquired after the iterative corresponding process. The numbers in the image are indices of corresponding point pairs.

**Figure 9.** Check points used to measure registration accuracy. To ensure a proper distribution of check points, four pairs of corresponding points were manually selected as check points in each 60° segment of the 360° FOV in order to compare accuracies.

**Figure 10.** Pair-wise registered point clouds. (a) Dataset 1. (b) Dataset 2. (c) Dataset 3. (d) Dataset 4.

**Figure 11.** Scale image of the distances between 24 corresponding check points in the registered point cloud.

**Figure 13.** Evenly distributed corresponding points on the building facade after the iterative corresponding process.

**Figure 15.** Registered real scans. (a) Top views: before (left) and after (right) global registration. (b) Front views: before (left) and after (right) global registration.

| Point cloud | Scan/image angular resolution | Angular accuracy | Range accuracy |
|---|---|---|---|
| Datasets 1 and 2 | 0.036° | ±0.009° | ±3 mm |
| Datasets 3 and 4 | 0.045° | ±0.009° | ±3 mm |

| Proposed method | $n_1$ | $n_2$ | iter | rms (m) | max (m) | min (m) | avg (m) | Time (min) |
|---|---|---|---|---|---|---|---|---|
| Dataset 1 | 11987424 | 11974976 | 2 | 0.0037 | 0.0053 | 0.0009 | 0.0035 | 5.0 |
| Dataset 2 | 16726500 | 16713375 | 2 | 0.0044 | 0.0145 | 0.0018 | 0.0066 | 6.0 |
| Dataset 3 | 10006564 | 10046768 | 3 | 0.0038 | 0.0185 | 0.0025 | 0.0078 | 5.5 |
| Dataset 4 | 13266288 | 13259400 | 4 | 0.0039 | 0.0083 | 0.0010 | 0.0041 | 6.3 |

| Proposed method | iter | rms (m) | time (min) | Proposed method | iter | rms (m) | time (min) |
|---|---|---|---|---|---|---|---|
| Scan 2-1 | 3 | 0.0058 | 5.4 | Scan 12-11 | 3 | 0.0073 | 5.3 |
| Scan 3-2 | 3 | 0.0067 | 5.5 | Scan 13-12 | 5 | 0.0098 | 6.6 |
| Scan 4-3 | 4 | 0.0079 | 5.9 | Scan 14-13 | 5 | 0.0085 | 6.7 |
| Scan 5-4 | 5 | 0.0074 | 6.7 | Scan 15-14 | 3 | 0.0023 | 5.3 |
| Scan 6-5 | 4 | 0.0036 | 6.1 | Scan 16-15 | 4 | 0.0075 | 6.1 |
| Scan 7-6 | 4 | 0.0082 | 6.0 | Scan 17-16 | 4 | 0.0056 | 6.1 |
| Scan 8-7 | 5 | 0.0098 | 6.6 | Scan 18-17 | 5 | 0.0069 | 6.8 |
| Scan 9-8 | 5 | 0.0095 | 6.7 | Scan 19-18 | 4 | 0.0039 | 6.0 |
| Scan 10-9 | 4 | 0.0037 | 6.2 | Scan 20-19 | 3 | 0.0015 | 5.4 |
| Scan 11-10 | 4 | 0.0051 | 6.0 | Scan 1-20 | 5 | 0.0079 | 6.7 |

| Model | iter | RMS (m) | Time (min) |
|---|---|---|---|
| A | 6 | 0.0340 | 1.98 |
| B (Scan 20) | 4 | 0.0351 | 0.62 |
| B (Scan 14) | 4 | 0.0352 | 0.62 |
| C | 4 | 0.0381 | 0.63 |

© 2009 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

Kang, Z.; Li, J.; Zhang, L.; Zhao, Q.; Zlatanova, S. Automatic Registration of Terrestrial Laser Scanning Point Clouds using Panoramic Reflectance Images. *Sensors* **2009**, *9*(4), 2621–2646. https://doi.org/10.3390/s90402621