# Line-Based Registration of Panoramic Images and LiDAR Point Clouds for Mobile Mapping


## Abstract


## 1. Introduction

## 2. The Mobile Mapping System and Sensor Configuration

#### 2.1. Coordinate Systems

The localization information of the platform is denoted **M**_{1}(**R**_{1}, **T**_{1}). The LiDAR points are geo-referenced to the world coordinate system (the left-bottom dotted line) according to **M**_{1} and the calibration parameters **M**_{3} between the LiDAR sensor and the platform. **M**_{2}(**R**_{2}, **T**_{2}) is the transformation from the camera to the vehicle platform, which is also obtained through prior calibration.

The registration seeks the transformation **M** from the LiDAR points to the panoramic image. Unlike a static calibration, which concerns only **M**_{23}, the time series of localization information **M**_{1} is considered here. For simplification, the possible errors of **M**_{3} are ignored and the transformation is constructed directly between the geo-referenced LiDAR and the camera (the bottom solid line). It is assumed that there exists a compensating transformation **ΔM**(Δ**R**, Δ**T**) that satisfies Equation (1).

**ΔM** compensates for several error sources, including **M**_{3} (as discussed in Section 1). Line features are extracted from both the images and the LiDAR points, which are then used to determine the optimal **ΔM** for an accurate registration. The solution procedure is based on the standard least squares technique embedded with RANSAC for removal of possible gross errors in **M**_{1}.

#### 2.2. Geo-Referenced LiDAR

The LiDAR points are geo-referenced with the localization information **M**_{1} and the calibration parameters **M**_{3} between the LiDAR and the IMU [42]. In the proposed system, three low-cost SICK laser scanners (all linear-array lasers) are equipped to acquire a 3D point cloud of the object facade. The angular resolution (0.25°–1.0°) and scan frequency (25–75 Hz) are fixed during data acquisition. The density of the LiDAR points is uneven: the closer a surface is to the scanner, the denser the points on it. For instance, the points on the ground are much denser than those on the top of the facade. In addition, the point density in the horizontal direction depends on the velocity of the MMS vehicle.

#### 2.3. Multi-Camera Rig Models

#### 2.3.1. Spherical Camera Model

Under the spherical camera model, the sphere center **S**, the 3D point **P** in plane π, and the panoramic image point **u** are collinear [39]. The pixels in a panoramic image are typically expressed in polar coordinates. Assuming the width and height of the panoramic image are W and H, respectively, the horizontal 360° view is mapped to [0, W] and the vertical 180° view is mapped to [0, H]. Thus, each pixel (u, v) can be transformed to polar coordinates $\left(\theta ,\phi \right)$ by Equation (2).
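Equation (2) itself did not survive extraction; as a sketch, assuming the usual linear mapping (azimuth θ over [0, 2π] across the width, polar angle φ over [0, π] down the height), the conversion can be written as:

```python
import math

def pixel_to_polar(u, v, W, H):
    """Map panoramic pixel (u, v) to polar coordinates (theta, phi).

    Assumes the horizontal 360° view maps linearly onto [0, W] and the
    vertical 180° view onto [0, H], as stated in the text; the exact
    origin convention of Equation (2) is an assumption here.
    """
    theta = u / W * 2.0 * math.pi  # azimuth in [0, 2*pi]
    phi = v / H * math.pi          # polar angle in [0, pi]
    return theta, phi
```

The center pixel of a 4096 × 2048 panorama, for example, maps to (π, π/2), i.e., straight ahead at the horizon under this convention.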

The sphere center **S**, a 3D point **P**, and an edge pixel **u** are collinear. The relationship between **X** and **P** is established by the perspective transformation in Equation (4), where **P** is the coordinate of the object point, **X**(x, y, z) is the Cartesian coordinate of image point **u**, **R** and **T** are respectively the rotation matrix and translation vector between the object space and the panoramic camera space, and $\lambda $ is the scale factor.
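Since the body of Equation (4) was lost in extraction, the following sketch assumes the common collinearity form λ**X** = **R**(**P** − **T**); combined with the inverse of the pixel-to-polar mapping of Equation (2), it projects an object point onto the panorama:

```python
import math

def project_spherical(P, R, T, W, H):
    """Project object point P to a panoramic pixel under the spherical
    camera model.

    ASSUMPTION: the perspective transformation of Equation (4) is taken
    as lambda * X = R (P - T); the paper's exact composition of R and T
    may differ.
    """
    # camera-frame direction (defined up to the scale factor lambda)
    X = [sum(R[i][j] * (P[j] - T[j]) for j in range(3)) for i in range(3)]
    r = math.sqrt(sum(c * c for c in X))
    theta = math.atan2(X[1], X[0]) % (2.0 * math.pi)  # azimuth
    phi = math.acos(X[2] / r)                          # polar angle
    # invert the pixel-to-polar mapping of Equation (2)
    return theta / (2.0 * math.pi) * W, phi / math.pi * H
```

Because the direction is normalized before the angle computation, the scale factor λ drops out, which is exactly why a single pixel constrains only the ray and not the range.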

#### 2.3.2. Panoramic Camera Model

Each fisheye lens of the multi-camera rig has its own projection center **C**, which cannot be located precisely at the sphere center **S** due to manufacturing constraints. In the panoramic camera model, the mono-lens center **C** (instead of the sphere center **S**), the panoramic image point **u**, and the 3D point **P′** in object space are collinear.

Each fisheye camera is described by its calibration parameters **K**_{r}, including the projection model and the radial and tangential distortion; the index r means that every fisheye camera has its own calibration parameters. Since straight lines, such as the boundaries of buildings, are distorted in the raw fisheye images, rectified images are used for line extraction.

Each rectified camera is described by its interior orientation parameters (the focal length f and the principal point (x_{0}, y_{0})) and six exterior orientation parameters (T_{x}, T_{y}, T_{z}, R_{x}, R_{y}, R_{z}) relative to the global coordinate system (the offsets between **C** and **S** in Figure 4b under the spherical view). Both were acquired in advance through careful calibration by the manufacturer.

Each pixel **p**(x, y) in a rectified image forms a 3D ray in the global coordinate system by Equation (6), where **X**_{r}(x − x_{0}, y − y_{0}, f) is the mono-camera coordinate of pixel **p**. Since the translation vector **T**_{r}(T_{x}, T_{y}, T_{z}) and the local rotation matrix **R**_{r} are known, the ray can be calculated accordingly; **X′**(x′, y′, z′) is the coordinate transformed into the global panoramic coordinate system, and the scale factor m defines the distance from the rectified image plane to the projection surface (typically a sphere or cylinder). By combining Equations (5) and (6), we can resolve m and **X′** for a sphere projection.

The spherical camera model is applicable when **T**_{r} is small enough to vanish. However, for a self-assembled panoramic camera whose **T**_{r} is too large to ignore, the panoramic camera model is the better choice.

## 3. Line-Based Registration Method

The registration resolves the **ΔM** bias. Using **M**_{1} and **M**_{2} in Figure 2, a LiDAR point **P**_{w} in the world coordinate system is transformed into the auxiliary coordinate **P**, as defined in Equation (9), which is further discussed below.

#### 3.1. Transformation Model

A 3D line **AB** is given in the world coordinate system (actually the auxiliary coordinate system in this paper), and its corresponding line in the panoramic image is detected as edge pixels. The projection ray through the perspective panoramic camera center **C** and an edge pixel p on the panoramic image intersects **AB** at point **P**, as illustrated in Figure 7. By representing the line with its two endpoints **X**_{A} and **X**_{B}, an arbitrary point **P** on it is defined by Equation (10), i.e., **P** = **X**_{A} + t(**X**_{B} − **X**_{A}).

Substituting **P** from Equation (10) into Equation (4) yields the line-based sphere camera model. Similarly, substituting Equation (10) into Equation (8) yields the line-based panoramic camera model, in which **X′** can be obtained from Equation (6). The scale parameter $\lambda $ and the line parameter t are unknown; what we solve for is the rotation matrix **R** and translation **T**.
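The line parameterization of Equation (10) can be sketched directly; substituting the result into either camera model then gives the line-based observation equations:

```python
def point_on_line(XA, XB, t):
    """Equation (10): an arbitrary point P on line AB expressed as a
    linear blend of the two endpoints, P = XA + t * (XB - XA).

    t = 0 gives endpoint A, t = 1 gives endpoint B; intermediate values
    sweep the segment, and values outside [0, 1] extend the line.
    """
    return tuple(a + t * (b - a) for a, b in zip(XA, XB))
```

Because t enters linearly, each sampled edge pixel contributes one extra unknown t alongside the shared pose unknowns, which drives the point-counting argument in Section 3.2.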

#### 3.2. Solution

The line-based model provides two observation equations, **f**_{u}(**R**, **T**, t) and **f**_{v}(**R**, **T**, t). Given one pixel on a corresponding line, the two equations in Equation (11) are formed, with one line parameter t introduced. To solve the six unknowns, at least six points are needed: if one point per line is used, six pairs of corresponding lines are needed; if two points per line are used, three pairs suffice. More than two points on a line does not reduce the rank deficiency but only increases the redundancy.

## 4. Line Feature Extraction from LiDAR

#### 4.1. Buildings

#### 4.2. Street Light Poles

#### 4.3. Curbs

## 5. Experiments and Results

#### 5.1. Datasets

The POS data and the exterior orientation parameters (Table 1) correspond to **M**_{1}(**R**_{1}, **T**_{1}) and **M**_{2}(**R**_{2}, **T**_{2}), respectively, in Equation (9).

R_{x}, R_{y}, and R_{z} are the rotation angles about the X, Y, and Z axes, and T_{x}, T_{y}, and T_{z} are the translations along the X, Y, and Z axes. x_{0} and y_{0} indicate the pixel location of the camera center, and f is the focal length. These parameters and the definitions of the coordinate systems are discussed in detail in Section 2; they are used in the line-based panoramic camera model (12).
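For illustration, the angles of Table 2 can be composed into a rotation matrix; the multiplication order R = R_z · R_y · R_x is an assumption, since the paper does not state the convention used by the calibration:

```python
import math

def rotation_matrix(rx, ry, rz):
    """Compose a 3x3 rotation from angles about the X, Y and Z axes.

    ASSUMPTION: the composition order R = Rz * Ry * Rx; swap the order
    if the calibration uses a different convention.
    """
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    Rx = [[1, 0, 0], [0, cx, -sx], [0, sx, cx]]
    Ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    Rz = [[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3))
                 for j in range(3)] for i in range(3)]

    return matmul(Rz, matmul(Ry, Rx))
```

With zero angles the result is the identity, and a 90° rotation about Z maps the X axis onto the Y axis, as expected for this convention.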

#### 5.2. Registration Results

Let the number of overlapping pixels in the binary image be n_{o} and the number of pixels in the union binary image be n_{u}. Finally, we define the overlap rate as n_{o}/n_{u}.
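Treating the two binary images as sets of pixel coordinates, the overlap rate can be sketched minimally as:

```python
def overlap_rate(edge_pixels, boundary_pixels):
    """Overlap rate n_o / n_u: the number of pixels present in both
    binary images (the overlap) divided by the number of pixels in
    their union, as defined in Section 5.2.
    """
    A, B = set(edge_pixels), set(boundary_pixels)
    union = A | B
    return len(A & B) / len(union) if union else 0.0
```

For example, two pixel sets sharing one of three distinct pixels give an overlap rate of 1/3; a higher rate after registration indicates better alignment of the projected LiDAR boundary with the image edges.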

## 6. Conclusions

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## References

1. Cornelis, N.; Leibe, B.; Cornelis, K.; Van Gool, L. 3D urban scene modeling integrating recognition and reconstruction. Int. J. Comput. Vis. **2008**, 78, 121–141.
2. Wonka, P.; Muller, P.; Watson, B.; Fuller, A. Urban design and procedural modeling. In ACM SIGGRAPH 2007 Courses; ACM: San Diego, CA, USA, 2007.
3. Zhuang, Y.; He, G.; Hu, H.; Wu, Z. A novel outdoor scene-understanding framework for unmanned ground vehicles with 3D laser scanners. Trans. Inst. Meas. Control **2015**, 37, 435–445.
4. Li, D. Mobile mapping technology and its applications. Geospat. Inf. **2006**, 4, 125.
5. Pu, S.; Vosselman, G. Building facade reconstruction by fusing terrestrial laser points and images. Sensors **2009**, 9, 4525–4542.
6. Brenner, C. Building reconstruction from images and laser scanning. Int. J. Appl. Earth Obs. Geoinf. **2005**, 6, 187–198.
7. Pu, S. Knowledge Based Building Facade Reconstruction from Laser Point Clouds and Images; University of Twente: Enschede, The Netherlands, 2010.
8. Wang, R. Towards Urban 3D Modeling Using Mobile LiDAR and Images; McGill University: Montreal, QC, Canada, 2011.
9. Park, Y.; Yun, S.; Won, C.S.; Cho, K.; Um, K.; Sim, S. Calibration between color camera and 3D LiDAR instruments with a polygonal planar board. Sensors **2014**, 14, 5333–5353.
10. Naroditsky, O.; Patterson, A.; Daniilidis, K. Automatic alignment of a camera with a line scan lidar system. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011; pp. 3429–3434.
11. Gong, X.; Lin, Y.; Liu, J. 3D LiDAR-camera extrinsic calibration using an arbitrary trihedron. Sensors **2013**, 13, 1902–1918.
12. Zhuang, Y.; Yan, F.; Hu, H. Automatic extrinsic self-calibration for fusing data from monocular vision and 3-D laser scanner. IEEE Trans. Instrum. Meas. **2014**, 63, 1874–1876.
13. Levinson, J.; Thrun, S. Automatic online calibration of cameras and lasers. Robot. Sci. Syst. **2013**, 2013, 24–28.
14. Mishra, R.; Zhang, Y. A review of optical imagery and airborne lidar data registration methods. Open Remote Sens. J. **2012**, 5, 54–63.
15. Habib, A.; Ghanma, M.; Morgan, M.; Al-Ruzouq, R. Photogrammetric and lidar data registration using linear features. Photogramm. Eng. Remote Sens. **2005**, 71, 699–707.
16. Brown, L. A survey of image registration techniques. ACM Comput. Surv. **1992**, 24, 325–376.
17. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. **1986**, 8, 679–698.
18. Ballard, D.H. Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognit. **1981**, 13, 111–122.
19. Von Gioi, R.G.; Jakubowicz, J.; Morel, J.-M.; Randall, G. LSD: A line segment detector. Image Process. Line **2012**, 2, 35–55.
20. Liu, L.; Stamos, I. Automatic 3D to 2D registration for the photorealistic rendering of urban scenes. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), San Diego, CA, USA, 20–25 June 2005; pp. 137–143.
21. Liu, L.; Stamos, I. A systematic approach for 2D-image to 3D-range registration in urban environments. Comput. Vis. Image Underst. **2012**, 116, 25–37.
22. Moghadam, P.; Bosse, M.; Zlot, R. Line-based extrinsic calibration of range and image sensors. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 3685–3691.
23. Borges, P.; Zlot, R.; Bosse, M.; Nuske, S.; Tews, A. Vision-based localization using an edge map extracted from 3D laser range data. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 4902–4909.
24. Lin, Y.; Wang, C.; Cheng, J.; Chen, B.; Jia, F.; Chen, Z.; Li, J. Line segment extraction for large scale unorganized point clouds. ISPRS J. Photogramm. Remote Sens. **2015**, 102, 172–183.
25. Yu, Y.; Li, J.; Guan, H.; Wang, C.; Yu, J. Semiautomated extraction of street light poles from mobile LiDAR point-clouds. IEEE Trans. Geosci. Remote Sens. **2015**, 53, 1374–1386.
26. Yokoyama, H.; Date, H.; Kanai, S.; Takeda, H. Pole-like objects recognition from mobile laser scanning data using smoothing and principal component analysis. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. **2011**, 38, 115–120.
27. El-Halawany, S.; Moussa, A.; Lichti, D.D.; El-Sheimy, N. Detection of road curb from mobile terrestrial laser scanner point cloud. In Proceedings of the 2011 ISPRS Workshop on Laser Scanning, Calgary, AB, Canada, 29–31 August 2011.
28. Tan, J.; Li, J.; An, X.; He, H. Robust curb detection with fusion of 3D-LiDAR and camera data. Sensors **2014**, 14, 9046–9073.
29. Ronnholm, P. Registration Quality - Towards Integration of Laser Scanning and Photogrammetry; European Spatial Data Research Network: Leuven, Belgium, 2011.
30. Patias, P.; Petsa, E.; Streilein, A. Digital Line Photogrammetry: Concepts, Formulations, Degeneracies, Simulations, Algorithms, Practical Examples; ETH Zürich: Zürich, Switzerland, 1995.
31. Schenk, T. From point-based to feature-based aerial triangulation. ISPRS J. Photogramm. Remote Sens. **2004**, 58, 315–329.
32. Zhang, Z.; Zhang, Y.; Zhang, J.; Zhang, H. Photogrammetric modeling of linear features with generalized point photogrammetry. Photogramm. Eng. Remote Sens. **2008**, 74, 1119–1127.
33. Mastin, A.; Kepner, J.; Fisher, J. Automatic registration of LiDAR and optical images of urban scenes. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 2639–2646.
34. Parmehr, E.G.; Fraser, C.S.; Zhang, C.; Leach, J. Automatic registration of optical imagery with 3D LiDAR data using statistical similarity. ISPRS J. Photogramm. Remote Sens. **2014**, 88, 28–40.
35. Wang, R.; Ferrie, F.P.; Macfarlane, J. Automatic registration of mobile LiDAR and spherical panoramas. In Proceedings of the 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Providence, RI, USA, 18–20 June 2012; pp. 33–40.
36. Torii, A.; Havlena, M.; Pajdla, T. From Google Street View to 3D city models. In Proceedings of the IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops), Kyoto, Japan, 29 September–2 October 2009; pp. 2188–2195.
37. Micusik, B.; Kosecka, J. Piecewise planar city 3D modeling from street view panoramic sequences. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 2906–2912.
38. Shi, Y.; Ji, S.; Shi, Z.; Duan, Y.; Shibasaki, R. GPS-supported visual SLAM with a rigorous sensor model for a panoramic camera in outdoor environments. Sensors **2013**, 13, 119–136.
39. Ji, S.; Shi, Y.; Shi, Z.; Bao, A.; Li, J.; Yuan, X.; Duan, Y.; Shibasaki, R. Comparison of two panoramic sensor models for precise 3D measurements. Photogramm. Eng. Remote Sens. **2014**, 80, 229–238.
40. PointGrey. Ladybug 3. Available online: https://www.ptgrey.com/ladybug3-360-degree-firewire-spherical-camera-systems (accessed on 28 December 2016).
41. SICK. LMS5xx. Available online: https://www.sick.com/de/en/product-portfolio/detection-and-ranging-solutions/2d-laser-scanners/lms5xx/c/g179651 (accessed on 28 December 2016).
42. Sairam, N.; Nagarajan, S.; Ornitz, S. Development of mobile mapping system for 3D road asset inventory. Sensors **2016**, 16, 367.
43. Point Grey Research, Inc. Geometric Vision Using Ladybug Cameras. Available online: http://www.ptgrey.com/tan/10621 (accessed on 28 December 2016).
44. Rabbani, T.; van den Heuvel, F.; Vosselmann, G. Segmentation of point clouds using smoothness constraint. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. **2006**, 36, 248–253.
45. Schnabel, R.; Wahl, R.; Klein, R. Efficient RANSAC for point-cloud shape detection. Comput. Graph. Forum **2007**, 24, 214–226.
46. Pu, S.; Rutzinger, M.; Vosselman, G.; Oude Elberink, S. Recognizing basic structures from mobile laser scanning data for road inventory studies. ISPRS J. Photogramm. Remote Sens. **2011**, 66, S28–S39.
47. Meguro, J.; Hashizume, T.; Takiguchi, J.; Kurosaki, R. Development of an autonomous mobile surveillance system using a network-based RTK-GPS. In Proceedings of the International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005.
48. Comaniciu, D.; Meer, P. Mean shift: A robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Mach. Intell. **2002**, 24, 603–619.

**Figure 1.** The MMS used in this study: (**a**) the vehicle; (**b**) the panoramic camera, laser scanners, and GPS receiver.

**Figure 3.** Comparison between (**a**) a panoramic image and (**b**) a frame image. (The meaning of the Chinese characters on the building is Supermarket for Logistics in Central China).

**Figure 4.** Differences between the spherical and panoramic camera models. (**a**) The dashed line shows the ray through 3D point **P**, panoramic image point **u**, and sphere center **S**; (**b**) the solid line shows the ray through 3D point **P′**, mono-camera image point **u**_{c}, panoramic image point u, and the mono-camera projection center **C**.

**Figure 5.** Images of Cameras 0–5: (**a**) 6 fish-eye images; (**b**) 6 rectified images. (The meaning of the Chinese characters on the building is Supermarket for Logistics in Central China).

**Figure 6.** Global and local coordinate systems of the multi-camera rig under a cylindrical projection: (**a**) the global panoramic camera coordinate system and (**b**) six local coordinate systems of the rectified cameras.

**Figure 8.** Line segment fitting for a building patch: (**a**) projected points; (**b**) boundary points; (**c**) fitted lines using the conventional least squares method; and (**d**) fitted lines using regularity constraints.

**Figure 10.** Overview of the test data: (**a**) the test area in Google Earth; (**b**) 3D point cloud of the test area.

**Figure 11.** Alignment of the two datasets before and after registration based on the panoramic camera model with lens IDs 0–4: (**a**,**b**) the LiDAR points projected onto a panoramic image before and after registration, respectively; (**c**,**d**) the 3D point cloud rendered with the corresponding panoramic image pixels.

**Figure 12.** Check point distribution shown on the panoramic image. (The meaning of the Chinese characters on the building is Supermarket for Logistics in Central China).

**Figure 13.** The residuals of the check points before and after registration based on the panoramic camera model. The vertical axis is the residual in pixels; the horizontal axis is (**a**) the ID of the check points and (**b**) the ID of the lens and check points (lens ID-check point ID).

**Figure 14.** Linear features of the two datasets: (**a**) EDISON edge pixels in the panoramic image; and (**b**) boundary points in the LiDAR point cloud.

**Figure 15.** Definition of the overlap rate: (**a**) image edge pixels; (**b**) LiDAR projected boundary points; (**c**) union of (a,b); (**d**) overlap of (a,b); (**e**) composition of (c) with (d) highlighted.

Parameter | POS | EOP |
---|---|---|
X (m) | 38,535,802.519 | −0.3350 |
Y (m) | 3,400,240.762 | −0.8870 |
Z (m) | 76,211.089 | 0.4390 |
$\phi $ (°) | 0.2483 | −1.3489 |
$\omega $ (°) | 0.4344 | 0.6250 |
$\kappa $ (°) | 87.5076 | 1.2000 |

**Table 2.**Parameters of mono-cameras in the panoramic camera model (image size is 1616 × 1232 in pixels).

Lens ID | R_{x} (Radians) | R_{y} (Radians) | R_{z} (Radians) | T_{x} (m) | T_{y} (m) | T_{z} (m) | x_{0} (Pixels) | y_{0} (Pixels) | f (Pixels) |
---|---|---|---|---|---|---|---|---|---|
0 | 2.1625 | 1.5675 | 2.1581 | 0.0416 | −0.0020 | −0.0002 | 806.484 | 639.546 | 400.038 |
1 | 1.0490 | 1.5620 | −0.2572 | 0.0114 | −0.0400 | 0.0002 | 794.553 | 614.885 | 402.208 |
2 | 0.6134 | 1.5625 | −1.9058 | −0.0350 | −0.0229 | 0.0006 | 783.593 | 630.813 | 401.557 |
3 | 1.7005 | 1.5633 | −2.0733 | −0.0328 | 0.0261 | −0.0003 | 790.296 | 625.776 | 400.521 |
4 | −2.2253 | 1.5625 | −0.9974 | 0.0148 | 0.0388 | −0.0003 | 806.926 | 621.216 | 406.115 |
5 | −0.0028 | 0.0052 | 0.0043 | 0.0010 | −0.0006 | 0.06202 | 776.909 | 589.499 | 394.588 |

Model | Spherical Deltas | Spherical Errors | Panoramic Deltas | Panoramic Errors |
---|---|---|---|---|
X (m) | −3.4372 × 10^{−2} | 1.1369 × 10^{−3} | 3.4328 × 10^{−2} | 1.0373 × 10^{−3} |
Y (m) | 1.0653 | 1.2142 × 10^{−3} | 1.0929 | 1.0579 × 10^{−3} |
Z (m) | 1.9511 × 10^{−1} | 9.9237 × 10^{−4} | 2.2075 × 10^{−1} | 8.0585 × 10^{−4} |
$\phi $ (°) | −1.2852 × 10^{−2} | 1.4211 × 10^{−3} | −1.4731 × 10^{−2} | 1.0920 × 10^{−3} |
$\omega $ (°) | 5.8824 × 10^{−4} | 1.4489 × 10^{−4} | 1.5866 × 10^{−3} | 1.2430 × 10^{−3} |
$\kappa $ (°) | −7.9019 × 10^{−3} | 8.4789 × 10^{−4} | −6.7691 × 10^{−3} | 7.7509 × 10^{−4} |
RMSE (pixels) | 4.718 | | 4.244 | |

Lens ID | Before (%) | After (%) |
---|---|---|
0 | 7.80 | 8.29 |
1 | 8.31 | 10.30 |
2 | 11.32 | 11.83 |
3 | 9.84 | 9.90 |
4 | 7.42 | 7.54 |

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Cui, T.; Ji, S.; Shan, J.; Gong, J.; Liu, K. Line-Based Registration of Panoramic Images and LiDAR Point Clouds for Mobile Mapping. *Sensors* **2017**, *17*, 70.
https://doi.org/10.3390/s17010070
