# A Precise One-Step Registration Methodology for Optical Imagery and LiDAR Data Using Virtual Point Primitives


## Abstract


## 1. Introduction

#### 1.1. Related Work

#### 1.1.1. Registration Primitive

- (1)
- Point features are used as registration primitives. Owing to the semi-random, discrete character of airborne LiDAR point clouds, the horizontal accuracy of point features extracted from them is usually lower than the resolution of the remote sensing images. In other words, it is impossible to select registration points on the LiDAR data with a horizontal accuracy equivalent to that of the points selected on the images [29].
- (2)
- Linear features have been widely used as registration primitives because of their varied mathematical expressions and ease of extraction [30]. In LiDAR point clouds, high-precision line features can be obtained from the intersection of planar patches, which eliminates the influence of the semi-random discrete errors in the point cloud data [9]. At present, most line-based registration methods rely on straight-line features; registration methods based on curve features remain relatively rare [18].
- (3)
- Patch features used as registration primitives are generally sets of coplanar points obtained by spatial statistical methods [31], which can also eliminate the accidental errors caused by the semi-random discrete character of the point cloud data. However, according to the perspective imaging principle of photogrammetry, the images of target objects tend to be distorted, deformed, or occluded. Coincidence similarity measures or point-to-surface distances constrain only the elevation error well and ignore the influence of the planimetric error [31,32]. Therefore, a new form of registration primitive is urgently needed that both adapts to the semi-random discrete character of the point cloud data and eliminates the influence of these uncertain errors.

#### 1.1.2. Registration Transformation Model

- (1)
- In early research, LiDAR data were converted to two-dimensional images. Image-based methods make full use of existing image registration algorithms, which simplifies the registration process. Mastin et al. [33] suggested using mutual information as a similarity measure when registering LiDAR point clouds and aerial images in 2D–2D mode; related work includes an improved frequency-based technique (FBT) for registering low-resolution optical images with LiDAR data, the scale-invariant feature transform (SIFT) algorithm [34,35] for registering LiDAR data with photogrammetric images, and salient image disks (SIDs) for extracting control points to register LiDAR data with optical satellite images. Experimental results have shown that the SIDs method outperforms the other techniques for natural scenes. However, the inevitable errors and mismatches caused by converting irregularly spaced laser scanning points into digital images (an interpolation process) mean that the registration accuracy may not be satisfactory. Zhu et al. [36] propose a two-step registration method in which a coarse registration first achieves a rough global alignment of the aerial image and the LiDAR intensity image, and a fine registration is then performed by constructing a discriminative descriptor. The whole registration process is relatively complex, achieves approximately 2-pixel accuracy, and requires a large amount of computation.
- (2)
- In other cases, the geometric properties of the two datasets are fully utilized. Most existing methodologies rely on point primitives: some researchers apply the iterative closest point (ICP) algorithm or its variants to establish the transformation model [37,38]; in other work, dense photogrammetric points are first extracted by stereo-image matching, and 3D-to-3D point cloud registration algorithms such as ICP or structure from motion (SfM) are then applied to establish the transformation model [39,40,41,42]. Further research has achieved surface-to-surface registration by interpolating both datasets into a uniform grid and estimating the necessary shifts from the elevations at corresponding grid posts [43,44]. Two observations arise from these methods. First, minimizing the differences along the z-direction works best where there are abundant flat building roofs, as over urban areas; these methodologies are mostly implemented within comprehensive automatic registration procedures [45]. Second, approaches that process the photogrammetric data can produce breaklines or patches in object space [46], but their registration accuracy may be limited by the quality of the image matching. Moreover, methods in this category require stereo images covering the same area as the point clouds, which increases the cost of data acquisition. For low-cost unmanned aerial systems, Yang et al. [47] propose a coarse-to-fine method that corrects the trajectory and minimizes the depth discrepancy between SfM results and the raw laser scans, achieving accurate non-rigid registration between the image sequence and the raw laser scans collected by a low-cost UAV system and thereby improving the LiDAR point cloud. The registration process described in this paper allows a simpler and more robust solution of the matching problem within overlapping images [48].

#### 1.2. Paper Objective

- (1)
- Definition and expression of virtual point features based on linear features

- ●
- Research on the extraction of straight lines and curves from the LiDAR point cloud data;
- ●
- The definition and expression of virtual point registration primitives from different line features.

- (2)
- The 2D–3D direct registration transformation model based on virtual point features
- ●
- A robust direct registration model for remote sensing images and LiDAR point cloud data based on virtual point features, whose registration results do not rely on the initial values of the model parameters;
- ●
- Establishment of a direct registration model between remote sensing image and LiDAR point cloud data;
- ●
- A joint solution model of the registration transformation model parameters and the auxiliary parameters when generating the virtual points.

#### 1.3. Article Structures

## 2. Detection and Selection of the Linear Registration Primitives

#### 2.1. Building Edges Extraction and Feature Selection

#### 2.2. Contour Extraction Based on Double Threshold Alpha Shapes Algorithm

Set two thresholds $\alpha_1$ and $\alpha_2$ ($\alpha_1 = 2.5\alpha_2$), and obtain the qualified line segment (LS) sets $\partial {S}_{1}$ and $\partial {S}_{2}$ of the point set $S$. Select one of the optional line segments ${l}_{1pq}$ from the edge set $\partial {S}_{1}$, where point $p$ and point $q$ are the two endpoints of ${l}_{1pq}$. In the undirected graph $G$ composed of the points of $S$ and the edges of $\partial {S}_{2}$, differing from the condition of $\partial {S}_{1}$, the points $p$ and $q$ are not always adjacent. However, starting from point $p$ and passing through several nodes, a path to point $q$ can always be generated; the path with the smallest length is recorded as ${l}_{\mathrm{min}\_pq}$. A path selection mechanism is then set up, as (2) shows, to select ${l}_{pq}$ as the final path from ${l}_{\mathrm{min}\_pq}$ and ${l}_{1\_pq}$. The same operation is performed on the edge set $\partial {S}_{2}$. Iterating over all the edges yields the path set {${l}_{1}$, ${l}_{2}$, ${l}_{3}$, …$,{l}_{n}$}. Finally, all the paths are connected in turn to obtain the high-precision shape of the point set. The double threshold alpha shapes algorithm consists of the following two steps, described in detail below.

- (1)
- Obtaining dual threshold α-shape

Obtain the α-shapes of the point set under the thresholds $\alpha_1$ and $\alpha_2$, respectively. The literature has proved that the α-shape under any threshold is a sub-shape of $DT\left(S\right)$, which means $\partial {S}_{1}\subset DT\left(S\right),\partial {S}_{2}\subset DT\left(S\right)$. Therefore, the process of obtaining the α-shape is as follows: first, use the point-by-point insertion algorithm to construct the Delaunay triangulation $DT\left(S\right)$ of the point set $S$ (see [52] for the detailed steps of the algorithm) and then apply the alpha shapes test to each edge of $DT\left(S\right)$ in turn, as shown in Figure 7. Let $pq$ (points $p$ and $q$ being adjacent boundary points) be an edge of $DT\left(S\right)$ and let circle $C$ be a circle that passes through $p$ and $q$ with radius $\alpha$ (the coordinates of the circle center are given by (1) and (2)). If there are no other vertices inside circle $C$, then the edge $pq$ belongs to the α-shape.

$\left({x}_{p},{y}_{p}\right):$ | Coordinate of point p; |

$\left({x}_{q},{y}_{q}\right)$: | Coordinate of point q; |

$\left({x}_{c},{y}_{c}\right):$ | Coordinate of point c; c is the center of circle C; |

$\alpha :$ | Radius of circle C. |
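To make the empty-circle test concrete, the following sketch tests candidate edges of a 2D point set against a circle of radius α through their endpoints, using the center geometry of Equations (1) and (2). It is a brute-force illustration under stated assumptions: the paper restricts candidates to Delaunay edges, while here every point pair is tested; the function name and tolerance are our own.

```python
import numpy as np

def alpha_shape_edges(points, alpha):
    """Brute-force alpha-shape edge test for a 2D point set.

    An edge pq belongs to the alpha-shape if some circle of radius
    alpha passing through p and q contains no other point of the set.
    The paper restricts candidates to Delaunay edges for efficiency;
    this sketch simply tests every pair.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    result = []
    for i in range(n):
        for j in range(i + 1, n):
            p, q = pts[i], pts[j]
            half = np.linalg.norm(q - p) / 2.0
            if half > alpha or half == 0.0:   # chord longer than the diameter
                continue
            mid = (p + q) / 2.0
            d = (q - p) / (2.0 * half)        # unit direction of pq
            normal = np.array([-d[1], d[0]])  # unit normal to pq
            h = np.sqrt(alpha**2 - half**2)   # center offset, cf. Equations (1), (2)
            for center in (mid + h * normal, mid - h * normal):
                dist = np.linalg.norm(pts - center, axis=1)
                others = np.delete(dist, [i, j])
                if np.all(others >= alpha - 1e-9):  # empty circle found
                    result.append((i, j))
                    break
    return result
```

For a unit square with one interior point, only the four boundary edges survive the test, matching the intuitive α-shape.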

- (2)
- Optimization of boundary path

- ✧
- As Figure 8a shows, if the length of ${l}_{1}$ is more than 5 times that of ${l}_{2}$, then discard ${l}_{1}$ and keep ${l}_{2}$;
- ✧
- As Figure 8b shows, if the two adjacent edges of ${l}_{2}$ are close to vertical (more than 60 degrees) and all the distances from the endpoints of ${l}_{1}$ to the adjacent edges of ${l}_{2}$ are small, discard ${l}_{2}$ and keep ${l}_{1}$;
- ✧
- As Figure 8c shows, if the two adjacent sides of ${l}_{1}$ and ${l}_{2}$ are close to parallel, and the distance from the end point on ${l}_{1}$ to ${l}_{2}$ is less than a certain threshold (such as half the average point spacing), then ${l}_{1}$ is discarded and ${l}_{2}$ is retained.
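The candidate path ${l}_{\mathrm{min}\_pq}$ above is a shortest path in the undirected graph of boundary edges. A minimal Dijkstra sketch could look like the following; the function name and the `(node_a, node_b, length)` edge-list format are assumptions for illustration.

```python
import heapq

def shortest_path(edges, p, q):
    """Dijkstra shortest path between boundary points p and q in the
    undirected graph formed by the boundary edges, as used to find the
    candidate path l_min_pq. `edges` is a list of (a, b, length) tuples.
    Returns (path, total_length), or (None, inf) if q is unreachable."""
    graph = {}
    for a, b, w in edges:
        graph.setdefault(a, []).append((b, w))
        graph.setdefault(b, []).append((a, w))
    dist, prev = {p: 0.0}, {}
    heap = [(0.0, p)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == q:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if q not in dist:
        return None, float("inf")
    path, node = [q], q
    while node != p:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[q]
```

The returned length can then be compared against the length of the direct edge ${l}_{1\_pq}$ by the filtering rules above.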

#### 2.3. Straight Linear Feature Simplification Based on Least Square Algorithm

The algorithm uses a distance threshold ${d}_{max}$ and a length threshold ${L}_{en}$. The detailed steps of the algorithm are as follows:

- (1)
- Select three consecutive vertices, A, B, and C, of the polygon in order, use the least square method to fit the straight line L, and calculate the distance from the vertices A, B, C to the straight line L. If any of the distances are greater than ${d}_{max}$, then go to step (4); otherwise, let U = $\left\{\mathrm{A},\mathrm{B},\mathrm{C}\right\}$, and go to step (2);
- (2)
- Let set U have two end points p and q, and extend in both directions from p and q, respectively. Each new vertex encountered during the growth is judged: if its distance to the line L is less than ${d}_{max}$, add it to the set U and use it as the new starting point to continue the growth; otherwise, growth stops at that vertex in that direction. Continue until both directions are finished;
- (3)
- Determine the length of the set U. If the length of U is greater than the threshold ${L}_{en}$, keep the two end points of the set and discard the middle vertices;
- (4)
- If there are three consecutive vertices remaining to be judged, go to step (1); otherwise, update the length threshold ${L}_{en}$: if ${L}_{en}$ is greater than 2~3 times the average point spacing, reduce it to ${L}_{en} = 0.8{L}_{en}$, and go to step (1).
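Steps (1) and (2) of the simplification can be sketched as a least-squares line fit followed by bidirectional growth. The sketch below is an illustration under assumptions: the function names are ours, a total least-squares (PCA) fit stands in for the unspecified least-squares variant, and the length test of steps (3) and (4) is kept outside the function.

```python
import numpy as np

def fit_line_tls(points):
    """Fit a line to 2D points by total least squares (PCA):
    returns a point on the line and its unit direction."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # principal direction = first right-singular vector of the centered data
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

def point_line_distance(p, centroid, direction):
    """Perpendicular distance from point p to the fitted line."""
    v = np.asarray(p, dtype=float) - centroid
    return abs(v[0] * direction[1] - v[1] * direction[0])

def grow_segment(vertices, start, d_max):
    """Steps (1)-(2): seed with three consecutive vertices, then extend
    the collinear set in both directions while every new vertex stays
    within d_max of the fitted line. Returns the (lo, hi) index range of
    the grown set, or None if the seed itself fails the distance test."""
    verts = np.asarray(vertices, dtype=float)
    seed = verts[start:start + 3]
    c, d = fit_line_tls(seed)
    if any(point_line_distance(p, c, d) > d_max for p in seed):
        return None  # corresponds to "go to step (4)"
    lo, hi = start, start + 2
    while hi + 1 < len(verts) and point_line_distance(verts[hi + 1], c, d) <= d_max:
        hi += 1
    while lo - 1 >= 0 and point_line_distance(verts[lo - 1], c, d) <= d_max:
        lo -= 1
    return lo, hi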

#### 2.4. Curve Feature Simplification Based on Least Square Algorithm

When the condition ${N}_{0} > 3$ is satisfied, the arc segment C can be fitted by the least square method to obtain the center o, the radius R, and the arc angle θ (Figure 10). For any arc, the height along the roof boundary is constant; it can be obtained directly during feature extraction and recorded as ${Z}_{0}$. Any arc can then be expressed as shown in (3):

${X}_{1},{Y}_{1}$ and ${X}_{2},{Y}_{2}$: | The coordinates of the two distinct end points of the arc; |

${Z}_{0}$: | The constant height value of the arc, which can usually be obtained directly from the building edge points; |

(X_{0}, Y_{0}, Z_{0}): | The center coordinate of the space circle where the arc is located; |

R: | The radius of the circle where the arc is located; |

θ: | The polar coordinate angle of the current point in the transformed coordinate system. |
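The least-squares recovery of the arc's center and radius mentioned above can be sketched with the algebraic (Kåsa) circle fit; the paper does not specify which fitting variant it uses, so this choice and the function name are assumptions.

```python
import numpy as np

def fit_circle_lsq(points):
    """Algebraic (Kåsa) least-squares circle fit to 2D points.

    Solves the linear system obtained by rewriting the circle equation
    as  x^2 + y^2 = 2*x0*x + 2*y0*y + (r^2 - x0^2 - y0^2),
    which is linear in the unknowns (x0, y0, c)."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    (x0, y0, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + x0**2 + y0**2)
    return x0, y0, r
```

For points lying exactly on a circle, the fit recovers the center and radius exactly; with noisy boundary points it minimizes the algebraic residual.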

#### 2.5. The Selection of the Linear Registration Primitives

${V}_{Profile}$: | The volume of the ith cuboid; |

${L}_{segment}^{i}$: | The ith linear feature segment associated with the ith cuboid; |

$N:$ | The threshold of the total number of points per unit area; |

${h}^{i}$: | The average height of the linear feature; |

${d}_{constant}:$ | The distance that the facet moves outwardly along the facet normal vector. |

## 3. Registration Primitive Expression and Transformation Model

#### 3.1. The Generation of the VPs

#### 3.1.1. The VPs from Straight Lines

For the line segment ${P}_{A}{P}_{B}$ of LiDAR space corresponding to image point p, introduce a parameter λ; the coordinates of the virtual point in LiDAR space can then be expressed by the known ${P}_{A}$ and ${P}_{B}$ coordinates and the parameter λ, as (7) shows:

(X_{A},Y_{A}, Z_{A}): | Coordinate of point A in LiDAR data; |

(X_{B},Y_{B}, Z_{B}): | Coordinate of point B in LiDAR data; |

(X_{vp},Y_{vp}, Z_{vp}): | Coordinate of point VP; |

λ: | The auxiliary parameter. |
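Assuming the usual linear parameterization of a point on segment AB, the generation of a VP per Equation (7) can be sketched as follows (the function name is an assumption):

```python
import numpy as np

def virtual_point(p_a, p_b, lam):
    """Virtual point on segment AB in LiDAR space, parameterized by the
    auxiliary parameter lambda, assuming the usual linear form
    VP = P_A + lambda * (P_B - P_A) of Equation (7)."""
    p_a = np.asarray(p_a, dtype=float)
    p_b = np.asarray(p_b, dtype=float)
    return p_a + lam * (p_b - p_a)
```

With λ = 0 the VP coincides with A, with λ = 1 it coincides with B, and intermediate values slide it along the line feature during the adjustment.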

#### 3.1.2. VPs from Curve Features

(X_{0}, Y_{0}, Z_{0}): | The center coordinate of the space circle where the arc is located; |

R: | The radius of the circle where the arc is located; |

θ: | The polar coordinate angle of the current point in the transformed coordinate system; |

X_{1},Y_{1} and X_{2},Y_{2}: | The coordinates of the two distinct end points of the arc. |
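Assuming the polar-coordinate form of the arc in (3), a VP on the space circle containing the arc can be generated from the auxiliary angle θ as follows (the function name is an assumption):

```python
import numpy as np

def virtual_point_arc(center, radius, theta):
    """Virtual point on the space circle containing the arc, generated
    from the auxiliary angle theta, assuming the polar form
    (X0 + R*cos(theta), Y0 + R*sin(theta), Z0)."""
    x0, y0, z0 = center
    return np.array([x0 + radius * np.cos(theta),
                     y0 + radius * np.sin(theta),
                     z0])
```

As with λ for straight lines, θ slides the VP along the curve feature and is estimated jointly with the transformation parameters.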

#### 3.2. The One-Step Transformation Model of the Registration

${M}_{1},{M}_{2},\dots ,{M}_{t}$: | The unknown parameters of the transformation model; |

t: | The number of the unknown parameters of the transformation model; |

${\lambda}_{VP}$: | The introduced auxiliary parameter with one pair point–straight line registration primitive; |

${\theta}_{VP}$: | The introduced auxiliary parameter with one pair point–curve registration primitive. |

#### 3.3. The Coefficient Matrix of the VPs

${v}_{x},\text{}{v}_{y}$: | The residual variables; |

${l}_{x},\text{}{l}_{y}$: | The constant value of the linearization equation; |

${A}_{11},\dots ,{A}_{1t}$ and ${A}_{21},\dots ,{A}_{2t}$: | The coefficients of the transformation model parameters in the linearization equation; |

${B}_{11},\text{}{B}_{21}$: | The coefficients of the auxiliary parameter ${\lambda}_{VP}$ in the linearization equation; |

$t$: | The number of the unknown parameter of the transformation model; |

$\Delta {\lambda}_{VP}$: | The change value of the auxiliary parameter ${\lambda}_{VP}$. |

${v}_{xi},\text{}{v}_{yi}$: | The residual variables for the ith image; |

${l}_{xi},\text{}{l}_{yi}$: | The constant value of the linearization equation for the ith image; |

${A}_{2i-1,1},\dots ,{A}_{2i-1,t}$ and ${A}_{2i,1},\dots ,{A}_{2i,t}$: | The coefficients of the transformation model parameters in the linearization equation for the ith image; |

${B}_{2i-1,1},\text{}{B}_{2i,1}$: | The coefficients of the auxiliary parameter ${\lambda}_{VP}$ in the linearization equation for the ith image; |

$t$: | The number of the unknown parameter of the transformation model; |

$\nabla {\lambda}_{VP}$: | The change value of the auxiliary parameter ${\lambda}_{VP}$; |

$i$: | The count number. |

t: | The number of the unknown parameters of the transformation model; |

${n}^{i}$: | The primitive number on the ith image; |

k: | The number of images; |

V: | The matrix which consists of residual variables. |

${A}_{2{n}^{i}\times t}^{i}$: | The coefficients of the transformation model parameters in the linearization equation of the ith image; |

${B}_{2{n}^{i}\times t}^{i}$: | The coefficients of the auxiliary parameter ${\lambda}_{VP}$ in the linearization equation of the ith image; |

$i$: | The count number. |

${C}_{11},\text{}{C}_{12}$: | The coefficients of the auxiliary parameter ${\theta}_{VP}$ from the linearization equation; |

${A}_{11},\dots ,{A}_{1t}$ and ${A}_{21},\dots ,{A}_{2t}$: | The coefficients of the transformation model parameters in the linearization equation; |

$t$: | The number of the unknown parameters of the transformation model; |

$\Delta {\theta}_{VP}$: | The change value of the auxiliary parameter ${\theta}_{VP}$. |

${v}_{xi},\text{}{v}_{yi}$: | The residual variables for the ith image; |

${l}_{xi},\text{}{l}_{yi}$: | The constant value of the linearization equation for the ith image; |

${A}_{2i-1,1},\dots ,{A}_{2i-1,t}$ and ${A}_{2i,1},\dots ,{A}_{2i,t}$: | The coefficients of the transformation model parameters in the linearization equation for the ith image; |

${B}_{2i-1,1},\text{}{B}_{2i,1}$: | The coefficients of the auxiliary parameter ${\theta}_{VP}$ in the linearization equation for the ith image; |

$t$: | The number of the unknown parameter of the transformation model; |

$\nabla {\theta}_{VP}$: | The change value of the auxiliary parameter ${\theta}_{VP}$; |

$i$: | The count number. |

t: | The number of the unknown parameters of the transformation model; |

${n}^{i}$: | The primitive number on the ith image; |

k: | The number of images; |

V: | The matrix which consists of residual variables; |

${A}_{2{n}^{i}\times t}^{i}$: | The coefficients of the transformation model parameters in the linearization equation of the ith image; |

${C}_{2{n}^{i}\times t}^{i}$: | The coefficients of the auxiliary parameter ${\theta}_{VP}$ in the linearization equation of the ith image. |
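The joint solution of the transformation-model corrections and the auxiliary-parameter corrections can be sketched as one Gauss-Newton step on the stacked linearized system $V = A\Delta M + B\Delta\lambda - l$ (or with $C$ and $\Delta\theta$ for curve primitives). The layout below is a simplified assumption: a single image, with all auxiliary parameters gathered into one block.

```python
import numpy as np

def joint_adjustment_step(A, B, l):
    """One Gauss-Newton step of the joint solution: residuals
    V = A @ dM + B @ d_aux - l are minimized simultaneously over the
    transformation-model corrections dM and the auxiliary-parameter
    corrections d_aux (lambda_VP or theta_VP of the VPs)."""
    J = np.hstack([A, B])                       # stacked design matrix
    dx, *_ = np.linalg.lstsq(J, l, rcond=None)  # least-squares solution
    t = A.shape[1]                              # number of model parameters
    return dx[:t], dx[t:]
```

In the full procedure this step is iterated: after each solve the VPs are regenerated from the updated auxiliary parameters and the system is re-linearized.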

$f$: | Camera focal length; |

(${X}_{s},{Y}_{s},{Z}_{s}$): | The coordinates of the camera perspective center (exterior orientation parameters); |

${a}_{i},{b}_{i},{c}_{i};\left(i=1,2,3\right)$: | Rotation matrix coefficients based on the external angle elements; |

(${X}_{A}$, ${Y}_{A}$, ${Z}_{A}$): | Coordinate of point A in LiDAR data; |

(${X}_{B}$, ${Y}_{B}$, ${Z}_{B}$): | Coordinate of point B in LiDAR data; |

(${X}_{vp}$, ${Y}_{vp}$, ${Z}_{vp}$): | Coordinate of point VP in LiDAR data; |

$\lambda$ or $\theta$: | The auxiliary parameter; |

$R$: | The radius of the circle where the curve feature is located; |

$\left({x}_{vp},{y}_{vp}\right):$ | Coordinate of point VP in image space. |
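With these symbols, projecting a VP into image space follows the standard photogrammetric collinearity equations. The sketch below assumes the conventional form with rotation matrix rows $(a_1\,a_2\,a_3)$, $(b_1\,b_2\,b_3)$, $(c_1\,c_2\,c_3)$ and the principal point at the image origin; the function name is an assumption.

```python
import numpy as np

def collinearity_project(vp, f, Xs, Ys, Zs, R):
    """Standard collinearity equations: project a LiDAR-space virtual
    point into image space given the focal length f, the perspective
    center (Xs, Ys, Zs), and the 3x3 rotation matrix R whose rows hold
    the coefficients (a1 a2 a3; b1 b2 b3; c1 c2 c3)."""
    dX, dY, dZ = vp[0] - Xs, vp[1] - Ys, vp[2] - Zs
    num_x = R[0, 0] * dX + R[0, 1] * dY + R[0, 2] * dZ
    num_y = R[1, 0] * dX + R[1, 1] * dY + R[1, 2] * dZ
    den   = R[2, 0] * dX + R[2, 1] * dY + R[2, 2] * dZ
    x = -f * num_x / den
    y = -f * num_y / den
    return x, y
```

Linearizing these two equations with respect to the model parameters and the auxiliary parameter of each VP yields the coefficient matrices A and B (or C) described above.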

#### 3.4. The Iteration of the Registration Procedure

## 4. Experiments and Results

#### 4.1. Test Data and Results

#### 4.2. Registration Accuracy Evaluation Method

The solved registration model is used to project the LiDAR check points onto the image, yielding coordinates (${x}_{r}$, ${y}_{r}$) that correspond to the tie points (x, y) on the image; the horizontal distance between the two coordinates is defined as the registration error. The details of the test sites are shown in Table 4. Among them, the Nanning test site uses both straight lines and curves to solve the registration model parameters, and common check points are used to evaluate the registration achieved with the straight and curved features, respectively.
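The horizontal-deviation check described above can be sketched as follows; the function name and the RMSE summary statistic are assumptions for illustration.

```python
import numpy as np

def registration_error(projected, tie_points):
    """Horizontal distance between each projected LiDAR check point
    (x_r, y_r) and its tie point (x, y) on the image, plus the RMSE
    over all check points as a summary of registration accuracy."""
    proj = np.asarray(projected, dtype=float)
    tie = np.asarray(tie_points, dtype=float)
    d = np.linalg.norm(proj - tie, axis=1)      # per-point deviation
    return d, float(np.sqrt(np.mean(d ** 2)))   # deviations, RMSE
```

The same deviations can be reported in image space (pixels) or, after scaling by the ground sampling distance, in LiDAR point space (meters).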

The registration accuracy of **Test Site 1** is 0.636 pixels in image space and 0.0382 m in LiDAR point space. In **Test Site 2**, the registration accuracy based on the mixed features is 1.339 pixels in image space and 0.2 m in LiDAR point space. In **Test Site 3**, using the linear and curved features, respectively, the average registration accuracy was 1.029 pixels and 1.383 pixels in image space, and 0.103 m and 0.138 m in LiDAR space. For **Test Site 1**, the deviation is also low, meaning the registration result is stable. From the above, it is clear that the registration accuracy is determined by the image resolution and the registration primitives. Tests of the registration method on the same test site with straight-line and curve features, respectively, produced almost the same registration error (both within 2 pixels), while the accuracy with straight-line features was slightly better than with curve features.

## 5. The Discussion

#### 5.1. The Influence of the Semi-Random Discrete Characteristics

#### 5.2. The Influence of the Camera Lens Distortion

#### 5.3. The Effects of Registration Primitive Types

#### 5.4. Comparison of the Results with Other Methods

## 6. Conclusions

- (1)
- Due to the introduction of auxiliary parameters for the line and curve features, the registration method using the direct transformation model can largely eliminate processing errors and the influence of the semi-random attributes of the point cloud data. Without the influence of lens distortion, the registration accuracy can reach the sub-pixel level in image space.
- (2)
- Images obtained by non-metric cameras contain lens distortion, and the farther a point is from the image center, the greater the influence of the distortion. Therefore, to obtain higher registration accuracy, the lens distortion must first be removed from the images.
- (3)
- Different registration feature types have little effect on registration accuracy. Experiments show that the registration accuracy of straight-line features is slightly better than that of curve features, mainly because the accuracy of the virtual points is affected by the semi-random discrete properties of the point cloud.

## Author Contributions

## Funding

## Conflicts of Interest

## References

- Zhou, G.Q.; Zhou, X. Seamless Fusion of LiDAR and Aerial Imagery for Building Extraction. IEEE Trans. Geosci. Remote Sens.
**2014**, 52, 7393–7407. [Google Scholar] [CrossRef] - Armenakis, C.; Gao, Y.; Sohn, G. Co-registration of aerial photogrammetric and LiDAR point clouds in urban environments using automatic plane correspondence. Appl. Geomat.
**2013**, 5, 155–166. [Google Scholar] [CrossRef] - Koetz, B.; Morsdorf, F.; van der Linden, S.; Curt, T.; Allgöwer, B. Multi-source land cover classification for forest fire management based on imaging spectrometry and LiDAR data. For. Ecol. Manag.
**2008**, 256, 263–271. [Google Scholar] [CrossRef] - Awrangjeb, M.; Zhang, C.; Fraser, C.S. Automatic extraction of building roofs using LIDAR data and multispectral imagery. ISPRS J. Photogramm. Remote Sens.
**2013**, 83, 1–18. [Google Scholar] [CrossRef] [Green Version] - Yang, L.; Sheng, Y.; Wang, B. 3D reconstruction of building facade with fused data of terrestrial LiDAR data and optical image. Optik
**2016**, 127, 2165–2168. [Google Scholar] [CrossRef] - Csatho, B.; Schenk, T.; Kyle, P.; Wilson, T.; Krabill, W.B. Airborne laser swath mapping of the summit of Erebus volcano, Antarctica: Applications to geological mapping of a volcano. J. Volcanol. Geotherm. Res.
**2008**, 177, 531–548. [Google Scholar] [CrossRef] - Skaloud, J.; Lichti, D. Rigorous approach to bore-sight self-calibration in airborne laser scanning. ISPRS J. Photogramm. Remote Sens.
**2006**, 61, 47–59. [Google Scholar] [CrossRef] - Palenichka, R.M.; Zaremba, M.B. Automatic Extraction of Control Points for the Registration of Optical Satellite and LiDAR Images. IEEE Trans. Geosci. Remote Sens.
**2010**, 48, 2864–2879. [Google Scholar] [CrossRef] - Xiong, B.; Elberink, S.O.; Vosselman, G. A graph edit dictionary for correcting errors in roof topology graphs reconstructed from point clouds. ISPRS J. Photogramm. Remote Sens.
**2014**, 93, 227–242. [Google Scholar] [CrossRef] - Wu, Z.; Ni, M.; Hu, Z.; Wang, J.; Li, Q.; Wu, G. Mapping invasive plant with UAV-derived 3D mesh model in mountain area—A case study in Shenzhen Coast, China. Int. J. Appl. Earth Obs. Geoinform.
**2019**, 77, 129–139. [Google Scholar] [CrossRef] - Lopatin, J.; Fassnacht, F.E.; Kattenborn, T.; Schmidtlein, S. Mapping plant species in mixed grassland communities using close range imaging spectroscopy. Remote Sens. Environ.
**2017**, 201, 12–23. [Google Scholar] [CrossRef] - Lu, B.; He, Y. Species classification using Unmanned Aerial Vehicle (UAV)-acquired high spatial resolution imagery in a heterogeneous grassland. ISPRS J. Photogramm. Remote Sens.
**2017**, 128, 73–85. [Google Scholar] [CrossRef] - Johnson, K.M.; Ouimet, W.B.; Dow, S.; Haverfield, C. Estimating Historically Cleared and Forested Land in Massachusetts, USA, Using Airborne LiDAR and Archival Records. Remote Sens.
**2021**, 13, 4318. [Google Scholar] [CrossRef] - Kwan, C.; Gribben, D.; Ayhan, B.; Bernabe, S.; Plaza, A.; Selva, M. Improving Land Cover Classification Using Extended Multi-Attribute Profiles (EMAP) Enhanced Color, Near Infrared, and LiDAR Data. Remote Sens.
**2020**, 12, 1392. [Google Scholar] [CrossRef] - Bodensteiner, C.; Huebner, W.; Juengling, K.; Mueller, J.; Arens, M. Local multi-modal image matching based on self-similarity. In Proceedings of the 2010 IEEE International Conference on Image Processing, Hong Kong, China, 26–29 September 2010; pp. 937–940. [Google Scholar] [CrossRef]
- Bodensteiner, C.; Hubner, W.; Jungling, K.; Solbrig, P.; Arens, M. Monocular camera trajectory optimization using LiDAR data. In Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Barcelona, Spain, 6–13 November 2011; pp. 2018–2025. [Google Scholar]
- Al-Manasir, K.; Fraser, C.S. Automatic registration of terrestrial laser scanner data via imagery. Photogramm. Rec.
**2006**, 21, 255–268. [Google Scholar] [CrossRef] - Parmehr, E.G.; Fraser, C.S.; Zhang, C.; Leach, J. Automatic registration of optical imagery with 3D LiDAR data using statistical similarity. ISPRS J. Photogramm. Remote Sens.
**2014**, 88, 28–40. [Google Scholar] [CrossRef] - Baltsavias, E.P. Airborne laser scanning: Existing systems and firms and other resources. ISPRS J. Photogramm. Remote Sens.
**1999**, 54, 164–198. [Google Scholar] [CrossRef] - Csanyi, N.; Toth, C.K. Improvement of Lidar Data Accuracy Using Lidar-Specific Ground Targets. Photogramm. Eng. Remote Sens.
**2007**, 73, 385–396. [Google Scholar] [CrossRef] [Green Version] - Jung, I.-K.; Lacroix, S. A robust interest points matching algorithm. In Proceedings of the Eighth IEEE International Conference on Computer Vision, Vancouver, BC, Canada, 7–14 July 2001; Volume 2, pp. 538–543. [Google Scholar]
- Schenk, T.; Csathó, B. Fusion of LIDAR Data and Aerial Imagery for a Complete Surface Description. Int. Arch. Photogramm. Remote Sens.
**2002**, 34, 310–317. [Google Scholar] - Baltsavias, E.P. A comparison between photogrammetry and laser scanning. ISPRS J. Photogramm. Remote Sens.
**1999**, 54, 83–94. [Google Scholar] [CrossRef] - Habib, A.F. Aerial triangulation using point and linear features. ISPRS J. Photogramm. Remote Sens.
**1999**, 32, 137–141. [Google Scholar] - Habib, A.; Lee, Y.; Morgan, M. Bundle Adjustment with Self-Calibration of Line Cameras Using Straight Lines. In Proceedings of the Joint Workshop of ISPRS WG I/2, I/5 and IV/7, Hanover, Germany, 19–21 September 2001. [Google Scholar]
- Habib, A.; Asmamaw, A. Linear Features in Photogrammetry; Departmental Report # 451; The Ohio State University: Columbus, OH, USA, 1999. [Google Scholar]
- Habib, A.F.; Ghanma, M.S.; Morgan, M.F.; Mitishita, E. Integration of laser and photogrammetric data for calibration purposes. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci.
**2004**, 35, 170. [Google Scholar] - Habib, A.; Ghanma, M.; Mitishita, E. Photogrammetric Georeferencing Using LIDAR Linear and Aeria1 Features. Korean J. Geom.
**2005**, 5, 7–19. [Google Scholar] - Yang, B.; Chen, C. Automatic registration of UAV-borne sequent images and LiDAR data. ISPRS J. Photogramm. Remote Sens.
**2015**, 101, 262–274. [Google Scholar] [CrossRef] - Lv, F.; Ren, K. Automatic registration of airborne LiDAR point cloud data and optical imagery depth map based on line and points features. Infrared. Phys. Technol.
**2015**, 71, 457–463. [Google Scholar] [CrossRef] - Abayowa, B.O.; Yilmaz, A.; Hardie, R.C. Automatic registration of optical aerial imagery to a LiDAR point cloud for generation of city models. ISPRS J. Photogramm. Remote Sens.
**2015**, 106, 68–81. [Google Scholar] [CrossRef] - Habib, A.; Schenk, T. A new approach for matching surfaces from laser scanners and optical scanners. Int. Arch. Photogramm. Remote Sens.
**1999**, 32, 55–61. [Google Scholar] - Mastin, A.; Kepner, J.; Fisher, J. Automatic Registration of LiDAR and optial images of urban scene. In Proceedings of the IEEE Conference on Computer Vision and Patten Recognition, Miami, FL, USA, 20–25 June 2009; pp. 2639–2646. [Google Scholar]
- Parmehr, E.G.; Fraser, C.S.; Zhang, C.; Leach, J. Automatic Registration of Aerial Images with 3D LiDAR Data Using a Hybrid Intensity-Based Method. In Proceedings of the International Conference on Digital Image Computing Techniques & Applications, Fremantle, Australia, 3–5 December 2012. [Google Scholar]
- Axelsson, P. Processing of laser scanner data—algorithms and applications. ISPRS J. Photogramm. Remote Sens.
**1999**, 54, 138–147. [Google Scholar] [CrossRef] - Zhu, B.; Ye, Y.; Zhou, L.; Li, Z.; Yin, G. Robust registration of aerial images and LiDAR data using spatial constraints and Gabor structural features. ISPRS J. Photogramm. Remote Sens.
**2021**, 181, 129–147. [Google Scholar] [CrossRef] - Liu, Y. Improving ICP with easy implementation for free-form surface matching. Pattern Recognit.
**2004**, 37, 211–226. [Google Scholar] [CrossRef] [Green Version] - Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis.
**2004**, 60, 91–110. [Google Scholar] [CrossRef] - Schutz, C.; Jost, T.; Hugli, H. Multi-feature matching algorithm for free-form 3D surface registration. In Proceedings of the Fourteenth International Conference on Pattern Recognition, Brisbane, QLD, Australia, 20–20 August 1998; pp. 982–984. [Google Scholar]
- Habib, A.; Lee, Y.; Morgan, M. LIDAR data for photogrammetric georeferencing. In Proceedings of the Joint Workshop of ISPRS WG I/2, I/5 and IV/7, Hanover, Germany, 19–21 September 2001. [Google Scholar]
- Wong, A.; Orchard, J. Efficient FFT-Accelerated Approach to Invariant Optical–LIDAR Registration. Geosci. Remote Sens.
**2008**, 46, 17–25. [Google Scholar] [CrossRef] - Harrison, J.W.; Iles, P.J.W.; Ferrie, F.P.; Hefford, S.; Kusevic, K.; Samson, C.; Mrstik, P. Tessellation of Ground-Based LIDAR Data for ICP Registration. In Proceedings of the Canadian Conference on Computer and Robot Vision, Windsor, ON, Canada, 28–30 May 2008; pp. 345–351. [Google Scholar]
- Teo, T.-A.; Huang, S.-H. Automatic Co-Registration of Optical Satellite Images and Airborne Lidar Data Using Relative and Absolute Orientations. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.
**2013**, 6, 2229–2237. [Google Scholar] [CrossRef] - Liu, X. Airborne LiDAR for DEM generation: Some critical issues. Prog. Phys. Geogr.
**2008**, 32, 31–49. [Google Scholar] - Habib, A.; Schenk, T. Utilization of ground control points for image orientation without point identification in image space. In Proceedings of the SPRS Commission III Symposium: Spatial Information from Digital Photogrammetry and Computer Vision, Munich, Germany, 5–9 September 1994; Volume 32, pp. 206–211. [Google Scholar] [CrossRef]
- Schenk, T. Determining Transformation Parameters between Surfaces without Identical Points; Technical Report; Photogrammetry No. 15; Department of Civil and Environmental Engineering and Geodetic Science, OSU: Columbus, OH, USA, 1999; p. 22. [Google Scholar]
- Li, J.; Yang, B.; Chen, C.; Habib, A. NRLI-UAV: Non-rigid registration of sequential raw laser scans and images for low-cost UAV LiDAR point cloud quality improvement. ISPRS J. Photogramm. Remote Sens.
**2019**, 158, 123–145. [Google Scholar] [CrossRef] - Kilian, J.; Haala, N.; Englich, M. Capture and evaluation of airborne laser scanner data. Int. Arch. Photogramm. Remote Sens.
**1996**, 31, 383–388. [Google Scholar] - Habib, A.; Ghanma, M.; Morgan, M.; Al-Ruzouq, R. Photogrammetric and Lidar Data Registration Using Linear Features. Photogramm. Eng. Remote Sens.
**2005**, 71, 699–707. [Google Scholar] [CrossRef] - Lee, J.B.; Yu, K.Y. Coregistration of aerial photos, ALS data and digital maps using linear features. KOGSIS J.
**2006**, 14, 37–44. [Google Scholar] - Ma, R. DEM generation and building detection from lidar data. Photogramm. Eng. Remote Sens.
**2005**, 71, 847–854. [Google Scholar] [CrossRef] - Sampath, A.; Shan, J. Building boundary tracing and regularization from airborne lidar point clouds. Photogramm. Eng. Remote Sens.
**2007**, 73, 805–812. [Google Scholar] [CrossRef] [Green Version] - Edelsbrunner, H.; Kirkpatrick, D.; Seidel, R. On the shape of a set of points in the plane. IEEE Trans. Inf. Theory
**1983**, 29, 551–559. [Google Scholar] [CrossRef] [Green Version] - De Berg, M.; Van Kreveld, M.; Overmars, M.; Schwarzkopf, O.C. Computational Geometry; Springer: Berlin/Heidelberg, Germany, 2000. [Google Scholar]
- Lach, S.R.; Kerekes, J.P. Robust extraction of exterior building boundaries from topographic LiDAR data. In Proceedings of the Geoscience and Remote Sensing Symposium, Boston, MA, USA, 7–11 July 2008. [Google Scholar]
- Dorninger, P.; Pfeifer, N. A comprehensive automated 3D approach for building extraction, reconstruction, and regulariza-tion from airborne laser scanning point clouds. Sensors
**2008**, 8, 7323–7343. [Google Scholar] [CrossRef] [PubMed] [Green Version]

**Figure 1.** (**a**) The workflow of the proposed method: linear and curve feature detection, virtual point generation, and the registration transformation model. (**b**) The virtual point features and point features in the LiDAR point data and the images, respectively, and the overlay of the two datasets.

**Figure 5.** Geometric description of the alpha shapes algorithm and two special cases. (**a**) An appropriate α value. (**b**) The α value tends to infinity. (**c**) The α value tends to zero.

**Figure 6.** Problems with the alpha shapes algorithm. (**a**) α-shape of an uneven point set. (**b**) Manually vectorized shape of the uneven point set. (**c**) An excessive α value leads to obtuse angles. (**d**) A small α value leads to fragmentation.
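
The α-value behaviour described in the captions above can be reproduced with a brute-force boundary-edge test for alpha shapes (Edelsbrunner et al., 1983). This is an illustrative sketch, not the paper's implementation; the point set, α values, and O(n³) pair test are our own choices. The rule: an edge belongs to the α-shape boundary if some disc of radius α passing through its two endpoints contains no other point of the set.

```python
import math

def alpha_shape_edges(points, alpha):
    """Brute-force alpha-shape boundary edges: an edge (p, q) is on the
    boundary if a disc of radius `alpha` through p and q is empty of
    all other points."""
    edges = set()
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            (x1, y1), (x2, y2) = points[i], points[j]
            d = math.hypot(x2 - x1, y2 - y1)
            if d == 0 or d > 2 * alpha:
                continue  # no disc of radius alpha passes through both points
            # midpoint, plus offset to the two candidate disc centres
            mx, my = (x1 + x2) / 2, (y1 + y2) / 2
            h = math.sqrt(alpha ** 2 - (d / 2) ** 2)
            ux, uy = -(y2 - y1) / d, (x2 - x1) / d  # unit normal to the edge
            for cx, cy in ((mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)):
                if all(math.hypot(px - cx, py - cy) >= alpha - 1e-9
                       for k, (px, py) in enumerate(points) if k not in (i, j)):
                    edges.add((i, j))
                    break
    return edges

# Unit square corners plus centre: with alpha = 0.8 the four sides are
# boundary edges, while diagonals and edges through the centre are not.
pts = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]
print(sorted(alpha_shape_edges(pts, 0.8)))
```

With α too small no disc can span any pair of points, so the shape fragments into nothing, matching case (**c**) of Figure 5; a very large α tends toward the convex hull, matching case (**b**).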

**Figure 8.** Corresponding path filtering in different situations. (**a**) ${l}_{1}\ge 5{l}_{2}$. (**b**) The angle between ${l}_{1}$ and ${l}_{2}$ exceeds 60 degrees. (**c**) ${l}_{1}$ and ${l}_{2}$ are near the same line $l$.
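
A hypothetical sketch of the three rejection rules in Figure 8, for segments given as endpoint pairs. The thresholds (factor 5, 60 degrees) follow the caption; the collinearity tolerance and the near-parallel check in rule (c) are our assumptions.

```python
import math

def seg_len(s):
    (x1, y1), (x2, y2) = s
    return math.hypot(x2 - x1, y2 - y1)

def seg_angle(s):
    (x1, y1), (x2, y2) = s
    return math.atan2(y2 - y1, x2 - x1)

def keep_pair(l1, l2, collinear_tol=0.1):
    """Return False if the segment pair matches any filtering case of Figure 8."""
    # (a) reject if l1 is at least five times longer than l2
    if seg_len(l1) >= 5 * seg_len(l2):
        return False
    # (b) reject if the acute angle between the segments exceeds 60 degrees
    diff = abs(seg_angle(l1) - seg_angle(l2)) % math.pi
    diff = min(diff, math.pi - diff)
    if diff > math.radians(60):
        return False
    # (c) reject if both segments lie near the same infinite line:
    # the midpoint of l2 is within `collinear_tol` of the line through l1
    # and the segments are nearly parallel
    (x1, y1), (x2, y2) = l1
    mx = (l2[0][0] + l2[1][0]) / 2
    my = (l2[0][1] + l2[1][1]) / 2
    d = abs((y2 - y1) * (mx - x1) - (x2 - x1) * (my - y1)) / seg_len(l1)
    if d < collinear_tol and diff < math.radians(5):
        return False
    return True

print(keep_pair(((0, 0), (2, 0)), ((0, 1), (2, 1.5))))  # True: no rule applies
```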

**Figure 9.** Example of processing noisy polygonal line segments. (**a**) Processing with the Douglas–Peucker algorithm. (**b**) Processing with the least-squares method.
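
For reference, a minimal Douglas–Peucker implementation (our sketch; the polyline and tolerance are illustrative, and the least-squares step of panel (**b**) is not reproduced here). The recursion keeps the point farthest from the chord between the endpoints and collapses runs whose deviation stays within the tolerance.

```python
import math

def douglas_peucker(points, eps):
    """Simplify a polyline: if the farthest interior point deviates from the
    endpoint chord by at most `eps`, keep only the endpoints; otherwise split
    at that point and recurse on both halves."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy)

    def dist(p):
        # perpendicular distance from p to the chord (or to the start point
        # when the chord is degenerate)
        if norm == 0:
            return math.hypot(p[0] - x1, p[1] - y1)
        return abs(dy * (p[0] - x1) - dx * (p[1] - y1)) / norm

    idx = max(range(1, len(points) - 1), key=lambda i: dist(points[i]))
    if dist(points[idx]) <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], eps)
    right = douglas_peucker(points[idx:], eps)
    return left[:-1] + right  # drop the duplicated split point

noisy = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)]
print(douglas_peucker(noisy, 0.5))
```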

**Figure 10.** Schematic diagram of the coordinate transformation: any point on the arc can be represented by (X_{0}, Y_{0}, Z_{0}), R, and θ.
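
Assuming the arc lies in a horizontal plane (our reading of Figure 10; the function name is illustrative), the parameterization can be written as X = X₀ + R cos θ, Y = Y₀ + R sin θ, Z = Z₀:

```python
import math

def arc_point(x0, y0, z0, r, theta):
    """Point on a horizontal circular arc with centre (x0, y0, z0),
    radius r, and angle theta."""
    return (x0 + r * math.cos(theta), y0 + r * math.sin(theta), z0)

p = arc_point(10.0, 20.0, 5.0, 2.0, math.pi / 2)
print(p)  # approximately (10, 22, 5)
```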

**Figure 11.** The side features of line L contain dense points because of the wall. (**a**) Top view of the building walls. (**b**) Profile of the building walls.

**Figure 12.** The point ‘a’ on the image corresponds to the point A as the tie point. A cannot be manually selected in the laser point cloud; however, A lies on the line L, and L can be detected in the LiDAR data.

**Figure 18.** Influence of lens distortion on the registration error for different camera types, including the DMC, RCD105, and SWDC5. (**a**) DMC camera lens distortion registration error. (**b**) RCD105 camera lens distortion registration error. (**c**) SWDC5 camera lens distortion registration error.

**Table 1.** Comparison between LiDAR point cloud data and optical imagery [23].

LiDAR Point Cloud Data | Remote Sensing Image |
---|---|
Rich positional information on homogeneous surfaces | Almost no positional information on homogeneous surfaces |
Data can be acquired during the day and at night | Most data can only be acquired during the day |
Accurate three-dimensional coordinates are obtained directly | Obtaining three-dimensional coordinates through matching is complicated, and unreliable matching results often occur |
Vertical accuracy is better than horizontal accuracy | Horizontal accuracy is better than vertical accuracy |
Highly redundant information | No inherent redundant information |
Rich positional information only; semantic information is difficult to extract | Rich semantic information |

Primitive Type | Error Description | Mathematical Expression Complexity |
---|---|---|
Point | Has the property of semi-random dispersion | |
Patch | Relatively accurate; generated by fitting all points of the same patch, so it depends on the patch segmentation accuracy | |
Line | Obtained by intersecting patches; the accuracy depends on the extraction accuracy of the patches | |

Properties of the Data | Henan (Test Site 1) | Xuzhou (Test Site 2) | Xianning I (Test Site 3) | Xianning II (Test Site 3) |
---|---|---|---|---|
LiDAR point data | | | | |
Altitude of image (m) | 600 | 1500 | 1000 | 1000 |
Point density (pts/m^{2}) | 4 | 2.5 | 4.0 | 4.0 |
Altitude of points (m) | 800 | 600 | 1000 | 1000 |
LiDAR scanner type | Leica ALS 70 | A-Pilot | Leica ALS50 II | Leica ALS50 II |
Acquisition time | 2018.11 | 2014.06 | 2009.03 | 2009.03 |
Image data | | | | |
Camera type | DMC | SWDC-5 | RCD105 | RCD105 |
f (mm) | 120 | 35 | 35 | 35 |
Overlap | 60% | 70% | 70% | 70% |
CCD size (μm) | 12 | 6.0 | 6.8 | 6.8 |
Image resolution (m) | 0.06 | 0.15 | 0.1 | 0.1 |
Acquisition time | 2018.11 | 2014.06 | 2009.03 | 2009.03 |
Original primitive type | Straight Line | Straight Line and Curve | Straight Line | Curve |

 | Test Site 1 | Test Site 2 | Test Site 3 | Test Site 3 |
---|---|---|---|---|
Primitive type | Straight Line | Straight Line and Curve | Straight Line | Curve |
Number of images | 9 | 9 | 4 | 4 |
Number of check points | 30 | 30 | 15 | 15 |
Average error (pixel) | 0.636 | 1.339 | 1.029 | 1.383 |
Average error (m) | 0.0382 | 0.200 | 0.103 | 0.138 |
Standard deviation | 0.288 | 0.615 | 0.781 | 0.670 |

Check Point | 4 pt/m^{2} | 2 pt/m^{2} |
---|---|---|
1 | 0.894524 | 1.143759 |
2 | 0.971825 | 0.920501 |
3 | 0.429864 | 0.616121 |
4 | 0.107464 | 0.209425 |
5 | 0.300463 | 0.595970 |
6 | 0.207973 | 0.614754 |
7 | 0.768295 | 0.865744 |
8 | 0.097773 | 0.795144 |
9 | 0.749856 | 0.537697 |
10 | 0.897527 | 0.964510 |
11 | 0.897527 | 1.101736 |
12 | 0.300463 | 0.474126 |
13 | 0.353553 | 0.354412 |
14 | 0.524520 | 0.546122 |
15 | 0.894524 | 0.705083 |
Average | 0.5597 | 0.7128 |

Check Point | Linear Features | Curve Features |
---|---|---|
1 | 0.463344 | 1.324443 |
2 | 2.047939 | 1.922908 |
3 | 0.361981 | 0.186339 |
4 | 0.549812 | 2.204933 |
5 | 1.718475 | 2.224358 |
6 | 0.001718 | 1.792193 |
7 | 0.044382 | 0.454203 |
8 | 0.010391 | 0.177646 |
9 | 1.979701 | 2.279956 |
10 | 0.463344 | 1.324443 |
11 | 0.745356 | 0.891393 |
12 | 1.975545 | 1.502313 |
13 | 1.798919 | 1.589025 |
14 | 1.795055 | 1.449832 |
15 | 1.476673 | 1.419629 |
Average | 1.0288 | 1.3829 |

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Yao, C.; Ma, H.; Luo, W.; Ma, H.
A Precisely One-Step Registration Methodology for Optical Imagery and LiDAR Data Using Virtual Point Primitives. *Remote Sens.* **2021**, *13*, 4836.
https://doi.org/10.3390/rs13234836
