Article

A Linear Feature-Based Approach for the Registration of Unmanned Aerial Vehicle Remotely-Sensed Images and Airborne LiDAR Data

Shijie Liu, Xiaohua Tong, Jie Chen, Xiangfeng Liu, Wenzheng Sun, Huan Xie, Peng Chen, Yanmin Jin and Zhen Ye

1 College of Surveying and Geo-informatics, Tongji University, 1239 Siping Road, Shanghai 200092, China
2 Jiangsu Power Design Institute Co., Ltd. of China Energy Engineering Group, 58-3 Suyuan Avenue, Nanjing 211102, China
* Author to whom correspondence should be addressed.
Remote Sens. 2016, 8(2), 82; https://doi.org/10.3390/rs8020082
Submission received: 6 October 2015 / Revised: 9 December 2015 / Accepted: 11 January 2016 / Published: 25 January 2016

Abstract

Compared with traditional manned airborne photogrammetry, unmanned aerial vehicle remote sensing (UAVRS) has the advantages of lower cost and higher flexibility in data acquisition. It has, therefore, found various applications in fields such as three-dimensional (3D) mapping and emergency management. However, due to the instability of UAVRS platforms and the low accuracy of the onboard exterior orientation (EO) observations, direct georeferencing of the image data leads to large location errors. Light detection and ranging (LiDAR) data, which provide highly accurate 3D information, can therefore be treated as a complementary data source to the optical images. This paper presents a semi-automatic approach for the registration of UAVRS images and airborne LiDAR data based on linear control features. The presented approach consists of three main components, as follows. (1) Buildings are first separated from the point cloud by the integrated use of height and size filtering and RANdom SAmple Consensus (RANSAC) plane fitting, and the 3D line segments of the building ridges and boundaries are semi-automatically extracted through plane intersection and boundary regularization with manual selections; (2) the 3D line segments are projected to the image space using the initial EO parameters to obtain the approximate locations, and all the corresponding 2D line segments are semi-automatically extracted from the UAVRS images. Meanwhile, the tie points of the UAVRS images are generated using the Förstner operator and least-squares image matching; and (3) by use of the equations derived from the coplanarity constraints of the linear control features and the collinearity constraints of the tie points, block bundle adjustment is carried out to update the EO parameters of the UAVRS images in the coordinate framework of the LiDAR data, achieving the co-registration of the two datasets. Experiments were performed to demonstrate the validity and effectiveness of the presented method, and a comparison with the traditional registration method based on LiDAR intensity images showed that the presented method is more accurate and can achieve sub-pixel accuracy.


1. Introduction

Unmanned aerial vehicle remote sensing (UAVRS) platforms are usually equipped with a charge-coupled device (CCD) digital camera for image acquisition, a global positioning system (GPS), and an inertial measurement unit (IMU) for observation of the platform position and attitude. Compared with traditional manned airborne remote sensing, the advantages of UAVRS are that it can work in high-risk situations and inaccessible areas without endangering human lives, and it can also capture higher-resolution images at a lower altitude. UAVRS is also suitable for cloudy weather conditions due to its ability to fly below the clouds [1]. In the past decades, UAVRS has found various applications in many fields, such as three-dimensional (3D) mapping, forest and vegetation change monitoring, emergency management, and so on [2,3,4].
However, UAVRS platforms are not as stable as large fixed-wing manned aircraft, and they tend to move erratically during flight. This, coupled with the disorientation caused by long-term operation of the system, makes the analysis of the image data problematic [5]. The GPS and IMU mounted on a UAVRS system usually provide low-quality measurements, resulting in low-accuracy direct geolocation [6], so indirect georeferencing using ground control points (GCPs) is often performed [7]. With known ground coordinates of easily-identifiable image features, the exterior orientation (EO) parameters of UAVRS images can be solved through aerotriangulation bundle adjustment [8]. However, GCP collection by field survey is often a costly procedure, and it may be difficult or even impossible in hazardous areas, such as the scenes of earthquakes and accidents.
Light detection and ranging (LiDAR) can directly generate a digital elevation model (DEM) and a digital surface model (DSM) through interpolation, and provides more accurate point measurements [9,10,11,12]. Since the positioning accuracy of LiDAR is much higher than that of UAVRS, a possible solution is to improve the UAVRS geo-positioning accuracy based on the integration of these two kinds of datasets. James (2006) and Liu (2007) presented methods of utilizing LiDAR data and its intensity images to provide GCPs for digital photogrammetry and orthorectification processes [13,14]. Barrand (2009) optimized photogrammetric DEMs using LiDAR-derived GCPs for glacier volume change assessment [15]. LiDAR and photogrammetry are also complementary to each other, and thus the integration of the two technologies is important in a number of remote sensing applications, such as building extraction [16], image classification [17,18,19], and 3D city modeling [20,21]. The integration of LiDAR and photogrammetry is expected to produce more accurate and higher-quality products [11].
An important issue for the integration of LiDAR data and UAVRS optical images is the registration of these two different types of datasets. In general, the existing registration methods can be classified into three types, as follows [22].
(1) Registration based on a LiDAR intensity image. This turns the registration of 3D LiDAR data and 2D optical imagery into 2D image registration. However, LiDAR intensity images differ considerably from optical imagery in their gray-level properties and object description due to the very different processes of intensity recording, which makes it difficult to perform a direct similarity comparison between an optical image and a LiDAR intensity image. The property they share is the statistical similarity of the gray levels, and, hence, mutual information is employed to exploit the statistical dependencies between a LiDAR-derived intensity image and an optical image [23,24].
(2) Registration based on point clouds (i.e., point sets). By dense matching and forward intersection, a large number of 3D points can be generated from the optical images, thus transforming the problem into the registration of two point sets [25]. However, points acquired from optical images are mostly image features such as breakpoints of texture or gray level, providing rich information along object space discontinuities and poor information along homogeneous surfaces with uniform texture, while LiDAR provides a discrete set of irregularly distributed points with rich information along homogeneous physical surfaces and poor information along object space discontinuities. The iterative closest point (ICP) algorithm is, therefore, required in the registration procedure [25,26], and it needs precise initial values for the iteration to avoid converging to a local optimum. Moreover, errors from image matching and forward intersection may be introduced.
(3) Registration based on features. Feature-based registration utilizes corner points, lines, and planes as matching primitives [27,28,29,30,31]. There are many algorithms for feature detection from optical imagery, such as Moravec, Förstner, SUSAN, Harris, and SIFT for corner detection [32,33,34,35,36], and Canny, Sobel, and LoG for edge detection [37,38,39]. However, owing to the discreteness and irregularity of point clouds, it is more complex to extract features from LiDAR data, and the algorithms developed for LiDAR processing are not as mature as those for optical image processing [27].
With respect to the aforementioned registration methods, the LiDAR intensity image based methods rely heavily on the quality and correctness of the intensity image, and a large difference between the LiDAR intensity image and the optical image can increase the registration difficulty and lead to registration failure. The 3D point cloud based methods may result in a local optimum if the initialization of the ICP algorithm is not precise enough. What is more, the quality of the 3D points generated from the optical images is always poor in areas where sudden elevation changes occur, which may reduce the ultimate registration accuracy. The feature-based methods are relatively well suited for the registration of UAVRS optical images and airborne LiDAR data, as both datasets contain enough distinctive and easily-detectable objects for the registration. There has been a considerable amount of research into feature-based registration [6,12,13,14,15,16,20,21,22,27,28,29,30,31], among which point features are the most commonly used, owing to their uniqueness and simplicity.
Compared with point features, linear features have several advantages [40,41]: (1) image-space linear features are easier to extract with sub-pixel accuracy across the direction of the edge, as they are discontinuous in only one direction, while point features are discontinuous in all directions; (2) linear features carry higher semantic information, and geometric constraints are more likely to exist among linear features than among points, reducing the matching ambiguity; and (3) linear features increase the redundancy and improve the robustness and geometric strength of the photogrammetric adjustment. Therefore, Habib et al. proposed a photogrammetric and LiDAR data registration method using linear features [27,28], in which two alternative approaches were introduced. One directly incorporates the LiDAR lines as control in the photogrammetric bundle adjustment, while the other is a two-step procedure starting with photogrammetric 3D model generation, followed by a similarity transformation using the common photogrammetric and LiDAR lines as control for absolute orientation. The two-step strategy is able to deal with multiple 3D datasets regardless of their origin, but its disadvantage is that the orientation parameters of the images remain uncorrected in the photogrammetric datum, which does not coincide with the LiDAR datum. In the one-step strategy, the image-space lines are represented by a sequence of intermediate points along the feature to cope with image distortion. This has advantages when handling long linear features, where image distortion may lead to deviations from the straightness of the lines. For short linear features, the deviations caused by image distortion are very small and would probably be overwhelmed by the extraction error of the intermediate points, especially if they are extracted manually. Therefore, in our study, only two points are used to represent a linear feature in image space, which is interactively extracted using line detection algorithms. In addition, semi-automation is achieved in the extraction of the object-space linear features from the LiDAR points. Moreover, differing from the scenarios in most of the existing studies, where only a few optical images were used for the registration with LiDAR data and each image had adequate independent control features, in our study the registration of 109 UAVRS images with airborne LiDAR data using 16 linear control features was investigated, which is expected to enrich the methodology for the registration of UAVRS optical images and airborne LiDAR data.

2. Methodology

LiDAR data points are created as measurements in a 3D coordinate system. It is, therefore, convenient to take the coordinate system of the LiDAR data as the common framework, and the UAVRS images are then registered to the LiDAR data coordinate system. The registration involves the calculation of the EO parameters of the UAVRS images, which include the position of the exposure center (X0, Y0, Z0) and the camera pose (ω, φ, κ). Planar roofs can be extracted from the LiDAR data with high accuracy because a large number of points can be used to derive their parameters. Linear features subsequently derived from building roof edges and the intersection of adjacent planar roofs are used as control features. After block bundle adjustment using the coplanarity conditions derived from the linear control features and the collinearity conditions derived from a large number of tie points, the two datasets are registered in a common coordinate system.
Figure 1 shows the overall workflow of the proposed method for the registration of UAVRS images and airborne LiDAR data. The approach consists of four main parts. (1) Buildings are separated from the LiDAR point cloud by the integrated use of height and size filtering and RANSAC plane fitting, and 3D line segments of the building ridges and boundaries are interactively extracted through plane intersection and boundary regularization; (2) the 3D line segments in the object space are projected to the image space using the initial EO parameters to obtain the approximate locations, and all the corresponding 2D line segments are semi-automatically extracted from the UAVRS images; (3) tie points for the UAVRS images are generated using the Förstner operator and least-squares image matching; and (4) based on the equations derived from the coplanarity constraints of the linear control features and the collinearity constraints of the tie points, block bundle adjustment is carried out to update the EO parameters of the UAVRS images in the coordinate framework of the LiDAR data, achieving the co-registration of the two datasets.
Figure 1. Overall workflow for the registration of UAVRS images and LiDAR data.

2.1. Extraction of 3D Line Segments from LiDAR Data

2.1.1. Extraction of Building Roof Points

The airborne LiDAR data is processed in sequence. Firstly, pre-processing is performed to remove outliers. The remaining points are then divided into ground points and non-ground points. Building points are then extracted from the non-ground points, based on which the linear features are detected and extracted.
Outlier points include three types [42]: isolated points, air points, and low points. Isolated points are those for which the number of neighboring points within a given 3D search radius is less than a predefined threshold. For air point detection, the mean value and standard deviation of the points' elevations are first computed, and the points whose absolute elevation difference from the mean is more than three times the standard deviation are considered to be air points. Low points are those whose elevation is lower than that of all their neighboring points by a given threshold value (such as 1 m in our experiments).
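The three outlier tests can be illustrated with a short sketch. The snippet below is a minimal illustration, assuming the point cloud is an N x 3 NumPy array; the search radius, minimum neighbor count, and low-point drop thresholds are illustrative values rather than those of the actual production workflow.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, search_radius=5.0, min_neighbors=3, low_point_drop=1.0):
    """points: N x 3 array (X, Y, Z); returns the cloud with the three outlier types removed."""
    z = points[:, 2]
    # Air points: elevation differs from the mean by more than three standard deviations.
    air = np.abs(z - z.mean()) > 3.0 * z.std()

    tree = cKDTree(points)
    isolated = np.zeros(len(points), dtype=bool)
    low = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        neighbors = [j for j in tree.query_ball_point(p, r=search_radius) if j != i]
        if len(neighbors) < min_neighbors:
            isolated[i] = True                      # too few points inside the 3D search radius
        elif z[i] < z[neighbors].min() - low_point_drop:
            low[i] = True                           # below all neighbors by the given threshold
    return points[~(air | isolated | low)]
```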
After the outlier points are removed, the remaining points are further classified into ground and non-ground points using an adaptive triangulated irregular network (TIN) model [43,44]. The procedure is as follows: (1) seed point selection is undertaken in a user-defined grid with a size bigger than the largest building, and a coarse TIN is constructed; (2) new points are added if they meet the criteria based on the calculated threshold parameters, and the TIN model is iteratively reconstructed; and (3) the procedure stops after all the points have been checked and classified as ground or object. After the object points are separated from the ground points, building points are further extracted from the object points using height and size filtering [12]. The thresholds used in our experiments were 2.5 m for height and 3 m × 3 m for size.
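As a rough sketch of the height and size filtering step (not of the adaptive TIN filtering itself), the snippet below assumes the non-ground points and their heights above the interpolated ground surface are already available; the 1 m rasterization cell is an illustrative choice, while the 2.5 m and 3 m thresholds follow the values quoted above.

```python
import numpy as np
from scipy import ndimage

def filter_building_points(object_pts, height_above_ground,
                           min_height=2.5, min_footprint=3.0, cell=1.0):
    """object_pts: N x 3 non-ground points; height_above_ground: N heights over the ground surface."""
    pts = object_pts[height_above_ground > min_height]              # height filtering (> 2.5 m)
    ij = np.floor((pts[:, :2] - pts[:, :2].min(axis=0)) / cell).astype(int)
    grid = np.zeros(tuple(ij.max(axis=0) + 1), dtype=bool)
    grid[ij[:, 0], ij[:, 1]] = True
    labels, n = ndimage.label(grid)                                  # connected clusters of occupied cells
    point_labels = labels[ij[:, 0], ij[:, 1]]
    keep = np.zeros(len(pts), dtype=bool)
    for lab in range(1, n + 1):
        rows, cols = np.where(labels == lab)
        # Size filtering: keep clusters whose footprint is at least 3 m x 3 m.
        if (rows.ptp() + 1) * cell >= min_footprint and (cols.ptp() + 1) * cell >= min_footprint:
            keep |= point_labels == lab
    return pts[keep]
```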

2.1.2. Extraction of 3D Line Segments from Building Roof Points

In general, most buildings have regular shapes with perpendicular or parallel boundaries, and building roofs consist of one or more planes. There are three main methods for the detection of 3D building roof planes—region growing [45], the Hough transform [46], and RANSAC plane fitting [47]—among which RANSAC is the most efficient, while region-growing algorithms are sometimes not very transparent and not homogeneous, and the Hough transform is very sensitive to the segmentation parameter values [47]. Therefore, RANSAC plane fitting is adopted in our study for the roof plane detection and plane parameter estimation.
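The hypothesize-and-verify loop of RANSAC plane fitting can be summarized in a few lines. The following sketch assumes the roof points form an N x 3 NumPy array; the distance tolerance and iteration count are illustrative, and the detected plane is refined at the end by a least-squares (SVD) fit through the inliers.

```python
import numpy as np

def ransac_plane(points, dist_tol=0.15, n_iter=500, seed=0):
    """points: N x 3 roof points; returns (unit normal, centroid, inlier mask) of the dominant plane."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                                  # degenerate (nearly collinear) sample
        d = np.abs((points - p0) @ (n / norm))        # point-to-plane distances
        inliers = d < dist_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine the plane by a least-squares fit (SVD) through the inliers.
    pts = points[best_inliers]
    centroid = pts.mean(axis=0)
    normal = np.linalg.svd(pts - centroid)[2][-1]
    return normal, centroid, best_inliers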
There are two types of line features that can be extracted from the building roof points: roof ridge lines and roof edge lines. Roof ridge lines can be obtained by the intersection of adjacent roof planes for gable-roof buildings. However, boundary extraction from the irregular point set of a building roof is more complex. In this paper, a TIN-based algorithm is introduced to construct the boundary from the building roof points. The operational procedure of the algorithm includes three steps, as follows. (1) The building roof points are projected onto the X-Y 2D plane, and the TIN network is then constructed; (2) a threshold for the edge length is determined based on the average point spacing (usually 2–3 times the average spacing), and the edges longer than the threshold are removed; and (3) the edges that belong to only one triangle are selected to form the original building roof boundary.
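A compact version of this TIN-based boundary construction is sketched below. It uses the simplifying variant of discarding a triangle as a whole when any of its edges exceeds the length threshold (a close relative of removing the over-long edges themselves); the threshold factor of 2.5 times the average spacing is an illustrative value inside the 2–3 range quoted above.

```python
import numpy as np
from collections import Counter
from scipy.spatial import Delaunay

def roof_boundary_edges(roof_xy, avg_spacing, k=2.5):
    """roof_xy: N x 2 roof points projected onto the X-Y plane; returns boundary edges as index pairs."""
    tri = Delaunay(roof_xy)
    max_len = k * avg_spacing                       # edge-length threshold (2-3x the average spacing)
    edge_count = Counter()
    for s in tri.simplices:
        edges = [(s[0], s[1]), (s[1], s[2]), (s[2], s[0])]
        if max(np.linalg.norm(roof_xy[a] - roof_xy[b]) for a, b in edges) > max_len:
            continue                                # drop triangles containing an over-long edge
        for a, b in edges:
            edge_count[tuple(sorted((a, b)))] += 1
    # Edges used by exactly one remaining triangle form the (irregular) roof outline.
    return [e for e, c in edge_count.items() if c == 1]
```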
The extracted original boundary is irregular, and further regularization is required to adjust the boundary to a rectangular shape based on an orthogonality condition. Firstly, the main direction of the building is calculated, for which a method based on the minimum direction difference is introduced. The direction difference is defined as the difference between the azimuth of each segment and the main direction, and the optimal main direction is the one for which the sum of all the direction differences is a minimum. The process of main direction detection is as follows. (1) Define the range of the main direction $\alpha_l$ ($0^\circ \le \alpha_l < 90^\circ$), where $\alpha_l$ changes from $0^\circ$ to $90^\circ$ with a given step $\varepsilon$ ($\varepsilon = 90^\circ/N$, where $N$ is a given number dividing the range into $N$ pieces); (2) in the $i$th iteration ($i = 1, 2, \ldots, N$), calculate the direction difference $d_{ij} = \min(|\alpha_j - \alpha_l|,\ |\alpha_j - \alpha_l + 180^\circ|,\ |\alpha_j - \alpha_l - 90^\circ|,\ |\alpha_j - \alpha_l + 90^\circ|)$ for each edge segment, where $\alpha_j$ is the azimuth of the $j$th edge segment, and sum all the $d_{ij}$ to obtain $D_i$; and (3) after all the iterations are completed, the main direction $\alpha_l$ is the candidate for which $D_i$ is the least.
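This direction-difference search transcribes almost directly into code. The sketch below assumes the boundary segment azimuths are given in degrees and uses N = 90 candidate directions as an illustrative choice.

```python
import numpy as np

def direction_difference(alpha_j, alpha_l):
    """d_ij = min(|a_j - a_l|, |a_j - a_l + 180|, |a_j - a_l - 90|, |a_j - a_l + 90|), in degrees."""
    diff = np.asarray(alpha_j, dtype=float) - alpha_l
    return np.min(np.abs(np.stack([diff, diff + 180.0, diff - 90.0, diff + 90.0])), axis=0)

def main_direction(edge_azimuths_deg, n_steps=90):
    candidates = np.arange(n_steps) * (90.0 / n_steps)          # step epsilon = 90 deg / N
    costs = [direction_difference(edge_azimuths_deg, c).sum() for c in candidates]
    return candidates[int(np.argmin(costs))]                    # candidate with the least D_i
```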
After the main direction is determined, further regularization is performed to simplify the boundary to a rectangular shape. (1) The segments are classified into two classes according to the difference between the azimuth of each line segment and the main direction, resulting in two groups of line segments that are approximately parallel or perpendicular to the main direction; (2) the connected segments of the same class are merged into a new edge line, whose center point and azimuth are calculated as weighted averages, with the length of each line segment used as its weight; and (3) the azimuth and location of each edge line are corrected by the use of an orthogonality constraint, so that adjacent line segments of the regularized building boundary are perpendicular to each other.
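A hedged sketch of the classification, merging, and orthogonal snapping steps is given below; the segment representation (ordered endpoint pairs) and the returned point-plus-direction form of each regularized edge are assumptions made for illustration only.

```python
import numpy as np

def regularize_boundary(segments, main_dir_deg):
    """segments: ordered list of ((x1, y1), (x2, y2)) along the boundary; returns (point, direction) edges."""
    edges = []
    for p1, p2 in segments:
        p1, p2 = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
        az = np.degrees(np.arctan2(p2[0] - p1[0], p2[1] - p1[1])) % 180.0
        d = abs(az - main_dir_deg) % 180.0
        d = min(d, 180.0 - d)
        label = 0 if d < 45.0 else 1                 # 0: parallel, 1: perpendicular to the main direction
        edges.append((label, (p1 + p2) / 2.0, np.linalg.norm(p2 - p1)))

    merged = []                                       # (label, length-weighted centre, accumulated length)
    for label, centre, length in edges:
        if merged and merged[-1][0] == label:
            lab, c, w = merged[-1]
            merged[-1] = (lab, (c * w + centre * length) / (w + length), w + length)
        else:
            merged.append((label, centre, length))

    # Snap each merged edge to the main direction (label 0) or its normal (label 1);
    # the regularized edge passes through the weighted centre with the snapped azimuth.
    theta = np.radians(main_dir_deg)
    dir_vec = {0: np.array([np.sin(theta), np.cos(theta)]),
               1: np.array([np.cos(theta), -np.sin(theta)])}
    return [(centre, dir_vec[label]) for label, centre, _ in merged]
```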

2.2. Extraction of Conjugate 2D Line Segments and Tie Points from UAVRS Images

After the 3D line segments are extracted from the LiDAR points, the conjugate 2D line segments are interpreted from the UAVRS images in a semi-automatic way. Firstly, the extracted 3D ground line segments are projected to the image space using the interior orientation (IO) parameters and the initial EO parameters to determine the coarse locations of the buildings to which the corresponding 2D line segments belong. The linear segments of the buildings are then automatically extracted using the Hough-transform algorithm [48,49], and the conjugate 2D line segments are manually selected. Owing to the image overlap, there will be multiple (not less than two) conjugate 2D line segments in the image space for a 3D line segment in the ground space, and all the available 2D line segments are extracted. These conjugate 2D line segments also serve as tie features in the bundle adjustment to reduce the geometric inconsistency between adjacent images.
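Projecting a 3D LiDAR segment into image space with the initial EO parameters reduces to the collinearity equations. The sketch below assumes one common omega-phi-kappa rotation convention and ignores lens distortion; the actual camera model of the system may differ.

```python
import numpy as np

def rotation_opk(omega, phi, kappa):
    """Object-to-image rotation for angles in radians, under one common omega-phi-kappa convention."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return (Rx @ Ry @ Rz).T

def project_point(P, eo, f, x0=0.0, y0=0.0):
    """Collinearity projection of a ground point P = (X, Y, Z) into image coordinates."""
    Xs, Ys, Zs, omega, phi, kappa = eo
    u = rotation_opk(omega, phi, kappa) @ (np.asarray(P, dtype=float) - np.array([Xs, Ys, Zs]))
    return np.array([x0 - f * u[0] / u[2], y0 - f * u[1] / u[2]])

def project_segment(A, B, eo, f):
    """Approximate image location of the 3D segment A-B, used to guide the 2D line extraction."""
    return project_point(A, eo, f), project_point(B, eo, f)
```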
In addition to the 2D line segments used as control, a large number of tie points are required for the block bundle adjustment to reduce the geometric inconsistency between images and solve the EO parameters of all the UAVRS images. The tie points are automatically generated by use of the Förstner operator [33] for feature point detection and least-squares image matching [50] to establish the correspondence between conjugate points, for which geometric constraints such as the epipolar constraint and the parallax continuity constraint are applied to narrow the search range and remove outliers.
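As a rough stand-in for this pipeline, the sketch below uses OpenCV's Shi-Tomasi corner detector and normalized cross-correlation in place of the Förstner operator and least-squares matching; the predicted shift from the initial EO parameters narrows the search window, and the 0.8 correlation threshold follows the value used later in the experiments. Grayscale 8-bit images are assumed.

```python
import cv2
import numpy as np

def match_tie_points(img_left, img_right, predicted_shift, win=15, search=40, ncc_min=0.8):
    """img_left, img_right: 8-bit grayscale arrays; predicted_shift: rough (dx, dy) from the initial EO."""
    corners = cv2.goodFeaturesToTrack(img_left, maxCorners=500, qualityLevel=0.01, minDistance=20)
    matches = []
    if corners is None:
        return matches
    for x, y in corners.reshape(-1, 2).astype(int):
        px, py = x + int(predicted_shift[0]), y + int(predicted_shift[1])
        # Skip corners whose template or search window falls outside either image.
        if min(x - win, y - win, px - search, py - search) < 0:
            continue
        tpl = img_left[y - win:y + win + 1, x - win:x + win + 1]
        roi = img_right[py - search:py + search + 1, px - search:px + search + 1]
        if tpl.shape != (2 * win + 1, 2 * win + 1) or roi.shape != (2 * search + 1, 2 * search + 1):
            continue
        score = cv2.matchTemplate(roi, tpl, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(score)
        if max_val > ncc_min:                      # correlation threshold (0.8 in the experiments)
            matches.append(((x, y), (px - search + max_loc[0] + win,
                                     py - search + max_loc[1] + win)))
    return matches
```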

2.3. Coplanarity Constraint of the Linear Control Features

After the corresponding 2D and 3D line segments are extracted, registration can be carried out using the coplanarity condition [12,27]. As shown in Figure 2, the 3D ground line segment A-B and its 2D conjugate line segment a-b in the image space are on the same plane, O-A-B, determined by the ground line A-B and the perspective center O. The advantage of using coplanarity is that no constraints are put on end points, i.e., the end points of the corresponding line segments are not necessarily conjugate points.
Figure 2. Coplanarity of corresponding 2D and 3D line segments.
In Figure 2, the coplanarity condition of the five points O, a, b, A, and B is equivalent to the condition that both vectors $\overline{Oa}$ and $\overline{Ob}$ are perpendicular to the normal vector $\bar{v}$ of the plane determined by the vectors $\overline{OA}$ and $\overline{OB}$, which can be expressed as:

$(\overline{OA} \times \overline{OB}) \cdot \overline{Oa} = 0, \qquad (\overline{OA} \times \overline{OB}) \cdot \overline{Ob} = 0 \qquad (1)$
Each linear control feature provides two equations. For a single image, at least three linear control features (an even distribution in image space is preferable, and the features should not be coplanar) are needed to solve the six unknown EO parameters. For a block of multiple overlapping images, tie points should be used to overcome the geometric inconsistency between adjacent images. Meanwhile, with the help of the tie points, the required minimum number of linear control features for the whole block of images is no more than that for a single image. However, for better accuracy, more than three well-distributed (evenly distributed in plan and in elevation within the whole block area) linear control features are needed for redundancy checks and accuracy enhancement.
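For a single linear control feature, the two residuals of Equation (1) can be evaluated as follows; the omega-phi-kappa convention repeats the assumption made in the projection sketch of Section 2.2, and the inputs are assumed to be NumPy arrays.

```python
import numpy as np

def rot_opk(omega, phi, kappa):
    """Image-to-object rotation (transpose of the object-to-image matrix used in the Section 2.2 sketch)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def coplanarity_residuals(eo, f, a_img, b_img, A_gnd, B_gnd, x0=0.0, y0=0.0):
    """Two residuals of Equation (1); both vanish when rays O-a and O-b lie in the plane O-A-B."""
    Xs, Ys, Zs, omega, phi, kappa = eo
    O = np.array([Xs, Ys, Zs], dtype=float)
    R = rot_opk(omega, phi, kappa)
    normal = np.cross(np.asarray(A_gnd, float) - O, np.asarray(B_gnd, float) - O)
    Oa = R @ np.array([a_img[0] - x0, a_img[1] - y0, -f])   # image ray of a, rotated to object space
    Ob = R @ np.array([b_img[0] - x0, b_img[1] - y0, -f])
    return np.array([normal @ Oa, normal @ Ob])
```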

2.4. Block Bundle Adjustment

Each pair of corresponding line segments provides two independent equations, as indicated in Equation (1). For a block of n images, if we have m (m ≥ 3) pairs of linear control features evenly distributed over the entire block area, then we will have 2m equations provided by the coplanarity constraints and 6n unknown EO parameters. Meanwhile, if we have k tie points and each appears in four adjacent images (forward overlap and side overlap), they will provide 8k collinearity equations and bring in 3k unknown ground coordinates.
Given the conditions above, we have 2m + 8k equations to solve the 6n + 3k unknowns, so (2m + 8k) should be no less than (6n + 3k), resulting in 2m + 5k ≥ 6n. The least-squares method is used for bundle adjustment to minimize the discrepancies among the conditions.
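The equation/unknown bookkeeping above can be checked with a trivial helper; the figures plugged in below are those of the experiment in Section 4 (109 images, 50 pairs of linear control features, 1622 tie points), with the nominal four images per tie point assumed in the derivation.

```python
def adjustment_redundancy(n_images, m_line_pairs, k_tie_points, images_per_tie=4):
    """Redundancy = observations - unknowns; must be non-negative for a solvable block."""
    equations = 2 * m_line_pairs + 2 * images_per_tie * k_tie_points   # coplanarity + collinearity
    unknowns = 6 * n_images + 3 * k_tie_points                         # EO parameters + tie-point coordinates
    return equations - unknowns

# Figures from the experiment in Section 4: 109 images, 50 pairs of linear control
# features, 1622 tie points (on average ~5.7 image observations per tie point,
# i.e. more than the nominal 4 assumed here).
print(adjustment_redundancy(109, 50, 1622))   # positive, so the block is over-determined
```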
Initial values for the unknown parameters are required in the bundle adjustment. The positions and attitudes measured by the onboard position and orientation system (POS) are used as the initial values for the EO parameters of the images, and the initial ground coordinates of the tie points are calculated through space intersection using the initial EO parameter values. Both the EO parameter values of the images and the ground coordinates of the tie points are updated iteratively until the statistical error is less than the predefined threshold.
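A minimal sketch of how the whole block could be assembled with scipy.optimize.least_squares is given below, assuming the collinearity and coplanarity helpers sketched in Sections 2.2 and 2.3 (project_point, coplanarity_residuals) are in scope; the observation layout and names are illustrative, and in practice the two residual groups need appropriate relative weighting.

```python
import numpy as np
from scipy.optimize import least_squares

def pack(eo_per_image, tie_xyz):
    """Stack the 6 EO parameters per image and the 3 ground coordinates per tie point."""
    return np.concatenate([np.ravel(eo_per_image), np.ravel(tie_xyz)])

def unpack(x, n_images, k_ties):
    eo = x[:6 * n_images].reshape(n_images, 6)
    xyz = x[6 * n_images:].reshape(k_ties, 3)
    return eo, xyz

def make_residuals(f, line_obs, tie_obs, n_images, k_ties):
    """line_obs: (image id, a_img, b_img, A_gnd, B_gnd) tuples; tie_obs: (image id, tie id, x_img, y_img)."""
    def residuals(x):
        eo, xyz = unpack(x, n_images, k_ties)
        r = []
        for img, a, b, A, B in line_obs:          # coplanarity constraints, 2 equations per pair
            r.extend(coplanarity_residuals(eo[img], f, a, b, A, B))
        for img, tie, xi, yi in tie_obs:          # collinearity constraints, 2 equations per image point
            r.extend(project_point(xyz[tie], eo[img], f) - np.array([xi, yi]))
        # Note: the two residual groups are in different units and would need
        # relative weighting in a real adjustment.
        return np.asarray(r)
    return residuals

def run_adjustment(x0, f, line_obs, tie_obs, n_images, k_ties, tol=1e-8):
    """x0: POS-derived EO values plus space-intersected tie coordinates, packed with pack()."""
    res = least_squares(make_residuals(f, line_obs, tie_obs, n_images, k_ties), x0, xtol=tol, ftol=tol)
    return unpack(res.x, n_images, k_ties)
```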

3. Study Area and Data Used

As shown in Figure 3, the study area is located in Zhangye City, Gansu province, in the northwest of China, with an area of about 45 km2. The topography in the study area is nearly flat, with an average elevation of 1550 m. The major land-cover types include farmland, trees, roads, and buildings, which are rich in linear features.
The UAVRS images used in this study were acquired in November 2011. The UAV system used for image acquisition was ISAR-II, a fixed-wing UAV equipped with a POS system, including GPS and IMU, for navigation and for providing initial EO parameters for the acquired images; detailed information can be found in [51]. The accuracy of the attitude data from the IMU is rated as ±2° for roll and pitch and ±5° for heading. The technical specifications of the UAV are listed in Table 1. The camera mounted on the UAV for image acquisition was a Canon EOS 5D Mark II digital single lens reflex (DSLR) camera, with a focal length of about 35.6 mm, recording images of 5616 × 3744 pixels with a pixel size of 6.41 μm. As the UAVRS mission was to provide geo-referenced images for the layout design of the in situ sensors for the HiWATER (Heihe Watershed Allied Telemetry Experimental Research) project [52], the required resolution was half a meter. Therefore, considering the resolution requirement and the field conditions (open country far away from flight restriction areas), and in order to save as much time as possible for image acquisition and processing, a flying height of 2500 m was designed for the image acquisition, giving an average resolution of 0.45 m, with overlaps of 65% along the flight direction and 35% across the flight direction, and a total of 109 valid images were collected.
Figure 3. The study area and the UAV flight path.
Table 1. Technical specifications of the UAV platform used in the experiment [51].
Item                  Value
Length (m)            1.8
Wingspan (m)          2.6
Payload (kg)          4
Take-off weight (kg)  14
Endurance (h)         1.8
Flying height (m)     300–6000
Flying speed (km/h)   80–120
Power                 Fuel
Flight mode           Manual, semi-autonomous, and autonomous
Launch                Catapult, runway
Landing               Sliding, parachute
The airborne LiDAR data for the same area were obtained with a Leica ALS70 system onboard a Y-12 aircraft at a flying height of about 1200 m in July 2012 [53]. The average point density was four points per square meter, and the vertical accuracy was 5–30 cm [52]. In addition, 18 ground points, including road intersections and building corners, were surveyed using GPS-RTK with an accuracy better than 0.1 m, and served as checkpoints in the experiments.

4. Experiments and Result Analysis

4.1. Linear Control Features and Tie Point Extraction Results

In the experiment, 16 3D linear ground features, including building roof ridges and roof edges, were interactively extracted from the LiDAR points. The corresponding 2D linear features (50 in total) in the image space were extracted semi-automatically from the UAVRS images, providing 50 pairs of control features. The 16 3D linear ground features are shown in Figure 4, and the corresponding 2D image linear features (16 out of 50) are shown in Figure 5. The locations of the ground control features are illustrated in Figure 6, and they present an even distribution in the study area.
Furthermore, 1622 tie points (corresponding to 9261 image points) were extracted using the Förstner operator and least squares image matching, during which 0.8 was set as the correlation coefficient threshold. The generated tie points are shown in Figure 6, and also show an even distribution.
Figure 4. The 16 building roofs and 3D line segments (blue lines) extracted from the LiDAR data.
Figure 5. The 16 corresponding 2D line segments (there are a total of 50 due to image overlapping, and here illustrated are the 16 ones corresponding to the 3D line segments shown in Figure 4) extracted in the UAVRS images.
Figure 6. Distribution of the 16 linear control features and tie points.

4.2. Registration Result

Registration of the UAVRS images and the LiDAR data involved the calculation of the EO parameters for the UAVRS images in the coordinate system of the LiDAR data. The 50 pairs of extracted linear control features and the 1622 tie points (corresponding to 9261 image points) were utilized for least-squares block bundle adjustment to solve the EO parameters of all 109 UAVRS images, as well as the ground coordinates of the tie points.
After bundle adjustment, the EO parameters of the UAVRS images were corrected. Figure 7b shows the registration result by projecting the 3D LiDAR points to the image space using the updated EO parameters, while Figure 7a shows the projected result before registration. From Figure 7, we can see that the orientation accuracy of the UAVRS images was significantly improved, and the images are accurately registered with the LiDAR points after the bundle adjustment. In order to evaluate the registration accuracy, we calculated the distance between the extracted 2D image line segments and the projected ones. Here, the distance is defined as the average distance of the two end points of the image line to the projected line. The statistical average and the maximum distance values are listed in the last column of Table 2.
Figure 7. Comparison of before and after registration. (a) Before registration; and (b) after registration.
Table 2. Discrepancies between the two datasets before/after registration in different scenarios (unit: pixel).
            Direct            Free Network      Intensity Image       Intensity Image       Linear Features
            Georeferencing    Adjustment        (16 Control Points)   (32 Control Points)   (16 Control Lines)
Maximum     602.10            58.75             6.09                  6.09                  1.90
Average     235.52            26.12             1.76                  1.42                  0.92

4.3. Comparison with Intensity Image Based Registration and Accuracy Evaluation

Traditionally, the registration of optical images and LiDAR data is based on control points extracted from the LiDAR intensity image and the derived DSM. Therefore, this was also undertaken in our experiments for an accuracy comparison. Firstly, the LiDAR intensity image and the DSM were generated from the LiDAR data and resampled to the same resolution as the UAVRS images. A total of 32 evenly-distributed GCPs, including building corners and road intersections (other than the 18 GPS-measured points), were then manually selected, with the plane coordinates measured from the LiDAR intensity image and the height values measured from the LiDAR-derived DSM with a resolution of 0.5 m. The corresponding image coordinates were carefully measured in the UAVRS images. Block bundle adjustment was then carried out using the control points and the 1622 tie points to update the EO parameters of the UAVRS images in the coordinate framework of the LiDAR data. Two scenarios were tested: one using 16 control points (equal to the number of linear control features), and the other using all 32 control points (equal to the number of end points of the 16 linear control features). The image residuals of the control points after registration were statistically calculated and are listed in Table 2.
In addition, in order to assess the accuracy improvement after registration, the geometric discrepancies between the UAVRS images and the LiDAR data before registration were also calculated using the 16 extracted points. Two scenarios were considered before registration: one involved direct georeferencing, and the other involved free network adjustment using the 1622 tie points to correct the internal inconsistency among the images. The results of the two scenarios are listed in the first two columns of Table 2.
From Table 2, we can see that when using the EO parameters measured by the UAV-borne POS for direct georeferencing, the average geometric discrepancy between the UAVRS images and the LiDAR data was about 236 pixels, and the maximum reached 602 pixels. With free network adjustment to remove the internal inconsistency among the UAVRS images, the average discrepancy was brought down to 26.12 pixels, but large external discrepancies still existed. After registration based on the control points extracted from the LiDAR intensity image, the average discrepancy decreased significantly to 1.76 pixels using 16 control points, and slightly further to 1.42 pixels using 32 control points. Meanwhile, using the linear feature-based registration method, the average discrepancy was brought down to the sub-pixel level.
Moreover, the 18 GPS-measured ground points were used to evaluate the absolute positioning accuracy of the UAVRS images under four scenarios: direct georeferencing, free network adjustment, registration based on the LiDAR intensity image, and registration based on linear features. In all the scenarios, the ground 3D coordinates were calculated from the images by multi-view intersection, and the discrepancies were estimated by comparison with the GPS-measured values. The statistical results are listed in Table 3, from which we can see that the root-mean-square positioning errors of direct georeferencing were as high as 84.57 m in the horizontal plane and 169.27 m in the vertical direction, but when the internal inconsistency was corrected by free network adjustment, the errors decreased to 7.06 m in the horizontal plane and 26.11 m in the vertical direction. After registration based on the LiDAR intensity image using 16 control points, the positioning accuracies of the UAVRS images were improved to about 0.67 m in the horizontal plane and 1.98 m in the vertical direction, and slightly further to 0.59 m in the horizontal and 1.49 m in the vertical when using 32 control points. After registration with the linear feature based method, the positioning accuracy in the horizontal plane was further improved to 0.41 m, which is at a sub-pixel level compared to the image resolution of 0.45 m, and the vertical accuracy also improved from nearly 2 m to 1.27 m.
Table 3. Positioning accuracies of the UAVRS images in different scenarios (unit: m).
Scenario                                                          Maximum Absolute Error (X / Y / Z)   Root-Mean-Square Error (X / Y / Z)
Direct georeferencing                                             211.65 / 88.63 / 386.82              84.57 / 44.50 / 169.27
Free network adjustment                                           13.88 / 12.58 / 60.91                7.06 / 5.09 / 26.11
Registration based on LiDAR intensity image (16 control points)   1.38 / 1.64 / 4.59                   0.67 / 0.61 / 1.98
Registration based on LiDAR intensity image (32 control points)   1.13 / 1.34 / 3.28                   0.59 / 0.55 / 1.49
Registration based on linear features (16 control lines)          0.67 / 0.76 / 1.89                   0.40 / 0.41 / 1.27
Compared with the traditional registration method using control points extracted from the LiDAR-derived intensity image and DSM, the registration method using the linear control features achieved a higher registration accuracy, which can be attributed to the higher accuracy and geometric strength of the linear features compared to the point features, as linear features are discontinuous in only one direction, while point features are discontinuous in all directions. In the experiments, the accuracy in the horizontal plane after registration reached a sub-pixel level, but the vertical accuracy was lower, which could be related to the good distribution of the control features in the horizontal plane and their poor distribution in the vertical direction, as all the control features were taken from building roofs, while the GPS check points included both building points and ground points.

5. Conclusions

Unmanned aerial vehicle remote sensing (UAVRS) has found applications in various fields, which can be attributed to its high flexibility in data acquisition and the interpretable visual texture of its optical images. However, the platform instability and the low accuracy of the position and attitude measurements result in large errors in direct georeferencing. As a complementary data source, LiDAR can provide accurate 3D information, though it is limited in its expression of object texture. Therefore, the integration of these two types of data is relevant and is expected to produce more accurate and higher-quality products, for which the registration of the two different types of datasets is the first problem that needs to be solved. This paper has introduced a semi-automatic approach for the linear feature based registration of UAVRS images and airborne LiDAR data. Two aspects of the accuracy were assessed: one was the discrepancy between the two datasets after registration, which can be regarded as the relative accuracy, and the other was the absolute accuracy, which was assessed by comparison with the external GPS-surveyed points. From the experiments and result analysis, several conclusions can be drawn, as follows.
(1) Compared with the traditional point based registration using the LiDAR intensity image, the linear feature based method, which directly uses the LiDAR 3D data as control, can provide a higher registration accuracy, reaching the sub-pixel level, and results in a higher absolute accuracy in object space positioning. This can be attributed to the higher accuracy and geometric strength of the linear control features extracted from the LiDAR data compared to the control points extracted from the LiDAR intensity image.
(2) The object space positioning error of the UAVRS images in the vertical direction was almost three times that in the horizontal plane after registration with the LiDAR data in the experiment, which may be attributed to two aspects. One is the limited image overlap of 65% along the flight direction and 35% across the flight direction; the other is that all the control features came from the line segments of the building roofs, while the check points included both building roof points and ground points. It can be expected that the vertical accuracy would be further improved if ground control features, such as linear features from roads, were available in addition to the building control features.
(3) As the linear features mainly come from manmade objects such as buildings and roads, the linear feature based registration strategy has advantages in urban areas, and it is also applicable to cases with a reasonable coverage of algorithmically extractable linear features, but it has limitations in fully natural environments, for which more investigation is needed to find an effective solution.
(4) In spite of the advantages of the linear feature based registration strategy, full automation of the registration still remains an open problem and needs further study to improve the efficiency in practical applications. In particular, effort should focus on automating the extraction of common features from the photogrammetric and LiDAR data, as well as the matching of the conjugate primitives.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Project No. 41401531, 41325005, 41171352), the Shanghai Sailing Program (Project No. 14YF1403300), the National Key Basic Research Program of China (973 Program) (Project No. 2012CB957701, 2012CB957704), the China Special Fund for Surveying, Mapping and Geoinformation Research in the Public Interest (Project No. 201412017), the Fund of the State Key Laboratory of Geographic Information Engineering (Project No. SKLGIE2014-M-3-3), and the Fundamental Research Funds for the Central Universities.

Author Contributions

Shijie Liu and Xiaohua Tong conceived the study, supervised the experiments, and edited the manuscript. Jie Chen and Xiangfeng Liu performed the experiments and drafted the manuscript. The other co-authors contributed with the analysis, discussion, and manuscript editing.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Eisenbeiss, H. UAV Photogrammetry; Institute of Geodesy and Photogrammetry, ETH Zurich: Zurich, Switzerland, 2009.
2. Nagai, M.; Chen, T.; Shibasaki, R.; Kumagai, H.; Ahmed, A. UAV-borne 3-D mapping system by multisensor integration. IEEE Trans. Geosci. Remote Sens. 2009, 47, 701–708.
3. Berni, J.; Zarco-Tejada, P.J.; Suárez, L.; Fereres, E. Thermal and narrowband multispectral remote sensing for vegetation monitoring from an unmanned aerial vehicle. IEEE Trans. Geosci. Remote Sens. 2009, 47, 722–738.
4. Restas, A. Forest fire management supporting by UAV based air reconnaissance results of Szendro Fire Department, Hungary. In Proceedings of the International Symposium on Environment Identities and Mediterranean Area, Corte-Ajaccio, France, 10–13 July 2006; Volume 10, pp. 73–77.
5. Wilkinson, B.E. The Design of Georeferencing Techniques for Unmanned Autonomous Aerial Vehicle Video for Use with Wildlife Inventory Surveys: A Case Study of the National Bison Range, Montana; University of Florida: Gainesville, FL, USA, 2007.
6. Li, N.; Huang, X.; Zhang, F.; Wang, L. Registration of aerial imagery and LiDAR data in desert areas using the centroids of bushes as control information. Photogramm. Eng. Remote Sens. 2013, 79, 743–752.
7. Cramer, M. Direct Geocoding-is Aerial Triangulation Obsolete? Fritsch, D., Spiller, R., Eds.; Wichmann Verlag: Heidelberg, Germany, 1999; pp. 59–70.
8. Perry, J.H.; Mohamed, A.; El-Rahman, A.H.; Bowman, W.S.; Kaddoura, Y.O.; Watts, A.C. Precision directly georeferenced unmanned aerial remote sensing system: Performance evaluation. In Proceedings of the Institute of Navigation National Technical Meeting, San Diego, CA, USA, 28–30 January 2008; pp. 680–688.
9. Ackermann, F. Airborne laser scanning—Present status and future expectations. ISPRS J. Photogramm. Remote Sens. 1999, 54, 64–67.
10. Wehr, A.; Lohr, U. Airborne laser scanning—An introduction and overview. ISPRS J. Photogramm. Remote Sens. 1999, 54, 68–82.
11. Baltsavias, E.P. A comparison between photogrammetry and laser scanning. ISPRS J. Photogramm. Remote Sens. 1999, 54, 83–94.
12. Ma, R. Building Model Reconstruction from LiDAR Data and Aerial Photographs. Ph.D. Thesis, The Ohio State University, Columbus, OH, USA, 2005.
13. Liu, X.; Zhang, Z.; Peterson, J.; Chandra, S. LiDAR-derived high quality ground control information and DEM for image orthorectification. GeoInformatica 2007, 11, 37–53.
14. James, T.D.; Murray, T.; Barrand, N.E.; Barr, S.L. Extracting photogrammetric ground control from LiDAR DEMs for change detection. Photogramm. Rec. 2006, 21, 312–328.
15. Barrand, N.E.; Murray, T.; James, T.D.; Barr, S.L.; Mills, J.P. Optimizing photogrammetric DEMs for glacier volume change assessment using laser-scanning derived ground-control points. J. Glaciol. 2009, 55, 106–116.
16. Rottensteiner, F.; Jansa, J. Automatic extraction of buildings from LiDAR data and aerial images. Proc. Intern. ISPRS 2002, 34, 295–301.
17. Cui, L.L.; Tang, P.; Zhao, Z.M. Study on object-oriented classification method by integrating various features. Remote Sens. 2006, 10, 104–110.
18. Syed, S.; Dare, P.; Jones, S. Automatic classification of land cover features with high resolution imagery and LiDAR data: An object oriented approach. In Proceedings of the SSC 2005 Spatial Intelligence, Innovation and Praxis: The National Biennial Conference of the Spatial Sciences Institute, Melbourne, Australia, 14–16 September 2005.
19. Park, J.Y.; Shrestha, R.L.; Carter, W.E.; Tuell, G.H. Land-cover classification using combined ALSM (LiDAR) and color digital photography. In Proceedings of the ASPRS Conference, St. Louis, MO, USA, 23–27 April 2001; pp. 23–27.
20. Mastin, A.; Kepner, J.; Fisher, J. Automatic registration of LiDAR and optical images of urban scenes. Comput. Vis. Pattern Recognit. 2009.
21. Brenner, C. Building reconstruction from images and laser scanning. Int. J. Appl. Earth Obs. 2005, 6, 187–198.
22. Zhang, F.; Huang, X.F.; Li, D.R. A review of registration of laser scanner data and optical image. Bull. Surv. Mapp. 2008, 2, 004.
23. Parmehr, E.G.; Fraser, C.S.; Zhang, C.; Leach, J. Automatic registration of optical imagery with 3D LiDAR data using statistical similarity. ISPRS J. Photogramm. Remote Sens. 2014, 88, 28–40.
24. Parmehr, E.G.; Zhang, C.; Fraser, C.S. Automatic registration of multi-source data using mutual information. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 7, 301–308.
25. Zhao, W.; Nister, D.; Hsu, S. Alignment of continuous video onto 3D point clouds. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1305–1318.
26. Rusinkiewicz, S.; Levoy, M. Efficient variants of the ICP algorithm. In Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, Quebec, QC, Canada, 28 May–1 June 2001; pp. 145–152.
27. Habib, A.; Ghanma, M.; Morgan, M.; Al-Ruzouq, R. Photogrammetric and LiDAR data registration using linear features. Photogramm. Eng. Remote Sens. 2005, 71, 699–707.
28. Habib, A.F.; Shin, S.; Kim, C.; al-Durgham, M. Integration of photogrammetric and LiDAR data in a multi-primitive triangulation environment. In Innovations in 3D Geo Information Systems; Springer: Berlin/Heidelberg, Germany, 2006; pp. 29–45.
29. Wong, A.; Orchard, J. Efficient FFT-accelerated approach to invariant optical–LiDAR registration. IEEE Trans. Geosci. Remote Sens. 2008, 46, 3917–3925.
30. Rönnholm, P.; Haggrén, H. Registration of laser scanning point clouds and aerial images using either artificial or natural tie features. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 1–3, 63–68.
31. Kwak, T.S.; Kim, Y.; Yu, K.Y.; Lee, B.K. Registration of aerial imagery and aerial LiDAR data using centroids of plane roof surfaces as control information. KSCE J. Civ. Eng. 2006, 10, 365–370.
32. Moravec, H.P. Rover visual obstacle avoidance. In Proceedings of the 7th International Joint Conference on Artificial Intelligence, Vancouver, BC, Canada, 24–28 August 1981; pp. 785–790.
33. Förstner, W.; Gülch, E. A fast operator for detection and precise location of distinct points, corners and centres of circular features. In Proceedings of the Intercommission Conference on Fast Processing of Photogrammetric Data, Interlaken, Switzerland, 2–4 June 1987; pp. 281–305.
34. Smith, S.M.; Brady, J.M. SUSAN—A new approach to low level image processing. Int. J. Comput. Vis. 1997, 23, 45–78.
35. Harris, C.; Stephens, M. A combined corner and edge detector. In Proceedings of the 4th Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988; pp. 147–151.
36. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; pp. 1150–1157.
37. Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 6, 679–698.
38. Kanopoulos, N.; Vasanthavada, N.; Baker, R.L. Design of an image edge detection filter using the Sobel operator. IEEE J. Solid State Circuits 1988, 23, 358–367.
39. Marr, D.; Hildreth, E. Theory of edge detection. Proc. R. Soc. Lond. Ser. B Biol. Sci. 1980, 207, 187–217.
40. Habib, A.F.; Morgan, M.; Lee, Y.R. Bundle adjustment with self-calibration using straight lines. Photogramm. Rec. 2002, 17, 635–650.
41. Marcato, J.J.; Tommaselli, A. Exterior orientation of CBERS-2B imagery using multi-feature control and orbital data. ISPRS J. Photogramm. Remote Sens. 2013, 79, 219–225.
42. Tong, X.; Li, X.; Xu, X.; Xie, H.; Feng, T.; Sun, T.; Jin, Y.; Liu, X. A two-phase classification of urban vegetation using airborne LiDAR data and aerial photography. IEEE J. Sel. Topics Appl. Earth Obs. Remote Sens. 2014, 7, 4153–4166.
43. Axelsson, P. Processing of laser scanner data—Algorithms and applications. ISPRS J. Photogramm. Remote Sens. 1999, 54, 138–147.
44. Axelsson, P. DEM generation from laser scanner data using adaptive TIN models. Int. Arch. Photogramm. Remote Sens. 2000, 33, 111–118.
45. Forlani, G.; Nardinocchi, C. Building detection and roof extraction in laser scanning data. Int. Arch. Photogramm. Remote Sens. 2001, 34, 319–328.
46. Borrmann, D.; Elseberg, J.; Lingemann, K.; Nuchter, A. The 3D Hough transform for plane detection in point clouds: A review and a new accumulator design. 3D Res. 2011, 2, 1–13.
47. Tarsha-Kurdi, F.; Landes, T.; Grussenmeyer, P. Hough-transform and extended RANSAC algorithms for automatic detection of 3D building roof planes from LiDAR data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Syst. 2007, 36, 407–412.
48. Duda, R.O.; Hart, P.E. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 1972, 15, 11–15.
49. Song, J.; Lyu, M.R. A Hough transform based line recognition method utilizing both parameter space and image space. Pattern Recognit. 2005, 38, 539–552.
50. Gruen, A. Adaptive least squares correlation: A powerful image matching technique. South Afr. J. Photogramm. Remote Sens. Cartogr. 1985, 14, 175–187.
51. Tong, X.; Liu, X.; Chen, P.; Liu, S.; Luan, K.; Li, L.; Liu, S.; Liu, X.; Xie, H.; Jin, Y.; et al. Integration of UAV-based photogrammetry and terrestrial laser scanning for the three-dimensional mapping and monitoring of open-pit mine areas. Remote Sens. 2015, 7, 6635–6662.
52. Li, X.; Cheng, G.D.; Liu, S.M.; Xiao, Q.; Ma, M.; Jin, R.; Che, T.; Liu, Q.; Wang, W.; Qi, Y.; et al. Heihe Watershed Allied Telemetry Experimental Research (HiWATER): Scientific objectives and experimental design. Bull. Am. Meteorol. Soc. 2013, 94, 1145–1160.
53. Xiao, Q.; Wen, J.G. HiWATER: Airborne LiDAR Raw Data in the Middle Reaches of the Heihe River Basin; Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences: Beijing, China, 2014.
