Article

Automatic In Situ Calibration of a Spinning Beam LiDAR System in Static and Kinematic Modes

Ting On Chan * and Derek D. Lichti
Department of Geomatics Engineering, University of Calgary, 2500 University Dr NW, Calgary, AB T2N 1N4, Canada
*
Author to whom correspondence should be addressed.
Remote Sens. 2015, 7(8), 10480-10500; https://doi.org/10.3390/rs70810480
Submission received: 23 June 2015 / Revised: 7 August 2015 / Accepted: 12 August 2015 / Published: 17 August 2015
(This article belongs to the Special Issue Lidar/Laser Scanning in Urban Environments)

Abstract

The Velodyne LiDAR series is one of the most popular spinning beam LiDAR systems currently available on the market. In this paper, the temporal stability of the range measurements of the Velodyne HDL-32E LiDAR system is first investigated as motivation for the development of a new automatic calibration method that allows quick and frequent recovery of the inherent time-varying errors. The basic principle of the method is that the LiDAR’s internal systematic error parameters are estimated by constraining point clouds of known and automatically-detected cylindrical features, such as lamp poles, to fit to 3D cylinder models. This is analogous to the plumb-line calibration method, in which the lens distortion parameters are estimated by constraining the image points of straight lines to fit to the 2D line model. The calibration can be performed at every measurement epoch in both static and kinematic modes. Four real datasets were used to verify the method, two of which were captured in static mode and the other two in kinematic mode. The overall results indicate that accuracy improvements of up to approximately 72% and 41% were realized as a result of the calibration for the static and kinematic datasets, respectively.


1. Introduction

Spinning multi-beam light detection and ranging (LiDAR) systems allow continuous acquisition of three-dimensional (3D) point clouds, which is important for many applications, such as unmanned vehicle navigation, mobile mapping and moving object tracking. The system is usually composed of an array of laser diodes that rotate continuously about the system’s central vertical axis. The Velodyne LiDAR series is one of the most popular of this kind; the HDL-32E is the subject of this paper. The HDL-32E is the second smallest LiDAR in the series, having a size comparable to a large standard coffee mug. It has been installed on many different platforms, including the Centaur2 and KRex robotic rovers [1] developed by the National Aeronautics and Space Administration (NASA) in the United States. In Canada, Clearpath Robotics Inc. uses the HDL-32E in their robotic vehicles for obstacle detection [2]. The HDL-32E is also widely used as the main optical sensor for mobile mapping systems (MMSs), for example: the Mandli MMS [3], the ScanLook MMS by LiDAR USA Inc. [4], and the VISAT™ system [5] jointly developed by the University of Calgary and its industrial partners. Due to its compact size, the HDL-32E can also be readily installed on a backpack to form a compact MMS [6]. For object tracking, Cho et al. [7] used the HDL-32E to track human walking trajectories, and Koppanyi and Toth [8] tracked a moving aeroplane by estimating its heading from the point clouds captured by the HDL-32E.
High LiDAR measurement accuracy is always desired for important applications such as deformation monitoring and rail monitoring. Often, sub-centimetre accuracy is required, and this is usually achieved by carrying out an in situ user calibration for every mission. Different calibration methodologies have been developed for the Velodyne LiDAR series. Muhammad and Lacroix [9] calibrated the HDL-64E S2 using manually-extracted wall surfaces; sub-centimetre improvements were found in the standard deviations of the check plane data after the calibration. Atanacio-Jiménez et al. [10] developed two large empty cuboid control targets of different sizes for the HDL-64E S2 calibration. Based on the manufacturer’s model, Glennie and Lichti [11] calibrated the HDL-64E S2 by conditioning groups of points lying on planar features. The calibration requires the LiDAR to be placed at different positions between several rectangular buildings, with distinct azimuths and inclinations. Temporal stability of the same LiDAR was reported in [12], with two sets of self-calibration results. The temporal analysis suggested that periodic recalibration is necessary for the LiDAR to maintain high accuracy. Glennie [13] performed a self-calibration of the HDL-64E S2 in kinematic mode using manually-extracted planes after the HDL-64E S2 was integrated into a terrestrial mobile mapping system. A similar plane-based self-calibration for the HDL-32E was reported in [6], in which the scanner was installed on a balloon as an airborne system. In order to automate the calibration of the Velodyne LiDAR, Chen and Chien [14] applied a random sample consensus (RANSAC) approach to extract vertical walls for their proposed calibration method. However, one limitation of this method is that the LiDAR has to be inclined to some predetermined angles to ensure the calibration planes are either parallel or orthogonal to the LiDAR’s rotation axis.
Using additional sensors to provide calibration references is another common approach for Velodyne LiDAR calibration. Gordon and Meidow [15] estimated the planar parameters for the calibration by using reference planar point clouds captured by another scanner of higher accuracy; the discrepancies between the HDL-64E’s point clouds and the reference point clouds of the planar features were minimized. Mirzaei et al. [16] and Gong et al. [17] developed LiDAR-camera calibration approaches for calibrating the HDL-64E S2 simultaneously with a Ladybug spherical camera system. The former captured a planar object with the HDL-64E S2 and a Ladybug2 camera in 40 different configurations to calibrate both sensors simultaneously. The latter used the Ladybug3 spherical camera to capture a trihedral object comprising two tilted vertical planes and one horizontal plane to calibrate both sensors. Park et al. [18] simultaneously calibrated the exterior orientation parameters (EOPs), i.e., the rotational and translational parameters, of an HDL-32E and the interior orientation parameters of an RGB two-dimensional (2D) camera using corresponding vertices extracted from several polygonal planar targets (triangular and rhomboidal boards).
These calibration methods require artificial targets, manually-extracted features, specific scanning orientations or additional sensors such as video cameras, requirements that limit the flexibility of performing the calibration. Furthermore, none of them specifically addresses the problem of temporal instability of multi-beam spinning LiDAR measurements. Systems with measurement instability should be calibrated frequently and automatically in order to maintain high point cloud accuracy over time during persistent data collection. In this paper, the temporal stability of the range measurement of the Velodyne HDL-32E is first investigated as motivation for the development of a new automatic calibration method that can be performed at every measurement epoch without the need to set up artificial targets, use manually-extracted features or rely upon additional sensors. The proposed method utilizes vertical cylindrical features, such as pillars and poles, which can be readily found in many urban scenes and are automatically extracted from the point clouds. The method is flexible as it can be performed with the LiDAR operating in either static or kinematic mode. Although planar features are currently the most common reference features for in situ calibration, there are scenes that contain no planar features, or only rough ones, but do contain cylindrical features. Therefore, a calibration method that is independent of planar features is desirable as an alternative. This is particularly true for calibration in kinematic mode, as lamp poles can be more readily found along highway corridors than façades.

2. The Velodyne HDL-32E and Its Temporal Stability

2.1. Velodyne HDL-32E LiDAR System

The HDL-32E was introduced in 2011 as an ultra-compact, more cost-effective version of the Velodyne HDL-64E S2. The HDL-32E has approximate dimensions of 8.5 cm × 8.5 cm × 15 cm (L × W × H) and a net weight of less than 2 kg. It comprises a vertical array of 32 radially-oriented laser diode rangefinders (Figure 1) installed on a small panel. The whole panel rotates about the z-axis continuously when power is connected. It has approximately 41.3° and 360° fields of view (FOV) in the vertical and horizontal directions, respectively. Its data capture rate is approximately 700,000 points/s at the 7 Hz pre-set spinning rate [19]; the spinning rate is adjustable in the latest version. The effective measurement range is from approximately 1 m to 70 m. A slimmed-down Velodyne version of the HDL-32E, the VLP-16, features 16 laser diode rangefinders and has a 30° vertical FOV.
According to the user manual [19], the positioning equation of the HDL-32E in the scanner frame (s-frame) for point i captured by laser j at scanning position k is given by:
$$
\begin{pmatrix} x_{ijk} \\ y_{ijk} \\ z_{ijk} \end{pmatrix} =
\begin{pmatrix}
\rho_{ijk} \cos(\alpha_j) \sin(\theta_{ijk}) \\
\rho_{ijk} \cos(\alpha_j) \cos(\theta_{ijk}) \\
\rho_{ijk} \sin(\alpha_j)
\end{pmatrix}
\qquad (1)
$$
where ρ and θ are the range and horizontal angular position (horizontal angle) observations, respectively, and α is the fixed vertical angle of laser j.
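As a concrete illustration, the short sketch below evaluates Equation (1) for a single return. It is a minimal Python/NumPy sketch; the function name and example values are our own choices, not part of the manufacturer’s software.

```python
import numpy as np

# Minimal sketch of Equation (1): convert one raw HDL-32E observation
# (range rho, horizontal angle theta) from a laser with fixed vertical
# angle alpha into s-frame coordinates. All angles are in radians.
def polar_to_sframe(rho: float, theta: float, alpha: float) -> np.ndarray:
    return np.array([
        rho * np.cos(alpha) * np.sin(theta),  # x
        rho * np.cos(alpha) * np.cos(theta),  # y
        rho * np.sin(alpha),                  # z
    ])

# Example: a 4.5 m return at theta = 30 deg from the laser at alpha = 0 deg
point = polar_to_sframe(4.5, np.radians(30.0), np.radians(0.0))
```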
Figure 1. Thirty-two radially-oriented lasers embedded inside the Velodyne HDL-32E, with the modified manufacturer laser labelling.

2.2. Temporal Stability of the Range Measurement

The temporal stability of the measurement of the scanner indicates how the errors vary with time and, thus, reflects how frequently the scanner should be calibrated. In order to independently observe the temporal stability of the HDL-32E, a 26 cm × 26 cm Spectralon target with 99% reflectance was placed approximately 2 m away from the LiDAR to collect measurements (ρ and θ) for a 2.5 h period (Figure 2a). Two different HDL-32E systems were tested.
Figure 2. Experimental setup for examining the temporal stability of the range measurement (a) and its diagram on the xy-plane (b).
The LiDAR was inclined several times in order to capture all 32 lasers’ measurements to the target. The averaged range measurements of all the 32 lasers between the LiDAR and the centre of the target within a small angular window (0.25° as seen in Figure 2b) are plotted in Figure 3. A thirty-minute warm-up period was observed between acquisition periods for any two inclination settings. The temporal stability of the horizontal angle is difficult to quantify since a thin vertical reference object cannot be captured. Therefore, only the range stability is discussed herein.
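For clarity, a minimal sketch of this averaging step is given below, assuming per-return arrays of horizontal angles and ranges for one laser; the 0.25° window comes from Figure 2b, while the array layout and function name are hypothetical.

```python
import numpy as np

# Hypothetical sketch: average the ranges from one laser that fall inside
# the 0.25 deg horizontal angular window centred on the target (Figure 2b).
def mean_range_in_window(theta_deg, rho_m, centre_deg, window_deg=0.25):
    mask = np.abs(theta_deg - centre_deg) <= window_deg / 2.0
    return rho_m[mask].mean() if mask.any() else np.nan
```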
Figure 3. Range measurement for a fixed Spectralon target versus time. The range measurements are plotted in groups of eight adjacent rangefinders according to the numbering system shown in Figure 1.
As can be seen from Figure 3, none of the lasers has stable range measurements over the 2.5 h period. Many of them fluctuate by more than 0.05 m, which is at least 3 times greater than the 1.42 cm thickness of the Spectralon target assembly, and at least one varies by 0.10 m. Randomly-occurring transients are visible in many of the time series. Similar range measurement trends were observed when the experiment was repeated on the second HDL-32E. Consequently, self-calibration methods that assume the errors are time-invariant may not be able to provide an optimal calibration solution for a spinning LiDAR. Therefore, a calibration method that allows frequent estimation of the systematic range error is required.

3. Proposed Automatic Calibration Methodology

An in situ calibration method that can be performed rapidly and frequently to correct the point cloud collected at each epoch is proposed. The principle of the proposed method is that the error parameters, along with the model parameters of the cylindrical features, are estimated by constraining the corresponding point observations to fit the vertical cylinder model [20] according to the weighted least-squares criterion. The cylindrical features, such as pillars and poles, can be automatically identified and extracted from a scene.
This principle is similar to that of the plumb-line calibration for cameras, in which lens distortion parameters are estimated by constraining image coordinate measurements of linear features to fit the straight-line model [21]. A cylindrical pillar and a lamp pole can be treated as a magnified plumb-line (Figure 4) as they can be modelled as linear features [22]. Based on this analogy, structural concrete pillars (Figure 5) can be used for the proposed calibration with static point clouds while roadside lamp/electrical poles (Figure 6) are used for calibration with kinematic point clouds captured by an MMS.
Figure 4. (a) Plumb line, (b) pillar, (c) lamp poles.
Figure 5. Proposed automatic calibration method in static mode.
The proposed calibration in static mode is potentially useful for object tracking and monitoring over a long period of time. For example, the calibration may help enhance people-tracking accuracy [23,24] with the Velodyne scanner in a scene having cylindrical pillars. On the other hand, since the Velodyne LiDAR systems can capture point clouds persistently, they are more often installed on mobile platforms such as vehicles for navigation or mobile mapping. The calibration in the kinematic mode should be performed as frequently as possible to maximize the system’s accuracy but without relying upon artificial reference targets distributed along the system’s trajectory. Lamp/electrical poles are abundant and evenly distributed in road corridors, so they can be used as reference objects to allow frequent calibration. Similar to the calibration in static mode, the kinematic mode calibration is also performed in the s-frame. The proposed calibration method is supported by a novel automatic cylinder segmentation technique from the Velodyne point cloud, which is described in Section 3.3.
Due to the similarity of the proposed method to the plumb-line calibration, it inherits several important advantages: (1) only one single instrument station is needed for calibration; (2) only one cylindrical feature is required, although more can be used; (3) no overlap of the feature point clouds is needed; and (4) no a priori information about the scanner EOPs is needed. The following advantages are specific to the kinematic mode calibration: (1) only data captured in one drive line are needed, as no overlap of the feature point clouds is required; and (2) global navigation satellite system (GNSS)/inertial measurement unit (IMU) measurements are not needed, since the calibration is performed in the s-frame.
Figure 6. Proposed automatic calibration method in kinematic mode.

3.1. Functional Model for the Calibration

The measured 3D point coordinates from the segmented pillars/poles are constrained, via a functional model augmented with additional parameters (APs), to fit the cylindrical model according to the weighted least-squares criterion. For m cylinders used to calibrate n lasers at time t, the calibration parameter vector is
$$
\mathbf{X} = \begin{bmatrix} \mathbf{x}_1^T & \cdots & \mathbf{x}_m^T & \mathbf{x}_{\Delta\rho}^T & \mathbf{x}_{\Delta\theta}^T \end{bmatrix}_t^T \qquad (2)
$$

where \( \mathbf{x}_q = \begin{bmatrix} x_{c_q} & y_{c_q} & \omega_q & \varphi_q & r_q \end{bmatrix}_t^T \) is the model parameter vector for cylinder q, and

$$
\mathbf{x}_{\Delta\rho} = \begin{bmatrix} \Delta\rho_1 & \cdots & \Delta\rho_n \end{bmatrix}_t^T \qquad (3)
$$

$$
\mathbf{x}_{\Delta\theta} = \begin{bmatrix} \Delta\theta_1 & \cdots & \Delta\theta_n \end{bmatrix}_t^T \qquad (4)
$$
are the vectors of the rangefinder offsets and horizontal angle offsets, respectively. All range and angular offset parameters are modelled as individual coefficients [11]. The functional model for point i, with raw observations \( \mathbf{l}_{ijk} = \begin{bmatrix} \rho & \theta \end{bmatrix}_{ijk}^T \) for laser j at position k, lying on the surface of cylinder q at time t, is given by:

$$
f(\mathbf{l}_{ijk}, \mathbf{x}_q, \Delta\rho_j, \Delta\theta_j) = x_{ijk}^2 + y_{ijk}^2 - r_q^2 \qquad (5)
$$

where

$$
\begin{pmatrix} x_{ijk} \\ y_{ijk} \\ z_{ijk} \end{pmatrix} =
\mathbf{R}_2(\varphi_q)\, \mathbf{R}_1(\omega_q)\, \mathbf{R}
\begin{pmatrix}
(\rho_{ijk} - \Delta\rho_j) \cos(\alpha_j) \sin(\theta_{ijk} - \Delta\theta_j) - x_{c_q} \\
(\rho_{ijk} - \Delta\rho_j) \cos(\alpha_j) \cos(\theta_{ijk} - \Delta\theta_j) - y_{c_q} \\
(\rho_{ijk} - \Delta\rho_j) \sin(\alpha_j)
\end{pmatrix}
\qquad (6)
$$
where \((x_c, y_c)\) are the centre coordinates of the cylinder, \(\omega\) and \(\varphi\) are the tilt angles of the cylinder about the x-axis and y-axis, respectively, and \(r_q\) is the radius of the cylinder. The vertical angle, \(\alpha\), is not estimated in the proposed calibration; frequent calibration of the vertical angles is not necessary, since the laser diodes are rigidly mounted at fixed vertical angles during system assembly. If conical poles are used, a gradient factor, k, should be included and estimated in Equation (5), where \(r_q\) is replaced by \((r_q - k z_{ijk})\) [25]. For calibration in kinematic mode, \(\mathbf{R}\) is the boresight angle matrix for the rotation between the s-frame and the body frame (b-frame), \(\mathbf{R} = \mathbf{R}_s^b\). (The boresight angle matrix is system dependent; in our case, \(\mathbf{R}_s^b = \mathbf{R}_3(y')\mathbf{R}_1(r')\mathbf{R}_2(p')\), where r', p' and y' are roll, pitch and yaw, respectively.) For calibration in static mode, \(\mathbf{R} = \mathbf{I}\). The boresight matrix can be obtained from an independent plane-based system calibration [26,27].
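To make the observation equation of the adjustment concrete, the sketch below evaluates the misclosure of Equations (5) and (6) for a single point in static mode (R = I). The rotation-matrix sign conventions shown are one common choice and may differ from the authors’ implementation; all helper names are ours.

```python
import numpy as np

def rot_x(w):  # R1: rotation about the x-axis (one common convention)
    c, s = np.cos(w), np.sin(w)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,   s],
                     [0.0,  -s,   c]])

def rot_y(p):  # R2: rotation about the y-axis
    c, s = np.cos(p), np.sin(p)
    return np.array([[  c, 0.0,  -s],
                     [0.0, 1.0, 0.0],
                     [  s, 0.0,   c]])

# Misclosure of Equations (5)-(6) for one point on cylinder q, static mode.
def cylinder_misclosure(rho, theta, alpha, d_rho, d_theta,
                        xc, yc, omega, phi, r):
    rc, th = rho - d_rho, theta - d_theta       # corrected observations
    p = np.array([rc * np.cos(alpha) * np.sin(th) - xc,
                  rc * np.cos(alpha) * np.cos(th) - yc,
                  rc * np.sin(alpha)])
    x, y, _ = rot_y(phi) @ rot_x(omega) @ p     # Equation (6) with R = I
    return x**2 + y**2 - r**2                   # Equation (5); zero on fit
```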

3.2. Calibration Configuration

Since the calibration is done from only one instrument location, it is not possible to simultaneously estimate both the cylinder position and all rangefinder offset parameters. Likewise, it is not possible to estimate the cylinder orientation and all angular offset parameters. The analogous situation in plumb-line calibration is the inability to simultaneously estimate the principal point coordinates and the line parameters. Four constraints must therefore be added to overcome the associated rank defects in the single-station, spinning beam LiDAR calibration. Consequently, two sets of the offset parameters, those at the maximum and minimum elevation angles as illustrated in Figure 7, are held fixed, while the others are rigorously estimated along with the cylinder parameters in the calibration adjustment. The choice of which lasers to constrain was made by analyzing the condition number (Equation (7)) of the adjustment normal-equations matrix (N) for different choices. The outermost pair of lasers (shown in red in Figure 7) is constrained, as this choice gives the smallest condition number and thus results in the most rigorous parameter estimation.
Figure 7. Configuration for constrained lasers for the calibration (red for the constrained lasers, blue for the lasers with error offsets estimated in the calibration).
$$
\mathrm{cond}(\mathbf{N}) = \lVert \mathbf{N} \rVert \, \lVert \mathbf{N}^{-1} \rVert \qquad (7)
$$
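A sketch of this diagnostic is given below, assuming N has already been assembled for a candidate choice of constrained lasers; NumPy’s built-in cond gives the same quantity directly.

```python
import numpy as np

# Equation (7): condition number of the normal-equations matrix N,
# used here to compare candidate pairs of constrained lasers.
def condition_number(N: np.ndarray) -> float:
    return np.linalg.norm(N, 2) * np.linalg.norm(np.linalg.inv(N), 2)

# Equivalent built-in: np.linalg.cond(N, 2)
```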

3.3. Cylindrical Feature Detection for the Calibration

In [28,29,30], the point clouds of pillars and poles were broken down into multiple horizontal slices for processing and recognition. This horizontal slice decomposition principle is particularly efficient for processing Velodyne point clouds, as they already comprise slices (i.e., layers of points) due to the fixed radial orientation of the individual lasers. In our approach, the cylindrical objects consist of multiple circular slices (or nearly circular slices, due to the errors) that are detected by the generalized Hough transform (GHT; [31]). The GHT is used because it is generally robust to incompleteness and distortion of the target shapes and to the presence of noise; uncorrected Velodyne point cloud slices are subject to all of these conditions. For detecting circles or arcs in a 2D image using the GHT, the candidate segment coordinates are transformed into sets of circle centre coordinates and radii, and those segments with high centre and radius counts are accepted as circles/arcs. Although cylindrical features can be detected in point clouds by directly transforming the whole point cloud into a five-dimensional (5D) Hough space, this is computationally inefficient. Rabbani and van den Heuvel [32] decomposed the 5D space into two sub-spaces (a 2D space for the cylinder rotations and a 3D space for the centre and the radius) in order to improve the search efficiency for detecting cylinders in point clouds. However, their approach is not as straightforward as the slice-based Hough circle detection method, which is nearly a 2D search (given an approximate radius as input; otherwise, it is a 3D search). The advantage of the proposed method is that the computational effort can be greatly reduced by processing points in 2D for the circle identification instead of in 3D.
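The sketch below illustrates the nearly-2D search for a single slice, assuming an approximate radius is supplied and the slice is an (N, 2) NumPy array; the grid resolution and vote-casting details are illustrative choices rather than the paper’s exact implementation.

```python
import numpy as np

# Illustrative Hough-style circle detection for one horizontal slice:
# each 2D point votes for every centre lying at `radius` from it, and
# the accumulator cell with the most votes is the best circle centre.
def hough_circle_centre(points_xy, radius, cell=0.02, n_angles=90):
    mins = points_xy.min(axis=0) - radius
    maxs = points_xy.max(axis=0) + radius
    shape = tuple(np.ceil((maxs - mins) / cell).astype(int) + 1)
    acc = np.zeros(shape, dtype=int)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for x, y in points_xy:
        cx = x + radius * np.cos(angles)   # candidate centres lie on a
        cy = y + radius * np.sin(angles)   # circle around each point
        idx = np.floor((np.c_[cx, cy] - mins) / cell).astype(int)
        for i, j in idx:
            acc[i, j] += 1
    i, j = np.unravel_index(acc.argmax(), acc.shape)
    return mins + np.array([i, j]) * cell, int(acc.max())
```

Slices whose peak vote count is high relative to the number of points in the segment would then be kept as circle/arc candidates, mirroring the acceptance rule described above.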
The key steps of the proposed cylindrical object segmentation method for the static data are depicted in Figure 8. The Velodyne point cloud is generated from the raw observations (Figure 8a). The horizontal layer of points (the layer captured by the laser at 0° vertical angle) is extracted, and a range threshold is applied to keep the points within the area of interest (Figure 8b). The point cloud layer is then resampled as a 2D edge image. Since cylinders appear as circular arcs in the 2D image, they can be detected with the GHT, as shown in Figure 8c. This is followed by the radius histogram check method [33], applied to each detected circle to determine whether it has been over-segmented by the GHT (Figure 8d). Finally, the detected circle in the horizontal layer is buffered and projected into the other data layers to form a 3D cylinder window and allow segmentation of the cylinder from the point cloud (the green dashed lines in Figure 8e). This is followed by RANSAC cylinder fitting to remove outliers; the final result is shown in Figure 8f.
One of the challenges in segmenting poles from kinematic point clouds is that the point density is generally lower due to the small radius of the poles and the short scanning time of the moving system. The low point density creates ambiguity in the GHT’s centre and radius votes, so another approach should be used instead. A variant of the proposed method, based on least-squares circle fitting for recognizing the low-density circular arcs, was therefore adopted, as sketched below. The least-squares circle fitting was applied to individual point layer segments obtained by basic region growing, in which points separated by a Euclidean distance of less than a threshold (5 cm) were grouped into a segment. Similar to the pillar detection shown in Figure 8, the key steps of the proposed method for detecting poles from the kinematic Velodyne point cloud at each epoch are depicted in Figure 9.
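For the low-density kinematic arcs, a direct algebraic circle fit is one way to realize the least-squares step; the sketch below uses the classic Kåsa formulation, which the paper does not name explicitly, so treat the exact variant as an assumption.

```python
import numpy as np

# Algebraic (Kasa) least-squares circle fit: solve the linear system
# x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c), then recover the centre
# and radius. Suitable as a first fit for short, low-density arcs.
def fit_circle(points_xy):
    x, y = points_xy[:, 0], points_xy[:, 1]
    A = np.c_[x, y, np.ones_like(x)]
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    xc, yc = -a / 2.0, -b / 2.0
    radius = np.sqrt(xc**2 + yc**2 - c)
    return xc, yc, radius
```

A segment whose fitted radius falls within the expected range of pole radii would then be treated as a candidate pole arc (Figure 9c).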
Figure 8. Proposed vertical cylinder segmentation for the static Velodyne point cloud. (a) whole point cloud; (b) point layer extraction; (c) Hough circle detection; (d) radius examination; (e) cylinder extraction; (f) segmented cylinders (pillars).
Figure 9. Proposed vertical cylinder segmentation for the kinematic Velodyne point cloud. (a) whole point cloud; (b) segmentation using range thresholds; (c) candidate arc identification; (d) pole arc segmentation; (e) pole extraction; (f) segmented poles.

4. Experiment

4.1. Static Mode Calibration Dataset

Two datasets were captured by the same HDL-32E at two different locations on the University of Calgary campus (Figure 10a,b). Both scenes have four large concrete pillars (with radii of approximately 40 cm and 50 cm, respectively). The pillars were successfully segmented using the proposed method at one-second intervals over a 10 s period. The average distance between the scanner and the pillars was approximately 4.5 m in both datasets, and the recognition rate was 100% for all four pillars in both Datasets 1 and 2. The extracted cylinders from each second of data were then passed to the proposed calibration in static mode.
Figure 10. Two scenes containing vertical pillars at the University of Calgary campus, Canada: (a) Dataset 1; (b) Dataset 2.

4.2. Kinematic Mode Calibration Dataset

The Velodyne HDL-32E was installed on the VISAT™ [5] mobile mapping system. Two datasets (Datasets 3 and 4) of two different road scenes containing cylindrical features (shown in Figure 11a,b, respectively) were captured by the system. The data collection was performed in the City of Calgary, Canada, at around 51°3′6″N, 114°5′41″W. Ten epochs (10 spinning rotations, 1 epoch ≈ 0.14 s) from each dataset were calibrated with the proposed method in kinematic mode. The system travelled about 21 m at a speed of 50 km/h for both Datasets 3 and 4. For Dataset 3, two cylinders and one cone were recognized on average in each epoch over a two-second period; the overall recognition rate was approximately 89% (40 of the 45 available poles were recognized) within the calibration zone (8 m radial distance from the b-frame centre). For Dataset 4, two cylinders and one cone were likewise recognized on average in each epoch over a two-second period, with an overall recognition rate of approximately 92% (35 of the 38 available poles were recognized). The recognition rates were lower than in the static case because only small portions of some poles were scanned due to obstacles attached to the poles, such as traffic lamp panels.
Figure 11. Two scenes containing lamp poles in the City of Calgary, Canada: (a) Dataset 3; (b) Dataset 4.

5. Results and Discussion

5.1. Static Mode Calibration

5.1.1. Estimated Parameters

For both static Datasets 1 and 2, the estimated error terms and their corresponding precisions for the ten-second period are plotted in Figure 12, Figure 13, Figure 14 and Figure 15. It can be seen in Figure 12a,b that for Dataset 1, Δρ varies from several millimetres to approximately 1 cm (Laser 14, α = −1.33°) over the ten-second period. The variations of the estimated Δρ for each laser do not follow a specific trend over the period. The precisions vary within a 0.5 mm interval and are slightly lower at the highest and lowest vertical angles, as the corresponding range observations to the vertical cylinders are longer.
For the same dataset (Dataset 1), the variation of the estimated Δθ is up to approximately 0.05° (Laser 30, α = 9.33°) over the ten-second period, as shown in Figure 13a. The corresponding precision, shown in Figure 13b, follows a trend similar to that of the Δρ precision, and the values fall within a 0.007° window. Zig-zag patterns can be observed in the precision of both Δρ and Δθ. They are caused by differences in the point density of the cylindrical surfaces captured by neighbouring lasers. The fluctuation in precision is approximately 0.5 mm and 0.005° for Δρ and Δθ, respectively.
For the static calibration of Dataset 2, the estimated Δρ of the lasers vary within a slightly larger window (1.3 cm) compared to the 1 cm window of Dataset 1, and the corresponding precisions vary with a similar trend and range, as shown in Figure 14. Similar to the horizontal angle offset result of Dataset 1, the estimated Δθ of the lasers vary within a range of 0.04°, with precisions varying within a range of 0.07° (Figure 15). Comparing the results of the two datasets, the estimated error parameters are consistent in terms of order of magnitude. The results are also consistent, in terms of order of magnitude, with those reported by Glennie and Lichti [12] for another Velodyne scanner (the Velodyne HDL-64E S2). Their results were obtained by performing two calibrations at two different times; the lasers’ rangefinder offsets varied by up to 3 cm, while the horizontal angle offsets varied within a 0.02° window.
Figure 12. Estimated Δρ for Dataset 1 over a 10 s period ((a), top) and the standard deviations of the estimated Δρ ((b), bottom).
Figure 13. Estimated Δθ for Dataset 1 over a 10 s period ((a), top) and the standard deviations of the estimated Δθ ((b), bottom).
Figure 14. Estimated Δρ for Dataset 2 over a 10 s period ((a), top) and the standard deviations of the estimated Δρ ((b), bottom).
Figure 15. Estimated Δθ for Dataset 2 over a 10 s period ((a), top) and the standard deviations of the estimated Δθ ((b), bottom).

5.1.2. Calibration Accuracy

To verify the accuracy of the proposed calibration method in static mode, the estimated parameters were used to reconstruct the point clouds of four check planes lying outside the calibration zone in Datasets 1 and 2. The check planes are known, natural, planar objects in the scenes, and the corresponding point clouds were extracted manually using Leica Cyclone 7.0.2. Least-squares plane fitting was used to quantify the accuracy of the reconstructed check planes, which improved after the calibration in all cases. For both datasets, the root mean square (RMS) values of the check plane misclosures for the individual laser with the highest RMS improvement rate over the ten-second period are shown in Table 1. An improvement of up to approximately 2 cm was achieved, and the improvement rates are very significant, reaching 82.4%. The highest averaged improvement rate is 71.7%.
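For reference, this check amounts to a least-squares plane fit followed by the RMS of the orthogonal residuals; the SVD-based sketch below is a standard way to compute it, not necessarily the authors’ exact code.

```python
import numpy as np

# Fit a plane to the reconstructed check-plane points by total least
# squares (via SVD) and report the RMS of the orthogonal misclosures.
def check_plane_rms(points_xyz: np.ndarray) -> float:
    centred = points_xyz - points_xyz.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    residuals = centred @ vt[-1]   # distances along the plane normal
    return float(np.sqrt(np.mean(residuals**2)))
```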
Table 1. RMS of the check plane fitting misclosures for an individual laser in static calibration.

        -------------- Dataset 1 ---------------   -------------- Dataset 2 ---------------
Epoch   Before (m)   After (m)   Improvement (%)   Before (m)   After (m)   Improvement (%)
1       0.0220       0.0066      67.0              0.0172       0.0055      65.8
2       0.0220       0.0066      67.0              0.0272       0.0066      82.4
3       0.0238       0.0068      71.4              0.0180       0.0059      66.1
4       0.0238       0.0068      71.4              0.0299       0.0076      80.6
5       0.0232       0.0086      62.7              0.0179       0.0060      73.0
6       0.0217       0.0078      64.1              0.0276       0.0060      79.8
7       0.0248       0.0087      65.2              0.0190       0.0068      64.5
8       0.0248       0.0087      65.2              0.0250       0.0076      69.7
9       0.0254       0.0079      69.0              0.0189       0.0060      64.8
10      0.0254       0.0083      69.0              0.0273       0.0071      70.7
Mean improvement (%)             67.8                                       71.7

5.2. Kinematic Mode Calibration

5.2.1. Estimated Parameters

The estimated parameters and their precisions for Datasets 3 and 4 are shown in Figure 16 and Figure 17, respectively, in error bar representation. As can be seen, some of the parameters at some high/low elevation angles could not be estimated, since laser returns were not received from the pole for every laser beam while the system was travelling. As the system was moving, the ranges, scanning angles, incidence angles, the number of poles detected, and the number of lasers capturing the same pole all varied. These factors affect the accuracy and precision of the estimated parameters epoch by epoch. Therefore, the overall variations in the estimated parameters and their precisions are significant and no specific variation trend can be observed. However, most of the estimated rangefinder offsets for both datasets are bounded by ±5 cm, while the estimated horizontal angle offsets are bounded by ±0.3°. The estimated rangefinder offsets have the same order of magnitude as those estimated in the static calibration. For the horizontal angle offset, the kinematic calibration estimates are mostly at least four times larger than those from the static calibration. Moreover, when the data redundancy is reduced in the horizontal direction (the poles have small radii), the beam divergence issue becomes more significant, resulting in poorer estimation of the horizontal angle offsets.
Figure 16. Estimated parameters and their precisions by the proposed calibration in kinematic mode for Dataset 3 over 10 epochs (10 rotations, ~1.4 s).
Figure 17. Estimated parameters and their precisions by the proposed calibration in kinematic mode for Dataset 4 over 10 epochs (10 rotations, ~1.4 s).

5.2.2. Calibration Accuracy

Similar to the evaluation of the static calibration, two to four check planes extracted from each epoch were used to evaluate the accuracy of the proposed method in kinematic mode. The RMS values of the check plane misclosures for the individual laser with the highest RMS improvement rate in Datasets 3 and 4 are tabulated in Table 2. Across both datasets, improvements in the RMS of up to about 62% and 7 mm were realized. The highest overall accuracy improvement rate is about 40%, which is approximately 60% of that of the static case. This lower accuracy improvement rate can be mainly attributed to the much lower data redundancy in the horizontal direction due to the small radius of the poles.
Table 2. RMS of the check plane fitting misclosures for an individual laser in kinematic calibration.

        -------------- Dataset 3 ---------------   -------------- Dataset 4 ---------------
Epoch   Before (m)   After (m)   Improvement (%)   Before (m)   After (m)   Improvement (%)
1       0.0621       0.0489      21.2              0.0224       0.0120      46.4
2       0.0320       0.0238      25.6              0.0216       0.0151      30.1
3       0.0211       0.0143      32.3              0.0298       0.0130      56.4
4       0.0298       0.0162      45.6              0.0224       0.0189      15.6
5       0.0328       0.0228      30.5              0.0218       0.0105      60.0
6       0.0090       0.0043      52.6              0.0147       0.0065      56.2
7       0.0133       0.0074      44.7              0.0082       0.0063      22.6
8       0.0196       0.0122      37.9              0.0157       0.0076      51.3
9       0.0205       0.0081      60.7              0.0282       0.0245      13.1
10      0.0119       0.0045      62.1              0.0372       0.0182      51.1
Mean improvement (%)             41.3                                       39.5

6. Conclusions

In this paper, a novel cylinder-based automatic in situ calibration method for a multi-beam spinning LiDAR system, the Velodyne HDL-32E, is proposed for both static and kinematic applications. The temporal stability of the HDL-32E was first investigated; this motivated the development of the new calibration method, which supports rapid and frequent recovery of the error parameters for such a system. The proposed method uses cylindrical features detected around the system as calibration references. A novel cylindrical feature detection method, with a variant for kinematic data, was also proposed, based on 2D circular arc detection in the point cloud slices captured by the individual lasers embedded in the LiDAR. For indoor applications, the LiDAR can be calibrated using circular pillars; for outdoor mobile mapping applications, it can be calibrated using roadside lamp/electrical poles along the system trajectory. The basic principle of the proposed method is analogous to that of the plumb-line calibration of cameras, so it inherits the major merits of the plumb-line method. The proposed method was verified with four real datasets (two in static mode and two in kinematic mode). The overall calibration accuracy was improved; accuracy improvement rates of up to approximately 72% and 41% were achieved for the calibration in static and kinematic modes, respectively. Even though the proposed method is a post-processing technique, it could potentially be implemented for real-time calibration; this can be the focus of future research.

Acknowledgments

Tecterra Inc. in Canada is thanked for providing both financial and technical support. Absolute Mapping Solution Inc. in Canada is also thanked for its technical support. The research was partially supported by the Werner Graupe International Fellowship in Engineering, and the Natural Sciences and Engineering Research Council (NSERC).

Author Contributions

The research was conducted by Ting On Chan under direct supervision of Derek D. Lichti. The tasks for the research include the design and implementation of the proposed methods, data collection, and result analyses. Both authors have drafted, edited and reviewed the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pedersen, L.; Allan, M.; Utz, H.; Deans, M.; Bouyssounouse, X.; Choi, Y.; Flückiger, L.; Lee, S.Y.; To, V.; Loh, J.; et al. Tele-Operated Lunar Rover Navigation Using LiDAR. 2012. Available online: http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20120016849_2012017669.pdf (accessed on 15 June 2015). [Google Scholar]
  2. Clearpath Robotics Resources. Available online: http://www.clearpathrobotics.com/resources/brochures/ (accessed on 15 June 2015).
  3. Mandli Communication LiDAR: Advanced asset management and visualization. Available online: http://www.mandli.com/lidar/ (accessed on 15 June 2015).
  4. LiDAR USA ScanLook System Specification. Available online: http://lidarusa.com/images/ScanLookSpecification.pdf (accessed on 15 June 2015).
  5. VISATTM Van Mobile Mapping System. Available online: http://www.amsvisat.com/VVan/documents/vvan2006_letter_lowres.pdf (accessed on 15 June 2015).
  6. Glennie, C.; Brooks, B.; Ericksen, T.; Hauser, D.; Hudnut, K.; Foster, J.; Avery, J. Compact multipurpose mobile laser scanning system—Initial tests and results. Remote Sens. 2013, 5, 521–538. [Google Scholar] [CrossRef]
  7. Cho, K.; Baeg, S.; Park, S. Object tracking with enhanced data association using a 3D range sensor for an unmanned ground vehicle. J. Mech. Sci. Technol. 2014, 28, 4381–4388. [Google Scholar] [CrossRef]
  8. Koppanyi, Z.; Toth, C.K. Estimating aircraft heading based on laserscanner derived point clouds. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, II-3/W4, 95–102. [Google Scholar] [CrossRef]
  9. Muhammad, N.; Lacroix, S. Calibration of a rotating multi-beam LiDAR. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Toulouse, France, 18–22 October 2010; pp. 5648–5653.
  10. Atanacio-Jiménez, G.; González-Barbosa, J.-J.; Hurtado-Ramos, J.B.; Francisco, J.; Jiménez-Hernández, H.; García-Ramirez, T.; González-Barbosa, R. LiDAR Velodyne HDL-64E calibration using pattern planes. Int. J. Adv. Robot. Syst. 2011, 8, 70–82. [Google Scholar] [CrossRef]
  11. Glennie, C.; Lichti, D.D. Static calibration and analysis of the Velodyne HDL-64E S2 for high accuracy mobile scanning. Remote Sens. 2010, 2, 1610–1624. [Google Scholar] [CrossRef]
  12. Glennie, C.; Lichti, D.D. Temporal stability of the Velodyne HDL-64E S2 scanner for high accuracy scanning applications. Remote Sens. 2011, 3, 539–553. [Google Scholar] [CrossRef]
  13. Glennie, C. Calibration and kinematic analysis of the Velodyne HDL-64E S2 LiDAR sensor. Photogramm. Eng. Remote Sens. 2012, 78, 1–9. [Google Scholar] [CrossRef]
  14. Chen, C.-Y.; Chien, H.-J. On-site sensor recalibration of a spinning multi-beam LiDAR system using automatically-detected planar targets. Sensors 2012, 12, 13736–13752. [Google Scholar] [CrossRef] [PubMed]
  15. Gordon, M.; Meidow, J. Calibration of a multi-beam laser system by using a TLS-generated reference. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2013, II-5/W2, 85–90. [Google Scholar] [CrossRef]
  16. Mirzaei, F.M.; Kottas, D.G.; Roumeliotis, S.I. 3D LiDAR-camera intrinsic and extrinsic calibration: Observability analysis and analytical least squares-based initialization. Int. J. Robot. Res. 2012, 31, 452–467. [Google Scholar] [CrossRef]
  17. Gong, X.; Lin, Y.; Liu, J. 3D LiDAR-camera extrinsic calibration using an arbitrary trihedron. Sensors 2013, 13, 1902–1918. [Google Scholar] [CrossRef] [PubMed]
  18. Park, Y.; Yun, S.; Won, C.S.; Cho, K.; Um, K.; Sim, S. Calibration between color camera and 3D LiDAR instruments with a polygonal planar board. Sensors 2014, 14, 5333–5353. [Google Scholar] [CrossRef] [PubMed]
  19. Velodyne HDL-32E User’s Manual. 2015. Available online: http://velodynelidar.com/lidar/products/manual/63-9113%20HDL-32E%20manual_Rev%20E_NOV2012.pdf (accessed on 11 May 2015).
  20. Chan, T.O.; Lichti, D.D.; Belton, D. A rigorous cylinder-based self-calibration approach for terrestrial laser scanners. ISPRS J. Photogramm. Remote Sens. 2015, 99, 84–99. [Google Scholar] [CrossRef]
  21. Brown, D.C. Close-range camera calibration. Photogramm. Eng. 1971, 37, 855–866. [Google Scholar]
  22. Lari, Z.; Habib, A. An adaptive approach for the segmentation and extraction of planar and linear/cylindrical features from laser scanning data. ISPRS J. Photogramm. Remote Sens. 2014, 93, 192–212. [Google Scholar] [CrossRef]
  23. Spinello, L.; Luber, M.; Arras, K.O. Tracking people in 3D using a bottom-up top-down detector. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 9–13 May 2011; pp. 1304–1310.
  24. Shackleton, J.; Vanvoorst, B.; Hesch, J. Tracking people with a 360-degree LiDAR. In Proceedings of the Seventh IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Boston, MA, USA, 29 August–1 September 2010; pp. 420–426.
  25. Chan, T.O.; Lichti, D.D. Geometric modelling of octagonal lamp poles. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, XL-5, 145–150. [Google Scholar] [CrossRef]
  26. Skaloud, J.; Lichti, D.D. Rigorous approach to bore-sight self-calibration in airborne laser scanning. ISPRS J. Photogramm. Remote Sens. 2006, 61, 47–59. [Google Scholar] [CrossRef]
  27. Chan, T.O.; Lichti, D.D.; Glennie, C. Multi-feature based boresight self-calibration of a terrestrial mobile mapping system. ISPRS J. Photogramm. Remote Sens. 2013, 82, 112–124. [Google Scholar] [CrossRef]
  28. Luo, D.; Wang, Y. Rapid extracting pillars by slicing point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37 Part B3, 215–218. [Google Scholar]
  29. Lehtomäki, M.; Jaakkola, A.; Hyyppä, J.; Kukko, A.; Kaartinen, H. Detection of vertical pole-like objects in a road environment using vehicle-based laser scanning data. Remote Sens. 2010, 2, 641–664. [Google Scholar] [CrossRef]
  30. Pu, S.; Rutzinger, M.; Vosselman, G.; Oude Elberink, S.J. Recognizing basic structures from mobile laser scanning data for road inventory studies. ISPRS J. Photogramm. Remote Sens. 2011, 66, 28–39. [Google Scholar] [CrossRef]
  31. Ballard, D.H. Generalizing the Hough transform to detect arbitrary shapes. Pattern Recognit. 1981, 13, 111–122. [Google Scholar] [CrossRef]
  32. Rabbani, T.; van den Heuvel, F.A. Efficient Hough transform for automatic detection of cylinders in point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2005, 36 Part 3/W19, 60–65. [Google Scholar]
  33. Ioannou, D.; Duda, W.; Laine, F. Circle recognition through a 2D Hough transform and radius histogramming. Image Vis. Comput. 1999, 17, 15–26. [Google Scholar] [CrossRef]
