
Registration of Panoramic/Fish-Eye Image Sequence and LiDAR Points Using Skyline Features

School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
* Authors to whom correspondence should be addressed.
Sensors 2018, 18(5), 1651; https://doi.org/10.3390/s18051651
Submission received: 24 March 2018 / Revised: 5 May 2018 / Accepted: 18 May 2018 / Published: 21 May 2018
(This article belongs to the Section Remote Sensors)

Abstract

We propose utilizing a rigorous registration model and a skyline-based method for the automatic registration of LiDAR points and a sequence of panoramic/fish-eye images in a mobile mapping system (MMS). This method automatically optimizes the original registration parameters and avoids the manual intervention required by control point-based registration methods. First, the rigorous registration model between the LiDAR points and the panoramic/fish-eye image was built. Second, skyline pixels from the panoramic/fish-eye images and skyline points from the MMS’s LiDAR points were extracted, relying on the difference in pixel values and on the registration model, respectively. Third, a brute force optimization method was used to search for the optimal matching parameters between the skyline pixels and skyline points. In the experiments, the original registration method and the control point registration method were used to compare the accuracy of our method on a sequence of panoramic/fish-eye images. The results showed that: (1) the panoramic/fish-eye image registration model is effective and can achieve high-precision registration of the image and the MMS’s LiDAR points; (2) the skyline-based registration method can automatically optimize the initial attitude parameters, realizing high-precision registration of a panoramic/fish-eye image and the MMS’s LiDAR points; and (3) the attitude correction values of the sequence of panoramic/fish-eye images are different, and the values must be solved one by one.

1. Introduction

Motivated by applications in vehicle navigation, urban planning and autonomous driving, the need for 3D information on urban areas has increased dramatically in recent years. Mobile mapping system (MMS) technology has been widely used for efficient data acquisition. The information obtained by an MMS mainly includes a sequence of optical images captured by a camera and point clouds obtained by LiDAR. Optical images have rich texture information, while point clouds mainly reflect spatial characteristics. Registering images and point clouds therefore has important theoretical and practical value.
Optical images (2D) and point clouds (3D) are two different types of data. The imaging model of an optical image is the collinear equation, whereas a point cloud reflects the position of an object by transmitting laser pulses; thus, the geometric reference frames of LiDAR points and an optical image are different. There are 3 key problems that must be solved before registration: the primitive pairs, the registration model and the parameter optimization. To capture a large scene in a single image, a panoramic camera can be used on an MMS. The most common type of panoramic camera, such as the Ladybug, consists of a rig of multiple fish-eye cameras: images with a narrow field of view are first obtained by the fish-eye lenses and are then projected and resampled to stitch together a panoramic image. The acquisition rate of an MMS camera can reach 10 fps; assuming a vehicle speed of 72 km/h (20 m/s), an image is therefore captured approximately every 2 m. As a result, the registration method for the panoramic/fish-eye images and the LiDAR points should be automated and efficient.
The existing algorithms for registering panoramic/fish-eye images and LiDAR points mostly rely on control points (the control point-based registration method) or on GPS/IMU information (the original registration method) [1,2]. LiDAR points and panoramic/fish-eye images can be registered well with control points; however, automatic extraction of control points is currently a difficult task, and manually extracting control points from the LiDAR points and a long sequence of images is impractical. The original registration method directly uses the position and attitude parameters obtained by the GPS/IMU; however, the accuracy of the GPS/IMU is affected by interference, such as dropouts. This error accumulates over time and may cause unreliable registration for long image sequences; thus, the precision of the original registration method is low. Wang et al. [3] proposed an automatic registration of mobile LiDAR and spherical panoramas; however, only part of the panoramic image was used, and the registration was based on a conventional frame camera model. Li et al. [2] proposed an automatic registration method based on semantic features extracted from panoramic images and point clouds, but the accuracy of this method relies on the extraction of the primitive pairs (parked vehicles) and is only suitable for urban scenes.
Linear features outperform point features in terms of registration accuracy and can be extracted from images and LiDAR points more reliably, accurately and automatically [1,4]. The skyline is easily extracted from both LiDAR points and optical images. The skyline registration method aims at retrieving the camera’s pose by matching skyline points and skyline pixels. Hofmann et al. [5] extracted the building skyline from real and synthetic images and then merged both skylines using the ICP method. However, their approach did not handle special skyline conditions, such as missing data, jagged skylines or excessive vegetation; moreover, it used a frame camera rather than the panoramic/fish-eye images of an MMS. At present, registration methods for point clouds and optical images can be divided into 3 categories: 2D-2D-based methods, 2D-3D-based methods and 3D-3D-based methods.
2D-2D-based methods. 2D-2D-based methods cast the problem of image to point cloud registration as an iterative process of image to image registration. The point cloud is first turned into a synthetic image on which the real image can be registered, and initial registration parameters are estimated using both images; the synthetic image is then regenerated using the new parameters, and this process is iterated. 2D-2D-based methods can be divided into feature-based and statistical analysis-based approaches.
Feature-based registration methods rely on feature points/lines obtained by SIFT [6], SURF [7], and ASIFT [8] on real and synthetic images, followed by establishment of the correspondences through common features to realize registration [9,10,11,12,13,14]. Statistical analysis-based registration methods are widespread for aligning an image to another image [15]; mutual information (MI) proposed by Viola [16] is the most commonly used statistical method. MI measures the similarity between two images based on the dependency of the intensity distribution [17,18,19,20]. Taylor and Nieto [21] proposed a modified form of MI using particle swarm optimization; Pascoe et al. [22] introduced a normalized information distance metric based on MI and entropy variation to retrieve the camera position.
2D-3D-based methods. 2D-3D-based methods rely on identifying the same points/lines in the image and the LiDAR points and then constructing a strict geometric model to achieve high-precision registration. This approach uses the correspondence between 2D and 3D points/lines; the camera pose is finally obtained by solving a perspective-n-point (PnP) problem using the EPnP algorithm [23,24]. Line-based registration, especially with straight lines, can be regarded as an extension of traditional point features. Zhang [25] proposed the concept of “generalized points”, a series of connected points representing a linear feature; with this concept, the traditional collinearity model can accommodate more complicated features, such as circles and rectangles, rather than only straight lines. Stamos et al. [26] and Schenk [27] laid a solid foundation in line feature-based registration, and Liu et al. [28,29] utilized linear features to determine the camera orientation and position relative to the 3D model.
3D-3D-based methods. 3D-3D-based methods require a sequence of images for 3D reconstruction and then 3D matching to achieve registration. Structure from motion (SFM) is a widely used method that reconstructs 3D points using a set of images [30,31]. A 3D matching algorithm, such as iterative closest point (ICP) [32] and normal distributions transform (NDT) [33], is often used for point registration. Zheng et al. [34] used bundle adjustment for a sequence of images; the points obtained by adjustment were matched with the laser point cloud by ICP. Zhao et al. [35] used stereo vision technology to process 3D reconstruction and used the ICP algorithm to achieve 3D point cloud registration. Abayowa et al. [36] presented a coarse to fine strategy in the estimation of the registration parameters without initial alignment.
We propose utilizing a rigorous imaging model and a skyline-based method for the automatic registration of LiDAR points and a sequence of panoramic/fish-eye images. This method automatically improves the accuracy of registration by optimizing the attitude parameters. Figure 1 shows the flow chart of this method. This paper is organized as follows. Section 2 provides the rigorous panoramic/fish-eye image registration model and the skyline matching method. Section 3 describes the experiments conducted to verify the effectiveness of the skyline-based method. Section 4 discusses the parameters in skyline pixels/points matching and the precision/automation of our method. Finally, conclusions are presented in Section 5.

2. Materials and Methods

2.1. Materials

The MMS used in this paper was jointly developed by Wuhan University and Leador Spatial Information Technology Corporation and is configured with a panoramic camera, three low-cost SICK laser scanners (one for the ground and two for facades) and a GPS/IMU [1]; these sensors are connected by a precision mechanical mount with accurate calibration. The panoramic camera is a Ladybug5 (FLIR Integrated Imaging Solutions Inc., Richmond, BC, Canada), which includes 6 high-definition Sony ICX655 CCDs, with 5 mounted on the sides (horizontal) and 1 on the top (vertical). The camera can perform image acquisition, processing, correction and stitching of a panoramic image in real time.
Figure 2 and Figure 3 show the point data and the optical images of the MMS, respectively. The image data include a fish-eye image (6000 × 4000) along the road direction and a panoramic image (4000 × 8000) stitched from 6 fish-eye images. The MMS point data corresponds to the images captured at the same location; the surveyed road is approximately 220 m long, and its data comprise about 3 million points. In addition, the initial values of the imaging position and attitude, as well as the calibration information of the MMS sensors, are known. Based on the current panoramic/fish-eye image (N), 4 panoramic/fish-eye images were selected on both sides of N, which constitute 5 consecutive images (N − 2, N − 1, N, N + 1, and N + 2). This sequence of panoramic/fish-eye images is shown in Figure 4.
To compare the accuracy of the different registration methods, 38 control points were selected manually; these points are mainly distributed on the corners of buildings and billboards, the tops of lamps, etc. As shown in Figure 1 and Figure 3, the panoramic and fish-eye images contain 38 and 19 control points, respectively, with the exception that only 17 control points are in the fish-eye image (N + 2). In Appendix A, Table A1 lists the 3D coordinates of the 38 control points obtained from the LiDAR points; Table A2 lists the 2D image coordinates of the 38 control points in the panoramic image; and Table A3 lists the 2D image coordinates of the control points in the fish-eye image.

2.2. Panoramic/Fish-Eye Image Registration Model

The imaging model of a panoramic/fish-eye image must be known before registration. The panoramic camera used in an MMS is currently composed of multiple fish-eye lenses, and the fish-eye images can be stitched together to form a panoramic image according to a fixed model. As shown in Equation (1), the collinear equation is the classic imaging model; the Y axis is perpendicular to the image plane (XOZ) in this model, and the panoramic/fish-eye imaging model can be derived from the collinear equation.
$$
\begin{cases}
r = f\,\bar{Z}/\bar{Y} \\
c = f\,\bar{X}/\bar{Y}
\end{cases}
\tag{1}
$$

where $[\bar{X}\ \bar{Y}\ \bar{Z}]^{T} = R\,[\bar{x}\ \bar{y}\ \bar{z}]^{T}$, $R = R_X R_Y R_Z$, $[\bar{x}\ \bar{y}\ \bar{z}] = [x - X_S \quad y - Y_S \quad z - Z_S]$, and

$$
R_X = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos r_x & \sin r_x \\ 0 & -\sin r_x & \cos r_x \end{bmatrix},\quad
R_Y = \begin{bmatrix} \cos r_y & 0 & -\sin r_y \\ 0 & 1 & 0 \\ \sin r_y & 0 & \cos r_y \end{bmatrix},\quad
R_Z = \begin{bmatrix} \cos r_z & \sin r_z & 0 \\ -\sin r_z & \cos r_z & 0 \\ 0 & 0 & 1 \end{bmatrix}
$$

$(x\ y\ z)$ is the coordinate of an object point, $f$ is the imaging focal length, $(X_S\ Y_S\ Z_S)$ is the imaging position, $(r_x\ r_y\ r_z)$ is the imaging attitude, and $(r\ c)$ is the perspective projection coordinate.
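For illustration, a minimal Python/NumPy sketch of Equation (1) is given below. The function and variable names are illustrative only, and the sign placement inside the rotation matrices is assumed to follow the standard elementary rotations (the exact convention should be taken from the MMS calibration).

```python
import numpy as np

def rotation_matrix(rx, ry, rz):
    """Compose R = R_X R_Y R_Z from the three attitude angles (radians).

    Standard elementary rotations are assumed here; the exact sign
    convention should come from the MMS calibration.
    """
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    R_X = np.array([[1, 0, 0], [0, cx, sx], [0, -sx, cx]])
    R_Y = np.array([[cy, 0, -sy], [0, 1, 0], [sy, 0, cy]])
    R_Z = np.array([[cz, sz, 0], [-sz, cz, 0], [0, 0, 1]])
    return R_X @ R_Y @ R_Z

def perspective_project(p, cam_pos, attitude, f):
    """Equation (1): project an object point onto the (r, c) image plane.

    The Y axis of the camera frame is the viewing direction, so the
    image plane is XOZ, as stated in the text.
    """
    R = rotation_matrix(*attitude)
    X, Y, Z = R @ (np.asarray(p, float) - np.asarray(cam_pos, float))
    return f * Z / Y, f * X / Y   # (r, c)
```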

2.2.1. Panoramic/Fish-Eye Imaging Model

There are 4 types of fish-eye lens projections: equidistant projection, orthographic projection, equisolid-angle projection and stereographic projection [37,38]. The panoramic stitching models include a spherical model, cylinder model and cube model. Considering the experimental data in our paper, an equidistant projection and a spherical model were used as examples.
The equidistant projection of a fish-eye lens can be expressed as:
$$
\begin{cases}
r' = f\,(\bar{Z}/\bar{Y})\,\theta/\tan\theta \\
c' = f\,(\bar{X}/\bar{Y})\,\theta/\tan\theta
\end{cases}
\tag{2}
$$

where $\theta = \arctan\bigl(\sqrt{\bar{Z}^2 + \bar{X}^2}/\bar{Y}\bigr)$, and $(r'\ c')$ is the coordinate of the object in the fish-eye image.
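A short sketch of the equidistant projection in Equation (2), reusing the rotation_matrix helper from the previous sketch (again, the names and structure are illustrative assumptions, not the original implementation):

```python
import numpy as np

def fisheye_project(p, cam_pos, attitude, f):
    """Equation (2): equidistant fish-eye projection of an object point."""
    R = rotation_matrix(*attitude)                      # helper from the previous sketch
    X, Y, Z = R @ (np.asarray(p, float) - np.asarray(cam_pos, float))
    theta = np.arctan(np.sqrt(Z**2 + X**2) / Y)         # angle off the optical (Y) axis
    scale = theta / np.tan(theta) if theta != 0 else 1.0
    return f * (Z / Y) * scale, f * (X / Y) * scale     # (r', c')
```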
As shown in Equation (3), a panoramic image using the spherical model takes the vertical angle ($v$) as the row coordinate and the horizontal angle ($h$) as the column coordinate. Since $h \in [-\pi,\ \pi]$ and $v \in [-\pi/2,\ \pi/2]$, the panoramic image has twice as many columns as rows.
$$
\begin{cases}
\tan v = \bar{Z}/\sqrt{\bar{X}^2 + \bar{Y}^2} \\
\tan h = \bar{X}/\bar{Y}
\end{cases}
\tag{3}
$$
The imaging parameters can be obtained accurately from the control points. Equations (4) and (5) are derived from Equations (2) and (3) and represent the computational model of the panoramic/fish-eye image.
$$
\begin{cases}
\bar{Z}/\bar{Y} = r'\tan\theta/r_d \\
\bar{X}/\bar{Y} = c'\tan\theta/r_d
\end{cases}
\tag{4}
$$

where $r_d = \sqrt{r'^2 + c'^2}$, $\theta = r_d/f$, and $(r'\ c')$ is the fish-eye image coordinate.
$$
\begin{cases}
\bar{Z}/\bar{Y} = \tan v\,\sqrt{1 + \tan^2 h} \\
\bar{X}/\bar{Y} = \tan h
\end{cases}
\tag{5}
$$

where $v = \pi\,r''/\mathrm{row}$, $h = 2\pi\,c''/\mathrm{col}$, $(\mathrm{row}\ \mathrm{col})$ expresses the size of the panoramic image, and $(r''\ c'')$ is the panoramic image coordinate.
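A corresponding sketch of the spherical panorama projection, combining Equation (3) with the angle-to-pixel mapping quoted above (the code is illustrative; any half-image offset between the angle origin and the pixel origin is omitted and must be adapted to the actual stitching convention):

```python
import numpy as np

def panoramic_project(p, cam_pos, attitude, rows, cols):
    """Equation (3): project an object point onto the spherical panorama.

    Returns fractional (row, col) coordinates using v = pi * r''/rows and
    h = 2*pi * c''/cols; a constant offset may be needed for a top-left
    pixel origin.
    """
    R = rotation_matrix(*attitude)                      # helper from the first sketch
    X, Y, Z = R @ (np.asarray(p, float) - np.asarray(cam_pos, float))
    v = np.arctan2(Z, np.hypot(X, Y))                   # vertical angle,   v in [-pi/2, pi/2]
    h = np.arctan2(X, Y)                                # horizontal angle, h in [-pi, pi]
    return rows * v / np.pi, cols * h / (2 * np.pi)     # (r'', c'')
```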
Let the right-hand sides of Equations (4) and (5) equal $p_1$ and $p_2$, which are known quantities computed from the image coordinates. The adjustment model of a panoramic/fish-eye image is then given by Equation (6), and its unknowns can be calculated by the direct linear transformation (DLT) method.
$$
\begin{cases}
p_1 = \dfrac{R_{20}\bar{x} + R_{21}\bar{y} + R_{22}\bar{z}}{R_{10}\bar{x} + R_{11}\bar{y} + R_{12}\bar{z}} \\[2ex]
p_2 = \dfrac{R_{00}\bar{x} + R_{01}\bar{y} + R_{02}\bar{z}}{R_{10}\bar{x} + R_{11}\bar{y} + R_{12}\bar{z}}
\end{cases}
\tag{6}
$$
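Since $\bar{x} = x - X_S$ (and likewise for $\bar{y}$, $\bar{z}$), Equation (6) is linear in the nine entries of $R$ and in the three products of $R$ with the projection centre, so the unknowns can be estimated from a homogeneous least-squares system, which is the usual DLT construction. The following is a minimal sketch of that reading (an illustrative formulation, not the authors' implementation):

```python
import numpy as np

def dlt_estimate(obj_pts, p1, p2):
    """Estimate the 3x4 matrix M packing R and -R @ [XS, YS, ZS] such that
    p1 = (M[2] @ X) / (M[1] @ X) and p2 = (M[0] @ X) / (M[1] @ X),
    with X = (x, y, z, 1); this is Equation (6) in homogeneous form.
    """
    obj_pts = np.asarray(obj_pts, float)
    A = np.zeros((2 * len(obj_pts), 12))
    for i, (x, y, z) in enumerate(obj_pts):
        X = np.array([x, y, z, 1.0])
        A[2 * i, 4:8], A[2 * i, 8:12] = p1[i] * X, -X         # p1*(M1.X) - M2.X = 0
        A[2 * i + 1, 4:8], A[2 * i + 1, 0:4] = p2[i] * X, -X  # p2*(M1.X) - M0.X = 0
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)   # right singular vector of the smallest singular value
```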

2.2.2. Registration Accuracy Evaluation

Substituting the solved variables and the control points' coordinates into Equation (6), $p_1$ and $p_2$ can be calculated, and the panoramic/fish-eye image coordinates of the control points can then be obtained by Equations (7) and (8).
$$
\begin{cases}
v = \arctan\bigl(p_1/\sqrt{1 + p_2^2}\bigr) \\
h = \arctan(p_2)
\end{cases}
\tag{7}
$$

$$
\begin{cases}
r' = f\,p_1\,\theta/\tan\theta \\
c' = f\,p_2\,\theta/\tan\theta
\end{cases}
\tag{8}
$$

where $\theta = \arctan\bigl(\sqrt{p_1^2 + p_2^2}\bigr)$, and the other parameters are the same as those previously indicated.
The registration error (δ) is given by Equation (9) and is considered to be the precision index of registration.
$$
\delta = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\Bigl[(r_i - r_i')^2 + (c_i - c_i')^2\Bigr]}
\tag{9}
$$

where $m$ is the number of control points, $(r_i\ c_i)$ is the image coordinate of a control point computed from Equation (7) or (8), and $(r_i'\ c_i')$ is the measured image coordinate of the same control point.
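Reading $\delta$ as the root-mean-square pixel distance over the control points (consistent with $\delta$ being reported in pixels), Equation (9) can be sketched in one line; the function name is illustrative:

```python
import numpy as np

def registration_error(computed_rc, measured_rc):
    """Equation (9): RMS pixel distance over the m control points."""
    computed = np.asarray(computed_rc, float)   # (m, 2) coordinates from Eq. (7)/(8)
    measured = np.asarray(measured_rc, float)   # (m, 2) measured coordinates
    return np.sqrt(np.mean(np.sum((computed - measured) ** 2, axis=1)))
```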

2.3. Skyline-Based Method

The skyline-based method includes skyline pixels extracted from a panoramic/fish-eye image, skyline points extracted from LiDAR points and skyline pixels/points matching.

2.3.1. Skyline Pixels Extracted from Panoramic/Fish-Eye Image

Because the difference in pixel value between the sky and other objects is significant, the skyline can easily be extracted using this feature. Each column of the image is searched from top to bottom, and the first grey-value jump is taken as the skyline pixel of that column; the skyline pixels (blue pixels) extracted from the panoramic/fish-eye image in this way are shown in Figure 5a,b and include the top profiles of buildings, trees, billboards, power lines and street lamps. Considering the linear shape of a power line and the discrete characteristics of LiDAR points, the power line points extracted from the LiDAR points are incomplete; thus, the interference of power line pixels must be eliminated by setting a jump buffer. Figure 5a,b show the skyline pixels (red pixels) after optimization, in which the power line pixels have been eliminated.
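A simple column-wise scan implementing this idea might look as follows; it is a sketch only, and the jump threshold and the thickness of the jump buffer used to suppress power lines are assumed values, not taken from the paper:

```python
import numpy as np

def extract_skyline_pixels(gray, jump_threshold=40, buffer_rows=15):
    """Return {column: skyline row}, scanning each column from top to bottom.

    gray           : (rows, cols) greyscale image with the sky at the top
    jump_threshold : minimum grey-value change treated as a jump (assumed)
    buffer_rows    : a jump is kept only if the darker region persists for
                     this many rows, which suppresses thin power lines (assumed)
    """
    rows, cols = gray.shape
    skyline = {}
    for c in range(cols):
        col = gray[:, c].astype(int)
        for r in range(1, rows - buffer_rows):
            if abs(col[r] - col[r - 1]) > jump_threshold:
                persistent = abs(int(np.median(col[r:r + buffer_rows])) - col[r - 1])
                if persistent > jump_threshold:          # not a thin structure
                    skyline[c] = r
                    break
    return skyline
```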

2.3.2. Skyline Points Extracted from LiDAR Points

The skyline points extracted from the LiDAR points are influenced by the imaging position, imaging attitude, imaging focal length and imaging model. In particular, panoramic image registration is related to the imaging position and attitude, whereas fish-eye image registration is related to the imaging position, attitude and focal length. The imaging position and attitude can be obtained from the GPS/IMU and the calibration of the MMS, and the imaging focal length is obtained by camera calibration. The skyline points are then extracted from the LiDAR points as follows. First, a panoramic image is generated from all points using Equation (3); the topmost pixel in each column is found, and the point corresponding to this pixel is a skyline point. The extraction algorithm for fish-eye image registration is similar. The extracted skyline points contain a large amount of noise, and in some regions no skyline points correspond to the skyline pixels; therefore, constraints must be added in the skyline matching to ensure a rigorous correspondence between skyline points and pixels.
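The per-column selection can be written generically over any of the projection sketches above (illustrative code; for the fish-eye case the same loop runs with the fish-eye projection instead of the panoramic one, and keeping a running per-column minimum is equivalent to rendering the full synthetic image and reading off its top pixels):

```python
def extract_skyline_points(points, project_fn, cols):
    """Return {column: (row, point)}, keeping per image column the LiDAR point
    whose projection is topmost -- the synthetic skyline.

    project_fn maps a 3D point to fractional (row, col) pixels, e.g. a closure
    around panoramic_project or fisheye_project from the earlier sketches.
    """
    skyline = {}
    for p in points:
        r, c = project_fn(p)
        ci = int(round(c)) % cols
        # smaller row index = higher in the image (assumes a top-left origin)
        if ci not in skyline or r < skyline[ci][0]:
            skyline[ci] = (r, p)
    return skyline
```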

2.3.3. Skyline Pixels/Points Matching

The skyline pixels/points are extracted from the panoramic/fish-eye image and the LiDAR points as described above. In this section, we define the image skyline pixels as $\Phi\{r_i, c_i\},\ i = 0, 1, \dots, m$ and the skyline points as $\Omega\{x_i, y_i, z_i\},\ i = 0, 1, \dots, n$ to analyze the skyline matching method. The key of this method is to recover the correspondence between $\Phi$ and $\Omega$. We solve this problem using a brute force optimization method. The position parameters have much less influence on the image than the attitude parameters; therefore, we only optimize the attitude parameters in this paper. As shown in Equation (10), the procedure constructs a correcting matrix $R'$, which consists of 3 correction angles $(r_x'\ r_y'\ r_z')$, and places $R'$ before $R$ in the imaging model. $(r_x'\ r_y'\ r_z')$ is obtained by brute force optimization, and the registration of the LiDAR points and the image is accomplished once the skyline pixels/points are matched.
$$
[\bar{X}\ \bar{Y}\ \bar{Z}]^{T} = R'R\,[\bar{x}\ \bar{y}\ \bar{z}]^{T}
\tag{10}
$$

where $R' = R'_X R'_Y R'_Z$ and

$$
R'_X = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos r_x' & \sin r_x' \\ 0 & -\sin r_x' & \cos r_x' \end{bmatrix},\quad
R'_Y = \begin{bmatrix} \cos r_y' & 0 & -\sin r_y' \\ 0 & 1 & 0 \\ \sin r_y' & 0 & \cos r_y' \end{bmatrix},\quad
R'_Z = \begin{bmatrix} \cos r_z' & \sin r_z' & 0 \\ -\sin r_z' & \cos r_z' & 0 \\ 0 & 0 & 1 \end{bmatrix}
$$
First, the imaging position $(X_S\ Y_S\ Z_S)$, the attitude $(r_x\ r_y\ r_z)$ and the focal length $f$ are obtained from the GPS/IMU and the calibration of the MMS [1] (as this calculation is very common, it is not described in detail here). Taking the maximum error of $(r_x\ r_y\ r_z)$ as the initial value range $w$ and dividing it into $t$ parts, as shown in Equation (11), $(t+1)^3$ synthetic panoramic/fish-eye skylines $\Omega\{r_i, c_i\}$ are generated from $\Omega\{x_i, y_i, z_i\}$ according to Equations (2), (3) and (10). Second, we calculate the similarity between $\Omega\{r_i, c_i\}$ and $\Phi\{r_j, c_j\}$, as shown in Equation (12). With the number of matching skyline pixels $n$ used as the evaluation index, the highest similarity is selected from the $(t+1)^3$ synthetic images, the corresponding parameters $(i, j, k)$ are taken as the initial value of the next iteration, and the search range is halved at each iteration.
$$
\begin{cases}
r'_{x,(s+1)} = r'_{x,(s)} - w_{(s)} + 2i\,w_{(s)}/t \\
r'_{y,(s+1)} = r'_{y,(s)} - w_{(s)} + 2j\,w_{(s)}/t \\
r'_{z,(s+1)} = r'_{z,(s)} - w_{(s)} + 2k\,w_{(s)}/t
\end{cases}
\tag{11}
$$

where $s$ denotes the $s$-th iteration, $w_{(s)} = w/2^{s}$, $r'_{x,(0)} = r'_{y,(0)} = r'_{z,(0)} = 0^\circ$, and $i, j, k = 0, 1, \dots, t$.
$$
n = \begin{cases}
n + 1, & |\Omega\{r_c\} - \Phi\{r_c\}| < e \\
n, & |\Omega\{r_c\} - \Phi\{r_c\}| \ge e
\end{cases}
\qquad (c = 1, 2, \dots, \mathrm{col})
\tag{12}
$$

where $e$ denotes the gross error elimination threshold: when the row difference between $\Omega\{r_i, c_i\}$ and $\Phi\{r_i, c_i\}$ in the same column is less than $e$, $n$ is incremented; otherwise it is unchanged. $n$ reflects the skyline similarity between a real image and a synthetic image, and $e$ is used to eliminate mismatched skyline pixels. Because of the differences between the acquisition sensors, the two skylines can never be matched completely. For example, the building skyline in the left part of the panoramic image is easy to extract from the image, but the LiDAR points are missing in the corresponding region, so there is no matching skyline there; $e$ eliminates this part of the skyline so that it is not involved in the parameter optimization.
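Putting Equations (10)–(12) together, the coarse-to-fine brute-force search can be sketched as below. The parameter values match the paper; the code structure, names and projection details are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def count_matches(synth, real, e):
    """Equation (12): columns whose synthetic and real skyline rows differ by less than e."""
    return sum(1 for c in synth if c in real and abs(synth[c][0] - real[c]) < e)

def refine_attitude(points, real_skyline, cam_pos, attitude, rows, cols,
                    w=5.0, t=6, iterations=6, e=5):
    """Brute-force search of Equation (11) for the correction angles (degrees)."""
    R = rotation_matrix(*attitude)                      # first sketch
    corr = np.zeros(3)
    for _ in range(iterations):
        best_n, best_corr = -1, corr
        for di in np.linspace(-w, w, t + 1):            # (t + 1)^3 candidates per iteration
            for dj in np.linspace(-w, w, t + 1):
                for dk in np.linspace(-w, w, t + 1):
                    cand = corr + np.array([di, dj, dk])
                    Rc = rotation_matrix(*np.radians(cand))

                    def project(p):                     # Equation (10): R' applied before R
                        X, Y, Z = Rc @ (R @ (np.asarray(p, float) - np.asarray(cam_pos, float)))
                        v, h = np.arctan2(Z, np.hypot(X, Y)), np.arctan2(X, Y)
                        return rows * v / np.pi, cols * h / (2 * np.pi)

                    synth = extract_skyline_points(points, project, cols)   # previous sketch
                    n = count_matches(synth, real_skyline, e)
                    if n > best_n:
                        best_n, best_corr = n, cand
        corr, w = best_corr, w / 2.0                    # keep the best node, halve the range
    return corr, best_n
```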

3. Experiments and Analysis

3.1. Comparison of the Registration Methods

To compare the accuracy of our method, the original registration method, the skyline-based registration method and the control point registration method (referred to below as methods I, II and III, respectively) were used for panoramic/fish-eye image registration. First, the original registration (I) was conducted according to Equations (1)–(3), with the registration parameters $(f, X_S, Y_S, Z_S, r_x, r_y, r_z)$ obtained from the GPS/IMU and the MMS calibration. Second, the control point registration (III) was performed according to Equations (4) and (5); this method does not require initial values. The registration results of the panoramic/fish-eye image (N) with methods I and III are shown in Figure 6.
The key parameters involved in the skyline registration method are: the angle error range ($w$), the parameter segmentation ($t$), the gross error threshold ($e$), the number of matching pixels ($n$) and the registration error ($\delta$). $n$ is the evaluation index of the parameter optimization, and $\delta$ expresses the matching error of the 38 (19) control points. Herein, let $w$ = 5°, $t$ = 6, and $e$ = 5 pixels. Next, the skyline is extracted from the panoramic/fish-eye image and the LiDAR points, as described above. The skyline points are projected $7^3$ = 343 times in each iteration using Equations (2), (3) and (10), and $n$ between each generated skyline and the real image skyline is calculated. The parameters around the maximum $n$ are taken as the starting values of the next iteration, and $w$ is reduced by half in each iteration. After 6 iterations, the matching skyline pixels of the panoramic/fish-eye image are shown in Figure 7. The registration results with the 3 methods are shown in Figure 6. Figure 8 and Figure 9 show local images to compare the registration effect.
As shown in Figure 7, the matching skyline pixels include the top outlines of buildings, trees, lamps, etc. The influence of the unmatched skyline on the registration has been eliminated, such as the missing parts (the building in the left of the panoramic image) and the cables. At the same time, owing to the discrete nature of the point cloud and the continuity of the image, the number of matching skyline pixels ($n$) is far less than the number of columns of the panoramic/fish-eye image; the proportion is 17.75% and 11.35% for the panoramic and fish-eye images, respectively. Figure 8 and Figure 9 show the local registration effect of the panoramic and fish-eye images, respectively, with the 3 methods. A large dislocation is observed with method I, especially at the billboards, street lights, trees, buildings, etc. Conversely, method III obtains the ideal effect, and all objects coincide very well. The proposed skyline-based method can also achieve good registration, whose accuracy lies between that of method I and that of method III. To quantitatively analyze the effect of the 3 methods, the registration error ($\delta$) is calculated from the control points according to Equation (9). Table 1 lists $\delta$ in each iteration using our method, and Table 2 lists $\delta$ for the 3 methods.
As shown in Table 1, $\delta$ becomes stable for both the panoramic and fish-eye images after 6 iterations; $w$ has been reduced to 0.156°, and the angle parameters are no longer the main factors affecting the registration accuracy. Analysis of each iteration reveals that $\delta$ noticeably decreases as $n$ increases and that the rotation angle parameters are optimized gradually. Table 2 lists $\delta$ for the 3 methods. Our method has a relative error ($\delta$/image diagonal) of 0.97‰ and 2.04‰ for the panoramic and fish-eye images, respectively, which is larger than that of method III and smaller than that of method I. This conclusion agrees with the above image analysis. Finally, the position error of each control point with the 3 methods is analyzed in detail. Figure 10 shows the error of the 38/19 control points in the panoramic/fish-eye image, and Figure 11 shows the locations of all control points after registration.
According to Figure 10a,b, the error of each control point is clearly reduced after each iteration compared with method I, especially in the first iteration. Afterwards, the error keeps decreasing in the subsequent iterations and gradually approaches that of method III. Figure 11 displays all control points calculated by the 3 methods on the panoramic/fish-eye images. In method I, the offsets of the control points relative to their real positions have a consistent direction; this error is regular, indicating that optimizing the rotation angles has a theoretical basis.

3.2. Point Cloud Colouring

LiDAR points can be given an RGB value after registration, so the points not only contain spatial information but also carry texture information. With such textured points, a texture image can be generated for any position and attitude. The procedure is as follows: the current imaging position is set to N, and N − 1 and N + 1 express the imaging positions before and after N in the image sequence, as shown in Figure 12a. We generate an image according to the perspective imaging model; the main axis is along the road direction, the other rotation angles are 0°, the imaging focal length is 250 pixels and the image size is 1000 × 1000. Figure 12b–d show the imaging results. Through textured point imaging, we can obtain the scene from an arbitrary location and attitude to meet different viewing requirements.
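A rough sketch of both steps, colouring the points through the panoramic model and re-imaging them through Equation (1) with the stated focal length and image size (the function names, the nearest-point depth test and the pixel-origin handling are assumptions made for illustration):

```python
import numpy as np

def colour_points(points, pano, cam_pos, attitude, rows, cols):
    """Attach to each LiDAR point the RGB value of the panoramic pixel it projects to."""
    coloured = []
    for p in points:
        r, c = panoramic_project(p, cam_pos, attitude, rows, cols)  # earlier sketch
        ri, ci = int(round(r)) % rows, int(round(c)) % cols         # adapt to the real pixel origin
        coloured.append((np.asarray(p, float), pano[ri, ci]))
    return coloured

def render_perspective(coloured, cam_pos, attitude, f=250, size=1000):
    """Render the textured points through the perspective model of Equation (1)."""
    img = np.zeros((size, size, 3), dtype=np.uint8)
    depth = np.full((size, size), np.inf)
    R = rotation_matrix(*attitude)                                  # earlier sketch
    for p, rgb in coloured:
        X, Y, Z = R @ (p - np.asarray(cam_pos, float))
        if Y <= 0:                                                  # behind the camera
            continue
        r = int(round(f * Z / Y)) + size // 2
        c = int(round(f * X / Y)) + size // 2
        if 0 <= r < size and 0 <= c < size and Y < depth[r, c]:
            depth[r, c], img[r, c] = Y, rgb                         # keep the nearest point per pixel
    return img
```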

3.3. Sequence of Panoramic/Fish-Eye Image Registration

Our registration method is verified by the previous experiment. In this section, experiments are performed using our method with the sequence of panoramic/fish-eye images (N − 2, N − 1, N + 1, N + 2) to analyze the regularity of the optimized parameters. The parameters involved in our method are the same as those previously indicated. The rotation angle correction values $(r_x'\ r_y'\ r_z')$ of each panoramic/fish-eye image are obtained automatically. Figure 13 shows the registration results using the 3 methods, and Figure 14 and Figure 15 show the local registration effect using the 3 methods.
Figure 14 and Figure 15 show the local images with the 3 methods to compare the effectiveness of the registration. Our method has an obvious optimization effect compared with method I and approaches the effect of method III; moreover, unlike method III, it is completed without manual intervention. For quantitative analysis, Table 3 lists the imaging position and the angle correction values of the sequence of panoramic/fish-eye images using our method, and Table 4 lists the registration accuracy of the sequence of panoramic/fish-eye images for the 3 methods.
As shown in Table 3, the interval between consecutive images is approximately 7 m, and the road height difference is within 0.3 m. The angle correction values of each panoramic/fish-eye image are different; although the differences are small in most cases, the corrections must be solved separately rather than reused directly for the next image. Table 4 shows that the registration error of method III is the smallest, the error of method II is approximately 2 times that of method III, and the error of method I is approximately 3 times that of method II. For further analysis, as shown in Figure 16, the position error of each control point using the 3 methods in the sequence of images is analyzed in detail.
As shown in Figure 16a,b, the error of each control point is noticeably reduced compared with that of method I. Moreover, the error distribution of the control points shows a certain regularity with our method. First, the errors of the control points located at both ends of a panoramic image are large, such as Nos. 1, 2, 37 and 38. Second, the control points located at the top of the image have larger errors, such as Nos. 14 and 26 in the panoramic/fish-eye images (N + 1, N + 2), which leads to a greater registration error for the fish-eye images (N + 1, N + 2) and the panoramic image (N + 2) than for the others.

4. Discussion

4.1. Parameters in Skyline Pixels/Points Matching

The parameters in skyline matching are the angle error range ($w$), the parameter segmentation ($t$) and the gross error threshold ($e$); they are kept the same for panoramic and fish-eye image registration. $w$ is the iteration range around the initial angle values $(r_x\ r_y\ r_z)$, which in this paper are obtained from the IMU. If an IMU is unavailable, approximate values of $r_x$ and $r_z$ can be solved from adjacent imaging positions, and $r_y$ can be set to 0. $t$ determines the efficiency of registration: the nodes are set evenly over the correction angles $(r_x'\ r_y'\ r_z')$, producing $(t+1)^3$ synthetic images at each iteration. The number of iterations can be reduced when $t$ is large, but the amount of computation grows as a power function of $t$. To ensure the accuracy of registration, $w$ is reduced to half, not to $w/t$, after each iteration. $e$ is the key parameter of the optimization; skyline pixels/points whose row difference is less than $e$ are considered successful matches. When $e$ is larger, more mismatched points are accepted, but when $e$ is too small, correctly matching pixels/points are filtered out. In addition, $e$ is related to the number of image rows; in our experiments the panoramic and fish-eye images have 4000 and 6000 rows, respectively, and setting $e$ to 5 pixels proved suitable empirically.
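As a quick sanity check on the cost argument, the number of synthetic images per iteration is fixed at $(t+1)^3$ while the search window shrinks geometrically; a tiny calculation with the paper's values ($w$ = 5°, $t$ = 6, 6 iterations) reproduces the $w$ column of Table 1:

```python
t, w, iterations = 6, 5.0, 6
for s in range(iterations):
    print(f"iteration {s + 1}: window ±{w:.3f} deg, "
          f"{(t + 1) ** 3} synthetic images, node spacing {2 * w / t:.3f} deg")
    w /= 2.0
# 343 synthetic images per iteration; the window shrinks
# 5 -> 2.5 -> 1.25 -> 0.625 -> 0.312 -> 0.156 degrees.
```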

4.2. Precision and Automation Compared with Other Methods

Our method does not require manual intervention in the registration of panoramic/fish-eye images and LiDAR points. In recent studies, some works use lines and planes as primitive pairs together with a rigorous registration model to obtain high-precision results; however, extracting such primitive pairs automatically from images and LiDAR is still challenging. Cui et al. [1] extracted line features from LiDAR points automatically and from the corresponding rectified mono-images through a semi-automatic process, achieving registration accuracies of 4.718 and 4.244 pixels (image size 1616 × 1232 pixels) for the spherical and panoramic cameras, respectively; however, the process requires a manual check of the primitive pairs before registration. Li et al. [2] took parked vehicles as registration primitives, extracted them from panoramic images via Faster R-CNN, and further used particle swarm optimization to refine the translations between the panoramic camera and the laser scanner, obtaining a registration error of less than 3 pixels (image size 2048 × 4096 pixels); however, this automatic registration method relies on parked vehicles, which limits its application scenarios, and the extraction of vehicles often fails due to poor illumination or occlusion. Our method takes the skyline as the primitive pair because it is easy to extract from both the image and the LiDAR points and exists in most scenes. In addition, our method can be used for both fish-eye and panoramic images.

4.3. Weakness and Future Work

The main weakness of our method is its dependence on the skyline. In an open area, the skyline pixels in the image can represent very distant objects, while the point cloud is limited in range, so the skyline pixels and points fail to match. Therefore, our method is more suitable for urban areas or similar places with a nearby skyline. Some problems also require further study, such as the optimization of the position parameters, which could be iterated alternately with the attitude parameters. The stitching error of the panoramic image also needs further study, because the camera centres of the individual fish-eye lenses hardly overlap and the attitudes of the lenses are seldom in the same horizontal plane.

5. Conclusions

In this paper, we proposed a skyline-based method for the automatic registration of LiDAR points and a panoramic/fish-eye image sequence in an MMS. The effectiveness of this method was demonstrated by comparison with the original registration and control point registration methods. Compared with other related works, the main contributions of this study are that: (1) the panoramic/fish-eye image registration model is effective and can achieve high-precision registration of an image and the LiDAR points in an MMS; (2) the skyline-based registration method can automatically optimize the initial attitude parameters and realize high-precision registration of panoramic/fish-eye images and the LiDAR points in an MMS; and (3) the attitude correction values differ across the panoramic/fish-eye image sequence, and each value must be solved individually.

Author Contributions

Conceptualization, N.Z.; Methodology, N.Z.; Validation, N.Z., Y.J. and S.J.; Formal Analysis, N.Z.; Investigation, N.Z.; Resources, S.J.; Data Curation, N.Z.; Writing-Original Draft Preparation, N.Z.; Writing-Review & Editing, N.Z., Y.J. and S.J.; Supervision, Y.J. and S.J.

Funding

This research was funded by the Demonstration Project of Civil Military Integration in the Field of Highway Transportation, grant number GFZX0404080102.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. 3D coordinates of the control points (m = 38) (meters).

No.   1        2        3        4        5        6        7        8        9        10
X     736.872  750.315  720.542  720.324  720.490  720.626  725.968  738.342  736.746  729.746
Y     719.029  719.590  700.018  700.155  698.666  698.618  694.241  677.968  676.344  669.746
Z     21.071   18.655   13.994   12.090   12.129   14.009   18.400   23.493   23.547   21.818

No.   11       12       13       14       15       16       17       18       19       20
X     726.453  701.737  694.998  684.251  674.913  675.536  671.280  657.947  657.668  647.327
Y     664.691  668.587  675.141  639.304  614.069  615.144  649.908  626.820  626.328  624.664
Z     21.959   18.065   21.664   47.039   23.763   30.859   22.581   23.723   30.674   23.370

No.   21       22       23       24       25       26       27       28       29       30
X     623.430  609.152  632.710  657.169  625.455  680.693  601.633  649.663  626.746  651.056
Y     599.102  611.827  637.231  663.092  643.725  688.327  630.632  669.189  655.939  681.286
Z     24.280   24.345   23.444   22.564   17.430   21.746   21.645   17.741   21.697   21.765

No.   31       32       33       34       35       36       37       38
X     673.819  650.928  657.382  656.167  678.039  686.893  707.353  722.722
Y     695.005  704.462  711.380  720.494  743.051  745.828  741.755  730.199
Z     18.171   30.328   30.201   35.273   35.389   30.197   22.362   17.831
Table A2. 2D coordinates of control points in the panoramic image (m = 38) (pixels).

No.   N − 2            N − 1            N                N + 1            N + 2
      r, c             r, c             r, c             r, c             r, c
1     762.4, 1532.3    573.0, 1602.9    458.6, 1654.4    372.6, 1686.2    311.4, 1710.0
2     816.0, 1743.5    675.4, 1761.5    578.5, 1779.7    495.7, 1788.2    431.4, 1795.2
3     2223.2, 1875.2   1638.1, 1841.8   1167.7, 1841.4   841.8, 1846.6    638.3, 1855.2
4     2218.3, 2021.4   1629.8, 1989.0   1161.4, 1961.4   836.6, 1944.6    636.6, 1932.4
5     2270.4, 2025.4   1715.5, 1995.4   1243.4, 1966.9   906.4, 1948.5    688.6, 1935.7
6     2274.7, 1886.8   1721.9, 1854.6   1249.9, 1846.4   911.6, 1850.2    693.6, 1855.8
7     2168.3, 1708.3   1772.0, 1682.4   1412.3, 1685.9   1116.3, 1706.5   897.1, 1728.1
8     2181.5, 1709.2   1963.9, 1690.5   1747.7, 1681.3   1543.0, 1681.6   1359.8, 1686.8
9     2241.4, 1715.1   2024.5, 1694.6   1808.4, 1682.8   1600.2, 1681.0   1409.8, 1685.7
10    2496.0, 1788.7   2298.2, 1763.6   2084.0, 1743.9   1864.4, 1733.0   1649.0, 1724.7
11    2615.3, 1808.2   2436.6, 1781.4   2241.3, 1761.9   2029.1, 1745.7   1810.6, 1735.8
12    3243.9, 1903.1   3115.1, 1876.4   2945.2, 1842.4   2708.4, 1807.7   2394.5, 1764.2
13    3483.9, 1784.3   3380.4, 1729.2   3240.7, 1653.4   3006.9, 1552.0   2619.5, 1417.9
14    3443.5, 1534.8   3388.3, 1489.2   3328.7, 1439.4   3254.7, 1388.0   3163.3, 1331.7
15    3438.0, 1926.4   3388.2, 1917.7   3344.9, 1907.0   3292.9, 1903.4   3232.1, 1893.7
16    3437.4, 1845.8   3390.5, 1830.9   3347.5, 1816.2   3296.3, 1804.9   3236.8, 1788.8
17    3697.7, 1896.4   3663.8, 1880.7   3631.9, 1860.7   3585.2, 1843.0   3527.1, 1813.4
18    3685.9, 1926.1   3657.9, 1916.7   3634.8, 1905.8   3603.3, 1901.3   3567.8, 1891.0
19    3684.4, 1841.6   3657.9, 1825.3   3635.2, 1809.3   3604.6, 1794.6   3569.9, 1773.6
20    3778.8, 1941.3   3759.1, 1935.6   3748.3, 1927.4   3727.5, 1925.4   3707.0, 1918.6
21    3819.7, 1963.9   3806.1, 1962.1   3803.2, 1958.6   3790.1, 1961.8   3779.9, 1962.1
22    3993.6, 1965.0   3987.4, 1962.3   3992.1, 1957.5   3993.0, 1960.4   3994.3, 1961.2
23    4001.0, 1939.1   3998.2, 1932.4   4009.4, 1926.0   4011.1, 1921.4   4015.8, 1915.2
24    4023.7, 1892.1   4026.1, 1876.1   4042.1, 1854.8   4053.8, 1833.4   4069.9, 1800.3
25    4114.3, 2009.8   4125.5, 2012.3   4139.7, 2007.7   4154.0, 2011.3   4171.4, 2013.4
26    4089.7, 1767.3   4117.3, 1703.1   4174.5, 1599.6   4275.7, 1426.1   4543.3, 1081.8
27    4162.1, 1982.9   4168.9, 1979.4   4188.0, 1978.4   4201.5, 1980.8   4216.9, 1982.3
28    4186.4, 1977.0   4212.6, 1974.5   4243.9, 1961.3   4284.7, 1955.6   4338.2, 1945.3
29    4221.8, 1949.5   4237.7, 1944.8   4267.1, 1938.3   4296.2, 1933.8   4330.0, 1928.1
30    4355.6, 1888.0   4400.3, 1870.7   4467.2, 1848.4   4549.5, 1822.8   4654.9, 1787.1
31    4387.3, 1881.9   4479.4, 1847.7   4627.0, 1796.6   4858.6, 1718.3   5267.2, 1608.7
32    4789.0, 1684.7   4892.0, 1650.0   5025.9, 1607.2   5187.3, 1562.4   5388.5, 1517.6
33    4928.8, 1634.0   5064.0, 1591.9   5237.6, 1546.5   5439.3, 1498.4   5679.7, 1465.7
34    5132.5, 1535.0   5281.1, 1495.2   5461.0, 1457.7   5656.5, 1424.8   5870.1, 1405.3
35    5895.4, 1393.5   6106.1, 1389.7   6317.1, 1402.2   6507.1, 1423.2   6672.7, 1442.0
36    6159.8, 1449.5   6384.6, 1463.5   6591.1, 1489.5   6764.0, 1515.8   6908.5, 1541.8
37    6828.7, 1515.5   7051.2, 1576.1   7210.8, 1633.2   7324.1, 1658.3   7409.5, 1687.8
38    7806.8, 1584.7   7839.0, 1675.3   7863.6, 1729.8   7873.6, 1753.5   7884.5, 1772.4
Table A3. 2D coordinates of control points in the fish-eye image (m = 19/17) (pixels).

No.   N − 2            N − 1            N                N + 1            N + 2
      r, c             r, c             r, c             r, c             r, c
13    928.4, 2523.9    725.9, 2400.8    454.9, 2222.2    48.6, 1971.8     ***, ***
14    890.5, 1980.3    799.2, 1880.2    705.5, 1772.6    589.1, 1655.5    462.5, 1504.1
15    824.0, 2832.7    723.2, 2808.7    633.4, 2790.8    526.4, 2781.8    410.8, 2753.7
16    830.4, 2652.9    733.5, 2622.1    648.5, 2584.8    544.2, 2560.5    427.9, 2520.0
17    1365.0, 2770.9   1296.6, 2736.2   1229.3, 2693.3   1135.9, 2650.9   1016.4, 2587.6
18    1339.2, 2833.5   1282.7, 2813.0   1241.8, 2793.7   1169.3, 2777.6   1097.5, 2754.8
19    1338.5, 2652.5   1288.4, 2617.5   1243.8, 2578.7   1179.5, 2548.3   1111.4, 2502.5
20    1532.3, 2866.6   1496.0, 2852.5   1469.4, 2837.0   1430.9, 2831.6   1389.2, 2818.7
21    1621.3, 2916.8   1597.3, 2910.1   1584.0, 2904.4   1564.8, 2909.7   1543.9, 2911.5
22    1985.9, 2915.4   1984.5, 2910.9   1989.1, 2904.4   2000.1, 2910.9   2010.7, 2912.6
23    2008.4, 2861.6   2007.3, 2848.7   2022.0, 2833.0   2032.5, 2826.0   2048.1, 2812.0
24    2056.6, 2762.5   2066.1, 2727.3   2093.4, 2683.4   2121.2, 2635.1   2162.8, 2565.0
25    2249.2, 3012.4   2266.5, 3009.9   2300.2, 3007.4   2332.8, 3015.4   2381.4, 3020.8
26    2195.5, 2493.2   2253.8, 2356.2   2362.6, 2137.8   2537.5, 1778.9   2914.3, 1038.7
27    2351.7, 2954.4   2368.7, 2948.6   2403.1, 2944.8   2436.6, 2953.0   2477.8, 2954.5
28    2402.9, 2940.1   2467.5, 2935.5   2522.5, 2908.1   2612.3, 2893.8   2735.6, 2870.2
29    2477.3, 2885.8   2514.2, 2871.6   2572.8, 2858.0   2636.4, 2848.8   2716.6, 2835.1
30    2757.4, 2750.6   2852.6, 2711.5   2988.3, 2660.8   3158.2, 2602.9   3384.3, 2520.5
31    2826.6, 2736.6   3014.4, 2661.3   3313.0, 2542.7   3771.2, 2357.3   ***, ***
*** indicates that control points No. 13 and No. 31 are not in the fish-eye image (N + 2).

References

1. Cui, T.; Ji, S.; Shan, J.; Gong, J.; Liu, K. Line-based registration of panoramic images and LiDAR point clouds for mobile mapping. Sensors 2017, 17.
2. Li, J.; Yang, B.; Chen, C.; Huang, R.; Dong, Z.; Xiao, W. Automatic registration of panoramic image sequence and mobile laser scanning data using semantic features. ISPRS J. Photogramm. Remote Sens. 2018, 136, 41–57.
3. Wang, R.; Ferrie, F.P.; MacFarlane, J. Automatic registration of mobile LiDAR and spherical panoramas. In Proceedings of the 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Providence, RI, USA, 16–21 June 2012; pp. 33–40.
4. Habib, A.; Ghanma, M.; Morgan, M.; Al-ruzouq, R. Photogrammetric and Lidar Data Registration Using Linear Features. Photogramm. Eng. Remote Sens. 2005, 71, 699–707.
5. Hofmann, S.; Eggert, D.; Brenner, C. Skyline matching based camera orientation from images and mobile mapping point clouds. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, II-5, 181–188.
6. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
7. Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359.
8. Morel, J.-M.; Yu, G. ASIFT: A New Framework for Fully Affine Invariant Image Comparison. SIAM J. Imaging Sci. 2009, 2, 438–469.
9. Yang, G.; Becker, J.; Stewart, C.V. Estimating the location of a camera with respect to a 3D model. In Proceedings of the 6th International Conference on 3-D Digital Imaging and Modeling, 3DIM 2007, Montreal, QC, Canada, 21–23 August 2007; pp. 159–166.
10. González-Aguilera, D.; Rodríguez-Gonzálvez, P.; Gómez-Lahoz, J. An automatic procedure for co-registration of terrestrial laser scanners and digital cameras. ISPRS J. Photogramm. Remote Sens. 2009, 64, 308–316.
11. Ding, M.; Lyngbaek, K.; Zakhor, A. Automatic registration of aerial imagery with untextured 3D LiDAR models. In Proceedings of the 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR, Anchorage, AK, USA, 23–28 June 2008.
12. Brown, M.; Windridge, D.; Guillemaut, J.Y. Globally optimal 2D-3D registration from points or lines without correspondences. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 2111–2119.
13. Lv, F.; Ren, K. Automatic registration of airborne LiDAR point cloud data and optical imagery depth map based on line and points features. Infrared Phys. Technol. 2015, 71, 457–463.
14. Plötz, T.; Roth, S. Automatic Registration of Images to Untextured Geometry Using Average Shading Gradients. Int. J. Comput. Vis. 2017, 125, 65–81.
15. Parmehr, E.G.; Fraser, C.S.; Zhang, C.; Leach, J. Automatic registration of optical imagery with 3D LiDAR data using statistical similarity. ISPRS J. Photogramm. Remote Sens. 2014, 88, 28–40.
16. Viola, P.; Wells, W.M., III. Alignment by Maximization of Mutual Information. Int. J. Comput. Vis. 1997, 24, 137–154.
17. Gong, M.; Zhao, S.; Jiao, L.; Tian, D.; Wang, S. A novel coarse-to-fine scheme for automatic image registration based on SIFT and mutual information. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4328–4338.
18. Corsini, M.; Dellepiane, M.; Ponchio, F.; Scopigno, R. Image-to-geometry registration: A Mutual Information method exploiting illumination-related geometric properties. Comput. Graph. Forum 2009, 28, 1755–1764.
19. Mastin, A.; Kepner, J.; Fisher, J. Automatic registration of LIDAR and optical images of urban scenes. In Proceedings of the 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009, Miami, FL, USA, 20–25 June 2009; pp. 2639–2646.
20. Miled, M.; Soheilian, B.; Habets, E.; Vallet, B. Hybrid online mobile laser scanner calibration through image alignment by mutual information. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, III-1, 25–31.
21. Taylor, Z.; Nieto, J. Automatic calibration of lidar and camera images using normalized mutual information. In Proceedings of the 2013 IEEE Conference on Robotics and Automation (ICRA 2013), Karlsruhe, Germany, 6–10 May 2013.
22. Pascoe, G.; Maddern, W.; Newman, P. Robust Direct Visual Localisation using Normalised Information Distance. BMVC 2015, 1–13.
23. Lepetit, V.; Moreno-Noguer, F.; Fua, P. EPnP: An accurate O(n) solution to the PnP problem. Int. J. Comput. Vis. 2009, 81, 155–166.
24. Shao, J.; Zhang, W.; Zhu, Y.; Shen, A. Fast registration of terrestrial LiDAR point cloud and sequence images. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences—ISPRS Archives, Wuhan, China, 18–22 September 2017; Volume 42, pp. 875–879.
25. Zhang, Z.; Zhang, Y.; Zhang, J.; Zhang, H. Photogrammetric Modeling of Linear Features with Generalized Point Photogrammetry. Photogramm. Eng. Remote Sens. 2008, 74, 1119–1127.
26. Stamos, I.; Leordean, M. Automated feature-based range registration of urban scenes of large scale. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 18–20 June 2003; Volume 2, pp. 555–561.
27. Schenk, T. From point-based to feature-based aerial triangulation. ISPRS J. Photogramm. Remote Sens. 2004, 58, 315–329.
28. Liu, L.; Stamos, I. A systematic approach for 2D-image to 3D-range registration in urban environments. Comput. Vis. Image Underst. 2012, 116, 25–37.
29. Liu, L.; Stamos, I. Automatic 3D to 2D registration for the photorealistic rendering of urban scenes. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 137–143.
30. Kaminsky, R.S.; Snavely, N.; Seitz, S.M.; Szeliski, R. Alignment of 3D point clouds to overhead images. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPR 2009, Miami, FL, USA, 20–25 June 2009; pp. 63–70.
31. Corsini, M.; Dellepiane, M.; Ganovelli, F.; Gherardi, R.; Fusiello, A.; Scopigno, R. Fully automatic registration of image sets on approximate geometry. Int. J. Comput. Vis. 2013, 102, 91–111.
32. Besl, P.; McKay, N. A Method for Registration of 3-D Shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
33. Magnusson, M.; Lilienthal, A.; Duckett, T. Scan registration for autonomous mining vehicles using 3D-NDT. J. Field Robot. 2007, 24, 803–827.
34. Zheng, S.; Huang, R.; Zhou, Y. Registration of optical images with LiDAR data and its accuracy assessment. Photogramm. Eng. Remote Sens. 2013, 79, 731–741.
35. Zhao, W.; Nister, D.; Hsu, S. Alignment of continuous video onto 3D point clouds. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1305–1318.
36. Abayowa, B.O.; Yilmaz, A.; Hardie, R.C. Automatic registration of optical aerial imagery to a LiDAR point cloud for generation of city models. ISPRS J. Photogramm. Remote Sens. 2015, 106, 68–81.
37. Bakstein, H.; Pajdla, T. Panoramic mosaicing with a 180° field of view lens. In Proceedings of the 2002 Third Workshop on Omnidirectional Vision, Copenhagen, Denmark, 2 June 2002; pp. 60–67.
38. Schneider, D.; Schwalbe, E.; Maas, H.G. Validation of geometric models for fisheye lenses. ISPRS J. Photogramm. Remote Sens. 2009, 64, 259–266.
Figure 1. Flow chart of the skyline-based registration method.
Figure 2. LiDAR points data. (a) Main view of LiDAR points. (b) Plane map of LiDAR points.
Figure 3. Panoramic/fish-eye image and control points distribution. (a) Panoramic image (4000 × 8000). (b) Fish-eye image (6000 × 4000).
Figure 4. Sequence of panoramic/fish-eye images. (a) N − 2; (b) N − 1; (c) N + 1; (d) N + 2.
Figure 5. Skyline pixels (display in bold). (a) Panoramic image skyline pixels (red pixels) optimization. (b) Fish-eye image skyline pixels (red pixels) optimization.
Figure 6. Registration result with 3 methods. (a) Panoramic image registration; (b) Fish-eye image registration.
Figure 7. Matching skyline pixels; black pixels express the real image skyline (displayed with a width of 10 pixels), and red pixels express the synthetic image skyline. (a) Panoramic image skyline (n = 1420); (b) Fish-eye image skyline (n = 454).
Figure 8. Local effect comparison of panoramic image registration, black squares express real location of control points; blue, red, and green crosses are synthetic location of control points calculated by method I, II, and III respectively. (a) Building and Lamp; (b) Lamp and Billboard; (c) Lamp; (d) Building and Tree.
Figure 9. Local effect comparison of fish-eye image registration, black squares express real location of control points; blue, red, and green crosses are synthetic location of control points calculated by method I, II, and III respectively. (a) Lamp and Billboard; (b) Lamp; (c) Lamp and Tree.
Figure 10. The error of the 38/19 control points in the panoramic/fish-eye image with the 3 methods; series 1 expresses the error of each control point in method I, series 2–7 express the 6 iterations of our method, and series 8 expresses the result of method III. (a) Panoramic image; (b) Fish-eye image.
Figure 11. Control points displayed in the panoramic/fish-eye image. (a) Fish-eye image (No.13–31); (b) Panoramic image (No.12–31); (c) Panoramic image (No.1–11); (d) Panoramic image (No.32–38).
Figure 12. Textured point cloud imaging. (a) Plane map of the textured points and the locations of N − 1, N, N + 1 (black crosses). (b–d) Perspective imaging of the textured points at different locations: (b) N − 1; (c) N; (d) N + 1.
Figure 13. Registration result of panoramic/fish-eye image sequence with 3 methods. (a) N − 2; (b) N − 1; (c) N + 1; (d) N + 2.
Figure 14. Local registration effect of panoramic images sequence with 3 methods. (a) N − 2; (b) N − 1; (c) N + 1; (d) N + 2.
Figure 15. Local registration effect of fish-eye images sequence with 3 methods. (a) N − 2; (b) N − 1; (c) N + 1; (d) N + 2.
Figure 16. The error of the 38/19(17) control points in the panoramic/fish-eye image sequence with the 3 methods; series 1–3 express the error of each control point in methods I, II and III, respectively. (a) Panoramic image (No. 1–38); (b) Fish-eye image (No. 13(14)–31(30)).
Table 1. Skyline registration error in each iteration (w, rx′, ry′, rz′ (degrees); δ (pixels)).

              Panoramic Image                                Fish-Eye Image
No.   w       rx′     ry′      rz′      n      δ            rx′      ry′     rz′      n     δ
1     5       3.333   0        0        1263   10.914       0        1.666   0        205   28.850
2     2.5     3.333   0        0        1263   10.914       0        0.833   0        320   23.250
3     1.25    3.75    −0.416   0        1293   11.035       0        1.25    −0.416   373   15.231
4     0.625   3.541   −0.208   0        1391   9.906        −0.208   1.041   −0.416   415   17.105
5     0.312   3.541   −0.208   0.104    1392   8.895        −0.104   1.041   −0.520   439   15.493
6     0.156   3.489   −0.104   0.156    1420   8.692        −0.104   1.093   −0.520   454   14.723
Table 2. Registration accuracy with the 3 methods (pixels).

Methods                          I        II       III
Panoramic image (4000 × 8000)    29.041   8.692    5.883
Fish-eye image (6000 × 4000)     56.347   14.723   4.792
Table 3. Registration parameters of the panoramic/fish-eye image sequence (XS, YS, ZS (meters); rx′, ry′, rz′ (degrees)).

        Imaging Position                 Panoramic Image              Fish-Eye Image
No.     XS        YS        ZS           rx′     ry′      rz′         rx′      ry′     rz′
N − 2   710.416   714.012   12.220       3.229   0.104    −0.104      0        0.416   −0.520
N − 1   705.175   708.426   12.249       3.333   −0.208   0.208       −0.104   0.729   −0.416
N       699.901   702.818   12.294       3.489   −0.104   0.156       −0.104   1.093   −0.520
N + 1   694.606   697.180   12.376       3.958   −0.104   0.208       −0.208   1.458   −0.520
N + 2   689.282   691.499   12.494       4.375   −0.104   0           −0.104   2.083   −0.416
Table 4. Registration accuracy of the panoramic/fish-eye image sequence with the 3 methods (pixels).

          Panoramic Image                            Fish-Eye Image
Methods   N − 2    N − 1    N + 1    N + 2           N − 2    N − 1    N + 1    N + 2
I         19.411   23.520   37.302   46.543          32.744   44.569   76.456   95.497
II        9.305    9.199    11.772   16.439          11.310   14.306   24.651   28.169
III       5.342    7.204    5.674    5.336           5.264    6.860    7.775    13.287
