Article

Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations

1 College of Surveying and Geoinformatics, Tongji University, Shanghai 200092, China
2 Institute of Geography, Heidelberg University, Heidelberg D-69120, Germany
* Author to whom correspondence should be addressed.
Sensors 2017, 17(4), 837; https://doi.org/10.3390/s17040837
Submission received: 10 January 2017 / Revised: 5 April 2017 / Accepted: 6 April 2017 / Published: 11 April 2017
(This article belongs to the Section Remote Sensors)

Abstract

A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of point clouds and panoramic images based on sensor constellations. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find its corresponding pixel in the relevant panoramic image via a collinear function and the position and orientation relationships amongst the different sensors. A search strategy is proposed for the correspondence between the laser scanners and the lenses of the panoramic camera to reduce calculation complexity and improve efficiency. Four cases covering different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with high efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10–0.20 m, and vertical accuracy was approximately 0.01–0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst the different sensors, system positioning, and vehicle speed, are discussed.

1. Introduction

1.1. Background

The mobile mapping system (MMS) is an advanced system used to collect environmental geospatial and texture data; it consists of three main parts: mapping sensors, a positioning and navigation unit for spatial referencing, and a time referencing unit [1]. Since the first MMS was created by the Center for Mapping at Ohio State University [2,3], MMS technology has developed rapidly and become an important data source for 3D city modeling, indoor mapping, and urban mapping and planning [1,4,5]. Other advanced MMSs have been developed for real-time air pollution monitoring and health risk management [6,7].
Vehicle-borne MMSs are usually equipped with a laser scanner and a digital or video camera as mapping sensors [1]. Therefore, the integration of the data collected by different sensors, especially the fusion of point clouds and digital imagery, including depth imagery, has become an important research topic [8,9,10,11]. Panoramic cameras have been commonly utilized to replace the traditional digital camera in collecting texture information [12,13,14]. Compared with traditional digital cameras, panoramic cameras collect environmental data with a wider field of view and an accurate positioning module. Therefore, the fusion of panoramic images with other sensor data, especially the point clouds from a laser scanner, is an important issue.
The following subsection presents several previous studies on the fusion of point clouds and images, including panoramic images.

1.2. Previous Studies

Previous studies on the registration of panoramic images and LiDAR (Light Detection And Ranging) point clouds describe four methods. The first is a non-rigid ICP (Iterative Closest Point) algorithm proposed by [15], which incorporates a bundle adjustment into the ICP processing and refines the result using SIFT (Scale-Invariant Feature Transform) features detected in both kinds of datasets. The second is a sensor-alignment based method, which extracts each CCD (Charge-Coupled Device) camera from the panoramic imaging system and obtains accurate internal orientation parameters through calibration [16]; a complex model from the world coordinate system to the single CCD images is then established to find the precise relationship between points and pixels. The third method, proposed by [17], uses a collinear principle that relates the center of the panoramic camera, the image point on the sphere, and the object point; it also relies on the accurate relationship between the GPS, POS (Positioning and Orientation System) and LiDAR sensors. The final method, the feature-line based registration model proposed by [18], extracts linear terrain features from both the LiDAR data and the panoramic images to establish the transformation model. All these methods have been successfully applied to the registration of panoramic images with TLS (Terrestrial Laser Scanning) or MLS (Mobile Laser Scanning) point clouds. However, because the panoramic camera works continuously during data collection, the registration between panoramic images and point clouds is a 1:N problem: a single point in the point clouds may correspond to more than one pixel.
Although only a few studies have addressed the registration between point clouds and panoramic images, the methods for the registration of point clouds and images can also be used if the panoramic images are separated into individual ones. These methods can be divided into two types.
The first type of registration method is based on the point clouds generated by laser scanners and by digital cameras. An important parameter for this method is the overlap rate of the images acquired by the digital cameras. When the overlap rate exceeds a set threshold, dense registration [19] or structure from motion (SFM) [20] algorithms can be applied to compute point clouds pixel by pixel from adjacent or unorganized images. Registration of the point cloud and the digital imagery is thereby transformed into registration between the point clouds from the laser scanner and the point clouds derived from the digital images by SFM or other methods. Several traditional or recently developed methods can be used in this situation, of which iterative closest point (ICP) [21] is the most common. Several extensions and additional conditions based on the basic ICP algorithm, such as random sample consensus [22] and the least median of squares (LMS) estimators [23], have been developed to increase robustness and convergence and to improve computational efficiency and performance. Various other methods based on features extracted from both sources of point clouds, such as corner points of buildings [24,25], polyline and polygon features [26,27,28], normal vectors of polygon features [29,30] and urban road networks [31], can also be applied. An important characteristic of this registration approach is that the acquisition view of the point clouds differs between the two technologies. For example, a terrestrial laser scanner acquires point clouds of the terrain, whereas an unmanned aerial vehicle (UAV) acquires images from low altitude. Good complementarity exists between the point clouds collected through these two methods; therefore, this registration approach is commonly used to integrate point clouds, especially of complex buildings or other such infrastructure.
The second type of method is based on features extracted from the point clouds and the images. Before extracting and applying features for registration, the accurate relationship amongst the sensors, such as the digital cameras, GPS and IMU (inertial measurement unit), should be calibrated and computed. Another pre-processing step for the digital cameras is the calibration of their different lenses. Shift and rotation relationship data are important in the registration of different sensors. The authors of [32] calibrated the fixed mathematical relationship between laser scanners and digital cameras and synchronously acquired point cloud and image data to complete the registration. POS data are usually used to integrate a 3D point cloud and a 360° linear array panoramic camera [33]. The authors of [34] reviewed relevant feature-based methods for the direct registration of point clouds and digital images. Current studies indicate that, compared with natural targets, man-made features are used more frequently [35]. Several easy-to-find man-made features, such as linear edges [36,37], connected line segments [38] and planar features [39], are often used for registration. Several other invisible features, such as vanishing points [40] and mutual information [41,42], are also used. Amongst these methods, visible and linear feature extraction algorithms, including the scale-invariant feature transform, are consistently employed [43,44]. UAVs have also been utilized to capture aerial images, and the registration of aerial point clouds and images has been studied [45].

1.3. Present Work

Several registration and data integration methods have been developed for the fusion of point clouds and digital images. However, from our perspective, these methods cannot be directly used to combine point clouds and panoramic images. For example, given that the overlap rate amongst the adjacent lenses of a panoramic camera is small, the dense registration method for point clouds generated by photogrammetry can only yield sparse point clouds. In addition, feature-based methods are usually developed to register point clouds and traditional digital images, and their application to different situations, such as panoramic images, requires further verification. Furthermore, as more than four lenses exist in a panoramic camera, the selection of the corresponding imagery for a given point is vital for registration. Therefore, a new method to register point clouds and panoramic images is required.
To address this issue, a new registration approach based on the sensor constellation was developed in this study. This approach makes full use of the position and orientation relationship between the GPS and the panoramic camera. A segmentation feature point was computed based on the real-time sensor constellation. Using such segmentation feature points, the point clouds acquired by the different laser scanners can be divided into small blocks. Finally, the points of each block were introduced into the geometric conversion model to compute the corresponding pixels in the panoramic images. After that, the color or texture information of the panoramic images was extracted and fused with the 3D point clouds. Because the relative positions of the sensors are fixed after installation, the proposed method is simple compared with other methods and is well suited to large-volume computation.

2. Sensor Constellation of MMS

The sensor constellation of an MMS differs according to its application purpose. In this study, an MMS was designed to collect data from urban roads and extract the visible features, including symbols and markers for transportation and terrain objects around the roads. This MMS contained a panoramic camera, sectional scanners, GPS, IMU and other necessary sensors.
Three laser scanners were installed in different locations. Figure 1 shows the relationship between the panoramic camera, the sectional laser scanners and the GPS antenna. The first sectional laser scanner, LS-1 in Figure 1, was installed at the rear of the vehicle to collect data from the top view. The other two laser scanners were installed on either side of the roof of the vehicle to collect data on both sides of the road. LS-2 collected data from the left-front direction, and LS-3 acquired data from the right-front direction. Both LS-2 and LS-3 collected point clouds in a vertical plane. As the orientation and position relationships were accurately calibrated, the point clouds captured by these three laser scanners were expressed in the same coordinate system for subsequent registration.
The panoramic camera used in this study was a LadyBug® (FLIR Integrated Imaging Solutions, Inc., Richmond, BC, Canada), and was installed in the center of the vehicle roof (see Figure 1a and the green arrow in Figure 1c). The camera contained six high-definition lenses and captured images from six directions simultaneously. The sampling rate was 1.0 Hz. NMEA (National Marine Electronics Association) GPS signals were input into the panoramic camera in real-time to record the position of the panoramic camera.
The sectional laser scanners were SICK® LMS 511 Pro. (SICK Vertriebs-GmbH, Düsseldorf, NRW, Germany). The maximum scanning distance of this kind of laser scanner was 80 m, and the field of view reached 190°. The scanning frequency was 100 Hz, and the angle resolution was 0.166°. The main original data collected by the laser scanner were time, distance, intensity, and scan angle.
According to the data-processing model, an integration system was developed to collect the sensor data and compute point clouds from the original IMU, GPS and laser scanner observations, so that a 3D point cloud of the surrounding objects could be obtained. The MMS comprised three sectional laser scanners: two installed toward the front of the vehicle and one at the rear. The point clouds from the different scanners were separated from the original point clouds. Figure 2 shows the spatial distribution of the point clouds captured by the different laser scanners. The green points, captured by LS-1, lie on the road and follow the movement of the vehicle. The other points, in yellow or brown, are located on the roadside and were captured by LS-2 and LS-3. The scan planes of LS-2 and LS-3 were perpendicular to the horizontal plane.

3. Registration of Mobile Point Clouds and Panoramic Images Based on Sensor Constellations

In this section, the registration procedures of the mobile point clouds and the panoramic images are introduced based on the sensor constellation of the MMS. Section 3.1 outlines the flowchart of the proposed method and Section 3.2, Section 3.3 and Section 3.4 introduce the detailed processing steps.

3.1. Flowchart of Proposed Method

Three main steps were included in the sensor-constellation based method (Figure 3). First, in order to separate the whole road's point clouds into small blocks, the sensor constellation (mainly the panoramic camera and the GPS) was analyzed and a segmentation feature point extraction model was proposed. As both the panoramic camera and the GPS contained a positioning module, the positions of the panoramic camera and the GPS could always be obtained. This ensured that the feature point could always be acquired while the MMS was travelling. This step is explained in detail in Section 3.2.
After the segmentation feature point was obtained, a polygon area was extracted and defined as the central block for LS-1 (discussed in Section 3.3.1). The corner points of the central block were also computed, and the side blocks were fixed (discussed in Section 3.3.2). The central and side blocks were used to segment the whole point clouds into small blocks.
Finally, each point in the small blocks was used to find the corresponding pixel in the panoramic image using the sensors' relationship matrices introduced in Section 3.4. An image search strategy is also provided in that subsection.

3.2. Segmentation Feature Point Extraction Based on Sensor Constellation

In this subsection, a feature point was extracted according to the sensor constellation between the GPS and the panoramic camera. As the positions of the GPS and the panoramic camera can always be obtained, the relationship between the two sensors, as well as between the vehicle and the ground, is easy to reconstruct. A diagram of the principle of segmentation feature point extraction is shown in Figure 4.
$G(x_{gps}^{T}, y_{gps}^{T}, h_{gps}^{T})$ is assumed to be the coordinate of the GPS antenna at a certain time $T$, and $P(x_{cam}^{T}, y_{cam}^{T}, h_{cam}^{T})$ is the coordinate of the panoramic camera at the same time. These positions can be obtained according to the GPS observations and the relationship between the GPS and the panoramic camera.
Let β be the road inclination angle at a certain position, which can be obtained by GPS according to the adjacent epochs or the point clouds captured nearby. In this study, previous epochs of GPS were utilized to calculate the inclination angle.
$h_{cam}$ is the distance from the centre of the panoramic camera to the road (see line P-P1 in Figure 4) and remains constant after the panoramic camera has been installed. The horizontal plane is the projection plane with a certain elevation $h$ and is also the plane on which the computation is carried out.
$i$ is the intersection point of line P-G with the road. $L$ is the segment between points $j$ and $Q$, which are the projections of $i$ and $P$ onto the horizontal plane, respectively. In this study, point $j$ was regarded as the segmentation feature point.
Therefore, the vertical angle $\theta$ can be obtained with
$$\theta + \beta = \arcsin\left(\frac{z_{cam}^{T} - z_{gps}^{T}}{\sqrt{(x_{cam}^{T} - x_{gps}^{T})^{2} + (y_{cam}^{T} - y_{gps}^{T})^{2} + (z_{cam}^{T} - z_{gps}^{T})^{2}}}\right). \tag{1}$$
The distance of line i-P2, $d_{iP_2}$, can be obtained with
$$d_{iP_2} = d_{iP_1} - d_{P_1P_2} = \frac{h_{cam}}{\tan\theta} - h_{cam}\tan\beta. \tag{2}$$
Given that $L$ is the projection of line i-P, which is also the projection of line i-P2, it can be calculated with
$$L = d_{jQ} = d_{iP_2}\cos\beta = \left(\frac{h_{cam}}{\tan\theta} - h_{cam}\tan\beta\right)\cos\beta. \tag{3}$$
Figure 5 shows the relationship of the segmentation feature point (j) and mobile mapping vehicle.
The coordinate of the segmentation feature point $j(x_j^{T}, y_j^{T}, h_j^{T})$ can be calculated according to the azimuth angle $\gamma$ and the projected panoramic camera coordinate $Q(x_{cam}^{T}, y_{cam}^{T}, h)$ at a certain time $T$ as follows:
$$\begin{cases} x_j^{T} = x_{cam}^{T} - L\cos\gamma = x_{cam}^{T} - \left(\dfrac{h_{cam}}{\tan\theta} - h_{cam}\tan\beta\right)\cos\beta\cos\gamma \\[1ex] y_j^{T} = y_{cam}^{T} - L\sin\gamma = y_{cam}^{T} - \left(\dfrac{h_{cam}}{\tan\theta} - h_{cam}\tan\beta\right)\cos\beta\sin\gamma \\[1ex] h_j^{T} = h \end{cases} \tag{4}$$
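For concreteness, Equations (1)–(4) can be condensed into a short sketch. The snippet below is our own minimal illustration, not the authors' implementation; it assumes the GPS and camera positions are already expressed in the same projected coordinate system and that the angles $\beta$ and $\gamma$ are supplied in radians.

```python
import math

def segmentation_feature_point(gps, cam, h_cam, beta, gamma, h=0.0):
    """Compute the segmentation feature point j (Equations (1)-(4)).

    gps, cam : (x, y, z) of the GPS antenna and the panoramic camera at time T
    h_cam    : height of the camera centre above the road surface
    beta     : road inclination angle (radians)
    gamma    : azimuth angle of line j-Q, i.e. the driving direction (radians)
    h        : elevation of the horizontal projection plane
    """
    xg, yg, zg = gps
    xc, yc, zc = cam
    # Equation (1): vertical angle of line P-G corrected for the road inclination
    dist_pg = math.sqrt((xc - xg) ** 2 + (yc - yg) ** 2 + (zc - zg) ** 2)
    theta = math.asin((zc - zg) / dist_pg) - beta
    # Equations (2)-(3): horizontal length L of the projected segment j-Q
    L = (h_cam / math.tan(theta) - h_cam * math.tan(beta)) * math.cos(beta)
    # Equation (4): coordinates of the segmentation feature point j
    return (xc - L * math.cos(gamma), yc - L * math.sin(gamma), h)
```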

3.3. Division of Original Points into Blocks

3.3.1. Division of the Point Clouds Captured by LS-1

After the location of the segmentation feature point was calculated, the road's point clouds were divided so that the large volume of points was separated into small blocks.
Based on the data collection frequency of the panoramic camera, a segmentation point was computed for each moment at which the panoramic camera was operating. A threshold, $W_{back}$, which indicates the width of the block shape, was used to determine the width of the polygon (Figure 6), and the length of the block shape was determined by the adjacent segmentation feature points ($j^{T}$ and $j^{T+1}$). Next, the block shape was fixed (see the green box in Figure 6), and the point clouds contained in the polygon were stored as a small block for further processing.
Considering that the field angle of the sectional laser scanner used in this study was more than 180°, the extent of a scan line is extremely large in theory. However, the threshold $W_{back}$ limits the block to the most central part of the scan lines, where point accuracy is better than elsewhere. Furthermore, the points located at the sides are overlapped by the point clouds from the other laser scanners. Therefore, the points outside the block shape were not used in the registration described in Section 4.
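As a rough illustration of this block extraction step, the sketch below filters a point list against a convex block polygon. It is our own illustration rather than the authors' implementation; the block corners are assumed to be derived from the segmentation feature points and $W_{back}$ as described above (see also the corner-point sketch at the end of Section 3.3.2).

```python
def inside_convex_polygon(pt, corners):
    """True if the 2D point lies inside (or on) the convex polygon whose corners are given in order."""
    x, y = pt
    cross = [(x2 - x1) * (y - y1) - (y2 - y1) * (x - x1)
             for (x1, y1), (x2, y2) in zip(corners, corners[1:] + corners[:1])]
    # For a convex polygon, all edge cross products share the same sign for interior points.
    return all(c >= 0 for c in cross) or all(c <= 0 for c in cross)

def extract_block(points, corners):
    """Keep the points whose horizontal (x, y) position falls inside the block polygon.

    points  : iterable of (x, y, z) tuples from one laser scanner
    corners : block corner coordinates, e.g. the rectangle spanned by j_T, j_T+1 and W_back
    """
    return [p for p in points if inside_convex_polygon((p[0], p[1]), corners)]
```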

3.3.2. Division of the Point Clouds Captured by LS-2 and LS-3

The block shape for LS-1 can be accurately determined by the segmentation points and the parameter $W_{back}$; however, the method introduced in Section 3.3.1 cannot be used for the two other laser scanners (LS-2 and LS-3), as no obvious target exists at the side of the vehicle. Therefore, a two-step method was introduced to divide the point clouds captured by LS-2 and LS-3 into small blocks.
First, the coordinates of four corner points were obtained at each time point (see A, B, C and D in Figure 7). Given that the coordinates of the feature points, $j^{T}(x_j^{T}, y_j^{T}, h)$ and $j^{T+1}(x_j^{T+1}, y_j^{T+1}, h)$, were obtained with Equation (4), the corner points could also be calculated. For example, the coordinate of point $A(x_A^{T}, y_A^{T}, h_A^{T})$ at time $T$ can be calculated as
$$\begin{cases} x_A^{T} = x_j^{T} + \dfrac{W_{back}}{2}\sin(\gamma - 90^{\circ}) \\[1ex] y_A^{T} = y_j^{T} + \dfrac{W_{back}}{2}\cos(\gamma - 90^{\circ}) \\[1ex] h_A^{T} = h \end{cases} \tag{5}$$
where $\gamma$ is the azimuth angle of line j-Q (Figure 5).
Second, the shape of the block was determined as per the corner points and the scan direction of the laser scanner (LS-2 and LS-3). The scan direction of the laser scanner was determined when the sensor was installed.
Similar to the point clouds captured by LS-1, most of the point clouds captured by LS-2 or LS-3 were inserted into blocks for registration. A few points were removed because they overlapped with the central block captured by LS-1. The blocks from different laser scanners were adjacent but did not overlap.
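A corner-point computation along these lines is sketched below. Equation (5) gives corner A explicitly; the signs used here for B, C and D are our assumption by symmetry about the segment between $j^{T}$ and $j^{T+1}$, so the snippet is an illustration rather than the authors' exact procedure.

```python
import math

def block_corners(j_t, j_t1, gamma_deg, w_back, h=0.0):
    """Corner points A, B, C, D of the block spanned by feature points j_T and j_T+1.

    j_t, j_t1 : (x, y) of the segmentation feature points at times T and T+1
    gamma_deg : azimuth angle of line j-Q in degrees
    w_back    : block width threshold W_back
    h         : elevation of the horizontal projection plane
    """
    g = math.radians(gamma_deg - 90.0)
    dx = 0.5 * w_back * math.sin(g)          # Equation (5), x offset
    dy = 0.5 * w_back * math.cos(g)          # Equation (5), y offset
    (xt, yt), (xs, ys) = j_t, j_t1
    a = (xt + dx, yt + dy, h)                # A, exactly as in Equation (5)
    b = (xt - dx, yt - dy, h)                # B, C and D assumed by symmetry
    c = (xs - dx, ys - dy, h)
    d = (xs + dx, ys + dy, h)
    return a, b, c, d
```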

3.4. Registration of a Block’s Point Clouds and Panoramic Images

The point clouds of MMS are continuously collected and represent the locations of surrounding objects. The frequency of the laser scanner is higher than that of the panoramic camera, so the point clouds are continuous while panoramic images are discontinuous. After processing via the methods introduced in Section 3.2 and Section 3.3, the point clouds were divided into small blocks. The points in each block were used to register the images collected by the panoramic camera.
As the purpose of the registration between the point clouds and panoramic images was to fuse the texture information of the images with the geometric information of the point clouds, each point extracted from the blocks was selected and converted into the pixel coordinate of the panoramic camera via the local coordinate system of the MMS and the local coordinate system of the panoramic camera. If a proper pixel was found in the relevant lens’ imagery, the color information was stored. Otherwise, if no proper pixel was found, then the point was removed. As these steps are quite simple using photogrammetry principles and equations in textbooks, this part of the processing is omitted in this paper.
Before applying the point cloud coordinate conversion, some pre-processing, such as distortion correction for the panoramic camera, interior orientation element calibration for each lens of the panoramic camera and geometric relationship calibration between the local coordinate system of the panoramic camera and MMS, needs to be conducted. In this study, several traditional methods including [46,47,48,49] were used.
$(X_k^{w}, Y_k^{w}, Z_k^{w})$ is assumed to be the coordinate of point $k$ in the world coordinate system, $(X_k^{pan}, Y_k^{pan}, Z_k^{pan})$ denotes the coordinate of point $k$ in the panoramic camera coordinate system, and $(X_k^{mms}, Y_k^{mms}, Z_k^{mms})$ denotes the corresponding coordinate of point $k$ in the MMS coordinate system. According to the point cloud calculation method, the relationship between $(X_k^{w}, Y_k^{w}, Z_k^{w})$ and $(X_k^{mms}, Y_k^{mms}, Z_k^{mms})$ is
$$\begin{pmatrix} X_k^{w} \\ Y_k^{w} \\ Z_k^{w} \end{pmatrix} = \begin{pmatrix} \Delta X_{mms}^{w} \\ \Delta Y_{mms}^{w} \\ \Delta Z_{mms}^{w} \end{pmatrix} + R_{mms}^{w} \begin{pmatrix} X_k^{mms} \\ Y_k^{mms} \\ Z_k^{mms} \end{pmatrix}. \tag{6}$$
Therefore,
$$\begin{pmatrix} X_k^{mms} \\ Y_k^{mms} \\ Z_k^{mms} \end{pmatrix} = \left(R_{mms}^{w}\right)^{-1} \begin{pmatrix} X_k^{w} - \Delta X_{mms}^{w} \\ Y_k^{w} - \Delta Y_{mms}^{w} \\ Z_k^{w} - \Delta Z_{mms}^{w} \end{pmatrix} \tag{7}$$
where $R_{mms}^{w}$ and $(\Delta X, \Delta Y, \Delta Z)_{mms}^{w}$ are the transformation parameters from the MMS coordinate system to the world coordinate system, which are obtained during the point cloud computation. The coordinate of point $k$ in the MMS coordinate system can thus be obtained using Equation (7).
Next, $(X_k^{mms}, Y_k^{mms}, Z_k^{mms})$ was transferred to the coordinate system of the panoramic camera via
$$\begin{pmatrix} X_k^{pan} \\ Y_k^{pan} \\ Z_k^{pan} \end{pmatrix} = \begin{pmatrix} \Delta X_{mms}^{pan} \\ \Delta Y_{mms}^{pan} \\ \Delta Z_{mms}^{pan} \end{pmatrix} + R_{mms}^{pan} \begin{pmatrix} X_k^{mms} \\ Y_k^{mms} \\ Z_k^{mms} \end{pmatrix} \tag{8}$$
where $(\Delta X_{mms}^{pan}, \Delta Y_{mms}^{pan}, \Delta Z_{mms}^{pan})$ and $R_{mms}^{pan}$ were determined after the panoramic camera was installed and can be obtained via the calibration of the MMS. Finally, in the panoramic camera coordinate system, the pixel coordinate $(x, y)$ of an image can be computed according to the following collinear equation:
$$\begin{cases} x = -f\,\dfrac{a_1(X_k^{pan}-X_s)+b_1(Y_k^{pan}-Y_s)+c_1(Z_k^{pan}-Z_s)}{a_3(X_k^{pan}-X_s)+b_3(Y_k^{pan}-Y_s)+c_3(Z_k^{pan}-Z_s)} \\[2ex] y = -f\,\dfrac{a_2(X_k^{pan}-X_s)+b_2(Y_k^{pan}-Y_s)+c_2(Z_k^{pan}-Z_s)}{a_3(X_k^{pan}-X_s)+b_3(Y_k^{pan}-Y_s)+c_3(Z_k^{pan}-Z_s)} \end{cases} \tag{9}$$
where $(X_s, Y_s, Z_s)$ are the linear elements of the exterior orientation of the panoramic camera. These parameters can be obtained from the GPS and IMU observations, as well as the installation relationship between the panoramic camera and the GPS. $(a_1, b_1, c_1, a_2, b_2, c_2, a_3, b_3, c_3)$ are the elements of the direction cosine matrix, which can also be fixed at a certain time.
Considering that the panoramic camera used in this study contained six different lenses, a straightforward registration process would search for the corresponding pixel in the image captured by every lens. However, this procedure reduces calculation efficiency, so a registration strategy was adopted in this study.
Figure 8 shows the observation field of view of each lens of the panoramic camera. According to the installation relationship between the panoramic camera and the laser scanners, the field of view of a lens is always associated with a certain laser scanner. Therefore, we restricted the search according to this correspondence to improve search efficiency (Table 1).
Using the coordinates of the point and the parameters listed in Equations (6)–(9), the corresponding pixel can be found directly; what differs between the laser scanners is which lenses are considered. When applying Equation (9), the direction cosine matrix of the lens should therefore be chosen as per Table 1.
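The projection chain in Equations (6)–(9), together with the lens search ranges of Table 1, can be sketched as follows. This is our own illustration under assumed names (NumPy arrays for the rotation matrices, a hypothetical `lens_params` lookup per lens), not the system's actual implementation; lens interior-orientation details such as principal-point offsets and distortion corrections are omitted, and the sign convention follows Equation (9) as reconstructed above.

```python
import numpy as np

# Image search range from Table 1: candidate lenses for each laser scanner.
LENS_SEARCH_RANGE = {"LS-1": [0], "LS-2": [1, 2, 5], "LS-3": [3, 4, 5]}

def world_to_pixel(p_world, t_mms_w, R_mms_w, t_mms_pan, R_mms_pan, s_lens, R_lens, f):
    """Project a world-coordinate point into one lens image (Equations (6)-(9)).

    t_mms_w, R_mms_w     : translation/rotation from the MMS frame to the world frame
    t_mms_pan, R_mms_pan : translation/rotation from the MMS frame to the panoramic camera frame
    s_lens               : exterior-orientation position (X_s, Y_s, Z_s) of the lens
    R_lens               : 3x3 direction-cosine matrix (rows a_i, b_i, c_i) of the lens
    f                    : focal length of the lens
    """
    p_world = np.asarray(p_world, dtype=float)
    # Equation (7): world -> MMS coordinates
    p_mms = np.linalg.inv(np.asarray(R_mms_w)) @ (p_world - np.asarray(t_mms_w))
    # Equation (8): MMS -> panoramic camera coordinates
    p_pan = np.asarray(t_mms_pan) + np.asarray(R_mms_pan) @ p_mms
    # Equation (9): collinearity condition in the lens frame
    d = np.asarray(R_lens) @ (p_pan - np.asarray(s_lens))
    if abs(d[2]) < 1e-12:
        return None                      # the point cannot be projected by this lens
    return -f * d[0] / d[2], -f * d[1] / d[2]

def register_point(p_world, scanner, lens_params, frames):
    """Search only the lenses associated with the scanner (Table 1); return the first projection.

    A full implementation would also check that the pixel lies within the image bounds
    before accepting it and storing the colour of that pixel for the point.
    """
    for lens_id in LENS_SEARCH_RANGE[scanner]:
        s_lens, R_lens, f = lens_params[lens_id]
        xy = world_to_pixel(p_world, frames["t_mms_w"], frames["R_mms_w"],
                            frames["t_mms_pan"], frames["R_mms_pan"], s_lens, R_lens, f)
        if xy is not None:
            return lens_id, xy
    return None
```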

4. Case Studies

4.1. Case Area

Four areas in Shanghai were selected to validate the proposed method. Given that the main purpose of the MMS was to obtain the symbols and markers as well as the relevant objects of roads, these test cases corresponded to four different types of roads: overpass, freeway, tunnel, and surface roads with intersections. The main information of these test cases and the data collection environments and parameters are listed in Table 2.

4.2. Registration Results

4.2.1. Efficiency Evaluation for Different Laser Scanner’s Point Clouds

To evaluate the efficiency of the registration between the different laser scanners and the different lenses of the panoramic camera, the total points, valid matched points, match rate and computation time were summarized after the registration processing. During the processing, $W_{back}$ was set to 160.0 m, which is twice the maximum scanning distance of the laser scanner. Therefore, most of the point clouds were divided into small blocks and used for registration. The total points captured by each laser scanner, the matched points, the match rate and the computation time are shown in Table 3.
From the information presented in Table 3, the number of points from LS-1 is higher than from LS-2 and LS-3 because LS-1 was designed to collect points on the road, where most of the laser beams are reflected. The other two laser scanners were designed to collect points around the road, and some of their laser beams returned no echo, as no targets exist in the air.
As seen in Table 3, most of the points were successfully matched (with an average of 99.7%) with the corresponding pixels in the panoramic images; very few points remained unmatched during the registration procedures. After summarizing the total points and computation time for each laser scanner, the average computation efficiency was 38,155 points/s, 23,870 points/s and 24,006 points/s for LS-1, LS-2 and LS-3, respectively (Table 4). The efficiency of LS-1 was approximately 1.6 times that of the other two laser scanners because its image search range covers fewer lenses (see Section 3.4).

4.2.2. Visualization of Registration Results

To evaluate the registration results for different objects around the road obtained by the proposed method, Figure 9 shows the visualization for four different types of roads. As the lighting conditions in the tunnel were poorer than on the other roads, the brightness of the tunnel point clouds after matching (see Figure 9d) was lower than in the other three examples. The main objects and symbols around the roads, such as solid/dashed lines, lights, the central isolation belt, acoustic panels, crosswalks, left-turn lanes, trees and some vehicles, could be accurately identified in the fused point clouds. This also enhances the possibility of automatically extracting the objects and symbols around the roads.

4.3. Accuracy Evaluation

4.3.1. Evaluation Method

The distance of checkpoints before and after registration was used to evaluate registration accuracy. Given that the main purpose of this study was to fuse the texture information of an image with the geometric information of the point clouds, the feature points were manually selected based on the fused point clouds. For example, as the arrow can be manually found in both the original (rendered by intensity) and fused point clouds (rendered by color), the arrow in the road was then selected, and the top point of the arrow was used for comparison (Figure 10).
Supposing that the coordinate of the arrow point in the fused point clouds is $(X_f, Y_f, Z_f)$ and in the original point clouds is $(X_o, Y_o, Z_o)$, three indices can be obtained to evaluate the geometric accuracy of registration, where $d_H$ indicates the horizontal offset for the arrowhead after registration:
$$d_H = \sqrt{(X_f - X_o)^2 + (Y_f - Y_o)^2} \tag{10}$$
$$d_V = \left| Z_f - Z_o \right| \tag{11}$$
$$d = \sqrt{d_H^2 + d_V^2}. \tag{12}$$
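The three indices can be computed directly from each checkpoint pair. The short sketch below uses our own helper names and also aggregates the minimum, maximum and mean values in the manner of Table 5.

```python
import math

def checkpoint_errors(fused, original):
    """Equations (10)-(12): horizontal, vertical and total offset of one checkpoint pair."""
    (xf, yf, zf), (xo, yo, zo) = fused, original
    d_h = math.hypot(xf - xo, yf - yo)
    d_v = abs(zf - zo)
    return d_h, d_v, math.hypot(d_h, d_v)

def summarize(pairs):
    """Min/max/average of d_H, d_V and d over a list of (fused, original) checkpoint pairs."""
    errors = [checkpoint_errors(f, o) for f, o in pairs]
    return [(min(col), max(col), sum(col) / len(col)) for col in zip(*errors)]
```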

4.3.2. Evaluation Result

Twenty checkpoint pairs were manually selected for each road type, and the relative indicators (described in Section 4.3.1) were computed to evaluate accuracy. The coordinates and differences of the checkpoints for each case are listed in Appendix A, and the statistical results are shown in Table 5, where $d_H$ indicates the horizontal offset, $d_V$ indicates the vertical offset and $d$ is the total offset between before and after registration.
Table 5 indicates that plane geometric accuracy was approximately 0.10–0.20 m. Accuracy varied across the different environments, which may have been caused by several factors and will be analyzed in Section 4.4.
Although the tunnel elevation varied from −4.4 m to −40.6 m, the vertical accuracy in the tunnel was approximately 0.00–0.03 m, which is relatively stable. The vertical accuracy for the overpass, intersection and freeway cases was also stable. Hence, the proposed method can achieve stable and good accuracy in the vertical direction.

4.4. Discussion of the Main Factors Influencing Registration Accuracy

Section 4.3.2 shows the registration accuracy evaluation results for each case area. In this subsection, the main influence factors are analyzed and discussed, including time synchronization error, GPS signal and vehicle speed.

4.4.1. Time Synchronization for Different Sensors

The time system of an MMS is an important parameter as it determines the main accuracy of the MMS. Therefore, a vital step in data processing is time synchronization amongst the different sensors. Table 6 shows the time system used in the proposed MMS. The GPS, IMU and panoramic camera adopted GPS time, whereas the laser scanner adopted the time of the operating system (Windows time). GPS time is generally accurate, and Windows time is characterized by relatively low accuracy. Therefore, the error between the GPS time and Windows time will affect the registration results. During data collection, the synchronization error accumulates.
The time synchronization error affects accuracy in the driving direction according to the following equation:
$$d_s = v \times d_t \tag{13}$$
where $d_s$ denotes the affected distance, $d_t$ denotes the time synchronization error, and $v$ is the vehicle speed. As per Equation (13), registration accuracy is affected by the time synchronization error. When the time synchronization error was 1 ms (0.001 s) and the vehicle speed was 40 km/h, accuracy decreased by 1.1 cm. Therefore, prior to data collection, the systematic error between GPS time and Windows time needs to be calibrated; after data collection, the same operation should be conducted again to improve the time accuracy of the laser scanner data.
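For example, the 1.1 cm figure quoted above follows directly from Equation (13):
$$d_s = v \times d_t = \frac{40\ \mathrm{km/h}}{3.6} \times 0.001\ \mathrm{s} \approx 11.1\ \mathrm{m/s} \times 0.001\ \mathrm{s} \approx 0.011\ \mathrm{m} = 1.1\ \mathrm{cm}.$$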

4.4.2. Vehicle Speed

As shown in Equation (13), the affected distance depends on the time synchronization error as well as on the speed of the vehicle. Vehicle speed was an important parameter during data collection, as it directly determined the efficiency of the MMS. The collection parameters of the other sensors were fixed during data collection; only the speed of the vehicle varied. The resolution of the point clouds is therefore affected by the vehicle's speed, and compared with dense point clouds, registration with sparse point clouds obviously leads to lower accuracy. Consequently, for the data acquisition system considered here, the vehicle should not simply be driven as fast as possible.
Given a vehicle speed of 30 km/h and a laser scanner frequency of 100 Hz, the resolution of the point clouds along the driving direction was 0.083 m. As the speed increases, the resolution degrades rapidly, leading to lower registration accuracy.
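Written out, this along-track point spacing is simply the distance travelled between two consecutive scan lines (writing $f_{scan}$ for the 100 Hz scanning frequency; the symbol is ours):
$$\Delta d = \frac{v}{f_{scan}} = \frac{30\ \mathrm{km/h}}{3.6 \times 100\ \mathrm{Hz}} \approx \frac{8.33\ \mathrm{m/s}}{100\ \mathrm{Hz}} \approx 0.083\ \mathrm{m},$$
so doubling the speed to 60 km/h roughly doubles the spacing to about 0.17 m.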

4.4.3. Positioning Error

GPS was one of the main positioning sensors installed in the MMS. It determined the location of the vehicle and provided it to the IMU to obtain accurate orientation parameters of the vehicle. Thus, the loss of the GPS signal affects MMS positioning and registration accuracy. Although the vehicle position can be obtained from other sensors, including the wheel angular sensor and the IMU, the positioning error accumulates while the GPS signal is lost. In our studies, the GPS signal was lost only in the tunnel case. The spatial distribution of the checkpoints' accuracy with respect to mileage is shown in Figure 11; the mileage can be regarded as a proxy for the time since the vehicle entered the tunnel.
As seen in Figure 11, checkpoint accuracy decreased with mileage. Therefore, if a tunnel is too long, registration accuracy cannot be guaranteed. The vertical accuracy was relatively stable and varied from 0.00 to 0.02 m because the road area around the symbols was flat; even when the horizontal difference is large, the vertical difference remains stable.

5. Discussion and Conclusions

An invisible feature point, calculated based on the sensor constellation, was used for the registration of the road's point clouds with the panoramic imagery. The feature point, which is the intersection of the connecting line between the GPS antenna and the panoramic camera with a horizontal plane, was utilized to separate the large-volume point clouds into blocks. This invisible feature point was fixed after the sensor constellation was determined and is independent of the environment. Therefore, it can always be calculated during data processing. This ensures a 1:1 matching and thus increases the likelihood of successful registration.
Four typical road types, namely overpass, freeway, tunnel and surface roads, were selected to verify the proposed method. Our results show that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with high efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10–0.20 m, and vertical accuracy was approximately 0.01–0.02 m for all cases.
The novelty of the proposed method is that an invisible feature point was calculated according to the sensor constellation, mainly the GPS and panoramic camera. As the constellation of the GPS and panoramic camera was fixed after the sensors were installed, and both sensors contained positioning modules, the feature point can always be found during travel. This improved the stability and reduced the complexity of registration computing.
In this paper, the segmentation feature point was computed based on the real-time positions of the GPS and the panoramic camera. However, if the sensor constellation differs from that of the MMS proposed in this paper, users may select a different feature point to segment the point clouds. The selection of the segmentation feature should satisfy two conditions: first, the feature point should be fixed and easy to find after the sensors are installed; second, the calculation of the feature point should be simple, with no iteration required. These conditions ensure the accuracy and efficiency of registration.
Although the calculation method in this study is relatively simple and uses only one uncertain parameter ($W_{back}$), this parameter is important for registration because it determines the width of the central blocks. An unmatched part will exist if too small a value is selected, and a mixed part will appear if too large a value is used during registration. We recommend selecting $W_{back}$ according to the urban environment. For example, if the MMS operates on a freeway, then $W_{back}$ can be determined by the width of the lanes, including the emergency lane. This ensures that the first laser scanner, LS-1, is always the one used to capture the point clouds of the road pavement.
Another important parameter in the proposed method is the road inclination angle $\beta$, which is computed during registration for each moment at which the panoramic camera operates. The road's point clouds around the area can be used to fit the inclination angle at that time and location; alternatively, the adjacent GPS epochs can be used to calculate it. Therefore, the proposed method cannot be directly used in a real-time system; however, once the slopes of the roads are available for a city, the proposed method can be used to register the point clouds with panoramic images in real time.

Acknowledgements

This study was supported by the National Science and Technology Major Program (Nos. 2016YFB0502104, 2016YFB1200602-02, 2016YFB0502102) and the National Science Foundation of China (No. 41671451). The authors also appreciate the contributions made by anonymous reviewers and the MDPI English editing service.

Author Contributions

Lianbi Yao had the initial idea for the proposed MMS and is responsible for the research design. Hangbin Wu and Yayun Li collected the data and finished the data processing procedures, especially the data fusion program design and accuracy evaluation. Among the data analysis, Bin Meng and Jinfei Qian contributed parts of the accuracy evaluation results. Hangbin Wu, Yayun Li, and Hongchao Fan wrote the paper. Chun Liu contributed to the MMS design. Hongchao Fan contributed his efforts for the improvement of research structure and the experiments.

Conflicts of Interest

The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A

Table A1. Checkpoints of overpass case (m).
ID | Before Registration (X, Y, Z) | After Registration (X, Y, Z) | d_H | d_V | d
1557,266.543,464,931.965.95557,266.543,464,931.965.950.000.000.00
2558,161.273,465,074.765.89558,161.303,465,074.785.890.040.000.04
3558,049.903,465,035.685.88558,049.813,465,035.665.910.090.030.10
4557,132.703,466,020.565.73557,132.703,466,020.465.740.100.010.10
5557,113.873,466,025.275.75557,113.873,466,025.375.750.100.000.10
655,581.993,465,065.845.82555,811.083,465,065.735.820.140.000.14
7556,456.293,465,032.107.29556,456.193,465,032.167.300.120.010.12
8556,885.173,465,062.6510.97556,885.063,465,062.5310.960.160.010.16
9556,547.913,465,037.446.99556,547.813,465,037.57.000.120.010.12
10557,127.503,466,267.065.86557,127.363,466,266.965.840.170.020.17
11557,108.583,465,007.555.47557,108.633,465,007.785.460.240.010.24
12558,050.043,465,035.635.99558,049.753,465,035.675.980.290.010.29
13557,130.753,464,592.065.92557,130.823,464,592.065.920.070.000.07
14557,131.733,465,560.567.81557,131.713,465,560.387.840.180.030.18
15557,118.363,465,905.695.88557,118.453,465,905.495.880.220.000.22
16557,144.343,464,662.675.66557,144.323,464,662.335.660.340.000.34
17556,322.903,465,038.687.58556,322.633,465,038.757.590.280.010.28
18558,458.613,465,031.805.81558,458.403,465,031.945.800.250.010.25
19557,388.513,465,059.409.48557,388.863,465,059.369.470.350.010.35
20555,258.913,465,052.207.46555,259.193,465,052.067.460.310.000.31
Table A2. Checkpoints of freeway case (m).
ID | Before Registration (X, Y, Z) | After Registration (X, Y, Z) | d_H | d_V | d
1529,034.533,426,244.347.09529,034.533,426,244.347.090.000.000.00
2529,278.043,424,996.5113.86529,278.043,424,996.5113.860.000.000.00
3529,296.453,423,696.156.23529,296.453,423,696.156.230.000.000.00
4529,601.833,422,438.626.41529,601.903,422,438.666.420.080.010.08
5530,364.203,421,574.786.68530,364.233,421,574.816.690.040.010.04
6531,210.653,420,911.277.35531,210.693,420,911.327.340.060.010.06
7531,896.873,420,146.055.94531,896.923,420,146.075.940.050.000.05
8532,229.983,419,233.145.86532,230.083,419,233.165.860.100.000.10
9532,474.613,418,398.487.56532,474.693,418,398.407.550.110.010.11
10532,079.203,417,664.186.25532,079.233,417,664.066.250.120.000.12
11532,531.103,417,673.957.63532,530.993,417,673.987.640.110.010.11
12532,504.343,418,387.897.60532,504.343,418,387.897.600.000.000.00
13532,251.383,419,256.936.05532,251.363,419,256.796.060.140.010.14
14531,904.053,420,172.376.27531,903.943,420,172.316.270.130.000.13
15531,224.953,420,924.957.56531,224.963,420,924.797.570.160.010.16
16530,387.183,421,581.646.96530,387.203,421,581.516.970.130.010.13
17529,614.893,422,455.496.17529,614.773,422,455.436.180.130.010.13
18529,316.683,423,691.466.09529,316.593,423,691.466.090.090.000.09
19529,301.143,424,989.3513.86529,301.143,424,989.3513.860.000.000.00
20529,073.283,426,233.007.77529,073.233,426,233.097.770.100.000.10
Table A3. Checkpoints of tunnel case (m).
ID | Before Registration (X, Y, Z) | After Registration (X, Y, Z) | d_H | d_V | d
1554,018.043,464,962.70−5.59554,018.043,464,962.67−5.590.030.000.03
2553,953.543,464,949.68−8.90553,953.553,464,949.61−8.900.070.000.07
3553,871.043,464,933.35−13.28553,870.953,464,933.31−13.260.100.020.10
4553,805.493,464,922.17−16.53553,805.403,464,922.12−16.520.100.010.10
5553,722.653,464,911.07−20.63553,722.653,464,911.11−20.630.040.000.04
6553,663.133,464,905.16−23.48553,663.323,464,905.15−23.450.190.030.19
7553,612.163,464,901.45−25.88553,612.263,464,901.40−25.870.110.010.11
8553,551.503,464,898.54−28.75553,551.713,464,898.52−28.730.210.020.21
9553,491.843,464,897.35−31.51553,491.743,464,897.36−31.500.100.010.10
10553,342.613,464,901.97−38.04553,342.713,464,901.89−38.030.130.010.13
11553,273.863,464,907.56−40.30553,273.763,464,907.57−40.310.100.010.10
12553,169.333,464,920.51−40.67553,169.193,464,920.33−40.670.230.000.23
13553,080.473,464,935.61−38.84553,080.353,464,935.52−38.850.150.010.15
14553,016.103,464,948.96−37.44553,016.163,464,948.87−37.420.110.020.11
15552,928.973,464,970.59−35.29552,928.793,464,970.54−35.280.190.010.19
16552,833.603,464,998.49−31.51552,833.803,464,998.53−31.500.200.010.20
17552,686.293,465,051.83−24.51552,686.173,465,051.78−24.510.130.000.13
18552,514.303,465,128.75−15.75552,514.173,465,128.70−15.760.140.010.14
19552,454.703,465,156.64−12.71552,454.523,465,156.65−12.730.180.020.18
20552,290.843,465,229.46−4.36552,290.653,465,229.47−4.380.190.020.19
Table A4. Checkpoints of surface roads case (m).
ID | Before Registration (X, Y, Z) | After Registration (X, Y, Z) | d_H | d_V | d
1551,179.323,451,668.885.57551,179.283,451,668.875.570.040.000.04
2551,244.463,451,317.435.56551,244.433,451,317.425.570.030.010.03
3551,341.513,451,259.736.26551,341.463,451,259.656.260.090.000.09
4551,487.133,450,511.967.77551,487.123,450,512.067.790.100.020.10
5551,361.833,450,949.487.66551,361.843,450,949.407.680.080.020.08
6551,013.693,451,205.675.76551,013.703,451,205.645.750.030.010.03
7551,253.573,451,315.075.58551,253.543,451,315.065.590.030.010.03
8551,165.413,451,722.615.81551,165.393,451,722.445.830.170.020.17
9551,458.433,450,603.5910.52551,458.473,450,603.6010.520.040.000.04
10551,532.583,450,377.086.07551,532.623,450,377.116.060.050.010.05
11551,308.993,451,118.605.32551,308.933,451,118.585.320.060.000.06
12551,197.673,451,603.265.49551,197.763,451,603.205.470.110.020.11
13551,114.853,451,986.205.57551,114.903,451,986.135.580.090.010.09
14551,043.353,452,331.865.55551,043.403,452,331.595.550.270.000.27
15550,908.493,452,987.787.50550,908.523,452,987.577.500.210.000.21
16550,873.003,453,263.9112.26550,872.983,453,264.2812.270.370.010.37
17551,009.543,452,439.485.86551,009.493,452,439.875.890.390.030.39
18551,132.003,451,866.435.82551,132.063,451,866.355.850.100.030.10
19551,101.993,452,047.105.51551,102.073,452,046.835.520.280.010.28
20550,984.513,452,623.226.30550,984.553,452,623.006.300.220.000.22

References

  1. Puente, I.; González-Jorge, H.; Martínez-Sánchez, J.; Arias, P. Review of mobile mapping and surveying technologies. Measurement 2013, 46, 2127–2145. [Google Scholar] [CrossRef]
  2. Goad, C. The Ohio State University highway mapping project: The positioning component. In Proceedings of the 47th Annual Meeting of the Institute of Navigation, Williamsburg, VA, USA, 10–12 June 1991; pp. 117–120. [Google Scholar]
  3. Novak, K. The Ohio State University highway mapping system: The stereo vision system component. In Proceedings of the 47th Annual Meeting of The Institute of Navigation, Williamsburg, VA, USA, 10–12 June 1991; pp. 121–124. [Google Scholar]
  4. Huang, F.; Wen, C.; Luo, H.; Cheng, M.; Wang, C.; Li, J. Local Quality Assessment of Point Clouds for Indoor Mobile Mapping. Neurocomputing 2016, 196, 59–69. [Google Scholar] [CrossRef]
  5. Toth, C.; Grejner-Brzezinska, D. Redefining the Paradigm of Modern Mobile Mapping: An Automated High-Precision Road Centerline Mapping System. Photogramm. Eng. Remote Sens. 2004, 70, 685–694. [Google Scholar] [CrossRef]
  6. Bossche, J.; Peters, J.; Verwaeren, J.; Botteldooren, D.; Theunis, J.; Baets, B. Mobile monitoring for mapping spatial variation in urban air quality: Development and validation of a methodology based on an extensive dataset. Atmos. Environ. 2015, 105, 148–161. [Google Scholar] [CrossRef]
  7. Adams, M.; Kanaroglou, P. Mapping real-time air pollution health risk for environmental management: Combining mobile and stationary air pollution monitoring with neural network models. J. Environ. Manag. 2015, 168, 133–141. [Google Scholar] [CrossRef] [PubMed]
  8. Rottensteiner, F.; Trinder, J.; Clode, S.; Kubik, K. Building detection by fusion of airborne laser scanner data and multi-spectral images: Performance evaluation and sensitivity analysis. ISPRS J. Photogramm. Remote Sens. 2007, 62, 135–149. [Google Scholar] [CrossRef]
  9. Torabzadeh, H.; Morsdorf, F.; Schaepman, M. Fusion of imaging spectroscopy and airborne laser scanning data for characterization of forest ecosystems—A review. ISPRS J. Photogramm. Remote Sens. 2014, 97, 25–35. [Google Scholar] [CrossRef]
  10. Budzan, S.; Kasprzyk, J. Fusion of 3D laser scanner and depth images for obstacle recognition in mobile applications. Opt. Laser Eng. 2016, 77, 230–240. [Google Scholar] [CrossRef]
  11. Gerke, M.; Xiao, J. Fusion of airborne laserscanning point clouds and images for supervised and unsupervised scene classification. ISPRS J. Photogramm. Remote Sens. 2014, 87, 78–92. [Google Scholar] [CrossRef]
  12. Hamza, A.; Hafiz, R.; Khan, M.; Cho, Y.; Cha, J. Stabilization of panoramic videos from mobile multi-camera platforms. Image Vis. Comput. 2015, 37, 20–30. [Google Scholar] [CrossRef]
  13. Ji, S.; Shi, Y.; Shan, J.; Shao, X.; Shi, Z.; Yuan, X.; Yang, P.; Wu, W.; Tang, H.; Shibasaki, R. Particle filtering methods for georeferencing panoramic image sequence in complex urban scenes. ISPRS J. Photogramm. Remote Sens. 2015, 105, 1–12. [Google Scholar] [CrossRef]
  14. Shi, Y.; Ji, S.; Shao, X.; Yang, P.; Wu, W.; Shi, Z.; Shibasaki, R. Fusion of a panoramic camera and 2D laser scanner data for constrained bundle adjustment in GPS-denied environments. Image Vis. Comput. 2015, 40, 28–37. [Google Scholar] [CrossRef]
  15. Swart, A.; Broere, J.; Veltkamp, R.; Tan, R. Refined Non-rigid Registration of a Panoramic Image Sequence to a LiDAR Point Cloud. In Photogrammetric Image Analysis, 1st ed.; Uwe, S., Franz, R., Helmut, M., Boris, J., Matthias, B., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 73–84. [Google Scholar]
  16. Chen, C.; Zhuo, X. Registration of vehicle based panoramic image and LiDAR point cloud. Proc. SPIE Int. Soc. Opt. Eng. 2013, 8919, 401–410. [Google Scholar]
  17. Zeng, F.; Zhong, R. The algorithm to generate color point-cloud with the registration between panoramic image and laser point-cloud. In Proceedings of the 35th International Symposium on Remote Sensing of Environment, Beijing, China, 22–26 April 2014. [Google Scholar]
  18. Cui, T.; Ji, S.; Shan, J.; Gong, J.; Liu, K. Line-Based Registration of Panoramic Images and LiDAR Point Clouds for Mobile Mapping. Sensors 2016, 17, 70. [Google Scholar] [CrossRef] [PubMed]
  19. Tao, C. Semi-Automated Object Measurement Using Multiple-Image Matching from Mobile Mapping Image Sequences. Photogramm. Eng. Remote Sens. 2000, 66, 1477–1485. [Google Scholar]
  20. Havlena, M.; Torii, A.; Pajdla, T. Efficient Structure from Motion by Graph Optimization. In Computer Vision—ECCV 2010; Lecture Notes in Computer Science, Volume 6312; Daniilidis, K., Maragos, P., Paragios, N., Eds.; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  21. Besl, P.; McKay, N. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. 1992, 14, 239–256. [Google Scholar] [CrossRef]
  22. Chen, Y.; Medioni, G. Object modelling by registration of multiple range images. Image Vis. Comput. 1992, 10, 145–155. [Google Scholar] [CrossRef]
  23. Masuda, T.; Yokoya, N. A robust method for registration and segmentation of multiple range images. Comput. Vis. Image Underst. 1995, 61, 295–307. [Google Scholar] [CrossRef]
  24. Cheng, L.; Tong, L.; Li, M.; Liu, Y. Semi-Automatic Registration of Airborne and Terrestrial Laser Scanning Data Using Building Corner Matching with Boundaries as Reliability Check. Remote Sens. 2013, 5, 6260–6283. [Google Scholar] [CrossRef]
  25. Zhong, L.; Tong, L.; Chen, Y.; Wang, Y.; Li, M.; Cheng, L. An automatic technique for registering airborne and terrestrial LiDAR data, in Geoinformatics. In Proceedings of the IEEE 21st International Conference on Geoinformatics, Kaifeng, China, 20–22 June 2013. [Google Scholar]
  26. Wu, H.; Scaioni, M.; Li, H.; Li, N.; Lu, M.; Liu, C. Feature-constrained registration of building point clouds acquired by terrestrial and airborne laser scanners. J. Appl. Remote Sens. 2014, 8, 083587. [Google Scholar] [CrossRef]
  27. Alba, M.; Barazzetti, L.; Scaioni, M.; Remondino, F. Automatic Registration of Multiple Laser Scans using Panoramic RGB and Intensity Images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2011, 3812, 49–54. [Google Scholar] [CrossRef]
  28. Weinmann, M.; Hinz, S.; Jutzi, B. Fast and automatic image-based registration of TLS data. ISPRS J. Photogramm. Remote Sens. 2011, 66, 62–70. [Google Scholar] [CrossRef]
  29. Wu, H.; Fan, H. Registration of airborne LiDAR point clouds by matching the linear plane features of building roof facets. Remote Sens. 2016, 8, 447. [Google Scholar] [CrossRef]
  30. Crosilla, F.; Visintini, D.; Sepic, F. Reliable automatic classification and segmentation of laser point clouds by statistical analysis of surface curvature values. Appl. Geomat. 2009, 1, 17–30. [Google Scholar] [CrossRef]
  31. Cheng, L.; Wu, Y.; Tong, L.; Chen, Y.; Li, M. Hierarchical Registration Method for Integration of Airborne and Vehicle LiDAR Data. Remote Sens. 2015, 7, 13921–13944. [Google Scholar] [CrossRef]
  32. Bouroumand, M.; Studnicka, N. The Fusion of Laser Scanning and Close Range Photogrammetry in Bam. Laser-photogrammetric Mapping of Bam Citadel (ARG-E-BAM) Iran. In Proceedings of the ISPRS Commission V, Istanbul, Turkey, 12–23 July 2004. [Google Scholar]
  33. Reulke, R.; Wehr, A. Mobile panoramic mapping using CCD-line camera and laser scanner with integrated position and orientation system. In Proceedings of the ISPRS Workshop Group V/1, Dresden, Germany, 19–22 February 2004; pp. 165–183. [Google Scholar]
  34. Rönnholm, P. Registration Quality—Towards Integration of Laser Scanning and Photogrammetry; EuroSDR Official Publication: Frankfurt, Germany, 2011; pp. 9–57. [Google Scholar]
  35. Rönnholm, P.; Haggrén, H. Registration of laser scanning point clouds and aerial images using either artificial or natural tie features. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, I-3, 63–68. [Google Scholar] [CrossRef]
  36. Habib, A.; Ghanma, M.; Morgan, M.; Al-Ruzouq, R. Photogrammetric and LiDAR data registration using linear features. Photogramm. Eng. Remote Sens. 2005, 71, 699–707. [Google Scholar] [CrossRef]
  37. Yang, M.; Cao, Y.; Mcdonald, J. Fusion of camera images and laser scans for wide baseline 3D scene alignment in urban environments. ISPRS J. Photogramm. Remote Sens. 2011, 66, 1879–1887. [Google Scholar] [CrossRef]
  38. Wang, L.; Neumann, U. A robust approach for automatic registration of aerial images with untextured aerial LiDAR data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20–25 June 2009; pp. 2623–2630. [Google Scholar]
  39. Mitishita, E.; Habib, A.; Centeno, J.; Machado, A.; Lay, J.; Wong, C. Photogrammetric and lidar data integration using the centroid of a rectangular roof as a control point. Photogramm. Rec. 2008, 23, 19–35. [Google Scholar] [CrossRef]
  40. Liu, L.; Stamos, I. A systematic approach for 2D-image to 3D-range registration in urban environments. Comput. Vis. Image Underst. 2012, 116, 25–37. [Google Scholar] [CrossRef]
  41. Mastin, A.; Kepner, J.; Fisher, J. Automatic registration of LIDAR and optical images of urban scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20–25 June 2009; pp. 2639–2646. [Google Scholar]
  42. Parmehr, E.; Fraser, C.; Zhang, C.; Leach, J. Automatic registration of optical imagery with 3D LiDAR data using statistical similarity. ISPRS J. Photogramm. Remote Sens. 2014, 88, 28–40. [Google Scholar] [CrossRef]
  43. Lowe, D. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  44. Hough, P. A Method and Means for Recognizing Complex Patterns. U.S. Patent 3069654, 18 December 1962. [Google Scholar]
  45. Yang, B.; Chen, C. Automatic registration of UAV-borne sequent images and LiDAR data. ISPRS J. Photogramm. Remote Sens. 2015, 101, 262–274. [Google Scholar] [CrossRef]
  46. Kaasalainen, S.; Ahokas, E.; Hyyppa, J.; Suomalainen, J. Study of surface brightness from backscattered laser intensity: Calibration of laser data. IEEE Trans. Geosci. Remote Sens. Lett. 2005, 2, 255–259. [Google Scholar] [CrossRef]
  47. Kaasalainen, S.; Hyyppa, H.; Kukko, A.; Litkey, P.; Ahokas, E.; Hyyppa, J.; Lehner, H.; Jaakkola, A.; Suomalainen, J.; Akujarvi, A.; et al. Radiometric calibration of LIDAR intensity with commercially available reference targets. IEEE Trans. Geosci. Remote 2009, 47, 588–598. [Google Scholar] [CrossRef]
  48. Roncat, A.; Briese, C.; Jansa, J.; Pfeifer, N. Radiometrically calibrated features of full-waveform lidar point clouds based on statistical moments. IEEE Trans. Geosci. Remote Sens. Lett. 2014, 11, 549–553. [Google Scholar] [CrossRef]
  49. Wagner, W.; Ullrich, A.; Ducic, V.; Melzer, T.; Studnicka, N. Gaussian decomposition and calibration of a novel small-footprint full-waveform digitising airborne laser scanner. ISPRS J. Photogramm. Remote Sens. 2006, 60, 100–112. [Google Scholar] [CrossRef]
Figure 1. Main sensors installed in the mobile mapping system. (a) Spatial distribution of sensors; (b) rear-view laser scanner and GPS antenna; and (c) two side-view laser scanners and panoramic camera.
Figure 2. Point cloud distribution of the proposed MMS.
Figure 3. Flowchart of point cloud division and block extraction.
Figure 4. Principle of segmentation point calculation.
Figure 5. Relationship of the segmentation feature point and mobile mapping vehicle.
Figure 6. Block division for the point clouds captured by LS-1.
Figure 7. Block division for the point clouds captured by LS-2 and LS-3.
Figure 8. Spatial distribution of field of view of the different lenses of the panoramic camera.
Figure 9. Visualization of four different types of roads. (a) Overpass; (b) freeway; (c) surface roads; and (d) tunnel.
Figure 10. Selection of feature point for accuracy evaluation.
Figure 11. Spatial distribution of the check points’ accuracy in the tunnel case.
Table 1. Image search range of each laser scanner.
Laser Scanner | Lens of Panoramic Camera
LS-1 | Lens-0
LS-2 | Lens-1, Lens-2, Lens-5
LS-3 | Lens-3, Lens-4, Lens-5
Table 2. Environment and main parameters of data collection.
Case Type | Overpass | Freeway | Tunnel | Surface Roads
Environment complexity | Complex | Simple | Simple | Complex
GPS signal | Good | Good | None | Average
Length (km) | 30.8 | 27.5 | 2.0 | 11.6
Average speed (km/h) | 30 | 40 | 30 | 22
Time span (min) | 61.50 | 41.27 | 3.34 | 31.75
Table 3. Efficiency evaluation of the different laser scanner’s point clouds in each dataset.
Type | Laser Scanner | Total Points | Matched Points | Match Rate (%) | Computation Time (s)
Overpass | LS-1 | 120,210,560 | 120,099,095 | 99.91 | 3163
Overpass | LS-2 | 46,040,486 | 45,750,431 | 99.37 | 1972
Overpass | LS-3 | 48,009,700 | 47,625,623 | 99.20 | 2060
Freeway | LS-1 | 72,612,193 | 72,601,383 | 99.98 | 1910
Freeway | LS-2 | 26,218,503 | 26,207,957 | 99.95 | 1092
Freeway | LS-3 | 28,495,605 | 28,486,135 | 99.96 | 1187
Surface roads | LS-1 | 61,756,177 | 61,695,284 | 99.90 | 1625
Surface roads | LS-2 | 23,249,154 | 23,238,371 | 99.95 | 968
Surface roads | LS-3 | 26,393,008 | 26,381,647 | 99.95 | 1099
Tunnel | LS-1 | 8,582,302 | 8,511,419 | 99.79 | 199
Tunnel | LS-2 | 4,797,171 | 4,787,142 | 99.17 | 225
Tunnel | LS-3 | 6,002,796 | 5,971,450 | 99.47 | 250
Table 4. Average computation efficiency evaluation.
Laser Scanner | Total Points | Matched Points | Total Computation Time (s) | Average Computation Efficiency (points/s)
LS-1 | 263,161,232 | 262,907,161 | 6897 | 38,155
LS-2 | 101,305,314 | 99,983,901 | 4257 | 23,870
LS-3 | 108,901,109 | 108,464,855 | 4596 | 24,006
Table 5. Average geometric accuracy evaluation results (m).
Index | Case Area | d_H Min. | d_H Max. | d_H Avg. | d_V Min. | d_V Max. | d_V Avg. | d Min. | d Max. | d Avg.
1 | Overpass | 0.000 | 0.340 | 0.178 | 0.000 | 0.030 | 0.009 | 0.000 | 0.352 | 0.179
2 | Intersection | 0.031 | 0.393 | 0.139 | 0.000 | 0.020 | 0.011 | 0.033 | 0.394 | 0.140
3 | Tunnel | 0.030 | 0.228 | 0.135 | 0.000 | 0.020 | 0.011 | 0.030 | 0.228 | 0.135
4 | Freeway | 0.000 | 0.160 | 0.079 | 0.000 | 0.010 | 0.020 | 0.000 | 0.161 | 0.112
Table 6. Time system used in the MMS.
Sensor | Time System
GPS | GPS time
IMU | GPS time
Panoramic camera | GPS time
Laser scanner | Windows time
