Article

Orientation of Airborne Laser Scanning Point Clouds with Multi-View, Multi-Scale Image Blocks

Petri Rönnholm, Hannu Hyyppä, Juha Hyyppä and Henrik Haggrén
1 Institute of Photogrammetry and Remote Sensing, Helsinki University of Technology, P.O. Box 1200, FI-02015 TKK, Finland
2 Department of Remote Sensing and Photogrammetry, Finnish Geodetic Institute, P.O. Box 15, 02431 Masala, Finland
* Author to whom correspondence should be addressed.
Sensors 2009, 9(8), 6008-6027; https://doi.org/10.3390/s90806008
Submission received: 6 May 2009 / Revised: 19 June 2009 / Accepted: 15 July 2009 / Published: 29 July 2009
(This article belongs to the Special Issue LiDAR for 3D City Modeling)

Abstract
Comprehensive 3D modeling of our environment requires integration of terrestrial and airborne data, which is collected, preferably, using laser scanning and photogrammetric methods. However, integration of these multi-source data requires accurate relative orientations. In this article, two methods for solving relative orientation problems are presented. The first method registers an airborne laser point cloud with a 3D model by minimizing the distances between them. The 3D model was derived from photogrammetric measurements and terrestrial laser scanning points. The first method was used as a reference and for validation. Once registration in object space is completed, the relative orientation between the images and the laser point cloud is known. The second method utilizes an interactive orientation method between a multi-scale image block and a laser point cloud. The multi-scale image block includes both aerial and terrestrial images. Experiments with the multi-scale image block revealed that the accuracy of the relative orientation increased when more images were included in the block. The orientations from the first and second methods were compared. The comparison showed that rotations were the most difficult parameters to detect accurately with the interactive method. Because the interactive method forces laser scanning data to fit with the images, inaccurate rotations cause corresponding shifts in image positions. However, in a test case in which the orientation differences included only shifts, the interactive method could repeatedly solve the relative orientation of an aerial image and airborne laser scanning data to within a couple of centimeters.


1. Introduction

Accurately measured 3D information from our environment has become more and more important in our daily life. For example, virtual environments, environmental and urban planning, decision making processes, and modern navigation systems require 3D data that correspond with reality. Laser scanning has become popular due to its fast 3D point cloud acquisition, improvements in post processing software and high usability of the data generated. Photogrammetric techniques have been used for a long time, but development of digital cameras, more efficient automatic and semiautomatic 3D measuring methods and good internal geometry have been key factors that have kept image-based techniques as suitable alternatives for 3D modeling. Both methods have advantages and disadvantages, and in many ways they complement each other.
Complete 3D modeling requires both terrestrial and airborne data [1]. In addition, both perspectives should preferably include both laser scanning data and images. The viewing direction of a data acquisition method defines which parts of a building can be modeled reliably. A terrestrial point of view does not allow seeing all roof shapes, and a nadir perspective prevents accurate modeling of vertical structures. Thus, different perspectives offer additional information about the behavior and quality of data. Therefore, terrestrial images are also excellent for assuring the quality of airborne laser scanning data [2–5].
Airborne laser scanning data is widely used, e.g., for the creation of digital terrain models (DTM) (e.g., [6–11]) and for extracting buildings (e.g., [12–15]), other infrastructure such as bridges (e.g., [16]), as well as urban trees (e.g., [17]). However, the point density of airborne laser scanning is typically not sufficient for the most accurate modeling tasks, such as exact positioning of building outlines [18]. Terrestrial laser scanning provides significantly denser point clouds and good accuracy [19], but suffers from stationary data acquisition, which makes the method slow and costly when large areas are modeled. Vehicle-based mobile laser scanning provides faster data acquisition and relatively dense point clouds, but suffers from satellite visibility problems in urban areas, resulting in systematic errors of the order of 0.1–3 meters in the data [20,21].
Integration of data obtained by different measuring methods enables numerous applications [4], but requires accurate relative orientations. The more detailed the 3D model needed, the more accurate the relative orientation required. Direct sensor orientation with GPS and inertial equipment is essential for laser scanning data acquisition and can also be used for determining the exterior orientations of images. Accuracies as high as 5–10 cm in position, and better than 0.006° for ω and φ and 0.01° for κ in rotations, have been reported [22–25]. However, studies with photogrammetric frame sensors [24,26,27] have shown that insufficient satellite visibility, an incomplete relative orientation between the imaging sensor and the GPS/IMU components, inaccuracies of the imaging model, and transformations between various coordinate systems can reduce this accuracy. In general, direct sensor orientation alone does not yet provide orientations fine enough for the most accurate integration of data from multiple sources [23].
Images are typically oriented in large image blocks by block adjustment. The main alternative for solving exterior orientations of aerial images is aerial triangulation utilizing image points, 3D ground features and possibly direct georeferencing observations of orientations [28]. Aerial triangulation requires manual identification of ground control points from the images, but the advantage of the method is that inaccuracies of interior orientations are compensated with self-calibration during the adjustment [29]. Multi-view image blocks, which consist of aerial and terrestrial images, are usually utilized when the object to be modeled, such as a historical building, is of a convenient size. In particular, the development of unmanned aerial vehicle (UAV) systems has recently increased the use of such multi-view image blocks (e.g., [30]). If the images within a block differ much in scale, the resolution of the images may reduce the interpretation accuracy, which should be taken into account. The use of multi-scale image blocks is discussed in more detail, e.g., in [31].
If the internal geometry of a terrestrial laser scanner has been calibrated and corrected [32], 3D laser scanning point clouds from multiple scans can be registered, e.g., using the iterative closest point (ICP) method or by extracting and matching tie features. The term registration means finding the geometric transformation that aligns corresponding locations in the two 3D data sets [33]. ICP methods are very popular for registering two 3D data sets in object space. An ICP algorithm minimizes point-to-point [34] or point-to-surface distances [35]. The method has several variants, which have been discussed, e.g., in [36,37]. Alternatively, several methods to register 2.5D and 3D surfaces have been reported [37]. Currently, many laser scanning software packages include ICP-based registration algorithms. Registered laser point clouds can reliably be transformed into a ground coordinate system using signalized ground control points.
Airborne laser scanning missions usually include several laser scanning strips. By applying a strip adjustment, the internal quality of laser scanning data can be improved [38]. This step is necessary in order to ensure homogeneous quality within all laser scanning strips. When airborne laser scanning data is transformed into a ground coordinate system, ground control points should be signalized with large targets, since the point density is typically much lower than with terrestrial laser scanning. Large circular targets have been suggested for ground control points, e.g., in [39] and [40], but the horizontal accuracy depends highly on point density [39]. However, the correct height of a laser scanning point cloud is much easier to solve than horizontal shifts [10]. As an alternative to circular targets, pavement markings have been used as control points (e.g., [41,42]). The use of linear features has been studied, e.g., in [13,43,44], in which breaklines were extracted by finding the intersections of two planes fitted to laser scanning data; corresponding breaklines were extracted from the images for orientation. In [45], a triangulation of multi-sensor data using straight lines and planar patches as tie features was presented. Also, the centroids of rectangular roofs have been suggested as tie features for the orientation of multi-sensor data [46]. The concept of registering airborne laser scanning data and images through photogrammetrically-derived 3D surfaces measured from stereo images has been reported, e.g., in [47,48].
One difficulty with multi-sensor orientation is that finding accurate corresponding features can be a challenging task if, e.g., the resolution, perspective or nature of the data sets differ much. As an alternative to numerical methods, paper [2] describes how a single terrestrial image and airborne laser scanning data can be relatively oriented using an interactive orientation method. During interactive orientation, an operator visually adjusts a laser point cloud to the images by changing image orientation parameters and using anchor points. Because this is a manual method, an operator is able to interpret laser hits coming from both large and small objects. In many numeric methods, small details are usually considered as outliers and therefore filtered out even if they include significant information for orientation. Large objects, however, are especially suitable when laser scanning data is not dense, because the shapes of point clouds can be used as interpretable tie features. In this article, the interactive orientation method is extended to use an image block instead of a single image. The advantage of using an image block is that multiple viewing geometry gives more information about the orientation than a single viewing direction.
The objectives of the article are: 1) to demonstrate a method for the interactive orientation of multi-source and multi-scale data, 2) to compare the orientation results of the presented interactive method with reference image orientations, 3) to bring a laser point cloud and a reference image block into the same coordinate system using the ICP method, and 4) to integrate multi-source data.

2. Materials

The test area was located on the campus of the Helsinki University of Technology (TKK) in Otaniemi. The test data included terrestrial images, a panoramic image, an aerial image, airborne laser scanning data, terrestrial laser scanning data, and total station measurements. Terrestrial images were taken with an Olympus E-10 and a Nikon D200 with image sizes of 2,240 × 1,680 and 3,872 × 2,592 pixels, respectively. A panoramic image was created from a set of concentric images taken with an Olympus Camedia C-1400 L. A total of 7 images were stitched together into a rectilinear projection, resulting in an image size of 10,729 × 5,558 pixels. In order to ensure concentric image acquisition, a special panoramic mount [49] was used (Figure 1). In addition, 35 other terrestrial images were acquired with the Nikon D200.
A low-altitude aerial image was taken from the altitude of 200 m with a Hasselblad Landscape camera. The size of the sensor was 3,056 × 2,032 pixels and therefore the footprint of a single pixel on the ground was 4–4.5 cm, depending on the height of the object. The interior orientations of all cameras were known.
Airborne laser scanning data was acquired with a TopEye MK I helicopter-borne laser scanner. The flying altitude was 200 m, resulting in an average point density of 2–3 points/m². The scan angle of the TopEye MK I laser scanner was ±20°, the wavelength 1.064 μm and the pulse repetition rate 7 kHz. From several laser scanning strips, only two parallel, partially overlapping strips were used. The strip adjustment was calculated using TerraMatch software, yielding a total RMS (dz) of 2.4 cm.
Terrestrial laser scanning data was acquired with a Faro LS 880 HE80 instrument. The scanner is able to achieve a measurement rate of up to 120,000 pulses/s. In our experiment, one quarter of the maximum scanning resolution was used. Due to the 360° horizontal and 320° vertical coverage per scan position, the scanner is able to scan almost a complete hemisphere. The wavelength is 785 nm, and the linearity error has been reported to be 3 mm at a distance of 25 m with a target having 84% reflectivity (see www.faro.com). The location of the Faro scanner was selected so that its data complemented those vertical surfaces that had only very few hits from the airborne laser scanning.
In order to bring all data sets into the local coordinate system, a total of 44 targets were measured using a Leica TCA 2003 total station. Leica’s 2 × 2 cm targets were modified to make them suitable also for photogrammetric measurements by framing the targets with black self-adhesive sticker paper. In addition, 21 non-reflective photogrammetric targets were placed on the scene and used as tie points in order to assist with the orientation of the photogrammetric image block.

3. Methods

3.1. Workflow

The reference orientations of images were solved in a bundle block adjustment of a multi-view, multi-scale image block. A relative orientation between the reference image block and airborne laser scanning data was calculated using the ICP method between a photogrammetrically-derived 3D model and a laser scanning point cloud. After the ICP registration, laser scanning data and the reference image block were in the same coordinate system. The result of the relative orientation was verified visually by superimposing laser scanning data onto aerial, terrestrial close-range and panoramic images.
In order to test the interactive orientation, an aerial image, a panoramic image and a close-range image were selected from the reference image block. Interior orientations were known from camera calibrations and the relative orientations of all images from the bundle block adjustment. An initial exterior orientation of the selected block of images was chosen at random in such a way that the relative orientations of the images were not changed. The selected images were oriented with the airborne laser scanning point cloud using the interactive method. Because the airborne laser scanning data was in the same coordinate system as the reference image block, the resulting orientations of the interactive method were comparable with the reference orientations of the original reference image block. The workflow is illustrated in Figure 2.
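Such a block-level perturbation preserves all relative orientations as long as one common rigid transform is applied to every camera of the block. The following minimal Python sketch illustrates this idea under that assumption; the function and variable names are illustrative and it is not the software used in the study.

```python
import numpy as np

def perturb_block(poses, dR, dt):
    """Apply one common rigid transform (dR, dt) to every camera of a block.

    poses : list of (R, P) pairs, where R maps camera coordinates to ground
            coordinates and P is the projection centre in ground coordinates.
    Because the same dR and dt are applied to every camera, the relative
    rotations and bases between the cameras remain unchanged.
    """
    return [(dR @ R, dR @ P + dt) for R, P in poses]

# Example: a small random rotation about the Z axis and a shift of a few metres
# could serve as an arbitrary initial exterior orientation of the whole block.
```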

3.2. Reference Orientation for Laser Scanning Data

In order to obtain a reference orientation for the laser scanning data, an airborne laser scanning point cloud was registered with a photogrammetrically-derived 3D model, which was selected to be the reference surface. The photogrammetric 3D model was measured using a multi-scale image block, which included both aerial and terrestrial images. The orientation of the photogrammetric 3D model was known through the ground control points. Figure 3 illustrates how the relative orientation between images and a laser scanning point cloud is known if an image-derived 3D model is registered with a laser point cloud in object space.
In our experiments, the ICP algorithm in Geomagic Studio software was used. This algorithm minimizes the distance between a point cloud and a surface. The reference surface was created mainly from photogrammetrically-derived point clouds. However, in order to add more features having different orientations, also terrestrial laser scanning data was used. Before extracting surfaces from the terrestrial laser scanning data, it was registered with the photogrammetric reference surface using the ICP method. This task was significantly easier than registering airborne laser scanning data with the photogrammetric 3D model. The main reason for that was the high point density of the terrestrial laser scanning data.
In practice, any 3D data sets that are used for the ICP registration should correspond to each other as well as possible, because the method can be sensitive to outliers [50]. In addition, a good initial registration is needed in order to prevent the iteration from stopping at a local minimum. Registration areas should contain enough distinguishable features with slopes in several directions in order to ensure correct registration. Typically, outliers should be filtered out and only the most reliable data sets should be used.
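For illustration, a minimal point-to-point ICP loop is sketched below in Python (NumPy/SciPy). It is only a generic sketch: the registration in this study was computed with the point-to-surface ICP of Geomagic Studio, and all names in the snippet are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp_point_to_point(source, target, max_iter=50, tol=1e-6):
    """Align 'source' (N x 3) to 'target' (M x 3); assumes a good initial pose."""
    tree = cKDTree(target)
    src = source.copy()
    prev_err = np.inf
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(max_iter):
        dist, idx = tree.query(src)       # nearest-neighbour correspondences
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:     # stop when the improvement stalls
            break
        prev_err = err
    return R_total, t_total, err
```

Robust variants typically add a correspondence distance threshold for outlier rejection and use a point-to-surface error metric, which is closer to the behavior of the software used here.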
Orientations of images were solved in a bundle block adjustment of 37 images using 848 natural tie points and 35 signalized points. The multi-scale image block consisted of close-range images, a panoramic image and a low-altitude aerial image. The reference orientation of the image block was solved using the iWitness software [51], resulting in an overall accuracy of 1.3 cm. The estimated accuracy of image referencing was 0.72 pixels. iWitness recalculates a complete block adjustment each time a new observation is added. Therefore, the accuracy of the previously calculated 3D model may decrease if less accurate images, such as aerial images, are included in the adjustment. To achieve as accurate a 3D model as possible, the first image block included only terrestrial close-range images. 3D model points measured from the close-range images were re-imported into the software as ground control points before the aerial image and the panoramic image were included. For the ICP registration, only tie points that belong to enclosed features were selected, and all unconnected points were manually discarded. Unconnected points were, however, used as tie points when the aerial image was oriented into the image block.
As Figure 4 illustrates, only selected points were included in the ICP registration. After the registration, the software reported an average deviation of 2.5 cm. The relative orientation was examined by superimposing registered laser scanning data onto close-range images. Visual inspections did not reveal any significant errors in the relative orientation.

3.3. The Interactive Orientation of a Laser Point Cloud and a Multi-view Image Block

In this research, the interactive orientation method [2] was extended to handle more than one image during the orientation. The interactive orientation method includes tools for changing exterior orientation parameters as well as for setting and using anchor points. For orientations, a complete laser point cloud or a selected subset of laser points can be used as a tie feature. The usability of the method is at its best with airborne laser scanning data, whose coarse sampling of the scene usually makes it difficult to extract accurate tie features. Figure 5 illustrates a typical orientation workflow in the case of a single panoramic image.
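The visual feedback of the interactive method relies on backprojecting the laser points into the images with the current exterior orientation. The sketch below shows how such a backprojection could look for a central-perspective frame image; it assumes the standard collinearity equations and one common rotation convention, uses illustrative names only, and is not the implementation of [2]. A panoramic image would require the corresponding panoramic projection model instead.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation from ground to camera coordinates (one common omega-phi-kappa convention)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi),   np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def backproject(points, X0, omega, phi, kappa, f):
    """Project ground points (N x 3) into image coordinates with the collinearity equations."""
    R = rotation_matrix(omega, phi, kappa)
    cam = (points - X0) @ R.T            # ground -> camera coordinates
    in_front = cam[:, 2] < 0             # in this convention the camera looks along -z
    x = -f * cam[:, 0] / cam[:, 2]
    y = -f * cam[:, 1] / cam[:, 2]
    return np.column_stack([x, y]), in_front
```

Superimposing the returned image coordinates onto the photograph, colored for example by height, produces the kind of visualization shown in Figure 7.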
Interactive orientation of a single image can be extended to an image block. In that case, if the orientation parameters of any individual image are changed, the orientations of all other images of the image block are calculated and updated. When the location of the active camera is shifted, all other cameras move by the same shift. The case of rotations is more complex, because a rotation of the active image causes both shifts and rotations to all other images. Because the 3D rotation matrices describe the relationship between a camera and the ground coordinate system, the changed 3D rotation matrices have to be calculated through the ground coordinate system.
For clarity, we present here a case of two images, although the calculation of all other images of the block is similar. In this example, Camera 1 is an active image, whose orientation parameters are changed interactively and the orientation of Camera 2 is calculated. At first, we define that 3D rotation matrices R1 and R2 realize transformations from the cameras to the ground coordinate system (Figure 6).
In addition, 3D rotation matrix U is the relative rotation between the two camera coordinate systems. Therefore, the equation describing a rotation between the camera coordinate systems is:
U_relative = R_2_original R_1_original^T    (1)
If the camera base (b) in the ground coordinate system is the difference between the projection centers P_1_0 and P_2_0:
b_ground = P_2_0_original − P_1_0_original    (2)
then the camera base converted from the ground coordinate system into the camera coordinate system of Camera 1 is:
b_camera1 = R_1_original^T b_ground    (3)
The relative 3D rotation matrix U_relative and the camera base b_camera1 remain the same no matter how the active image is rotated or shifted. If we rotate the active camera, we get a new 3D rotation matrix, R_1_new. The coordinates of the projection center of Camera 2 in the ground coordinate system can be calculated from:
P_2_0_new = R_1_new b_camera1 + P_1_0_original    (4)
Using the new 3D rotation matrix of Camera 1 and the original relative 3D rotation matrix U, we can solve a new 3D rotation matrix for Camera 2:
R_2_new = U_relative R_1_new    (5)
As a special case, if images fulfill the conditions of a normal case of stereo photogrammetry, 3D rotation matrices of Cameras 1 and 2 are identical and only a new location for the projection center of Camera 2 needs to be calculated (Equation 4).
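The propagation in Equations 1–5 can be written compactly. The Python sketch below is illustrative only (the names are not from the authors' software) and assumes rotation matrices that transform camera coordinates into ground coordinates, as defined above.

```python
import numpy as np

def relative_pose(R1, P1, R2, P2):
    """Precompute U (Eq. 1) and the base vector in camera 1 coordinates (Eqs. 2-3)."""
    U = R2 @ R1.T                       # Eq. 1
    b_ground = P2 - P1                  # Eq. 2
    b_cam1 = R1.T @ b_ground            # Eq. 3
    return U, b_cam1

def update_camera2(R1_new, P1_new, U, b_cam1):
    """New pose of Camera 2 after the master camera has been rotated or shifted.

    For a pure rotation of the master camera, P1_new equals the original
    projection centre, which reduces the first line to Eq. 4 as written above.
    """
    P2_new = R1_new @ b_cam1 + P1_new   # Eq. 4
    R2_new = U @ R1_new                 # Eq. 5
    return R2_new, P2_new
```

After each update the laser points can be backprojected onto all images with the new orientations, which gives the visual feedback used during the interactive orientation.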
The strategy for the interactive orientation of the image block differs depending on whether only shifts or both rotations and shifts need to be solved. In the case that the orientation differences include only shifts, all images may be employed as the master image in turns. The strategy can be that the first master image is registered along the x and y axes of the image coordinate system as well as possible. Then another image is selected as the master image and an orientation similar to that of the first master image is completed. Any image can be the master image, and the selection of the master image depends on optimum visibility. Finally, the orientation iterates to the final solution when it cannot be improved any more from any image.
Most commonly, both rotations and shifts need to be solved. If the initial orientation is not close to the correct one, interactive orientation begins with the strategy presented in Figure 5. When the orientation is good enough and the laser data is visible in all images, it is recommended to use one image as the master image and to use the other images only for monitoring. After changing the orientation parameters of the master image, superimposing the laser scanning data onto the other images reveals whether the changes had a positive influence on the relative orientation. In our experience, use of an anchor point at a clearly detectable feature usually assists with the orientation. If an anchor point is set, shifts along the X and Y axes of the camera coordinate system automatically change the rotations. Again, by monitoring the other images the correct directions of the corrections can be detected. The optimal case would be when the viewing directions of the images are perpendicular to each other. However, it is not usually possible to arrange such image acquisition. If the viewing directions are not perpendicular to each other, the shifts and rotations of the master image cause changes to all rotations and shifts along the axes of the camera coordinate systems of the other images. In other words, it is not always easy to predict how orientation changes of the master image affect the orientations of the other images. Therefore, it is recommended not to try to fit features exactly to their correct locations while correcting one rotation or shift direction, but only to move them closer to a better solution. By iterating, the solution becomes closer and closer to the correct one until superimposing the laser data onto the images reveals no more orientation differences between the images and the laser scanning data.

3.4. Applying Transformations to Laser Point Clouds

In many cases, transferring photogrammetric data accurately into the ground coordinate system is easier than transferring laser scanning data. For photogrammetric data there exist standard methods, such as bundle block adjustment, for finding the correct orientations. The interactive orientation method, however, is more flexible if the image orientation parameters are manipulated instead of the laser point cloud orientation. After interactive orientation, an inverse transformation can be applied, in which the camera orientation is fixed to the original one and the laser scanning point cloud is transformed according to the results of the interactive orientation. First, the laser scanning points (x = [X Y Z]^T) are shifted in such a way that the origin of the coordinate system is at the projection center of the camera after the interactive orientation (P_0_after):
x_shifted = x_original − P_0_after    (6)
Next, the laser point cloud is rotated around the projection center using the relative 3D rotation matrix U (Equation 1), in which R1 is the 3D rotation matrix after the interactive orientation and R2 is the original 3D rotation matrix. At the same time, possible shifts:
t = [X_0 Y_0 Z_0]_original^T − [X_0 Y_0 Z_0]_after^T = [dX_0 dY_0 dZ_0]^T    (7)
between locations of projection centers before and after orientations can be corrected:
x_transformed = U x_shifted + t    (8)
Finally, the laser point cloud is shifted to the ground coordinate system:
x_final = x_transformed + P_0_after    (9)
As a result, the image orientation is the original one and the laser point cloud is transformed into the same coordinate system according to the results of the interactive orientation between the image and the laser point cloud. An example of inverse transformation is presented in Section 4.3.
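As a compact summary of Equations 6–9, the sketch below applies the inverse transformation to an N × 3 array of laser points. It is an illustrative Python/NumPy sketch with assumed names, not the authors' implementation.

```python
import numpy as np

def inverse_transform(points, R_after, P0_after, R_original, P0_original):
    """Move the laser points so that the image keeps its original orientation (Eqs. 6-9).

    R_after, P0_after       : camera rotation and projection centre after interactive orientation
    R_original, P0_original : original camera orientation that should be restored
    """
    U = R_original @ R_after.T           # Eq. 1 with R1 = R_after, R2 = R_original
    t = P0_original - P0_after           # Eq. 7: shift between the projection centres
    shifted = points - P0_after          # Eq. 6: origin moved to the projection centre
    transformed = shifted @ U.T + t      # Eq. 8: rotate about the centre and apply the shift
    return transformed + P0_after        # Eq. 9: back to the ground coordinate system
```

Superimposing the transformed points onto the image with its original orientation should then show the same fit that was reached interactively, as in the experiment of Section 4.3.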

4. Results and Discussion

4.1. The Accuracy of an Interactive Orientation of Multi-view Image Blocks and Laser Scanning Data

Accuracies were examined by comparing the interactive orientations with the reference orientations. At first, a panoramic image and an aerial image were selected from a larger image block. From the block adjustment, the relative orientations of the images were known. The interactive orientation of the image block and the laser scanning data was completed eight times, starting each time from an arbitrarily chosen initial orientation. The results from the interactive orientation were compared with the reference orientations (Table 1). The maximum shift of the eight individual orientations was 30.5 cm, indicating that the orientation had not been perfect in all cases. One reason was that the number of features and the density of the laser scanning data had not enabled accurate detection of rotations. If the rotations are not solved correctly, errors are also visible in the camera locations. The location of the panoramic image was detected more accurately than the location of the aerial image because of the proximity of the former to the laser point cloud and its more illustrative viewing perspective. In the case of the aerial image, the interpretation of the airborne laser scanning point cloud is not as clear as from the side view because of the acquisition perspective. Both the interpretation difficulties and the resolution of the aerial image reduce the final orientation accuracy.
The second experiment included a panoramic image, an aerial image and a terrestrial close-range image. In this case, the close-range image was taken in such a way that the viewing direction was almost perpendicular to the viewing direction of the panoramic image.
The previous example revealed that the panoramic image had more shift along the viewing direction (close to the direction of the Y axis, see Figure 7) than in the other directions. It was predicted that a close-range image with a viewing direction perpendicular to that of the panoramic image could reduce the uncertainty in this particular direction. As the results illustrate (Table 2), the practical examples were in agreement with this prediction.
The location of the panoramic image has a maximum shift of 1.3 cm in the direction of the Y axis of the ground coordinate system when compared with the reference orientation. A disadvantage of the close-range image was that it had only a few good features for orientation. Therefore, the maximum error in the direction of the X axis of the ground coordinate system was still almost 7 cm. In addition, some errors in rotations remained. However, when using three images instead of two during the interactive orientation, the overall accuracy of the orientations improved. In the case of the aerial image, the average improvements in maximum errors were 10.3 cm in location and 0.098 degrees in rotations.
In many cases, laser scanning data can be leveled reliably using control patches [52], targets, road markings, linear features or large open planar areas, such as parking areas or football fields. As the previous examples have shown, accurately finding rotation differences between images and laser scanning data is a challenging task for interactive orientation. One reason is that the observed area is typically relatively small if terrestrial images are used. In addition, the footprint of the images does not necessarily cover a sufficient number of clear corresponding features to ensure reliable determination of all three rotations. In the last experiment of the interactive orientation, the same three images as in the previous example were used. However, the orientation differences with the laser scanning data included only shifts and no rotations. As can be seen from Table 3, the interactive orientation produced results very similar to those of the reference orientation.
After an interactive orientation, errors in rotations are automatically compensated by shifts of the projection center. As a result, even if the image orientations slightly differ from the reference orientation, the effect on the ground is typically much smaller. Correspondingly, the effect on the image plane is typically very small. Therefore, the laser scanning data fits locally with the images. In order to better detect small errors in shifts and rotations, a larger area should be examined or more images should be included in the interactive orientation. One solution could be to create 2–3 separate image blocks from different sides of an aerial image and register them with the laser scanning data.

4.2. Integrated Multi-source Data

After the orientations, the airborne laser scanning data, part of one terrestrial laser scan and photogrammetrically-derived 3D points were integrated. In addition, the point clouds were colorized using both aerial and terrestrial panoramic images (Figure 8). The terrestrial laser scanning data is easy to differentiate from the airborne laser data because of its superior point density.

4.3. Confirmation of the Inverse Transformation from Image Orientations to Laser Scanning Point Cloud Transformations

Inverse transformation (Section 3.4) is the final step of the workflow if interactive orientation is applied and laser data is required to be introduced into the original coordinate system of an image block. An experiment was carried out in which an image included calibration targets and the point cloud included target observations from an automatic camera calibration in iWitness software.
Initially, the test image and the 3D point cloud were in different coordinate systems. Because the interpretation of the targets in the image was simple, only a single image was used when the interactive orientation between the 3D points and the image was carried out. After the interactive orientation, the image was oriented in the same coordinate system as the 3D points. According to the relative orientation solved with the interactive orientation, the 3D points were then transformed into the coordinate system of the initial camera pose. Figure 9 illustrates, on the one hand, the misfit between the 3D points and the targets in the image when the initial orientations were used for superimposing the points onto the image (small red dots). On the other hand, the blue dots in the figure show how the 3D points are again well registered with the image after the interactive orientation and the inverse transformation.

5. Conclusions

Integration of multi-source data requires accurate relative orientations. In this article, an interactive method for registering multi-view, multi-scale image blocks with laser scanning data was presented. During an interactive orientation of an image block, the image orientation parameters of any image from the block can be changed. When an orientation of one image is changed, new orientations are calculated for other images according to the original relative orientation. Laser scanning data is superimposed onto all images using new orientations, which reveals visually the quality of the relative orientation.
Results from the interactive orientation were compared with the orientations of a reference image block, which was calculated using a bundle block adjustment. The laser scanning point cloud, which was fixed during the interactive orientation, was pre-registered with a 3D model that was measured photogrammetrically using the reference image block. In other words, the reference image block and the laser scanning data were in the same coordinate system. Therefore, the image orientations after the interactive orientation were comparable with the reference orientations.
The first example included an aerial image and a terrestrial panoramic image. The interactive orientation was done eight times, resulting in a maximum shift of 30.5 cm and a maximum rotation difference of 0.194 degrees. In the second example, a terrestrial close-range image was added to the image block to be used for the interactive orientation. As a result, the differences with the reference orientation decreased significantly. At this time, the maximum shift was 20.9 cm and the maximum rotation was 0.105 degrees. The last example included the same images as in the previous experiment. However, the initial image block orientation had no rotation differences compared to the laser data orientation, only shifts. Because the interpretation of laser data became easier when no rotations needed to be solved, the maximum shift was only 2.6 cm.
Typically, images are easier to orient with respect to the ground coordinate system than airborne laser scanning data. Interactive orientation, however, is more flexible if all orientation changes are applied to the image orientation parameters. The equations for transferring laser data to the original coordinate system of an image block according to the relative orientation results from the interactive orientation were presented.
Because the accuracy of an interactive orientation is highly dependent on the image block geometry, on the number of images and on the number of distinguishable features within the image footprints, extensive numerical results cannot be generalized. The area for an interactive orientation should be selected carefully in order to capture many clear features in the images. In areas where the number of distinguishable features is low, the interpretation skills of the operator become significant. The results, however, verify that including more images in the interactive orientation increases accuracy. In addition, if the laser scanning data is already leveled, an interactive orientation can provide a very accurate orientation.
To conclude, interactive orientation improves the quality of multi-source data integration and thus the quality of the final products. However, the cost-effectiveness of the approach in practical applications has to be studied separately. In applications where, e.g., sparse laser scanning point clouds are densified with photogrammetrically-generated point clouds, accurate registration can dominate the usability of the data, and thus the highest-quality approaches are expected to be needed.

Acknowledgments

The financial support of the Ministry of the Environment and the Academy of Finland for the project “The use of ICT 3D measurement techniques for high-quality construction”, and of the Academy of Finland for the projects “Economy and technology of a global peer produced 3D geographical information system in built environment” and “Processing and use of 3D/4D information for civil and environmental engineering”, is gratefully acknowledged. In addition, we would like to thank Keijo Inkilä and Panu Salo at the Helsinki University of Technology for their valuable contribution.

References and Notes

  1. Böhm, J.; Haala, N. Efficient Integration of Aerial and Terrestrial Laser Data for Virtual City Modeling Using Lasermaps. Int. Arch. Photogramm. Remote Sens 2005, 36, 192–197. [Google Scholar]
  2. Rönnholm, P.; Hyyppä, H.; Pöntinen, P.; Haggrén, H.; Hyyppä, J. A Method for Interactive Orientation of Digital Images Using Backprojection of 3D Data. Photogramm. J. Finland 2003, 18, 58–69. [Google Scholar]
  3. Rönnholm, P.; Hyyppä, J.; Hyyppä, H.; Haggrén, H.; Yu, X.; Kaartinen, H. Calibration of Laser-Derived Tree Height Estimates by Means of Photogrammetric Techniques. Scand. J. Forest Res 2004, 19, 524–528. [Google Scholar]
  4. Rönnholm, P.; Honkavaara, E.; Litkey, P.; Hyyppä, H.; Hyyppä, J. Integration of Laser Scanning and Photogrammetry. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci 2007, 36, 355–362. [Google Scholar]
  5. Kajuutti, K.; Jokinen, O.; Geist, T.; Pitkänen, T. Terrestrial Photography for Verification of Airborne Laser Scanner Data on Hintereisferner in Austria. Nordic J. Surv. Real Estate Res 2007, 4, 24–39. [Google Scholar]
  6. Kraus, K.; Pfeifer, N. Determination of Terrain Models in Wooded Areas with Airborne Laser Scanner Data. ISPRS J. Photogramm. Remote Sens 1998, 53, 193–203. [Google Scholar]
  7. Hyyppä, J.; Pyysalo, U.; Hyyppä, H.; Haggren, H.; Ruppert, G. Accuracy of Laser Scanning for DTM Generation in Forested Areas. Proceedings of SPIE Laser Radar Technology and Applications V, Orlando, FL, USA, 2000; 4035, pp. 119–130.
  8. Reutebuch, S.; McGaughey, R.; Andersen, H.; Carson, W. Accuracy of a High-resolution Lidar Terrain Model under a Conifer Forest Canopy. Can. J. Remot. Sens 2003, 29, 527–535. [Google Scholar]
  9. Hyyppä, H.; Yu, X.; Hyyppä, J.; Kaartinen, H.; Kaasalainen, S.; Honkavaara, E.; Rönnholm, P. Factors Affecting the Quality of DTM Generation in Forested Areas. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci 2005, 36, 97–102. [Google Scholar]
  10. Ahokas, E.; Kaartinen, H.; Hyyppä, J. On the Quality Checking of the Airborne Laser Scanning-based Nationwide Elevation Model in Finland. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci 2008, 37, 267–270. [Google Scholar]
  11. Sithole, G.; Vosselman, G. Experimental Comparison of Filter Algorithms for Bare-Earth Extraction from Airborne Laser Scanning Point Clouds. ISPRS J. Photogramm. Remote Sens 2004, 59, 85–101. [Google Scholar]
  12. Brenner, C.; Haala, N. Rapid Acquisition of Virtual Reality City Models from Multiple Data Sources. Int. Arch. Photogramm. Remot. Sens 1998, 32, 323–330. [Google Scholar]
  13. Vosselman, G. Building Reconstruction Using Planar Faces in Very High Density Data. Int. Arch. Photogramm. Remot. Sens 1999, 32, 87–92. [Google Scholar]
  14. Brenner, C. Towards Fully Automatic Generation of City Models. Int. Arch. Photogramm. Remot. Sens 2000, 33, 85–92. [Google Scholar]
  15. Rottensteiner, F.; Briese, C. A New Method for Building Extraction in Urban Areas from High-resolution LIDAR Data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci 2002, 34, 295–301. [Google Scholar]
  16. Sithole, G.; Vosselman, G. Bridge Detection in Airborne Laser Scanner Data. ISPRS J. Photogramm. Remote Sens 2006, 61, 33–46. [Google Scholar]
  17. Rutzinger, M.; Höfle, B.; Pfeifer, N. Detection of High Urban Vegetation with Airborne Laser Scanning Data. Proceedings of ForestSat’07, Montpellier, France, 2007; Available online: http://www.ipf.tuwien.ac.at/np/Publications/rutzinger_forestsat.pdf/ (accessed on April 16, 2009).
  18. Dorninger, P.; Pfeifer, N. A Comprehensive Automated 3D Approach for Building Extraction, Reconstruction, and Regularization from Airborne Laser Scanning Point Clouds. Sensors 2008, 8, 7323–7343. [Google Scholar]
  19. Boehler, W.; Bordas Vicent, M.; Marbs, A. Investigating Laser Scanner Accuracy. Proceedings of XIXth CIPA Symposium, Antalya, Turkey, 2003; pp. 696–702.
  20. Hunter, G.; Cox, C.; Kremer, J. Development of a Commercial Laser Scanning Mobile Mapping System – StreetMapper. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36. Available online at: http://www.streetmapper.net/articles/Paper%20-%20Development%20of%20a%20Commercial%20Laser%20Scanning%20Mobile%20Mapping%20System%20-%20Pegasus.pdf/ (accessed on April 16, 2009).
  21. Kukko, A. Road Environment Mapper - 3D Data Capturing with Mobile Mapping. Licentiate Thesis, Department of Surveying, Helsinki University of Technology, Espoo, Finland, 2009.
  22. Kremer, J. CCNS and AEROcontrol: Products for Efficient Photogrammetric Data Collection. In Photogrammetric Week 2001; Fritsch, D, Spiller, R., Eds.; Wichmann Verlag: Heidelberg, Germany, 2001; pp. 85–92. [Google Scholar]
  23. Heipke, C.; Jacobsen, K.; Wegmann, H. Analysis of the Results of the OEEPE Test Integrated Sensor Orientation. OEEPE Official Publication 2002, 43, pp. 31–45. Available online: http://www.gtbi.net/export/sites/default/GTBiWeb/soporte/descargas/AnalisisOeepeOrientacionIntegrada-en.pdf/ (accessed on April 16, 2009).
  24. Honkavaara, E.; Ilves, R.; Jaakkola, J. Practical Results of GPS/IMU/Camera-system Calibration. Proceedings of International Workshop: Theory, Technology and Realities of Inertial/GPS Sensor Orientation, Castelldefels, Spain, 2003; Available online: http://www.isprs.org/commission1/theory_tech_realities/pdf/p06_s3.pdf/ (accessed on 16 April 2009).
  25. Hutton, J.; Bourke, T.; Scherzinger, B.; Hill, R. New developments of inertial navigation system at applanix. In Photogrammetric Week ’07; Fritsch, D., Ed.; Wichmann Verlag: Heidelberg, Germany, 2007; pp. 201–213. [Google Scholar]
  26. Jacobsen, K. Direct Integrated Sensor Orientation – Pros and Cons. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci 2004, 35, 829–835. [Google Scholar]
  27. Merchant, D.; Schenk, A.; Habib, A.; Yoon, T. USGS/OSU Progress with Digital Camera in situ Calibration Methods. Int. Arch. Photogramm. Remote Sens 2004, 35, 19–24. [Google Scholar]
  28. Schenk, T. Towards Automatic Aerial Triangulation. ISPRS J. Photogramm. Remote Sens 1997, 52, 110–121. [Google Scholar]
  29. Alamús, R.; Kornus, W. DMC Geometry Analysis and Virtual Image Characterization. Photogramm. Rec 2008, 23, 353–371. [Google Scholar]
  30. Püschel, H.; Sauerbier, M.; Eisenbeiss, H. A 3D Model of Castle Landenberg (CH) from Combined Photogrammetric Processing of Terrestrial and UAV-based Images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci 2008, 37, 93–98. [Google Scholar]
  31. Zhu, L.; Erving, A.; Koistinen, K.; Nuikka, M.; Junnilainen, H.; Heiska, N.; Haggrén, H. Georeferencing Multi-temporal and Multi-scale Imagery in Photogrammetry. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci 2008, 37, 225–230. [Google Scholar]
  32. Lichti, D.; Licht, G. Experiences with Terrestrial Laser Scanner Modelling and Accuracy Assessment. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci 2006, 36, 155–160. [Google Scholar]
  33. Brenner, C.; Dold, C.; Ripperda, N. Coarse Orientation of Terrestrial Laser Scans in Urban Environments. ISPRS J. Photogramm. Remote Sens 2008, 63, 4–18. [Google Scholar]
  34. Besl, P.; McKay, N. A Method for Registration of 3D Shapes. IEEE Trans. Patt. Anal. Machine Intell 1992, 14, 239–256. [Google Scholar]
  35. Chen, Y.; Medioni, G. Object Modeling by Registration of Multiple Range Images. Image Vis. Comp 1992, 10, 145–155. [Google Scholar]
  36. Eggert, D.; Fitzgibbon, A.; Fisher, R. Simultaneous Registration of Multiple Range Views for Use in Reverse Engineering of CAD Models. Comp. Vis. Image Underst 1998, 69, 253–272. [Google Scholar]
  37. Akca, D. Registration of Point Clouds Using Range and Intensity Information. The International Workshop on Recording, Modeling and Visualization of Cultural Heritage, Ascona, Switzerland, 2005; pp. 115–126.
  38. Pfeifer, N. Airborne Laser Scanning Strip Adjustment and Automation of Tie Surface Measurement. Bol. Ciênc. Geod 2005, 11, 3–22. [Google Scholar]
  39. Csanyi, N.; Toth, C. Improvement of Lidar Data Accuracy Using Lidar-specific Ground Targets. Photogramm. Eng. Remote Sens 2007, 73, 385–396. [Google Scholar]
  40. Yastikli, N.; Toth, C.; Brzezinska, D. Multi Sensor Airborne Systems: the Potential for in situ Sensor Calibration. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci 2008, 37, 89–94. [Google Scholar]
  41. Hyyppä, H.; Rönnholm, P.; Soininen, A.; Hyyppä, J. Scope for Laser Scanning to Provide Road Environment Information. Photogramm. J. Finland 2005, 19, 19–33. [Google Scholar]
  42. Toth, C.; Paska, E.; Brzezinska, D. Using Pavement Markings to Support the QA/QC of Lidar Data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci 2007, 36, 173–178. [Google Scholar]
  43. Schenk, T. Modeling and Analyzing Systematic Errors of Airborne Laser Scanners. In Technical Notes in Photogrammetry No. 19; Department of Civil and Environmental Engineering and Geodetic Science, The Ohio State University: Columbus, OH, USA, 2001; pp. 1–42. [Google Scholar]
  44. Habib, A.; Ghanma, M.; Mitishita, E. Co-registration of Photogrammetric and Lidar Data: Methodology and case study. Brazilian J. Cartogr 2004, 56, 1–13. [Google Scholar]
  45. Shin, S.; Habib, A.; Ghanma, M.; Kim, C.; Kim, E.-M. Algorithms for Multi-sensor and Multi-primitive Photogrammetric Triangulation. ETRI J 2007, 29, 411–420. [Google Scholar]
  46. Mitishita, E.; Habib, A.; Centeno, J.; Machado, A.; Lay, J.; Wong, C. Photogrammetric and Lidar Data Integration Using the Centroid of a Rectangular Roof as a Control Point. Photogramm. Rec 2008, 23, 19–35. [Google Scholar]
  47. Postolov, Y.; Krupnik, A.; McIntosh, K. Registration of Airborne Laser Data to Surfaces Generated by Photogrammetric Means. Int. Arch. Photogramm. Remote Sens 1999, 32, 95–99. [Google Scholar]
  48. McIntosh, K.; Krupnik, A.; Schenk, T. Utilizing Airborne Laser Altimetry for the Improvement of Automatically Generated DEMs over Urban Areas. Int. Arch. Photogramm. Remote Sens 1999, 32, 89–94. [Google Scholar]
  49. Pöntinen, P. Camera Calibration by Rotation. Int. Arch. Photogramm. Remote Sens 2002, 34, 585–589. [Google Scholar]
  50. Phillips, J.; Liu, R.; Tomasi, C. Outlier Robust ICP for Minimizing Fractional RMSD. Sixth International Conference on 3-D Digital Imaging and Modeling, Montréal, Canada, 2007; pp. 427–434.
  51. Fraser, C.; Hanley, H. Developments in Close Range Photogrammetry for 3D Modelling: the iWitness Example. Proceedings of International Workshop on Processing & Visualization Using High-Resolution Imagery, Pitsanulok, Thailand, 2004; Available online: http://www.photogrammetry.ethz.ch/pitsanulok_workshop/papers/09.pdf/ (accessed on April 16, 2009).
  52. Kager, H.; Kraus, K. Height Discrepancies between Overlapping Laser Scanner Strips. Proceedings of Conference on Optical 3-D Measurement Techniques V, Vienna, Austria, 2001; pp. 103–110.
Figure 1. In order to create a measurable panoramic image, a concentric image acquisition was ensured with a calibrated camera mount.
Figure 2. The workflow of the research.
Figure 3. The relative orientation of an image block and a laser point cloud can be solved through the 3D object space.
Figure 4. ICP registration results. The reference surface was a combination of photogrammetric and terrestrial laser scanning data. Only red laser points were included in the registration.
Figure 5. A suggestive workflow for an interactive orientation of a single image.
Figure 6. With 3D rotation matrices R1, R2 and U, camera coordinate observations can be rotated to a coordinate system parallel to the target coordinate system. Because 3D rotation matrices are orthogonal, inverse matrices can be calculated with matrix transposes.
Figure 7. Laser scanning data, which was used for interactive orientation, superimposed onto aerial, close-range and panoramic images. The color coding illustrates the heights of the laser points. The coordinate axes illustrate the approximate directions of the ground coordinate system.
Figure 8. After the orientations, 3D points from photogrammetric measurements, terrestrial laser scanning and airborne laser scanning were integrated. The 3D points were colorized using both aerial images and a terrestrial panoramic image.
Figure 9. Small red dots illustrate the initial orientation and blue dots the registration after the interactive orientation and the inverse transformation of 3D points according to the relative orientation parameters.
Table 1. Differences of exterior orientation parameters (interactive orientation – reference). The interactive orientation was applied using simultaneously a terrestrial panoramic image and an aerial image, whose relative orientation was known. Statistics were calculated from 8 individual orientations.

Aerial image
          X (cm)   Y (cm)   Z (cm)   ω (deg)   φ (deg)   κ (deg)
Average   −8.1     2.2      −1.1     −0.062    −0.012    0.046
Std       10.6     15.4     2.5      0.064     0.041     0.038
Max       20.0     30.5     5.6      0.194     0.069     0.106

Panoramic image
          X (cm)   Y (cm)   Z (cm)   ω (deg)   φ (deg)   κ (deg)
Average   −3.1     −2.8     −0.6     −0.005    −0.063    0.043
Std       5.1      7.2      1.8      0.055     0.056     0.033
Max       10.2     18.5     4.1      0.101     0.176     0.065
Table 2. Differences of exterior orientation parameters (interactive orientation – reference). The interactive orientation was applied using simultaneously a close-range normal-angle image, a terrestrial panoramic image and an aerial image, whose relative orientations were known. Statistics were calculated from 8 individual orientations.

Aerial image
          X (cm)   Y (cm)   Z (cm)   ω (deg)   φ (deg)   κ (deg)
Average   −9.6     1.5      −0.4     −0.033    −0.017    0.038
Std       7.4      12.0     1.2      0.031     0.036     0.021
Max       20.9     16.3     2.4      0.097     0.065     0.065

Panoramic image
          X (cm)   Y (cm)   Z (cm)   ω (deg)   φ (deg)   κ (deg)
Average   0.6      0.7      −1.2     0.024     −0.017    0.045
Std       2.6      0.4      1.4      0.035     0.022     0.028
Max       6.8      1.3      2.8      0.065     0.046     0.078

Close-range image
          X (cm)   Y (cm)   Z (cm)   ω (deg)   φ (deg)   κ (deg)
Average   1.2      0.3      −0.6     −0.034    0.022     0.008
Std       2.3      0.2      0.8      0.062     0.023     0.050
Max       6.5      0.6      2.0      0.105     0.054     0.097
Table 3. Differences of shifts (interactive orientation – reference). Because there were no rotation differences between the laser scanning data and the image block coordinate system, the differences of shifts were the same for all images. Statistics were calculated from 8 individual orientations.

Image block
          X (cm)   Y (cm)   Z (cm)
Average   0.7      0.3      −0.9
Std       1.2      0.3      0.5
Max       2.6      0.6      1.8
