Using ALS Data to Improve Co-Registration of Photogrammetry-Based Point Cloud Data in Urban Areas

by Ranjith Gopalakrishnan 1,*, Daniela Ali-Sisto 1, Mikko Kukkonen 1, Pekka Savolainen 2 and Petteri Packalen 1

1 School of Forest Sciences, Faculty of Science and Forestry, University of Eastern Finland, P.O. Box 111, 80101 Joensuu, Finland
2 TerraTec Oy, Karjalankatu 2, 00520 Helsinki, Finland
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(12), 1943; https://doi.org/10.3390/rs12121943
Submission received: 1 May 2020 / Revised: 9 June 2020 / Accepted: 11 June 2020 / Published: 16 June 2020

Abstract

Globally, urban areas are rapidly expanding, and high-quality remote sensing products are essential to help guide such development towards efficient and sustainable pathways. Here, we present an algorithm to address a common problem in digital aerial photogrammetry (DAP)-based image point clouds: vertical mis-registration. The algorithm uses the ground as inferred from airborne laser scanning (ALS) data as a reference surface and re-aligns individual point clouds to this surface. We demonstrate the effectiveness of the proposed method for the city of Kuopio, in central Finland, using the standard deviation of the vertical coordinate values as a measure of the mis-registration. We show that this standard deviation decreased substantially (by more than 1.0 m) for a large proportion (23.2%) of the study area. Moreover, the method performed better in urban and suburban areas than in vegetated areas (parks, forested areas, and so on). Hence, we demonstrate that the proposed algorithm is a simple and effective way to improve the quality and usability of DAP-based point clouds in urban areas.

1. Introduction

Urbanization has been a prevalent global trend for the past several decades [1]. Urban areas are expanding considerably on a daily basis; the growth is estimated to be as much as 110 km2 every single day [2]. This urbanization has drastic consequences: patterns of land cover, hydrological cycle components, climate, biogeochemistry, and biodiversity are all altered on a global scale as a result [3]. Another consequential aspect of land use in urban centers is that, over the past several decades, the land expansion rate has been higher than or equal to population growth rates; this suggests that urban growth is becoming more expansive than compact [1]. Urbanization also has deep implications for global sustainability: it has been opined that the key to global ecological sustainability may lie in urban areas, as they account for ~75% of global GDP [4]. Some positive aspects can also be associated with such urbanization trends. For instance, they represent a unique opportunity to build more sustainable spaces: cities that have ‘green’ buildings, sustainable transport options over reduced distances, parks and similar green spaces, and ample domestic and industrial recycling opportunities.
Remote sensing data are crucial to city planners for monitoring urban areas, devising effective interventions, and planning urban infrastructure [2]. The importance of remote sensing in the mapping and monitoring of diverse sustainability factors (e.g., land use and cover, biodiversity, climate, hydrological systems, biogeochemistry) has been previously noted [2,3]. Remote sensing scientists are being tasked by policy and decision makers to generate more, and better-quality, urban maps and related products. Digital aerial photogrammetry-based 3D point cloud data are becoming increasingly popular in the urban context [5,6]. Point clouds have a unique advantage over other remote sensing data types in urban environments because they capture the 3D aspect of urban infrastructure such as buildings, elevated bridges, walkways, and open spaces. This is advantageous, especially because spatial visualizations are important for urban planners when creating and communicating various strategies and approaches [7]. Point cloud data are useful for a varied list of tasks such as urban planning, vehicular traffic management, creation of digital surface models, 3D modeling, virtual reality simulations, and overall environmental monitoring [8]. Dense point clouds can be created with Structure from Motion (SfM) processing routines [9], which can utilize a range of popular stereo matching algorithms such as semi-global matching (SGM) [10], the ‘next-generation automatic terrain extraction’ algorithm (NGATE) [11], or, more recently, deep learning [12].
The vertical positional accuracy of digital aerial photogrammetry (DAP)-based point clouds is highly important for the creation of relatively error-free digital surface models. However, it can be limited in urban environments because of several factors. First, there could be slight errors associated with the global navigation satellite system (GNSS) positioning equipment or with the inertial measurement units (IMU). This is more probable when low-cost equipment is used, such as in ‘citizen science’ related data acquisition projects. Second, urban areas pose unique challenges to point cloud generation algorithms because of low texture in some areas, occlusion effects, heterogeneous surfaces and surface types, and shadow effects (e.g., [13]). Indeed, the noise- and outlier-laden nature of DAP data compared to airborne laser scanning (ALS) has been noted by several authors [14,15,16]. In many DAP-based point cloud generation procedures, multiple point clouds are generated independently, one for each image pair used. We have noticed in some of our urban DAP-based point cloud datasets that there is a vertical misalignment of roughly 1 m between such point clouds (from different image pairs). This is the primary motivation for the development of a simple and intuitive algorithm to reduce such misalignment. There have been several attempts at image point cloud quality improvement. For example, a smoothing operation based on the similarity between various patches and their color has been proposed [17]. A technique of iteratively smoothing the point cloud by estimating quadric surfaces for various point cloud regions was suggested by Fua and Sander [18]. A simpler version of this strategy, using best-fitting tangent planes (for all points) to detect and remove noise and outlier points, has also been suggested [19]. Yet, to our knowledge, ours is the first method that leverages an ALS-based ground surface to correct DAP-based point clouds. A similar method has been validated for forested areas [20]. Here, we describe a version of the algorithm that is suitable for urban areas, and we also provide quantitative estimates of its efficacy.
The primary motivation behind our proposed scheme of correcting DAP-based point clouds using ALS data is their differing acquisition frequencies in practice. On average, DAP data is roughly one-third the cost of ALS [21]. Hence, in many urban centers, ALS acquisitions happen once every 10 or 15 years, while DAP-related acquisitions (i.e., collection of aerial images) happen once every two or three years. In fact, annual DAP acquisitions may also be justified for some fast-expanding or rapidly changing cities.
The main objective of this article is to study the efficacy of an image point cloud correction technique in the urban context. Specifically, we try to answer the following questions in an urban context:
  • Can an ALS-based digital terrain model (DTM) be used to realign DAP-based point clouds consistently?
  • If so, what are the quantified improvements seen in such point clouds?
By answering these questions, we hope to establish the efficacy of a simple yet powerful method to increase the technical quality and usability of image point clouds over metropolitan and similar city areas.

2. Materials and Methods

2.1. Study Area

Our study area is the city of Kuopio (Figure 1), an urban area with a population of around 120,000.

2.2. Remote Sensing Datasets

The ALS data were collected during May 2010 using the Optech ALTM GEMINI (05SEN180) system, a widely used sensor [23]. The mean flying height was 2000 m above ground level and the field of view was 30 degrees, resulting in a side overlap of approximately 20%. This flying configuration resulted in a nominal sampling density of about 0.8 emitted pulses per m2. A DTM was constructed by first classifying points as ground and non-ground echoes according to the approach described by Axelsson [24]. A raster DTM of 2 m spatial resolution was then obtained by computing the mean of the ground echoes within each raster cell. Values for the cells with no ground echoes were interpolated using Delaunay triangulation and triangular interpolation.
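To illustrate this DTM construction step, the following is a minimal Python/NumPy sketch, not the processing chain used in the study. It assumes the ground-classified echoes are already available as an (N, 3) array and uses SciPy's griddata for the Delaunay-based linear interpolation of cells without ground echoes; the function name build_dtm is hypothetical.

```python
import numpy as np
from scipy.interpolate import griddata

def build_dtm(ground_xyz, cell=2.0):
    """Rasterize ALS ground echoes to a DTM: per-cell mean elevation,
    then fill empty cells by Delaunay-based linear interpolation."""
    x, y, z = ground_xyz[:, 0], ground_xyz[:, 1], ground_xyz[:, 2]
    x0, y0 = x.min(), y.min()
    ncol = int(np.ceil((x.max() - x0) / cell)) + 1
    nrow = int(np.ceil((y.max() - y0) / cell)) + 1
    col = ((x - x0) / cell).astype(int)
    row = ((y - y0) / cell).astype(int)

    # Mean ground elevation per cell
    sums = np.zeros((nrow, ncol))
    counts = np.zeros((nrow, ncol))
    np.add.at(sums, (row, col), z)
    np.add.at(counts, (row, col), 1)
    dtm = np.full((nrow, ncol), np.nan)
    filled = counts > 0
    dtm[filled] = sums[filled] / counts[filled]

    # Fill cells with no ground echoes by linear interpolation inside the
    # Delaunay triangulation of populated cell centers (cells outside the
    # convex hull remain NaN in this simple sketch)
    rr, cc = np.nonzero(filled)
    er, ec = np.nonzero(~filled)
    if er.size:
        dtm[~filled] = griddata(np.column_stack([rr, cc]), dtm[filled],
                                np.column_stack([er, ec]), method="linear")
    return dtm, (x0, y0, cell)
```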
Aerial photographs were also acquired over Kuopio in July 2014 using a Canon EOS-1Ds Mark III. The sensor was a nadir-facing RGB camera in a multiple-camera setup. Position and orientation came from an integrated Applanix POS/AV 310 inertial navigation system with GNSS and IMU units. The average flying altitude was 915 m above ground level. A total of 958 aerial photos were acquired, with an average end-lap of 60% and an average side-lap of 45%. Exterior camera orientations from the GNSS/IMU system were used as such, without block adjustment (i.e., direct sensor orientation was used).
Bundle block adjustment is quite common in photogrammetry pipelines, even when only automatically detected tie points are used in the absence of ground control points. However, such an adjustment was skipped in this case for two reasons. First, our objective was to recreate scenarios where such (potentially costly and time-consuming) processing techniques may not be available or feasible. Second, we intend to showcase our algorithm as being able to correct point clouds with systematic spatial misalignments, regardless of the error source. Other possible sources of misalignment include low-cost equipment and less favorable acquisition conditions.

Creation of Image Point Clouds

We used the SGM image matching algorithm [10] to create the initial DAP-based point clouds from overlapping aerial images. The basic idea is to match images pixelwise, using global and local search techniques to establish corresponding points. We used the SGM implementation in the ERDAS IMAGINE photogrammetry suite (version 15.0; Geosystems, 2014) to create the point clouds. This implementation always results in image-pair based point clouds (multi-view stereo is not possible).

2.3. Height Adjustment Algorithm

The proposed algorithm adjusts the heights of DAP-based point clouds by modifying the height (z) value associated with each point. A similar version of this algorithm was previously presented [20]; here, we present a modified adaptation suited to urban areas. The algorithm is parametrized by a set of user-specified input parameters and is executed for one image-pair point cloud at a time. The following high-level description summarizes the essential elements of the procedure in eight steps:
Input layers:
  • The unadjusted DAP-based point cloud data.
  • The ALS-based DTM of the area.
  • Demarcation of built-up area (BUA) for the area in consideration (e.g., a GIS-vector representation).
Parameters relevant to the procedure:
  • The pixel size (pixel_size).
  • The standard deviation threshold (SDmax).
  • The minimum number of points threshold (npmin).
  • Ground pixel outlier detection threshold (zcorrmax).
  • Maximum height difference between candidate ground pixels (hdiffmax).
  • Gaussian smoothing parameter (sigma) for the correction surface.
Output layer:
  • The adjusted DAP-based point cloud data.
Procedural steps:
  • The elevations of the unadjusted DAP-based points (ZDAP) are scaled so that they are relative to the ALS ground level (∆ZDAP). This is done by subtracting the (ALS-based) DTM from the z values of the DAP point cloud.
  • A user-defined pixel size (pixel_size) is used to tessellate the spatial extent of the point cloud into a regular square grid.
  • Grid-elements (pixels) that belong to buildings and similar structures are labelled as ‘built-up area’ (BUA) pixels. This is done using the ‘demarcation of built-up area (BUA)’ layer. This is elaborated further in Section 2.4.
  • Each grid element (pixel) of this tessellated grid is examined as to whether it qualifies as a ground pixel (i.e., a pixel that represents a patch of ground). This is decided by the following three criteria:
    • (a) It should not be labelled as a BUA pixel (see step 3 above).
    • (b) The ‘flat surface’ criterion: the standard deviation of the vertical heights of the points in the pixel must be below a user-specified threshold (SDmax).
    • (c) The number of (point cloud) points in the pixel should be greater than a user-defined value npmin.
    In this step, criteria (a) and (b) help us identify flat surfaces that are not the tops of buildings and similar structures, while (c) avoids spurious ground points. Hence, there is a high likelihood that the qualifying pixels represent ground surface patches. This step is implemented as a loop over all pixels in the area under consideration.
  • Then, we compute a ‘vertical correction estimate’ (∆ZCGP, correction to ground pixel) for each such identified ground pixel:
    ∆ZCGP = mean(∆ZDAP)    (1)
    where ∆ZDAP is the set of Z values of (point cloud) points in that (ground) pixel.
  • We then iteratively trim the list of candidate ground pixels identified in the above steps using the following criteria:
    • Drop candidate ground pixels which imply too high an absolute correction value (i.e., abs(∆ZCGP) > zcorrmax).
    • Compute the vertical displacement difference between the highest and lowest candidate ground pixels. If this is greater than a user-defined threshold (hdiffmax), then both of those candidate ground pixels are dropped. This mainly excludes pixels which are either too high (such as tree-top points) or too low (noise-related below-ground artifacts) and would otherwise be selected as spurious ground pixels.
  • We interpolate a raster (∆ZCOR) representing a correction surface for the full spatial extent of the original DAP-based point cloud from this set of ∆ZCGP values. The interpolation surface is computed using Delaunay triangulation (linear interpolation inside triangles) followed by Gaussian filtering with a user-specified sigma value.
  • The Z value of each point in the point cloud is corrected (adjusted) as follows:
    ZADJ = ZDAP − ∆ZCOR    (2)
    where ZADJ is the adjusted height of the point, ZDAP is the original height of the point and ∆ZCOR is the value of the correction surface raster at that point.
The gist of these steps is to identify patches of ground in the DAP-based point cloud and to use this information to generate a correction surface with respect to the ALS DTM. This correction surface is then used to correct the original heights of all points in the point cloud (Figure 2); a code sketch of the procedure is given below. The procedure is repeated for each point cloud generated independently from an image pair.
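The sketch below makes the per-image-pair adjustment concrete; it is a minimal Python illustration, not the implementation used in the study. It assumes the DAP points are available as a NumPy array, that dtm_at (a function returning the ALS DTM height at given coordinates) and bua_mask (a boolean built-up-area grid aligned with the analysis grid) are supplied by the caller, and it omits the iterative hdiffmax trimming of step 6 for brevity. SciPy is used for the Delaunay-based interpolation and the Gaussian filtering of the correction surface.

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import gaussian_filter

def adjust_point_cloud(xyz, dtm_at, bua_mask, pixel_size=3.0,
                       sd_max=0.2, np_min=10, zcorr_max=100.0, sigma=1.0):
    """Vertically re-align one image-pair DAP point cloud to the ALS ground.
    xyz: (N, 3) DAP points; dtm_at(x, y): ALS DTM height at given coordinates;
    bua_mask: boolean grid (True = built-up) on the same pixel grid."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    dz = z - dtm_at(x, y)                      # heights relative to ALS ground

    x0, y0 = x.min(), y.min()
    col = ((x - x0) / pixel_size).astype(int)
    row = ((y - y0) / pixel_size).astype(int)
    nrow, ncol = row.max() + 1, col.max() + 1

    # Candidate ground pixels: not built-up, flat, and well populated
    # (a simple but slow loop; a production version would bin points first)
    corr = np.full((nrow, ncol), np.nan)
    for r in range(nrow):
        for c in range(ncol):
            in_pix = (row == r) & (col == c)
            if in_pix.sum() < np_min or bua_mask[r, c]:
                continue
            if dz[in_pix].std() > sd_max:
                continue
            corr[r, c] = dz[in_pix].mean()     # vertical correction estimate

    # Drop candidates with implausibly large corrections
    corr[np.abs(corr) > zcorr_max] = np.nan

    # Interpolate a full correction surface and smooth it
    known = ~np.isnan(corr)
    rr, cc = np.nonzero(known)
    grid_r, grid_c = np.mgrid[0:nrow, 0:ncol]
    surface = griddata(np.column_stack([rr, cc]), corr[known],
                       (grid_r, grid_c), method="linear")
    surface = gaussian_filter(np.nan_to_num(surface), sigma=sigma)

    # Apply the correction to every point (ZADJ = ZDAP - dZCOR)
    z_adj = z - surface[row, col]
    return np.column_stack([x, y, z_adj])
```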

2.4. Built-Up Area (BUA) Exclusion

As mentioned before, we exclude built-up areas while searching for flat areas (ground patches). Our definition of built-up areas for this purpose is limited to the footprints of multi-story buildings present in the center of most cities, suburban houses, large factories, warehouses, shops, and similar structures. The reason for this exclusion is that the rooftops of such structures are relatively flat and would generate false ground patches if they were included. These false patches would then introduce substantial errors into the correction raster.
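For illustration, a BUA mask of this kind could be rasterized from a building-footprint shapefile roughly as follows. This is a sketch using geopandas and rasterio (not the software used in the study), and the file name and grid extents are hypothetical.

```python
import geopandas as gpd
from rasterio import features
from rasterio.transform import from_origin

def build_bua_mask(shapefile, x0, y_max, nrow, ncol, pixel_size=3.0):
    """Rasterize building footprints to a boolean built-up-area mask
    aligned with the analysis grid (True = built-up pixel)."""
    buildings = gpd.read_file(shapefile)               # e.g., an NLS footprint layer
    transform = from_origin(x0, y_max, pixel_size, pixel_size)
    mask = features.rasterize(
        ((geom, 1) for geom in buildings.geometry),     # burn value 1 into footprints
        out_shape=(nrow, ncol),
        transform=transform,
        fill=0,
        dtype="uint8",
    )
    return mask.astype(bool)

# Hypothetical usage:
# bua = build_bua_mask("buildings_2016.shp", x0=530000.0, y_max=6975000.0,
#                      nrow=2000, ncol=2000, pixel_size=3.0)
```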

2.5. Evaluation of Efficacy of Algorithm

The efficacy of our algorithm for urban sites was evaluated using DAP-based point clouds generated from images acquired over the city of Kuopio, Finland. Kuopio is a rapidly urbanizing area and is representative of many medium-sized cities around the world. The algorithm parameters were set as follows (collected into a configuration sketch after this list):
  • After the examination of several areas in and around the city, the pixel size for flat area detection and correction raster generation (pixel_size) was fixed at 3.0 m.
  • The standard deviation threshold (SDmax) was set to 0.2 m. This was done after examining several vertical profiles of ground patches. Most of the points for these samples fell within ±0.5 m from the mean.
  • The value of npmin was set to 10. This was done after realizing that pixels with fewer than ten points were mostly erroneously classified.
  • Meanwhile, the value of zcorrmax was set to 100.0 m. This was done after some initial runs, which showed that some outlier points (displaced by more than 100 m) needed to be filtered out.
  • The building exclusion step was done using a built-up area shapefile obtained from the National Land Survey of Finland (NLS), based on data collected in 2016.
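For reference, the parameter values listed above can be collected into a small configuration sketch, using the hypothetical parameter names from the adjustment sketch shown earlier.

```python
# Parameter values used for the Kuopio evaluation, as reported above
PARAMS = {
    "pixel_size": 3.0,   # m; grid cell size for ground-patch search and rasters
    "sd_max": 0.2,       # m; 'flat surface' standard deviation threshold (SDmax)
    "np_min": 10,        # minimum number of points per candidate ground pixel
    "zcorr_max": 100.0,  # m; outlier threshold on the correction estimate (zcorrmax)
}
```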
We first performed a ‘close-up’ evaluation of our algorithm by inspecting the vertical profiles of the DAP-based point clouds before and after the correction for several representative areas. We then performed a quantitative evaluation over the entire study area in the following way. For any given pixel, the standard deviation (SD) of the vertical elevation of the (point cloud) points that fall within the pixel boundary is a good indication of the adequacy of the vertical co-registration at that pixel. For example, for a flat building rooftop, the point clouds generated from different image pairs may be offset by several meters (poor co-registration, high standard deviation) or may be almost vertically coincident (good co-registration, low standard deviation). If our algorithm is effective, it should effect a substantial decrease in the vertical standard deviation in most areas. Another, similar metric of vertical spread (that should decrease) is the 95% interquantile range (IQR95), which we define in the following way:
IQR95 = h97.5 − h2.5    (3)
where h97.5 and h2.5 are the 97.5th and 2.5th percentile heights above ground of the point cloud. For both of these metrics, we calculated the per-pixel change after our algorithm was used to correct the DAP-based point cloud. For example:
∆SD = SDafter adjustment − SDoriginal DAP point cloud    (4)
where ∆SD is the change in standard deviation effected by our algorithm. The overall mean changes over the entire area of Kuopio were also calculated. Hence, we quantified the effectiveness of our approach by studying changes in the vertical dispersion of points in the DAP point cloud.
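As a sketch of how these per-pixel dispersion metrics could be computed, the following Python example bins the original and adjusted clouds to the same grid and returns the per-pixel SD and IQR95 changes. It is an illustrative sketch (function and variable names are ours), not the evaluation code used in the study.

```python
import numpy as np
from collections import defaultdict

def per_pixel_dispersion(xyz, pixel_size=3.0, origin=None):
    """Per-pixel standard deviation and IQR95 (h97.5 - h2.5) of point heights."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    x0, y0 = origin if origin is not None else (x.min(), y.min())
    pixels = defaultdict(list)
    for xi, yi, zi in zip(x, y, z):
        key = (int((yi - y0) // pixel_size), int((xi - x0) // pixel_size))
        pixels[key].append(zi)
    sd, iqr = {}, {}
    for pix, zs in pixels.items():
        zs = np.asarray(zs)
        sd[pix] = zs.std()
        iqr[pix] = np.percentile(zs, 97.5) - np.percentile(zs, 2.5)
    return sd, iqr

def dispersion_change(original_xyz, adjusted_xyz, pixel_size=3.0):
    """Delta-SD and Delta-IQR95 per pixel (after adjustment minus before)."""
    origin = (original_xyz[:, 0].min(), original_xyz[:, 1].min())
    sd0, iqr0 = per_pixel_dispersion(original_xyz, pixel_size, origin)
    sd1, iqr1 = per_pixel_dispersion(adjusted_xyz, pixel_size, origin)
    d_sd = {p: sd1[p] - sd0[p] for p in sd0 if p in sd1}
    d_iqr = {p: iqr1[p] - iqr0[p] for p in iqr0 if p in iqr1}
    return d_sd, d_iqr
```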

3. Results

We examined the vertical profile of the DAP-based point clouds at several areas before and after our adjustment technique was applied. We noticed that the vertical co-registration was improved in many instances (Figure 3). The ground patches represented in the DAP point clouds overlapped much better. Additionally, the various DAP-based point clouds generated from different image pairs co-registered much better in the vertical direction.
The change in standard deviation (∆SD, see (4)) between the original and adjusted point cloud elevations was quantified over the entire study area using 3 × 3 m pixels. Most of the area (76.7%) registered a relatively small change (less than 1.0 m). A substantial decrease in standard deviation (greater than 1.0 m), signaling better vertical co-registration, was recorded in 23.2% of the area. Meanwhile, the standard deviation increased substantially (by more than 1.0 m) in only 0.05% of the study area. The spatial patterns of these changes over the whole study area are shown in Figure 4. From part (a) of the figure, it can be seen that our calibration method decreases the standard deviation of height over almost all parts of the study area (blue pixels), with a slightly stronger effect in the forested fringes. In part (b) of the figure, we see that most of the pixels register a decrease, many by more than 5.0 m. We also observe that these decreases are more prevalent at the edges and corners of buildings than on flat surfaces. We examined the change in the 95% interquantile range (IQR95, see (3)) between the original and adjusted point cloud elevations in a similar fashion. Again, a majority of the area (56.1%) registered relatively negligible changes in this measure of dispersion (less than 1.0 m), while the calibration effected a decrease in this metric in 43.8% of the area. The 95% interquantile range increased in a very small proportion (0.09%) of the area.
We also analyzed the change in standard deviation with respect to three typical urban land-use divisions in our study area (Table 1). This analysis indicates that the algorithm is most effective in urban areas, followed by suburban and then forested areas. The reason seems to be that the magnitude of the standard deviations is higher in urban regions, especially in areas with tall buildings. The performance of image matching algorithms also degrades over flat urban surfaces (roads, parking lots, tops of buildings) with minimal texture information. Hence, land-use divisions with a higher proportion of such flat areas (such as urban downtown districts) show a higher proportion of areas with a substantial change in standard deviation.

4. Discussion

Remote sensing provides an effective path forward for monitoring urban areas. Three-dimensional remote sensing data are useful in urban analysis, but acquisition costs may be high, particularly because repeated measurements are often indispensable. DAP-based point cloud data have an advantage here, given that the typical acquisition cost is 30–50% that of ALS data [20]. This enables a high frequency of observations (say, every 2–5 years), which is useful for understanding many constantly changing urban areas [25].
We have demonstrated that the proposed height adjustment technique clearly improved the vertical co-registration and vertical accuracy of image point clouds. In the verification urban site, the standard deviation of the vertical displacement of (DAP point cloud) points decreased in 99.9% of the study area, and the decrease was substantial in 23.2% of the area. The height adjustment was most effective in urban and suburban areas, and less so in vegetated areas. However, since the comparison was based on the vertical standard deviation, the differences most likely arise from the fact that height variations are larger in urban areas than in vegetated areas, which allows for a greater improvement in vertical accuracy.
There are several drawbacks related to DAP-based image point cloud data. Some image processing pipelines use only measurements from the GNSS and IMU instruments, rather than ground control points (GCPs); this is known as ‘direct georeferencing’. Thus, they avoid the collection of (costly) GCP data. The positions of the exposure centers can be obtained either in real time (e.g., real-time kinematic, RTK) or during post-processing (e.g., post-processed kinematic, PPK). PPK may particularly improve the accuracy of low-cost GNSS receivers. However, angular errors caused by the IMU are likely greater than positioning errors, particularly with low-cost micro-electro-mechanical system (MEMS) based IMUs. The increased availability of lightweight and survey-grade GNSS and IMU systems has led to more georeferencing options, but their associated spatial accuracy is still lower than that achievable using GCPs [16,17]. All GNSS/IMU instruments have non-negligible errors associated with them. The resulting errors in exterior orientation are ultimately propagated as positioning errors (both horizontal and vertical) of individual points in the generated point cloud. Another source of error is the displacement vector between the antenna and the center of the camera projection (the lever-arm). Omitting this element, or approximating it in the calculations, can be problematic. One should also consider the difference between the orientation of the IMU and the downward gravity vector [26,27]. As the imaging altitude increases, geometrical distortions such as refraction and the curvature of the earth also result in positional inaccuracies. Sources of error associated with vegetated land covers include canopy tops swaying in the breeze and the occlusion of some patches. Hence, canopy gaps that can be distinguished in ALS data are usually not identified in DAP point cloud data.
A crucial element of the algorithm proposed in this paper is the identification of several ground patches in the urban scene. Hence, it would be less effective for image pairs where very few ground patches are represented. This can occur, for example, in city center sections with large buildings. Nevertheless, such conditions do not dominate the landscape of the average city and are usually restricted to a limited area. Another aspect of the method outlined in this paper is the need to generate built-up area estimates for the urban area in consideration (Section 2.4). In our case, the built-up area shapefile was from the National Land Survey of Finland. Similar products are possible, and may be available, in other areas too. It is possible to extract building footprints from remote sensing datasets such as high-resolution images [28,29]. However, sufficiently dense point clouds generated from ALS data, used alone or in fusion with other datasets, remain the primary approach for contemporary urban footprint extraction [30]. Wang et al. describe how a combination of relief-corrected aerial imagery and ALS data has been used to identify and map buildings in urban areas [31]. A simple heuristic is to use an ALS-based digital surface model depicting height above ground (nDSM) to identify buildings (e.g., nDSM height > 10 m) and exclude such areas from the ground patch search; a sketch of this heuristic is given below. If the nDSM predates the DAP-based point cloud, one needs to assume that no new buildings were built in the area of interest in the interim period.
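A minimal sketch of this nDSM heuristic is given below, assuming the DSM and DTM are already co-registered NumPy rasters on the analysis grid; the 10 m threshold follows the text, and the function name is hypothetical.

```python
import numpy as np

def bua_mask_from_ndsm(dsm, dtm, height_threshold=10.0):
    """Flag likely building pixels where the ALS-based normalized DSM
    (DSM minus DTM) exceeds a height threshold, e.g., 10 m."""
    ndsm = dsm - dtm                   # per-pixel height above ground
    return ndsm > height_threshold     # True = exclude from ground-patch search

# Hypothetical usage: bua = bua_mask_from_ndsm(dsm_raster, dtm_raster)
```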

5. Conclusions

In this article, we have proposed a simple and effective method that leverages existing ALS-based digital elevation data to better co-register and align DAP-based point clouds. We also demonstrated the effectiveness of our method using data from a large, heterogeneous urban area, i.e., Kuopio city. DAP data is expected to become cheaper and more prevalent, given the advancement of technology related to acquisition systems (e.g., acquisition from high-altitude aircraft and from near-surface unmanned aerial systems). Meanwhile, ALS data will continue to be popular as a higher-quality but less frequently available and updatable option. This paper presents a method for leveraging one (the ALS data) to enhance the quality of the other (DAP point clouds). The proposed algorithm is computationally tractable and can be easily parallelized. Given that the amount of urban remote sensing data is rapidly increasing, we hope that efforts such as ours will help to create high-quality, analysis-ready, and policy-relevant spatial datasets.

Author Contributions

R.G. and P.P. conceived and designed the research experiments, with inputs from D.A.-S. and M.K.; the algorithm was developed by D.A.-S. and the associated software was written by D.A.-S. and R.G.; auxiliary datasets were generated by M.K.; analysis was done by R.G. with contributions from P.P.; writing–original draft preparation, R.G.; all the authors contributed to writing–review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Finnish Forest Centre and the Academy of Finland through two projects: ALS4D (the Research Council for Natural Sciences and Engineering) under Grant 295341 and FORBIO (the Strategic Research Council) under Grant 314224.

Acknowledgments

The authors are grateful to the National Land Survey of Finland (NLS) for providing the ALS data and to Blom Kartta Oy, Finland, for providing the processed aerial images.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Seto, K.C.; Fragkias, M.; Güneralp, B.; Reilly, M.K. A meta-analysis of global urban land expansion. PLoS ONE 2011, 6, e23777.
  2. Zhu, Z.; Zhou, Y.; Seto, K.C.; Stokes, E.C.; Deng, C.; Pickett, S.T.; Taubenböck, H. Understanding an urbanizing planet: Strategic directions for remote sensing. Remote Sens. Environ. 2019, 228, 164–182.
  3. Grimm, N.B.; Faeth, S.H.; Golubiewski, N.E.; Redman, C.L.; Wu, J.; Bai, X.; Briggs, J.M. Global change and the ecology of cities. Science 2008, 319, 756–760.
  4. Seto, K.C.; Dhakal, S.; Bigio, A.; Blanco, H.; Delgado, G.C.; Dewar, D.; Huang, L.; Inaba, A.; Kansal, A.; Lwasa, S. Human settlements, infrastructure and spatial planning. In Climate Change 2014: Mitigation of Climate Change. Contribution of Working Group III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change; Cambridge University Press: Cambridge, UK; New York, NY, USA, 2014.
  5. McClune, A.P.; Mills, J.P.; Miller, P.E.; Holland, D.A. Automatic 3D building reconstruction from a dense image matching dataset. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41.
  6. Vu, H.-H.; Labatut, P.; Pons, J.-P.; Keriven, R. High accuracy and visibility-consistent dense multiview stereo. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 34, 889–901.
  7. Marshall, V.J.; Cadenasso, M.L.; McGrath, B.P.; Pickett, S.T. Patch Atlas: Integrating Design Principles and Ecological Knowledge for Cities as Complex Systems; Yale University Press: Cumberland, RI, USA, 2020.
  8. Vosselman, G.; Maas, H.-G. Airborne and Terrestrial Laser Scanning; CRC Press: Boca Raton, FL, USA, 2010.
  9. Snavely, N.; Seitz, S.M.; Szeliski, R. Photo tourism: Exploring photo collections in 3D. In ACM SIGGRAPH 2006 Papers; Association for Computing Machinery (ACM): New York, NY, USA, 2006; pp. 835–846.
  10. Hirschmuller, H. Stereo processing by semiglobal matching and mutual information. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 328–341.
  11. Zhang, B.; Miller, S.; Walker, S.; DeVenencia, K. Next generation automatic terrain extraction using Microsoft UltraCam imagery. In Proceedings of the ASPRS 2007 Annual Conference, Tampa, FL, USA, 7–11 May 2007.
  12. Zhou, K.; Meng, X.; Cheng, B. Review of stereo matching algorithms based on deep learning. Comput. Intell. Neurosci. 2020, 2020.
  13. Widyaningrum, E.; Gorte, B.G.H. Comprehensive comparison of two image-based point clouds from aerial photos with airborne LiDAR for large-scale mapping. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42, 557–565.
  14. Nex, F.; Gerke, M. Photogrammetric DSM denoising. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 40, 231.
  15. Zhu, Q.; Li, Y.; Hu, H.; Wu, B. Robust point cloud classification based on multi-level semantic relationships for urban scenes. ISPRS J. Photogramm. Remote Sens. 2017, 129, 86–102.
  16. Hu, H.; Chen, C.; Wu, B.; Yang, X.; Zhu, Q.; Ding, Y. Texture-aware dense image matching using ternary census transform. In ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume III-3, XXIII ISPRS Congress, Prague, Czech Republic, 12–19 July 2016.
  17. Huhle, B.; Schairer, T.; Jenke, P.; Straßer, W. Robust non-local denoising of colored depth data. In Proceedings of the 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Anchorage, AK, USA, 23–28 June 2008; pp. 1–7.
  18. Fua, P.; Sander, P. Reconstructing surfaces from unstructured 3D points. In Proceedings of the Second European Conference on Computer Vision (ECCV ’92), Santa Margherita Ligure, Italy, 19–22 May 1992.
  19. Wang, J.; Yu, Z.; Zhu, W.; Cao, J. Feature-preserving surface reconstruction from unoriented, noisy point data. Comput. Graph. Forum 2013, 32, 164–176.
  20. Ali-Sisto, D.; Gopalakrishnan, R.; Kukkonen, M.; Savolainen, P.; Packalen, P. A method for vertical adjustment of digital aerial photogrammetry data by using a high-quality digital terrain model. Int. J. Appl. Earth Obs. Geoinf. 2020, 84, 101954.
  21. White, J.C.; Wulder, M.A.; Vastaranta, M.; Coops, N.C.; Pitt, D.; Woods, M. The utility of image-based point clouds for forest inventory: A comparison with airborne laser scanning. Forests 2013, 4, 518–536.
  22. Gopalakrishnan, R.; Seppänen, A.; Kukkonen, M.; Packalen, P. Utility of image point cloud data towards generating enhanced multitemporal multisensor land cover maps. Int. J. Appl. Earth Obs. Geoinf. 2020, 86, 102012.
  23. Zhang, Z.; Liu, X. Support vector machines for tree species identification using LiDAR-derived structure and intensity variables. Geocarto Int. 2013, 28, 364–378.
  24. Axelsson, P. DEM generation from laser scanner data using adaptive TIN models. Int. Arch. Photogramm. Remote Sens. 2000, 33, 111–118.
  25. Li, W.; Zhou, W.; Bai, Y.; Pickett, S.T.; Han, L. The smart growth of Chinese cities: Opportunities offered by vacant land. Land Degrad. Dev. 2018, 29, 3512–3520.
  26. Pepe, M.; Prezioso, G.; Santamaria, R. Impact of vertical deflection on direct georeferencing of airborne images. Surv. Rev. 2015, 47, 71–76.
  27. Grejner-Brzezinska, D.A.; Yi, Y.; Toth, C.; Anderson, R.; Davenport, J.; Kopcha, D.; Salman, R. On improved gravity modeling supporting direct georeferencing of multisensor systems. Proc. Int. Soc. Photogramm. Remote Sens. 2004, 35, 908–913.
  28. Ahmadi, S.; Zoej, M.V.; Ebadi, H.; Moghaddam, H.A.; Mohammadzadeh, A. Automatic urban building boundary extraction from high resolution aerial images using an innovative model of active contours. Int. J. Appl. Earth Obs. Geoinf. 2010, 12, 150–157.
  29. Aldred, D.A.; Wang, J. A method for obtaining and applying classification parameters in object-based urban rooftop extraction from VHR multispectral images. Int. J. Remote Sens. 2011, 32, 2811–2823.
  30. Vu, T.T.; Yamazaki, F.; Matsuoka, M. Multi-scale solution for building extraction from LiDAR and image data. Int. J. Appl. Earth Obs. Geoinf. 2009, 11, 281–289.
  31. Wang, J.; Lehrbass, B.; Zeng, C. Urban building mapping using LiDAR and relief-corrected colour-infrared aerial images. In Proceedings of the 34th International Symposium on Remote Sensing of Environment, Sydney, Australia, 10–15 April 2011; pp. 10–15.
Figure 1. Location of the site used in this study is shown. (a) The location of the urban site of Kuopio city in Finland is marked with a red square. (b) The Kuopio study area is demarcated with a red polygon. An aerial image from 2014 is the background, where white color denotes ‘nodata’ areas. Adapted with permission [22].
Figure 2. (a) Pictorial representation of the salient steps of our method, to be applied to each image pair. (b) Steps for the identification of ground pixels.
Figure 3. Sample vertical profiles to demonstrate the efficacy of our approach. Here, the ground points identified from the airborne laser scanning (ALS) data are shown in dark red, while point clouds derived from image-pair matchings are shown in pink and blue. (a) The original vertical profile showing parts of two tall buildings from the downtown area and the alleyway between them. The vertical co-registration is poor, especially at the rightmost part. (b) The height-adjusted profile. The vertical co-registration of the digital aerial photogrammetry (DAP) point clouds is much better, both with respect to the ALS ground points and to each other. (c) The original vertical profile of an urban park area with low tree density. The ground surfaces associated with the ALS and DAP point clouds do not coincide. (d) The corrected profile, where the ground surfaces coincide very well.
Figure 4. (a) Pixels in our study area (demarcated by the light green polygon) where the standard deviation changed substantially (by more than 1.0 m) after adjustment. We further analyze the area demarcated by the black box (part of downtown Kuopio) in part (b). (b) Changes in standard deviation over a representative part of the downtown urban area (see the black demarcation box in (a)). The underlying aerial image shows several building blocks from above. Transparent areas denote a negligible decrease in SD (less than 1.0 m).
Table 1. Proportion of area with a substantial decrease in the dispersion metric (decrease of more than 1.0 m) for three land-use categories in our urban study region. The proportion of area with increases was very small (less than 0.1%) and is therefore not reported below.
                                                               Urban   Suburban   Vegetated (Parks, Urban Forests)
Proportion of area with decreased standard deviation, %        70.1    72.0       60.0
Proportion of area with decreased 95% interquantile range, %   81.0    83.8       69.5
