Fusion of LiDAR Point Clouds and Optical Images

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: closed (30 April 2017) | Viewed by 77207

Special Issue Editors


Guest Editor
President of the National Quality Inspection and Testing Center for Surveying and Mapping Products and full professor at the Chinese Academy of Surveying and Mapping (CASM), Beijing 100830, China
Interests: image processing; geographic information systems; digital photogrammetry; pattern recognition and intelligent control

Guest Editor
Chinese Academy of Surveying and Mapping, No. 28 Lianhuachixi Road, Beijing 100830, China
Interests: road extraction; vehicle extraction; cropland extraction; object-based image analysis; data fusion; information extraction from LiDAR point clouds

Special Issue Information

Dear Colleagues,

Optical images and LiDAR (Light Detection and Ranging) point clouds are two major types of data sources in photogrammetry and remote sensing, computer vision, pattern recognition, machine learning, and related fields. However, the two data types have quite different histories. Images, collected by various types of cameras and imaging spectrometers, have a long history; by comparison, LiDAR point clouds, acquired by the more recent laser scanning technique, are a relatively new data type.

The relative advantages of one type of data source over the other have been a topic of study and discussion over the last two decades. After weighing the merits and demerits of both, some researchers prefer LiDAR point clouds to images, while other scholars argue that image-based photogrammetry retains a continued role in both industry and scientific research. Moreover, engineering applications suggest that each type of data has its place in practice. Recently, more and more researchers have concluded that optical imagery and LiDAR point clouds have distinct characteristics that make each preferable for certain applications, and that fusing them achieves better performance in many applications than can be achieved with a single data source. The fusion of LiDAR point clouds and imagery has been applied in many areas, including registration, true orthophoto generation, pixel-based image pan-sharpening, classification, target recognition, 3D reconstruction, change detection, and forest inventory.

Against this background, this Special Issue will document the methodologies, developments, techniques and applications of “Fusion of LiDAR Point Clouds and Optical Images”. Well-prepared, unpublished submissions that address one or more of the following topics are solicited:

  • Registration
  • Generation of digital true orthophotographs
  • Land use and land cover classification
  • Ground detection
  • Road detection
  • Building detection
  • Vehicle detection
  • 3D building reconstruction
  • 3D city reconstruction
  • 3D reconstruction of cultural heritage
  • Change detection
  • Forest inventory
  • Tree species classification
  • Individual tree delineation
  • Forest parameter estimation
  • Biomass estimation
  • Population estimation

Prof. Dr. Jixian Zhang
Dr. Xiangguo Lin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website, then proceeding to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (9 papers)


Research

Article
Mapping Spartina alterniflora Biomass Using LiDAR and Hyperspectral Data
by Jing Wang, Zhengjun Liu, Haiying Yu and Fangfang Li
Remote Sens. 2017, 9(6), 589; https://doi.org/10.3390/rs9060589 - 10 Jun 2017
Cited by 26 | Viewed by 6344
Abstract
Large-scale coastal reclamation has caused significant changes in Spartina alterniflora (S. alterniflora) distribution in coastal regions of China. However, few studies have focused on estimating wetland vegetation biomass, especially that of S. alterniflora, in coastal regions using LiDAR and hyperspectral data. In this study, the applicability of LiDAR and hyperspectral data for estimating S. alterniflora biomass and mapping its distribution in coastal regions of China was explored, with the aim of addressing the biomass-estimation problems caused by differing vegetation types and canopy heights. Results showed that the variable most strongly correlated with S. alterniflora biomass was vegetation canopy height (0.817), followed by the Normalized Difference Vegetation Index (NDVI) (0.635), the Atmospherically Resistant Vegetation Index (ARVI) (0.631), the Visible Atmospherically Resistant Index (VARI) (0.599), and the Ratio Vegetation Index (RVI) (0.520). A multivariate linear estimation model of S. alterniflora biomass using backward variable elimination was developed, with an R-squared of 0.902 and a residual predictive deviation (RPD) of 2.62. The model accuracy for S. alterniflora biomass was higher than that for mixed wetland vegetation types because the model reduced the estimation errors caused by differences in spectral features and canopy heights among wetland vegetation types. The estimated S. alterniflora biomass agreed with the field survey results. Owing to its basis in the fusion of LiDAR and hyperspectral data, the proposed method provides an advantage for S. alterniflora mapping: integrating high-spatial-resolution hyperspectral imagery with LiDAR-derived canopy height significantly improved the accuracy of mapping S. alterniflora biomass.
(This article belongs to the Special Issue Fusion of LiDAR Point Clouds and Optical Images)
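The multivariate linear model with backward elimination described in this abstract can be sketched as follows. This is a minimal stdlib-Python illustration of the technique, not the authors' implementation; the predictor names (canopy height, NDVI, a junk variable) and the synthetic data are assumptions.

```python
# Sketch: multivariate linear regression with backward elimination,
# dropping predictors while the adjusted R^2 does not decrease.

def solve(a, b):
    """Solve a small linear system a.x = b by Gauss-Jordan elimination."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda k: abs(m[k][col]))
        m[col], m[piv] = m[piv], m[col]
        for k in range(n):
            if k != col and m[col][col]:
                f = m[k][col] / m[col][col]
                m[k] = [x - f * y for x, y in zip(m[k], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def ols(rows, y):
    """Least squares with intercept; returns (coefficients, R^2)."""
    X = [[1.0] + r for r in rows]
    p = len(X[0])
    xtx = [[sum(xr[i] * xr[j] for xr in X) for j in range(p)] for i in range(p)]
    xty = [sum(xr[i] * yi for xr, yi in zip(X, y)) for i in range(p)]
    beta = solve(xtx, xty)
    pred = [sum(b * xi for b, xi in zip(beta, xr)) for xr in X]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return beta, 1.0 - ss_res / ss_tot

def backward_eliminate(names, rows, y):
    """Drop predictors one by one while adjusted R^2 does not decrease."""
    def adj_r2(cols):
        sub = [[r[c] for c in cols] for r in rows]
        _, r2 = ols(sub, y)
        n, p = len(y), len(cols)
        return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
    cols = list(range(len(names)))
    while len(cols) > 1:
        best = max(([c2 for c2 in cols if c2 != c] for c in cols), key=adj_r2)
        if adj_r2(best) >= adj_r2(cols) - 1e-12:
            cols = best
        else:
            break
    sub = [[r[c] for c in cols] for r in rows]
    beta, _ = ols(sub, y)
    return [names[c] for c in cols], beta
```

On a synthetic dataset where biomass depends only on height and NDVI, the procedure drops the irrelevant predictor and recovers the generating coefficients.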

Article
An Improved RANSAC for 3D Point Cloud Plane Segmentation Based on Normal Distribution Transformation Cells
by Lin Li, Fan Yang, Haihong Zhu, Dalin Li, You Li and Lei Tang
Remote Sens. 2017, 9(5), 433; https://doi.org/10.3390/rs9050433 - 3 May 2017
Cited by 216 | Viewed by 20504
Abstract
Plane segmentation is a basic task in the automatic reconstruction of indoor and urban environments from unorganized point clouds acquired by laser scanners. As one of the most common plane-segmentation methods, standard Random Sample Consensus (RANSAC) is often used to detect planes one after another. However, it suffers from a spurious-plane problem when noise and outliers exist, due to the uncertainty of randomly sampling a minimal subset of three points. An improved RANSAC method based on Normal Distribution Transformation (NDT) cells is proposed in this study to avoid spurious planes in 3D point-cloud plane segmentation. A planar NDT cell is selected as the minimal sample in each iteration to ensure that the sampled points lie on the same plane surface. The 3D NDT represents the point cloud as a set of NDT cells and models the observed points within each cell with a normal distribution. The geometric appearance of each NDT cell is used to classify it as planar or non-planar. The proposed method is verified on three indoor scenes. The experimental results show that the correctness exceeds 88.5% and the completeness exceeds 85.0%, indicating that the proposed method identifies more reliable and accurate planes than standard RANSAC. It also executes faster. These results validate the suitability of the method.
(This article belongs to the Special Issue Fusion of LiDAR Point Clouds and Optical Images)
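The planar/non-planar cell classification that underpins NDT-cell sampling can be illustrated with a covariance-eigenvalue test: a cell is treated as planar when the smallest eigenvalue of its point covariance is far smaller than the middle one. This stdlib sketch (Jacobi rotations for the 3×3 symmetric eigenproblem, a 0.05 ratio threshold) is an illustrative assumption, not the paper's code.

```python
def covariance(pts):
    """3x3 covariance matrix of a list of (x, y, z) points."""
    n = len(pts)
    mean = [sum(p[i] for p in pts) / n for i in range(3)]
    return [[sum((p[i] - mean[i]) * (p[j] - mean[j]) for p in pts) / n
             for j in range(3)] for i in range(3)]

def sym_eigenvalues(a, sweeps=50):
    """Eigenvalues of a symmetric 3x3 matrix by cyclic Jacobi rotations."""
    import math
    m = [row[:] for row in a]
    for _ in range(sweeps):
        for p in range(3):
            for q in range(p + 1, 3):
                if abs(m[p][q]) < 1e-15:
                    continue
                theta = 0.5 * math.atan2(2 * m[p][q], m[q][q] - m[p][p])
                c, s = math.cos(theta), math.sin(theta)
                r = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
                r[p][p] = r[q][q] = c
                r[p][q], r[q][p] = s, -s
                # m <- R^T m R zeroes the (p, q) off-diagonal entry
                rm = [[sum(r[k][i] * m[k][j] for k in range(3)) for j in range(3)]
                      for i in range(3)]
                m = [[sum(rm[i][k] * r[k][j] for k in range(3)) for j in range(3)]
                     for i in range(3)]
    return sorted(m[i][i] for i in range(3))

def is_planar_cell(pts, ratio=0.05):
    """Planar when the smallest eigenvalue is tiny relative to the middle one."""
    lam = sym_eigenvalues(covariance(pts))
    return lam[0] < ratio * max(lam[1], 1e-12)
```

A thin slab of points passes the test; an isotropic cluster (e.g., cube corners) does not.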

Article
Prediction of Species-Specific Volume Using Different Inventory Approaches by Fusing Airborne Laser Scanning and Hyperspectral Data
by Kaja Kandare, Michele Dalponte, Hans Ole Ørka, Lorenzo Frizzera and Erik Næsset
Remote Sens. 2017, 9(5), 400; https://doi.org/10.3390/rs9050400 - 26 Apr 2017
Cited by 25 | Viewed by 4975
Abstract
Fusion of ALS and hyperspectral data can offer a powerful basis for the discrimination of tree species and enables an accurate prediction of species-specific attributes. In this study, fused airborne laser scanning (ALS) data and hyperspectral images were used to model and predict the total and species-specific volumes based on three forest inventory approaches, namely the individual tree crown (ITC) approach, the semi-ITC approach, and the area-based approach (ABA). The performances of these inventory approaches were analyzed and compared at the plot level in a complex Alpine forest in Italy. For the ITC and semi-ITC approaches, an ITC delineation algorithm was applied. With the ITC approach, the species-specific volumes were predicted with allometric models for each crown segment and aggregated to the total volume. For the semi-ITC and ABA, a multivariate k-most similar neighbor method was applied to simultaneously predict the total and species-specific volumes using leave-one-out cross-validation at the plot level. In both methods, the ALS and hyperspectral variables were important for volume modeling. The total volume of the ITC, semi-ITC, and ABA resulted in relative root mean square errors (RMSEs) of 25.31%, 17.41%, and 30.95% of the mean, and systematic errors (mean differences) of 21.59%, −0.27%, and −2.69% of the mean, respectively. The ITC approach achieved high accuracies but large systematic errors for minority species. For majority species, the semi-ITC performed slightly better than the ABA, with higher accuracies and smaller systematic errors. The results indicated that the semi-ITC outperformed the two other inventory approaches. To conclude, we suggest that the semi-ITC method be further tested and assessed with attention to its potential in operational forestry applications, especially in cases for which accurate species-specific forest biophysical attributes are needed.
(This article belongs to the Special Issue Fusion of LiDAR Point Clouds and Optical Images)
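The k-most-similar-neighbour (k-MSN) prediction used in the semi-ITC and area-based approaches can be sketched as below: a target unit receives the averaged species-specific volumes of its k most similar reference units in feature space. Plain Euclidean distance stands in here for the canonical-correlation-weighted distance of full MSN, and the reference data are hypothetical.

```python
def k_msn_predict(x, refs, k=3):
    """refs: list of (feature_vector, {species: volume}) pairs.
    Returns averaged species-specific volumes of the k nearest references."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    nearest = sorted(refs, key=lambda r: dist(x, r[0]))[:k]
    species = {s for _, vols in nearest for s in vols}
    return {s: sum(vols.get(s, 0.0) for _, vols in nearest) / k for s in species}

def loocv_rmse(refs, k=3, species="spruce"):
    """Leave-one-out cross-validated RMSE for one species, at the unit level."""
    errs = []
    for i, (x, vols) in enumerate(refs):
        train = refs[:i] + refs[i + 1:]
        pred = k_msn_predict(x, train, k).get(species, 0.0)
        errs.append((pred - vols.get(species, 0.0)) ** 2)
    return (sum(errs) / len(errs)) ** 0.5
```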

Article
A Flexible, Generic Photogrammetric Approach to Zoom Lens Calibration
by Zheng Wang, Jon Mills, Wen Xiao, Rongyong Huang, Shunyi Zheng and Zhenhong Li
Remote Sens. 2017, 9(3), 244; https://doi.org/10.3390/rs9030244 - 6 Mar 2017
Cited by 9 | Viewed by 5973
Abstract
Compared with prime lenses, zoom lenses have inherent advantages in terms of operational flexibility. Zoom lens camera systems have therefore been extensively adopted in computer vision where precise measurement is not the primary objective. However, the variation of intrinsic camera parameters with respect to zoom lens settings poses a series of calibration challenges that have inhibited widespread use in close-range photogrammetry. A flexible zoom lens calibration methodology is therefore proposed in this study, developed with the aim of simplifying the calibration process and promoting practical photogrammetric application. A zoom-dependent camera model that incorporates empirical zoom-related intrinsic parameters into the collinearity condition equations is developed. Coefficients of intrinsic parameters are solved in a single adjustment based on this zoom lens camera model. To validate the approach, experiments on both optical- and digital-zoom lens cameras were conducted using a planar board with evenly distributed circular targets. Zoom lens calibration was performed with images taken at four different zoom settings spread throughout the zoom range of a lens. Photogrammetric accuracies achieved through both mono-focal and multi-focal triangulations were evaluated after calibration. The relative accuracies for mono-focal triangulations ranged from 1:6300 to 1:18,400 for the two cameras studied, whereas the multi-focal triangulation accuracies ranged from 1:11,300 to 1:16,200. In order to demonstrate the applicability of the approach, calibrated zoom lens imagery was used to render a laser-scanned point cloud of a building façade. Considered alongside the experimental results, this successful application demonstrates the feasibility of the proposed calibration method, thereby facilitating the adoption of zoom lens cameras in close-range photogrammetry for a wide range of scientific and practical applications.
(This article belongs to the Special Issue Fusion of LiDAR Point Clouds and Optical Images)
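The empirical zoom-dependence of intrinsic parameters at the heart of this camera model can be illustrated by fitting one parameter, such as the focal length, as a low-order polynomial of the zoom setting. The paper solves all coefficients jointly within the collinearity condition equations; this stdlib sketch shows only the polynomial fit, on assumed data.

```python
def polyfit(zs, fs, degree=2):
    """Least-squares polynomial fit of f(z) via normal equations."""
    p = degree + 1
    X = [[z ** j for j in range(p)] for z in zs]
    xtx = [[sum(x[i] * x[j] for x in X) for j in range(p)] for i in range(p)]
    xty = [sum(x[i] * f for x, f in zip(X, fs)) for i in range(p)]
    # Gauss-Jordan elimination with partial pivoting
    m = [row[:] + [xty[i]] for i, row in enumerate(xtx)]
    for col in range(p):
        piv = max(range(col, p), key=lambda k: abs(m[k][col]))
        m[col], m[piv] = m[piv], m[col]
        for k in range(p):
            if k != col:
                fac = m[k][col] / m[col][col]
                m[k] = [a - fac * b for a, b in zip(m[k], m[col])]
    return [m[i][p] / m[i][i] for i in range(p)]

def focal_at(coeffs, z):
    """Evaluate the fitted zoom-dependent parameter at zoom setting z."""
    return sum(c * z ** j for j, c in enumerate(coeffs))
```

Once fitted on a few calibrated zoom settings, the polynomial interpolates the intrinsic parameter at intermediate settings.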

Article
Combining Airborne Laser Scanning and Aerial Imagery Enhances Echo Classification for Invasive Conifer Detection
by Jonathan P. Dash, Grant D. Pearse, Michael S. Watt and Thomas Paul
Remote Sens. 2017, 9(2), 156; https://doi.org/10.3390/rs9020156 - 15 Feb 2017
Cited by 15 | Viewed by 5738
Abstract
The spread of exotic conifers from commercial plantation forests has significant economic and ecological implications. Accurate methods for invasive conifer detection are required to enable monitoring and guide control. In this research, we combined spectral information from aerial imagery with data from airborne laser scanning (ALS) to develop methods to identify invasive conifers using remotely-sensed data. We examined the effect of ALS pulse density and the height threshold of the training dataset on classification accuracy. The results showed that adding spectral values to the ALS metrics/variables in the training dataset led to significant increases in classification accuracy. The most accurate models (kappa range of 0.773–0.837) had either four or five explanatory variables, including ALS elevation, the near-infrared band, and different combinations of ALS intensity and the red and green bands. The best models were found to be relatively invariant to changes in pulse density (1–21 pulses/m²) or the height threshold (0–2 m) used for the inclusion of data in the training dataset. This research has extended and improved the methods for scattered single-tree detection and offered valuable insight into campaign settings for the monitoring of invasive conifers (tree weeds) using remote sensing approaches.
(This article belongs to the Special Issue Fusion of LiDAR Point Clouds and Optical Images)
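The kappa statistics quoted above measure classification agreement corrected for chance agreement. A minimal computation from a confusion matrix (rows = reference, columns = predicted; the matrix in the example is hypothetical):

```python
def cohens_kappa(cm):
    """Cohen's kappa from a square confusion matrix of counts."""
    total = sum(sum(row) for row in cm)
    # observed agreement: fraction on the diagonal
    po = sum(cm[i][i] for i in range(len(cm))) / total
    # expected chance agreement from row and column marginals
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm)
             for i in range(len(cm))) / total ** 2
    return (po - pe) / (1 - pe)
```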

Article
Estimating the Biomass of Maize with Hyperspectral and LiDAR Data
by Cheng Wang, Sheng Nie, Xiaohuan Xi, Shezhou Luo and Xiaofeng Sun
Remote Sens. 2017, 9(1), 11; https://doi.org/10.3390/rs9010011 - 27 Dec 2016
Cited by 86 | Viewed by 8753
Abstract
The accurate estimation of crop biomass during the growing season is very important for crop growth monitoring and yield estimation. The objective of this paper was to explore the potential of hyperspectral and light detection and ranging (LiDAR) data for better estimation of the biomass of maize. First, we investigated the relationship between field-observed biomass and each metric, including vegetation indices (VIs) derived from hyperspectral data and LiDAR-derived metrics. Second, partial least squares (PLS) regression was used to estimate the biomass of maize using VIs only and LiDAR-derived metrics only, respectively. Third, the fusion of hyperspectral and LiDAR data was evaluated for estimating the biomass of maize. Finally, the biomass estimates were validated by a leave-one-out cross-validation (LOOCV) method. Results indicated that all VIs showed weak correlations with field-observed biomass, and the highest correlation occurred when using the red-edge modified simple ratio index (ReMSR). Among all LiDAR-derived metrics, the strongest relationship was observed between the coefficient of variation (HCV) of digital terrain model (DTM)-normalized point elevations and field-observed biomass. The combination of VIs through PLS regression could not improve the biomass estimation accuracy of maize due to the high correlation among VIs. In contrast, HCV combined with the mean height (Hmean) performed better than any single LiDAR-derived metric in biomass estimation (R² = 0.835, RMSE = 374.655 g/m², RMSECV = 393.573 g/m²). Additionally, our findings indicated that the fusion of hyperspectral and LiDAR data provides better biomass estimates of maize (R² = 0.883, RMSE = 321.092 g/m², RMSECV = 337.653 g/m²) than LiDAR or hyperspectral data alone.
(This article belongs to the Special Issue Fusion of LiDAR Point Clouds and Optical Images)
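The leave-one-out cross-validated error (RMSECV) reported above can be sketched for a single-predictor linear model. The paper uses PLS regression on many variables, so the simple one-variable OLS here is a stand-in assumption; the principle (refit on all folds but one, score the held-out sample) is the same.

```python
def rmsecv(xs, ys):
    """Leave-one-out cross-validated RMSE for the model y ~ a + b*x."""
    errs = []
    for i in range(len(xs)):
        # hold out sample i, fit on the rest
        xt = xs[:i] + xs[i + 1:]
        yt = ys[:i] + ys[i + 1:]
        n = len(xt)
        xbar, ybar = sum(xt) / n, sum(yt) / n
        b = sum((x - xbar) * (y - ybar) for x, y in zip(xt, yt)) / \
            sum((x - xbar) ** 2 for x in xt)
        a = ybar - b * xbar
        errs.append((ys[i] - (a + b * xs[i])) ** 2)
    return (sum(errs) / len(errs)) ** 0.5
```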

Article
Building Change Detection Using Old Aerial Images and New LiDAR Data
by Shouji Du, Yunsheng Zhang, Rongjun Qin, Zhihua Yang, Zhengrong Zou, Yuqi Tang and Chong Fan
Remote Sens. 2016, 8(12), 1030; https://doi.org/10.3390/rs8121030 - 17 Dec 2016
Cited by 60 | Viewed by 9474
Abstract
Building change detection is important for urban area monitoring, disaster assessment, and geo-database updating. 3D information derived from dense image matching or airborne light detection and ranging (LiDAR) is very effective for building change detection. However, combining 3D data from different sources is challenging, and so far few studies have focused on building change detection using both images and LiDAR data. This study proposes an automatic method to detect building changes in urban areas using aerial images and LiDAR data. First, dense image matching is carried out to obtain dense point clouds, which are then co-registered to the LiDAR point clouds using the iterative closest point (ICP) algorithm. The registered point clouds are further resampled to a raster DSM (Digital Surface Model). In a second step, height difference and grey-scale similarity are calculated as change indicators, and the graph-cuts method is employed to determine changes while considering contextual information. Finally, the detected results are refined by removing non-building changes: a novel method based on the variance of the normal directions of LiDAR points is proposed to remove vegetated areas for positive building changes (new or taller buildings), and the nEGI (normalized Excessive Green Index) is used for negative building changes (demolished or lowered buildings). To evaluate the proposed method, a test area covering approximately 2.1 km² and containing many different types of buildings is used for the experiment. Results indicate 93% completeness with a correctness of 90.2% for positive changes, and 94% completeness with a correctness of 94.1% for negative changes, demonstrating the promising performance of the proposed method.
(This article belongs to the Special Issue Fusion of LiDAR Point Clouds and Optical Images)
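The change indicators described above, a DSM height difference combined with a greenness test such as nEGI, can be sketched per pixel. The excess-green formulation and the thresholds below are illustrative assumptions, not values from the paper.

```python
def negi(r, g, b):
    """Normalized excess-green index, one common formulation:
    (2G - R - B) / (2G + R + B)."""
    denom = 2 * g + r + b
    return (2 * g - r - b) / denom if denom else 0.0

def change_label(dh, r, g, b, h_thresh=2.5, veg_thresh=0.2):
    """Toy per-pixel rule: dh is the DSM height difference (new minus old).
    A large height loss that is not vegetation counts as a negative
    (demolished/lowered) building change; a large gain as positive."""
    if dh < -h_thresh and negi(r, g, b) < veg_thresh:
        return "negative_change"
    if dh > h_thresh:
        return "positive_change"
    return "no_change"
```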

Article
A Semantic Modelling Framework-Based Method for Building Reconstruction from Point Clouds
by Qingdong Wang, Li Yan, Li Zhang, Haibin Ai and Xiangguo Lin
Remote Sens. 2016, 8(9), 737; https://doi.org/10.3390/rs8090737 - 8 Sep 2016
Cited by 11 | Viewed by 5963
Abstract
Over the past few years, there has been an increasing need for semantic information in automatic city modelling. However, due to the complexity of building structures, the semantic reconstruction of buildings is still a challenging task because it is difficult to extract architectural rules and semantic information from the data. To address these insufficiencies, we present a semantic-modelling-framework-based approach for automated building reconstruction using the semantic information extracted from point clouds or images. In this approach, a semantic modelling framework is designed to describe and generate the building model, and a workflow is established for extracting the semantic information of buildings from an unorganized point cloud and converting it into the semantic modelling framework. The technical feasibility of our method is validated using three airborne laser scanning datasets, and the results are compared comprehensively with other related works, indicating that our approach can simplify the reconstruction process from a point cloud and generate 3D building models with high accuracy and rich semantic information.
(This article belongs to the Special Issue Fusion of LiDAR Point Clouds and Optical Images)

Article
Fusion of WorldView-2 and LiDAR Data to Map Fuel Types in the Canary Islands
by Alfonso Alonso-Benito, Lara A. Arroyo, Manuel Arbelo and Pedro Hernández-Leal
Remote Sens. 2016, 8(8), 669; https://doi.org/10.3390/rs8080669 - 18 Aug 2016
Cited by 33 | Viewed by 6939
Abstract
Wildland fires are one of the factors causing the deepest disturbances to the natural environment, severely threatening many ecosystems as well as economic welfare and public health. Having accurate and up-to-date fuel type maps is essential to properly manage wildland fire risk areas. This research aims to assess the viability of combining Geographic Object-Based Image Analysis (GEOBIA) with the fusion of a WorldView-2 (WV2) image and low-density Light Detection and Ranging (LiDAR) data in order to produce fuel type maps within an area of complex orography and vegetation distribution located on the island of Tenerife (Spain). Independent GEOBIAs were applied to four datasets to create four fuel type maps according to the Prometheus classification. The following fusion methods were compared: Image Stack (IS), Principal Component Analysis (PCA), and Minimum Noise Fraction (MNF), as well as the WV2 image alone. Accuracy assessment of the maps was conducted by comparison against the fuel types assessed in the field. Besides global agreement, disagreement measures due to allocation and quantity were estimated, both globally and by fuel type. This made it possible to better understand the nature of the disagreements associated with each map. The global agreement of the obtained maps varied from 76.23% to 85.43%. Maps obtained through data fusion reached a significantly higher global agreement than the map derived from the WV2 image alone. By integrating LiDAR information into the GEOBIAs, global agreement improvements of over 10% were attained in all cases. No significant differences in global agreement were found among the three classifications performed on WV2 and LiDAR fusion data (IS, PCA, MNF). This study's findings show the validity of the combined use of GEOBIA, high-spatial-resolution multispectral data, and low-density LiDAR data to generate fuel type maps in the Canary Islands.
(This article belongs to the Special Issue Fusion of LiDAR Point Clouds and Optical Images)
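The quantity and allocation disagreement measures used in the accuracy assessment (in the sense of Pontius and Millones) can be computed from a confusion matrix as follows; the example matrix is hypothetical. Quantity disagreement captures mismatched class proportions, allocation disagreement captures mismatched spatial assignment, and the two sum to the total disagreement (one minus overall agreement).

```python
def disagreements(cm):
    """Quantity and allocation disagreement from a confusion matrix
    given as counts or proportions (rows = reference, cols = classified)."""
    total = sum(sum(row) for row in cm)
    p = [[v / total for v in row] for row in cm]
    k = len(p)
    row = [sum(p[i]) for i in range(k)]            # reference marginals
    col = [sum(r[i] for r in p) for i in range(k)]  # classified marginals
    quantity = sum(abs(row[i] - col[i]) for i in range(k)) / 2
    alloc = sum(2 * min(row[i] - p[i][i], col[i] - p[i][i])
                for i in range(k)) / 2
    return quantity, alloc
```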
