
Advances in the Application of Lidar

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Engineering Remote Sensing".

Deadline for manuscript submissions: closed (15 January 2025) | Viewed by 19151

Special Issue Editors

School of Public Policy and Urban Affairs, Northeastern University, Boston, MA, USA
Interests: landscape mapping; object-based image analysis using LiDAR; machine learning algorithms
Special Issues, Collections and Topics in MDPI journals

Special Issue Information

Dear Colleagues,

LiDAR (light detection and ranging; also written LIDAR or LADAR) has advanced rapidly since its emergence in the 1960s. It is an active remote sensing technology that measures distance from the speed of light and the time a laser pulse takes to travel from the sensor to the target and back. The most common information provided by LiDAR is the elevation and structural profile of the terrain surface. Applications of LiDAR include flood risk assessment, terrain modeling, and tree height measurement. There are also advanced approaches that fuse LiDAR data with high-spatial- and high-spectral-resolution imagery for ground cover classification and target recognition. In recent decades, remote sensing technology has evolved dramatically, with better-quality LiDAR products as well as the rapid development of machine and deep learning algorithms. These advances lead the remote sensing community to further explore advanced applications of LiDAR data.
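The ranging principle above reduces to a one-line formula: the round-trip travel time of a laser pulse, multiplied by the speed of light and halved, gives the sensor-to-target distance. A minimal sketch in Python (the pulse time below is an illustrative value, not from any particular sensor):

```python
# Basic LiDAR ranging: a pulse travels to the target and back,
# so the one-way distance is half the round-trip time times the speed of light.
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Return the sensor-to-target distance in meters."""
    return C * round_trip_seconds / 2.0

# A return echo after roughly 6.67 microseconds corresponds to about 1 km.
d = range_from_time_of_flight(6.671e-6)
```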

This Special Issue invites studies that provide insights into the most advanced LiDAR remote sensing and its applications at local, regional, or global scales. Topics may range from advances in the physical principles or data processing of LiDAR to its use in agriculture and forestry, urban applications, change detection, and the geosciences. In addition, multisource data fusion with LiDAR, classification algorithm development, and accuracy assessment are all welcome.

Suggested themes and article types for submissions:

  • LiDAR data fusion technologies and algorithm development;
  • LiDAR applications in forestry, e.g., individual tree mapping and canopy mapping;
  • LiDAR applications in agriculture, e.g., crop planning and yield forecasting;
  • LiDAR applications in urban areas, e.g., road extraction and building extraction;
  • LiDAR applications in disaster management, e.g., post-flooding mapping;
  • LiDAR applications in the geosciences, e.g., surface hydrology and fluvial landforms.

Dr. Fang Fang
Dr. Yaqian He
Dr. Qinghua Xie
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • LiDAR remote sensing
  • change detection
  • remote sensing
  • classification
  • machine learning
  • LiDAR in forestry

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research

Jump to: Review

21 pages, 12712 KiB  
Article
A Feature Line Extraction Method for Building Roof Point Clouds Considering the Grid Center of Gravity Distribution
by Jinzheng Yu, Jingxue Wang, Dongdong Zang and Xiao Xie
Remote Sens. 2024, 16(16), 2969; https://doi.org/10.3390/rs16162969 - 13 Aug 2024
Cited by 3 | Viewed by 1238
Abstract
Feature line extraction for building roofs is a critical step in the 3D model reconstruction of buildings. A feature line extraction algorithm for building roof point clouds based on the linear distribution characteristics of neighborhood points was proposed in this study. First, the virtual grid was utilized to provide local neighborhood information for the point clouds, aiding in identifying the linear distribution characteristics of the grid center-of-gravity points on the feature line and determining the potential feature point set in the original point clouds. Next, initial segment elements were selected from the feature point set, and the iterative growth of these initial segment elements was performed by combining the RANSAC linear fitting algorithm with a distance constraint. Compatibility was used to determine whether growing results should be merged to obtain roof feature lines. Lastly, according to the distribution characteristics of the original points near the feature lines, the endpoints of the feature lines were determined and optimized. Experiments were conducted using two representative building datasets. The results showed that the proposed algorithm could directly extract high-quality roof feature lines from point clouds for both single buildings and multiple buildings. Full article
(This article belongs to the Special Issue Advances in the Application of Lidar)
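The RANSAC linear fitting step mentioned in the abstract can be illustrated with a minimal 2-D sketch. This is a generic RANSAC line fit on hypothetical points with an invented tolerance, not the authors' grid-based pipeline:

```python
import math
import random

def ransac_line(points, n_iters=200, inlier_tol=0.05, seed=0):
    """Fit a dominant 2-D line to noisy points with RANSAC.

    Returns ((p, q), inliers): two sample points defining the best line,
    and the list of points within `inlier_tol` perpendicular distance of it.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        p, q = rng.sample(points, 2)
        dx, dy = q[0] - p[0], q[1] - p[1]
        norm = math.hypot(dx, dy)
        if norm == 0:
            continue
        # Perpendicular distance of each point to the line through p and q.
        inliers = [pt for pt in points
                   if abs(dy * (pt[0] - p[0]) - dx * (pt[1] - p[1])) / norm <= inlier_tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (p, q), inliers
    return best_model, best_inliers

# 20 collinear points on y = x plus two gross outliers.
pts = [(i * 0.1, i * 0.1) for i in range(20)] + [(0.5, 3.0), (1.2, -2.0)]
model, inliers = ransac_line(pts)
# The 20 collinear points survive; the two outliers are rejected.
```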

28 pages, 19723 KiB  
Article
A Novel Approach for As-Built BIM Updating Using Inertial Measurement Unit and Mobile Laser Scanner
by Yuchen Yang, Yung-Tsang Chen, Craig Hancock, Nicholas A. S. Hamm and Zhiang Zhang
Remote Sens. 2024, 16(15), 2743; https://doi.org/10.3390/rs16152743 - 26 Jul 2024
Cited by 1 | Viewed by 1138
Abstract
Building Information Modeling (BIM) has recently been widely applied in the Architecture, Engineering, and Construction (AEC) industry. BIM graphical information can provide a more intuitive display of the building and its contents. However, during the Operation and Maintenance (O&M) stage of the building lifecycle, changes may occur in the building’s contents and cause inaccuracies in the BIM model, which could lead to inappropriate decisions. This study aims to address this issue by proposing a novel approach to creating 3D point clouds for updating as-built BIM models. The proposed approach is based on Pedestrian Dead Reckoning (PDR) for an Inertial Measurement Unit (IMU) integrated with a Mobile Laser Scanner (MLS) to create room-based 3D point clouds. Unlike conventional methods, in which a Terrestrial Laser Scanner (TLS) is used, the proposed approach combines a low-cost MLS with an IMU for indoor scanning. The approach eliminates the TLS processes of selecting scanning points and leveling, enabling a more efficient and cost-effective creation of the point clouds. Scanning of three buildings with varying sizes and shapes was conducted. The results indicated that the proposed approach created room-based 3D point clouds with centimeter-level accuracy; it also proved to be more efficient than the TLS in updating the BIM models. Full article
(This article belongs to the Special Issue Advances in the Application of Lidar)
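The Pedestrian Dead Reckoning idea underlying the approach can be sketched in a few lines: each detected step advances the position by a step length along the current IMU heading. A toy illustration (the step lengths and headings below are invented values, not the paper's data):

```python
import math

def pdr_track(start, steps):
    """Accumulate 2-D positions from (step_length_m, heading_rad) pairs,
    as in pedestrian dead reckoning: each detected step advances the
    position by its length along the current heading."""
    x, y = start
    track = [(x, y)]
    for length, heading in steps:
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        track.append((x, y))
    return track

# Four 1 m steps east (heading 0), then four 1 m steps north (heading pi/2).
walk = [(1.0, 0.0)] * 4 + [(1.0, math.pi / 2)] * 4
track = pdr_track((0.0, 0.0), walk)
# The walker ends up near (4, 4); heading errors would accumulate as drift.
```

In a real system the heading comes from integrating IMU gyroscope data, and drift in that integration is the main error source.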

25 pages, 15955 KiB  
Article
Combining Cylindrical Voxel and Mask R-CNN for Automatic Detection of Water Leakages in Shield Tunnel Point Clouds
by Qiong Chen, Zhizhong Kang, Zhen Cao, Xiaowei Xie, Bowen Guan, Yuxi Pan and Jia Chang
Remote Sens. 2024, 16(5), 896; https://doi.org/10.3390/rs16050896 - 3 Mar 2024
Cited by 12 | Viewed by 2081
Abstract
Water leakages can affect the safety and durability of shield tunnels, so rapid and accurate identification and diagnosis are urgently needed. However, current leakage detection methods are mostly based on mobile LiDAR data, making it challenging to detect leakage damage in both mobile and terrestrial LiDAR data simultaneously, and the detection results are not intuitive. Therefore, an integrated cylindrical voxel and Mask R-CNN method for water leakage inspection is presented in this paper. This method includes the following three steps: (1) a 3D cylindrical-voxel data organization structure is constructed to transform the tunnel point cloud from disordered to ordered and achieve the projection of the 3D point cloud to a 2D image; (2) automated leakage segmentation and localization is carried out via Mask R-CNN; (3) the segmentation results of water leakage are mapped back to the 3D point cloud based on the cylindrical-voxel structure of the shield tunnel point cloud, achieving the representation of water leakage damage in 3D space. The proposed approach can efficiently detect water leakage not only in mobile laser point cloud data but also in terrestrial laser point cloud data, especially in processing curved tunnel sections. Additionally, it achieves the visualization of water leakage in shield tunnels in 3D space, making the water leakage results more intuitive. Experimental validation was conducted based on MLS and TLS point cloud data collected in Nanjing and Suzhou, respectively. Compared with the currently common detection method that combines cylindrical projection and Mask R-CNN, the proposed method can achieve water leakage detection and 3D visualization in different tunnel scenarios, and its water leakage detection accuracy improved by nearly 10%. Full article
(This article belongs to the Special Issue Advances in the Application of Lidar)
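The projection of a tunnel point cloud onto a 2-D image, which underlies the cylindrical-voxel structure, can be sketched as unrolling each point onto the lining surface. A toy version (the tunnel-axis-along-x convention, the pixel resolutions, and the 2.75 m lining radius are assumptions for illustration, not the paper's parameters):

```python
import math

def cylindrical_project(points, radius, h_res=0.01, v_res=0.01):
    """Map tunnel-lining points (x, y, z) to 2-D pixel indices.

    The tunnel axis is assumed along x; each point is unrolled onto the
    cylinder wall: row = distance along the axis, col = arc length around
    the axis (angle times the lining radius), both quantized to pixels.
    """
    pixels = []
    for x, y, z in points:
        theta = math.atan2(y, z)          # angle around the tunnel axis
        arc = (theta + math.pi) * radius  # unrolled arc length, >= 0
        pixels.append((int(x / h_res), int(arc / v_res)))
    return pixels

# A point on the crown (z = +radius) one meter along the tunnel axis:
px = cylindrical_project([(1.0, 0.0, 2.75)], radius=2.75)
```

Once leakage regions are segmented in the unrolled image, the same mapping run in reverse assigns each labeled pixel back to its source points, which is the idea behind step (3) above.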

31 pages, 15712 KiB  
Article
UAS Quality Control and Crop Three-Dimensional Characterization Framework Using Multi-Temporal LiDAR Data
by Nadeem Fareed, Anup Kumar Das, Joao Paulo Flores, Jitin Jose Mathew, Taofeek Mukaila, Izaya Numata and Ubaid Ur Rehman Janjua
Remote Sens. 2024, 16(4), 699; https://doi.org/10.3390/rs16040699 - 16 Feb 2024
Cited by 4 | Viewed by 2773
Abstract
Information on a crop’s three-dimensional (3D) structure is important for plant phenotyping and precision agriculture (PA). Currently, light detection and ranging (LiDAR) has been proven to be the most effective tool for crop 3D characterization in constrained, e.g., indoor environments, using terrestrial laser scanners (TLSs). In recent years, affordable laser scanners onboard unmanned aerial systems (UASs) have been available for commercial applications. UAS laser scanners (ULSs) have recently been introduced, and their operational procedures are not well investigated particularly in an agricultural context for multi-temporal point clouds. To acquire seamless quality point clouds, ULS operational parameter assessment, e.g., flight altitude, pulse repetition rate (PRR), and the number of return laser echoes, becomes a non-trivial concern. This article therefore aims to investigate DJI Zenmuse L1 operational practices in an agricultural context using traditional point density, and multi-temporal canopy height modeling (CHM) techniques, in comparison with more advanced simulated full waveform (WF) analysis. Several pre-designed ULS flights were conducted over an experimental research site in Fargo, North Dakota, USA, on three dates. The flight altitudes varied from 50 m to 60 m above ground level (AGL) along with scanning modes, e.g., repetitive/non-repetitive, frequency modes 160/250 kHz, return echo modes (1n), (2n), and (3n), were assessed over diverse crop environments, e.g., dry corn, green corn, sunflower, soybean, and sugar beet, near to harvest yet with changing phenological stages. Our results showed that the return echo mode (2n) captures the canopy height better than the (1n) and (3n) modes, whereas (1n) provides the highest canopy penetration at 250 kHz compared with 160 kHz. Overall, the multi-temporal CHM heights were well correlated with the in situ height measurements with an R2 (0.99–1.00) and root mean square error (RMSE) of (0.04–0.09) m. 
Among all the crops, the multi-temporal CHM of the soybeans showed the lowest height correlation, with an R2 of (0.59–0.75) and RMSE of (0.05–0.07) m. We showed that the weaker height correlation for the soybeans occurred due to the selective height underestimation of short crops influenced by crop phenologies. The results indicated that the return echo mode, PRR, flight altitude, and multi-temporal CHM analysis alone were unable to completely decipher the ULS operational practices and the phenological impact on the acquired point clouds. For the first time in an agricultural context, we investigated and showed that crop phenology has a meaningful impact on acquired multi-temporal ULS point clouds compared with ULS operational practices, as revealed by WF analyses. Nonetheless, the present study established a state-of-the-art benchmark framework for ULS operational parameter optimization and 3D crop characterization using ULS multi-temporal simulated WF datasets. Full article
(This article belongs to the Special Issue Advances in the Application of Lidar)
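The canopy height modeling and error metrics reported above can be sketched with toy grids, assuming the common CHM definition of surface elevation minus terrain elevation (all values below are illustrative, not the study's data):

```python
import math

def canopy_height_model(dsm, dtm):
    """Per-cell canopy height: surface model minus terrain model."""
    return [[s - t for s, t in zip(srow, trow)] for srow, trow in zip(dsm, dtm)]

def rmse(predicted, observed):
    """Root-mean-square error between modeled and in situ heights."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed))
                     / len(observed))

# 2 x 2 toy grids (meters): first-return surface vs. bare-ground elevation.
dsm = [[102.3, 101.8], [100.9, 103.1]]
dtm = [[100.0, 100.0], [100.0, 100.0]]
chm = canopy_height_model(dsm, dtm)

# Compare the flattened CHM against hypothetical in situ measurements.
error = rmse([h for row in chm for h in row], [2.3, 1.9, 0.9, 3.0])
```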

19 pages, 8091 KiB  
Article
Scene Classification Method Based on Multi-Scale Convolutional Neural Network with Long Short-Term Memory and Whale Optimization Algorithm
by Yingying Ran, Xiaobin Xu, Minzhou Luo, Jian Yang and Ziheng Chen
Remote Sens. 2024, 16(1), 174; https://doi.org/10.3390/rs16010174 - 31 Dec 2023
Cited by 3 | Viewed by 1795
Abstract
Indoor mobile robots can be localized by using scene classification methods. Recently, two-dimensional (2D) LiDAR has achieved good results in semantic classification with target categories such as room and corridor. However, it is difficult to distinguish different rooms owing to the lack of feature extraction methods in complex environments. To address this issue, a scene classification method based on a multi-scale convolutional neural network (CNN) with long short-term memory (LSTM) and a whale optimization algorithm (WOA) is proposed. Firstly, the distance data obtained from the original LiDAR are converted into a data sequence. Secondly, a scene classification method integrating multi-scale CNN and LSTM is constructed. Finally, WOA is used to tune critical training parameters and optimize network performance. Real scene data containing eight rooms were collected for ablation experiments, in which the proposed algorithm achieved a classification accuracy of 98.87%. Furthermore, experiments on the FR079 public dataset demonstrate that, compared with advanced algorithms, the proposed algorithm achieves the highest classification accuracy, 94.35%. The proposed method can provide technical support for the precise positioning of robots. Full article
(This article belongs to the Special Issue Advances in the Application of Lidar)

20 pages, 4470 KiB  
Article
LiDAR-Generated Images Derived Keypoints Assisted Point Cloud Registration Scheme in Odometry Estimation
by Haizhou Zhang, Xianjia Yu, Sier Ha and Tomi Westerlund
Remote Sens. 2023, 15(20), 5074; https://doi.org/10.3390/rs15205074 - 23 Oct 2023
Cited by 4 | Viewed by 2434
Abstract
Keypoint detection and description play a pivotal role in various robotics and autonomous applications, including Visual Odometry (VO), visual navigation, and Simultaneous Localization And Mapping (SLAM). While a myriad of keypoint detectors and descriptors have been extensively studied in conventional camera images, the effectiveness of these techniques in the context of LiDAR-generated images, i.e., reflectivity and range images, has not been assessed. These images have gained attention due to their resilience in adverse conditions, such as rain or fog. Additionally, they contain significant textural information that supplements the geometric information provided by LiDAR point clouds in the point cloud registration phase, especially when relying solely on LiDAR sensors. This addresses the challenge of drift encountered in LiDAR Odometry (LO) in geometrically identical scenarios or where not all of the raw point cloud is informative and parts may even be misleading. This paper aims to analyze the applicability of conventional image keypoint extractors and descriptors to LiDAR-generated images via a comprehensive quantitative investigation. Moreover, we propose a novel approach to enhance the robustness and reliability of LO. After extracting keypoints, we downsample the point cloud accordingly and integrate it into the point cloud registration phase for odometry estimation. Our experiments demonstrate that the proposed approach has comparable accuracy but reduced computational overhead, a higher odometry publishing rate, and even superior performance in scenarios prone to drift when using the raw point cloud. This, in turn, lays a foundation for subsequent investigations into the integration of LiDAR-generated images with LO. Full article
(This article belongs to the Special Issue Advances in the Application of Lidar)
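A LiDAR-generated range image of the kind analyzed here is typically obtained by spherical projection: each point's azimuth and elevation angles select a pixel, which stores the measured range. A minimal sketch (the 64 x 512 resolution and vertical field of view mimic a spinning multi-beam sensor and are assumptions, not the paper's configuration):

```python
import math

def range_image(points, h_fov=(-math.pi, math.pi), v_fov=(-0.26, 0.09),
                width=512, height=64):
    """Project LiDAR points (x, y, z) into a 2-D range image.

    Each point maps to a pixel via its azimuth (atan2 of y, x) and
    elevation (asin of z / range); the pixel stores the Euclidean range.
    Points outside the fields of view are dropped.
    """
    img = [[0.0] * width for _ in range(height)]
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        if r == 0:
            continue
        az = math.atan2(y, x)
        el = math.asin(z / r)
        u = int((az - h_fov[0]) / (h_fov[1] - h_fov[0]) * (width - 1))
        v = int((el - v_fov[0]) / (v_fov[1] - v_fov[0]) * (height - 1))
        if 0 <= u < width and 0 <= v < height:
            img[v][u] = r
    return img
```

Conventional keypoint detectors can then be run directly on such an image, since it is just a 2-D array of intensities.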

12 pages, 3827 KiB  
Communication
Three-Dimensional Mapping of Habitats Using Remote-Sensing Data and Machine-Learning Algorithms
by Meisam Amani, Fatemeh Foroughnia, Armin Moghimi, Sahel Mahdavi and Shuanggen Jin
Remote Sens. 2023, 15(17), 4135; https://doi.org/10.3390/rs15174135 - 23 Aug 2023
Cited by 6 | Viewed by 2711
Abstract
Progress toward habitat protection goals can effectively be assessed using satellite imagery and machine-learning (ML) models at various spatial and temporal scales. In this regard, habitat types and landscape structures can be discriminated using remote-sensing (RS) datasets. However, most existing research in three-dimensional (3D) habitat mapping primarily relies on same- or cross-sensor features, such as those derived from multibeam Light Detection And Ranging (LiDAR), hydrographic LiDAR, and aerial images, often overlooking the potential benefits of multi-sensor data integration. To address this gap, this study introduced a novel approach to creating 3D habitat maps by using high-resolution multispectral images and a LiDAR-derived Digital Surface Model (DSM) coupled with an object-based Random Forest (RF) algorithm. LiDAR-derived products were also used to improve the accuracy of the habitat classification, especially for habitat classes with similar spectral characteristics but different heights. Two study areas in the United Kingdom (UK) were chosen to explore the accuracy of the developed models. The overall accuracies for the two study areas were high (91% and 82%), which is indicative of the high potential of the developed RS method for 3D habitat mapping. Overall, it was observed that a combination of high-resolution multispectral imagery and LiDAR data could help separate different habitat types and provide reliable 3D information. Full article
(This article belongs to the Special Issue Advances in the Application of Lidar)
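The abstract's key observation, that LiDAR-derived height separates habitat classes with similar spectral signatures, can be illustrated with a toy nearest-neighbour classifier standing in for the paper's object-based Random Forest (the feature values, class names, and the NDVI/height feature pair are all invented for illustration):

```python
def nearest_neighbour(sample, training):
    """Classify `sample` by the label of its nearest training feature vector."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda t: dist2(sample, t[0]))[1]

# Feature vectors: (NDVI, LiDAR height above ground in m) -- toy values.
# Grassland and woodland have near-identical NDVI; height tells them apart.
train = [((0.80, 0.3), "grassland"),
         ((0.82, 15.0), "woodland"),
         ((0.10, 0.1), "bare ground")]

# A tall-canopy pixel whose NDVI happens to match grassland exactly:
spectral_only = nearest_neighbour((0.80,), [((f[0],), c) for f, c in train])
with_height = nearest_neighbour((0.80, 14.0), train)
# Spectral features alone mislabel the pixel; adding height corrects it.
```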

Review

Jump to: Research

25 pages, 3167 KiB  
Review
Application of LiDAR Sensors for Crop and Working Environment Recognition in Agriculture: A Review
by Md Rejaul Karim, Md Nasim Reza, Hongbin Jin, Md Asrakul Haque, Kyu-Ho Lee, Joonjea Sung and Sun-Ok Chung
Remote Sens. 2024, 16(24), 4623; https://doi.org/10.3390/rs16244623 - 10 Dec 2024
Cited by 3 | Viewed by 3485
Abstract
LiDAR sensors have great potential for enabling crop recognition (e.g., plant height, canopy area, plant spacing, and intra-row spacing measurements) and the recognition of agricultural working environments (e.g., field boundaries, ridges, and obstacles) using agricultural field machinery. The objective of this study was to review the use of LiDAR sensors in the agricultural field for the recognition of crops and agricultural working environments. This study also highlights LiDAR sensor testing procedures, focusing on critical parameters, industry standards, and accuracy benchmarks; it evaluates the specifications of various commercially available LiDAR sensors with applications for plant feature characterization and highlights the importance of mounting LiDAR technology on agricultural machinery for effective recognition of crops and working environments. Different studies have shown promising results of crop feature characterization using an airborne LiDAR, such as coefficient of determination (R2) and root-mean-square error (RMSE) values of 0.97 and 0.05 m for wheat, 0.88 and 5.2 cm for sugar beet, and 0.50 and 12 cm for potato plant height estimation, respectively. A relative error of 11.83% was observed between sensor and manual measurements, with the highest distribution correlation at 0.675 and an average relative error of 5.14% during soybean canopy estimation using LiDAR. An object detection accuracy of 100% was found for plant identification using three LiDAR scanning methods: center of the cluster, lowest point, and stem–ground intersection. LiDAR was also shown to effectively detect ridges, field boundaries, and obstacles, which is necessary for precision agriculture and autonomous agricultural machinery navigation. 
Future directions for LiDAR applications in agriculture emphasize the need for continuous advancements in sensor technology, along with the integration of complementary systems and algorithms, such as machine learning, to improve performance and accuracy in agricultural field applications. A strategic framework for implementing LiDAR technology in agriculture includes recommendations for precise testing, solutions for current limitations, and guidance on integrating LiDAR with other technologies to enhance digital agriculture. Full article
(This article belongs to the Special Issue Advances in the Application of Lidar)
