Resilient UAV Autonomy and Remote Sensing

A special issue of Drones (ISSN 2504-446X).

Deadline for manuscript submissions: 31 May 2024 | Viewed by 17096

Special Issue Editors

Dr. Chi Chen
Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS), Wuhan University, Wuhan 430079, China
Interests: image/LiDAR point cloud processing; sensor fusion; SLAM; unmanned systems; remote sensing methods for the power industry

Dr. San Jiang
Guest Editor
1. School of Computer Science, China University of Geosciences, Wuhan 430074, China
2. Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University, Hong Kong 999077, China
Interests: image retrieval; image matching; structure from motion; multi-view stereo; deep learning

Dr. Xijiang Chen
Guest Editor
School of Safety Science and Emergency Management, Wuhan University of Technology, Luoshi Road 122, Wuhan 430079, China
Interests: laser scanning; point cloud segmentation; object recognition; semantic segmentation; image classification; instance segmentation

Dr. Mao Tian
Guest Editor
College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
Interests: multi-view stereo; LiDAR data processing; deep learning; computer vision

Dr. Jianping Li
Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
Interests: UAV; mobile mapping; laser scanning; point cloud; inertial navigation

Dr. Jian Zhou
Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
Interests: autonomous vehicle; mobile mapping; HD map; multi-sensor fusion

Special Issue Information

Dear Colleagues,

With the development of aerial imaging, oblique photogrammetry, laser scanning techniques, and unmanned aircraft systems (UAVs), the accurate and efficient perception, reconstruction, and recognition of large-scale 3D scenes have become popular topics in photogrammetry and computer vision. However, several problems still urgently need to be solved, such as low processing efficiency, difficulty in rendering object details, and the poor robustness of dense 3D reconstruction in poorly textured and occluded areas. Motivated by this rapid development, we are excited to invite you to submit a research paper to this Special Issue of Drones, "UAV Image and LiDAR Processing". UAV data, primarily images and LiDAR point clouds, have been widely used in aerial surveillance, 3D reconstruction and visualization, autonomous driving, and smart cities. This Special Issue aims to promote further applications of UAV data, specifically in the fields of instance segmentation, object detection/tracking, SLAM, SfM, MVS, 3D mesh surface reconstruction, etc. Original submissions aligned with the above-mentioned research areas are highly welcome.

Papers are welcomed from all fields directly related to these topics, including but not limited to the following:

  • Trajectory planning for UAV data acquisition;
  • The fusion of UAV sensor data (image/point clouds/GNSS/IMU);
  • The registration of UAV image/point clouds;
  • Real-time AI in motion planning and control, data gathering and analysis of UAVs;
  • Image/LiDAR feature extraction, matching and bundle adjustment between UAV and UGV;
  • Semantic/instance segmentation, classification, object detection and tracking with UAV data using the deep learning method;
  • 3D reconstructions from UAV image/point clouds;
  • SfM and SLAM using UAV image/LiDAR data;
  • Cooperative perception and mapping utilizing multiple UAVs and UGVs;
  • Mobile edge computing (MEC) in UAVs;
  • UAV image/point cloud processing for inspection, surveillance, GNSS-denied environments (underground/indoor spaces), etc.;
  • UAV image/point cloud processing in the power/oil industry, hydraulics, agriculture, ecology, emergency response, and smart cities.

Dr. Chi Chen
Dr. San Jiang
Dr. Xijiang Chen
Dr. Mao Tian
Dr. Jianping Li
Dr. Jian Zhou
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • drones
  • UAV
  • UAV swarm
  • computer vision
  • photogrammetry
  • remote sensing
  • LiDAR
  • aerial imagery
  • image and point cloud fusion
  • detection and tracking
  • segmentation
  • SLAM
  • path planning
  • 3D reconstruction
  • 3D visualization

Published Papers (8 papers)


Research


20 pages, 4350 KiB  
Article
Easy Rocap: A Low-Cost and Easy-to-Use Motion Capture System for Drones
by Haoyu Wang, Chi Chen, Yong He, Shangzhe Sun, Liuchun Li, Yuhang Xu and Bisheng Yang
Drones 2024, 8(4), 137; https://doi.org/10.3390/drones8040137 - 02 Apr 2024
Viewed by 626
Abstract
Fast and accurate pose estimation is essential for the local motion control of robots such as drones. At present, camera-based motion capture (Mocap) systems are mostly used by robots. However, this kind of Mocap system is easily affected by light noise and camera occlusion, and the cost of common commercial Mocap systems is high. To address these challenges, we propose Easy Rocap, a low-cost, open-source robot motion capture system, which can quickly and robustly capture the accurate position and orientation of the robot. Firstly, based on training a real-time object detector, an object-filtering algorithm using class and confidence is designed to eliminate false detections. Secondly, multiple-object tracking (MOT) is applied to maintain the continuity of the trajectories, and the epipolar constraint is applied to multi-view correspondences. Finally, the calibrated multi-view cameras are used to calculate the 3D coordinates of the markers and effectively estimate the 3D pose of the target robot. Our system takes in real-time multi-camera data streams, making it easy to integrate into the robot system. In the simulation scenario experiment, the average position estimation error of the method is less than 0.008 m, and the average orientation error is less than 0.65 degrees. In the real scenario experiment, we compared the localization results of our method with the advanced LiDAR-Inertial Simultaneous Localization and Mapping (SLAM) algorithm. According to the experimental results, SLAM generates drifts during turns, while our method can overcome the drifts and accumulated errors of SLAM, making the trajectory more stable and accurate. In addition, the pose estimation speed of our system can reach 30 Hz. Full article
(This article belongs to the Special Issue Resilient UAV Autonomy and Remote Sensing)
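The multi-view triangulation step described in this abstract (calibrated cameras plus epipolar-consistent 2D marker detections yielding 3D marker coordinates) can be sketched with standard linear (DLT) triangulation. The camera matrices and marker position below are hypothetical stand-ins, not the authors' setup:

```python
import numpy as np

def triangulate(proj_mats, pixels):
    """Linear (DLT) triangulation of one 3D marker from N calibrated views.

    proj_mats: list of 3x4 camera projection matrices P = K [R | t]
    pixels:    list of (u, v) observations of the same marker
    Returns the 3D point minimising the algebraic reprojection error.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        rows.append(u * P[2] - P[0])   # u * p3^T - p1^T
        rows.append(v * P[2] - P[1])   # v * p3^T - p2^T
    _, _, Vt = np.linalg.svd(np.stack(rows))
    X = Vt[-1]
    return X[:3] / X[3]                # dehomogenise

# Hypothetical two-camera setup: identical intrinsics, second camera
# shifted 1 m along the x-axis, both looking down +z.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.2, 0.3, 5.0])    # marker position in world frame
X_est = triangulate([P1, P2], [project(P1, X_true), project(P2, X_true)])
```

In the paper's pipeline this step would run per marker after detection, filtering, and cross-view matching; here noise-free observations recover the point exactly.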

23 pages, 15746 KiB  
Article
IMUC: Edge–End–Cloud Integrated Multi-Unmanned System Payload Management and Computing Platform
by Jie Tang, Ruofei Zhong, Ruizhuo Zhang and Yan Zhang
Drones 2024, 8(1), 19; https://doi.org/10.3390/drones8010019 - 12 Jan 2024
Viewed by 1282
Abstract
Multi-unmanned systems are primarily composed of unmanned vehicles, drones, and multi-legged robots, among other unmanned robotic devices. By integrating and coordinating the operation of these robotic devices, it is possible to achieve collaborative multitasking and autonomous operations in various environments. In the field of surveying and mapping, the traditional single-type unmanned device data collection mode is no longer sufficient to meet the data acquisition tasks in complex spatial scenarios (such as low-altitude, surface, indoor, underground, etc.). Faced with the data collection requirements in complex spaces, employing different types of robots for collaborative operations is an important means to improve operational efficiency. Additionally, the limited computational and storage capabilities of unmanned systems themselves pose significant challenges to multi-unmanned systems. Therefore, this paper designs an edge–end–cloud integrated multi-unmanned system payload management and computing platform (IMUC) that combines edge, end, and cloud computing. By utilizing the immense computational power and storage resources of the cloud, the platform enables cloud-based online task management and data acquisition visualization for multi-unmanned systems. The platform addresses the high complexity of task execution in various scenarios by considering factors such as space, time, and task completion. It performs data collection tasks at the end terminal, optimizes processing at the edge, and finally transmits the data to the cloud for visualization. The platform seamlessly integrates edge computing, terminal devices, and cloud resources, achieving efficient resource utilization and distributed execution of computing tasks. Test results demonstrate that the platform can successfully complete the entire process of payload management and computation for multi-unmanned systems in complex scenarios. 
The platform exhibits low response time and produces normal routing results, greatly enhancing operational efficiency in the field. These test results validate the practicality and reliability of the platform, providing a new approach for efficient operations of multi-unmanned systems in surveying and mapping requirements, combining cloud computing with the construction of smart cities. Full article
(This article belongs to the Special Issue Resilient UAV Autonomy and Remote Sensing)

19 pages, 4759 KiB  
Article
Drone Multiline Light Detection and Ranging Data Filtering in Coastal Salt Marshes Using Extreme Gradient Boosting Model
by Xixiu Wu, Kai Tan, Shuai Liu, Feng Wang, Pengjie Tao, Yanjun Wang and Xiaolong Cheng
Drones 2024, 8(1), 13; https://doi.org/10.3390/drones8010013 - 04 Jan 2024
Viewed by 1437
Abstract
Quantitatively characterizing coastal salt-marsh terrains and the corresponding spatiotemporal changes are crucial for formulating comprehensive management plans and clarifying the dynamic carbon evolution. Multiline light detection and ranging (LiDAR) exhibits great capability for terrain measuring for salt marshes with strong penetration performance and a new scanning mode. The prerequisite to obtaining the high-precision terrain requires accurate filtering of the salt-marsh vegetation points from the ground/mudflat ones in the multiline LiDAR data. In this study, a new alternative salt-marsh vegetation point-cloud filtering method is proposed for drone multiline LiDAR based on the extreme gradient boosting (i.e., XGBoost) model. According to the basic principle that vegetation and the ground exhibit different geometric and radiometric characteristics, the XGBoost is constructed to model the relationships of point categories with a series of selected basic geometric and radiometric metrics (i.e., distance, scan angle, elevation, normal vectors, and intensity), where absent instantaneous scan geometry (i.e., distance and scan angle) for each point is accurately estimated according to the scanning principles and point-cloud spatial distribution characteristics of drone multiline LiDAR. Based on the constructed model, the combination of the selected features can accurately and intelligently predict the category of each point. The proposed method is tested in a coastal salt marsh in Shanghai, China by a drone 16-line LiDAR system. The results demonstrate that the averaged AUC and G-mean values of the proposed method are 0.9111 and 0.9063, respectively. 
The proposed method exhibits enhanced applicability and versatility and outperforms the traditional and other machine-learning methods in different areas with varying topography and vegetation-growth status, which shows promising potential for point-cloud filtering and classification, particularly in extreme environments where the terrains, land covers, and point-cloud distributions are highly complicated. Full article
(This article belongs to the Special Issue Resilient UAV Autonomy and Remote Sensing)
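The core classification step above (predicting vegetation vs. ground/mudflat from per-point geometric and radiometric features) can be sketched with a gradient-boosted classifier. Scikit-learn's GradientBoostingClassifier stands in for XGBoost here, and the three features and their distributions are invented for illustration only:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 400

# Hypothetical per-point features in the spirit of the paper:
# [elevation, intensity, vertical component of the surface normal].
# Ground/mudflat: low elevation, high intensity, near-vertical normals.
ground = np.column_stack([rng.normal(0.05, 0.03, n),
                          rng.normal(0.80, 0.05, n),
                          rng.normal(0.95, 0.03, n)])
# Vegetation: higher elevation, lower intensity, scattered normals.
veg = np.column_stack([rng.normal(0.60, 0.20, n),
                       rng.normal(0.40, 0.10, n),
                       rng.normal(0.50, 0.20, n)])

X = np.vstack([ground, veg])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = ground, 1 = vegetation

# Train on every other point, evaluate on the held-out half.
clf = GradientBoostingClassifier(n_estimators=50).fit(X[::2], y[::2])
acc = clf.score(X[1::2], y[1::2])
```

The paper additionally estimates absent scan geometry (distance, scan angle) per point before classification; that reconstruction is specific to the multiline scanner and is omitted here.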

19 pages, 6120 KiB  
Article
MGFNet: A Progressive Multi-Granularity Learning Strategy-Based Insulator Defect Recognition Algorithm for UAV Images
by Zhouxian Lu, Yong Li and Feng Shuang
Drones 2023, 7(5), 333; https://doi.org/10.3390/drones7050333 - 22 May 2023
Viewed by 1098
Abstract
Due to the low efficiency and safety of manual insulator inspection, research on intelligent insulator inspection has gained wide attention. However, most existing defect recognition methods extract abstract features of the entire image directly by convolutional neural networks (CNNs), which lack multi-granularity feature information, rendering the network insensitive to small defects. To address this problem, we propose a multi-granularity fusion network (MGFNet) to diagnose the health status of the insulator. MGFNet includes a traversal clipping module (TC), a progressive multi-granularity learning strategy (PMGL), and a region relationship attention module (RRA). The TC effectively resolves the issue of distortion in insulator images and can provide a more detailed diagnosis of the local areas of insulators. The PMGL acquires multi-granularity features of insulators and combines them to produce more resilient features. The RRA utilizes non-local interactions to better learn the difference between normal and defect features. To eliminate the interference of the UAV images' background, MGFNet can be flexibly combined with object detection algorithms to form a two-stage object detection algorithm, which can accurately identify insulator defects in UAV images. The experimental results show that MGFNet achieves 91.27% accuracy, outperforming other advanced methods. Furthermore, successful deployment on a drone platform has enabled the real-time diagnosis of insulators, further confirming the practical application value of MGFNet. Full article
(This article belongs to the Special Issue Resilient UAV Autonomy and Remote Sensing)

23 pages, 5396 KiB  
Article
Extraction and Mapping of Cropland Parcels in Typical Regions of Southern China Using Unmanned Aerial Vehicle Multispectral Images and Deep Learning
by Shikun Wu, Yingyue Su, Xiaojun Lu, Han Xu, Shanggui Kang, Boyu Zhang, Yueming Hu and Luo Liu
Drones 2023, 7(5), 285; https://doi.org/10.3390/drones7050285 - 24 Apr 2023
Cited by 3 | Viewed by 1408
Abstract
The accurate extraction of cropland distribution is an important issue for precision agriculture and food security worldwide. The complex characteristics in southern China pose great challenges to the extraction. In this study, for the objective of accurate extraction and mapping of cropland parcels in multiple crop growth stages in southern China, we explored a method based on unmanned aerial vehicle (UAV) data and deep learning algorithms. Our method considered cropland size, cultivation patterns, spectral characteristics, and the terrain of the study area. From two aspects—model architecture of deep learning and the data form of UAV—four groups of experiments are performed to explore the optimal method for the extraction of cropland parcels in southern China. The optimal result obtained in October 2021 demonstrated an overall accuracy (OA) of 95.9%, a Kappa coefficient of 89.2%, and an Intersection-over-Union (IoU) of 95.7%. The optimal method also showed remarkable results in the maps of cropland distribution in multiple crop growth stages, with an average OA of 96.9%, an average Kappa coefficient of 89.5%, and an average IoU of 96.7% in August, November, and December of the same year. This study provides a valuable reference for the extraction of cropland parcels in multiple crop growth stages in southern China or regions with similar characteristics. Full article
(This article belongs to the Special Issue Resilient UAV Autonomy and Remote Sensing)
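The reported accuracy figures (OA, Kappa coefficient, IoU) all derive from a confusion matrix; a minimal sketch of the computation, with hypothetical pixel counts rather than the paper's data:

```python
import numpy as np

def accuracy_metrics(cm):
    """Overall accuracy, Cohen's kappa, and per-class IoU from a confusion
    matrix cm, where cm[i, j] counts pixels of true class i predicted as j."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    po = np.trace(cm) / total                       # overall accuracy
    pe = (cm.sum(0) * cm.sum(1)).sum() / total**2   # chance agreement
    kappa = (po - pe) / (1.0 - pe)
    iou = np.diag(cm) / (cm.sum(0) + cm.sum(1) - np.diag(cm))
    return po, kappa, iou

# Hypothetical 2-class (cropland / non-cropland) pixel counts.
oa, kappa, iou = accuracy_metrics([[50, 2], [3, 45]])
```

For this toy matrix OA is 0.95, kappa about 0.90, and cropland IoU 50/55, which mirrors how the paper's percentages would be obtained from its own (much larger) pixel counts.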

19 pages, 3065 KiB  
Article
Visual-Inertial Odometry Using High Flying Altitude Drone Datasets
by Anand George, Niko Koivumäki, Teemu Hakala, Juha Suomalainen and Eija Honkavaara
Drones 2023, 7(1), 36; https://doi.org/10.3390/drones7010036 - 04 Jan 2023
Cited by 6 | Viewed by 6029
Abstract
Positioning of unoccupied aerial systems (UAS, drones) is predominantly based on Global Navigation Satellite Systems (GNSS). Due to potential signal disruptions, redundant positioning systems are needed for reliable operation. The objective of this study was to implement and assess a redundant positioning system for high flying altitude drone operation based on visual-inertial odometry (VIO). A new sensor suite with stereo cameras and an inertial measurement unit (IMU) was developed, and a state-of-the-art VIO algorithm, VINS-Fusion, was used for localisation. Empirical testing of the system was carried out at flying altitudes of 40–100 m, which cover the common flight altitude range of outdoor drone operations. The performance of various implementations was studied, including stereo-visual-odometry (stereo-VO), monocular-visual-inertial-odometry (mono-VIO) and stereo-visual-inertial-odometry (stereo-VIO). The stereo-VIO provided the best results; the flight altitude of 40–60 m was the most optimal for the stereo baseline of 30 cm. The best positioning accuracy was 2.186 m for a 800 m-long trajectory. The performance of the stereo-VO degraded with the increasing flight altitude due to the degrading base-to-height ratio. The mono-VIO provided acceptable results, although it did not reach the performance level of the stereo-VIO. This work presented new hardware and research results on localisation algorithms for high flying altitude drones that are of great importance since the use of autonomous drones and beyond visual line-of-sight flying are increasing and will require redundant positioning solutions that compensate for potential disruptions in GNSS positioning. The data collected in this study are published for analysis and further studies. Full article
(This article belongs to the Special Issue Resilient UAV Autonomy and Remote Sensing)
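The reported degradation of stereo-VO with altitude follows from the base-to-height ratio: for a fixed baseline, stereo depth uncertainty grows roughly quadratically with range. A back-of-the-envelope sketch, in which only the 30 cm baseline comes from the paper; the focal length and disparity noise are assumed values:

```python
# Stereo depth error grows roughly as z^2 / (f * b): a disparity error dd
# (pixels) maps to a depth error of about dz = z**2 * dd / (f * b).
f = 1500.0   # focal length in pixels (assumed)
b = 0.30     # stereo baseline in metres (from the paper)
dd = 0.2     # disparity measurement noise in pixels (assumed)

def depth_error(z):
    return z * z * dd / (f * b)

err_40, err_100 = depth_error(40.0), depth_error(100.0)
# The ratio (100/40)**2 = 6.25 shows why the stereo constraint weakens
# at the upper end of the 40-100 m test range.
```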

23 pages, 13738 KiB  
Article
Graph-Based Image Segmentation for Road Extraction from Post-Disaster Aerial Footage
by Nicholas Paul Sebasco and Hakki Erhan Sevil
Drones 2022, 6(11), 315; https://doi.org/10.3390/drones6110315 - 26 Oct 2022
Cited by 2 | Viewed by 1598
Abstract
This research effort proposes a novel method for identifying and extracting roads from aerial images taken after a disaster using graph-based image segmentation. The dataset that is used consists of images taken by an Unmanned Aerial Vehicle (UAV) at the University of West Florida (UWF) after hurricane Sally. Ground truth masks were created for these images, which divide the image pixels into three categories: road, non-road, and uncertain. A specific pre-processing step was implemented, which used Catmull–Rom cubic interpolation to resize the image. Moreover, the Gaussian filter used in Efficient Graph-Based Image Segmentation is replaced with a median filter, and the color space is converted from RGB to HSV. The Efficient Graph-Based Image Segmentation is further modified by (i) changing the Moore pixel neighborhood to the Von Neumann pixel neighborhood, (ii) introducing a new adaptive isoperimetric quotient threshold function, (iii) changing the distance function used to create the graph edges, and (iv) changing the sorting algorithm so that the algorithm can run more effectively. Finally, a simple function to automatically compute the k (scale) parameter is added. A new post-processing heuristic is proposed for road extraction, and the Intersection over Union evaluation metric is used to quantify the road extraction performance. The proposed method maintains high performance on all of the images in the dataset and achieves an Intersection over Union (IoU) score, which is significantly higher than the score of a similar road extraction technique using K-means clustering. Full article
(This article belongs to the Special Issue Resilient UAV Autonomy and Remote Sensing)
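The segmentation backbone being modified above is Felzenszwalb-Huttenlocher graph-based segmentation; a minimal sketch using the 4-connected (Von Neumann) neighbourhood mentioned in the abstract, on a toy intensity grid. The image, edge weights, and scale parameter k are illustrative; this is the classic threshold function, not the paper's adaptive isoperimetric variant:

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.internal = [0.0] * n   # max internal edge weight per component

    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]
            a = self.parent[a]
        return a

    def union(self, a, b, w):
        a, b = self.find(a), self.find(b)
        self.parent[b] = a
        self.size[a] += self.size[b]
        self.internal[a] = max(self.internal[a], self.internal[b], w)

def segment(img, k=1.0):
    h, w = len(img), len(img[0])
    idx = lambda r, c: r * w + c
    edges = []
    for r in range(h):
        for c in range(w):
            if c + 1 < w:  # right neighbour (Von Neumann)
                edges.append((abs(img[r][c] - img[r][c + 1]), idx(r, c), idx(r, c + 1)))
            if r + 1 < h:  # bottom neighbour (Von Neumann)
                edges.append((abs(img[r][c] - img[r + 1][c]), idx(r, c), idx(r + 1, c)))
    uf = UnionFind(h * w)
    for wgt, a, b in sorted(edges):        # process edges by increasing weight
        ra, rb = uf.find(a), uf.find(b)
        if ra == rb:
            continue
        # Merge only if the edge is no heavier than both components'
        # internal variation plus the scale term k / |component|.
        if wgt <= min(uf.internal[ra] + k / uf.size[ra],
                      uf.internal[rb] + k / uf.size[rb]):
            uf.union(ra, rb, wgt)
    return {uf.find(i) for i in range(h * w)}

# Two flat regions separated by a strong intensity step -> two segments.
toy = [[0, 0, 10, 10],
       [0, 0, 10, 10]]
n_segments = len(segment(toy, k=1.0))
```

The paper's changes (median filtering, HSV, a different distance function, adaptive thresholding, automatic k) all slot into this same skeleton.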

Review


23 pages, 3723 KiB  
Review
Target Localization for Autonomous Landing Site Detection: A Review and Preliminary Result with Static Image Photogrammetry
by Jayasurya Arasur Subramanian, Vijanth Sagayan Asirvadam, Saiful Azrin B. M. Zulkifli, Narinderjit Singh Sawaran Singh, N. Shanthi and Ravi Kumar Lagisetty
Drones 2023, 7(8), 509; https://doi.org/10.3390/drones7080509 - 02 Aug 2023
Cited by 1 | Viewed by 1614
Abstract
The advancement of autonomous technology in Unmanned Aerial Vehicles (UAVs) has piloted a new era in aviation. While UAVs were initially utilized only for the military, rescue, and disaster response, they are now being utilized for domestic and civilian purposes as well. In order to deal with its expanded applications and to increase autonomy, the ability for UAVs to perform autonomous landing will be a crucial component. Autonomous landing capability is greatly dependent on computer vision, which offers several advantages such as low cost, self-sufficiency, strong anti-interference capability, and accurate localization when combined with an Inertial Navigation System (INS). Another significant benefit of this technology is its compatibility with LiDAR technology, Digital Elevation Models (DEM), and the ability to seamlessly integrate these components. The landing area for UAVs can vary, ranging from static to dynamic or complex, depending on their environment. By comprehending these characteristics and the behavior of UAVs, this paper serves as a valuable reference for autonomous landing guided by computer vision and provides promising preliminary results with static image photogrammetry. Full article
(This article belongs to the Special Issue Resilient UAV Autonomy and Remote Sensing)

Planned Papers

The below list represents only planned manuscripts. Some of these manuscripts have not been received by the Editorial Office yet. Papers submitted to MDPI journals are subject to peer-review.

Title: Semantic segmentation of indoor UAV point cloud
Authors: Xijiang Chen; Peng Li; Hui Deng
Affiliation: School of Safety Science and Emergency Management, Wuhan University of Technology, Wuhan, China
Abstract: The extraction of objects in indoor environments is key for many applications, including object identification for UAV indoor navigation, facility management, and building reconstruction. In view of this, this paper uses deep learning to perform point-cloud semantic segmentation. First, we apply an exponential function to construct a density clustering model according to the local density within a cut-off distance. Second, the local density model of different objects is constructed and the constraint distance is determined according to the size of the local density; cluster centres are recognized as points for which the product of the local density and the constraint distance is anomalously large. Third, each point is assigned to a cluster according to its distance to the cluster centres. Finally, deep learning is used to perform fine point-cloud semantic segmentation. Experiments show that the proposed method can perform point-cloud segmentation and is not affected by the point-cloud distance resolution.
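The cluster-centre rule described in this abstract (centres are points whose product of local density and distance-to-a-denser-point is anomalously large) is the density-peaks clustering idea; a minimal numeric sketch, with all data and parameters hypothetical:

```python
import numpy as np

def density_peak_centers(pts, dc, n_centers):
    """Pick cluster centres as points with anomalously large rho * delta,
    where rho is a Gaussian-kernel local density within cut-off dc and
    delta is the distance to the nearest point of higher density."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    rho = np.exp(-(d / dc) ** 2).sum(axis=1) - 1.0     # exclude self
    delta = np.empty(len(pts))
    for i in range(len(pts)):
        higher = rho > rho[i]
        delta[i] = d[i, higher].min() if higher.any() else d[i].max()
    gamma = rho * delta
    centers = np.argsort(gamma)[-n_centers:]           # anomalously large products
    labels = np.argmin(d[:, centers], axis=1)          # nearest-centre assignment
    return centers, labels

# Two hypothetical well-separated blobs of 2D points.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal([0, 0], 0.2, (50, 2)),
                 rng.normal([5, 5], 0.2, (50, 2))])
centers, labels = density_peak_centers(pts, dc=0.5, n_centers=2)
```

In the planned paper this clustering precedes a deep-learning refinement stage; the sketch covers only the centre-selection and assignment steps.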

Title: Superpixel Hierarchical Stereo Matching Using Tree Dynamic Programming
Authors: Mao Tian; Jiajin Fan; Qiaosheng Li
Affiliation: College of Computer Science and Technology, Chongqing University of Posts and Telecommunications
Abstract: With the rapid development of sensor hardware and oblique photogrammetry technologies, stereo matching has been widely used for urban scene 3D modelling. To address the low efficiency and poor robustness of disparity-map reconstruction in traditional stereo matching algorithms, this approach converts the disparity-map reconstruction problem into a slanted-plane-based continuous global energy optimization model; PatchMatch and tree dynamic programming strategies are utilized to optimize the superpixel-based energy model, and experimental data are used to verify the effectiveness and robustness of the proposed algorithm. The experimental results show that the proposed method can quickly and accurately reconstruct the geometric structure of a 3D scene.

Title: Automatic extraction of the DBH and height of individual trees using TLS and UAV LiDAR points
Authors: Huang Xia; Zhu Ningning; Liu Rundong
Affiliation: State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, Hubei, China
Abstract: Forest biomass is an important biophysical parameter describing the function and productivity of forest ecosystems. Biomass models are the main method of estimating forest biomass, and DBH and tree height are the two most important parameters in their construction. In this paper, a method for the automatic extraction of diameter at breast height (DBH) and tree height from TLS and UAV LiDAR points is studied. First, based on the fact that an individual tree is approximately perpendicular to the ground, the TLS and UAV LiDAR points are divided with horizontal grids of different sizes. Second, the number of points and the lowest point, highest point, height difference, etc. in each grid are calculated, and the combination of these features is used to quickly separate individual trees from the TLS and UAV LiDAR points. Third, the DBH is fitted by a RANSAC algorithm using TLS LiDAR point slices of different thicknesses, and the tree height is calculated from the grid heights. Finally, five TLS and UAV LiDAR point clouds are used to verify the effectiveness of the proposed method.
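The RANSAC-based DBH fitting step can be sketched as robust circle fitting on a horizontal trunk slice; the slice data, noise levels, and tolerances below are hypothetical:

```python
import numpy as np

def circle_from_3pts(p1, p2, p3):
    """Circle through three points via x^2 + y^2 + D x + E y + F = 0."""
    A = np.array([[p[0], p[1], 1.0] for p in (p1, p2, p3)])
    b = np.array([-(p[0] ** 2 + p[1] ** 2) for p in (p1, p2, p3)])
    D, E, F = np.linalg.solve(A, b)
    center = np.array([-D / 2.0, -E / 2.0])
    radius = np.sqrt(center @ center - F)
    return center, radius

def ransac_circle(pts, n_iter=200, tol=0.01, seed=0):
    """Sample 3 points per iteration; keep the circle with the most inliers."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(n_iter):
        i, j, k = rng.choice(len(pts), 3, replace=False)
        try:
            c, r = circle_from_3pts(pts[i], pts[j], pts[k])
        except np.linalg.LinAlgError:
            continue   # degenerate (collinear) sample
        inliers = np.sum(np.abs(np.linalg.norm(pts - c, axis=1) - r) < tol)
        if inliers > best_inliers:
            best, best_inliers = (c, r), inliers
    return best

# Hypothetical trunk slice: circle of radius 0.15 m plus gross outliers.
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 80)
slice_pts = np.column_stack([0.15 * np.cos(theta), 0.15 * np.sin(theta)])
slice_pts += rng.normal(0, 0.002, slice_pts.shape)   # sensor noise
outliers = rng.uniform(-0.5, 0.5, (15, 2))           # branches, understorey
pts = np.vstack([slice_pts, outliers])

center, radius = ransac_circle(pts)
dbh = 2.0 * radius   # diameter at breast height
```

In the planned pipeline this fit would be repeated on slices of different thickness around breast height, with tree height read separately from the grid statistics.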

Title: Automatic registration of UAV-based photogrammetry and LiDAR data using individual trees in forestry areas
Authors: Xin Zhao; Rundong Liu; Ruibo Chen; Jianping Li; Chi Chen
Affiliation: 1. State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, Hubei, China 2. Guangxi Zhuang Autonomous Region Remote Sensing Institute of Natural Resources, Nanning 530000, Guangxi, China
Abstract: With the development of digital forestry and precision forestry, the demand for the digital visualization of forest regions is becoming more and more prominent. In the field of 3D modeling of forest areas, UAV-based photogrammetry and LiDAR can provide reliable image data and LiDAR data, respectively. Images contribute rich spectral and texture information, while LiDAR directly provides the three-dimensional coordinates and reflection intensity of ground objects. In forestry areas, ground-object information is scarcer than in urban areas, and it is difficult to obtain complete information from a single data source. Registering the image and LiDAR data can make up for the deficiencies of a single data source and enrich the attribute information of ground objects, so it is of great significance to integrate the two kinds of data. Due to the lack of artificial features in forest areas, features such as building corners and facades used in urban areas cannot be adopted, while individual trees in forests contain obvious geometric features. Therefore, this paper proposes to realize the fusion of point clouds and images by extracting individual-tree features as the basis of registration. For each tree, a linear structure is extracted from the trunk and a spherical structure from the crown; meanwhile, height information is recorded and the planar distances between the tree and other trees within a set radius are measured to form a unique descriptor. The registration of the image and LiDAR data is realized by matching these features.

Title: Automatic individual tree estimation using a low-cost helmet-based laser scanning system
Authors: Weitong Wu; Rundong Liu; Ruibo Chen; Jianping Li
Affiliation: 1. State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, Hubei, China 2. Guangxi Zhuang Autonomous Region Remote Sensing Institute of Natural Resources, Nanning 530000, Guangxi, China
Abstract: Automated individual tree estimation using point clouds plays an increasingly significant role in monitoring forests efficiently, accurately, and completely. This paper presents automatic tree detection with the estimation of two dendrometric variables, diameter at breast height (DBH) and total tree height (TH), using a low-cost helmet-based laser scanning system (HLS). Operative processes for data collection and automatic forest inventory are described in detail. The approach is based on the clustering of points belonging to each individual tree, the isolation of the trunks, the iterative fitting of circles for the DBH calculation, and the computation of the TH of each tree. TLS and HLS point clouds were compared by statistical analysis of both estimated forest dendrometric parameters and the possible presence of bias. Results show that the apparent differences in point density and relative precision between the two 3D forest models do not affect tree detection, and the helmet-based laser scanning system can meet the needs of forest surveys.
