Article

Deep Learning-Based Damage Detection from Aerial SfM Point Clouds

1 Postdoctoral Research Associate, Department of Civil Engineering, University of Nebraska-Lincoln, Lincoln, NE 68588-0531, USA
2 Graduate Research Assistant, Department of Civil Engineering, University of Nebraska-Lincoln, Lincoln, NE 68588-0531, USA
3 Assistant Professor, Department of Civil Engineering, University of Nebraska-Lincoln, Lincoln, NE 68588-0531, USA
* Author to whom correspondence should be addressed.
Drones 2019, 3(3), 68; https://doi.org/10.3390/drones3030068
Received: 30 June 2019 / Revised: 21 August 2019 / Accepted: 23 August 2019 / Published: 27 August 2019
(This article belongs to the Special Issue Deep Learning for Drones and Its Applications)
Aerial data collection is well known as an efficient method to study the impact of extreme events. While post-disaster remote sensing datasets consist predominantly of images, images alone cannot provide detailed geometric information because they lack depth, and extracting geometric detail from them is complex. In contrast, geometric and color information can readily be mined from three-dimensional (3D) point clouds. Scene classification is commonly studied within the field of machine learning, where a typical workflow computes a series of engineered features for each point and then classifies points based on these features using a learning algorithm. However, such workflows cannot be applied directly to aerial 3D point clouds because of the large number of points, density variation, and differences in object appearance. In this study, the point cloud datasets are converted into a volumetric grid model for the training and testing of 3D fully convolutional network models. The goal of these models is to semantically segment two areas that sustained damage during the 2017 Hurricane Harvey into six classes: damaged structures, undamaged structures, debris, roadways, terrain, and vehicles. These classes were selected to characterize the distribution and intensity of the damage. The point clouds cover two distinct areas and were assembled using aerial Structure-from-Motion (SfM) from a camera mounted on an unmanned aerial system. The two datasets contain approximately 5000 and 8000 unique instances, and the developed methods are assessed quantitatively using precision, accuracy, recall, and intersection over union (IoU) metrics.
Keywords: three-dimensional convolutional neural network; deep learning; unmanned aerial systems; semantic segmentation; point clouds; Hurricane Harvey
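The pipeline the abstract describes has two simple building blocks that can be sketched in code: converting a point cloud into the volumetric occupancy grid a 3D fully convolutional network consumes, and scoring predictions with per-class intersection over union. The sketch below is illustrative only; the function names, the 1 m voxel size, and the binary occupancy encoding are assumptions, not the authors' implementation.

```python
import numpy as np

def voxelize(points, voxel_size=1.0):
    """Map an (N, 3) array of XYZ coordinates to a binary occupancy grid."""
    mins = points.min(axis=0)
    idx = np.floor((points - mins) / voxel_size).astype(int)
    dims = idx.max(axis=0) + 1
    grid = np.zeros(dims, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1  # mark occupied voxels
    return grid

def iou(pred, target, cls):
    """Intersection over union for one class label."""
    p, t = pred == cls, target == cls
    union = np.logical_or(p, t).sum()
    return np.logical_and(p, t).sum() / union if union else 0.0

# Toy cloud: two points roughly one metre apart along x fall in two voxels.
cloud = np.array([[0.2, 0.2, 0.2], [1.4, 0.3, 0.1]])
grid = voxelize(cloud)
print(grid.shape)  # (2, 1, 1)

# Toy segmentation: two of three voxels agree on class 1.
print(iou(np.array([1, 1, 0]), np.array([1, 0, 0]), cls=1))  # 0.5
```

In practice a real voxelization would also carry color or point-density features per cell rather than a single occupancy bit, and IoU would be averaged over the six damage classes.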
MDPI and ACS Style

Mohammadi, M.E.; Watson, D.P.; Wood, R.L. Deep Learning-Based Damage Detection from Aerial SfM Point Clouds. Drones 2019, 3, 68. https://doi.org/10.3390/drones3030068


