Article

Assessing Deep Convolutional Neural Networks and Assisted Machine Perception for Urban Mapping

by Yang Shao, Austin J. Cooner and Stephen J. Walsh

1 Department of Geography, Virginia Tech, 238 Wallace Hall, Blacksburg, VA 24060, USA
2 Department of Geography, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3220, USA
* Author to whom correspondence should be addressed.
Academic Editor: Garik Gutman
Remote Sens. 2021, 13(8), 1523; https://doi.org/10.3390/rs13081523
Received: 12 March 2021 / Revised: 8 April 2021 / Accepted: 13 April 2021 / Published: 15 April 2021
High-spatial-resolution satellite imagery has been widely applied for detailed urban mapping. Recently, deep convolutional neural networks (DCNNs) have shown promise in certain remote sensing applications, but they remain relatively new techniques for general urban mapping. This study examines the use of two DCNNs (U-Net and VGG16) to provide an automatic schema supporting high-resolution mapping of buildings, road/open built-up areas, and vegetation cover. Using WorldView-2 imagery as input, we first applied an established object-based image analysis (OBIA) method to characterize major urban land cover classes. The OBIA-derived urban map was then divided into training and testing regions to evaluate the DCNNs' performance. For the U-Net mapping, we were particularly interested in how sample size, i.e., the number of image tiles, affects mapping accuracy. U-Net generated cross-validation accuracies ranging from 40.5% to 95.2% for training sample sizes from 32 to 4096 image tiles (each tile was 256 by 256 pixels). A per-pixel accuracy assessment yielded an overall accuracy of 87.8% for the testing region, suggesting good generalization capability. For the VGG16 mapping, we proposed an object-based framing paradigm that retains spatial information and assists machine perception through Gaussian blurring. Gaussian blurring was used as a pre-processing step to enhance the contrast between objects of interest and background (contextual) information. Combined with the pre-trained VGG16 and transfer learning, this approach generated a 77.3% overall accuracy for the per-object assessment. Mapping accuracy could be further improved with more robust segmentation algorithms and a greater quantity and quality of training samples. Our study shows significant promise for DCNN implementation in urban mapping, and our approach can transfer to a number of other remote sensing applications.
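The abstract describes two workflows that can be sketched in code. Below is a minimal, hypothetical Python/Keras sketch (not the authors' published code) of the first workflow: a small U-Net trained on 256 by 256 image tiles cut from multispectral WorldView-2 imagery. The band count matches WorldView-2's eight spectral bands, the three classes follow the abstract, and the network depth and hyperparameters are illustrative assumptions.

    # Minimal U-Net sketch for tile-based urban land cover mapping.
    # Assumptions: Keras/TensorFlow, WorldView-2 tiles already cut to
    # 256 x 256 pixels with 8 bands, per-pixel integer class labels.
    import tensorflow as tf
    from tensorflow.keras import layers, Model

    N_BANDS, N_CLASSES, TILE = 8, 3, 256  # buildings, road/open built-up, vegetation

    def conv_block(x, filters):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

    def build_unet():
        inputs = layers.Input((TILE, TILE, N_BANDS))
        c1 = conv_block(inputs, 32); p1 = layers.MaxPooling2D()(c1)   # encoder
        c2 = conv_block(p1, 64);     p2 = layers.MaxPooling2D()(c2)
        b  = conv_block(p2, 128)                                      # bottleneck
        u2 = layers.Concatenate()([layers.UpSampling2D()(b), c2])     # decoder
        c3 = conv_block(u2, 64)
        u1 = layers.Concatenate()([layers.UpSampling2D()(c3), c1])
        c4 = conv_block(u1, 32)
        outputs = layers.Conv2D(N_CLASSES, 1, activation="softmax")(c4)
        return Model(inputs, outputs)

    model = build_unet()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # tiles:  (n, 256, 256, 8) float32 array of image tiles
    # labels: (n, 256, 256) integer class ids
    # model.fit(tiles, labels, validation_split=0.2, epochs=50, batch_size=16)

The second workflow, object-based framing with Gaussian blurring, can be sketched similarly. In this hedged illustration (an assumption about how such a pipeline could be implemented, not the published method), background pixels around each segmented object are blurred with SciPy's gaussian_filter so the sharp object stands out against softened context, and a pre-trained VGG16 base with a new classification head is trained via transfer learning on the framed chips.

    # Sketch of object-based framing via Gaussian blurring plus VGG16
    # transfer learning. The chip size, sigma, and head layers are
    # illustrative assumptions.
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras import layers, Model

    def frame_object(chip_rgb, object_mask, sigma=5.0):
        """Blur everything outside the object mask; keep the object sharp.

        chip_rgb:    (224, 224, 3) float array, an RGB chip centered on one object
        object_mask: (224, 224) boolean array, True inside the segmented object
        """
        blurred = np.stack([gaussian_filter(chip_rgb[..., b], sigma)
                            for b in range(3)], axis=-1)
        return np.where(object_mask[..., None], chip_rgb, blurred)

    # Transfer learning: freeze the ImageNet-trained convolutional base and
    # train only a small classification head on the framed object chips
    # (chips should be scaled consistently with VGG16 preprocessing).
    base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(256, activation="relu")(x)
    outputs = layers.Dense(3, activation="softmax")(x)  # three urban classes
    model = Model(base.input, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

Both sketches are illustrative only; the paper's reported accuracies depend on the authors' specific architectures, segmentation, and training data.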
Keywords: deep convolutional neural networks; U-Net; VGG16; urban mapping
MDPI and ACS Style

Shao, Y.; Cooner, A.J.; Walsh, S.J. Assessing Deep Convolutional Neural Networks and Assisted Machine Perception for Urban Mapping. Remote Sens. 2021, 13, 1523. https://doi.org/10.3390/rs13081523

AMA Style

Shao Y, Cooner AJ, Walsh SJ. Assessing Deep Convolutional Neural Networks and Assisted Machine Perception for Urban Mapping. Remote Sensing. 2021; 13(8):1523. https://doi.org/10.3390/rs13081523

Chicago/Turabian Style

Shao, Yang, Austin J. Cooner, and Stephen J. Walsh. 2021. "Assessing Deep Convolutional Neural Networks and Assisted Machine Perception for Urban Mapping" Remote Sensing 13, no. 8: 1523. https://doi.org/10.3390/rs13081523

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
