Open Access Article

Building Footprint Extraction from High-Resolution Images via Spatial Residual Inception Convolutional Neural Network

1 School of Geography and Planning, Sun Yat-Sen University, West Xingang Road, Guangzhou 510275, China
2 Guangdong Key Laboratory for Urbanization and Geo-simulation, Sun Yat-Sen University, West Xingang Road, Guangzhou 510275, China
3 School of Geographical Sciences, Guangzhou University, West Waihuan Street/Road, Guangzhou 510006, China
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(7), 830; https://doi.org/10.3390/rs11070830
Received: 19 February 2019 / Revised: 22 March 2019 / Accepted: 3 April 2019 / Published: 7 April 2019
(This article belongs to the Special Issue Advanced Topics in Remote Sensing)
Rapid developments in deep learning and computer vision have introduced new opportunities and paradigms for building extraction from remote sensing images. In this paper, we propose a novel fully convolutional network (FCN) that incorporates a spatial residual inception (SRI) module, which captures and aggregates multi-scale contexts for semantic understanding by successively fusing multi-level features. The proposed SRI-Net accurately detects large buildings that might otherwise be easily omitted, while retaining global morphological characteristics and local details. In addition, to improve computational efficiency, depthwise separable convolutions and convolution factorization are introduced to significantly decrease the number of model parameters. The proposed model is evaluated on the Inria Aerial Image Labeling Dataset and the Wuhan University (WHU) Aerial Building Dataset. The experimental results show that the proposed method exhibits significant improvements over several state-of-the-art FCNs, including SegNet, U-Net, RefineNet, and DeepLab v3+. The proposed model shows promising potential for large-scale building detection from remote sensing images.
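The parameter savings the abstract attributes to depthwise separable convolutions can be illustrated with simple counting. The sketch below is not taken from SRI-Net; the channel sizes and kernel size are illustrative assumptions chosen only to show the general ratio.

```python
# Parameter-count comparison: standard k x k convolution vs. a depthwise
# separable convolution (depthwise k x k conv + 1 x 1 pointwise conv).
# Channel sizes are hypothetical, not the paper's actual configuration.

def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (biases omitted)."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel),
    followed by a 1 x 1 pointwise conv mixing channels."""
    return c_in * k * k + c_in * c_out

c_in, c_out, k = 256, 256, 3
std = conv_params(c_in, c_out, k)       # 589,824 weights
sep = separable_params(c_in, c_out, k)  # 67,840 weights
print(std, sep, round(std / sep, 1))    # roughly an 8.7x reduction
```

For 3 x 3 kernels the separable form needs roughly k^2 = 9 times fewer multiply-accumulates and weights at large channel counts, which is why it is a common efficiency device in semantic segmentation backbones.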
Keywords: semantic segmentation; high-resolution image; building footprint extraction; fully convolutional network; multi-scale contexts
MDPI and ACS Style

Liu, P.; Liu, X.; Liu, M.; Shi, Q.; Yang, J.; Xu, X.; Zhang, Y. Building Footprint Extraction from High-Resolution Images via Spatial Residual Inception Convolutional Neural Network. Remote Sens. 2019, 11, 830.

