Open Access Article

A Multiple-Feature Reuse Network to Extract Buildings from Remote Sensing Imagery

School of Resource and Environment Sciences, Wuhan University, 129 Luoyu Road, Wuhan 430079, China
Collaborative Innovation Centre of Geospatial Technology, Wuhan University, 129 Luoyu Road, Wuhan 430079, China
Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(9), 1350;
Received: 23 June 2018 / Revised: 4 August 2018 / Accepted: 20 August 2018 / Published: 24 August 2018
(This article belongs to the Special Issue Recent Advances in Neural Networks for Remote Sensing)


Automatic building extraction from remote sensing imagery is important in many applications. The success of convolutional neural networks (CNNs) has led to advances in extracting man-made objects from high-resolution imagery. However, the large variations in building appearance and size make it difficult to extract both crowded small buildings and large buildings. Moreover, because GPU memory limits require high-resolution imagery to be segmented into patches before being fed to a CNN, a building is often only partially contained in a single patch, leaving little context information. To overcome the problems that arise when common CNN models use different levels of image features, this paper proposes a novel CNN architecture, the multiple-feature reuse network (MFRN), in which each layer is connected to all subsequent layers of the same size, enabling the direct reuse of the hierarchical features produced by each layer. In addition, the model includes a smart decoder that enables precise localization with a lighter GPU load. We tested our model on a large real-world remote sensing dataset and obtained 94.5% overall accuracy and an 85% F1 score, outperforming the compared CNN models, including a 56-layer fully convolutional DenseNet (93.8% overall accuracy, 83.5% F1 score). The experimental results indicate that the MFRN's approach to connecting convolutional layers improves the performance of common CNN models in extracting buildings of different sizes and achieves high accuracy on a consumer-level GPU.
Keywords: building extraction; deep learning; CNN; FCN
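The layer connectivity the abstract describes (each layer connected to all subsequent layers of the same size) is a DenseNet-style feature-reuse pattern. The NumPy sketch below is illustrative only, with hypothetical helper names and random projections standing in for learned 3x3 convolutions; it shows the connectivity and channel growth, not the paper's actual MFRN model:

```python
import numpy as np

def conv3x3_stub(x, out_channels, rng):
    """Stand-in for a learned 3x3 conv + ReLU: a random channel-mixing
    projection that keeps the spatial size (illustrative only)."""
    w = rng.standard_normal((out_channels, x.shape[0]))
    y = np.tensordot(w, x, axes=(1, 0))  # (out_channels, H, W)
    return np.maximum(y, 0.0)            # ReLU

def dense_block(x, growth_rate, num_layers, rng):
    """Each layer receives the concatenation of ALL earlier feature maps
    of the same spatial size -- the feature reuse the abstract describes."""
    features = [x]
    for _ in range(num_layers):
        reused = np.concatenate(features, axis=0)  # reuse every prior layer
        features.append(conv3x3_stub(reused, growth_rate, rng))
    return np.concatenate(features, axis=0)        # output keeps all features

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))  # 8 input channels, one 16x16 patch
out = dense_block(x, growth_rate=4, num_layers=3, rng=rng)
print(out.shape)  # channels grow by growth_rate per layer: 8 + 3*4 = 20
```

Because every layer's output is carried forward, the channel count grows linearly with depth, which is why such blocks can reuse low-level and high-level features directly without extra skip connections.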


This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Li, L.; Liang, J.; Weng, M.; Zhu, H. A Multiple-Feature Reuse Network to Extract Buildings from Remote Sensing Imagery. Remote Sens. 2018, 10, 1350.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Remote Sens. EISSN 2072-4292, published by MDPI AG, Basel, Switzerland.