Open Access Article
Remote Sens. 2018, 10(11), 1768; https://doi.org/10.3390/rs10111768

Building Extraction in Very High Resolution Imagery by Dense-Attention Networks

Yang, H. 1,2; Wu, P. 2,3,4,*; Yao, X. 2; Wu, Y. 2,3,4,*; Wang, B. 2,4; Xu, Y. 5
1 School of Resource and Environmental Science, Wuhan University, Wuhan 430079, China
2 School of Resources and Environmental Engineering, Anhui University, Hefei 230601, China
3 Institute of Physical Science and Information Technology, Anhui University, Hefei 230601, China
4 Anhui Engineering Research Center for Geographical Information Intelligent Technology, Hefei 230601, China
5 Department of Information Engineering, China University of Geosciences, Wuhan 430074, China
* Authors to whom correspondence should be addressed.
Received: 6 September 2018 / Revised: 4 November 2018 / Accepted: 6 November 2018 / Published: 8 November 2018
(This article belongs to the Special Issue Remote Sensing based Building Extraction)
Full-Text PDF: 4759 KB, uploaded 8 November 2018

Abstract

Building extraction from very high resolution (VHR) imagery plays an important role in urban planning, disaster management, navigation, updating geographic databases, and several other geospatial applications. Compared with traditional building extraction approaches, deep learning networks have recently shown outstanding performance in this task by using both high-level and low-level feature maps. However, it is difficult for present deep learning networks to exploit features from different levels rationally. To tackle this problem, a novel network based on DenseNets and the attention mechanism was proposed, called the dense-attention network (DAN). The DAN contains an encoder part and a decoder part, which are composed of lightweight DenseNets and a spatial attention fusion module, respectively. The proposed encoder–decoder architecture strengthens feature propagation and effectively uses higher-level feature information to suppress low-level features and noise. Experimental results on the public International Society for Photogrammetry and Remote Sensing (ISPRS) datasets, using only red–green–blue (RGB) images, demonstrated that the proposed DAN achieved higher scores (96.16% overall accuracy (OA), 92.56% F1 score, and 90.56% mean intersection over union (MIoU)), shorter training and response times, and a higher quality value than other deep learning methods.
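The fusion idea in the abstract — using higher-level feature information to suppress low-level features and noise before combining them — can be sketched as a simple spatial gating step. This is a minimal NumPy illustration of that general idea, not the authors' implementation; the function name, the channel-averaged sigmoid attention map, and the additive fusion are all assumptions made for the sketch.

```python
import numpy as np

def sigmoid(x):
    """Elementwise logistic function, mapping values into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention_fusion(low_feat, high_feat):
    """Hypothetical sketch of spatial attention fusion.

    A per-pixel attention map is derived from the high-level feature
    (here by a channel average followed by a sigmoid) and used to gate
    the low-level feature, suppressing it where the high-level feature
    gives low support, before the two are fused by addition.

    low_feat, high_feat: arrays of shape (C, H, W).
    Returns a fused feature of shape (C, H, W).
    """
    # One spatial attention map in (0, 1), broadcast over channels
    attn = sigmoid(high_feat.mean(axis=0, keepdims=True))  # (1, H, W)
    gated_low = low_feat * attn   # low-level responses damped where attention is weak
    return gated_low + high_feat  # fuse gated low-level with high-level feature

# Example: with a zero high-level map, attention is sigmoid(0) = 0.5,
# so a unit low-level feature is halved before fusion.
low = np.ones((2, 4, 4))
high = np.zeros((2, 4, 4))
fused = spatial_attention_fusion(low, high)  # all values 0.5
```

In an actual network the attention map would be learned (e.g. via convolutions on the high-level feature) rather than taken as a fixed channel average; the sketch only shows the gating-and-fuse data flow.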
Keywords: building extraction; deep learning; attention mechanism; very high resolution imagery
Graphical abstract (figure not included)

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Share & Cite This Article

MDPI and ACS Style

Yang, H.; Wu, P.; Yao, X.; Wu, Y.; Wang, B.; Xu, Y. Building Extraction in Very High Resolution Imagery by Dense-Attention Networks. Remote Sens. 2018, 10, 1768.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers. See further details here.

Remote Sens. EISSN 2072-4292, published by MDPI AG, Basel, Switzerland