Special Issue "Remote Sensing based Building Extraction"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Urban Remote Sensing".

Deadline for manuscript submissions: 31 October 2019

Special Issue Editors

Guest Editor
Dr. Mohammad Awrangjeb

Institute for Integrated and Intelligent Systems, School of Information and Communication Technology, Griffith University, Australia
Phone: +61 7 373 55032
Interests: building detection and modeling; building change detection; rooftop solar potential estimation; forest tree modeling and biomass estimation; image retrieval and transformed image identification; multi-modal image registration and data fusion; multimedia security
Guest Editor
Prof. Xiangyun Hu

School of Remote Sensing and Information Engineering, Wuhan University, 129 Luoyu Road, Wuhan, Hubei province, 430079, China
Interests: feature extraction; computer vision; pattern recognition; LiDAR data processing; machine learning
Guest Editor
Prof. Bisheng Yang

Professor in GeoInformatics, State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS), Wuhan University, Wuhan, Hubei, 430072, China
Interests: laser scanning; mobile mapping; UAV mapping; point cloud processing; 3D scene understanding; GIS applications
Guest Editor
Dr. Jiaojiao Tian

Remote Sensing Technology Institute, German Aerospace Center (DLR), Muenchener Strasse 20, 82234, Wessling, Germany
Interests: building extraction; data fusion; 2D/3D change detection; computer vision; 3D reconstruction; classification

Special Issue Information

Dear Colleagues,

The rapid growth of sensor technologies, such as airborne and terrestrial laser scanning, and satellite and aerial imaging systems, poses unique challenges in the detection, extraction and modelling of buildings from remote sensing data. In fact, building detection, boundary extraction, and rooftop modelling from remotely-sensed data are important to various applications, such as the real estate industry, city planning, homeland security, automatic solar potential estimation, and disaster (flood or bushfire) management. The automated extraction of building boundaries is a crucial step towards generating city models. In addition, automatic building change detection is vital for monitoring urban growth and locating illegal building extensions.

Despite the fact that significant research has been ongoing for more than two decades, the success of automatic building extraction and modelling is still largely impeded by scene complexity, incomplete cue extraction and sensor dependency of data. Vegetation, and especially trees, can be the prime cause of scene complexity and incomplete cue extraction. The situation becomes complex in hilly and densely-vegetated areas where only a few buildings are present, these being surrounded by trees. Important building cues can be completely or partially missed due to occlusions and shadowing from trees. Trees also change colors in different seasons and may be deciduous. Moreover, image quality may vary for the same scene even if images are captured by the same sensor, but at different dates and times. Thus, when the same detection and modelling algorithms are applied to two different sets of data of the same scene, the outcomes may well be different. Particularly, small building structures, such as garden sheds and roof planes, are often missed in low-resolution data. The automatically-generated models either require significant human interaction to fix inaccuracies (as a post-processing step) or are useless in practical applications.

Therefore, intelligent and innovative algorithms are urgently needed for automatic building extraction and modelling to succeed. This Special Issue will focus on newly-developed methods for classification and feature extraction from remote sensing data and will cover (but is not limited to) the following topics:

  • Aerial and satellite data collected from different sensors (VHR, hyperspectral, SAR, LiDAR, UAV, thermal imagery, oblique imagery, etc.);
  • Data analysis and data fusion for building detection, boundary extraction, rooftop modelling, and change detection;
  • Data analysis and data fusion for land cover classification (semantic segmentation, buildings/roads extraction, vehicle detection, land use/cover mapping, etc.).

Moreover, we cordially welcome application papers, covering topics such as change detection, urban growth monitoring, and disaster management, as well as technical reviews.

Dr. Mohammad Awrangjeb
Prof. Xiangyun Hu
Prof. Bisheng Yang
Dr. Jiaojiao Tian
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Building detection
  • Building extraction
  • Roof reconstruction
  • 3D building modelling
  • Building change detection
  • Remote sensing data
  • LiDAR
  • VHR
  • Hyperspectral imagery
  • Multispectral imagery
  • SAR
  • Data fusion
  • Point cloud
  • Aerial imagery
  • Satellite imagery

Published Papers (5 papers)


Research

Open Access Article: Building Extraction in Very High Resolution Imagery by Dense-Attention Networks
Remote Sens. 2018, 10(11), 1768; https://doi.org/10.3390/rs10111768
Received: 6 September 2018 / Revised: 4 November 2018 / Accepted: 6 November 2018 / Published: 8 November 2018
Abstract
Building extraction from very high resolution (VHR) imagery plays an important role in urban planning, disaster management, navigation, updating geographic databases, and several other geospatial applications. Compared with traditional building extraction approaches, deep learning networks have recently shown outstanding performance in this task by using both high-level and low-level feature maps. However, it is difficult to utilize features at different levels rationally with present deep learning networks. To tackle this problem, a novel network based on DenseNets and the attention mechanism, called the dense-attention network (DAN), was proposed. The DAN contains an encoder part and a decoder part, which are separately composed of lightweight DenseNets and a spatial attention fusion module. The proposed encoder–decoder architecture can strengthen feature propagation and effectively use higher-level feature information to suppress low-level features and noise. Experimental results on the public International Society for Photogrammetry and Remote Sensing (ISPRS) datasets, using only red–green–blue (RGB) images, demonstrated that the proposed DAN achieved higher scores (96.16% overall accuracy (OA), a 92.56% F1 score, 90.56% mean intersection over union (MIOU), less training and response time, and a higher quality value) than other deep learning methods.
(This article belongs to the Special Issue Remote Sensing based Building Extraction)
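The listing above does not spell out the fusion step, but a spatial attention fusion module of the kind the abstract describes is commonly implemented by turning high-level features into gates that suppress low-level responses. The sketch below is a minimal, framework-free illustration of that idea; the function names and the element-wise gating form are assumptions for illustration, not the paper's actual DAN module:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def attention_fuse(low, high):
    """Fuse a low-level and a high-level feature map of the same shape.

    The high-level map is squashed to [0, 1] gates via a sigmoid; the
    gates suppress low-level responses wherever the high-level features
    are weak, and the gated low-level map is added to the high-level one.
    """
    fused = []
    for lo_row, hi_row in zip(low, high):
        fused.append([sigmoid(h) * l + h for l, h in zip(lo_row, hi_row)])
    return fused

# Toy 2x2 "feature maps": a strong high-level response at (0, 0)
low  = [[1.0, 1.0], [1.0, 1.0]]
high = [[4.0, -4.0], [0.0, 0.0]]
out = attention_fuse(low, high)
```

In a real network the gates would be learned convolutions rather than a plain sigmoid of the raw activations, but the suppression behaviour is the same: the low-level signal survives only where the high-level evidence is strong.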

Open Access Article: An Effective Data-Driven Method for 3-D Building Roof Reconstruction and Robust Change Detection
Remote Sens. 2018, 10(10), 1512; https://doi.org/10.3390/rs10101512
Received: 13 June 2018 / Revised: 10 September 2018 / Accepted: 19 September 2018 / Published: 21 September 2018
Abstract
Three-dimensional (3-D) reconstruction of building roofs can be an essential prerequisite for 3-D building change detection, which is important for detecting informal buildings or extensions and for updating 3-D map databases. However, automatic 3-D roof reconstruction from remote sensing data is still in its development stage for a number of reasons: there are difficulties in determining the neighbourhood relationships among the planes on a complex building roof; locating step edges from point cloud data often requires additional information or imposes constraints; and missing roof planes require human interaction and often produce high reconstruction errors. This research introduces a new 3-D roof reconstruction technique that constructs an adjacency matrix to define the topological relationships among the roof planes. It identifies any missing planes through an analysis of the 3-D intersection lines between adjacent planes. Then, it generates new planes to fill the gaps left by the missing planes. Finally, it obtains complete building models by inserting approximate wall planes and the building floor. The generated building models are then used to detect 3-D changes in buildings. Plane connections are first defined to establish relationships between neighbouring planes. Then, each building in the reference and test model sets is represented using a graph data structure. Finally, the height intensity images, and if required the graph representations, of the reference and test models are directly compared to find 3-D changes and categorise them into five groups: new, unchanged, demolished, modified, and partially-modified planes. Experimental results on two Australian datasets show high object- and pixel-based accuracy in terms of completeness, correctness, and quality for both the 3-D roof reconstruction and change detection techniques. The proposed change detection technique is robust to various changes, including the addition of a new veranda to, or the removal of an existing veranda from, a building, and an increase in the height of a building.
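The adjacency-matrix and graph-comparison steps described in the abstract can be illustrated with a minimal sketch. The helper names, the label-based plane matching, and the reduced category set are simplifications for illustration only; the paper additionally distinguishes modified and partially-modified planes using height intensity images, which is omitted here:

```python
def adjacency_matrix(n_planes, shared_edges):
    """Build a symmetric adjacency matrix for the roof planes of one
    building; shared_edges lists index pairs of planes that meet along
    a 3-D intersection line."""
    A = [[0] * n_planes for _ in range(n_planes)]
    for i, j in shared_edges:
        A[i][j] = A[j][i] = 1
    return A

def compare_models(ref_planes, test_planes):
    """Toy categorisation of planes between a reference and a test
    model, keyed by plane labels: planes only in the test model are
    'new', only in the reference 'demolished', in both 'unchanged'."""
    ref, test = set(ref_planes), set(test_planes)
    return {
        "new": sorted(test - ref),
        "demolished": sorted(ref - test),
        "unchanged": sorted(ref & test),
    }

# A hip roof with four planes meeting pairwise around the ridge:
A = adjacency_matrix(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
changes = compare_models(["p1", "p2", "p3"], ["p2", "p3", "p4"])
```

The adjacency matrix makes missing-plane detection a neighbourhood query: two planes that should intersect but share no adjacency entry signal a gap to be filled.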

Open Access Article: Detecting Building Edges from High Spatial Resolution Remote Sensing Imagery Using Richer Convolution Features Network
Remote Sens. 2018, 10(9), 1496; https://doi.org/10.3390/rs10091496
Received: 21 August 2018 / Revised: 17 September 2018 / Accepted: 18 September 2018 / Published: 19 September 2018
Abstract
As a basic feature of buildings, building edges play an important role in many fields, such as urbanization monitoring, city planning, and surveying and mapping. Building edge detection from high spatial resolution remote sensing (HSRRS) imagery has always been a long-standing problem. Inspired by the recent success of deep-learning-based edge detection, a building edge detection model using a richer convolutional features (RCF) network is employed in this paper. Firstly, a dataset for building edge detection is constructed by the proposed most peripheral constraint conversion algorithm. Then, the RCF network is retrained on this dataset. Finally, the edge probability map is obtained by the RCF-building model, and a geomorphological concept is used to refine the edge probability map according to a geometric morphological analysis of the topographic surface. The experimental results suggest that the RCF-building model can detect building edges accurately and completely, and that this model has an edge detection F-measure at least 5% higher than those of three other typical building extraction methods. In addition, an ablation experiment proves that using the most peripheral constraint conversion algorithm generates a superior dataset, and the refinement algorithm shows a higher F-measure and better visual results than the non-maximal suppression algorithm.
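The abstract does not define the most peripheral constraint conversion algorithm, but the general task it addresses, deriving edge labels from building masks, can be sketched by keeping only the outermost pixels of each building region. The 4-neighbour rule and the function name below are assumptions for illustration, not the paper's exact algorithm:

```python
def mask_to_edges(mask):
    """Mark a building pixel as an edge pixel when any 4-neighbour is
    background (or lies outside the image), i.e. keep only the most
    peripheral pixels of each building region."""
    h, w = len(mask), len(mask[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not mask[ny][nx]:
                    edges[y][x] = 1
                    break
    return edges

# A 4x4 mask with a solid 3x3 building in the top-left corner;
# only the centre pixel of the building is fully interior.
mask = [
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
]
edges = mask_to_edges(mask)
```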

Open Access Article: Extracting Building Boundaries from High Resolution Optical Images and LiDAR Data by Integrating the Convolutional Neural Network and the Active Contour Model
Remote Sens. 2018, 10(9), 1459; https://doi.org/10.3390/rs10091459
Received: 18 July 2018 / Revised: 30 August 2018 / Accepted: 11 September 2018 / Published: 12 September 2018
Abstract
Identifying and extracting building boundaries from remote sensing data has been one of the hot topics in photogrammetry for decades. The active contour model (ACM) is a robust segmentation method that has been widely used in building boundary extraction, but it often produces biased boundaries due to mixtures of trees and background. Although classification methods can address this efficiently by separating buildings from other objects, they often suffer from ineluctable salt-and-pepper artifacts. In this paper, we combine robust classification with convolutional neural networks (CNN) and the ACM to overcome the current limitations of building boundary extraction algorithms. We conduct two types of experiments: the first integrates the ACM into the CNN training process, whereas the second starts building footprint detection with a CNN and then uses the ACM for post-processing. Assessments at three levels demonstrate that the proposed methods can efficiently extract building boundaries in five test scenes from two datasets. The mean accuracies in terms of the F1 score for the first type (and the second type) of experiment are 96.43 ± 3.34% (95.68 ± 3.22%), 88.60 ± 3.99% (89.06 ± 3.96%), and 91.62 ± 1.61% (91.47 ± 2.58%) at the scene, object, and pixel levels, respectively. The combined CNN and ACM solutions were shown to be effective at extracting building boundaries from high-resolution optical images and LiDAR data.
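The scene-, object-, and pixel-level accuracies above are reported as F1 scores, which are computed from true positives, false positives, and false negatives in the usual way. The counts below are made-up toy numbers, not results from the paper:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision (tp / (tp + fp)) and
    recall (tp / (tp + fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. at the object level: 95 buildings correctly detected,
# 5 false detections, 10 reference buildings missed
f1 = f1_score(tp=95, fp=5, fn=10)
```

At the object level, tp/fp/fn count matched, spurious, and missed buildings; at the pixel level they count individual pixels, which is why the two levels can disagree for the same result.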

Open Access Article: A Boundary Regulated Network for Accurate Roof Segmentation and Outline Extraction
Remote Sens. 2018, 10(8), 1195; https://doi.org/10.3390/rs10081195
Received: 16 June 2018 / Revised: 25 July 2018 / Accepted: 26 July 2018 / Published: 30 July 2018
Abstract
The automatic extraction of building outlines from aerial imagery for the purposes of navigation and urban planning is a long-standing problem in the field of remote sensing. Currently, most methods utilize variants of fully convolutional networks (FCNs), which have significantly improved performance on this task. However, pursuing more accurate segmentation results is still critical for additional applications, such as automatic mapping and building change detection. In this study, we propose a boundary regulated network called BR-Net, which utilizes both local and global information to perform roof segmentation and outline extraction. The BR-Net method consists of a shared backend utilizing a modified U-Net and a multitask framework that generates predictions for segmentation maps and building outlines based on a consistent feature representation from the shared backend. Because of the restriction and regulation provided by the additional boundary information, the proposed model achieves superior performance compared to existing methods. Experiments on an aerial image dataset covering 32 km2 and containing more than 58,000 buildings indicate that our method performs well at both roof segmentation and outline extraction. The proposed BR-Net significantly outperforms the classic FCN8s model. Compared to the state-of-the-art U-Net model, BR-Net achieves improvements of 6.2% (0.869 vs. 0.818), 10.6% (0.772 vs. 0.698), and 8.7% (0.840 vs. 0.773) in F1 score, Jaccard index, and kappa coefficient, respectively.
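The abstract compares models by F1 score, Jaccard index, and kappa coefficient. Since the listing does not define the latter two, here is a minimal sketch of the Jaccard index and Cohen's kappa computed on flattened binary masks; the toy vectors are illustrative only:

```python
def jaccard(pred, truth):
    """Jaccard index (IoU): |intersection| / |union| over flattened
    binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union

def kappa(pred, truth):
    """Cohen's kappa: agreement beyond chance, from the binary
    confusion counts."""
    n = len(pred)
    tp = sum(p & t for p, t in zip(pred, truth))
    tn = sum((1 - p) & (1 - t) for p, t in zip(pred, truth))
    po = (tp + tn) / n                      # observed agreement
    p_yes = (sum(pred) / n) * (sum(truth) / n)
    p_no = ((n - sum(pred)) / n) * ((n - sum(truth)) / n)
    pe = p_yes + p_no                       # chance agreement
    return (po - pe) / (1 - pe)

# Toy flattened 6-pixel masks (1 = building, 0 = background)
pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1]
```

Kappa discounts the agreement a random labeller would achieve, which is why it is a stricter score than plain pixel accuracy on scenes dominated by background.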
