Special Issue "Remote Sensing based Building Extraction"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Urban Remote Sensing".

Deadline for manuscript submissions: 31 October 2019.

Special Issue Editors

Guest Editor
Dr. Mohammad Awrangjeb

Institute for Integrated and Intelligent Systems, School of Information and Communication Technology, Griffith University, Australia
Interests: automatic assessment of building rooftop solar potential; building change detection; building reconstruction and modelling from remote sensing data; dynamic data mining; feature extraction and matching; forest tree modeling and biomass estimation; hyperspectral image processing for object recognition and modelling; image retrieval and transformed image identification; lidar point cloud data processing; multi modal image registration and data fusion; multimedia security; watermarking of images and videos
Guest Editor
Prof. Xiangyun Hu

School of Remote Sensing and Information Engineering, Wuhan University, 129 Luoyu Road, Wuhan, Hubei province, 430079, China
Interests: feature extraction; computer vision; pattern recognition; LiDAR data processing; machine learning
Guest Editor
Prof. Bisheng Yang

State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS), Wuhan University, Wuhan, Hubei, 430072, China
Interests: laser scanning; mobile mapping; UAV mapping; point cloud processing; 3D scene understanding; GIS applications
Guest Editor
Dr. Jiaojiao Tian

Remote Sensing Technology Institute, German Aerospace Center (DLR), Muenchener Strasse 20, 82234, Wessling, Germany
Interests: building extraction; data fusion; 2D/3D change detection; computer vision; 3D reconstruction; classification

Special Issue Information

Dear Colleagues,

The rapid growth of sensor technologies, such as airborne and terrestrial laser scanning, and satellite and aerial imaging systems, poses unique challenges in the detection, extraction and modelling of buildings from remote sensing data. In fact, building detection, boundary extraction, and rooftop modelling from remotely-sensed data are important to various applications, such as the real estate industry, city planning, homeland security, automatic solar potential estimation, and disaster (flood or bushfire) management. The automated extraction of building boundaries is a crucial step towards generating city models. In addition, automatic building change detection is vital for monitoring urban growth and locating illegal building extensions.

Although significant research has been ongoing for more than two decades, the success of automatic building extraction and modelling is still largely impeded by scene complexity, incomplete cue extraction, and the sensor dependency of data. Vegetation, and especially trees, can be the prime cause of scene complexity and incomplete cue extraction. The situation becomes complex in hilly and densely-vegetated areas where only a few buildings are present, surrounded by trees. Important building cues can be completely or partially missed due to occlusions and shadowing from trees. Trees also change color with the seasons and, if deciduous, shed their leaves. Moreover, image quality may vary for the same scene even if the images are captured by the same sensor, but at different dates and times. Thus, when the same detection and modelling algorithms are applied to two different sets of data of the same scene, the outcomes may well be different. In particular, small building structures, such as garden sheds and roof planes, are often missed in low-resolution data. The automatically-generated models then either require significant human interaction to fix inaccuracies (as a post-processing step) or are of little use in practical applications.

Therefore, intelligent and innovative algorithms are urgently needed for the success of automatic building extraction and modelling. This Special Issue will focus on newly-developed methods for classification and feature extraction from remote sensing data and will cover (but is not limited to) the following topics:

  • Aerial and satellite data collected from different sensors (VHR, hyperspectral, SAR, LiDAR, UAV, thermal imagery, oblique imagery, etc.);
  • Data analysis and data fusion for building detection, boundary extraction, rooftop modelling, and change detection;
  • Data analysis and data fusion for land cover classification (semantic segmentation, buildings/roads extraction, vehicle detection, land use/cover mapping, etc.).

Moreover, we cordially welcome application papers on topics such as change detection, urban growth monitoring, and disaster management, as well as technical reviews.

Dr. Mohammad Awrangjeb
Prof. Xiangyun Hu
Prof. Bisheng Yang
Dr. Jiaojiao Tian
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Building detection
  • Building extraction
  • Roof reconstruction
  • 3D building modelling
  • Building change detection
  • Remote sensing data
  • LiDAR
  • VHR
  • Hyperspectral imagery
  • Multispectral imagery
  • SAR
  • Data fusion
  • Point cloud
  • Aerial imagery
  • Satellite imagery

Published Papers (16 papers)


Research

Open Access Article
Web-Net: A Novel Nest Networks with Ultra-Hierarchical Sampling for Building Extraction from Aerial Imageries
Remote Sens. 2019, 11(16), 1897; https://doi.org/10.3390/rs11161897
Received: 13 June 2019 / Revised: 9 August 2019 / Accepted: 9 August 2019 / Published: 14 August 2019
Abstract
How to efficiently utilize vast amounts of easily accessed aerial imagery is a critical challenge for researchers, given the proliferation of high-resolution remote sensing sensors and platforms. Recently, the rapid development of deep neural networks (DNNs) has been a focus in remote sensing, and the networks have achieved remarkable progress in image classification and segmentation tasks. However, current DNN models inevitably lose local cues during the downsampling operation. Additionally, even with skip connections, the upsampling methods cannot properly recover structural information, such as edge intersections, parallelism, and symmetry. In this paper, we propose Web-Net, a nested network architecture with hierarchical dense connections, to handle these issues. We design the Ultra-Hierarchical Sampling (UHS) block to absorb and fuse the inter-level feature maps and to propagate the feature maps among different levels. The position-wise downsampling/upsampling methods in the UHS iteratively change the shape of the inputs while preserving the number of their parameters, so that both the low-level local cues and the high-level semantic cues are properly preserved. We verify the effectiveness of the proposed Web-Net on the Inria Aerial Dataset and the WHU Dataset. The proposed Web-Net achieves an overall accuracy of 96.97% and an IoU (Intersection over Union) of 80.10% on the Inria Aerial Dataset, surpassing the state-of-the-art SegNet by 1.8% and 9.96%, respectively; the results on the WHU Dataset also support the effectiveness of the proposed Web-Net. Additionally, benefiting from the nested network architecture and the UHS block, the extracted buildings on the prediction maps are noticeably sharper and more accurately identified, and even building areas that are covered by shadows can be correctly extracted. The verified results indicate that the proposed Web-Net is both effective and efficient for building extraction from high-resolution remote sensing images.
(This article belongs to the Special Issue Remote Sensing based Building Extraction)
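For readers unfamiliar with the Intersection over Union (IoU) metric quoted in this and several of the following abstracts, a minimal sketch of its computation is given below; the masks are invented for illustration and are not data from any paper in this issue.

```python
# Hypothetical predicted and ground-truth building pixels as (row, col) sets;
# in practice these would be full-resolution boolean rasters.
pred  = {(0, 0), (0, 1), (1, 0)}
truth = {(0, 0), (1, 0), (1, 1)}

intersection = len(pred & truth)  # pixels both masks mark as building
union = len(pred | truth)         # pixels at least one mask marks as building
iou = intersection / union
print(iou)  # 2 / 4 = 0.5
```

The ratio is computed the same way regardless of whether the masks are stored as coordinate sets or raster arrays.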

Open Access Article
Semantic Segmentation of Urban Buildings from VHR Remote Sensing Imagery Using a Deep Convolutional Neural Network
Remote Sens. 2019, 11(15), 1774; https://doi.org/10.3390/rs11151774
Received: 22 June 2019 / Revised: 22 July 2019 / Accepted: 26 July 2019 / Published: 28 July 2019
Abstract
Urban building segmentation is a prevalent research domain for very high resolution (VHR) remote sensing; however, the varied appearances and complicated backgrounds of VHR remote sensing imagery make accurate semantic segmentation of urban buildings a challenge in relevant applications. Following the basic architecture of U-Net, an end-to-end deep convolutional neural network (denoted as DeepResUnet) was proposed, which can effectively perform urban building segmentation at the pixel scale from VHR imagery and generate accurate segmentation results. The method contains two sub-networks: one is a cascade down-sampling network for extracting feature maps of buildings from the VHR image, and the other is an up-sampling network for reconstructing those extracted feature maps back to the same size as the input VHR image. The deep residual learning approach was adopted to facilitate training and to alleviate the degradation problem that often occurs during model training. The proposed DeepResUnet was tested with aerial images with a spatial resolution of 0.075 m and was compared under the exact same conditions with six other state-of-the-art networks: FCN-8s, SegNet, DeconvNet, U-Net, ResUNet and DeepUNet. Results of extensive experiments indicated that the proposed DeepResUnet outperformed the other six networks in semantic segmentation of urban buildings in terms of both visual and quantitative evaluation, especially in labeling irregularly shaped and small buildings with higher accuracy and completeness. Compared with the U-Net, the F1 score, Kappa coefficient and overall accuracy of DeepResUnet were improved by 3.52%, 4.67% and 1.72%, respectively. Moreover, the proposed DeepResUnet required far fewer parameters than the U-Net, highlighting its significant improvement among U-Net applications. Nevertheless, the inference time of DeepResUnet is slightly longer than that of the U-Net, which is subject to further improvement.

Open Access Article
A Building Extraction Approach Based on the Fusion of LiDAR Point Cloud and Elevation Map Texture Features
Remote Sens. 2019, 11(14), 1636; https://doi.org/10.3390/rs11141636
Received: 15 May 2019 / Revised: 30 June 2019 / Accepted: 3 July 2019 / Published: 10 July 2019
Abstract
Building extraction is an important way to obtain information in urban planning, land management, and other fields. As remote sensing has various advantages, such as large coverage and real-time capability, it has become an essential approach for building extraction. Among various remote sensing technologies, the capability of providing 3D features makes the LiDAR point cloud a crucial means for building extraction. However, the LiDAR point cloud has difficulty distinguishing objects with similar heights, in which case texture features are able to separate different objects in a 2D image. In this paper, a building extraction method based on the fusion of point cloud and texture features is proposed, in which the texture features are extracted from an elevation map that expresses the height of each point. The experimental results show that the proposed method obtains better extraction results than other texture feature extraction methods and ENVI software in all experimental areas, and the extraction accuracy is always higher than 87%, which is satisfactory for some practical work.

Open Access Article
The Comparison of Fusion Methods for HSRRSI Considering the Effectiveness of Land Cover (Features) Object Recognition Based on Deep Learning
Remote Sens. 2019, 11(12), 1435; https://doi.org/10.3390/rs11121435
Received: 21 May 2019 / Revised: 11 June 2019 / Accepted: 13 June 2019 / Published: 17 June 2019
Abstract
The efficient and accurate application of deep learning in the remote sensing field largely depends on the pre-processing of remote sensing images. In particular, image fusion is the essential way to achieve the complementarity of the panchromatic band and the multispectral bands in high spatial resolution remote sensing images. In this paper, we not only pay attention to the visual effect of fused images, but also focus on the subsequent effectiveness of information extraction and feature recognition based on fused images. Based on the WorldView-3 images of Tongzhou District of Beijing, we apply the fusion results to conduct experiments on object recognition of typical urban features based on deep learning. Furthermore, we perform a quantitative analysis of the existing mainstream pixel-based fusion methods IHS (Intensity-Hue-Saturation), PCS (Principal Component Substitution), GS (Gram–Schmidt), ELS (Ehlers), HPF (High-Pass Filtering), and HCS (Hyperspherical Color Space) from the perspectives of spectrum, geometric features, and recognition accuracy. The results show that there are apparent differences in visual effect and quantitative indices among the different fusion methods, and that the PCS fusion method has the most satisfactory comprehensive effectiveness in the object recognition of land cover (features) based on deep learning.

Open Access Article
Building Extraction from UAV Images Jointly Using 6D-SLIC and Multiscale Siamese Convolutional Networks
Remote Sens. 2019, 11(9), 1040; https://doi.org/10.3390/rs11091040
Received: 28 February 2019 / Revised: 29 April 2019 / Accepted: 29 April 2019 / Published: 1 May 2019
Abstract
Automatic building extraction using a single data type, either 2D remotely-sensed images or light detection and ranging 3D point clouds, remains insufficient to accurately delineate building outlines for automatic mapping, despite active research in this area and the significant progress achieved in the past decade. This paper presents an effective approach to extracting buildings from Unmanned Aerial Vehicle (UAV) images through the incorporation of superpixel segmentation and semantic recognition. A framework for building extraction is constructed by jointly using an improved Simple Linear Iterative Clustering (SLIC) algorithm and Multiscale Siamese Convolutional Networks (MSCNs). The SLIC algorithm, improved by additionally imposing a digital surface model for superpixel segmentation and therefore named 6D-SLIC, is suited to building boundary detection when buildings and image backgrounds have similar radiometric signatures. The proposed MSCNs, comprising a feature learning network and a binary decision network, are used to automatically learn a multiscale hierarchical feature representation and detect building objects under various complex backgrounds. In addition, a gamma-transform green leaf index is proposed to truncate vegetation superpixels for further processing to improve the robustness and efficiency of building detection, and the Douglas–Peucker algorithm and iterative optimization are used to eliminate jagged details generated from small structures as a result of superpixel segmentation. In the experiments, UAV datasets, including many buildings in urban and rural areas with irregular shapes and different heights that are obscured by trees, are collected to evaluate the proposed method. The experimental results based on qualitative and quantitative measures confirm the effectiveness and high accuracy of the proposed framework relative to the digitized results. The proposed framework performs better than state-of-the-art building extraction methods, given its higher values of recall, precision, and Intersection over Union (IoU).
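The Douglas–Peucker simplification used above to remove jagged boundary details is a standard algorithm; the following is a self-contained sketch with toy coordinates, not the authors' implementation. Points farther than a tolerance `eps` from the line through the segment endpoints are kept; the rest are dropped.

```python
import math

def perp_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def douglas_peucker(points, eps):
    # Keep the endpoints; recurse on the farthest interior point if it exceeds eps.
    if len(points) < 3:
        return points
    dists = [perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:i + 1], eps)
    right = douglas_peucker(points[i:], eps)
    return left[:-1] + right  # drop the duplicated split point

# A jagged building edge: nearly collinear apart from one real corner at (4, 5).
edge = [(0, 0), (1, 0.1), (2, -0.1), (3, 0.05), (4, 5), (5, 5.1)]
print(douglas_peucker(edge, eps=0.5))  # → [(0, 0), (3, 0.05), (4, 5), (5, 5.1)]
```

The small zigzags are removed while the genuine corner survives, which is the behavior the paper relies on to clean superpixel boundaries.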

Open Access Article
Building Extraction from High-Resolution Aerial Imagery Using a Generative Adversarial Network with Spatial and Channel Attention Mechanisms
Remote Sens. 2019, 11(8), 917; https://doi.org/10.3390/rs11080917
Received: 18 March 2019 / Revised: 12 April 2019 / Accepted: 12 April 2019 / Published: 15 April 2019
Abstract
Segmentation of high-resolution remote sensing images is an important challenge with wide practical applications. The increasing spatial resolution provides fine details for image segmentation but also incurs segmentation ambiguities. In this paper, we propose a generative adversarial network with spatial and channel attention mechanisms (GAN-SCA) for the robust segmentation of buildings in remote sensing images. The segmentation network (generator) of the proposed framework is composed of the well-known semantic segmentation architecture (U-Net) and the spatial and channel attention mechanisms (SCA). The adoption of SCA enables the segmentation network to selectively enhance more useful features in specific positions and channels, yielding improved results closer to the ground truth. The discriminator is an adversarial network with channel attention mechanisms that can properly discriminate between the outputs of the generator and the ground truth maps. The segmentation network and adversarial network are trained in an alternating fashion on the Inria aerial image labeling dataset and the Massachusetts buildings dataset. Experimental results show that the proposed GAN-SCA achieves higher scores (the overall accuracy and intersection over union on the Inria aerial image labeling dataset are 96.61% and 77.75%, respectively, and the F1-measure on the Massachusetts buildings dataset is 96.36%) and outperforms several state-of-the-art approaches.

Open Access Article
Semantic Segmentation-Based Building Footprint Extraction Using Very High-Resolution Satellite Images and Multi-Source GIS Data
Remote Sens. 2019, 11(4), 403; https://doi.org/10.3390/rs11040403
Received: 8 January 2019 / Revised: 8 February 2019 / Accepted: 13 February 2019 / Published: 16 February 2019
Abstract
Automatic extraction of building footprints from high-resolution satellite imagery has become an important and challenging research issue, receiving growing attention. Many recent studies have explored different deep learning-based semantic segmentation methods for improving the accuracy of building extraction. Although they record substantial land cover and land use information (e.g., buildings, roads, and water), public geographic information system (GIS) map datasets have rarely been utilized to improve building extraction results in existing studies. In this research, we propose a U-Net-based semantic segmentation method for the extraction of building footprints from high-resolution multispectral satellite images using the SpaceNet building dataset provided in the DeepGlobe Satellite Challenge of the IEEE Conference on Computer Vision and Pattern Recognition 2018 (CVPR 2018). We explore the potential of multiple public GIS map datasets (OpenStreetMap, Google Maps, and MapWorld) through integration with the WorldView-3 satellite datasets in four cities (Las Vegas, Paris, Shanghai, and Khartoum). Several strategies are designed and combined with the U-Net-based semantic segmentation model, including data augmentation, post-processing, and integration of the GIS map data and satellite images. The proposed method achieves a total F1-score of 0.704, which is an improvement of 1.1% to 12.5% compared with the top three solutions in the SpaceNet Building Detection Competition and 3.0% to 9.2% compared with the standard U-Net-based method. Moreover, the effect of each proposed strategy and the possible reasons for the building footprint extraction results are analyzed in depth, considering the actual situation of the four cities.
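The F1-score reported above combines precision and recall into a single number; as a quick illustration with made-up detection counts (not figures from the paper):

```python
# Hypothetical detection counts, invented for illustration only.
tp, fp, fn = 70, 20, 30   # true positives, false positives, false negatives

precision = tp / (tp + fp)                     # fraction of detections that are real buildings
recall = tp / (tp + fn)                        # fraction of real buildings that were detected
f1 = 2 * precision * recall / (precision + recall)
print(round(precision, 3), round(recall, 3), round(f1, 3))  # 0.778 0.7 0.737
```

Because it is the harmonic mean, F1 sits closer to the weaker of the two component scores, penalizing methods that trade one heavily for the other.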

Open Access Article
An Automatic Morphological Attribute Building Extraction Approach for Satellite High Spatial Resolution Imagery
Remote Sens. 2019, 11(3), 337; https://doi.org/10.3390/rs11030337
Received: 21 December 2018 / Revised: 23 January 2019 / Accepted: 6 February 2019 / Published: 8 February 2019
Abstract
A new morphological attribute building index (MABI) and shadow index (MASI) are proposed here for automatically extracting building features from very high-resolution (VHR) remote sensing satellite images. By investigating the associated attributes in morphological attribute filters (AFs), the proposed method establishes a relationship between AFs and the characteristics of buildings/shadows in VHR images (e.g., high local contrast, internal homogeneity, shape, and size). In the pre-processing step of the proposed work, attribute filtering was conducted on the original VHR spectral reflectance data to obtain an input with high homogeneity and to suppress elongated objects (potential non-buildings). Then, the MABI and MASI were calculated by taking the obtained input as a base image. Dark buildings were considered separately in the MABI to reduce the omission of dark roofs. To better detect buildings from the MABI feature image, an object-oriented analysis and building-shadow concurrence relationships were utilized to further filter out non-building land covers, such as roads and bare ground, that can be confused with buildings. Three VHR datasets from two satellite sensors, i.e., WorldView-2 and QuickBird, were tested to determine the detection performance. In view of both visual inspection and quantitative assessment, the results of the proposed work are superior to those of a recent automatic building index and a supervised binary classification approach.

Open Access Article
Comparison of Digital Building Height Models Extracted from AW3D, TanDEM-X, ASTER, and SRTM Digital Surface Models over Yangon City
Remote Sens. 2018, 10(12), 2008; https://doi.org/10.3390/rs10122008
Received: 25 October 2018 / Revised: 2 December 2018 / Accepted: 8 December 2018 / Published: 11 December 2018
Abstract
Vertical urban growth in the form of urban volume or building height is increasingly being seen as a significant indicator and constituent of the urban environment. Although high-resolution digital surface models can provide valuable information, many places lack access to such resources. The objective of this study is to explore the feasibility of using open digital surface models (DSMs), such as the AW3D30, ASTER, and SRTM datasets, for extracting digital building height models (DBHs) and to compare their accuracy. A multidirectional processing and slope-dependent filtering approach for DBH extraction was used. Yangon was chosen as the study location since it represents a rapidly developing Asian city where urban changes can be observed during the acquisition period of the aforementioned open DSM datasets (2001–2011). The effect of resolution degradation on the accuracy of the coarse AW3D30 DBH with respect to the high-resolution AW3D5 DBH was also examined. It is concluded that AW3D30 is the most suitable open DSM for DBH generation and for observing buildings taller than 9 m. Furthermore, the AW3D30, ASTER, and SRTM DBHs are suitable for observing vertical changes in urban structures.
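The core idea of a digital building height model, subtracting an estimated terrain surface from the DSM and thresholding (here at the 9 m level the study reports for AW3D30), can be sketched on a toy 1-D profile. The window-minimum terrain estimate below is a crude stand-in for the paper's multidirectional, slope-dependent filtering, and all values are invented.

```python
# Toy 1-D DSM profile in metres: sloping ground with two tall structures on top.
dsm = [10.0, 10.5, 11.0, 20.0, 20.5, 11.5, 12.0, 25.0, 12.5]

# Crude terrain (DTM) estimate: minimum elevation within a sliding window.
win = 2
dtm = [min(dsm[max(0, i - win):i + win + 1]) for i in range(len(dsm))]

# Digital building height = surface minus terrain; keep structures taller than 9 m.
dbh = [s - g for s, g in zip(dsm, dtm)]
buildings = [i for i, h in enumerate(dbh) if h > 9.0]
print(buildings)  # → [3, 4, 7]
```

A real pipeline works on 2-D rasters and must handle slopes more carefully (a window minimum underestimates terrain on steep ground), which is exactly what the slope-dependent filtering in the paper addresses.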

Open Access Article
Hierarchical Regularization of Building Boundaries in Noisy Aerial Laser Scanning and Photogrammetric Point Clouds
Remote Sens. 2018, 10(12), 1996; https://doi.org/10.3390/rs10121996
Received: 29 October 2018 / Revised: 5 December 2018 / Accepted: 7 December 2018 / Published: 10 December 2018
Abstract
Aerial laser scanning and photogrammetric point clouds are often noisy at building boundaries. In order to produce regularized polygons from such noisy point clouds, this study proposes a hierarchical regularization method for the boundary points. Beginning with planar structures detected from the raw point clouds, two stages of regularization are employed. In the first stage, the boundary points of an individual plane are consolidated locally by shifting them along their refined normal vector to resist noise, and are then grouped into piecewise smooth segments. In the second stage, global regularities among different segments from different planes are softly enforced through a labeling process, in which the same label represents parallel or orthogonal segments. This is formulated as a Markov random field and solved efficiently via graph cut. The performance of the proposed method is evaluated for extracting 2D footprints and 3D polygons of buildings in a metropolitan area. The results reveal that the proposed method is superior to state-of-the-art methods both qualitatively and quantitatively in compactness. The simplified polygons fit the original boundary points with an average residual of 0.2 m while reducing the complexity of the edges by up to 90%. The satisfactory performance of the proposed method shows promising potential for 3D reconstruction of polygonal models from noisy point clouds.

Open Access Article
Extraction of Buildings from Multiple-View Aerial Images Using a Feature-Level-Fusion Strategy
Remote Sens. 2018, 10(12), 1947; https://doi.org/10.3390/rs10121947
Received: 27 September 2018 / Revised: 21 November 2018 / Accepted: 28 November 2018 / Published: 4 December 2018
Abstract
Aerial images are widely used for building detection. However, the performance of building detection methods based on aerial images alone is typically poorer than that of methods using both LiDAR and image data. To overcome these limitations, we present a framework for detecting and regularizing the boundary of individual buildings using a feature-level fusion strategy based on features from dense image matching (DIM) point clouds, an orthophoto and the original aerial images. The proposed framework is divided into three stages. In the first stage, the features from the original aerial image and the DIM points are fused to detect buildings and obtain the so-called blob of an individual building. Then, a feature-level fusion strategy is applied to match the straight-line segments from the original aerial images so that the matched straight-line segments can be used in the later stage. Finally, a new footprint generation algorithm is proposed to generate the building footprint by combining the matched straight-line segments and the boundary of the blob of the individual building. The performance of our framework is evaluated on a vertical aerial image dataset (Vaihingen) and two oblique aerial image datasets (Potsdam and Lunen). The experimental results reveal 89% to 96% per-area completeness, with accuracy of almost 93% or higher. Relative to six existing methods, the proposed method is not only more robust but also obtains performance similar to that of methods based on LiDAR and images.
(This article belongs to the Special Issue Remote Sensing based Building Extraction)

Open Access Article
Building Extraction in Very High Resolution Imagery by Dense-Attention Networks
Remote Sens. 2018, 10(11), 1768; https://doi.org/10.3390/rs10111768
Received: 6 September 2018 / Revised: 4 November 2018 / Accepted: 6 November 2018 / Published: 8 November 2018
Cited by 7
Abstract
Building extraction from very high resolution (VHR) imagery plays an important role in urban planning, disaster management, navigation, the updating of geographic databases, and several other geospatial applications. Compared with traditional building extraction approaches, deep learning networks have recently shown outstanding performance in this task by using both high-level and low-level feature maps. However, rationally exploiting features at different levels remains difficult with present deep learning networks. To tackle this problem, a novel network based on DenseNets and the attention mechanism, called the dense-attention network (DAN), was proposed. The DAN consists of an encoder and a decoder, composed respectively of lightweight DenseNets and a spatial attention fusion module. The proposed encoder-decoder architecture strengthens feature propagation and effectively uses higher-level feature information to suppress low-level features and noise. Experimental results on the public International Society for Photogrammetry and Remote Sensing (ISPRS) datasets, using only red-green-blue (RGB) images, demonstrate that the proposed DAN achieved higher scores (96.16% overall accuracy (OA), 92.56% F1 score, and 90.56% mean intersection over union (MIOU)), lower training and response times, and a higher quality value than other deep learning methods.
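The spatial attention fusion idea, using higher-level features to gate lower-level ones, can be illustrated with a toy NumPy sketch. The actual DAN module is learned end-to-end; this fixed channel-mean-plus-sigmoid gate is only an assumed stand-in to show the data flow:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention_fusion(low_feat, high_feat):
    """Gate low-level features (C, H, W) with a spatial attention map derived
    from high-level features (channel-wise mean -> sigmoid), then fuse by addition."""
    att = sigmoid(high_feat.mean(axis=0, keepdims=True))  # (1, H, W), values in (0, 1)
    return low_feat * att + high_feat
```

Where the high-level response is strongly negative, the gate shrinks toward zero and the low-level contribution is suppressed, which is the qualitative behaviour the abstract describes.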
(This article belongs to the Special Issue Remote Sensing based Building Extraction)

Open Access Article
An Effective Data-Driven Method for 3-D Building Roof Reconstruction and Robust Change Detection
Remote Sens. 2018, 10(10), 1512; https://doi.org/10.3390/rs10101512
Received: 13 June 2018 / Revised: 10 September 2018 / Accepted: 19 September 2018 / Published: 21 September 2018
Cited by 1
Abstract
Three-dimensional (3-D) reconstruction of building roofs can be an essential prerequisite for 3-D building change detection, which is important for detecting informal buildings or extensions and for updating 3-D map databases. However, automatic 3-D roof reconstruction from remote sensing data is still in its development stage for a number of reasons. For instance, it is difficult to determine the neighbourhood relationships among the planes of a complex building roof; locating step edges from point cloud data often requires additional information or imposes constraints; and missing roof planes require human interaction and often produce high reconstruction errors. This research introduces a new 3-D roof reconstruction technique that constructs an adjacency matrix to define the topological relationships among the roof planes. It identifies any missing planes through an analysis of the 3-D intersection lines between adjacent planes, generates new planes to fill the resulting gaps, and finally obtains complete building models by inserting approximate wall planes and the building floor. The generated building models are then used to detect 3-D changes in buildings. First, plane connections are defined to establish the relationships between neighbouring planes. Then, each building in the reference and test model sets is represented using a graph data structure. Finally, the height intensity images, and if required the graph representations, of the reference and test models are directly compared to find 3-D changes and categorise them into five groups: new, unchanged, demolished, modified, and partially-modified planes. Experimental results on two Australian datasets show high object- and pixel-based accuracy in terms of completeness, correctness, and quality for both the 3-D roof reconstruction and change detection techniques. The proposed change detection technique is robust to various changes, including the addition of a new veranda to, or the removal of an existing veranda from, a building, and an increase in the height of a building.
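The grouping of planes into change categories can be sketched at a set level. This toy version covers only the three categories that reduce to set membership (new, demolished, unchanged); the modified and partially-modified categories require the geometric and graph comparison described in the abstract and are out of scope here. Plane identifiers and the function name are illustrative assumptions:

```python
def categorise_planes(reference, test):
    """Toy change categorisation by plane identity: planes only in the test
    model are 'new', only in the reference model 'demolished', in both 'unchanged'."""
    ref, tst = set(reference), set(test)
    return {
        "new": sorted(tst - ref),
        "demolished": sorted(ref - tst),
        "unchanged": sorted(ref & tst),
    }
```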
(This article belongs to the Special Issue Remote Sensing based Building Extraction)

Open Access Article
Detecting Building Edges from High Spatial Resolution Remote Sensing Imagery Using Richer Convolution Features Network
Remote Sens. 2018, 10(9), 1496; https://doi.org/10.3390/rs10091496
Received: 21 August 2018 / Revised: 17 September 2018 / Accepted: 18 September 2018 / Published: 19 September 2018
Cited by 8
Abstract
As a basic feature of buildings, building edges play an important role in many fields, such as urbanization monitoring, city planning, and surveying and mapping. Building edge detection from high spatial resolution remote sensing (HSRRS) imagery has long been a challenging problem. Inspired by the recent success of deep-learning-based edge detection, this paper employs a richer convolutional features (RCF) network to detect building edges. Firstly, a dataset for building edge detection is constructed using the proposed most peripheral constraint conversion algorithm. Then, the RCF network is retrained on this dataset. Finally, the edge probability map is obtained by the RCF-building model, and a geomorphological concept is introduced to refine the edge probability map according to a geometric morphological analysis of the topographic surface. The experimental results suggest that the RCF-building model can detect building edges accurately and completely, and that its edge detection F-measure is at least 5% higher than those of three other typical building extraction methods. In addition, an ablation experiment shows that the most peripheral constraint conversion algorithm generates a superior dataset, and the proposed refinement algorithm yields a higher F-measure and better visual results than the non-maximal suppression algorithm.
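Converting building masks into edge labels, one plausible reading of the "most peripheral" idea, can be sketched as follows: a building pixel counts as an edge pixel if any of its 4-neighbours lies outside the building. This is an assumed simplification, not the authors' conversion algorithm:

```python
import numpy as np

def peripheral_edges(mask):
    """Mark a building pixel as an edge pixel if any 4-neighbour is background
    (a rough stand-in for a most-peripheral labelling of a binary mask)."""
    m = np.pad(mask.astype(bool), 1, constant_values=False)
    interior = m[1:-1, 1:-1]
    # True only where all four neighbours (up, down, left, right) are building
    neighbours_all = m[:-2, 1:-1] & m[2:, 1:-1] & m[1:-1, :-2] & m[1:-1, 2:]
    return interior & ~neighbours_all
```

On a solid 4 x 4 mask this keeps the one-pixel outer ring (12 pixels) and drops the 2 x 2 interior.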
(This article belongs to the Special Issue Remote Sensing based Building Extraction)

Open Access Article
Extracting Building Boundaries from High Resolution Optical Images and LiDAR Data by Integrating the Convolutional Neural Network and the Active Contour Model
Remote Sens. 2018, 10(9), 1459; https://doi.org/10.3390/rs10091459
Received: 18 July 2018 / Revised: 30 August 2018 / Accepted: 11 September 2018 / Published: 12 September 2018
Cited by 8
Abstract
Identifying and extracting building boundaries from remote sensing data has been one of the hot topics in photogrammetry for decades. The active contour model (ACM) is a robust segmentation method that has been widely used in building boundary extraction, but it often produces biased boundaries due to mixtures of trees and background. Although classification methods can mitigate this efficiently by separating buildings from other objects, they often suffer from unavoidable salt-and-pepper artifacts. In this paper, we combine robust classification with convolutional neural networks (CNNs) and the ACM to overcome the current limitations of building boundary extraction algorithms. We conduct two types of experiments: the first integrates the ACM into the CNN construction process, whereas the second detects building footprints with a CNN and then uses the ACM for post-processing. Assessments at three levels demonstrate that the proposed methods can efficiently extract building boundaries in five test scenes from two datasets. The mean accuracies achieved, in terms of the F1 score, for the first type (and the second type) of experiment are 96.43 ± 3.34% (95.68 ± 3.22%), 88.60 ± 3.99% (89.06 ± 3.96%), and 91.62 ± 1.61% (91.47 ± 2.58%) at the scene, object, and pixel levels, respectively. The combined CNN and ACM solutions were shown to be effective at extracting building boundaries from high-resolution optical images and LiDAR data.
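The ACM post-processing step evolves a contour under internal (smoothness) and external (image) forces. A much-simplified sketch of the internal-energy part alone, pulling each vertex toward the midpoint of its neighbours; a real ACM also needs the image-derived external force, omitted here:

```python
import numpy as np

def smooth_contour(points, alpha=0.5, iterations=10):
    """Crude curve-evolution step: relax each vertex toward the midpoint of its
    two neighbours on the closed contour (internal energy of an ACM, simplified)."""
    pts = np.asarray(points, dtype=float)
    for _ in range(iterations):
        midpoints = 0.5 * (np.roll(pts, 1, axis=0) + np.roll(pts, -1, axis=0))
        pts = (1 - alpha) * pts + alpha * midpoints
    return pts
```

Because each update is an average of existing vertices, the contour centroid is preserved while jagged deviations shrink, which is why an external force is needed in practice to stop the curve collapsing.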
(This article belongs to the Special Issue Remote Sensing based Building Extraction)

Open Access Article
A Boundary Regulated Network for Accurate Roof Segmentation and Outline Extraction
Remote Sens. 2018, 10(8), 1195; https://doi.org/10.3390/rs10081195
Received: 16 June 2018 / Revised: 25 July 2018 / Accepted: 26 July 2018 / Published: 30 July 2018
Cited by 4
Abstract
The automatic extraction of building outlines from aerial imagery for the purposes of navigation and urban planning is a long-standing problem in the field of remote sensing. Currently, most methods utilize variants of fully convolutional networks (FCNs), which have significantly improved model performance for this task. However, pursuing more accurate segmentation results is still critical for additional applications, such as automatic mapping and building change detection. In this study, we propose a boundary regulated network called BR-Net, which utilizes both local and global information, to perform roof segmentation and outline extraction. BR-Net consists of a shared backend (a modified U-Net) and a multitask framework that generates predictions for segmentation maps and building outlines from the consistent feature representation produced by the shared backend. Because of the restriction and regulation provided by the additional boundary information, the proposed model achieves superior performance compared to existing methods. Experiments on an aerial image dataset covering 32 km² and containing more than 58,000 buildings indicate that our method performs well at both roof segmentation and outline extraction. The proposed BR-Net significantly outperforms the classic FCN8s model. Compared to the state-of-the-art U-Net model, BR-Net achieves improvements of 6.2% (0.869 vs. 0.818), 10.6% (0.772 vs. 0.698), and 8.7% (0.840 vs. 0.773) in F1 score, Jaccard index, and kappa coefficient, respectively.
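A multitask setup like this is typically trained with a joint objective: the segmentation loss plus a weighted boundary term. A minimal NumPy sketch of such an objective; the loss form, weight, and function names are assumptions for illustration, not BR-Net's published training details:

```python
import numpy as np

def binary_ce(pred, target, eps=1e-7):
    """Binary cross-entropy between predicted probabilities and 0/1 targets."""
    p = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))

def multitask_loss(seg_pred, seg_gt, edge_pred, edge_gt, edge_weight=0.5):
    """Joint objective: segmentation loss plus a boundary-regulation term."""
    return binary_ce(seg_pred, seg_gt) + edge_weight * binary_ce(edge_pred, edge_gt)
```

The boundary term penalises outline errors that a pure region loss barely notices, which is the "regulation" the network's name refers to.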
(This article belongs to the Special Issue Remote Sensing based Building Extraction)

Remote Sens. EISSN 2072-4292. Published by MDPI AG, Basel, Switzerland.