
Deep Learning for Very-High Resolution Land-Cover Mapping

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Urban Remote Sensing".

Deadline for manuscript submissions: closed (20 December 2022) | Viewed by 26643

Special Issue Editors


Guest Editor
Institute for Environmental Management and Land-Use Planning, Université libre de Bruxelles, Brussels, Belgium
Interests: land cover mapping; urban remote sensing; machine learning; deep learning; geoinformation; very high resolution; object-based image analysis; weak supervision; big data; automation; change detection; uncertainty; human geography
Guest Editor
Department of Geographic Information Science, Nanjing University, Nanjing 210046, China
Interests: remote sensing image information extraction; object-based image analysis (OBIA); machine learning or data mining with applications in remote sensing and geospatial analysis

Guest Editor
Department of Earth Observation Science, Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, P.O. Box 217, 7500 AE Enschede, The Netherlands
Interests: remote sensing; machine learning; deep learning

Guest Editor
Institut National de L’Information Géographique et Forestière, Saint-Mande, France
Interests: land use land cover mapping; remote sensing; classification; machine learning; deep learning; very high resolution imagery; historical images

Special Issue Information

Dear Colleagues,

In the field of remote sensing, land cover mapping has long been a popular subject, as land cover data are essential for a multitude of applications. Thanks to their level of detail, very high resolution images are a privileged tool for studying the Earth's surface, and a large number of publications on land cover mapping from such data have appeared. For a long time, pixel-oriented and object-oriented approaches, through either rule-based classifications or conventional machine learning algorithms, were well established and used by the great majority of the community to produce land cover maps from very high resolution Earth observation data. Since 2015, however, deep learning techniques from computer vision have made their way into the remote sensing field and have been successfully applied to various tasks. While deep learning approaches can largely outperform conventional machine learning for some tasks, several studies have shown that conventional machine learning and object-oriented classification approaches still perform better in some contexts.

For this Special Issue, we welcome state-of-the-art research or review papers that focus on land cover mapping from very high resolution Earth observation data using deep learning methods and address topics including (but not limited to): architecture comparison; data fusion; comparison of conventional machine learning and deep learning approaches; deep learning model comprehension; explainable deep learning; weak supervision and semi-supervision; scene classification; semantic segmentation; instance segmentation; change detection; existing land cover map enrichment.

Dr. Tais Grippa
Dr. Lei Ma
Dr. Claudio Persello
Dr. Arnaud Le Bris
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Deep learning
  • Explainable deep learning
  • Land cover mapping
  • Geo-information extraction
  • Very high resolution images
  • Data augmentation
  • Data fusion
  • Semantic segmentation
  • Instance segmentation
  • Change detection
  • Weak and semi-supervision

Published Papers (7 papers)


Research

Jump to: Review

30 pages, 15742 KiB  
Article
Hyperspectral Image Classification with a Multiscale Fusion-Evolution Graph Convolutional Network Based on a Feature-Spatial Attention Mechanism
by Haoyu Jing, Yuanyuan Wang, Zhenhong Du and Feng Zhang
Remote Sens. 2022, 14(11), 2653; https://doi.org/10.3390/rs14112653 - 1 Jun 2022
Cited by 2 | Viewed by 2281
Abstract
Convolutional neural networks (CNNs) have achieved excellent performance in the classification of hyperspectral images (HSI) due to their ability to extract spectral and spatial feature information. However, conventional CNN models do not perform well in regions with irregular geometric appearances. The recently proposed graph convolutional network (GCN) has been successfully applied to the analysis of non-Euclidean data and is suitable for irregular image regions. However, conventional GCNs suffer from very high computational cost on HSI data and cannot make full use of information in the image spatial domain. To this end, this paper proposes a multiscale fusion-evolution graph convolutional network based on a feature-spatial attention mechanism (MFEGCN-FSAM). The model allows the graph to evolve automatically during the graph convolution process to produce more accurate embedding features. Multiple local and global input graphs are established to exploit the multiscale spectral and spatial information of the image. In addition, a feature-spatial attention module is designed to extract important features and structural information from the graph. Experimental results on four typical datasets show that the proposed MFEGCN-FSAM outperforms most existing HSI classification methods. Full article
(This article belongs to the Special Issue Deep Learning for Very-High Resolution Land-Cover Mapping)
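As a rough illustration of the graph convolution underlying models of this family, the standard GCN propagation rule (the generic textbook form, not necessarily the paper's exact formulation) adds self-loops to the adjacency matrix, normalizes it symmetrically by node degree, and mixes neighboring node features through a learned weight matrix. A minimal NumPy sketch on an invented toy graph:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])          # adjacency with self-loops
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)  # ReLU activation

# Toy graph: 4 superpixel nodes in a chain, 3 spectral features each
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.rand(4, 3)   # node feature matrix
W = np.random.rand(3, 2)   # learned layer weights (random here)
out = gcn_layer(A, H, W)
print(out.shape)  # (4, 2)
```

Each output row is a weighted average of a node's own features and its neighbors' features, which is why irregular regions (represented as graph nodes rather than fixed square patches) can be handled naturally.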

25 pages, 6710 KiB  
Article
Hierarchical Disentangling Network for Building Extraction from Very High Resolution Optical Remote Sensing Imagery
by Jianhao Li, Yin Zhuang, Shan Dong, Peng Gao, Hao Dong, He Chen, Liang Chen and Lianlin Li
Remote Sens. 2022, 14(7), 1767; https://doi.org/10.3390/rs14071767 - 6 Apr 2022
Cited by 6 | Viewed by 2303
Abstract
Building extraction from very high resolution (VHR) optical remote sensing imagery is an essential interpretation task that impacts human life. However, buildings in different environments exhibit various scales, complicated spatial distributions, and different imaging conditions. Additionally, as the spatial resolution of images increases, diverse interior details and redundant context information appear in building and background areas. These situations create large intra-class variance and poor inter-class discrimination, leading to uncertain feature descriptions for building extraction and resulting in over- or under-extraction. In this article, a novel hierarchical disentangling network with an encoder–decoder architecture, called HDNet, is proposed to consider both stable and uncertain feature descriptions in a convolutional neural network (CNN). A hierarchical disentangling strategy individually generates strong and weak semantic zones using a newly designed feature disentangling module (FDM); these zones provide the stable and uncertain descriptions that determine a more stable semantic main body and an uncertain semantic boundary of buildings. A dual-stream semantic feature description then gradually integrates the strong and weak semantic zones through the designed component feature fusion module (CFFM), generating a powerful semantic description for more complete and refined building extraction. Finally, extensive experiments on three published datasets (i.e., WHU satellite, WHU aerial, and INRIA) show that the proposed HDNet outperforms other state-of-the-art (SOTA) methods. Full article
(This article belongs to the Special Issue Deep Learning for Very-High Resolution Land-Cover Mapping)

23 pages, 53493 KiB  
Article
Capsule–Encoder–Decoder: A Method for Generalizable Building Extraction from Remote Sensing Images
by Zhenchao Tang, Calvin Yu-Chian Chen, Chengzhen Jiang, Dongying Zhang, Weiran Luo, Zhiming Hong and Huaiwei Sun
Remote Sens. 2022, 14(5), 1235; https://doi.org/10.3390/rs14051235 - 2 Mar 2022
Cited by 5 | Viewed by 2920
Abstract
Due to inconsistent spatiotemporal and spectral scales, a remote sensing dataset covering a large area and a long time series exhibits large variations in its statistical distribution, which leads to a performance drop for a deep learning model trained only on the source domain. For the building extraction task, deep learning methods generalize weakly from the source domain to other domains. To solve this problem, we propose a Capsule–Encoder–Decoder model. A vector named a capsule stores the characteristics of a building and its parts. In our work, the encoder extracts capsules, which contain information about buildings' parts, from remote sensing images, and the decoder computes the relationship between a target building and its parts. The decoder corrects the buildings' distribution and up-samples it to extract target buildings. Using remote sensing images of the lower Yellow River as the source dataset, building extraction experiments were conducted with both our method and mainstream methods. Compared with the mainstream methods on the source dataset, our method converges faster and shows higher accuracy. Significantly, without fine-tuning, our method reduces the error rates of building extraction on an almost unfamiliar dataset. The building parts' distribution in capsules carries high-level semantic information, and capsules describe the characteristics of buildings more comprehensively and more interpretably. The results prove that our method not only effectively extracts buildings but also generalizes well from the source remote sensing dataset to another. Full article
(This article belongs to the Special Issue Deep Learning for Very-High Resolution Land-Cover Mapping)

20 pages, 14293 KiB  
Article
A Deformable Convolutional Neural Network with Spatial-Channel Attention for Remote Sensing Scene Classification
by Di Wang and Jinhui Lan
Remote Sens. 2021, 13(24), 5076; https://doi.org/10.3390/rs13245076 - 14 Dec 2021
Cited by 10 | Viewed by 3142
Abstract
Remote sensing scene classification converts remote sensing images into classification information to support high-level applications, making it a fundamental problem in the field of remote sensing. In recent years, many convolutional neural network (CNN)-based methods have achieved impressive results in remote sensing scene classification, but they face two problems in extracting remote sensing scene features: (1) fixed-shape convolutional kernels cannot effectively extract features from scenes with complex shapes and diverse distributions; (2) the features extracted by CNNs contain a large amount of redundant and invalid information. To solve these problems, this paper constructs a deformable convolutional neural network that adapts the convolutional sampling positions to the shapes of objects in the remote sensing scene. Meanwhile, spatial and channel attention mechanisms focus on the effective features while suppressing the invalid ones. The experimental results indicate that the proposed method is competitive with state-of-the-art methods on three remote sensing scene classification datasets (UCM, NWPU, and AID). Full article
(This article belongs to the Special Issue Deep Learning for Very-High Resolution Land-Cover Mapping)
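The general idea of channel and spatial attention, reweighting a feature map so that informative channels and pixels dominate, can be sketched in a few lines. This is a generic squeeze-and-excite-style illustration with invented toy weights, not the paper's actual modules:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, W1, W2):
    """Squeeze-and-excite-style channel attention on a (C, H, W) feature map."""
    squeeze = feat.mean(axis=(1, 2))                      # global average pool -> (C,)
    excite = sigmoid(W2 @ np.maximum(W1 @ squeeze, 0.0))  # bottleneck MLP -> (C,) in (0, 1)
    return feat * excite[:, None, None]                   # rescale each channel

def spatial_attention(feat, w):
    """Weight each spatial position by a sigmoid of pooled channel statistics."""
    avg = feat.mean(axis=0)                               # per-pixel channel mean (H, W)
    mx = feat.max(axis=0)                                 # per-pixel channel max  (H, W)
    attn = sigmoid(w[0] * avg + w[1] * mx)                # per-pixel weight in (0, 1)
    return feat * attn[None, :, :]

# Toy feature map: 4 channels over a 5x5 spatial grid; random stand-in weights
feat = np.random.rand(4, 5, 5)
W1 = np.random.rand(2, 4)   # reduction to 2 hidden units
W2 = np.random.rand(4, 2)   # expansion back to 4 channels
out = spatial_attention(channel_attention(feat, W1, W2), np.array([0.5, 0.5]))
print(out.shape)  # (4, 5, 5)
```

Because both attention maps lie in (0, 1), the output never amplifies a feature; it only attenuates the channels and positions judged less informative, which is what "suppressing the invalid features" amounts to here.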

21 pages, 14112 KiB  
Article
Building Polygon Extraction from Aerial Images and Digital Surface Models with a Frame Field Learning Framework
by Xiaoyu Sun, Wufan Zhao, Raian V. Maretto and Claudio Persello
Remote Sens. 2021, 13(22), 4700; https://doi.org/10.3390/rs13224700 - 20 Nov 2021
Cited by 6 | Viewed by 3903
Abstract
Deep learning-based models for building delineation from remotely sensed images face the challenge of producing precise and regular building outlines. This study investigates the combination of normalized digital surface models (nDSMs) with aerial images to optimize the extraction of building polygons using the frame field learning method. Results are evaluated at the pixel, object, and polygon levels. In addition, an analysis assesses the statistical deviations in the number of vertices of building polygons compared with the reference. The comparison of the number of vertices focuses on finding the output polygons that are easiest for human analysts to edit in operational applications; it can serve as guidance for reducing the post-processing workload needed to obtain high-accuracy building footprints. Experiments conducted in Enschede, the Netherlands, demonstrate that introducing the nDSM reduces the number of false positives and avoids missing real buildings on the ground. Positional accuracy and shape similarity were improved, resulting in better-aligned building polygons. The method achieved a mean intersection over union (IoU) of 0.80 with the fused data (RGB + nDSM), against an IoU of 0.57 with the baseline (RGB only) in the same area. A qualitative analysis of the results shows that the investigated model predicts more precise and regular polygons for large and complex structures. Full article
(This article belongs to the Special Issue Deep Learning for Very-High Resolution Land-Cover Mapping)
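The IoU scores quoted above follow the standard definition: the area where prediction and reference masks agree, divided by the area covered by either. A minimal sketch on toy binary masks (the masks here are invented for illustration):

```python
import numpy as np

def iou(pred, ref):
    """Intersection over union of two binary masks."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    union = np.logical_or(pred, ref).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, ref).sum() / union

# Toy 2x3 masks: 2 pixels agree, 4 pixels are covered by either mask
pred = np.array([[1, 1, 0],
                 [1, 0, 0]])
ref  = np.array([[1, 1, 0],
                 [0, 1, 0]])
print(iou(pred, ref))  # 2 / 4 = 0.5
```

A mean IoU is then simply this score averaged over classes or images, so the reported jump from 0.57 to 0.80 means the fused RGB + nDSM predictions overlap the reference footprints substantially more.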

14 pages, 5867 KiB  
Article
Sequentially Delineation of Rooftops with Holes from VHR Aerial Images Using a Convolutional Recurrent Neural Network
by Wei Huang, Zeping Liu, Hong Tang and Jiayi Ge
Remote Sens. 2021, 13(21), 4271; https://doi.org/10.3390/rs13214271 - 24 Oct 2021
Cited by 7 | Viewed by 1957
Abstract
Semantic and instance segmentation methods are commonly used for building extraction from high-resolution images. Semantic segmentation assigns a class label to each pixel in the image, thus ignoring the geometry of the building rooftop, which results in irregular shapes of the rooftop edges. Instance segmentation, in turn, carries the strong assumption that there exists only one outline polygon along the rooftop boundary. In this paper, we present a novel method to sequentially delineate the exterior and interior contours of rooftops with holes from VHR aerial images, where most of the buildings have holes, by integrating semantic segmentation and polygon delineation. Specifically, semantic segmentation from the Mask R-CNN is used as a prior for hole detection. The holes are then used as objects for generating the internal contours of the rooftop. The external and internal contours of the rooftop are inferred separately using a convolutional recurrent neural network. Experimental results showed that the proposed method can effectively delineate rooftops with both single and multiple polygons, outperforming state-of-the-art methods in terms of visual results and six statistical indicators, including IoU, OA, F1, BoundF, RE, and Hd. Full article
(This article belongs to the Special Issue Deep Learning for Very-High Resolution Land-Cover Mapping)

Review

Jump to: Research

28 pages, 2402 KiB  
Review
A Review of Landcover Classification with Very-High Resolution Remotely Sensed Optical Images—Analysis Unit, Model Scalability and Transferability
by Rongjun Qin and Tao Liu
Remote Sens. 2022, 14(3), 646; https://doi.org/10.3390/rs14030646 - 29 Jan 2022
Cited by 45 | Viewed by 8378
Abstract
As an important application in remote sensing, landcover classification remains one of the most challenging tasks in very-high-resolution (VHR) image analysis. As a rapidly increasing number of Deep Learning (DL)-based landcover methods and training strategies are claimed to be state-of-the-art, the already fragmented technical landscape of landcover mapping methods has become further complicated. Although a plethora of review articles attempt to guide researchers in making an informed choice of landcover mapping methods, they either focus on applications in a specific area or revolve around general deep learning models, lacking a systematic view of the ever-advancing landcover mapping methods. In addition, issues related to training samples and model transferability have become more critical than ever in an era dominated by data-driven approaches, yet previous review articles on remote sensing classification addressed these issues to a lesser extent. Therefore, in this paper, we present a systematic overview of existing methods, starting from the learning methods and basic analysis units used for landcover mapping tasks and moving to challenges and solutions in three aspects of scalability and transferability with a remote sensing classification focus: (1) sparsity and imbalance of data; (2) domain gaps across different geographical regions; and (3) multi-source and multi-view fusion. We discuss each of these categories of methods in detail, draw concluding remarks on these developments, and recommend potential directions for continued work. Full article
(This article belongs to the Special Issue Deep Learning for Very-High Resolution Land-Cover Mapping)
