Special Issue "Deep Transfer Learning for Remote Sensing"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 31 December 2019.

Special Issue Editors

Dr. Jianzhe Lin
Guest Editor
Department of Electrical and Computer Engineering, University of British Columbia, 2366 Main Mall, Vancouver, BC V6T 1Z4, Canada
Tel. 1-778-323-1225
Interests: deep learning; machine learning; hyperspectral image classification; band selection
Dr. Zhiyu Jiang
Guest Editor
Northwestern Polytechnical University, 127 West Youyi Road, Xi'an, Shaanxi 710072, P.R. China
Tel. +86-29-8849-5716
Interests: machine learning; remote sensing; semantic segmentation; scene parsing; small-sample learning
Dr. Sarbjit Sarkaria
Guest Editor
Department of Electrical and Computer Engineering, University of British Columbia, 2366 Main Mall, Vancouver, BC V6T 1Z4, Canada
Interests: reinforcement learning; deep learning; image classification
Dr. Dandan Ma
Guest Editor
University of Chinese Academy of Sciences, No. 19(A) Yuquan Road, Shijingshan District, Beijing, P.R. China
Interests: machine learning; remote sensing; target detection; hyperspectral anomaly detection; hyperspectral classification
Dr. Yang Zhao
Guest Editor
Xi'an Institute of Optics and Precision Mechanics, CAS, No. 17 Xinxi Road, New Industrial Park, Xi'an Hi-Tech Industrial Development Zone, Xi'an, Shaanxi, P.R. China
Interests: machine learning; remote sensing; action recognition; video segmentation; hyperspectral classification

Special Issue Information

Dear Colleagues,

Deep learning (DL) for remote sensing (RS) image processing has recently become a highly active topic. Many DL models, from AlexNet and ResNet to the more recently proposed capsule network, have demonstrated strong performance on RS imagery when sufficient labeled data are available for training. However, newly collected RS data typically arrive with little or no label information, which makes them far more difficult for DL models to process. As modern satellite sensors proliferate and new RS data become ever easier to acquire, the problem of processing this growing volume of unlabeled data becomes increasingly urgent. A natural strategy is to exploit existing labeled RS data to interpret the unknown new data. To this end, deep transfer learning frameworks that can bridge the semantic gap between different datasets have become a research frontier in RS data processing: knowledge extracted from existing labeled data is used to predict the labels of newly collected RS data.
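
To make this idea concrete, here is a minimal transfer-learning sketch in PyTorch, assuming the most common recipe: reuse a backbone pretrained on a large labeled source (ImageNet here, purely for illustration), freeze it, and retrain only a new classification head on the small labeled portion of a newly collected RS dataset. The class count and training batches are placeholders rather than any specific dataset.

```python
import torch
import torch.nn as nn
from torchvision import models

num_rs_classes = 10  # placeholder: land-cover classes in the new RS scene

model = models.resnet18(pretrained=True)   # backbone trained on ImageNet
for p in model.parameters():
    p.requires_grad = False                # freeze the transferred weights
model.fc = nn.Linear(model.fc.in_features, num_rs_classes)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def finetune_step(images, labels):
    """One optimization step on a batch of labeled RS images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Once the new head converges, partially unfreezing the deepest backbone stage is a common refinement when slightly more labeled target data are available.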

This Special Issue is devoted to exploring the potential of deep transfer learning frameworks in RS image processing. Because of differences in acquisition conditions and sensors, the spectra observed in a new scene can differ substantially from those of an existing scene, even when they represent the same types of objects. This spectral shift creates a large semantic disparity between RS datasets. How to select, construct, and correlate deep networks via transfer learning across different RS datasets is therefore the central concern of this Special Issue.
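
One standard way to quantify such a cross-scene gap is the maximum mean discrepancy (MMD) between source and target feature distributions; minimizing it alongside the task loss is a common domain adaptation baseline. Below is a minimal sketch (a simple biased estimator with an RBF kernel; the feature batches are random placeholders):

```python
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """Biased squared-MMD estimate with an RBF kernel between batches x, y."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Example: penalize the gap between source-scene and target-scene features.
src = torch.randn(32, 64)  # placeholder source features
tgt = torch.randn(32, 64)  # placeholder target features
gap = rbf_mmd2(src, tgt)   # drive this toward zero alongside the task loss
```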

Topics of interest include, but are not limited to:

  • Theories for domain adaptation and generalization;
  • Auto-encoder-based transfer learning for remote sensing;
  • CNN-based transfer learning for remote sensing;
  • RNN-based transfer learning for remote sensing;
  • Capsule network-based transfer learning for remote sensing;
  • Domain generalization algorithms for visual problems;
  • Deep representation learning for domain adaptation and generalization.

Dr. Jianzhe Lin
Dr. Zhiyu Jiang
Dr. Sarbjit Sarkaria
Dr. Dandan Ma
Dr. Yang Zhao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Deep Transfer Learning
  • Domain Adaptation
  • Machine Learning
  • Convolutional Network
  • Remote Sensing Image

Published Papers (5 papers)

Research

Open Access Article
Category-Sensitive Domain Adaptation for Land Cover Mapping in Aerial Scenes
Remote Sens. 2019, 11(22), 2631; https://doi.org/10.3390/rs11222631 - 11 Nov 2019
Abstract
Since manually labeling aerial images for pixel-level classification is expensive and time-consuming, developing strategies for land cover mapping without reference labels is essential and meaningful. As an efficient solution to this issue, domain adaptation has been widely utilized in numerous semantic labeling applications. However, current approaches generally pursue marginal distribution alignment between the source and target features and ignore category-level alignment; directly applying them to land cover mapping therefore leads to unsatisfactory performance in the target domain. To address this problem, we embed a geometry-consistent generative adversarial network (GcGAN) into a co-training adversarial learning network (CtALN) and develop a category-sensitive domain adaptation (CsDA) method for land cover mapping using very-high-resolution (VHR) optical aerial images. The GcGAN aims to eliminate the domain discrepancies between labeled and unlabeled images while retaining their intrinsic land cover information by translating the features of the labeled images from the source domain to the target domain. Meanwhile, the CtALN aims to learn a semantic labeling model in the target domain with the translated features and corresponding reference labels. By training this hybrid framework, our method learns to distill knowledge from the source domain and transfer it to the target domain, preserving not only global domain consistency but also category-level consistency between labeled and unlabeled images in the feature space. Experimental results on two airborne benchmark datasets, together with comparisons against other state-of-the-art methods, verify the robustness and superiority of the proposed CsDA.
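
As a rough illustration of the translate-then-segment idea in this abstract (a hedged sketch, not the authors' GcGAN/CtALN implementation): translate labeled source images toward the target style with an adversarial generator, then train the segmentation network on the translated images using the original source labels. `G`, `D`, `segnet`, and the optimizers are assumed user-defined modules; the discriminator's own update step is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def translate_then_segment_step(src_img, src_lbl, G, D, segnet, opt_g, opt_seg):
    # 1) Adversarial translation: G pushes source imagery toward the
    #    target-domain style; D judges realism (its update is elsewhere).
    fake_tgt = G(src_img)
    logits = D(fake_tgt)
    g_loss = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # 2) Supervised segmentation on translated images with the source labels.
    pred = segnet(fake_tgt.detach())       # (N, C, H, W) class scores
    seg_loss = F.cross_entropy(pred, src_lbl)  # src_lbl: (N, H, W) class ids
    opt_seg.zero_grad(); seg_loss.backward(); opt_seg.step()
    return g_loss.item(), seg_loss.item()
```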

Open Access Article
Deep Transfer Learning for Few-Shot SAR Image Classification
Remote Sens. 2019, 11(11), 1374; https://doi.org/10.3390/rs11111374 - 08 Jun 2019
Cited by 1
Abstract
The reemergence of Deep Neural Networks (DNNs) has led to high-performance supervised learning algorithms for classification and detection problems in the Electro-Optical (EO) domain. This success is possible because huge labeled datasets can be generated using modern crowdsourcing labeling platforms, such as Amazon's Mechanical Turk, that recruit ordinary people to label data. Unlike the EO domain, labeling Synthetic Aperture Radar (SAR) data is much more challenging, and for various reasons, using crowdsourcing platforms is not feasible; as a result, training deep networks with supervised learning is more difficult in the SAR domain. In this paper, we present a new framework to train a deep neural network for classifying SAR images that eliminates the need for a huge labeled dataset. Our idea is based on transferring knowledge from a related EO domain problem, where labeled data are easy to obtain. We transfer knowledge from the EO domain by learning a shared, invariant cross-domain embedding space that is also discriminative for classification. To this end, we train two deep encoders that are coupled through their last layer to map data points from the EO and SAR domains into the shared embedding space such that the distance between the distributions of the two domains is minimized in the latent space. We use the Sliced Wasserstein Distance (SWD) to measure and minimize this distance, and use a limited number of labeled SAR data points to match the distributions class-conditionally. As a result of this training procedure, a classifier trained from the embedding space to the label space using mostly EO data generalizes well on the SAR domain. We provide a theoretical analysis of why our approach is effective and validate our algorithm on the problem of ship classification in the SAR domain by comparing against several competing learning approaches.
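
The Sliced Wasserstein Distance at the core of this framework is simple to implement: project both embedding batches onto random one-dimensional directions, where the optimal transport coupling reduces to sorting, and average the resulting 1-D distances. A minimal NumPy sketch (equal batch sizes assumed):

```python
import numpy as np

def sliced_wasserstein2(x, y, n_proj=128, seed=0):
    """Average squared 1-D Wasserstein distance over random projections.
    x, y: (n, d) embedding batches of the same size n."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=(n_proj, x.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # unit directions
    px = np.sort(x @ theta.T, axis=0)   # (n, n_proj) sorted projections
    py = np.sort(y @ theta.T, axis=0)   # sorting = 1-D optimal coupling
    return np.mean((px - py) ** 2)
```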

Open Access Article
Individual Tree-Crown Detection in RGB Imagery Using Semi-Supervised Deep Learning Neural Networks
Remote Sens. 2019, 11(11), 1309; https://doi.org/10.3390/rs11111309 - 01 Jun 2019
Cited by 3
Abstract
Remote sensing can transform the speed, scale, and cost of biodiversity and forestry surveys. Data acquisition currently outpaces the ability to identify individual organisms in high-resolution imagery. We outline an approach for identifying tree-crowns in RGB imagery using a semi-supervised deep learning detection network. Individual crown delineation has been a long-standing challenge in remote sensing, and available algorithms produce mixed results. We show that deep learning models can leverage existing Light Detection and Ranging (LIDAR)-based unsupervised delineation to generate trees that are used to train an initial RGB crown detection model. Despite limitations in the original unsupervised detection approach, this noisy training data may contain information from which the neural network can learn initial tree features. We then refine the initial model using a small number of higher-quality hand-annotated RGB images. We validate the proposed approach using an open-canopy site in the National Ecological Observatory Network. Our results show that a model using 434,551 self-generated trees with the addition of 2848 hand-annotated trees yields accurate predictions in natural landscapes. Using an intersection-over-union threshold of 0.5, the full model had an average tree-crown recall of 0.69 and a precision of 0.61 on the visually annotated data, and an average tree detection rate of 0.82 for the field-collected stems. The addition of a small number of hand-annotated trees improved performance over the initial self-supervised model. This semi-supervised deep learning approach demonstrates that remote sensing can overcome a lack of labeled training data by generating noisy data for initial training with unsupervised methods and retraining the resulting models with high-quality labeled data.
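
A hedged sketch of the two-stage recipe this abstract describes, using torchvision's off-the-shelf Faster R-CNN as a stand-in detector (the authors' actual network may differ): pretrain on the large set of noisy LIDAR-derived crown boxes, then fine-tune on the small hand-annotated set at a lower learning rate. The two data loaders are hypothetical placeholders.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

detector = fasterrcnn_resnet50_fpn(num_classes=2)  # background + "tree"

def train(loader, lr, epochs):
    opt = torch.optim.SGD(detector.parameters(), lr=lr, momentum=0.9)
    detector.train()
    for _ in range(epochs):
        # images: list of (C, H, W) tensors; targets: list of dicts with
        # "boxes" and "labels" -- torchvision detectors return a loss dict
        # in training mode, which we simply sum.
        for images, targets in loader:
            loss = sum(detector(images, targets).values())
            opt.zero_grad(); loss.backward(); opt.step()

# Stage 1: pretrain on the large, noisy LIDAR-derived crown boxes.
train(lidar_generated_loader, lr=1e-3, epochs=10)   # hypothetical loader
# Stage 2: refine on the small hand-annotated set at a lower learning rate.
train(hand_annotated_loader, lr=1e-4, epochs=20)    # hypothetical loader
```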

Open Access Article
Effective Airplane Detection in Remote Sensing Images Based on Multilayer Feature Fusion and Improved Nonmaximal Suppression Algorithm
Remote Sens. 2019, 11(9), 1062; https://doi.org/10.3390/rs11091062 - 05 May 2019
Cited by 2
Abstract
To address the insufficient representation of weak, small objects and the problem of overlapping detection boxes in airplane detection, an effective airplane detection method for remote sensing images is proposed, based on multilayer feature fusion and an improved nonmaximal suppression algorithm. First, exploiting the low-level visual features shared by natural images and airport remote sensing images, region-based convolutional neural networks are used to perform transfer learning on airplane images with a limited amount of data. Then, L2-norm normalization, feature concatenation, scale scaling, and feature dimension reduction are introduced to achieve an effective fusion of low- and high-level features. Finally, a nonmaximal suppression method based on a soft decision function is proposed to solve the overlap problem of detection boxes. Experimental results show that the proposed method effectively improves the representation of weak, small objects and quickly and accurately detects airplanes in airport areas.
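
The paper's soft decision function is in the spirit of Soft-NMS: rather than discarding boxes that overlap a higher-scoring detection, decay their scores continuously with the overlap. A minimal NumPy sketch of the Gaussian-decay variant (not necessarily the authors' exact function):

```python
import numpy as np

def iou(box, boxes):
    """IoU of one box (4,) against many boxes (n, 4), (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Return indices of kept boxes; overlapping scores decay, not vanish."""
    scores = scores.copy()
    keep, idx = [], list(range(len(scores)))
    while idx:
        best = max(idx, key=lambda i: scores[i])
        keep.append(best)
        idx.remove(best)
        if idx:
            overlaps = iou(boxes[best], boxes[idx])
            scores[idx] *= np.exp(-overlaps ** 2 / sigma)  # Gaussian decay
            idx = [i for i in idx if scores[i] > score_thresh]
    return keep
```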

Open Access Article
Aerial Image Road Extraction Based on an Improved Generative Adversarial Network
Remote Sens. 2019, 11(8), 930; https://doi.org/10.3390/rs11080930 - 17 Apr 2019
Cited by 2
Abstract
Aerial photographs and satellite images are among the principal resources for Earth observation. In practice, automated road detection in aerial images is of significant value for applications such as car navigation, law enforcement, and fire services. In this paper, we present a novel road extraction method for aerial images based on an improved generative adversarial network; it is an end-to-end framework that requires only a few training samples. Experimental results on the Massachusetts Roads Dataset show that the proposed method outperforms several state-of-the-art techniques in terms of detection accuracy, recall, precision, and F1-score.
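
For readers unfamiliar with GAN-based road extraction, a pix2pix-style generator objective is a common baseline for such image-to-mask tasks; this hedged sketch is illustrative and not necessarily the authors' "improved" design. `G` and `D` are assumed user-defined networks, with the discriminator conditioned on the input aerial image.

```python
import torch
import torch.nn.functional as F

def generator_loss(G, D, aerial, road_mask, l1_weight=100.0):
    """Adversarial term plus L1 term against the reference road mask."""
    fake_mask = G(aerial)
    logits = D(torch.cat([aerial, fake_mask], dim=1))  # conditional critic
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return adv + l1_weight * F.l1_loss(fake_mask, road_mask)
```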
