Special Issue "Deep Learning and Feature Mining Using Hyperspectral Imagery"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 31 December 2020.

Special Issue Editors

Prof. Dr. Jonathan C-W Chan
Guest Editor
Department of Electronics and Informatics, Vrije Universiteit Brussel, 1050 Brussels, Belgium
Interests: hyperspectral analysis; land cover classification; machine learning; superresolution enhancement
Prof. Jocelyn Chanussot
Guest Editor
GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402 SAINT MARTIN D'HERES CEDEX, France
Interests: image processing; machine learning; mathematical morphology; hyperspectral imaging; data fusion
Prof. Dr. Begüm Demir
Guest Editor
Remote Sensing Image Analysis (RSiM) Group, Technische Universität Berlin, 10587 Berlin, Germany
Interests: remote sensing; big data processing and analysis; image processing; signal processing; machine learning; deep learning; image retrieval and classification
Dr. Pedram Ghamisi
Guest Editor
1: Helmholtz-Zentrum Dresden-Rossendorf, Helmholtz Institute Freiberg for Resource Technology, Chemnitzer Str. 40, D-09599 Freiberg, Germany
2: CTO and co-founder at VasoGnosis, 313 N Plankinton Ave, Suite 211, Milwaukee, WI 53203, USA
Interests: multisensor data fusion; machine and deep learning; image and signal processing; hyperspectral image analysis
Prof. Xiuping Jia
Guest Editor
School of Engineering and Information Technology, University of New South Wales, Canberra, ACT 2600, Australia
Interests: image processing; data analysis and remote sensing applications
Prof. Ying Li
Guest Editor
School of Computer Science, Northwestern Polytechnical University, Xi'an, China
Interests: information extraction; remote sensing
Dr. Naoto Yokoya
Guest Editor
Prof. Xiaoxiang Zhu
Guest Editor
Signal Processing in Earth Observation, Technical University of Munich (TUM); Head of the "EO Data Science" Department, German Aerospace Center (DLR), Germany
Interests: signal processing; remote sensing; synthetic aperture radar; hyperspectral imaging
Prof. Yongqiang Zhao
Guest Editor
School of Automation, Northwestern Polytechnical University, Youyi West Road 127#, Xi'an 710072, China
Interests: hyperspectral remote sensing; superresolution; polarization imaging; image processing; sparse coding; image fusion; deep learning

Special Issue Information

Dear Colleagues,

Current and future hyperspectral (HS) Earth observation (EO) missions will provide data coverage that has never been available before, with a largely untapped potential. While the international scientific community has been preparing intensively for the manipulation and exploitation of new hyperspectral data, there is still a large gap between our current understanding and the wealth of knowledge that spaceborne EO hyperspectral data can provide. Hence, powerful feature mining (FM) algorithms are required to extract useful information. Deep learning (DL), i.e., algorithms inspired by artificial neural networks (ANNs), has received unprecedented attention and popularity for hyperspectral data processing. Even with so much literature devoted to this topic, there is still much we do not know about deep learning. This Special Issue is dedicated to hyperspectral analyses with deep learning and novel feature mining algorithms. The scope is broad, but contributions with a sufficiently specific focus are preferred.

For this Special Issue, we welcome contributions related to:

  • Understanding of DL architecture for HS processing
  • DL-based transfer learning
  • Distributed DL for big HS data analysis
  • DL/FM for multi-modal fusion (HS with MSI, LiDAR, radar, etc.)
  • Unsupervised feature learning with DL or novel feature mining algorithms for HS
  • DL for new spaceborne EO HS data
  • New HS applications with DL/FM algorithms

Prof. Dr. Jonathan C-W Chan
Prof. Jocelyn Chanussot
Prof. Dr. Begüm Demir
Dr. Pedram Ghamisi
Prof. Xiuping Jia
Prof. Ying Li
Dr. Naoto Yokoya
Prof. Yongqiang Zhao
Prof. Xiaoxiang Zhu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (7 papers)


Research

Open Access Article
Transfer Change Rules from Recurrent Fully Convolutional Networks for Hyperspectral Unmanned Aerial Vehicle Images without Ground Truth Data
Remote Sens. 2020, 12(7), 1099; https://doi.org/10.3390/rs12071099 - 30 Mar 2020
Abstract
Change detection (CD) networks based on supervised learning have been used in diverse CD tasks. However, such supervised CD networks require a large amount of data and only use information from current images. In addition, it is time consuming to manually acquire the ground truth data for newly obtained images. Here, we propose a novel method for CD when training data are lacking in an area adjacent to another area with available ground truth data. The proposed method automatically generates training data and fine-tunes the CD network. To detect changes in target images without ground truth data, difference images were generated using a spectral similarity measure, and the training data were selected via fuzzy c-means clustering. Recurrent fully convolutional networks with multiscale three-dimensional filters were used to extract objects of various sizes from unmanned aerial vehicle (UAV) images. The CD network was pre-trained on labeled source domain data; then, the network was fine-tuned on target images using the generated training data. Two further CD networks were trained with a combined weighted loss function. The training data in the target domain were iteratively updated using the prediction map of the CD network. Experiments on two hyperspectral UAV datasets confirmed that the proposed method is capable of transferring change rules and improving CD results based on training data extracted in an unsupervised way.
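The unsupervised training-data generation step described in the abstract (a spectral-similarity difference image clustered by fuzzy c-means) can be sketched roughly as follows. This is not the authors' code: the patch sizes, the simulated change, the choice of spectral angle as the similarity measure, and the tiny fuzzy c-means routine are all illustrative assumptions.

```python
import numpy as np

def spectral_angle(a, b):
    """Per-pixel spectral angle (radians) between two spectra; larger = more change."""
    cos = np.sum(a * b, axis=-1) / (
        np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def fuzzy_cmeans_1d(x, c=2, m=2.0, iters=50):
    """Minimal fuzzy c-means on scalar values; returns centers and memberships."""
    centers = np.linspace(x.min(), x.max(), c)
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12            # (n, c)
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        centers = (u ** m).T @ x / np.sum(u ** m, axis=0)
    return centers, u

# Two co-registered hyperspectral patches (synthetic): 20x20 pixels, 50 bands.
rng = np.random.default_rng(1)
t1 = rng.random((20, 20, 50))
t2 = t1.copy()
t2[:5, :5] += 0.8                        # simulate a changed region
diff = spectral_angle(t1, t2).ravel()    # difference image (change magnitude)
centers, u = fuzzy_cmeans_1d(diff)
changed = np.argmax(centers)             # cluster with the larger angle = "change"
labels = (np.argmax(u, axis=1) == changed).reshape(20, 20)
```

Pixels with confident memberships like these would then serve as pseudo-labels for fine-tuning the CD network on the target domain.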
(This article belongs to the Special Issue Deep Learning and Feature Mining Using Hyperspectral Imagery)

Open Access Article
An Integrative Remote Sensing Application of Stacked Autoencoder for Atmospheric Correction and Cyanobacteria Estimation Using Hyperspectral Imagery
Remote Sens. 2020, 12(7), 1073; https://doi.org/10.3390/rs12071073 - 27 Mar 2020
Abstract
Hyperspectral image sensing can be used to effectively detect the distribution of harmful cyanobacteria. To accomplish this, physical- and/or model-based simulations have been conducted to perform an atmospheric correction (AC) and an estimation of pigments, including phycocyanin (PC) and chlorophyll-a (Chl-a), in cyanobacteria. However, such simulations were undesirable in certain cases, due to the difficulty of representing dynamically changing aerosol and water vapor in the atmosphere and the optical complexity of inland water. Thus, this study focused on the development of a deep neural network model for AC and cyanobacteria estimation, without considering the physical formulation. The stacked autoencoder (SAE) network was adopted for the feature extraction and dimensionality reduction of hyperspectral imagery. The artificial neural network (ANN) and support vector regression (SVR) were sequentially applied to achieve AC and estimate cyanobacteria concentrations (i.e., SAE-ANN and SAE-SVR). Further, the ANN and SVR models without SAE were compared with the SAE-ANN and SAE-SVR models for the performance evaluations. In terms of AC performance, both SAE-ANN and SAE-SVR displayed reasonable accuracy with the Nash–Sutcliffe efficiency (NSE) > 0.7. For PC and Chl-a estimation, the SAE-ANN model showed the best performance, by yielding NSE values > 0.79 and > 0.77, respectively. SAE, with fine-tuning operators, improved the accuracy of the original ANN and SVR estimations, in terms of both AC and cyanobacteria estimation. This is primarily attributed to the high-level feature extraction of SAE, which can represent the spatial features of cyanobacteria. Therefore, this study demonstrated that the deep neural network has a strong potential to realize an integrative remote sensing application.
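The abstract's core idea of a stacked autoencoder for spectral feature extraction can be illustrated with a toy, layer-wise-trained autoencoder in plain numpy. Everything here is an assumption for illustration (layer sizes, learning rate, tied weights, synthetic "spectra"); it is not the paper's network.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_ae_layer(X, hidden, lr=0.05, epochs=500, seed=0):
    """Train one autoencoder layer (sigmoid encoder, linear tied-weight
    decoder) by gradient descent; returns the encoder weights and bias."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(0, 0.1, (d, hidden))
    b = np.zeros(hidden)
    c = np.zeros(d)
    for _ in range(epochs):
        H = sigmoid(X @ W + b)                  # encode
        R = H @ W.T + c                         # decode (tied weights)
        err = R - X                             # reconstruction error
        dH = err @ W * H * (1 - H)              # backprop through encoder
        W -= lr * (X.T @ dH + err.T @ H) / len(X)
        b -= lr * dH.mean(axis=0)
        c -= lr * err.mean(axis=0)
    return W, b

# Synthetic "spectra": 200 samples, 30 bands lying near a 3-D subspace.
rng = np.random.default_rng(1)
X = rng.random((200, 3)) @ rng.random((3, 30))
H = X
for h in (16, 8):                               # two stacked layers: 30 -> 16 -> 8
    W, b = train_ae_layer(H, h)
    H = sigmoid(H @ W + b)                      # features fed to the next layer
features = H                                    # 8-D representation
```

In the paper's pipeline, a low-dimensional representation like `features` would then be passed to the downstream ANN or SVR regressors.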

Open Access Article
A Two-stage Deep Domain Adaptation Method for Hyperspectral Image Classification
Remote Sens. 2020, 12(7), 1054; https://doi.org/10.3390/rs12071054 - 25 Mar 2020
Abstract
Deep learning has attracted extensive attention in the field of hyperspectral image (HSI) classification. However, supervised deep learning methods heavily rely on a large amount of label information. To address this problem, in this paper, we propose a two-stage deep domain adaptation method for hyperspectral image classification, which can minimize the data shift between two domains and learn a more discriminative deep embedding space with very few labeled target samples. A deep embedding space is first learned by minimizing the distance between the source domain and the target domain based on the Maximum Mean Discrepancy (MMD) criterion. The Spatial–Spectral Siamese Network is then exploited to reduce the data shift and learn a more discriminative deep embedding space by minimizing the distance between samples from different domains with the same class label and maximizing the distance between samples from different domains and class labels based on a pairwise loss. For the classification task, the softmax layer is replaced with a linear support vector machine, in which learning minimizes a margin-based loss instead of the cross-entropy loss. The experimental results on two sets of hyperspectral remote sensing images show that the proposed method can outperform several state-of-the-art methods.
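The MMD criterion the abstract refers to can be illustrated with a minimal numpy sketch (not the authors' implementation; the kernel bandwidth, sample sizes, and synthetic "domains" are assumptions):

```python
import numpy as np

def mmd_rbf(X, Y, gamma=0.1):
    """Biased estimate of the squared Maximum Mean Discrepancy between sample
    sets X and Y under an RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * np.maximum(d2, 0.0))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (100, 5))        # "source domain" features
tgt_near = rng.normal(0.0, 1.0, (100, 5))   # same distribution as the source
tgt_far = rng.normal(2.0, 1.0, (100, 5))    # shifted distribution (data shift)
m_near = mmd_rbf(src, tgt_near)
m_far = mmd_rbf(src, tgt_far)
```

In a domain adaptation setting, a quantity like this is minimized over the network's embeddings to align the two domains; here it only shows that MMD grows with distribution shift (`m_far > m_near`).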

Open Access Article
Joint Spatial-spectral Resolution Enhancement of Multispectral Images with Spectral Matrix Factorization and Spatial Sparsity Constraints
Remote Sens. 2020, 12(6), 993; https://doi.org/10.3390/rs12060993 - 19 Mar 2020
Abstract
This paper presents a joint spatial-spectral resolution enhancement technique to improve the resolution of multispectral images in the spatial and spectral domain simultaneously. Reconstructed hyperspectral images (HSIs) from an input multispectral image represent the same scene in higher spatial resolution, with more spectral bands of narrower wavelength width than the input multispectral image. Many existing improvement techniques focus on spatial- or spectral-resolution enhancement, which may cause spectral distortions and spatial inconsistency. The proposed scheme introduces virtual intermediate variables to formulate a spectral observation model and a spatial observation model. The models alternately solve the spectral dictionary and abundances to reconstruct the desired high-resolution HSIs. An initial spectral dictionary is trained from prior HSIs captured in different landscapes. A spatial dictionary trained from a panchromatic image and its sparse coefficients provide high spatial-resolution information. The sparse coefficients are used as constraints to obtain high spatial-resolution abundances. Experiments performed on simulated datasets from AVIRIS/Landsat 7 and a real Hyperion/ALI dataset demonstrate that the proposed method outperforms state-of-the-art spatial- and spectral-resolution enhancement methods. The proposed method also worked well in combination with existing spatial- and spectral-resolution enhancement methods.
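The core linear-mixing step behind such a spectral observation model can be sketched as follows. This toy example is not the paper's method (it omits the sparsity constraints, the spatial model, and dictionary learning); the spectral response matrix `S`, the dictionary, and all sizes are invented for illustration.

```python
import numpy as np

# Hypothetical setup: a spectral dictionary D (bands x atoms) learned from
# prior HSIs, and a multispectral observation obtained through a known
# band-aggregation matrix S (ms_bands x hs_bands).
rng = np.random.default_rng(0)
hs_bands, ms_bands, atoms, pixels = 60, 6, 4, 100
D = np.abs(rng.normal(size=(hs_bands, atoms)))       # spectral dictionary
A_true = np.abs(rng.normal(size=(atoms, pixels)))    # true abundances
S = np.repeat(np.eye(ms_bands), hs_bands // ms_bands, axis=1)  # 10 HS bands -> 1 MS band
Y_ms = S @ (D @ A_true)                              # simulated MS observation

# Solve abundances from the MS image via the spectral observation model
# Y_ms ~ (S D) A, then reconstruct the high-spectral-resolution image as D A.
A_hat, *_ = np.linalg.lstsq(S @ D, Y_ms, rcond=None)
hsi_rec = D @ A_hat
```

Because the toy system is noise-free and `S @ D` has full column rank, the least-squares step recovers the abundances and hence the full-band image exactly; the paper's alternating scheme handles the realistic, constrained case.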

Open Access Article
Classification of Hyperspectral Image Based on Double-Branch Dual-Attention Mechanism Network
Remote Sens. 2020, 12(3), 582; https://doi.org/10.3390/rs12030582 - 10 Feb 2020
Cited by 1
Abstract
In recent years, researchers have paid increasing attention to hyperspectral image (HSI) classification using deep learning methods. To improve the accuracy and reduce the required training samples, we propose a double-branch dual-attention mechanism network (DBDA) for HSI classification in this paper. Two branches are designed in DBDA to capture the plentiful spectral and spatial features contained in HSI. Furthermore, a channel attention block and a spatial attention block are applied to these two branches respectively, which enables DBDA to refine and optimize the extracted feature maps. A series of experiments on four hyperspectral datasets shows that the proposed framework has superior performance to state-of-the-art algorithms, especially when the training samples are severely limited.
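A generic channel attention block (squeeze-and-excitation style) conveys the idea behind the channel branch; the paper's exact block design is not reproduced here, and the feature-map size, bottleneck ratio, and weights below are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(fmap, W1, W2):
    """Channel attention on an (H, W, C) feature map: global average pool ->
    small bottleneck MLP -> sigmoid channel weights -> rescale each channel."""
    squeeze = fmap.mean(axis=(0, 1))                     # (C,) channel descriptor
    weights = sigmoid(np.maximum(squeeze @ W1, 0) @ W2)  # (C,) weights in (0, 1)
    return fmap * weights[None, None, :], weights

rng = np.random.default_rng(0)
H, W, C, r = 8, 8, 16, 4
fmap = rng.random((H, W, C))
W1 = rng.normal(0, 0.5, (C, C // r))                     # bottleneck: C -> C/r
W2 = rng.normal(0, 0.5, (C // r, C))                     # expand back: C/r -> C
out, w = channel_attention(fmap, W1, W2)
```

Each channel of the feature map is multiplied by a learned scalar in (0, 1), so informative spectral channels can be emphasized and uninformative ones suppressed; a spatial attention block does the analogous reweighting over pixel positions.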

Open Access Article
Fine-Grained Classification of Hyperspectral Imagery Based on Deep Learning
Remote Sens. 2019, 11(22), 2690; https://doi.org/10.3390/rs11222690 - 18 Nov 2019
Cited by 2
Abstract
Hyperspectral remote sensing obtains abundant spectral and spatial information of the observed object simultaneously, which offers an opportunity to classify hyperspectral imagery (HSI) in a fine-grained manner. In this study, the fine-grained classification of HSI, which contains a large number of classes, is investigated. On one hand, traditional classification methods cannot handle fine-grained classification of HSI well; on the other hand, deep learning methods have proven powerful in fine-grained classification. Thus, in this paper, deep learning is explored for HSI supervised and semi-supervised fine-grained classification. For supervised HSI fine-grained classification, a densely connected convolutional neural network (DenseNet) is explored for accurate classification. Moreover, DenseNet is combined with a pre-processing technique (i.e., principal component analysis or auto-encoder) or a post-processing technique (i.e., conditional random field) to further improve classification performance. For semi-supervised HSI fine-grained classification, a generative adversarial network (GAN), which includes a discriminative CNN and a generative CNN, is carefully designed. The GAN fully uses the labeled and unlabeled samples to improve classification accuracy. The proposed methods were tested on the Indian Pines data set, which contains 333,951 samples in 52 classes. The experimental results show that the deep learning-based methods provide great improvements compared with traditional methods, demonstrating that deep models have huge potential for HSI fine-grained classification.
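The principal component analysis pre-processing step mentioned above amounts to projecting each pixel's spectrum onto the leading eigenvectors of the band covariance matrix. A minimal numpy sketch (the cube size and rank-3 synthetic data are assumptions, not the Indian Pines data):

```python
import numpy as np

def pca_reduce(cube, k):
    """Project an (H, W, B) hyperspectral cube onto its top-k principal
    components along the spectral axis."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B)
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(Xc) - 1)            # band covariance matrix (B x B)
    vals, vecs = np.linalg.eigh(cov)
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # k leading eigenvectors
    return (Xc @ top).reshape(H, W, k)

rng = np.random.default_rng(0)
cube = rng.random((10, 10, 3)) @ rng.random((3, 50))  # rank-3 synthetic cube
reduced = pca_reduce(cube, 3)
X = cube.reshape(-1, 50)
total_energy = ((X - X.mean(axis=0)) ** 2).sum()      # energy of centered data
```

Because the synthetic spectra lie in a 3-dimensional subspace, the top-3 components retain all of the centered data's energy; on real HSI one would keep enough components to preserve most of the variance before feeding the reduced cube to DenseNet.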

Open Access Article
A Multi-Scale Wavelet 3D-CNN for Hyperspectral Image Super-Resolution
Remote Sens. 2019, 11(13), 1557; https://doi.org/10.3390/rs11131557 - 30 Jun 2019
Cited by 3
Abstract
Super-resolution (SR) is significant for hyperspectral image (HSI) applications. In single-frame HSI SR, reconstructing detailed image structures in high resolution (HR) HSI is challenging since there is no auxiliary image (e.g., an HR multispectral image) providing structural information. Wavelets can capture image structures in different orientations, and an emphasis on predicting high-frequency wavelet sub-bands helps recover the detailed structures in HSI SR. In this study, we propose a multi-scale wavelet 3D convolutional neural network (MW-3D-CNN) for HSI SR, which predicts the wavelet coefficients of HR HSI rather than directly reconstructing the HR HSI. To exploit the correlation in the spectral and spatial domains, the MW-3D-CNN is built with 3D convolutional layers. An embedding subnet and a predicting subnet constitute the MW-3D-CNN: the embedding subnet extracts deep spatial-spectral features from the low resolution (LR) HSI and represents the LR HSI as a set of feature cubes. The feature cubes are then fed to the predicting subnet. There are multiple output branches in the predicting subnet, each of which corresponds to one wavelet sub-band and predicts the wavelet coefficients of HR HSI. The HR HSI can be obtained by applying the inverse wavelet transform to the predicted wavelet coefficients. In the training stage, we propose to train the MW-3D-CNN with an L1 norm loss, which is more suitable than the conventional L2 norm loss for penalizing the errors in different wavelet sub-bands. Experiments on both simulated and real spaceborne HSI demonstrate that the proposed algorithm is competitive with other state-of-the-art HSI SR methods.
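The sub-band decomposition the network predicts, and the inverse transform that rebuilds the image, can be illustrated with a single-level 2-D Haar transform on one band. This is only a sketch of the wavelet machinery; the paper's wavelet choice, level count, and multi-scale design may differ.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform -> four sub-bands (LL, LH, HL, HH)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0      # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0      # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0         # low-low: coarse image
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0         # high-frequency sub-bands
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d: rebuild the image from the four sub-bands."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

rng = np.random.default_rng(0)
band = rng.random((16, 16))                      # one band of a synthetic HSI
subbands = haar2d(band)
recon = ihaar2d(*subbands)                       # perfect reconstruction
```

In the MW-3D-CNN setting, each output branch predicts one such sub-band of the HR image; applying the inverse transform to the predicted coefficients yields the super-resolved result.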
