Special Issue "Robust Multispectral/Hyperspectral Image Analysis and Classification"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 31 May 2019

Special Issue Editors

Guest Editor
Dr. Chen Chen

Department of Electrical and Computer Engineering, The University of North Carolina at Charlotte, Charlotte, NC 28223, USA
Interests: compressed sensing; signal and image processing; pattern recognition; computer vision; hyperspectral image analysis
Guest Editor
Dr. Junjun Jiang

National Institute of Informatics, Tokyo, Japan
Interests: multi- and hyperspectral remote sensing image processing and analysis; super-resolution; fusion; denoising; unmixing; classification; feature extraction
Guest Editor
Dr. Jiayi Ma

Electronic Information School, Wuhan University, Wuhan 430072, China
Interests: deep learning; computer vision; pattern recognition; remote sensing
Guest Editor
Dr. Sidike Paheding

Remote Sensing Lab, Department of Earth & Atmospheric Sciences, Saint Louis University, St. Louis, MO 63108, USA
Interests: computer vision; machine learning; remote sensing; biomedical imaging

Special Issue Information

Dear Colleagues,

Satellite imagery, such as multispectral/hyperspectral images, is a powerful source of information, offering richer spatial, spectral, and temporal resolution than traditional images. Over the past decade, the remote sensing community has devoted intensive effort to building accurate remote sensing image classifiers. However, remote sensing image analysis and classification face inherent challenges. For example, labeled data for remote sensing imagery (e.g., multispectral and hyperspectral images) are scarce, since obtaining a large number of samples with class labels is time-consuming and expensive. Moreover, real hyperspectral image data inevitably contain considerable noise (Gaussian noise, dead lines, and other mixed noise) due to the physical limitations of imaging sensors. In addition, label noise (i.e., mislabeled pixels) poses challenges for supervised classification algorithms. Developing robust image classification and analysis methods that can handle these issues is therefore a pressing need for practical applications.

The aim of this Special Issue is to gather cutting-edge works that address the aforementioned challenges in multispectral/hyperspectral image analysis and classification. The main topics include, but are not limited to:

  • Robust multispectral/hyperspectral image classification algorithms and feature representations under the conditions of
    • Noisy data
    • Noisy labels
    • Small sample size
    • Data imbalance
  • Multispectral/hyperspectral image denoising
  • Missing data reconstruction
  • Multispectral/hyperspectral data unmixing
  • Illumination enhancement
  • Noise-robust multispectral/hyperspectral image analysis
    • Compression
    • Compressive sensing
    • Object/target/anomaly detection
    • Super-resolution
    • Feature/correspondence matching
    • Fusion
Dr. Chen Chen
Dr. Junjun Jiang
Dr. Jiayi Ma
Dr. Sidike Paheding
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and a short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multispectral/hyperspectral remote sensing
  • Remote sensing image analysis
  • Noise robust classification
  • Data imbalance
  • Computer vision
  • Machine learning

Published Papers (4 papers)


Research

Open Access Article: Dense Semantic Labeling with Atrous Spatial Pyramid Pooling and Decoder for High-Resolution Remote Sensing Imagery
Remote Sens. 2019, 11(1), 20; https://doi.org/10.3390/rs11010020
Received: 17 November 2018 / Revised: 14 December 2018 / Accepted: 19 December 2018 / Published: 22 December 2018
Cited by 2 | PDF Full-text (4704 KB) | HTML Full-text | XML Full-text
Abstract
Dense semantic labeling is significant in high-resolution remote sensing imagery research, and it has been widely used in land-use analysis and environmental protection. With the recent success of fully convolutional networks (FCN), various types of network architectures have largely improved performance. Among them, atrous spatial pyramid pooling (ASPP) and encoder-decoder are two successful ones. The former structure is able to extract multi-scale contextual information and multiple effective fields-of-view, while the latter structure can recover the spatial information to obtain sharper object boundaries. In this study, we propose a more efficient fully convolutional network by combining the advantages of both structures. Our model utilizes the deep residual network (ResNet) followed by ASPP as the encoder and combines two scales of high-level features with corresponding low-level features as the decoder at the upsampling stage. We further develop a multi-scale loss function to enhance the learning procedure. In the postprocessing, a novel superpixel-based dense conditional random field is employed to refine the predictions. We evaluate the proposed method on the Potsdam and Vaihingen datasets, and the experimental results demonstrate that our method performs better than other machine learning or deep learning methods. Compared with the state-of-the-art DeepLab_v3+, our model gains 0.4% and 0.6% improvements in overall accuracy on these two datasets, respectively.
(This article belongs to the Special Issue Robust Multispectral/Hyperspectral Image Analysis and Classification)
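
As a concrete illustration of the architecture this abstract describes (a ResNet encoder followed by ASPP, with a decoder that fuses low-level features at the upsampling stage), the sketch below shows a minimal PyTorch-style ASPP module and decoder head. The channel sizes, dilation rates, and input shapes are illustrative assumptions, not the authors' settings; the multi-scale loss and the superpixel-based CRF postprocessing are omitted.

```python
# Minimal sketch of an ASPP module plus a decoder head that fuses low-level
# features (assumed channel sizes and dilation rates, not the paper's settings).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Parallel atrous (dilated) convolutions with different dilation rates."""
    def __init__(self, in_ch, out_ch=256, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        feats = [F.relu(branch(x)) for branch in self.branches]
        return self.project(torch.cat(feats, dim=1))          # multi-scale context

class DecoderHead(nn.Module):
    """Upsample the ASPP output and fuse it with a low-level feature map."""
    def __init__(self, low_ch, aspp_ch=256, num_classes=6):
        super().__init__()
        self.reduce = nn.Conv2d(low_ch, 48, 1)                # compress low-level features
        self.classify = nn.Conv2d(aspp_ch + 48, num_classes, 3, padding=1)

    def forward(self, aspp_out, low_level):
        aspp_up = F.interpolate(aspp_out, size=low_level.shape[-2:],
                                mode="bilinear", align_corners=False)
        fused = torch.cat([aspp_up, self.reduce(low_level)], dim=1)
        return self.classify(fused)                           # per-pixel class logits

# Toy usage with assumed feature shapes.
aspp = ASPP(in_ch=2048)
head = DecoderHead(low_ch=256)
deep = torch.randn(1, 2048, 16, 16)      # deep backbone features
low = torch.randn(1, 256, 64, 64)        # earlier-stage (low-level) features
logits = head(aspp(deep), low)           # -> (1, num_classes, 64, 64)
```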

Open Access Article: Hyperspectral Unmixing with Bandwise Generalized Bilinear Model
Remote Sens. 2018, 10(10), 1600; https://doi.org/10.3390/rs10101600
Received: 5 September 2018 / Revised: 30 September 2018 / Accepted: 6 October 2018 / Published: 9 October 2018
Cited by 1 | PDF Full-text (2999 KB) | HTML Full-text | XML Full-text
Abstract
The generalized bilinear model (GBM) has received extensive attention in the field of hyperspectral nonlinear unmixing. Traditional GBM unmixing methods usually assume that the data are degraded only by additive white Gaussian noise (AWGN) and that the intensity of AWGN is the same in each band of the hyperspectral image (HSI). However, real HSIs are usually degraded by a mixture of various kinds of noise, including Gaussian noise, impulse noise, dead pixels or lines, stripes, and so on. In addition, the intensity of AWGN usually differs from band to band. To address these issues, we propose a novel nonlinear unmixing method based on the bandwise generalized bilinear model (NU-BGBM), which can be adapted to the presence of complex mixed noise in real HSI. The alternating direction method of multipliers (ADMM) is adopted to solve the proposed NU-BGBM. Finally, extensive experiments are conducted to demonstrate the effectiveness of the proposed NU-BGBM compared with several other state-of-the-art unmixing methods.
(This article belongs to the Special Issue Robust Multispectral/Hyperspectral Image Analysis and Classification)
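
For readers unfamiliar with the model this abstract builds on, the generalized bilinear mixing model with band-dependent Gaussian noise can be written in a few lines. The snippet below is a toy forward model under assumed endmembers, abundances, interaction coefficients, and per-band noise levels; it is not the paper's NU-BGBM, and the ADMM solver is not shown.

```python
# Toy generalized bilinear model (GBM) with bandwise (non-i.i.d.) Gaussian noise.
import numpy as np

def gbm_pixel(M, a, gamma, band_sigma, rng):
    """M: (L, P) endmember spectra, a: (P,) abundances, gamma: (P, P) bilinear
    interaction coefficients in [0, 1], band_sigma: (L,) per-band noise std."""
    L, P = M.shape
    y = M @ a                                    # linear mixing term
    for i in range(P - 1):                       # bilinear interaction terms
        for j in range(i + 1, P):
            y += gamma[i, j] * a[i] * a[j] * (M[:, i] * M[:, j])
    return y + rng.normal(0.0, band_sigma)       # bandwise Gaussian noise

# Example: 4 endmembers, 50 bands, with noisier bands toward one end of the spectrum.
rng = np.random.default_rng(0)
M = np.abs(rng.random((50, 4)))
a = np.array([0.4, 0.3, 0.2, 0.1])               # abundances sum to one
gamma = np.triu(np.full((4, 4), 0.5), k=1)
sigma = np.linspace(0.02, 0.005, 50)
y = gbm_pixel(M, a, gamma, sigma, rng)
```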

Open Access Article: Self-Dictionary Regression for Hyperspectral Image Super-Resolution
Remote Sens. 2018, 10(10), 1574; https://doi.org/10.3390/rs10101574
Received: 28 June 2018 / Revised: 13 September 2018 / Accepted: 21 September 2018 / Published: 1 October 2018
Cited by 1 | PDF Full-text (2566 KB) | HTML Full-text | XML Full-text
Abstract
Due to sensor limitations, hyperspectral images (HSIs) are acquired by hyperspectral sensors with high spectral resolution but low spatial resolution; it is difficult for sensors to acquire images with both high spatial resolution and high spectral resolution simultaneously. Hyperspectral image super-resolution seeks to enhance the spatial resolution of an HSI by software techniques. In recent years, various methods have been proposed to fuse an HSI and a multispectral image (MSI) from an unmixing or a spectral-dictionary perspective. However, these methods extract the spectral information from each image individually and therefore ignore the cross-correlation between the observed HSI and MSI, making it difficult to achieve high spatial resolution while preserving the spatial-spectral consistency between the low-resolution HSI and the high-resolution HSI. In this paper, a self-dictionary regression based method is proposed to exploit the cross-correlation between the observed HSI and MSI. Both the observed low-resolution HSI and the MSI are simultaneously considered to estimate the endmember dictionary and the abundance code. To preserve the spectral consistency, the endmember dictionary is extracted by performing a common sparse basis selection on the concatenation of the observed HSI and MSI. Then, a consistency constraint is exploited to ensure the spatial consistency between the abundance code of the low-resolution HSI and that of the high-resolution HSI. Extensive experiments on three datasets demonstrate that the proposed method outperforms the state-of-the-art methods.
(This article belongs to the Special Issue Robust Multispectral/Hyperspectral Image Analysis and Classification)
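
As a rough picture of the fusion setting described in this abstract, the snippet below writes out a generic dictionary-based observation model: a latent high-resolution HSI is factored into an endmember dictionary and an abundance code, the MSI is a spectrally degraded view of it, and the low-resolution HSI is a spatially degraded view. The spectral response matrix, the block-average downsampling, and all sizes are illustrative assumptions; the paper's self-dictionary regression and consistency constraint are not implemented here.

```python
# Generic dictionary-based HSI/MSI observation model (illustrative sizes only).
import numpy as np

rng = np.random.default_rng(0)
L, l, P = 100, 4, 6            # HSI bands, MSI bands, dictionary atoms
H, W, d = 32, 32, 4            # high-res spatial size, spatial downsampling factor

E = np.abs(rng.random((L, P)))            # endmember (spectral) dictionary
A = np.abs(rng.random((P, H * W)))        # abundance code
A /= A.sum(axis=0, keepdims=True)         # sum-to-one abundances

Z = E @ A                                  # latent high-resolution HSI, (L, H*W)

R = np.abs(rng.random((l, L)))             # assumed MSI spectral response matrix
R /= R.sum(axis=1, keepdims=True)
Y_msi = R @ Z                              # MSI: spectral degradation, (l, H*W)

# Low-resolution HSI: crude block-average spatial degradation of the latent cube.
Z_cube = Z.reshape(L, H, W)
Y_hsi = Z_cube.reshape(L, H // d, d, W // d, d).mean(axis=(2, 4))   # (L, H/d, W/d)
```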

Open Access Article: ERN: Edge Loss Reinforced Semantic Segmentation Network for Remote Sensing Images
Remote Sens. 2018, 10(9), 1339; https://doi.org/10.3390/rs10091339
Received: 7 July 2018 / Revised: 14 August 2018 / Accepted: 14 August 2018 / Published: 22 August 2018
PDF Full-text (5425 KB) | HTML Full-text | XML Full-text
Abstract
The semantic segmentation of remote sensing images faces two major challenges: high inter-class similarity and interference from ubiquitous shadows. In order to address these issues, we develop a novel edge loss reinforced semantic segmentation network (ERN) that leverages the spatial boundary context to reduce semantic ambiguity. The main contributions of this paper are as follows: (1) we propose a novel end-to-end semantic segmentation network for remote sensing that involves multiple weighted edge supervisions to retain spatial boundary information; (2) the main representations of the network are shared between the edge loss reinforced structures and semantic segmentation, which means that the ERN simultaneously achieves semantic segmentation and edge detection without significantly increasing the model complexity; and (3) we explore and discuss different ERN schemes to guide the design of future networks. Extensive experimental results on two remote sensing datasets demonstrate the effectiveness of our approach in both quantitative and qualitative evaluations. In particular, the semantic segmentation performance in shadow-affected regions is significantly improved.
(This article belongs to the Special Issue Robust Multispectral/Hyperspectral Image Analysis and Classification)
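
The central idea of this abstract, reinforcing the segmentation objective with weighted edge supervision, can be expressed as a simple combined loss. The sketch below assumes a single auxiliary edge head and a fixed weight; the actual ERN uses multiple weighted edge supervisions over shared representations, which are not reproduced here.

```python
# Minimal sketch of a segmentation loss reinforced by a weighted edge loss.
import torch
import torch.nn.functional as F

def edge_reinforced_loss(seg_logits, edge_logits, seg_target, edge_target,
                         edge_weight=0.5):
    """seg_logits: (N, C, H, W); edge_logits: (N, 1, H, W);
    seg_target: (N, H, W) integer class map; edge_target: (N, 1, H, W) in {0, 1}."""
    seg_loss = F.cross_entropy(seg_logits, seg_target)
    edge_loss = F.binary_cross_entropy_with_logits(edge_logits, edge_target.float())
    return seg_loss + edge_weight * edge_loss   # edge term sharpens boundaries

# Toy usage with assumed shapes.
seg_logits = torch.randn(2, 6, 64, 64)
edge_logits = torch.randn(2, 1, 64, 64)
seg_target = torch.randint(0, 6, (2, 64, 64))
edge_target = torch.randint(0, 2, (2, 1, 64, 64))
loss = edge_reinforced_loss(seg_logits, edge_logits, seg_target, edge_target)
```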
