Robust Multispectral/Hyperspectral Image Analysis and Classification

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (1 December 2019) | Viewed by 113841

Special Issue Editors

Department of Electrical & Computer Engineering, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
Interests: compressed sensing; signal and image processing; pattern recognition; computer vision; hyperspectral image analysis

National Institute of Informatics, Tokyo, Japan
Interests: multi- and hyperspectral remote sensing image processing and analysis; super-resolution, fusion, denoising, unmixing, classification, feature extraction
Electronic Information School, Wuhan University, Wuhan 430072, China
Interests: machine learning; computer vision; information fusion; image super resolution; hyperspectral image analysis; infrared imaging; image denoising

Special Issue Information

Dear Colleagues,

Satellite imagery, such as multispectral/hyperspectral images, is a powerful source of information, as it offers spatial, spectral, and temporal resolutions that traditional images do not. Over the past decade, the remote sensing community has worked intensively on building accurate remote sensing image classifiers. However, remote sensing imagery analysis and classification face inherent challenges. For example, labeled data for remote sensing imagery (e.g., multispectral and hyperspectral images) are limited, since obtaining a large number of samples with class labels is time-consuming and expensive. Moreover, real hyperspectral image data inevitably contain considerable noise (Gaussian noise, dead lines, and other mixed noise) due to the physical limitations of the imaging sensors. In addition, label noise (i.e., mislabeled pixels) poses challenges for supervised classification algorithms. Developing robust image classification and analysis methods that can handle these issues is therefore a pressing need for practical applications.

The aim of this Special Issue is to gather cutting-edge works that address the aforementioned challenges in multispectral/hyperspectral image analysis and classification. The main topics include, but are not limited to:

  • Robust multispectral/hyperspectral image classification algorithms and feature representations under conditions of:
    • Noisy data
    • Noisy labels
    • Small sample size
    • Data imbalance
  • Multispectral/hyperspectral image denoising
  • Missing data reconstruction
  • Multispectral/hyperspectral data unmixing
  • Illumination enhancement
  • Noise robust multispectral/hyperspectral image analysis
    • Compression
    • Compressive sensing
    • Object/target/anomaly detection
    • Super-resolution
    • Feature/correspondence matching
    • Fusion
Dr. Chen Chen
Dr. Junjun Jiang
Dr. Jiayi Ma
Dr. Sidike Paheding
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multispectral/hyperspectral remote sensing
  • Remote sensing image analysis
  • Noise robust classification
  • Data imbalance
  • Computer vision
  • Machine learning

Published Papers (22 papers)


Research

19 pages, 17443 KiB  
Article
Arbitrary-Oriented Inshore Ship Detection based on Multi-Scale Feature Fusion and Contextual Pooling on Rotation Region Proposals
by Tian Tian, Zhihong Pan, Xiangyu Tan and Zhengquan Chu
Remote Sens. 2020, 12(2), 339; https://doi.org/10.3390/rs12020339 - 20 Jan 2020
Cited by 25 | Viewed by 4117
Abstract
Inshore ship detection plays an important role in many civilian and military applications. The complex land environment and the diversity of target sizes and distributions make it challenging to obtain accurate detection results. To achieve precise localization and suppress false alarms, in this paper we propose a framework that integrates a multi-scale feature fusion network, a rotation region proposal network, and contextual pooling. Specifically, to describe ships of various sizes, different convolutional layers are fused to obtain multi-scale features based on the baseline feature extraction network. Then, for accurate target localization and arbitrary-oriented ship detection, a rotation region proposal network and skew non-maximum suppression are employed. Finally, since rotated bounding boxes usually cause more false alarms, we apply inclined contextual feature pooling on the rotation region proposals. A dataset of port images collected from Google Earth and the public ship dataset HRSC2016 are used to test the proposed method. Experimental results of the model analysis validate the contribution of each module, and comparative results show that the proposed pipeline achieves state-of-the-art performance in arbitrary-oriented inshore ship detection.
(This article belongs to the Special Issue Robust Multispectral/Hyperspectral Image Analysis and Classification)

20 pages, 1990 KiB  
Article
Semi-Supervised Hyperspectral Image Classification via Spatial-Regulated Self-Training
by Yue Wu, Guifeng Mu, Can Qin, Qiguang Miao, Wenping Ma and Xiangrong Zhang
Remote Sens. 2020, 12(1), 159; https://doi.org/10.3390/rs12010159 - 02 Jan 2020
Cited by 73 | Viewed by 5424
Abstract
Because hyperspectral images contain many unlabeled samples and the cost of manual labeling is high, this paper adopts a semi-supervised learning method to make full use of the unlabeled samples. In addition, hyperspectral images contain rich spectral information, and convolutional neural networks have a strong capacity for representation learning. This paper proposes a novel semi-supervised hyperspectral image classification (HSIc) framework that uses self-training to gradually assign highly confident pseudo labels to unlabeled samples by clustering, and employs spatial constraints to regulate the self-training process. The spatial constraints exploit the spatial consistency within the image to correct and re-assign mistakenly classified pseudo labels. Through self-training, the number of high-confidence sample points gradually increases, and as they are added to the corresponding semantic classes, the semantic constraints are gradually strengthened. At the same time, the growing set of high-confidence pseudo labels also contributes to regional consistency within hyperspectral images, which reinforces the role of the spatial constraints and improves HSIc efficiency. Extensive experiments demonstrate the effectiveness, robustness, and high accuracy of the approach.
(This article belongs to the Special Issue Robust Multispectral/Hyperspectral Image Analysis and Classification)
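The clustering and spatial regulation are this paper's contribution; the plain self-training loop they build on can be sketched in a few lines. This is a hedged toy example on synthetic data with a logistic regression stand-in classifier, not the authors' method: only samples whose predicted class probability exceeds a confidence threshold receive pseudo labels each round.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic two-class "pixels": 200 samples, only four of them labeled.
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(4, 1, (100, 5))])
y = np.repeat([0, 1], 100)
labeled = np.array([0, 1, 100, 101])
unlabeled = np.setdiff1d(np.arange(200), labeled)

X_lab, y_lab = X[labeled], y[labeled]
for _ in range(5):                              # self-training rounds
    clf = LogisticRegression().fit(X_lab, y_lab)
    if len(unlabeled) == 0:
        break
    proba = clf.predict_proba(X[unlabeled])
    confident = proba.max(axis=1) > 0.95        # keep only high-confidence pseudo labels
    if not confident.any():
        break
    X_lab = np.vstack([X_lab, X[unlabeled][confident]])
    y_lab = np.concatenate([y_lab, proba.argmax(axis=1)[confident]])
    unlabeled = unlabeled[~confident]

acc = (clf.predict(X) == y).mean()
```

In the paper, the confidence scores come from clustering and the pseudo labels are additionally corrected by spatial consistency, neither of which is modeled here.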

25 pages, 4739 KiB  
Article
Multiple Kernel-Based SVM Classification of Hyperspectral Images by Combining Spectral, Spatial, and Semantic Information
by Yi Wang, Wenke Yu and Zhice Fang
Remote Sens. 2020, 12(1), 120; https://doi.org/10.3390/rs12010120 - 01 Jan 2020
Cited by 42 | Viewed by 5075
Abstract
In this study, we present a hyperspectral image classification method that combines spectral, spatial, and semantic information. The main steps of the proposed method are as follows. First, a principal component analysis transform is applied to the original image to produce its extended morphological profile, Gabor features, and a superpixel-based segmentation map. To model spatial information, the extended morphological profile and Gabor features represent structure and texture, respectively, and mean filtering is performed within each superpixel to maintain the homogeneity of the spatial features. Then, k-means clustering and entropy-rate superpixel segmentation are combined to produce semantic feature vectors using a bag-of-visual-words model for each superpixel. Next, three kernel functions are constructed to describe the spectral, spatial, and semantic information, respectively. Finally, the composite kernel technique fuses all the features into a multiple kernel function that is fed into a support vector machine classifier to produce the final classification map. Experiments demonstrate that the proposed method is superior to the most popular kernel-based classification methods in terms of both visual inspection and quantitative analysis, even when only very limited training samples are available.
(This article belongs to the Special Issue Robust Multispectral/Hyperspectral Image Analysis and Classification)
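The composite-kernel step at the core of this abstract can be sketched in a few lines: base kernels computed on different feature groups are combined as a weighted sum and fed to a precomputed-kernel SVM. This is a hedged toy example with synthetic "spectral" and "spatial" features and an arbitrary weight μ, not the paper's three-kernel setup:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
# Made-up per-pixel feature groups for two classes.
spectral = np.vstack([rng.normal(0, 1, (n // 2, 10)), rng.normal(3, 1, (n // 2, 10))])
spatial = np.vstack([rng.normal(0, 1, (n // 2, 5)), rng.normal(3, 1, (n // 2, 5))])
labels = np.repeat([0, 1], n // 2)

train = np.arange(0, n, 2)           # every other sample for training
test = np.arange(1, n, 2)

mu = 0.6                             # weight balancing the two base kernels
K_train = (mu * rbf_kernel(spectral[train], spectral[train])
           + (1 - mu) * rbf_kernel(spatial[train], spatial[train]))
K_test = (mu * rbf_kernel(spectral[test], spectral[train])
          + (1 - mu) * rbf_kernel(spatial[test], spatial[train]))

clf = SVC(kernel="precomputed").fit(K_train, labels[train])
acc = (clf.predict(K_test) == labels[test]).mean()
```

A weighted sum of positive semi-definite kernels is itself a valid kernel, which is why the combined matrix can be handed directly to the SVM.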

30 pages, 13891 KiB  
Article
EMCM: A Novel Binary Edge-Feature-Based Maximum Clique Framework for Multispectral Image Matching
by Bin Fang, Kun Yu, Jie Ma and Pei An
Remote Sens. 2019, 11(24), 3026; https://doi.org/10.3390/rs11243026 - 15 Dec 2019
Cited by 3 | Viewed by 2997
Abstract
Seeking reliable correspondences between multispectral images is a fundamental and important task in computer vision. To overcome the nonlinearity problem in multispectral image matching, a novel edge-feature-based maximum clique matching framework (EMCM) is proposed, which contains three main parts: (1) a novel strong-edge binary feature descriptor; (2) a new correspondence-ranking algorithm based on keypoint distinctiveness analysis in the feature space of the graph; and (3) a false-match removal algorithm based on maximum clique searching in the correspondence space of the graph, considering both position and angle consistency. Extensive experiments are conducted on two standard multispectral image datasets covering all three parts. The feature-matching experiments suggest that the proposed feature descriptor offers high descriptiveness, robustness, and efficiency; the correspondence-ranking experiments validate the superiority of the proposed ranking algorithm over the nearest-neighbor algorithm; and the coarse registration experiments show the robustness of EMCM under varied interference.
(This article belongs to the Special Issue Robust Multispectral/Hyperspectral Image Analysis and Classification)

18 pages, 16754 KiB  
Article
Purifying SLIC Superpixels to Optimize Superpixel-Based Classification of High Spatial Resolution Remote Sensing Image
by Hengjian Tong, Fei Tong, Wei Zhou and Yun Zhang
Remote Sens. 2019, 11(22), 2627; https://doi.org/10.3390/rs11222627 - 10 Nov 2019
Cited by 9 | Viewed by 4031
Abstract
Fast and accurate classification of high spatial resolution remote sensing images is important for many applications. Superpixels have been used to accelerate classification. However, although most superpixels contain pixels from a single class, some mixed superpixels, mostly located near the edges between classes, contain pixels from more than one class. Such mixed superpixels cause misclassification regardless of the classification method used. In this paper, a superpixel purification algorithm based on color quantization is proposed to purify mixed Simple Linear Iterative Clustering (SLIC) superpixels. After purification, a mixed SLIC superpixel is separated into smaller, pure superpixels that each contain a single kind of ground object. Experiments on images from the BSDS500 dataset show that the purified SLIC superpixels outperform the original SLIC superpixels on three segmentation evaluation metrics. Building on the purified superpixels, a classification scheme is proposed in which only edge superpixels are selected for purification. This strategy not only improves the efficiency of the algorithm but also improves classification accuracy. Experiments on a remote sensing image from the WorldView-2 satellite demonstrate that purified SLIC superpixels generate classification results with higher accuracy than the original SLIC superpixels at all scales, especially at the scale of 20 × 20, where the accuracy increase exceeds 4%.
(This article belongs to the Special Issue Robust Multispectral/Hyperspectral Image Analysis and Classification)
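The color-quantization idea behind the purification can be sketched simply: quantize a superpixel's colors with k-means and split the superpixel only when the quantized colors are far apart. This is a hedged illustration, not the authors' algorithm; the `split_dist` threshold and the "grass"/"water" colors are invented:

```python
import numpy as np
from sklearn.cluster import KMeans

def purify_superpixel(pixels, split_dist=20.0):
    """Quantize one superpixel's colors into two clusters; if the two
    quantized colors are close, treat the superpixel as pure, otherwise
    split it along the cluster assignment into smaller pure superpixels."""
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
    sep = np.linalg.norm(km.cluster_centers_[0] - km.cluster_centers_[1])
    if sep < split_dist:
        return np.zeros(len(pixels), dtype=int)   # already pure
    return km.labels_                             # mixed: split in two

rng = np.random.default_rng(0)
grass = rng.normal([60.0, 120.0, 40.0], 3.0, (50, 3))   # synthetic grass colors
water = rng.normal([20.0, 60.0, 150.0], 3.0, (50, 3))   # synthetic water colors
sub = purify_superpixel(np.vstack([grass, water]))       # a mixed edge superpixel
```

Applying this only to edge superpixels, as the paper does, keeps the cost proportional to the boundary length rather than the image size.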

27 pages, 4345 KiB  
Article
Hyperspectral Unmixing with Gaussian Mixture Model and Spatial Group Sparsity
by Qiwen Jin, Yong Ma, Erting Pan, Fan Fan, Jun Huang, Hao Li, Chenhong Sui and Xiaoguang Mei
Remote Sens. 2019, 11(20), 2434; https://doi.org/10.3390/rs11202434 - 20 Oct 2019
Cited by 15 | Viewed by 3289
Abstract
In recent years, endmember variability has received much attention in the field of hyperspectral unmixing. To address inaccuracies in the endmember signatures, the endmembers are usually assumed to follow a statistical distribution. However, such distribution-based methods use only the spectral information and do not fully exploit possible local spatial correlation. When pixels lie in an inhomogeneous region, the abundances of neighboring pixels do not share the same prior constraints. In this paper, to achieve better abundance estimation, a method based on the Gaussian mixture model (GMM) and a spatial group sparsity constraint is proposed. To fully exploit the group structure, superpixel segmentation (SS) is used as a preprocessing step to generate the spatial groups. GMM then models the endmember distribution, and the spatial group sparsity is incorporated as a mixed-norm regularization in the objective function. Finally, under the Bayesian framework, the conditional density function leads to a standard maximum a posteriori (MAP) problem, which can be solved using generalized expectation-maximization (GEM). Experiments on simulated and real hyperspectral data demonstrate that the proposed algorithm achieves higher unmixing precision than other state-of-the-art methods.
(This article belongs to the Special Issue Robust Multispectral/Hyperspectral Image Analysis and Classification)
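The paper's GMM/GEM machinery is too involved for a snippet, but the linear mixing model that underlies all of this can be sketched with fully constrained least squares: nonnegativity via NNLS and sum-to-one via an augmented constraint row. The endmember spectra and abundances below are invented for illustration:

```python
import numpy as np
from scipy.optimize import nnls

# Two hypothetical endmember spectra (rows) over 6 bands.
E = np.array([[0.8, 0.7, 0.6, 0.3, 0.2, 0.1],    # soil-like shape (made up)
              [0.1, 0.2, 0.3, 0.6, 0.7, 0.8]])   # vegetation-like shape (made up)
true_abund = np.array([0.7, 0.3])
pixel = true_abund @ E + 0.001 * np.random.default_rng(0).normal(size=6)

# Append a heavily weighted row of ones so the least-squares fit also
# (softly) enforces that the abundances sum to one.
delta = 10.0
A = np.vstack([E.T, delta * np.ones((1, 2))])
b = np.concatenate([pixel, [delta]])
abund, _ = nnls(A, b)                 # nonnegative abundances
```

Endmember-variability methods such as this paper's replace the fixed rows of `E` with per-pixel draws from a distribution; the abundance constraints stay the same.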

20 pages, 1278 KiB  
Article
Spectral-Spatial Hyperspectral Image Classification with Superpixel Pattern and Extreme Learning Machine
by Yongshan Zhang, Xinwei Jiang, Xinxin Wang and Zhihua Cai
Remote Sens. 2019, 11(17), 1983; https://doi.org/10.3390/rs11171983 - 22 Aug 2019
Cited by 27 | Viewed by 3465
Abstract
Spectral-spatial classification of hyperspectral images (HSIs) has recently attracted great attention in remote sensing research. It is well known that in remote sensing applications spectral features provide the fundamental information and spatial patterns provide complementary information. With both, HSI applications can be fully explored and classification performance can be greatly improved. In practice, spatial patterns can be extracted to represent a line, a clustering of points, or image texture, denoting local or global spatial characteristics of HSIs. In this paper, we propose a spectral-spatial HSI classification model based on superpixel patterns (SP) and a kernel-based extreme learning machine (KELM), called SP-KELM, to identify the land cover of pixels in HSIs. In the proposed SP-KELM model, superpixel pattern features are extracted by an advanced principal component analysis (PCA) based on superpixel segmentation of the HSI and are used to encode spatial information. KELM then serves as the classifier in the spectral-spatial model, using both the original spectral features and the extracted spatial pattern features. Experimental results on three publicly available HSI datasets verify the effectiveness of the proposed SP-KELM model, with a performance improvement of 10% over spectral-only approaches.
(This article belongs to the Special Issue Robust Multispectral/Hyperspectral Image Analysis and Classification)
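The KELM classifier at the heart of SP-KELM has a simple closed form: with kernel matrix K, one-hot targets T, and regularization C, the output weights are α = (K + I/C)⁻¹ T, and a test sample is classified by the largest entry of k(x)·α. A hedged sketch on synthetic two-class data (spectral features only; the paper's superpixel pattern features are omitted):

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (40, 8)), rng.normal(3, 1, (40, 8))])
y_train = np.repeat([0, 1], 40)
X_test = np.vstack([rng.normal(0, 1, (10, 8)), rng.normal(3, 1, (10, 8))])
y_test = np.repeat([0, 1], 10)

T = np.eye(2)[y_train]                # one-hot targets
C = 100.0                             # regularization strength (arbitrary here)
K = rbf_kernel(X_train, X_train)
# Closed-form KELM output weights: alpha = (K + I/C)^-1 T
alpha = np.linalg.solve(K + np.eye(len(K)) / C, T)
pred = rbf_kernel(X_test, X_train).dot(alpha).argmax(axis=1)
acc = (pred == y_test).mean()
```

Unlike SVM training, this requires only one linear solve, which is the main appeal of ELM-style classifiers for large pixel sets.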

20 pages, 1325 KiB  
Article
Fast and Effective Techniques for LWIR Radiative Transfer Modeling: A Dimension-Reduction Approach
by Nicholas Westing, Brett Borghetti and Kevin C. Gross
Remote Sens. 2019, 11(16), 1866; https://doi.org/10.3390/rs11161866 - 09 Aug 2019
Cited by 5 | Viewed by 3296
Abstract
The increasing spatial and spectral resolution of hyperspectral imagers yields detailed spectroscopy measurements from both space-based and airborne platforms. These detailed measurements allow for material classification, with many recent advancements from the fields of machine learning and deep learning. In many scenarios, the hyperspectral image must first be corrected or compensated for atmospheric effects. Radiative Transfer (RT) computations can provide look-up tables (LUTs) to support these corrections. This research investigates a dimension-reduction approach using machine learning methods to create an effective sensor-specific long-wave infrared (LWIR) RT model. The utility of this approach is investigated by emulating the Mako LWIR hyperspectral sensor (Δλ ≈ 0.044 μm, Δν̃ ≈ 3.9 cm⁻¹). This study employs physics-based metrics and loss functions to identify promising dimension-reduction techniques and reduce at-sensor radiance reconstruction error. The derived RT model shows an overall root mean square error (RMSE) of less than 1 K across reflective to emissive grey-body emissivity profiles.
(This article belongs to the Special Issue Robust Multispectral/Hyperspectral Image Analysis and Classification)

19 pages, 3445 KiB  
Article
Satellite Image Super-Resolution via Multi-Scale Residual Deep Neural Network
by Tao Lu, Jiaming Wang, Yanduo Zhang, Zhongyuan Wang and Junjun Jiang
Remote Sens. 2019, 11(13), 1588; https://doi.org/10.3390/rs11131588 - 04 Jul 2019
Cited by 83 | Viewed by 10204
Abstract
Recently, applications of satellite remote sensing images have become increasingly popular, but the images observed by satellite sensors are frequently low-resolution (LR) and thus cannot fully meet the requirements of object identification and analysis. To fully utilize the multi-scale characteristics of objects in remote sensing images, this paper presents a multi-scale residual neural network (MRNN). MRNN exploits the multi-scale nature of satellite images to accurately reconstruct the high-frequency information needed for super-resolution (SR) satellite imagery. Patches of different sizes are first extracted from LR satellite images to fit objects of different scales. Large-, middle-, and small-scale deep residual neural networks are designed to simulate differently sized receptive fields, acquiring relatively global, contextual, and local information for prior representation. A fusion network then refines the information from the different scales. MRNN fuses the complementary high-frequency information from the differently scaled networks to reconstruct the desired high-resolution satellite image, in line with human visual experience ("look in multi-scale to see better"). Experimental results on the SpaceNet satellite image and NWPU-RESISC45 databases show that the proposed approach outperforms several state-of-the-art SR algorithms in terms of both objective and subjective image quality.
(This article belongs to the Special Issue Robust Multispectral/Hyperspectral Image Analysis and Classification)

18 pages, 1312 KiB  
Article
Spectral-Spatial Hyperspectral Image Classification via Robust Low-Rank Feature Extraction and Markov Random Field
by Xiangyong Cao, Zongben Xu and Deyu Meng
Remote Sens. 2019, 11(13), 1565; https://doi.org/10.3390/rs11131565 - 02 Jul 2019
Cited by 25 | Viewed by 4179
Abstract
In this paper, a new supervised classification algorithm that simultaneously considers the spectral and spatial information of a hyperspectral image (HSI) is proposed. Since HSIs always contain complex noise (such as a mixture of Gaussian and sparse noise), the quality of the extracted features tends to degrade. To tackle this issue, we exploit the low-rank property of local three-dimensional patches and adopt a complex-noise strategy to model the noise embedded in each local patch. Specifically, we first use mixture-of-Gaussian (MoG) based low-rank matrix factorization (LRMF) to simultaneously extract features and remove noise from each local matrix unfolded from a local patch. Then, a classification map is obtained by applying a classifier to the extracted low-rank features. Finally, the classification map is processed by a Markov random field (MRF) to further exploit the smoothness of the labels. To ease experimental comparison among HSI classification methods, we built an open package to make comparisons fair and efficient. Using this package, the proposed classification method is verified to obtain better performance than other state-of-the-art methods.
(This article belongs to the Special Issue Robust Multispectral/Hyperspectral Image Analysis and Classification)
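The paper's MoG-based LRMF is designed for complex noise; under plain Gaussian noise the low-rank idea reduces to a truncated SVD of the unfolded patch matrix, which this hedged sketch illustrates on synthetic data (the rank and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
# A local patch unfolded to (pixels x bands): rank-2 clean signal plus noise.
U = rng.normal(size=(64, 2))
V = rng.normal(size=(2, 20))
clean = U @ V
noisy = clean + 0.3 * rng.normal(size=clean.shape)

# Truncated SVD keeps the best rank-r approximation, acting as the
# extracted low-rank feature (and implicitly denoising the patch).
u, s, vt = np.linalg.svd(noisy, full_matrices=False)
r = 2
feature = (u[:, :r] * s[:r]) @ vt[:r]

err_noisy = float(np.mean((noisy - clean) ** 2))
err_feat = float(np.mean((feature - clean) ** 2))
```

MoG-LRMF generalizes this by replacing the implicit Gaussian (L2) noise model of SVD with a learned mixture of Gaussians, which also captures sparse outliers such as dead lines.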

16 pages, 20923 KiB  
Article
Feedback Unilateral Grid-Based Clustering Feature Matching for Remote Sensing Image Registration
by Zhaohui Zheng, Hong Zheng, Yong Ma, Fan Fan, Jianping Ju, Bichao Xu, Mingyu Lin and Shuilin Cheng
Remote Sens. 2019, 11(12), 1418; https://doi.org/10.3390/rs11121418 - 14 Jun 2019
Cited by 6 | Viewed by 2764
Abstract
In feature-based image matching, implementing a fast and ultra-robust feature matching technique is a challenging task. To address the long running time and low registration accuracy of traditional feature matching algorithms, an algorithm called feedback unilateral grid-based clustering (FUGC) is presented, which improves the computational efficiency, accuracy, and robustness of feature-based matching when applied to remote sensing image registration. First, the image is divided by unilateral grids, and fast coarse screening of the initial matching feature points via local grid clustering eliminates a great number of mismatches in milliseconds. To ensure that true matches are not erroneously screened out, a local linear transformation is designed for further feedback verification, performing fine screening between true matching points that were erroneously deleted and false positives that were not deleted in and around each area. This strategy not only extracts high-accuracy matches from low-accuracy coarse baseline matching, but also preserves the true matching points to the greatest extent. Experimental results demonstrate the strong robustness of the FUGC algorithm on various real-world remote sensing images. FUGC outperforms current state-of-the-art methods and meets real-time requirements.
(This article belongs to the Special Issue Robust Multispectral/Hyperspectral Image Analysis and Classification)

22 pages, 10011 KiB  
Article
Region Merging Method for Remote Sensing Spectral Image Aided by Inter-Segment and Boundary Homogeneities
by Yuhan Zhang, Xi Wang, Haishu Tan, Chang Xu, Xu Ma and Tingfa Xu
Remote Sens. 2019, 11(12), 1414; https://doi.org/10.3390/rs11121414 - 14 Jun 2019
Cited by 3 | Viewed by 2742
Abstract
Image segmentation is extensively used in remote sensing spectral image processing. Most existing region merging methods assess heterogeneity or homogeneity using global or pre-defined parameters, which lack the flexibility to further improve the goodness-of-fit. Recently, a local spectral angle (SA) threshold was used to produce promising segmentation results; however, this method falls short of considering the inherent relationship between adjacent segments. To overcome this limitation, an adaptive SA threshold method, which combines the inter-segment and boundary homogeneities of adjacent segment pairs with respective weights to refine a predetermined SA threshold, is employed in a hybrid segmentation framework to enhance image segmentation accuracy. The proposed method effectively improves segmentation accuracy for different kinds of reference objects compared with conventional segmentation approaches based on global and local SA thresholds. Visual comparison also reveals that the method matches reference polygons of varied sizes and types more accurately.
(This article belongs to the Special Issue Robust Multispectral/Hyperspectral Image Analysis and Classification)
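The spectral angle used as the merging criterion is straightforward to compute: the angle between two mean spectra treated as vectors. A minimal sketch with invented reflectance values; note that SA is insensitive to a global brightness scaling, which is why it is popular for comparing segments under varying illumination:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra; 0 means identical shape."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Illustrative 4-band mean spectra (made-up reflectances).
veg = np.array([0.05, 0.08, 0.45, 0.50])
veg_bright = 2.0 * veg                 # same material, brighter illumination
soil = np.array([0.20, 0.25, 0.30, 0.35])
```

In a region merging loop, two adjacent segments would be merged when the SA between their mean spectra falls below the (here adaptively refined) threshold.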

29 pages, 5721 KiB  
Article
Spatial Filtering in DCT Domain-Based Frameworks for Hyperspectral Imagery Classification
by Razika Bazine, Huayi Wu and Kamel Boukhechba
Remote Sens. 2019, 11(12), 1405; https://doi.org/10.3390/rs11121405 - 13 Jun 2019
Cited by 5 | Viewed by 3518
Abstract
In this article, we propose two effective frameworks for hyperspectral imagery classification based on spatial filtering in the Discrete Cosine Transform (DCT) domain. In the proposed approaches, a spectral DCT is performed on the hyperspectral image to obtain a spectral profile representation, where the most significant information in the transform domain is concentrated in a few low-frequency components. The high-frequency components, which generally represent noisy data, are further processed with a spatial filter to extract the remaining useful information. For the spatial filtering step, both the two-dimensional DCT (2D-DCT) and the two-dimensional adaptive Wiener filter (2D-AWF) are explored. After spatial filtering, an inverse spectral DCT is applied to all transformed bands, including the filtered bands, to obtain the final preprocessed hyperspectral data, which is subsequently fed into a linear Support Vector Machine (SVM) classifier. Experimental results on three hyperspectral datasets show that the proposed Cascade Spectral DCT Spatial Wiener Filter framework (CDCT-WF_SVM) outperforms several state-of-the-art methods in terms of classification accuracy, sensitivity to training sample size, and computational time.

22 pages, 2508 KiB  
Article
Double-Branch Multi-Attention Mechanism Network for Hyperspectral Image Classification
by Wenping Ma, Qifan Yang, Yue Wu, Wei Zhao and Xiangrong Zhang
Remote Sens. 2019, 11(11), 1307; https://doi.org/10.3390/rs11111307 - 01 Jun 2019
Cited by 213 | Viewed by 6788
Abstract
Recently, Hyperspectral Image (HSI) classification has been attracting growing attention from researchers. HSI contains abundant spectral and spatial information, and how best to fuse these two types of information remains an open problem. In this paper, to extract spectral and spatial features, we propose a Double-Branch Multi-Attention mechanism network (DBMA) for HSI classification. The network has two branches that extract spectral and spatial features respectively, which reduces the interference between the two types of features. Furthermore, in line with the different characteristics of the two branches, a different attention mechanism is applied in each branch, ensuring that more discriminative spectral and spatial features are extracted. The extracted features are then fused for classification. Extensive experimental results on three hyperspectral datasets show that the proposed method performs better than state-of-the-art methods.

19 pages, 4033 KiB  
Article
Label Noise Cleansing with Sparse Graph for Hyperspectral Image Classification
by Qingming Leng, Haiou Yang and Junjun Jiang
Remote Sens. 2019, 11(9), 1116; https://doi.org/10.3390/rs11091116 - 10 May 2019
Cited by 11 | Viewed by 3755
Abstract
In a real hyperspectral image classification task, label noise inevitably exists in the training samples. Current methods for dealing with label noise assume that it obeys a Gaussian distribution, which is rarely the case in practice, because we are more likely to mislabel training samples at the boundaries between classes. In this paper, we propose a spectral–spatial sparse graph-based adaptive label propagation (SALP) algorithm to address the more practical case where the label information is contaminated by both random noise and boundary noise. SALP consists of two main steps. First, a spectral–spatial sparse graph is constructed to capture the contextual correlations between pixels within the same homogeneous region, as produced by superpixel segmentation, and a transfer matrix describing the transition probabilities between pixels is derived from it. Second, after randomly splitting the training pixels into “clean” and “polluted” sets, we iteratively propagate the label information from “clean” to “polluted” via the transfer matrix, adaptively adjusting the relabeling strategy for each pixel according to its spatial position within the corresponding homogeneous region. Experimental results on two standard hyperspectral image datasets show that SALP, applied on top of four major classifiers, can significantly reduce the influence of noisy labels, and our method achieves better performance than the baselines.
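The propagation step rests on classic graph-based label propagation: labels of trusted pixels are spread through a row-stochastic transfer matrix while the trusted pixels stay clamped. The sketch below shows that generic mechanism, not SALP's exact spectral–spatial update; the affinity matrix `W` and the parameters are illustrative.

```python
import numpy as np

def propagate_labels(W, labels, clean_mask, alpha=0.99, n_iter=50):
    """Generic graph label propagation: iterate F <- alpha*T@F + (1-alpha)*Y,
    clamping the 'clean' pixels to their given labels."""
    n_cls = labels.max() + 1
    T = W / W.sum(axis=1, keepdims=True)   # row-stochastic transfer matrix
    Y = np.eye(n_cls)[labels]              # one-hot label matrix
    Y[~clean_mask] = 0                     # 'polluted' pixels carry no label
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * T @ F + (1 - alpha) * Y
        F[clean_mask] = Y[clean_mask]      # keep trusted labels fixed
    return F.argmax(axis=1)
```

On a toy four-node graph with two tight clusters, the two unlabeled nodes inherit the class of their clamped neighbors.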

18 pages, 2162 KiB  
Article
Spectral-Spatial Attention Networks for Hyperspectral Image Classification
by Xiaoguang Mei, Erting Pan, Yong Ma, Xiaobing Dai, Jun Huang, Fan Fan, Qinglei Du, Hong Zheng and Jiayi Ma
Remote Sens. 2019, 11(8), 963; https://doi.org/10.3390/rs11080963 - 23 Apr 2019
Cited by 191 | Viewed by 12938
Abstract
Many deep learning models, such as convolutional neural networks (CNN) and recurrent neural networks (RNN), have been successfully applied to extract deep features for hyperspectral tasks. Hyperspectral image classification exploits this abundant information to distinguish the characteristics of land covers. Motivated by the attention mechanism of the human visual system, in this study we propose a spectral-spatial attention network for hyperspectral image classification. In our method, an RNN with attention learns inner spectral correlations within a continuous spectrum, while a CNN with attention is designed to focus on saliency features and on the spatial relevance between neighboring pixels. Experimental results demonstrate that our method can fully utilize the spectral and spatial information to obtain competitive performance.
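At its core, the attention mechanism assigns a softmax weight to each band (or spatial position) and reweights the input accordingly. The toy function below illustrates only that reweighting step; in the paper the saliency scores are learned inside the RNN/CNN branches, whereas here they are supplied by hand.

```python
import numpy as np

def attention_reweight(spectrum, scores):
    """Soft attention over spectral bands: softmax the saliency scores,
    then reweight the input spectrum (illustrative only)."""
    w = np.exp(scores - scores.max())  # numerically stable softmax
    w /= w.sum()
    return spectrum * w

s = np.array([1.0, 1.0, 1.0, 1.0])
out = attention_reweight(s, np.array([0.0, 0.0, 0.0, 10.0]))
```

With a strongly peaked score on the last band, nearly all of the output mass concentrates there.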

23 pages, 11328 KiB  
Article
Hyperspectral Unmixing with Gaussian Mixture Model and Low-Rank Representation
by Yong Ma, Qiwen Jin, Xiaoguang Mei, Xiaobing Dai, Fan Fan, Hao Li and Jun Huang
Remote Sens. 2019, 11(8), 911; https://doi.org/10.3390/rs11080911 - 15 Apr 2019
Cited by 26 | Viewed by 4164
Abstract
The Gaussian mixture model (GMM) has been one of the most representative models for hyperspectral unmixing under endmember variability. However, existing GMM unmixing models impose only smoothness and sparsity priors on the abundances and thus ignore possible local spatial correlation. For pixels that lie on the boundaries between different materials or within inhomogeneous regions, the abundances of neighboring pixels are not covered by those priors. We therefore propose a novel GMM unmixing method based on superpixel segmentation (SS) and low-rank representation (LRR), called GMM-SS-LRR. We apply superpixel segmentation to the first principal component of the HSI to obtain homogeneous regions, so that the HSI to be unmixed is partitioned into regions in which the abundance coefficients have an underlying low-rank structure. To further exploit the spatial data structure, we formulate the unmixing problem with a GMM under the Bayesian framework, incorporate the low-rank property into the objective function as prior knowledge, and solve the objective function with generalized expectation maximization. Experiments on synthetic datasets and real HSIs demonstrate that the proposed GMM-SS-LRR is effective compared with other popular current methods.
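The low-rank prior rests on a simple observation: within a homogeneous superpixel, pixel abundances vary smoothly between a few prototypes, so the abundance matrix of the region has low rank. The synthetic check below demonstrates this on fabricated data (the prototypes and sizes are arbitrary, not from the paper).

```python
import numpy as np

rng = np.random.default_rng(1)
# abundances of 50 pixels in one homogeneous superpixel, 4 endmembers:
# every pixel is a convex combination of 2 prototype abundance vectors
basis = rng.dirichlet(np.ones(4), size=2)
coef = rng.uniform(size=(50, 1))
A = coef * basis[0] + (1 - coef) * basis[1]   # rank-2 by construction
sv = np.linalg.svd(A, compute_uv=False)
print((sv > 1e-8).sum())  # 2
```

Only two singular values are non-negligible, which is exactly the structure GMM-SS-LRR encodes as a prior on each region's abundance matrix.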

26 pages, 7045 KiB  
Article
Divide-and-Conquer Dual-Architecture Convolutional Neural Network for Classification of Hyperspectral Images
by Jie Feng, Lin Wang, Haipeng Yu, Licheng Jiao and Xiangrong Zhang
Remote Sens. 2019, 11(5), 484; https://doi.org/10.3390/rs11050484 - 27 Feb 2019
Cited by 18 | Viewed by 5564
Abstract
The convolutional neural network (CNN) is well known for its powerful image classification capability. For pixel-wise classification of hyperspectral images (HSIs), a fixed-size spatial window is generally used as the CNN input. However, a single fixed-size spatial architecture limits the performance of the CNN because it neglects the varied land-cover distributions in HSIs. Moreover, the scarcity of labeled samples in HSIs may cause overfitting. To address these problems, a novel divide-and-conquer dual-architecture CNN (DDCNN) method is proposed for HSI classification. In DDCNN, a regional division strategy based on local and non-local decisions is devised to distinguish homogeneous from heterogeneous regions. For homogeneous regions, a multi-scale CNN architecture with larger spatial window inputs is constructed to learn joint spectral-spatial features; for heterogeneous regions, a fine-grained CNN architecture with smaller spatial window inputs is constructed to learn hierarchical spectral features. To alleviate the shortage of training samples, unlabeled samples with high confidence are pre-labeled under an adaptive spatial constraint. Experimental results on HSIs demonstrate that the proposed method provides encouraging classification performance, especially in region uniformity and edge preservation with limited training samples.
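The divide-and-conquer idea can be illustrated with a toy homogeneity test: pixels whose spectral neighborhood is uniform get the larger input window, pixels near class boundaries get the smaller one. The threshold, window sizes, and the purely local test below are placeholders; the paper's division strategy also uses non-local decisions.

```python
import numpy as np

def window_size_for(cube, r, c, small=5, large=11, thresh=0.05):
    """Pick the CNN input window for pixel (r, c): large for a spectrally
    homogeneous neighbourhood, small for a heterogeneous one (toy rule)."""
    patch = cube[max(r - 2, 0):r + 3, max(c - 2, 0):c + 3]  # 5x5 neighbourhood
    # mean spectral distance of the neighbours to the centre pixel
    d = np.linalg.norm(patch - cube[r, c], axis=2).mean()
    return large if d < thresh else small
```

A constant cube yields the large window everywhere, while a noisy cube falls back to the small one.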

18 pages, 4704 KiB  
Article
Dense Semantic Labeling with Atrous Spatial Pyramid Pooling and Decoder for High-Resolution Remote Sensing Imagery
by Yuhao Wang, Binxiu Liang, Meng Ding and Jiangyun Li
Remote Sens. 2019, 11(1), 20; https://doi.org/10.3390/rs11010020 - 22 Dec 2018
Cited by 65 | Viewed by 7474
Abstract
Dense semantic labeling is significant in high-resolution remote sensing imagery research and has been widely used in land-use analysis and environmental protection. With the recent success of fully convolutional networks (FCN), various network architectures have substantially improved performance. Among them, atrous spatial pyramid pooling (ASPP) and the encoder-decoder are two successful structures. The former extracts multi-scale contextual information with multiple effective fields-of-view, while the latter recovers spatial information to obtain sharper object boundaries. In this study, we propose a more efficient fully convolutional network that combines the advantages of both structures. Our model uses a deep residual network (ResNet) followed by ASPP as the encoder, and at the upsampling stage combines two scales of high-level features with the corresponding low-level features as the decoder. We further develop a multi-scale loss function to enhance the learning procedure. In postprocessing, a novel superpixel-based dense conditional random field is employed to refine the predictions. We evaluate the proposed method on the Potsdam and Vaihingen datasets, and the experimental results demonstrate that it outperforms other machine learning and deep learning methods. Compared with the state-of-the-art DeepLab_v3+, our model gains 0.4% and 0.6% improvements in overall accuracy on the two datasets, respectively.
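ASPP runs the same kernel at several dilation (atrous) rates in parallel and merges the results, giving multiple effective fields-of-view at once. The single-channel numpy sketch below implements dilation by inserting zeros between kernel taps; the rates follow the common (1, 6, 12, 18) convention, which is an assumption here, and summation stands in for the usual learned 1x1 fusion.

```python
import numpy as np
from scipy.ndimage import convolve

def dilate_kernel(k, rate):
    """Insert rate-1 zeros between kernel taps (atrous convolution)."""
    if rate == 1:
        return k
    out = np.zeros(((k.shape[0] - 1) * rate + 1,) * 2)
    out[::rate, ::rate] = k
    return out

def aspp(feat, kernel, rates=(1, 6, 12, 18)):
    """Atrous Spatial Pyramid Pooling, single-channel sketch: parallel
    dilated convolutions at several rates, summed together."""
    return sum(convolve(feat, dilate_kernel(kernel, r), mode="nearest")
               for r in rates)

feat = np.random.default_rng(2).normal(size=(64, 64))
out = aspp(feat, np.ones((3, 3)) / 9.0)
```

Each branch covers a wider context without increasing the number of kernel weights, which is the structural advantage the abstract refers to.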

19 pages, 2999 KiB  
Article
Hyperspectral Unmixing with Bandwise Generalized Bilinear Model
by Chang Li, Yu Liu, Juan Cheng, Rencheng Song, Hu Peng, Qiang Chen and Xun Chen
Remote Sens. 2018, 10(10), 1600; https://doi.org/10.3390/rs10101600 - 09 Oct 2018
Cited by 18 | Viewed by 3493
Abstract
The generalized bilinear model (GBM) has received extensive attention in the field of hyperspectral nonlinear unmixing. Traditional GBM unmixing methods usually assume that the observations are degraded only by additive white Gaussian noise (AWGN), and that the intensity of the AWGN is the same in every band of the hyperspectral image (HSI). However, real HSIs are usually degraded by a mixture of various kinds of noise, including Gaussian noise, impulse noise, dead pixels or lines, and stripes, and the intensity of the AWGN usually differs from band to band. To address these issues, we propose a novel nonlinear unmixing method based on a bandwise generalized bilinear model (NU-BGBM), which can adapt to the complex mixed noise present in real HSIs. The alternating direction method of multipliers (ADMM) is adopted to solve the proposed NU-BGBM. Finally, extensive experiments demonstrate the effectiveness of the proposed NU-BGBM compared with several state-of-the-art unmixing methods.
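The standard GBM extends the linear mixing model with pairwise bilinear interaction terms, y = Ea + Σ_{i<j} γ_ij a_i a_j (e_i ⊙ e_j). The sketch below implements that forward model only; the bandwise noise modeling and the ADMM solver that distinguish NU-BGBM are not shown.

```python
import numpy as np

def gbm_forward(E, a, gamma):
    """Generalized bilinear model forward pass: linear mixture plus
    pairwise bilinear interactions between endmember spectra.
    E: (bands, p) endmember matrix, a: (p,) abundances,
    gamma: (p, p) interaction coefficients (upper triangle used)."""
    y = E @ a
    p = E.shape[1]
    for i in range(p):
        for j in range(i + 1, p):
            y = y + gamma[i, j] * a[i] * a[j] * E[:, i] * E[:, j]
    return y
```

With all interaction coefficients set to zero the model reduces to the linear mixing model, which is a useful sanity check.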

23 pages, 2566 KiB  
Article
Self-Dictionary Regression for Hyperspectral Image Super-Resolution
by Dongsheng Gao, Zhentao Hu and Renzhen Ye
Remote Sens. 2018, 10(10), 1574; https://doi.org/10.3390/rs10101574 - 01 Oct 2018
Cited by 9 | Viewed by 3087
Abstract
Due to sensor limitations, hyperspectral images (HSIs) are acquired by hyperspectral sensors with high spectral resolution but low spatial resolution; it is difficult for a sensor to acquire images with both high spatial and high spectral resolution simultaneously. Hyperspectral image super-resolution aims to enhance the spatial resolution of an HSI by software techniques. In recent years, various methods have been proposed to fuse an HSI with a multispectral image (MSI) from an unmixing or a spectral-dictionary perspective. However, these methods extract the spectral information from each image individually and therefore ignore the cross-correlation between the observed HSI and MSI, making it difficult to achieve high spatial resolution while preserving the spatial-spectral consistency between the low-resolution and high-resolution HSI. In this paper, a self-dictionary regression based method is proposed to exploit the cross-correlation between the observed HSI and MSI. Both the observed low-resolution HSI and the MSI are considered simultaneously to estimate the endmember dictionary and the abundance code. To preserve spectral consistency, the endmember dictionary is extracted by performing a common sparse basis selection on the concatenation of the observed HSI and MSI. A consistency constraint is then exploited to ensure spatial consistency between the abundance codes of the low-resolution and high-resolution HSI. Extensive experiments on three datasets demonstrate that the proposed method outperforms the state-of-the-art methods.

23 pages, 5425 KiB  
Article
ERN: Edge Loss Reinforced Semantic Segmentation Network for Remote Sensing Images
by Shuo Liu, Wenrui Ding, Chunhui Liu, Yu Liu, Yufeng Wang and Hongguang Li
Remote Sens. 2018, 10(9), 1339; https://doi.org/10.3390/rs10091339 - 22 Aug 2018
Cited by 72 | Viewed by 9362
Abstract
The semantic segmentation of remote sensing images faces two major challenges: high inter-class similarity and interference from ubiquitous shadows. To address these issues, we develop a novel edge loss reinforced semantic segmentation network (ERN) that leverages the spatial boundary context to reduce semantic ambiguity. The main contributions of this paper are as follows: (1) we propose a novel end-to-end semantic segmentation network for remote sensing that uses multiple weighted edge supervisions to retain spatial boundary information; (2) the main representations of the network are shared between the edge loss reinforced structures and semantic segmentation, so that ERN achieves semantic segmentation and edge detection simultaneously without significantly increasing model complexity; and (3) we explore and discuss different ERN schemes to guide the design of future networks. Extensive experimental results on two remote sensing datasets demonstrate the effectiveness of our approach in both quantitative and qualitative evaluation. In particular, segmentation performance in shadow-affected regions is significantly improved.
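The essence of edge loss reinforcement is a joint objective: the usual pixelwise segmentation loss plus a weighted penalty on boundary-map mismatch. The single-scale numpy sketch below is illustrative only; boundary maps are approximated with spatial gradients, and the weighting `lam` is an assumed hyperparameter, whereas the paper supervises edges at multiple decoder depths.

```python
import numpy as np

def edge_reinforced_loss(pred, target, lam=0.5, eps=1e-12):
    """Joint objective sketch: pixelwise cross-entropy plus a weighted
    penalty on the mismatch between boundary maps of prediction and
    ground truth (single-scale toy version)."""
    ce = -np.mean(target * np.log(pred + eps))
    # crude boundary maps from spatial gradient magnitudes
    edge_p = np.hypot(*np.gradient(pred))
    edge_t = np.hypot(*np.gradient(target))
    return ce + lam * np.mean((edge_p - edge_t) ** 2)
```

A perfect prediction gives a near-zero loss, while an inverted one is heavily penalized.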
