Special Issue "Feature Extraction and Data Classification in Hyperspectral Imaging"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (28 February 2021).

Special Issue Editors

Dr. Jaime Zabalza
Guest Editor
University of Strathclyde, Glasgow, UK
Interests: signal and image processing; hyperspectral imaging; remote sensing; data mining; machine learning; artificial intelligence.
Dr. Yijun Yan
Guest Editor
University of Strathclyde, Glasgow, UK
Interests: signal and image processing; hyperspectral imaging; remote sensing; data mining; machine learning; artificial intelligence.

Special Issue Information

Dear Colleagues,

Hyperspectral imaging is currently a fast-moving area of not only research but also industrial development, where captured hyperspectral cubes provide abundant information with great potential in many different applications. In this Special Issue, we aim to compile state-of-the-art research on how to tackle the “big data” problem of extracting the most useful information out of the hyperspectral paradigm.

This Special Issue is open to any researcher working on hyperspectral data mining and data classification. Specific topics include (but are not limited to) the following:

  • Denoising and enhancement;
  • Band selection and data reduction;
  • Supervised and unsupervised feature extraction and feature selection;
  • Compressive sensing and optimised data acquisition;
  • Spatial–spectral data fusion;
  • Spectral unmixing and super-resolution for improved classification;
  • Deep learning approaches for data mining and data classification;
  • Visualisation of the data and features;
  • Fast implementation of the algorithms using GPU, etc.;
  • Emerging new datasets and applications.
Dr. Jaime Zabalza
Dr. Jinchang Ren
Dr. Yijun Yan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Hyperspectral data
  • Feature extraction
  • Dimensionality reduction
  • Classification
  • Deep learning
  • Efficient computation

Published Papers (10 papers)


Research

Open Access Article
Distance Transform-Based Spectral-Spatial Feature Vector for Hyperspectral Image Classification with Stacked Autoencoder
Remote Sens. 2021, 13(9), 1732; https://doi.org/10.3390/rs13091732 - 29 Apr 2021
Abstract
Pixel-wise classification of hyperspectral images (HSIs) from remote sensing data is a common approach for extracting information about scenes. In recent years, approaches based on deep learning techniques have gained wide applicability. An HSI dataset can be viewed either as a collection of images, each one captured at a different wavelength, or as a collection of spectra, each one associated with a specific point (pixel). Enhanced classification accuracy is enabled if the spectral and spatial information are combined in the input vector. This allows simultaneous classification according to spectral type as well as geometric relationships. In this study, we propose a novel spatial feature vector which improves accuracies in pixel-wise classification. Our proposed feature vector is based on the distance transform of the pixels with respect to the dominant edges in the input HSI. In other words, we allow the location of pixels within geometric subdivisions of the dataset to modify the contribution of each pixel to the spatial feature vector. Moreover, we used the extended multi-attribute profile (EMAP) features to add more geometric features to the proposed spatial feature vector. We performed experiments with three hyperspectral datasets. In addition to the Salinas and University of Pavia datasets, which are commonly used in HSI research, we include samples from our Surrey BC dataset. Our proposed method compares favorably with traditional algorithms as well as with some recently published deep learning-based algorithms.
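As a rough illustration of the idea in this abstract, the sketch below computes a per-pixel distance to an edge mask and appends it to each spectrum as an extra feature. This is a hypothetical reconstruction, not the authors' implementation: the edge mask is assumed to be given (the paper derives dominant edges from the HSI itself), the distance transform is brute-force, and the EMAP features are omitted.

```python
import numpy as np

def distance_transform(edges):
    """Brute-force Euclidean distance of every pixel to the nearest edge pixel.

    `edges` is a boolean H x W mask of dominant-edge locations (assumed input;
    deriving it from the HSI is not reproduced here).
    """
    h, w = edges.shape
    ys, xs = np.nonzero(edges)
    edge_pts = np.stack([ys, xs], axis=1)  # (n_edges, 2)
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                    axis=-1)               # (h, w, 2) pixel coordinates
    # Distance from each pixel to every edge pixel, then take the minimum.
    d = np.linalg.norm(grid[:, :, None, :] - edge_pts[None, None, :, :], axis=-1)
    return d.min(axis=2)

def spatial_feature(cube, edges):
    """Append the per-pixel edge distance to each spectral vector of an
    (H, W, bands) cube, yielding (H, W, bands + 1) feature vectors."""
    dist = distance_transform(edges)
    return np.concatenate([cube, dist[..., None]], axis=-1)
```

For real cubes a linear-time distance transform (e.g., scipy.ndimage.distance_transform_edt) would replace the brute-force minimum over all edge pixels.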
(This article belongs to the Special Issue Feature Extraction and Data Classification in Hyperspectral Imaging)

Open Access Article
Investigating the Effects of a Combined Spatial and Spectral Dimensionality Reduction Approach for Aerial Hyperspectral Target Detection Applications
Remote Sens. 2021, 13(9), 1647; https://doi.org/10.3390/rs13091647 - 23 Apr 2021
Abstract
Target detection and classification are important applications of hyperspectral imaging in remote sensing. A wide range of algorithms for target detection in hyperspectral images have been developed in the last few decades. Given the nature of hyperspectral images, they exhibit large quantities of redundant information and are therefore compressible. Dimensionality reduction is an effective means of both compressing and denoising data. Although spectral dimensionality reduction is prevalent in hyperspectral target detection applications, the spatial redundancy of a scene is rarely exploited. By applying simple spatial masking techniques as a preprocessing step to disregard pixels of definite disinterest, the subsequent spectral dimensionality reduction process becomes simpler, less costly and more informative. This paper proposes a processing pipeline to compress hyperspectral images both spatially and spectrally before applying target detection algorithms to the resultant scene. Combinations of several different spectral dimensionality reduction methods and target detection algorithms, within the proposed pipeline, are evaluated. We find that the Adaptive Cosine Estimator produces an improved F1 score and Matthews Correlation Coefficient when compared to unprocessed data. We also show that by using the proposed pipeline the data can be compressed by over 90% while target detection performance is maintained.
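The Adaptive Cosine Estimator named in this abstract has a well-known closed form; below is a minimal NumPy sketch, assuming the background mean and covariance are estimated from the scene itself. The spatial masking and dimensionality reduction stages of the proposed pipeline are not reproduced here.

```python
import numpy as np

def ace(pixels, target):
    """Adaptive Cosine Estimator score per pixel.

    pixels: (n, bands) scene spectra; target: (bands,) target signature.
    Both are demeaned with the scene mean and whitened with the scene
    covariance, so each score is a squared cosine in whitened space,
    bounded in [0, 1].
    """
    mu = pixels.mean(axis=0)
    x = pixels - mu
    s = target - mu
    cov_inv = np.linalg.pinv(np.cov(x, rowvar=False))
    num = (x @ cov_inv @ s) ** 2
    den = (s @ cov_inv @ s) * np.einsum("ij,jk,ik->i", x, cov_inv, x)
    return num / np.maximum(den, 1e-12)
```

A detection map follows by thresholding the scores; the threshold choice drives the F1 / Matthews Correlation Coefficient trade-off evaluated in the paper.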

Open Access Article
Machine Learning Optimised Hyperspectral Remote Sensing Retrieves Cotton Nitrogen Status
Remote Sens. 2021, 13(8), 1428; https://doi.org/10.3390/rs13081428 - 07 Apr 2021
Abstract
Hyperspectral imaging spectrometers mounted on an unmanned aerial vehicle (UAV) can capture imagery at high spatial and spectral resolution to provide cotton crop nitrogen status for precision agriculture. The aim of this research was to explore the use of machine learning with hyperspectral datacubes over agricultural fields. Hyperspectral imagery with high spatial (~5.2 cm) and spectral (5 nm) resolution over the spectral range 475–925 nm was collected over a mature cotton crop, allowing discrimination of individual crop rows and field features as well as providing a continuous spectral range for calculating derivative spectra. The nominal reflectance and its derivatives clearly highlighted the different treatment blocks and were strongly related to N concentration in leaf and petiole samples, both in traditional vegetation indices (e.g., Vogelmann 1, R2 = 0.8) and in novel combinations of spectra (R2 = 0.85). The key hyperspectral bands identified were at the red-edge inflection point (695–715 nm). Satellite multispectral performance was compared against UAV hyperspectral remote sensing by testing the ability of Sentinel MSI to predict N concentration using the bands in the VIS-NIR spectral region. The Sentinel-2A Green band (B3; mid-point 559.8 nm) explained the same amount of variation in N as the hyperspectral data and more than Sentinel Red Edge Point 1 (B5; mid-point 704.9 nm), with the lower 10 m resolution Green band reporting an R2 = 0.85, compared with the R2 = 0.78 of the downscaled Sentinel Red Edge Point 1 at 5 m. The remaining Sentinel bands explained much less variation (the maximum was NIR at R2 = 0.48). Investigation of the red-edge peak region in the first derivative showed strong promise, with RIDAmid (R2 = 0.81) being the best index. The machine learning approach narrowed the range of bands required to investigate plant condition over this trial site, greatly improved processing time and reduced processing complexity. While Sentinel performed well in this comparison and would be useful in a broadacre crop production context, the impact of pixel boundaries relative to a region of interest and its coarse spatial and temporal resolution limit its utility in a research capacity.
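The red-edge inflection point used in this abstract can be located from first-derivative spectra. The sketch below is illustrative only: the 695–715 nm window comes from the abstract, while the sigmoid spectrum in the test is synthetic, not the study's data.

```python
import numpy as np

def red_edge_inflection(wavelengths, reflectance):
    """Locate the red-edge inflection point as the wavelength of maximum
    first-derivative reflectance inside the 695-715 nm window."""
    # np.gradient handles the (possibly non-uniform) wavelength spacing.
    deriv = np.gradient(reflectance, wavelengths)
    window = (wavelengths >= 695) & (wavelengths <= 715)
    # Mask bands outside the window so argmax only sees the red edge.
    idx = np.argmax(np.where(window, deriv, -np.inf))
    return wavelengths[idx], deriv
```

Derivative spectra such as this underlie indices like the RIDAmid index reported above.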

Open Access Article
Predicting Water Stress in Wild Blueberry Fields Using Airborne Visible and Near Infrared Imaging Spectroscopy
Remote Sens. 2021, 13(8), 1425; https://doi.org/10.3390/rs13081425 - 07 Apr 2021
Abstract
Water management and irrigation practices are persistent challenges for many agricultural systems, exacerbated by changing seasonal and weather patterns. The wild blueberry industry is at heightened susceptibility due to its unique growing conditions and uncultivated nature. Stress detection in agricultural fields can prompt management responses to mitigate detrimental conditions, including drought and disease. We assessed airborne spectral data accompanied by ground-sampled water potential over three developmental stages of wild blueberries, collected throughout the 2019 summer on two adjacent fields, one irrigated and one non-irrigated. Ground-sampled leaves were collected in tandem with the hyperspectral image collection with an unoccupied aerial vehicle (UAV) and then measured for leaf water potential. Using methods in machine learning and statistical analysis, we developed models to determine irrigation status and water potential. Seven models were assessed in this study, with four used to process six hyperspectral cube images for analysis. These images were classified as irrigated or non-irrigated and estimated for water potential levels, resulting in an R2 of 0.62, verified with a validation dataset. Further investigation relating imaging spectroscopy and water potential will be beneficial in understanding the dynamics between the two for future studies.

Open Access Article
3DeepM: An Ad Hoc Architecture Based on Deep Learning Methods for Multispectral Image Classification
Remote Sens. 2021, 13(4), 729; https://doi.org/10.3390/rs13040729 - 17 Feb 2021
Abstract
Current predefined architectures for deep learning are computationally very heavy and use tens of millions of parameters. Thus, computational costs may be prohibitive for many experimental or technological setups. We developed an ad hoc architecture for the classification of multispectral images using deep learning techniques. The architecture, called 3DeepM, is composed of 3D filter banks specially designed for the extraction of spatial-spectral features in multichannel images. The new architecture has been tested on a sample of 12,210 multispectral images of seedless table grape varieties: Autumn Royal, Crimson Seedless, Itum4, Itum5 and Itum9. 3DeepM was able to classify 100% of the images and obtained the best overall results in terms of accuracy, number of classes, number of parameters and training time compared to similar work. In addition, this paper presents a flexible and reconfigurable computer vision system designed for the acquisition of multispectral images in the range of 400 nm to 1000 nm. The vision system enabled the creation of the first dataset consisting of 12,210 37-channel multispectral images (12 VIS + 25 IR) of five seedless table grape varieties that have been used to validate the 3DeepM architecture. Compared to predefined classification architectures such as AlexNet, ResNet or ad hoc architectures with a very high number of parameters, 3DeepM shows the best classification performance despite using 130-fold fewer parameters than the architectures to which it was compared. 3DeepM can be used in a multitude of applications that use multispectral images, such as remote sensing or medical diagnosis. In addition, the small number of parameters of 3DeepM makes it ideal for application in online classification systems aboard autonomous robots or unmanned vehicles.
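The core primitive behind a 3D filter bank like the one this abstract describes — joint spatial-spectral convolution over a multichannel stack — can be illustrated in plain NumPy. This naive valid-mode cross-correlation is for exposition only; an actual architecture would stack many learned filters in an optimized deep learning framework.

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid-mode 3D cross-correlation of an (H, W, C) multispectral stack
    with a (kh, kw, kc) filter, sliding over rows, columns and channels so
    that spatial and spectral structure are captured jointly."""
    H, W, C = volume.shape
    kh, kw, kc = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1, C - kc + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(volume[i:i + kh, j:j + kw, k:k + kc]
                                      * kernel)
    return out
```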

Open Access Article
Data Augmentation and Spectral Structure Features for Limited Samples Hyperspectral Classification
Remote Sens. 2021, 13(4), 547; https://doi.org/10.3390/rs13040547 - 03 Feb 2021
Abstract
For both traditional classification methods and currently popular deep learning methods, the limited-sample classification problem is very challenging, and the lack of samples is an important factor affecting classification performance. Our work includes two aspects. First, unsupervised data augmentation for all hyperspectral samples not only improves the classification accuracy greatly with the newly added training samples, but also further improves the classification accuracy of the classifier by optimizing the augmented test samples. Second, an effective spectral structure extraction method is designed, and these spectral structure features achieve better classification accuracy than the original spectral features.

Open Access Article
An Efficient Spectral Feature Extraction Framework for Hyperspectral Images
Remote Sens. 2020, 12(23), 3967; https://doi.org/10.3390/rs12233967 - 04 Dec 2020
Abstract
Extracting diverse spectral features from hyperspectral images has become a hot topic in recent years. However, existing models are time-consuming to train and test and suffer from poor discriminative ability, resulting in low classification accuracy. In this paper, we design an effective feature extraction framework for the spectra of hyperspectral data. We construct a structured dictionary to encode spectral information and apply a learning machine to map the coding coefficients. To reduce training and testing time, the sparsity constraint is replaced by a block-diagonal constraint to accelerate the iteration, and an efficient extreme learning machine is employed to fit the spectral characteristics. To optimize the discriminative ability of our model, we first add spectral convolution to extract abundant spectral information. Then, we design shared constraints for the subdictionaries so that their common features can be expressed more effectively, improving both the discriminative and reconstructive ability of the dictionary. The experimental results on diverse databases show that the proposed feature extraction framework can not only greatly reduce the training and testing time, but also achieve very competitive accuracy compared with deep learning models.

Open Access Article
Pixel-Wise Classification of High-Resolution Ground-Based Urban Hyperspectral Images with Convolutional Neural Networks
Remote Sens. 2020, 12(16), 2540; https://doi.org/10.3390/rs12162540 - 07 Aug 2020
Abstract
Using ground-based, remote hyperspectral images from 0.4–1.0 micron in ∼850 spectral channels—acquired with the Urban Observatory facility in New York City—we evaluate the use of one-dimensional Convolutional Neural Networks (CNNs) for pixel-level classification and segmentation of built and natural materials in urban environments. We find that a multi-class model trained on hand-labeled pixels containing Sky, Clouds, Vegetation, Water, Building facades, Windows, Roads, Cars, and Metal structures yields an accuracy of 90–97% for three different scenes. We assess the transferability of this model by training on one scene and testing on another with significantly different illumination conditions and/or different content. This results in a significant (∼45%) decrease in the model's precision and recall, as does training on all scenes at once and testing on the individual scenes. These results suggest that while CNNs are powerful tools for pixel-level classification of very high-resolution spectral data of urban environments, retraining between scenes may be necessary. Furthermore, we test the dependence of the model on several instrument- and data-specific parameters including reduced spectral resolution (down to 15 spectral channels) and number of available training instances. The results are strongly class-dependent; however, we find that the classification of natural materials is particularly robust, especially the Vegetation class with a precision and recall >94% for all scenes and model transfers and >90% with only a single training instance.
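To illustrate pixel-wise classification of spectra in the spirit of this abstract — though with a deliberately simpler model than the paper's one-dimensional CNN — the sketch below fits per-class mean spectra from hand-labeled pixels and assigns every pixel of a cube to the nearest centroid. All names and data here are hypothetical stand-ins.

```python
import numpy as np

def fit_centroids(spectra, labels):
    """Per-class mean spectrum from hand-labeled training pixels."""
    classes = np.unique(labels)
    centroids = np.stack([spectra[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def classify_pixels(cube, classes, centroids):
    """Assign each pixel of an (H, W, bands) cube to the class whose centroid
    is nearest in Euclidean spectral distance."""
    flat = cube.reshape(-1, cube.shape[-1])
    d = np.linalg.norm(flat[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)].reshape(cube.shape[:2])
```

A 1D CNN replaces the fixed Euclidean distance with learned convolutional features over the spectral axis, which is what gives the paper's model its accuracy — and its sensitivity to scene-to-scene illumination changes.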

Open Access Article
Automatic Annotation of Hyperspectral Images and Spectral Signal Classification of People and Vehicles in Areas of Dense Vegetation with Deep Learning
Remote Sens. 2020, 12(13), 2111; https://doi.org/10.3390/rs12132111 - 01 Jul 2020
Abstract
Despite recent advances in image and video processing, the detection of people or cars in areas of dense vegetation is still challenging due to landscape, illumination changes and strong occlusion. In this paper, we address this problem with the use of a hyperspectral camera—installed on the ground or possibly a drone—and detection based on spectral signatures. We introduce a novel automatic method for annotating spectral signatures based on a combination of state-of-the-art deep learning methods. After we collected millions of samples with our method, we used a deep learning approach to train a classifier to detect people and cars. Our results show that, based only on spectral signature classification, we can achieve a Matthews Correlation Coefficient of 0.83. We evaluate our classification method in areas with varying vegetation and discuss the limitations and constraints of current hyperspectral imaging technology. We conclude that spectral signature classification is possible with high accuracy in uncontrolled outdoor environments. Nevertheless, even with state-of-the-art compact passive hyperspectral imaging technology, high dynamic range of illumination and relatively low image resolution continue to pose major challenges when developing object detection algorithms for areas of dense vegetation.

Open Access Article
A Lightweight Spectral–Spatial Feature Extraction and Fusion Network for Hyperspectral Image Classification
Remote Sens. 2020, 12(9), 1395; https://doi.org/10.3390/rs12091395 - 28 Apr 2020
Abstract
Hyperspectral image (HSI) classification accuracy has been greatly improved by employing deep learning. The current research mainly focuses on how to build a deep network to improve the accuracy. However, these networks tend to be more complex and have more parameters, which makes the model difficult to train and easy to overfit. Therefore, we present a lightweight deep convolutional neural network (CNN) model called S2FEF-CNN. In this model, three S2FEF blocks are used for joint spectral–spatial feature extraction. Each S2FEF block uses 1D spectral convolution to extract spectral features and 2D spatial convolution to extract spatial features, and then fuses the spectral and spatial features by multiplication. Instead of using fully connected layers, two pooling layers follow the three blocks for dimension reduction, which further reduces the training parameters. We compared our method with some state-of-the-art deep-network-based HSI classification methods on three commonly used hyperspectral datasets. The results show that our network can achieve comparable classification accuracy with significantly fewer parameters than the above deep networks, which reflects its potential advantages in HSI classification.
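The S2FEF block described above — a 1D spectral convolution and a 2D spatial convolution whose responses are fused by element-wise multiplication — can be sketched as follows. This is an interpretation of the abstract, not the published architecture: single fixed kernels, no learned weights, and 'same'-size outputs are assumed.

```python
import numpy as np

def spectral_conv(cube, k1d):
    """1D convolution along the band axis of an (H, W, bands) cube
    ('same' padding)."""
    return np.apply_along_axis(lambda v: np.convolve(v, k1d, mode="same"),
                               -1, cube)

def spatial_conv(cube, k2d):
    """Naive 2D cross-correlation of every band (zero padding, 'same' size)."""
    kh, kw = k2d.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(cube, ((ph, ph), (pw, pw), (0, 0)))
    out = np.zeros_like(cube)
    for i in range(kh):
        for j in range(kw):
            out += k2d[i, j] * padded[i:i + cube.shape[0],
                                      j:j + cube.shape[1], :]
    return out

def s2fef_block(cube, k1d, k2d):
    """Fuse spectral and spatial responses by element-wise multiplication,
    mirroring the fusion step the abstract describes (sketch only)."""
    return spectral_conv(cube, k1d) * spatial_conv(cube, k2d)
```

Multiplicative fusion keeps the output the same shape as the input, so blocks can be stacked; in the actual model, pooling layers then reduce the dimension instead of a fully connected head.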
