Recent Advances in the Processing of Hyperspectral Images

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 16 August 2024 | Viewed by 3955

Special Issue Editors


Prof. Dr. Liguo Wang
Guest Editor
College of Information and Communication Engineering, Dalian Minzu University, Dalian 116600, China
Interests: remote sensing image processing and machine learning

Prof. Dr. Yanfeng Gu
Guest Editor
School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
Interests: space intelligent remote sensing; multi-mode hyperspectral remote sensing; intelligent application of remote sensing big data

Dr. Peng Wang
Guest Editor
College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210000, China
Interests: remote sensing imagery processing; machine learning

Prof. Dr. Henry Leung
Guest Editor
Department of Electrical and Computer Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada
Interests: intelligent sensors; machine learning; data analytics; information fusion; IoT

Special Issue Information

Dear Colleagues,

Hyperspectral imagery (HSI) has become one of the most important data sources for monitoring and evaluating resources and the ecological environment. However, owing to sensor limitations and the complexity of these environments, acquired HSI often contains many mixed pixels, which poses great challenges for accurate resource and ecological environment mapping. How to process mixed pixels in HSI to obtain more accurate mapping information has therefore become one of the most active topics in remote sensing research, and many hyperspectral image processing techniques for handling mixed pixels are developing rapidly. In particular, advances in computing and in techniques such as artificial intelligence, deep learning, and weakly supervised learning have expanded the scope and applications of hyperspectral image processing in recent years. Nevertheless, several challenges and open problems still await efficient solutions and novel methodologies. The main goal of this Special Issue is to address advanced topics related to hyperspectral image processing. Topics of interest include, but are not limited to, the following:

  • Fusion and resolution enhancement;
  • Denoising, restoration, and super resolution;
  • Endmember extraction and unmixing;
  • Dimensionality reduction and band selection;
  • Classification and segmentation;
  • Subpixel mapping;
  • Change detection and time-series HSI analysis;
  • Artificial intelligence for HSI;
  • Deep learning for HSI.

Prof. Dr. Liguo Wang
Prof. Dr. Yanfeng Gu
Dr. Peng Wang
Prof. Dr. Henry Leung
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing image processing
  • hyperspectral image
  • mixed pixels
  • machine learning
  • deep learning

Published Papers (5 papers)


Research


26 pages, 6148 KiB  
Article
A Multi-Hyperspectral Image Collaborative Mapping Model Based on Adaptive Learning for Fine Classification
by Xiangrong Zhang, Zitong Liu, Xianhao Zhang and Tianzhu Liu
Remote Sens. 2024, 16(8), 1384; https://doi.org/10.3390/rs16081384 - 14 Apr 2024
Viewed by 364
Abstract
Hyperspectral (HS) data, encompassing hundreds of spectral channels for the same area, offer a wealth of spectral information and are increasingly utilized across various fields. However, their limitations in spatial resolution and imaging width pose challenges for precise recognition and fine classification in large scenes. Conversely, multispectral (MS) data excel in providing spatial details for vast landscapes but lack spectral precision. In this article, we propose an adaptive learning-based mapping model comprising an image fusion module, a spectral super-resolution network, and an adaptive learning network. The spectral super-resolution network learns the mapping between multispectral and hyperspectral images based on an attention mechanism. The image fusion module leverages spatial and spectral consistency in the training data, providing pseudo labels for spectral super-resolution training. The adaptive learning network incorporates spectral response priors via unsupervised learning, adjusting the output of the super-resolution network to preserve spectral information in the reconstructed data. The model eliminates the need for manually set image priors and complex parameter selection, and can adjust the network structure and parameters dynamically, ultimately enhancing the quality of the reconstructed image and enabling the fine classification of large-scale scenes at high spatial resolution. Compared with recent dictionary learning and deep learning spectral super-resolution methods, our approach exhibits superior performance in terms of both image similarity and classification accuracy.
(This article belongs to the Special Issue Recent Advances in the Processing of Hyperspectral Images)
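For readers who want a concrete starting point, the following minimal PyTorch sketch illustrates the general idea of learning a multispectral-to-hyperspectral spectral mapping with a channel-attention bottleneck, trained against a pseudo label such as one produced by an image fusion step. The band counts, layer sizes, and loss below are illustrative assumptions and do not reproduce the architecture described in the article.

import torch
import torch.nn as nn

class SpectralSuperResolution(nn.Module):
    """Per-pixel MS -> HS spectral mapping with a channel-attention bottleneck (illustrative)."""
    def __init__(self, ms_bands=8, hs_bands=120, hidden=64):
        super().__init__()
        self.lift = nn.Conv2d(ms_bands, hidden, 1)            # lift MS spectra into a feature space
        self.attn = nn.Sequential(                            # channel attention over the features
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(hidden, hidden // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden // 4, hidden, 1), nn.Sigmoid())
        self.project = nn.Conv2d(hidden, hs_bands, 1)         # predict the hyperspectral spectrum

    def forward(self, ms):
        h = torch.relu(self.lift(ms))
        return self.project(h * self.attn(h))

# Toy training step: the pseudo label stands in for the output of a fusion module.
ms = torch.randn(4, 8, 64, 64)            # wide-swath multispectral patch (hypothetical 8 bands)
pseudo_hs = torch.randn(4, 120, 64, 64)   # pseudo hyperspectral label (hypothetical 120 bands)
model = SpectralSuperResolution()
loss = nn.functional.l1_loss(model(ms), pseudo_hs)
loss.backward()
print(float(loss))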

24 pages, 4519 KiB  
Article
Joint Classification of Hyperspectral and LiDAR Data Based on Adaptive Gating Mechanism and Learnable Transformer
by Minhui Wang, Yaxiu Sun, Jianhong Xiang, Rui Sun and Yu Zhong
Remote Sens. 2024, 16(6), 1080; https://doi.org/10.3390/rs16061080 - 19 Mar 2024
Viewed by 653
Abstract
Utilizing multi-modal data, as opposed to hyperspectral image (HSI) data alone, enhances target identification accuracy in remote sensing. Transformers are applied to multi-modal data classification for their long-range dependency modelling but often overlook intrinsic image structure by directly flattening image blocks into vectors. Moreover, as the encoder deepens, unprofitable information negatively impacts classification performance. Therefore, this paper proposes a learnable transformer with an adaptive gating mechanism (AGMLT). Firstly, a spectral–spatial adaptive gating mechanism (SSAGM) is designed to comprehensively extract local information from images. It mainly contains point depthwise attention (PDWA) and asymmetric depthwise attention (ADWA). The former extracts spectral information from the HSI, and the latter extracts spatial information from the HSI and elevation information from LiDAR-derived rasterized digital surface models (LiDAR-DSM). By omitting linear layers, local continuity is maintained. Then, layer scale and a learnable transition matrix are introduced into the original transformer encoder and self-attention to form the learnable transformer (L-Former), which improves data dynamics and prevents performance degradation as the encoder deepens. Subsequently, learnable cross-attention (LC-Attention) with the learnable transfer matrix is designed to augment the fusion of multi-modal data by enriching feature information. Finally, poly loss, known for its adaptability to multi-modal data, is employed to train the model. Experiments are conducted on four well-known multi-modal datasets: Trento (TR), MUUFL (MU), Augsburg (AU), and Houston2013 (HU). The results show that AGMLT achieves better performance than several existing models.
(This article belongs to the Special Issue Recent Advances in the Processing of Hyperspectral Images)
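As a rough illustration of the two local-attention ideas named in the abstract, the PyTorch sketch below implements a pointwise depthwise gate (for spectral weighting) and an asymmetric 1xk / kx1 depthwise gate (for spatial weighting). The kernel sizes and the sigmoid gating form are assumptions made for illustration and are not taken from the AGMLT paper.

import torch
import torch.nn as nn

class PointDepthwiseAttention(nn.Module):
    """1x1 depthwise gate: reweights each spectral channel independently (illustrative PDWA stand-in)."""
    def __init__(self, ch):
        super().__init__()
        self.dw = nn.Conv2d(ch, ch, kernel_size=1, groups=ch)

    def forward(self, x):
        return x * torch.sigmoid(self.dw(x))

class AsymmetricDepthwiseAttention(nn.Module):
    """1xk followed by kx1 depthwise gate: captures spatial context cheaply (illustrative ADWA stand-in)."""
    def __init__(self, ch, k=7):
        super().__init__()
        self.dw_h = nn.Conv2d(ch, ch, (1, k), padding=(0, k // 2), groups=ch)
        self.dw_v = nn.Conv2d(ch, ch, (k, 1), padding=(k // 2, 0), groups=ch)

    def forward(self, x):
        return x * torch.sigmoid(self.dw_v(self.dw_h(x)))

# Toy usage on hypothetical HSI and LiDAR-DSM patch features.
hsi_feat = torch.randn(2, 32, 11, 11)
dsm_feat = torch.randn(2, 32, 11, 11)
spectral = PointDepthwiseAttention(32)(hsi_feat)
spatial = AsymmetricDepthwiseAttention(32)(hsi_feat) + AsymmetricDepthwiseAttention(32)(dsm_feat)
print(spectral.shape, spatial.shape)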

20 pages, 4527 KiB  
Article
Hyperspectral Image Classification with the Orthogonal Self-Attention ResNet and Two-Step Support Vector Machine
by Heting Sun, Liguo Wang, Haitao Liu and Yinbang Sun
Remote Sens. 2024, 16(6), 1010; https://doi.org/10.3390/rs16061010 - 13 Mar 2024
Viewed by 588
Abstract
Hyperspectral image classification plays a crucial role in remote sensing image analysis by classifying pixels. However, existing methods fall short in spatial–global information interaction and feature extraction capability. To overcome these challenges, this paper proposes a novel model for hyperspectral image classification using an orthogonal self-attention ResNet and a two-step support vector machine (OSANet-TSSVM). The OSANet-TSSVM model comprises two essential components: a deep feature extraction network and an improved support vector machine (SVM) classification module. The deep feature extraction network incorporates an orthogonal self-attention module (OSM) and a channel attention module (CAM) to enhance spatial–spectral feature extraction. The OSM computes 2D self-attention weights along the two orthogonal dimensions of an image, reducing the number of parameters while capturing comprehensive global contextual information. The CAM independently learns attention weights along the channel dimension, enabling the deep network to emphasise crucial channel information and enhance the spectral feature extraction capability. In addition to the feature extraction network, the OSANet-TSSVM model leverages an improved SVM classification module known as the two-step support vector machine (TSSVM). This module preserves the discriminative outcomes of the first-level SVM subclassifier and remaps them as new features for TSSVM training. By integrating the results of the two classifiers, the deficiencies of the individual classifiers are effectively compensated for, resulting in significantly enhanced classification accuracy. The performance of the proposed OSANet-TSSVM model was thoroughly evaluated on public datasets. The experimental results demonstrate that the model performs well in both subjective and objective evaluation metrics, highlighting its potential for advancing hyperspectral image classification in remote sensing applications.
(This article belongs to the Special Issue Recent Advances in the Processing of Hyperspectral Images)
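The two-step SVM idea, reusing the first-level SVM's decision scores as extra features for a second SVM, can be sketched with scikit-learn as follows. The synthetic features, kernel choice, and hyperparameters are placeholders, not the configuration used in the paper.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for per-pixel deep spectral-spatial features.
X, y = make_classification(n_samples=2000, n_features=64, n_informative=32,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Step 1: first-level SVM; keep its per-class decision scores.
svm1 = SVC(kernel="rbf", decision_function_shape="ovr").fit(X_train, y_train)
train_scores = svm1.decision_function(X_train)
test_scores = svm1.decision_function(X_test)

# Step 2: remap the first-level scores as additional features and train a second SVM.
svm2 = SVC(kernel="rbf", decision_function_shape="ovr").fit(
    np.hstack([X_train, train_scores]), y_train)

print("first-level accuracy:", svm1.score(X_test, y_test))
print("two-step accuracy   :", svm2.score(np.hstack([X_test, test_scores]), y_test))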

24 pages, 10424 KiB  
Article
Hyperspectral Images Weakly Supervised Classification with Noisy Labels
by Chengyang Liu, Lin Zhao and Haibin Wu
Remote Sens. 2023, 15(20), 4994; https://doi.org/10.3390/rs15204994 - 17 Oct 2023
Viewed by 917
Abstract
Deep network models rely on sufficient training samples to achieve superior processing performance, which limits their application in hyperspectral image (HSI) classification. In order to perform HSI classification with noisy labels, a robust weakly supervised feature learning (WSFL) architecture combined with multi-model attention is proposed. Specifically, the input noisy labeled data are first subjected to multiple groups of residual spectral attention models and multi-granularity residual spatial attention models, enabling WSFL to refine and optimize the extracted spectral and spatial features, with a focus on extracting clean sample information and reducing the model's dependence on labels. Finally, the fused and optimized spectral-spatial features are mapped to a multilayer perceptron (MLP) classifier to strengthen the model's constraint on the noisy samples. Experimental results on public datasets, including Pavia Center, WHU-Hi LongKou, and HangZhou, show that WSFL handles noisy labels better than strong models such as the spectral-spatial residual network (SSRN) and the dual-channel residual network (DCRN). On the HangZhou dataset, the classification accuracy of WSFL exceeds that of DCRN by 6.02% and that of SSRN by 7.85%.
(This article belongs to the Special Issue Recent Advances in the Processing of Hyperspectral Images)
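To give a flavour of how a model can limit the influence of noisy labels, the sketch below uses a generic small-loss sample-selection step, a standard noisy-label training trick; it is explicitly not the WSFL architecture from the paper, and the network, band count, and keep ratio are hypothetical placeholders.

import torch
import torch.nn as nn

def small_loss_step(model, optimizer, x, y, keep_ratio=0.7):
    """Update the model only on the fraction of samples with the smallest loss,
    treating the remainder as likely mislabeled."""
    per_sample = nn.functional.cross_entropy(model(x), y, reduction="none")
    k = max(1, int(keep_ratio * per_sample.numel()))
    keep = torch.topk(-per_sample, k).indices     # indices of the k smallest losses
    loss = per_sample[keep].mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)

# Toy usage on random spectra with labels assumed to be partially corrupted.
model = nn.Sequential(nn.Linear(103, 64), nn.ReLU(), nn.Linear(64, 9))   # hypothetical 103-band input, 9 classes
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(128, 103)
y = torch.randint(0, 9, (128,))
print(small_loss_step(model, optimizer, x, y))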

Other


17 pages, 9054 KiB  
Technical Note
MADANet: A Lightweight Hyperspectral Image Classification Network with Multiscale Feature Aggregation and a Dual Attention Mechanism
by Binge Cui, Jiaxiang Wen, Xiukai Song and Jianlong He
Remote Sens. 2023, 15(21), 5222; https://doi.org/10.3390/rs15215222 - 3 Nov 2023
Cited by 1 | Viewed by 747
Abstract
Hyperspectral remote sensing images, with their continuous, narrow, and rich spectra, hold distinct significance for the precise classification of land cover. Deep convolutional neural networks (CNNs) and their variants are increasingly utilized for hyperspectral classification, but balancing the number of model parameters against performance and accuracy has become a pressing challenge. To alleviate this problem, we propose MADANet, a lightweight hyperspectral image classification network that combines multiscale feature aggregation and a dual attention mechanism. By employing depthwise separable convolution, multiscale features can be extracted and aggregated to capture local contextual information effectively. Simultaneously, the dual attention mechanism harnesses both the channel and spatial dimensions to acquire comprehensive global semantic information. Ultimately, techniques such as global average pooling (GAP) and fully connected (FC) layers are employed to integrate local contextual information with global semantic knowledge, thereby enabling the accurate classification of hyperspectral pixels. Experiments on representative hyperspectral images demonstrate that MADANet not only attains the highest classification accuracy but also uses significantly fewer parameters than the other methods. For example, the model has only 0.16 M parameters on the Indian Pines (IP) dataset, yet its overall accuracy reaches 98.34%. Similarly, the framework achieves overall accuracies of 99.13%, 99.17%, and 99.08% on the University of Pavia (PU), Salinas (SA), and WHU-Hi LongKou (LongKou) datasets, respectively, exceeding the classification accuracy of existing state-of-the-art frameworks under the same conditions.
(This article belongs to the Special Issue Recent Advances in the Processing of Hyperspectral Images)
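The two building blocks named in the abstract, depthwise separable convolution for local multiscale features and a dual (channel plus spatial) attention block, can be sketched in PyTorch as follows. The layer sizes, kernel sizes, and patch shape are illustrative assumptions rather than the MADANet configuration.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution followed by a pointwise convolution (keeps the parameter count low)."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class DualAttention(nn.Module):
    """Channel attention (squeeze-and-excitation style) followed by a simple spatial attention map."""
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)                     # reweight channels
        avg = x.mean(dim=1, keepdim=True)           # per-pixel average over channels
        mx, _ = x.max(dim=1, keepdim=True)          # per-pixel maximum over channels
        return x * self.spatial(torch.cat([avg, mx], dim=1))

# Toy forward pass on a 15x15 patch with a hypothetical 30 spectral bands.
patch = torch.randn(8, 30, 15, 15)
features = DepthwiseSeparableConv(30, 64)(patch)
out = DualAttention(64)(features)
print(out.shape)   # torch.Size([8, 64, 15, 15])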
