Search Results (34)

Search Parameters:
Keywords = hyperspectral images classification (HSIC)

24 pages, 2508 KiB  
Article
Class-Discrepancy Dynamic Weighting for Cross-Domain Few-Shot Hyperspectral Image Classification
by Chen Ding, Jiahao Yue, Sirui Zheng, Yizhuo Dong, Wenqiang Hua, Xueling Chen, Yu Xie, Song Yan, Wei Wei and Lei Zhang
Remote Sens. 2025, 17(15), 2605; https://doi.org/10.3390/rs17152605 - 27 Jul 2025
Viewed by 340
Abstract
In recent years, cross-domain few-shot learning (CDFSL) has demonstrated remarkable performance in hyperspectral image classification (HSIC), partially alleviating the distribution shift problem. However, most domain adaptation methods rely on similarity metrics to establish cross-domain class matching, making it difficult to simultaneously account for intra-class sample size variations and inherent inter-class differences. To address this problem, existing studies have introduced a class weighting mechanism within the prototype network framework, determining class weights by calculating inter-sample similarity through distance metrics. However, this approach suffers from a dual limitation: susceptibility to noise interference and insufficient capacity to capture global class variations, which may distort weight allocation and consequently bias alignment. To solve these issues, we propose a novel class-discrepancy dynamic weighting-based cross-domain FSL (CDDW-CFSL) framework. It integrates three key components: (1) a class-weighted domain adaptation (CWDA) method that dynamically measures cross-domain distribution shifts using global class mean discrepancies and employs discrepancy-sensitive weighting to strengthen the alignment of critical categories, enabling accurate domain adaptation while maintaining feature topology; (2) a class mean refinement (CMR) method that incorporates class covariance distance to compute distribution discrepancies between support set samples and class prototypes, enabling the precise capture of cross-domain feature internal structures; and (3) a novel multi-dimensional feature extractor that simultaneously captures local spatial details and continuous spectral characteristics, facilitating deep cross-dimensional feature fusion. Results on three publicly available HSIC datasets demonstrate the effectiveness of CDDW-CFSL.
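The CWDA component, as described, weights class-mean alignment by the size of each class's cross-domain discrepancy. A minimal PyTorch sketch of that idea follows; the softmax weighting, the loss form, and all names are assumptions for illustration, not the authors' implementation:

```python
import torch

def class_weighted_alignment(src_feats, src_labels, tgt_feats, tgt_labels, num_classes):
    """Hypothetical CWDA-style loss: classes whose source/target mean
    features differ most receive the largest alignment weights."""
    gaps = []
    for c in range(num_classes):
        s, t = src_feats[src_labels == c], tgt_feats[tgt_labels == c]
        if len(s) == 0 or len(t) == 0:      # class missing in one domain
            continue
        gaps.append(torch.norm(s.mean(0) - t.mean(0)))  # global class mean gap
    gaps = torch.stack(gaps)
    weights = torch.softmax(gaps.detach(), dim=0)  # discrepancy-sensitive weights
    return (weights * gaps).sum()                  # weighted alignment objective
```

Detaching the weights keeps the gradient flowing only through the mean discrepancies themselves, so larger-gap classes are pulled together harder without the weights collapsing.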

23 pages, 10648 KiB  
Article
Meta-Learning-Integrated Neural Architecture Search for Few-Shot Hyperspectral Image Classification
by Aili Wang, Kang Zhang, Haibin Wu, Haisong Chen and Minhui Wang
Electronics 2025, 14(15), 2952; https://doi.org/10.3390/electronics14152952 - 24 Jul 2025
Viewed by 218
Abstract
To address the scarcity of labeled samples in practical classification scenarios, and the overfitting and insufficient generalization that few-shot learning (FSL) suffers in hyperspectral image classification (HSIC), this paper designs and implements a meta-learning-integrated neural architecture search (NAS) method for few-shot HSI classification. Firstly, a multi-source domain learning framework is constructed to integrate heterogeneous natural images and homogeneous remote sensing images, broadening the information available for few-shot learning and enabling the final network to improve its generalization under limited labeled samples by learning similarities between data sources. Secondly, by constructing precise and robust search spaces and deploying different units at different locations, the classification accuracy and transfer robustness of the final network are improved. The method fully exploits the spatial texture information and rich category information of multi-source data, and transfers the learned meta-knowledge to the optimal architecture for HSIC through the search space design, achieving HSIC with limited samples. Experimental results show that the proposed method achieves an overall accuracy (OA) of 98.57%, 78.39%, and 98.74% on the Pavia Center, Indian Pines, and WHU-Hi-LongKou datasets, respectively.

23 pages, 10182 KiB  
Article
HyperSMamba: A Lightweight Mamba for Efficient Hyperspectral Image Classification
by Mengyuan Sun, Liejun Wang, Shaochen Jiang, Shuli Cheng and Lihan Tang
Remote Sens. 2025, 17(12), 2008; https://doi.org/10.3390/rs17122008 - 11 Jun 2025
Viewed by 660
Abstract
Deep learning has recently achieved remarkable progress in hyperspectral image (HSI) classification. Among these advancements, Transformer-based models have gained considerable attention due to their ability to establish long-range dependencies. However, the quadratic computational complexity of the self-attention mechanism limits their application in hyperspectral image classification (HSIC). Recently, the Mamba architecture has shown outstanding performance in 1D sequence modeling tasks owing to its lightweight linear sequence operations and efficient parallel scanning capabilities. Nevertheless, its application to HSI classification still faces challenges. Most existing Mamba-based approaches adopt various selective scanning strategies for HSI serialization, ensuring the adjacency of scanning sequences to enhance spatial continuity, but these strategies substantially increase computational overhead. To overcome these challenges, this study proposes the Hyperspectral Spatial Mamba (HyperSMamba) model for HSIC, aiming to reduce computational complexity while improving classification performance. The framework consists of the following key components: (1) a Multi-Scale Spatial Mamba (MS-Mamba) encoder, which refines the state-space model (SSM) computation by incorporating a Multi-Scale State Fusion Module (MSFM) after the state transition equations of the original SSM; this module aggregates adjacent state representations to reinforce spatial dependencies among local features; and (2) an Adaptive Fusion Attention Module (AFAttention), which dynamically fuses bidirectional Mamba outputs to optimize feature representation. Experiments on three HSI datasets show that HyperSMamba attains overall accuracies of 94.86%, 97.72%, and 97.38% on the Indian Pines, Pavia University, and Salinas datasets, respectively, while maintaining low computational complexity. These results confirm the model's effectiveness and its potential for practical HSIC applications.
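The MSFM in (1) aggregates adjacent state representations. One plausible PyTorch reading is a bank of depthwise 1D convolutions over the state sequence; the convolutional realization, kernel sizes, and residual connection are assumptions:

```python
import torch.nn as nn

class MultiScaleStateFusion(nn.Module):
    """Sketch of an MSFM: fuse each SSM state with its sequence neighbours
    at several window sizes (kernel sizes here are assumed)."""
    def __init__(self, dim, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(dim, dim, k, padding=k // 2, groups=dim, bias=False)
            for k in kernel_sizes
        )

    def forward(self, h):                    # h: (B, L, D) state sequence
        x = h.transpose(1, 2)                # (B, D, L) for Conv1d
        fused = sum(b(x) for b in self.branches) / len(self.branches)
        return (x + fused).transpose(1, 2)   # residual keeps original states
```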

20 pages, 2994 KiB  
Article
Segment Anything Model-Based Hyperspectral Image Classification for Small Samples
by Kaifeng Ma, Changxu Yao, Bing Liu, Qingfeng Hu, Shiming Li, Peipei He and Jing Han
Remote Sens. 2025, 17(8), 1349; https://doi.org/10.3390/rs17081349 - 10 Apr 2025
Cited by 2 | Viewed by 1001
Abstract
Hyperspectral image classification (HSIC) represents a significant area of research within the domain of remote sensing. Given the intricate nature of hyperspectral images and the substantial volume of data they generate, it is essential to introduce innovative methodologies that effectively address the data pre-processing challenges encountered in HSIC. In this paper, we draw inspiration from the Segment Anything Model (SAM), a large vision foundation model, and propose its application to HSIC, aiming to achieve significant advancements in this field. Initially, we constructed the SAM and labeled a limited number of samples as segmentation prompts for the model. We conducted HSIC experiments on three publicly available hyperspectral image datasets, Indian Pines (IP), Salinas (SA), and Pavia University (PU), selecting only five samples from each land-cover type. The classification results obtained from the SAM-based approach were compared with those of eight distinct machine learning, deep learning, and Transformer models. The findings indicate that the SAM requires only a limited number of samples to perform hyperspectral image classification effectively, achieving higher accuracy than the other models discussed in this paper. Building on this foundation, a voting strategy was implemented, leading to significant enhancements in the overall accuracy (OA) of HSIC across the three datasets: improvements of 66.76%, 74.66%, and 70.53%, respectively, culminating in final accuracies of 80.29%, 90.66%, and 86.51%. In this study, the SAM is utilized for unsupervised classification, thereby reducing the need for sample labeling while attaining effective classification outcomes.
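The abstract does not spell out the voting rule, so below is a minimal pixel-wise majority vote over several prediction maps (e.g., from repeated runs with different five-sample prompt sets); the function and its inputs are hypothetical:

```python
import numpy as np

def majority_vote(prediction_maps, num_classes):
    """Fuse per-pixel label maps from several runs by majority vote."""
    stacked = np.stack(prediction_maps, axis=0)         # (runs, H, W)
    votes = np.stack([(stacked == c).sum(axis=0)        # per-class vote counts
                      for c in range(num_classes)], axis=0)
    return votes.argmax(axis=0)                         # (H, W) fused labels
```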

33 pages, 8279 KiB  
Article
A Dense Pyramidal Residual Network with a Tandem Spectral–Spatial Attention Mechanism for Hyperspectral Image Classification
by Yunlan Guan, Zixuan Li and Nan Wang
Sensors 2025, 25(6), 1858; https://doi.org/10.3390/s25061858 - 17 Mar 2025
Viewed by 581
Abstract
In recent years, convolutional neural networks (CNNs) have become a potent tool for hyperspectral image classification (HSIC), where classification accuracy, computational cost, and generalization ability are the main concerns. In this study, a novel approach to hyperspectral image classification is proposed. A tandem spectral–spatial attention module (TAM) was designed to automatically select significant spectral and spatial features. At the same time, a dense pyramidal residual module (DPRM) with three residual units (RUs) was constructed, in which feature maps exhibit linear growth; a dense connection structure was employed between the RUs, and a TAM was embedded in each RU. Dilated convolution structures were used in the last two layers of the pyramid network, enhancing the network's perception of fine textures and improving information transfer efficiency. Tests on four public datasets, namely the Pavia University, Salinas, TeaFarm, and WHU-Hi-HongHu datasets, yielded classification accuracies of 99.60%, 99.95%, 99.81%, and 99.84%, respectively. Moreover, the method improved processing speed, especially on large datasets such as WHU-Hi-HongHu, where training took 53 s per epoch and testing took 1.28 s. Comparative experiments with five methods confirmed the correctness and high efficiency of our method.
(This article belongs to the Section Remote Sensors)
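A tandem arrangement of spectral (channel) attention followed by spatial attention can be sketched as below, in the style of CBAM-like modules; this is one plausible reading of the TAM, with the reduction ratio and kernel size assumed:

```python
import torch
import torch.nn as nn

class TandemAttention(nn.Module):
    """Sketch of a TAM: spectral attention, then spatial attention, in series."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.spectral = nn.Sequential(           # squeeze-and-excite style
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (B, C, H, W)
        x = x * self.spectral(x)                 # reweight spectral bands
        s = torch.cat([x.mean(1, keepdim=True),
                       x.max(1, keepdim=True).values], dim=1)
        return x * self.spatial(s)               # reweight spatial positions
```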

24 pages, 24497 KiB  
Article
An Adaptive Feature Enhanced Gaussian Weighted Network for Hyperspectral Image Classification
by Fei Zhu, Cuiping Shi, Liguo Wang and Haizhu Pan
Remote Sens. 2025, 17(5), 763; https://doi.org/10.3390/rs17050763 - 22 Feb 2025
Viewed by 618
Abstract
Recently, research on hyperspectral image classification (HSIC) methods has made significant progress. However, current models commonly focus only on primary features, overlooking the valuable information contained in secondary features that could enhance a model's learning capability. To address this issue, an adaptive feature enhanced Gaussian weighted network (AFGNet) is proposed in this paper. Firstly, an adaptive feature enhancement module (AFEM) was designed to evaluate the effectiveness of different features and enhance those that are more conducive to model learning. Secondly, a Gaussian weighted feature fusion module (GWF2) was constructed to effectively integrate local and global feature information. Finally, a multi-head collaborative attention (MHCA) mechanism was proposed; MHCA enhances the model's feature extraction capability for sequence data through direct interaction and global modeling. Extensive experiments were conducted on five challenging datasets, and the results demonstrate that the proposed method outperforms several state-of-the-art (SOTA) methods.
(This article belongs to the Special Issue Deep Learning for Spectral-Spatial Hyperspectral Image Classification)
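The abstract gives no formula for the Gaussian weighting in GWF2, so the sketch below shows one natural interpretation: blending local and global feature maps with a Gaussian weight that peaks at the patch centre. The blend rule, sigma, and square-patch assumption are all guesses:

```python
import torch

def gaussian_kernel2d(size, sigma):
    """(size, size) Gaussian weight map centred on the patch centre."""
    ax = torch.arange(size) - (size - 1) / 2.0
    yy, xx = torch.meshgrid(ax, ax, indexing="ij")
    return torch.exp(-(xx**2 + yy**2) / (2 * sigma**2))

def gaussian_weighted_fusion(local_feat, global_feat, sigma=2.0):
    """Hypothetical GWF2-style blend for square (B, C, H, H) feature maps:
    local features dominate near the centre, global ones near the edges."""
    h = local_feat.shape[-1]
    g = gaussian_kernel2d(h, sigma)
    g = (g / g.max()).view(1, 1, h, h).to(local_feat)  # peak weight 1 at centre
    return g * local_feat + (1 - g) * global_feat
```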

24 pages, 10895 KiB  
Article
Orthogonal Capsule Network with Meta-Reinforcement Learning for Small Sample Hyperspectral Image Classification
by Prince Yaw Owusu Amoako, Guo Cao, Boshan Shi, Di Yang and Benedict Boakye Acka
Remote Sens. 2025, 17(2), 215; https://doi.org/10.3390/rs17020215 - 9 Jan 2025
Cited by 4 | Viewed by 1200
Abstract
Most current hyperspectral image classification (HSIC) models require a large number of training samples, and their classification performance degrades when the sample size is small. To address this issue, we propose an innovative model that combines an orthogonal capsule network with meta-reinforcement learning (OCN-MRL) for small-sample HSIC. The OCN-MRL framework employs Meta-RL for feature selection and a CapsNet for classification with small data samples. The Meta-RL module, through clustering, augmentation, and multiview techniques, enables the model to adapt to new HSIC tasks with limited samples; learning a meta-policy with a Q-learner generalizes across different tasks to effectively select discriminative features from the hyperspectral data. Integrating orthogonality into the CapsNet reduces network complexity while preserving spatial hierarchies and relationships in the data with a 3D convolution layer, suitably capturing complex patterns. Experimental results on four rich Chinese hyperspectral datasets demonstrate that OCN-MRL achieves both higher classification accuracy and lower computational cost than existing CapsNet-based methods.

27 pages, 7948 KiB  
Article
SSUM: Spatial–Spectral Unified Mamba for Hyperspectral Image Classification
by Song Lu, Min Zhang, Yu Huo, Chenhao Wang, Jingwen Wang and Chenyu Gao
Remote Sens. 2024, 16(24), 4653; https://doi.org/10.3390/rs16244653 - 12 Dec 2024
Cited by 5 | Viewed by 1761
Abstract
How to effectively extract spectral and spatial information and apply it to hyperspectral image classification (HSIC) has been a hot research topic. In recent years, transformer-based HSIC models have attracted much interest due to their advantages in long-distance modeling of spatial and spectral features in hyperspectral images (HSIs). However, transformer-based methods suffer from high computational complexity, especially in HSIC tasks that require processing large amounts of data. In addition, the spatial variability inherent in HSIs limits improvements in HSIC performance. To handle these challenges, a novel Spectral–Spatial Unified Mamba (SSUM) model is proposed, which introduces the State Space Model (SSM) into HSIC tasks to reduce computational complexity and improve model performance. The SSUM model is composed of two branches, the Spectral Mamba branch and the Spatial Mamba branch, designed to extract HSI features from spectral and spatial perspectives, respectively. In the Spectral Mamba branch, a nearest-neighbor spectrum fusion (NSF) strategy is proposed to alleviate the interference caused by spatial variability (i.e., the same object having different spectra), and a novel sub-spectrum scanning (SS) mechanism scans along the sub-spectrum dimension to enhance the model's perception of subtle spectral details. In the Spatial Mamba branch, a Spatial Mamba (SM) module combines a 2D Selective Scan Module (SS2D) and Spatial Attention (SA) into a unified network to sufficiently extract the spatial features of HSIs. Finally, the classification results are derived by uniting the output features of the Spectral Mamba and Spatial Mamba branches, improving the comprehensive performance of HSIC. Ablation studies verify the effectiveness of the proposed NSF, SS, and SM, and comparison experiments on four public HSI datasets show the superiority of the proposed SSUM.
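A minimal sketch of a nearest-neighbor spectrum fusion step is given below, assuming NSF averages the centre pixel's spectrum with the k most spectrally similar pixels in its patch; k, the Euclidean metric, and plain averaging are all assumptions:

```python
import torch

def nn_spectrum_fusion(patch, k=4):
    """Sketch of NSF. patch: (H, W, B) hyperspectral patch with B bands;
    returns a fused (B,) spectrum for the centre pixel."""
    h, w, b = patch.shape
    flat = patch.reshape(-1, b)
    center = flat[(h // 2) * w + (w // 2)]
    dist = torch.norm(flat - center, dim=1)               # spectral similarity
    idx = torch.topk(dist, k + 1, largest=False).indices  # k nearest + self
    return flat[idx].mean(dim=0)
```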

28 pages, 24617 KiB  
Article
Noise-Disruption-Inspired Neural Architecture Search with Spatial–Spectral Attention for Hyperspectral Image Classification
by Aili Wang, Kang Zhang, Haibin Wu, Shiyu Dai, Yuji Iwahori and Xiaoyu Yu
Remote Sens. 2024, 16(17), 3123; https://doi.org/10.3390/rs16173123 - 24 Aug 2024
Cited by 2 | Viewed by 1528
Abstract
In view of the complexity and diversity of hyperspectral images (HSIs), the classification task has been a major challenge in the field of remote sensing image processing. Hyperspectral image classification (HSIC) methods based on neural architecture search (NAS) are an attractive frontier: they not only automatically search for the neural network architecture best suited to the characteristics of HSI data but also avoid the possible limitations of manually designed networks when dealing with new classification tasks. However, existing NAS-based HSIC methods have two limitations: (1) the search space lacks efficient convolution operators that can fully extract discriminative spatial–spectral features, and (2) NAS based on traditional differentiable architecture search (DARTS) suffers performance collapse caused by unfair competition. To overcome these limitations, we propose a neural architecture search method with receptive field spatial–spectral attention (RFSS-NAS), specifically designed to automatically search the optimal architecture for HSIC. Considering the model's core need to extract more discriminative spatial–spectral features, we designed a novel and efficient attention search space whose core component is the receptive field spatial–spectral attention convolution operator, which precisely focuses on the critical information in the image and thus greatly enhances the quality of feature extraction. Meanwhile, to solve the unfair competition issue in the traditional DARTS strategy, we introduce the Noisy-DARTS strategy, which ensures the fairness and efficiency of the search process and effectively avoids the risk of performance collapse. In addition, to further improve the model's robustness and its ability to recognize difficult-to-classify samples, we propose a fusion loss function that combines the advantages of the label smoothing loss and the polynomial expansion perspective loss: it smooths the label distribution to reduce the risk of overfitting while effectively handling difficult samples, improving overall classification accuracy. Experiments on three public datasets fully validate the superior performance of RFSS-NAS.
(This article belongs to the Special Issue Recent Advances in the Processing of Hyperspectral Images)
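A loss of this family can be sketched as label-smoothed cross-entropy plus a Poly-1-style polynomial term. This is an assumed form consistent with the description, not the paper's exact loss; the smoothing factor and epsilon are placeholders:

```python
import torch.nn.functional as F

def fusion_loss(logits, target, smoothing=0.1, eps=1.0):
    """Label-smoothed CE plus a polynomial penalty on low true-class
    confidence, which weights difficult samples more heavily."""
    ce = F.cross_entropy(logits, target, label_smoothing=smoothing)
    pt = F.softmax(logits, dim=-1).gather(1, target.unsqueeze(1)).squeeze(1)
    return ce + eps * (1.0 - pt).mean()
```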

23 pages, 7913 KiB  
Article
A Dual-Branch Fusion of a Graph Convolutional Network and a Convolutional Neural Network for Hyperspectral Image Classification
by Pan Yang and Xinxin Zhang
Sensors 2024, 24(14), 4760; https://doi.org/10.3390/s24144760 - 22 Jul 2024
Cited by 1 | Viewed by 1769
Abstract
Semi-supervised graph convolutional networks (SSGCNs) have proven effective in hyperspectral image classification (HSIC). However, limited training data and spectral uncertainty restrict classification performance, and the computational demands of a graph convolutional network (GCN) present challenges for real-time applications. To overcome these issues, a dual-branch fusion of a GCN and a convolutional neural network (DFGCN) is proposed for HSIC tasks. The GCN branch uses an adaptive multi-scale superpixel segmentation method to build fusion adjacency matrices at various scales, which improves graph convolution efficiency and node representations. Additionally, a spectral feature enhancement module (SFEM) enhances the transmission of crucial channel information between the two graph convolutions. Meanwhile, the CNN branch uses a convolutional network with an attention mechanism to focus on detailed features of local areas. By combining the multi-scale superpixel features from the GCN branch with the local pixel features from the CNN branch, the method leverages complementary features to fully learn rich spatial–spectral information. Our experimental results demonstrate that the proposed method outperforms existing advanced approaches in both classification efficiency and accuracy across three benchmark datasets.
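For the GCN branch, the adjacency matrix is typically built from superpixel similarity. The sketch below shows one common construction, an RBF affinity over superpixel mean spectra with symmetric normalization; the RBF form and sigma are assumptions rather than the paper's exact recipe:

```python
import numpy as np

def superpixel_adjacency(mean_spectra, sigma=1.0):
    """Normalized adjacency from superpixel mean spectra.
    mean_spectra: (n_superpixels, n_bands)."""
    d2 = ((mean_spectra[:, None, :] - mean_spectra[None, :, :]) ** 2).sum(-1)
    a = np.exp(-d2 / (2 * sigma**2))          # RBF affinity between nodes
    deg = a.sum(axis=1)
    return a / np.sqrt(np.outer(deg, deg))    # D^-1/2 A D^-1/2
```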

23 pages, 40689 KiB  
Article
Multiscale Feature Search-Based Graph Convolutional Network for Hyperspectral Image Classification
by Ke Wu, Yanting Zhan, Ying An and Suyi Li
Remote Sens. 2024, 16(13), 2328; https://doi.org/10.3390/rs16132328 - 26 Jun 2024
Cited by 7 | Viewed by 1753
Abstract
With the development of hyperspectral sensors, the availability of hyperspectral images (HSIs) has increased significantly, prompting advancements in deep learning-based hyperspectral image classification (HSIC) methods. Recently, graph convolutional networks (GCNs) have been proposed to process graph-structured data in non-Euclidean domains and have been used for HSIC. Superpixel segmentation must be performed first in GCN-based methods; however, it is difficult to manually select the optimal superpixel segmentation scales for extracting information useful for classification. To solve this problem, we constructed an HSIC model based on a multiscale feature search-based graph convolutional network (MFSGCN). Firstly, pixel-level features of HSIs are extracted sequentially using 3D asymmetric decomposition convolution and 2D convolution. Then, superpixel-level features at different scales are extracted using multilayer GCNs. Finally, the neural architecture search (NAS) method automatically assigns different weights to the superpixel features at different scales, yielding a more discriminative feature map for classification. Compared with other GCN-based networks, MFSGCN can automatically capture features and obtain higher classification accuracy. The proposed MFSGCN model was implemented on three commonly used HSI datasets and compared with several state-of-the-art methods; the results confirm that MFSGCN effectively improves accuracy.
(This article belongs to the Special Issue Deep Learning for the Analysis of Multi-/Hyperspectral Images II)
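The final fusion step, where NAS assigns weights to per-scale superpixel features, can be approximated with DARTS-style learnable softmax weights, as sketched below; reducing the search to plain architecture parameters is a simplifying assumption:

```python
import torch
import torch.nn as nn

class ScaleWeighting(nn.Module):
    """Sketch of NAS-assigned scale weights: a softmax over learnable
    architecture parameters blends per-scale superpixel features."""
    def __init__(self, num_scales):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(num_scales))

    def forward(self, scale_feats):           # list of (N, D) feature tensors
        w = torch.softmax(self.alpha, dim=0)
        return sum(wi * f for wi, f in zip(w, scale_feats))
```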

24 pages, 11948 KiB  
Article
Adaptive Learnable Spectral–Spatial Fusion Transformer for Hyperspectral Image Classification
by Minhui Wang, Yaxiu Sun, Jianhong Xiang, Rui Sun and Yu Zhong
Remote Sens. 2024, 16(11), 1912; https://doi.org/10.3390/rs16111912 - 26 May 2024
Cited by 4 | Viewed by 1801
Abstract
In hyperspectral image classification (HSIC), every pixel of the HSI is assigned to a land cover category. While convolutional neural network (CNN)-based methods for HSIC have significantly enhanced performance, they struggle to learn the relevance of deep semantic features and face escalating computational costs as network depth increases. In contrast, the transformer framework is adept at capturing the relevance of high-level semantic features, presenting an effective solution to the limitations of CNN-based approaches. This article introduces a novel adaptive learnable spectral–spatial fusion transformer (ALSST) to enhance HSI classification. The model incorporates a dual-branch adaptive spectral–spatial fusion gating mechanism (ASSF), which captures spectral–spatial fusion features effectively from images. The ASSF comprises two key components: the point depthwise attention module (PDWA) for spectral feature extraction and the asymmetric depthwise attention module (ADWA) for spatial feature extraction. The model obtains spectral–spatial fusion features efficiently by multiplying the outputs of these two branches. Furthermore, we integrate layer scale and DropKey into the traditional transformer encoder and multi-head self-attention (MHSA) to form a new transformer with layer scale and DropKey (LD-Former), which enhances data dynamics and mitigates performance degradation in deeper encoder layers. Experiments on four renowned datasets, Trento (TR), MUUFL (MU), Augsburg (AU), and the University of Pavia (UP), demonstrate that the ALSST model achieves optimal performance, surpassing several existing models with overall accuracies (OA) of 99.70%, 89.72%, 97.84%, and 99.78%, respectively.
(This article belongs to the Special Issue Recent Advances in Remote Sensing Image Processing Technology)
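DropKey, used in LD-Former, masks attention logits before the softmax rather than dropping attention weights after it. A minimal single-head sketch of that general mechanism follows (the drop ratio is a placeholder):

```python
import torch

def dropkey_attention(q, k, v, drop_ratio=0.1, training=True):
    """Scaled dot-product attention with DropKey-style logit masking."""
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    if training and drop_ratio > 0:
        mask = torch.rand_like(scores) < drop_ratio   # keys to drop
        scores = scores.masked_fill(mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v
```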

28 pages, 16525 KiB  
Article
DMAF-NET: Deep Multi-Scale Attention Fusion Network for Hyperspectral Image Classification with Limited Samples
by Hufeng Guo and Wenyi Liu
Sensors 2024, 24(10), 3153; https://doi.org/10.3390/s24103153 - 15 May 2024
Cited by 3 | Viewed by 1788
Abstract
In recent years, deep learning methods have achieved remarkable success in hyperspectral image classification (HSIC), and the utilization of convolutional neural networks (CNNs) has proven highly effective. However, several critical issues remain in the HSIC task, such as the lack of labeled training samples, which constrains the classification accuracy and generalization ability of CNNs. To address this problem, a deep multi-scale attention fusion network (DMAF-NET) is proposed in this paper. The network is based on multi-scale features and fully exploits deep sample features from multiple levels and different perspectives, with the aim of enhancing HSIC results using limited samples. The innovation of this article is mainly reflected in three aspects. Firstly, a novel baseline network for multi-scale feature extraction is designed with a pyramid structure and a densely connected 3D octave convolutional network, enabling the extraction of deep-level information from features at different granularities. Secondly, a multi-scale spatial–spectral attention module and a pyramidal multi-scale channel attention module are designed, allowing the modeling of comprehensive dependencies, coordinate and directional, local and global, across four dimensions. Finally, a multi-attention fusion module is designed to effectively combine feature mappings extracted from multiple branches. Extensive experiments on four popular datasets demonstrate that the proposed method achieves high classification accuracy even with fewer labeled samples.
(This article belongs to the Special Issue Remote Sensing Technology for Agricultural and Land Management)

21 pages, 2266 KiB  
Article
MEA-EFFormer: Multiscale Efficient Attention with Enhanced Feature Transformer for Hyperspectral Image Classification
by Qian Sun, Guangrui Zhao, Yu Fang, Chenrong Fang, Le Sun and Xingying Li
Remote Sens. 2024, 16(9), 1560; https://doi.org/10.3390/rs16091560 - 27 Apr 2024
Cited by 4 | Viewed by 2248
Abstract
Hyperspectral image classification (HSIC) has garnered increasing attention among researchers. While classical networks like convolutional neural networks (CNNs) have achieved satisfactory results with the advent of deep learning, they are confined to processing local information. Vision transformers, despite being effective at establishing long-distance dependencies, face challenges in extracting highly representative features from high-dimensional images. In this paper, we present the multiscale efficient attention with enhanced feature transformer (MEA-EFFormer), designed for efficient extraction of spectral–spatial features and effective classification. MEA-EFFormer employs a multiscale efficient attention feature extraction module to first extract 3D convolution features and applies effective channel attention to refine spectral information. Following this, 2D convolution features are extracted and integrated with local binary pattern (LBP) spatial information to augment their representation. The processed features are then fed into a spectral–spatial enhancement attention (SSEA) module that facilitates interactive enhancement of spectral–spatial information across the three dimensions. Finally, these features undergo classification through a transformer encoder. We evaluate MEA-EFFormer against several state-of-the-art methods on three datasets and demonstrate its outstanding HSIC performance.
(This article belongs to the Special Issue Deep Learning for the Analysis of Multi-/Hyperspectral Images II)
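LBP maps of the kind fused with the 2D convolution features can be computed with scikit-image, as sketched below; applying it per band (or to a PCA component) is an assumption about how the texture cue would be extracted:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_map(band_image, radius=1, n_points=8):
    """Uniform LBP texture map for a single band or PCA component."""
    lbp = local_binary_pattern(band_image, n_points, radius, method="uniform")
    return lbp.astype(np.float32)            # (H, W) spatial texture feature
```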

31 pages, 4589 KiB  
Article
Band Selection via Band Density Prominence Clustering for Hyperspectral Image Classification
by Chein-I Chang, Yi-Mei Kuo and Kenneth Yeonkong Ma
Remote Sens. 2024, 16(6), 942; https://doi.org/10.3390/rs16060942 - 7 Mar 2024
Cited by 2 | Viewed by 1461
Abstract
Band clustering has been widely used for hyperspectral band selection (BS). However, selecting an appropriate band to represent a band cluster is a key issue. Density peak clustering (DPC) provides an effective means for this purpose, referred to as DPC-based BS (DPC-BS); it uses two indicators, cluster density and cluster distance, to rank all bands for BS. This paper reinterprets cluster density and cluster distance as band local density (BLD) and band distance (BD), and also introduces a new concept called the band prominence value (BPV) as a third indicator. Combining BLD and BD with BPV yields new band prioritization criteria for BS, extending the current DPC-BS to a new method referred to as band density prominence clustering (BDPC). By taking advantage of the three key indicators of BDPC, i.e., the cut-off band distance b_c, the k-nearest-neighboring-band local density, and BPV, two versions of BDPC can be derived, called b_c-BDPC and k-BDPC, both of which differ from existing DPC-based BS methods in three respects. First, the parameter b_c of b_c-BDPC and the parameter k of k-BDPC can be determined automatically by the number of clusters and the virtual dimensionality (VD), respectively. Second, instead of Euclidean distance, a spectral discrimination measure is used to calculate BD as well as inter-band correlation. Most significantly, BPV is combined with BLD and BD to derive new band prioritization criteria for BS. Extensive experiments demonstrate that BDPC generally performs better than DPC-BS as well as many current state-of-the-art BS methods.
(This article belongs to the Special Issue Advances in Hyperspectral Data Exploitation II)
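The two classic DPC indicators the paper builds on, band local density (BLD) and band distance (BD), can be computed as below. This sketch uses Euclidean distance and a Gaussian-kernel density, whereas the paper substitutes a spectral discrimination measure, so treat it as illustrative only:

```python
import numpy as np
from scipy.spatial.distance import cdist

def dpc_band_indicators(bands, d_c):
    """BLD (rho) and BD (delta) for DPC-style band ranking.
    bands: (n_bands, n_pixels) flattened band images."""
    d = cdist(bands, bands)                          # pairwise band distances
    rho = np.exp(-(d / d_c) ** 2).sum(axis=1) - 1.0  # drop the self term
    delta = np.empty(len(bands))
    for i in range(len(bands)):
        denser = d[i, rho > rho[i]]                  # distances to denser bands
        delta[i] = denser.min() if denser.size else d[i].max()
    return rho, delta          # e.g., rank bands by rho * delta
```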
