Search Results (1,234)

Search Parameters:
Keywords = hyperspectral imaging (HSI)

23 pages, 13685 KB  
Article
CAT: Causal Attention with Linear Complexity for Efficient and Interpretable Hyperspectral Image Classification
by Ying Liu, Zhipeng Shen, Haojiao Yang, Waixi Liu and Xiaofei Yang
Remote Sens. 2026, 18(2), 358; https://doi.org/10.3390/rs18020358 - 21 Jan 2026
Abstract
Hyperspectral image (HSI) classification is pivotal in remote sensing, yet deep learning models, particularly Transformers, remain susceptible to spurious spectral–spatial correlations and suffer from limited interpretability. These issues stem from their inability to model the underlying causal structure in high-dimensional data. This paper introduces the Causal Attention Transformer (CAT), a novel architecture that integrates causal inference with a hierarchical CNN-Transformer backbone to address these limitations. CAT incorporates three key modules: (1) a Causal Attention Mechanism that enforces temporal and spatial causality via triangular masking and axial decomposition to eliminate spurious dependencies; (2) a Dual-Path Hierarchical Fusion module that adaptively integrates spectral and spatial causal features using learnable gating; and (3) a Linearized Causal Attention module that reduces the computational complexity from O(N²) to O(N) via kernelized cumulative summation, enabling scalable high-resolution HSI processing. Extensive experiments on three benchmark datasets (Indian Pines, Pavia University, Houston2013) demonstrate that CAT achieves state-of-the-art performance, outperforming leading CNN and Transformer models in both accuracy and robustness. Furthermore, CAT provides inherently interpretable spectral–spatial causal maps, offering valuable insights for reliable remote sensing analysis. Full article
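The kernelized cumulative-sum trick behind the Linearized Causal Attention module can be illustrated generically. The sketch below is not the authors' code: it assumes an elu+1 feature map (a common choice in linear-attention work) and verifies that the O(N) scan matches the triangular-masked O(N²) formulation with the same kernel.

```python
import numpy as np

def phi(x):
    # Positive kernel feature map elu(x) + 1 (an assumed, common choice).
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_causal_attention(Q, K, V):
    """Causal attention via cumulative sums: O(N) in sequence length.

    Q, K: (N, d); V: (N, dv). Position t attends only to s <= t.
    """
    Qf, Kf = phi(Q), phi(K)
    d, dv = Qf.shape[1], V.shape[1]
    S = np.zeros((d, dv))   # running sum of phi(k_s) v_s^T
    z = np.zeros(d)         # running sum of phi(k_s)
    out = np.empty((V.shape[0], dv))
    for t in range(V.shape[0]):
        S += np.outer(Kf[t], V[t])
        z += Kf[t]
        out[t] = Qf[t] @ S / (Qf[t] @ z + 1e-9)
    return out

def quadratic_causal_attention(Q, K, V):
    # Reference O(N^2) version: same kernel, explicit triangular mask.
    A = np.tril(phi(Q) @ phi(K).T)
    A = A / (A.sum(axis=1, keepdims=True) + 1e-9)
    return A @ V
```

The two functions compute the same output; the linear scan simply reuses the running sums instead of materializing the N×N attention matrix.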

22 pages, 1604 KB  
Article
Recursive Deep Feature Learning for Hyperspectral Image Super-Resolution
by Jiming Liu, Chen Yi and Hehuan Li
Appl. Sci. 2026, 16(2), 1060; https://doi.org/10.3390/app16021060 - 20 Jan 2026
Abstract
The advancement of hyperspectral image super-resolution (HSI-SR) has been significantly propelled by deep learning techniques. However, current methods predominantly rely on 2D or 3D convolutional networks, which are inherently local and thus limited in modeling long-range spectral–depth interactions. This work introduces a novel network architecture designed to address this gap through recursive deep feature learning. Our model initiates with 3D convolutions to extract preliminary spectral–spatial features, which are progressively refined via densely connected grouped convolutions. A core innovation is a recursively formulated generalized self-attention mechanism, which captures long-range dependencies across the spectral dimension with linear complexity. To reconstruct fine spatial details across multiple scales, a progressive upsampling strategy is further incorporated. Evaluations on several public benchmarks demonstrate that the proposed approach outperforms existing state-of-the-art methods in both quantitative metrics and visual quality. Full article
(This article belongs to the Special Issue Remote Sensing Image Processing and Application, 2nd Edition)
22 pages, 4087 KB  
Article
Wrapped Unsupervised Hyperspectral Band Selection via Reconstruction Error from Wasserstein Generative Adversarial Network
by Haoyang Yu, Hongna Zheng, Tao Yao, Yuling Zhang and Deyin Zhang
Remote Sens. 2026, 18(2), 326; https://doi.org/10.3390/rs18020326 - 18 Jan 2026
Abstract
Wrapped unsupervised band selection (WUBS) is a powerful means of reducing the dimensions of hyperspectral images (HSIs) and has drawn much attention recently. Nevertheless, numerous WUBS approaches struggle to strike a balance between computational complexity and performance and typically disregard high-level information between bands. This paper presents a new reconstruction error-based algorithm called distance density (DD) and Wasserstein generative adversarial network (WGAN)-driven WUBS (DW-WUBS), which is intended to overcome these problems. Specifically, DW-WUBS employs DD to weigh the spectral fluctuation in different band groups and thus determine the detailed expression of the importance of each group. At the same time, it uses a sequential search method on the important band group instead of the original HSIs, thereby reducing the computational complexity of band retrieval. Afterwards, DW-WUBS trains a WGAN and applies its critic network to test the representativeness of the searched bands by considering their contribution to HSI reconstruction. This automatically derives underlying and higher-level structure information of the spectrum. The superiority of DW-WUBS is confirmed by comprehensive experiments on three benchmark datasets. For instance, on the Pavia Center scene, the peak mean accuracy (MA) using the twelve bands chosen via DW-WUBS with the CART classifier exceeds the baseline (i.e., all bands) by 0.91% in the classification task. Full article
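The wrapper idea of scoring candidate bands by reconstruction error can be sketched without the WGAN machinery. In the toy version below, a least-squares linear decoder stands in for the paper's critic network, and a greedy sequential search adds whichever band most reduces the error; both simplifications are illustrative assumptions.

```python
import numpy as np

def reconstruction_error(X, subset):
    """MSE of reconstructing all bands from a band subset.

    X: (pixels, bands). A linear least-squares decoder is used here
    purely as a stand-in for the paper's WGAN critic.
    """
    Xs = X[:, subset]
    W, *_ = np.linalg.lstsq(Xs, X, rcond=None)
    return float(np.mean((Xs @ W - X) ** 2))

def greedy_band_selection(X, n_bands):
    """Sequential wrapper-style search over bands."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_bands):
        best = min(remaining, key=lambda b: reconstruction_error(X, selected + [b]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

On data whose bands are mixtures of a few latent signatures, a small selected subset should reconstruct the full cube almost perfectly.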
(This article belongs to the Section Remote Sensing Image Processing)

14 pages, 10595 KB  
Article
Light Sources in Hyperspectral Imaging Simultaneously Influence Object Detection Performance and Vase Life of Cut Roses
by Yong-Tae Kim, Ji Yeong Ham and Byung-Chun In
Plants 2026, 15(2), 215; https://doi.org/10.3390/plants15020215 - 9 Jan 2026
Abstract
Hyperspectral imaging (HSI) is a noncontact camera-based technique that enables deep learning models to learn various plant conditions by detecting light reflectance under illumination. In this study, we investigated the effects of four light sources—halogen (HAL), incandescent (INC), fluorescent (FLU), and light-emitting diodes (LED)—on the quality of spectral images and the vase life (VL) of cut roses, which are vulnerable to abiotic stresses. Cut roses ‘All For Love’ and ‘White Beauty’ were used to compare cultivar-specific visible reflectance characteristics associated with contrasting petal pigmentation. HSI was performed at four time points, yielding 640 images per light source from 40 cut roses. The results revealed that the light source strongly affected both the image quality (mAP@0.5 60–80%) and VL (0–3 d) of cut roses. The HAL lamp produced high-quality spectral images across wavelengths (WL) ranging from 480 to 900 nm and yielded the highest object detection performance (ODP), reaching mAP@0.5 of 85% in ‘All For Love’ and 83% in ‘White Beauty’ with the YOLOv11x models. However, it increased petal temperature by 2.7–3 °C, thereby stimulating leaf transpiration and consequently shortening the VL of the flowers by 1–2.5 d. In contrast, INC produced unclear images with low spectral signals throughout the WL and consequently resulted in lower ODP, with mAP@0.5 of 74% and 69% in ‘All For Love’ and ‘White Beauty’, respectively. The INC only slightly increased petal temperature (1.2–1.3 °C) and shortened the VL by 1 d in both cultivars. Although FLU and LED had only minor effects on petal temperature and VL, these illuminations generated transient spectral peaks in the WL range of 480–620 nm, resulting in decreased ODP (mAP@0.5 60–75%). Our results revealed that HAL provided reliable, high-quality spectral image data and high object detection accuracy, but simultaneously had negative effects on flower quality.
Our findings suggest an alternative two-phase approach for illumination applications that uses HAL during the initial exploration of spectra corresponding to specific symptoms of interest, followed by LED for routine plant monitoring. Optimizing illumination in HSI will improve the accuracy of deep learning-based prediction and thereby contribute to the development of an automated quality sorting system that is urgently required in the cut flower industry. Full article
(This article belongs to the Special Issue Application of Optical and Imaging Systems to Plants)

17 pages, 3642 KB  
Article
Spatiotemporal Analysis for Real-Time Non-Destructive Brix Estimation in Apples
by Ha-Na Kim, Myeong-Won Bae, Yong-Jin Cho and Dong-Hoon Lee
Agriculture 2026, 16(2), 172; https://doi.org/10.3390/agriculture16020172 - 9 Jan 2026
Abstract
Predicting internal quality parameters of apples, such as Brix and water content, is essential for quality control. Existing near-infrared (NIR) and hyperspectral imaging (HSI)-based techniques have limited applicability due to their dependence on equipment and environmental sensitivity. In this study, a transportable quality assessment system was proposed using spatiotemporal domain analysis with long-wave infrared (LWIR)-based thermal diffusion phenomics, enabling non-destructive prediction of the internal Brix of apples during transport. After cooling, the thermal gradient of the apple surface during the cooling-to-equilibrium interval was extracted. This gradient was used as an input variable for multiple linear regression, Ridge, and Lasso models, and the prediction performance was assessed. Overall, 492 specimens of 5 cultivars of apple (Hongro, Arisoo, Sinano Gold, Stored Fuji, and Fuji) were included in the experiment. The thermal diffusion response of each specimen was imaged at a sampling frequency of 8.9 Hz using LWIR-based thermal imaging, and the temperature changes over time were compared. In cross-validation of the integrated model for all cultivars, the coefficient of determination (R²cv) was 0.80, and the RMSEcv was 0.86 °Brix, demonstrating stable prediction accuracy within ±1 °Brix. In terms of cultivar, Arisoo (Cultivar 2) and Fuji (Cultivar 5) showed high prediction reliability (R²cv = 0.74–0.77), while Hongro (Cultivar 1) and Stored Fuji (Cultivar 4) showed relatively weak correlations. This is thought to be due to differences in thermal diffusion characteristics between cultivars, depending on their tissue density and water content. The LWIR-based thermal diffusion analysis presented in this study is less sensitive to changes in reflectance and illuminance compared to conventional NIR and visible light spectrophotometry, as it enables real-time measurements during transport without requiring a separate light source.
Surface heat distribution phenomics due to external heat sources serves as an index that proximally reflects changes in the internal Brix of apples. Later, this could be developed into a reliable commercial screening system to obtain extensive data accounting for diversity between cultivars and to elucidate the effects of interference using external environmental factors. Full article
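The regression step (thermal-gradient descriptors in, Brix out) reduces to standard ridge regression; a minimal closed-form sketch, assuming an appended, unpenalized intercept (the feature set and hyperparameters here are illustrative, not the paper's):

```python
import numpy as np

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge: w = (X^T X + alpha*I)^{-1} X^T y.

    X: (samples, features) thermal-gradient descriptors; y: Brix values.
    A bias column is appended and left unpenalized.
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    I = np.eye(Xb.shape[1])
    I[-1, -1] = 0.0  # do not shrink the intercept
    return np.linalg.solve(Xb.T @ Xb + alpha * I, Xb.T @ y)

def ridge_predict(X, w):
    return np.hstack([X, np.ones((X.shape[0], 1))]) @ w
```

With a small `alpha` this recovers ordinary multiple linear regression; larger values trade bias for variance, which matters when gradient features are correlated across cultivars.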

29 pages, 79553 KB  
Article
A2Former: An Airborne Hyperspectral Crop Classification Framework Based on a Fully Attention-Based Mechanism
by Anqi Kang, Hua Li, Guanghao Luo, Jingyu Li and Zhangcai Yin
Remote Sens. 2026, 18(2), 220; https://doi.org/10.3390/rs18020220 - 9 Jan 2026
Abstract
Crop classification of farmland is of great significance for crop monitoring and yield estimation. Airborne hyperspectral systems can provide large-format hyperspectral farmland images. However, traditional machine learning-based classification methods rely heavily on handcrafted feature design, resulting in limited representation capability and poor computational efficiency when processing large-format data. Meanwhile, mainstream deep-learning-based hyperspectral image (HSI) classification methods primarily rely on patch-based input methods, where a label is assigned to each patch, limiting the full utilization of hyperspectral datasets in agricultural applications. In contrast, this paper focuses on the semantic segmentation task in the field of computer vision and proposes a novel HSI crop classification framework named All-Attention Transformer (A2Former), which combines CNN and Transformer based on a fully attention-based mechanism. First, a CNN-based encoder consisting of two blocks, an overlap-downsample block and a spectral–spatial attention weights block (SSWB), is constructed to extract multi-scale spectral–spatial features effectively. Second, we propose a lightweight C-VIT block to enhance high-dimensional features while reducing parameter count and computational cost. Third, a Transformer-based decoder block with gated-style weighted fusion and interaction attention (WIAB), along with a fused segmentation head (FH), is developed to precisely model global and local features and align semantic information across multi-scale features, thereby enabling accurate segmentation. Finally, a checkerboard-style sampling strategy is proposed to avoid information leakage and ensure the objectivity and accuracy of model performance evaluation. Experimental results on two public HSI datasets demonstrate the accuracy and efficiency of the proposed A2Former framework, which outperforms several well-known patch-free and patch-based methods.
Full article
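A checkerboard-style sampling strategy of the kind described can be sketched as a block-parity mask; the block size and the parity rule below are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def checkerboard_split(height, width, block=64):
    """Assign whole blocks of pixels to train/test by checkerboard parity.

    Returns a boolean (H, W) mask: True = train, False = test. Because
    assignment is per block rather than per pixel, train and test samples
    are spatially separated, reducing leakage between neighboring patches.
    """
    rows = (np.arange(height) // block)[:, None]
    cols = (np.arange(width) // block)[None, :]
    return (rows + cols) % 2 == 0
```

Patches drawn near block borders can still straddle the boundary, so in practice one would also drop or buffer pixels within half a patch of the split edge.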
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

31 pages, 10745 KB  
Article
CNN-GCN Coordinated Multimodal Frequency Network for Hyperspectral Image and LiDAR Classification
by Haibin Wu, Haoran Lv, Aili Wang, Siqi Yan, Gabor Molnar, Liang Yu and Minhui Wang
Remote Sens. 2026, 18(2), 216; https://doi.org/10.3390/rs18020216 - 9 Jan 2026
Abstract
The existing multimodal image classification methods often suffer from several key limitations: difficulty in effectively balancing local detail and global topological relationships in hyperspectral image (HSI) feature extraction; insufficient multi-scale characterization of terrain features from light detection and ranging (LiDAR) elevation data; and neglect of deep inter-modal interactions in traditional fusion methods, often accompanied by high computational complexity. To address these issues, this paper proposes a comprehensive deep learning framework combining convolutional neural network (CNN), a graph convolutional network (GCN), and wavelet transform for the joint classification of HSI and LiDAR data, including several novel components: a Spectral Graph Mixer Block (SGMB), where a CNN branch captures fine-grained spectral–spatial features by multi-scale convolutions, while a parallel GCN branch models long-range contextual features through an enhanced gated graph network. This dual-path design enables simultaneous extraction of local detail and global topological features from HSI data; a Spatial Coordinate Block (SCB) to enhance spatial awareness and improve the perception of object contours and distribution patterns; a Multi-Scale Elevation Feature Extraction Block (MSFE) for capturing terrain representations across varying scales; and a Bidirectional Frequency Attention Encoder (BiFAE) to enable efficient and deep interaction between multimodal features. These modules are intricately designed to work in concert, forming a cohesive end-to-end framework, which not only achieves a more effective balance between local details and global contexts but also enables deep yet computationally efficient interaction across features, significantly strengthening the discriminability and robustness of the learned representation. To evaluate the proposed method, we conducted experiments on three multimodal remote sensing datasets: Houston2013, Augsburg, and Trento. 
Quantitative results demonstrate that our framework outperforms state-of-the-art methods, achieving OA values of 98.93%, 88.05%, and 99.59% on the respective datasets. Full article
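The wavelet side of such a pipeline can be illustrated with a one-level 2D Haar decomposition into low- and high-frequency sub-bands; this generic transform is an assumption standing in for whichever wavelet the authors actually use:

```python
import numpy as np

def haar2d(x):
    """One-level 2D Haar transform of an even-sized 2D array.

    Returns (LL, LH, HL, HH): LL is the low-frequency approximation,
    the others carry horizontal/vertical/diagonal detail.
    """
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d (perfect reconstruction)."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x
```

Perfect reconstruction means the frequency split loses no information, so the low- and high-frequency branches can be processed separately and fused later.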
(This article belongs to the Section AI Remote Sensing)

17 pages, 20645 KB  
Data Descriptor
Multimodal MRI–HSI Synthetic Brain Tissue Dataset Based on Agar Phantoms
by Manuel Villa, Jaime Sancho, Gonzalo Rosa-Olmeda, Aure Enkaoua, Sara Moccia and Eduardo Juarez
Data 2026, 11(1), 12; https://doi.org/10.3390/data11010012 - 8 Jan 2026
Abstract
Magnetic resonance imaging (MRI) and hyperspectral imaging (HSI) provide complementary information for image-guided neurosurgery, combining high-resolution anatomical detail with tissue-specific optical characterization. This work presents a novel multimodal phantom dataset specifically designed for MRI–HSI integration. The phantoms reproduce a three-layer tissue structure comprising white matter, gray matter, tumor, and superficial blood vessels, using agar-based compositions that mimic MRI contrasts of the rat brain while providing consistent hyperspectral signatures. The dataset includes two designs of phantoms with MRI, HSI, RGB-D, and tracking acquisitions, along with pixel-wise labels and corresponding 3D models, comprising 13 phantoms in total. The dataset facilitates the evaluation of registration, segmentation, and classification algorithms, as well as depth estimation, multimodal fusion, and tracking-to-camera calibration procedures. By providing reproducible, labeled multimodal data, these phantoms reduce the need for animal experiments in preclinical imaging research and serve as a versatile benchmark for MRI–HSI integration and other multimodal imaging studies. Full article

28 pages, 11618 KB  
Article
Cascaded Multi-Attention Feature Recurrent Enhancement Network for Spectral Super-Resolution Reconstruction
by He Jin, Jinhui Lan, Zhixuan Zhuang and Yiliang Zeng
Remote Sens. 2026, 18(2), 202; https://doi.org/10.3390/rs18020202 - 8 Jan 2026
Abstract
Hyperspectral imaging (HSI) captures the same scene across multiple spectral bands, providing richer spectral characteristics of materials than conventional RGB images. The spectral reconstruction task seeks to map RGB images into hyperspectral images, enabling high-quality HSI data acquisition without additional hardware investment. Traditional methods based on linear models or sparse representations struggle to effectively model the nonlinear characteristics of hyperspectral data. Although deep learning approaches have made significant progress, issues such as detail loss and insufficient modeling of spatial–spectral relationships persist. To address these challenges, this paper proposes the Cascaded Multi-Attention Feature Recurrent Enhancement Network (CMFREN). This method achieves targeted breakthroughs over existing approaches through a cascaded architecture of feature purification, spectral balancing and progressive enhancement. This network comprises two core modules: (1) the Hierarchical Residual Attention (HRA) module, which suppresses artifacts in illumination transition regions through residual connections and multi-scale contextual feature fusion, and (2) the Cascaded Multi-Attention (CMA) module, which incorporates a Spatial–Spectral Balanced Feature Extraction (SSBFE) module and a Spectral Enhancement Module (SEM). The SSBFE combines Multi-Scale Residual Feature Enhancement (MSRFE) with Spectral-wise Multi-head Self-Attention (S-MSA) to achieve dynamic optimization of spatial–spectral features, while the SEM synergistically utilizes attention and convolution to progressively enhance spectral details and mitigate spectral aliasing in low-resolution scenes. Experiments across multiple public datasets demonstrate that CMFREN achieves state-of-the-art (SOTA) performance on metrics including RMSE, PSNR, SAM, and MRAE, validating its superiority under complex illumination conditions and detail-degraded scenarios. Full article
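The metrics named here (RMSE, PSNR, SAM, MRAE) have standard definitions that are easy to compute directly. The implementations below follow the conventions common in spectral-reconstruction benchmarks and may differ in detail (peak value, epsilon handling) from the paper's evaluation code:

```python
import numpy as np

def rmse(ref, rec):
    return float(np.sqrt(np.mean((ref - rec) ** 2)))

def psnr(ref, rec, peak=1.0):
    # Peak signal-to-noise ratio in dB, for data scaled to [0, peak].
    return float(20 * np.log10(peak) - 10 * np.log10(np.mean((ref - rec) ** 2)))

def mrae(ref, rec, eps=1e-9):
    # Mean relative absolute error.
    return float(np.mean(np.abs(ref - rec) / (np.abs(ref) + eps)))

def sam(ref, rec, eps=1e-9):
    """Mean spectral angle (radians); ref, rec: (pixels, bands)."""
    num = np.sum(ref * rec, axis=1)
    den = np.linalg.norm(ref, axis=1) * np.linalg.norm(rec, axis=1) + eps
    return float(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0))))
```

RMSE/PSNR/MRAE measure per-pixel intensity error, while SAM is scale-invariant and captures spectral shape, which is why the four are usually reported together.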

23 pages, 10516 KB  
Article
SSGTN: Spectral–Spatial Graph Transformer Network for Hyperspectral Image Classification
by Haotian Shi, Zihang Luo, Yiyang Ma, Guanquan Zhu and Xin Dai
Remote Sens. 2026, 18(2), 199; https://doi.org/10.3390/rs18020199 - 7 Jan 2026
Abstract
Hyperspectral image (HSI) classification is fundamental to a wide range of remote sensing applications, such as precision agriculture, environmental monitoring, and urban planning, because HSIs provide rich spectral signatures that enable the discrimination of subtle material differences. Deep learning approaches, including Convolutional Neural Networks (CNNs), Graph Convolutional Networks (GCNs), and Transformers, have achieved strong performance in learning spatial–spectral representations. However, these models often face difficulties in jointly modeling long-range dependencies, fine-grained local structures, and non-Euclidean spatial relationships, particularly when labeled training data are scarce. This paper proposes a Spectral–Spatial Graph Transformer Network (SSGTN), a dual-branch architecture that integrates superpixel-based graph modeling with Transformer-based global reasoning. SSGTN consists of four key components, namely (1) an LDA-SLIC superpixel graph construction module that preserves discriminative spectral–spatial structures while reducing computational complexity, (2) a lightweight spectral denoising module based on 1×1 convolutions and batch normalization to suppress redundant and noisy bands, (3) a Spectral–Spatial Shift Module (SSSM) that enables efficient multi-scale feature fusion through channel-wise and spatial-wise shift operations, and (4) a dual-branch GCN-Transformer block that jointly models local graph topology and global spectral–spatial dependencies. Extensive experiments on three public HSI datasets (Indian Pines, WHU-Hi-LongKou, and Houston2018) under limited supervision (1% training samples) demonstrate that SSGTN consistently outperforms state-of-the-art CNN-, Transformer-, Mamba-, and GCN-based methods in overall accuracy, Average Accuracy, and the κ coefficient. 
The proposed framework provides an effective baseline for HSI classification under limited supervision and highlights the benefits of integrating graph-based structural priors with global contextual modeling. Full article
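The graph-convolution building block over superpixel nodes typically follows the standard symmetrically normalized propagation rule; a minimal single-layer sketch (the weight matrix is random/illustrative, not the trained SSGTN):

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN layer: relu(D^{-1/2} (A+I) D^{-1/2} X W).

    A: (n, n) adjacency over superpixel nodes, X: (n, f) node features,
    W: (f, f_out) weights (learned in practice; arbitrary here).
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)
```

Each application mixes a node's features with its graph neighbors', which is how superpixel topology enters the dual-branch model alongside the Transformer's global attention.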

29 pages, 3983 KB  
Review
A Dive into Generative Adversarial Networks in the World of Hyperspectral Imaging: A Survey of the State of the Art
by Pallavi Ranjan, Ankur Nandal, Saurabh Agarwal and Rajeev Kumar
Remote Sens. 2026, 18(2), 196; https://doi.org/10.3390/rs18020196 - 6 Jan 2026
Abstract
Hyperspectral imaging (HSI) captures rich spectral information across a wide range of wavelengths, enabling advanced applications in remote sensing, environmental monitoring, medical diagnosis, and related domains. However, the high dimensionality, spectral variability, and inherent noise of HSI data present significant challenges for efficient processing and reliable analysis. In recent years, Generative Adversarial Networks (GANs) have emerged as transformative deep learning paradigms, demonstrating strong capabilities in data generation, augmentation, feature learning, and representation modeling. Consequently, the integration of GANs into HSI analysis has gained substantial research attention, resulting in a diverse range of architectures tailored to HSI-specific tasks. Despite these advances, existing survey studies often focus on isolated problems or individual application domains, limiting a comprehensive understanding of the broader GAN–HSI landscape. To address this gap, this paper presents a comprehensive review of GAN-based hyperspectral imaging research. The review systematically examines the evolution of GAN–HSI integration, categorizes representative GAN architectures, analyzes domain-specific applications, and discusses commonly adopted hyperparameter tuning strategies. Furthermore, key research challenges and open issues are identified, and promising future research directions are outlined. This synergy addresses critical hyperspectral data analysis challenges while unlocking transformative innovations across multiple sectors. Full article

41 pages, 25791 KB  
Article
TGDHTL: Hyperspectral Image Classification via Transformer–Graph Convolutional Network–Diffusion with Hybrid Domain Adaptation
by Zarrin Mahdavipour, Nashwan Alromema, Abdolraheem Khader, Ghulam Farooque, Ali Ahmed and Mohamed A. Damos
Remote Sens. 2026, 18(2), 189; https://doi.org/10.3390/rs18020189 - 6 Jan 2026
Abstract
Hyperspectral image (HSI) classification is pivotal for remote sensing applications, including environmental monitoring, precision agriculture, and urban land-use analysis. However, its accuracy is often limited by scarce labeled data, class imbalance, and domain discrepancies between standard RGB and HSI imagery. Although recent deep learning approaches, such as 3D convolutional neural networks (3D-CNNs), transformers, and generative adversarial networks (GANs), show promise, they struggle with spectral fidelity, computational efficiency, and cross-domain adaptation in label-scarce scenarios. To address these challenges, we propose the Transformer–Graph Convolutional Network–Diffusion with Hybrid Domain Adaptation (TGDHTL) framework. This framework integrates domain-adaptive alignment of RGB and HSI data, efficient synthetic data generation, and multi-scale spectral–spatial modeling. Specifically, a lightweight transformer, guided by Maximum Mean Discrepancy (MMD) loss, aligns feature distributions across domains. A class-conditional diffusion model generates high-quality samples for underrepresented classes in only 15 inference steps, reducing labeled data needs by approximately 25% and computational costs by up to 80% compared to traditional 1000-step diffusion models. Additionally, a Multi-Scale Stripe Attention (MSSA) mechanism, combined with a Graph Convolutional Network (GCN), enhances pixel-level spatial coherence. Evaluated on six benchmark datasets including HJ-1A and WHU-OHS, TGDHTL consistently achieves high overall accuracy (e.g., 97.89% on University of Pavia) with just 11.9 GFLOPs, surpassing state-of-the-art methods. This framework provides a scalable, data-efficient solution for HSI classification under domain shifts and resource constraints. Full article
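The MMD loss used here for domain alignment has a simple sample-based estimator. Below is a generic biased estimator with a Gaussian kernel; the kernel choice and bandwidth are assumptions, not details from the paper:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel matrix between rows of X and Y.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    """Biased squared Maximum Mean Discrepancy between two samples.

    X: (n, d) source-domain features, Y: (m, d) target-domain features.
    Zero when the samples coincide; grows as the distributions diverge.
    """
    return float(gaussian_kernel(X, X, sigma).mean()
                 + gaussian_kernel(Y, Y, sigma).mean()
                 - 2 * gaussian_kernel(X, Y, sigma).mean())
```

Minimizing this quantity over the transformer's feature extractor pulls RGB and HSI feature distributions together, which is the alignment mechanism the abstract describes.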
(This article belongs to the Section Remote Sensing Image Processing)

29 pages, 7184 KB  
Article
Double-Gated Mamba Multi-Scale Adaptive Feature Learning Network for Unsupervised Single RGB Image Hyperspectral Image Reconstruction
by Zhongmin Jiang, Zhen Wang, Wenju Wang and Jifan Zhu
J. Imaging 2026, 12(1), 19; https://doi.org/10.3390/jimaging12010019 - 31 Dec 2025
Abstract
Existing methods for reconstructing hyperspectral images from single RGB images are limited by the difficulty of obtaining large numbers of labeled RGB-HSI image pairs. These methods face issues such as detail loss, insufficient robustness, low reconstruction accuracy, and the difficulty of balancing the spatial–spectral trade-off. To address these challenges, a Double-Gated Mamba Multi-Scale Adaptive Feature (DMMAF) learning network model is proposed. DMMAF designs a reflection dot-product adaptive dual-noise-aware feature extraction method, which is used to supplement edge detail information in spectral images and improve robustness. DMMAF also constructs a deformable attention-based global feature extraction method and a double-gated Mamba local feature extraction approach, enhancing the interaction between local and global information during the reconstruction process, thereby improving image accuracy. Meanwhile, DMMAF introduces a structure-aware smooth loss function, which, by combining smoothing, curvature, and attention supervision losses, effectively resolves the spatial–spectral resolution balance problem. Experiments on the NTIRE 2020, Harvard, and CAVE datasets demonstrate that this model achieves state-of-the-art unsupervised reconstruction performance compared to existing advanced algorithms. On the NTIRE 2020 dataset, our method attains MRAE, RMSE, and PSNR values of 0.133, 0.040, and 31.314, respectively. On the Harvard dataset, it achieves RMSE and PSNR values of 0.025 and 34.955, respectively, while on the CAVE dataset, it achieves RMSE and PSNR values of 0.041 and 30.983, respectively. Full article
(This article belongs to the Special Issue Multispectral and Hyperspectral Imaging: Progress and Challenges)
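As a rough illustration of how smoothing and curvature penalties of the kind this abstract describes are typically combined, the sketch below computes first- and second-order differences along the spectral axis of a reconstructed HSI cube. The function name, the weights, and the omission of the attention supervision term are all assumptions for illustration; this is not the paper's actual DMMAF loss.

```python
import numpy as np

def smooth_curvature_loss(cube, w_smooth=1.0, w_curv=0.1):
    """Toy smoothness + curvature penalty on an HSI cube of shape
    (height, width, bands). Weights are illustrative placeholders."""
    # First-order differences along the spectral axis (smoothing term)
    d1 = np.diff(cube, n=1, axis=2)
    smooth = np.mean(np.abs(d1))
    # Second-order differences along the spectral axis (curvature term)
    d2 = np.diff(cube, n=2, axis=2)
    curv = np.mean(np.abs(d2))
    return w_smooth * smooth + w_curv * curv
```

A cube whose spectra vary linearly across bands incurs no curvature penalty, while abrupt band-to-band jumps are charged by both terms.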

23 pages, 36341 KB  
Article
Global–Local Mamba-Based Dual-Modality Fusion for Hyperspectral and LiDAR Data Classification
by Khanzada Muzammil Hussain, Keyun Zhao, Sachal Pervaiz and Ying Li
Remote Sens. 2026, 18(1), 138; https://doi.org/10.3390/rs18010138 - 31 Dec 2025
Abstract
Hyperspectral image (HSI) and light detection and ranging (LiDAR) data offer complementary spectral and structural information; however, the integration of these high-dimensional, heterogeneous modalities poses significant challenges. We propose a Global–Local Mamba dual-modality fusion framework (GL-Mamba) for HSI–LiDAR classification. Each sensor’s input is decomposed into low- and high-frequency sub-bands: lightweight 3D/2D CNNs process low-frequency spectral–spatial structures, while compact transformers handle high-frequency details. The outputs are aggregated using a global–local Mamba block, a state-space sequence model that retains local context while capturing long-range dependencies with linear complexity. A cross-attention module aligns spectral and elevation features, yielding a lightweight, efficient architecture that preserves both fine textures and coarse structures. Experiments on the Trento, Augsburg, and Houston2013 datasets show that GL-Mamba outperforms eight leading baselines in accuracy and kappa coefficient, while maintaining high inference speed due to its dual-frequency design. These results highlight the practicality and accuracy of our model for multimodal remote-sensing applications. Full article
(This article belongs to the Section AI Remote Sensing)
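The low-/high-frequency decomposition this abstract describes, where smooth structures and fine detail are routed to separate branches, can be sketched with a minimal low-pass split. A box blur is used here purely for illustration; the paper's actual sub-band filter is not specified in the abstract, and the function name is an assumption.

```python
import numpy as np

def frequency_split(band, k=5):
    """Split one 2-D band into a low-frequency part (k x k box blur,
    edge-padded) and a high-frequency residual, so that low + high
    reconstructs the input exactly."""
    pad = k // 2
    padded = np.pad(band, pad, mode="edge")
    low = np.empty_like(band, dtype=float)
    for i in range(band.shape[0]):
        for j in range(band.shape[1]):
            # Mean over the k x k neighborhood = simple low-pass filter
            low[i, j] = padded[i:i + k, j:j + k].mean()
    high = band - low  # residual carries edges and fine texture
    return low, high
```

Because the high-frequency part is defined as the residual, the two branches together lose no information, which is what allows the fused output to preserve both coarse structures and fine textures.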

25 pages, 8239 KB  
Article
Weighted Total Variation for Hyperspectral Image Denoising Based on Hyper-Laplacian Scale Mixture Distribution
by Xiaoyu Yu, Jianli Zhao, Sheng Fang, Tianheng Zhang, Liang Li and Xinyue Huang
Remote Sens. 2026, 18(1), 135; https://doi.org/10.3390/rs18010135 - 31 Dec 2025
Abstract
Conventional total variation (TV) regularization methods based on Laplacian or fixed-scale Hyper-Laplacian priors impose uniform sparsity penalties on gradients. These uniform penalties fail to capture the heterogeneous sparsity characteristics across different regions and directions, often leading to the over-smoothing of edges and loss of fine details. To address this limitation, we propose a novel regularization, Hyper-Laplacian Adaptive Weighted Total Variation (HLAWTV). The proposed regularization employs a proportional mixture of Hyper-Laplacian distributions to dynamically adapt the sparsity decay rate based on image structure. Simultaneously, the adaptive weights are adjusted based on local gradient statistics and exhibit strong robustness in texture preservation across different datasets and noise levels. We then propose a hyperspectral image (HSI) denoising method based on the HLAWTV regularizer. Extensive experiments on both synthetic and real hyperspectral datasets demonstrate that our denoising method consistently outperforms state-of-the-art methods in quantitative metrics and visual quality. Moreover, incorporating our adaptive weighting mechanism into existing TV-based models yields significant performance gains, confirming the generality and robustness of the proposed approach. Full article
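The general idea of an adaptively weighted, hyper-Laplacian-style TV term can be sketched as follows: an exponent p < 1 gives the non-uniform sparsity penalty, and per-pixel weights derived from local gradients shrink at strong edges so they are penalized less. The weighting rule below is a simple illustrative choice, not the HLAWTV scheme itself, and the function name is an assumption.

```python
import numpy as np

def weighted_tv(band, p=0.8, eps=1e-3):
    """Toy adaptively weighted TV term for a 2-D band. The exponent
    p < 1 mimics a hyper-Laplacian sparsity prior; the weights are
    the inverse of local gradient magnitude, so edges (large
    gradients) receive smaller penalties and are better preserved."""
    gx = np.diff(band, axis=1)      # horizontal first differences
    gy = np.diff(band, axis=0)      # vertical first differences
    wx = 1.0 / (np.abs(gx) + eps)   # down-weight strong horizontal edges
    wy = 1.0 / (np.abs(gy) + eps)   # down-weight strong vertical edges
    return np.sum(wx * np.abs(gx) ** p) + np.sum(wy * np.abs(gy) ** p)
```

A constant image has zero penalty, while any intensity step contributes a positive but edge-attenuated cost, which is the behavior that counteracts the over-smoothing the abstract describes.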
