Article

Reconstructing Hyperspectral Images from RGB Images by Multi-Scale Spectral–Spatial Sequence Learning

1 Hubei Provincial Key Laboratory of Green Intelligent Computing Power Network, Hubei University of Technology, Wuhan 430068, China
2 School of Computer Science, Hubei University of Technology, Wuhan 430068, China
* Author to whom correspondence should be addressed.
Entropy 2025, 27(9), 959; https://doi.org/10.3390/e27090959
Submission received: 4 August 2025 / Revised: 11 September 2025 / Accepted: 13 September 2025 / Published: 15 September 2025

Abstract

With rapid advancements in transformers, the reconstruction of hyperspectral images from RGB images, also known as spectral super-resolution (SSR), has made significant breakthroughs. However, existing transformer-based methods often struggle to balance computational efficiency with long-range receptive fields. Recently, Mamba has demonstrated linear complexity in modeling long-range dependencies and shown broad applicability in vision tasks. This paper proposes a multi-scale spectral–spatial sequence learning method, named MSS-Mamba, for reconstructing hyperspectral images from RGB images. First, we introduce a continuous spectral–spatial scan (CS3) mechanism to improve cross-dimensional feature extraction of the foundational Mamba model. Second, we propose a sequence tokenization strategy that generates multi-scale-aware sequences to overcome Mamba’s limitations in hierarchically learning multi-scale information. Specifically, we design the multi-scale information fusion (MIF) module, which tokenizes input sequences before feeding them into Mamba. The MIF employs a dual-branch architecture to process global and local information separately, dynamically fusing features through an adaptive router that generates weighting coefficients. This produces feature maps that contain both global contextual information and local details, ultimately reconstructing a high-fidelity hyperspectral image. Experimental results on the ARAD_1k, CAVE and grss_dfc_2018 datasets demonstrate the superior performance of MSS-Mamba.

1. Introduction

Hyperspectral imaging technology is capable of capturing the distribution of multiple continuous spectral bands that reflect rich physical and chemical information [1], which is different from the traditional RGB imaging mode. Therefore, hyperspectral images (HSIs) have been widely applied in many fields such as large-area ground object classification [2] and urban construction [3]. Due to the limitations of current imaging technology, HSI still faces challenges such as the high time cost of the imaging process [4].
Reconstructing hyperspectral images from RGB images, also known as spectral super-resolution (SSR), provides a low-cost solution to the aforementioned challenge. The goal of SSR is to enhance the spectral resolution of RGB images to match the fineness of HSI. However, recovering multiple channels from the three RGB channels is an ill-posed problem [5]. Based on the theory of sparse coding, early works primarily focused on extracting prior knowledge from a large number of HSI training samples. This prior knowledge is utilized to generate complete sparse hyperspectral dictionaries [6,7,8], which are then used to reconstruct HSI. However, manual priors are unable to handle tasks in complex environments.
Recently, convolutional neural networks (CNNs) have served as the core framework for most deep learning-based methods. These methods achieve the reconstruction of HSI by learning the nonlinear mapping relationship between RGB images or multispectral images and HSI. However, to ensure computational efficiency, the convolutional kernels in CNNs are usually set to be small. This limits the receptive field of the CNN, thereby affecting its ability to capture global information from the image [9,10,11] and imposing certain constraints on the performance improvement of CNN-based SSR methods.
Recent studies have also pointed out that activating more pixels usually leads to better recovery results [12]. Due to larger receptive fields, transformer-based methods [13,14] typically outperform CNN-based methods. Although the self-attention mechanism offers many advantages, there is an inherent trade-off between global receptive field and computational efficiency. The quadratic computational complexity of the standard Transformer [15,16] is often infeasible for SSR tasks. Although some efficient attention mechanisms, such as shifted window attention [17], can reduce computational costs to a certain extent, they may sacrifice the global receptive field. This indicates that finding a balance between global effectiveness and efficient computation remains a challenge [18].
Recently, the proposal of Mamba [19] has provided new possibilities for balancing global receptive field and computational efficiency. Mamba models long-distance dependencies through the discretized state space equations, and its structured re-parameterization method further reduces computational complexity. This linear complexity characteristic makes it significantly better than Transformers when dealing with long sequences. However, the standard Mamba algorithm was originally designed for sequence data processing and is not suitable for direct application to the SSR task [20]. As a result, many researchers have explored various scanning methods, such as BiDirectional Scan [20] and Cross-Scan [21], to arrange multi-dimensional data into 1D sequences in a specific order. Nevertheless, most current scanning methods only focus on scanning in the spatial domain and neglect the long-range dependencies in the spectral domain. In the field of SSR, the exploration of the spatial–spectral dependency of images using Mamba remains an issue. On the other hand, Transformer [22] typically employs hierarchical downsampling operations to achieve multi-scale feature learning. However, repeated downsampling operations inevitably lead to progressive degradation of spatial details and confusion of spectral and spatial information.
In this paper, we propose a multi-scale spectral–spatial sequence learning Mamba (MSS-Mamba) for SSR. (1) To fully explore the spectral–spatial correlations in images, we integrate the spectral dimension into the sequence scanning process and propose the Continuous Spectral–Spatial Scan (CS3). CS3 rearranges the row, column and band dimensions by first adopting a channel-first scanning strategy to construct sequences enriched with channel features, followed by row scanning to iteratively embed spatial information from different pixels into the sequences, forming channel-row composite sequences. Subsequently, column scanning is applied to vertically connect channel-row sequences. Additionally, by swapping the execution order of row and column scanning, the channel-column sequences of each row are horizontally concatenated to enhance directional diversity learning. (2) Inspired by sequence modeling [23], we design a multi-scale information fusion block (MIF). Following CS3, an additional sequence tokenization step is incorporated, employing sliding window slicing with varying strides and patch sizes to generate sequences containing multi-scale information, where large patches prioritize spatial dimension scanning to learn global spatial relationships, while small patches focus on channel scanning to extract local spectral features. By setting patch sizes larger than or equal to the channel number of feature maps, each generated sequence retains complete spectral information and partial spatial context, thereby strengthening spectral–spatial synergy. The multi-scale sequences are then fed into the Mamba model for feature learning, and their outputs are fused through a routing weight generation module to reconstruct high-quality images. Our method operates directly on the original feature maps without degrading image sizes, while fully leveraging Mamba’s long-sequence modeling capability. Unlike existing Mamba-based vision methods relying on repeated downsampling [21,24,25], our approach addresses the inherent limitations of Mamba in image serialization. The contributions of this paper are as follows:
  • We propose a novel MSS-Mamba for SSR tasks, achieving dynamic fusion of spectral–spatial features through multi-scale spectral–spatial sequence joint modeling.
  • We design a CS3 mechanism to construct directionally complementary composite sequences, enhancing long-sequence modeling capability while preserving local spatial continuity.
  • We conduct experiments on three HSI datasets, and the results demonstrate that the proposed MSS-Mamba outperforms compared methods.
The rest of this paper is organized as follows. Section 2 reviews related methods. Section 3 introduces the proposed MSS-Mamba. Section 4 reports the experiments. Finally, Section 5 gives the conclusion.

2. Related Works

2.1. SSR

The SSR methods can be categorized into traditional methods and deep learning-based approaches. Traditional methods address the spectral reconstruction problem by mining properties of high-dimensional data (e.g., correlations and redundancies) and applying handcrafted priors or assumptions as constraints [26]. For instance, Arad et al. [27] leveraged hyperspectral priors to create hyperspectral features while developing sparse dictionaries of corresponding RGB projections for these features. Chen et al. [28] extended this approach by introducing matrix factorization, and Geng et al. [29] further improved reconstruction performance by incorporating spatial constraints. However, due to the significant spectral discrepancy between RGB images and hyperspectral images, accurately representing the spectral characteristics of real-world objects under limited prior knowledge or assumptions remains a challenging task [30].
The challenge of accurately reconstructing the spectral–spatial properties of HSIs is not limited to SSR but is a recurring theme across various hyperspectral image processing tasks. For instance, in the related domain of hyperspectral image restoration, Duan et al. [31] addressed the problem of shadow removal through a multiexposure fusion framework. While targeting a different application, their work exemplifies the broader need for developing specialized techniques that can handle the complex degradation processes inherent in real-world HSI data, a challenge that also underpins the SSR problem.
Deep learning-based methods have surpassed traditional approaches due to their strong feature extraction capabilities and generalization across diverse datasets. Among these methods, CNN-based approaches are the most prevalent. The earliest deep learning method was DenseUnet proposed by Galliani et al. [32]. Subsequently, Xiong et al. [33] improved the very deep super-resolution CNN (VDSR) network [34] to develop HSCNN for SSR tasks. Later, Shi et al. [35] further enhanced this framework by constructing HSCNN-R with deep residual structures and HSCNN-D with dense connection architectures, respectively.
With the remarkable success of attention mechanisms, CNN-based models have progressively incorporated attention modules to adaptively learn more informative features, significantly enhancing network learning capabilities. For instance, Li et al. [36] proposed AWAN, which redistributes channel feature responses by integrating channel correlations to achieve more accurate reconstruction. Zhao et al. [37] designed a 4-level hierarchical regression network (HRNet), leveraging residual dense blocks for artifact removal and residual global blocks for modeling remote pixel correlation. Subsequently, Li et al. [38] introduced a deep hybrid 2D-3D CNN network with dual second-order attention (HSACS) to fully exploit sufficient spatial–spectral contextual information, achieving effective modeling of spatial–spectral dependencies. In recent work, Duan et al. [39] proposed a spectral–spatial-frequency fusion network (SSFDF), marking a first attempt to incorporate frequency-domain information into the SSR pipeline. Sun et al. [40] proposed a hybrid spectral and texture attention pyramid network, which utilizes a learnable texture feature extraction module to extract texture features from RGB images and enables comprehensive exploration of spatial–spectral correlations through spatial–spectral cross-attention.
Subsequently, the self-attention mechanism has demonstrated immense potential, and Transformer-based models have proven to be effective tools in computer vision [41]. For example, Cai et al. [42] proposed a Multi-stage Spectral-wise Transformer (MST++) to learn correlations between different spectral bands of HSI for efficient spectral reconstruction. While existing deep learning-based SSR methods integrate multiple modules and achieve visually satisfactory results, they inevitably face several challenges. First, deeper networks capable of capturing more information typically entail substantial parameters and floating-point operations. Second, advanced attention networks often operate at the expense of global receptive fields, and the dilemma of balancing computational efficiency with global modeling remains largely unresolved.

2.2. State Space Models

State Space Models (SSMs) [43,44], originating from control theory, were initially designed to describe and predict the dynamic evolution of systems over time [45]. Inspired by SSMs, the structured state space sequence model (S4) [43] integrates Hippo matrices with discretization operations to achieve long-range sequence modeling capabilities, demonstrating significant potential in processing sequential data. Subsequently, Fu et al. [46] bridged the efficacy gap separating SSMs from Transformers in natural language processing through their H3 architecture. Mehta et al. [47] further enhanced the representational capacity of SSMs by introducing gating mechanisms. Recently, Mamba has outperformed Transformers in natural language processing while maintaining linear scaling with input length [19].
Given Mamba’s exceptional sequence processing capabilities, recent visual tasks [21,48] have initiated preliminary attempts to adopt Mamba as a foundational framework. For example, Li et al. [49] combined spatial–spectral fusion blocks with Mamba for hyperspectral image classification. Similarly, Ahmad et al. [50] used wavelet transforms combined with Mamba for hyperspectral classification. Li et al. [51] combined CNN with Vision Mamba for hyperspectral object detection. However, Mamba’s potential remains largely unexplored in the SSR domain to date.
Recent pioneering efforts have sought to adapt Mamba for SSR, yet they exhibit critical limitations in modeling the joint spectral–spatial nature of the task. Wang et al. [52] introduced gradient attention to guide Mamba for spectral reconstruction; however, their adoption of VMamba’s Cross-Scan module restricts the SSM to operate independently on single channels, failing to capture inter-spectral dependencies. In contrast, Lin et al. [53] proposed a hybrid Transformer–Mamba architecture to balance global modeling capacity and computational efficiency. Although they considered channel scanning, their “head-to-tail” serialization strategy neglects fine-grained local pixel interactions. Moreover, Lin’s method relies on a U-Net-like structure with repeated downsampling and upsampling operations, which inevitably leads to the loss of detailed spatial information. The shortcomings of these works collectively highlight a prevailing challenge: existing approaches struggle to perform continuous and lossless spectral–spatial sequence modeling. This work aims to pioneer the exploration of Mamba’s capabilities in SSR and proposes a novel perspective for the SSR task.

3. Method

3.1. Preliminaries

The Structured State Space Sequence (S4) model leverages a continuous-time state space modeling framework to process discrete data, mapping the input sequence $x(t) \in \mathbb{R}$ to the output sequence $y(t) \in \mathbb{R}$ through an implicit latent state $h(t) \in \mathbb{R}^{N}$, where $N$ denotes the state space dimension. The mathematical formulation is expressed as follows:
$$ h'(t) = A h(t) + B x(t), \qquad y(t) = C h(t) + D x(t) \tag{1} $$
The state transition matrix $A \in \mathbb{R}^{N \times N}$ autonomously governs latent state evolution by integrating historical information to maintain system memory. $B \in \mathbb{R}^{N \times 1}$ dynamically weights input signals to regulate their influence on latent state updates. $C \in \mathbb{R}^{1 \times N}$ transforms latent states into observable outputs, while $D \in \mathbb{R}$ provides direct input-to-output connections to preserve transient signal characteristics.
The continuous parameters are then discretized via the zero-order hold (ZOH) rule, so that the model can be integrated into deep learning algorithms. The definition is as follows:
$$ \bar{A} = \exp(\Delta A), \qquad \bar{B} = (\Delta A)^{-1}\left(\exp(\Delta A) - I\right) \cdot \Delta B \tag{2} $$
where $\Delta$ represents the learnable time-scale parameter used to convert the continuous parameters $A$ and $B$ into the discrete parameters $\bar{A}$ and $\bar{B}$. Then Equation (1) can be rewritten as follows:
$$ h_t = \bar{A} h_{t-1} + \bar{B} x_t, \qquad y_t = C h_t + D x_t \tag{3} $$
In addition, Equation (3) can also be transformed into convolution form:
$$ \bar{K} \triangleq \left( C\bar{B},\; C\bar{A}\bar{B},\; \ldots,\; C\bar{A}^{L-1}\bar{B} \right), \qquad y = x * \bar{K} \tag{4} $$
Among them, $\bar{K} \in \mathbb{R}^{L}$ is the structured convolution kernel, $L$ is the length of the input sequence, and $*$ represents the convolution operation.
The S4 framework achieves enhanced computational efficiency through dynamic parameter adaptation, a capability further refined in its advanced iteration S6 (Mamba). By rendering parameters B , C and Δ input-dependent, Mamba enables context-aware processing tailored to varying input characteristics.
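To make the discretization and recurrence above concrete, the following minimal NumPy sketch implements Equations (2) and (3) for a diagonal state matrix (as in S4D/Mamba-style models); the shapes, parameter values and the diagonal assumption are illustrative, not the actual Mamba implementation.

```python
import numpy as np

def ssm_scan(x, A, B, C, D, delta):
    """ZOH-discretized state-space recurrence (Eqs. (2)-(3)) over a 1D input sequence.
    A, B, C are length-N vectors (A holds the diagonal of the state matrix); D, delta are scalars."""
    A_bar = np.exp(delta * A)                 # \bar{A} = exp(Delta A), element-wise for diagonal A
    B_bar = (A_bar - 1.0) / A * B             # \bar{B} = (Delta A)^{-1} (exp(Delta A) - I) Delta B
    h = np.zeros_like(A)                      # latent state h_0 = 0
    y = np.empty_like(x)
    for t, x_t in enumerate(x):
        h = A_bar * h + B_bar * x_t           # h_t = \bar{A} h_{t-1} + \bar{B} x_t
        y[t] = C @ h + D * x_t                # y_t = C h_t + D x_t
    return y

# Toy usage with random parameters.
rng = np.random.default_rng(0)
N, L = 16, 64
A = -np.abs(rng.standard_normal(N))           # stable (negative) diagonal entries
B, C, D, delta = rng.standard_normal(N), rng.standard_normal(N), 1.0, 0.1
out = ssm_scan(rng.standard_normal(L), A, B, C, D, delta)
```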

3.2. Overall Architecture

As illustrated in Figure 1, the MSS-Mamba architecture comprises four core components: shallow feature extraction, continuous spectral–spatial scan (CS3), multi-scale information fusion (MIF) and high-quality reconstruction. The network processes a single RGB image $I_R \in \mathbb{R}^{3 \times H \times W}$ as input. During shallow feature extraction, a $1 \times 1$ 2D convolution initially extracts low-level feature maps $F_S \in \mathbb{R}^{C \times H \times W}$, where $C$, $H$ and $W$ denote the channel depth, height and width, respectively.
$$ F_S = \mathrm{Conv2D}(I_R) \tag{5} $$
The shallow features $F_S$ are subsequently fed into the CS3 module to generate sequential representations $S_n$, which encode rich spatial–spectral correlations for robust multi-scale feature extraction. These sequences $S_n$ are then processed by the MIF module to learn high-level discriminative features $F_D$. The entire process can be described as follows:
$$ S_1 = \mathrm{BRC\text{-}S}(F_S), \qquad S_2 = \mathrm{BCR\text{-}S}(F_S), \qquad F_D = \mathrm{MIF}(S_1) + \mathrm{MIF}(S_2) \tag{6} $$
The CS3 is divided into two scanning modes: BRC-S and BCR-S. Finally, the high-quality reconstruction stage synthesizes $F_D$ into a high-fidelity hyperspectral image $I_H$:
$$ I_H = \mathrm{Reconstruction}(F_S + F_D) \tag{7} $$
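As a schematic illustration only, the skeleton below mirrors the data flow of Equations (5)–(7); the CS3 scans and the MIF block are passed in as callables, and the $1 \times 1$ shallow convolution width, the $3 \times 3$ reconstruction head and the 31 output bands are assumptions rather than the exact architecture.

```python
import torch
import torch.nn as nn

class MSSMambaSkeleton(nn.Module):
    """Data flow of Eqs. (5)-(7): shallow features -> CS3 scans -> MIF -> reconstruction."""
    def __init__(self, channels=128, out_bands=31):
        super().__init__()
        self.shallow = nn.Conv2d(3, channels, kernel_size=1)                   # Eq. (5): 1x1 conv
        self.head = nn.Conv2d(channels, out_bands, kernel_size=3, padding=1)   # assumed reconstruction head

    def forward(self, rgb, brc_s, bcr_s, mif):
        f_s = self.shallow(rgb)                         # F_S
        f_d = mif(brc_s(f_s)) + mif(bcr_s(f_s))         # Eq. (6): fuse both scan directions
        return self.head(f_s + f_d)                     # Eq. (7): reconstruct I_H from F_S + F_D

# Toy usage with identity stand-ins for the scan and fusion callables.
net = MSSMambaSkeleton()
out = net(torch.randn(1, 3, 32, 32), brc_s=lambda t: t, bcr_s=lambda t: t, mif=lambda t: t)
```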

3.3. Continuous Spectral–Spatial Scan

The original Mamba demonstrates remarkable advantages in long-sequence modeling tasks, particularly in global receptive field establishment and long-range dependency learning. However, its application to image processing necessitates specialized scanning strategies to transform multidimensional data into 1D sequences. Existing vision-oriented scanning methods primarily focus on spatial dimensions, which exhibit critical limitations in adaptability when extended to 3D hyperspectral data characterized by intricate spatial–spectral interdependencies. To address this, we propose CS3, a novel technique that generates sequences rich in spatial–spectral information to accommodate the complexity of 3D image data while preserving cross-dimensional correlations.
As illustrated in Figure 2, the CS3 strategy unfolds 3D feature maps in a pixel-wise manner, where each pixel has dependency relationships with surrounding pixels across rows, columns and bands. Taking BRC-S as an example, the input 3D feature map $F_S \in \mathbb{R}^{C \times H \times W}$ undergoes dimensional rearrangement to $F_{S1} \in \mathbb{R}^{H \times W \times C}$, which is then scanned into the sequence $S_1 \in \mathbb{R}^{H \times L}$ with $L = W \times C$. This process sequentially scans each column following band and row orders. Specifically, BRC-S first scans band-row planes before proceeding to column traversal, as visualized in Figure 2a. The generated sequences preserve inter-pixel dependencies through tail-to-tail or head-to-head concatenation, where adjacent elements in the sequence maintain spatial–spectral correlations.
Mamba’s unidirectional recurrent processing propagates dependencies solely from preceding tokens, potentially isolating spatially proximate pixels in distant sequence positions and causing local information degradation. To mitigate this, as visualized in Figure 2b, BCR-S introduces band-column-row scanning to create complementary sequences that enhance long-range modeling diversity. Collectively, CS3 innovatively encodes cross-dimensional relationships into sequences through intelligent dimensional permutations, effectively exploiting spatial–spectral synergies.
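A minimal sketch of the two scan orders is given below, assuming that BRC-S and BCR-S can be realized as permute-and-reshape operations on the $C \times H \times W$ feature map; batch handling and the exact traversal order within each composite sequence may differ in the actual implementation.

```python
import torch

def brc_scan(f_s):
    """BRC-S sketch: rearrange (C, H, W) -> (H, W, C), then flatten each row into a
    channel-row composite sequence of length L = W * C, giving an (H, L) sequence set."""
    c, h, w = f_s.shape
    return f_s.permute(1, 2, 0).reshape(h, w * c)

def bcr_scan(f_s):
    """BCR-S sketch: swap the roles of rows and columns to form the complementary
    channel-column composite sequences of shape (W, H * C)."""
    c, h, w = f_s.shape
    return f_s.permute(2, 1, 0).reshape(w, h * c)

f_s = torch.randn(128, 32, 32)          # shallow features: C = 128, H = W = 32
s1, s2 = brc_scan(f_s), bcr_scan(f_s)   # each of shape (32, 4096)
```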

3.4. Multi-Scale Information Fusion

While Transformer-based multi-scale learning frameworks improve efficiency through image patch partitioning, they inherently suffer from two critical limitations: inadequate interaction between global and local contextual features [42], and persistent incompatibility between global receptive fields and computational efficiency. Addressing these challenges, our Multi-scale Information Fusion (MIF) block introduces a sequence tokenization approach that performs multi-scale slicing operations prior to feeding scanned sequences into Mamba. Inspired by temporal sequence modeling principles in Mamba, this strategy processes hierarchical visual patterns through adaptive window slicing while preserving sequence continuity. The proposed method significantly enhances cross-scale feature interactions by jointly optimizing multi-granularity pattern extraction and sequence dependency propagation, thereby achieving synergistic integration of global contextual awareness and local detail preservation within an efficient computational paradigm.
Multi-Scale Patch: As shown in Figure 1, the sequences generated by CS3 are first fed into a multi-scale patch generator. This module produces multi-scale sequences by configuring varying parameters for sequence tokenization. Specifically, larger patches retain more spatial pixels, enhancing the model’s understanding of structural trends and holistic contextual relationships to facilitate global spatial modeling. Conversely, smaller patches encompass more spectral channel pixels, strengthening the model’s ability to learn variation trends and correlations in adjacent spectral bands for local spectral modeling. To ensure effective spectral–spatial correlation learning, the patch size is set to exceed the number of feature channels. The specific form is shown in Figure 3. Formally, consider the sequence $S_1 = \{x^{(1)}, x^{(2)}, \ldots, x^{(H)}\} \in \mathbb{R}^{L \times H}$ generated by BRC-S, where each column vector $x^{(i)} \in \mathbb{R}^{L \times 1}$ represents a spectral–spatial feature stream. The sequence tokenization process applies sliding-window slicing with patch length $P$ and stride $Str$, generating $N = (L - P)/Str + 1$ patches per column. This operation transforms each $x^{(i)}$ into a patch matrix $y^{(i)} \in \mathbb{R}^{N \times P}$, effectively expanding the original 2D sequence $S_1$ into a 3D tensor in $\mathbb{R}^{H \times N \times P}$. Here, $N$ becomes the new sequence dimension encoding multi-scale contextual hierarchies, while $P$ serves as the variable dimension capturing localized pattern characteristics.
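The sliding-window tokenization can be sketched with `torch.Tensor.unfold`; the $(H, L)$ layout below follows the BRC-S scan, and the patch/stride pairs are the values later reported in Section 4.2, used here purely for illustration.

```python
import torch

def tokenize(seq, patch, stride):
    """Slice each length-L stream into N = (L - patch) // stride + 1 overlapping patches.
    seq: (H, L) scanned sequence -> (H, N, patch) multi-scale token tensor."""
    return seq.unfold(dimension=1, size=patch, step=stride)

s1 = torch.randn(32, 4096)                  # H = 32 streams of length L = W * C
global_tokens = tokenize(s1, 256, 128)      # coarse patches: shape (32, 31, 256)
local_tokens = tokenize(s1, 128, 64)        # fine patches:   shape (32, 63, 128)
```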
Local Information Richness (LIR): To measure the richness of local spectral information in a sequence, we propose a new metric called LIR. The specific calculation method is as follows:
$$ \mathrm{LIR} = \frac{N\,(P - Str)}{P \cdot Str} = \frac{\left( \frac{L - P}{Str} + 1 \right)(P - Str)}{P \cdot Str} \tag{8} $$
The LIR metric is intrinsically governed by two critical parameters, P and S t r , which jointly determine its computational characteristics. The LIR value is positively correlated with local spectral information density — higher LIR values indicate richer localized patterns within sequences. Notably, patch dimensions differentially modulate receptive fields: larger P expands spatial–spectral coverage, enabling efficient global spatial modeling with reduced computational overhead while maintaining stride sizes. Conversely, smaller P prioritizes localized spectral information capture through decreased S t r , which increases inter-patch overlaps to enhance fine-grained feature extraction. This parametric flexibility allows LIR to adaptively balance global contextual learning and local discriminative analysis.
The entire multi-scale patch process can be described by the following formula:
$$ S_L,\; S_G = \mathrm{MP}(S_n, P_L, Str_L),\; \mathrm{MP}(S_n, P_G, Str_G) \tag{9} $$
$S_L$ and $S_G$ represent the generated high- and low-LIR sequences, respectively. MP is the multi-scale patch generator. $P_L$, $Str_L$, $P_G$ and $Str_G$ represent different slice sizes, with $P_G > P_L$ and $Str_G > Str_L$.
Long-Short Router: The proposed Long-Short Router module dynamically allocates computational resources by learning adaptive, input-dependent weights to integrate global and local representations. As shown in Figure 1, unlike traditional Mixture-of-Experts (MoE) routers that perform discrete path selection, our router learns the continuous relative contributions of two complementary pathways through a dual-attention mechanism followed by adaptive interpolation.
Formally, given the input feature map $F_S \in \mathbb{R}^{C \times H \times W}$ from the shallow extraction module, it is first reshaped into a token sequence $X = \mathrm{reshape}(F_S) \in \mathbb{R}^{C \times M}$, where $M = H \times W$ indexes the spatial locations. This sequence $X$ is then processed by the router to generate pathway weights. The router leverages parallel channel-wise and spatial-wise attention branches to capture both semantic and structural information, generating a comprehensive representation:
$$ V_{\mathrm{channel}} = \phi_{\theta_1}(X), \qquad V_{\mathrm{spatial}} = \psi_{\theta_2}(X) \tag{10} $$
where $\phi_{\theta_1}$ denotes a pointwise ($1 \times 1$) convolution for channel attention, and $\psi_{\theta_2}$ denotes a depthwise ($3 \times 3$) convolution for spatial attention. These two attentions are combined via element-wise multiplication to form a joint feature representation $V = V_{\mathrm{channel}} \odot V_{\mathrm{spatial}}$.
Subsequently, the combined representation is aggregated via global average pooling and projected to a 2-dimensional weight vector:
$$ w = \mathrm{AdaptiveAvgPool1d}(V), \qquad [w_L, w_G] = \mathrm{softmax}\left( f_{\theta_3}(w) \right) \tag{11} $$
where $f_{\theta_3}$ is a linear projection layer. The final output weights $w_L, w_G \in (0, 1)$ are broadcast and applied to modulate the contributions of the local and global pathways, respectively. This design enables instance-specific resource allocation without the need for discrete routing or expert selection.
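A sketch of the Long-Short Router is shown below, under the assumption that the $1 \times 1$ and $3 \times 3$ attention branches act on the token sequence $X \in \mathbb{R}^{C \times M}$ as 1D convolutions; the layer names, hidden width and pooling choice are illustrative rather than the exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LongShortRouter(nn.Module):
    """Eqs. (10)-(11): dual-attention routing that outputs pathway weights [w_L, w_G]."""
    def __init__(self, channels=128):
        super().__init__()
        self.phi = nn.Conv1d(channels, channels, kernel_size=1)       # pointwise branch (channel attention)
        self.psi = nn.Conv1d(channels, channels, kernel_size=3,
                             padding=1, groups=channels)              # depthwise branch (spatial attention)
        self.proj = nn.Linear(channels, 2)                            # f_{theta_3}

    def forward(self, x):                                 # x: (B, C, M) with M = H * W
        v = self.phi(x) * self.psi(x)                     # V = V_channel (element-wise) V_spatial
        w = F.adaptive_avg_pool1d(v, 1).squeeze(-1)       # global average pooling -> (B, C)
        w_l, w_g = torch.softmax(self.proj(w), dim=-1).unbind(-1)  # weights in (0, 1), summing to 1
        return w_l, w_g

router = LongShortRouter(channels=128)
w_l, w_g = router(torch.randn(2, 128, 32 * 32))           # per-instance pathway weights
```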
Then, the multi-scale sequences generated by Equation (9) are fed into Mamba for feature learning. The resulting feature maps are weighted and summed using the weights generated by Equation (11). Finally, the deep feature map is restored to the input format via the restore operation. The specific formulas are as follows:
$$ F_M = \mathrm{Restore}\left( w_L \cdot f_{\mathrm{Mamba}}^{(i)}(S_L) + w_G \cdot f_{\mathrm{Mamba}}^{(i)}(S_G) \right) \tag{12} $$
where $F_M \in \mathbb{R}^{C \times H \times W}$ is the output deep feature map, and $f_{\mathrm{Mamba}}^{(i)}$ represents the $i$-th Mamba layer in the hierarchical stack. Finally, the feature maps from different sequences are element-wise summed via Equation (6), and a high-quality image is reconstructed via Equation (7).

4. Experiments

4.1. Dataset and Evaluation Metrics

The proposed MSS-Mamba is evaluated on three widely recognized hyperspectral reconstruction benchmarks: the ARAD_1k dataset from NTIRE 2022 competition [54], the CAVE dataset [55] and the IEEE grss_dfc_2018 dataset.
The ARAD1K dataset consists of 1000 aligned RGB and hyperspectral image pairs with a spatial resolution of 512 × 482 pixels. This dataset is designed specifically for large-scale spectral recovery tasks and covers most application scenarios. Following the official competition protocol, 900 image pairs are allocated for training and 50 for validation to rigorously assess generalization performance on unseen data.
The CAVE dataset serves as a standard benchmark for few-shot hyperspectral imaging research. It contains 32 high-resolution (512 × 512 pixels) hyperspectral cubes captured under laboratory conditions, covering diverse real-world materials such as fabrics, paints and organic objects. We adopt a standardized split with 22 images for training and 10 for testing, simulating scenarios with limited training samples.
The IEEE grss_dfc_2018 dataset was collected by the National Center for Airborne Laser Mapping (NCALM) at the University of Houston [56]. It comprises a hyperspectral image with a spatial resolution of 4172 × 1202 pixels and 48 spectral bands covering the wavelength range of 380–1050 nm. Following the configuration in [57], bands 23, 12 and 5 were selected to synthesize the RGB image as input. The dataset was cropped into 27 paired 512 × 512 patches, with three non-overlapping patches reserved for the testing set.
To quantitatively evaluate the performance of MSS-Mamba, five widely adopted metrics were employed: the Mean Relative Absolute Error (MRAE) and Root Mean Square Error (RMSE) to quantify spectral reconstruction accuracy, Peak Signal-to-Noise Ratio (PSNR) to assess spatial fidelity, Spectral Angle Mapper (SAM) to measure spectral angular deviations and Structural Similarity Index (SSIM) to evaluate structural preservation. Specifically, lower values of MRAE, RMSE and SAM indicate reduced spectral distortions, while higher PSNR and SSIM values reflect superior spatial consistency and perceptual quality. These metrics collectively provide a rigorous multi-dimensional assessment of hyperspectral reconstruction performance, spanning spectral accuracy, spatial fidelity and structural integrity, thereby comprehensively validating the model’s effectiveness in spectral recovery tasks.
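For reference, MRAE and SAM follow the common definitions sketched below in NumPy; the small eps terms and the averaging order are assumptions and may differ slightly from the benchmark evaluation code.

```python
import numpy as np

def mrae(pred, gt, eps=1e-6):
    """Mean Relative Absolute Error between reconstructed and ground-truth HSI cubes."""
    return float(np.mean(np.abs(pred - gt) / (np.abs(gt) + eps)))

def sam(pred, gt, eps=1e-8):
    """Spectral Angle Mapper in radians, averaged over pixels; cubes shaped (bands, H, W)."""
    p = pred.reshape(pred.shape[0], -1)
    g = gt.reshape(gt.shape[0], -1)
    cos = np.sum(p * g, axis=0) / (np.linalg.norm(p, axis=0) * np.linalg.norm(g, axis=0) + eps)
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))
```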

4.2. Parameter Setting

For the proposed MSS-Mamba network, we set a low-LIR configuration ($LIR = 0.13$, $P = 256$, $Str = 128$) for sliding-window slicing as the global information sequence and a high-LIR configuration ($LIR = 0.18$, $P = 128$, $Str = 64$) as the local information sequence. The number of Mamba layers is set to 5. The original samples are cropped into 32 × 32 RGB and HSI pairs with an overlap of 8 pixels, and the initial feature dimension is set to 128. The ARAD1K dataset was trained for 100 epochs with 5000 iterations per epoch, while the CAVE and Houston datasets were trained for 50 epochs due to their smaller data volumes. Optimization used Adam with an initial learning rate of $1 \times 10^{-4}$, dynamically annealed to $1 \times 10^{-6}$ via cosine scheduling, gradually reducing parameter update magnitudes as the model approaches convergence to achieve coarse-to-fine optimization.
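An illustrative PyTorch setup matching this schedule is given below; the stand-in model and the per-iteration stepping pattern are assumptions, not the actual training script.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 31, kernel_size=1)       # stand-in for MSS-Mamba
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)       # initial learning rate 1e-4
total_iters = 100 * 5000                                        # 100 epochs x 5000 iterations (ARAD1K)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=total_iters, eta_min=1e-6)

for _ in range(total_iters):
    # ... forward pass, loss computation and backward() omitted ...
    optimizer.step()
    scheduler.step()        # cosine annealing from 1e-4 down to 1e-6
```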

4.3. Comparison with Other Methods

The proposed method is rigorously compared with eight cutting-edge approaches on each dataset: DenseUnet [32], HSCNN+ [35], sRCNN [58], HRNet [37], GDNet [59], SSDCN [60], GMSR [52] and SSRMamba [61]. All comparative methods adhere to standardized dataset splits and utilize publicly released implementations with pre-trained models to ensure reproducibility. This comprehensive benchmarking framework validates the method’s robustness across diverse hyperspectral reconstruction scenarios while maintaining strict experimental parity in training configurations, evaluation criteria and computational environments.

4.3.1. Quantitative Results

Table 1, Table 2 and Table 3 present the quantitative results on the three datasets. It is evident from the tables that our proposed MSS-Mamba consistently outperforms other methods across most of the evaluation metrics. On the ARAD1K dataset, MSS-Mamba demonstrates significant advantages by achieving optimal performance in MRAE, PSNR and SSIM. Our approach reduces spectral distortion with a 3.25% improvement in MRAE over the nearest competitor (SSDCN) and enhances reconstruction fidelity with a 1.73% PSNR gain against GMSR. However, it ranks slightly lower on SAM and RMSE. We attribute this to an inherent trade-off in our model’s design: the MSS-Mamba prioritizes global spectral–spatial consistency and perceptual quality (reflected in PSNR/SSIM), which may slightly relax the constraint on per-pixel spectral angle accuracy (SAM) in highly heterogeneous regions. The complex real-world scenes in ARAD1K make this trade-off more apparent. Nevertheless, our method maintains highly competitive performance across all metrics while providing superior overall reconstruction fidelity.
MSS-Mamba leads all metrics on both the CAVE and Houston datasets. On CAVE, it shows major gains in PSNR and SAM, proving effective in spatial–spectral modeling. On Houston, it reduces MRAE by 7.99% over DenseUnet and RMSE by 5.32% over GDNet, demonstrating strong generalization. The method simultaneously improves spectral accuracy and spatial quality, setting a new state-of-the-art for robust hyperspectral imaging across diverse data environments.

4.3.2. Visual Results

For the visual inspection of HSI reconstruction outcomes, we designed different visualization comparison graphs. On the ARAD1K dataset, Figure 4 and Figure 5 present true-color composites (bands 27, 17 and 10) along with the corresponding MRAE error maps. As shown in Figure 4, while the true-color composites show that all methods produce generally plausible results, close examination reveals that GDNet exhibits noticeable global color distortion, indicative of a fundamental spectral miscalibration. HSCNN+ and HRNet show more localized color artifacts on building surfaces, suggesting difficulties with material-specific spectral reconstruction.
The MRAE maps in Figure 5, however, are far more discriminative. Two key observations can be made: (1) Spectral Consistency in Homogeneous Regions: The sky region exhibits high error (intense red) for most methods. This indicates a widespread failure to model subtle spectral variations in low-texture areas. Our method shows a marked reduction in error here. This can be directly attributed to the CS3 module’s continuous scanning strategy, which traverses the spatial and spectral dimensions simultaneously. Unlike patch-based methods that may break the continuity of the sky region, our approach maintains a global contextual understanding, allowing it to model these subtle, large-scale spectral variations more effectively. (2) Spatial–Spectral Complexity in Vegetation: In contrast to DenseUnet and HSCNN+, which produce spatially coherent errors indicating a fundamental misrepresentation of the vegetation’s structure, our method exhibits a more diffuse and lower-magnitude error pattern. This suggests that the MIF block successfully fuses multi-scale features to better represent the complex interplay of texture and spectral signature inherent in natural scenes, thereby avoiding such structured artifacts.
The reconstruction results on the CAVE dataset are presented in Figure 6 and Figure 7. Figure 6 displays the true-color composite images (bands 27, 17 and 10). While the outputs from all compared methods are visually plausible and exhibit high perceptual quality at a glance, making fine-grained distinctions challenging based on RGB visualization alone, the corresponding MRAE error maps in Figure 7 provide a more objective and discriminative assessment. The minimal visual discrepancy in color composites underscores the challenging nature of this benchmark and necessitates a quantitative evaluation to uncover perceptually subtle yet critical differences in spectral–spatial fidelity. As revealed in Figure 7, several deep learning-based methods exhibit pronounced errors in specific regions, such as the teddy bear’s nose and the surface of the chili pepper. In comparison, SSDCN, GMSR and our method achieve relatively lower error levels overall. Notably, our approach demonstrates superior performance in detail preservation and spectral continuity—particularly evident in these structurally and spectrally complex regions—affirming the efficacy of the proposed joint spectral–spatial scanning strategy.
Figure 8 illustrates the true-color composites (bands 23, 12 and 5) from the IEEE grss_dfc_2018 dataset. Figure 9 further provides absolute error maps between the reconstructed results of each method and the ground-truth references. Overall, all methods perform satisfactorily in reconstructing mid-band wavelengths, whereas their adaptability diminishes toward both lower and higher bands, underscoring the challenging nature of the Houston dataset. Certain methods, such as HRNet, exhibit significant errors in particular regions (e.g., buildings in higher bands), indicating limited generalization capability and regional adaptability. In contrast, the proposed MSS-Mamba method maintains consistently lower error levels across various bands and regions, further confirming its ability to reconstruct both global structures and fine local details with high fidelity.
To investigate MSS-Mamba’s spectral reconstruction capability across diverse surfaces, we compared spectral reflectance curves across 31 bands between ground-truth data and reconstructed images from various methods. As shown in Figure 10, spectral reflectance profiles from six representative locations demonstrate that MSS-Mamba achieves the closest alignment with ground-truth measurements, particularly for challenging surfaces such as pottery jars (Figure 10d) and the soil wall (Figure 10f). These results validate MSS-Mamba’s superior performance in reconstructing complex spectral signatures compared to alternative approaches. It is worth noting that the reconstruction fidelity can vary depending on the material properties and spectral characteristics of the region. While our method demonstrates robust performance in most areas, certain materials with specific spectral signatures in high-frequency bands present a valuable challenge for future work.

4.4. Ablation Analysis

4.4.1. Multi-Scale Information Learning

To investigate the impact of sequence modeling strategies on multi-scale feature learning, we conducted three ablation studies on the ARAD_1K dataset. First, to validate the effectiveness of global and local sequence modeling, experiments were performed by retaining only the global or local information learning modules. Second, the contribution of the adaptive weighting fusion module was analyzed by disabling this component.
As shown in Table 4, the full model achieves optimal performance. In the global mode, MSS-Mamba outperforms the local mode in terms of PSNR, but its SAM is worse than that of the local mode. This indicates that the global mode is more effective in preserving the overall structure of the image, while the local mode can more accurately estimate the relative changes in the spectrum. Integrating global and local modules without adaptive fusion yields intermediate results, highlighting their complementary roles. Enabling adaptive fusion further refines performance, yielding a significant improvement in every metric. These results validate the necessity of adaptive weighting for balancing multi-scale spectral–spatial features, ultimately achieving state-of-the-art reconstruction fidelity.

4.4.2. The Effectiveness of CS3

To investigate the effects of CS3 combined with the foundation model (Base), we designed four sets of experiments: using the foundation model with the original sequence instead of CS3 (Non-CS3), Base combined with BRC-S, Base combined with BCR-S and Base combined with CS3. The experiments were conducted on the ARAD1K dataset.
As summarized in Table 5, the combined implementation of BRC-S and BCR-S with the Base model achieves optimal performance, underscoring the effectiveness of our multi-directional scanning strategy. Although the single-sequence model—particularly Base+BCR-S—exhibits a marginal advantage in the SAM metric, highlighting the strength of CS3 scanning in reconstructing and exploring spectral information, it falls short in capturing fine spatial details, as reflected in lower PSNR and SSIM scores. In contrast, non-CS3 configurations underperform across all metrics. These results emphasize the limitations of conventional scanning sequences in spatial–spectral feature extraction: their failure to preserve continuous information flow results in the gradual degradation of spectral–spatial features throughout processing.

4.4.3. Different Number of Mamba Layers

To validate the efficacy of stacking Mamba layers in MSS-Mamba, we conducted ablation studies with varying layer depths (1–6) on the ARAD1K dataset. As detailed in Table 6, increasing Mamba layers raises parameter count by approximately 0.88 M per layer. Configurations with one or two layers exhibit limited representational capacity, resulting in compromised recovery of complex spectral–spatial features. The 3-layer configuration demonstrates strong and competitive performance, particularly in spectral accuracy: it achieves the best MRAE and a very low SAM, indicating excellent spectral fidelity with minimal distortion. The fact that SAM does not improve substantially with deeper architectures suggests that the 3-layer model already captures essential spectral characteristics effectively. With only 3.12 M parameters, it offers an attractive balance between efficiency and reconstruction quality, making it highly suitable for applications that prioritize spectral precision and computational economy.
The 5-layer model, while requiring 1.76 M more parameters, delivers superior overall reconstruction quality, excelling in perceptually critical metrics including PSNR and SSIM. It also maintains strong all-around performance without significant degradation in any metric, indicating better generalization across diverse spectral and spatial features. While the 6-layer model achieves marginally higher PSNR and lower RMSE, the performance gain over the 5-layer model is minimal compared to the additional parameter cost, suggesting diminishing returns beyond five layers. Thus, the 3-layer model stands out as a compact and spectrally accurate configuration, whereas the 5-layer version provides the optimal trade-off for high-fidelity reconstruction across both spatial and spectral domains, justifying its selection as our primary architecture.

4.4.4. Different Fusion Methods

To evaluate the impact of our proposed routing mechanism, we compared three fusion strategies: element-wise addition (Add), a simple gating mechanism (Gate) and an attention-enhanced gating mechanism (Gate+Att). As summarized in Table 7, the attention-based gating approach achieves the best overall performance, attaining optimal values in MRAE, RMSE, PSNR and SSIM. Although the plain gating mechanism yields a slightly better SAM, the minor degradation in our method is likely due to the attention mechanism prioritizing broader spatial–spectral contextual integration over extreme angular accuracy. In contrast, simple addition fails to adaptively weight features, resulting in consistently inferior performance across most metrics and confirming its limited capacity for effective spatial–spectral fusion.

5. Conclusions

This study proposes MSS-Mamba, a multi-scale spectral–spatial sequence learning SSR network that integrates a continuous spectral–spatial scanning mechanism with an adaptive multi-scale feature fusion strategy. Specifically, MSS-Mamba achieves collaborative optimization of global structural consistency learning and local spectral sensitivity through a dual-path architecture, combined with the complementary scanning modes of BRC-S and BCR-S, significantly enhancing the diversity of spatial–spectral feature representations. The adaptive weight fusion module further dynamically balances the contributions of multi-scale information, overcoming the limitations of static fusion strategies. In future work, we plan to further refine the model architecture to enhance its performance and applicability. Several specific directions are envisioned:
First, while the proposed CS3 scanning strategy effectively captures continuous spectral–spatial dependencies—addressing a key limitation of current scanning approaches—it may still disrupt certain local spatial relationships. To mitigate this, we will explore more rigorous scanning mechanisms and investigate the integration of more effective positional encoding techniques to better preserve structural integrity.
Second, the sliding-window slicing strategy introduced in this work reduces the reliance on down-sampling for multi-scale feature learning in visual Mamba. Building on this idea, we aim to design more comprehensive and adaptive slicing schemes to capture richer contextual information across scales.
Finally, we will focus on developing lightweight and efficient variants of the model that demand minimal computational resources, thereby improving the practicality and deployment potential of spectral super-resolution methods in real-world applications.

Author Contributions

Conceptualization, W.C. and R.G.; methodology, W.C. and L.L.; software, L.L. and R.G.; validation, L.L. and R.G.; formal analysis, R.G.; investigation, W.C. and L.L.; resources, L.L.; data curation, L.L.; writing—original draft preparation, W.C. and L.L.; writing—review and editing, W.C., L.L. and R.G.; visualization, R.G.; supervision, R.G.; project administration, W.C.; funding acquisition, W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China under Grant 42401473.

Data Availability Statement

Data will be made available on request.

Acknowledgments

The authors are thankful to the anonymous reviewers and editors for their comments to improve this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, C.; Xiao, Z.; Wang, S. Multi-scale hyperspectral recovery networks: RGB-hyperspectral imaging consistency empowered deep spectral super-resolution. Opt. Express 2024, 32, 23392–23403. [Google Scholar] [CrossRef] [PubMed]
  2. Sun, H.; Chen, H.; Chen, W.; Wang, C.; Xie, W.; Lu, X. Learning Positive–Negative Prompts for Open-Set Remote Sensing Scene Classification. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5637914. [Google Scholar] [CrossRef]
  3. Hong, D.; Zhang, B.; Li, X.; Li, Y.; Li, C.; Yao, J.; Yokoya, N.; Li, H.; Ghamisi, P.; Jia, X.; et al. SpectralGPT: Spectral Remote Sensing Foundation Model. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 5227–5244. [Google Scholar] [CrossRef] [PubMed]
  4. He, J.; Yuan, Q.; Li, J.; Xiao, Y.; Liu, D.; Shen, H.; Zhang, L. Spectral super-resolution meets deep learning: Achievements and challenges. Inf. Fusion 2023, 97, 101812. [Google Scholar] [CrossRef]
  5. Zhang, J.; Sun, Y.; Chen, J.; Yang, D.; Liang, R. Deep-learning-based hyperspectral recovery from a single RGB image. Opt. Lett. 2020, 45, 5676–5679. [Google Scholar] [CrossRef]
  6. Han, X.; Zhang, H.; Xue, J.H.; Sun, W. A Spectral–Spatial Jointed Spectral Super-Resolution and Its Application to HJ-1A Satellite Images. IEEE Geosci. Remote Sens. Lett. 2022, 19, 5505905. [Google Scholar] [CrossRef]
  7. Wu, J.; Aeschbacher, J.; Timofte, R. In Defense of Shallow Learned Spectral Reconstruction from RGB Images. In Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), Venice, Italy, 22–29 October 2017; pp. 471–479. [Google Scholar] [CrossRef]
  8. Han, X.; Leng, W.; Zhang, H.; Wang, W.; Xu, Q.; Sun, W. Spectral Library-Based Spectral Super-Resolution Under Incomplete Spectral Coverage Conditions. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5516312. [Google Scholar] [CrossRef]
  9. Zhang, Y.; Wei, D.; Qin, C.; Wang, H.; Pfister, H.; Fu, Y. Context Reasoning Attention Network for Image Super-Resolution. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 4258–4267. [Google Scholar] [CrossRef]
  10. Xu, H.; Chen, W.; Tan, C.; Ning, H.; Sun, H.; Xie, W. Orientational Clustering Learning for Open-Set Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2024, 21, 5508605. [Google Scholar] [CrossRef]
  11. Wang, C.; Jiang, J.; Zhong, Z.; Liu, X. Spatial-Frequency Mutual Learning for Face Super-Resolution. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 22356–22366. [Google Scholar] [CrossRef]
  12. Chen, X.; Wang, X.; Zhou, J.; Qiao, Y.; Dong, C. Activating More Pixels in Image Super-Resolution Transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 22367–22377. [Google Scholar]
  13. Wu, C.; Li, J.; Song, R.; Li, Y.; Du, Q. HPRN: Holistic Prior-Embedded Relation Network for Spectral Super-Resolution. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 11409–11423. [Google Scholar] [CrossRef]
  14. Xu, H.; Yang, J.; Lin, T.; Liu, J.; Liu, F.; Xiao, L. Hyperspectral Reconstruction From RGB Images via Physically Guided Graph Deep Prior Learning. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5533614. [Google Scholar] [CrossRef]
  15. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. arXiv 2017, arXiv:1706.03762. [Google Scholar] [CrossRef]
  16. Yao, H.; Chen, R.; Chen, W.; Sun, H.; Xie, W.; Lu, X. Pseudolabel-Based Unreliable Sample Learning for Semi-Supervised Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5527116. [Google Scholar] [CrossRef]
  17. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 9992–10002. [Google Scholar] [CrossRef]
  18. Guo, H.; Li, J.; Dai, T.; Ouyang, Z.; Ren, X.; Xia, S.T. MambaIR: A Simple Baseline for Image Restoration with State-Space Model. arXiv 2024, arXiv:2402.15648. [Google Scholar] [CrossRef]
  19. Gu, A.; Dao, T. Mamba: Linear-Time Sequence Modeling with Selective State Spaces. arXiv 2024, arXiv:2312.00752. [Google Scholar] [CrossRef]
  20. Zhu, L.; Liao, B.; Zhang, Q.; Wang, X.; Liu, W.; Wang, X. Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model. arXiv 2024, arXiv:2401.09417. [Google Scholar] [CrossRef]
  21. Liu, Y.; Tian, Y.; Zhao, Y.; Yu, H.; Xie, L.; Wang, Y.; Ye, Q.; Jiao, J.; Liu, Y. VMamba: Visual State Space Model. arXiv 2024, arXiv:2401.10166. [Google Scholar] [CrossRef]
  22. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2021, arXiv:2010.11929. [Google Scholar] [CrossRef]
  23. Xu, X.; Chen, C.; Liang, Y.; Huang, B.; Bai, G.; Zhao, L.; Shu, K. SST: Multi-Scale Hybrid Mamba-Transformer Experts for Long-Short Range Time Series Forecasting. arXiv 2024, arXiv:2404.14757. [Google Scholar] [CrossRef]
  24. Ma, C.; Wang, Z. Semi-Mamba-UNet: Pixel-Level Contrastive and Pixel-Level Cross-Supervised Visual Mamba-based UNet for Semi-Supervised Medical Image Segmentation. arXiv 2024, arXiv:2402.07245. [Google Scholar] [CrossRef]
  25. Shi, Y.; Xia, B.; Jin, X.; Wang, X.; Zhao, T.; Xia, X.; Xiao, X.; Yang, W. VmambaIR: Visual State Space Model for Image Restoration. arXiv 2024, arXiv:2403.11423. [Google Scholar] [CrossRef]
  26. Hang, R.; Liu, Q.; Li, Z. Spectral Super-Resolution Network Guided by Intrinsic Properties of Hyperspectral Imagery. IEEE Trans. Image Process. 2021, 30, 7256–7265. [Google Scholar] [CrossRef] [PubMed]
  27. Arad, B.; Ben-Shahar, O. Sparse Recovery of Hyperspectral Signal from Natural RGB Images. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016. [Google Scholar]
  28. Yi, C.; Zhao, Y.Q.; Chan, J.C.W. Spectral Super-Resolution for Multispectral Image Based on Spectral Improvement Strategy and Spatial Preservation Strategy. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9010–9024. [Google Scholar] [CrossRef]
  29. Geng, Y.; Mei, S.; Tian, J.; Zhang, Y.; Du, Q. Spatial Constrained Hyperspectral Reconstruction from RGB Inputs Using Dictionary Representation. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 3169–3172. [Google Scholar] [CrossRef]
  30. Wan, W.; Zhang, B.; Vella, M.; Mota, J.F.; Chen, W. Robust RGB-Guided Super-Resolution of Hyperspectral Images via TV3 Minimization. IEEE Signal Process. Lett. 2022, 29, 957–961. [Google Scholar] [CrossRef]
  31. Duan, P.; Hu, S.; Kang, X.; Li, S. Shadow Removal of Hyperspectral Remote Sensing Images With Multiexposure Fusion. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5537211. [Google Scholar] [CrossRef]
  32. Galliani, S.; Lanaras, C.; Marmanis, D.; Baltsavias, E.; Schindler, K. Learned Spectral Super-Resolution. arXiv 2017, arXiv:1703.09470. [Google Scholar] [CrossRef]
  33. Xiong, Z.; Shi, Z.; Li, H.; Wang, L.; Liu, D.; Wu, F. HSCNN: CNN-Based Hyperspectral Image Recovery from Spectrally Undersampled Projections. In Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), Venice, Italy, 22–29 October 2017; pp. 518–525. [Google Scholar] [CrossRef]
  34. Kim, J.; Lee, J.K.; Lee, K.M. Accurate Image Super-Resolution Using Very Deep Convolutional Networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654. [Google Scholar] [CrossRef]
  35. Shi, Z.; Chen, C.; Xiong, Z.; Liu, D.; Wu, F. HSCNN+: Advanced CNN-Based Hyperspectral Recovery from RGB Images. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 1052–10528. [Google Scholar] [CrossRef]
  36. Li, J.; Wu, C.; Song, R.; Li, Y.; Liu, F. Adaptive Weighted Attention Network with Camera Spectral Sensitivity Prior for Spectral Reconstruction from RGB Images. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 1894–1903. [Google Scholar] [CrossRef]
  37. Zhao, Y.; Po, L.M.; Yan, Q.; Liu, W.; Lin, T. Hierarchical Regression Network for Spectral Reconstruction from RGB Images. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 1695–1704. [Google Scholar] [CrossRef]
  38. Li, J.; Wu, C.; Song, R.; Li, Y.; Xie, W.; He, L.; Gao, X. Deep Hybrid 2-D–3-D CNN Based on Dual Second-Order Attention With Camera Spectral Sensitivity Prior for Spectral Super-Resolution. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 623–634. [Google Scholar] [CrossRef]
  39. Duan, P.; Shan, T.; Kang, X.; Li, S. Spectral Super-Resolution in Frequency Domain. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 12338–12348. [Google Scholar] [CrossRef]
  40. Sun, W.; Wang, Y.; Liu, W.; Shao, S.; Yang, S.; Yang, G.; Ren, K.; Chen, B. STANet: A Hybrid Spectral and Texture Attention Pyramid Network for Spectral Super-Resolution of Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5525915. [Google Scholar] [CrossRef]
  41. Chen, H.; Wang, Y.; Guo, T.; Xu, C.; Deng, Y.; Liu, Z.; Ma, S.; Xu, C.; Xu, C.; Gao, W. Pre-Trained Image Processing Transformer. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 12294–12305. [Google Scholar]
  42. Cai, Y.; Lin, J.; Lin, Z.; Wang, H.; Zhang, Y.; Pfister, H.; Timofte, R.; Gool, L.V. MST++: Multi-stage Spectral-wise Transformer for Efficient Spectral Reconstruction. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), New Orleans, LA, USA, 19–20 June 2022; pp. 744–754. [Google Scholar]
  43. Gu, A.; Goel, K.; Ré, C. Efficiently Modeling Long Sequences with Structured State Spaces. arXiv 2021, arXiv:2111.00396. [Google Scholar]
  44. Gu, A.; Johnson, I.; Goel, K.; Saab, K.K.; Dao, T.; Rudra, A.; Ré, C. Combining Recurrent, Convolutional, and Continuous-time Models with Linear State-Space Layers. In Proceedings of the Neural Information Processing Systems, Online, 6–14 December 2021. [Google Scholar]
  45. Basar, T. A New Approach to Linear Filtering and Prediction Problems. In Control Theory: Twenty-Five Seminal Papers (1932–1981); Wiley: Hoboken, NJ, USA, 2001; pp. 167–179. [Google Scholar] [CrossRef]
  46. Dao, T.; Fu, D.Y.; Saab, K.K.; Thomas, A.W.; Rudra, A.; Ré, C. Hungry Hungry Hippos: Towards Language Modeling with State Space Models. arXiv 2022, arXiv:2212.14052. [Google Scholar]
  47. Mehta, H.; Gupta, A.; Cutkosky, A.; Neyshabur, B. Long Range Language Modeling via Gated State Spaces. arXiv 2022, arXiv:2206.13947. [Google Scholar] [CrossRef]
  48. Luan, X.; Fan, H.; Wang, Q.; Yang, N.; Liu, S.; Li, X.; Tang, Y. FMambaIR: A Hybrid State-Space Model and Frequency Domain for Image Restoration. IEEE Trans. Geosci. Remote Sens. 2025, 63, 4201614. [Google Scholar] [CrossRef]
  49. Li, Y.; Luo, Y.; Zhang, L.; Wang, Z.; Du, B. MambaHSI: Spatial–Spectral Mamba for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5524216. [Google Scholar] [CrossRef]
  50. Ahmad, M.; Usama, M.; Mazzara, M.; Distefano, S. WaveMamba: Spatial-Spectral Wavelet Mamba for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2025, 22, 5500505. [Google Scholar] [CrossRef]
  51. Li, W.; Yuan, F.; Zhang, H.; Lv, Z.; Wu, B. Hyperspectral Object Detection Based on Spatial–Spectral Fusion and Visual Mamba. Remote Sens. 2024, 16, 4482. [Google Scholar] [CrossRef]
  52. Wang, X.; Huang, Z.; Zhang, S.; Zhu, J.; Feng, L. GMSR: Gradient-Guided Mamba for Spectral Reconstruction from RGB Images. arXiv 2024, arXiv:2405.07777. [Google Scholar]
  53. Lin, M.; Mo, Z.; Zhang, H.; Fu, X.; Xu, M. Spectral Reconstruction via Dual Cross-Scanning and Cross-Attention Mechanisms. In Proceedings of the 2024 14th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS), Helsinki, Finland, 9–11 December 2024; pp. 1–5. [Google Scholar] [CrossRef]
  54. Arad, B.; Timofte, R.; Yahel, R.; Morag, N.; Bernat, A.; Cai, Y.; Lin, J.; Lin, Z.; Wang, H.; Zhang, Y.; et al. NTIRE 2022 Spectral Recovery Challenge and Data Set. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), New Orleans, LA, USA, 19–20 June 2022; pp. 862–880. [Google Scholar] [CrossRef]
  55. Yasuma, F.; Mitsunaga, T.; Iso, D.; Nayar, S.K. Generalized Assorted Pixel Camera: Postcapture Control of Resolution, Dynamic Range, and Spectrum. IEEE Trans. Image Process. 2010, 19, 2241–2253. [Google Scholar] [CrossRef] [PubMed]
  56. Xu, Y.; Du, B.; Zhang, L.; Cerra, D.; Pato, M.; Carmona, E.; Prasad, S.; Yokoya, N.; Hänsch, R.; Le Saux, B. Advanced Multi-Sensor Optical Remote Sensing for Urban Land Use and Land Cover Classification: Outcome of the 2018 IEEE GRSS Data Fusion Contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1709–1724. [Google Scholar] [CrossRef]
  57. Chen, B.; Liu, L.; Liu, C.; Zou, Z.; Shi, Z. Spectral-Cascaded Diffusion Model for Remote Sensing Image Spectral Super-Resolution. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5528414. [Google Scholar] [CrossRef]
  58. Gewali, U.B.; Monteiro, S.T.; Saber, E.S. Spectral Super-Resolution with Optimized Bands. Remote Sens. 2019, 11, 1648. [Google Scholar] [CrossRef]
  59. Zhu, Z.; Liu, H.; Hou, J.; Jia, S.; Zhang, Q. Deep Amended Gradient Descent for Efficient Spectral Reconstruction From Single RGB Images. IEEE Trans. Comput. Imaging 2021, 7, 1176–1188. [Google Scholar] [CrossRef]
  60. Chen, W.; Zheng, X.; Lu, X. Semisupervised Spectral Degradation Constrained Network for Spectral Super-Resolution. IEEE Geosci. Remote Sens. Lett. 2022, 19, 5506205. [Google Scholar] [CrossRef]
  61. Li, B.; Wang, X.; Xu, H. SSRMamba: Efficient Visual State Space Model for Spectral Super-Resolution. In Proceedings of the ICASSP 2025—2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Hyderabad, India, 6–11 April 2025; pp. 1–5. [Google Scholar] [CrossRef]
Figure 1. The overall network architecture of the proposed Multi-scale Spectral–Spatial Sequence Learning Mamba (MSS-Mamba).
Figure 2. Continuous spectral–spatial scan: (a) the band–row–column scan strategy; (b) the band–column–row scan strategy.
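The two scan strategies in Figure 2 differ only in whether rows or columns vary fastest when each band is flattened into a 1-D sequence. The minimal sketch below illustrates the idea on a toy feature cube; the tensor layout (bands, height, width) and the function names are assumptions made for illustration, not the authors' implementation.

```python
import torch

def band_row_column_scan(x: torch.Tensor) -> torch.Tensor:
    """Flatten a (bands, height, width) cube band by band, reading each band row-wise."""
    # Columns vary fastest, then rows, then bands.
    return x.reshape(-1)

def band_column_row_scan(x: torch.Tensor) -> torch.Tensor:
    """Flatten a (bands, height, width) cube band by band, reading each band column-wise."""
    # Swap the spatial axes so rows vary fastest within each column.
    return x.permute(0, 2, 1).reshape(-1)

cube = torch.arange(2 * 3 * 4).reshape(2, 3, 4)  # toy cube: 2 bands, 3 x 4 spatial grid
print(band_row_column_scan(cube)[:4])   # tensor([0, 1, 2, 3])  -> first row of band 0
print(band_column_row_scan(cube)[:3])   # tensor([0, 4, 8])     -> first column of band 0
```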
Figure 3. Sliding window slicing in Multi-Scale Patch.
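Figure 3 depicts slicing feature maps into overlapping windows before tokenization. A minimal sketch of such sliding-window slicing with `torch.nn.functional.unfold` follows; the patch sizes, strides, and channel count are illustrative assumptions rather than the configuration used in the MIF module.

```python
import torch
import torch.nn.functional as F

def sliding_window_tokens(x: torch.Tensor, patch: int, stride: int) -> torch.Tensor:
    """Slice a feature map (N, C, H, W) into windows and flatten each window into a token.

    Returns a sequence of shape (N, num_windows, C * patch * patch).
    """
    tokens = F.unfold(x, kernel_size=patch, stride=stride)  # (N, C*patch*patch, num_windows)
    return tokens.transpose(1, 2)

feat = torch.randn(1, 32, 64, 64)
fine = sliding_window_tokens(feat, patch=4, stride=4)    # non-overlapping 4x4 windows
coarse = sliding_window_tokens(feat, patch=8, stride=4)  # overlapping 8x8 windows
print(fine.shape, coarse.shape)  # torch.Size([1, 256, 512]) torch.Size([1, 225, 2048])
```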
Figure 4. True-color composite images (bands 27, 17, and 10) of a sample from the ARAD1K dataset, comparing the results of different methods.
Figure 5. Comparative visualization across three spectral bands for architectural and vegetation samples from the ARAD1K dataset. The per-pixel reconstruction error is the MRAE between the reconstructed and ground-truth spectral vectors; the MRAE heat maps are display-scaled for visualization clarity.
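For reference, the per-pixel error maps of Figures 5 and 7 can be obtained by averaging the relative absolute error over the spectral dimension. The sketch below assumes (H, W, bands) NumPy arrays and a small epsilon to guard against division by zero; it is an illustration, not the authors' exact evaluation code.

```python
import numpy as np

def mrae_heatmap(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Per-pixel MRAE between reconstructed and ground-truth cubes of shape (H, W, bands)."""
    rel_err = np.abs(pred - gt) / (np.abs(gt) + eps)  # relative absolute error per band
    return rel_err.mean(axis=-1)                      # average over the spectral dimension

# Display scaling, as applied to the heat maps in the figures:
# heat = mrae_heatmap(pred, gt)
# heat_display = (heat - heat.min()) / (heat.max() - heat.min() + 1e-12)
```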
Figure 6. True-color composite images (bands 27, 17, and 10) of a sample from the CAVE dataset, comparing the results of different methods.
Figure 7. Visual reconstruction results for two images from the CAVE dataset, each shown at band 18. The per-pixel reconstruction error is the MRAE between the reconstructed and ground-truth spectral vectors; the MRAE heat maps are display-scaled for visualization clarity.
Figure 8. True-color composite images (bands 23, 12, and 5) of a sample from the IEEE grss_dfc_2018 dataset, comparing the results of different methods.
Figure 9. Absolute differences between the reconstructed images and the ground truth at bands 1, 12, 24, 36, and 48 on the IEEE grss_dfc_2018 dataset. The color bar on the right shows the absolute difference normalized by the maximum possible value of the reconstructed HSI.
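A short sketch of the band-wise error maps in Figure 9, interpreting the "maximum possible value" in the caption as the peak intensity of the reconstructed cube; this is an assumption for illustration only.

```python
import numpy as np

def normalized_band_difference(pred: np.ndarray, gt: np.ndarray, band: int) -> np.ndarray:
    """Absolute difference at a single band, scaled by the maximum of the reconstructed HSI."""
    return np.abs(pred[..., band] - gt[..., band]) / pred.max()
```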
Figure 10. Spectral curves of six objects generated by different methods. (a) Red flowers. (b) Building. (c) Trees. (d) Pottery jar. (e) Graffiti. (f) Soil wall.
Table 1. Performance of different methods on the ARAD1K dataset.
| Model | MRAE (↓) | RMSE (↓) | PSNR (↑) | SAM (↓) | SSIM (↑) |
|---|---|---|---|---|---|
| DenseUnet | 0.4567 | 0.0634 | 34.9083 | 6.5300 | 0.9652 |
| HSCNN+ | 0.4371 | 0.0638 | 34.9083 | 6.6335 | 0.9695 |
| sRCNN | 0.4592 | 0.0663 | 34.3201 | 7.6121 | 0.9679 |
| HRNet | 0.4134 | 0.0608 | 35.1461 | 6.6606 | 0.9756 |
| GDNet | 0.4250 | 0.0590 | 34.9685 | 8.5381 | 0.9694 |
| SSDCN | 0.3595 | 0.0558 | 35.7012 | 7.9082 | 0.9509 |
| GMSR | 0.3821 | 0.0566 | 35.7286 | 7.8836 | 0.9739 |
| SSRMamba | 0.3724 | 0.0632 | 34.6764 | 6.8082 | 0.9634 |
| Ours | 0.3478 | 0.0569 | 36.3473 | 6.7223 | 0.9790 |
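For completeness, the five metrics reported in Tables 1–3 admit the standard definitions sketched below, assuming reconstructed and ground-truth cubes of shape (H, W, bands) normalized to [0, 1]; the exact normalization and data range used by the authors' evaluation scripts may differ.

```python
import numpy as np
from skimage.metrics import structural_similarity

def mrae(pred, gt, eps=1e-6):
    """Mean relative absolute error (lower is better)."""
    return float(np.mean(np.abs(pred - gt) / (np.abs(gt) + eps)))

def rmse(pred, gt):
    """Root mean squared error (lower is better)."""
    return float(np.sqrt(np.mean((pred - gt) ** 2)))

def psnr(pred, gt, data_range=1.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    return float(10.0 * np.log10(data_range ** 2 / np.mean((pred - gt) ** 2)))

def sam(pred, gt, eps=1e-8):
    """Spectral angle mapper in degrees, averaged over all pixels (lower is better)."""
    dot = np.sum(pred * gt, axis=-1)
    norms = np.linalg.norm(pred, axis=-1) * np.linalg.norm(gt, axis=-1) + eps
    return float(np.degrees(np.arccos(np.clip(dot / norms, -1.0, 1.0))).mean())

def mean_ssim(pred, gt, data_range=1.0):
    """Band-wise structural similarity averaged over the spectral dimension (higher is better)."""
    return float(np.mean([structural_similarity(pred[..., b], gt[..., b], data_range=data_range)
                          for b in range(gt.shape[-1])]))
```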
Table 2. Performance of different methods on the CAVE dataset.
| Model | MRAE (↓) | RMSE (↓) | PSNR (↑) | SAM (↓) | SSIM (↑) |
|---|---|---|---|---|---|
| DenseUnet | 0.2217 | 0.0502 | 42.2622 | 9.7953 | 0.9592 |
| HSCNN+ | 0.2150 | 0.0539 | 42.2452 | 10.1816 | 0.9565 |
| sRCNN | 0.2232 | 0.0521 | 42.1584 | 10.7899 | 0.9588 |
| HRNet | 0.2043 | 0.0515 | 42.8256 | 9.7335 | 0.9587 |
| GDNet | 0.2326 | 0.0517 | 42.1163 | 10.0515 | 0.9578 |
| SSDCN | 0.2202 | 0.0510 | 42.4011 | 9.9417 | 0.9593 |
| GMSR | 0.2218 | 0.0495 | 42.2899 | 9.7561 | 0.9586 |
| SSRMamba | 0.2191 | 0.0494 | 42.1625 | 10.4200 | 0.9545 |
| Ours | 0.2042 | 0.0492 | 42.9930 | 9.6751 | 0.9616 |
Table 3. Performance of different methods on the IEEE grss_dfc_2018 (Houston) dataset.
| Model | MRAE (↓) | RMSE (↓) | PSNR (↑) | SAM (↓) | SSIM (↑) |
|---|---|---|---|---|---|
| DenseUnet | 0.1990 | 0.0564 | 36.6269 | 7.8734 | 0.9884 |
| HSCNN+ | 0.2018 | 0.0591 | 36.3841 | 8.0927 | 0.9874 |
| sRCNN | 0.2478 | 0.0714 | 34.8570 | 9.6955 | 0.9822 |
| HRNet | 0.2087 | 0.0593 | 36.7932 | 7.9343 | 0.9899 |
| GDNet | 0.2133 | 0.0564 | 36.1574 | 8.2554 | 0.9892 |
| SSDCN | 0.2276 | 0.0589 | 36.0359 | 8.4328 | 0.9890 |
| GMSR | 0.2269 | 0.0578 | 36.0322 | 8.2670 | 0.9902 |
| SSRMamba | 0.2064 | 0.0607 | 35.5673 | 8.6418 | 0.9882 |
| Ours | 0.1831 | 0.0534 | 37.0063 | 7.7882 | 0.9924 |
Table 4. Ablation experiments on the effect of the introduced components on the ARAD1K dataset.

| Global | Local | Router | MRAE (↓) | RMSE (↓) | PSNR (↑) | SAM (↓) | SSIM (↑) |
|---|---|---|---|---|---|---|---|
|  |  |  | 0.3696 | 0.0648 | 35.2076 | 7.8570 | 0.9649 |
|  |  |  | 0.3934 | 0.0612 | 34.5004 | 6.6077 | 0.9796 |
|  |  |  | 0.3673 | 0.0591 | 35.5395 | 6.7615 | 0.9729 |
|  |  |  | 0.3478 | 0.0569 | 36.3473 | 6.7223 | 0.9790 |
Table 5. Ablation experiments for different scan modes on the ARAD1K dataset.

| Scan Mode | MRAE (↓) | RMSE (↓) | PSNR (↑) | SAM (↓) | SSIM (↑) |
|---|---|---|---|---|---|
| Non-CS3 | 0.3696 | 0.0648 | 35.2076 | 7.8570 | 0.9649 |
| Base+BRC-S | 0.3415 | 0.0604 | 35.8563 | 6.4660 | 0.9734 |
| Base+BCR-S | 0.3380 | 0.0576 | 35.5562 | 6.0146 | 0.9664 |
| Base+CS3 | 0.3478 | 0.0569 | 36.3473 | 6.7223 | 0.9790 |
Table 6. Ablation experiments on the number of Mamba layers on the ARAD1K dataset.

| Mamba Layers | Params (M) | MRAE (↓) | RMSE (↓) | PSNR (↑) | SAM (↓) | SSIM (↑) |
|---|---|---|---|---|---|---|
| 1 | 1.37 | 0.3554 | 0.0571 | 34.9126 | 7.4812 | 0.9638 |
| 2 | 2.25 | 0.3462 | 0.0609 | 35.3576 | 7.0128 | 0.9679 |
| 3 | 3.12 | 0.3419 | 0.0556 | 35.5242 | 6.8126 | 0.9703 |
| 4 | 4.01 | 0.3965 | 0.0585 | 35.6379 | 6.7478 | 0.9836 |
| 5 | 4.88 | 0.3478 | 0.0569 | 36.3473 | 6.7223 | 0.9790 |
| 6 | 5.75 | 0.3420 | 0.0522 | 36.6795 | 6.6259 | 0.9763 |
Table 7. Ablation experiments for different fusion methods on the ARAD1K dataset.

| Fusion Method | MRAE (↓) | RMSE (↓) | PSNR (↑) | SAM (↓) | SSIM (↑) |
|---|---|---|---|---|---|
| Add | 0.3517 | 0.0607 | 35.6741 | 7.2536 | 0.9671 |
| Gate | 0.3773 | 0.0595 | 35.6517 | 6.7073 | 0.9672 |
| Gate+Att | 0.3478 | 0.0569 | 36.3473 | 6.7223 | 0.9790 |
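The three variants compared in Table 7 correspond to plain addition, a learned gate, and a gate driven by an attention-style router. The sketch below shows one plausible realization of these variants; the module structure, layer sizes, and the name GatedFusion are assumptions for illustration and are not taken from the authors' code.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Illustrative fusion of a global branch g and a local branch loc, both (N, C, H, W)."""

    def __init__(self, channels: int, mode: str = "gate_att"):
        super().__init__()
        self.mode = mode
        if mode == "gate":
            # Spatially varying per-channel gate predicted from the concatenated branches.
            self.gate = nn.Sequential(nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid())
        elif mode == "gate_att":
            # Channel-attention router: global statistics -> per-channel weighting coefficients.
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.router = nn.Sequential(
                nn.Linear(2 * channels, channels // 2), nn.ReLU(inplace=True),
                nn.Linear(channels // 2, channels), nn.Sigmoid())

    def forward(self, g: torch.Tensor, loc: torch.Tensor) -> torch.Tensor:
        if self.mode == "add":
            return g + loc
        if self.mode == "gate":
            w = self.gate(torch.cat([g, loc], dim=1))            # (N, C, H, W) weights in [0, 1]
            return w * g + (1.0 - w) * loc
        stats = self.pool(torch.cat([g, loc], dim=1)).flatten(1)  # (N, 2C) pooled statistics
        w = self.router(stats).unsqueeze(-1).unsqueeze(-1)        # (N, C, 1, 1) routing weights
        return w * g + (1.0 - w) * loc

# fusion = GatedFusion(channels=64, mode="gate_att")
# fused = fusion(global_feat, local_feat)
```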