Article

Edge-Distilled and Local–Global Feature Selection Network for Hyperspectral Image Super-Resolution

1. National Supercomputing Center in Zhengzhou, Zhengzhou University, Zhengzhou 450001, China
2. The School of Computer and Artificial Intelligence, Zhengzhou University, Zhengzhou 450001, China
* Author to whom correspondence should be addressed.
Sensors 2026, 26(3), 1055; https://doi.org/10.3390/s26031055
Submission received: 26 December 2025 / Revised: 21 January 2026 / Accepted: 26 January 2026 / Published: 6 February 2026
(This article belongs to the Special Issue Intelligent Sensing and Artificial Intelligence for Image Processing)

Abstract

In recent years, methods based on convolutional neural networks have achieved significant progress in hyperspectral image super-resolution. However, existing methods still face two key challenges: (1) they fail to fully extract edge detail information from hyperspectral images; (2) they struggle to simultaneously capture local and global features. To address these issues, we propose an Edge-Distilled and Local–Global Feature Selection network (EDLGFS) for hyperspectral image super-resolution. This network aims to effectively leverage edge details and local–global features, thereby enhancing super-resolution reconstruction quality. First, we design an edge-guided super-resolution network based on knowledge distillation, which transfers edge knowledge to improve reconstruction. Second, we propose a Local–Global Feature Selection mechanism (LGFS), which integrates convolutions of different sizes with the self-attention mechanism. This design models spatial correlations across features with different receptive fields, achieving efficient feature selection to more effectively capture local and global features. Finally, we propose a dynamic loss mechanism to more effectively balance the contribution of each loss term. Extensive experimental results on three public datasets demonstrate that the proposed EDLGFS achieves superior super-resolution reconstruction quality.

1. Introduction

Hyperspectral images (HSIs) are typically acquired by capturing tens to hundreds of continuous spectral bands within near-infrared, mid-infrared, visible light, and other bands of the electromagnetic spectrum [1]. Unlike traditional RGB or multispectral images, HSIs possess extremely high spectral resolution, enabling them to capture detailed spectral characteristics of target objects at every spatial location, which allows for precise material identification. Therefore, HSIs find extensive applications in various fields, such as target detection [2,3], mineral exploration [4], and medical diagnostics [5]. Furthermore, core hyperspectral analysis tasks including spectral unmixing and fine-grained land cover classification also heavily rely on high-quality HSIs. Recent advanced methods, such as spatial-channel multiscale Transformer networks for unmixing [6] and multi-scale memory networks for detection [7], further demonstrate the growing demand for precise spatial–spectral representations. However, due to fundamental imaging system constraints, achieving high spectral resolution often compromises spatial resolution. This limits the performance of the aforementioned applications [8]. Therefore, super-resolution (SR) techniques are required to enhance the spatial resolution of HSIs without hardware upgrades, thereby providing a superior data foundation for advanced computational models.
Recently, hyperspectral image super-resolution (HSISR) has become a vibrant research topic. Wang et al. [9] provided a systematic review of HSISR. This review categorizes HSISR techniques into fusion-based techniques [10,11] and single-image SR techniques [12,13], and highlights key challenges including spectral distortion and edge preservation. Fusion-based techniques typically integrate low-resolution (LR) HSIs with complementary high-resolution (HR) images, such as RGB or multispectral images. This produces results with enhanced spatial details while preserving spectral fidelity. However, these methods depend on HR auxiliary images and require complete registration with the HSIs. These requirements pose significant challenges in practical applications. Single-image SR does not depend on auxiliary images and only utilizes the LR HSIs to improve spatial resolution, making it more flexible in practical applications.
In the past, traditional single HSISR methods primarily relied on manually defined prior constraints and assumptions, such as sparse regularization [14] and three-dimensional total variation [15]. These prior constraints often fail to capture the complex features of HSIs, thereby limiting the models’ generalization ability. Recent surveys, such as Wang et al. [9], have also noted that prior constraints are often too simplistic to model complex real-world scenes. This has motivated a shift toward data-driven deep learning approaches.
Currently, convolutional neural networks (CNNs) have been extensively applied to natural image SR [16,17]. The core principle lies in extracting complex structural features in images through multiple convolutional layers and feature learning mechanisms [18,19,20]. Due to the outstanding performance of CNNs in image SR, researchers have extended their application to HSISR. The existing deep learning-based HSISR networks are mainly divided into 2D CNNs [21,22,23] and 3D CNNs [24,25,26,27]. Two-dimensional CNNs conduct independent convolution operations on each spectral band, which ignores spectral continuity. Three-dimensional CNNs can explore both spatial context and spectral correlations between adjacent bands. However, 3D CNNs fail to capture long-range spatial correlations and spectral similarities. Additionally, 3D convolutions introduce significant computational complexity.
To overcome these limitations, the Transformer architecture has been introduced into HSISR in recent studies [28,29]. As a deep learning model based on self-attention mechanisms, the Transformer excels at capturing global information and long-range dependencies. In contrast, CNNs efficiently extract fine local features due to their local receptive fields. Their complementary strengths in feature extraction have been applied to HSISR. For example, SwinIR [30] and SST [31] incorporate convolutional layers after multiple Transformer modules, combining the local inductive bias of CNNs with the global attention capability of the Transformer. Based on these complementary features, the DSSTSR network [32] designs the dual self-attention Swin Transformer, which utilizes spatial–spectral self-attention to minimize spectral distortion while extracting spatial features.
However, these methods typically suffer from two main limitations. First, they fail to fully utilize fine edge details. Second, they cannot model both local and global features simultaneously. These issues not only degrade visual quality but also impair the performance of downstream vision tasks that rely on precise spatial and spectral information [33], such as target detection, land cover classification, and fine-grained material identification. Therefore, it is crucial to develop an SR method that effectively preserves edges and captures both local and global features.
Inspired by this, we propose an Edge-Distilled and Local–Global Feature Selection network (EDLGFS) for HSISR. This network adopts a parallel dual-path architecture. The main branch captures the complex local–global features and the auxiliary edge branch focuses on extracting and refining edge details. This separation treats edge information as explicit prior knowledge. It prevents edge details from being suppressed by other features, which is a common issue in single-stream designs. A core component of the network is the intermediate supervision strategy. We design a dynamic loss mechanism between the two branches. This guides the main branch to learn the edge details from the auxiliary branch instead of directly fusing features. During the intermediate feature extraction, we propose a Local–Global Feature Selection (LGFS) module. It combines convolutions of different sizes with self-attention to model spatial correlations among features of different receptive fields. This module achieves efficient feature selection, thereby capturing local–global features more effectively. Extensive experiments on three public datasets demonstrate that EDLGFS achieves superior SR reconstruction quality.
The core innovation of this paper lies in the integrated design of the overall architecture. It incorporates edge knowledge distillation, local–global feature selection, and a dynamic loss mechanism. In this study, our main contributions are summarized as follows:
(1)
We propose a super-resolution network using an edge distillation architecture. The auxiliary edge branch transfers knowledge only during training and is removed for inference. This guides the main branch to learn edge details without increasing computational complexity.
(2)
We design a Local–Global Feature Selection (LGFS) module. This module combines convolutions of different sizes with the self-attention. This fully captures local–global features through efficient feature selection.
(3)
We introduce a dynamic edge loss mechanism. By assigning learnable weights to different loss terms, it adaptively balances edge detail preservation and overall reconstruction. This method enhances training stability and improves the model’s reconstruction performance.
The structure of the remaining part of this article is as follows: Section 2 reviews existing SR methods for HSIs. Section 3 details the proposed EDLGFS method. Section 4 presents the datasets, experimental results and ablation studies. Finally, Section 5 concludes the paper.

2. Related Work

2.1. CNN-Based Single HSISR

Deep learning techniques have recently driven significant progress in single HSISR. Consequently, numerous convolutional neural networks (CNNs) have been developed for this task [12,21,23,24,34]. Li et al. [21] proposed an HSISR method combining a spatial constraint (SCT) strategy with a deep spectral difference CNN (SDCNN), which effectively enhances spatial resolution while preserving spectral integrity. Jia et al. [23] proposed a Spectral–Spatial Network (SSN) that divides the reconstruction task into a spatial section, enhanced by a maximum variance principle, and a spectral section optimized via a spectral angle error loss function to preserve spectral signatures. Yuan et al. [35] transferred knowledge from natural images to learn a low-to-high-resolution mapping for HSIs. They also used collaborative non-negative matrix factorization (CNMF) to preserve spectral characteristics. In order to capture the spectral continuity across adjacent bands in HSIs, Mei et al. [24] proposed a 3D full CNN (3D-FCNN) for HSISR. Li et al. [36] proposed a Mixed Convolution Network (MCNet) for HSISR. It combines 2D and 3D convolutions to better capture latent spatial features. Liu et al. [37] proposed a fully 3D U-Net (F3DUN) with skip connections for deep multi-scale feature extraction. Their work demonstrated the efficacy of pure 3D CNN for HSISR. Wang et al. [9] provided a comprehensive review of deep learning-based HSISR methods. They categorized techniques into single image, panchromatic image-assisted, and multispectral image-assisted approaches. Additionally, they summarized common datasets, metrics, and applications. Hu et al. [38] proposed a novel HSISR method named SNLSR, which recasts the SR task into the abundance domain. It utilizes a spatial-preserving decomposition network and spectral non-local attention to restore high-frequency details. Li et al. 
[39] proposed a Test-Time Training framework for HSISR that incorporates a novel self-training strategy and Spectral Mixup augmentation, effectively overcoming data scarcity to significantly enhance reconstruction performance across diverse real-world scenarios. However, these CNN-based methods primarily extract local features. They often fail to effectively model long-range spatial correlations and spectral similarities.

2.2. Transformer-Based Single HSISR

The Transformer possesses robust long-range dependency modeling capabilities and is widely applied in HSISR tasks. Liu et al. [40] proposed an innovative method to address HSISR by fusing a Transformer with 3D CNN. Their Interactformer model uses a dual-branch architecture. It effectively preserves spectral integrity while enhancing spatial details. Chen et al. [41] proposed a Multi-Scale Deformable Transformer (MSDformer). This method combines the local feature extraction strengths of CNNs with the global modeling capabilities of Transformers. It utilizes a Multi-Scale Spectral Attention Module to precisely extract local multi-scale features and employs a Deformation Convolution-based Transformation Module to effectively capture global long-range dependencies. Zhang et al. [42] proposed an efficient Transformer model named ESSAformer, which incorporates a linear complexity attention mechanism based on the spectral correlation coefficient (SCC). This approach not only reduces computational cost but also enhances reconstruction quality. Chen et al. [43] introduced a novel Cross-range Spatial–Spectral Transformer (CST). This method employs cross-attention mechanisms across spatial and spectral dimensions to capture long-range spatial–spectral dependencies. Zhang et al. [44] proposed a spatial–spectral aggregation Transformer that incorporates diffusion priors. It extracts prior features using a self-supervised diffusion model. By integrating an adaptive fusion module, it significantly improves reconstruction quality. However, these methods primarily focus on enhancing overall reconstruction quality. They often fail to fully exploit fine features like image edges.

2.3. Edge-Guided Single Image SR

Researchers have explored various edge-guided strategies to fully exploit edge details in HSIs. For example, Yang et al. [45] proposed a deep edge-guided recurrent residual network named DEGREE, which progressively restores high-frequency details using recurrent residuals and edge information. Zhao et al. [46] proposed G-RDN, which enhances image reconstruction quality by utilizing spatial gradients to highlight edges and textural details. Wang et al. [47] introduced the Edge-Guided Super-Resolution Network (EGSRN). This network employs an Edge Net module to explicitly extract edge features from LR images. It then integrates edge and image features through multi-layer Feature Extraction Modules and an Edge Information Fusion mechanism. However, these methods often fail to effectively capture local–global features, limiting the completeness of feature representation. This paper proposes an Edge-Distilled and Local–Global Feature Selection network (EDLGFS) to address these challenges. This network efficiently extracts fine edge features while simultaneously capturing local features and global contextual information.

3. Materials and Methods

3.1. Overall Network

Figure 1 depicts the overall framework of EDLGFS, which consists of two parallel branches. The main branch learns complex local–global features, while the auxiliary edge branch focuses on extracting and refining edge details. The auxiliary edge branch guides the main branch through knowledge distillation. We denote the input LR HSI as $I_{LR} \in \mathbb{R}^{C \times H \times W}$, the original HR HSI as $I_{HR} \in \mathbb{R}^{C \times sH \times sW}$, and the reconstructed HSI as $I_{SR} \in \mathbb{R}^{C \times sH \times sW}$, where $H$ and $W$ represent height and width, respectively, $s$ denotes the SR scaling factor, and $C$ denotes the number of spectral bands. We first extract edge maps from $I_{LR}$ for each spectral band using the Sobel operator:

$$E_{LR} = h_{sobel}(I_{LR})$$

where $h_{sobel}(\cdot)$ represents the Sobel edge extraction function and $E_{LR} \in \mathbb{R}^{C \times H \times W}$ represents the edge image extracted from $I_{LR}$. Shallow features are then extracted through a 3 × 3 convolutional layer:

$$I_0 = h_c(I_{LR})$$

$$E_0 = e_c(E_{LR})$$

where $h_c(\cdot)$ and $e_c(\cdot)$ denote the shallow feature extraction functions, and $I_0$ and $E_0$ represent the corresponding shallow features. Subsequently, $I_0$ is processed through a series of Local–Global Feature Selection Stages (LGFSSs) to extract deep features. Meanwhile, $E_0$ is processed by a sequence of Edge Net modules to extract deep edge features.
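As an illustration, the band-wise Sobel edge extraction described above can be sketched in PyTorch as a grouped convolution, so that each spectral band is filtered independently. This is a minimal sketch, not the authors' implementation: the function name and the gradient-magnitude formulation (the square root of the summed squared x/y responses) are our assumptions.

```python
import torch
import torch.nn.functional as F

def sobel_edges(hsi: torch.Tensor) -> torch.Tensor:
    """Band-wise Sobel edge magnitude for an HSI of shape (C, H, W).

    Each spectral band is filtered independently by replicating the
    Sobel kernels and using a grouped convolution (groups = C).
    """
    c = hsi.shape[0]
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]])
    ky = kx.t().contiguous()                       # Sobel kernel for the y direction
    kx = kx.view(1, 1, 3, 3).repeat(c, 1, 1, 1)    # one kernel per band
    ky = ky.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    x = hsi.unsqueeze(0)                           # (1, C, H, W)
    gx = F.conv2d(x, kx, padding=1, groups=c)
    gy = F.conv2d(x, ky, padding=1, groups=c)
    # small epsilon keeps sqrt numerically safe at zero-gradient pixels
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12).squeeze(0)
```

In practice the resulting edge map has the same (C, H, W) shape as the input, so it can feed the auxiliary branch directly.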
The LGFSS comprises two parallel branches. The first branch employs a series of Local–Global Feature Selection layers (LGFSLs) followed by a 3 × 3 convolution. The second branch consists of two consecutive 3 × 3 convolutional layers and a spectral attention layer [43]. The features from both branches are adaptively fused via residual connections. The deep feature $I_n$ can be expressed as follows:

$$I_n = h_n(I_{n-1}), \quad n = 1, 2, \ldots, N$$

where $h_n(\cdot)$ represents the function of the $n$-th LGFSS, and $I_n$ represents the corresponding deep features extracted by the $n$-th LGFSS. In parallel, $E_0$ is processed by an equal number of Edge Net modules to extract hierarchical edge features. Each Edge Net module consists of two consecutive 3 × 3 convolutional layers with residual connections. Deep edge feature extraction is expressed as follows:

$$E_n = e_n(E_{n-1}), \quad n = 1, 2, \ldots, N$$

where $e_n(\cdot)$ denotes the $n$-th Edge Net function, and $E_n$ represents the deep edge features extracted by the $n$-th Edge Net. We employ an edge loss function $L_{f_n}(\Theta)$ to connect all levels of $I_n$ and $E_n$ separately, so that $I_n$ can learn edge features from $E_n$. The outputs of the last stage ($I_N$ and $E_N$) are then processed via skip connections and a convolutional layer. The final deep features are represented as follows:

$$I_d = \mathrm{Conv}(I_N + I_0)$$

$$E_d = \mathrm{Conv}(E_N + E_0)$$

where $I_d$ and $E_d$ represent the deep features. Finally, the image reconstruction layer processes the deep features to generate the SR image:

$$I_{SR} = h_{up}(I_d)$$

$$E_{SR} = e_{up}(E_d)$$

where $h_{up}(\cdot)$ and $e_{up}(\cdot)$ denote the upsampling operations via the PixelShuffle method, and $I_{SR}$ and $E_{SR}$ represent the reconstructed HSI and edge map, respectively. Finally, $I_{SR}$ learns image-level edge information from $E_{SR}$ through an edge loss function $L_{ST}(\Theta)$.
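The reconstruction layers $h_{up}$ and $e_{up}$ are based on PixelShuffle. A single upsampling step of this kind is conventionally a convolution that expands the channel dimension by $s^2$, followed by `nn.PixelShuffle` rearranging those channels into an $s$-times larger spatial grid. The sketch below shows this standard head under that assumption; it is not the authors' exact layer configuration.

```python
import torch
import torch.nn as nn

class PixelShuffleUp(nn.Module):
    """One PixelShuffle upsampling step: a 3x3 convolution expands the
    channels by scale**2, then PixelShuffle trades those channels for an
    s-times larger spatial resolution (channel count is preserved)."""

    def __init__(self, channels: int, scale: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels * scale ** 2, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.shuffle(self.conv(x))
```

With the progressive strategy described later in Section 4.2, a ×4 head would stack two such ×2 steps and a ×8 head three of them.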

3.2. Local–Global Feature Selection (LGFS)

To fully capture the local–global features, we introduce the Local–Global Feature Selection layer (LGFSL), inspired by the Metaformer architecture [48]. Additionally, we incorporate the Cross-Scope Spectral Self-Attention module (CSE) [43] within LGFSL to extract cross-range spectral correlations in HSIs. As shown in Figure 2a, LGFSL consists of two LayerNorm layers, a Local–Global Feature Selection (LGFS) module, a CSE module, and a Feed-Forward Network (MLP). These modules are connected through two residual structures.
For the input feature $I_{n\_in} \in \mathbb{R}^{C \times H \times W}$, the whole process of LGFSL is represented as follows:

$$N = \mathrm{LN}(I_{n\_in})$$

$$I_s = \mathrm{CSE}(\mathrm{LGFS}(N)) + I_{n\_in}$$

$$M = \mathrm{LN}(I_s)$$

$$I_{n\_out} = \mathrm{MLP}(M) + I_s$$

where $I_{n\_out} \in \mathbb{R}^{C \times H \times W}$ denotes the output feature, $\mathrm{LN}(\cdot)$ denotes LayerNorm, $\mathrm{LGFS}(\cdot)$ denotes the Local–Global Feature Selection module, $\mathrm{CSE}(\cdot)$ denotes the CSE module, and $\mathrm{MLP}(\cdot)$ denotes the multi-layer perceptron module.
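The LGFSL layer structure above (norm, token mixing, residual; then norm, MLP, residual) can be sketched as follows. This is a minimal illustration, not the authors' code: the LGFS and CSE modules are passed in as callables (any module preserving the (C, H, W) shape works), and the MLP expansion ratio is an assumption not specified in the text.

```python
import torch
import torch.nn as nn

class LGFSL(nn.Module):
    """Metaformer-style block: LN -> LGFS -> CSE (+residual),
    then LN -> MLP (+residual). `lgfs` and `cse` are injected modules."""

    def __init__(self, channels: int, lgfs: nn.Module, cse: nn.Module,
                 mlp_ratio: int = 2):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        self.norm2 = nn.LayerNorm(channels)
        self.lgfs, self.cse = lgfs, cse
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels * mlp_ratio), nn.GELU(),
            nn.Linear(channels * mlp_ratio, channels))

    @staticmethod
    def _cnorm(norm: nn.LayerNorm, x: torch.Tensor) -> torch.Tensor:
        # LayerNorm over the channel dimension of a (B, C, H, W) tensor
        return norm(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = self.cse(self.lgfs(self._cnorm(self.norm1, x))) + x
        m = self._cnorm(self.norm2, s).permute(0, 2, 3, 1)   # channels last for Linear
        return self.mlp(m).permute(0, 3, 1, 2) + s
```

Passing `nn.Identity()` for both submodules exercises just the normalization/MLP skeleton.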
Due to the ability to capture long-range dependencies [49,50,51,52], self-attention mechanisms have been widely applied in many SR methods. However, these methods often fail to effectively capture local–global features. We propose the Local–Global Feature Selection (LGFS) module to address this issue. Its structure is shown in Figure 2b.
Let $N \in \mathbb{R}^{C \times H \times W}$ denote the input feature of LGFS. $N$ is projected into query ($Q \in \mathbb{R}^{C \times H \times W}$), key ($K \in \mathbb{R}^{C \times H \times W}$), and value ($V \in \mathbb{R}^{C \times H \times W}$) through a large-kernel convolution, a small-kernel convolution, and a point-wise convolution, respectively:

$$Q = W_q(N), \quad K = W_k(N), \quad V = W_v(N)$$

where $W_q(\cdot)$, $W_k(\cdot)$, and $W_v(\cdot)$ denote the large-kernel, small-kernel, and point-wise convolutions, respectively. Large kernels have a larger receptive field, enabling them to capture broader contextual information and enhancing global modeling capabilities. Small kernels focus on local detailed features and refined spatial structures. By combining small and large kernels, the network can efficiently capture local–global features. Next, after flattening and transposing the spatial dimensions of $Q$, $K$, and $V$, they are reshaped into $Q \in \mathbb{R}^{HW \times C}$, $K \in \mathbb{R}^{C \times HW}$, and $V \in \mathbb{R}^{C \times HW}$. The attention scores are then obtained by matrix multiplication:

$$attn\_scores = Q \otimes K$$

where $attn\_scores \in \mathbb{R}^{HW \times HW}$ represents the similarity between all spatial positions, and $\otimes$ denotes matrix multiplication. We then apply the softmax function to obtain the attention weights:

$$attn\_weight = h_{Soft}(attn\_scores)$$

where $attn\_weight \in \mathbb{R}^{HW \times HW}$ represents the attention weights. The softmax function is applied along the last dimension so that the weights at each position sum to 1. The feature $V$ is then weighted and aggregated using the attention weights:

$$S = attn\_weight \otimes V^{T}$$

Finally, $S \in \mathbb{R}^{HW \times C}$ is transposed and reshaped into $S \in \mathbb{R}^{C \times H \times W}$, which is added to the original input $N$ as a residual connection:

$$Y = S + N$$

where $Y \in \mathbb{R}^{C \times H \times W}$ represents the module output. The residual connection facilitates gradient flow and stabilizes the training process. The LGFS module enables effective local–global feature selection, thereby enhancing SR reconstruction quality.
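A minimal PyTorch sketch of this LGFS computation follows: Q, K, and V come from 5 × 5, 3 × 3, and 1 × 1 convolutions (the kernel sizes reported in Section 4.2), the HW × HW attention map is built by matrix multiplication and softmax, and the result is added back to the input. The class and variable names are illustrative, and a batch dimension is added for practicality; this is a sketch of the described mechanism, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LGFS(nn.Module):
    """Local-Global Feature Selection sketch: large/small/point-wise
    convolutions produce Q/K/V with different receptive fields, then
    spatial self-attention (HW x HW) mixes them; residual output."""

    def __init__(self, channels: int, large_k: int = 5, small_k: int = 3):
        super().__init__()
        self.w_q = nn.Conv2d(channels, channels, large_k, padding=large_k // 2)
        self.w_k = nn.Conv2d(channels, channels, small_k, padding=small_k // 2)
        self.w_v = nn.Conv2d(channels, channels, 1)

    def forward(self, n: torch.Tensor) -> torch.Tensor:
        b, c, h, w = n.shape
        q = self.w_q(n).flatten(2).transpose(1, 2)   # (B, HW, C)
        k = self.w_k(n).flatten(2)                   # (B, C, HW)
        v = self.w_v(n).flatten(2)                   # (B, C, HW)
        attn = torch.softmax(q @ k, dim=-1)          # (B, HW, HW), rows sum to 1
        s = attn @ v.transpose(1, 2)                 # (B, HW, C)
        s = s.transpose(1, 2).reshape(b, c, h, w)    # back to (B, C, H, W)
        return s + n                                 # residual connection
```

Note that the HW × HW attention map grows quadratically with spatial size, which is why the network operates on modest patch sizes.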

3.3. Dynamic Loss Mechanism

Many works have demonstrated that the $l_1$ and $l_2$ loss functions achieve good results in SR tasks [53]. The $l_2$ loss encourages a plausible pixel-level average, which can produce overly smooth results, whereas the $l_1$ loss balances the error distribution better. We therefore employ the $l_1$ loss both for assessing the quality of the SR reconstruction and for guiding the main network to learn edge features. Additionally, we design learnable dynamic weights to balance the contribution of each loss term more effectively.

Let $I_n \in \mathbb{R}^{C \times H \times W}$ denote the deep features produced by the $n$-th LGFSS block of the main network, and $E_n \in \mathbb{R}^{C \times H \times W}$ the deep edge features produced by the $n$-th Edge Net block of the auxiliary edge network. The edge loss $L_{f_n}(\Theta)$ of the $n$-th deep feature and the total edge loss $L_f(\Theta)$ over all deep features are expressed as:

$$L_{f_n}(\Theta) = \frac{1}{M} \sum_{m=1}^{M} \left\| I_n^m - E_n^m \right\|_1$$

$$L_f(\Theta) = \sum_{n=1}^{N} L_{f_n}(\Theta)$$

where $M$ denotes the batch size, $N$ the number of deep features in the network, $I_n^m$ the $n$-th deep feature of the $m$-th image, $E_n^m$ the $n$-th deep edge feature of the $m$-th image, and $\Theta$ the learnable parameters of our network.

In addition, for the reconstructed HSI $I_{SR}$ and the reconstructed edge image $E_{SR}$, we use the $l_1$ loss to guide the network to learn the reconstructed edge features:

$$L_{ST}(\Theta) = \frac{1}{M} \sum_{m=1}^{M} \left\| I_{SR}^m - E_{SR}^m \right\|_1$$

where $I_{SR}^m$ and $E_{SR}^m$ denote the $m$-th reconstructed HSI and the $m$-th reconstructed edge image, respectively.

In addition to the above edge losses, we design the data-supervision losses $L_{SD}(\Theta)$ and $L_{TD}(\Theta)$: $L_{SD}(\Theta)$ compares the reconstructed HSI with the real HR HSI, and $L_{TD}(\Theta)$ compares the reconstructed edge image with the real edge image:

$$L_{SD}(\Theta) = \frac{1}{M} \sum_{m=1}^{M} \left\| I_{SR}^m - I_{HR}^m \right\|_1$$

$$L_{TD}(\Theta) = \frac{1}{M} \sum_{m=1}^{M} \left\| E_{SR}^m - E_{HR}^m \right\|_1$$

where $I_{HR}^m$ denotes the $m$-th real HR HSI, $E_{HR}^m$ the $m$-th real HR edge image, and $E_{HR}^m$ is extracted from $I_{HR}^m$ using the Sobel operator.

We define the total loss function of the network as:

$$L_{total}(\Theta) = \lambda_1 L_{SD} + \lambda_2 \left( L_f + L_{ST} + L_{TD} \right)$$

where $\lambda_1$ and $\lambda_2$ are the dynamic learnable weights we designed, aiming to balance the contribution of the different loss terms.
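The dynamic loss can be sketched as a module holding learnable weights $\lambda_1$ and $\lambda_2$ (initialized to 0.95 and 0.05, the values reported in Section 4.2) that combines the four $l_1$ terms. This is a sketch under our reading of the equations above; the exact parameterization of the learnable weights in the authors' code may differ.

```python
import torch
import torch.nn as nn

class DynamicLoss(nn.Module):
    """Total loss L = lam1 * L_SD + lam2 * (L_f + L_ST + L_TD), where
    lam1 and lam2 are learnable scalars updated jointly with the network."""

    def __init__(self, lam1: float = 0.95, lam2: float = 0.05):
        super().__init__()
        self.lam1 = nn.Parameter(torch.tensor(lam1))
        self.lam2 = nn.Parameter(torch.tensor(lam2))
        self.l1 = nn.L1Loss()

    def forward(self, feats, edge_feats, i_sr, e_sr, i_hr, e_hr):
        # L_f: l1 between each intermediate main/edge feature pair, summed
        l_f = sum(self.l1(i, e) for i, e in zip(feats, edge_feats))
        l_st = self.l1(i_sr, e_sr)   # SR image vs. reconstructed edge map
        l_sd = self.l1(i_sr, i_hr)   # data term on the HSI
        l_td = self.l1(e_sr, e_hr)   # data term on the edge map
        return self.lam1 * l_sd + self.lam2 * (l_f + l_st + l_td)
```

Because `lam1` and `lam2` are `nn.Parameter`s, the optimizer adapts the balance between data fidelity and edge supervision during training.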

4. Experiments and Results

4.1. Datasets

To evaluate our method, we conduct experiments on three public HSI datasets: Houston, Pavia Center, and Chikusei.
(1)
Houston
The Houston 2018 dataset is part of the 2018 IEEE GRSS Data Fusion Contest. It includes multispectral LiDAR point cloud data, hyperspectral data, and very-high-resolution RGB imagery. The hyperspectral data were acquired by the ITRES CASI 1500 imaging spectrometer over the University of Houston campus in Houston, TX, USA. They cover a spectral range of 380–1050 nm with 48 bands and have a ground sampling distance (GSD) of 1 m. The spatial dimensions are 601 × 2384. After normalization, these data are used in this study.
(2)
Pavia Center
The Pavia Center dataset was obtained by the Reflective Optical System Imaging Spectrometer (ROSIS) sensor. It was collected over central Pavia, Italy, in 2001. The dataset covers a wavelength range of 430 to 860 nm and contains 102 spectral bands. The spatial dimensions are 1096 × 1096 and the ground sampling distance is 1.3 m. After removing the low-quality areas and bands with low signal-to-noise ratio from the image, the final image size is 1096 × 715 × 102. After normalization, the image is used as the dataset for this study.
(3)
Chikusei
Covering agricultural and urban regions in Chikusei, Ibaraki, Japan, the Chikusei dataset was captured using the Headwall Hyperspec-VNIR-C imaging sensor. The ground sampling distance of this dataset is 2.5 m. It consists of 128 spectral bands, and the spectral range covers 363 to 1018 nm. The original spatial dimensions are 2517 × 2335. After removing invalid edge areas, the final size of the Chikusei data is 2304 × 2048 × 128. After normalization, the image is used as the dataset for this study.

4.2. Evaluation Metrics and Training Details

We compare EDLGFS against eight advanced methods: the classical bicubic interpolation, 3D-FCNN [24], MCNet [36], LN-atten-CNN [34], G-RDN [46], MSDformer [41], SNLSR [38], and CST [43]. All hyperparameters are kept consistent with their original references as much as possible. However, some parameters were adjusted due to hardware constraints and dataset variations. For example, we have verified that PSNR saturates before 100 epochs for all methods. Therefore, we have uniformly set the epochs to 100. Specifically, due to the high computational cost of MCNet, its batch size was set to 8 for the Chikusei dataset. The parameter settings for the comparison methods are shown in Table 1. Three widely adopted metrics are employed for evaluation: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM) and Spectral Angle Mapper (SAM). Their ideal values are +∞ for PSNR, 1 for SSIM, and 0 for SAM.
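Of the three metrics, SAM is the most HSI-specific: it measures the mean angle between the reconstructed and reference spectra at each pixel. A reference implementation (reporting degrees, with a small epsilon guard; both conventions are our assumptions, as SAM is sometimes reported in radians) could look like this:

```python
import torch

def spectral_angle_mapper(x: torch.Tensor, y: torch.Tensor,
                          eps: float = 1e-8) -> torch.Tensor:
    """Mean Spectral Angle Mapper (degrees) between two HSIs of shape
    (C, H, W): the angle between the C-dimensional spectra at each pixel,
    averaged over all pixels. 0 means identical spectral directions."""
    xf = x.flatten(1)                                  # (C, HW)
    yf = y.flatten(1)
    cos = (xf * yf).sum(0) / (xf.norm(dim=0) * yf.norm(dim=0) + eps)
    ang = torch.acos(cos.clamp(-1.0, 1.0))             # radians, per pixel
    return torch.rad2deg(ang).mean()
```

SAM is insensitive to per-pixel intensity scaling, which is why it complements PSNR/SSIM as a measure of spectral (rather than spatial) fidelity.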
In the proposed EDLGFS, we employ 3 × 3 kernels for standard convolutions and 1 × 1 kernels for pointwise convolutions. The input channels of the first 3 × 3 convolution correspond to the number of bands in the input HSIs. We set the number of feature channels to 96. The numbers of LGFSS and LGFSL are both set to 4. In the LGFS module, the large and small kernel sizes are set to 5 × 5 and 3 × 3, respectively. We use a progressive upsampling strategy [54] to upscale the LR HSIs (for example, two ×2 upsampling steps for a scaling factor of 4, and three for a scaling factor of 8). The learnable weights $\lambda_1$ and $\lambda_2$ in the loss function are initialized to 0.95 and 0.05, respectively. The network is trained for 100 epochs using the Adam optimizer with a learning rate of $10^{-4}$. The batch size is set to 32. All experiments were implemented in PyTorch 2.1.0 on the platform of the National Supercomputing Center in Zhengzhou.

4.3. Results of Houston Dataset

For the Houston 2018 dataset, we extract four non-overlapping 256 × 256 × 48 patches from the left region for testing. The remaining region serves as the training set. We augment this data by rotating the images by 90°, 180°, and 270°. Then, we crop the augmented images into 64 × 64 × 48 patches with a spatial overlap of 44 pixels. These patches serve as the reference HR HSIs. According to the Wald protocol [55,56], we generate corresponding LR image patches through band-wise Gaussian filtering. This process constructs the HR and LR image pairs. Specifically, we apply downsampling factors of 2, 4, and 8. This yields LR patches with spatial dimensions of 32 × 32, 16 × 16, and 8 × 8 pixels, respectively.
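The Wald-protocol degradation described above (band-wise Gaussian filtering followed by decimation) can be sketched as below. The kernel width and sigma value are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def wald_degrade(hr: torch.Tensor, scale: int, sigma: float = 1.0) -> torch.Tensor:
    """Generate an LR HSI from an HR HSI of shape (C, H, W): blur each
    band with the same 2D Gaussian (grouped conv), then keep every
    `scale`-th pixel in each spatial dimension."""
    c = hr.shape[0]
    k = 2 * int(3 * sigma) + 1                        # odd kernel size, ~3 sigma
    ax = torch.arange(k, dtype=torch.float32) - k // 2
    g1 = torch.exp(-ax ** 2 / (2 * sigma ** 2))       # 1D Gaussian
    kernel = g1[:, None] * g1[None, :]                # separable 2D Gaussian
    kernel = (kernel / kernel.sum()).view(1, 1, k, k).repeat(c, 1, 1, 1)
    blurred = F.conv2d(hr.unsqueeze(0), kernel, padding=k // 2, groups=c)
    return blurred[0, :, ::scale, ::scale]            # decimate each band
```

Applying this with scale factors 2, 4, and 8 to 64 × 64 HR patches yields the 32 × 32, 16 × 16, and 8 × 8 LR patches described above.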
Table 2 presents the average metrics of all comparative algorithms on the Houston dataset. The best results are highlighted in bold, and the second-best are underlined. For scale factors ×2, ×4, and ×8, all deep learning-based methods consistently outperform the traditional bicubic interpolation method by a significant margin. The proposed EDLGFS achieves the best performance across all scale factors. In terms of PSNR and SSIM, EDLGFS significantly surpasses both traditional and advanced methods. This indicates that the proposed EDLGFS restores the spatial details and structural information more accurately. Additionally, EDLGFS achieves lower SAM values than competing methods. This suggests effective mitigation of spectral distortion and a better balance between spatial enhancement and spectral preservation. EDLGFS demonstrates greater robustness across increasing scale factors (from ×2 to ×8), exhibiting less performance degradation than other methods. Especially in the challenging scale factor ×8, it still maintains a significant advantage, demonstrating its adaptability to diverse resolution demands. Furthermore, we calculate the number of parameters and the computational cost (GFLOPs) during the inference process for a test image, as shown in Table 2. The computational cost of the proposed EDLGFS is significantly lower than that of 3D CNN-based methods. The results indicate that EDLGFS achieves a better balance between reconstruction accuracy and computational efficiency.
We present qualitative results on the Houston dataset (scale factor ×4) in Figure 3 to visually illustrate the effectiveness of EDLGFS. Pseudo-RGB images are generated by combining the 16th, 32nd, and 40th spectral bands. The image reconstructed by traditional bicubic interpolation is significantly blurred, with substantial loss of structural detail. Deep learning methods such as 3D-FCNN, MSDformer, and CST achieve satisfactory reconstruction quality, but mild blurring remains at building edges, contour sharpness is insufficient, and local details are discontinuous. In contrast, the proposed EDLGFS produces clearer edges and superior texture details, as the enlarged region in the red box shows most clearly. In addition, Figure 4 visualizes the mean error maps across all spectral bands to assess pixel-wise reconstruction accuracy; blue and red indicate lower and higher reconstruction errors, respectively. As the enlarged red-box area indicates, EDLGFS exhibits lower reconstruction errors than the other methods, confirming its superior reconstruction quality. Finally, Figure 5 shows the average spectral difference curves (for scale factors ×4 and ×8) across the test images, which evaluate spectral reconstruction quality; a lower curve indicates higher spectral consistency with the Ground Truth (GT). These results confirm that EDLGFS achieves the best spectral fidelity across different scales.

4.4. Results on Pavia Center Dataset

For the Pavia Center dataset, four test images of size 256 × 256 × 102 are extracted from the left region without overlap, and the remaining region is used for training. We first augment this remaining region by rotating it by 90°, 180°, and 270°. We then crop the augmented images into 64 × 64 × 102 patches with a spatial overlap of 52 pixels; these patches serve as the reference HR HSIs. Corresponding LR image patches are generated from the reference HR patches through band-wise Gaussian filtering, constructing the HR–LR image pairs. Specifically, Gaussian kernels matched to scale factors of 2, 4, and 8 are used to downsample the HR image patches, yielding LR image patches of sizes 32 × 32 × 102, 16 × 16 × 102, and 8 × 8 × 102.
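The HR-to-LR degradation described above (band-wise Gaussian blur followed by decimation) can be sketched as below. The blur strength `sigma` and the edge-padding mode are assumptions, since the kernel parameters are not specified here:

```python
import numpy as np

def _gauss_kernel1d(sigma, radius):
    """Normalized 1D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def make_lr(hr, scale, sigma=None):
    """Band-wise Gaussian blur followed by decimation by `scale`.

    hr: (H, W, B) HR patch; scale: 2, 4, or 8.
    sigma defaults to scale / 2 (an assumption, not the paper's setting).
    """
    if sigma is None:
        sigma = scale / 2.0
    radius = int(3 * sigma)
    k = _gauss_kernel1d(sigma, radius)
    H, W, B = hr.shape
    lr = np.empty((H // scale, W // scale, B), dtype=np.float64)
    for b in range(B):  # each spectral band is filtered independently
        padded = np.pad(hr[..., b].astype(np.float64), radius, mode='edge')
        # separable convolution: filter rows, then columns
        rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, padded)
        blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, rows)
        lr[..., b] = blurred[::scale, ::scale]  # decimate
    return lr
```

For a 64 × 64 × 102 HR patch, `make_lr(hr, 4)` returns a 16 × 16 × 102 LR patch, matching the pairs described above.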
For scale factors ×2, ×4, and ×8 on the Pavia Center dataset, Table 3 lists the average performance metrics (PSNR, SSIM, SAM) of all comparison methods. The Pavia Center dataset, acquired in 2001, presents inherent challenges including lower native spatial resolution and a limited available training area. Consequently, the overall results for all methods are lower than those obtained on the Houston dataset. Despite these challenges, the proposed EDLGFS consistently outperforms other methods in terms of PSNR, SSIM, and SAM across all scales. This result further confirms that EDLGFS has stable performance in HSISR. Consistent with the results on the Houston dataset, the computational complexity of EDLGFS on the Pavia Center dataset is significantly lower than that of 3D CNN-based methods. This demonstrates a favorable trade-off between reconstruction accuracy and efficiency.
We present the qualitative visualization results of each method on the Pavia Center dataset at a scale factor of ×4 in Figure 6. We select the 96th, 30th, and 15th bands and combine them into pseudo-R-G-B images for visual comparison. Visual inspection reveals that the other methods exhibit blurred details; in particular, building textures and edge contours lack sharpness. In contrast, the proposed EDLGFS demonstrates superior visual fidelity: it achieves overall quality closer to the Ground Truth (GT) and accurately restores edge details and building textures. The magnified area corresponding to the red box highlights the advantage of EDLGFS in detail restoration more intuitively. Figure 7 compares the error distributions of each method at a scale factor of ×4; the proposed EDLGFS shows a smaller extent of red (high-error) regions, confirming its superior spatial reconstruction accuracy. Finally, spectral fidelity is evaluated through the average spectral difference curves for scale factors ×4 and ×8, shown in Figure 8. The proposed EDLGFS achieves the lowest curves at both scales, demonstrating that it better preserves spectral features.

4.5. Results on Chikusei Dataset

For the Chikusei dataset, eight test images of size 256 × 256 × 128 are extracted from the top region without overlap, and the remaining region is used for training. We augment this data by rotating the images by 90°, 180°, and 270°, then crop the augmented images into 64 × 64 × 128 patches with a spatial overlap of 24 pixels; these patches serve as the reference HR HSIs. Corresponding LR image patches are generated from the reference HR patches through band-wise Gaussian filtering, constructing the HR–LR image pairs. Specifically, Gaussian kernels matched to scale factors of 2, 4, and 8 are used to downsample the HR image patches, yielding LR image patches of sizes 32 × 32 × 128, 16 × 16 × 128, and 8 × 8 × 128.
Table 4 lists the quantitative results (PSNR, SSIM, SAM) on the Chikusei dataset for scale factors ×2, ×4, and ×8. The best and second-best values are shown in bold and underlined, respectively. Across all scales, the proposed EDLGFS leads in both reconstruction fidelity (PSNR, SSIM) and spectral accuracy (SAM). These results demonstrate that EDLGFS excels at reconstructing spatial details while preserving spectral quality, further validating the effectiveness of the edge distillation strategy and the Local–Global Feature Selection mechanism. EDLGFS thus achieves the best performance on all three test datasets. On the Chikusei dataset, its computational complexity remains lower than that of 3D CNN-based methods. These consistent results across diverse datasets demonstrate the strong generalization capability of EDLGFS.
Figure 9 presents a qualitative comparison on the Chikusei dataset at a scale factor of ×4. For visualization, we select the 70th, 100th, and 36th spectral bands to generate pseudo-R-G-B images. The visualizations reveal that the other comparison methods often produce blurred edges, whereas the proposed EDLGFS excels at restoring sharp edge details, offering a distinct visual advantage. This can be verified intuitively in the magnified area marked by the red box. Figure 10 displays the error distribution maps of each method at a scale factor of ×4; EDLGFS exhibits the smallest red (high-error) regions, indicating minimal deviation from the Ground Truth (GT) and higher reconstruction precision. The average spectral difference curves for scale factors ×4 and ×8 are shown in Figure 11. The proposed EDLGFS achieves the lowest curves, demonstrating its ability to enhance spatial resolution while preserving spectral features more accurately than other methods.

4.6. Ablation Study

To assess the contribution of each core module to the overall super-resolution performance, an ablation study is conducted in this section. All ablation models are trained and evaluated on the Houston dataset at a scale factor of ×4.

4.6.1. Ablation Study on the Number of LGFSSs

Our core feature reconstruction network consists of a series of LGFSSs. In this section, we investigate the impact of the number of LGFSSs (denoted as N) on reconstruction ability. As shown in Table 5, when N increases from 3 to 4, the number of parameters grows from 6.00 M to 7.71 M and the computational cost from 34.32 to 41.52 GFLOPs. Concurrently, both PSNR and SSIM improve, reaching the best values in the table (in bold): PSNR 33.2695 and SSIM 0.9862. As N increases to 5, the parameters rise to 9.43 M and the cost to 48.71 GFLOPs, but PSNR and SSIM begin to decline. When N reaches 6, the parameter count reaches 11.14 M and the cost 55.91 GFLOPs, while PSNR, SSIM, and SAM degrade significantly. These results indicate that increasing the number of LGFSSs substantially raises the parameter count and computational cost, yet reconstruction performance does not improve monotonically with N. We attribute this to overfitting caused by increased depth: deeper networks typically require larger training datasets to learn effective feature mappings.

4.6.2. Break-Down Ablation

The proposed EDLGFS integrates three core designs: edge distillation, LGFS, and learnable loss weights. We conduct ablation studies on the Houston dataset (×4 and ×8 scales), the Pavia Center dataset (×4 scale), and the Chikusei dataset (×4 scale) to evaluate the independent contribution of each design. Each experiment is run independently five times with five different random seeds, and results are reported as mean ± standard deviation. Results are summarized in Table 6, Table 7 and Table 8 (best scores in bold). The proposed EDLGFS exhibits stable performance across all metrics with minimal standard deviation, indicating good stability. The following analysis covers three aspects: removing the edge distillation branch, removing the LGFS module, and fixing the learnable weights.
The edge distillation strategy is designed to guide the model to focus on learning image edge details. When the edge branch is removed, the edge-related loss term is disabled and its corresponding weight is removed. Experiments demonstrate that removing the edge distillation branch degrades performance across all datasets and scale factors, particularly at higher scale factors (such as ×8). For example, on the Houston dataset, removing edge distillation reduces PSNR by approximately 0.16 dB and 0.43 dB at the ×4 and ×8 scale factors, respectively; on the Pavia Center and Chikusei datasets, PSNR decreases by approximately 0.29 dB and 0.18 dB, respectively. This finding demonstrates that the edge distillation strategy effectively enhances the model's ability to recover edge details. In addition, we calculate the number of parameters and computational cost (GFLOPs) only for the inference process on a test image, as shown in Table 6, Table 7 and Table 8. All parameters and computational costs are computed with the PyTorch "thop" library. Notably, the complete model incurs no additional parameters or computational cost at inference compared to the model without the edge branch, because the auxiliary edge network is used only during training and discarded at inference. Thus, the strategy improves edge extraction without increasing inference overhead.
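The training-only edge supervision can be illustrated with a minimal image-space sketch: Sobel edge maps of the output and target are compared with an L1 penalty, and the term simply vanishes at inference. This is an illustration of the idea only; the actual distillation operates on network features rather than raw images:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def sobel_edges(band):
    """Gradient-magnitude edge map of one band via 3x3 Sobel filters."""
    H, W = band.shape
    p = np.pad(band.astype(np.float64), 1, mode='edge')
    gx = np.zeros((H, W)); gy = np.zeros((H, W))
    for i in range(3):          # accumulate the 3x3 cross-correlation
        for j in range(3):
            patch = p[i:i + H, j:j + W]
            gx += SOBEL_X[i, j] * patch
            gy += SOBEL_Y[i, j] * patch
    return np.hypot(gx, gy)

def edge_distillation_loss(sr, hr):
    """Mean L1 distance between Sobel edge maps of SR output and HR target.

    sr, hr: (H, W, B). During training this term guides the main branch;
    at inference the edge branch is dropped, so no extra cost is incurred.
    """
    return np.mean([np.abs(sobel_edges(sr[..., b]) - sobel_edges(hr[..., b])).mean()
                    for b in range(sr.shape[2])])
```

Because the penalty is computed on gradient magnitudes, it specifically rewards matching edge strength and location rather than absolute intensity.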
LGFS is designed to capture both local spatial information and global long-range dependencies simultaneously. The ablation experiment demonstrates that removing this module reduces PSNR by approximately 0.06 dB on the Houston dataset (×4 scale), and by approximately 0.07 dB and 0.05 dB on the Pavia Center and Chikusei datasets, respectively. As shown in Table 6, Table 7 and Table 8, the parameters and computational costs decrease significantly after removing LGFS, indicating that LGFS constitutes the primary component of the model's computational cost. Nevertheless, by integrating multi-scale convolution with a self-attention mechanism, LGFS significantly enhances the model's ability to capture local–global features, yielding consistent improvements across multiple evaluation metrics.
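To illustrate the selection idea, the following greatly simplified sketch fuses two fixed-kernel branches with softmax attention weights derived from a global descriptor, loosely in the spirit of selective-kernel attention. The real LGFS uses learned convolutions of different sizes and a self-attention mechanism over the branch features; the mean-filter branches and the projection matrix `w_attn` here are illustrative stand-ins:

```python
import numpy as np

def box_filter(x, k):
    """Mean filter with a k x k window (stand-in for a learned k x k conv)."""
    r = k // 2
    H, W, C = x.shape
    p = np.pad(x, ((r, r), (r, r), (0, 0)), mode='edge')
    out = np.zeros((H, W, C), dtype=np.float64)
    for i in range(k):
        for j in range(k):
            out += p[i:i + H, j:j + W, :]
    return out / (k * k)

def lgfs_fuse(x, w_attn, kernel_sizes=(3, 5)):
    """Selective fusion of branches with different receptive fields (sketch).

    x: (H, W, C) feature map; w_attn: (len(kernel_sizes), C) projection
    producing one attention logit per branch from a global descriptor.
    """
    branches = [box_filter(x, k) for k in kernel_sizes]
    s = np.stack(branches).sum(0).mean(axis=(0, 1))   # global descriptor (C,)
    logits = w_attn @ s                               # one logit per branch
    a = np.exp(logits - logits.max()); a /= a.sum()   # softmax over branches
    return sum(w * b for w, b in zip(a, branches))    # weighted branch fusion
```

The attention weights let the network emphasize the small-kernel (local) or large-kernel (wider-context) branch per input, which is the selection behavior the ablation above isolates.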
The learnable weights in the loss function are designed to dynamically balance the contribution of each loss term. We test the impact of fixing the learnable weights to their initial values (0.95, 0.05). On the Houston dataset (×4 scale), this reduces PSNR by approximately 0.03 dB; on the Pavia Center and Chikusei datasets, PSNR decreases by approximately 0.03 dB and 0.02 dB, respectively. Although the decrease is small, it indicates that the learnable weights adaptively balance the contributions of different loss terms, thereby improving the overall performance of the proposed model.

4.6.3. Ablation Study on the Different Convolution Kernel Sizes of LGFS

The proposed LGFS employs convolutions with varying kernel sizes to capture features with different receptive fields. This section investigates the impact of different kernel-size combinations on model performance. Table 9 presents quantitative comparison results on the Houston test dataset at a scale factor of ×4 (bold indicates the best metrics). As shown in Table 9, the (5 × 5, 3 × 3) kernel combination yields a moderate parameter count of 7.71 M and computational cost of 41.52 GFLOPs while simultaneously achieving the best PSNR, SSIM, and SAM. The (3 × 3, 3 × 3) combination has the lowest parameter count and computational cost, but its fixed receptive field makes it difficult to model cross-regional correlations (such as overall object contours or distant context). Combinations involving larger kernels, such as (7 × 7, 3 × 3) and (7 × 7, 5 × 5), significantly increase the parameter count and computational cost; moreover, their overly large receptive fields cause the model to lose fine-grained local details during training, leading to performance degradation.

4.6.4. Ablation Study on the Different Initial Weights of Loss Function

This section investigates the impact of the initial values of the learnable weight parameters in the loss function. As shown in Table 10, the model achieves the best PSNR, SSIM, and SAM when λ1 = 0.95 and λ2 = 0.05. As λ1 decreases and λ2 increases, these reconstruction metrics decline significantly. The cause of this trend is as follows: the loss term weighted by λ1 directly constrains the global fit between the model output and the real samples, and is therefore the core constraint ensuring overall reconstruction accuracy, whereas the term weighted by λ2 focuses on edge details and acts as an auxiliary, detail-level constraint. If the proportion of λ1 decreases while that of λ2 increases, the model over-emphasizes edge-detail matching, weakening the core constraint on overall reconstruction accuracy and ultimately degrading overall performance.
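The weighting scheme can be sketched as a convex combination whose weights start at (0.95, 0.05). Keeping the weights positive and normalized to sum to one is an implementation assumption; the paper only states that the weights are learnable:

```python
import numpy as np

class DynamicLoss:
    """Learnable convex combination of a reconstruction and an edge loss.

    The weights are parameterized as logits so that, under gradient updates,
    they stay positive and can be renormalized to sum to 1. Initial values
    (0.95, 0.05) follow the best setting reported in Table 10.
    """

    def __init__(self, init=(0.95, 0.05)):
        # store log-weights; the optimizer would update these during training
        self.logits = np.log(np.asarray(init, dtype=np.float64))

    @property
    def weights(self):
        w = np.exp(self.logits)
        return w / w.sum()  # normalized, positive weights

    def __call__(self, rec_loss, edge_loss):
        w = self.weights
        return w[0] * rec_loss + w[1] * edge_loss
```

Fixing `self.logits` reproduces the "fixed weights" ablation; letting the optimizer update them gives the dynamic behavior evaluated above.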

4.6.5. Robustness Analysis Against Degradations

Hyperspectral images are often affected by various degradations during imaging, such as noise, transmission errors, or sensor failures. Evaluating the robustness of super-resolution models against such data variations is therefore crucial. In this section, we explore the reconstruction stability of the proposed EDLGFS under degraded conditions through robustness ablation experiments. To simulate noise effects and sensor failures during imaging, we introduce Gaussian noise and random-value degradation into the low-resolution image patches during data preparation. As shown in Table 11, EDLGFS remains stable under both Gaussian noise and random-value degradation. The slight performance drop is within an acceptable range, demonstrating the model's adaptability and robustness to common data defects in real-world complex scenarios.
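The degradation protocol can be sketched as follows; the noise level `noise_sigma` and the corrupted-pixel fraction `dead_fraction` are illustrative assumptions, not the exact settings used in Table 11:

```python
import numpy as np

def degrade(lr, noise_sigma=0.01, dead_fraction=0.001, seed=0):
    """Simulate common sensor defects on an LR patch (a sketch).

    Adds zero-mean Gaussian noise to every pixel, then replaces a small
    random fraction of pixels with uniform random values in [0, 1]
    (random-value degradation, mimicking transmission errors or dead pixels).
    lr: (H, W, B) patch with values in [0, 1].
    """
    rng = np.random.default_rng(seed)
    out = lr + rng.normal(0.0, noise_sigma, lr.shape)     # Gaussian noise
    mask = rng.random(lr.shape) < dead_fraction           # pixels to corrupt
    out[mask] = rng.random(np.count_nonzero(mask))        # random-value defect
    return np.clip(out, 0.0, 1.0)                         # keep valid range
```

Applying `degrade` to the LR patches before training or evaluation yields the perturbed inputs used for the robustness comparison.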

5. Conclusions

In this paper, a novel method named EDLGFS is proposed for HSISR. EDLGFS employs two parallel network branches: the main network learns complex local–global features, while the auxiliary edge network focuses on extracting and refining edge details. The two branches are connected through a knowledge distillation framework, in which an edge loss function guides the main network to learn the edge details extracted by the auxiliary edge network. We further design a Local–Global Feature Selection mechanism (LGFS). This module first extracts feature representations with varying receptive fields through convolutional kernels of different sizes, then employs a self-attention mechanism to model spatial dependencies between these features. By leveraging these dependencies, it performs efficient feature selection that significantly enhances the ability to capture local–global features. Additionally, we design a learnable dynamic loss mechanism that assigns learnable weights to different loss terms, allowing the model to balance their contributions more effectively. Extensive experiments on multiple public datasets demonstrate that the proposed EDLGFS achieves superior reconstruction quality in HSISR.
Although the proposed EDLGFS demonstrates good performance in HSISR, it still has certain limitations. Firstly, the edge distillation branch relies on the Sobel operator for initial edge extraction. This method performs well in most cases, but it is sensitive to noise and complex textures, which may affect the stability of edge guidance in complex scenes. Future work could explore more robust edge detection algorithms or learnable edge extraction modules. Secondly, while the current loss function exhibits adaptability, it primarily optimizes pixel-level errors without sufficiently incorporating perceptual quality or spectral consistency constraints. Developing a more advanced loss function is expected to provide more effective guidance for edge restoration and spectral preservation. Importantly, these limitations do not undermine the validity of our core contributions but rather offer specific directions for future improvements.

Author Contributions

Conceptualization, X.L. and M.F.; methodology, X.L. and J.S.; software, X.L. and X.Z.; validation, X.L., M.F. and J.S.; formal analysis, X.L.; investigation, X.L.; resources, X.L.; data curation, X.L.; writing—original draft preparation, X.L.; writing—review and editing, X.L. and X.Z.; visualization, X.L.; supervision, X.L., M.F., X.Z. and J.S.; project administration, X.L., J.S., M.F. and X.Z.; funding acquisition, J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Major Science and Technology Project of Henan Province, China, grant number “221100210600”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the authors.

Acknowledgments

We gratefully acknowledge the computational resources provided by the National Supercomputing Center in Zhengzhou for enabling all experimental work in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral Remote Sensing Data Analysis and Future Challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36. [Google Scholar] [CrossRef]
  2. Yu, H.; Shang, X.; Song, M.; Hu, J.; Jiao, T.; Guo, Q. Union of Class-Dependent Collaborative Representation Based on Maximum Margin Projection for Hyperspectral Imagery Classification. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2021, 14, 553–566. [Google Scholar] [CrossRef]
  3. Xu, Y.; Zhang, L.; Du, B.; Zhang, L. Hyperspectral Anomaly Detection Based on Machine Learning: An Overview. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2022, 15, 3351–3364. [Google Scholar] [CrossRef]
  4. Tan, Y.; Lu, L.; Bruzzone, L.; Guan, R.; Chang, Z.; Yang, C. Hyperspectral band selection for lithologic discrimination and geological mapping. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2020, 13, 471–486. [Google Scholar] [CrossRef]
  5. Lu, G.; Fei, B. Medical hyperspectral imaging: A review. J. Biomed. Opt. 2014, 19, 010901. [Google Scholar] [CrossRef] [PubMed]
  6. Sun, H.; Cao, Q.; Meng, F.; Xu, J.; Cheng, M. Spatial-Channel Multiscale Transformer Network for Hyperspectral Unmixing. Sensors 2025, 25, 4493. [Google Scholar] [CrossRef]
  7. Huo, Y.; Dong, Y.; Wang, C.; Zhang, M.; Wang, H. Multi-scale memory network with separation training for hyperspectral anomaly detection. Inf. Process. Manag. 2026, 63, 104494. [Google Scholar] [CrossRef]
  8. Landgrebe, D.A.; Serpico, S.B.; Crawford, M.M.; Singhroy, V. Introduction to the special issue on analysis of hyperspectral image data. IEEE Trans. Geosci. Remote Sens. 2001, 39, 1343–1345. [Google Scholar] [CrossRef]
  9. Wang, X.; Hu, Q.; Cheng, Y.; Ma, J. Hyperspectral Image Super-Resolution Meets Deep Learning: A Survey and Perspective. IEEE-CAA J. Automatica Sin. 2023, 18, 1668–1691. [Google Scholar] [CrossRef]
  10. Li, S.; Dian, R.; Fang, L. Fusing hyperspectral and multispectral images via coupled sparse tensor factorization. IEEE Trans. Image Process. 2018, 27, 4118–4130. [Google Scholar] [CrossRef]
  11. Li, J.; Hong, D.; Gao, L.; Yao, J.; Zheng, K.; Zhang, B.; Chanussot, J. Deep learning in multimodal remote sensing data fusion: A comprehensive review. Int. J. Appl. Earth Obs. 2022, 112, 102926. [Google Scholar] [CrossRef]
  12. Li, Q.; Wang, Q.; Li, X. Exploring the relationship between 2D/3D convolution for hyperspectral image super-resolution. IEEE Trans. Geosci. Remote Sens. 2021, 59, 8693–8703. [Google Scholar] [CrossRef]
  13. Wang, X.; Ma, J.; Jiang, J.; Zhang, X.-P. Dilated projection correction network based on autoencoder for hyperspectral image super-resolution. Neural Netw. 2022, 146, 107–119. [Google Scholar] [CrossRef] [PubMed]
  14. Li, J.; Yuan, Q.; Shen, H.; Meng, X.; Zhang, L. Hyperspectral image super-resolution by spectral mixture analysis and spatial–spectral group sparsity. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1250–1254. [Google Scholar] [CrossRef]
  15. Wang, Y.; Chen, X.; Han, Z.; He, S. Hyperspectral image super-resolution via nonlocal low-rank tensor approximation and total variation regularization. Remote Sens. 2017, 9, 1286. [Google Scholar] [CrossRef]
  16. Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual Dense Network for Image Super-Resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar] [CrossRef]
  17. Anwar, S.; Khan, S.; Barnes, N. A deep journey into super-resolution: A survey. ACM Comput. Surv. (CSUR) 2020, 53, 60. [Google Scholar] [CrossRef]
  18. Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a deep convolutional network for image super-resolution. In Computer Vision—ECCV 2014; Springer: Cham, Switzerland, 2014. [Google Scholar] [CrossRef]
  19. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar] [CrossRef]
  20. Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced Deep Residual Networks for Single Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar] [CrossRef]
  21. Li, Y.; Hu, J.; Zhao, X.; Xie, W.; Li, J. Hyperspectral image super-resolution using deep convolutional neural network. Neurocomputing 2017, 266, 29–41. [Google Scholar] [CrossRef]
  22. Li, Y.; Zhang, L.; Ding, C.; Wei, W.; Zhang, Y. Single Hyperspectral Image Super-Resolution with Grouped Deep Recursive Residual Network. In Proceedings of the IEEE Fourth International Conference on Multimedia Big Data (BigMM), Xi’an, China, 13–16 September 2018. [Google Scholar] [CrossRef]
  23. Jia, J.; Ji, L.; Zhao, Y.; Geng, X. Hyperspectral image super-resolution with spectral–spatial network. Int. J. Remote Sens. 2018, 39, 7806–7829. [Google Scholar] [CrossRef]
  24. Mei, S.; Yuan, X.; Ji, J.; Zhang, Y.; Wan, S.; Du, Q. Hyperspectral Image Spatial Super-Resolution via 3D Full Convolutional Neural Network. Remote Sens. 2017, 9, 1139. [Google Scholar] [CrossRef]
  25. Yang, J.; Zhao, Y.-Q.; Chan, J.C.-W.; Xiao, L. A Multi-Scale Wavelet 3D-CNN for Hyperspectral Image Super-Resolution. Remote Sens. 2019, 11, 1557. [Google Scholar] [CrossRef]
  26. Li, J.; Cui, R.; Li, Y.; Li, B.; Du, Q.; Ge, C. Multitemporal Hyperspectral Image Super-Resolution through 3D Generative Adversarial Network. In Proceedings of the 2019 10th International Workshop on the Analysis of Multitemporal Remote Sensing Images (MultiTemp), Shanghai, China, 5–7 August 2019. [Google Scholar] [CrossRef]
  27. Wang, Q.; Li, Q.; Li, X. Spatial-spectral residual network for hyperspectral image super-resolution. arXiv 2020. [Google Scholar] [CrossRef]
  28. Xu, Q.; Liu, S.; Wang, J.; Jiang, B.; Tang, J. AS3ITransUNet: Spatial–Spectral Interactive Transformer U-Net With Alternating Sampling for Hyperspectral Image Super-Resolution. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5523913. [Google Scholar] [CrossRef]
  29. Li, M.; Liu, J.; Fu, Y.; Zhang, Y.; Dou, D. Spectral Enhanced Rectangle Transformer for Hyperspectral Image Denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023. [Google Scholar] [CrossRef]
  30. Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; Timofte, R. SwinIR: Image Restoration Using Swin Transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, BC, Canada, 11–17 October 2021. [Google Scholar] [CrossRef]
  31. Li, M.; Fu, Y.; Zhang, Y. Spatial-spectral transformer for hyperspectral image denoising. arXiv 2022. [Google Scholar] [CrossRef]
  32. Long, Y.; Wang, X.; Xu, M.; Zhang, S.; Jiang, S.; Jia, S. Dual self-attention Swin transformer for hyperspectral image super-resolution. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5512012. [Google Scholar] [CrossRef]
  33. Li, H.; Zhao, F.; Xue, F.; Wang, J.; Liu, Y.; Chen, Y.; Wu, Q.; Tao, J.; Zhang, G.; Xi, D.; et al. Succulent-YOLO: Smart UAV-Assisted Succulent Farmland Monitoring with CLIP-Based YOLOv10 and Mamba Computer Vision. Remote Sens. 2025, 17, 2219. [Google Scholar] [CrossRef]
  34. Yang, J.; Xiao, L.; Zhao, Y.-Q.; Chan, J.C.-W. Hybrid Local and Nonlocal 3-D Attentive CNN for Hyperspectral Image Super-Resolution. IEEE Geosci. Remote Sens. Lett. 2021, 18, 1274–1278. [Google Scholar] [CrossRef]
  35. Yuan, Y.; Zheng, X.; Lu, X. Hyperspectral image superresolution by transfer learning. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2017, 10, 1963–1974. [Google Scholar] [CrossRef]
  36. Li, Q.; Wang, Q.; Li, X. Mixed 2D/3D Convolutional Network for Hyperspectral Image Super-Resolution. Remote Sens. 2020, 12, 1660. [Google Scholar] [CrossRef]
  37. Liu, Z.; Wang, W.; Ma, Q.; Liu, X.; Jiang, J. Rethinking 3D-CNN in Hyperspectral Image Super-Resolution. Remote Sens. 2023, 15, 2574. [Google Scholar] [CrossRef]
  38. Hu, Q.; Wang, X.; Jiang, J.; Zhang, X.-P.; Ma, J. Exploring the Spectral Prior for Hyperspectral Image Super-Resolution. IEEE Trans. Image Process. 2024, 33, 5260–5272. [Google Scholar] [CrossRef] [PubMed]
  39. Li, K.; Van Gool, L.; Dai, D. Test-Time Training for Hyperspectral Image Super-Resolution. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 47, 7231–7242. [Google Scholar] [CrossRef] [PubMed]
  40. Liu, Y.; Hu, J.; Kang, X.; Luo, J.; Fan, S. Interactformer: Interactive transformer and CNN for hyperspectral image super-resolution. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5531715. [Google Scholar] [CrossRef]
  41. Chen, S.; Zhang, L.; Zhang, L. MSDformer: Multiscale deformable transformer for hyperspectral image super-resolution. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5525614. [Google Scholar] [CrossRef]
  42. Zhang, M.; Zhang, C.; Zhang, Q.; Guo, J.; Gao, X.; Zhang, J. ESSAformer: Efficient Transformer for Hyperspectral Image Super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023. [Google Scholar] [CrossRef]
  43. Chen, S.; Zhang, L.; Zhang, L. Cross-scope spatial-spectral information aggregation for hyperspectral image super-resolution. IEEE Trans. Image Process. 2024, 33, 5878–5891. [Google Scholar] [CrossRef]
  44. Zhang, M.; Wang, X.; Wu, S.; Wang, Z.; Gong, M.; Zhou, Y.; Jiang, F.; Wu, Y. Spatial-Spectral Aggregation Transformer with Diffusion Prior for Hyperspectral Image Super-Resolution. IEEE Trans. Circuit Syst. Video Technol. 2025, 35, 3557–3572. [Google Scholar] [CrossRef]
  45. Yang, W.; Feng, J.; Yang, J.; Zhao, F.; Liu, J.; Guo, Z. Deep edge guided recurrent residual learning for image super-resolution. IEEE Trans. Image Process. 2017, 26, 5895–5907. [Google Scholar] [CrossRef]
  46. Zhao, M.; Ning, J.; Hu, J.; Li, T. Hyperspectral Image Super-Resolution under the Guidance of Deep Gradient Information. Remote Sens. 2021, 13, 2382. [Google Scholar] [CrossRef]
  47. Wang, Y.; Huang, Z.; Wang, X.; Zhang, S.; Liu, S.; Feng, L. Lightweight Edge-Guided Super-Resolution Network for Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5626714. [Google Scholar] [CrossRef]
  48. Yu, W.; Luo, M.; Zhou, P.; Si, C.; Zhou, Y.; Wang, X. MetaFormer is Actually What You Need for Vision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022. [Google Scholar] [CrossRef]
  49. Xia, B.; Hang, Y.; Tian, Y.; Yang, W.; Liao, Q.; Zhou, J. Efficient Non-local Contrastive Attention for Image Super-resolution. Proc. AAAI Conf. Artif. Intell. 2022, 36, 2759–2767. [Google Scholar] [CrossRef]
  50. Mei, Y.; Fan, Y.; Zhou, Y.; Huang, L.; Huang, T.S.; Shi, H. Image Super-Resolution with Cross-Scale Non-Local Attention and Exhaustive Self-Exemplars Mining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020. [Google Scholar] [CrossRef]
  51. Mei, Y.; Fan, Y.; Zhou, Y. Image super-resolution with non-local sparse attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021. [Google Scholar] [CrossRef]
  52. Liu, D.; Wen, B.; Fan, Y.; Loy, C.C.; Huang, T.S. Non-local recurrent network for image restoration. arXiv 2018. [Google Scholar] [CrossRef]
  53. Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image Super-Resolution Using Very Deep Residual Channel Attention Networks. In Proceedings of the European Conference on Computer Vision (ECCV); Springer: Cham, Switzerland, 2018. [Google Scholar] [CrossRef]
  54. Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar] [CrossRef]
  55. Wald, L.; Ranchin, T.; Mangolini, M. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogramm. Eng. Remote Sens. 1997, 63, 691–699. [Google Scholar]
  56. Qu, J.; Xu, Z.; Dong, W.; Xiao, S.; Li, Y.; Du, Q. A spatio-spectral fusion method for hyperspectral images using residual hyper-dense network. IEEE Trans. Neural Netw. Learn. Syst. 2022, 35, 2235–2249. [Google Scholar] [CrossRef]
Figure 1. The overall architecture of EDLGFS.
Figure 2. Structure of the designed LGFSL and LGFS.
Figure 3. Comparative visual results on the Houston test dataset with spectral bands 16-32-40 as R-G-B (scale ×4), from left to right: Ground Truth (GT), Bicubic, 3D-FCNN, MCNet, LN-atten-CNN, G-RDN, MSDformer, SNLSR, CST, and the proposed EDLGFS.
Figure 4. Error maps for the test HSI on Houston (scale ×4).
Figure 5. Mean absolute spectral difference curve for the test HSI on Houston (left: scale ×4, right: scale ×8).
Figure 6. Comparative visual results on the Pavia Center test dataset with spectral bands 96-30-15 as R-G-B (scale ×4), from left to right: Ground Truth (GT), bicubic, 3D-FCNN, MCNet, LN-atten-CNN, G-RDN, MSDformer, SNLSR, CST, and the proposed EDLGFS.
Figure 7. Error maps for the test HSI on Pavia Center (scale ×4).
Figure 8. Mean absolute spectral difference curve for the test HSI on Pavia Center (left: scale ×4, right: scale ×8).
Figure 9. Comparative visual results on the Chikusei test dataset with spectral bands 70-100-36 as R-G-B (scale ×4), from left to right: Ground Truth (GT), Bicubic, 3D-FCNN, MCNet, LN-atten-CNN, G-RDN, MSDformer, SNLSR, CST, and the proposed EDLGFS.
Figure 10. Error maps for the test HSI on Chikusei (scale ×4).
Figure 11. Mean absolute spectral difference curve for the test HSI on Chikusei (left: scale ×4, right: scale ×8).
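The curves in Figures 5, 8, and 11 plot, for each spectral band, the mean absolute difference between the ground-truth and reconstructed cubes. A minimal numpy sketch of how such a curve can be computed (the function name and the band-last (H, W, C) layout are our assumptions; the paper's exact implementation may differ):

```python
import numpy as np

def mean_abs_spectral_difference(gt, sr):
    """Per-band mean absolute difference between a ground-truth cube `gt`
    and a reconstructed cube `sr`, both of shape (H, W, C)."""
    diff = np.abs(gt.astype(np.float64) - sr.astype(np.float64))
    return diff.mean(axis=(0, 1))  # one value per spectral band

# Toy example: a 4x4 patch with 3 bands, offset uniformly by 0.01.
rng = np.random.default_rng(0)
gt = rng.random((4, 4, 3))
sr = gt + 0.01
curve = mean_abs_spectral_difference(gt, sr)  # length-3 curve, all ~0.01
```

Plotting `curve` against the band index reproduces the style of curve shown in these figures.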
Table 1. Parameter settings for comparison methods.
| Method | Batch Size | Epoch | Learning Rate | Optimizer |
|---|---|---|---|---|
| 3D-FCNN [24] | 16 | 100 | 0.00005 | Adam |
| MCNet [36] | 16/8 | 100 | 0.0001 | Adam |
| LN-atten-CNN [34] | 16 | 100 | 0.001 | Adam |
| G-RDN [46] | 16 | 100 | 0.0001 | Adam |
| MSDformer [41] | 32 | 100 | 0.00005 | Adam |
| SNLSR [38] | 8 | 100 | 0.0002 | Adam |
| CST [43] | 32 | 100 | 0.0001 | Adam |
| EDLGFS | 32 | 100 | 0.0001 | Adam |
Table 2. Performance comparison on the Houston test dataset at various scales.
| Method | Scale | Param (M) | GFLOPs | PSNR | SSIM | SAM |
|---|---|---|---|---|---|---|
| Bicubic | ×2 | – | – | 34.9599 | 0.9905 | 1.6953 |
| 3D-FCNN [24] | ×2 | 0.039 | 123.62 | 37.8865 | 0.9954 | 1.3927 |
| MCNet [36] | ×2 | 1.93 | 1978.18 | 38.9117 | 0.9964 | 1.2032 |
| LN-atten-CNN [34] | ×2 | 9.78 | 6396.71 | 38.8662 | 0.9963 | 1.2117 |
| G-RDN [46] | ×2 | 2.17 | 41.82 | 38.4582 | 0.9960 | 1.3278 |
| MSDformer [41] | ×2 | 10.69 | 273.80 | 39.3512 | 0.9967 | 1.1631 |
| SNLSR [38] | ×2 | 1.33 | 38.05 | 38.8741 | 0.9964 | 1.2509 |
| CST [43] | ×2 | 2.83 | 50.15 | 39.4362 | 0.9968 | 1.1672 |
| EDLGFS | ×2 | 7.38 | 137.68 | 39.5342 | 0.9969 | 1.1346 |
| Bicubic | ×4 | – | – | 29.1727 | 0.9618 | 3.2352 |
| 3D-FCNN [24] | ×4 | 0.039 | 123.62 | 31.1875 | 0.9776 | 2.7072 |
| MCNet [36] | ×4 | 2.17 | 1735.57 | 31.9065 | 0.9808 | 2.5406 |
| LN-atten-CNN [34] | ×4 | 9.90 | 2219.69 | 31.9407 | 0.9811 | 2.5139 |
| G-RDN [46] | ×4 | 2.17 | 16.61 | 31.6579 | 0.9804 | 2.5848 |
| MSDformer [41] | ×4 | 12.77 | 112.62 | 32.2263 | 0.9826 | 2.2550 |
| SNLSR [38] | ×4 | 1.48 | 12.85 | 32.0647 | 0.9817 | 2.3682 |
| CST [43] | ×4 | 3.16 | 22.05 | 33.0054 | 0.9854 | 2.1366 |
| EDLGFS | ×4 | 7.71 | 41.52 | 33.2695 | 0.9862 | 2.1222 |
| Bicubic | ×8 | – | – | 24.7235 | 0.8848 | 5.5512 |
| 3D-FCNN [24] | ×8 | 0.039 | 123.62 | 25.7176 | 0.9172 | 4.9909 |
| MCNet [36] | ×8 | 2.96 | 3955.55 | 26.5633 | 0.9310 | 4.6756 |
| LN-atten-CNN [34] | ×8 | 10.29 | 2315.75 | 26.4459 | 0.9295 | 4.6940 |
| G-RDN [46] | ×8 | 2.17 | 10.31 | 26.4514 | 0.9314 | 4.4292 |
| MSDformer [41] | ×8 | 14.84 | 72.32 | 26.6762 | 0.9338 | 4.2053 |
| SNLSR [38] | ×8 | 1.62 | 10.61 | 26.6095 | 0.9332 | 4.3372 |
| CST [43] | ×8 | 3.49 | 15.03 | 26.7080 | 0.9338 | 4.1420 |
| EDLGFS | ×8 | 8.04 | 19.74 | 27.1169 | 0.9400 | 4.0347 |
Bold represents the best. Underlined represents the second-best.
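The PSNR and SAM values reported in these tables follow the standard definitions of both metrics. A hedged numpy sketch (band-last (H, W, C) cubes and a data range of 1.0 are our assumptions; SSIM is omitted here because it is typically computed with a library such as scikit-image rather than by hand):

```python
import numpy as np

def psnr(gt, sr, data_range=1.0):
    """Peak signal-to-noise ratio in dB over the whole cube."""
    mse = np.mean((gt.astype(np.float64) - sr.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def sam(gt, sr, eps=1e-12):
    """Spectral angle mapper: per-pixel angle between spectral vectors,
    averaged over all pixels and reported in degrees."""
    gt2 = gt.reshape(-1, gt.shape[-1]).astype(np.float64)
    sr2 = sr.reshape(-1, sr.shape[-1]).astype(np.float64)
    dot = np.sum(gt2 * sr2, axis=1)
    denom = np.linalg.norm(gt2, axis=1) * np.linalg.norm(sr2, axis=1) + eps
    angles = np.arccos(np.clip(dot / denom, -1.0, 1.0))
    return np.degrees(angles.mean())
```

As a sanity check, a uniform error of 0.1 on a [0, 1] cube gives a PSNR of 20 dB, and SAM is (numerically close to) zero for identical cubes.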
Table 3. Performance comparison on the Pavia Center test dataset at various scales.
| Method | Scale | Param (M) | GFLOPs | PSNR | SSIM | SAM |
|---|---|---|---|---|---|---|
| Bicubic | ×2 | – | – | 31.1088 | 0.9393 | 5.5004 |
| 3D-FCNN [24] | ×2 | 0.039 | 262.70 | 33.9764 | 0.9697 | 4.7763 |
| MCNet [36] | ×2 | 1.93 | 4203.64 | 34.9783 | 0.9754 | 4.4994 |
| LN-atten-CNN [34] | ×2 | 9.78 | 13,593.02 | 34.8647 | 0.9750 | 4.4984 |
| G-RDN [46] | ×2 | 2.33 | 52.10 | 35.1105 | 0.9759 | 4.4454 |
| MSDformer [41] | ×2 | 11.81 | 427.70 | 35.2717 | 0.9767 | 4.3512 |
| SNLSR [38] | ×2 | 1.45 | 40.10 | 35.0469 | 0.9749 | 4.4706 |
| CST [43] | ×2 | 2.97 | 57.02 | 35.8113 | 0.9790 | 4.2460 |
| EDLGFS | ×2 | 7.52 | 144.56 | 35.8670 | 0.9792 | 4.2357 |
| Bicubic | ×4 | – | – | 26.9982 | 0.8308 | 7.4899 |
| 3D-FCNN [24] | ×4 | 0.039 | 262.70 | 28.3387 | 0.8902 | 6.8244 |
| MCNet [36] | ×4 | 2.17 | 3688.09 | 28.5679 | 0.8964 | 6.7685 |
| LN-atten-CNN [34] | ×4 | 9.90 | 4716.85 | 28.6142 | 0.8975 | 6.7592 |
| G-RDN [46] | ×4 | 2.33 | 26.51 | 28.4978 | 0.8956 | 6.7869 |
| MSDformer [41] | ×4 | 13.89 | 162.56 | 28.8341 | 0.9029 | 6.5861 |
| SNLSR [38] | ×4 | 1.59 | 13.44 | 28.5555 | 0.8940 | 6.6835 |
| CST [43] | ×4 | 3.30 | 28.36 | 28.9894 | 0.9075 | 6.4145 |
| EDLGFS | ×4 | 7.85 | 47.82 | 29.4384 | 0.9149 | 6.2443 |
| Bicubic | ×8 | – | – | 24.3234 | 0.6416 | 9.3097 |
| 3D-FCNN [24] | ×8 | 0.039 | 262.70 | 24.9446 | 0.7297 | 8.8270 |
| MCNet [36] | ×8 | 2.96 | 8405.54 | 25.1314 | 0.7473 | 8.7901 |
| LN-atten-CNN [34] | ×8 | 10.29 | 4920.97 | 25.0883 | 0.7446 | 8.8178 |
| G-RDN [46] | ×8 | 2.33 | 20.12 | 25.1309 | 0.7488 | 8.7251 |
| MSDformer [41] | ×8 | 15.96 | 96.27 | 25.1389 | 0.7435 | 8.6188 |
| SNLSR [38] | ×8 | 1.74 | 10.83 | 25.1438 | 0.7473 | 8.6372 |
| CST [43] | ×8 | 3.63 | 21.19 | 25.1965 | 0.7518 | 8.5790 |
| EDLGFS | ×8 | 8.18 | 25.90 | 25.5081 | 0.7687 | 8.2387 |
Bold represents the best. Underlined represents the second-best.
Table 4. Performance comparison of the Chikusei test dataset at various scales.
| Method | Scale | Param (M) | GFLOPs | PSNR | SSIM | SAM |
|---|---|---|---|---|---|---|
| Bicubic | ×2 | – | – | 34.2497 | 0.9693 | 2.6459 |
| 3D-FCNN [24] | ×2 | 0.039 | 329.66 | 37.5895 | 0.9858 | 2.0777 |
| MCNet [36] | ×2 | 1.93 | 5275.16 | 38.9705 | 0.9893 | 1.8702 |
| LN-atten-CNN [34] | ×2 | 9.78 | 17,057.90 | 38.9609 | 0.9893 | 1.8770 |
| G-RDN [46] | ×2 | 2.43 | 58.28 | 39.1192 | 0.9895 | 1.8678 |
| MSDformer [41] | ×2 | 12.31 | 494.48 | 39.6503 | 0.9905 | 1.7618 |
| SNLSR [38] | ×2 | 1.50 | 41.08 | 39.3613 | 0.9892 | 1.8795 |
| CST [43] | ×2 | 3.04 | 60.34 | 39.7611 | 0.9907 | 1.7507 |
| EDLGFS | ×2 | 7.59 | 147.88 | 39.8614 | 0.9909 | 1.7243 |
| Bicubic | ×4 | – | – | 29.2219 | 0.8975 | 4.5112 |
| 3D-FCNN [24] | ×4 | 0.039 | 329.66 | 30.5396 | 0.9303 | 3.9475 |
| MCNet [36] | ×4 | 2.17 | 4628.20 | 31.5212 | 0.9445 | 3.5525 |
| LN-atten-CNN [34] | ×4 | 9.90 | 5919.18 | 31.4605 | 0.9437 | 3.5722 |
| G-RDN [46] | ×4 | 2.43 | 32.51 | 31.5824 | 0.9451 | 3.5622 |
| MSDformer [41] | ×4 | 14.38 | 184.77 | 31.9485 | 0.9496 | 3.2910 |
| SNLSR [38] | ×4 | 1.65 | 13.72 | 31.7918 | 0.9470 | 3.3576 |
| CST [43] | ×4 | 3.37 | 31.39 | 32.0329 | 0.9506 | 3.2436 |
| EDLGFS | ×4 | 7.92 | 50.86 | 32.1864 | 0.9524 | 3.2159 |
| Bicubic | ×8 | – | – | 26.3401 | 0.7845 | 6.4412 |
| 3D-FCNN [24] | ×8 | 0.039 | 329.66 | 26.9218 | 0.8278 | 5.9352 |
| MCNet [36] | ×8 | 2.96 | 10,548.13 | 27.3872 | 0.8466 | 5.5592 |
| LN-atten-CNN [34] | ×8 | 10.29 | 6175.33 | 27.3448 | 0.8456 | 5.6082 |
| G-RDN [46] | ×8 | 2.43 | 26.06 | 27.3936 | 0.8474 | 5.5645 |
| MSDformer [41] | ×8 | 16.46 | 107.35 | 27.4727 | 0.8511 | 5.3929 |
| SNLSR [38] | ×8 | 1.80 | 10.94 | 27.6038 | 0.8542 | 5.3245 |
| CST [43] | ×8 | 3.70 | 24.16 | 27.6039 | 0.8548 | 5.2503 |
| EDLGFS | ×8 | 8.25 | 28.87 | 27.6864 | 0.8595 | 5.1611 |
Bold represents the best. Underlined represents the second-best.
Table 5. Quantitative evaluation of the number of LGFSSs in the Houston test dataset (scale ×4).
| Number (N) | Param (M) | GFLOPs | PSNR | SSIM | SAM |
|---|---|---|---|---|---|
| 3 | 6.00 | 34.32 | 33.2323 | 0.9861 | 2.1258 |
| 4 | 7.71 | 41.52 | 33.2695 | 0.9862 | 2.1222 |
| 5 | 9.43 | 48.71 | 33.2289 | 0.9860 | 2.1457 |
| 6 | 11.14 | 55.91 | 33.0614 | 0.9855 | 2.1830 |
Bold represents the best.
Table 6. Break-down ablation study. We report the testing results on the Houston test dataset (scale ×4 and scale ×8).
| Edge Distilled | LGFS | Learnable Weights | Scale | Param (M) | GFLOPs | PSNR | SSIM | SAM |
|---|---|---|---|---|---|---|---|---|
| √ | √ | √ | ×4 | 7.71 | 41.52 | 33.2471 ± 0.0219 | 0.9861 ± 0.0001 | 2.1266 ± 0.0075 |
| × | √ | × | ×4 | 7.71 | 41.52 | 33.0910 ± 0.0285 | 0.9857 ± 0.0001 | 2.1189 ± 0.0083 |
| √ | × | √ | ×4 | 2.55 | 19.57 | 33.1917 ± 0.0330 | 0.9859 ± 0.0001 | 2.1341 ± 0.0019 |
| √ | √ | × | ×4 | 7.71 | 41.52 | 33.2129 ± 0.0231 | 0.9860 ± 0.0001 | 2.1455 ± 0.0099 |
| √ | √ | √ | ×8 | 8.04 | 19.74 | 27.1146 ± 0.0237 | 0.9398 ± 0.0002 | 4.0776 ± 0.0350 |
| × | √ | × | ×8 | 8.04 | 19.74 | 26.6879 ± 0.0276 | 0.9339 ± 0.0005 | 4.1574 ± 0.0155 |
| √ | × | √ | ×8 | 2.88 | 14.41 | 26.8651 ± 0.1018 | 0.9361 ± 0.0016 | 4.0776 ± 0.0210 |
| √ | √ | × | ×8 | 8.04 | 19.74 | 27.0779 ± 0.0297 | 0.9395 ± 0.0003 | 4.0815 ± 0.0159 |
Bold represents the best. √ indicates that this module is present; × indicates that this module has been excluded.
Table 7. Break-down ablation study. We report the testing results on the Pavia Center test dataset (scale ×4).
| Edge Distilled | LGFS | Learnable Weights | Scale | Param (M) | GFLOPs | PSNR | SSIM | SAM |
|---|---|---|---|---|---|---|---|---|
| √ | √ | √ | ×4 | 7.85 | 47.82 | 29.4675 ± 0.0178 | 0.9156 ± 0.0004 | 6.2211 ± 0.0133 |
| × | √ | × | ×4 | 7.85 | 47.82 | 29.1791 ± 0.0360 | 0.9106 ± 0.0007 | 6.2816 ± 0.0174 |
| √ | × | √ | ×4 | 2.69 | 25.88 | 29.4004 ± 0.0271 | 0.9141 ± 0.0005 | 6.2862 ± 0.0184 |
| √ | √ | × | ×4 | 7.85 | 47.82 | 29.4406 ± 0.0325 | 0.9151 ± 0.0008 | 6.2571 ± 0.0355 |
Bold represents the best. √ indicates that this module is present; × indicates that this module has been excluded.
Table 8. Break-down ablation study. We report the testing results on the Chikusei test dataset (scale ×4).
| Edge Distilled | LGFS | Learnable Weights | Scale | Param (M) | GFLOPs | PSNR | SSIM | SAM |
|---|---|---|---|---|---|---|---|---|
| √ | √ | √ | ×4 | 7.92 | 50.86 | 32.1509 ± 0.0333 | 0.9519 ± 0.0004 | 3.2037 ± 0.0118 |
| × | √ | × | ×4 | 7.92 | 50.86 | 31.9663 ± 0.0360 | 0.9501 ± 0.0003 | 3.2488 ± 0.0106 |
| √ | × | √ | ×4 | 2.75 | 28.92 | 32.1039 ± 0.0342 | 0.9514 ± 0.0003 | 3.2181 ± 0.0132 |
| √ | √ | × | ×4 | 7.92 | 50.86 | 32.1283 ± 0.0314 | 0.9517 ± 0.0004 | 3.2071 ± 0.0087 |
Bold represents the best. √ indicates that this module is present; × indicates that this module has been excluded.
Table 9. Quantitative evaluation of the different convolution kernel sizes of LGFS on the Houston test dataset (scale ×4).
| Kernel Size | Param (M) | GFLOPs | PSNR | SSIM | SAM |
|---|---|---|---|---|---|
| (3 × 3, 3 × 3) | 5.36 | 31.85 | 33.1820 | 0.9859 | 2.1543 |
| (5 × 5, 3 × 3) | 7.71 | 41.52 | 33.2695 | 0.9862 | 2.1222 |
| (7 × 7, 3 × 3) | 11.25 | 56.01 | 33.1853 | 0.9859 | 2.1246 |
| (7 × 7, 5 × 5) | 13.61 | 65.68 | 33.1045 | 0.9857 | 2.1464 |
Bold represents the best.
Table 10. Quantitative evaluation of the different initial weights of loss function on the Houston test dataset (scale ×4).
| (λ1, λ2) | PSNR | SSIM | SAM |
|---|---|---|---|
| (0.95, 0.05) | 33.2695 | 0.9862 | 2.1222 |
| (0.9, 0.1) | 33.1545 | 0.9858 | 2.1666 |
| (0.85, 0.15) | 32.9368 | 0.9850 | 2.2003 |
| (0.8, 0.2) | 32.7982 | 0.9845 | 2.2214 |
Bold represents the best.
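Table 10 varies only the initial weights (λ1, λ2) of the two loss terms; this excerpt does not spell out the dynamic loss mechanism itself. One common way to keep two learnable weights positive and summing to one is a softmax over raw parameters, initialised so training starts from the chosen pair, e.g. (0.95, 0.05). A sketch under that assumption (all function names and the combination rule are hypothetical, not the paper's definition):

```python
import numpy as np

def normalized_weights(theta):
    """Softmax over raw parameters theta -> positive weights summing to 1."""
    e = np.exp(theta - np.max(theta))  # shift for numerical stability
    return e / e.sum()

def init_theta(lambdas):
    """Invert the softmax (up to an additive constant) so that the initial
    normalized weights equal the desired (lambda_1, lambda_2)."""
    return np.log(np.asarray(lambdas, dtype=np.float64))

theta = init_theta([0.95, 0.05])     # start from Table 10's best setting
w = normalized_weights(theta)        # -> [0.95, 0.05]
loss = w[0] * 0.8 + w[1] * 0.3       # hypothetical SR and edge loss values
```

During training, `theta` would be updated by the optimizer along with the network weights, letting the balance between the loss terms adapt.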
Table 11. Comparison of the SR reconstruction performance of the proposed EDLGFS under different degradation conditions.
| Degradation Type | PSNR | SSIM | SAM |
|---|---|---|---|
| Raw Data | 33.2695 | 0.9862 | 2.1222 |
| Noisy Data | 32.9522 | 0.9852 | 2.1688 |
| Random Degradation | 32.7938 | 0.9849 | 2.2020 |
Bold represents the best.
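Table 11 tests robustness to degraded inputs, but the exact degradation settings are not given in this excerpt. A toy numpy sketch of the kind of corruption involved (the noise level, the random sigma range, and ×2 average-pool downsampling are all our assumptions, purely for illustration):

```python
import numpy as np

def downsample2(img):
    """Toy ×2 spatial downsampling by 2x2 average pooling on an (H, W, C) cube."""
    h, w, c = img.shape
    return img.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

rng = np.random.default_rng(42)
clean = rng.random((16, 16, 8))  # toy HR hyperspectral patch, values in [0, 1)

# "Noisy Data": additive Gaussian noise (sigma = 0.02 is an assumption).
noisy = np.clip(clean + rng.normal(0.0, 0.02, clean.shape), 0.0, 1.0)

# "Random Degradation": a randomly drawn noise level per image before
# downsampling (illustrative of the idea, not the paper's exact protocol).
sigma = rng.uniform(0.0, 0.05)
lr = downsample2(np.clip(clean + rng.normal(0.0, sigma, clean.shape), 0.0, 1.0))
```

Feeding `lr` (or `noisy` after downsampling) to the network instead of a cleanly downsampled input yields the degraded-condition comparisons of Table 11.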

Share and Cite

MDPI and ACS Style

Li, X.; Fan, M.; Zheng, X.; Shang, J. Edge-Distilled and Local–Global Feature Selection Network for Hyperspectral Image Super-Resolution. Sensors 2026, 26, 1055. https://doi.org/10.3390/s26031055

