Edge-Distilled and Local–Global Feature Selection Network for Hyperspectral Image Super-Resolution
Abstract
1. Introduction
- (1) We propose a super-resolution network with an edge-distillation architecture. The auxiliary edge branch transfers knowledge only during training and is removed at inference, guiding the main branch to learn edge details without increasing inference complexity.
- (2) We design a Local–Global Feature Selection (LGFS) module that combines convolutions of different kernel sizes with self-attention, capturing both local and global features through efficient feature selection.
- (3) We introduce a dynamic edge loss mechanism. By assigning learnable weights to the individual loss terms, it adaptively balances edge-detail preservation against overall reconstruction, improving training stability and the model's reconstruction performance.
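The train-only auxiliary branch in contribution (1) can be illustrated with a minimal NumPy sketch. This is not the paper's architecture: `ToySRNet`, the noise stand-in for the main branch, and the finite-difference edge extractor are hypothetical placeholders that only demonstrate how an auxiliary edge output can exist during training and disappear at inference.

```python
import numpy as np

rng = np.random.default_rng(0)

def sobel_like_edges(img):
    """Toy edge map via central finite differences (illustrative only)."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    return np.hypot(gx, gy)

class ToySRNet:
    """Main branch always runs; the auxiliary edge branch is invoked only
    when training=True, so inference-time cost is unchanged."""
    def forward(self, lr_img, training=False):
        # Stand-in for the main super-resolution branch.
        sr = lr_img + 0.1 * rng.standard_normal(lr_img.shape)
        if training:
            edge_pred = sobel_like_edges(sr)  # auxiliary edge branch (training only)
            return sr, edge_pred
        return sr                             # edge branch removed at inference

net = ToySRNet()
x = rng.random((8, 8))
sr, edge = net.forward(x, training=True)      # two outputs during training
sr_only = net.forward(x, training=False)      # single output at inference
```

The point of the pattern is that the edge head adds supervision signal (and gradient flow) during training while leaving the deployed forward pass untouched.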
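The feature-selection idea in contribution (2), fusing multi-kernel convolution and self-attention outputs, can be sketched as a softmax gate over branch features. The branch tensors below are random stand-ins for the 3 × 3 conv, 5 × 5 conv, and attention outputs, and `select_features` is an illustrative name, not the module's actual operator.

```python
import numpy as np

def softmax(z, axis=0):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def select_features(branches, gate_logits):
    """Fuse per-branch feature maps with softmax selection weights.

    branches:    (B, H, W) stack of branch outputs
    gate_logits: (B,) learnable selection logits
    """
    w = softmax(gate_logits)                  # selection weights sum to 1
    return np.tensordot(w, branches, axes=(0, 0))

# Stand-ins for three branch outputs (local 3x3, local 5x5, global attention).
rng = np.random.default_rng(1)
feats = rng.random((3, 4, 4))
logits = np.array([0.2, 0.5, 1.0])            # would be learned in practice
fused = select_features(feats, logits)
```

Because the gate is a convex combination, the fused map stays within the per-pixel range of the branch features; learning the logits lets the network emphasize local or global information per configuration.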
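One plausible reading of the dynamic loss in contribution (3): keep the two weights learnable as logits and normalize them with a softmax so they remain positive and sum to one. The softmax parameterization is an assumption here; the text only states that the weights are learnable. Initial weights of roughly (0.95, 0.05) follow the ablation in Section 4.6.4.

```python
import numpy as np

def dynamic_loss(l_rec, l_edge, logits):
    """Combine reconstruction and edge losses with learnable weights.

    The logits are softmax-normalized so the weights stay positive and
    sum to 1, which keeps the balance between terms well-conditioned.
    """
    z = logits - logits.max()
    w = np.exp(z) / np.exp(z).sum()
    return w[0] * l_rec + w[1] * l_edge, w

# log() turns the desired initial weights into logits that the softmax
# maps back to exactly those weights at step 0.
logits = np.log(np.array([0.95, 0.05]))
total, w = dynamic_loss(l_rec=0.8, l_edge=0.3, logits=logits)
```

During training the logits would be updated by the optimizer together with the network parameters, letting the balance between edge preservation and overall reconstruction adapt over the course of training.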
2. Related Work
2.1. CNN-Based Single HSISR
2.2. Transformer-Based Single HSISR
2.3. Edge-Guided Single Image SR
3. Materials and Methods
3.1. Overall Network
3.2. Local–Global Feature Selection (LGFS)
3.3. Dynamic Loss Mechanism
4. Experiments and Results
4.1. Datasets
- (1) Houston
- (2) Pavia Center
- (3) Chikusei
4.2. Evaluation Metrics and Training Details
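PSNR and SAM, reported in the tables below alongside SSIM, follow their standard definitions: PSNR is 10·log10(MAX²/MSE) in dB, and SAM is the per-pixel spectral angle averaged over the image, in degrees. The NumPy sketch below (array shapes and names are illustrative, not the authors' evaluation code) computes both for a hyperspectral cube.

```python
import numpy as np

def psnr(x, y, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def sam(x, y, eps=1e-12):
    """Spectral angle mapper in degrees, averaged over pixels.

    x, y: (H, W, C) hyperspectral cubes; the angle is taken between
    the C-dimensional spectral vectors at each pixel.
    """
    num = np.sum(x * y, axis=-1)
    den = np.linalg.norm(x, axis=-1) * np.linalg.norm(y, axis=-1) + eps
    ang = np.arccos(np.clip(num / den, -1.0, 1.0))
    return np.degrees(ang).mean()

rng = np.random.default_rng(0)
gt = rng.random((8, 8, 16))                                   # toy ground truth
pred = np.clip(gt + 0.01 * rng.standard_normal(gt.shape), 0, 1)  # toy prediction
```

Lower SAM means better spectral fidelity, while higher PSNR/SSIM mean better spatial reconstruction; the tables report all three per scale factor.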
4.3. Results on Houston Dataset
4.4. Results on Pavia Center Dataset
4.5. Results on Chikusei Dataset
4.6. Ablation Study
4.6.1. Ablation Study on the Number of LGFSs
4.6.2. Break-Down Ablation
4.6.3. Ablation Study on the Different Convolution Kernel Sizes of LGFS
4.6.4. Ablation Study on the Different Initial Weights of Loss Function
4.6.5. Robustness Analysis Against Degradations
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral Remote Sensing Data Analysis and Future Challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36.
- Yu, H.; Shang, X.; Song, M.; Hu, J.; Jiao, T.; Guo, Q. Union of Class-Dependent Collaborative Representation Based on Maximum Margin Projection for Hyperspectral Imagery Classification. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2021, 14, 553–566.
- Xu, Y.; Zhang, L.; Du, B.; Zhang, L. Hyperspectral Anomaly Detection Based on Machine Learning: An Overview. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2022, 15, 3351–3364.
- Tan, Y.; Lu, L.; Bruzzone, L.; Guan, R.; Chang, Z.; Yang, C. Hyperspectral band selection for lithologic discrimination and geological mapping. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2020, 13, 471–486.
- Lu, G.; Fei, B. Medical hyperspectral imaging: A review. J. Biomed. Opt. 2014, 19, 010901.
- Sun, H.; Cao, Q.; Meng, F.; Xu, J.; Cheng, M. Spatial-Channel Multiscale Transformer Network for Hyperspectral Unmixing. Sensors 2025, 25, 4493.
- Huo, Y.; Dong, Y.; Wang, C.; Zhang, M.; Wang, H. Multi-scale memory network with separation training for hyperspectral anomaly detection. Inf. Process. Manag. 2026, 63, 104494.
- Landgrebe, D.A.; Serpico, S.B.; Crawford, M.M.; Singhroy, V. Introduction to the special issue on analysis of hyperspectral image data. IEEE Trans. Geosci. Remote Sens. 2001, 39, 1343–1345.
- Wang, X.; Hu, Q.; Cheng, Y.; Ma, J. Hyperspectral Image Super-Resolution Meets Deep Learning: A Survey and Perspective. IEEE/CAA J. Autom. Sin. 2023, 10, 1668–1691.
- Li, S.; Dian, R.; Fang, L. Fusing hyperspectral and multispectral images via coupled sparse tensor factorization. IEEE Trans. Image Process. 2018, 27, 4118–4130.
- Li, J.; Hong, D.; Gao, L.; Yao, J.; Zheng, K.; Zhang, B.; Chanussot, J. Deep learning in multimodal remote sensing data fusion: A comprehensive review. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102926.
- Li, Q.; Wang, Q.; Li, X. Exploring the relationship between 2D/3D convolution for hyperspectral image super-resolution. IEEE Trans. Geosci. Remote Sens. 2021, 59, 8693–8703.
- Wang, X.; Ma, J.; Jiang, J.; Zhang, X.-P. Dilated projection correction network based on autoencoder for hyperspectral image super-resolution. Neural Netw. 2022, 146, 107–119.
- Li, J.; Yuan, Q.; Shen, H.; Meng, X.; Zhang, L. Hyperspectral image super-resolution by spectral mixture analysis and spatial–spectral group sparsity. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1250–1254.
- Wang, Y.; Chen, X.; Han, Z.; He, S. Hyperspectral image super-resolution via nonlocal low-rank tensor approximation and total variation regularization. Remote Sens. 2017, 9, 1286.
- Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual Dense Network for Image Super-Resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018.
- Anwar, S.; Khan, S.; Barnes, N. A deep journey into super-resolution: A survey. ACM Comput. Surv. (CSUR) 2020, 53, 60.
- Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a deep convolutional network for image super-resolution. In Computer Vision—ECCV 2014; Springer: Cham, Switzerland, 2014.
- Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017.
- Lim, B.; Son, S.; Kim, H.; Nah, S.; Lee, K.M. Enhanced Deep Residual Networks for Single Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017.
- Li, Y.; Hu, J.; Zhao, X.; Xie, W.; Li, J. Hyperspectral image super-resolution using deep convolutional neural network. Neurocomputing 2017, 266, 29–41.
- Li, Y.; Zhang, L.; Ding, C.; Wei, W.; Zhang, Y. Single Hyperspectral Image Super-Resolution with Grouped Deep Recursive Residual Network. In Proceedings of the IEEE Fourth International Conference on Multimedia Big Data (BigMM), Xi’an, China, 13–16 September 2018.
- Jia, J.; Ji, L.; Zhao, Y.; Geng, X. Hyperspectral image super-resolution with spectral–spatial network. Int. J. Remote Sens. 2018, 39, 7806–7829.
- Mei, S.; Yuan, X.; Ji, J.; Zhang, Y.; Wan, S.; Du, Q. Hyperspectral Image Spatial Super-Resolution via 3D Full Convolutional Neural Network. Remote Sens. 2017, 9, 1139.
- Yang, J.; Zhao, Y.-Q.; Chan, J.C.-W.; Xiao, L. A Multi-Scale Wavelet 3D-CNN for Hyperspectral Image Super-Resolution. Remote Sens. 2019, 11, 1557.
- Li, J.; Cui, R.; Li, Y.; Li, B.; Du, Q.; Ge, C. Multitemporal Hyperspectral Image Super-Resolution through 3D Generative Adversarial Network. In Proceedings of the 2019 10th International Workshop on the Analysis of Multitemporal Remote Sensing Images (MultiTemp), Shanghai, China, 5–7 August 2019.
- Wang, Q.; Li, Q.; Li, X. Spatial-spectral residual network for hyperspectral image super-resolution. arXiv 2020.
- Xu, Q.; Liu, S.; Wang, J.; Jiang, B.; Tang, J. AS3ITransUNet: Spatial–Spectral Interactive Transformer U-Net With Alternating Sampling for Hyperspectral Image Super-Resolution. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5523913.
- Li, M.; Liu, J.; Fu, Y.; Zhang, Y.; Dou, D. Spectral Enhanced Rectangle Transformer for Hyperspectral Image Denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023.
- Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; Timofte, R. SwinIR: Image Restoration Using Swin Transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, QC, Canada, 11–17 October 2021.
- Li, M.; Fu, Y.; Zhang, Y. Spatial-spectral transformer for hyperspectral image denoising. arXiv 2022.
- Long, Y.; Wang, X.; Xu, M.; Zhang, S.; Jiang, S.; Jia, S. Dual self-attention Swin transformer for hyperspectral image super-resolution. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5512012.
- Li, H.; Zhao, F.; Xue, F.; Wang, J.; Liu, Y.; Chen, Y.; Wu, Q.; Tao, J.; Zhang, G.; Xi, D.; et al. Succulent-YOLO: Smart UAV-Assisted Succulent Farmland Monitoring with CLIP-Based YOLOv10 and Mamba Computer Vision. Remote Sens. 2025, 17, 2219.
- Yang, J.; Xiao, L.; Zhao, Y.; Chan, J.C.-W. Hybrid Local and Nonlocal 3-D Attentive CNN for Hyperspectral Image Super-Resolution. IEEE Geosci. Remote Sens. Lett. 2021, 18, 1274–1278.
- Yuan, Y.; Zheng, X.; Lu, X. Hyperspectral image superresolution by transfer learning. IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens. 2017, 10, 1963–1974.
- Li, Q.; Wang, Q.; Li, X. Mixed 2D/3D Convolutional Network for Hyperspectral Image Super-Resolution. Remote Sens. 2020, 12, 1660.
- Liu, Z.; Wang, W.; Ma, Q.; Liu, X.; Jiang, J. Rethinking 3D-CNN in Hyperspectral Image Super-Resolution. Remote Sens. 2023, 15, 2574.
- Hu, Q.; Wang, X.; Jiang, J.; Zhang, X.-P.; Ma, J. Exploring the Spectral Prior for Hyperspectral Image Super-Resolution. IEEE Trans. Image Process. 2024, 33, 5260–5272.
- Li, K.; Van Gool, L.; Dai, D. Test-Time Training for Hyperspectral Image Super-Resolution. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 47, 7231–7242.
- Liu, Y.; Hu, J.; Kang, X.; Luo, J.; Fan, S. Interactformer: Interactive transformer and CNN for hyperspectral image super-resolution. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5531715.
- Chen, S.; Zhang, L.; Zhang, L. MSDformer: Multiscale deformable transformer for hyperspectral image super-resolution. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5525614.
- Zhang, M.; Zhang, C.; Zhang, Q.; Guo, J.; Gao, X.; Zhang, J. ESSAformer: Efficient Transformer for Hyperspectral Image Super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023.
- Chen, S.; Zhang, L.; Zhang, L. Cross-scope spatial-spectral information aggregation for hyperspectral image super-resolution. IEEE Trans. Image Process. 2024, 33, 5878–5891.
- Zhang, M.; Wang, X.; Wu, S.; Wang, Z.; Gong, M.; Zhou, Y.; Jiang, F.; Wu, Y. Spatial-Spectral Aggregation Transformer with Diffusion Prior for Hyperspectral Image Super-Resolution. IEEE Trans. Circuit Syst. Video Technol. 2025, 35, 3557–3572.
- Yang, W.; Feng, J.; Yang, J.; Zhao, F.; Liu, J.; Guo, Z. Deep edge guided recurrent residual learning for image super-resolution. IEEE Trans. Image Process. 2017, 26, 5895–5907.
- Zhao, M.; Ning, J.; Hu, J.; Li, T. Hyperspectral Image Super-Resolution under the Guidance of Deep Gradient Information. Remote Sens. 2021, 13, 2382.
- Wang, Y.; Huang, Z.; Wang, X.; Zhang, S.; Liu, S.; Feng, L. Lightweight Edge-Guided Super-Resolution Network for Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5626714.
- Yu, W.; Luo, M.; Zhou, P.; Si, C.; Zhou, Y.; Wang, X. MetaFormer is Actually What You Need for Vision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022.
- Xia, B.; Hang, Y.; Tian, Y.; Yang, W.; Liao, Q.; Zhou, J. Efficient Non-local Contrastive Attention for Image Super-resolution. Proc. AAAI Conf. Artif. Intell. 2022, 36, 2759–2767.
- Mei, Y.; Fan, Y.; Zhou, Y.; Huang, L.; Huang, T.S.; Shi, H. Image Super-Resolution with Cross-Scale Non-Local Attention and Exhaustive Self-Exemplars Mining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020.
- Mei, Y.; Fan, Y.; Zhou, Y. Image super-resolution with non-local sparse attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021.
- Liu, D.; Wen, B.; Fan, Y.; Loy, C.C.; Huang, T.S. Non-local recurrent network for image restoration. arXiv 2018.
- Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image Super-Resolution Using Very Deep Residual Channel Attention Networks. In Proceedings of the European Conference on Computer Vision (ECCV); Springer: Cham, Switzerland, 2018.
- Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
- Wald, L.; Ranchin, T.; Mangolini, M. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogramm. Eng. Remote Sens. 1997, 63, 691–699.
- Qu, J.; Xu, Z.; Dong, W.; Xiao, S.; Li, Y.; Du, Q. A spatio-spectral fusion method for hyperspectral images using residual hyper-dense network. IEEE Trans. Neural Netw. Learn. Syst. 2022, 35, 2235–2249.
| Method | Batch Size | Epoch | Learning Rate | Optimizer |
|---|---|---|---|---|
| 3D-FCNN [24] | 16 | 100 | 0.00005 | Adam |
| MCNet [36] | 16/8 | 100 | 0.0001 | Adam |
| LN-atten-CNN [34] | 16 | 100 | 0.001 | Adam |
| G-RDN [46] | 16 | 100 | 0.0001 | Adam |
| MSDformer [41] | 32 | 100 | 0.00005 | Adam |
| SNLSR [38] | 8 | 100 | 0.0002 | Adam |
| CST [43] | 32 | 100 | 0.0001 | Adam |
| EDLGFS | 32 | 100 | 0.0001 | Adam |
| Method | Scale | Param (M) | GFLOPs | PSNR | SSIM | SAM |
|---|---|---|---|---|---|---|
| Bicubic | ×2 | - | - | 34.9599 | 0.9905 | 1.6953 |
| 3D-FCNN [24] | ×2 | 0.039 | 123.62 | 37.8865 | 0.9954 | 1.3927 |
| MCNet [36] | ×2 | 1.93 | 1978.18 | 38.9117 | 0.9964 | 1.2032 |
| LN-atten-CNN [34] | ×2 | 9.78 | 6396.71 | 38.8662 | 0.9963 | 1.2117 |
| G-RDN [46] | ×2 | 2.17 | 41.82 | 38.4582 | 0.9960 | 1.3278 |
| MSDformer [41] | ×2 | 10.69 | 273.80 | 39.3512 | 0.9967 | 1.1631 |
| SNLSR [38] | ×2 | 1.33 | 38.05 | 38.8741 | 0.9964 | 1.2509 |
| CST [43] | ×2 | 2.83 | 50.15 | 39.4362 | 0.9968 | 1.1672 |
| EDLGFS | ×2 | 7.38 | 137.68 | 39.5342 | 0.9969 | 1.1346 |
| Bicubic | ×4 | - | - | 29.1727 | 0.9618 | 3.2352 |
| 3D-FCNN [24] | ×4 | 0.039 | 123.62 | 31.1875 | 0.9776 | 2.7072 |
| MCNet [36] | ×4 | 2.17 | 1735.57 | 31.9065 | 0.9808 | 2.5406 |
| LN-atten-CNN [34] | ×4 | 9.90 | 2219.69 | 31.9407 | 0.9811 | 2.5139 |
| G-RDN [46] | ×4 | 2.17 | 16.61 | 31.6579 | 0.9804 | 2.5848 |
| MSDformer [41] | ×4 | 12.77 | 112.62 | 32.2263 | 0.9826 | 2.2550 |
| SNLSR [38] | ×4 | 1.48 | 12.85 | 32.0647 | 0.9817 | 2.3682 |
| CST [43] | ×4 | 3.16 | 22.05 | 33.0054 | 0.9854 | 2.1366 |
| EDLGFS | ×4 | 7.71 | 41.52 | 33.2695 | 0.9862 | 2.1222 |
| Bicubic | ×8 | - | - | 24.7235 | 0.8848 | 5.5512 |
| 3D-FCNN [24] | ×8 | 0.039 | 123.62 | 25.7176 | 0.9172 | 4.9909 |
| MCNet [36] | ×8 | 2.96 | 3955.55 | 26.5633 | 0.9310 | 4.6756 |
| LN-atten-CNN [34] | ×8 | 10.29 | 2315.75 | 26.4459 | 0.9295 | 4.6940 |
| G-RDN [46] | ×8 | 2.17 | 10.31 | 26.4514 | 0.9314 | 4.4292 |
| MSDformer [41] | ×8 | 14.84 | 72.32 | 26.6762 | 0.9338 | 4.2053 |
| SNLSR [38] | ×8 | 1.62 | 10.61 | 26.6095 | 0.9332 | 4.3372 |
| CST [43] | ×8 | 3.49 | 15.03 | 26.7080 | 0.9338 | 4.1420 |
| EDLGFS | ×8 | 8.04 | 19.74 | 27.1169 | 0.9400 | 4.0347 |
| Method | Scale | Param (M) | GFLOPs | PSNR | SSIM | SAM |
|---|---|---|---|---|---|---|
| Bicubic | ×2 | - | - | 31.1088 | 0.9393 | 5.5004 |
| 3D-FCNN [24] | ×2 | 0.039 | 262.70 | 33.9764 | 0.9697 | 4.7763 |
| MCNet [36] | ×2 | 1.93 | 4203.64 | 34.9783 | 0.9754 | 4.4994 |
| LN-atten-CNN [34] | ×2 | 9.78 | 13,593.02 | 34.8647 | 0.9750 | 4.4984 |
| G-RDN [46] | ×2 | 2.33 | 52.10 | 35.1105 | 0.9759 | 4.4454 |
| MSDformer [41] | ×2 | 11.81 | 427.70 | 35.2717 | 0.9767 | 4.3512 |
| SNLSR [38] | ×2 | 1.45 | 40.10 | 35.0469 | 0.9749 | 4.4706 |
| CST [43] | ×2 | 2.97 | 57.02 | 35.8113 | 0.9790 | 4.2460 |
| EDLGFS | ×2 | 7.52 | 144.56 | 35.8670 | 0.9792 | 4.2357 |
| Bicubic | ×4 | - | - | 26.9982 | 0.8308 | 7.4899 |
| 3D-FCNN [24] | ×4 | 0.039 | 262.70 | 28.3387 | 0.8902 | 6.8244 |
| MCNet [36] | ×4 | 2.17 | 3688.09 | 28.5679 | 0.8964 | 6.7685 |
| LN-atten-CNN [34] | ×4 | 9.90 | 4716.85 | 28.6142 | 0.8975 | 6.7592 |
| G-RDN [46] | ×4 | 2.33 | 26.51 | 28.4978 | 0.8956 | 6.7869 |
| MSDformer [41] | ×4 | 13.89 | 162.56 | 28.8341 | 0.9029 | 6.5861 |
| SNLSR [38] | ×4 | 1.59 | 13.44 | 28.5555 | 0.8940 | 6.6835 |
| CST [43] | ×4 | 3.30 | 28.36 | 28.9894 | 0.9075 | 6.4145 |
| EDLGFS | ×4 | 7.85 | 47.82 | 29.4384 | 0.9149 | 6.2443 |
| Bicubic | ×8 | - | - | 24.3234 | 0.6416 | 9.3097 |
| 3D-FCNN [24] | ×8 | 0.039 | 262.70 | 24.9446 | 0.7297 | 8.8270 |
| MCNet [36] | ×8 | 2.96 | 8405.54 | 25.1314 | 0.7473 | 8.7901 |
| LN-atten-CNN [34] | ×8 | 10.29 | 4920.97 | 25.0883 | 0.7446 | 8.8178 |
| G-RDN [46] | ×8 | 2.33 | 20.12 | 25.1309 | 0.7488 | 8.7251 |
| MSDformer [41] | ×8 | 15.96 | 96.27 | 25.1389 | 0.7435 | 8.6188 |
| SNLSR [38] | ×8 | 1.74 | 10.83 | 25.1438 | 0.7473 | 8.6372 |
| CST [43] | ×8 | 3.63 | 21.19 | 25.1965 | 0.7518 | 8.5790 |
| EDLGFS | ×8 | 8.18 | 25.90 | 25.5081 | 0.7687 | 8.2387 |
| Method | Scale | Param (M) | GFLOPs | PSNR | SSIM | SAM |
|---|---|---|---|---|---|---|
| Bicubic | ×2 | - | - | 34.2497 | 0.9693 | 2.6459 |
| 3D-FCNN [24] | ×2 | 0.039 | 329.66 | 37.5895 | 0.9858 | 2.0777 |
| MCNet [36] | ×2 | 1.93 | 5275.16 | 38.9705 | 0.9893 | 1.8702 |
| LN-atten-CNN [34] | ×2 | 9.78 | 17,057.90 | 38.9609 | 0.9893 | 1.8770 |
| G-RDN [46] | ×2 | 2.43 | 58.28 | 39.1192 | 0.9895 | 1.8678 |
| MSDformer [41] | ×2 | 12.31 | 494.48 | 39.6503 | 0.9905 | 1.7618 |
| SNLSR [38] | ×2 | 1.50 | 41.08 | 39.3613 | 0.9892 | 1.8795 |
| CST [43] | ×2 | 3.04 | 60.34 | 39.7611 | 0.9907 | 1.7507 |
| EDLGFS | ×2 | 7.59 | 147.88 | 39.8614 | 0.9909 | 1.7243 |
| Bicubic | ×4 | - | - | 29.2219 | 0.8975 | 4.5112 |
| 3D-FCNN [24] | ×4 | 0.039 | 329.66 | 30.5396 | 0.9303 | 3.9475 |
| MCNet [36] | ×4 | 2.17 | 4628.20 | 31.5212 | 0.9445 | 3.5525 |
| LN-atten-CNN [34] | ×4 | 9.90 | 5919.18 | 31.4605 | 0.9437 | 3.5722 |
| G-RDN [46] | ×4 | 2.43 | 32.51 | 31.5824 | 0.9451 | 3.5622 |
| MSDformer [41] | ×4 | 14.38 | 184.77 | 31.9485 | 0.9496 | 3.2910 |
| SNLSR [38] | ×4 | 1.65 | 13.72 | 31.7918 | 0.9470 | 3.3576 |
| CST [43] | ×4 | 3.37 | 31.39 | 32.0329 | 0.9506 | 3.2436 |
| EDLGFS | ×4 | 7.92 | 50.86 | 32.1864 | 0.9524 | 3.2159 |
| Bicubic | ×8 | - | - | 26.3401 | 0.7845 | 6.4412 |
| 3D-FCNN [24] | ×8 | 0.039 | 329.66 | 26.9218 | 0.8278 | 5.9352 |
| MCNet [36] | ×8 | 2.96 | 10,548.13 | 27.3872 | 0.8466 | 5.5592 |
| LN-atten-CNN [34] | ×8 | 10.29 | 6175.33 | 27.3448 | 0.8456 | 5.6082 |
| G-RDN [46] | ×8 | 2.43 | 26.06 | 27.3936 | 0.8474 | 5.5645 |
| MSDformer [41] | ×8 | 16.46 | 107.35 | 27.4727 | 0.8511 | 5.3929 |
| SNLSR [38] | ×8 | 1.80 | 10.94 | 27.6038 | 0.8542 | 5.3245 |
| CST [43] | ×8 | 3.70 | 24.16 | 27.6039 | 0.8548 | 5.2503 |
| EDLGFS | ×8 | 8.25 | 28.87 | 27.6864 | 0.8595 | 5.1611 |
| Number (N) | Param (M) | GFLOPs | PSNR | SSIM | SAM |
|---|---|---|---|---|---|
| 3 | 6.00 | 34.32 | 33.2323 | 0.9861 | 2.1258 |
| 4 | 7.71 | 41.52 | 33.2695 | 0.9862 | 2.1222 |
| 5 | 9.43 | 48.71 | 33.2289 | 0.9860 | 2.1457 |
| 6 | 11.14 | 55.91 | 33.0614 | 0.9855 | 2.1830 |
| Edge Distilled | LGFS | Learnable Weights | Scale | Param (M) | GFLOPs | PSNR | SSIM | SAM |
|---|---|---|---|---|---|---|---|---|
| √ | √ | √ | ×4 | 7.71 | 41.52 | 33.2471 ± 0.0219 | 0.9861 ± 0.0001 | 2.1266 ± 0.0075 |
| × | √ | × | ×4 | 7.71 | 41.52 | 33.0910 ± 0.0285 | 0.9857 ± 0.0001 | 2.1189 ± 0.0083 |
| √ | × | √ | ×4 | 2.55 | 19.57 | 33.1917 ± 0.0330 | 0.9859 ± 0.0001 | 2.1341 ± 0.0019 |
| √ | √ | × | ×4 | 7.71 | 41.52 | 33.2129 ± 0.0231 | 0.9860 ± 0.0001 | 2.1455 ± 0.0099 |
| √ | √ | √ | ×8 | 8.04 | 19.74 | 27.1146 ± 0.0237 | 0.9398 ± 0.0002 | 4.0776 ± 0.0350 |
| × | √ | × | ×8 | 8.04 | 19.74 | 26.6879 ± 0.0276 | 0.9339 ± 0.0005 | 4.1574 ± 0.0155 |
| √ | × | √ | ×8 | 2.88 | 14.41 | 26.8651 ± 0.1018 | 0.9361 ± 0.0016 | 4.0776 ± 0.0210 |
| √ | √ | × | ×8 | 8.04 | 19.74 | 27.0779 ± 0.0297 | 0.9395 ± 0.0003 | 4.0815 ± 0.0159 |
| Edge Distilled | LGFS | Learnable Weights | Scale | Param (M) | GFLOPs | PSNR | SSIM | SAM |
|---|---|---|---|---|---|---|---|---|
| √ | √ | √ | ×4 | 7.85 | 47.82 | 29.4675 ± 0.0178 | 0.9156 ± 0.0004 | 6.2211 ± 0.0133 |
| × | √ | × | ×4 | 7.85 | 47.82 | 29.1791 ± 0.0360 | 0.9106 ± 0.0007 | 6.2816 ± 0.0174 |
| √ | × | √ | ×4 | 2.69 | 25.88 | 29.4004 ± 0.0271 | 0.9141 ± 0.0005 | 6.2862 ± 0.0184 |
| √ | √ | × | ×4 | 7.85 | 47.82 | 29.4406 ± 0.0325 | 0.9151 ± 0.0008 | 6.2571 ± 0.0355 |
| Edge Distilled | LGFS | Learnable Weights | Scale | Param (M) | GFLOPs | PSNR | SSIM | SAM |
|---|---|---|---|---|---|---|---|---|
| √ | √ | √ | ×4 | 7.92 | 50.86 | 32.1509 ± 0.0333 | 0.9519 ± 0.0004 | 3.2037 ± 0.0118 |
| × | √ | × | ×4 | 7.92 | 50.86 | 31.9663 ± 0.0360 | 0.9501 ± 0.0003 | 3.2488 ± 0.0106 |
| √ | × | √ | ×4 | 2.75 | 28.92 | 32.1039 ± 0.0342 | 0.9514 ± 0.0003 | 3.2181 ± 0.0132 |
| √ | √ | × | ×4 | 7.92 | 50.86 | 32.1283 ± 0.0314 | 0.9517 ± 0.0004 | 3.2071 ± 0.0087 |
| Kernel Size | Param (M) | GFLOPs | PSNR | SSIM | SAM |
|---|---|---|---|---|---|
| (3 × 3, 3 × 3) | 5.36 | 31.85 | 33.1820 | 0.9859 | 2.1543 |
| (5 × 5, 3 × 3) | 7.71 | 41.52 | 33.2695 | 0.9862 | 2.1222 |
| (7 × 7, 3 × 3) | 11.25 | 56.01 | 33.1853 | 0.9859 | 2.1246 |
| (7 × 7, 5 × 5) | 13.61 | 65.68 | 33.1045 | 0.9857 | 2.1464 |
| Initial Weights | PSNR | SSIM | SAM |
|---|---|---|---|
| (0.95, 0.05) | 33.2695 | 0.9862 | 2.1222 |
| (0.9, 0.1) | 33.1545 | 0.9858 | 2.1666 |
| (0.85, 0.15) | 32.9368 | 0.9850 | 2.2003 |
| (0.8, 0.2) | 32.7982 | 0.9845 | 2.2214 |
| Degradation Type | PSNR | SSIM | SAM |
|---|---|---|---|
| Raw Data | 33.2695 | 0.9862 | 2.1222 |
| Noisy Data | 32.9522 | 0.9852 | 2.1688 |
| Random Degradation | 32.7938 | 0.9849 | 2.2020 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Li, X.; Fan, M.; Zheng, X.; Shang, J. Edge-Distilled and Local–Global Feature Selection Network for Hyperspectral Image Super-Resolution. Sensors 2026, 26, 1055. https://doi.org/10.3390/s26031055
