Degradation-Aware Dynamic Kernel Generation Network for Hyperspectral Super-Resolution
Abstract
1. Introduction
- A dynamic blur-kernel generation module is designed. By splitting a dual-channel latent variable, it encodes spectral degradation information and spatial degradation information separately, strengthening the model's capacity for dynamic feature decoupling and fusion.
- A dual-channel feature separation module is designed to decouple the spectral and spatial control sub-vectors. After separation, each spectral band is assigned an independent feature channel that is extended across the spatial dimensions via spatial broadcasting with learnable weights; likewise, each spatial location is assigned an independent feature channel that is extended across the spectral dimension via spectral broadcasting with learnable weights.
- A spectral–spatial dynamic cross-attention fusion module is designed to deeply couple dynamic kernel estimation with cross-attention. In one direction, spectral features guide spatial kernel optimization: attention weights computed from spectral degradation information adapt the kernel parameters at edge spatial positions to the spectral continuity of the corresponding band. In the other direction, spatial features constrain spectral kernel adjustment: attention weights computed from spatial degradation information adapt the spectral kernel parameters to local spatial structure. Together these achieve bidirectional enhancement, with spectral information guiding spatial features and spatial information guiding spectral features.
- A multi-scale spectral–spatial collaborative constraint (MSSCC) loss function is designed. Through a total dynamic-kernel loss, a total multi-scale spectral loss, and a total multi-scale spatial loss, it ensures modeling rationality, spectral continuity, and spatial detail fidelity, and enables end-to-end optimization of degradation modeling and image restoration.
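The dual-channel split and broadcast-based fusion in the first two contributions can be sketched as follows. This is a minimal numpy stand-in, not the paper's implementation: the latent dimension, band count, projection matrices, and fusion weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

C, H, W = 31, 8, 8   # bands and spatial size (illustrative)
D = 64               # latent dimension (assumed)

# Dual-channel latent: one half encodes spectral degradation,
# the other half encodes spatial degradation (decoupled sub-vectors).
z = rng.standard_normal(D)
z_spec, z_spat = np.split(z, 2)            # (D/2,), (D/2,)

# Project each sub-vector to its own channel layout
# (hypothetical learnable projections, random here for illustration).
P_spec = rng.standard_normal((C, D // 2)) * 0.1      # one channel per band
P_spat = rng.standard_normal((H * W, D // 2)) * 0.1  # one channel per location

f_spec = P_spec @ z_spec                   # (C,)   per-band features
f_spat = (P_spat @ z_spat).reshape(H, W)   # (H, W) per-location features

# Broadcast each factor over the other axis and fuse with
# learnable scalar weights (assumed initialised to 0.5 each).
w_spec, w_spat = 0.5, 0.5
fused = w_spec * f_spec[:, None, None] + w_spat * f_spat[None, :, :]
assert fused.shape == (C, H, W)
```

The key point is that after the split, the spectral sub-vector only ever parameterises per-band channels and the spatial sub-vector only per-location channels; broadcasting recombines them without re-entangling the two factors.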
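One direction of the bidirectional cross-attention (spectral features guiding spatial kernel refinement) can be sketched in the same spirit. The feature dimensions, kernel size, and scaled-dot-product form are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(1)
C, d, k = 31, 16, 9   # bands, feature dim, flattened 3x3 kernel (assumed)

f_spec = rng.standard_normal((C, d))    # spectral degradation features (queries)
f_spat = rng.standard_normal((C, d))    # spatial features per band (keys)
kernels = rng.standard_normal((C, k))   # base per-band blur kernels (values)

# Spectral features attend over spatial features; the attention
# weights re-mix the per-band kernels so spectrally similar bands
# share kernel statistics (one direction of the bidirectional scheme).
attn = softmax((f_spec @ f_spat.T) / np.sqrt(d), axis=-1)   # (C, C)
refined = attn @ kernels                                    # (C, k)

# Normalise each refined kernel to sum to 1, a valid blur kernel.
refined = softmax(refined, axis=-1)
assert np.allclose(refined.sum(axis=-1), 1.0)
```

The reverse direction (spatial features constraining spectral kernel adjustment) would swap the roles of queries and keys over the same machinery.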
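The MSSCC loss in the last contribution combines three totals with scalar weights; the hyperparameter study in Section 4 reports α = 0.3, β = 0.4, γ = 0.3 as the best combination. A minimal sketch, assuming the three per-term totals are already computed as scalars:

```python
def msscc_loss(l_kernel, l_spectral, l_spatial,
               alpha=0.3, beta=0.4, gamma=0.3):
    """Weighted sum of the total dynamic-kernel loss, total multi-scale
    spectral loss, and total multi-scale spatial loss. Default weights
    are the best combination from the hyperparameter study."""
    return alpha * l_kernel + beta * l_spectral + gamma * l_spatial

# Example with dummy per-term loss values.
total = msscc_loss(0.10, 0.20, 0.30)
assert abs(total - 0.20) < 1e-12   # 0.3*0.10 + 0.4*0.20 + 0.3*0.30
```

Because all three terms are differentiable functions of the network output, a single backward pass through this sum optimizes degradation modeling and image restoration end to end.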
2. Related Work
2.1. Traditional SSR Algorithms
2.1.1. Operator-Based Methods
2.1.2. Degradation Model-Based Methods
2.2. Deep Learning-Based SSR
3. Methods
3.1. Degradation Model
3.2. DADFN
3.2.1. Dual-Channel Split Module (DCSM)
3.2.2. Spectral–Spatial Feature Alignment Module (SSFAM)
3.2.3. Spectral–Spatial Dynamic Cross-Attention Fusion Module (SSDCAF)
3.3. Loss Function
3.3.1. Total Dynamic Kernel Loss for Blur Kernel Dynamic Generation
3.3.2. Total Multi-Scale Spectral Loss
3.3.3. Total Multi-Scale Spatial Loss
4. Experiments
4.1. Datasets and Experimental Settings
4.1.1. Dataset Construction
4.1.2. Experimental Settings
4.1.3. Evaluation Metrics
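The result tables report SAM in units of 10⁻² rad and SID in units of 10⁻². As a reference for these two spectral metrics, a minimal sketch of their standard definitions (per-spectrum; in practice they are averaged over all pixels):

```python
import numpy as np

def sam(x, y, eps=1e-12):
    """Spectral Angle Mapper: angle (radians) between two spectra."""
    cos = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + eps)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def sid(x, y, eps=1e-12):
    """Spectral Information Divergence: symmetric KL divergence
    between probability-normalised spectra."""
    p = x / (x.sum() + eps) + eps
    q = y / (y.sum() + eps) + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

spec = np.array([0.2, 0.5, 0.3])
assert sam(spec, spec) < 1e-4          # identical spectra: near-zero angle
assert sid(spec, spec) < 1e-9          # and zero divergence
assert sam(spec, 2.0 * spec) < 1e-4    # SAM is invariant to intensity scale
```

Lower is better for both: SAM captures angular (shape) distortion of each spectrum, while SID captures information-theoretic divergence between spectral distributions.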
4.2. Experimental Results
4.2.1. Quantitative Evaluation Results
4.2.2. Qualitative Evaluation Results
4.2.3. Ablation Experiments
Discussion on the Downsampling Hyperparameters
Discussion on the Effectiveness of Network Modules
Discussion on Loss Function Hyperparameters
Complexity Analysis
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References






| Dataset | Number of Images | Spatial Resolution | Spectral Range (nm) |
|---|---|---|---|
| CAVE | 32 | 512 × 512 | 400–700 |
| Harvard | 50 | 1024 × 1024 | 420–720 |
| Baseline Methods | PSNR (dB) | SSIM | SAM (×10−2 Rad) | SID (×10−2) |
|---|---|---|---|---|
| EDSR | 30.21 | 0.892 | 8.76 | 6.21 |
| RCAN | 31.56 | 0.915 | 7.50 | 5.83 |
| DBSR | 32.14 | 0.923 | 6.89 | 5.12 |
| KernelGAN | 32.87 | 0.931 | 6.15 | 4.87 |
| DADFN | 34.52 | 0.958 | 4.23 | 2.56 |
| Baseline Methods | PSNR (dB) | SSIM | SAM (×10−2 Rad) | SID (×10−2) |
|---|---|---|---|---|
| EDSR | 28.93 | 0.867 | 9.54 | 7.12 |
| RCAN | 30.15 | 0.889 | 8.21 | 6.53 |
| DBSR | 30.87 | 0.896 | 7.58 | 5.98 |
| KernelGAN | 31.62 | 0.908 | 6.93 | 5.42 |
| DADFN | 33.21 | 0.943 | 5.12 | 2.85 |
| Method | PSNR (dB), s = 2 | SID (×10−2), s = 2 | PSNR (dB), s = 4 | SID (×10−2), s = 4 |
|---|---|---|---|---|
| EDSR | 30.21 | 6.21 | 27.58 | 9.87 |
| RCAN | 31.56 | 5.83 | 28.93 | 8.92 |
| DBSR | 32.14 | 5.12 | 28.45 | 7.65 |
| KernelGAN | 32.87 | 4.87 | 30.12 | 7.13 |
| DADFN | 34.52 | 2.56 | 33.87 | 3.21 |
| Method | PSNR (dB) | SSIM | SAM (×10−2 Rad) | SID (×10−2) |
|---|---|---|---|---|
| DADFN | 34.52 | 0.958 | 4.23 | 2.56 |
| No DCS | 29.93 | 0.865 | 6.75 | 6.02 |
| No MLP | 31.12 | 0.901 | 4.80 | 3.25 |
| No SSDCAF | 30.21 | 0.878 | 5.52 | 4.11 |
| Hyperparameter Combination | PSNR (dB) | SSIM | SAM (×10−2 Rad) | SID (×10−2) |
|---|---|---|---|---|
| α = 0.2, β = 0.3, γ = 0.4 | 34.36 | 0.951 | 4.33 | 2.63 |
| α = 0.2, β = 0.4, γ = 0.4 | 34.38 | 0.953 | 4.31 | 2.60 |
| α = 0.2, β = 0.5, γ = 0.3 | 34.43 | 0.955 | 4.35 | 2.60 |
| α = 0.2, β = 0.6, γ = 0.2 | 31.12 | 0.958 | 4.30 | 2.58 |
| α = 0.3, β = 0.3, γ = 0.4 | 34.48 | 0.951 | 4.28 | 2.56 |
| α = 0.3, β = 0.4, γ = 0.3 | 34.52 | 0.958 | 4.23 | 2.56 |
| α = 0.3, β = 0.5, γ = 0.2 | 34.51 | 0.954 | 4.24 | 2.67 |
| α = 0.4, β = 0.3, γ = 0.3 | 34.48 | 0.956 | 4.32 | 2.71 |
| α = 0.5, β = 0.3, γ = 0.2 | 34.41 | 0.956 | 4.41 | 2.69 |
| Hyperparameter Combination | PSNR (dB) | SSIM | SAM (×10−2 Rad) | SID (×10−2) |
|---|---|---|---|---|
|  | 33.78 | 0.958 | 4.76 | 2.61 |
|  | 32.15 | 0.963 | 6.89 | 2.58 |
|  | 33.92 | 0.915 | 4.18 | 2.60 |
|  | 34.52 | 0.958 | 4.23 | 2.56 |
| Hyperparameter Combination | PSNR (dB) | SSIM | SAM (×10−2 Rad) | SID (×10−2) |
|---|---|---|---|---|
|  | 34.48 | 0.957 | 4.26 | 2.56 |
|  | 34.42 | 0.955 | 4.18 | 2.56 |
|  | 34.35 | 0.955 | 4.32 | 2.60 |
|  | 34.52 | 0.958 | 4.23 | 2.56 |
|  | 34.40 | 0.954 | 4.23 | 2.60 |
|  | 34.20 | 0.952 | 4.39 | 2.68 |
|  | 34.25 | 0.953 | 4.36 | 2.65 |
| Method | Parameter Count (M) | FLOPs (G) | Training Time (h) | Inference Time (ms) |
|---|---|---|---|---|
| EDSR | 43.2 | 18.7 | 12.3 | 28.5 |
| RCAN | 82.6 | 35.9 | 21.7 | 45.2 |
| DBSR | 67.8 | 29.3 | 18.5 | 39.7 |
| KernelGAN | 75.4 | 32.6 | 20.1 | 42.3 |
| DADFN | 78.9 | 33.8 | 21.2 | 44.6 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Liu, H.; Liang, H.; Wang, Q. Degradation-Aware Dynamic Kernel Generation Network for Hyperspectral Super-Resolution. Sensors 2026, 26, 1362. https://doi.org/10.3390/s26041362

