Global-Local-Structure Collaborative Approach for Cross-Domain Reference-Based Image Super-Resolution
Highlights
- A degradation-aware diffusion-based super-resolution framework is proposed, which explicitly models complex and mixed degradations in remote sensing images through adaptive conditional priors.
- A dual-decoder recursive generation strategy effectively balances local detail recovery and global structural consistency, achieving superior robustness under both ideal and blind degradation settings.
- Explicit degradation modeling and structural regularization significantly improve the reliability of super-resolution for real remote sensing scenarios.
- The proposed framework provides a practical and extensible solution for high-fidelity remote sensing image enhancement in downstream Earth observation tasks.
Abstract
1. Introduction
1.1. Background and Challenges
1.2. Related Work
1.3. Motivation and Contributions
- Unlike existing degradation-aware diffusion SR methods that rely on implicit or stochastic degradation conditioning, we propose an explicit degradation-aware modeling module that deterministically encodes multi-source and multi-scale degradation priors and injects them into the diffusion latent space.
- Unlike prior diffusion-based RSISR frameworks that treat local texture enhancement and global structural modeling separately, we design a dual-decoder global–local collaborative framework that tightly couples structural reconstruction with degradation-aware diffusion, enabling progressive and structurally consistent refinement.
- We introduce a static regularization guidance strategy in the diffusion latent space to stabilize structural preservation and improve perceptual quality.
- Extensive experiments on benchmark datasets demonstrate that the proposed method outperforms state-of-the-art approaches under both idealized and realistic degradation scenarios, showing strong robustness and generalization.
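The explicit degradation-aware conditioning described in the first contribution can be pictured as a deterministic channel-wise modulation of the diffusion latent by an encoded degradation prior. The numpy sketch below is illustrative only: the projection matrices `W_gamma` and `W_beta` and the prior dimension are hypothetical, and the paper's actual injection operator may differ.

```python
import numpy as np

def inject_degradation_prior(z, d, W_gamma, W_beta):
    """FiLM-style conditional injection sketch.

    z: diffusion latent of shape (C, H, W)
    d: deterministic degradation prior of shape (D,)
    W_gamma, W_beta: hypothetical (C, D) projections producing per-channel
    scale and shift from the degradation prior.
    """
    gamma = W_gamma @ d   # per-channel scale, shape (C,)
    beta = W_beta @ d     # per-channel shift, shape (C,)
    return gamma[:, None, None] * z + beta[:, None, None]
```

Because the prior is encoded deterministically (no sampling), the same degraded input always yields the same conditioning, which is the property the contribution emphasizes over stochastic degradation conditioning.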
2. Methodology
2.1. Overall Framework
2.2. Degradation-Aware Modeling Module
2.2.1. Lightweight Feature Extractor
2.2.2. Degradation-Aware Channel Recalibration
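One plausible reading of degradation-aware channel recalibration is a squeeze-and-excitation style gate whose input concatenates globally pooled features with the degradation code. This minimal numpy sketch assumes hypothetical MLP weights `W1` and `W2`; it is a sketch of the general technique, not the paper's exact module.

```python
import numpy as np

def degradation_aware_recalibration(feat, d, W1, W2):
    """SE-style channel gate conditioned on a degradation code.

    feat: feature map of shape (C, H, W)
    d: degradation code of shape (D,)
    W1: hypothetical (Hd, C + D) weight of the bottleneck MLP
    W2: hypothetical (C, Hd) weight producing per-channel logits
    """
    pooled = feat.mean(axis=(1, 2))                          # squeeze: (C,)
    h = np.maximum(W1 @ np.concatenate([pooled, d]), 0.0)    # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(W2 @ h)))                   # sigmoid in (0, 1)
    return feat * gate[:, None, None]                        # recalibrate channels
```

The gate lets the network suppress channels dominated by the estimated degradation (e.g., noise-sensitive responses) and amplify the rest.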
2.2.3. Degradation-Aware Conditional Injection
2.3. Dual-Decoder Design and Recursive Generation
2.3.1. Dual-Decoder Design
- (1) Decoding Output Formulation
- (2) Local Decoder
- (3) Global Decoder
- (4) Complementary Fusion Module
- (5) Inference Process and Computational Characteristics
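The complementary fusion of the local (texture) and global (structure) decoder outputs is commonly realized as a pixel-wise soft gate. A minimal numpy sketch under that assumption follows; the gate parameters `w` and `b` are hypothetical stand-ins for a learned fusion layer.

```python
import numpy as np

def fuse_dual_decoders(y_local, y_global, w=(1.0, -1.0), b=0.0):
    """Pixel-wise complementary fusion sketch.

    A sigmoid gate m in (0, 1) decides, per pixel, how much of the
    local-decoder output (fine texture) versus the global-decoder
    output (large-scale structure) is kept.
    """
    logits = w[0] * y_local + w[1] * y_global + b
    m = 1.0 / (1.0 + np.exp(-logits))          # soft gate per pixel
    return m * y_local + (1.0 - m) * y_global  # convex combination
```

Because the fusion is a convex combination, the fused output stays within the range spanned by the two branches, which helps keep global structure intact while admitting local detail.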
2.3.2. Recursive Generation Mechanism
- (1) Time-wise Recursion
- (2) Residual Correction Recursion
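The residual correction recursion can be illustrated in the spirit of iterative back-projection: each pass re-degrades the current estimate, compares it with the low-resolution observation, and adds the upsampled residual back. In the sketch below, a 2x average pool and nearest-neighbor upsampling are placeholders for the framework's learned degradation and upsampling operators.

```python
import numpy as np

def avg_pool2(x):
    """2x average pool: placeholder for the (learned) degradation operator."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def nearest_up2(x):
    """2x nearest-neighbor upsampling: placeholder for the learned upsampler."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def recursive_refinement(y0, lr_obs, n_iters=3):
    """Residual-correction recursion (back-projection style sketch)."""
    y = y0.copy()
    for _ in range(n_iters):
        residual = lr_obs - avg_pool2(y)   # mismatch in LR space
        y = y + nearest_up2(residual)      # corrective update in HR space
    return y
```

Each recursion step enforces consistency with the low-resolution observation, which is one way to read the "progressive refinement" the framework performs across diffusion steps.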
2.4. Static Regularization Guidance
2.4.1. Overall Loss Function
2.4.2. Total Variation Regularization
2.4.3. Gradient Consistency Loss
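Both regularizers named above have standard closed forms: anisotropic total variation sums absolute differences between neighboring pixels, and gradient consistency penalizes the mismatch between the finite-difference gradients of the prediction and the reference. A minimal numpy sketch:

```python
import numpy as np

def total_variation(x):
    """Anisotropic TV: sum of absolute neighbor differences in both axes."""
    return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()

def gradient_consistency(pred, ref):
    """L1 mismatch between the finite-difference gradients of two images."""
    gx = np.abs(np.diff(pred, axis=1) - np.diff(ref, axis=1)).mean()
    gy = np.abs(np.diff(pred, axis=0) - np.diff(ref, axis=0)).mean()
    return gx + gy
```

TV discourages spurious high-frequency artifacts (it is zero on flat regions), while the gradient term anchors edges to the reference, which together match the stated goal of stabilizing structural preservation.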
3. Results
3.1. Datasets and Evaluation Metrics
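Of the reported metrics, PSNR is simple enough to state inline; the sketch below assumes images scaled to [0, 1] (perceptual metrics such as SSIM and LPIPS require reference implementations and are omitted here).

```python
import numpy as np

def psnr(pred, ref, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images on [0, data_range]."""
    mse = np.mean((pred - ref) ** 2)
    if mse == 0:
        return float("inf")       # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```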
3.2. Comparison with Existing Methods
4. Discussion
4.1. Ablation Study
4.2. Further Analysis: Robustness Under Different Degradation Levels
4.3. Computational Efficiency Analysis
4.4. Limitations and Future Work
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Zhang, L.; Zhang, L.; Du, B. Deep learning for remote sensing data: A technical tutorial on the state of the art. IEEE Geosci. Remote Sens. Mag. 2016, 4, 22–40.
- Li, Y.; Qi, F.; Wan, Y. Improvements on bicubic image interpolation. In Proceedings of the 2019 IEEE 4th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chengdu, China, 20–22 December 2019; Volume 1, pp. 1316–1320.
- Keys, R. Cubic convolution interpolation for digital image processing. IEEE Trans. Acoust. Speech Signal Process. 1981, 29, 1153–1160.
- Dong, W.; Zhang, L.; Shi, G.; Wu, X. Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. IEEE Trans. Image Process. 2011, 20, 1838–1857.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
- Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; Timofte, R. SwinIR: Image restoration using Swin Transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 1833–1844.
- Wang, Z.; Cun, X.; Bao, J.; Zhou, W.; Liu, J.; Li, H. Uformer: A general U-shaped Transformer for image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 17683–17693.
- Pereira, G.A.; Hussain, M. A review of transformer-based models for computer vision tasks: Capturing global context and spatial relationships. arXiv 2024, arXiv:2408.15178.
- Wang, X.; Yi, J.; Guo, J.; Song, Y.; Lyu, J.; Xu, J.; Yan, W.; Zhao, J.; Cai, Q.; Min, H. A review of image super-resolution approaches based on deep learning and applications in remote sensing. Remote Sens. 2022, 14, 5423.
- Yang, D.; Li, Z.; Xia, Y.; Chen, Z. Remote sensing image super-resolution: Challenges and approaches. In Proceedings of the 2015 IEEE International Conference on Digital Signal Processing (DSP), Singapore, 21–24 July 2015; pp. 196–200.
- Lim, B.; Son, S.; Kim, H.; Nah, S.; Mu Lee, K. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 136–144.
- Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 286–301.
- Zhang, N.; Wang, Y.; Zhang, X.; Xu, D.; Wang, X.; Ben, G.; Zhao, Z.; Li, Z. A multi-degradation aided method for unsupervised remote sensing image super resolution with convolution neural networks. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14.
- Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268.
- Saharia, C.; Ho, J.; Chan, W.; Salimans, T.; Fleet, D.J.; Norouzi, M. Image super-resolution via iterative refinement. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 4713–4726.
- Dhariwal, P.; Nichol, A. Diffusion models beat GANs on image synthesis. Adv. Neural Inf. Process. Syst. 2021, 34, 8780–8794.
- Song, J.; Meng, C.; Ermon, S. Denoising diffusion implicit models. arXiv 2020, arXiv:2010.02502.
- Johnson, J.; Alahi, A.; Fei-Fei, L. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision; Springer: Cham, Switzerland, 2016; pp. 694–711.
- Ho, J.; Saharia, C.; Chan, W.; Fleet, D.J.; Norouzi, M.; Salimans, T. Cascaded diffusion models for high fidelity image generation. J. Mach. Learn. Res. 2022, 23, 1–33.
- Yue, Z.; Wang, J.; Loy, C.C. ResShift: Efficient diffusion model for image super-resolution by residual shifting. Adv. Neural Inf. Process. Syst. 2023, 36, 13294–13307.
- Nichol, A.Q.; Dhariwal, P. Improved denoising diffusion probabilistic models. In Proceedings of the International Conference on Machine Learning, PMLR, Virtual, 18–24 July 2021; pp. 8162–8171.
- Wang, X.; Xie, L.; Dong, C.; Shan, Y. Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 1905–1914.
- Wang, Y.; Yang, W.; Chen, X.; Wang, Y.; Guo, L.; Chau, L.P.; Liu, Z.; Qiao, Y.; Kot, A.C.; Wen, B. SinSR: Diffusion-based image super-resolution in a single step. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 25796–25805.
- Dong, R.; Yuan, S.; Luo, B.; Chen, M.; Zhang, J.; Zhang, L.; Li, W.; Zheng, J.; Fu, H. Building bridges across spatial and temporal resolutions: Reference-based super-resolution via change priors and conditional diffusion model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 27684–27694.
- Xiao, Y.; Yuan, Q.; Jiang, K.; He, J.; Jin, X.; Zhang, L. EDiffSR: An efficient diffusion probabilistic model for remote sensing image super-resolution. IEEE Trans. Geosci. Remote Sens. 2023, 62, 1–14.
- Zhang, K.; Liang, J.; Van Gool, L.; Timofte, R. Designing a practical degradation model for deep blind image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 4791–4800.
- Haris, M.; Shakhnarovich, G.; Ukita, N. Deep back-projection networks for super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1664–1673.
- Yue, Z.; Zhao, Q.; Xie, J.; Zhang, L.; Meng, D.; Wong, K.Y.K. Blind image super-resolution with elaborate degradation modeling on noise and kernel. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 2128–2138.
- Wang, L.; Wang, Y.; Dong, X.; Xu, Q.; Yang, J.; An, W.; Guo, Y. Unsupervised degradation representation learning for blind super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 19–25 June 2021; pp. 10581–10590.
- Lai, W.S.; Huang, J.B.; Ahuja, N.; Yang, M.H. Deep Laplacian pyramid networks for fast and accurate super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 624–632.
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H. Restormer: Efficient Transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5728–5739.
- Dong, R.; Mou, L.; Zhang, L.; Fu, H.; Zhu, X.X. Real-world remote sensing image super-resolution via a practical degradation model and a kernel-aware network. ISPRS J. Photogramm. Remote Sens. 2022, 191, 155–170.
- Zhang, J.; Xu, T.; Li, J.; Jiang, S.; Zhang, Y. Single-image super resolution of remote sensing images with real-world degradation modeling. Remote Sens. 2022, 14, 2895.
- Qin, Y.; Nie, H.; Wang, J.; Liu, H.; Sun, J.; Zhu, M.; Lu, J.; Pan, Q. Multi-degradation super-resolution reconstruction for remote sensing images with reconstruction features-guided kernel correction. Remote Sens. 2024, 16, 2915.
- Liang, J.; Zeng, H.; Zhang, L. Efficient and degradation-adaptive network for real-world image super-resolution. In Proceedings of the European Conference on Computer Vision; Springer: Cham, Switzerland, 2022; pp. 574–591.
- Aybar, C.; Montero, D.; Contreras, J.; Donike, S.; Kalaitzis, F.; Gómez-Chova, L. SEN2NAIP: A large-scale dataset for Sentinel-2 image super-resolution. Sci. Data 2024, 11, 1389.
- Zhu, H.; Tang, X.; Xie, J.; Song, W.; Mo, F.; Gao, X. Spatio-temporal super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement. Sensors 2018, 18, 498.
- Wang, Y.; Shao, Z.; Lu, T.; Huang, X.; Wang, J.; Zhang, Z.; Zuo, X. Lightweight remote sensing super-resolution with multi-scale graph attention network. Pattern Recognit. 2025, 160, 111178.
- Chen, Y.; Zhang, X. DDSR: Degradation-aware diffusion model for spectral reconstruction from RGB images. Remote Sens. 2024, 16, 2692.
- Wang, Z.; Xia, M.; Weng, L.; Hu, K.; Lin, H. Dual encoder–decoder network for land cover segmentation of remote sensing image. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 17, 2372–2385.
- Kim, S.P.; Su, W.Y. Recursive high-resolution reconstruction of blurred multiframe images. IEEE Trans. Image Process. 1993, 2, 534–539.
- Zhang, X.; Zhu, K.; Chen, G.; Tan, X.; Zhang, L.; Dai, F.; Liao, P.; Gong, Y. Geospatial object detection on high resolution remote sensing imagery based on double multi-scale feature pyramid network. Remote Sens. 2019, 11, 755.
- Gao, H.; Zhang, Y.; Yang, J.; Dang, D. Mixed hierarchy network for image restoration. Pattern Recognit. 2025, 161, 111313.
- Aleissaee, A.A.; Kumar, A.; Anwer, R.M.; Khan, S.; Cholakkal, H.; Xia, G.S.; Khan, F.S. Transformers in remote sensing: A survey. Remote Sens. 2023, 15, 1860.
- Zafar, A.; Aftab, D.; Qureshi, R.; Fan, X.; Chen, P.; Wu, J.; Ali, H.; Nawaz, S.; Khan, S.; Shah, M. Single stage adaptive multi-attention network for image restoration. IEEE Trans. Image Process. 2024, 33, 2924–2935.
- Chung, H.; Kim, J.; Mccann, M.T.; Klasky, M.L.; Ye, J.C. Diffusion posterior sampling for general noisy inverse problems. arXiv 2022, arXiv:2209.14687.
- Ng, M.K.; Shen, H.; Lam, E.Y.; Zhang, L. A total variation regularization based super-resolution reconstruction algorithm for digital video. EURASIP J. Adv. Signal Process. 2007, 2007, 074585.
- Ma, C.; Rao, Y.; Cheng, Y.; Chen, C.; Lu, J.; Zhou, J. Structure-preserving super resolution with gradient guidance. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 7769–7778.
- Yang, Y.; Newsam, S. Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, San Jose, CA, USA, 2–5 November 2010; pp. 270–279.
- Xia, G.S.; Hu, J.; Hu, F.; Shi, B.; Bai, X.; Zhong, Y.; Zhang, L.; Lu, X. AID: A benchmark data set for performance evaluation of aerial scene classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3965–3981.
- Jähne, B. Digital Image Processing; Springer: Berlin/Heidelberg, Germany, 2005.
- Sheikh, H.R.; Bovik, A.C. Image information and visual quality. IEEE Trans. Image Process. 2006, 15, 430–444.








| Scale | Method | UCMerced | AID |
|---|---|---|---|
|  | EDSR | 32.14/0.918/0.112/0.025/2.0 | 31.02/0.912/0.125/0.028/2.1 |
|  | RCAN | 32.48/0.923/0.108/0.023/2.1 | 31.30/0.917/0.120/0.026/2.2 |
|  | SwinIR | 32.60/0.925/0.106/0.022/1.9 | 31.40/0.919/0.118/0.025/2.0 |
|  | Uformer | 32.35/0.922/0.109/0.024/2.0 | 31.25/0.916/0.121/0.027/2.1 |
|  | SinSR | 32.55/0.924/0.107/0.023/2.0 | 31.38/0.918/0.119/0.026/2.0 |
|  | RefDiff | 32.50/0.923/0.108/0.023/2.0 | 31.35/0.917/0.120/0.026/2.1 |
|  | EDiffSR | 32.58/0.924/0.107/0.022/1.9 | 31.39/0.918/0.119/0.025/2.0 |
|  | Ours | 33.12/0.932/0.098/0.020/1.8 | 32.01/0.926/0.107/0.022/1.8 |
|  | EDSR | 30.50/0.870/0.140/0.033/2.1 | 29.45/0.858/0.155/0.035/2.2 |
|  | RCAN | 30.80/0.875/0.135/0.031/2.2 | 29.70/0.862/0.150/0.033/2.3 |
|  | SwinIR | 31.00/0.878/0.132/0.030/2.0 | 29.85/0.865/0.148/0.032/2.1 |
|  | Uformer | 30.75/0.873/0.136/0.032/2.1 | 29.65/0.860/0.151/0.034/2.2 |
|  | SinSR | 30.95/0.876/0.134/0.031/2.0 | 29.80/0.863/0.149/0.033/2.1 |
|  | RefDiff | 30.90/0.875/0.135/0.031/2.1 | 29.75/0.862/0.150/0.033/2.1 |
|  | EDiffSR | 30.97/0.876/0.134/0.030/2.0 | 29.82/0.863/0.149/0.032/2.1 |
|  | Ours | 31.55/0.888/0.125/0.028/1.9 | 30.20/0.875/0.138/0.030/1.9 |
|  | EDSR | 28.90/0.820/0.170/0.038/2.2 | 27.80/0.805/0.185/0.040/2.3 |
|  | RCAN | 29.30/0.825/0.165/0.036/2.3 | 28.20/0.810/0.180/0.038/2.4 |
|  | SwinIR | 29.45/0.828/0.162/0.035/2.1 | 28.35/0.813/0.178/0.037/2.2 |
|  | Uformer | 29.20/0.823/0.166/0.036/2.2 | 28.15/0.808/0.181/0.038/2.3 |
|  | SinSR | 29.40/0.826/0.163/0.035/2.1 | 28.33/0.811/0.179/0.037/2.2 |
|  | RefDiff | 29.35/0.825/0.164/0.035/2.2 | 28.30/0.810/0.180/0.037/2.3 |
|  | EDiffSR | 29.42/0.826/0.163/0.035/2.1 | 28.34/0.811/0.179/0.037/2.2 |
|  | Ours | 30.10/0.838/0.150/0.032/1.9 | 29.05/0.823/0.163/0.034/1.9 |
| Scale | Method | UCMerced | AID |
|---|---|---|---|
|  | EDSR | 31.20/0.905/0.125/0.028/2.1 | 30.10/0.898/0.138/0.030/2.2 |
|  | RCAN | 31.55/0.910/0.120/0.026/2.2 | 30.40/0.903/0.133/0.028/2.3 |
|  | SwinIR | 31.70/0.913/0.118/0.025/2.0 | 30.55/0.905/0.131/0.027/2.1 |
|  | Uformer | 31.45/0.911/0.121/0.027/2.1 | 30.35/0.903/0.134/0.029/2.2 |
|  | SinSR | 31.65/0.912/0.119/0.026/2.1 | 30.50/0.905/0.132/0.028/2.1 |
|  | RefDiff | 31.60/0.911/0.120/0.026/2.1 | 30.45/0.904/0.133/0.028/2.2 |
|  | EDiffSR | 31.68/0.912/0.119/0.025/2.0 | 30.52/0.905/0.132/0.027/2.1 |
|  | Ours | 32.70/0.925/0.105/0.022/1.9 | 31.55/0.916/0.118/0.024/1.9 |
|  | EDSR | 29.45/0.885/0.142/0.033/2.2 | 28.40/0.872/0.157/0.035/2.3 |
|  | RCAN | 29.80/0.890/0.138/0.031/2.3 | 28.75/0.877/0.152/0.033/2.4 |
|  | SwinIR | 29.95/0.892/0.135/0.030/2.1 | 28.90/0.880/0.149/0.032/2.2 |
|  | Uformer | 29.70/0.889/0.139/0.032/2.2 | 28.70/0.876/0.153/0.034/2.3 |
|  | SinSR | 29.90/0.891/0.136/0.031/2.1 | 28.85/0.879/0.150/0.033/2.2 |
|  | RefDiff | 29.85/0.890/0.137/0.031/2.2 | 28.80/0.878/0.151/0.033/2.2 |
|  | EDiffSR | 29.92/0.891/0.136/0.030/2.1 | 28.87/0.879/0.150/0.032/2.2 |
|  | Ours | 30.95/0.905/0.121/0.028/1.9 | 29.90/0.893/0.134/0.030/1.9 |
|  | EDSR | 27.90/0.860/0.165/0.038/2.3 | 26.85/0.848/0.180/0.040/2.4 |
|  | RCAN | 28.35/0.868/0.160/0.036/2.4 | 27.30/0.855/0.175/0.038/2.5 |
|  | SwinIR | 28.50/0.871/0.157/0.035/2.2 | 27.45/0.858/0.172/0.037/2.3 |
|  | Uformer | 28.25/0.868/0.161/0.036/2.3 | 27.25/0.855/0.174/0.038/2.4 |
|  | SinSR | 28.45/0.870/0.158/0.035/2.2 | 27.40/0.857/0.173/0.037/2.3 |
|  | RefDiff | 28.40/0.869/0.159/0.035/2.3 | 27.35/0.856/0.174/0.037/2.4 |
|  | EDiffSR | 28.48/0.870/0.158/0.035/2.2 | 27.42/0.857/0.173/0.037/2.3 |
|  | Ours | 29.50/0.882/0.142/0.032/1.9 | 28.45/0.871/0.155/0.034/1.9 |
| Method | DAM | LGDF | SRG | PSNR/SSIM/LPIPS |
|---|---|---|---|---|
| Base |  |  |  | 27.15/0.812/0.138 |
| Base + DAM | ✓ |  |  | 27.78/0.823/0.130 |
| Base + DAM + LGDF | ✓ | ✓ |  | 28.05/0.809/0.123 |
| Base + DAM + LGDF + SRG | ✓ | ✓ | ✓ | 28.45/0.837/0.115 |
| Method | Weak | Medium | Strong | Drop (dB) |
|---|---|---|---|---|
| EDSR | 29.12 | 27.65 | 25.48 | −3.64 |
| RCAN | 29.36 | 27.82 | 25.70 | −3.66 |
| SwinIR | 29.58 | 28.10 | 25.95 | −3.63 |
| Ours | 29.92 | 28.65 | 26.78 | −3.14 |
| Method | Params (M) | FLOPs (G) |
|---|---|---|
| EDSR | 43.1 | 290.0 |
| RCAN | 15.6 | 330.0 |
| SwinIR | 11.9 | 215.0 |
| Uformer | 16.8 | 240.0 |
| SinSR | 19.4 | 260.0 |
| RefDiff | 22.7 | 285.0 |
| EDiffSR | 24.1 | 310.0 |
| Ours | 18.9 | 245.0 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Cai, X.; Diwu, C.; Fan, T.; Wang, W.; He, J. Global-Local-Structure Collaborative Approach for Cross-Domain Reference-Based Image Super-Resolution. Remote Sens. 2026, 18, 487. https://doi.org/10.3390/rs18030487
