TDEGAN: A Texture-Detail-Enhanced Dense Generative Adversarial Network for Remote Sensing Image Super-Resolution
Abstract
1. Introduction
- We present TDEGAN, a GAN-based network for remote sensing image super-resolution (SR) that generates more realistic texture details and reduces artifacts in the reconstructed images;
- We improve the generator architecture with multi-level dense connections, shuffle attention (SA), and residual connections, strengthening its feature extraction ability;
- We design a PatchGAN-style discrimination network that maps the input image through multiple convolution layers to a patch-level output. Its fixed-size receptive field enables local discrimination, which helps the network generate richer texture details;
- We introduce an artifact loss based on local statistics to distinguish artifacts from realistic texture details; combined with exponential moving average (EMA) technology, it penalizes artifacts and guides the network toward more realistic textures.
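The artifact loss in the last point can be illustrated with a small sketch. The formulation below is an assumption for illustration (window size, weighting scheme, and function names are not the paper's exact definition): each pixel is weighted by how much the local variance of the SR output exceeds that of the HR reference, so spurious high-frequency detail absent from the ground truth is penalized while matching texture is not. The EMA stabilization the paper pairs with this loss is omitted here.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_variance(x, k=7):
    """Per-pixel variance over k x k windows (reflect padding keeps shape)."""
    pad = k // 2
    xp = np.pad(x, pad, mode="reflect")
    windows = sliding_window_view(xp, (k, k))
    return windows.var(axis=(-1, -2))

def artifact_loss(sr, hr, k=7, eps=1e-8):
    """Hypothetical artifact loss: an L1 term weighted, per pixel, by how
    much the SR image's local variance exceeds the HR image's.
    Texture that matches the reference gets weight ~0; excess detail
    (a likely artifact) is punished in proportion to its excess variance."""
    v_sr, v_hr = local_variance(sr, k), local_variance(hr, k)
    w = np.clip((v_sr - v_hr) / (v_hr + eps), 0.0, None)
    return float(np.mean(w * np.abs(sr - hr)))
```

A perfect reconstruction has zero excess variance everywhere, so the loss vanishes; a noisy one is penalized where, and only where, it over-textures.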
2. Related Work
3. Method
3.1. Network Architecture
3.1.1. Generation Network
3.1.2. Discrimination Network
3.2. Loss Functions
3.2.1. Pixel Loss
3.2.2. Perception Loss
3.2.3. Adversarial Loss
3.2.4. Artifact Loss
3.2.5. Total Loss
4. Experiments and Results
4.1. Dataset
4.2. Evaluation Metrics
4.3. Analysis of Image Quality Metrics during Training Process
4.4. Comparative Experiment
4.5. Ablation Experiment
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
Number of Layers | Formula | Receptive Field Size
---|---|---
1 | |
2 | |
3 | |
4 | |
5 | |
6 | |
7 | |
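The receptive field in the table above grows with each convolution layer according to the standard recurrence r_n = r_{n-1} + (k_n − 1) · j_{n-1}, with jump j_n = j_{n-1} · s_n. A minimal sketch (the kernel sizes and strides below are the classic pix2pix 70×70 PatchGAN stack as a reference point, not necessarily the paper's 7-layer configuration):

```python
def receptive_field(layers):
    """Receptive field of stacked conv layers, input-to-output order.

    layers: list of (kernel_size, stride) pairs.
    Recurrence: r_n = r_{n-1} + (k_n - 1) * j_{n-1}, where the jump
    j_n = j_{n-1} * s_n is the cumulative stride before the next layer.
    """
    r, jump = 1, 1
    for k, s in layers:
        r += (k - 1) * jump
        jump *= s
    return r

# The pix2pix 70x70 PatchGAN discriminator: three stride-2 and two
# stride-1 layers, all with 4x4 kernels.
pix2pix_stack = [(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]
print(receptive_field(pix2pix_stack))  # -> 70
```

Each patch-level output of such a discriminator thus judges only a bounded local region, which is what makes the local discrimination described in Section 3.1.2 possible.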
Metrics | Bicubic | SRCNN | EDSR | SRGAN | ESRGAN | SPSR | SAM-DiffSR | Ours
---|---|---|---|---|---|---|---|---
PSNR | 20.74721 | 19.7635 | 19.56991 | 19.09575 | 19.39452 | 19.53729 | |
SSIM | 0.516645 | 0.529895 | 0.457065 | 0.460728 | 0.463676 | 0.460472 | |
LPIPS | 0.678191 | 0.663481 | 0.705114 | 0.422320 | 0.385649 | 0.376603 | 0.376351 |
Metrics | PSNR | SSIM | LPIPS
---|---|---|---
Baseline | 19.09575 | 0.460728 | 0.385649
Baseline + DCSADRDB | 19.33431 | 0.466154 | 0.375832
Baseline + DCSADRDB + PatchGAN | 19.5638 | 0.469212 | 0.373932
TDEGAN | | |
Metrics | PSNR | SSIM | LPIPS
---|---|---|---
RRDB | 19.334308 | 0.455212 | 0.378594
DCDRDB | 19.495753 | 0.464235 | 0.371603
DCSADRDB | | |
Metrics | PSNR | SSIM | LPIPS
---|---|---|---
= 0.5 | 19.373898 | 0.434669 | 0.375832
= 1 | | |
= 1.5 | 19.563804 | 0.466154 | 0.373375
Dataset | Metrics | Bicubic | SRCNN | EDSR | SRGAN | ESRGAN | SPSR | SAM-DiffSR | Ours
---|---|---|---|---|---|---|---|---|---
airport | PSNR | 26.8010 | 27.1036 | 23.8835 | 24.8501 | 23.3905 | 23.7396 | |
 | SSIM | 0.6307 | 0.6451 | 0.5048 | 0.5514 | 0.4765 | 0.5031 | |
 | LPIPS | 0.6055 | 0.4709 | 0.4992 | 0.3267 | 0.3351 | 0.3478 | 0.3428 |
forest | PSNR | 27.2523 | 27.2899 | 22.2937 | 25.5113 | 24.1087 | 25.1358 | |
 | SSIM | 0.5256 | 0.5245 | 0.3283 | 0.4320 | 0.3584 | 0.3952 | |
 | LPIPS | 0.7453 | 0.7246 | 0.7080 | 0.6923 | 0.3708 | 0.3947 | 0.3694 |
harbor | PSNR | 22.1255 | 22.5642 | 19.3436 | 20.4798 | 19.2086 | 19.8244 | |
 | SSIM | 0.6241 | 0.6801 | 0.5670 | 0.6127 | 0.5784 | 0.5974 | |
 | LPIPS | 0.5839 | 0.4185 | 0.3639 | 0.2402 | 0.2382 | 0.2515 | 0.2386 |
mountain | PSNR | 27.9947 | 28.0932 | 23.5633 | 25.9737 | 24.2595 | 25.6357 | |
 | SSIM | 0.6325 | 0.6335 | 0.4852 | 0.5384 | 0.4332 | 0.5259 | |
 | LPIPS | 0.6426 | 0.5561 | 0.5593 | 0.3877 | 0.3482 | 0.3944 | 0.3571 |
runway | PSNR | 29.5734 | 30.1370 | 27.0469 | 28.1608 | 27.6943 | 27.7498 | |
 | SSIM | 0.7619 | 0.7814 | 0.6636 | 0.7145 | 0.6829 | 0.6972 | |
 | LPIPS | 0.5259 | 0.4488 | 0.3771 | 0.3102 | 0.2924 | 0.2925 | 0.2915 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Guo, M.; Xiong, F.; Zhao, B.; Huang, Y.; Xie, Z.; Wu, L.; Chen, X.; Zhang, J. TDEGAN: A Texture-Detail-Enhanced Dense Generative Adversarial Network for Remote Sensing Image Super-Resolution. Remote Sens. 2024, 16, 2312. https://doi.org/10.3390/rs16132312