Activated Sparsely Sub-Pixel Transformer for Remote Sensing Image Super-Resolution
Abstract
1. Introduction
- We employ sparse activation to guide feature extraction during self-attention, focusing on the correlation of adjacent pixels in sub-pixel space to strengthen the network's capacity for feature extraction and understanding. This enhancement comes from integrating sparse coding with sub-pixel representations (a minimal sketch follows this list).
- We propose an encoder–decoder structure that integrates the complementary strengths of Transformers and CNNs. For the encoder–decoder interaction, we design a multi-scale network structure better matched to the characteristics of remote sensing image super-resolution. The resulting design is also more lightweight, with a lower parameter count than other Transformer-based models (see the second sketch after this list).
- Comparative experiments on the UCMerced and AID datasets show that our method achieves satisfactory performance.
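Below is a minimal PyTorch sketch of the first idea, under stated assumptions: features are folded into sub-pixel space with pixel-unshuffle so that adjacent pixels of the high-resolution grid interact as channels, and a top-k sparse activation prunes the attention map. The module name, the `keep_ratio` parameter, and all layer sizes are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseSubPixelAttention(nn.Module):
    """Hypothetical sparse-activated self-attention in sub-pixel space."""

    def __init__(self, channels: int, scale: int = 2, keep_ratio: float = 0.5):
        super().__init__()
        self.scale = scale
        self.keep_ratio = keep_ratio            # fraction of attention scores kept
        c_sub = channels * scale * scale        # channels after pixel-unshuffle
        self.qkv = nn.Linear(c_sub, 3 * c_sub, bias=False)
        self.proj = nn.Linear(c_sub, c_sub)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W). Fold s x s spatial neighbors into channels (sub-pixel space).
        x_sub = F.pixel_unshuffle(x, self.scale)             # (B, C*s^2, H/s, W/s)
        b, c_sub, hs, ws = x_sub.shape
        tokens = x_sub.flatten(2).transpose(1, 2)            # (B, N, C*s^2)

        q, k, v = self.qkv(tokens).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) / c_sub ** 0.5      # (B, N, N)

        # Sparse activation: keep the top-k scores per query, suppress the rest.
        k_keep = max(1, int(attn.shape[-1] * self.keep_ratio))
        thresh = attn.topk(k_keep, dim=-1).values[..., -1:]  # k-th largest per row
        attn = attn.masked_fill(attn < thresh, float("-inf")).softmax(dim=-1)

        out = self.proj(attn @ v).transpose(1, 2).reshape(b, c_sub, hs, ws)
        return F.pixel_shuffle(out, self.scale)              # back to (B, C, H, W)


# Example: output shape matches the input feature map.
y = SparseSubPixelAttention(16, scale=2)(torch.randn(1, 16, 32, 32))
```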
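The second sketch illustrates the encoder–decoder idea: shallow CNN features, one Transformer encoder per pyramid level, and a decoder that fuses the scales before sub-pixel upsampling. The pyramid depths, the fusion rule, and all hyperparameters are again assumptions for illustration only, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleTransformerSR(nn.Module):
    """Hypothetical multi-scale Transformer encoder with a CNN-fused decoder."""

    def __init__(self, channels: int = 32, upscale: int = 2, depth: int = 2):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)     # shallow CNN features
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=4,
            dim_feedforward=2 * channels, batch_first=True)
        # One encoder per pyramid level (1x, 1/2x, 1/4x resolution);
        # nn.TransformerEncoder deep-copies the layer, so weights are not shared.
        self.encoders = nn.ModuleList(
            nn.TransformerEncoder(layer, depth) for _ in range(3))
        self.fuse = nn.Conv2d(3 * channels, channels, 1)     # decoder-side fusion
        self.tail = nn.Sequential(                           # sub-pixel upsampler
            nn.Conv2d(channels, 3 * upscale ** 2, 3, padding=1),
            nn.PixelShuffle(upscale))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.head(x)
        b, c, h, w = feat.shape
        scales = []
        for i, enc in enumerate(self.encoders):
            f = F.avg_pool2d(feat, 2 ** i) if i > 0 else feat
            hs, ws = f.shape[2], f.shape[3]
            tokens = enc(f.flatten(2).transpose(1, 2))       # (B, N, C) attention
            f = tokens.transpose(1, 2).reshape(b, c, hs, ws)
            scales.append(F.interpolate(f, size=(h, w), mode="nearest"))
        fused = self.fuse(torch.cat(scales, dim=1))          # merge all scales
        return self.tail(fused + feat)                       # residual, then upsample


# Example: a 48x48 input becomes 96x96 at upscale=2.
sr = MultiScaleTransformerSR(upscale=2)(torch.randn(1, 3, 48, 48))
```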
2. Related Work
2.1. CNN-Based Image SR
2.2. Transformer-Based Image SR
2.3. Remote Sensing Image SR
3. Proposed Method
3.1. Overall Structure
3.2. Transformer Encoder for Sparse Representation
3.3. Subpixel Multi-Level Decoder
4. Experiment
4.1. Experimental Setup
4.2. Comparisons with Other Methods
4.3. Ablation Study
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Yue, L.; Shen, H.; Li, J.; Yuan, Q.; Zhang, H.; Zhang, L. Image super-resolution: The techniques, applications, and future. Signal Process. 2016, 128, 389–408. [Google Scholar] [CrossRef]
- Hou, B.; Zhou, K.; Jiao, L. Adaptive super-resolution for remote sensing images based on sparse representation with global joint dictionary model. IEEE Trans. Geosci. Remote. Sens. 2017, 56, 2312–2327. [Google Scholar] [CrossRef]
- Pan, Z.; Ma, W.; Guo, J.; Lei, B. Super-resolution of single remote sensing image based on residual dense backprojection networks. IEEE Trans. Geosci. Remote. Sens. 2019, 57, 7918–7933. [Google Scholar] [CrossRef]
- Lei, S.; Shi, Z.; Zou, Z. Coupled adversarial training for remote sensing image super-resolution. IEEE Trans. Geosci. Remote. Sens. 2019, 58, 3633–3643. [Google Scholar] [CrossRef]
- Lei, S.; Shi, Z. Hybrid-scale self-similarity exploitation for remote sensing image super-resolution. IEEE Trans. Geosci. Remote. Sens. 2021, 60, 5401410. [Google Scholar] [CrossRef]
- Lei, S.; Shi, Z.; Zou, Z. Super-resolution for remote sensing images via local–global combined network. IEEE Geosci. Remote. Sens. Lett. 2017, 14, 1243–1247. [Google Scholar] [CrossRef]
- Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690. [Google Scholar]
- Lim, B.; Son, S.; Kim, H.; Nah, S.; Mu Lee, K. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 136–144. [Google Scholar]
- Hu, Y.; Li, J.; Huang, Y.; Gao, X. Channel-wise and spatial feature modulation network for single image super-resolution. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 3911–3927. [Google Scholar] [CrossRef]
- Li, J.; Fang, F.; Li, J.; Mei, K.; Zhang, G. MDCN: Multi-scale dense cross network for image super-resolution. IEEE Trans. Circuits Syst. Video Technol. 2020, 31, 2547–2561. [Google Scholar] [CrossRef]
- Tong, T.; Li, G.; Liu, X.; Gao, Q. Image super-resolution using dense skip connections. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4799–4807. [Google Scholar]
- Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual dense network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2472–2481. [Google Scholar]
- Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 286–301. [Google Scholar]
- Dai, T.; Cai, J.; Zhang, Y.; Xia, S.T.; Zhang, L. Second-order attention network for single image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 11065–11074. [Google Scholar]
- Liu, J.; Zhang, W.; Tang, Y.; Tang, J.; Wu, G. Residual feature aggregation network for image super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 2359–2368. [Google Scholar]
- Chen, Z.; Zhang, Y.; Gu, J.; Kong, L.; Yang, X.; Yu, F. Dual aggregation transformer for image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 12312–12321. [Google Scholar]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16 × 16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
- Liang, J.; Cao, J.; Sun, G.; Zhang, K.; Van Gool, L.; Timofte, R. SwinIR: Image restoration using Swin Transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 1833–1844. [Google Scholar]
- Chen, X.; Wang, X.; Zhou, J.; Qiao, Y.; Dong, C. Activating more pixels in image super-resolution transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 22367–22377. [Google Scholar]
- Zhou, Y.; Li, Z.; Guo, C.L.; Bai, S.; Cheng, M.M.; Hou, Q. SRFormer: Permuted self-attention for single image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 12780–12791. [Google Scholar]
- Lei, S.; Shi, Z.; Mo, W. Transformer-based multistage enhancement for remote sensing image super-resolution. IEEE Trans. Geosci. Remote. Sens. 2021, 60, 5615611. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017): 31st Annual Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
- Dong, C.; Loy, C.C.; He, K.; Tang, X. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 295–307. [Google Scholar] [CrossRef] [PubMed]
- Kim, J.; Lee, J.K.; Lee, K.M. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1646–1654. [Google Scholar]
- Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1874–1883. [Google Scholar]
- Yu, J.; Fan, Y.; Yang, J.; Xu, N.; Wang, Z.; Wang, X.; Huang, T. Wide activation for efficient and accurate image super-resolution. arXiv 2018, arXiv:1808.08718. [Google Scholar]
- Lai, W.S.; Huang, J.B.; Ahuja, N.; Yang, M.H. Deep laplacian pyramid networks for fast and accurate super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 624–632. [Google Scholar]
- Li, J.; Fang, F.; Mei, K.; Zhang, G. Multi-scale residual network for image super-resolution. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 517–532. [Google Scholar]
- Wang, Y.; Shao, Z.; Lu, T.; Wu, C.; Wang, J. Remote sensing image super-resolution via multiscale enhancement network. IEEE Geosci. Remote. Sens. Lett. 2023, 20, 5000905. [Google Scholar] [CrossRef]
- Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Change Loy, C. ESRGAN: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018; pp. 63–79. [Google Scholar]
- Zhang, W.; Liu, Y.; Dong, C.; Qiao, Y. RankSRGAN: Generative adversarial networks with ranker for image super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 3096–3105. [Google Scholar]
- Wang, X.; Xie, L.; Dong, C.; Shan, Y. Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Virtual, 11–17 October 2021; pp. 1905–1914. [Google Scholar]
- Park, J.; Son, S.; Lee, K.M. Content-aware local gan for photo-realistic super-resolution. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–6 October 2023; pp. 10585–10594. [Google Scholar]
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of the Advances in Neural Information Processing Systems 27 (NIPS 2014): 28th Annual Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014. [Google Scholar]
- Mei, Y.; Fan, Y.; Zhou, Y.; Huang, L.; Huang, T.S.; Shi, H. Image super-resolution with cross-scale non-local attention and exhaustive self-exemplars mining. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 5690–5699. [Google Scholar]
- Jia, S.; Wang, Z.; Li, Q.; Jia, X.; Xu, M. Multiattention generative adversarial network for remote sensing image super-resolution. IEEE Trans. Geosci. Remote. Sens. 2022, 60, 5624715. [Google Scholar] [CrossRef]
- Xu, Y.; Luo, W.; Hu, A.; Xie, Z.; Xie, X.; Tao, L. TE-SAGAN: An improved generative adversarial network for remote sensing super-resolution images. Remote. Sens. 2022, 14, 2425. [Google Scholar] [CrossRef]
- Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
- Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I. Improving language understanding by generative pre-training. OpenAI Preprint 2018. [Google Scholar]
- Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I. Language models are unsupervised multitask learners. OpenAI Blog 2019, 1, 9. [Google Scholar]
- Chen, H.; Wang, Y.; Guo, T.; Xu, C.; Deng, Y.; Liu, Z.; Ma, S.; Xu, C.; Xu, C.; Gao, W. Pre-trained image processing transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 12299–12310. [Google Scholar]
- Zhang, X.; Zeng, H.; Guo, S.; Zhang, L. Efficient long-range attention network for image super-resolution. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; pp. 649–667. [Google Scholar]
- Liu, Z.; Feng, R.; Wang, L.; Zhong, Y.; Zhang, L.; Zeng, T. Remote Sensing Image Super-Resolution via Dilated Convolution Network with Gradient Prior. In Proceedings of the IGARSS 2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 2402–2405. [Google Scholar]
- Wang, T.; Sun, W.; Qi, H.; Ren, P. Aerial image super resolution via wavelet multiscale convolutional neural networks. IEEE Geosci. Remote. Sens. Lett. 2018, 15, 769–773. [Google Scholar] [CrossRef]
- Ma, W.; Pan, Z.; Guo, J.; Lei, B. Achieving super-resolution remote sensing images via the wavelet transform combined with the recursive res-net. IEEE Trans. Geosci. Remote. Sens. 2019, 57, 3512–3527. [Google Scholar] [CrossRef]
- Zhang, S.; Yuan, Q.; Li, J.; Sun, J.; Zhang, X. Scene-adaptive remote sensing image super-resolution using a multiscale attention network. IEEE Trans. Geosci. Remote. Sens. 2020, 58, 4764–4779. [Google Scholar] [CrossRef]
- Ng, A. Sparse autoencoder. CS294A Lect. Notes 2011, 72, 1–19. [Google Scholar]
- Chen, X.; Liu, Z.; Tang, H.; Yi, L.; Zhao, H.; Han, S. SparseViT: Revisiting activation sparsity for efficient high-resolution vision transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 2061–2070. [Google Scholar]
- Yang, Y.; Newsam, S. Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems, San Jose, CA, USA, 2–5 November 2010; pp. 270–279. [Google Scholar]
- Xia, G.S.; Hu, J.; Hu, F.; Shi, B.; Bai, X.; Zhong, Y.; Zhang, L.; Lu, X. AID: A benchmark data set for performance evaluation of aerial scene classification. IEEE Trans. Geosci. Remote. Sens. 2017, 55, 3965–3981. [Google Scholar] [CrossRef]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
- Dong, C.; Loy, C.C.; Tang, X. Accelerating the super-resolution convolutional neural network. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part II; pp. 391–407. [Google Scholar]
- Haut, J.M.; Paoletti, M.E.; Fernández-Beltran, R.; Plaza, J.; Plaza, A.; Li, J. Remote sensing single-image superresolution based on a deep compendium model. IEEE Geosci. Remote. Sens. Lett. 2019, 16, 1432–1436. [Google Scholar] [CrossRef]
- Wang, S.; Zhou, T.; Lu, Y.; Di, H. Contextual transformation network for lightweight remote-sensing image super-resolution. IEEE Trans. Geosci. Remote. Sens. 2021, 60, 5615313. [Google Scholar] [CrossRef]
Table 1. Mean PSNR (dB)/SSIM comparison for ×2, ×3, and ×4 super-resolution on the UCMerced dataset.

| Scale | FSRCNN [52] | DCM [53] | CTNET [54] | TransENet [21] | SSTNet |
|---|---|---|---|---|---|
| ×2 | 33.18/0.9196 | 33.65/0.9274 | 33.59/0.9255 | 34.03/0.9301 | 34.09/0.9311 |
| ×3 | 29.09/0.8167 | 29.52/0.8394 | 29.44/0.8319 | 29.92/0.8408 | 29.88/0.8397 |
| ×4 | 26.93/0.7267 | 27.22/0.7528 | 27.41/0.7512 | 27.77/0.7630 | 27.66/0.7598 |
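For context, the PSNR/SSIM entries in these tables follow the standard definitions: PSNR expresses the mean squared reconstruction error in decibels relative to the peak signal value, and SSIM is the structural similarity index. A generic NumPy sketch of PSNR follows; whether evaluation here uses RGB or only the luminance channel is not stated, so this is the textbook form only.

```python
import numpy as np


def psnr(sr: np.ndarray, hr: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between two equally shaped images."""
    mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```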
Table 2. Per-class PSNR (dB) comparison for ×3 super-resolution on the UCMerced dataset.

| Class | FSRCNN [52] | DCM [53] | CTNET [54] | TransENet [21] | SSTNet |
|---|---|---|---|---|---|
| agricultural | 27.61 | 29.06 | 31.79 | 28.02 | 27.65 |
| airplane | 28.98 | 30.77 | 28.22 | 29.94 | 29.94 |
| baseballdiamond | 34.64 | 33.76 | 29.37 | 35.04 | 35.04 |
| beach | 37.21 | 36.38 | 34.73 | 37.53 | 37.59 |
| buildings | 27.50 | 28.51 | 37.39 | 28.81 | 28.74 |
| chaparral | 26.21 | 26.81 | 28.01 | 26.69 | 26.69 |
| denseresidential | 28.02 | 28.79 | 26.42 | 29.11 | 29.12 |
| forest | 28.35 | 28.16 | 28.41 | 28.59 | 28.57 |
| freeway | 29.27 | 30.45 | 28.43 | 30.38 | 30.47 |
| golfcourse | 36.43 | 34.43 | 29.67 | 36.68 | 36.65 |
| harbor | 23.29 | 26.55 | 36.24 | 24.72 | 24.56 |
| intersection | 28.06 | 29.28 | 23.99 | 29.03 | 29.02 |
| mediumresidential | 27.58 | 27.21 | 28.42 | 28.47 | 28.42 |
| mobilehomepark | 24.34 | 26.05 | 27.86 | 25.64 | 25.52 |
| overpass | 26.53 | 27.77 | 24.99 | 27.83 | 27.87 |
| parkinglot | 23.34 | 24.95 | 27.48 | 24.45 | 24.38 |
| river | 29.07 | 28.89 | 23.63 | 29.25 | 29.21 |
| runway | 31.01 | 32.53 | 29.03 | 31.25 | 31.19 |
| sparseresidential | 30.23 | 29.81 | 30.68 | 31.57 | 31.61 |
| storagetanks | 31.92 | 29.02 | 31.18 | 32.71 | 32.77 |
| tenniscourt | 31.34 | 30.76 | 32.43 | 32.51 | 32.54 |
| AVG | 29.09 | 29.52 | 29.44 | 29.92 | 29.88 |
Table 3. Mean PSNR (dB)/SSIM comparison for ×2, ×3, and ×4 super-resolution on the AID dataset.

| Scale | FSRCNN [52] | DCM [53] | CTNET [54] | TransENet [21] | SSTNet |
|---|---|---|---|---|---|
| ×2 | 34.67/0.9308 | 35.21/0.9366 | 35.12/0.9357 | 35.24/0.9369 | 35.29/0.9376 |
| ×3 | 30.71/0.8423 | 31.28/0.8560 | 31.21/0.8542 | 31.38/0.8581 | 31.45/0.8595 |
| ×4 | 28.56/0.7620 | 29.16/0.7821 | 29.01/0.7782 | 29.32/0.7879 | 29.34/0.7896 |
Table 4. Per-class PSNR (dB) comparison for ×3 super-resolution on the AID dataset.

| Class | FSRCNN [52] | DCM [53] | CTNET [54] | TransENet [21] | SSTNet |
|---|---|---|---|---|---|
| airport | 30.38 | 31.01 | 30.91 | 31.13 | 31.20 |
| bareland | 38.24 | 38.54 | 38.54 | 38.58 | 38.60 |
| baseballdiamond | 33.24 | 33.81 | 33.71 | 33.94 | 33.99 |
| beach | 34.20 | 34.54 | 34.55 | 34.61 | 34.64 |
| bridge | 32.92 | 33.60 | 33.52 | 33.75 | 33.81 |
| center | 28.91 | 29.82 | 29.68 | 29.96 | 30.05 |
| church | 25.57 | 26.25 | 26.15 | 26.33 | 26.38 |
| commercial | 29.61 | 30.21 | 30.13 | 30.29 | 30.35 |
| denseresidential | 26.29 | 26.92 | 26.84 | 27.01 | 27.08 |
| desert | 40.84 | 41.00 | 40.99 | 41.03 | 41.05 |
| farmland | 35.72 | 36.25 | 36.18 | 36.35 | 36.41 |
| forest | 30.71 | 30.98 | 31.01 | 31.06 | 31.08 |
| industrial | 28.40 | 29.14 | 29.02 | 29.25 | 29.32 |
| meadow | 34.49 | 34.72 | 34.72 | 34.77 | 34.80 |
| mediumresidential | 29.84 | 30.46 | 30.39 | 30.54 | 30.59 |
| mountain | 30.94 | 31.10 | 31.11 | 31.15 | 31.17 |
| park | 29.57 | 30.02 | 29.96 | 30.11 | 30.16 |
| parking | 27.58 | 28.83 | 28.64 | 29.05 | 29.27 |
| playground | 31.43 | 32.47 | 32.37 | 32.69 | 32.82 |
| pond | 31.66 | 32.06 | 31.10 | 32.12 | 32.16 |
| port | 28.08 | 28.72 | 28.61 | 28.82 | 28.94 |
| railwaystation | 29.92 | 30.58 | 30.48 | 30.69 | 30.75 |
| resort | 29.51 | 30.12 | 30.06 | 30.21 | 30.27 |
| river | 32.39 | 32.65 | 32.62 | 32.70 | 32.73 |
| school | 28.53 | 29.18 | 29.08 | 29.29 | 29.35 |
| sparseresidential | 27.97 | 28.27 | 28.25 | 28.32 | 28.35 |
| square | 30.75 | 31.46 | 31.37 | 31.58 | 31.65 |
| stadium | 28.26 | 29.02 | 28.90 | 29.19 | 29.27 |
| storagetanks | 27.03 | 27.64 | 27.54 | 27.73 | 27.78 |
| viaduct | 29.18 | 29.90 | 29.76 | 30.04 | 30.11 |
| AVG | 30.71 | 31.28 | 31.21 | 31.38 | 31.45 |
Table 5. Ablation study of the encoder and decoder modules.

| Method | PSNR | SSIM | Params (M) |
|---|---|---|---|
| TransENet [21] encoder and decoder module | 34.01 | 0.9299 | 37.31 |
| Proposed SSTNet encoder and decoder module | 34.06 | 0.9307 | 33.48 |
Table 6. Ablation study of the loss function.

| Loss | PSNR | SSIM |
|---|---|---|
| | 34.08 | 0.9308 |
| | 34.09 | 0.9311 |