Article

Enlighten-GAN for Super Resolution Reconstruction in Mid-Resolution Remote Sensing Images

1 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS), No. 129 Luoyu Road, Wuhan 430079, China
2 Hubei Institute of Land Surveying and Mapping, No. 199 Macau Road, Wuhan 430034, China
3 School of Computer Science and Engineering, Xi’an University of Technology, No. 5 Jin Hua South Road, Xi’an 710048, China
* Author to whom correspondence should be addressed.
Academic Editor: Edoardo Pasolli
Remote Sens. 2021, 13(6), 1104; https://doi.org/10.3390/rs13061104
Received: 2 February 2021 / Revised: 8 March 2021 / Accepted: 9 March 2021 / Published: 14 March 2021
Generative adversarial networks (GANs) have been widely applied to super-resolution reconstruction (SRR), which turns low-resolution (LR) images into high-resolution (HR) ones. However, because these methods recover high-frequency information learned from other images, they tend to produce artifacts when processing unfamiliar images. Optical satellite remote sensing images present far more complicated scenes than natural images, so applying previous networks to remote sensing images, especially mid-resolution ones, leads to unstable convergence and therefore unpleasant artifacts. In this paper, we propose Enlighten-GAN for SRR tasks on large-size optical mid-resolution remote sensing images. Specifically, we design enlighten blocks to induce the network to converge to a reliable point, and introduce a Self-Supervised Hierarchical Perceptual Loss that outperforms other loss functions. Furthermore, memory limits require large-scale images to be cropped into patches that pass through the network separately. To merge the reconstructed patches into a seamless whole, we employ an internal inconsistency loss and a cropping-and-clipping strategy to avoid seam lines. Experimental results show that Enlighten-GAN outperforms state-of-the-art methods in terms of the gradient similarity metric (GSM) on mid-resolution Sentinel-2 remote sensing images.
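As context for the patch-merging step described in the abstract, the following is a minimal, hypothetical sketch of a cropping-and-clipping mosaic: the large scene is split into overlapping patches, each patch is super-resolved independently, and the overlap margins are clipped before the patches are written back, so no seam line appears at patch borders. The function and parameter names (`sr_large_image`, `sr_model`, `patch`, `overlap`, `scale`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sr_large_image(image, sr_model, patch=256, overlap=32, scale=4):
    """Cropping-and-clipping mosaic sketch (illustrative only).

    `sr_model` is assumed to map an (h, w, c) LR array to an
    (h*scale, w*scale, c) HR array; names and defaults are assumptions.
    """
    h, w, c = image.shape
    step = patch - 2 * overlap                    # interior stride between patches
    out = np.zeros((h * scale, w * scale, c), dtype=np.float32)

    for y in range(0, h, step):
        for x in range(0, w, step):
            ih, iw = min(step, h - y), min(step, w - x)          # interior size
            y0, x0 = max(y - overlap, 0), max(x - overlap, 0)    # overlapping crop
            y1, x1 = min(y + ih + overlap, h), min(x + iw + overlap, w)
            sr = sr_model(image[y0:y1, x0:x1])                   # super-resolve patch

            # clip the overlap margins and write back only the interior
            top, left = (y - y0) * scale, (x - x0) * scale
            out[y * scale:(y + ih) * scale, x * scale:(x + iw) * scale] = \
                sr[top:top + ih * scale, left:left + iw * scale]
    return out
```

Clipping the unreliable borders of each reconstructed patch, rather than blending them, is one simple way to keep the mosaic consistent at patch boundaries; the paper additionally enforces consistency with an internal inconsistency loss.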
Keywords: super resolution reconstruction; mid-resolution remote sensing images; generative adversarial network

MDPI and ACS Style

Gong, Y.; Liao, P.; Zhang, X.; Zhang, L.; Chen, G.; Zhu, K.; Tan, X.; Lv, Z. Enlighten-GAN for Super Resolution Reconstruction in Mid-Resolution Remote Sensing Images. Remote Sens. 2021, 13, 1104. https://doi.org/10.3390/rs13061104

AMA Style

Gong Y, Liao P, Zhang X, Zhang L, Chen G, Zhu K, Tan X, Lv Z. Enlighten-GAN for Super Resolution Reconstruction in Mid-Resolution Remote Sensing Images. Remote Sensing. 2021; 13(6):1104. https://doi.org/10.3390/rs13061104

Chicago/Turabian Style

Gong, Yuanfu, Puyun Liao, Xiaodong Zhang, Lifei Zhang, Guanzhou Chen, Kun Zhu, Xiaoliang Tan, and Zhiyong Lv. 2021. "Enlighten-GAN for Super Resolution Reconstruction in Mid-Resolution Remote Sensing Images" Remote Sensing 13, no. 6: 1104. https://doi.org/10.3390/rs13061104
