Open Access Article

Void Filling of Digital Elevation Models with a Terrain Texture Learning Model Based on Generative Adversarial Networks

School of Geography and Information Engineering, China University of Geosciences, Wuhan 430074, China
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(23), 2829; https://doi.org/10.3390/rs11232829
Received: 26 October 2019 / Revised: 23 November 2019 / Accepted: 25 November 2019 / Published: 28 November 2019
(This article belongs to the Special Issue Remote Sensing Image Restoration and Reconstruction)
Digital elevation models (DEMs) are an important information source for spatial modeling. However, data voids, which commonly exist in regions with rugged topography, result in incomplete DEM products and thus significantly degrade DEM data quality. Interpolation methods are commonly used to fill small voids, and multi-source fusion is an effective solution for large-scale voids. Nevertheless, high-quality auxiliary source information is often difficult to obtain in rugged mountainous areas, so void filling remains a challenge. In this paper, we propose a method based on a deep convolutional generative adversarial network (DCGAN) to address the problem of DEM void filling. A terrain texture generation model (TTGM) was constructed on the DCGAN framework. Elevation, terrain slope, and relief degree compose the training samples to better depict the terrain textural features of the DEM data. Moreover, resize-convolution was used in place of the traditional deconvolution process to overcome the staircase effect in the generated data. The TTGM was trained on non-void SRTM (Shuttle Radar Topography Mission) 1-arc-second data patches from mountainous regions collected across the globe. Information neighboring the voids was then used to infer a latent encoding for the missing areas that approximates the distribution of the training data. This was implemented with a loss function composed of pixel-wise, contextual, and perceptual constraints during the reconstruction process. The most appropriate fill surface generated by the TTGM was then employed to fill the voids, and Poisson blending was performed as a postprocessing step. Two models with different input sizes (64 × 64 and 128 × 128 pixels) were trained so that the proposed method can efficiently adapt to voids of different sizes. The experimental results indicate that the proposed method obtains results with good visual quality and reconstruction accuracy, and is superior to classical interpolation methods.
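The abstract names two implementation details that a short sketch can make concrete: replacing transposed convolutions with resize-convolutions in the DCGAN generator, and inferring a latent encoding for the void by minimizing a loss built from contextual and perceptual constraints. The following is a minimal PyTorch sketch of these ideas, not the authors' code: the layer sizes, channel counts, loss weights, optimizer settings, and names such as ResizeConvBlock and fill_void are illustrative assumptions, and the sketch generates only a single elevation channel rather than the elevation/slope/relief-degree samples used in the paper.

```python
# Minimal sketch of a resize-convolution generator, assuming a DCGAN-style
# architecture; sizes and layer counts are illustrative, not the paper's TTGM.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResizeConvBlock(nn.Module):
    """Nearest-neighbour upsampling followed by a stride-1 convolution.

    Replacing transposed convolution (deconvolution) with resize-convolution
    avoids the checkerboard/staircase artifacts mentioned in the abstract.
    """

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        x = F.interpolate(x, scale_factor=2, mode="nearest")
        return F.relu(self.bn(self.conv(x)))


class Generator(nn.Module):
    """Maps a latent vector z to a 64 x 64 terrain patch (elevation only here)."""

    def __init__(self, z_dim=100):
        super().__init__()
        self.fc = nn.Linear(z_dim, 512 * 4 * 4)
        self.blocks = nn.Sequential(
            ResizeConvBlock(512, 256),   # 4x4   -> 8x8
            ResizeConvBlock(256, 128),   # 8x8   -> 16x16
            ResizeConvBlock(128, 64),    # 16x16 -> 32x32
            ResizeConvBlock(64, 32),     # 32x32 -> 64x64
        )
        self.out = nn.Conv2d(32, 1, kernel_size=3, padding=1)

    def forward(self, z):
        x = self.fc(z).view(-1, 512, 4, 4)
        return torch.tanh(self.out(self.blocks(x)))
```

Once the generator and discriminator are trained, a fill surface can be obtained by searching the latent space so that the generated patch matches the valid pixels around the void while staying plausible under the discriminator. The decomposition below into a masked L1 context term plus a discriminator-based perceptual term is an assumption about how the pixel-wise/contextual/perceptual constraints might be combined; the weights and step counts are placeholders.

```python
# Sketch of latent-space optimization for void filling, assuming a trained
# generator G and discriminator D (logit output); reuses torch/F as above.
import torch
import torch.nn.functional as F


def fill_void(G, D, dem_patch, valid_mask, z_dim=100, steps=1000, lam=0.1):
    """dem_patch: (1, 1, H, W) normalized DEM; valid_mask: 1 where data exist, 0 in the void."""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=0.01)
    for _ in range(steps):
        opt.zero_grad()
        fake = G(z)
        # Contextual (pixel-wise) term: match the observed pixels around the void.
        contextual = torch.sum(torch.abs(valid_mask * (fake - dem_patch)))
        # Perceptual term: keep the generated surface on the learned terrain manifold.
        perceptual = torch.mean(F.softplus(-D(fake)))
        loss = contextual + lam * perceptual
        loss.backward()
        opt.step()
    with torch.no_grad():
        # Keep observed pixels and paste generated values into the void.
        filled = valid_mask * dem_patch + (1 - valid_mask) * G(z)
    return filled
```

In the paper, the selected fill surface is additionally merged into the original DEM with Poisson blending to remove seams along the void boundary; that postprocessing step is omitted from this sketch.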
Keywords: digital elevation models; void filling; terrain texture; deep learning
MDPI and ACS Style

Qiu, Z.; Yue, L.; Liu, X. Void Filling of Digital Elevation Models with a Terrain Texture Learning Model Based on Generative Adversarial Networks. Remote Sens. 2019, 11, 2829.
