Open Access Article

Do Game Data Generalize Well for Remote Sensing Image Segmentation?

by Zhengxia Zou 1,†, Tianyang Shi 2,3,4,†, Wenyuan Li 2,3,4, Zhou Zhang 5 and Zhenwei Shi 2,3,4,*
1 Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI 48109, USA
2 Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China
3 Beijing Key Laboratory of Digital Media, Beihang University, Beijing 100191, China
4 State Key Laboratory of Virtual Reality Technology and Systems, School of Astronautics, Beihang University, Beijing 100191, China
5 Department of Biological Systems Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA
* Author to whom correspondence should be addressed.
† These authors are co-first authors, as they contributed equally to this work.
Remote Sens. 2020, 12(2), 275; https://doi.org/10.3390/rs12020275
Received: 10 December 2019 / Revised: 8 January 2020 / Accepted: 10 January 2020 / Published: 14 January 2020
Despite recent progress in deep learning and remote sensing image interpretation, adapting a deep learning model between different sources of remote sensing data remains a challenge. This paper investigates an interesting question: do synthetic data generalize well for remote sensing image applications? To answer this question, we take building segmentation as an example, training a deep learning model on the city map of the well-known video game "Grand Theft Auto V" and then adapting the model to real-world remote sensing images. We propose a generative adversarial training-based segmentation framework to improve the adaptability of the segmentation model. Our model consists of a CycleGAN model and a ResNet-based segmentation network: the former is a well-known image-to-image translation framework that learns a mapping of images from the game domain to the remote sensing domain; the latter learns to predict pixel-wise building masks from the translated data. All models in our method can be trained in an end-to-end fashion, and the segmentation model can be trained without any additional ground truth annotations of the real-world images. Experimental results on a public building segmentation dataset demonstrate the effectiveness of our adaptation method, which outperforms other state-of-the-art semantic segmentation methods such as DeepLab-v3 and UNet. A further advantage of our method is that introducing semantic information into the image-to-image translation framework also improves the image style conversion itself.
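The two-stage pipeline described in the abstract — an image-to-image translation generator feeding a residual segmentation network — can be sketched in a few lines. This is a minimal, hedged illustration in PyTorch, not the authors' implementation: the tiny `TranslationGenerator` stands in for the full CycleGAN generator, and `SegmentationHead` is a toy residual network standing in for the ResNet-based segmentation model; all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class TranslationGenerator(nn.Module):
    """Toy stand-in for the CycleGAN generator: maps a game-domain
    image to a remote-sensing-style image of the same spatial size."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, kernel_size=3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class SegmentationHead(nn.Module):
    """Toy residual segmentation network: predicts a one-channel
    pixel-wise building mask from the translated image."""
    def __init__(self, channels=3):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 16, kernel_size=3, padding=1)
        self.out = nn.Conv2d(16, 1, kernel_size=1)

    def forward(self, x):
        h = torch.relu(self.conv1(x))
        h = h + torch.relu(self.conv2(h))   # residual connection
        return torch.sigmoid(self.out(h))   # mask probabilities in [0, 1]

# Forward pass: game image -> translated image -> building mask.
g = TranslationGenerator()
seg = SegmentationHead()
game_img = torch.randn(1, 3, 64, 64)   # synthetic (game-domain) image
rs_style = g(game_img)                 # translated to remote sensing style
mask = seg(rs_style)                   # pixel-wise building mask
print(tuple(mask.shape))               # (1, 1, 64, 64)
```

In the full method, both stages are trained jointly end to end, with adversarial losses on the translated images and a segmentation loss computed against the game-derived labels; the sketch above shows only the inference path.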
Keywords: remote sensing; deep learning; video game; domain adaptation; building segmentation
MDPI and ACS Style

Zou, Z.; Shi, T.; Li, W.; Zhang, Z.; Shi, Z. Do Game Data Generalize Well for Remote Sensing Image Segmentation? Remote Sens. 2020, 12, 275.

