Open Access Article

SAR-to-Optical Image Translation Based on Conditional Generative Adversarial Networks—Optimization, Opportunities and Limits

1 Remote Sensing Technology Institute, German Aerospace Center (DLR), 82234 Wessling, Germany
2 Signal Processing in Earth Observation, Technical University of Munich (TUM), 80333 Munich, Germany
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(17), 2067; https://doi.org/10.3390/rs11172067
Received: 22 July 2019 / Revised: 23 August 2019 / Accepted: 27 August 2019 / Published: 3 September 2019
(This article belongs to the Special Issue Advances in Remote Sensing Image Fusion)
Due to its all-time imaging capability, synthetic aperture radar (SAR) remote sensing plays an important role in Earth observation. Yet the ability to interpret SAR data is limited, even for experts, since the human eye is not accustomed to the effects of distance-dependent imaging, to signal intensities measured in the radar part of the spectrum, or to image characteristics caused by speckle and post-processing steps. This paper is concerned with machine learning for SAR-to-optical image-to-image translation, with the goal of supporting the interpretation and analysis of original SAR data. A conditional adversarial network is adopted and optimized to generate alternative SAR image representations, using pairs of SAR images (starting point) and optical images (reference) for training. Following this strategy, the focus is set on the value of empirical knowledge for initialization, the impact of the results on follow-up applications, and the opportunities and drawbacks of this application of deep learning. Case-study results are shown for high-resolution (SAR: TerraSAR-X; optical: ALOS PRISM) and low-resolution (Sentinel-1 and -2) data. The properties of the alternative image representations are evaluated based on feedback from experts in SAR remote sensing and on the impact on road extraction as an example of a follow-up application. The results provide a basis for explaining fundamental limitations of the SAR-to-optical image translation idea, but also indicate benefits of alternative SAR image representations.
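Conditional GANs for paired image-to-image translation of this kind are commonly trained with an adversarial loss combined with an L1 reconstruction term (the pix2pix-style objective). As an illustration only, not the authors' actual implementation, the following NumPy sketch computes such a combined loss; the function names, array shapes, and the weight `lam` are all assumptions chosen for clarity:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(pred_logits, target):
    # Binary cross-entropy between discriminator logits and a 0/1 target.
    p = sigmoid(np.asarray(pred_logits, dtype=float))
    return float(-np.mean(target * np.log(p + 1e-12)
                          + (1 - target) * np.log(1 - p + 1e-12)))

def cgan_losses(d_real_logits, d_fake_logits, fake_optical, real_optical, lam=100.0):
    """Pix2pix-style objective (illustrative sketch, not the paper's exact setup).

    d_real_logits: discriminator outputs for (SAR, real optical) pairs
    d_fake_logits: discriminator outputs for (SAR, generated optical) pairs
    """
    # Discriminator: classify real pairs as 1, generated pairs as 0.
    d_loss = bce(d_real_logits, 1.0) + bce(d_fake_logits, 0.0)
    # Generator: fool the discriminator ...
    g_adv = bce(d_fake_logits, 1.0)
    # ... while staying close to the optical reference (L1 term).
    g_l1 = float(np.mean(np.abs(fake_optical - real_optical)))
    g_loss = g_adv + lam * g_l1
    return d_loss, g_loss

# Toy example on random "images" (batch of one 64x64 RGB patch).
rng = np.random.default_rng(0)
fake = rng.random((1, 64, 64, 3))
real = rng.random((1, 64, 64, 3))
d_loss, g_loss = cgan_losses(rng.normal(size=4), rng.normal(size=4), fake, real)
```

In a real training loop the two losses would be minimized alternately with respect to the discriminator and generator weights; the sketch only shows how the conditional objective is assembled.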
Keywords: synthetic aperture radar (SAR); deep learning; interpretation; generative adversarial networks
Fuentes Reyes, M.; Auer, S.; Merkle, N.; Henry, C.; Schmitt, M. SAR-to-Optical Image Translation Based on Conditional Generative Adversarial Networks—Optimization, Opportunities and Limits. Remote Sens. 2019, 11, 2067.
