Open Access Article

A CNN-Based Fusion Method for Feature Extraction from Sentinel Data

1 Department of Electrical Engineering and Information Technology (DIETI), University Federico II, 80125 Naples, Italy
2 Centre International de Recherche Agronomique pour le Développement (CIRAD), Unité Mixte de Recherche Territoires, Environnement, Télédétection et Information Spatiale (UMR TETIS), Maison de la Télédétection, 34000 Montpellier, France
3 UMR TETIS, University of Montpellier, 34000 Montpellier, France
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(2), 236; https://doi.org/10.3390/rs10020236
Received: 21 December 2017 / Revised: 19 January 2018 / Accepted: 30 January 2018 / Published: 3 February 2018
(This article belongs to the Special Issue Deep Learning for Remote Sensing)
Sensitivity to weather conditions, and especially to clouds, is a severe limiting factor for the use of optical remote sensing in Earth monitoring applications. A possible alternative is to rely on weather-insensitive synthetic aperture radar (SAR) images. In many real-world applications, critical decisions are made based on informative optical or radar features related to items such as water, vegetation, or soil. Under cloudy conditions, however, optical-based features are not available, and they are commonly reconstructed through linear interpolation between data available at temporally close time instants. In this work, we propose to estimate missing optical features through data fusion and deep learning. Several sources of information are taken into account (optical sequences, SAR sequences, and a digital elevation model) so as to exploit both temporal and cross-sensor dependencies. Based on these data and on a tiny cloud-free fraction of the target image, a compact convolutional neural network (CNN) is trained to perform the desired estimation. To validate the proposed approach, we focus on the estimation of the normalized difference vegetation index (NDVI), using coupled Sentinel-1 and Sentinel-2 time series acquired over an agricultural region of Burkina Faso from May to November 2016. Several fusion schemes are considered, causal and non-causal, single-sensor and joint-sensor, corresponding to different operating conditions. Experimental results are very promising, showing a significant gain over baseline methods according to all performance indicators.
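The abstract only sketches the method at a high level; for a concrete picture, the following is a minimal, hypothetical PyTorch sketch (not the authors' implementation) of the kind of masked regression it describes: a compact fully convolutional network maps a co-registered stack of SAR, past-optical, and DEM channels to an NDVI map, supervised only on the cloud-free pixels of the target date. Recall that NDVI = (NIR - Red) / (NIR + Red), computed for Sentinel-2 from band 8 (NIR) and band 4 (Red). All names, channel counts, and layer widths below (CompactFusionCNN, the 6-channel input, the 48/24 filter widths) are illustrative assumptions.

```python
# Hypothetical sketch of the estimation setup described in the abstract.
# Not the authors' network: all layer widths, channel counts, and the
# training step below are illustrative assumptions.
import torch
import torch.nn as nn

def ndvi(nir: torch.Tensor, red: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """NDVI = (NIR - Red) / (NIR + Red); eps guards against division by zero."""
    return (nir - red) / (nir + red + eps)

class CompactFusionCNN(nn.Module):
    """Small fully convolutional net mapping a stack of co-registered inputs
    (SAR backscatter, past optical features, DEM) to a one-band NDVI map."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 48, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(48, 24, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(24, 1, kernel_size=3, padding=1),  # estimated NDVI
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Example input: VV + VH backscatter at two Sentinel-1 dates, one past
# Sentinel-2 NDVI map, and the DEM = 6 channels (an assumed configuration).
model = CompactFusionCNN(in_channels=6)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 6, 64, 64)            # batch of co-registered patches
target = torch.rand(8, 1, 64, 64)        # NDVI reference at the target date
mask = torch.rand(8, 1, 64, 64) > 0.9    # tiny cloud-free fraction of pixels

optimizer.zero_grad()
pred = model(x)
loss = ((pred - target) ** 2)[mask].mean()  # MSE on cloud-free pixels only
loss.backward()
optimizer.step()
```

In the paper's terms, a causal fusion scheme would restrict the input stack to acquisitions before the target date, while a non-causal one would also admit later acquisitions.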
Keywords: coregistration; pansharpening; multi-sensor fusion; multitemporal images; deep learning; normalized difference vegetation index (NDVI)
MDPI and ACS Style

Scarpa, G.; Gargiulo, M.; Mazza, A.; Gaetano, R. A CNN-Based Fusion Method for Feature Extraction from Sentinel Data. Remote Sens. 2018, 10, 236.

