Open Access Article

Sentinel-2 Image Fusion Using a Deep Residual Network

Department of Electrical Engineering, University of Iceland, Hjardarhagi 2-6, Reykjavik 107, Iceland
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(8), 1290; https://doi.org/10.3390/rs10081290
Received: 4 July 2018 / Revised: 30 July 2018 / Accepted: 7 August 2018 / Published: 15 August 2018
(This article belongs to the Special Issue Recent Advances in Neural Networks for Remote Sensing)
PDF [14334 KB, uploaded 15 August 2018]
Abstract

Single-sensor fusion is the fusion of two or more spectrally disjoint reflectance bands that have different spatial resolutions and have been acquired by the same sensor. An example is Sentinel-2, a constellation of two satellites that acquires multispectral bands at 10 m, 20 m and 60 m spatial resolution in the visible, near-infrared (NIR) and shortwave-infrared (SWIR) range. In this paper, we present a method to fuse the fine and coarse spatial resolution bands to obtain finer spatial resolution versions of the coarse bands. It is based on a deep convolutional neural network with a residual design that models the fusion problem. The residual architecture helps the network converge faster and allows for deeper networks, since it relieves the network of having to learn the coarse spatial resolution part of the inputs and lets it focus on constructing the missing fine spatial details. Using several real Sentinel-2 datasets, we study the effects of the most important hyperparameters on the quantitative quality of the fused image, compare the method to several state-of-the-art methods, and demonstrate that it outperforms the comparison methods in our experiments.
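The residual design described in the abstract can be sketched as follows. This is a minimal NumPy illustration of the idea only, not the authors' implementation: the function names (`upsample_nearest`, `residual_fuse`, `toy_residual`) are hypothetical, and nearest-neighbour upsampling plus a zero-residual placeholder stand in for the interpolation and trained CNN a real pipeline would use. The point is the skip connection: the network predicts only the missing fine detail, which is then added to the upsampled coarse band.

```python
import numpy as np

def upsample_nearest(band, factor):
    # Nearest-neighbour upsampling; a real pipeline would typically
    # use a smoother interpolation (e.g. bicubic) here.
    return np.repeat(np.repeat(band, factor, axis=0), factor, axis=1)

def residual_fuse(coarse_band, predict_residual, factor=2):
    """Residual fusion: the predictor supplies only the missing fine
    spatial detail; the upsampled coarse band is added back through a
    skip connection, so the model need not relearn the coarse content."""
    upsampled = upsample_nearest(coarse_band, factor)
    return upsampled + predict_residual(upsampled)

def toy_residual(x):
    # Placeholder for the trained CNN: predicting zero detail
    # reduces the fusion to plain upsampling.
    return np.zeros_like(x)

coarse = np.arange(9.0).reshape(3, 3)          # e.g. a tiny 20 m band patch
fused = residual_fuse(coarse, toy_residual, 2)  # fused to 10 m grid
print(fused.shape)  # (6, 6)
```

With a trained network in place of `toy_residual`, the same skip connection is what lets deeper architectures converge faster, as the abstract notes.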
Keywords: residual neural network; image fusion; convolutional neural network; Sentinel-2
[Figure: graphical abstract]

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
MDPI and ACS Style

Palsson, F.; Sveinsson, J.R.; Ulfarsson, M.O. Sentinel-2 Image Fusion Using a Deep Residual Network. Remote Sens. 2018, 10, 1290.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Remote Sens. EISSN 2072-4292, published by MDPI AG, Basel, Switzerland.