Sentinel-2 Image Fusion Using a Deep Residual Network
Abstract
Single-sensor fusion is the fusion of two or more spectrally disjoint reflectance bands that have different spatial resolutions and were acquired by the same sensor. An example is Sentinel-2, a constellation of two satellites that acquires multispectral bands at 10 m, 20 m, and 60 m resolution in the visible, near-infrared (NIR), and shortwave-infrared (SWIR) ranges. In this paper, we present a method to fuse the fine and coarse spatial resolution bands to obtain finer spatial resolution versions of the coarse bands. It is based on a deep convolutional neural network with a residual design that models the fusion problem. The residual architecture helps the network converge faster and allows for deeper networks by relieving the network of having to learn the coarse spatial resolution part of the inputs, letting it focus on constructing the missing fine spatial details. Using several real Sentinel-2 datasets, we study the effects of the most important hyperparameters on the quantitative quality of the fused image, compare the method to several state-of-the-art methods, and demonstrate that it outperforms the comparison methods in experiments.
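The residual design described above can be sketched in a few lines: the coarse band is interpolated to the target resolution and added back through a skip connection, so a learned component only has to supply the missing fine spatial details. The following is a minimal illustration of that skip-connection idea, not the authors' actual network; the function names and the nearest-neighbour upsampler are assumptions standing in for the interpolation and CNN of the paper.

```python
import numpy as np

def nearest_upsample(band, factor):
    # Naive nearest-neighbour upsampling of a coarse band (a stand-in
    # for the interpolation a real fusion pipeline would use).
    return np.repeat(np.repeat(band, factor, axis=0), factor, axis=1)

def residual_fusion(coarse_band, predicted_residual, factor):
    # Residual design: the upsampled coarse band is passed through a
    # skip connection and added to the predicted residual, so the
    # network never has to relearn the coarse-resolution content.
    upsampled = nearest_upsample(coarse_band, factor)
    return upsampled + predicted_residual

# Example: a 2x2 coarse 20 m band fused to 10 m (factor 2) with a
# zero residual simply reproduces the interpolated band.
coarse = np.array([[1.0, 2.0], [3.0, 4.0]])
fused = residual_fusion(coarse, np.zeros((4, 4)), factor=2)
```

In the paper's setting, `predicted_residual` would come from the deep CNN, and the skip connection is what lets training focus entirely on the high-frequency details absent from the coarse input.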
Share & Cite This Article
Palsson, F.; Sveinsson, J.R.; Ulfarsson, M.O. Sentinel-2 Image Fusion Using a Deep Residual Network. Remote Sens. 2018, 10, 1290.
Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.