Open Access Article
Remote Sens. 2018, 10(9), 1429; https://doi.org/10.3390/rs10091429

Supervised Classification of Multisensor Remotely Sensed Images Using a Deep Learning Framework

1. Chester F. Carlson Center for Imaging Science, Rochester Institute of Technology, 54 Lomb Memorial Drive, Rochester, NY 14623, USA
2. Department of Electrical & Microelectronic Engineering, Rochester Institute of Technology, 54 Lomb Memorial Drive, Rochester, NY 14623, USA
3. National Geospatial-Intelligence Agency, 7500 GEOINT Dr, Springfield, VA 22153, USA
NGA Contractor.
* Author to whom correspondence should be addressed.
Received: 2 July 2018 / Revised: 30 August 2018 / Accepted: 31 August 2018 / Published: 7 September 2018
(This article belongs to the Special Issue Recent Advances in Neural Networks for Remote Sensing)

Abstract

In this paper, we present a convolutional neural network (CNN)-based method to efficiently combine information from multisensor remotely sensed images for pixel-wise semantic classification. The CNN features obtained from multiple spectral bands are fused at the initial layers of the deep neural network rather than at the final layers. This early fusion architecture has fewer parameters and thereby reduces computational time and GPU memory during training and inference. We also propose a composite fusion architecture that fuses features throughout the network. The methods were validated on four datasets: ISPRS Potsdam, ISPRS Vaihingen, IEEE Zeebruges, and a combined Sentinel-1/Sentinel-2 dataset. For the Sentinel-1/-2 dataset, we obtain ground-truth labels for three classes from OpenStreetMap. Results on all the images show that early fusion, specifically after the third layer of the network, achieves results similar to or better than a decision-level fusion mechanism. The performance of the proposed architecture is also on par with state-of-the-art results.
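The abstract's claim that early fusion uses fewer parameters than decision-level fusion can be illustrated with a back-of-envelope parameter count. The sketch below is not the authors' architecture: the channel widths and the 4-band optical plus 1-band auxiliary input split are hypothetical, chosen only to show why stacking sensor bands at the input and running one stream is cheaper than running a separate full stream per sensor.

```python
def conv_params(in_ch, out_ch, k=3):
    """Parameters in one k x k conv layer: weights plus one bias per output channel."""
    return out_ch * (in_ch * k * k + 1)

def stream_params(in_ch, widths):
    """Total parameters of a sequential stack of conv layers with the given widths."""
    total, c = 0, in_ch
    for w in widths:
        total += conv_params(c, w)
        c = w
    return total

WIDTHS = [64, 128, 256, 256]  # hypothetical channel widths per layer

# Early fusion: stack the 4 optical bands and 1 auxiliary band, run one stream.
early = stream_params(4 + 1, WIDTHS)

# Decision-level (late) fusion: one full stream per sensor, fused at the end.
late = stream_params(4, WIDTHS) + stream_params(1, WIDTHS)

print(early, late)  # early fusion needs roughly half the parameters
```

Because the per-band input channels only affect the first layer, the late-fusion model pays for every deeper layer twice, which matches the paper's observation that early fusion reduces training-time computation and GPU memory.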
Keywords: image classification; deep learning; multisensor data; sentinel data

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Piramanayagam, S.; Saber, E.; Schwartzkopf, W.; Koehler, F.W. Supervised Classification of Multisensor Remotely Sensed Images Using a Deep Learning Framework. Remote Sens. 2018, 10, 1429.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Remote Sens. EISSN 2072-4292, published by MDPI AG, Basel, Switzerland.