Supervised Classification of Multisensor Remotely Sensed Images Using a Deep Learning Framework
Abstract: In this paper, we present a convolutional neural network (CNN)-based method that efficiently combines information from multisensor remotely sensed images for pixel-wise semantic classification. CNN features obtained from multiple spectral bands are fused at the initial layers of the network rather than at the final layers. This early-fusion architecture has fewer parameters and thereby reduces computational time and GPU memory usage during training and inference. We also propose a composite fusion architecture that fuses features throughout the network. The methods were validated on four datasets: ISPRS Potsdam, ISPRS Vaihingen, IEEE Zeebruges, and a combined Sentinel-1/Sentinel-2 dataset. For the Sentinel-1/-2 dataset, ground-truth labels for three classes were obtained from OpenStreetMap. Results on all images show that early fusion, specifically after the third layer of the network, achieves results similar to or better than a decision-level fusion mechanism, and the performance of the proposed architecture is on par with the state of the art.
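The parameter-count argument behind early fusion can be illustrated with a small back-of-the-envelope calculation. The sketch below uses hypothetical channel widths and a 3 x 3 kernel size; it is not the paper's actual architecture, only a minimal example of why concatenating two sensor streams after an early layer and sharing all subsequent layers needs fewer parameters than running a full per-sensor network and fusing at the decision level.

```python
def conv_params(c_in, c_out, k=3):
    """Parameters of a single k x k convolution (weights + biases)."""
    return c_in * c_out * k * k + c_out

# Hypothetical per-layer channel widths for two sensors, each with
# 3 input bands (e.g. RGB and a 3-band SAR/IR composite).
widths = [3, 64, 128, 256, 256, 256]
n_classes = 6

# Decision-level (late) fusion: each sensor runs its own full stream,
# including a classifier head; predictions are combined afterwards.
late = 2 * (
    sum(conv_params(widths[i], widths[i + 1]) for i in range(5))
    + conv_params(widths[5], n_classes)
)

# Early fusion after layer 3: separate streams for the first three
# layers, then channel-wise concatenation doubles the input width of
# layer 4, and everything downstream (including the head) is shared.
early = 2 * sum(conv_params(widths[i], widths[i + 1]) for i in range(3))
early += conv_params(2 * widths[3], widths[4])   # fused layer 4
early += conv_params(widths[4], widths[5])        # shared layer 5
early += conv_params(widths[5], n_classes)        # shared classifier

print("late fusion params: ", late)
print("early fusion params:", early)
```

The fused convolution has the same weight count as the two per-sensor convolutions it replaces (its input channels double), so the savings come entirely from the shared layers after the fusion point; the deeper the network beyond that point, the larger the gap.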
- Supplementary File 1:
ZIP-Document (ZIP, 50121 KB)
Piramanayagam, S.; Saber, E.; Schwartzkopf, W.; Koehler, F.W. Supervised Classification of Multisensor Remotely Sensed Images Using a Deep Learning Framework. Remote Sens. 2018, 10, 1429.