A Multi-Sensor Fusion Framework Based on Coupled Residual Convolutional Neural Networks

by Hao Li, Pedram Ghamisi, Behnood Rasti, Zhaoyan Wu, Aurelie Shapiro, Michael Schultz and Alexander Zipf

1 GIScience Chair, Institute of Geography, Heidelberg University, 69120 Heidelberg, Germany
2 Helmholtz-Zentrum Dresden-Rossendorf, Helmholtz Institute Freiberg for Resource Technology, Exploration, Chemnitzer Str. 40, D-09599 Freiberg, Germany
3 School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
4 Here+There Mapping Solutions, 10115 Berlin, Germany
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(12), 2067; https://doi.org/10.3390/rs12122067
Received: 22 May 2020 / Revised: 18 June 2020 / Accepted: 23 June 2020 / Published: 26 June 2020
(This article belongs to the Special Issue Advanced Multisensor Image Analysis Techniques for Land-Cover Mapping)
Multi-sensor remote sensing image classification has been considerably improved by deep learning feature extraction and classification networks. In this paper, we propose a novel multi-sensor fusion framework for the fusion of diverse remote sensing data sources. The novelty of this work lies in three design innovations: (1) a unique adaptation of coupled residual networks to multi-sensor data classification; (2) auxiliary training via an adjusted loss function to address classification with limited training samples; and (3) a residual-block design that reduces computational complexity while preserving the discriminative characteristics of multi-sensor features. The proposed classification framework is evaluated on three remote sensing datasets: the urban University of Houston datasets (Houston 2013 and the training portion of Houston 2018) and the rural Trento dataset. The framework achieves overall accuracies of 93.57%, 81.20%, and 98.81% on Houston 2013, the training portion of Houston 2018, and Trento, respectively, a considerable improvement over existing state-of-the-art methods.
Keywords: deep learning; data fusion; hyperspectral image classification; residual learning; multi-sensor fusion; convolutional neural networks (CNNs); auxiliary loss function
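To make the three design elements in the abstract concrete, the sketch below shows one plausible PyTorch realization of a coupled residual fusion network: two sensor-specific stems (e.g., hyperspectral and LiDAR patches) feed residual blocks whose weights are shared ("coupled") across both branches, and an auxiliary loss adds down-weighted per-branch classification terms to the fused loss. All names (`CoupledResidualFusionNet`, `total_loss`), layer sizes, the exact weight-sharing scheme, and the `aux_weight` value are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch of a coupled residual fusion network with an auxiliary
# loss, assuming a PyTorch implementation. Architecture details are
# illustrative, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    """A lightweight residual block: two 3x3 convs plus an identity skip."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # skip connection preserves the input signal


class CoupledResidualFusionNet(nn.Module):
    """Two sensor-specific stems feed residual blocks whose weights are
    shared across branches; branch features are concatenated for fusion."""
    def __init__(self, in_ch_a, in_ch_b, channels, num_classes):
        super().__init__()
        self.stem_a = nn.Conv2d(in_ch_a, channels, 3, padding=1)  # e.g., HSI
        self.stem_b = nn.Conv2d(in_ch_b, channels, 3, padding=1)  # e.g., LiDAR
        # The same residual blocks process both branches ("coupling").
        self.shared = nn.Sequential(ResidualBlock(channels),
                                    ResidualBlock(channels))
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head_a = nn.Linear(channels, num_classes)        # auxiliary head
        self.head_b = nn.Linear(channels, num_classes)        # auxiliary head
        self.head_fused = nn.Linear(2 * channels, num_classes)

    def forward(self, xa, xb):
        fa = self.pool(self.shared(F.relu(self.stem_a(xa)))).flatten(1)
        fb = self.pool(self.shared(F.relu(self.stem_b(xb)))).flatten(1)
        return (self.head_fused(torch.cat([fa, fb], dim=1)),
                self.head_a(fa), self.head_b(fb))


def total_loss(logits_fused, logits_a, logits_b, target, aux_weight=0.3):
    """Fused cross-entropy plus down-weighted per-branch auxiliary losses,
    one way to regularize training when labeled samples are scarce."""
    main = F.cross_entropy(logits_fused, target)
    aux = F.cross_entropy(logits_a, target) + F.cross_entropy(logits_b, target)
    return main + aux_weight * aux
```

During training, all three heads contribute to the gradient (`loss = total_loss(*model(xa, xb), y)`); at inference only the fused head is used. Sharing the residual blocks across branches keeps the parameter count low, which is one way the residual-block design described in the abstract could reduce computational complexity.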

MDPI and ACS Style

Li, H.; Ghamisi, P.; Rasti, B.; Wu, Z.; Shapiro, A.; Schultz, M.; Zipf, A. A Multi-Sensor Fusion Framework Based on Coupled Residual Convolutional Neural Networks. Remote Sens. 2020, 12, 2067. https://doi.org/10.3390/rs12122067

