Open Access Article

The Comparison of Fusion Methods for HSRRSI Considering the Effectiveness of Land Cover (Features) Object Recognition Based on Deep Learning

1. School of Geomatics and Urban Spatial Information, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
2. Key Laboratory for Urban Geomatics of National Administration of Surveying, Mapping and Geoinformation, Beijing 100044, China
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(12), 1435; https://doi.org/10.3390/rs11121435
Received: 21 May 2019 / Revised: 11 June 2019 / Accepted: 13 June 2019 / Published: 17 June 2019
(This article belongs to the Special Issue Remote Sensing based Building Extraction)
PDF [40517 KB, uploaded 25 June 2019]

Abstract

The efficient and accurate application of deep learning in the remote sensing field depends largely on the pre-processing of remote sensing images. In particular, image fusion is essential for achieving the complementarity of the panchromatic band and the multispectral bands in high spatial resolution remote sensing images. In this paper, we consider not only the visual effect of the fused images but also the subsequent effectiveness of information extraction and feature recognition based on them. Using WorldView-3 images of Tongzhou District, Beijing, we apply the fusion results to experiments on object recognition of typical urban features based on deep learning. Furthermore, we quantitatively analyze the mainstream pixel-based fusion methods of IHS (Intensity-Hue-Saturation), PCS (Principal Component Substitution), GS (Gram–Schmidt), ELS (Ehlers), HPF (High-Pass Filtering), and HCS (Hyperspherical Color Space) from the perspectives of spectral fidelity, geometric features, and recognition accuracy. The results show apparent differences in visual effect and quantitative indices among the fusion methods, and the PCS fusion method achieves the most satisfactory comprehensive effectiveness in the object recognition of land cover (features) based on deep learning.
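To illustrate the kind of pixel-based fusion the abstract compares, the sketch below implements the general idea behind the HPF (High-Pass Filtering) method: upsample the multispectral bands to the panchromatic grid, extract the high-frequency spatial detail from the panchromatic band with a low-pass filter, and inject that detail into each spectral band. This is a minimal illustration of the generic technique, not the exact implementation or parameters used in the paper; the function name, resampling order, and kernel size are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def hpf_fuse(pan, ms, ratio=4, kernel=5):
    """Minimal High-Pass Filtering (HPF) pansharpening sketch.

    pan:   2-D panchromatic band at full spatial resolution.
    ms:    3-D multispectral cube (bands, h, w) at 1/ratio resolution.
    ratio: spatial resolution ratio between pan and ms (assumed 4).
    """
    # Upsample each multispectral band to the panchromatic grid
    # (bilinear, order=1, to limit ringing).
    ms_up = np.stack([zoom(band, ratio, order=1) for band in ms])
    # High-frequency detail = pan minus its low-pass (smoothed) version.
    detail = pan - uniform_filter(pan, size=kernel)
    # Inject the same spatial detail into every spectral band.
    return ms_up + detail[None, :, :]

pan = np.arange(64, dtype=float).reshape(8, 8)
ms = np.ones((3, 2, 2))
fused = hpf_fuse(pan, ms, ratio=4)
```

The spectral-substitution families compared in the paper (IHS, PCS, GS, HCS) instead transform the multispectral bands into a new space and replace one component with the panchromatic band before inverting the transform; HPF is the detail-injection representative among them.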
Keywords: image fusion; high spatial resolution remotely sensed imagery; object recognition; deep learning; method comparison

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Share & Cite This Article

MDPI and ACS Style

Song, S.; Liu, J.; Pu, H.; Liu, Y.; Luo, J. The Comparison of Fusion Methods for HSRRSI Considering the Effectiveness of Land Cover (Features) Object Recognition Based on Deep Learning. Remote Sens. 2019, 11, 1435.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Remote Sens. EISSN 2072-4292, published by MDPI AG, Basel, Switzerland.