Open Access Article
Remote Sens. 2017, 9(7), 725; https://doi.org/10.3390/rs9070725

High-Resolution Remote Sensing Image Retrieval Based on CNNs from a Dimensional Perspective

State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Received: 26 May 2017 / Revised: 4 July 2017 / Accepted: 9 July 2017 / Published: 14 July 2017

Abstract

Because of recent advances in Convolutional Neural Networks (CNNs), traditional CNNs have been employed to extract thousands of codes as feature representations for image retrieval. In this paper, we propose that more powerful features for high-resolution remote sensing image representation can be learned using only several tens of codes; this approach can improve the retrieval accuracy while decreasing the time and storage requirements. To accomplish this goal, we first investigate the learning of a series of features with different dimensions, from a few tens to thousands of codes, via our improved CNN frameworks. Then, Principal Component Analysis (PCA) is introduced to compress the high-dimensional remote sensing image feature codes learned by traditional CNNs. Comprehensive comparisons are conducted to evaluate the retrieval performance based on feature codes of different dimensions learned by the improved CNNs as well as by PCA compression. To further demonstrate the powerful ability of the low-dimensional feature representation learned by the improved CNN frameworks, a Feature Weighted Map (FWM), which performs feature visualization and provides a better understanding of the nature of Deep Convolutional Neural Network (DCNN) frameworks, is explored. All the CNN models are trained from scratch using a large-scale, high-resolution remote sensing image archive, which will be published and made available to the public. The experimental results show that our method outperforms state-of-the-art CNN frameworks in terms of accuracy and storage.
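The PCA compression step described in the abstract can be illustrated with a short sketch. This is not the authors' code; the archive size (200 images), the 4096-D feature dimension, and the 32-code target are hypothetical values chosen only to mirror the "thousands of codes" versus "tens of codes" contrast, assuming Euclidean-distance ranking for retrieval.

```python
import numpy as np

# Hypothetical archive: 200 images, each with a 4096-D CNN feature code
rng = np.random.default_rng(0)
features = rng.standard_normal((200, 4096))

# PCA via SVD: center the codes, decompose, keep the top-32 components
mean = features.mean(axis=0)
centered = features - mean
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
components = Vt[:32]                  # 32 principal directions
compact = centered @ components.T     # (200, 32) compact codes

# Retrieval: rank archive images by distance to a query's compact code
query = compact[0]
dists = np.linalg.norm(compact - query, axis=1)
ranking = np.argsort(dists)           # nearest first; ranking[0] is the query
print(compact.shape, ranking[0])
```

Storing 32 floats per image instead of 4096 cuts storage by two orders of magnitude and makes the distance computation proportionally cheaper, which is the practical motivation for compact codes in large-scale retrieval.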
Keywords: remote sensing image retrieval; Deep Compact Codes (DCC); Feature Weighted Map (FWM); CNN features; dimension reduction
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
MDPI and ACS Style

Xiao, Z.; Long, Y.; Li, D.; Wei, C.; Tang, G.; Liu, J. High-Resolution Remote Sensing Image Retrieval Based on CNNs from a Dimensional Perspective. Remote Sens. 2017, 9, 725.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Remote Sens. EISSN 2072-4292. Published by MDPI AG, Basel, Switzerland.