Open Access Article

Deep Salient Feature Based Anti-Noise Transfer Network for Scene Classification of Remote Sensing Imagery

1 Department of Information Engineering, China University of Geosciences, Wuhan 430075, China
2 National Engineering Research Center of Geographic Information System, Wuhan 430075, China
* Author to whom correspondence should be addressed.
Remote Sens. 2018, 10(3), 410; https://doi.org/10.3390/rs10030410
Received: 16 January 2018 / Revised: 28 February 2018 / Accepted: 1 March 2018 / Published: 6 March 2018
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
Abstract

Remote sensing (RS) scene classification is important for the semantic interpretation of RS imagery. Although tremendous strides have been made in RS scene classification, one remaining open challenge is recognizing RS scenes under quality variations such as differing scales and noise. This paper proposes a deep salient feature based anti-noise transfer network (DSFATN) that effectively enhances and exploits high-level features for RS scene classification under different scale and noise conditions. In DSFATN, a novel discriminative deep salient feature (DSF) is introduced via saliency-guided DSF extraction, which applies a patch-based visual saliency (PBVS) algorithm, based on "visual attention" mechanisms, to guide pre-trained CNNs in producing discriminative high-level features. An anti-noise network is then proposed to learn and enhance the robust, anti-noise structure information of RS scenes by directly propagating label information to the fully-connected layers. The anti-noise network is trained by minimizing a joint loss that integrates an anti-noise constraint with a softmax classification loss. The proposed network architecture can be trained easily with a limited amount of training data. Experiments on three RS scene datasets of different scales show that DSFATN achieves excellent performance and strong robustness under different scale and noise conditions: it obtains classification accuracies of 98.25%, 98.46%, and 98.80% on the UC Merced Land Use Dataset (UCM), the Google image dataset of SIRI-WHU, and the SAT-6 dataset, respectively, advancing the state of the art substantially.
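The joint loss described above can be sketched as a softmax classification loss plus an anti-noise term that encourages the fully-connected features of a clean scene and its noise-corrupted version to agree. The following is a minimal illustrative sketch only; the function names and the weighting parameter `lambda_an` are assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, label):
    """Softmax classification loss for a single sample."""
    p = softmax(logits)
    return -np.log(p[label])

def anti_noise_constraint(feat_clean, feat_noisy):
    """Penalize disagreement between clean and noisy-image features
    (squared Euclidean distance; an illustrative choice)."""
    return np.sum((feat_clean - feat_noisy) ** 2)

def joint_loss(logits, label, feat_clean, feat_noisy, lambda_an=0.1):
    """Joint loss = classification loss + weighted anti-noise constraint.
    lambda_an is a hypothetical trade-off weight."""
    return cross_entropy(logits, label) + \
        lambda_an * anti_noise_constraint(feat_clean, feat_noisy)

# Toy usage: identical clean/noisy features incur no anti-noise penalty,
# so the joint loss reduces to the classification loss alone.
logits = np.array([2.0, 0.5, -1.0])
f = np.array([1.0, 2.0, 3.0])
print(joint_loss(logits, 0, f, f))
```

In this formulation, minimizing the second term pushes the network toward noise-invariant representations, while the first term keeps those representations discriminative for classification.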
Keywords: scene classification; saliency detection; deep salient feature; anti-noise transfer network; DSFATN
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

MDPI and ACS Style

Gong, X.; Xie, Z.; Liu, Y.; Shi, X.; Zheng, Z. Deep Salient Feature Based Anti-Noise Transfer Network for Scene Classification of Remote Sensing Imagery. Remote Sens. 2018, 10, 410.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Remote Sens. EISSN 2072-4292, published by MDPI AG, Basel, Switzerland.