Article

Deep Discriminative Representation Learning with Attention Map for Scene Classification

by Jun Li 1,2,3,†, Daoyu Lin 1,2,†, Yang Wang 1,2, Guangluan Xu 1,2, Yunyan Zhang 1,2,3, Chibiao Ding 1,3,* and Yanhai Zhou 4

1 Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
2 Key Laboratory of Network Information System Technology (NIST), Institute of Electronics, Chinese Academy of Sciences, Beijing 100190, China
3 School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100190, China
4 The Equipment Project Management Center, Equipment Department of People's Liberation Army Rocket Force, Beijing 100085, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Remote Sens. 2020, 12(9), 1366; https://doi.org/10.3390/rs12091366
Received: 2 March 2020 / Revised: 5 April 2020 / Accepted: 21 April 2020 / Published: 26 April 2020
In recent years, convolutional neural networks (CNNs) have achieved great success in scene classification in computer vision. Although these CNNs can reach excellent classification accuracy, the discriminative ability of the feature representations they extract is still limited when distinguishing more complex remote sensing images. Therefore, in this paper we propose a unified feature fusion framework based on an attention mechanism, called Deep Discriminative Representation Learning with Attention Map (DDRL-AM). Firstly, the Gradient-weighted Class Activation Mapping (Grad-CAM) algorithm is applied to generate attention maps associated with the predicted results, so that the CNN focuses on the most salient parts of the image. Secondly, a spatial feature transformer (SFT) is designed to extract discriminative features from the attention maps. A novel two-channel CNN architecture is then proposed that fuses the features extracted from the attention maps with those of the RGB (red-green-blue) stream. A new objective function combining the center loss and the cross-entropy loss is optimized to increase inter-class dispersion and decrease within-class variance. To show its effectiveness in classifying remote sensing images, the proposed DDRL-AM method is evaluated on four public benchmark datasets. The experimental results demonstrate the competitive scene classification performance of the DDRL-AM approach. Moreover, visualization of the features extracted by the proposed DDRL-AM method confirms that their discriminative ability has been increased.
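The joint objective described in the abstract, a cross-entropy term for class separability plus a center-loss term penalizing the distance of each feature to its class center, can be sketched as follows. This is a minimal NumPy illustration under our own assumptions (function names, the toy batch, and the weighting factor `lam` are invented for exposition), not the authors' implementation:

```python
import numpy as np

def cross_entropy_loss(logits, labels):
    """Softmax cross-entropy averaged over the batch."""
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def center_loss(features, labels, centers):
    """Half the mean squared distance of each feature to its class center."""
    return 0.5 * ((features - centers[labels]) ** 2).sum(axis=1).mean()

def joint_loss(logits, features, labels, centers, lam=0.01):
    """Cross-entropy plus a lambda-weighted center-loss penalty."""
    return cross_entropy_loss(logits, labels) + lam * center_loss(features, labels, centers)

# Toy batch: 4 samples, 3 classes, 2-D features
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
features = rng.normal(size=(4, 2))
labels = np.array([0, 1, 2, 0])
centers = np.zeros((3, 2))  # class centers, updated as training proceeds

loss = joint_loss(logits, features, labels, centers)
```

Minimizing the center-loss term pulls same-class features toward a shared center (reducing within-class variance), while the cross-entropy term keeps different classes separable.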
Keywords: spatial feature transformer; feature fusion; attention map; feature visualization; scene classification; remote sensing images
MDPI and ACS Style

Li, J.; Lin, D.; Wang, Y.; Xu, G.; Zhang, Y.; Ding, C.; Zhou, Y. Deep Discriminative Representation Learning with Attention Map for Scene Classification. Remote Sens. 2020, 12, 1366. https://doi.org/10.3390/rs12091366

