Open Access Article

Learning Deep Hierarchical Spatial–Spectral Features for Hyperspectral Image Classification Based on Residual 3D-2D CNN

1 School of Surveying and Land Information Engineering, Henan Polytechnic University, Jiaozuo 454003, China
2 Center for Environmental Remote Sensing, Chiba University, Chiba 2638522, Japan
* Author to whom correspondence should be addressed.
Sensors 2019, 19(23), 5276; https://doi.org/10.3390/s19235276
Received: 27 October 2019 / Revised: 26 November 2019 / Accepted: 28 November 2019 / Published: 29 November 2019
(This article belongs to the Special Issue Hyperspectral Imaging (HSI) Sensing and Analysis)
Every pixel in a hyperspectral image contains detailed spectral information in hundreds of narrow bands captured by hyperspectral sensors. Pixel-wise classification of a hyperspectral image is the cornerstone of various hyperspectral applications. Deep learning models, represented by the convolutional neural network (CNN), provide an ideal solution for feature extraction and have made remarkable achievements in supervised hyperspectral classification. However, hyperspectral image annotation is time-consuming and laborious, and the available training data are usually limited. Because of this "small-sample problem", CNN-based hyperspectral classification remains challenging. Focusing on hyperspectral classification with limited samples, we designed an 11-layer CNN model called R-HybridSN (Residual-HybridSN) from the perspective of network optimization. By organically combining a 3D-2D CNN, residual learning, and depth-separable convolutions, R-HybridSN can better learn deep hierarchical spatial–spectral features from very few training samples. The performance of R-HybridSN is evaluated on three publicly available hyperspectral datasets with different amounts of training samples. Using only 5%, 1%, and 1% of the labeled data for training on Indian Pines, Salinas, and University of Pavia, respectively, R-HybridSN achieves classification accuracies of 96.46%, 98.25%, and 96.59%, respectively, far surpassing the comparison models.
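To make the abstract's architectural ingredients concrete, the following is a minimal PyTorch sketch of a hybrid 3D-2D convolutional block with an identity residual connection and a depthwise-separable 2D convolution, in the spirit of the R-HybridSN design. The layer counts, kernel sizes, channel widths, and the assumed 30-band, 25 × 25-pixel input patch are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn


class Hybrid3D2DBlock(nn.Module):
    """Illustrative 3D-2D CNN block (not the paper's exact R-HybridSN)."""

    def __init__(self, bands=30, n_classes=16):
        super().__init__()
        # 3D convolutions extract joint spatial-spectral features from
        # the (band, height, width) patch cube; padding preserves shape.
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(),
        )
        ch2d = 16 * bands  # spectral depth folded into 2D channels
        # Depthwise-separable 2D convolution: per-channel spatial filtering
        # (groups=ch2d) followed by a 1x1 pointwise mix, which keeps the
        # parameter count low for small-sample training.
        self.sep2d = nn.Sequential(
            nn.Conv2d(ch2d, ch2d, kernel_size=3, padding=1, groups=ch2d),
            nn.Conv2d(ch2d, ch2d, kernel_size=1),
            nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch2d, n_classes)
        )

    def forward(self, x):                 # x: (N, 1, bands, H, W)
        f = self.conv3d(x)                # (N, 16, bands, H, W)
        n, c, d, h, w = f.shape
        f2d = f.reshape(n, c * d, h, w)   # fold spectral dim into channels
        f2d = f2d + self.sep2d(f2d)       # residual (identity shortcut)
        return self.head(f2d)             # per-patch class logits


# Smoke test on a random patch cube (2 patches, 30 bands, 25x25 pixels).
logits = Hybrid3D2DBlock()(torch.randn(2, 1, 30, 25, 25))
print(logits.shape)  # torch.Size([2, 16])

The residual addition around the depthwise-separable stage mirrors the paper's motivation: shortcut connections ease optimization of the deeper network, while separable convolutions limit the parameters that must be fit from the small labeled set.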
Keywords: hyperspectral image classification; deep learning; convolutional neural network; residual learning; depth-separable convolution; R-HybridSN

MDPI and ACS Style

Feng, F.; Wang, S.; Wang, C.; Zhang, J. Learning Deep Hierarchical Spatial–Spectral Features for Hyperspectral Image Classification Based on Residual 3D-2D CNN. Sensors 2019, 19, 5276.

