Open Access Article

Pixel-Wise Classification of High-Resolution Ground-Based Urban Hyperspectral Images with Convolutional Neural Networks

by Farid Qamar 1,2,* and Gregory Dobler 1,3,4,5
1 Biden School of Public Policy and Administration, University of Delaware, Newark, DE 19716, USA
2 Department of Civil and Environmental Engineering, University of Delaware, Newark, DE 19716, USA
3 Department of Physics and Astronomy, University of Delaware, Newark, DE 19716, USA
4 Data Science Institute, University of Delaware, Newark, DE 19716, USA
5 Center for Urban Science and Progress, New York University, New York, NY 11201, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(16), 2540; https://doi.org/10.3390/rs12162540
Received: 3 July 2020 / Revised: 2 August 2020 / Accepted: 4 August 2020 / Published: 7 August 2020
(This article belongs to the Special Issue Feature Extraction and Data Classification in Hyperspectral Imaging)
Using ground-based, remote hyperspectral images spanning 0.4–1.0 microns in ∼850 spectral channels—acquired with the Urban Observatory facility in New York City—we evaluate the use of one-dimensional Convolutional Neural Networks (CNNs) for pixel-level classification and segmentation of built and natural materials in urban environments. We find that a multi-class model trained on hand-labeled pixels containing Sky, Clouds, Vegetation, Water, Building facades, Windows, Roads, Cars, and Metal structures yields an accuracy of 90–97% for three different scenes. We assess the transferability of this model by training on one scene and testing on another with significantly different illumination conditions and/or different content. This results in a significant (∼45%) decrease in the model's precision and recall, as does training on all scenes at once and testing on the individual scenes. These results suggest that while CNNs are powerful tools for pixel-level classification of very high-resolution spectral data of urban environments, retraining between scenes may be necessary. Furthermore, we test the dependence of the model on several instrument- and data-specific parameters, including reduced spectral resolution (down to 15 spectral channels) and the number of available training instances. The results are strongly class-dependent; however, we find that the classification of natural materials is particularly robust, especially the Vegetation class, with a precision and recall >94% for all scenes and model transfers and >90% with only a single training instance.
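The abstract's approach—classifying each pixel from its ∼850-channel spectrum with a one-dimensional CNN—can be illustrated with a minimal forward-pass sketch. The architecture details below (filter count, kernel size, global average pooling) are illustrative assumptions and are not taken from the paper; the 9-way output corresponds to the class list given in the abstract.

```python
import numpy as np

def conv1d_relu(x, kernels, bias):
    """Valid 1D convolution of one spectrum with a bank of filters, then ReLU.
    x: (n_channels,) spectrum; kernels: (n_filters, k); bias: (n_filters,)."""
    n_filters, k = kernels.shape
    out = np.empty((n_filters, x.size - k + 1))
    for f in range(n_filters):
        # reverse the kernel so np.convolve performs cross-correlation
        out[f] = np.convolve(x, kernels[f][::-1], mode="valid") + bias[f]
    return np.maximum(out, 0.0)

def classify_spectrum(x, kernels, bias, W, b):
    """Conv features -> global average pooling -> linear layer -> softmax."""
    feats = conv1d_relu(x, kernels, bias).mean(axis=1)  # (n_filters,)
    logits = feats @ W + b                              # (n_classes,)
    e = np.exp(logits - logits.max())                   # stable softmax
    return e / e.sum()

# One pixel's spectrum and randomly initialized (untrained) weights.
rng = np.random.default_rng(0)
n_channels, n_filters, k, n_classes = 848, 8, 11, 9  # 9 classes per the abstract
x = rng.random(n_channels)
kernels = rng.normal(0.0, 0.1, (n_filters, k))
bias = np.zeros(n_filters)
W = rng.normal(0.0, 0.1, (n_filters, n_classes))
b = np.zeros(n_classes)

probs = classify_spectrum(x, kernels, bias, W, b)
print(probs.shape, probs.sum())  # a probability vector over the 9 classes
```

In practice such a model would be trained with cross-entropy loss on the hand-labeled pixels; this sketch only shows how a single spectrum flows through a 1D convolutional classifier.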
Keywords: hyperspectral; image segmentation; convolutional neural networks; urban science
MDPI and ACS Style

Qamar, F.; Dobler, G. Pixel-Wise Classification of High-Resolution Ground-Based Urban Hyperspectral Images with Convolutional Neural Networks. Remote Sens. 2020, 12, 2540.
