Open Access Article
Remote Sens. 2017, 9(8), 803; https://doi.org/10.3390/rs9080803

Effect of Label Noise on the Machine-Learned Classification of Earthquake Damage

1. Department of Computer Science, Cornell University, 402 Gates Hall, Ithaca, NY 14850, USA
2. Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109, USA
3. Michigan Technological University, 1400 Townsend Drive, Houghton, MI 49931, USA
* Author to whom correspondence should be addressed.
Academic Editors: Fabian Löw, Siquan Yang, Günter Strunz, Zhenhong Li, Joachim Post, Juan Carlos Villagrán de León, Shunichi Koshimura, Roberto Tomás, Peter Spruyt, Michael Judex, Farid Melgani and Prasad S. Thenkabail
Received: 30 May 2017 / Revised: 17 July 2017 / Accepted: 28 July 2017 / Published: 4 August 2017

Abstract

Automated classification of earthquake damage in remotely-sensed imagery using machine learning techniques depends on training data, or data examples that are labeled correctly by a human expert as containing damage or not. Mislabeled training data are a major source of classifier error due to the use of imprecise digital labeling tools and crowdsourced volunteers who are not adequately trained on or invested in the task. The spatial nature of remote sensing classification leads to the consistent mislabeling of classes that occur in close proximity to rubble, which is a major byproduct of earthquake damage in urban areas. In this study, we examine how mislabeled training data, or label noise, affect the quality of rubble classifiers operating on high-resolution remotely-sensed images. We first study how label noise dependent on geospatial proximity, or geospatial label noise, compares to standard random noise. Our study shows that classifiers that are robust to random noise are more susceptible to geospatial label noise. We then compare the effects of label noise on both pixel- and object-based remote sensing classification paradigms. While object-based classifiers are known to outperform their pixel-based counterparts, this study demonstrates that they are more susceptible to geospatial label noise. We also introduce a new labeling tool to enhance precision and image coverage. This work has important implications for the Sendai Framework, as autonomous damage classification will ensure rapid disaster assessment and contribute to the minimization of disaster risk.
Keywords: machine learning; classification; crowdsourcing; earthquake damage; damage detection; GEOBIA; mislabeled training data
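To make the distinction between the two noise models concrete, here is a minimal sketch (not taken from the paper; all function names, parameters, and the toy grid are illustrative assumptions) that injects random label noise and proximity-dependent, geospatial label noise into a toy grid of binary rubble/background labels:

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

# Toy grid of binary labels: 1 = rubble, 0 = background (purely illustrative).
rng = np.random.default_rng(0)
labels = (rng.random((100, 100)) < 0.2).astype(int)

def add_random_label_noise(y, rate, rng=rng):
    """Flip a fraction `rate` of labels chosen uniformly at random."""
    noisy = y.copy()
    flat = noisy.ravel()
    idx = rng.choice(flat.size, size=int(rate * flat.size), replace=False)
    flat[idx] = 1 - flat[idx]
    return noisy

def add_geospatial_label_noise(y, rate, radius=2, rng=rng):
    """Flip labels only near class boundaries, mimicking annotators who
    mislabel pixels in close proximity to rubble."""
    noisy = y.copy()
    size = 2 * radius + 1
    # A pixel is "near a boundary" if its neighborhood contains both classes.
    near_boundary = maximum_filter(y, size=size) != minimum_filter(y, size=size)
    candidates = np.flatnonzero(near_boundary)
    n_flip = min(int(rate * y.size), candidates.size)
    idx = rng.choice(candidates, size=n_flip, replace=False)
    noisy.ravel()[idx] = 1 - noisy.ravel()[idx]
    return noisy

# Same overall noise rate, very different spatial structure.
random_noisy = add_random_label_noise(labels, rate=0.10)
geospatial_noisy = add_geospatial_label_noise(labels, rate=0.10)
```

At equal noise rates, the random variant scatters errors uniformly, while the geospatial variant concentrates them along rubble boundaries, which is the structured mislabeling the study argues classifiers are more susceptible to.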
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Share & Cite This Article

MDPI and ACS Style

Frank, J.; Rebbapragada, U.; Bialas, J.; Oommen, T.; Havens, T.C. Effect of Label Noise on the Machine-Learned Classification of Earthquake Damage. Remote Sens. 2017, 9, 803.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Remote Sens. EISSN 2072-4292, published by MDPI AG, Basel, Switzerland.