Open Access Article
Remote Sens. 2016, 8(3), 231; doi:10.3390/rs8030231

Identification of Structurally Damaged Areas in Airborne Oblique Images Using a Visual-Bag-of-Words Approach

Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, Enschede 7500 AE, The Netherlands
*
Author to whom correspondence should be addressed.
Academic Editors: Zhenhong Li, Roberto Tomas and Prasad S. Thenkabail
Received: 22 December 2015 / Revised: 19 February 2016 / Accepted: 4 March 2016 / Published: 11 March 2016
(This article belongs to the Special Issue Earth Observations for Geohazards)

Abstract

Automatic post-disaster mapping of building damage using remote sensing images is an important and time-critical element of disaster management. The characteristics of remote sensing images available immediately after a disaster are uncertain, since they may vary in capturing platform, sensor view, image scale, and scene complexity. Therefore, a generalized damage-detection method that is insensitive to these image characteristics is desirable. This study aims to develop a method for grid-level damage classification of remote sensing images by detecting the damage corresponding to debris, rubble piles, and heavy spalling within a defined grid, regardless of the aforementioned image characteristics. The Visual-Bag-of-Words (BoW) is one of the most widely used and proven frameworks for image classification in the field of computer vision. The framework adopts a feature representation strategy that has been shown to be more effective for image classification—regardless of scale and clutter—than conventional global feature representations. In this study, supervised models using various radiometric descriptors (histogram of gradient orientations (HoG) and Gabor wavelets) and classifiers (SVM, Random Forests, and AdaBoost) were developed for damage classification based on both BoW and conventional global feature representations, and tested with four datasets that vary in the aforementioned image characteristics. The BoW framework outperformed conventional global feature representation approaches in all scenarios (i.e., for all combinations of feature descriptors, classifiers, and datasets), and produced an average accuracy of approximately 90%. Particularly encouraging was an accuracy improvement of 14 percentage points (from 77% to 91%) produced by BoW over the global representation for the most complex dataset, which was used to test the generalization capability.
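The abstract's pipeline—local radiometric descriptors, a k-means visual codebook, per-tile word histograms, and a supervised classifier—can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses a simplified HoG-like descriptor computed with NumPy, synthetic "damaged" (cluttered) versus "intact" (smooth) tiles in place of real aerial imagery, and scikit-learn's KMeans and LinearSVC; the patch size, codebook size, and toy data are all assumptions made for the example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def hog_like_descriptors(image, patch=8, bins=8):
    """Magnitude-weighted gradient-orientation histograms over
    non-overlapping patches (a simplified, HoG-like local descriptor)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx) % np.pi  # unsigned orientation in [0, pi)
    descs = []
    h, w = image.shape
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            m = mag[i:i + patch, j:j + patch].ravel()
            o = ori[i:i + patch, j:j + patch].ravel()
            hist, _ = np.histogram(o, bins=bins, range=(0, np.pi), weights=m)
            descs.append(hist / (hist.sum() + 1e-9))
    return np.array(descs)

# Toy stand-in data: "damaged" tiles are cluttered noise (debris-like
# texture), "intact" tiles are smooth intensity ramps.
def make_tile(damaged):
    if damaged:
        return rng.random((32, 32))
    ramp = np.linspace(0.0, 1.0, 32)
    return np.tile(ramp, (32, 1)) + 0.01 * rng.random((32, 32))

tiles = [make_tile(d) for d in [True] * 20 + [False] * 20]
labels = np.array([1] * 20 + [0] * 20)  # 1 = damaged, 0 = intact

# 1) Build the visual codebook by clustering all local descriptors.
all_desc = np.vstack([hog_like_descriptors(t) for t in tiles])
k = 16
codebook = KMeans(n_clusters=k, n_init=4, random_state=0).fit(all_desc)

# 2) Encode each tile as a normalized histogram of visual-word counts.
def bow_vector(tile):
    words = codebook.predict(hog_like_descriptors(tile))
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

X = np.array([bow_vector(t) for t in tiles])

# 3) Train a linear SVM on the BoW vectors (the paper also evaluates
# Random Forests and AdaBoost in the same role).
clf = LinearSVC().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

The key property the paper exploits is visible even in this sketch: the BoW vector is a fixed-length, orderless summary of local texture, so tiles of different scale and clutter map into the same feature space.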
Keywords: damage detection; feature representation; oblique airborne images; supervised learning; texture; UAV; Visual-Bag-of-Words
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite This Article

MDPI and ACS Style

Vetrivel, A.; Gerke, M.; Kerle, N.; Vosselman, G. Identification of Structurally Damaged Areas in Airborne Oblique Images Using a Visual-Bag-of-Words Approach. Remote Sens. 2016, 8, 231.
