Article

A 3D Shape Descriptor Based on Contour Clusters for Damaged Roof Detection Using Airborne LiDAR Point Clouds

1 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, 129 Luoyu Road, Wuhan 430079, China
2 Collaborative Innovation Center for Geospatial Technology, 129 Luoyu Road, Wuhan 430079, China
3 State-Province Joint Engineering Laboratory of Spatial Information Technology for High-Speed Railway Safety, Southwest Jiaotong University, Chengdu 611756, China
4 Faculty of Geosciences and Environmental Engineering, Southwest Jiaotong University, Chengdu 611756, China
* Author to whom correspondence should be addressed.
Remote Sens. 2016, 8(3), 189; https://doi.org/10.3390/rs8030189
Submission received: 29 October 2015 / Revised: 3 February 2016 / Accepted: 18 February 2016 / Published: 26 February 2016
(This article belongs to the Special Issue Earth Observations for Geohazards)

Abstract

The rapid and accurate assessment of building damage states using only post-event remote sensing data is critical when performing loss estimation in earthquake emergency response. Damaged roof detection is one of the most efficient methods of assessing building damage. In particular, airborne LiDAR is often used to detect roofs damaged by earthquakes, especially for certain damage types, due to its ability to rapidly acquire accurate 3D information on individual roofs. Earthquake-induced roof damage is categorized into surface damage and structural damage based on the geometry features of the debris and the roof structure. However, recent studies have mainly focused on surface damage; little research has been conducted on structural damage. This paper presents an original 3D shape descriptor of individual roofs for detecting roofs with surface damage and roofs exhibiting structural damage by identifying the spatial patterns of compact and regular contours for intact roofs and of jagged and irregular contours for damaged roofs. The 3D shape descriptor is extracted from building contours derived from airborne LiDAR point clouds. First, contour clusters are extracted from contours that are generated from a dense DSM of each individual building derived from the point clouds. Second, the shape chaos index of each contour cluster is computed as the information entropy of the contour shape similarities measured between pairs of contours in the cluster. Finally, the 3D shape descriptor is calculated as the weighted sum of the shape chaos indexes of the contour clusters corresponding to an individual roof. Damaged roofs are detected solely using the 3D shape descriptor with the maximum entropy threshold. Experiments using post-event airborne LiDAR point clouds of the 2010 Haiti earthquake suggest that the proposed damaged roof detection technique using the proposed 3D shape descriptor can detect both roofs exhibiting surface damage and roofs exhibiting structural damage with high accuracy.


1. Introduction

Building collapse is one of the primary causes of heavy human casualties in destructive earthquakes [1]. Rapid and reliable damage assessment at the individual building level following earthquakes has become imperative for the optimal utilization of the resources available for rescue [2,3]. Damage to roofs is an important feature for distinguishing extreme damage states, i.e., collapsed buildings, from less damaged or undamaged buildings [4]. Vertical remote sensing, including optical, SAR and LiDAR, therefore represents an efficient tool for rapid damage assessment due to its low cost, high availability, minimal fieldwork requirements, large coverage, digital processing and quantitative results. The effectiveness of remote sensing has also been proven following earthquakes worldwide [5,6,7,8]. Numerous methods have been reported for building damage detection using 2D features, such as gray scale, spectra, texture, edge and morphological features, and amplitude and phase information derived from optical or SAR imagery [8,9,10,11,12,13,14,15,16,17]. However, certain damage types (e.g., pancake collapse) cannot be identified using vertical remote sensing data in the absence of precise height data [18]. Airborne laser scanning systems are particularly suitable for damaged roof detection because precise 3D point clouds can be rapidly obtained at all times and under most weather conditions without entering the quake-stricken area [19,20], and their elevation accuracy is higher than that of point clouds derived from vertical optical or SAR imagery [18,21]. Change detection using both pre- and post-event remote sensing data is a popular method of acquiring building damage information because detailed pre-event data are invaluable in reconnaissance [18,22,23,24]. However, the major limitation of this method is the lack of homogeneous pre-event reference data in many situations [18]. Damage interpretation using only post-event remote sensing data can be applied even in the absence of homogeneous reference data and therefore offers an alternative for rapid damage assessment during earthquake emergency response when pre-event data are limited.
Based on the essential observation that damaged buildings, unlike intact buildings with their organized manmade patterns, usually manifest disturbed spatial or spectral patterns [25], various methods have been developed to infer damage patterns from post-event data. The 2D features, including edge, texture and spectra, have been assessed by numerous studies [17,26,27,28] as important cues for damage detection because damaged regions tend to exhibit disturbed spatial or textural patterns, in contrast to intact buildings [29]. On the other hand, 3D features have been found useful for identifying specific damage types based on geometric reasoning, as highlighted by several studies [30,31,32,33]. Therefore, this paper focuses on severely damaged buildings and proposes a damaged roof detection approach based on 3D roof features extracted from only post-event airborne LiDAR point clouds. We do not provide an exhaustive review of all these methods; instead, Section 1.2 highlights only the 3D-feature-based approaches using post-event airborne point clouds that are directly relevant to our work.

1.1. Damage Types

Prior knowledge about damaged buildings is necessary when performing building damage detection using airborne LiDAR point clouds [34]. A building damage catalog, as shown in Figure 1, including typical damage types of buildings, is used to identify buildings exhibiting different damage types based on geometry features such as reductions in volume and height, changes in the inclination of building surfaces and surface structures as well as the size of debris [35]. However, extracting geometry features such as reductions in volume and height and changes in the inclination of building surfaces using only post-event airborne LiDAR point clouds is difficult. Therefore, a majority of the building damage detection approaches using only post-event airborne LiDAR point clouds utilize geometry features including the surface structure and debris.
This paper categorizes the building damage catalog into two categories based on the geometry features of roofs, including debris and the roof structure. The first category, named surface damages, contains damaged buildings with debris surfaces, such as multilayer collapse (2), top story pancake collapse (4c, 5, 5c), heap of debris (6, 7a, 7c) and heap of debris with planes (3, 7b). The second category, named structural damages, contains damaged buildings with relatively intact surfaces and includes inclined plane (1), middle or lower story pancake collapse (4a, 4b, 5a, 5b) and inclination (9a). Pancake collapse is a damage type of particular concern. The top story pancake collapse types, including damage types 4c, 5 and 5c, can be detected using post-event airborne LiDAR point clouds because the roofs are collapsed or damaged, whereas middle or lower story pancake collapses, including damage types 4a, 4b, 5a and 5b, are difficult to detect because the roofs are nearly intact. However, perfect middle or lower story pancake collapses, in which the building entirely maintains its surface and structure, rarely occur in destructive earthquakes; the majority of buildings exhibiting this type of pancake collapse also show some inclination [36]. Therefore, this form of pancake collapse can be detected through the structural analysis of inclination.

1.2. Building Damage Detection Approaches

The majority of approaches to damage detection using only post-event airborne LiDAR data can be categorized into surface damage detection approaches and structural damage detection approaches, according to the damage categories defined in Section 1.1.
Most surface damage detection methods identify the damaged surfaces of collapsed buildings based on the planarity of the roof surface because airborne LiDAR point clouds are particularly suited to extracting planar roof surfaces [37,38,39]. Rehor et al. [40] produced a 2.5D planar Delaunay-based triangulated irregular network (TIN) from the planar segments and non-segmented points derived from roof points and extracted debris triangles from the TIN as the damaged building parts. Rehor et al. [41] compared the random sample consensus (RANSAC) and region-growing algorithms applied to Digital Surface Models (DSMs) for building damage detection and suggested that the region-growing algorithm is more suitable for this task than the RANSAC algorithm. Labiak et al. [42] presented a line-based slope threshold method for evaluating and identifying the damaged points of each roof based on the idea that points in intact roof planes have constant slopes, whereas points on damaged roof surfaces have varying slopes. Segment-based classification methods have been presented to detect collapsed buildings using only post-event airborne LiDAR point clouds based on the assumption that damaged buildings are represented as many small planar segments or unsegmented points, whereas intact buildings are represented as large planar segments [43,44,45].
Structural damage detection approaches detect damaged buildings via structure analysis based on prior knowledge about intact buildings. Shen et al. [31] extracted the geometric axis line of a flat or symmetric roof and identified inclined roofs based on the assumption that the angle between an inclined roof's geometric axis line and the plumb line is greater than an empirical threshold angle. Gerke and Kerle [32] presented a graph-based approach for structural seismic damage assessment based on oblique airborne images. The structural integrity of a building is inferred from the spatial relations between observable features such as vegetation, façades, intact roofs and destroyed roofs; the relations are represented through a directed graph and are trained by a graph-based learning algorithm. However, this approach is difficult to apply directly to airborne LiDAR point clouds because building façades are difficult to extract from such data. Vetrivel et al. [29] developed a gap-based classification method for building structural damage assessment using post-event image-based 3D points. In this method, the 3D points of building elements are voxelized based on a pre-defined voxel size; gaps are then identified as voxels that are visible from a sufficient number of cameras but are not occupied by 3D points. Finally, using radiometric features, the gaps caused by damage are detected based on surrounding damage patterns such as spalling or debris. However, it is difficult to extract such evidence from airborne LiDAR point clouds because of their limited radiometric information; as a result, damage-related gaps cannot be reliably classified using airborne LiDAR point clouds. Fernandez Galarreta et al. [4] presented a UAV-based method for urban structural damage assessment using object-based image analysis and semantic reasoning. In this method, detailed 3D point clouds were generated from multi-view imagery obtained by unmanned aerial vehicles; the z component of the normal of the local tangent plane of each point was then computed from the covariance matrix of the neighboring points and was used to visually assess the D4–D5 damage elements in terms of the European Macroseismic Scale 1998 (EMS-98). Finally, the D1–D3 damage features were extracted via a more detailed façade and roof analysis using object-based image analysis and semantic reasoning. However, the D4–D5 damages were identified not by automatic analysis but by visual analysis.
The above-mentioned methods using airborne LiDAR point clouds mainly focus on surface damage and pay minimal attention to structural damage. To infer structural damage, the spatial relations between the components of a building must be analyzed [32]. The topological relationships between adjacent planar segments can be described and reconstructed using a roof topology in building modeling [46,47,48]. However, it is challenging to mathematically describe and explain the topological relationships of complex, damaged roofs from a scattered point cloud at a low level, i.e., at the point or segment level, because the topology of randomly and irregularly damaged roofs is uncertain.
The 3D shape of a complex, damaged roof can be quickly represented by contours, which avoids the problem of topological analysis and reconstruction based on planar segments [49]. By analyzing the characteristics of a building's contours, features such as closed and regular shapes, simple topology and density have been leveraged to extract and reconstruct buildings [49,50,51,52]. Our previous studies have shown that it is possible to detect roof damage using a contour-based method [53], in which damaged roofs with confusing contours were detected by a shape similarity analysis algorithm applied to building contours derived from airborne LiDAR point clouds. However, the contour-based damage feature was not explicitly defined, and the algorithm was poorly automated.
Focusing on surface and structural damages on roofs, this paper defines a 3D shape descriptor based on shape analysis of the contour clusters of buildings for detecting severely damaged buildings from post-event airborne LiDAR point clouds. The contribution of the paper lies in the presentation of a 3D shape descriptor that provides a comprehensive description of both surface and structure features of roofs based on the shapes and spatial relations of building contours. Compared to other 3D features of roofs, the 3D shape descriptor can more reliably and completely detect damaged roofs.
The remainder of this paper is organized as follows: Section 2 details the key procedures of damaged roof detection, including the data preprocessing, the definition and calculation algorithm of the proposed 3D shape descriptor, and damaged roof detection based on the 3D shape descriptor using a maximum entropy threshold. Section 3 introduces the study area and the data source. Section 4 presents experimental results and discussion about damaged roof detection on the airborne LiDAR dataset of the Haiti earthquake in 2010. The selection of values for algorithmic parameters is also discussed in Section 4. The conclusions are presented in Section 5.

2. Methodology

This study proposes a method of damaged roof detection for both roofs suffering surface damage and roofs suffering structural damage based on a 3D shape descriptor using post-earthquake airborne LiDAR point clouds. The 3D shape descriptor is used to quantitatively describe both surface and structure features of roofs based on the shapes and spatial relations of a building’s contours. The descriptor is a more comprehensive description of 3D shapes of damaged roofs compared to other 3D features. Therefore, damaged roof detection based on the 3D shape descriptor can be more reliably and effectively performed.
There are three key procedures that constitute this method: data preprocessing, feature extraction and damaged roof detection. In data preprocessing, the DSM of each individual building is extracted from post-earthquake airborne LiDAR point clouds with guidance from 2D GIS vector data of building footprints and a digital elevation model (DEM). In feature extraction, the 3D shape descriptor of each building is calculated based on contour clusters generated from the DSM of each individual building. In damaged roof detection, damaged roofs are detected based on the 3D shape descriptor. Figure 2 illustrates the detailed procedures.

2.1. Data Preprocessing

With the guidance of the 2D GIS vector data on building footprints, a dense DSM of each individual building is constructed using the airborne LiDAR point cloud and the DEM. The individual building points are extracted from the point cloud according to the building footprint, as observed in Figure 3b,f. A thin plate spline (TPS) interpolation method is used to interpolate the dense DSM with a grid cell size of λ to remove the impact of noise and to create a continuous and smooth surface [54,55], as shown in Figure 3c,g. The ground points are set to a uniform height acquired from the DEM.
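For illustration, a minimal sketch of this preprocessing step in Python is given below, assuming the building points have already been clipped with the footprint polygon. The use of SciPy's thin-plate radial basis function, the smoothing factor and the clamping of low cells to the DEM ground height are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import Rbf

def building_dsm(points, footprint_bounds, ground_height, cell_size=0.1):
    """Interpolate a dense DSM for one building from its LiDAR points.

    points          : (N, 3) array of x, y, z for the footprint-clipped building
    footprint_bounds: (xmin, ymin, xmax, ymax) of the building footprint
    ground_height   : terrain height from the DEM (assumed constant per building)
    cell_size       : grid cell size lambda in metres (0.1 m in the paper)
    """
    xmin, ymin, xmax, ymax = footprint_bounds
    gx, gy = np.meshgrid(np.arange(xmin, xmax, cell_size),
                         np.arange(ymin, ymax, cell_size))

    # Thin plate spline interpolation smooths noise and yields a continuous surface.
    tps = Rbf(points[:, 0], points[:, 1], points[:, 2],
              function='thin_plate', smooth=0.1)
    dsm = tps(gx, gy)

    # Simplification: cells interpolated below the DEM ground height are
    # clamped to the ground height obtained from the DEM.
    dsm = np.maximum(dsm, ground_height)
    return gx, gy, dsm
```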

2.2. Feature Extraction

The shapes and spatial relations of building contours can represent the roof’s 3D shape feature integrated with both surface and structure features more effectively than can planar segments. A 3D shape descriptor is defined to quantitatively describe the 3D shape feature using the shapes and spatial relations of building contours. Therefore, the 3D shape descriptor can potentially be used to identify surface or structural damages on roofs.

2.2.1. Feature Definition

The 3D shape of a roof's surface and structure can be represented by a contour cluster, which is a set of contours extracted from a contour tree based on a containment relationship. A contour tree is a data structure that describes surface topology and is built upon the containment relationships between contours [56]. A contour cluster is defined as a subtree of the contour tree according to the following three conditions: (1) the subtree's root node has no parent node or has a brother node in the contour tree; (2) the subtree's terminal node has no child node or has at least two child nodes in the contour tree; and (3) every other node of the subtree has exactly one child node in the contour tree. The building's surface regions belonging to the same structure are segmented from the complex building surface by the contour clusters based on a homogeneous spatial topology relationship, and each region is represented by the set of contours in its contour cluster, as shown in Figure 4.
The 3D shape difference between damaged and intact roof surface regions can be reliably distinguished using the full 3D geometric relationships among the contour clusters. One of the essential differences between damaged and intact buildings is that the former usually manifest as disturbed spatial patterns such as irregular surfaces and structures, unlike the organized, manmade patterns of the latter [25], as demonstrated in Figure 5. For an intact roof, the organized manmade pattern of the regular surface and structure is represented by a contour cluster containing similar shape contours. For a damaged roof, the disturbed spatial pattern of the irregular surface and structure is represented by a contour cluster containing chaotic shape contours. Therefore, the shape chaos of a contour cluster can be used to describe the irregular 3D shape of a damaged roof’s surface and structure and thus represents a potential feature for reliably distinguishing between intact roofs and damaged roofs due to surface or structural damages.
According to the concept of information entropy developed by Shannon [57], the shape chaos index is defined to quantitatively describe the shape chaos based on the contour shape similarity measurement. The principle of information entropy is to use uncertainty as a measure to describe the information contained in a source [58]. For example, entropy is used to quantitatively measure the spatial information of a map such as metric information, topological information and thematic information [59]. In this paper, entropy is used to quantitatively measure the shape chaos of the entire contour cluster based on the shape similarities between two contours.
This can be achieved by using the ratio between the number of contours with similar shapes and the total number of contours in the contour cluster as the probability Pi in the definition of the entropy E, as shown in Equation (1). Let N be the total number of contours, clustered into n groups of similar shapes, and let the number of contours in each group be Ni. The probability Pi can then be defined as shown in Equation (2). Specifically, if all contours are sufficiently similar, the shape chaos index will equal zero. If all contours are very different, so that each group contains only one contour, the contour cluster will be clustered into N groups, all probabilities will be 1/N, and the shape chaos index will reach its maximal value of ln(N). Thus, the shape chaos index C is the entropy E normalized by this maximal value, as shown in Equation (3).
E = -\sum_{i=1}^{n} P_i \ln(P_i)    (1)

P_i = \frac{N_i}{N}    (2)

C = \frac{-\sum_{i=1}^{n} P_i \ln(P_i)}{\ln(N)}    (3)
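To make the definition concrete, the following minimal sketch evaluates Equations (1)–(3) for one contour cluster, assuming its contours have already been grouped by shape similarity (the `group_sizes` input is hypothetical; the grouping itself is described in Section 2.2.2):

```python
import numpy as np

def shape_chaos_index(group_sizes):
    """Shape chaos index C of a contour cluster, Equations (1)-(3).

    group_sizes: number of contours N_i in each similar-shape group.
    Returns 0 when all contours fall into one group, and 1 when every
    contour forms its own group (maximal chaos).
    """
    counts = np.asarray(group_sizes, dtype=float)
    total = counts.sum()                      # N, total number of contours
    if total <= 1 or len(counts) == 1:
        return 0.0
    p = counts / total                        # probabilities P_i = N_i / N
    entropy = -np.sum(p * np.log(p))          # Equation (1)
    return entropy / np.log(total)            # normalized by ln(N), Equation (3)
```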
The 3D shape descriptor is the weighted sum of the shape chaos indexes of contour clusters corresponding to a single building, which describes the chaotic 3D shape of the entire building surface. The 3D shape descriptor is defined as shown in Equation (4).
S = \sum_{i=1}^{m} (Q_i \times C_i)    (4)
where m is the number of contour clusters corresponding to a single building, Qi is the weight of each contour cluster, and Ci is the shape chaos index of each contour cluster. In this paper, Qi can be determined using the ratio between the area of the building surface region segmented by the contour cluster and the total area of the building. Let A be the total area, and let Ai be the area of each region. Such a weight can then be defined as shown in Equation (5). Because the shape chaos index and the weight are both normalized, the 3D shape descriptor is also normalized.
Q_i = \frac{A_i}{A}    (5)
The damaged roof can be explicitly described by the 3D shape descriptor based on contour clusters. The 3D shape descriptor measures the chaotic degree of the entire roof surface’s 3D shape based on the shape chaos index. According to the Shannon entropy [57], the greater the difference between the contour shapes within the contour cluster, the larger the shape chaos index of the contour cluster, which means that the 3D shape of the surface region segmented using the contour cluster is more irregular. If the area of the irregular surface region is greater, the 3D shape descriptor will be larger. This means that the probability that the roof was damaged is higher. Accordingly, we assume that damaged roofs can be distinguished from intact roofs by a threshold δ of the 3D shape descriptor. Furthermore, the damaged roof is defined using the 3D shape descriptor, as shown in Equation (6).
D = \{ S \mid S > \delta \}    (6)
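A corresponding sketch of Equations (4)–(6); the per-cluster chaos indexes and region areas are assumed to come from the feature extraction algorithm of Section 2.2.2:

```python
def shape_descriptor(chaos_indexes, region_areas):
    """3D shape descriptor S of one roof, Equations (4)-(5).

    chaos_indexes: shape chaos index C_i of each contour cluster
    region_areas : area A_i of the roof surface region of each cluster
    """
    total_area = sum(region_areas)                      # A
    weights = [a / total_area for a in region_areas]    # Q_i = A_i / A
    return sum(q * c for q, c in zip(weights, chaos_indexes))

def is_damaged(descriptor, delta):
    """Damage decision of Equation (6): a roof is flagged when S > delta."""
    return descriptor > delta
```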

2.2.2. Feature Extraction Algorithm

This paper proposes a feature extraction algorithm for the 3D shape descriptor based on the shape analysis of contour clusters. The algorithm comprises three key procedures: contour cluster extraction, shape chaos index calculation and 3D shape descriptor calculation. Contour clusters are extracted from the contour tree, which is built from the containment relationships between contours generated from the DSM. The shape chaos index of each contour cluster is calculated as the information entropy of shape similarities, which are computed using shape similarity measurements between contours. The 3D shape descriptor is then calculated from the shape chaos indexes. Figure 6 shows the complete workflow of the feature extraction algorithm.
(1) Contour Cluster Extraction
Contour clusters are extracted from the contour tree based on the three conditions discussed in Section 2.2.1. Dense contours with a contour interval of ε are generated from the dense DSM of an individual building using a grid-tracking method that interpolates points at given heights between grid cells and connects them sequentially [60,61], as shown in Figure 3d,h. The contour tree is built from the containment relationships between contours [62,63]. The containment method requires that all contours be closed; unclosed contours are therefore excluded beforehand. In a contour tree, each node represents a different contour, and every node may have a list of descendants: if one contour is contained by another, it is a descendant of that contour. The containment method begins from the root node with the lowest elevation and recursively creates the contour tree in a depth-first manner.
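The following simplified sketch illustrates the containment-based tree construction and the cluster extraction rules; it assumes closed contours given as coordinate rings, sorts candidates by area rather than traversing by elevation, and uses Shapely for the containment test, all of which are implementation choices not specified in the paper.

```python
from shapely.geometry import Polygon

class ContourNode:
    def __init__(self, ring, elevation):
        self.poly = Polygon(ring)        # closed contour as a polygon
        self.elevation = elevation
        self.children = []

def build_contour_tree(nodes):
    """Link each contour to its smallest containing contour (containment method)."""
    nodes = sorted(nodes, key=lambda n: n.poly.area, reverse=True)
    roots = []
    for i, node in enumerate(nodes):
        parent = None
        # the smallest already-placed contour that contains this one is its parent
        for candidate in nodes[:i]:
            if candidate.poly.contains(node.poly):
                if parent is None or candidate.poly.area < parent.poly.area:
                    parent = candidate
        (parent.children if parent else roots).append(node)
    return roots

def extract_clusters(node, cluster=None, clusters=None):
    """Split the tree into contour clusters: a cluster ends where a node
    branches (>= 2 children) or terminates (no children)."""
    if clusters is None:
        clusters = []
    cluster = (cluster or []) + [node]
    if len(node.children) == 1:
        extract_clusters(node.children[0], cluster, clusters)
    else:                                   # terminal node of a cluster
        clusters.append(cluster)
        for child in node.children:         # each branch starts a new cluster
            extract_clusters(child, None, clusters)
    return clusters

# Usage for one building DSM:
#   roots = build_contour_tree(nodes)
#   all_clusters = [c for r in roots for c in extract_clusters(r)]
```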
(2) Shape Chaos Index Calculation
The shape chaos index of a contour cluster quantifies its chaotic spatial pattern as the information entropy of the shape similarities among its contours. The shape similarity measurement is a crucial step in calculating the shape chaos index. The shape similarity is measured as the Euclidean distance between the normalized Fourier descriptors (FDs) of two contours. FDs are typically used in image retrieval and pattern recognition because of their insensitivity to geometric translation, rotation and scaling [64,65,66,67]. We employ the normalized Fourier descriptor (nFD) to measure the approximate shape similarity between two contours based on the assumption that the shape differences among the contours of an intact contour cluster are produced only by geometric translation and scaling, whereas the contours of a damaged contour cluster do not follow this assumption. Thus, the nFD-based shape similarities among the contours of intact clusters remain small, whereas those among the contours of damaged clusters are random. As a result, the shape chaos indexes of contour clusters of intact roof surface regions have small values, whereas those of damaged roof surface regions have large values.
Suppose a contour composed of M points is represented as a discrete complex function; the function can then be transformed into the frequency domain without any loss of information by the Discrete Fourier Transform (DFT), yielding the coefficients a(k) defined in Equation (7). The coefficients a(k) are the FDs that represent the discrete contour in the Fourier domain [64].
a(k) = \frac{1}{M} \sum_{j=0}^{M-1} \left[ x(j) + i\,y(j) \right] e^{-i\,2\pi jk/M}, \quad k = 0, 1, \ldots, M-1    (7)
The Fourier coefficients a(k) are normalized to make them invariant to the translation, rotation, and scaling of contours [67]. The normalized FDs are defined as in Equation (8). Translation invariance can be achieved by omitting the Fourier coefficient a(0) and using the other Fourier coefficients because translation affects only the first Fourier coefficient a(0), and the other FDs retain their values [64]. The FDs are made invariant against rotation by taking the magnitude of each Fourier coefficient, and they are made scale invariant by dividing all Fourier coefficients by the magnitude of a(1) [64,67]. The normalized FD na(k), defined in Equation (8), also omits the second Fourier coefficient because it is a constant value of 1.
na(k) = \frac{|a(k+2)|}{|a(1)|}, \quad k = 0, 1, \ldots, M-3    (8)
The Euclidean distance between the normalized FDs of two contours is computed following Equation (9).
s = \sqrt{ \sum_{i=0}^{L} \left( na_{\alpha}(i) - na_{\beta}(i) \right)^2 }    (9)
where naα and naβ are the normalized FDs of contour α and contour β, respectively, and L is the number of normalized Fourier coefficients. In most cases, the two contours contain different numbers of points, so their numbers of normalized Fourier coefficients differ and the Euclidean distance cannot be directly calculated. To solve this problem, both contours are first resampled to the same number of points before their FDs are computed [67].
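A minimal sketch of the shape similarity measurement of Equations (7)–(9), assuming contours are given as (x, y) vertex arrays; the fixed number of resampling points and the index-based resampling are illustrative simplifications.

```python
import numpy as np

def normalized_fd(contour, n_samples=128):
    """Normalized Fourier descriptors of a closed contour, Equations (7)-(8)."""
    # Resample so that every contour has the same number of points.
    idx = np.linspace(0, len(contour) - 1, n_samples).astype(int)
    pts = np.asarray(contour, dtype=float)[idx]
    z = pts[:, 0] + 1j * pts[:, 1]             # contour as a complex signal
    a = np.fft.fft(z) / n_samples              # Fourier descriptors a(k)
    # Drop a(0) (translation), divide by |a(1)| (scale), take magnitudes (rotation).
    return np.abs(a[2:]) / np.abs(a[1])

def shape_similarity(contour_a, contour_b):
    """Euclidean distance between normalized FDs of two contours, Equation (9)."""
    na, nb = normalized_fd(contour_a), normalized_fd(contour_b)
    return np.sqrt(np.sum((na - nb) ** 2))
```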
The shape chaos index is calculated based on an entropy definition whereby the probability is defined as the ratio between the number of contours with similar shapes and the total number of contours in the contour cluster, as detailed in Section 2.2.1. The first step is to cluster contours into several groups following the clustering rule that the shape similarity between two contours in the group must be less than a threshold ω. Based on the results of contour clustering, the probabilities of the shape distributions are computed, and the shape chaos index is calculated following Equation (3).
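A sketch of this grouping rule, reusing the hypothetical `shape_similarity` and `shape_chaos_index` helpers from the earlier sketches; the greedy assignment of each contour to the first group whose members all satisfy the threshold is a simplification.

```python
def cluster_chaos_index(contours, omega=0.02):
    """Group the contours of one contour cluster by shape similarity and
    return the cluster's shape chaos index (Equation (3))."""
    groups = []                                   # each group keeps its member contours
    for contour in contours:
        for group in groups:
            # the similarity to every member must stay below the threshold omega
            if all(shape_similarity(contour, member) < omega for member in group):
                group.append(contour)
                break
        else:
            groups.append([contour])              # start a new group
    return shape_chaos_index([len(g) for g in groups])
```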
(3) 3D Shape Descriptor Calculation
The 3D shape descriptor calculation depends on the shape chaos indexes of the contour clusters and their area weights. The area weight is defined as the ratio between the area of the roof surface region segmented by a contour cluster and the total area of the roof; estimating the area of each roof surface region is therefore crucial for the weight calculation. The contour cluster itself is used to estimate the area of the corresponding roof surface region. Based on the containment relationship between contour clusters, there are two types of contour clusters. The first type does not contain any other contour cluster, and its area is that of its outermost contour. The second type fully contains other contour clusters; its area is the area of its outermost contour minus the areas of the outermost contours of the clusters it contains. The total area of the roof surface is the sum of the areas of all contour clusters. Finally, the weight is computed following Equation (5).
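A sketch of the area-weight computation; the `outer_polygon` and `contained_clusters` attributes are hypothetical names for the outermost contour polygon of a cluster and the clusters it directly contains.

```python
def cluster_area(cluster):
    """Area of the roof surface region segmented by one contour cluster: the
    area of its outermost contour minus the outermost-contour areas of the
    clusters it contains."""
    area = cluster.outer_polygon.area
    for child in cluster.contained_clusters:
        area -= child.outer_polygon.area
    return max(area, 0.0)

def area_weights(clusters):
    """Weights Q_i of Equation (5): region area over total roof area."""
    areas = [cluster_area(c) for c in clusters]
    total = sum(areas)
    return [a / total for a in areas]
```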

2.3. Damaged Roof Detection

The damaged roof detection problem is transformed into a roof classification problem by applying a maximum entropy threshold to the 3D shape descriptor. The maximum entropy threshold is used to select the optimal threshold δ from a group of buildings that includes both intact and damaged buildings. As an optimality criterion, maximum entropy was initially used for image thresholding by Pun [68] and was later corrected and improved by Kapur [69]. The maximum entropy threshold treats an image histogram as a probability distribution and obtains an optimal threshold value by maximizing the upper bound of the a posteriori entropy [58,70]. This paper adopts Kapur's method based on the Shannon entropy to select an optimal threshold δ that yields the maximum entropy between damaged and intact buildings.
Suppose that all roofs are classified as intact or damaged roofs using a threshold of the 3D shape descriptor, which is denoted as t. Thus, the numbers of intact and damaged roofs are determined by the threshold t and are defined as functions of t, as shown in Equations (10) and (11).
N_I = \mathrm{Intact}(t)    (10)

N_D = \mathrm{Damage}(t)    (11)
where NI is the number of intact roofs and ND is the number of damaged roofs. Intact and damaged roof histograms are built based on the 3D shape descriptor with a bin width φ. Because the 3D shape descriptor is normalized, the number of bins is denoted as B, which is determined using Equation (12). The frequency of each bin of the intact roof histogram is FIi, as given in Equation (13), and the frequency of each bin of the damaged roof histogram is FDi, as given in Equation (14).
B = \frac{1}{\varphi}    (12)

\sum_{i=1}^{B} FI_i = N_I    (13)

\sum_{i=1}^{B} FD_i = N_D    (14)
The entropy of the intact roofs is EI, as given in Equation (15), and the entropy of the damaged roofs is ED, as given in Equation (16). The a posteriori entropy is defined as a function of the threshold t according to Kapur’s method [69], as in Equation (17). The optimal threshold δ is obtained by adjusting the threshold t iteratively to maximize the upper bound of the a posteriori entropy. When the threshold δ is determined, damaged roofs will be identified following Equation (6).
E_I = -\sum_{i=1}^{B} \frac{FI_i}{N_I} \ln\left(\frac{FI_i}{N_I}\right) = -\sum_{i=1}^{B} \frac{FI_i}{\mathrm{Intact}(t)} \ln\left(\frac{FI_i}{\mathrm{Intact}(t)}\right)    (15)

E_D = -\sum_{i=1}^{B} \frac{FD_i}{N_D} \ln\left(\frac{FD_i}{N_D}\right) = -\sum_{i=1}^{B} \frac{FD_i}{\mathrm{Damage}(t)} \ln\left(\frac{FD_i}{\mathrm{Damage}(t)}\right)    (16)

E(t) = -\sum_{i=1}^{B} \frac{FI_i}{\mathrm{Intact}(t)} \ln\left(\frac{FI_i}{\mathrm{Intact}(t)}\right) - \sum_{i=1}^{B} \frac{FD_i}{\mathrm{Damage}(t)} \ln\left(\frac{FD_i}{\mathrm{Damage}(t)}\right)    (17)
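A minimal sketch of this threshold selection, assuming the normalized 3D shape descriptors of all buildings in the group are available as an array; the bin width value is illustrative.

```python
import numpy as np

def max_entropy_threshold(descriptors, bin_width=0.01):
    """Select the descriptor threshold delta that maximizes the a posteriori
    entropy of the intact/damaged split (Kapur's method, Equations (10)-(17))."""
    n_bins = int(round(1.0 / bin_width))                 # B = 1 / phi
    hist, edges = np.histogram(descriptors, bins=n_bins, range=(0.0, 1.0))

    def entropy(counts):
        total = counts.sum()
        if total == 0:
            return 0.0
        p = counts[counts > 0] / total
        return -np.sum(p * np.log(p))

    best_t, best_e = edges[1], -np.inf
    for k in range(1, n_bins):                           # candidate threshold t = edges[k]
        e = entropy(hist[:k]) + entropy(hist[k:])        # E_I + E_D, Equation (17)
        if e > best_e:
            best_t, best_e = edges[k], e
    return best_t
```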

3. Study Area and Data

3.1. Study Area

The study area (Figure 7) is located in the area surrounding Haiti's National Palace in Port-au-Prince, Haiti. The site covers approximately 776,660 square meters of flat terrain with dense buildings. On 12 January 2010, Haiti was struck by an earthquake whose epicenter was approximately 15 miles from the capital, Port-au-Prince. According to official estimates, 316,000 people were killed, 300,000 were injured, 1.3 million were displaced, 97,294 houses were destroyed, and 188,383 houses were damaged in the Port-au-Prince area and in much of southern Haiti [71]. Many buildings in the study area were destroyed by the earthquake.

3.2. Data Source

The airborne LiDAR data were acquired between 21 January and 27 January 2010 and have an average point density of approximately 3.4 points per square meter [72], as shown in Figure 8a. The 2D GIS vector data of building footprints for the area, which contain 1875 buildings, as shown in Figure 8b, were provided by a third party.

4. Results and Discussion

4.1. Damaged Roof Detection

In this paper, the detection of damaged roofs is based on damage interpretation using post-event airborne LiDAR data. The process includes three key procedures: data preprocessing, feature extraction and damaged roof detection. Five algorithm parameters, as shown in Table 1, are estimated experimentally or automatically during the workflow, as discussed in Section 4.3.
During data preprocessing, 1875 individual buildings are extracted from the raw experimental airborne LiDAR data guided by the building footprints, and a dense DSM of each building is generated with a grid cell size of λ from the building points. During feature extraction, contour clusters are first extracted from each individual building's contours, which are generated with the contour interval ε from the dense DSM; two examples of this procedure are shown in Figure 3. Second, the shape chaos index of each contour cluster is calculated using the shape similarity measurement based on the normalized FDs: the contours of each contour cluster are clustered into groups using the shape similarity threshold ω, and the entropy calculated from the probability of each contour group is normalized as the shape chaos index. Third, the 3D shape descriptor of each roof is calculated from the shape chaos indexes and the area weights of the contour clusters. In damaged roof detection, damaged roofs are discriminated from intact roofs using the optimal threshold δ, which is automatically selected using the maximum entropy threshold based on the roof histograms with a bin width of φ, as shown in Figure 9. The results of damaged roof detection are given as building footprints in a polygon shapefile, as shown in Figure 10.

4.2. Accuracy Evaluation

For validation, the results are compared to manually labeled reference data based on remote-sensing-based building damage assessment data on the 2010 Haiti earthquake [73], where buildings were classified using the EMS-98 criteria. In these criteria, safe buildings with intact roofs and damaged walls are given grades of 1–3, and heavily or completely collapsed buildings with damaged roofs are given grades of 4–5 [74]. To evaluate the accuracy of damaged roof detection in this paper, buildings with grades of 1–3 are re-categorized as intact roofs, and buildings with grades of 4–5 are re-categorized as damaged roofs.
The results of the validation were compared to the reference data, and the accuracy indices, including the overall accuracy (OA), Kappa accuracy (KA), completeness (Compl.) rate and correctness (Corr.) rate, are listed in the confusion matrix in Table 2. The results show that the damaged roof detection technique performed well at classifying both intact and damaged roofs, as shown in Figure 10: the detection technique correctly identified 652 out of 767 (85.01%) damaged roofs and 985 out of 1108 (88.90%) intact roofs. The overall accuracy is 87.31%, and the Kappa accuracy is 73.79%.
Some examples of typical buildings, including intact, surface-damaged and structurally damaged roofs, are shown in Figure 11. A surface-damaged roof with debris (Figure 11a) and intact roofs, including a flat (Figure 11b), hipped (Figure 11c), gabled (Figure 11d) and pyramid (Figure 11e) roof, are correctly identified; this result can also be reliably obtained by other surface damage detection methods using features based on points or planar segments. A completely collapsed building with intact walls, as shown in Figure 3e, is also detected as a roof with surface damage. Structurally damaged roofs with large planar segments are correctly identified; identifying such structural damage using planar-segment-based features can be difficult. The inclined roof shown in Figure 11f is likely the result of a middle or lower story pancake collapse. An inclined roof can be easily confused with an intact single sloping roof; therefore, this structural damage is difficult to identify using segment-based detection methods but is easily identified by the method based on the 3D shape descriptor. A top story pancake collapse with a twisted roof, as shown in Figure 11g, is distinctly represented by contour clusters and is correctly identified. Some intact regular arched roofs are also correctly identified, as shown in Figure 11h,i.
Figure 9 presents the distributions of intact and damaged roofs as a histogram of the 3D shape descriptor. The undetected damaged roofs are mainly distributed in the critical region below the threshold, whereas the misclassified intact roofs are widely distributed above the threshold. The main cause of damaged roofs going undetected is that, for some partially damaged roofs, the number and extent of contours affected by the damage are relatively small, which results in small 3D shape descriptor values. For example, a roof may contain a gap caused by damage, as shown in Figure 12a. The gap, which contains points, is represented as a contour cluster with various shapes whose outermost contour has the maximum elevation, and it is easily detected by the shape chaos index based on contour clusters. However, the area of the gap is very small compared to the whole roof; therefore, the 3D shape descriptor is underestimated. There are three main reasons why some intact roofs are misclassified. The first is that the 3D shape characteristics of many intact shanties are similar to those of damaged shanties. The second is that small components or sundries on intact roofs lead to various irregular contours, as shown in Figure 12b. The third is that some intact roofs have regular but dissimilar contours, such as L-shaped roofs, as shown in Figure 12c. The shape transformation among these contours is not a translation or scaling transformation; rather, it is an approximate distance transformation. As a result, the normalized FD, which is invariant under translation and scaling, produces large shape similarity values and hence large 3D shape descriptors. Therefore, the normalized FD is not suitable for measuring the shape similarities among such contours. Some intact roofs are misidentified as damaged because their 3D shape descriptors are overestimated for the above reasons.
In addition, many roofs had already experienced substantial decay prior to the earthquake and were often unfinished [75], thus incorrectly appearing as earthquake-induced damage in post-disaster data. Nevertheless, in the context of earthquake emergency response, undetected damaged roofs are considerably more problematic than misidentified damaged roofs [42].

4.3. Parameter Selection and Sensitivity Analysis

In damaged roof detection based on the proposed 3D shape descriptor, four parameters are experimentally estimated, as shown in Table 1. Several key parameters, such as the grid cell size λ, the contour interval ε and the shape similarity threshold ω for contour clustering, are determined from the characteristics of the airborne LiDAR data. It is generally impossible to associate each point of an unorganized point cloud with a single grid height; consequently, a dense DSM is used to approximate the complex building surface represented by the unorganized point cloud. Considering the size of typical roof structures and the computational efficiency, the grid cell size of each individual building's DSM is set to 0.1 m in the experiments. The quality of contours derived from a DSM is associated with the grid cell size of the DSM and the average slope of the topography [76]. To achieve a comparable quality of contours derived from a DSM, the average horizontal distance between contour lines of the same vertical interval should be approximately equal to the grid cell size. Therefore, we suppose that the optimal average horizontal distance is the grid cell size λ, and the contour interval can be determined using Equation (18).
\varepsilon = \tan(\theta) \times \lambda    (18)
where ε is the contour interval, θ is the average roof slope, and λ is the grid cell size of the DSM. According to general building structure knowledge, the average slope is usually less than 45°; therefore, the optimal contour interval should be no more than the grid cell size of the DSM. To select the key parameters ε and ω, a sensitivity analysis is conducted by varying each parameter within a reasonable range while the other parameter is fixed at the value that leads to optimum accuracy. The overall accuracy (OA), completeness (Compl.) rate and correctness (Corr.) rate are used to evaluate ε and ω.
The quantitative analysis results for the two parameters are shown in Figure 13 and Figure 14, which compare several reasonable values of ε and ω. As the parameter ε increases, the overall accuracy (OA) and the completeness (Compl.) rate first increase and then decrease; the correctness (Corr.) rate follows the opposite trend. When the parameter ε is 0.08 m, which is less than the DSM's grid cell size of 0.1 m, the OA reaches its local maximum. This result verifies that a relatively reliable result can be achieved when the contour interval is no more than the grid cell size of the DSM. The exact roof slope estimated from the point cloud can be used to select an ideal contour interval for each building. The OA, Compl. and Corr. also first increase and then decrease as the parameter ω increases. When the parameter ω is 0.02, the OA reaches a local maximum. It is difficult to analyze the parameter ω quantitatively because it is affected by many factors, such as the roof structure, the quality of the contours and the characteristics of the shape similarity measurement based on the normalized FDs. Training on samples of contour clusters represents a good option for selecting a proper value of ω. Based on the experiments, ε and ω are set to 0.08 m and 0.02, respectively, which produce the optimum accuracy in damaged roof detection, as shown in Table 2.

4.4. Comparison

The main objective of this paper is reliable and complete damaged roof detection using post-earthquake airborne LiDAR point clouds. Relevant methods in the literature usually focus on only one or a few roof damage types. Consequently, in this section, the damage types covered and the accuracy of the proposed method are compared to those of other damaged roof detection methods that use 3D geometric features extracted from post-earthquake data.
Buildings in a severely affected earthquake zone are usually characterized by various damage types. In this situation, a method that can detect more building damage types has wider applicability, especially for earthquake emergency response. According to the damage types described in Section 1.1, the roof damage types that can be detected by building damage detection methods using 3D geometric features are listed in Table 3.
Many methods [40,41,42,44] extract debris for damaged roof detection based on the planarity of the roof. These methods are suitable for detecting surface-damaged roofs but have difficulty detecting structurally damaged roofs because a structurally damaged roof produces minimal debris. Vetrivel et al. [29] focused only on the gaps caused by damage and could not detect these building damage types. Gerke and Kerle [32] detected surface-damaged and inclined buildings based on the spatial relations between observable features such as façades, roofs and rubble piles; however, their method cannot detect inclined planes (1) or middle and lower story pancake collapse (4a, 4b, 5a, 5b) because the façades remain intact and no rubble piles exist for these damage types. Shen et al. [31] were able to detect structurally damaged roofs using the geometric axis lines of the roofs; however, inclined flat roofs were difficult to detect because this damage type is easily confused with an intact single sloping roof. Fernandez Galarreta et al. [4] were able to detect both surface- and structurally damaged roofs, but only via visual analysis.
A fair comparison between the precision of these methods depends on the type and accuracy of the remote sensing datasets, the number of damage classes, and the type and accuracy of the reference data. The methods in [42,44] and the proposed method detected intact and damaged roofs from different study areas using airborne LiDAR point clouds of the Haiti earthquake, and the results were evaluated using a damage assessment performed by the Global Earth Observation—Catastrophe Assessment Network (GEO-CAN). Labiak et al. [42] obtained an overall accuracy of 73.40% and a Kappa accuracy of 27.51%. Oude Elberink et al. [44] achieved an overall accuracy of 60%. In contrast, the proposed method achieved a high overall accuracy and Kappa accuracy of 87.31% and 73.79%, respectively.
The comparison suggests that the proposed method based on the 3D shape descriptor automatically detects both surface- and structurally damaged roofs and achieves a higher accuracy in damaged roof detection using the 3D geometric features extracted from post-earthquake airborne LiDAR point clouds. This is because the 3D shape descriptor provides a comprehensive description of both surface and structure features based on the shapes and spatial relations of building contours, thus significantly improving the completeness and accuracy of damaged roof detection. A major drawback of the proposed method is that it is difficult to automatically identify specific damage types using the 3D shape descriptor. However, the first priority is to reliably and completely detect damaged roofs in earthquake emergency response.

5. Conclusions

This paper focuses on both surface-damaged and structurally damaged roof detection using post-event data and proposes a 3D shape descriptor based on building contour clusters derived from airborne LiDAR point clouds for practical application.
The novelty of the 3D shape descriptor is that the 3D shapes of complex roof surfaces are quantitatively described through the spatial patterns of contours. The significant 3D features of surface and structural damages are characterized as a chaotic spatial pattern of contours and are represented as a group of contours with chaotic shapes. The shape chaos index of a contour cluster quantifies this chaotic spatial pattern as the information entropy of the shape similarities among contours, integrating the spatial attributes and spatial relationships of complex roof surfaces at the regional level. The 3D shape descriptor quantitatively describes the chaotic 3D shape characteristics of surface and structural damages at the whole-roof level by combining the shape chaos indexes using the area weights.
A significant performance improvement is achieved through the use of the novel 3D shape descriptor based on contour clusters compared to other geometric features directly extracted from point clouds. The experiments on the airborne LiDAR point cloud of the Haiti earthquake produce good damaged roof detection results, classifying roofs solely using the 3D shape descriptor with the maximum entropy threshold. The overall accuracy and Kappa accuracy are 87.31% and 73.79%, respectively. This damaged roof detection approach is quite valuable for rescue efforts in earthquake-struck areas, especially when pre-event 3D data are difficult to obtain.
Future work will be devoted to identifying specific damage types of damaged roofs detected using the proposed 3D feature, as well as to improving the shape analysis between contours to increase the robustness of the 3D shape descriptor.

Acknowledgments

This work was supported by the National High Resolution Earth Observation System (the Civil Part) Technology Projects of China, the National Natural Science Foundation of China (No. 41571390, 41571392, 41471320) and the Open Research Fund of State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (No. 15I01). The Haiti LiDAR data were supplied through an RIT partnership with ImageCat Inc. and Kucera International, which was sponsored by the World Bank. ImageCat also provided the GEO-CAN damage assessment validation data. The imagery of the study area was obtained from Google Earth. We would like to thank the anonymous reviewers for their constructive comments, which greatly improved the quality of our manuscript.

Author Contributions

Meizhang He designed the study, performed the experiments, analyzed the data, and wrote the manuscript. Qing Zhu and Zhiqiang Du supervised Meizhang He. Meizhang He, Qing Zhu, Zhiqiang Du, Han Hu, Yulin Ding and Min Chen reviewed the paper for organization, clarification and English corrections.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, and in the decision to publish the results.

References

  1. Lu, H.; Kohiyama, M.; Horie, K.; Maki, N.; Hayashi, H.; Tanaka, S. Building damage and casualties after an earthquake. Nat. Hazards 2003, 29, 387–403. [Google Scholar]
  2. Ghosh, S.; Huyck, C.K.; Greene, M.; Gill, S.P.; Bevington, J.; Svekla, W.; DesRoches, R.; Eguchi, R.T. Crowdsourcing for rapid damage assessment: The Global Earth Observation Catastrophe Assessment Network (Geo-Can). Earthq. Spectra 2011, 27, S179–S198. [Google Scholar] [CrossRef]
  3. Erdik, M.; Şeşetyan, K.; Demircioğlu, M.B.; Hancılar, U.; Zülfikar, C. Rapid earthquake loss assessment after damaging earthquakes. Soil Dyn. Earthq. Eng. 2011, 31, 247–266. [Google Scholar] [CrossRef]
  4. Fernandez Galarreta, J.; Kerle, N.; Gerke, M. UVA-based urban structural damage assessment using object-based image analysis and semantic reasoning. Nat. Hazard. Earth Syst. 2015, 15, 1087–1101. [Google Scholar] [CrossRef]
  5. Ehrlich, D.; Guo, H.D.; Molch, K.; Ma, J.W.; Pesaresi, M. Identifying damage caused by the 2008 Wenchuan earthquake from VHR remote sensing data. Int. J. Digit. Earth 2009, 2, 309–326. [Google Scholar]
  6. Corbane, C.; Saito, K.; Dell’Oro, L.; Bjorgo, E.; Gill, S.P.D.; Piard, E.B.; Huyck, C.K.; Kemper, T.; Lemoine, G.; Spence, R.J.S.; et al. A comprehensive analysis of building damage in the 12 January 2010 Mw7 Haiti earthquake using high-resolution satellite and aerial imagery. Photogramm. Eng. Remote Sens. 2011, 77, 997–1009. [Google Scholar] [CrossRef]
  7. Dong, P.; Guo, H. A framework for automated assessment of post-earthquake building damage using geospatial data. Int. J. Remote Sens. 2012, 33, 81–100. [Google Scholar] [CrossRef]
  8. Turker, M.; Sumer, E. Building-based damage detection due to earthquake using the watershed segmentation of the post-event aerial images. Int. J. Remote Sens. 2008, 29, 3073–3089. [Google Scholar] [CrossRef]
  9. Matsuoka, M.; Yamazaki, F. Use of satellite SAR intensity imagery for detecting building areas damaged due to earthquakes. Earthq. Spectra 2004, 20, 975–994. [Google Scholar] [CrossRef]
  10. Vu, T.T.; Matsuoka, M.; Yamazaki, F. Detection and animation of damage using very high-resolution satellite data following the 2003 Bam, Iran, earthquake. Earthq. Spectra 2005, 21, 319–327. [Google Scholar] [CrossRef]
  11. Rathje, E.M.; Kyu-Seok, W.; Crawford, M.; Neuenschwander, A. Earthquake damage identification using multi-temporal high-resolution optical satellite imagery. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Seoul, Korea, 25–29 July 2005; pp. 5045–5048.
  12. Hoffmann, J. Mapping damage during the Bam (Iran) earthquake using interferometric coherence. Int. J. Remote Sens. 2007, 28, 1199–1216. [Google Scholar] [CrossRef]
  13. Chini, M.; Bignami, C.; Stramondo, S.; Pierdicca, N. Uplift and subsidence due to the 26 December 2004 Indonesian earthquake detected by SAR data. Int. J. Remote Sens. 2008, 29, 3891–3910. [Google Scholar] [CrossRef]
  14. Guo, H.; Lu, L.; Ma, J.; Pesaresi, M.; Yuan, F. An improved automatic detection method for earthquake-collapsed buildings from ADS40 image. Chin. Sci. Bull. 2009, 54, 3303–3307. [Google Scholar] [CrossRef]
  15. Chini, M.; Cinti, F.R.; Stramondo, S. Co-seismic surface effects from very high resolution panchromatic images: The case of the 2005 Kashmir (Pakistan) earthquake. Nat. Hazard. Earth Syst. 2011, 11, 931–943. [Google Scholar] [CrossRef] [Green Version]
  16. Li, X.; Yang, W.; Ao, T.; Li, H.; Chen, W. An improved approach of information extraction for earthquake-damaged buildings using high-resolution imagery. J. Earthq. Tsunami 2011, 5, 389–399. [Google Scholar] [CrossRef]
  17. Ma, J.; Qin, S. Automatic depicting algorithm of earthquake collapsed buildings with airborne high resolution image. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Munich, Germany, 22–27 July 2012; IEEE: Munich, Germany, 2012; pp. 939–942. [Google Scholar]
  18. Dong, L.; Shan, J. A comprehensive review of earthquake-induced building damage detection with remote sensing techniques. ISPRS J. Photogramm. Remote Sens. 2013, 84, 85–99. [Google Scholar] [CrossRef]
  19. Wehr, A.; Lohr, U. Airborne laser scanning—An introduction and overview. ISPRS J. Photogramm. Remote Sens. 1999, 54, 68–82. [Google Scholar] [CrossRef]
  20. Schweier, C.; Markus, M.; Steinle, E.; Weidner, U. Casualty scenarios based on laser scanning data. In Proceedings of the 250th Anniversary of 1755 Lisbon Earthquake, Lisbon, Portugal, 1–4 November 2005.
  21. Stilla, U.; Soergel, U.; Thoennessen, U. Potential and limits of InSAR data for building reconstruction in built-up areas. ISPRS J. Photogramm. Remote Sens. 2003, 58, 113–123. [Google Scholar] [CrossRef]
  22. Plank, S. Rapid damage assessment by means of multi-temporal SAR—A comprehensive review and outlook to Sentinel-1. Remote Sens. 2014, 6, 4870–4906. [Google Scholar] [CrossRef]
  23. Dell’Acqua, F.; Gamba, P. Remote sensing and earthquake damage assessment: Experiences, limits, and perspectives. Proc. IEEE 2012, 100, 2876–2890. [Google Scholar] [CrossRef]
  24. Awrangjeb, M. Effective generation and update of a building map database through automatic building change detection from LiDAR point cloud data. Remote Sens. 2015, 7, 14119–14150. [Google Scholar] [CrossRef]
  25. Olsen, M.J.; Chen, Z.; Hutchinson, T.; Kuester, F. Optical techniques for multiscale damage assessment. Geomat. Nat. Hazards Risk 2013, 4, 49–70. [Google Scholar] [CrossRef]
  26. Yamazaki, F.; Matsuoka, M. Remote sensing technologies in post-disaster damage assessment. J. Earthq. Tsunami 2007, 1, 193–210. [Google Scholar] [CrossRef]
  27. Li, L.; Zhang, B.; Wu, Y. Fusing spectral and texture information for collapsed buildings detection in airborne image. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Munich, Germany, 22–27 July 2012; pp. 186–189.
  28. Radhika, S.; Tamura, Y.; Matsui, M. Use of post-storm images for automated tornado-borne debris path identification using texture-wavelet analysis. J. Wind Eng. Ind. Aerodyn. 2012, 107–108, 202–213. [Google Scholar] [CrossRef]
  29. Vetrivel, A.; Gerke, M.; Kerle, N.; Vosselman, G. Identification of damage in buildings based on gaps in 3D point clouds from very high resolution oblique airborne images. ISPRS J. Photogramm. Remote Sens. 2015, 105, 61–78. [Google Scholar] [CrossRef]
  30. Rehor, M.; Bähr, H.; Tarsha-Kurdi, F.; Landes, T.; Grussenmeyer, P. Improvement of building damage detection and classification based on laser scanning data by integrating spectral information. Int. Arch. Photogramm. Spat. Inf. Sci. 2008, 37, 1599–1605. [Google Scholar]
  31. Shen, Y.; Wu, L.; Wang, Z. Identification of inclined buildings from aerial LiDAR data for disaster management. In Proceedings of the 2010 18th International Conference on Geoinformatics, Beijing, China, 18–20 June 2010; Institute of Electrical and Electronics Engineers: Beijing, China, 2010; pp. 1–5. [Google Scholar]
  32. Gerke, M.; Kerle, N. Graph matching in 3D space for structural seismic damage assessment. In Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Barcelona, Spain, 6–13 November 2011; Institute of Electrical and Electronics Engineers: Barcelona, Spain, 2011; pp. 204–211. [Google Scholar]
33. Gerke, M.; Kerle, N. Automatic structural seismic damage assessment with airborne oblique Pictometry imagery. Photogramm. Eng. Remote Sens. 2011, 77, 885–898. [Google Scholar] [CrossRef]
  34. Schweier, C.; Markus, M. Classification of collapsed buildings for fast damage and loss assessment. Bull. Earthq. Eng. 2006, 4, 177–192. [Google Scholar] [CrossRef]
  35. Schweier, C.; Markus, M. Assessment of the search and rescue demand for individual buildings. In Proceedings of the 13th World Conference on Earthquake Engineering, Vancouver, BC, Canada, 1–6 August 2004.
  36. Rehor, M. Classification of building damage based on laser scanning data. Int. Arch. Photogramm. Spat. Inf. Sci. 2007, 20, 54–63. [Google Scholar]
  37. Khoshelham, K.; Nardinocchi, C.; Frontoni, E.; Mancini, A.; Zingaretti, P. Performance evaluation of automated approaches to building detection in multi-source aerial data. ISPRS J. Photogramm. Remote Sens. 2010, 65, 123–133. [Google Scholar] [CrossRef]
  38. Van der Sande, C.; Soudarissanane, S.; Khoshelham, K. Assessment of relative accuracy of AHN-2 laser scanning data using planar features. Sensors 2010, 10, 8198–8214. [Google Scholar] [CrossRef] [PubMed]
  39. Vosselman, G.; Gorte, B.G.; Sithole, G.; Rabbani, T. Recognising structure in laser scanner point clouds. Int. Arch. Photogramm. Spat. Inf. Sci. 2004, 46, 33–38. [Google Scholar]
  40. Rehor, M.; Bähr, H.P. Segmentation of damaged buildings from laser scanning data. Int. Arch. Photogramm. Spat. Inf. Sci. 2006, 36, 67–72. [Google Scholar]
  41. Rehor, M.; Bähr, H.; Tarsha-Kurdi, F.; Landes, T.; Grussenmeyer, P. Contribution of two plane detection algorithms to recognition of intact and damaged buildings in LiDAR data. Photogramm. Rec. 2008, 23, 441–456. [Google Scholar] [CrossRef]
42. Labiak, R.C.; Van Aardt, J.A.; Bespalov, D.; Eychner, D.; Wirch, E.; Bischof, H.-P. Automated method for detection and quantification of building damage and debris using post-disaster LiDAR data. Proc. SPIE 2011, 8037. [Google Scholar] [CrossRef]
43. Khoshelham, K.; Oude Elberink, S.; Xu, S. Segment-based classification of damaged building roofs in aerial laser scanning data. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1258–1262. [Google Scholar] [CrossRef]
  44. Oude Elberink, S.; Shoko, M.A.; Fathi, S.A.; Rutzinger, M. Detection of collapsed buildings by classifying segmented airborne laser scanner data. Int. Arch. Photogramm. Spat. Inf. Sci. 2012, 38, 307–312. [Google Scholar] [CrossRef]
  45. Khoshelham, K.; Oude Elberink, S. Role of dimensionality reduction in segment-based classification of damaged building roofs in airborne laser scanning data. In Proceedings of the International Conference on Geographic Object Based Image Analysis, Rio de Janeiro, Brazil, 7–9 May 2012; pp. 372–377.
  46. Engels, J.; Arefi, H.; Hahn, M. Generation of roof topologies using plane fitting with RANSAC. ISPRS J. Photogramm. Remote Sens. 2008, 37, 119–126. [Google Scholar]
  47. Oude Elberink, S.; Vosselman, G. Quality analysis on 3D building models reconstructed from airborne laser scanning data. ISPRS J. Photogramm. Remote Sens. 2011, 66, 157–165. [Google Scholar] [CrossRef]
  48. Li, Y.; Ma, H.; Wu, J. Planar segmentation and topological reconstruction for urban buildings with LiDAR point clouds. Proc. SPIE 2011, 8286. [Google Scholar] [CrossRef]
  49. Song, J.; Wu, J.; Jiang, Y. Extraction and reconstruction of curved surface buildings by contour clustering using airborne LiDAR data. Optik 2015, 126, 513–521. [Google Scholar] [CrossRef]
50. Ren, Z.; Cen, M.; Zhang, T. Building extraction from LiDAR data based on shape analysis of contours. J. Southwest Jiaotong Univ. 2009, 44, 83–88. [Google Scholar]
  51. Ren, Z. Building and Road Extraction from Lidar Data Based on Contour Feature Analysis. Ph.D. Thesis, Southwest Jiaotong University, Chengdu, China, 2009. [Google Scholar]
  52. Zhang, J.; Li, L.; Lu, Q.; Jiang, W. Contour clustering analysis for building reconstruction from LiDAR data. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 18 May 2008; pp. 355–360.
  53. He, M.; Zhu, Q.; Du, Z.; Zhang, Y.; Hu, H.; Lin, Y.; Qi, H. Contour cluster shape analysis for building damage detection from post-earthquake airborne LiDAR. Acta Geod. Cartogr. Sin. 2015, 44, 407–413. [Google Scholar]
  54. Mongus, D.; Žalik, B. Parameter-free ground filtering of LiDAR data for automatic DTM generation. ISPRS J. Photogramm. Remote Sens. 2012, 67, 1–12. [Google Scholar] [CrossRef]
  55. Hu, H.; Ding, Y.; Zhu, Q.; Wu, B.; Lin, H.; Du, Z.; Zhang, Y.; Zhang, Y. An adaptive surface filter for airborne laser scanning point clouds by means of regularization and bending energy. ISPRS J. Photogramm. Remote Sens. 2014, 92, 98–111. [Google Scholar] [CrossRef]
  56. Guilbert, E. Multi-level representation of terrain features on a contour map. Geoinformatica 2013, 17, 301–324. [Google Scholar] [CrossRef]
  57. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  58. Chang, C.I.; Du, Y.; Wang, J.; Guo, S.M.; Thouin, P.D. Survey and comparative analysis of entropy and relative entropy thresholding techniques. IEE Proc. Vis. Image Signal Process. 2006, 153, 837–850. [Google Scholar] [CrossRef]
  59. Li, Z.; Huang, P. Quantitative measures for spatial information of maps. Int. J. Geogr. Inf. Sci. 2002, 16, 699–709. [Google Scholar] [CrossRef]
  60. Jones, N.L.; Kennard, M.J.; Zundel, A.K. Fast algorithm for generating sorted contour strings. Comput. Geosci. 2000, 26, 831–837. [Google Scholar] [CrossRef]
  61. Wang, T. An algorithm for extracting contour lines based on interval tree from grid DEM. Geospat. Inf. Sci. 2008, 11, 103–106. [Google Scholar] [CrossRef]
  62. Kweon, I.S.; Kanade, T. Extracting topographic terrain features from elevation maps. CVGIP Image Underst. 1994, 59, 171–182. [Google Scholar] [CrossRef]
  63. Cronin, T. Automated reasoning with contour maps. Comput. Geosci. 1995, 21, 609–618. [Google Scholar] [CrossRef]
  64. Folkers, A.; Samet, H. Content-based image retrieval using Fourier descriptors on a logo database. In Proceedings of the 16th International Conference on Pattern Recognition, Quebec, QC, Canada, 11–15 August 2002; pp. 521–524.
  65. Zhang, D.; Lu, G. Review of shape representation and description techniques. Pattern Recognit. 2004, 37, 1–19. [Google Scholar] [CrossRef]
  66. Wong, W.; Shih, F.Y.; Liu, J. Shape-based image retrieval using support vector machines, Fourier descriptors and self-organizing maps. Inform. Sci. 2007, 177, 1878–1891. [Google Scholar] [CrossRef]
  67. Duan, W.; Kuester, F.; Gaudiot, J.; Hammami, O. Automatic object and image alignment using Fourier descriptors. Image Vis. Comput. 2008, 26, 1196–1206. [Google Scholar] [CrossRef]
  68. Pun, T. A new method for grey-level picture thresholding using the entropy of the histogram. Signal Process. 1980, 2, 223–237. [Google Scholar] [CrossRef]
  69. Kapur, J.N.; Sahoo, P.K.; Wong, A.K.C. A new method for gray-level picture thresholding using the entropy of the histogram. Comput. Vis. Graph. Image Process. 1985, 29, 273–285. [Google Scholar] [CrossRef]
  70. Yimit, A.; Hagihara, Y.; Miyoshi, T.; Hagihara, Y. 2-D direction histogram based entropic thresholding. Neurocomputing 2013, 120, 287–297. [Google Scholar] [CrossRef]
  71. M7.0—Haiti Region. Available online: http://earthquake.usgs.gov/earthquakes/eventpage/usp000h60h#general_summary (accessed on 10 October 2015).
72. World Bank—ImageCat Inc. RIT Haiti Earthquake LiDAR Dataset. Available online: http://opentopo.sdsc.edu/gridsphere/gridsphere?cid=geonlidarframeportlet&gs_action=lidarDataset&opentopoID=OTLAS.072010.32618.1 (accessed on 10 October 2015).
  73. Haiti Earthquake 2010: Remote Sensing Based Building Damage Assessment Data. Available online: http://www.unitar.org/unosat/haiti-earthquake-2010-remote-sensing-based-building-damage-assessment-data (accessed on 10 October 2015).
  74. Bevington, J.; Adams, B.; Eguchi, R. Geo-Can debuts to map Haiti damage. Imaging Notes 2010, 25, 26–30. [Google Scholar]
75. Kerle, N.; Hoffman, R.R. Collaborative damage mapping for emergency response: The role of cognitive systems engineering. Nat. Hazards Earth Syst. Sci. 2013, 13, 97–113. [Google Scholar] [CrossRef]
  76. Ziadat, F.M. Effect of contour intervals and grid cell size on the accuracy of DEMs and slope derivatives. Trans. GIS 2007, 11, 67–81. [Google Scholar] [CrossRef]
Figure 1. Compilation of damage types [35].
Figure 2. Flow chart of damaged roof detection.
Figure 3. The data processing samples: (a) The image of an intact building with an arched roof; (b) The airborne LiDAR point cloud; (c) The dense DSM; (d) Contour clusters colored with different colors; (e) The image of a completely collapsed building with intact walls; (f) The airborne LiDAR point cloud; (g) The dense DSM; (h) Contour clusters colored with different colors.
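For readers who want to reproduce a dense DSM such as the ones shown in Figure 3c,g, the following minimal sketch rasterizes a building point cloud into a regular elevation grid at the cell size λ listed in Table 1. The maximum-elevation gridding and the NumPy-based layout are illustrative assumptions, not the authors' implementation, which may fill empty cells differently.

```python
import numpy as np

def rasterize_dsm(points, cell_size=0.1):
    """Rasterize a building point cloud (N x 3 array of x, y, z) into a
    max-elevation DSM grid; cells without returns are left as NaN."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    x0, y0 = x.min(), y.min()
    cols = int(np.ceil((x.max() - x0) / cell_size)) + 1
    rows = int(np.ceil((y.max() - y0) / cell_size)) + 1
    dsm = np.full((rows, cols), np.nan)
    ci = ((x - x0) / cell_size).astype(int)
    ri = ((y - y0) / cell_size).astype(int)
    for r, c, elev in zip(ri, ci, z):
        if np.isnan(dsm[r, c]) or elev > dsm[r, c]:
            dsm[r, c] = elev  # keep the highest return per cell
    return dsm

# Hypothetical usage with a plain xyz text file of one building's points:
# dsm = rasterize_dsm(np.loadtxt("building.xyz"), cell_size=0.1)
```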
Figure 4. Three building surface regions belonging to different structures are segmented by three contour clusters and shown in different colors: (a) Contours of the building; (b) Contour clusters of the different structures, represented as identically colored nodes in the contour tree.
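One plausible way to derive the contour clusters illustrated in Figure 4 is to organize the contours into a contour tree by spatial containment and to cut the tree at its branching nodes, so that each chain of singly nested contours forms one cluster, in the spirit of the contour clustering in [49,52]. The sketch below assumes contour identifiers ordered by increasing elevation and a user-supplied containment test; it is an interpretation, not the authors' exact rule.

```python
from collections import defaultdict

def build_contour_tree(contours, contains):
    """contours: hashable contour ids ordered from the lowest to the highest level.
    contains(a, b): True if contour a encloses contour b on the next level up.
    Returns parent and children maps describing the contour tree."""
    parent, children = {}, defaultdict(list)
    for low in contours:
        for high in contours:
            if low != high and contains(low, high):
                parent[high] = low
                children[low].append(high)
    return parent, children

def split_into_clusters(contours, parent, children):
    """A cluster is a chain of singly nested contours: start a new cluster at
    every root contour and at every child of a branching node."""
    clusters, cluster_of = [], {}
    for c in contours:                      # bottom-up traversal
        p = parent.get(c)
        if p is None or len(children[p]) > 1:
            clusters.append([c])            # root or child of a branching node
            cluster_of[c] = len(clusters) - 1
        else:
            cluster_of[c] = cluster_of[p]   # continue the parent's chain
            clusters[cluster_of[c]].append(c)
    return clusters
```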
Figure 5. The spatial patterns of complex roof surfaces represented by contour clusters: (a) The complex surface of Haiti's National Palace, which was destroyed in the 2010 Haiti earthquake (obtained from Google Earth); (b) Contour clusters of the roof, rendered in different colors according to surface and structural morphology: contours over surfaces with debris (red), twisted surfaces (purple), and inclined roof structures (blue) are jagged or irregular, whereas contours over intact surfaces and structures (green) are compact and regular.
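The jagged-versus-regular contrast visible in Figure 5 is quantified through a shape similarity measure between contours. The references cited for shape description point to Fourier descriptors [64,65,66,67]; the sketch below shows one common translation-, scale- and rotation-invariant variant based on the magnitudes of the Fourier coefficients of the resampled boundary. The resampling count, number of harmonics, and the distance-to-similarity mapping are assumptions for illustration, not the paper's exact measure.

```python
import numpy as np

def fourier_descriptor(contour, n_samples=64, n_coeffs=16):
    """contour: (N, 2) array of boundary points of a closed contour.
    Returns a Fourier descriptor invariant to translation, scale,
    rotation and starting point (coefficient magnitudes only)."""
    pts = np.asarray(contour, dtype=float)
    closed = np.vstack([pts, pts[:1]])                      # close the polygon
    seg = np.hypot(*np.diff(closed, axis=0).T)              # segment lengths
    d = np.concatenate(([0.0], np.cumsum(seg)))             # arc length at each vertex
    t = np.linspace(0.0, d[-1], n_samples, endpoint=False)  # equal arc-length samples
    z = np.interp(t, d, closed[:, 0]) + 1j * np.interp(t, d, closed[:, 1])
    coeffs = np.fft.fft(z)
    mags = np.abs(coeffs[1:n_coeffs + 1])                   # drop DC term (translation)
    return mags / mags[0]                                   # normalize (scale)

def contour_similarity(c1, c2):
    """Similarity in (0, 1]; identical shapes map to 1."""
    f1, f2 = fourier_descriptor(c1), fourier_descriptor(c2)
    return 1.0 / (1.0 + np.linalg.norm(f1 - f2))
```

With such a measure, consecutive contours of a planar or smoothly curved roof score close to 1, while contours crossing debris or fracture lines score markedly lower.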
Figure 6. The complete workflow of the feature extraction algorithm.
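Figure 6 summarizes the feature extraction step, in which each contour cluster receives a shape chaos index computed as an information entropy over contour shape similarities, and the 3D shape descriptor is formed as a weighted sum over the clusters of one roof. The exact probability model and cluster weights are not reproduced here; the sketch below is one plausible reading in which similarities between consecutive contours are binned into a discrete distribution and each cluster is weighted by its share of the roof's contours.

```python
import numpy as np

def shape_chaos_index(cluster, similarity, bin_width=0.05):
    """Entropy-style chaos measure for one contour cluster (a list of nested
    contours): similarities between consecutive contours are histogrammed and
    the Shannon entropy of that histogram is returned.  Regular clusters
    concentrate in a single bin (entropy near 0); erratically changing
    contours spread over many bins (higher entropy)."""
    if len(cluster) < 2:
        return 0.0
    sims = [similarity(a, b) for a, b in zip(cluster[:-1], cluster[1:])]
    edges = np.arange(0.0, 1.0 + bin_width, bin_width)
    counts, _ = np.histogram(sims, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def roof_shape_descriptor(clusters, similarity):
    """Weighted sum of the chaos indexes of all clusters of one roof; here
    each weight is simply the cluster's share of the roof's contours
    (an illustrative assumption)."""
    total = sum(len(c) for c in clusters)
    return sum(len(c) / total * shape_chaos_index(c, similarity)
               for c in clusters)
```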
Figure 7. The location of the study area.
Figure 8. Data sets of the study area: (a) The airborne LiDAR point cloud; (b) The 2D GIS vector data of building footprints.
Figure 9. The distribution of all intact and damaged roofs based on the 3D shape descriptor, together with the descriptor threshold δ. The damage states of the roofs are taken from the reference data.
Figure 10. The validated results of damaged roof detection: correctly identified intact roofs are colored green, correctly identified damaged roofs are colored red, damaged roofs misidentified as intact are colored yellow, and intact roofs misidentified as damaged are colored blue.
Figure 11. Contour clusters of typical buildings: (a) Completely collapsed building with debris; (b) Intact flat roof; (c) Intact hipped roof; (d) Intact gabled roof; (e) Intact complex building with pyramidal roofs; (f) Inclined roof; (g) Completely collapsed building with a twisted roof; (h) Intact ruled-surface roof; (i) Intact arched roof.
Figure 12. Contour clusters of typical buildings that are difficult to identify using the proposed 3D shape descriptor: (a) Partially damaged roof with gaps; (b) Intact roof exhibiting complex features; (c) Intact L-shaped roof.
Figure 13. Sensitivity test of the contour interval ε.
Figure 14. Sensitivity test of the shape similarity threshold ω.
Table 1. The parameter list for damaged roof detection based on the 3D shape descriptor.
Procedure                 Symbol   Value    Set Mode         Description
Data preprocessing        λ        0.1 m    Experimentally   The grid cell size of the DSM
Feature extraction        ε        0.08 m   Experimentally   The contour interval
Feature extraction        ω        0.02     Experimentally   The shape similarity threshold
Damaged roof detection    ϕ        0.05     Experimentally   The bin width of the histogram
Damaged roof detection    δ        0.4269   Automatically    The threshold of the 3D shape descriptor
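Table 1 reports that the detection threshold δ was set automatically from the histogram of descriptor values (bin width ϕ). The entropy-based thresholding literature cited in the paper [68,69,70] suggests a Kapur-style maximum-entropy split; the sketch below implements that classical rule as an assumption about how such a threshold could be derived, not as the authors' exact procedure.

```python
import numpy as np

def max_entropy_threshold(values, bin_width=0.05):
    """Kapur-style maximum-entropy threshold on a 1-D histogram: pick the bin
    edge that maximizes the summed entropies of the two resulting classes."""
    values = np.asarray(values, dtype=float)
    edges = np.arange(values.min(), values.max() + bin_width, bin_width)
    hist, edges = np.histogram(values, bins=edges)
    p = hist / hist.sum()
    best_t, best_h = edges[1], -np.inf
    for k in range(1, len(p)):                      # candidate split between bins
        w_low, w_high = p[:k].sum(), p[k:].sum()
        if w_low == 0.0 or w_high == 0.0:
            continue
        q_low = p[:k][p[:k] > 0] / w_low
        q_high = p[k:][p[k:] > 0] / w_high
        h = -(q_low * np.log(q_low)).sum() - (q_high * np.log(q_high)).sum()
        if h > best_h:
            best_h, best_t = h, edges[k]
    return best_t

# Hypothetical usage: descriptors holds one value per roof; assuming larger
# values indicate more chaotic contours, roofs above the threshold are flagged.
# delta = max_entropy_threshold(descriptors, bin_width=0.05)
# damaged = descriptors > delta
```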
Table 2. Confusion matrix assessing the accuracy of damaged roof detection based on the 3D shape descriptor.
Roof Status (Detected)    Reference Intact    Reference Damaged    Row Total    Correctness
Intact                    985                 123                  1108         88.90%
Damaged                   115                 652                  767          85.01%
Column Total              1100                775                  1875
Completeness              89.55%              84.13%
Overall Accuracy (OA)     87.31%
Kappa (KA)                73.79%
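The correctness, completeness, overall accuracy (OA) and kappa (KA) values in Table 2 follow the standard confusion matrix definitions. The short check below recomputes them from the table's counts, with rows as the detected classes and columns as the reference classes; it is provided only as a verification aid.

```python
import numpy as np

def confusion_metrics(cm):
    """cm[i][j]: roofs detected as class i with reference class j,
    classes ordered (intact, damaged) as in Table 2."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    correctness = cm.diagonal() / cm.sum(axis=1)    # row-wise (user's accuracy)
    completeness = cm.diagonal() / cm.sum(axis=0)   # column-wise (producer's accuracy)
    oa = cm.diagonal().sum() / n                    # overall accuracy
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n ** 2   # chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    return correctness, completeness, oa, kappa

# Counts from Table 2: correctness ~ (0.889, 0.850), completeness ~ (0.896, 0.841),
# OA ~ 0.873, kappa ~ 0.738
print(confusion_metrics([[985, 123], [115, 652]]))
```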
Table 3. Applicability comparison of building damage detection methods based on 3D geometric features, showing the damage types that each method can detect.
Damage type columns, grouped into Surface Damages and Structure Damages: Multilayer Collapse (2); Top Story Pancake Collapse (4c, 5, 5c); Heap of Debris (6, 7a, 7c); Heap of Debris with Planes (3, 7b); Inclined Plane (1); Middle or Lower Story Pancake Collapse (4a, 4b, 5a, 5b); Inclination (9a).
Methods compared: Rehor et al. [40]; Rehor et al. [41]; Labiak et al. [42]; Oude Elberink et al. [44]; Shen et al. [31]; Gerke and Kerle [32]; Vetrivel et al. [29]; Fernandez Galarreta et al. [4]; the proposed method.
