1. Introduction
Accurate assessment of structural damage to buildings after disaster events is crucial. Assessment results support immediate relief efforts and subsequent post-disaster reconstruction [1,2]. Traditional ground-based investigations are time-consuming and dangerous, and they require a large amount of labor and material resources. With the development of remote sensing (RS) technology, in contrast, building damage information can be obtained efficiently [3,4].
Recently, very high resolution (VHR) optical images, synthetic aperture radar (SAR) data, and light detection and ranging (LiDAR) data have provided more detailed damage characteristics for the detection of damaged buildings [5,6,7]. In practical applications, optical data are preferred because they are relatively easy to interpret. With the continuous improvement of RS image resolution, disaster damage assessment can be conducted from various platforms, such as satellites, manned aircraft, and unmanned aerial vehicles (UAVs) [8,9,10]. In addition, with the development of RS platforms, recent research has focused on the application of airborne techniques in emergency response [9,11]. Spaceborne RS technology is typically used to discover building damage over a large area and is effective provided that satellite images are available. In contrast, data acquisition based on airborne UAVs benefits from flexibility, low cost, and real-time monitoring [12]. Unlike traditional RS observation methods, which only acquire single-view image features and plane geometry information of ground objects, UAVs can observe object features from multiple angles. With the development of oblique photography techniques, building facade information can be obtained directly from the UAV platform [13,14]. A building identified as 'intact' in an orthophoto (for example, Figure 1a) may in fact be inclined and partially collapsed when portrayed in oblique UAV images (for example, Figure 1b). Thus, the detection accuracy of structural building damage cannot be assured when detection is based merely on a single view. As such, the UAV oblique observation technique has the potential to improve structural damage detection accuracy with more detailed facade and roof information.
Apart from new observation techniques, new damage detection methods are being rapidly developed with the aid of artificial intelligence and computer vision [15,16]. Recent studies show that deep-learning algorithms are becoming popular for the detection of building damage and have greatly improved detection accuracy [15,16,17]. Deep learning is efficient because nonlinear spatial filters are learned automatically and a hierarchy of increasingly complex features is generated directly from the original data. Furthermore, deep learning has demonstrated superior flexibility and capability compared with traditional classification methods [18]. Owing to the convolutional neural network (CNN) structure, the entire CNN system alleviates the need to design a suitable feature extractor manually. However, deep learning requires numerous training samples and a long training time owing to its deep CNN structures. Typical damage characteristics may also vary with the area and the spatial image resolution, limiting the generalization and application of a specific trained model [19]. Furthermore, considering the complexity of building damage, 3D damage features (i.e., geometric and elevation features) are more useful than 2D image features for damage detection [16,20]. Thus, a multi-feature building damage detection method combining 2D and 3D features is critically needed. Several studies have used 3D features to classify or detect typical objects, such as buildings, vegetation, cars, and traffic signs [21,22]. Although 3D structural analysis of buildings can significantly improve detection accuracy, it has not yet been fully exploited for two reasons: (1) object features are complex and vary for most damaged buildings, making it difficult to select typical 3D features of damaged buildings; and (2) structural building damage details (e.g., cracks, local scaling) are difficult to detect with traditional observation methods.
To address these issues, we propose herein a building damage detection method based on oblique photogrammetric point clouds using supervoxel segmentation and a latent Dirichlet allocation (LDA) model. As the data source, oblique photogrammetric point clouds were used to extract the 3D damage features rather than LiDAR points; oblique photogrammetry is low cost, achieves point precision similar to that of LiDAR, and provides rich RGB information. For the correct selection and representation of typical features of damaged buildings, we accounted for the complexity and particularity of building damage. Thus, we combined 2D and 3D features to achieve accurate building damage detection based on supervoxels and the LDA model. We provide a fully automatic and general framework for detecting building damage that effectively suppresses the influence of other ground objects.
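As a rough illustration of the supervoxel idea with boundary refinement (a simplified sketch, not the implementation used in this study), the following Python fragment groups points into coarse voxel segments and then reassigns colour-inconsistent boundary points; all function names, the colour-distance criterion, and the parameters are hypothetical.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Group 3D points into coarse voxel segments (supervoxel seeds)."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, labels = np.unique(keys, axis=0, return_inverse=True)
    return labels

def refine_boundary(points, colors, labels, k=3):
    """Reassign each boundary point (one whose k nearest neighbours carry
    mixed labels) to the candidate segment whose mean colour is closest
    to the point's own colour."""
    seg_color = {s: colors[labels == s].mean(axis=0) for s in np.unique(labels)}
    refined = labels.copy()
    # brute-force kNN; fine for small demo clouds
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]
    for i in range(len(points)):
        cand = np.unique(labels[nn[i]])
        if len(cand) > 1:  # mixed neighbourhood -> boundary point
            dist = [np.linalg.norm(colors[i] - seg_color[s]) for s in cand]
            refined[i] = cand[int(np.argmin(dist))]
    return refined
```

For example, a point with the facade colour that was initially grouped with a roof segment would be snapped back to the colour-consistent facade segment, which is the intuition behind reducing jagged segment boundaries.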
The innovative contributions of our proposed approach are as follows: (1) In contrast to traditional point-based classification methods with low accuracy, we developed a supervoxel-based damage detection method. Because supervoxels generated by the classic Voxel Cloud Connectivity Segmentation (VCCS) algorithm suffer from a 'zig-zag' boundary effect, we developed a boundary-refined supervoxelization algorithm that detects and reassigns boundary points, significantly enhancing the precision of damage locations. (2) Because the detection accuracy of structural building damage cannot be assured from a single view, we combined 2D and 3D features using the LDA model. The LDA model generalizes point-based features and builds a representation of high-level features. This approach provides a systematic view of the efficient and autonomous processing of rooftop and facade features into useful structural damage information. (3) In view of the difficulty of replicating such approaches, we provide a general and accurate realization framework combining building point extraction with building damage detection. This methodology improves damage detection accuracy and can be replicated in fine-grained building damage assessment.
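To illustrate how an LDA model can turn low-level point features into high-level descriptors (a minimal sketch, not the implementation evaluated in this study), each supervoxel can be treated as a 'document' over a vocabulary of quantized 2D/3D feature words, and a collapsed Gibbs sampler can infer a per-supervoxel topic mixture; all names, hyperparameters, and the quantization scheme here are assumptions for illustration.

```python
import numpy as np

def lda_gibbs(docs, n_topics, n_vocab, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Minimal collapsed Gibbs sampler for LDA.
    docs: list of word-index sequences (one 'document' per supervoxel,
    each word a quantized 2D/3D feature). Returns theta, the per-document
    topic distribution (shape: n_docs x n_topics)."""
    rng = np.random.default_rng(seed)
    z = [rng.integers(0, n_topics, size=len(d)) for d in docs]  # topic of each token
    ndk = np.zeros((len(docs), n_topics))  # document-topic counts
    nkw = np.zeros((n_topics, n_vocab))    # topic-word counts
    nk = np.zeros(n_topics)                # topic totals
    for d, (words, zs) in enumerate(zip(docs, z)):
        for w, t in zip(words, zs):
            ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    for _ in range(iters):
        for d, words in enumerate(docs):
            for i, w in enumerate(words):
                t = z[d][i]  # remove token, resample its topic, re-add it
                ndk[d, t] -= 1; nkw[t, w] -= 1; nk[t] -= 1
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + n_vocab * beta)
                t = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = t
                ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    return (ndk + alpha) / (ndk.sum(axis=1, keepdims=True) + n_topics * alpha)
```

The resulting topic mixtures are compact high-level features: a supervoxel dominated by 'rubble-like' feature words and one dominated by 'planar-roof' words would typically receive different dominant topics, which a downstream classifier can separate.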
2. Study Area and Data Sources
The violent Ms 8.0 Wenchuan earthquake occurred on 12 May 2008. It killed nearly 70,000 people, injured more than 370,000, left more than 17,000 listed as missing, and destroyed most of the buildings in the affected region. We selected the old town of Beichuan in Sichuan Province, China, as our study area because it has been completely preserved as a site of the Wenchuan earthquake. Although the town was largely destroyed by the strong earthquake, different types of building damage can still be found at the site even after 12 years. Even though the initial damage features are no longer pronounced, building damage research at the site is of high value owing to the abundant and varied building damage types and damage samples. An overview of the study area, including its spatial location on Google Maps, is presented in Figure 2.
To evaluate the effectiveness of our proposed method, we mapped the Beichuan earthquake ruins on the ground during 12–16 August 2019. As the UAV platform, we used a DJI Phantom 4 Pro equipped with a digital camera (8.8–24 mm f/2.8–11 lens, 5456 × 3632 pixel image size) and an APS CMOS sensor (25.4 × 25.4 mm). The camera rotated freely, allowing multiple views and the capture of 20-megapixel images. The DJI software package Ground Station was used for photogrammetric flight mission planning. The flights were performed with a planned side overlap of 80%, and oblique photos were obtained with a dip angle of 55°. To acquire additional damage details under the flying height limitation, we set the flying height to approximately 100 m to achieve a spatial resolution of 1 cm. Considering the large flight area of nearly 15 km² and the battery limitations, the entire study area was divided into five parts, and overlapping photos were taken from adjacent regions. More than 1400 digital images (Figure 3b) were collected and imported into the Pix4D software to generate dense point clouds (Figure 3c). To avoid 'empty holes' in the final point clouds caused by the platform's instability during flight, we organized multiple groups of images for each subset area.
6. Conclusions and Future Work
This study presented a high-accuracy, semi-automated method for assessing building structural damage by integrating 2D and 3D features. The entire process was conducted systematically, including model implementation of building point extraction, sample set construction, and model implementation of damage extraction. The proposed damage detection framework was also compared with other commonly used approaches, and we verified its effectiveness through a transferability analysis in another scene.
Considering its low cost and convenience, we used the UAV-based oblique photogrammetry technique to obtain dense building point clouds. Because the final classification results of traditional methods are easily affected by land-cover types, we proposed a systematic and detailed damage extraction framework comprising building point extraction and damage classification, which offered a balance between efficiency and accuracy. After confirming that extraction results from traditional point-based methods suffered from the 'salt-and-pepper' effect, we proposed a supervoxel-based damage classification method. In contrast to the VCCS algorithm, we developed a boundary-refined supervoxelization algorithm to improve damage classification precision. Our method also fully considered the 2D and 3D damage features of the building roof and facade by using the LDA model for damage extraction. The proposed method improved damage detection accuracy, with the highest improvement ratio exceeding 8%. In the quantitative analysis, the extraction accuracy of building points reached approximately 94%, while the detection accuracy of building damage reached almost 90%. Moreover, both the precision and recall for damage detection reached 89%, illustrating the reliability and accuracy of the proposed method. In terms of time consumption, the proposed LDA model improved damage detection efficiency compared with the classic model. In conclusion, the new building damage detection framework is based on a 3D analytical method and is convenient for post-disaster emergencies, meeting the need for accuracy and efficiency in emergency response.
In future studies, we plan to expand the data sources to various post-disaster areas. With additional damage samples, we can not only further verify the transferability of the proposed method but also integrate other types of building damage characteristics to help determine specific damage levels. Because different manually selected damage features can significantly affect the final damage detection results, discriminative and representative high-level features must be identified for building damage classification. With the development of 3D deep learning, 3D object recognition has attracted increasing research attention. However, owing to the limited number of damage samples and the complexity of building damage features, existing 3D recognition models are insufficient for building damage detection. Thus, a more effective 3D detection method for building damage needs to be developed.