# BDD-Net: A General Protocol for Mapping Buildings Damaged by a Wide Range of Disasters Based on Satellite Imagery


## Abstract


## 1. Introduction

## 2. Data Sources and Disaster Cases

## 3. Model Development

#### 3.1. Preprocessing

#### 3.2. Deep Pixel-Classification Network

#### 3.3. Loss Function

#### 3.4. Model Learning

#### 3.5. Accuracy Assessment

## 4. Results

## 5. Discussion

#### 5.1. The Special Capacity of Deep Convolutional Neural Networks

#### 5.2. Visual Comparison of Image Classification Using Image Pairs and Post-Event Images as Input

#### 5.3. Comparison of Different Loss Functions

## 6. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References


**Figure 1.** Architecture of the proposed building damage detection network (BDD-Net). The six-channel input is formed by concatenating the pre- and post-disaster images. BDD-Net consists of an encoder and a decoder: the encoder contains eight convolutional blocks, each consisting of one mobile inverted bottleneck convolution (MBConv); the decoder contains eight convolutional blocks, each consisting of one convolutional layer and one upsampling layer. Shortcut connections link each encoder block to the corresponding decoder block.
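The encoder-decoder layout described in the caption can be sketched roughly as follows. This is a minimal illustrative PyTorch model, not the authors' implementation: the real encoder uses EfficientNet-style MBConv blocks with squeeze-and-excitation, which are reduced here to plain strided convolutions, and only four stages per side are shown instead of eight. All layer widths are placeholder choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyBDDNet(nn.Module):
    """Minimal encoder-decoder sketch in the spirit of BDD-Net.

    Six input channels (stacked pre- and post-disaster RGB images),
    strided convolutions in place of MBConv blocks, and shortcut
    connections from each encoder stage to the matching decoder stage.
    """

    def __init__(self, in_ch=6, n_classes=3, widths=(16, 32, 64, 128)):
        super().__init__()
        self.enc = nn.ModuleList()
        ch = in_ch
        for w in widths:
            # each encoder stage halves the spatial resolution
            self.enc.append(nn.Sequential(
                nn.Conv2d(ch, w, 3, stride=2, padding=1),
                nn.BatchNorm2d(w), nn.SiLU()))
            ch = w
        self.dec = nn.ModuleList()
        for s in reversed(widths[:-1]):  # channels of the skip features
            # each decoder stage fuses the upsampled features with a skip
            self.dec.append(nn.Sequential(
                nn.Conv2d(ch + s, s, 3, padding=1),
                nn.BatchNorm2d(s), nn.SiLU()))
            ch = s
        self.head = nn.Conv2d(ch, n_classes, 1)  # per-pixel class scores

    def forward(self, x):
        feats = []
        for stage in self.enc:
            x = stage(x)
            feats.append(x)
        for stage, skip in zip(self.dec, reversed(feats[:-1])):
            x = F.interpolate(x, scale_factor=2, mode="nearest")
            x = stage(torch.cat([x, skip], dim=1))  # shortcut connection
        # back to the input resolution, one score map per class
        return self.head(F.interpolate(x, scale_factor=2, mode="nearest"))
```

A forward pass on a six-channel tile returns a score map with one channel per class (damaged building, undamaged building, background) at the input resolution.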

**Figure 2.** Pixel-classification results for post-disaster images under the various input forms, using a combination of Dice loss and focal loss as the loss function.

**Figure 3.** The F1 values obtained for the different disaster types when the deep convolutional neural network is optimized with various loss functions, using pre- and post-disaster image pairs as input.

**Figure 4.** The F1 values obtained for the different disaster types when the deep convolutional neural network is optimized with various loss functions, using only post-disaster images as input.
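The Dice and focal losses compared in these experiments can be combined per batch as sketched below. This is an illustrative NumPy version, not the authors' code; in particular, the equal weighting of the two terms is an assumption, and the arrays are assumed to hold per-pixel class probabilities and one-hot labels with pixels flattened along the first axis.

```python
import numpy as np


def dice_loss(probs, onehot, eps=1e-6):
    """Soft Dice loss averaged over classes.

    probs, onehot: arrays of shape (N, C) with per-pixel class
    probabilities and one-hot ground truth (pixels flattened into N).
    """
    inter = (probs * onehot).sum(axis=0)
    denom = probs.sum(axis=0) + onehot.sum(axis=0)
    return float(np.mean(1.0 - 2.0 * inter / (denom + eps)))


def focal_loss(probs, onehot, gamma=2.0, eps=1e-6):
    """Focal loss: cross-entropy down-weighted by (1 - p_t)^gamma,
    so easy, well-classified pixels contribute little."""
    pt = (probs * onehot).sum(axis=1)  # probability of the true class
    return float(np.mean(-((1.0 - pt) ** gamma) * np.log(pt + eps)))


def combined_loss(probs, onehot, w_dice=1.0, w_focal=1.0):
    # equal weights are an assumption made for this sketch
    return w_dice * dice_loss(probs, onehot) + w_focal * focal_loss(probs, onehot)
```

The Dice term counters the class imbalance between sparse building pixels and the dominant background, while the focal term focuses the gradient on hard pixels; a perfect prediction drives both terms toward zero.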

**Figure 5.** An illustration of a set of input images and classification results with BDD-Net. (**a**,**b**) Pre- and post-event images (1024 pixels × 1024 pixels), (**c**) the ground truth, and (**d**,**e**) the results obtained with the two input forms. Red areas are damaged buildings recognized by the model, green areas are undamaged buildings recognized by the model, and black areas are background.

**Table 1.** Details of the disaster events used to test the proposed building damage detection network (BDD-Net).

| Disaster Type | Location (and Nickname) | Date |
|---|---|---|
| Volcanic eruption | Guatemala | 03 Jun 2018 |
| Hurricane | USA (Hurricane Florence) | 10–19 Sep 2018 |
| Hurricane | USA (Hurricane Harvey) | 17 Aug–02 Sep 2017 |
| Hurricane | Puerto Rico (Hurricane Matthew) | 28 Sep–10 Oct 2016 |
| Hurricane | USA (Hurricane Michael) | 07–16 Oct 2018 |
| Earthquake | Mexico | 19 Sep 2017 |
| Flood | Midwest USA | 03 Jan–31 May 2019 |
| Tsunami | Indonesia | 18 Sep 2018 |
| Wildfire | USA (Carr Fire) | 23 Jul–30 Aug 2018 |
| Wildfire | USA (Woolsey Fire) | 09–28 Nov 2018 |

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Shao, J.; Tang, L.; Liu, M.; Shao, G.; Sun, L.; Qiu, Q.
BDD-Net: A General Protocol for Mapping Buildings Damaged by a Wide Range of Disasters Based on Satellite Imagery. *Remote Sens.* **2020**, *12*, 1670.
https://doi.org/10.3390/rs12101670
