Earthquake Damage Region Detection by Multitemporal Coherence Map Analysis of Radar and Multispectral Imagery

Abstract: Earth, as humanity's habitat, is constantly affected by natural events such as floods, earthquakes, thunderstorms, and droughts, among which earthquakes are considered one of the deadliest and most catastrophic natural disasters. The Iran-Iraq earthquake occurred in Kermanshah Province, Iran, in November 2017. It was a magnitude 7.4 seismic event that caused immense damage and loss of life. The rapid detection of damage caused by earthquakes is of great importance for disaster management. Thanks to their wide coverage, high resolution, and low cost, remote sensing images play an important role in environmental monitoring. This study presents a new unsupervised damage detection method using multitemporal optical and radar images acquired by the Sentinel missions. The proposed method is applied in two main phases: (1) automatic built-up area extraction using spectral indices and an active learning framework on Sentinel-2 imagery; (2) damage detection based on multitemporal coherence map clustering and similarity measure analysis using Sentinel-1 imagery. The main advantages of the proposed method are that it is unsupervised, simple to use, and computationally light, and that it uses medium-spatial-resolution imagery with good temporal resolution that can operate at any time and under any atmospheric conditions, while achieving high accuracy in detecting deformations in buildings. The accuracy analysis found the proposed method visually and numerically comparable to other state-of-the-art methods for built-up area detection. The proposed method detects built-up areas with an accuracy of more than 96% and a kappa of about 0.89 overall, compared to other methods. Furthermore, it detects damaged regions with an accuracy of more than 70%, compared to other state-of-the-art damage detection methods.


Introduction
Earth is constantly undergoing dynamic processes such as natural disasters as well as anthropogenic changes to its surface [1][2][3]. Natural disasters, in particular, are a growing concern worldwide that threaten the lives of a large population [4][5][6]. Among the various types of natural disasters, the earthquake is considered one of the deadliest and most catastrophic [7,8]. Earthquakes potentially cause massive loss of life and property damage, especially in urban areas, and damage assessment is one of the critical problems after each such disaster [9][10][11][12][13][14]. Tracking the evolution of damage in earthquake areas and determining the damage level is vital, as it can help direct rescue teams to the most critical sites [6]. Additionally, such data provide highly important information for reconstruction operations, restoration of facilities, and repair of critical lines [11].
The processing of satellite remote sensing imagery is a tool that continuously provides valuable information about Earth, on a large scale, with minimum time and cost, and over long periods of time [15]. During recent years, the development of remote sensing satellites has allowed researchers to widely employ multitemporal image datasets with high spectral, spatial, and temporal resolutions [16,17]. Such techniques play a key role in environmental monitoring in many applications, especially in damage assessment [18][19][20][21].
Recently, there has been tremendous interest among researchers in producing damage maps using remote sensing imagery, and many methods have been proposed for damage assessment [22][23][24]. These methods generally focus on high-resolution optical imagery. Although many research efforts have proposed damage detection algorithms and applied them to high-resolution optical imagery and LiDAR data, many limitations remain: (1) some of these methods are based on change detection and classification approaches that require training data and prior knowledge or ground truth [25]; (2) high-resolution remote sensing imagery has low spectral resolution, and the pre-event imagery is sometimes unavailable [26,27]; (3) because of the low spectral resolution, detecting damaged areas becomes exceedingly difficult and/or suffers from a high false alarm rate due to, e.g., building shadows; (4) separating damage from shadows using high-resolution imagery is extraordinarily difficult; (5) high-resolution imagery covers a small area and is largely commercial; (6) building subsidence is not detected by optical data, unless a hole happens to open in the roof, while it is captured in SAR (Synthetic Aperture Radar) imagery (Figure 1) [7,10]; and (7) optical data are often limited by cloud cover. Airborne LiDAR systems, on the other hand, allow for the fast and extensive acquisition of precise height (altitude) data, which can be used to detect some specific damage types [8]. Damage detection based on LiDAR data and a subsequent fusion with optical data could improve the performance of damage detection, albeit with limitations: (1) LiDAR data acquired by an airborne system do not cover all areas, and flight-permission restrictions apply; (2) the problem of pre-event data availability persists.
Recently, the use of radar imagery for damage detection has attracted researchers' attention, and much research has been presented [10,14,24,28-36]. These studies focused mainly on indexing two temporal datasets, ignoring factors such as the effects of noise and vegetation on the coherence products.
Given these issues, resolving the aforementioned problems is necessary to achieve practicable damage detection. This research therefore proposes a novel damage detection method based on both optical and SAR imagery in an unsupervised framework with the following properties: (1) extraction of urban areas in an unsupervised framework; (2) detection of damaged areas and their classification into different levels based on the coherence spectral signature; (3) an unsupervised damage detection algorithm with no need for initial parameter settings; (4) easy implementation with low computational complexity; (5) use of free-access optical (Sentinel-2) and SAR (Sentinel-1) imagery with medium spatial resolution and good temporal resolution (the latter with low sensitivity to clouds and light rain and the capability of operating by day and by night).

Study Area and Datasets
The study areas are located in Kermanshah Province in western Iran. Several satellites with optical and/or SAR sensors observed the Sarpol-Zahab and Qasr-Shirin areas before and after the Iran-Iraq earthquake of 12 November 2017. To evaluate the performance of the proposed method and assess the accuracy of the results, a ground truth map is required. The ground truth for built-up areas was obtained by expert visual analysis of the optical Sentinel imagery, together with a detailed visual comparison with imagery available in the Google Earth platform. For evaluation of the multiple damage map, sample regions were selected for the Sarpol-Zahab case study (Figure 2b). The damage map was created using maps generated by the Iranian Space Agency and field visits. All these datasets can be found on the University of Tehran's Remote Sensing Laboratory website (rslab.ut.ac.ir, accessed on 18 March 2021).

Case Study
Qasr-Shirin is a county in Kermanshah Province in western Iran, located between 34°29′ N and 34°30′ N and between 45°33′ E and 45°36′ E. The extent of the studied region, as extracted from Sentinel-1 satellite imagery, was 295 × 341 pixels. In this area, we incorporated four multitemporal datasets acquired on 30 October, 11 November, 17 November, and 5 December 2017. The optical remote sensing dataset was acquired on 5 November 2017 by Sentinel-2. Figure 2b shows the optical Sentinel-2 image for Qasr-Shirin county.
Sarpol-Zahab is also a county in Kermanshah Province in western Iran, located between 34°26′ N and 34°28′ N and between 45°50′ E and 45°52′ E. The extent of the studied region, as extracted from Sentinel-1 satellite imagery, was 403 × 345 pixels. In this area, we incorporated four multitemporal datasets acquired on 30 October, 11 November, 17 November, and 5 December 2017. The optical remote sensing dataset was acquired on 5 November 2017 by Sentinel-2. Figure 2a shows the optical Sentinel-2 image for Sarpol-Zahab county.
Sentinel-1 is a constellation of two satellites (1A and 1B) carrying a C-band SAR. Sentinel-2 delivers 13 spectral bands with wide spectral coverage over the visible, near-infrared (NIR), and shortwave-infrared (SWIR) domains at spatial resolutions from 10 m to 60 m. These datasets are freely available at the Sentinel Scientific Data Hub (https://scihub.copernicus.eu, accessed on 18 March 2021).
All maps are presented in the World Geodetic System 1984 (WGS 1984) geographic coordinate system. The spatial resolution of the optical and SAR datasets is 10 m for both case studies.

Reference Data for the Damaged Area and Accuracy Assessment
This research used two ground truths presented by international agencies. The first ground truth was produced by UNITAR-UNOSAT and is available at https://unitar.org/unosat/ (accessed on 18 March 2021). The second ground truth was produced by the Iranian Space Agency and is available at https://isa.ir/ (accessed on 18 March 2021). Figure 3 presents some reference maps for Sarpol-Zahab obtained from national agencies.

The accuracy assessment relied on two analyses: a visual analysis, and a numerical analysis defined by a confusion matrix containing four components: true positives, true negatives, false positives, and false negatives (Table 1).

Table 1. Confusion matrix components for built-up area detection.

                         Predicted Built-Up     Predicted Non-Built-Up
Actual Built-Up          True Positive (TP)     False Negative (FN)
Actual Non-Built-Up      False Positive (FP)    True Negative (TN)

The result of built-up area detection was compared with the ground truth data by calculating several common accuracy assessment indices. Table 2 provides the equations of these indices, including overall accuracy (OA), miss detection (MD), F1-score, balanced error (BE), false alarm rate (FAR), and the Kappa coefficient (KC).
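As a minimal sketch of the numerical analysis, the indices of Table 2 can be computed directly from the four confusion-matrix counts. Since Table 2's formulas are not reproduced in the text, the standard textbook definitions are assumed here:

```python
def accuracy_indices(tp, tn, fp, fn):
    """Common accuracy indices from confusion-matrix counts
    (standard definitions assumed; the paper's exact formulas are in Table 2)."""
    total = tp + tn + fp + fn
    oa = (tp + tn) / total                      # overall accuracy
    md = fn / (tp + fn)                         # miss-detection rate
    far = fp / (fp + tn)                        # false-alarm rate
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    be = 0.5 * (md + far)                       # balanced error
    # Kappa: agreement beyond the chance agreement pe
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (oa - pe) / (1 - pe)
    return {"OA": oa, "MD": md, "FAR": far, "F1": f1, "BE": be, "KC": kappa}

# illustrative counts for a built-up / non-built-up map
m = accuracy_indices(tp=900, tn=8500, fp=100, fn=200)
```

Note that MD and FAR are complementary to recall and specificity, which is why a method can show a high MD rate while keeping FAR low, as reported for the built-up class below.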

Methodology
This section considers the proposed method in more detail, particularly regarding its combination with optical and radar imagery for detecting built-up and damaged areas.
The proposed method consists of two main steps after preprocessing (Figure 4).
Step (1) is the extraction of man-made objects using spectral indices and an unsupervised learning framework.
Step (2) is the estimation of the coherence map and damage region detection using the SAM algorithm and automatic thresholding.

Preprocessing
In this study, the optical Sentinel-2 and SAR Sentinel-1 datasets were used. Data preprocessing plays an important role before the main processing begins, and it differs for the optical and radar Sentinel datasets. The optical Sentinel-2 data have good quality and only require atmospheric correction, transforming TOA reflectance to BOA reflectance via algorithms developed by DLR/Telespazio [37].
The preprocessing steps for radar data include thermal noise removal, radiometric calibration, and despeckling. All preprocessing operations were accomplished using the Sentinel toolbox, free and open-source software (https://sentinel.esa.int/web/sentinel/toolboxes, accessed on 18 March 2021).


Phase 1: Built-Up Area Detection
The main purpose of this phase is to extract man-made objects from Sentinel-2 satellite imagery. The phase is applied in two main steps within an active learning procedure. The first step is feature extraction, conducted using spatial and spectral features.
The spectral and spatial features are extracted via spectral indices and Grey Level Co-occurrence Matrices (GLCM). The second step is pseudo sample generation, in which sample data are gathered into two groups (man-made areas and other targets) by applying a hard threshold to the indices.

Spectral/Spatial Feature Extraction
Spectral band indices are among the most common spectral transformations widely used in remote sensing analysis [38]. These indices make certain features more discernible than in the original data through pixel-wise operations that assign new values to individual pixels according to predefined formulas. Here, their purpose is to extract built-up areas by eliminating other objects, such as vegetation, water, and bare soil, which the spectral indices predict. This research used ten spectral indices (Table 3).
Table 3. Various types of spectral indices used for predicting objects [15].
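As an illustration, a few of the standard normalized-difference indices commonly used for this purpose can be computed as below (Table 3 itself is not reproduced in the text, so the selection here is an assumption; the band assignments follow Sentinel-2's band numbering):

```python
# Standard spectral indices (Sentinel-2 bands: B3 green, B4 red, B8 NIR, B11 SWIR).
# A small epsilon guards against division by zero on dark pixels.

def ndvi(nir, red):        # vegetation: high NDVI
    return (nir - red) / (nir + red + 1e-12)

def ndwi(green, nir):      # water: high NDWI
    return (green - nir) / (green + nir + 1e-12)

def ndbi(swir, nir):       # built-up: positive NDBI, since SWIR > NIR over buildings
    return (swir - nir) / (swir + nir + 1e-12)

# toy surface reflectances: a vegetated pixel and a built-up pixel
veg_ndvi = ndvi(nir=0.45, red=0.05)
veg_ndbi = ndbi(swir=0.20, nir=0.45)
built_ndbi = ndbi(swir=0.40, nir=0.30)
```

Hard thresholds on such indices (e.g., high NDVI for vegetation, positive NDBI for built-up) are what drive the pseudo-sample generation described next.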

Spatial features can also improve classification results. Among the most important spatial features are texture features, which are widely used in the classification of remote sensing datasets. Texture features consider the relations between individual pixels and their neighboring pixels and provide informative features [39]. Different features, such as homogeneity, entropy, and size uniformity, can be generated by texture analysis. In this study, seven texture features were generated from the Grey Level Co-occurrence Matrix (GLCM) [39]. Table 4 presents the various texture features derived from the GLCM.
Table 4. Various types of spatial indices derived from the GLCM matrix [15,39].

In the GLCM feature formulas, p(i, j) is the (i, j)-th entry of the normalized GLCM, N is the total number of grey levels in the image, and µx, µy and σx, σy denote the means and standard deviations of the row and column sums of the GLCM, respectively. In this research, the three visible bands (blue, green, and red) were combined into panchromatic data according to Equation (1), and the texture features were then extracted from the generated band [15]:

B_pan = (B2 + B3 + B4) / 3    (1)

where B_pan is the panchromatic band and B2, B3, and B4 are the blue, green, and red bands, respectively. The window size for texture analysis was set to 3 × 3.
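The panchromatic conversion and GLCM texture extraction can be sketched as follows. The mean of the three visible bands is assumed for Equation (1), and the grey-level quantization (8 levels) and single-pixel offset are illustrative choices not specified in the text:

```python
import numpy as np

def pan_band(b2, b3, b4):
    # Equation (1): panchromatic band assumed as the mean of blue, green, red.
    return (b2 + b3 + b4) / 3.0

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix for one pixel offset."""
    q = (img * (levels - 1) / max(img.max(), 1e-12)).astype(int)  # quantize
    P = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            P[q[i, j], q[i + dy, j + dx]] += 1
    return P / P.sum()

def glcm_features(P):
    """A few of the Table 4 texture features, standard definitions assumed."""
    i, j = np.indices(P.shape)
    nz = P[P > 0]
    return {
        "contrast": float(np.sum(P * (i - j) ** 2)),
        "homogeneity": float(np.sum(P / (1.0 + (i - j) ** 2))),
        "energy": float(np.sum(P ** 2)),
        "entropy": float(-np.sum(nz * np.log(nz))),   # randomness of intensities
    }

# perfectly uniform patch: zero contrast/entropy, maximal homogeneity/energy
pan = pan_band(np.full((6, 6), 0.1), np.full((6, 6), 0.2), np.full((6, 6), 0.3))
feats = glcm_features(glcm(pan))
```

In practice the paper slides a 3 × 3 window over the panchromatic band and computes such features per pixel; the patch-level computation above shows the core calculation.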

Pseudo Sample Generation and Classification
In this phase, built-up areas are detected by an active learning framework. After feature extraction, several spectral indices are used to designate vegetation, water, and soil. The main challenge lies in extracting man-made areas by finding the optimum threshold on the indices. To address this, the proposed framework extracts man-made objects with high accuracy as follows. A hard threshold is applied to the spectral indices to generate sample data in two classes (built-up and non-built-up). After locating the pseudo sample data, spatial and spectral features are generated, and the spatial and spectral bands are stacked for the next step. The generated sample data are divided into two parts: (1) training data and (2) testing data. The second step then optimizes the Support Vector Machine (SVM) parameters (penalty and kernel parameters) and performs binary classification. Once the optimum parameter values are found, the optimum model is built and the stacked spatial and spectral bands are classified into two classes.
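The two-step procedure can be sketched with scikit-learn. The index thresholds, feature distributions, and grid-search ranges below are illustrative assumptions (the text does not specify them); only the RBF-SVM grid search over the penalty C and kernel width gamma follows the paper:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical index values for 200 pixels: high NDBI for built-up pixels,
# high NDVI for vegetated pixels (distributions are illustrative).
ndbi = np.concatenate([rng.normal(0.4, 0.05, 100), rng.normal(-0.3, 0.05, 100)])
ndvi = np.concatenate([rng.normal(0.1, 0.05, 100), rng.normal(0.6, 0.05, 100)])
X = np.column_stack([ndbi, ndvi])

# Step 1: hard thresholds on the indices yield confident pseudo-labels
# (the threshold values 0.3 and 0.5 are hypothetical).
pseudo_built = ndbi > 0.3
pseudo_other = ndvi > 0.5
mask = pseudo_built | pseudo_other          # keep only confident pixels
y = pseudo_built[mask].astype(int)          # 1 = built-up, 0 = other
Xs = X[mask]

# Step 2: split into training/testing, then grid-search the RBF-SVM
# penalty C and kernel parameter gamma, and classify.
Xtr, Xte, ytr, yte = train_test_split(Xs, y, test_size=0.3, random_state=0)
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [1, 10, 100, 1000], "gamma": [1e-3, 1e-2, 1e-1]})
grid.fit(Xtr, ytr)
acc = grid.score(Xte, yte)
```

The key idea is that no manual labeling is needed: the confident tails of the index distributions supply the training set, and the SVM then generalizes the decision boundary to the ambiguous pixels in between.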

SVM Classifier
The SVM is a supervised machine learning algorithm commonly used for classification. It is based on statistical learning theory; the main idea is to find a hyperplane that maximizes the margin between two classes [40]. The SVM classifier has two parameters: the kernel parameter and the penalty coefficient. For our study, we used the radial basis function (RBF) kernel, which is widely utilized in the remote sensing community [41,42]. These parameters require optimization, and their optimum values are obtained by a grid search algorithm [15].

Phase 2: Damaged Region Mapping
The main purpose of this step is to detect damaged regions using a time series of coherence maps derived from pre- and post-earthquake data. Preprocessing is applied to the Sentinel-1 SAR imagery, and a coherence map is then estimated for each image pair in the time series. These coherence maps cover pre-event pairs, pre-event to post-event pairs, and post-event pairs.
After extracting man-made areas and masking other objects on the coherence maps, the SAM algorithm measures the similarity between the reference signature and each target signature. The reference signature (a damaged pixel) can be either low coherence or high coherence: a low-value pixel in the coherence map is labeled "low coherence" and a high-value pixel "high coherence". Each pixel showing behavior similar to the reference pixel can thus be considered a damaged pixel. Next, the Otsu algorithm is applied with four classes to determine the damage level and to detect no-damage areas. The first class has the lowest similarity and corresponds to the no-damage class; the second class represents low-level damage; the third class corresponds to moderate damage; and the fourth class signifies the complete destruction of buildings.

Coherence Map
Interferometric coherence is the absolute value of the complex correlation coefficient between corresponding samples of two SAR images [10,32,33,36,43].
The complex coherence of two zero-mean circular Gaussian variables M and S is defined as Equation (2):

γ = E[M S*] / √( E[|M|²] E[|S|²] )    (2)

where E is the expectation operator, * denotes complex conjugation, M and S are the master and slave images, respectively, and γ denotes the complex coherence between the slave and master images. The coherence value, obtained as the magnitude of the complex coherence, is estimated over a moving window as Equation (3):

γ̂ = | Σᵢ₌₁ᴺ Mᵢ Sᵢ* | / √( Σᵢ₌₁ᴺ |Mᵢ|² · Σᵢ₌₁ᴺ |Sᵢ|² )    (3)

where N is the number of pixels in the moving window.
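The windowed coherence estimator of Equation (3) can be sketched for two co-registered complex SAR images as follows; the shrinking-window handling at image borders is an implementation choice, not specified in the text:

```python
import numpy as np

def window_sum(a, win):
    """Moving-window sum; windows shrink at the image borders."""
    r = win // 2
    h, w = a.shape
    out = np.empty_like(a)
    for i in range(h):
        for j in range(w):
            out[i, j] = a[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1].sum()
    return out

def coherence(master, slave, win=5):
    """Coherence magnitude, Equation (3), over a win x win moving window."""
    num = np.abs(window_sum(master * np.conj(slave), win))
    den = np.sqrt(window_sum(np.abs(master) ** 2, win) *
                  window_sum(np.abs(slave) ** 2, win))
    return num / np.maximum(den, 1e-12)

# identical scenes give coherence 1; independent scenes give low coherence
rng = np.random.default_rng(1)
m = rng.standard_normal((12, 12)) + 1j * rng.standard_normal((12, 12))
s = rng.standard_normal((12, 12)) + 1j * rng.standard_normal((12, 12))
coh_same = coherence(m, m)
coh_diff = coherence(m, s)
```

By the Cauchy-Schwarz inequality the estimate always lies in [0, 1]; collapsed buildings change the scatterers between acquisitions and therefore pull the co-event coherence toward 0, which is what the damage analysis exploits.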

SAM Algorithm
The SAM algorithm measures the spectral angle between the reference and target vectors in n-dimensional space and is widely used in remote sensing applications [44]. The spectral angle is calculated as Equation (4):

α = cos⁻¹( (x · y) / (‖x‖ ‖y‖) )    (4)

where α is the spectral angle and x and y represent the target and reference spectra, respectively.
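A minimal sketch of the spectral angle of Equation (4); the example reference signature is hypothetical:

```python
import numpy as np

def sam(x, y):
    """Spectral angle (radians) between target x and reference y, Equation (4)."""
    cos_a = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.arccos(np.clip(cos_a, -1.0, 1.0)))  # clip guards rounding

# hypothetical multitemporal coherence signature of a damaged reference pixel
ref = np.array([0.9, 0.2, 0.85, 0.15])
angle_same = sam(2.0 * ref, ref)                      # scaled copy -> angle 0
angle_orth = sam(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

Because SAM depends only on the direction of the vectors, not their magnitude, a pixel whose coherence drops and recovers in the same temporal pattern as the reference scores a small angle even if its absolute coherence values differ.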

Otsu Algorithm
The Otsu algorithm is a thresholding algorithm that automatically clusters an image [45]. It determines the threshold by minimizing the within-class variance, i.e., the weighted sum of the variances of the defined clusters. In this study, the Otsu algorithm is applied to produce the multiple damage map.

Built-Up Areas Extraction
The proposed method extracts built-up areas from optical remote sensing data in an unsupervised framework. For this purpose, preprocessing is applied to the optical dataset and the desired features are extracted. After producing an initial training dataset by strict threshold selection on the indices, all features and original spectral bands are used as input for the SVM classifier. The optimum value of the penalty coefficient is 10³ for both case studies, and the optimum values of the kernel parameter are 10⁻² and 10⁻³ for Qasr-Shirin and Sarpol-Zahab, respectively. The proposed approach was compared with a similar method based on the normalized difference (ND) between the pre- and coevent interferometric coherence and impervious surfaces extracted from optical imagery [43]. Figure 5 presents the result of built-up area detection in Qasr-Shirin; as seen, the proposed method outperformed the ND-based method. Figure 6 shows the built-up area detection result for Sarpol-Zahab. Figure 6a represents the ND-based method, with misdetections in part of the area and a high false alarm rate, so this method does not perform well. Figure 6b shows the result of the proposed method, which detected almost all of the built-up areas.
The built-up area detection results were also assessed numerically using some of the indices. Table 5 presents the numerical results for the proposed method and the ND-based method. The proposed method improved performance on both datasets and provided the lowest MD (under 20%) and FA (under 5%) rates. Based on these results, the proposed method detects non-built-up areas better than built-up areas, since the methods produced high MD rates compared to FA rates; accordingly, the precision rates exceed recall. Figure 7 presents the confusion matrices for both datasets.

Damaged Region Detection
The obtained mask for built-up areas is applied to the multitemporal coherence map. Evaluation of the results was based on visual analysis and accuracy assessment. Figure 8 presents the results of damaged region detection in Sarpol-Zahab. Based on the visual analysis and comparison to the ground truth for some sample regions (Figure 8c), the proposed method predicted damaged areas well across damage classes. It performs well on highly damaged region detection compared to the ND-based method. However, some no-damage areas were classified as high damage; this issue originated from the effect of irrelevant changes and from the performance of the proposed method in classifying damage regions.

Tables 6 and 7 present the confusion matrix of the damage region detection results on the Sarpol-Zahab dataset. The multiple damage map was evaluated over 20 sample regions covering four classes.
Table 7. The accuracy assessment of the obtained multiple damage map based on comparison with the reference map for the Sarpol-Zahab dataset.

Based on the obtained results, the proposed method performs well for multiple damage region detection across damage levels compared to the ND-based method. The proposed method achieves an accuracy of more than 70%, while the ND-based method achieves about 50%. It is worth noting that the proposed method generates the multiple damage map in an unsupervised framework. Among the damage levels, the no-damage class has the lowest accuracy compared to the other classes. The ND-based method performs well on the no-damage class but poorly on the other damage classes.

Discussion
The optical Sentinel-2 imagery has high potential for extracting built-up areas based on feature extraction using the incorporated indices. Built-up area extraction is the first part of damage assessment and plays a key role; high accuracy is therefore crucial for the results to be reliable. The results presented in Figure 9 show that almost all built-up areas were extracted by the proposed framework, confirming its performance on both datasets. Other built-up extraction methods are less efficient because they miss some of the areas, especially on the Qasr-Shirin dataset. This issue originates from the threshold selection process over various types of features; the proposed method offers a sound framework for threshold selection and uses new, efficient indices for feature extraction.

Recently, much research has been done on built-up area detection. These methods are mainly based on supervised learning and need training samples. Table 8 presents the overall accuracy of these methods and of the proposed one. Based on the numerical results, the proposed method performs well compared to other state-of-the-art methods, which are applied in a supervised framework and require sample data. The quality and quantity of sample data play a key role in classification, whereas the proposed method provides sample data through an active learning framework. It is worth noting that collecting sample data is a major, time-consuming challenge, especially for a project such as this study.
Index-based methods mainly focus on spectral features, while spatial features improve the performance of built-up area detection. In addition, one of the main disadvantages of index-based methods is finding an optimum threshold for each spectral index; consequently, some methods, such as ND-based methods, do not provide promising results.
The multiple damage map is another product of the proposed method (Figures 8 and 9, Tables 6 and 7). It provides valuable information about the nature of the damage. Some state-of-the-art methods provide promising results on damaged regions but ignore multiple damage maps; the proposed method presents a multiple damage map with three damage classes. The aforementioned ground truth maps, on the other hand, focus only on damaged areas. The multiple damage map was evaluated on sample damage regions; based on numerical and visual analysis, the proposed method achieved an overall accuracy of more than 70%.
Because of the good temporal resolution and the general advantages of SAR data, damage information extraction using multitemporal SAR data can be very valuable. The damage information extraction is performed in an unsupervised framework. The proposed method not only detects damaged regions but also provides additional details, such as the extent of the damage at three levels based on the coherence spectral signature, and it shows good results compared to other damage detection methods. Figure 10 presents the result of damaged region detection in Sarpol-Zahab at a local scale; it is clear that the proposed method performs well in detecting damaged regions compared to the ND-based method.
Because of its good temporal resolution and the general advantages of SAR data, damage information extraction from multitemporal SAR data is very valuable. In the proposed approach, this extraction is performed in an unsupervised framework. Our method not only detects damaged regions but also provides additional details, such as the extent of damage at three levels, based on the coherence spectral signature, and it shows good results compared to other damage detection methods. Figure 10 presents the damaged-region detection result for Sarpol-Zahab at a local scale; it is clear that the proposed method outperforms the ND-based method in detecting damaged regions. One advantage of our method is that it uses three temporal datasets, which reduces the noise effects of factors that increase the coherence value and could otherwise cause false results. Furthermore, our method uses a measure based on the spectral signature of each pixel, while other methods use only an index, and it does not require threshold selection, whereas other methods do.
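The per-pixel signature comparison can be sketched as follows. This example measures cosine dissimilarity between each pixel's three-date coherence signature and a reference undamaged signature; the exact similarity measure used in the paper may differ, and the stack layout and values here are assumptions for illustration.

```python
import numpy as np

def signature_dissimilarity(coh_stack, reference):
    """Per-pixel cosine dissimilarity between a temporal coherence signature
    and a reference (undamaged) signature.

    coh_stack : (T, H, W) stack of T coherence maps
    reference : (T,) reference signature of an undamaged pixel
    """
    T, H, W = coh_stack.shape
    v = coh_stack.reshape(T, -1)
    r = np.asarray(reference, dtype=np.float64)
    cos = (v * r[:, None]).sum(axis=0) / (
        np.linalg.norm(v, axis=0) * np.linalg.norm(r) + 1e-10)
    return (1.0 - cos).reshape(H, W)

# Three coherence maps: two pre-event pairs and one co-event pair.
# Undamaged pixels keep high coherence throughout; damaged pixels lose
# coherence only in the co-event map.
undamaged_sig = np.array([0.8, 0.8, 0.8])
coh = np.empty((3, 2, 2))
coh[:, 0, 0] = [0.80, 0.80, 0.75]   # stable -> near-zero dissimilarity
coh[:, 0, 1] = [0.70, 0.75, 0.70]   # stable
coh[:, 1, 0] = [0.85, 0.80, 0.20]   # co-event coherence drop -> damaged
coh[:, 1, 1] = [0.80, 0.80, 0.15]   # co-event coherence drop -> damaged
d = signature_dissimilarity(coh, undamaged_sig)
```

Pixels whose temporal signature matches the stable reference score near zero, while pixels whose coherence collapses only in the co-event pair score markedly higher; thresholding is unnecessary if the scores are fed directly into clustering.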

Conclusions
This study presents a new method for fast damaged-region detection that does not require prior knowledge of the case study. The proposed method uses five steps to enhance the content and quality of the final damage detection results. The experiments were conducted on real Sentinel datasets of different regions.
The proposed method uses Sentinel-1 imagery, which has an acceptable temporal resolution compared to other SAR sensors. These advantages make the method applicable after any earthquake occurs, helping to map damaged regions and assess damage quickly, whereas field monitoring takes far more time to evaluate damaged areas, especially at large scales.
The findings of this study show: (1) improved accuracy without requiring prior knowledge of the damage; (2) the method provides both binary maps and ranges of damage level; (3) the potential of incorporating a new temporal series of remote-sensing imagery; (4) the advantages of a SAR dataset with all-weather capability and day-and-night operation; and (5) the proposed method is simple to implement and highly efficient, with a low computational cost.
This research focused on damaged-region detection based on medium-resolution satellite imagery. Such datasets have limited spatial resolution, which makes building-level damage detection very difficult. We therefore propose extending this algorithm to very high-resolution SAR imagery, which could enable extraction of multiple-building damage.