1. Introduction
Due to the occurrence of natural disasters across the world, there is a strong need for automated algorithms for the fast and accurate extraction of changed landscapes within affected areas. Such techniques can accelerate strategic planning for moving people into shelters and for carrying out damage assessment and risk management during a crisis [1,2]. Several methods have therefore been developed for this purpose, with most efforts directed at low- and medium-resolution imagery [3]. Change detection using multi-temporal remote sensing images with a high spatial resolution is challenging. In the case of low-resolution images, change detection techniques are mostly based on the analysis of spectral and statistical information [4]. Such methods can be effective for broad-scale images or large-scale changes, because the noise caused by registration errors and radiometric variation can be restricted to low levels, relative to the real changes, through preprocessing or other means. For high-resolution images, however, several new problems must be considered in the design of change detection algorithms. First, accurate registration (e.g., half- or quarter-pixel accuracy) of different images is not easily achieved. Second, variations in lighting and environmental conditions, such as building shadows, are local and diverse across images [4]. In addition, high spatial resolution images contain more imaging noise. Finally, in many applications, users wish to detect small changes, including lines, buildings, bridges and other human-made features. The performance of current change detection methods on high spatial resolution remote sensing images is therefore not satisfactory, as false alarm rates remain relatively high [4].
Recently, support vector machine (SVM) and support vector data description (SVDD) classifiers have demonstrated their effectiveness in several remote sensing applications. The success of such approaches is related to the intrinsic properties of these classifiers: they can handle ill-posed problems and the curse of dimensionality, they provide robust sparse solutions, and they delineate nonlinear decision boundaries between the classes [3]. To take advantage of the large amount of information present in multispectral difference images, we formulate the change detection problem in a higher-dimensional feature space [5]. Like all kernel methods, SVM and SVDD show some interesting advantages over other techniques, such as intrinsic regularization and robustness to noise and high dimensionality [5,6,7,8,9,10]. On the other hand, specifying a threshold requires prior knowledge of the nature of the data and the study area as well as a skilled user, and is often associated with large errors.
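For context, the standard SVDD formulation seeks the smallest hypersphere, of radius \(R\) and centre \(a\), enclosing the mapped training samples \(\phi(x_i)\) of the target class; this is the textbook form, not necessarily the exact variant used in the cited works:

\[
\min_{R,\,a,\,\xi}\; R^{2} + C\sum_{i=1}^{N}\xi_{i}
\quad \text{s.t.} \quad
\lVert \phi(x_{i}) - a \rVert^{2} \le R^{2} + \xi_{i},\;\; \xi_{i} \ge 0.
\]

A test sample \(x\) is accepted as belonging to the target class when \(\lVert \phi(x) - a \rVert^{2} \le R^{2}\), i.e., when it falls inside the hypersphere in the kernel-induced feature space; the trade-off parameter \(C\) controls the fraction of training samples allowed to fall outside the sphere.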
As mentioned, in high spatial resolution images the underlying class distributions often overlap strongly, yielding pixels that are difficult to classify even with robust methods such as SVM. The high within-class variance and the low between-class distance, due to the limited spectral information, increase the need for approaches that enhance the separability between the different classes. Moreover, most of the above studies did not exploit the benefits of combining optical and radar images. Radar images with different polarizations contribute greatly to the separation of complex land cover classes; they are sensitive to scattering processes and are affected by the shape, direction, and dielectric properties of the surface producing the scatter.
To address these problems, this paper proposes an object-level, kernel-based change detection method that integrates object-based image analysis (OBIA) with support vector data description (SVDD). The framework performs automatic change detection on either optical or radar remote sensing data, for users who need easy and rapid access to real-time geospatial information to support disaster management. The proposed kernel-based method leads to a strong decrease in the false alarm rate (classifying a background pixel as the change class) and a slight accuracy improvement in the generated change map. It exploits the information content of the radar and optical data simultaneously through decision-level fusion of the change maps obtained from each.
2. Experiments
2.1. Case Study and Remote Sensing Data
To assess the effectiveness of the proposed approach, the 2011 Sendai tsunami was considered as the case study, for which multi-temporal optical and radar images were collected by a variety of satellite remote sensing sensors. These datasets were acquired before and after the disaster. In Japan, on 11 March 2011, at 05:46:23 UTC, an earthquake occurred near the subduction plate boundary between the Pacific and North American plates. The epicentre was located about 130 km east of Sendai City at a depth of about 32 km. The earthquake was followed by a tsunami that caused devastating damage to wide areas of East Japan, particularly along the Pacific coastline [11]. To extract the destroyed areas, we considered two data sets acquired by both optical and radar sensors, i.e., IKONOS and Radarsat-2. The geographical location and the extent of the study area over Sendai, Japan are shown in Figure 1a. The acquisition dates and the spectral and spatial resolutions of these image data sets are presented in Table 1. The optical and radar images from before and after the 2011 Sendai tsunami acquired by IKONOS and Radarsat-2 are illustrated in Figure 2.
2.2. Methodology
The proposed decision-fusion-based CD framework consists of several steps: (a) pre-processing; (b) object-based classification; (c) kernel parameter estimation; (d) one-class classification; and (e) change map fusion. In the first step of the proposed CD method, geometric and radiometric pre-processing was performed on the multi-temporal images. In each case study, the optical multi-temporal images were co-registered manually to each other, while the radar images were co-registered automatically using an angular-histogram-based co-registration method [12]. Clouds in the optical images were masked symmetrically, i.e., the same areas were excluded from both the pre- and post-event images.
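As a minimal, hedged illustration of the geometric pre-processing step (the angular-histogram co-registration method of [12] is not reproduced here), the Python sketch below fits an affine transform to manually picked ground control points and resamples the moving image onto the reference grid; the function name, the GCP-based workflow, and the affine model are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np
from scipy import ndimage

def coregister(moving, gcp_moving, gcp_reference):
    """Warp `moving` onto the reference grid using matched ground control points.

    gcp_moving / gcp_reference: (N, 2) arrays of matching (row, col) positions,
    N >= 3. An affine model mapping reference -> moving coordinates is fitted by
    least squares, so the output can be sampled directly on the reference grid.
    """
    # Homogeneous design matrix built from the reference-image coordinates.
    A = np.hstack([gcp_reference, np.ones((len(gcp_reference), 1))])
    coeffs, *_ = np.linalg.lstsq(A, gcp_moving, rcond=None)  # shape (3, 2)
    matrix, offset = coeffs[:2].T, coeffs[2]
    # For each output (reference) coordinate o, sample `moving` at matrix @ o + offset,
    # using bilinear interpolation (order=1).
    return ndimage.affine_transform(moving, matrix, offset=offset,
                                    output_shape=moving.shape, order=1)
```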
Figure 1b presents the flowchart of this automatic decision-based change detection method. The mathematical details of each step in the proposed CD framework are presented in [13,14,15].
In the second step, the pre-event image was classified using an object-based support vector machine (SVM) classifier. For each class of interest in this image, an SVDD classifier was then trained using randomly selected samples. At this stage, the SVDD separation function, a hypersphere in the high-dimensional feature space, enclosed the pixels of the class of interest. All corresponding pixels in the post-event image were then passed to the SVDD classifier as unknown pixels. If an unknown pixel in the post-event image did not belong to the no-change (target) class, it fell outside the hypersphere and was considered a changed pixel, or outlier; if it fell inside the hypersphere, it was assigned to the no-change class. This process was repeated for all classes in both the optical and radar images until all pixels in the post-event image were classified and the final change map was produced [13].
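The following Python sketch illustrates this per-class one-class classification step. It uses scikit-learn's OneClassSVM as a stand-in for SVDD (the two are closely related and, for an RBF kernel, equivalent up to parameterisation); the synthetic data, the nu value, and the per-pixel feature layout are assumptions, not the paper's actual configuration:

```python
import numpy as np
from sklearn.svm import OneClassSVM

def per_class_change_map(pre_pixels, post_pixels, class_map, nu=0.05, gamma="scale"):
    """Per-class one-class change detection.

    pre_pixels / post_pixels: (H, W, B) co-registered feature images.
    class_map: (H, W) integer labels from the pre-event classification.
    Returns a boolean (H, W) map, True where a pixel is flagged as changed.
    """
    changed = np.zeros(class_map.shape, dtype=bool)
    for label in np.unique(class_map):
        mask = class_map == label
        # Train on the pre-event pixels of this class (the "no-change" target).
        model = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma)
        model.fit(pre_pixels[mask])
        # Post-event pixels at the same locations: -1 = outside the hypersphere.
        changed[mask] = model.predict(post_pixels[mask]) == -1
    return changed

# Toy example on synthetic 2-band imagery.
rng = np.random.default_rng(0)
pre = rng.normal(size=(50, 50, 2))
post = pre.copy()
post[10:20, 10:20] += 4.0                  # inject a spectral change
labels = (pre[..., 0] > 0).astype(int)     # mock two-class pre-event map
print(per_class_change_map(pre, post, labels)[10:20, 10:20].mean())
```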
In the final step, the change maps produced from the optical and radar images were fused using a decision-based fusion method, namely a voting strategy. Voting strategies can be applied to multiple classifier systems in which each classifier produces a single class label as output. There are a number of approaches to combining such uncertain information units to obtain the best final decision, but they all lead to a generalized voting definition. In this paper, an enhanced majority voting method was used to fuse the change maps obtained from the optical and radar imagery. In simple majority voting, the final change or no-change label is assigned where all the SVDD classifiers produce the same output. In the proposed enhanced majority voting method, in areas where the SVDD classifiers produced different outputs, a spectral similarity measure between the multitemporal radar and optical imagery was calculated; if this criterion was lower than a predefined threshold for a given pixel, the pixel was assigned to the change class, and otherwise to the no-change class.
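A minimal sketch of this enhanced majority voting rule is given below. Since the text does not specify the spectral similarity measure, the spectral angle between the multitemporal optical pixel vectors is used as an illustrative choice, applied only to the optical pair for simplicity; the threshold value is likewise an assumption:

```python
import numpy as np

def fuse_change_maps(cmap_opt, cmap_sar, pre_img, post_img, angle_threshold=0.15):
    """Enhanced majority voting over two binary change maps.

    Where the optical and radar outputs agree, the common label is kept.
    Where they disagree, spectral similarity between the pre- and post-event
    pixel vectors decides: low similarity (large spectral angle) means change.
    """
    agree = cmap_opt == cmap_sar
    fused = np.where(agree, cmap_opt, False)

    # Spectral angle (radians) between multitemporal pixel vectors.
    dot = np.sum(pre_img * post_img, axis=-1)
    norms = np.linalg.norm(pre_img, axis=-1) * np.linalg.norm(post_img, axis=-1)
    angle = np.arccos(np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0))

    # For disagreeing pixels: dissimilar spectra (angle above threshold) -> change.
    fused[~agree] = angle[~agree] > angle_threshold
    return fused
```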
3. Results and Discussion
To analyze the accuracy of the proposed decision-fusion-based CD method, test data were extracted from the optical images and high-resolution Google Earth images by visually comparing the multi-temporal images. The samples were selected so that they spread over the entire area and so that the effects of the sun angle and the topography could be carefully considered in the analysis. Two criteria extracted from the confusion matrix, the kappa coefficient of agreement and the overall accuracy (OA), were used for the quantitative accuracy analysis of the results.
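For reference, both criteria follow the standard definitions computed from the confusion matrix \(\{n_{ij}\}\) over \(N\) test samples:

\[
\mathrm{OA} = \frac{1}{N}\sum_{i} n_{ii}, \qquad
\kappa = \frac{\mathrm{OA} - p_e}{1 - p_e}, \qquad
p_e = \frac{1}{N^{2}}\sum_{i} n_{i+}\, n_{+i},
\]

where \(n_{ii}\) are the diagonal entries of the confusion matrix, \(n_{i+}\) and \(n_{+i}\) the row and column sums, and \(p_e\) the chance agreement that kappa corrects for.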
Figure 3 shows the change maps obtained from the proposed object-based CD method for the IKONOS and Radarsat-2 imagery from Sendai, Japan. The blue colour indicates the change class.
The results show that using the optical or radar imagery separately leads to an increase in the false alarm rate in the change maps. With optical data alone, the flooded areas were not fully identified, owing to the inability of optical data to separate flooded areas from other areas. Using only radar data, not all flooded areas could be detected, and some agricultural areas and bare land were misclassified as flooded, due to the complexity of the region, the proximity of the change classes, and the limited input information available to the proposed CD algorithm.
As can be seen in Figure 3c, the noise level in the fused change map is very low, and the proposed fusion-based CD algorithm succeeded in fully separating flooded areas from other areas. By fusing the change maps obtained from the optical and radar imagery, the changed areas were well extracted: the results contained few isolated pixels and were inherently less noisy, so better results were achieved. Alongside the areas devastated by the earthquake, the flood-affected areas were extracted with high accuracy. Exploiting the optical images together with the radar data yielded a sharp boundary between the change and no-change areas, which was the basis of the statistical approach to estimating the changes. Indeed, the accurate knowledge of the homogeneous regions provided by the joint segmentation allowed better exploitation of all the available information in the change detection phase. The accuracy analysis of the proposed decision-fusion-based CD method for Sendai is presented in Table 2.
For the optical dataset, the accuracy analysis of the proposed CD method showed that the best results were obtained using the radial basis function (RBF) kernel. For the C-band HH intensity image of the Radarsat-2 imagery, the best results were obtained using the sigmoid kernel. Fusing the change maps obtained from the optical and radar imagery always provided better results than omitting the fusion phase.
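For reference, the two kernel functions compared here have the standard forms

\[
k_{\mathrm{RBF}}(\mathbf{x},\mathbf{y}) = \exp\!\bigl(-\gamma \lVert \mathbf{x}-\mathbf{y}\rVert^{2}\bigr),
\qquad
k_{\mathrm{sig}}(\mathbf{x},\mathbf{y}) = \tanh\!\bigl(\gamma\,\mathbf{x}^{\top}\mathbf{y} + r\bigr),
\]

where \(\gamma\) (and the offset \(r\) of the sigmoid kernel) are the hyperparameters tuned in the kernel parameter estimation step.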
Several conclusions can be drawn from the accuracy assessment of the proposed CD method. Preliminary results show that objects may be well suited to quantifying changes when only one class of landscape features is the research emphasis. To map the flooded areas, radar imagery is therefore the more appropriate choice, whereas high-resolution optical imagery is more appropriate for extracting earthquake- and flood-affected built-up and crop-land areas. Accordingly, to explore the environmental changes caused by natural disasters, it is suggested that optical and radar imagery be used in an integrated manner.
There are several reasons for the superiority of the object- and fusion-based CD approach. First, in the classification step, the proposed algorithm can delineate the boundaries of changed (damaged or flooded) areas from the adjacent unchanged areas, allowing the changed areas to be processed as homogeneous objects instead of individual pixels. Second, through the fusion of optical and radar data, the objects carry spectral, textural, spatial and contextual patterns as well as backscatter information that can aid the CD process. By integrating the change maps obtained from the optical and radar imagery, the CD approach can detect various land cover changes on the ground.
4. Conclusions
In this paper, a decision-fusion-based CD method at the object level was presented for change detection using both optical and radar remotely sensed data for the 2011 earthquake in Sendai, Japan. The proposed method shows great flexibility by finding nonlinear solutions to the change detection problem. It aims to exploit both the high information content of the radar imagery and the high level of spectral information available even in a multiband optical image. The proposed method is largely automated and was only marginally affected by errors introduced by the classification process. In addition, all the change detection analyses were performed at the object level; the obtained change maps therefore have a lower level of noise, and the boundary between the change and no-change classes has high contrast.
Experimental results showed that the proposed CD approach led to an acceptable level of accuracy for both the optical and radar imagery. The results confirm the fundamental role and potential of using both optical and radar data for natural hazard damage detection applications. Microwave signals are highly sensitive to the water content of wetland and flooded areas, which increases the intensity of the backscattered signal; consequently, radar sensors have a high potential for detecting environmental changes during natural disasters under adverse weather conditions. Future research will focus on the integration of various remote sensing sensor types using information-level and feature-level methods for multiple change detection, from which more accurate change maps and complementary information should be achievable for natural hazard damage detection applications.