Article

An Image Registration Method for Multisource High-Resolution Remote Sensing Images for Earthquake Disaster Assessment

Xin Zhao, Hui Li, Ping Wang and Linhai Jing

1 College of Geomatics, Shandong University of Science and Technology, Qingdao 266590, China
2 Key Laboratory of Digital Earth Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
3 Hainan Key Laboratory of Earth Observation, Sanya 572029, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(8), 2286; https://doi.org/10.3390/s20082286
Submission received: 29 February 2020 / Revised: 15 April 2020 / Accepted: 16 April 2020 / Published: 17 April 2020
(This article belongs to the Section Remote Sensors)

Abstract

For earthquake disaster assessment using remote sensing (RS), multisource image registration is an important step. However, severe earthquakes cause large deformations between remote sensing images acquired on different platforms before and after the event. Traditional image registration methods can hardly meet the accuracy and efficiency requirements for registering post-earthquake RS images used in disaster assessment. Therefore, an improved image registration method was proposed for the registration of multisource high-resolution remote sensing images. The proposed method combined the Shi_Tomasi corner detection algorithm and the scale-invariant feature transform (SIFT) to detect tie points from image patches obtained by an image partition strategy considering geographic information constraints. Then, the random sample consensus (RANSAC) and greedy algorithms were employed to remove outliers and redundant matched tie points. Additionally, a pre-earthquake RS image database was constructed from pre-earthquake high-resolution RS images and used as the reference for image registration. The performance of the proposed method was evaluated using three image pairs covering regions affected by severe earthquakes. The proposed method provided higher accuracy, shorter running time, and more tie points with a more even distribution than the classic SIFT method and the SIFT method using the same image partitioning strategy.

1. Introduction

Disasters caused by severe earthquakes, such as collapsed buildings, road damage, dammed lakes, and secondary geological disasters, pose a threat to people’s lives and property worldwide. In recent years, major earthquakes that occurred in China, such as the Wenchuan earthquake on 12 May 2008 and the Yushu earthquake on 14 April 2010, attracted considerable attention from the government and society. Obtaining exact disaster information immediately after severe earthquakes is important for disaster rescue and relief and can effectively reduce the damage caused by severe earthquakes [1,2]. With the advantages of wide-ranging, multiscale, dynamic comprehensive observation and being free from ground conditions, remote sensing (RS) technology for earth observation provides a rapid, safe, and economical method for earthquake disaster assessment and emergency rescue [3]. Since the Wenchuan earthquake on 12 May 2008, satellite and airborne RS technologies have played an important role in the rapid acquisition and dynamic monitoring of earthquake disaster information and in post-earthquake reconstruction.
The data processing for extracting earthquake disaster information from RS images includes image preprocessing, disaster information extraction, and disaster assessment [4]. The accuracy and efficiency of image preprocessing, which mainly refers to the geometric rectification of RS images and the coregistration of multisource RS images, are crucial for the accuracy and efficiency of disaster information extraction. In order to obtain disaster information quickly and support disaster responses, it is particularly important to have geometrically accurate RS images available shortly after the acquisition of post-earthquake RS images. Using the images to extract disaster information, including its location, distribution, and extent, is crucial for disaster relief and rescue. For instance, in transportation planning after an earthquake, a fast and accurate assessment of the degree of destruction of traffic facilities (such as roads and bridges) is essential to obtain an updated configuration of infrastructure that facilitates disaster relief work [5]. The accuracy of coregistration of the post-earthquake and pre-earthquake RS images is particularly important when change detection methods are used for disaster information extraction.
With the development of RS technology, different satellite sensors can provide multispectral, multitemporal, and multiplatform RS images. Additionally, the spatial resolution of RS images is continually improving, and image registration research has gradually shifted from medium-low resolution to high resolution [6]. The registration of high-resolution RS images is significantly more challenging than that of medium-low-resolution RS images, since high-resolution RS images contain more intricate details and texture information [7]. Therefore, the coregistration of multisource high-resolution RS images has been a research focus in RS image processing.
Image registration is the process of geometrically aligning two or more images covering the same area and acquired at different times, from different viewpoints, or by different sensors [8]. The majority of automatic registration methods for RS images can be classified into two categories: gray-based (or intensity-based) methods and feature-based methods [9,10]. To extract tie points from two images, the gray-based methods establish a similarity measure between the images through the mutual information (MI) method [11,12] or the normalized cross-correlation (NCC) method [13]. As the gray values of the pixels within the neighborhood of each pixel need to be considered, these methods are computationally expensive and relatively inefficient. The feature-based methods locally extract point, line, and regional features for image registration [14]. Due to the advantages of easy acquisition, short running time, and high robustness, point features are widely used in image registration. Currently, many feature-based methods have been successfully used for RS image registration, such as the Moravec corner detection algorithm [15], the Harris corner detection algorithm [16], the Shi_Tomasi corner detection algorithm [17], the scale-invariant feature transform (SIFT) [18], speeded up robust features (SURF) [19], and features from accelerated segment test (FAST) [20].
The performances of several different algorithms, including SIFT, shape context [21], steerable filters [22], principal components analysis-SIFT (PCA-SIFT) [23], differential invariants [24], and other methods, were compared by Mikolajczyk and Schmid [25]. The experimental results show that the SIFT algorithm yielded the best performance. However, it was apt to be affected by image noise and texture changes and was computationally intensive and time-consuming during feature point extraction [26]. To solve these problems, several improved versions of the SIFT algorithm were proposed. For instance, the independent component analysis-SIFT (ICA-SIFT) algorithm, which uses independent component analysis to remove redundant information in the SIFT feature vectors, was proposed [27]. It improved the efficiency but decreased the accuracy. The affine-SIFT (ASIFT) algorithm proposed by Yu and Morel [28] achieved complete affine invariance by simulating longitude and latitude, but its efficiency was much lower than that of the SIFT algorithm. The uniform robust SIFT (UR-SIFT) algorithm [29] is suitable for multisource optical RS images with different illumination conditions, rotations, and scale differences of up to five-fold. However, the crossmatching approach employed by the UR-SIFT algorithm eliminated a number of correctly matched tie points, which reduced the accuracy. Inspired by the SIFT algorithm, an enhanced feature matching method combining the position, scale, and orientation of each keypoint, called PSO-SIFT, was proposed [30]. The PSO-SIFT method effectively increased the number of correctly matched tie points, but it also increased the running time. However, for images with complex terrain and surface deformations caused by disasters, these algorithms can hardly meet the requirements of accuracy and efficiency simultaneously. Through an improvement of the Harris corner response function, the Shi_Tomasi corner detection algorithm is not susceptible to image rotations, lighting conditions, angle changes, or noise. It also avoids the clustering of feature points, which makes the distribution of the extracted feature points more even [31]. Moreover, the Shi_Tomasi algorithm needs less running time than the SIFT algorithm and thus effectively improves the efficiency of feature extraction.
Due to surface deformations caused by severe earthquakes, post-earthquake RS images usually show surface damage such as building collapse, ground displacements, road damage, and landslides, making the coregistration of pre- and post-earthquake RS images more challenging. Many algorithms work well for RS image registration, but when used for the registration of pre-earthquake and post-earthquake RS images, especially for mountainous areas with complex terrain, two problems arise: a lack of tie points and an uneven distribution of the matched tie points. To solve these two problems, a fast automatic registration method for multisource high-resolution RS image registration in earthquake damage assessment was proposed in this study. In this method, the Shi_Tomasi and SIFT algorithms were combined to obtain tie points using an image partition matching strategy considering geographic information. The proposed method was compared with the classic SIFT method and the SIFT method using the same image partitioning strategy on three pairs of post- and pre-earthquake images that have complex terrain and significant differences in color tone.
The rest of this paper is organized as follows. The proposed method is introduced in Section 2, and the experiments are demonstrated in Section 3. The discussions are provided in Section 4, and the conclusions are presented in Section 5.

2. Methodology

The proposed method employs an improved SIFT algorithm using the Shi_Tomasi approach to detect feature points on enhanced images and a partition matching strategy considering geographic information to search for tie points. Seven steps are involved in the method (Figure 1), as follows:
(1) Constructing a pre-earthquake image database;
(2) Image enhancement;
(3) Image partitioning strategy based on geographic information constraints;
(4) Image patch matching using the combination of Shi_Tomasi and SIFT;
(5) Removing the outliers;
(6) Homogenizing the spatial distribution of matched tie points;
(7) Image transformation and resampling.

2.1. Constructing a Pre-Earthquake Satellite Image Database

To improve the efficiency and the degree of automation of image processing, a pre-earthquake image database can first be constructed using high-resolution optical remote sensing images covering the study area. The construction of the database mainly consists of three steps. First, radiometric calibration, atmospheric correction, and projection coordinate conversion are used to preprocess the pre-earthquake high-resolution satellite images. Then, the preprocessed images are geometrically corrected according to ground control points (GCPs) if GCPs are available. If no GCPs are available in the case of an emergency, high-quality, strictly orthorectified pre-earthquake high-resolution RS images can be used without geometric correction. Finally, the identity number (ID) and the coordinates of the corresponding high-resolution satellite image are recorded in a coordinate file, as shown in Table 1. The pre-earthquake RS images and the corresponding coordinate files are stored in the same directory to form the pre-earthquake satellite image database.
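Although the paper's implementation used C# (see Section 3), a minimal Python sketch of a coordinate-file record and the database lookup used later in Section 2.3 (step 3) may help; the class and function names, the bounding-box fields, and the path field are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ImageRecord:
    image_id: str        # ID_Img in Table 1
    x_min: float         # bounding box in projected coordinates
    y_min: float
    x_max: float
    y_max: float
    pixel_size: float    # Ref_Pixelsize: spatial resolution
    path: str            # file location within the database directory

def find_reference(records: List[ImageRecord],
                   qx_min: float, qy_min: float, qx_max: float, qy_max: float,
                   input_pixel_size: float) -> Optional[ImageRecord]:
    """Return the pre-earthquake image covering the query window whose
    resolution is closest to that of the input image (Section 2.3, step 3)."""
    covering = [r for r in records
                if r.x_min <= qx_min and r.y_min <= qy_min
                and r.x_max >= qx_max and r.y_max >= qy_max]
    if not covering:
        return None
    return min(covering, key=lambda r: abs(r.pixel_size - input_pixel_size))
```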

2.2. Image Enhancement

Usually, image denoising is employed to reduce image noise before image matching. However, the denoised images may suffer from relatively low image contrast when the image has only a single channel or the color change is not obvious [32]. This makes it difficult to extract feature points. To enhance the details of the denoised images and increase the number of extracted feature points, the proposed method adopted a 2% linear stretch method to enhance both the input and the reference images.
The 2% linear stretch method is usually performed by defining a transfer function in the following form:
$$
g(x,y)=
\begin{cases}
c, & 0 \le f(x,y) < a \\
\dfrac{d-c}{b-a}\left[f(x,y)-a\right]+c, & a \le f(x,y) < b \\
d, & b \le f(x,y)
\end{cases}
\tag{1}
$$
where g (x, y) is the value of pixel (x, y) after image stretching, f (x, y) is the original value of pixel (x, y), and a and b are the values at 2% and 98%, respectively, of the cumulative frequency of the original image. c and d are the minimum and the maximum of the stretched image, respectively. In this work, c = 0, and d = 255.
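As an illustration, Equation (1) reduces to a few lines of array arithmetic; the following is a sketch assuming a single-band unsigned-integer image, not the authors' code.

```python
import numpy as np

def linear_stretch_2pct(band: np.ndarray, c: float = 0, d: float = 255) -> np.ndarray:
    """2% linear stretch (Equation (1)): a and b are the gray values at the
    2% and 98% points of the cumulative histogram; output is scaled to [c, d]."""
    a, b = np.percentile(band, (2, 98))
    out = (band.astype(np.float64) - a) * (d - c) / (b - a) + c
    return np.clip(out, c, d).astype(np.uint8)
```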
To test the advantage of the 2% linear stretch method, an experiment was performed using the GaoFen-1 (GF-1) multispectral image (with an 8-m resolution) and the SIFT algorithm. The original image yielded 16 matched tie points, as shown in Figure 2a. In contrast, the enhanced image obtained 119 matched tie points (Figure 2b). The experimental results show that applying the 2% linear stretching method on the images can facilitate the extraction of feature points and increase the number of matched tie points.

2.3. Image Partitioning Strategy Based on Geographic Information Constraints

For large high-resolution remote sensing images, the extraction and matching of feature points usually have high computational complexity, which requires considerable memory and is time-consuming. In addition, registration accuracy is low when sufficient feature points are extracted from only some regions of the image. To solve this problem, an image partitioning strategy based on geographic information constraints was adopted to divide remote sensing images into several image patches. For each image patch, the corresponding reference image can be obtained according to the projected coordinates of the patch. The extraction and matching of feature points were then applied to each image patch pair.
With respect to the geometric information of RS images, the mapping relationship [33] between the image coordinates and geographic coordinates can be expressed using Equation (2).
$$
\begin{bmatrix} X_i \\ Y_i \end{bmatrix}
=
\begin{bmatrix} X_0 \\ Y_0 \end{bmatrix}
+
\begin{bmatrix} G_1 & G_2 \\ G_4 & G_5 \end{bmatrix}
\begin{bmatrix} I_i \\ J_i \end{bmatrix}
\tag{2}
$$
where (Ii, Ji) are the image coordinates of the ith pixel, and (Xi, Yi) are the corresponding projected coordinates of (Ii, Ji); (X0, Y0) are the projected coordinates of the top left corner of the original whole-scene input image, and G1, G2, G4, and G5 are the parameters of the transformation model. Figure 3 shows a schematic of the image partitioning strategy. The image partition of the input RS image and the corresponding reference image can be conducted using the following steps:
First, the input image is divided into n×n patches, each of which has a size of M×N pixels. Each patch is numbered, and the coordinates of the four corners of the patch are recorded. For example, the corners of patch F2N are recorded as A1, A2, A3, and A4. The image coordinates of corners A1 and A3 are then calculated and recorded as (I1, J1) and (I3, J3), respectively.
Second, the image coordinates are transformed using Equation (2) into the projected coordinates according to the geometric information of the input image. Specifically, (X0, Y0) are the projected coordinates of the top left corner in the original whole scene input image; (Ii, Ji), i = 1 or 3 are the image coordinates of corners A1 and A3 of the image patch F2N; (Xi, Yi), i = 1 or 3 are the projected coordinates of A1 and A3; G1 is the pixel size of the input image; G5 is the negative value of pixel size of the input image; and the values of G2 and G4 are equal to 0.
Third, according to the projected coordinates (Xi, Yi) of an input image patch, the pre-earthquake satellite image covering this patch is found through the pre-earthquake high-resolution image database. If there is more than one image covering the patch, the reference image is the one whose spatial resolution is closest to the input image. Additionally, the extension of the corresponding reference patch can be determined according to (Xi, Yi). The image coordinates ((I1, J1), (I3, J3)), which correspond to the two corners (B1 and B3) of the reference image patch shown in Figure 3c, are calculated using Equation (2).
Finally, the image coordinates of the other two corners (B2 and B4) of the reference image patch can be obtained according to those of corners B1 and B3. The image coordinates of the four corners B1, B2, B3, and B4 are then used to obtain the corresponding reference image patch from the pre-earthquake satellite image.
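The coordinate bookkeeping in these steps reduces to Equation (2) and its inverse. The sketch below assumes north-up images (G2 = G4 = 0), as stated in the second step; the function names are illustrative.

```python
def image_to_projected(geo, i, j):
    """Pixel coordinates (i, j) -> projected coordinates (X, Y), Equation (2).
    geo = (X0, Y0, G1, G2, G4, G5); here G1 is the pixel size and G5 its
    negative, with G2 = G4 = 0."""
    X0, Y0, G1, G2, G4, G5 = geo
    return X0 + G1 * i + G2 * j, Y0 + G4 * i + G5 * j

def projected_to_image(geo, X, Y):
    """Inverse mapping for a north-up image, used to locate the reference
    patch corners B1 and B3 from the projected coordinates of A1 and A3."""
    X0, Y0, G1, _, _, G5 = geo
    return (X - X0) / G1, (Y - Y0) / G5
```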

2.4. Partition Matching Using Shi_Tomasi and SIFT

The Shi_Tomasi and SIFT algorithms were combined to extract feature points from each of the image patch pairs obtained as described in Section 2.3. The Shi_Tomasi algorithm was used to detect feature points in the image pairs. The SIFT descriptor, which is invariant to scale and rotation, was used to describe the detected feature points. The Best Bin First (BBF) algorithm [34] was employed to match the feature points.

2.4.1. Feature Detection Using the Shi_Tomasi Algorithm

The Shi_Tomasi corner detection algorithm is an improved version of the Harris corner detector. Considering a local window in the image, Harris corner points are detected based on the determination of the average changes in image intensity that result from shifting the local window by a small amount in various directions.
Denoting the image intensity of the image as I, the intensity difference (E) produced by a shift (u, v) of the local window is provided by Equation (3):
$$
E(u,v)=\sum_{x,y} w(x,y)\left[I(x+u,\,y+v)-I(x,y)\right]^2
\tag{3}
$$
where w (x, y) is a window function at position (x, y).
To search for the windows that produce a large E (u, v), I (x + u, y + v) can be expanded using the Taylor series:
$$
I(x+u,\,y+v)=I(x,y)+I_x u+I_y v+o(u^2,v^2)
\tag{4}
$$
where Ix and Iy are image derivatives in the x and y directions, respectively. Combining Equations (3) and (4), we obtain Equation (5) as:
$$
E(u,v)\approx
\begin{bmatrix} u & v \end{bmatrix}
\left(\sum_{x,y} w(x,y)
\begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}\right)
\begin{bmatrix} u \\ v \end{bmatrix}
=
\begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}
\tag{5}
$$
where M is defined using Equation (6),
$$
M=\sum_{x,y} w(x,y)
\begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}
\tag{6}
$$
Finally, a score R is calculated using Equation (7) and used to determine whether the window contains a corner:
$$
R=\det M - k\,(\operatorname{trace} M)^2
\tag{7}
$$
where det M = λ1λ2 and trace M = λ1 + λ2, with λ1 and λ2 being the eigenvalues of M; k is an empirical constant, generally set to 0.04 [35]. If the value of R is greater than a threshold value (T), the window is treated as containing a corner point that is good to track.
Different from the Harris algorithm, the score R for the Shi_Tomasi algorithm is calculated using Equation (8).
$$
R=\min(\lambda_1,\lambda_2)
\tag{8}
$$
If R is greater than T, the point can be marked as a Shi_Tomasi corner. The schematic of the Shi_Tomasi algorithm is shown in Figure 4. In this work, the maximum number of corner points (Nmax) for each image patch should be set in advance, usually to 1500.
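In OpenCV, the Shi_Tomasi detector is exposed as goodFeaturesToTrack; in the sketch below, only maxCorners = 1500 follows the paper (Nmax), while the image path and the other parameter values are assumptions.

```python
import cv2

gray_patch = cv2.imread("patch.tif", cv2.IMREAD_GRAYSCALE)  # hypothetical path

# Shi_Tomasi corner detection on one grayscale image patch.
corners = cv2.goodFeaturesToTrack(
    gray_patch,
    maxCorners=1500,    # Nmax: the per-patch corner budget used in this work
    qualityLevel=0.01,  # threshold T as a fraction of the best min(λ1, λ2) score
    minDistance=5,      # minimum spacing between accepted corners, in pixels
)
```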

2.4.2. Feature Description Using the SIFT Descriptor

The SIFT descriptor was employed for feature description of the feature points detected in this work. The SIFT descriptor proposed by Lowe in 2004 is a local feature descriptor based on the gradient distribution in the detected regions. By assigning one or more consistent orientations to each feature point based on the gradient directions of a local image, the feature descriptor of the feature point can be represented relative to the orientations and therefore achieve invariance to image rotation. For each feature point (x, y) of the scale image L, the gradient magnitude m (x, y) and orientation θ (x, y) can be calculated using Equations (9) and (10), respectively.
$$
m(x,y)=\sqrt{\left[L(x+1,y)-L(x-1,y)\right]^2+\left[L(x,y+1)-L(x,y-1)\right]^2}
\tag{9}
$$
$$
\theta(x,y)=\arctan\!\left(\frac{L(x,y+1)-L(x,y-1)}{L(x+1,y)-L(x-1,y)}\right)
\tag{10}
$$
An orientation histogram is formed from the gradient orientations of the sample points within a region around the feature point. It has 36 bins covering the 360° range of orientations. Each bin covers 10 degrees. As shown in Figure 5, the peak in the histogram, indicating the dominant orientation of the local gradient image, is assigned as the orientation of the feature point.
After determining the orientation of a feature point, the SIFT descriptor of the feature point is calculated using three steps, as illustrated in Figure 6.
First, the coordinates of the descriptor and the gradient orientations are rotated relative to the orientation of the feature point to achieve orientation invariance. Then, the orientation histograms over 4 × 4 sample regions centered on the feature point are created. Each orientation histogram has eight direction bins; in Figure 6, the length of each arrow represents the magnitude of the corresponding histogram entry. Therefore, Lowe suggested using a 4 × 4 × 8 = 128-element feature vector for each feature point. Finally, the feature vector is normalized to unit length to reduce the effects of illumination change.
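Continuing the sketch from Section 2.4.1, the detected Shi_Tomasi corners can be wrapped as keypoints and described with 128-element SIFT vectors; the 7-pixel keypoint size is an assumed value, not taken from the paper.

```python
import cv2

gray_patch = cv2.imread("patch.tif", cv2.IMREAD_GRAYSCALE)  # hypothetical path
corners = cv2.goodFeaturesToTrack(gray_patch, maxCorners=1500,
                                  qualityLevel=0.01, minDistance=5)

# Wrap each corner as a keypoint and compute SIFT descriptors; compute()
# assigns dominant orientations and returns N x 128 descriptor vectors.
keypoints = [cv2.KeyPoint(float(x), float(y), 7) for [[x, y]] in corners]
sift = cv2.SIFT_create()
keypoints, descriptors = sift.compute(gray_patch, keypoints)
```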

2.4.3. Feature Matching

The BBF algorithm, based on a multidimensional space segmentation tree (K-D tree) [36], was adopted for feature matching in this work. The BBF algorithm can quickly find the nearest neighbor point and the second-closest neighbor point of a feature point to be matched. However, due to image noise or other reasons, the distance of the second nearest neighbor (DSN) may be very close to that of the nearest neighbor (DN). To reduce the impact of noise, the proposed method calculates the distance ratio, denoted as R, of DN to DSN (R = DN/DSN). The nearest neighbor point is accepted as the match of the feature point only if R is lower than a default threshold of 0.8.
It is worth noting that the image coordinates of the matched points within each image patch need to be converted to the coordinates of the entire image scene.
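A sketch of the matching step is given below; OpenCV's FLANN matcher with a KD-tree index stands in for the BBF search (an approximation, not the paper's exact implementation), and desc_input, desc_ref, kp_input, and the patch offsets are hypothetical names continuing the previous sketches.

```python
import cv2
import numpy as np

# KD-tree based approximate nearest-neighbor search, in the spirit of BBF.
FLANN_INDEX_KDTREE = 1
matcher = cv2.FlannBasedMatcher(dict(algorithm=FLANN_INDEX_KDTREE, trees=4),
                                dict(checks=50))
pairs = matcher.knnMatch(desc_input, desc_ref, k=2)  # nearest and second nearest

# Ratio test: accept a match only if R = DN / DSN is lower than 0.8.
good = [m for m, n in pairs if m.distance < 0.8 * n.distance]

# Convert patch-local pixel coordinates back to whole-scene coordinates;
# (col_offset, row_offset) is the patch origin within the full image.
pts_scene = np.float32([kp_input[m.queryIdx].pt for m in good]) \
            + np.float32([col_offset, row_offset])
```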

2.5. Removing the Outliers

The random sample consensus (RANSAC) algorithm is commonly used to remove incorrect matches obtained using the SIFT algorithm [37]. The RANSAC algorithm was proposed by Fischler and Bolles in 1981 [38]. It is an outlier detection method [39] that iteratively derives the parameters of a mathematical model from a set of observed data that contains outliers. However, because of the randomness of RANSAC, the effect of removing incorrect matches depends on the selection of sample points. To achieve robust performances, the RANSAC algorithm was combined with the homography matrix to remove outliers in this work. The homography matrix is a transformation matrix that maps the points in one image to the corresponding points in the other image. It can be represented using Equation (11).
$$
H=\begin{bmatrix} h_{00} & h_{01} & h_{02} \\ h_{10} & h_{11} & h_{12} \\ h_{20} & h_{21} & h_{22} \end{bmatrix}
\tag{11}
$$
The matched tie points are denoted as {(xw, yw), (x, y)}, where (xw, yw) are the coordinates of the feature point in the input image to be registered, and (x, y) are the coordinates of the corresponding feature point in the reference image. The incorrectly matched tie points were excluded using the following steps.
First, a homography matrix H between the input image and the reference image was initially estimated by the RANSAC algorithm using all the matched tie points.
Then, the transformed coordinates of each feature point to be registered were estimated using the initial homography matrix H, according to Equation (12).
$$
\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}
= H \begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix}
= \begin{bmatrix} h_{00} & h_{01} & h_{02} \\ h_{10} & h_{11} & h_{12} \\ h_{20} & h_{21} & h_{22} \end{bmatrix}
\begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix}
\tag{12}
$$
where (xw, yw) are the coordinates of a feature point of the input image and (X, Y) are the transformed coordinates of (xw, yw).
After that, the residual error and root-mean-square error (RMSE) of the ith tie point were obtained using Equations (13) and (14), respectively,
$$
\Delta x_i = x_i - X_i,\qquad \Delta y_i = y_i - Y_i
\tag{13}
$$
$$
RMSE_i=\sqrt{\Delta x_i^2+\Delta y_i^2}
\tag{14}
$$
where Δxi and Δyi are the residual errors of the ith feature point and xi and yi are the coordinates of the ith feature point in the reference image. Tie point pairs that have (Δxi, Δyi) and RMSEi values higher than an initial threshold, which is denoted as TR, are considered outliers and removed.
Finally, H and RMSEi were estimated iteratively based on the remaining tie points, and outliers were removed until either the Δxi, Δyi, and RMSEi values of all remaining tie points were lower than the threshold TR or the number of iterations exceeded a preset value, denoted as Niter.
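OpenCV combines the two ingredients, RANSAC and the homography model, in a single call; the sketch below uses hypothetical point arrays and a reprojection threshold standing in for TR.

```python
import cv2
import numpy as np

# pts_input, pts_ref: N x 2 float32 arrays of matched tie points (hypothetical).
H, mask = cv2.findHomography(pts_input, pts_ref, cv2.RANSAC,
                             ransacReprojThreshold=3.0)  # plays the role of T_R
inliers_in = pts_input[mask.ravel() == 1]
inliers_ref = pts_ref[mask.ravel() == 1]

# Per-point residuals (Equations (12)-(14)): project with H and compare.
proj = cv2.perspectiveTransform(inliers_in.reshape(-1, 1, 2), H).reshape(-1, 2)
rmse_i = np.linalg.norm(inliers_ref - proj, axis=1)
```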

2.6. Even Spatial Distribution of Matched Tie Points

The uneven spatial distribution of the matched tie points may lead to local geometric distortions of the rectified images. In addition, redundant tie points covering some local regions may increase the number of computations and therefore reduce the efficiency of image processing. To solve this problem, a greedy algorithm [40] was used in this work to remove matched tie points with high density.
Given that a number of Ntp tie point pairs were obtained after the elimination of the outliers, the following steps were performed to achieve the even distribution of the tie points.
  • Step 1: The perspective transform model between the input image and the reference image was estimated using all the matched tie points. The RMSE value of each tie point pair was calculated, and all the tie point pairs were then sorted in ascending order according to the RMSE values.
  • Step 2: The sorted tie point pairs were numbered according to the RMSE values. The first item of the sorted tie point pairs, which is numbered as 1 (i.e., i = 1), was selected as the first node.
  • Step 3: The Euclidean distances between tie point i and the other tie points j (j = i+1, …, Ntp) were calculated. If the distance between tie point j and tie point i was less than a threshold (TL), tie point j was considered too close to tie point i and was removed. After all the remaining tie points were checked, let i = i + 1.
  • Step 4: Step 3 was repeated until i = Ntp.
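One way to realize this greedy removal is sketched below with hypothetical names; because the points are visited in ascending order of model error, whenever two points lie closer than TL, the one with the larger residual is the one discarded.

```python
import numpy as np

def thin_tie_points(pts: np.ndarray, rmse: np.ndarray, t_l: float = 30.0) -> np.ndarray:
    """Greedy thinning: visit tie points in ascending order of model error and
    keep a point only if it is at least t_l pixels from every point kept so far.
    pts: N x 2 coordinates; rmse: N residuals; returns indices of kept points."""
    kept = []
    for idx in np.argsort(rmse):
        if all(np.linalg.norm(pts[idx] - pts[k]) >= t_l for k in kept):
            kept.append(idx)
    return np.asarray(kept)
```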
To test the employed approach for the even distribution of tie points, an experiment was performed using an 800 × 800 RS image (experimental results are shown in Figure 7). As shown in Figure 7a, 181 matched tie points (dots in red) with a model error of 0.93 pixels were obtained after removing outliers using the combination of RANSAC and the homography matrix. After removing high-density tie point pairs using a threshold (TL) of 30 pixels, 108 tie points with a model error of 0.75 pixels remained, as shown in Figure 7b. The remaining tie points were more evenly distributed and offered higher accuracy than the initial tie points.

2.7. Image Transformation and Resampling

In this paper, the polynomial rectification model was used to warp the input image. The coefficients of the polynomial model were solved using the matched tie points. The number of matched tie points should meet the requirement in Equation (15),
$$
N_{cp} \ge \frac{(N_d+1)(N_d+2)}{2}
\tag{15}
$$
where Ncp is the number of tie point pairs and Nd is the degree of the polynomial model. The value of Nd is determined according to Ncp and the terrain relief of the images. When Nd is 2 or 3, the polynomial model is applicable to areas with topographic relief, such as mountainous areas. After the coefficients were obtained, the input image was transformed and resampled to align it with the reference image.
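As an illustration, the quadratic (Nd = 2) model can be fitted by least squares; the six monomial terms per coordinate match the minimum of (Nd + 1)(Nd + 2)/2 = 6 tie points required by Equation (15). This is a sketch, not the authors' solver.

```python
import numpy as np

def fit_quadratic_polynomial(src: np.ndarray, dst: np.ndarray):
    """src, dst: N x 2 arrays of matched tie-point coordinates (N >= 6).
    Returns one 6-vector of coefficients per output coordinate."""
    x, y = src[:, 0], src[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])
    coef_x, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return coef_x, coef_y
```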

3. Experiments and Results

In this section, three groups of post-earthquake RS images were used to compare the proposed method with two other methods: the classical SIFT (referred to as SIFT henceforth) and the SIFT method using the same patch matching approach as the proposed method (referred to as Patch-SIFT). The details of the three datasets (such as the satellite, resolution, size, acquisition date, and disaster information) are introduced in Section 3.1. The evaluation criteria are presented in Section 3.2. The comparison results are displayed in Section 3.3.
The experiments were performed on a computer with an Intel Core i7-4770 CPU 3.40-GHz processor and 14.0 GB of physical memory, using Visual Studio 2013 (C#) as the programming environment.

3.1. Datasets

Three sets of high-resolution RS images acquired after earthquakes were used in the experiment. All the images contain different secondary disasters caused by severe earthquakes. The first image is a QuickBird multispectral image (2.4 m) of the Wenchuan area, which suffered from the magnitude-8.0 Wenchuan earthquake on 12 May 2008. The image, with a size of 2427 × 2569 pixels, was acquired on 26 December 2008. The elevation of the image ranges from 1346 to 2960 m, as shown in Figure 8a. A large number of landslides can be seen in the image, as well as in the reference image (Figure 8b). The second post-earthquake image is a GaoFen-1 (GF-1) multispectral image (8 m) covering Baoxing Town, which experienced the magnitude-7.0 Yaan earthquake on 20 April 2013. The GF-1 image, recorded on 23 July 2013, has a size of 1218 × 1363 pixels. Both landslides and river expansion caused by the earthquake can be observed in the image, as shown in Figure 9a. Its elevation ranges from 711 to 2406 m. The third image is a GaoFen-2 (GF-2) image of the Jiuzhaigou area after the magnitude-7.0 Jiuzhaigou earthquake on 8 August 2017. The image was acquired on 9 August 2017 and has a size of 2096 × 1789 pixels. Its elevation, from 1892 to 3760 m, has a larger span. Landslides caused by the earthquake can be seen in the image, as shown in Figure 10a.
The details of the three post-disaster images and the corresponding reference images obtained from the pre-earthquake image database are presented in Table 2. The reference images are also shown on the right of Figure 8, Figure 9 and Figure 10.
According to the characteristics of the three datasets, the parameter settings of the proposed method are presented in Table 3.

3.2. Evaluation Criteria

The performance of the proposed method was evaluated using four criteria: the number of matched tie points (Ncp), the running time of image registration (T), the accuracy of the transformation model obtained using the final tie points (RMSEM), and the geometric accuracy of the registered image measured using verification points (RMSET).
The RMSEM and RMSET are computed using Equation (16),
$$
RMSE=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left[(x_i-X_i)^2+(y_i-Y_i)^2\right]}
\tag{16}
$$
where (xi, yi) are the projected coordinates of the ith point on the pre-earthquake image, (Xi, Yi) are the projected coordinates of the ith point after transformation, and N is the number of points. The RMSEM is calculated using all matched tie points obtained by the three methods. The RMSET can be calculated using some manually selected verification points. In this case, (Xi, Yi) are the projected coordinates of the validation points in the reference image, whereas (xi, yi) are the coordinates of the corresponding points in the registered image.
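Equation (16) amounts to one line of array arithmetic; the array names below are hypothetical.

```python
import numpy as np

# pts_true, pts_est: N x 2 arrays of projected coordinates before and after
# transformation (all tie points for RMSE_M; verification points for RMSE_T).
rmse = np.sqrt(np.mean(np.sum((pts_true - pts_est) ** 2, axis=1)))
```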
To verify the geometric accuracies of the registered images, 18, 10, and 10 verification points were selected for the three datasets, respectively, using ENVI software. As shown in the reference images in Figure 8, Figure 9 and Figure 10, these verification points (shown in red), which are evenly distributed, were used to calculate the RMSET of the three methods.

3.3. Experimental Results

Table 4 shows the number of matched tie points Ncp, running time T, model error RMSEM, and verification error RMSET for the three datasets using the three algorithms. Both the quadratic (Nd = 2) and cubic (Nd = 3) polynomial models were considered in the experiments.

3.3.1. Experiment 1 Using the Wenchuan Dataset

In the experiment using the first image pair, the proposed method obtained 52 matched tie points, slightly more than the Patch-SIFT method (Ncp = 45) and significantly more than the SIFT method (Ncp = 21). The proposed method yielded very similar performances with the quadratic and cubic polynomial models. Using the cubic polynomial model, the proposed method yielded an RMSET value of 1.40 pixels, which is significantly lower than those of the Patch-SIFT (RMSET = 2.01 pixels) and SIFT (RMSET = 9.18 pixels) methods. Although the SIFT method provided the lowest RMSEM value (0.81 pixels), its RMSET value of 9.18 pixels is extremely high. Using the cubic polynomial model, the running time of the proposed method was 2.20 s, which is only 6.47% of that of the SIFT method (33.98 s) and 44.27% of that of the Patch-SIFT method (4.97 s). Similarly, the proposed method using the quadratic polynomial model gave a lower RMSET value and a shorter running time than the other two methods.
Figure 11 shows the spatial distribution of matched tie points (dots in red) of the first image pair. It can be seen that the distribution of matched tie points obtained by the proposed method, as shown in Figure 11a, is the most even. As can be seen from Figure 11c, the 21 matched tie points obtained by the SIFT method are unevenly distributed, which contributes to the extremely high validation error it yielded. In contrast, the proposed method can still extract correct tie points in the areas with large topographic relief, such as the top-left corner of images shown in Figure 11.
The registration results of the Wenchuan dataset using the cubic polynomial model are shown in Figure 12. Only the registered image obtained using the proposed method (Figure 12b) effectively rectifies the geometric position deviations of the objects in the original image (Figure 12a). In contrast, the registration results of the Patch-SIFT (Figure 12c) and SIFT (Figure 12d) methods show more significant displacements than the original image. This is consistent with the higher RMSET values of the two methods. The main reason is that the number of matched tie points extracted by the two methods was small and their spatial distribution was uneven.

3.3.2. Experiment 2 Using the Yaan Dataset

For the second image pair, the proposed method obtained 29 matched tie points, significantly more than the Patch-SIFT (Ncp = 9) and SIFT (Ncp = 4) methods. Since the number of tie points extracted by the Patch-SIFT method was less than the minimum required for the cubic polynomial model, only the quadratic polynomial model was used for the Patch-SIFT method. For a similar reason, only the linear polynomial model was used for the SIFT method. Using the cubic polynomial model, the proposed method yielded a running time of 1.56 s, an RMSEM value of 1.81 pixels, and an RMSET value of 1.75 pixels. The running time is only 34.29% of that of the SIFT method (4.55 s) and 48.30% of that of the Patch-SIFT method (3.23 s). The RMSEM and RMSET values are also significantly lower than those of the other two methods. As for the first image pair, the running time, RMSEM, and RMSET of the proposed method obtained using the quadratic polynomial model were very close to those obtained using the cubic polynomial model. Similarly, the distribution of the matched tie points extracted by the proposed method (Figure 13a) is more even than those of the Patch-SIFT (Figure 13b) and SIFT (Figure 13c) methods.
The registration results of the Yaan dataset are shown in Figure 14. Although there was a slight displacement of the road along the river in the registered image produced by the proposed method, the proposed method yielded better performance than the other two methods. In contrast, the registration results of Patch-SIFT (Figure 14c) and SIFT (Figure 14d) show even more noticeable displacements of the roads and buildings than the original image (Figure 14a).

3.3.3. Experiment 3 Using the Jiuzhaigou Dataset

The third image pair shows significant temporal and hue differences. In this experiment, the proposed method again provided the largest number of tie points (Ncp = 118), whereas the Patch-SIFT and SIFT methods yielded 94 and 46 tie points, respectively. The proposed method has a significant advantage in running time over the other two methods. The running times of the proposed method were 2.05 and 1.93 s for the cubic and quadratic polynomial models, respectively. The former is only 7.36% of that of the SIFT method (27.85 s) and 32.23% of that of the Patch-SIFT method (6.36 s) using the same model. The proposed method provided RMSEM values of 1.37 and 1.39 pixels for the two models, which are higher than those of the Patch-SIFT (RMSEM = 1.27 and 1.32 pixels) and SIFT (RMSEM = 0.72 and 0.76 pixels) methods. Although the SIFT method yielded the lowest model error, its validation error (RMSET = 2.48 and 2.51 pixels for the cubic and quadratic polynomial models, respectively) was extremely high. The main reason is that the number of matched tie points was small and their spatial distribution was uneven. The spatial distribution of the matched tie points is shown in Figure 15. The matched tie points (dots in red) extracted using the proposed method (Figure 15a) are the most evenly distributed. In contrast, the matched tie points obtained by the SIFT method are mainly located in the image patches with significant features (such as roads and houses). In the image patches with complex terrain, the number of matched tie points extracted by the SIFT method was very limited compared with the two methods using the image partitioning strategy.
The registered images of the Jiuzhaigou dataset using the cubic polynomial model are shown in Figure 16. As shown in Figure 16b, the registered image of the proposed method still showed slight displacements. In contrast, the other two methods produced noticeable displacements of rivers and roads (Figure 16c,d). This is also consistent with the RMSET values provided by the three methods. Consequently, compared with the other methods, the proposed method can achieve a larger number of more evenly distributed tie points, which helps improve the accuracy of image registration.

4. Discussions

The experimental results demonstrate the advantages of the proposed method over the SIFT and Patch-SIFT methods in terms of accuracy, running time, and the number and distribution of matched tie points for the three sets of pre- and post-earthquake images. All three image pairs covered mountainous areas affected by disasters (such as landslides and river expansion, triggered by severe earthquakes). Topographic relief in the images increased the difficulty of image registration. However, in this case, the proposed method still achieved better registration results.
Both the quadratic (Nd = 2) and cubic (Nd = 3) polynomial models were used in the experiments to generate registered images. The proposed method yielded very similar performances when using the two models. Using the quadratic polynomial model (Nd = 2), the proposed method also provided shorter running times and lower RMSET values than the Patch-SIFT and SIFT methods using the same model. The proposed method using the cubic polynomial model yielded slightly longer running times and lower RMSET values than the proposed method using the quadratic polynomial model. Consequently, we think that both the quadratic and cubic polynomial models can be selected for the images used in the experiments.
The proposed method can also be used for satellite images (such as GF-1 and GF-2 images) covering plains, according to the results of other experiments. For example, a GF-1 multispectral image covering Chengdu City, Sichuan province, China, was also tested. The image was recorded on 9 March 2018 and had a relatively large size (5354 × 5354 pixels) and a spatial resolution of 8 m. A total of 175 matched tie points were extracted by the proposed method, with an RMSET of 0.38 pixels. The Patch-SIFT method obtained 116 matched tie points with an RMSET of 0.55 pixels, whereas the SIFT method provided 80 tie points with an RMSET of 0.66 pixels. The proposed method obtained the registered image within 17.17 s, which is only 26.5% of the time taken by the SIFT method (64.77 s) and 36.7% of that of the Patch-SIFT method (46.76 s). These results show that the registration results of the proposed method are better than those of the other two methods. This demonstrates that the proposed method also works for RS images covering urban areas and shows advantages in running time on larger RS images.
Google Earth (GE) images were used in this work to construct the pre-earthquake image database and then used as the references for the coregistration of post-earthquake RS images. If higher-quality high-resolution satellite remote sensing images are available for the study area, they can be used as the references for image registration instead. In this case, the proposed method is also applicable and is expected to yield better performance than the other two methods.
In general, the proposed method can provide geometrically accurate images for disaster information extraction and is helpful for the assessment of disasters such as collapsed buildings, damaged roads, and landslides. The proposed method also has further potential applications. For instance, the registered images obtained by the proposed method can be integrated with big data, such as Floating Car Data [41], which is used to observe historical traffic patterns, traffic facilities, and services, to analyze post-earthquake phenomena.

5. Conclusions

To improve the efficiency and accuracy of multisource remote sensing image registration in the process of earthquake disaster assessment, a new automatic registration method for multisource RS images was proposed in this paper. Based on the construction of a pre-earthquake database, the proposed method employed a combination of the Shi_Tomasi and SIFT methods to extract tie points using an image partitioning strategy based on geographic information constraints. Additionally, a greedy algorithm was introduced to eliminate redundant tie points and thus achieve an even spatial distribution of matched tie points. The accuracy and robustness of the proposed method were compared with those of the traditional SIFT method and the Patch-SIFT method using three pairs of high-resolution images covering regions affected by severe earthquakes. The experimental results show that the proposed method outperformed the other two methods in terms of the accuracy of the registered image, running time, and number and distribution of matched tie points. The larger number and more even distribution of the tie points extracted by the proposed method contribute to the improvement in accuracy. The experimental results indicate that the proposed method is more suitable than the Patch-SIFT and SIFT methods for the registration of post-earthquake high-resolution RS images used for earthquake damage assessment. According to the results of a test using a GF-1 image covering urban areas (with a size of 5354 × 5354 pixels), the proposed method can also be used for satellite images covering plains (including urban areas) and shows significant advantages in running time over the two comparison methods for large RS images.

Author Contributions

All authors made significant contributions to this work. X.Z. and H.L. designed the experiments and analyzed the datasets; X.Z. wrote the original draft and edited the English language; H.L., L.J., and P.W. revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Finance Science and Technology project of Hainan Province, China, grant number 418MS113, the National Natural Science Foundation of China, grant number 41801259, the National Key Research and Development Program of China, grant number 2017YFC1500902, the Aerospace Information Research Institute, Chinese Academy of Sciences, grant number Y951150Z2F, the Science and Technology Major Project of Xinjiang Uygur Autonomous Region, grant number 2018A03004, and the National Natural Science Foundation of China, grant number 41972308.

Acknowledgments

The authors would like to thank the Linhai Jing Research Group for providing the experimental datasets and learning environment.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dong, L.; Shan, J. A comprehensive review of earthquake-induced building damage detection with remote sensing techniques. ISPRS J. Photogramm. Remote Sens. 2013, 84, 85–99.
  2. Li, B.; Liu, J. Use of shadows for detection of earthquake-induced collapsed buildings in high-resolution satellite imagery. ISPRS J. Photogramm. Remote Sens. 2013, 79, 53–67.
  3. Corbane, C.; Carrion, D.; Lemoine, G.; Broglia, M. Comparison of damage assessment maps derived from very high spatial resolution satellite and aerial imagery produced for the Haiti 2010 earthquake. Earthq. Spectra 2011, 27, 199–218.
  4. Pan, G.; Tang, D. Damage information derived from multi-sensor data of the Wenchuan Earthquake of May 2008. Int. J. Remote Sens. 2010, 31, 3509–3519.
  5. Russo, F.; Rindone, C. The planning process and logical framework approach in road evacuation: A coherent vision. WIT Trans. Built Environ. 2011, 117, 415–425.
  6. Huang, F.; Mao, Z.; Shi, W. ICA-ASIFT-based multitemporal matching of high-resolution remote sensing urban images. Cybern. Inf. Technol. 2016, 16, 34–49.
  7. Benediktsson, J.A.; Chanussot, J.; Moon, W.M. Very high-resolution remote sensing: Challenges and opportunities. Proc. IEEE 2012, 100, 1907–1910.
  8. Jiang, J.; Shi, X. A robust point-matching algorithm based on integrated spatial structure constraint for remote sensing image registration. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1716–1720.
  9. Zhang, S.; Jing, J.; Cao, S. Relative shape context based on multiscale edge feature for disaster remote sensing image registration. In Proceedings of the Third International Conference on Intelligent Control and Information Processing, Dalian, China, 15–17 July 2012; pp. 605–609.
  10. Zhu, H.; Li, Y.; Yu, J.; Leung, H.; Li, Y. Ensemble registration of multisensor images by a variational Bayesian approach. IEEE Sens. J. 2014, 14, 2698–2705.
  11. Chen, H.; Varshney, P.; Arora, M. Performance of mutual information similarity measure for registration of multitemporal remote sensing images. IEEE Trans. Geosci. Remote Sens. 2003, 41, 2445–2454.
  12. Suri, S.; Reinartz, P. Mutual information based registration of TerraSAR-X and Ikonos imagery in urban areas. IEEE Trans. Geosci. Remote Sens. 2010, 48, 939–949.
  13. Eastman, R.D.; Moigne, J.L.; Netanyahu, N.S. Research issues in image registration for remote sensing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8.
  14. Holia, M.S.; Thakar, V.K. Mutual information based image registration for MRI and CT scan brain images. In Proceedings of the International Conference on Audio, Language and Image Processing, Shanghai, China, 16–18 July 2012; pp. 78–83.
  15. Moravec, H.P. Towards automatic visual obstacle avoidance. In Proceedings of the 5th International Joint Conference on Artificial Intelligence, Cambridge, MA, USA, 22–25 August 1977; Volume 2, pp. 584–590.
  16. Harris, C.G.; Stephens, M.J. A combined corner and edge detector. In Proceedings of the 4th Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988; pp. 147–151.
  17. Shi, J.; Tomasi, C. Good features to track. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 21–23 June 1994; pp. 593–600.
  18. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  19. Bay, H.; Ess, A.; Tuytelaars, T.; Gool, L.V. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359.
  20. Rosten, E.; Porter, R.; Drummond, T. Faster and better: A machine learning approach to corner detection. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 105–119.
  21. Belongie, S.; Malik, J.; Puzicha, J. Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 509–522.
  22. Freeman, W.T.; Adelson, E.H. The design and use of steerable filters. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 891–906.
  23. Ke, Y.; Sukthankar, R. PCA-SIFT: A more distinctive representation for local image descriptors. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2004, Washington, DC, USA, 27 June–2 July 2004; Volume 2, pp. 506–513.
  24. Koenderink, J.J.; Van Doorn, A.J. Representation of local geometry in the visual system. Biol. Cybern. 1987, 55, 367–375.
  25. Mikolajczyk, K.; Schmid, C. A performance evaluation of local descriptors. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1615–1630.
  26. Karami, E.; Prasad, S.; Shehata, M. Image matching using SIFT, SURF, BRIEF and ORB: Performance comparison for distorted images. In Proceedings of the 2015 Newfoundland Electrical and Computer Engineering Conference, St. John's, NL, Canada, 5 November 2015.
  27. Duan, C.; Meng, X.; Tu, C.; Tang, C. How to make local image features more efficient and distinctive. IET Comput. Vis. 2008, 2, 178–189.
  28. Yu, G.; Morel, J.M. ASIFT: An algorithm for fully affine invariant comparison. Image Process. On Line 2011, 1, 11–38.
  29. Sedaghat, A.; Mokhtarzade, M.; Ebadi, H. Uniform robust scale-invariant feature matching for optical remote sensing images. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4516–4527.
  30. Ma, W.; Wen, Z.; Wu, Y.; Jiao, L.; Gong, M.; Zheng, Y.; Liu, L. Remote sensing image registration with modified SIFT and enhanced feature matching. IEEE Geosci. Remote Sens. Lett. 2017, 14, 3–7.
  31. Zuo, Y.; Liu, J.; Yang, M.; Wang, X.; Sun, M. Algorithm for unmanned aerial vehicle aerial different-source image matching. Opt. Eng. 2016, 55, 123111.
  32. Celik, T. Two-dimensional histogram equalization and contrast enhancement. Pattern Recognit. 2012, 45, 3810–3824.
  33. Huo, C.; Pan, C.; Huo, L.; Zhou, Z. Multilevel SIFT matching for large-size VHR image registration. IEEE Geosci. Remote Sens. Lett. 2012, 9, 171–175.
  34. Beis, J.S.; Lowe, D.G. Shape indexing using approximate nearest-neighbour search in high-dimensional spaces. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, PR, USA, 17–19 June 1997; pp. 1000–1006.
  35. Gui, B.; Shuai, R.; Chen, P. Optic disc location algorithm based on improved corner detection. In Proceedings of the 8th International Congress of Information and Communication Technology, Xiamen, China, 27–28 January 2018; Volume 131, pp. 311–319.
  36. Silpa-Anan, C.; Hartley, R. Optimised KD-trees for fast image descriptor matching. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
  37. Wei, M.; Han, W.; Gerald, S. Robust homography estimation based on nonlinear least squares optimization. Math. Probl. Eng. 2013, 6, 372–377.
  38. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
  39. Wu, Y.; Ma, W.; Gong, M.; Su, L.; Jiao, L. A novel point-matching algorithm based on fast sample consensus for image registration. IEEE Geosci. Remote Sens. Lett. 2015, 12, 43–47.
  40. Paul, S.; Pati, U.C. Remote sensing optical image registration using modified uniform robust SIFT. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1300–1304.
  41. Croce, A.I.; Musolino, G.; Rindone, C.; Vitetta, A. Transport system models and big data: Zoning and graph building with traditional surveys, FCD and GIS. ISPRS Int. J. Geo-Inf. 2019, 8, 187.
Figure 1. Flowchart of the proposed method.
Figure 2. Tie points obtained using the original multispectral image (a) and the enhanced image obtained using the 2% linear stretch method (b).
Figure 3. Schematic of the image partitioning strategy based on a geographical information constraint: (a) an input image and the image partition of the input image; (b) an image patch F2N (with corners A1, A2, A3, and A4) of the input image; (c) the corresponding reference image patch (with corners B1, B2, B3, and B4) in a pre-earthquake image covering the input image patch.
Figure 4. Schematic of the Shi_Tomasi algorithm.
Figure 5. The orientation histogram of a feature point.
Figure 6. The feature descriptor of a feature point.
Figure 7. Comparison of the spatial distribution of the initial matched tie points (a) and the remaining tie points after removing some tie points with high density (b).
Figure 8. The Wenchuan earthquake dataset. The left (a) is the post-earthquake image (the input image), whereas the right (b) is the reference image in the pre-earthquake database. The points in red are the verification points used for calculating the accuracy of the rectified image.
Figure 9. The Yaan earthquake dataset. The left (a) is the post-earthquake image (the input image), and the right (b) is the reference image in the pre-earthquake database. The points in red are the verification points used for calculating the accuracy of the rectified image.
Figure 10. The Jiuzhaigou earthquake dataset. The left (a) is the post-earthquake image (the input image), and the right (b) is the reference image in the pre-earthquake database. The points in red are the verification points used for calculating the accuracy of the rectified image.
Figure 11. Matched tie points obtained from the Wenchuan dataset. (a) The proposed method; (b) Patch-SIFT; and (c) SIFT.
Figure 12. Registered images of the Wenchuan dataset. (a) The original image; (b) registered image using the proposed method; (c) registered image using Patch-SIFT; and (d) registered image using SIFT.
Figure 13. Matched tie points of the GF-1 image. (a) The proposed method, (b) Patch-SIFT, and (c) SIFT.
Figure 14. Registration results of the Yaan dataset. (a) The original image; (b) registered image of the proposed method; (c) registered image of Patch-SIFT; and (d) registered image of SIFT.
Figure 15. Matched tie points of the GF-2 image. (a) The proposed method; (b) Patch-SIFT; and (c) SIFT.
Figure 16. Registered images for the Jiuzhaigou dataset. (a) The original image; (b) the registered image of the proposed method; (c) the registered image of the Patch-SIFT method; and (d) the registered image of the SIFT method.
Table 1. The information in the coordinate files of the pre-earthquake remote sensing (RS) image database.

Field Name        Description
ID_Img            The ID of pre-earthquake images
Ref_Lon_Top       Top longitude of a pre-earthquake RS image
Ref_Lat_Left      Left latitude of a pre-earthquake RS image
Ref_Lon_Bottom    Bottom longitude of a pre-earthquake RS image
Ref_Lat_Right     Right latitude of a pre-earthquake RS image
Ref_X             Center x (in projected coordinates) of a pre-earthquake RS image
Ref_Y             Center y (in projected coordinates) of a pre-earthquake RS image
Ref_Pixelsize     The spatial resolution of a pre-earthquake RS image
Table 2. Details of the three image datasets used in the experiment.

No.  Satellite  Resolution (m)  Size (pixel)  Date        Earthquake                  Disaster
1    QuickBird  2.4             2427 × 2569   2008/12/26  2008 Wenchuan earthquake    Landslides
     GE         2               2932 × 3108   2008/05/23
2    GF-1       8               1218 × 1363   2013/07/23  2013 Yaan earthquake        Landslides, river expansion
     GE         2               6568 × 8644   2010/02/08
3    GF-2       4               2096 × 1789   2017/08/09  2017 Jiuzhaigou earthquake  Landslides
     GE         2               4632 × 4002   2014/02/05
Table 3. Parameter settings for the three sets of experimental data.

Parameter   Dataset 1  Dataset 2  Dataset 3  Specification of Parameters
n           10 × 10    10 × 10    7 × 7      The number of image patches
Nmax        1500       1500       1500       The maximum number of corner points for each image patch
Niter       10         10         10         The number of iterations for eliminating mismatched tie points
TR (pixel)  5          3          5          The residual threshold for eliminating mismatched tie points
TL (pixel)  100        100        30         The distance threshold for removing high-density tie points
Table 4. Experimental results for the three sets of images.

Image Pair  Method               Ncp (pair)  Nd  T (s)   RMSEM (pixel)  RMSET (pixel)
1           The proposed method  52          3   2.20    1.65           1.40
                                             2   1.95    1.79           1.95
            Patch-SIFT           45          3   4.97    1.94           2.01
                                             2   4.91    2.07           2.27
            SIFT                 21          3   33.98   0.81           9.18
                                             2   33.89   1.24           9.27
2           The proposed method  29          3   1.56    1.81           1.75
                                             2   1.48    1.93           1.95
            Patch-SIFT           9           2   3.23    2.39           2.76
            SIFT                 4           1   4.55    2.69           5.10
3           The proposed method  118         3   2.05    1.37           1.24
                                             2   1.93    1.39           1.30
            Patch-SIFT           94          3   6.36    1.27           1.88
                                             2   6.31    1.32           1.92
            SIFT                 46          3   27.85   0.72           2.48
                                             2   27.88   0.76           2.51
