An Automatic Procedure for Early Disaster Change Mapping Based on Optical Remote Sensing

Disaster change mapping, which provides accurate and timely change information (e.g., damaged buildings, the accessibility of roads, and shelter sites) for decision makers to guide and support a coordinated emergency rescue plan, is critical for early disaster response. In this paper, we focus on optical remote sensing data and propose an automatic procedure that reduces the impact of the limitations of optical data and provides emergency information in the early phases of a disaster. The procedure utilizes a series of new methods, such as an Optimizable Variational Model (OptVM) for image fusion and a scale-invariant feature transform (SIFT) constrained optical flow method (SIFT-OFM) for image registration, to produce product maps, including cloudless backdrop maps and change-detection maps for catastrophic event regions, helping people grasp the whole scope of the disaster and assess the distribution and magnitude of damage. These product maps have rather high accuracy, as they are based on preprocessing results with high spectral consistency and geometric precision, which compare favorably with traditional fusion and registration methods in qualitative visual and quantitative analysis. The procedure is fully automated, without any manual intervention, to save response time, and it can be applied in many situations.


Introduction
Natural disasters, such as earthquakes, landslides, avalanches and debris flows, occur unexpectedly and suddenly, claiming huge losses of life and property and causing significant damage to the surrounding environment. Numerous cases show that rapid emergency response is effective in reducing the casualties and losses caused by disasters. However, effectively making proper contingency plans remains a difficult problem faced by governments and experts [1].
Remote sensing technologies have the unique ability to help emergency managers streamline response and recovery by providing a backdrop of situational awareness, which can be invaluable for assessing the impacts of the damage and guiding the disaster rescue [2]. Remote sensing plays an important role in disaster rescue from the very early phase of a disaster right through to long-term recovery [3]. Especially during the early stages of disaster response, a coordinated and reasonable plan for search and rescue (S&R) activities, logistics planning, and monitoring can save lives and property. However, the information needed to support rapid response (e.g., the geographical scope of the disaster areas, the magnitude and spatial distribution of damage, and transportation infrastructure conditions) is limited. Remote sensing is a critical option for obtaining comprehensive information about the disaster and for assessing the scope of damage, particularly in remote areas, where other means of assessment or mapping either fail or are of insufficient quality [4].
Various types of remote sensing sensors, platforms and techniques can be considered for emergency mapping [5]. The choice is based mainly on the type of disaster, the approximate extent of the affected areas, and the requirements for monitoring the event. Generally, the main source of data for response activities is satellite remote sensing, as it gives the ability to monitor a wide footprint on the ground with limited or no access. In addition, modern satellite platforms can be triggered to change the acquisition angle to cover the affected areas in a short time, increasing the observation frequency of the regions of interest. Satellite imagery of disaster-hit areas can be obtained every day or even every few hours. Regarding sensor type, synthetic aperture radar (SAR) data and optical high-resolution (HR) or very high-resolution (VHR) images are generally obtained when a disaster occurs. SAR or InSAR systems are of great value for disaster response, a utility further enhanced by their all-weather capability, especially when persistent cloud cover over the affected areas makes optical data unusable [6]. Success has been achieved in detecting surface displacements, offsets, and height changes in disaster areas using the intensity, coherence, or phase information of post-event SAR or InSAR data. However, for structural damage or some temporary changes (e.g., damaged buildings and shelter sites), SAR and InSAR data are insufficiently sensitive, and the resulting maps have been less conclusive and marked by uncertainties. Moreover, most SAR-based change detection approaches suffer from a lack of archive data with the same acquisition parameters as the post-crisis imagery [7]. In addition, SAR imagery is less intuitive and difficult for nonexperts to interpret; they must rely on sophisticated software to analyze the interferometric or amplitude coherence, which requires a longer processing time. Almost from the very onset of a disaster,
optical satellite imagery is available and provides the first glimpse of the devastation. Optical high-resolution imagery is the preferred choice for delineating features of interest (e.g., buildings, tents) and their current status (e.g., destroyed, burned, or moved) [5]. Generally, optical data can provide useful information to discriminate between damaged and non-damaged areas. Even though no further classification with respect to damage levels can be retrieved, the information derived from optical imagery can be effective [8]. The advantage of optical data is that its interpretation is intuitive for nonexperts; it can provide an overview of affected regions and sufficiently detailed information for decision makers or aid workers to make S&R plans. However, the presence of clouds and shadows, variations in observation angles, and geometric distortions limit the application of optical imagery for rapid emergency mapping [9].
This paper focuses on optical remote sensing image processing for disaster mapping. One widely used approach for producing damage maps is visual interpretation [10,11], which is tedious and labor-intensive. Several automatic methodologies have been presented for disaster mapping; however, geometric distortions and improper co-registration between pre- and post-disaster imagery can result in a high false alarm rate in these automatic change detection approaches [7]. Therefore, a balance must be found between product accuracy and labor cost, and a compromise must be struck between response time, analysis depth, and mapping accuracy. In this respect, we propose an automatic Optimizable Variational Model (OptVM) and a scale-invariant feature transform (SIFT) constrained optical flow method (SIFT-OFM) to improve the accuracy of change mapping and produce a preliminary damage map, which will usually have a rather coarse character. In the early period of a disaster, these initial maps can quickly provide crucial information to relief teams and support the planning and coordination of rescue operations for emergency response. At the same time, these maps give professional analysts or the public a guide for further checking whether a detected change is correct, improving the accuracy of the damage map through visual validation. With the availability of further earth observation data and more in-depth image analysis, the maps can be updated to incorporate the new information and provide a refined damage assessment [7].
Therefore, we propose an automatic optical remote sensing procedure to quickly produce a cloud-free product as a situational awareness map for identifying the overall damage and potential cascading effects, a change detection map for estimating the impact of the damage, and a temporal change map of key areas (such as heavily damaged areas and temporary resettlement sites for victims) in order to monitor the recovery process and guide disaster recovery. This process relies on the newest available optical data to rapidly produce up-to-date maps, cover large parts of the affected area, and enable disaster managers to obtain an overview of the situation, assess the damage, and supply local logistics teams with reliable information on very short notice. The procedure and methods are introduced in Section 2. The experimental results and analyses, using the Nepal magnitude 8.1 earthquake of 25 April 2015 as an example, are given in Section 3. A discussion is presented in Section 4. The last section concludes the paper with the significance of this research and proposed future work.

The Procedure and Methods
The timeliness of receiving data and the length of time needed for data processing are key factors in rapid emergency mapping for disaster events. When a large disaster occurs (e.g., the Haiti earthquake, the forest fires in Australia, and the Nepal earthquake), many international organizations or commercial companies decrease the satellite revisit time to obtain the newest data for the affected areas by adjusting the acquisition angle [12,13]. This timely triggering provides enough data for disaster mapping. However, these agencies offer only the original satellite imagery, rather than processing or analysis results. The orderly organization and management needed for effective processing are lacking, and these multi-source and multi-temporal data are not in themselves sufficient for developing information applicable to humanitarian and natural crisis situations.
In this paper, a procedure is proposed to deal automatically with multi-source and multi-temporal remote sensing data, in order to produce products that can serve in the early stages of disasters. The procedure is shown in Figure 1, and includes image fusion, image registration, and the production of maps. The procedure ultimately offers a cloudless remote sensing map, a change detection map, and a local temporal variation map for some typical areas over a short period of time. The proposed procedure is based mainly on the general rapid mapping framework proposed by Ajmar and Boccardo [14]; it is a specific implementation of that general framework, except for map dissemination. Because our procedure uses several new techniques and methods, the accuracy of the products in each step has been greatly improved. Additionally, the whole procedure is fully automated, without any manual intervention. The methods applied in each step of the procedure are described below.


Image Fusion
Recently, high spatial resolution images, such as those from QuickBird, WorldView, GeoEye and GF-1, have been favored in many remote sensing applications, especially for disaster assessment [15]. Usually, high spatial resolution satellites provide image data as a low-resolution multispectral (LRM) image and a high-resolution panchromatic (HRP) image [16]. In order to obtain the high-resolution multispectral (HRM) images needed for detailed damage assessment, an image fusion step must be implemented. This is the first key step in our procedure.
Much research has been done on image fusion. The methods can be categorized into five types: variable substitution (VS) methods, such as the IHS and PCA algorithms [17,18]; modulation fusion (MF) methods, such as the Brovey algorithm [19]; multi-scale analysis (MSA) methods, such as the wavelet fusion algorithm [20]; restoration-based (RB) methods, such as the adjustable model-based fusion algorithm [21]; and compressed sensing (CS) methods [16,22]. The first three types can be viewed as traditional methods, which tend to cause color distortion. Additionally, spatial information weakens due to the wavelength extension of the newer satellites' HRP images. Inspired by the rapid development of super-resolution techniques, RB methods have been applied routinely to image restoration and fusion. These methods view the LRM image and the HRP image as observations of the HRM image via an image degradation model; the HRM image is obtained by setting up and solving the model. The CS method is based on sparse representation, which utilizes the strong correlation between the LRM and HRP images to obtain sparse coefficients, construct two coupled dictionaries, and reconstruct the fused image from the fused measurement [23]. This method usually needs a large collection of training images to obtain the trained dictionary, requiring significant computing time.
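To make the traditional modulation fusion family concrete, the Brovey transform rescales each (upsampled) multispectral band by the ratio of the panchromatic intensity to the summed multispectral intensity. The following is a minimal illustrative sketch in numpy — it is not part of the proposed procedure, only an example of the kind of method OptVM is compared against:

```python
import numpy as np

def brovey_fuse(ms, pan):
    """Brovey (modulation) pan-sharpening sketch.

    ms  : (bands, H, W) multispectral image, already upsampled to the
          panchromatic grid.
    pan : (H, W) high-resolution panchromatic image.
    Each fused band is the MS band rescaled by the ratio of the PAN
    intensity to the summed MS intensity.
    """
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    intensity = ms.sum(axis=0) + 1e-12          # avoid division by zero
    return ms * (pan / intensity)[None, :, :]
```

Because every band is multiplied by the same ratio, the band sum of the fused image equals the PAN band exactly, but the spectral balance shifts wherever the summed MS intensity disagrees with the PAN response — the color-distortion effect noted above.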
In this paper, an improved fusion model, OptVM, is applied to produce high-quality HRM images automatically. The flowchart of the OptVM is shown in Figure 2. First, a PAN image is simulated from the MS image, and the simulated PAN and the original HRP image are matched. Next, the restored MS image is obtained from the spectral and grayscale relationships between the simulated PAN and LRM images. Finally, the model (Equation (1)) is optimized to obtain the HRM image. The model defines pan-sharpening as the optimization of a linear over-determined system, based on the gray-value and spectral sensitivity relationships between the original multispectral (MS), panchromatic (PAN), and fused images. The OptVM constructs a cost function to generate the HRM image based on three constraint hypotheses: (1) the radiation of an object in the LRM image should equal the integral average radiation of the same object in the HRM image; (2) the spatial details of the HRP image and the HRM image (the target fused image) should be similar; (3) the values in the panchromatic band should be the summation of the respective multispectral bands, if the spectral response range of the LRM image is nearly the same as that of the HRP image. According to these three assumptions, the cost function is defined in Equation (1), where XS_L, XS_H and P_H are the radiation of pixels in the LRM image, the HRM image and the HRP image, respectively. The variables λ1, λ2 and λ3 represent the weights of the three terms in the total energy function, which can be adjusted according to the fusion result; in the following experiments, λ1, λ2 and λ3 are all set to 1.
The variable n is the number of fused bands. The operator k reflects the fact that the low-resolution pixels are formed from the high-resolution ones by low-pass filtering followed by subsampling [24]; k represents this low-pass filtering and subsampling. The expression for k is given in Equation (2), where pixel j belongs to the set M of pixels in the HRP image corresponding to pixel x in the LRM image. For example, if the resolution of the LRM image is 8 m and that of the HRP image is 2 m, each pixel in the LRM image corresponds to a 4 × 4 set of HRP pixels covering the same object. In Equation (1), W_sx represents the weight of a pixel s in the neighborhood window of pixel x in the HRP image; its expression is given in Equation (3), where δ_x is the standard deviation.
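The resolution relationship just described — one 8 m LRM pixel covering a 4 × 4 block of 2 m HRP pixels — can be modeled in its simplest form as block averaging. The sketch below assumes a plain box filter rather than the paper's exact low-pass kernel:

```python
import numpy as np

def block_average(hr, factor):
    """Model the degradation operator k as low-pass filtering plus
    subsampling: each low-resolution pixel is the mean of the
    factor x factor block of high-resolution pixels covering the
    same ground area."""
    h, w = hr.shape
    assert h % factor == 0 and w % factor == 0
    return hr.reshape(h // factor, factor,
                      w // factor, factor).mean(axis=(1, 3))
```

For an 8 m / 2 m pair, `factor` would be 4; each output pixel is then the mean of the corresponding 4 × 4 high-resolution block.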
In Equation (3), the size of the neighborhood window is set to 9 × 9 in the following experiments. In Equation (1), W_n represents the band weights of the multispectral image in the spectral response relationship; its expression is given in Equation (4), where λ represents the spectral wavelength. The process of image fusion is the global minimization of E(P_H, XS_H, XS_L), which is the linear combination of the three basic constraints. The least-squares technique LSQR [25] is applied to optimize the model. An important step of the processing is the registration of the HRP image with the panchromatic image simulated from the LRM image, based on the optical flow and feature constraint method (introduced in Section 2.2), to avoid fusion image distortion and spectral offset.
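The idea of stacking the three linear constraints into one over-determined system and minimizing it with LSQR can be sketched as follows. This is a strongly simplified illustration, not the authors' implementation: it replaces the weighted-window detail term W_sx with a plain gradient-matching term against the PAN image, and assumes a box-filter degradation operator k.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def diff_op(m):
    # (m-1) x m forward-difference matrix: rows of the form [-1, 1].
    return sp.diags([-1.0, 1.0], [0, 1], shape=(m - 1, m))

def fuse_lsqr(lrm, pan, factor, band_w, lam=(1.0, 1.0, 1.0)):
    """Variational pan-sharpening sketch: stack (1) k-consistency with
    the LRM bands, (2) gradient similarity to the PAN image, and
    (3) the weighted band-sum spectral constraint into one sparse
    over-determined system A x = b and minimise it with LSQR."""
    n_bands, hl, wl = lrm.shape
    H, W = hl * factor, wl * factor
    npx = H * W

    # Operator k: each LRM pixel is the mean of its factor x factor
    # block of HRM pixels (box-filter stand-in for low-pass + subsample).
    rows, cols = [], []
    for i in range(hl):
        for j in range(wl):
            for di in range(factor):
                for dj in range(factor):
                    rows.append(i * wl + j)
                    cols.append((i * factor + di) * W + (j * factor + dj))
    K = sp.csr_matrix((np.full(len(rows), 1.0 / factor**2), (rows, cols)),
                      shape=(hl * wl, npx))

    # Spatial-detail term: finite-difference gradients of each fused
    # band should match those of the PAN image.
    G = sp.vstack([sp.kron(sp.identity(H), diff_op(W)),
                   sp.kron(diff_op(H), sp.identity(W))]).tocsr()

    blocks, rhs = [], []
    for n in range(n_bands):
        row_k = [sp.csr_matrix((hl * wl, npx))] * n_bands
        row_k[n] = lam[0] * K
        blocks.append(sp.hstack(row_k))
        rhs.append(lam[0] * lrm[n].ravel())
        row_g = [sp.csr_matrix((G.shape[0], npx))] * n_bands
        row_g[n] = lam[1] * G
        blocks.append(sp.hstack(row_g))
        rhs.append(lam[1] * (G @ pan.ravel()))

    # Spectral term: weighted band sum reproduces the PAN image.
    I = sp.identity(npx, format="csr")
    blocks.append(sp.hstack([lam[2] * band_w[n] * I for n in range(n_bands)]))
    rhs.append(lam[2] * pan.ravel())

    A = sp.vstack(blocks).tocsr()
    b = np.concatenate(rhs)
    x = lsqr(A, b, atol=1e-10, btol=1e-10)[0]
    return x.reshape(n_bands, H, W)
```

The λ weights play the same role as λ1–λ3 in Equation (1): they trade off radiometric fidelity to the LRM bands against spatial detail and spectral consistency with the PAN image.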

Image Registration
Image registration is an important procedure in remote sensing image applications [26] and a crucial step in our chain. Previous image registration methods can be divided into three categories: area-based methods, such as the Cross-Correlation (CC) method [27], mutual information (MI) [28] and the Sequential Similarity Detection Algorithms (SSDAs) [29]; feature-based methods, such as SIFT [30-32]; and physically-based methods, such as the multi-resolution elastic model [33] and the optical flow model [34,35]. The first two categories, which register images based on their gray information and spatial features, cannot meet real-time requirements, as they require large computing capacity and highly complex algorithms; they are also not very robust. Physically-based methods utilize physical models to describe the process of image deformation and solve these models to achieve registration; they have great advantages in speed and accuracy. In this paper, a physically-based method, the SIFT feature constrained optical flow method (SIFT-OFM), is proposed to register images. Figure 3 shows the flowchart of the algorithm. First, homologous points are obtained for the reference image and the sensed image using the SIFT feature matching method [36]. Then the affine transform coefficients are calculated to serve as a constraint for an optical flow model. Finally, the parameters of the optical flow model are optimized through an iterative process to obtain high-accuracy registered images. The optical flow model method is based mainly on the assumptions of a local gradient model and an overall smoothness model of the pixel gray values, which regard the change of image gray scale as the change of optical flow [35]. The algorithm obtains a high-precision matched image by treating the optical flow as the displacement between the reference image and the sensed image.
The optical flow model is based on the assumption of brightness constancy, which means that the gray value of the same object does not change with time and the radiance is consistent during motion [37]. Thus, the data term of the optical flow model can be expressed as Equation (5):

E_gray(u, v) = ∫∫_Ω Ψ((I(x + u, y + v) − I(x, y))²) dx dy    (5)

where I is the image intensity function on the interval Ω, (x, y) ∈ Ω; u and v are the displacements in the x and y directions, respectively, i.e., the offset between the reference image and the sensed image; and Ψ is a differentiable function on the interval Ω. To reduce the impact of light changes, a gradient energy term is added to the total energy formula. It is defined as Equation (6):

E_gradient(u, v) = ∫∫_Ω Ψ(|∇I(x + u, y + v) − ∇I(x, y)|²) dx dy    (6)

where ∇I = [I_x, I_y]^T is the gradient vector of the image. For every pixel there is only one equation (the sum of Equations (5) and (6)) but two unknowns, u and v, which leads to underdetermined solutions and the aperture problem. Thus, a regularization constraint must be added. The regularization constraint plays an important role in the estimation intensity, edge smoothness, and edge retention of the optical flow field. In this algorithm, the global smoothness constraint proposed by Horn [37] is used as the constraint term, described in Equation (7):

E_smooth(u, v) = ∫∫_Ω Ψ(|∇u|² + |∇v|²) dx dy    (7)

where ∇ := (∂x, ∂y) is the two-dimensional differential operator. The optical flow model is not suitable for matching images with large displacements, since it is derived by expanding the Taylor series and omitting the high-order terms. Therefore, SIFT-OFM adds the SIFT features as a constraint in the optical flow model. The generation of the SIFT feature points includes four steps: scale-space extremum detection, precise positioning of the feature points, determination of the main direction of each feature point, and construction of the feature descriptors. Based on these
SIFT feature points, the affine coefficients are calculated as the initial value of the optical flow field to match the image. The total energy equation of SIFT-OFM is defined as Equation (8):

E = E_gray + λ E_gradient + α E_smooth + τ E_SIFT    (8)

where λ, α and τ are the coefficients of the last three components. Based on previous experience, these are set to λ = 1, α = 50 and τ = 1 in the following experiments. The displacements u and v that minimize the energy function E are solved using the Successive Over-Relaxation (SOR) iterative method [38]; they give the displacement between the reference image and the sensed image. The sensed image is then resampled according to u and v to obtain the high-precision matched image.
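The core of this scheme — a brightness-constancy data term balanced against Horn's global smoothness constraint and solved by relaxation — can be sketched in plain numpy. This is the classical Horn-Schunck iteration only; the paper's SIFT-OFM additionally includes the gradient and SIFT terms of Equation (8) and uses an SOR solver, which are omitted here:

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, n_iter=200):
    """Minimal Horn-Schunck optical flow sketch: linearized
    brightness-constancy data term plus the global smoothness
    constraint, solved by fixed-point iterations in which each flow
    value is updated from the average of its four neighbours."""
    im1 = im1.astype(np.float64)
    im2 = im2.astype(np.float64)
    Ix = np.gradient(im1, axis=1)   # spatial derivatives of the
    Iy = np.gradient(im1, axis=0)   # reference image
    It = im2 - im1                  # temporal derivative
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    den = alpha**2 + Ix**2 + Iy**2
    for _ in range(n_iter):
        # Four-neighbour average of the current flow estimate.
        u_avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                 + np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        v_avg = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                 + np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        t = (Ix * u_avg + Iy * v_avg + It) / den
        u = u_avg - Ix * t
        v = v_avg - Iy * t
    return u, v
```

The `alpha` parameter corresponds to the smoothness weight: larger values favor a smoother flow field over pointwise agreement with the brightness-constancy equation.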

Map Production
The procedure mainly provides two kinds of products: cloudless maps of the disaster regions and change detection maps for the early stage of the disaster. The flowchart for map production is shown in Figure 4. After the high-precision matched disaster images are obtained, the clouds in the images are detected and removed to produce cloudless remote sensing maps, which are used for determining the overall scope of the disaster and for automatically detecting changes between the pre- and post-disaster cloudless images. For the automated change map, manual checking is needed to exclude erroneous detections caused by the different sensor types and observation conditions of the two images. In cases of emergency, the automated coarse change map can be used directly to offer approximate information for managing crisis issues.
Much research has focused on cloud removal to mitigate the influence of clouds. In fact, cloud removal is a reconstruction process for missing information. The proposed approaches can be grouped into three categories [39,40]: (1) spatial-based methods, which use no auxiliary information source; (2) spectral-based methods, which extract complementary information from other spectral bands; and (3) temporal-based methods, which extract complementary information from data acquired at the same position during different time periods. Spatial-based methods are often used to interpolate small missing regions, such as those caused by the scan line corrector (SLC) failure on Landsat-7 [41]; they are not appropriate for large-scale cloud removal. Spectral-based methods exploit the relationships among spectra in clear regions and restore the contaminated band by modeling a relationship between the contaminated band and an auxiliary band, as in the HOT method [42] and the STARFM method [43]; they are usually constrained by spectral compatibility and have difficulty with thick clouds. Temporal-based methods utilize multi-temporal clear data of the same region to simulate the data in the cloud-covered area and acquire cloud-free images; examples include the local linear histogram matching (LLHM) approach [44] and the spatio-temporal MRF model [45]. In this paper, we produce the cloud-free maps using the temporal phase simulation method proposed by Gui [46], which utilizes spectral relationships, a vegetation index and HOT parameters [47] to detect clouds in the images, and calculates the feature difference between two temporal images to fill in the cloud regions. The simulation method uses the minimum of the feature difference and a multi-grid optimization algorithm: it converts the features of one image to those of the reference image and then fills the cloud areas with the converted image, which has no cloud in the same region. This generates a cloud-free image of the disaster districts. In this study, the acquisition interval between the images is very short, so the generated image can faithfully reflect the overall situation after the disaster.
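The temporal cloud-fill idea can be illustrated with a first-moment version of local linear histogram matching: fit a gain/offset transform from a clear reference image to the cloudy target on cloud-free pixels, then substitute the transformed reference values under the cloud mask. This is a deliberately reduced sketch, not Gui's multi-grid feature-difference method used in the procedure:

```python
import numpy as np

def fill_clouds(target, reference, cloud_mask):
    """Temporal cloud-fill sketch in the spirit of LLHM: estimate a
    linear radiometric transform from the reference image to the
    target over cloud-free pixels, then replace cloudy pixels with
    the transformed reference values. Assumes both images are
    co-registered single-band arrays of the same shape."""
    clear = ~cloud_mask
    ref_c = reference[clear].astype(np.float64)
    tgt_c = target[clear].astype(np.float64)
    # Histogram matching reduced to the first two moments: match the
    # mean and standard deviation of the clear-sky pixels.
    gain = tgt_c.std() / (ref_c.std() + 1e-12)
    offset = tgt_c.mean() - gain * ref_c.mean()
    filled = target.astype(np.float64).copy()
    filled[cloud_mask] = gain * reference[cloud_mask] + offset
    return filled
```

When the two acquisitions are close in time, as in this study, the linear radiometric assumption is usually reasonable; larger gaps would call for the feature-difference modeling described above.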
Several authors have presented semi- or fully-automatic methods to analyze optical data for damage change detection and assessment [3,48-50], but geometric distortion and improper co-registration of pre- and post-disaster images can result in a high false alarm rate for automatic change detection methods. In this procedure, the new methods, OptVM and SIFT-OFM, are used to improve the accuracy of the preprocessing results, and, based on the high-precision preprocessed images, the Iteratively Re-weighted Multivariate Alteration Detection (IR-MAD) method proposed by Nielsen [51,52] is used to detect change automatically. IR-MAD is based on canonical correlation analysis (CCA). First, the linear transforms that maximize the canonical correlation between the pre- and post-disaster images are determined [53]. Then the difference and the variance of the linearly transformed images are calculated. Finally, each pixel is tested for change through the chi-square distribution of the normalized squared differences. To highlight changing pixels and improve the accuracy of the change-detection map, a weighted iteration is applied during the transformation: the iterations continue until the absolute difference between the maximum canonical correlation coefficients of two successive iterations is less than an error limit (set to 0.001 in this paper).
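A compact numpy/scipy sketch of the IR-MAD iteration described above is given below. It follows the standard formulation (weighted CCA, MAD variates, chi-square re-weighting) rather than any particular implementation; the 0.001 tolerance matches the paper.

```python
import numpy as np
from scipy.linalg import eigh, inv
from scipy.stats import chi2

def irmad(X, Y, tol=1e-3, max_iter=30):
    """Iteratively Re-weighted MAD. X, Y: (n_pixels, n_bands) arrays."""
    n, p = X.shape
    w = np.ones(n)                 # initial no-change weights
    rho_old = np.zeros(p)
    for _ in range(max_iter):
        mx = np.average(X, axis=0, weights=w)
        my = np.average(Y, axis=0, weights=w)
        C = np.cov(np.hstack([X, Y]).T, aweights=w)
        Sxx, Sxy, Syy = C[:p, :p], C[:p, p:], C[p:, p:]
        # canonical correlations via a generalized symmetric eigenproblem
        vals, A = eigh(Sxy @ inv(Syy) @ Sxy.T, Sxx)     # a' Sxx a = 1
        rho = np.sqrt(np.clip(vals, 0, 1))
        B = inv(Syy) @ Sxy.T @ A
        B /= np.sqrt(np.sum(B * (Syy @ B), axis=0))     # b' Syy b = 1
        M = (X - mx) @ A - (Y - my) @ B                  # MAD variates
        sig2 = np.maximum(2.0 * (1.0 - rho), 1e-8)       # no-change variances
        Z = np.sum(M**2 / sig2, axis=1)                  # chi-square statistic
        w = 1.0 - chi2.cdf(Z, df=p)                      # no-change probability
        if np.abs(rho.max() - rho_old.max()) < tol:
            break
        rho_old = rho
    return Z, rho
```

Pixels with large `Z` are flagged as changed; the weights down-weight them in the next covariance estimate, which is what sharpens the change/no-change separation over the iterations.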
However, the change-detection map contains some falsely detected points, because the map generated by IR-MAD is influenced by the different conditions under which the two images were acquired. Manual checking is therefore essential for removing errors in the changed regions. Such checking is entirely feasible during the early stages of a disaster, since many people and agencies from other regions or countries are eager to help the victims and local government with relief and reconstruction activities. There are public community remote sensing platforms, such as TomNod [54] and GIS Corps [55], which can add and update information (not only in vector format but also as geotagged pictures and text), validate the information conveyed, and actively participate in the map production phase by coordinating the efforts of ordinary people and experts [56]. They can gather every small contribution to provide significant assistance to the victims and decision-making departments in the disaster region [57]. The automatically produced change map can be used on such a community remote sensing platform to reduce labor costs and provide a rapid response to the disaster.

Study Area and Data Sets
A powerful earthquake shook Nepal near its capital, Kathmandu, at 11:56 NST on 25 April 2015, killing more than 8000 people and flattening sections of the city's historic center. Some of the oldest parts of the city, including seven UNESCO World Heritage sites, were severely damaged. This paper uses the Nepal earthquake as an example. The location of the study area, the center of Kathmandu, is shown in Figure 5. Four images from the GF-1 PMS sensor were selected: a pre-earthquake image of 11 April and post-earthquake images of 27 April, 1 May, and 2 May, as shown in Figure 6. The observation parameters of the data set are listed in Table 1, and the parameters of the GF-1 PMS sensor in Table 2. The proposed procedure was run without any human intervention; it is discussed below, including the intermediate results of image fusion and registration and the final products, the cloudless maps and the change map.

Image Fusion Results and Analysis
In this section, the image of 27 April is taken as an example to show the accuracy of the image fusion results. Figure 7 shows part of the whole image to exhibit the details clearly: the left image is the multi-spectral (MS) data and the right is the panchromatic (PAN) data, and the upper-left corner of each view is an enlargement of the small red box. Figure 8 shows the results of PCA fusion, Gram-Schmidt (GS) pansharpening, wavelet-transform fusion, and the proposed OptVM method. Visually, all four methods achieve spatial resolution enhancement and keep a certain spectral fidelity, but the results of PCA, GS, and the wavelet transform show artifacts at the edges of objects, with the wavelet-transform result being the worst. In detail, from the enlarged image of the small red box, the color recovery of OptVM is finer than that of the other three algorithms. For example, observe the red roof in the enlarged image in Figure 8: the OptVM method restored the red color of the roof evenly, while the color recovery of the other three algorithms deviates from red, with some parts of the roof appearing white.
Six typical surface features, sampled from the fused results (Figure 8), are shown in Figure 9. These are sampled in relatively uniform areas to assess spectral fidelity; the spatial locations of the six points are marked in Figure 9A. The pixels cover a large set of materials in the scene and are uniform in their neighborhoods. The spectra of the original MS image are used as ground truth for comparison, on the reasonable assumption that the spectra of such uniform areas are most likely unchanged. The spectra of the image fusion results are shown in Figure 10, and the Euclidean distance difference and the spectral difference, expressed as spectral angle difference, are listed in Table 3. For uniformity of comparison, the values of the fusion results are normalized to 0-255. From Figure 10 and Table 3, it can be seen that the OptVM method preserves the spectra of these pixels well, whereas the spectra from the PCA, GS, and wavelet fusion deviate considerably from the truth. For example, the red roof of pixel #1 is shifted seriously in the wavelet fusion result. For the ordinary roof (pixel #4) and bare land (pixel #6), the results of PCA and GS are lower than the truth, especially in the red band, and the spectral angle difference is larger. In the vegetation area, the colors of the wavelet-based results (Figure 9D) lack fidelity, while our algorithm performs very well in most regions (Figure 9F). In our method, a crucial step is that the simulated PAN and the original PAN are matched to reduce the distortion caused by the geometric deviation between the original MS and PAN images.
To assess the quality of the fused images comprehensively, four objective evaluation metrics are selected: the spectral angle mapper (SAM), the relative dimensionless global error in synthesis (ERGAS), the Q4 index [58], and the Quality with No Reference (QNR) metric [59]. The equations of the four metrics are presented in the Appendix. SAM and ERGAS measure the global radiometric distortion of the fused images; smaller values indicate better fusion results. Q4 and QNR reflect the spectral and spatial quality of the fused images; values closer to 1 denote better results. The results of the four metrics are shown in Figure 11 and Table 4. Regarding the fidelity of the image spectrum and image clarity, the images fused by GS and by our proposed algorithm appear better than those obtained by PCA and wavelet fusion; the QNR index displays this difference clearly, since replacing the original MS image information leads to a loss of information. Compared with the other fusion methods, the SAM value of the GS fused image is very large, the degree of sharpening being excessive in some parts of the results. The values of the four metrics show that our method is the best of the four. In fact, most existing image fusion algorithms are based only on band-to-band correlation, whereas our method synthetically uses the spectral, grayscale, and spatial relationships to produce an image with fewer distortions.
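The two radiometric-distortion metrics can be written down directly from their standard definitions; the sketch below does so for SAM and ERGAS (Q4 and QNR are more involved and omitted). The `ratio` argument is the MS/PAN resolution ratio, e.g. 4 for GF-1 PMS (8 m MS, 2 m PAN).

```python
import numpy as np

def sam(ref, fused, eps=1e-12):
    """Mean spectral angle in degrees. ref, fused: (H, W, bands) arrays."""
    dot = np.sum(ref * fused, axis=-1)
    norm = np.linalg.norm(ref, axis=-1) * np.linalg.norm(fused, axis=-1)
    angle = np.arccos(np.clip(dot / (norm + eps), -1.0, 1.0))
    return np.degrees(angle.mean())

def ergas(ref, fused, ratio=4.0):
    """Relative dimensionless global error in synthesis (lower is better)."""
    rmse2 = np.mean((ref - fused) ** 2, axis=(0, 1))   # per-band squared RMSE
    mean2 = np.mean(ref, axis=(0, 1)) ** 2             # per-band squared mean
    return 100.0 / ratio * np.sqrt(np.mean(rmse2 / mean2))
```

Note that SAM is invariant to per-pixel scaling (it measures spectral shape only), which is why ERGAS is needed as a complementary radiometric measure.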

Image Registration Results and Analysis
Image registration is based on the results of image fusion; two images are selected as the example (pre-earthquake, 11 April 2015, and post-earthquake, 27 April 2015). The pre-earthquake image is the reference image, to which the post-seismic image of 27 April is matched. In Figure 12, the middle image is a combined image that alternately takes 100 × 100 blocks from the reference image and from the sensed image. Figure 12A-D are detailed enlarged views that show the position offset between the reference image and the sensed image. In Figures 13-16, the middle combined image is obtained in the same way as in Figure 12; these figures combine the reference image with the matched image produced by the optical flow model algorithm (Figure 13), the SIFT feature algorithm (Figure 14), the ENVI software registration workflow (Figure 15), and our proposed SIFT-OFM method (Figure 16).
From the detailed views in Figure 12, the dislocation between the reference image and the sensed image is very obvious, so registering the two images is necessary before producing the products. From Figure 14 to Figure 16, the results are good. Figure 13 shows that the result based only on the optical flow model is the worst: the optical flow model is not suited to large displacements between two images, generally not more than two pixels, while in this study the displacement between the two images exceeds 10 pixels. Furthermore, comparing the detailed views of Figures 14-16, the SIFT algorithm performed well in Figure 14A,D, but in Figure 14B the dislocation of the bridges is very clear, and in Figure 14C the road is broken. The SIFT feature algorithm uses the linear correction model of an affine transformation, which leaves some nonlinear distortions uncorrected. Figure 15 is based on the ENVI registration workflow, in which 34 ground control points (GCPs) were marked by manual selection, with an overall GCP error of no more than 0.43 pixels. However, in Figure 15A,B the discrepancy of the bridges is obvious and serious, which is related to the uneven distribution of the GCPs and the linear calibration model used in the registration process. There is no obvious displacement or deviation in any of the views of Figure 16, produced by the SIFT-OFM method, which restricts large deviations with the SIFT features and corrects local linear or nonlinear distortion with the optical flow model.
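The coarse-to-fine idea behind SIFT-OFM (a feature-based constraint to absorb the large displacement, then optical flow for fine correction) can be illustrated with a deliberately simplified, translation-only sketch: phase correlation stands in for the feature-based coarse alignment, and a single Lucas-Kanade step for the optical-flow refinement. This illustrates the principle only, not the paper's method.

```python
import numpy as np

def coarse_shift(ref, img):
    """Integer shift (dy, dx) such that img ≈ ref shifted by (dy, dx),
    estimated by phase correlation (handles large displacements)."""
    R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    corr = np.fft.ifft2(R / (np.abs(R) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]   # unwrap to signed shift
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx

def lk_residual(ref, aligned):
    """One Lucas-Kanade step: global sub-pixel residual flow between
    two nearly aligned images (valid only for small displacements)."""
    gy, gx = np.gradient(ref)
    it = aligned - ref
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = -np.array([np.sum(gx * it), np.sum(gy * it)])
    return np.linalg.solve(A, b)   # (u_x, u_y) residual
```

The split mirrors the argument in the text: the optical-flow step alone fails beyond a couple of pixels, so a robust global estimate must remove the bulk of the displacement first.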
The structural similarity index measurement (SSIM) [60] is used as the quantitative metric to evaluate the precision of a registered image. The SSIM maps in Figure 17 are obtained by calculating the SSIM between the reference image and the matched image within a 9 × 9 sliding window. In the SSIM map, red indicates SSIM values close to 1, that is, areas registered with high precision. It can be seen that the precision of the proposed method is higher than that of the SIFT feature method and the ENVI automatic registration workflow. The presence of clouds and shadows in the sensed images, and their absence in the reference image, leads to low SSIM values in the cloud and shadow regions; therefore, in Table 5, the overall SSIM value for the whole image is relatively low.
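A windowed SSIM map of the kind shown in Figure 17 can be sketched with the standard SSIM constants; the 9 × 9 uniform window follows the text, while the data range `L` is an assumption (1.0 for images normalized to [0, 1]).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ssim_map(x, y, win=9, L=1.0):
    """Local SSIM between two single-band images of equal shape."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2          # standard stabilizers
    mx, my = uniform_filter(x, win), uniform_filter(y, win)
    vx = uniform_filter(x * x, win) - mx * mx           # local variances
    vy = uniform_filter(y * y, win) - my * my
    cxy = uniform_filter(x * y, win) - mx * my          # local covariance
    return ((2 * mx * my + C1) * (2 * cxy + C2)) / \
           ((mx * mx + my * my + C1) * (vx + vy + C2))
```

Averaging the map gives the single overall SSIM value reported in Table 5, which is why residual cloud/shadow regions with low local SSIM pull the whole-image score down.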

Cloudless Product and Analysis
The generation of the cloudless maps includes cloud detection and temporal simulation, based on the results of image fusion and image registration. The four images after fusion and registration are shown in Figure 18. All the post-event images (27 April, 1 May, and 2 May) include clouds and shadows, which limit the ability of decision makers and analysts to estimate and grasp the overall situation. Cloud and shadow removal with temporal simulation to produce a cloud-free image is therefore indispensable.
The cloud detection results of our method, introduced in Section 2.3, are shown in Figure 19, in which red areas represent cloud regions and green areas represent shadow regions. By visual comparison, most of the clouds and shadows have been detected; only a small portion of the thin clouds was missed. The cloud and shadow masks of Figure 19 are used to produce the cloud-free image by temporal simulation from the adjacent-date post-earthquake image. The cloudless result is shown in Figure 20: Figure 20A is the image of 1 May, Figure 20B the image of 2 May, and Figure 20C the cloudless map, which is based mainly on the 2 May image with the cloud and shadow regions filled from the 1 May image. Figure 21 shows the detailed enlarged views of the rectangles in Figure 20: the scenes in the cloud and shadow regions are recovered by the temporal simulation method and remain consistent in color and texture with their neighborhoods. This provides a high-quality cloudless image and supports accurate identification of the overall damage and potential cascading effects.
Figure 22 shows the cloudless maps of the post-event images of 27 April, 1 May, and 2 May. Compared with the original images in Figure 18, the cloudless maps eliminate the influence of clouds and shadows and can easily be used as backdrop or situation maps. The cloud and shadow regions of the 27 April image (Figure 18, upper right, and Figure 19, upper right) are replaced from the pre-event cloudless image of 11 April (Figure 18, upper left) to obtain its cloudless map (Figure 22, left), as there is no post-event image before 27 April from which to fill the cloud and shadow regions. The left cloudless image of Figure 22 therefore contains some pre-event information, which may confuse users of the single cloudless map, so red vectors outlining the cloud and shadow regions are added to distinguish the pre- and post-event data: inside the vectors is pre-event information, outside is post-event information. This kind of map is only used shortly after the event, when there are few post-event images and the available images still have overlapping cloud-covered areas.
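The cloud and shadow screening that drives this compositing can be sketched very roughly: the HOT parameter mentioned earlier measures how far a pixel lies above the clear-sky line in blue-red space, so bright high-HOT pixels are flagged as cloud and dark low-NIR pixels as shadow. All thresholds below are illustrative placeholders, not the calibrated values of the method in Section 2.3.

```python
import numpy as np

def cloud_shadow_mask(blue, red, nir, theta_deg=45.0,
                      hot_thresh=0.08, bright_thresh=0.5, dark_thresh=0.12):
    """Toy cloud/shadow screening for one scene (bands scaled to [0, 1]).

    Clouds: bright pixels far above the clear-sky line in blue-red space
    (HOT). Shadows: dark, low-NIR pixels. Thresholds are hypothetical."""
    t = np.deg2rad(theta_deg)
    hot = blue * np.sin(t) - red * np.cos(t)      # Haze Optimized Transform
    cloud = (hot > hot_thresh) & (blue > bright_thresh)
    shadow = (nir < dark_thresh) & (blue < dark_thresh) & ~cloud
    return cloud, shadow
```

The resulting boolean masks play the role of the red (cloud) and green (shadow) regions of Figure 19: they mark which pixels of the base image must be filled from the adjacent-date image.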

Change-Detection Product and Analysis
The change-detection map produced by IR-MAD in this paper is a coarse detection map and needs further manual checking to improve the accuracy of the product. In this section, the four images of Figure 18 are tested. The image in Figure 23I is the change-detection map between 11 April and 27 April, in which clouds and shadows are detected as change, and Figure 23II is the change-detection map after removing the cloud and shadow regions from Figure 23I. Figure 23A-D give detailed enlarged views of the red rectangles in the large maps. Figure 23A illustrates the change in open grassland before and after the earthquake, where tents were put up as temporary refuges. Figure 23B shows that the landmark building of Kathmandu, Bhimsen Tower, completely collapsed in the earthquake. Figure 23C represents a false change detection: the different imaging angles of the two images result in different positions of tall buildings and their shadows, and although the images are matched by the method proposed in Section 2.2, differences caused by viewing angle are very difficult to correct. Figure 23D also shows a false change detection, because part of the shadow area was missed during detection. From the above analysis, the change-detection map produced by the proposed procedure is a coarse result that does not distinguish the type or grade of change and contains false detections, which manual checking must refine.
of open grass land before and after earthquake, where some tents have been put up as temporary refuge.Figure 23B indicates the landmark building of the Kathmandu, Bhimsen Tower, completely collapsed after the earthquake.Figure 23C represents an error change detection result, as the different imaging angle of two images result in the different positions of tall buildings and their shadows in the images.Though the images are matched by the method proposed in Section 2.2, the differences caused by the viewing angle is very difficult to be corrected.Figure 23D also shows an error change-detection result, because part of the shadow area is missing from detection.From the above analysis, the change-detection map produced by our proposed procedure is a coarse result without distinguishing the type and grade of change and many errors are detected, which require manual checking to refine the work.The image in Figure 24I is the same image shown in Figure 23II, which is the change map between 11 April and 27 April.The image in Figure 24II is the change between 1 May and 11 April and the image in Figure 24III is the change between 27 April and 1 May.All of these eliminate the impact of cloud and shadow.Figure 25 shows the detailed enlarged images of the red rectangles of Figure 24.In Figure 25A, the Durbar Square of Kathmandu, which damaged severely in the earthquake, is shown, but this change (the damage) is not discovered because the clouds and shadows exist in the image of 27 April (seen in Figure 25A).In Figure 24III, the change is observed, since the image of 1 May is cloudless in this region.For B sense in the Figure 24, the reason of discovering the change in II, not in I and III map is the same with the A sense as the influence of shadow (seen the detail in Figure 25B).Figure 25C shows the Kal Mochan Temple of Kathmandu, which also collapsed after the earthquake.In Figure 24I,II, the change area is the same large, but small in Figure 24III map.This is because the 
feature on the ground has no major changes in the postearthquake images (27 April and 1 May), while the feature changed obviously between pre-and postearthquake images (Figure 24I is the result of 11 April and 27 April, Figure 24II is the result of 11 April and 1 May).There are just some rocks from the collapsed temple that have been consolidated during the period from 27 April to 1 May to clean the area of the ruins (shown in Figure 25C).By these comparisons, the progress of the rescue will be learned.The same reason applies to the Figure 25D,E scenes.From these images, we see that many tents were put up in some open space nearby the houses of victims as temporary shelters because of the constant large aftershocks.The temporary shelters were migrated or revoked on 1 May as the rescue proceeded.So some temporary facilities would disappear on the image of 1 May (seen in Figure 25D,E); and no The image in Figure 24I is the same image shown in Figure 23II, which is the change map between 11 April and 27 April.The image in Figure 24II is the change between 1 May and 11 April and the image in Figure 24III is the change between 27 April and 1 May.All of these eliminate the impact of cloud and shadow.Figure 25 shows the detailed enlarged images of the red rectangles of Figure 24.In Figure 25A, the Durbar Square of Kathmandu, which damaged severely in the earthquake, is shown, but this change (the damage) is not discovered because the clouds and shadows exist in the image of 27 April (seen in Figure 25A).In Figure 24III, the change is observed, since the image of 1 May is cloudless in this region.For B sense in the Figure 24, the reason of discovering the change in II, not in I and III map is the same with the A sense as the influence of shadow (seen the detail in Figure 25B).Figure 25C shows the Kal Mochan Temple of Kathmandu, which also collapsed after the earthquake.In Figure 24I,II, the change area is the same large, but small in Figure 24III map.This is because the 
feature on the ground shows no major changes in the post-earthquake images (27 April and 1 May), while the feature changed obviously between the pre- and post-earthquake images (Figure 24I is the result of 11 April and 27 April; Figure 24II is the result of 11 April and 1 May). There are just some rocks from the collapsed temple that were consolidated during the period from 27 April to 1 May to clear the area of the ruins (shown in Figure 25C). From these comparisons, the progress of the rescue can be learned. The same reasoning applies to the Figure 25D,E scenes. From these images, we see that many tents were put up in open spaces near the victims' houses as temporary shelters because of the constant large aftershocks. As the rescue proceeded, the temporary shelters were relocated or removed by 1 May, so some temporary facilities disappear in the image of 1 May (seen in Figure 25D,E) and no or few change areas are detected in image III of Figure 24. The results of this paper mainly provide a quick change detection map to determine the urgent requirements, as well as guidance for checking the change regions precisely through a community remote sensing platform for experts or the public, which can reduce labor costs and improve efficiency to the greatest extent. Figure 26 shows the detected change vectors superimposed on the cloudless map of 27 April. The vectors of the left-hand map are the final result, extracted from image I of Figure 24; the vectors of the right-hand map have the error regions of the final result removed by visual checking. The right vector map combines the visual interpretations of four independent people and can be considered the relatively correct result for comparison. The simple statistics are shown in Table 6. The number of change regions in the original vector map (Figure 26, left-hand map) is 758, while the number in the checked vector map (Figure 26, right-hand map) is 556, and there are 202 falsely detected change regions. The percentage of true change regions among the originally detected change regions is 73.4%, which can meet the application requirements to a certain degree. Among the true change regions, most are tents or shelters (shown in Figures 23A, 25D,E and 27A), some are cars on the street or other temporary features, and a few are collapsed or damaged buildings (shown in Figures 25C and 27B). Among the error change regions, as analyzed above, most errors are caused by the different view angles between the pre-event and post-event maps, and some are caused by undetected clouds, thin clouds, shadows, or other unknown errors.
Through the change detection map, the rescue progress can be observed. An example is shown in Figure 27. The image in Figure 27A shows the difference over four days in the open space of a park in the center of Kathmandu. The image from 27 April, the day after the earthquake, indicates that the situation in this open space was chaotic, with a variety of temporary residences set up. However, by the sixth day after the earthquake, on 1 May, the temporary settlements are orderly. This phenomenon is even more obvious by 2 May, when more unified residences and relief tents had been set up. The image in Figure 27B shows the series of changes at Bhimsen Tower. From 27 April to 2 May, the bricks and masonry from the ruins were cleaned up effectively and the situation changed from chaos to order. These temporal change maps indicate the progress of the disaster rescue, allowing for more objective guidance to formulate the next rescue plan and allocate rescue resources reasonably.
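The precision statistic quoted above follows directly from the Table 6 counts. The short sketch below simply restates that arithmetic; the function name is ours, not from the paper's code.

```python
# Precision of the automatic change detection against the visually checked
# vector map (counts from Table 6): 758 regions detected automatically,
# of which 202 were rejected as errors by visual checking.
def change_detection_precision(detected: int, false_alarms: int) -> float:
    """Fraction of detected change regions confirmed as true changes."""
    return (detected - false_alarms) / detected

precision = change_detection_precision(detected=758, false_alarms=202)
print(f"{precision:.1%}")  # → 73.4%
```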


Discussion
The goal of this work was to solve the problems related to the use of optical remote sensing data in disaster emergency mapping. The main problems are the impact of clouds on obtaining a clear situation backdrop map after the disaster, and the trade-off between the poor precision of change detection maps obtained by automated methods and the relatively labor-intensive process of visual interpretation. Through the series of new methods proposed in this paper, the accuracy of the optical data pre-processing results has been improved greatly, and the produced change detection maps can fulfill the fundamental requirement of disaster S&R activities: information regarding the disaster in terms of the number, location, distribution, arrangement, extent, and impact of the devastation. These rather coarse change maps can also be used as a guideline and a basis for visual observation to check the correctness of the detected changes, which can then generate more refined damage assessment maps and up-to-date maps for evaluation.

Map Accuracy and Timeliness
Reliability of the results and timeliness of the process are the key factors in emergency mapping. In previous studies, the biggest problem with optical data used in damage mapping by automatic change detection was the calibration stage, a time-consuming procedure with poor accuracy. In this paper, OptVM for image fusion and SIFT-OFM for image registration are proposed to produce high-accuracy fused and matched images, and thereby a rather high precision change detection map. Compared with the results based on traditional image fusion and registration methods, the results based on our proposed methods are the best overall (see the analysis in Sections 3.2 and 3.3). The false alarm rate in the final change detection scenes caused by improper co-registration can be reduced by relying on the rather high accuracy of the fused and matched images. Timeliness is also an issue. Processing the images of 11 April, 27 April, and 1 May from the Kathmandu example took only about 46 min to obtain the cloudless map shown in Figure 20 and the change map shown in Figure 24, using a PC with 8 GB RAM and a 64-bit operating system. This is well worth the effort to obtain a cloudless situation backdrop map and a rather coarse change map. The time can be shortened further if cluster or GPU parallel computing is used.

Limitations
Although a series of methods and strategies are used in this study to reduce the effects of optical imagery limitations, some limitations remain in the maps produced. First, cloudless situation maps are available only when cloud cover is less than 50% of the image taken on the same day; otherwise, the cloudless maps contain a great deal of information from previous images. When cloud coverage exceeds 80%, the image is completely unavailable, and only SAR data can determine the situation in the cloud and shadow areas. Another limitation is that generating cloudless maps requires at least two post-event images with non-overlapping cloud-covered areas. Post-event data are more challenging to obtain because of cloud cover at the time of satellite passes, and in some cases getting a post-event cloudless map of the entire damaged area requires a great deal of time. However, multiple satellite passes and multiple data sources increase the possibility of obtaining cloudless images. Generally, a single post-event image is available in the first hours after the event. If it has cloud-covered areas, there is no option other than replacing the cloud areas with the nearest pre-event clear data when we want to obtain cloudless maps, and our proposed method can produce a cloudless map in this case. It is also useful for rescue teams to learn the situation of the affected areas from cloudless maps before they start search and rescue (S&R) activities within a few hours after the disaster; in parts of the country without internet access, professional rescue teams need to take a printed cloudless map. It is therefore important to provide cloudless maps even when cloud-covered areas are replaced by pre-event clear data. In this kind of cloudless map (see Figure 22, left image), the vectors of the detected cloud and shadow regions are provided to distinguish pre-event from post-event data and avoid confusing users.
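The coverage rules above amount to a simple decision procedure. The sketch below is illustrative only: the 50% and 80% thresholds come from the text, while the function name and the returned strategy strings are our own, not the paper's implementation.

```python
# Hypothetical sketch of the cloud-coverage decision rule described above.
# Thresholds (50% / 80%) are from the text; names and strings are ours.
def cloudless_map_strategy(cloud_fraction: float, post_event_images: int) -> str:
    """Pick a strategy for producing a cloudless backdrop map."""
    if cloud_fraction > 0.8:
        # Image is effectively unusable; only SAR sees through the clouds.
        return "unusable: rely on SAR data for cloud and shadow areas"
    if post_event_images >= 2:
        # Non-overlapping cloud areas can be composited from post-event data.
        return "composite post-event images with non-overlapping clouds"
    # Single post-event image: fall back to the nearest pre-event clear data.
    # Above 50% cloud, the result keeps much pre-event content.
    return "fill clouds with nearest pre-event clear data"
```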
Next, undetected clouds and shadows, variations in solar illumination, different off-nadir angles, and some temporary objects (e.g., car traffic), which influence the results of change detection, still exist and reduce the accuracy of the change maps in this example (as shown in Figure 23C,D). These errors are difficult to exclude from automatically detected change maps; only manual visual checking can remove them to obtain more precise change maps. That is why we emphasize the importance of manual participation in further checking the change maps in this study. These change maps are only coarse results that provide the basic information about the affected regions. In addition, the detected changes in the product maps are not divided into categories and degrees, which also relies on manual interpretation.
Then, a main limitation is the data resolution. Owing to data acquisition policies and funding constraints, the highest resolution obtained was only 2 m with GF-1. This is not sufficient for detecting and assessing damaged buildings, except those that are completely collapsed (shown in Figure 27B) or partially collapsed (shown in Figure 25C). The main purpose of the change detection maps in this study is to determine whether a scene changed after the earthquake; the maps can then provide guidance on the changed regions for further checking, as mentioned above.
Lastly, though image-based mapping has played an important role in rapid post-disaster damage assessment, the detail and accuracy of the assessment has not reached the level of ground-based surveys [9]. The limitation of image-based damage assessment is related not only to the spatial resolution of the sensors, but also to the information reflected by the image itself. Generally, most satellite images for disaster mapping are acquired at a vertical angle (or a small off-nadir angle), so the information an image provides for a building is the situation of its roof. Roof information is well suited for identifying extreme damage states, i.e., completely destroyed structures [61]. But for buildings that have not collapsed yet have cracks or inclined walls, image-based mapping largely misses the damage, leading to a deficiency in assessing this type of damage in the final results. Recently, oblique or multi-perspective data, which provide information on both roofs and facades, have been identified as a potential data source for more comprehensive building damage assessment [62,63].

Conclusions and Future Work
Using remote sensing for emergency mapping is now a common operational approach for disaster risk management. Both optical and radar data are the main inputs when disasters occur; however, for damage assessment or analysis purposes, most results presented after each event are based on optical data, which is easy to read and analyze to generate fast maps showing the location, scale, or extent of a disaster or crisis situation. In this study, a rapid procedure for processing optical remote sensing data is proposed to produce cloudless situation backdrop maps and change detection maps for emergency needs in the early phases of disaster response. On the one hand, the precision and reliability of the products is improved significantly by the new methods, OptVM and SIFT-OFM; on the other hand, the automatic processing time is shortened by the proposed procedure. This is useful for providing rapid and actionable information to governmental agencies, multilateral organizations, and NGOs in a timely fashion. While only GF-1 temporal data were tested in this study, owing to data acquisition limitations, the proposed procedure is also applicable to other remote sensing data. It can even provide up-to-date situation awareness, loss estimation, and recovery monitoring maps to support decision making by processing multi-source and multi-temporal optical remote sensing data together.
The accuracy of results from automatic methods is obviously not as good as manual visual interpretation performed by well-coordinated professional image analysts or skilled operators [63]. Manual visual interpretation, however, is always a rather labor-intensive task. An ideal solution is to minimize the impact of the restrictions so as to continuously improve the accuracy of automatic results. Although these results are rather coarse, they can save labor costs to the greatest extent and provide a guideline for further refining the results by manual visual checking; this is the main purpose of this study. International cooperation and volunteer contributions play an important role in mapping for disaster impact assessment at the urban level. International agencies and volunteers should be coordinated properly to ensure that common mapping guidelines and proper communication mechanisms are in place, allowing more effective cooperation among the involved actors, especially during major emergencies.
Remote sensing is an effective tool at the disposal of a disaster manager. It can supply authentic and objective information for various rescue resources and the general public, especially in remote or inaccessible areas during the early period of a disaster. It is strongest when allied with other tools, such as catastrophe risk models or GIS tools [64], to provide powerful guidance to streamline evacuation, issue disaster declarations, coordinate S&R and logistics, conduct damage assessment, distribute relief supplies, and aid long-term recovery. Some emerging technologies and initiatives will provide updated change and geospatial information, as well as resource distribution and demand data, to analyze within a time frame suitable for emergency response purposes. In the future, integrating remote sensing based emergency mapping with other GIS information and analysis tools will be the trend for reducing economic losses, ensuring public safety, and meeting the basic subsistence needs of affected people in an emergency response context.
where $|Z_2|$ is computed in the same way as $|Z_1|$; $X_1$ and $Y_1$ are the first bands of images A and B, respectively, and likewise for the other bands; $\sigma_{z_1 z_2}$ is the covariance of $z_1$ and $z_2$; and $\sigma_{z_1}$ and $\sigma_{z_2}$ are the variances of $z_1$ and $z_2$.
The first term of the equation is the modulus of the hyper-complex correlation coefficient between $z_1$ and $z_2$, and the second and third terms measure contrast changes and the mean bias on all bands, respectively. $D$ is the block size of the image; in this paper the size is $32 \times 32$. (4) Quality with No Reference (QNR): QNR measures the quality of the fused image in terms of spectral distortion $D_\lambda$ and spatial distortion $D_s$, without requiring a reference image. The best value is 1, and it is defined as follows:
$$\mathrm{QNR} = (1 - D_\lambda)^{\alpha} (1 - D_s)^{\beta}$$
where $\tilde{G}$ is the LRM image; $p$ is an integer exponent amplifying the spectral distortion, generally taken as $p = 1$; $P$ is the HRP image and $\tilde{P}$ is the panchromatic image after low-pass filtering; $q$ is an integer exponent amplifying the spatial distortion, generally taken as $q = 1$; and $\alpha = \beta = 1$.
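To make the QNR computation concrete, the sketch below implements the universal quality index $Q$ and the QNR formula with $p = q = 1$ and $\alpha = \beta = 1$. It is a simplified illustration, not the paper's exact implementation: $Q$ is computed globally rather than per $32 \times 32$ block, and the array names are ours.

```python
import numpy as np

def q_index(x: np.ndarray, y: np.ndarray) -> float:
    """Universal image quality index Q between two single-band images."""
    x, y = x.ravel().astype(float), y.ravel().astype(float)
    cov = np.cov(x, y)  # 2x2 covariance matrix
    num = 4 * cov[0, 1] * x.mean() * y.mean()
    den = (cov[0, 0] + cov[1, 1]) * (x.mean() ** 2 + y.mean() ** 2)
    return num / den

def qnr(fused: np.ndarray, ms: np.ndarray,
        pan: np.ndarray, pan_lp: np.ndarray) -> float:
    """QNR = (1 - D_lambda) * (1 - D_s), for band-first (L, H, W) arrays.

    fused: fused HRM image, ms: original LRM image (resampled),
    pan: panchromatic image, pan_lp: low-pass filtered panchromatic image.
    """
    L = fused.shape[0]
    # Spectral distortion: inter-band Q differences (p = 1).
    d_lambda = sum(abs(q_index(fused[l], fused[r]) - q_index(ms[l], ms[r]))
                   for l in range(L) for r in range(L) if r != l) / (L * (L - 1))
    # Spatial distortion: band-vs-pan Q differences (q = 1).
    d_s = sum(abs(q_index(fused[l], pan) - q_index(ms[l], pan_lp))
              for l in range(L)) / L
    return (1 - d_lambda) * (1 - d_s)
```

With identical inputs both distortion terms vanish and the QNR reaches its ideal value of 1.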

Figure 1. Procedure for dealing with multispectral and multi-temporal remote sensing images.

Figure 3. Flowchart of the image registration method.

Figure 5. Spatial location of the study area (the resolution of the left map is 2 m after fusion).

Figure 6. The four images of the data set (all four images are the original MS images with 8 m resolution).

Figure 7. Multispectral (left) and panchromatic data (right) of the original image on 27 April (the upper-left small image is the detail from the area of the red rectangle).

Figure 9. Comparison of the various algorithms on the urban area scene. (A) the PAN image; (B) the MS image after resampling using the nearest neighbor; (C) the PCA result; (D) the wavelet result; (E) the GS result; and (F) the OptVM result.

Figure 10. Spectral comparison between the various approaches in the urban area scene. The sites of the pixels are marked in Figure 9A.

Figure 11. Line chart of the four metrics comparing fusion methods (ERGAS and SAM values are better when approaching 0, while Q4 and QNR values are better when approaching 1).

Figure 12. The combined image of the reference image and the sensed image (A-D show the detail of each red rectangle on the combined image).

Figure 13. The combined image of the reference image and the matched image only using the optical flow model (A-D show the detail of each red rectangle on the combined image).


Figure 14. The combined image of the reference image and the matched image only using the SIFT algorithm (A-D show the detail of each red rectangle on the combined image).

Remote Sens. 2016, 8, 272


Figure 15. The combined image of the reference image and the matched image by using GCP in ENVI software (A-D show the detail of each red rectangle on the combined image).

Figure 16. The combined image of the reference image and the matched image using SIFT-OFM (A-D show the detail of each red rectangle on the combined image).


Figure 17. SSIM maps of the three methods (the red areas indicate SSIM values close to 1, that is, regions of higher precision registration).

Figure 18. The data set of four images after image fusion and registration (compared with Figure 6, the resolution and the registration accuracy have been improved greatly).

Figure 19. The cloud detection products of the four images in Figure 18 (red areas represent cloud regions and green areas represent shadow regions).

Figure 20. The cloudless map of 2 May (I and II are the original images of 1 May and 2 May; III is the cloudless map after removing cloud and shadow). (A) and (B) are the shadow and cloud regions in the original image; (C) and (D) are the same areas after removing the influence of cloud and shadow corresponding to the (A) and (B) regions. The details of A-D are shown in Figure 21.

Figure 21. The detailed enlarged images of the rectangles in Figure 20 (A and B are the original images from 2 May; C and D are the simulated cloudless images).

Figure 22. The cloudless maps for the post-event images from 27 April, 1 May, and 2 May.
are tested. The image in Figure 23I is the change-detection map between 11 April and 27 April, in which clouds and shadows are considered as changes, and Figure 23II is the change-detection map after removing the cloud and shadow regions based on the Figure 23I image. Figure 23A-D give the detailed enlarged images of the red rectangles in the large maps. Figure 23A illustrates the change


Figure 23. The change-detection map between 11 April and 27 April (image I contains the cloud and shadow; image II removes the impact of cloud and shadow based on image I; A-D are the different detected change scenes; A and B are correctly detected scenes; C and D are falsely detected scenes).

Figure 24. The change-detection maps between 11 April, 27 April, and 1 May after removing cloud and shadow (I is the change map of 11 April and 27 April; II is the change map of 11 April and 1 May; III is the change map of 27 April and 1 May. Regions A-E are the different scenes, with details shown in Figure 25).

Figure 25. The detailed enlarged images of the red rectangles in Figure 24 (A and B are influenced by clouds and shadows; C is a collapsed temple; D and E are the open spaces where many tents are being set up).

Figure 26. The detected change vectors superimposed on the cloudless map of 27 April (the vectors of the left-hand map are extracted from image I of Figure 24, and the vectors of the right-hand map have the error regions of the left-hand vector map removed by visual checking).


Figure 27. The temporal change detail maps of two scenes (A shows a series of changes in Center Park from 27 April to 2 May, and B shows the changes of a collapsed tower).
$$D_\lambda = \sqrt[p]{\frac{1}{L(L-1)} \sum_{l=1}^{L} \sum_{\substack{r=1 \\ r \neq l}}^{L} \left| Q\left(\hat{G}_l, \hat{G}_r\right) - Q\left(\tilde{G}_l, \tilde{G}_r\right) \right|^{p}}, \quad D_s = \sqrt[q]{\frac{1}{L} \sum_{l=1}^{L} \left| Q\left(\hat{G}_l, P\right) - Q\left(\tilde{G}_l, \tilde{P}\right) \right|^{q}}$$
where $L$ is the number of bands, $\hat{G}$ is the fused HRM image, and $\tilde{G}$ is the LRM image.

Table 1. The specific observation parameters of the data set.

Table 3. Spectral angle difference and Euclidean distance difference of the marked pixels.

Table 4. Comparison of the values of the above four indices.

Table 5. Comparison of SSIM values for the whole subset image using the three methods.

Table 6. The simple statistics of the detected changes vector map and the checked changes vector map.