
Guided Image Filtering-Based Pan-Sharpening Method: A Case Study of GaoFen-2 Imagery

1 Faculty of Forestry, Southwest Forestry University, Kunming 650224, China
2 School of Printing and Packaging, Wuhan University, Wuhan 430079, China
3 Faculty of Design, Southwest Forestry University, Kunming 650224, China
4 School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 637553, Singapore
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2017, 6(12), 404; https://doi.org/10.3390/ijgi6120404
Submission received: 13 September 2017 / Revised: 5 December 2017 / Accepted: 7 December 2017 / Published: 12 December 2017

Abstract: GaoFen-2 (GF-2) is a civilian optical satellite developed by China that carries both multispectral and panchromatic sensors and is the first Chinese satellite with a spatial resolution below 1 m. Because pan-sharpening of GF-2 imagery has not been a focus of previous work, we propose a novel pan-sharpening method based on guided image filtering and compare its performance to state-of-the-art methods on GF-2 images. Guided image filtering is introduced to decompose and transfer the details and structures from the original panchromatic and multispectral bands. Thereafter, an adaptive model that considers the local spectral relationship is designed to properly inject spatial information back into the original spectral bands. Four pairs of GF-2 images acquired over urban, water body, cropland, and forest areas were selected for the experiments, and both quantitative metrics and visual inspection were used for the assessment. The experimental results demonstrate that, for GF-2 imagery acquired over different scenes, the proposed approach consistently achieves high spectral fidelity and enhances spatial details, thereby benefitting potential classification procedures.

1. Introduction

Numerous digital aerial cameras and optical earth observation satellites, such as QuickBird, WorldView-3, and GaoFen-2 (GF-2), can simultaneously acquire multispectral (MS) and panchromatic (Pan) images [1]. Due to physical constraints, high-resolution Pan images lack the spectral information of MS images, while MS images have a lower spatial resolution. To synergistically exploit both types of image in applications such as detailed land cover classification and change detection, it has become increasingly important to integrate their complementary strengths [2,3].
The GF-2 satellite, launched in August 2014, is a civilian optical remote sensing satellite developed by China and the first Chinese satellite with a resolution below 1 m. It is equipped with panchromatic and multispectral sensors that operate simultaneously. GF-2 achieves a spatial resolution of 0.8 m with a swath of 48 km in panchromatic mode and acquires images with a resolution of 3.2 m in four spectral bands in multispectral mode. The satellite is further characterized by high radiometric accuracy, high positioning accuracy, and fast attitude maneuverability, among other features. Given its low cost and availability, GF-2 can benefit many applications in China, such as detailed land cover/use classification, change detection, and landscape design. Because it is a recently launched optical satellite, exploring effective sharpening approaches that expand the application scope of its images is important.
Many pan-sharpening methods have been proposed to achieve high-spatial and high-spectral resolutions. These methods can be roughly classified into three categories: ratio enhancement (RE) methods, multiresolution analysis (MRA) methods, and component substitution (CS) methods [4]. In general, RE methods [5,6] use image division to compute a synthetic ratio; then, the pan-sharpening result is obtained by multiplying an MS image by the ratio. The MRA methods [7] utilize some multi-scale analysis tools, such as Laplacian pyramids or wavelet transform, to divide the spatial information of each image into many channels and then insert the high-frequency channels of the Pan image into the corresponding MS channels, before restoring them to produce a fused image. CS methods [8] first project the MS image into a vector space; then, one structural component of the MS bands is replaced by a Pan image, before applying an inverse transformation. The CS methods can be summarized into four steps [9,10], including: (a) resampling the MS image to the scale of the Pan image; (b) computing the intensity component (e.g., acquired by weighted summation of the MS image); (c) matching the histograms of the Pan image to the intensity component; and (d) injecting the extracted details according to a set of weight coefficients. Some studies [11] also indicate that the MRA methods can be formulated in the same way as the CS methods, but the main difference lies in the method used to compute the intensity component.
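To make this generic scheme concrete, the following Python/NumPy sketch implements steps (a)–(d) under illustrative assumptions (the resampling of step (a) is assumed done, the intensity uses equal band weights, the histogram matching is moment-based, and the injection gains are unit); it is not the implementation of any particular method cited here.

```python
import numpy as np

def cs_pansharpen(M, P, weights=None):
    """Generic CS fusion sketch: M is a resampled (H, W, N) MS array (step a),
    P an (H, W) Pan array; weights and gains are illustrative placeholders."""
    n_bands = M.shape[-1]
    w = np.full(n_bands, 1.0 / n_bands) if weights is None else np.asarray(weights)
    intensity = M @ w                                   # step (b): intensity component
    # step (c): moment-based matching of the Pan histogram to the intensity
    P_match = (P - P.mean()) / P.std() * intensity.std() + intensity.mean()
    detail = P_match - intensity                        # extracted spatial detail
    gains = np.ones(n_bands)                            # step (d): method-specific gains
    return M + detail[..., None] * gains                # inject detail into each band
```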
CS methods are practical and popular because of their fast calculation speed and convenient implementation. Representative CS methods include principal component analysis (PCA), Gram-Schmidt transformation (GS), Intensity-Hue-Saturation (IHS), and the University of New Brunswick (UNB) method [12], among others. These methods are widely used and retain the spatial details of the original Pan images well. However, spectral distortion occasionally occurs in the pan-sharpened images [13]. Zhang [12] attributes this distortion to the inefficiency of classical techniques on newer sensors, and Xie et al. [14] indicate that neglecting the spectral consistency term yields fused images that are not strictly spectrally consistent. A locally adaptive method, the adaptive GS method (GSA), was proposed in Ref. [10]; it preserves the spectral features without diminishing the spatial quality. Xie et al. [14] further reveal the implicit statistical assumptions of the CS methods from a Bayesian data fusion perspective: these methods implicitly treat the pixel values in different vectors as independent and identically distributed, and applying this assumption within a local sliding window generally alleviates spectral distortion.
These popular methods have also been employed to fuse data with different resolution ratios. For example, Fryskowska et al. [15] analyze the integration of Landsat 8 multispectral images with data from the high-spatial-resolution panchromatic EROS B satellite; they test six algorithms (Brovey, Multiplicative, PCA, IHS, Ehlers, and HPF), and their experimental results show that the Brovey and Multiplicative algorithms achieve better visual effectiveness. Santurri et al. [16] compare the pan-sharpened results of different methods on SPOT-HRV panchromatic and Landsat-TM multispectral images; the IHS-based and GSA methods are reported as the more effective techniques, whereas the other traditional methods barely achieve satisfactory results.
Recent developments in pan-sharpening include a fast method based on nearest-neighbor diffusion (NND) [17] and deep-learning-based algorithms [18,19]. The NND method assumes that each spectral value in the fused image is a linear mixture of the spectra of its directly adjacent superpixels, and it takes each pixel spectrum as the smallest unit of operation. A deep-learning network comprises multiple artificial neural networks with hidden layers; such models have excellent feature learning abilities [18] and have recently been introduced for image fusion. For instance, Liu et al. [19] propose a multi-focus image fusion method that trains a deep convolutional neural network on both high-quality image patches and their blurred versions to encode the mapping. Many experiments have shown that both the NND and deep-learning approaches can achieve strong fusion performance; however, the NND method may introduce spectral distortion in some scenes of very high-resolution images, while deep-learning methods require large amounts of training data to achieve acceptable performance, and their complex model structures often make the results difficult to explain. In this context, the increasing number of emerging satellite images motivates the development of new methods that counteract these limitations.
In recent years, applications of edge-preserving filtering, such as bilateral filtering [20], mean shift [21,22], and guided image filtering [23], have attracted a great deal of attention in the image processing community. Among these, guided image filtering, proposed by He et al. [23] in 2010, is quite popular due to its low computational cost and excellent edge-preserving properties.
Guided image filtering has been widely used to combine features from two different source images, for example in image matting/feathering [24], HDR compression [25], flash/no-flash de-noising [26], and haze removal [27]. By transferring the main boundaries of the guidance image to the filtered image, the original image can be smoothed while the gradient information of the guidance image is retained, which provides an interesting way to fuse features from multi-source data sets. Li et al. [28] developed an image fusion method with guided filtering, but it was tested only on multi-focus or multi-exposure images of natural scenes; the application of guided filtering to remote sensing pan-sharpening remains largely unexplored.
In this context, we propose a novel pan-sharpening method based on guided image filtering for fusing GF-2 images. In detail, the spectral coverage of the Pan and MS bands is considered, and a low-resolution Pan band is simulated through a linear regression model. During the filtering process, the resampled MS image is taken as the guidance image for the simulated Pan band. Next, the spatial information is obtained by subtracting the filter output from the original Pan image. Finally, the pan-sharpened image is synthesized by adaptively injecting the spatial details into each band of the resampled MS image.
The remaining sections of this paper are organized as follows. Section 2 reviews guided image filtering. Section 3 presents the proposed image pan-sharpening method. The experimental settings are introduced in Section 4, and Section 5 provides the experimental results and discussion. Finally, Section 6 provides conclusions.

2. Guided Image Filtering

Pan-sharpening can be considered as a process that combines the strengths of both panchromatic and multispectral images. Correspondingly, guided filtering [23] combines the characteristics of an original image and a guidance image. When the original and guidance images are properly selected, it is feasible to integrate the characteristics of both images. In this section, we first briefly review the guided image filtering algorithm; then, we analyze the properties of the guided image filtering with different parameter settings.

2.1. Guided Image Filtering

The guided image filter [23] assumes a local linear model between the filter output $Q$ and the guidance image $I$ in a local window $\omega_k$ centered at pixel $k$:

$$Q_i = a_k I_i + b_k, \quad \forall i \in \omega_k, \qquad (1)$$

where $a_k$ and $b_k$ are linear coefficients assumed constant in the square window $\omega_k$ of size $(2r+1) \times (2r+1)$, i.e., of radius $r$. The local linear model guarantees that $\nabla Q = a \nabla I$; that is, the filter output $Q$ has an edge only where the guidance image $I$ has an edge. The coefficients $a_k$ and $b_k$ are computed by minimizing the following cost function:
$$E(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k I_i + b_k - p_i)^2 + \varepsilon a_k^2 \right], \qquad (2)$$

where $p$ is the filter input and $\varepsilon$ is a user-assigned regularization parameter that prevents $a_k$ from becoming too large. The linear coefficients are obtained directly by linear ridge regression [29]:
$$a_k = \frac{\frac{1}{|\omega|} \sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \varepsilon}, \qquad b_k = \bar{p}_k - a_k \mu_k, \qquad \bar{p}_k = \frac{1}{|\omega|} \sum_{i \in \omega_k} p_i, \qquad (3)$$

where $\mu_k$ and $\sigma_k^2$ are the mean and variance of $I$ in $\omega_k$, and $|\omega|$ is the number of pixels in $\omega_k$. Because a pixel $i$ is covered by all the overlapping windows $\omega_k$ that contain it, the value of $Q_i$ differs when computed in different windows. An effective way to resolve this is to average all the possible values of $Q_i$; therefore, after computing $(a_k, b_k)$ for all windows $\omega_k$ in the image, the filter output is:

$$Q_i = \frac{1}{|\omega|} \sum_{k : i \in \omega_k} (a_k I_i + b_k) = \bar{a}_i I_i + \bar{b}_i, \qquad (4)$$

where $\bar{a}_i = \frac{1}{|\omega|} \sum_{k \in \omega_i} a_k$ and $\bar{b}_i = \frac{1}{|\omega|} \sum_{k \in \omega_i} b_k$.
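For concreteness, a minimal NumPy/SciPy sketch of Equations (1)–(4) follows; the function name and the use of scipy.ndimage.uniform_filter as the box mean are our choices, not part of the original formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r, eps):
    """Guided filter of He et al. [23]: I is the guidance image, p the filter
    input; r is the window radius and eps the regularization parameter."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1)  # box mean over omega_k
    mu_I, mu_p = mean(I), mean(p)
    var_I = mean(I * I) - mu_I ** 2                     # sigma_k^2 in Eq. (3)
    cov_Ip = mean(I * p) - mu_I * mu_p
    a = cov_Ip / (var_I + eps)                          # Eq. (3)
    b = mu_p - a * mu_I
    return mean(a) * I + mean(b)                        # Eq. (4): averaged coefficients
```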

2.2. Influence of Parameters

Two important parameters, the radius r of the local windows and the regularization parameter ε , determine the filtering performance. Some guided filtering results using various parameters are shown in Figure 1. Figure 1a is one band of the MS image, defined as the input image, and Figure 1c is the corresponding Pan image, which serves as the guidance image; Figure 1b,d show the filtering results of the guided filter with different parameter settings.
The edge-preserving performance is emphasized by the local rectangles. As shown, the structures in the guidance image are preserved in the filtering outputs to different extents. More specifically, in Figure 1b, as the filter size r increases, the edge information gradually becomes visible, meaning that an increasing amount of structural information is transferred from the guidance image to the filtering results. Meanwhile, as the degree of smoothing ε increases, only slight changes occur in the filtered outputs (Figure 1d).
However, as more structures of the guidance image (the Pan band) are transferred to the filtering outputs, the spectral information inherited from the original input image (MS) is reduced; likewise, the greater the value of ε, the more the details of the input image are smoothed. Therefore, to balance the trade-off between spectral and spatial quality, the values of r and ε should, empirically, not be too large.
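The sweeps of Figure 1 can be reproduced along the following lines, reusing the guided_filter sketch above; the random stand-in arrays are assumptions in place of actual GF-2 bands scaled to [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)
ms_band = rng.random((128, 128))     # stand-in for one MS band (the filter input)
pan = rng.random((128, 128))         # stand-in for the Pan band (the guidance)

# Figure 1b: eps fixed at 0.1^2, radius r varied
outputs_r = [guided_filter(pan, ms_band, r, eps=0.1 ** 2) for r in (2, 4, 6, 8)]
# Figure 1d: r fixed at 4, eps varied
outputs_eps = [guided_filter(pan, ms_band, 4, eps)
               for eps in (0.1 ** 2, 0.2 ** 2, 0.4 ** 2, 0.8 ** 2)]
```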

3. Proposed Algorithm for GaoFen-2 (GF-2) Datasets

In this section, we formulate the problem and subsequently introduce the proposed pan-sharpening method based on guided filtering. Then, we verify the effectiveness of the proposed method.

3.1. Problem Formulation and Notations

The goal of the proposed algorithm is to obtain new MS images that simultaneously possess high spectral and high spatial quality. In the following, $M$ and $P$ denote the original MS and Pan images, respectively. After resampling all the MS bands to the same spatial size as the Pan band, $P(x, y)$ is the Pan pixel value at position $(x, y)$; $M_i$ and $M_i(x, y)$, with $i \in \{1, 2, 3, 4\}$, are the $i$-th MS band and its pixel value at position $(x, y)$, respectively; and the final pan-sharpened output is denoted $F$.

3.2. Guided Filtering Based Pan-Sharpening

In this section, a pan-sharpening method based on guided image filtering is proposed. The flow chart of the method shown in Figure 2 can be described as follows.
(i)
The original multispectral image is registered and resampled to be the same size as the original Pan image P .
(ii)
The weights $w_i$ ($i = 1, 2, 3, 4$) are estimated by minimizing the residual sum of squares:

$$RSS(w_i) = \sum_x \sum_y \Big( P(x, y) - \sum_{i=1}^{4} w_i M_i(x, y) \Big)^2. \qquad (5)$$

Thereafter, a synthetic low-resolution panchromatic image $\tilde{P}$ is obtained with Equation (6):

$$\tilde{P} = \sum_{i=1}^{4} w_i M_i, \qquad (6)$$

where $\tilde{P}$ is the simulated low-resolution panchromatic image and $w_i$ is the weight of the $i$-th band $M_i$, which is constant for a given band.
(iii)
Each $M_i$ ($i = 1, 2, 3, 4$) is taken as the guidance image for filtering the low-resolution Pan image $\tilde{P}$, and the filter output $\hat{M}_i$ is obtained as follows:

$$\hat{M}_i = GF(M_i, \tilde{P}), \quad i = 1, 2, 3, 4, \qquad (7)$$

where $GF(u, v)$ denotes guided filtering with guidance image $u$ and input image $v$.
(iv)
The pan-sharpening result $F_i$ is obtained by extracting the spatial information from the Pan image and injecting it into the resampled MS band $M_i$ according to the weight $\alpha_i(x, y)$. This process is formulated in Equations (8) and (9):

$$F_i(x, y) = \big( P(x, y) - \hat{M}_i(x, y) \big) \cdot \alpha_i(x, y) + M_i(x, y), \quad i \in \{1, 2, 3, 4\}, \qquad (8)$$

$$\alpha_i(x, y) = \frac{1}{\sqrt{\sum_{(p, q) \in w(x, y)} \big( \hat{M}_i(p, q) - P(p, q) \big)^2}}, \quad i \in \{1, 2, 3, 4\}, \qquad (9)$$

where $F_i(x, y)$ is the fused image, $P(x, y)$ is the original Pan image, $M_i(x, y)$ is the resampled MS band, $\hat{M}_i(x, y)$ is the filter output, $\alpha_i(x, y)$ is the weight of the $i$-th MS band at position $(x, y)$, $w(x, y)$ denotes a local square window centered at $(x, y)$, $(p, q)$ indexes a pixel in $w(x, y)$, and $i$ runs over the four MS bands. The greater the distance in the denominator is, the smaller the weight becomes, and vice versa.
In the proposed algorithm, the guided filtering involves both the resampled spectral band $M_i$ (as the guidance image) and the simulated Pan band $\tilde{P}$ (as the input); therefore, the output band $\hat{M}_i$ preserves the structures of both $M_i$ and $\tilde{P}$. This results in less spectral distortion when extracting the spatial details from the Pan band.
Furthermore, the algorithm modulates the extracted spatial details with a position-dependent ratio $\alpha_i(x, y)$. More specifically, as shown in Equation (9), for each pixel at position $(x, y)$, centered in a window of size $(2R+1) \times (2R+1)$, the Euclidean distance between $\hat{M}_i$ and $P$ over the window is calculated, and the reciprocal of this distance indicates the amount of spatial detail to inject into the corresponding MS band. A small distance indicates a small spectral difference between the MS and Pan bands at that location, so the weight should be large; conversely, the larger the distance, the smaller the weight, implying a weak combination. In this way, spectral distortion is further reduced.
There are three important parameters in the proposed algorithm: the radius r of local windows, the regularization parameter ε in the guided filter, and the radius R of the local windows for calculating the weights. A detailed discussion concerning parameter selections is provided in Section 5.1.
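A sketch of the complete procedure, steps (i)–(iv), is given below, reusing the guided_filter function sketched in Section 2; the band-last array layout, the lstsq-based solution of Equation (5), and the square root in the weight (our reading of the Euclidean distance in Equation (9)) are assumptions for illustration, not the authors' reference implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def gif_pansharpen(M, P, r=3, eps=1e-8, R=3):
    """M: registered, resampled MS array of shape (H, W, 4) (step i);
    P: Pan array of shape (H, W)."""
    n_bands = M.shape[-1]
    # Step (ii): least-squares band weights, Eq. (5), and simulated Pan, Eq. (6)
    w, *_ = np.linalg.lstsq(M.reshape(-1, n_bands), P.ravel(), rcond=None)
    P_sim = M @ w

    F = np.empty_like(M, dtype=float)
    win = 2 * R + 1
    for i in range(n_bands):
        # Step (iii): band i guides the filtering of the simulated Pan, Eq. (7)
        M_hat = guided_filter(M[..., i].astype(float), P_sim, r, eps)
        # Step (iv): local injection weight, Eq. (9); uniform_filter gives the
        # window mean, so multiply by the window area to recover the sum
        dist_sq = uniform_filter((M_hat - P) ** 2, size=win) * win * win
        alpha = 1.0 / np.sqrt(dist_sq + 1e-12)          # guard against division by zero
        F[..., i] = (P - M_hat) * alpha + M[..., i]     # Eq. (8)
    return F
```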

3.3. Effectiveness of the Proposed Method

The effectiveness of the fused results depends primarily on the detail injection models, that is, the injection weights. A demonstration of detail enhancement and spectral preservation with different weights is provided in Figure 3 and Figure 4.
A 1-D example of detail enhancement with different models is shown in Figure 3. The 1-D input spectral signal (blue) is the spectral curve of part of the features in the first resampled multispectral band $M_1$. The product of SP (the spatial detail, i.e., the difference between the Pan band and the filtered output) and the injection weight is the injected detail layer (red). The injection weights compared are $\alpha_i(x, y)$ (Equation (9)), the equal-proportion injection model, and the GS-based model [30] (the covariance between the Pan band and the first resampled multispectral band). The enhanced signal (green) is the combination of the input signal and the detail layer.
Figure 3a shows that the result obtained with the proposed weight $\alpha_i$ preserves the gradient information well while clearly enhancing the spatial details; because $\alpha_i$ is calculated pixel by pixel, spatial details are not lost during processing. In contrast, when spatial details are injected in equal proportion, the enhanced signal exhibits an abnormal protrusion (Figure 3b). In Figure 3c, the result based on the GS model follows the same trend as the input signal, but the curve is smoother; because this model is global, some detail may be lost in the fused image.
Figure 4 compares fused images produced with the different injection weights mentioned above. During processing, the radius and regularization parameter of the guided filter were set to 3 and $10^{-8}$, respectively. The zoomed-in patches indicate that the proposed method, using $\alpha_i$ as the injection weight, achieves better spectral preservation and detail enhancement than the other models.

4. Datasets and Experimental Settings

In this section, the data sets are introduced first; then, some state-of-the-art fusion methods are used for comparisons, and several evaluation metrics are briefly described.

4.1. GF-2 Datasets

The characteristics of the employed datasets are shown in Table 1. The MS image consists of four bands (blue, green, red, and near infrared (NIR)), and the spectral range of the MS bands is exactly covered by that of the Pan band. Four pairs of GF-2 images acquired over urban, water body, cropland, and forest areas were employed to evaluate the effectiveness of the proposed method (the original MS images were resampled to the same size as the Pan images).

4.2. Methods Considered for Comparison

Five state-of-the-art pan-sharpening methods were selected for comparison: Gram-Schmidt transformation [30] and NND pan-sharpening [17] from the ENVI software, the University of New Brunswick method [12] from the PCI Geomatica software, the adaptive GS (GSA) method [10], and the GD method [31]. These approaches are either integrated into commercial software or newly developed, and all have been shown to be efficient for fusing remote sensing images.
(1)
Gram-Schmidt Transformation (GS) [30]: The general GS method simulates a low-resolution Pan band from the blue, green, red, and NIR bands of the MS image using predefined weights; in the ENVI 5.3 implementation used here, the simulated band is the average of the low-resolution MS bands. The GS transformation is then applied to the stack formed by the synthetic Pan band and the low-resolution MS bands, with the synthetic band as the first component. Finally, the high-resolution Pan image replaces the first GS component, and the inverse GS transformation produces the fused MS image.
(2)
Adaptive GS method (GSA) [10]: The GSA method follows the same procedure as the general GS method, except that the GS method uses equal weight coefficients for each MS band to obtain the intensity image, whereas the GSA method derives the weights by regression between the MS bands and the degraded low-resolution Pan image. Both methods use the injection gains given by Equation (10):

$$g_i = \frac{\mathrm{cov}(M_i, I_L)}{\mathrm{var}(I_L)}, \quad i = 1, \ldots, N, \qquad (10)$$

where $\mathrm{cov}(M_i, I_L)$ is the covariance between the $i$-th MS band and the low-resolution intensity image $I_L$, $\mathrm{var}(I_L)$ is the variance of $I_L$, and $N$ is the total number of MS bands.
(3)
Nearest-neighbor Diffusion-based Pan-Sharpening (NND) [17]: The NND method first downsamples the high-resolution Pan image to the size of the MS image. It then calculates the spectral band contribution vector by linear regression and obtains difference factors from the neighboring superpixels of each pixel in the original Pan image. Finally, a linear mixture model is applied to produce the fused image. Two important external parameters, an intensity smoothness factor and a spatial smoothness factor, are set according to the intended application; in this study, their default values were used in all experiments.
(4)
University of New Brunswick method (UNB) [12]: The UNB pan-sharpening method first equalizes the histograms of the MS and Pan images. The spectral bands of the MS image that are covered by the Pan band are then used to produce a synthetic image via the least squares technique. Finally, all the equalized MS bands are fused with the synthetic image to obtain a high-resolution multispectral image. The method is integrated into the PCI Geomatica software and was executed with its default parameters in this study.
(5)
GD method [31]: Zhao et al. [31] proposed a fusion method based on the guided image filter that takes the resampled MS image as the guidance image and the original Pan image as the input image, yielding a filtered image carrying the related spatial information. The spatial details of the original Pan image are then extracted and injected into each MS band according to the weight $w_i$ defined by Equation (11) [12], which gives the optimal coefficient indicating the amount of spatial detail to inject into the corresponding MS band:

$$w_i = \frac{\mathrm{cov}(P, MS_i)}{\mathrm{var}(P)}, \qquad (11)$$

where $\mathrm{cov}(P, MS_i)$ is the covariance between $P$ and the $i$-th MS band, and $\mathrm{var}(P)$ is the variance of the original Pan image.
It is worth mentioning that the GD method employs a global detail-injection model in which the weight for a given band is fixed and calculated according to Equation (11). In contrast, the proposed pan-sharpening method employs a local injection model that is adaptively determined by the local statistic defined in Equation (9), as illustrated in the snippet below.
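The snippet contrasts the two kinds of injection weight: the global gains of Equations (10) and (11) yield a single scalar per band, whereas the local weight of Equation (9) forms a per-pixel map; the helper names are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def global_gain(band, ref):
    """Scalar gain cov(band, ref) / var(ref): ref = I_L in Eq. (10), ref = P in Eq. (11)."""
    c = np.cov(band.ravel(), ref.ravel())   # 2 x 2 covariance matrix
    return c[0, 1] / c[1, 1]

def local_weight(m_hat, P, R=3):
    """Per-pixel weight of Eq. (9) from the windowed distance to the Pan band."""
    win = 2 * R + 1
    dist_sq = uniform_filter((m_hat - P) ** 2, size=win) * win * win
    return 1.0 / np.sqrt(dist_sq + 1e-12)
```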

4.3. Evaluation Methods

In this paper, both visual interpretations and quantitative evaluation methods were employed to verify the effectiveness of the proposed method.
In general, quality evaluation approaches are of two types: reduced-resolution and full-resolution assessment. In the first approach, evaluation metrics such as the relative dimensionless global error in synthesis (ERGAS) [32], the Spectral Angle Mapper (SAM) [33], and Q4 [34] require a reference image for comparison. In many previous experiments, the original Pan and MS images were degraded by a factor of four and the degraded images were pan-sharpened, so that the original images could serve as references [35]. However, the scale invariance assumption does not always hold in practice, and the accuracy is fundamentally influenced by how the resolution degradation is performed [13]. Therefore, to avoid potential bias introduced by degradation, we directly used the resampled MS and original Pan images as the references for quantitative assessment of the different pan-sharpening results.
Using the resampled MS images as references, four widely used metrics covering spatial and spectral quality were selected for quantitative assessment: Entropy, the correlation coefficient (CC) [36], the universal image quality index (UIQI) [37], and the relative dimensionless global error in synthesis (ERGAS) [32]. Furthermore, the pan-sharpened images were classified based only on their spectral features, and the classification accuracy was employed as a metric to indirectly verify the effectiveness of the proposed method.
(1)
Entropy measures the spatial information contained in a fused image; the higher the Entropy, the richer the spatial information. It is expressed as follows:

$$\mathrm{Entropy} = -\sum_{i=0}^{255} F(i) \log_2 F(i), \qquad (12)$$

where $F(i)$ is the probability of pixel value $i$ in the image.
(2)
CC [36] measures the correlation between the MS image and the fused image. The value of CC ranges from 0 to 1; a higher value indicates better correspondence between the two images, and the ideal value is 1. CC is defined as follows:

$$CC = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} [M(i,j) - \bar{M}][F(i,j) - \bar{F}]}{\sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n} [M(i,j) - \bar{M}]^2 \, \sum_{i=1}^{m} \sum_{j=1}^{n} [F(i,j) - \bar{F}]^2}}, \qquad (13)$$

where $M(i,j)$ and $F(i,j)$ are the pixel values of the original MS image and the fused image, and $\bar{M}$ and $\bar{F}$ denote their respective mean values.
(3)
UIQI [37] models any distortion as a combination of three factors: loss of correlation, luminance distortion, and contrast distortion. It is suitable for most image evaluations, and its best value is 1. UIQI is given by:

$$UIQI = \frac{\sigma_{xy}}{\sigma_x \sigma_y} \times \frac{2 \bar{x} \bar{y}}{\bar{x}^2 + \bar{y}^2} \times \frac{2 \sigma_x \sigma_y}{\sigma_x^2 + \sigma_y^2}, \qquad (14)$$

where $\bar{x}$ and $\bar{y}$ are the mean values of the fused and original images, $\sigma_x$ and $\sigma_y$ are their standard deviations, and $\sigma_{xy}$ is their covariance.
(4)
ERGAS [32] evaluates the overall spectral distortion of the pan-sharpened image; the lower the ERGAS value, the better the spectral quality, and the best value is 0. ERGAS is defined as follows:

$$ERGAS = 100 \, \frac{d_P}{d_{MS}} \sqrt{\frac{1}{K} \sum_{i=1}^{K} \frac{RMSE^2(i)}{MEAN^2(i)}}, \qquad (15)$$

where $d_P / d_{MS}$ is the ratio between the pixel sizes of the Pan and MS images, $K$ is the number of bands, $MEAN(i)$ is the mean of the $i$-th band, and $RMSE(i)$ is the root-mean-square error between the $i$-th bands of the reference and pan-sharpened images.
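For reference, the four metrics can be computed along the following lines in NumPy; the 8-bit histogram range in the Entropy sketch and the band-last array layout are assumptions.

```python
import numpy as np

def entropy(F):                      # Eq. (12), assuming 8-bit pixel values
    hist, _ = np.histogram(F, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                     # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

def cc(M, F):                        # Eq. (13)
    m, f = M - M.mean(), F - F.mean()
    return float((m * f).sum() / np.sqrt((m ** 2).sum() * (f ** 2).sum()))

def uiqi(x, y):                      # Eq. (14): correlation x luminance x contrast
    sxy = ((x - x.mean()) * (y - y.mean())).mean()
    return float(sxy / (x.std() * y.std())
                 * 2 * x.mean() * y.mean() / (x.mean() ** 2 + y.mean() ** 2)
                 * 2 * x.std() * y.std() / (x.std() ** 2 + y.std() ** 2))

def ergas(ref, fused, ratio):        # Eq. (15); ratio = d_P / d_MS, e.g. 0.8 / 3.2
    K = ref.shape[-1]
    terms = [((ref[..., i] - fused[..., i]) ** 2).mean() / ref[..., i].mean() ** 2
             for i in range(K)]
    return float(100 * ratio * np.sqrt(np.mean(terms)))
```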

5. Results and Discussion

In this section, the influence of the three parameters is discussed first. Subsequently, four groups of experimental results and the corresponding image quality assessments are presented and discussed. Finally, the computational complexity of the proposed method is reported.

5.1. Analysis of the Influence of Parameters

5.1.1. Parameter Influences in the Guided Filter

As mentioned in Section 2.2, the parameters r and ε affect the filtering size and smoothing degree of the guided filter, respectively. To obtain the optimal parameter settings, an image of size 500 × 500 pixels was employed to conduct a parameter analysis, and two metrics, Entropy and SAM, were used as measures. Entropy is related to the spatial quality, while SAM [33] quantifies the spectral distortion by computing the angle between the corresponding pixels of the pan-sharpened and reference images. Figure 5 and Figure 6 show the influences of these two parameters on the pan-sharpening performance.
In these experiments, the window size for the weight calculation was fixed at 7 × 7, and seven groups of r and ε values were evaluated. When the influence of r was analyzed, ε was fixed at $10^{-3}$ and $10^{-6}$; r was fixed at 2 and 4 when the influence of ε was analyzed.
From Figure 5, we can see that when ε is fixed, a larger r reduces the Entropy value and increases the SAM value. When r is less than 3, the changes in Entropy and SAM are not obvious; when r is greater than 3, the Entropy value gradually decreases. Therefore, considering the trade-off between the two metrics, the value of r should be neither too large nor too small.
In Figure 6, when r is fixed, the Entropy value becomes larger and the SAM value continues to decrease as ε decreases. An ε value greater than $10^{-3}$ has a negligible effect on both metrics, while for ε below $10^{-4}$ the Entropy value decreases slowly and the SAM value continues to decrease. Therefore, the value of ε should also be neither too large nor too small. Consequently, in the following experiments, we set r and ε to 3 and $10^{-8}$, respectively.

5.1.2. The Influence of the Window Radius R for Calculating Weights

The window radius R used for the weight calculation during the fusion process is another important parameter. Here, r and ε were fixed at 3 and $10^{-8}$, and R was set to 1, 3, 5, 7, and 9 to analyze its influence on the pan-sharpening performance. The quality index Q4 [34] was employed as the measure; it is computed over N × N blocks and averaged over the whole image to produce a global evaluation index. The value of Q4 ranges from 0 to 1; lower values reflect greater spectral distortion in the fused product, while higher values indicate a result closer to the reference image. In the experiments, N was set to 32.
As shown in Figure 7, as R increases, the value of Q4 tends to rise, indicating better spectral preservation. However, the detailed pictures of the building edges in Figure 8a show that the greater the radius, the more the spatial details are blurred. The spectral profile curves at the same position in these detailed pictures are shown in Figure 8b: the smaller the R value, the sharper the edges of the corresponding curves. Therefore, to achieve a better overall effect, the radius R should be neither too small nor too large. Considering this trade-off, R was consistently set to 3 in the experiments; that is, the window size for the weight calculation was 7 × 7.
In the above experiments, we found that, according to the different metrics (i.e., Entropy, SAM, and Q4), the optimal window sizes for the filtering and the weight calculation were both 7 × 7. Because both windows model the connection between the Pan and MS bands, it is reasonable to set the two window sizes to be the same, and we do so throughout.

5.2. Comparison of Different Pan-Sharpening Approaches

In this subsection, the proposed pan-sharpening method was compared with some other state-of-the-art approaches. Detailed information about these methods is provided in Section 4.2. As shown in Figure 9, Figure 10, Figure 11 and Figure 12, local patches with various land cover types were clipped from the fused results and displayed in true color using the same stretching mode. Quantitative assessments of these four sets of test data are shown in Table 2, Table 3, Table 4 and Table 5. The best performance of each metric is in bold.
In general, all the methods yield images with visibly enhanced spatial detail relative to the original MS image. For the urban area (Figure 9 and Table 2), the NND method exhibits obvious spectral distortion, and its ERGAS value is the largest. From the local detail images, however, the differences between the GS, GSA, UNB, and GD methods are not obvious. Furthermore, Table 2 shows that the proposed method achieves better spectral and spatial performance; its values for all four metrics are the best.
As shown in Figure 10, the GSA, UNB, and GD methods cannot preserve the spectral information well for water bodies, and the color of the river is paler and not as deep blue as in the original MS image. In Table 3, we can see that the Entropy value of the NND method is the best; however, its values on the other metrics are the worst. This demonstrates that the NND method performs well in enhancing detail but does not perform well on spectral preservation; this result may have occurred because the NND method is more suitable for fusing low-resolution images.
As seen from Figure 11, the fused images obtained by the NND and GD methods had serious problems with spectral performance: the color of the cropland in their results is obviously quite different from that of the original MS image. In addition, in Table 4, the NND and GD methods’ UIQI, CC and ERGAS values were the worst, but the overall spatial information was well preserved from a qualitative point of view. This is because the GD method employs a global detail-injection model; thus, color distortion will occur in some specific scenes. However, the proposed method based on the local injection model can be a good solution to spectral distortion. The GS, GSA, and proposed method performed better than the others.
For the forest area shown in Figure 12 and Table 5, from a visual analysis, the proposed method achieves the best spectral and spatial information performance compared to the other methods, followed by the GSA, UNB, GS, and GD methods and finally, the NND method. In the quantitative evaluation, the UIQI and CC values of the proposed method were the best, and its ERGAS value was second best, which is consistent with the visual analysis results.
Many pan-sharpened images are employed not only for manual interpretation, but also for computer-based interpretation. Therefore, classification accuracy was used as an indirect evaluation method to verify the effectiveness of the proposed method. A better fusion method should result in fused images with higher interclass variance and, thus, should obtain better classification results. Therefore, in our work, the pan-sharpened images are classified based on spectral features. The overall accuracy (OA) and Kappa are employed as the measures of classification accuracy. The higher the OA and Kappa values are, the better the classification effect is.
In detail, a sample image of 400 × 600 pixels was pan-sharpened by the different methods; then, supervised classification with a Support Vector Machine (SVM) [38] was applied to the fused results. Figure 13 shows the classification maps, among which Figure 13b,c shows the ground truth data with 157 blocks covering 10 classes; the number of test samples per class was 1/10 of the number of training samples. As seen in Figure 13, the classification results obtained by the NND and UNB methods are not particularly satisfactory, whereas the proposed fusion method achieves a more consistent classification result. Table 6 lists the OA and Kappa values corresponding to the classification maps. Although various misclassifications occurred, the proposed method achieved the highest accuracy, which indicates that it is more effective at spectral preservation than the other tested methods.
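As a sketch of this indirect evaluation, scikit-learn's SVC can stand in for the SVM classifier of Ref. [38]; the per-pixel spectra and labels (X_train, y_train, X_test, y_test) are assumed to be drawn from the training and test blocks described above.

```python
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, cohen_kappa_score

def evaluate_fused(X_train, y_train, X_test, y_test):
    """Classify per-pixel spectra of a fused image and report OA and Kappa."""
    clf = SVC(kernel="rbf")          # supervised SVM on spectral features only
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    return accuracy_score(y_test, y_pred), cohen_kappa_score(y_test, y_pred)
```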
In conclusion, these experimental results demonstrate that the proposed approach both enhances the spatial information and effectively preserves the spectral characteristics of the original MS images with less distortion. In addition, the produced images can improve the classification accuracy, which is important for subsequent applications.

5.3. Computational Complexity

In this section, the computational time of each pan-sharpening method is described to evaluate the computational efficiency. We employed MATLAB on a laptop with 4 GB of memory and a 2.4 GHz CPU to perform the experiments. For a 500 × 500-pixel image, the proposed method requires 11.28 s, while the GS, NND, UNB, GSA, and GD methods require 1.84 s, 1.67 s, 1.52 s, 1.43 s, and 2.36 s, respectively. Compared to the other algorithms, the proposed approach consumes more time; this may be because the weight calculation during the pan-sharpening process is performed in a pixel-by-pixel fashion, and the loop is not efficient enough. Therefore, the speed of the proposed algorithm can be further improved by using a more efficient computational approach.

6. Conclusions

The goal of pan-sharpening methods is to simultaneously increase the spatial resolution of an original multispectral image while retaining its spectral features. In this study, we proposed a novel pan-sharpening method based on guided image filtering, and applied it to GF-2 images. The underlying idea of this approach is to consider the spectral difference of each pixel between a resampled MS image and a corresponding Pan image, and to adaptively inject the details of the Pan image into the MS image to yield high-resolution MS images. The experimental results and quality assessments demonstrated that, for GF-2 imagery acquired over different scenes, the proposed pan-sharpening approach consistently achieves high spectral fidelity and enhances the spatial details, regardless of the image content. Furthermore, it can also improve the classification accuracy, which is an important aspect in applications of GF-2 images. Finally, adaptively selecting the window size for the weight calculations and estimating the parameters of the guided filtering process requires further research.

Acknowledgments

The authors would like to acknowledge the support from the National Natural Science Foundation of China (no. 41571372, no. 41771375 and no. 41301470). The authors would also like to thank the editors and the reviewers for their insightful comments and suggestions.

Author Contributions

Y.Z. conducted the experiment and prepared the manuscript. Q.D. helped write the manuscript. L.W. proposed the general framework and helped write the manuscript. Z.T. also helped write the manuscript.

Conflicts of Interest

The authors declare no potential conflicts of interest.

References

  1. Zhang, Y.; Mishra, R.K. A review and comparison of commercially available pan-sharpening techniques for high resolution satellite image fusion. In Proceedings of the Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 182–185. [Google Scholar]
  2. Dong, J.; Zhuang, D.; Huang, Y.; Fu, J. Advances in multi-sensor data fusion: Algorithms and applications. Sensors 2009, 9, 7771–7784. [Google Scholar] [CrossRef] [PubMed]
  3. Ehlers, M.; Klonus, S.; Johan Åstrand, P.; Rosso, P. Multi-sensor image fusion for pansharpening in remote sensing. Int. J. Image Data Fusion 2010, 1, 25–45. [Google Scholar] [CrossRef]
  4. Xu, Q.; Li, B.; Zhang, Y.; Ding, L. High-Fidelity Component Substitution Pansharpening by the Fitting of Substitution Data. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7380–7392. [Google Scholar] [CrossRef]
  5. Gillespie, A.R.; Kahle, A.B.; Walker, R.E. Color enhancement of highly correlated images. II. Channel ratio and chromaticity transformation techniques. Remote Sens. Environ. 1987, 22, 343–365. [Google Scholar] [CrossRef]
  6. Zhang, Y. A new merging method and its spectral and spatial effects. Int. J. Remote Sens. 1999, 20, 2003–2014. [Google Scholar] [CrossRef]
  7. Murga, J.N.D.; Otazu, X.; Fors, O.; Arbiol, R. Multiresolution-based image fusion with additive wavelet decomposition. IEEE Trans. Geosci. Remote Sens. 1999, 37, 1204–1211. [Google Scholar] [CrossRef] [Green Version]
  8. Tu, T.-M.; Huang, P.S.; Hung, C.-L.; Chang, C.-P. A fast intensity–hue–saturation fusion technique with spectral adjustment for IKONOS imagery. IEEE Geosci. Remote Sens. Lett. 2004, 1, 309–312. [Google Scholar] [CrossRef]
  9. Dou, W.; Chen, Y.; Li, X.; Sui, D.Z. A general framework for component substitution image fusion: An implementation using the fast image fusion method. Comput. Geosci. 2007, 33, 219–228. [Google Scholar] [CrossRef]
  10. Aiazzi, B.; Baronti, S.; Selva, M. Improving Component Substitution Pansharpening Through Multivariate Regression of MS +Pan Data. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3230–3239. [Google Scholar] [CrossRef]
  11. Vivone, G.; Alparone, L.; Chanussot, J. A Critical Comparison Among Pansharpening Algorithms. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2565–2585. [Google Scholar] [CrossRef]
  12. Zhang, Y. Understanding image fusion. Photogramm. Eng. Remote Sens. 2004, 70, 657–661. [Google Scholar]
  13. Pohl, C.; Van Genderen, J.L. Review article Multisensor image fusion in remote sensing: Concepts, methods and applications. Int. J. Remote Sens. 1998, 19, 823–854. [Google Scholar] [CrossRef]
  14. Xie, B.; Zhang, H.; Huang, B. Revealing Implicit Assumptions of the Component Substitution Pansharpening Methods. Remote Sens. 2017, 9, 443. [Google Scholar] [CrossRef]
  15. Fryskowska, A.; Wojtkowska, M.; Delis, P.; Grochala, A. Some aspects of satellite imagery integration from EROS B and Landsat 8. ISPRS–Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XLI-B7, 647–652. [Google Scholar] [CrossRef]
  16. Santurri, L.; Carlà, R.; Fiorucci, F.; Aiazzi, B.; Baronti, S.; Guzzetti, F. Assessment of very high resolution satellite data fusion techniques for landslide recognition. In Proceedings of the ISPRS Centenary Symposium, Vienna, Austria, 5–7 July 2010; Volume 38, pp. 492–497. [Google Scholar]
  17. Sun, W.; Chen, B.; Messinger, D.W. Nearest-neighbor diffusion-based pan-sharpening algorithm for spectral images. Opt. Eng. 2014, 53, 013107. [Google Scholar] [CrossRef]
  18. Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef] [PubMed]
  19. Liu, Y.; Chen, X.; Peng, H.; Wang, Z. Multi-focus image fusion with a deep convolutional neural network. Inf. Fusion 2017, 36, 191–207. [Google Scholar] [CrossRef]
  20. Jiang, W.; Baker, M.L.; Wu, Q.; Bajaj, C.; Chiu, W. Applications of a bilateral denoising filter in biological electron microscopy. J. Struct. Biol. 2003, 144, 114–122. [Google Scholar] [CrossRef] [PubMed]
  21. Fukunaga, K.; Hostetler, L.D. The estimation of the gradient of a density function, with applications in pattern recognition. IEEE Trans. Inf. Theory 1975, 21, 32–40. [Google Scholar] [CrossRef]
  22. Cheng, Y.Z. Mean shift, mode seeking, and clustering. IEEE Trans. Pattern Anal. Mach. Intell. 1995, 17, 790–799. [Google Scholar] [CrossRef]
  23. He, K.; Sun, J.; Tang, X. Guided Image Filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409. [Google Scholar] [CrossRef] [PubMed]
  24. Levin, A.; Lischinski, D.; Weiss, Y. A closed form solution to natural image matting. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 30, 228–242. [Google Scholar] [CrossRef] [PubMed]
  25. Durand, F.; Dorsey, J. Fast bilateral filtering for the display of high-dynamic-range images. ACM Trans. Graph. (TOG) 2002, 21, 257–266. [Google Scholar] [CrossRef]
  26. Petschnigg, G.; Szeliski, R.; Agrawala, M.; Cohen, M.; Hoppe, H.; Toyama, K. Digital photography with flash and no-flash image pairs. ACM Trans. Graph. 2004, 23, 664–672. [Google Scholar] [CrossRef]
  27. He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353. [Google Scholar] [CrossRef] [PubMed]
  28. Li, S.; Kang, X.; Hu, J. Image Fusion With Guided Filtering. IEEE Trans. Image Process. 2013, 22, 2864–2875. [Google Scholar] [CrossRef] [PubMed]
  29. Draper, N.; Smith, H. Applied Regression Analysis, 2nd ed.; John Wiley: New York, NY, USA, 1981. [Google Scholar]
  30. Laben, C.A.; Brower, B.V. Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-Sharpening. U.S. Patent 6011875 A, 4 January 2000. [Google Scholar]
  31. Zhao, W.; Dai, Q.; Zheng, Y.; Wang, L. A new pansharpen method based on guided image filtering: A case study over Gaofen-2 imagery. In Proceedings of the IGARSS IEEE International Geoscience and Remote Sensing Symposium, Beijing, China, 10–15 July 2016; pp. 3766–3769. [Google Scholar]
  32. Wald, L.; Ranchin, T.; Mangolini, M. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogramm. Eng. Remote Sens. 1997, 63, 691–699. [Google Scholar]
  33. Yuhas, R.H.; Goetz, A.F.H.; Boardman, J.W. Discrimination among semi-arid landscape endmembers using the Spectral Angle Mapper (SAM) algorithm. In Proceedings of the Summaries 3rd Annual JPL Airborne Geoscience Workshop, Pasadena, CA, USA, 1–5 June 1992; pp. 147–149. [Google Scholar]
  34. Alparone, L.; Baronti, S.; Garzelli, A.; Nencini, F. A global quality measurement of Pan-sharpened multispectral imagery. IEEE Geosci. Remote Sens. Lett. 2004, 1, 313–317. [Google Scholar] [CrossRef]
  35. Ranchin, T.; Wald, L. Fusion of high spatial and spectral resolution images: The ARSIS concept and its implementation. Photogramm. Eng. Remote Sens. 2000, 66, 49–61. [Google Scholar]
  36. Klonus, S.; Ehlers, M. Image fusion using the Ehlers spectral characteristics preserving algorithm. GIS Remote Sens. 2007, 44, 93–116. [Google Scholar] [CrossRef]
  37. Wang, Z.; Bovik, A.C. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84. [Google Scholar] [CrossRef]
  38. Boser, B.; Guyon, I.; Vapnik, V. A training algorithm for optimal margin classifiers. In Proceedings of the 5th Annual Workshop on Computational Learning Theory, Pittsburgh, PA, USA, 27–29 July 1992; pp. 144–152. [Google Scholar]
Figure 1. (a) The input image (one band of the multispectral image); (c) the guidance image (the panchromatic band); (b) the outputs: from left to right, the regularization parameter ε is set to 0.1² and the radius r is set to 2, 4, 6, and 8; (d) the outputs: from left to right, the radius r is set to 4 and the regularization parameter ε is set to 0.1², 0.2², 0.4², and 0.8². As the filter size r increases, the edge information gradually becomes visible; as the degree of smoothing ε increases, the changes in the filtering outputs are slight.
Figure 2. The flow chart of the proposed method. The numbers in parentheses reference the related equations.
Figure 3. 1-D illustrations of different detail injection models: (a) The proposed model, detail = SP × αi; (b) Equal proportion injection model, detail = SP (spatial detail) × 1; (c) Generic GS based model [30].
Figure 4. Example of the fused results with different injection weights: (a) The original MS (multispectral) image; (b) The proposed model, detail = SP × αi; (c) Equal proportion injection model, detail = SP × 1; (d) Generic GS (Gram-Schmidt transformation) based model [30]; (e) Local patches from (b) to (d).
Figure 5. Analysis of the influence of the parameter r.
Figure 6. Analysis of the influence of the parameter ε.
Figure 7. Analysis of the influence of the parameter R.
Figure 8. (a) Details of the building edges with different window radii. (b) The corresponding spectral profile curves of specific parts in (a).
Figure 9. The fused results and some local patches using different methods on GF-2 images over an urban area: (a) original Pan (panchromatic) image; (b) original MS image; (c) GS; (d) NND (nearest-neighbor diffusion); (e) UNB (University of New Brunswick method); (f) GSA (adaptive GS method); (g) GD; and (h) the proposed method.
Figure 10. The fused results using different methods on GF-2 images over water bodies: (a) original Pan image; (b) original MS image; (c) GS; (d) NND; (e) UNB; (f) GSA; (g) GD; and (h) the proposed method.
Figure 11. The fused results using different methods on GF-2 images over cropland: (a) original Pan image; (b) original MS image; (c) GS; (d) NND; (e) UNB; (f) GSA; (g) GD; and (h) the proposed method.
Figure 12. The fused results using different methods on GF-2 images over a forest area: (a) original Pan image; (b) original MS image; (c) GS; (d) NND; (e) UNB; (f) GSA; (g) GD; and (h) the proposed method.
Figure 13. The classification results of the fused image produced from different methods: (a) the original image displayed in true color; (b) and (c) ground truth data; (d) GS; (e) NND; (f) UNB; (g) GSA; (h) GD; and (i) the proposed method.
Table 1. Characteristics of the employed GaoFen-2 (GF-2) datasets.

Spatial resolution    MS: 3.2 m; Pan: 0.8 m
Spectral range        Blue: 450–520 nm; Green: 520–590 nm; Red: 630–690 nm; NIR: 770–890 nm; Pan: 450–900 nm
Image locations       Guangzhou
Land cover types      Urban, rural, water body, cropland, forest, concrete buildings, etc.
Image size (pixels)   MS: 250 × 250 with Pan: 1000 × 1000 (one scene); MS: 1250 × 1250 with Pan: 5000 × 5000 (three scenes)
Table 2. Quality evaluation of fused images: urban areas (corresponding to Figure 9).

Method      Entropy   UIQI    CC      ERGAS
MS          7.189     –       –       –
GS          6.877     0.848   0.878   22.504
NND         6.863     0.779   0.881   36.835
UNB         6.685     0.824   0.881   26.102
GSA         6.912     0.893   0.888   21.001
GD          6.891     0.878   0.902   25.731
Proposed    7.156     0.959   0.962   14.150
Table 3. Quality evaluation of fused images: water bodies (corresponding to Figure 10).

Method      Entropy   UIQI    CC      ERGAS
MS          4.113     –       –       –
GS          3.978     0.608   0.633   31.796
NND         3.997     0.393   0.593   70.754
UNB         3.802     0.606   0.661   40.418
GSA         3.916     0.700   0.757   39.723
GD          3.805     0.538   0.598   44.323
Proposed    3.933     0.726   0.790   39.589
Table 4. Quality evaluation of fused images: cropland (corresponding to Figure 11).

Method      Entropy   UIQI    CC      ERGAS
MS          6.709     –       –       –
GS          6.642     0.855   0.859   14.697
NND         6.765     0.661   0.766   40.106
UNB         6.464     0.828   0.844   18.575
GSA         6.625     0.908   0.912   15.126
GD          6.658     0.777   0.841   29.136
Proposed    6.563     0.905   0.921   16.460
Table 5. Quality evaluation of fused images: forest area (corresponding to Figure 12).

Method      Entropy   UIQI    CC      ERGAS
MS          5.882     –       –       –
GS          6.749     0.663   0.732   56.697
NND         6.578     0.727   0.745   28.991
UNB         6.482     0.688   0.752   56.064
GSA         6.588     0.720   0.792   55.977
GD          6.667     0.675   0.753   59.855
Proposed    6.504     0.856   0.896   34.666
Table 6. Classification accuracy achieved from images produced by the different methods.

Method     GS       NND      UNB      GSA      GD       Proposed
OA (%)     74.08    73.91    72.50    74.77    74.77    76.04
Kappa      0.7029   0.7009   0.6844   0.7107   0.7103   0.7256
