Article

Pansharpening with a Gradient Domain GIF Based on NSST

Science and Technology on Complex Electronic System Simulation Laboratory, Space Engineering University, Beijing 101416, China
* Author to whom correspondence should be addressed.
Electronics 2019, 8(2), 229; https://doi.org/10.3390/electronics8020229
Submission received: 24 December 2018 / Revised: 13 February 2019 / Accepted: 15 February 2019 / Published: 18 February 2019
(This article belongs to the Section Computer Science & Engineering)

Abstract

In order to improve the fusion quality of multispectral (MS) and panchromatic (PAN) images, a pansharpening method using a gradient domain guided image filter (GIF) based on the non-subsampled shearlet transform (NSST) is proposed. First, multi-scale decomposition of the MS and PAN images is performed by NSST. Second, different fusion rules are designed for the high- and low-frequency coefficients. For the low-frequency coefficients, a fusion rule based on morphological filter-based intensity modulation (MFIM) technology is proposed, and edge refinement is carried out with a gradient domain GIF to obtain the fused low-frequency coefficients. For the high-frequency coefficients, a fusion rule based on an improved pulse coupled neural network (PCNN) is adopted; the gradient domain GIF optimizes the firing map of the PCNN model, and the fusion decision map is then calculated to guide the fusion of the high-frequency coefficients. Finally, the fused high- and low-frequency coefficients are reconstructed with the inverse NSST to obtain the fused image. The proposed method was tested on the WorldView-2 and QuickBird data sets; the subjective visual effects and objective evaluation demonstrate that it is superior to state-of-the-art pansharpening methods and can effectively improve spatial quality and spectral preservation.

1. Introduction

With the development of satellite technology and imaging systems, we are able to obtain remote sensing images with ever higher resolution. High-resolution remote sensing images can describe target information more accurately, which is of great significance for numerous applications, such as environmental monitoring, land and resource planning, military mapping, object recognition, and scene interpretation. Most commercial satellites, such as QuickBird, IKONOS, GeoEye, and WorldView, can currently acquire panchromatic (PAN) and multispectral (MS) images jointly. However, due to physical imaging constraints and transmission bandwidth limits, it is difficult to obtain images with both high spatial resolution and high spectral resolution. Fusion is one of the most important and effective ways to provide better interpretation ability for remote sensing images, and how to combine the complementary information of PAN and MS images is an urgent problem to be solved.
MS images have the advantage of high spectral resolution, while PAN images have the advantage of high spatial resolution. The purpose of fusion is to combine the complementary characteristics of the original images and provide enough information for image interpretation. The component substitution (CS) method is a traditional pansharpening model, which includes intensity-hue-saturation transformation (IHS) [1], principal component analysis (PCA) [2], the Gram-Schmidt process (GS) [3], and so on. These methods offer outstanding spatial quality, but they suffer from serious spectral distortion in the fused image. Because of its low computational complexity, the CS method has been extended and improved based on new theories over time [4,5]. In [6], an adaptive image fusion method based on the concept of partial replacement of the intensity component is proposed, known as partial replacement adaptive CS (PRACS). A context-adaptive (CA) pansharpening method based on image segmentation is proposed in [7], which is integrated into the GS scheme in order to achieve a better estimation of the injection coefficients. The band-dependent spatial-detail (BDSD) model is also known as an adaptive CS method [8]. Model-based methods have attracted increasing interest in recent studies. Methods of this kind, based on complex models, can achieve a better pansharpening effect in some cases, but their time complexity is high due to the optimization process. Within this family, many contributions based on Bayesian methods rely on sparse representations of signals [9,10] and total variation penalization terms [11,12]. Essentially, this can be regarded as an image restoration strategy, which consists of reconstructing the high-resolution MS image from the original data [13]. Multi-resolution analysis (MRA) methods, which decompose the image into different frequency coefficients, can harmonize the injection of spatial details and the maintenance of spectral information. The MRA scheme is based on the injection of detailed information extracted from the decomposition coefficients of the PAN image into the low-resolution MS bands. The wavelet transform is one of the MRA methods and, as a mathematical tool, is an important milestone in the field of image processing [14]. The fusion results of the wavelet transform provide certain improvements in preserving spectral information, but they also have shortcomings, such as direction limitation, shift sensitivity, and aliasing. Compared with the discrete wavelet transform (DWT), the dual-tree complex wavelet transform (DTCWT) [15] has the advantages of shift-invariance and directional selectivity, but its limited number of directions makes it difficult for wavelet families to represent the textures and edges of two-dimensional (2-D) images. To solve this problem, a number of multi-scale geometric analysis tools, such as the curvelet transform [16], contourlet transform [17], and shearlet transform [18], have been developed and successfully applied to the pansharpening problem. The main motivation of multi-scale geometric analysis methods is to pursue a “true” 2-D transform [19], which can effectively capture the geometric structure of an image so that the fusion quality can be further improved. However, without the shift-invariance property, the contourlet and shearlet transforms may suffer from frequency aliasing.
The non-subsampled contourlet transform (NSCT) [20,21] is an effective solution to this problem, but its application is limited by the finite decomposition directions and high computational complexity. Non-subsampled shearlet transform (NSST) is a shift-invariant version of shearlet transform, which attains a low computational cost and good image sparse representation performance [22]. For the current study, the appearance of NSST has provided a new solution for the pansharpening issue. As a novel multi-scale geometric analysis method, NSST is one of the topics that are currently being studied by many researchers. Moonon [23] proposed a remote sensing image fusion method based on NSST and sparse representation. Wu proposed a method that is based on improved non-negative matrix decomposition in the NSST domain [24] and a fusion method using chaotic bee colony optimization in the NSST domain [25]. Yang [26] proposed a pansharpening framework based on the matting model and multi-scale transform. These methods have good effects on pansharpening, although they all are subject to their own limitations.
The lack of an anti-aliasing feature in multi-scale decomposition tends to cause decision bias in the boundary regions of objects. This bias results in artificial texture and image non-uniformity and therefore adversely affects visual quality and image interpretation. In order to solve this problem, some spatial techniques and optimization strategies have been introduced into fusion methods, such as the bilateral filter [27], cross bilateral filter (CBF) [28], weighted least squares filter [29], and guided image filter (GIF) [30,31]. The GIF is one of the fastest edge-preserving local filters and is superior to the bilateral filter in avoiding gradient reversal. Meng [32] proposed a pansharpening method with an edge-preserving guided filter based on three-layer decomposition, in which the decomposed PAN image is injected into the MS image. However, due to the fixed regularization value in the GIF, the edges will inevitably be smoothed. Li [33] proposed a weighted GIF, which avoids the edge blurring problem to some extent. However, these two methods have no explicit constraints for processing the edges of images, and the filtering process is usually accompanied by image coarsening. When both edge preservation and filtering are considered, the problem of edge blurring inevitably occurs. Moreover, in some cases, these methods still cannot maintain the edges well, which degrades the fusion quality. Kou [34] proposed a gradient domain GIF, in which explicit first-order edge condition constraints and a new edge-aware weight are introduced, so that the edges of an image can be better preserved.
Based on the gradient domain GIF with its excellent edge-preserving properties, a new fusion method for MS and PAN images in the NSST domain is proposed. The MS and PAN images are decomposed by NSST to obtain coefficients at different frequencies. For the high-frequency coefficients in the NSST domain, an improved pulse coupled neural network (PCNN) model is used to obtain the initial firing map. Unlike previous methods that directly calculate the fusion decision map, the gradient domain GIF is used to optimize the firing map, and the fusion decision map is then calculated in order to guide the fusion of the high-frequency coefficients. For the low-frequency coefficients in the NSST domain, a fusion strategy based on morphological filter-based intensity modulation (MFIM) technology is adopted. The gradient domain GIF is used to perform edge refinement on the modulated low-frequency coefficients to obtain the fused low-frequency coefficients. The experimental results show that the proposed method effectively improves the detailed information and spatial continuity while maintaining excellent spectral information.

2. Materials and Methods

2.1. NSST

Guo and Labate [18] constructed the shearlet transform by combining geometric and multi-scale analysis through the classical theory of affine systems. When the dimension $n = 2$, the affine system is:
$\psi_{AB}(\psi) = \left\{ \psi_{j,k,l}(x) = \left| \det A \right|^{j/2} \psi\left( B^{l} A^{j} x - k \right) : j, l \in \mathbb{Z},\ k \in \mathbb{Z}^{2} \right\}$
where $\psi \in L^{2}(\mathbb{R}^{2})$, $L^{2}(\mathbb{R}^{2})$ denotes the space of square-integrable functions, $A$ and $B$ are both $2 \times 2$ invertible matrices, and $\left| \det B \right| = 1$. If, for any $f \in L^{2}(\mathbb{R}^{2})$, $\psi_{AB}(\psi)$ constitutes the following tight frame, then the elements of $\psi_{AB}(\psi)$ are called composite wavelets:
$\sum_{j,k,l} \left| \left\langle f, \psi_{j,k,l} \right\rangle \right|^{2} = \left\| f \right\|^{2}$
where $A^{j}$ is associated with the scale transformation and $B^{l}$ is associated with the geometric transformation. In Equation (1), $A = \begin{bmatrix} a & 0 \\ 0 & a^{1/2} \end{bmatrix}$ denotes an anisotropic dilation matrix and $B = \begin{bmatrix} 1 & s \\ 0 & 1 \end{bmatrix}$ denotes a shear matrix; in this case, the composite wavelet is called a shearlet, and the values are usually taken as $A = A_{0} = \begin{bmatrix} 4 & 0 \\ 0 & 2 \end{bmatrix}$ and $B = B_{0} = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$. Figure 1 shows the frequency decomposition and supports of a shearlet. The frequency support of each element $\hat{\psi}_{j,k,l}$ is a trapezoidal region whose size is approximately $2^{2j} \times 2^{j}$ and whose slope is $l \cdot 2^{-j}$.
As the shearlet does not have the shift-invariance property, it easily introduces the Gibbs phenomenon in image fusion. NSST [22] is an improved form of the shearlet transform with directional selectivity, shift-invariance, high computational efficiency, and a simple structure. NSST consists of multi-scale decomposition and multi-directional decomposition. The non-subsampled Laplacian pyramid (NSP) performs the multi-scale decomposition: a $J$-level NSP decomposition yields $J$ high-frequency sub-band images and one low-frequency sub-band image, each with the same size as the source image. The shearlet filter (SF) implements the multi-directional decomposition: in total, $\sum_{j=1}^{J} 2^{l_j} + 1$ sub-band images with the same size as the source image are obtained, where $J$ is the number of decomposition scales and $l_j$ is the number of direction decomposition levels at scale $j$. Figure 2 illustrates a three-level NSST decomposition model. NSST provides an approximately optimal sparse representation for 2-D images; applying it to the fusion of MS and PAN images can provide more useful information for the fused result.

2.2. Gradient Domain GIF

2.2.1. GIF

The GIF is a local linear filter with an edge-preserving property. Its time complexity is O(n), independent of the filter window size, so it is fast and efficient, and it has been successfully applied in the field of image fusion [35]. The GIF can transfer the structural information of the guidance image to the filter output, which makes the filter output more structured, so as to complete the edge correction in the fusion decision. However, the local linear model used in the GIF cannot represent the image well around some edges, which may result in halo artifacts. Li [33] proposed the weighted GIF, which can effectively reduce artifacts by introducing edge-aware factors, but there is no explicit constraint to deal with edges, so in some cases the edges cannot be well preserved. Kou [34] proposed a gradient domain GIF by introducing explicit first-order edge condition constraints, so that the edges can be better preserved.
Figure 3 shows the processing flowchart of the GIF. The GIF assumes that $\omega_k$ is a square window of size $(2r_1 + 1) \times (2r_1 + 1)$ centered at pixel $k$, $I$ is the guidance image, $Z$ is the output image, and $X$ is the image to be filtered. Equation (3) is a local linear model between the guidance image and the filtered output image:
$Z(i) = a_k I(i) + b_k, \quad \forall i \in \omega_k$
where $a_k$ and $b_k$ are linear coefficients that are constant in the window $\omega_k$. They are calculated by minimizing the cost function $E(a_k, b_k)$, which is defined as:
$E(a_k, b_k) = \sum_{i \in \omega_k} \left( \left( a_k I(i) + b_k - X(i) \right)^{2} + \lambda a_k^{2} \right)$
where $\lambda$ is a regularization parameter that adjusts the blur degree of the filter and penalizes overly large values of $a_k$. By linear regression analysis, the solution of Equation (4) is obtained as:
$a_k = \dfrac{\mu_{IX, r_1}(k) - \mu_{I, r_1}(k)\, \mu_{X, r_1}(k)}{\sigma_{I, r_1}^{2}(k) + \lambda}$
$b_k = \mu_{X, r_1}(k) - a_k\, \mu_{I, r_1}(k)$
where $IX$ denotes the element-wise product of the two matrices, and $\mu_{IX, r_1}(k)$, $\mu_{I, r_1}(k)$, and $\mu_{X, r_1}(k)$ represent the means of $IX$, $I$, and $X$ in the local window $\omega_k$, respectively. Since the value of $\lambda$ in the GIF is fixed, the edges are inevitably smoothed, and in some cases they cannot be well preserved.
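As a concrete illustration of Equations (3)–(6), a minimal NumPy sketch of the plain GIF is given below. It is not the implementation used in the paper; the use of box-filter means, the window radius r1, and the regularization value lam are illustrative assumptions.

import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, X, r1=2, lam=1e-2):
    # Minimal guided image filter following Eqs. (3)-(6).
    # I: guidance image, X: image to be filtered (2-D float arrays);
    # r1: window radius, window size (2*r1+1) x (2*r1+1); lam: regularization (lambda).
    win = 2 * r1 + 1
    mean = lambda img: uniform_filter(img, size=win)
    mu_I, mu_X = mean(I), mean(X)
    mu_IX = mean(I * X)
    var_I = mean(I * I) - mu_I ** 2              # local variance sigma^2_{I,r1}
    a = (mu_IX - mu_I * mu_X) / (var_I + lam)    # Eq. (5)
    b = mu_X - a * mu_I                          # Eq. (6)
    # Average the coefficients over all windows covering each pixel, then apply the linear model
    return mean(a) * I + mean(b)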

2.2.2. Edge-Aware Weighting

In order to better preserve the edges, Kou defines a new edge-aware weighting $\Gamma_I(k)$, which determines the importance of each pixel with respect to the whole guidance image. $\Gamma_I(k)$ is defined as follows:
$\Gamma_I(k) = \dfrac{1}{M} \sum_{i=1}^{M} \dfrac{\chi(k) + \varepsilon}{\chi(i) + \varepsilon}$
where $\chi(k)$ is defined as $\sigma_{I,1}(k)\, \sigma_{I,r_1}(k)$; $\sigma_{I,1}(k)$ and $\sigma_{I,r_1}(k)$ are the variances of $I$ in the $3 \times 3$ and $(2r_1+1) \times (2r_1+1)$ windows, respectively; $r_1$ determines the size of the filter window; and $M$ is the number of pixels in the image.

2.2.3. Gradient Domain GIF

From the linear model in Equation (3), $\nabla Z(i) = a_k \nabla I(i)$ can be obtained. Obviously, the smoothness of $Z$ in a local window depends on the value of $a_k$. If pixel $k$ is in an edge area and $a_k = 1$, the edge can be preserved well; if pixel $k$ is in a smooth region, $a_k = 0$ is expected, so the region will be smoothed. Based on this observation, a new cost function is defined as follows:
$E(a_k, b_k) = \sum_{i \in \omega_k} \left[ \left( a_k I(i) + b_k - X(i) \right)^{2} + \dfrac{\lambda}{\Gamma_I(k)} \left( a_k - \gamma_k \right)^{2} \right]$
where $\gamma_k$ is defined as
$\gamma_k = 1 - \dfrac{1}{1 + e^{\eta \left( \chi(k) - \mu_{\chi,\infty} \right)}}$
where $\mu_{\chi,\infty}$ is the mean value of $\chi(i)$ over the whole image and $\eta = 4 / \left( \mu_{\chi,\infty} - \min(\chi(i)) \right)$. It can be seen that the value of $\gamma_k$ approaches 1 if pixel $k$ is in an edge area and approaches 0 if the pixel is in a smooth region. As a result, the filter is less sensitive to the selection of the parameter $\lambda$.
The optimal values of $a_k$ and $b_k$ are calculated as follows:
$a_k = \dfrac{\mu_{IX, r_1}(k) - \mu_{I, r_1}(k)\, \mu_{X, r_1}(k) + \frac{\lambda}{\Gamma_I(k)} \gamma_k}{\sigma_{I, r_1}^{2}(k) + \frac{\lambda}{\Gamma_I(k)}}$
$b_k = \mu_{X, r_1}(k) - a_k\, \mu_{I, r_1}(k)$
The final value of Z is:
$Z(i) = \bar{a}_i I(i) + \bar{b}_i$
In the calculation of the linear coefficients, a pixel may be contained in multiple local windows, so $a_k$ and $b_k$ need to be averaged by mean filtering, that is:
$Z(i) = \dfrac{1}{|\omega|} \sum_{k:\, i \in \omega_k} \left( a_k I(i) + b_k \right) = \bar{a}_i I(i) + \bar{b}_i$
where $\bar{a}_i$ and $\bar{b}_i$ are the means of $a_k$ and $b_k$ over all windows containing pixel $i$: $\bar{a}_i = \frac{1}{|\omega|} \sum_{k \in \omega_i} a_k$, $\bar{b}_i = \frac{1}{|\omega|} \sum_{k \in \omega_i} b_k$, and $|\omega|$ is the number of pixels in a local window.
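The whole gradient domain GIF of Equations (7)–(13) can be sketched in the same style. This is a simplified reading of the filter, not the authors' code; the use of uniform_filter for the local statistics and the default parameter values are assumptions.

import numpy as np
from scipy.ndimage import uniform_filter

def gradient_domain_gif(I, X, r1=2, lam=1e-2, eps=1e-9):
    # Sketch of the gradient domain GIF following Eqs. (7)-(13).
    win = 2 * r1 + 1
    mean_r1 = lambda img: uniform_filter(img, size=win)
    mean_3 = lambda img: uniform_filter(img, size=3)

    # chi(k) = sigma_{I,1}(k) * sigma_{I,r1}(k): product of local variances (3x3 and (2*r1+1)^2 windows)
    var_3 = mean_3(I * I) - mean_3(I) ** 2
    var_r1 = mean_r1(I * I) - mean_r1(I) ** 2
    chi = var_3 * var_r1

    # Edge-aware weighting Gamma_I(k), Eq. (7)
    gamma_I = (chi + eps) * np.mean(1.0 / (chi + eps))

    # Edge factor gamma_k, Eq. (9): close to 1 at edges, close to 0 in smooth regions
    mu_chi = chi.mean()
    eta = 4.0 / (mu_chi - chi.min())
    z = np.clip(eta * (chi - mu_chi), -50.0, 50.0)   # clipped only to avoid overflow warnings
    gamma_k = 1.0 - 1.0 / (1.0 + np.exp(z))

    mu_I, mu_X, mu_IX = mean_r1(I), mean_r1(X), mean_r1(I * X)
    reg = lam / gamma_I                                           # lambda / Gamma_I(k)
    a = (mu_IX - mu_I * mu_X + reg * gamma_k) / (var_r1 + reg)    # Eq. (10)
    b = mu_X - a * mu_I                                           # Eq. (11)
    return mean_r1(a) * I + mean_r1(b)                            # Eqs. (12)-(13)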

3. Proposed Method

The factors affecting the fusion quality of PAN and MS images mainly include two aspects: the integration of spatial details and the preservation of spectral information. The main purpose of pansharpening is to establish a good tradeoff between the injection of spatial details and the preservation of spectral information. It is assumed that the PAN and MS images used for fusion have been geometrically registered. NSST is utilized to decompose the source images at multiple scales and in multiple directions into different frequency coefficients. The high-frequency coefficients represent the detailed features of the image, which reflect the spatial information. The selection of high-frequency coefficients plays an important role in maintaining detailed spatial information and improving the spatial resolution of the fused image. The low-frequency coefficient image is the approximate version of the original image, which contains most of the energy and the spectral information. The selection of low-frequency coefficients is of great importance for maintaining the spectral signatures and reducing the spectral distortion of the fused result. According to the different significant features that need to be preserved in the different frequency domains, the corresponding fusion rules for the high- and low-frequency coefficients are proposed. Figure 4 illustrates the flowchart of the multi-scale image fusion procedure based on NSST, and the fusion steps are as follows (a schematic code sketch of this flow is given after the steps):
1. NSST decomposition
The PAN and MS images are decomposed by NSST to obtain the corresponding low- and high-frequency sub-band coefficients $\{S_P^{0,d}, S_P^{l,d}\}$ and $\{S_M^{0,d}, S_M^{l,d}\}$, where $S_P^{0,d}$ denotes the low-frequency coefficients of the PAN image and $S_P^{l,d}$ denotes the high-frequency coefficients of the PAN image at the $l$-th scale and $d$-th direction. Similarly, $S_M^{0,d}$ and $S_M^{l,d}$ are the corresponding coefficients of the MS image.
2. Fusion of low-frequency coefficients in the NSST domain
The low-frequency coefficients of the PAN and MS images are processed with a fusion rule based on MFIM technology. A high-resolution version of the low-frequency coefficients is obtained by using the low-frequency coefficients of the PAN image to modulate those of the MS image. The gradient domain GIF is then used to refine the modulated sub-band coefficients, and the fused low-frequency coefficients $\{S_F^{0,d}\}$ are obtained.
3. Fusion of high-frequency coefficients in the NSST domain
The high-frequency coefficients of the PAN and MS images are processed by the improved PCNN-based fusion rule. The initial firing map is obtained by a PCNN model, the original PAN image is then used as the guidance image, and the edge-preserving gradient domain GIF is applied to the firing map. According to the optimized firing map, the fusion decision map is calculated to guide the fusion of the high-frequency coefficients and obtain the fused result $\{S_F^{l,d}\}$.
4. Inverse transform of NSST
The inverse transform of NSST is performed on the fused low-frequency coefficients $\{S_F^{0,d}\}$ and the fused high-frequency coefficients $\{S_F^{l,d}\}$ to obtain the final fused image $F$.
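The four steps above can be summarized, per MS band, by the following schematic sketch. The helper names (nsst_decompose, fuse_low_mfim, fuse_high_pcnn, nsst_reconstruct) are hypothetical placeholders for the operations described in Sections 3.1 and 3.2, not an existing library API.

def pansharpen_band(pan, ms_band):
    # Step 1: NSST decomposition of both images (3 scales, {16, 8, 4} directions, see Section 4.3)
    low_p, highs_p = nsst_decompose(pan, scales=3, directions=(16, 8, 4))
    low_m, highs_m = nsst_decompose(ms_band, scales=3, directions=(16, 8, 4))

    # Step 2: low-frequency fusion with MFIM + gradient domain GIF (Section 3.1)
    low_f = fuse_low_mfim(low_m, low_p)

    # Step 3: high-frequency fusion with the improved PCNN + gradient domain GIF (Section 3.2)
    highs_f = [fuse_high_pcnn(h_m, h_p, guide=pan) for h_m, h_p in zip(highs_m, highs_p)]

    # Step 4: inverse NSST reconstruction of the fused band
    return nsst_reconstruct(low_f, highs_f)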

3.1. Fusion Rule of the Low-Frequency Coefficients

The low-frequency coefficients produced by the NSST decomposition are the approximate components inherited from the original MS and PAN images, and they contain most of the energy of the original images. The fusion rule for these coefficients is designed to combine the available spatial information from the low-frequency coefficients of the PAN image with the available spectral information from the low-frequency coefficients of the MS image.
In this paper, a fusion rule for the low-frequency coefficients based on the morphological filter-based intensity modulation (MFIM) technique [36] and the gradient domain GIF is designed. The low-frequency coefficient image serves as an approximate version of the original image; it contains the main information of the original image as well as some detailed spatial information. Traditional fusion strategies for the low-frequency coefficients of the MS and PAN images usually adopt a simple weighted averaging method or a region-energy-analysis-based method. These methods are easy to implement, but they suffer from contrast reduction and loss of detailed information. Furthermore, the coefficient selection or superposition procedure may result in decision bias and spectral distortion. Preserving the spectral content of the original MS image is very important. Thus, we adopt the MFIM technique to reconstruct the spatial details that are missing in the low-frequency sub-bands of the MS image while preserving the spectral information. The gradient domain GIF is used to refine the modulated low-frequency coefficient image in order to properly inject the spatial detail coefficients and avoid halo artifacts in the fusion results.
First, using the low-frequency coefficient image $MS_k$ of the MS image as the reference, histogram equalization is performed on the low-frequency coefficient image $S_P^{0,d}$ of the PAN image. In other words, the image $P_k^{0}$ for band $k$ is obtained by the equalization step between $S_P^{0,d}$ and $MS_k$.
The low-resolution version $P_k^{low}$ of the low-frequency coefficient image of the PAN image can be obtained by morphological filtering based on half-gradient operators. Setting the structuring element $B$ as $B = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 0 \end{bmatrix}$, the corresponding morphological filter is as follows:
$\psi_{\mathrm{HG},B} = F - \bar{\psi}_{\mathrm{HG},B} = F - \left[ 0.5 \left( F - \varepsilon_B(F) \right) - 0.5 \left( \delta_B(F) - F \right) \right] = 0.5 \left( \varepsilon_B(F) + \delta_B(F) \right)$
where $\varepsilon_B(F)$ and $\delta_B(F)$ are the erosion and dilation operators, respectively. $P_k^{low}$ is derived through the morphological filter and a pyramidal decomposition.
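As an illustration, the half-gradient morphological filter of Equation (14) can be written with standard grayscale morphology; this sketch uses SciPy's grey_erosion/grey_dilation with the cross-shaped structuring element given above and omits the pyramidal decomposition step.

import numpy as np
from scipy.ndimage import grey_erosion, grey_dilation

# Cross-shaped structuring element B defined in the text
B = np.array([[0, 1, 0],
              [1, 1, 1],
              [0, 1, 0]], dtype=bool)

def half_gradient_filter(F):
    # Eq. (14): psi_HG,B(F) = 0.5 * (erosion_B(F) + dilation_B(F))
    return 0.5 * (grey_erosion(F, footprint=B) + grey_dilation(F, footprint=B))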
Subsequently, the spatial details can be extracted and injected. The fusion result $\{\hat{MS}_k\}_{k=1,\dots,N}$ can be obtained by intensity modulation of the low-frequency coefficients of the MS image $\{\tilde{MS}_k\}_{k=1,\dots,N}$, as follows:
$\hat{MS}_k = \tilde{MS}_k + \tilde{MS}_k \cdot \dfrac{P_k^{0} - P_k^{low}}{P_k^{low}}$
Weber's definition of contrast [37] is $C = \Delta L / L_b$, where $\Delta L = L - L_b$, $L_b$ is the background luminance, and $L$ is the pixel luminance [38]; $\Delta L$ can be seen as the foreground luminance. Equation (15) can thus be written as $\hat{MS}_k = \tilde{MS}_k (1 + C)$.
Finally, the gradient domain GIF is used to refine the edges in the modulated coefficient image $\hat{MS} = \{\hat{MS}_k\}_{k=1,\dots,N}$, with the low-frequency coefficient image of the PAN image serving as the guidance image. The final fusion result of the low-frequency coefficients is obtained by the following equation:
$S_F^{0} = \mathrm{GDGIF}_{r_1, \varepsilon_1}\left( \hat{MS}, S_P^{0,d} \right)$
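Putting Equations (14)–(16) together, a per-band sketch of the low-frequency fusion rule might look as follows; half_gradient_filter and gradient_domain_gif refer to the sketches given earlier, the small constant added to the denominator is only a numerical guard, and the parameter values are illustrative.

def fuse_low_frequency(ms_low, pan_low, r1=2, eps1=1e-6):
    # ms_low : low-frequency NSST coefficients of one MS band (MS_tilde_k)
    # pan_low: low-frequency NSST coefficients of the PAN image, equalized to ms_low (P_k^0)
    pan_low_lr = half_gradient_filter(pan_low)                                    # P_k^low
    modulated = ms_low + ms_low * (pan_low - pan_low_lr) / (pan_low_lr + 1e-12)   # Eq. (15)
    # Edge refinement with the PAN low-frequency image as guidance, Eq. (16)
    return gradient_domain_gif(pan_low, modulated, r1=r1, lam=eps1)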

3.2. Fusion Rule of the High-Frequency Coefficients

A pulse coupled neural network (PCNN), a type of neural network with global pulse synchronization and pulse coupling, is used to extract detailed information from the high-frequency coefficients. It can realize the simultaneous firing of pixels with proximal space and similar features. The firing number of the PCNN model can effectively reflect the detailed spatial information of an image: the more often firing occurs, the more information the corresponding pixel area contains. The PCNN consists of several neurons, each of which consists of three parts: the receiving domain, the modulation domain, and the pulse generator. In this paper, a simplified PCNN model is adopted [39] and its expression is as follows:
$\begin{cases} F_{ij}(n) = D_{ij} \\ L_{ij}(n) = L_{ij}(n-1)\, e^{-\alpha_L} + V_L \sum_{pq} \omega_{ij,pq}\, Y_{pq}(n-1) \\ U_{ij}(n) = F_{ij}(n) \left( 1 + \beta L_{ij}(n) \right) \\ \theta_{ij}(n) = \theta_{ij}(n-1)\, e^{-\alpha_\theta} + V_\theta\, Y_{ij}(n-1) \\ Y_{ij}(n) = \mathrm{step}\left( U_{ij}(n) - \theta_{ij}(n) \right) \end{cases}$
where $n$ denotes the iteration step; $F_{ij}(n)$, $D_{ij}$, and $L_{ij}(n)$ are the neuron feedback input, exterior input, and linking input, respectively; the parameter $\beta$ is the connecting weight between the neurons; $U_{ij}(n)$ represents the internal activity of the neuron; $\theta_{ij}(n)$ is the dynamic threshold; $\omega_{ij,pq}$ denotes the synaptic links of the neuron; $V_L$ is the amplification factor of the linking input; $V_\theta$ is the threshold amplification factor; $\alpha_L$ and $\alpha_\theta$ denote the attenuation time constants; and $Y_{ij}(n)$ represents the PCNN output pulse of a neuron. If $U_{ij}(n) > \theta_{ij}(n)$, the neuron generates a pulse value $Y_{ij}(n) = 1$, called a single firing. After $n$ iterations, the firing map generated by the total number of firings of each neuron becomes the output of the PCNN model.
Based on the simplified PCNN model, a soft limited amplitude sigmoid function is used to calculate the firing output amplitude of the model during the iterative process:
$T_{ij}(n) = \dfrac{1}{1 + e^{\theta_{ij}(n) - U_{ij}(n)}}$
After n iterations of the PCNN, the sum of the firing output amplitude is:
$Z_{ij}(n) = Z_{ij}(n-1) + T_{ij}(n)$
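A compact sketch of the simplified PCNN iteration of Equations (17)–(19) is given below; the default parameter values follow the settings listed in Section 4.3, while the 3 × 3 linking kernel and the initial threshold are assumed choices. D is expected to be the (normalized) absolute value of one high-frequency sub-band used as the exterior input.

import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_map(D, n_iter=200, alpha_L=1.0, alpha_theta=0.2,
                    beta=3.0, V_L=1.0, V_theta=20.0):
    # Simplified PCNN of Eq. (17) with the accumulated firing amplitude of Eqs. (18)-(19).
    w = np.array([[0.5, 1.0, 0.5],        # assumed synaptic link weights omega_{ij,pq}
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    L = np.zeros_like(D)
    theta = np.ones_like(D)               # assumed initial threshold
    Y = np.zeros_like(D)
    Z = np.zeros_like(D)
    for _ in range(n_iter):
        L = L * np.exp(-alpha_L) + V_L * convolve(Y, w, mode='constant')  # linking input
        U = D * (1.0 + beta * L)                                           # internal activity
        theta = theta * np.exp(-alpha_theta) + V_theta * Y                 # dynamic threshold
        Y = (U > theta).astype(D.dtype)                                    # firing (step function)
        Z += 1.0 / (1.0 + np.exp(theta - U))                               # Eqs. (18)-(19)
    return Z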
The gradient domain GIF is used to optimize the firing map exported by the PCNN model, which aims to satisfy the spatial consistency and therefore reduce the artificial texture and image non-uniformity produced by multi-scale fusion methods. $W^P$ and $W^M$ denote the firing maps of the PAN and MS images optimized by the gradient domain GIF, respectively:
$W^P = \mathrm{GDGIF}_{r_2, \varepsilon_2}\left( Z^P, I^M \right), \quad W^M = \mathrm{GDGIF}_{r_2, \varepsilon_2}\left( Z^M, I^M \right)$
Based on the optimized firing maps, the fusion decision matrix can be obtained as:
$H(i,j) = \begin{cases} 1, & W_{ij}^{M}(N_{\max}) > W_{ij}^{P}(N_{\max}) \\ 0, & \text{otherwise} \end{cases}$
The fusion result of the high-frequency coefficients can then be calculated according to the decision matrix:
$S_F(i,j) = S_M(i,j)\, H(i,j) + S_P(i,j) \left[ 1 - H(i,j) \right]$
where $S_M(i,j)$ and $S_P(i,j)$ represent the high-frequency coefficients of the MS and PAN images, respectively.
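The complete high-frequency rule of Equations (20)–(22) can then be sketched per sub-band; pcnn_firing_map and gradient_domain_gif refer to the sketches given earlier, and the choice of guidance image and the parameter values are illustrative assumptions.

import numpy as np

def fuse_high_frequency(s_m, s_p, guide, r2=2, eps2=1e-6):
    # s_m, s_p: one pair of high-frequency NSST sub-bands of the MS and PAN images
    # guide   : guidance image for the gradient domain GIF (the PAN image, per step 3 of Section 3)
    z_m = pcnn_firing_map(np.abs(s_m))
    z_p = pcnn_firing_map(np.abs(s_p))

    # Optimize the firing maps with the gradient domain GIF, Eq. (20)
    w_m = gradient_domain_gif(guide, z_m, r1=r2, lam=eps2)
    w_p = gradient_domain_gif(guide, z_p, r1=r2, lam=eps2)

    # Decision map and coefficient selection, Eqs. (21)-(22)
    H = (w_m > w_p).astype(s_m.dtype)
    return s_m * H + s_p * (1.0 - H)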

4. Result

4.1. Data Set

In our experiment, data sets acquired by the WorldView-2 and QuickBird satellites are used to evaluate the proposed fusion method. The main features of these two satellites are given in Table 1 and Table 2. The MS and PAN images used in the experiment have been registered. The WorldView-2 data set was collected by the Beijing Key Laboratory of Digital Media. The sizes of the PAN and MS images are 512×512 and 128×128, respectively. The WorldView-2 satellite provides eight multispectral bands, including four standard bands and four new bands, but this data set only provides bands 5 (R), 3 (G), and 2 (B) of the MS image. The land-cover types of the MS and PAN images include home, seaside, urban, house, and bridge areas. Given space limitations, four pairs of MS and PAN images are considered for the comparison analysis of our proposed method: two evaluated at reduced resolution and two evaluated at the original resolution. The second data set was captured by the QuickBird satellite on November 21, 2002, and covers the national forest park of Sundarbans, India. The sizes of the PAN and MS images are 256×256 and 64×64, respectively. This data set provides the MS image with four bands (R, G, B, and NIR). Two pairs of MS and PAN images are selected for the result analysis as examples, one for full-resolution assessment and the other for reduced-resolution assessment.

4.2. Quality Assessment of Fusion Results

Generally, the quality assessment of an image fusion method includes subjective and objective evaluations. Subjective evaluation varies between individuals, so its reliability is limited; an objective evaluation is therefore needed to provide a complementary, quantitative evaluation system.
Because of the absence of reference images, two strategies have been proposed in order to measure the fusion quality. One is evaluated on the degraded images that are based on Wald’s protocol; the other is performed on the full resolution without the reference images. For the degraded evaluation, the original MS and PAN images are processed by a low-pass filter with a down-sampling factor of four to obtain the reduced resolution images, and the original MS image is taken as the reference image. The other strategy calculates quality indices by operating on the relationship between the original MS and PAN images and the fused images.
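For reference, the reduced-resolution test data can be simulated along the following lines; a Gaussian low-pass is used here as a simple stand-in for the sensor MTF filter mentioned in Section 4.4.2, and the sigma value is illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(img, ratio=4, sigma=1.6):
    # Wald's protocol: low-pass filter, then decimate by the resolution ratio (4 here)
    blurred = gaussian_filter(img, sigma=sigma)
    return blurred[::ratio, ::ratio]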
As detailed below, nine well-known indices are utilized to evaluate the spatial and spectral qualities of the fused results. The correlation coefficient (CC) reflects the similarity of spectral features, and the structural similarity (SSIM) is a structural similarity index. The spectral angle mapper (SAM) is utilized to measure the spectral distortion. The root mean square error (RMSE), erreur relative globale adimensionnelle de synthèse (ERGAS), and the universal image quality index (UIQI) measure the global quality of spectral and spatial features. The “quality with no reference” (QNR) indicator measures the overall quality of the fused image and is composed of two separate indices, the spatial distortion index $D_s$ and the spectral distortion index $D_\lambda$.

4.2.1. Reduced-Resolution Assessment

Following Wald’s protocol, the reduced-resolution assessment is based on the scale invariance assumption. Six indices that make use of the available reference images have been selected for the evaluation procedure. The indices are described as follows:
1. Correlation Coefficient ( CC )
CC [40] indicates the presentation ability of the spectral characteristics. It reflects the degree of correlation of spectral features between each band of the fused MS image and the reference image, and the ideal value is 1. The definition is as follows:
$CC_k = \dfrac{\sum_{m=1}^{M} \sum_{n=1}^{N} \left[ \left( F_k(m,n) - \bar{F}_k \right) \left( R_k(m,n) - \bar{R}_k \right) \right]}{\sqrt{\sum_{m=1}^{M} \sum_{n=1}^{N} \left( F_k(m,n) - \bar{F}_k \right)^{2} \times \sum_{m=1}^{M} \sum_{n=1}^{N} \left( R_k(m,n) - \bar{R}_k \right)^{2}}}$
where $CC_k$ is the correlation coefficient of band $k$, and $\bar{F}_k$ and $\bar{R}_k$ denote the pixel means of the $k$-th band of the fused MS image and the reference image, respectively.
2. Structural Similarity ( SSIM )
SSIM [41] reflects the structural similarity between the fused image and the reference image. It quantifies the degradation of image quality. A higher value of SSIM indicates a higher structural similarity between the two images.
$SSIM(R, F) = \dfrac{\left( 2 \mu_R \mu_F + C_1 \right) \left( 2 \sigma_{RF} + C_2 \right)}{\left( \mu_R^{2} + \mu_F^{2} + C_1 \right) \left( \sigma_R^{2} + \sigma_F^{2} + C_2 \right)}$
where $\mu_R$ and $\mu_F$ denote the mean values of the reference image $R$ and the fused image $F$, respectively, $\sigma_R$ and $\sigma_F$ are the standard deviations of $R$ and $F$, and $\sigma_{RF}$ is the covariance of $R$ and $F$. $C_1$ and $C_2$ are constants included to avoid a zero denominator and maintain stability; to simplify the model, they are set to 0.
3. Spectral Angle Mapper ( SAM )
A simple index, SAM [42], reflects the distortion in spectral information between the fused MS image and the reference image, and the ideal value is 0. Its definition is as follows:
$SAM = \arccos \left[ \dfrac{\left\langle u_R, u_F \right\rangle}{\left\| u_R \right\|_2 \left\| u_F \right\|_2} \right]$
where $u_F$ and $u_R$ are the spectral vectors constructed from the fused image and the reference image, respectively.
4. Root Mean Square Error ( RMSE )
RMSE [43] is an index that accounts for the difference between the reference image and the fused image; the ideal value is 0. RMSE is defined as:
$RMSE = \sqrt{ \dfrac{1}{MN} \sum_{m=1}^{M} \sum_{n=1}^{N} \left( F(m,n) - R(m,n) \right)^{2} }$
5. Erreur Relative Global Adimensionnelle de Synthèse ( ERGAS )
ERGAS [13] reflects the distortion degree of the spectral and spatial information, and it is an index for the overall evaluation of the fused image. The optimal value of ERGAS is 0. The definition is as follows:
$ERGAS = 100 \dfrac{h}{l} \sqrt{ \dfrac{1}{N} \sum_{k=1}^{N} \left[ \dfrac{RMSE_k}{\mathrm{Mean}(R_k)} \right]^{2} }$
where $h$ and $l$ are the spatial resolutions of the PAN and MS images, respectively, $RMSE_k$ is the root mean square error between the $k$-th band of the fused image and the $k$-th band of the reference image, indicating the difference between the two images, and $\mathrm{Mean}(R_k)$ denotes the mean value of band $k$ of the reference image.
6. Universal Image Quality Index ( UIQI )
UIQI [44] is a performance index that measures the spatial details of the fused MS image. The optimal value is 1, so the closer the UIQI value is to 1, the better the fusion quality. It is defined as:
$UIQI(R, F) = \dfrac{\sigma_{RF}}{\sigma_R \sigma_F} \times \dfrac{2 \bar{F} \bar{R}}{\bar{F}^{2} + \bar{R}^{2}} \times \dfrac{2 \sigma_R \sigma_F}{\sigma_R^{2} + \sigma_F^{2}}$
where $\sigma_R$ and $\sigma_F$ are the standard deviations of the reference image $R$ and the fused image $F$, respectively, and $\sigma_{RF}$ is the covariance of $R$ and $F$.
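For completeness, the reduced-resolution indices of Equations (23) and (26)–(28) can be computed directly from their definitions, as in the following sketch (single-band inputs as 2-D float arrays; SSIM and SAM follow the same pattern and are omitted for brevity).

import numpy as np

def cc(R, F):
    # Correlation coefficient of one band, Eq. (23)
    r, f = R - R.mean(), F - F.mean()
    return (r * f).sum() / np.sqrt((r ** 2).sum() * (f ** 2).sum())

def rmse(R, F):
    # Root mean square error, Eq. (26)
    return np.sqrt(np.mean((F - R) ** 2))

def uiqi(R, F):
    # Universal image quality index, Eq. (28)
    cov = np.mean((R - R.mean()) * (F - F.mean()))
    return (cov / (R.std() * F.std())) \
        * (2 * R.mean() * F.mean() / (R.mean() ** 2 + F.mean() ** 2)) \
        * (2 * R.std() * F.std() / (R.var() + F.var()))

def ergas(R_bands, F_bands, ratio=4):
    # ERGAS over N bands, Eq. (27); h/l = 1/ratio for a resolution ratio of 4
    terms = [(rmse(R, F) / R.mean()) ** 2 for R, F in zip(R_bands, F_bands)]
    return 100.0 / ratio * np.sqrt(np.mean(terms))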

4.2.2. Full-Resolution Assessment

In order to perform the assessment directly on the data at the original resolution, the quality with no reference (QNR) index was proposed [45]. This index measures the correlation, luminance, and contrast between the two images. QNR is composed of two separate indices, $D_\lambda$ and $D_s$, which denote the distortion of spectral and spatial features, respectively.
1. Quality with No Reference (QNR)
QNR, which consists of $D_\lambda$ and $D_s$, reflects the overall quality of the fused image, and the ideal value is 1. The definition of QNR is as follows:
$QNR = \left( 1 - D_\lambda \right) \left( 1 - D_s \right)$
2. Spectral Distortion Index ( D λ )
$D_\lambda$ represents the degree of spectral distortion between the fused image and the MS image, with an ideal value of 0. The definition of $D_\lambda$ is as follows:
$D_\lambda = \sqrt[p]{ \dfrac{1}{N(N-1)} \sum_{i=1}^{N} \sum_{j=1, j \neq i}^{N} \left| UIQI(MS_i, MS_j) - UIQI(F_i, F_j) \right|^{p} }$
where $p$ is the difference enhancement parameter, usually set to 1, and $UIQI(MS_i, MS_j)$ and $UIQI(F_i, F_j)$ are the generalized image quality indicators of the MS image bands and the fused image bands, respectively; $UIQI(MS_i, MS_j) = \frac{\sigma_{MS_i MS_j}}{\sigma_{MS_i} \sigma_{MS_j}} \times \frac{2\, \overline{MS_i}\, \overline{MS_j}}{\overline{MS_i}^{2} + \overline{MS_j}^{2}} \times \frac{2 \sigma_{MS_i} \sigma_{MS_j}}{\sigma_{MS_i}^{2} + \sigma_{MS_j}^{2}}$, where $\sigma_{MS_i MS_j}$ is the covariance between the two bands, $\sigma_{MS_i}$ and $\sigma_{MS_j}$ are the band standard deviations, and $\overline{MS_i}$ and $\overline{MS_j}$ denote the band mean values.
3. Spatial Distortion Index ( D s )
$D_s$ indicates the information preservation of the fused image in the spatial domain, with an ideal value of 0. Its definition is as follows:
$D_s = \sqrt[q]{ \dfrac{1}{N} \sum_{i=1}^{N} \left| UIQI(MS_i, PAN) - UIQI(F_i, PAN) \right|^{q} }$
where $q$ is the difference enhancement parameter, usually set to 1, $UIQI(MS_i, PAN)$ is the generalized image quality indicator between the $i$-th band of the MS image and the PAN image, and $UIQI(F_i, PAN)$ is the generalized image quality indicator between the $i$-th band of the fused image and the PAN image.
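Using the uiqi function sketched above, the full-resolution indices of Equations (29)–(31) can be computed as follows. In the D_s term the MS bands are compared with a PAN image degraded to the MS scale (pan_lr), which is the usual QNR protocol and an assumption made here.

def d_lambda(ms_bands, fused_bands, p=1):
    # Spectral distortion index D_lambda, Eq. (30)
    N = len(ms_bands)
    acc = 0.0
    for i in range(N):
        for j in range(N):
            if i != j:
                acc += abs(uiqi(ms_bands[i], ms_bands[j])
                           - uiqi(fused_bands[i], fused_bands[j])) ** p
    return (acc / (N * (N - 1))) ** (1.0 / p)

def d_s(ms_bands, fused_bands, pan, pan_lr, q=1):
    # Spatial distortion index D_s, Eq. (31)
    N = len(ms_bands)
    acc = sum(abs(uiqi(ms_bands[i], pan_lr) - uiqi(fused_bands[i], pan)) ** q
              for i in range(N))
    return (acc / N) ** (1.0 / q)

def qnr(ms_bands, fused_bands, pan, pan_lr):
    # Overall quality with no reference, Eq. (29)
    return (1 - d_lambda(ms_bands, fused_bands)) * (1 - d_s(ms_bands, fused_bands, pan, pan_lr))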

4.3. Implementation Details

The implementation details of the fusion methods are provided and analyzed here. The experimental hardware environment is an Intel Core i5-4200U CPU at 1.60 GHz with 4 GB of memory, and the software environment is MATLAB R2014a. The parameters of the fusion methods are set as follows: the NSST decomposition filter is “maxflat”, the number of scale decomposition levels is set to 3, and the corresponding numbers of directions are $\{16, 8, 4\}$. In the PCNN model, $n = 200$, $\alpha_L = 1.0$, $\alpha_\theta = 0.2$, $\beta = 3.0$, $V_L = 1.0$, and $V_\theta = 20$. The two parameters $r$ and $\varepsilon$ of the gradient domain GIF affect the fusion performance of the proposed method to some extent: $r$ is the size of the filter window, which determines the significant difference of the guidance image in local windows, and $\varepsilon$ is the regularization parameter, which determines the blur degree of the guided filter.
The effects of the two parameter sets, ($r_1$, $\varepsilon_1$) and ($r_2$, $\varepsilon_2$), of the gradient domain GIF on the fusion performance are discussed below. Four indices, CC, SAM, ERGAS, and UIQI, are adopted to analyze the performance impact of the four parameters. Figure 5a,b,d,e show two groups of MS and PAN images from the WorldView-2 data set, covering plant and house areas, respectively. Figure 5c,f are the fusion results obtained with the optimal guided filter parameters.
When the parameter $r_1$ is analyzed, the other three parameters are set to fixed values: $\varepsilon_1 = 1$, $r_2 = 3$, and $\varepsilon_2 = 10^6$. Table 3 shows the fusion performance on the two pairs of test images in terms of the four selected indices (CC, SAM, ERGAS, and UIQI) for different window sizes $r_1$. Figure 6 shows the trend of each index intuitively, in which the SAM and ERGAS indices have been normalized. It can be seen that the performance is optimal when $r_1 = 2$.
When analyzing the parameter $\varepsilon_1$, the other three parameters are also set to fixed values: $r_1 = 2$, $r_2 = 3$, and $\varepsilon_2 = 10^6$. As shown in Table 4, statistics of all evaluation indices of the fusion images obtained with different values of $\varepsilon_1$ are presented. Figure 7 shows the corresponding curve of each index; as can be seen, the best performance is obtained by setting $\varepsilon_1 = 10^6$.
When the parameter $r_2$ is analyzed, the other three parameters are set to $r_1 = 2$, $\varepsilon_1 = 10^6$, and $\varepsilon_2 = 10^6$, respectively. The statistical results of each evaluation index obtained with different window sizes $r_2 \times r_2$ are presented in Table 5. Figure 8 shows the curve of each evaluation index, and it can be seen that the performance is optimal when $r_2 = 2$.
When analyzing the parameter $\varepsilon_2$, the other three parameters are also set to fixed values: $r_1 = 2$, $\varepsilon_1 = 10^6$, and $r_2 = 2$, respectively. As shown in Table 6, the performance results of the proposed method with different values of $\varepsilon_2$ are presented. Figure 9 shows the corresponding curve of each index, and the best performance is obtained by setting $\varepsilon_2 = 10^6$.
By analyzing the performance of the proposed method with different parameter settings of the gradient domain GIF on the WorldView-2 data set, we find that the proposed method achieves better results when the four parameters are set to $r_1 = 2$, $\varepsilon_1 = 10^6$, $r_2 = 2$, and $\varepsilon_2 = 10^6$. At the same time, it can be observed that the gradient domain GIF is not very sensitive to the selection of the blur degree parameter. In our experiments, the performance in terms of the no-reference indices also achieves the best results under the same parameter settings. The proposed method does not depend on a precise parameter selection for the gradient domain GIF; these filter parameters have little effect on the overall fusion performance. Therefore, a satisfactory effect can be obtained by the proposed method with fixed parameter values.

4.4. Results Analysis and Discussion

In this section, the experiments are performed on the real data and the degraded data, respectively. Six image pairs of the experimental data set are from different observation scenes of the WorldView-2 and QuickBird satellites: three groups are used as the real data and the other three are applied to the performance comparison on the degraded data.

4.4.1. Performance Comparison with State-of-the-Art Methods on Real Data

In this experiment, the proposed fusion method is compared with IHS [1], NSCT-PCNN [20], NSST-SR [23], multi-resolution singular value decomposition (MSVD) [46], PRACS [6], Gram–Schmidt adaptive (GSA) [7], and morphological filter-half gradients (MF-HG) [36] on the real remote sensing data set without a reference image. The codes for the seven comparison methods were downloaded from the homepages published by their authors, and the parameter settings in the experiment are consistent with those in the literature. As shown in Figure 10, three image pairs of the experimental data sets are selected as examples for the result analysis without a reference image. The first two image pairs came from two different observation scenes of the WorldView-2 satellite and the third one was collected from the QuickBird satellite. Figure 11, Figure 12 and Figure 13 show the fusion results of the proposed method and the comparison methods. For better comparison, a rectangular sub-image area is magnified and displayed at the upper left of each fusion image.
1. WorldView-2 Data Set
For the group 1 image pair of the WorldView-2 data set, the corresponding fused products are given in Figure 11a–h. The fusion results of IHS, NSCT-PCNN, NSST-SR, MSVD, PRACS, GSA, and MF-HG are shown in Figure 11a–g, respectively; Figure 11h is the result generated by our method. It can be seen that the fused image obtained by the proposed method achieves a satisfactory visual effect in terms of the preservation of spectral information and spatial details. Compared with the other methods, there is severe spectral distortion in the fused image obtained by the IHS method. The spatial and spectral quality of the fusion result of the NSCT-PCNN method is relatively good. The fused result of NSST-SR suffers from some color distortion, and from the enlarged rectangular sub-image we can see that it does not contain enough details of the cars. The fusion results of the MSVD and GSA methods have some blurring at the edges of the cars. The PRACS method results in some noise in the dark area next to the cars. The MF-HG method shows enhanced quality in both spectral information preservation and spatial detail injection.
For the group 2 image pair of the WorldView-2 data set, Figure 12 shows the corresponding experimental results. Comparing the visual effects with those of the other methods, we find that the proposed method significantly improves the spatial quality and spectral fidelity of the fused image. Figure 12a shows the fused image of the IHS method, and it can be seen that this method generates serious spectral distortion. The NSCT-PCNN method injects more spatial details and preserves more spectral information than the IHS method. The NSST-SR method has higher spatial resolution, but the color of its fusion result is quite different from the original MS image and there is a certain degree of spectral distortion. The fusion result of the MSVD method shows slight blurring. The PRACS, GSA, and MF-HG methods result in better spectral information maintenance as well as improved spatial quality; it is difficult to tell the difference between the results obtained by these methods through visual observation.
Table 7 and Table 8 provide the quantitative evaluation results of Figure 11 and Figure 12. From the comparison of each index, we can see that the proposed method provides the best value in terms of QNR. In Table 7, the $D_s$ value of our proposed method is the smallest, but the MF-HG method gives the best value of $D_\lambda$. In Table 8, our proposed method provides the best value of $D_\lambda$, and the best value of $D_s$ is given by the GSA method, with the result of our method being the third smallest. In general, however, the proposed method achieves better pansharpening performance than the comparison methods.
2. QuickBird Data Set
For the group 3 image pair in Figure 10e,f, which came from the QuickBird satellite, the corresponding fusion products are shown in Figure 13. The IHS method has a particular limitation: it can only be applied to an MS image with three bands. Since this QuickBird data set provides the MS image with four bands (R, G, B, and NIR), the methods other than IHS are used for comparison. Comparing the visual effects of the fusion results, there is relatively serious spectral and spatial distortion in the results achieved by the NSCT-PCNN and NSST-SR methods. The fusion result of the MSVD method suffers from some blurring and has an obviously serrated border. The fusion result of the GSA method shows some improvement in terms of spatial and spectral quality, but spectral inhomogeneity exists in local areas. The PRACS, MF-HG, and our proposed methods obtain better visual effects, with higher spatial quality and better spectral information maintenance ability. Table 9 shows the corresponding quantitative results of Figure 13, in which we can see that the proposed method outperforms the six compared methods in the QNR and $D_s$ indices, while the best value of $D_\lambda$ is given by the PRACS method.

4.4.2. Performance Comparison with State-of-the-Art Methods on Degraded Data

In this experiment, the performance of the proposed fusion method is compared with seven state-of-the-art methods, including IHS [1], NSCT-PCNN [20], NSST-SR [23], MSVD [46], PRACS [6], GSA [7], and MF-HG [36], on the degraded remote sensing data set with the reference image. As shown in Figure 14, three image pairs are selected for the reduced resolution evaluation. The first two image pairs came from two different observation scenes of the WorldView-2 satellite and the third was collected from the QuickBird satellite. The low resolution images can be generated by modulation transfer functions (MTF) filtering and decimation. The original MS images are used as the reference images for performance evaluation. Figure 15, Figure 17, and Figure 19 show the fusion results of the proposed method and the compared methods.
1. WorldView-2 Data Set
For the group 1 image set, which is the degraded data set of WorldView-2 on the bridge area, the corresponding fused results are shown in Figure 15. Figure 15a is the reference MS image. Figure 15b–i demonstrate the fusion results of IHS, NSCT-PCNN, NSST-SR, MSVD, PRACS, GSA, MF-HG, and our proposed method, respectively. It can be seen that serious spectral distortion is produced by the IHS method. The fusion result of the NSCT-PCNN method behaves relatively well in terms of spectral maintenance and spatial information injection. The spatial detailed information in the fused result of NSST-SR is well maintained, but there are some color distortions. The fusion results of the MSVD and PRACS also behave well in terms of spectral and spatial information. From Figure 15g, we can see that some detailed information is lost in the result of the GSA method, with some blurring of the outlines of the bridge. In the fusion result that was obtained by the MF-HG method, the contrast in local areas is slightly higher than that in the reference image. By contrast, the fusion image of the proposed method is closer to the reference image, and it has higher spatial resolution than the comparison methods.
For better analysis of the spectral preservation ability of the fused images in Figure 15, the horizontal profiles of the column means for R, G, and B bands of the fusion images that were obtained by different methods are displayed in Figure 16. The profile that was formed by black dots is the reference image. The closer to this dotted profile, the better the spectral characteristics are preserved. It can be seen that the profile of the IHS method has large deviations from the reference images in most areas. In contrast, the fused image that was obtained by the proposed method is most similar to the horizontal spectral profile of the reference image, and it has the highest fidelity for the spectral information.
Group 2 image pair in Figure 14 is the degraded data set of WorldView-2 on the urban area, and Figure 17 shows the corresponding fusion results of the compared methods and the proposed method. Figure 17a is the original MS image, which is used as the reference image for evaluation. There is serious spectral distortion in the fused result that was obtained by the IHS method. The color of the fusion result of the NSST-SR method has a certain degree of spectral distortion. The fusion result that was obtained by the MSVD method is blurred. Fusion results yielded by the NSCT-PCNN, PRACS, GSA, MF-HG, and our proposed methods achieve better visual effects and have a more similar appearance to the reference MS image.
Figure 18 shows the horizontal spectral profiles of the fused images on urban area. It can be seen that the IHS profile has a large degree of deviation from the reference image. The horizontal profiles of NSCT-PCNN, NSST-SR, GSA, and MF-HG methods are close to the reference image, with only some local sections having differences. The horizontal spectral profile of the proposed method is closest to the reference image.
In this experiment, six indices are selected for the evaluation procedure on the degraded data, including CC , SSIM , SAM , RMSE , ERGAS , and UIQI . From Table 10 and Table 11 we can see that the largest CC values are achieved by the proposed method, which are significantly higher than other methods, indicating that our method has excellent spectral preservation ability. The proposed method also obtains the best values in RMSE , ERGAS and SSIM . It demonstrates that the fused image that was obtained by our method maintains good spatial information and has the most similar structure to the reference image. The optimal value of the UIQI index shows that the spatial details preservation of the proposed method is the best. The value of SAM is close to the optimal value of the compared methods. According to the subjective visual effect and objective evaluation using the WorldView-2 data set, the proposed method not only improves the spatial continuity, but also effectively maintains the spectral information and improves the spatial resolution of the fused image. The performance of the proposed method is obviously superior to the other seven state-of-the-art methods.
2. QuickBird Data Set
Figure 19 shows the fusion results of the group 3 image pair in Figure 14. This data set, from the QuickBird satellite, provides the MS image with four bands (R, G, B, and NIR), so the six comparison methods excluding IHS are used in this part. The fusion result of the NSCT-PCNN method behaves well in terms of spatial detail enhancement and spectral preservation, but it introduces some noise in local areas. The results of the NSST-SR and MSVD methods have a certain degree of color distortion, and the fusion result achieved by the MSVD method suffers from edge blurring. The spectral characteristics in the fusion results of the PRACS and GSA methods are preserved well, but there is some loss of structural information. In contrast, the fusion results of the MF-HG method and the proposed method achieve better visual effects.
Figure 20 shows the horizontal spectral profiles of the column means for R, G, B, and NIR bands of the fusion images and the reference image. The profiles of the NSST-SR and MSVD methods have large deviations from the reference image. The profiles of the GSA, PRACS, and MF-HG methods agree relatively well with the reference image. It can be seen that the profile that was obtained by our proposed method is the closest to the reference profile.
The quantitative evaluation results of Figure 19 are given in Table 12, in which we can see that the proposed method outperforms the six compared methods in most indices, except the RMSE index. The best values in CC and SAM indicate the better spectral preservation ability of the proposed method. The SSIM, ERGAS, and UIQI values are also optimal, which demonstrates that the fused image obtained by the proposed method achieves better preservation of spatial and spectral information and has the most similar structure to the reference image. The RMSE value of our proposed method is close to the optimum, which is given by the PRACS method. According to the subjective visual effects and objective evaluation on the QuickBird data set, our proposed method can harmonize spectral information preservation and spatial detail injection well, and it is better than the other six state-of-the-art methods.

5. Conclusions

In this paper, a new pansharpening method that is based on the gradient domain GIF and NSST is proposed. The spatial continuity of the fused image is improved by using a gradient domain GIF with good edge preserving properties. For low-frequency coefficients in the NSST domain, the MFIM technology is adopted to complete the injection of the detailed spatial information. The modulated low-frequency coefficients are refined based on the gradient-domain GIF, which reduces the blurring of the edges. For the high-frequency coefficients, an improved PCNN is utilized to calculate the model firing output amplitude. The gradient domain GIF is then used to optimize the firing map, which reduces the decision deviation at the boundary region of the object. It also effectively suppresses the artificial texture and image non-uniformity phenomenon. The real and simulated data sets from the WorldView-2 and QuickBird satellites were utilized in order to verify the pansharpening performance of the proposed method. The data sets consist of three channels and four channels, respectively. Experiments conducted on the WorldView-2 data set adopted seven state-of-the-art methods for comparison: (1) IHS; (2) NSCT-PCNN; (3) NSST-SR; (4) MSVD; (5) PRACS; (6) GSA; and, (7) MF-HG. Experiments that were conducted using the QuickBird data set adopted these same methods with the exception of the IHS method. The experimental results show that our proposed method can achieve better performance in terms of spectral preservation and spatial detail injection, which verifies the method’s effectiveness and superiority.
Although the proposed method performs well in spectral information preservation and spatial quality improvement, the spatial quality of the fused image still needs to be further improved due to the blurring of some edge details that is caused by the gradient domain GIF. In addition, the WorldView-2 satellite data set that was used in the paper only provides three channels of R, G, and B, while the satellite image itself contains more bands. Combinations of different bands have different applications in practice. Spectral fidelity is particularly important for WorldView-2 satellite images. In future work, we would like to extend the pansharpening method to MS images with more bands, and we will work to further improve the spatial quality of the fused images while maintaining spectral information and spatial continuity.

Author Contributions

Conceptualization, J.J. and L.W.; Methodology and experimental simulation, J.J.; Writing-original draft, J.J.; Writing-review & editing, J.J. and L.W.

Funding

This research was funded by the National Nature Science Foundation of China, grant number 61801513 and the Defense Equipment Pre-research Foundation, grant number 61420100103.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Carper, W.J.; Lillesand, T.M.; Kiefer, P.W. The use of intensity-hue-saturation transformation for merging SPOT panchromatic and multispectral image data. Photogramm. Eng. Remote Sens. 1990, 56, 459–467.
2. Pohl, C.; Genderen, J.L.V. Review article multisensor image fusion in remote sensing: Concepts, methods and applications. Int. J. Remote Sens. 1998, 19, 823–854.
3. Laben, C.A.; Brower, B.V. Process for enhancing the spatial resolution of multispectral imagery using pan-sharpening. US Patent 6011875 A, 4 January 2000.
4. Tu, T.M.; Huang, P.S.; Hung, C.L.; Chang, C.P. A fast intensity-hue-saturation fusion technique with spectral adjustment for IKONOS imagery. IEEE Geosci. Remote Sens. Lett. 2004, 1, 309–312.
5. Rahmani, S.; Strait, M.; Merkurjev, D.; Moeller, M.; Wittman, T. An adaptive IHS pan-sharpening method. IEEE Geosci. Remote Sens. Lett. 2010, 7, 746–750.
6. Choi, J.; Yu, K.Y.; Kim, Y. A new adaptive component-substitution-based satellite image fusion by using partial replacement. IEEE Trans. Geosci. Remote Sens. 2011, 49, 295–309.
7. Restaino, R.; Mura, M.D.; Vivone, G.; Chanussot, J. Context-adaptive pansharpening based on image segmentation. IEEE Trans. Geosci. Remote Sens. 2017, 55, 753–766.
8. Garzelli, A.; Nencini, F.; Capobianco, L. Optimal MMSE pansharpening of very high resolution multispectral images. IEEE Trans. Geosci. Remote Sens. 2008, 46, 228–236.
9. Li, S.; Yang, B. A new pan-sharpening method using a compressed sensing technique. IEEE Trans. Geosci. Remote Sens. 2011, 49, 738–746.
10. Vicinanza, M.R.; Restaino, R.; Vivone, G.; Mura, M.D.; Chanussot, J. A pansharpening method based on the sparse representation of injected details. IEEE Geosci. Remote Sens. Lett. 2015, 12, 180–184.
11. Palsson, F.; Sveinsson, J.R.; Ulfarsson, M.O. A new pansharpening algorithm based on total variation. IEEE Geosci. Remote Sens. Lett. 2014, 11, 318–322.
12. He, X.Y.; Condat, L.; Bioucas-Dias, J.M.; Chanussot, J.; Xia, J.S. A new pansharpening method based on spatial and spectral sparsity priors. IEEE Trans. Image Process. 2014, 23, 4160–4174.
13. Vivone, G.; Alparone, L.; Chanussot, J.; Mura, M.D.; Garzelli, A.; Licciardi, G.A.; Restaino, R.; Wald, L. A critical comparison among pansharpening algorithms. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2565–2586.
14. Zhou, J.; Civco, D.L.; Silander, J.A. A wavelet transform method to merge Landsat TM and SPOT panchromatic data. Int. J. Remote Sens. 1998, 19, 743–757.
15. Lewis, J.J.; O’Callaghan, R.J.; Nikolov, S.G.; Bull, D.R.; Canagarajah, N. Pixel- and region-based image fusion with complex wavelets. Inf. Fusion 2007, 8, 119–130.
16. Nencini, F.; Garzelli, A.; Baronti, S.; Alparone, L. Remote sensing image fusion using the curvelet transform. Inf. Fusion 2007, 8, 143–156.
17. Do, M.N.; Vetterli, M. The contourlet transform: An efficient directional multiresolution image representation. IEEE Trans. Image Process. 2005, 14, 2091–2106.
18. Guo, K.; Labate, D. Optimally sparse multidimensional representation using shearlets. SIAM J. Math. Anal. 2008, 39, 298–318.
19. Li, S.T.; Yang, B.; Hu, J.W. Performance comparison of different multi-resolution transforms for image fusion. Inf. Fusion 2011, 12, 74–84.
20. Qu, X.B.; Yan, J.W.; Xiao, H.Z. Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain. Acta Autom. Sin. 2008, 34, 1508–1514.
21. Cunha, A.L.; Zhou, J.P.; Do, M.N. The non-subsampled contourlet transform: Theory, design, and applications. IEEE Trans. Image Process. 2006, 15, 3089–3101.
22. Easley, G.; Labate, D.; Lim, W.Q. Sparse directional image representations using the discrete shearlet transform. Appl. Comput. Harmon. Anal. 2008, 25, 25–46.
23. Moonon, A.U.; Hu, J.; Li, S. Remote sensing image fusion method based on nonsubsampled shearlet transform and sparse representation. Sens. Imaging 2015, 16, 23.
24. Wu, Y.Q.; Tao, F.X. Multispectral and panchromatic image fusion based on improved projected gradient NMF in NSST domain. Acta Opt. Sin. 2015, 35, 129–138.
25. Wu, Y.Q.; Wang, Z.L. Multispectral and panchromatic image fusion using chaotic Bee Colony optimization in NSST domain. J. Remote Sens. 2017, 21, 549–557.
26. Yang, Y.; Wan, W.G.; Huang, S.Y.; Que, Y. A novel pan-sharpening framework based on matting model and multiscale transform. Remote Sens. 2017, 9, 391.
27. Durand, F.; Dorsey, J. Fast bilateral filtering for the display of high-dynamic-range images. In Proceedings of the 29th International Conference and Exhibition on Computer Graphics and Interactive Techniques (SIGGRAPH), San Antonio, TX, USA, 23–26 July 2002; pp. 257–266.
28. Kumar, B.K.S. Image fusion based on pixel significance using cross bilateral filter. Signal Image Video Process. 2015, 9, 1193–1204.
29. Farbman, Z.; Fattal, R.; Lischinski, D.; Szeliski, R. Edge-preserving decompositions for multi-scale tone and detail manipulation. In Proceedings of the 36th International Conference and Exhibition on Computer Graphics and Interactive Techniques (SIGGRAPH), Los Angeles, CA, USA, 11–15 August 2008; p. 67.
30. He, K.M.; Sun, J.; Tang, X.O. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1397–1409.
31. Wan, W.; Yang, Y.; Lee, H.J. Practical remote sensing image fusion method based on guided filter and improved SML in the NSST domain. Signal Image Video Process. 2018, 12, 959–966.
32. Meng, X.C.; Li, J.; Shen, H.F.; Zhang, L.P. Pansharpening with a guided filter based on three-layer decomposition. Sensors 2016, 16, 1068.
33. Li, Z.G.; Zheng, J.H.; Zhu, Z.J.; Yao, W.; Wu, S.Q. Weighted guided image filtering. IEEE Trans. Image Process. 2014, 24, 120–129.
34. Kou, F.; Chen, W.H.; Wen, C.Y.; Li, Z.G. Gradient domain guided image filtering. IEEE Trans. Image Process. 2015, 24, 4528–4539.
35. Li, S.T.; Kang, X.D.; Hu, J.W. Image fusion with guided filtering. IEEE Trans. Image Process. 2013, 22, 2864–2875.
36. Restaino, R.; Vivone, G.; Mura, M.D.; Chanussot, J. Fusion of multispectral and panchromatic images based on morphological operators. IEEE Trans. Image Process. 2016, 25, 2882–2895.
37. Peli, E. Contrast in complex images. J. Opt. Soc. Am. A 1990, 7, 2032–2040.
38. Vivone, G.; Restaino, R.; Mura, M.D.; Licciardi, G.; Chanussot, J. Contrast and error-based fusion schemes for multispectral image pansharpening. IEEE Geosci. Remote Sens. Lett. 2013, 11, 930–934.
39. Blasch, E.P. Biological information fusion using a PCNN and belief filtering. In Proceedings of the International Joint Conference on Neural Networks (IJCNN), Washington, DC, USA, 10–16 July 1999; pp. 2792–2795.
40. Alparone, L.; Baronti, S.; Garzelli, A.; Nencini, F. A global quality measurement of pan-sharpened multispectral imagery. IEEE Geosci. Remote Sens. Lett. 2004, 1, 313–317.
41. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
42. Yuhas, R.H.; Goetz, A.F.H.; Boardman, J.W. Discrimination among semi-arid landscape endmembers using the Spectral Angle Mapper (SAM) algorithm. In Proceedings of the Summaries of the Third Annual JPL Airborne Geoscience Workshop, Boulder, CO, USA, 1–5 June 1992; pp. 147–149.
43. Yang, Y.; Wu, L.; Huang, S.Y.; Wan, W.G.; Que, Y. Remote sensing image fusion based on adaptively weighted joint detail injection. IEEE Access 2018, 6, 6849–6864.
44. Wang, Z.; Bovik, A.C. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84.
45. Alparone, L.; Aiazzi, B.; Baronti, S.; Garzelli, A.; Nencini, F.; Selva, M. Multispectral and panchromatic data fusion assessment without reference. Photogramm. Eng. Remote Sens. 2008, 74, 193–200.
46. Naidu, V. Image fusion technique using multi-resolution singular value decomposition. Def. Sci. J. 2011, 61, 479–484.
Figure 1. Frequency plane and supports of a shearlet: (a) shearlet frequency plane; and, (b) shearlet supports.
Figure 2. Multi-scale and multi-directional decomposition model of a three-level non-subsampled shearlet transform (NSST).
Figure 3. The flow chart of the guided image filter (GIF).
Figure 4. The flow chart of the proposed fusion method.
Figure 5. Two pairs of the multispectral (MS) and panchromatic (PAN) example images: (a) PAN image 1; (b) MS image 1; (c) the fusion result of PAN image 1 and MS image 1; (d) PAN image 2; (e) MS image 2; and, (f) the fusion result of PAN image 2 and MS image 2.
Figure 6. Analysis of parameter r1 used in the algorithm. The Correlation Coefficient (CC), Spectral Angle Mapper (SAM), Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS), and Universal Image Quality Index (UIQI) are observed when ε1 = 1, r2 = 3, and ε2 = 10⁻⁶ (which are set as default values): (a) the effect of r1 on the fusion performance of the group 1 images; and (b) the effect of r1 on the fusion performance of the group 2 images.
Figure 7. Analysis of parameter ε1 used in the algorithm. CC, SAM, ERGAS, and UIQI are observed when r1 = 2, r2 = 3, and ε2 = 10⁻⁶ (which are set as default values): (a) the effect of ε1 on the fusion performance of the group 1 images; and (b) the effect of ε1 on the fusion performance of the group 2 images.
Figure 8. Analysis of parameter r2 used in the algorithm. CC, SAM, ERGAS, and UIQI are observed when r1 = 2, ε1 = 10⁻⁶, and ε2 = 10⁻⁶ (which are set as default values): (a) the effect of r2 on the fusion performance of the group 1 images; and (b) the effect of r2 on the fusion performance of the group 2 images.
Figure 9. Analysis of parameter ε2 used in the algorithm. CC, SAM, ERGAS, and UIQI are observed when r1 = 2, r2 = 2, and ε1 = 10⁻⁶ (which are set as default values): (a) the effect of ε2 on the fusion performance of the group 1 images; and (b) the effect of ε2 on the fusion performance of the group 2 images.
Figure 10. Three image pairs of experimental data sets used for assessment without a reference. The first two came from the WorldView-2 satellite, and the third was collected from the QuickBird satellite: (a) test MS image 1; (b) test PAN image 1; (c) test MS image 2; (d) test PAN image 2; (e) test MS image 3; and (f) test PAN image 3.
Figure 11. The fusion results of the WorldView-2 data set on the seaside area: (a) intensity-hue-saturation transformation (IHS); (b) non-subsampled contourlet transform-pulse coupled neural network (NSCT-PCNN); (c) non-subsampled shearlet transform-SR (NSST-SR); (d) multi-resolution singular value decomposition (MSVD); (e) partial replacement adaptive CS (PRACS); (f) Gram–Schmidt adaptive (GSA); (g) morphological filter-half gradients (MF-HG); and, (h) proposed.
Figure 12. The fusion results of the WorldView-2 data set on the house area: (a) IHS; (b) NSCT-PCNN; (c) NSST-SR; (d) MSVD; (e) PRACS; (f) GSA; (g) MF-HG; and (h) proposed.
Figure 13. The fusion results of the QuickBird data set on the house area: (a) NSCT-PCNN; (b) NSST-SR; (c) MSVD; (d) PRACS; (e) GSA; (f) MF-HG; and, (g) proposed.
Figure 14. Three image pairs of experimental data sets used for assessment with reference. The first two came from the WorldView-2 satellite, and the third was collected from the QuickBird satellite: (a) test MS image 1; (b) test PAN image 1; (c) test MS image 2; (d) test PAN image 2; (e) test MS image 3; and, (f) test PAN image 3.
Figure 15. The fusion results of the WorldView-2 data set on the bridge area: (a) reference; (b) IHS; (c) NSCT-PCNN; (d) NSST-SR; (e) MSVD; (f) PRACS; (g) GSA; (h) MF-HG; and, (i) proposed.
Figure 16. Horizontal spectral profiles of the fusion results for the WorldView-2 data set on the bridge area: (a) band R; (b) band G; and, (c) band B.
Figure 17. The fusion results of the WorldView-2 data set on the urban area: (a) reference; (b) IHS; (c) NSCT-PCNN; (d) NSST-SR; (e) MSVD; (f) PRACS; (g) GSA; (h) MF-HG; and, (i) proposed.
Figure 18. Horizontal spectral profiles of the fusion results for the WorldView-2 data set on the urban area: (a) band R; (b) band G; and, (c) band B.
Figure 19. The fusion results of the QuickBird data set: (a) reference; (b) NSCT-PCNN; (c) NSST-SR; (d) MSVD; (e) PRACS; (f) GSA; (g) MF-HG; and, (h) proposed.
Figure 20. Horizontal spectral profiles of the fusion results for the QuickBird data set: (a) band R; (b) band G; (c) band B; and, (d) band NIR.
Table 1. The spectral and spatial resolution of each band of the WorldView-2 satellite.

Spectral Bands | Wavelength Range (nm) | Spatial Resolution (m) | Comment
Band 1 | 400–450 | 1.84 | Coastal Blue
Band 2 | 450–510 | 1.84 | Blue
Band 3 | 510–580 | 1.84 | Green
Band 4 | 585–625 | 1.84 | Yellow
Band 5 | 630–690 | 1.84 | Red
Band 6 | 705–745 | 1.84 | Red Edge
Band 7 | 770–895 | 1.84 | Near-Infrared 1
Band 8 | 860–1040 | 1.84 | Near-Infrared 2
Panchromatic wave band | 450–800 | 0.46 |
Table 2. The spectral and spatial resolution of each band of the QuickBird satellite.

Spectral Bands | Wavelength Range (nm) | Spatial Resolution (m) | Comment
Band 1 | 450–520 | 2.88 | Blue
Band 2 | 520–600 | 2.88 | Green
Band 3 | 630–690 | 2.88 | Red
Band 4 | 760–900 | 2.88 | Near-Infrared
Panchromatic wave band | 450–900 | 0.72 |
Table 3. The performance evaluation results with different window sizes r1 × r1.

Image | Quality Indices | Ideal Value | 2 × 2 | 3 × 3 | 5 × 5 | 7 × 7 | 9 × 9 | 11 × 11
plant | CC | 1 | 0.8814 | 0.8757 | 0.8602 | 0.8428 | 0.8268 | 0.8125
plant | SAM | 0 | 8.4743 | 8.7137 | 9.2287 | 9.7403 | 10.1828 | 10.5024
plant | ERGAS | 0 | 8.7972 | 8.8649 | 9.1230 | 9.4468 | 9.7414 | 9.9903
plant | UIQI | 1 | 0.7340 | 0.7289 | 0.7135 | 0.6973 | 0.68465 | 0.6747
house | CC | 1 | 0.9624 | 0.9592 | 0.9495 | 0.9398 | 0.9332 | 0.9287
house | SAM | 0 | 8.5585 | 8.7392 | 8.9066 | 8.9936 | 9.0034 | 8.9866
house | ERGAS | 0 | 5.6564 | 5.7673 | 6.2312 | 6.7068 | 7.0225 | 7.2251
house | UIQI | 1 | 0.9512 | 0.9482 | 0.9374 | 0.9266 | 0.9191 | 0.9138
Table 4. The performance evaluation results with different values of ε1.

Image | Quality Indices | Ideal Value | 1 | 0.3 | 0.2 | 0.1 | 10⁻³ | 10⁻⁶
plant | CC | 1 | 0.8814 | 0.8815 | 0.8815 | 0.8816 | 0.8835 | 0.8854
plant | SAM | 0 | 8.4743 | 8.4624 | 8.4628 | 8.4483 | 8.3527 | 8.3177
plant | ERGAS | 0 | 8.7972 | 8.7970 | 8.7968 | 8.7969 | 8.7830 | 8.7869
plant | UIQI | 1 | 0.7340 | 0.7340 | 0.7341 | 0.7341 | 0.7356 | 0.7370
house | CC | 1 | 0.9624 | 0.9624 | 0.9624 | 0.9625 | 0.9633 | 0.9636
house | SAM | 0 | 8.5585 | 8.5721 | 8.5616 | 8.5714 | 8.5129 | 8.4953
house | ERGAS | 0 | 5.6564 | 5.6582 | 5.6594 | 5.6617 | 5.6915 | 5.7057
house | UIQI | 1 | 0.9512 | 0.9512 | 0.9512 | 0.9512 | 0.9514 | 0.9515
Table 5. The performance evaluation results with different window sizes r2 × r2.

Image | Quality Indices | Ideal Value | 2 × 2 | 3 × 3 | 5 × 5 | 7 × 7 | 9 × 9 | 11 × 11
plant | CC | 1 | 0.8879 | 0.8854 | 0.8815 | 0.8794 | 0.8774 | 0.8758
plant | SAM | 0 | 8.1915 | 8.3177 | 8.4378 | 8.5332 | 8.5715 | 8.5824
plant | ERGAS | 0 | 8.7320 | 8.7869 | 8.8861 | 8.9400 | 8.9871 | 9.0244
plant | UIQI | 1 | 0.7414 | 0.7370 | 0.7314 | 0.7281 | 0.7258 | 0.7241
house | CC | 1 | 0.9642 | 0.9636 | 0.9627 | 0.9618 | 0.9609 | 0.9601
house | SAM | 0 | 8.3216 | 8.4953 | 8.7911 | 9.0215 | 9.3564 | 9.4627
house | ERGAS | 0 | 5.7053 | 5.7057 | 5.7105 | 5.7368 | 5.7735 | 5.8002
house | UIQI | 1 | 0.9520 | 0.9515 | 0.9503 | 0.9492 | 0.9482 | 0.9476
Table 6. The performance evaluation results with different values of ε2.

Image | Quality Indices | Ideal Value | 1 | 0.3 | 0.2 | 0.1 | 10⁻³ | 10⁻⁶
plant | CC | 1 | 0.8861 | 0.8863 | 0.8864 | 0.8865 | 0.8876 | 0.8879
plant | SAM | 0 | 8.2243 | 8.2472 | 8.2317 | 8.2434 | 8.2069 | 8.1915
plant | ERGAS | 0 | 8.7729 | 8.7724 | 8.7684 | 8.7721 | 8.7404 | 8.7320
plant | UIQI | 1 | 0.7382 | 0.7383 | 0.7385 | 0.7387 | 0.7408 | 0.7414
house | CC | 1 | 0.9638 | 0.9638 | 0.9638 | 0.9639 | 0.9642 | 0.9642
house | SAM | 0 | 8.4131 | 8.4016 | 8.3662 | 8.3470 | 8.3023 | 8.3216
house | ERGAS | 0 | 5.7100 | 5.7134 | 5.7151 | 5.7170 | 5.7068 | 5.7053
house | UIQI | 1 | 0.9516 | 0.9516 | 0.9516 | 0.9516 | 0.9520 | 0.9520
Table 7. Objective evaluation of the experimental results shown in Figure 11.

Quality Indices | IHS | NSCT-PCNN | NSST-SR | MSVD | PRACS | GSA | MF-HG | Proposed
Dλ | 0.1204 | 0.0441 | 0.0092 | 0.0376 | 0.0217 | 0.0272 | 0.0087 | 0.0242
Ds | 0.1541 | 0.0458 | 0.0378 | 0.0526 | 0.0436 | 0.0315 | 0.0399 | 0.0147
QNR | 0.7440 | 0.9121 | 0.9533 | 0.9118 | 0.9356 | 0.9422 | 0.9517 | 0.9615
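For reference, the Dλ, Ds, and QNR values reported in Tables 7–9 are linked by the standard no-reference quality index of [45], QNR = (1 − Dλ)^α (1 − Ds)^β with α = β = 1; for the proposed method in Table 7, for example, (1 − 0.0242) × (1 − 0.0147) ≈ 0.9615.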
Table 8. Objective evaluation of the experimental results shown in Figure 12.

Quality Indices | IHS | NSCT-PCNN | NSST-SR | MSVD | PRACS | GSA | MF-HG | Proposed
Dλ | 0.1446 | 0.0479 | 0.0501 | 0.0369 | 0.0806 | 0.0679 | 0.0415 | 0.0282
Ds | 0.1028 | 0.0868 | 0.1789 | 0.0524 | 0.0516 | 0.0252 | 0.0366 | 0.0456
QNR | 0.7675 | 0.8695 | 0.7799 | 0.9126 | 0.8719 | 0.9086 | 0.9234 | 0.9274
Table 9. Objective evaluation of the experimental results shown in Figure 13.

Quality Indices | NSCT-PCNN | NSST-SR | MSVD | PRACS | GSA | MF-HG | Proposed
Dλ | 0.0833 | 0.1263 | 0.1065 | 0.0044 | 0.0325 | 0.0270 | 0.0168
Ds | 0.0866 | 0.1005 | 0.0171 | 0.0317 | 0.0634 | 0.0242 | 0.0161
QNR | 0.8373 | 0.7859 | 0.8782 | 0.9640 | 0.9061 | 0.9494 | 0.9674
Table 10. Objective evaluation of the experimental results shown in Figure 15.

Quality Indices | IHS | NSCT-PCNN | NSST-SR | MSVD | PRACS | GSA | MF-HG | Proposed
CC | 0.9439 | 0.9556 | 0.9551 | 0.9705 | 0.9680 | 0.9625 | 0.9704 | 0.9728
SSIM | 0.7698 | 0.9094 | 0.8777 | 0.9114 | 0.9473 | 0.8975 | 0.9315 | 0.9496
SAM | 4.1703 | 3.8173 | 6.6869 | 2.5209 | 2.7412 | 2.0694 | 0.4814 | 2.3857
RMSE | 21.1458 | 8.2419 | 12.207 | 10.1403 | 10.1918 | 7.403 | 10.8719 | 6.9643
ERGAS | 22.6936 | 5.2036 | 8.4600 | 7.2728 | 6.1008 | 4.7421 | 6.6695 | 4.3979
UIQI | 0.6869 | 0.8768 | 0.8011 | 0.9046 | 0.9201 | 0.8587 | 0.9118 | 0.9368
Table 11. Objective evaluation of the experimental results shown in Figure 17.

Quality Indices | IHS | NSCT-PCNN | NSST-SR | MSVD | PRACS | GSA | MF-HG | Proposed
CC | 0.9624 | 0.9721 | 0.9853 | 0.9436 | 0.9846 | 0.9788 | 0.9829 | 0.9883
SSIM | 0.7101 | 0.9070 | 0.9469 | 0.8622 | 0.9450 | 0.9256 | 0.9238 | 0.9503
SAM | 4.1115 | 5.5144 | 4.3565 | 7.46 | 0.5834 | 3.7241 | 0.8090 | 4.3772
RMSE | 51.8986 | 15.2865 | 12.5691 | 22.5601 | 15.9296 | 15.9367 | 17.8641 | 10.626
ERGAS | 22.4190 | 3.8395 | 2.9938 | 5.3072 | 3.9597 | 3.9836 | 4.7371 | 2.6909
UIQI | 0.7255 | 0.9697 | 0.9836 | 0.9369 | 0.9734 | 0.9718 | 0.9674 | 0.9868
Table 12. Objective evaluation of the experimental results shown in Figure 19.

Quality Indices | NSCT-PCNN | NSST-SR | MSVD | PRACS | GSA | MF-HG | Proposed
CC | 0.9348 | 0.8786 | 0.8975 | 0.9513 | 0.9492 | 0.9345 | 0.9524
SSIM | 0.7540 | 0.7301 | 0.6344 | 0.7283 | 0.7312 | 0.7327 | 0.7590
SAM | 3.7120 | 8.1980 | 6.6525 | 3.5030 | 3.6958 | 3.3390 | 3.3048
RMSE | 16.4435 | 28.2170 | 24.1695 | 13.9863 | 14.1997 | 16.8017 | 14.1164
ERGAS | 2.9210 | 4.6438 | 4.0023 | 2.5519 | 2.5781 | 2.9631 | 2.5087
UIQI | 0.8680 | 0.7823 | 0.8075 | 0.87761 | 0.8798 | 0.8698 | 0.8832
