Panchromatic and Multispectral Image Fusion Combining GIHS, NSST, and PCA

Abstract: Spatial and spectral information are essential sources of information in remote sensing applications, and the fusion of panchromatic and multispectral images effectively combines the advantages of both. Because the two main classes of fusion methods, component substitution (CS) and multi-resolution analysis (MRA), have different advantages, mixed approaches are possible. This paper proposes a fusion algorithm that combines the advantages of the generalized intensity-hue-saturation (GIHS) transform and the non-subsampled shearlet transform (NSST) with principal component analysis (PCA) to extract more spatial information. In contrast to traditional algorithms, the proposed algorithm uses the PCA transformation to obtain the spatial structure components from both the PAN and MS images, which allows spatial information to be injected effectively while maintaining the spectral information with high fidelity. First, PCA is applied to each band of the low-resolution multispectral (MS) image and the panchromatic (PAN) image to obtain the first principal component, and the intensity of the MS image is calculated. Then, the PAN image is fused with the first principal component using the NSST, and the fused image replaces the original intensity component. Finally, a fused image is obtained using the GIHS algorithm. Using urban, plants and water, farmland, and desert images from GeoEye-1, WorldView-4, Gaofen-7 (GF-7), and Gaofen Multi-Mode (GFDM) as experimental data, this fusion method was tested under both the evaluation mode with references and the evaluation mode without references and was compared with five classic fusion algorithms. The results show that the proposed algorithm achieves better fusion performance in both spectral preservation and spatial information incorporation.


Introduction
Spatial and spectral information are significant in remote sensing imaging applications, such as land classification, change detection, and road extraction. However, for reasons of imaging quality, the high-frequency spatial information is separated from the spectral information during satellite imaging [1], and typical optical remote sensing satellites, such as QuickBird, WorldView-2, GF-1, and GF-2, only provide high-spatial-resolution panchromatic (PAN) images and low-spatial-resolution multispectral (MS) images. The fusion of PAN and MS images effectively solves the problem of this separation of the high-frequency spatial information from the spectral information.
According to the technique used for high-frequency information injection, PAN-MS fusion can be divided into two categories: spectral and spatial methods [2]. The spectral methods are based on component replacement, in which the spectral information component (SIC) is separated from the spatial structure component (SSC) by projecting the MS image into another vector space. The SSC is then replaced by the PAN image to incorporate high-frequency spatial information, and, finally, a fused image is obtained through the inverse transformation. Typical component substitution (CS) methods include principal component analysis (PCA) and the Gram-Schmidt process (GS). In recent years, methods based on deep learning [3,4] have also achieved good results, but their computational complexity is high, and they are not suitable for large-scale remote sensing images; thus, this paper does not discuss them in depth.
The spatial methods include multi-resolution analysis (MRA), which decomposes the MS and PAN images at multiple scales. The high-frequency components are fused with the low-frequency components using different rules and, finally, inverted back into the fused image. Typical MRA methods include the wavelet transform [5], curvelet transform [6], contourlet transform [7,8], non-subsampled contourlet transform (NSCT) [9], and non-subsampled shearlet transform (NSST) [10]. Among these, the wavelet transform is the most widely used MRA method, but its direction selectivity is limited, and it cannot achieve a stable fusion effect. The curvelet and contourlet transforms lack translation invariance, so the fusion result may be affected by noise or by the alignment accuracy of the source images. The NSCT has high computational complexity, so it is unsuitable for large images. The NSST has good directional selectivity, can obtain more information from the source images, and has no down-sampling operation in the decomposition process, thus effectively reducing the pseudo-Gibbs phenomena caused by limited registration accuracy.
The CS approaches have good spatial quality but severe spectral distortion, while the MRA methods have high spectral fidelity but poor spatial quality. These two types of method are complementary [11], which has given rise to many coupled methods. The conventional model of the coupled method is shown in Figure 1a: (1) project the MS image into another vector space to separate the spectral information (MS_SIC) and the spatial information (MS_SSC); (2) fuse MS_SSC and PAN using an MRA-like method to obtain the new spatial structure component (New-SSC); and (3) invert New-SSC and MS_SIC back to the original space to obtain the fused image. The addition of the SSC reduces the information mismatch between PAN and MS, thus reducing the spectral distortion. However, the SSC is obtained directly from the MS image, which lacks high-frequency spatial information, thus reducing the image's sharpness. Although the coupled method can overcome the spectral distortion of CS and the spatial distortion of MRA, its spatial information quality (sharpness) is inferior to that of CS, and its spectral information quality (color) is inferior to that of MRA.

Therefore, it is of practical significance to optimize the coupling method so as to improve both the spatial and the spectral information quality of the fused images. Through a linear transformation of the data, the PCA technique can concentrate part of the spatial information shared by the bands in the first principal component. In this paper, a new fusion strategy is proposed, as shown in Figure 1b: (1) project the MS image into another vector space to separate the spectral information (MS_SIC) and the spatial information (MS_SSC); (2) combine the PAN and MS images for the PCA transformation and use the first principal component (PC1) as the spatial component (PC1_SSC); (3) use an MRA-like method to fuse PC1_SSC and PAN to obtain New-SSC; and (4) invert New-SSC and MS_SIC back to the original space to obtain the fused image. The difference between the new and conventional modes lies in how the spatial structure components are obtained: the conventional mode obtains them directly from the MS image using color space transformations and similar techniques, whereas the new mode obtains them from both the PAN and MS images using the PCA transformation.
In the subsequent experiments, we selected the generalized intensity-hue-saturation (GIHS) algorithm from the CS methods and the NSST from the MRA methods. In terms of fusion rules, this paper proposes a new low-frequency fusion rule that uses a gradient-domain singular value decomposition (SVD) [12] and local structure descriptors to construct the weight coefficients, together with a bootstrap filter [13] to guide the weights and increase their spatial continuity; meanwhile, the high-frequency coefficients are fused using local spatial frequencies.
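As a minimal illustration of how PC1 can be extracted jointly from the PAN and MS images, the following sketch applies a simple covariance-based PCA to the stacked bands; the array names, sizes, and random stand-in data are illustrative assumptions, not the paper's code:

```python
import numpy as np

def first_principal_component(ms_up, pan):
    """Extract PC1 from the up-sampled MS bands stacked with the PAN image.

    ms_up: (H, W, B) up-sampled multispectral bands
    pan:   (H, W)    panchromatic image
    Returns the (H, W) first principal component.
    """
    h, w, b = ms_up.shape
    # Stack every MS band plus PAN as columns of an (H*W, B+1) data matrix.
    data = np.column_stack([ms_up.reshape(-1, b), pan.reshape(-1, 1)])
    data = data - data.mean(axis=0)          # zero-mean each column
    cov = np.cov(data, rowvar=False)         # (B+1, B+1) covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    pc1 = data @ eigvecs[:, -1]              # project onto largest eigenvector
    return pc1.reshape(h, w)

rng = np.random.default_rng(0)
ms = rng.random((32, 32, 4))
pan = rng.random((32, 32))
pc1 = first_principal_component(ms, pan)
print(pc1.shape)
```

Because PAN enters the PCA alongside the MS bands, the resulting PC1 carries spatial structure shared by all inputs, which is the motivation behind the new coupling mode.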

Overall Process
The GIHS [14,15] fusion algorithm is simple and efficient, with no limitation on the number of bands, and the NSST [16] has an excellent multi-scale decomposition capability and low computational complexity. Therefore, we utilize the GIHS as the CS algorithm and the NSST as the MRA algorithm in the novel coupling model shown in Figure 1c. Figure 2 shows the algorithm flow, and Figure 3 expands the subfigure within the solid red line of Figure 2, showing the specific steps of the fusion of the component and PAN images. Both this paper's method and GIHS essentially add a detail gain to the up-sampled MS image; the difference lies in the source of that gain. GIHS uses the difference map between the PAN image and the multispectral intensity component I as the detail gain. In contrast, this paper's method first extracts the principal component PC1 from the PAN and MS images using the PCA transform, then uses the NSST to extract the new spatial structure component (New_SSC) from PC1 and PAN, and finally uses the difference map between New_SSC and the intensity component I as the detail gain. This improvement means that New_SSC contains more detailed information from the MS and PAN images while retaining some spectral information, thus improving both the spatial and spectral accuracy of the final fused image. For the NSST in the framework, we propose a low-frequency coefficient fusion rule based on a gradient-domain SVD and a bootstrap filter (see Section 2.2 for details) and fuse the high-frequency coefficients using the local spatial frequency, which reflects pixel neighborhood variation (see Section 2.3 for details). The specific steps are as follows: (1) Up-sample the MS image using cubic convolution interpolation to obtain MS*, making it the same size as the PAN image.
(5) The new low-frequency and high-frequency components are then inverse NSST transformed to obtain the primary fusion image (F1), and the gain is computed as GAIN = F1 − I. Finally, the formula Fi = Mi + ωi·GAIN is used to obtain the fused image (F), where the detail modulation coefficient (ω) is set to 1.
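Step (5) amounts to a per-band detail injection. A minimal numpy sketch, with a random array standing in for the inverse-NSST result F1 and the band mean used as the GIHS intensity I (names and data are illustrative assumptions):

```python
import numpy as np

def gihs_inject(ms_up, f1, omega=1.0):
    """Inject the detail gain into each up-sampled MS band: Fi = Mi + w * (F1 - I).

    ms_up: (H, W, B) up-sampled MS bands Mi
    f1:    (H, W)    primary fusion image (here, the inverse-NSST result)
    The intensity I is taken as the mean of the MS bands, as in GIHS.
    """
    intensity = ms_up.mean(axis=2)           # I: mean of the MS bands
    gain = f1 - intensity                    # GAIN = F1 - I
    return ms_up + omega * gain[..., None]   # broadcast the gain over all bands

rng = np.random.default_rng(1)
ms = rng.random((16, 16, 4))
f1 = rng.random((16, 16))
fused = gihs_inject(ms, f1)
print(fused.shape)
```

Note that with ω = 1 the intensity of the fused result equals F1 exactly, which is why the quality of New_SSC directly determines the spatial quality of the output.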



Low-Frequency Fusion Rules
The low-frequency coefficients contain the primary information of the original images. Takeda [17] proposed an SVD of the image in the gradient domain. For an image P, the steps of the gradient-domain SVD are as follows: (1) Calculate the gradients in the row and column directions of the image P. (2) Form the local gradient values into an N × 2 matrix (G), with N referring to the number of local image elements; row i of G holds the gradient values of image P in the row and column directions at image element i. (3) Perform a singular value decomposition G = U·S·V^T to obtain two singular values, λ1 and λ2, where U is an N × N orthogonal matrix, S is an N × 2 matrix containing the singular values λ1 and λ2 on its diagonal, and V is a 2 × 2 orthogonal matrix. The singular values λ1 and λ2 reflect the energy change of G in the eigenvector directions, and their magnitudes have different characteristics in smooth regions, along boundaries, in regions consistent with the texture direction, and in richly detailed regions. Based on this, Ming Yin [16] proposed a new image structure descriptor, lsd(i) = λ1(i) + λ2(i), and demonstrated its ability to reflect the basic local structural information of the image. In this paper, a low-frequency coefficient fusion rule based on the SVD and a bootstrap filter is proposed, as follows.
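Steps (1)-(3) and the descriptor lsd(i) can be sketched as follows; this is a minimal illustration in which the window size and the gradient scheme are assumptions, not the paper's exact choices:

```python
import numpy as np

def local_structure_descriptor(img, win=3):
    """Compute lsd(i) = lambda1(i) + lambda2(i) from the gradient-domain SVD."""
    gx = np.gradient(img, axis=1)            # row-direction gradient
    gy = np.gradient(img, axis=0)            # column-direction gradient
    h, w = img.shape
    r = win // 2
    lsd = np.zeros_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            # Collect the local gradients into an N x 2 matrix G.
            sl = (slice(max(i - r, 0), min(i + r + 1, h)),
                  slice(max(j - r, 0), min(j + r + 1, w)))
            G = np.column_stack([gx[sl].ravel(), gy[sl].ravel()])
            s = np.linalg.svd(G, compute_uv=False)  # two singular values
            lsd[i, j] = s[0] + s[1]
    return lsd

rng = np.random.default_rng(2)
img = rng.random((12, 12))
d = local_structure_descriptor(img)
print(d.shape)
```

On a perfectly smooth patch both singular values vanish, so the descriptor is near zero; it grows along edges and in textured regions, which is what makes it usable as a weight for the low-frequency fusion.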
First, calculate the local structure descriptors (lsd_PC, lsd_PAN) of the low-frequency components L_PC and L_PAN, and determine the initial weight matrices weight_PC and weight_PAN by comparing their magnitudes.
Then, process the weight matrices using bootstrap filtering to enhance their spatial continuity: use PC1* and PAN* as the guide images, and apply the bootstrap filtering to weight_PC and weight_PAN, respectively.
The low-frequency coefficient fusion rule based on SVD and bootstrap filter can be written as:
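Since the equation itself is not reproduced in this excerpt, the following sketch only illustrates the general shape of such a rule: descriptor comparison yields binary weights, which are smoothed before blending. All names are illustrative, and a plain box filter stands in for the bootstrap filter [13]:

```python
import numpy as np

def box_filter(img, r=2):
    """Mean filter; a simple stand-in for the guided smoothing of the weights."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for di in range(k):
        for dj in range(k):
            out += pad[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (k * k)

def fuse_low(l_pc, l_pan, lsd_pc, lsd_pan):
    """Blend low-frequency components with descriptor-driven, smoothed weights."""
    w_pc = (lsd_pc >= lsd_pan).astype(float)   # initial binary weight map
    w_pc = box_filter(w_pc)                    # smooth for spatial continuity
    w_pc = np.clip(w_pc, 0.0, 1.0)
    return w_pc * l_pc + (1.0 - w_pc) * l_pan

rng = np.random.default_rng(3)
shape = (16, 16)
l_pc, l_pan = rng.random(shape), rng.random(shape)
lsd_pc, lsd_pan = rng.random(shape), rng.random(shape)
fused = fuse_low(l_pc, l_pan, lsd_pc, lsd_pan)
print(fused.shape)
```

Because the weights are a convex combination after smoothing, the fused low-frequency value always lies between the two source values at every pixel.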

High-Frequency Fusion Rules
After the NSST decomposition, each source image yields a series of high-frequency sub-band images. The high-frequency coefficients at the different scales of the NSST provide rich edge and texture information about the source images, and the absolute values of the coefficients are larger where the edge and texture features are more pronounced [18]. Therefore, the larger absolute value is usually used as the high-frequency coefficient selection rule. However, this rule ignores the correlation between neighboring pixels and may introduce noise into the fused image. The local spatial frequency (LSF) reflects the activity of a pixel neighborhood: the larger the LSF value, the more active the pixels in the local region. Therefore, the LSF is used to fuse the high-frequency coefficients. The LSF is computed from LRF and LCF, which denote the image's local row frequency and column frequency over an M × N neighborhood, and the LSF-based high-frequency coefficient selection rule then keeps, at each position, the coefficient whose neighborhood is more active.
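As an illustration, the selection rule can be sketched with the standard spatial-frequency definitions, which are assumed here since the paper's equations are not reproduced in this excerpt; all names are illustrative:

```python
import numpy as np

def local_spatial_frequency(img, r=1):
    """LSF = sqrt(LRF^2 + LCF^2) over a (2r+1) x (2r+1) neighborhood."""
    rf2 = np.zeros_like(img, dtype=float)    # squared row-direction differences
    cf2 = np.zeros_like(img, dtype=float)    # squared column-direction differences
    rf2[:, 1:] = (img[:, 1:] - img[:, :-1]) ** 2
    cf2[1:, :] = (img[1:, :] - img[:-1, :]) ** 2
    k = 2 * r + 1

    def local_mean(x):
        pad = np.pad(x, r, mode='edge')
        out = np.zeros_like(x, dtype=float)
        for di in range(k):
            for dj in range(k):
                out += pad[di:di + x.shape[0], dj:dj + x.shape[1]]
        return out / (k * k)

    return np.sqrt(local_mean(rf2) + local_mean(cf2))

def fuse_high(h_pc, h_pan):
    """Select the coefficient whose neighborhood is more active (larger LSF)."""
    pick_pc = local_spatial_frequency(h_pc) >= local_spatial_frequency(h_pan)
    return np.where(pick_pc, h_pc, h_pan)

rng = np.random.default_rng(4)
h_pc, h_pan = rng.random((16, 16)), rng.random((16, 16))
hf = fuse_high(h_pc, h_pan)
print(hf.shape)
```

Unlike the plain max-absolute rule, the LSF comparison is neighborhood-based, so an isolated noisy coefficient is less likely to win the selection.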

Evaluation Metrics
The purpose of fusion is to create a synthetic image that resembles reality. Ranchin [19] stated that the fused image should be as similar as possible to a high-resolution multispectral image obtained from the same sensor. To evaluate the performance of a given fusion method, there are two main techniques: evaluation with references and evaluation without references.

Evaluation Metrics with References
The evaluation mode with references down-samples the original MS and PAN images using the cubic convolution method (the down-sampling factor is given by the resolution ratio of the MS image to the PAN image), and the down-sampled images are fused. In this way, the original MS image serves as the reference image for assessing image quality, and the method can be evaluated using full-reference metrics. The image quality assessment indexes used in this paper are the average gradient (AG), structural similarity (SSIM), correlation coefficient (CC), universal image quality index (UIQI) [20], spectral angle mapper (SAM) [21], and erreur relative globale adimensionnelle de synthèse (ERGAS) [22]. AG measures the spatial quality of the fused image, and a larger AG value indicates a clearer image. SSIM indicates the structural similarity between two images; a higher SSIM value indicates that the structure of the fused image is more similar to that of the reference image and that its spatial quality is better. CC indicates the degree of correlation between the two images. UIQI evaluates the degree of structural preservation, and its optimal value is 1. SAM reflects the spectral distortion between the reference image and the fused image, and a smaller SAM value indicates better spectral quality. ERGAS reflects the overall quality of the fused image, and a smaller value indicates better quality.
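Two of these metrics can be sketched directly from their standard definitions. This is an illustrative sketch, not the paper's implementation; the per-pixel spectral angle is used for SAM, and the PAN/MS resolution ratio in ERGAS is assumed to be 4:

```python
import numpy as np

def sam(ref, fus, eps=1e-12):
    """Mean spectral angle (degrees) between (H, W, B) reference and fused images."""
    dot = np.sum(ref * fus, axis=2)
    denom = np.linalg.norm(ref, axis=2) * np.linalg.norm(fus, axis=2) + eps
    ang = np.arccos(np.clip(dot / denom, -1.0, 1.0))
    return np.degrees(ang.mean())

def ergas(ref, fus, ratio=4.0):
    """ERGAS: 100/ratio * sqrt(mean over bands of RMSE_b^2 / mean_b^2)."""
    terms = []
    for b in range(ref.shape[2]):
        rmse2 = np.mean((ref[..., b] - fus[..., b]) ** 2)
        terms.append(rmse2 / (np.mean(ref[..., b]) ** 2))
    return 100.0 / ratio * np.sqrt(np.mean(terms))

rng = np.random.default_rng(5)
ref = rng.random((16, 16, 4)) + 0.5   # keep band means away from zero
print(sam(ref, ref), ergas(ref, ref))
```

An ideal fusion result reproduces the reference exactly, in which case both SAM and ERGAS are zero, matching the "smaller is better" interpretation above.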

Evaluation Metrics without References
The evaluation mode without references fuses the original MS and PAN images directly. There is no reference image against which to evaluate the fusion results in this mode, so the method is evaluated using comprehensive no-reference indexes. The spatial information of the PAN image is used to evaluate the spatial distortion index (D_s) of the fused image, the spectral information of the MS image is used to evaluate the spectral distortion index (D_λ), and the hybrid quality with no reference (HQNR) index is calculated from both images [23]. The smaller the values of D_s and D_λ, the smaller the spatial and spectral distortion of the fused image, with the best value being 0. The larger the value of HQNR, the higher the overall quality of the fused image, with the best value being 1. These indexes can evaluate the performance of different fusion methods without reference to real images.
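HQNR combines the two distortion indexes into one score. A minimal sketch, assuming the common multiplicative form with both exponents equal to 1:

```python
def hqnr(d_lambda, d_s, alpha=1.0, beta=1.0):
    """HQNR = (1 - D_lambda)^alpha * (1 - D_s)^beta; the best value is 1."""
    return (1.0 - d_lambda) ** alpha * (1.0 - d_s) ** beta

# A distortion-free result (D_lambda = D_s = 0) scores the maximum of 1.
print(hqnr(0.0, 0.0))
```

This makes explicit why the best HQNR value is 1: it is reached only when both the spectral and the spatial distortion indexes are 0.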

Experiment Preparation
In order to verify the reliability and generalizability of the method, fusion experiments were conducted on images of different feature types from four satellites, as shown in Table 1. The four satellites were GeoEye-1 (GE-1), WorldView-4 (WV-4), Gaofen-7 (GF-7), and Gaofen Multi-Mode (GFDM). The spatial resolutions of the PAN images were all at the sub-meter scale, between 0.31 and 0.8 m, and those of the MS images were at the meter scale, between 1.24 and 3.2 m. The feature types of the four scenes were urban, plants and water, agricultural land, and desert, covering features that frequently appear during satellite observation of the Earth. The PAN image block size was 2048 pixels, and the MS image block size was 512 pixels. The GE-1 and WV-4 data were from a standard fusion dataset [24], the GF-7 and GFDM data were from the China Resource Satellite observation data, and each dataset was preprocessed using an exact alignment method [25,26]. In the subsequent experiments, for the experimental mode with references, the MS images were down-sampled to 128 × 128 pixels, the PAN images were down-sampled to 512 × 512 pixels, and the original MS images were used as the reference data. For the experimental mode without references, the experiments were performed with the original-sized data, and the different methods were evaluated using the no-reference indexes.

Experimental Results with References
The test in this subsection was conducted using the mode with references to compare the effects of the different methods. Figure 4 shows the fusion images of GE-1 urban features. Figure 4a shows a down-sampled PAN image; Figure 4b shows an up-sampled version of the down-sampled MS image, denoted the EXP (expanded) image; Figure 4c shows the original MS image, which was used as the reference image and is denoted the GT (ground truth) image; and Figure 4d-i show the results of the different fusion algorithms. The subsequent experiments in this section were set up in the same manner. The figure shows that all the methods successfully fuse the PAN and MS images. However, the results of the GIHS, Brovey, and GS methods exhibit some spectral distortion, while the spectra of the SFIM, GS, and this paper's methods are better maintained.
Furthermore, in terms of detail, accuracy was better retained by the method proposed in this paper. The quantitative evaluation results in Table 2 show that the AG and SSIM of this method were the highest among the compared methods, indicating the best spatial detail retention, and that it also performed best on the UIQI and ERGAS indexes, indicating an excellent spectral retention ability. The best CC and SAM values were obtained by the GS and SFIM methods, respectively. On the whole, however, the best results were obtained by the method proposed in this paper.
The fusion results for the WV-4 images with the different algorithms are shown in Figure 5; the scene mainly contains plants and water. The figure shows that none of the methods exhibit an obvious color bias, but the sharpness of the SFIM result is significantly lower than that of the other methods, while the result of the proposed method is the sharpest. The quantitative results in Table 3 show that the proposed method achieved the highest AG value and the SFIM method the lowest, which is consistent with the visual results. The proposed method also performed best on the CC and UIQI indexes, whereas the SFIM method obtained the best SAM and ERGAS values, indicating the best spectral retention ability on these data. On the whole, however, the proposed method produced the best results.
Figure 6 shows the fusion results for the GF-7 images, a scene that is mainly farmland. For this scene, the GIHS, Brovey, GS, and PCA methods show an obvious color bias, especially the GIHS and Brovey methods: the GIHS and Brovey results are reddish, while the GS and PCA results are greenish. In contrast, the SFIM method and the proposed method preserve the overall spectrum well. The quantitative evaluation results in Table 4 show that the proposed method obtained the best SSIM, CC, UIQI, and SAM values, while the SFIM method obtained the best AG and ERGAS values, which is consistent with the visual results.
Figure 7 shows the fusion results for the GFDM images, a scene consisting mainly of desert. The figure shows that the color bias of the GIHS, Brovey, and GS methods is severe, while the PCA method performs better for the desert scene. The results of the SFIM method and the proposed method are relatively good, and the sharpness of the proposed method is significantly higher than that of the other methods. This is confirmed by the quantitative evaluation results in Table 5: the proposed method obtained the best AG and SSIM values, while the SFIM method obtained better CC, SAM, and ERGAS values, indicating its better spectral retention ability.
In summary, the proposed method achieved better results than the comparison methods for the four scenes from the GE-1, WV-4, GF-7, and GFDM satellites (urban, plants and water, farmland, and desert), which demonstrates its good universality and generality.

Discussion
As demonstrated by the experimental results above, the proposed method achieved good results under both the evaluation system with references and the evaluation system without references and was clearly better than the comparison methods, especially in the indexes measuring the retention of spatial structure information. Conventional spectral and spatial methods mainly use color space transformations and similar techniques to obtain the spatial structure components from the MS image alone. In contrast, the method proposed in this paper uses the PCA transformation to jointly extract the spatial structure components from the PAN and MS images, which better preserves and fuses the available spatial information; the experiments above verified this point, showing that the proposed method retained more spatial detail than the conventional methods. Compared with a single spectral method or a single spatial method, the proposed method combines the advantages of the two, and the optimized coupling method improves both the spatial and the spectral quality of the fused image. In this paper, the PCA technique was used to concentrate the part of the spatial information shared by the bands in the first principal component, obtaining the spatial components through a linear transformation of the data and making full use of the spatial information acquired across all the bands; on this basis, the spectral information extracted from the MS image was fused in to obtain the final fused image. However, because the method uses the PCA transformation to extract the spatial structure information, and this transformation concentrates only the main information in the first component, part of the spatial structure information is inevitably lost. In the future, we could study how to use deep learning to extract the spatial structure information from the original image and then inject it into the low-frequency spectral component to avoid interference from human factors.

Conclusions
For the fusion of PAN and MS images, a fusion framework combining GIHS, NSST, and PCA was proposed in this paper. The GIHS method was adopted for its concise formulas and high execution efficiency, with no limitation on the number of bands of the input data. The constructed fusion algorithm incorporates more of the spatial structure information of the MS and PAN images while retaining some of the spectral information of the MS image. PCA is applied to each band of the PAN and MS images to obtain the first principal component, and the intensity of the MS image is calculated.

Figure 1 .
Figure 1. Hybrid model of the CS method and MRA method.


(2) Calculate the intensity component I according to Equation (1), perform the PCA transformation on the combination of the MS* and PAN images, extract the first principal component PC1, and histogram match the PAN image and PC1 to I to obtain PAN* and PC1*. (3) Perform NSST decomposition of PC1* and PAN* to obtain the low-frequency components (L_PC, L_PAN) and the high-frequency components (H_PC^{j,k}, H_PAN^{j,k}), respectively. (4) Fuse the low-frequency coefficients to obtain the new low-frequency component (L_F), and fuse the high-frequency coefficients to obtain the new high-frequency components (H_F^{j,k}).

Figure 2 .
Figure 2. Flowchart of the proposed method.

Figure 3 .
Figure 3. Fusion flowchart of PC1 and the PAN image using the NSST transform.



Figure 4 .
Figure 4. Fusion results for GE-1 images using different methods.


Figure 5 .
Figure 5. Fusion results for the WV-4 images using different methods.


Figure 6 .
Figure 6. Fusion results for the GF-7 images using different methods.


Table 2 .
Objective assessment indexes of the GE-1 image fusion results.
Bold indicates best results.


Table 3 .
Objective assessment indexes of the WV-4 image fusion results.

Table 4 .
Objective assessment indexes of the GF-7 image fusion results.

Table 5 .
Objective assessment indexes of the GFDM image fusion results.

Table 6 .
Objective assessment indexes of fusion results without references.