Image Fusion for High-Resolution Optical Satellites Based on Panchromatic Spectral Decomposition

Ratio transformation methods are widely used for image fusion of high-resolution optical satellites. The premise for using the ratio transformation is that there is a zero-bias linear relationship between the panchromatic band and the corresponding multi-spectral bands. In reality, however, there are bias terms and residual terms with large values, depending on the sensors, the spectral response ranges, and the land-cover types. To address this problem, this paper proposes a panchromatic and multi-spectral image fusion method based on panchromatic spectral decomposition (PSD). The low-resolution panchromatic and multi-spectral images are used to solve for the proportionality coefficients, the bias coefficients, and the residual matrices. These coefficients are then substituted into the high-resolution panchromatic band, which is decomposed into the high-resolution multi-spectral bands. Experiments show that the method produces fused images with high color fidelity and sharpness, that it is robust to different sensors and land-cover types, and that it can be applied to the panchromatic and multi-spectral fusion of high-resolution optical satellites.


Introduction
Constrained by the trade-off between spectral resolution and spatial resolution, existing optical satellites generally provide high-resolution panchromatic images but low-resolution multi-spectral images [1]. Panchromatic images provide the overall spatial structure information, which describes the structural details of features. Multi-spectral images provide spectral information about the features, which facilitates feature recognition, classification, and interpretation. The spatial and spectral structures are both important components of remote-sensing information. Panchromatic and multi-spectral image fusion technology can combine the advantages of both components to obtain high-resolution multi-spectral images [2].
Fusion methods for panchromatic and multi-spectral images can be roughly divided into coefficient enhancement methods and component transformation methods. The coefficient enhancement methods sharpen the multi-spectral bands through the panchromatic band. Typical methods include Brovey [3], SFIM (smoothing filter-based intensity modulation) [4], SVR (synthetic variable ratio) [5], UNB_Pansharp (University of New Brunswick Pansharp) [6], etc. The Brovey algorithm uses a part of the panchromatic image as a multi-spectral band, which is simple and easy to understand; however, the fusion results may suffer from serious color distortion. SFIM, SVR, and UNB_Pansharp share the same fusion formula, sharpening the multi-spectral bands by the ratio of the original panchromatic image to a synthesized low-resolution panchromatic image.

Relationship between Panchromatic and Multi-Spectral Pixels
The ratio transformation methods belong to the coefficient enhancement methods, which are commonly used in the field of high-resolution optical remote sensing. Their theoretical basis is that there is a strong correlation between panchromatic and multi-spectral images acquired at the same time [18]. The ratio transformation methods consider that the ratio of the original panchromatic image to the degraded panchromatic image is equal to the ratio of the fused image to the up-sampled multi-spectral image [19], which can be described as follows:

\frac{\mathrm{Pan}_{HQ}(i,j)}{\mathrm{Pan}_{LQ}(i,j)} = \frac{\mathrm{MS}^{(g)}_{HQ}(i,j)}{\mathrm{MS}^{(g)}_{LQ}(i,j)},  (1)

where Pan_HQ is the high-quality panchromatic image (original panchromatic image), Pan_LQ is the low-quality panchromatic image (degraded panchromatic image), MS_HQ is the high-quality multi-spectral image (fused image), MS_LQ is the low-quality multi-spectral image (up-sampled multi-spectral image), and i, j, and g are the indices of the row, column, and band, respectively. The ratio transformation methods are devoted to constructing a reasonable Pan_LQ, which can be obtained in two ways. One approach is band weighting of the multi-spectral bands, such as UNB_Pansharp and its improved methods [20]. The other approach is panchromatic image degrading, such as SFIM and its improved methods [21].
The correctness of Equation (1) is based on the idea that there is a zero-bias linear relationship between panchromatic and multi-spectral images. However, this premise is not accurate enough in practice. A set of Beijing-2 data is taken as an example. Beijing-2 has a panchromatic band (Pan: 450-650 nm) and four multi-spectral bands (B1: 440-510 nm, B2: 510-590 nm, B3: 600-670 nm, B4: 760-910 nm). Figure 1 records the digital number (DN) distribution of the panchromatic band and its corresponding multi-spectral bands. The distribution of bands 1-3 is more linear than that of band 4, because the spectral ranges of bands 1-3 overlap the panchromatic band, whereas band 4 does not. It can be seen from Figure 1 that the panchromatic and multi-spectral pixels are correlated, but do not follow a zero-bias linear relationship. Therefore, the relationship between the panchromatic image and a single multi-spectral band is modified as

\mathrm{Pan} = k^{(g)} \mathrm{MS}^{(g)} + b^{(g)} T + E^{(g)},  (2)

where the superscript (g) denotes the band index, Pan is the panchromatic image, MS^(g) is the multi-spectral band, k^(g) is the proportionality coefficient, b^(g) is the bias coefficient, T is a matrix of the same size as Pan with all elements equal to 1, and E^(g) is the residual matrix. From Equation (2), a single multi-spectral band with the same resolution can be decomposed from the panchromatic image as follows:

\mathrm{MS}^{(g)} = \frac{\mathrm{Pan} - b^{(g)} T - E^{(g)}}{k^{(g)}}.  (3)
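The difference between the zero-bias ratio assumption and the biased linear model of Equation (2) can be illustrated numerically. The sketch below fits both models to synthetic Pan-MS data with numpy; all values and variable names are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic multi-spectral band and a panchromatic band that follows
# Pan = k * MS + b * T + E (Equation (2)) with a non-zero bias b.
ms = rng.uniform(100, 900, size=(64, 64))     # one MS band (DN values)
k_true, b_true = 0.8, 50.0
residual = rng.normal(0, 5, size=ms.shape)    # residual matrix E
pan = k_true * ms + b_true + residual

# Zero-bias ratio model: Pan ~ k * MS (forced through the origin).
k_ratio = (pan * ms).sum() / (ms * ms).sum()
err_ratio = np.abs(pan - k_ratio * ms).mean()

# Biased linear model: Pan ~ k * MS + b (Equation (2) without E).
A = np.stack([ms.ravel(), np.ones(ms.size)], axis=1)
(k_fit, b_fit), *_ = np.linalg.lstsq(A, pan.ravel(), rcond=None)
err_linear = np.abs(pan - (k_fit * ms + b_fit)).mean()

print(err_ratio, err_linear)  # the biased model fits much more tightly
```

With a bias of 50 DN, forcing the fit through the origin leaves a large systematic error, which is exactly the inaccuracy Equation (2) removes.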
Compared with the existing ratio transformation methods, the PSD algorithm adds the bias term and the residual term into the Pan-MS relationship model, which enhances the accuracy. In addition, the ratio transformation methods add the same high-frequency information into different bands, which is an important cause of color distortion. The PSD algorithm generates the fused image by converting the panchromatic image directly, which avoids this problem effectively.

Linear Fitting of Panchromatic and Multi-Spectral Bands
The proportionality coefficient k^(g) and the bias coefficient b^(g) reflect the overall linear relationship between the panchromatic band and the corresponding multi-spectral band. This relationship depends on only three factors: (1) the response spectra of the panchromatic band and the corresponding multi-spectral band; (2) the land-cover types and their proportions in the image; and (3) the solar irradiance during imaging. The image degradation model [22] involves none of these factors, which means that resolution differences do not change the proportionality and bias coefficients. Therefore, k^(g) and b^(g) can be solved with the low-resolution bands.
The original panchromatic image (Pan_HR) needs to be down-sampled to match the size of the original multi-spectral image (MS_LR). To reduce the high-pass effect of down-sampling, Pan_HR is blurred first. Since the typical Pan-MS resolution ratio is 1:4, a 5 × 5 mean filter is chosen to blur Pan_HR. Subsequently, the blurred Pan_HR is down-sampled to obtain Pan_LR. After these processes, the pixels at the same positions in Pan_LR and MS_LR constitute a set of candidate samples.
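A minimal numpy sketch of this degradation step, assuming reflection padding at the borders (the paper does not specify the border handling):

```python
import numpy as np

def mean_filter(img, size):
    """Mean filtering via a box kernel; edges padded by reflection."""
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for di in range(size):
        for dj in range(size):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (size * size)

def degrade_pan(pan_hr, ratio=4):
    """Blur with a 5x5 mean filter, then down-sample by the Pan-MS ratio."""
    blurred = mean_filter(pan_hr, 5)
    return blurred[::ratio, ::ratio]

pan_hr = np.arange(16 * 16, dtype=float).reshape(16, 16)
pan_lr = degrade_pan(pan_hr)
print(pan_lr.shape)  # (4, 4)
```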
Due to the spatial correlation of remote-sensing images, there is redundant information between adjacent pixels. Having all pixels participate in the fit would greatly increase the computational cost; therefore, samples are taken from every 10th row and every 10th column. In addition, the DN values of standard remote-sensing data have a numerical limit. For example, Beijing-2 data are stored in 10 bits, so the maximum DN value is 1023, which means that overexposed objects take the value 1023. These saturated pixels do not conform to the linear relationship described in Equation (2); thus, they are treated as gross errors and rejected. After collecting the valid samples, the proportionality and bias coefficients can be fitted by the least-squares method (LSM) [8].
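The sampling, saturation rejection, and least-squares fit can be sketched as follows (numpy, synthetic noiseless data; the helper name and test values are illustrative assumptions):

```python
import numpy as np

def fit_pan_ms(pan_lr, ms_lr, step=10, saturation=1023):
    """Least-squares fit of Pan = k * MS + b on a thinned, unsaturated sample."""
    pan_s = pan_lr[::step, ::step].ravel()   # every 10th row and column
    ms_s = ms_lr[::step, ::step].ravel()
    # Reject saturated (overexposed) pixels as gross errors.
    valid = (pan_s < saturation) & (ms_s < saturation)
    A = np.stack([ms_s[valid], np.ones(int(valid.sum()))], axis=1)
    (k, b), *_ = np.linalg.lstsq(A, pan_s[valid], rcond=None)
    return k, b

rng = np.random.default_rng(1)
ms = rng.uniform(0, 1200, size=(200, 200))
pan = np.minimum(0.9 * ms + 30, 1023)  # clip at the 10-bit limit
k, b = fit_pan_ms(pan, ms)
print(k, b)
```

Because the unsaturated samples are exactly linear here, the fit recovers the simulated coefficients k = 0.9 and b = 30.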

Estimation of the Residual Matrix
Based on the above analyses, the panchromatic and multi-spectral bands at different resolutions satisfy the following relationships:

\mathrm{Pan}_{HR} = k^{(g)} \mathrm{MS}^{(g)}_{HR} + b^{(g)} T + E^{(g)}_{HR},  (4)

\mathrm{Pan}_{LR} = k^{(g)} \mathrm{MS}^{(g)}_{LR} + b^{(g)} T + E^{(g)}_{LR}.  (5)

The high-resolution residual E^(g)_HR cannot be observed directly, so it is replaced by the up-sampled low-resolution residual E^(g)_LR↑. The fused image (Fusion) can then be expressed as

\mathrm{Fusion}^{(g)} = \frac{\mathrm{Pan}_{HR} - b^{(g)} T - E^{(g)}_{LR\uparrow}}{k^{(g)}} = \mathrm{MS}^{(g)}_{HR} + \Delta^{(g)},  (6)

where \Delta^{(g)} = (E^{(g)}_{HR} - E^{(g)}_{LR\uparrow}) / k^{(g)} is the error matrix introduced by replacing E^(g)_HR with E^(g)_LR↑.

No matter what the resolution is, the residual matrix E can be divided into two parts: one part causes color distortion, and the other causes spatial distortion. Since E^(g)_HR and E^(g)_LR↑ differ only in their original resolution, the color distortion effects caused by them are the same and cancel out in Δ^(g), while E^(g)_HR − E^(g)_LR↑ retains only some high-frequency spatial information. These analyses illustrate that the existence of Δ^(g) causes spatial distortion, but not color distortion. The spatial distortion appears as jagged details, especially in high-gradient areas, because E^(g)_LR↑ is not smooth enough after up-sampling. To avoid or weaken this spatial distortion, smooth filtering is performed on E^(g)_LR↑ to make the edges smoother. The final fused image is thus expressed as

\mathrm{Fusion}^{(g)} = \frac{\mathrm{Pan}_{HR} - b^{(g)} T - h * E^{(g)}_{LR\uparrow}}{k^{(g)}},  (7)

where the convolution kernel h is the 3 × 3 mean filter kernel.
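Under the model above, the decomposition of one band can be sketched with numpy on synthetic data. The nearest-neighbour up-sampler and the toy high-resolution panchromatic stand-in are illustrative assumptions, not the paper's exact resampling:

```python
import numpy as np

def upsample_nn(img, ratio):
    """Nearest-neighbour up-sampling (stand-in for the paper's up-sampler)."""
    return np.repeat(np.repeat(img, ratio, axis=0), ratio, axis=1)

def mean3(img):
    """3x3 mean filter h used to smooth the up-sampled residual."""
    pad = np.pad(img, 1, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += pad[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / 9.0

def psd_band(pan_hr, ms_lr, pan_lr, k, b, ratio=4):
    """Decompose the HR panchromatic band into one HR multi-spectral band."""
    e_lr = pan_lr - k * ms_lr - b           # low-resolution residual matrix
    e_up = mean3(upsample_nn(e_lr, ratio))  # E_LR up-sampled and smoothed
    return (pan_hr - b - e_up) / k

rng = np.random.default_rng(2)
ms_lr = rng.uniform(100, 900, size=(25, 25))
pan_lr = 0.8 * ms_lr + 40 + rng.normal(0, 3, ms_lr.shape)
pan_hr = upsample_nn(pan_lr, 4)  # toy high-resolution stand-in
fused = psd_band(pan_hr, ms_lr, pan_lr, k=0.8, b=40)
```

On this toy input, the fused band stays close to the up-sampled multi-spectral band, with the residual term carrying the remaining detail.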

Constraints on Extreme Values
Saturated pixels do not reflect the true spectral information of the objects. In the fitting process, saturated pixels are already excluded. However, the panchromatic band decomposition does not consider whether a pixel is saturated. In this case, errors easily occur in high-DN areas, which enlarge the dynamic range of the fused image and cause global color deviation. To avoid this, the color distortion caused by the dynamic range change can be attenuated by an extreme value constraint. The up-sampled multi-spectral band (MS_LR↑) is treated as the standard: the maximum value of each line in the fused image cannot be larger than the maximum value of the corresponding line in the multi-spectral image, and the minimum value of each line in the fused image cannot be less than the minimum value of the corresponding line in the multi-spectral image. The constraint is described as

\mathrm{Fusion}^{(g)}(i,j) \leftarrow \min\left\{ \max\left[ \mathrm{Fusion}^{(g)}(i,j),\ \min_{j}\mathrm{MS}^{(g)}_{LR\uparrow}(i,j) \right],\ \max_{j}\mathrm{MS}^{(g)}_{LR\uparrow}(i,j) \right\},  (8)

where min[·] denotes taking the minimum value and max[·] denotes taking the maximum value over each line.
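Interpreting "each line" as each image row, the constraint reduces to a row-wise clip (a hypothetical helper on toy values, not the paper's code):

```python
import numpy as np

def constrain_extremes(fused, ms_up):
    """Clip each row of the fused band to the row-wise min/max of MS_LR-up."""
    row_min = ms_up.min(axis=1, keepdims=True)
    row_max = ms_up.max(axis=1, keepdims=True)
    return np.clip(fused, row_min, row_max)

ms_up = np.array([[10., 20., 30.],
                  [40., 50., 60.]])
fused = np.array([[5., 25., 99.],
                  [39., 55., 61.]])
print(constrain_extremes(fused, ms_up))
# [[10. 25. 30.]
#  [40. 55. 60.]]
```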

Summary of the PSD Algorithm
The PSD algorithm is summarized in Figure 2, which has five steps: (1) mean filtering and down-sampling the original panchromatic image to obtain the low-resolution panchromatic image with the same size as the original multi-spectral image; (2) obtaining the proportionality coefficient, the bias coefficient, and the low-resolution residual matrix by linear fitting of the low-resolution panchromatic image and the original multi-spectral image; (3) up-sampling and mean filtering the low-resolution residual matrix to replace the high-resolution residual matrix; (4) decomposing the panchromatic image into high-resolution multi-spectral bands; and (5) constraining the extreme value to obtain the final fused image.

Experimental Data
Many quality metrics require a high-resolution multi-spectral image as a reference image [23]. Based on this consideration, the experiments are divided into two parts: simulation data experiments and real data experiments. The simulation data are used for linear fitting analysis, residual matrix analysis, and comparison of different algorithms. The real data are used for algorithm stability analysis to test the stability of PSD algorithm for different sensors.
In simulation data experiments, a set of down-sampled Beijing-2 panchromatic and multi-spectral images is used as simulation data for quantitative analysis, to ensure the reliability of the conclusions. Beijing-2 is a constellation of three push-broom optical remote-sensing satellites, providing panchromatic images with 0.8-m resolution (Pan: 450-650 nm) and multi-spectral images (B1: 440-510 nm, B2: 510-590 nm, B3: 600-670 nm, B4: 760-910 nm). The study area is Kunming, China, centered at 102.6176° E, 25.1246° N. The size of the original panchromatic image is 6000 × 6000, and the size of the original multi-spectral image is 1500 × 1500. Down-sampled by Gaussian pyramid [24], the size of the simulated panchromatic image is 1500 × 1500, and the size of the simulated multi-spectral image is 375 × 375. These simulation data produce a fused image with a size of 1500 × 1500. Taking the original multi-spectral image as the reference image, the fusion result can be quantitatively evaluated.
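The Gaussian-pyramid reduction used to build the simulation data can be approximated with the classic 5-tap binomial kernel; this kernel choice is an assumption, since the paper cites [24] without specifying it. A small numpy sketch:

```python
import numpy as np

def gaussian_pyramid_down(img, levels=2):
    """Approximate Gaussian-pyramid reduction: 5-tap binomial blur + 2x decimation."""
    kernel = np.array([1., 4., 6., 4., 1.]) / 16.0
    for _ in range(levels):
        pad = np.pad(img, 2, mode="reflect")
        # Separable blur: filter rows, then columns.
        blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, pad)
        blurred = np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, blurred)
        img = blurred[::2, ::2]
    return img

# Two levels halve each dimension twice, matching the 4x reduction
# from the 6000 x 6000 panchromatic image to the 1500 x 1500 simulation.
demo = gaussian_pyramid_down(np.ones((16, 16)), levels=2)
print(demo.shape)  # (4, 4)
```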
In real data experiments, this paper selects Beijing-2, SkySat-3, SuperView-1, and ZiYuan-3 as experimental data. The experimental data cover various features such as city, farmland, forest, and mountain. The specific conditions are listed in Table 1.


Analysis of Linear Fitting
The simulated Beijing-2 data are applied for linear fitting analysis. Figure 3 records the relationship between the panchromatic band and multi-spectral band 1 before and after the residual matrix elimination. Table 2 records the linear fitting results, including the proportionality coefficient, the bias coefficient, the coefficient of determination (R²), and the root-mean-square error (RMSE) [25]. R² and RMSE measure the goodness of fit [26]; a higher R² and lower RMSE indicate a better fit.
It can be seen from Figure 3 and Table 2 that, after subtracting the residual matrix from the panchromatic image, the DN values move visibly closer to the fitted straight line. The goodness of fit of all bands is improved, especially that of band 4 (near-infrared, NIR). In addition, the fitting parameters of the different bands are basically unchanged before and after the residual matrix elimination. This experiment demonstrates that the linear relationships are the same at different resolutions; therefore, it is reasonable to use the low-resolution panchromatic and multi-spectral bands to solve for the proportionality and bias coefficients.


Analysis of the Residual Matrix
The simulated Beijing-2 data are applied for residual matrix analysis. Figure 4 presents a set of visual residual matrices, where (a) is the high-resolution residual matrix (E HR ), (b) is the up-sampled low-resolution residual matrix (E LR ↑), and (c) is the difference between E HR and E LR ↑ (E HR − E LR ↑). The root mean square (RMS) is applied to indicate the intensity of the residual matrix, and the RMS values of the above three are recorded in Table 3.
From the comparison of Figure 4 and Table 3, the following conclusions can be drawn: (1) E_HR and E_LR↑ have similar color information, and E_HR − E_LR↑ basically retains only some high-frequency spatial information; (2) the information of E_LR↑ is almost completely included in E_HR, so the residual intensity of E_HR is larger than that of E_LR↑; (3) the correlated parts of E_HR and E_LR↑ cancel out, so the residual intensity of E_HR − E_LR↑ is less than that of E_HR.


Comparison of Different Algorithms
The Beijing-2 simulation data are used to test the performance of the proposed fusion algorithm. The comparison methods include Brovey, GS, PCA, SFIM, and UNB_Pansharp, where Brovey, GS, and PCA are algorithm modules in ENVI (The Environment for Visualizing Images), and UNB_Pansharp is an algorithm module in PCI Geomatica. The results are shown in Figure 5, and the corresponding evaluation parameters are given in Table 4. The quality metrics include RMSE, signal-to-noise ratio (SNR), relative dimensionless global error in synthesis (ERGAS) [27], and correlation coefficient (CC) [28]. The performance of the algorithms is compared by the similarity of the fused image to the reference image: a well-performing algorithm yields smaller RMSE and ERGAS values and larger SNR and CC values.
Figure 5 shows that there is no serious color distortion in any of the fused images. PSD, Brovey, and SFIM have the closest color performance, while the shadows in PCA and GS show some distortion, the colors of SFIM and UNB_Pansharp tend toward blue, and the color of the roof in UNB_Pansharp is too dark. In terms of image sharpness, UNB_Pansharp performs best, with PSD and SFIM second. In terms of feature details, no visible structural variation occurs except for SFIM, whose fused image shows intermittent lines on the highway. This is because the low-resolution panchromatic image of SFIM is generated by smooth filtering, which does not work well on linear objects. Compared with the other algorithms, PSD has the best visual effect in Figure 5.
Brovey can only be applied for three bands; thus, the mean values of the quality metrics are calculated from band 1 to band 3. From the parameters in Table 4, PSD has the best performance, and PCA and SFIM are second, followed by UNB_Pansharp, whereas Brovey and GS are the worst. We note that CC sometimes deviated from the other three metrics. For example, Brovey has bad performance in RMSE, SNR, and ERGAS, but good performance in CC. This is because RMSE, SNR, and ERGAS measure the absolute difference between the fused image and the reference image, while CC measures the relative difference between the two.
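For reference, the four metrics can be sketched with numpy as follows. The ERGAS form with the 100 · (h/l) factor follows the common definition, and is an assumption about the exact variant used in [27]:

```python
import numpy as np

def rmse(fused, ref):
    return float(np.sqrt(np.mean((fused - ref) ** 2)))

def snr(fused, ref):
    """Signal-to-noise ratio in dB, treating fused-minus-reference as noise."""
    noise = np.sum((fused - ref) ** 2)
    return float(10.0 * np.log10(np.sum(ref ** 2) / noise))

def cc(fused, ref):
    """Correlation coefficient between fused and reference bands."""
    return float(np.corrcoef(fused.ravel(), ref.ravel())[0, 1])

def ergas(fused_bands, ref_bands, ratio=4):
    """ERGAS over paired (fused, reference) bands; ratio is the Pan-MS ratio."""
    terms = [(rmse(f, r) / np.mean(r)) ** 2 for f, r in zip(fused_bands, ref_bands)]
    return float(100.0 / ratio * np.sqrt(np.mean(terms)))

rng = np.random.default_rng(3)
ref = rng.uniform(100, 900, size=(64, 64))
fused = ref + rng.normal(0, 5, size=ref.shape)
```

RMSE, SNR, and ERGAS react to the absolute DN difference, while CC measures only the relative (linear) agreement, which explains why CC can rank an algorithm differently from the other three.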
The PSD algorithm still has some shortcomings. Compared with the other three bands, band 4 of the PSD fused image does not perform well. It can be seen from Figure 1, Table 2, and Table 4 that the performance of PSD depends on the correlation between the panchromatic and multi-spectral bands. Since the spectra of the Beijing-2 panchromatic band and multi-spectral band 4 do not overlap, the correlation between the two is relatively low.


Analysis of Algorithm Stability
Figure 6 presents the fusion results of the four different data sets, and Table 5 records their corresponding evaluation parameters. It can be seen in Figure 6 that the colors of the fused images are consistent with their corresponding up-sampled multi-spectral images, and no significant color distortion occurs. Meanwhile, the fused images are sharp enough in visual effect, and there is no obvious spatial structure deformation.
There are no real high-resolution multi-spectral images; thus, the up-sampled multi-spectral images are used as reference images to calculate the evaluation parameters. However, this operation causes the high-frequency information added to the fused images to be treated as noise, so the evaluation parameters are relatively worse than normal. For example, the evaluation parameters of the real Beijing-2 fused image are worse than those of the simulated Beijing-2 fused image. Despite these unfavorable conditions, the four sets of data still achieve good performance: RMSE remains between 13.62 and 28.5, SNR between 15.57 and 23.00, CC between 0.81 and 0.96, and ERGAS between 2.21 and 4.89. In summary, the PSD algorithm is robust to different types of satellites and sensors, and it is suitable for the panchromatic and multi-spectral fusion of high-resolution optical satellites.

Conclusions
This paper proposes a fusion method which can directly convert the panchromatic band into the high-resolution multi-spectral bands by constructing the relationship between the two. Experiments show that the proposed method has good color retention ability. Compared with UNB_Pansharp, RMSE decreased from 13.82 to 7.16, ERGAS decreased from 4.89 to 2.54, SNR increased from 15.41 to 21.13, and CC increased from 0.91 to 0.98. Tests on multiple data sets show that the proposed method adapts well to different sensors and land-cover types. The current limitation of the algorithm is that the fusion performance depends on the correlation between the panchromatic band and the multi-spectral band. Bands 1-3 have strong correlation with the panchromatic band, and the RMSEs of their fusion results are 6.73, 6.59, and 8.17, respectively. Band 4 has a weak correlation with the panchromatic band, and its RMSE increases to 18.94. How to improve the algorithm's adaptation to the Pan-MS correlation is a worthy direction for future work.
