Article

Multi-Source Pansharpening of Island Sea Areas Based on Hybrid-Scale Regression Optimization

Dongyang Fu, Jin Ma, Bei Liu and Yan Zhu
1 School of Electronics and Information Engineering, Guangdong Ocean University, Zhanjiang 524088, China
2 Guangdong Provincial Marine Remote Sensing and Information Technology Engineering Technology Center, Zhanjiang 524088, China
* Authors to whom correspondence should be addressed.
Sensors 2025, 25(11), 3530; https://doi.org/10.3390/s25113530
Submission received: 15 April 2025 / Revised: 21 May 2025 / Accepted: 30 May 2025 / Published: 4 June 2025
(This article belongs to the Section Sensing and Imaging)

Abstract

To meet the demand for high-spatial-resolution data in water color inversion tasks over island sea areas, a feasible solution is to fuse multi-source remote sensing data. However, inherent biases among multi-source sensors and the spectral distortion caused by the dynamic changes of water bodies in island sea areas restrict fusion accuracy, so more precise fusion schemes are needed. This paper therefore proposes a pansharpening method based on Hybrid-Scale Mutual Information (HSMI). First, the method integrates hybrid-scale information into scale regression, which improves the accuracy and consistency of the pansharpening results. Second, it introduces mutual information to quantify the spatial–spectral correlation among multi-source data and thereby balance the fusion representation across the hybrid scales. Finally, the performance of several popular pansharpening methods is compared and analyzed on coupled Sentinel-2 and Sentinel-3 datasets covering typical island and reef waters of the South China Sea. The results show that HSMI enhances the spatial details and edge clarity of islands while better preserving the spectral characteristics of the surrounding sea areas.

1. Introduction

With the advancement of image acquisition technologies, high-precision satellite images play a vital role in various important marine remote sensing applications, including environmental monitoring [1,2], spectral decomposition [3,4], water quality assessment [5,6], and water color retrieval [7,8,9]. However, acquiring satellite remote sensing images with high spatial and spectral attributes through a single sensor is often a challenging task due to the limitations of optical remote sensing sensors [10]. For this reason, multi-source fusion technology has emerged, capable of generating high-quality remote sensing images with high spatial resolution and high spectral information by integrating remote sensing data from different sensors with different resolutions and spectral ranges.
At the same time, many existing Earth observation programs attempt to alleviate these limitations by deploying multiple dedicated satellites to meet specific spatial and spectral requirements. The Sentinel-2 (S2) and Sentinel-3 (S3) satellites exemplify this trend. The S2 satellite [11] carries a Multispectral Imager (MSI) providing 13 spectral bands in the 443–2190 nm wavelength range at spatial resolutions of 10 to 60 m, while the Sentinel-3A and Sentinel-3B satellites [12] both carry the Ocean and Land Colour Instrument (OLCI). The OLCI image consists of 21 spectral channels spanning roughly 400 nm to 1020 nm, and the OLCI product, with its coarse spatial resolution of 300 m, is geared toward the spectral characterization of oceans, inland waters, and coastal areas [13]. However, this also limits the application of OLCI imagery at local scales, particularly for heterogeneous landscape features.
For the localized island-water scenes considered in this paper, the spatial resolution of the Sentinel-3 OLCI images can be improved by fusing them with the spatial detail provided by the Sentinel-2 MSI. It should be noted that marine image fusion differs from terrestrial image fusion because the intrinsic characteristics of water bodies in island waters change constantly. The two images to be fused exhibit non-negligible differences in spectral and spatial characteristics, and inconsistent acquisition times can lead to spatial blurring and spectral distortion. This problem cannot be solved by geographically weighted regression point-to-surface information fusion methods. We therefore favor covariance-guided feature matching for spatial–spectral information fusion to alleviate it. At the same time, the unprecedented cross-sensor Sentinel data also provide an opportunity to address these limitations with pansharpening methods, within a spatial–spectral information fusion framework, from the perspective of injection-gain-based processing.
Classical pansharpening algorithms continue to occupy a significant position in the field, offering notably high computational efficiency. They can be broadly classified into three main categories [14]: component substitution (CS), multi-resolution analysis (MRA), and variational optimization (VO)-based methods. The first, frequently designated the 'spectral class', projects the original MS image into a transform domain to isolate its spatial information and replace it with the PAN image. A considerable number of pioneering pansharpening techniques belong to the CS class, largely because of their ease of implementation; notable examples include Intensity–Hue–Saturation (IHS) [15], Principal Component Analysis (PCA) [16], Gram–Schmidt (GS) spectral sharpening and its adaptive variant [17], and Band-Dependent Spatial Detail (BDSD) [18]. MRA methods are frequently designated the 'spatial class'; these include the additive wavelet luminance proportional (AWLP) method [19], morphological filters (MFs) [20], the generalized Laplacian pyramid (GLP) [21], the morphological pyramid [22], and frame boosting [23]. Compared with component substitution methods, multi-resolution analysis methods typically demonstrate superior spectral fidelity and are relatively unaffected by radiometric differences between multi-source images.
In recent years, VO methods have gained popularity with the development of convex optimization and inverse-problem techniques. These methods employ observation models and sparse representations to construct an energy function, which is then solved with an optimization algorithm to obtain the pansharpened image. The most common VO-based methods are P+XS, regularized solutions of the inverse problem, coupled non-negative matrix factorization (CNMF) [24], and sparse representation. Their computational cost is relatively high, however, as the target image is usually estimated under the assumption of properly co-registered PAN and LRMS images. In contrast to traditional methodologies, deep learning (DL)-based approaches, which learn the mapping from observed inputs to the desired output directly through neural networks, have emerged as a prominent area of interest in recent years; prominent examples include PNN [25] and PanNet [26]. Most of these methods rely on training with large datasets, which is constrained by asynchronous data from different satellites and the scarcity of remote sensing datasets in marine domains. (This manifests as a scarcity of high-quality paired training data in cloudy island marine environments, often accompanied by temporal mismatches.) Furthermore, training deep neural networks requires significant time and poses challenges in finding optimal parameters.
Notwithstanding the favorable outcomes of the aforementioned and associated methodologies, certain challenges persist in the context of highly heterogeneous sensors (e.g., S2/S3) and oceanic island scenes. To illustrate, the texture features derived from OLCI products invariably exhibit some degree of spatial distortion, which may prove detrimental to the resulting pansharpening outcomes. Furthermore, the local water scenes of islands present significant spectral loss issues that require a more precise solution to be addressed concurrently.
The MRA method, by contrast, needs no training data and is more effective than CS at preserving the spectral features of the original MS images. Given the above problems, this paper proposes an MRA method based on scale regression to improve the low spectral retention in island waters with large resolution differences between multi-source sensors: MTF-GLP-HPM-HSMI, a mutual-information hybrid-scale regression method. The hybrid-scale information, comprising the detail image together with the spatial and spectral residuals, is combined and fitted via mutual information to compute the hybrid weighting coefficients and obtain the final pansharpening result. Experimental results show that the proposed MTF-GLP-HPM-HSMI method outperforms various popular MRA methods in island sea application scenarios. The technical route of this paper is shown in Figure 1.
Briefly, the main contributions of this work can be summarized as follows:
1. To address spectral aberrations of water bodies in island waters, we propose a novel MRA pansharpening method (MTF-GLP-HPM-HSMI), which incorporates mixed-scale information (including detail images with spatial and spectral residuals) into scale regression to generate more accurate pansharpening results.
2. By introducing mutual information to calculate the hybrid weighting coefficients, the intrinsic relationship between spatial and spectral information is represented more accurately, achieving a more balanced fusion representation across scales.
3. When fusing S3-OLCI and S2-MSI data, the performance of several popular pansharpening methods is compared and analyzed; HSMI better balances the enhancement of island spatial details and edge clarity with the retention of the spectra of the sea area around the island.

2. Related Work

2.1. Injection Gain

Several algorithms have been developed for the fusion of multi-source satellite remote sensing data [27]. However, the majority are designed for fusing Landsat-8 and Sentinel-2 imagery, or for enhancing auxiliary images using Sentinel satellite data [28]. For instance, Wang et al. [29] extended area-to-point regression kriging (ATPRK) to Landsat-8 and Sentinel-2 image fusion, employing the high-spatial-resolution bands in both images as an auxiliary reference. Similarly, Shao et al. [30] devised a super-resolution network (ESRCNN) specifically for fusing multiple sources of remote sensing data to obtain harmonized remote sensing reflectance products. Pansharpening likewise represents a category of techniques within this image fusion paradigm [31]. Unsupervised deep learning-based pansharpening methods have achieved significant progress in multi-source information fusion; however, their fusion performance is suboptimal in the island water scenarios considered here. Our analysis suggests that the spatial resolution of Sentinel-3 OLCI is relatively low and the spatial information available from its multispectral images in island scenes is limited (islands are small in area, and the coarse resolution leaves few learnable pixels with sparse spatial details in the input images). This restricts the ability of deep networks to learn meaningful spatial–spectral relationships, especially in dynamic marine environments where water bodies exhibit rapid spectral variations. In this context, we adopt an MRA-based injection-gain framework for multi-source fusion while prioritizing the preservation of the spectral characteristics of the original MS images.
Two widely accepted injection-gain models in MRA are high-pass modulation (HPM) [32] and context-based decision (CBD) [33]. In [34], a model based on full-scale (FS) regression is proposed that employs the HRMS and PAN images, rather than their low-resolution (LR) degraded versions, for estimation. Furthermore, in [35], a model based on dual-scale (DS) regression was introduced, in which the injection gain is computed as a linear combination of high-resolution (HR) and low-resolution (LR) projection coefficients; it can be regarded as an extension of the regression-based HPM model, as can MTF-GLP-HPM-R [36], which is based on multiple linear regression. However, most regression-based methodologies are constrained by the unavailability of HR images [34], and the resulting degradation inevitably causes a considerable loss of information, particularly in images with intricate details. This paper therefore investigates mixed-scale regression methods that inject detail gain into HPM models. Firstly, the information in high-resolution data (e.g., PAN images or their details) can be fully exploited to improve estimation accuracy. Secondly, the residual information between the detail images and the HR and LR images is taken into account, and the injection weights are calculated from correlation coefficients to better preserve spectral details.

2.2. Ocean Color Inversion in Island Waters

Satellite remote sensing has significantly propelled the progress of ocean color research. However, in the context of island waters, as discussed in this paper, the optical properties of near-island waters are complex, and their optical characteristics change rapidly. Therefore, such near-island water inversion requires more spectral information, including more spectral channels and higher spectral resolution, while also needing to enhance high-resolution texture spatial features as much as possible. Additionally, factors such as different satellite sensors and cloud interference affect the accuracy of data obtained by satellite sensors, thereby limiting effective sampling of the ocean [37,38]. Currently, popular multi-source remote sensing data fusion methods for water color inversion typically rely on the interpolation and matching of water color products, which limits their application to single products. Meanwhile, methods primarily based on multi-temporal data fusion are mostly suitable for the fusion of inland water bodies [39,40].
To address these challenges, this study takes raw data from the Sentinel series of multiple sensors as input and employs a covariance-guided feature-matching method for spatial–spectral information fusion. Enhancing the spatial details of the S3 multi-band spectral channels preserves more spectral features of islands and adjacent sea areas. This is mainly reflected in considering the spectral residual-scale information generated by the covariance between the detail image $I_{P_L}$ and $\widetilde{MS}_k$, which enables more accurate and comprehensive monitoring of ocean color.
The rest of the paper is structured as follows. In Section 3, we briefly introduce the pansharpening framework based on the injective regression scheme and the HPM injection scheme. Additionally, we present a proposed scheme on mixed-scale mutual information regression for the pansharpening of South China Sea islands from multi-source images. In Section 4, we present the experimental results, including quantitative and visual comparisons between the proposed method and some popular methods. Finally, in Section 5, we conclude the paper.

3. Methodology

In this section, we will review the discussion on the injection gain-based pansharpening framework and detail-scale regression, followed by an introduction to the proposed pansharpening method for island waters, MTF-GLP-HPM-HSMI.

3.1. Injection Gain and Detail Scale Regression

Let $P$ and $MS_k$, with $k$ ranging from 1 to $K$, denote the original PAN and MS images, respectively. Here, $k$ signifies the spectral band index. The pansharpening injection scheme is formulated as follows:

$$\widehat{MS}_k = \widetilde{MS}_k + g_k D = \widetilde{MS}_k + g_k \left(P - P_L\right), \quad k = 1, \ldots, K, \quad (1)$$

where $\widehat{MS}_k$ and $\widetilde{MS}_k$ denote the fused MS image and the LRMS image (obtained by upsampling MS to the PAN scale using a 23-tap interpolation), respectively; $g_k$ represents the injection coefficient for the $k$th band; $D$ is the detail image; and $P_L$ denotes the equivalent low-pass-filtered (LPF) version of $P$ obtained by the generalized Laplacian pyramid (GLP) with an MTF-matched filter.
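To make the injection scheme concrete, the following NumPy sketch implements Equation (1) for a stack of bands. The array shapes, the function name, and the assumption that all inputs are already co-registered at the PAN scale are ours, not the paper's.

```python
import numpy as np

def inject_details(ms_up, pan, pan_lp, gains):
    """Generic injection scheme of Equation (1).

    ms_up  : (K, H, W) MS bands upsampled to the PAN grid (MS~_k)
    pan    : (H, W) PAN image (P)
    pan_lp : (H, W) MTF-matched low-pass PAN (P_L)
    gains  : (K,) per-band injection coefficients g_k
    """
    detail = pan - pan_lp                        # D = P - P_L
    return ms_up + gains[:, None, None] * detail
```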
Next, let us focus on the injection gains and use them to construct the detail-scale regression. The residual between $REF_k$ and $\widehat{MS}_k$ can be expressed as follows:

$$REF_k - \widehat{MS}_k = \left(REF_k - g_k P\right) - \left(\widetilde{MS}_k - g_k P_L\right). \quad (2)$$
That is, according to Wald’s protocol, the MS image generated by fusion should very closely approximate the ground-truth HRMS image, referred to as the reference (REF). To achieve this goal, an injection gain strategy is typically employed (injecting high-frequency details from the panchromatic image into the upsampled low-resolution multispectral image). Therefore, the two parentheses in Equation (2) represent the residuals of the MS and PAN images at the HR and LR scales, respectively. That is, the fusion result should align with the spectral characteristics of the original low-resolution multispectral (LRMS) image, and the spatial resolution of the fusion result should also be close to that of the high-resolution reference image (REF).
If each residual is treated independently as the total estimation loss (i.e., only one of the two is considered) [41], the respective optimal solutions are

$$g_k^{LR} = \arg\min_{g_k} \left\| \widetilde{MS}_k - g_k P_L \right\|_F^2 = \frac{\operatorname{cov}\left(\widetilde{MS}_k, P_L\right)}{\operatorname{cov}\left(P_L, P_L\right)}, \quad (3)$$

$$\hat{g}_k^{HR} = \arg\min_{g_k} \left\| REF_k - g_k P \right\|_F^2 \approx \frac{\operatorname{cov}\left(\widehat{MS}_k, P\right)}{\operatorname{cov}\left(P, P\right)}, \quad (4)$$

where $\operatorname{cov}(A, B)$ denotes the covariance between $A$ and $B$, the optimal gain $g_k$ is obtained as a ratio of covariances, and $\|\cdot\|_F$ denotes the Frobenius norm of a matrix. Therefore, the optimal injection gain at the LR scale can be interpreted as the scalar projection of $\widetilde{MS}_k$ onto $P_L$. At the HR scale, however, ground truth is unavailable in practice, and the fused image approximates the reference image in spatial resolution. By replacing $REF_k$ with $\widehat{MS}_k$, an approximate gain at the HR scale can be derived. It should also be noted that the optimizations in Equations (3) and (4) are band-dependent: each band has its own optimal gain, because spectral responses and noise levels vary across bands and therefore require independent calculation. To streamline the discussion, the specific indices of spectral bands will not be detailed.
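As a hedged illustration of the covariance-ratio gains in Equations (3) and (5), the helpers below use a global scalar covariance; cov2 is reused in later sketches. This is a minimal reading of the formulas, not the authors' reference implementation.

```python
import numpy as np

def cov2(a, b):
    """Scalar covariance between two equally sized images."""
    return float(np.mean((a - a.mean()) * (b - b.mean())))

def gain_lr(ms_up_k, pan_lp):
    """Equation (3): scalar projection of MS~_k onto P_L."""
    return cov2(ms_up_k, pan_lp) / cov2(pan_lp, pan_lp)

def gain_hr_approx(ms_up_k, pan, pan_lp):
    """Equation (5): HR-scale gain after replacing REF_k with MS^_k."""
    return cov2(ms_up_k, pan) / cov2(pan_lp, pan)
```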
By substituting the approximate gain of Equation (4) into Equation (1), a new fused image can be obtained. Repeating the above steps yields an iterative solution for $\hat{g}_k$:

$$\hat{g}_k = \frac{\operatorname{cov}\left(\widetilde{MS}_k, P\right)}{\operatorname{cov}\left(P_L, P\right)}. \quad (5)$$
In essence, however, the above estimates are suboptimal. According to [41], the best estimates are expressed in terms of detail images; therefore, information at both the LR and HR scales should be involved in the estimation, and the optimal solution of Equation (2) can be described as
$$g_k = \frac{\operatorname{cov}\left(\widetilde{MS}_k, D\right)}{\operatorname{cov}\left(P_L, D\right)}. \quad (6)$$
In this work, we have designed an MRA model based on detail-scale regression. Following the HS model [41], the optimality assumption for detail estimation is adopted and supplemented with spectral matching; replacing the detail image $D$ with the full-scale-regressed detail image $I_{P_L}$, Equation (6) is rewritten as

$$g_k = \frac{\operatorname{cov}\left(\widetilde{MS}_k, I_{P_L}\right)}{\operatorname{cov}\left(P_L, I_{P_L}\right)}, \quad (7)$$

where

$$I_{P_L} = \frac{\widetilde{MS}_k}{P_L}\, D_{LMS}. \quad (8)$$

The full-scale-regressed detail image $I_{P_L}$ serves to eliminate radiometric differences between the multispectral (MS) and panchromatic (PAN) images, normalizing the pixel values of the multispectral bands to the radiometric range of the panchromatic image and thereby suppressing spectral distortion in the fused results.

3.2. HPM Injection Scheme

In terms of injection gain, we choose an HPM injection scheme [32,42] that is different from the HS model. Therefore, Equation (1) can be rewritten as follows:
$$\widehat{MS}_k = \widetilde{MS}_k + \frac{\widetilde{MS}_k}{P_L}\left(P - P_L\right) = \widetilde{MS}_k\, \frac{P}{P_L}. \quad (9)$$
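A minimal sketch of the HPM ratio injection in Equation (9) follows; the epsilon guard against near-zero low-pass values (common over dark water pixels) is a practical addition of ours, not part of the paper.

```python
import numpy as np

def hpm_fuse(ms_up_k, pan, pan_lp, eps=1e-6):
    """HPM injection, Equation (9): MS^_k = MS~_k * P / P_L.

    eps is an assumed numerical guard for near-zero P_L pixels.
    """
    return ms_up_k * (pan / np.maximum(pan_lp, eps))
```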
Next, we add detail-scale regression to the HPM injection method. The digital MS or PAN image $D_s^{XR}$ (at either HR or LR scale, denoted XR) captured by sensor $s$ is modeled as the convolution of the sensor's spatial response $S_s^{XR}(x, y)$ with the total energy $\omega_s(x, y)$ captured by the sensor in its spectral band:

$$D_s^{XR}(x, y) = \epsilon_s + k_s \int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} S_s\left(x - \alpha,\, y - \beta\right) \int_{-\infty}^{+\infty} L(\alpha, \beta, \lambda)\, R_s(\lambda)\, d\lambda\, d\alpha\, d\beta = \epsilon_s + k_s\, S_s^{XR}(x, y) \ast \omega_s(x, y), \quad (10)$$

where $\epsilon_s$ and $k_s$ are additive (offset) and multiplicative (gain) constants, respectively; $S_s$ is the spatial response of sensor $s$; $\omega_s$ is the scene radiance integrated over wavelength, weighted by the sensor's relative spectral response $R_s$; and $\ast$ denotes convolution. Since $\epsilon_s$ is usually negligible, it is disregarded in the following.
Based on Equations (9) and (10), $\widetilde{MS}_k$, $P_L$, and $P$ are now expressed as follows:

$$\widetilde{MS}_k = k_{ms}\, S_{ms}^{LR} \ast \omega_{ms}, \quad (11)$$

$$P_L = k_p\, S_{P_L}^{LR} \ast \omega_p, \quad (12)$$

$$P = k_p\, S_P^{HR} \ast \omega_p, \quad (13)$$

where $S_{ms}^{LR}$, $S_{P_L}^{LR}$, and $S_P^{HR}$ denote the spatial responses of the LR-MS image, the LR-PAN image, and the HR-PAN image, respectively, and $\omega_{ms}$ and $\omega_p$ denote the total energy captured by the corresponding sensor over its spectrum.
In this way, we obtain the HR MS image $\widehat{MS}_k$ using the spatial response of the HR PAN, $S_P^{HR}$, and the total energy $\omega_{ms}$:

$$\widehat{MS}_k = k_{ms}\, S_P^{HR} \ast \omega_{ms}. \quad (14)$$
By substituting Equations (11)–(13) into Equation (9) for the HPM injection scheme, another form of $\widehat{MS}_k$ is obtained:

$$\widehat{MS}_k = k_{ms}\left(S_{ms}^{LR} \ast \omega_{ms}\right) \frac{k_p\left(S_P^{HR} \ast \omega_p\right)}{k_p\left(S_{P_L}^{LR} \ast \omega_p\right)}. \quad (15)$$
It can be observed that Equations (14) and (15) differ in variables other than $\widehat{MS}_k$ and can be transformed into each other. We therefore use a tilde to distinguish estimated variables from actual ones. The modified equation is as follows:

$$\widehat{MS}_k = \widetilde{MS}_k\, \frac{\tilde{P}}{\widetilde{P_L}} = k_{ms}\left(S_{ms}^{LR} \ast \omega_{ms}\right) \frac{k_p\left(\tilde{S}_P^{HR} \ast \tilde{\omega}_p\right)}{k_p\left(\tilde{S}_{P_L}^{LR} \ast \tilde{\omega}_p\right)}. \quad (16)$$
As a result, the following equations are obtained:

$$\tilde{S}_P^{HR} = S_P^{HR}, \quad (17)$$

$$\tilde{S}_{P_L}^{LR} = S_{ms}^{LR}. \quad (18)$$
This is because the virtual sensor that ultimately generates the HRMS image should have the same spatial response as the existing PAN sensor, while the construction of $P_L$ is supported by a filter that matches the spatial response of the MS sensor.
Based on [36], the third equation is defined as

$$k_p\, \tilde{\omega}_p = k_{ms}\, \omega_{ms}. \quad (19)$$
Thus, considering Equations (11) and (12), we obtain Equation (20):

$$\widetilde{P_L} = k_p\, \tilde{S}_{P_L}^{LR} \ast \tilde{\omega}_p = k_{ms}\, S_{ms}^{LR} \ast \omega_{ms} = \widetilde{MS}_k. \quad (20)$$
Similarly, we obtain

$$\tilde{P} = k_p\, \tilde{S}_P^{HR} \ast \tilde{\omega}_p = k_{ms}\, S_P^{HR} \ast \omega_{ms} = \widehat{MS}_k. \quad (21)$$
Combining Equations (20) and (21), a linear affine function is proposed to solve for $\widehat{MS}_k^{XR}$ as follows:

$$\tilde{P}^{XR} = m P^{XR} + n = \widehat{MS}_k^{XR}. \quad (22)$$
Figure 2 shows a schematic diagram related to the HPM injection scheme.

3.3. Mixed-Scale Regression for Detailed Images

Thus, according to Equation (22), the problem is transformed into finding the coefficients $m$ and $n$ to obtain $\widehat{MS}_k^{XR}$. In this paper, spectral matching between the LR PAN and MS data is established through linear regression, where the coefficients $m$ and $n$ of the linear model are estimated by least squares [36]:

$$m = \frac{\operatorname{cov}\left(\widetilde{MS}_k, I_{P_L}\right)}{\operatorname{cov}\left(P_L, I_{P_L}\right)}, \quad (23)$$

$$n = E\left(\widetilde{MS}_k\right) - \frac{\operatorname{cov}\left(\widetilde{MS}_k, I_{P_L}\right)}{\operatorname{cov}\left(P_L, I_{P_L}\right)}\, E(P), \quad (24)$$

where $E(X)$ represents the mean of image $X$.
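Under the same global-covariance reading as before (reusing cov2 from the sketch in Section 3.1), the affine coefficients of Equations (23) and (24) can be estimated as follows; i_pl is assumed to be the full-scale-regressed detail image of Equation (29).

```python
def affine_coefficients(ms_up_k, pan, pan_lp, i_pl):
    """Least-squares m and n of Equations (23) and (24).

    Reuses cov2() from the earlier sketch; i_pl is the detail image I_PL.
    """
    m = cov2(ms_up_k, i_pl) / cov2(pan_lp, i_pl)
    n = ms_up_k.mean() - m * pan.mean()
    return m, n
```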
Thus, we can rewrite Equation (16):

$$\widehat{MS}_k = \widetilde{MS}_k\, \frac{\tilde{P}}{\widetilde{P_L}} = \widetilde{MS}_k\, \frac{P + C_b}{P_L + C_b}, \quad (25)$$

where

$$C_b = \frac{E\left(\widetilde{MS}_k\right)}{g_k} - E(P). \quad (26)$$
To address the single-scale deficiency of scale-regression-based methods [35], we enhance them with detail-image information and construct a hybrid-scale regression. This approach considers not only the spectral residual-scale information from the covariance between the full-scale-regressed detail image $I_{P_L}$ and $\widetilde{MS}_k$, but also the spatial residual-scale information from the covariance between $I_{P_L}$ and $P$. Covariance-guided matching of remote sensing features originating from different sensors is thereby realized, yielding the HSMI gain:

$$g_k^i = MI \cdot \frac{\operatorname{cov}\left(\widetilde{MS}_k^i, I_{P_L}\right)}{\operatorname{cov}\left(I_{P_L}, P_L\right)} + \left(1 - MI\right) \cdot \frac{\operatorname{cov}\left(P, I_{P_L}\right)}{\operatorname{cov}\left(I_{P_L}, P_L\right)}, \quad (27)$$
where $MI$ is the mutual information between the detail image $I_{P_L}$ and $P_L$, computed as follows:

$$MI\left(I_{P_L}; P_L\right) = \sum_{x \in I_{P_L}} \sum_{y \in P_L} p(x, y) \log_2 \frac{p(x, y)}{p(x)\, p(y)}, \quad (28)$$
Here, $p(x)$ and $p(y)$ denote the marginal probability distributions of $I_{P_L}$ and $P_L$, and $p(x, y)$ denotes their joint probability distribution. Such a control parameter better expresses the intrinsic relationship between the detail image and the degraded PAN image, balancing the fusion representation at the hybrid scale.
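A histogram-based estimate of Equation (28) is sketched below; the bin count and the min-max binning implied by NumPy's histogram2d are our assumptions, since the paper does not specify how the probability distributions are discretized.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Equation (28): mutual information via a joint histogram.

    bins is an assumed discretization parameter.
    """
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_xy = joint / joint.sum()                  # joint distribution p(x, y)
    p_x = p_xy.sum(axis=1, keepdims=True)       # marginal p(x)
    p_y = p_xy.sum(axis=0, keepdims=True)       # marginal p(y)
    nz = p_xy > 0                               # skip zero cells (log(0))
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])))
```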
In addition, the input PAN and MS images do not correspond exactly to each other owing to the inherent biases of multi-source sensor platforms. Spectral matching between $I_{P_L}$ and $\widetilde{MS}_k$ is therefore performed before the gain estimation, so that they have almost the same statistical characteristics:

$$I_{P_L} = \frac{\widetilde{MS}_k}{P_L}\, D_{LMS} = \frac{\widetilde{MS}_k}{P_L} \cdot \frac{a_k P + b_k}{a_k P_L + b_k}, \quad (29)$$

where

$$a_k = \frac{\operatorname{std}\left(\widetilde{MS}_k\right)}{\operatorname{std}\left(P_L\right)}, \qquad b_k = \mu_{\widetilde{MS}_k} - a_k\, \mu_{P_L}. \quad (30)$$
Here, $\mu$ and $\operatorname{std}$ denote the mean and the standard deviation of the corresponding images, respectively.
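The following sketch combines Equations (29) and (30) under our reading of Equation (29), i.e., the spectrally matched ratio multiplying $\widetilde{MS}_k / P_L$; the eps guard is again an assumed numerical safeguard rather than part of the paper's formulation.

```python
import numpy as np

def spectral_match(ms_up_k, pan, pan_lp, eps=1e-6):
    """Detail image I_PL of Equation (29) with a_k, b_k of Equation (30)."""
    a_k = ms_up_k.std() / max(pan_lp.std(), eps)         # Eq. (30)
    b_k = ms_up_k.mean() - a_k * pan_lp.mean()
    d_lms = (a_k * pan + b_k) / np.maximum(a_k * pan_lp + b_k, eps)
    return ms_up_k / np.maximum(pan_lp, eps) * d_lms     # Eq. (29)
```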
Figure 1 shows the flowchart of the MTF-GLP-HPM-HSMI pansharpening method. After each iteration, the injection coefficient $g_k$ is obtained and substituted into Equation (25) to produce the pansharpening result. When the iterative process stops, the final pansharpening result is obtained. The process is described in Algorithm 1, and a compact code sketch follows it.
Algorithm 1 MTF-GLP-HPM-HSMI
Input: an original MS image and an original PAN image.
(1) Interpolate the MS image to the size of the PAN image to obtain $\widetilde{MS}$;
(2) Obtain $P_L$ from the PAN image using the MTF-GLP;
for each band $k \in \{1, \ldots, K\}$ do
    1. Calculate the gain coefficient $g_k$ using Equation (27);
    2. Perform spectral matching using Equation (29) prior to estimation and generate the detail image $I_{P_L}$;
    3. Calculate the mutual information between $P_L$ and $I_{P_L}$ using Equation (28) to derive the mixed weighting coefficient $MI$;
    for each iteration $i \in \{0, \ldots, N - 1\}$ do
        1. Calculate the injection coefficient $g_k^i$ as
           $g_k^i \leftarrow MI \cdot \frac{\operatorname{cov}\left(\widetilde{MS}_k^i, I_{P_L}\right)}{\operatorname{cov}\left(I_{P_L}, P_L\right)} + \left(1 - MI\right) \cdot \frac{\operatorname{cov}\left(P, I_{P_L}\right)}{\operatorname{cov}\left(I_{P_L}, P_L\right)}$;
        2. Use $g_k^i$ to fuse the MS and PAN images as
           $\widehat{MS}_k^i \leftarrow \widetilde{MS}_k\, \frac{P + C_b}{P_L + C_b}$, with $C_b = \frac{E\left(\widetilde{MS}_k\right)}{g_k^i} - E(P)$;
    end for
    Stop the iterative process and output $\widehat{MS}_k$;
end for
return $\widehat{MS}$
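Putting the pieces together, the sketch below mirrors Algorithm 1 using the helpers defined in the earlier sketches (cov2, mutual_information, spectral_match). The iteration count n_iter and the use of the current fused estimate as $\widetilde{MS}_k^i$ in Equation (27) are our assumptions where the paper leaves details open; this is an illustrative reading, not the authors' reference code.

```python
import numpy as np

def hsmi_fuse(ms_up, pan, pan_lp, n_iter=3, eps=1e-6):
    """Compact sketch of Algorithm 1 (MTF-GLP-HPM-HSMI).

    ms_up : (K, H, W) interpolated MS; pan, pan_lp : (H, W).
    """
    fused = np.empty_like(ms_up, dtype=float)
    for k in range(ms_up.shape[0]):
        ms_k = ms_up[k].astype(float)
        i_pl = spectral_match(ms_k, pan, pan_lp, eps)       # Eq. (29)
        mi = mutual_information(i_pl, pan_lp)               # Eq. (28)
        ms_hat = ms_k
        for _ in range(n_iter):                             # Eq. (27)
            g = (mi * cov2(ms_hat, i_pl)
                 + (1.0 - mi) * cov2(pan, i_pl)) / cov2(i_pl, pan_lp)
            c_b = ms_k.mean() / g - pan.mean()              # Eq. (26)
            ms_hat = ms_k * (pan + c_b) / np.maximum(pan_lp + c_b, eps)  # Eq. (25)
        fused[k] = ms_hat
    return fused
```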

4. Experimental Results

4.1. Datasets and Preprocessing

This work uses three distinct datasets of paired S2-MSI and S3-OLCI products covering several key island regions in the South China Sea: Huangyan Island, Sabina Shoal, and Discovery Reef. Table 1 summarizes the selected scenes and their acquisition dates and locations. All scenes are cloud-free products downloaded from the Copernicus Open Access Centre and preprocessed using the Sentinel Application Platform (SNAP). For S2, the Level-2A MSI images produced by the Sen2Cor processor were spatially resampled to 20 m to generate homogeneous data cubes for the HR panchromatic inputs. Atmospheric correction was applied to the S3-OLCI images used as multispectral inputs. Finally, the OLCI images were reprojected onto the corresponding S2 tiles to capture the overlap region between the two sensors (see Figure 3).
In this section, we define the input and output images of the considered multi-source inter-sensor pansharpening scheme: the input LRMS ($I_{MS}$) and the input HR panchromatic ($I_{PAN}$). In this study, we only consider MSI and OLCI bands with similar wavelengths, i.e., the red (R), green (G), blue (B), and near-infrared (NIR) bands at 665, 560, 490, and 865 nm. Therefore, $I_{MS}$ is defined by the OLCI bands Oa04, Oa06, Oa08, and Oa17. In addition, a 21-band extension of $I_{MS}$ was used to verify the spectral-retention adaptability of the proposed method in multispectral fusion. $I_{PAN}$ is generated by averaging the corresponding MSI bands and adjusting the spatial size to $R \times$ the OLCI grid ($R = 4$ in this paper).
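For reproducibility, a hedged sketch of the $I_{PAN}$ construction described above: band-average the co-registered MSI bands, then resample onto an $R \times$ OLCI grid. The paper does not state the resampler used, so simple block averaging is a stand-in, and the function assumes the MSI grid is at least as large as the target grid.

```python
import numpy as np

def make_pan(msi_bands, olci_shape, r=4):
    """Synthetic HR PAN: average of (4, H, W) MSI bands, block-averaged
    onto an (r*h, r*w) grid, where (h, w) = olci_shape."""
    pan = msi_bands.mean(axis=0)                # average R, G, B, NIR bands
    th, tw = r * olci_shape[0], r * olci_shape[1]
    fy, fx = pan.shape[0] // th, pan.shape[1] // tw
    pan = pan[: th * fy, : tw * fx]             # crop to an exact multiple
    return pan.reshape(th, fy, tw, fx).mean(axis=(1, 3))
```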

4.2. Benchmarks and Assessment

To validate the advantages of the proposed MTF-GLP-HPM-HSMI method, we selected popular pansharpening methods from different categories for benchmarking, with a particular focus on the MTF-GLP family of MRA methods: C-GSA [43], BDSD-PC [44], AWLP [19], MF [20], MTF-GLP [21], MTF-GLP-HPM [42], MTF-GLP-HPM-H [45], MTF-GLP-HPM-R [36], MTF-GLP-CBD [33], MTF-GLP-Reg-FS [34], RR [46], TV [47], FE-HPM [48], and PWMBF [49].
To provide a comprehensive analysis, both reduced-resolution (RR) and full-resolution (FR) experiments were conducted on all three South China Sea island datasets. In the RR experiments, following the synthesis property of Wald's protocol, the original MS serves as the ideal fusion result; the MS downsampled and then upsampled, together with the downsampled PAN, are used as inputs, and the fused results are evaluated against the original MS. The primary advantage is an objective assessment of quality distortion; the clear disadvantage is the assumption of scale invariance. To address this issue, FR experiments were used to aid validation. For the RR experiments, we adopted $Q2^n$ (Universal Image Quality Index extended to multiband data), ERGAS (relative dimensionless global error in synthesis), SAM (spectral angle mapper), and PSNR (peak signal-to-noise ratio) as evaluation metrics. For the FR evaluation, the Hybrid Quality with No Reference (HQNR) index was used. It is built from two values, $D_\lambda^k$ and $D_S$, which quantify the spectral and spatial distortions, respectively:

$$D_\lambda^k = 1 - Q2^n\left(\widehat{MS}_d, MS\right),$$

$$D_S = \frac{1}{K} \sum_{k=1}^{K} \left| Q\left(\widehat{MS}_k, P\right) - Q\left(\widetilde{MS}_k, P_L\right) \right|,$$

$$HQNR = \left(1 - D_\lambda^k\right)\left(1 - D_S\right),$$

where $\widehat{MS}_d$ represents the degraded version of the fused outcome and $Q$ denotes the Universal Image Quality Index (UIQI). The HQNR metric ranges between 0 and 1, with higher values indicating superior pansharpening quality; it combines spectral and spatial information to assess the quality of pansharpened images without requiring a reference image. In the quantitative assessment, the optimal values of the selected metrics are $Q2^n$ (1), ERGAS (0), SAM (0), PSNR ($+\infty$), $D_\lambda^k$ (0), $D_S$ (0), and HQNR (1). In addition, the computing time (CT) is reported to demonstrate computational efficiency.
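The sketch below illustrates the HQNR computation; it uses a global (windowless) UIQI and a per-band mean in place of the multiband $Q2^n$, both of which are simplifications of the standard sliding-window protocol, so the numbers it produces are indicative rather than reference values. It reuses cov2 from the Section 3 sketch.

```python
import numpy as np

def uiqi(a, b):
    """Global Universal Image Quality Index (no sliding window)."""
    ca = cov2(a, b)
    ma, mb, va, vb = a.mean(), b.mean(), a.var(), b.var()
    return 4 * ca * ma * mb / ((va + vb) * (ma**2 + mb**2))

def hqnr(fused, fused_deg, ms, ms_up, pan, pan_lp):
    """HQNR from D_lambda and D_S as defined above (simplified)."""
    k = fused.shape[0]
    d_lambda = 1 - np.mean([uiqi(fused_deg[i], ms[i]) for i in range(k)])
    d_s = np.mean([abs(uiqi(fused[i], pan) - uiqi(ms_up[i], pan_lp))
                   for i in range(k)])
    return (1 - d_lambda) * (1 - d_s)
```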

4.3. Quantitative Comparison Results

Table 2, Table 3 and Table 4 summarize the quantitative results for the three pairs of coupled data (i.e., HY, XB, and HG). In this paper, “HSMI” stands for the method MTF-GLP-HPM-HSMI. In addition, the two best results for each indicator are highlighted in bold.
Comprehensive evaluations under the reduced-resolution (RR) and full-resolution (FR) protocols indicate that, compared to other regression-based multi-resolution analysis (MRA) methods (e.g., MTF-GLP-CBD, MTF-GLP-HPM, and MTF-GLP-Reg-FS), the HSMI method performs favorably in terms of spectral fidelity. To further examine fusion performance, we grouped the metrics into three aspects: (1) spectral distortion, (2) spatial distortion, and (3) spatial–spectral coupling. The first group, including SAM and $D_\lambda^k$, quantifies spectral errors. On the HY, XB, and HG datasets, HSMI achieves the lowest SAM values of 0.3332, 0.5600, and 0.3708, respectively; under FR evaluation, its $D_\lambda^k$ values are 0.2169, 0.2126, and 0.1830, suggesting effective preservation of spectral information. The second group, comprising $D_S$ and ERGAS, assesses spatial detail quality and distortion. HSMI's fused images exhibit relatively clear island edges and improved detail resolution; for instance, on the HY dataset its $D_S$ value is 0.0341, slightly higher than BDSD-PC (0.0053) or TV (0.0122), yet its overall spatial performance remains competitive. The third group, including $Q2^n$ and HQNR, evaluates spatial–spectral coupling. HSMI records $Q2^n$ values of 0.7707, 0.8776, and 0.8945, and HQNR values of 0.7563, 0.7697, and 0.8071 on the HY, XB, and HG datasets, respectively, ranking among the top performers and demonstrating stable behavior compared to methods such as BDSD-PC, MTF-GLP-Reg-FS, and AWLP. Overall, these results suggest that HSMI achieves a suitable balance of spatial and spectral resolution in island sea-area environments, effectively combining spectral fidelity with spatial detail enhancement.
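For completeness, the two spectral metrics quoted above can be computed as follows; these are the standard textbook definitions (SAM as the mean per-pixel spectral angle, ERGAS with the PAN/MS resolution ratio), written here as a plain NumPy sketch rather than the evaluation code used in the paper.

```python
import numpy as np

def sam_degrees(fused, ref, eps=1e-12):
    """Mean spectral angle in degrees between (K, H, W) stacks."""
    num = np.sum(fused * ref, axis=0)
    den = np.linalg.norm(fused, axis=0) * np.linalg.norm(ref, axis=0) + eps
    return float(np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0))).mean())

def ergas(fused, ref, ratio=4):
    """ERGAS between (K, H, W) stacks; ratio = PAN/MS resolution ratio."""
    rmse2 = np.mean((fused - ref) ** 2, axis=(1, 2))
    mean2 = np.mean(ref, axis=(1, 2)) ** 2
    return float(100.0 / ratio * np.sqrt(np.mean(rmse2 / mean2)))
```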

4.4. Visual and Qualitative Comparisons

Figure 4, Figure 5 and Figure 6, respectively, present the visual results of experiments conducted on three coupled Sentinel-2 and Sentinel-3 datasets (HY, XB, and HG). The fusion results are displayed in true color, specifically the Oa04, Oa06, and Oa08 bands of Sentinel-3. Compared with variational optimization (VO)-based methods (such as TV and RR), component substitution (CS)-based methods (such as BDSD-PC and C-GSA) generally preserve spatial details more effectively. However, CS methods exhibit limitations in spectrally sensitive scenarios, while VO methods often suffer from significant spectral distortion and spatial detail loss, partly due to their reliance on manual parameter adjustment, which reduces flexibility and increases computational complexity. MTF-GLP and other regression-based methods also show a certain degree of spectral distortion, as evidenced by the presence of obvious noise in the fusion results of MTF-GLP and its derivatives (such as MTF-GLP-HPM and MTF-GLP-Reg-FS), especially in spectrally sensitive areas, which further exacerbates spectral distortion.
Meanwhile, the magnified regions highlighted by the blue and green boxes indicate that the proposed HSMI method achieves a better balance between spatial detail clarity and spectral fidelity. Methods such as C-GSA, MTF-GLP, and regression-based methods (such as MTF-GLP-CBD and MTF-GLP-Reg-FS) show obvious spectral distortion in island environments, occasionally accompanied by noise artifacts, which may be due to their insufficient handling of the spatial heterogeneity of multi-source data. For instance, C-GSA shows excessive spectral smoothing in the green magnified box of the HG dataset, causing the island edge areas to appear overly green. Other VO-based methods and hybrid methods (such as FE-HPM, PWMBF, etc.) exhibit significant fusion noise on the HY and HG datasets. In addition, the visualization output of BDSD-PC is inconsistent with its quantitative indicators. Although it performs well in spatial indicators (such as $D_S$: 0.0053 for the HY dataset), its fused images have halo effects and incorrect contour lines, which leads to the loss of island boundary features; this is in line with its lower $Q2^n$ value (HY: 0.6926). In contrast, the proposed HSMI shows clear textures and accurate color restoration in both island edges and lagoon areas, indicating that HSMI can effectively capture the complex features of island and sea scenes. This may be due to its adaptive regression framework, which can better adapt to the spectral and spatial characteristics of Sentinel-2 and Sentinel-3 data.

4.5. Extended Comparison

To more clearly illustrate the differences in pansharpened images with respect to spectral fidelity and spatial detail enhancement, and to further validate the superiority of the proposed HSMI method in mitigating spectral distortions in marine regions, we generated SAM (spectral angle mapper) error maps for subregions of the HY, XB, and HG datasets, as shown in Figure 7, Figure 8, and Figure 9, respectively. The results demonstrate that, compared to other methods, the fused images produced by the HSMI method (labeled as (o) in Figure 7, Figure 8 and Figure 9) exhibit the closest resemblance to the reference images derived from the original multispectral inputs, with significantly lower SAM errors. This strongly confirms the HSMI method's capability to effectively balance spatial detail enhancement and spectral information preservation in complex marine environments.
To systematically evaluate the performance of pansharpened images in terms of multi-band spectral fidelity and spatial detail enhancement, we first conducted a 4-band pansharpening experiment on the HY dataset, presenting the fused results in false-color format (Figure 10). The experimental results demonstrate that the proposed HSMI method achieves an excellent balance between spatial detail enhancement in island regions and spectral information preservation in surrounding marine areas, fully showcasing its superior capability in synergistic optimization of spectral and spatial features.
To further validate the adaptability and robustness of the HSMI method in more complex multi-band scenarios, we performed an extended 21-band pansharpening experiment on the HG dataset, where the low-resolution multispectral image LRMS ($I_{MS}$) was replaced with a full-band S3 image. The visual results and quantitative evaluations are presented in Figure 11 and Table 5, respectively, showing consistency with the 4-band experiment. Notably, even under challenging conditions where the original LRMS and PAN images in the HG dataset were affected by cloud and fog interference, the HSMI method maintained optimal spectral-related metrics. Although its spatial performance was slightly below the ideal level, no coupling effect between spectral and spatial distortions was observed. Comprehensive evaluation indicates that the HSMI method exhibits robust performance in multi-band pansharpening tasks, effectively addressing the demands for spectral fidelity and spatial enhancement in complex environments.

5. Conclusions

This paper introduces a multi-source pansharpening method based on mutual information hybrid-scale regression (HSMI), aimed at addressing the challenges of spectral fidelity and spatial detail optimization in multi-source remote sensing data fusion for island sea areas. Limited by the low spatial resolution of Sentinel-3 OLCI, systematic biases between sensor platforms, and spectral distortions caused by dynamic marine water changes due to inconsistent imaging times, conventional methods often yield suboptimal spectral preservation in marine regions when integrating Sentinel-2 spatial information with Sentinel-3 spectral information. To tackle these issues, we propose a hybrid scale regression framework that employs mutual information to quantify spatial–spectral correlations across multi-source data, computing weight parameters to integrate hybrid scale information and thereby producing high-precision pansharpened outputs.
To validate the performance of the proposed HSMI method, we utilized a South China Sea island dataset derived from coupled Sentinel-2 and Sentinel-3 data, conducting a systematic evaluation at reduced resolution (ERGAS, SAM, PSNR, and $Q2^n$ metrics) and full resolution ($D_\lambda^k$, $D_S$, and HQNR metrics). The experimental results demonstrate that, compared to established mainstream pansharpening methods, the HSMI method (denoted MTF-GLP-HPM-HSMI) enhances spatial details and edge clarity in island regions while markedly improving spectral fidelity in surrounding marine areas, showcasing strong spatial–spectral synergistic optimization. Although the current results are promising, there is still room for further optimization. In future work, we intend to explore advanced deep learning techniques, extend the approach to other multi-source sensor platforms, and apply it to water color inversion tasks in island sea areas, thereby enhancing its practical value in marine remote sensing applications.

Author Contributions

Conceptualization, D.F. and J.M.; Data curation, D.F. and J.M.; Formal analysis, J.M.; Funding acquisition, D.F.; Investigation, D.F.; Methodology, J.M.; Project administration, D.F. and B.L.; Resources, D.F.; Software, J.M.; Supervision, B.L.; Validation, Y.Z.; Visualization, D.F. and J.M.; Writing—original draft, D.F. and J.M.; Writing—review and editing, D.F. and J.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Key Research and Development Program of China under grant 2022YFC3103101, the Key Special Project for Introduced Talents Team of Southern Marine Science and Engineering Guangdong Laboratory (GML2021GD0809), the National Natural Science Foundation of China (No. 42206187), Key projects of the Guangdong Education Department (2023ZDZX4009).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Tian, Y.; Duan, M.; Cui, X.; Zhao, Q.; Tian, S.; Lin, Y.; Wang, W. Advancing application of satellite remote sensing technologies for linking atmospheric and built environment to health. Front. Public Health 2023, 11, 1270033. [Google Scholar] [CrossRef] [PubMed]
  2. Dube, T.; Shekede, M.D.; Massari, C. Remote sensing for water resources and environmental management. Remote Sens. 2022, 15, 18. [Google Scholar] [CrossRef]
  3. Ma, F.; Huo, S.; Yang, F. Graph-based logarithmic low-rank tensor decomposition for the fusion of remotely sensed images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 11271–11286. [Google Scholar] [CrossRef]
  4. Wu, K.; Chen, T.; Xu, Y.; Song, D.; Li, H. A Novel Change Detection Approach Based on Spectral Unmixing from Stacked Multitemporal Remote Sensing Images with a Variability of Endmembers. Remote Sens. 2021, 13, 2550. [Google Scholar] [CrossRef]
  5. Chawla, I.; Karthikeyan, L.; Mishra, A.K. A review of remote sensing applications for water security: Quantity, quality, and extremes. J. Hydrol. 2020, 585, 124826. [Google Scholar] [CrossRef]
  6. Yang, H.; Kong, J.; Hu, H.; Du, Y.; Gao, M.; Chen, F. A review of remote sensing for water quality retrieval: Progress and challenges. Remote Sens. 2022, 14, 1770. [Google Scholar] [CrossRef]
  7. Concha, J.A.; Schott, J.R. Retrieval of color producing agents in Case 2 waters using Landsat 8. Remote Sens. Environ. 2016, 185, 95–107. [Google Scholar] [CrossRef]
  8. Soppa, M.A.; Silva, B.; Steinmetz, F.; Keith, D.; Scheffler, D.; Bohn, N.; Bracher, A. Assessment of polymer atmospheric correction algorithm for hyperspectral remote sensing imagery over coastal waters. Sensors 2021, 21, 4125. [Google Scholar] [CrossRef]
  9. Yu, G.; Zhong, Y.; Fu, D.; Chen, F.; Chen, C. Remote sensing estimation of δ15NPN in the Zhanjiang Bay using Sentinel-3 OLCI data based on machine learning algorithm. Front. Mar. Sci. 2024, 11, 1366987. [Google Scholar] [CrossRef]
  10. Liu, Z.; Han, X.H. Hyperspectral image super resolution using deep internal and self-supervised learning. CAAI Trans. Intell. Technol. 2024, 9, 128–141. [Google Scholar] [CrossRef]
  11. Wang, S.; Jiang, X.; Spyrakos, E.; Li, J.; McGlinchey, C.; Constantinescu, A.M.; Tyler, A.N. Water color from Sentinel-2 MSI data for monitoring large rivers: Yangtze and Danube. Geo-Spat. Inf. Sci. 2024, 27, 854–869. [Google Scholar] [CrossRef]
  12. Joshi, N.; Park, J.; Zhao, K.; Londo, A.; Khanal, S. Monitoring harmful algal blooms and water quality using sentinel-3 OLCI satellite imagery with machine learning. Remote Sens. 2024, 16, 2444. [Google Scholar] [CrossRef]
  13. Zeng, F.; Song, C.; Cao, Z.; Xue, K.; Lu, S.; Chen, T.; Liu, K. Monitoring inland water via Sentinel satellite constellation: A review and perspective. ISPRS J. Photogramm. Remote Sens. 2023, 204, 340–361. [Google Scholar] [CrossRef]
  14. Vivone, G.; Dalla Mura, M.; Garzelli, A.; Restaino, R.; Scarpa, G.; Ulfarsson, M.O.; Alparone, L.; Chanussot, J. A new benchmark based on recent advances in multispectral pansharpening: Revisiting pansharpening with classical and emerging pansharpening methods. IEEE Geosci. Remote Sens. Mag. 2020, 9, 53–81. [Google Scholar] [CrossRef]
  15. Ghadjati, M.; Benazza-Benyahia, A.; Moussaoui, A. Satellite image fusion using an iterative IHS-based approach. In Proceedings of the 2020 Mediterranean and Middle-East Geoscience and Remote Sensing Symposium (M2GARSS), Tunis, Tunisia, 9–11 March 2020; pp. 133–136. [Google Scholar]
  16. Jelének, J.; Kopačková, V.; Koucká, L.; Mišurec, J. Testing a modified PCA-based sharpening approach for image fusion. Remote Sens. 2016, 8, 794. [Google Scholar] [CrossRef]
  17. Yilmaz, V.; Serifoglu Yilmaz, C.; Güngör, O.; Shan, J. A genetic algorithm solution to the gram-schmidt image fusion. Int. J. Remote Sens. 2020, 41, 1458–1485. [Google Scholar] [CrossRef]
  18. Zhong, S.; Zhang, Y.; Chen, Y.; Wu, D. Combining component substitution and multiresolution analysis: A novel generalized BDSD pansharpening algorithm. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 2867–2875. [Google Scholar] [CrossRef]
  19. Vivone, G.; Alparone, L.; Garzelli, A.; Lolli, S. Fast reproducible pansharpening based on instrument and acquisition modeling: AWLP revisited. Remote Sens. 2019, 11, 2315. [Google Scholar] [CrossRef]
  20. Restaino, R.; Vivone, G.; Dalla Mura, M.; Chanussot, J. A Pansharpening Algorithm Based on Morphological Filters. In Mathematical Morphology and Its Applications to Signal and Image Processing; Benediktsson, J., Chanussot, J., Najman, L., Talbot, H., Eds.; ISMM 2015. Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015; Volume 9082. [Google Scholar] [CrossRef]
  21. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Selva, M. Advantages of Laplacian pyramids over “à trous” wavelet transforms for pansharpening of multispectral images. In Proceedings of the Image and Signal Processing for Remote Sensing XVIII, Edinburgh, UK, 24–26 September 2012; Volume 8537, pp. 12–21. [Google Scholar]
  22. Restaino, R.; Vivone, G.; Dalla Mura, M.; Chanussot, J. Fusion of multispectral and panchromatic images based on morphological operators. IEEE Trans. Image Process. 2016, 25, 2882–2895. [Google Scholar] [CrossRef]
  23. Shi, Y.; Zhou, W.; Li, W. Pansharpening of multispectral images based on cycle-spinning quincunx lifting transform. In Proceedings of the 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), Chongqing, China, 11–13 December 2019; pp. 1–5. [Google Scholar]
  24. Khader, A.; Yang, J.; Xiao, L. NMF-DuNet: Nonnegative matrix factorization inspired deep unrolling networks for hyperspectral and multispectral image fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 5704–5720. [Google Scholar] [CrossRef]
  25. Scarpa, G.; Vitale, S.; Cozzolino, D. Target-adaptive CNN-based pansharpening. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5443–5457. [Google Scholar] [CrossRef]
  26. Yang, J.; Fu, X.; Hu, Y.; Huang, Y.; Ding, X.; Paisley, J. PanNet: A deep network architecture for pan-sharpening. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 5449–5457. [Google Scholar]
  27. Fernandez-Beltran, R.; Fernandez, R.; Kang, J.; Pla, F. W-NetPan: Double-U network for inter-sensor self-supervised pan-sharpening. Neurocomputing 2023, 530, 125–138. [Google Scholar] [CrossRef]
  28. Fernandez, R.; Fernandez-Beltran, R.; Kang, J.; Pla, F. Sentinel-3 super-resolution based on dense multireceptive channel attention. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 7359–7372. [Google Scholar] [CrossRef]
  29. Wang, Q.; Blackburn, G.A.; Onojeghuo, A.O.; Dash, J.; Zhou, L.; Zhang, Y.; Atkinson, P.M. Fusion of Landsat 8 OLI and Sentinel-2 MSI data. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3885–3899. [Google Scholar] [CrossRef]
  30. Shao, Z.; Cai, J.; Fu, P.; Hu, L.; Liu, T. Deep learning-based fusion of Landsat-8 and Sentinel-2 images for a harmonized surface reflectance product. Remote Sens. Environ. 2019, 235, 111425. [Google Scholar] [CrossRef]
  31. Jia, S.; Min, Z.; Fu, X. Multiscale spatial–spectral transformer network for hyperspectral and multispectral image fusion. Inf. Fusion 2023, 96, 117–129. [Google Scholar] [CrossRef]
  32. Liu, P.; Liu, J.; Xiao, L.; Zheng, Z. Multiresolution analysis-inspired spatial and spectral details preserved model for variational pansharpening. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5408815. [Google Scholar] [CrossRef]
  33. Yilmaz, V. Adaptive hybrid pansharpening: A novel approach for combining two methods to achieve superior pansharpening performance. Int. J. Remote Sens. 2023, 44, 4301–4325. [Google Scholar] [CrossRef]
  34. Vivone, G.; Restaino, R.; Chanussot, J. Full scale regression-based injection coefficients for panchromatic sharpening. IEEE Trans. Image Process. 2018, 27, 3418–3431. [Google Scholar] [CrossRef]
  35. Wang, P.; Yao, H.; Li, C.; Zhang, G.; Leung, H. Multiresolution analysis based on dual-scale regression for pansharpening. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5406319. [Google Scholar] [CrossRef]
  36. Vivone, G.; Restaino, R.; Chanussot, J. A regression-based high-pass modulation pansharpening approach. IEEE Trans. Geosci. Remote Sens. 2017, 56, 984–996. [Google Scholar] [CrossRef]
  37. Kumar, S.; Imen, S.; Sridharan, V.K.; Gupta, A.; McDonald, W.; Ramirez-Avila, J.J.; Abdul-Aziz, O.I.; Talchabhadel, R.; Gao, H.; Quinn, N.W.; et al. Perceived barriers and advances in integrating earth observations with water resources modeling. Remote Sens. Appl. Soc. Environ. 2024, 33, 101119. [Google Scholar] [CrossRef]
  38. Jaywant, S.A.; Arif, K.M. Remote Sensing Techniques for Water Quality Monitoring: A Review. Sensors 2024, 24, 8041. [Google Scholar] [CrossRef] [PubMed]
  39. Yang, H.; Du, Y.; Zhao, H.; Chen, F. Water quality Chl-a inversion based on spatio-temporal fusion and convolutional neural network. Remote Sens. 2022, 14, 1267. [Google Scholar] [CrossRef]
  40. Li, J.; Dong, Z.; Chen, L.; Tang, Q.; Hao, J.; Zhang, Y. Multi-Temporal Image Fusion-Based Shallow-Water Bathymetry Inversion Method Using Active and Passive Satellite Remote Sensing Data. Remote Sens. 2025, 17, 265. [Google Scholar] [CrossRef]
  41. Shi, Y.; Tan, A.; Liu, N.; Li, W.; Tao, R.; Chanussot, J. A pansharpening method based on hybrid-scale estimation of injection gains. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5400615. [Google Scholar] [CrossRef]
  42. Alparone, L.; Garzelli, A.; Vivone, G. Intersensor statistical matching for pansharpening: Theoretical issues and practical solutions. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4682–4695. [Google Scholar] [CrossRef]
  43. Restaino, R.; Dalla Mura, M.; Vivone, G.; Chanussot, J. Context-adaptive pansharpening based on image segmentation. IEEE Trans. Geosci. Remote Sens. 2016, 55, 753–766. [Google Scholar] [CrossRef]
  44. Vivone, G. Robust band-dependent spatial-detail approaches for panchromatic sharpening. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6421–6433. [Google Scholar] [CrossRef]
  45. Lolli, S.; Alparone, L.; Garzelli, A.; Vivone, G. Haze correction for contrast-based multispectral pansharpening. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2255–2259. [Google Scholar] [CrossRef]
  46. Palsson, F.; Ulfarsson, M.O.; Sveinsson, J.R. Model-based reduced-rank pansharpening. IEEE Geosci. Remote Sens. Lett. 2019, 17, 656–660. [Google Scholar] [CrossRef]
  47. Liu, P. Spatial and spectral anisotropic tensor total variation-driven adaptive pansharpening. IEEE Geosci. Remote Sens. Lett. 2021, 19, 5002405. [Google Scholar] [CrossRef]
  48. Tang, Y.; Li, H.; Xie, G.; Liu, P.; Li, T. Multi-Frequency Spectral–Spatial Interactive Enhancement Fusion Network for Pan-Sharpening. Electronics 2024, 13, 2802. [Google Scholar] [CrossRef]
  49. Maleki, S.A.; Ghassemian, H.; Imani, M. Nonreference object-based pansharpening quality assessment. Egypt. J. Remote Sens. Space Sci. 2024, 27, 227–241. [Google Scholar] [CrossRef]
Figure 1. MTF-GLP-HPM-HSMI flowchart.
Figure 2. Schematic diagram of HPM injection scheme.
Figure 3. Visualization of the considered datasets made of coupled S3 OLCI (a) and S2 MSI scenes (b).
Figure 4. Visual presentation of the HY experiment (true color).
Figure 5. Visual presentation of the XB experiment (true color).
Figure 6. Visual presentation of the HG experiment (true color).
Figure 7. Subareas of full-band SAM average error for HY experiment. From left to right: (a) Selected subregions. (b) BDSD-PC. (c) C-GSA. (d) AWLP. (e) MTF-GLP. (f) MTF-GLP-HPM. (g) MTF-GLP-HPM-H. (h) MTF-GLP-HPM-R. (i) MTF-GLP-CBD. (j) MTF-GLP-Reg-FS. (k) TV. (l) RR. (m) MF. (n) FE-HPM. (o) MTF-GLP-HPM-HSMI.
Figure 8. Subareas of full-band SAM average error for XB experiment. From left to right: (a) Selected subregions. (b) BDSD-PC. (c) C-GSA. (d) AWLP. (e) MTF-GLP. (f) MTF-GLP-HPM. (g) MTF-GLP-HPM-H. (h) MTF-GLP-HPM-R. (i) MTF-GLP-CBD. (j) MTF-GLP-Reg-FS. (k) TV. (l) RR. (m) MF. (n) FE-HPM. (o) MTF-GLP-HPM-HSMI.
Figure 9. Subareas of full-band SAM average error for HG experiment. From left to right: (a) Selected subregions. (b) BDSD-PC. (c) C-GSA. (d) AWLP. (e) MTF-GLP. (f) MTF-GLP-HPM. (g) MTF-GLP-HPM-H. (h) MTF-GLP-HPM-R. (i) MTF-GLP-CBD. (j) MTF-GLP-Reg-FS. (k) TV. (l) RR. (m) MF. (n) FE-HPM. (o) MTF-GLP-HPM-HSMI.
Figure 10. The results of the 4-band fusion of the HY dataset are shown in standard false color (NIR–red–green).
Figure 11. Spectral errors of 21-band experiments on the HG dataset. From left to right: (a) Selected subregions. (b) BDSD-PC. (c) MTF-GLP-Reg-FS. (d) RR. (e) MTF-GLP-HPM-HSMI.
Table 1. Description of the constructed dataset.

Name | Scene | S3 Sensing Date | S2 Sensing Date | Tile (Ref. S2)
HY (Huangyan Island) | the Zhongsha Islands | 21/03/2023 | 20/03/2023 | 50PNB
XB (Sabina Shoal) | the Nansha Islands | 05/03/2024 | 05/03/2024 | 50PMR
HG (Discovery Reef) | the Xisha Islands | 14/01/2023 | 14/01/2023 | 49QEU
Table 2. Quantitative evaluation results for the HY dataset.

Method | Q2n (RR) | SAM (RR) | ERGAS (RR) | PSNR (RR) | Dλk (FR) | DS (FR) | HQNR (FR) | CT
GT/EXP | 1.0000 | 0.0000 | 0.0000 | – | 0.1870 | 0.0010 | 0.8123 | –
BDSD-PC | 0.6926 | 7.1359 | 2.6272 | 44.7982 | 0.4471 | 0.0053 | 0.5499 | 0.26
C-GSA | 0.2787 | 13.8578 | 16.5020 | 29.4042 | 0.6642 | 0.0270 | 0.3267 | 0.53
AWLP | 0.5706 | 4.2464 | 16.9353 | 28.1994 | 0.2420 | 0.0318 | 0.7339 | 0.23
MF | 0.4403 | 6.6867 | 98.0534 | 11.6006 | 0.3755 | 0.0354 | 0.6024 | 0.28
MTF-GLP | 0.4987 | 5.5852 | 10.5662 | 33.0922 | 0.2948 | 0.0338 | 0.6814 | 0.10
MTF-GLP-HPM | 0.3694 | 9.0922 | 532.5727 | −3.2068 | 0.4252 | 0.0326 | 0.5561 | 0.11
MTF-GLP-HPM-R | 0.4416 | 7.8198 | 308.4144 | 1.5485 | 0.4041 | 0.0233 | 0.5820 | 0.07
MTF-GLP-CBD | 0.5594 | 5.3341 | 9.5631 | 33.7128 | 0.2883 | 0.0239 | 0.6947 | 0.09
MTF-GLP-Reg-FS | 0.5618 | 5.2926 | 9.4667 | 33.8007 | 0.2848 | 0.0236 | 0.6983 | 0.09
TV | 0.2828 | 10.1988 | 10.6424 | 35.0224 | 0.7937 | 0.0122 | 0.2038 | 5.65
RR | 0.5578 | 5.6050 | 9.7046 | 33.8492 | 0.4556 | 0.0151 | 0.5362 | 4.42
FE-HPM | 0.4229 | 6.9582 | 397.6030 | 1.4545 | 0.4123 | 0.0371 | 0.5659 | 0.42
PWMBF | 0.5312 | 4.1260 | 7.5496 | 37.3347 | 0.4788 | 0.0390 | 0.5009 | 0.47
HSMI | 0.7707 | 0.3332 | 7.8479 | 35.7845 | 0.2169 | 0.0341 | 0.7563 | 0.20
Table 3. Quantitative evaluation results for the XB dataset.

Method | Q2n (RR) | SAM (RR) | ERGAS (RR) | PSNR (RR) | Dλk (FR) | DS (FR) | HQNR (FR) | CT
GT/EXP | 1.0000 | 0.0000 | 0.0000 | – | 0.1989 | 0.0019 | 0.7995 | –
BDSD-PC | 0.8167 | 3.6684 | 0.8800 | 59.8759 | 0.2988 | 0.0021 | 0.6998 | 0.05
C-GSA | 0.5357 | 7.8938 | 30.8173 | 28.8323 | 0.4937 | 0.0209 | 0.4958 | 0.26
AWLP | 0.5749 | 4.8236 | 36.1730 | 26.5874 | 0.2610 | 0.0223 | 0.7225 | 0.15
MF | 0.7115 | 1.6490 | 20.4871 | 32.4754 | 0.2684 | 0.0311 | 0.7088 | 0.04
MTF-GLP | 0.6142 | 4.7505 | 17.6212 | 33.8376 | 0.2603 | 0.0295 | 0.7178 | 0.07
MTF-GLP-HPM | 0.5821 | 2.6724 | 76.3874 | 21.6583 | 0.2744 | 0.0283 | 0.7051 | 0.08
MTF-GLP-HPM-R | 0.7511 | 1.7318 | 146.2509 | 16.0651 | 0.2488 | 0.0204 | 0.7359 | 0.05
MTF-GLP-CBD | 0.7261 | 3.7199 | 13.4657 | 36.0038 | 0.2467 | 0.0214 | 0.7371 | 0.05
MTF-GLP-Reg-FS | 0.7367 | 3.5922 | 12.9381 | 36.3705 | 0.2427 | 0.0210 | 0.7414 | 0.07
TV | 0.5230 | 10.4948 | 10.8895 | 40.4000 | 0.5157 | 0.0068 | 0.4810 | 2.62
RR | 0.6264 | 6.5751 | 13.9354 | 35.9274 | 0.3473 | 0.0066 | 0.6484 | 3.89
FE-HPM | 0.6878 | 1.7861 | 27.6523 | 29.8884 | 0.2935 | 0.0331 | 0.6830 | 0.20
PWMBF | 0.6889 | 3.0461 | 9.3035 | 39.7592 | 0.3850 | 0.0288 | 0.5973 | 0.24
HSMI | 0.8776 | 0.5600 | 12.0783 | 37.3864 | 0.2126 | 0.0226 | 0.7697 | 0.08
Table 4. Quantitative evaluation results for the HG dataset.

Method | Q2n (RR) | SAM (RR) | ERGAS (RR) | PSNR (RR) | Dλk (FR) | DS (FR) | HQNR (FR) | CT
GT/EXP | 1.0000 | 0.0000 | 0.0000 | – | 0.1742 | 0.0010 | 0.8250 | –
BDSD-PC | 0.6967 | 3.3546 | 1.4289 | 51.5819 | 0.4442 | 0.0029 | 0.5542 | 0.04
C-GSA | 0.3733 | 13.1861 | 27.7954 | 25.5358 | 0.6680 | 0.0169 | 0.3263 | 0.20
AWLP | 0.6249 | 4.6637 | 40.9876 | 22.0931 | 0.2800 | 0.0142 | 0.7097 | 0.12
MF | 0.4875 | 4.8841 | 307.9827 | 12.0675 | 0.3513 | 0.0242 | 0.6331 | 0.04
MTF-GLP | 0.6095 | 3.8790 | 13.1801 | 33.5947 | 0.2700 | 0.0222 | 0.7138 | 0.07
MTF-GLP-HPM | 0.4186 | 5.8181 | 248.6191 | 9.3156 | 0.3600 | 0.0191 | 0.6278 | 0.07
MTF-GLP-HPM-R | 0.7142 | 1.6330 | 50.3957 | 23.2851 | 0.2232 | 0.0199 | 0.7614 | 0.05
MTF-GLP-CBD | 0.6731 | 3.2091 | 10.0218 | 34.8990 | 0.2509 | 0.0202 | 0.7340 | 0.05
MTF-GLP-Reg-FS | 0.6764 | 3.1540 | 9.8490 | 35.0432 | 0.2494 | 0.0200 | 0.7355 | 0.05
TV | 0.5180 | 8.1571 | 25.3791 | 34.6124 | 0.5676 | 0.0063 | 0.4296 | 2.31
RR | 0.6732 | 5.8643 | 13.6862 | 33.4933 | 0.3812 | 0.0069 | 0.6145 | 3.64
FE-HPM | 0.4445 | 5.2907 | 1468.0778 | −0.5753 | 0.3953 | 0.0261 | 0.5890 | 0.19
PWMBF | 0.7728 | 2.0174 | 7.5237 | 38.5400 | 0.3108 | 0.0197 | 0.6756 | 0.28
HSMI | 0.8945 | 0.3708 | 15.4491 | 31.9907 | 0.1830 | 0.0122 | 0.8071 | 0.08
Table 5. Results of the 21-band extension experiment on the HG dataset.

Method | Q2n (RR) | SAM (RR) | ERGAS (RR) | PSNR (RR) | Dλk (FR) | DS (FR) | HQNR (FR) | CT
GT/EXP | 1.0000 | 0.0000 | 0.0000 | – | 0.1549 | 0.0013 | 0.8440 | –
BDSD-PC | 0.7318 | 2.9355 | 1.0535 | 54.7527 | 0.4124 | 0.0025 | 0.5862 | 0.57
MTF-GLP-Reg-FS | 0.6846 | 2.4776 | 7.1824 | 38.9821 | 0.2542 | 0.0237 | 0.7281 | 0.35
RR | 0.5214 | 9.7966 | 15.4098 | 34.6199 | 0.5658 | 0.0122 | 0.4289 | 17.78
HSMI | 0.8669 | 0.5118 | 14.2184 | 37.6199 | 0.1677 | 0.0171 | 0.8180 | 0.63