Article

Fast Full-Resolution Target-Adaptive CNN-Based Pansharpening Framework

Matteo Ciotola and Giuseppe Scarpa
1 Department of Electrical Engineering and Information Technology (DIETI), University Federico II, 80125 Naples, Italy
2 Department of Engineering, University Parthenope, Centro Direzionale ISOLA C4, 80133 Naples, Italy
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(2), 319; https://doi.org/10.3390/rs15020319
Submission received: 24 November 2022 / Revised: 23 December 2022 / Accepted: 3 January 2023 / Published: 5 January 2023
(This article belongs to the Special Issue Pansharpening and Beyond in the Deep Learning Era)

Abstract

In the last few years, there has been renewed interest in data fusion techniques and, in particular, in pansharpening, due to a paradigm shift from model-based to data-driven approaches supported by recent advances in deep learning. Although a plethora of convolutional neural networks (CNNs) for pansharpening have been devised, some fundamental issues still await answers. Among these, cross-scale and cross-dataset generalization capabilities are probably the most urgent, since most current networks are trained at a different (reduced-resolution) scale and, in general, are well fitted to some datasets but fail on others. A recent attempt to address both issues leverages a target-adaptive inference scheme operating with a suitable full-resolution loss. On the downside, such an approach pays an additional computational overhead due to the adaptation phase. In this work, we propose a variant of this method with an effective target-adaptation scheme that reduces inference time by a factor of ten, on average, without loss of accuracy. A wide set of experiments carried out on three different datasets, GeoEye-1, WorldView-2 and WorldView-3, proves the computational gain obtained while keeping top accuracy scores compared to state-of-the-art methods, both model-based and deep learning ones. The generality of the proposed solution has also been validated by applying the new adaptation framework to different CNN models.

1. Introduction

Due to the increasing number of remote sensing satellites and to renewed data sharing policies, e.g., the European Space Agency (ESA) Copernicus program, the remote sensing community calls for new data fusion techniques for such diverse applications as cross-sensor [1,2,3,4], cross-resolution [5,6,7,8] or cross-temporal [4,9,10] ones, for analysis, information extraction or synthesis tasks. In this work, we target the pansharpening of remotely sensed images, which amounts to fusing a single high-resolution panchromatic (PAN) band with a set of low-resolution multispectral (MS) bands to provide a high-resolution MS image.
A recent survey [11] gathered the available solutions into four categories: component substitution (CS) [12,13,14,15], multiresolution analysis (MRA) [16,17,18,19], variational optimization (VO) [20,21,22,23], and machine/deep learning (ML) [24,25,26,27,28]. In the CS approach, the multispectral image is transformed into a suitable domain, one of its components is replaced by the spatially rich PAN, and the image is transformed back into the original domain. For example, in the simple case where only three spectral bands are concerned, the Intensity-Hue-Saturation (IHS) transform can be used for this purpose. The same method has been straightforwardly generalized to a larger number of bands in [13]. Other examples of this approach, to mention a few, are whitening [12], Brovey [14] and the Gram–Schmidt decomposition [15]. In the MRA approach, instead, pansharpening is addressed by resorting to a multiresolution decomposition, such as decimated or undecimated wavelet transforms [16,18] and Laplacian pyramids [17,19], for proper extraction of the detail component to be injected into the resized multispectral component. VO approaches leverage suitable acquisition or representation models to define a target function to optimize. This can involve the degradation filters mapping high-resolution to low-resolution images [22], sparse representation of the injected details [29], probabilistic models [21] and low-rank PAN-MS representations [23]. Needless to say, the paradigm shift from model-based to ML approaches registered in the last decade has also heavily impacted such diverse remote sensing image processing problems as classification, detection, denoising, data fusion and so forth. In particular, the first pansharpening convolutional neural network (PNN) was introduced by Masi et al. (2016) [24], and it was rapidly followed by many other works [25,27,28,30,31,32].
It seems safe to say that deep learning is currently the most popular approach to pansharpening. Nonetheless, it suffers from a major problem: the lack of ground truth data for supervised training. Indeed, multiresolution sensors can only provide the original MS-PAN data, downgraded in space or spectrum, never their high-resolution versions, which remain to be estimated. The solution to this problem introduced in [24], and still adopted by many others, consists of a resolution shift. The resolution of the PAN-MS data is properly downgraded by a factor equal to the PAN-MS resolution ratio in order to obtain input data whose ground truth (GT) is given by the original MS. Any network can therefore be trained in a fully supervised manner, although in a lower-resolution domain, and then be used on full-resolution images at inference time. The resolution downgrade paradigm is not new, as it stems from Wald’s protocol [33], a procedure employed in the context of pansharpening quality assessment, and presents two main drawbacks:
i. It requires knowledge of the point spread function (also referred to as the sensor Modulation Transfer Function, MTF, in the pansharpening context), which characterizes the imaging system and must be applied before decimation to obtain the reduced-resolution dataset;
ii. It relies on a sort of scale-invariance assumption (a method optimized at reduced resolution is expected to work equally well at full resolution).
In particular, the latter limitation has recently motivated several studies aimed at circumventing the resolution downgrade [31,32,34,35]. These approaches resort to losses that do not require any GT, being oriented to consistency rather than to synthesis assessment. During training, the full-resolution samples feed the network, whose output is then compared to the two input components, MS and PAN, once suitably reprojected into the respective domains. The way such reprojections are realized, in combination with the measurement employed, i.e., the consistency check, has been the object of intense research since Wald’s seminal paper [33] and is still an open problem. In addition, a critical issue is also represented by the lack of publicly available datasets that are sufficiently large and representative to ensure the generality of the trained networks. A solution to this, based on the target-adaptivity principle, was proposed in [36] and later adopted in [34,35] as well. On the downside, target-adaptive models pay a computational overhead at inference time, which increases when operating at full resolution, as occurs in [35].
Motivated by the above considerations, following the research line drawn in [35], which combines full-resolution training and target-adaptivity, in this work, we introduce a new target-adaptive scheme, which allows reducing the computational overhead while preserving the accuracy of the pansharpened products. Experiments carried out on GeoEye-1, WorldView-2 and WorldView-3 images demonstrate the effectiveness of the proposed solution, achieving computational gains of about one order of magnitude (∼10 times faster on average) for fixed accuracy levels.
In the next Section 2, we describe the related work and the proposed solution. Then, we provide experimental results in Section 3 and a related discussion in Section 4, before providing conclusions in Section 5.

2. Materials and Methods

Inspired by the recent paper [35], we propose here a new method that reduces the computational burden at inference time when using target-adaptivity. The adaptation overhead, in fact, may not be negligible even when relatively shallow networks such as [24,25,36,37,38] are used. Like [35], we also propose a framework rather than a single model. In particular, we use the “zoom” (Z) versions [35] of A-PNN [36], PanNet [25] and DRPNN [30] as baselines, which will be referred to as Z-PNN, Z-PanNet and Z-DRPNN, respectively. However, for the sake of simplicity, the proposed solution will be presented with respect to a fixed model, Z-PNN, without loss of generality.

2.1. Revisiting Z-PNN

The method proposed in [35], called Zoom Pansharpening Neural Network (Z-PNN), inherits the target-adaptive paradigm introduced in [36], but it is based on a new loss conceived to use full-resolution images without ground truths, hence allowing fully unsupervised training and tuning. Figure 1 shows the top-level flowchart of the full-resolution target-adaptive inference scheme. The target-adaptive modality involves two stages: adaptation and prediction. The former is an iterative tuning phase in which the whole PAN-MS input pair feeds the pre-trained network with initial parameters $\phi^{(0)}$. At each iteration $t$, the spatial and spectral consistencies of the output are quantified, through a suitable composite loss that uses the input PAN and MS as references, respectively, to provide updated parameters $\phi^{(t)}$. After a user-defined number of iterations, the network parameters are frozen and used for a last run of the network that provides the final pansharpened image. The default number of iterations is fixed at 100.
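To fix ideas, the adaptation-plus-prediction procedure of Figure 1 can be summarized by the PyTorch-style sketch below. This is only an illustrative reading of the scheme: the names (model, full_loss) and the optimizer settings are placeholders of ours and are not taken from the released Z-PNN code; the loss full_loss is detailed next (Equations (1)-(4)).

import torch

def target_adaptive_inference(model, pan, ms, full_loss, n_iter=100, lr=1e-5):
    # model    : pre-trained pansharpening CNN (parameters phi^(0)); interface assumed
    # pan, ms  : full-resolution panchromatic and multispectral tensors
    # full_loss: unsupervised full-resolution loss, L = L_lambda + beta * L_S
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(n_iter):                 # adaptation stage: iterative parameter updates
        optimizer.zero_grad()
        fused = model(pan, ms)              # current pansharpened estimate
        loss = full_loss(fused, pan, ms)    # spectral + spatial consistency
        loss.backward()
        optimizer.step()
    model.eval()
    with torch.no_grad():                   # prediction stage: parameters frozen
        return model(pan, ms)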
In more detail, let $M$ and $P$ be the MS and PAN input components, respectively, $\hat{M}$ the pansharpened image, and $D$ a suitable scaling operator. The overall loss is given by
$$\mathcal{L}(M,P;\hat{M}) \;=\; \mathcal{L}_{\lambda}\big(M;D(\hat{M})\big) \;+\; \beta\,\mathcal{L}_{S}\big(P;\hat{M}\big), \qquad (1)$$
where the two loss terms are weighted by $\beta$ to account for both spectral and spatial consistency. The spectral loss $\mathcal{L}_{\lambda}$ is given by
$$\mathcal{L}_{\lambda} \;=\; \big\| D(\hat{M}) - M \big\|_{1}, \qquad (2)$$
where $\|\cdot\|_{1}$ indicates the $\ell_{1}$-norm, and the low-resolution projection operator $D$ consists of a band-dependent low-pass filtering followed by spatial decimation at pace $R=4$:
$$D\big(M(\cdot,\cdot,b)\big) \;=\; \big(M(\cdot,\cdot,b) \ast h_{b}\big)\!\downarrow_{R}, \qquad (3)$$
where $h_{b}$ is a suitable band-dependent point spread function.
On the other hand, the spatial loss term $\mathcal{L}_{S}$ is given by
$$\mathcal{L}_{S}(P;\hat{M}) \;=\; 1 - \mathop{\mathrm{E}}_{i,j,b}\Big[\min\big(\rho^{\max}_{i,j,b},\; \rho^{\sigma}_{i,j,b}(\hat{M};P)\big)\Big], \qquad (4)$$
where $\rho^{\sigma}_{i,j,b}(\hat{M};P)$ is the correlation coefficient, computed in a $\sigma\times\sigma$ window centered at location $(i,j)$, between the $b$-th band of $\hat{M}$ and $P$, and $\rho^{\max}_{i,j,b}$ is an upper-bound correlation field estimated between a smoothed version of $P$ and the 23-tap polynomial interpolation of $M$. By doing so, image locations where the correlation coefficient between $\hat{M}$ and $P$ has already reached the corresponding bound do not contribute to the gradient computation. Experimental evidence suggested setting $\sigma = R = 4$.
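For illustration only, the composite loss of Equations (1)-(4) could be rendered in PyTorch roughly as follows. The band-wise MTF kernels ($h_b$) and the precomputed upper-bound correlation field rho_max are assumed to be supplied by the caller, and all helper names are ours, not the authors' implementation.

import torch
import torch.nn.functional as F

def local_corr(x, p, win=4, eps=1e-8):
    # Local correlation between each band of x (N,B,H,W) and the PAN p (N,1,H,W),
    # computed over win x win sliding windows from local first/second moments.
    p = p.expand_as(x)
    mu = lambda t: F.avg_pool2d(t, win, stride=1)
    mx, mp = mu(x), mu(p)
    cov = mu(x * p) - mx * mp
    vx = (mu(x * x) - mx ** 2).clamp(min=0)
    vp = (mu(p * p) - mp ** 2).clamp(min=0)
    return cov / (vx.sqrt() * vp.sqrt() + eps)

def full_loss(fused, pan, ms, mtf_kernels, rho_max, beta=1.0, R=4):
    # Eqs. (2)-(3): band-dependent low-pass filtering (depthwise conv with the
    # MTF-matched kernels, shape (B,1,k,k)) followed by decimation by R, then L1.
    B, k = fused.shape[1], mtf_kernels.shape[-1]
    lowpass = F.conv2d(fused, mtf_kernels, padding=k // 2, groups=B)
    spectral = (lowpass[..., ::R, ::R] - ms).abs().mean()
    # Eq. (4): clip the local correlation at the precomputed bound rho_max
    # (same shape as the correlation field) so saturated locations give no gradient.
    rho = local_corr(fused, pan, win=R)
    spatial = 1.0 - torch.minimum(rho_max, rho).mean()
    return spectral + beta * spatial        # Eq. (1)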

2.2. Proposed Fast CNN-Based Target-Adaptation Framework for Pansharpening

The core CNN architecture of Z-PNN is the A-PNN model summarized in Figure 2. It is relatively shallow in order to limit the computational overhead due to the adaptation phase (Figure 1). Nonetheless, such an overhead remains a serious issue that may prevent the use of this approach in critical situations where large-scale target images are concerned. As an example, on a commercial GPU such as the NVIDIA Quadro P6000, a 2048 × 2048 image (at PAN resolution) requires about a second per tuning iteration, which adds up to a couple of minutes when one hundred adaptation iterations are run. Such timing scales roughly linearly with the image size as long as the GPU memory is capable of hosting the whole process, but once this capacity is exceeded, the computational burden grows much more quickly. Motivated by this consideration, we have explored a modified adaptation protocol, still operating at full resolution and valid for generic pansharpening networks. The basic conjecture from which we started is that the required adaptation is mainly due to the different atmospheric and geometrical conditions that can cause a mismatch between the training and test datasets, whereas the intra-image spatial variability matters much less.
Atmospheric and/or geometrical mismatches between training and test datasets can easily occur. Fog, pollution and daylight conditions are typical examples of atmospheric parameters that can heavily impact image features such as contrast and intensity. Figure 3a shows two details of GeoEye-1 images with similar content (forest) but taken from different acquisitions. The two details show different contrast and brightness levels, mostly because of different lighting conditions. On the other hand, geometry (think of the satellite viewing angle) also plays an important role. In fact, scenes acquired at nadir present geometries that differ from those visible off-nadir. The orientation of urban settlements with respect to the satellite orbit can also have some relevance. Figure 3b shows another pair of crops, containing buildings, taken from WorldView-3 images acquired with different viewing angles. The buildings clearly show different skews. It goes without saying that data augmentation techniques can only partially address these kinds of data issues.
On the basis of the above considerations, it makes sense to run the tuning iterations on partial tiles of the target image. In particular, we start from a small (128 × 128) central crop of the image and run half of the preset total number of iterations (fixed to 256). Once this first burst of 128 iterations is completed, the crop size is doubled in each direction (keeping the central positioning), while a halved number of iterations (64) is queued to the tuning process. The process continues with this rule until the current crop covers the whole image. Depending on the image size, a residual number of iterations is needed to reach the total of 256; this residue, if any, is completed on the crop corresponding to the whole image. This rule is summarized in Table 1.
Besides this proposed method, hereinafter referred to as Fast Z-<Network Name>, we also test a lighter version, namely Faster Z-<Network Name>, which applies an early stop to the tuning, skipping the heaviest iterations that would involve the whole image (bold numbers in Table 1 correspond to the last tuning phase of the Faster version). In particular, for the sample network of Figure 2, the two versions will be called Fast Z-PNN and Faster Z-PNN, respectively. In addition, the “Z” versions of PanNet [25] and DRPNN [30], i.e., with full-resolution target-adaptation and loss (1), will also be tested.
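As a concrete illustration of the rule summarized in Table 1, together with the early stop used by the Faster variant, the schedule can be generated as in the following Python sketch (the function name and interface are ours and do not belong to the released framework):

def adaptation_schedule(height, width, total_iters=256, start_size=128, faster=False):
    # Yield (crop, n_iters) pairs: centred crops whose size doubles while the
    # iteration budget halves, until the crop covers the whole image; residual
    # iterations are spent on the full image.  With faster=True the phase
    # involving the whole image is skipped (early stop).
    size, iters, used = start_size, total_iters // 2, 0
    while True:
        h, w = min(size, height), min(size, width)
        top, left = (height - h) // 2, (width - w) // 2
        full_image = (h == height and w == width)
        if full_image and faster:
            break
        n = (total_iters - used) if full_image else iters
        yield (slice(top, top + h), slice(left, left + w)), n
        used += n
        if full_image:
            break
        size, iters = size * 2, max(iters // 2, 1)

# Example, 2048 x 2048 image: 128 iterations on the 128^2 crop, 64 on 256^2,
# 32 on 512^2, 16 on 1024^2 and the remaining 16 (8 planned + 8 residue) on the
# whole image, for a total of 256 iterations.
for crop, n in adaptation_schedule(2048, 2048):
    print(crop, n)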

3. Results

To prove the effectiveness of the proposed adaptation schedule, we carried out experiments on 2048 × 2048 WorldView-3, WorldView-2 and GeoEye-1 images, which are parts of larger tiles covering the cities of Adelaide (courtesy of DigitalGlobe ©), Washington (courtesy of DigitalGlobe ©) and Genoa (courtesy of DigitalGlobe ©, provided by European Space Imaging), respectively. The image size (a power of 2) simplifies the analysis of the proposed solution but by no means limits the validation of the basic idea. For each of the three cities/sensors, we had four images available: three for validation and one for testing, as summarized in Table 2.
An RGB version of the MS component for each test image is shown in Figure 4, Figure 5 and Figure 6, respectively. Overlaid on the images are the crops involved in the several adaptation phases and some crops (A, B, C) that will be later recalled for the purpose of visual inspection of the results.
Table 3 summarizes the distribution of the number of iterations planned for each crop size and the corresponding computational time, per iteration and per phase. As can be seen, the adaptation time for the proposed solution (consider Fast Z-PNN on WV-∗ for concreteness) was about 12 s, against 40 or 100 s for the baseline scheme when using 100 (default choice) or 256 iterations, respectively. Moreover, it is worth noticing that most of the time (6.42 s) was spent by the last iterations on the full image. Similar considerations apply to GeoEye-1, as well as to the other two models. In particular, notice that for Z-DRPNN all time scores scale up, since this model is heavier (more parameters) than Z-PNN and Z-PanNet.
Actually, a more careful inspection of Table 3 shows that the time per iteration tends to increase by a factor of about 4 when moving from one crop size to the next, at least for the larger crops. In fact, Fast Z-PNN obtained the following time multipliers on WV-∗: 1.17 (128² → 256²), 3.14, 3.51 and 3.84 (1024² → 2048²). This should not be surprising, since the crop area increases by a factor of 4 when its linear dimensions are doubled, and because the computational time on parallel computing units, such as GPUs, does not always scale linearly with the image size, particularly when the input images are relatively small or so large as to cause memory swaps. Assuming a linear regime where the iteration cost grows linearly with the image size (area), and considering that the number of iterations halves from one phase to the next, the time consumption per phase eventually doubles when moving from one phase to the next. Asymptotically, each new phase takes a time comparable to the time accumulated by all previous phases. Consequently, by skipping the last phase, one would save approximately half of the computational burden, paying something in terms of accuracy.
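The same argument can be stated compactly. Denoting by $A_k$, $N_k$ and $t_k$ the crop area, the number of iterations and the total time of the $k$-th tuning phase, under the idealized assumption that the per-iteration cost is proportional to the crop area we have
$$A_{k+1} = 4A_k, \qquad N_{k+1} = \tfrac{1}{2}N_k, \qquad t_k \propto N_k A_k \;\;\Longrightarrow\;\; t_{k+1} = 2\,t_k,$$
so that
$$\sum_{k=1}^{K-1} t_k \;=\; t_1\big(2^{K-1}-1\big) \;\approx\; t_K,$$
that is, the final (whole-image) phase costs roughly as much as all previous phases combined, and skipping it saves about half of the adaptation time.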
Based on the above considerations, we also tested the lighter configuration, “Faster”, of the proposed method, where we skipped the last tuning phase with an early stop, as indicated in Table 1.
The experimental evaluation was split into two parts. On one side, the proposed Fast and Faster variants were compared to the corresponding baselines (Z-∗) directly in terms of the loss achieved during the target-adaptation phase. This was done separately for the spectral and spatial loss components, using the validation images only. The results of this analysis for the whole validation dataset are summarized in Table 4 and Table 5, while Figure 7 displays the loss curves for a sample image. Moreover, Figure 8 compares the different target-adaptive schemes for Z-PNN using some sample visual results.
On the other hand, the test images were used for a comparative quality assessment in terms of both pansharpening numerical indexes and subjective visual inspection. For a more robust quality evaluation, we resorted to the pansharpening benchmark toolbox [11], which provides the implementation of several state-of-the-art methods and quality indexes, e.g., the spectral $D_{\lambda}^{(K)}$ and spatial $D_{S}$ consistency indexes. We integrated the benchmark with the Machine Learning toolbox proposed in [39], which provides additional CNN-based methods. Furthermore, the evaluation was carried out using the additional indexes $D_{\rho}$, R-SAM, R-ERGAS and R-$Q2^{n}$, recently proposed in [40]. All comparative methods are summarized in Table 6. The results are gathered in Table 7, Table 8 and Table 9, whereas sample visual results are given in Figure 9, Figure 10 and Figure 11. A deeper discussion of these results is left to Section 4.

4. Discussion

Let us now analyze in depth the provided results, starting from a comparison with the baseline models Z-∗.
In Figure 7, we show the loss decay during the adaptation phase, separately for the spectral (top) and spatial (bottom) terms, as a function of the running time rather than the iteration count. This experiment refers to a WorldView-3 image processed by Z-PNN models, but similar behaviors have been observed for the other images, regardless of the employed base model. The loss terms for the baseline (Z-PNN) and the proposed Fast Z-PNN are plotted in green and blue, respectively. Although the loss is a per-pixel loss (spatial average), so that these curves are dimensionally consistent, the latter refers to the current crop while the former is an average over the whole image. For this reason, for the proposed (Fast/Faster) solution, we also computed the value of the loss on the whole image at each iteration (red dashed line). Surprisingly, the “global” loss tightly follows the “local” (crop-wise) loss, showing an even more regular decay thanks to the wider averaging. These plots clearly show the considerable computational gain achievable for any fixed target loss level. It is worth noticing, for example, that the loss levels achieved by the proposed Fast scheme in 12 s (256 iterations) are reached by the baseline approach in about 90 s (∼200 iterations on the full image). Moreover, the Faster version reaches almost the same loss levels as the Fast version in about 5.7 s.
In Table 4, for each validation image, we quantify the computational gains achieved by Faster and Fast Z-PNN in terms of time consumption. For each image, we report, separately for the spectral (top table) and spatial (bottom table) loss components, the value without adaptation ($L^{0}$) and those achieved by the Faster ($L^{\mathrm{Faster}}$) and Fast ($L^{\mathrm{Fast}}$) adaptation schemes (these values refer to the loss assessed on the whole image, not to the one computed on the crops for gradient descent), whose run times ($T^{\mathrm{Faster}}$ and $T^{\mathrm{Fast}}$) do not depend on the specific image (they drop when working with GeoEye-1 instead of WV-∗ because of the smaller number of spectral bands). Then, we report the times $T_{Z}^{\mathrm{Faster}}$ and $T_{Z}^{\mathrm{Fast}}$ needed by Z-PNN to reach the loss levels $L^{\mathrm{Faster}}$ and $L^{\mathrm{Fast}}$, respectively. We can fairly read $T_{Z}^{\mathrm{Faster}}$ and $T_{Z}^{\mathrm{Fast}}$ as the times needed by Z-PNN to achieve the same spectral (top table) or spatial (bottom table) target levels as its faster versions. Consequently, $T_{Z}^{\mathrm{Faster}}/T^{\mathrm{Faster}}$ and $T_{Z}^{\mathrm{Fast}}/T^{\mathrm{Fast}}$ represent the computational gains of the proposed solutions. Similar results, not shown for brevity, have been registered for Z-PanNet and Z-DRPNN. It is worth noticing that, whereas for the spectral loss the gain is always (well) larger than 1, for the spatial component there are cases where the loss is already quite low before adaptation (see $L_{S}^{0}$ in Table 4); in such cases only the spectral loss decays, while the spatial one either remains constant or even grows a little (see bold numbers in Table 4). The gains cannot be computed there, but it makes sense to focus on the spectral behavior only, as no adaptation is needed on the spatial side.
In general, from all the experiments we carried out, it emerges that the adaptation is mostly needed to reduce the spectral distortion rather than the spatial one. This is a particular feature of the Z-∗ framework, which leverages a spatial consistency loss based on correlation (Equation (4)) that shows a quite robust behavior. For the above reasons, it makes sense to focus on the spectral loss only when checking the quality alignment between Z-∗ and the proposed variants. We can therefore look at Table 5, which provides the gains for all models, averaged by sensor, limited to the spectral part. Overall, it can be observed that the Fast models obtain gains ranging from 4.03 to 15.6 (9.6 on average), whereas the Faster models provide gains between 6.33 and 26.8 (17.9 on average), at the price of a slight increase in the loss components (compare the loss levels in Table 4).
Let us now move to the analysis of the results obtained on the test images for a comparison with state-of-the-art solutions. Starting from the numerical results gathered in Table 7 (WV-3), Table 8 (WV-2) and Table 9 (GE-1), we can observe that the most important achievement is that the proposed Faster and Fast solutions are exactly where they are expected to be, i.e., almost always between Z-∗ (100 it.) and Z-∗ (256 it.), coherently with the loss levels shown in Figure 7 and Table 4, with few exceptions. In some cases, the proposals behave even better than the baseline (e.g., Fast Z-PNN against Z-PNN on GeoEye-1, Table 9). Moreover, it is worth noticing that Faster Z-∗ tightly follows Fast Z-∗ and Z-∗ (256 it.), confirming our initial guess that the need for tuning is mostly due to geometric or atmospheric misalignments between the training and testing datasets rather than to ground content mismatches.
Concerning the overall comparison, it must be underlined that all indexes are used on the full-resolution image without any ground truth. They assess consistency rather than synthesis properties, and each reflects some arbitrary assumption. Both $D_{S}$ and $D_{\rho}$ deal with spatial consistency, while the remaining ones relate to spectral consistency. While the latter group shows a good level of agreement, the former looks much less correlated. For a deeper discussion of this issue, which is somewhat out of scope here, the reader is referred to [35]. In addition, the goal of the present contribution is efficiency rather than accuracy; therefore, we leave these results to the reader without further discussion.
Besides the numerical assessment, visual inspection of the results is a fundamental complementary check to gain insight into the behavior of the compared methods. Let us first analyze the impact of the tuning phase by comparing the pretrained model Z-∗ (0 it.) with the proposed target-adaptive solutions, in both the Faster and Fast versions. Figure 8 shows a few crops from the WV-3 Adelaide image (Figure 4), with the related pansharpened images obtained with the Z-PNN variants. Remarkable differences can be noticed between the pretrained model and the two target-adaptive options. The reader may notice the visible artifacts introduced by the pretrained model, occurring, for example, in the pool of crop A or on several building roofs (spotted artifacts), which are removed by the proposed solutions thanks to the tuning. On the other hand, there is no noticeable difference between the proposed Faster and Fast configurations. It is also worth noticing that this alignment between the Fast and Faster configurations holds indistinguishably on crop A (always involved in the tuning) and on crops B and C, which are not involved in the tuning process in the Faster version (see Figure 4). Similar considerations apply to the experiments, not shown for brevity, carried out on the other datasets and/or using different baseline models.
Figure 9, Figure 10 and Figure 11 show, again on some selected crops, the pansharpening results obtained by the proposed solutions and by the best-performing among the comparative methods listed in Table 6, for the WorldView-3, WorldView-2 and GeoEye-1 images, respectively. In particular, these results confirm the most relevant observation that Fast Z-PNN, Faster Z-PNN and the baseline Z-PNN are nearly indistinguishable, in line with the numerical results discussed above, further underlining that the registered computational gain has been achieved without sacrificing accuracy. Regarding the comparison with the other methods, from the visual point of view the proposed solutions are aligned with the best ones, with no appreciable spectral or spatial distortion and a very high contrast level.
Finally, in order to further prove the robustness of the proposed approach, we ran a cross-image experiment. Taking two different WV-3 sample images with sufficiently different content (see Figure 12), we ran Fast Z-PNN on the first one (a), which serves as the target image. On the same image, we also tested Z-PNN without adaptation (0 it.) and the model adapted on image (b) using the Fast configuration. The numerical results, shown in (c), provide further confirmation that, in the tuning phase, the actual content of the image has a minor impact on pansharpening compared to acquisition geometry and atmospheric conditions.

5. Conclusions

In this work, we have presented a new target-adaptive framework for CNN pansharpening. A full version, named Fast, and a quicker configuration, named Faster, have been tested on the recently proposed unsupervised full-resolution models Z-PNN, Z-PanNet and Z-DRPNN [35]. The results clearly show that the inference time is reduced by a factor of 10 or more, roughly doubled for the Faster version, without sacrificing accuracy. Such a gain has been achieved thanks to the assumption, experimentally validated here, that generalization problems occur when training and test datasets come from different acquisitions, hence because of geometrical and/or atmospheric changes, whereas different areas of the same image can be considered sufficiently “homogeneous” from the learning point of view. Starting from this consideration, the proposed target-adaptive scheme does not use the full test image for its unsupervised adaptation from the beginning; rather, starting from a central portion, it progressively includes all the data in the process. Indeed, the Faster version only uses 25% of the image, obtaining nearly the same quality level.
Future work may explore different spatial sampling strategies, such as adaptive or dynamic selection of the tuning crops, in order to further improve the robustness and generality of the proposed framework.

Author Contributions

Conceptualization, G.S. and M.C.; methodology, G.S. and M.C.; software, M.C.; validation, G.S. and M.C.; formal analysis, G.S. and M.C.; investigation, G.S. and M.C.; resources, G.S. and M.C.; data curation, M.C.; writing—original draft preparation, G.S.; writing—review and editing, G.S. and M.C.; visualization, M.C.; supervision, G.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank DigitalGlobe © and ESA for providing sample images for research purposes.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Moser, G.; Serpico, S. Generalized minimum-error thresholding for unsupervised change detection from SAR amplitude imagery. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2972–2982. [Google Scholar] [CrossRef]
  2. Chen, Y.; Bruzzone, L. Self-Supervised SAR-Optical Data Fusion of Sentinel-1/-2 Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5406011. [Google Scholar] [CrossRef]
  3. Errico, A.; Angelino, C.V.; Cicala, L.; Podobinski, D.P.; Persechino, G.; Ferrara, C.; Lega, M.; Vallario, A.; Parente, C.; Masi, G.; et al. SAR/multispectral image fusion for the detection of environmental hazards with a GIS. In Proceedings of the SPIE—The International Society for Optical Engineering, Amsterdam, The Netherlands, 22–25 September 2014; Volume 9245. [Google Scholar]
  4. Scarpa, G.; Gargiulo, M.; Mazza, A.; Gaetano, R. A CNN-Based Fusion Method for Feature Extraction from Sentinel Data. Remote Sens. 2018, 10, 236. [Google Scholar] [CrossRef] [Green Version]
  5. Vivone, G.; Alparone, L.; Chanussot, J.; Mura, M.D.; Garzelli, A.; Licciardi, G.A.; Restaino, R.; Wald, L. A Critical Comparison Among Pansharpening Algorithms. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2565–2586. [Google Scholar] [CrossRef]
  6. Gargiulo, M.; Mazza, A.; Gaetano, R.; Ruello, G.; Scarpa, G. A CNN-Based Fusion Method for Super-Resolution of Sentinel-2 Data. In Proceedings of the IGARSS 2018–2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 4713–4716. [Google Scholar] [CrossRef]
  7. Ciotola, M.; Ragosta, M.; Poggi, G.; Scarpa, G. A full-resolution training framework for Sentinel-2 image fusion. In Proceedings of the IGARSS 2021, Brussels, Belgium, 11–16 July 2021; pp. 1–4. [Google Scholar]
  8. Ciotola, M.; Martinelli, A.; Mazza, A.; Scarpa, G. An Adversarial Training Framework for Sentinel-2 Image Super-Resolution. In Proceedings of the IGARSS 2022—2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; pp. 3782–3785. [Google Scholar] [CrossRef]
  9. Peng, D.; Bruzzone, L.; Zhang, Y.; Guan, H.; Ding, H.; Huang, X. SemiCDNet: A Semisupervised Convolutional Neural Network for Change Detection in High Resolution Remote-Sensing Images. IEEE Trans. Geosci. Remote Sens. 2021, 59, 5891–5906. [Google Scholar] [CrossRef]
  10. Gaetano, R.; Amitrano, D.; Masi, G.; Poggi, G.; Ruello, G.; Verdoliva, L.; Scarpa, G. Exploration of multitemporal COSMO-skymed data via interactive tree-structured MRF segmentation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2763–2775. [Google Scholar] [CrossRef]
  11. Vivone, G.; Dalla Mura, M.; Garzelli, A.; Restaino, R.; Scarpa, G.; Ulfarsson, M.O.; Alparone, L.; Chanussot, J. A New Benchmark Based on Recent Advances in Multispectral Pansharpening: Revisiting Pansharpening With Classical and Emerging Pansharpening Methods. IEEE Geosci. Remote Sens. Mag. 2021, 9, 53–81. [Google Scholar] [CrossRef]
  12. Chavez, P.S.; Kwarteng, A.W. Extracting spectral contrast in Landsat thematic mapper image data using selective principal component analysis. Photogramm. Eng. Remote Sens. 1989, 55, 339–348. [Google Scholar]
  13. Tu, T.M.; Huang, P.S.; Hung, C.L.; Chang, C.P. A fast intensity hue-saturation fusion technique with spectral adjustment for IKONOS imagery. IEEE Geosci. Remote Sens. Lett. 2004, 1, 309–312. [Google Scholar] [CrossRef]
  14. Gillespie, A.R.; Kahle, A.B.; Walker, R.E. Color enhancement of highly correlated images. II. Channel ratio and “chromaticity” transformation techniques. Remote Sens. Environ. 1987, 22, 343–365. [Google Scholar] [CrossRef]
  15. Laben, C.A.; Brower, B.V. Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-Sharpening. U.S. Patent 6011875, 4 January 2000. [Google Scholar]
  16. Ranchin, T.; Wald, L. Fusion of high spatial and spectral resolution images: The ARSIS concept and its implementation. Photogramm. Eng. Remote Sens. 2000, 66, 49–61. [Google Scholar]
  17. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A. Context-driven fusion of high spatial and spectral resolution images based on oversampled multiresolution analysis. IEEE Trans. Geosci. Remote Sens. 2002, 40, 2300–2312. [Google Scholar] [CrossRef]
  18. Khan, M.M.; Chanussot, J.; Condat, L.; Montanvert, A. Indusion: Fusion of Multispectral and Panchromatic Images Using the Induction Scaling Technique. IEEE Geosci. Remote Sens. Lett. 2008, 5, 98–102. [Google Scholar] [CrossRef] [Green Version]
  19. Restaino, R.; Mura, M.D.; Vivone, G.; Chanussot, J. Context-Adaptive Pansharpening Based on Image Segmentation. IEEE Trans. Geosci. Remote Sens. 2017, 55, 753–766. [Google Scholar] [CrossRef] [Green Version]
  20. Palsson, F.; Sveinsson, J.R.; Ulfarsson, M.O. A New Pansharpening Algorithm Based on Total Variation. Geosci. Remote Sens. Lett. IEEE 2014, 11, 318–322. [Google Scholar] [CrossRef]
  21. Palsson, F.; Sveinsson, J.R.; Ulfarsson, M.O.; Benediktsson, J.A. Model-Based Fusion of Multi- and Hyperspectral Images Using PCA and Wavelets. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2652–2663. [Google Scholar] [CrossRef]
  22. Vivone, G.; Simões, M.; Dalla Mura, M.; Restaino, R.; Bioucas-Dias, J.M.; Licciardi, G.A.; Chanussot, J. Pansharpening Based on Semiblind Deconvolution. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1997–2010. [Google Scholar] [CrossRef]
  23. Palsson, F.; Ulfarsson, M.O.; Sveinsson, J.R. Model-Based Reduced-Rank Pansharpening. IEEE Geosci. Remote Sens. Lett. 2020, 17, 656–660. [Google Scholar] [CrossRef]
  24. Masi, G.; Cozzolino, D.; Verdoliva, L.; Scarpa, G. Pansharpening by Convolutional Neural Networks. Remote Sens. 2016, 8, 594. [Google Scholar] [CrossRef] [Green Version]
  25. Yang, J.; Fu, X.; Hu, Y.; Huang, Y.; Ding, X.; Paisley, J. PanNet: A Deep Network Architecture for Pan-Sharpening. In Proceedings of the ICCV, Venice, Italy, 22–29 October 2017. [Google Scholar] [CrossRef]
  26. Masi, G.; Cozzolino, D.; Verdoliva, L.; Scarpa, G. CNN-based Pansharpening of Multi-Resolution Remote-Sensing Images. In Proceedings of the Joint Urban Remote Sensing Event, Dubai, United Arab Emirates, 6–8 March 2017. [Google Scholar]
  27. Yuan, Q.; Wei, Y.; Meng, X.; Shen, H.; Zhang, L. A Multiscale and Multidepth Convolutional Neural Network for Remote Sensing Imagery Pan-Sharpening. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 978–989. [Google Scholar] [CrossRef] [Green Version]
  28. Shao, Z.; Cai, J. Remote Sensing Image Fusion With Deep Convolutional Neural Network. IEEE J. Sel. Topics Appl. Earth Observ. 2018, 11, 1656–1669. [Google Scholar] [CrossRef]
  29. Vicinanza, M.R.; Restaino, R.; Vivone, G.; Mura, M.D.; Chanussot, J. A Pansharpening Method Based on the Sparse Representation of Injected Details. IEEE Geosci. Remote Sens. Lett. 2015, 12, 180–184. [Google Scholar] [CrossRef]
  30. Wei, Y.; Yuan, Q.; Shen, H.; Zhang, L. Boosting the Accuracy of Multispectral Image Pansharpening by Learning a Deep Residual Network. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1795–1799. [Google Scholar] [CrossRef] [Green Version]
  31. Luo, S.; Zhou, S.; Feng, Y.; Xie, J. Pansharpening via Unsupervised Convolutional Neural Networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 4295–4310. [Google Scholar] [CrossRef]
  32. Seo, S.; Choi, J.S.; Lee, J.; Kim, H.H.; Seo, D.; Jeong, J.; Kim, M. UPSNet: Unsupervised Pan-Sharpening Network With Registration Learning Between Panchromatic and Multi-Spectral Images. IEEE Access 2020, 8, 201199–201217. [Google Scholar] [CrossRef]
  33. Wald, L.; Ranchin, T.; Mangolini, M. Fusion of satellite images of different spatial resolution: Assessing the quality of resulting images. Photogramm. Eng. Remote Sens. 1997, 63, 691–699. [Google Scholar]
  34. Vitale, S.; Scarpa, G. A Detail-Preserving Cross-Scale Learning Strategy for CNN-Based Pansharpening. Remote Sens. 2020, 12, 348. [Google Scholar] [CrossRef] [Green Version]
  35. Ciotola, M.; Vitale, S.; Mazza, A.; Poggi, G.; Scarpa, G. Pansharpening by convolutional neural networks in the full resolution framework. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5408717. [Google Scholar] [CrossRef]
  36. Scarpa, G.; Vitale, S.; Cozzolino, D. Target-Adaptive CNN-Based Pansharpening. IEEE Trans. Geosci. Remote. Sens. 2018, 56, 5443–5457. [Google Scholar] [CrossRef] [Green Version]
  37. Zhang, Y.; Liu, C.; Sun, M.; Ou, Y. Pan-Sharpening Using an Efficient Bidirectional Pyramid Network. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5549–5563. [Google Scholar] [CrossRef]
  38. Xu, S.; Zhang, J.; Zhao, Z.; Sun, K.; Liu, J.; Zhang, C. Deep Gradient Projection Networks for Pan-sharpening. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 1366–1375. [Google Scholar] [CrossRef]
  39. Deng, L.J.; Vivone, G.; Paoletti, M.E.; Scarpa, G.; He, J.; Zhang, Y.; Chanussot, J.; Plaza, A.J. Machine Learning in Pansharpening: A Benchmark, from Shallow to Deep Networks. IEEE Geosci. Remote Sens. Mag. 2022, 10, 279–315. [Google Scholar] [CrossRef]
  40. Scarpa, G.; Ciotola, M. Full-Resolution Quality Assessment for Pansharpening. Remote Sens. 2022, 14, 1808. [Google Scholar] [CrossRef]
  41. Lolli, S.; Alparone, L.; Garzelli, A.; Vivone, G. Haze Correction for Contrast-Based Multispectral Pansharpening. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2255–2259. [Google Scholar] [CrossRef]
  42. Garzelli, A.; Nencini, F.; Capobianco, L. Optimal MMSE pan sharpening of very high resolution multispectral images. IEEE Trans. Geosci. Remote Sens. 2008, 46, 228–236. [Google Scholar] [CrossRef]
  43. Garzelli, A. Pansharpening of Multispectral Images Based on Nonlocal Parameter Optimization. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2096–2107. [Google Scholar] [CrossRef]
  44. Vivone, G. Robust Band-Dependent Spatial-Detail Approaches for Panchromatic Sharpening. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6421–6433. [Google Scholar] [CrossRef]
  45. Aiazzi, B.; Baronti, S.; Selva, M. Improving component substitution pansharpening through multivariate regression of MS+Pan data. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3230–3239. [Google Scholar] [CrossRef]
  46. Choi, J.; Yu, K.; Kim, Y. A New Adaptive Component-Substitution-Based Satellite Image Fusion by Using Partial Replacement. IEEE Trans. Geosci. Remote Sens. 2011, 49, 295–309. [Google Scholar] [CrossRef]
  47. Otazu, X.; Gonzalez-Audicana, M.; Fors, O.; Nunez, J. Introduction of sensor spectral response into image fusion methods. Application to wavelet-based methods. IEEE Trans. Geosci. Remote Sens. 2005, 43, 2376–2385. [Google Scholar] [CrossRef]
  48. Alparone, L.; Garzelli, A.; Vivone, G. Intersensor Statistical Matching for Pansharpening: Theoretical Issues and Practical Solutions. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4682–4695. [Google Scholar] [CrossRef]
  49. Vivone, G.; Restaino, R.; Chanussot, J. Full Scale Regression-Based Injection Coefficients for Panchromatic Sharpening. IEEE Trans. Image Process. 2018, 27, 3418–3431. [Google Scholar] [CrossRef] [PubMed]
  50. Vivone, G.; Restaino, R.; Chanussot, J. A Regression-Based High-Pass Modulation Pansharpening Approach. IEEE Trans. Geosci. Remote Sens. 2018, 56, 984–996. [Google Scholar] [CrossRef]
  51. Alparone, L.; Wald, L.; Chanussot, J.; Thomas, C.; Gamba, P.; Bruce, L. Comparison of pansharpening algorithms: Outcome of the 2006 GRS-S Data-Fusion Contest. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3012–3021. [Google Scholar] [CrossRef] [Green Version]
  52. Restaino, R.; Vivone, G.; Dalla Mura, M.; Chanussot, J. Fusion of Multispectral and Panchromatic Images Based on Morphological Operators. IEEE Trans. Image Process. 2016, 25, 2882–2895. [Google Scholar] [CrossRef] [Green Version]
  53. He, L.; Rao, Y.; Li, J.; Chanussot, J.; Plaza, A.; Zhu, J.; Li, B. Pansharpening via detail injection based convolutional neural networks. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1188–1204. [Google Scholar] [CrossRef] [Green Version]
  54. Deng, L.J.; Vivone, G.; Jin, C.; Chanussot, J. Detail injection-based deep convolutional neural networks for pansharpening. IEEE Trans. Geosci. Remote. Sens. 2020, 59, 6995–7010. [Google Scholar] [CrossRef]
Figure 1. Target-adaptive inference scheme Z-<Network Name>. Available for networks (A-)PNN, PanNet and DRPNN.
Figure 2. A-PNN model for Z-PNN.
Figure 3. Atmospheric (a) and geometrical (b) mismatch examples.
Figure 4. MS (RGB version) component of the WorldView-3 test image of Adelaide. Red concentric squared boxes indicate the crops used during the adaptation phases (the smaller the box, the earlier the phase). Green boxes (A, B, C) highlight three crops selected for visual inspection purposes.
Figure 5. MS (RGB version) component of the WorldView-2 test image of Washington. Red concentric squared boxes indicate the crops used during the adaptation phases (the smaller the box, the earlier the phase). Green boxes (A, B, C) highlight three crops selected for visual inspection purposes.
Figure 6. MS (RGB version) component of the GeoEye-1 test image of Genoa. Red concentric squared boxes indicate the crops used during the adaptation phases (the smaller the box, the earlier the phase). Green boxes (A, B, C) highlight three crops selected for visual inspection purposes.
Figure 7. Loss decay during the adaptation phase for Z-PNN on a sample WV-3 image: spectral (top) and spatial (bottom) loss terms. The loss of the baseline model is shown in green; the loss of the proposed solution (computed on the current crop) is shown in blue; dashed red lines show the loss of the proposed model during training computed on the whole image.
Figure 8. Pansharpening results without adaptation (0 it.), with partial (Faster) and full (Fast) adaptation according to the proposed scheme. From top to bottom: WV-3 Adelaide crops A, B and C, respectively.
Figure 9. Pansharpening results on the WorldView-3 Adelaide image (crop C).
Figure 10. Pansharpening results on the WorldView-2 Washington image (crop B).
Figure 11. Pansharpening results on the GeoEye-1 Genoa image (crop B).
Figure 12. Cross-image adaptation test. (a) target image; (b) a different image from the same validation dataset (WV-3); (c) numerical results. Fast Z-PNN (b) corresponds to the results on (a) with the model fitted on (b).
Table 1. Proposed target-adaptive iterations distribution. Bold numbers indicate the last phase (early stop) for the lighter version.
Shorter Image Side \ Crop Size | 128² | 256² | 512² | 1024² | 2048² | 4096² | Whole
512–1023 | 128 | 64 | 32 | - | - | - | 32
1024–2047 | 128 | 64 | 32 | 16 | - | - | 16
2048–4095 | 128 | 64 | 32 | 16 | 8 | - | 8
4096–8191 | 128 | 64 | 32 | 16 | 8 | 4 | 4
Phase | 1 | 2 | 3 | 4 | 5 | 6 | (residue)
Table 2. Datasets for validation and test. Adelaide and Washington, courtesy of DigitalGlobe © . Genoa (DigitalGlobe © ) provided by ESA.
Sensor | City | PAN Size | PAN GSD at Nadir | Spectral Bands | Validation Images | Test Images
GeoEye-1 | Genoa | 2048 × 2048 | 0.41 m | 4 | 3 | 1
WorldView-2 | Washington | 2048 × 2048 | 0.46 m | 8 | 3 | 1
WorldView-3 | Adelaide | 2048 × 2048 | 0.31 m | 8 | 3 | 1
Table 3. Number of iterations for each image crop during the adaptation phase with corresponding unitary (Time/iter.) and cumulative (Time/phase) costs. The overall computational time for adaptation over WV-∗ /GE-1 images is about 12/6, 13/8 and 74/60 s for Z-PNN, Z-PanNet and Z-DRPNN, respectively, including an initial overhead of about a second for preliminary operations. Experiments have been run on a NVIDIA A100 GPU.
Fast Z-PNN
Crop size | 128² | 256² | 512² | 1024² | 2048²
Iterations | 128 | 64 | 32 | 16 | 8 + 8
WV-2/-3, Time/iter. [ms] | 8.1 | 9.5 | 29.8 | 104.5 | 401.2
WV-2/-3, Time [s] | 1.04 | 0.61 | 0.95 | 1.67 | 6.42
GE-1, Time/iter. [ms] | 5.1 | 5.3 | 12.8 | 48.6 | 189.0
GE-1, Time [s] | 0.66 | 0.34 | 0.41 | 0.78 | 3.02

Fast Z-PanNet
Crop size | 128² | 256² | 512² | 1024² | 2048²
Iterations | 128 | 64 | 32 | 16 | 8 + 8
WV-2/-3, Time/iter. [ms] | 11.1 | 15.2 | 32.8 | 118.1 | 426.2
WV-2/-3, Time [s] | 1.43 | 0.97 | 1.05 | 1.89 | 6.82
GE-1, Time/iter. [ms] | 12.2 | 13.0 | 13.7 | 52.4 | 210.3
GE-1, Time [s] | 1.56 | 0.83 | 0.43 | 0.84 | 3.36

Fast Z-DRPNN
Crop size | 128² | 256² | 512² | 1024² | 2048²
Iterations | 128 | 64 | 32 | 16 | 8 + 8
WV-2/-3, Time/iter. [ms] | 26.8 | 30.2 | 165.6 | 872.5 | 3060.8
WV-2/-3, Time [s] | 3.42 | 1.93 | 5.30 | 13.96 | 48.97
GE-1, Time/iter. [ms] | 26.3 | 28.8 | 143.9 | 510.4 | 2620.3
GE-1, Time [s] | 3.36 | 1.84 | 4.60 | 8.16 | 41.92
Table 4. Run times for target-adaptation. $T^{\mathrm{Faster}}$ and $T^{\mathrm{Fast}}$ indicate the run times of Faster and Fast Z-PNN, which achieve the loss values $L^{\mathrm{Faster}}$ and $L^{\mathrm{Fast}}$, respectively ($L^{0}$: pretrained model). $T_{Z}^{\mathrm{Faster}}$ and $T_{Z}^{\mathrm{Fast}}$ are the run times needed by Z-PNN to reach the same loss levels achieved by its Faster and Fast versions, respectively. All times are given in seconds. (a) WorldView-∗ case; (b) GeoEye-1. The initialization time (about one second), equal for all, is not counted. Bold numbers indicate cases with non-decreasing spatial loss.
Spectral loss, WorldView-∗ (L^0 at 0 s; L^Faster at T_Faster = 4.3 s; L^Fast at T_Fast = 10.7 s)
Image | L_λ^0 | L_λ^Faster | L_λ^Fast | T_Z^Faster | T_Z^Fast | T_Z^Faster/T_Faster (gain) | T_Z^Fast/T_Fast (gain)
WV-2 #1 | 0.008 | 0.006 | 0.006 | 49.9 | 61.5 | 11.6 | 5.75
WV-2 #2 | 0.009 | 0.006 | 0.005 | 85.2 | 103 | 19.8 | 9.62
WV-2 #3 | 0.013 | 0.009 | 0.009 | 76.4 | 84.8 | 17.8 | 7.93
WV-3 #1 | 0.019 | 0.015 | 0.015 | 79.0 | 87.2 | 18.4 | 8.14
WV-3 #2 | 0.018 | 0.013 | 0.013 | 142 | 171 | 33.1 | 16.2
WV-3 #3 | 0.014 | 0.011 | 0.010 | 124 | 150 | 28.9 | 14.9

Spatial loss, WorldView-∗ (L^0 at 0 s; L^Faster at T_Faster = 4.3 s; L^Fast at T_Fast = 10.7 s)
Image | L_S^0 | L_S^Faster | L_S^Fast | T_Z^Faster | T_Z^Fast | T_Z^Faster/T_Faster (gain) | T_Z^Fast/T_Fast (gain)
WV-2 #1 | 0.044 | 0.046 | 0.047 | n.a. | n.a. | n.a. | n.a.
WV-2 #2 | 0.028 | 0.050 | 0.049 | n.a. | n.a. | n.a. | n.a.
WV-2 #3 | 0.045 | 0.057 | 0.056 | n.a. | n.a. | n.a. | n.a.
WV-3 #1 | 0.154 | 0.099 | 0.097 | 88.0 | 116.3 | 20.5 | 10.9
WV-3 #2 | 0.146 | 0.099 | 0.092 | 47.0 | 76.4 | 10.9 | 7.1
WV-3 #3 | 0.121 | 0.079 | 0.078 | 56.2 | 76.4 | 13.1 | 7.1
(a)

Spectral loss, GeoEye-1 (L^0 at 0 s; L^Faster at T_Faster = 2.9 s; L^Fast at T_Fast = 4.7 s)
Image | L_λ^0 | L_λ^Faster | L_λ^Fast | T_Z^Faster | T_Z^Fast | T_Z^Faster/T_Faster (gain) | T_Z^Fast/T_Fast (gain)
GE-1 #1 | 0.011 | 0.010 | 0.010 | 75.5 | 95.1 | 26.0 | 20.2
GE-1 #2 | 0.016 | 0.013 | 0.013 | 45.5 | 56.4 | 15.7 | 12.0
GE-1 #3 | 0.013 | 0.011 | 0.011 | 57.0 | 68.9 | 19.7 | 14.7

Spatial loss, GeoEye-1 (L^0 at 0 s; L^Faster at T_Faster = 2.9 s; L^Fast at T_Fast = 4.7 s)
Image | L_S^0 | L_S^Faster | L_S^Fast | T_Z^Faster | T_Z^Fast | T_Z^Faster/T_Faster (gain) | T_Z^Fast/T_Fast (gain)
GE-1 #1 | 0.130 | 0.120 | 0.115 | 38.9 | 52.2 | 13.4 | 11.1
GE-1 #2 | 0.137 | 0.135 | 0.131 | 54.0 | 69.4 | 18.6 | 14.8
GE-1 #3 | 0.118 | 0.118 | 0.114 | n.a. | 30.6 | n.a. | 6.6
(b)
Table 5. Average run times for target-adaptation for all involved models, referring to the spectral loss only.
Model | Sensor | T_Z^Faster | T_Z^Fast | T_Z^Faster/T_Faster | T_Z^Fast/T_Fast
Z-PNN | WV-2 | 70.5 | 83.1 | 16.4 | 7.77
Z-PNN | WV-3 | 115 | 136 | 26.8 | 13.1
Z-PNN | GE-1 | 59.3 | 73.5 | 20.5 | 15.6
Z-PanNet | WV-2 | 81.2 | 91.1 | 18.9 | 8.51
Z-PanNet | WV-3 | 97.3 | 111 | 22.6 | 10.4
Z-PanNet | GE-1 | 41.7 | 46.1 | 14.4 | 9.80
Z-DRPNN | WV-2 | 579 | 665 | 19.9 | 9.2
Z-DRPNN | WV-3 | 184 | 292 | 6.33 | 4.03
Z-DRPNN | GE-1 | 221 | 264 | 16.2 | 7.77
Table 6. Detailed list of all reference methods.
Component Substitution (CS)
BT-H [41], BDSD [42], C-BDSD [43], BDSD-PC [44], GS [15], GSA [45], C-GSA [19], PRACS [46]
Multiresolution Analysis (MRA)
AWLP [47], MTF-GLP [48], MTF-GLP-FS [49], MTF-GLP-HPM [48], MTF-GLP-HPM-H [41],
MTF-GLP-HPM-R [50], MTF-GLP-CBD [51], C-MTF-GLP-CBD [19], MF [52]
Variational Optimization (VO)
FE-HPM [22], SR-D [29], TV [20]
Machine Learning (ML)
BDPN [37], DiCNN [53], DRPNN [30], FusionNet [54], MSDCNN [27], PanNet [25],
PNN [24], A-PNN [36], A-PNN-TA [36], Z-PNN [35]
Table 7. Numerical results on the 2048 × 2048 WorldView-3 test image (Adelaide), courtesy of DigitalGlobe © .
Method | D_λ^(K) | R-Q2n | R-SAM | R-ERGAS | D_S | D_ρ
(Ideal) | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
EXP | 0.0449 | 0.9315 | 5.5175 | 4.7394 | 0.1585 | 0.8557
BT-H | 0.1216 | 0.9416 | 5.4830 | 4.3314 | 0.1283 | 0.0702
BDSD | 0.1502 | 0.9259 | 6.4563 | 4.9037 | 0.1143 | 0.0971
C-BDSD | 0.1349 | 0.9247 | 6.6180 | 5.0875 | 0.0775 | 0.1512
BDSD-PC | 0.1484 | 0.9281 | 6.3480 | 4.8047 | 0.1211 | 0.0836
GS | 0.1717 | 0.9165 | 6.0261 | 5.1157 | 0.1097 | 0.0974
GSA | 0.1291 | 0.9433 | 5.9260 | 4.4414 | 0.1133 | 0.0738
C-GSA | 0.1256 | 0.9415 | 6.2609 | 4.5317 | 0.0899 | 0.1748
PRACS | 0.0688 | 0.9454 | 5.4757 | 4.2258 | 0.0665 | 0.1944
AWLP | 0.0320 | 0.9536 | 5.6340 | 3.9604 | 0.0728 | 0.0907
MTF-GLP | 0.0354 | 0.9534 | 5.6007 | 3.9853 | 0.1067 | 0.0723
MTF-GLP-FS | 0.0344 | 0.9509 | 5.5752 | 4.0743 | 0.0973 | 0.1071
MTF-GLP-HPM | 0.0423 | 0.9528 | 5.7651 | 4.0230 | 0.0900 | 0.0817
MTF-GLP-HPM-H | 0.0366 | 0.9530 | 5.6177 | 4.0437 | 0.1163 | 0.0799
MTF-GLP-HPM-R | 0.0362 | 0.9512 | 5.6276 | 4.0770 | 0.0848 | 0.1230
MTF-GLP-CBD | 0.0343 | 0.9506 | 5.5743 | 4.0853 | 0.0954 | 0.1115
C-MTF-GLP-CBD | 0.0348 | 0.9458 | 5.5883 | 4.2801 | 0.0522 | 0.2480
MF | 0.0439 | 0.9588 | 5.5940 | 3.7625 | 0.0819 | 0.0972
FE-HPM | 0.0344 | 0.9515 | 5.6433 | 4.1464 | 0.0810 | 0.0983
SR-D | 0.0183 | 0.9466 | 5.6796 | 4.4005 | 0.0264 | 0.2801
TV | 0.0373 | 0.9699 | 4.2437 | 3.2592 | 0.0529 | 0.2285
BDPN | 0.2144 | 0.8822 | 6.1821 | 5.8480 | 0.1064 | 0.1882
DiCNN | 0.1862 | 0.8909 | 6.4608 | 5.9225 | 0.0684 | 0.4236
DRPNN | 0.2015 | 0.8880 | 6.1327 | 5.7324 | 0.0867 | 0.2117
FusionNet | 0.1056 | 0.9322 | 5.7242 | 4.7337 | 0.0583 | 0.4142
MSDCNN | 0.2059 | 0.8691 | 6.3453 | 5.9449 | 0.1045 | 0.2288
PanNet | 0.0352 | 0.9493 | 5.4097 | 4.1781 | 0.0364 | 0.3466
PNN | 0.1839 | 0.8861 | 7.7183 | 7.0513 | 0.0523 | 0.4755
A-PNN | 0.0736 | 0.9429 | 5.4923 | 4.3273 | 0.0545 | 0.5640
A-PNN-FT | 0.0543 | 0.9488 | 5.3547 | 4.1822 | 0.0318 | 0.3154
Z-DRPNN (0 it.) | 0.0742 | 0.9451 | 7.0813 | 4.7282 | 0.0949 | 0.0543
Z-DRPNN (100 it.) | 0.0316 | 0.9463 | 5.7685 | 4.3278 | 0.0601 | 0.0753
Z-DRPNN (256 it.) | 0.0307 | 0.9467 | 5.7051 | 4.3388 | 0.0625 | 0.0797
Faster Z-DRPNN (early stop) | 0.0278 | 0.9466 | 5.7127 | 4.3310 | 0.0700 | 0.0788
Fast Z-DRPNN | 0.0270 | 0.9470 | 5.6767 | 4.3086 | 0.0670 | 0.0777
Z-PanNet (0 it.) | 0.0378 | 0.9490 | 5.5755 | 4.1746 | 0.0731 | 0.1634
Z-PanNet (100 it.) | 0.0391 | 0.9514 | 5.6135 | 4.0910 | 0.0911 | 0.1288
Z-PanNet (256 it.) | 0.0376 | 0.9508 | 5.6596 | 4.1202 | 0.0915 | 0.1059
Faster Z-PanNet (early stop) | 0.0370 | 0.9499 | 5.6696 | 4.1502 | 0.0895 | 0.1111
Fast Z-PanNet | 0.0370 | 0.9501 | 5.6698 | 4.1443 | 0.0898 | 0.1087
Z-PNN (0 it.) | 0.0859 | 0.9385 | 6.2438 | 4.3860 | 0.0789 | 0.1634
Z-PNN (100 it.) | 0.0599 | 0.9466 | 5.9572 | 4.1688 | 0.0922 | 0.1048
Z-PNN (256 it.) | 0.0458 | 0.9494 | 5.7066 | 4.1088 | 0.0941 | 0.0959
Faster Z-PNN (early stop) | 0.0496 | 0.9476 | 5.7986 | 4.1680 | 0.0943 | 0.0985
Fast Z-PNN | 0.0497 | 0.9475 | 5.8013 | 4.1693 | 0.0943 | 0.0985
Table 8. Numerical results on the 2048 × 2048 WorldView-2 test image (Washington), courtesy of DigitalGlobe © .
Method | D_λ^(K) | R-Q2n | R-SAM | R-ERGAS | D_S | D_ρ
(Ideal) | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
EXP | 0.0316 | 0.9633 | 2.7902 | 2.5396 | 0.2066 | 0.7918
BT-H | 0.0897 | 0.9623 | 2.8052 | 2.4942 | 0.1278 | 0.0670
BDSD | 0.1651 | 0.9229 | 4.1417 | 3.5450 | 0.0938 | 0.1198
C-BDSD | 0.1590 | 0.9223 | 4.1914 | 3.7182 | 0.0738 | 0.1594
BDSD-PC | 0.1335 | 0.9425 | 3.2018 | 3.0935 | 0.1293 | 0.0698
GS | 0.1207 | 0.9454 | 3.3927 | 3.0398 | 0.1151 | 0.0797
GSA | 0.0860 | 0.9651 | 3.0587 | 2.5272 | 0.1037 | 0.0657
C-GSA | 0.0871 | 0.9646 | 3.0732 | 2.5413 | 0.1106 | 0.0715
PRACS | 0.0489 | 0.9676 | 2.8387 | 2.3887 | 0.0827 | 0.1766
AWLP | 0.0192 | 0.9740 | 2.8182 | 2.1870 | 0.0668 | 0.0967
MTF-GLP | 0.0226 | 0.9739 | 2.7728 | 2.1925 | 0.0748 | 0.0790
MTF-GLP-FS | 0.0218 | 0.9734 | 2.7755 | 2.2084 | 0.0695 | 0.0983
MTF-GLP-HPM | 0.0232 | 0.9739 | 2.7936 | 2.1962 | 0.0728 | 0.0778
MTF-GLP-HPM-H | 0.0254 | 0.9731 | 2.8329 | 2.2148 | 0.0904 | 0.0868
MTF-GLP-HPM-R | 0.0222 | 0.9734 | 2.8222 | 2.2180 | 0.0651 | 0.1057
MTF-GLP-CBD | 0.0217 | 0.9733 | 2.7761 | 2.2104 | 0.0685 | 0.1003
C-MTF-GLP-CBD | 0.0212 | 0.9722 | 2.7745 | 2.2544 | 0.0582 | 0.1404
MF | 0.0306 | 0.9782 | 2.7859 | 2.0214 | 0.0520 | 0.0954
FE-HPM | 0.0236 | 0.9737 | 2.8319 | 2.2381 | 0.0583 | 0.0929
SR-D | 0.0104 | 0.9720 | 2.7168 | 2.3090 | 0.0329 | 0.2567
TV | 0.0336 | 0.9877 | 1.7697 | 1.4760 | 0.0475 | 0.2043
BDPN | 0.0913 | 0.9567 | 3.0843 | 2.6526 | 0.0863 | 0.1388
DiCNN | 0.0806 | 0.9603 | 2.8935 | 2.5543 | 0.0446 | 0.2330
DRPNN | 0.0620 | 0.9670 | 3.0265 | 2.3686 | 0.0611 | 0.1604
FusionNet | 0.0546 | 0.9689 | 2.6794 | 2.2979 | 0.0461 | 0.2105
MSDCNN | 0.0709 | 0.9633 | 3.1426 | 2.4472 | 0.0718 | 0.1415
PanNet | 0.0224 | 0.9733 | 2.7059 | 2.2169 | 0.0402 | 0.2714
PNN | 0.0504 | 0.9666 | 2.9938 | 2.4126 | 0.0740 | 0.1839
A-PNN | 0.0374 | 0.9704 | 2.6580 | 2.3055 | 0.0662 | 0.2806
A-PNN-FT | 0.0386 | 0.9704 | 2.6773 | 2.2939 | 0.0511 | 0.1846
Z-DRPNN (0 it.) | 0.1241 | 0.9437 | 7.2789 | 4.5430 | 0.0616 | 0.0558
Z-DRPNN (100 it.) | 0.0283 | 0.9723 | 3.0126 | 2.2586 | 0.0413 | 0.0510
Z-DRPNN (256 it.) | 0.0231 | 0.9729 | 2.8822 | 2.2338 | 0.0484 | 0.0544
Faster Z-DRPNN (early stop) | 0.0208 | 0.9723 | 2.9352 | 2.2711 | 0.0498 | 0.0613
Fast Z-DRPNN | 0.0204 | 0.9728 | 2.8707 | 2.2428 | 0.0510 | 0.0596
Z-PanNet (0 it.) | 0.0273 | 0.9758 | 2.8358 | 2.1133 | 0.0424 | 0.1628
Z-PanNet (100 it.) | 0.0300 | 0.9759 | 2.8275 | 2.1095 | 0.0471 | 0.1404
Z-PanNet (256 it.) | 0.0297 | 0.9755 | 2.8169 | 2.1246 | 0.0493 | 0.1177
Faster Z-PanNet (early stop) | 0.0309 | 0.9757 | 2.8136 | 2.1069 | 0.0513 | 0.1187
Fast Z-PanNet | 0.0306 | 0.9757 | 2.8161 | 2.1095 | 0.0508 | 0.1167
Z-PNN (0 it.) | 0.0738 | 0.9603 | 3.3773 | 2.5244 | 0.1101 | 0.0367
Z-PNN (100 it.) | 0.0437 | 0.9689 | 3.1370 | 2.3212 | 0.0892 | 0.0398
Z-PNN (256 it.) | 0.0361 | 0.9713 | 2.9781 | 2.2576 | 0.0828 | 0.0437
Faster Z-PNN (early stop) | 0.0384 | 0.9699 | 3.0618 | 2.3141 | 0.0845 | 0.0432
Fast Z-PNN | 0.0398 | 0.9699 | 3.0705 | 2.3012 | 0.0869 | 0.0419
Table 9. Numerical results on the 2048 × 2048 GeoEye-1 test image (Genoa), © DigitalGlobe, Inc. (2018), provided by European Space Agency.
Method | D_λ^(K) | R-Q2n | R-SAM | R-ERGAS | D_S | D_ρ
(Ideal) | 0.0000 | 1.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000
EXP | 0.1321 | 0.8515 | 4.1849 | 6.0561 | 0.1103 | 0.8244
BT-H | 0.3213 | 0.8181 | 6.1603 | 6.0892 | 0.0703 | 0.0818
BDSD | 0.2807 | 0.8470 | 4.5871 | 5.7298 | 0.1283 | 0.0749
C-BDSD | 0.2472 | 0.8505 | 4.7650 | 5.7536 | 0.1065 | 0.1032
BDSD-PC | 0.2845 | 0.8450 | 4.5667 | 5.7369 | 0.1305 | 0.0742
GS | 0.2955 | 0.8262 | 4.6896 | 5.8848 | 0.1274 | 0.0859
GSA | 0.2979 | 0.8213 | 5.9619 | 5.7529 | 0.1086 | 0.0525
C-GSA | 0.2897 | 0.8309 | 6.1918 | 5.7073 | 0.0965 | 0.1087
PRACS | 0.1718 | 0.8657 | 4.4010 | 5.5338 | 0.0453 | 0.2564
AWLP | 0.1069 | 0.8862 | 4.1538 | 5.2522 | 0.0799 | 0.1090
MTF-GLP | 0.1104 | 0.8924 | 4.0992 | 5.0684 | 0.0815 | 0.0913
MTF-GLP-FS | 0.1106 | 0.8887 | 4.0977 | 5.1601 | 0.0855 | 0.1079
MTF-GLP-HPM | 0.1135 | 0.8944 | 4.0271 | 5.0204 | 0.0712 | 0.0876
MTF-GLP-HPM-H | 0.1162 | 0.8952 | 3.9776 | 5.0131 | 0.0468 | 0.0910
MTF-GLP-HPM-R | 0.1109 | 0.8903 | 4.0932 | 5.1229 | 0.0776 | 0.1141
MTF-GLP-CBD | 0.1108 | 0.8875 | 4.0960 | 5.1896 | 0.0857 | 0.1129
C-MTF-GLP-CBD | 0.1175 | 0.8743 | 4.1409 | 5.5269 | 0.0343 | 0.3105
MF | 0.1240 | 0.8948 | 4.0178 | 4.8697 | 0.0795 | 0.1031
FE-HPM | 0.1175 | 0.8943 | 4.0406 | 4.9831 | 0.0849 | 0.0996
SR-D | 0.0697 | 0.8965 | 4.2034 | 5.3477 | 0.0143 | 0.3989
TV | 0.1643 | 0.8907 | 3.6706 | 4.9564 | 0.0392 | 0.7326
PNN | 0.1126 | 0.8834 | 3.6556 | 5.3172 | 0.0199 | 0.4615
A-PNN | 0.1026 | 0.8841 | 3.7173 | 5.3493 | 0.0371 | 0.5225
A-PNN-FT | 0.1161 | 0.8782 | 3.7671 | 5.4407 | 0.0167 | 0.3543
Z-DRPNN (0 it.) | 0.1210 | 0.8903 | 3.8684 | 5.0001 | 0.0355 | 0.1370
Z-DRPNN (100 it.) | 0.0786 | 0.9022 | 3.7652 | 5.1102 | 0.0838 | 0.1319
Z-DRPNN (256 it.) | 0.0771 | 0.9041 | 3.8007 | 5.1487 | 0.1031 | 0.1323
Faster Z-DRPNN (early stop) | 0.0965 | 0.8939 | 3.8258 | 5.1537 | 0.0986 | 0.1376
Fast Z-DRPNN | 0.1036 | 0.8909 | 3.8484 | 5.1589 | 0.0968 | 0.1369
Z-PanNet (0 it.) | 0.1196 | 0.8917 | 4.0194 | 5.1717 | 0.1020 | 0.1224
Z-PanNet (100 it.) | 0.1174 | 0.8933 | 3.9482 | 5.0904 | 0.0732 | 0.1122
Z-PanNet (256 it.) | 0.1094 | 0.8938 | 3.9999 | 5.0956 | 0.0448 | 0.1040
Faster Z-PanNet (early stop) | 0.1051 | 0.8933 | 4.1054 | 5.1078 | 0.0372 | 0.1105
Fast Z-PanNet | 0.1006 | 0.8939 | 4.0830 | 5.0895 | 0.0346 | 0.1100
Z-PNN (0 it.) | 0.1710 | 0.8804 | 4.0835 | 5.1055 | 0.0991 | 0.1494
Z-PNN (100 it.) | 0.1482 | 0.8855 | 3.9509 | 5.0134 | 0.0913 | 0.1360
Z-PNN (256 it.) | 0.1454 | 0.8860 | 3.9298 | 5.0046 | 0.0829 | 0.1209
Faster Z-PNN (early stop) | 0.1339 | 0.8922 | 3.9097 | 4.9436 | 0.0787 | 0.1427
Fast Z-PNN | 0.1373 | 0.8896 | 3.9117 | 4.9869 | 0.0777 | 0.1389
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
