Search Results (258)

Search Parameters:
Keywords = Pan-sharpening

28 pages, 14783 KB  
Article
HSSTN: A Hybrid Spectral–Structural Transformer Network for High-Fidelity Pansharpening
by Weijie Kang, Yuan Feng, Yao Ding, Hongbo Xiang, Xiaobo Liu and Yaoming Cai
Remote Sens. 2025, 17(19), 3271; https://doi.org/10.3390/rs17193271 - 23 Sep 2025
Abstract
Pansharpening fuses multispectral (MS) and panchromatic (PAN) remote sensing images to generate outputs with high spatial resolution and spectral fidelity. Nevertheless, conventional methods relying primarily on convolutional neural networks or unimodal fusion strategies frequently fail to bridge the sensor modality gap between MS and PAN data. Consequently, spectral distortion and spatial degradation often occur, limiting high-precision downstream applications. To address these issues, this work proposes a Hybrid Spectral–Structural Transformer Network (HSSTN) that enhances multi-level collaboration through comprehensive modelling of spectral–structural feature complementarity. Specifically, the HSSTN implements a three-tier fusion framework. First, an asymmetric dual-stream feature extractor employs a residual block with channel attention (RBCA) in the MS branch to strengthen spectral representation, while a Transformer architecture in the PAN branch extracts high-frequency spatial details, thereby reducing modality discrepancy at the input stage. Subsequently, a target-driven hierarchical fusion network utilises progressive cross-modal attention across scales, ranging from local textures to multi-scale structures, to enable efficient spectral–structural aggregation. Finally, a novel collaborative optimisation loss function preserves spectral integrity while enhancing structural details. Comprehensive experiments conducted on QuickBird, GaoFen-2, and WorldView-3 datasets demonstrate that HSSTN outperforms existing methods in both quantitative metrics and visual quality. Consequently, the resulting images exhibit sharper details and fewer spectral artefacts, showcasing significant advantages in high-fidelity remote sensing image fusion.
(This article belongs to the Special Issue Artificial Intelligence in Hyperspectral Remote Sensing Data Analysis)
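Deep networks such as HSSTN learn what classical pansharpening hard-codes. As a point of reference, a minimal detail-injection baseline can be sketched as follows (this is not the authors' network; the 3×3 box low-pass filter and scalar injection gain are simplifying assumptions):

```python
import numpy as np

def detail_injection(ms_up, pan, g=1.0):
    """Classical detail-injection pansharpening baseline.

    ms_up : (H, W, B) multispectral image, upsampled to PAN resolution
    pan   : (H, W) panchromatic image
    g     : injection gain (scalar; real methods estimate it per band)
    """
    # Low-pass approximation of PAN: a simple 3x3 box filter (assumption;
    # practical methods use a sensor MTF-matched filter).
    k = np.ones((3, 3)) / 9.0
    pad = np.pad(pan, 1, mode="edge")
    low = sum(pad[i:i + pan.shape[0], j:j + pan.shape[1]] * k[i, j]
              for i in range(3) for j in range(3))
    detail = pan - low                    # high-frequency spatial detail
    return ms_up + g * detail[..., None]  # inject into every band
```

Methods like HSSTN replace the fixed filter and gain with learned, attention-driven counterparts.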

22 pages, 3882 KB  
Article
Combining Satellite Image Standardization and Self-Supervised Learning to Improve Building Segmentation Accuracy
by Haoran Zhang and Bunkei Matsushita
Remote Sens. 2025, 17(18), 3182; https://doi.org/10.3390/rs17183182 - 14 Sep 2025
Abstract
Many research fields, such as urban planning, urban climate and environmental assessment, require information on the distribution of buildings. In this study, we used U-Net to segment buildings from WorldView-3 imagery. To improve the accuracy of building segmentation, we undertook two endeavors. First, we investigated the optimal order of atmospheric correction (AC) and panchromatic sharpening (pan-sharpening) and found that performing AC before pan-sharpening results in higher building segmentation accuracy than performing it afterwards, increasing the average IoU by 9.4%. Second, we developed a new multi-task self-supervised learning (SSL) network to pre-train a VGG19 backbone using 21 unlabeled WorldView images. The new multi-task SSL network includes two pretext tasks specifically designed to take into account the characteristics of buildings in satellite imagery (size, distribution pattern, multispectral characteristics, etc.). Performance evaluation shows that U-Net combined with an SSL pre-trained VGG19 backbone improves building segmentation accuracy by 15.3% compared to U-Net combined with a VGG19 backbone trained from scratch. Comparative analysis also shows that the new multi-task SSL network outperforms other existing SSL methods, improving building segmentation accuracy by 3.5–13.7%. Moreover, the proposed method significantly saves computational costs and can effectively work on a personal computer.
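The 9.4% and 15.3% gains above are reported in IoU, which for binary building masks reduces to:

```python
import numpy as np

def iou(pred, truth):
    """Intersection-over-Union between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    # Two empty masks agree perfectly by convention.
    return inter / union if union else 1.0
```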

23 pages, 6105 KB  
Article
YUV Color Model-Based Adaptive Pansharpening with Lanczos Interpolation and Spectral Weights
by Shavkat Fazilov, Ozod Yusupov, Erali Eshonqulov, Khabiba Abdieva and Ziyodullo Malikov
Mathematics 2025, 13(17), 2868; https://doi.org/10.3390/math13172868 - 5 Sep 2025
Abstract
Pansharpening is a method of image fusion that combines a panchromatic (PAN) image with high spatial resolution and multispectral (MS) images, which possess different spectral characteristics and are frequently obtained from satellite sensors. Despite the development of numerous pansharpening methods in recent years, a key challenge remains: maintaining both spatial detail and spectral accuracy in the fused image. To tackle this challenge, we introduce a new approach that enhances the component substitution-based Adaptive IHS method by integrating the YUV color model along with weighting coefficients influenced by the multispectral data. In our proposed approach, the conventional IHS color model is substituted with the YUV model to enhance spectral consistency. Additionally, Lanczos interpolation is used to upscale the MS image to match the spatial resolution of the PAN image. Each channel of the MS image is fused using adaptive weights derived from the influence of multispectral data, leading to the final pansharpened image. Based on the findings from experiments conducted on the PairMax and PanCollection datasets, our proposed method exhibited superior spectral and spatial performance when compared to several existing pansharpening techniques.
(This article belongs to the Special Issue Machine Learning Applications in Image Processing and Computer Vision)
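The core component-substitution step in a YUV-style model can be sketched as follows. This is an illustrative reduction of the paper's pipeline: it assumes an RGB MS image already resampled to PAN resolution and a PAN image histogram-matched to luminance, and it omits the adaptive spectral weights and Lanczos upscaling.

```python
import numpy as np

def yuv_substitute(ms, pan):
    """Substitute the luminance component with the PAN image.

    ms  : (H, W, 3) RGB multispectral image at PAN resolution (assumption)
    pan : (H, W) panchromatic image, histogram-matched to Y (assumption)
    """
    r, g, b = ms[..., 0], ms[..., 1], ms[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance (BT.601 weights)
    u = b - y                              # chrominance differences
    v = r - y
    # Replace luminance with high-resolution PAN, keep chrominance.
    y2 = pan
    r2 = y2 + v
    b2 = y2 + u
    g2 = (y2 - 0.299 * r2 - 0.114 * b2) / 0.587
    return np.stack([r2, g2, b2], axis=-1)
```

When `pan` equals the original luminance, the transform round-trips exactly, which is the spectral-consistency property component substitution relies on.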

25 pages, 5194 KB  
Article
A Graph-Based Superpixel Segmentation Approach Applied to Pansharpening
by Hind Hallabia
Sensors 2025, 25(16), 4992; https://doi.org/10.3390/s25164992 - 12 Aug 2025
Abstract
In this paper, an image-driven regional pansharpening technique based on simplex optimization analysis with a graph-based superpixel segmentation strategy is proposed. This fusion approach optimally combines spatial information derived from a high-resolution panchromatic (PAN) image and spectral information captured from a low-resolution multispectral (MS) image to generate a single comprehensive high-resolution MS image. As the performance of such a fusion method relies on the choice of fusion strategy, and in particular on how the injection gain coefficients are estimated, our proposal computes the injection gains over a graph-driven segmentation map. The graph-based segments are obtained by applying simple linear iterative clustering (SLIC) on the MS image followed by a region adjacency graph (RAG) merging stage. This graphical representation of the segmentation map is used as guidance for the spatial information to be injected during fusion processing. The high-resolution MS image is achieved by inferring the details locally in accordance with the local simplex injection fusion rule. The quality improvements achievable by our proposal are evaluated and validated at reduced and at full scale using two high-resolution datasets collected by the GeoEye-1 and WorldView-3 sensors.
(This article belongs to the Section Sensing and Imaging)
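Region-wise gain estimation over a segmentation map can be sketched as below. This is an illustrative stand-in only: a covariance-ratio gain per segment replaces the paper's simplex-optimized rule, and `regional_gains` is a hypothetical helper, not the authors' code.

```python
import numpy as np

def regional_gains(ms_band, pan, labels):
    """Per-region injection gains for one MS band.

    ms_band, pan : (H, W) arrays at the same resolution
    labels       : (H, W) integer segmentation map (e.g. SLIC + RAG merge)
    Gain per segment = cov(band, pan) / var(pan) inside that segment.
    """
    gains = np.zeros_like(ms_band, dtype=float)
    for lab in np.unique(labels):
        m = labels == lab
        p, b = pan[m], ms_band[m]
        var = p.var()
        cov = ((b - b.mean()) * (p - p.mean())).mean()
        gains[m] = cov / var if var > 0 else 0.0
    return gains
```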

34 pages, 10241 KB  
Review
A Comprehensive Benchmarking Framework for Sentinel-2 Sharpening: Methods, Dataset, and Evaluation Metrics
by Matteo Ciotola, Giuseppe Guarino, Antonio Mazza, Giovanni Poggi and Giuseppe Scarpa
Remote Sens. 2025, 17(12), 1983; https://doi.org/10.3390/rs17121983 - 7 Jun 2025
Abstract
The advancement of super-resolution and sharpening algorithms for satellite images has significantly expanded the potential applications of remote sensing data. In the case of Sentinel-2, despite significant progress, the lack of standardized datasets and evaluation protocols has made it difficult to fairly compare existing methods and advance the state of the art. This work introduces a comprehensive benchmarking framework for Sentinel-2 sharpening, designed to address these challenges and foster future research. It analyzes several state-of-the-art sharpening algorithms, selecting representative methods ranging from traditional pansharpening to ad hoc model-based optimization and deep learning approaches. All selected methods have been re-implemented within a consistent Python (version 3.10) framework and evaluated on a suitably designed, large-scale Sentinel-2 dataset. This dataset features diverse geographical regions, land cover types, and acquisition conditions, ensuring robust training and testing scenarios. The performance of the sharpening methods is assessed using both reference-based and no-reference quality indexes, highlighting strengths, limitations, and open challenges of current state-of-the-art algorithms. The proposed framework, dataset, and evaluation protocols are openly shared with the research community to promote collaboration and reproducibility.
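Among the reference-based quality indexes such benchmarks typically report, the Spectral Angle Mapper (SAM) is representative; a minimal sketch follows (the exact index set used by the framework is not stated here, so SAM serves only as an example):

```python
import numpy as np

def sam(ref, fused, eps=1e-12):
    """Spectral Angle Mapper: mean per-pixel angle (degrees) between
    reference and fused spectral vectors. 0 means identical spectra."""
    ref = ref.reshape(-1, ref.shape[-1]).astype(float)
    fus = fused.reshape(-1, fused.shape[-1]).astype(float)
    num = (ref * fus).sum(axis=1)
    den = np.linalg.norm(ref, axis=1) * np.linalg.norm(fus, axis=1) + eps
    ang = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    return ang.mean()
```

Because SAM measures only angles, it is invariant to per-pixel intensity scaling, which is why benchmarks pair it with radiometric indexes.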

21 pages, 9082 KB  
Article
Multi-Source Pansharpening of Island Sea Areas Based on Hybrid-Scale Regression Optimization
by Dongyang Fu, Jin Ma, Bei Liu and Yan Zhu
Sensors 2025, 25(11), 3530; https://doi.org/10.3390/s25113530 - 4 Jun 2025
Abstract
To address the demand for high spatial resolution data in the water color inversion task of multispectral satellite images in island sea areas, a feasible solution is to process through multi-source remote sensing data fusion methods. However, the inherent biases among multi-source sensors and the spectral distortion caused by the dynamic changes of water bodies in island sea areas restrict the fusion accuracy, necessitating more precise fusion solutions. Therefore, this paper proposes a pansharpening method based on Hybrid-Scale Mutual Information (HSMI). First, the method integrates mixed-scale information into scale regression, effectively enhancing the accuracy and consistency of the panchromatic sharpening results. Second, it introduces mutual information to quantify the spatial–spectral correlation among multi-source data and balance the fusion representation under mixed scales. Finally, the performance of various popular pansharpening methods was compared and analyzed using coupled Sentinel-2 and Sentinel-3 datasets over typical island and reef waters of the South China Sea. The results show that HSMI can enhance the spatial details and edge clarity of islands while better preserving the spectral characteristics of the surrounding sea areas.
(This article belongs to the Section Sensing and Imaging)
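Mutual information between two images is commonly estimated from a joint histogram; a minimal sketch of the quantity HSMI builds on (the fixed bin count and histogram estimator are assumptions, not the paper's exact formulation):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of mutual information (nats) between two images."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()                       # joint distribution
    px = p.sum(axis=1, keepdims=True)     # marginals
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0                            # avoid log(0)
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

For a binary image compared with itself, the estimate equals the entropy log 2, the upper bound for that signal.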

24 pages, 6314 KB  
Article
CDFAN: Cross-Domain Fusion Attention Network for Pansharpening
by Jinting Ding, Honghui Xu and Shengjun Zhou
Entropy 2025, 27(6), 567; https://doi.org/10.3390/e27060567 - 27 May 2025
Abstract
Pansharpening provides a computational solution to the resolution limitations of imaging hardware by enhancing the spatial quality of low-resolution multispectral (LRMS) images using high-resolution panchromatic (PAN) guidance. From an information-theoretic perspective, the task involves maximizing the mutual information between PAN and LRMS inputs while minimizing spectral distortion and redundancy in the fused output. However, traditional spatial-domain methods often fail to preserve high-frequency texture details, leading to entropy degradation in the resulting images. On the other hand, frequency-based approaches struggle to effectively integrate spatial and spectral cues, often neglecting the underlying information content distributions across domains. To address these shortcomings, we introduce a novel architecture, termed the Cross-Domain Fusion Attention Network (CDFAN), specifically designed for the pansharpening task. CDFAN is composed of two core modules: the Multi-Domain Interactive Attention (MDIA) module and the Spatial Multi-Scale Enhancement (SMCE) module. The MDIA module utilizes the discrete wavelet transform (DWT) to decompose the PAN image into frequency sub-bands, which are then employed to construct attention mechanisms across both the wavelet and spatial domains. Specifically, wavelet-domain features are used to formulate query vectors, while key features are derived from the spatial domain, allowing attention weights to be computed over multi-domain representations. This design facilitates more effective fusion of spectral and spatial cues, contributing to superior reconstruction of high-resolution multispectral (HRMS) images. Complementing this, the SMCE module integrates multi-scale convolutional pathways to reinforce spatial detail extraction at varying receptive fields. Additionally, an Expert Feature Compensator is introduced to adaptively balance contributions from different scales, thereby optimizing the trade-off between local detail preservation and global contextual understanding. Comprehensive experiments conducted on standard benchmark datasets demonstrate that CDFAN achieves notable improvements over existing state-of-the-art pansharpening methods, delivering enhanced spectral–spatial fidelity and producing images with higher perceptual quality.
(This article belongs to the Section Signal and Data Analysis)
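The DWT decomposition that the MDIA module starts from can be illustrated with a single Haar level (the abstract does not state which wavelet is used; Haar is assumed here for simplicity):

```python
import numpy as np

def haar_dwt2(x):
    """One level of the 2-D Haar DWT.

    x : (H, W) array with even H and W.
    Returns the (LL, LH, HL, HH) sub-bands: LL is the low-pass
    approximation; the other three carry high-frequency detail.
    """
    a = (x[0::2] + x[1::2]) / 2.0        # row averages
    d = (x[0::2] - x[1::2]) / 2.0        # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh
```

A constant image lands entirely in LL, which is why the detail sub-bands are natural inputs for high-frequency attention.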

21 pages, 7602 KB  
Article
Panchromatic and Hyperspectral Image Fusion Using Ratio Residual Attention Networks
by Fengxiang Xu, Nan Zhang, Zhenxiang Chen, Peiran Peng and Tingfa Xu
Appl. Sci. 2025, 15(11), 5986; https://doi.org/10.3390/app15115986 - 26 May 2025
Abstract
Hyperspectral remote sensing images provide rich spectral information about land surface features and are widely used in fields such as environmental monitoring, disaster assessment, and land classification. However, effectively leveraging the spectral information in hyperspectral images remains a significant challenge. In this paper, we propose a hyperspectral pansharpening method based on ratio transformation and residual networks, which significantly enhances both spatial details and spectral fidelity. The method generates an initial image through ratio transformation and refines it using a residual attention network. Additionally, specialized loss functions are designed to preserve both spatial and spectral details. Experimental results demonstrate that, when evaluated on the EO-1 and Chikusei datasets, the proposed method outperforms other methods in terms of both visual quality and quantitative metrics, particularly in spatial detail clarity and spectral fidelity. This approach effectively addresses the limitations of existing technologies and shows great potential for high-resolution remote sensing image processing applications.
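A classical example of a ratio transformation is the Brovey transform, which scales each band by the PAN-to-intensity ratio; a minimal sketch (illustrative only, not necessarily the paper's exact transform):

```python
import numpy as np

def brovey(ms_up, pan, eps=1e-12):
    """Brovey-style ratio transform.

    ms_up : (H, W, B) bands upsampled to PAN resolution
    pan   : (H, W) panchromatic image
    Each band is scaled by pan / intensity, where intensity is the
    band mean; eps guards against division by zero.
    """
    intensity = ms_up.mean(axis=-1, keepdims=True)
    return ms_up * (pan[..., None] / (intensity + eps))
```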

8 pages, 3697 KB  
Proceeding Paper
Pansharpening Remote Sensing Images Using Generative Adversarial Networks
by Bo-Hsien Chung, Jui-Hsiang Jung, Yih-Shyh Chiou, Mu-Jan Shih and Fuan Tsai
Eng. Proc. 2025, 92(1), 32; https://doi.org/10.3390/engproc2025092032 - 28 Apr 2025
Abstract
Pansharpening is a remote sensing image fusion technique that combines a high-resolution (HR) panchromatic (PAN) image with a low-resolution (LR) multispectral (MS) image to produce an HR MS image. The primary challenge in pansharpening lies in preserving the spatial details of the PAN image while maintaining the spectral integrity of the MS image. To address this, this article presents a generative adversarial network (GAN)-based approach to pansharpening. The GAN discriminator facilitated matching the generated image’s intensity to the HR PAN image and preserving the spectral characteristics of the LR MS image. The performance in generating images was evaluated using the peak signal-to-noise ratio (PSNR). For the experiment, original LR MS and HR PAN satellite images were partitioned into smaller patches, and the GAN model was validated using an 80:20 training-to-testing data ratio. The results illustrated that the super-resolution images generated by the SRGAN model achieved a PSNR of 31 dB. These results demonstrated the developed model’s ability to reconstruct the geometric, textural, and spectral information from the images.
(This article belongs to the Proceedings of 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering)
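PSNR, the figure of merit behind the reported 31 dB, is computed from the mean squared error against a reference:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB (peak assumes 8-bit data)."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```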

18 pages, 3766 KB  
Article
Self-Supervised Multiscale Contrastive and Attention-Guided Gradient Projection Network for Pansharpening
by Qingping Li, Xiaomin Yang, Bingru Li and Jin Wang
Sensors 2025, 25(8), 2560; https://doi.org/10.3390/s25082560 - 18 Apr 2025
Abstract
Pansharpening techniques are crucial in remote sensing image processing, with deep learning emerging as the mainstream solution. In this paper, the pansharpening problem is formulated as two optimization subproblems with a solution proposed based on multiscale contrastive learning combined with attention-guided gradient projection networks. First, an efficient and generalized Spectral–Spatial Universal Module (SSUM) is designed and applied to spectral and spatial enhancement modules (SpeEB and SpaEB). Then, the multiscale high-frequency features of PAN and MS images are extracted using discrete wavelet transform (DWT). These features are combined with contrastive learning and residual connection to progressively balance spectral and spatial information. Finally, high-resolution multispectral images are generated through multiple iterations. Experimental results verify that the proposed method outperforms existing approaches in both visual quality and quantitative evaluation metrics.
(This article belongs to the Section Sensor Networks)

25 pages, 10869 KB  
Article
Pansharpening Applications in Ecological and Environmental Monitoring Using an Attention Mechanism-Based Dual-Stream Cross-Modality Fusion Network
by Bingru Li, Qingping Li, Haoran Yang and Xiaomin Yang
Appl. Sci. 2025, 15(8), 4095; https://doi.org/10.3390/app15084095 - 8 Apr 2025
Abstract
Pansharpening is a critical technique in remote sensing, particularly in ecological and environmental monitoring, where it is used to integrate panchromatic (PAN) and multispectral (MS) images. This technique plays a vital role in assessing environmental changes, monitoring biodiversity, and supporting conservation efforts. While many current pansharpening methods primarily rely on PAN images, they often overlook the distinct characteristics of MS images and the cross-modal relationships between them. To address this limitation, the paper presents a Dual-Stream Cross-modality Fusion Network (DCMFN), designed to offer reliable data support for environmental impact assessment, ecological monitoring, and material optimization in nanotechnology. The proposed network utilizes an attention mechanism to extract features from both PAN and MS images individually. Additionally, a Cross-Modality Feature Fusion Module (CMFFM) is introduced to capture the complex interrelationships between PAN and MS images, enhancing the reconstruction quality of pansharpened images. This method not only boosts the spatial resolution but also maintains the richness of multispectral information. Through extensive experiments, the DCMFN demonstrates superior performance over existing methods on three remote sensing datasets, excelling in both objective evaluation metrics and visual quality.
(This article belongs to the Special Issue Applications of Big Data and Artificial Intelligence in Geoscience)

13 pages, 11855 KB  
Article
SSA-GAN: Singular Spectrum Analysis-Enhanced Generative Adversarial Network for Multispectral Pansharpening
by Lanfa Liu, Jinian Zhang, Baitao Zhou, Peilun Lyu and Zhanchuan Cai
Mathematics 2025, 13(5), 745; https://doi.org/10.3390/math13050745 - 25 Feb 2025
Abstract
Pansharpening is essential for remote sensing applications requiring high spatial and spectral resolution. In this paper, we propose a novel Singular Spectrum Analysis-Enhanced Generative Adversarial Network (SSA-GAN) for multispectral pansharpening. We designed SSA modules within the generator, enabling more effective extraction and utilization of spectral features. Additionally, we introduce Pareto optimization to the nonreference loss function to improve the overall performance. We conducted comparative experiments on two representative datasets, QuickBird and Gaofen-2 (GF-2). On the GF-2 dataset, the Peak Signal-to-Noise Ratio (PSNR) reached 30.045 and Quality with No Reference (QNR) achieved 0.920, while on the QuickBird dataset, PSNR and QNR were 24.262 and 0.817, respectively. These results indicate that the proposed method can generate high-quality pansharpened images with enhanced spatial and spectral resolution.
(This article belongs to the Special Issue Advanced Mathematical Methods in Remote Sensing)
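Singular Spectrum Analysis decomposes a 1-D signal (e.g. one pixel's band profile) via a Hankel embedding and SVD; the following is a minimal sketch of the decomposition that SSA modules draw on (the window length and the per-pixel framing are assumptions, not the paper's design):

```python
import numpy as np

def ssa_components(series, L=8):
    """SSA of a 1-D signal: Hankel embedding, SVD, and per-component
    reconstruction by anti-diagonal averaging. Summing the returned
    components recovers the original series exactly."""
    N = len(series)
    K = N - L + 1
    # L x K trajectory (Hankel) matrix: X[l, k] = series[l + k]
    X = np.column_stack([series[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])        # rank-1 piece
        # Average each anti-diagonal (entries with l + k = n) back to a series
        rec = np.array([np.mean(Xi[::-1].diagonal(n - (L - 1)))
                        for n in range(N)])
        comps.append(rec)
    return np.array(comps)
```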

20 pages, 2403 KB  
Article
A Novel Dual-Branch Pansharpening Network with High-Frequency Component Enhancement and Multi-Scale Skip Connection
by Wei Huang, Yanyan Liu, Le Sun, Qiqiang Chen and Lu Gao
Remote Sens. 2025, 17(5), 776; https://doi.org/10.3390/rs17050776 - 23 Feb 2025
Abstract
In recent years, pansharpening methods based on deep learning have shown great advantages. However, these methods are still inadequate in considering the differences and correlations between multispectral (MS) and panchromatic (PAN) images. In response to this issue, we propose a novel dual-branch pansharpening network with high-frequency component enhancement and a multi-scale skip connection. First, to enhance the correlations, the high-frequency branch consists of the high-frequency component enhancement module (HFCEM), which effectively enhances the high-frequency components through the multi-scale block (MSB), thereby obtaining the corresponding high-frequency weights to accurately capture the high-frequency information in MS and PAN images. Second, to address the differences, the low-frequency branch consists of the multi-scale skip connection module (MSSCM), which comprehensively captures multi-scale features from coarse to fine through multi-scale convolution and effectively fuses these multilevel features through the designed skip connection mechanism to fully extract the low-frequency information from MS and PAN images. Finally, qualitative and quantitative experiments are performed on the GaoFen-2, QuickBird, and WorldView-3 datasets. The results show that the proposed method outperforms state-of-the-art pansharpening methods.

30 pages, 82967 KB  
Article
Pansharpening Techniques: Optimizing the Loss Function for Convolutional Neural Networks
by Rocco Restaino
Remote Sens. 2025, 17(1), 16; https://doi.org/10.3390/rs17010016 - 25 Dec 2024
Abstract
Pansharpening is a traditional image fusion problem where the reference image (or ground truth) is not accessible. Machine-learning-based algorithms designed for this task require an extensive optimization phase of network parameters, which must be performed using unsupervised learning techniques. The learning phase can either rely on a companion problem where ground truth is available, such as by reproducing the task at a lower scale or using a pretext task, or it can use a reference-free cost function. This study focuses on the latter approach, where performance depends not only on the accuracy of the quality measure but also on the mathematical properties of these measures, which may introduce challenges related to computational complexity and optimization. The evaluation of the most recognized no-reference image quality measures led to the proposal of a novel criterion, the Regression-based QNR (RQNR), which has not been previously used. To mitigate computational challenges, an approximate version of the relevant indices was employed, simplifying the optimization of the cost functions. The effectiveness of the proposed cost functions was validated through the reduced-resolution assessment protocol applied to a public dataset (PairMax) containing images of diverse regions of the Earth’s surface.

22 pages, 18328 KB  
Article
A Three-Branch Pansharpening Network Based on Spatial and Frequency Domain Interaction
by Xincan Wen, Hongbing Ma and Liangliang Li
Remote Sens. 2025, 17(1), 13; https://doi.org/10.3390/rs17010013 - 24 Dec 2024
Abstract
Pansharpening technology plays a crucial role in remote sensing image processing by integrating low-resolution multispectral (LRMS) images and high-resolution panchromatic (PAN) images to generate high-resolution multispectral (HRMS) images. This process addresses the limitations of satellite sensors, which cannot directly capture HRMS images. Despite significant developments achieved by deep learning-based pansharpening methods over traditional approaches, most existing techniques either fail to account for the modal differences between LRMS and PAN images, relying on direct concatenation, or use similar network structures to extract spectral and spatial information. Additionally, many methods neglect the extraction of common features between LRMS and PAN images and lack network architectures specifically designed to extract spectral features. To address these limitations, this study proposed a novel three-branch pansharpening network that leverages both spatial and frequency domain interactions, resulting in improved spectral and spatial fidelity in the fusion outputs. The proposed method was validated on three datasets, including IKONOS, WorldView-3 (WV3), and WorldView-4 (WV4). The results demonstrate that the proposed method surpasses several leading techniques, achieving superior performance in both visual quality and quantitative metrics.
