Search Results (128)

Search Parameters:
Keywords = panchromatic (PAN) image

20 pages, 2848 KiB  
Article
A Dual-Branch Network for Intra-Class Diversity Extraction in Panchromatic and Multispectral Classification
by Zihan Huang, Pengyu Tian, Hao Zhu, Pute Guo and Xiaotong Li
Remote Sens. 2025, 17(12), 1998; https://doi.org/10.3390/rs17121998 - 10 Jun 2025
Viewed by 363
Abstract
With the rapid development of remote sensing technology, satellites can now capture multispectral (MS) and panchromatic (PAN) images simultaneously. MS images offer rich spectral detail, while PAN images provide high spatial resolution. Effectively leveraging their complementary strengths and bridging the modality gap are key challenges in improving classification performance. This paper proposes a novel deep learning framework for dual-source remote sensing classification, the Diversity Extraction and Fusion Classifier (DEFC-Net). A central innovation of the method is a modality-specific intra-class diversity modeling mechanism, introduced for the first time in dual-source classification. Specifically, the intra-class diversity identification and splitting (IDIS) module independently analyzes the intra-class variance within each modality to identify semantically broad classes and applies an optimized K-means method to split such classes into fine-grained sub-classes. Because the MS and PAN modalities differ in their inherent representations, the same class may be split differently in each modality, enabling modality-aware class refinement that better captures fine-grained discriminative features from both perspectives. To handle the class imbalance introduced by natural long-tailed distributions and by class splitting, we design a long-tailed ensemble learning module (LELM) based on a multi-expert structure to reduce bias toward head classes. Furthermore, a dual-modal knowledge distillation (DKD) module aligns cross-modal feature spaces and reconciles the label inconsistency arising from modality-specific class splitting, thereby enabling effective cross-modal information fusion. Extensive experiments show that our method significantly improves classification performance.
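The variance-test-then-split idea behind IDIS can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the plain mean-variance test, and the tiny deterministic K-means (farthest-point initialization) are all assumptions made for the sketch.

```python
import numpy as np

def _kmeans(X, k=2, iters=20):
    # Farthest-point initialisation keeps this tiny sketch deterministic.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.stack(centers)
    for _ in range(iters):
        assign = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=-1), axis=1)
        centers = np.stack([X[assign == j].mean(axis=0) if np.any(assign == j)
                            else centers[j] for j in range(k)])
    return assign

def split_broad_classes(features, labels, var_threshold=1.0, k=2):
    """Split every class whose mean per-feature variance exceeds the threshold
    into k sub-classes; compact classes are left untouched."""
    new_labels = labels.copy()
    next_label = labels.max() + 1
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        X = features[idx]
        if X.var(axis=0).mean() <= var_threshold:
            continue  # intra-class variance is small: not a "broad" class
        assign = _kmeans(X, k)
        for j in range(1, k):  # sub-cluster 0 keeps the original label
            new_labels[idx[assign == j]] = next_label
            next_label += 1
    return new_labels
```

Run on a toy set where class 0 is bimodal and class 1 is compact, only class 0 is split, which mirrors the modality-aware refinement described above.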

24 pages, 6314 KiB  
Article
CDFAN: Cross-Domain Fusion Attention Network for Pansharpening
by Jinting Ding, Honghui Xu and Shengjun Zhou
Entropy 2025, 27(6), 567; https://doi.org/10.3390/e27060567 - 27 May 2025
Viewed by 490
Abstract
Pansharpening offers a computational remedy for the resolution limits of imaging hardware by enhancing the spatial quality of low-resolution multispectral (LRMS) images under high-resolution panchromatic (PAN) guidance. From an information-theoretic perspective, the task involves maximizing the mutual information between the PAN and LRMS inputs while minimizing spectral distortion and redundancy in the fused output. Traditional spatial-domain methods, however, often fail to preserve high-frequency texture details, leading to entropy degradation in the resulting images, while frequency-based approaches struggle to integrate spatial and spectral cues effectively, often neglecting how information content is distributed across domains. To address these shortcomings, we introduce the Cross-Domain Fusion Attention Network (CDFAN), an architecture designed specifically for pansharpening. CDFAN comprises two core modules: the Multi-Domain Interactive Attention (MDIA) module and the Spatial Multi-Scale Enhancement (SMCE) module. The MDIA module applies the discrete wavelet transform (DWT) to decompose the PAN image into frequency sub-bands, which are then used to construct attention across both the wavelet and spatial domains: wavelet-domain features form the query vectors while key features are derived from the spatial domain, so attention weights are computed over multi-domain representations. This design enables more effective fusion of spectral and spatial cues and contributes to superior reconstruction of high-resolution multispectral (HRMS) images. Complementing this, the SMCE module integrates multi-scale convolutional pathways to reinforce spatial detail extraction at varying receptive fields. Additionally, an Expert Feature Compensator adaptively balances the contributions of different scales, optimizing the trade-off between local detail preservation and global contextual understanding. Comprehensive experiments on standard benchmark datasets demonstrate that CDFAN achieves notable improvements over state-of-the-art pansharpening methods, delivering enhanced spectral–spatial fidelity and higher perceptual quality.
(This article belongs to the Section Signal and Data Analysis)
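The sub-band decomposition the MDIA module starts from can be shown with a one-level 2-D Haar transform, the simplest DWT. This is a generic sketch of the transform itself (the paper does not specify the wavelet; the attention construction is omitted):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform returning the (LL, LH, HL, HH) sub-bands,
    i.e. the kind of frequency decomposition applied to the PAN image."""
    a = img[0::2, 0::2]  # top-left of each 2x2 block
    b = img[0::2, 1::2]  # top-right
    c = img[1::2, 0::2]  # bottom-left
    d = img[1::2, 1::2]  # bottom-right
    LL = (a + b + c + d) / 2.0  # approximation (low-pass)
    LH = (a + b - c - d) / 2.0  # horizontal detail
    HL = (a - b + c - d) / 2.0  # vertical detail
    HH = (a - b - c + d) / 2.0  # diagonal detail
    return LL, LH, HL, HH
```

With this normalization the transform is orthonormal, so the sub-bands carry exactly the energy of the input, which is why such a decomposition loses no information before attention is applied.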

8 pages, 3697 KiB  
Proceeding Paper
Pansharpening Remote Sensing Images Using Generative Adversarial Networks
by Bo-Hsien Chung, Jui-Hsiang Jung, Yih-Shyh Chiou, Mu-Jan Shih and Fuan Tsai
Eng. Proc. 2025, 92(1), 32; https://doi.org/10.3390/engproc2025092032 - 28 Apr 2025
Viewed by 310
Abstract
Pansharpening is a remote sensing image fusion technique that combines a high-resolution (HR) panchromatic (PAN) image with a low-resolution (LR) multispectral (MS) image to produce an HR MS image. The primary challenge lies in preserving the spatial details of the PAN image while maintaining the spectral integrity of the MS image. To address this, this article presents a generative adversarial network (GAN)-based approach to pansharpening. The GAN discriminator encourages the generated image to match the intensity of the HR PAN image while preserving the spectral characteristics of the LR MS image. Image quality was evaluated using the peak signal-to-noise ratio (PSNR). For the experiment, the original LR MS and HR PAN satellite images were partitioned into smaller patches, and the GAN model was validated with an 80:20 training-to-testing split. The super-resolution images generated by the SRGAN model achieved a PSNR of 31 dB, demonstrating the model's ability to reconstruct the geometric, textural, and spectral information of the images.
(This article belongs to the Proceedings of 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering)
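The PSNR figure of merit used above is a standard computation; a minimal version (with `peak` set to the data range, an assumption since the paper's bit depth is not stated here) is:

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE).
    The 31 dB SRGAN result quoted above is a value of this kind."""
    mse = np.mean((np.asarray(reference, dtype=float)
                   - np.asarray(test, dtype=float)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

For unit-range data, a uniform error of 0.1 gives an MSE of 0.01 and hence exactly 20 dB, a handy sanity check.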

25 pages, 10869 KiB  
Article
Pansharpening Applications in Ecological and Environmental Monitoring Using an Attention Mechanism-Based Dual-Stream Cross-Modality Fusion Network
by Bingru Li, Qingping Li, Haoran Yang and Xiaomin Yang
Appl. Sci. 2025, 15(8), 4095; https://doi.org/10.3390/app15084095 - 8 Apr 2025
Viewed by 505
Abstract
Pansharpening is a critical technique in remote sensing, particularly in ecological and environmental monitoring, where it is used to integrate panchromatic (PAN) and multispectral (MS) images. The technique plays a vital role in assessing environmental change, monitoring biodiversity, and supporting conservation efforts. Many current pansharpening methods rely primarily on PAN images and overlook the distinct characteristics of MS images and the cross-modal relationships between the two. To address this limitation, this paper presents a Dual-Stream Cross-Modality Fusion Network (DCMFN), designed to offer reliable data support for environmental impact assessment, ecological monitoring, and material optimization in nanotechnology. The proposed network uses an attention mechanism to extract features from the PAN and MS images individually. In addition, a Cross-Modality Feature Fusion Module (CMFFM) captures the complex interrelationships between PAN and MS images, improving the reconstruction quality of the pansharpened images. The method not only boosts spatial resolution but also preserves the richness of the multispectral information. In extensive experiments on three remote sensing datasets, the DCMFN outperforms existing methods in both objective evaluation metrics and visual quality.
(This article belongs to the Special Issue Applications of Big Data and Artificial Intelligence in Geoscience)

20 pages, 2403 KiB  
Article
A Novel Dual-Branch Pansharpening Network with High-Frequency Component Enhancement and Multi-Scale Skip Connection
by Wei Huang, Yanyan Liu, Le Sun, Qiqiang Chen and Lu Gao
Remote Sens. 2025, 17(5), 776; https://doi.org/10.3390/rs17050776 - 23 Feb 2025
Viewed by 803
Abstract
In recent years, pansharpening methods based on deep learning have shown great advantages. However, these methods remain inadequate in accounting for the differences and correlations between multispectral (MS) and panchromatic (PAN) images. In response, we propose a novel dual-branch pansharpening network with high-frequency component enhancement and a multi-scale skip connection. First, to strengthen the correlations, the high-frequency branch consists of the high-frequency component enhancement module (HFCEM), which enhances the high-frequency components through the multi-scale block (MSB), obtaining high-frequency weights that accurately capture the high-frequency information in the MS and PAN images. Second, to address the differences, the low-frequency branch consists of the multi-scale skip connection module (MSSCM), which captures multi-scale features from coarse to fine through multi-scale convolution and fuses these multilevel features through the designed skip connection mechanism to fully extract the low-frequency information from the MS and PAN images. Finally, qualitative and quantitative experiments were performed on the GaoFen-2, QuickBird, and WorldView-3 datasets. The results show that the proposed method outperforms state-of-the-art pansharpening methods.
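The low/high-frequency split that such dual-branch designs operate on can be illustrated with the simplest possible decomposition, a mean-filter low-pass and its residual. The box filter here is a stand-in assumption; the paper's HFCEM/MSSCM modules are learned, not fixed filters:

```python
import numpy as np

def box_blur(img, k=3):
    """k x k mean filter with edge padding (a stand-in low-pass filter)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def split_frequencies(img, k=3):
    """Decompose an image into the low- and high-frequency components that the
    two branches of a dual-branch network would process separately."""
    low = box_blur(img, k)
    return low, img - low
```

By construction the two components sum back to the input, so nothing is lost by processing them in separate branches and recombining.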

22 pages, 18328 KiB  
Article
A Three-Branch Pansharpening Network Based on Spatial and Frequency Domain Interaction
by Xincan Wen, Hongbing Ma and Liangliang Li
Remote Sens. 2025, 17(1), 13; https://doi.org/10.3390/rs17010013 - 24 Dec 2024
Cited by 2 | Viewed by 843
Abstract
Pansharpening plays a crucial role in remote sensing image processing by integrating low-resolution multispectral (LRMS) images and high-resolution panchromatic (PAN) images to generate high-resolution multispectral (HRMS) images, addressing the limitation that satellite sensors cannot directly capture HRMS images. Despite the significant advances of deep learning-based pansharpening over traditional approaches, most existing techniques either ignore the modal differences between LRMS and PAN images, relying on direct concatenation, or use similar network structures to extract spectral and spatial information. Many methods also neglect the features common to LRMS and PAN images and lack network architectures specifically designed to extract spectral features. To address these limitations, this study proposes a novel three-branch pansharpening network that leverages both spatial- and frequency-domain interactions, yielding improved spectral and spatial fidelity in the fusion outputs. The method was validated on three datasets: IKONOS, WorldView-3 (WV3), and WorldView-4 (WV4). The results demonstrate that it surpasses several leading techniques in both visual quality and quantitative metrics.

15 pages, 3905 KiB  
Article
Conditional Skipping Mamba Network for Pan-Sharpening
by Yunxuan Tang, Huaguang Li, Peng Liu and Tong Li
Symmetry 2024, 16(12), 1681; https://doi.org/10.3390/sym16121681 - 19 Dec 2024
Viewed by 1051
Abstract
Pan-sharpening aims to generate high-resolution multispectral (HRMS) images by combining high-resolution panchromatic (PAN) images with low-resolution multispectral (LRMS) data while maintaining the symmetry of spatial and spectral characteristics. Traditional convolutional neural networks (CNNs) struggle with global dependency modeling because of their local receptive fields, and Transformer-based models are computationally expensive. Recent Mamba models offer linear complexity and effective global modeling; however, existing Mamba-based methods lack sensitivity to local feature variations, leading to suboptimal preservation of fine detail. To address this, we propose a Conditional Skipping Mamba Network (CSMN), which enhances global-local feature fusion symmetrically through two modules: (1) the Adaptive Mamba Module (AMM), which improves global perception via adaptive spatial-frequency integration, and (2) the Cross-domain Mamba Module (CDMM), which optimizes cross-domain spectral-spatial representation. Experimental results on the IKONOS and WorldView-2 datasets demonstrate that CSMN surpasses existing state-of-the-art methods in spectral consistency and spatial detail preservation, with more symmetric preservation of fine details.
(This article belongs to the Section Computer)

19 pages, 26046 KiB  
Article
Downscaling Land Surface Temperature via Assimilation of LandSat 8/9 OLI and TIRS Data and Hypersharpening
by Luciano Alparone and Andrea Garzelli
Remote Sens. 2024, 16(24), 4694; https://doi.org/10.3390/rs16244694 - 16 Dec 2024
Viewed by 1053
Abstract
Land surface temperature (LST) plays a pivotal role in many environmental sectors. Unfortunately, the thermal bands produced by satellite instruments have limited spatial resolution, which seriously impairs their potential usefulness. In this study, we propose an automatic procedure for spatially downscaling the two 100 m thermal infrared (TIR) bands of Landsat 8/9, captured by the TIR spectrometer (TIRS), by exploiting the bands of the optical instrument. The fusion of heterogeneous data is approached as hypersharpening: each of the two sharpening images is synthesized following data-assimilation concepts, as the linear combination of the 30 m optical bands and the 15 m panchromatic (Pan) image that maximizes the correlation with each thermal channel at its native 100 m scale. The TIR bands, resampled at 15 m, are then each sharpened by their own synthetic Pan. On two different scenes of an OLI-TIRS image, the proposed approach is compared with 100 m to 15 m pansharpening carried out solely by means of the Pan image of OLI, and with the two high-resolution assimilated thermal images used for hypersharpening the two TIRS bands. Besides visual evaluations of the temperature maps, statistical indexes measuring radiometric and spatial consistency are provided and discussed. The results highlight the superiority of the proposed approach: the classical pansharpening approach is radiometrically accurate but weak in the consistency of its spatial enhancement, whereas the assimilated TIR bands, though adequately sharp, lose more than 20% of radiometric consistency. Our proposal trades off the benefits of both counterparts in a single method.
(This article belongs to the Special Issue Remote Sensing for Land Surface Temperature and Related Applications)
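The core step of synthesizing a per-band sharpening image as a linear combination of finer-resolution bands can be sketched with plain least squares: fit the weights where the thermal band lives (coarse scale), then apply them at the fine scale. This is a simplification of the paper's assimilation procedure, with illustrative function names and an added intercept term:

```python
import numpy as np

def synthesize_pan(optical_coarse, thermal_coarse, optical_fine):
    """Fit weights minimising ||thermal - [optical bands, 1] @ w||^2 at the
    coarse (100 m) scale, then apply the same weights to the fine-scale bands
    to obtain a synthetic sharpening image for that thermal channel."""
    n = thermal_coarse.size
    A = np.column_stack([b.reshape(n) for b in optical_coarse] + [np.ones(n)])
    w, *_ = np.linalg.lstsq(A, thermal_coarse.reshape(n), rcond=None)
    m = optical_fine[0].size
    Af = np.column_stack([b.reshape(m) for b in optical_fine] + [np.ones(m)])
    return (Af @ w).reshape(optical_fine[0].shape)
```

When the thermal band really is a linear combination of the optical bands, the fit recovers the weights exactly and the synthetic Pan reproduces that combination at the fine scale.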

15 pages, 6962 KiB  
Article
Perceptual Quality Assessment for Pansharpened Images Based on Deep Feature Similarity Measure
by Zhenhua Zhang, Shenfu Zhang, Xiangchao Meng, Liang Chen and Feng Shao
Remote Sens. 2024, 16(24), 4621; https://doi.org/10.3390/rs16244621 - 10 Dec 2024
Cited by 1 | Viewed by 1001
Abstract
Pan-sharpening aims to generate high-resolution (HR) multispectral (MS) images by fusing HR panchromatic (PAN) and low-resolution (LR) MS images covering the same area. However, because real HR MS reference images do not exist, accurately evaluating the quality of a fused image without a reference is challenging. On the one hand, most methods evaluate fused-image quality using full-reference indices on data simulated under the popular Wald's protocol, which remains controversial for full-resolution fusion. On the other hand, the few existing no-reference methods mostly depend on manually crafted features and cannot fully capture the sensitive spatial and spectral distortions of the fused image. This paper therefore proposes a perceptual quality assessment method based on a deep feature similarity measure. The proposed network comprises a spatial/spectral feature extraction and similarity measure (FESM) branch and an overall evaluation network. The Siamese FESM branch extracts deep spatial and spectral features and computes the similarity of each corresponding pair of deep features to obtain spatial and spectral feature parameters; the overall evaluation network then produces the overall quality assessment. Moreover, we propose quantifying both the overall precision of all training samples and the variations among different fusion methods within a batch, enhancing the network's accuracy and robustness. The method was trained and tested on a large subjective evaluation dataset comprising 13,620 fused images, and the experimental results demonstrate its effectiveness and competitive performance.
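A deep-feature similarity measure of the kind a Siamese branch computes is, at its simplest, a cosine similarity between flattened feature maps. The abstract does not specify the paper's exact similarity function, so this is a generic stand-in:

```python
import numpy as np

def feature_similarity(f1, f2, eps=1e-12):
    """Cosine similarity between two flattened feature maps: 1.0 for identical
    directions, 0.0 for orthogonal ones."""
    a = np.ravel(np.asarray(f1, dtype=float))
    b = np.ravel(np.asarray(f2, dtype=float))
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
```

Feeding the PAN-derived and fused-image features of a distortion-free fusion through such a measure would score near 1.0, while spatial or spectral distortions pull the score down.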

27 pages, 11681 KiB  
Article
HyperGAN: A Hyperspectral Image Fusion Approach Based on Generative Adversarial Networks
by Jing Wang, Xu Zhu, Linhai Jing, Yunwei Tang, Hui Li, Zhengqing Xiao and Haifeng Ding
Remote Sens. 2024, 16(23), 4389; https://doi.org/10.3390/rs16234389 - 24 Nov 2024
Cited by 4 | Viewed by 1041
Abstract
The objective of hyperspectral pansharpening is to fuse low-resolution hyperspectral images (LR-HSI) with corresponding panchromatic (PAN) images to generate high-resolution hyperspectral images (HR-HSI). Despite advancements in hyperspectral (HS) pansharpening using deep learning, the rich spectral detail and large data volume of HS images place high demands on models for effective spectral extraction and processing. In this paper, we present HyperGAN, a hyperspectral image fusion approach based on generative adversarial networks. Unlike previous methods that deepen the network to capture spectral information, HyperGAN widens the structure with a Wide Block for multi-scale learning, effectively capturing global and local details from the upsampled HSI and PAN images. While the LR-HSI provides rich spectral data, the PAN image offers spatial information. We introduce the Efficient Spatial and Channel Attention (ESCA) module to integrate these features, and we add an energy-based discriminator that learns directly from the ground truth (GT) to improve fused-image quality. We validated the method on several scenes, including Pavia Center, Eastern Tianshan, and Chikusei. The results show that HyperGAN outperforms state-of-the-art methods in both visual and quantitative evaluations.

24 pages, 43949 KiB  
Article
An Image Fusion Algorithm for Sustainable Development Goals Satellite-1 Night-Time Light Images Based on Optimized Image Stretching and Dual-Domain Fusion
by Kedong Li, Bo Cheng, Xiaoming Li, Xiaoping Zhang, Guizhou Wang, Jie Gao, Qinxue He and Yaocan Gan
Remote Sens. 2024, 16(22), 4298; https://doi.org/10.3390/rs16224298 - 18 Nov 2024
Viewed by 1047
Abstract
The Glimmer Imager of Urbanization (GIU) on SDGSAT-1 provides high-resolution, globally covered night-time light (NL) images, with 10 m panchromatic (PAN) and 40 m multispectral (MS) imagery. After ideal fusion, high-resolution 10 m MS NL images can be used to better study the subtle manifestations of human activities. Most existing remote sensing image-fusion methods are designed for daytime optical imagery and do not apply to the losslessly compressed images of the GIU. To address this limitation, we propose a novel approach for 10 m NL data fusion: a GIU NL image fusion model based on PAN optimized image stretching (OIS) and dual-domain fusion (DDF) for SDGSAT-1 high-resolution products. OIS is an optimized stretching method for the PAN image that integrates linear and gamma stretching, while DDF merges the dark and light regions of the NL images separately, using different fusion methods, and then stitches them together. Fusion experiments were conducted on four study areas (Beijing, Shanghai, Moscow, and New York), and the proposed method was compared with traditional methods through visual evaluation and five quantitative metrics. The results demonstrate that the proposed method achieves superior visual quality and outperforms conventional methods across all quantitative metrics. An ablation study further confirmed the necessity of each methodological step.
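The two operations OIS integrates, linear stretching and gamma stretching, can be composed in a few lines. The percentile bounds and the simple sequential composition are assumptions of this sketch; the paper's optimized weighting between the two stretches is not reproduced:

```python
import numpy as np

def optimized_stretch(img, gamma=0.5, low_pct=1.0, high_pct=99.0):
    """Percentile-based linear stretch to [0, 1] followed by gamma correction;
    gamma < 1 lifts the dark tones that dominate night-time light imagery."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    linear = np.clip((img - lo) / max(hi - lo, 1e-12), 0.0, 1.0)
    return linear ** gamma
```

With the percentiles set to the full range, the linear step is the identity on unit-range data and the result is a pure gamma curve, which makes the behavior easy to verify.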

20 pages, 8709 KiB  
Article
Automatic Fine Co-Registration of Datasets from Extremely High Resolution Satellite Multispectral Scanners by Means of Injection of Residues of Multivariate Regression
by Luciano Alparone, Alberto Arienzo and Andrea Garzelli
Remote Sens. 2024, 16(19), 3576; https://doi.org/10.3390/rs16193576 - 25 Sep 2024
Cited by 3 | Viewed by 1227
Abstract
This work presents two pre-processing patches that automatically correct the residual local misalignment of datasets acquired by very/extremely high resolution (VHR/EHR) satellite multispectral (MS) scanners: one for platforms such as GeoEye-1 and Pléiades, which carry two separate instruments for MS and panchromatic (Pan) data, and the other for WorldView-2/3, which carry three instruments, two of which are visible and near-infrared (VNIR) MS scanners. The misalignment arises because the two or three instruments onboard GeoEye-1/WorldView-2 (four onboard WorldView-3) share the same optics and thus cannot have parallel optical axes; consequently, they image the same swath area from different positions along the orbit. Local height changes (hills, buildings, trees, etc.) produce local shifts among corresponding points in the datasets, which could be accurately aligned only if the digital elevation surface model were known at sufficient spatial resolution. That is hardly feasible everywhere at these resolutions, with Pan pixels of less than 0.5 m. The refined co-registration is achieved by injecting the residue of the multivariate linear regression of each scanner toward the lowpass-filtered Pan. Experiments with two and three instruments show that almost perfect alignment is achieved, and MS pansharpening is shown to benefit greatly from the improved alignment. The proposed alignment procedures are real-time and fully automated; they require no additional or ancillary information, relying uniquely on the unimodality of the MS and Pan sensors.

26 pages, 6739 KiB  
Article
Pansharpening Based on Multimodal Texture Correction and Adaptive Edge Detail Fusion
by Danfeng Liu, Enyuan Wang, Liguo Wang, Jón Atli Benediktsson, Jianyu Wang and Lei Deng
Remote Sens. 2024, 16(16), 2941; https://doi.org/10.3390/rs16162941 - 11 Aug 2024
Viewed by 1188
Abstract
Pansharpening refers to the process of fusing multispectral (MS) images with panchromatic (PAN) images to obtain high-resolution multispectral (HRMS) images. However, owing to the low correlation and similarity between MS and PAN images, as well as inaccuracies in spatial information injection, HRMS images often suffer from significant spectral and spatial distortion. To address these issues, this paper proposes a pansharpening method based on multimodal texture correction and adaptive edge detail fusion. To obtain a texture-corrected (TC) image that is highly correlated with and similar to the MS image, the target-adaptive CNN-based pansharpening (A-PNN) method is introduced. By constructing a multimodal texture correction model, intensity, gradient, and A-PNN-based deep plug-and-play correction constraints are established between the TC and source images, and an adaptive degradation filter algorithm is proposed to ensure the accuracy of these constraints. Since the resulting TC image can effectively replace the PAN image, and because the MS image itself contains valuable spatial information, an adaptive edge detail fusion algorithm is also proposed; it adaptively extracts detailed information from the TC and MS images to protect edges. Given the limited spatial information in the MS image, that information is proportionally enhanced before the adaptive fusion. The fused spatial information is then injected into the upsampled multispectral (UPMS) image to produce the final HRMS image. Extensive experiments demonstrate that, compared with other methods, the proposed algorithm achieves superior results in both subjective visual quality and objective evaluation metrics.

22 pages, 7835 KiB  
Article
Towards Robust Pansharpening: A Large-Scale High-Resolution Multi-Scene Dataset and Novel Approach
by Shiying Wang, Xuechao Zou, Kai Li, Junliang Xing, Tengfei Cao and Pin Tao
Remote Sens. 2024, 16(16), 2899; https://doi.org/10.3390/rs16162899 - 8 Aug 2024
Cited by 4 | Viewed by 2695
Abstract
Pansharpening, a pivotal task in remote sensing, integrates low-resolution multispectral images with high-resolution panchromatic images to synthesize an image that is both high-resolution and retains the multispectral information. Pansharpened images improve the precision of land cover classification, change detection, and environmental monitoring in remote sensing data analysis. While deep learning techniques have shown significant success in pansharpening, existing methods are often evaluated on restricted satellite data sources, single scene types, and low-resolution images. This paper addresses this gap by introducing PanBench, a high-resolution multi-scene dataset covering all mainstream satellites and comprising 5898 sample pairs. Each pair includes a four-channel (RGB + near-infrared) multispectral image of 256 × 256 pixels and a mono-channel panchromatic image of 1024 × 1024 pixels. To avoid irreversible loss of spectral information and to achieve high-fidelity synthesis, we propose a Cascaded Multiscale Fusion Network (CMFNet) for pansharpening: multispectral images are progressively upsampled while panchromatic images are downsampled, and the multispectral and panchromatic features at each matching scale are fused in a cascaded manner to obtain more robust features. Extensive experiments validate the effectiveness of CMFNet.
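The scale bookkeeping of the cascade (MS upsampled step by step, PAN downsampled, pairs fused at matching resolutions) can be sketched with nearest-neighbour upsampling and average pooling; the learned fusion blocks are stubbed out as channel stacking, and all names here are illustrative rather than CMFNet's:

```python
import numpy as np

def upsample2(x):
    """Nearest-neighbour x2 upsampling."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def downsample2(x):
    """2x2 average-pooling downsampling."""
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def cascade_scales(ms, pan, levels=2):
    """Pair a progressively upsampled MS image with a progressively downsampled
    PAN image at each matching scale, coarsest first."""
    pans = [pan]
    for _ in range(levels):
        pans.append(downsample2(pans[-1]))
    pairs, m = [], ms
    for lvl in range(levels + 1):
        pairs.append(np.stack([m, pans[levels - lvl]]))  # same resolution here
        if lvl < levels:
            m = upsample2(m)
    return pairs
```

With the 4:1 resolution ratio of the PanBench pairs (scaled down for the example), two levels produce matched pairs at every intermediate scale.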

16 pages, 4099 KiB  
Article
Multi-Frequency Spectral–Spatial Interactive Enhancement Fusion Network for Pan-Sharpening
by Yunxuan Tang, Huaguang Li, Guangxu Xie, Peng Liu and Tong Li
Electronics 2024, 13(14), 2802; https://doi.org/10.3390/electronics13142802 - 16 Jul 2024
Cited by 5 | Viewed by 1264
Abstract
The objective of pan-sharpening is to fuse high-resolution panchromatic (PAN) images, which carry limited spectral information, with low-resolution multispectral (LR-MS) images, generating a fused image with high spatial resolution and rich spectral information. Current fusion techniques, however, face significant challenges, including insufficient edge detail, spectral distortion, increased noise, and limited robustness. To address these challenges, we propose a multi-frequency spectral–spatial interaction enhancement network (MFSINet) comprising spectral–spatial interactive fusion (SSIF) and multi-frequency feature enhancement (MFFE) subnetworks. The SSIF enhances both spatial and spectral fusion features by optimizing the characteristics of each spectral band through band-aware processing. The MFFE employs a variant of the wavelet transform to perform multiresolution analyses of remote sensing scenes, enhancing the spatial resolution, spectral fidelity, and texture and structural features of the fused images by optimizing directional and spatial properties. Qualitative analyses and quantitative comparisons on the IKONOS and WorldView-2 datasets indicate that the method significantly improves the fidelity and accuracy of the fused images.
(This article belongs to the Topic Computational Intelligence in Remote Sensing: 2nd Edition)
