Search Results (252)

Search Parameters:
Keywords = pansharpening

17 pages, 44594 KB  
Article
Pansharpened WorldView-3 Imagery and Machine Learning for Detecting Mal secco Disease in a Citrus Orchard
by Adriano Palma, Antonio Tiberini, Marco Caruso, Silvia Di Silvestro and Marco Bascietto
Remote Sens. 2026, 18(1), 110; https://doi.org/10.3390/rs18010110 - 28 Dec 2025
Viewed by 283
Abstract
Mal secco disease (MSD), caused by Plenodomus tracheiphilus, poses a serious threat to Citrus limon production across the Mediterranean Basin. This study investigates the potential of high-resolution WorldView-3 imagery for detecting early-stage MSD symptoms in lemon orchards through the integration of three pansharpening algorithms (Gram–Schmidt, NNDiffuse, and Brovey) with two machine learning classifiers (Random Forest and Support Vector Machine). The Brovey-based fusion combined with Random Forest yielded the best results, achieving 80% overall accuracy, 90% precision, and 84% recall, with high spatial reliability confirmed by 10-fold cross-validation. Spectral analysis revealed that Brovey introduced the largest radiometric deviation, particularly in the NIR band, which nonetheless enhanced class separability between healthy and symptomatic crowns. These findings demonstrate that moderate spectral distortion can be tolerated, and may even be beneficial, for vegetation disease detection. The proposed workflow—efficient, transferable, and based solely on visible and NIR bands—offers a practical foundation for satellite-driven disease monitoring and precision management in Mediterranean citrus systems. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
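The Brovey fusion named in this abstract is one of the simplest component-substitution schemes: each upsampled MS band is rescaled by the ratio of the PAN value to the mean MS intensity. A minimal plain-Python sketch (the function name, nested-list representation, and equal-weight intensity are illustrative choices, not the paper's exact configuration):

```python
def brovey_pansharpen(ms_bands, pan):
    """Brovey transform: scale each MS band by PAN / mean(MS bands).

    ms_bands: list of 2-D arrays (nested lists) already resampled to the
    PAN grid; pan: 2-D array of the same shape. Illustrative sketch only;
    real pipelines add histogram matching and radiometric calibration.
    """
    n = len(ms_bands)
    rows, cols = len(pan), len(pan[0])
    eps = 1e-12  # guard against division by zero in dark pixels
    out = []
    for b in range(n):
        band = [[0.0] * cols for _ in range(rows)]
        for i in range(rows):
            for j in range(cols):
                intensity = sum(ms_bands[k][i][j] for k in range(n)) / n
                band[i][j] = ms_bands[b][i][j] * pan[i][j] / (intensity + eps)
        out.append(band)
    return out
```

Because every band is multiplied by the same PAN/intensity ratio, the fused bands inherit PAN's spatial detail while their ratios to one another are preserved; the global rescaling is also the source of the radiometric deviation the abstract reports.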

25 pages, 8187 KB  
Article
Cascaded Local–Nonlocal Pansharpening with Adaptive Channel-Kernel Convolution and Multi-Scale Large-Kernel Attention
by Junru Yin, Zhiheng Huang, Qiqiang Chen, Wei Huang, Le Sun, Qinggang Wu and Ruixia Hou
Remote Sens. 2026, 18(1), 97; https://doi.org/10.3390/rs18010097 - 27 Dec 2025
Viewed by 387
Abstract
Pansharpening plays a crucial role in remote sensing applications, as it enables the generation of high-spatial-resolution multispectral images that simultaneously preserve spatial and spectral information. However, most current methods struggle to preserve local textures and exploit spectral correlations across bands while modeling nonlocal information in source images. To address these issues, we propose a cascaded local–nonlocal pansharpening network (CLNNet) that progressively integrates local and nonlocal features through stacked Progressive Local–Nonlocal Fusion (PLNF) modules. This cascaded design allows CLNNet to gradually refine spatial–spectral information. Each PLNF module combines Adaptive Channel-Kernel Convolution (ACKC), which extracts local spatial features using channel-specific convolution kernels, and a Multi-Scale Large-Kernel Attention (MSLKA) module, which leverages multi-scale large-kernel convolutions with varying receptive fields to capture nonlocal information. The attention mechanism in MSLKA enhances spatial–spectral feature representation by integrating information across multiple dimensions. Extensive experiments on the GaoFen-2, QuickBird, and WorldView-3 datasets demonstrate that the proposed method outperforms state-of-the-art methods in quantitative metrics and visual quality. Full article

18 pages, 10928 KB  
Article
Long-Term Monitoring of Qaraoun Lake’s Water Quality and Hydrological Deterioration Using Landsat 7–9 and Google Earth Engine: Evidence of Environmental Decline in Lebanon
by Mohamad Awad
Hydrology 2026, 13(1), 8; https://doi.org/10.3390/hydrology13010008 - 23 Dec 2025
Viewed by 552
Abstract
Globally, lakes are increasingly recognized as sensitive indicators of climate change and ecosystem stress. Qaraoun Lake, Lebanon’s largest artificial reservoir, is a critical resource for irrigation, hydropower generation, and domestic water supply. Over the past 25 years, satellite remote sensing has enabled consistent monitoring of its hydrological and environmental dynamics. This study leverages the advanced cloud-based processing capabilities of Google Earth Engine (GEE) to analyze over 180 cloud-free scenes from Landsat 7 (Enhanced Thematic Mapper Plus, ETM+) from 2000 to present, Landsat 8 Operational Land Imager and Thermal Infrared Sensor (OLI/TIRS) from 2013 to present, and Landsat 9 OLI-2/TIRS-2 from 2021 to present, quantifying changes in lake surface area, water volume, and pollution levels. Water extent was delineated using the Modified Normalized Difference Water Index (MNDWI), enhanced through pansharpening to improve spatial resolution from 30 m to 15 m. Water quality was evaluated using a composite pollution index that integrates three spectral indicators—the Normalized Difference Chlorophyll Index (NDCI), the Floating Algae Index (FAI), and a normalized Shortwave Infrared (SWIR) band, which serves as a proxy for turbidity and organic matter. This index was further standardized against a conservative Normalized Difference Vegetation Index (NDVI) threshold to reduce vegetation interference. The resulting index ranges from near-zero (minimal pollution) to values exceeding 1.0 (severe pollution), with higher values indicating elevated chlorophyll concentrations, surface reflectance anomalies, and suspended particulate matter. Results indicate a significant decline in mean annual water volume, from a peak of 174.07 million m³ in 2003 to a low of 106.62 million m³ in 2025 (until mid-November). Concurrently, pollution levels increased markedly, with the average index rising from 0.0028 in 2000 to a peak of 0.2465 in 2024. Episodic spikes exceeding 1.0 were detected in 2005, 2016, and 2024, corresponding to documented contamination events. These findings were validated against multiple institutional and international reports, confirming the reliability and efficiency of the GEE-based methodology. Time-series visualizations generated through GEE underscore a dual deterioration, both hydrological and qualitative, highlighting the lake’s growing vulnerability to anthropogenic pressures and climate variability. The study emphasizes the urgent need for integrated watershed management, pollution control measures, and long-term environmental monitoring to safeguard Lebanon’s water security and ecological resilience. Full article
(This article belongs to the Special Issue Lakes as Sensitive Indicators of Hydrology, Environment, and Climate)
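Two of the spectral indicators the abstract relies on are standard normalized-difference indices. For reference, per-pixel sketches (scalar reflectances in, index out; the small `eps` guard is an implementation convenience, not part of the definitions):

```python
def mndwi(green, swir, eps=1e-12):
    """Modified Normalized Difference Water Index (Xu, 2006):
    (Green - SWIR) / (Green + SWIR). Water pixels trend positive,
    which is why thresholding MNDWI delineates the lake surface."""
    return (green - swir) / (green + swir + eps)

def ndci(red_edge, red, eps=1e-12):
    """Normalized Difference Chlorophyll Index:
    (RedEdge - Red) / (RedEdge + Red)."""
    return (red_edge - red) / (red_edge + red + eps)
```

With NumPy arrays the same expressions apply elementwise; in the workflow described above, the pansharpening to 15 m happens before the indices are computed.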

26 pages, 12587 KB  
Article
Shift-Invariant Unsupervised Pansharpening Based on Diffusion Model
by Jialei Xie, Luyan Ji, Jinzhou Ye, Jilei Liu, Qi Feng, Kejian Liu and Yongchao Zhao
Remote Sens. 2026, 18(1), 27; https://doi.org/10.3390/rs18010027 - 22 Dec 2025
Viewed by 216
Abstract
Pansharpening is a crucial topic in remote sensing, and numerous deep learning-based methods have recently been proposed to explore the potential of deep neural networks (DNNs). However, existing approaches are often sensitive to spatial translation errors between high-resolution panchromatic (HRPan) and low-resolution multispectral (LRMS) images, leading to noticeable artifacts in the fused results. To address this issue, we propose an unsupervised pansharpening method that is robust to translation misalignment between HRPan and LRMS inputs. The proposed framework integrates a shift-invariant module to estimate subpixel spatial offsets and a diffusion-based generative model to progressively enhance spatial and spectral details. Moreover, a multi-scale detail injection module is designed to guide the diffusion process with fine-grained structural information. In addition, a carefully formulated loss function is established to preserve the fidelity of fusion results and facilitate the estimation of translation errors. Experiments conducted on the GaoFen-2, GaoFen-1, and WorldView-2 datasets demonstrate that the proposed method achieves superior fusion quality compared with state-of-the-art approaches and effectively suppresses artifacts caused by translation errors. Full article
(This article belongs to the Section Remote Sensing Image Processing)

25 pages, 3090 KB  
Article
Matrix-R Theory: A Simple Generic Method to Improve RGB-Guided Spectral Recovery Algorithms
by Graham D. Finlayson, Yi-Tun Lin and Abdullah Kucuk
Sensors 2025, 25(24), 7662; https://doi.org/10.3390/s25247662 - 17 Dec 2025
Viewed by 468
Abstract
RGB-guided spectral recovery algorithms include both spectral reconstruction (SR) methods that map image RGBs to spectra and pan-sharpening (PS) methods, where an RGB image is used to guide the upsampling of a low-resolution spectral image. In this paper, we exploit Matrix-R theory in developing a post-processing algorithm that, when applied to the outputs of any and all spectral recovery algorithms, almost always improves their spectral recovery accuracy (and never makes it worse). In Matrix-R theory, any spectrum can be decomposed into a component—called the fundamental metamer—in the space spanned by the spectral sensitivities and a second component—the metameric black—that is orthogonal to this subspace. In our post-processing algorithm, we substitute the correct fundamental metamer, which we calculate directly from the RGB image, for the estimated (and generally incorrect) fundamental metamer that is returned by a spectral recovery algorithm. Significantly, we prove that substituting the correct fundamental metamer always reduces the recovery error. Further, if the spectra in a target application are known to be well described by a linear model of low dimension, then our Matrix-R post-processing algorithm can also exploit this additional physical constraint. In experiments, we demonstrate that our Matrix-R post-processing improves the performance of a variety of spectral reconstruction and pan-sharpening algorithms. Full article
(This article belongs to the Section Sensing and Imaging)
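The decomposition at the heart of Matrix-R theory splits a spectrum into a fundamental metamer (its orthogonal projection onto the span of the sensor sensitivities) and a metameric black orthogonal to that span. A sketch of the projection via Gram–Schmidt in plain Python (the vector representation and function names are assumptions for illustration):

```python
def project_onto_span(basis, spectrum):
    """Split `spectrum` into (fundamental, black) with respect to span(basis).

    basis: list of sensor-sensitivity vectors (lists of floats), assumed
    linearly independent; spectrum: list of floats. Returns vectors with
    spectrum = fundamental + black, black orthogonal to every basis vector.
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    # Orthonormalize the sensitivity vectors (classical Gram-Schmidt).
    ortho = []
    for s in basis:
        w = list(s)
        for q in ortho:
            c = dot(w, q)
            w = [a - c * b for a, b in zip(w, q)]
        norm = dot(w, w) ** 0.5
        ortho.append([a / norm for a in w])

    # Fundamental metamer = sum of projections onto the orthonormal basis.
    fundamental = [0.0] * len(spectrum)
    for q in ortho:
        c = dot(spectrum, q)
        fundamental = [f + c * b for f, b in zip(fundamental, q)]
    black = [a - f for a, f in zip(spectrum, fundamental)]
    return fundamental, black
```

In these terms, the post-processing step described above amounts to keeping a recovery algorithm's metameric-black component and substituting the fundamental metamer computed exactly from the RGB, i.e. `corrected = correct_fundamental + estimated_black`.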

27 pages, 122137 KB  
Article
Object-Based Random Forest Approach for High-Resolution Mapping of Urban Green Space Dynamics in a University Campus
by Bakhrul Midad, Rahmihafiza Hanafi, Muhammad Aufaristama and Irwan Ary Dharmawan
Appl. Sci. 2025, 15(24), 13183; https://doi.org/10.3390/app152413183 - 16 Dec 2025
Viewed by 358
Abstract
Urban green space (UGS) is essential for ecological functions, environmental quality, and human well-being, yet campus expansion can reduce vegetated areas. This study assessed UGS dynamics at Universitas Padjadjaran’s Jatinangor campus from 2015 to 2025 and evaluated an object-based machine learning approach for fine-scale land cover mapping. High-resolution WorldView-2, WorldView-3, and Legion-03 imagery was pan-sharpened, geometrically corrected, normalized, and used to compute NDVI and NDWI indices. Object-based image analysis segmented the imagery into homogeneous objects, followed by random forest classification into six land cover classes; UGS was derived from dense and sparse vegetation. Accuracy assessment included confusion matrices, overall accuracy of 0.810–0.860, kappa coefficients of 0.747–0.826, weighted F1 scores of 0.807–0.860, and validation with 43 field points. The total UGS increased from 68.89% to 74.69%, bare land decreased from 13.49% to 5.81%, and building areas moderately increased from 10.36% to 11.52%. The maps captured vegetated and developed zones accurately, demonstrating the reliability of the classification approach. These findings indicate that campus expansion has been managed without compromising ecological integrity, providing spatially explicit, reliable data to inform sustainable campus planning and support green campus initiatives. Full article
(This article belongs to the Section Environmental Sciences)

23 pages, 4335 KB  
Article
Fourier Fusion Implicit Mamba Network for Remote Sensing Pansharpening
by Ze-Zheng He, Hong-Xia Dou and Yu-Jie Liang
Remote Sens. 2025, 17(22), 3747; https://doi.org/10.3390/rs17223747 - 18 Nov 2025
Viewed by 730
Abstract
Pansharpening seeks to reconstruct a high-resolution multi-spectral image (HR-MSI) by integrating the fine spatial details from the panchromatic (PAN) image with the spectral richness of the low-resolution multi-spectral image (LR-MSI). In recent years, Implicit Neural Representations (INRs) have demonstrated remarkable potential in various visual domains, offering a novel paradigm for pansharpening tasks. However, traditional INRs often suffer from insufficient global awareness and a tendency to capture mainly low-frequency information. To address these challenges, we present the Fourier Fusion Implicit Mamba Network (FFIMamba). The network takes advantage of Mamba’s ability to capture long-range dependencies and integrates a Fourier-based spatial–frequency fusion approach. By mapping features into the Fourier domain, FFIMamba identifies and emphasizes high-frequency details across spatial and frequency dimensions. This process broadens the network’s perception area, enabling more accurate reconstruction of fine structures and textures. Moreover, a spatial–frequency interactive fusion module is introduced to strengthen the information exchange among INR features. Extensive experiments on multiple benchmark datasets demonstrate that FFIMamba achieves superior performance in both visual quality and quantitative metrics. Ablation studies further verify the effectiveness of each component within the proposed framework. Full article

21 pages, 18006 KB  
Article
Shallow Bathymetry from Hyperspectral Imagery Using 1D-CNN: An Innovative Methodology for High Resolution Mapping
by Steven Martínez Vargas, Sibila A. Genchi, Alejandro J. Vitale and Claudio A. Delrieux
Remote Sens. 2025, 17(21), 3584; https://doi.org/10.3390/rs17213584 - 30 Oct 2025
Viewed by 881
Abstract
The combined application of machine or deep learning algorithms and hyperspectral imagery for bathymetry estimation is currently an emerging field with widespread uses and applications. This research topic still requires further investigation to achieve methodological robustness and accuracy. In this study, we introduce a novel methodology for shallow bathymetric mapping using a one-dimensional convolutional neural network (1D-CNN) applied to PRISMA hyperspectral images, including refinements to enhance mapping accuracy, together with the optimization of computational efficiency. Four different 1D-CNN models were developed, incorporating pansharpening and spectral band optimization. Model performance was rigorously evaluated against reference bathymetric data obtained from official nautical charts provided by the Servicio de Hidrografía Naval (Argentina). The BoPsCNN model achieved the best testing accuracy with a coefficient of determination of 0.96 and a root mean square error of 0.65 m for a depth range of 0–15 m. The implementation of band optimization significantly reduced computational overhead, yielding a time-saving efficiency of 31–38%. The resulting bathymetric maps exhibited a coherent depth gradient from nearshore to offshore zones, with enhanced seabed morphology representation, particularly in models using pansharpened data. Full article

28 pages, 14783 KB  
Article
HSSTN: A Hybrid Spectral–Structural Transformer Network for High-Fidelity Pansharpening
by Weijie Kang, Yuan Feng, Yao Ding, Hongbo Xiang, Xiaobo Liu and Yaoming Cai
Remote Sens. 2025, 17(19), 3271; https://doi.org/10.3390/rs17193271 - 23 Sep 2025
Viewed by 1006
Abstract
Pansharpening fuses multispectral (MS) and panchromatic (PAN) remote sensing images to generate outputs with high spatial resolution and spectral fidelity. Nevertheless, conventional methods relying primarily on convolutional neural networks or unimodal fusion strategies frequently fail to bridge the sensor modality gap between MS and PAN data. Consequently, spectral distortion and spatial degradation often occur, limiting high-precision downstream applications. To address these issues, this work proposes a Hybrid Spectral–Structural Transformer Network (HSSTN) that enhances multi-level collaboration through comprehensive modelling of spectral–structural feature complementarity. Specifically, the HSSTN implements a three-tier fusion framework. First, an asymmetric dual-stream feature extractor employs a residual block with channel attention (RBCA) in the MS branch to strengthen spectral representation, while a Transformer architecture in the PAN branch extracts high-frequency spatial details, thereby reducing modality discrepancy at the input stage. Subsequently, a target-driven hierarchical fusion network utilises progressive crossmodal attention across scales, ranging from local textures to multi-scale structures, to enable efficient spectral–structural aggregation. Finally, a novel collaborative optimisation loss function preserves spectral integrity while enhancing structural details. Comprehensive experiments conducted on QuickBird, GaoFen-2, and WorldView-3 datasets demonstrate that HSSTN outperforms existing methods in both quantitative metrics and visual quality. Consequently, the resulting images exhibit sharper details and fewer spectral artefacts, showcasing significant advantages in high-fidelity remote sensing image fusion. Full article
(This article belongs to the Special Issue Artificial Intelligence in Hyperspectral Remote Sensing Data Analysis)

22 pages, 3882 KB  
Article
Combining Satellite Image Standardization and Self-Supervised Learning to Improve Building Segmentation Accuracy
by Haoran Zhang and Bunkei Matsushita
Remote Sens. 2025, 17(18), 3182; https://doi.org/10.3390/rs17183182 - 14 Sep 2025
Viewed by 949
Abstract
Many research fields, such as urban planning, urban climate and environmental assessment, require information on the distribution of buildings. In this study, we used U-Net to segment buildings from WorldView-3 imagery. To improve the accuracy of building segmentation, we undertook two endeavors. First, we investigated the optimal order of atmospheric correction (AC) and panchromatic sharpening (pan-sharpening) and found that performing AC before pan-sharpening results in higher building segmentation accuracy than performing it after pan-sharpening, increasing the average IoU by 9.4%. Second, we developed a new multi-task self-supervised learning (SSL) network to pre-train a VGG19 backbone using 21 unlabeled WorldView images. The new multi-task SSL network includes two pretext tasks specifically designed to account for the characteristics of buildings in satellite imagery (size, distribution pattern, multispectral response, etc.). Performance evaluation shows that U-Net combined with an SSL pre-trained VGG19 backbone improves building segmentation accuracy by 15.3% compared to U-Net combined with a VGG19 backbone trained from scratch. Comparative analysis also shows that the new multi-task SSL network outperforms other existing SSL methods, improving building segmentation accuracy by 3.5–13.7%. Moreover, the proposed method significantly saves computational costs and can effectively run on a personal computer. Full article

23 pages, 6105 KB  
Article
YUV Color Model-Based Adaptive Pansharpening with Lanczos Interpolation and Spectral Weights
by Shavkat Fazilov, Ozod Yusupov, Erali Eshonqulov, Khabiba Abdieva and Ziyodullo Malikov
Mathematics 2025, 13(17), 2868; https://doi.org/10.3390/math13172868 - 5 Sep 2025
Cited by 1 | Viewed by 786
Abstract
Pansharpening is a method of image fusion that combines a panchromatic (PAN) image with high spatial resolution and multispectral (MS) images which possess different spectral characteristics and are frequently obtained from satellite sensors. Despite the development of numerous pansharpening methods in recent years, a key challenge continues to be the maintenance of both spatial details and spectral accuracy in the combined image. To tackle this challenge, we introduce a new approach that enhances the component substitution-based Adaptive IHS method by integrating the YUV color model along with weighting coefficients influenced by the multispectral data. In our proposed approach, the conventional IHS color model is substituted with the YUV model to enhance spectral consistency. Additionally, Lanczos interpolation is used to upscale the MS image to match the spatial resolution of the PAN image. Each channel of the MS image is fused using adaptive weights derived from the influence of multispectral data, leading to the final pansharpened image. Based on the findings from experiments conducted on the PairMax and PanCollection datasets, our proposed method exhibited superior spectral and spatial performance when compared to several existing pansharpening techniques. Full article
(This article belongs to the Special Issue Machine Learning Applications in Image Processing and Computer Vision)
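The component-substitution idea the abstract builds on can be illustrated per pixel: move to a luminance–chrominance space, swap the luminance for the PAN value, and invert. The sketch below uses BT.601 luma weights and simplified chroma differences; the paper's adaptive spectral weights and Lanczos upsampling are omitted, so treat this as a toy version rather than the proposed method:

```python
def yuv_pansharpen_pixel(r, g, b, pan):
    """Component substitution in a YUV-like space: RGB -> (Y, U, V),
    replace luminance Y with the PAN value, convert back.
    Per-pixel illustrative sketch only."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # BT.601 luma
    u = b - y   # simplified chroma differences (scale factors dropped)
    v = r - y
    y2 = pan    # substitute the high-resolution intensity
    r2 = y2 + v
    b2 = y2 + u
    g2 = (y2 - 0.299 * r2 - 0.114 * b2) / 0.587  # invert the luma equation
    return r2, g2, b2
```

By construction, the output's luma equals the substituted PAN value exactly, while the chroma differences (and hence hue) are carried over from the MS pixel; this is the spectral-consistency property component-substitution methods aim for.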

25 pages, 5194 KB  
Article
A Graph-Based Superpixel Segmentation Approach Applied to Pansharpening
by Hind Hallabia
Sensors 2025, 25(16), 4992; https://doi.org/10.3390/s25164992 - 12 Aug 2025
Cited by 1 | Viewed by 1065
Abstract
In this paper, an image-driven regional pansharpening technique based on simplex optimization analysis with a graph-based superpixel segmentation strategy is proposed. This fusion approach optimally combines spatial information derived from a high-resolution panchromatic (PAN) image and spectral information captured from a low-resolution multispectral (MS) image to generate a unique comprehensive high-resolution MS image. As the performance of such a fusion method relies on the choice of the fusion strategy, and in particular, on the way the algorithm is used for estimating gain coefficients, our proposal is dedicated to computing the injection gains over a graph-driven segmentation map. The graph-based segments are obtained by applying simple linear iterative clustering (SLIC) on the MS image followed by a region adjacency graph (RAG) merging stage. This graphical representation of the segmentation map is used as guidance for spatial information to be injected during fusion processing. The high-resolution MS image is achieved by inferring locally the details in accordance with the local simplex injection fusion rule. The quality improvements achievable by our proposal are evaluated and validated at reduced and at full scales using two high resolution datasets collected by GeoEye-1 and WorldView-3 sensors. Full article
(This article belongs to the Section Sensing and Imaging)

34 pages, 10241 KB  
Review
A Comprehensive Benchmarking Framework for Sentinel-2 Sharpening: Methods, Dataset, and Evaluation Metrics
by Matteo Ciotola, Giuseppe Guarino, Antonio Mazza, Giovanni Poggi and Giuseppe Scarpa
Remote Sens. 2025, 17(12), 1983; https://doi.org/10.3390/rs17121983 - 7 Jun 2025
Cited by 2 | Viewed by 1930
Abstract
The advancement of super-resolution and sharpening algorithms for satellite images has significantly expanded the potential applications of remote sensing data. In the case of Sentinel-2, despite significant progress, the lack of standardized datasets and evaluation protocols has made it difficult to fairly compare existing methods and advance the state of the art. This work introduces a comprehensive benchmarking framework for Sentinel-2 sharpening, designed to address these challenges and foster future research. It analyzes several state-of-the-art sharpening algorithms, selecting representative methods ranging from traditional pansharpening to ad hoc model-based optimization and deep learning approaches. All selected methods have been re-implemented within a consistent Python-based (Version 3.10) framework and evaluated on a suitably designed, large-scale Sentinel-2 dataset. This dataset features diverse geographical regions, land cover types, and acquisition conditions, ensuring robust training and testing scenarios. The performance of the sharpening methods is assessed using both reference-based and no-reference quality indexes, highlighting strengths, limitations, and open challenges of current state-of-the-art algorithms. The proposed framework, dataset, and evaluation protocols are openly shared with the research community to promote collaboration and reproducibility. Full article
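Among the reference-based quality indexes commonly used in sharpening benchmarks are SAM (Spectral Angle Mapper) and ERGAS; the abstract does not enumerate its exact metrics, so these plain-Python sketches are representative examples rather than the framework's implementation:

```python
from math import acos, sqrt

def sam(v1, v2):
    """Spectral Angle Mapper: angle (radians) between two pixel spectra.
    Zero means identical spectral shape regardless of scale."""
    num = sum(a * b for a, b in zip(v1, v2))
    den = sqrt(sum(a * a for a in v1)) * sqrt(sum(b * b for b in v2))
    return acos(max(-1.0, min(1.0, num / den)))  # clamp for rounding noise

def ergas(ref_bands, fus_bands, ratio):
    """ERGAS: 100/ratio * sqrt(mean over bands of (RMSE_k / mean_k)^2).
    ratio is the resolution ratio (e.g. 4 for 4x sharpening); each band
    is a flat list of pixel values."""
    acc = 0.0
    for r, f in zip(ref_bands, fus_bands):
        rmse = sqrt(sum((a - b) ** 2 for a, b in zip(r, f)) / len(r))
        mean = sum(r) / len(r)
        acc += (rmse / mean) ** 2
    return 100.0 / ratio * sqrt(acc / len(ref_bands))
```

Lower is better for both; a perfect fusion gives SAM = 0 and ERGAS = 0 against the reference.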

21 pages, 9082 KB  
Article
Multi-Source Pansharpening of Island Sea Areas Based on Hybrid-Scale Regression Optimization
by Dongyang Fu, Jin Ma, Bei Liu and Yan Zhu
Sensors 2025, 25(11), 3530; https://doi.org/10.3390/s25113530 - 4 Jun 2025
Viewed by 1407
Abstract
To address the demand for high spatial resolution data in the water color inversion task of multispectral satellite images in island sea areas, a feasible solution is to process the data through multi-source remote sensing fusion methods. However, the inherent biases among multi-source sensors and the spectral distortion caused by the dynamic changes of water bodies in island sea areas restrict the fusion accuracy, necessitating more precise fusion solutions. Therefore, this paper proposes a pansharpening method based on Hybrid-Scale Mutual Information (HSMI). First, the method integrates mixed-scale information into scale regression, effectively enhancing the accuracy and consistency of the panchromatic sharpening results. Second, it introduces mutual information to quantify the spatial–spectral correlation among multi-source data and balance the fusion representation across the mixed scales. Finally, the performance of various popular pansharpening methods was compared and analyzed using coupled Sentinel-2 and Sentinel-3 datasets over typical island and reef waters of the South China Sea. The results show that HSMI can enhance the spatial details and edge clarity of islands while better preserving the spectral characteristics of the surrounding sea areas. Full article
(This article belongs to the Section Sensing and Imaging)
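Mutual information, which the method uses to quantify spatial–spectral correlation, can be estimated from a joint histogram of paired pixel samples. A plain-Python sketch (the bin count and bit units are illustrative choices, not the paper's settings):

```python
from math import log2

def mutual_information(xs, ys, bins=8):
    """Estimate MI(X;Y) in bits from paired samples via a joint histogram.
    MI = sum_ij p(i,j) * log2(p(i,j) / (p(i) * p(j))); zero for
    independent variables, equal to the entropy for identical ones."""
    lo_x, hi_x = min(xs), max(xs)
    lo_y, hi_y = min(ys), max(ys)

    def idx(v, lo, hi):
        if hi == lo:
            return 0  # degenerate range: everything in one bin
        return min(int((v - lo) / (hi - lo) * bins), bins - 1)

    joint = [[0] * bins for _ in range(bins)]
    for x, y in zip(xs, ys):
        joint[idx(x, lo_x, hi_x)][idx(y, lo_y, hi_y)] += 1
    n = len(xs)
    px = [sum(row) / n for row in joint]
    py = [sum(joint[i][j] for i in range(bins)) / n for j in range(bins)]
    mi = 0.0
    for i in range(bins):
        for j in range(bins):
            pxy = joint[i][j] / n
            if pxy > 0:
                mi += pxy * log2(pxy / (px[i] * py[j]))
    return mi
```

In a fusion context the samples would be co-registered pixel values from the two sources; a higher MI indicates stronger shared structure between them.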

24 pages, 6314 KB  
Article
CDFAN: Cross-Domain Fusion Attention Network for Pansharpening
by Jinting Ding, Honghui Xu and Shengjun Zhou
Entropy 2025, 27(6), 567; https://doi.org/10.3390/e27060567 - 27 May 2025
Cited by 1 | Viewed by 1304
Abstract
Pansharpening provides a computational solution to the resolution limitations of imaging hardware by enhancing the spatial quality of low-resolution multispectral (LRMS) images using high-resolution panchromatic (PAN) guidance. From an information-theoretic perspective, the task involves maximizing the mutual information between PAN and LRMS inputs while minimizing spectral distortion and redundancy in the fused output. However, traditional spatial-domain methods often fail to preserve high-frequency texture details, leading to entropy degradation in the resulting images. On the other hand, frequency-based approaches struggle to effectively integrate spatial and spectral cues, often neglecting the underlying information content distributions across domains. To address these shortcomings, we introduce a novel architecture, termed the Cross-Domain Fusion Attention Network (CDFAN), specifically designed for the pansharpening task. CDFAN is composed of two core modules: the Multi-Domain Interactive Attention (MDIA) module and the Spatial Multi-Scale Enhancement (SMCE) module. The MDIA module utilizes the discrete wavelet transform (DWT) to decompose the PAN image into frequency sub-bands, which are then employed to construct attention mechanisms across both wavelet and spatial domains. Specifically, wavelet-domain features are used to formulate query vectors, while key features are derived from the spatial domain, allowing attention weights to be computed over multi-domain representations. This design facilitates more effective fusion of spectral and spatial cues, contributing to superior reconstruction of high-resolution multispectral (HRMS) images. Complementing this, the SMCE module integrates multi-scale convolutional pathways to reinforce spatial detail extraction at varying receptive fields. Additionally, an Expert Feature Compensator is introduced to adaptively balance contributions from different scales, thereby optimizing the trade-off between local detail preservation and global contextual understanding. Comprehensive experiments conducted on standard benchmark datasets demonstrate that CDFAN achieves notable improvements over existing state-of-the-art pansharpening methods, delivering enhanced spectral–spatial fidelity and producing images with higher perceptual quality. Full article
(This article belongs to the Section Signal and Data Analysis)
