Search Results (10)

Search Parameters:
Keywords = hybrid pansharpening

21 pages, 9082 KiB  
Article
Multi-Source Pansharpening of Island Sea Areas Based on Hybrid-Scale Regression Optimization
by Dongyang Fu, Jin Ma, Bei Liu and Yan Zhu
Sensors 2025, 25(11), 3530; https://doi.org/10.3390/s25113530 - 4 Jun 2025
Viewed by 984
Abstract
To address the demand for high spatial resolution data in the water color inversion task for multispectral satellite images of island sea areas, a feasible solution is to fuse multi-source remote sensing data. However, the inherent biases among multi-source sensors and the spectral distortion caused by the dynamic changes of water bodies in island sea areas limit fusion accuracy, necessitating more precise fusion solutions. This paper therefore proposes a pansharpening method based on Hybrid-Scale Mutual Information (HSMI). First, the method integrates hybrid-scale information into scale regression, effectively enhancing the accuracy and consistency of the pansharpening results. Second, it introduces mutual information to quantify the spatial–spectral correlation among multi-source data and balance the fused representation across scales. Finally, the performance of several popular pansharpening methods was compared and analyzed using coupled Sentinel-2 and Sentinel-3 datasets over typical island and reef waters of the South China Sea. The results show that HSMI enhances the spatial details and edge clarity of islands while better preserving the spectral characteristics of the surrounding sea areas.
(This article belongs to the Section Sensing and Imaging)
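As an illustration of the mutual-information idea in the abstract above, a generic histogram-based estimator can be sketched in a few lines (this is the textbook plug-in estimator, not the authors' exact HSMI formulation; the function name and bin count are ours):

```python
import numpy as np

def mutual_information(pan, band, bins=32):
    """Histogram-based mutual information between two images (nats).

    Plug-in estimator: MI(X; Y) = sum p(x, y) * log(p(x, y) / (p(x) p(y))).
    """
    joint, _, _ = np.histogram2d(pan.ravel(), band.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability table
    px = pxy.sum(axis=1, keepdims=True)       # marginal of first image
    py = pxy.sum(axis=0, keepdims=True)       # marginal of second image
    nz = pxy > 0                              # avoid log(0) on empty cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

A higher value indicates stronger statistical dependence between the panchromatic image and the band, which is the quantity HSMI uses to weigh spatial–spectral correlation.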

22 pages, 15242 KiB  
Article
Pan-Sharpening Network of Multi-Spectral Remote Sensing Images Using Two-Stream Attention Feature Extractor and Multi-Detail Injection (TAMINet)
by Jing Wang, Jiaqing Miao, Gaoping Li, Ying Tan, Shicheng Yu, Xiaoguang Liu, Li Zeng and Guibing Li
Remote Sens. 2024, 16(1), 75; https://doi.org/10.3390/rs16010075 - 24 Dec 2023
Viewed by 2074
Abstract
Achieving a balance between spectral resolution and spatial resolution in multi-spectral remote sensing images is challenging due to physical constraints, and pan-sharpening technology was developed to address this challenge. While significant progress has recently been achieved in deep-learning-based pan-sharpening, most existing approaches face two primary limitations: (1) convolutional neural networks (CNNs) struggle with long-range dependencies, and (2) substantial detail is lost during deep network training. Moreover, despite their pan-sharpening capabilities, these methods generalize poorly to full-sized raw images because of scaling disparities, rendering them less practical. To tackle these issues, this study introduces TAMINet, a multi-spectral remote sensing image fusion network that leverages a two-stream coordinate attention mechanism and multi-detail injection. A two-stream feature extractor augmented with coordinate attention (CA) blocks first derives modality-specific features from low-resolution multi-spectral (LRMS) and panchromatic (PAN) images, followed by feature-domain fusion and pan-sharpened image reconstruction. Crucially, a multi-detail injection approach reintroduces details lost earlier in the process during fusion and reconstruction, minimizing high-frequency detail loss. Finally, a novel hybrid loss function combining spatial loss, spectral loss, and an additional loss component is proposed to enhance performance. The method was validated on WorldView-2, IKONOS, and QuickBird satellite images and benchmarked against current state-of-the-art techniques. Experimental findings reveal that TAMINet significantly improves pan-sharpening performance on large-scale images, underscoring its potential to enhance multi-spectral remote sensing image quality.
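The hybrid loss described above combines spatial and spectral terms; a minimal sketch of such a combination is an L1 spatial term plus a spectral angle mapper (SAM) term (the weights and the exact terms of TAMINet's loss are not reproduced here; names and the weight `alpha` are ours):

```python
import numpy as np

def sam_loss(pred, ref, eps=1e-8):
    """Mean spectral angle (radians) between (H, W, C) predicted and reference cubes."""
    dot = np.sum(pred * ref, axis=-1)
    norm = np.linalg.norm(pred, axis=-1) * np.linalg.norm(ref, axis=-1)
    cos = np.clip(dot / (norm + eps), -1.0, 1.0)  # guard against rounding past 1
    return float(np.mean(np.arccos(cos)))

def hybrid_loss(pred, ref, alpha=0.5):
    """Illustrative spatial (L1) + spectral (SAM) combination; alpha is a guess."""
    spatial = float(np.mean(np.abs(pred - ref)))
    return spatial + alpha * sam_loss(pred, ref)
```

The spatial term penalizes per-pixel intensity error, while the SAM term penalizes changes in the direction of each pixel's spectral vector, which is the distortion pan-sharpening most needs to avoid.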

18 pages, 6287 KiB  
Article
SSML: Spectral-Spatial Mutual-Learning-Based Framework for Hyperspectral Pansharpening
by Xianlin Peng, Yihao Fu, Shenglin Peng, Kai Ma, Lu Liu and Jun Wang
Remote Sens. 2022, 14(18), 4682; https://doi.org/10.3390/rs14184682 - 19 Sep 2022
Viewed by 2713
Abstract
This paper considers problems associated with the large size of hyperspectral pansharpening networks and the difficulty of learning their spatial–spectral features. We propose a deep mutual-learning-based framework (SSML) for spectral–spatial information mining and hyperspectral pansharpening. In this framework, a deep mutual-learning mechanism enables spatial and spectral features to be learned from each other through information transmission, achieving better fusion results without introducing too many parameters. The proposed SSML framework consists of two separate networks for learning the spectral and spatial features of hyperspectral images (HSIs) and panchromatic images (PANs). A hybrid loss function with constrained spectral and spatial terms is designed to enforce mutual learning between the two networks. In addition, a mutual-learning strategy balances spectral and spatial feature learning so that each SSML branch outperforms its stand-alone counterpart. Extensive experimental results demonstrate the effectiveness of the mutual-learning mechanism and the proposed hybrid loss function for hyperspectral pansharpening. Furthermore, a typical deep-learning method was used to confirm the framework's capacity for generalization, and strong performance was observed in all cases. Multiple parameter analyses further showed that the method achieves better fusion results without adding too many parameters. Thus, SSML represents a promising framework for hyperspectral pansharpening.
(This article belongs to the Special Issue Remote Sensing and Machine Learning of Signal and Image Processing)
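The mutual-learning idea, with each branch pulled toward the other's output, can be sketched as follows (an illustrative numpy formulation with a guessed mimicry weight `lam`; this is not SSML's actual loss, whose exact terms the abstract does not specify):

```python
import numpy as np

def branch_losses(spec_out, spat_out, target, lam=0.1):
    """Illustrative mutual-learning losses for two branches.

    Each branch minimizes its own reconstruction error plus a mimicry
    term toward the other branch's current output, so spatial and
    spectral branches exchange information during training.
    """
    mse = lambda a, b: float(np.mean((a - b) ** 2))
    loss_spectral = mse(spec_out, target) + lam * mse(spec_out, spat_out)
    loss_spatial = mse(spat_out, target) + lam * mse(spat_out, spec_out)
    return loss_spectral, loss_spatial
```

In a real training loop each loss would only backpropagate into its own branch, with the other branch's output treated as a constant for that step.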

19 pages, 17851 KiB  
Article
Deep Pansharpening via 3D Spectral Super-Resolution Network and Discrepancy-Based Gradient Transfer
by Haonan Su, Haiyan Jin and Ce Sun
Remote Sens. 2022, 14(17), 4250; https://doi.org/10.3390/rs14174250 - 29 Aug 2022
Cited by 5 | Viewed by 2207
Abstract
High-resolution (HR) multispectral (MS) images contain sharper detail and structure than ground truth high-resolution hyperspectral (HS) images. In this paper, we propose a novel supervised learning method that treats pansharpening as the spectral super-resolution of high-resolution multispectral images and generates high-resolution hyperspectral images, learning the spectral mapping between the HR MS images and the ground truth HR HS images. To capture the spectral correlation between bands, we build a three-dimensional (3D) convolutional neural network (CNN). The network follows an encoder-decoder framework with three parts: spatial/spectral feature extraction from HR MS images and low-resolution (LR) HS images, feature transformation, and image reconstruction. In the reconstruction network, we design spatial–spectral fusion (SSF) blocks to reuse the extracted spatial and spectral features in the reconstructed feature layers. We then develop discrepancy-based deep hybrid gradient (DDHG) losses comprising a spatial–spectral gradient (SSG) loss and a deep gradient transfer (DGT) loss, which preserve the spatial and spectral gradients of the ground truth HR HS images and the HR MS images. To overcome the spectral and spatial discrepancies between the two images, we design a spectral downsampling (SD) network and a gradient consistency estimation (GCE) network for the hybrid gradient losses. Experiments show that the proposed method outperforms state-of-the-art methods both subjectively and objectively in terms of the structure and spectral preservation of the high-resolution hyperspectral images.
(This article belongs to the Section Remote Sensing Image Processing)
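A spatial–spectral gradient loss of the kind named above can be sketched with finite differences along the two spatial axes and the band axis (an illustrative reconstruction; the paper's exact DDHG weighting and deep gradient transfer are not reproduced):

```python
import numpy as np

def ssg_loss(pred, ref):
    """Spatial-spectral gradient loss sketch.

    L1 distance between finite differences of predicted and reference
    (H, W, C) cubes along height, width, and band axes, so the loss
    compares edges and spectral transitions rather than raw intensities.
    """
    loss = 0.0
    for axis in (0, 1, 2):  # spatial y, spatial x, spectral
        d = np.diff(pred, axis=axis) - np.diff(ref, axis=axis)
        loss += float(np.mean(np.abs(d)))
    return loss
```

A useful property of gradient losses is invariance to constant brightness offsets: shifting every pixel by the same amount leaves the loss essentially unchanged.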

18 pages, 15912 KiB  
Article
Hybrid Attention Based Residual Network for Pansharpening
by Qin Liu, Letong Han, Rui Tan, Hongfei Fan, Weiqi Li, Hongming Zhu, Bowen Du and Sicong Liu
Remote Sens. 2021, 13(10), 1962; https://doi.org/10.3390/rs13101962 - 18 May 2021
Cited by 19 | Viewed by 3791
Abstract
Pansharpening aims at fusing the rich spectral information of multispectral (MS) images with the spatial details of panchromatic (PAN) images to generate a fused image with high spatial and spectral resolution. In general, existing pansharpening methods suffer from spectral distortion and a lack of spatial detail, which can hinder accurate ground-object identification. To alleviate these problems, we propose a Hybrid Attention mechanism-based Residual Neural Network (HARNN). In the proposed network, an encoder attention module in the feature extraction part better utilizes the spectral and spatial features of MS and PAN images, while a fusion attention module is designed to alleviate spectral distortion and improve the contour details of the fused image. A series of ablation and comparison experiments were conducted on the GF-1 and GF-2 datasets. The fusion results, with fewer distorted pixels and more spatial detail, demonstrate that HARNN performs the pansharpening task effectively and outperforms state-of-the-art algorithms.

37 pages, 34428 KiB  
Article
Automatic 3-D Building Model Reconstruction from Very High Resolution Stereo Satellite Imagery
by Tahmineh Partovi, Friedrich Fraundorfer, Reza Bahmanyar, Hai Huang and Peter Reinartz
Remote Sens. 2019, 11(14), 1660; https://doi.org/10.3390/rs11141660 - 11 Jul 2019
Cited by 35 | Viewed by 8541
Abstract
Recent advances in the availability of very high-resolution (VHR) satellite data, together with efficient data acquisition and large area coverage, have driven their use in automatic 3-D building model reconstruction for applications that require large-scale and frequent updates, such as disaster monitoring and urban management. Digital Surface Models (DSMs) generated from stereo satellite imagery suffer from mismatches, missing values, and blunders, resulting in rough building shape representations. To handle 3-D building model reconstruction from such low-quality DSMs, we propose a novel automatic multistage hybrid method that uses DSMs together with orthorectified panchromatic (PAN) and pansharpened (PS) multispectral (MS) satellite imagery. The algorithm comprises multiple steps, including building boundary extraction and decomposition, image-based roof type classification, and initial roof parameter computation, which provide prior knowledge for the 3-D model fitting step. To fit 3-D models to the normalized DSM (nDSM) and select the best one, a parameter optimization method based on exhaustive search is applied sequentially in 2-D and 3-D. Finally, neighboring building models within a block are intersected to reconstruct the 3-D models of connected roofs. All experiments were conducted on a dataset covering four areas of Munich containing 208 buildings of varying complexity, and the results were evaluated both qualitatively and quantitatively. The proposed approach reliably reconstructs 3-D building models, even complex ones with several inner yards and multiple orientations, and provides a high level of automation by limiting the number of primitive roof types and performing automatic parameter initialization.
(This article belongs to the Special Issue 3D Reconstruction Based on Aerial and Satellite Imagery)

17 pages, 62953 KiB  
Article
Assessment of Spatiotemporal Fusion Algorithms for Planet and Worldview Images
by Chiman Kwan, Xiaolin Zhu, Feng Gao, Bryan Chou, Daniel Perez, Jiang Li, Yuzhong Shen, Krzysztof Koperski and Giovanni Marchisio
Sensors 2018, 18(4), 1051; https://doi.org/10.3390/s18041051 - 31 Mar 2018
Cited by 40 | Viewed by 5802
Abstract
Although Worldview-2 (WV) images (non-pansharpened) have 2-m resolution, revisit times for the same area may be seven days or more. In contrast, Planet images are collected by small satellites that cover the whole Earth almost daily, but their resolution is 3.125 m. It would be ideal to fuse images from these two satellites to generate high spatial resolution (2 m) and high temporal resolution (1 to 2 days) products for applications that require quick decisions, such as damage assessment and border monitoring. In this paper, we evaluate three approaches to fusing Worldview (WV) and Planet images: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), Flexible Spatiotemporal Data Fusion (FSDAF), and Hybrid Color Mapping (HCM), all of which have been applied to the fusion of MODIS and Landsat images in recent years. Experimental results using actual Planet and Worldview images demonstrate that the three approaches have comparable performance and can all generate high-quality prediction images.
(This article belongs to the Special Issue Sensors Signal Processing and Visual Computing)
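At its core, Hybrid Color Mapping learns a linear mapping between the bands of two sensors from co-registered pixel pairs; a minimal least-squares sketch (ignoring HCM's local and kernel variants; function names are ours):

```python
import numpy as np

def fit_color_mapping(src, dst):
    """Least-squares linear band mapping with bias (a simplified HCM core).

    src: (N, C_src) pixels from the lower-resolution sensor.
    dst: (N, C_dst) co-registered pixels from the reference sensor.
    Returns W of shape (C_src + 1, C_dst), bias included as the last row.
    """
    X = np.hstack([src, np.ones((src.shape[0], 1))])  # append bias column
    W, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return W

def apply_color_mapping(src, W):
    """Predict reference-sensor bands for new source pixels."""
    X = np.hstack([src, np.ones((src.shape[0], 1))])
    return X @ W
```

The mapping is fit on a date where both sensors observed the scene, then applied to a new source-sensor acquisition to predict the higher-resolution sensor's bands.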

20 pages, 7810 KiB  
Article
Hyperspectral Pansharpening Based on Intrinsic Image Decomposition and Weighted Least Squares Filter
by Wenqian Dong, Song Xiao, Yunsong Li and Jiahui Qu
Remote Sens. 2018, 10(3), 445; https://doi.org/10.3390/rs10030445 - 12 Mar 2018
Cited by 7 | Viewed by 5093
Abstract
Component substitution (CS) and multiresolution analysis (MRA) based methods have been adopted in hyperspectral pansharpening. The major contribution of this paper is a novel CS-MRA hybrid framework based on intrinsic image decomposition and a weighted least squares filter. First, the panchromatic (P) image is sharpened by a Gaussian-Laplacian enhancement algorithm, and the weighted least squares (WLS) filter is applied to the enhanced P image to extract its high-frequency information. Then, an MTF-based deblurring method is applied to the interpolated hyperspectral (HS) image, and intrinsic image decomposition (IID) is adopted to decompose the deblurred interpolated HS image into illumination and reflectance components. Finally, the detail map is generated as a compromise between the high-frequency information of the P image and the spatial information preserved in the illumination component of the HS image; it is further refined by the information ratios of the different HS bands and injected into the deblurred interpolated HS image. Experimental results indicate that the proposed method achieves better fusion results than several state-of-the-art hyperspectral pansharpening methods, demonstrating that combining an IID technique with a WLS filter is an effective approach to hyperspectral pansharpening.
(This article belongs to the Special Issue Hyperspectral Imaging and Applications)
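The detail-injection step described above (high-frequency extraction from the P image followed by injection into the interpolated HS cube) can be sketched as follows, with a simple box blur standing in for the paper's WLS filter and a single global gain standing in for the IID-derived, band-wise weights:

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box filter (a crude stand-in for edge-preserving WLS smoothing)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode='valid'), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode='valid'), 0, out)

def inject_details(hs_up, pan, gain=1.0, k=5):
    """Add the P image's high-frequency residual to every band of the
    upsampled HS cube. hs_up: (H, W, C); pan: (H, W)."""
    detail = pan - box_blur(pan, k)       # high-pass component of P
    return hs_up + gain * detail[..., None]
```

With a flat P image the extracted detail is zero and the HS cube passes through unchanged, which is the sanity check any injection scheme should satisfy.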

21 pages, 10633 KiB  
Article
A Hybrid Pansharpening Algorithm of VHR Satellite Images that Employs Injection Gains Based on NDVI to Reduce Computational Costs
by Jaewan Choi, Guhyeok Kim, Nyunghee Park, Honglyun Park and Seokkeun Choi
Remote Sens. 2017, 9(10), 976; https://doi.org/10.3390/rs9100976 - 21 Sep 2017
Cited by 16 | Viewed by 5309
Abstract
The objective of this work is to develop a pansharpening algorithm for very high resolution (VHR) satellite imagery that reduces the spectral distortion of the pansharpened images and enhances their spatial clarity at minimal computational cost. To minimize spectral distortion and computational cost, the global injection gain is transformed into local injection gains using the normalized difference vegetation index (NDVI), on the assumption that NDVI values are positively or negatively correlated with the local injection gains obtained from each band of the satellite data. The local injection gains are then applied in a hybrid pansharpening algorithm to optimize spatial clarity; in particular, a synthetic intensity image is determined using block-based linear regression. In experiments on imagery from several satellites, including KOrea Multi-Purpose SATellite-3 (KOMPSAT-3), KOMPSAT-3A, and WorldView-3, the results of the proposed hybrid pansharpening algorithm using NDVI in spectral mode (HP-NDVIspectral) achieve better Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS), spectral angle mapper (SAM), and Q4/Q8 values than existing pansharpening algorithms. In terms of spatial quality, the images pansharpened in spatial mode (HP-NDVIspatial) have higher average gradient (AG) values than those produced by existing methods. Moreover, the computational complexity of our method is similar to that of pansharpening algorithms based on a global injection model, even though it behaves like a local injection-gain model, which normally has a very high computational cost. Thus, the quantitative and qualitative assessments presented here indicate that the proposed algorithm can be utilized in applications that rely on spectral information or require high spatial clarity.
(This article belongs to the Section Remote Sensing Image Processing)
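The NDVI computation and the idea of modulating a single global injection gain per pixel can be sketched as follows (the NDVI formula is standard; the affine NDVI-to-gain mapping and its slope are our illustrative guess, not the paper's calibrated relation):

```python
import numpy as np

def ndvi(red, nir, eps=1e-8):
    """Normalized difference vegetation index, in [-1, 1]."""
    return (nir - red) / (nir + red + eps)

def local_gains(global_gain, ndvi_map, slope=0.5):
    """Illustrative NDVI-modulated injection gains.

    Scales one global gain per pixel by an affine function of NDVI,
    so vegetated and non-vegetated pixels receive different amounts
    of injected panchromatic detail.
    """
    return global_gain * (1.0 + slope * ndvi_map)
```

The appeal of this construction is cost: NDVI is a single band ratio, so per-pixel gains come almost for free compared with estimating a local regression in every window.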

14 pages, 2174 KiB  
Article
Satellite Image Pansharpening Using a Hybrid Approach for Object-Based Image Analysis
by Brian Alan Johnson, Ryutaro Tateishi and Nguyen Thanh Hoan
ISPRS Int. J. Geo-Inf. 2012, 1(3), 228-241; https://doi.org/10.3390/ijgi1030228 - 16 Oct 2012
Cited by 34 | Viewed by 10580
Abstract
Intensity-Hue-Saturation (IHS), Brovey Transform (BT), and Smoothing-Filter-Based Intensity Modulation (SFIM) algorithms were used to pansharpen GeoEye-1 imagery. The pansharpened images were then segmented in Berkeley Image Seg using a wide range of segmentation parameters, and the spatial and spectral accuracy of the image segments was measured. We found that pansharpening algorithms that preserve more of the spatial information of the higher-resolution panchromatic band (i.e., IHS and BT) led to more spatially accurate segmentations, while algorithms that minimize the distortion of the spectral information of the lower-resolution multispectral bands (i.e., SFIM) led to more spectrally accurate image segments. Based on these findings, we developed a new IHS-SFIM combination approach, specifically for object-based image analysis (OBIA), which combines the better spatial information of IHS with the more accurate spectral information of SFIM to produce image segments with very high spatial and spectral accuracy.
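The two components the hybrid approach combines are both classical and compact; a sketch of each (generalized IHS detail addition and SFIM ratio modulation; the authors' exact combination rule is not reproduced):

```python
import numpy as np

def gihs_pansharpen(ms_up, pan):
    """Generalized IHS: add the (P - intensity) residual to every band.
    ms_up: (H, W, C) upsampled multispectral cube; pan: (H, W)."""
    intensity = ms_up.mean(axis=-1)
    return ms_up + (pan - intensity)[..., None]

def sfim_pansharpen(ms_up, pan, pan_low, eps=1e-8):
    """SFIM: modulate each band by the ratio of P to its smoothed version
    pan_low, which preserves band ratios and hence spectral shape."""
    return ms_up * (pan / (pan_low + eps))[..., None]
```

GIHS injects the same additive detail into every band (sharp but spectrally aggressive), while SFIM's multiplicative ratio leaves each pixel's spectral proportions intact, which matches the paper's finding that IHS favors spatial and SFIM favors spectral accuracy.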
