Search Results (104)

Search Parameters:
Keywords = spectral sharpening

18 pages, 4631 KiB  
Article
Semantic Segmentation of Rice Fields in Sub-Meter Satellite Imagery Using an HRNet-CA-Enhanced DeepLabV3+ Framework
by Yifan Shao, Pan Pan, Hongxin Zhao, Jiale Li, Guoping Yu, Guomin Zhou and Jianhua Zhang
Remote Sens. 2025, 17(14), 2404; https://doi.org/10.3390/rs17142404 - 11 Jul 2025
Abstract
Accurate monitoring of rice-planting areas underpins food security and evidence-based farm management. Recent work has advanced along three complementary lines—multi-source data fusion (to mitigate cloud and spectral confusion), temporal feature extraction (to exploit phenology), and deep-network architecture optimization. However, even the best fusion- and time-series-based approaches still struggle to preserve fine spatial details in sub-meter scenes. Targeting this gap, we propose an HRNet-CA-enhanced DeepLabV3+ that retains the original model’s strengths while resolving its two key weaknesses: (i) detail loss caused by repeated down-sampling and feature-pyramid compression and (ii) boundary blurring due to insufficient multi-scale information fusion. The Xception backbone is replaced with a High-Resolution Network (HRNet) to maintain full-resolution feature streams through multi-resolution parallel convolutions and cross-scale interactions. A coordinate attention (CA) block is embedded in the decoder to strengthen spatially explicit context and sharpen class boundaries. The rice dataset comprised 23,295 images (11,295 rice + 12,000 non-rice) obtained through preprocessing and manual labeling, and the proposed model was benchmarked against classical segmentation networks. Our approach boosts boundary segmentation accuracy to 92.28% MIOU and raises texture-level discrimination to 95.93% F1, without extra inference latency. Although this study focuses on architecture optimization, the HRNet-CA backbone is readily compatible with future multi-source fusion and time-series modules, offering a unified path toward operational paddy mapping in fragmented sub-meter landscapes.
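The coordinate attention idea referenced in this abstract pools features along each spatial axis to form direction-aware gates. A toy NumPy sketch (illustrative only; the actual CA block uses learned 1×1 convolutions, which are omitted here):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention(feat):
    """Toy coordinate attention on a (C, H, W) feature map: average-pool
    along each spatial axis, turn the pooled profiles into gates, and
    reweight the features (learned 1x1 convolutions omitted)."""
    pool_h = feat.mean(axis=2, keepdims=True)   # (C, H, 1): per-row profile
    pool_w = feat.mean(axis=1, keepdims=True)   # (C, 1, W): per-column profile
    gate = sigmoid(pool_h) * sigmoid(pool_w)    # broadcasts to (C, H, W)
    return feat * gate

feat = np.random.default_rng(0).random((4, 8, 8))
out = coordinate_attention(feat)
```

Because the gates lie in (0, 1), the block reweights rather than replaces activations, which is why it can sharpen boundaries without disturbing the full-resolution feature stream.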

21 pages, 9082 KiB  
Article
Multi-Source Pansharpening of Island Sea Areas Based on Hybrid-Scale Regression Optimization
by Dongyang Fu, Jin Ma, Bei Liu and Yan Zhu
Sensors 2025, 25(11), 3530; https://doi.org/10.3390/s25113530 - 4 Jun 2025
Abstract
To address the demand for high-spatial-resolution data in water color inversion from multispectral satellite images of island sea areas, a feasible solution is multi-source remote sensing data fusion. However, the inherent biases among multi-source sensors and the spectral distortion caused by the dynamic changes of water bodies in island sea areas restrict the fusion accuracy, necessitating more precise fusion solutions. This paper therefore proposes a pansharpening method based on Hybrid-Scale Mutual Information (HSMI). First, the method integrates mixed-scale information into scale regression, which effectively enhances the accuracy and consistency of the pansharpening results. Second, it introduces mutual information to quantify the spatial–spectral correlation among multi-source data and balance the fusion representation under mixed scales. Finally, the performance of various popular pansharpening methods was compared and analyzed using coupled Sentinel-2 and Sentinel-3 datasets over typical island and reef waters of the South China Sea. The results show that HSMI enhances the spatial details and edge clarity of islands while better preserving the spectral characteristics of the surrounding sea areas.
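The mutual-information measure at the heart of HSMI can be sketched with a joint histogram. This minimal NumPy version (an illustration only, not the paper's scale-regression integration) shows how MI ranks a spatially correlated band above an unrelated one:

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram-based mutual information (in nats) between two image bands."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of x
    py = pxy.sum(axis=0, keepdims=True)      # marginal of y
    nz = pxy > 0                             # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
pan = rng.random((64, 64))                        # stand-in panchromatic band
related = pan + 0.05 * rng.normal(size=(64, 64))  # correlated spectral band
unrelated = rng.random((64, 64))                  # independent band
```

The correlated pair shares structure, so its MI is higher; a fusion method can use that ordering to weight how much each source contributes at a given scale.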
(This article belongs to the Section Sensing and Imaging)

15 pages, 3905 KiB  
Article
Conditional Skipping Mamba Network for Pan-Sharpening
by Yunxuan Tang, Huaguang Li, Peng Liu and Tong Li
Symmetry 2024, 16(12), 1681; https://doi.org/10.3390/sym16121681 - 19 Dec 2024
Abstract
Pan-sharpening aims to generate high-resolution multispectral (HRMS) images by combining high-resolution panchromatic (PAN) images with low-resolution multispectral (LRMS) data, while maintaining the symmetry of spatial and spectral characteristics. Traditional convolutional neural networks (CNNs) struggle with global dependency modeling due to local receptive fields, and Transformer-based models are computationally expensive. Recent Mamba models offer linear complexity and effective global modeling. However, existing Mamba-based methods lack sensitivity to local feature variations, leading to suboptimal fine-detail preservation. To address this, we propose a Conditional Skipping Mamba Network (CSMN), which symmetrically enhances global–local feature fusion through two modules: (1) the Adaptive Mamba Module (AMM), which improves global perception via adaptive spatial-frequency integration; and (2) the Cross-domain Mamba Module (CDMM), which optimizes cross-domain spectral–spatial representation. Experimental results on the IKONOS and WorldView-2 datasets demonstrate that CSMN surpasses existing state-of-the-art methods in spectral consistency and spatial-detail preservation, with more symmetric fine-detail preservation.
(This article belongs to the Section Computer)

15 pages, 6962 KiB  
Article
Perceptual Quality Assessment for Pansharpened Images Based on Deep Feature Similarity Measure
by Zhenhua Zhang, Shenfu Zhang, Xiangchao Meng, Liang Chen and Feng Shao
Remote Sens. 2024, 16(24), 4621; https://doi.org/10.3390/rs16244621 - 10 Dec 2024
Abstract
Pan-sharpening aims to generate high-resolution (HR) multispectral (MS) images by fusing HR panchromatic (PAN) and low-resolution (LR) MS images covering the same area. However, due to the lack of real HR MS reference images, accurately evaluating the quality of a fused image without a reference is challenging. On the one hand, most methods evaluate fused-image quality using full-reference indices on data simulated under the popular Wald’s protocol, which remains controversial for full-resolution fusion. On the other hand, the few existing no-reference methods mostly depend on manually crafted features and cannot fully capture the sensitive spatial/spectral distortions of the fused image. Therefore, this paper proposes a perceptual quality assessment method based on a deep feature similarity measure. The proposed network includes a spatial/spectral feature extraction and similarity measure (FESM) branch and an overall evaluation network. The Siamese FESM branch extracts the spatial and spectral deep features and calculates the similarity of each corresponding pair of deep features to obtain the spatial and spectral feature parameters; the overall evaluation network then produces the final quality assessment. Moreover, we propose to quantify both the overall precision of all the training samples and the variations among different fusion methods in a batch, thereby enhancing the network’s accuracy and robustness. The proposed method was trained and tested on a large subjective evaluation dataset comprising 13,620 fused images, and the experimental results demonstrate its effectiveness and competitive performance.

16 pages, 4099 KiB  
Article
Multi-Frequency Spectral–Spatial Interactive Enhancement Fusion Network for Pan-Sharpening
by Yunxuan Tang, Huaguang Li, Guangxu Xie, Peng Liu and Tong Li
Electronics 2024, 13(14), 2802; https://doi.org/10.3390/electronics13142802 - 16 Jul 2024
Abstract
The objective of pan-sharpening is to effectively fuse high-resolution panchromatic (PAN) images with limited spectral information and low-resolution multispectral (LR-MS) images, thereby generating a fused image with a high spatial resolution and rich spectral information. However, current fusion techniques face significant challenges, including insufficient edge detail, spectral distortion, increased noise, and limited robustness. To address these challenges, we propose a multi-frequency spectral–spatial interaction enhancement network (MFSINet) that comprises the spectral–spatial interactive fusion (SSIF) and multi-frequency feature enhancement (MFFE) subnetworks. The SSIF enhances both spatial and spectral fusion features by optimizing the characteristics of each spectral band through band-aware processing. The MFFE employs a variant of wavelet transform to perform multiresolution analyses on remote sensing scenes, enhancing the spatial resolution, spectral fidelity, and the texture and structural features of the fused images by optimizing directional and spatial properties. Moreover, qualitative analysis and quantitative comparative experiments using the IKONOS and WorldView-2 datasets indicate that this method significantly improves the fidelity and accuracy of the fused images.
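A single-level 2-D Haar decomposition illustrates the kind of multiresolution analysis MFFE builds on: one low-pass approximation plus three directional detail subbands (a minimal sketch; the paper uses its own wavelet variant):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns the approximation (LL) and
    the column-, row- and diagonal-difference detail subbands."""
    # Pairwise averages/differences along rows, then along columns.
    lo_r = (img[0::2, :] + img[1::2, :]) / 2.0
    hi_r = (img[0::2, :] - img[1::2, :]) / 2.0
    ll = (lo_r[:, 0::2] + lo_r[:, 1::2]) / 2.0
    lh = (lo_r[:, 0::2] - lo_r[:, 1::2]) / 2.0   # responds to vertical edges
    hl = (hi_r[:, 0::2] + hi_r[:, 1::2]) / 2.0   # responds to horizontal edges
    hh = (hi_r[:, 0::2] - hi_r[:, 1::2]) / 2.0   # diagonal detail
    return ll, lh, hl, hh

img = np.zeros((8, 8))
img[:, 3:] = 1.0                                  # vertical step edge
ll, lh, hl, hh = haar_dwt2(img)
```

The vertical edge lands in the column-difference subband while the other detail subbands stay at zero, which is what makes wavelet-domain detail injection directionally selective.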
(This article belongs to the Topic Computational Intelligence in Remote Sensing: 2nd Edition)

25 pages, 38210 KiB  
Article
Enhanced Hyperspectral Sharpening through Improved Relative Spectral Response Characteristic (R-SRC) Estimation for Long-Range Surveillance Applications
by Peter Yuen, Jonathan Piper, Catherine Yuen and Mehmet Cakir
Electronics 2024, 13(11), 2113; https://doi.org/10.3390/electronics13112113 - 29 May 2024
Abstract
The fusion of low-spatial-resolution hyperspectral images (LRHSI) with high-spatial-resolution multispectral images (HRMSI) for super-resolution (SR), using coupled non-negative matrix factorization (CNMF), has been widely studied in the past few decades. However, the matching of spectral characteristics between the LRHSI and HRMSI, which is required before they are jointly factorized, has rarely been studied. One objective of this work is to study how the relative spectral response characteristics (R-SRC) of the LRHSI and HRMSI can be better estimated, particularly when the SRC of the latter is unknown. To this end, three variants of enhanced R-SRC algorithms are proposed, and their effectiveness is assessed by applying them to sharpen data using CNMF. The quality of the output is assessed using the L1-norm error (L1NE) and the receiver operating characteristics (ROC) of target detections performed with the adaptive coherence estimator (ACE) algorithm. Experimental results obtained from two subsets of a real scene reveal a two- to three-fold reduction in reconstruction error when the scenes are sharpened by the proposed R-SRC algorithms, in comparison with Yokoya’s original algorithm. Experiments also reveal that a much higher proportion (by one order of magnitude) of small targets of 0.015 occupancy in the LRHSI scene can be detected by the proposed R-SRC methods than by the baseline algorithm, for an equal false-alarm rate. These results suggest that SR may enable long-range surveillance using low-cost HSI hardware, particularly once the remaining issues of occasional large reconstruction errors and the comparatively higher false-alarm rate for ‘rare’ species in the scene are understood and resolved in future research.
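At its core, estimating a relative spectral response means expressing each MS band as a weighted sum of HS bands; when the MS sensor's SRC is unknown, the weights can be fit by least squares from co-registered pixels. A minimal sketch under that assumption (not the paper's specific R-SRC variants; the synthetic response used here is invented for illustration):

```python
import numpy as np

def estimate_src(hs_pixels, ms_band):
    """Least-squares estimate of relative spectral response weights w such
    that ms_band ~= hs_pixels @ w.
    hs_pixels: (N, B) co-registered hyperspectral pixels; ms_band: (N,)."""
    w, *_ = np.linalg.lstsq(hs_pixels, ms_band, rcond=None)
    return w

rng = np.random.default_rng(1)
hs = rng.random((500, 30))                  # 500 pixels, 30 HS bands
true_w = np.zeros(30)
true_w[10:15] = [0.1, 0.2, 0.4, 0.2, 0.1]   # hypothetical band-limited response
ms = hs @ true_w + 0.001 * rng.normal(size=500)
w_hat = estimate_src(hs, ms)
```

With enough pixels and low noise, the recovered weights closely match the generating response; a realistic pipeline would add constraints such as non-negativity.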

21 pages, 28215 KiB  
Article
Spatial Resolution Enhancement of Vegetation Indexes via Fusion of Hyperspectral and Multispectral Satellite Data
by Luciano Alparone, Alberto Arienzo and Andrea Garzelli
Remote Sens. 2024, 16(5), 875; https://doi.org/10.3390/rs16050875 - 1 Mar 2024
Abstract
The definition and calculation of a spectral index suitable for characterizing vegetated landscapes depend on the number and widths of the bands of the imaging instrument. Here, we point out the advantages of fusing hyperspectral (HS) satellite data with the multispectral (MS) bands of Sentinel-2 to calculate vegetation indexes such as the normalized area over reflectance curve (NAOC) and the red-edge inflection point (REIP), which benefit from the availability of quasi-continuous pixel spectra. Unfortunately, while MS data may be acquired from satellite platforms with very high spatial resolution, HS data may not: despite their excellent spectral resolution, satellite imaging spectrometers currently resolve areas no greater than 30 × 30 m², where different thematic classes of landscape may be mixed together to form a unique pixel spectrum. A way to resolve mixed pixels is to fuse the HS dataset with data produced by an MS scanner that images the same scene with a finer spatial resolution. The HS dataset is sharpened from 30 m to 10 m by means of the Sentinel-2 bands, all previously brought to 10 m. To do so, the hyper-sharpening protocol, that is, m:n fusion, is exploited in two nested steps: the first brings the 20 m bands of Sentinel-2 to 10 m; the second sharpens all the 30 m HS bands to 10 m using the Sentinel-2 bands previously hyper-sharpened to 10 m. Results are presented on an agricultural test site in The Netherlands imaged by Sentinel-2 and by the satellite imaging spectrometer recently launched as part of the environmental mapping and analysis program (EnMAP). First, the statistical consistency of the fused HS data with the original MS and HS data is evaluated by means of analysis tools, both existing and developed ad hoc for this specific case. Then, the spatial and radiometric accuracy of REIP and NAOC calculated from fused HS data are analyzed on classes of pure and mixed pixels. On pure pixels, the values of REIP and NAOC calculated from fused data are consistent with those calculated from the original HS data. Conversely, mixed pixels are spectrally unmixed by the fusion process to resolve the 10 m scale of the MS data. A final discussion addresses how the proposed method can be used to track the temporal evolution of vegetation indexes when a single HS image and many MS images are available.
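For reference, with only four red-edge bands the REIP is classically approximated by linear interpolation of where reflectance crosses the midpoint between the red minimum and the NIR plateau; quasi-continuous HS spectra allow finer estimates of the same quantity. The standard four-band formula:

```python
def reip_linear(r670, r700, r740, r780):
    """Red-edge inflection point via the classic four-band linear
    interpolation (Guyot-Baret); band reflectances at the named
    wavelengths in nm, result in nm."""
    red_edge_mid = (r670 + r780) / 2.0           # midpoint reflectance
    return 700.0 + 40.0 * (red_edge_mid - r700) / (r740 - r700)

# Illustrative reflectances for a healthy canopy (synthetic values).
r = reip_linear(0.05, 0.12, 0.35, 0.48)          # ~725.2 nm
```

A healthy canopy yields a REIP in the 700–740 nm interval; stress shifts the inflection point toward shorter wavelengths.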

28 pages, 11352 KiB  
Article
Pansharpening Low-Altitude Multispectral Images of Potato Plants Using a Generative Adversarial Network
by Sourav Modak, Jonathan Heil and Anthony Stein
Remote Sens. 2024, 16(5), 874; https://doi.org/10.3390/rs16050874 - 1 Mar 2024
Abstract
Image preprocessing and fusion are commonly used for enhancing remote-sensing images, but the resulting images often lack useful spatial features. As the majority of research on image fusion has concentrated on the satellite domain, the image-fusion task for Unmanned Aerial Vehicle (UAV) images has received minimal attention. This study investigated an image-improvement strategy integrating image preprocessing and fusion for UAV images, with the goal of improving spatial details and avoiding color distortion in the fused images. Techniques such as image denoising, sharpening, and Contrast Limited Adaptive Histogram Equalization (CLAHE) were used in the preprocessing step: the unsharp mask algorithm for image sharpening, and Wiener and total-variation denoising methods for image denoising. The image-fusion process was conducted in two steps: (1) fusing the spectral bands into one multispectral image and (2) pansharpening the panchromatic and multispectral images using the PanColorGAN model. The effectiveness of the proposed approach was evaluated using quantitative and qualitative assessment techniques, including no-reference image quality assessment (NR-IQA) metrics. In this experiment, the unsharp mask algorithm noticeably improved the spatial details of the pansharpened images, while no preprocessing algorithm dramatically improved their color quality. The proposed fusion approach improved the images without introducing unnecessary blurring or color distortion.
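The unsharp-mask step is compact to sketch: subtract a blurred copy of the image and add the scaled residual back. A minimal NumPy version with a box blur standing in for the usual Gaussian:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box blur with edge padding (stand-in for a Gaussian)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img, amount=1.0, k=3):
    """Sharpen by adding back the high-frequency residual, scaled by amount."""
    return img + amount * (img - box_blur(img, k))

img = np.zeros((8, 8))
img[:, 4:] = 1.0                 # a vertical step edge
sharp = unsharp_mask(img, amount=1.0)
```

The characteristic over/undershoot on either side of the edge is exactly the "halo" that makes edges look crisper; `amount` trades sharpness against that artifact.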
(This article belongs to the Special Issue Image Processing from Aerial and Satellite Imagery)

16 pages, 10346 KiB  
Article
GSA-SiamNet: A Siamese Network with Gradient-Based Spatial Attention for Pan-Sharpening of Multi-Spectral Images
by Yi Gao, Mengjiao Qin, Sensen Wu, Feng Zhang and Zhenhong Du
Remote Sens. 2024, 16(4), 616; https://doi.org/10.3390/rs16040616 - 7 Feb 2024
Abstract
Pan-sharpening is a fusion process that combines a low-spatial-resolution multi-spectral image, which has rich spectral characteristics, with a high-spatial-resolution panchromatic (PAN) image, which lacks them. Most previous learning-based approaches rely on the scale-shift assumption, which may not hold in the full-resolution domain. To solve this issue, we regard pan-sharpening as a multi-task problem and propose a Siamese network with Gradient-based Spatial Attention (GSA-SiamNet). GSA-SiamNet consists of four modules: a two-stream feature extraction module, a feature fusion module, a gradient-based spatial attention (GSA) module, and a progressive up-sampling module. In the GSA module, we use Laplacian and Sobel operators to extract gradient information from PAN images. Spatial attention factors, learned from the gradient prior, are multiplied in during the feature fusion, up-sampling, and reconstruction stages; these factors help keep high-frequency information on the feature map and suppress redundant information. We also design a multi-resolution loss function that guides the training process under the constraints of both the reduced- and full-resolution domains. Experimental results on WorldView-3 satellite images acquired over Moscow and San Juan demonstrate that the proposed GSA-SiamNet is superior to traditional and other deep-learning-based methods.
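The gradient prior used by the GSA module can be illustrated directly: convolve the PAN image with Sobel and Laplacian kernels and normalize the combined magnitude into a [0, 1] attention map (a sketch of the prior only; the learned attention factors are omitted):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def conv2_same(img, kernel):
    """Naive 'same'-size 2-D convolution with zero padding."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * kernel[::-1, ::-1])
    return out

def gradient_attention(pan):
    """Normalized gradient magnitude, usable as a spatial attention map."""
    gx = conv2_same(pan, SOBEL_X)
    gy = conv2_same(pan, SOBEL_X.T)
    lap = conv2_same(pan, LAPLACIAN)
    mag = np.abs(gx) + np.abs(gy) + np.abs(lap)
    return mag / (mag.max() + 1e-8)

pan = np.zeros((8, 8))
pan[:, 4:] = 1.0                 # a vertical edge in the PAN band
att = gradient_attention(pan)
```

The map peaks along edges and stays near zero on flat regions, so multiplying it into feature maps emphasizes exactly the high-frequency structure pan-sharpening is meant to transfer.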
(This article belongs to the Special Issue Remote Sensing Data Fusion and Applications)

31 pages, 30389 KiB  
Article
Preharvest Durum Wheat Yield, Protein Content, and Protein Yield Estimation Using Unmanned Aerial Vehicle Imagery and Pléiades Satellite Data in Field Breeding Experiments
by Dessislava Ganeva, Eugenia Roumenina, Petar Dimitrov, Alexander Gikov, Violeta Bozhanova, Rangel Dragov, Georgi Jelev and Krasimira Taneva
Remote Sens. 2024, 16(3), 559; https://doi.org/10.3390/rs16030559 - 31 Jan 2024
Abstract
Unmanned aerial vehicles (UAVs) are extensively used to gather remote sensing data, offering high image resolution and swift data acquisition despite being labor-intensive. In contrast, satellite-based remote sensing, providing sub-meter spatial resolution and frequent revisit times, could serve as an alternative data source for phenotyping. In this study, we separately evaluated pan-sharpened Pléiades satellite imagery (50 cm) and UAV imagery (2.5 cm) for phenotyping durum wheat in small-plot (12 m × 1.10 m) breeding trials. The Gaussian process regression (GPR) algorithm, which provides predictions with uncertainty estimates, was trained with spectral bands and a selected set of vegetation indexes (VIs) as independent variables. Grain protein content (GPC) was better predicted with Pléiades data at the growth stage of 20% of inflorescence emerged, but with only moderate accuracy (validation R2: 0.58). Grain yield (GY) and protein yield (PY) were better predicted using UAV data at the late milk and watery ripe growth stages, respectively (validation R2: 0.67 and 0.62, respectively). The cumulative VIs (the sum of VIs over the available images within the growing season) did not increase the accuracy of the models for either sensor. When mapping the estimated parameters, the spatial resolution of Pléiades revealed certain limitations. Nevertheless, our findings regarding GPC suggest that the usefulness of pan-sharpened Pléiades images for phenotyping should not be dismissed and warrants further exploration, particularly for breeding experiments with larger plot sizes.
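What distinguishes GPR here is that every prediction comes with an uncertainty estimate. A minimal 1-D GP regressor with an RBF kernel in pure NumPy (an illustration; the study's models take multiple spectral bands and VIs as inputs):

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential (RBF) kernel between two 1-D input arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2, length=1.0):
    """GP posterior mean and per-point standard deviation (RBF kernel)."""
    K = rbf(x_train, x_train, length) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test, length)
    Kss = rbf(x_test, x_test, length)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = Kss - v.T @ v
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

x_train = np.linspace(0.0, 5.0, 10)
y_train = np.sin(x_train)
x_test = np.array([2.5, 10.0])      # one point inside, one far outside
mean, std = gp_predict(x_train, y_train, x_test)
```

Inside the training range the posterior tracks the data with a small standard deviation; far outside, the mean reverts toward the prior and the standard deviation grows, which is precisely the per-prediction uncertainty the abstract highlights.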

22 pages, 15242 KiB  
Article
Pan-Sharpening Network of Multi-Spectral Remote Sensing Images Using Two-Stream Attention Feature Extractor and Multi-Detail Injection (TAMINet)
by Jing Wang, Jiaqing Miao, Gaoping Li, Ying Tan, Shicheng Yu, Xiaoguang Liu, Li Zeng and Guibing Li
Remote Sens. 2024, 16(1), 75; https://doi.org/10.3390/rs16010075 - 24 Dec 2023
Abstract
Achieving a balance between spectral resolution and spatial resolution in multi-spectral remote sensing images is challenging due to physical constraints; pan-sharpening technology was developed to address this challenge. While significant progress has recently been achieved in deep-learning-based pan-sharpening techniques, most existing deep learning approaches face two primary limitations: (1) convolutional neural networks (CNNs) struggle with long-range dependency issues, and (2) significant detail is lost during deep network training. Moreover, despite these methods’ pan-sharpening capabilities, their generalization to full-sized raw images remains problematic due to scaling disparities, rendering them less practical. To tackle these issues, we introduce a multi-spectral remote sensing image fusion network, termed TAMINet, which leverages a two-stream coordinate attention mechanism and multi-detail injection. Initially, a two-stream feature extractor augmented with the coordinate attention (CA) block is employed to derive modal-specific features from low-resolution multi-spectral (LRMS) images and panchromatic (PAN) images. This is followed by feature-domain fusion and pan-sharpening image reconstruction. Crucially, a multi-detail injection approach is incorporated during fusion and reconstruction, ensuring the reintroduction of details lost earlier in the process and minimizing high-frequency detail loss. Finally, a novel hybrid loss function is proposed that incorporates spatial loss, spectral loss, and an additional loss component to enhance performance. The proposed methodology’s effectiveness was validated through experiments on WorldView-2, IKONOS, and QuickBird satellite images, benchmarked against current state-of-the-art techniques. Experimental findings reveal that TAMINet significantly elevates pan-sharpening performance for large-scale images, underscoring its potential to enhance multi-spectral remote sensing image quality.

34 pages, 5656 KiB  
Article
FSSBP: Fast Spatial–Spectral Back Projection Based on Pan-Sharpening Iterative Optimization
by Jingzhe Tao, Weihan Ni, Chuanming Song and Xianghai Wang
Remote Sens. 2023, 15(18), 4543; https://doi.org/10.3390/rs15184543 - 15 Sep 2023
Abstract
Pan-sharpening is an important means to improve the spatial resolution of multispectral (MS) images. Although a large number of pan-sharpening methods have been developed, improving the spatial resolution of MS imagery while effectively maintaining its spectral information has not yet been well solved, and this has also been taken as a criterion to measure whether the sharpened product can meet practical needs. The back-projection (BP) method iteratively injects spectral information back into the sharpened results in a post-processing manner, which can effectively improve the generally unsatisfactory spectral consistency of pan-sharpening methods. Although BP has received some attention in recent years in pan-sharpening research, existing work is basically limited to direct use of the BP process and lacks a deeper intrinsic integration with pan-sharpening. In this paper, we analyze the current problems of improving spectral consistency based on BP in pan-sharpening; the main innovations carried out on this basis are as follows. (1) We introduce the spatial consistency condition and propose the spatial–spectral BP (SSBP) method, which takes into account both spatial and spectral consistency conditions, improving spectral quality while effectively resolving spatial distortion in the results. (2) The proposed SSBP method is analyzed theoretically, and the convergence condition of SSBP, as well as a more relaxed convergence condition for a specific BP type, degradation-transpose BP, are given and proved. (3) Fast computation of BP and SSBP is investigated, and non-iterative fast BP (FBP) and fast SSBP (FSSBP) methods are given in closed form, with significant improvement in computational efficiency. Experimental comparisons with combinations formed by seven different BP-related post-processing methods and up to 18 typical base methods show that the proposed methods are generally applicable to optimizing the spatial–spectral quality of various sharpening methods. The fast method improves computational speed by at least 27.5 times compared to the iterative version while maintaining the evaluation metrics well.
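The plain BP iteration that SSBP builds on is compact: repeatedly degrade the current sharpened image, compare with the MS observation, and inject the upsampled residual back. A minimal sketch with block-mean degradation (SSBP adds a spatial-consistency term on top of this):

```python
import numpy as np

def downsample(img, r=2):
    """Block-mean degradation by factor r (a stand-in for the sensor MTF)."""
    h, w = img.shape
    return img.reshape(h // r, r, w // r, r).mean(axis=(1, 3))

def upsample(img, r=2):
    """Nearest-neighbour expansion by factor r."""
    return np.kron(img, np.ones((r, r)))

def back_project(sharpened, ms, iters=10, r=2):
    """Iterative back-projection: x <- x + upsample(ms - downsample(x))."""
    x = sharpened.astype(float).copy()
    for _ in range(iters):
        x += upsample(ms - downsample(x, r), r)
    return x

rng = np.random.default_rng(0)
ms = rng.random((4, 4))                                   # observed MS patch
guess = upsample(ms) + 0.1 * rng.normal(size=(8, 8))      # initial sharpened estimate
x = back_project(guess, ms, iters=5)
```

After back-projection, degrading the result reproduces the MS observation, which is the spectral-consistency property the paper's analysis formalizes; with this particular degradation pair the fixed point is reached in a single iteration.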

22 pages, 23048 KiB  
Article
A Novel Adaptively Optimized PCNN Model for Hyperspectral Image Sharpening
by Xinyu Xu, Xiaojun Li, Yikun Li, Lu Kang and Junfei Ge
Remote Sens. 2023, 15(17), 4205; https://doi.org/10.3390/rs15174205 - 26 Aug 2023
Abstract
Hyperspectral satellite imagery has developed rapidly over the last decade because of its high spectral resolution and strong material recognition capability. Nonetheless, the spatial resolution of available hyperspectral imagery is inferior, severely affecting the accuracy of ground object identification. In this paper, we propose an adaptively optimized pulse-coupled neural network (PCNN) model to sharpen hyperspectral imagery to the scale of multispectral imagery. First, a SAM-CC strategy is designed to assign hyperspectral bands to the multispectral bands. Subsequently, an improved PCNN (IPCNN) is proposed that considers the differences of the neighboring neurons. Furthermore, the Chameleon Swarm Algorithm (CSA) is adopted to generate the optimum fusion parameters for the IPCNN, so that the injected spatial details are acquired in the irregular regions generated by the IPCNN. Extensive experiments validate the superiority of the proposed model, confirming that our method can produce hyperspectral imagery with high spatial resolution, yielding the best spatial details and spectral information among state-of-the-art approaches. Several ablation studies further corroborate the efficiency of our method.
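A minimal scalar-linking PCNN conveys the firing dynamics such models exploit: each neuron pulses when its stimulus exceeds a decaying threshold, firing raises the threshold again, and neighbour pulses couple in through the linking term, so brighter regions pulse earlier and in groups (the paper's IPCNN adds neighbour-difference terms and CSA-tuned parameters; the constants here are illustrative):

```python
import numpy as np

def pcnn_first_fire(img, iters=10, alpha=0.5, beta=0.2, vt=5.0):
    """Minimal PCNN: returns the iteration at which each pixel first pulses.
    alpha: threshold decay rate; beta: linking strength; vt: post-fire boost."""
    F = img.astype(float)                 # feeding input = pixel intensity
    theta = np.ones_like(F)               # dynamic thresholds
    fired = np.zeros_like(F)              # first-fire iteration map (0 = never)
    Y = np.zeros_like(F)                  # pulse output
    for t in range(1, iters + 1):
        # Linking input: mean of the 4-neighbour pulses (periodic borders).
        L = (np.roll(Y, 1, 0) + np.roll(Y, -1, 0) +
             np.roll(Y, 1, 1) + np.roll(Y, -1, 1)) / 4.0
        U = F * (1.0 + beta * L)          # modulated internal activity
        Y = (U > theta).astype(float)
        fired[(fired == 0) & (Y == 1)] = t
        theta = theta * np.exp(-alpha) + vt * Y   # decay, then boost where fired
    return fired

img = np.full((10, 10), 0.1)
img[2:6, 2:6] = 0.9                       # a bright square on a dark background
fired = pcnn_first_fire(img)
```

Pixels of similar intensity fire at the same iteration, so the first-fire map partitions the image into the irregular regions that the sharpening step injects details into.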

21 pages, 3454 KiB  
Article
Swin–MRDB: Pan-Sharpening Model Based on the Swin Transformer and Multi-Scale CNN
by Zifan Rong, Xuesong Jiang, Linfeng Huang and Hongping Zhou
Appl. Sci. 2023, 13(15), 9022; https://doi.org/10.3390/app13159022 - 7 Aug 2023
Abstract
Pan-sharpening aims to create high-resolution spectral images by fusing low-resolution hyperspectral (HS) images with high-resolution panchromatic (PAN) images. Inspired by the Swin transformer used in image classification tasks, this research constructs a three-stream pan-sharpening network based on the Swin transformer and a multi-scale feature extraction module. Unlike traditional convolutional neural network (CNN) pan-sharpening models, we use the Swin transformer to establish global connections within the image and combine it with a multi-scale feature extraction module to extract local features of different sizes. The model combines the advantages of the Swin transformer and the CNN, enabling fused images to retain good local detail and global context while mitigating distortion in the hyperspectral bands. To verify the effectiveness of the method, this paper evaluates the fused images with subjective visual and quantitative indicators. Experimental results show that the proposed method preserves the spatial and spectral information of the images better than classical and recent models. Full article
(This article belongs to the Special Issue Intelligent Computing and Remote Sensing)
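As a point of reference for the classical pan-sharpening models the paper compares against, a minimal Brovey-transform baseline (a standard component-substitution method, not the Swin–MRDB model itself) can be written as:

```python
import numpy as np

def brovey_sharpen(ms_up, pan, eps=1e-8):
    """Classical Brovey transform: scale each multispectral band by the
    ratio of the PAN image to the MS intensity (per-pixel band mean).
    ms_up: (B, H, W) MS cube upsampled to the PAN grid; pan: (H, W)."""
    intensity = ms_up.mean(axis=0)
    return ms_up * (pan / (intensity + eps))
```

Because spatial detail is injected multiplicatively from the PAN band, Brovey tends to sharpen edges well but can distort spectra, which is exactly the weakness that learned fusion models such as the one above aim to address.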
18 pages, 13656 KiB  
Article
Fine-Resolution Forest Height Estimation by Integrating ICESat-2 and Landsat 8 OLI Data with a Spatial Downscaling Method for Aboveground Biomass Quantification
by Yingxuan Wang, Yuning Peng, Xudong Hu and Penglin Zhang
Forests 2023, 14(7), 1414; https://doi.org/10.3390/f14071414 - 11 Jul 2023
Cited by 11 | Viewed by 2176
Abstract
Rapid and accurate estimation of forest aboveground biomass (AGB) with fine details is crucial for effective forest monitoring and management, where forest height plays a key role in AGB quantification. In this study, we propose a random forest (RF)-based downscaling method to map forest height and biomass at a 15-m resolution by integrating Landsat 8 OLI and Ice, Cloud and Land Elevation Satellite-2 (ICESat-2) LiDAR data. ICESat-2 photon data are used to derive canopy parameters along 15-m segments, which serve as sample plots for the extrapolation of the discrete forest heights. Fourteen variables associated with spectral features, textural features and vegetation indices are extracted from pan-sharpened Landsat 8 images. A regression function is established between these variables and the ICESat-2-derived forest heights to produce continuous 15-m forest height data from the 30-m forest height product using the RF algorithm. Finally, wall-to-wall forest AGB at 15-m spatial resolution is obtained by applying an allometric model specific to the forest type and height. Taking Jilin Province in northeast China as the study area, the forest AGB estimates reveal a mean density of 61.15 Mg/ha with a standard deviation of 89.46 Mg/ha. The R2 between our predicted forest heights and the ICESat-2-derived heights reaches 0.93. Validation at the county scale demonstrates reasonable correspondence between the estimated AGB and reference data, with R2 values consistently exceeding 0.65. This downscaling method provides a promising scheme for estimating spatial forest AGB in fine detail and enhancing the accuracy of AGB estimation, which may facilitate carbon stock measurement and carbon cycle studies. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
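The downscaling workflow above (fit a regressor between image-derived predictors and ICESat-2 segment heights, then apply it wall-to-wall at 15 m) can be sketched with ordinary least squares standing in for the random forest; the function names and the linear model are illustrative assumptions, not the paper's method:

```python
import numpy as np

def fit_height_model(X_plots, h_plots):
    # Least-squares stand-in for the paper's RF regressor.
    # X_plots: (n, p) predictors sampled at ICESat-2 segments
    # (spectral, textural, vegetation-index variables);
    # h_plots: (n,) segment-derived canopy heights.
    A = np.column_stack([X_plots, np.ones(len(X_plots))])
    coef, *_ = np.linalg.lstsq(A, h_plots, rcond=None)
    return coef

def predict_height(X_grid, coef):
    # Apply the fitted model wall-to-wall at every 15-m pixel,
    # where X_grid holds the same p predictors per pixel.
    A = np.column_stack([X_grid, np.ones(len(X_grid))])
    return A @ coef
```

In practice an RF regressor would replace the linear fit to capture the nonlinear relationship between the fourteen predictors and canopy height, but the two-stage fit-then-extrapolate structure is the same.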