Search Results (147)

Search Parameters:
Keywords = spectral superresolution

27 pages, 4947 KiB  
Article
From Coarse to Crisp: Enhancing Tree Species Maps with Deep Learning and Satellite Imagery
by Taebin Choe, Seungpyo Jeon, Byeongcheol Kim and Seonyoung Park
Remote Sens. 2025, 17(13), 2222; https://doi.org/10.3390/rs17132222 - 28 Jun 2025
Viewed by 423
Abstract
Accurate, detailed, and up-to-date tree species distribution information is essential for effective forest management and environmental research. However, existing tree species maps face limitations in resolution and update cycle, making it difficult to meet modern demands. To overcome these limitations, this study proposes a novel framework that utilizes existing medium-resolution national tree species maps as ‘weak labels’ and fuses multi-temporal Sentinel-2 and PlanetScope satellite imagery data. Specifically, a super-resolution (SR) technique, using PlanetScope imagery as a reference, was first applied to Sentinel-2 data to enhance its resolution to 2.5 m. Then, these enhanced Sentinel-2 bands were combined with PlanetScope bands to construct the final multi-spectral, multi-temporal input data. Deep learning (DL) model training data was constructed by strategically sampling information-rich pixels from the national tree species map. Applying the proposed methodology to Sobaeksan and Jirisan National Parks in South Korea, the performance of various machine learning (ML) and deep learning (DL) models was compared, including traditional ML (linear regression, random forest) and DL architectures (multilayer perceptron (MLP), spectral encoder block (SEB)—linear, and SEB-transformer). The MLP model demonstrated optimal performance, achieving over 85% overall accuracy (OA) and more than 81% accuracy in classifying spectrally similar and difficult-to-distinguish species, specifically Quercus mongolica (QM) and Quercus variabilis (QV). Furthermore, while spectral and temporal information were confirmed to contribute significantly to tree species classification, the contribution of spatial (texture) information was experimentally found to be limited at the 2.5 m resolution level. 
This study presents a practical method for creating high-resolution tree species maps scalable to the national level by fusing existing tree species maps with Sentinel-2 and PlanetScope imagery without requiring costly separate field surveys. Its significance lies in establishing a foundation that can contribute to various fields such as forest resource management, biodiversity conservation, and climate change research. Full article
(This article belongs to the Special Issue Digital Modeling for Sustainable Forest Management)

40 pages, 4919 KiB  
Article
NGSTGAN: N-Gram Swin Transformer and Multi-Attention U-Net Discriminator for Efficient Multi-Spectral Remote Sensing Image Super-Resolution
by Chao Zhan, Chunyang Wang, Bibo Lu, Wei Yang, Xian Zhang and Gaige Wang
Remote Sens. 2025, 17(12), 2079; https://doi.org/10.3390/rs17122079 - 17 Jun 2025
Viewed by 537
Abstract
The reconstruction of high-resolution (HR) remote sensing images (RSIs) from low-resolution (LR) counterparts is a critical task in remote sensing image super-resolution (RSISR). Recent advancements in convolutional neural networks (CNNs) and Transformers have significantly improved RSISR performance due to their capabilities in local feature extraction and global modeling. However, several limitations remain, including the underutilization of multi-scale features in RSIs, the limited receptive field of Swin Transformer’s window self-attention (WSA), and the computational complexity of existing methods. To address these issues, this paper introduces the NGSTGAN model, which employs an N-Gram Swin Transformer as the generator and a multi-attention U-Net as the discriminator. The discriminator enhances attention to multi-scale key features through the addition of channel, spatial, and pixel attention (CSPA) modules, while the generator utilizes an improved shallow feature extraction (ISFE) module to extract multi-scale and multi-directional features, enhancing the capture of complex textures and details. The N-Gram concept is introduced to expand the receptive field of Swin Transformer, and sliding window self-attention (S-WSA) is employed to facilitate interaction between neighboring windows. Additionally, channel-reducing group convolution (CRGC) is used to reduce the number of parameters and computational complexity. A cross-sensor multispectral dataset combining Landsat-8 (L8) and Sentinel-2 (S2) is constructed for the resolution enhancement of L8’s blue (B), green (G), red (R), and near-infrared (NIR) bands from 30 m to 10 m. Experiments show that NGSTGAN outperforms the state-of-the-art (SOTA) method, achieving improvements of 0.5180 dB in the peak signal-to-noise ratio (PSNR) and 0.0153 in the structural similarity index measure (SSIM) over the second best method, offering a more effective solution to the task. Full article
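The 0.5180 dB PSNR gain reported above is measured with the standard peak signal-to-noise ratio. A minimal numpy sketch of the metric, with illustrative data rather than the paper's imagery:

```python
import numpy as np

def psnr(reference, estimate, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(data_range**2 / mse)

# Illustrative data: a random "image" and a lightly perturbed copy.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
noisy = np.clip(ref + 0.01 * rng.standard_normal(ref.shape), 0.0, 1.0)
quality_db = psnr(ref, noisy)
```

Because PSNR is logarithmic in mean squared error, a 0.5 dB gain corresponds to roughly a 10% reduction in MSE.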

23 pages, 5811 KiB  
Article
Multi-Attitude Hybrid Network for Remote Sensing Hyperspectral Images Super-Resolution
by Chi Chen, Yunhan Sun, Xueyan Hu, Ning Zhang, Hao Feng, Zheng Li and Yongcheng Wang
Remote Sens. 2025, 17(11), 1947; https://doi.org/10.3390/rs17111947 - 4 Jun 2025
Cited by 1 | Viewed by 579
Abstract
Benefiting from the development of deep learning, the super-resolution technology for remote sensing hyperspectral images (HSIs) has achieved impressive progress. However, due to the high coupling of complex components in remote sensing HSIs, it is challenging to achieve a complete characterization of the internal information, which in turn limits the precise reconstruction of detailed texture and spectral features. Therefore, we propose the multi-attitude hybrid network (MAHN) for extracting and characterizing information from multiple feature spaces. On the one hand, we construct the spectral hypergraph cross-attention module (SHCAM) and the spatial hypergraph self-attention module (SHSAM) based on the high- and low-frequency features in the spectral and the spatial domains, respectively, which are used to capture the main structure and detail changes within the image. On the other hand, high-level semantic information in mixed pixels is parsed by spectral mixture analysis, and a semantic hypergraph 3D module (SH3M) is constructed based on the abundance of each category to enhance the propagation and reconstruction of semantic information. Furthermore, to mitigate the domain discrepancies among features, we introduce a sensitive bands attention mechanism (SBAM) to enhance the cross-guidance and fusion of multi-domain features. Extensive experiments demonstrate that our method achieves optimal reconstruction results compared to other state-of-the-art algorithms while effectively reducing the computational complexity. Full article

21 pages, 10091 KiB  
Article
Scalable Hyperspectral Enhancement via Patch-Wise Sparse Residual Learning: Insights from Super-Resolved EnMAP Data
by Parth Naik, Rupsa Chakraborty, Sam Thiele and Richard Gloaguen
Remote Sens. 2025, 17(11), 1878; https://doi.org/10.3390/rs17111878 - 28 May 2025
Viewed by 720
Abstract
A majority of hyperspectral super-resolution methods aim to enhance the spatial resolution of hyperspectral imaging data (HSI) by integrating high-resolution multispectral imaging data (MSI), leveraging rich spectral information for various geospatial applications. Key challenges include spectral distortions from high-frequency spatial data, high computational complexity, and limited training data, particularly for new-generation sensors with unique noise patterns. In this contribution, we propose a novel parallel patch-wise sparse residual learning (P2SR) algorithm for resolution enhancement based on fusion of HSI and MSI. The proposed method uses multi-decomposition techniques (i.e., Independent component analysis, Non-negative matrix factorization, and 3D wavelet transforms) to extract spatial and spectral features to form a sparse dictionary. The spectral and spatial characteristics of the scene encoded in the dictionary enable reconstruction through a first-order optimization algorithm to ensure an efficient sparse representation. The final spatially enhanced HSI is reconstructed by combining the learned features from low-resolution HSI and applying an MSI-regulated guided filter to enhance spatial fidelity while minimizing artifacts. P2SR is deployable on a high-performance computing (HPC) system with parallel processing, ensuring scalability and computational efficiency for large HSI datasets. Extensive evaluations on three diverse study sites demonstrate that P2SR consistently outperforms traditional and state-of-the-art (SOA) methods in both quantitative metrics and qualitative spatial assessments. Specifically, P2SR achieved the best average PSNR (25.2100) and SAM (12.4542) scores, indicating superior spatio-spectral reconstruction contributing to sharper spatial features, reduced mixed pixels, and enhanced geological features. 
P2SR also achieved the best average ERGAS (8.9295) and Q2n (0.5156), which suggests better overall fidelity across all bands and perceptual accuracy with the least spectral distortions. Importantly, we show that P2SR preserves critical spectral signatures, such as Fe²⁺ absorption, and improves the detection of fine-scale environmental and geological structures. P2SR’s ability to maintain spectral fidelity while enhancing spatial detail makes it a powerful tool for high-precision remote sensing applications, including mineral mapping, land-use analysis, and environmental monitoring. Full article
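The SAM and ERGAS scores quoted above follow standard definitions: SAM averages the per-pixel angle between spectral vectors, and ERGAS aggregates band-wise relative RMSE. A minimal numpy sketch of both metrics (array shapes and demo values are illustrative, not the paper's data):

```python
import numpy as np

def sam_degrees(ref, est, eps=1e-12):
    """Mean spectral angle mapper (SAM) in degrees; ref/est have shape (H, W, B)."""
    dot = np.sum(ref * est, axis=-1)
    norms = np.linalg.norm(ref, axis=-1) * np.linalg.norm(est, axis=-1)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)
    return np.degrees(np.mean(np.arccos(cos)))

def ergas(ref, est, ratio):
    """ERGAS (relative dimensionless global error); ratio = high/low resolution."""
    rmse2_rel = [
        np.mean((ref[..., b] - est[..., b]) ** 2) / np.mean(ref[..., b]) ** 2
        for b in range(ref.shape[-1])
    ]
    return 100.0 * ratio * np.sqrt(np.mean(rmse2_rel))

# Illustrative check: scaling every spectrum equally changes no spectral angles.
flat = np.full((4, 4, 6), 0.2)
scaled = 1.1 * flat
angle = sam_degrees(flat, scaled)
```

SAM is insensitive to uniform brightness changes (it compares spectral shape only), which is why it is paired with ERGAS and Q2n for a fuller picture of fidelity.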

8 pages, 3697 KiB  
Proceeding Paper
Pansharpening Remote Sensing Images Using Generative Adversarial Networks
by Bo-Hsien Chung, Jui-Hsiang Jung, Yih-Shyh Chiou, Mu-Jan Shih and Fuan Tsai
Eng. Proc. 2025, 92(1), 32; https://doi.org/10.3390/engproc2025092032 - 28 Apr 2025
Viewed by 306
Abstract
Pansharpening is a remote sensing image fusion technique that combines a high-resolution (HR) panchromatic (PAN) image with a low-resolution (LR) multispectral (MS) image to produce an HR MS image. The primary challenge in pansharpening lies in preserving the spatial details of the PAN image while maintaining the spectral integrity of the MS image. To address this, this article presents a generative adversarial network (GAN)-based approach to pansharpening. The GAN discriminator facilitated matching the generated image’s intensity to the HR PAN image and preserving the spectral characteristics of the LR MS image. The performance in generating images was evaluated using the peak signal-to-noise ratio (PSNR). For the experiment, original LR MS and HR PAN satellite images were partitioned into smaller patches, and the GAN model was validated using an 80:20 training-to-testing data ratio. The results illustrated that the super-resolution images generated by the SRGAN model achieved a PSNR of 31 dB. These results demonstrated the developed model’s ability to reconstruct the geometric, textural, and spectral information from the images. Full article
(This article belongs to the Proceedings of 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering)

18 pages, 15380 KiB  
Article
A High-Precision Method for Warehouse Material Level Monitoring Using Millimeter-Wave Radar and 3D Surface Reconstruction
by Wenxin Zhang and Yi Gu
Sensors 2025, 25(9), 2716; https://doi.org/10.3390/s25092716 - 25 Apr 2025
Viewed by 431
Abstract
This study presents a high-precision warehouse material level monitoring method that integrates millimeter-wave radar with 3D surface reconstruction to address the limitations of LiDAR, which is highly susceptible to dust and haze interference in complex storage environments. The proposed method employs Chirp-Z Transform (CZT) super-resolution processing to enhance spectral resolution and measurement accuracy. To improve grain surface identification, an anomalous signal correction method based on angle–range feature fusion is introduced, mitigating errors caused by weak reflections and multipath effects. The point cloud data acquired by the radar undergo denoising, smoothing, and enhancement using statistical filtering, Moving Least Squares (MLS) smoothing, and bicubic spline interpolation to ensure data continuity and accuracy. A Poisson Surface Reconstruction algorithm is then applied to generate a continuous 3D model of the grain heap. The vector triple product method is used to estimate grain volume. Experimental results show a reconstruction volume error within 3%, demonstrating the method’s accuracy, robustness, and adaptability. The reconstructed surface accurately represents grain heap geometry, making this approach well suited for real-time warehouse monitoring and providing reliable support for material balance and intelligent storage management. Full article
(This article belongs to the Section Industrial Sensors)
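The Chirp-Z Transform used above refines spectral resolution by evaluating the spectrum on an arbitrarily fine frequency grid over a narrow band of interest. A minimal sketch of the same idea via direct DTFT evaluation; the sample rate and the 123.4 Hz beat tone are hypothetical, not the paper's radar data:

```python
import numpy as np

def zoom_dft(x, fs, f_start, f_stop, m):
    """Evaluate the DTFT of x on m finely spaced frequencies in [f_start, f_stop],
    the spectral 'zoom' that CZT-style processing uses to refine a peak estimate
    beyond the plain FFT bin spacing."""
    n = np.arange(len(x))
    freqs = np.linspace(f_start, f_stop, m)
    # One complex exponential per analysis frequency (direct evaluation).
    kernel = np.exp(-2j * np.pi * np.outer(freqs, n) / fs)
    return freqs, kernel @ x

fs = 1000.0
t = np.arange(512) / fs
x = np.sin(2 * np.pi * 123.4 * t)            # hypothetical beat tone at 123.4 Hz
freqs, spec = zoom_dft(x, fs, 120.0, 127.0, 2001)
peak = freqs[np.argmax(np.abs(spec))]
```

The plain FFT bin spacing here would be fs/N ≈ 1.95 Hz, while the zoomed grid has millihertz spacing over the band of interest, which is what makes sub-bin beat-frequency (and hence range) estimates possible.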

24 pages, 19515 KiB  
Article
Extensive Feature-Inferring Deep Network for Hyperspectral and Multispectral Image Fusion
by Abdolraheem Khader, Jingxiang Yang, Sara Abdelwahab Ghorashi, Ali Ahmed, Zeinab Dehghan and Liang Xiao
Remote Sens. 2025, 17(7), 1308; https://doi.org/10.3390/rs17071308 - 5 Apr 2025
Viewed by 614
Abstract
Hyperspectral (HS) and multispectral (MS) image fusion is the most favorable way to obtain a hyperspectral image that has high resolution in terms of both spatial and spectral information. This fusion problem can be tackled by formulating a mathematical model and solving it either analytically or iteratively. The class of mathematical solutions faces serious challenges, e.g., computational cost, manual parameter tuning, and the absence of accurate imaging models, all of which hamper the fusion process. With the revolution of deep learning, recent HS-MS image fusion techniques have achieved good outcomes by utilizing the power of the convolutional neural network (CNN) for feature extraction. However, extracting intrinsic information, e.g., non-local spatial and global spectral features, remains the most critical issue faced by deep learning methods. Therefore, this paper proposes an Extensive Feature-Inferring Deep Network (EFINet) with extensive-scale feature-interacting and global correlation refinement modules to improve the effectiveness of HS-MS image fusion. The proposed network retains the most vital information through the extensive-scale feature-interacting module across various feature scales. Moreover, global semantic information is captured by the global correlation refinement module. The proposed network is validated through rich experiments conducted on two popular datasets, the Houston and Chikusei datasets, and it attains good performance compared to the state-of-the-art HS-MS image fusion techniques. Full article

30 pages, 22071 KiB  
Article
Analysis of Optical Errors in Joint Fabry–Pérot Interferometer–Fourier-Transform Imaging Spectroscopy Interferometric Super-Resolution Systems
by Yu Zhang, Qunbo Lv, Jianwei Wang, Yinhui Tang, Jia Si, Xinwen Chen and Yangyang Liu
Appl. Sci. 2025, 15(6), 2938; https://doi.org/10.3390/app15062938 - 8 Mar 2025
Viewed by 877
Abstract
Fourier-transform imaging spectroscopy (FTIS) faces inherent limitations in spectral resolution due to the maximum optical path difference (OPD) achievable by its interferometer. To overcome this constraint, we propose a novel spectral super-resolution technology integrating a Fabry–Pérot interferometer (FPI) with FTIS, termed multi-component joint interferometric hyperspectral imaging (MJI-HI). This method leverages the FPI to periodically modulate the target spectrum, enabling FTIS to capture a modulated interferogram. By encoding high-frequency spectral interference information into low-frequency interference regions through FPI modulation, an advanced inversion algorithm is developed to reconstruct the encoded high-frequency components, thereby achieving spectral super-resolution. This study analyzes the impact of primary optical errors and tolerance thresholds in the FPI and FTIS on the interferograms and spectral fidelity of MJI-HI, along with proposing algorithmic improvements. Notably, certain errors in the FTIS and FPI exhibit mutual interference. The theoretical framework for error analysis is validated and discussed through numerical simulations, providing critical theoretical support for subsequent instrument development and laying a foundation for advancing novel spectral super-resolution technologies. Full article
(This article belongs to the Special Issue Spectral Detection: Technologies and Applications—2nd Edition)
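For reference, the periodic spectral modulation that an ideal FPI imposes is the Airy transmission function; in a standard idealized form (loss-free mirrors of reflectivity R, cavity index n, spacing d, internal angle θ, wavenumber σ):

```latex
T(\sigma) = \frac{1}{1 + F \sin^{2}\!\left(\delta/2\right)}, \qquad
F = \frac{4R}{(1-R)^{2}}, \qquad
\delta = 4\pi n d \sigma \cos\theta .
```

The transmission is periodic in σ with free spectral range $1/(2nd\cos\theta)$; it is this periodicity that folds high-frequency spectral structure into the low-frequency interference regions that the inversion algorithm then reconstructs.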

21 pages, 14440 KiB  
Article
Spectral Super-Resolution Technology Based on Fabry–Perot Interferometer for Temporally and Spatially Modulated Fourier Transform Imaging Spectrometer
by Yu Zhang, Qunbo Lv, Jianwei Wang, Yinhui Tang, Jia Si, Xinwen Chen and Yangyang Liu
Sensors 2025, 25(4), 1201; https://doi.org/10.3390/s25041201 - 16 Feb 2025
Viewed by 784
Abstract
A new spectral super-resolution technique was proposed by combining the Fabry–Perot interferometer (FPI) with a Temporally and Spatially Modulated Fourier Transform Imaging Spectrometer (TSMFTIS). This study uses the multi-beam interference of the FPI to modulate the target spectrum periodically and acquires the modulated interferogram through the TSMFTIS. The combined interference of the two techniques overcomes the limitation that the maximum optical path difference (OPD) places on spectral resolution. The FPI is used to encode high-frequency interference information into low-frequency interference information; an inversion algorithm is proposed to recover the high-frequency information, the impact of FPI optical defects on the system is studied, and targeted improvement algorithms are proposed. The simulation results indicate that this method can achieve multi-component joint interference imaging, improving spectral resolution twofold. This technology offers advantages such as high throughput, stability, a simple and compact structure, straightforward principles, high robustness, and low cost. It provides new insights into TSMFTIS spectral super-resolution research. Full article
(This article belongs to the Section Sensing and Imaging)

27 pages, 14422 KiB  
Article
Discrimination of Larch Needle Pest Severity Based on Sentinel-2 Super-Resolution and Spectral Derivatives—A Case Study of Erannis jacobsoni Djak
by Guangyou Sun, Xiaojun Huang, Ganbat Dashzebeg, Mungunkhuyag Ariunaa, Yuhai Bao, Gang Bao, Siqin Tong, Altanchimeg Dorjsuren and Enkhnasan Davaadorj
Forests 2025, 16(1), 88; https://doi.org/10.3390/f16010088 - 7 Jan 2025
Cited by 1 | Viewed by 933
Abstract
In recent years, Jas’s Larch Inchworm (Erannis jacobsoni Djak, EJD) outbreaks have frequently occurred in forested areas of Mongolia, causing significant damage to forest ecosystems, and rapid and effective monitoring methods are urgently needed. This study focuses on a typical region of EJD infestation in the larch forests located in Binder, Khentii, Mongolia. Initial super-resolution enhancement was performed on Sentinel-2 images, followed by the calculation of vegetation indices and first-order spectral derivatives. The Kruskal–Wallis H test (KW test), Dunn’s multiple comparison test (Dunn’s test), and the RF-RFECV algorithm were then employed to identify sensitive features. Using support vector machine (SVM), random forest (RF), and extreme gradient boosting (XGBoost) machine learning algorithms, along with field survey data and UAV remote sensing data, multiple models were developed to assess the severity of EJD infestation and the corresponding spatial distribution characteristics. Seven sensitive combined features were obtained from high-quality super-resolution Sentinel-2 images. Then, a high-precision monitoring model was constructed, and it was revealed that the areas prone to EJD infestation are located at elevations of 1171–1234 m, on gentle slopes, and in semi-shady or semi-sunny areas. The super-resolution processing of Sentinel-2 satellite data can effectively refine monitoring results. The combination of the first-order spectral derivatives and vegetation indices can improve the monitoring accuracy and the discrimination of light and moderate damage. D8a and NDVIswir can be used as important indicators for assessing the severity of EJD infestation. EJD has an adaptive preference for certain environments, and environmental factors directly or indirectly affect the diffusion and distribution of EJD. Full article
(This article belongs to the Section Forest Health)
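Two of the feature families used above, first-order spectral derivatives and normalised-difference vegetation indices, are simple band arithmetic. A minimal numpy sketch; the band centres, reflectances, and the NIR/SWIR pairing for NDVIswir are assumptions for illustration, not the study's definitions:

```python
import numpy as np

# Hypothetical band centres (nm) and reflectances for one vegetated pixel.
wavelengths = np.array([490.0, 560.0, 665.0, 842.0, 1610.0, 2190.0])
reflectance = np.array([0.03, 0.06, 0.04, 0.42, 0.21, 0.12])

# First-order spectral derivative: finite difference between adjacent bands.
first_derivative = np.diff(reflectance) / np.diff(wavelengths)

# Generic normalised-difference index between a NIR band and a SWIR band.
nir, swir = reflectance[3], reflectance[4]
ndvi_swir = (nir - swir) / (nir + swir)
```

In practice these quantities would be computed per pixel over the whole super-resolved Sentinel-2 scene and fed to the SVM/RF/XGBoost classifiers as candidate features.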

23 pages, 4727 KiB  
Article
Self-Supervised and Zero-Shot Learning in Multi-Modal Raman Light Sheet Microscopy
by Pooja Kumari, Johann Kern and Matthias Raedle
Sensors 2024, 24(24), 8143; https://doi.org/10.3390/s24248143 - 20 Dec 2024
Cited by 2 | Viewed by 1313
Abstract
Advancements in Raman light sheet microscopy have provided a powerful, non-invasive, marker-free method for imaging complex 3D biological structures, such as cell cultures and spheroids. By combining 3D tomograms made by Rayleigh scattering, Raman scattering, and fluorescence detection, this modality captures complementary spatial and molecular data, critical for biomedical research, histology, and drug discovery. Despite its capabilities, Raman light sheet microscopy faces inherent limitations, including low signal intensity, high noise levels, and restricted spatial resolution, which impede the visualization of fine subcellular structures. Traditional enhancement techniques like Fourier transform filtering and spectral unmixing require extensive preprocessing and often introduce artifacts. More recently, deep learning techniques, which have shown great promise in enhancing image quality, face their own limitations. Specifically, conventional deep learning models require large quantities of high-quality, manually labeled training data for effective denoising and super-resolution tasks, which are challenging to obtain in multi-modal microscopy. In this study, we address these limitations by exploring advanced zero-shot and self-supervised learning approaches, such as ZS-DeconvNet, Noise2Noise, Noise2Void, Deep Image Prior (DIP), and Self2Self, which enhance image quality without the need for large labeled datasets. This study offers a comparative evaluation of zero-shot and self-supervised learning methods, evaluating their effectiveness in denoising, resolution enhancement, and preserving biological structures in multi-modal Raman light sheet microscopic images. Our results demonstrate significant improvements in image clarity, offering a reliable solution for visualizing complex biological systems. These methods pave the way for future advancements in high-resolution imaging, with broad potential for enhancing biomedical research and discovery. Full article

13 pages, 2327 KiB  
Article
Reconstruction of High-Resolution Solar Spectral Irradiance Based on Residual Channel Attention Networks
by Peng Zhang, Jianwen Weng, Qing Kang and Jianjun Li
Remote Sens. 2024, 16(24), 4698; https://doi.org/10.3390/rs16244698 - 17 Dec 2024
Viewed by 740
Abstract
The accurate measurement of high-resolution solar spectral irradiance (SSI) and its variations at the top of the atmosphere is crucial for solar physics, the Earth’s climate, and the in-orbit calibration of optical satellites. However, existing space-based solar spectral irradiance instruments achieve high-precision SSI measurements at the cost of spectral resolution, which falls short of meeting the requirements for identifying fine solar spectral features. Therefore, this paper proposes a new method for reconstructing high-resolution solar spectral irradiance based on a residual channel attention network. This method considers the stability of SSI spectral features and employs residual channel attention blocks to enhance the expression ability of key features, achieving the high-accuracy reconstruction of spectral features. Additionally, to address the issue of excessively large output features from the residual channel attention blocks, a scaling coefficient adjustment network block is introduced to achieve the high-accuracy reconstruction of spectral absolute values. Finally, the proposed method is validated using the measured SSI dataset from SCIAMACHY on Envisat-1 and the simulated dataset from TSIS-1 SIM. The validation results show that, compared to existing scaling coefficient adjustment algorithms, the proposed method achieves single-spectrum super-resolution reconstruction without relying on external data, with a Mean Absolute Percentage Error (MAPE) of 0.0302% for the reconstructed spectra based on the dataset. The proposed method achieves higher-resolution reconstruction results while ensuring the accuracy of SSI. Full article
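The 0.0302% figure quoted above is a Mean Absolute Percentage Error; the standard definition is straightforward (demo values are illustrative, not SSI data):

```python
import numpy as np

def mape(reference, estimate):
    """Mean absolute percentage error, in percent; reference values must be nonzero."""
    reference = np.asarray(reference, dtype=np.float64)
    estimate = np.asarray(estimate, dtype=np.float64)
    return 100.0 * np.mean(np.abs((estimate - reference) / reference))

# Illustrative: each value is off by 1%, so the MAPE is 1%.
error_pct = mape([100.0, 200.0], [101.0, 198.0])
```

MAPE is scale-free, which makes it a natural choice for spectra whose absolute irradiance varies by orders of magnitude across wavelength.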

19 pages, 12370 KiB  
Article
Enhancing Cropland Mapping with Spatial Super-Resolution Reconstruction by Optimizing Training Samples for Image Super-Resolution Models
by Xiaofeng Jia, Xinyan Li, Zirui Wang, Zhen Hao, Dong Ren, Hui Liu, Yun Du and Feng Ling
Remote Sens. 2024, 16(24), 4678; https://doi.org/10.3390/rs16244678 - 15 Dec 2024
Cited by 2 | Viewed by 1198
Abstract
Mixed pixels often hinder accurate cropland mapping from remote sensing images with coarse spatial resolution. Image spatial super-resolution reconstruction technology is widely applied to address this issue, typically transforming coarse-resolution remote sensing images into fine spatial resolution images, which are then used to generate fine-resolution land cover maps using classification techniques. Deep learning has been widely used for image spatial super-resolution reconstruction; however, collecting training samples is often difficult for cropland mapping. Given that the quality of spatial super-resolution reconstruction directly impacts classification accuracy, this study aims to assess the impact of different types of training samples on image spatial super-resolution reconstruction and cropland mapping results by employing a Residual Channel Attention Network (RCAN) model combined with a spatial attention mechanism. Four types of samples were used for spatial super-resolution reconstruction model training, namely fine-resolution images and their corresponding coarse-resolution images, including original Sentinel-2 and degraded Sentinel-2 images, original GF-2 and degraded GF-2 images, histogram-matched GF-2 and degraded GF-2 images, and registered original GF-2 and Sentinel-2 images. The results indicate that the samples acquired from the histogram-matched GF-2 and degraded GF-2 images can resolve spectral band mismatches when simulating training samples from fine spatial resolution imagery, while the other three methods cannot fully address spectral and spatial mismatches. The histogram-matched method yielded the best image quality with PSNR, SSIM, and QNR values of 42.2813, 0.9778, and 0.9872, respectively, and produced the best mapping results, achieving an overall accuracy of 0.9306.
By assessing the impact of training samples on image spatial super-resolution reconstruction and classification, this study addresses data limitations and contributes to improving the accuracy of cropland mapping, which is crucial for agricultural management and decision-making. Full article
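The histogram-matching step the abstract describes (aligning the pixel-value distribution of one sensor's imagery to another's before pairing them as training samples) can be sketched per band with NumPy, together with the PSNR metric the study reports. This is a minimal illustration of the general technique, not the authors' implementation; the function names are placeholders.

```python
import numpy as np

def match_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Remap source pixel values so their empirical CDF matches the reference's.

    Applied band by band, this aligns the spectral distribution of one
    sensor's image (e.g. GF-2) with another's before sample pairing.
    """
    src_vals, src_inv, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True
    )
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source quantile, look up the reference value at that quantile.
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return mapped[src_inv].reshape(source.shape)

def psnr(x: np.ndarray, y: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB, as used to score reconstructions."""
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

A matched band keeps its spatial arrangement but inherits the reference's value distribution, which is why this resolves spectral-band mismatch without touching geometry.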
28 pages, 12679 KiB  
Article
DESAT: A Distance-Enhanced Strip Attention Transformer for Remote Sensing Image Super-Resolution
by Yujie Mao, Guojin He, Guizhou Wang, Ranyu Yin, Yan Peng and Bin Guan
Remote Sens. 2024, 16(22), 4251; https://doi.org/10.3390/rs16224251 - 14 Nov 2024
Cited by 1 | Viewed by 1383
Abstract
Transformer-based methods have demonstrated impressive performance in image super-resolution tasks. However, when applied to large-scale Earth observation images, the existing transformers encounter two significant challenges: (1) insufficient consideration of spatial correlation between adjacent ground objects; and (2) performance bottlenecks due to the underutilization of the upsample module. To address these issues, we propose a novel distance-enhanced strip attention transformer (DESAT). The DESAT integrates distance priors, easily obtainable from remote sensing images, into the strip window self-attention mechanism to capture spatial correlations more effectively. To further enhance the transfer of deep features into high-resolution outputs, we designed an attention-enhanced upsample block, which combines the pixel shuffle layer with an attention-based upsample branch implemented through the overlapping window self-attention mechanism. Additionally, to better simulate real-world scenarios, we constructed a new cross-sensor super-resolution dataset using Gaofen-6 satellite imagery. Extensive experiments on both simulated and real-world remote sensing datasets demonstrate that the DESAT outperforms state-of-the-art models by up to 1.17 dB along with superior qualitative results. Furthermore, the DESAT achieves more competitive performance in real-world tasks, effectively balancing spatial detail reconstruction and spectral fidelity, making it highly suitable for practical remote sensing super-resolution applications. Full article
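The pixel shuffle layer at the core of DESAT's upsample block is a standard depth-to-space rearrangement: a tensor with r² times the channels becomes a tensor with r times the height and width. A minimal NumPy sketch of that rearrangement (following the same layout convention as PyTorch's `nn.PixelShuffle`, not the DESAT code itself):

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Rearrange a (C*r*r, H, W) array into (C, H*r, W*r) (depth-to-space)."""
    c2, h, w = x.shape
    assert c2 % (r * r) == 0, "channel count must be divisible by r^2"
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into (C, r, r) sub-blocks
    x = x.transpose(0, 3, 1, 4, 2)    # interleave: (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```

Because it is a pure rearrangement, pixel shuffle adds no parameters; the attention-based branch the abstract describes is what supplies learned, content-dependent upsampling on top of it.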
(This article belongs to the Special Issue Deep Learning for Remote Sensing Image Enhancement)
20 pages, 6417 KiB  
Article
Neural Operator for Planetary Remote Sensing Super-Resolution with Spectral Learning
by Hui-Jia Zhao, Jie Lu, Wen-Xiu Guo and Xiao-Ping Lu
Mathematics 2024, 12(22), 3461; https://doi.org/10.3390/math12223461 - 6 Nov 2024
Cited by 1 | Viewed by 1178
Abstract
High-resolution planetary remote sensing imagery provides detailed information for geomorphological and topographic analyses. However, acquiring such imagery is constrained by limited deep-space communication bandwidth and challenging imaging environments. Conventional super-resolution methods typically employ separate models for different scales, treating them as independent tasks. This approach limits deployment and real-time applications in planetary remote sensing. Moreover, capturing global context is crucial in planetary remote sensing images due to their contextual similarities. To address these limitations, we propose Discrete Cosine Transform (DCT)–Global Super Resolution Neural Operator (DG-SRNO), a global context-aware arbitrary-scale super-resolution model. DG-SRNO achieves super-resolution at any scale using a single framework by learning the mapping between low-resolution (LR) and high-resolution (HR) function spaces. We mathematically prove the global receptive field of DG-SRNO. To evaluate DG-SRNO’s performance in planetary remote sensing tasks, we introduce the Ceres 800 dataset, a planetary remote sensing super-resolution dataset. Extensive quantitative and qualitative experiments demonstrate DG-SRNO’s impressive reconstruction capabilities. Full article
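The arbitrary-scale idea behind DG-SRNO is to treat the image as a continuous function that can be queried at any coordinate, so one model serves every scale factor. As a rough stand-in for the learned operator, the sketch below queries a low-resolution image at an arbitrary continuous grid via bilinear interpolation; the function name and the pixel-center alignment convention are illustrative assumptions, not the paper's method.

```python
import numpy as np

def upsample_arbitrary(img: np.ndarray, scale: float) -> np.ndarray:
    """Query a 2-D image as a continuous function at any scale factor."""
    h, w = img.shape
    out_h, out_w = int(round(h * scale)), int(round(w * scale))
    # Map output pixel centers back to input coordinates.
    ys = (np.arange(out_h) + 0.5) / scale - 0.5
    xs = (np.arange(out_w) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]   # vertical blend weights
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]   # horizontal blend weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

The same routine handles ×2, ×3.7, or any other factor, which is the deployment advantage the abstract claims over fixed-scale models; DG-SRNO replaces the fixed bilinear kernel with a learned, globally context-aware operator.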
(This article belongs to the Special Issue Applied Mathematics in Data Science and High-Performance Computing)