Search Results (209)

Search Parameters:
Keywords = low resolution hyperspectral image

23 pages, 3875 KiB  
Article
Soil Water-Soluble Ion Inversion via Hyperspectral Data Reconstruction and Multi-Scale Attention Mechanism: A Remote Sensing Case Study of Farmland Saline–Alkali Lands
by Meichen Liu, Shengwei Zhang, Jing Gao, Bo Wang, Kedi Fang, Lu Liu, Shengwei Lv and Qian Zhang
Agronomy 2025, 15(8), 1779; https://doi.org/10.3390/agronomy15081779 - 24 Jul 2025
Viewed by 470
Abstract
The salinization of agricultural soils is a serious threat to farming and ecological balance in arid and semi-arid regions. Accurate estimation of soil water-soluble ions (calcium, carbonate, magnesium, and sulfate) is necessary for reliable monitoring of soil salinization and sustainable land management. Hyperspectral ground-based data are valuable for soil salinization monitoring, but acquisition costs are high and spatial coverage is limited. Therefore, this study proposes a two-stage deep learning framework based on multispectral remote-sensing images. First, the wavelet transform is used to enhance the Transformer and extract fine-grained spectral features to reconstruct the ground-based hyperspectral data. A comparison with ground-based hyperspectral measurements shows that the reconstructed spectra match the measured data in the 450–998 nm range, with R2 up to 0.98 and MSE = 0.31. This high similarity compensates for the low spectral resolution and weak feature expression of multispectral remote-sensing data. Subsequently, this enhanced spectral information was integrated and fed into a novel multiscale self-attentive Transformer model (MSATransformer) to invert four water-soluble ions. Compared with BPANN, MLP, and the standard Transformer model, our model remains robust across different spectra, achieving an R2 of up to 0.95 and reducing the average relative error by more than 30%. For the strongly responsive ions magnesium and sulfate, R2 reaches 0.92 and 0.95 (with RMSE of 0.13 and 0.29 g/kg, respectively); for the weakly responsive ions calcium and carbonate, R2 stays above 0.80 (RMSE is below 0.40 g/kg). The MSATransformer framework provides a low-cost, high-accuracy solution for monitoring soil salinization at large scales and supports precision farmland management. Full article
(This article belongs to the Special Issue Water and Fertilizer Regulation Theory and Technology in Crops)
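
As a rough illustration of the wavelet step described in this abstract (not the authors' implementation), a 1-D discrete wavelet transform can decompose a ground spectrum into multi-scale coefficients of the kind a Transformer-style reconstructor could consume. The spectrum below is synthetic and all names are placeholders.

```python
import numpy as np
import pywt  # PyWavelets

# Synthetic ground-based spectrum sampled every 2 nm over 450-998 nm (placeholder data).
wavelengths = np.arange(450, 999, 2)
spectrum = 0.3 + 0.1 * np.sin(wavelengths / 60.0) + 0.01 * np.random.randn(wavelengths.size)

# Multi-level DWT: one coarse approximation band plus detail bands at several scales.
coeffs = pywt.wavedec(spectrum, wavelet="db4", level=3)
approx, details = coeffs[0], coeffs[1:]
print("approximation length:", approx.size)
print("detail lengths:", [d.size for d in details])

# Sanity check: inverting the transform recovers the spectrum. The paper's R2/MSE figures
# instead compare spectra reconstructed from multispectral inputs against field measurements.
reconstructed = pywt.waverec(coeffs, wavelet="db4")[: spectrum.size]
ss_res = np.sum((spectrum - reconstructed) ** 2)
ss_tot = np.sum((spectrum - spectrum.mean()) ** 2)
print("R2 =", 1 - ss_res / ss_tot, " MSE =", np.mean((spectrum - reconstructed) ** 2))
```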

29 pages, 5178 KiB  
Article
HASSDE-NAS: Heuristic–Adaptive Spectral–Spatial Neural Architecture Search with Dynamic Cell Evolution for Hyperspectral Water Body Identification
by Feng Chen, Baishun Su and Zongpu Jia
Information 2025, 16(6), 495; https://doi.org/10.3390/info16060495 - 13 Jun 2025
Viewed by 419
Abstract
The accurate identification of water bodies in hyperspectral images (HSIs) remains challenging due to hierarchical representation imbalances in deep learning models (shallow layers focus overly on spectral features), boundary ambiguities caused by the relatively low spatial resolution of satellite imagery, and limited detection capability for small-scale aquatic features such as narrow rivers. To address these challenges, this study proposes Heuristic–Adaptive Spectral–Spatial Neural Architecture Search with Dynamic Cell Evolution (HASSDE-NAS). The architecture integrates three specialized units: a spectral-aware dynamic band selection cell suppresses redundant spectral bands, while a geometry-enhanced edge attention cell refines fragmented spatial boundaries. Additionally, a bidirectional fusion alignment cell jointly optimizes spectral and spatial dependencies. A heuristic cell search algorithm optimizes the network architecture through architecture stability, feature diversity, and gradient sensitivity analysis, which improves search efficiency and model robustness. Evaluated on Gaofen-5 datasets from the Guangdong and Henan regions, HASSDE-NAS achieves overall accuracies of 92.61% and 96%, respectively. The approach outperforms existing methods in delineating narrow river systems and resolving water bodies with weak spectral contrast under complex backgrounds, such as vegetation or cloud shadows. By adaptively prioritizing task-relevant features, the framework provides an interpretable solution for hydrological monitoring and advances neural architecture search in intelligent remote sensing. Full article
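
The heuristic search described above ranks candidate cells by architecture stability, feature diversity, and gradient sensitivity. The abstract does not give the scoring formula, so the sketch below is only a hypothetical illustration of how such a multi-criterion ranking could be wired up; the weights, metric values, and cell names are invented for the example.

```python
import numpy as np

def score_candidate(stability, diversity, grad_sensitivity, weights=(0.4, 0.3, 0.3)):
    """Combine three normalized criteria into one ranking score (illustrative only)."""
    w_s, w_d, w_g = weights
    # Higher stability and diversity are rewarded; high gradient sensitivity is penalized.
    return w_s * stability + w_d * diversity - w_g * grad_sensitivity

# Hypothetical measurements for three candidate cells (e.g. band-selection variants).
candidates = {
    "cell_A": dict(stability=0.82, diversity=0.61, grad_sensitivity=0.20),
    "cell_B": dict(stability=0.74, diversity=0.79, grad_sensitivity=0.35),
    "cell_C": dict(stability=0.90, diversity=0.40, grad_sensitivity=0.10),
}

ranked = sorted(candidates.items(), key=lambda kv: score_candidate(**kv[1]), reverse=True)
for name, metrics in ranked:
    print(name, round(score_candidate(**metrics), 3))
```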

21 pages, 6028 KiB  
Article
A Comprehensive Framework for the Development of a Compact, Cost-Effective, and Robust Hyperspectral Camera Using COTS Components and a VPH Grism
by Sukrit Thongrom, Panuwat Pengphorm, Surachet Wongarrayapanich, Apirat Prasit, Chanisa Kanjanasakul, Wiphu Rujopakarn, Saran Poshyachinda, Chalongrat Daengngam and Nawapong Unsuree
Sensors 2025, 25(12), 3631; https://doi.org/10.3390/s25123631 - 10 Jun 2025
Viewed by 654
Abstract
Hyperspectral imaging (HSI) is an effective technique for material identification and classification, utilizing spectral signatures with applications in remote sensing, environmental monitoring, and allied disciplines. Despite its potential, the broader adoption of HSI technology is hindered by challenges related to compactness, affordability, and durability, exacerbated by the absence of standardized protocols for developing practical hyperspectral cameras. This study introduces a comprehensive framework for developing a compact, cost-effective, and robust hyperspectral camera, employing commercial off-the-shelf (COTS) components and a volume phase holographic (VPH) grism. The use of COTS components reduces development time and manufacturing costs while maintaining adequate performance, thereby improving accessibility for researchers and engineers. The incorporation of a VPH grism enables an on-axis optical design, enhancing compactness, reducing alignment sensitivity, and improving system robustness. The proposed framework encompasses spectrograph design, including optical simulations and tolerance analysis conducted in ZEMAX OpticStudio, alongside assembly procedures, performance assessment, and hyperspectral image acquisition via a pushbroom scanning approach, all integrated into a structured, step-by-step workflow. The resulting prototype, housed in an aluminum enclosure, operates within the 420–830 nm wavelength range, achieving a spectral resolution of 2 nm across 205 spectral bands. It effectively differentiates vegetation, water, and built structures, resolves atmospheric absorption features, and demonstrates the ability to distinguish materials in low-light conditions, providing a scalable and practical advancement in HSI technology. Full article
(This article belongs to the Topic Hyperspectral Imaging and Signal Processing)
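
For readers unfamiliar with the pushbroom acquisition used by this prototype: each exposure records one spatial line with a full spectral axis, and scanning stacks these lines into a cube. Below is a minimal sketch of that bookkeeping with the spectral dimensions quoted above (205 bands over 420–830 nm); the frame source, line count, and pixel count are assumed placeholders.

```python
import numpy as np

N_BANDS = 205                                  # spectral bands reported for the prototype
WAVELENGTHS = np.linspace(420, 830, N_BANDS)   # nm, roughly 2 nm sampling
LINE_PIXELS = 640                              # assumed across-track pixels (placeholder)
N_LINES = 480                                  # assumed number of scan lines (placeholder)

def grab_frame(line_index):
    """Stand-in for one pushbroom exposure: (across-track pixels, spectral bands)."""
    rng = np.random.default_rng(line_index)
    return rng.random((LINE_PIXELS, N_BANDS)).astype(np.float32)

# Stack the line scans into a hypercube: (along-track, across-track, band).
cube = np.stack([grab_frame(i) for i in range(N_LINES)], axis=0)
print("hypercube shape:", cube.shape)          # (480, 640, 205)

# Example query: the spectrum of one ground pixel and the band nearest a target wavelength.
pixel_spectrum = cube[100, 320, :]
print("band nearest 662 nm:", WAVELENGTHS[np.argmin(np.abs(WAVELENGTHS - 662))])
```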

23 pages, 5811 KiB  
Article
Multi-Attitude Hybrid Network for Remote Sensing Hyperspectral Images Super-Resolution
by Chi Chen, Yunhan Sun, Xueyan Hu, Ning Zhang, Hao Feng, Zheng Li and Yongcheng Wang
Remote Sens. 2025, 17(11), 1947; https://doi.org/10.3390/rs17111947 - 4 Jun 2025
Cited by 1 | Viewed by 577
Abstract
Benefiting from the development of deep learning, super-resolution technology for remote sensing hyperspectral images (HSIs) has made impressive progress. However, due to the high coupling of complex components in remote sensing HSIs, it is challenging to achieve a complete characterization of the internal information, which in turn limits the precise reconstruction of detailed texture and spectral features. Therefore, we propose the multi-attitude hybrid network (MAHN) for extracting and characterizing information from multiple feature spaces. On the one hand, we construct the spectral hypergraph cross-attention module (SHCAM) and the spatial hypergraph self-attention module (SHSAM) based on the high- and low-frequency features in the spectral and spatial domains, respectively, which are used to capture the main structure and detail changes within the image. On the other hand, high-level semantic information in mixed pixels is parsed by spectral mixture analysis, and a semantic hypergraph 3D module (SH3M) is constructed based on the abundance of each category to enhance the propagation and reconstruction of semantic information. Furthermore, to mitigate the domain discrepancies among features, we introduce a sensitive bands attention mechanism (SBAM) to enhance the cross-guidance and fusion of multi-domain features. Extensive experiments demonstrate that our method achieves better reconstruction results than other state-of-the-art algorithms while effectively reducing computational complexity. Full article
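
The SHCAM/SHSAM modules above are built on separating high- and low-frequency content in the spatial and spectral domains. A toy illustration of that split (not the hypergraph attention itself) is given below, using a Gaussian low-pass filter; the hyperspectral patch is synthetic and the filter widths are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic hyperspectral patch: (height, width, bands).
rng = np.random.default_rng(0)
hsi = rng.random((64, 64, 31)).astype(np.float32)

# Spatial split: per-band Gaussian blur keeps the main structure (low frequency),
# the residual keeps edges and fine texture (high frequency).
low_spatial = gaussian_filter(hsi, sigma=(2.0, 2.0, 0.0))
high_spatial = hsi - low_spatial

# Spectral split: smooth each pixel's spectrum along the band axis instead.
low_spectral = gaussian_filter(hsi, sigma=(0.0, 0.0, 2.0))
high_spectral = hsi - low_spectral

print("spatial high-frequency energy:", float(np.mean(high_spatial ** 2)))
print("spectral high-frequency energy:", float(np.mean(high_spectral ** 2)))
```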

24 pages, 5723 KiB  
Article
A Robust Multispectral Reconstruction Network from RGB Images Trained by Diverse Satellite Data and Application in Classification and Detection Tasks
by Xiaoning Zhang, Zhaoyang Peng, Yifei Wang, Fan Ye, Tengying Fu and Hu Zhang
Remote Sens. 2025, 17(11), 1901; https://doi.org/10.3390/rs17111901 - 30 May 2025
Viewed by 459
Abstract
Multispectral images contain richer spectral signatures than easily available RGB images, which makes them promising for information perception. However, the relatively high cost of multispectral sensors and their lower spatial resolution limit the widespread application of multispectral data, and existing reconstruction algorithms suffer from a lack of diverse training datasets and insufficient reconstruction accuracy. In response to these issues, this paper proposes a novel and robust multispectral reconstruction network from low-cost natural color RGB images, based on freely available satellite images with various land cover types. First, to supplement paired natural color RGB and multispectral images, the Houston hyperspectral dataset was used to train a convolutional neural network, Model-TN, which generates natural color RGB images from true color images using CIE standard colorimetric system theory. Then, EuroSAT multispectral satellite images covering eight land cover types were converted to natural RGB with Model-TN to form training image pairs, which were input into a residual network with channel attention mechanisms to train the multispectral image reconstruction model, Model-NM. Finally, the feasibility of the reconstructed multispectral images is verified through image classification and target detection. The mean relative absolute error is 0.0081 for generating natural color RGB images and 0.0397 for reconstructing multispectral images. Compared to RGB images, the accuracies of classification and detection using reconstructed multispectral images improve by 16.67% and 3.09%, respectively. This study further reveals the potential of multispectral image reconstruction from natural color RGB images and its effectiveness in target detection, which promotes low-cost visual perception for intelligent unmanned systems. Full article
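
The reconstruction model described above is a residual network with channel attention that maps a 3-band natural color image to a multispectral cube. The PyTorch sketch below is a schematic stand-in, not Model-NM itself; the layer widths, block count, and 13-band output (an EuroSAT-like assumption) are invented for the example.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate over feature channels."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return x * self.fc(x)

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            ChannelAttention(channels),
        )
    def forward(self, x):
        return x + self.body(x)

class RGB2MS(nn.Module):
    """Toy RGB-to-multispectral reconstructor (13 output bands assumed)."""
    def __init__(self, out_bands=13, width=64, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(3, width, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(width, out_bands, 3, padding=1)
    def forward(self, rgb):
        return self.tail(self.blocks(self.head(rgb)))

x = torch.rand(2, 3, 64, 64)     # batch of natural color RGB patches
print(RGB2MS()(x).shape)         # torch.Size([2, 13, 64, 64])
```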

28 pages, 3438 KiB  
Article
Optimizing Remote Sensing Image Retrieval Through a Hybrid Methodology
by Sujata Alegavi and Raghvendra Sedamkar
J. Imaging 2025, 11(6), 179; https://doi.org/10.3390/jimaging11060179 - 28 May 2025
Viewed by 561
Abstract
The contemporary challenge in remote sensing lies in the precise retrieval of increasingly abundant and high-resolution remotely sensed images (RS image) stored in expansive data warehouses. The heightened spatial and spectral resolutions, coupled with accelerated image acquisition rates, necessitate advanced tools for effective data management, retrieval, and exploitation. The classification of large-sized images at the pixel level generates substantial data, escalating the workload and search space for similarity measurement. Semantic-based image retrieval remains an open problem due to limitations in current artificial intelligence techniques. Furthermore, on-board storage constraints compel the application of numerous compression algorithms to reduce storage space, intensifying the difficulty of retrieving substantial, sensitive, and target-specific data. This research proposes an innovative hybrid approach to enhance the retrieval of remotely sensed images. The approach leverages multilevel classification and multiscale feature extraction strategies to enhance performance. The retrieval system comprises two primary phases: database building and retrieval. Initially, the proposed Multiscale Multiangle Mean-shift with Breaking Ties (MSMA-MSBT) algorithm selects informative unlabeled samples for hyperspectral and synthetic aperture radar images through an active learning strategy. Addressing the scaling and rotation variations in image capture, a flexible and dynamic algorithm, modified Deep Image Registration using Dynamic Inlier (IRDI), is introduced for image registration. Given the complexity of remote sensing images, feature extraction occurs at two levels. Low-level features are extracted using the modified Multiscale Multiangle Completed Local Binary Pattern (MSMA-CLBP) algorithm to capture local contexture features, while high-level features are obtained through a hybrid CNN structure combining pretrained networks (Alexnet, Caffenet, VGG-S, VGG-M, VGG-F, VGG-VDD-16, VGG-VDD-19) and a fully connected dense network. Fusion of low- and high-level features facilitates final class distinction, with soft thresholding mitigating misclassification issues. A region-based similarity measurement enhances matching percentages. Results, evaluated on high-resolution remote sensing datasets, demonstrate the effectiveness of the proposed method, outperforming traditional algorithms with an average accuracy of 86.66%. The hybrid retrieval system exhibits substantial improvements in classification accuracy, similarity measurement, and computational efficiency compared to state-of-the-art scene classification and retrieval methods. Full article
(This article belongs to the Topic Computational Intelligence in Remote Sensing: 2nd Edition)
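
At retrieval time the pipeline above compares a fused low-level/high-level descriptor of the query against the database. A minimal, generic sketch of that step (feature fusion followed by cosine-similarity ranking) is shown below; the descriptors are random placeholders rather than actual MSMA-CLBP or CNN outputs, and their lengths are arbitrary.

```python
import numpy as np

def fuse(low_level, high_level):
    """Concatenate L2-normalized low- and high-level descriptors."""
    low = low_level / (np.linalg.norm(low_level) + 1e-12)
    high = high_level / (np.linalg.norm(high_level) + 1e-12)
    return np.concatenate([low, high])

def rank_by_cosine(query, database):
    """Return database indices sorted by cosine similarity to the query."""
    q = query / (np.linalg.norm(query) + 1e-12)
    db = database / (np.linalg.norm(database, axis=1, keepdims=True) + 1e-12)
    sims = db @ q
    return np.argsort(-sims), sims

rng = np.random.default_rng(7)
db = np.stack([fuse(rng.random(64), rng.random(512)) for _ in range(100)])  # 100 archived images
query = fuse(rng.random(64), rng.random(512))
order, sims = rank_by_cosine(query, db)
print("top-5 matches:", order[:5], "scores:", np.round(sims[order[:5]], 3))
```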

21 pages, 10091 KiB  
Article
Scalable Hyperspectral Enhancement via Patch-Wise Sparse Residual Learning: Insights from Super-Resolved EnMAP Data
by Parth Naik, Rupsa Chakraborty, Sam Thiele and Richard Gloaguen
Remote Sens. 2025, 17(11), 1878; https://doi.org/10.3390/rs17111878 - 28 May 2025
Viewed by 716
Abstract
A majority of hyperspectral super-resolution methods aim to enhance the spatial resolution of hyperspectral imaging data (HSI) by integrating high-resolution multispectral imaging data (MSI), leveraging rich spectral information for various geospatial applications. Key challenges include spectral distortions from high-frequency spatial data, high computational complexity, and limited training data, particularly for new-generation sensors with unique noise patterns. In this contribution, we propose a novel parallel patch-wise sparse residual learning (P2SR) algorithm for resolution enhancement based on fusion of HSI and MSI. The proposed method uses multi-decomposition techniques (i.e., Independent component analysis, Non-negative matrix factorization, and 3D wavelet transforms) to extract spatial and spectral features to form a sparse dictionary. The spectral and spatial characteristics of the scene encoded in the dictionary enable reconstruction through a first-order optimization algorithm to ensure an efficient sparse representation. The final spatially enhanced HSI is reconstructed by combining the learned features from low-resolution HSI and applying an MSI-regulated guided filter to enhance spatial fidelity while minimizing artifacts. P2SR is deployable on a high-performance computing (HPC) system with parallel processing, ensuring scalability and computational efficiency for large HSI datasets. Extensive evaluations on three diverse study sites demonstrate that P2SR consistently outperforms traditional and state-of-the-art (SOA) methods in both quantitative metrics and qualitative spatial assessments. Specifically, P2SR achieved the best average PSNR (25.2100) and SAM (12.4542) scores, indicating superior spatio-spectral reconstruction contributing to sharper spatial features, reduced mixed pixels, and enhanced geological features. P2SR also achieved the best average ERGAS (8.9295) and Q2n (0.5156), which suggests better overall fidelity across all bands and perceptual accuracy with the least spectral distortions. Importantly, we show that P2SR preserves critical spectral signatures, such as Fe2+ absorption, and improves the detection of fine-scale environmental and geological structures. P2SR’s ability to maintain spectral fidelity while enhancing spatial detail makes it a powerful tool for high-precision remote sensing applications, including mineral mapping, land-use analysis, and environmental monitoring. Full article
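
The core of P2SR is a sparse dictionary assembled from multi-decomposition features, with spectra reconstructed by sparse coding. As a rough, simplified illustration (NMF standing in for the paper's ICA/NMF/wavelet trio, and scikit-learn's OMP coder standing in for the first-order optimizer), one could do something like the following on synthetic data.

```python
import numpy as np
from sklearn.decomposition import NMF, sparse_encode

rng = np.random.default_rng(1)

# Synthetic low-resolution HSI pixels: 500 spectra with 100 bands (non-negative reflectance).
spectra = rng.random((500, 100))

# Build a small spectral dictionary with NMF (one of several decompositions used in P2SR).
nmf = NMF(n_components=20, init="nndsvda", max_iter=500, random_state=0)
nmf.fit(spectra)
dictionary = nmf.components_            # shape (20 atoms, 100 bands)

# Sparse-code each spectrum against the dictionary (orthogonal matching pursuit).
codes = sparse_encode(spectra, dictionary, algorithm="omp", n_nonzero_coefs=5)
reconstruction = codes @ dictionary

err = np.linalg.norm(spectra - reconstruction) / np.linalg.norm(spectra)
print("atoms used per spectrum:", int((codes != 0).sum(1).mean()))
print("relative reconstruction error:", round(float(err), 3))
```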

24 pages, 6314 KiB  
Article
CDFAN: Cross-Domain Fusion Attention Network for Pansharpening
by Jinting Ding, Honghui Xu and Shengjun Zhou
Entropy 2025, 27(6), 567; https://doi.org/10.3390/e27060567 - 27 May 2025
Viewed by 472
Abstract
Pansharpening provides a computational solution to the resolution limitations of imaging hardware by enhancing the spatial quality of low-resolution multispectral (LRMS) images using high-resolution panchromatic (PAN) guidance. From an information-theoretic perspective, the task involves maximizing the mutual information between PAN and LRMS inputs while minimizing spectral distortion and redundancy in the fused output. However, traditional spatial-domain methods often fail to preserve high-frequency texture details, leading to entropy degradation in the resulting images. On the other hand, frequency-based approaches struggle to effectively integrate spatial and spectral cues, often neglecting the underlying information content distributions across domains. To address these shortcomings, we introduce a novel architecture, termed the Cross-Domain Fusion Attention Network (CDFAN), specifically designed for the pansharpening task. CDFAN is composed of two core modules: the Multi-Domain Interactive Attention (MDIA) module and the Spatial Multi-Scale Enhancement (SMCE) module. The MDIA module utilizes discrete wavelet transform (DWT) to decompose the PAN image into frequency sub-bands, which are then employed to construct attention mechanisms across both wavelet and spatial domains. Specifically, wavelet-domain features are used to formulate query vectors, while key features are derived from the spatial domain, allowing attention weights to be computed over multi-domain representations. This design facilitates more effective fusion of spectral and spatial cues, contributing to superior reconstruction of high-resolution multispectral (HRMS) images. Complementing this, the SMCE module integrates multi-scale convolutional pathways to reinforce spatial detail extraction at varying receptive fields. Additionally, an Expert Feature Compensator is introduced to adaptively balance contributions from different scales, thereby optimizing the trade-off between local detail preservation and global contextual understanding. Comprehensive experiments conducted on standard benchmark datasets demonstrate that CDFAN achieves notable improvements over existing state-of-the-art pansharpening methods, delivering enhanced spectral–spatial fidelity and producing images with higher perceptual quality. Full article
(This article belongs to the Section Signal and Data Analysis)
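
The MDIA module starts by decomposing the PAN image into wavelet sub-bands that drive the attention queries. A bare-bones sketch of that decomposition step (not the attention itself) using a single-level 2-D DWT is given below; the PAN image is synthetic and the wavelet choice is illustrative.

```python
import numpy as np
import pywt  # PyWavelets

# Synthetic high-resolution panchromatic image (placeholder for real PAN data).
pan = np.random.default_rng(0).random((256, 256)).astype(np.float32)

# Single-level 2-D DWT: LL holds coarse structure, (LH, HL, HH) hold directional detail.
LL, (LH, HL, HH) = pywt.dwt2(pan, wavelet="haar")
print("LL:", LL.shape, " LH/HL/HH:", LH.shape)

# One plausible way to form wavelet-domain query features: stack the detail sub-bands.
detail_stack = np.stack([LH, HL, HH], axis=0)      # (3, 128, 128)
print("detail feature stack:", detail_stack.shape)

# The inverse transform confirms the decomposition is lossless.
pan_back = pywt.idwt2((LL, (LH, HL, HH)), wavelet="haar")
print("max reconstruction error:", float(np.abs(pan - pan_back).max()))
```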

27 pages, 1306 KiB  
Review
Recent Advancements in Hyperspectral Image Reconstruction from a Compressive Measurement
by Xian-Hua Han, Jian Wang and Huiyan Jiang
Sensors 2025, 25(11), 3286; https://doi.org/10.3390/s25113286 - 23 May 2025
Viewed by 863
Abstract
Hyperspectral (HS) image reconstruction has become a pivotal research area in computational imaging, facilitating the recovery of high-resolution spectral information from compressive snapshot measurements. With the rapid advancement of deep neural networks, reconstruction techniques have achieved significant improvements in both accuracy and computational efficiency, enabling more precise spectral recovery across a wide range of applications. This survey presents a comprehensive overview of recent progress in HS image reconstruction, systematically categorized into three main paradigms: traditional model-based methods, deep learning-based approaches, and hybrid frameworks that integrate data-driven priors with the mathematical modeling of the degradation process. We examine the foundational principles, strengths, and limitations of each category, with particular attention to developments such as sparsity and low-rank priors in model-based methods, the evolution from convolutional neural networks to Transformer architectures in learning-based approaches, and deep unfolding strategies in hybrid models. Furthermore, we review benchmark datasets, evaluation metrics, and prevailing challenges including spectral distortion, computational cost, and generalizability across diverse conditions. Finally, we outline potential research directions to address current limitations. This survey aims to provide a valuable reference for researchers and practitioners striving to advance the field of HS image reconstruction. Full article
(This article belongs to the Special Issue Feature Review Papers in Optical Sensors)
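
For orientation, many of the reconstruction methods this review covers assume a coded-aperture snapshot (CASSI-style) forward model: each band is masked, spatially shifted by its dispersion offset, and summed into a single 2-D measurement. A simplified sketch of that operator and its adjoint is shown below; the mask, band count, and dispersion step are illustrative, not taken from any specific system in the review.

```python
import numpy as np

def forward(hsi, mask, step=1):
    """Coded snapshot: mask each band, shift it along one axis, sum over bands."""
    h, w, bands = hsi.shape
    y = np.zeros((h, w + step * (bands - 1)), dtype=hsi.dtype)
    for b in range(bands):
        y[:, b * step : b * step + w] += mask * hsi[:, :, b]
    return y

def adjoint(y, mask, bands, step=1):
    """Transpose operator: un-shift the measurement and re-apply the mask per band."""
    h = y.shape[0]
    w = y.shape[1] - step * (bands - 1)
    x = np.zeros((h, w, bands), dtype=y.dtype)
    for b in range(bands):
        x[:, :, b] = mask * y[:, b * step : b * step + w]
    return x

rng = np.random.default_rng(3)
cube = rng.random((64, 64, 28)).astype(np.float32)     # synthetic ground-truth HSI
mask = (rng.random((64, 64)) > 0.5).astype(np.float32) # binary coded aperture

measurement = forward(cube, mask)
backprojection = adjoint(measurement, mask, bands=28)
print("measurement:", measurement.shape, " backprojection:", backprojection.shape)
```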

47 pages, 3987 KiB  
Review
Estimating Soil Attributes for Yield Gap Reduction in Africa Using Hyperspectral Remote Sensing Data with Artificial Intelligence Methods: An Extensive Review and Synthesis
by Nadir El Bouanani, Ahmed Laamrani, Hicham Hajji, Mohamed Bourriz, Francois Bourzeix, Hamd Ait Abdelali, Ali El-Battay, Abdelhakim Amazirh and Abdelghani Chehbouni
Remote Sens. 2025, 17(9), 1597; https://doi.org/10.3390/rs17091597 - 30 Apr 2025
Cited by 1 | Viewed by 1419
Abstract
Africa's rapidly growing population is driving unprecedented demands on agricultural production systems. However, agricultural yields in Africa are far below their potential. One of the challenges leading to low productivity is Africa's poor soil quality. Effective soil fertility management is a key factor for optimizing agricultural productivity while ensuring environmental sustainability. Key soil fertility properties, such as soil organic carbon (SOC), nutrient levels (i.e., nitrogen (N), phosphorus (P), and potassium (K)), moisture retention (MR) or moisture content (MC), and soil texture (clay, sand, and loam fractions), are critical factors influencing crop yield. In this context, this study conducts an extensive literature review on the use of hyperspectral remote sensing technologies, with a particular focus on freely accessible hyperspectral remote sensing data (e.g., PRISMA, EnMAP), as well as an evaluation of advanced Artificial Intelligence (AI) models for analyzing and processing spectral data to map soil attributes. More specifically, the study examined progress in applying hyperspectral remote sensing technologies for monitoring and mapping soil properties in Africa over the last 15 years (2008–2024). Our results demonstrated that (i) only very few studies have explored high-resolution remote sensing sensors (i.e., hyperspectral satellite sensors) for soil property mapping in Africa; (ii) there is considerable value in AI approaches for estimating and mapping soil attributes, with a strong recommendation to further explore the potential of deep learning techniques; and (iii) despite advancements in AI-based methodologies and the availability of hyperspectral sensors, their combined application remains underexplored in the African context. To our knowledge, no studies have yet integrated these technologies for soil property mapping in Africa. This review also highlights the potential of adopting hyperspectral data (encompassing both imaging and spectroscopy) integrated with advanced AI models to enhance the accurate mapping of soil fertility properties in Africa, thereby providing a basis for addressing the yield gap. Full article

20 pages, 11001 KiB  
Article
Investigation of Peanut Leaf Spot Detection Using Superpixel Unmixing Technology for Hyperspectral UAV Images
by Qiang Guan, Shicheng Qiao, Shuai Feng and Wen Du
Agriculture 2025, 15(6), 597; https://doi.org/10.3390/agriculture15060597 - 11 Mar 2025
Cited by 2 | Viewed by 701
Abstract
Leaf spot disease significantly impacts peanut growth. Timely, effective, and accurate monitoring of leaf spot severity is crucial for high-yield and high-quality peanut production. Hyperspectral technology from unmanned aerial vehicles (UAVs) is widely employed for disease detection in agricultural fields, but the low spatial resolution of imagery affects accuracy. In this study, peanuts with varying levels of leaf spot disease were detected using hyperspectral images from UAVs. Spectral features of crops and backgrounds were extracted using simple linear iterative clustering (SLIC), the homogeneity index, and k-means clustering. Abundance estimation was conducted using fully constrained least squares based on a distance strategy (D-FCLS), and crop regions were extracted through threshold segmentation. Disease severity was determined based on the average spectral reflectance of crop regions, utilizing classifiers such as XGBoost, the MLP, and the GA-SVM. Results indicate that crop spectra extracted using the superpixel-based unmixing method effectively captured spectral variability, leading to more accurate disease detection. By optimizing threshold values, a better balance between completeness and the internal variability of crop regions was achieved, allowing for the precise extraction of crop regions. Compared to other unmixing methods and manual visual interpretation techniques, the proposed method achieved excellent results, with an overall accuracy of 89.08% and a Kappa coefficient of 85.42% for the GA-SVM classifier. This method provides an objective, efficient, and accurate solution for detecting peanut leaf spot disease, offering technical support for field management with promising practical applications. Full article
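
Abundance estimation above uses a fully constrained least squares variant (D-FCLS). The paper's distance strategy is not reproduced here; the sketch below shows only the classic FCLS approximation, in which the sum-to-one constraint is enforced by appending a heavily weighted row before a non-negative least squares solve. The endmember spectra are made up.

```python
import numpy as np
from scipy.optimize import nnls

def fcls(endmembers, pixel, delta=1e3):
    """Approximate fully constrained least squares unmixing.

    endmembers: (bands, n_endmembers) spectral library (e.g. crop vs. background classes).
    pixel:      (bands,) observed spectrum.
    The sum-to-one constraint is enforced softly via a large penalty row `delta`;
    non-negativity comes from the NNLS solver itself.
    """
    bands, n_em = endmembers.shape
    A = np.vstack([endmembers, delta * np.ones((1, n_em))])
    b = np.concatenate([pixel, [delta]])
    abundances, _ = nnls(A, b)
    return abundances

rng = np.random.default_rng(5)
E = rng.random((120, 3))                       # 120 bands, 3 endmembers (synthetic)
true_a = np.array([0.6, 0.3, 0.1])             # crop-dominated mixed pixel
x = E @ true_a + 0.005 * rng.standard_normal(120)

est = fcls(E, x)
print("estimated abundances:", np.round(est, 3), " sum:", round(float(est.sum()), 3))
```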

30 pages, 22071 KiB  
Article
Analysis of Optical Errors in Joint Fabry–Pérot Interferometer–Fourier-Transform Imaging Spectroscopy Interferometric Super-Resolution Systems
by Yu Zhang, Qunbo Lv, Jianwei Wang, Yinhui Tang, Jia Si, Xinwen Chen and Yangyang Liu
Appl. Sci. 2025, 15(6), 2938; https://doi.org/10.3390/app15062938 - 8 Mar 2025
Viewed by 871
Abstract
Fourier-transform imaging spectroscopy (FTIS) faces inherent limitations in spectral resolution due to the maximum optical path difference (OPD) achievable by its interferometer. To overcome this constraint, we propose a novel spectral super-resolution technology integrating a Fabry–Pérot interferometer (FPI) with FTIS, termed multi-component joint interferometric hyperspectral imaging (MJI-HI). This method leverages the FPI to periodically modulate the target spectrum, enabling FTIS to capture a modulated interferogram. By encoding high-frequency spectral interference information into low-frequency interference regions through FPI modulation, an advanced inversion algorithm is developed to reconstruct the encoded high-frequency components, thereby achieving spectral super-resolution. This study analyzes the impact of primary optical errors and tolerance thresholds in the FPI and FTIS on the interferograms and spectral fidelity of MJI-HI and proposes corresponding algorithmic improvements. Notably, certain errors in the FTIS and FPI exhibit mutual interference. The theoretical framework for error analysis is validated and discussed through numerical simulations, providing critical theoretical support for subsequent instrument development and laying a foundation for advancing novel spectral super-resolution technologies. Full article
(This article belongs to the Special Issue Spectral Detection: Technologies and Applications—2nd Edition)
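
To make the modulation idea concrete: an ideal FPI imprints a periodic Airy transmission on the spectrum, and an FTIS interferogram is, up to constants, the cosine transform of whatever spectrum reaches it. The sketch below simulates that chain for a toy spectrum; the cavity spacing, finesse coefficient, and OPD grid are arbitrary illustrative values, not the instrument parameters analyzed in the paper.

```python
import numpy as np

# Wavenumber grid (cm^-1) and a toy target spectrum with two closely spaced lines.
sigma = np.linspace(12000, 18000, 4000)
spectrum = np.exp(-((sigma - 14990) / 15) ** 2) + np.exp(-((sigma - 15010) / 15) ** 2)

# Ideal Fabry-Perot (Airy) transmission at normal incidence, cavity spacing d in cm.
d = 5e-4             # 5 micrometers, illustrative
finesse_coeff = 10   # F = 4R / (1 - R)^2, illustrative
T = 1.0 / (1.0 + finesse_coeff * np.sin(2 * np.pi * sigma * d) ** 2)
modulated = spectrum * T

# FTIS interferogram: cosine transform of the (modulated) spectrum over the OPD range;
# the maximum OPD sets the native spectral resolution.
opd = np.linspace(0, 0.05, 2000)   # cm
kernel = np.cos(2 * np.pi * opd[:, None] * sigma[None, :])
interferogram = np.trapz(modulated[None, :] * kernel, sigma, axis=1)
print("interferogram samples:", interferogram.shape,
      " value at zero OPD:", round(float(interferogram[0]), 3))
```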

21 pages, 27582 KiB  
Article
Multi-Level Spectral Attention Network for Hyperspectral BRDF Reconstruction from Multi-Angle Multi-Spectral Images
by Liyao Song and Haiwei Li
Remote Sens. 2025, 17(5), 863; https://doi.org/10.3390/rs17050863 - 28 Feb 2025
Cited by 1 | Viewed by 946
Abstract
With the rapid development of hyperspectral applications using unmanned aerial vehicles (UAVs), the traditional assumption that ground objects exhibit Lambertian reflectance is no longer sufficient to meet the high-precision requirements of quantitative inversion and airborne hyperspectral data applications. Therefore, it is necessary to establish a hyperspectral bidirectional reflectance distribution function (BRDF) model suited to the imaged area. However, obtaining multi-angle information from UAV push-broom hyperspectral data is difficult: achieving uniform push-broom imaging while flexibly acquiring multi-angle data is challenging due to spatial distortions, particularly under heightened roll or pitch angles, and the need for multiple flights, which extends acquisition time and exacerbates uneven illumination, introducing errors into BRDF model construction. To address these issues, we propose leveraging the advantages of multi-spectral cameras, such as their compact size, lightweight design, and high signal-to-noise ratio (SNR), to reconstruct hyperspectral multi-angle data. This approach enhances spectral resolution and the number of bands while mitigating spatial distortions, and it effectively captures the multi-angle characteristics of ground objects. In this study, we collected UAV hyperspectral multi-angle data together with the corresponding illumination information and atmospheric parameters, addressing the fact that existing BRDF modeling does not account for outdoor ambient illumination changes, which limits modeling accuracy. Based on this dataset, we propose an improved Walthall model that accounts for illumination variation. This improves the radiance consistency of the multi-angle data, reduces the error caused by illumination variation, and increases BRDF modeling accuracy. In addition, we adopted a Transformer for spectral reconstruction, increasing the number of bands through spectral dimension enhancement, and performed BRDF modeling on the reconstructed spectra. For the multi-level Transformer spectral dimension enhancement algorithm, we added spectral response loss constraints to improve BRDF accuracy. To evaluate the BRDF modeling and quantitative application potential of the reconstruction results, we conducted comparison and ablation experiments. Overall, this work overcomes the difficulty of obtaining multi-angle information with conventional hyperspectral imaging equipment and provides a new way to obtain multi-angle features of objects at higher spectral resolution using low-cost imaging equipment. Full article
(This article belongs to the Section Remote Sensing Image Processing)
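
For context on the Walthall baseline mentioned above: the widely used modified Walthall model expresses directional reflectance as a low-order polynomial in the solar and view zenith angles and their relative azimuth, and its coefficients can be fitted by ordinary least squares. The sketch below fits that standard form to synthetic multi-angle samples; it does not include the paper's illumination-variation improvement, and all angle and coefficient values are made up.

```python
import numpy as np

def walthall_design(theta_s, theta_v, rel_az):
    """Modified Walthall model: rho = a*ts^2*tv^2 + b*(ts^2 + tv^2) + c*ts*tv*cos(phi) + d."""
    return np.column_stack([
        theta_s**2 * theta_v**2,
        theta_s**2 + theta_v**2,
        theta_s * theta_v * np.cos(rel_az),
        np.ones_like(theta_s),
    ])

rng = np.random.default_rng(2)
n = 60                                          # synthetic multi-angle observations
theta_s = np.deg2rad(rng.uniform(20, 50, n))    # solar zenith
theta_v = np.deg2rad(rng.uniform(0, 40, n))     # view zenith
rel_az = np.deg2rad(rng.uniform(0, 180, n))     # relative azimuth

design = walthall_design(theta_s, theta_v, rel_az)
true_coeffs = np.array([0.08, 0.05, -0.03, 0.25])
reflectance = design @ true_coeffs + 0.002 * rng.standard_normal(n)

coeffs, *_ = np.linalg.lstsq(design, reflectance, rcond=None)
print("fitted [a, b, c, d]:", np.round(coeffs, 4))
```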

22 pages, 16205 KiB  
Article
Hyper Spectral Camera ANalyzer (HyperSCAN)
by Wen-Qian Chang, Hsun-Ya Hou, Pei-Yuan Li, Michael W. Shen, Cheng-Ling Kuo, Tang-Huang Lin, Loren C. Chang, Chi-Kuang Chao and Jann-Yenq Liu
Remote Sens. 2025, 17(5), 842; https://doi.org/10.3390/rs17050842 - 27 Feb 2025
Viewed by 1213
Abstract
HyperSCAN (Hyper Spectral Camera ANalyzer) is a hyperspectral imager that monitors the Earth's environment and also serves as an educational platform integrating college students' ideas and skills in optical design and data processing. The advantages of HyperSCAN are its modular design, compact and lightweight construction, and low cost, achieved by using commercial off-the-shelf (COTS) optical components. The modular design allows for flexible and rapid development, as well as validation within college lab environments. To optimize space utilization and reduce the optical path, HyperSCAN's optical system incorporates a folding mirror, making it ideal for the constrained environment of a CubeSat. The use of COTS components significantly lowers pre-development costs and minimizes associated risks. The compact size and cost-effectiveness of CubeSats, combined with the advanced capabilities of hyperspectral imagers, make them a powerful tool for a broad range of applications, such as environmental monitoring of Earth, disaster management, mineral and resource exploration, atmospheric and climate studies, and coastal and marine research. We conducted a spatial-resolution-boost experiment using HyperSCAN data and various hyperspectral datasets, including Urban, Pavia University, Pavia Centre, Botswana, and Indian Pines. Among the data-fusion deep learning models tested, the best image quality was obtained with a two-branch convolutional neural network (TBCNN), which retrieves spatial and spectral features in parallel and reconstructs the higher-spatial-resolution data. With the aid of higher-spatial-resolution multispectral data, we can boost the spatial resolution of HyperSCAN data. Full article
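
The TBCNN mentioned above fuses a spatial branch (fed by higher-resolution multispectral data) with a spectral branch (fed by the lower-resolution hyperspectral data). The PyTorch sketch below is only a schematic of that two-branch idea with invented layer sizes and band counts, not the network evaluated in the paper.

```python
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    """Toy two-branch fusion: spatial features from MS, spectral features from upsampled HSI."""
    def __init__(self, ms_bands=4, hsi_bands=100, width=32):
        super().__init__()
        self.spatial = nn.Sequential(                   # high-resolution multispectral input
            nn.Conv2d(ms_bands, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.spectral = nn.Sequential(                  # low-resolution HSI after upsampling
            nn.Conv2d(hsi_bands, width, 1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 1), nn.ReLU(inplace=True),
        )
        self.fuse = nn.Conv2d(2 * width, hsi_bands, 3, padding=1)

    def forward(self, ms_hr, hsi_lr):
        hsi_up = nn.functional.interpolate(hsi_lr, size=ms_hr.shape[-2:],
                                           mode="bilinear", align_corners=False)
        feats = torch.cat([self.spatial(ms_hr), self.spectral(hsi_up)], dim=1)
        return self.fuse(feats)                         # higher-spatial-resolution HSI estimate

ms = torch.rand(1, 4, 128, 128)     # simulated high-resolution multispectral patch
hsi = torch.rand(1, 100, 32, 32)    # simulated low-resolution hyperspectral patch
print(TwoBranchFusion()(ms, hsi).shape)   # torch.Size([1, 100, 128, 128])
```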

36 pages, 12339 KiB  
Article
ATIS-Driven 3DCNet: A Novel Three-Stream Hyperspectral Fusion Framework with Knowledge from Downstream Classification Performance
by Quan Zhang, Jian Long, Jun Li, Chunchao Li, Jianxin Si and Yuanxi Peng
Remote Sens. 2025, 17(5), 825; https://doi.org/10.3390/rs17050825 - 26 Feb 2025
Viewed by 586
Abstract
Reconstructing high-resolution hyperspectral images (HR-HSIs) by fusing low-resolution hyperspectral images (LR-HSIs) and high-resolution multispectral images (HR-MSIs) is a significant challenge in image processing. Traditional fusion methods focus on visual and statistical metrics, often neglecting the requirements of downstream tasks. To address this gap, we propose a novel three-stream fusion network, 3DCNet, designed to integrate spatial and spectral information from LR-HSIs and HR-MSIs. The framework includes two dedicated branches for extracting spatial and spectral features, alongside a hybrid spatial–spectral branch (HSSI). The spatial block (SpatB) and the spectral block (SpecB) are designed to extract spatial and spectral details. The training process employs the global loss, spatial edge loss, and spectral angle loss for fusion tasks, with an alternating training iteration strategy (ATIS) to enhance downstream classification by iteratively refining the fusion and classification networks. Fusion experiments on seven datasets demonstrate that 3DCNet outperforms existing methods in generating high-quality HR-HSIs. Superior performance in downstream classification tasks on four datasets proves the importance of the ATIS. Ablation studies validate the importance of each module and the ATIS process. The 3DCNet framework not only advances the fusion process by leveraging downstream knowledge but also sets a new benchmark for classification-oriented hyperspectral fusion. Full article
(This article belongs to the Section Remote Sensing Image Processing)
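
One of the training terms listed above, the spectral angle loss, penalizes the angle between predicted and reference spectra at each pixel. A compact PyTorch version of that loss is sketched below; it is a generic formulation, not necessarily 3DCNet's exact weighting, and the tensor shapes are illustrative.

```python
import torch

def spectral_angle_loss(pred, target, eps=1e-8):
    """Mean spectral angle (radians) between predicted and reference HSIs.

    pred, target: tensors of shape (batch, bands, height, width).
    """
    dot = (pred * target).sum(dim=1)
    norm = pred.norm(dim=1) * target.norm(dim=1)
    cos = torch.clamp(dot / (norm + eps), -1.0 + 1e-7, 1.0 - 1e-7)
    return torch.acos(cos).mean()

pred = torch.rand(2, 31, 16, 16, requires_grad=True)
target = torch.rand(2, 31, 16, 16)
loss = spectral_angle_loss(pred, target)
loss.backward()                  # differentiable, so usable alongside global and edge losses
print("SAM loss (rad):", float(loss))
```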