Search Results (903)

Search Parameters:
Keywords = hyperspectral imagery

20 pages, 16517 KB  
Article
UAV Hyperspectral Retrieval of Optically Inactive Water Quality Parameters (Total Hardness and CODMn) Using a GA-Optimized Attention-Enhanced Neural Network
by Guofang Yang, Yingjun Zhao, Yanjie Yang and Xiaoping Niu
Water 2026, 18(10), 1186; https://doi.org/10.3390/w18101186 - 14 May 2026
Abstract
Retrieving non-optically active water quality variables, such as total hardness (TH) and permanganate index (CODMn), from hyperspectral data remains challenging because these parameters are not directly linked to spectral reflectance. To improve their estimation from UAV hyperspectral imagery, a GA-MHSA-BPNN framework was developed by combining a genetic algorithm (GA), multi-head self-attention (MHSA), and a backpropagation neural network (BPNN). In this framework, MHSA was introduced to strengthen the representation of informative spectral features, while GA was applied to optimize the initial network parameters and thus enhance convergence stability. The proposed framework was evaluated against BPNN, GA-BPNN, MHSA-BPNN, and 1D-CNN models. Among the tested approaches, GA-MHSA-BPNN produced the most favorable results for both TH and CODMn, with R2 values of 0.878 and 0.843, respectively. Additional experiments using different proportions of training samples showed that the model remained relatively stable when the training data were reduced to 70% and 50% of the original dataset. These results indicate that integrating GA and MHSA into a UAV hyperspectral retrieval framework can improve the estimation of non-optically active water quality variables and provide useful methodological support for efficient and refined monitoring of drinking water source areas.
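The core building block of the framework above is multi-head self-attention applied to spectral features. A minimal NumPy sketch of that mechanism follows; the random weights, token count, and dimensions are illustrative only — in the paper these parameters are learned jointly with the BPNN and their initial values are seeded by the GA.

```python
import numpy as np

def multi_head_self_attention(x, n_heads, rng):
    """Toy multi-head self-attention over a sequence of spectral tokens.

    x: (n_tokens, d_model), e.g. band-group features of one water sample.
    Weights are random here; a real model would learn them.
    """
    n_tokens, d_model = x.shape
    d_head = d_model // n_heads
    out = np.empty_like(x)
    for h in range(n_heads):
        # Random projection matrices for query, key, value (illustrative).
        Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
                      for _ in range(3))
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        scores = q @ k.T / np.sqrt(d_head)
        scores -= scores.max(axis=1, keepdims=True)   # numerical stability
        attn = np.exp(scores)
        attn /= attn.sum(axis=1, keepdims=True)       # softmax over tokens
        out[:, h * d_head:(h + 1) * d_head] = attn @ v
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))   # 8 spectral tokens, 16-dim features
y = multi_head_self_attention(x, n_heads=4, rng=rng)
print(y.shape)                     # (8, 16)
```

Each head attends over all tokens, so informative bands can reweight the representation of every other band — the property the abstract credits for strengthening spectral features.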

29 pages, 6263 KB  
Article
Linking Plant Traits to Fire Potential Mapping: A Feasibility Study in Australian Ecosystems
by Andrea Viñuales, Nicolas Younes, Mbam Itumo, Marta Yebra, Ignacio de la Calle and Javier Madrigal
Remote Sens. 2026, 18(10), 1546; https://doi.org/10.3390/rs18101546 - 13 May 2026
Abstract
Given the increasing frequency, severity, and socioecological impacts of wildfires, there is an urgent need for robust frameworks to better characterize fire behavior and flammability patterns across ecosystems to support early warning, mitigation, and management strategies. However, flammability remains difficult to quantify and scale, as it involves multiple interacting components that are typically measured at the bench scale. This study aimed to establish empirical links between spectral information, plant traits, and flammability metrics, and to scale these relationships to satellite imagery to translate these metrics into a spatial context. We combined laboratory spectroscopy, plant trait measurements including leaf mass per area, carbon, and cellulose, and combustion experiments using a simple and reproducible burning device. In total, 84 samples were collected and analysed, allowing us to characterise how spectral signatures relate to vegetation traits and fire behaviour. Spectral indices were developed to estimate plant traits, which were subsequently used as predictors in flammability models. These models were then transferred to Environmental Mapping and Analysis Program (EnMAP) hyperspectral imagery to derive spatial estimates across eucalypt forests and grasslands of the Australian Capital Territory (ACT). Spectral information distinguished fuel types and captured variability of the plant traits, while these traits showed associations with combustion behaviour. Based on these links, the best-performing model predicted the rate of temperature increase, a combustibility metric, in eucalypt forests (R2 = 0.70; Root Mean Square Error = 32.48 °C/s). In contrast, grassland models showed limited predictive performance, likely due to weaker relationships between plant traits and flammability metrics. Overall, this study demonstrates a practical and scalable approach for deriving flammability maps from hyperspectral and in situ data, highlighting the potential of plant-trait-based remote sensing. The resulting maps should not be interpreted as standalone fire risk products, but rather as a characterization of the structural and biochemical drivers of flammability. The main constraint of this work is the limited sample size. Future research should expand spatial and temporal coverage to better capture vegetation variability and enable the inclusion of independent validation datasets. Exploring alternative combustion protocols and testing more advanced spectral modelling approaches for trait estimation would provide additional insights.
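The index-to-trait step described above can be sketched as a normalized-difference index computed from two reflectance bands, followed by an ordinary least-squares trait model. The wavelengths, trait values, and coefficients below are invented for illustration and are not those used in the study.

```python
import numpy as np

def nd_index(refl, wl, b1, b2):
    """Normalized-difference index from a reflectance spectrum.

    refl: (n_bands,) reflectance; wl: (n_bands,) wavelengths in nm.
    b1 and b2 are illustrative wavelengths, not the study's bands.
    """
    r1 = refl[np.argmin(np.abs(wl - b1))]
    r2 = refl[np.argmin(np.abs(wl - b2))]
    return (r1 - r2) / (r1 + r2)

# Synthetic demo: compute the index for a few samples, then fit a linear
# trait model (trait ~ a * index + b) by ordinary least squares.
wl = np.linspace(400, 2400, 201)
rng = np.random.default_rng(1)
spectra = 0.2 + 0.05 * rng.random((10, wl.size))
idx = np.array([nd_index(s, wl, 1720.0, 2100.0) for s in spectra])
trait = 80.0 * idx + 5.0 + 0.1 * rng.standard_normal(10)  # fake trait values
a, b = np.polyfit(idx, trait, 1)
print(a, b)   # recovered slope/intercept near the synthetic 80 and 5
```

The fitted trait map would then serve as a predictor in the flammability model, as the abstract describes.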
(This article belongs to the Special Issue Hyperspectral Data Analysis of Vegetation and Soil Monitoring)

18 pages, 18215 KB  
Article
Estimation of Soil Total Nitrogen in Plateau Agriculture Regions from UAV Hyperspectral Data
by Yinan Luo, Bo-Hui Tang, Dong Wang, Fangliang Cai and Zhao-Liang Li
Remote Sens. 2026, 18(10), 1532; https://doi.org/10.3390/rs18101532 - 12 May 2026
Abstract
Soil total nitrogen (STN) is a key indicator of soil fertility and plays a fundamental role in agricultural productivity and sustainable land management. However, achieving an accurate and spatially continuous estimate of STN at the field scale remains challenging due to inherent soil variability and the constraints of conventional sampling methods. In this study, we employed unmanned aerial vehicle (UAV)-based hyperspectral imagery to estimate STN by integrating spectral preprocessing, feature selection, and machine learning techniques. Multiple feature selection methods, including Pearson correlation analysis, variable importance in projection (VIP), and competitive adaptive reweighted sampling (CARS), were evaluated to identify the most informative spectral bands. Several regression models—support vector regression with radial basis function kernel (SVR-RBF), random forest (RF), Extra Trees, PCA-SVR-RBF, and XGBoost—were compared for STN prediction. Among these, the VIP-PCA-SVR-RBF model yielded the best performance, achieving a test R2 of approximately 0.77 and an RMSE of 0.45 g kg−1. The integration of VIP-based feature selection with PCA dimensionality reduction significantly enhanced predictive accuracy and generalization capability compared to the other models tested. Spatial prediction maps derived from the optimal model revealed considerable heterogeneity in STN distribution across the study area. These results underscore the potential of UAV hyperspectral remote sensing for high-resolution mapping of soil nitrogen and offer a promising framework for precision nutrient management in agriculture.
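The feature-selection → PCA → SVR-RBF chain described above can be sketched with scikit-learn. Since VIP and CARS are not in scikit-learn, a simple correlation-based band ranking stands in for the selection step here; the dataset, band counts, and hyperparameters are synthetic placeholders, not the study's.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic "spectra": low-rank band structure (spectra are highly
# correlated across bands) plus noise, and a target driven by one factor.
rng = np.random.default_rng(0)
factors = rng.standard_normal((120, 5))
X = factors @ rng.standard_normal((5, 200)) + 0.05 * rng.standard_normal((120, 200))
y = factors[:, 0] + 0.1 * rng.standard_normal(120)   # fake STN values

# Stand-in for VIP: keep the 30 bands most correlated with the target.
r = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
keep = np.argsort(r)[-30:]

model = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=10)),
    ("svr", SVR(kernel="rbf", C=10.0)),
])
model.fit(X[:, keep], y)
print(round(model.score(X[:, keep], y), 2))   # in-sample R^2
```

The point of the two-stage reduction is the same as in the paper: selection discards uninformative bands, and PCA decorrelates the survivors before the kernel regression.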

23 pages, 11707 KB  
Technical Note
HyperCoreg: An Automated, Operational Pipeline for Co-Registering PRISMA and EnMAP Hyperspectral Imagery
by José Antonio Gámez García, Giacomo Lazzeri and Deodato Tapete
Geomatics 2026, 6(3), 47; https://doi.org/10.3390/geomatics6030047 - 11 May 2026
Abstract
HyperCoreg is an automated, end-to-end pipeline for geometric co-registration of spaceborne hyperspectral imagery (PRISMA L2D and EnMAP L2A) to Sentinel-2 Level-2A reference data. The workflow addresses scene-dependent geolocation errors that hinder reliable data fusion and multi-temporal analyses, particularly in cloud-affected acquisitions. HyperCoreg builds on the AROSICS framework without replacing its image-matching engine and extends it at the workflow level through four operational functions: automated Sentinel-2 candidate selection, hyperspectral-to-multispectral band pairing, sequential alignment logic, and quality-controlled acceptance. The main output is a co-registered hyperspectral cube along with comprehensive metrics, per-scene reports, and optional diagnostic products that support accessible quality control. Performance is evaluated on a long time series of PRISMA images collected from 2019 to 2025 and an EnMAP test set acquired in 2025, over the Metropolitan City of Rome (Italy). The multi-sensor dataset encompasses heterogeneous acquisition conditions, including variable cloud cover, illumination, and seasonal variability. The results show systematic reductions in mean residual error compared with a controlled basic AROSICS-based pipeline configuration. The largest gains are achieved in challenging conditions where tie points are sparse or unevenly distributed. By improving geometric consistency, this pipeline facilitates spatial layering and integration of hyperspectral data with higher-resolution urban layers and supports a range of downstream applications where data integration and spatiotemporal consistency are cornerstones of further analysis.
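One of the four operational functions, hyperspectral-to-multispectral band pairing, amounts to matching each reference band to the spectrally nearest hyperspectral band. A minimal sketch, using a PRISMA-like 8 nm grid and Sentinel-2A 10 m band centres; HyperCoreg's actual pairing logic and tolerances may differ.

```python
import numpy as np

def pair_bands(hs_centers, ms_centers, max_offset=10.0):
    """Pair each multispectral band with the nearest hyperspectral band.

    Returns (ms_index, hs_index) pairs; pairs whose centre wavelengths
    differ by more than max_offset nm are rejected.
    """
    pairs = []
    for i, c in enumerate(ms_centers):
        j = int(np.argmin(np.abs(hs_centers - c)))
        if abs(hs_centers[j] - c) <= max_offset:
            pairs.append((i, j))
    return pairs

hs = np.arange(400.0, 2500.0, 8.0)           # ~8 nm sampling, PRISMA-like
s2 = np.array([492.4, 559.8, 664.6, 832.8])  # Sentinel-2A 10 m band centres
print(pair_bands(hs, s2))
```

Matching bands this way ensures the image-matching engine compares radiometrically similar layers, which is what makes tie-point detection between the two sensors reliable.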

25 pages, 4816 KB  
Article
SASR: Sensor-Agnostic Semantic Representation Unification for Cross-Modal RGB and Hyperspectral Aerial Scene Recognition
by Muhammad Zaheer Sajid, Muhammad Fareed Hamid, Kamran Bashir Taas, Muhammad Attique Khan, Latifah Almuqren, Mohammad Alhefdi, Yunyoung Nam and Zepa Yang
Remote Sens. 2026, 18(9), 1444; https://doi.org/10.3390/rs18091444 - 6 May 2026
Abstract
Aerial scene recognition has progressed substantially with deep learning methods for RGB and hyperspectral imagery; however, existing approaches typically operate on single modalities or rely on explicit multimodal fusion, limiting scalability, flexibility, and deployment in heterogeneous sensing environments. To address this limitation, we propose a sensor-agnostic semantic representation learning framework that formulates multimodal learning as the unification of semantic representations rather than feature-level fusion. The proposed architecture employs modality-specific encoders and projection heads to map spatial and spectral–spatial features into a shared semantic embedding space, enabling modality-invariant representation learning while preserving discriminative characteristics of each sensing modality. A composite objective integrating cross-spectral alignment, intra-class compactness regularization, and prototype-based semantic anchoring is introduced to enforce consistent embedding geometry and improve class separability across modalities. A unified classifier operating within this shared space enables reliable inference from a single modality input without requiring paired data or explicit fusion. Extensive evaluations on multiple benchmark datasets, including Houston 2013 for cross-modality RGB–hyperspectral analysis, UC Merced for independent RGB aerial scene classification, and Indian Pines for hyperspectral land-cover recognition, demonstrate the robustness and generalization capability of the proposed framework. In Houston 2013, the method achieves 96.4% (RGB) and 97.3% (hyperspectral) overall accuracy, with cross-modality transfer performance of 87.2% (RGB → HSI) and 88.7% (HSI → RGB), further improving to 97.0% and 97.8% under joint training. On UC Merced and Indian Pines, the model attains 98.7% and 97.6% overall accuracy, respectively. These results establish semantic representation unification as a scalable and effective alternative to conventional multimodal fusion for heterogeneous remote sensing environments.

36 pages, 11468 KB  
Article
A Multisensor Framework for Satellite Data Simulation: Generating Representative Datasets for Future ESA Missions—CHIME and LSTM
by Pelagia Koutsantoni, Maria Kremezi, Vassilia Karathanassi, Paola Di Lauro, José Andrés Vargas-Solano, Giulio Ceriola, Antonello Aiello and Elisabetta Lamboglia
Remote Sens. 2026, 18(9), 1384; https://doi.org/10.3390/rs18091384 - 30 Apr 2026
Abstract
The preparation for next-generation Earth Observation missions, such as the European Space Agency’s (ESA) Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) and Land Surface Temperature Monitoring (LSTM), requires robust pre-launch proxy datasets. Because current simulation methodologies frequently rely on isolated, platform-specific approaches, this study proposes a comprehensive, unified multisensor framework capable of dynamically generating operationally realistic CHIME and LSTM datasets from diverse airborne and satellite sources. Three distinct processing pipelines were established. For hyperspectral data simulation, precursor satellite imagery (PRISMA and EnMAP) and high-resolution airborne measurements (HySpex) were harmonized to CHIME’s 30 m specifications utilizing Spectral Response Function (SRF) adjustments, Point Spread Function (PSF) spatial resampling, and 6S atmospheric radiative transfer modeling. For thermal data simulation, archive Landsat 8/9 and ASTER imagery were transformed into LSTM’s target 50 m, 5-band configuration using a synergistic two-step approach: a physics-based Spectral Super-Resolution (SSR) module followed by an AI-driven Spatial Super-Resolution (SpSR) transformer network. Evaluated across highly diverse inland, coastal, and riverine testbeds in Italy, the simulated products demonstrated high spectral, spatial, and radiometric fidelity. While inherently constrained by the native spectral ranges of the input sensors and by the current lack of absolute on-orbit mission data for validation, the downscaled images closely reproduced complex thermal patterns and water-quality gradients. Ultimately, this scalable framework provides the remote sensing community with early access to representative datasets and mission performance assessments, while accelerating pre-launch algorithm development and testing for environmental monitoring applications—particularly those focused on water discharges.
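The SRF adjustment step mentioned above reduces to weighting the source spectrum by the target band's spectral response and normalizing. A minimal sketch, assuming a Gaussian SRF as a stand-in for the actual CHIME response functions:

```python
import numpy as np

def simulate_band(wl, refl, srf_center, srf_fwhm):
    """Synthesize one target-sensor band as an SRF-weighted average.

    wl, refl: source wavelengths (nm) and reflectance on that grid.
    A Gaussian SRF (centre, FWHM) is assumed here for illustration;
    the mission's tabulated SRFs would replace it in a real simulation.
    """
    sigma = srf_fwhm / 2.3548                  # FWHM -> Gaussian sigma
    srf = np.exp(-0.5 * ((wl - srf_center) / sigma) ** 2)
    return float(np.sum(srf * refl) / np.sum(srf))

wl = np.linspace(400, 1000, 301)               # 2 nm source grid
refl = 0.1 + 0.2 * (wl > 700)                  # step "red edge" spectrum
print(round(simulate_band(wl, refl, 660.0, 10.0), 3))  # ~0.1 (red band)
print(round(simulate_band(wl, refl, 860.0, 10.0), 3))  # ~0.3 (NIR band)
```

PSF spatial resampling plays the analogous role in the spatial domain, convolving with the target sensor's point spread function before decimating to 30 m.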

32 pages, 7017 KB  
Article
Individual Tree Species Classification in a Mining Area of the Yellow River Basin Using UAV-Based LiDAR, Hyperspectral, and RGB Data
by Guo Wang, Sheng Nie, Xiaohuan Xi, Cheng Wang and Hongtao Wang
Remote Sens. 2026, 18(9), 1361; https://doi.org/10.3390/rs18091361 - 28 Apr 2026
Abstract
The Yellow River Basin contains abundant coal resources; however, its ecological environment is inherently fragile, and vegetation degradation has been further intensified by extensive mining activities. Accurate classification of individual tree species in mining-affected areas is therefore essential for assessing ecological conditions and establishing a scientific foundation for targeted restoration and sustainable management. To address this need, a machine learning framework was developed and evaluated for individual tree species classification in a coal mining area of the Yellow River Basin using integrated unmanned aerial vehicle (UAV) data. A comprehensive feature set was constructed by extracting 278 attributes per tree. These attributes included 224 spectral bands and 29 hyperspectral indices derived from hyperspectral imagery, 24 textural metrics obtained from RGB orthophotos, and one canopy height feature generated from a LiDAR-derived model. Based on ground-truth data from 1095 individual trees, seven machine learning algorithms were trained and systematically compared: Random Forest (RF), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Decision Tree (DT), Gradient Boosting (GB), Logistic Regression (LR), and XGBoost. Statistical significance testing using 5 × 5 repeated cross-validation, together with the Friedman test and post hoc Nemenyi test, and additional model stability analysis consistently identified XGBoost as the optimal classifier. On an independent test set, XGBoost achieved high accuracy (Overall Accuracy = 0.897, Kappa = 0.811) with an efficient training time of 2.36 s. Further analysis demonstrated the critical and complementary roles of hyperspectral and structural features in species discrimination. The optimized model was subsequently applied to generate a detailed wall-to-wall tree species map across the entire mining area. Overall, this study presents a statistically informed comparison of classifiers for multi-source feature-based species discrimination and delivers an evaluated and practical pipeline for effective vegetation monitoring. The proposed framework provides a scientific tool for assessing and managing ecological recovery in complex mining environments, particularly within ecologically sensitive regions such as the Yellow River Basin.
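The 5 × 5 repeated cross-validation plus Friedman-test comparison described above can be sketched with scikit-learn and SciPy. The synthetic data and the subset of three classifiers below are placeholders for the study's seven models and UAV-derived features.

```python
import numpy as np
from scipy.stats import friedmanchisquare
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the per-tree feature table.
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)

# 5-fold CV repeated 5 times -> 25 accuracy scores per classifier.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
models = {
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(random_state=0),
}
scores = {name: cross_val_score(m, X, y, cv=cv) for name, m in models.items()}

# Friedman test across the matched folds; a small p-value would justify
# a post hoc (e.g. Nemenyi) pairwise comparison, as in the paper.
stat, p = friedmanchisquare(*scores.values())
for name, s in scores.items():
    print(f"{name}: {s.mean():.3f}")
print(f"Friedman p = {p:.4f}")
```

Because all classifiers are scored on the identical folds, the Friedman test treats folds as blocks, which is what makes the subsequent ranking of classifiers statistically defensible.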
(This article belongs to the Special Issue Remote Sensing and Smart Forestry (Third Edition))

32 pages, 5393 KB  
Article
TCSNet: A Thin-Cloud-Sensitive Network for Hyperspectral Remote Sensing Images via Spectral-Spatial Feature Fusion
by Yuanyuan Jia, Siwei Zhao, Xuanbin Liu and Yinnian Liu
Remote Sens. 2026, 18(9), 1326; https://doi.org/10.3390/rs18091326 - 26 Apr 2026
Abstract
Cloud detection is essential for quantitative land-surface remote sensing and cloud-climate research. However, existing methods often prioritize spatial features over spectral features, which limits thin-cloud detection. To address this issue, this paper proposes a Thin-Cloud-Sensitive Network (TCSNet) for hyperspectral imagery. TCSNet employs an encoder–decoder architecture with a dual-branch design: a convolutional neural network (CNN) extracts multi-scale local features, while a PVTv2-B2 Transformer captures long-range spectral dependencies. To effectively integrate the complementary representations from both branches, a Cross-Modal Fusion (CMF) module with a lightweight single-channel gate is introduced at each stage, followed by a channel attention mechanism (SE) for feature recalibration. Subsequently, a Multi-Scale Fusion (MSF) module is used to integrate multi-level features through a top-down pathway, enabling deep semantic information to guide shallow feature expression. Furthermore, to enhance the decoder’s feature representation capability, a Combined Attention Mechanism (CAM) is incorporated at each decoder stage. This design enables the network to simultaneously focus on important channels, salient regions, and cloud boundaries, effectively alleviating spectral confusion between thin clouds and the underlying surface. Experimental results on Gaofen-5 01 hyperspectral data demonstrate that TCSNet achieves the highest overall recall (92.98%), thin-cloud recall (85.59%), and thick-cloud recall (99.75%), thereby validating its superiority for thin-cloud detection.
(This article belongs to the Special Issue Artificial Intelligence in Hyperspectral Remote Sensing Data Analysis)

19 pages, 3497 KB  
Article
A Python-Based Workflow for Asbestos Roof Mapping and Temporal Monitoring Using Satellite Imagery
by Giuseppe Bonifazi, Alice Aurigemma, José Salas-Cáceres, Javier Lorenzo-Navarro, Silvia Serranti, Federica Paglietti, Sergio Bellagamba and Sergio Malinconico
Geomatics 2026, 6(3), 41; https://doi.org/10.3390/geomatics6030041 - 25 Apr 2026
Abstract
The detection and monitoring of asbestos–cement roofing remain a critical public health and environmental challenge, especially in urban and suburban areas where asbestos-containing materials are still widespread due to their extensive use in the 20th century. Although hyperspectral and high-resolution multispectral remote sensing have proven effective for mapping asbestos–cement roofs, many existing approaches rely on proprietary software, limiting transparency, reproducibility, and large-scale adoption. This study presents a fully reproducible, cost-free Python-based workflow for the detection and temporal monitoring of asbestos–cement roofing using high-resolution multispectral WorldView-3 imagery. The workflow integrates atmospheric correction (using the Py6S radiative transfer model), spatial preprocessing, supervised pixel-based classification, postprocessing, and building-level aggregation within an open framework. A Maximum Likelihood Classifier is applied to VNIR and SWIR data using empirically defined roof typologies to enhance class separability. Pixel-level results are aggregated to the building scale through adaptive thresholding, enabling the translation of spectral classifications into meaningful building-level information. Tested over the city of Mantua (Italy), the approach achieved reliable classification performance and enabled multi-temporal comparison to identify changes potentially due to roof remediation. Evaluation metrics (precision, recall, and F1-score) highlight the importance of carefully choosing the building-level threshold. By relying exclusively on open-source tools, the workflow enhances transparency, reproducibility, and scalability for long-term monitoring.
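The Maximum Likelihood Classifier at the heart of the workflow above is a Gaussian per-class model: each pixel is assigned to the class whose fitted mean and covariance give it the highest log-likelihood. A self-contained NumPy sketch on synthetic 3-band "roof" spectra (the class statistics and band count are invented, not the paper's roof typologies):

```python
import numpy as np

def mlc_fit(X, y):
    """Per-class mean and covariance for a Gaussian maximum likelihood classifier."""
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return stats

def mlc_predict(X, stats):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    classes = sorted(stats)
    ll = np.empty((X.shape[0], len(classes)))
    for k, c in enumerate(classes):
        mu, cov = stats[c]
        d = X - mu
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        # -0.5 * (Mahalanobis^2 + log|cov|), constants dropped
        ll[:, k] = -0.5 * (np.einsum("ij,jk,ik->i", d, inv, d) + logdet)
    return np.array(classes)[np.argmax(ll, axis=1)]

rng = np.random.default_rng(0)
roof_a = rng.normal([0.3, 0.5, 0.4], 0.02, (50, 3))   # fake "asbestos" spectra
roof_b = rng.normal([0.6, 0.4, 0.2], 0.02, (50, 3))   # fake "other" spectra
X = np.vstack([roof_a, roof_b])
y = np.array([0] * 50 + [1] * 50)
stats = mlc_fit(X, y)
print((mlc_predict(X, stats) == y).mean())            # well separated -> 1.0
```

The building-level aggregation step would then count classified pixels per roof polygon and apply the adaptive threshold the abstract describes.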

23 pages, 5969 KB  
Article
A Pyramid-Enhanced Swin Transformer for Robust Hyperspectral–Multispectral Image Fusion and Super-Resolution
by Yu Lu, Lin Hu, Jiankai Hu, Shu Gan, Xiping Yuan, Wang Li and Hailong Zhao
Remote Sens. 2026, 18(8), 1255; https://doi.org/10.3390/rs18081255 - 21 Apr 2026
Abstract
Due to the inherent limitations of both hyperspectral and multispectral imagery, balancing high spatial resolution with high spectral fidelity has become one of the fundamental challenges in remote sensing image processing. A prevailing strategy is to fuse these two types of data to reconstruct images that jointly preserve their respective advantages. However, existing reconstruction approaches still suffer from complex coupling between spatial and spectral information, and limited feature extraction capabilities. To address these issues, this study proposes PMSwinNet (Pyramid Multi-scale Swin Transformer Network), a novel architecture that integrates pyramid-based feature enhancement with Transformer mechanisms. The PMSwinNet incorporates multi-scale pyramid feature fusion and window-based self-attention. Through a progressive multi-stage design and three complementary components—feature extraction and reconstruction modules—the Transformer branch leverages window partitioning and shifting operations to capture long-range spatial dependencies and local contextual cues, while the pyramid features extract both global and local information across multiple spatial scales. In addition, a high-frequency branch is introduced, which employs lightweight convolutions to enhance edges, textures, and other high-frequency details, effectively suppressing blurring and artifacts during reconstruction. Experimental evaluations on multiple public hyperspectral datasets demonstrate that the PMSwinNet outperforms state-of-the-art methods, particularly in terms of detail preservation, spectral distortion suppression, and robustness.

24 pages, 34048 KB  
Article
Unsupervised Hyperspectral Unmixing Based on Multi-Faceted Graph Representation and Curriculum Learning
by Ran Liu, Junfeng Pu, Yanru Chen, Yanling Miao, Dawei Liu and Qi Wang
Remote Sens. 2026, 18(8), 1250; https://doi.org/10.3390/rs18081250 - 21 Apr 2026
Abstract
Hyperspectral unmixing aims to estimate endmember spectra and their corresponding abundance fractions at the subpixel scale, which is a critical preprocessing step for quantitative analysis of hyperspectral remote sensing imagery. While deep learning-based methods have achieved remarkable progress, three fundamental challenges remain: (i) reliance on a single shared spatial prior that cannot decouple the heterogeneous spatial patterns of different land covers; (ii) the lack of synergy in jointly optimizing endmember extraction and abundance estimation; (iii) the poor robustness of unsupervised training to complex mixtures, noise, and class imbalance. To address these issues, we propose a novel unsupervised unmixing framework that integrates adaptive orthogonal multi-faceted graph representation with curriculum learning. Specifically, we design an Adaptive Orthogonal Multi-Faceted Graph Generator (AOMFG) to learn a set of independent orthogonal graph structures, achieving spatially informed decoupling of land cover patterns. Then, a dual-branch collaborative optimization network is constructed: a Graph Convolutional Network (GCN) branch that incorporates the learned spatial topological priors for abundance estimation, and a 1D Convolutional Neural Network (1DCNN) branch that employs a query-attention mechanism to adaptively aggregate pure spectral features for endmember extraction. Finally, we introduce a three-stage curriculum learning strategy that progressively fine-tunes the model, which significantly enhances its performance. Extensive experiments on three widely used real-world benchmark datasets demonstrate that our proposed framework consistently outperforms state-of-the-art methods in both endmember extraction and abundance estimation accuracy. Comprehensive ablation studies, parameter sensitivity analysis, and noise robustness tests further validate the effectiveness of each core component.
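The classical baseline behind the unmixing task above is the linear mixing model: each pixel is a non-negative, sum-to-one combination of endmember spectra. A sketch of fully constrained least-squares abundance estimation using non-negative least squares with a soft sum-to-one row (a common trick); the endmembers and spectra below are synthetic, and this is the conventional baseline, not the authors' network.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_abundances(E, x, delta=1e3):
    """Fully constrained least-squares abundances for one pixel.

    E: (n_bands, n_endmembers) endmember matrix; x: (n_bands,) pixel
    spectrum. Non-negativity comes from nnls; the sum-to-one constraint
    is enforced softly by appending a heavily weighted row of ones.
    """
    A = np.vstack([E, delta * np.ones(E.shape[1])])
    b = np.append(x, delta)
    a, _ = nnls(A, b)
    return a

rng = np.random.default_rng(0)
E = rng.random((50, 3))                    # 3 synthetic endmember spectra
a_true = np.array([0.6, 0.3, 0.1])
x = E @ a_true + 0.001 * rng.standard_normal(50)   # mixed pixel + noise
print(np.round(fcls_abundances(E, x), 2))          # close to [0.6 0.3 0.1]
```

Deep unmixing networks like the one in the paper learn both E and the abundances jointly, but their outputs are evaluated against exactly these physical constraints.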
(This article belongs to the Section Remote Sensing Image Processing)

31 pages, 7470 KB  
Article
Improved Quantification of Methane Point-Source Emissions from Hyperspectral Imagery Using a Spectrally Corrected Levenberg–Marquardt Matched Filter
by Zhuo He, Yan Ma, Zhengqiang Li, Ying Zhang, Cheng Fan, Lili Qie, Zihan Zhang, Zheng Shi, Tong Lu, Yuanyuan Gao, Xingyu Yao, Xiaofan Li, Chenwei Lan and Qian Yao
Remote Sens. 2026, 18(8), 1195; https://doi.org/10.3390/rs18081195 - 16 Apr 2026
Abstract
Spaceborne hyperspectral imaging spectrometers enable refined retrieval and quantification of methane point-source emissions. However, the conventional matched filter (MF) systematically underestimates methane enhancements under high-concentration conditions and remains sensitive to spectral inconsistencies across varying observation scenarios. To address these limitations, we improve MF-based retrieval from two aspects: the observation model and the unit absorption spectrum (UAS) representation. First, a Levenberg–Marquardt matched filter (LMMF) is developed by extending the MF framework to a nonlinear retrieval formulation while retaining its data-driven and background-statistics-based characteristics. Specifically, the exponential absorption term is preserved, and methane enhancement is iteratively solved in the nonlinear domain, enabling a more physically consistent retrieval without requiring precise external prior knowledge. Building upon this framework, a spectrally corrected LMMF (SC-LMMF) is further proposed by introducing a lookup-table-based dynamic UAS correction to account for variations in observation geometry, surface elevation, and atmospheric state. Comprehensive validation using idealized and noise-perturbed simulations, end-to-end simulations, and controlled-release experiments demonstrates that the LMMF mitigates high-concentration underestimation relative to the MF. The SC-LMMF further reduces cross-scene systematic biases, shifting retrievals toward a near 1:1 relationship. In controlled-release experiments, the SC-LMMF increased the coefficient of determination (R2) by approximately 50% while reducing the root mean square error (RMSE) and mean absolute error (MAE) by approximately 70% relative to the MF. Overall, the proposed framework enhances the robustness and quantitative consistency of methane point-source retrievals across multisource hyperspectral satellite observations.
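The conventional matched filter that the LMMF extends is a closed-form linear estimator: per pixel, alpha = (x − mu)ᵀ S⁻¹ t / (tᵀ S⁻¹ t), with mu and S the scene background mean and covariance and t the target (unit absorption) spectrum. A NumPy sketch of this baseline on a synthetic scene with an injected plume (dimensions and signal strength are illustrative):

```python
import numpy as np

def matched_filter(X, t):
    """Classical linear matched filter enhancement per pixel.

    X: (n_pixels, n_bands) flattened radiance cube; t: (n_bands,) target
    (unit absorption) spectrum. This is the linear baseline that the
    paper's LMMF/SC-LMMF extend into a nonlinear, iterative retrieval.
    """
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)              # background statistics
    Sinv_t = np.linalg.solve(S, t)
    return (X - mu) @ Sinv_t / (t @ Sinv_t)

rng = np.random.default_rng(0)
bg = rng.multivariate_normal(np.full(8, 1.0), 0.01 * np.eye(8), 500)
t = rng.standard_normal(8)                   # stand-in absorption spectrum
plume = bg.copy()
plume[:10] += 0.5 * t                        # inject a weak "plume" signal
alpha = matched_filter(plume, t)
print(alpha[:10].mean() > alpha[10:].mean()) # True: plume pixels stand out
```

Because the enhancement enters linearly here, strong absorption (which is exponential in reality) is underestimated — exactly the high-concentration bias the LMMF is designed to remove.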
34 pages, 6876 KB  
Article
A NIST-Traceable Lab-to-Sky Spectral and Radiometric Calibration for NASA’s High-Altitude Airborne Hyperspectral Pushbroom Imager for Cloud and Aerosol Research and Development (PICARD)
by Gary D. Hoffmann, Thomas Ellis, Haiping Su, Alok Shrestha, Julia A. Barsi, Roseanne Dominguez, Eric Fraim, James Jacobson, Steven Platnick, G. Thomas Arnold, Kerry Meyer and Jessica L. McCarty
Remote Sens. 2026, 18(8), 1168; https://doi.org/10.3390/rs18081168 - 14 Apr 2026
Abstract
The Pushbroom Imager for Cloud and Aerosol Research and Development (PICARD) visible through shortwave infrared imaging spectrometer was developed to carry a calibration laboratory environment to high altitudes, while also providing high-dynamic-range bright cloud-top radiance measurements across a field of view just under 50 degrees. The in-flight performance of this new spectroradiometer was validated in comparison to multiple reference data sources and targets using imagery collected aboard NASA’s ER-2 high-altitude aircraft during the Western Diversity Time Series (WDTS) airborne science campaign in April 2023 and the September 2024 Plankton, Aerosol, Cloud, and ocean Ecosystem (PACE) Postlaunch Airborne eXperiment (PACE-PAX), both operating out of southern California. PICARD measurements from flights over Railroad Valley Playa, Nevada, USA, were compared to high-resolution radiance spectra of the dry lakebed provided by the Radiometric Calibration Network (RadCalNet) Working Group. Direct comparison to satellite cloud radiometry was enabled by the ER-2 flying in coordination with simultaneous overpasses of the Terra, Aqua, and NOAA-20 Earth-observing satellites during WDTS and with the PACE observatory during PACE-PAX. To account for large spectral differences between incandescent laboratory sources and solar illumination, PICARD calibration relies on measurements using the Goddard Laser for Absolute Measurements of Radiance (GLAMR) to characterize and minimize spectral stray light from the instrument’s twin Offner grating spectrometers. Good agreement in comparison to reference measurements demonstrates PICARD’s ability to provide imagery for environmental science or for testing new sensor designs and retrieval algorithms for cloud and aerosol research with verified laboratory calibrations at high altitudes. Full article
23 pages, 3553 KB  
Article
Segment-Based Spectral Characterisation of Municipal Solid Waste in African Landfills Using HISUI Hyperspectral Imagery
by Leeme Arther Baruti, Yasuhiro Sugisaki, Hirofumi Nakayama and Takayuki Shimaoka
Remote Sens. 2026, 18(8), 1156; https://doi.org/10.3390/rs18081156 - 13 Apr 2026
Abstract
Municipal solid waste management remains a major environmental challenge across Africa, where rapid urbanisation has outpaced formal waste infrastructure and routine landfill monitoring is often absent. Rather than proposing a classification algorithm, this study investigates whether spaceborne hyperspectral imagery can reveal robust spectral fingerprints of landfill surfaces suitable for automated detection. Eight landfill sites across seven African countries were analysed using Hyperspectral Imager Suite (HISUI) data (400–2500 nm, 20 m resolution). A segment-based framework was applied after masking low signal-to-noise regions, combining brightness analysis, L2-normalised spectral shape comparison using the Spectral Contrast Angle (SCA), and derivative spectroscopy across 109,275 pixels from six land-cover classes. Brightness-based discrimination exhibited strong inter-site variability, limiting its general applicability. In contrast, shape-based metrics revealed consistent separability between landfill-active surfaces and soil or urban classes in the shortwave infrared (SWIR), particularly within the 1538–1750 nm and 2075–2474 nm regions. Derivative analysis further identified stable extrema near 1700 nm and 2200–2300 nm across all sites, indicating reproducible curvature-based fingerprints associated with exposed municipal solid waste. These results demonstrate that landfill surfaces exhibit intrinsic SWIR spectral characteristics that persist across diverse African environments. This study establishes the first multi-site hyperspectral library of African landfill surfaces, providing a physical basis for developing generalised landfill detection frameworks. Full article
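The brightness-invariant, shape-based comparison described above can be sketched as an angle between L2-normalised spectra. This is a minimal illustration of that idea, not the study's exact SCA implementation; the function name is illustrative, and the published SCA definition may differ in detail.

```python
import numpy as np

def spectral_contrast_angle(a, b):
    """Angle (radians) between two spectra after L2 normalisation.

    Normalising removes overall brightness, so the angle compares spectral
    shape only: 0 means identical shapes, larger values mean more contrast.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    cos = np.clip(a @ b, -1.0, 1.0)   # guard against rounding outside [-1, 1]
    return np.arccos(cos)
```

For example, a spectrum and a uniformly brighter copy of it (e.g. scaled by 2) give an angle of zero, which is why shape metrics of this kind transfer across sites better than raw brightness thresholds.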
19 pages, 11440 KB  
Article
Cross-Sensor Evaluation of ZY1-02E and ZY1-02D Hyperspectral Satellites for Mapping Soil Organic Matter and Texture in the Black Soil Region
by Kun Shang, He Gu, Hongzhao Tang and Chenchao Xiao
Agronomy 2026, 16(8), 781; https://doi.org/10.3390/agronomy16080781 - 10 Apr 2026
Abstract
Soil health monitoring is critical for the sustainable management of the black soil region, a key resource for global food security. However, traditional field surveys are constrained by high operational costs, limited spatial coverage, and low temporal frequency, making them inadequate for high-resolution and time-sensitive soil monitoring. The recently launched ZY1-02E satellite, equipped with an advanced hyperspectral imager, offers a new potential data source, yet its capability for quantitative soil modelling requires rigorous cross-sensor validation. This study conducts a cross-sensor evaluation of ZY1-02E and its predecessor, ZY1-02D, for mapping soil organic matter (SOM) and soil texture (sand, silt, and clay) in Northeast China. Optimal spectral indices were constructed through exhaustive band combination and correlation screening, and quantitative inversion models were established using a hybrid framework integrating Random Frog feature selection with Gaussian Process Regression (GPR) and Boosting Trees, based on synchronous ground observations. Results demonstrate strong cross-sensor consistency, with spectral indices showing significant linear correlations (R2 > 0.65) between ZY1-02E and ZY1-02D. Furthermore, the quantitative retrieval models applied to ZY1-02E imagery achieved robust performance, with cross-sensor retrieval consistency exceeding R2 = 0.60 for all parameters and SOM exhibiting the highest agreement (R2 = 0.74). These findings confirm the radiometric stability and algorithm transferability of ZY1-02E, demonstrating its capability to generate soil parameter products comparable to ZY1-02D without extensive model recalibration. The validated interoperability of the twin-satellite constellation substantially enhances temporal observation capacity during the narrow bare-soil window, effectively mitigating cloud-induced data gaps in high-latitude agricultural regions. Importantly, the enhanced monitoring framework provides a scalable technical paradigm for high-frequency hyperspectral soil mapping, offering critical spatial decision support for precision fertilization, soil degradation mitigation, and conservation tillage management in the Mollisol belt. Full article
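The exhaustive band-combination and correlation screening mentioned above can be sketched as a brute-force search over two-band indices, scored by Pearson correlation against the soil property. The ratio index form and function name below are illustrative assumptions, since the abstract does not specify the index formulas used.

```python
import numpy as np
from itertools import combinations

def best_ratio_index(R, y):
    """Screen all two-band ratio indices R[:, i] / R[:, j] against y.

    R : (n_samples, n_bands) reflectance matrix.
    y : (n_samples,) measured soil property (e.g. SOM).

    Returns ((i, j), r): the band pair whose ratio index has the largest
    absolute Pearson correlation with y, together with that correlation.
    """
    best_pair, best_r = None, 0.0
    for i, j in combinations(range(R.shape[1]), 2):
        idx = R[:, i] / R[:, j]
        r = np.corrcoef(idx, y)[0, 1]
        if abs(r) > abs(best_r):
            best_pair, best_r = (i, j), r
    return best_pair, best_r
```

In practice such screening is only a first filter; the abstract's pipeline then feeds selected features into Random Frog selection and GPR/Boosting Tree regressors rather than using a single index directly.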